91
Notes on Numerical Fluid Mechanics and Multidisciplinary Design (NNFM)
Editors E. H. Hirschel/München K. Fujii/Kanagawa W. Haase/München B. van Leer/Ann Arbor M. A. Leschziner/London M. Pandolfi/Torino J. Periaux/Paris A. Rizzi/Stockholm B. Roux/Marseille Yu. Shokin/Novosibirsk
Computational Science and High Performance Computing II The 2nd Russian-German Advanced Research Workshop, Stuttgart, Germany, March 14 to 16, 2005 Egon Krause Yurii Shokin Michael Resch Nina Shokina (Editors)
Professor Egon Krause
Professor Michael Resch
Aerodynamic Institute of the RWTH Aachen Wuellnerstr. zw. 5 u. 7 52062 Aachen Germany
High Performance Computing Center Stuttgart University of Stuttgart Nobelstrasse 19 70569 Stuttgart Germany
Professor Yurii Shokin
Dr. Nina Shokina
Institute of Computational Technologies of SB RAS Ac. Lavrentyev Ave. 6 630090 Novosibirsk Russia
High Performance Computing Center Stuttgart University of Stuttgart Nobelstrasse 19 70569 Stuttgart Germany
Library of Congress Control Number: 2006921738 ISBN-10 3-540-31767-8 Springer Berlin Heidelberg New York ISBN-13 978-3-540-31767-8 Springer Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable for prosecution under the German Copyright Law. Springer is a part of Springer Science+Business Media springer.com c Springer-Verlag Berlin Heidelberg 2006 Printed in The Netherlands The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typesetting: by the authors and TechBooks using a Springer LATEX macro package Cover design: design & production GmbH, Heidelberg Printed on acid-free paper
SPIN: 11664239
89/TechBooks
543210
NNFM Editor Addresses
Prof. Dr. Ernst Heinrich Hirschel (General editor) Herzog-Heinrich-Weg 6 D-85604 Zorneding Germany E-mail:
[email protected]
Prof. Dr. Maurizio Pandolfi Politecnico di Torino Dipartimento di Ingegneria Aeronautica e Spaziale Corso Duca degli Abruzzi, 24 I-10129 Torino Italy E-mail: pandolfi@polito.it
Prof. Dr. Kozo Fujii Space Transportation Research Division The Institute of Space and Astronautical Science 3-1-1, Yoshinodai, Sagamihara Kanagawa, 229-8510 Japan E-mail: fujii@flab.eng.isas.jaxa.jp
Prof. Dr. Jacques Periaux Dassault Aviation 78, Quai Marcel Dassault F-92552 St. Cloud Cedex France E-mail:
[email protected]
Dr. Werner Haase Höhenkirchener Str. 19d D-85662 Hohenbrunn Germany E-mail:
[email protected]
Prof. Dr. Arthur Rizzi Department of Aeronautics KTH Royal Institute of Technology Teknikringen 8 S-10044 Stockholm Sweden E-mail:
[email protected]
Prof. Dr. Bram van Leer Department of Aerospace Engineering The University of Michigan Ann Arbor, MI 48109-2140 USA E-mail:
[email protected]
Dr. Bernard Roux L3M – IMT La Jetée Technopole de Chateau-Gombert F-13451 Marseille Cedex 20 France E-mail:
[email protected]
Prof. Dr. Michael A. Leschziner Imperial College of Science Technology and Medicine Aeronautics Department Prince Consort Road London SW7 2BY U. K. E-mail:
[email protected]
Prof. Dr. Yurii Shokin Institute of Computational Technologies of SB RAS Ac. Lavrentyev Ave. 6 630090 Novosibirsk Russia E-mail:
[email protected]
Preface
This volume contains the proceedings of the second Russian-German Advanced Research Workshop on Computational Science and High Performance Computing, held in Stuttgart in March 2005. The contributions were provided and edited by the authors and were chosen after careful selection and reviewing. The workshop was organized by the High Performance Computing Center Stuttgart (Stuttgart, Germany) and the Institute of Computational Technologies SB RAS (Novosibirsk, Russia) within the framework of the activities of the German-Russian Center for Computational Technologies and High Performance Computing. The success of the first workshop, held in Novosibirsk in September 2003, showed the keen interest in close cooperation between German and Russian specialists in the field of computational science and high performance computing. In the same way, the second workshop offered the possibility of sharing and discussing the latest results and of developing further scientific contacts in this field. The topics of the workshop include high performance computing, the theory of mathematical methods, parallel numerical modelling in computational fluid dynamics, combustion processes, fluid-structure interaction and General Relativity, numerical modelling of fiber optical lines and resistivity sounding problems, software and hardware for high performance computation, the use of high performance computing systems for meteorological forecasts, medical imaging problems, the development of a new generation of materials, particle-based simulation methods, modern facilities for the visualization of computational modelling results, cryptography problems, and dynamic Virtual Organizations in engineering. The participation of representatives of major research organizations engaged in the solution of the most complex problems of mathematical modelling, the development of new algorithms, programs and key elements of information technologies, and the elaboration and implementation of software and hardware for high performance computing systems ensured a high level of competence at the workshop.
Among the German participants were the heads and leading specialists of the High Performance Computing Center Stuttgart (HLRS) (University of Stuttgart), the Institute of Aerodynamics and Gasdynamics (University of Stuttgart), the Institute of Structural Mechanics (University of Stuttgart), the Institute for Fluid Mechanics and Hydraulic Machinery (University of Stuttgart), the Institute of Aerodynamics (RWTH Aachen), NEC High Performance Computing Europe GmbH, the Institute for Astronomy and Astrophysics (University of Tübingen), the German Weather Service, the Institute of Applied Mathematics (University of Freiburg i. Br.), the Freiburg Materials Research Center (Freiburg i. Br.), the Fraunhofer-Institute for Mechanics of Materials (Freiburg i. Br.), the Chair of Computational Mechanics (Technical University of Munich), the Regional Computing Center Erlangen (RRZE) (University of Erlangen-Nuremberg), the Chair of System Simulation (LSS) (University of Erlangen-Nuremberg), the Institute of Fluid Mechanics (University of Erlangen-Nuremberg), the Institute of Mathematics (University of Lübeck), and the Center for High Performance Computing (ZHR) (Dresden University of Technology). Among the Russian participants were researchers of the institutes of the Siberian Branch of the Russian Academy of Sciences: the Institute of Computational Technologies SB RAS (Novosibirsk), the Institute of Automation and Electrometry SB RAS (Novosibirsk), the Institute of Computational Modelling SB RAS (Krasnoyarsk), the Institute for System Dynamics and Control Theories SB RAS (Irkutsk), and the Institute of Geography SB RAS (Irkutsk). This time, further to the long-term collaboration between German and Siberian scientists, Kazakh scientists from the al-Farabi Kazakh National University and the Institute of Mathematics and Mechanics (al-Farabi Kazakh National University) participated in the workshop at Prof. Yurii Shokin's suggestion, thereby developing a multilateral cooperation. This volume provides state-of-the-art scientific papers presenting the latest results of the leading German, Russian and Kazakh institutions. We are glad to see the successful continuation and the promising perspectives of these highly professional international scientific meetings, which bring new insights and show the ways of future development in computational sciences and information technologies. The editors would like to express their gratitude to all the participants of the workshop and wish them further successful and fruitful work.
Novosibirsk-Stuttgart, August 2005
Egon Krause Yurii Shokin Michael Resch Nina Shokina
Contents
Breakdown of compressible slender vortices
E. Krause 1

Construction of monotonic schemes on the basis of method of differential approximation
Yu.I. Shokin, G.S. Khakimzyanov 13

Industrial and scientific frameworks for computational science and engineering
M.M. Resch 21

Parallel numerical modelling of gas-dynamic processes in airbag combustion chamber
A.D. Rychkov, N. Shokina, T. Bönisch, M.M. Resch, U. Küster 29

The parallel realization of the finite element method for the Navier-Stokes equations for a viscous heat conducting gas
E.D. Karepova, A.V. Malyshev, V.V. Shaidurov, G.I. Shchepanovskaya 41

On solution of Navier-Stokes auxiliary grid equations for incompressible fluids
N.T. Danaev 55

An efficient implementation of an adaptive and parallel grid in DUNE
A. Burri, A. Dedner, R. Klöfkorn, M. Ohlberger 67

Operational DWD numerical forecasts as input to flood forecasting models
G. Rivin, E. Heise 83

Robustness and efficiency aspects for computational fluid structure interaction
M. Neumann, S.R. Tiyyagura, W.A. Wall, E. Ramm 99

The computational aspects of General Relativity
J. Frauendiener 115

Arbitrary high order finite volume schemes for linear wave propagation
M. Dumbser, T. Schwartzkopff, C.-D. Munz 129

Numerical simulation and optimization of fiber optical lines with dispersion management
Yu.I. Shokin, E.G. Shapiro, S.K. Turitsyn, M.P. Fedoruk 145

Parallel applications on large scale systems: getting insights
H. Brunst, U. Fladrich, W.E. Nagel, S. Pflüger 159

Convergence of the method of integral equations for quasi three-dimensional problem of electrical sounding
M. Orunkhanov, B. Mukanova, B. Sarbassova 175

Sustaining performance in future vector processors
U. Küster, W. Bez, S. Haberhauer 181

Image fusion and registration – a variational approach
B. Fischer, J. Modersitzki 193

The analysis of behaviour of multilayered nodoid shells on the basis of non-classical theory
S.K. Golushko 205

On the part load vortex in draft tubes of hydro electric power plants
E. Göde, A. Ruprecht, F. Lippold 217

Computational infrastructure for parallel processing spatially distributed data
I.V. Bychkov, A.D. Kitov, E.A. Cherkashin 233

Particle methods in powder technology
B. Henrich, M. Moseler, H. Riedel 243

Tangible interfaces for interactive flow simulation
M. Becker, U. Wössner 253

Using information theory approach to randomness testing
B.Ya. Ryabko, A.N. Fionov, V.A. Monarev, Yu.I. Shokin 261

Optimizing performance on modern HPC systems: learning from simple kernel benchmarks
G. Hager, T. Zeiser, J. Treibig, G. Wellein 273

Dynamic Virtual Organizations in engineering
S. Wesner, L. Schubert, Th. Dimitrakos 289

Algorithm performance dependent on hardware architecture
U. Küster, P. Lammers 303

A tool for complex parameter studies in grid environments: SGM-Lab
N. Currle-Linde, P. Adamidis, M.M. Resch 317

Lattice Boltzmann predictions of turbulent channel flows with turbulence promoters
K.N. Beronov, F. Durst 331
List of Contributors
P. Adamidis High Performance Computing Center Stuttgart (HLRS) University of Stuttgart Nobelstraße 19 Stuttgart, 70569, Germany
[email protected]
T. Bönisch High Performance Computing Center Stuttgart (HLRS) University of Stuttgart Nobelstraße 19 Stuttgart, 70569, Germany
[email protected]
M. Becker High Performance Computing Center Stuttgart (HLRS) University of Stuttgart Allmandring 30a Stuttgart, 70550, Germany
[email protected]
H. Brunst Center for High Performance Computing (ZHR) Dresden University of Technology Dresden, 01062, Germany
[email protected]
K.N. Beronov Institute of Fluid Mechanics University of Erlangen-Nuremberg Cauerstraße 4, Erlangen, 91058, Germany
[email protected]
A. Burri Institute of Applied Mathematics University of Freiburg i. Br. Hermann-Herder-Str. 10 Freiburg i. Br., 79104, Germany
[email protected]
W. Bez NEC High Performance Computing Europe GmbH Heßbrühlstr. 21b Stuttgart, 70565, Germany
[email protected]
I.V. Bychkov Institute for System Dynamics and Control Theories SB RAS Lermontov str. 134 Irkutsk, 664033, Russia
[email protected]
E.A. Cherkashin Institute for System Dynamics and Control Theories SB RAS Lermontov str. 134 Irkutsk, 664033, Russia
[email protected]
F. Durst Institute of Fluid Mechanics University of Erlangen-Nuremberg Cauerstraße 4, Erlangen, 91058, Germany
[email protected]
N. Currle-Linde High Performance Computing Center Stuttgart (HLRS) University of Stuttgart Allmandring 30 Stuttgart, 70550, Germany
[email protected]
M.P. Fedoruk Institute of Computational Technologies SB RAS Lavrentiev Ave. 6 Novosibirsk, 630090, Russia
[email protected]
N.T. Danaev al-Farabi Kazakh National University Masanchi str. 39/47 Almaty, 480012, Kazakhstan
[email protected]
B. Fischer Institute of Mathematics University of Lübeck Wallstraße 40 Lübeck, 23560, Germany
[email protected]
A. Dedner Institute of Applied Mathematics University of Freiburg i. Br. Hermann-Herder-Str. 10 Freiburg i. Br., 79104, Germany
[email protected]
A.N. Fionov Institute of Computational Technologies SB RAS Lavrentiev Ave. 6 Novosibirsk, 630090, Russia
[email protected]
Th. Dimitrakos British Telecom 2A Rigel House, Adastral Park, Martlesham Heath Ipswich, Suffolk, IP5 3RE, UK
[email protected]
U. Fladrich Center for High Performance Computing (ZHR) Dresden University of Technology Dresden, 01062, Germany
[email protected]
M. Dumbser Institute of Aerodynamics and Gasdynamics University of Stuttgart Pfaffenwaldring 21 Stuttgart, 70550, Germany
[email protected]
J. Frauendiener Institute for Astronomy and Astrophysics University of Tübingen Auf der Morgenstelle 10 Tübingen, 72076, Germany
[email protected]
S.K. Golushko Institute of Computational Technologies SB RAS Lavrentiev Ave. 6 Novosibirsk, 630090, Russia
[email protected]
E.D. Karepova Institute of Computational Modelling SB RAS Academgorodok, Krasnoyarsk, 660036, Russia
[email protected]
E. Göde Institute for Fluid Mechanics and Hydraulic Machinery University of Stuttgart Pfaffenwaldring 10 Stuttgart, 70550, Germany
[email protected]
A.D. Kitov Institute of Geography SB RAS Ulan-Batorskaya str. 1 Irkutsk, 664033, Russia
[email protected]
S. Haberhauer NEC High Performance Computing Europe GmbH Heßbrühlstr. 21b Stuttgart, 70565, Germany
[email protected] G. Hager Regional Computing Center Erlangen (RRZE) University of Erlangen-Nuremberg Martensstraße 1 Erlangen, 91058, Germany
[email protected]
G.S. Khakimzyanov Institute of Computational Technologies SB RAS Lavrentiev Ave. 6 Novosibirsk, 630090, Russia
[email protected] R. Klöfkorn Institute of Applied Mathematics University of Freiburg i. Br. Hermann-Herder-Str. 10 Freiburg i. Br., 79104, Germany
[email protected]
E. Heise German Weather Service Kaiserleistr. 42+44 Offenbach am Main, 63067 Germany
[email protected]
U. Küster High Performance Computing Center Stuttgart (HLRS) University of Stuttgart Nobelstraße 19 Stuttgart, 70569, Germany
[email protected]
B. Henrich Freiburg Materials Research Center Stefan-Meier-Str. 21 Freiburg i. Br., 79104, Germany
[email protected]
E. Krause Institute of Aerodynamics RWTH Aachen Wuellnerstr. zw. 5 u. 7 Aachen, 52062, Germany
[email protected]
P. Lammers High Performance Computing Center Stuttgart (HLRS) University of Stuttgart Nobelstraße 19 Stuttgart, 70569, Germany
[email protected]
B.G. Mukanova Institute of Mathematics and Mechanics al-Farabi Kazakh National University Masanchi str. 39/47 Almaty, 480012, Kazakhstan
[email protected]
F. Lippold Institute for Fluid Mechanics and Hydraulic Machinery University of Stuttgart Pfaffenwaldring 10 Stuttgart, 70550, Germany
[email protected]
C.-D. Munz Institute of Aerodynamics and Gasdynamics University of Stuttgart Pfaffenwaldring 21 Stuttgart, 70550, Germany
[email protected]
A.V. Malyshev Institute of Computational Modelling SB RAS Academgorodok, Krasnoyarsk, 660036, Russia
[email protected]
W.E. Nagel Center for High Performance Computing (ZHR) Dresden University of Technology Dresden, 01062, Germany
[email protected]
J. Modersitzki Institute of Mathematics University of Lübeck Wallstraße 40 Lübeck, 23560, Germany
[email protected] V.A. Monarev Institute of Computational Technologies SB RAS Lavrentiev Ave. 6 Novosibirsk, 630090, Russia
[email protected] M. Moseler Fraunhofer-Institute for Mechanics of Materials Wöhlerstr. 11, Freiburg i. Br., 79108, Germany
[email protected]
M. Neumann Institute of Structural Mechanics University of Stuttgart Pfaffenwaldring 7 Stuttgart, 70550, Germany
[email protected] M. Ohlberger Institute of Applied Mathematics University of Freiburg i. Br. Hermann-Herder-Str. 10 Freiburg i. Br., 79104, Germany
[email protected] M.K. Orunkhanov Institute of Mathematics and Mechanics al-Farabi Kazakh National University Masanchi str. 39/47 Almaty, 480012, Kazakhstan
[email protected]
S. Pflüger Center for High Performance Computing (ZHR) Dresden University of Technology Dresden, 01062, Germany
[email protected]
B.Ya. Ryabko Institute of Computational Technologies SB RAS Lavrentiev Ave. 6 Novosibirsk, 630090, Russia
[email protected]
E. Ramm Institute of Structural Mechanics University of Stuttgart Pfaffenwaldring 7 Stuttgart, 70550, Germany
[email protected]
A.D. Rychkov Institute of Computational Technologies SB RAS Lavrentiev Ave. 6 Novosibirsk, 630090, Russia
[email protected]
H. Riedel Fraunhofer-Institute for Mechanics of Materials Wöhlerstr. 11, Freiburg i. Br., 79108, Germany [email protected]
B. Sarbassova Institute of Mathematics and Mechanics al-Farabi Kazakh National University Masanchi str. 39/47 Almaty, 480012, Kazakhstan
[email protected]
M. Resch High Performance Computing Center Stuttgart (HLRS) University of Stuttgart Nobelstraße 19 Stuttgart, 70569, Germany
[email protected]
L. Schubert High Performance Computing Center Stuttgart (HLRS) University of Stuttgart Allmandring 30 Stuttgart, 70550, Germany
[email protected]
G.S. Rivin Institute of Computational Technologies SB RAS Lavrentiev Ave. 6 Novosibirsk, 630090, Russia
[email protected]
T. Schwartzkopff Institute of Aerodynamics and Gasdynamics University of Stuttgart Pfaffenwaldring 21 Stuttgart, 70550, Germany
[email protected]
A. Ruprecht Institute for Fluid Mechanics and Hydraulic Machinery University of Stuttgart Pfaffenwaldring 10 Stuttgart, 70550, Germany
[email protected]
V.V. Shaidurov Institute of Computational Modelling SB RAS Academgorodok, Krasnoyarsk, 660036, Russia
[email protected]
E.G. Shapiro Institute of Automation and Electrometry SB RAS Koptuyg Ave. 1 Novosibirsk, 630090, Russia
[email protected] G.I. Shchepanovskaya Institute of Computational Modelling SB RAS Academgorodok, Krasnoyarsk 660036, Russia
[email protected] Yu.I. Shokin Institute of Computational Technologies SB RAS Lavrentiev Ave. 6 Novosibirsk, 630090, Russia
[email protected] N.Yu. Shokina High Performance Computing Center Stuttgart (HLRS) University of Stuttgart Nobelstraße 19 Stuttgart, 70569, Germany
[email protected] S.R. Tiyyagura High Performance Computing Center Stuttgart (HLRS) University of Stuttgart Allmandring 30 Stuttgart, 70550, Germany
[email protected] J. Treibig Chair of System Simulation (LSS) University of Erlangen-Nuremberg Cauerstr. 6 Erlangen, 91058, Germany
[email protected] S.K. Turitsyn Institute of Automation and Electrometry SB RAS Koptuyg Ave. 1 Novosibirsk, 630090, Russia
[email protected]
W.A. Wall Chair of Computational Mechanics Technical University of Munich Boltzmannstraße 15 Garching, 85747, Germany
[email protected]
G. Wellein Regional Computing Center Erlangen (RRZE) University of Erlangen-Nuremberg Martensstraße 1 Erlangen, 91058, Germany
[email protected]
S. Wesner High Performance Computing Center Stuttgart (HLRS) University of Stuttgart Allmandring 30 Stuttgart, 70550, Germany
[email protected]
U. Wössner High Performance Computing Center Stuttgart (HLRS) University of Stuttgart Allmandring 30a Stuttgart, 70550, Germany
[email protected]
T. Zeiser Regional Computing Center Erlangen (RRZE) University of Erlangen-Nuremberg Martensstraße 1 Erlangen, 91058, Germany
[email protected]
Breakdown of compressible slender vortices E. Krause Institute of Aerodynamics, RWTH Aachen, Wuellnerstr. zw. 5 u. 7, 52062 Aachen, Germany
[email protected]
Summary. Slender vortices of compressible flow are studied, in particular the deceleration of the axial flow to a free stagnation point on the axis, causing bursting or breakdown of the vortex. Steady, inviscid, compressible, axially symmetric flow conditions are assumed to enable a reduction of the Euler equations for a stream tube of small radius. The angular velocity near the axis is shown to be directly proportional to the axial mass flux, indicating that a decelerated axial flow can cause the angular velocity to vanish and breakdown to occur. This behavior is reversed in supersonic flow. A breakdown criterion is derived for a Rankine vortex with isentropic and normal-shock deceleration of the axial flow. The results are compared with available experimental data.
1 Introduction This article describes an exploratory attempt to investigate the flow in the core of compressible slender vortices and to study the influence of density variations on the interaction between the axial and the azimuthal flow near the axis. Previous studies of incompressible flow were mainly concerned with the analysis of decelerated axial flow, leading to the formation of a stagnation point and breakdown of the vortex. Some of the results are described in [1]–[4]. The approach offered there is extended here to compressible flow, with the aim to find a condition which enables the prediction of vortex breakdown. Only the simplest case of axially symmetric flow is studied. It is not intended to give a final answer to this question but rather enhance the understanding of the breakdown process. Another two decades of research may be necessary for a detailed penetration of the entire flow process, according to the opinion of experts. Nevertheless, it is hoped that some more general conclusions can be drawn from the results presented in this paper. The destruction of the core of slender vortices, for example by the intersection with shock waves, is of general scientific interest: If supersonic flow containing one or more vortices with their axes orientated parallel to the main stream, passes through an oblique or normal shock, its shape is
changed because of the non-uniformity of the flow in the vortex core. Vice versa, the flow in the core of the vortex is also changed. Weak shocks disturb the flow in the core and in the outer part of the vortex only slightly. Strong shocks, however, causing a large pressure rise in the axial direction of the vortex, influence the flow in the core substantially. If a free stagnation point is formed on the axis of the vortex, and if axial back-flow downstream from the stagnation point sets in, the angular velocity near the axis vanishes, and, as already mentioned, the vortex is said to break down or burst. Vortex breakdown is observed in flows of technical devices and apparatus. For example, in [5] it is conjectured, that vortex breakdown may occur in turbine engines. When the compressor is operated near the stability limit, the tip leakage vortex may break down, caused by an intersection with a shock in the supersonic part of the flow, which may give rise to rotating instabilities and subsequent stall. Bursting is also encountered in supersonic flow over delta wings, again caused by the interaction of the primary vortex with a strong shock. For example, the vortex originated on the front aileron of canard configurations can interact with shocks on the main wing. If the vortex bursts, rolling motions of the aircraft may set in due to the loss of local lift [6]. On delta wings with leading-edge extension breakdown may occur, if one of the vortices hits the vertical tail. Severe buffeting may result. This problem was studied extensively in experimental investigations, primarily for military applications. References [7]–[13] may be consulted for details. Other experiments are reported in [14]–[16]. Although the experimental investigations provide invaluable results, they are still handicapped by the presently available measuring techniques. The examples given may suffice to explain the technical importance of the problem of vortex bursting. It is for this reason that numerous numerical investigations were carried out in recent years: In [17] and [18] the Euler and Navier-Stokes equations were solved for time-dependent, compressible, axially symmetric, and also for three-dimensional subsonic and a supersonic flow at a free-stream Mach number of Ma∞ = 1.75. An initially slender Lamb-Oseen vortex with a uniform axial velocity component was forced to bubble- and helix-type bursting. In [19] the bursting of a Burgers vortex was numerically simulated for free-stream Mach numbers 1.3 ≤ Ma∞ ≤ 10.0 with a solution of the Euler equations for axially symmetric flow. In [20] the bursting of a Lamb-Oseen vortex, caused by an oblique shock, was studied with a numerical solution of the Euler equations for unsteady, compressible, threedimensional supersonic flow at Ma∞ = 3.0 and 5.0. The numerical results revealed three regimes of weak, moderate, and strong interaction. Bursting was only observed in the case of strong interaction. Vertical tail buffeting, caused by a burst vortex on a delta wing with leading edge extensions was studied in [21]. The simulation procedure was based on a numerical solution of the Reynolds-averaged Navier-Stokes equations for compressible, three-dimensional flows and on the aeroelastic equations
for coupled and uncoupled bending and torsional modes, described in [22] and [23], including the deformation of the grid according to the tail deflections. The solution was validated in [24]. Results for coupled and uncoupled bending-torsion modes of twin-tail buffeting were reported in [25] and [26]. These references also include a comparison with existing experimental data. Recently a numerical solution of the Euler and Navier-Stokes equations for time-dependent, three-dimensional, compressible flows and the application to the problem of shock-vortex interaction was described in [27] and [28]. In the present study the Euler equations for steady, compressible, axially symmetric flow will first be reduced for the neighborhood of the axis of the vortex. Then the radial flow distribution of a Rankine vortex in compressible flow will be introduced, and finally a breakdown criterion will be derived for isentropic and normal-shock deceleration of the axial flow. The results will be compared with the experimental data of [6].
2 Reduction of Euler equations

The inviscid, steady, compressible flow in an axially symmetric slender vortex is described by the Euler equations, consisting of the continuity equation, the three momentum equations, the energy equation, and the thermal equation of state. Let ρ denote the density, p the pressure, u, v, w the axial, radial, and the azimuthal velocity components, respectively, T the temperature, x and r the axial and radial coordinates, R_G the gas constant, and c_p the specific heat at constant pressure. Then the governing equations can be written as

$$(\rho u)_x + \frac{(r\rho v)_r}{r} = 0,\qquad
u\,u_x + v\,u_r = -\frac{p_x}{\rho},\qquad
u\,v_x + v\,v_r - \frac{w^2}{r} = -\frac{p_r}{\rho},$$
$$u\,w_x + v\,w_r + \frac{v\,w}{r} = 0,\qquad
c_p\,\rho\,(u\,T_x + v\,T_r) = u\,p_x + v\,p_r,\qquad
\rho = \frac{p}{R_G\,T}. \qquad (1)$$
The subscripts x and r denote partial differentiation with respect to the axial and radial coordinate directions, respectively. The solution of eqs. (1) requires the specification of boundary conditions. For the inflow cross-section, with axial coordinate x = xi , they are x = xi , 0 ≤ r :
u (r ) = f 1 (r ) , v (r ) = f 2 (r ) , w (r ) = f 3 (r ) , p (r ) = f 4 (r ) , T (r ) = f 5 (r ) ,
(2)
along the center line the symmetry conditions are r = 0, xi ≤ x :
ur = v = w = pr = Tr = 0,
(3)
and for large radial distances the far-field conditions are r → ∞, xi ≤ x :
u = u∞ ( x), v = w = 0, p = f 6 [u∞ ( x), p0∞ , ρ0∞ ] , T = f 7 [u∞ ( x), T0∞ ] .
(4)
In eq. (4) the quantities p_{0∞}, ρ_{0∞}, and T_{0∞} are the stagnation pressure, density, and temperature, respectively. If the flow is subsonic, the boundary conditions for x → ∞ are generally not known, and meaningful approximations must be introduced. For supersonic flow, the downstream boundary conditions need not be specified, because of the change of type of the governing equations. The functions f_1 – f_7 are known functions of the arguments indicated. As the analysis described here is mainly concerned with the flow in the core of the vortex, i. e. the flow for r → 0, and as it is not intended to provide a complete numerical solution of eqs. (1), the approach to be followed is based on a reduction of the Euler equations for the neighborhood of the axis. To this end the term ρv/r is eliminated from the continuity equation with the aid of the azimuthal momentum equation. There results the expression

$$(\rho u)^2\left(\frac{w}{\rho u}\right)_x + (\rho v)^2\left(\frac{w}{\rho v}\right)_r = 0. \qquad (5)$$

Next, the azimuthal velocity component w is expanded in a power series in terms of the radial coordinate r, with the coefficients being functions of the axial coordinate x. The leading term of the series for w(x, r → 0) is then given by the product of the radius r and the angular velocity Ω(x, r → 0), being identical with the rigid body rotation

$$w(x, r \to 0) = \Omega(x, r \to 0)\,r + \cdots \qquad (6)$$
If eq. (6) is inserted into eq. (5), the following simple relation for the neighborhood of the axis, r → 0, is obtained:
$$\Omega_x = \Omega\,\frac{(\rho u)_x}{\rho u}. \qquad (7)$$
The above equation states, that near the axis the logarithmic derivative of the angular velocity Ω ( x, r → 0) is proportional to the logarithmic derivative of the mass flow ρu( x, r → 0). Eq. (7) can be integrated to yield
$$\Omega(x, r \to 0) = \Omega_i(x_i, r \to 0)\,\frac{\rho u(x, r \to 0)}{\rho u_i(x_i, r \to 0)}. \qquad (8)$$
The index i in eqs. (7) and (8), first derived in [3] and [4], again denotes the inflow cross-section, compare eq. (2). If breakdown of an axially symmetric
vortex is defined by requiring that Ω(x, r → 0) has to vanish, then, according to eq. (8), breakdown can be enforced by letting the axial mass flow near the axis ρu(x, r → 0) vanish, implying that the formation of a free stagnation point on the axis will always lead to breakdown. This result has long been questioned, but is manifested now by the above derivation, which was already obtained for incompressible flow in [2]. It is instructive to eliminate the derivative of the density ρ_x in eq. (7) with the axial momentum equation, the energy equation, reduced for r → 0, and the thermal equation of state, all given in eqs. (1). After introducing the speed of sound a² = γp/ρ, the differential equation for the angular velocity, (7), can then be written in terms of the differential of the static pressure and the Mach number of the axial flow Ma = u/a:

$$\frac{d\Omega}{\Omega} = \frac{Ma^2 - 1}{\gamma\,Ma^2}\,\frac{dp}{p}. \qquad (9)$$
Eq. (9) is analogous in form to the area-Mach number relation for one-dimensional compressible inviscid flow: If in subsonic flow, Ma < 1, the angular velocity Ω is to decrease, dΩ < 0, the static pressure has to increase, dp > 0, while in supersonic flow, Ma > 1, the angular velocity decreases, dΩ < 0, when the pressure decreases, dp < 0. Another interesting result can be derived from eq. (7). If breakdown is to occur, the radius of the stream tube near the axis, r → 0, has to increase. The slope of the projection of the streamlines on the meridional plane is given by the differential equation

$$\frac{dr}{dx} = \frac{v}{u}. \qquad (10)$$
(11)
and integration of eq. (11) yields
Ωi ( xi , r → 0 ) r ( x ) = ri ( xi ) Ω ( x, r → 0)
1/2 .
(12)
Eq. (12) shows, that the radius of the stream tube r( x) increases if the angular velocity Ω is decreased. The flow behavior observed in decelerated flow upstream of a stagnation point, becomes clear, if the ratio of the angular velocities in eq. (12) is replaced by the axial mass flow ratio, given in eq. (8) r ( x ) = ri ( xi )
ρ ui ( xi , r → 0 ) ρu( x, r → 0)
1/2 .
(13)
Before the dependence of the angular velocity on the axial mass flow near the axis can further be studied, a model for the radial distribution of the flow quantities must be introduced. The Rankine vortex is chosen here together with the assumption of a constant total enthalpy. The resulting radial temperature and pressure profiles will be discussed in the next chapter.
3 Rankine vortex in compressible flow

In eq. (8) it was shown that the angular velocity near the axis is directly proportional to the axial mass flow. However, it follows from the formulation of the problem that the mass flow near the axis, ρu(x, r → 0), also depends on the lateral boundary conditions given in eqs. (2) and (3). In order to elucidate the influence of this dependence in a closed form, a Rankine vortex is introduced. Other models may be used, as for example the Lamb-Oseen or the Burgers vortex, but the Rankine vortex was chosen since it is convenient to handle. The radial distribution of the azimuthal velocity component of the Rankine vortex is given by rigid body rotation, eq. (6), in the core of the vortex, and a potential vortex for the flow outside of the core:

$$w(x, r) = \frac{\Gamma}{2\pi r}. \qquad (14)$$
The circulation of the vortex Γ is related to the angular velocity Ω by matching eq. (6) with eq. (14) at the edge of the core with radius r = R = D/2, which is defined by the location of the maximum value of the azimuthal velocity component w_max:

$$w_{max} = \Omega R = \frac{\Gamma}{\pi D}. \qquad (15)$$

In order to find out under what conditions a stagnation point can be formed on the axis, the radial temperature and pressure profiles have to be known. For the present study, in addition to eqs. (6) and (14), the stagnation enthalpy of the flow h_{0∞} is assumed to be constant, representing an integral of the energy equation in eq. (1). With a constant specific heat at constant pressure c_p, also the stagnation temperature T_{0∞} is constant. If it is further assumed that the axial velocity component does not vary in the radial direction, i. e. u = u_∞, and that the radial velocity component v is small and can be neglected, the integral of the energy equation can be written as

$$T(r) = T_{0\infty} - \frac{u_\infty^2 + w^2(r)}{2 c_p}. \qquad (16)$$
With the maximum value of the azimuthal velocity component wmax at the edge of core, i. e. at r = R, given by eq. (15), an azimuthal reference
Mach number Ma_{c∞} can be defined with the reference axial velocity u_∞, a reference temperature T_∞,

$$T_\infty = T_{0\infty} - \frac{u_\infty^2}{2 c_p}, \qquad (17)$$

and with a_∞ = (γ R_G T_∞)^{1/2}, the free-stream speed of sound:

$$Ma_{c\infty} = \frac{w_{max}}{\left(\gamma R_G T_\infty\right)^{1/2}}. \qquad (18)$$
The radial profile of the temperature, eq. (16), can now be written in the following form:

$$r \le R:\quad \frac{T(r)}{T_\infty} = 1 - (\gamma - 1)\,Ma_{c\infty}^2\,\frac{r^2}{2R^2}, \qquad
r \ge R:\quad \frac{T(r)}{T_\infty} = 1 - (\gamma - 1)\,Ma_{c\infty}^2\,\frac{R^2}{2r^2}. \qquad (19)$$
The static temperature at the edge of the core, i. e. at r = R, is

$$r = R:\quad \frac{T_c}{T_\infty} = 1 - (\gamma - 1)\,\frac{Ma_{c\infty}^2}{2}. \qquad (20)$$
With the radial temperature profile defined, the radial pressure profile can be determined. To this end the radial momentum equation in eq. (1) is simplified with the assumption of a small radial velocity component v, already introduced in the simplification of the energy equation. By eliminating the density ρ in the simplified radial momentum equation with the thermal equation of state in eq. (1), the radial pressure distribution can be obtained by insertion of eqs. (6), (14), and (19) into the following simplified radial momentum equation
$$(\ln p)_r = \frac{w^2}{r\,R_G\,T(r)}, \qquad (21)$$
and after integration of eq. (21) there results

$$r \le R:\quad \frac{p(x,r)}{p_\infty} = \left[1 - (\gamma - 1)\,Ma_{c\infty}^2\,\frac{r^2}{2R^2}\right]^{\frac{-\gamma}{\gamma-1}}\left[1 - (\gamma - 1)\,\frac{Ma_{c\infty}^2}{2}\right]^{\frac{2\gamma}{\gamma-1}},$$
$$r \ge R:\quad \frac{p(x,r)}{p_\infty} = \left[1 - (\gamma - 1)\,Ma_{c\infty}^2\,\frac{R^2}{2r^2}\right]^{\frac{\gamma}{\gamma-1}}. \qquad (22)$$
The pressure on the axis p( x, 0) follows from eq. (22) to
$$r = 0:\quad p(x, 0) = p_\infty\left[1 - (\gamma - 1)\,\frac{Ma_{c\infty}^2}{2}\right]^{\frac{2\gamma}{\gamma-1}}. \qquad (23)$$
The pressure p∞ represents the static pressure at large radial distances, r → ∞. Eq. (23) can now be employed to develop a breakdown criterion with the aid of eq. (8).
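The radial profiles (19) and (22) and the axis pressure (23) are straightforward to evaluate numerically; the following short Python sketch (an editorial illustration with an arbitrarily chosen azimuthal Mach number, not part of the original paper) reproduces them:

import numpy as np

gamma, Ma_c = 1.4, 0.5          # ratio of specific heats; sample azimuthal Mach number
R = 1.0                         # core radius (lengths scaled with R)
r = np.linspace(0.05, 3.0, 300)

# eq. (19): T(r)/T_inf inside (r <= R) and outside (r >= R) the core
T = np.where(r <= R,
             1.0 - (gamma - 1.0) * Ma_c**2 * r**2 / (2.0 * R**2),
             1.0 - (gamma - 1.0) * Ma_c**2 * R**2 / (2.0 * r**2))

# eq. (22): p(x, r)/p_inf, matched at the core edge r = R
p_in = ((1.0 - (gamma - 1.0) * Ma_c**2 * r**2 / (2.0 * R**2)) ** (-gamma / (gamma - 1.0))
        * (1.0 - (gamma - 1.0) * Ma_c**2 / 2.0) ** (2.0 * gamma / (gamma - 1.0)))
p_out = (1.0 - (gamma - 1.0) * Ma_c**2 * R**2 / (2.0 * r**2)) ** (gamma / (gamma - 1.0))
p = np.where(r <= R, p_in, p_out)

# eq. (23): pressure on the axis
p_axis = (1.0 - (gamma - 1.0) * Ma_c**2 / 2.0) ** (2.0 * gamma / (gamma - 1.0))
print(f"p(x,0)/p_inf = {p_axis:.4f}, minimum T/T_inf = {T.min():.4f}")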
4 Breakdown criterion for the compressible Rankine vortex

In order to combine the results of the last section, in particular the pressure on the axis p(x, 0) as obtained in eq. (23), with those derived for the axial flow near the axis, eq. (9) will be integrated first, since it relates the angular velocity Ω(x, r → 0) to the pressure p(x, r → 0). Then, by requiring that Ω(x, r → 0) has to vanish if breakdown is to occur, the corresponding Mach number Ma_{c∞} or the circulation Γ can be determined as a function of the axial Mach number at the inflow cross-section Ma_{i∞}. Eq. (9), however, can only be integrated if the axial Mach number is assumed to be given by the isentropic relation as a function of the static pressure p and the stagnation pressure p_0 of the axial flow:

$$Ma = \left[\frac{2}{\gamma - 1}\left(\left(\frac{p_0}{p}\right)^{\frac{\gamma-1}{\gamma}} - 1\right)\right]^{1/2}. \qquad (24)$$
Eq. (9) can then be written as

$$\frac{d\Omega}{\Omega} = \frac{1}{\gamma}\left[2 - \frac{\gamma - 1}{\left(\dfrac{p_0}{p}\right)^{\frac{\gamma-1}{\gamma}} - 1}\right]\frac{dp}{2p}. \qquad (25)$$
Integration of eq. (25) yields
$$\frac{\Omega}{\Omega_i} = \left(\frac{p}{p_i}\right)^{\frac{\gamma+1}{2\gamma}}\left[\left(\frac{p}{p_0}\right)^{-\frac{\gamma-1}{\gamma}} - 1\right]^{\frac{1}{2}}\left[\left(\frac{p_i}{p_0}\right)^{-\frac{\gamma-1}{\gamma}} - 1\right]^{-\frac{1}{2}}. \qquad (26)$$
In eq. (26) the index i again denotes the inflow cross-section; it is seen, that Ω can vanish only, if the first square bracketed term on the right-hand side of eq. (26) vanishes. This is only possible, if the pressure p is equal to the stagnation pressure p0 , i. e. if the flow develops a stagnation point. If a stagnation point on the axis is to be formed, the static pressure has to increase by an amount equal to the radial pressure difference
$$\Delta p_{0\,radial} = p_\infty - p(x, 0). \qquad (27)$$
Following the assumption introduced earlier, namely that the axial flow is isentropically decelerated, the required pressure difference along the axis is

$$\Delta p_{0\,axial} = p_0 - p_i(0, 0) = p_0\left\{1 - \left[1 + (\gamma - 1)\,\frac{Ma_{i\infty}^2}{2}\right]^{-\frac{\gamma}{\gamma-1}}\right\}. \qquad (28)$$

In eq. (28) the pressure p_i(0, 0) is the static pressure in the inflow cross-section of the vortex, and the axial Mach number Ma_{i∞} is defined by the uniform axial velocity component u_∞ and the free-stream speed of sound a_∞, i. e. Ma_{i∞} = u_∞/a_∞. A stagnation point can be formed, if
$$\Delta p_{0\,radial} = p_\infty - p(x, 0) = \Delta p_{0\,axial} = p_0 - p_i(0, 0). \qquad (29)$$
Eq. (29) relates the azimuthal Mach number Mac∞ , defined by eq. (18), to the axial Mach number Mai∞ , defined earlier in conjunction with eq. (24):
$$\left[1 - (\gamma - 1)\,\frac{Ma_{c\infty}^2}{2}\right]^{-\frac{2\gamma}{\gamma-1}} = \left[1 + (\gamma - 1)\,\frac{Ma_{i\infty}^2}{2}\right]^{\frac{\gamma}{\gamma-1}}. \qquad (30)$$
Eq. (30) is solved for Mac∞ , to give
$$Ma_{c\infty}^2 = \frac{2\left\{1 - \left[1 + (\gamma - 1)\,\dfrac{Ma_{i\infty}^2}{2}\right]^{-\frac{1}{2}}\right\}}{\gamma - 1}. \qquad (31)$$
Since the right-hand side of eq. (31) varies between zero and 2/(γ − 1) for Mach numbers 0 ≤ Ma_{i∞} ≤ ∞, it follows that for all axial Mach numbers 0 ≤ Ma_{i∞} ≤ ∞ there exist azimuthal Mach numbers Ma_{c∞} such that a stagnation point can be formed on the axis of the vortex, and breakdown can be initiated. By letting Ma_{c∞} and Ma_{i∞} approach zero, the limiting value for incompressible flow, w_{max}/u_∞ = (1/2)^{1/2}, given in [2], is readily obtained. It is also possible to determine, instead of the azimuthal Mach number Ma_{c∞}, the circulation Γ of the vortex required for the formation of a stagnation point on the axis, by simply replacing Ma_{c∞} in eq. (31) by w_{max}, defined in eq. (15), and the free-stream speed of sound a_∞. The circulation required for the formation of a stagnation point on the axis of the vortex is given by the following expression:
$$\Gamma^2 = 8\,a_\infty^2\,\pi^2 R^2\;\frac{1 - \left[1 + (\gamma - 1)\,\dfrac{Ma_{i\infty}^2}{2}\right]^{-\frac{1}{2}}}{\gamma - 1}. \qquad (32)$$
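A quick numerical check of the criterion (an editorial sketch, assuming the reconstructed form of eq. (31); not part of the original paper): for Ma_{i∞} → 0 the incompressible limit w_{max}/u_∞ = (1/2)^{1/2} quoted above is recovered.

import math

def ma_c_breakdown(ma_i, gamma=1.4):
    """Azimuthal Mach number at breakdown for isentropic deceleration, eq. (31)."""
    ma_c_sq = 2.0 * (1.0 - (1.0 + (gamma - 1.0) * ma_i**2 / 2.0) ** -0.5) / (gamma - 1.0)
    return math.sqrt(ma_c_sq)

for ma_i in (0.01, 0.5, 1.0, 2.0, 5.0):
    ma_c = ma_c_breakdown(ma_i)
    print(f"Ma_i = {ma_i:4.2f}: Ma_c = {ma_c:.4f}, Ma_c/Ma_i = {ma_c / ma_i:.4f}")
# As Ma_i -> 0 the ratio Ma_c/Ma_i tends to (1/2)**0.5 = 0.7071,
# the incompressible limit w_max/u_inf = (1/2)**(1/2) of [2].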
Finally, the deceleration of the axial flow by letting a normal shock intersect the vortex is discussed. Then the axial pressure difference, given by
eq. (28) has to be adapted to include the loss of the total pressure across the normal shock. With the relation for the radial pressure difference, eq. (23), the isentropic relation for the increase of the pressure along the axis of the vortex, eq. (24), and the relation for the ratio of the total pressures across a normal shock, given by

$$\frac{p_{02}}{p_{01}} = \left[1 + \frac{2\gamma}{\gamma + 1}\left(Ma_{i\infty}^2 - 1\right)\right]^{-\frac{1}{\gamma-1}}\left[\frac{(\gamma + 1)\,Ma_{i\infty}^2}{(\gamma - 1)\,Ma_{i\infty}^2 + 2}\right]^{\frac{\gamma}{\gamma-1}}, \qquad (33)$$

there results for the azimuthal Mach number Ma_{c∞}
$$Ma_{c\infty}^2 = \frac{2\left\{1 - \left[1 + \dfrac{2\gamma\left(Ma_{i\infty}^2 - 1\right)}{\gamma + 1}\right]^{\frac{1}{2\gamma}}\left[\frac{(\gamma + 1)\,Ma_{i\infty}^2}{2}\right]^{-\frac{1}{2}}\right\}}{\gamma - 1}. \qquad (34)$$
Fig. 1. The azimuthal Mach number Mac∞ and the ratio Mac∞ / Ma∞ at which breakdown occurs, as a function of the axial Mach number Ma∞ . Experimental data are given in [6]
Again, for each axial Mach number Mai∞ an azimuthal Mach number Mac∞ can be determined, such that a stagnation point can be formed. The azimuthal Mach number Mac∞ and the ratio Mac∞ / Mai∞ are shown in Fig. 1, taken from [4] as a function of the axial Mach number Mai∞ , with a slight change in notation, together with the experimental results of [6], indicated by the shaded area. The results shown in Fig. 1 confirm that for low supersonic axial Mach numbers the assumption of isentropic deceleration of the axial flow in the
vortex core does not introduce a large error, compared to the deceleration by a normal shock. The few experimental data available in the literature [6], substantiate the results obtained in the present study.
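For completeness, a short comparison of the two deceleration models over a range of axial Mach numbers, analogous to the curves of Fig. 1, can be sketched as follows (an editorial illustration, assuming the reconstructed forms of eqs. (31) and (34)):

import math

def ma_c_sq_isentropic(ma_i, gamma=1.4):
    # eq. (31): isentropic deceleration of the axial flow
    return 2.0 * (1.0 - (1.0 + (gamma - 1.0) * ma_i**2 / 2.0) ** -0.5) / (gamma - 1.0)

def ma_c_sq_normal_shock(ma_i, gamma=1.4):
    # eq. (34): deceleration through a normal shock (Ma_i >= 1)
    factor = ((1.0 + 2.0 * gamma * (ma_i**2 - 1.0) / (gamma + 1.0)) ** (1.0 / (2.0 * gamma))
              * ((gamma + 1.0) * ma_i**2 / 2.0) ** -0.5)
    return 2.0 * (1.0 - factor) / (gamma - 1.0)

for ma_i in (1.0, 1.5, 2.0, 3.0, 5.0):
    mc_is = math.sqrt(ma_c_sq_isentropic(ma_i))
    mc_ns = math.sqrt(ma_c_sq_normal_shock(ma_i))
    print(f"Ma_i = {ma_i:3.1f}: Ma_c (isentropic) = {mc_is:.3f}, Ma_c (normal shock) = {mc_ns:.3f}")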
5 Concluding remarks

In the present investigation the steady, compressible, inviscid, axially symmetric flow near the axis of slender vortices was studied with simplified relations of the Euler equations. By combining the continuity equation with the azimuthal momentum equation, it could be shown that near the axis of the vortex the angular velocity is directly proportional to the mass flow in the axial direction. It could also be shown that in subsonic flow the angular velocity decreases if the static pressure increases in the axial direction, while in supersonic flow the angular velocity decreases when the pressure decreases. The integration of the simplified equation relating the angular velocity to the axial mass flow leads to a breakdown criterion, in the present study restricted to a Rankine vortex. According to the criterion derived, the maximum azimuthal Mach number needed for breakdown can be determined if the axial reference Mach number is known. The few available experimental data confirm at least qualitatively the results presented. Further studies are necessary for a deeper understanding of the process of vortex breakdown in compressible flow.
References

1. Krause E, Gersten K (eds) (1998) Dynamics of slender vortices. Proc. of IUTAM Symp. on Dynamics of Slender Vortices 1997. Kluwer Academic Publishers
2. Billant P, Chomaz JM, Delbende I, Huerre P, Loiseleux T, Olendraru C, Rossi M, Sellier A (1998) Instabilities and vortex breakdown in swirling jets and wakes. In: Krause E, Gersten K (eds) Proc. of IUTAM Symp. on Dynamics of Slender Vortices 1997. Kluwer Academic Publishers
3. Krause E (2000) Shock induced vortex breakdown. In: Proc. of Int. Conf. on Meth. of Astrophysical Research, part II. Publishing House of Siberian Branch of Russian Academy of Sciences, Novosibirsk
4. Krause E (2002) J Eng Thermophysics 11/3:229–242 (Review in: Bathe KJ (ed) (2003) Computational Fluid and Solid Mechanics. Elsevier Science, Oxford)
5. Schlechtriem S, Lötzerich M (1997) Breakdown of tip leakage vortices in compressors at flow conditions close to stall. In: Proc. of IGTI-ASME Conf., Orlando, Florida
6. Delery JM (1994) Prog Aerospace Sci 30:1–59
7. Sellers WL III, Meyers JF, Hepner TE (1988) LDV survey over a fighter model at moderate to high angle of attack. SAE Paper No 88-1448
8. Erickson GE, Hall RM, Banks DW, Del Frate JH, Shreiner JA, Hanley RJ, Pulley CT (1989) Experimental investigation of the F/A-18 vortex flows at subsonic through transonic speeds. AIAA Paper No 89-2222
9. Wentz WH (1987) Vortex-fin interaction on a fighter aircraft. In: Proc. of AIAA Fifth Applied Aerodynamics Conf., Monterey, CA
10. Lee B, Brown D (1990) Wind tunnel studies of F/A-18 tail buffet. AIAA Paper No 90-1432
11. Cole SR, Moss SW, Dogget RV Jr (1990) Some buffet response characteristics of a twin-vertical-tail configuration. NASA TM-102749
12. Bean OE, Lee BH (1994) Correlation of wind tunnel and flight test data for F/A-18 vertical tail buffet. AIAA Paper No 94-1800-CP
13. Washburn AE, Jenkins LN, Ferman MA (1993) Experimental investigation of vortex-fin interaction. AIAA Paper No 93-0050. In: Proc. of AIAA 31st Aerospace Sciences Meeting, Reno, NV
14. Cattafesta NL, Settles GS (1992) Experiments on shock-vortex interaction. AIAA Paper 92-0315. In: Proc. of AIAA 30th Aerospace Sciences Meeting, Reno, NV
15. Kalkhoran IM, Smart MK, Betti A (1996) AIAA J 34/9
16. Kalkhoran IM, Smart MK (1997) AIAA J 55:1589–1596
17. Kandil OA, Kandil HA, Liu CH (1991) Computation of steady and unsteady compressible quasi-axisymmetric vortex flow and breakdown. AIAA Paper 91-0752. In: Proc. of AIAA 29th Aerospace Sciences Meeting
18. Kandil OA, Kandil HA, Liu CH (1992) Shock-vortex interaction and vortex-breakdown modes. In: Schumann et al. (eds) IUTAM Symposium of Fluid Dynamics of High Angle of Attack. Springer Verlag, Tokyo
19. Erlebacher G, Hussaini MY, Shu CW (1996) Interaction of a shock with a longitudinal vortex. ICASE Report No 96-31
20. Nedungadi A, Lewis MJ (1996) AIAA J 34/12
21. Kandil OA, Kandil HA, Massey SJ (1993) Simulation of tail buffet using delta wing-vertical tail configuration. AIAA Paper No 93-3688-CP. In: Proc. of AIAA Atmospheric Flight Mechanics Conf., Monterey, CA
22. Kandil OA, Massey SJ, Kandil HA (1994) Computations of vortex-breakdown induced tail buffet undergoing bending and torsional vibrations. AIAA Paper No 94-1428-CP. In: Proc. of AIAA/ASME/ASCE/ASC Structural, Structural Dynamics and Material Conf.
23. Kandil OA, Massey SJ, Sheta EF (1996) Structural dynamics/CFD interaction for computation of vertical tail buffet. In: Proc. of International Forum on Aeroelasticity and Structural Dynamics, Royal Aeronautical Society, Manchester, U.K. Also published in Royal Aeronautical J 1996:297–303
24. Kandil OA, Sheta EF, Liu CH (1996) Computation and validation of fluid/structure twin-tail buffet response. In: Proc. of Euromech Colloquium 349, Structure Fluid Interaction in Aeronautics. Institut für Aeroelastik, Göttingen, Germany
25. Kandil OA, Sheta EF, Massey SJ (1997) Fluid/structure twin tail buffet response over a wide range of angles of attack. AIAA Paper No 97-2261-CP. In: Proc. of 15th AIAA Applied Aerodynamics Conf., Atlanta, GA
26. Kandil OA, Sheta EF, Liu CH (1997) Effects of coupled and uncoupled bending-torsion modes on twin-tail buffet response. In: Krause E, Gersten K (eds) IUTAM Symp. on Dynamics of Slender Vortices. Kluwer Academic Publishers
27. Thomer O (2003) Numerische Untersuchung von Längswirbeln mit senkrechten und schrägen Verdichtungsstößen; ein Vergleich verschiedener Lösungsansätze. Ph.D. Thesis, RWTH Aachen
28. Krause E, Thomer O, Schröder W (2003) Int J Comp Fluid Dyn 12(2)/33:266–278
Construction of monotonic schemes on the basis of method of differential approximation Yu.I. Shokin and G.S. Khakimzyanov Institute of Computational Technologies SB RAS, Lavrentiev Ave. 6, Novosibirsk, 630090, Russia [email protected] [email protected]
Summary. A new approach to the construction of monotonic nonlinear second-order difference schemes, based on the investigation of the differential approximation of a scheme, is presented. One of the possible formulas for the definition of the approximate viscosity is given, which leads to the coincidence of the constructed scheme with Harten's TVD scheme. The known and widely used TVD schemes with other limiters can also be obtained using the presented approach.
1 Introduction

Nowadays TVD schemes and their numerous modifications are used for solving many problems with discontinuous solutions. The high popularity of these methods lies in the fact that they provide non-oscillating solution profiles and high resolution in the regions of discontinuities, and maintain high accuracy in the regions where the solution is smooth. The modern high order TVD schemes are based on various methods of reconstruction of the function values on cell edges from its values in the centers of neighboring cells. Here the scheme stencil is variable and depends on the behavior of the numerical solution. The reconstruction techniques are based on the use of special flux limiters [1, 2], which are constructed so that the scheme with the limiters has the TVD property [3] (Total Variation Diminishing, i.e. the total variation of a numerical solution is non-increasing) and, as a consequence, maintains the monotonicity of a numerical solution. In the present work, the monotonization of second-order schemes is achieved not by the construction of flux limiters, but by the analysis of the differential approximation of a scheme. This approach to the construction of monotonic schemes is demonstrated for the explicit predictor-corrector scheme [4] for a nonlinear scalar equation and for the shallow water equation system with one spatial variable.
2 Monotonic predictor-corrector scheme for scalar equation

Let us consider the explicit predictor-corrector scheme for the scalar equation

$$u_t + [f(u)]_x = 0. \qquad (1)$$

On the predictor step

$$\frac{f_j^* - \frac{1}{2}\left(f_{j-1/2}^n + f_{j+1/2}^n\right)}{\tau_j^*} + a_j^n\,\frac{f_{j+1/2}^n - f_{j-1/2}^n}{h} = 0 \qquad (2)$$
the fluxes f_j^* are calculated, which correspond to the integer nodes x_j (cell boundaries) of the uniform grid with the step size h. In equation (2): τ_j^* = 0.5τ(1 + θ_j^n), τ is the time step, θ_j^n is the scheme parameter, which changes in general from one node to another and from one time level to another, f_{j+1/2}^n = f(u_{j+1/2}^n),

$$a_j^n = \begin{cases}\dfrac{f_{j+1/2}^n - f_{j-1/2}^n}{u_{j+1/2}^n - u_{j-1/2}^n} & \text{for } u_{j+1/2}^n \ne u_{j-1/2}^n,\\[3mm] a\!\left(u_{j+1/2}^n\right) & \text{for } u_{j+1/2}^n = u_{j-1/2}^n,\end{cases}$$

and a(u) = f_u(u). The difference equation (2) is the result of the approximation of the differential equation

$$f_t + a(u)\,f_x = 0, \qquad (3)$$

which is obtained after the multiplication of (1) by the function a(u).
nodes x j+1/2 = x j + h/2 (cell centers). For θ = 0 the written scheme coincides with the Lax-Wendroff scheme. If θ = O(h), then the scheme (2), (4) approximates the equation (1) with the second order in τ and h at the law of the passage to the limit æ = τ /h = const. The necessary stability condition, obtained for a(u) = const, θ = const, requires that in particular the parameter θ takes nonnegative values. Therefore, the following limitation on the grid function θ nj is assumed to be fulfilled: θ nj ≥ 0.
Construction of monotonic schemes
15
The scheme (2), (4) can be written in the following form: unj++11/2 − unj+1/2
τ
+ C−, j unx, j − C+, j+1 unx, j+1 = 0,
(5)
where unx, j
=
unj+1/2 − unj−1/2 h
C±, j =
,
1 æ(1 + θ nj )( anj )2 ∓ anj . 2
The fulfillment of the inequalities: C−, j ≥ 0,
C+, j ≥ 0,
C−, j + C+, j ≤
1 æ
(6)
is sufficient [3] for the scheme (5) to be TVD scheme. Therefore, if the condition is fulfilled: 1 1 ≤ | anj |æ ≤ (7) n 1 + θj 1 + θ nj and θ = O(h), then the scheme (2), (4) is the second order scheme, which maintains the monotonicity of the numerical solution. Let us note, that for the arbitrary nonnegative function θ ( x, t) such that θ = O(h), it is practically impossible to get the conditions (7) fulfilled. Even, if a(u) = const and the conditions (7) are fulfilled, and, therefore, the considered scheme remains monotonic, then it has a significant disadvantage. This disadvantage is that in order to maintain the second order of approximation and monotonicity for the fine grids, the calculation has to be done with the Courant numbers, close to one. Moreover, the finer grid is used, the closer the Courant number must be to one. From now on we assume that the grid function θ nj depends on the numerical solution on the n-th time level. It turns out that for such variable parameter the predictor-corrector scheme can maintain the solution monotonicity for all values of the Courant number, which are subject to the following condition:
| anj |æ < 1.
(8)
The method of differential approximation [5, 6] is used for the choice of parameter θ. Assuming that the function θ ( x) and all its derivatives have the order O(h), we obtain the following expression for the first differential approximation (f.d.a.) of the scheme (2), (4): τ h2 2 2 (θ a2 u x ) x + . (9) a( a æ − 1)u x 2 6 xx In those subdomains, where the possibility of the numerical solution oscillations appears, it is necessary to change the dispersion of the difference ut + f x =
16
Yu.I. Shokin and G.S. Khakimzyanov
scheme. For θ = O(h) the main contribution to the dispersion of the difference scheme will be done by the second term in the right-hand side of (9). Let us take the function θ so that the first term in the right-hand side of f.d.a either totally or partially compensate the dispersive term ah2 2 2 a æ − 1 u xxx , 6
(10)
or give the contribution to f.d.a., which leads to the change of a sign for the coefficient of the third derivative u xxx . For definiteness, let us assume that a > 0. Therefore, in (10) the coefficient of u xxx will have the negative sign. If we assume, for instance, that a ( 1 − a 2 æ2 ) u x x θ ( x) = h , 3a2 æu x then the dispersive term (10) is totally compensated by the first term in the right-hand side of f.d.a. (9). If the function θ ( x) is chosen in the form
θ ( x) = h
[ a ( 1 − aæ) u x ] x , a 2 æu x
(11)
then, taking into account the first term in the right-hand side of (9), the dispersive term of f.d.a. ah2 (1 − aæ) (2 − aæ) u xxx , 6 will have the positive coefficient. Thus, the dispersion of the difference scheme can be controlled with the help of the function θ ( x). For definiteness, from now on we consider the function θ ( x) in the form (11) and still assume that a > 0. It is clear, that there is no sense to change the dispersion of the scheme all over the whole domain. Therefore, the formula (11) requires the further refinement. Let us now take into account the condition of non-negativity of the function θ ( x). Therefore, in those subdomain, where the derivatives [ a(1 − aæ)u x ] x and u x are of different sign, it is possible either to assume θ = 0 (the Lax-Wendroff scheme) or to set θ = Ch, C = const > 0 (the predictor-corrector scheme with a small approximate viscosity). Let us consider the first case. Therefore, we obtain the following expression instead of (11): [ a ( 1 − aæ) u x ] x θ ( x) = max h , 0 . a 2 æu x Besides, it is necessary to take care of the boundedness of the function θ ( x) for h → 0. It is possible to claim, for instance, for the function θ ( x) to be bounded above by the number, which satisfies the inequalities (7) for the given æ and a > 0. For example, it is possible to request
Construction of monotonic schemes
θ ( x) ≤
17
a − a2 æ . a2 æ
Therefore, the indicated method of choosing the function θ(x) leads to the formula

θ(x) = min( (a − a²æ)/(a²æ), max( h · [a(1 − aæ) u_x]_x / (a²æ u_x), 0 ) ).

The form of the grid function θ_j depends on the choice of the approximation formula for the derivative [a(1 − aæ)u_x]_x. For a_j^n > 0 we can replace it, for instance, with the following finite-difference (upwind) approximation:

( a_j^n (1 − a_j^n æ) u_{x,j} − a_{j−1}^n (1 − a_{j−1}^n æ) u_{x,j−1} ) / h.

Acting in a similar way for a < 0, we obtain one of many possible formulas for calculating the grid function θ_j in the case of an arbitrary sign of the function a(u):

θ_j = 0   for a_j = 0, or |g̃_j| ≤ |g̃_{j−s}| and u_{x,j} u_{x,j−s} ≥ 0,
θ_j = ( (|a_j| − æa_j²) u_{x,j} − (|a_{j−s}| − æa_{j−s}²) u_{x,j−s} ) / (æ a_j² u_{x,j})   for a_j ≠ 0, |g̃_j| > |g̃_{j−s}| and u_{x,j} u_{x,j−s} ≥ 0,
θ_j = (|a_j| − æa_j²) / (æ a_j²)   for a_j ≠ 0 and u_{x,j} u_{x,j−s} < 0,   (12)

where s = sgn a_j,

g̃_j = (|a_j|/2)(1 − |a_j|æ) u_{x,j},

and the upper index n is omitted for the grid functions a_j^n and u_{x,j}^n. Let us show that under the condition (8) the predictor-corrector scheme (2), (4) with the grid function θ given by formula (12) maintains the monotonicity of the numerical solution. For this purpose we rewrite the expression for the flux (the index n is omitted):

f*_j = (1/2)( f_{j+1/2} + f_{j−1/2} − τ a_j² (1 + θ_j) u_{x,j} ) =   (13)
     = (1/2)( f_{j+1/2} + f_{j−1/2} − |a_j| h u_{x,j} + 2h g̃_j − a_j² τ θ_j u_{x,j} ).

The numbers u_{x,j} and g̃_j are of equal sign (due to the condition (8)); therefore, the expression for the flux can be written in the form
f*_j = (1/2)( f_{j+1/2} + f_{j−1/2} − |a_j| h u_{x,j} + 2h g_{j−s/2} ),

where

g_{j+1/2} = sgn(g̃_j) min(|g̃_j|, |g̃_{j+1}|)  for g̃_j g̃_{j+1} ≥ 0,   g_{j+1/2} = 0  for g̃_j g̃_{j+1} < 0.
Since the numbers g_{j−1/2} and g_{j+1/2} are of equal sign, we have

|g_{j+1/2} − g_{j−1/2}| ≤ max(|g_{j−1/2}|, |g_{j+1/2}|) ≤ |g̃_j|

and

h|γ_j| ≤ |g̃_j| / |u_{x,j}| = (|a_j|/2)(1 − |a_j|æ) ≤ |a_j|/2,   (14)

where

γ_j = (g_{j+1/2} − g_{j−1/2}) / (h u_{x,j})  for u_{x,j} ≠ 0,   γ_j = 0  for u_{x,j} = 0.
From the estimate (14) it follows that the numbers a_j and ν_j^M = a_j + hγ_j are of equal sign; therefore, the following equality is valid:

f*_j = (1/2)( f_{j+1/2} + f_{j−1/2} + h( g_{j−1/2} + g_{j+1/2} − |ν_j^M| u_{x,j} ) ).

Using this expression for the flux, the scheme (4) can be written in the form (5):

( u^{n+1}_{j+1/2} − u^n_{j+1/2} )/τ + ( (|ν_j^M| + ν_j^M)/2 ) u^n_{x,j} − ( (|ν_{j+1}^M| − ν_{j+1}^M)/2 ) u^n_{x,j+1} = 0,

where C_{−,j} ≥ 0, C_{+,j} ≥ 0; therefore, if the condition

æ |ν_j^M| ≤ 1   (15)
is fulfilled, then the scheme has the TVD property and maintains the solution monotonicity. Using the estimate (14) for u_{x,j} ≠ 0, we obtain

æ |ν_j^M| ≤ æ ( |a_j| + (|a_j|/2)(1 − |a_j|æ) ).

Thus, the fulfillment of the inequality

æ |a_j| ( 1 + (1/2)(1 − |a_j|æ) ) ≤ 1   (16)

is sufficient for the fulfillment of the condition (15). It is easy to check that under the condition (8) the inequality (16) holds. Let us note that if the function θ is given by formula (12), then the TVD scheme (2), (4) coincides with the known Harten scheme [3] with limiters of the minmod type.
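As an illustration only, the following minimal sketch evaluates θ_j of formula (12) at a single node in the scalar case. The array names a[] and ux[] (cell values of a_j and u_{x,j}), the constant kappa standing for æ = τ/h, and the helper g_tilde are assumptions introduced for this example, not part of the paper.

```c
#include <math.h>
#include <stdio.h>

/* g~_j = (|a_j|/2)(1 - |a_j| kappa) u_x,j,  kappa playing the role of ae = tau/h */
static double g_tilde(double a, double ux, double kappa)
{
    return 0.5 * fabs(a) * (1.0 - fabs(a) * kappa) * ux;
}

/* Dispersion-control parameter theta_j of formula (12) at node j. */
static double theta_j(const double *a, const double *ux, int j, double kappa)
{
    if (a[j] == 0.0)
        return 0.0;
    int s = (a[j] > 0.0) ? 1 : -1;                 /* s = sgn a_j              */
    double gj  = g_tilde(a[j],     ux[j],     kappa);
    double gjs = g_tilde(a[j - s], ux[j - s], kappa);

    if (ux[j] * ux[j - s] < 0.0)                   /* derivatives change sign  */
        return (fabs(a[j]) - kappa * a[j] * a[j]) / (kappa * a[j] * a[j]);
    if (fabs(gj) <= fabs(gjs))                     /* limiter switched off     */
        return 0.0;
    return ((fabs(a[j]) - kappa * a[j] * a[j]) * ux[j]
            - (fabs(a[j - s]) - kappa * a[j - s] * a[j - s]) * ux[j - s])
           / (kappa * a[j] * a[j] * ux[j]);
}

int main(void)
{
    double a[]  = {0.8, 0.8, 0.8};                 /* hypothetical sample data */
    double ux[] = {1.0, 0.4, 0.1};
    printf("theta_1 = %f\n", theta_j(a, ux, 1, 0.5));
    return 0;
}
```

With this θ_j the flux (13) reduces to the minmod-limited form discussed above.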
3 Predictor-corrector scheme for shallow water equation system with one spatial variable

Let us apply the dispersion control method considered above to the predictor-corrector scheme which approximates the shallow water equations on a uniform grid with step h:

V_t + F_x = G.

Here

V = ( H, Hu )ᵀ,   F = ( Hu, Hu² + H²/2 )ᵀ,   G = ( 0, H h_x )ᵀ,   (17)
u(x, t) is the fluid velocity, H(x, t) = η(x, t) + h(x) is the total depth, η(x, t) is the free surface elevation over the still-water level z = 0, and z = −h(x) is the function defining the basin bottom. On the corrector step, the values V^{n+1}_{j+1/2} are obtained at the half-integer nodes on the (n + 1)-th time level. For this purpose the approximation of the divergence system (17) is used:

( V^{n+1}_{j+1/2} − V^n_{j+1/2} )/τ + ( F*_{j+1} − F*_j )/h = G*_{j+1/2}.   (18)
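As a minimal illustration of the corrector step (18), the following sketch performs one explicit update of the cell values. The array names V, Fstar, Gstar, the cell count NJ and the handling of boundary fluxes are assumptions made for this example only.

```c
#include <stddef.h>

#define NJ 100  /* assumed number of cells; cell j corresponds to node j+1/2 */

/* One corrector step of the divergence form (18): V <- V - (tau/h)(F*_{j+1}-F*_j) + tau*G*. */
void corrector_step(double V[NJ][2], const double Fstar[NJ + 1][2],
                    const double Gstar[NJ][2], double tau, double h)
{
    for (size_t j = 0; j < NJ; ++j)
        for (int m = 0; m < 2; ++m)      /* components H and Hu */
            V[j][m] += -tau / h * (Fstar[j + 1][m] - Fstar[j][m]) + tau * Gstar[j][m];
}
```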
The fluxes F*_j are calculated on the predictor step. For this purpose one approximates the equation for the flux vector (the analogue of equation (3)),

F_t + A F_x = A G,   (19)

obtained as the result of multiplying equation (17) by the Jacobi matrix A = ∂F/∂V. Let λ_k (k = 1, 2) be the eigenvalues of the matrix A, Λ the diagonal matrix with the elements λ_1, λ_2 on its diagonal, R_k(V) the right eigenvectors corresponding to the eigenvalues λ_k, R(V) the matrix whose columns are these vectors, and L = R⁻¹. Then A² = RΛ²L, and the equation (19) can be written as follows:

L F_t + Λ² L V_x = L A G.

Let us approximate it with the following difference equation:

( D⁻¹ L )^n_j ( F*_j − (1/2)( F^n_{j+1/2} + F^n_{j−1/2} ) ) / (τ/2) + ( Λ² P )^n_j = ( D⁻¹ L A G )^n_j.

Here P^n_j = L^n_j V^n_{x,j}, D^n_j is the diagonal matrix with the elements 1 + θ^n_{1,j}, 1 + θ^n_{2,j} on its diagonal, θ^n_{k,j} ≥ 0, θ_k = O(h). Hence follows the final formula for calculating the flux vector:
F*_j = (1/2)( F^n_{j+1/2} + F^n_{j−1/2} − τ ( R Λ² D P )^n_j ) + (τ/2) ( A G )^n_j.   (20)
The last formula is similar to the formula (13) for the flux f*; therefore, for calculating the functions θ_k it is possible to use the analogue of formula (12), which was used for the scalar equation:

θ_{k,j} = 0   for λ_{k,j} = 0, or |g̃_{k,j}| ≤ |g̃_{k,j−s}| and P_{k,j} P_{k,j−s} ≥ 0,
θ_{k,j} = 2( g̃_{k,j} − g̃_{k,j−s} ) / ( æ λ_{k,j}² P_{k,j} )   for λ_{k,j} ≠ 0, |g̃_{k,j}| > |g̃_{k,j−s}| and P_{k,j} P_{k,j−s} ≥ 0,
θ_{k,j} = ( |λ_{k,j}| − æ λ_{k,j}² ) / ( æ λ_{k,j}² )   for λ_{k,j} ≠ 0 and P_{k,j} P_{k,j−s} < 0,   (21)

where s = sgn λ_{k,j}, k = 1, 2, P_1 and P_2 are the components of the vector P, the index n is omitted, and

g̃_{k,j} = ( |λ_{k,j}| / 2 )( 1 − |λ_{k,j}| æ ) P_{k,j}.
Calculations of test problems with discontinuous solutions have confirmed that the scheme (20), (18), (21) maintains the monotonicity of the profiles of the grid functions.
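For the system (17) the eigen-decomposition used in (19)–(20) can be written out explicitly. The sketch below does this under the assumption, implied by the flux Hu² + H²/2 in (17), that the equations are written in dimensionless form with the gravity acceleration equal to one, so that the Jacobi matrix A = ∂F/∂V has the eigenvalues u ± √H. The function name and calling convention are illustrative assumptions, not part of the paper.

```c
#include <math.h>
#include <stdio.h>

/* Eigenvalues lambda_k, right-eigenvector matrix R and L = R^{-1} of A = dF/dV
   for V = (H, Hu), F = (Hu, Hu^2 + H^2/2). */
static void eigen_shallow_water(double H, double u,
                                double lambda[2], double R[2][2], double L[2][2])
{
    double c = sqrt(H);                     /* wave celerity (g = 1 assumed) */
    lambda[0] = u - c;
    lambda[1] = u + c;
    /* right eigenvectors R_k = (1, lambda_k)^T stored as columns of R */
    R[0][0] = 1.0;        R[0][1] = 1.0;
    R[1][0] = lambda[0];  R[1][1] = lambda[1];
    /* L = R^{-1} */
    double det = lambda[1] - lambda[0];     /* = 2c > 0 for H > 0 */
    L[0][0] =  lambda[1] / det;  L[0][1] = -1.0 / det;
    L[1][0] = -lambda[0] / det;  L[1][1] =  1.0 / det;
}

int main(void)
{
    double lambda[2], R[2][2], L[2][2];
    eigen_shallow_water(1.0, 0.5, lambda, R, L);
    printf("lambda = %f %f\n", lambda[0], lambda[1]);
    return 0;
}
```

The characteristic variables P = L V_x and the matrices Λ, D then enter formula (20) directly.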
References 1. Sweby P (1984) SIAM J Numer Anal 21: 995–1011 2. Kulikovskii A, Pogorelov N, Semenov A (2001) Mathematical aspects of numerical solution of hyperbolic systems. Chapman & Hall/CRC Monographs and Surveys in Pure and Applied Mathematics, 118, London/Boca Raton 3. Harten A (1983) J Comput Phys 49: 357–393 4. Khakimzyanov G, Shokin Yu, Barakhnin V, Shokina N (2001) Numerical simulation of fluid flows with surface waves. Publishing House of SB RAS, Novosibirsk 5. Shokin Yu (1983) The method of differential approximation. Springer Series in Computational Physics. Springer-Verlag, Berlin 6. Shokin Yu, Khakimzyanov G (1997) Introduction to the method of differential approximation. NSU, Novosibirsk
Industrial and scientific frameworks for computational science and engineering M.M. Resch High Performance Computing Center Stuttgart (HLRS), University of Stuttgart, Nobelstraße 19, 70569 Stuttgart, Germany [email protected]
Summary. With the increase of processor performance numerical simulation has become a ubiquitous technology both in science and industry. In both fields it is part of a workflow or a process chain in which it has to be integrated in a seamless way. In research this integration is sometimes referred to as e-science. The term indicates that electronic means change the way scientists work. In industry the key words are information lifecycle management (ILM) and product lifecycle management (PLM). They describe the fact that information processes are integrated via a coordinated software solution into processes and work flows. Simulation and its results have to become a part of this context. The paper presented focuses on simulations carried out on supercomputing systems. The special problems of their integration both in the scientific work and the industrial process are discussed.
1 Introduction High performance computing (HPC) has seen a dramatic change of concepts in recent years. With the advent of systems built from commodity parts the supercomputer typically is no longer a monolithic system but a living concept [1]. Based on standard components it can be assembled in any size and grow over time with the budget of the owner. This concept of a living high performance computer has shattered the traditional idea of a supercomputer. Although a list of fastest systems is still kept - mainly in order to define at least to some extent the fastest systems worldwide [2] via some measured performance - the line between supercomputers and ordinary systems has become fuzzy. Most experts in the field therefore today prefer to talk about "High Performance Technical Computing" (HPTC) - a term that is also used by market analysts [3]. The name reflects that the gap between standard systems (previously called workstations) and supercomputers has been closed recently. This was made possible because microprocessors were catching up with special purpose processors. On the one hand the clock rates were increased - still keeping up with Moore’s assumption on the growing speed of processors [4]. On
the other hand the concept of parallelism helped increase the performance of small systems. Recently parallelism was extended to the processor level using so called multi-core concepts [5]. As a consequence all supercomputers today get their performance from an increased number of parts rather than from an increased quality of each individual part [2]. The benefits of such a concept have been widely discussed. The downside of the concept was only recently made visible by a US report on the future of supercomputing [6]. From a users’ perspective the closing of the gap has brought high performance computing closer to the average scientist and engineer. With the adoption of commodity parts as building blocks for supercomputers technical simulation has become a potentially ubiquitous activity. With the introduction of Linux in the commercial field software has become portable across the whole range of performance level. Programs running on a laptop or PC can work with the same operating system and software environment on a large supercomputer. There are hardly any portability issues. This has dramatically increased the potential number of scientists and engineers working with large scale systems.
2 Science & industry With the closing of the gap between supercomputers and work stations the role of the supercomputer has undergone a change. It is no longer an expensive resource that can only be found in large government funded laboratories or universities. Instead we find large scale systems more and more in nonresearch settings. The percentage of systems among the fastest in the world that are found in industry has grown from about 16% in 1995 to over 50% in 2005 [2]. This alone reflects the trend towards usage of supercomputers in non-scientific non-elite simulation in the last decade. With the growing ease of acquiring and maintaining computing systems for simulation the scope of usage has been widened. Full exploitation of such systems is still extremely difficult. However, pre-packaged solutions can easily help the user to exploit such systems. Thus, simulation becomes a standard activity both in research and industry. A niche market technology evolves into an ubiquitous technology and with this transition come all the problems that similar technologies have seen during similar transition periods. Basically two large fields of deployment of such new HPTC systems can be found, both of which represent a challenge to the supercomputing community. 2.1 e-science The vast opportunities that were created by the internet, international research networks and increased availability of middleware has given rise to expectations in the scientific community that culminate in what is generally
called e-science. In the UK e-science is expected to be "the large scale science that will increasingly be carried out through distributed global collaborations enabled by the Internet" [7]. Consequently in 2000 a separate e-science project was set up bringing together computer scientists, computer centres, networking experts and users. Although focused on e-science, the project was later extended with the aim of bringing in industrial applications as well. The project has shown some very good results but was unable to meet the high expectations. Funding will therefore not be extended to the UK e-science project. In Germany a similar initiative was taken in 2004 with a project start in 2005. Again the expectations are high, claiming that "the opportunities to achieve better results - both quantitatively and qualitatively - in science and industrial development based on completely new methods have increased dramatically" [8]. Both definitions and the expectations that are put into words here make it clear that e-science is intended to go beyond the traditional concept of computational science and engineering (CSE). CSE was introduced into science as a third method of getting insight - the other two being theoretical work and experimental work. CSE, however, considered the simulation to be a numerical or computational experiment that was somehow linked to the other activities but not an integrated part of the scientist's work. Furthermore, like classical experimental work, simulation was considered to be a rather isolated activity. With the concept of e-science this is supposed to change. The new concept can rather be compared with the CERN (Conseil européen pour la recherche nucléaire) experiments. A large and international group of scientists share resources in order to be able to conduct research that none of them could do alone. For e-science the resources to be shared are mainly information, data and software - less so computers and other hardware. The goal of e-science is then to create a virtual space of knowledge and resources in which scientists can work together on simulation experiments as well as data exploration. The supercomputer is considered to be just another resource in this concept. It is a tool to do a special type of research - namely computational. However, it is only a small part of an overall concept and hence has to adapt to the requirements of the users - rather than the other way round as we have seen in the last decades.

2.2 e-industry

In industry the purpose of integration is not so much collaboration in a scientific sense but collaboration among different groups which have to share information and data. A typical development and production chain in industry includes at least the following steps, which typically are not sequential but do substantially overlap.
• Initial Product Design: This is a process where a product is laid out considering customers' requirements. The information or data created
is typically drawings - sometimes technical but sometimes also artistic paintings.
• Detailed Product Design: The results of an initial design phase are now brought into a well defined process. Artistic drawings are converted to computer readable data. For automotive industries these might be CAD data.
• Design Verification: The initial design undergoes a number of test steps to verify its feasibility and to make sure it meets the requirements set forth at the beginning of the project. This includes simulation as one way of understanding the design and predicting the behaviour. Simulation is done for the overall system configuration but also for individual parts. Input data are received from the detailed product design phase. The output created may be a large set of data.
• Production Planning: Together with the verification the production planning has to start. The information from the detailed design phase serves as input for this. Also during this phase simulation is a tool to better understand and predict the behaviour of production machines.
• Financial Planning: Cost is a decisive factor for competitiveness. A tight control of costs during the development process is mandatory. This, however, means that technical information has to be fed into the financial planning process and vice versa. If for technical reasons materials have to be changed, this has a financial impact because different materials may have different costs. The financial planning process therefore goes together with the overall design process and it can not be ignored.
• Manufacturing: The planning for manufacturing processes goes hand in hand with product planning and design. Simulation starts to become relevant in this field. Again, modifications that are done during the design process have an impact on the manufacturing process. On the other hand, findings from the planning for manufacturing have to be fed back into the design process. A design may simply be too expensive in manufacturing and thus may have to be changed.
• Marketing & Sales: Marketing starts already during the design phase such that we have an interaction between these two processes. Sales has close relations with financial planning and thus a close relation with the design process.
• Maintenance: Ease of maintenance can be achieved already during the design process. Again simulation can help to improve maintenance.
In the overall process we have a variety of people and processes as well as a variety of software and information. This requires an overall coordination of which simulation is only a small part. This small part, however, has to be integrated in a seamless way. The Grid seems to offer an opportunity to integrate such processes.
3 Grids and workflows

In 1999 the term Grid was introduced by Foster and Kesselman [9] to replace metacomputing on the one hand but also to express that the community had to broaden its horizon. The new concept introduced was aiming at using not only computers but a set of resources to solve large scale problems. This could include scientific instruments as well as software and databases. However, the notion of the Grid as a utility to mainly harvest compute power was still kept [10]. Finally in 2000 a new term was introduced that tries to combine internet technologies and traditional metacomputing by widening the scope of the Grid. E-science [11] - as it was called - was now much more about collaboration and co-operation based on resource sharing than on pure compute power. In industry the term Grid had much less relevance and was only introduced through projects (be it European or national ones in Europe) in the last years. In the field of coupled simulation industry was rather active [12]–[13]. For industry, however, the interplay of people and systems has been a problem in itself for years. Special emphasis has always been given to looking into the creation and transformation of knowledge [14]. The development of Grid software, however, over the last years was driven mainly by science. The key issues that were tackled are [15]:
• Security: Security is considered to be a major concern. The basic assumption is that any middleware has to be able to create a secure environment. This results in attempts to create certificate authorities and authentication mechanisms. Given the complexity of the problem, scientists, however, often resort to the solution of "mutual trust".
• Data Handling: Some of the leading scientific projects in Grid computing evolved from data handling problems [16]–[17]. Typically a huge amount of data related to some specific problem has to be stored and processed. Middleware for data management is able to provide a meta-view of these data and to distribute data across a variety of scientific sites.
• Scheduling: Since the Grid is still widely considered to be mainly a compute Grid, scheduling is one of the main problems tackled. The key factor for optimization is the best usage of distributed resources.
4 Scientific and industrial Grids

The requirements of industry and science are partially the same. But in many senses, and especially when it comes to supercomputing, they differ from each other [1]. Obviously we can find the following problems:
• Security: Mutual trust is a concept that works very well in a research setting. Although scientists do compete, they have established a system of checks that makes sure that misconduct is avoided. Whenever they share their resources, the exchange of information is therefore no problem. This can not be said for industry. Only for pre-competitive research does an exchange of information take place. For competitive research and development a much higher level of security and confidentiality has to be created. Only then will industry have an interest in sharing resources publicly.
• Open/Closed Environments: Science is by definition an open environment. Results are created to be made public. It is the goal of scientists to make results open. For industry, knowledge is an asset in a competition. Results are created to improve the competitiveness of the company. Science hence is aiming for a world-wide Grid that is open - at least potentially - to everybody. Industry is much more interested in intragrids which are open only to those people inside a company that need to have access to the sensitive data.
• Data Management: The main data problem in science is size. Huge amounts of data (in the range of hundreds of Petabytes) are created by scientific instruments and have to be handled by thousands of scientists. These data have to be classified and organized. Simple and fast access to these data has to be provided. The data, however, are uniform. Organization of data is mainly about better handling and is looking at criteria like "date of creation" and "size of connected data blocks". Furthermore, data are typically created once and read often. For industry the problem is a different one. Data that are stored come from a variety of sources. They are classified according to the context in which they were created. Organization is mainly about better combining information and is looking at criteria like "context" and "relevance". Information created is furthermore modified permanently such that it can be classified as "write many - read many".
• Scheduling: For science the purpose of scheduling is to fill resources in an optimum way. Although capability computing is still relevant in a small number of centres (supercomputing), the focus of most organizations is currently to make best use of their capacity. Time to solution or just-in-time simulation are not only of less relevance but are considered to be harmful for the best usage of a system. For industry, time-to-solution and total-cost-of-ownership are the key factors. Optimum scheduling systems should make sure that jobs are run when their solution is needed and that cost effective usage of systems is made.
Science and industry obviously have different requirements and only a few publicly funded projects are able to couple these two worlds.
5 Conclusion In this paper we have extracted the main features of Grids in science and industry. We have worked out the key criteria for success in both fields. Our
findings are that both worlds have rather different requirements. Industry can obviously not make use of many of the tools that are developed in a scientific setting. It is mainly the problems of costs and security that are not sufficiently tackled. Security is a concept that is highly irrelevant for science. It has thus been mainly treated like the password problem in Unix which mainly is there to create a low barrier for entering such systems. Industry will not only need higher barriers but will also need concepts like multi level security (MLS). Furthermore it will need sophisticated techniques for access controls. Cost is a concept that is nearly irrelevant in science. For industry it is a driving factor. That does not imply that industry will always look for the cheapest solution. It will be rather important to get the best solution at the lowest possible costs. As industry is following the rules of a market it will furthermore drive the development of concepts in scheduling that relate costs of computing to speed or performance of a simulation. To summarize we find that Grid is a concept that is important for industry but the problems we find in industry can not be tackled with the solutions offered by research. Looking more carefully at some developments we find that we might rather see an uptake of industrial solutions by research. This development will be driven by the fact that also researchers have to become cost-aware and that security has become such an overwhelming factor in the internet-society that one will be unable in the near future to work without it.
References 1. Resch M (2005) High Performance Computing in engineering and science’. In: Krause E, Shokin YuI, Resch M, Shokina N (eds) Computational Science and High Performance Computing. Notes on Numerical Fluid Mechanics and Multidisciplinary Design 88. Springer, Berlin, Heidelberg, New York 2. TOP 500 list http://www.top500.org 3. Joseph E II, Willard Ch, Goldfarb D, Kaufmann N (2001) Capability market dynamics, part 1: changes in vector supercomputers - will the Cray/NEC partnership change the trend? IDC, March 2001 4. Moore GE (1965) Electronics 38(8):114-117 5. Intel multi-core processor architecture development backgrounder http://www.intel.com/cd/ids/developer/asmo-na/eng/201969. htm?page=1 6. Graham SL, Snir M, Patterson CD (eds) (2004) Getting up to speed: the future of supercomputing. National Academy Press 7. UK e-science web page http://www.rcuk.ac.uk/escience/ 8. Resch MM (2004) inSIDE 2(1) 9. Foster I, Kesselman C (1999) The Grid: blueprint for a new computing infrastructure. Morgan Kaufmann San Francisco/California 10. Foster I, Kesselmann C, Tuecke S (2001) Int J High Perf Comp Appl 15(3):200-222 11. UK E-Science Project http://www.rcuk.ac.uk/escience/ 12. Müller M, Gabriel E, Resch M (2002) Concurr Comp Pract Exp 14: 1543-1551
13. Rieger H, Fornasier L, Haberhauer S, Resch MM (1996) Pilot implementation of an aerospace design system into a parallel user simulation environment. In: Liddell H, Colbrok A, Hertzberger P, Sloot P (eds) High-Performance Computing and Networking. Lecture Notes in Computer Science 1067. Springer Verlag 14. Amann R, Longhitano L, Moggia P, Testa S (2005) The role of a simulation tool in the creation and transformation of knowledge in a company in the automotive industry. 21st EGOS Conf., Freie Universität Berlin 15. Fox G, Walker D (2003) e-science gap analysis. Technical Report UKeS-2003-01, UK e-Science Core Programme http://www.nesc.ac.uk/technical_papers/UKeS-2003-01/GapAnalysis30June03.pdf 16. Hoschek W, Jaen-Martinez J, Samar A, Stockinger H, Stockinger K (2000) Data management in an International Data Grid Project. IEEE/ACM Int. Workshop on Grid Computing Grid'2000, Bangalore, India 17. EU EGEE Project "Enabling Grids for E-sciencE" http://www.eu-egee.org/
Parallel numerical modelling of gas-dynamic processes in airbag combustion chamber

A.D. Rychkov¹, N. Shokina¹,², T. Bönisch², M.M. Resch², and U. Küster²

¹ Institute of Computational Technologies SB RAS, Lavrentiev Ave. 6, 630090 Novosibirsk, Russia [email protected]
² High Performance Computing Center Stuttgart (HLRS), University of Stuttgart, Nobelstraße 19, 70569 Stuttgart, Germany [email protected]
Summary. The current results of the joint project of the Institute of Computational Technologies of the Siberian Branch of the Russian Academy of Sciences (ICT SB RAS, Novosibirsk, Russia) and the High Performance Computing Center Stuttgart (HLRS, Stuttgart, Germany) are presented. The project is realized within the framework of the activities of the German-Russian Center for Computational Technologies and High Performance Computing (http://www.grc-hpc.de). A three-dimensional non-stationary flow in the airbag combustion chamber is numerically simulated. The technology of parallelization of the upwind LU difference scheme is considered. The future perspectives and challenges of the project are outlined.
1 Introduction The constantly increasing number of automobiles leads to the increase of traffic volume and speed regime on roads, making the problem of safety of drivers and passengers the most important one. Nowadays the airbag becomes the most reliable means in the arsenal of safety systems. Most automobile manufactures equip their automobiles with airbags decreasing significantly the number of traffic fatalities. The number of different constructions of airbags and their application schemes increases constantly. The importance of further research and development in that field is shown by the fact, that the Conference "Airbag-2000" (Fraunhofer Institute for Chemical Technology, Pfinztal (Berghausen), Germany) has been regularly held during the last ten years. The Conference is specially devoted to the problems of construction, exploitation and development of airbags. An airbag consists of the special elastic shell, which is made of a gas-proof fabric, connected to the combustion chamber, which is filled with the granules of a solid monopropellant with comparatively low combustion temperature. In initial state the shell is rolled up into a compact roll. After collision of
an automobile with an obstacle the solid propellant ignition system is initiated. The combustion products fill the shell within 60–100 milliseconds, transforming it into an elastic bag. Pyrotechnic gas-generators are widely used as high-output gas sources for filling the elastic airbag shell. Their solid monopropellant compositions produce ecologically safe gaseous combustion products with low temperature. The number of different constructions of such airbags can reach a few tens, from miniature ones, which serve to pretension safety belts, to gas-generators for filling airbags of varying capacity, which can change, for example, according to the weight of the passenger or driver. In their development these gas-generators follow a path similar to that of solid-propellant rocket engines: from purely experimental investigation using model facilities and test benches to the development of computer modelling systems, which allow a significant decrease of gas-generator design time and expenses as well as an improvement of quality. Nowadays zero-dimensional (balance) models are mainly used for the numerical modelling. Multidimensional effects are taken into account with the help of different subsidiary coefficients, which are obtained from experiments. The composition of the combustion products is defined on the basis of thermodynamic calculations. The calculation of the airbag shell filling process is very often limited to the modelling of the filling of a stiff cylindrical vessel of the same volume as the filled shell. However, the complication of the ignition schemes of airbag gas-generators and of their constructions and the search for new promising compositions of solid propellants lead to the necessity of a deeper investigation of the following phenomena:
• the process of ignition and combustion of granular propellants at their high bulk density in the combustion chamber, where the ignition of granules is done by a high-speed flame jet containing fine solid particles with high temperature;
• the formation of a finely dispersed solid phase in the combustion products, its movement over the combustion chamber and interaction with the shell walls during the filling of the shell;
• the dynamics of the filling process of the shell and the determination of its dynamic strength.
Therefore, it is necessary to develop physical and mathematical models of different complexity levels, conduct model and full-scale tests and summarize the obtained results in the form of different criterion dependencies, practical recommendations and application software. The full mathematical modelling of a vehicle safety system based on traditional schemes of solid-propellant gas-generators leads to the detailed investigation of the following processes:
1. performance of the ignition unit (booster);
2. gradual ignition of a solid propellant charge taking into account the transient character of combustion and the erosive burning effect;
3. three-dimensional flow of combustion products in the working chamber of an airbag;
4. processes of heat exchange and possible mass exchange with elements of the gas-generator construction, including the developed surface of the cooling system for the combustion products;
5. movement and sedimentation of finely dispersed condensed components of the combustion products;
6. combustion of the granular bulk charge; calculation of its combustion rate and composition; evaluation of the environmental safety of using certain propellant compositions;
7. deployment, filling and supercharge of an airbag shell;
8. heat exchange processes of the combustion products with the material of an airbag shell;
9. dynamic strength analysis of an airbag shell during its filling by the combustion products;
10. the process of interaction of a human head with an airbag shell within a simplified kinematic scheme in order to determine the force effects on a human being and their consequences.
Mathematical models of the mentioned problems are multidimensional systems of partial differential equations. Their solution requires the application of advanced numerical methods based on parallel computations using high performance computer facilities. Only in this case does it become possible to carry out large volumes of parametric investigations, which allow making a valid choice of the optimal constructions of existing and prospective airbags. The following steps for the numerical modelling of the airbag combustion chamber are suggested as the first stage:
1. spatial (three-dimensional Navier-Stokes equations) modelling of the combustion product flow of the ignition compositions during the initial stage of airbag operation in order to solve adequately the problem of the propagation of the ignition zone over the granular bulk charge of solid propellant in the airbag combustion chamber;
2. investigation of the influence of finely dispersed condensed particles contained in the igniter combustion products on the process of propellant granule ignition;
3. modelling of granule ignition by the flame jet from the booster. To this end, the problem of the ignition of a spherical granule by a stream of hot booster combustion products, and of a system of such granules, which simulates the filtration ignition and combustion conditions in the airbag combustion chamber, will be considered.
The current results of the joint project of the Institute of Computational Technologies of the Siberian Branch of the Russian Academy of Sciences (ICT SB
RAS, Novosibirsk, Russia) and the High Performance Computing Center Stuttgart (HLRS, Stuttgart, Germany) are presented. The project is realized within the framework of the activities of the German-Russian Center for Computational Technologies and High Performance Computing (http://www.grc-hpc.de). The technology of parallelization of the upwind LU difference scheme is considered. The future perspectives and challenges of the project are outlined.
2 Physical model

The airbag combustion chamber (see Fig. 1) represents a cylinder. The igniter (booster) is placed in the central part of the cylinder and consists of granules of spherical form. The combustion products of these granules contain solid particles in sufficiently high concentration, which serve to intensify the ignition process of the main bulk of the propellant granules.
Fig. 1. The scheme of the airbag combustion chamber
The propellant granules, filling the remaining volume of the combustion chamber, have a cylindrical form. The granule composition is selected so as to ensure, on the one hand, a high burning rate and, on the other hand, a relatively low temperature of the combustion products and the absence of high concentrations of substances dangerous to human health. Taking into account the complexity of the processes involved and the impossibility of their detailed description, the following main assumptions are used.
1. The flow is spatial and non-stationary.
2. The modelling is done within the framework of a continuum model: all main components (booster, propellant granules, combustion products) are
considered as three interpenetrating media with their own velocities and temperatures. Mass, momentum and energy are exchanged between the media.
3. The chemical reaction rates are sufficiently large and the combustion processes are completed near the surface of the propellant element. This allows describing these processes by source terms in the equations of mass balance and energy balance.
4. The propellant granule form is assumed to be spherical. The deviations of the real cylindrical form from a sphere are taken into account by a form coefficient in the resistance laws.
5. The propellant granules are considered to be immovable during the device operation. The number of granules per unit volume (number density) remains constant and is defined from the initial conditions of granule loading into the combustion chamber. The temperature inside the granules during their heating up to ignition is obtained from the solution of the heat conduction problem for the equivalent sphere.
6. The model of combustion with a constant surface temperature is used [1]. It is assumed that granule ignition occurs when the granule surface temperature reaches some empirically given value, called the ignition temperature. The combustion rate is also defined by an empirical formula, which takes into account the dependence of the combustion rate on the pressure in the combustion chamber. The heat generated by granule combustion goes into warming up the combustion products; part of this heat returns to the granule through the heat conduction mechanism.
7. The combustion products (carrier gas) are a perfect gas with a constant ratio of specific heats. Contrary to our previous work [2], the carrier gas flow is described by the averaged Navier-Stokes equation system for a compressible non-stationary turbulent three-dimensional gas flow with a constant composition, closed by the k–ε turbulence model. The work of the friction force and of the pressure force is not taken into account in the energy equation due to the low flow velocity. The conductive heat transfer between burning granules is also neglected.
8. The initiation of booster granule ignition occurs due to the delivery of high-temperature combustion products of the igniter (black gunpowder) through the upper boundary of the domain filled with the booster. The booster combustion products are a two-phase medium consisting of gas and solid finely dispersed particles. The size of the particles is sufficiently small, therefore the motion of such a two-phase medium can be considered to be in equilibrium.
3 Mathematical model

The system of the averaged Navier-Stokes equations for a compressible non-stationary turbulent three-dimensional gas flow with a constant composition, closed by the k–ε turbulence model, has the following vector form in Cartesian coordinates:
∂Q/∂t + ∂R/∂x + ∂F/∂y + ∂G/∂z = H,   (1)

where Q = (ρ, ρu, ρv, ρw, ρE, ρk, ρε)ᵀ, p = (R₀/M) ρT, R₀ is the universal gas constant, M is the molecular gas weight,

R = ( ρu, ρu² + p − τ_xx, ρuv − τ_xy, ρuw − τ_xz, u(ρE + p) − uτ_xx − vτ_xy − wτ_xz − q_xx, ρuk − τ_xk, ρuε − τ_xε )ᵀ,
F = ( ρv, ρuv − τ_yx, ρv² + p − τ_yy, ρvw − τ_yz, v(ρE + p) − uτ_yx − vτ_yy − wτ_yz − q_yy, ρvk − τ_yk, ρvε − τ_yε )ᵀ,
G = ( ρw, ρuw − τ_zx, ρvw − τ_zy, ρw² + p − τ_zz, w(ρE + p) − uτ_zx − vτ_zy − wτ_zz − q_zz, ρwk − τ_zk, ρwε − τ_zε )ᵀ,
H = ( 0, 0, 0, 0, 0, Ψ − ρε, (C₁f₁Ψ − C₂f₂ρε) ε/k ),

τ_xy = τ_yx = µ_e( ∂u/∂y + ∂v/∂x ),   τ_xz = τ_zx = µ_e( ∂u/∂z + ∂w/∂x ),   τ_yz = τ_zy = µ_e( ∂v/∂z + ∂w/∂y ),
τ_xx = 2µ_e ∂u/∂x − (2/3)µ_e( ∂u/∂x + ∂v/∂y + ∂w/∂z ),
τ_yy = 2µ_e ∂v/∂y − (2/3)µ_e( ∂u/∂x + ∂v/∂y + ∂w/∂z ),
τ_zz = 2µ_e ∂w/∂z − (2/3)µ_e( ∂u/∂x + ∂v/∂y + ∂w/∂z ),
q_xx = λ_e ∂T/∂x,   q_yy = λ_e ∂T/∂y,   q_zz = λ_e ∂T/∂z,
τ_xk = ( µ + µ_t/σ_k ) ∂k/∂x,   τ_yk = ( µ + µ_t/σ_k ) ∂k/∂y,   τ_zk = ( µ + µ_t/σ_k ) ∂k/∂z,
τ_xε = ( µ + µ_t/σ_ε ) ∂ε/∂x,   τ_yε = ( µ + µ_t/σ_ε ) ∂ε/∂y,   τ_zε = ( µ + µ_t/σ_ε ) ∂ε/∂z,

Ψ = µ_t ( τ_xx ∂u/∂x + τ_yy ∂v/∂y + τ_zz ∂w/∂z + ( ∂u/∂y + ∂v/∂x )² + ( ∂u/∂z + ∂w/∂x )² + ( ∂v/∂z + ∂w/∂y )² ) − (2/3) ρk ( ∂u/∂x + ∂v/∂y + ∂w/∂z ),

C₁ = 1.44,   C₂ = 1.92,   f₁ = 1,   f₂ = 1 − 0.22 exp( −R_t/36 ),   R_t = ρk²/(µε),
σ_k = 1,   σ_ε = 1.3,   C_µ = 0.09,   µ_e = µ + µ_t,   λ_e = λ + µ_t C_p/Pr_t,   µ_t = C_µ f_µ ρk²/ε,
f_µ = ( 1 − exp( a₁R_k + a₂R_k³ + a₃R_k⁵ ) )^{1/2},   a₁ = −1.5·10⁻⁴,   a₂ = −1.0·10⁻⁹,   a₃ = −5·10⁻¹⁰,   R_k = ρ k^{1/2} y_n / µ,
where ρ is the density; u, v, w are the Cartesian velocity components; p is the pressure; T is the temperature; k is the turbulent kinetic energy; ε is the dissipation rate of turbulent kinetic energy; E = e + (u² + v² + w²)/2 is the total internal energy per unit volume; e is the internal energy, e = C_v T for a perfect gas with a constant composition; C_v and C_p are the constant volume and constant pressure specific heats; y_n is the distance to the nearest wall; µ is the molecular viscosity; λ is the heat conductivity.
4 Initial and boundary conditions

At t = 0 the gas phase velocity is equal to zero everywhere in the computational domain; the gas phase pressure is equal to the pressure p₀ of the surrounding environment; the temperature of each phase is equal to the temperature T₀ of the surrounding environment. At the inlet, for t < t_ig the igniter operation is modelled as hot gas input through the left boundary of the booster combustion chamber: ρu = J_ig(t), v = w = 0, T = T_ig, where t_ig is the igniter operation time, J_ig is the specific mass flow and T_ig is the temperature of the igniter; for t ≥ t_ig the following conditions are specified: u = v = w = 0, ∂T/∂x = 0. At the impermeable surfaces of the computational domain the no-slip conditions (the gas velocity vector is equal to zero) and the heat insulation condition for the temperature are set: u = v = w = 0, ∂T/∂n = 0. If at the outlet (the slot) the gas flow is subsonic, then the non-reflective boundary condition is specified. If the gas flow is supersonic, then no conditions are set.
5 Numerical solution The finite volume method using the second or third order upwind LU difference scheme with TVD-properties [3] is applied for solving numerically the system of equations (1).
For this purpose the derivative with respect to pseudo-time ∂Q/∂τ is added to the left-hand side of the system (1). The iterative process is organized with respect to this derivative on each time level. The system is linearized according to Newton's method and written down in the so-called "delta"-form. This means:
• the linearization is carried out with respect to the variables on the n-th time level, and the dependence of the Jacobi matrices on these variables is not taken into account;
• only the terms responsible for the first order of approximation for the non-viscous terms, and the terms which approximate the repeated derivatives with respect to the corresponding directions for the viscous terms, are kept in the left-hand side of the obtained system.
The linearized system is written down in the following form:
( V_{i,j,k}/Δτ + (3/2) V_{i,j,k}/Δt + Ã⁻_{i+1/2} Δ_{i+1/2} + Ã⁺_{i−1/2} Δ_{i−1/2} + Ã⁻_{j+1/2} Δ_{j+1/2} + Ã⁺_{j−1/2} Δ_{j−1/2} + Ã⁻_{k+1/2} Δ_{k+1/2} + Ã⁺_{k−1/2} Δ_{k−1/2} ) Δψ^{s+1} =
= − ( ( 3Q^{n+1,s}_{i,j,k} − 4Q^n_{i,j,k} + Q^{n−1}_{i,j,k} ) / (2Δt) · V_{i,j,k} + RHS^{n+1,s}_{i,j,k} ),   (2)

where

Δψ^{s+1} = Q^{n+1,s+1} − Q^{n+1,s},

RHS_{i,j,k} = − ( (R·S)_{i+1/2} − (R·S)_{i−1/2} + (F·S)_{j+1/2} − (F·S)_{j−1/2} + (G·S)_{k+1/2} − (G·S)_{k−1/2} ) + V_{i,j,k} H_{i,j,k},

S_{i±1/2}, S_{j±1/2}, S_{k±1/2} are the face areas of the volume V_{i,j,k}; (R·S)_{i±1/2}, (F·S)_{j±1/2}, (G·S)_{k±1/2} are the corresponding total (viscous and non-viscous) difference fluxes through these faces; s and s + 1 are pseudo-time steps. After the iterations with respect to the pseudo-time have converged, the left-hand side of the system (2) goes to zero due to Δψ^{s+1} = 0. Therefore, the system of difference equations (2) approximates the system (1) on the (n + 1)-th time level with the second order in time and with the second or third order in space, depending on the approximation of the difference fluxes through the faces. A method based on LU-factorization is applied in order to solve the difference equations (2). In order to avoid the inversion of matrices during the realization of (2), the structure of the matrices in the left-hand side of (2) is simplified beforehand by converting them to diagonal ones. Naturally, the convergence rate slightly decreases, but the simplicity of the matrix structure allows reducing the solution of (2) to successive scalar recursive calculations of a running-calculation type. This increases the efficiency of the solution of (2), especially when using parallel computers.
Suppressing all unessential technical details of these transformations, let us write the LU algorithm for solving (2), which is realized in two stages.

L-path (sequential increase of the indices i, j, k):

Δψ*_{i,j,k} = B⁻¹ ( − ( ( 3Q^{n+1,s}_{i,j,k} − 4Q^n_{i,j,k} + Q^{n−1}_{i,j,k} ) / (2Δt) · V_{i,j,k} + RHS^{n+1,s}_{i,j,k} ) + Ã⁺_{i−1/2} Δψ*_{i−1,j,k} + Ã⁺_{j−1/2} Δψ*_{i,j−1,k} + Ã⁺_{k−1/2} Δψ*_{i,j,k−1} ).   (3)

U-path (sequential decrease of the indices i, j, k):

Δψ^{s+1}_{i,j,k} = Δψ*_{i,j,k} − B⁻¹ ( Ã⁻_{i+1/2} Δψ*_{i+1,j,k} + Ã⁻_{j+1/2} Δψ*_{i,j+1,k} + Ã⁻_{k+1/2} Δψ*_{i,j,k+1} ).   (4)

The matrix B is also diagonal, and its inversion does not cause a problem. Therefore, the realization of the algorithm (3)–(4) is particularly effective for cluster computing systems, where each processor runs its own independent program, works with distributed memory, and has the possibility of information interchange with other processors. Let us consider a possible realization of parallel computations using such a multiple-processor computing system. Let us consider, for simplicity, the two-dimensional case. The algorithm (3)–(4) is written as follows:
Δψ*_{i,j} = B⁻¹ ( − ( ( 3Q^{n+1,s}_{i,j} − 4Q^n_{i,j} + Q^{n−1}_{i,j} ) / (2Δt) · V_{i,j} + RHS^{n+1,s}_{i,j} ) + Ã⁺_{i−1/2} Δψ*_{i−1,j} + Ã⁺_{j−1/2} Δψ*_{i,j−1} ),   (5)

Δψ^{s+1}_{i,j} = Δψ*_{i,j} − B⁻¹ ( Ã⁻_{i+1/2} Δψ*_{i+1,j} + Ã⁻_{j+1/2} Δψ*_{i,j+1} ).   (6)
The solution domain is a rectangle, therefore, it is necessary to choose the direction, which has the largest number of computational cells, as the calculation direction in order to have the better processor loading. Let us take the index i as an example. The solution of the equations (5) can be reduced to the series of solutions of one-dimensional problems, which are realized by the "running calculation" method. Here the values of ∆ψi∗−1, j , ∆ψi,∗ j−1 are obtained from the boundary conditions of the problem as symmetric or non-symmetric values in additional fictitious cells. The integration domain is cut in strips, each of them is given to one processor. The stripes are numbered sequentially and they are bound to the process with the respective number (rank). The processor with the rank 0
starts with the calculation. If the number of available processors during a parallel execution is large enough, then the stripe size is only one mesh cell. After the first processor (rank=0) has calculated the value of ∆ψ∗0,0 (the values of all matrices and other variables are calculated for the lower iterative level s, their calculation does not cause a problem), the processor with the rank 1 starts. When this processor has calculated the value ∆ψ∗0,1 , the processor with the rank 2 starts and so on. The general structure of calculations has the stepped configuration with the lag of one computational cell along the index j (see Fig. 2).
Fig. 2. The usage of processors
Fig. 3. The application of the ring topology for the better processor loading
The sequential idle time of the processors starts from the moment when the first processor (rank = 0) reaches the right boundary, as it is necessary to calculate all values Δψ*_{i,j} for the realization of the U-path (6). When applying this parallelization scheme using MPI, it is important to choose the optimal number of processors, as an increase of this number causes, on the one hand, a decrease of the calculation time and, on the other hand, an increase of the interprocessor exchange expenses. Using the ring topology [4], it is possible to reach better processor loading with a simultaneous decrease of the number of interprocessor exchanges (Fig. 3). In this case, the domain of variation of the index j is divided into several parts (sections), and the calculation starts from the first part 0 ≤ j ≤ N₀. Within this section the processors are distributed, as before, one per index j, and the calculation is organized as described above. When the first processor reaches the right boundary, it switches to the calculation of the lower boundary of the second section. All necessary data have already been prepared by the N₀-th processor of the first section. When the other processors of the first section reach the right boundary, they switch to the calculation of their j-lines in the second section N₀ + 1 ≤ j ≤ N₁ and so on, until all values Δψ*_{i,j} are calculated. The realization of the U-path for calculating the values Δψ^{s+1}_{i,j} is done in the same manner, with the only difference that it starts from the upper right computational cell.
In the three-dimensional case the main calculation direction is also chosen. The calculation is reduced to the series of one-dimensional realizations of "running calculation". The described parallelization technology can be applied to other problems.
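The pipelined ("running calculation") L-sweep described above can be illustrated by a minimal MPI sketch. It is an illustration only, under simplifying assumptions made here: one j-line per rank, a scalar unknown in place of the vector Δψ, constants in place of B⁻¹ and Ã⁺, and an assumed grid size NI; it is not the authors' code.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NI 64                 /* assumed number of cells in the i (sweep) direction */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *psi = calloc(NI + 1, sizeof(double)); /* psi[i] ~ dpsi*(i, j = rank)      */
    double below = 0.0;                           /* dpsi*(i, j-1) received from below */

    for (int i = 1; i <= NI; ++i) {
        /* stepped pipeline: wait for the cell below (boundary value on rank 0) */
        if (rank > 0)
            MPI_Recv(&below, 1, MPI_DOUBLE, rank - 1, i, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        /* stand-in for B^{-1}(RHS + A~ dpsi*(i-1,j) + A~ dpsi*(i,j-1)) */
        psi[i] = 0.5 * (psi[i - 1] + below) + 1.0;
        /* pass the freshly computed cell to the rank above, one cell behind us */
        if (rank < size - 1)
            MPI_Send(&psi[i], 1, MPI_DOUBLE, rank + 1, i, MPI_COMM_WORLD);
    }

    printf("rank %d finished L-sweep, psi[NI] = %f\n", rank, psi[NI]);
    free(psi);
    MPI_Finalize();
    return 0;
}
```

Each rank starts one cell behind its lower neighbour, reproducing the stepped pattern of Fig. 2; in the ring-topology variant of Fig. 3 a rank that reaches the right boundary would be reused for the next block of j-lines.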
6 Perspectives and challenges The present paper describes the physical model and the numerical modelling of an airbag combustion chamber. It points out which steps have to be taken to the modelling of a full airbag deployment. As the calculation of a combustion process with the current model is already calculation time intensive, high performance computing systems are considered as target platforms. Within the next stages of the project the following topics will be covered: • the verification and further improvement of the physical model also using experimental data; • the extension of the simulation to the deployment process of an airbag shell including fluid-structure coupling and interaction of reaction products and the shell; • the analysis of the parallel code and its optimization for its usage on large PC clusters and vector systems. The final goal of the project is to provide a numerical tool, which can be used for the development and improvement of current and future airbag systems.
References 1. Zeldovich YaB, Leipunskii OI, Librovich VB (1975) Theory of non-stationary gunpowder combustion. Nauka, Moscow (in Russian) 2. Rychkov A, Shokina N, Miloshevich H (2003) 3-D modeling of ignition and combustion processes in combustion chamber of automobile airbag. In: Brebbia CA, Carlomagno GM, Anagnostopoulos P (eds) Computational Methods and Experimental Measurements XI. Halkidiki 2003, WIT Press, Southampton, Boston 3. Yoon S, Jameson A (1987) An LU-SSOR scheme for Euler and Navier-Stokes equations. AIAA 87-600, January 4. Korneev VD (2003) Parallel programming in MPI. Institute of Computer Science, Moscow-Izhevsk (in Russian)
The parallel realization of the finite element method for the Navier-Stokes equations for a viscous heat conducting gas E.D. Karepova, A.V. Malyshev, V.V. Shaidurov, and G.I. Shchepanovskaya Institute of Computational Modelling SB RAS, Academgorodok, Krasnoyarsk, Russia [email protected] Summary. A boundary value problem for the Navier-Stokes equations for a viscous heat conducting gas in a finite computational domain is considered. The space approximation is constructed with the use of the Bubnov-Galerkin method combined with the method of lines. The parallel realization of this method is discussed for multiprocessor computational system.
1 Introduction This paper deals with the numerical solution of a boundary value problem for the Navier-Stokes equations for a viscous heat conducting gas. The space approximation of the two-dimensional Navier-Stokes problem by the finite element method is considered. Notice that a feature of the formulation of the problem used here is that boundary conditions on the boundary of a computational domain relate derivatives of velocity to pressure. These boundary conditions are natural for the variational (integral) formulation, i.e., they do not impose additional conditions on subspaces of trial and test functions as opposed to main boundary conditions (of the Dirichlet type). Moreover, they are "nonreflecting" since they do not distort propagation of local perturbations of velocities outside of the computational domain and have no influence on the values of velocities inside the domain. To construct the space approximation, the Bubnov-Galerkin method combined with the method of lines is used. For a space of trial and test functions, a space of functions being piecewise bilinear on square meshes is used. For calculation of integrals over an elementary domain the quadrature formulae of the trapezoid method and of its two-dimensional analogue as the Cartesian product are applied.
This work was supported by Russian Foundation of Basic Research (grants No. 05-01-00579, 05-07-90201)
As a result, we obtain a system of ordinary differential equations in time with respect to four vectors which consist of the values of density, velocities, and energy at the nodes of a square grid and depend on time. Then we discuss the parallel realization of this method on the multiprocessor computational system MCS-1000/16 with 16 processors under MPI.
2 Problem formulation

Let Ω = (0, 1) × (0, 1) be a bounded (computational) domain in R² with the boundary Γ. Let also (0, t_fin) be the time interval. Consider the problem on a nonstationary flow of a viscous heat conducting gas in the following form. In the cylinder (0, t_fin) × Ω we write four equations in the unknowns ρ, u, v, e which differ from the standard ones by linear combinations of the equations (2) and (3) with (1) in order to simplify the variational formulation to be considered:

∂ρ/∂t + ∂(ρu)/∂x + ∂(ρv)/∂y = 0,   (1)

ρ ∂u/∂t + (u/2) ∂ρ/∂t + ρu ∂u/∂x + (u/2) ∂(ρu)/∂x + ρv ∂u/∂y + (u/2) ∂(ρv)/∂y + ∂P/∂x − ∂τ_xx/∂x − ∂τ_xy/∂y = 0,   (2)

ρ ∂v/∂t + (v/2) ∂ρ/∂t + ρu ∂v/∂x + (v/2) ∂(ρu)/∂x + ρv ∂v/∂y + (v/2) ∂(ρv)/∂y + ∂P/∂y − ∂τ_xy/∂x − ∂τ_yy/∂y = 0,   (3)

∂(ρe)/∂t + ∂(ρeu)/∂x + ∂(ρev)/∂y + P( ∂u/∂x + ∂v/∂y ) = −∂q_x/∂x − ∂q_y/∂y + Φ.   (4)
Here we use the following notations: ρ(t, x, y) is the density; e(t, x, y) is the internal energy of unit mass; u(t, x, y), v(t, x, y) are the components of the velocity vector; P(t, x, y) is the pressure; τ_xx, τ_xy, τ_yy are the components of the stress tensor given by the formulae

τ_xx = (2/3) µ ( 2 ∂u/∂x − ∂v/∂y ),   τ_yy = (2/3) µ ( 2 ∂v/∂y − ∂u/∂x ),   τ_xy = µ ( ∂u/∂y + ∂v/∂x );   (5)

µ(t, x, y) = (1/Re) µ*(t, x, y), where µ* is the dynamic coefficient of viscosity:

µ* = ( (γ − 1) γ M²_∞ e )^ω,   0.76 ≤ ω ≤ 0.9;   (6)
(q_x, q_y) are the components of the heat flux density vector given by the formulae

q_x(t, x, y) = −(γ/Pr) µ ∂e/∂x,   q_y(t, x, y) = −(γ/Pr) µ ∂e/∂y;   (7)
Re is the Reynolds number; Pr is the Prandtl number; M_∞ is the Mach number; γ is a gas constant. The equation of state has the form

P = (γ − 1) ρ e.   (8)
The dissipative function Φ will be considered in one of the following forms:

Φ = µ ( (2/3)(∂u/∂x)² + (2/3)(∂v/∂y)² + (∂u/∂y + ∂v/∂x)² + (2/3)(∂u/∂x − ∂v/∂y)² ) =   (9)

  = µ ( (4/3)( (∂u/∂x)² − (∂u/∂x)(∂v/∂y) + (∂v/∂y)² ) + (∂v/∂x + ∂u/∂y)² ).   (10)

From (9) it follows that Φ is nonnegative. Denote the unit outer normal vector to Γ at a point (x, y) by n(x, y) = (n_x(x, y), n_y(x, y)). To specify boundary conditions for the equation of continuity (1) we dwell on the case where the flow across Γ, defined by the vector u = (u, v), is directed outward from Ω, i.e.,

u · n ≥ 0   on [0, t_fin] × Γ.   (11)
In this case the characteristics of the equation (1) on the boundary [0, t f in ] × Γ are directed outward the domain (0, t f in ) × Ω and in order for the problem to be well-posed there is no need for boundary conditions for ρ. To close the problem on the boundary Γ of the computational domain Ω we consider the following boundary conditions which are natural in a variational sense:
τxx nx + τxy ny = P nx − Pext nx  on (0, t_fin) × Γ,   (12)
τyy ny + τxy nx = P ny − Pext ny  on (0, t_fin) × Γ,   (13)
where Pext(t, x, y) is a given external pressure on the boundary of the computational domain. On the inflow part of the boundary Pext is indeed known and equals the pressure of the unperturbed medium. On the outflow part of the boundary we take Pext to be the pressure value carried over from the previous time level. Boundary conditions of this type are natural for the variational formulation of the problem, i.e., they do not impose additional requirements on the spaces of trial and test functions, as opposed to main boundary conditions
(for example, when u and v are given on the boundary). Besides, from the computational point of view these boundary conditions are nonreflecting, i.e., they allow perturbations of the functions u and v to pass through the computational boundary Γ leaving their values inside the domain unaffected. For the energy equation (4) we consider the Neumann boundary conditions:
∇e · n = 0  on Γ.   (14)
Initial conditions are taken in the form

ρ(0, x, y) = ρ0(x, y),  u(0, x, y) = u0(x, y),  v(0, x, y) = v0(x, y),  e(0, x, y) = e0(x, y)  on Ω.   (15)
Notice that in linearization of the equations or their time approximations we will use different approximations of second-order terms like ρu. To distinguish between coefficients and main unknowns, we denote u and v in the coefficients of (2) – (4) by a and b respectively.
3 Space discretization
In this section we construct the space discretization of the system (1) – (4) using the method of lines. Before this we substitute the unknown nonnegative function ρ by σ = √ρ in equation (1) [4]:

∂σ/∂t + (1/2) u ∂σ/∂x + (1/2) ∂(σu)/∂x + (1/2) v ∂σ/∂y + (1/2) ∂(σv)/∂y = 0.   (16)
In the following this simplifies the analysis, since the trial space L1(Ω) for ρ is replaced by the Hilbert space L2(Ω) for σ. Along with the differential formulation, we will use integral identities which follow from it. To this end, we multiply the equations (16), (2) – (4) by an arbitrary function w ∈ W2¹(Ω) and integrate them over Ω using integration by parts. Taking into account the boundary conditions (12) – (14), we arrive at the following identities:

∫_Ω [ (∂σ/∂t) w + (1/2) u (∂σ/∂x) w − (1/2) σu ∂w/∂x + (1/2) v (∂σ/∂y) w − (1/2) σv ∂w/∂y ] dΩ + (1/2) ∫_Γ σ w u·n dΓ = 0,   (17)
∫_Ω ( ρ ∂u/∂t + (u/2) ∂ρ/∂t ) w dΩ + (1/2) ∫_Ω ( −ρ a u ∂w/∂x + ρ a (∂u/∂x) w ) dΩ + (1/2) ∫_Ω ( −ρ b u ∂w/∂y + ρ b (∂u/∂y) w ) dΩ +
+ (1/2) ∫_Γ ( ρ a u w dy − ρ b u w dx ) + ∫_Ω ( τxx ∂w/∂x + τxy ∂w/∂y ) dΩ = ∫_Ω P ∂w/∂x dΩ − ∫_Γ Pext w dy,   (18)

∫_Ω ( ρ ∂v/∂t + (v/2) ∂ρ/∂t ) w dΩ + (1/2) ∫_Ω ( −ρ a v ∂w/∂x + ρ a (∂v/∂x) w ) dΩ + (1/2) ∫_Ω ( −ρ b v ∂w/∂y + ρ b (∂v/∂y) w ) dΩ +
+ (1/2) ∫_Γ ( ρ a v w dy − ρ b v w dx ) + ∫_Ω ( τxy ∂w/∂x + τyy ∂w/∂y ) dΩ = ∫_Ω P ∂w/∂y dΩ + ∫_Γ Pext w dx,   (19)

∫_Ω ( ∂(ρe)/∂t w − (eρa) ∂w/∂x − (eρb) ∂w/∂y − qx ∂w/∂x − qy ∂w/∂y ) dΩ + ∫_Γ ( eρa w dy − eρb w dx ) =
= ∫_Ω ( −P (∂u/∂x + ∂v/∂y) + Φ ) w dΩ.   (20)
In order to pass to grid analogues, we introduce a uniform grid xi = ih, y j = jh, i, j = 0, ±1, ±2, . . . with a mesh size h = 1/n and integer n ≥ 2. Denote the set of nodes of the domain Ω¯ by
Ω̄h = { zij = (xi, yj),  i, j = 0, 1, . . . , n },   (21)
the sets of interior and boundary nodes by
Ωh = { zi j = ( xi , y j ), i, j = 1, 2, . . . , n − 1},
Γh = { zi j = ( xi , y j ) ∈ Ω¯ h ∩ Γ }
respectively. As a result, the computational domain Ω¯ is subdivided into n2 square meshes ωi j = ( xi , xi+1 ) × ( y j , y j+1 ), i, j = 0, 1, . . . , n − 1. For each node zi j ∈ Ω¯ we introduce the basis function ϕi j which equals one at zi j , equals zero at the other nodes of Ω¯ h and is bilinear on each mesh:
ϕi,j(x, y) = (1 − |xi − x|/h)(1 − |yj − y|/h)  for (x, y) ∈ [xi−1, xi+1] × [yj−1, yj+1],
ϕi,j(x, y) = 0  otherwise.   (22)

Denote the span of these functions by

H^h = span{ ϕi,j }_{i,j=0}^{n}.   (23)
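For illustration only, the evaluation of such a bilinear "hat" basis function can be sketched as follows; the function name and signature below are chosen for this note and are not taken from the authors' code.

#include <cmath>

// Minimal sketch: value of the bilinear basis function phi_{i,j} of (22)
// on the uniform grid x_i = i*h, y_j = j*h.
double phi(int i, int j, double x, double y, double h)
{
    double dx = std::fabs(i * h - x) / h;    // |x_i - x| / h
    double dy = std::fabs(j * h - y) / h;    // |y_j - y| / h
    if (dx >= 1.0 || dy >= 1.0) return 0.0;  // outside the support
    return (1.0 - dx) * (1.0 - dy);
}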
With the use of the introduced notations, we formulate the Bubnov-Galerkin method for each of the equations of the system (16), (2) – (4). Find the function σ^h(t, x, y):

σ^h(t, x, y) = Σ_{i,j=0}^{n} α^(σ)_{i,j}(t) ϕ_{i,j}(x, y)   (24)

such that

a_σ(σ^h, w^h) = 0  ∀ w^h ∈ H^h   (25)

with the bilinear form

a_σ(σ^h, w^h) = ∫_Ω [ (∂σ^h/∂t) w^h + (1/2) u (∂σ^h/∂x) w^h − (1/2) σ^h u ∂w^h/∂x + (1/2) v (∂σ^h/∂y) w^h − (1/2) σ^h v ∂w^h/∂y ] dΩ + (1/2) ∫_Γ σ^h w^h u·n dΓ.   (26)
Find the functions u^h, v^h:

u^h = Σ_{i,j=0}^{n} α^(u)_{i,j}(t) ϕ_{i,j},  v^h = Σ_{i,j=0}^{n} α^(v)_{i,j}(t) ϕ_{i,j}   (27)

such that

a_u(u^h, v^h, w^h) = (f_u, w^h)  ∀ w^h ∈ H^h,   (28)
a_v(u^h, v^h, w^h) = (f_v, w^h)  ∀ w^h ∈ H^h   (29)

with the bilinear forms
a_u^h(u^h, v^h, w^h) = ∫_Ω ( ρ ∂u^h/∂t + (u^h/2) ∂ρ/∂t ) w^h dΩ +
+ (1/2) ∫_Ω ( −ρ a u^h ∂w^h/∂x + ρ a (∂u^h/∂x) w^h ) dΩ + (1/2) ∫_Ω ( −ρ b u^h ∂w^h/∂y + ρ b (∂u^h/∂y) w^h ) dΩ +
+ (2/3) ∫_Ω µ ( 2 ∂u^h/∂x − ∂v^h/∂y ) ∂w^h/∂x dΩ + ∫_Ω µ ( ∂u^h/∂y + ∂v^h/∂x ) ∂w^h/∂y dΩ +
+ (1/2) ∫_Γ ( ρ a u^h w^h dy − ρ b u^h w^h dx )   (30)

and

a_v^h(u^h, v^h, w^h) = ∫_Ω ( ρ ∂v^h/∂t + (v^h/2) ∂ρ/∂t ) w^h dΩ +
+ (1/2) ∫_Ω ( −ρ a v^h ∂w^h/∂x + ρ a (∂v^h/∂x) w^h ) dΩ + (1/2) ∫_Ω ( −ρ b v^h ∂w^h/∂y + ρ b (∂v^h/∂y) w^h ) dΩ +
+ ∫_Ω µ ( ∂u^h/∂y + ∂v^h/∂x ) ∂w^h/∂x dΩ + (2/3) ∫_Ω µ ( 2 ∂v^h/∂y − ∂u^h/∂x ) ∂w^h/∂y dΩ +
+ (1/2) ∫_Γ ( ρ a v^h w^h dy − ρ b v^h w^h dx ).   (31)

The linear forms in (28) and (29) are defined by the equalities

(f_u, w^h) = ∫_Ω P ∂w^h/∂x dΩ − ∫_Γ Pext w^h dy   (32)

and

(f_v, w^h) = ∫_Ω P ∂w^h/∂y dΩ + ∫_Γ Pext w^h dx,   (33)

respectively.
Find the function e^h:

e^h = Σ_{i,j=0}^{n} α^(e)_{i,j}(t) ϕ_{i,j},   (34)

such that

a_e(e^h, w^h) = (f_e, w^h)  ∀ w^h ∈ H^h   (35)

with the bilinear form

a_e^h(e^h, w^h) = ∫_Ω ( ∂(ρe^h)/∂t w^h − (e^h ρ a) ∂w^h/∂x − (e^h ρ b) ∂w^h/∂y − qx ∂w^h/∂x − qy ∂w^h/∂y ) dΩ + ∫_Γ ( e^h ρ a w^h dy − e^h ρ b w^h dx )   (36)

and the linear form

(f_e, w^h) = ∫_Ω ( −P (∂u/∂x + ∂v/∂y) + Φ ) w^h dΩ.   (37)
Since {ϕ_{i,j}}_{i,j=0}^{n} is a basis in the space H^h, it is sufficient to consider only the basis functions as the functions w^h in the Bubnov-Galerkin method. Then with the equalities (24) – (25), (27) – (29), (34) – (35) we can associate the systems of equations

a_σ(σ^h, ϕ_{i,j}) = 0;   (38)
a_u(u^h, v^h, ϕ_{i,j}) = (f_u, ϕ_{i,j}),   (39)
a_v(u^h, v^h, ϕ_{i,j}) = (f_v, ϕ_{i,j});   (40)
a_e(e^h, ϕ_{i,j}) = (f_e, ϕ_{i,j}),  i, j = 0, 1, 2, . . . , n.   (41)
The equalities (38) – (41) involve integrals which cannot be calculated exactly in the general case. For their approximation we use the two-dimensional analogue of the trapezoid quadrature formula. We introduce the vectors σ^h(t), u^h(t), v^h(t) and e^h(t) with the components σ^h_{ij}(t), u^h_{ij}(t), v^h_{ij}(t) and e^h_{ij}(t), respectively. We also consider the right-hand side vectors F_u^h, F_v^h and F_e^h with the components (f_u(t), ϕ_{i,j}), (f_v(t), ϕ_{i,j}) and (f_e(t), ϕ_{i,j}), respectively. The approximate replacement of the integrals at each instant t results in linear operators M^h, M^h_sqrt (the operator of multiplication by σ^h_{ij}(t)), A_σ^h(t), A_u^h(t), B_u^h(t), A_v^h(t), B_v^h(t), A_e^h(t, u^h, v^h). When numbering the nodes of Ω̄_h from zero to n², these operators become isomorphic matrices. We shall use lexicographic ordering. In this case M^h is a diagonal matrix with positive entries and the other matrices are five-diagonal.
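As an illustration of the lexicographic ordering, a node z_{ij} can be mapped to a single index and the five couplings of a matrix row (the node itself and its four axis neighbours) enumerated as in the following sketch; the helper names are illustrative and not taken from the authors' program.

// Minimal sketch (not the authors' code): lexicographic numbering of the
// nodes z_{ij}, i, j = 0..n, and the sparsity pattern of one row of a
// five-diagonal matrix for an interior node (boundary nodes have fewer
// neighbours).
inline int nodeIndex(int i, int j, int n) { return j * (n + 1) + i; }

void rowPattern(int i, int j, int n, int cols[5])
{
    cols[0] = nodeIndex(i,     j,     n);  // the node itself
    cols[1] = nodeIndex(i - 1, j,     n);  // west neighbour
    cols[2] = nodeIndex(i + 1, j,     n);  // east neighbour
    cols[3] = nodeIndex(i,     j - 1, n);  // south neighbour
    cols[4] = nodeIndex(i,     j + 1, n);  // north neighbour
}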
Thus, for all t ∈ (0, t_fin) we obtain the systems of ordinary differential equations

M^h dσ^h/dt + A_σ^h(t) σ^h = 0,   (42)
M^h_sqrt d(M^h_sqrt u^h)/dt + A_u^h(t) u^h + B_u^h(t) v^h = F_u^h(t),   (43)
M^h_sqrt d(M^h_sqrt v^h)/dt + B_v^h(t) u^h + A_v^h(t) v^h = F_v^h(t),   (44)
M^h de^h/dt + A_e^h(t, u^h, v^h) e^h = F_e^h(t).   (45)
Fig. 1. Density ρ
Notice that the system (42) is written for the derivative and, assuming that σ (t, x, y) is known at the stage where u, v, e are determined, we obtain three other systems written for the corresponding derivatives. We shall follow this idea of determination of u, v, e at each time level t once σ is determined in numerical methods as well. Thus, we obtain a discrete analogue of the Navier-Stokes equations by the finite element method and the systems of ordinary differential equations (42) – (45). Some useful properties being discrete analogues of continuous balance relations, like conservation of mass and total energy, are proved for them. The natural boundary conditions (12) – (13) for the equation of motion and the Neumann condition (14) for the equation of energy are of crucial importance in conservation of balance of total energy. In addition, some new methods for solving these systems of ordinary differential equations, which conserve basic balance relations, are compared with well-known ones. Numerical experiments were performed, for example, for the problem (42) – (45) with the initial conditions
ρ(x, y, 0) = 1,  u(x, y, 0) = v(x, y, 0) = 0,  e(x, y, 0) = f(0.5, 0.5, x, y) + 1/(γ(γ − 1)M∞²)  on Ωh.   (46)

Here
⎧ " #2 " #2 8 1 8 ⎪ ⎪ , ⎨ 2 R2 − x − A − y − B R2 #2 " #2 " f ( A, B, x, y) = 2 if x−A + y−B ≤ R , ⎪ ⎪ ⎩ 0, otherwise;
R = 1/30. Besides, in the numerical experiment we used the following values of the parameters:
γ = 1.4,  Re = 10³,  M∞² = 16,  Pr = 0.7,  ω = 1.
Taking into account the relation between temperature and internal energy, T = e γ(γ − 1)M∞², the initial conditions for the equation of energy were taken so that the temperature in the nonperturbed domain equals one. Then at a time t* > 0 we add a portion of energy by formula (46), with the spatial argument shifted to the front of the first shock, and solve the problem with this new initial condition. Figure 1 shows the behavior of the density ρ at an instant t > t* (the darker the color, the greater the density); the graph of ρ(x, 0.5, t*) is shown in this figure as well.
4 Parallelization of algorithms and efficiency estimate We consider the block scheme of parallel decomposition for one time level of the scheme (42)-(45). The domain Ω is divided into p = l1 × l2 subdomains where l1 and l2 are integer so that
Ωq,r = [q/l1 , (q + 1)/l1 ] × [r/l2 , (r + 1)/l2 ], q = 0, ..., l1 − 1,
r = 0, ..., l2 − 1.
We assume that l1 ≥ l2 and n is divisible by l1 and l2 . We distribute all data related to the subdomains Ωq,r among p = l1 l2 computational nodes and determine the unknowns ui, j,k in each subdomain independently of other subdomains. This realization is often called block decomposition. Now we make a preliminary efficiency estimate for this realization. We introduce the notion of performance c of a computational node which has
dimension number of operations per second, and the notion of the rate ν of interprocessor data transfer, which has dimension amount of transferred numbers per second. As hardware we take an MBC-1000/16 homogeneous cluster. Let us suppose that the calculation of the value at one grid point, say by a Jacobi or Gauss-Seidel method for any of the formulae (42)-(45), requires σ operations. Assume that the coefficients of the linear equations are stored in memory at each point of the grid domain Ωh. Then one sequential step of the calculations over the whole domain Ωh requires about n²σ operations, with the time of simultaneous work of the computational nodes being defined by

Tc = n²σ / (pc).
To continue the calculations at the next time level, it is necessary to exchange the values of u_{i,j,k} at the nodes adjacent to the boundary between computational nodes. For each node the number of these points equals n(l1 + l2)/p. To avoid clashes, the time it takes for all exchanges through one network to be completed is equal to 2n(l1 + l2)/(pν) seconds. As a result, the time it takes for one iteration step to be performed is estimated by the quantity

Tp = n²σ / (pc) + 2n(l1 + l2) / (pν).
Thus, the acceleration that would be expected is defined by

S = Tc / Tp = ndσ · p / ( ndσ + 2(l1 + l2) ),  where d = ν/c.   (47)
Notice that the formula (47) involves the quantities l1 and l2 constrained by the identity l1 l2 = p only. Solving the simplest optimization problem

(l1 + l2) → min over l1, l2,  l1 l2 = p,   (48)

we obtain l1 = l2 = √p. This choice provides the maximum acceleration and attenuates the "efficiency degradation", i.e., the reduction of efficiency with an increasing number of processors. This decomposition will be called square decomposition. The efficiency of the square decomposition equals

E = S/p = ndσ / ( ndσ + 4√p ).   (49)
Here the "efficiency degradation" effect is clearly seen. In particular, efficiency is halved (E = 0.5) as p increases for
p = pcr = (ndσ)² / 16.   (50)
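For orientation, the constant-rate model (47)–(50) for a square decomposition can be evaluated directly, as in the following sketch; it is an illustration of the formulae, not the authors' program.

#include <cmath>
#include <cstdio>

// Sketch of the constant-rate model (47)-(50) with l1 = l2 = sqrt(p).
// n: grid size, sigma: operations per point, d = nu / c, p: processors.
void simpleModel(double n, double sigma, double d, double p)
{
    double S   = n * d * sigma * p / (n * d * sigma + 4.0 * std::sqrt(p)); // (47)
    double E   = S / p;                                                    // (49)
    double pcr = std::pow(n * d * sigma, 2) / 16.0;                        // (50)
    std::printf("S = %.2f  E = %.3f  p_cr = %.0f\n", S, E, pcr);
}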
From (50) it follows that the greater are σ (the computational work of one iteration step), d = ν /c (the characteristic number of a computation system), and n2 (dimension of the grid domain), the greater is the quantity pcr . To this point the assumption has been made that the conditional parameters c and ν are constants independent of the volume of data being processed. But this is not the case. Performance as well as transmission speed depend on the volume of data transferred. In particular, data transmission speed in a TCP/IP net asymptotically depends on the volume of data transferred. The graph in Fig. 2 shows the results of experiments performed with LAM/MPI. The corresponding curve for MPICH begins with a higher step and has more clearly defined jumps on the horizontal part.
Fig. 2. The relationship between transmission speed and the volume of data
This relationship between transmission speed and the volume of data transferred can be approximated by the formula

ν(x) = ν0 x / (ν1 + x),  where ν0 = 9.8 · 10⁵,  ν1 = 4.9 · 10³.   (51)
Contrary to transmission speed, performance is more difficult to formalize. However, we can determine an averaged value that allows us to obtain approximate estimates. From experimental results with a Pentium III-866 processor this value is of the order of 10⁸ operations per second. Substituting the expression (51) into the formulae (47) and (49) results in

S = n²ν0σ · p / ( n²ν0σ + ν1 c · p + 4nc · √p ),   (52)
E = n²ν0σ / ( n²ν0σ + ν1 c · p + 4nc · √p ).   (53)

The estimate (50) takes the form

p = pcr = (n/ν1)² ( √( ν0ν1σ/c + 4 ) − 2 )².   (54)
The formulae (52) – (54) can be considered as generalizations of (47) – (50) for the transmission speed function represented in the form (51). Observe that the estimates for a more realistic transmission speed are worsened. However, for an MBC-1000/16 cluster and for the parameters n = 1200, σ = 280, p = 15 the estimates corresponding to the formulae (52) – (54) yield

pcr = 777,  S = 14.65,  E = 0.97,   (55)
i.e., expected efficiency of parallel calculations is rather high.
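The estimates (52)–(54) are easy to evaluate numerically. The following sketch plugs in the parameters quoted above (n = 1200, σ = 280, p = 15, c ≈ 10⁸ operations per second, ν0 = 9.8·10⁵, ν1 = 4.9·10³ from (51)); it is an illustration of the formulae rather than the authors' program, and should reproduce the order of the values in (55).

#include <cmath>
#include <cstdio>

int main()
{
    const double n = 1200.0, sigma = 280.0, p = 15.0;
    const double c   = 1.0e8;               // averaged node performance, operations/s
    const double nu0 = 9.8e5, nu1 = 4.9e3;  // parameters of nu(x) in (51)

    const double A   = n * n * nu0 * sigma;                       // common term
    const double den = A + nu1 * c * p + 4.0 * n * c * std::sqrt(p);
    const double S   = A * p / den;                               // (52)
    const double E   = A / den;                                   // (53)
    const double pcr = std::pow((n / nu1) *
                        (std::sqrt(nu0 * nu1 * sigma / c + 4.0) - 2.0), 2);  // (54)

    std::printf("S = %.2f  E = %.2f  p_cr = %.0f\n", S, E, pcr);
    return 0;
}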
5 Characteristics of the implementation of the developed algorithm
The main characteristics of the implementation of the algorithm are as follows:
– the algorithm is implemented in the C language using the MPI standard constructions and without recourse to special libraries; this makes it possible to construct an efficient and well-scalable application with predictable behaviour both from the computational point of view and in the sense of parallel efficiency;
– the program is implemented as a collection of modules, which makes it possible to extend it without modifying the code which has already been written;
– to store intermediate results of iteration steps, the library libpng, which produces images in the PNG format, is used; this avoids compressing data by hand when data are transferred and facilitates the visualization of a solution.
After completing the calculations, or during the iterative process, storage of the obtained solution on disk may be required for subsequent analysis. The solution can be stored as a gray-scale image or as an array of numbers. Since no single process holds the whole solution (in the general case it cannot fit in the memory of one computational node), its assembly is performed by successive transmission of data blocks from all nodes to one node and gradual storage in a file. This procedure involves a significant amount of work and in some cases (for example, when the solution at each iteration step is stored) takes more time than the calculations themselves. When only the last iteration step (the final result) is stored, this time can be reduced to a minimum.
Since the storage procedure is not necessary for performing the calculations, the amount of work it requires was not taken into account when the calculation time was evaluated.
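As an illustration of the block-by-block assembly described above, the following MPI sketch receives the local solution blocks on rank 0 one after another and appends them to a single file; the function name, tags and file name are chosen for this note and do not correspond to the authors' program.

#include <mpi.h>
#include <vector>
#include <cstdio>

// Sketch: successive transmission of data blocks from all nodes to one node
// and gradual storage in a file (illustrative only).
void storeSolution(const std::vector<double>& localBlock, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    if (rank == 0) {
        std::FILE* f = std::fopen("solution.dat", "wb");
        std::fwrite(localBlock.data(), sizeof(double), localBlock.size(), f);
        for (int src = 1; src < size; ++src) {
            MPI_Status st;
            int count = 0;
            MPI_Probe(src, 0, comm, &st);
            MPI_Get_count(&st, MPI_DOUBLE, &count);
            std::vector<double> buf(count);
            MPI_Recv(buf.data(), count, MPI_DOUBLE, src, 0, comm, MPI_STATUS_IGNORE);
            std::fwrite(buf.data(), sizeof(double), buf.size(), f);
        }
        std::fclose(f);
    } else {
        MPI_Send(const_cast<double*>(localBlock.data()),
                 (int)localBlock.size(), MPI_DOUBLE, 0, 0, comm);
    }
}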
References 1. Rannacher R (2000) Finite element method for the incompressible Navier-Stokes equations. In: Galdi G, Heywood JG, Rannacher R (eds) Fundamental directions in mathematical fluid mechanics. Birkhauser Verlag, Berlin 2. Shaidurov VV, Shchepanovskaya GI (2003) Mathematical and numerical modeling of nonstationary propagation of a pulse of high-power energy in a viscous heat conducting gas. Part I. Mathematical formulation of the problem. Inst. of Computational Modelling SB RAS, Krasnoyarsk, Russia 3. Karepova ED, Shaidurov VV (2004) The numerical solution of the Navier-Stokes equations for a viscous heat conducting gas. Part II. Space approximation by the finite element method. Inst. of Computational Modelling SB RAS, Krasnoyarsk, Russia 4. Samarskii AA, Vabishchevich PN (2003) Numerical methods for solving problems on convection-diffusion. Publishers of scientific and educational literature, Moscow
On solution of Navier-Stokes auxiliary grid equations for incompressible fluids N.T. Danaev al-Farabi Kazakh National University, Masanchi str. 39/47, 480012 Almaty, Kazakhstan [email protected]
Summary. Effective iterative algorithms are considered for numerical implementation of the solution of auxiliary differential equations. These equations appear when using splitting schemes, for which convergence theorems are proved and convergence rates are estimated. An effective algorithm is suggested, allowing the grid continuity equation to be satisfied identically.
1 Problem formulation
The problem on the unsteady motion of a viscous incompressible fluid in a bounded domain Ω with a solid boundary ∂Ω is reduced to the solution of a nonlinear system of partial differential equations [1]–[2]:

∂u/∂t + (u·∇)u + ∇p = ν∆u + f,   (1)
div u = 0,   (2)

with initial and boundary conditions

u|_{t=0} = u0(x),   (3)
u|_{∂Ω} = 0,   (4)
where u = (u1, u2, ..., uN) is the velocity vector, p the pressure, f the mass force field, ν the viscosity factor, and N = 2, 3 the space dimensionality. The finite-difference splitting schemes for the numerical solution of the problem (1)-(4), which are currently used, can be written as in [3]–[4]:

B (u^{n+1/2} − u^n)/τ + (u^n ∇_h) u^n = − grad_h p^n + ν ∆_h u^n + f^n,
u^{n+1} + τ grad_h (p^{n+1} − p^n) = u^{n+1/2},
div_h u^{n+1} = 0,   (5)
where B = ∏_{α=1}^{N} (E + ω R_α), and E + ω R_α, α = 1, ..., N, are operators allowing the use of a scalar sweep method;

grad_h p = { p_{x1}, p_{x2}, ..., p_{xN} },  div_h u = u^(1)_{x̄1} + u^(2)_{x̄2} + ... + u^(N)_{x̄N},
τ is the grid step in time. Hereinafter, the standard notation of the theory of difference schemes [5] is used. It is assumed that the components of the velocity vector u_m, m = 1, ..., N, are defined at the nodes of the corresponding grids:

D_{m,h} = { (l1 h, l2 h, ..., l_{m−1} h, (l_m + 1/2)h, l_{m+1} h, ..., lN h),  l_k = 0, ..., M, k ≠ m,  l_m = 0, ..., M − 1,  Mh = 1 },

the pressure values at the nodes

D_h = { (l1 h, l2 h, ..., lN h),  l_k = 0, ..., M − 1,  k = 1, ..., N },

and the normal velocity component values are given on the corresponding sections of the boundary ∂D_{m,h}. The main point of the numerical solution of boundary value problems on incompressible fluid flows in the variables (u, p) using finite-difference methods of type (5) is the implementation of the second step of the calculations, i.e., obtaining a solution of the auxiliary grid equations

u + τ grad_h p = a,  div_h u = 0,  x_m ∈ D_{m,h},  (u, n) = 0,  x ∈ ∂D_{m,h},   (6)
where a(x) is a given vector and n is the internal normal to the boundary. The basic approach to solving the equations (6), similar to the basic variant of the MAC method [4], consists in obtaining an equation for the pressure after applying the divergence operation. The equation for the pressure has the following form:

∆_h p = (1/τ) div_h a.   (7)

In order to obtain the values of the pressure satisfying the Poisson equation (7), it is necessary to set boundary conditions for p, which are absent in the initial boundary value problem statement (1)-(4). This is a drawback of the method. The present work is devoted to the development of effective iterative algorithms for solving the auxiliary grid Navier-Stokes equations for an incompressible fluid and to the justification of the convergence rate of the suggested algorithms.
2 About one class of multi-parameter iterative algorithms for solving auxiliary grid Navier-Stokes equations
The following iterative process is considered in [6] in order to solve the equations (6):

u_m^{s+1} + τ (p^s − τ0 div_h u^s)_{x_m} = ττ0 δ (u^{s+1}_{m,x_m} − u^s_{m,x_m})_{x_m} + a_m,  m = 1, ..., N,
(p^{s+1} − p^s)/τ0 + div_h u^{s+1} = 0,   (8)
where u^0, p^0 are given, δ > 0 is a constant, and τ0 is an iterative parameter for which a convergence theorem holds. A further analysis of the iteration (8), estimates of its convergence rate and various modifications are given in [7]–[8]. Numerical calculations show that the convergence rate of (8) is practically independent of the number of nodes of the finite-difference grid. The investigation of the iterations (8) is interesting because the calculations are performed using effective algorithms: the motion equations are solved by scalar sweeps at each iteration over the index s, then the value of p^{s+1} is found from an explicit formula, and boundary conditions for the pressure are not needed. Let us consider the following iterative algorithm for solving the difference equations (6), which is a modification of the method (8):

α (u_m^{s+1} − u_m^s) + u_m^s + τ (p^s − τ1 div_h u^s)_{x_m} = ττ1 δ (u^{s+1}_{m,x_m} − u^s_{m,x_m})_{x_m} + a_m,  m = 1, ..., N,
(p^{s+1} − p^s)/τ2 + div_h u^{s+1} = 0,   (9)
where α , δ , τ1 , τ2 are iterative parameters. The following theorem is true. Theorem 1. If δ ≥ N, α ≥ 1 and τ1 ≥ τ2 , then the iterative process (9) converges to the solution of the differential problem (6). The following estimation of the error of solution is valid: , , , , , s+1 ,2 ,! , + (α − 1),!s+1 − !s ,2 + τ (τ1 − τ2 ),divh !s+1 ,2 + , s+1 ,2 s , − Ωm,x + Es+1 − Es ≤ 0, +ττ1 divh !s 2 + ττ1 (δ − N )∑ ,Ωm,x m m (m) m , ,2 , s ,2 , , Es = (α − 1),!s+1 , + ττ2 π s 2 + ττ1 δ ∑ ,Ωm,x m (m) m
!s
=
us
− u,
πs
=
ps
− p,
!s
=
(Ω1s , Ω2s , ..., Ω Ns )
is the iteration error.
Proof. The relationships for the error of iterative process follow from the expressions (6) and (9):
α (Ωms+1 − Ωms ) + Ωms + τ (π s − τ1 divh !s ) xm = s+1 − Ω s = ττ1 δ (Ωm,x m,xm ) xm , m = 1, N, m
(10)
π s+1 − π s + divh !s = 0, τ2 with zero boundary conditions for !s , i.e.
(!s+1 , n) = 0,
(11)
x ∈ ∂Dh .
The scalar multiplication of (10) by Ωms+1 gives , ,2 , ,2 , ,2 α ,!s+1 , − !s 2 + ,!s+1 − !s , + ,!s+1 , + , ,2 + !s 2 − ,!s+1 − !s , − 2τ (π s , divh !s+1 )+ +2ττ1 (divh !s , divh !s+1 )+ N , , , ,2 , s+1 ,2 s+1 ,2 − ,Ω s s , + ,(Ωm,x − Ωm,x , ) = 0. +ττ1 δ ∑ (,Ωm,x m,xm m m m m=1
We note the following from the relationship (11):
−2τ (π s+1 + τ2 divh !s+1 , divh !s+1 ) + 2ττ1 (divh !s , divh !s+1 ) = , ,2 = 2ττ2 (π s+1 , π s+1 − π s ) − 2ττ2 ,divh !s+1 , + 2ττ1 (divh !s , divh !s+1 ) = , ,2 , ,2 , ,2 = ττ2 ,π s+1 , − π s 2 + ,π s+1 − π s , − 2ττ2 ,divh !s+1 , + , ,2 , ,2 +ττ1 ,divh !s+1 , + divh !s 2 − ,divh (!s+1 − !s ), = , ,2 , ,2 = ττ2 ,π s+1 , − π s 2 + τ (τ1 − τ2 ),divh !s+1 , + ττ1 divh !s 2 − , ,2 −ττ1 ,divh (!s+1 − !s ), . Substituting these relationships into (12), we obtain , ,2 , ,2 (α + 1),!s+1 , − (α − 1) !s 2 + (α − 1),!s+1 − !s , + N , , , ,2 , s+1 ,2 s+1 ,2 − ,Ω s s , , , +ττ1 δ ∑ (,Ωm,x m,xm (m) + Ωm,xm − Ωm,xm (m) )+ m (m) ,m=1 , , ,2 2 + ττ2 ,π s+1 , − π s 2 + τ (τ1 − τ2 ),divh !s+1 , + , ,2 +ττ1 divh !s 2 = ττ1 ,divh (!s+1 − !s ), . Taking advantage of the apparent inequality ,2 , , , ,divh (!s+1 − !s ), ≤ N we obtain
N
∑
m=1
, , , 2 , s+1 , , s , , ,Ωm,xm , − ,Ωm,x m (m)
, ,2 , ,2 (α + 1),!s+1 , − (α − 1) !s 2 + (α − 1),!s+1 − !s , + N , , , ,2 , s+1 ,2 s+1 ,2 − ,Ω s s , , , +ττ1 δ ∑ (,Ωm,x m,xm (m) + Ωm,xm − Ωm,xm (m) )+ m (m) ,m=1 , , ,2 2 + ττ2 ,π s+1 , − π s 2 + τ (τ1 − τ2 ),divh !s+1 , + N ", , , ,#2 s+1 , − ,Ω s , . +ττ1 divh !s 2 ≤ ττ1 N ∑ ,Ωm,x m,xm m (m) m=1
(12)
Therefore, N , , ,2 , ,2 ,2 s+1 − Ω s , 2,!s+1 , + (α − 1),!s+1 − !s , + ττ1 (δ − N ) ∑ ,Ωm,x m,xm (m) + m m=1 , , 2 +τ (τ1 − τ2 ),divh !s+1 , + ττ1 divh !s 2 + N , , ,2 , , , s+1 ,2 + τ ,π s+1 ,2 ≤ +(α − 1),!s+1 , + ττ1 δ ∑ ,Ωm,x τ2 m (m)
(α
− 1) !s 2
m=1
, s ,2 , + ττ1 δ ∑ ,Ωm,x + m (m) N
m=1
τ s 2 τ2 π .
Thus, we have the following a priori estimate: ,2 , ,2 , 2,!s+1 , + (α − 1),!s+1 − !s , + N , ,2 s+1 − Ω s , +ττ1 (δ − N ) ∑ ,Ωm,x m,xm (m) + m m = 1 , ,2 +τ (τ1 − τ2 ),divh !s+1 , + ττ1 divh !s 2 + Es+1 − Es ≤ 0, N , ,2 s , where Es = (α − 1) !s 2 + ττ1 δ ∑ ,Ωm,x + m (m) m=1
(13)
τ s 2 τ2 π .
The following convergence rate theorem is also true. Theorem 2. If δ > N, α ≥ 1 and τ1 ≥ τ2 , then the iterative process (9) converges to the solution of the difference scheme (6) with a rate of geometrical progression, and the following recursion relationship is fulfilled: F s+1 = qF s , q < 1, 2 , s ,2 h τ , , F s = (α − βK ) !s 2 + π s 2 + + ττ1 δ ∑ ,Ωm,x m (m) τ2 4 m where q = max αα−−β1K , 1 − 2βττ2 C02 , h2ττ1 δ ; β, C0 are uniformly bounded 4
+ττ1 δ
constants, independent from grid parameters. 0
Proof. For any grid function ’ ∈ W21 the expression (10) gives the following:
α (!s+1 − !s , ’) + (!s , ’) + τ (∇h π s , ’) − ττ1 (∇h divh !s , ’) = N
s+1 s − Ωm,x , ϕm ) = 0, = ττ1 δ ∑ (Ωm,x m xm m xm m=1
therefore, using the formulas for summation by parts, we obtain
τ (∇h π s , ’) = −(α − 1)(!s+1 − !s , ’) − (!s+1 , ’)− N
s+1 − Ω s −ττ1 (divh !s , divh ’) − ττ1 δ ∑ (Ωm,x m,xm , ϕm,xm ) , m m = 1 ! ! τ |(∇h π s , ’)| ≤ (α − 1)!(!s+1 − !s , ’)!+
! ! ! N ! s + 1 s ! ! 1 |( div h h ’ )| + ττ1 δ ! ∑ ( Ωm,xm − Ωm,xm , ϕm,xm )! ≤ m=1 , s+1 , , , , , ≤ C¯ ((α − 1),! − !s , ’ 0 + ,!s+1 , ’ 0 + ττ1 ,divh !s+1 , ’ 0 +
+|(!s , ’)| + ττ
!s , div
W21
, s+1 , s , ’ + ττ1 δ ∑ ,Ωm,x − Ωm,x m m N
m=1
W21
0
W21
W21
),
where C¯ is a uniformly bounded constant, independent of τ , τ0 and h. Hence, using the equivalence inequalities of a norm [9] C1 π L2 ( Dh ) ≤ π L−2 ( Dh ) ≤ C2 π L2 ( Dh ) ,
(14)
where π L−2 ( Dh ) = sup |(∇π , ’)|, which are true for any grid function, sat’∈W21
N −1 N −1
isfying the additional condition ∑
∑ πkm = 0, we obtain
m=1 k=1
, , , , τ C0 πs L2 ≤ (α − 1),!s+1 − !s , + ,!s+1 ,+ N , , , , s+1 − Ω s , +ττ1 ,divh !s+1 , + ττ1 δ ∑ ,Ωm,x m,xm . m m=1
Thus, , ,2 C02 τ 2 πs 2 ≤ K ((α − 1),!s+1 − !s , + N , , ,2 , ,2 ,2 s+1 − Ω s , +,!s+1 , + ττ1 ,divh !s+1 , + ττ1 δ ∑ ,Ωm,x m,xm (m) ) , m
(15)
m=1
where K is a limited positive constant. If we multiply (15) by β and add the result to (13), then , ,2 , ,2 (2 -βK ) ,!s+1 , + (α − 1)(1 − βK ),!s+1 − !s , + , s+1 ,2 s , + +ττ1 (1 − βK ) divh !s 2 + ττ1 (δ − N − βδ K )∑ ,Ωm,x − Ωm,x m m (m) m N , , ,2 , , , s+1 ,2 + τ ,π s+1 ,2 ≤ +(α − 1),!s+1 , + ττ1 δ ∑ ,Ωm,x τ2 m (m)
≤ (α
− 1) !s 2
m=1 N ,
,2 s , + ττ1 δ ∑ ,Ωm,x + ττ2 π s 2 − τ 2 βC02 π s 2 , m (m) m=1
therefore, , ,2 , ,2 (1 -α -βK ) ,!s+1 , + (α − 1)(1 − βK ),!s+1 − !s , + , s+1 ,2 s , + +ττ1 (1 − βK ) divh !s 2 + ττ1 (δ (1 − βK ) − N )∑ ,Ωm,x − Ωm,x m m (m) m N , , , , s+1 ,2 + τ ,π s+1 ,2 ≤ +ττ1 δ ∑ ,Ωm,x τ2 m (m) m=1
N , ,2 s , + ττ2 (1 − ττ2 βC02 ) π s 2 . ≤ (α − 1) !s 2 + ττ1 δ ∑ ,Ωm,x m (m) m=1
Let us choose a number β > 0, satisfying the following conditions: 1 − βK > 0, ττ1 (δ (1 − βK ) − N ) ≥ 0. Taking into account the inequality , , , h2 N , , s+1 ,2 , s+1 ,2 Ω ,! , ≥ , m,xm , , 4 m∑ =1 we obtain
, ,2 (α -βK ) ,!s+1 , + ττ1 δ +
h2 4
N , , s+1 ,2 + ∑ ,Ωm,x m (m)
m=1
,
,
τ , s+1 ,2 τ2 π
≤
N , ,2 s , + ττ2 (1 − ττ2 βC02 ) π s 2 . ≤ (α − 1) !s 2 + ττ1 δ ∑ ,Ωm,x m (m) m=1
We proceed as follows: , ,2 (α -βK ) ,!s+1 , + ττ1 δ +
≤ +
α −1 α −βK (α ττ1 δ
2
ττ1 δ + h4
N , , s+1 ,2 + ∑ ,Ωm,x m (m)
m=1
,
,
τ , s+1 ,2 τ2 π
≤
+ ττ2 (1 − ττ2 βC02 ) π s 2 + N , ,2 h2 s , . ∑ ,Ωm,x 4 m (m) m=1
− βK ) !s 2
ττ1 δ +
h2 4
If we introduce the notation α−1 q = max , 1 − 2βττ2 C02 , α − βK
(16)
ττ1 δ h2 4
+ ττ1 δ
,
then (16) leads to the following: F s+1 = qF s , Fs
q < 1,
= (α − βK ) !s 2 + ττ2 π s 2 +
h2 4
, , , s ,2 + ττ1 δ ∑ ,Ωm,x , . m m
(m)
Therefore, it is proved that the iterative process (10) converges with the rate of geometric progression.
3 About one implicit iterative process
In order to find a solution of the difference problem (6) let us consider the following iterative process:

u^{s+1} + τ grad_h p^{s+1} = a,  x ∈ D_{m,h},
(p^{s+1} − p^s)/τ0 + div_h u^{s+1} = 0,
(u^{s+1}, n) = 0,  x ∈ ∂D_{m,h},   (17)

where it is assumed that τ > 0, τ0 > 0 are iterative parameters. The following theorem is true.
Theorem 3. The iterative process (17) converges to the solution of the difference problem (6) with the rate of a geometric progression. The following estimates for the error of the solution are valid:

‖!^{s+1}‖² + (ττ0/2) ‖div_h !^{s+1}‖² + (τ/(2τ0)) ( ‖π^{s+1}‖² − ‖π^s‖² ) = 0,   (18)
‖π^{s+1}‖² ≤ q ‖π^s‖²,  q < 1,

where q = (1 + 2C0² ττ0)^{−1} and C0 is a uniformly bounded constant, independent of the grid parameters.
Proof. The expressions (6) and (17) lead to the following relationships for the error of the iterative process:

!^{s+1} + τ grad_h π^{s+1} = 0,   (19)
(π^{s+1} − π^s)/τ0 + div_h !^{s+1} = 0,   (20)

with zero boundary conditions for !^{s+1}, i.e.

(!^{s+1}, n) = 0,  x ∈ ∂D_{m,h}.
Scalar multiplication of (19) by !s+1 leads to , , , s+1 ,2 ,! , − τ (π s+1 , divh !s+1 ) = 0. Taking into account the relationship (20), we note, that , s+1 ,2 ,! , + , s+1 ,2 ,! , +
τ s+1 , π s+1 − π s ) = τ0 ( π , ,2 τ , s+1 − π s , + 2ττ0 2τ0 π
0, , , ,π s+1 ,2 − π s 2 = 0,
therefore, we obtain the estimation (18). From the expression (19) the following is true for any grid function ’ ∈ ◦
W21 :
(!!s+1 , ’) + τ (∇! h π s!+1 , ’) = !0, , , τ !(∇h π s+1 , ’)! ≤ !(!s+1 , ’)! ≤ C¯ ,!s+1 , ’
◦
W21
,
where C¯ is a uniformly bounded constant, independent of τ , τ0 and h. Hence, using the inequality (14), we obtain , , , , , , , , τ C0 ,π s+1 , ≤ ,!s+1 ,. L2
Therefore, it follows from (18) that
τ²C0² ‖π^{s+1}‖² + (τ/(2τ0)) ‖π^{s+1}‖² ≤ (τ/(2τ0)) ‖π^s‖²,
(21)
is substituted into the first relationship (17) for defining velocity components, then the equation is obtained: Ah us+1 ≡ us+1 − ττ0 gradh divh us+1 = un+1/2 − τ gradh ps with the operator Ah , which satisfies the following conditions: Ah = A∗h and ( Ah u, u) = u 2 + ττ0 divh u 2 , γ1 u 2 ≤ ( Ah u, u) ≤ γ2 u 2 , 0 γ1 = 1, γ2 = 1 + 4Nhττ . 2 In fact
( Ah u, v) = (u − ττ0 gradh divh u, v) = (u, v) + ττ0 (divh u, divh v) = = (u, v − ττ0 gradh divh v) = (u, Ah v), i.e. Ah = A∗h . In order to find the values us+1 we can consider the iterative process with the Chebyshev set of parameters: un+1 − un + Ah un = G( x), τn+1
u0 ∈ H,
(22)
τ0 τ0 = γ1 +2 γ2 , 1+ρ0 µn , √ γ1 1−√ξ 1−ξ , ξ = , ρ = , 1 1+ξ γ2 1+ ξ . πθk (n) n = −Cos 2k , n = 1,k .
τn+1 = ρ0 = µn ∈
Then we define the pressure values from the obtained values u^{s+1} using the formula (21). Let us note that n0(ε) ≈ O((1/h) ln(1/ε)) for the iteration (22). The convergence of the iterations (17) to the solution of the equations (6) is independent of the number of difference grid nodes; therefore, it can be assumed that the total number of iterations necessary for obtaining a solution of (6) will also be O((1/h) ln(1/ε)).
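A minimal sketch of generating the Chebyshev parameter set for the iteration (22) is given below; the stable permutation θ_k(n) of the roots is omitted and the natural order is used instead, which is an assumption made for brevity and not part of the paper.

#include <cmath>
#include <vector>

// Sketch: Chebyshev parameters tau_{n+1} for (22) with spectral bounds
// gamma1 <= A_h <= gamma2 (natural root order; in practice a stable
// permutation theta_k(n) would be used).
std::vector<double> chebyshevParameters(double gamma1, double gamma2, int k)
{
    const double PI   = std::acos(-1.0);
    const double tau0 = 2.0 / (gamma1 + gamma2);
    const double xi   = gamma1 / gamma2;
    const double rho0 = (1.0 - xi) / (1.0 + xi);
    std::vector<double> tau(k);
    for (int n = 1; n <= k; ++n) {
        double mu = -std::cos(PI * (2.0 * n - 1.0) / (2.0 * k));  // Chebyshev roots
        tau[n - 1] = tau0 / (1.0 + rho0 * mu);
    }
    return tau;
}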
4 Numerical realization of the solution of the auxiliary Navier-Stokes equations using a vector potential
One of the drawbacks of the approaches based on the relationships (7)-(9), (17) is that the velocity divergence satisfies only the condition

‖div_h u^{n+1,s}‖ ≤ ε

on each time step, where ε is a prescribed accuracy of the internal iterations, i.e. the continuity equation is not fulfilled exactly. In [10] the two-dimensional Navier-Stokes equations and the equation (6) are considered. As a result of algebraic transformations, they reduce to the solution of the following equation for the stream function ψ, which is defined at the nodes (x_{k+1/2}, y_{m+1/2}):

∆_h ψ^{n+1} = rot_h a,  where  rot_h a = (a^(1)_{k+1/2,m})_y − (a^(2)_{k,m+1/2})_x.
This approach allows the law of mass conservation to be satisfied. Generalizing the results of [10] to the three-dimensional case, let us consider the following grid vectors for the numerical realization of the solution of equations (6):

u = (U^(1)_{k+1/2,l,m}, U^(2)_{k,l+1/2,m}, U^(3)_{k,l,m+1/2}),
Ψ = (Ψ^(1)_{k,l+1/2,m+1/2}, Ψ^(2)_{k+1/2,l,m}, Ψ^(3)_{k,l,m+1/2}),
and let us set
u = roth Ψ.
After simple transformations (6) gives: roth roth Ψ = roth a or
−∆h Ψ + gradh (divh Ψ) = roth a( x).
Let us consider a Poisson equation in order to determine the components of the grid vector potential:

−∆_h Ψ = rot_h a(x)   (23)

with the following boundary conditions:

Ψ^(n)_{x_n} = 0,  Ψ^(s) = Ψ^(t) = 0,
which guarantee the fulfillment of the condition divΨ = 0 in all nodes of the computational domain Dh .
In order to find solutions of the equation (23) we can consider the alternate-triangular method (ATM):

(E + ωR1)(E + ωR2) (Ψ^{s+1} − Ψ^s)/τ_{s+1} = ∆_h Ψ^s − rot_h a(x),
R1 Z = −0.5 Z_{x1 x̄1} − (1/h) Z_{x̄2} − (1/h) Z_{x̄3},
R2 Z = −0.5 Z_{x1 x̄1} + (1/h) Z_{x2} − (1/h) Z_{x3},
for which it is easy to obtain the estimates of coefficients necessary for defining the Chebyshev set of parameters.
References 1. Temam R (1981) Navier-Stokes equations: theory and numerical analysis. Mir, Moscow (in Russian) 2. Ladyzhenskaya OA (1970) Mathematical questions of dynamics of vicious incompressible fluid. Nauka, Moscow (in Russian) 3. Roach P (1980) Computational Hydrodynamics. Mir, Moscow (in Russian) 4. Belotserkovsky OM (1984) Numerical modeling in mechanics of continium media. Nauka, Moscow (in Russian) 5. Samarsky AA (1988) Theory of differential schemes. Nauka, Moscow (in Russian) 6. Danaev NT, Smagulov ShS (1995) About realization of solution of differential equations Vn+1/2 + τ gradh pn+1 = Vn+1/2 , divh Vn+1 = 0. In: Some numerical methods of solving Navier-Stokes equations for incompressible fluid. Preprint IA Kazakhstan, Almaty, Kazakhstan (in Russian) 7. Urmashev BA (1997) Numerical investigation of one problem for Navier-Stokes equation in variables (U, p). In: Bulletin of KazNU, Mathematics, Mechanics and Informatics Series 8. Almaty, Kazakhstan (in Russian) 8. Danaev NT, Urmashev BA (2000) Iterative schemes for solving Navier-Stokes auxiliary grid equations. In: Bulletin of KazNU, Mathematics, Mechanics and Informatics Series 4, Almaty, Kazakhstan (in Russian) 9. Kobelnikov RM (1978) Reports of USSR Academy of Sciences 243/4:843-846 (in Russian) 10. Smagulov ShS, Danaev NT, Temirbekov NM (1989) Numerical solution for Navier-Stokes equation with discontinuous coefficients. Preprint of Computing Center SB AC USSR 15, Krasnoyarsk, Russia (in Russian)
An efficient implementation of an adaptive and parallel grid in DUNE A. Burri, A. Dedner, R. Klöfkorn, and M. Ohlberger Institute of Applied Mathematics, University of Freiburg i. Br., Hermann-Herder-Str. 10, 79104 Freiburg i. Br., Germany [email protected] DUNE website: http://dune.uni-hd.de Summary. In this contribution we describe and evaluate an efficient implementation of an adaptive and parallel grid (ALUGrid) within the Distributed and Unified Numerics Environment DUNE. A generalization of the serial grid interface of DUNE, described in [1], to the adaptive and parallel case is discussed and example computations using the grid interface are presented. The computations are compared with computations of the original code, which was optimized for the specific example problem studied here.
1 Introduction In [1] a serial version of a generic grid interface was introduced that was realized within the Distributed and Unified Numerics Environment DUNE. One of the major goals of such an interface based numerics environment is the separation of data structures and algorithms. For instance, the problem implementation can be done on the basis of the interface independent of the data structure that is used for a specific application. Moreover such a concept allows a reuse of existing codes beyond the interface. Up to now, within DUNE, there are five implementations of the grid interface, for example the interface implementation for the PDE software toolbox UG [2], for the Finite Element toolbox ALBERTA [3], and an implementation for a structured grid. Some of these implementations can be used to perform parallel computations. In this paper we focus on the detailed description of the parallel part of the grid interface that provides the necessary functionality for parallel computations. As some of the packages are already endowed with a parallelisation concept, the interface has to support an efficient access to 1 2
R. Klöfkorn was supported by the Bundesministerium für Bildung und Forschung under contract 03KRNCFR. M. Ohlberger was supported by the Landesstiftung Baden-Württemberg under contract 21-665.23/8.
the already existing parallelisation concepts. In this contribution we focus on the description of an efficient implementation of the parallel interface for the adaptive and parallel ALUGrid library [4, 5]. ALUGrid is an adaptive, load balanced, unstructured grid implementation that was specifically designed for an efficient implementation of explicit finite volume schemes for nonlinear conservation laws. The goal of this contribution is to demonstrate that the parallel grid interface to ALUGrid can be implemented in such an efficient way that the resulting adaptive and parallel computations based on the implementation in DUNE are competitive with computations of the original finite volume code in ALUGrid. The paper is organized as follows: in Section 2 we give an abstract definition of a parallel hierarchic grid and discuss the corresponding interface classes in DUNE. In addition, the specific features of the ALUGrid library are discussed. In Section 3 the handling of arbitrary data during grid reorganization in the case of grid adaptation and dynamic load balancing is discussed and an efficient implementation is presented that avoids the usage of virtual functions in C++. Finally, in Section 4 a run time comparison between the original finite volume implementation in ALUGrid and the interface based implementation in DUNE is given.
2 Design of the parallel Grid Interface The DUNE grid interface is an interface for parallel grids. This means that a serial grid can be seen as a parallel grid which runs on one processor. Therefore the described functionality is provided by every grid implementing the interface and for some implementations, methods such as loadBalance just do nothing. This guarantees that code written for parallel applications can be used for serial calculations as well. Furthermore the part of the grid interface responsible for parallelisation should be such that the user can write code for parallel applications without much effort, i.e. without coding MPI commands. The intention of the design is to provide a parallel extension of the grid interface by adding only a minimum number of methods. This section is split into three parts: first an abstract mathematical definition of the parallel extension of the DUNE grid is presented. Then in the second part the classes implementing the abstract definitions are described. The last part describes the features of the ALUGrid library concerning the grid and the interpretation of the features in terms of the abstract definition of the DUNE grid interface. 2.1 Abstract definition of the parallel grid In the following we define a grid T in mathematical terms. It is supposed to discretize a domain Ω ⊂ IRn , n ∈ IN, n > 0, with piecewise smooth boundary ∂Ω . A grid T consists of L + 1 grid levels
T = {T_0, T_1, . . . , T_L}. Each grid level T_l consists of sets of grid entities E_l^c of codimension c ∈ {0, 1, . . . , d}, where d ≤ n is the dimensionality of the grid:

T_l = { E_l^0, . . . , E_l^d }.

Each entity set consists of individual grid entities, which are denoted by Ω_{l,i}^c:

E_l^c = { Ω_{l,0}^c, Ω_{l,1}^c, . . . , Ω_{l,N(l,c)−1}^c }.
The number of entities of codimension c on level l is N(l, c) and we define a corresponding index set I_l^c = {0, 1, . . . , N(l, c) − 1}.
Definition 1. T is called a grid on Ω if the following conditions hold:
1. (Tessellation). The entities of codimension 0 on level 0 define a tessellation of the whole domain:

∪_{i ∈ I_0^0} Ω_{0,i}^0 = Ω,  ∀ i ≠ j : Ω_{0,i}^0 ∩ Ω_{0,j}^0 = ∅.

2. (Nestedness). Entities of codimension 0 on different levels form a tree structure. We require:

∀ l > 0, i ∈ I_l^0 : ∃! j ∈ I_{l−1}^0 : Ω_{l,i}^0 ⊂ Ω_{l−1,j}^0.

This Ω_{l−1,j}^0 is called the father of Ω_{l,i}^0. For entities with at least one side on the boundary this condition can be relaxed. We define the set of all descendant entities of codimension 0 and level l ≤ L of an entity Ω_{k,i}^0 as

C_L(Ω_{k,i}^0) = { Ω_{l,j}^0 | Ω_{l,j}^0 ⊂ Ω_{k,i}^0, l ≤ L }.
3. (Recursion over codimension). The boundary of a grid entity is composed of grid entities of the next higher codimension, i.e. for c < d we have

∂Ω_{l,i}^c = ∪_{j ∈ I_{l,i}^{c+1} ⊂ I_l^{c+1}} Ω_{l,j}^{c+1}.

Grid entities Ω_{l,j}^d of codimension d are points in IR^n.
4. (Reference elements and dimension). For each grid entity Ω_{l,i}^c there is a reference element ω_{l,i}^c ⊂ IR^{d−c} and a sufficiently smooth map

m_{l,i}^c : ω_{l,i}^c → Ω_{l,i}^c

from the reference element to the actual element. Reference elements are convex polyhedrons in IR^{d−c}. The dimension of the grid d is the dimension of the reference elements corresponding to grid entities of codimension 0. For c = d the map m_{l,i}^d simply returns the corresponding point in IR^n.
5. (Nonconformity). Note that we do not require the mesh to be conforming in the sense that the intersection of the closures of two grid entities of codimension c is either empty or a grid entity of codimension greater than c. However, we require that all grid entities in E_l^c are distinct, i.e.:

∀ i, j, c, l : Ω_{l,i}^c = Ω_{l,j}^c ⇒ i = j.

The set of all neighbors of an entity Ω_{l,i}^0 is represented by the set of all non-empty intersections with that entity:

I(Ω_{l,i}^0) = { Ω_{l,i}^0 ∩ Ω_{l,j}^0 | Ω_{l,i}^0 ∩ Ω_{l,j}^0 ≠ ∅, i ≠ j }.

In the following we also use the notion of leaf entities, which are all entities Ω_{l,i}^c which are not further subdivided, i.e., C_L(Ω_{l,i}^c) = ∅ for all L ∈ IN.
The index set I_l^c is called LevelIndexSet. A similar index set I_leaf^c is also defined for the leaf entities.
For parallel computation we use a domain decomposition strategy, in which each processor performs the simulation on a grid which covers only a part of the whole computational domain Ω . Definition 2. Let the domain Ω¯ be decomposed into K disjoint partitions
Ω¯ = Ω¯ 1 ∪ · · · ∪ Ω¯ K
(1)
and let Tk be a grid on Ωk for k = 1, . . . , K in the sense of Definition 1. The entity sets corresponding to Tk are distinguished in the following by a subscript k. b,c 1. (Border entities). For c = 1, . . . , d and l = 0, . . . , L we denote with Ek,l ⊂ c c ¯ Ek,l the set of border entities consisting of entities Ωl,i ⊂ ∂Ωk for which there exists at least one index ki ∈ {1, · · · , K } \ {k} and an entity Ωl,c j ∈ Ekc ,l i i c = Ω c . Note that for border entities with codimension c > 1 there with Ωl,i l, ji can exist a large number of copies in the other partitions but for border entities 1 ∈ E b,1 the indices k , j are unique. Ωl,i i i k,l g,0
0 0 2. (Ghost entities). The set Ek,l = {Ωl,N , . . . , Ωl,N } consists of (l,0) (l,0)+ Ng (l,0) b,1 ¯ 0 0 ¯ ghost entities Ω ⊂ Ω \ Ω with Ω ∩ Ω ∈ E and for which there exists l,i
k
l,i
k
k,l
0 = Ω0 . an index ki ∈ {1, · · · , K } \ {k} and an entity Ωl,0 j ∈ Ek0 ,l with Ωl,i l, j i i i Note that as for border entities of codimension one the indices ki , ji are unique. 0. The entity Ωl,0 j is the master entity of the ghost entity Ωl,i i 3. (Interior entities). All entities that are neither border entities nor ghost entities are called interior entities.
2.2 An adaptive parallel extension of the DUNE grid interface
According to the abstract description of the parallel grid in 2.1, the grid interface consists of the following classes:
1. Grid<dim, dimworld, ...>
This class corresponds to the grid Tk on Ωk that is processed on processor k. It is parameterized by the grid dimension d = dim and the space dimension n = dimworld. The grid class provides iterators for the access to its entities. For grid adaption, load balancing and the communication in the parallel case, the following methods are provided:
a) myRank(): Gives the processor number k.
b) mark(ref, en): Marks the entity en for refinement or coarsening.
c) adapt(data): Modifies the grid Tk with respect to the refinement marks. During this procedure the numerical data is projected to the new grid.
d) loadBalance(data): Calculates the load of the grid Tk and repartitions the parallel grid, if necessary. For any entity that is relocated the corresponding numerical data is of course also relocated.
e) communicate(data): Communicates data on the parallel grid and handles the unique mapping from ghost entities of the grid Tk to its master entity on some grid Tl.
2. Entity<codim, dim, dimworld, ...>, Geometry<dim, dimworld, ...>
Grid entities Ω_{l,i}^c of codimension c = codim are realized by the classes Entity and Geometry. The Entity class contains all topological information, while geometrical specifications are provided by the Geometry class. The affiliation of an entity to one of the partition types interior, border, or ghost is provided by the member function partitionType(). The method state() of the Entity class (c = 0) determines whether an entity might be removed during the next grid adaptation. After an adaptation took place this method allows to detect whether an entity was refined or not.
3. LevelIterator<codim, partitionType, ...>
The level iterator gives access to all grid entities on a specified level l of the partition partitionType, where partitionType is either interior, border, or ghost. This allows a traversal of the sets E_l^c \ E_l^{b,c}, E_l^{b,c}, E_l^{g,c}.
4. LeafIterator<codim, partitionType, ...>
The leaf iterator gives access to all grid entities of the partition partitionType that do not have any further children.
5. HierarchicIterator<dim, dimworld, ...>
Another possibility to access grid entities is provided by the hierarchic iterator. This iterator runs over all descendant entities with level l ≤ L of a given entity Ω_{k,i}^0. Therefore, it traverses the set C_L(Ω_{k,i}^0).
6. IntersectionIteratordim, dimworld, ... Part of the topological information provided by the Entity class of codi0 mension 0 is realized by the intersection iterator. For a given entity Ωl,i 0 the iterator traverses the set I(Ωl,i ). 2.3 The ALUGrid library The ALUGrid library [6] allows the use of both hexahedral and tetrahedral grids for simulations on 3d domains, i.e., Ω ⊂ IRn , d = n = 3 using the notation from Section 2.1. Together with local adaptivity and dynamic loadbalancing this enables efficient simulations on arbitrary domains. In the following we describe the structure of the grid and the restrictions in comparison with the general definition of a parallel grid given above. 1. (Macro grid). The grid is initialized with the entities of codimension 0 on level 0 called the macro grid in the following. The entities of the macro grid are subsets in Ω . In a parallel computation it is feasible for some of the grids Tk to be empty; thus it is possible to start with an initial tessellation of the whole computational domain without having to perform an a-priori partitioning of the initial grid. The union of all the macro grid entities form a conform tessellation of Ω . 2. (Restriction on non-conformity). If two leaf entities Ωl0 ,i , Ωl0 ,i have a 1 1 2 2 codimension one intersection then |l1 − l2 | ≤ 1, i.e., only one level of non-conformity is admissible. For the parallel grid this restriction must also hold between all ghost entities and grid entities with codimension one intersection. 0 can be 3. (Refinement/Coarsening). During a simulation, leaf entities Ωl,i marked for refinement or coarsening. Entities which are marked for refinement are decomposed into entities on the next level Ωl0+1, j , . . . , 0
Ωl0+1, jq . So far q = 8 is implemented both for tetrahedral and hexahedral elements. If the non-conformity restriction is not satisfied, neighboring entities are also refined. Entities marked for coarsening are removed only if no violation of the non-conformity restriction occurs during the coarsening process. 4. (Load balancing). After each grid adaptation, the current load on each partition is estimated. The repartitioning of the grid is only performed on 0 together with all Ω c ⊂ Ω 0 the macro grid level, i.e., only entities Ω0,i 0,i l, j are moved between partitions. To perform load balancing a partitioning algorithm using the library METIS [7, 8] is utilized for the dual graph of the macro grid. Details can be found in [5, 7, 8].
3 Handling user data during grid reorganization Most software for numerical simulations has it’s own data formats for storage of the numerical data, like for example DOF_REAL_VEC in ALBERTA [3]. This is a critical point because code once written using the data structure of a certain package is hardly portable. Therefore the general approach in DUNE is to separate the handling of numerical data from the grid. This means the interface has to provide some kind of identifier which allows to identify for example vertices or elements of the grid (see Definition 1). In DUNE this means that each entity must provide a minimal set of indices (see [9] for detailed description). Lets assume for simplicity that each element of the grid, for example each tetrahedron or triangle, can be identified by a unique index i, with i ∈ IN. Using this the numerical data can be stored in vectors and can be accessed via these indices. This approach leads to difficulties if grid reorganization requires the projection of user data from the old to the new grid. For example during adaptation of the grid, when elements are created or removed, all user data, i.e., solution, right hand side, etc. has to be projected onto the new grid. Therefore, since the data is not stored together with the grid, the grid interface has to provide methods to handle the projection process for all persistent data. Using the interface method state() defined for entities of codimension 0 — which identifies whether an entity will be coarsened or was refined — one can separate the restriction/prolongation process from the grid. This means that before the grid is adapted, all persistent data which belong to leaf entities of the grid has to be projected to their father; after grid adaptation the data is projected onto the new entities. Of course data is not modified if an entity is not changed. Unfortunately this method is very expensive in terms of CPU time when grid adaptation takes up a substantial part of the overall computational cost, e.g., in a time explicit finite-volume scheme. We adopt a different strategy to project the relevant data onto the new grid. Using a call-back functionality during element creation or removal the data can be projected more efficiently. Of course all projections have to be done simultaneously for an element-children tuple. This means all data which have to be projected to the new grid should be available in a list-like structure. For the adaptation step we pass an object through the interface to the grid which provides a prolong and a restrict method. An easy but unefficient implementation of such a mechanism would use the virtual function concept of C++. Since efficiency is a primary goal in numerical software, a different approach relying on the template mechanism of C++ is used. The following code snippet explains how the required functionality for prolongation/restriction can be achieved, for simplicity showing only the method prolong.
// project data from father to son entity
template <class VectorType>
class SimpleProlongation {
  VectorType& vec;
public:
  SimpleProlongation(VectorType& v) : vec(v) {}
  void prolong(Entity& father, Entity& son) {
    vec[son.index()] = vec[father.index()];
  }
};

template <class A, class B>
class CombinedProlongationOperator {
  A& _a;
  B& _b;
public:
  // stores the references to the objects that should be combined
  CombinedProlongationOperator(A& a, B& b) : _a(a), _b(b) {}
  void prolong(Entity& father, Entity& son) {
    _a.prolong(father, son);
    _b.prolong(father, son);
  }
};
Now the algorithm looks the following way:

// somewhere in the implementation
template <class GridType>
void algorithm(GridType& grid)
{
  // vector storing the unknown density
  vector<double> density;
  // carray is an array of fixed length
  typedef carray<double,3> double_3;
  // vector storing the unknown velocity
  vector<double_3> velocity;
  // the CombinedProlongationOperator is parameterized by
  // the SimpleProlongation classes
  typedef CombinedProlongationOperator<
      SimpleProlongation< vector<double> >,
      SimpleProlongation< vector<double_3> > > CombinedProlongationType;
  SimpleProlongation< vector<double> >   prolongDensity(density);
  SimpleProlongation< vector<double_3> > prolongVelocity(velocity);
  CombinedProlongationType rpData(prolongDensity, prolongVelocity);
  for (...)  // all timesteps
  {
    // start adaptation process, entities are already marked
    // when an element is refined then the method
    // prolong of the class CombinedProlongationOperator
    // is called and the data is prolongated
    grid.adapt(rpData);
  }
}
In this example the class A is of the type SimpleProlongation< vector<double> > and the class B is of the type SimpleProlongation< vector<double_3> >. Because all types are known at compile time the compiler is able to inline function calls for optimization. Both approaches allow to combine different types of objects, as long as they all provide a prolong method with exactly the same parameter list. As in the approach using the virtual functions, more than two objects can be combined, because class A or class B could also be of the type CombinedProlongationOperator. This example only shows the basic idea. The current implementation of the code uses a more sophisticated implementation which is available in an example code on the web [6]. Except for grid adaptation, two further situations occur where the above described functionality is needed. These are the communication and the load balancing procedure. During both steps, access to the user data is needed and the access needs to be as efficient as possible. The following examples shortly delineate the idea of the concept as used in the communication step. For further details the interested reader should refer to [6].

// read and write data from/to message buffer
template <class EntityType, class VectorType>
class ReadWriteData {
  VectorType& vec;
public:
  ReadWriteData(VectorType& v) : vec(v) {}
  void readData(MessageBuffer& buf, EntityType& e)  { buf.readObject(vec[e.index()]); }
  void writeData(MessageBuffer& buf, EntityType& e) { buf.writeObject(vec[e.index()]); }
};
Here the message buffer is implemented as an object stream. Before a communication step, the data of all entities located at the process boundary is inserted into the stream. The interprocess communication consists of exchanging the stream objects, from which the data is then extracted in the same order as it was inserted on the other process. Because ALUGrid guarantees an iteration order which is the same on each side of a process border, the data of the entities gets inserted in the right place. An example of the message buffer implementation can be found in the code of ALUGrid. Here we only need to know that the methods readObject and writeObject just read and write data from and to the object stream. For the load balancing process almost the same functionality is needed. Instead of entities on a process boundary, all children of a macro grid entity marked for relocation to another processor are considered. After inserting the entity's refinement information and data into the stream, it is sent to processor k. On processor k, first the entity tree below the macro entity is recreated by refining the macro entity as described by the refinement information. Afterwards the data from the other processor is extracted on the newly created
entities. To this end, the method xtractData is called, which makes a hierarchical walk over the restored tree calling the method readData.

// Inline and Xtract Operator for exchanging
// leaf data during load balancing
template <class GridType, class ReadWriteDataType>
class InlineXtractData // pack/unpack data to/from message buffer
{
  GridType& grid;
  typedef typename GridType::Entity EntityType;
  ReadWriteDataType& rwData;
public:
  InlineXtractData(GridType& g, ReadWriteDataType& rwd) : grid(g), rwData(rwd) {}

  void inlineData(MessageBuffer& buf, EntityType& e)
  {
    typedef typename EntityType::HierarchicIterator HierarchicIterator;
    for (HierarchicIterator it = e.hbegin(grid.maxlevel()); ...)
    {
      if ((*it).isLeaf()) { rwData.writeData(buf, *it); }
    }
  }

  void xtractData(MessageBuffer& buf, EntityType& e)
  {
    typedef typename EntityType::HierarchicIterator HierarchicIterator;
    for (HierarchicIterator it = e.hbegin(grid.maxlevel()); ...)
    {
      if ((*it).isLeaf()) { rwData.readData(buf, *it); }
    }
  }
};
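How the two handlers interact during repartitioning might look roughly as follows; note that the method name loadBalance and its signature are assumptions made only for illustration and are not taken from the DUNE or ALUGrid interface:

// sketch: wiring of the data handlers for load balancing
vector<double> density;   // user data, stored outside the grid
ReadWriteData< GridType::Entity, vector<double> > rwDensity(density);

InlineXtractData< GridType, ReadWriteData< GridType::Entity, vector<double> > >
  dataHandler(grid, rwDensity);

// for every macro entity relocated to another processor, inlineData() packs the
// leaf data into the message buffer and xtractData() unpacks it on the target side
grid.loadBalance(dataHandler); // hypothetical call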
4 Example computation and performance evaluation

Since ALUGrid was designed with explicit finite volume schemes in mind, we base our efficiency test of the parallel DUNE interface on an explicit first order finite volume scheme for the Euler equations of gas dynamics. Using the notation from Definition 1, the scheme for evolving the piecewise constant time discrete solution {U_i^n}_{i,n} on the leaf entities Ω_{l,i}^0 from a time level t^n to a time level t^{n+1} = t^n + Δt^n reads as follows:

  U_i^{n+1} = U_i^n + (Δt^n / |Ω_{l,i}^0|) Σ_{Ω_{k,j}^0} G_{ij}(U_i^n, U_j^n),

where the sum is taken over all leaf entities Ω_{k,j}^0 which have a codimension one intersection with Ω_{l,i}^0. The conservative quantities are U = (ρ, ρu, ρv, ρw, ρe) and the function G is a Riemann-solver based numerical flux function. A detailed description of the algorithm can be found in [4]. Basically the algorithm consists of five steps, assuming the data {U_i^n}_i is given:
1. (Communication). The data is exchanged from master entities to the ghost entities on the other processors.
2. (Flux evaluation). For each pair of entities Ω_{l,i}^0, Ω_{k,j}^0 with a codimension one intersection and i < j, the numerical flux G_{ij}(U_i^n, U_j^n) is evaluated and the sum

     V_i^n := (1 / |Ω_{l,i}^0|) Σ_{Ω_{k,j}^0} G_{ij}(U_i^n, U_j^n)

   is computed for each leaf entity. During this step, the maximal admissible local time-step sizes Δ_{ij}^n are also computed.
3. (Global time step). The minimum time step size Δt^n = min_{(i,j)} Δ_{ij}^n is computed using a global communication step between all processors.
4. (Evolution). The conservative quantities at the next time level are constructed: U_i^{n+1} = U_i^n + Δt^n V_i^n.
5. (Adaptation and load balancing). The grid entities are refined and coarsened, and the grid is repartitioned with respect to the new solution.

The main difference between the implementation of the scheme in DUNE compared to using ALUGrid directly concerns the storage of the data. In the ALUGrid implementation, the data (i.e., U_i^n, V_i^n) is stored directly in the objects representing the grid entities. Therefore accessing data is very direct and efficient, since it is loaded into the cache together with the geometric information. Also the reorganization of the grid during the adaptation process is very efficient, since storage space for the data is automatically allocated together with the geometric information for the new entities. Since grid adaptation is performed in each time step, the execution time for the grid modification is comparable to the cost of the numerical scheme (about 20% of the overall time). In this sense the explicit finite volume scheme is a very challenging problem for a grid interface like DUNE, where data is managed independently of the grid.

As a test case we use the forward facing step benchmark problem [10] for a perfect gas law with γ = 1.4. The domain is shown in Figure 1. As initial data we use U_i^0 = (1.4, 4.2, 0.0, 0.0, 8.8) for all leaf entities. The Dirichlet data on the inflow boundary is also set to this value and remains constant over time. This leads to a Mach three flow in the "wind-tunnel".
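The global reduction in step 3 of the algorithm above amounts to a single collective operation. Assuming an MPI-based parallelization (the function and variable names below are placeholders for illustration, not code taken from the implementation), it might be realized as in the following sketch:

#include <mpi.h>

// step 3: global time-step restriction; localMinDt is the minimum of the local
// admissible time-step sizes Delta_ij^n computed during the flux evaluation
double globalTimeStep(double localMinDt)
{
  double globalMinDt = 0.0;
  MPI_Allreduce(&localMinDt, &globalMinDt, 1, MPI_DOUBLE, MPI_MIN, MPI_COMM_WORLD);
  return globalMinDt; // Delta t^n, identical on every process, used in step 4
}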
Fig. 1. Setting for the Forward-Facing-Step problem (wind tunnel of 3.0 LE × 1.0 LE with a step of height 0.2 LE located 0.6 LE downstream of the inflow; inflow, outflow, reflexion and slip-boundary conditions as indicated)
Figure 2(top) shows the density of the solution at time t = 1.75 together with the locally refined grid. The bottom part of Figure 2 shows the grid partitioning for the same point in time using K = 8 processors. The evolution in the interval t ∈ [1.5, 2.0] of the grid size (average number of leaf elements together with the number of leaf elements in the largest and smallest partition) is plotted in Figure 3(left). The points of grid redistribution can be clearly distinguished, where maximum and minimum size are almost identical to the average number. The CPU time for the flux calculation (step 2 of the algorithm) in the same time interval is shown on the right of Figure 3. The total runtime per time-step is also shown. Apart from the time-steps in which a grid redistribution is performed, the total runtime per time-step clearly increases on average with the total number of elements. The peaks in the total runtime show the computational cost of the redistribution step, which increases the total runtime of these time-steps by only about 20%.
Fig. 2. Density (top) and partitioning (bottom) at t = 1.75
Fig. 3. Evolution of grid size (left: average, maximum and minimum number of leaf elements per partition) and corresponding runtimes (right: flux calculation and total runtime per time-step) for t ∈ [1.5, 2]
4.1 Definition of performance measures

We define the size of the problem in a fixed time interval [t_start, t_end] as the average over all time-steps t^n with n ∈ N := {m | t^m ∈ [t_start, t_end]} of the number of leaf entities in the locally adapted grid at time t^n:

  Σ_K := (1/|N|) Σ_{n∈N} Σ_{k=1}^{K} S_k^n ,

where S_k^n is the number of leaf entities of codimension zero at time-level t^n in the grid T_k. To measure the efficiency we study the average runtime on K processors: τ_K := (1/|N|) Σ_{n∈N} τ_K^n; here τ_K^n is the total runtime per time-step. For a more detailed analysis we furthermore study the computational cost τ_{s,K}^n on K processors of each time-step t^n for the steps s = 2, 4, and 5 of the algorithm described above; we set τ_{s,K} := (1/|N|) Σ_{n∈N} τ_{s,K}^n. Average values for the runtime per element are now easily defined as

  η_K := τ_K / Σ_K   and   η_{s,K} := τ_{s,K} / Σ_K   for s = 2, 4, 5.

To estimate the parallel effectiveness of the DUNE interface we compute the speedup and the efficiency using the average total runtime per element η_K. The speedup from L to K > L processors is then given by

  S^{L→K} := η_L / η_K

and is in the optimal case equal to K/L; the efficiency E^{L→K} := (L/K) S^{L→K} should therefore be approximately 1. Note that this definition of speedup and efficiency differs from the standard definitions, where the speedup and efficiency would be defined as s^{L→K} := τ_L / τ_K and e^{L→K} := (L/K) s^{L→K}, respectively. For non-adaptive computations (i.e. with a fixed number of entities), the definitions S^{L→K}, s^{L→K} and E^{L→K}, e^{L→K}, respectively, are identical. The use of the modified definitions S^{L→K} and E^{L→K} enables us to compare problems with a slightly varying number of entities.

4.2 Comparison between the original and the DUNE code

In Figure 4 we plot the average runtimes η_{2,K}, η_{4,K}, η_{5,K}, and η_K summing over all time-steps in [1.5, 2.0].
Fig. 4. Average runtime for steps 2, 4, 5 and the total runtime per time-step of the finite volume scheme using the original code and using the parallel DUNE interface, for K = 4, 8, 16, 32 processors
We exclude the results from the start of the simulation since at the beginning the grid is too small to reach meaningful conclusions on 32 processors. Our results confirm the observations from [1], demonstrating that the DUNE interface hardly reduces the efficiency of the numerical scheme. Although the explicit finite volume scheme is very challenging for a general grid interface, the difference between the original code and the DUNE code in the overall runtime is small (about 9 – 12 %).

Table 1. Relative performance losses of the DUNE code compared to the original implementation. The relative performance loss θ_{s,K} of a substep s is defined as θ_{s,K} := (η_{s,K}^{dune} − η_{s,K}^{orig}) / η_K^{dune}; for the total runtime we define θ_K := (η_K^{dune} − η_K^{orig}) / η_K^{dune}

 K     θ_{2,K}      θ_{4,K}       θ_{5,K}      θ_K
 4     0.0772087    -0.0497788    0.0929186    0.122302
 8     0.0752792    -0.0497255    0.0916843    0.117822
 16    0.0685054    -0.0496885    0.0915318    0.108574
 32    0.0493819    -0.0481409    0.0905287    0.090217

Table 1 shows the relative
contribution to the performance gap from each of the algorithm's substeps. It can be seen that the DUNE code is inferior especially in the adaptation and flux computation steps. For the adaptation step this loss in performance can be solely attributed to the disadvantage of storing the data separately from the grid, which causes a high amount of data reorganization in these explicit problems. The contribution from the flux computation is of about the same order, a time difference which is only due to the DUNE interface. Part of the deficit of the DUNE code can be made up in the update step, where we see a significant advantage of storing the data in a consecutive vector: no extra grid traversal is necessary, which makes this operation for the DUNE code about an order of magnitude faster, resulting in a speed regain of about 5 %. On 32 processors the losses in the flux step and the gains in the update step cancel each other, and the performance loss occurs solely in the adaptation step.

4.3 Efficiency of the parallel interface

The goal of the following investigation is to quantify the additional cost of having to access ALUGrid through the DUNE interface; in addition we also demonstrate the parallel efficiency of the code using the definitions from section 4.1. We performed the forward facing step simulation on the HP XC6000 Linux Cluster at the SSC Karlsruhe using K = 4, 8, 16, and 32 processors. Since we study a fixed size problem, the parallel overhead increases with the number of processors while the cost of the numerics decreases. Hence we cannot expect optimal efficiency in this case. The corresponding values for the original code and the DUNE code are shown in Table 2 (left) and Table 2 (right), respectively. We observe that the efficiency is quite high (around 90%) and that the values are approximately the same for both versions of the algorithm. As already pointed out, we cannot expect optimal efficiency using a fixed size problem: due to the restriction on the time-step Δt in the explicit finite volume scheme and due to the delicate control of the grid adaptation process, it is difficult to study problems with a fixed size per processor. For some indication of the effectiveness of the load-balancing procedure we study the efficiency of the flux calculation (step 2). Since this step involves no communication, we expect no parallel overhead, so that the runtime is only determined by the processor with the largest chunk of the grid. In Table 3 we see that for this step the efficiency is very close to 1, which demonstrates that the load-balancing procedure used for the simulations leads to a grid partitioning which is close to being optimal.

Table 2. Speedup and efficiency measured with respect to a run with four processors using a fixed size problem. On the left the results for the original code are shown; on the right the corresponding results for the DUNE code
         original code                                DUNE code
 K     η_K          S^{4→K}    E^{4→K}        K     η_K          S^{4→K}    E^{4→K}
 4     0.00890626   -          -              4     0.0101473    -          -
 8     0.00460453   1.93424    0.967118       8     0.0052195    1.94411    0.972054
 16    0.0023943    3.71978    0.929945       16    0.00268592   3.77795    0.944488
 32    0.00127103   7.00712    0.87589        32    0.00139707   7.26325    0.907906
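As a quick consistency check (our own arithmetic, not part of the original text), the speedup and efficiency columns follow directly from the listed η_K values via the definitions of section 4.1; for the DUNE code and K = 8, for example:

  S^{4→8} = η_4 / η_8 = 0.0101473 / 0.0052195 ≈ 1.944,   E^{4→8} = (4/8) · S^{4→8} ≈ 0.972,

in agreement with the table entries 1.94411 and 0.972054.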
Table 3. Speedup and efficiency of the flux calculation measured with respect to a run with four processors using a fixed size problem. On the left the results for the original code are shown; on the right the corresponding results for the DUNE code

         original code                                DUNE code
 K     η_{2,K}      S^{4→K}    E^{4→K}        K     η_{2,K}      S^{4→K}    E^{4→K}
 4     0.00762761   -          -              4     0.00841107   -          -
 8     0.00388685   1.96241    0.981207       8     0.00427977   1.96531    0.982653
 16    0.00198272   3.84705    0.961761       16    0.00216672   3.88194    0.970486
 32    0.00100939   7.55666    0.944582       32    0.00107838   7.79971    0.974963
5 Conclusions

In this paper, the connection of a parallel, adaptive grid implementation (ALUGrid) to the parallel grid interface of the Distributed and Unified Numerics Environment DUNE was described. Using the forward facing step test case from [10], it could be shown that the implementation is efficient. Although an explicit finite volume scheme on a locally refined grid is extremely challenging for a grid interface, the overhead of using ALUGrid through the parallel interface of DUNE only causes losses of less than 10 % in the total runtime.
References

1. Bastian P, Droske M, Engwer C, Klöfkorn R, Neubauer T, Ohlberger M, Rumpf M (2004) Towards a unified framework for scientific computing. In: Proc. of the 15th International Conference on Domain Decomposition Methods
2. Bastian P, Birken K, Johannsen K, Lang S, Neuss N, Rentz-Reichert H, Wieners C (1997) Comput Vis Sci 1:27–40
3. Schmidt A, Siebert K (2005) Design of adaptive finite element software – the finite element toolbox ALBERTA. Springer, Berlin Heidelberg New York
4. Dedner A, Rohde C, Schupp B, Wesenberg M (2004) Comput Vis Sci 7:79–96
5. Schupp B (1999) Entwicklung eines effizienten Verfahrens zur Simulation kompressibler Strömungen in 3D auf Parallelrechnern. PhD thesis, Mathematische Fakultät, Universität Freiburg
6. ALUGrid: http://www.mathematik.uni-freiburg.de/IAM/Research/alugrid/
7. METIS: http://www-users.cs.umn.edu/∼karypis/metis/
8. Karypis G, Kumar V (1999) SIAM Rev 41(2):278–300
9. DUNE: http://dune.uni-hd.de
10. Woodward P, Colella P (1984) J Comput Phys 54:115–173
Operational DWD numerical forecasts as input to flood forecasting models

G. Rivin¹ and E. Heise²

¹ Institute of Computational Technologies SB RAS, Novosibirsk, Russia [email protected]
² German Weather Service, Kaiserleistr. 42+44, 63067 Offenbach am Main, Germany [email protected]
Summary. The contribution of the DWD (DWD - Deutscher Wetterdienst) to the work under the EU project "An European Flood Forecasting System" (EU contract EVG1-CT-1999-00011, EFFS) is presented. The aim of the EU-funded project was the development of a prototype version of a medium-range (up to 10 days ahead) flood forecasting system for the whole of Europe. A brief description is given of the methods for preparing the input meteorological fields on high performance systems for the EU project and of the complex of operational atmospheric models used for their construction. The algorithm used for the numerical high-resolution analyses of 24h precipitation heights on the basis of surface observations is presented for the four flood events in 1994 (Po, November), 1995 (Rhine/Meuse, January), 1997 (Odra, July) and 2002 (Elbe, August).
1 Introduction The aim of the EU-funded project ’An European Flood Forecasting System’ (EFFS; 01 March 2000 to 30 September 2003) was the development of a prototype version of a medium-range (up to 10 days ahead) flood forecasting system for the whole of Europe. Predictions of this system can be used 1) for an early alert of national water authorities to the possibility of a forthcoming major flooding event, 2) as a fall-back system for national flood forecasting systems, and 3) as a replacement if national flood forecasting systems are not available for some river catchment areas. The medium-range flood forecasting system will mainly be based on the ensemble prediction system (EPS) of the European Centre for Medium-Range Weather Forecasts (ECMWF) and possibly also on the results of other global forecast models. In the development phase, which was covered by the EFFS-project, hydrological models capable of being run on a grid resolution of a few kilometres for the whole of Europe had to be developed and tested. This testing was accomplished by an investigation of three historical flood events. Meteorological forecast models
supplied the necessary forcing data for the hydrological models, covering the time span of the historical flood events. These data were used to perform flood forecasts, which were verified against the water levels at gauging stations of the rivers affected by the respective flood. Provided that the hydrological models are correct, this verification can also be used as an indirect verification of area-averaged rainfall predictions by the meteorological models. Some results of this kind are shown in the contribution of the project’s Work Package 7 to [1]. Three European meteorological centres participated in the project: The European Centre for Medium-Range Weather Forecasts (ECMWF), the Danish Meteorological Institute (DMI) and the German Weather Service (DWD). ECMWF used the predictions of the EPS and the results of deterministic forecasts, the DMI used the operational HIRLAM-version with lateral boundary data provided by ECMWF deterministic forecasts, and the DWD used the GME/LM-system. The data produced by the two limited-area models (HIRLAM and LM) were provided for the total area of the respective model. In contrast, data for the global models were supplied for a region covering Europe and some surrounding areas. The long lead times of the forecasts of the global models used in the project (EPS and deterministic models of ECMWF, GME of the DWD) allow for an estimation of the possible lead times for early flood warnings. On the other hand, the results of the limited-area models (HIRLAM of the DMI and LM of the DWD) serve as a kind of benchmark for high-resolution precipitation prediction in comparison to the rather low-resolution results of the global models. The high-resolution models should improve the areal distribution and the timing of the precipitation. The influence on the results of the hydrological models is investigated by the hydrologic institutions participating in the project. Some results are given in [1]. Of special interest for the examination of NWP models is their behaviour in extreme situations. The operational model verification, normally based on monthly means of certain quality parameters of the models, regularly blurs the effect of singular cases. In contrast, in this project the model behaviour is investigated especially with respect to extreme cases. In addition to the provision of forcing data for hydrological models, there were two more tasks for the DWD in the project. 1. The DWD performed high-resolution analyses of 24h precipitation heights on the basis of surface observations for the three flood events. The analyses are realized on the LM-grid for the whole LM-area. If the hydrological models are forced by observed precipitation, they should provide the best possible flood simulation. As a by-product these analyses serve for verification of the results of the meteorological models. 2. The DWD developed a prototype scheme [2], [3] for an operational near real-time 24h precipitation analysis on the basis of radar data and synoptic precipitation observations. This scheme, too, is based on the LM-grid.
As a prototype system it runs with the radar composite available at the DWD. An extension to larger areas requires the implementation of international standards for the exchange of radar data. This is beyond the scope of the present project. In this paper we first describe the synoptic situation of the flood events used for investigation in the project. Then the analysis method for observed precipitation amounts and the data base are presented, followed by an overview of the results of the analyses. Finally, the models used at the DWD are briefly described and results of the hindcasts are presented (more details in [2]).
2 Flood events

In the first meeting of the project three historical flood events were selected for investigation:
• The Po-flood in November 1994,
• The Rhine/Meuse-flood in January 1995,
• The Odra-flood in July 1997.
In autumn 2002 it was decided to add the Central-European- or Elbe-flood in August 2002 to the set of cases to be investigated. In order to restrict the data volume, only a small set of near surface data had to be delivered (see Table 2 in the Summary). The data were confined to the data necessary as forcing data for hydrological models. The selection of flood cases only was considered a shortcoming of the project, as only the probability of early detection of a flood signal could be investigated. There was no possibility to study the probability of false alarms. This problem was discussed at length at the beginning of the project, but it was decided to confine the project to flood situations in order to keep the amount of work tractable. Despite the reduction of the data sets a large amount of data was provided. ECMWF produced a set of 16 CD-ROMs, distributed to all participating institutions, whereas the DMI and the DWD provided the data to a server at the Stichting Waterloopkundig Laboratorium in Delft, the Netherlands (WL | DELFT HYDRAULICS). This server was installed especially for use in the EFFS-project. The DWD data provided are listed in chapter A2.2. of the Appendix in [2].

2.1 Synoptic situation

The synoptic situation for the three historical flood events was described in some detail by [4]. In particular, he presented time series of 12h precipitation heights for some stations in the respective catchment areas. These clearly
showed the differences in the synoptic situations. The Po-case of November 1994 was caused by extreme precipitation amounts on one day (November 5-6, 1994), when warm and moist Mediterranean air was advected towards the Alpine arc. The Rhine/Meuse-case of January 1995 was characterized by a very long duration of successive moderate rainfall events combined with snow melt. In the Odra-case of July 1997 a more or less stationary low pressure system over Central Europe caused long-lasting and in places exceptionally high precipitation heights. The Central-European- or Elbe-flood in August 2002 was caused by a so-called Vb-depression (e. g. [4]). Depressions of this type develop in the northwestern Mediterranean Sea, in front of a marked West European trough. East of the Alps they move to the north, often causing long-lasting heavy precipitation especially in the Odra catchment area. They are characterized by generally high northerly winds to the west of the system. This enhances precipitation in the Ore Mountains and in the Sudety Mountains. In fact, in the case of the Central-European- or Elbe-flood the highest precipitation rates were measured close to the ridge of the Ore Mountains (Zinnwald-Georgenfeld 312 mm/24h or 409 mm/120h). Figure 1 shows hourly values of precipitation for Zinnwald and for Dresden. For Zinnwald also the values recorded every minute are given. These plots clearly show the combination of a more or less continuous moderate to heavy precipitation, enhanced by some convective events. In these convective events extremely high precipitation rates (up to 1.5 mm/min) occurred.

2.2 Analysis method for observed precipitation

All analyses of observed precipitation are realized on the LM-grid. For the needs of the project this grid seems to have sufficient resolution to depict all relevant structures of precipitation.

General remarks, data base

The analysis of observed precipitation will solely be based on surface observations. At first glance the distribution of surface stations seems to be reasonable in Central Europe, as is shown in Figure 2 for the LM-area. In fact, only roughly one third of all observations is depicted in this map. A total of almost 2000 stations is available in the LM-area. Also, if we look at the distribution of stations reporting 12h precipitation amounts, the distribution looks quite reasonable. But actually precipitation distributions can be of much smaller scale than resolvable by the station distribution. Although much of the fine-scale structure will be averaged out in a 12h or 24h distribution, significant details will remain, which are not included in a precipitation distribution based on synoptic observations alone. Therefore, a reliable high-resolution precipitation analysis requires the inclusion of the dense precipitation networks available in all countries. But this strictly confines the analyses
to 24h precipitation amounts and to non real-time analyses, as the stations of the precipitation network only provide daily precipitation values and the observations are not distributed in real time. Also, these data are only available on special request for limited time spans. In part 3 of this report we will deal with an operational high-resolution precipitation analysis on the basis of synoptic and radar data. Besides the precipitation data regularly exchanged between the national weather services for use in the project, the following high-resolution precipitation data were available: 1) data of Germany and Switzerland for all flood events; 2) MAP-data (MAP = Mesoscale Alpine Programme, see [5]) for the Po-event; 3) data supplied by Météo France for the Rhine/Meuse event; 4) data supplied by IMGW (Poland) for the Odra event; and 5) data supplied by IMGW, and by the Hydrometeorological Services of the Czech Republic and of Slovenia for the Central European or Elbe event. These data made possible a reliable precipitation analysis at least for the respective catchment areas of the flood events. More detailed information on the data and on the analyses supplied to the project ftp-server is given in the Appendix. In the following we describe the analysis method for the combination of synoptic and high-resolution precipitation data. Also, examples of the results for the flood events will be given.

Analysis method for synoptic and high-resolution rain gauge data

The analysis of observed 24h precipitation height is based on a distance weighting scheme. This very simple scheme seems to be sufficient as in the regions of special interest - the river catchment areas of the respective flood events - data of dense precipitation measuring networks are available. The basis of the analyses are K values of the observed precipitation P_obs,k within a radius of influence R_scan surrounding an LM grid-point m. The analyzed value P_ana,m in the model grid point m is given by a weighted sum of all observations k:

  P_ana,m = Σ_{k=1}^{K} ( w_k^m P_obs,k ) / Σ_{k=1}^{K} w_k^m ,
if the sum of the weights w_k^m exceeds a threshold value w_scan depending on the radius of the influence circle, i.e., if

  Σ_{k=1}^{K} w_k^m > w_scan .
The weights are a combination of a horizontal distance function h and a vertical distance function v:

  w_k^m = h_k^m v_k^m .
The horizontal distance function is given by

  h_k^m = 0.5 [ 1 + cos( π ρ_h^{k,m} / R_scan ) ],

where ρ_h^{k,m} is the horizontal distance between the model grid point m and the observation location k. Correspondingly, we use a vertical weight

  v_k^m = 0.5 [ 1 + cos( π ρ_v^{k,m} / H_max ) ] / ( 1 + 0.8 ρ_v^{k,m} / H_max ).

Here H_max = max( ρ_v^{k,m}, Z_max ), and ρ_v^{k,m} is the vertical distance between the model grid point m and the observation location k. Z_max is the maximum vertical distance allowed. If ρ_v^{k,m} > Z_max, v_k^m = 0 is prescribed. The analyses are performed in 4 steps with increasing radius of influence. Only those model grid points which are not analyzed in a previous step are dealt with in the actual step. We use Z_max = 400 m and the following numbers:
Table 1. Steps of the analyses

  step          1     2     3     4
  R_scan (km)   0    40    70   110
  w_scan       0.2   0.2   0.2   0.02
If after step 4 there is still Σ_{k=1}^{K} w_k^m < w_scan, then the respective grid point will be assigned a negative value (undefined).
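A compact illustration of this weighting scheme for a single grid point is sketched below. It is a direct transcription of the formulas above and not the DWD implementation; the type and function names are assumptions, and the four analysis steps as well as the station search are omitted.

#include <cmath>
#include <algorithm>
#include <vector>

struct Obs { double precip, distH, distV; };  // P_obs,k and its distances to grid point m

// distance-weighted 24h precipitation analysis for one LM grid point;
// returns a negative value ("undefined") if the weight sum stays below w_scan
double analyzePoint(const std::vector<Obs>& obs, double Rscan, double wscan,
                    double Zmax = 400.0)
{
  const double pi = 3.14159265358979323846;
  double wsum = 0.0, pwsum = 0.0;
  for (const Obs& o : obs)
  {
    if (o.distH > Rscan || o.distV > Zmax) continue;   // station has no influence
    const double Hmax = std::max(o.distV, Zmax);
    const double h = 0.5 * (1.0 + std::cos(pi * o.distH / Rscan));
    const double v = 0.5 * (1.0 + std::cos(pi * o.distV / Hmax))
                         / (1.0 + 0.8 * o.distV / Hmax);
    const double w = h * v;
    wsum  += w;
    pwsum += w * o.precip;
  }
  return (wsum > wscan) ? pwsum / wsum : -1.0;
}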
2.3 Results of analyses of surface observations for test cases

In this part we describe some results of the analyses of observed precipitation. Especially, we use analyses which will be compared to model results in the next chapter.

The Po-flood, November 1994

This flood event started with some heavy precipitation (more than 90 mm/24 hours) in a very limited area in the Valle d'Aosta region on November 1, 1994. During the next days both the precipitation rate and the area of high precipitation rates increased, reaching the culmination on November 5, 1994. A maximum value of almost 360 mm/24 hours is analyzed for November 5. The maximum value amounts to almost 580 mm/168 hours, and large areas of the upper Po catchment area received precipitation heights of more than 100 mm.
The Rhine/Meuse-flood, January 1995

In contrast to the Po event, there were no extraordinarily high precipitation rates in this event. But there was long-lasting moderate to high precipitation over a large area over a period of eight days. Nearly the complete catchment area of Rhine and Meuse was covered by precipitation amounts of more than 50 mm, and large areas exhibited more than 100 mm. These large precipitation amounts were accompanied in the upper part of the Rhine valley by a considerable amount of water supplied by snow melt.

The Odra-flood, July 1997

The precipitation distribution in this event shows some similarity to both of the other events. Large precipitation amounts over extensive areas were accompanied by orographically enhanced values in the Sudety and Beskidy mountains. The analysis of observed precipitation shows a maximum of 263 mm/48 hours in this region for the period July 6, 1997, 06:00 UTC, to July 8, 1997, 06:00 UTC. For the total period of the Odra-flood, maxima of more than 540 mm/144 hours were analyzed. It should be noted that there is high precipitation also in the upper Vistula catchment area.

The Central-European- or Elbe-flood, August 2002

This event will be covered in the part on the analysis of observed precipitation on the basis of synoptic and radar data.

2.4 Model system for hindcasts

The present model system of the DWD consists of a global hydrostatic model GME [6] and a limited-area nonhydrostatic model LM [7]. These model systems, which were not operational during the time of the historical flood events (1994, 1995, and 1997), were used for the hindcasts of the flood events. GME is a global gridpoint model with a rather uniform horizontal resolution of some 60 km and 31 vertical layers. Prognostic variables are temperature, horizontal wind components, specific humidity, specific cloud water content, and surface pressure. All these variables are defined on the main model levels, the lowest level at a height of approximately 30 m. A comprehensive physics package includes the radiative transfer [8], a grid-scale precipitation scheme with a diagnostic treatment of the vertical precipitation flux [9], convection [10], vertical turbulent fluxes after [11] based on [12] for the Prandtl layer and on a level 2 closure after [13] for the free atmosphere, subgrid-scale orographic effects [14], and a soil model based on [15]. The analysis scheme is a 3D multivariate optimal interpolation for mass
and wind fields and a 3D univariate optimal interpolation for specific humidity in the atmosphere. Additionally, sea surface temperature and snow height are analyzed. LM is a limited-area fully elastic non-hydrostatic model on a rotated latitude/longitude grid with a horizontal resolution of some 7 km and 35 vertical layers. The prognostic variables are temperature, horizontal and vertical wind components, specific humidity, specific cloud water content, pressure, and turbulent kinetic energy. Most of the parameterizations are the same as for GME. Exceptions are: i) Subgrid-scale orographic effects are not parameterized. ii) The turbulence parametrization in the atmosphere is based on the prognostic turbulent kinetic energy (level 2.5 after [13]). iii) The surface layer includes a laminar sublayer and a transition layer between the earth's surface and the Prandtl-layer [16]. Instead of an optimal interpolation for the atmospheric variables a nudging scheme is applied. In order to run the GME-/LM-system for the hindcasts, the GME-assimilation was started 2 days before the respective event by interpolating the analyses of the DWD's former global model to the GME-grid. Then the GME assimilation was run over a period of two days for adjusting the fields to this model. After two days of assimilation the operational analysis/forecast cycle started for GME and LM over the whole period of the event. The GME forecasts provided the boundary data for LM, whereas initial data for LM were provided by LM's nudging analysis. Forecasts were run for 12:00 UTC only, in order to be comparable to the ECMWF model results. For the Central-European- or Elbe-flood in August 2002 additional model runs were not necessary. During this flood the GME-/LM-system was used operationally. All forecast results were still available in the DWD's model archive. Only a retrieval of the data from the archives was necessary. Of special importance for a successful prediction of flood events is a detailed representation of orography in the models. The LM-orography resolves even very small details; the maximum elevation in the Alps exceeds 3000 m. Much less detail is present in the GME-orography for the LM-area; the maximum height in the Alps is below 2500 m. However, it must be kept in mind that the forecast runs with GME are performed using the triangular model grid of ca. 60 km resolution. Only for the distribution of data for use by the hydrologic institutions participating in the EFFS-project, results are interpolated to the latitude/longitude-grid of 0.75° resolution.

2.5 Hindcast results in comparison to analyses of observed precipitation

In this chapter we provide some examples of model results and compare them to the corresponding observations shown in section 2.3.
Fig. 1. GME forecasts of precipitation (mm) for the verification period from November 5, 1994, 06:00 UTC to November 6, 1994, 06:00 UTC. The initial time of the forecasts is: a) November 1, 12:00 UTC, b) November 2, 12:00 UTC, c) November 3, 12:00 UTC, and d) November 4, 12:00 UTC
The Po-flood, November 1994

Figure 1 shows the GME results for a series of four consecutive forecasts, the first one starting nearly four days before the onset of the extraordinary precipitation. Even in this forecast there is a very good signal for precipitation amounts of more than 100 mm/24 hours in the Alps. The next two forecasts improve the precipitation distribution considerably by splitting the high precipitation
Fig. 2. LM forecasts of precipitation (mm) for the period a) November 5, 1994, 06:00 UTC to November 6, 1994, 06:00 UTC, initial date November 4, 1994, 12:00 UTC; b) November 2, 1994, 06:00 UTC to November 8, 1994, 06:00 UTC, results of 6 forecasts with initial dates November 1st until November 6th are added to produce the total predicted precipitation height
area into two separate maxima and by reducing the too high precipitation predicted in the first forecast in the northern Apennines area. It is worth noting that in the last forecast the highest precipitation rates are predicted close to the Mediterranean Sea and a region of high precipitation is predicted south-west of the Alpine arc, where observations show only very low precipitation amounts. The LM forecasts (Figure 2) produce a clear separation of the two precipitation maxima, and the absolute maximum is correctly placed south-west of Lago Maggiore. A problem is a considerable overprediction of precipitation amounts by LM. This is also obvious in the sum over 6 days (Figure 2b). This can at least partly be due to an overestimation of precipitation on the windward side of mountains, combined with an underestimation on the lee side. Corresponding to the 18 to 42 hour GME forecast (Figure 1d), LM produces an area of high precipitation rates south-west of the Alpine arc, where only very low amounts were observed.

The Rhine/Meuse-flood, January 1995

This event was characterized by long-lasting moderate precipitation. In general, the predicted precipitation amounts are considerably lower than observed. Due to orographic forcing, LM forecasts are somewhat better than
the GME forecasts. But also LM underestimates the precipitation. In this event the precipitation effect on the flood was enhanced by melting of snow. Whereas in regions above ∼ 1500 m snowfall enhanced the snow cover from 22 to 30 January, the existing snow cover in regions lower than ∼ 1500 m completely melted.

The Odra-flood, July 1997

The maximum precipitation was observed from July 6, 06:00 UTC to July 8, 06:00 UTC. In Figure 3 we look at the model behaviour on the basis of 24 hour precipitation averaged for the Odra catchment area. For the first 14 days of July 1997 the 18 to 42 hour GME- and LM-forecasts are compared to the analyses of observed precipitation. The figure clearly reveals that the high-resolution LM-forecasts are superior to the GME-forecasts for most of the 14 days shown. As the EFFS-project aims at the development of a medium-range flood warning system, we will also look into the longer range GME-forecasts. For the period July 5–14 all GME-forecasts with five different lead times ranging from 18 to 42 hours to 114 to 138 hours are compared to the respective analyses in Figure 4. As a general conclusion we see that decreasing lead times do not automatically increase the forecast quality, even on the basis of averages over large areas. This is especially obvious for days 7 and 8. Here forecasts with lead times of 90 to 114 hours (for day 7) and even 114 to 138 hours (for
Fig. 3. Precipitation rates averaged over the Odra catchment area for the first 14 days of July 1997. Blue: Analysis, green: LM 18 to 42 hour forecasts, brown: GME 18 to 42 hour forecasts
Fig. 4. Precipitation rates averaged over the Odra catchment area for July 5–14, 1997. Blue: Analyses, GME-forecasts with lead times of 18 to 42 hours (GME I), 42 to 66 hours (GME II), 66 to 90 hours (GME III), 90 to 114 hours (GME IV), 114 to 138 hours (GME V)
day 8) give the best results. This poses a serious problem to flood prediction (and - of course - also to weather prediction). The generally accepted method is to use only the most recent meteorological forecast to produce a flood forecast. This can result in completely misleading forecasts. It seems to be advisable to additionally use older meteorological forecasts with longer lead times in comparison to the most recent forecast, or to combine older forecasts to use them as a single input to hydrological models. It should be noted that the same problem of a non-monotonous increase of forecast quality with decreasing lead time was also very important for the Central-European- or Elbe-flood in August 2002. For the situation in the Czech Republic this problem was discussed by [18].
3 Summary

Hindcasts of three historical flood events using the present operational model system of the DWD (a global model GME of ca. 60 km resolution and a limited-area model LM of ca. 7 km resolution) were the main tasks of the DWD to support the development of a medium-range flood forecasting system in the EFFS project. The hindcasts were run over time periods of 10 to 14 days, depending on the respective flood event. The data required by hydrologists as forcing data for their models were extracted from the forecast
Table 2. A set of near surface data

 Description                  Element No   Level-type   Tab No   Unit
 surface pressure                  1            1           2    Pa
 water equivalent of snow         65            1           2    kg/m2
 large scale precipitation       102            1         201    kg/m2
 large scale snow                 79            1           2    kg/m2
 convective precipitation        113            1         201    kg/m2
 convective snow                  78            1           2    kg/m2
 latent heat flux                121            1           2    W/m2
 2-m temperature                  11          105           2    K
 2-m dew point                    17          105           2    K
 2-m max temperature              15          105           2    K
 2-m min temperature              16          105           2    K
 10-m zonal wind                  33          105           2    m/s
 10-m meridional wind             34          105           2    m/s
 soil moisture in layer 1         86          112           2    kg/m2
 soil moisture in layer 2         86          112           2    kg/m2
results and were delivered to the project ftp-server. Also the respective data for the August 2002 Central-European- or Elbe-flood for a period of 40 days were delivered. This was done at the request of the project coordinator. High-resolution analyses of observed precipitation on the basis of synoptic data and dense precipitation networks for the three historical flood events were performed. Data of precipitation networks were collected for the catchment areas of the rivers affected by the respective flood events. These analyses were also delivered to the project ftp-server. The analyses were performed on the LM-grid for the LM-domain. The time periods are the same as for the respective hindcasts. As a third part of the work in the project, a precipitation analysis scheme based on a combination of real time (synoptic) observations and radar data was developed. A period of five days around the culmination day of the Central-European- or Elbe-flood was used to test the analysis scheme. As a kind of reference, also for this period analyses based on the use of synoptic and high-resolution precipitation data were performed. Additional high-resolution data were provided by members of the project. The analyses of this period were provided to the project ftp-server. In general the hindcasts showed rather good success in predicting the precipitation distributions for the different flood events. Problems occur because of signals changing from day to day in consecutive meteorological forecasts. This makes the interpretation of consecutive flood forecasts difficult, as flood heights vary considerably. The high-resolution precipitation analyses were used for model validation. They can be performed only after the results of dense precipitation networks have been collected. Therefore, only a combination of the data available in real time (synoptic and radar
data) can be used to perform a real time high-resolution analysis of precipitation, which can be used for real time model verification and as input to water budget models. The analysis scheme developed in the project is superior both to analyses on the basis of synoptic data alone and to analyses based on unadjusted radar data. The forecast fields from LM and GME provided for all historical flood events are listed in Table 2 (the depth of soil moisture layer 1 is 0.00 m - 0.10 m and of soil moisture layer 2 is 0.10 m - 1.00 m).
4 Acknowledgement

The authors gratefully acknowledge the contributions of Alexander Cress, who conducted the hindcasts and started the work with the high-resolution analysis of observed precipitation. Christina Köpken provided a basic version of a program to use the radar data. Thomas Hanisch was indispensable for technical support while using his scripts necessary for running the hindcasts. Additional technical support was provided by Bodo Ritter. We also acknowledge the support of Ulrich Damrath in using the high-density precipitation data. Special thanks are due to the MAP data centre and to those members of the EFFS project who supplied additional high-density precipitation data. We also thank Peter Meyring, who was responsible for the layout of this Report. Special thanks are due to the two coordinators of the EFFS-project, Jaap Kwadijk and Paolo Reggiani. They ensured a fruitful and friendly atmosphere of cooperation throughout the whole course of the project.
References

1. Reggiani P (2002-2003) EFFS Annual Report 3
2. Heise E, Rivin G (2004) Precipitation analysis and prediction. Final report on the DWD-contribution to EU project 'An European Flood Forecasting System', EU-Contract EVG1-CT-1999-00011, Arbeitsergebnisse, Deutscher Wetterdienst, Offenbach am Main, Germany, 80:38
3. Rivin G, Heise E (2004) High resolution 24 hour precipitation analysis on the basis of radar and synoptic data. In: Joint issue of Comp Techn 9 and Bulletin of KazNU 3/42: Proc. of the Int. Conf. "Computational and informational technologies for science, engineering and education", Almaty, Kazakhstan, part 1
4. Sattler K (2002) Precipitation hindcasts of historical flood events. Danish Meteorological Institute, Scientific Report 02-03, Copenhagen, 26
5. Scherhag R (1948) Neue Methoden der Wetteranalyse und Wetterprognose. Springer, Berlin, Göttingen, Heidelberg
6. Frei C, Schär Chr (1998) Int J Climatol 18:873–900
7. Majewski D, Liermann D, Prohl P, Ritter B, Buchhold M, Hanisch Th, Paul G, Wergen W, Baumgardner J (2002) Mon Weather Rev 130:319–338
8. Steppeler J, Doms G, Schättler U, Bitzer HW, Gassmann A, Damrath U, Gregoric G (2003) Meteorol Atmos Phys 1/4:75–96
9. Ritter B, Geleyn JF (1992) Mon Wea Rev 120:303–325 10. Doms G, Schättler U (1997) The nonhydrostatic limited-area model LM (LokalModell) of DWD. Scientific documentation. Deutscher Wetterdienst, Offenbach, Germany 11. Tiedtke M (1989) A comprehensive mass flux scheme for cumulus parameterization in large-scale models. Mon Weather Rev 117:1779–1800 12. Müller E (1981) Turbulent flux parameterization in a regional-scale model. In: Proc. of ECMWF Workshop on Planetary Boundary Layer Parameterization. ECMWF, Reading, UK 13. Louis JF (1979) Bound Layer Meteor 17:187–202 14. Mellor GL, Yamada T (1974) J Atmos Sci 31:1791–1806 15. Lott F, Miller M (1997) Quart J Roy Meteor Soc 123:101–128 16. Jacobsen I, Heise E (1982) Beitr Phys Atmos 55:128–141 17. Raschendorfer M (2001) The new turbulence parameterization of LM. COSMONewsletter 1: 89–97 (www.cosmo-model.org) 18. Sopko F (2003) Model predictions of the floods in the Czech Republic during August 2002: The forecasters’ perspective. ECMWF Newsletter 97:2–6
Robustness and efficiency aspects for computational fluid structure interaction

M. Neumann¹, S.R. Tiyyagura², W.A. Wall³, and E. Ramm¹

¹ Institute of Structural Mechanics, University of Stuttgart, Pfaffenwaldring 7, 70550 Stuttgart, Germany {neumann,ramm}@statik.uni-stuttgart.de
² High Performance Computing Center Stuttgart, Allmandring 30, 70550 Stuttgart, Germany [email protected]
³ Chair of Computational Mechanics, Technical University of Munich, Boltzmannstraße 15, 85747 Garching, Germany [email protected]
Summary. For the numerical simulation of large scale CFD and fluid-structure interaction (FSI) problems efficiency and robustness of the algorithms are two key requirements. In this paper we would like to describe a very simple concept to increase significantly the performance of the element calculation of an arbitrary unstructured finite element mesh on vector computers. By grouping computationally similar elements together, the length of the innermost loops and the vector length can be controlled. In addition the effect of different programming languages and different array management techniques will be investigated. A numerical CFD simulation will show the improvement in the overall time-to-solution on vector computers as well as on other architectures. Especially for FSI simulations also the robustness of the algorithm is very important. For the transient interaction of incompressible viscous flows and nonlinear flexible structures, commonly used sequential staggered coupling schemes exhibit weak instabilities. As best remedy to this problem, subiterations should be invoked to guarantee kinematic and dynamic continuity across the fluid-structure interface. To ensure the efficiency of these iterative substructuring schemes, two robust and problem-independent acceleration methods are proposed.
1 Introduction

To be a true alternative or complement to wind tunnel experiments, field tests and prototyping, Computational Fluid Dynamics (CFD) and Fluid Structure Interaction (FSI) simulations must fulfill two key requirements: efficiency and robustness. In this paper we cover some aspects of these requirements. For the numerical simulation of large scale CFD and FSI problems computing time is still a limiting factor for the size and complexity of the problem. But very often the existing algorithms only use a small fraction of the available computer power [1]. Therefore it is highly advisable to take a closer
look at the efficiency of algorithms and improve them to make the best out of the available computer power. Besides the solution of the set of linear equations, the element evaluation and assembly for stabilized, highly complex elements on unstructured grids is often a main time-consuming part of the calculation. Whereas a lot of research is done in the area of solvers and their efficient implementation, there is hardly any literature on the efficient implementation of advanced finite element formulations. Still, a large amount of computing time can be saved by an expert implementation of the element routines. We would like to propose a straightforward concept to significantly improve the performance of the integration of element matrices of an arbitrary unstructured finite element mesh on vector computers. Partitioned analysis techniques enjoy great popularity for the solution of multiphysics problems. This is due to their computational superiority over simultaneous, i.e. fully coupled monolithic approaches, as they allow the independent use of suitable discretization methods and modular, optimized analysis software for physically and/or dynamically different partitions. However, major drawbacks in terms of accuracy and stability problems can occur along with a number of rather popular partitioned analysis approaches. Therefore the subject of the second part of this paper is to analyze and discuss specific problems of partitioned analysis techniques and to introduce possible remedies for the considered class of applications - the transient interaction of incompressible viscous flows and nonlinear flexible structures.
2 Fluid structure interaction environment

Our partitioned fluid structure interaction environment is described in detail in Wall [2] or Wall et al. [3] and is therefore presented here only in a compact overview in figure 1. In this approach a non-overlapping partitioning is employed, where the physical fields fluid and structure are coupled at the interface Γ, i.e. the wetted structural surface. A third computational field Ω^M, the deforming fluid mesh, is introduced through an Arbitrary Lagrangean-Eulerian (ALE) description. Each individual field is solved by semi-discretization strategies with finite elements and implicit time stepping algorithms. A key requirement for the coupling schemes is to fulfill two coupling conditions: the kinematic and the dynamic continuity across the interface. Kinematic continuity requires that the positions of structure and fluid boundary are equal at the interface, while dynamic continuity means that all tractions at the interface are in equilibrium:
Fig. 1. Non-overlapping partitioned fluid structure interaction environment
  d^Γ(t) · n = r^Γ(t) · n   and   u^Γ(t) = u_G^Γ(t) = f(r^Γ(t)),     (1)

  σ_S^Γ(t) · n = σ_F^Γ(t) · n,     (2)
with n denoting the unit normal vector on the interface. Satisfying the kinematic continuity leads to mass conservation at Γ, satisfying the dynamic continuity leads to conservation of linear momentum, and energy conservation finally requires the simultaneous satisfaction of both continuity equations. In this paper (and in figure 1) only no-slip boundary conditions and sticking grids at the interface are considered.
3 CFD and high performance computing

The numerical simulation of CFD and FSI problems can provide a cost-effective alternative or complement to wind tunnel experiments, field tests and prototyping. These numerical simulations rely heavily on high performance computing (HPC), which consists of two major components: The first one is related to advanced algorithms capable of accurately simulating complex, real-world problems. The other component is advanced computer hardware with sufficient power to execute those simulations [4]. To evaluate the performance of a numerical method several criteria are of course available. For computational scientists who attempt to solve a given
problem the most relevant one is most probably the time-to-solution. This criterion takes into account a lot of different factors. For example, these are the efficiency of the algorithm, the use of a particular hardware platform at a percentage of its peak speed and also the effort to include additional capabilities into the numerical code. However, the multitude of quantities included in this benchmark makes it difficult to use it for comparisons. A more universal performance benchmark is the raw computational speed, typically expressed in FLoating-point OPerations per Second (FLOPS). Even though the significance of such an isolated performance figure is limited, it still gives an approximate measurement of the capability of a given algorithm-architecture combination [4]. FLOPS is also the basis to evaluate the efficiency an application or algorithm reaches on a given architecture: the efficiency is usually given as the ratio of the achieved sustained FLOPS of the application and the peak FLOPS of the architecture.

3.1 Computational efficiency

For the numerical simulation of large scale CFD and fluid-structure interaction (FSI) problems computing time is still a limiting factor for the size and complexity of the problem. Waiting for more powerful computers will not solve this problem, as the demand for larger and more complex simulations usually grows as fast as the available computer power. It is rather highly advisable to use the full power that computers already offer today. Especially on superscalar processors the gap between sustained and peak performance is growing for scientific applications. Very often the sustained performance is below 5 percent of peak. On the other hand the efficiency on vector computers is usually much higher. For vectorizable programs it is possible to achieve a sustained performance of 30 to 60 percent of the peak performance, or above [1, 5]. Starting with a very low level of serial efficiency, e.g. on a superscalar computer, it is a reasonable assumption that the overall level of efficiency of the code will drop even further when run in parallel. Especially if one is to use only moderate numbers of processors, it is essential to use them as efficiently as possible. Therefore in this paper we only look at the serial efficiency as one key ingredient for a highly efficient parallel code [1].

3.2 Performance optimization

To achieve a high efficiency on a specific system it is in general advantageous to write hardware specific code, i.e. the code has to make use of the system specific features like vector registers or the cache hierarchy. As our main target architecture is a NEC SX-6 parallel vector computer, we will address some aspects of vector optimization in this paper. But as we will show later, this kind of performance optimization also has a positive effect on the performance of the code on other architectures.
Vector processors Vector processors like the NEC SX-6 processor use a very different architectural approach from conventional scalar processors. Vectorization exploits regularities in the computational structure to accelerate uniform operations on independent data sets. Vector arithmetic instructions involve identical operations on the elements of vector operands located in the vector registers. Many scientific codes, such as FE programs, allow vectorization, since they are characterized by predictable fine-grain data-parallelism [5]. The SX-6 processor contains an 8-way replicated vector pipe capable of issuing a MADD each cycle and 72 vector registers, each holding 256 64-bit words. For non-vectorizable instructions the SX-6 also contains a cache-based superscalar unit. Since the vector unit is significantly more powerful than this scalar processor, it is critical to achieve high vector operations ratios, either via compiler discovery or explicitly through code and data (re-)organization. To summarize, the main aspects one has to consider to obtain high performance on a vector processor are:
• a high vector operations ratio, to make efficient use of the more powerful vector unit
• a large (optimal) vector length, to efficiently use the vector registers
⇒ The same operations have to be performed simultaneously on a large amount of independent data.
Vector optimization To achieve high performance on a vector architecture there are three main variants of vectorization tuning:
• compiler flags
• compiler directives
• code modifications
The easiest way to influence the vector performance of a code is of course the use of compiler flags. The behavior of modern compilers with respect to vectorization can be influenced by flags in various areas, e.g. expansion, unrolling, division and reordering of loops or inlining of functions. Compiler directives can control similar aspects of vectorization as compiler flags, but they can be applied more specifically, e.g. only to some loops of a function. Both these techniques rely on the existence of vectorizable code and on the ability of the compiler to recognize it. In many cases the resulting performance will not be as good as desired. In most cases optimal performance on a vector architecture can only be achieved with code that was especially designed for this kind of processor. Here the data management as well as the structure of the algorithms are
important. But often it is also very effective for an existing code to concentrate the vectorization efforts on performance-critical parts and to use more or less extensive code modifications to achieve better performance. The reordering or fusion of loops to increase the vector length, or the use of temporary variables to break data dependencies in loops, can be simple measures to improve the vector performance. We would like to put forward a very simple concept that requires only minor changes to an existing FE code and significantly improves the vector performance of the integration of element matrices on an arbitrary unstructured finite element mesh. 3.3 Vectorization concept for FE The main idea of this concept is to group computationally similar elements into sets and then perform all calculations necessary to build the element matrices simultaneously for all elements in one set. Computationally similar in this context means that all elements in one set require exactly the same operations to integrate the element matrix, i.e. they have e.g. the same topology and the same number of nodes and integration points.
Old structure (element calculation):
  loop all elements
    loop gauss points
      shape functions, derivatives, etc.
      loop nodes of element
        loop nodes of element
          ... calculate stiffness contributions ...
    assemble element matrix
New structure (element calculation):
  group similar elements into sets
  loop all sets
    loop gauss points
      shape functions, derivatives, etc.
      loop nodes of element
        loop nodes of element
          loop elements in set
            ... calculate stiffness contributions ...
    assemble all element matrices
Fig. 2. Old and new structure of the algorithm to evaluate element matrices
The changes necessary to implement this concept are visualized in the structure charts in figure 2. Instead of looping over all elements and calculating the element matrix individually, now all sets of elements are processed. For every set the usual procedure to integrate the matrices is carried out, except that on the lowest level, i.e. as the innermost loop, a new loop over all elements in the current set is introduced. As some intermediate results now have to be stored for all elements in one set, the size of these sets is limited. The optimal size also depends strongly on the hardware architecture. For a detailed description of how the optimal set size depends on the processor type see section 3.4.
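The following minimal sketch in C illustrates the restructured kernel. All names and the particular stiffness term are hypothetical stand-ins (the actual element routines of the code discussed here are more involved); the point is only that the innermost loop runs over the elements of one set, carries no data dependencies and accesses consecutive memory, so it can be vectorized.

  #include <stddef.h>

  /* Accumulate one representative stiffness contribution for all elements
   * of one computationally similar set.  The element index e is the
   * innermost, vectorizable loop; intermediate data are stored with the
   * element index running fastest (one-dimensional arrays with manual
   * indexing).  estif is assumed to be zero-initialized by the caller.  */
  void stiffness_for_set(size_t nele,           /* elements in this set   */
                         size_t nnode,          /* nodes per element      */
                         size_t ngauss,         /* integration points     */
                         const double *deriv,   /* [ngauss][nnode][nele]  */
                         const double *detj_w,  /* [ngauss][nele]         */
                         double *estif)         /* [nnode][nnode][nele]   */
  {
    for (size_t g = 0; g < ngauss; ++g)
      for (size_t i = 0; i < nnode; ++i)
        for (size_t j = 0; j < nnode; ++j)
          for (size_t e = 0; e < nele; ++e)     /* innermost: loop over set */
            estif[(i * nnode + j) * nele + e] +=
                deriv[(g * nnode + i) * nele + e] *
                deriv[(g * nnode + j) * nele + e] *
                detj_w[g * nele + e];
  }

Choosing nele close to the vector register length (cf. section 3.4) lets the compiler fill the vector registers completely.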
3.4 Further influences on efficiency Programming language & array management It is well known that the programming language can have a large impact on the performance of a scientific code. Fortran is often considered the best choice for highly efficient code [6], whereas some features of modern programming languages, like pointers in C or objects in C++, make vectorization more complicated or even impossible [5]. Especially the very general pointer concept in C makes it difficult for the compiler to identify data-parallel loops, as different pointers might alias each other. There are a few remedies for this problem, like compiler flags or the restrict keyword. The latter is quite new in the C standard and it seems that it is not yet fully implemented in every compiler. We have implemented the proposed concept for the calculation of the element matrices in 5 different variants. The first four of them are implemented in C, the last one in Fortran. Further differences are the array management and the use of the restrict keyword. For a detailed description of the variants see table 1. Multi-dimensional arrays denote the use of 3- or 4-dimensional arrays to store intermediate results, whereas one-dimensional arrays imply a manual indexing.

Table 1. Influences on the performance. Properties of the five different variants and their relative time for calculation of stiffness contributions

                      orig    var1    var2      var3    var4      var5
  language            C       C       C         C       C         Fortran
  array dimensions    multi   multi   multi     one     one       multi
  restrict keyword    -       -       restrict  -       restrict  -
  SX-6 (1)            1.000   0.024   0.024     0.016   0.013     0.011
  Itanium2 (2)        1.000   1.495   1.236     0.742   0.207     0.105
  Pentium4 (3)        1.000   2.289   1.606     1.272   1.563     0.523
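A minimal sketch (C99, hypothetical names, not taken from the code described here) may make the aliasing issue concrete: without the restrict qualifier the compiler has to assume that the output array may overlap the inputs and may therefore refrain from vectorizing the loop. Variants 2 and 4 in table 1 differ from variants 1 and 3 only by this qualifier.

  #include <stddef.h>

  /* Without restrict: out, a and b may alias, so the compiler must be
   * conservative and may generate scalar code.                          */
  void axpy_plain(size_t n, double alpha,
                  const double *a, const double *b, double *out)
  {
    for (size_t i = 0; i < n; ++i)
      out[i] = alpha * a[i] + b[i];
  }

  /* With restrict (C99): the programmer guarantees that the arrays do
   * not overlap, so every iteration is independent and the loop is
   * trivially data parallel.                                            */
  void axpy_restrict(size_t n, double alpha,
                     const double *restrict a, const double *restrict b,
                     double *restrict out)
  {
    for (size_t i = 0; i < n; ++i)
      out[i] = alpha * a[i] + b[i];
  }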
The results in table 1 give the CPU time spent for the calculation of some representative element matrix contributions, normalized to the time of the original code. The positive effect of the grouping of elements can be clearly seen for the vector processor. The calculation time is reduced to less than 3 % for all variants.
(1) NEC SX-6, 565 MHz; NEC C++/SX Compiler, Version 1.0 Rev. 063; NEC FORTRAN/SX Compiler, Version 2.0 Rev. 305.
(2) Hewlett Packard Itanium2, 1.3 GHz; HP aC++/ANSI C Compiler, Rev. C.05.50; HP F90 Compiler, v2.7.
(3) Intel Pentium4, 2.6 GHz; Intel C++ Compiler, Version 8.0; Intel Fortran Compiler, Version 8.0.
On the other two processors the grouping of elements does not result in a better performance in all cases. The Itanium architecture shows an improved performance only for the one-dimensional array management and for the variant implemented in Fortran, and the Pentium processor in general performs worse with the new structure of the code; only for the last variant is the calculation time cut in half. It can be clearly seen that the effect of the restrict keyword varies for the different compilers/processors and also between one-dimensional and multi-dimensional arrays. Using restrict on the SX-6 results only in small improvements for one-dimensional arrays, while on the Itanium architecture the speed-up for this array management is considerable. In contrast, on the Pentium architecture the restrict keyword has a positive effect on the performance of multi-dimensional arrays and a negative effect for one-dimensional ones. The most important result of this analysis is the superior performance of Fortran: the last variant is the fastest on all platforms. This is the reason we favor Fortran for performance-critical scientific code and use the last variant for our further examples. Size of element sets As already mentioned, the size of the element sets, and with it the length of the innermost loop, needs to be chosen differently on different hardware architectures. To find the optimal sizes on the three tested platforms we measured the time spent in one subroutine, which calculates representative element matrix contributions, for different sizes of the element sets (figure 3).
Fig. 3. Calculation time for one subroutine that calculates representative element matrix contributions for different sizes of one element set (calculation time in seconds versus set size, 0 to 512 elements, for SX-6, Itanium2 and Pentium4)
For the cache-based Pentium4 processor the best performance is achieved for very small sizes of the element sets. This is due to the limited size of the cache, whose use is crucial for performance. The best performance for the measured subroutine was achieved with 12 elements per set. The Itanium2 architecture shows an almost constant performance over a large range of sizes; the best performance is achieved for a set size of 23 elements. For the vector processor SX-6 the calculation time decreases with growing set size up to 256 elements per set, which corresponds to the size of the vector registers. For larger sets the performance varies only slightly, with optimal values at multiples of 256. For further calculations we use 512 elements per set on the vector architecture. 3.5 Results
To conclude, we would like to demonstrate the positive effect of the proposed concept for the calculation of element matrices on a full CFD simulation. The flow is the Beltrami flow (for details see [7]) and the unit cube was discretized by 32768 stabilized 8-noded hexahedral elements [2].
Fig. 4. Split-up of total calculation time for 32 time steps of the Beltrami flow on the SX-6 (total time in seconds for the original code and for variant 5, split into element calculation, solver and other contributions)
In figure 4 the total calculation time for 32 time steps of this example and the fractions for the element calculation and the solver on the SX-6 are given for the original code and the full implementation of variant 5. The time spent for the element calculation, formerly the major part of the total time, could be reduced by a factor of 24. This considerable improvement can also be seen in the sustained performance given in table 2 as percentage of peak performance. The original code not written for any specific architecture has only a poor performance on the SX-6 and a moderate one on the other platforms. The new code, designed for a vector processor, achieves for the complete element calculation an acceptable efficiency of around 30 percent and for several subroutines, like the calculation of some stiffness contributions, even a superior efficiency of above
Table 2. Efficiency of original and new code in percent of peak performance

                element calc.           stiffness contr.
                original     var5       original     var5
  SX-6          0.95         29.55      0.83         71.07
  Itanium2      8.68         35.01      6.59         59.71
  Pentium4      12.52        20.16      10.31        23.98
70 percent. It has to be noted that these high performance values come along with a vector length of almost 256 and a vector operations ratio of above 99.5 percent. But also for the Itanium2 and Pentium4 processors, which were not the main target architectures, the performance was improved significantly and for the Itanium2 the new code reaches around the same efficiency as on the vector architecture.
4 Partitioned analysis schemes for fluid structure interaction While in the last chapter mainly efficiency aspects with respect to the computational implementation have been discussed, the following focuses on robustness and efficiency aspects from the algorithmic point of view. The partitioned analysis schemes under consideration, with synchronous time discretizations in the fluid and structural part (figure 5), can be cast in a unified algorithmic framework. They are discussed in detail in Mok [8]. In the following, (·)_I and (·)_Γ denote variables/coefficients in the interior of a subdomain Ω_j and on the coupling interface Γ, respectively, while a vector without any of the subscripts I and Γ comprises degrees of freedom (DOFs) on the whole subdomain (interior and interface).
Fig. 5. Synchronous partitioned analysis schemes (panels from left to right: basic sequential staggered scheme; sequential staggered scheme with structural predictor; iterative staggered (substructuring) scheme; each panel sketches the data exchange between the fluid and the structural field within one time step from t^n to t^{n+1})
4.1 Sequential staggered schemes For schemes without iterations, so-called sequential staggered algorithms, the quality of the predictor has a major impact on accuracy and stability of the partitioned method. Besides the simple predictor of the basic sequential staggered scheme dating back to Felippa et al. [9, 10], a far more accurate choice is the second order predictor suggested by Piperno [11]. However, a predictor is never exact; thus every synchronous sequential staggered scheme necessarily violates the kinematic continuity condition, leading to a reduction of accuracy and, much worse, to weak instability of the numerical solution. Numerical studies have further shown that, surprisingly, these weak instabilities occur much earlier with the more accurate predictor than with the simpler one. Several possible remedies have been proposed for this problem. Some of them are:
• Reduction of the time step, but: instability appears earlier (artificial added mass effect, discussed in detail in Le Tallec et al. [12], Wall [2] and Mok [8])
• Introduction of strong viscous damping, but: changes the physics!
• Introduction of fluid compressibility, but: changes the physics!
• Asynchronous staggered scheme (Lesoinne & Farhat [13]), but: fulfils the kinematic and the dynamic continuity condition at different points in time and restricts the structural time integration to the midpoint integration scheme; both facts again cause instabilities (the latter one for nonlinear structural formulations).
• Subiterations, but: increase of numerical costs, convergence not automatically ensured.
4.2 Iterative staggered schemes - iterative substructuring schemes Apparently the only appropriate choice which neither involves artificial changes in the underlying physics nor is restricted to time integration schemes that can become unstable in nonlinear regimes (midpoint rule) are subiterations. These so-called iterative staggered schemes must then of course be designed to be as cheap and robust as possible. The iterative scheme used in the algorithmic framework for fluid structure interaction can be interpreted as an iterative Dirichlet-Neumann substructuring scheme based on a nonstationary Richardson iteration. This becomes obvious from the iterative evolution equation

d^{n+1}_{\Gamma,i+1} = d^{n+1}_{\Gamma,i} + \omega_i\, S_S^{-1}\left( f^{\mathrm{mod},n+1}_{\Gamma,\mathrm{ext}} - (S_F + S_S)\, d^{n+1}_{\Gamma,i} \right) ,   (3)

which is reduced to the degrees of freedom on the interface Γ. S_F and S_S denote the Schur complement matrices of the fluid and structural fields, respectively, and f^{mod,n+1}_{Γ,ext} is the external load vector resulting after static condensation of the DOFs in the interior of the subdomains. In this iterative
scheme convergence is accelerated and ensured by the relaxed updates of the interface position. The iteration then converges to the simultaneous solution, exactly fulfilling the required coupling conditions: kinematic and dynamic continuity at every discrete time t^n, t^{n+1}, . . . However, a key question remains: how to choose optimal relaxation parameters ω_i? The commonly used strategy of employing an experimentally (by trial and error) determined fixed parameter is unsatisfactory, because such a parameter is very problem-dependent, for nonlinear problems even changes with time, is in general suboptimal (especially for nonlinear problems), and requires a careful, time-consuming and difficult determination. In the following, two techniques are described which are both robust, in the sense that they have problem-independent acceleration properties even for nonlinear systems, and user-friendly, in the sense that the relaxation parameters are determined automatically without any user input being necessary (see Mok [8] for more details). 4.3 Iterative substructuring schemes accelerated via gradient method The first technique is an acceleration via the application of the gradient method (method of steepest descent) to the iterative substructuring scheme. This method also guarantees convergence. In every iteration a relaxation parameter ω_i is computed,

\omega_i = \frac{g_i^T g_i}{g_i^T S_S^{-1}(S_F + S_S)\, g_i} = \frac{g_i^T g_i}{g_i^T \left( S_S^{-1} S_F\, g_i + g_i \right)} ,   (4)

that is locally optimal with respect to the actual search direction, i.e. the residual

g_i = S_S^{-1}\left( f^{\mathrm{mod},n+1}_{\Gamma,\mathrm{ext}} - (S_F + S_S)\, d^{n+1}_{\Gamma,i} \right) = \tilde{d}^{\,n+1}_{\Gamma,i+1} - d^{n+1}_{\Gamma,i} .   (5)

A procedure for evaluating eq. (4) without explicitly computing and storing the Schur complements S_F and S_S has been proposed in Wall et al. [14]. It is based on the fact that applying a [inverted] Schur complement operator to a vector is equivalent to the solution of the respective partition with that vector as Dirichlet [Neumann] boundary condition. The algorithmic realization of this acceleration scheme involves the following steps: solving a homogeneous auxiliary problem in order to first determine the term S_S^{-1} S_F g_i and then the relaxation parameter ω_i. 4.4 Iterative substructuring schemes accelerated via Aitken method A second technique for explicitly calculating a suitable relaxation parameter is the application of Aitken's acceleration scheme for vector sequences according to Irons et al. [15]. To calculate the relaxation parameter ω_i the following steps are necessary:
1. Compute the difference between the actual and the previous interface solution:

\Delta d^{n+1}_{\Gamma,i+1} := d^{n+1}_{\Gamma,i} - \tilde{d}^{\,n+1}_{\Gamma,i+1} .   (6)

2. Compute the Aitken factor (for i ≥ 1 and with \mu_0^{n+1} = \mu_{i_{max}}^{n}, \mu_0^{1} = 0):

\mu_i^{n+1} = \mu_{i-1}^{n+1} + \left( \mu_{i-1}^{n+1} - 1 \right) \frac{\left( \Delta d^{n+1}_{\Gamma,i} - \Delta d^{n+1}_{\Gamma,i+1} \right)^T \Delta d^{n+1}_{\Gamma,i+1}}{\left\| \Delta d^{n+1}_{\Gamma,i} - \Delta d^{n+1}_{\Gamma,i+1} \right\|^2} .   (7)

3. Compute the relaxation parameter:

\omega_i = 1 - \mu_i^{n+1} .   (8)
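As an illustration of how eqs. (6)-(8) can be embedded in the subiteration loop of one time step, the following C sketch may be helpful. It is only a schematic outline under assumed interfaces: solve_fluid and solve_structure are hypothetical placeholders for the fluid (Dirichlet) and structural (Neumann) solves, and all names are invented for this example rather than taken from the actual code.

  #include <math.h>
  #include <stdlib.h>

  /* Hypothetical field-solver interfaces (placeholders, not from the paper):
   *   solve_fluid():     fluid solve for a given interface position,
   *                      returning the resulting interface loads
   *   solve_structure(): structural solve for given interface loads,
   *                      returning the unrelaxed interface position d~      */
  void solve_fluid(const double *d_gamma, double *loads, int n);
  void solve_structure(const double *loads, double *d_tilde, int n);

  /* One FSI time step with Aitken-relaxed subiterations, cf. eqs. (6)-(8).
   * mu is carried over from the previous time step (mu = 0 initially) and
   * the final value is returned for reuse in the next step.                 */
  double fsi_step_aitken(double *d_gamma, int n, double mu,
                         int i_max, double tol)
  {
    double *loads   = malloc(n * sizeof *loads);
    double *d_tilde = malloc(n * sizeof *d_tilde);
    double *delta   = malloc(n * sizeof *delta);    /* current  delta d     */
    double *delta_o = malloc(n * sizeof *delta_o);  /* previous delta d     */

    for (int i = 0; i < i_max; ++i) {
      solve_fluid(d_gamma, loads, n);
      solve_structure(loads, d_tilde, n);

      double res2 = 0.0;
      for (int k = 0; k < n; ++k) {
        delta[k] = d_gamma[k] - d_tilde[k];          /* eq. (6)             */
        res2    += delta[k] * delta[k];
      }
      if (sqrt(res2 / n) < tol)                      /* ||g_i||/sqrt(n_eq)  */
        break;

      if (i > 0) {                                   /* eq. (7)             */
        double num = 0.0, den = 0.0;
        for (int k = 0; k < n; ++k) {
          double diff = delta_o[k] - delta[k];
          num += diff * delta[k];
          den += diff * diff;
        }
        mu += (mu - 1.0) * num / den;
      }
      double omega = 1.0 - mu;                       /* eq. (8)             */

      for (int k = 0; k < n; ++k) {                  /* relaxed update of   */
        d_gamma[k] += omega * (d_tilde[k] - d_gamma[k]); /* the interface,  */
        delta_o[k]  = delta[k];                          /* cf. eq. (3)     */
      }
    }
    free(loads); free(d_tilde); free(delta); free(delta_o);
    return mu;
  }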
Even though a rigorous analysis of its convergence properties does not exist, numerical studies performed so far have shown that the Aitken acceleration for vector sequences applied to the fluid structure interaction problems considered here shows a performance which is at least as good as the acceleration via the gradient method. Furthermore the evaluation of the relaxation parameters via the Aitken method is extremely cheap (in terms of both CPU and memory) and simple to implement. 4.5 Numerical example: flexible restrictor flap in converging channel This example is a 2D simulation of a converging channel with a flexible restrictor flap (figures 6 and 7; due to symmetry only the lower half of the channel is modelled). The material of the flap is a stiff rubber with Young’s modulus E S = 2.3 · 106 N/m2 , density ρ S = 1500.0 kg/m3 and Poisson ratio ν S = 0.45. The fluid is a silicon oil with density ρ F = 956.0 kg/m3 and dynamic viscosity µ F = 0.145 kg/(m · s). The prescribed inlet flow velocity profile and time-history (v¯ (t)) are given in the figure. The fully developed flow has a Reynolds number of approx. Re = 100, i.e. the flow is laminar. Stabilized Q1Q1 and Q2 finite elements are used [2, 3] to model the fluid and the structure, respectively. Both fields are implicitly time integrated with y v(t) 0.25
Fig. 6. Restrictor flap in channel - problem statement (channel geometry with segment lengths 0.5 m and 1.25 m and a 5 mm flexible restrictor flap; the prescribed inlet velocity v(t) ramps up smoothly in time to its maximum value v_max = 0.06067)
Fig. 7. Restrictor flap in channel - deformed configurations (streamlines on horizontal flow velocity)
∆t = 0.1 s, using backward Euler for the flow and Generalized-α with numerical dissipation (ρ_∞ = 0.6) for the structure. In this example sequential staggered schemes completely failed to produce stable solutions; the weak instability already occurred after the first few time steps, even when reducing the time step size to ∆t = 0.0001 s (artificial added mass effect!). In contrast, the iterative staggered schemes were again perfectly stable over the whole computed time range. Figure 8 shows the number i_max of subiterations over the partitions required within every time step to yield a converged solution. As convergence criterion ‖g_i‖/√n_eq < 10^{-9} was used. A computation using a carefully chosen, “best” fixed relaxation parameter ω = 0.125 = const is compared to the two proposed techniques, where the iterative staggered scheme is accelerated via the gradient and the Aitken method. The unrelaxed algorithm (ω = 1.0) diverges in this case. The results clearly show the suboptimal behaviour of the strategy with a fixed relaxation parameter, and the efficient acceleration of convergence of the two proposed schemes.
Fig. 8. Restrictor flap in channel - convergence study (number of subiterations i per time step)
5 Conclusions In the present paper a simple approach for a very efficient implementation of the element calculations for stabilized fluid elements on unstructured grids has been discussed. This concept, requiring only minor changes to an existing code, achieved a high performance on the intended vector architecture and also showed a good improvement in efficiency on other platforms. By grouping computationally similar elements together, the length of the innermost loop can be controlled and adapted to the current hardware. In addition, the effect of different programming languages and different array management techniques on the performance was investigated. A numerical example showed the improvement in the overall time-to-solution for a CFD simulation. Furthermore, partitioned analysis approaches for the transient interaction of incompressible viscous flows and nonlinear flexible structures with large deformations have been discussed with respect to crucial robustness and efficiency aspects for FSI. As the best remedy for the weak instabilities of sequential staggered coupling schemes it is recommended to invoke subiterations which ensure kinematic and dynamic continuity across the fluid-structure interface, thus ensuring stable and accurate numerical solutions even for long-time simulations. For the desired acceleration of convergence of such iterative staggered schemes two robust and user-friendly techniques have been proposed: acceleration via the gradient method and via Aitken's method for vector sequences. The computational efficiency and robustness have been demonstrated with a numerical example.
Acknowledgements The authors would like to thank Uwe Küster of the ’High Performance Computing Center Stuttgart’ (HLRS) for his continuing interest and most helpful advice and the staff of ’NEC - High Performance Computing Europe’ for the constant technical support.
References 1. Behr M, Pressel DM, Sturek WB (2000) Comp Meth Appl Mech Eng 190:263–277 2. Wall WA (1999) Fluid-Struktur-Interaktion mit stabilisierten Finiten Elementen. PhD thesis, Institut für Baustatik, Universität Stuttgart 3. Wall W, Ramm E (1998) Fluid-structure interaction based upon a stabilized (ale) finite element method. In: Oñate E, Idelsohn S (eds) Computational Mechanics. Proc. of the Fourth World Congress on Computational Mechanics WCCM IV, Buenos Aires
4. Tezduyar T, Aliabadi S, Behr M, Johnson A, Kalro V, Litke M (1996) Comp Mech 18:397–412 5. Oliker L, Canning A, Carter J, Shalf J, Skinner D, Ethier S, Biswas R, Djomehri J, van der Wijngaart R (2003) Evaluation of cache-based superscalar and cacheless vector architectures for scientific computations. In: Proc. of the ACM/IEEE Supercomputing Conf. 2003, Phoenix, Arizona, USA 6. Pohl T, Deserno F, Thürey N, Rüde U, Lammers P, Wellein G, Zeiser T (2004) Performance evaluation of parallel large-scale lattice Boltzmann applications on three supercomputing architectures. In: Proc. of the ACM/IEEE Supercomputing Conf. 2004, Pittsburgh, USA 7. Ethier CR, Steinman DA (1994) Int J Numer Meth Fluids 19:369–375 8. Mok DP (2001) Partitionierte Lösungsansätze in der Strukturdynamik und der Fluid-Struktur-Interaktion. PhD thesis, Institut für Baustatik, Universität Stuttgart 9. Felippa C, Park K, de Runtz J (1977) Stabilization of staggered solution procedures for fluid-structure interaction analysis. In: Belytschko T, Geers TL (eds) Computational Methods for Fluid-Structure Interaction Problems. American Society of Mechanical Engineers, New York. 26 10. Felippa C, Park K (1980) Comp Meth Appl Mech Eng 24:61–111 11. Piperno S (1997) Int J Numer Meth Fluids 25:1207–1226 12. Le Tallec P, Mouro J (2001) Comp Meth Appl Mech Eng 190:3039–3067 13. Lesoinne M, Farhat C (1998) AIAA J 36:1754–1756 14. Wall W, Mok D, Ramm E (1999) Partitioned analysis approach of the transient coupled response of viscous fluids and flexible structures. In: Wunderlich W (ed) Solids, Structures and Coupled Problems in Engineering. Proc. of the European Conf. on Computational Mechanics, Munich 15. Irons B, Tuck R (1969) Int J Numer Meth Eng 1:275–277
The computational aspects of General Relativity J. Frauendiener Institute for Astronomy and Astrophysics, University of Tübingen, Auf der Morgenstelle 10, D-72076 Tübingen, Germany [email protected]
Summary. The main line of application of computational methods in General Relativity is concerned with the determination of the waveforms of the gravitational radiation which is emitted from astrophysical processes. Gravitational wave detectors, currently under construction and calibration, need this information in order to filter out the signals from the noisy background. This contribution describes the basic ideas behind these efforts.
1 Introduction General Relativity (GR) or, more precisely, Einstein’s theory of gravitation is the theory which describes the behaviour of large massive systems. In particular, it is the theory which one uses in order to accurately model astrophysical systems such as supernovae, galaxies or binary systems of compact objects. But also the entire universe must be described according to Einstein’s theory. In this paper I will focus on the astrophysical applications of the theory in particular the generation and detection of gravitational waves. Let us consider two compact objects such as two neutron stars or black holes which move in a bounded configuration around each other. Within the Newtonian theory of gravity these bodies would move on Keplerian ellipses around their common centre of mass. However, Einstein’s theory of gravity is different. The most immediate consequence of this theory is the fact that gravitational waves exist which can extract energy from a self-gravitating system. The configuration of the two bodies has a quadrupole moment which varies in time and, according to Einstein’s quadrupole formula [1], the system must emit gravitational waves which leave the system in the form of ripples in the space-time continuum. This emission of gravitational radiation leads to a reduction of the energy contained in the system, and a subsequent reduction in the distance of the two bodies. Since in this process the angular momentum will remain (almost) constant, this implies that the orbital velocity of the bodies will increase, the period of the motion will decrease and the
temporal change in the quadrupole moment will increase. Hence, the system will lose even more energy due to the gravitational radiation. Thus, we expect that gravitational wave detectors should pick up a signal from this event which looks roughly like the ‘chirp’ signal in Fig. 1. Once the
Fig. 1. Expected signal from a coalescing binary
two bodies are sufficiently close one cannot argue on the basis of the changing quadrupole moment anymore because now the details of the structure of the two bodies become important. In order to obtain some information about the form of the detected wave signal one needs a much more detailed model of the situation than the one we have employed before which consisted basically of two mass points moving according to Newtonian gravity together with the quadrupole formula from Einstein’s theory describing the energy loss. In this note I will give a brief summary of how this situation is actually modelled and what some of the current issues and problems in the numerical implementations are.
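For reference, the quadrupole formula invoked above is not spelled out in the text; in its standard form (stated here only as a reminder, with γ the gravitational constant and c the speed of light as in eq. (1) below), the power radiated in gravitational waves by a slowly moving source is

\frac{dE}{dt} = -\frac{\gamma}{5c^5}\,\Bigl\langle \dddot{Q}_{jk}\,\dddot{Q}_{jk} \Bigr\rangle , \qquad Q_{jk} = \int \rho\,\bigl( x_j x_k - \tfrac{1}{3}\,\delta_{jk}\, r^2 \bigr)\, d^3x ,

where Q_{jk} is the trace-free mass quadrupole moment of the source, the dots denote time derivatives and the angle brackets an average over several periods of the motion.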
2 Mathematical model 2.1 Equations According to Einstein’s theory a situation such as the one given above is described as a solution of the field equations, which may be written in the form of a tensor equation¹

G_{ab} = -\frac{8\pi\gamma}{c^4}\, T_{ab} .   (1)
This equation should be read as follows: the right-hand side is the energy-momentum tensor, an object which summarizes the properties of any matter
¹ Here, γ denotes the gravitational constant and c is the velocity of light in vacuum. Using appropriate units for length, time and mass, these constants may both be put equal to unity. Our choice of conventions is as in [2].
sources having to do with energy and momentum. It is the object which allows us to compute the momentum flux through a given surface, or the energy contained in a certain volume of space, etc. The left-hand side is the Einstein tensor, which is a measure of the curvature of the space-time. The curvature of a manifold is that structure which tells us how straight lines behave. According to Einstein’s theory, bodies which are freely falling, i.e., which are subject only to the influence of the gravitational field, will move on such straight lines, also called geodesics. The Einstein tensor is given as an expression containing first and second derivatives of the space-time metric g_{ab}. This is the fundamental variable in Einstein’s General Relativity. The metric determines the geometric properties of the space-time manifold, i.e., it allows us to compute the (spatial or temporal) distance between any two space-time events and thereby the overall geometry of the space-time manifold. It creates the framework in which other fields evolve. However, in contrast to the classical theories, in General Relativity this framework is not an a priori given, passive background but it is active, participating in the dynamics. By the Einstein equation (1) the energy of the matter in space-time determines the curvature of the space-time, which, in turn, determines how the motion of the matter proceeds. Of course, the equation as it stands is not complete. If we think of the matter as being some kind of elastic material, then its energetic aspects are not enough to specify the configuration completely. One needs to consider the full information of the elastic deformations in order to describe the bodies in full detail. Therefore, one also needs to take into account the matter equations, i.e., those equations which govern the material degrees of freedom. In the situation alluded to above, where two compact objects circle around each other, there are two ways to implement the properties of the ‘matter’, depending on whether the bodies are neutron stars or black holes. In the former case, one usually assumes that the body material can be described sufficiently accurately as an ideal fluid with an equation of state describing the properties of nuclear matter. Consequently, one uses the Euler equations for an ideal fluid as the matter equations.² In the latter case, the body is a black hole which, strictly speaking, does not consist of any common material but which is a region of space-time within which the curvature is so high that every signal originating in that region is ‘trapped’ inside it and cannot escape. This ‘trapping’ property can be used to formulate boundary conditions for the equations. Thus, black holes are described by the vacuum equations, i.e., eqns. (1) with T_{ab} = 0 together with appropriate boundary conditions. For concreteness and simplicity we will focus on the binary black hole problem from now on. However, the binary neutron star problem is also under heavy investigation, see e.g. [4].
² Nowadays, also elastic properties of the neutron star crust have to be taken into account, which leads to equations of relativistic elasticity (see e.g. [3]).
2.2 Boundary conditions In the situations which are normally considered one is interested in the properties of a particular astrophysical system such as the binary black hole system. In Nature, these systems are embedded into the space-time manifold of the entire universe and Einstein’s equations pertain to the entire universe. However, in these applications one is usually not interested in these global aspects. Therefore, one needs to devise a way to isolate the system from its surroundings in the universe. In other theories, such as Newtonian gravity or Maxwell’s electrodynamics, such a procedure is more or less straightforward. One requires that the influence of the system on test bodies diminishes appropriately as one recedes further and further away from the system. This works for the simple reason that in these classical theories one has a notion of distance which is independent of the fields under consideration. However, as pointed out above, in Einstein’s theory the dynamical quantity is the metric, which determines distances only after it has been found from the solution of the Einstein equations. So in that case the notion of distance, i.e., the geometry, is not independent of the dynamical fields, and this complicates matters considerably. It was only in 1965 that Penrose [5] found the appropriate notion of an isolated system in GR. The basic idea is that if one assumes that the universe is empty except for the system of interest then the space-time should ‘look’ more and more like a space-time which does not contain any sources at all. This means that the solution of the Einstein equations describing this situation should – in an appropriate sense – approach Minkowski space-time at large distances from the source. The exact mathematical formulation of this idea is somewhat complicated due to the fact that coordinates have no meaning in GR, so that every coordinate system is as good as any other. Penrose formulated the idea of asymptotic flatness geometrically without coordinates using the so-called conformal compactification procedure. Once one has established this geometric idea of asymptotic flatness one can introduce appropriate coordinate systems which can be used to express asymptotic conditions as boundary conditions on the metric and its first derivative as limits for a radial coordinate r approaching infinity. It should be pointed out here that the boundary conditions have the form of fall-off conditions for ‘r → ∞’. In the standard numerical implementations of the Einstein equations, where for obvious reasons one cannot have an infinitely extended computational domain, this implies that one needs to introduce an artificial outer boundary where the conditions have to be imposed. This leads to well-known complications in the formulation of numerical algorithms in the implementations of the Einstein equations. There is another way to treat the fall-off conditions which is based more heavily on the Penrose picture of asymptotically flat space-times. By taking the conformal compactification seriously, one can achieve that the entire system can globally be
simulated on a finite grid. This method is akin to the stereographic projection which can be used to represent the Euclidean 2-plane on a 2-sphere. However, due to the high mathematical complexity we will not delve into this method here, see [6]. 2.3 Initial value formulation It is customary in physics to make predictions, i.e., to determine the future properties of a system from its present status by means of an evolution process. Usually, the evolution is expressed as a set of (partial) differential equations which relate the time rate of change of quantities to the current values of these quantities. As they stand, the Einstein equations (1) are not yet in this form. They simply state that (a certain part of) the curvature of space-time is determined by the energy-momentum tensor of the matter contents of the space-time or, in the case of the vacuum equations, vanishes. In order to transform these equations into evolution equations one needs to perform certain steps. ‘Evolution’ takes place in ‘time’, so one first needs to introduce a time coordinate into space-time. Due to the general covariance of GR this step is not unique; the choice of a time coordinate is highly ambiguous. But once such a choice is made one can ‘foliate’ the space-time by the hypersurfaces of constant time. These ‘t = const’ hypersurfaces Σt define instants in time. In order to localize events one needs three further coordinates to fix the spatial location of an event on a given hypersurface Σt . Consider now a (small) test body which is kept fixed at a spatial location in the sense that its spatial coordinates remain constant. Then the existence of this body in space-time is described by a curve as indicated in Fig. 2, which shows two hypersurfaces Σt and Σt+dt . The dotted lines indicate the world-lines of bodies at fixed spatial coordinates ξ. They mark those events for which only time passes. The vector t^a which connects the event P at time t with the event P′ at the same location at the later time t + dt can be decomposed into a piece which is normal to Σt and a piece tangential to Σt : t^a = N n^a + N^a . The physical meaning of the two pieces is easily seen. The normal piece is proportional to the unit normal vector n^a to Σt at P with a proportionality factor N, the so-called lapse function. The lapse tells us how much time passes on a standard clock between the two instants t and t + dt. The tangential part is called the shift vector. Consider a family of test bodies which are released simultaneously into free fall in such a way that they are mutually at rest initially. Then their world-lines will initially be orthogonal to the hypersurface Σt . They fall freely in the gravitational field defined by the geometry of space-time. In general, another body which is kept at a fixed spatial location will move with respect to these freely falling test bodies, sometimes also called Eulerian observers. The size of the relative velocity of this movement
Fig. 2. 3 + 1-splitting of space-time
is determined by the shift vector. Thus, lapse function and shift vector are four functions which can be considered as determining the coordinates. A hypersurface in space-time can be characterized by two fundamental quantities, the first and second fundamental forms. The first fundamental form, h_{ab}, is the metric induced on Σt by the space-time metric g_{ab}. It determines distances on Σt and, hence, the intrinsic geometry on Σt . The second fundamental form, k_{ab}, is also called the extrinsic curvature because it describes the bending of Σt within the space-time. As time progresses the ‘instants’ Σt follow one after the other, each with a different intrinsic geometry and extrinsic curvature. Thus, the space-time is traced out by one instant after the other. There is a kinematical relationship between the intrinsic geometries and the extrinsic curvatures given by the equation³

\partial_t h_{ab} = -2N k_{ab} + 2\nabla_{(a} N_{b)} ,   (2)

which effectively relates the time derivative of the intrinsic geometry of a hypersurface Σt to its extrinsic curvature k_{ab}, corrected by terms involving the specific properties of the coordinate system used. It is a fact that knowing all the different intrinsic metrics h_{ab}(t) and extrinsic curvatures k_{ab}(t) is tantamount to knowing the geometry, i.e., the metric g_{ab} of space-time. Thus, one expects that the Einstein equations, which give conditions on the space-time metric, can be reformulated as equations for the geometry of the hypersurfaces Σt . This point of view has become
³ The symbol ∇_a refers to the covariant derivative operator on Σt defined by the metric h_{ab}.
known as geometrodynamics. Roughly speaking, one probes the space-time geometry by sticking 3-dimensional slices into the space-time and examining the effect of the space-time curvature on their geometry. Mathematically, this reformulation is achieved by projecting the Einstein equations (1) in horizontal (tangential) and vertical directions to the hypersurfaces. This yields three kinds of equations, the horizontal-horizontal, horizontal-vertical and vertical-vertical components. The first ones indeed yield an equation of the form

\partial_t k_{ab} = N\left( R_{ab} - 2 k_{ac} k^{c}{}_{b} + k^{c}{}_{c}\, k_{ab} \right) - \nabla_a \nabla_b N + \cdots .   (3)
Here R_{ab} is the Ricci tensor of the 3-metric h_{ab}. It is an expression which is linear in the second derivatives of the metric and quadratic in its first derivatives. The dots indicate further terms which involve the shift vector and its first derivatives. The other two sets of equations obtained from (1) are

\nabla_a k^{a}{}_{b} - \nabla_b k^{a}{}_{a} = 0 ,   (4)

and

R^{a}{}_{a} - k_{ab} k^{ab} + (k^{c}{}_{c})^2 = 0 .   (5)
Equations (2) and (3) together constitute an evolution system for h ab and k ab . They relate the time rate of change of h ab and k ab to their values and those of their derivatives on Σt . Thus, given initial values for these quantities we can produce their values at all later times. However, these initial values cannot be chosen arbitrarily because of the additional equations (4) and (5). These do not contain any time derivatives so they have to be regarded as additional constraints on the choice of initial data. The validity of these constraints is not restricted to the initial instant: initial data which are constrained remain constrained or, put differently, if (4) and (5) hold initially, then by means of the evolution equations one can show that they must hold also at all later instants: the constraints propagate. Note, that there are no evolution equations for the lapse and the shift vector. These are functions which appear in the equations above but which are not fixed by either constraint or evolution equations. They can be chosen arbitrarily which is the manifestation of the fact that GR is generally covariant, the coordinates are arbitrary. However, it is clear that each choice of lapse and shift will influence the behaviour of the evolution even though, so far, it is largely unclear exactly how. The evolution equation (3) is of first differential order in time but of second order in space because of the appearance of the Ricci tensor. This mismatch has several disadvantages, e.g., there is no theorem for existence of solutions for the system (2)-(3). For this and other reasons it is useful to rewrite the system in an entirely first order form by introducing further variables. There are many ways to achieve this. Some of them even result in symmetric hyperbolic systems of partial differential equations. To be specific, in one
popular formulation one has to deal with a system of 30 evolution equations for 30 variables and 22 constraint equations. These numbers vary with the formulation but they are always roughly of the same size.
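As a purely schematic illustration of the 'free evolution' approach to which such first order formulations lead, the following C sketch shows how one time step is typically organised. All type and function names are hypothetical (the concrete state vector, spatial discretization and time integrator depend on the formulation), and the constraints (4)-(5) are only monitored, not enforced.

  /* Hypothetical state of one time slice: the 3-metric h_ab, the extrinsic
   * curvature k_ab and any auxiliary first-order variables on a grid.      */
  typedef struct { double *h, *k, *aux; int npts; } Slice;

  /* Problem-specific pieces, assumed to exist elsewhere:                   */
  void choose_lapse_shift(const Slice *u, double *N, double *N_i); /* gauge */
  void evolution_rhs(const Slice *u, const double *N, const double *N_i,
                     Slice *dudt);                      /* eqs. (2)-(3)     */
  double constraint_norm(const Slice *u);               /* eqs. (4)-(5)     */

  /* One explicit Euler step of a free evolution with constraint monitoring.
   * A production code would use a higher-order time integrator.            */
  double free_evolution_step(Slice *u, Slice *dudt,
                             double *N, double *N_i, double dt)
  {
    choose_lapse_shift(u, N, N_i);   /* pick coordinates (lapse and shift)  */
    evolution_rhs(u, N, N_i, dudt);  /* evaluate the evolution equations    */

    for (int p = 0; p < u->npts; ++p) {   /* advance the free data          */
      u->h[p]   += dt * dudt->h[p];
      u->k[p]   += dt * dudt->k[p];
      u->aux[p] += dt * dudt->aux[p];
    }
    /* Measure how far the discrete data have drifted off the constraint
     * hypersurface; in a free evolution this value is only observed.       */
    return constraint_norm(u);
  }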
3 Numerical issues The numerical simulations rely mainly on the first order formulations for equations (2)-(3) mentioned above. The main reason is that these formulations and, in particular, the symmetric hyperbolic ones are among the most analyzed systems in numerical analysis. There exist a number of results on the properties of discretization schemes for such systems (see e.g. [7]). Therefore, there are no basic difficulties in devising schemes to solve the evolution equations. The real problems emerge when the full system of equations including the constraints is considered, when boundaries come into play and when coordinate issues are involved. These are topics which have so far not been completely understood, either from the mathematical or from the numerical side. The issue is complicated by the fact that these three difficulties interact, so that it is hard to isolate their separate influences. 3.1 Coordinate freedom We have seen that coordinates play no role in the physical implications of Einstein’s theory. However, as is apparent from the evolution equations above, the evolution is not well defined unless the lapse and the shift functions have been specified, which in turn means that a certain choice of coordinates has been made. The choice of these coordinates is entirely arbitrary but, clearly, the choice of these functions will influence the performance of the numerical algorithm. For instance, the lapse function determines the numerical speeds at which the various modes travel, and by choosing it one can influence the hyperbolic properties of the system. But also the shift function plays a role. In the simulations of a binary system mentioned above it is common practice to describe the system in corotating coordinates such that the two bodies remain almost fixed on the computational grid. However, this choice of coordinates results in superluminal coordinate speeds outside a certain region, which then makes it difficult to guarantee stability of the numerical scheme and to impose boundary conditions. Another effect of the choice of coordinates is moving boundaries. As we have seen in the previous section, keeping the boundary at fixed coordinate values still allows the boundary to move with respect to the Eulerian observers defined by the hypersurfaces Σt . By appropriate choice of the coordinates one can let the boundaries move in almost any fashion. It is even possible to have situations where the boundaries effectively ‘play ping-pong’ with the fields inside. Of course, this poses problems for the formulation
of boundary conditions because the fields change their character, ‘in-going’ fields may turn into ‘out-going’ fields and vice versa. It is even possible that the code crashes due to high gradients in the grid functions without there being a physical reason for it. The source of the crashes is simply due to the coordinate system becoming singular in the sense that two separate space-time events are not described any more by different coordinate values. So, ultimately, the crash is due to the improper choice of the lapse and shift coordinates. However, it is nearly impossible to know beforehand what implications this choice will have. Sorting out the reason for the crashes is difficult and requires a thorough analysis of the curvature of the space-time. 3.2 Propagation of constraints We have seen in the previous section that the constraints propagate. Enforcing the constraints initially implies their validity for all times. In a numerical context, the partial differential equations are replaced by their discretized versions and one can ask whether this result also holds for these discrete versions. It turns out that this is not automatically the case as is shown in Fig. 3. This means that the discrete version of the evolution equations is not
Fig. 3. Example for constraint violation (average L norm per point of the total and physical constraints and errors, plotted over time t)
compatible with the discrete constraint equations. The constraints vanish up to round-off error initially because in these runs we specified analytical initial data, then they immediately jump to the level of the truncation error of the numerical scheme. During the evolution they grow more than exponentially so that after some time the constraints cannot be considered as being satisfied. This is clearly an unstable behaviour. However, it has nothing to do with numerical instabilities. This kind of behaviour is similar to the behaviour of dynamical systems in the neighbourhood of critical sets, like e.g. the strange
attractor of the Lorenz system. In contrast to the dynamical systems the present system is infinite-dimensional which makes the analysis even more complicated. The geometrical situation seems to be the following. In the space P of all initial data for the evolution equations there is defined a lower dimensional sub-manifold C of those data which satisfy the constraints. This constraint hypersurface is invariant under the flow generated by the evolution equations. When we do a free evolution we locate a point on C and then the analytical evolution defines a trajectory on C . However, when evolved with the discrete evolution by solving the discretized evolution equations, the trajectory is not guaranteed to stay within C but will in general wander off the hypersurface into the ambient space P . In free evolution codes one only specifies the initial point and then hopes that the trajectory will remain on C . In the constrained evolution procedure the use of the constraint equations has the effect of projecting the trajectory (partly) back onto C . However, because one has no control on the projection there is no guarantee that the projected trajectory has anything to do with the true trajectory. The initial point is usually determined also numerically. So we have a small perturbation in the initial data off the constraint hypersurface. Now we would like to have an analytical theorem to the effect that if we start the evolution close to C then it will remain close. This is a statement about the stability of the invariant set C under the evolution flow. However, such statements are elusive at the moment. The numerical observations show that they cannot hold for the formulations of the equations considered so far. There always seems to be present a ‘mode’ which drives the evolution away from C . This would seem to indicate that the constraint hypersurface is a hyperbolic invariant set in the sense of dynamical systems theory. The task is, therefore, to find other formulations of the equations which have this kind of stability properties of the constraint hypersurface, or show that the constraint surface is necessarily unstable. This problem is quite difficult not only because of its infinite dimensionality but also because of the interactions between the choice of coordinates and the properties of the evolution equations. It is clear that these cannot be considered independently because e.g., a different choice of a time coordinate implies a different slicing of space-time with hypersurfaces of constant time. This implies different extrinsic curvatures and therefore different coefficients in several of the evolution equations. In fact, in a related situation one can derive stability conditions for the choice of coordinates which show that the violation of the constraints can be influenced quite strongly by the choice of coordinates [8]. From the numerical point of view there are more pragmatic ways to keep the constraint violation low. They are based on techniques like minimizing a certain error functional or on other ways to incorporate information about the propagation of the constraints into the evolution equations.
While the above description applies to the intrinsic reasons for constraint violation there is another one which is equally important. This one has to do with the fact that the validity of the constraints inside the computational domain implies a restriction on the boundary conditions which can be imposed at the boundary of the computational domain. 3.3 Boundary conditions Numerical simulations always have to deal with an initial boundary value problem (IBVP). Thus, there is always the task of specifying boundary conditions to obtain a unique solution of the field equations. In the usual approach to simulating asymptotically flat space-times one decides about an outermost radius beyond which the space-time is ignored. Then one needs to substitute an appropriate boundary condition for the rest of the space-time. The choice of the boundary condition is a very delicate issue because it influences the system on at least three different levels. Most importantly, the boundary condition needs to reflect the physics. In the present case, it needs to encode somehow that the ignored part of the space-time is asymptotically flat. Furthermore, the boundary condition has to ensure that the mathematical IBVP is well-posed and, finally, it has to be such that it is guaranteed that there are numerical implementations which are stable. The lack of a simple formulation for the IBVP for the Einstein equations (see, however, [9, 10, 11, 12]) is another major stumbling block for stable evolutions in numerical relativity using the standard approach. However, as described in section 2.2 it is possible to reformulate the problem so that the physics is automatically taken care of. According to Penrose [5] asymptotically flat space-times can be characterized as those which can be conformally compactified by a suitable rescaling of the metric. This process is achieved by formally attaching the points at infinity to the spacetime so that they have finite affine distance with respect to the rescaled metric. The physical space-time acquires a boundary, the so called null-infinity. Friedrich [13] derived a system of equations which expresses the validity of the Einstein equations in terms of the rescaled metric and derived quantities. This system of conformal field equations can be split into a symmetric hyperbolic system of evolution equations and a set of constraint equations. One can show that there are various well-posed initial value formulations for these equations. The main point is that these equations hold in the compactified spacetime and that they are regular everywhere, even at the boundary points which correspond to the points at infinity. The regularity of solutions at these points translates into the required fall-off conditions of the physical spacetime metric and other physical fields which correspond exactly to the asymptotic flatness condition. The regularity of the solutions at null-infinity implies that one can extend them even beyond the boundary points into an unphysical region.
The numerical approach now proceeds as follows. One sets up an IBVP for the conformal field equations based on a space-like hypersurface Σt which extends beyond null-infinity into the unphysical region. Initial data are prescribed and the evolution is performed with a standard numerical method for symmetric hyperbolic systems. The boundary conditions now have to be specified in the unphysical region. Since null-infinity is a nullhypersurface the information generated at that boundary cannot travel into the physical region. Hence, by conformal compactification we have managed to push the boundary into a region where it cannot influence the physics while the regularity of the solutions still guarantees that asymptotic flatness of the physical space-time is maintained. To illustrate this point we present in Fig. 4 the results of an evolution of
Fig. 4. The membrane property of null-infinity
Minkowski data with random noise as boundary data outside the physical region (see [14]). The diagram shows the magnitude of the Weyl tensor, which is an indication of the curvature of the space-time and which vanishes identically in flat Minkowski space. It is obvious that null-infinity suppresses the influence of the random noise in the physical region.
4 Conclusion I have discussed in this note the various approaches towards a numerical simulation of asymptotically flat space-times governed by Einstein’s equations. It turns out that these attempts are remarkably hard. Indeed, the Einstein equations seem to probe the current state of the art in numerical analysis
because they pose problems which have not occurred in that form before. Even though this paper might seem to convey a somewhat pessimistic picture of the current numerical simulations, this should not be taken as a permanent impression. The situation is rapidly improving because more and more results on the numerical properties of the Einstein equations in their various formulations become available. In the meantime, a lot of work is devoted to more or less ingeniously circumventing some of the issues in order to obtain simulations which seem to be reasonable.
References 1. Einstein A (1916) Näherungsweise Integration der Feldgleichungen der Gravitation. Sitz. Ber. Preuss. Akad. Wiss. 688–696 2. Penrose R, Rindler W (1984) Spinors and Space-Time: vol. 1. Cambridge University Press, Cambridge 3. Beig R, Schmidt BG (2003) Class Quant Grav 20:889–904 4. Final report of NASA Grand Challenge project on coalescing neutron star binaries (2000) http://wugrav.wustl.edu/research/projects/final_ report/nasafinal3.html 5. Penrose R (1965) Proc Roy Soc London A 284:159–203 6. Frauendiener J (2003) Conformal infinity. Living Reviews in Relativity 7 7. Gustafsson B, Kreiss HO, Oliger J (1995) Time dependent problems and difference methods. Wiley, New York 8. Frauendiener J, Vogel T (2005) On the stability of constraint propagation. Submitted to Class Quant Grav gr-gc/0410100 9. Calabrese G, Lehner L, Tiglio M (2002) Phys Rev D 65 104 031 10. Friedrich H, Nagy G (1998) Comm Math Phys 201:619–655 11. Szilágyi B, Schmidt B, Winicour J (2002) Boundary conditions in linearized harmonic gravity. Phys Rev D 65 064 015 12. Szilágyi B, Winicour J (2003) Well-posed initial-boundary evolution in General Relativity. Phys. Rev. D 68 041 501 13. Friedrich H (1981) Proc Roy Soc London A 375:169–184 14. Frauendiener J, Hein M (2002) Phys Rev D 66 104 027.
Arbitrary high order finite volume schemes for linear wave propagation M. Dumbser, T. Schwartzkopff, and C.-D. Munz Institute of Aerodynamics and Gasdynamics, University of Stuttgart, Pfaffenwaldring 21, 70550 Stuttgart, Germany [email protected] Summary. Wave propagation over long distances is usually modeled numerically by high order finite difference schemes, compact schemes or spectral methods. The schemes need good wave propagation properties, i.e. low dispersion and low dissipation. In this paper we show that finite volume schemes may be a good alternative with a number of nice properties. The so called ADER schemes of arbitrary accuracy have been first proposed by Toro et. al. for conservation laws as high order extension of the shock-capturing schemes. In this paper we show theoretically and numerically their dispersion and dissipation properties using the method of differential approximation of Shokin. In two dimensions the stability of these ADER schemes is investigated numerically with the von Neumann method. Numerical results and convergence rates of ADER schemes up to 16th order of accuracy in space and time are shown and compared with respect to the computational effort.
1 Introduction Finite volume schemes have become very popular in computational fluid dynamics due to their robustness and flexibility. They consist of two steps: the reconstruction and the flux calculation. For finite volume schemes the discrete values are approximated by cell averages. By the reconstruction step, local values are interpolated from the average values to calculate the numerical flux between the grid cells. The numerical flux calculation itself takes into account the direction of the wave propagation, so-called upwinding, which provides high robustness of the schemes in the presence of shocks and strong gradients. If a piecewise constant reconstruction procedure is applied, upwind finite volume schemes are first order accurate only. With the MUSCL technique, second order of accuracy in space and time can be obtained quite easily (e.g. see [1]). Extensions of the spatial discretization to third order and above become more difficult but become feasible using e.g. ENO (essentially non-oscillatory) or WENO (weighted essentially non-oscillatory) techniques. There is no a priori barrier which prevents the construction of very high order schemes concerning spatial discretization, however, time integration
130
M. Dumbser, T. Schwartzkopff, and C.-D. Munz
usually becomes a limiting factor for accuracy. Since ENO/WENO schemes are generally discretized in time with TVD Runge-Kutta (RK) schemes [2], there is a severe barrier of obtaining efficient time integration schemes for orders of accuracy higher than four [3]. The idea of the ADER approach of Toro et al., see e.g. [4]-[7], is to circumvent this efficiency barrier for the time discretization by considering finite space-time volumes, where the temporal evolution of the fluxes over the borders of the finite volumes is estimated by a Taylor series in time. The time derivatives are then replaced by space derivatives using the so-called Cauchy-Kovalevskaya or Lax-Wendroff procedure. With this approach it is possible to construct schemes of arbitrary high order in space and time with a timestep close to the stability limit. In this paper we use the fast-ADER formulation derived by Schwartzkopff et al. [6] to carry out a theoretical error analysis including a thorough study of the wave propagation properties of the ADER approach and compare our theoretical results to the numerical experiments presented in [6]. Although the reconstruction step does not include any monotonicity preserving technique these schemes are quite robust in the sense that the dispersion errors are controlled by the dissipation. In other words approximation errors associated with dispersion are dominated by dissipation errors and waves with wrong speeds are damped very fast. No artificial damping as known e.g. from finite difference schemes is necessary. Computational aeroacoustics to simulate noise propagation in the time domain or electromagnetic wave propagation is a typical field of application. The scope of the paper is as follows. In section 2 we give a short review of ADER and fast-ADER schemes. In sections 3 and 4 we theoretically analyze the stability as well as the dissipation and dispersion errors of the schemes for the scalar advection equation in one space dimension, using the method of differential approximation [8, 9] and in two dimensions carrying out a numerical von Neumann stability analysis. In section 5 we show the convergence rates obtained with ADER schemes in numerical experiments for the two-dimensional advection equation up to 16th order of accuracy in space and time.
2 ADER finite volume schemes Some general definitions concerning the finite volume approach are given here. In this paper only equally spaced Cartesian grids in a rectangular domain are considered. The computational domain [ a, b] × [c, d] is covered by cells Ii j ≡ [ xi− 1 , xi+ 1 ] × [ y j− 1 , y j+ 1 ], the centers of which are ( xi , y j ) and 2 2 2 2 ∆ xi = xi+ 1 − xi− 1 = const., ∆ y j = y j+ 1 − y j− 1 = const. denote the grid 2 2 2 2 sizes and are called subsequently ∆ x and ∆ y. A general system of linear hyperbolic PDE’s in two dimensions is given by
Arbitrary high order schemes for linear wave propagation
Ut + A U x + B U y = 0, U = U( x, y, t) and U( x, y, 0) = U0 ( x, y),
131
(1)
where U denotes the vector of physical variables and the matrices A and B are assumed to be constant (the matrix A may depend on y and B may depend on x without changing anything for the standard ADER scheme). With the physical fluxes F and G being determined by F = AU
G = B U,
and
(2)
the set of evolution equations (1) may be rewritten in the conservation form Ut + F (U) x + G (U) y = 0.
(3)
Consider now a space-time element Ii j × [tn , tn+1 ] as a control volume. The integration of the conservation equations (3) over this control volume produces the formula ∆t ˆ n+1 n ˆ ˆ (4) G Ui j = Ui j − − Fi+ 1 , j − Fˆ i− 1 , j + G 1 1 i, j+ 2 i, j− 2 , 2 2 | Ii j | where | Ii j | is the area of the cell Ii j and Ui j ( t ) =
1 | Ii j |
' x 1 ' y 1 i+ j+ 2
x
i − 12
2
y
U(ξ , η, t) dξ dη
(5)
j− 12
are the cell averages of U over all Ii j at time t. The flux F is given by Fˆ i+ 1 , j = 2
' y 1 ' tn+1 j+ 2 1 y
j− 12
∆t
tn
F(U( xi+ 1 , η, τ )) dτ dη.
(6)
2
The scheme (4) is arbitrarily accurate in space and time, though it looks like a first order scheme, but its accuracy only depends on the quality of the numerical flux. If a high order spatial reconstruction operator using N-th order polynomials is used and the temporal evolution of the flux can also be predicted with an order of accuracy of N, the scheme is of order N + 1. In the following we will use the ADER approach to get a high order accurate prediction of the flux evolution in time. For spatial reconstruction we use the tensor product of one-dimensional Lagrange interpolation. Therefore, the state vector U is expanded into a Taylor series in time and then time derivatives are replaced by space derivatives using the Cauchy-Kovalevskaya procedure. For the linear system (1) this can be written as follows: U( x, y, t) =
N
∑
k=0
N tk ∂k U (−t)k = ∑ k! k! ∂tk k=0
∂ ∂ A+ B ∂x ∂y
k U.
(7)
According to the work of Schwartzkopff, Dumbser and Munz [6] the finite volume fast-ADER scheme can be cast into the following final form:
132
M. Dumbser, T. Schwartzkopff, and C.-D. Munz n+1
Ui j
n
= Ui j −
∆t ˆ ˆ ˆ = Fi+ 1 , j − Fˆ i− 1 , j + G 1 −G 1 i, j + i, j − 2 2 2 2 | Ii j | ⎡ ⎤
= Ui j − ⎣ n
O/2
∑
O/2
∑
⎦ Cii, j j Ui +ii, j+ j j . n
ii =−O/2 j j=−O/2
(8)
contains all information about the reconstruction, The coefficient matrix Cii, jj the time integration and the mesh. If the Jacobians A and B are constant in the can be used for all cells. We note that scheme (8) now has the domain, Cii, jj structure of a single-step finite difference scheme that performs spatial and temporal discretization at the same time and uses a real two-dimensional stencil. It is obvious, that the ADER approach in this fast formulation needs less memory than a finite difference discretization with the same spatial order and Runge-Kutta time discretization.
3 Stability and accuracy analysis of the ADER schemes 3.1 Differential approximation in one dimension For the following analysis, we restrict ourselves to the one-dimensional linear advection equation (9) ut + au x = 0. In order to get a profound insight into the internal structure of the linear ADER scheme and in order to be able to compare it with standard RungeKutta central finite difference schemes, we apply the method of differential approximation, developed by Shokin [8] and Warming and Hyett [9]. The goal of this method is to derive the PDE which the numerical scheme solves exactly, while it solves (9) only approximately. The 1D ADER scheme is ⎡ ⎤ Uin+1 = Uin − ⎣
O/2
∑
ii =−O/2
Cii Ui+ii ⎦ ,
(10)
dropping the bar notation of the numerical solution for legibility reasons. For the scalar advection equation, the coefficients Cii of the fast-ADER schemes because for order 2 − 6 are given in tables 1 – 2. We note that Cii = −C− ii C0 = 0. The 2nd order scheme is equivalent to the classical one–step one– dimensional Lax–Wendroff scheme [10]. The cell average values Uin+1 and Uin+ii are now expanded in a Taylor series in time and space, respectively, and are inserted into (10). This yields after division by ∆ t ⎡ % &⎤ ∞ ∞ lU ∂ 1 ∂m U m−1 −1 ⎣ O/2 1 ∆t = Ut + ∑ ∑ Cii Uin + ∑ l! ∂xl · (ii · ∆ x)l ⎦ . m m! ∂t ∆ t m=2 l =1 ii =−O/2 (11)
Arbitrary high order schemes for linear wave propagation
133
This equation is consistent with the original PDE (9), if and only if the following consistency conditions are fulfilled: O/2
∑
ii =−O/2
Cii = 0 ,
O/2
∑
ii =−O/2
ii · Cii = a
∆t := ν . ∆x
(12)
The coefficients of the ADER scheme as given in tables 1 - 2 satisfy these consistency relations. Table 1. Coefficients Cii for second, third and fourth order ADER schemes ii
ADER O 2
−2
0
−1
− 21 ν (ν 2
ADER O 3 " # 1 2 6ν 1 − ν
1
− 21 ν (ν − 1)
1 2 ν (ν + 1 ) (ν − 2 ) " # 1 2 2 ν 1 + 2ν − ν 1 6 ν (ν − 1 ) (ν − 2 )
2
0
0
0
+ 1)
(ν )
ADER O 4 1 − 24 ν (ν − 1) (ν + 1) (ν + 2) 1 6 ν (ν − 2 ) (ν + 2 ) (ν + 1 ) " # − 41 ν 2 ν 2 − 5 1 6 ν (ν − 2 ) (ν + 2 ) (ν − 1 ) 1 − 24 ν (ν − 1) (ν + 1) (ν − 2)
Table 2. Coefficients Cii for the sixth order ADER scheme ii
ADER O 6
−3
1 − 720 ν (ν − 1) (ν + 1) (ν − 2) (ν + 2) (ν + 3)
−2
1 120 ν (ν − 1 ) (ν + 1 ) (ν − 3 ) (ν + 3 ) (ν + 2 ) 1 − 48 ν (ν − 2) (ν + 2) (ν − 3) (ν + 3) (ν + 1) " 2 #2 1 2 36 ν ν − 7 1 − 48 ν (ν − 2) (ν + 2) (ν − 3) (ν + 3) (ν − 1) 1 120 ν (ν − 1 ) (ν + 1 ) (ν − 3 ) (ν + 3 ) (ν − 2 ) 1 − 720 ν (ν − 1) (ν + 1) (ν − 2) (ν + 2) (ν − 3)
−1 0 1 2 3
Inserting the consistency conditions into (11) leads to the so-called Γ -form of the differential approximation of the scheme (10): ⎡ % &⎤ ∞ 1 ∂m U m−1 −1 ⎣ O/2 ∞ 1 ∂l U ∆t = Ut + aUx + ∑ ∑ Cii ∑ l! ∂xl · (ii · ∆ x)l ⎦ . m m! ∂t ∆ t m=2 l =2 ii =−O/2 (13)
134
M. Dumbser, T. Schwartzkopff, and C.-D. Munz
Successive derivations of the Γ -form with respect to x and t and inserting the results again into (13) finally yield the so-called Π -form of the differential approximation, ∞ ∂l U (14) Ut + aUx = ∑ cl l , ∂x l =2 where time derivatives have been consequently substituted by space derivatives, as in the derivation of the ADER approach. The dispersion and dissipation coefficients cl are functions of the advection speed a, the time-step ∆ t and the mesh size ∆ x. In practice, they are calculated using the so-called Warming-Hyett procedure [9] which can easily be implemented in modern computer algebra systems. With the ansatz (15) U ( x, t) = U0 · ei(kx−ωt) the following dispersion relation can be obtained from (14):
ω = ak + i
= ak + i
∞
∑ cl (ik)l =
l =2 ∞
∑
m=1
c2m (−1)m k2m −
∞
∑
m=1
c2m+1 (−1)m k2m+1 .
(16)
The solution can now be written as
U ( x, t) = U0 e
∑ c2m (−1)m k2m t
∞
m=1
·e
∞ i kx− ak− ∑ c2m+1 (−1)m k2m+1 t m=1
.
(17)
It is easy to see in equation (17) that the coefficients of even order c2m are associated to dissipation errors whereas the odd coefficients c2m+1 are related to dispersion errors. The scheme is stable, if the following inequality holds: ∞
∑
m=1
c2m (−1)m k2m < 0
π ∀k ∈ 0, ∧ ∀m ∈ N + . ∆x
(18)
A necessary but not sufficient stability condition can be derived from (18) in the limit k → 0. Let us denote the dissipation coefficient of lowest order which is not equal to zero with c2m∗ , i.e. c2m = 0 ∀m < m∗ and c2m = ∗ 0 ∀m ≥ m∗ , then (18) can be divided by k2m and written as ∗
(−1)m c2m∗ +
∞
∑∗
m=m +1
c2m (−1)m k2(m−m
∗)
< 0.
The sum vanishes in the limit k → 0 and hence, a necessary but not sufficient stability criterion is ∗ (−1)m c2m∗ < 0. (19)
Arbitrary high order schemes for linear wave propagation
135
Table 3. Dissipation and dispersion coefficients for O 2 schemes cm
ADER O 2
Finite Difference O 2, RK O 2
c2
0
0
c3
− 61 a∆ x2 + 16 a3 ∆ t2 1 4 1 2 3 2 8 a ∆ t − 8 a ∆ t∆ x 1 1 5 1 3 − 120 a∆ x4 + 20 a ∆ t4 − 24 a ∆ t2 ∆ x2 1 4 1 2 3 2 4 48 a ∆ t ∆ x − 48 a ∆ t ∆ x
− 61 a∆ x2 + 16 a3 ∆ t2
c4 c5 c6
1 4 3 8 a ∆t 1 1 5 1 3 − 120 a∆ x4 + 20 a ∆ t4 + 12 a ∆ t2 ∆ x2 1 4 3 2 12 a ∆ t ∆ x
Table 4. Dissipation and dispersion coefficients for O 2 schemes, CFL notation cm ADER O 2
Finite Difference O 2, RK O 2
c2
0
0
c3
1 2 6 a ∆ x (ν − 1 ) (ν + 1 ) 1 3 8 a ∆ x ν (ν − 1 ) (ν + 1 ) " 2 1 4 120 a ∆ x (ν − 1 ) (ν + 1 ) 6ν 1 5 48 a ∆ x ν (ν − 1 ) (ν + 1 )
1 2 6 a ∆ x (ν − 1 ) (ν + 1 ) 1 3 3 8 a∆ x ν " 4 # 1 4 2 120 a ∆ x 6ν + 10ν − 1 1 5 3 12 a ∆ x ν
c4 c5 c6
+1
#
Table 3 gives the dissipation and dispersion coefficients of the second order ADER scheme and for comparison the coefficients of a central second order finite difference scheme which is integrated in time using the explicit second order standard Runge-Kutta method. There is obviously no difference in the first dispersion coefficient c3 but already in the next coefficient c4 , which is the first non-vanishing dissipation coefficient c2m∗ . We note that the ADER scheme adds a term − 18 a2 ∆ t∆ x2 . Also the other coefficients cl are quite similar in most terms, but there is always one term which differs between ADER and the RK finite difference scheme. If we introduce the definition of the CFL number ν = a ∆∆xt , we get table 4 in terms of ν . We note that both schemes are second order accurate since the error coefficients are of order O 2. For the second order central finite difference scheme, the necessary stability condition (19) is obviously violated for any positive CFL number, since c4 > 0, whereas for the ADER scheme c4 < 0 ∀0 < ν < 1. Evaluation of (18) up to any order shows that the ADER scheme is stable under CFL condition ν < 1. The terms of the form (ν − 1) cause the ADER scheme to reproduce the exact solution for ν = 1. Table 6 shows a comparison of ADER O 4 and O 4 finite differences. Both schemes are obviously fourth order accurate and from the factorization in terms of the CFL number, we deduce once again that the ADER scheme reproduces the exact solution for ν = 1. The necessary and sufficient stability
136
M. Dumbser, T. Schwartzkopff, and C.-D. Munz
Table 5. Dissipation and dispersion coefficients for the ADER O 3 scheme, CFL notation cm
ADER O 3
c2,3
0
c4
1 − 24 a∆ x3 (ν − 1) (ν + 1) (ν − 2)
c5
1 − 60 a∆ x4 (ν − 1) (ν + 1) (ν − 2) (2ν − 1) " # 1 − 144 a∆ x5 (ν − 1) (ν + 1) (ν − 2) 2ν 2 − 2ν + 1 " # 1 − 504 a∆ x6 (ν − 1) (ν + 1) (ν − 2) (2ν − 1) ν 2 − ν + 1
c6 c7
Table 6. Dissipation and dispersion coefficients for O 4 schemes, CFL notation cm
ADER O 4
c2,3,4 0 c5 c6 c7 c8
Finite Difference O 4, RK O 4 "
0
#
a∆ x4 2 2 120 (ν − 1 ) (ν + 1 ) ν − 2 " # 5 a∆ x 2 2 144 ν (ν − 1 ) (ν + 1 ) ν − 2 " 2 #" 2 # a∆ x6 2 3ν + 1 1008 (ν − 1 ) (ν + 1 ) ν − 2 " 2 #" 2 # a∆ x7 2 ν +1 1152 ν (ν − 1 ) (ν + 1 ) ν − 2
"
a∆ x4 2 120 ν − 2ν + 2 5 a∆ x 5 144 ν " 6 # a∆ x6 1008 3ν + 4 a∆ x7 7 1152 ν
#" 2 # ν + 2ν + 2
Table 7. Dissipation and dispersion coefficients for O 6 schemes, CFL notation cm
ADER O 6
Finite Diff. O 6, RK O 4
c2,3,4 0 c5
0
c6
0
c7 c8 c9 c10
0
"
#"
#
a∆ x6 2 2 ν 2 − 32 5040 (ν − 1 ) (ν + 1 ) ν − 2 " # " 2 # 7 a∆ x 2 2 ν − 32 5760 ν (ν − 1 ) (ν + 1 ) ν − 2 " 2 #" 2 #" # a∆ x8 2 ν − 32 2ν 2 + 1 25920 (ν − 1 ) (ν + 1 ) ν − 2 " 2 #" 2 #" # a∆ x9 2 ν − 32 2ν 2 + 3 86400 ν (ν − 1 ) (ν + 1 ) ν − 2
a∆ x4 4 120 ν a∆ x5 5 144 ν " 6 # a∆ x6 1680 5ν − 12 a∆ x7 7 1152 ν " 8 # a∆ x8 25920 5ν − 36
0
criteria (19) and (18) can be shown to be satisfied by both schemes this time. However, a comparison of the coefficients c5 shows that the ADER scheme produces less dispersion errors than the standard finite difference scheme 1 3 a ∆ x2 ∆ t2 . due to an extra term − 24 Table 7 shows the dissipation and dispersion coefficients for a sixth order ADER scheme and for a sixth-order finite difference scheme in space using
Arbitrary high order schemes for linear wave propagation
137
Table 8. Dissipation and dispersion coefficients for the ADER O 16 scheme, CFL notation cm
ADER O 16
c2 − c16 0
"
#"
#"
#
c17
1 16 ν 2 − 12 ν 2 − 22 ν 2 − 32 · 355687428096000 a ∆ x " 2 #" 2 #" 2 #" 2 #" # 2 2 2 ν −5 ν −6 ν − 72 ν 2 − 82 · ν −4
c18
1 17 2 2 ν 2 − 22 ν 2 − 32 376610217984000 a ∆ x ν ν − 1 " 2 #" 2 #" 2 #" 2 #" # 2 2 2 ν −5 ν −6 ν − 72 ν 2 − 82 · ν −4
c19
1 18 ν 2 − 12 ν 2 − 22 ν 2 − 32 · 2385198047232000 a ∆ x " 2 # " # " # " #" #" · ν − 42 ν 2 − 52 ν 2 − 62 ν 2 − 72 ν 2 − 82 3ν 2
c20 c21 c22
"
#"
#"
#
"
#"
#"
#
·
# +4 " # " # " # 1 19 2 2 ν 2 − 22 ν 2 − 32 · 2510734786560000 a ∆ x ν ν − 1 " 2 # " # " # " #" #" # · ν − 42 ν 2 − 52 ν 2 − 62 ν 2 − 72 ν 2 − 82 ν 2 + 4 " # " # " # 1 20 ν 2 − 12 ν 2 − 22 ν 2 − 32 · 52725430517760000 a ∆ x " 2 # " # " # " #" #" # · ν − 42 ν 2 − 52 ν 2 − 62 ν 2 − 72 ν 2 − 82 5ν 4 + 40ν 2 + 26 " 2 #" 2 #" # 1 21 2 ν − 22 ν 2 − 32 · 165708495912960000 a ∆ x ν ν − 1 " 2 # " # " # " # " # " # · ν − 42 ν 2 − 52 ν 2 − 62 ν 2 − 72 ν 2 − 82 3ν 4 + 40ν 2 + 78
fourth order Runge-Kutta timestepping. The ADER scheme is obviously of O 6 and fulfills the stability criteria (18) and (19), whereas the fourth order Runge-Kutta time discretization limits the formal order of accuracy of the sixth order finite difference scheme to O 4, unless very small time steps are chosen. Finally table 8 shows the dispersion and dissipation coefficients obtained by differential approximation for a 16th order scheme. One can conclude that the designed formal order of accuracy of the ADER schemes can also be reached for very high orders. 3.2 Numerical von Neumann analysis in two dimensions For the two dimensional case a numerical von Neumann stability analysis is carried out [10]. The stability region is plotted in Fig. 1 as a function of the CFL numbers in x and y direction, respectively, and as a function of the order of the scheme. The stability limit is increasing the higher the order of the analyzed ADER schemes. On the left hand side of Fig. 1 the stability region of ADER schemes of order 2,4,6,10,12 and 14 are shown, in the right diagram the stability regions are plotted for orders 3,5,7,9,11 and 13. The dashed circle marks the stability region of maximal CFL number equal to 1 in all directions. It is interesting that obviously two different limits are obtained if the order of accuracy is increased: For even order schemes it is a square with CFL x = CFL y = 1 and for the odd order schemes it is well above a reference circle
M. Dumbser, T. Schwartzkopff, and C.-D. Munz 1
1
0.8
0.8
0.6
0.6
CFLy
CFLy
138
0.4
0.4
0.2
0.2 stable region
0
0
0.2
0.4
stable region
0.6
0.8
0
1
0
0.2
0.4
CFLx
0.6
0.8
1
CFLx
Fig. 1. Stability regions for even (left) and odd (right) order ADER schemes
with CFL = 0.9 but in some region below the reference circle CFL = 1.0. In practical calculations one cannot make use of the larger stability on the diagonal for the even order schemes. Thus the practical stability limit is there CFL = 0.98.
4 Dispersion and dissipation properties In Computational Aeroacoustics (CAA) good wave propagation properties are a crucial point for numerical schemes. The dispersion and dissipation errors must be as low as possible in order to provide accurate wave propagation over long distances on reasonably coarse grids. With the method of differential approximation it is possible to derive the dispersion and dissipation errors of a numerical scheme analytically, making use of (17). The amplitude error relative to the initial amplitude of a sine wave after a certain computation period T is given by
χ = 1−e
m ∑ c2m (−1) k2m T
∞
m=1
,
(20)
and the phase error in radiants with respect to the exact solution of (9) is % &
∆ϕ =
∞
∑
m=1
c2m+1 (−1)m k2m+1
T.
(21)
In figure 2 we plot the phase and amplitude errors against the points per wavelength (PPW) which are used to resolve a wave with a fixed wavelength for different schemes: ADER schemes of second up to 16th order of accuracy and central finite difference schemes of fourth and sixth order accuracy in space and fourth order accuracy in time. The figures contain the measured errors of numerical experiments [6] as well as the ones predicted
Arbitrary high order schemes for linear wave propagation
139
1
0.75
Amplitude Error
X Φ
ADER O2 (Theory) ADER O2 (Num. Exp.) ADER O3 (Theory) ADER O3 (Num. Exp.) ADER O4 (Theory) ADER O4 (Num. Exp.) ADER O5 (Theory) ADER O5 (Num. Exp.) ADER O6 (Theory) ADER O6 (Num. Exp.) ADER O15 (Theory) ADER O15 (Num. Exp.) ADER O16 (Theory) ADER O16 (Num. Exp.) FD O4 RK O4 (Theory) FD O4 RK O4 (Num. Exp.) FD O6 RK O4 (Theory) FD O6 RK O4 (Num. Exp.)
0.5 ADER O15 and O16 are magnified by 25
0.25 X ΦX
0
ΦX XX ΦΦ XXX ΦΦΦ XXXXXXXXXXXXX Φ ΦΦΦΦΦΦΦΦΦΦΦΦΦΦ X Φ X Φ X X X X X X
5
7.5
10
12.5
15
X
17.5
X
20
PPW 2 1.75 1.5
| Phase Error |
X Φ
1.25
ADER O2 (Theory) ADER O2 (Num. Exp.) ADER O3 (Theory) ADER O3 (Num. Exp.) ADER O4 (Theory) ADER O4 (Num. Exp.) ADER O5 (Theory) ADER O5 (Num. Exp.) ADER O6 (Theory) ADER O6 (Num. Exp.) ADER O15 (Theory) ADER O15 (Num. Exp.) ADER O16 (Theory) ADER O16 (Num. Exp.) FD O4 RK O4 (Theory) FD O4 RK O4 (Num. Exp.) FD O6 RK O4 (Theory) FD O6 RK O4 (Num. Exp.)
1 ADER O15 and O16 are magnified by 200
0.75 0.5 0.25 0
X ΦX ΦΦ XΦΦ XXX ΦΦΦΦΦΦΦΦΦΦΦΦΦΦΦΦ XXXXXXXXXXXXXXX Φ X Φ X Φ X X X X X X
5
7.5
10
12.5
15
Φ
17.5
Φ
20
PPW Fig. 2. Amplitude and phase errors according to theory and numerical experiments
140
M. Dumbser, T. Schwartzkopff, and C.-D. Munz
by equations (21) and (20) according to the theory of differential approximation, where the appearing infinite sums have been evaluated to sufficiently large values of m so that the truncation errors are negligible. Note that the graphs for the 15th and 16th order schemes are magnified. The CFL number is ν = 0.9, the mesh size ∆ x = 1 and the time T corresponds to 100 peri2π ods of advection at an advection speed of a = 1, so we have k = PPW ∆x 100·2π and T = ak . As expected, the phase errors are considerably lower for the ADER schemes. An O 4 ADER scheme has approximately the same phase error as a sixth order standard finite difference scheme. The amplitude errors do not depend very much on spatial discretization but on the time discretization, so that the amplitude errors for all schemes which are fourth order in time can be compared. However with the ADER approach it is easy to obtain very high order time discretization which leads to considerably lower amplitude errors, as shown for the ADER O 6 example in figure 2. Within the ADER schemes it is remarkable, that the third and fourth order scheme and the fifth and sixth order scheme produces more or less the same phase errors whereas the amplitude errors decrease continuously while increasing the order. We emphasize that the ADER schemes possess an inherent filtering property since the amplitude errors start to increase significantly for wave numbers which are not well resolved any more in the phase error plot. With this property, no artificial damping, as known e.g. from finite difference schemes, is needed to stabilize the ADER scheme.
5 Numerical results In section 3 we have shown the formal order of accuracy for the onedimensional ADER schemes of order O 2 − 6 and 16 at the aid of differential approximation. In this section we show numerical convergence studies for the simple two-dimensional advection equation ut + u x + u y = 0.
(22)
The initial condition is given by u( x, y, t = 0) = e
− 21
x2 + y2 σ2
(23)
with halfwidth σ = 3 units which is advected through the computational domain along the diagonal y = x. The computational domain has the extent [0; 100] × [0; 100] with four periodic boundary conditions and the error with respect to the exact solution, which is equal to the initial condition (23), is calculated after one period of advection (T = 100). The number of grid points in x and y direction is NG .
Arbitrary high order schemes for linear wave propagation
141
Tables 9 - 11 show clearly that the respective design orders of the numerical schemes have been reached. Although the rates for the 15th and 16th order schemes are slightly below the design order, the numerical convergence rates nevertheless confirm that the construction of very high order schemes with the ADER approach is possible. In figure 3 the convergence rates are plotted over the effective CPU time instead of the grid. CPU time is measured in floating point operations, not in seconds. If low accuracy is required, the low order schemes are clearly the fastest ones. The situation changes rapidly if higher accuracy is required. As an example an iso-error line of 1 · 10−8 is included in the figure. The CPU time of a 16th order scheme is 0.8 orders of magnitude lower compared to a 6th order scheme. Moreover this quality cannot be reached with a 2nd order scheme on a manageable grid. Second, an iso-CPU time line is drawn. With that CPU-time an error of 4.3 · 10−4 is obtained if a second order scheme is used. A 6th error scheme can already decrease the error to 8 · 10−7 and the Table 9. Numerical convergence rate for ADER O 2 and ADER O 3 schemes ADER O 2 NG L ∞
O L∞ L 1
ADER O 3 O L1 L ∞
O L∞ L 1
O L1
75 100 125 150 175 200 225 250 300
– 0.4 0.4 0.5 0.8 0.9 0.8 1.1 1.3
– 0.8 1.0 1.3 1.4 1.6 1.8 1.9 2.0
– 1.3 0.7 1.8 1.4 2.2 1.8 2.5 2.4
– 1.4 1.7 2.1 2.2 2.5 2.5 2.7 2.7
8.20861E-01 7.25056E-01 6.63756E-01 6.09218E-01 5.37958E-01 4.79431E-01 4.35047E-01 3.89292E-01 3.08095E-01
8.06241E-03 6.49501E-03 5.18986E-03 4.13067E-03 3.30563E-03 2.66634E-03 2.15978E-03 1.76404E-03 1.21577E-03
5.74654E-01 3.98063E-01 3.37183E-01 2.43357E-01 1.96549E-01 1.46708E-01 1.18091E-01 9.09957E-02 5.87466E-02
2.00478E-03 1.33527E-03 9.09633E-04 6.23312E-04 4.43239E-04 3.18543E-04 2.36809E-04 1.78767E-04 1.08505E-04
Table 10. Numerical convergence rate for ADER O 4 and ADER O 6 schemes ADER O 4 NG L ∞
O L∞ L 1
ADER O 6 O L1 L ∞ O L∞ L 1
O L1
75 100 125 150 175 200 225 250 300
– 1.6 2.4 2.6 2.8 3.3 3.7 3.8 3.7
– 2.3 3.0 3.5 3.9 3.9 4.0 4.0 4.0
– 3.9 4.8 5.5 5.9 5.8 5.9 5.9 5.9
3.72449E-01 2.36426E-01 1.36960E-01 8.57745E-02 5.59129E-02 3.61400E-02 2.34689E-02 1.57950E-02 8.10816E-03
1.77316E-03 9.05164E-04 4.66537E-04 2.47178E-04 1.36447E-04 8.07881E-05 5.04996E-05 3.32331E-05 1.60201E-05
1.69150E-01 7.40704E-02 2.87557E-02 1.19117E-02 5.71214E-03 2.77699E-03 1.38659E-03 7.42244E-04 2.61651E-04
– 2.9 4.2 4.8 4.8 5.4 5.9 5.9 5.7
7.24048E-04 2.37120E-04 8.14385E-05 2.98742E-05 1.20060E-05 5.56011E-06 2.76728E-06 1.48983E-06 5.03787E-07
142
M. Dumbser, T. Schwartzkopff, and C.-D. Munz
Table 11. Numerical convergence rate for ADER O 15 and ADER O 16 schemes ADER O 15 O L∞ L 1 NG L ∞
ADER O 16 O L1 L ∞ O L∞ L 1
O L1
75 100 125 150 175 200 225 250 300
– 8.8 11.3 13.0 13.7 13.8 14.0 14.3 14.5
– 9.1 12.4 13.5 14.3 14.6 15.0 15.1 15.4
1.97202E-02 1.61827E-03 2.16656E-04 2.06930E-05 3.11082E-06 4.66691E-07 9.68098E-08 2.07457E-08 1.52288E-09
– 8.7 9.0 12.9 12.3 14.2 13.4 14.6 14.3
7.31108E-05 5.85823E-06 4.66663E-07 4.32633E-08 5.26761E-09 8.37322E-10 1.60041E-10 3.55351E-11 2.54584E-12
1.61349E-02 1.48536E-03 1.14677E-04 1.36468E-05 1.50203E-06 2.07234E-07 3.92277E-08 7.83094E-09 4.88339E-10
– 8.3 11.5 11.7 14.3 14.8 14.1 15.3 15.2
6.58760E-05 4.87504E-06 3.03334E-07 2.57730E-08 2.82493E-09 3.99890E-10 6.83031E-11 1.38960E-11 8.40766E-13
16th order scheme reaches an error as low as 2.5 · 10−8 with the same CPU time but with a much smaller memory requirement due to the coarser grid. For an error of 5 · 10−5 , e.g. the sixth and 16th order scheme are more or less equally fast. Therefore for less restrictive requirements concerning the quality of the numerical solution, the 6th order scheme is the better choice. It is clear that one can only really benefit from very high order schemes, if a certain high quality of the solution must be reached. On the other hand, if one is satisfied with larger errors, low order schemes would be the better candidates. Finally we study numerically the properties of ADER schemes for wave propagation over very long times. A plot of the solution of the same problem as solved previously, but after 100 periods of advection (T = 10000), is given 100
10
-6
10
-8
3.3 orders
10-4
3.8 orders
10-2
iso-error
10
-12
O2 O3 O4 O5 O6 O15 O16
10-14 10
iso-CPU Time
10
-10
0.8 orders
-16
106
107
108
109
1010
1011
CPU Time
Fig. 3. Log-log plot of the convergence rates for ADER O 2-O 16 over CPU Time
Arbitrary high order schemes for linear wave propagation
143
f)
Fig. 4. Gaussian fluctuation with halfwidth of σ = 3 units at T = 10000 . a) ADER O 3, b) ADER O 4, c) ADER O 5, d) ADER O 6, e) ADER O 15, f) ADER O 16
in figure 4 for ADER schemes of order O 3 − 6, 15 and 16 using σ = 3 and ∆ x = ∆ y = 1. Hence the advection of the Gaussian distribution has to be captured on a very coarse grid. Up to 5th order, the schemes fail to produce acceptable results compared to the exact solution. However, the inherent robustness of the ADER approach is clearly visible and the dissipative errors balance the dispersive errors very well. The increase of the order does not generate additional wiggles, but provides a more and more accurate result.
6 Conclusions In the first part the ADER scheme has been shortly reviewed and discussed. The fast-ADER formulation leads to a single–step scheme in time of arbitrary order of accuracy. The practical limit is given only by computer precision in the calculation of the coefficients. We implemented the scheme in such a way, that the order of accuracy in space and time becomes simply a parameter to be specified by the user, so a really arbitrary high order method computer code has been achieved. Compared to other high–order integration schemes in time such as Runge–Kutta methods this is a clear benefit, because Runge– Kutta methods suffer from the Butcher barriers for orders higher than four. In one and two dimensions the second order ADER scheme is identical to the classical second order one–step Lax–Wendroff scheme. In the second part of the paper a truncation error analysis of the schemes has been carried out using the method of differential approximation of Shokin. The method was applied to the one–dimensional version of the scheme and a scalar linear advection equation. We were able to show the formal order of accuracy and the stability limit of CFL = 1. For two space dimensions a numerical stability analysis with the von Neumann method is
144
M. Dumbser, T. Schwartzkopff, and C.-D. Munz
shown and the high stability limits are again retrieved. With the method of differential approximation it was also possible to compute analytically the dispersion and dissipation errors which have been compared to numerical experiments. The ADER scheme was compared to standard finite difference schemes, using Runge–Kutta time integration. Finally we presented a numerical convergence study for the two dimensional schemes up to O 16. The computational effort grows quadratically with the order of accuracy of the fast-ADER scheme due to the quadratic growth of the stencil size. The schemes presented in this paper are based on linear central reconstructions and are designed to capture linear wave propagation with small dissipation and dispersion errors. If discontinuities were inherent in the solution, other reconstruction techniques like WENO should be used. The extension to three dimensions is straight forward. Considerable work on these topics has been carried out by Titarev and Toro [7].
References 1. Toro EF (1997) Riemann solvers and numerical methods for fluid dynamics. Springer 2. Gottlieb S, Shu CW (1996) Total variation diminishing Runge–Kutta schemes. ICASE 96-50, NASA Langley Research Center, Hampton, USA 3. Butcher JC (1987) The numerical analysis of ordinary differential equations: Runge–Kutta and general linear methods. Wiley 4. Schwartzkopff T, Munz CD, Toro EF (2002) J Sci Comp 17(1–4):231–240 5. Toro EF, Millington RC, Nejad LAM (2001) Towards very high–order godunov schemes. In: Toro EF (eds) Godunov Methods: Theory and Applications. Kluwer Academic, Plenum Publishers 6. Schwartzkopff T, Dumbser M, Munz CD (2004) J Comp Phys 197:532–539 7. Titarev VA, Toro EF (2005) J Comp Phys 202:196–215 8. Shokin YuI (1983) The method of differential approximation. Springer Verlag 9. Warming RF, Hyett BJ (1974) J Comp Phys 14:159–179 10. Hirsch C (1988) Numerical computation of internal and external flows. Vol I: Fundamentals of numerical discretisation. Wiley
Numerical simulation and optimization of fiber optical lines with dispersion management Yu.I. Shokin1 , E.G. Shapiro2 , S.K. Turitsyn2 , and M.P. Fedoruk1 1 2
Institute of Computational Technologies SB RAS, Lavrentiev Ave. 6, Novosibirsk 630090, Russia [email protected], [email protected] Institute of Automation and Electrometry SB RAS, Koptuyg Ave. 1, Novosibirsk 630090, Russia [email protected], [email protected]
Summary. Several new possibilities to enhance information capacity of data transmission by integration of several key technologies such as dispersion management, wavelength-division multiplexing and optical regeneration of signals are discussed. Mathematical modelling results may be used for upgrade of existing fiber links and design of new generation of long-haul high-bit-rate communication lines.
1 Introduction The rapid progress in the research and development of optical communication lines is associated with the growth of the Internet and enhanced demand on telecommunication services. Practical and research interest is directed mostly toward two main goals: development of effective high capacity long-haul transmission systems and the upgrade of existing terrestrial fibre networks. High-speed (with data transmission bit rate exceeding 40 Gbit/s in single frequency channel) communication lines require dispersion management [1, 2] to compensate for linear broadening of the signal. A simple dispersion management (DM) scheme is to construct the transmission line by splicing together fiber segments having alternately anomalous and normal dispersion. The transmission line can then have both a low path-averaged group-velocity dispersion (GVD) and a high local GVD, thereby suppressing Gordon–Haus timing jitter and the four-wave mixing efficiency simultaneously [2]. One of the important goals in the design of efficient communication links is to increase spectral efficiency of the transmission - to increase channel rate and to space different channels as close to each other as possible. In this paper we discuss several new possibilities to enhance information capacity of data transmission by integration of several key technologies such as dispersion management, wavelength-division multiplexing and optical regeneration of signals.
146
Yu.I. Shokin, E.G. Shapiro, S.K. Turitsyn, and M.P. Fedoruk
Mathematical modelling results may be used for upgrade of existing fiber links and design of new generation of long-haul high-bit-rate communication lines.
2 Basic mathematical model The optical pulse propagation over transmission system is governed by nonlinear Schrödinger equation with the effects of third order dispersion and Raman term included [3]: ∂A β ∂2 A β ∂3 A + iγ A − 2 2 − i 3 3 + ∂z 2 ∂t 6 ∂t i ∂ ∂| A|2 2 2 = 0. + σ | A| A + | A| A − TR A ω0 ∂t ∂t
i
(1)
Here z is the propagation distance, t is the retarded time, | A|2 is the optical power, β2 is the group velocity dispersion parameter, β3 is third-order dispersion, σ is nonlinear parameter, γ is fiber loss, TR is the Raman time and ω0 is carrier frequency. We use split-step Fourier method for solving of the nonlinear Schrödinger (NLS) equation (1) [3]. For that it is useful to write Eq. (1) formally in the form # " ∂A ˜ A, = D˜ + N ∂z
(2)
˜ is differential operator that accounts for dispersion and loss in an where D ˜ is nonlinear operator that governs the effect of fiber linear medium and N nonlinearities on pulse propagation. These operators are given by 2 3 ˜ = −γ − i β2 ∂ + β3 ∂ , D 2 ∂t2 6 ∂t3
i 1 ∂ 2 ∂| A|2 2 ˜ N = iσ | A| + . | A| A − TR ω0 A ∂t ∂t
(3)
(4)
In general, dispersion and nonlinearity act together along the length of the fiber. The split-step Fourier method obtains an approximate solution by assuming that in propagating the optical field over a small distance δ z, the dispersive and nonlinear effects can be pretended to act independently. More specifically, propagation from z to z + δ z is carried out in two steps. In the ˜ = 0 in Eq.(3). In the second step, first step, the nonlinearity acts alone, and D ˜ dispersion acts alone, and N = 0 in Eq. (4). Mathematically,
Numerical simulation and optimization of fiber optical lines
147
˜ exp δ z N ˜ A ( z, t) . A ( z + δ z, t) = exp δ z D (5) ˜ can be evaluated in the Fourier doThe exponential operator exp δ z D main using the prescription . ˜ B ( z, t) = F −1 exp δ z D ˜ (iω) F B ( z, t) , (6) exp δ z D ˜ (iω) is obtained from Eq. where F denotes the Fourier-transform operation, D (3) by replacing the differential operator ∂/∂t by iω, and ω is the frequency in the Fourier domain. Such method is accurate to second order in the step size δ z. In calculations we used symmetric form of split-step Fourier method is given by ⎡ ⎢ A ( z + δ z, t) = exp ⎣
z+ δ2z
'
⎤
⎡
˜ (s) ds⎥ ˜ exp ⎢ N ⎦ exp δ z D ⎣
z'+δ z
⎤ ˜ (s) ds⎥ N ⎦ A ( z, t) .
z+ δ2z
z
(7) The most important advantage of using the symmetrized form of Eq. (7) is that leading error term is of third order in the step size δ z. The ”quality of transmission” in communication system is characterized by the bit error rate (BER) that determines the number of error bits with respect to the total number of transmitted bits [4]. A commonly used criterion for digital optical receivers requires BER ≤ 10−9 corresponds to on average 1 errors per billion bits. Following [4], we will define P(0/1) and P(1/0), where P(0/1) is probability of deciding 0 when 1 is received, and P(1/0) is probability of deciding 1 when 0 is received. Assuming the probability densities pi , (i = 0, 1) are normally distributed 1 ( x − µi )2 pi ( x ) = √ exp − . (8) 2σi2 2πσi Here µi , σi are the average values and variances respectively. Then P(0/1) =
'Id −∞
P ( 1 / 0 )=
1 p1 ( x)dx = er f c 2
0∞ Id
p0 ( x)dx = 12 er f c
µ1 − Id √ σ1 2
Id −√µ0 σ0 2
,
(9)
,
where er f c stands for the complementary error function. In practice, a threshold value Id is optimized to minimize the BER, BER =
P(0/1) + P(1/0) . 2
(10)
148
Yu.I. Shokin, E.G. Shapiro, S.K. Turitsyn, and M.P. Fedoruk
Let us find the value of the Q factor Q=
µ1 − µ0 , σ1 + σ0
(11)
then the BER (for Gaussian distribution of errors !) is given by [4] Q2 exp − 4 Q 1 √ BER = er f c √ ≈ . 2 2 2π Q
(12)
We should note that BER ≤ 10−9 corresponds to Q ≥ 6.
3 Flat-top spectrum data format for N × 40 Gbit /s WDM transmission with 0.8 bit /s/ Hz spectral efficiency In this section we examine N × 40 Gb/s WDM transmission using data format with flat-top spectrum over bandwidth B and temporal profile sinc(π Bt) [5]. Using sinc-shaped pulses it is possible to suppress interaction between neighbouring bits by positioning periodic zeroes of sinc(π Bt) in the center of time slots. To produce sinc-shaped carrier with flat top spectrum, short (1.7 ps) Gaussian pulses has been sent through the super-Gaussian optical filter. Figure 1 shows temporal profile of the pulse before (top) and after the optical
Power (a.u.)
80
60
40
20
0
-50
0
50
Time (ps) 1
Power (a.u.)
0.8
0.6
0.4
0.2
0
-50
0
50
Time (ps)
Fig. 1. Signal waveform before (top) and after (bottom) ideal square-like (dashed line) and 6-order super-Gaussian (solid line) optical filter
Spectral power (a.u.)
Numerical simulation and optimization of fiber optical lines 10
-1
10
-2
149
10-3
-150
-100
-50
0
50
100
150
50
100
150
50
100
150
50
100
150
Frequency (GHz)
Spectral power (a.u.)
1
0.75
0.5
0.25
0 -150
-100
-50
0
Spectral power (a.u.)
Frequency (GHz) 10
-1
10-2
10
-3
-150
-100
-50
0
Spectral power (a.u.)
Frequency (GHz) 10-1
10
-2
10
-3
-150
-100
-50
0
Frequency (GHz)
Fig. 2. From the top to the bottom: spectrum of the input short pulse before filtering (top); filter profile- ideal (solid) and super-Gaussian 6-th order (dashed); carrier spectrum after filtering; and WDM channels after multiplexing (bottom)
filter (bottom). Note that the zeroes of sinc(π Bt) are adjusted to the middle of time slots reducing corruption of eye for the neighboring bits. Figure 2 illustrates how WDM signal is formed using flat-top spectrum carrier signal. Top picture shows pulse spectrum before applying optical filter (shown for selected channel below the top). Next two pictures depict signal spectrum after band-limited filtering at the transmitter and mixed WDM channels after propagation over 1200 km (bottom). As a particular example, without loss of generality we examine performance of the band-limited sincshaped pulses in N × 40 Gbit/s WDM transmission with 50 GHz channel spacing. As an illustrative example we consider a periodic symmetric dispersion map SMF (20 km) + DCF (6.8 km) + SMF (20 km) + EDFA with the total length of 46.8 km. Here EDFA stands for the Erbium-doped optical amplifier. Optical amplifiers amplify incident signal through stimulated emission. In the case of EDFAs, the amplification process is simply √ obtained by multiplying the electrical field for the amplification factor G and by adding an independent ASE (amplified spontaneous emission) noise term to each spectral component of the signal with a typical noise figure of Fn = 4.5 dB. Transmission typically of 8 WDM channels located from 1548.78 nm with 50 GHz separation has been modelled. MUX and DEMUX are made of optical super-Gaussian filters (6 order) with the bandwidth 44 GHz and opti-
150
Yu.I. Shokin, E.G. Shapiro, S.K. Turitsyn, and M.P. Fedoruk
Back-to-back Q
mized detuning. Very short 1.7 ps pulses with 57 mW peak power have been filtered by optical filter of 44 GHz bandwidth producing band-limited sincshaped pulses (as shown in Fig. 1) with the averaged power of −5 dBm. Received signals are directly detected by conventional 40 Gb/s receiver with Butterworth electrical filter having bandwidth of 50 GHz. A system performance has been analyzed in terms of error-free transmission distance corresponding to a linear Q ≥ 6 obtained by averaging of 27 − 1 pseudorandom data patterns over 21 runs [6]. Different bit patterns were generated for different channels. Parameters of the fibers are as follows (a) SMF: dispersion at 1550 nm D = 17 ps/nm/km, slope S = 0.07 ps/nm2 /km, effective area Ae f f = 80 µ m2 , loss α = 0.2 dB/km; and (b) DCF: D = −100 ps/nm/km at 1550 nm, slope S = −0.41 ps/nm2 /km, effective area Ae f f = 19 µ m2 , loss α = 0.65 dB/km; EDFA has a noise figure of 4.5 dB, span average dispersion < D >= −0.03 ps/nm/km. The comparing the initial value of Q (figure 3) shown the unfit of usual on-of keying (OOK) format for multichannel transmission. Figure 4 shows the transmission distance as a function of the filter detuning. Note that the optimal detuning is sensitive to the optical filter shape. For instance, using super-Gaussian filter of the sixth order it can be found that the optimal detuning is shifted to −6 GHz. Thus, we have examined band-limited signal format with sinc-shaped temporal profile resonancely placed over few time slots. Sharp decay of the spectrum and corresponding suppression of WDM cross-talks has allowed to achieve spectral efficiency of 0.8 bit/s/ Hz with equally polarized channels. A feasibility of WDM transmission at 40 Gb/s channel rate over 1200 km with-
15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
+
+
+ +
+
+ +
+
+
+
+
+
+
-10
-8
-6
-4
-2
0
+
-14
-12
Filter shift (GHz) Fig. 3. Back-to-back Q for flat-top spectrum format with super-Gaussian filter of 6-th order (solid) and for OOK format (dashed) in multichannel system
Numerical simulation and optimization of fiber optical lines
151
1200 +
Distance (km)
+ +
+
1000
+
800
+
+
+
+ + +
600 +
400 200 + +
-14
-12
-10
-8
-6
-4
-2
0
Filter shift (GHz) Fig. 4. Error-free transmission distance as a function of the optical filter detuning. Dashed line – ideal square-like filter; solid line – super-Gaussian filter of the 6-th order
out FEC with spectral efficiency of 0.8 bit/s/ Hz without polarization multiplexing is confirmed by numerical modelling.
4 Semiconductor saturable absorber in long-haul N × 40 Gbit/s WDM RZ transmission In this section we present numerical analysis of SA-based regeneration scheme [7] enhanced by highly nonlinear fiber in a N × 40 , Gbit/s WDM RZ long-haul transmission [8]. Figure 5 shows two basic configurations of WDM DM transmission systems considered in this section. In the map (a) symmetric dispersion map has the following periodic section: PSCF+RDF+PSCF+ EDFA where fiber base includes Pure Silica Core Fiber (PSCF, dispersion D = 20 ps/nm/km, dispersion slope S = 0.06 ps/nm2 /km, loss α = 0.18 dB/km, effective area Ae f f = 110 µ m2 ) and 20 km of Reverse Dispersion Fiber (RDF, D = −42 ps/nm/km, S = −0.13 ps/nm2 /km, α = 0.3 dB/km, Ae f f = 20 µ m2 ). Second symmetric map (b) is based on TL (dispersion D = 8 ps/nm/km, dispersion slope S = 0.06 ps/nm2 /km, loss α = 0.21 dB/km, effective area Ae f f = 65 µ m2 ) and RTL (D = −16 ps/nm/km, S = −0.12 ps/nm2 /km, α = 0.28 , dB/km, Ae f f = 25 µ m 2 ). The OR period is 300 km and includes 5 dispersion maps. The EDFA noise figure is 4.5 dB. The OR design is shown in Fig. 5c. The OR consists of EDFA, SA, highly nonlinear fiber (HNF), filter and attenuator. SA has powerand time-dependent transmission equal to 1 − α (t). The power- and time-
152
Yu.I. Shokin, E.G. Shapiro, S.K. Turitsyn, and M.P. Fedoruk PSCF RDF PSCF
(
EDFA
xN
)
OR
xN
)
OR
a) TL
RTL
TL
(
EDFA
b) EDFAOR SA EDFAOR SA demux
EDFAOR
SA
EDFAOR SA
HNF HNF HNF HNF
Att. F Att. F Att.
mux
F Att. F
c)
Fig. 5. Transmission distance versus average dispersion and input peak power for the map PSCF+RDF+PSCF+EDFA
dependent loss α (t) is modelled through the simple rate equation:
α − α0 α P(t) dα ( P, t) =− − , dt τR τ F Psat SA steady-state loss α0 = 3 dB, saturation power Psat = 7 dBm, the fall and recovery times τ R = τ F = 5 ps. Basic parameters of the OR are: highly nonlinear fibre (HNF) has D = 2 ps/nm/km, loss α = 0.5 dB/km, n2 / Ae f f = 4 × 10−9 W −1 . Attenuator is included both to take into account insertion loss and to control power of a carrier at the OR output. Other parameters of OR have been determined through massive numerical optimization as described below. We use input OOK-RZ pulses with 50% duty cycle (TFW HM = 12.5 ps) and consider 4 WDM channels spaced by 200 GHz. A system performance has been analyzed in terms of standard Q-factor (linear Q ≥ 6 for BER less than 10−9 ) obtained by averaging 27 -1 quasi-random bit sequences over 5-11 runs with different pattern and noise realizations [6]. Below we present only selection of final results obtained through massive optimization of the systems/signal parameters. The best observed errorfree WDM transmission distance (linear Q ≥ 6 without FEC) for map (a) is 10 400 km. This result has been achieved for the following parameters found through optimization: input peak power p0 = 2.5 mW, span average dispersion < D >= −0.01 ps/nm/km, EDFAOR gain = 7.25 dB, length of HNF = 6 km, OR filter (Gaussian shape) bandwidth = 130 GHz, bandwidth of MUX/DEMUX WDM optical filter Bop = 190 GHz, bandwidth of electrical filter Bel = 45 GHz. Figure 6 shows a typical optimization picture in the plane of input power at the transmitter and span average dispersion. Slightly normal span average dispersion is required to achieve maximal transmis-
Numerical simulation and optimization of fiber optical lines
153
5 0
00
4 12
24
00
120 0
10
0
24
00
3 10
Peak power (mW)
120
2
1
12
00
-0.03
60
00
2400
12
0
00
0.03
Average dispersion (ps/nm/km) Fig. 6. Transmission distance versus average dispersion and input peak power for the map PSCF+RDF+PSCF+EDFA
sion. A sharp optimal peak again indicates the existence of an optimal relation between peak power of the carrier pulse and the span average dispersion. The best observed error-free WDM transmission distance (linear Q ≥ 6 without FEC) for map (b) is 8300 km. This result has been achieved for the following parameters: input peak power p0 = 3.0 mW, span average dispersion < D >= 0.0 ps/nm/km, EDFAOR gain = 6.5 dB, length of HNF = 6 km, bandwidth of rectangle optical filter Bop = 190 GHz, bandwidth of Gaussian optical filter in OR Bop = 120 GHz, bandwidth of electrical filter Bel = 42 GHz. Optimization contour plot of error-free distance for map (b) is shown in Fig. 7. In conclusion, we have performed massive numerical modelling of N × 40 Gbit/s WDM RZ (50% duty cycle) transmission over PSCF/RDF and TL/RTL links with in-line 2R optical regeneration based on saturable absorber and highly nonlinear fiber. A feasibility of error-free transmission without FEC with 300 km OR spacing over 10000 km and over 8000 km are numerically demonstrated for these two symmetric dispersion maps, respectively.
5 Non-periodic quasi-stable nonlinear optical carrier pulses with sliding chirp-free points for transmission at 40 Gbit /s rate As is well known amplifier noise and nonlinear transmission penalty are the key factors responsible for limiting the reach of fibre transmission links.
154
Yu.I. Shokin, E.G. Shapiro, S.K. Turitsyn, and M.P. Fedoruk
5
0
0
4800
180
0
90
00
81 00
3
180
0
0
900
2 90
1
180
4
90
Peak power (mW)
18
-0.03
0
0
0.03
Average dispersion (ps/nm/km) Fig. 7. Transmission distance versus average dispersion and input peak power for the map TL+RTL+TL+EDFA
At 40 Gbit/s channel rate, the major nonlinear penalties result from intrachannel interactions [9]. A possible way of reducing nonlinear impairments is to use quasi-linear return-to-zero (RZ) transmission with pulse duration much shorter than the bit-period [10]. However, a short duration of the carrier pulse leads to a broader spectrum that is not always suitable for WDM systems with high spectral efficiency. Here we demonstrate a possibility of a quasi-stable nonlinear transmission regime with carrier pulses of 12.5 ps width. Note that in the considered system and studied range of parameters the so-called dispersion-managed solitons [2] (periodic breathing pulses) do not exist. Found quasi-stable pulses supported by a large normal span average dispersion and misbalanced optical amplification present a new type of nonlinear carriers. We demonstrate that a large normal span average dispersion can substantially improve performance of N × 40 Gbit/s (100 GHz channel spacing) transmission systems. We consider as an example, without loss of generality, the following two periodic transmission schemes: a) SMF(85 km) +EDFA +DCF(15 km)+ EDFA, with a two-step dispersion map composed of standard single-mode fibre (SMF) and dispersion compensating fibre (DCF). Parameters SMF and DCF were described in Sec.3. The total length of SMF is set to 85 km, which would require a total DCF length of 14.45 km for full dispersion compensation. Two erbium-doped fibre amplifiers (EDFA), with a noise figure of 4.5 dB and equal gains of 13.4 dB, are used at the end of the each fibre section. Note that a signal under compensation after SMF and overcompensation after DCF creates a gain misbalance that plays a role in supporting observed quasi-stable nonlinear carrier pulses.
Numerical simulation and optimization of fiber optical lines
155
b) SMF(85 km) + BRA + DCF(15 km) + EDFA, with the same fibre characteristics of scheme a), but replacing the first EDFA for a backward Raman amplifier (BRA). In order to study the impact of distributed gain on the stability of the nonlinear regime, different contributions from each of the amplifiers to the total gain were considered, including the situation in which most of the gain is provided by the EDFA at the end of the span. The span average dispersion (SAD) is adjusted by varying slightly the length of the DCF. Numerical modelling has been performed for 8 WDM channels spaced from 1550.1 to 1555.8 nm with 100 GHz separation at 40 Gbit/s bit rate using 12.5 ps Gaussian pulses of variable peak power. A system performance has been analyzed in terms of standard Q-factor (linear Q ≥ 6 for BER less than 10−9 ) obtained by averaging 27 -1 quasi-random data patterns over 25 runs with different pattern and noise realizations [6]. For scheme a), even for moderate input peak powers of about 1 mW, the propagation regime is essentially nonlinear, and the immediate consequence is that zero average dispersion is not the best configuration in this case, but actually a certain amount of normal dispersion is required in order to obtain the best result as it is seen in Fig. 8. As an illustration, Fig. 8 shows the Qfactor after 1700 km transmission for 8 WDM channels (vertical lines) versus average dispersion for initial peak power of 2.6 mW. Note that the absolute values of Q are not very important as this example is used only as an illustration. It is seen that the best system performance is obtained for a rather high normal average dispersion of −0.7 ps/nm/km. The large non-zero average dispersion indicates that the corresponding optimal operational regime is nonlinear. To understand the physical mechanisms behind such nonlinear transmission regime with large negative span average dispersion we consider now in more detail the evolution of a single carrier pulse for a number of different SADs. Figure 9 shows the evolution of the width along the line (configuration a) for a pulse with the input peak power of 2 mW. The broadening of the pulse under zero average dispersion shows that nonlinearity impacts the transmission and a certain amount of SAD is required to stabilize pulse width. Carrier pulse broadening at zero span average dispersion leads to signal degradation and Q-factor decrease. This broadening can be counteracted with a slightly normal SAD (−0.05 ps/nm/km), which eventually leads to the stabilization of the pulse. Note, however, that in this case stabilization takes place at a pulse width larger than the original 12.5 ps, as it is seen in Fig. 9. This results in stronger inter-symbol interference and corresponding signal degradation (see Fig. 8). We have found that using a large normal SAD (−0.65 ps/nm/km) it is possible to stabilize carrier pulse at the original width and even produce a small gradual compression of the pulse. The physical mechanism of stabilization or even slow compression is similar to the so-called chirped pulse amplification technique [11]. The basic idea is that due to the gain misbalance a nonlinear propagation before amplification leads to pulse chirping and then a subsequent evolution in a more linear (low power) regime in a
Fig. 8. Config. (a). Linear Q-factor in 8 channels after 1700 km versus average dispersion for initial peak power of 2.6 mW
Fig. 9. Pulse width versus propagation distance for three values of span average dispersion (0, −0.05 and −0.65 ps/nm/km)
fibre with opposite dispersion results in effective pulse compression. Although the propagation regime is nonlinear, it is important to point out that this effect is not solitonic in origin, since the transmitted signal is far from the dispersion-managed soliton conditions [2]. The most important difference from the dispersion-managed soliton is that the chirp-free points of the carrier pulse in the SMF piece drift along the dispersion map, shifting approximately by ∆z = L D/D_SMF after each section, where L is the dispersion map length, as shown in Fig. 10. For a given input power, after a certain number of sections the chirp-free points disappear and the carrier
Fig. 10. Evolution of the position of the chirp-free point of a carrier pulse within the dispersion map (km) with distance (section number)
pulse starts to experience slow broadening from section to section. This increases the effect of inter-symbol interactions and leads to the degradation of the system performance. It is also important to stress that the non-soliton nonlinear regime found here, with sliding chirp-free points, is rather general and can be realized for a variety of systems. Similar results are obtained for scheme b). In conclusion, we have described a non-soliton nonlinear transmission regime at 40 Gbit/s using non-periodic carrier pulses with 50% duty cycle and chirp-free points that slide with propagation distance. Quasi-stable transmission of such carrier pulses is supported by a large normal span average dispersion and misbalanced optical amplification. It is shown that a simple analysis of single-pulse evolution can be an efficient and time-saving method to determine the optimal operational regimes of rather complex transmission systems. This work was supported by INTAS (grant 03-56-203).
References 1. Agrawal GP (2001) Applications of nonlinear fiber optics. Academic Press, New York 2. Turitsyn SK, Shapiro EG, Medvedev SB, Fedoruk MP, Mezentsev VK (2003) Comptes Rendus Physique 4:145–161 3. Agrawal GP (2001) Nonlinear fiber optics. Academic Press, New York 4. Agrawal GP (1997) Fiber-optic communication systems. Second edition. John Wiley & Sons Inc., New York 5. Lysakova MV, Fedoruk MP, Turitsyn SK, Shapiro EG (2004) Quantum Electronics 34:857–859
6. Shapiro EG, Fedoruk MP, Turitsyn SK (2001) Electron Lett 37:1179–1182 7. Leclerc O, Lavigne B, Balmefrezol E et al (2003) Comptes Rendus Physique 4:163– 173 8. Waiyapot S, Turitsyn SK, Fedoruk MP, Rousset A, Leclerc O (2004) Opt Commun 232:145-149 9. Essiambre R-J, Mikkelsen B, Raybon G (1999) Electron Lett 35: 1576–1578 10. Park S-G, Gnauck AH, Wiesenfeld JM, Garrett LD (2000) IEEE Photon Technol Lett 12:1085–1087 11. Fisher RA, Bishel WK (1974) Appl Phys Lett 24:468–469
Parallel applications on large scale systems: getting insights H. Brunst, U. Fladrich, W.E. Nagel, and S. Pflüger Center for High Performance Computing, Dresden University of Technology, 01062 Dresden, Germany {holger.brunst, uwe.fladrich, [email protected] Summary. This paper describes a case study which deals with the analysis of scalability properties on modern parallel computer architectures in light of a CFD-related problem – the scalable parallel adaption of unstructured grids. It shows how state-of-the-art benchmarking, profiling, and tracing tools can assist authors of parallel CFD applications in making the right design and implementation decisions regarding scalable application performance. A sophisticated platform evaluation framework and a distributed parallel program analyzer are presented.
1 Introduction Today, we observe a dramatic change in high performance computer architectures. While a few years ago homogeneous systems built by a small group of hardware vendors dominated the market, the advent of standard PC technology into this domain has led to very individual computing solutions. Due to the relatively cheap computing and networking components, academic institutions and small companies have started to build, i.e. assemble, their own computers again! This allows a lot of flexibility regarding architectural decisions on the one hand, but also requires careful thought about the purpose of the machine on the other hand. The following factors typically have to fit into a given budget:
• Number and architecture of processors
• Amount of main memory
• Network bandwidth, latency, and topology
• Organization and volume of secondary storage
Configuring the optimal machine is not an easy task, given a fixed budget. Clearly, detailed knowledge about the applications to be executed on the system is important if not mandatory for wise decisions. Unfortunately, applications are not always precisely defined from the beginning. Furthermore, they are likely to evolve during the lifetime of the system. In our experience,
two methodologies have proven to be very useful in this context. The first one focuses on general performance aspects of a system configuration. Standardized benchmark data can be obtained and compared for various system types and configurations. This framework can be used prior to, during, and after the acquisition of a new system. The second methodology focuses on the migration, development, and optimization process of parallel applications in the scope of large scale computing infrastructures. Such an optimization step is mandatory if the best application/platform performance is to be achieved on a newly installed system. This paper is organized as follows: The next section describes a user application which is part of an in-house CFD project [1]. This code will be used as the example application throughout the paper. In Section 3 the BenchIT framework for platform evaluation and comparison will be outlined, while Section 4 presents the new Vampir NG infrastructure for distributed scalable program analysis. Section 5 gives a preliminary tool-based analysis of our code example. Conclusions will be drawn in Section 6.
2 User application description We introduce a model problem and an appropriate implementation framework to illustrate the process of analyzing parallel programs and their behavior on high performance platforms. The implementation is regarded as an example which shows how tools can support the development of efficient software, rather than a complete CFD code. 2.1 Application background We consider the numerical simulation of flows emerging from engineering problems. Such flows may be highly complex in terms of physics (e. g. shocks, turbulence) as well as geometry (e. g. curved domains). Appropriate implementations are required to be accurate, geometrically flexible, and computationally efficient. Several techniques have been established over many years: Spectral methods are used for their high accuracy but face problems for complicated geometries. Finite element methods offer ease of implementation for complex domains but have a limited, or at least fixed, approximation order. The spectral element method (SEM, [2]), which we use to spatially discretize the model equations, combines advantages of both worlds: A flexible approximation order fulfills high accuracy requirements and triangulation of the computational domain into finite elements provides geometrical flexibility. The implementation is based on an unstructured grid consisting of tetrahedral spectral elements which may be either curved or planar. This choice allows the geometrically accurate discretization of a wide range of domains.
We use the software platform MG [1] which provides basic data structures and functionality for unstructured tetrahedral elements in a parallel environment. Convergence is attained by refining the spectral element grid: The size of elements is reduced (h-refinement), the local approximation order increased (p-refinement), or both techniques are applied simultaneously (hp-refinement). However, uniform refinement leads to exceedingly large grids, which is computationally prohibitive. Adaptivity, i.e. restricting the refinements to those parts of the grid where they are needed, may be used to overcome this problem. We restrict ourselves to h-refinement for the course of this presentation. Although the combination of the SEM and adaptation techniques renders an efficient method, we still need to exploit parallelism to run large-scale simulations on today's high-performance computers. The triangulation of the computational domain into finite elements suggests the use of domain decomposition for this purpose. With all of these techniques at hand (that is: SEM, unstructured grids, adaptivity, and parallelization), their integration is still an ambitious task. Data structures must be flexible enough to facilitate all the techniques, but furthermore we need tools to evaluate and improve performance for both sequential and parallel executions. 2.2 Model problem and adaptivity A model problem for this work is derived from the direct numerical simulation of incompressible flows. The temporal discretization of the Navier-Stokes equations typically results in a sequence of Helmholtz and Poisson equations. The SEM discretization of these equations leads to large systems of linear equations which are commonly solved iteratively. Since these iterations often account for a large portion of the run-time, they can be regarded as computational kernels for many flow solvers. Thus we state the model problem as follows: Find an approximation for u : Ω → IR which solves the Helmholtz equation ∇²u + λu = f (1) inside the cylindrical domain Ω. The problem is complemented by appropriate boundary conditions. The source term f of (1) is chosen such that u resembles an exponentially decaying function which is constant in the direction of the cylinder axis. The decay rate of u allows one to control the difficulty of the problem. A cascade-style solution-adaptation algorithm is used to solve the discretized version of (1) adaptively:
for L from 1 to Lmax do
    Solve (1) iteratively on grid level L                       (Step 1)
    Compute exact solution and solution error on level L        (Step 2)
    if (error is below threshold) exit
    if (L < Lmax) then
        Create grid level L + 1 by adaptation of level L         (Step 3)
    endif
end do
The conjugate gradient method is applied with diagonal pre-conditioning to iteratively solve the system of linear equations in step 1. The exact solution (which is known through the definition of f ) is used to compute the solution error in step 2. For real applications an error estimator has to be used. New grid levels are added to the MG grid structure at every solution-adaptation cycle in step 3 as the algorithm proceeds towards a solution which satisfies the accuracy requirement. Figure 1 shows a sequence of grid levels which has been produced by applying the solution-adaptation algorithm to the Helmholtz problem.
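The iterative kernel used in step 1 is a standard building block, so a generic sketch suffices to illustrate it. The following NumPy code shows a conjugate gradient iteration with diagonal (Jacobi) preconditioning in textbook form; it is an illustrative sketch, not the MG implementation, and the small dense test matrix stands in for the SEM system only for the sake of a self-contained example.

```python
import numpy as np

def pcg_diag(A, b, tol=1e-8, max_iter=500):
    """Conjugate gradient with diagonal (Jacobi) preconditioning.

    A is a symmetric positive definite matrix (dense here for simplicity;
    a real SEM solver would apply a sparse/matrix-free operator), b is the
    right-hand side of the discretized Helmholtz problem.
    """
    M_inv = 1.0 / np.diag(A)           # diagonal preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                       # residual
    z = M_inv * r                       # preconditioned residual
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Small self-contained test with a random SPD matrix
n = 200
Q = np.random.rand(n, n)
A = Q @ Q.T + n * np.eye(n)
b = np.random.rand(n)
x, iters = pcg_diag(A, b)
print(iters, np.linalg.norm(A @ x - b))
```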
Fig. 1. Adaptive solution of the model problem: grid levels (1.1 initial grid, 1.2 grid level 2, 1.3 grid level 3, 1.4 grid level 4, 1.5 level 4 – interior)
Fig. 2. Reduction of numerical complexity: solution error ||e||∞/||e0||∞ for global refinement and adaptation (2.1 versus grid size in d.o.f., 2.2 versus run-time Tsol and Tsol + Tada, resp.)
Four grid levels are shown, ranging from the uniformly fine initial grid level 1 (1.1) to level 4 (1.4), which is obtained after three adaptations. Figure 1.5 shows the interior of grid level 4 where the center of the solution u can easily be recognized. The efficiency of the adaptation has been evaluated and the results are shown in Figure 2. The adaptive solution process is compared to uniform (global) grid refinement. The solution error is evaluated in every solution-adaptation cycle and normalized by the coarse-grid error. Figure 2.1 shows the dependence of the error on the grid size, i.e. the number of degrees of freedom, which is a measure of the computational complexity of the problem. More than two orders of magnitude can be gained in terms of accuracy when using adaptivity. Figure 2.2 proves that these accuracy gains also translate into comparable run-time savings. 2.3 Parallelization Domain decomposition is used as the main methodology for parallelization. Load balancing is the major concern when distributing unstructured grids over many processors. In the implementation discussed we use Metis (see [1] for algorithmic details) to create balanced partitions of the multi-level grid. Distribution proceeds level-by-level, starting with the second level. The first level is always handled by one processor only. This restriction is induced by the implementation framework. Speed-up and parallel efficiency are evaluated to analyze the parallel performance of the model problem solver. Figure 3 shows results as obtained on an SGI Origin 3800 system using up to 96 processors. The plots show three different graphs for grid levels two, three, and four respectively. Run-times
Fig. 3. Scalability of parallel adaptation: speed-up and parallel efficiency versus number of processors (1–96) for grid levels 2, 3, and 4
include the solution of the linear system as well as adaptation overhead. Higher grid levels represent a larger problem size. Parallel performance drops rapidly for lower grid levels, since the local problem size is very small (e. g. about 500 degrees of freedom per processor for grid level two). However, for larger problems the adaptation and parallelization overhead is much more contained and a much better parallel efficiency (well over 80%) is maintained.
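As a reminder of the two quantities plotted in Fig. 3, the short sketch below computes speed-up and parallel efficiency from wall-clock run-times; the run-time values in it are invented for illustration only and are not the measurements behind Fig. 3.

```python
# Speed-up S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p,
# computed from wall-clock run-times (illustrative, made-up numbers).
runtimes = {1: 820.0, 2: 415.0, 4: 212.0, 8: 110.0,
            16: 58.0, 32: 33.0, 64: 21.0, 96: 17.0}   # seconds

t1 = runtimes[1]
for p in sorted(runtimes):
    speedup = t1 / runtimes[p]
    efficiency = speedup / p
    print(f"p = {p:3d}   S = {speedup:6.2f}   E = {efficiency:5.2f}")
```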
3 Platform evaluation with the BenchIT framework Understanding application performance on modern system architectures is an immanent and challenging task. Usually standard benchmarks and algorithms are used to get a general platform overview – from time to time some custom measurement kernels are developed to identify critical architecture issues. BenchIT [3] – a new tool to support the collection and presentation of such measurement data – is developed at the Center for High Performance Computing Dresden and will be discussed in this paper in light of the target platform selection for the selected example code. 3.1 The BenchIT architecture BenchIT consists of two parts for the measurement and the presentation of performance data. As illustrated in Figure 4, those two parts are connected by the result file of the BenchIT measurement run which can be uploaded to the web server to compare it to other results. The BenchIT main kernel driver initiates and controls the performance measurement as shown in Figure 5. It repeatedly calls the measurement kernel which implements a measurement algorithm with varying problem sizes
Fig. 4. BenchIT components: data generation (BenchIT main kernel driver, result file) and data presentation (BenchIT web server)
Fig. 5. A single BenchIT measurement run: initialization of program & kernel, measurement with a fixed problem size, variation of the problem size until the time limit is reached, data analysis, write result file
(e. g. vector sizes or matrix dimensions). When the processing of the kernel is done with all problem sizes, the data is analyzed, outliers are corrected, and all information is written into the result file. There is a special feature which allows limiting the total amount of time a measurement run can take. After reaching this limit the measurement is cut off for the current kernel. The results of a BenchIT measurement run are written into a plain ASCII file. It is clear that, since only the result file is uploaded to the BenchIT web server, the result file has to contain information about the measurement environment as well as the system architecture. Only with this additional information do the measurement runs become comparable. The BenchIT web server [3] is the key element in the data analysis process. Once the results are uploaded it enables the user to compare them with his
other compatible measurements¹. Even more important is the feature of sharing files with selected user groups, which allows the user to compare his results with those of colleagues or any other BenchIT user. The assembly of plots happens in steps where all available data is filtered to contain the results the user wishes to see. The selected data is presented using gnuplot – parts of the website are therefore a mere front end to make all gnuplot options available. Plots are given as PNG- and EPS-files to include them in presentations as well as in articles. Furthermore, plots can be stored, easily accessed, and post-processed. 3.2 Portability One of the main design goals in the development of BenchIT is the greatest possible portability across different platforms. Therefore, the web server only uses basic HTML and CSS features to ensure that most web browsers can display the BenchIT web site http://www.benchit.org. Furthermore, no JavaScript is used since it is sometimes switched off for security reasons. Real portability problems can arise regarding the main kernel driver since the measurements are to run on a large variety of platforms and operating systems. The greatest common denominator among all those systems seems to be a shell, a compiler, and some degree of POSIX-compatibility – especially if you consider brand new products with barely any software support at all. Therefore, the whole main kernel driver is steered by a set of shell scripts invoking the system compiler(s) to generate a binary for each measurement run and kernel. This also makes it easy to test different compiler options. 3.3 Flexibility BenchIT was designed to allow any metrics to be measured and displayed. Therefore, only the kernel knows what is measured exactly and communicates that to the main kernel driver. Furthermore, the measurements (e. g. time or performance counter events) are also done by the kernel and transferred to the main kernel driver for further analysis and placement in the result file. Nevertheless, the main kernel driver offers service routines for the kernel to easily use the system timer or a performance counter library. Additionally, a measurement kernel can measure functions with more than one unit per measurement run (e. g. MFLOPS, number of events, and transfer rates). This for example allows direct studies and comparisons of how cache behavior influences performance.
¹ It is clear that not all measurements can be compared directly since the x-axis values of the plot should have the same unit.
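To make the driver/kernel interplay of Sections 3.1 and 3.3 concrete, the following is a minimal, illustrative sketch of a measurement run in Python: it times one kernel (a vector add) over increasing problem sizes, respects a total time limit, and writes a plain ASCII result file. The function names and the result-file layout are invented for illustration and are not part of BenchIT.

```python
import time

def vector_add_kernel(n):
    """Measurement kernel: time a vector add of length n, return MFLOPS."""
    x1 = [1.0] * n
    x2 = [2.0] * n
    t0 = time.perf_counter()
    y = [a + b for a, b in zip(x1, x2)]   # n floating point additions
    dt = time.perf_counter() - t0
    return n / dt / 1e6 if dt > 0 else float("nan")

def run_measurement(time_limit_s=2.0, result_file="benchit_result.txt"):
    """Driver loop: vary the problem size until the time limit is reached,
    then write problem size / MFLOPS pairs plus a minimal header."""
    results, n, start = [], 1024, time.perf_counter()
    while time.perf_counter() - start < time_limit_s:
        results.append((n, vector_add_kernel(n)))
        n *= 2                              # next problem size
    with open(result_file, "w") as f:
        f.write("# kernel: vector add, unit: MFLOPS\n")
        for size, mflops in results:
            f.write(f"{size} {mflops:.1f}\n")
    return results

if __name__ == "__main__":
    for size, mflops in run_measurement():
        print(size, round(mflops, 1))
```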
3.4 Usability Usability has been a major concern during the implementation of BenchIT. The measurement environment has been extended by shell scripts that try to configure the environment automatically. Accordingly, first measurements can take place without any user interaction. BenchIT has also been trained to cope with a variety of run-time systems. It supports common batch systems and interaction with LAM-MPI and MPICH. For frequent measurements with different settings B-CARE – the BenchIT Compiling and Run Environment – has been developed. It offers a text-based menu structure to easily select and configure the measurement kernels. The main kernel driver also offers a Java-based graphical user interface – BIG – the BenchIT GUI. It allows changing BenchIT settings (e. g. compiler settings, architecture information) in a convenient way. Additionally, the measurement kernels can be configured, compiled, and run from BIG. The locally available results can be plotted within BIG using JFreeChart [4]. Furthermore, remote access to the BenchIT web server has also been included within BIG, allowing an instant comparison of local results with results available on-line. 3.5 Examples We present a few examples generated by the online BenchIT data base available at http://www.benchit.org. Let us assume that we start a new cooperation with a partner which gives us access to additional computing resources on a foreign platform. From earlier experiments we know that our existing CFD code seems to be well optimized regarding its floating point performance (we typically observe 30% of the peak performance). Thus, the first thing we would like to know is the performance of our code on the new platform. Clearly, we would like to avoid extensive porting activities unless they are worth the effort. With a few mouse clicks in BenchIT we are able to query a set of standard floating point benchmarks (matrix multiplication, vector addition, etc.) for the target platform type. Figure 6.2 shows the results of the matrix multiplication benchmark on the new target architecture (SGI Origin 3800, 400 MHz) while Figure 6.1 depicts the respective reference measurement on our local platform (SGI Origin 2000, 195 MHz). Comparing the two graphs reveals that a port to the newer platform could pay off by a factor of approximately 1.4. This is a lot less than the factor of two which we expected due to the system clock increase. Naturally, this is just a very rough estimation which assumes that our code behaves somewhat similarly to the given benchmark. To confirm these very first results, we should compare the results of additional BenchIT floating point benchmarks. As our code is an MPI code, looking at the MPI benchmarks for the two platforms is advisable as well. Due to the limited space we cannot provide all the figures here.
Fig. 6. FLOPS for matrix multiplication versus matrix size for the loop orderings ijk, ikj, jik, jki, kij, kji (6.1 SGI Origin 2000, 6.2 SGI Origin 3800)
Sometimes, performance comparisons between programming languages on the same platform are of interest. One of the best known classics is the comparison of numerical codes in Fortran and C. This time we were interested in the AMD Opteron platform which might become the basis for one of our new clusters. We quickly found results for the matrix multiply benchmark executed on an AMD Opteron 1.4 GHz with gcc and g77 respectively. Figure 7 shows the performance results for the standard matrix multiplication code in C and in Fortran. Here, it is not a real surprise that Fortran is
two times faster than C. Please note that this is just a very simple example of the overall BenchIT functionality, which is to provide a general source of standardized benchmark results going beyond LINPACK [5].
Fig. 7. Matrix multiplication – C vs. Fortran (Opteron 1.4 GHz): Flops versus matrix size for the six loop orderings (7.1 C code compiled with gcc, 7.2 Fortran code compiled with g77)
4 Scalable program analysis with Vampir NG The distributed architecture of the parallel program analysis tool Vampir NG (VNG) [6] outlined in this section has been newly designed based on the experience gained from the development of the program analysis tool Vampir. The new architecture uses a distributed approach consisting of a parallel analysis server, running on a segment of a parallel production environment, and a visualization client running on a potentially remote graphics workstation. Both components interact with each other over the Internet through a socket-based network connection. The major goals of the distributed parallel approach are:
1. Keep event trace data close to the location where they were created.
2. Analyze event data in parallel to achieve increased scalability (# of events ∼ 1,000,000,000 and # of streams (processes) ∼ 10,000).
3. Provide fast and easy-to-use remote performance analysis on end-user platforms.
4.1 Architecture VNG consists of two major components: an analysis server (vngd) and a visualization client (vng). Each is supposed to run on a different machine. Figure 8 shows a high-level view of the VNG architecture.
Fig. 8. Vampir NG architecture overview
Boxes represent modules of the components whereas arrows indicate the interfaces between the different modules. The thickness of the arrows gives a rough measure of the data volume to be transferred over an interface, whereas the length of an arrow represents the expected latency for that particular link. In the top right corner of Figure 8 we can see the analysis server, which runs on a small interactive segment of a parallel machine. The reason for this is two-fold. Firstly, it allows the analysis server to have closer access to the trace data generated by an application being traced. Secondly, it allows the server to execute in parallel. Indeed, the server is a heterogeneous parallel program, implemented using MPI and pthreads, which uses a master/worker approach. The workers are responsible for storage and analysis of trace data. Each of them holds a part of the overall data to be analyzed. The master is responsible for the communication with the remote clients. It decides how to distribute analysis requests among the workers. Once the analysis requests are completed, the master merges the results into a single response package that is subsequently sent to the client. The bottom half of Figure 8 depicts a snapshot of the VNG visualization client which illustrates the timeline of an application run with 768 independent tasks. The idea is that the client is not supposed to do any time-consuming calculations. It is a straightforward sequential GUI implementation with a look-and-feel very similar to performance analysis tools like Jumpshot [7], Vampir [8], Paje [9], etc. For visualization purposes, it communicates with the analysis server according to the user's preferences and inputs. Multiple clients can connect to the analysis server at the same time, allowing simultaneous viewing of trace results. As mentioned above, the shape of the arrows indicates the quality of the communication links with respect to throughput and latency. Knowing this, we can deduce that the client-to-server communication was designed not to require high bandwidths. In addition, the system should operate efficiently with only moderate latencies in both directions. This is basically due to the fact that only control information and condensed analysis results are to be transmitted over this link. Following this approach we comply with the goal of keeping the analysis on a centralized platform and doing the visualization remotely. The big arrows connecting the program traces with the worker processes indicate high bandwidth, which is a major goal in order to get fast access to whatever segment of the trace data the user is interested in. High bandwidth is basically achieved by reading data in parallel by the worker processes. To support multiple client sessions, the server makes use of multi-threading on the master and worker processes. The next section provides detailed information about the analysis server architecture.
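The master/worker request handling described above can be illustrated with a very small MPI sketch. The code below uses Python and mpi4py purely to show the pattern – broadcast a request, analyze local trace chunks, merge the partial results on the master; the request contents and trace data are invented placeholders and do not reflect VNG's actual interfaces or data structures.

```python
# Run with: mpiexec -n 4 python analysis_server_sketch.py
# Rank 0 plays the master, the remaining ranks play workers that each hold
# one part of the trace data (placeholder values, not VNG structures).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Master: broadcast an analysis request, then merge the partial results.
    request = {"metric": "time_per_function", "interval": (0.0, 60.0)}
    comm.bcast(request, root=0)
    partial = comm.gather(None, root=0)           # workers send their results
    merged = {}
    for part in partial[1:]:                       # skip the master's None
        for name, t in part.items():
            merged[name] = merged.get(name, 0.0) + t
    print("condensed response for the client:", merged)
else:
    request = comm.bcast(None, root=0)
    # Worker: analyze only the locally held part of the event streams,
    # restricted to the interval named in the request (omitted here).
    local_trace = {"MPI": 1.0 * rank, "SolveLES": 2.5 * rank}   # placeholder
    comm.gather(local_trace, root=0)
```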
5 Studying the application We have analyzed the behavior of the Helmholtz solver in a parallel environment with Vampir NG. Figure 9 shows the overall structure of a run on 32 processors by displaying a global timeline.
Fig. 9. Vampir NG analysis: global timeline (CPUs 1–32; activities MPI, SolveLES, Adaptation, Other)
The main activities of this display are MPI and SolveLES, which symbolize the parts of the code which contribute to communication and to the solution of the linear system, respectively. Four stages of the run can be identified easily – corresponding to four solution-adaptation cycles. The first three cycles are used to distribute the grid levels among all 32 processors. It is necessary to conduct the distribution in steps rather than at once, to avoid imbalances. However, the distribution phase is less pronounced for larger (more realistic) simulations. The adaptation overhead can be analyzed more closely by summary charts. Figure 10.1 shows the relative time consumption for the overall run. The fraction of the run-time which can be attributed to adaptation is less than one percent. Based on this measurement, we conclude that adaptation overhead does not degrade performance severely. Figure 10.1 suggests that a rather large portion of the run-time contributes to inter-process communication (activity: MPI). This can be attributed to the initial grid distribution. For the relatively short test run shown, this affects the overall figure significantly. If we look at a later stage of the program to create a more realistic expectation of a long-running simulation, we get the following picture. By selecting the CG iteration part in cycle four, we get the run-time portions seen in Figure 10.2.
Fig. 10. Vampir NG analysis: summary charts (10.1 complete run: SolveLES 71.591%, MPI 27.229%, Other 0.863%, Adaptation 0.316%; 10.2 cycle four: SolveLES 95.365%, MPI 4.418%, Other 0.217%)
Iteration (activity: SolveLES) dominates the computational work while communication and other activities amount to less than five percent. This result promises scalability of the overall application, provided that the iterative solution of large linear systems is the most time-consuming part of the solver, as we assumed by choosing this kind of computational kernel.
6 Conclusion Developing portable and scalable CFD applications is not a new problem. Yet, two relevant factors in the development process are about to change: 1) parallelism is becoming more and more relevant in the future generation of single-chip processor solutions and thus also in systems that build on top of this technology; 2) due to the shorter lifetime of commodity technology, porting activities will become more frequent compared to proprietary systems, which were typically in use for several years. As a result, the time spent on software maintenance must be reduced in order to maintain productivity. We have described two independent tool architectures which can assist software experts before and after a change in their computing facilities. The open database approach of BenchIT allows users to benefit from the experiments made by an entire community of software developers. Vampir NG addresses the problem of the increasing complexity of future computing systems while maintaining the usability of traditional program analysis tools. From our perspective these are two very important building blocks in the software creation and maintenance process.
References 1. Stiller J, Nagel WE (2000) MG – A toolbox for parallel grid adaption and implementing unstructured multigrid solvers. In: D’Hollander EH, Joubert GR, Peters FJ, Sips H (eds) Parallel computing: fundamentals and applications. Imperial College Press, London 2. Karniadakis GE, Sherwin SJ (1999) Spectral/hp element methods for CFD. Oxford University Press, New York, Oxford 3. Juckeland G, Börner S, Kluge M, Kölling S, Nagel WE, Pflüger S, Röding H, Seidl S, William T, Wloch R (2004) BenchIT – performance measurement and comparison for scientific applications. In: Joubert GR, Nagel WE, Peters FJ, Walter WV (eds) Parallel computing: software technology, algorithms, architectures, and applications. Proceedings of the 10th ParCo Conf., Elsevier 4. The JFreeChart web server. http://www.jfree.org/jfreechart/ 5. Dongarra J, Bunch J, Moler C, Stewart GW (1979) LINPACK user’s guide, Philadelphia, PA, USA http://www.netlib.org/benchmark/ 6. Brunst H, Nagel WE, Malony AD (2003) A distributed performance analysis architecture for clusters. In: IEEE Int. Conf. on Cluster Computing "Cluster 2003". IEEE Computer Society, Hong Kong, China 7. Zaki O, Lusk E, Gropp W, Swider D (1999) High Perf Comp Appl 13:277–288 8. Brunst H, Winkler M, Nagel WE, Hoppe HC (2001) Performance optimization for large scale computing: the scalable VAMPIR approach. In: Alexandrov VN, Dongarra JJ, Juliano BA, Renner RS, Tan CK (eds) Computational Science – ICCS 2001, Part II. Lecture Notes in Computer Science 2074. Springer, San Francisco, CA, USA, 9. Chassin de Kergommeaux J, de Oliveira Stein B, Bernard PE (2000) Parallel Comput 26(10):1253–1274
Convergence of the method of integral equations for quasi three-dimensional problem of electrical sounding M. Orunkhanov, B. Mukanova, and B. Sarbassova Institute of Mathematics and Mechanics, al-Farabi Kazakh National University, Masanchi str. 39/47, 480100 Almaty, Kazakhstan [email protected] Summary. The problem of vertical electrical sounding above an embedding with two-dimensional geometry of the heterogeneity is considered. Conditions for the convergence of the iterative method for solving the charge density equation are obtained.
1 Introduction Nowadays the conventional methods of electrical sounding rely more and more on computation. As a result, effective numerical algorithms which allow substantial computations for the interpretation of electrical sounding data are often used in practice. In order to solve practical problems of vertical electric sounding it is important to estimate the influence of deviations of the geoelectrical boundaries from a horizontally layered structure on the sounding results. The general solution for a two-layered model with an arbitrary angle of inclination has been obtained in [1], [2]. The potential distribution above an electrically inhomogeneous medium with a local embedding of various configurations is of specific interest. The results of solving similar problems for elementary configurations of an embedding are well known and given in [3]. Here a numerical method for the electrical field calculation for a more general case of the attitude of the bed is considered.
2 Mathematical model Let us assume that the medium is electrically inhomogeneous and has a two-dimensional piecewise constant distribution of conductivity. Though the extent of the embedding is two-dimensional, the field parameters depend on three space coordinates because the electrical field in the medium is excited by a point source.
Further considerations are based on the generally accepted mathematical model for vertical sounding problems, i.e. the equations of a stationary electrostatic field with a piecewise constant distribution of conductivity. Let the source probe, simulated as a point constant-current source, be situated at a point A on the flat surface coincident with the plane y = 0 of the Cartesian coordinates. Let the medium be composed of beds with constant conductivities σ1 and σ2 (Fig. 1). Concerning the geometry of the attitude of the beds we assume the following: 1. The plane (x, z) of the Cartesian coordinates coincides with the Earth's surface, and the conductivity distribution does not depend on the coordinate z.
Fig. 1. The scheme of vertical electrical sounding and the geometry of the attitude of the bed ($PM_1 = \rho_1 = r(\theta_1)$, $PM = \rho = r(\theta)$, $r_{MM_1} = MM_1$, $\theta = \angle MOM_1$)
2. The section of the boundary Γ by the plane z = const is a closed curve (Fig. 1). There is a parametrization of this section in polar coordinates, r = r(θ), with the center at the point P(x_P, y_P), and
$$r(\theta) \in C^2([0, 2\pi]), \quad 0 < R_1 \le r(\theta) \le R_2, \quad \max\bigl(|r'(\theta)|, |r''(\theta)|\bigr) \le K, \quad m = R_1 - 2\pi K > 0, \quad \theta \in [0, 2\pi). \qquad (1)$$
The electrostatic potential of a stationary field without volume sources satisfies Laplace's equation
$$\Delta \varphi = 0, \qquad (2)$$
in the medium, excluding the boundary Γ between the domains with different conductivities. The following conditions of continuity of the potential and of the normal component of the current are imposed on the surface Γ:
$$\varphi|_{\Gamma^+} = \varphi|_{\Gamma^-}, \qquad \sigma_1 \left.\frac{\partial \varphi}{\partial n}\right|_{\Gamma^+} = \sigma_2 \left.\frac{\partial \varphi}{\partial n}\right|_{\Gamma^-}. \qquad (3)$$
The derivatives on the different sides of Γ are denoted by “+” and “-”. The conditions of decreasing at infinity ϕ(∞) = 0 and the boundary condition on the Earth surface ! −→ ∂ϕ !! = − I (δ (r − OA) ∂y ! y=0
−→ should be satisfied. OA is the radius-vector of the point constant-current source, I is the current strength of source probe. We represent the solution of this problem at the point M as the sum of potentials of the point source in the homogeneous half-space and an unknown regular component: ϕ = U0 ( M) + u( M) =
I
−−→ + u( M). 2σ1 π | MA|
(4)
The function u(M) also satisfies Laplace's equation everywhere except on the boundary Γ. The boundary conditions for u(M) can be written as follows:
$$\sigma_1 \left.\frac{\partial u}{\partial n}\right|_{\Gamma^+} - \sigma_2 \left.\frac{\partial u}{\partial n}\right|_{\Gamma^-} = -\left( \sigma_1 \left.\frac{\partial U_0}{\partial n}\right|_{\Gamma^+} - \sigma_2 \left.\frac{\partial U_0}{\partial n}\right|_{\Gamma^-} \right), \qquad (5)$$
$$\left.\frac{\partial u}{\partial y}\right|_{y=0} = 0. \qquad (6)$$
3 Method of solution The integral equation for solving the electrical sounding problem above an inclined plane was first obtained in [1]. An iterative method for solving this equation was suggested there; the convergence of the method for different parameters of the layers and nonzero angles of inclination was proved. Following [1], we construct the solution u(M) as a single-layer potential produced by secondary sources which are distributed on the geoelectrical boundary Γ and on its reflection in the upper half-space. The symmetric reflection is used to provide the condition (6) on the Earth's surface. At the first stage we consider the single-layer density ν(M) as the sought function. It satisfies the integral equation obtained from the condition (3) and Green's formula ([1], [4]):
$$\nu(M) = \frac{\lambda}{2\pi} \iint_\Gamma \nu(M_1)\, \frac{\partial}{\partial n} \left( \frac{1}{r_{MM_1}} + \frac{1}{r_{MM_1'}} \right) d\Gamma(M_1) + \lambda F_0(M), \qquad (7)$$
where $F_0(M) = \partial U_0 / \partial n\,(M)$ and $\lambda = (\sigma_2 - \sigma_1)/(\sigma_1 + \sigma_2)$.
$M_1$ belongs to the integration surface Γ, $M_1'$ belongs to its reflection in the upper half-space; $r_{MM_1}$ and $r_{MM_1'}$ are the distances from M to $M_1$ and to $M_1'$, respectively. It is not difficult to verify that the derivative in the direction l is
$$\frac{\partial}{\partial l} \left( \frac{1}{r_{MM_1}} + \frac{1}{r_{MM_1'}} \right) = \frac{\cos\psi}{r_{MM_1}^2} + \frac{\cos\psi'}{r_{MM_1'}^2}, \qquad (8)$$
where the angles ψ and ψ' are formed by the vector l and the directions $MM_1$ and $MM_1'$, respectively. Let us consider the iteration scheme for solving the equation (7). We set some initial approximation $\nu_0(M) \in C(\Gamma)$. We calculate every next approximation $\nu_{m+1}(M)$ from the equation (7) by substituting $\nu_m(M_1)$ into the right-hand side instead of $\nu(M_1)$:
$$\nu_{m+1}(M) = \frac{\lambda}{2\pi} \iint_\Gamma \nu_m(M_1)\, \frac{\partial}{\partial n} \left( \frac{1}{r_{MM_1}} + \frac{1}{r_{MM_1'}} \right) d\Gamma(M_1) + \lambda F_0(M), \qquad (9)$$
$m = 0, 1, 2, \ldots$ We estimate the uniform norm of the difference of two successive approximations. Substituting (8) into (9) we obtain
$$|\nu_{m+1} - \nu_m| \le |\nu_m - \nu_{m-1}|_C \cdot \frac{\lambda}{2\pi} \iint_\Gamma \left| \frac{\cos\psi}{r_{MM_1}^2} + \frac{\cos\psi'}{r_{MM_1'}^2} \right| d\Gamma \equiv |\nu_m - \nu_{m-1}|_C \cdot \frac{\lambda}{2\pi} \bigl| I_1(M) + I_2(M) \bigr|. \qquad (10)$$
Let us obtain an estimate, uniform with respect to M, for the right-hand side of (10). We consider the parametrization of the surface Γ according to condition (1):
$$I_1(M) = \iint_\Gamma \frac{\cos\psi}{r_{MM_1}^2}\, d\Gamma = \int_0^{2\pi}\!\!\int_{-\infty}^{\infty} \frac{(\overrightarrow{MM_1}, \vec{n}\,)}{r_{MM_1}^3} \sqrt{r(\theta_1)^2 + r'^2(\theta_1)}\; dz_1\, d\theta_1, \qquad (11)$$
where $r'(\theta) = dr/d\theta$. Due to the parametrization of Γ:
$$r_{MM_1}^2 = \rho^2 - 2\rho_1 \rho \cos(\theta - \theta_1) + \rho_1^2,$$
$$\overrightarrow{MM_1} = \bigl( \rho_1 \cos\theta_1 - \rho \cos\theta,\; \rho_1 \sin\theta_1 - \rho \sin\theta,\; z - z_1 \bigr),$$
$$\vec{n} = \frac{1}{\sqrt{\rho_1^2 + r'^2(\theta_1)}} \bigl( -(r'(\theta_1)\sin\theta_1 + \rho_1 \cos\theta_1),\; r'(\theta_1)\cos\theta_1 - \rho_1 \sin\theta_1,\; 0 \bigr). \qquad (12)$$
In order to calculate the integral I1 we make a transformation to the cylindrical coordinates with the z-axis, which passes through the center of polar system of coordinates P, and rotate the polar axis in the direction of θ. Then
the coordinates of the point M will be (ρ = r(0), 0, z) and those of the point $M_1$: ($\rho_1 = r(\theta_1)$, $\theta_1 - \theta$, $z_1$). By substituting (12) into (11) and by the change of variables $\theta' = \theta - \theta_1$, $z' = z_1 - z$, we obtain:
$$I_1 = \int_0^{2\pi}\!\!\int_{-\infty}^{\infty} \frac{\rho^2 - \rho\rho_1 \cos\theta' - r'(0)\,\rho_1 \sin\theta'}{\bigl( \rho^2 + \rho_1^2 - 2\rho\rho_1 \cos\theta' + z'^2 \bigr)^{3/2}}\; dz'\, d\theta'.$$
By integrating over $z'$ we obtain:
$$I_1 = 2 \int_0^{2\pi} \frac{\rho^2 - \rho\rho_1 \cos\theta' - r'(0)\,\rho_1 \sin\theta'}{\rho^2 + \rho_1^2 - 2\rho\rho_1 \cos\theta'}\; d\theta'. \qquad (13)$$
Let us represent the value of the function ρ = r(θ) in the following form, using $\rho_1 = r(\theta_1)$:
$$\rho = r(\theta_1) + (\theta - \theta_1) \int_0^1 \frac{dr}{d\theta}\bigl(\theta_1 + t(\theta - \theta_1)\bigr)\, dt \equiv \rho_1 + L(\theta')\,\theta'. \qquad (14)$$
In a similar manner, taking into account the direction of the polar axis and the definition of $\theta'$, we have:
$$\rho_1 = r(\theta) - \theta' \frac{dr}{d\theta}(0) + \frac{\theta'^2}{2} \int_0^1\!\!\int_0^1 \frac{d^2 r}{d\theta^2}(t s \theta')\, dt\, ds \equiv \rho - r'(0)\,\theta' + L_1(\theta')\, \frac{\theta'^2}{2}, \qquad (15)$$
where $|L(\theta')| \le K$ and $|L_1(\theta')| \le K$ are bounded continuous functions due to (1). Let us make an upper estimate of the integrand in (13). Expressing ρ from (14) and substituting it into the numerator we obtain:
$$\begin{aligned} \rho^2 - \rho\rho_1 \cos\theta' - r'(0)\,\rho_1 \sin\theta' &= 2\rho^2 \sin^2(\theta'/2) + r'(0)\,\rho\,(\theta'\cos\theta' - \sin\theta') \\ &\quad + r'^2(0)\,\theta'\sin\theta' - L_1 \theta'^2 \bigl(\rho\cos\theta' + r'(0)\sin\theta'\bigr)/2 \\ &\le 2 R_2^2 \sin^2(\theta'/2) + 0.5\, K R_2\, \theta'^2 + K^2 \theta'^2 + 0.5\, K (R_2 + K)\, \theta'^2 \\ &\le (0.5\, R_2^2 + K R_2 + 1.5\, K^2)\, \theta'^2. \end{aligned} \qquad (16)$$
Here we used the estimate $|\theta'\cos\theta' - \sin\theta'| \le 0.5\, \theta'^2$, which is obtained by the Taylor expansion and by estimating the remainder term on $[0, 2\pi]$. By substituting (15) into the denominator of the integrand of (13), we obtain:
$$\rho^2 + \rho_1^2 - 2\rho\rho_1\cos\theta' = 4\rho_1^2 \sin^2(\theta'/2) + 4\rho_1 L(\theta')\,\theta' \sin^2(\theta'/2) + L^2\theta'^2 \ge 4 R_1 m \sin^2(\theta'/2) + K^2\theta'^2. \qquad (17)$$
The substitution of the estimates (16)–(17) into (13) gives:
$$I_1 \le (0.5\, R_2^2 + K R_2 + 1.5\, K^2) \int_0^{2\pi} \frac{\theta'^2\, d\theta'}{K^2\theta'^2 + 4 R_1 m \sin^2(\theta'/2)} \le C_1 = \mathrm{const}. \qquad (18)$$
The estimate for the integral $I_2$ is obtained more easily. If d is the depth of occurrence of the embedding, then $r_{MM_1'} \ge 2d$ and
$$I_2 = \int_0^{2\pi}\!\!\int_{-\infty}^{\infty} \frac{\cos\psi'}{r_{MM_1'}^2} \sqrt{r^2(\theta) + r'^2(\theta)}\; dz\, d\theta \le \int_0^{2\pi} \sqrt{r^2(\theta) + r'^2(\theta)}\; d\theta \cdot \int_{-\infty}^{\infty} \frac{dz}{4d^2 + z^2} = C\pi,$$
where C is the length of the contour of the section Γ. Then from (10) we obtain
$$|\nu_{n+1} - \nu_n|_C \le |\nu_n - \nu_{n-1}|_C \cdot \frac{\lambda}{2\pi}\, (C_1 + C\pi). \qquad (19)$$
It follows that for a sufficiently small λ, which corresponds to a good contrast between the layers, the iteration process converges as a geometric progression with the common ratio $\lambda (C_1 + C\pi)/2\pi$. The sought field potential u(M) can be recovered by integrating over the surface of the media contact and its reflection in the upper half-space, using Green's formula and the single-layer density ν(M):
$$u(P) = \frac{1}{2\pi} \iint_\Gamma \nu(M) \left( \frac{1}{r_{PM}} + \frac{1}{r_{PM'}} \right) d\Gamma(M). \qquad (20)$$
Thus, some special cases of the occurrence of an embedding have been considered and the above algorithm has been implemented for them.
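The convergence mechanism behind (19) can be illustrated with a toy version of the iteration (9): successive approximations for a discretized Fredholm integral equation of the second kind, ν = λKν + λf. The kernel, contour discretization and parameter values below are invented solely for illustration and are not the geometry treated above; only the contraction behaviour for small λ is the point.

```python
import numpy as np

# Toy discretization: N points on a closed contour; K[i, j] is a made-up
# smooth kernel times the quadrature weight so the example is self-contained.
N = 200
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
w = 2.0 * np.pi / N                                    # quadrature weight
K = w * np.exp(-2.0 * (1.0 - np.cos(t[:, None] - t[None, :])))
f = np.cos(t)                                          # stands in for F0(M)

sigma1, sigma2 = 0.01, 0.02                            # layer conductivities
lam = (sigma2 - sigma1) / (sigma1 + sigma2)            # contrast parameter λ

nu = np.zeros(N)                                       # initial approximation ν0
for m in range(200):
    nu_next = lam * (K @ nu) + lam * f                 # discretized iteration (9)
    diff = np.max(np.abs(nu_next - nu))                # uniform norm of the update
    nu = nu_next
    if diff < 1e-12:
        print(f"converged after {m + 1} iterations")
        break

# Cross-check against the direct solution of (I - λK) ν = λ f
nu_direct = np.linalg.solve(np.eye(N) - lam * K, lam * f)
print("max deviation from direct solve:", np.max(np.abs(nu - nu_direct)))
```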
References 1. Tikhonov AN (1946) On electric sounding above inclined bed. Proc. of Institute of Theor. Geophys. Publ. House of AS USSR, Moscow – Leningrad, 1:116–136 (in Russian) 2. Skalskaya IP (1948) J Tech Phys USSR 18:1242-1254 (in Russian) 3. (1989) Electrical sounding: geophysicist’s handbook. Nedra, Moscow (in Russian) 4. Orunkhanov MK, Mukanova B, Sarbassova B (2004) Numerical simulation of electric sounding problems. Joint issue of Comp Techn 9 and Bulletin of KazNU 3/42: Proc. of the Int. Conf. "Computational and informational technologies for science, engineering and education", Almaty, Kazakhstan, part 3 (in Russian)
Sustaining performance in future vector processors U. Küster¹, W. Bez², and S. Haberhauer²
¹ High Performance Computing Center Stuttgart (HLRS), University of Stuttgart, Nobelstraße 19, 70569 Stuttgart, Germany [email protected]
² NEC High Performance Computing Europe GmbH, Heßbrühlstr. 21b, 70565 Stuttgart, Germany [email protected], [email protected]
Summary. Vector processors are invaluable tools for high performance numerical simulations due to their high sustained performance. The high efficiency can be attributed to a superior balance between peak performance of the arithmetic pipelines and memory bandwidth. In the Teraflop-Workbench project HLRS and NEC investigate how to sustain this performance for HLRS applications. We report first results of this cooperation.
1 Introduction Vector architectures offer the highest sustained floating point performance of all computer architectures. One reason is that in the architecture the high peak performance of multiple arithmetic pipelines is balanced with a high memory bandwidth. This balance is often seen as the major advantage of vector processing. But there are other advantages at least as important. Vector instructions have a very efficient way to hide memory latency. This mechanism comes as an integral part of the basic concept of vector processing. Hiding memory latency is as important for sustained performance as memory bandwidth or even more important. Finally, due to the higher processing speed of the single processor, fewer processors are required to sustain a targeted performance. This is important because there is a serial part and there is time required for communication in a parallel program. In applications with complicated structures, which are common e.g. in engineering, these non-scaling parts of the program limit parallel speed-up severely. HLRS and NEC Corporation have set up the Teraflop-Workbench project in order to facilitate and study applications with more than 1 Tflop/s performance on the new HLRS SX-8 system. The HLRS applications are highly vectorizable and a performance of up to 70% of the available vector peak performance can be sustained. Part of the project is also to investigate how the vector architecture can be complemented by other architectures, e.g. scalar clusters, to enhance workflow and performance. Finally, both HLRS and NEC want to
gain insight, from an applications point of view, into how the vector architecture could evolve to ensure sustained performance in the future. In this paper we address these questions in a general way.
2 Applications with Teraflop/s performance The Teraflop-Workbench project currently covers about twenty applications from quite diverse fields. For about ten of these applications it is clear that they will reach more than 1 Tflop/s performance with straightforward extension of their current performance on the available SX-6 interim configuration to the final SX-8 configuration. Another ten need serious research work on algorithms, vectorization and parallelization. Some Teraflop-Workbench applications are: N3D for calculation of laminar turbulent transition, Aiolos for the simulation of industrial furnaces and boilers, CCarat, a Finite Element program for fluid structure interaction, Fenfloss as finite element flow solver and other codes for particle interaction, or utilizing Lattice-Boltzmann techniques. Descriptions for all Teraflop-Workbench applications can be found on the Teraflop-Workbench project website [1]. The general strategy to reach Tflop/s applications performance in this project is to combine high vector performance with good scaling. We find that all our applications are well vectorizable, typically less than 1% non-vector operations, and also show satisfactory parallel speedup, typically less than 1% serial and communications time. However, we have not seen applications yet that are good candidates for parallel speed-up of more than one thousand which would be required if we want to achieve 1 Tflop/s performance with a single processor sustained performance of 1 Gflop/s or less. Such very high scaling would require a serial part well under 0.1%. With the final hardware configuration for this project we will be able to investigate this question further.
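The scaling requirement quoted above – a serial fraction well under 0.1% for a parallel speed-up of more than one thousand – follows directly from Amdahl's law. The small sketch below simply makes the numbers explicit; the serial fractions are example values, not measurements from the Teraflop-Workbench codes.

```python
# Amdahl's law: speed-up on p processors with serial fraction s is
#   S(p) = 1 / (s + (1 - s) / p), with the asymptotic limit 1/s.
def amdahl_speedup(s, p):
    return 1.0 / (s + (1.0 - s) / p)

# Reaching 1 Tflop/s with roughly 1 Gflop/s per processor needs S(p) ~ 1000.
for s in (0.01, 0.001, 0.0005, 0.0001):
    print(f"serial fraction {s:.2%}: "
          f"S(2000) = {amdahl_speedup(s, 2000):7.1f}, "
          f"limit 1/s = {1.0 / s:7.1f}")
```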
3 Vectorization in Teraflop/s applications The high vectorization can be understood by looking at the algorithms used. Typical CFD applications are working on large sets of grid points. The data may be defined by large 3D arrays or better, as linearized 1D arrays. Even unstructured grids may be expressed in this way. For structured grids the neighborhood points are addressed by fixed offsets in the 1D arrays. Using unstructured grids the neighborhoods have to be formulated explicitly. This is done as in the case of sparse matrix storage techniques in a column oriented way. Restriction and prolongation operators in multigrids are formulated in the same way as sparse matrices. Pointers are not used. Dense matrix techniques are well vectorizing as long as the matrices are not too small. Krylov space solvers are well vectorizable. The fine granular recursive
ILU as a preconditioner has to be replaced by other techniques, e.g. balancing, point Jacobi, or colored or hyperplane-ordered Gauss-Seidel preconditioning. This does not mean that all algorithms are efficiently vectorizable. For example, for the solution of tridiagonal systems only vectorizable algorithms with a higher operation count exist. The key to vectorization here is to solve many of these systems in parallel. This can be vectorized efficiently by using additional inner loops over the different systems. Generally speaking, small objects destroy performance. Suitable objects are large arrays and dense and sparse matrices with predictable memory access. There is no difference in principle between vector architectures and other architectures like x86, RISC or EPIC processors, but the performance degradation is worse for vector processors. The access to irregular data structures with no foreseeable successors, like linked lists and trees, is not vectorizable. But accessing data in these important and useful ways is also slow on all other architectures, mainly due to memory latencies. Vectorization as a parallel paradigm allows for exact and fast interaction of parallel instruction streams with balanced support of memory operations. The overhead of this kind of parallelization is smaller. In this sense the parallelization is more efficient. Instead of 2000 processors with a sustained performance of 0.5 Gflop/s the user has to handle only 200 processors, each with 4 arithmetic and load/store pipes giving a sustained performance of 5 Gflop/s. This reduces the implications of Amdahl's law on the performance. In addition to vectorization, the other parallelization techniques also have to be applied: OpenMP within the shared-memory computing nodes and MPI across the nodes. Because of the high sustained bandwidth the nodes deliver high efficiency for the OpenMP parallelization. The high bandwidth between the nodes allows for a high total efficiency.
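The "many systems at once" strategy for tridiagonal solves mentioned above can be sketched in NumPy: the standard Thomas algorithm is applied simultaneously to a whole batch of independent systems, so the recurrence along the system dimension stays serial while every elementary operation works on a long vector of systems. This is a generic illustration, not code from any of the Teraflop-Workbench applications.

```python
import numpy as np

def thomas_batched(a, b, c, d):
    """Solve M independent tridiagonal systems of size N at once.

    a, b, c are the sub-, main- and super-diagonals, d the right-hand sides,
    all with shape (N, M). The loops run over N (serial recurrence); each
    statement operates on vectors of length M, which is what a vector unit
    (or SIMD hardware) can exploit.
    """
    N, M = b.shape
    cp = np.empty((N, M))
    dp = np.empty((N, M))
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, N):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty((N, M))
    x[N - 1] = dp[N - 1]
    for i in range(N - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: 10_000 systems of size 64, diagonally dominant by construction
N, M = 64, 10_000
rng = np.random.default_rng(0)
a = rng.random((N, M))
c = rng.random((N, M))
b = 2.0 + a + c
d = rng.random((N, M))
x = thomas_batched(a, b, c, d)
# residual check for the first system
A = np.diag(b[:, 0]) + np.diag(a[1:, 0], -1) + np.diag(c[:-1, 0], 1)
print(np.max(np.abs(A @ x[:, 0] - d[:, 0])))
```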
4 Latency reduction in vector and cache architectures The gap between processing speed and memory access times is constantly widening. While processing speed has been increasing at a rate close to Moore's law, roughly doubling every two years, memory access times are decreasing much more slowly. Within the last 10 years DRAM access speed has only doubled, both in terms of bandwidth and latency. In relation to processing speed, main memory is about 20 times slower than it was 10 years ago. For example, the Cray Y-MP had a memory latency of 17 clock periods, 3 cycles more than the load latency of the Itanium L3 cache. To overcome this gap caches were introduced in most architectures. Cache memory does not solve the problem of slow memory access in a conceptual way but it reduces latency by providing a small memory area with faster memory access and an automatic mapping into all of memory. It is left to the programmer to make use of this faster memory access by designing cacheable algorithms. Much research and programming has gone into the development of cache-friendly
algorithms, and many programs are available now that can make effective use of caches. Vector instructions, on the other hand, offer a conceptual way to hide memory latency very efficiently. In fact, latency hiding is built into the concept of vector processing. Because the memory accesses – for unit stride, non-unit stride and indirect addressing – are sorted out before the instruction starts, memory latency is eliminated once the vector instruction has started, with the exception of bank conflicts. Latency times only occur once for any vector. The mechanism of overlapping the next vector startup with the currently executing vector instruction (sometimes called chaining) reduces memory latency further for long vectors that require more than one vector instruction. For reasonably long vectors a memory latency of well under one cycle per memory access is easily achievable in the vector architecture, while in cache-based architectures this is never possible for out-of-cache access and only possible for cacheable access in the best of cases, when there are almost no cache misses. Similarly, prefetching has been introduced in many cache architectures to reduce memory latency, a mechanism comparable to a vector load.
5 A map of sustained performance First, we would like to investigate and illustrate where vector architectures and scalar architectures show strong performance. For that purpose we would like to draw a graph of the different performance characteristics of the two architectures. This may seem difficult, because these characteristics differ from application to application. However, some general statements about performance can be made irrespective of a specific application. For an SX-8 processor, or any vector processor, the performance of a single instruction, a sequence of instructions, or even a whole application can be characterized well by the maximum sustainable performance of the instruction sequence and the parameter n_{1/2}, the problem size or vector length for which half of this performance is achieved. For the SX-8 a typical sustained application performance is in the range of 30% to 70% of peak vector performance, or between 5 and 10 Gflop/s for a single CPU. The parameter n_{1/2} is typically about 100. For a typical scalar processor, e.g. the Intel Xeon, the performance characteristics are very different. Performance is very much determined by cache locality of the data. If the application is designed so that the L2 cache can be used very effectively and data can be re-used often without loading from memory, then a performance of 500 Mflop/s for each processor on the fastest available dual-CPU system can be achieved. For the case that L1 caching has a very significant influence, performance can go even higher. If caching is not efficient then performance is less than 100 Mflop/s. We can illustrate and support these statements with a surprisingly simple and general graph. We
plot the performance of a certain instruction sequence, e.g. the floating-point add operation, as a function of the number of operations or loop length for a single SX-8 processor and the Intel Xeon. On the x-axis a logarithmic scale has been chosen so that loop lengths over several orders of magnitude can be shown. The details of the plot do not matter for this discussion and they depend on the actual instruction sequence chosen; that is why we have left the labels off both axes. The overall shape of the graph is independent of these details, and it is this shape that matters. For a more detailed discussion see [2] in this volume. For the SX-8 we plot the performance of the vector unit only. The scalar unit is not considered. Vector performance increases sharply and reaches its peak at loop length 256, which is the vector register length of the SX-8. The increase follows essentially a hyperbola which is warped by the logarithmic x-axis. For larger loop sizes the performance is approximately constant. The Xeon processor shows a very different behavior (see Figure 1). Performance grows rapidly to about 500 Mflop/s and then slowly to 1 Gflop/s. All data are always in L1 cache for these loop lengths. When the loop length becomes too large to fit all data in the L1 cache, the performance drops back to 500 Mflop/s. For even larger loop lengths, when data spill out of the L2 cache, performance drops well below 100 Mflop/s. The soft decrease from L2 cache behavior to memory access behavior is due to the hardware prefetching capabilities ensuring an early transfer of data not exceeding page boundaries. In the usual dual-CPU server setup, where two Xeon CPUs share the same memory bus, the performance drop is more severe.
[Figure 1 plots the performance of the loop y(i) = x1(i) + x2(i) against the logarithm of the loop length for the Intel Xeon 3.2 GHz (Nocona) and the NEC SX-8.]
Fig. 1. Add for different architectures
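A measurement of this kind can be reproduced with a simple timing loop. The following sketch is our own illustration, not part of the original study: it times the add loop y(i) = x1(i) + x2(i) named in the figure for loop lengths spanning several orders of magnitude; the repetition counts, the timer and the compiler settings are assumptions of ours.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Minimal sketch: measure the Mflop/s of the loop y(i) = x1(i) + x2(i)
   for loop lengths spanning several orders of magnitude. */
int main(void) {
    enum { NMAX = 1 << 22 };                  /* largest loop length */
    double *x1 = malloc(NMAX * sizeof *x1);
    double *x2 = malloc(NMAX * sizeof *x2);
    double *y  = malloc(NMAX * sizeof *y);
    double sink = 0.0;        /* used so the compiler keeps the stores */
    if (!x1 || !x2 || !y) return 1;
    for (long i = 0; i < NMAX; i++) { x1[i] = 1.0; x2[i] = 2.0; }

    for (long n = 16; n <= NMAX; n *= 4) {
        long reps = 16L * NMAX / n;           /* keep total work comparable */
        clock_t t0 = clock();
        for (long r = 0; r < reps; r++)
            for (long i = 0; i < n; i++)
                y[i] = x1[i] + x2[i];
        double sec = (double)(clock() - t0) / CLOCKS_PER_SEC;
        sink += y[n - 1];
        printf("n = %8ld  %10.1f Mflop/s\n", n, (double)n * reps / sec / 1e6);
    }
    printf("checksum %g\n", sink);
    free(x1); free(x2); free(y);
    return 0;
}

On a cache-based processor the printed rates trace exactly the kind of curve sketched in figure 1.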
From this discussion we conclude that the SX-8 offers a clear performance advantage, more than a factor of 20 for long vectors, when caching is not effective. This advantage does not depend very strongly on vector length, although we have not yet investigated the detailed behavior. When caching is effective the
comparison becomes more subtle. In this case there is still a considerable performance benefit, between a factor of 5 and 10 depending on how efficient caching is, but only for vector lengths above 100. For vector lengths below 100 the SX-8 has a less clear performance advantage when the data can be kept mostly in the L2 cache of a scalar processor. For very short loops the Nocona shows clear advantages due to its short cycle time and the proximity of the L1 cache.
6 Ways to sustain vector performance in the future

From the above discussion there are several ways to sustain vector performance in the future. The first one is not to change the micro-architecture of the vector processor at all but to integrate the vector system into a hybrid architecture of vector and scalar processors. A vector system like the SX series is of course already a system that consists of a separate scalar processor and a vector processor. However, it would not make much sense to run a purely scalar load on the SX scalar processor, because the vector processor, including the very advanced memory subsystem, would idle. The hybrid architecture to be discussed consists of a vector system, which includes vector and scalar processors, combined with a commodity scalar system to improve throughput and workflow. This is the concept that HLRS and NEC are pursuing in the Teraflop-Workbench project over the next three years. We have observed above that vector processors and cache-based scalar processors complement each other very well over the whole range of application requirements. The cache-based systems do very well at short vector lengths and for applications which can make use of caches effectively, while the vector processor is strongest at long vectors and where caches are not effective. Since most applications are written either one way or the other, there is indeed a large benefit in splitting a data center workload between commodity scalar processors and vector processors. This can be done at the system level with seamless user access and a global file system shared by all processors. HLRS is pursuing such a concept with SX-8 vector processors, x86 clusters and Itanium SMP nodes all sharing the same global file space. The next step of a hybrid architecture would be an MPI connection over a fast interconnect like InfiniBand or the NEC IXS switch. While this is an attractive overall concept, there is one serious disadvantage: very few applications running on the commodity processors will reach Tflop/s performance, and this performance range would be more or less restricted to applications using long vectors. With a performance limitation of about 0.5 to 1 Gflop/s for the x86 processor one would need a parallel speed-up of 1000 to 2000 to reach one Tflop/s. This means that serial computations and communication overhead must be well below 0.1%, which is a requirement that only few real-world applications will be able to meet in the short term.
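The speed-up requirement quoted above can be checked with Amdahl's law. The short calculation below is our own illustration, assuming Amdahl's model with a serial fraction s that also covers communication overhead; the numbers are the ones used in the text.

#include <stdio.h>

/* Illustration of the speed-up argument above, assuming Amdahl's law:
   speedup(p) = 1 / (s + (1 - s)/p), with s the serial fraction
   (including communication overhead) and p the number of processors. */
int main(void) {
    double per_cpu_gflops = 0.5;                 /* assumed x86 sustained rate */
    double target_gflops  = 1000.0;              /* 1 Tflop/s                  */
    double p = target_gflops / per_cpu_gflops;   /* ideal processor count      */
    double serial[] = { 0.01, 0.001, 0.0005, 0.0001 };
    printf("ideal processors needed: %.0f\n", p);
    for (int i = 0; i < 4; i++) {
        double s = serial[i];
        double speedup = 1.0 / (s + (1.0 - s) / p);
        printf("serial fraction %.4f -> speed-up %5.0f, sustained %.2f Tflop/s\n",
               s, speedup, speedup * per_cpu_gflops / 1000.0);
    }
    return 0;
}

With p = 2000, a serial fraction of 0.1% already limits the speed-up to about 670, i.e. roughly a third of a Tflop/s; only fractions well below 0.1% give the required speed-up of 1000 to 2000.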
In order to break the Tflop/s limit for applications with shorter vectors one has to improve memory latency and communication latencies, which depend essentially on memory latencies. This can be done by better latency hiding for short vectors or by introducing on-chip memory in the vector processor. Improved latency hiding for short vectors could be achieved by adding more load/store logic, giving more overlap for short-vector memory access. Since the problem of latency hiding for short vectors is not memory bandwidth but rather the non-hidden vector startup, it would seem feasible to have more short-vector memory access requests in flight in parallel and thus obtain a more overlapped vector startup. This assumes a large optimization context for the compiler or some kind of vector scoreboard, and either one might be difficult to implement. The same effect can be achieved by introducing on-chip memory with a lower memory access time. In this case one would hope that vector startup improves significantly when a vector load/store does not have to wait for far-away off-chip memory but utilizes on-chip memory much closer to the arithmetic units. Basically it would seem feasible to go back to n_{1/2} parameters in the 30 to 50 range, which were quite common in earlier days of vector processing. In fact, with on-chip memory one would expect the ratio of processing speed to memory access speed to go back to the ratio seen with slower processors and off-chip memory in earlier vector processor generations. The use of on-chip memory has another big advantage: one would be able to configure more load/store pipelines from on-chip memory to the arithmetic units. This would allow adding arithmetic pipelines, which does not make sense without increasing memory bandwidth. A desirable performance characteristic is shown in figure 2.
[Figure 2 plots the performance of the loop y(i) = x1(i) + x2(i) against the logarithm of the loop length for the ideal performance characteristic and the NEC SX-8.]
Fig. 2. Ideal and today's performance characteristics
We have assumed a processor with the same frequency and basic architecture as the SX-8 but with a large on-chip memory of more than 1 Mbyte. We have assumed half performance at a vector length of n_{1/2} = 40, which is about half of the SX-8 value. The assumed processor has 4 load pipelines and 2
store pipelines to on-chip memory. At a processor frequency of 2 GHz, as for the SX-8, and four results per pipeline per cycle this would amount to an on-chip memory bandwidth of 384 Gbyte/s, as compared to the 64 Gbyte/s off-chip memory bandwidth of the SX-8. The off-chip memory bandwidth of our ideal processor is assumed to be the same as in the SX-8. The assumed processor has twice the number of add/multiply pipelines of the SX-8 and a peak vector performance of 32 Gflop/s. This desired performance characteristic is, of course, an ideal goal, and it is beyond the scope of this paper and beyond the expertise of the authors to discuss whether such a characteristic could be achieved in a real processor. However, the assumptions made, namely 2 load and 1 store per cycle and 2 add and multiply units, are certainly realistic and achievable in principle. From an applications point of view we can say that a performance characteristic based on a lower vector startup from on-chip memory, more load/store pipelines between on-chip memory and arithmetic units, and more arithmetic units would lead to a significantly higher overall sustained performance despite the unchanged off-chip memory bandwidth.
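The bandwidth figure quoted above follows from a simple product; the snippet below merely restates that arithmetic (the assumption of 8-byte words is ours).

#include <stdio.h>

/* Restating the bandwidth arithmetic of the assumed processor:
   pipelines x results per pipeline per cycle x word size x clock frequency. */
int main(void) {
    double freq_hz  = 2.0e9;     /* clock frequency, as for the SX-8   */
    double word_b   = 8.0;       /* assumed 8-byte (64-bit) words      */
    double words_pc = 4.0;       /* results per pipeline per cycle     */
    int load_pipes = 4, store_pipes = 2;

    double onchip = (load_pipes + store_pipes) * words_pc * word_b * freq_hz;
    printf("on-chip bandwidth: %.0f Gbyte/s\n", onchip / 1e9);   /* 384 */
    printf("off-chip bandwidth (SX-8): 64 Gbyte/s (unchanged)\n");
    return 0;
}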
7 On-chip memory usage in vector architectures

In our presentation of an ideal processor above we have not discussed how the on-chip memory should be used. There are basically three ways: a programmable vector cache, a hardware-controlled vector cache or unified scalar/vector cache, or local memory. The idea of combining the latency hiding of vector instructions with the faster access speed of on-chip memory is not new. A certain kind of programmable vector cache, termed vector data registers, was introduced already in the SX-2 and has been kept unchanged over all SX generations since, including the current SX-8. There are 64 vector data registers in the SX architecture, and they are configured as a programmable temporary vector storage. They are used heavily by the compiler to store temporary results for short-term reuse. The programmer can influence which vectors are kept by means of a vectorization directive. The concept of vector data registers is very successful and contributes significantly to the SX performance by effectively reducing the required memory bandwidth. However, this kind of caching is only efficient for high locality, because of the small number of vectors that can be stored and because the compiler can only assign the temporary storage over small sections of code, typically inner loops. Some other attempts have been made at introducing caches or local memory in vector processors. Early examples are the Convex C1, which used a vector cache with a bypass to memory, and the Cray-2, which had a local memory of 512 Kbyte and a global memory of 2 Gbyte, huge at that time. The local memory was accessible by special instructions and had an access time of 9 cycles. Local memory was not used directly by the compiler except for procedure frames. The reason for using local memory was then the same as today: decreasing latencies and increasing bandwidth.
A big problem with all of these early attempts was the small size of the caches or local memories. In large programs with many procedures, the procedure frames alone could fill up the local memory. Today's larger caches and local memories would allow for more flexible programming. A more recent example is the Cray X1, which uses special fast caches of limited size (2 Mbyte). These caches are not designed for high reuse but to speed up the handling of smaller data sets. A very recent example of employing local memory can be found in the Cell processor, a very modern development for use in video games but also in specialized applications. A small set of instruction-parallel processors is used to deliver high performance for rendering. The bandwidth is high enough to sustain the performance, but only for a limited data size; accessing larger data sets will again produce a bottleneck. Caches have the advantage of being transparent to the application. They are potentially useful for any part of the application and the operating system. But they cannot simply be programmed. Even with LRU cache policies they do not differentiate between important data to be held for longer times and data that are used only once. For vector processing it is important to bypass the cache for this type of data. The programmer and/or compiler must be able to influence the cache replacement strategy for specified data. There are latency penalties for accessing data in cache lines and for the run-time analysis of the data location. In shared memory systems the cache coherence protocol also affects the effective latencies. Local memories are faster in these respects, but they have to be programmed explicitly. The programmer has to copy data into the local memory and to write them back into the global memory. The programming languages have no provisions for declaring nearby data. OpenMP offers the possibility to differentiate local and shared data; this could be generalized to annotations for this kind of memory. To alleviate the disadvantage of the non-automatic data allocation, there should be instructions allowing data to be loaded at their first use, in order to avoid unnecessary use of the bus-memory system. Independent arrays have to be allocated. Loading the local memory from global memory has to overlap with the processor's accesses to the local memory, even for different parts of the same arrays. Loading and storing data must be possible with indirect addressing, for data compression and expansion.
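To make the contrast between transparent caches and explicitly managed local memory concrete, the following sketch shows the kind of blocked copy-in/compute/copy-out code the programmer (or compiler) would have to generate; everything in it, including the block size and the use of static buffers as a stand-in for on-chip storage, is a hypothetical illustration and not an existing API.

#include <stddef.h>

/* Sketch of explicitly managed local memory: a block of the global arrays
   is copied into fast "on-chip" storage, processed there, and written back.
   A real system would map lx/ly to local memory and overlap the copies
   with computation on the previous block (double buffering). */
#define LOCAL_WORDS 4096
static double lx[LOCAL_WORDS], ly[LOCAL_WORDS];   /* stand-in for on-chip memory */

void axpy_blocked(size_t n, double a, const double *x, const double *y, double *z)
{
    for (size_t base = 0; base < n; base += LOCAL_WORDS) {
        size_t len = (n - base < LOCAL_WORDS) ? n - base : LOCAL_WORDS;

        /* copy-in: global memory -> local memory */
        for (size_t i = 0; i < len; i++) { lx[i] = x[base + i]; ly[i] = y[base + i]; }

        /* compute entirely out of local memory */
        for (size_t i = 0; i < len; i++) ly[i] = a * lx[i] + ly[i];

        /* copy-out: local memory -> global memory */
        for (size_t i = 0; i < len; i++) z[base + i] = ly[i];
    }
}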
8 Cache characteristics for vector architecture

When a cache is used in combination with vector instructions, the cache must be large, larger than a cache for scalar architectures, because an efficient vector instruction requires a minimum vector length. Only when a significant portion of the data set can be kept in cache is there a performance benefit compared to loading from memory; otherwise there is a performance penalty. The required cache size depends on the application, but a cache of reasonable size would most likely have to hold on the order of a thousand vectors;
holding just a hundred would probably lead to a performance penalty. If the cache blocking parameters in the application are such that the vector length is also one thousand, the resulting cache size would be 8 Mbyte. For a vector length of 256 the cache size would be 2 Mbyte. Possible cache blocking parameters and related vector lengths for the HLRS Tflop/s applications have not been studied yet. However, we would estimate that a cache size of 1 Mbyte is the absolute lower limit at which a vector cache would still make sense, while 10 Mbyte is a comfortable size, possibly larger than what some applications require. On the other hand, a cache for vector architectures may have a larger latency without incurring a large performance penalty. The built-in latency hiding mechanism of the vector instruction is a quite robust feature that would tolerate a cache with higher latency and still have a significant effect on short-vector performance. It is enough to reduce the n_{1/2} parameter from about 100 today, for a typical sequence of vector instructions loading from memory, to say 20 or 30 for the same sequence loading from cache. Another essential performance feature that is difficult to manage with a cache is indirect addressing. There are penalties for cache-based systems because a difficult trade-off has to be made between too large and too small cache lines. Large cache lines are needed for efficient memory transfers because the transaction rate of the memory controller is limited. Small cache lines are needed because of the overhead of unused locations in the cache line for indirect addressing.
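The sizing argument above is simple arithmetic; the snippet below restates it, assuming 8-byte (double precision) vector elements and decimal megabytes as in the text.

#include <stdio.h>

/* Cache size needed to hold a given number of vectors of a given length,
   assuming 8-byte elements. */
int main(void) {
    long n_vectors = 1000;
    long lengths[] = { 1000, 256 };
    for (int i = 0; i < 2; i++) {
        double mbyte = (double)n_vectors * lengths[i] * 8.0 / 1e6;
        printf("%ld vectors of length %ld: about %.1f Mbyte\n",
               n_vectors, lengths[i], mbyte);
    }
    return 0;
}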
9 Conclusions

In the Teraflop-Workbench project a number of applications from various fields have been investigated that will achieve Tflop/s performance on the HLRS SX-8 system. The performance is due to high vectorization in combination with realistically achievable scaling. For further performance enhancement in future vector systems, on-chip memory is an attractive path, both to increase the effective memory bandwidth by vector reuse and to offer latency hiding for short vectors. Sustained performance for applications which use cache effectively can be significantly improved by using on-chip memory to reduce vector startup and to add load/store pipelines and arithmetic units. On-chip memory can be utilized to increase the number of vector data registers already available in the SX architecture, to add a vector cache or a unified scalar/vector cache, or as local memory.
10 Disclaimer

The results presented in this paper do not represent any views of NEC Corporation.
11 Acknowledgements

We would like to thank the Teraflop-Workbench team, in particular Martin Galle, Matthias Müller and Stefan Borowski, for various contributions to this paper.
References

1. http://www.teraflop-workbench.org/htm/projects.htm
2. Küster U, Lammers P: Algorithm performance dependent on hardware architecture. In this volume
Image fusion and registration – a variational approach

B. Fischer and J. Modersitzki

Institute of Mathematics, University of Lübeck, Wallstraße 40, 23560 Lübeck, Germany [email protected], [email protected]

Summary. Image fusion or registration is central to many challenges in medical imaging today and has a vast range of applications. The purpose of this paper is to give an introduction to intensity-based non-linear registration and fusion problems from a variational point of view. To do so, we review some of the most promising non-linear registration strategies currently used in medical imaging and show that all these techniques may be phrased in terms of a variational problem and allow for a unified treatment. A generic registration or fusion method depends on an appropriately chosen distance measure, a regularization, and some additional constraints. The idea of constraints is to incorporate higher-level information about the expected deformation. We examine the most common constraints and show again that they may be conveniently phrased in a variational setting. As a consequence, all of the discussed modules allow for fast implementations and may be combined in any favorable order. We discuss individual methods for various applications, including the registration of magnetic resonance images of a female breast subject to volume-preserving constraints.
1 Introduction

Registration is the determination of a geometrical transformation that aligns points in one view of an object with corresponding points in another view of the same object or a similar object. There are many instances in a medical environment which demand registration, including the treatment verification of pre- and post-intervention images, the study of temporal series of cardiac images, and the monitoring of the time evolution of an agent injection subject to patient motion. Another important area is the need to combine information from multiple images acquired using different modalities, for example computed tomography (CT) and magnetic resonance imaging (MRI). This problem is also called fusion. The problem of fusion and registration arises whenever images acquired from different subjects, at different
times, or from different scanners need to be combined for analysis or visualization. In the last two decades, computerized non-rigid image registration has played an increasingly important role in medical imaging; see, e.g., [1], [2], [3], [4] and references therein. An optimal registration requires incorporating characteristics of the underlying application. Thus, each individual application should be treated by a specific registration technique. Due to the wide range of applications, a variety of techniques has been developed and is in use. We present a flexible variational setting for intensity-driven registration schemes, which may be adapted to a particular application. The building blocks of our variational framework reflect user demands and may be assembled in a consistent and intuitive fashion. The idea is to phrase each individual block in terms of a variational formulation. This not only allows for a unified treatment but also for fast and reliable implementation. The building blocks comprise five categories: image model, distances and external forces, smoother and internal forces, “hard” or “soft” constraints, and optimization procedures. The external forces are computed from the image data and are defined to drive the displacement field towards the desired registration result. In contrast, the internal forces are defined for the wanted displacement field itself and are designed to keep the displacement field smooth during deformation. Whereas the internal forces implicitly constrain the displacement to obey a smoothness criterion, the additional constraints force the displacement to satisfy explicit criteria, such as landmark or volume-preservation constraints. In Sec. 2 we summarize the most popular choices for the outlined building blocks. Furthermore, we set up a general and unified framework for automatic non-rigid registration. In Sec. 3 we show in more detail how these building blocks can be translated into a variational setting. It is this formulation which allows for a fast and reliable numerical treatment. In Sec. 3.4 we indicate how to actually implement the registration schemes. An example in Sec. 4 highlights the importance of adding constraints.
2 The variational framework

Given two images, a reference R and a template T, the aim of image registration is to find a global and/or local transformation from T onto R such that the transformed template matches the reference. Ideally there exists a coordinate transformation u such that the reference R equals the transformed template T[u], where T[u](x) = T(x + u(x)). Given such a displacement u, the registration problem reduces to an interpolation task. However, in general it is impossible to come up with a perfect u, and the registration problem is to compute a transformation u that conforms to the application, given the reference and template image. Apart from the fact that a solution may not exist, it is
not necessarily unique. In other words, intensity-based image registration is inherently an ill-posed problem; see, e.g., [4]. A displacement u which produces a perfect or nearly perfect alignment of the given images is not necessarily a “good” displacement. For example, a computed displacement which interchanges the eyes of a patient when registered to a probabilistic atlas, in order to produce a nearly perfect alignment, obviously has to be discarded. Also, folding and cracks introduced by the displacement are typically not wanted. Therefore it is essential to be able to incorporate features into the registration model such that the computed displacement u resembles the properties of the acquisition, for example the elastic behavior of a human brain. Mimicking the elastic properties of the object under consideration is a striking example of internal forces. These forces constrain the displacement to be physically meaningful. In contrast, the external forces are designed to push the deformable template in the direction of the reference. These forces are based on the intensities of the images. The idea is to design a similarity measure, which is ideally calculated from all voxel values. An intuitive measure is the sum of squares of intensity differences (SSD). This is a reasonable measure for some applications, like the serial registration of histological sections. If the intensities of corresponding voxels are no longer identical, the SSD measure may perform poorly. However, if the intensities are still linearly related, a correlation (CC) based measure is the measure of choice for monomodal situations. In contrast, the mutual information (MI) related measure is based on the co-occurrence of intensities in both images as reflected by their joint intensity histogram. It appears to be the most successful similarity measure for multi-modal imagery, like MR-PET. For a discussion or comparison see, e.g., [5], [6], [7], [8], [9]. Compared to MI, the normalized gradient field (NGF) [10] measure is more restrictive. Here, the basic idea is to reduce the image contents to edges or contours and to ignore the underlying intensity information completely. In contrast to MI, where some kind of probability comes into play, the NGF approach is completely deterministic, easy to implement and easy to interpret. Finally, one may want to guide the registration process by incorporating additional information which may be known beforehand. Among these are landmarks and fiducial markers; cf., e.g., [11] or [12]. Sometimes it is also desirable to impose a local volume-preserving (incompressibility) constraint, which may, for example, compensate for registration artifacts frequently observed when processing pre- and post-contrast images; cf., e.g., [13] or [14]. Depending on the application and the reliability of the specific information, one may want to insist on a perfect fulfilment of these constraints or prefer a relaxed treatment. For example, in practice it is a tricky (and time consuming) problem to determine landmarks to subvoxel precision. Here, it does not make sense to compute a displacement which produces a perfect one-to-one match between the landmarks.
Summarizing, the general registration problem may be phrased as follows. (IR) image registration problem:
J[u] = D[R, T; u] + α S[u] + β C^soft[u] = min,   subject to C[u](x) = 0 for all x.

Here, D models the distance measure (external force, e.g., SSD or MI), S the smoother (internal force, e.g., elasticity), C^soft a penalization (soft constraints), and C the hard or explicit constraints. The penalization and constraints could be empty (unconstrained) or based on landmarks, volume preservation, or anything else. The regularization parameter α may be used to control the strength of the smoothness of the displacement versus the similarity of the images, and the parameter β controls the impact of the penalization. In the following we will discuss these building blocks in more detail.
3 Building blocks

Our approach is valid for images of any spatial dimension d (e.g., d = 2, 3, 4). The reference and template images are represented by the compactly supported smooth mappings R, T : Ω → R, where without loss of generality Ω = ]0, 1[^d. Hence, T(x) denotes the intensity of the template at the spatial position x. For ease of discussion we set R(x) = b_R and T(x) = b_T for all x ∉ Ω, where b_R and b_T are appropriately chosen background intensities. The overall goal is to find a displacement u such that ideally T[u] is similar to R. In this paper we use a continuous image model, which is advantageous for three reasons. Firstly, it allows the proper computation of the deformed image at any spatial position. Secondly, it enables the usage of continuation, scale space, or pyramidal techniques. However, the discussion of these techniques is beyond the scope of this paper. Thirdly, and most importantly, it enables the usage of efficient and fast optimization techniques, which typically rely on smoothness. If the images are given in terms of discrete d-dimensional arrays R and T, one typically uses interpolations or approximations R and T which are based on localized functions like splines or wavelets.

3.1 Smoother and internal forces

The nature of the deformation depends strongly on the application under consideration. For example, a slice of a paraffin-embedded histological tissue does deform elastically, whereas the deformation between the brains of two different individuals is most likely not elastic. Therefore, it is necessary to supply a model for the nature of the expected deformation.
We now present some of the most prominent smoothers S and discuss exemplarily the Gâteaux-derivatives for two of them. An important point is that we are not restricted to a particular smoother S. Any smoother can be incorporated into this framework, as long as it possesses a Gâteaux-derivative. In an abstract setting, the Gâteaux-derivative looks like

dS[u; v] := lim_{h→0} (1/h) (S[u + hv] − S[u]) = ∫_Ω ⟨B[u], B[v]⟩_{R^d} dx,
where B denotes the associated linear partial differential operator. Note that for a complete derivation one also has to consider appropriate boundary conditions. However, these details are omitted here for presentation purposes; see [4] for details. Typically, the operator B is based on first-order derivatives. Therefore, affine linear deformations are also penalized, which is unfavorable for many applications. There are two remedies: choose a higher-order operator (like the curvature regularizer below) or split the deformation space into a coarse (or linear) part and a disjoint fine part and regularize only with respect to the fine space; see [15] for details.

Example 1 (Elastic registration). This particular smoother measures the elastic potential of the deformation. In connection with image registration it was introduced by [16] and has been discussed by various image registration groups; see, e.g., [17] or [18]. The partial differential operator is the well-known Navier-Lamé operator. For this smoother, two natural parameters, the so-called Lamé constants, can be used in order to capture features of the underlying elastic body. A striking example, where the underlying physics suggests looking for deformations satisfying elasticity constraints, is the three-dimensional reconstruction of the human brain from a histological sectioning; details are given in [19] and [4].

Example 2 (Curvature registration). As a second example, we present the curvature smoother

S^curv[u] := (1/2) ∑_{ℓ=1}^{d} ∫_Ω (Δ u_ℓ)^2 dx,    (1)

introduced by [20]. The design principle behind this choice was the idea to make the non-linear registration phase more robust against a poor (affine linear) pre-registration. Since the smoother is based on second-order derivatives, affine linear maps do not contribute to its costs, i.e.,
S^curv[Cx + b] = 0   for all   C ∈ R^{d×d}, b ∈ R^d.
In contrast to other non-linear registration techniques, affine linear deformations are corrected naturally by the curvature approach. Again the Gâteaux-derivative is explicitly known and leads to the so-called bi-harmonic operator A^curv[u] = Δ²u.
3.2 Distances and external forces

Another important building block is the similarity criterion. As for the smoothing operators, we concentrate on those measures D which allow for differentiation. Moreover, we assume that there exists a function f : R^d × R^d → R^d, the so-called force field, such that

dD[u; v] = lim_{h→0} (1/h) (D[R, T; u + hv] − D[R, T; u]) = ∫_Ω ⟨f, v⟩ dx.
Again, we are not restricted to a particular distance measure. Any measure can be incorporated into our framework, as long as it permits a Gâteaux-derivative. The most common choices for distance measures in image registration are the sum of squared differences, cross correlation, cross validation, and mutual information. We give explicit formulae for only two of them; for more information see, e.g., [8], [9] or [4]. We close this section by commenting on a relatively new measure, the so-called normalized gradient field; see [10, 21].

Example 3 (Sum of squared differences). The measure is based on a point-wise comparison of image intensities,
D^SSD[R, T; u] := (1/2) ∫_Ω (R − T[u])^2 dx,
and the force field is given by f^SSD(x, y) = ∇T(x − y) (T(x − y) − R(x)). This measure is often used when images of the same modality have to be registered.
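On a discrete grid the SSD measure and its force field reduce to simple array operations. The sketch below is our own illustration (not code from the paper): it assumes a 2D image, row-major storage, that the warped template T[u] has already been resampled onto the grid, and forward-difference gradients; the standard form grad(T[u]) * (T[u] − R) is used for the force.

#include <stddef.h>

/* Sketch: discrete SSD distance and force field on a 2D grid of nx*ny
   voxels with spacing h.  R is the reference, Tu the warped template. */
double ssd_distance(const double *R, const double *Tu,
                    size_t nx, size_t ny, double h)
{
    double sum = 0.0;
    for (size_t i = 0; i < nx * ny; i++) {
        double d = R[i] - Tu[i];
        sum += d * d;
    }
    return 0.5 * sum * h * h;          /* (1/2) * integral of (R - T[u])^2 */
}

void ssd_force(const double *R, const double *Tu,
               size_t nx, size_t ny, double h,
               double *fx, double *fy)  /* force components, nx*ny each */
{
    for (size_t j = 0; j < ny; j++)
        for (size_t i = 0; i < nx; i++) {
            size_t k = j * nx + i;
            double dTx = (i + 1 < nx) ? (Tu[k + 1]  - Tu[k]) / h : 0.0;
            double dTy = (j + 1 < ny) ? (Tu[k + nx] - Tu[k]) / h : 0.0;
            double diff = Tu[k] - R[k];
            fx[k] = dTx * diff;
            fy[k] = dTy * diff;
        }
}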
Example 4 (Mutual information). Another popular choice is mutual information. It basically measures the entropy of the joint density ρ(R, T), where ρ(R, T)(r, t) counts the number of voxels with intensity r in R and t in T. The precise formula is

D^MI[R, T; u] := − ∫_{R^2} ρ(R, T[u]) log [ ρ(R, T[u]) / (ρ(R) ρ(T[u])) ] d(r, t),
where ρ(R) and ρ(T[u]) denote the marginal densities. Typically, the density is replaced by a Parzen-window estimator; see, e.g., [22]. The associated force field is given by

f^MI(x, y) = (Ψ_σ ∗ ∂_t L)(R(x), T(x + y)) ∇T(x + y),

where L := 1 + ρ(R, T[u]) log [ ρ(R, T[u]) / (ρ(R) ρ(T[u])) ] and Ψ is the Parzen-window function; see, e.g., [9] or [23]. This measure is useful when images of a different modality have to be registered.
Example 5 (Normalized Gradient Field). Any reasonable distance measure depends on the deformed image and can thus be written as D[R, T[u]]. Therefore, the associated force field contains the factor ∇T, and edges enter into play naturally. A distance measure directly based on edges has been proposed by [10, 21]. The basic idea is to use a directly accessible, stable edge detector n_e(I, x) = ∇I(x) / √(‖∇I(x)‖_2^2 + e^2), where the parameter e is related to the noise level and distinguishes between important and unimportant structures within the images. The distance measure is based on the pointwise alignment of the regularized gradient fields,
D^NGF[R, T; u] := − ∫_Ω ( n_e(R, x)^T n_e(T[u], x) )^2 dx,
see [10] for details.

3.3 Additional constraints

Often it is desirable to guide the registration process by incorporating additional information which may be known beforehand, such as markers or characteristics of the deformation process. To incorporate such information, the idea is to add additional constraints or a penalization to the minimization problem.

Example 6 (Landmarks). One may want to incorporate information about landmarks or fiducial markers. Let r_j be a landmark in the reference image and t_j the corresponding landmark in the template image. Our setting allows either adding hard or explicit constraints
C_j[u] := u(t_j) − t_j + r_j,    j = 1, 2, . . . , m,
which have to be fulfilled precisely, C_j[u] = 0 (“hard” constraints), or adding an additional cost term

C^soft[u] := ∑_{j=1}^{m} ‖C_j[u]‖²_{R^d}
(“soft” constraints, since we allow for deviations). For a more detailed discussion of landmark constraints, we refer to [12].

Example 7 (Volume preservation). In some applications, for example the monitoring of tumor growth, a change of volume due to registration is critical. Therefore one may restrict the deformation to be volume preserving, using the pointwise constraints
C[u](x) := det ∇u(x) − 1. [13] presented a penalized approach based on
C^soft[u] := ∫_Ω |log(C[u](x) + 1)| dx.
An extended discussion and the treatment of the constrained approach can be found in [14]; see also [25] for numerical issues.
3.4 Numerical treatment of the constrained problem

There are essentially two approaches to the minimization of (IR). The first approach is to first discretize the continuous problem and then treat the discrete problem by some optimization technique; see, e.g., [25]. The second approach, which we discuss in this paper, is to deal with a discretization of the so-called Euler-Lagrange equations, i.e. the necessary conditions for a minimizer of the continuous problem; see [24] for an extended discussion. It remains to solve this system of non-linear partial differential equations efficiently. After invoking a time-stepping approach and after an appropriate space discretization, we finally end up with a system of linear equations. As it turns out, these linear systems have a very rich structure, which allows one to come up with very fast and robust solution schemes for all of the above-mentioned building blocks. It is important to note that the system matrix does not depend on the force field and the constraints. Thus, changing the similarity measure or adding additional constraints does not change the favorable computational complexity. Moreover, fast and parallel solution schemes can be applied to reduce the computation time even further; see also [26], [27], or [28].
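As an illustration of this strategy (our own sketch of a standard semi-implicit scheme, not a formula taken from the paper), write the Euler-Lagrange equation for the unconstrained problem as α A[u] = f(·, u), with A the partial differential operator induced by the smoother and f the force field. Embedding it in an artificial time and treating the smoother implicitly and the force explicitly gives

(I + τ α A) u^(k+1) = u^(k) + τ f(·, u^(k)),    k = 0, 1, 2, . . . ,

so that the same linear system matrix I + τ α A is used in every step; as stated above, it does not depend on the force field, and any of the distance measures of Sec. 3.2 can be plugged into the right-hand side.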
4 An example: MRI mammography

In order to demonstrate the flexibility of the variational approach, we present numerical results for the registration of magnetic resonance images (MRI). In this application, MRIs of a female breast are taken at different times (images from Bruce Daniel, Lucas Center for Magnetic Resonance Spectroscopy and Imaging, Stanford University). Fig. 1 shows an MRI section taken during the so-called wash-in phase of a marker (c) and an analogous section taken during the so-called wash-out phase (a). A comparison of these two images indicates a suspicious region in the upper part of the images (b). A quantitative analysis is a delicate matter, since observable differences are not only related to contrast uptake but also due to motion of the patient, such as breathing or heart beat. Fig. 1 shows the results of an elastic/SSD registration for the unconstrained (non) and volume-preserving (VP) constrained approaches. Though it is almost impossible to distinguish the two deformed images (d) and (g), and even the difference images (e) and (h) are very much alike, there is a tremendous difference in the deformations, as can be seen from (f) and (i), where a
Fig. 1. Results for the unconstrained (non) and volume preserving (VP) elastic/SSD registrations of a reference (a) and template (c) image; registered templates T[u^non] (d) and T[u^VP] (g); differences |T[0] − R| (b), |T[u^non] − R| (e), and |T[u^VP] − R| (h); deformed grid on a region of interest x + u^non (f) and x + u^VP (i); image of volume preservation of the unconstrained (j) and VP constrained (k) solutions
region of interest is superimposed with the deformed grids. A further analysis shows that the unconstrained solution u^non changes the tissue volume by a factor of 2.36 (max |C[u^non]| ≈ 1.36), whereas the VP solution u^VP satisfies the constraints up to a numerical tolerance (max |C[u^VP]| ≤ 10^−8). Note that a comparison or discussion of the results from an application point of view is beyond the scope of this paper. More generally, a general setting does not answer the question of which particular combination of building blocks leads to the best results. However, the framework enables the computation of results for different choices and can thus be used to optimize the building blocks.
5 Conclusions

In this note we presented a general approach to image fusion and registration, thereby giving an overview of state-of-the-art medical image registration schemes. The flexibility of the presented framework enables one to integrate and to combine various registration modules in a consistent way. We discussed the use of different smoothers, distance measures, and additional constraints. The numerical treatment is based on the solution of a partial differential equation related to the Euler-Lagrange equations. These equations are well studied and allow for fast, stable, and efficient schemes. In addition, we reported on one example, showing the effect of constraints. Part of the software is available via http://www.math.uni-luebeck.de/SAFIR.
References

1. Maintz JBA, Viergever MA (1998) Medical Image Analysis 2:1–36
2. Fitzpatrick JM, Hill DLG, Maurer CR Jr (2000) Image registration. In: Sonka M, Fitzpatrick JM (eds) Handbook of medical imaging, Volume 2: medical image processing and analysis. SPIE Press
3. Zitová B, Flusser J (2003) Image Vision Comp 21:977–1000
4. Modersitzki J (2004) Numerical methods for image registration. Oxford University Press
5. Brown LG (1992) ACM Comput Surv 24:325–376
6. Collignon A, Maes F, Delaere D, Vandermeulen D, Suetens P, Marchal G (1995) Automated multi-modality image registration based on information theory. In: Bizais Y, Barillot C, Di Paola R (eds) Information Processing in Medical Imaging. Kluwer Academic Publishers, Dordrecht
7. Viola PA (1995) Alignment by maximization of mutual information. PhD thesis, Massachusetts Institute of Technology
8. Roche A (2001) Recalage d'images médicales par inférence statistique. PhD thesis, Université de Nice, Sophia-Antipolis, France
9. Hermosillo G (2002) Variational methods for multimodal image matching. PhD thesis, Université de Nice, France
10. Haber E, Modersitzki J (2004) Intensity gradient based registration and fusion of multi-modal images. Technical Report TR-2004-027-A, Department of Mathematics and Computer Science, Emory University, Atlanta GA 30322 (submitted to IEEE Trans Med Imaging)
11. Johnson HJ, Christensen GE (2002) IEEE Trans Med Imaging 21:450–461
12. Fischer B, Modersitzki J (2003) Proc Appl Math Mech 3:32–35
13. Rohlfing T, Maurer CR Jr, Bluemke DA, Jacobs MA (2003) IEEE Trans Med Imaging 22:730–741
14. Haber E, Modersitzki J (2004) Volume preserving image registration. In: Barillot C, Haynor D, Hellier P (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2004. Lecture Notes in Computer Science 3216. Springer Verlag
15. Haber E, Modersitzki J (2004) Cofir: coarse and fine image registration. Technical Report TR-2004-006-A, Department of Mathematics and Computer Science, Emory University, Atlanta GA 30322
16. Broit C (1981) Optimal Registration of Deformed Images. PhD thesis, Computer and Information Science, University of Pennsylvania
17. Bajcsy R, Kovačič S (1986) Toward an individualized brain atlas elastic matching. Tech. Report MS-CIS-86-71 Grasp Lap 76, Dept. of Computer and Information Science, Moore School, University of Philadelphia
18. Fischer B, Modersitzki J (2004) Linear Algebra Appl 380:107–124
19. Schmitt O (2001) Die multimodale Architektonik des menschlichen Gehirns. Habilitation thesis, Institute of Anatomy, Medical University of Lübeck, Germany
20. Fischer B, Modersitzki J (2003) J Math Imaging Vis 18:81–85
21. Haber E, Modersitzki J (2005) Beyond mutual information: A simple and robust alternative. In: Meinzer H-P, Handels H, Horsch A, Tolxdoff T (eds) Bildverarbeitung für die Medizin 2005. Springer 1–5 (accepted for publication)
22. Viola P, Wells III WM (1995) Alignment by maximization of mutual information. In: Proc. of the Fifth Int. Conf. on Computer Vision. IEEE Computer Society
23. D'Agostino E, Modersitzki J, Maes F, Vandermeulen D, Fischer B, Suetens P (2003) Free-form registration using mutual information and curvature regularization. In: Gee J, Maintz J, Vannier M (eds) 2nd International Workshop on Biomedical Image Registration 2003. Lecture Notes in Computer Science 2717. Springer Verlag
24. Fischer B, Modersitzki J (2004) Large scale problems arising from image registration. Technical Report TR-2004-027-A, Institute of Mathematics, University of Lübeck (to appear in GAMM Mitteilungen 2005)
25. Haber E, Modersitzki J (2004) Inverse Probl 20:1621–1638
26. Henn S, Witsch K (2001) SIAM J Sci Comp 23:1077–1093
27. Droske M, Rumpf M, Schaller C (2003) Non-rigid morphological registration and its practical issues. In: Proc. ICIP '03, IEEE Int. Conf. on Image Processing. Barcelona, Spain
28. Haber E, Modersitzki J (2004) A multilevel method for image registration. Technical Report TR-2004-005-A, Department of Mathematics and Computer Science, Emory University, Atlanta GA 30322 (accepted for publication in SIAM J Sci Comp)
The analysis of behaviour of multilayered nodoid shells on the basis of non-classical theory S.K. Golushko Institute of Computational Technologies SB RAS, Lavrentiev Ave. 6, Novosibirsk 630090, Russia [email protected]
Summary. A parametric analysis of the stressed-deformed state of multilayered reinforced nodoid shells is carried out on the basis of geometrically linear and nonlinear variants of classical and non-classical theories. The influence of the reinforcement structure of the composite material, the transverse shear of the binder, and the order of arrangement of the reinforced layers on the behaviour of the shells is investigated. The numerical solutions obtained by the spline-collocation and discrete orthogonalization methods are compared. The high efficiency of the numerical methods used is demonstrated for the solution of boundary value problems for stiff systems of differential equations.
1 Introduction

Multilayer shells are major elements of many modern structures, playing a leading role in aircraft construction, shipbuilding, mechanical engineering, and the petroleum, gas and chemical industries. The possibilities for using shells have expanded considerably with the advent of composite materials (CM). Owing to their lightness, strength and rigidity, CM substantially surpass traditional metals and alloys in specific characteristics. Since their internal structure can be varied, CM give designers great opportunities for controlling the stressed-deformed state (SDS) of structures, thus providing the best conditions for their operation. A significant increase in the requirements on the strength and reliability of modern structures makes it necessary to consider, alongside the classical linear theory, geometrically nonlinear and non-classical theories of shells. The systems of equations describing the behaviour of shells are stiff, and the solutions exhibit strong boundary effects. In the numerical treatment of such equations, difficulties connected with the instability of the calculation arise.
This work was supported by the grant of the President of the Russian Federation No. 2314.2003.1 for the support of young Russian scientists and leading scientific schools of the Russian Federation. © S. K. Golushko, 2004.
Therefore, the choice and development of numerical methods and the assurance of the reliability of the numerical solutions obtained are important problems.
2 Formulation of the problem

A multilayer nodoid shell of thickness h made of a fibrous CM is considered. For the description of the elastic properties of a reinforced layer, the structural model of a CM with bidimensional fibres is used [1]. The analysis of the SDS of the nodoid shell is carried out on the basis of geometrically linear and nonlinear variants of the classical Kirchhoff-Love theory [2], the Timoshenko theory [3] and the Andreev-Nemirovskii theory [4]. The resolving system of equations describing the SDS of the nodoid shell has the form

dy(ξ)/dξ = A(ξ, y(ξ)) + b(ξ),   ξ ∈ [0, 1],   G_0 y(0) = g_0,   G_1 y(1) = g_1.    (1)
Here y(ξ) is the vector of resolving functions, ξ = s/b, s ∈ [a, b], and a, b are the coordinates of the left and right edges of the shell. The system (1) is nonlinear; it is of order 8 in the case of the theory [4] and of order 6 when the Kirchhoff-Love and Timoshenko theories are used. The behaviour of the nodoid shell is investigated depending on the structural and mechanical parameters of the CM, the linear or nonlinear variant of the classical and non-classical theories used, and the arrangement of the reinforced layers. The numerical solution of the boundary value problem (1) is obtained by the spline-collocation [5] and discrete orthogonalization [6] methods.
3 The analysis of efficiency of numerical methods

The stiffness of system (1) is especially pronounced in the case of the theory [4]. It is studied here for a cylindrical shell. If the longitudinal generalized effort is known, for instance from the boundary conditions, then the right-hand side of system (1) can be reduced to the linear form A(ξ, y(ξ)) = A(ξ)y(ξ). The eigenvalues of the matrix A(ξ) have the form

λ_{1,2} = 0,   λ_{3,4,5,6} = ±μ ± iν,   λ_{7,8} = ±λ,   √(μ² + ν²) ≪ λ,   λ ≫ 1,

where μ, ν are the real and imaginary parts of the complex eigenvalues and ±λ are the real eigenvalues. In the non-classical case, the presence of real eigenvalues leads to the appearance in the solution, alongside the functions e^{μ(ξ−1)} cos(νξ), e^{μ(ξ−1)} sin(νξ), e^{−μξ} cos(νξ), e^{−μξ} sin(νξ), of the exponential functions e^{λ(ξ−1)}, e^{−λξ}, whose values are significant only in small vicinities of the edges ξ = 1 and ξ = 0 and decrease quickly away from them (fig. 1a).
Fig. 1. The three-layer reinforced cylindrical shell: (a) the components of the solution; (b) the dependence of the spectral radius Λ* of the matrices A(ξ) on the parameters γ = R/h and Ω = E_1^a / E_1^c
The appearance of a strongly expressed boundary effect in the solution is caused by the presence of such functions. On the other hand, the coefficient matrix of the system of equations is badly conditioned. The eigenvalues for a three-layer cylindrical shell with rigid covers and with different homogeneous layers are given in Table 1, with E1 = E3 = 30 E2. Here E_n is the Young's modulus of the material of the n-th layer. From Table 1 it follows that the real eigenvalues are much greater not only than unity but also than the moduli of the complex eigenvalues. As a result, the condition number of the matrix A(ξ) becomes much greater than unity.

Table 1. Eigenvalues of the matrix A

  R/h      10      20      30      50      100      200
  λ      139.0   278.3   417.6   696.1   1392.2   2784.4
  μ        7.6    10.1    12.1    15.3     21.4     29.9
  ν        5.3     8.4    10.7    14.3     20.6     29.5
Fig. 1b shows the dependence of the condition number Λ* = max_ξ Λ(ξ) of the matrices A(ξ) for the three-layer reinforced cylindrical shell on the parameters γ = R/h and Ω = E_1^a / E_1^c, where R is the radius of the cylindrical shell and E_1^a, E_1^c are the Young's moduli of the reinforcement and binder materials. From the figure it is clear that the condition number is two orders of magnitude greater than unity; moreover, the thinner the shell, the greater the condition number.
For a multilayered cylindrical shell with constant structural parameters it is possible to obtain the analytical solution of system (1) [7].

Table 2. Maximal relative error ε by components
  Parameters        W              W              Π              S1
  COLSYS package (TOL)
  10^-4          7.51 · 10^-6   1.35 · 10^-7   1.39 · 10^-5   7.62 · 10^-7
  10^-8          3.22 · 10^-9   1.72 · 10^-10  5.72 · 10^-9   8.30 · 10^-10
  GMDO package (J)
  600            6.32 · 10^-5   4.56 · 10^-6   1.12 · 10^-6   1.16 · 10^-4
  1200           6.39 · 10^-6   8.11 · 10^-8   8.39 · 10^-6   4.62 · 10^-7
Table 2 compares the results obtained by the spline-collocation method (package COLSYS) and the discrete orthogonalization method (package GMDO) with the analytical solution for a three-layer cylindrical shell with homogeneous layers. Here ε is the maximal relative error in the uniform metric; W and Π are the dimensionless deflection and the kinematic characteristic which takes into account the presence of transverse shears; S1 is the dimensionless generalized effort; TOL is the accuracy prescribed to the package COLSYS; and J is the total number of elements in the grid of the integration procedure for the discrete orthogonalization method. From Table 2 it follows that the numerical solutions practically coincide with the analytical one, which indicates the high efficiency of the numerical methods used. As an additional experiment, the SDS of a cylindrical shell with R/h = 200 was calculated. In this case the spectral radius of the system matrix is λ = 2784.4. Both methods successfully compute the solution of this problem with the calculation parameters TOL = 10^-8 for the package COLSYS and J = 4000 for the discrete orthogonalization method. The maximal relative errors, for example for the function Π, are 1.52 · 10^-8 and 1.26 · 10^-5 for the packages COLSYS and GMDO, respectively.
4 Calculation of the SDS of a nodoid shell

The maximal internal volume for the minimal surface area of the structure is an important requirement in designing balloons and pressure vessels. When designing cylindrical balloons with bottoms without a polar aperture, the hemisphere has the maximal volume. But the hemisphere ceases to be the best bottom when it is necessary to provide an aperture of a prescribed
radius in the bottom. Nodoid and unduloid shells have this property. The parametric form of the equation of the generatrix of nodoids and unduloids is

x = (2λ − r_1) F(k′, ϕ) + r_1 E(k′, ϕ),   z = r_1 √(1 − (k′)² sin² ϕ).

Here k = (2λ − r_1)/r_1 is the modulus of the elliptic integral; k′ = √(1 − k²) is the complementary modulus; F, E are the elliptic integrals of the first and second kind; ϕ = arcsin(√(r_1² − z²)/(k′ r_1)) is the current coordinate; λ is the parameter which characterizes the curve; r_1 is the initial radius of the shell. For 0 < λ < r_1/2 the shell is called an unduloid, and for r_1/2 < λ < r_1 a nodoid (fig. 2a).
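The generatrix can be tabulated numerically once the incomplete elliptic integrals are available. The sketch below is our own illustration (not the author's code): it uses the formulas as given above and evaluates F and E by a simple midpoint quadrature; the chosen values of r_1 and λ are arbitrary examples.

#include <stdio.h>
#include <math.h>

/* Incomplete elliptic integrals of the first and second kind,
   evaluated by midpoint quadrature (illustrative only). */
static double ellip_F(double k, double phi) {
    const int n = 2000; double h = phi / n, s = 0.0;
    for (int i = 0; i < n; i++) {
        double st = sin((i + 0.5) * h);
        s += h / sqrt(1.0 - k * k * st * st);
    }
    return s;
}

static double ellip_E(double k, double phi) {
    const int n = 2000; double h = phi / n, s = 0.0;
    for (int i = 0; i < n; i++) {
        double st = sin((i + 0.5) * h);
        s += h * sqrt(1.0 - k * k * st * st);
    }
    return s;
}

int main(void) {
    const double PI = 3.14159265358979323846;
    double r1 = 1.0, lambda = 0.75;          /* r1/2 < lambda < r1: nodoid  */
    double k  = (2.0 * lambda - r1) / r1;    /* modulus                     */
    double kp = sqrt(1.0 - k * k);           /* complementary modulus       */
    for (int i = 0; i <= 8; i++) {
        double phi = i * (PI / 2.0) / 8.0;
        double x = (2.0 * lambda - r1) * ellip_F(kp, phi) + r1 * ellip_E(kp, phi);
        double z = r1 * sqrt(1.0 - kp * kp * sin(phi) * sin(phi));
        printf("phi = %5.3f   x = %7.4f   z = %7.4f\n", phi, x, z);
    }
    return 0;
}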
Fig. 2. The bottoms of balloons and pressure vessels: (a) the generatrix shape for unduloid, sphere and nodoid bottoms; (b) the multilayered reinforced nodoid shell
Let us consider a layered reinforced nodoid shell of thickness h. The internal layer of thickness h1 is reinforced with a longitudinal family of armature, the middle layer of thickness h2 with spiral families of armature at angles ψ and −ψ, and the external layer of thickness h3 with circumferential armature (fig. 2b). The edges of the shell are rigidly clamped. We calculate the SDS of the multilayered reinforced nodoid shell using classical and non-classical theories in geometrically linear and nonlinear statements.

4.1 Coal-plastic nodoid shell

Fig. 3 presents the dependencies of the maximal reduced stress intensities in the elements of the CM and of the deflections for a coal-plastic shell. The results are obtained using the model of a CM with bidimensional fibres. Curves 1 correspond to the results obtained using the classical Kirchhoff-Love theory, curves 2 to the results obtained using the Timoshenko theory, and curves 3 to the non-classical theory [4]. The continuous lines correspond to
Fig. 3. The maximal reduced stress intensities bs0 , bs1 , bs2 in elements of composite and maximal dimensionless deflections w for coal-plastic nodoid shell
the values h1 = h3 = 0.1h and the order of arrangement of the reinforced layers (90, ψ, −ψ, 0); the dotted lines to h1 = h3 = 0.4h and (0, ψ, −ψ, 90). From fig. 3 one can see that at h1 = h3 = 0.1h and ψ ≥ 50° the results corresponding to all three theories differ by no more than 10%. At ψ = 10° the difference between the stress intensities obtained using the classical theory and the Timoshenko theory is 50% for the binder and 60% for the longitudinal armature. The values obtained using the non-classical theory at the same parameters are almost three times larger than those obtained using the classical theory. The dependence of the maximal reduced stress intensities on the angle of spiral reinforcement becomes weaker if the order of arrangement and the ratio of the layer thicknesses are changed to h1 = h3 = 0.4h. The difference between the results obtained using the classical theory and the Timoshenko theory is then up to 20% for the binder and the longitudinal armature.
However, in this case the stress intensities in the binder and the longitudinal armature obtained using the non-classical theory [4] at some reinforcement angles exceed the corresponding values obtained using the classical theory by almost a factor of two. The deflections and the stress intensities in the binder and the circumferential armature obtained using the non-classical theory [4] decrease by a factor of three, and the stress intensities in the longitudinal armature by a factor of four, at h1 = h3 = 0.1h when the angle of spiral reinforcement is increased up to 60°. With the classical theory the stress intensities in the binder and the longitudinal armature practically do not change under such a variation of the reinforcement angle.

4.2 Fiberglass nodoid shell

Let us consider a fiberglass nodoid shell with the structure (0, 90, ψ, −ψ) under the influence of constant internal pressure. Fig. 4 shows the dependencies of the maximal reduced stress intensities and the dimensionless deflections on the angle of spiral reinforcement for a rigidly clamped shell. The results are obtained using the classical Kirchhoff-Love theory. The other parameters have the same values as for fig. 3. From fig. 4 one can see that completely neglecting the work of the binder leads to differences between the values obtained using the models with one-dimensional fibres and the filament model of up to 30% for the circumferential armature and up to 50% for the spiral armature and the deflections. The results obtained using the models with one-dimensional fibres practically coincide for all values of the reinforcement angle. An appreciable difference is observed between the stress intensities obtained using the models with one-dimensional and bidimensional fibres.

4.3 Metal-composite nodoid shell

Let us consider a nodoid shell in which an aluminium matrix is used as the binding material and a steel wire as the reinforcing fibres; the structure of the CM is (0, ψ, −ψ, 90). In addition we use the law of continuous winding by fibres of constant cross section: r h ω_n cos ψ_n = const. Fig. 5 shows the dependencies of the maximal dimensionless deflections W and the reduced stress intensities in the binder on the angle of spiral reinforcement. The continuous and dashed lines correspond to the same parameters as for fig. 4. The results are obtained using the non-classical theory [4]. From fig. 5 one can see that the values obtained using the model with one-dimensional fibres and the improved model with one-dimensional fibres differ by up to 30% for the stress intensities in the binder and by up to 25% for the deflections. The values obtained with the models with one-dimensional and bidimensional fibres differ more considerably: the difference is up to 70% for the binder and up to 100% for the deflections. The dependence of the stress intensities and deflections on the reinforcement structure is not as pronounced as in the case of coal-plastic or fiberglass shells.
Fig. 4. The maximal reduced stress intensities bs0 , bs1 , bs2 in elements of composite and maximal dimensionless deflections w for fiberglass nodoid shell
4.4 Influence of nonlinear terms

Let us calculate the loadings of initial destruction for the nodoid shell using linear and nonlinear variants of the Kirchhoff-Love and Timoshenko theories and of the theory [4]. Table 3 presents the dimensionless loadings of initial destruction P = P* / √(σ_c σ_a) as functions of s1/h for a rigidly clamped fiberglass nodoid shell at h1 = h3 = 0.4h and the structure (0, 90, 60, −60). The results are obtained with the model of a CM with bidimensional fibres. Table 4 shows the dimensionless loadings of initial destruction for the nodoid shell obtained using the condition of continuous winding. One can see from Tables 3 and 4 that the influence of the nonlinear terms is insignificant and does not exceed 6%. The loadings of initial destruction obtained using the classical theory are overestimated by up to 15% in comparison with the
Fig. 5. The maximal reduced stress intensity bs0 in binding and maximal dimensionless deflections w for metal-composite nodoid shell

Table 3. Loadings of initial destruction P

            Kirchhoff–Love         Timoshenko             Non-classical [4]
s1/h        linear     nonlinear   linear     nonlinear   linear     nonlinear
 50         24,158     24,162      20,769     20,744      21,074     20,941
 70         17,312     17,315      15,419     15,407      13,931     13,575
100         12,148     12,151      11,151     11,147      10,810     10,142
Table 4. Loadings of initial destruction P (continuous winding)

            Kirchhoff–Love         Timoshenko             Non-classical [4]
s1/h        linear     nonlinear   linear     nonlinear   linear     nonlinear
 50         6,762      6,752       6,958      6,958       6,590      6,584
 70         4,847      4,839       4,948      4,949       4,857      4,857
100         3,403      3,394       3,452      3,453       3,410      3,460
When the condition of continuous winding is used, the difference between the values obtained with the various theories does not exceed 5%.

4.5 Influence of the arrangement of layers

We investigate how the order of arrangement of the layers influences the behavior of a rigidly clamped coal-plastic shell under constant internal pressure at h1 = h3 = h/3. The dependencies of the maximal dimensionless deflections, the loadings of initial destruction, and the reduced stress intensities in the binding and in the spiral armature on the angle of spiral reinforcing are shown in fig. 6.
Fig. 6. The maximal reduced stress intensities bs0 , bs1 in binding and spiral armature, maximal dimensionless deflections w and loadings of initial destruction for coal-plastic nodoid shell
The results are obtained using the non-classical theory [4] together with the model of the CM with two-dimensional fibres. Curves 1 correspond to the layer arrangement (0, 90, ψ, −ψ), curves 2 to (0, ψ, −ψ, 90), and curves 3 to (90, ψ, −ψ, 0). The data in fig. 6 show that placing the layer with circumferential armature on the outside (curves 2) instead of in the middle (curves 1) has an insignificant influence on the SDS of the shell. However, placing the circumferential armature in an internal layer (curves 3), for example at ψ = 30°, lowers the stress level in the longitudinal armature and thereby increases the loading of initial destruction almost twofold. The character of the dependence of the initial destruction loading on the angle of spiral reinforcing also changes: a characteristic maximum appears.
The order of arrangement of the reinforced layers has practically no influence on the values of the maximal deflections.

4.6 Determination of loadings of initial destruction

We investigate the influence of the choice of the structural model of the CM on the calculated level of the loadings of initial destruction for nodoid shells. Fig. 7 shows the dependencies of the loading of initial destruction on the reinforcing angle of the spiral family of armature for coal-plastic nodoid shells. The results are obtained using the non-classical theory [4].
Fig. 7. The loadings of initial destruction for coal-plastic nodoid shell
Fig. 7 shows that at h1 = h3 = 0.1h there are reinforcing angles at which the loadings of initial destruction reach their maximal values. The difference between the values obtained with the models with one-dimensional and two-dimensional fibres can reach 40%. At h1 = h3 = 0.4h the dependence of the loading of initial destruction on the angle of spiral reinforcing is monotonic and varies only slightly.

4.7 The analysis of reliability of numerical solutions

Tab. 5 presents the maximal relative differences of the values describing the SDS of rigidly clamped fiberglass nodoid shells calculated with the methods of spline collocation and discrete orthogonalization. The results are obtained with the linear and nonlinear variants of the theory [4] and the model of the CM with two-dimensional fibres.
Table 5. Maximal relative error ε by components

Linear non-classical theory [4]
TOL, J            W             Π             W             M1
10^-4, 250        3.47·10^-1    5.31·10^-3    5.68·10^-1    4.64·10^-3
10^-6, 500        4.32·10^-3    6.28·10^-5    8.53·10^-3    4.15·10^-5
10^-8, 1000       8.35·10^-5    6.41·10^-7    5.20·10^-5    4.93·10^-7

Nonlinear non-classical theory [4]
TOL, J            W             Π             W             M1
10^-4, 250        3.53·10^-1    5.24·10^-3    5.72·10^-1    4.98·10^-3
10^-6, 500        4.56·10^-3    6.87·10^-5    8.61·10^-3    4.34·10^-5
10^-8, 1000       7.93·10^-5    8.01·10^-7    3.29·10^-5    4.14·10^-7
Tab. 5 shows good agreement between the results: the solutions converge towards each other as the number of intervals for the discrete orthogonalization method is increased and the prescribed accuracy TOL of the COLSYS package is tightened.
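Such a cross-check of two independent numerical methods can be organized as sketched below. The sketch uses a simple placeholder boundary value problem and SciPy's collocation and shooting tools instead of the actual COLSYS and discrete orthogonalization codes applied to the shell equations; it only illustrates how a relative error measure of the kind reported in Tab. 5 is formed.

import numpy as np
from scipy.integrate import solve_bvp, solve_ivp
from scipy.optimize import brentq

# placeholder boundary value problem: w'' + w = 0, w(0) = 0, w(pi/2) = 1
def rhs(x, y):
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1.0])

a, b = 0.0, np.pi / 2

# method 1: collocation (plays the role of COLSYS)
x = np.linspace(a, b, 50)
col = solve_bvp(rhs, bc, x, np.zeros((2, x.size)), tol=1e-8)

# method 2: simple shooting (plays the role of an independent second method)
def end_value(slope):
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (a, b), [0.0, slope],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0

slope = brentq(end_value, 0.1, 10.0)
shoot = solve_ivp(lambda t, y: [y[1], -y[0]], (a, b), [0.0, slope],
                  dense_output=True, rtol=1e-10, atol=1e-12)

# maximal relative difference of the deflection-like component on a common mesh
xs = np.linspace(a, b, 200)
eps = np.max(np.abs(col.sol(xs)[0] - shoot.sol(xs)[0])) / np.max(np.abs(shoot.sol(xs)[0]))
print(f"max relative difference between the two methods: {eps:.2e}")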
References
1. Nemirovskii YuV (1972) Mekhanika polimerov 5:861–873 (in Russian)
2. Novozhilov VV (1951) Theory of thin shells. Sudpromgiz, Leningrad (in Russian)
3. Grigorenko YaM, Vasilenko AT (1992) Static problems of anisotropic heterogeneous shells. Nauka, Moscow (in Russian)
4. Andreev AN, Nemirovskii YuV (2001) Multilayer anisotropic shells and plates. Nauka, Novosibirsk (in Russian)
5. Ascher U, Christiansen J, Russell RD (1981) ACM Trans Math Software 7/2:209–222
6. Godunov SK (1961) Usp Mat Nauk 16/3:171–174 (in Russian)
7. Golushko SK, Gorshkov VV (2002) Analysis of behavior of cylindrical shells in non-classical formulation. In: Joint issue of Comp Techn 7 and Bulletin of KazNU 4/32: Proc. of the Int. Conf. "Computational and informational technologies for science, engineering and education", Almaty, Kazakhstan, part 2 (in Russian)
On the part load vortex in draft tubes of hydro electric power plants E. Göde, A. Ruprecht, and F. Lippold University of Stuttgart, Institute of Fluid Mechanics and Hydraulic Machinery, Pfaffenwaldring 10, D-70550 Stuttgart, Germany [email protected]
Summary. For a given draft tube geometry numerical flow simulations have been carried out. Corresponding to part load operation of a turbine with fixed runner blades, such as Francis or propeller turbines, a set of different inlet boundary conditions has been specified to simulate the draft tube vortex. The intention was to find correlations between the inlet condition and the structure of the draft tube vortex. The results can be essential for design purposes in turbine engineering.
1 Introduction

The draft tube vortex is one of the most fascinating flow phenomena in a hydraulic turbine, but it can have considerable consequences for the operation of the power plant. For runners with fixed blades, as installed in Francis as well as in propeller turbines, the fluid leaves the runner with more or less swirl depending on the operating condition, since the rotational speed is constant. From kinematics it follows that the farther the operating point is from the best efficiency point, the higher the swirl. Well known is the corkscrew-shaped part load vortex (figure 1), which rotates with a fraction of the runner speed, typically between 30 and 50% of it. Accordingly, the pressure field rotates, and since the pressure is neither constant along the circumference nor constant in time, there is a permanent excitation of vibrations. The pressure fluctuations can lead to severe operational problems at critical operating conditions. In recent years great progress has been achieved in the numerical simulation of slender vortices in turbulent flows and of the draft tube vortex in particular [1, 2, 3]. Contributions necessary for this achievement have been made especially in the field of turbulence modelling, taking into account the anisotropic character of turbulence in the flow around a slender vortex. In addition, multi-scale numerical approaches have been introduced (e.g. VLES: very large eddy simulation).
Fig. 1. Part load vortex, experiment and simulation (Francisturbine)
Fig. 2. Turbine hill chart and part load operating point (sketched)
Now it seems possible to answer questions such as which kinematic parameters have the major influence on the development of the draft tube vortex. In fact, it would be great progress to find out by simulation to what extent, and by which measures, the vortex and the corresponding unsteady flow field can be changed. In this paper the approach to simulating the draft tube flow is as follows: for a given operating point corresponding to part load turbine operation (figure 2), a set of different boundary conditions at the draft tube inlet has been specified. Since the operating point of the machine is fixed, the discharge as well as the swirl at the runner outlet are given for a given head. Therefore, the different boundary conditions are in fact different radial distributions of the through flow as well as of the swirl.
2 Influence of runner design

For a given operating point of the turbine, depending on the actual head as well as on the discharge, the flow at the draft tube inlet is determined by the flow field at the runner outlet. Since the runner (normally) rotates with constant speed according to the frequency of the electric grid, the flow angles β2 relative to the runner blades are nearly independent of the actual discharge, see figure 3.
Fig. 3. Velocity triangles at runner outlet for three different turbine operating conditions (runner blade cascade in conformal mapping)
As a consequence, the velocity c2 in the absolute frame is very sensitive to changes in discharge. Figure 3 shows the turning of the flow vector c2 with increasing discharge, as the turbine operation is moved from part load (dark grey) to full load (light grey). In terms of the circumferential component of the absolute velocity (cu2), the flow downstream of the runner rotates at part load in the same direction as the runner. At the best efficiency point the flow has roughly no swirl (cu2 = 0, black colour), and at full load the flow rotates in the direction opposite to the runner (light grey colour). As indicated in figure 3, the relative flow angle downstream of the runner is similar to the blade angle at the runner trailing edge. However, the two angles are not identical but somewhat different, which constitutes one of the major problems in the runner design process. In addition, the discharge at the runner outlet is a priori not constant over the outlet area but depends on the local turning of the flow inside the runner blade cascade. This is why, even if the power of the turbine is the same, the flow field at the runner outlet can be
Fig. 4. Francis runner design, blade profiles at band
different in terms of through flow as well as swirl over the radius from hub to band. Figure 4 gives an example to demonstrate how complex the shape of the runner blades in a Francis turbine can be. A great number of design parameters, such as the number of blades, blade length, profile thickness distribution, blade curvature, blade angles at leading and trailing edge and so on, influence the flow through the runner. It is obvious that different bladings result in different flow fields. However, for a required turbine power the necessary runner torque is given. This torque must be produced no matter which pressure distribution acts locally on the blade surface and no matter which through flow and swirl distribution is achieved at the runner outlet. Finally, the runner wake can roughly be divided into two regions: the inner tail water region downstream of the hub and the outer through flow region downstream of the blade trailing edges. The shape of the hub influences the tail water region, and the shape of the blades influences the through flow as well as the swirl distribution from hub to band. To take into account the different strategies for designing a turbine runner for the same operating condition, a set of possible boundary conditions at the draft tube inlet has been specified.
3 Computational modelling

3.1 Numerical algorithms

The calculations are carried out using the program FENFLOSS, which has been developed at the institute for more than a decade [4, 5]. The partial differential equations are solved by a Galerkin finite element method. The spatial discretization of the domain is performed with 8-node hexahedral elements. For the velocity components and the turbulence quantities a tri-linear approximation is applied; the pressure is assumed to be constant within each element. For advection-dominated flow a Petrov-Galerkin formulation with skewed upwind orientated weighting functions is applied. The time discretization is done by a three-level fully implicit finite difference approximation of 2nd order.

For the solution of the momentum and continuity equations a segregated solution algorithm is applied. Each momentum equation is solved independently. The momentum equations are linearized by a Picard iteration. The linear systems of equations are solved by the BICGSTAB2 algorithm of van der Vorst [6] with an incomplete LU decomposition (ILU) for preconditioning. The pressure is treated by a modified Uzawa-type pressure correction scheme [5, 7]. The pressure correction is carried out in a local iteration loop without reassembling the system matrices until the continuity error is reduced by a given order (usually 6-10 iterations are needed). After the solution of the momentum and continuity equations the turbulence quantities are calculated and a new turbulent viscosity is obtained. The turbulence equations (e.g. the k- and ε-equations) are also linearized by successive substitution, and the linear systems are likewise solved by the BICGSTAB2 algorithm with ILU preconditioning. The whole procedure is carried out in a global iteration until convergence is obtained. For unsteady simulations the global iteration has to be carried out in each time step.

The parallelization of the code is achieved by domain decomposition using overlapping grids. The linear equation solver BICGSTAB2 runs in parallel, and the data exchange between the domains is organized on the level of the matrix-vector multiplication in the BICGSTAB2 solver. The preconditioning is carried out locally on each domain. The data exchange is organized using MPI (Message Passing Interface) on machines with distributed memory. On shared-memory computers the code is also parallelized using OpenMP.
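The structure of one global iteration can be summarized in pseudocode form. The following Python sketch is only schematic: the assembly routines are placeholders, and SciPy's bicgstab/spilu stand in for the BICGSTAB2/ILU combination actually implemented in FENFLOSS.

import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

def solve_with_ilu(A, b):
    # Krylov solve with incomplete LU preconditioning (stand-in for BICGSTAB2)
    ilu = spilu(A.tocsc())
    M = LinearOperator(A.shape, ilu.solve)
    x, info = bicgstab(A, b, M=M)
    return x

def global_iteration(state, assemble_momentum, assemble_pressure,
                     update_turbulence, n_pressure_loops=8):
    # 1) momentum equations, solved independently for each component;
    #    Picard linearization: matrices use the previous velocity iterate
    for comp in ("u", "v", "w"):
        A, b = assemble_momentum(state, comp)
        state[comp] = solve_with_ilu(A, b)
    # 2) Uzawa-type pressure correction in a local loop that reduces the
    #    continuity error without reassembling the momentum matrices
    for _ in range(n_pressure_loops):
        C, rhs = assemble_pressure(state)
        state["p"] = state["p"] + solve_with_ilu(C, rhs)
    # 3) turbulence quantities and new turbulent viscosity
    update_turbulence(state)
    return state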
3.2 Turbulence modelling

The simulation of unsteady vortex motion requires quite sophisticated turbulence models. If the "wrong" models are applied, the vortices are severely damped and their motion becomes unpredictable. A better approach compared to the usually applied Reynolds-averaged Navier-Stokes simulations is a Very Large Eddy Simulation (VLES). Large Eddy Simulation (LES) in the sense of turbulence research requires an enormous computational effort, since all anisotropic turbulence scales have to be resolved in the computation and only the influence of the smallest, isotropic eddies is treated by a turbulence model. Consequently this method cannot be applied to industrial problems today; it requires a much too high computational effort.
Fig. 5. Modelling approach for RANS and LES
Today's calculations of flows of practical relevance (characterized by complex geometry and very high Reynolds numbers) are usually based on the Reynolds-averaged Navier-Stokes (RANS) equations. This means that the influence of the complete turbulence behaviour is expressed by means of an appropriate turbulence model. It is impossible to find a turbulence model which captures a wide range of complex flow effects accurately. Especially for unsteady flow behaviour this approach often leads to rather poor results. The RANS and LES approaches are schematically shown in figure 5, where a typical turbulence spectrum and its division into resolved and modelled parts is presented. The recently established approach of Very Large Eddy Simulation leads to quite promising results, especially for unsteady vortex motion. In contrast to unsteady RANS, the very large turbulent eddies are captured by the unsteady simulation; consequently the applied turbulence model must be able to distinguish between resolved unsteady motion and unresolved turbulent motion, which has to be included in the model. It is
Fig. 6. Turbulence treatment in VLES
similar to LES, but only a minor part of the turbulence spectrum is resolved (shown schematically in figure 6), and therefore it is feasible for industrial flows today. For details the reader is referred to [8, 9]. For comparison, the vortex rope in a straight diffuser is shown in figure 7 for a modified k-ε model (Chen & Kim version [10]) and for VLES. It can be observed that the damping of the vortices is reduced considerably by the VLES approach and the results are in better agreement with measurements and observations in the experiment.
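A common way to realize the required scale separation (one variant among several, and not necessarily the exact formulation used in FENFLOSS, for which the reader is referred to [8, 9]) is to damp the modelled eddy viscosity wherever the turbulent length scale of the k-ε model exceeds the local filter width:

import numpy as np

def vles_eddy_viscosity(k, eps, delta, c_mu=0.09, alpha=4.0 / 3.0):
    # k, eps : modelled turbulent kinetic energy and dissipation rate
    # delta  : local filter width (related to the element size and time step)
    nu_t = c_mu * k**2 / eps           # standard k-epsilon eddy viscosity
    length = k**1.5 / eps              # modelled turbulent length scale
    # eddies larger than delta are assumed to be resolved by the unsteady
    # simulation, so the modelled part is damped accordingly
    f = np.minimum(1.0, (delta / length)**alpha)
    return f * nu_t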
Fig. 7. Vortex rope in a straight diffuser, k-ε Kim-Chen model vs. VLES
4 Computational grid, boundary conditions and evaluation methods

In order to take advantage of the experience from previous investigations, and since detailed measurement data were available, the draft tube geometry described in Ruprecht [11] was used for the further examinations. The new and important work to be done here is the definition of appropriate boundary conditions. Since the impact of the direction of the velocity vectors at the draft tube inlet has to be examined, the choice of these has to be made thoughtfully. Furthermore, the operating point fixed by the measurements has to be maintained. The final step in the preparation of the examination of the draft tube vortex is the definition of the evaluation method and the appraisal criteria.

4.1 Geometrical model and computational grid

The geometry examined in this paper is an elbow draft tube. It consists of a straight intake region with a slightly opening cross section. In the elbow the flow is redirected and finally distributed into three outlet channels. For the numerical analysis the geometry is discretized, which leads to a computational grid consisting of about 190000 grid points and 175000 hexahedral elements, see figure 8. To carry out the computations the grid was distributed onto six processors of a PC cluster.

4.2 Definition of the boundary condition sets

In order to keep the conditions given by the chosen turbine operating point, there are certain constraints the inlet boundary conditions have to meet. The first one is the conservation of the given discharge, which means that

    Q = ∫0^R cz · 2π r dr    (1)

has to be constant. Furthermore, the integral swirl value must not differ from the original operating point. Therefore, the second condition the boundary conditions have to satisfy is

    m = ∫0^R r cu · cz · 2π r dr = const.    (2)

To judge the quality of a chosen velocity distribution the relative errors are used. These values are defined as follows:

    Δq = (Qnew − Qorig) / Qorig · 100.0%,    Δγ = (mnew − morig) / morig · 100.0%.    (3)
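Before a candidate inlet profile is used, it can be checked against the constraints (1)-(3) by simple numerical quadrature. The following sketch uses hypothetical radial profiles cz(r) and cu(r); only the structure of the check matters here.

import numpy as np
from scipy.integrate import trapezoid

def discharge_and_swirl(r, cz, cu):
    # evaluate Q and m of Eqs. (1) and (2) for discrete radial profiles
    Q = trapezoid(cz * 2.0 * np.pi * r, r)
    m = trapezoid(r * cu * cz * 2.0 * np.pi * r, r)
    return Q, m

R = 1.0
r = np.linspace(0.0, R, 200)
# hypothetical reference profiles and a modified candidate set
cz_orig, cu_orig = np.full_like(r, 1.0), 0.5 * r / R
cz_new = 1.2 - 0.4 * r / R                                  # linearly decreasing cz
cu_new = np.where(r < 0.3 * R, 0.55 * r / (0.3 * R), 0.55)  # rigid-body core, constant outside

Q0, m0 = discharge_and_swirl(r, cz_orig, cu_orig)
Q1, m1 = discharge_and_swirl(r, cz_new, cu_new)
print("dq     = %.3f %%" % ((Q1 - Q0) / Q0 * 100.0))   # Eq. (3)
print("dgamma = %.3f %%" % ((m1 - m0) / m0 * 100.0))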
Fig. 8. Computational grid (surface grid), approx. 175000 elements
At the outlet boundaries a constant pressure of p = 0 Pa is prescribed. The reference inlet boundary condition was obtained from a numerical flow computation in a Francis runner. As already shown in [11], a vortex is formed at this point of operation.

Boundary condition set 1 (cu cz1)

This first generic boundary condition set models a high transport component cz at small radii, decreasing linearly with increasing radius. The backflow region in the centre is equivalent to the original one. Assuming a rigid-body-like swirl distribution at the inner part and a constant cu at the middle and outer part yields the r·cu distribution shown in figure 9. The relative errors are Δq = 0.001% and Δγ = 0.047%.

Boundary condition set 2 (cu cz2)

In order to obtain comparable results, only one quantity of the boundary conditions should be changed at a time. So this set models the same swirl distribution as described above (cu cz1), but with a transport velocity component that increases with increasing radius, see figure 10. The relative errors are Δq = 0.01% and Δγ = −0.023%.
Fig. 9. Boundary condition set 1 (cu cz1 )
Fig. 10. Boundary condition set 2 (cu cz2 )
Boundary condition sets 3 & 4

These sets combine the cz distributions of sets 2 and 1 with a cu curve decreasing from the inner to the outer radii. This yields the declining swirl curve shown in figure 11. The relative errors are again very low: Δq = 0.01% and Δγ = −0.049%, and Δq = 0.001% and Δγ = 0.013%, respectively.

4.3 Appraisal factors

The two main issues coming with the draft tube vortex are the pressure fluctuations exciting the whole hydraulic system and the pressure amplitudes. To obtain comparable values for all test cases, some characteristic numbers reflecting the behaviour under the given conditions have to be defined. The first is the frequency of the pressure pulsations at a certain point. The second is the mean pressure amplitude, i.e. the mean peak-to-peak value. The curves in figure 12 give an impression of how the amplitude, which is the distance between the two lines (p_max/min,mean), is obtained. The peak
Fig. 11. Swirl distribution for sets 3 & 4
value curves (p_max/min) are integrated in the time domain, and the mean value is then obtained from the integral value divided by the total time period. This is, admittedly, an estimate, but it shows the right trend between the individual cases.
Fig. 12. Determination of the mean pressure amplitude
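The two appraisal factors can be extracted from a computed pressure signal as sketched below. The signal used here is synthetic, and the envelope construction via local extrema is only one possible estimate of the mean peak-to-peak value illustrated in figure 12.

import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import argrelextrema

def frequency_and_mean_amplitude(t, p, order=5):
    # dominant frequency from an FFT of the (equally spaced) signal
    dt = t[1] - t[0]
    spectrum = np.abs(np.fft.rfft(p - p.mean()))
    freqs = np.fft.rfftfreq(p.size, dt)
    f_dominant = freqs[np.argmax(spectrum)]
    # upper and lower envelopes from local extrema, interpolated onto t
    imax = argrelextrema(p, np.greater, order=order)[0]
    imin = argrelextrema(p, np.less, order=order)[0]
    p_up = np.interp(t, t[imax], p[imax])
    p_lo = np.interp(t, t[imin], p[imin])
    # time-averaged envelopes; their difference is the mean peak-to-peak value
    T = t[-1] - t[0]
    amplitude = (trapezoid(p_up, t) - trapezoid(p_lo, t)) / T
    return f_dominant, amplitude

# synthetic test signal: 4 Hz oscillation with noise, 3 s long
t = np.linspace(0.0, 3.0, 3000)
p = 20000.0 * np.sin(2.0 * np.pi * 4.0 * t) + 2000.0 * np.random.randn(t.size)
print(frequency_and_mean_amplitude(t, p))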
5 Simulation results

5.1 Geometrical model and computational grid

In order to demonstrate the accuracy of the numerical simulation, a comparison with measurement data according to [11] is shown for the original boundary condition. In figure 14 the measured and computed pressure fluctuations are given for point no. 3 at the draft tube cone, figure 13. The FFT analysis carried out for both the measured and the computed pressure signals shows quite good agreement in terms of frequency as well as amplitude
Fig. 13. Positions of pressure measurement probes
of the signals, figure 14. Therefore the same set-up was chosen for the flow analysis here, using the specified sets of swirl distributions.

5.2 Influence of the boundary condition

In figure 15 two vortices are visualized using a constant pressure surface. It can be seen that the rope produced with the cu cz1 boundary condition (light) is shorter and more slender than the original one (dark). This is in contrast to the cu cz2 case, where the rope is thicker and longer than the original one. With a high discharge near the hub the rope has less room to develop, and vice versa, which explains the phenomenon described above. A remarkable observation made here is the correlation between frequency and amplitude. The values for point 3 (CH3) given in figure 16 show that the pressure fluctuations increase with decreasing frequency and vice versa. Since the rope hits the wall next to point 4 (CH4), the amplitude there is higher than at the upper point 3; both frequencies are identical. Another aspect is that there seems to be no major influence of the swirl distribution on the values analyzed here. The results presented indicate that first of all the transport velocity has an impact on the frequency and amplitude of the pressure pulsations. Furthermore, it turns out that the time-averaged pressure recovery coefficient of the draft tube increases with increasing frequency, figure 17 vs. figure 16. Here the amplitude of the pressure fluctuations comes into play: since a higher discharge at the hub yields a longer and more slender rope, the blocking of the cross section decreases. Hence the losses for these cases are lower and the recovery coefficient is higher.
6 Further investigations and outlook

Since the effects found above are hard to explain in detail for this complex example, further examinations on a simpler geometry could be useful. Several influences on the flow have to be taken into account: first, the velocity
Fig. 14. Measured and computed pressure signals (above), FFT-analysis (below)
distribution at the draft tube inlet; second, the conical shape of the draft tube; third, the elbow itself; and fourth, the turbulence modelling. To understand the basic mechanism it would be useful to separate all these effects as far as possible, to start with simplified geometries, and then to increase the complexity of the problem successively. Moreover, the consistency of the boundary condition sets has to be studied more deeply; this can be accomplished with the theoretical approach of Resiga [12]. Finally, instead of using artificial boundary conditions, an actual runner design should be used to determine more realistic boundary conditions for the draft tube inlet.
Fig. 15. Iso pressure surfaces obtained by the simulation of the vortex rope for (above) original (dark) and modified (light) cu cz1 boundary condition and original (dark) and modified (light) cu cz2 boundary condition
Fig. 16. Rope frequencies and mean pressure amplitudes
Fig. 17. Pressure recovery coefficients (time averaged) for all bc-sets [cp = (pin − pout)/(v²in/2)]
References
1. Ruprecht A, Helmrich Th, Aschenbrenner Th, Scherer Th (2001) Simulation of pressure surge in a hydro power plant caused by an elbow draft tube. In: Proc. of the IAHR WG 1 Symposium The Behaviour of Hydraulic Machinery under Steady Oscillatory Conditions, Trondheim
2. Helmrich Th, Ruprecht A (2001) Simulation of unsteady vortex rope in turbine draft tubes. In: Proc. of the Hydroturbo 2001, Podbanske, Slovak Republic
3. Ruprecht A, Helmrich Th, Aschenbrenner T, Scherer T (2002) Simulation of vortex rope in a turbine draft tube. In: Proc. of the 21st IAHR Symposium on Hydraulic Machinery and Systems, Lausanne
4. Ruprecht A (1989) Finite Elemente zur Berechnung dreidimensionaler turbulenter Strömungen in komplexen Geometrien. Doctorate Thesis, University of Stuttgart
5. Ruprecht A (2003) Numerische Strömungssimulation am Beispiel hydraulischer Strömungsmaschinen. Habilitationsschrift, Universität Stuttgart
6. Van der Vorst HA (1994) Recent developments in hybrid CG methods. In: Proc. of the High Performance Computing & Networking, München
7. Zienkiewicz OC, Vilotte JP, Toyoshima S, Nakazawa S (1985) Comp Meth Appl Mech Eng 51:3-29
8. Ruprecht A, Helmrich Th, Buntic I (2003) Very large eddy simulation for the prediction of unsteady vortex motion. In: Proc. of the Conference on Modeling Fluid Flow, Budapest
9. Helmrich Th, Buntic I, Ruprecht A (2002) Very Large Eddy Simulation for flow in hydraulic turbo machinery. In: Proc. of the Classics and Fashion in Fluid Mechanics, Belgrade
10. Kim SW, Chen CP (1989) Numer Heat Transfer 16(B):193-221
11. Ruprecht A (2002) Unsteady flow simulation in hydraulic machinery. In: IAHR, Task Quarterly 6 No 1, 187-208
12. Susan-Resiga R (2004) Swirling flow downstream a Francis turbine runner. In: Proc. of the 6th International Conference on Hydraulic Machinery and Hydrodynamics, Timisoara, Romania
Computational infrastructure for parallel processing spatially distributed data

I.V. Bychkov 1, A.D. Kitov 2, and E.A. Cherkashin 1

1 Institute for System Dynamics and Control Theories SB RAS, Lermontov str. 134, 664033 Irkutsk, Russia [email protected]
2 Institute of Geography SB RAS, Ulan-Batorskaya str. 1, 664033 Irkutsk, Russia [email protected]
Summary. The construction of a GRID environment for scientific GIS problems is considered. The environment is based on parallel computing, centralized warehousing, and decentralized resource control. Some examples of parallel computing are also shown.
1 Introduction

The fundamental problem of constructing an infrastructure for the sequential and parallel execution of computational modules arises from improved computer networks, huge amounts of aggregated data (including cartographic data), and the requirement to increase the level of integration and intelligence of spatially distributed data processing. The infrastructure is supported by corresponding information stored in shared warehouses of data and knowledge. One possible and currently popular way to address the distributed data processing problem is the construction of a virtual environment for interdisciplinary cooperation in the field of high performance information and computation systems and networks, devoted to the solution of scientific, industrial and educational tasks. The term "GRID" was introduced by the collaborating research community, but with respect to the problem considered here and to GIS-analysis tasks the term has several interpretations. On the one hand, "GRID" denotes a GRID technology supplying distributed computation (in some publications the abbreviation "GRID" denotes Global Resource Information Distribution); on the other hand, "GRID" denotes an informational field (e.g., a matrix) representing cell data structures [1]. We also note that a similar term "GRID" exists in the field of computer programming, where it denotes techniques for representing and processing digital spreadsheets. Thus, in the context of the first interpretation, GRID technologies are considered as methods and means which supply the user with computational hardware and software distributed in a corporate information network
framework (e.g., an Intranet). The framework also supplies various program modules and databases for a defined class of tasks. One of the main tasks of GRID technologies is to supply the computing collaboration with new means of cooperation and of constructing virtual alliances aimed at the desired results. This is achieved by distributed resource integration and administration, resulting in a metacomputing environment of temporarily free resources of networked personal computers, supercomputers and servers. Such consolidation allows the load of the processing components to be increased from 20% to 90%. In addition to the networked computational resources, new effective parallel programming methodologies accounting for the distribution of the information resources play a great role.

The second interpretation of the term "GRID" is connected with the geoinformational notion of a GRID structure for the representation of territory data. First of all, GRID data can be remote sensing results of the Earth, represented as digital images, and digital cartographic material obtained from measurement series on a previously defined point set or from computer simulation. At present this direction is developed within the international GRID program "The GLOBE project" [2] on the basis of the cartographic database GTOPO30 (cell size 30 angular seconds), which corresponds to a topographic map of 1:1000000 scale, and of high resolution (1 km) data from satellites of the NOAA series. An analogous way of development was chosen in the "Global Geochemistry Database" project, which is based on the Global Geochemistry Reference Network (GGRN) with a cell size of 166x166 km [3]. Methods of intelligent data processing are also under development, where the pyramidal representation of raster data is used as a prototype [4]. The usage of vector GIS data, as simplified model structures, does not contradict such a representation of data structures. This allows us to speak of an invariant information representation in databases of spatially distributed data.

This set of means allows us to solve applied problems, most of which are connected with spatial and temporal relations of the Earth's phenomena, and also to minimize the expenses of constructing a high performance computation environment by aggregating computer software (realizing alternative and supplementary methods for the tasks) within a corporate network. The GRID technology of intellectualization of end user applications allows us to solve the fundamental problem of the accessibility of distributed programming systems to the user. The proper usage of such programs is usually possible only for the author, or in teamwork with the author's close colleagues as consultants, or within a specially organized interdisciplinary collaboration of experts. The solution of this problem is the application of the following software, programming methods and technologies, most of which have been developed at the Institute for System Dynamics and Control Theory SB RAS [5]-[6]:

1. Software realizing the processes of aggregation, debugging, usage, modification, complication and visualization of packet knowledge, and also realizing
the processes of scientific problem definition, planning, and implementation and execution of problem solution plans.
2. Algorithms for the automatic synthesis of problem solution schemes, using systems of sparse Boolean equations as a model of the problem area; these algorithms are the basic means for organizing distributed computation over the Internet.
3. A logical language of positively constructed formulae, its calculi, and special strategies on the calculi for inference search control (on the basis of the QUANT/2 software, used for the intellectualization of informational-controlling systems); logic-heuristic methods of network planning, applicable to schedule planning problems.
4. The language FlexT, which has no proper analogs today, for the description of digital data formats and specifications; the language and its implementation software are a method for describing the structure of data generated by various programming systems (about 50 known formats have been described, including graphical vector and raster data, e.g., GIS data, executable file formats and programming libraries); at present the FlexT system makes it possible to formalize the interpretation specifications and to translate digital data into a human-readable form together with the descriptions, making the data usage transparent, i.e., the technology supplies robust information interchange between various programming systems without the necessity of converting the data to an inefficient or universal interchange format (e.g., XML or WMF).
5. An information system supplying database access in Internet/Intranet environments, with an incorporated GIS system where necessary; our aggregated experience allows one to construct fairly universal programming technologies for the realization of such systems, using database structure metainformation.
6. Instruments for cartographic data conversion and electronic map publication within the Internet/Intranet; our aggregated experience allows one to construct a technology for spatial data warehousing; software for the structural analysis of raster and vector cartographic data is being constructed; an application analysis task, the recognition of a city road network structure, has been solved on a vector map; the constructed system incorporates a Prolog interpreter for processing the data on the basis of formalized knowledge and for fine-tuning the system to particular properties of the objects under recognition; the software is to be developed further into a universal programming toolset for graphical vector data processing.

To develop the GRID technologies and to create the software for geoinformational research on their basis, the development of software intellectualization is also necessary, as well as means of specification and technologies of user support for various classes of information and computation resources. Databases, GIS applications, module libraries, knowledge bases, metadata, multiagent technologies, agent action planning and logical inference belong to these resources.
Fig. 1. A graph structure of the two main directions of parallel processing in GIS. Q is an original cartographic object; Qj are object details (fragments, layers, subobjects); Qs is the desired synthesized object; P is a process (scheme, algorithm, operator) of object processing; PJi are the branches of the parallel process; Ps is the main result of the construction process from the elements processed in parallel. Only one level of parallel processing is shown, which can be a network or a hierarchy. Running the process P and the subprocesses PJi, the object Q is decomposed and its elements Qj are processed individually
The main advantage of the invariant GRID structures is that they allow parallel processing (fig. 1). For example, in the pixel-by-pixel conversion of a raster image the same operation is performed on each element in parallel (a minimal sketch of such processing is given at the end of this section). Other examples of fragmentation and parallel processing are image segmentation and contour tracing. One of the efficient recognition methods, the committee and voting method, can also be executed in parallel: the original image is processed by a recognition method on an allocated processor, and the results of the analysis are transferred to the decision-making processor, where the decision is made by a voting method. The development of parallel data processing in the field of GIS technologies is worthwhile in the following cases:

1) Hierarchical synthesis and data analysis. For example, in the multispectral analysis of geoimages and in the processing of a dataset characterizing the investigated territory, the decomposition of the dataset into subgroups and the processing inside the subgroups can be done in parallel (fig. 2a).

2) Creation of a topology map, e.g., a landscape-assessment map of a territory, according to defined criteria (fig. 2b) such as level, aspect and slope, density of distribution, the brightness of an image pixel, the location of a vector object
Fig. 2. Examples of parallel processing in GIS-data analysis (description is in the main text)
on a digital topology map within the defined space, and so on. In these tasks a set of parameters is defined, and the recognition procedures (e.g., classification and segmentation) are applied in parallel to the image elements of the original dataset.

3) Variant formation and the choice of the (in some sense) optimal decision from vector topological spatially distributed data, e.g., road networks. Each of the path variants is analyzed simultaneously, and the results are compared. The parallel layered analysis with the subsequent synthesis of the required variant of a thematic map also belongs to this case.

4) Map renewal: for attributive data the renewal is a modification of a property, for graphical objects it is a change of the location (fig. 2c). The simultaneous renewal of the syntactic and semantic attributes (e.g., the coordinate
shift of a location or the recalculation of coordinates upon a map projection change) can be done in parallel.

5) Data visualization: the composition of an image from vector segments and raster elements. The image representation on the computer display is carried out with a raster image. Zooming (as pixel generation) can be done in parallel: the "zoom out" operation constructs a pixel from a set of neighboring pixels, and the "zoom in" operation constructs a set of neighboring pixels from one original pixel (fig. 2d). The averaging operation over an image fragment can also be done in parallel; in this case the processing time is reduced by the number of fragments. The parallel construction of the pyramidal layers of raster data is also efficient.

6) GRID analysis: data are processed taking into account the defined reference network criteria, such as the estimation of distances from an object to the network nodes (a subtask of object range recognition), the recognition of common properties of objects, spatial neighborhood analysis and so on.

7) Parallel analysis of data attributes, e.g., the pattern search for chosen independent attributes or the calculation of the average values of all attributes of an object class.

Within the considered conceptual framework, the implementation of a parallel computation infrastructure for spatially distributed data processing implies the following plan:

1) Conceptual methods for the planning and control of decentralized distributed resources and network connections should be created on the basis of a system of interconnected intelligent agents, Boolean models and efficient logic-heuristic algorithms for their solution. An informational model of the distributed computation network, aimed at the optimization of its parameter values according to a defined fault-tolerance criterion, is to be realized.

2) On the basis of the present state of development of the FlexT software, high-level means for describing memory allocation techniques in digital data should be constructed, resulting in a significantly improved precision of file format description specifications and allowing the implementation of an automatic synthesizer of file-reading modules for various programming languages.

3) A universal database access scheme should be realized, allowing access via Internet/Intranet networks. For this purpose, database structure description tools are to be designed for constructing the user interface of a query subsystem and a translator of the user queries into some universal query language.

4) A technology for the integration of various spatial data on the basis of a spatial data warehouse, algorithms of comparison and automatic vector map generalization, and a system for the automatic vectorization of raster maps using logical methods of map object property description can be constructed. As the aggregated spatial GIS data and the remote sensing data have huge volumes, the global cartographic space and databases should be provided with a "modification transfer" feature between the global
Fig. 3. A component scheme for distributed task solution
database environment and the end-user applications. That is, methods for the transfer and integration of the modifications on the basis of invariant data formats should be devised.

5) For the GRID system construction, a supercomputer system will be incorporated into the Irkutsk Scientific Educational Complex network.

The component scheme of the GRID technologies for task solution in GIS research, including the hardware, software and infoware subsystems, is illustrated in fig. 3. A traditional way of data transfer is the usage of converter software and universal exchange data formats. In this case it is usually impossible to preserve all peculiarities of the original information, especially semantic ones. This problem is solved with the FlexT language and its software, which allow one to get rid of the universal exchange formats. On the other hand, the problem is solved by aggregating data and knowledge in a warehouse with an invariant structure, as well as by representing the information with data structures which carry associated information on the possible transformations, and by a data subset for reducing the volume of transferred and visualized information (fig. 4). To reduce the traffic it is advisable to keep copies of the data in the basic network nodes; the transfer of these data does not require high operativity. An example of such data is a set of thematic and topographic digital maps or the coverage of a territory by a series of
Fig. 4. A scheme of data transition and conversion variants
space images. After special information processing by a companion (a node operator), the data are sent as a graphic layer of modifications to the central warehouse or to another companion of the same GIS project. The proposed methods and means are to be used in large-scale interdisciplinary research projects on the basis of the unique original software and information resources of the Irkutsk Scientific Center SB RAS and of a distributed computation environment for the solution of large geospatial problems.
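As an illustration of the pixel-by-pixel parallel processing mentioned above, the following sketch splits a raster into row blocks and converts them on several worker processes. The conversion rule (a simple brightness threshold) is only a placeholder for a real classification or recoding operation.

import numpy as np
from multiprocessing import Pool

def convert_block(block):
    # placeholder pixel-by-pixel rule: brightness threshold classification
    return (block > 128).astype(np.uint8)

def parallel_pixel_conversion(raster, n_workers=4):
    # split the raster into row blocks and process the blocks in parallel
    blocks = np.array_split(raster, n_workers, axis=0)
    with Pool(n_workers) as pool:
        converted = pool.map(convert_block, blocks)
    return np.vstack(converted)

if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(4000, 4000), dtype=np.uint8)
    classified = parallel_pixel_conversion(image)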
References
1. Kitov AD (2000) Computer analysis and synthesis of geoimages. Novosibirsk, SB RAS Publishing (in Russian)
2. Darnley AG, Bjorklund A, Bolviken B, Gustavsson N, Koval PV, Plant JA, Steenfelt A, Tauchid M, Xuejing Xie (1996) A Global Geochemical Database for environmental and resource management. Recommendations for International Geochemical Mapping. Final Report of IGCP Project 259. France, UNESCO Publishing
3. Delbaere B, Gulinck H (1995) European collaborative program. Remote sensing in landscape ecological mapping. Luxembourg, Office for Official Publications of the European Communities, IV
4. Berlandt AM, Tikunov VS (eds) (1994) Cartography. Geoinformational Systems. Moscow, KartGeoCenter-Geoizdat, 4:350 (in Russian)
5. Bychkov IV, Khmel'nov AE, Kitov AD (2005) Comp Techn 10(2): 38–44 (in Russian)
6. Bychkov IV, Fedorov RK, Khmel'nov AE (2005) Comp Techn 10(12): 116–130 (in Russian)
Particle methods in powder technology

B. Henrich 1,2, M. Moseler 2,1, and H. Riedel 2

1 Freiburg Materials Research Center, Stefan-Meier-Str. 21, 79104 Freiburg i. Br., Germany [email protected]
2 Fraunhofer-Institute for Mechanics of Materials, Wöhlerstr. 11, 79108 Freiburg i. Br., Germany [email protected]
Summary. The feasibility of particle based simulation methods is shown for powder technological applications like compaction, sintering and the filling of a dispersed powder. In contrast to continuum methods, this approach automatically takes into account the rearrangement of the grains and predicts the structural composition. This allows for a comparison with analytical results in the cases of powder compaction and sintering, giving new insights into the dynamics of granular materials.
1 Introduction

Powder technology (PT) is an important branch of materials technology with a wide range of applications. Because of the inherent brittleness of ceramics, knowledge of all processes which lead to structural inhomogeneities and microcracks is of great importance. Simulations can give additional insight into these processes and can help to design desired properties. For the filling of dies, continuum methods fail, and particle based solutions have been reported for simple 2D geometries only [1]. On the other hand, compaction and sintering can be treated by continuum models. Obviously, these methods suffer from a lack of mesoscopic information; predictions concerning the composition are not possible. Particle methods can bridge this gap. The basic constituents of this approach are the grains, and with appropriate pair forces all of the above mentioned processes can be simulated in the framework of Molecular Dynamics (MD), see e.g. [2]. This article is organized in the following way. In the second section we give an overview of the basic method used in this work. The following sections then describe the results of our simulations for powder compaction (section 3), sintering (section 4) and filling (section 5). Our first simulations, as detailed in sections 3, 4 and 5, represent examples of the feasibility of our approach. Note that systems with two orders of magnitude more particles can be studied using massively parallel techniques.
2 The Molecular Dynamics method

In MD we consider a set of interacting particles (i.e. grains in our context of PT) described by their position vector ri, velocity vi and mass m. The time evolution is governed by Newton's equations of motion

    d ri / dt = vi,    m d vi / dt = Fi,    (1)

where Fi denotes the force acting on the i-th grain. For PT, the forces are of the form

    Fi = fi^ext + Σ_{j≠i} f_ij^int.    (2)

Here, f^int describes the grain-grain interaction, and the sum extends over all other particles within a certain interaction range; f^ext represents geometrical boundaries and other external force fields. For the propagation we use the velocity Verlet algorithm [2]

    ri(t + δt) = ri(t) + δt vi(t) + (δt)² Fi(t) / (2m),    (3)

    vi(t + δt) = vi(t) + δt [Fi(t) + Fi(t + δt)] / (2m),    (4)

where δt is a small time increment.
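A direct transcription of the propagation scheme (3)-(4) is given below. The force evaluation is kept as a callable, because the pair laws differ between compaction (section 3), sintering (section 4) and filling (section 5); the sketch assumes equal particle masses.

import numpy as np

def velocity_verlet_step(r, v, forces, m, dt):
    # r, v   : (N, 3) arrays of grain positions and velocities
    # forces : callable returning the (N, 3) array of total forces F_i(r)
    # m, dt  : grain mass and time increment
    F_old = forces(r)
    r_new = r + dt * v + 0.5 * dt**2 * F_old / m        # Eq. (3)
    F_new = forces(r_new)
    v_new = v + 0.5 * dt * (F_old + F_new) / m          # Eq. (4)
    return r_new, v_new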
3 Powder compaction

Filling and powder die compaction are the basic powder technological shaping processes. The powder is filled into a die and then compressed by one or several punches. The deformation of the grains during the compaction process is taken into account by a history dependent interaction between two grains. If their distance is minimal at a certain time, the two grains repel each other with the force

    F(r) = π (2 + π)/√3 · σy R (2R − r),    (5)

where R is the radius of the equal sized grains, σy is the yield stress and r is the distance between the two grains. Otherwise the force is given by

    F(r) = −(X/2) σy R r + (X − C) σy R²,    (6)

where C = (X/2 − π(2 + π)/√3)(2R − rmin)/R, with X = I + J(2R − rmin)/(2R) and I = 28.654, J = 3010.2, if this expression is positive. These force laws are derived from elasto-plastic considerations, and the specific numbers are fitted to finite-element simulations of a grain pair. Details can be found in [3, 4].
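The loading/unloading branches (5)-(6) can be wrapped in a single function, as sketched below. The condition "if this expression is positive" is interpreted here as clipping the unloading force at zero; this reading is an assumption.

import numpy as np

def compaction_contact_force(r, r_min, R, sigma_y, I=28.654, J=3010.2):
    # r       : current centre distance of the two grains
    # r_min   : smallest distance reached so far (loading history)
    # R       : grain radius, sigma_y : yield stress
    if r <= r_min:
        # contact at its smallest distance so far: plastic loading, Eq. (5)
        return np.pi * (2.0 + np.pi) / np.sqrt(3.0) * sigma_y * R * (2.0 * R - r)
    # unloading / reloading branch, Eq. (6)
    X = I + J * (2.0 * R - r_min) / (2.0 * R)
    C = (X / 2.0 - np.pi * (2.0 + np.pi) / np.sqrt(3.0)) * (2.0 * R - r_min) / R
    F = -X / 2.0 * sigma_y * R * r + (X - C) * sigma_y * R**2
    return max(F, 0.0)  # assumed: force only acts while this expression is positive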
The setup of our particle based simulation is as follows. About 38000 particles are placed in a cylindrical die with an initial relative density of 45%, and compressed by the upper punch in the z direction. The yield stress of the material is chosen as σy = 370 MPa. The results are compared with the analytical solution of Storakers et al., which is based on the Prandtl solution and the Taylor-Bishop-Hill approximation [5]:

    σαβ^St = (D0 Z / 5) · (2 + π)/(2√3) · σy (εαβ + (1/2) δαβ εkk),    (7)

where D0 is the initial density and Z the coordination number. For uniaxial die compaction, the only nonzero strain component is εzz, and the resulting stress components are σzz and σrr. This is compared with the results obtained by the simulation, where the macroscopic stress tensor is given by the expression

    σαβ = (1/V) Σ_i Σ_{j>i} f_ij,α^int r_ij,β    (8)

and the logarithmic strain measure

    εz = |ln(Lz / Lz0)|    (9)
for the axial strain component is used. Lz is the length of the powder in the z direction and Lz0 denotes its initial value. The additional indices α and β denote Cartesian components. Figure 1 shows the results of the particle dynamics simulation compared with Eq. (7). Over a rather large range of strain the macroscopic stress does not grow substantially. The reason is that the strain can be accommodated by particle rearrangements up to relative densities corresponding to a random dense particle packing (around 64%; the initial density in the present computation is 45%). Only after that point do the particles need to be compressed for continuing densification. Rearrangement is facilitated in the present analysis, since inter-particle friction is neglected. To compare with the analytical model, the straight lines obtained from Eq. (7) are shifted to the point where the rearrangement phase ends and the stress starts to rise. The coordination number is chosen as Z = 7.93 and the initial relative density as D0 = 0.64; here Z is the average coordination number obtained by the simulation. As Figure 1 shows, there are substantial differences between the analytical model and the simulation. In particular the ratio of radial to axial stress, which is 1/3 in the analytical model, is much greater in the numerical model,
Fig. 1. Stress-strain curves of the powder according to [5] (dashed lines) and particle dynamics (solid lines)
at around 0.76. Experimental values range from 0.4 to 0.7. Only the numerical model reproduces the slope of the stress-strain curves, which is typical for real powders. Figure 2 displays four snapshots of the process. The colours of the particles correspond to the coordination number. Picture a) shows the initial distribution, b) an intermediate state where the densification is achieved by rearrangements of the grains, c) a snapshot after half the simulation time, where the coordination number increases, and d) the powder at the end of the simulation. At this stage there is hardly any grain rearrangement. Densification is achieved by reducing the interparticle distance, while the coordination number increases. Since friction was not taken into account, the density is homogeneous at the end of the compaction.
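For completeness, the evaluation of the macroscopic quantities defined in Eqs. (8) and (9) from the contact data of such a simulation can be sketched as follows (the contact arrays are assumed to be available from the neighbour search):

import numpy as np

def macroscopic_stress(pair_forces, pair_vectors, volume):
    # Eq. (8): pair_forces and pair_vectors are (n_contacts, 3) arrays holding
    # f_ij and r_ij for every contact with j > i; volume is the sample volume V
    return np.einsum('ka,kb->ab', pair_forces, pair_vectors) / volume

def axial_log_strain(L_z, L_z0):
    # Eq. (9): logarithmic strain measure of the axial component
    return abs(np.log(L_z / L_z0))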
4 Sintering

Almost all ceramics must be fired, i.e. sintered, at elevated temperatures to produce the desired microstructure. The driving force of sintering is the reduction of the free surface energy of the system. To accomplish this, diffusion leads to a mass transport from the contact zones into the pores, transforming the green body into a denser and more resistant composition [6]. We assume that grain boundary diffusion is the dominant transport mechanism and that the surface of the pores is in equilibrium. Then the force between two grains can be written as [7]

    F = γs (2κ A + L sin ψ) + (π c⁴ k T) / (8 Ω δDb) · u̇n,    (10)
Fig. 2. Four snapshots of the compaction process
where k is the Boltzmann constant, T the temperature, Ω the atomic volume, δDb the grain boundary diffusion coefficient times the grain boundary thickness, A the contact area, L its boundary, u̇n the relative velocity, and κ the curvature at the grain boundary, with the specific surface energy γs and the dihedral angle ψ. The parametrization of the force assumes eight contacts. During the sintering process the masses of the particles should strictly be neglected; in the modelling, however, this would lead to a time-consuming singular value decomposition, so that only small systems could be treated by such an approach. In order to avoid this limitation, a small artificial particle mass is introduced. The mass is chosen in such a way that the dynamics is not altered significantly. For the simulation, we use a cubic green body sintered from an initial density of 60.3% to 85.0%. It consists of about 105000 grains. An important property of random packings is the coordination number. Arzt [8] modelled the densification by the concentric growth of spherical particles which remain fixed. This leads to an equation which describes the increase in coordination ZA due to the reduction of particle distances:
Fig. 3. Coordination number Z with respect to the function R(D) = (D/D0)^(1/3), and the postulated equation (11), ZA
Fig. 4. Comparison of the densification rate in the case of the particle model, ρ̇, and the analytical prediction by Riedel, Svoboda and Zipse [7], ρ̇RSZ
    ZA(D) = Z0 + C (R(D) − 1).    (11)
Z0 is the initial coordination number, assumed to be 7.3, C = 15.5 is the slope, and R is a function of the density D: R(D) = (D/D0)^(1/3). In Figure 3 the results of a simulation are compared with Eq. (11), shifted to the initial coordination number 6.07 of the random packing used here. Figure 4 depicts the densification rate Ḋ/D = ρ̇ versus density. We compared our numerical result with the analytical prediction of the constitutive equation by Riedel, Svoboda and Zipse (RSZ) [7],
Fig. 5. Two sections of the sintered green body. a) shows the initial, b) the final composition
    ρ̇RSZ = σs / K,    (12)

where both the sintering stress σs and the bulk modulus K depend on the relative density. The analytical theory is based on the Taylor-Bishop-Hill approximation, stating that the kinematics of each grain is completely determined by the macroscopic strain rate. In a random packing this is only an approximation. Rearrangements of the grains are not taken into account in the analytical model, and consequently one could conjecture that densification rates in amorphous arrangements of the grains are higher. However, our simulations revealed the opposite behaviour, as can be seen in Fig. 4. Further analysis shows that this behaviour results from the rotational motion of the initial grain configuration. Figure 5 shows a section of a green body at the beginning and at the end of the simulation. The thickness of the slice is one grain diameter. Each grain is highlighted by its coordination number Z. Note that the final green body consists of grains with a rather broad distribution of coordination numbers.
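As a small worked example of Eq. (11), the following snippet evaluates the coordination number expected at the final relative density of 85.0%, using the initial coordination number 6.07 of the packing and the slope C = 15.5; the value serves only as an illustration of the formula, not as a simulation result.

def arzt_coordination(D, D0=0.603, Z0=6.07, C=15.5):
    # Eq. (11), shifted to the initial coordination number of the packing
    R = (D / D0) ** (1.0 / 3.0)
    return Z0 + C * (R - 1.0)

print(arzt_coordination(0.850))   # about 7.95 at the final density of 85 %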
5 Filling Dispersed powders are used for the printing of conducting paths on circuit elements. Immersion of powder grains in a solvent may give rise to an attractive force between the grains. We model the interaction between the grains by a repulsive force with a slight attractive tail. With this ansatz the numerous solvent molecules do not have to be described explicitly. Their effect is taken into account via effective interactions between the grains. The presence of the solution changes the granular behaviour of the powder to that of a viscous fluid. In contrast to the last section, a grain is described as a cluster of 13
Fig. 6. a) shows an initial setup of a typical filling experiment, b) the final stage of the dispersed powder
subunits. This leads to a more realistic behaviour because of the additional internal degrees of freedom. We used an empirical expression for the force between the grains in such a cluster as well as for the force between particles belonging to different clusters. Fig. 6 a) displays a simplified setup of a typical filling experiment: a series of five dies (red wells to the left) and a shoe (blue box to the right) containing a dispersed powder (yellow material). The shoe was driven with a constant velocity to the left. Figure 6 b) shows the final state of the dispersed powder. For clarity, the wells and the shoe are removed. Apparently, due to the high viscosity of the fluid, the wells cannot be filled completely, leading to a useless industrial product. Future work will focus on the variation of process
parameters like driving speed, shape of the shoe, viscosity of the dispersed powder etc. in order to optimize the filling behaviour.
6 Acknowledgements We thank O. Coube and T. Kraft for fruitful discussions. Computations were performed on the CEMI cluster of the Freiburg Fraunhofer Institutes EMI/ISE/IWM.
References
1. Cocks ACF, Dihoru L, Lawrence T (2001) A fundamental study of die filling. In: Euro PM 2001 Proceedings. EPMA, Shrewsbury, U.K.
2. Allen MP, Tildesley DJ (1987) Computer Simulation of Liquids. Oxford University Press
3. Coube O (1998) PhD Thesis, Thèse de l'Université Pierre et Marie Curie (Paris 6), Paris
4. Coube O, Henrich B, Moseler M, Riedel H (2004) Modelling and Numerical Simulation of Powder Die Compaction with a Particle Code. In: Proc. of World Congress on Powder Metallurgy 2004, Vol. 5. EPMA, Shrewsbury, U.K.
5. Storakers B, Fleck NA, McMeeking RM (1999) J Mech Phys Solids 47:785–815
6. Rahaman MN (1995) Ceramic Processing and Sintering. Marcel Dekker, Inc.
7. Svoboda J, Riedel H, Zipse H (1994) Acta Metall Mater 42:435–443; Riedel H, Zipse H, Svoboda J (1994) Acta Metall Mater 42:445–452
8. Arzt E (1982) Acta Metall Mater 30:1883–1890
Tangible interfaces for interactive flow simulation
M. Becker and U. Wössner
High Performance Computing Center Stuttgart (HLRS), University of Stuttgart, Allmandring 30a, 70550 Stuttgart, Germany
[email protected], [email protected]
Summary. In this paper we present a set of modules and plugins developed for the COVISE visualization system which allow setting up interactive CFD simulations. Real objects can be moved in a model and thus provide an easy-to-use interface to modify the geometry of the simulation.
1 Introduction To perform complex simulations like CFD, it is typically necessary to include laborious and time-consuming pre- and postprocessing steps. Concerning preprocessing, this begins with grid generation and the definition of boundary conditions such as inlet, wall, symmetry or pressure boundary conditions. Afterwards, the actual simulation is executed. In the next step, the results of the simulation are checked and finally evaluated. In order to obtain successful and meaningful simulation results, a high degree of expertise is necessary, including knowledge of different software packages as well as of the interfaces between them. In our approach we try to simplify and automate the complete process chain from grid generation to visualization by steering everything intuitively from one consistent environment. Thereby, the simulation is running online, which means that it is possible to engage in a running simulation. The user can not only change boundary conditions but also modify the geometry and generate a new computational grid. This new geometry is not specified using traditional input devices like mouse or keyboard; the procedure proposed in this paper is much more vivid and intuitive: a model of the simulated object serves as input device. The user can physically touch parts of the model and move them. Once pleased with the new setup, he can start a simulation by pressing a button. Within seconds, first simulation results are obtained. In an iterative process, many
configurations can be simulated in a very short timeframe and an optimized setup can easily be found. This is made possible by a single video camera observing a model equipped with markers. The software analyzes the video stream of the camera and locates markers on objects to track the position and orientation of those objects.
2 Simulation 2.1 COVISE COVISE is a modular visualization system developed at the HLRS. The software uses a data-flow execution model, i.e. the data objects in COVISE flow through a network of modules. This method provides a simple and intuitive way to produce sophisticated graphical applications without any real programming work. The different COVISE modules have input and output ports. Matching ports can be connected by mouse click. These connections represent the data flow between the modules, which each run as a separate process and thus can be distributed among multiple computers. The module network is a simple graphical representation of the process executed on the data. At the end of the process chain there is usually a render module that displays the image either on a monitor or in an immersive environment. It is possible to couple COVISE environments for collaborative working. COVISE can easily be extended with new functionality by adding modules, plugins and new datatypes. COVISE can be used not only for off-line postprocessing and visualization but also as a general distributed and collaborative integration platform. This allows, for example, to build a simulation steering system which makes it possible to engage in running simulations and change boundary conditions and geometry. 2.2 Grid generation As we want to use FENFLOSS [1] as solver, we need to generate an unstructured grid consisting of hexahedral elements. In a first step, the extent of the grid has to be determined. In a cuboid area, a simple, equidistant uniform grid is generated. So far, it is possible to generate a grid around several hexahedral objects. The grid generation algorithm uses a subtractive approach, that means that we eliminate nodes and elements from the surrounding uniform grid. These are the nodes and cells lying inside the cuboid objects, which are thus not needed for the simulation. As we want the grid to be attached to the objects' surfaces, we subsequently need to shift the nodes adjacent to these surfaces.
As we do not capture the objects directly but rather track their position and orientation using markers attached to them, the geometry of the objects has to be defined beforehand. Objects can so far be defined by a combination of cuboid subobjects within the grid generation module. The focus of our approach is on showing the feasibility of using tangible interfaces as an intuitive input device for simulations. Since obtaining accurate results is less relevant at the moment, we use relatively simple and rudimentary grid generation methods without any boundary layers and local refinements. In Fig. 1 the wall polygons of a typical grid are shown.
Fig. 1. Wall polygons of grid with boundary lines
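The subtractive strategy can be summarised in a few lines of code. The following C fragment is only an illustration of the idea and not the actual SC2004Booth module: it flags the nodes of a uniform grid that fall inside any of the cuboid objects, so that cells built from flagged nodes can be discarded; the type and function names are our own.

typedef struct { double xmin, ymin, zmin, xmax, ymax, zmax; } Cuboid;

static int inside(const Cuboid *c, double x, double y, double z)
{
    return x > c->xmin && x < c->xmax &&
           y > c->ymin && y < c->ymax &&
           z > c->zmin && z < c->zmax;
}

/* Mark all nodes of an equidistant grid that lie inside one of the
 * cuboid objects; cells touching only unmarked nodes are kept.
 * nx, ny, nz: number of nodes per direction, h: grid spacing.       */
void mark_interior_nodes(int nx, int ny, int nz, double h,
                         const Cuboid *obj, int nobj, char *remove)
{
    for (int k = 0; k < nz; ++k)
        for (int j = 0; j < ny; ++j)
            for (int i = 0; i < nx; ++i) {
                double x = i * h, y = j * h, z = k * h;
                int hit = 0;
                for (int o = 0; o < nobj && !hit; ++o)
                    hit = inside(&obj[o], x, y, z);
                remove[(k * ny + j) * nx + i] = (char)hit;
            }
    /* In a second pass (not shown) the nodes adjacent to removed ones
     * would be shifted onto the nearest cuboid face so that the grid
     * is attached to the objects' surfaces.                          */
}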
2.3 Domain decomposition The computational grid is divided into several parts for parallel processing. We use METIS [1] (Unstructured Graph Partitioning and Sparse Matrix Ordering System) as a library for the partitioning of the mesh. METIS provides a fast and stable solution for domain decomposition. Besides, it is simple and practical to use. 2.4 Solver The software FENFLOSS is developed at the IHS, the Institute of Fluid Mechanics and Hydraulic Machinery at the University of Stuttgart. It can be used for the simulation of incompressible flows and solves the Reynolds-averaged Navier-Stokes equations on unstructured grids. FENFLOSS can be applied to laminar and turbulent flows. The turbulence models used are turbulent mixing-length models as well as various k-ε models, including nonlinear k-ε models and algebraic Reynolds stress models.
The solver works for 2D or 3D geometries, which can be fixed or rotating, and for either steady-state or unsteady problems. FENFLOSS can also handle moving grids (rotor-stator interactions). FENFLOSS contains methods to calculate free-surface flows. It can be used on massively parallel computer platforms and is optimized for vector processors, e.g. the NEC SX-8. The program employs a segregated solution algorithm using a pressure correction. The parallelization takes place in the solver (BICGstab2 including ILU pre-conditioning). Coupling of fixed and moving grids is accomplished by using integrated dynamic boundary conditions.
3 Visualization The simulation process chain consists of three COVISE modules which have been developed to integrate all the aforementioned processing steps into COVISE: SC2004Booth, the grid and boundary condition generator; DomainDecomposition, which decomposes the grid into multiple domains for parallel simulation; and Fenfloss, the simulation coupling module. The entire COVISE dataflow network is shown in Fig. 2. The simulation itself is a separate process that is coupled with the Fenfloss COVISE module using a socket connection. It sends new data to COVISE after each global iteration. All the other modules in Fig. 2 are used for data analysis and visualization, e.g. the Tracer and CuttingSurface modules.
Fig. 2. Process chain in COVISE
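The coupling between the external solver process and the Fenfloss COVISE module is a plain socket connection over which a fresh result field is pushed after every global iteration. The sketch below illustrates this pattern with standard POSIX sockets; the message layout and the function name are invented for the example and do not reflect the actual COVISE protocol.

#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Connect to the visualization host and stream one array of doubles
 * after each global iteration of a (hypothetical) solver loop.      */
int stream_results(const char *host, int port,
                   const double *field, int n, int niter)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons((unsigned short)port);
    inet_pton(AF_INET, host, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    for (int it = 0; it < niter; ++it) {
        /* ... one global iteration of the flow solver ... */
        int32_t len = (int32_t)n;                  /* simple length header */
        write(fd, &len, sizeof(len));
        write(fd, field, (size_t)n * sizeof(double));
    }
    close(fd);
    return 0;
}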
New data objects have been defined to hand over cell type information and boundary conditions between the modules. Finally, the virtual environment system OpenCOVER has been extended by two plugins: the Fenfloss plugin, which allows modification of solver parameters and boundary conditions, and the TangiblePosition plugin, which acts as an interface between the tangible interface and the grid generation module. Fig. 3 illustrates the used modules, plugins and dataflow in COVISE.
Fig. 3. Used modules, plugins and dataflow in COVISE environment
3.1 Tangible interface Virtual reality poses several important challenges concerning how the system is controlled. It is necessary to find solutions for interaction with the system that do not require any special expert knowledge but are easy and effective to use. Collaborative working should be possible, as well as the simultaneous control of multiple parameters. Results of the simulation are available in a short time period. There are several approaches to fulfil these requirements; one of them is the use of so-called "tangible interfaces", which enable the user to control simulations and visualizations through the manipulation of physical objects. Tangible interfaces [2] can be used as controls for digital parameters, data sets, computing resources, and other digital content. There are many advantages to this approach. First of all, the parameters that can be changed using
the tangible objects are represented in a very clear way. The interaction is simple due to the fact that it is so close to reality. In addition, tangible interfaces can be used in desktop applications as well as in immersive environments. Users can explore the solution collaboratively by sharing multiple screens or immersive environments distributed in a room, a building or anywhere around the globe. Results of the simulation are available in real time. The tangible interface used in our application example is illustrated in Fig. 5. A high-resolution IEEE1394 camera is positioned above a physical model of the simulation domain. Each movable object in that model is equipped with a black-and-white fiducial marker which identifies the object. The camera picture is captured and analyzed by a modified version of ARToolKit [3]. ARToolKit analyzes the picture at around 10 frames per second on a 1.6 GHz Pentium M. It returns the position and orientation, in six degrees of freedom, of all markers which are completely visible. A VRML model of the simulated objects serves as a visual reference in the virtual environment and defines the tangible interface. A new VRML node, ARSensor, acts as an interface between ARToolKit and the VRML model. It takes the position and orientation of one of the markers from ARToolKit and transforms it into the VRML coordinate system. This transformation can then be routed to the transform node of the tracked object, which thus follows the movement of the marker and the physical object, respectively (Fig. 4).
Fig. 4. Videostream from camera in combination with tracked objects in the OpenCOVER VR renderer
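Conceptually, the ARSensor node only has to turn the 3×4 marker pose delivered by the tracking library into the coordinate system of the scene and feed it to the transform node of the tracked object. The small C helper below sketches this composition with plain 4×4 matrices; the fixed camera-to-scene calibration matrix and the function names are assumptions made purely for the illustration.

/* Compose scene_from_marker = scene_from_camera * camera_from_marker.
 * camera_from_marker is the 3x4 pose reported for a fiducial marker;
 * scene_from_camera is a fixed calibration determined once for the
 * physical model. Matrices are row-major 4x4.                        */
static void mat_mul(const double a[4][4], const double b[4][4], double r[4][4])
{
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            r[i][j] = 0.0;
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
        }
}

void marker_to_scene(const double scene_from_camera[4][4],
                     const double pose3x4[3][4],      /* marker pose  */
                     double scene_from_marker[4][4])  /* result       */
{
    double camera_from_marker[4][4] = {
        { pose3x4[0][0], pose3x4[0][1], pose3x4[0][2], pose3x4[0][3] },
        { pose3x4[1][0], pose3x4[1][1], pose3x4[1][2], pose3x4[1][3] },
        { pose3x4[2][0], pose3x4[2][1], pose3x4[2][2], pose3x4[2][3] },
        { 0.0,           0.0,           0.0,           1.0           }
    };
    mat_mul(scene_from_camera, camera_from_marker, scene_from_marker);
    /* The rotation part and the translation column of the result can
     * then be routed to the corresponding fields of the VRML Transform
     * node that represents the tracked object.                        */
}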
By using tangible interfaces, a much more natural mode of operation is possible than with a traditional user interface where it is usually necessary to type in coordinates to move objects. The natural perception of the model in comparison to a simple monitor image is another important advantage.
4 Example and possible applications In our example we show a CFD simulation of the airflow around the HLRS booth at the Supercomputing Conference 2004 (SC2004) (Fig. 5). Before the simulation process chain can be started, it is necessary to build a physical model of the simulated objects. On the one hand, its parts serve as placeholders for the real objects; on the other hand, they are used as steering objects for the positions of their virtual counterparts. The model is observed by a high-resolution IEEE1394 grey-scale camera positioned about 50 cm above it. By moving parts of the model, the virtual counterpart represented by this object moves as well. This setup allows the user to easily rearrange all objects and thus try out various configurations.
Fig. 5. HLRS booth at SC2004 in reality and as a model serving as tangible interface
By pressing a button, the simulation starts and delivers its first results after 10 to 15 seconds. Fig. 6 shows typical simulation results with streamlines and cutting surfaces coloured by pressure. This is just one example of using tangible interfaces in conjunction with online simulations. Other possible application scenarios are the layout of cleanrooms, where a laminar airflow is required, or the arrangement of supercomputer racks in a computing room, where it is important to design and dimension the air conditioning. One could also use tangible interfaces with a coupled airflow simulation in the design process of new buildings or town districts.
5 Conclusion We have designed and developed an interface for interactive geometry modification of an online simulation. The interface is decoupled from the actual simulation; the movement of objects does not influence the simulation before
Fig. 6. CFD simulation results
you press a button to restart it. One of the most important goals of this system is to make simulation tools more accessible for people who have little or no expert knowledge in simulation software, but are interested in finding optimized solutions for their problems. The goal is to solve flow problems without having profound knowledge in special software and fluid mechanics.
References 1. Karypis G, Kumar V (1995) Metis - unstructured graph partitioning and sparse matrix ordering system. Technical report, Department of Computer Science, University of Minnesota, Minneapolis, MN, http://www.cs.umn.edu/metis 2. Ishii H, Underkoffler J, Chak D, Piper B, Ben-Joseph E, Yeung L, Kanji Z (2002) Augmented urban planning workbench: overlaying drawings, physical models and digital simulation. In: Proceedings of Conference on IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR’02). Darmstadt 3. Billinghurst M, Kato H (1999) Collaborative mixed reality. In: Proceedings of International Symposium on Mixed Reality (ISMR’99) 4. Breckenridge A, Pierson L, Sanielevici S, Welling J, Keller R, Wössner U, Schulze J (2003) Future Gener Comput Syst 19:849–859 5. Wössner U, Schulze JP, Walz SP, Lang U (2002) Evaluation of a collaborative volume rendering application in a distributed virtual environment. In: Proc. of the 8th Eurographics Workshop on Virtual Environments (EGVE) 6. Eisinger R, Ruprecht A (2001) Automatic shape optimization of hydro turbine components based on CFD. In: Proceedings of Seminar CFD for turbomachinery applications. Gdansk
Using information theory approach to randomness testing
B.Ya. Ryabko, A.N. Fionov, V.A. Monarev, and Yu.I. Shokin
Institute of Computational Technologies SB RAS, Lavrentiev Ave. 6, Novosibirsk 630090, Russia
[email protected], [email protected], [email protected], [email protected]
Summary. We address the problem of detecting deviations of a binary sequence from randomness, which is very important for random number generators (RNG) and pseudorandom number generators (PRNG) and their applications to cryptography. Namely, we consider a hypothesis H0 that a given bit sequence is generated by the Bernoulli source with equal probabilities of 0's and 1's and the alternative hypothesis H1 that the sequence is generated by a stationary and ergodic source which differs from the source under H0. We show that data compression methods can be used as a basis for such testing and describe two new tests for randomness, which are based on ideas of universal coding. Known statistical tests and suggested ones are applied for testing PRNGs which are used in practice. The experiments show that the power of the new tests is greater than that of many known algorithms.
1 Introduction The randomness testing of random number and pseudo-random number generators is used for many purposes including cryptographic, modeling, and simulation applications; see, for example, [1, 2]. For many cryptographic applications a required bit sequence should be truly random, i.e., by definition, such a sequence could be interpreted as the result of flips of a "fair" coin with sides that are labeled "0" and "1" (for short, it is called a random sequence; see [2]). More formally, we consider the main hypothesis H0 that a bit sequence is generated by the Bernoulli source with equal probabilities of 0’s and 1’s. Associated with this null hypothesis is the alternative hypothesis H1 that the sequence is generated by a stationary and ergodic source which generates letters from {0, 1} and differs from the source under H0 . In this paper, we suggest some tests which are based on results and ideas of Information Theory and, in particular, the source coding theory. We show that a universal code can be used for randomness testing. (Let us recall that,
The authors were supported by INTAS grant no. 00-738, Russian Foundation for Basic Research grant no. 03-01-00495, and the UK Royal Society grant ref. 15995
by definition, the universal code can compress a sequence, asymptotically, up to the Shannon entropy per letter when the sequence is generated by a stationary and ergodic source.) If we take into account that the Shannon per-bit entropy is maximal (1 bit) if H0 is true and is less than 1 bit if H1 is true, we see that it is natural to use this property and universal codes for randomness testing because, in principle, such a test can distinguish any deviation from randomness that can be described in the framework of the stationary and ergodic source model. Loosely speaking, the test rejects H0 if a binary sequence can be compressed by some universal code (or a data compression method). First, we show how to build a test based on any data compression method and give some examples of applying such a test to PRNG testing. It should be noted that data compression methods have been considered as a basis for randomness testing in the literature. For example, Maurer's universal statistical test and the approximate entropy test are connected with universal codes and are quite popular in practice; see, for example, [2]. In contrast to known methods, the suggested approach makes it possible to build a test for randomness upon any lossless data compression method, even if the distribution law of the codeword lengths is not known. Second, we describe two new tests, conceptually connected with universal codes. When both tests are applied, a tested sequence x1 x2 ...xn is divided into subwords x1 x2 ...xs, x_{s+1} x_{s+2} ...x_{2s}, . . . , s ≥ 1, and the hypothesis H0* that the subwords obey the uniform distribution (i.e. each subword is generated with probability 2^−s) is tested against H1* = ¬H0*. The key idea of the new tests is as follows. All subwords from the set {0, 1}^s are ordered and this order changes after processing each subword x_{js+1} x_{js+2} ...x_{js+s}, j = 0, 1, . . . , in such a way that, loosely speaking, the more frequent subwords have small ordinals. When the new tests are applied, the frequencies of the different ordinals are estimated (instead of the frequencies of the subwords as, say, for the chi-square test). The outline of the paper is as follows. In Sect. 2 the general method for construction of randomness testing algorithms based on lossless data compression is described. Two new tests for randomness which are based on constructions of universal coding are described in Sect. 3. In Sect. 4 the new tests are experimentally compared with methods from "A statistical test suite for random and pseudorandom number generators for cryptographic applications" [2]. It turns out that the new tests are more powerful than many of the known ones.
2 Data compression methods as a basis for randomness testing Let A be a finite alphabet and A^n be the set of all words of length n over A, where n is an integer. By definition, A* = ∪_{n=1}^∞ A^n, and A^∞ is the set of
all infinite words x1 x2 . . . over the alphabet A. A data compression method (or code) ϕ is defined as a set of mappings ϕ_n : A^n → {0, 1}*, n = 1, 2, . . . , such that for each pair of different words x, y ∈ A^n, ϕ_n(x) ≠ ϕ_n(y). Informally, it means that the code ϕ can be applied for compression of each message of any length n over the alphabet A and the message can be decoded if its code is known. Now we can describe a statistical test which can be constructed upon any code ϕ. Let n be an integer and Ĥ0 be the hypothesis that the words from the set A^n obey the uniform distribution, i.e., p(u) = |A|^−n for each word u ∈ {0, 1}^n. (Here and below |x| is the length if x is a word, and the number of elements if x is a set.) Let a required level of significance (or Type I error) be α, 0 < α < 1. The main idea of the suggested test is quite natural: a well-compressed sequence of words should be considered as non-random and Ĥ0 should be rejected. More exactly, we define a critical value of the suggested test by

t_α = n log |A| − log(1/α) − 1.   (1)

(Here and below log x = log_2 x.) Let u be a word from A^n. By definition, the hypothesis Ĥ0 is accepted if |ϕ_n(u)| > t_α, and rejected if |ϕ_n(u)| ≤ t_α. We denote this test by Γ^(n)_{α,ϕ}.

Theorem 1. For each integer n and a code ϕ, the Type I error of the described test Γ^(n)_{α,ϕ} is not larger than α.

Proof is given in the appendix (Sect. 5).

Comment. We have considered codes for which different words of the same length have different codewords (in Information Theory such codes are sometimes called non-singular). Quite often a stronger restriction is required in Information Theory, namely, that each sequence ϕ_n(x1)ϕ_n(x2)...ϕ_n(x_r), r ≥ 1, of encoded words from the set A^n can be uniquely decoded into x1 x2 ...x_r. Such codes are called uniquely decodable. For example, let A = {a, b}; the code ψ1(a) = 0, ψ1(b) = 00 is obviously non-singular, but is not uniquely decodable. (Indeed, the word 000 can be decoded into both ab and ba.) It is well known in Information Theory that the codeword lengths of a uniquely decodable code ϕ satisfy the following Kraft inequality:

∑_{u∈A^n} 2^{−|ϕ_n(u)|} ≤ 1.   (2)

If it is known that the code is uniquely decodable, the suggested critical value (1) can be changed. Let us define

t̂_α = n log |A| − log(1/α).   (3)

Let, as before, u be a word from A^n. By definition, the hypothesis Ĥ0 is accepted if |ϕ_n(u)| > t̂_α, and rejected if |ϕ_n(u)| ≤ t̂_α. We denote this test by Γ̂^(n)_{α,ϕ}.
Theorem 2. For each integer n and a uniquely decodable code ϕ, the Type I error of the described test Γ̂^(n)_{α,ϕ} is not larger than α. Proof is also given in the appendix. So, we can see from (1) and (3) that the critical value is larger if the code is uniquely decodable. On the other hand, the difference is quite small, and (1) can be used without a considerable loss of test power even in the case of uniquely decodable codes.
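A test built on an arbitrary compressor therefore needs nothing more than the compressed length. The following C sketch evaluates the critical values (1) and (3) and returns the decision; it is only an illustration of the rule, the function names are ours, and the actual compressor (gzip, RAR, etc.) is assumed to be invoked elsewhere.

#include <math.h>
#include <stdbool.h>

/* Critical value (1) for a non-singular code; for a uniquely
 * decodable code the "-1" is dropped, cf. (3).                       */
double critical_value(int n, int alphabet_size, double alpha,
                      bool uniquely_decodable)
{
    double t = n * log2((double)alphabet_size) - log2(1.0 / alpha);
    return uniquely_decodable ? t : t - 1.0;
}

/* Reject H0 (randomness) when the compressed length in bits does not
 * exceed the critical value. Returns true if H0 is rejected.         */
bool reject_h0(long compressed_bits, int n, int alphabet_size,
               double alpha, bool uniquely_decodable)
{
    return (double)compressed_bits <=
           critical_value(n, alphabet_size, alpha, uniquely_decodable);
}

For the archiver-based tests of Sect. 4 this reduces to the simple rule quoted there: with |A| = 256 and α = 0.01, an n-byte file is declared non-random as soon as the archiver manages to shrink it to n − 1 bytes or less.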
3 Two new tests for randomness In this section, we suggest two tests which are based on ideas of universal coding, but they are described in such a way that they can be understood without any knowledge of Information Theory. 3.1 The "book stack" test Let, as before, there be given an alphabet A = {a1, ..., aS}, a source which generates letters from A, and the two following hypotheses: the source is i.i.d. and p(a1) = . . . = p(aS) = 1/S (H0) and H1 = ¬H0. We should test the hypotheses given a sample x1 x2 . . . xn generated by the source. In the "book stack" test, all letters from A are ordered from 1 to S and this order is changed after observing each letter x_t according to the formula

ν^{t+1}(a) = 1,            if x_t = a,
             ν^t(a) + 1,   if ν^t(a) < ν^t(x_t),        (4)
             ν^t(a),       if ν^t(a) > ν^t(x_t),

where ν^t is the order after observing x1 x2 . . . x_t, t = 1 . . . n, ν^1 being defined arbitrarily. (For example, we can define ν^1 = (a1, . . . , aS).) Let us explain (4) informally. Suppose that the letters from A are arranged in a stack, like a stack of books, and ν^1(a) is the position of a in the stack. Let the first letter x1 of the word x1 x2 . . . xn be a. If it occupies the i1-th position in the stack (ν^1(a) = i1), then extract a from the stack and push it on the top. (It means that the order is changed according to (4).) Repeat the procedure with the second letter x2 and the stack obtained, etc. It can help to understand the main idea of the suggested method if we take into account that, if H1 is true, the frequent letters from A (like frequently used books) will have relatively small numbers (they will spend more time near the top of the stack). On the other hand, if H0 is true, the probability of finding each letter x_i at each position j is equal to 1/S. Let us proceed with the description of the test. The set of all indexes {1, . . . , S} is divided into r, r ≥ 2, subsets A1 = {1, 2, . . . , k1}, A2 =
{k1 + 1, . . . , k2}, . . . , Ar = {k_{r−1} + 1, . . . , k_r}. Then, using x1 x2 . . . xn, we calculate how many ν^t(x_t), t = 1, . . . , n, belong to a subset A_k, k = 1, . . . , r. We denote this number by n_k. More formally,

n_k = |{t : ν^t(x_t) ∈ A_k, t = 1, . . . , n}|,   k = 1, . . . , r.

Obviously, if H0 is true, the probability of the event ν^t(x_t) ∈ A_k is equal to |A_k|/S. Then, using a usual chi-square test, we test the hypothesis Ĥ0: P{ν^t(x_t) ∈ A_k} = |A_k|/S, based on the empirical frequencies n_1, . . . , n_r, against Ĥ1 = ¬Ĥ0. Let us recall that the value

x² = ∑_{i=1}^{r} (n_i − n(|A_i|/S))² / (n(|A_i|/S))   (5)

is calculated when the chi-square test is applied; see, for example, [3]. It is known that x² asymptotically follows the chi-square distribution with (k − 1) degrees of freedom (χ²_{k−1}) if Ĥ0 is true. If the level of significance (or Type I error) of the chi-square test is α, 0 < α < 1, the hypothesis Ĥ0 is accepted when x² from (5) is less than the (1 − α)-value of the χ²_{k−1} distribution [3]. We do not describe the exact rule for constructing the subsets A1, . . . , Ar, but we recommend to carry out some experiments for finding the parameters which make the sample size minimal (or, at least, acceptable). The point is that there are many cryptographic applications where it is possible to carry out some experiments for optimizing the parameter values and, then, to test the hypothesis based on independent data. For example, in the case of testing a PRNG it is possible to seek suitable parameters using one part of the generated sequence and then to test the PRNG using a new part of the sequence. Let us consider a small example. Let A = {a1, . . . , a6}, x1 . . . x8 = a3 a6 a3 a3 a6 a1 a6 a1, r = 2, A1 = {1, 2, 3}, A2 = {4, 5, 6}, ν^1 = (a1, a2, a3, a4, a5, a6). Then
ν1 = (a1, a2, a3, a4, a5, a6),   n1 = 0, n2 = 0;
ν2 = (a3, a1, a2, a4, a5, a6),   n1 = 1;
ν3 = (a6, a3, a1, a2, a4, a5),   n2 = 1;
ν4 = (a3, a6, a1, a2, a4, a5),   n1 = 2;
ν5 = (a3, a6, a1, a2, a4, a5),   n1 = 3;
ν6 = (a6, a3, a1, a2, a4, a5),   n1 = 4;
ν7 = (a1, a6, a3, a2, a4, a5),   n1 = 5;
ν8 = (a6, a1, a3, a2, a4, a5),   n1 = 6;
ν9 = (a1, a6, a3, a2, a4, a5),   n1 = 7.
We can see that the letters a3 and a6 are quite frequent and the "book stack" indicates this nonuniformity quite well. (Indeed, the average values of n1 and n2 are equal to 4, whereas the real values are 7 and 1, respectively.) Examples of practical applications of this test will be given in Sect. 4, but here we make two notes. First, we pay attention to the complexity of this algorithm. A "naive" implementation of the transformation (4) would take a number of operations proportional to S, but there exist algorithms which can perform all operations in (4) in O(log S) time. Such algorithms can be based on AVL trees, see, for example, [4]. The second comment concerns the name of the method. The "book stack" structure is quite popular in Information Theory and Computer Science. In Information Theory this structure was first suggested as a basis of a universal code in [5] and then rediscovered in [6]. In the literature this code is frequently called the "Move-to-Front" (MTF) scheme, as it was suggested in [6]. 3.2 The order test This test is also based on changing the order ν^t(a) of the alphabet letters, but the rule of change differs from (4). To describe the rule we first define λ^{t+1}(a) as the count of occurrences of a in the word x1 . . . x_{t−1} x_t. At each moment t the alphabet letters are ordered according to ν^t in such a way that, by definition, for each pair of letters a and b, ν^t(a) ≺ ν^t(b) if λ^t(a) ≥ λ^t(b). For example, if A = {a1, a2, a3} and x1 x2 x3 = a3 a2 a3, the possible orders can be as follows: ν^1 = (a1, a2, a3), ν^2 = (a3, a1, a2), ν^3 = (a3, a2, a1), ν^4 = (a3, a2, a1). In all other respects this method coincides with the book stack. (The set of all indexes {1, . . . , S} is divided into r subsets, etc.) Obviously, after observing each letter x_t the value λ^t(x_t) should be increased and the order ν^t should be changed. It is worth noting that there exist data structures and algorithms which allow maintaining the alphabet letters ordered in such a way that the number of operations spent is constant, independently of the size of the alphabet. See [9] for a description of the corresponding data structure.
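To make the procedure concrete, here is a compact C sketch of the book-stack statistic for a byte alphabet (S = 256). It uses the naive O(S) update of (4) rather than the O(log S) structures mentioned above; the split into r = 2 index sets and the helper names are choices made for the example only.

#include <stddef.h>

/* Book-stack (move-to-front) statistic for bytes: counts how often the
 * current letter is found among the first k1 stack positions (n1) or
 * among the remaining positions (n2). Under H0 the expected shares are
 * k1/256 and (256-k1)/256; a chi-square test is applied to n1, n2.    */
void book_stack_counts(const unsigned char *x, size_t n, int k1,
                       long *n1, long *n2)
{
    int stack[256];                  /* stack[pos] = letter at position */
    int pos[256];                    /* pos[letter] = current position  */
    for (int i = 0; i < 256; ++i) { stack[i] = i; pos[i] = i; }

    *n1 = *n2 = 0;
    for (size_t t = 0; t < n; ++t) {
        int a = x[t];
        int p = pos[a];
        if (p < k1) ++*n1; else ++*n2;
        /* move-to-front update, Eq. (4): letters above a move down one */
        for (int q = p; q > 0; --q) {
            stack[q] = stack[q - 1];
            pos[stack[q]] = q;
        }
        stack[0] = a;
        pos[a] = 0;
    }
}

/* Chi-square value (5) for r = 2 cells. */
double chi_square_two_cells(long n1, long n2, int k1)
{
    double n = (double)(n1 + n2);
    double e1 = n * k1 / 256.0, e2 = n * (256 - k1) / 256.0;
    double d1 = n1 - e1, d2 = n2 - e2;
    return d1 * d1 / e1 + d2 * d2 / e2;
}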
4 The experiments In this section, we describe some experiments that were carried out to compare our new tests with known ones. We compare the Order and Book Stack tests, the tests based on standard data compression methods, and the tests from [2]. It is important to note here that the tests from [2] were selected as a result of a comprehensive theoretical and experimental analysis and may be considered as the state-of-the-art in randomness testing. Besides, we also test the method [7] since it was published later than the book of Rukhin et al. [2].
We used several random number generators described in the literature. Namely, we used data generated by the PRNG "RANDU" (described in [8]), the Shift Register generator from DIEHARD (algorithm number 13, see http://stat.fsu.edu/diehard/ ), a linear congruential generator (LCG) from [10] with parameters X_{n+1} = (134775813 X_n + 1) mod 2^32, the generators MT19937 [11], A5/1 [12], and SEAL [13]. Besides, we used truly random bits from "The Marsaglia Random Number CDROM" [14]. These sources of random data were chosen in order to present generators of different types and different "quality". Thus, it is known that the linear congruential generators (RANDU [8] and the PRNG from [10]) are not good, whereas "The Marsaglia" [14], SEAL [13] and some others are considered as good sources of random bits. First we describe in detail an analysis of RANDU and "The Marsaglia Random Number" in order to describe how the testing was carried out. The behavior of the tests was investigated for files of different lengths (see Tables 1 and 2 below). We generated 100 different files of each length and applied each test mentioned above to each file with the level of significance 0.01 (or less, see below). So, if a test is applied to a truly random bit sequence, on average 1 file out of 100 should be rejected. All the results are given in the tables, where the figures in the cells are the numbers of rejected files (out of 100). If the number of rejections is not given for a certain length and test, it means that the test cannot be applied to files of such a length. Table 1 contains information about testing of sequences of different lengths generated by RANDU, whereas Table 2 contains the results of applying all tests to 5,000,000-bit sequences either generated by RANDU or taken from "The Marsaglia Random Number CDROM". Let us give some comments about the tests which are based on the popular archiving programs RAR and ARJ. We applied each of the programs to each file and examined the length of the compressed data. Then we used the test
Γ^(n)_{α,ϕ} with the critical value (1) as follows. The alphabet size |A| = 2^8 = 256, and n log |A| is simply the length of the file (in bits) before compression (whereas n is its length in bytes). So, taking α = 0.01, from (1) we see that the hypothesis about randomness (H0) should be rejected if the length of the compressed file is less than or equal to n log |A| − 8 bits. (Strictly speaking, in this case α ≤ 2^−7 = 1/128.) So, taking into account that the length of computer files is measured in bytes, this rule is very simple: if the n-byte file is really compressed (i.e. the length of the encoded file is n − 1 bytes or less), this file is not random. So, the following tables contain the numbers of cases where the files were really compressed. Let us now give some comments about the parameters of the methods considered. As was mentioned, we investigated all methods from the book of Rukhin et al. [2], the test from [7] (RSS test for short), the two tests based on data compression algorithms described above, the Order test and the Book Stack test. For some tests there are parameters that should be specified. In
such cases the values of the parameters are given in the table in the row which follows the test results. There are some tests from [2] where parameters can be chosen from a certain interval. In such cases we repeated all calculations three times, taking the minimal possible value of the parameter, the maximal one and the average one. Then the data were taken for the case where the number of rejections of the hypothesis H0 was maximal. For the chosen lengths of files, some tests could not be applied at all. Information about those tests was excluded from the tables. We also investigated the other generators mentioned above. The same level of significance was used (0.01), but the behavior of the tests was investigated for files of the lengths 2^19 and 2^22 bytes (Tables 3 and 4, respectively). We generated 20 files of each length by the following algorithms: the Shift-Register generator, which generates 32-bit integers and uses three shifts (DIEHARD, number 13), the linear congruential generator (LCG) from [10], MT19937 [11], A5/1 [12], and SEAL [13]. For example, from Table 3 we can see that, if the file length is 2^19 bytes, the non-randomness of the Shift-Register generator is detected quite well by the Rank, Order and Book Stack tests (20, 19 and 18 cases out of 20, respectively). So, we can see from Tables 1–4 that the new tests can detect non-randomness quite effectively. Seemingly, the main reason is their ability for adaptation. In conclusion, we can say that the results obtained show that the new tests, as well as the ideas of Information Theory in general, can be useful tools for randomness testing.
5 Proofs Proof of Theorem 1. First we estimate the number of words ϕ(u) whose length is less than or equal to an integer τ. Obviously, at most one word can be encoded by the empty codeword, at most two words by the codewords of length 1, . . . , at most 2^i words by the codewords of length i, etc. Having taken into account that the codewords ϕ_n(u) ≠ ϕ_n(v) for different u and v, we obtain the inequality

|{u : |ϕ_n(u)| ≤ τ}| ≤ ∑_{i=0}^{τ} 2^i = 2^{τ+1} − 1.
From this inequality and (1) we can see that the number of words from the set A^n whose codelength is less than or equal to t_α = n log |A| − log(1/α) − 1 is not greater than 2^{n log |A| − log(1/α)}. So, we obtained that
|{u : |ϕ_n(u)| ≤ t_α}| ≤ α |A|^n. Taking into account that all words from A^n have equal probabilities if H0 is true, we obtain from the last inequality, (1), and the description of the test Γ^(n)_{α,ϕ} that
Using information theory approach to randomness testing Table 1. Testing sequences generated by RANDU Test↓ Length (in bits)→
5 × 104 105 5 × 105 106
Order Test Book Stack
56 42
100 100 100 100
4
75
100
100
17
20
20
√ parameters for all lengths: s = 20, | A1 | = 5 2s
RSS parameters:
s = 16
100 100
RAR
0
0
100
100
ARJ
0
0
99
100
Frequency
2
1
1
2
1
2
1
1
Block Frequency parameters:
M = 1000
2000 10000
20000
Cumulative Sums
2
1
2
1
Runs
0
2
1
1
Longest Run of Ones
0
1
0
0
Rank
0
1
1
0
Discrete Fourier Transform
0
0
0
1
Non-overlapping Templates –
–
–
2
parameters:
m=
Overlapping Templates parameters:
10
–
–
–
m=
Universal Statistical
2 10
–
–
1
1
L=
6
7
Q=
640
1280
2
2
7
11
13
14
–
–
–
2
Random Excursions Variant –
–
–
2
Serial
1
2
2
14
16
18
–
–
–
1
–
–
–
3
parameters:
Approximate Entropy parameters:
1 m= 5
Random Excursions
parameters:
0 m= 6
Lempel-Ziv Complexity Linear Complexity parameters:
M=
2500
269
Table 2. Results of all tests on 5,000,000-bit sequences

Test                               RANDU   Marsaglia
Order Test                           100        3
Book Stack                           100        0
   parameters: s = 24, |A1| = 5·√(2^s)
RSS                                  100        1
   parameters: s = 24
Frequency                              2        1
Block Frequency                        2        1
   parameters: M = 10^6
Cumulative Sums                        3        2
Runs                                   2        2
Longest Run of Ones                    2        0
Rank                                   1        1
Discrete Fourier Transform            89        9
Non-overlapping Templates              5        5
   parameters: m = 10
Overlapping Templates                  4        1
   parameters: m = 10
Universal Statistical                  1        2
   parameters: L = 9, Q = 5120
Approximate Entropy                  100       89
   parameters: m = 17
Random Excursions                      4        3
Random Excursions Variant              3        3
Serial                               100        2
   parameters: m = 19
Lempel-Ziv Complexity                  0        0
Linear Complexity                      4        3
   parameters: M = 5000
Table 3. Testing generators on 2^19-byte sequences

Test                            Shift-Register   LCG [10]   MT19937   A5/1   SEAL
RSS (block = 28)                       9             0          0       0      0
Order test (block = 30)               19             0          0       0      0
Book stack (block = 30)               18             0          0       0      0
Frequency                              1             0          0       0      0
Block Frequency                        0             0          0       3      0
Cumulative Sums                        1             1          0       0      0
Runs                                   1             0          0      12      0
Rank                                  20             0          0       0      0
Longest Run of Ones                    1             0          1       0      0
Discrete Fourier Transform             1             3          0       1      0
Non-overlapping Templates              1             1          1       3      1
Overlapping Template                   1             0          0       0      0
Universal Statistical                  1             0          1       3      0
Approximate Entropy                    3             4          2       3      3
Random Excursions                      2             2          2       2      3
Random Excursions Variant              2             2          2       2      3
Serial                                 0             0          1       1      0
Lempel-Ziv Complexity                  0             0          0       0      0
Table 4. Testing generators on 2^22-byte sequences

Test                            Shift-Register   LCG [10]   MT19937   A5/1   SEAL
RSS                                   20             9          0       0      0
Order test                            20            20          0       0      0
Book stack                            20            20          0       0      0
Frequency                              0             0          0       0      0
Block Frequency                        0             1          0       6      0
Cumulative Sums                        2             0          0       0      0
Runs                                   0             0          0      20      1
Rank                                  20             0          0       0      0
Longest Run of Ones                    0             0          1       0      0
Discrete Fourier Transform            13            20         15       8     14
Non-overlapping Templates              2             2          1       1      1
Overlapping Template                   0             0          0       0      0
Universal Statistical                  0             0          1       0      0
Approximate Entropy                   16            16         12      14     15
Random Excursions                      2             2          2       2      2
Random Excursions Variant              2             2          2       2      2
Serial                                 1             0          1       0      0
Pr{|ϕ_n(u)| ≤ t_α} ≤ α |A|^n / |A|^n = α if H0 is true. The theorem is proved. Proof of Theorem 2. We can think that t̂_α in (3) is an integer. (Otherwise, we obtain the same test taking ⌊t̂_α⌋ as a new critical value of the test.) From the Kraft inequality (2) we obtain that

1 ≥ ∑_{u∈A^n} 2^{−|ϕ_n(u)|} ≥ |{u : |ϕ_n(u)| ≤ t̂_α}| · 2^{−t̂_α}.

This inequality and (3) yield

|{u : |ϕ_n(u)| ≤ t̂_α}| ≤ α |A|^n.

If H0 is true then the probability of each u ∈ A^n equals |A|^−n, and from the last inequality we obtain that Pr{|ϕ_n(u)| ≤ t̂_α} = |A|^−n |{u : |ϕ_n(u)| ≤ t̂_α}| ≤ α. This completes the proof.
References
1. Maurer U (1992) J Cryptol 5(2):89–105
2. Rukhin A et al. (2001) A statistical test suite for random and pseudorandom number generators for cryptographic applications. NIST Special Publication 800-22. http://csrc.nist.gov/rng/SP800-22b.pdf
3. Kendall MG, Stuart A (1961) The advanced theory of statistics. Vol. 2: Inference and relationship. London
4. Aho AV, Hopcroft JE, Ullman JD (1976) The design and analysis of computer algorithms. Addison-Wesley, Reading MA
5. Ryabko BYa (1980) Prob Inf Transm 16(4):16–21
6. Bentley JL, Sleator DD, Tarjan RE, Wei VK (1986) Commun ACM 29:320–330
7. Ryabko BYa, Stognienko VS, Shokin YuI (2004) J Stat Plan Infer 123(2):365–376
8. Dudewicz EJ, Ralley TG (1981) The handbook of random number generation and testing with TESTRAND computer code. In: American Series in Mathematical and Management Sciences, vol. 4. American Sciences Press, Columbus, Ohio
9. Ryabko B, Rissanen J (2003) IEEE Commun Lett 7(1):33–35
10. Moeschlin O, Grycko E, Pohl C, Steinert F (1998) Experimental stochastics. Springer, Berlin Heidelberg
11. Matsumoto M, Nishimura T (1998) ACM Trans Model Comp Simul 8:3–30
12. Schneier B (1996) Applied Cryptography. Wiley, New York
13. Rogaway P, Coppersmith D (1998) A software-optimized encryption algorithm. In: Anderson R (ed) Fast software encryption. Lecture Notes in Computer Science 809. Springer Verlag
14. The Marsaglia Random Number CDROM. http://stat.fsu.edu/diehard/cdrom/
Optimizing performance on modern HPC systems: learning from simple kernel benchmarks
G. Hager (1), T. Zeiser (1), J. Treibig (2), and G. Wellein (1)
(1) Regional Computing Centre Erlangen (RRZE), University of Erlangen-Nuremberg, Martensstr. 1, 91058 Erlangen, Germany [email protected]
(2) Chair of System Simulation (LSS), University of Erlangen-Nuremberg, Cauerstr. 6, 91058 Erlangen, Germany
Summary. We discuss basic optimization and parallelization strategies for current cache-based microprocessors (Intel Itanium2, Intel Netburst and AMD64 variants) in single-CPU and shared memory environments. Using selected kernel benchmarks representing data-intensive applications, we focus on the effective bandwidths attainable, which are still suboptimal with current compilers. We stress the need for a subtle OpenMP implementation even for simple benchmark programs, to exploit the high aggregate memory bandwidth available nowadays on ccNUMA systems. If the quality of main memory access is the measure, classical vector systems such as the NEC SX6+ are still a class of their own and are able to sustain the performance level of in-cache operations of modern microprocessors even with arbitrarily large data sets.
1 Introduction Intel architectures and their relatives currently tend to stand in the focus of public attention because of their allegedly superior price/performance ratio. Many clusters and supercomputers are built from commodity components, and the majority of compute cycles is spent on Itanium2, IA32-compatible and similar processors. The availability of high-quality compilers and tools makes it easy for scientists to port application codes and make them run with acceptable performance. Although compilers get more and more intelligent, user intervention is vital in many cases in order to get optimal performance. In this context it is an important task to identify easy-to-use optimization guidelines that can be applied to a large class of codes. We use simple kernel benchmarks that serve both to evaluate the performance characteristics of the hardware and to remedy bottlenecks in complex serial as well as shared-memory parallel applications. The focus of this report is on the bandwidth problem. Modern microprocessors provide extremely fast onchip caches with sizes of up to several megabytes. However, the efficient use
of these caches requires complex optimization techniques and may depend on parameters which are not under the control of most programmers, e. g. even for in-cache accesses performance can drop by a factor of up to 20 if data access is badly scheduled [1]. Although there has been some progress in technology, main memory bandwidth is still far away from being adequate to the compute power of modern CPUs. The current Intel Xeon processors, for instance, deliver less than one tenth of a double precision operand from main memory for each floating point operation available. The use of cachelines makes things even worse, if nonregular access patterns occur. Providing high-quality main memory access has always been the strength of classical vector processors such as the NEC SX series and therefore these “dinosaurs” of HPC still provide unique single processor performance for many scientific and technical applications [2, 3, 4, 5]. Since performance characteristics as well as optimization techniques are well understood for the vector architectures, we have chosen the NEC SX6+ to serve as the performance yardstick for the microprocessors under consideration here. This paper is organized as follows. The hardware and software characteristics of the systems used in our study are summarized in Section 2. In Section 3, the popular vector triad benchmark is used to compare the ability of different architectures to use the system resources to their full extent. It is shown that, even with such a seemingly simple code, vast fluctuations in performance can be expected, depending on fine details of hardware and software environment as well as code implementation. We first discuss the gap between theoretical performance numbers of the hardware and the effective performance measured. Then we introduce an approach based on the use of assembler instructions to optimize single processor memory performance on IA32 and x86-64 processors. In Section 4, performance effects of shared memory parallelization on ccNUMA systems like SGI Altix or Opteron-based nodes are investigated. It is shown that an overly naive approach can lead to disastrous loss of performance on ccNUMA architectures. In Section 5 we finally give a brief summary and try to restate the most important lessons learned.
2 Architectural specifications In Table 1 the most important single processor specifications of the architectures examined are sketched. The on-chip caches of current microprocessors run at processor speed, providing high bandwidth and low latency. The NEC vector system implements a different memory hierarchy and achieves substantially higher single processor peak performance and memory bandwidth. Note that the vector processor has the best balance with respect to the ratio of memory bandwidth to peak performance.
Table 1. Single processor specifications. Peak performance (Peak), maximum bandwidth of the memory interface (MemBW) and sizes of the various cache levels (L1,L2,L3) are given. All caches are on-chip, providing low latencies
Platform                            Peak [GFlop/s]   MemBW [GB/s]   L1 [kB]   L2 [MB]   L3 [MB]
Intel Pentium4/Prescott (3.2 GHz)        6.4              6.4           16        1.0        –
AMD Athlon64 (2.4 GHz)                   4.4              6.4           64        1.0        –
Intel Itanium 2 (1.3 GHz)                5.2              6.4           16        0.25       3.0
NEC SX6+ (565 MHz)                       9.0             36.0            –         –         –

Intel Netburst and AMD64 The Intel Pentium4, codenamed “Prescott” in its current incarnation, and the AMD Athlon64/Opteron processors used in our benchmarks are the latest versions of the well-known Intel “Netburst” and AMD64 architectures, respectively. The Athlon64 and Opteron processors have the additional advantage of a fully downwards-compatible 64-bit extension (also available in Intel’s recent Xeon CPUs). Both CPUs are capable of performing a maximum of two double precision floating point (FP) operations, one multiply and one add, per cycle. Additionally, AMD CPUs have an on-chip memory controller which reduces memory latency by eliminating the need for a separate northbridge. The Prescott and Athlon64 benchmark results presented in sections 3.1 and 3.2 have been measured at LSS on single-processor workstations. For the OpenMP benchmarks in section 4 a 4-way Opteron server with an aggregate memory bandwidth of 25.6 GByte/s (= 4 × 6.4 GByte/s) was used. Both systems run Linux and the benchmarks were compiled using the 32-bit Intel IA32 Fortran Compiler in version 8.1-023. Intel Itanium 2 The Intel Itanium 2 processor is a superscalar 64-bit CPU implementing the Explicitly Parallel Instruction Computing (EPIC) paradigm. The Itanium concept does not require any out-of-order execution hardware support but relies on heavy instruction level parallelism, putting high demands on compilers. Today clock frequencies of up to 1.6 GHz and on-chip L3 cache sizes from 1.5 to 6 MB are available. Please note that floating point data is transferred directly between L2 cache and registers, bypassing the L1 cache. Two multiply-add units are fed by a large set of 128 FP registers, which is another important difference to standard microprocessors with typically 32 FP registers. The basic building blocks of systems used in scientific computing are two-way nodes (e. g. SGI Altix, HP rx2600) sharing one bus with 6.4 GByte/s memory bandwidth.
The system of choice in our report is an SGI Altix 3700 with 28 Itanium2 processors (1.3 GHz/3 MB L3 cache) running RedHat Linux with SGI proprietary enhancements (ProPack). Unless otherwise noted, the Intel Itanium Fortran Compiler in version 8.1-021 was used. NEC SX6+ From a programmer’s view the NEC SX6+ is a traditional vector processor with 8-track vector pipes running at 565 MHz. One multiply and one add operation per cycle can be executed by the arithmetic pipes, delivering a peak performance of 9 GFlop/s. The memory bandwidth of 36 GByte/s allows for one load or store per multiply-add operation. 64 vector registers, each holding 256 64-bit words, are available. An SMP node comprises eight processors and provides a total memory bandwidth of 289 GByte/s, i. e. the aggregated single processor bandwidths can be saturated. Vectorization of application codes is a must on this system, because scalar CPU performance is non-competitive. The benchmark results presented in this paper were measured on a NEC SX6+ at the High Performance Computing Center Stuttgart (HLRS).
3 Serial vector triad One of the most simple benchmark codes that is widely used to fathom the theoretical capabilities of a system is the vector triad [6]. In Fortran language, the elementary operation A(:)=B(:)+C(:)*D(:) is carried out inside a recurrence loop that serves to exploit cache reuse for small array sizes:

  do R=1, NITER
    do I=1,N
      A(I) = B(I) + C(I) * D(I)
    enddo
  enddo

Appropriate time measurement is taken care of and the compiler is prevented from interchanging both loops which conflicts with the intention of the benchmark. Customarily, this benchmark is run for different loop lengths N, and NITER is chosen large enough so that startup effects are negligible. Parallelization with OpenMP directives appears straightforward, but we will first consider the serial version. The single processor characteristics are presented in Fig. 1 for different architectures using effective bandwidth (assuming a transfer of 4 × 8 byte = 32 byte per iteration) as a performance measure. The performance characteristics reflect the basic memory hierarchy: transitions between cache levels with different access speeds can be clearly identified by sharp drops in bandwidth. For all cache-based microprocessors we find a pipeline start-up
Fig. 1. Memory bandwidth vs. loop length N for the serial vector triad on several architectures
effect at small loop length, two “transitions” at intermediate loop length and a maximum performance which scales roughly with peak performance. A totally different picture emerges on the vector system, where bandwidth saturates at intermediate loop lengths, when vector pipeline start-up penalties become negligible. Most notably, the vector system is able to sustain the in-cache performance of RISC/EPIC processors at arbitrarily large loop length, i.e. large data sets. 3.1 Compiler-generated code The cache characteristics presented in Fig. 1 seem to reflect the underlying hardware architecture very well. Nevertheless, the extremely complex logic of caches requires a closer inspection, which has been done for Itanium2 (see Fig. 2). In a first step we calculate the maximum triad bandwidth from basic hardware parameters like the number of loads/stores per cycle and the theoretical bandwidth for each memory level. The Itanium2 processor has a sustained issue rate of four loads or two loads and two stores per cycle, which can in principle be saturated by the L2 cache (at a latency of roughly 7 cycles). Thus, two successive iterations of the vector triad could be executed in two clock cycles, leading to a performance of two FLOPs per cycle (half of peak performance) or 32 bytes per cycle (41.6 GByte/s). As shown in Figure 2 this limit is nearly reached for loop lengths of N ≈ 1000–7000 using dynamic allocation of the vectors at runtime. At smaller loop length pipeline start-up limits performance, and for larger data sets the next level of the memory hierarchy (L3 cache) is entered. Interestingly, the allocation strategy for the vectors shows up in a significant performance variation for the L2 regime.
Fig. 2. Serial vector triad bandwidth on Itanium2 using different memory allocation strategies. The bandwidth limits imposed by the hardware are depicted as well, Plain denoting the pure hardware numbers and Triads taking into account the additional load operation if an L2/L3 write miss occurs
Using static allocation of the vectors with fixed array length (10^6 for the benchmark runs shown in Fig. 2), independently of the actual loop length, may reduce the available bandwidth by more than 30%. The eight-way associativity of the L2 cache is not responsible for this behavior, because only four different arrays are used. Jalby et al. [7] have demonstrated that this effect can be attributed to non-precise memory disambiguation for store-load pairs (as they occur in the vector triad) and/or bank conflicts in the L2 cache that lead to OZQ stalls [8]. Obviously, dynamic array allocation minimizes the potential address and bank conflicts, while static allocation requires (manual) reordering of assembler instructions in order to get optimal performance. If the aggregate data size of the four vectors exceeds the L2 cache size (at a loop length of 8000), a sharp drop in performance is observed although L2 and L3 have the same bandwidth and similar latencies (7 vs. 14 cycles). Here another effect must be taken into account. If an L2 write miss occurs (when writing to A), the corresponding cache line must first be transferred from L3 to L2 (write-allocate). Thus, four instead of three load operations are requested and the effectively available bandwidth is reduced by a factor of 4/5 (see Fig. 2) if data outside the L2 cache is accessed. Nonetheless, our best measurements (for static array allocation) still fall short of the theoretical maximum by at least 25% and achieve only 50% of the L2 bandwidth. Similar results are presented in [7] for the DAXPY operation (A(:)=A(:)+s*B(:)), and thus we must assume that another hardware bottleneck further reduces the available maximum bandwidth in the L3 regime. Indeed, as all FP data is transferred from L2 to registers, concurrent cache line refills and register load operations on this cache can lead to bank conflicts that induce stall cycles [9]. Further investigation of this problem is currently underway.
Memory performance, on the other hand, is limited by the ability of the frontside bus (FSB) to transfer one eight-byte double precision word per bus cycle at a bus frequency of 800 MHz. Following the discussion of L3 performance, nearly all RISC processors use outermost-level caches with write-allocate logic. Thus, also for main memory access every store miss results in a prior cache line load, leading to a total of four loads and one store per iteration for the vector triad, thereby wasting 20 % of the available bandwidth. In case of Itanium2, this means that the effective FSB frequency available for the memory-bound vector triad is 800 · 4/5 MHz, yielding a maximum available bandwidth of 5.12 GByte/s (= 0.8 × 6.4 GByte/s). Figure 2 shows that about 80 % of this limit is achievable, which is significantly higher than for Intel Prescott (60 %) or AMD Athlon64 (67 %), at least for plain, compiler-generated code. For further comments see the next section.

3.2 Handcoded optimizations

While the achievable sustained memory performance of former x86 CPU generations was far below peak, latest-generation CPUs such as the AMD Athlon 64 and Intel Pentium 4 Prescott can nearly reach their theoretical peak memory bandwidth. However, this can only be done using the SSE/SSE2 instruction set extensions and special optimizations that have not found their way into standard compilers yet. We have explored the potential of these new instructions with handcoded assembly language versions of the vector triad.

SSE/SSE2

SSE and SSE2, available on Intel Pentium 4 as well as AMD Athlon64 and Opteron processors, not only provide a completely new set of SIMD registers that has significant advantages over the traditional FPU register stack, but also add special instructions for explicit memory subsystem control:
• Nontemporal stores can bypass the cache hierarchy by using the write-combine buffers (WCBs) which are available on all modern CPUs. This not only enables more efficient burst operations on main memory, but also avoids cache pollution by data which is exclusively stored, i. e. for which temporal locality is not applicable.
• Prefetch instructions, being hints to the cache controller, are provided in order to give programmers more control over when and which cache lines are brought into the cache. While the built-in hardware prefetch logic does a reasonable job of identifying access patterns, it has some significant limitations [10] that might be circumvented by manual prefetch.
• Explicit cache line flush instructions also allow explicit control over cache use, making cache-based optimizations more effective. These instructions were not used in the benchmarks described below.
The following tests concentrate on the achieved cache and main memory bandwidth with the vector triad. All implementations use nontemporal stores for highest write bandwidth in the memory-bound regime. Cache characteristic data was taken using a self-written benchmark suite. The results can be reproduced by other micro-benchmarking tools like, e. g., the RightMark Memory Analyzer [11].

Cache performance

In order to get an impression of cache behavior we ran plain FPU and SSE2 versions of the vector triad (Fig. 3). An important general observation on all Pentium 4 revisions is that cache bandwidth scales with the register width used in load/store instructions, i. e. the CPU reaches its full potential only when using the 16-byte SSE registers. On the other hand, the AMD Athlon 64 bandwidth stays almost the same when going from FPU to SSE2. Moreover, this CPU shows a mediocre L2 bandwidth when compared to the Pentium 4. The AMD design clearly does not favor vector streaming applications as much as the Pentium 4, which is trimmed for SIMD performance.
Fig. 3. Serial vector triad bandwidth on Intel Prescott and AMD Athlon64: comparison of SSE2 and FPU versions
As a side note, the cache bandwidths on a Pentium 4 are asymmetric, meaning that read bandwidth is much higher than write bandwidth, the latter being 4 bytes/cycle on all cache levels. The Athlon64 does not show this asymmetry at all. One must still keep in mind that the Pentium 4 CPUs derive much of their superior cache performance from a very high clock rate.
Memory bandwidth, on the other hand, depends on completely different factors like the number of outstanding memory references or the effectiveness of hardware prefetch.

Memory performance

Figure 4 compares results for memory-bound, compiler-generated, vectorized (using the -xW compiler option) and unvectorized triad benchmarks, together with different assembly language and hand-optimized compiler implementations. The x axis shows the category of the optimization. Still, the code was specifically optimized for every architecture (concerning cache line lengths etc.). The version implemented with FPU instructions shows roughly the same performance as the unvectorized compiler version. The SSE2 version shows no improvement; using the wide SSE2 registers does not have any influence on memory performance, as expected. Nontemporal stores, on the other hand, yield a significant speedup because of the increased effective bandwidth as described above. This version shows performance comparable to the compiler version, indicating that the compiler also uses those instructions.
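To illustrate how nontemporal stores can be requested without writing assembly, the following hedged sketch shows a triad kernel as it might be fed to the vectorizing Intel compiler (e.g. with -xW, as above). The directive is an Intel-specific hint and an assumption about this particular tool chain; other compilers use different spellings, and the hint may be ignored.

subroutine triad_nt(A, B, C, D, N)
  implicit none
  integer, intent(in) :: N
  double precision, intent(out) :: A(N)
  double precision, intent(in)  :: B(N), C(N), D(N)
  integer :: I
!DIR$ VECTOR NONTEMPORAL
  do I = 1, N
    A(I) = B(I) + C(I)*D(I)   ! store to A may bypass the caches if the hint is honored
  enddo
end subroutine triad_nt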
(Bar chart comparing Intel Prescott and AMD Athlon 64 for the optimization categories discussed in the text, from plain and vectorized compiler code over FPU, SSE2 and nontemporal-store versions to block prefetch, software prefetch and compiler block prefetch; y-axis: memory bandwidth [GB/s].)
Fig. 4. Serial vector triad main memory bandwidth on Intel Prescott and AMD Athlon64
Categories 6 and 7 in Fig. 4 are concerned with different prefetching strategies. “Block prefetch” [12] loads a block of data for all read-only vectors into L1 cache, using a separate prefetch loop, before any calculations are done. This can be implemented by prefetch instructions (no register spill or
cache pollution) or by direct moves into registers, also referred to as “preload”:

for SIZE
  for BLOCKSIZE
    preload B
  endfor
  for BLOCKSIZE
    preload C
  endfor
  for BLOCKSIZE
    preload D
  endfor
  for BLOCKSIZE
    A=B+C*D ! nontemporal store
  endfor
endfor

Note that there is no overlap between calculation and data transfer, and the preload loops for the read-only arrays are separate. This turns out to be the best strategy for block preload. Fusing the loops would generate multiple concurrent load streams that the CPU can seemingly not handle as effectively. “Software prefetch” uses inlined SSE prefetch instructions. While this should have clear advantages (no significant overhead, no register spill and cache pollution, overlap of data transfer and arithmetic), the block preload approach still works better on Intel architectures. The Athlon64, on the other hand, shows a very slight advantage of software prefetch over block preload. From the hand-coded assembly kernels we now come back to compiler-generated code. The question arises whether it would be possible to use block preload in a high-level language, without the compiler disturbing the careful arrangement of preload versus computation. The rightmost category in Fig. 4 shows that block preload can indeed be of advantage here, especially for the Intel Prescott processor. It must be noted, though, that the performance of this code is very sensitive to the tuning parameters. For instance, on an Athlon64 the best strategy is to block for the L1 cache, as expected from the discussion about cache performance. On the Prescott processor, however, blocking for the L2 cache is much better. This might be attributed to the smaller L1 cache size together with longer memory latencies because of a different system architecture (northbridge as opposed to the built-in memory controller of the Athlon64), and cannot be derived directly from in-cache triad performance (Fig. 3) because the recurrence loop hides startup effects due to memory latency. In summary, block preload is a very interesting alternative for Intel Prescott CPUs, but it does not really pay off for the Athlon64. On the Prescott, careful parameter tuning essentially eliminates the need for hand-coded
assembly language. Using block preload techniques for CFD applications thus seems to be a viable option and is being investigated. Contrary to the observations in section 3.1, where it became clear that naive, compiler-generated code is not able to saturate the memory bus, it is now evident that relatively simple prefetch/preload techniques can lead to a large increase in memory bandwidth, reaching 80–90 % of peak. Although the SSE/SSE2 instruction set extensions go to great lengths trying to give programmers more control over caches, they are still of limited use, mainly because of their obviously ineffective and often undocumented implementation on different architectures. While this has improved with the latest CPU generations (Intel Prescott and AMD Athlon64), it is still necessary to reduce the limitations and increase the efficiency of these instructions in order to get a real alternative to more conservative means of optimization.
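As an illustration of what a high-level block preload version might look like (this is our sketch, not the implementation used for Fig. 4; the block size is a placeholder that must be tuned, and an optimizing compiler may drop the preload loops unless the dummy sum s is actually used by the caller):

subroutine triad_block_preload(A, B, C, D, N, s)
  implicit none
  integer, intent(in) :: N
  double precision, intent(out) :: A(N), s
  double precision, intent(in)  :: B(N), C(N), D(N)
  integer, parameter :: BLOCK = 2048      ! tune for L1 (Athlon64) or L2 (Prescott)
  integer :: I, IS, IE
  s = 0.d0
  do IS = 1, N, BLOCK
    IE = min(IS + BLOCK - 1, N)
    do I = IS, IE                         ! separate preload loop per read stream
      s = s + B(I)
    enddo
    do I = IS, IE
      s = s + C(I)
    enddo
    do I = IS, IE
      s = s + D(I)
    enddo
    do I = IS, IE                         ! compute loop on data now residing in cache
      A(I) = B(I) + C(I)*D(I)
    enddo
  enddo
end subroutine triad_block_preload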
4 Shared-memory parallel vector triad

As mentioned previously, OpenMP parallelization of the vector triad seems to be a straightforward task:

do R=1, NITER
!$OMP PARALLEL DO
  do I=1,N
    A(I) = B(I) + C(I) * D(I)
  enddo
!$OMP END PARALLEL DO
enddo

The point here is that all threads share the same logical address space. In an SMP system with UMA characteristics like a standard dual-Xeon node, every processor can access all available memory with the same bandwidth and latency. As a consequence, the actual physical page addresses of the four arrays do not matter performance-wise, at least when N is large (see previous section). On a ccNUMA architecture like SGI Altix or Opteron nodes, memory is logically shared but physically distributed, leading to non-uniform access characteristics (Fig. 5). In that case, the mapping between physical memory pages and logical addresses is essential in terms of performance: e.g., if all data is allocated in one local memory, all threads must share a single path to the data (SHUB, NUMALink4 or NUMALink3 on SGI Altix, depending on the number of threads used) and performance does not scale at all. Customarily, a first-touch page allocation strategy is implemented on such systems, i. e. when a logical address gets mapped to a physical memory page, the page is put into the requesting CPU’s local memory. While this is a sensible default, it can lead to problems because the initial mapping does not necessarily take place in the computational kernel. Due to the first-touch policy, initialization of the four arrays A, B, C and D must be done in a way that each
(Block diagram: each node has two Itanium2 CPUs on a shared 6.4 GB/s bus behind a SHUB, which connects to its local RAM at 10.2 GB/s; the two SHUBs of an SC-brick are coupled by NUMALink4 at 2 × 3.2 GB/s, and bricks are connected via NUMALink3 at 2 × 1.6 GB/s.)
Fig. 5. SGI Altix 3700 SC-brick block diagram. One SC-brick comprises two nodes with two CPUs each. Intra-brick communication is twice as fast as brick-to-brick communication
thread in the computational kernel can access “its” portion of data through the local bus. Two conditions must be met for this to happen: (i) initialization must be done in parallel, using the same thread-page mapping as in the computational kernel, and (ii) static OpenMP scheduling must be used:

!$OMP PARALLEL DO SCHEDULE(STATIC)
  do I=1,N
    A(I)=0.0
    B(I)=BI
    C(I)=CI
    D(I)=DI
  enddo
!$OMP END PARALLEL DO
do R=1, NITER
!$OMP PARALLEL DO SCHEDULE(STATIC)
  do I=1,N
    A(I)=B(I)+C(I)*D(I)
  enddo
!$OMP END PARALLEL DO
enddo
It should be obvious that especially the first condition might be hard to meet in a real user code. Performance figures for out-of-cache data sizes (cf. Table 2) show the effects of improper initialization on ccNUMA architectures. In the column marked “NoInit”, the initialization loop was done on thread zero alone. “ParInit” denotes proper parallel initialization. The IA64 measurements were done on an SGI Altix 3700 system (Itanium2 at 1.3 GHz). While there is no difference between the NoInit and ParInit cases for two threads, which is obvious because of the UMA access characteristics inside the node, performance breaks down dramatically on four threads because two CPUs have
Table 2. Memory performance of the parallel vector triad in GB/s for purely bandwidth-limited problem sizes. Divide by 0.016 to get performance in MFLOP/s

Threads   IA64 (NoInit)   IA64 (ParInit)   AMD64 (ParInit)   SX6+
   1          4.232           4.176            2.694         34.15
   2          4.389           4.389            5.086
   4          2.773           8.678            9.957
   8          1.824          17.33
  16          1.573          34.27
Fig. 6. Memory bandwidth vs. loop length N for the shared-memory parallel triad on SGI Altix (Itanium2) and one SX6+ processor
to use the slow NUMALink4 connection to access remote memory. Consequently, the four-CPU bandwidth is reduced to the performance of this link. With eight and more CPUs, the even slower NUMALink3 interconnect dominates available bandwidth (as there are two links per SC-brick, the breakdown is not as severe here). With parallel initialization, on the other hand, scalability is nearly perfect. For reference, data taken on a four-way AMD Opteron system (2.2 GHz) and on a single NEC SX6+ CPU is also included in Table 2. As expected, eight Altix nodes are the rough equivalent of one single NEC processor, at least for large loop lengths. Figure 6 gives a more detailed overview of performance data. In the “serial” version, the code was compiled with no OpenMP directives. In comparison to the OpenMP (1 thread) case, it is clear that the compiler refrains from aggressive, cache-friendly optimization when OpenMP is switched on, leading to bad L2 performance. In-cache scalability is acceptable, but OpenMP startup overhead hurts performance for small loop
lengths. This has the striking effect that, even when the complete working set fits into the aggregated cache size of all processors, full cache performance is out of reach (see the 4-thread data in Fig. 6). On the other hand, SX6+ data for short loops shows that the often-cited vector pipeline fillup effects are minor when compared to OpenMP overhead. In conclusion, OpenMP parallelization of memory-intensive code on cache-based microprocessors can only be an alternative to vector processing in the very large loop length limit.
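For completeness, a minimal sketch of how bandwidth numbers like those in Table 2 can be obtained is given below. It is our illustration rather than the actual benchmark code; N, NITER and the initial values are placeholders, and omp_get_wtime serves as the wall-clock timer.

program partriad
  use omp_lib
  implicit none
  integer, parameter :: N = 10000000, NITER = 10
  double precision, allocatable :: A(:), B(:), C(:), D(:)
  double precision :: t0, t1, gbs
  integer :: I, R
  allocate(A(N), B(N), C(N), D(N))
!$OMP PARALLEL DO SCHEDULE(STATIC)
  do I = 1, N                            ! parallel first-touch initialization
    A(I) = 0.d0; B(I) = 1.d0; C(I) = 2.d0; D(I) = 3.d0
  enddo
!$OMP END PARALLEL DO
  t0 = omp_get_wtime()
  do R = 1, NITER
!$OMP PARALLEL DO SCHEDULE(STATIC)
    do I = 1, N
      A(I) = B(I) + C(I)*D(I)
    enddo
!$OMP END PARALLEL DO
  enddo
  t1 = omp_get_wtime()
  gbs = 32.d0*N*NITER/(t1 - t0)*1.d-9    ! 32 bytes of useful traffic per iteration
  print *, 'bandwidth [GB/s]      =', gbs
  print *, 'performance [MFLOP/s] =', gbs/0.016d0
end program partriad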
5 Conclusions

We have presented a performance evaluation of the memory hierarchies in modern parallel computers using the vector triad. On Intel Pentium4 or AMD Athlon64 processors even this simple benchmark kernel only exploits roughly half of the available bandwidth if a standard, straightforward implementation is used. A handcoded block prefetching mechanism implemented in assembly language has been shown to improve the performance from main memory by nearly a factor of two. The important observation here was that flooding the memory subsystem with numerous concurrent streams is counterproductive. This block prefetch technique can also be used in high-level languages, making it interesting for real-world applications. Although modern shared-memory systems are widely considered easy to program, we emphasize the importance of careful tuning in OpenMP codes, in particular on ccNUMA architectures. On these systems, data locality (or lack thereof) often dominates performance. If possible, data initialization and computational kernels must be matched with respect to data access because of the first-touch page mapping policy. In our experience, this seemingly trivial guideline is often violated in memory-bound user codes. Even considering the significant improvements in processor performance and ccNUMA technology over the past years, classical vector processors still offer the easiest and sometimes the only path to satisfactory performance levels for vectorizable code.
Acknowledgments

This work has been financially supported by the Competence Network for Technical and Scientific High Performance Computing in Bavaria (KONWIHR). We thank H. Bast, U. Küster and S. Triebenbacher for helpful discussions. Support from Intel is gratefully acknowledged.
References

1. Lemuet C, Jalby W, Touati S (2004) Improving load/store queues usage in scientific computing. The International Conference on Parallel Processing (ICPP’04), Montreal. IEEE
2. Oliker L et al. (2003) Evaluation of cache-based superscalar and cacheless vector architectures for scientific computations. In: Proc. SC2003, Phoenix, AZ
3. Deserno F et al. (2004) Performance of scientific applications on modern supercomputers. In: Wagner S et al. (eds) High Performance Computing in Science and Engineering, Munich 2004. Transactions of the Second Joint HLRB and KONWIHR Status and Result Workshop. Springer-Verlag, Berlin, Heidelberg
4. Oliker L et al. (2004) Scientific computations on modern parallel vector systems. In: Proc. SC2004, Pittsburgh, PA
5. Pohl T et al. (2004) Performance evaluation of parallel large-scale Lattice Boltzmann applications on three supercomputing architectures. In: Proc. SC2004, Pittsburgh, PA
6. Schönauer W (2000) Scientific Supercomputing. Self-edition, Karlsruhe
7. Jalby W, Lemuet C, Touati S An effective memory operations optimization technique for vector loops on Itanium2 processors. Concurrency Comput Pract Exp (accepted for publication)
8. Intel Corp. (2004) Itanium2 programming and optimization reference manual. Intel, http://developer.intel.com/
9. Bast H, Levinthal D, Intel Corp. Private communication
10. Intel Corp. (2004) IA-32 optimization reference manual. Intel, http://developer.intel.com/
11. RightMark Memory Analyzer, http://cpu.rightmark.org/products/rmma.shtml
12. AMD Athlon processor, x86 code optimization guide, 86–98, http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/22007.pdf
Dynamic Virtual Organizations in engineering

S. Wesner¹, L. Schubert¹, and Th. Dimitrakos²

¹ High Performance Computing Center Stuttgart (HLRS), Allmandring 30, 70550 Stuttgart, Germany, [wesner,schubert]@hlrs.de
² British Telecom, 2A Rigel House, Adastral Park, Martlesham Heath, Ipswich, Suffolk, IP5 3RE, UK, [email protected]
Summary. “Virtual Organizations” are among the key concepts in the Grid computing community. They are currently evolving from basically static to dynamic solutions that are created ad hoc in reaction to a market demand. This paper provides a definition of “dynamic Virtual Organizations” in order to assess specific challenges of an abstract collaborative engineering scenario. The paper concludes with a description of an evolving architecture enabling such dynamic Virtual Organizations.
1 What is a Virtual Organization?

The term “Virtual Organization” (VO) first appears in [1], [2] and [3] as an organizational model in the area of economics. Combining this with the idea of controlled resource sharing, as envisaged by the Grid community (see [4]), and the requirements from business-to-business interactions, the following definition can be given, which is outlined further in [5]: A Virtual Organization (VO) is understood as a temporary or permanent coalition of geographically dispersed individuals, groups, organizational units or entire organizations that pool resources, capabilities and information to achieve common objectives. Virtual Organizations can provide services and thus participate as a single entity in the formation of further Virtual Organizations. This enables the creation of recursive structures with multiple layers of “virtual” value-added service providers. Important to note here is that the participants in the VO are typically part of a larger (but limited) network of enterprises that agreed on common formats and rules for interaction. So despite aiming at a dynamic process of building Virtual Organizations, including processes like negotiating Service Level Agreements (SLA) and Security Policies, it is assumed that the potential participants are known and understand a common language. This is typically achieved by formulating “out-of-band” contracts and agreements between
the physical organizations. Note that this is not a globally accepted definition of “Virtual Organization”, and in other contexts the term may be used to denote a network, whilst the coalitions created dynamically on demand are called “sessions”. For the rest of this paper, however, we will use “Virtual Organization” as defined above.
Fig. 1. Evolution of potential service providers to VO participants
1.1 Life-cycle of VOs

The life-cycle model presented here follows the ideas developed in the VOmap roadmap project “Roadmap Design for Collaborative Virtual Organisations in Dynamic Business Ecosystems” [6]. The analysis of specific challenges with applying the VO model to the engineering domain, as discussed below, will be aligned to the life-cycle phases identified in that project. A more detailed description of the phases can be found in [7].

Identification

The identification phase deals with setting up the Virtual Organization - this includes the selection of potential business partners from the network of enterprises, by using search engines or looking up registries. Generally, identification-relevant information contains service descriptions, security grades, trust & reputation ratings etc. Depending on the resource types, the search process may consist of a simple matching (e.g. in the case of computational resources, processor type, available memory and respective data may be considered search parameters with clear-cut matches) or of a more complex process, which involves adaptive, context-sensitive parameters. For example, the availability of a simulation program may be restricted to specific user groups or only to certain data types, like less confidential data etc. The process may also involve metadata like security policies or SLA templates with ranges of possible values and/or dependencies between them, such as
bandwidth depending on the applied encryption algorithm. The identification phase ends with a list of candidates that could potentially perform the roles needed for the current VO. After this initial step, from the potentially large list of candidates the most suitable ones are selected and turned into VO members, depending on additional aspects that may further reduce the set of candidates. Such additional aspects cover negotiation of actual Quality of Service (QoS) parameters, availability of the service, “willingness” of the candidate to participate etc. It should be noted that though an exhaustive list of candidates may have been gathered during the identification phase, this does not necessarily mean that a VO can be realized - consider the case where a service provider may not be able to keep the promised SLA at a specific date due to other obligations. In principle, the intended formation may fail for at least two reasons: (a) no provider (or not enough providers) is able to fulfill all given requirements when it comes to SLAs, security etc., or (b) providers are not (fully) available at the specified time. In order to circumvent these problems, either the requirements may be reduced (“choose the best available”) or the actual formation may be delayed and re-launched at a more suitable time. Obviously there may also be the case where a general restructuring of the requirements leads to a repetition of the identification phase.

Formation

At the end of the (successful) identification phase the initial set of candidates will have been reduced to a set of VO members. In order to allow these members to perform their anticipated roles in the VO accordingly, they need to be configured appropriately. During the formation phase a central component such as the VO Manager (see also the description of the VO Management Services below) distributes the VO-level configuration information, such as policies, SLAs etc., to all identified members. These VO-level policies need to be mapped onto local policies. This might include changes in the security settings (e.g. open access through a firewall for certain IP addresses, create users on machines on the fly etc.) to allow secure communication, or simply the translation of XML documents expressing SLAs or Obligations to a product-specific format used internally. After the formation phase the VO can be considered ready to enter the operation phase, where the identified and properly configured VO members perform according to their roles.

Operation

The operational phase could be considered the main life-cycle phase of a Virtual Organization. During this phase the identified services and resources contribute to the actual execution of the VO’s task(s) by executing pre-defined business processes (e.g. a workflow of simulation processes and pre- and
Fig. 2. A simplified view on the VO lifecycle
post-processing steps). A lot of additional issues related to management and supervision are involved in this phase in order to ensure smooth operation of the actual task(s). Such issues cover carrying out financial arrangements (accounting, metering), recording of and reacting to participants’ performance, updating and changing roles and therefore access rights of participants according to the current status of the executed workflow etc. In certain environments persistent information of all operations performed may be required to
allow for later examination e.g. to identify fault-sources (for example, related to the scenario provided below, in case of a plane crash). Evolution Evolution is actually part of the operational phase: as participants in every distributed application may fail completely or behave inappropriately, the need arises to dynamically change the VO structure and replace such partners. This involves identifying new, alternative business partner(s) and service(s), as well as re-negotiating terms and providing configuration information as during identification, respectively formation phase. Obviously one of the main problems involved with evolution consists in re-configuring the existing VO structure so as to seamlessly integrate the new partner, possibly even unnoticed by other participants. Ideally, one would like the new service to take over the replaced partners task at the point of its leaving without interruption and without having to reset the state of operation. There may other reasons for participants joining or leaving the VO, mostly related to the overall business process, which might require specific services only for a limited period of time - since it is not sensible to provide an unused, yet particularly configured service to the VO for its whole lifetime, the partner may request to enter or leave the VO when not needed. Termination During termination, the VO structure is dissolved and final operations are performed to annul all contractual binding of the partners. This involves the billing process for used services and an assessment of the respective participants’ (or more specifically their resources) performances, like amount of SLA violations and the like. The latter may of particular interest for further interactions respectively for other potential customers. Additionally it is required to revoke all security tokens, access rights etc. in order to avoid that a participant may (mis)use its particular privileges. Generally the inverse actions of the formation phase have to be performed during Termination. Obviously partial termination operations are performed during evolution steps of the VO’s operation phase (cf. above).
2 VOs for Collaborative Engineering Due to the complexity of the involved tasks and the associated risks in the automotive and aerospace industry, companies very often form a Joint Venture (JV) in order to overcome these problems. Within such Joint Ventures, partners typically focus on specific business aspects and contribute their data, information and knowledge. Hence a JV can be seen as a simplified form
294
S. Wesner, L. Schubert, and Th. Dimitrakos
of a Virtual Organization that aims at service integration on administrative level only, rather than on the infrastructure level too (as intended by the VO). From the perspective of above described life-cycles, a Joint Ventures covers only the operational phase, whilst all other phases and aspects (identification, formation, evolution and termination) are performed by human actors using legal (paper) contracts. With respect to long-term goals that involve a rather static infrastructure, as e.g. when developing a next generation airplane, such an approach covers all relevant aspects and provides the required security. However other cases such as customizing airplanes e.g. with Wireless LAN, extended range etc. may require fast reaction to market demand in order to deliver a solution faster and/or cheaper than a competitor. 2.1 An example scenario This scenario is a simplified and abstracted version of a real-world scenario from the aerospace industry, which is examined further in the TrustCoM IST project http://www.eu-trustcom.com). In the given scenario different parts of the design data (hull, turbine, wheels etc.) are stored in databases and files in geographically dispersed companies. The main airplane manufacturer integrating this data wants to update his design on basis of configurations requested by a customer. To achieve this goal, several companies have to interact with each other performing different tasks that contribute to the overall goal - these roles are listed below. We hereby assume that the business partners are part of an existing enterprise network and have agreed upon specific formats for communication, as well as to describe their metadata, policies etc. Furthermore all partners must have published their services in searchable registries (in the commonly agreed form). Customer: The customer is expecting the VO to manufacture a certain product for him and/or perform specific tasks. In the given scenario the consumer requests the adaptation of the airplane design data on basis of his/her configuration data. The customer provides a resource to the VO that contains all configuration parameters. Computational Service Provider (CSP): The CSP offers computational resources to a VO. It is hereby assumed that a set of (task-specific) applications has been pre-installed. In order to execute these applications the particular permissions need be issued by a License Provider (cf. below). Obviously it might be necessary to deploy additional applications or customize the existing ones (e.g. macro packages) Application Provider (AP): An AP may offer specific applications that can be deployed on a CSP’s resources, or provides add-ins and/or configuration information for existing applications. License Provider (LP): The License Provider issues the execution rights for particular applications running (or to be run) on a specific CSP.
Dynamic VOs for engineering
295
Domain Expert: Domain Experts are to review intermediate results produced during the execution of the workflow an to decide at certain points how to proceed further. Storage Provider (SP): The storage provider offers resources that can store data such as simulation results. The SP needs to be able to control the access to the whole or parts of the resource(s), depending on the role and rights of the service trying to access the data. Process Designer (PD): The process designer generates the overall business process to be executed from the customers request - in the given scenario, this overall process may have been designed a priori. On basis of this business process, the list of required partners as well as their respective workflows is generated. With the initial request of the customer to reconfigure the design data, a known Process Designer (PD) is triggered to create a business process on basis of a given template. This process should define all the relevant tasks involved in updating the design data, as well as the metadata of each required role. On basis of this business process, the identification phase of the VO is initiated, i.e. potential service providers fulfilling the process’ requirements are identified by querying the registry. The resulting list of candidates is reduced to the actual set of VO participants during the second phase, and the additional configuration information is deployed (see also figure 2). During the operation phase, several calculation processes are executed in parallel. These processes are controlled by workflows derived from the overall business process by the PD. Workflows contain functions such as data retrieval from databases (e.g. the configuration and design databases), execution of calculations, retrieval of tokens from a license service provider and the storage of intermediate and final data. An excerpt of such a workflow for a single calculation process is depicted in figure 3 in the form of an UML Activity Diagram. The activities shown there are potentially executed on resources owned by different companies from different countries. 2.2 Selected challenges of this scenario Such a scenario as sketched above puts a lot of challenges on a potential implementation, since failure of the VO should be prevented at any cost. Considering the issues at stake (money, security), additional precautions must be taken to maintain the confidentiality of the data. However since the VO is overall a rather static one, further risks that arise from the changes of the VO structure bear less influence than in other scenarios - one must keep in mind though that complex computations are involved which could lead to huge delays in the overall process if lost in the middle. We shall elaborate these issues further in the following:
296
S. Wesner, L. Schubert, and Th. Dimitrakos Start Workflow
Query Data from configuration database
Create a virtual design data source
Query Data from virtual data source
«datastore» Design Database A «datastore» Design Database B
Store data using Storage Provider Service
Send Notification on Progress
Perform pre-processing
Store intermediate result
Acquire License for application
Send Notification on Progress
This can be done by a human, based on simple algorithms or done using complex data mining algorithms.
Store intermediate results
Perform computational intensive calculation on HPC resource
Analyse Results
[Results failed analysis] [Results successfully passed analysis]
Send Execution Completed
Result Achieved
Fig. 3. A part of the overall workflow for the execution of a calculation
Partner and service identification With respect to identifying the most suitable partners for a Virtual Organization, particular care must be taken to avoid risks of failure. One of the complex issues to solve consists in possible misinterpretation of the avail-
Dynamic VOs for engineering
297
able metadata 3 : e.g. mapping required functionalities to a set of (individually chosen) operations or descriptions, as required by the overall business process (cf. identification phase) is obviously not straightforward. Another challenge consists in addressing frequently changing information, for example the ability to provide a certain Service Level Agreement is dependent on the current load situation of the resources. More hazardous for the overall operation of the VO, potential misuse of partners’ roles should be prevented at any costs. During identification, one would like to select only a range of partners that can be trusted not to cause problems (on purpose) and support all required security measurements: Trustworthiness of partners, services and data A service provider’s trustworthiness to perform in an expected way is currently understood as a mixture of past behavior and recommendations from other companies - much in the same way as the reputation system of eBay. Ideally, contexts of past performances are taken into consideration and future behavior is estimated on this basis. Obviously other factors influence the trustworthiness required for putting up Virtual Organizations, such as applied security rules, involvement in other (competitive) VOs, geographical position (if political issues are taken into account) etc. Since companies may offer more than one service, a distinction between the trustworthiness of the services itself may be required and in some cases even the reliability of (input-/output-)data may have to be assessed. Though it would be theoretically possible to store all this additional data in the metadata service description, this is not sensible since the types of required data, as well as their content are very task-dependent. The current approach to solving this issue consists in additional, registry-like services that "assess" a company’s/service’s/data’s trustworthiness considering specific factors. Robustness against misbehavior It is generally assumed that the VO model is used in a scientific area where all partners share the same interests in contributing to reach the overall VO’s goals. In industrially driven scenarios however, such an assumption does not generally hold true. Some participants may, similar to the Joint Venture idea, contribute to the VO in order to be able to compete with larger companies and thus are willing to share the involved risks. Other may provide their resources simply for reasons of revenue. In order to ensure goal compliant behavior of all participants during the actual operation phase, monitoring and enforcing mechanisms of some kind are required: As discussed above, the Quality of Service to be maintained is negotiated during the formation phase. To enact this, a kind of monitor needs to constantly watch the partners 3
³ As said, we must assume that a consistent format was agreed upon first.
performance and compare it with the agreed-upon SLA - in case of a violation of this agreement, the customer or a management instance acting on his/her behalf is informed. A similar approach is chosen for controlling access rights and other security-related issues by monitoring who accesses which data and the like. When violations (intentional or not) occur, different reactions may be triggered in order to prevent an overall failure of the VO. The most common one is to exclude or at least suspend the violating member from the VO and replace him/her if necessary. Ideally this takes place without any loss of data, like computation state etc. With respect to confidential data and possible security leaks it may be sensible to put the overall status of the VO to “red alert”, i.e. locking any resource sharing for the duration of this evolution process.
3 Towards a framework for dynamic VOs

In this section the key elements of the TrustCoM framework, which tries to address all challenges involved in realizing dynamic Virtual Organizations, are highlighted. As this work is still in progress, the presented material must be considered an intermediate result which will be further updated.

Service Provider

Within the TrustCoM project, Service Providers are distinguished according to the types of service they provide to the VO, i.e. in what way they contribute to the overall goal(s):

VO Management Services
TrustCoM assumes that (at least) one company is in control of the VO and implicitly acts on behalf of the customer’s intentions - obviously VO Manager and customer may be the same instance. Management-related services take over the responsibilities (amongst others) to initiate and lead the life-cycle phases, trigger specific tasks, maintain a member list etc.

Goal Oriented Services
Goal Oriented Services, sometimes referred to as Application Services, provide services that directly contribute to the VO’s task as specified by the overall business process. In the scenario given above, this could be e.g. a company calculating the hull parameters.

Trusted Third Parties
This kind of provider offers services that can be generally used by any VO, yet are in some way adapted to the particular Virtual Organization - to these belong additional log-keeping and discovery-supporting services etc. These services obviously need to observe the VO-specific access rights and react to the corresponding requests.

Additional Services
There are other services in a given enterprise network that are required for certain aspects of the VO (mainly regarding the identification phase), like e.g. registries storing metadata of all the services available in the network and the like.

Note that not only “Goal Oriented Services” are discovered during the identification phase of the VO, but that no type of Service Provider needs to
be known in advance, i.e. it could be treated as any other partner that needs to be discovered first. Furthermore no distinction is made between “stand-alone” and so-called “aggregated” services, i.e. it is generally not apparent whether a company provides the respective service all by itself or whether it needs support from other (company-external) services. In the latter case, however, it must take responsibility for all additional service providers it introduces to the VO.
4 Framework units

To enable a Virtual Organization in its envisaged form, certain types of functionalities are required that must be provided by the framework⁴. In the following, a non-exhaustive list of the main functionalities is provided, structured into the following subsections - most of these represent independent functional units that could be realized as separate Trusted Third Parties, yet others depict general capabilities that need to be supported by the units.

4.1 General service structure

Components, units and interfaces that need to be supported by at least all goal-oriented services of a VO:

Manageable Interfaces
In order to enact control over application services, some interface is needed that allows for monitoring and possibly influencing the settings, respectively the system configuration, thus allowing to manage the service to some degree.

Service Management Interface
The functionalities of the “Manageable Interface” are extended by VO-specific management tasks relating to the deployment of policies and contracts, which occurs once when the service enters the VO but may re-occur during evolution.

Security Services
One major requirement of the envisaged VO types is data protection. As such, only specific users are allowed to access the service providers and their data. The Security Services of a provider identify potential users on the basis of information provided by the VO Management and check their privileges, potentially blocking access.

Contract Management
Each service provider must ensure that the agreed-upon QoS is met - the contract management unit translates given SLAs into service-specific terms and constantly compares those with the service’s (machine, system etc.) status.
⁴ Due to the modular approach, it is obviously possible to have pre-existing service providers taking over the respective functionalities - it must however be assumed that most of them do not exist in the required manner and/or are not efficient enough.
Notification Systems
Any component may trigger notification messages when specific events occur. The notification systems aggregate these notifications, categorize them according to topics and forward them to so-called “subscribers” interested in the respective types of events. The system may furthermore be responsible for interpreting received notification messages.

4.2 VO management

The functionalities that enable the management of a Virtual Organization:

Membership Management
A dynamic VO is characterized by VO members entering and leaving it when appropriate and by changes of their roles over time. Membership management is responsible for keeping track of these changes and initiating them where required. Furthermore, these changes have to be communicated to all service providers affected - this includes invitation and exclusion, as well as changes of contact locations etc.

Business Process Manager
The overall process of the VO (i.e. the actions to be performed to achieve the VO goals) is controlled by the Business Process Manager: it generates events to trigger specific workflow steps and aggregates services’ states to keep track of the overall progress.

Policy Generation & Deployment
Setup, respectively configuration information (including security tokens, information about other service providers etc.) has to be passed in a standardized form from the Management Systems to the respective VO participants using this unit.

Coordination & Federation
Since a VO enacts distributed processes, coordinating the interaction is of high importance. This requires a set of mechanisms that ensure coordinated message exchange between interacting participants, distribution of agreements, policies and business processes etc.

4.3 Trusted third parties

Functionalities which are preferably performed by trusted third parties cover:

Notification Brokers
Notification Systems may be enhanced using Brokers that take care of managing subscribers (“event sinks”) and event sources, i.e. service providers interested in specific topics may subscribe to the Brokers instead of to all possible services raising these types of events.

Service Discovery Unit
This unit interacts with well-known repositories in order to gather potential service providers fulfilling the criteria required for achieving the overall goal (as defined by the Business Process Manager).
Log
A log of the main events should be maintained in any Virtual Organization in order to trace error sources and to evaluate a participant’s past behaviour (so as to re-assess his/her trustworthiness). Such a log can also be used for billing purposes during dissolution.

4.4 Additional functionalities

From the overall setup, additional requirements arise that should be met yet are not particularly related to the VO framework:

Service Publishing
In order to retrieve services, descriptions of their functionalities, possible configuration etc. must be available and accessible in some form. For this reason, these descriptions are stored in phone-book-like repositories that can be queried by e.g. discovery units.

Service Meta Directory
To speed up the discovery process, it is sensible to distinguish between types of services and have repositories specialise in specific types. A Meta Directory would then list all repositories according to the service types covered by them.

Business Process Template Management
Workflows for typical, recurring business requests can be created in advance and stored as a kind of template in a repository. During the identification phase, this registry may be queried to retrieve an appropriate business process meeting the requested task.
5 Conclusions

With the growing e-business market, future business collaborations will have to put additional focus on so-called Virtual Organizations. Generally, setting up such environments is very laborious and renders the collaborations rather static. In order to react to the constant changes in business demands, a dynamic component needs to be introduced. In this paper we have presented an approach towards enabling dynamic Virtual Organizations using Web Services and Grid technologies. The life-cycle of such VOs covers four major steps, namely Identification, Formation, Operation, and Dissolution. Since the structure, i.e. the participants, their processes etc., may change during the operation phase (e.g. if one of the providers is not maintaining the required Quality of Service), one needs to distinguish between the actual “Operation” of the VO and its “Evolution” during the third, main phase - this part of the Operation phase can be regarded as undergoing Identification, Formation and Dissolution with a restricted set of participants. On the basis of a simplified collaborative engineering scenario, we discussed these phases and identified the typical roles involved in a Virtual Organization. We furthermore analyzed the specific requirements and challenges, in particular for the Identification and Formation phases, and derived from those an outline of a
middleware framework. These building blocks enable the automatic creation of dynamic Virtual Organizations on demand.
6 Acknowledgments

The results presented here are partially funded by the European Commission under contract IST-2003-01945 through the project TrustCoM. In particular we would like to acknowledge the work done by David Golby from British Aerospace in the definition of the Collaborative Engineering scenario for TrustCoM which was the basis for the generic scenario presented here.
References

1. Saabeel W, Verduijn TM, Hagdorn L, Kumar K (2002) Electron J Organizational Virtualness 4: 1–17
2. Strader TJ, Lin F, Shaw MJ (1998) Decis Support Syst 23:75–94
3. Petropoulos K, Balatos A, Zompolas G, Voukadinova Z, Luken M, Spiewack M, Tarampanis K, Ignatiadis Y, Svirskas A, Sidiropoulos A (2003) D1.3 Conceptual Model of the LAURA Prototype - Definition of Functionalities. LAURA IST Project
4. Foster I, Kesselmann C, Tuecke S (2001) Int J Supercomputer Appl 15(3): 200-222
5. Dimitrakos T, Golby D, Kearney P (2004) Towards a trust and contract management framework for dynamic Virtual Organisations. eAdoption and the Knowledge Economy: eChallenges 2004. IOS Press
6. Camarinha-Matos LM, Afsarmanesh H (2003) A roadmap for strategic research on Virtual Organizations. In: Camarinha-Matos LM, Afsarmanesh H (eds.) Processes and foundations for Virtual Organisations. Springer Verlag
7. Valles J, Dimitrakos T, Wesner S, Serhan B, Ritrovato P (2003) The Grid for e-collaboration and Virtual Organisations. Building the Knowledge Economy: eChallenges 2003. IOS Press
8. Almond J, Snelling D (1999) Future Generation Comput Syst 15:539–548
9. Snelling D The abstract job object: an open framework for seamless computing. http://www.fz-juelich.de/unicoreplus/
10. Foster I, Kesselman C, Nick J, Tuecke S The physiology of the Grid: an open Grid services architecture for distributed systems integration. Global Grid Forum
11. W3C Working Draft Web Services Description Language (WSDL) 1.1. http://www.w3.org/TR/wsdl.html
12. Dimitrakos T, Mac Randal D, Yuan F, Gaeta M, Laria G, Ritrovato P, Serhan B, Wesner S, Wulf K (2003) An emerging architecture enabling Grid-based application service provision. In: Proc. 6th IEEE International Enterprise Distributed Object Computing Conf. (EDOC 2003). IEEE
Algorithm performance dependent on hardware architecture

U. Küster and P. Lammers

High Performance Computing Center Stuttgart (HLRS), University of Stuttgart, Nobelstraße 19, 70569 Stuttgart, Germany
[email protected], [email protected]

Summary. The performance of algorithms depends on a whole set of parameters: not only the frequency of the processor but also its architecture, the bandwidth and the different latencies for getting data. The implementation of the algorithm is also essential. We try to identify some important parameters by analyzing the delivered performance of some typical algorithms and to show the differences between architectures.
1 Introduction

Benchmarks of machines typically give an integral number for a large collection of separate events on a complex machine. They are designed to support a decision process, but not to give insight into the reasons why a machine is fast or slow under special circumstances. In the following sections we give some remarks on analytical benchmarking, where we are in search of parameters showing problems and potentials of architectures for numerical programs. The results can be understood as hints for programmers in designing numerical codes. Numerical programs consist of collections of loops, nested and non-nested, long and short, of floating point instructions. Complex branching is less typical, but possible. Indirect addressing is widespread. The datasets are large; otherwise there is no need for a high performance computer. Numerical programs may be quite large in contrast to the compute-intensive kernels doing most of the operations. These kernels have to be identified and optimized.
2 Fundamental properties of loops

Nearly all work of numerical algorithms is done in loops. To run a loop the processor loads data periodically, combines these data by useful operations
and stores the results. The processing is done in a pipelined way: processing of the first data tuples is still ongoing while later data tuples begin processing. For a very long loop at least one of the participating units is busy at any given time. But if the loop size is short, the time delay for filling the units will be an important part of the overall time. This appears in the situation of multiple nested loops. The length of any individual loop may be moderate, but the overall problem size as the product of all nested loop lengths may be large. If the operations of the inner loop iterations are not overlapped for the outer loops, any overhead of the inner loop has to be multiplied by the iteration counts of the outer loops. We exclude algorithm-dependent branching. This is allowed for most numerical algorithms. In contrast, branching is very important for example for parsing steps. For a cache processor the fundamental inner operation is done by the following steps:
1. load the buffers containing the data; because of the latencies, load initialization should be done as early as possible; the time for getting the data depends on the actual location of the data;
2. operate on the data as soon as the first data tuples are present; if the buffers are long enough this step needs some time;
3. begin to store the data as soon as the first store buffer is ready.
For cache-based machines the buffers are the cache lines in the caches, and for vector computers the vector registers. These mechanisms end up in a simple timing model

time(n) = start_up + incr ∗ n,     (1)

where start_up is the constant part of the loop, including the initialization and ending overhead, and incr is the time for one iteration; n is the problem size. Both start_up and the incremental part incr vary with the location where the instructions find the data: the first increases with increasing latency and the second with the inverse of the bandwidth. Assuming that all important loops of an algorithm have the same size, all different parameters for start_up and for incr will be accumulated into a common sum. If the results of the first loop (e.g. a dot product) are not used in the second loop, the start-up of the second loop may be hidden by the first loop, as long as the compiler is able to detect this possibility. In this best case the start-up of the second loop does not contribute to the computing time. For the combined loops we end up with the same formula with different parameters. The parameters depend on the probability distribution describing the location of the data in the caches and in the memory. For nested loops with fixed iteration counts we have to replace the simple timing model by polynomials over the different counts, with coefficients which depend on the cache parameters in the same way. Measuring these coefficients for all problem sizes (= loop iteration counts) may conversely be used for the determination of the cache properties
and the probability distribution for the specific algorithm. The performance model for a simple algorithm, where time depends linearly on the problem size n, is simply

performance(n) ∼ op(n) / (start_up + incr ∗ n).     (2)

This is applicable to an interesting set of numerical codes, here the Conjugate Gradient algorithm. For more general models we replace the denominator by polynomials of small degree (e.g. 3 for dense matrix algebra).
3 Tested architectures

We compare three different machines, each representing a different architecture. The Intel Nocona is the first Intel processor with the EM64T instruction set, which is comparable to the AMD Opteron x86-64 instruction set. The tested Nocona has a frequency of 3.4 GHz, a peak performance of 6.4 GFLOPs and an L2 cache size of 1 MB. We used the Intel Fortran compiler 8.1. The Intel IA64 processor is tested in a NEC TX7 (Asama) with 32 processors, a frequency of 1.5 GHz and cache sizes of 16 KB L1, 256 KB L2 and 6 MB L3. The peak performance is 6 GFLOPs. The compiler used is the NEC efc Revision 3.4, which is a slightly enhanced Intel 7.2 compiler. The NEC SX-6 is a vector machine with 565 MHz processor frequency, 8 arithmetic pipes and a peak performance of 9 GFLOPs. We used the Fortran compiler f90 Version 2.0 Rev. 305. This machine also has caches in its scalar processor part, but they have no influence on the run time behaviour of the algorithms tested here.
4 Measuring methodology

To get the performance we counted the operations of the algorithm as a function of the problem size and measured the run times with highly accurate hardware counters. Sufficient repetitions ensured the accuracy of the timings for the small cases; for the large cases the smallest values of numerous retries guaranteed the reproducibility of the measurements. The generated curves show a very discontinuous behaviour and are locally non monotone due to cache and memory access patterns. To get the parameters of the performance hyperbola (2) in the different sections of the curves we used a modified version of the orthogonal regression package ODRPACK (http://www.netlib.org) for nonlinear parameter fitting. The sections are glued together by blending functions; locally best values get larger weights and essentially determine the estimation. The details are not self-evident but are omitted here. Of main interest here is the first step-up
[Figure: daxpy performance in MFLOPs over the loop length (1 to 10^6) for Intel Nocona, NEC Asama and NEC SX-6.]
Fig. 1. Daxpy for different architectures
of the characteristic performance functions. Of interest are also the subsequent performance levels due to cache-resident data and the final part where the data come directly from memory.
5 Daxpy

daxpy is an important loop for applications in linear algebra and is part of the BLAS 1 library. It is also the main test loop of the 'streams' benchmark. Table 1 shows the fitted parameters of the performance hyperbola (2) for the daxpy loop

   do i=1,imax
      a(i) = b(i) + alpha*c(i)
   enddo

The operation count is op(n) = 2 ∗ n. The iteration count imax covers a large range in many applications. Figure 1 shows the performance of daxpy as a function of the loop length for the three processor architectures. The performance data are measured by repeating the same loop a large number of times. This ensures the accuracy of the measurement and enables caching as long as all the data involved fit into one of the caches; the results therefore have to be understood as best-case results. We take the result of the last iteration as input for the first iteration of the next repetition. This forces a non-overlapped execution of the loop and exposes the total startup. The performance curves show deep fluctuations from step to step. This is the case even for the larger iteration counts, although hidden in the figure; periodic setups of inner repetitions are the reason.
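The repetition with a carried dependency described above can be sketched as follows. This is only an illustration of the measuring idea, not the actual harness used; in particular the timing routine and the way the dependency is fed back are assumptions.

   ! Sketch of the daxpy measurement with repetitions (illustrative only).
   ! Feeding a result of the previous repetition back into alpha creates a
   ! dependency between repetitions, so that consecutive loop instances
   ! cannot be overlapped and the full startup is measured.
   subroutine measure_daxpy(imax, nrep, t_per_rep)
     implicit none
     integer, intent(in)  :: imax, nrep
     real(8), intent(out) :: t_per_rep
     real(8) :: a(imax), b(imax), c(imax), alpha, t0, t1
     integer :: i, rep
     b = 1.0d0; c = 1.0d0; alpha = 0.5d0
     call cpu_time(t0)                    ! stands in for a hardware counter
     do rep = 1, nrep
        do i = 1, imax
           a(i) = b(i) + alpha*c(i)
        end do
        alpha = alpha + 1.0d-30*a(imax)   ! carried dependency between repetitions
     end do
     call cpu_time(t1)
     t_per_rep = (t1 - t0)/real(nrep,8)
   end subroutine measure_daxpy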
We see that the peak performance of the loop is comparable for all three architectures; what differs is the location of the maximal values. The Nocona shows its best values at loop lengths around 40, the IA-64 around 8000 and the NEC SX-6 beyond 40000. The cache machines show a performance breakdown at each next cache/memory level. Note that the L1 cache of the IA-64 is not used for floating point numbers. Fitting these data with the performance hyperbola is not possible without a model for the periodic fluctuations; we are able to model these for the IA-64 at least in the early phase. Fitting the locally best values with (2), however, works perfectly for the IA-64, as can be seen in figure 1.

Table 1. Daxpy: start up and incremental times in clock periods
             Intel Nocona   Intel IA64   NEC SX-6
start_up         21.4          34.56       304.0
incr_1            2.68          1.00         0.376
incr_2            7.93          2.07         -
incr_3           35.25         13.13         0.44
All times are given in clock periods. start_up is the loop start up. It includes the overhead of a second loop used for the repetitions; this certainly influences the results, but we do the same for all machines, so the relations of the start ups are still meaningful. We assume the same startup for all caches. This is clearly wrong, but the measured data do not allow these values to be separated. incr_1 is the time per iteration for data coming from the L1 cache in case of the Nocona and the L2 cache in case of the IA64. incr_2 is the time per iteration for data coming from the L2 cache in case of the Nocona and the L3 cache in case of the IA64. incr_3 is the time per iteration for data coming from the memory. Remember that the ratio start_up/increment determines the n_1/2 loop length, i.e. the loop length at which half of the peak performance of the loop is reached. In case of the NEC SX-6 the values are determined so that the optimal memory bandwidth is reached. The respective curve in figure 1 touches the values in the lower part of the measured curve but overestimates the performance for larger values; the value incr_3 is adjusted for this part. We see a clear interconnection network contention or memory degradation. Measured in clocks, the start up of the NEC SX-6 is 8.8 times higher than for the Intel IA64 and 14.2 times higher than for the Nocona. This is due to the vector register length of 256 entries and to the 8 pipelines of the vector machine, which have to be filled for continuous operation. A second reason is the memory latency, which is compared here to the cache latencies of the microprocessors, which are naturally smaller. But even if the vector machine had processor-near caches we would still see the effect of the vector register depth.
On the other hand, the values for incr_1 and incr_3 are much smaller than those for the microprocessors. This is, in the end, the essential advantage of the vector machine.
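As an illustration of the n_1/2 rule mentioned above, the half-performance lengths implied by the fitted values of Table 1 are (these numbers are derived here from the table, not quoted in the text): n_1/2 ≈ 21.4/2.68 ≈ 8 for the Nocona with L1-resident data, n_1/2 ≈ 34.56/1.00 ≈ 35 for the IA64 with L2-resident data, and n_1/2 ≈ 304.0/0.376 ≈ 808 for the NEC SX-6 with data from memory. This is consistent with the much larger loop lengths at which the vector machine approaches its performance maximum in figure 1.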
6 Simple CG algorithm

We have run tests for many simple loops like daxpy. But there are reasons to assume that the performance of a complete algorithm cannot be anticipated from the performance of these simple loops alone. Therefore a simple CG algorithm was implemented and tested for different problem sizes. This algorithm consists only of a few long loops which are repeated iteratively. Here it solves the linear problem arising from a finite difference discretization of the Laplace equation with Dirichlet boundary conditions on a square domain. Preconditioning is not included. We tested the performance with stencils of 5 and of 9 neighbours, which result in sparse matrices with 5 and 9 diagonals. The sparse matrix vector multiplication dominates the processing time. The problem size is the number of nodes or, equivalently, the dimension of the matrix; all important long loops have this iteration count. We increase the size of the domain exponentially with some intermediate steps and thus obtain the performance of the algorithm as a function of the problem size. We compare different implementations of the sparse matrix vector product. Our target is not to find the best possible implementation but to find hints for programming. The tested code segments use Fortran 90 array syntax with vector operations; we assume that explicit loop syntax would be faster, but we did not test this. A derived type for the sparse matrix implements at the same time the row ordered and the jagged diagonal formulation. Row ordered schemes lead to a long outer loop containing a short reduction loop for the matrix vector product, jagged diagonal schemes to a short outer loop containing long inner loops; the second approach is clearly better suited for vector computing. The derived type includes all necessary arrays as pointers and additional description parameters (a possible layout is sketched below for orientation). The matrix vector product is done in subroutine matrix_X_vector. This procedure contains the implementations for the row ordered and the jagged diagonal matrix schemes. It additionally calls a second procedure matrix_X_vector_help, which contains different implementations of the row ordered and the jagged diagonal scheme based on assumed size arrays as dummy arguments for the pointer arrays of the sparse matrix derived type. We expect additional overhead from this call but possibly better optimization by the compiler. We omit the details but show as a fragment the implementation of the version (row_1) with derived type components
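The sparse matrix derived type itself is not listed here. A minimal declaration consistent with the component names used in the fragments below could look as follows; this is a reconstruction for readability only, not the tested code.

   ! Hypothetical sketch of the sparse matrix derived type, reconstructed
   ! from the component names used in the fragments; the real type carries
   ! further description parameters.  rk is the working real kind.
   integer, parameter :: rk = selected_real_kind(12)
   type sparse_matrix_type
      integer :: type                               ! row ordered or jagged diagonal
      integer :: number_of_rows
      integer :: maximal_neighbourhood              ! number of pseudo diagonals
      integer, pointer, dimension(:)       :: begin       ! start offsets
      integer, pointer, dimension(:)       :: length      ! entries per row/diagonal
      integer, pointer, dimension(:)       :: index       ! column indices
      integer, pointer, dimension(:)       :: row_number  ! permutation of the result
      real(kind=rk), pointer, dimension(:) :: value       ! matrix entries
   end type sparse_matrix_type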
subroutine matrix_X_vector(result_vector,matrix &
     &                    ,vector)
  type(sparse_matrix_type)   :: matrix
  real(kind=rk),dimension(:) :: vector,result_vector
  .
  ! row ordered scheme: long outer loop over the rows,
  ! short inner reduction loop over the entries of each row
  do cn=1,matrix%number_of_rows
     pseudo_col=matrix%begin(cn)
     temp_no=0.
     do no=1,matrix%length(cn)
        index=matrix%index(no+pseudo_col)
        temp_no=temp_no+matrix%value(no+pseudo_col)* &
             &          vector(index)
     enddo
     result_vector(matrix%row_number(cn))=temp_no
  enddo
  .
  call matrix_X_vector_help( ... )
end subroutine matrix_X_vector

and version (col_4) with assumed size arrays:

subroutine matrix_X_vector_help(type &
     & ,number_of_rows &
     & ,maximal_neighbourhood &
     & ,begin,length,index &
     & ,mat_value,vec_value &
     & ,temp,row_number,result)
  integer,dimension(*)       :: begin,length,index &
       &                       ,row_number
  real(kind=rk),dimension(*) :: mat_value,vec_value &
       &                       ,result,temp
  integer,parameter :: block_size=2048
  real(kind=rk),dimension(block_size):: short   ! used by the blocked variant col_8
  .
  ! jagged diagonal scheme: short outer loop over the pseudo diagonals,
  ! long inner loops over the rows
  do cn=1,number_of_rows
     temp(cn)=0.
  enddo
  do cn=1,maximal_neighbourhood
     pseudo_col=begin(cn)
     do no=1,length(cn)
        ind=index(no+pseudo_col)
        temp(no)=temp(no)+mat_value(no+pseudo_col)* &
             &            vec_value(ind)
     enddo
  enddo
  do cn=1,number_of_rows
     result(row_number(cn))=temp(cn)
  enddo
  .
end subroutine matrix_X_vector_help

Table 2 addresses the differences between the versions of the sparse matrix vector product.

Table 2. Properties of the matrix vector products
name    array type                        temporary accumulation by
row_1   row ord., derived type pointers   scalar
row_2   row ord., assumed size arr.       scalar
col_3   col. ord., derived type pointers  complete arr. temp
col_4   col. ord., assumed size arr.      complete arr. temp
col_8   col. ord., assumed size arr.      blocked in special short vector
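The CG iteration wrapped around these matrix vector product variants is the standard unpreconditioned scheme: per iteration it performs one sparse matrix vector product, two dot products and three daxpy-like updates, all with the problem size as iteration count. The following sketch only makes this loop structure explicit; variable names and the convergence test are ours, not the tested code.

   ! Unpreconditioned CG for A x = b (illustrative sketch, array syntax)
   x = 0.0_rk
   r = b
   p = r
   rho = dot_product(r,r)
   do it = 1, max_iterations
      call matrix_X_vector(q, A, p)      ! q = A p   (dominant cost)
      alpha = rho/dot_product(p,q)
      x = x + alpha*p                    ! daxpy-like updates
      r = r - alpha*q
      rho_new = dot_product(r,r)
      if (sqrt(rho_new) < tolerance) exit
      p = r + (rho_new/rho)*p
      rho = rho_new
   enddo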
[Figure: Nocona (3400 MHz): performance (MFLOPs) over the problem size (1 to 10^7) for CG_row1_9, CG_row2_9, CG_col3_9, CG_col4_9 and CG_col8_9.]
Fig. 2. CG code for Nocona
Figure 2 shows the most divergent performance results for the Nocona processor in the case of 9 neighbours; not all cases are shown. As in the previous figure the problem size is logarithmically scaled in the graph. We see different performance deviations for small, medium and large sizes, and some curves cross, so that there is no clear preference. Figure 3 shows the most divergent performance results for the NEC Asama in the case of 9 neighbours; again not all curves are shown, and the problem size is logarithmically scaled in the graph. We see for small,
[Figure: IA-64 Asama (1500 MHz): performance (MFLOPs) over the problem size (1 to 10^7) for the same CG variants.]

Fig. 3. CG code for Asama
[Figure: NEC SX-6 (565 MHz): performance (MFLOPs) over the problem size (1 to 10^6) for the same CG variants.]
Fig. 4. CG code for NEC SX-6
medium and large sizes different performance deviations; again some curves cross, so that there is no clear preference. Figure 4 shows the performance results for the NEC SX-6. The curves for the jagged diagonal format are nearly identical and show the expected performance hyperbolas. The performance of the row ordered schemes (46 MFLOPs!) is nearly 40 times lower than that of the jagged diagonal schemes (1750 MFLOPs). The Nocona favours the row ordered scheme if the number of matrix elements per row is not too small. The jagged diagonal scheme is clearly better for the vector machine and for the IA-64. For the jagged diagonal scheme
it is useful to block the temporary result so that the resulting vector is defined at smaller distances (here 2048). This blocking mechanism increases the startup on the Nocona.

Table 3. CG iteration: start up and incremental times in clock periods for small sizes
case    start_up_5   incr_5   start_up_9   incr_9   neighbor_incr

Intel Nocona, 3400 MHz, Intel 8.1
row_1        467        158        684        195        9.1
row_2        676         96        835        116        5.1
col_3        809        128       1107        179       12.7
col_4        717         92        981        121        7.3
col_8       7423         93       8657        115        5.6

NEC Asama IA-64, 1500 MHz, efc 7.2
row_1        652        211        810        242        7.9
row_2        864         75        864         98        5.6
col_3        840        153        483        253       24.9
col_4       1220         22       1407         30        2.2
col_8       2341         21       2547         29        2.1

NEC SX-6, 565 MHz, f90 Version 2.0
row_1       4234        329       3352        346        4.33
row_2       5659        318       5204        336        4.37
col_3       3383       1.08       4112       1.98        0.22
col_4       4652       0.96       5863       1.91        0.24
col_8       4818       0.48       6155       1.48        0.25
Table 3 contains the parameters of the timing model (1), in clocks, for the different test cases and the three architectures at small problem sizes of the CG iteration. The parameters are calculated from the locally best values. The second column shows the start up cycles for the case with 5 pseudo diagonals, the third column the corresponding increments. The fourth and the fifth column show the case with 9 pseudo diagonals. The last column is an estimate of the increment per pseudo diagonal, derived from the third and fifth columns. The start up values are more sensitive than the measured increments: their derivation from the measured data is ill posed, and variances of 10 % are possible. Some inconsistencies appear which are difficult to explain. The results do not show the influence of the memory bandwidth; they show the best case for the cache architectures, where the data come from the nearest cache. For the vector machine the data have to be loaded from memory. This comparison is unfair, but the memory is the essential data source of the vector machine.
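The per-diagonal increment in the last column is presumably obtained from the difference of the 9- and 5-diagonal increments, neighbor_incr ≈ (incr_9 − incr_5)/4; for row_1 on the Nocona, for instance, (195 − 158)/4 ≈ 9.3, close to the tabulated 9.1. This reading is ours, and it reproduces the tabulated values only to within rounding.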
The data imply the following observations. Remember that we are comparing costs on the basis of clock cycles; the performance is proportional to the frequency and to the inverse of the operational costs in clock cycles.
1. The essentially higher increments for row_1 compared to row_2 and for col_3 compared to col_4 on Nocona and Asama show optimization deficiencies of the Intel compiler when handling derived types with pointers. The start up values, on the other hand, are worse for row_2. The effect is much smaller for the NEC SX compiler.
2. Comparing row_1 to row_2 and col_3 to col_4 we see an overhead of 1300 – 1700 cycles for a call on the NEC SX-6. The calling overhead for Nocona and Asama is obviously much smaller when comparing the two pairs. The start up for col_4 is even smaller than for col_3 in case of the Nocona; we do not know the reason.
3. Startups on the vector machine are much higher, except in comparison with col_6, col_7 and col_8 on the Nocona.
4. The operational efficiency of the vector machine, reflected in the smaller increments, is much higher for the column oriented cases. It falls clearly behind for the row oriented implementations; the reason are the short reduction loops. The same applies to the IA-64 compared to the Nocona.
5. col_3 is the worst case for the cache machines because of the compiler problem, and the best for the vector machine because of the small startup and the best increment per pseudo diagonal.
6. The Nocona has the least efficient architecture but by far the highest frequency. Its start ups are comparable to the IA-64, while the incremental values are essentially higher. The reduction loop of row_2 has the same incremental costs as on the IA-64. The overall performance of the Nocona is comparable to the IA-64.
7. Comparing the increments of row_2 for the 5 and 9 pseudo diagonal cases we estimate start up values for the inner short reduction loop of 50 cycles for the Nocona, 49 for the IA-64 and 297 for the NEC SX-6.
8. The startup data for col_4 of the NEC SX-6 are roughly 4 times higher than for the IA-64 and 6 times higher than for the Nocona.
7 Conclusions for the analytical part

We have seen that the PC architecture is less efficient but fast because of its high frequency. The calling overhead is small even when measured in clocks, which makes this architecture suitable for modern software engineering paradigms such as object oriented programming. Calls on the vector computer, in contrast, are expensive. The vector architecture is very efficient but runs only at a low frequency; that remains true even if we compare with the NEC SX-8 at a frequency of 1000 MHz instead of 565 MHz. The loop start up is essentially higher due to the higher instruction parallelism.
Assuming a limited processor frequency, the only way to increase performance is to change the architecture in order to obtain higher efficiency. One way is to implement additional processors on the chip; the other is better exploitation of pipelining, as in vector processors.
8 Parallel runs with BEST

BEST is an implementation of a Lattice-Boltzmann algorithm for the simulation of viscous incompressible flow. The code runs with good efficiency on all types of architectures and especially well on vector architectures. It is fully parallelized with MPI and reaches a high fraction of the peak performance on parallel machines. Figures 5, 6, and 7 show performance diagrams for the Nocona/Infiniband cluster at HLRS, the SGI Altix system at LRZ in Munich and the NEC SX-6 system at HLRS in Stuttgart.
[Figure: Lattice Boltzmann solver BEST on the HLRS Nocona/Infiniband Linux cluster (6.4 GFlop/s/CPU peak): efficiency (% of peak) and GFlop/s/CPU over log10(number of grid points/CPU) for 1 to 128 CPUs on 1 to 128 nodes.]
Fig. 5. BEST on HLRS Intel Nocona with Infiniband
The total efficiency relative to the aggregate peak performance of the processors used is plotted over the problem size in cells per process. The total runtime of the main iteration loop is used, and the number of operations is counted by hand. The parameter differentiating the curves is the number of processors used for the calculation. An ideally parallelizable and weakly scalable code (problem size proportional to the number of processors) would deliver identical curves, which would still depend on the per-process size. This type of performance diagram differs from the usual speedup, scaleup or timing diagrams: it shows more clearly the sensitivity of the performance to the parameters involved, namely the processor and node counts and the problem size.
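For orientation, one point of these diagrams is obtained roughly as in the following sketch; the hand-counted operation count per cell update is specific to BEST and is treated here as a given input, and all names are placeholders.

   ! Illustrative bookkeeping for one point of the efficiency diagrams.
   subroutine efficiency_point(ops_per_cell, cells_per_cpu, iterations, &
                               t_loop, peak_per_cpu, gflops_per_cpu, eff)
     implicit none
     real(8), intent(in)  :: ops_per_cell   ! hand-counted operations per cell update
     real(8), intent(in)  :: cells_per_cpu, iterations
     real(8), intent(in)  :: t_loop         ! runtime of the main iteration loop [s]
     real(8), intent(in)  :: peak_per_cpu   ! e.g. 6.4 GFlop/s for the Nocona
     real(8), intent(out) :: gflops_per_cpu, eff
     gflops_per_cpu = ops_per_cell*cells_per_cpu*iterations/(t_loop*1.0d9)
     eff            = 100.0d0*gflops_per_cpu/peak_per_cpu   ! % of peak
   end subroutine efficiency_point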
[Figure: Lattice Boltzmann solver BEST on the LRZ Altix 3700 (6.4 GFlop/s/CPU peak): efficiency (% of peak) and GFlop/s/CPU over log10(number of grid points/CPU) for 1 to 64 CPUs.]
Fig. 6. BEST on LRZ SGI Altix

[Figure: BEST on the NEC SX-6 (9.2 GFlop/s peak per CPU): efficiency (% of peak) and GFlop/s/CPU over log10(number of grid points/CPU) for 1 to 32 CPUs.]
Fig. 7. BEST on HLRS NEC SX-6 cluster
We see in the diagrams that the efficiency decreases with the number of processes on each computing node (2 for the Nocona, 2 per hub for the Altix, 8 for the NEC SX-6). For Nocona and Altix there is a severe efficiency loss when going from one processor per node to two. The total efficiency is high for the vector system and lower on the microprocessor systems. The relative degradation compared to the run with one processor is due to insufficient memory bandwidth. But the NEC SX-6 also shows a clear degradation, pointing to the weak point of all of today's architectures: the poor ratio of memory bandwidth to potential floating point performance. The common behaviour of the efficiency curves on the microprocessors is interesting; they show the influence of the caches and, beyond these, a monotone increase
of performance, although the overall efficiency remains limited. This is in contrast to the NEC system, where we see the typical performance hyperbola, starting with small and ending with high efficiency in a nearly monotone increase. Enlarging the number of nodes we recognize only a small further decay of the efficiency on the Nocona cluster and no decay on the Altix machine. The NEC still shows a clear efficiency loss going from 1 processor to 8 processors on one node. For more than one node we see a further loss for the small and medium problem sizes due to the inter-node communication; for large problem sizes all curves converge. We assume that this performance loss is caused by some communication setup inefficiencies. Nevertheless, total efficiency and performance are very high.
A tool for complex parameter studies in grid environments: SGM-Lab

N. Currle-Linde, P. Adamidis, and M.M. Resch

High Performance Computing Center Stuttgart (HLRS), University of Stuttgart, Nobelstraße 19, 70550 Stuttgart, Germany
linde,adamidis,[email protected]
Summary. This paper presents the design and implementation of the Science Grid Modeling Laboratory (SGM-Lab), an automated parametric modeling system for performing complex, dynamically controlled parameter studies. Nowadays, simulation programs are used not only in research but also during the development of products, often to optimize their quality. Typically, this involves repeated execution of the simulation codes, whereby some of the input data is varied for each run. As a result, many different jobs have to be launched and a huge amount of output data has to be administered. A grid environment can provide, and enable the exploitation of, the necessary resources for this computation. However, in order to be able to use a grid environment effectively, tool support is required to automatically generate the parameter sets, issue jobs, control the successful operation and termination of jobs, and collect results. Support is also needed to generate new parameter sets based on previous results in order to obtain a functional optimum, after which the parameter study should terminate. The SGM-Lab software described in this paper offers a unified framework for such large-scale optimization problems.
1 Motivation and background

Parametric studies are conceptually easy to parallelize. Thus, for this type of task, a parallelized distributed computational model is very appropriate. Until quite recently, thorough parameter studies were limited by the availability of adequate computer resources [1, 2]. The advent of global Grid technology makes it possible to integrate resources from distributed scientific centres with those of one's own environment, creating a technical foundation for complex parametric investigations [2]. The automatic generation of parameter studies is suitable only for those with simple information processing, where the complete processing of the user (task) information is described by a succession of previous and next wizards which, in turn, determine the processing sequence. This model can only be applied to single-level parametric modeling; it is not suitable for the description of complex processes. Complex processes, as a rule, result in several levels of parametrization, re-
peated processing and data archiving, conclusions and branches during the processing, as well as synchronization of parallel branches and processes. In these cases the parametrization of data is an extremely difficult and work-intensive process. Moreover, users are very sensitive to the level of automation of the application preparation [2]. Therefore, the user must be able to define a fine-grained logical execution process, identify the positions in the input area of the parameters to be changed in the course of the experiment, and formulate the parametrization rules. All other details of the parameter study generation should be hidden from the user. Furthermore, it is a problem for the scientist that no well-organized data storage for the parameter study is available [2]. This makes it extremely difficult to handle hundreds, or even thousands, of experimental data sets. A database is the ideal means for the documentation as well as for the analysis of results: fast search, storage reliability and availability of the information for the parameter study application meet our demands for a fast and precise selection of information at the start of an experiment. Modern parametric studies, which require computer resources from diverse locations, are supposed to fit the idea of Grid Computing perfectly [3]. Grid environments are very dynamic: some resources are made available to users, while others are occupied with the execution of other processes. Therefore, the parameter study management system must be capable of adapting to the resources currently available in the network. The introduction of a resource broker mechanism for the dynamic redirection of jobs to a parameter study dispatcher during the execution of the experiment makes it possible to obtain a result within a minimum time.
2 Related work

For parameter studies there is currently a variety of different approaches (e.g. Nimrod, ILab, AppLeS/APST). Nimrod [4] is a tool that can be used to manage the execution of parametric studies across distributed computers. It supports the creation of parameter sweeps based on a simple declarative language; therefore, Nimrod can only be applied to single-level parametric modeling. With Nimrod/G [5] and Nimrod/O [6] the Grid services provided by Globus [7] are used for job launching, resource brokering and scheduling to allow the efficient execution of the parameter studies within a certain budgetary requirement. Compared with Nimrod, ILab [5, 8, 9] allows the generation of multi-parametric models and adds workflow management. With the help of a sophisticated GUI, the user can integrate several steps and dependencies within a parameter study task. ILab, developed at NASA, supports the execution of parameter sweep jobs on the IPG (NASA's Information Power Grid). Although ILab provides the possibility to realize complex parameter studies, the complexity is limited by the CAD (Computer Assisted Design) screen, which does not support many nested levels. Tri-
ana is an open-source problem solving environment that abstracts the complexities of composing distributed workflows. It provides a "pluggable software architecture" to be used for the dynamic orchestration of applications from a group of predefined commodity software modules. Besides the above-mentioned environments, tools like Condor, UNICORE [10] or AppLeS [11] (Application-Level Scheduler) can be used to launch pre-existing parameter studies on distributed resources. The definition and execution of multiphysics applications, preprocessing steps, postprocessing filters, visualization, and the iterative search of the parameter space for optimum solutions require user-friendly workflow description tools with graphical interfaces, which support the specification of loops, test and decision conditions, synchronization points, and communication via messages. Several grid workflow systems exist. Systems such as Triana [12] and UNICORE, based on directed acyclic graphs (DAG) [13], are limited with respect to the power of the model; it is difficult to express loop patterns, and the expression of process state information is not supported. Compared with these, workflow-based systems such as GSFL and BPEL4WS [14] have solved these problems. However, they are too complicated to be mastered by general users, and it is difficult, even for experienced users, to describe non-trivial workflow processes involving data and computing resources without the help of additional tools. None of the aforementioned tools supports real dynamic parametrization, which is necessary for a number of applications, especially in the field of optimization. In these cases the result of the simulation has to fulfill certain criteria; for instance, some value or average value has to be larger or smaller than a given threshold. For these applications an iterative approach is required: the result of a simulation is compared to the desired criteria. If the criteria are met the simulation is finished; if not, the application automatically has to update its input files and restart the simulation cycle. The purpose of the work presented here is specifically to close that gap and provide a tool that supports dynamic parametrization.
3 Parametric modeling

The proposed parameter study tool SGM-Lab (Science Grid Modeling Laboratory) defines both the means to design the parameter study and the means to control the experiment in a distributed computer network. During the design of a parameter study the scientist typically uses a graphical editor and a database to describe the course of the experiment. The generation of the parameter study for the project and the interaction between the components is shown in Fig. 1. The scientist first designs the database structure, defining the input data for the study, followed by the description of the course of the experiment using the graphical editor. A detailed description of the design of a project parameter study is given in section 4. Starting from the graphic notation
[Figure: diagram with the Graphical Project Editor, the Project Object (DataObject, ParamObject, TaskObject, InfoObject), the Experiment Monitor, and the TaskManager, DataManager, JobManager and DPA interacting with the Grid Infrastructure via jobs, messages, files and parameters.]
Fig. 1. Creation of the parameter study project and interaction between the components
of the experiment's program, the Object Generator produces all further program objects for the project: the TaskObject – the task definition, the ParamObjects – the parameterized data, and the InfoObject – for the status information of the networked computer resources and for monitoring the execution of the experiment's processes. Furthermore, the ExperimentMonitorObject is generated for the visualization and for the actual control of the experimental process. The project-related object-oriented database is installed on the ServerDatabase. TaskObject, ParamObject and InfoObject are attached to the Parameter Study Server; the ExperimentMonitor is placed on the user's workstation. Afterwards, the user creates a list of target computers for conducting the experiment from the available computer resources in the network and starts the TaskManager on the Parameter Study Server. The TaskManager takes over complete control of the experiment's sequence of events; the scientist merely monitors its progression from his own workstation. The TaskManager first chooses the computer resources currently available in the network and then activates all the parallel processes of the experiment's first stage. Afterwards, the TaskManager starts the DataManager and the JobManager. The DataManager first transfers persistent input data to the file server of the corresponding target computer, then activates the ParamObjects for generating the first parameter set. A ParamObject holds the parameter set and its rules regarding changes. The generated parameter set is linked with replacement processes and then delivered to the corresponding target computer. After the replacement of the specified parameters, the input files are ready for the first computation stage. In parallel to these processes, the JobManager prepares all the jobs for the first stage. After receiving a status message regarding the availability of the data, they are placed in a queue. After the preparation of the first parameter set, the DataManager initializes the preparation of the input files which contain the values of the second parameter set. The JobManager prepares a new job set for the second stage. After receiving a notification about the availability of the data resources, the JobManager places
the job set in the queue for execution on the appropriate remote computer. The preparation of further stages is accomplished in the same manner. After receiving a notification of the completion of the first computation stage and of the execution of the transfer operation to the next computation stage, the TaskManager analyses the current status of the computer resources, chooses the most suitable target computer for the next computation stage and starts the parallel process for this computation stage. In all stages, the output file is archived in the experiment's database immediately after it is received. The control of all processes of the subsequent information processing levels always takes place according to the schema described above. After starting the ExperimentMonitor on his workstation, the user receives continuously updated status information regarding the experiment's progress.
4 System architecture and implementation

Fig. 2 shows the system architecture of the experiment management system. It consists of three main components: the User Workstation (Client), the Parameter Study Server and an object-oriented database (OODB). The system operates according to a Client-Server model in which the Parameter Study Server interacts with remote target computers using a Grid Middleware Service. The implementation is based on the Java 2 Platform En-
Fig. 2. Client-Server model architecture
terprise Edition (J2EE) specification and the JBoss Application Server [15]. The software runs on Windows as well as on UNIX platforms. To integrate the OODB, the Java Data Objects (JDO) implementation of FastObjects [16] is used. With the help of the ObjectGenerator, the ParamObjects, the TaskObject and the InfoObject are generated. These are also Java objects, started and administered on the server side. The server consists of three subsystems: the TaskManager, the JobManager and the DataManager. The TaskManager is the central component of the Parameter Study Server. It chooses the necessary computer resources for each computation stage after receiving information about their availability from the InfoObject. The InfoObject collects this information from the underlying infrastructure using existing services like the Globus resource broker. The TaskManager then informs the DataManager and the JobManager about the chosen resources and starts/stops each branch and block. It also takes care of synchronizing the events and controlling the message exchange between the parallel task branches. On the user's request, the TaskManager can block the program flow. It informs the InfoObject about the current status of the jobs and the individual processes. The JobManager and the DataManager work closely together to control the interaction of the task and the Grid resources. They synchronize the preparation of the data as well as the initiation of the jobs. The DataManager controls the parametrization, the transport of data and parameter files into the working directories of the remote target computers, their exchange, and the archiving of the result files and the parameters in the experiment's database. The preparation of the actual input files for the execution of the jobs is done with the help of the Data Preparation Agent (DPA). The JobManager generates jobs, places them in the queue and observes their execution. The automatic creation of the project-specific OODB is done according to the structure designed by the user. The database collects all relevant information for the realization of the experiment, such as input data for the parameter study, parametrization rules, etc. A detailed description of the design of a project parameter study is given in chapter 5. To realize a smooth interaction of the Parameter Study Server with currently existing Grid middleware, the DataManager and the JobManager do not communicate directly with the Grid resources, but rather via the Grid Middleware Adaptors. These adaptors are used to establish the communication to existing Grid middleware services like job execution and monitoring services. Currently, e.g. Globus [3] and UNICORE offer these services. At the moment a UNICORE adaptor is under development. In the current implementation we have adapted the Arcon command line client to work as a UNICORE interface within the adaptor. With the help of the Arcon library, developed at Fujitsu Laboratories of Europe for the UNICORE project, the TaskManager is able to create an Abstract Job Object (AJO), which is necessary to run jobs on UNICORE sites. The AJO contains the job description. The UNICORE adaptor submits this AJO to a specified UNICORE site and processes the outcome of the UNICORE
job. The required information for establishing connections to UNICORE servers is stored within the InfoObject.
5 Parameter modeling from the user's view

This chapter explains the process of designing a parameter study using the proposed system. Fig. 3 shows an example of the task flow of such an experiment as it appears in the ProjectEditor. The graphical description of the application flow has two purposes: firstly, to collect all information for the creation of the Java objects described in the previous chapter and, secondly, to be used for the visualization of the current experiment in the ExperimentMonitor. For instance, the current point of execution is emphasized in a specific color within a running experiment. For designing the experiment, various facilities are provided for denoting the program blocks, paths, branching, events, conditions, messages etc. The modules can be linked with each other, defining the direction of the data flow and the sequence of the computation processes. Each module has its own ID and can contain different properties, consisting of parameters and signals. To specify these properties, the user can allocate, for example, file names, arrays of data, individual parameters, their values etc. Signals for modules describe initial and subsequent starts or permit further steps after the execution of the corresponding operations. A complex experiment system can contain various levels of nested blocks. Within the control flow (see Fig. 3) the user defines the rules and conditions for the execution of the modular computation process (the logical schema of the experiment). Two different kinds of blocks are used: computational blocks and control blocks. Each computational block can represent a simple parameter study; on this block level the manipulation of data is handled. The control block is used to define branches, conditions or synchronization points. In the example shown in Fig. 3 the whole experiment is split into branches, which can be executed in parallel due to the logic of the experiment. Each branch consists of several program blocks interconnected by arrowed lines, which indicate their sequence of execution and/or the possibility of parallel execution. The diagram also displays the control modules and the synchronization process. Using the information contained in each module, the system can branch, collect and synchronize the processes as well as exchange messages between the processes during the execution of the experiment. Moreover, the definition of a specific part of a program can be done within each module in a very fine-grained way, as shown in Fig. 4. The experimental Block 1.2 consists of computing modules, parameter modules, data replacement modules and the corresponding elements of the experiment OODB. These are connected to each other with arrowed lines showing the direction of data transfer between modules and the sequence of execution during the
Fig. 3. Sample task flow (control flow)
computation process. In this example not only simple but also nested formation cycles of sets of dependent parameters are represented. The transition from one program block to the next can also be carried out. This allows an optimally simplified experimental design process. Before starting the execution of the whole experiment the user can do a simulated run to verify the correct chain of the logical process of the experiment. In this case an automatic or stepwise tracking of the experiment can be done within the ControlFlow window of the ProjectEditor. The currently active module, block or process is displayed with colored pulsing points. If necessary the user can then adjust the course of the experiment. The SGM-Lab experiment's control flow graph (Fig. 3) is translated into internal formats, which are expressed in XML. This format is used for further communication with the experiment engine. The workflow description consists of three parts, namely parameter definitions, various module definitions (program blocks, branching, events, conditions, messages, etc.) and data link definitions. Based on this information the TaskManager starts the DataManager and the JobManager of SGM-Lab. The TaskManager has complete control of the experiment's actions.
Fig. 4. Block 1.2 (data flow)
6 Application

The flow simulation program URANUS (Upwind Relaxation Algorithm for Nonequilibrium Flows of the University of Stuttgart) [17, 18] has been developed at the Institute of Space Systems of the University of Stuttgart and calculates non-equilibrium flows around space vehicles reentering the earth's atmosphere. The program simulates not only the supersonic flow but also the chemical reactions occurring in it, as they have a significant influence on the flow. The reason for these chemical reactions is the high temperature of the gas flow during reentry, while the space vehicle is slowed down by the friction of the air. At these temperatures the air's components, mainly nitrogen and oxygen, react with each other. The main results of interest from the calculation are the heat flow and the heat load at the surface. In URANUS, the unsteady compressible Navier-Stokes equations in integral form are discretized in space using a cell-centred finite volume approach and multiblock meshes. The sequential URANUS program has been used on high-end workstations and on vector processors. The experience was that the compute time and the memory requirements are too high, so that it is not possible to use the program on these platforms for calculating real-world problems with fine meshes or using the real gas model. Since only massively parallel platforms and modern hybrid parallel computers are able to fulfill
Fig. 5. Screenshot of CFD experiment
the program's requirements in computing speed and memory, the program has been parallelized. The goal is to be able to use the newest available parallel platforms. This requires a portable code which can easily be moved from one platform to another. The target types of parallel computing systems are MPPs, SMPs or hybrid systems, i.e. clusters of SMP nodes. Therefore a parallel multiblock version of the simulation code [19] has been developed. In order to be able to use all of them, MPI [20] has been used as the communication library. The parallelization strategy follows a domain decomposition approach [19]. According to this, the blocks are cut and spread among the processors, where each processor calculates its own block. The number of cuts in each of the three dimensions is calculated in such a way that the resulting blocks are not misshapen. During the solving step, communication between neighbors is performed to exchange intermediate data. This ensures a more accurate solution and a better convergence. To obtain good performance on any kind of parallel computer it is essential to have a good load balance between the processors allocated to the parallel job. This is a pure distributed memory approach, which also performs well on today's SMP systems and on hybrid architectures. In order to measure the performance of the parallel algorithm, runs with different values of three parameters have to be made. In Fig. 5 the control
flow of the application is shown. The block named cfd-solver represents URANUS. In this block, time integration is accomplished by the backward Euler scheme. The implicit system of equations is solved iteratively by Newton's method. The resulting linear system of equations is solved iteratively by the Jacobi line relaxation method with subiterations to minimize the inversion error. The convergence speed is influenced by two parameters, the relaxation parameter (ω) and the CFL number; using a small CFL number decreases the solving portion of the algorithm, while a big CFL number increases it. There are at least 7 values for the relaxation parameter (ω), 6 for the CFL number and 10 different numbers of processors, which leads to a total of 420 jobs that have to be launched. After running the jobs with the various values of the three above-mentioned parameters, the simulation results are evaluated. This happens in the evaluation block of the diagram illustrated in Fig. 5. Furthermore, the CPU time needed by the parallel jobs is compared with the CPU time taken by the serial job. If the results are not satisfactory, then the parameter study block (cfd-solver) is repeated with different values of the parameters. If the results are correct and the speedup acceptable, then a postprocessing step follows.
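The structure of this study, which SGM-Lab generates and controls automatically, amounts to the following nested sweep with an acceptance test. This is only an illustrative sketch; the concrete omega and CFL values, processor counts and acceptance criterion are placeholders and are not taken from the paper.

   ! Illustrative sketch of the URANUS parameter study (7 x 6 x 10 = 420 jobs per sweep).
   program uranus_study_sketch
     implicit none
     integer, parameter :: n_omega = 7, n_cfl = 6, n_proc = 10
     integer :: i, j, k, njobs
     logical :: acceptable
     acceptable = .false.
     do while (.not. acceptable)
        njobs = 0
        do i = 1, n_omega            ! relaxation parameter omega
           do j = 1, n_cfl           ! CFL number
              do k = 1, n_proc       ! number of processors
                 njobs = njobs + 1   ! one cfd-solver job per combination
                 ! submit the job and collect the CPU time of the parallel run ...
              end do
           end do
        end do
        ! evaluation block: compare parallel and serial CPU times; if the
        ! results are unsatisfactory, repeat the sweep with modified values
        acceptable = .true.          ! placeholder decision
     end do
     print *, 'jobs launched per sweep:', njobs   ! 420
   end program uranus_study_sketch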
7 Conclusion

In this paper we presented the design and implementation of SGM-Lab, an automated parametric modeling system for performing complex, dynamically controlled parameter studies. SGM-Lab consists of three main components: the User Workstation (Client), the Parameter Study Server, and an object-oriented database (OODB). The system operates according to a Client-Server model in which the Parameter Study Server interacts with remote target computers using UNICORE as a Grid Middleware Service. The implementation is based on the Java 2 Platform Enterprise Edition (J2EE) specification and the JBoss Application Server. The OODB is realized using the Java Data Objects (JDO) implementation of FastObjects. The workflow creation has been used to define an experiment for the optimization of a simulation program in the field of computational fluid dynamics.
References

1. de Vivo A, Yarrow M, McCann K (2000) A comparison of parameter study creation and job submission tools. Technical report NAS-01002, NASA Ames Research Center, Moffett Field, CA
2. Yarrow M, McCann K, Biswas R, van der Wijngaart R (2000) An advanced user interface approach for complex parameter study process specification on the information power grid. In: Curbera F, Dholakia H, Goland Y, Klein J, Leymann F, Liu K, Roller D, Smith D, Thatte S, Trickovic I, Weerawarana S (eds) Specification: Business Process Execution Language for Web Services Version 1.1, May 05, 2003, http://www.106.ibm.com/developerworks/library/ws-bpel. Proc. of the 1st Workshop on Grid Computing (GRID 2002). Bangalore, India
3. Foster I (2002) Phys Today 55(2):42–47
4. Abramson D, Giddy J, Kotler L (2000) High performance parametric modeling with Nimrod/G: killer application for the Global Grid. In: Int. Parallel and Distributed Processing Symposium (IPDPS). Cancun, Mexico
5. Buyya R, Abramson D, Giddy J (2002) Nimrod/G: an architecture for a resource management and scheduling system in a Global Computational Grid. In: The 4th International Conf. on High-Performance Computing in Asia-Pacific Region (HPC Asia 2000). Beijing, China
6. Abramson D, Lewis A, Peachy T (2000) Nimrod/O: a tool for automatic design optimization. In: The 4th Int. Conf. on Algorithms and Architectures for Parallel Processing (ICA3PP 2000). Hong Kong, China
7. Foster I, Kesselman C (1998) The Globus Project: a status report. In: Proc. IPPS/SPDP'98 Heterogeneous Computing Workshop
8. Yarrow M, McCann KM, Tejnil E, deVivo A (2001) Production-level distributed parametric study capabilities for the Grid. Grid Computing - GRID 2001 Workshop Proceedings. Denver, USA
9. McCann KM, Yarrow M, deVivo A, Mehrotra P (2004) ScyFlow: an environment for the visual specification and execution of scientific workflows. GGF10 Workshop on Workflow in Grid Systems, Berlin, Germany
10. Erwin D (ed) (2000–2002) Joint project report for the BMBF project UNICORE Plus. Grant Number: 01 IR 001 A-D, ISBN 3-00-011592-7
11. Casanova H, Obertelli G, Berman F, Wolski R (2002) The AppLeS parameter sweep template: user-level middleware for the Grid. Proceedings of the Super Computing (SC 2002) Conference, Dallas, USA
12. Taylor I, Shields M, Wang I, Philp R (2003) Distributed P2P computing within Triana: a galaxy visualization test case. IPDPS 2003 Conference. Nice, France
13. Guan Z, Hernandez F, Bangalore P, Gray J, Skjellum A, Velusamy V, Liu Y (2004) Grid-Flow: a Grid-enabled scientific workflow system with a Petri net-based interface. Department of Computer and Information Sciences, University of Alabama at Birmingham, USA
14. Wohed P, van der Aalst WMP, ter Hofstede AHM (2002) Pattern-based analysis of BPEL4WS. QUT Technical report, FIT-TR-2002-04, Queensland University of Technology, Brisbane
15. JBoss webpage: http://www.jboss.org/products/jboss
16. FastObjects webpage: http://www.fastobjects.com
17. Schöll E, Frühauf H-H (1994) An accurate and efficient implicit upwind solver for the Navier-Stokes equations. In: Hebecker F-K, Ranacher R, Wittum G (eds) Notes on Numerical Fluid Mechanics, Numerical Methods for the Navier-Stokes Equations. Proc. of the Int. Workshop on Numerical Methods for the Navier-Stokes Equations. Heidelberg, Germany, October 1993
18. Frühauf H-H, Daiß A, Gerlinger U, Knab O, Schöll E (1994) Computation of reentry nonequilibrium flows in a wide altitude and velocity regime. AIAA paper 94–1961
19. Bönisch T, Rühle R (2002) Implementation of an integrated efficient parallel multiblock flow solver. In: Joubert GR, Murli F, Peters FG, Vanneschi M (eds) Parallel computing advances and current issues. Imperial College Press, London
20. Message Passing Interface Forum (1995) MPI: A Message-Passing Interface standard. http://www.mpi-forum.org/docs/docs.html
Lattice Boltzmann predictions of turbulent channel flows with turbulence promoters

K.N. Beronov and F. Durst

Institute of Fluid Mechanics, University of Erlangen-Nuremberg, Cauerstraße 4, 91058 Erlangen, Germany
[email protected]
Summary. Canonical flows like homogeneous, irrotationally strained or linearly sheared turbulence, or developed plane channel turbulence have been studied for many years. Direct numerical simulations have produced reliable data at moderate Reynolds numbers in some of these flows, where homogeneity and symmetry allow for a substantial reduction of the computational effort. But flows possessing fewer symmetries, such as grid-generated turbulence or simple flows combining features of several canonical ones, have remained difficult to treat with such simulations. Motivated by specific industrial design applications, we have investigated several flows of that kind by direct and large-eddy simulations. Presented here are preliminary results focusing on turbulence excitation and control over the full cross-section of channel flows at moderate turbulent Reynolds numbers, using 'turbulence promoters' in the form of fixed obstacles commensurate with the cross-section. Considered in particular are grids with large spacing and porosity, and channel contraction sections. The required computational resources are estimated and found to be high, by present standards, for the regimes considered. To optimally exploit the parallel architecture of high-performance platforms for these large-scale, spatially and temporally resolved computations, a lattice Boltzmann method is selected. It parallelizes optimally and has minimal, memory-local communication.
1 Introduction

The characterization and subsequent modeling of incompressible wall-bounded turbulence continues to be an issue of central importance in mechanical and chemical engineering. Direct (DNS) and large-eddy (LES) numerical simulations provide the most informative flow data output at moderate Reynolds numbers. They are the basis for a systematic characterization of parametric dependencies in simple flow geometries, which are then used as motivation and for calibration of turbulence closure models. The use of LES allows the treatment of higher Reynolds numbers, at least as far as lower-order turbulence statistics are concerned. But its use directly for engineering
development projects still remains rather limited, mainly by its order-of-magnitude higher costs as compared to standard CFD calculations based on Reynolds-averaged (RANS) modeling. This cost factor is particularly pronounced because of the complex geometries of applications and is one factor which has for a long time limited basic-research LES to relatively simple flow geometries. Other factors are the theoretical appeal of simple flow configurations and the substantial effort required to apply the high-fidelity in-house codes, which are often based on finite-difference and spectral discretizations, to complicated flow domain geometries. The numerical simulations presented here have a two-fold objective. Their theoretical motivation is the need for detailed knowledge about flows that are still relatively simple and indeed related to some of the canonical flows of turbulence research, but on the other hand provide valuable and as yet lacking information for the modeling of several industrial applications that were taken up for investigation at our institute during the last year. The technical motivation is the need to verify the reliability and computational efficiency of DNS and LES with the chosen numerical method described in Section 2. It appears to offer considerable advantages over standard CFD methods for time-resolved computations using large grids. A large portion of the flows occurring in engineering design practice can be classified as turbulent channel flows, with a bulk velocity parallel to the fluid container walls in most of the flow domain. In general, these flows are affected by localized but large solid obstacles. In the vicinity of such "flow modifiers" the turbulence is not homogeneous in the local mean flow direction and, over relatively long distances downstream, no developed channel turbulence can set in. Of interest are both the flow characteristics immediately at and next to these obstacles, and the relaxation rates of individual statistics to their corresponding values in developed channel flow turbulence. The first aspect is motivated by the desire to establish control options and corresponding parametric descriptions for engineering purposes. The second one is motivated by the need to calibrate or verify RANS models in flows other than the canonical ones used in the literature so far. Here we focus on two types of flow modifiers. The first one considered is a regular grid placed normally to the walls in a straight channel. If walls were not considered, one would have an example of "grid generated turbulence," the laboratory experimental substitute for isotropic turbulence. The latter is an acceptable theoretical approximation of the actual flow only far downstream of the obstacle. Close to the grid, the dependence of turbulence properties on its particular geometry, and difficulties in their measurement or simulation, have provided further reasons why this part of the flow has remained underexplored. Our practical motivation for studying turbulence near grids is the attempt to produce turbulence with controlled properties, such as characteristic length scales and mixing rates, using mixers with wire grids instead of stirrers with large solid impellers. The vessel walls of the mixing devices add further inhomogeneity to the
Fig. 1. Grid turbulence in a channel: instantaneous velocity intensity in the mean flow direction. The grid is indicated by a low-velocity isosurface near the inflow end (at the right of the figure). The walls (top and bottom of the figure) are not shown but are indicated by boundary layers free of high-velocity fluid
problem, besides the inhomogeneity in the mean flow direction behind the grid. The flow, illustrated by Figure 1, is controlled by the spacing m and the "permeability" of the grid, as well as by the Reynolds number Re, which can be defined in various ways, and the width 2H of the channel.

The other type of geometry is a simple "asymmetric confusor," a linear contraction of the cross-section of a straight channel, as illustrated by Figure 2. Over part of the channel length, one of its walls slopes at a large angle to the mean flow direction. In this contraction section of the channel, the streamwise vorticity is enhanced on a large scale. The efficiency of the enhancement depends on the angle ϕ, on the overall contraction ratio α = H/h (here h is the channel half-width in the narrow part and H that in the wide part of the channel), and on the Reynolds number Re. To keep the problem focused and close to the application in mind, only the case ϕ = 45° is studied, and the parametric dependence on α and Re is addressed.
Fig. 2. Converging channel flow: domain geometry indicated by distance to the nearest solid wall. Darker shades stand for larger distances. The contraction aspect ratio shown is 3:1. Axes are dimensioned in LB grid steps
2 Numerical method

The solver chosen for this study is based on a well-understood lattice Boltzmann method (LBM). It has been extensively validated for complex geometries as well as in DNS of channel turbulence. Descriptions of the LBM-based DNS solver developed at our institute can be found in [1], along with a performance and validation study for plane channel turbulence. Further application examples and performance results on other platforms are discussed in [2]. The code has excellent parallel performance, and a vector-parallel version has been ported to the NEC SX-6 and SX-8 at HLRS (the Stuttgart supercomputer). For LES, the standard Smagorinsky SGS model was used in conjunction with a van Driest type of wall correction.

An advantage of the chosen method is that the generation of the flow geometry for the considered cases is trivial and very fast. Using a simple in-house program that places prismatic "obstacles" of different cross-sections and orientations into the computational domain, the "obstacle information" and the computational grid can be generated in a matter of seconds for both types of geometries considered. Compared with other codes available at our institute for simulations of channel-turbulence-type flows, the LBM code is easy and fast to handle and delivers higher computational efficiency, especially on large numbers of parallel processors.

The use of LBM solvers is not new in channel flow turbulence [3, 4]. Recent accounts of LES with LBM [5, 6] demonstrate the flexibility and versatility of the approach. The articles cited so far contain sufficiently detailed descriptions of the respective LBM variants used. The theory behind these algorithms, in particular why a kinetic equation solver provides good approximations to Navier-Stokes dynamics, can be found in the references cited in those articles. Issues specific to turbulence simulations with LBM, ranging from the effect of numerical compressibility and the cost of explicit time stepping to the requirements on grid resolution and integration times, are discussed in our recent article [8], which extends the work of [1]. These reports provide performance data for the code, in particular on vector-parallel machines. The parallel efficiency remains as high as 75%
of the perfect scale-up on 64 nodes (with 8 processors each) of the pseudo-vector Hitachi SR-8000 (the Bavarian supercomputer, HLRB) at LRZ Munich, and above 95% on up to 32 nodes of the NEC SX-6 and SX-8 at HLRS.

For completeness, we briefly summarize the LBM used here. The pressure and velocity are computed from the zeroth- and first-order discrete moments, respectively:
\[
p(t,\mathbf{x}) = c_s^2 \sum_{\alpha=-9}^{9} f^{(\alpha)}(t,\mathbf{x}) \,, \qquad
\mathbf{v}(t,\mathbf{x}) = \sum_{\alpha=-9}^{9} f^{(\alpha)}(t,\mathbf{x})\,\boldsymbol{\xi}^{(\alpha)} \,, \qquad (1)
\]
where the discrete probability density distributions f^{(α)} are evolved according to a discretized kinetic equation of BGK type:
\[
f^{(\alpha)}(t+\Delta t,\,\mathbf{x}+\boldsymbol{\xi}^{(\alpha)}\Delta t)
= f^{(\alpha)}(t,\mathbf{x}) + F^{(\alpha)}(t,\mathbf{x})
- \omega(t,\mathbf{x})\,\bigl[ f^{(\alpha)}(t,\mathbf{x}) - f^{(\alpha)\,\mathrm{eq}}(t,\mathbf{x}) \bigr] \,. \qquad (2)
\]
The external forcing vector F(t, x) is incorporated through
\[
F^{(\alpha)} = c_s^{-2}\, w\bigl(|\boldsymbol{\xi}^{(\alpha)}|^2\bigr)\, \boldsymbol{\xi}^{(\alpha)} \cdot \mathbf{F} \,.
\]
The equilibrium distributions are given by a discretized "shifted Maxwellian",
\[
f^{(\alpha)\,\mathrm{eq}} = w\bigl(|\boldsymbol{\xi}^{(\alpha)}|^2\bigr)\, c_s^{-2}
\Bigl[\, p - |\mathbf{v}|^2/2 + \mathbf{v}\cdot\boldsymbol{\xi}^{(\alpha)}
+ (\mathbf{v}\cdot\boldsymbol{\xi}^{(\alpha)})^2\, c_s^{-2}/2 \,\Bigr] \,. \qquad (3)
\]
The "sound velocity" c_s and the metric factors w(|ξ^{(α)}|^2) are constant:
\[
c_s = \sqrt{1/3} \,, \qquad w(0) = 1/3 \,, \quad w(1) = 1/18 \,, \quad w(2) = 1/36 \,. \qquad (4)
\]
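For orientation, note that these weights sum to unity over the 19 velocities of the lattice listed next (one rest velocity with weight w(0), six axis-aligned velocities with weight w(1), and twelve diagonal velocities with weight w(2)):
\[
1\cdot\tfrac{1}{3} \;+\; 6\cdot\tfrac{1}{18} \;+\; 12\cdot\tfrac{1}{36} \;=\; 1 \,.
\]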
The discrete velocities of the D3Q19 lattice are
\[
\begin{aligned}
&\boldsymbol{\xi}^{(1)} = ( 1, 0, 0)\,, \quad \boldsymbol{\xi}^{(2)} = ( 0, 1, 0)\,, \quad \boldsymbol{\xi}^{(3)} = ( 0, 0, 1)\,,\\
&\boldsymbol{\xi}^{(4)} = ( 0, 1, -1)\,, \quad \boldsymbol{\xi}^{(5)} = (-1, 0, 1)\,, \quad \boldsymbol{\xi}^{(6)} = ( 1, -1, 0)\,,\\
&\boldsymbol{\xi}^{(7)} = ( 0, 1, 1)\,, \quad \boldsymbol{\xi}^{(8)} = ( 1, 0, 1)\,, \quad \boldsymbol{\xi}^{(9)} = ( 1, 1, 0)\,,\\
&\boldsymbol{\xi}^{(0)} = ( 0, 0, 0)\,, \qquad \boldsymbol{\xi}^{(-\alpha)} = -\boldsymbol{\xi}^{(\alpha)} \,.
\end{aligned} \qquad (5)
\]
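For concreteness, the following short sketch (in Python/NumPy; not the authors' production code BEST, and purely illustrative) constructs this velocity set and the associated weights. Index 0 corresponds to ξ^(0), indices 1-9 to ξ^(1)...ξ^(9), and indices 10-18 to the opposite velocities ξ^(-1)...ξ^(-9).

# Illustrative sketch: D3Q19 velocities of Eq. (5) and weights of Eq. (4).
import numpy as np

XI_POS = [( 1, 0, 0), ( 0, 1, 0), ( 0, 0, 1),
          ( 0, 1,-1), (-1, 0, 1), ( 1,-1, 0),
          ( 0, 1, 1), ( 1, 0, 1), ( 1, 1, 0)]
XI = np.array([(0, 0, 0)] + XI_POS + [tuple(-c for c in x) for x in XI_POS])
OPP = np.array([0] + list(range(10, 19)) + list(range(1, 10)))  # maps alpha to -alpha

CS2 = 1.0 / 3.0                                       # c_s^2, Eq. (4)
W_BY_SPEED = {0: 1/3, 1: 1/18, 2: 1/36}               # w(|xi|^2), Eq. (4)
W = np.array([W_BY_SPEED[int(np.dot(x, x))] for x in XI])

assert XI.shape == (19, 3) and abs(W.sum() - 1.0) < 1e-14
assert np.all(XI[OPP] == -XI)                         # xi^(-alpha) = -xi^(alpha)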
The relaxation time 1/ω is related to the effective (Newtonian or turbulent) scalar viscosity ν in the momentum equation:
\[
\nu = c_s^2 \left( 1/\omega - \Delta t/2 \right) \,. \qquad (6)
\]
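Inverting this relation gives the relaxation rate used in the collision step, as in the following minimal sketch (illustrative, not taken from the authors' code):

def omega_from_nu(nu, dt=1.0, cs2=1.0/3.0):
    # Eq. (6): nu = cs2 * (1/omega - dt/2)  =>  omega = 1 / (nu/cs2 + dt/2)
    return 1.0 / (nu / cs2 + 0.5 * dt)

In the LES runs mentioned above, ν would be the sum of the molecular viscosity and a local Smagorinsky eddy viscosity (with van Driest damping), which is why ω(t, x) in (2) may vary in space and time.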
A regular grid of constant step is used, whose control volumes are cubes with sides aligned with the coordinate axes. At grid points that are not in the fluid domain but which have, along at least one of the 18 nonzero directions ξ^{(α)} listed
above, an immediate neighbour that does belong to the flow domain, the so-called "bounce back" rule is applied: if x is a point of the type just described and its neighbour x + ξ^{(α)} Δt is a "fluid point," then
\[
f^{(\alpha)}(t+\Delta t,\, \mathbf{x}+\boldsymbol{\xi}^{(\alpha)}\Delta t) = f^{(-\alpha)}(t,\mathbf{x}) \,. \qquad (7)
\]
This approximates the no-slip condition at the solid walls, which are all at rest in our problems, and ensures global conservation of mass up to machine precision.
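To make the above summary concrete, the following self-contained sketch (Python/NumPy) assembles Eqs. (1)-(7) into one collide-and-stream update on a cubic grid with a boolean solid mask, using a standard half-way realization of the bounce-back rule (7). It is an illustrative reimplementation, not the authors' BEST solver; the grid dimensions, body force and viscosity in the usage example are arbitrary.

import numpy as np

# D3Q19 constants as in the sketch above (Eqs. (4)-(5)).
XI_POS = [( 1, 0, 0), ( 0, 1, 0), ( 0, 0, 1), ( 0, 1,-1), (-1, 0, 1),
          ( 1,-1, 0), ( 0, 1, 1), ( 1, 0, 1), ( 1, 1, 0)]
XI = np.array([(0, 0, 0)] + XI_POS + [tuple(-c for c in x) for x in XI_POS])
OPP = np.array([0] + list(range(10, 19)) + list(range(1, 10)))
CS2 = 1.0 / 3.0
W = np.array([{0: 1/3, 1: 1/18, 2: 1/36}[int(np.dot(x, x))] for x in XI])

def macroscopic(f):
    """Pressure and velocity from the discrete moments, Eq. (1)."""
    p = CS2 * f.sum(axis=0)
    v = np.einsum('aijk,ad->dijk', f, XI)
    return p, v

def equilibrium(p, v):
    """Discretized shifted Maxwellian, Eq. (3)."""
    vsq = (v ** 2).sum(axis=0)
    vxi = np.einsum('dijk,ad->aijk', v, XI)
    return (W[:, None, None, None] / CS2) * (p - 0.5 * vsq + vxi
                                             + 0.5 * vxi ** 2 / CS2)

def step(f, solid, omega, force, dt=1.0):
    """One LBGK update: collision with forcing, Eq. (2), then streaming
    with bounce-back at links crossing into solid nodes, Eq. (7)."""
    p, v = macroscopic(f)
    feq = equilibrium(p, v)
    forcing = (W / CS2 * (XI @ force))[:, None, None, None]   # F^(alpha)
    fpost = f - omega * (f - feq) + dt * forcing
    fnew = np.empty_like(f)
    for a in range(19):
        # periodic streaming of the post-collision populations
        fnew[a] = np.roll(fpost[a], shift=tuple(XI[a]), axis=(0, 1, 2))
        # where the upstream neighbour is solid, the population arriving
        # along xi^(a) is the one that left this node along -xi^(a)
        from_solid = np.roll(solid, shift=tuple(XI[a]), axis=(0, 1, 2))
        fnew[a] = np.where(from_solid, fpost[OPP[a]], fnew[a])
    return fnew

# Usage example: a small plane channel, walls normal to z, driven by a
# constant body force in x (all numbers illustrative).
nx, ny, nz = 48, 24, 26
solid = np.zeros((nx, ny, nz), dtype=bool)
solid[:, :, 0] = solid[:, :, -1] = True
f = np.tile(W[:, None, None, None], (1, nx, ny, nz))    # fluid at rest
omega = 1.0 / (0.01 / CS2 + 0.5)                         # Eq. (6), nu = 0.01
for _ in range(100):
    f = step(f, solid, omega, force=np.array([1e-5, 0.0, 0.0]))

In such a sketch, the turbulence-promoting grid of rods or the sloped confusor wall would enter only through the solid mask, which is what makes the geometry setup for the cases considered here so fast.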
3 Results

3.1 Grid turbulence

The preliminary computations reported in [2] used a mesh of size 1090 : 116 : 120 for DNS and a turbulence-promoting grid consisting of rods with square cross-section, with stride m = 20 and cross-section side length l = 4 in mesh steps Δx. The highest flow rate for which the simulation remained stable was located by increasing the forcing strength and turned out to be very close to the critical Reynolds number, Re_τ ≈ 105, above which sustained turbulence can be found in plane channels. This was interpreted as confirmation of the general observation, made for other DNS methods, that a spatial resolution of Δx ≤ min_x η(x) is required for fully resolved DNS, where η denotes the Kolmogorov (isotropic dissipative) length scale, defined through the (local) averages of turbulent kinetic energy and its dissipation rate.

In standard grid-generated turbulence, the characteristic streamwise length scale grows monotonically with downstream distance, but in the presence of walls the turbulence structure far downstream must include two main zones with different scaling: close to the wall, the usual inner-layer structure of wall-bounded turbulence has constant but completely anisotropic characteristic lengths; in the flow core, far from the walls, the simulated flow evolves similarly either to usual grid-generated or to core-flow channel turbulence, both of which are statistically axisymmetric. It was found that the grid stride m imposes the turbulence length scale not only shortly downstream of the grid, but at least up to downstream distances x = 10m in the core flow. The vigorous grid turbulence is "squeezed" into a core zone away from the walls, and a boundary layer is established along each wall over a relatively short distance x ≈ 5m. Its width remains stable downstream and can be estimated as the distance between the walls and the trains of strong spanwise vortices sealing off the near-wall zone. Their location is found to be in agreement with that of maximum turbulence intensity in developed channel flow.

Computations at larger Reynolds number and at different rod cross-section sizes confirm these observations. The far-downstream scaling, however, needs to be simulated in very long domains, at least up to 100m, corresponding to computational grids of 4000 : 200² or larger. Dissipation at the
walls plays a significant role at the low Reynolds numbers simulated. Simulations at higher Re would be more valuable, but to maintain an acceptable grid size and simulation throughput, LES has to be performed instead of DNS.

3.2 Converging channel

Preliminary investigations of confusor-like geometries of the kind shown in Figure 2 were carried out using the LBM code in its LES version. The marginal resolution (256 : 64² computational grid) limited the reliability of the LES to low Reynolds numbers, Re_τ < 400. On the other hand, it allowed a fast parametric study of the dependencies on Re and on the aspect ratio α ≥ 1. With appropriate spatially varying forcing, equivalent to the presence of well-localized streamwise vortices in the narrow part of the channel, it was possible to induce streamwise vorticity in the driven flow as well,
Fig. 3. Converging channel flow: instantaneous streamlines. The flow is from top to bottom; the view is through the upper channel wall. The region of maximal concentration of streamlines corresponds to the narrow section of the channel in Figure 2 and to the maximal bulk velocity
that was strong enough to be enhanced in the contraction part and sustained in the shape of time-dependent, large-scale vortices. These were found to correlate well with large-scale, fast-flow regions emerging in that part and extending well beyond the backward-facing step, see Figure 3.

The parametric dependence on Re and α follows the expectation that larger aspect ratios (stronger contractions) as well as larger Reynolds numbers enhance turbulence, but the dependence is not simple. For α close to 1, no vortices could be generated over the whole simulated range, 100 ≤ Re_τ ≤ 400. For fixed α, the strength of the vortices grows with Re. The critical Re, below which the flow laminarizes, appears to decrease with growing α. This may appear counterintuitive if a direct analogy with the laminarization of boundary layers is pursued: in the latter case, stronger contractions would increase the parameter K = ν (dU/dy)/U², and above a critical value K* ≈ 3 × 10⁻⁶ such layers have been observed to undergo laminarization. In the present case, related parameters like K_1 = α/Re_τ or K_2 = 1/α Re_τ can be introduced. It appears that larger K_2 values correspond to the enhancement of turbulence rather than to laminarization. Furthermore, there is no universal critical value in terms of either K_1 or K_2, and the critical values found for fixed α are much larger than K*. When appreciating these differences, it must of course be borne in mind that, in the present case, the turbulence is directly induced by a large-scale forcing in the core flow and not generated spontaneously at the channel walls.

An interesting qualitative result is the spontaneous organization of the flow into fast "jets," which fill the central part of the narrow channel and appear to be stabilized through their "wrapping" by the streamwise vortical structures that are directly induced by the external forcing. In the course of the numerical experiments it became clear that a "seed" of streamwise vorticity must already be present before the flow enters the contraction, in order to enable this kind of flow self-organization.
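To give a feel for the orders of magnitude involved (an illustrative estimate, not a value reported above): taking α between 1 and the ratio 3 shown in Figure 2, and Re_τ between 100 and 400,
\[
K_1 = \alpha / \mathrm{Re}_\tau \;\in\; [\,1/400,\; 3/100\,] \;\approx\; 2.5\times 10^{-3} \,\ldots\, 3\times 10^{-2} \,,
\]
i.e. three to four orders of magnitude above K* ≈ 3 × 10⁻⁶, which illustrates how far the present critical values lie from the boundary-layer criterion.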
4 Outlook

We have now described two types of perturbed channel flows with immediate relevance to turbulence modeling and to device design: a regular rectangular grid placed in a plane channel orthogonally to the mean flow, and a linear contraction over part of a plane channel. The preliminary DNS and LES results for these flows, obtained with a lattice Boltzmann code, have revealed important parametric dependencies on the Reynolds number and on the relevant geometric characteristics, such as grid stride or contraction ratio. The simulations have confirmed the expectation that using the LBM code on parallel computers makes it possible to perform more efficient simulations and to carry out parametric studies with LES. These studies need to be continued for a larger number of geometries to clarify the parametric dependencies, respectively on grid spacing m and
permeability, or on contraction ratio α and angle ϕ. Besides the converging channel, it is also important to investigate the diverging channel (asymmetric plane diffusor), which corresponds to a different branch of the "Lumley diagram," or Reynolds stress anisotropy map. That "diffusor" branch is important for turbulence modeling in separating flows, while the "confusor" branch treated here is relevant to flow reattachment. In order to extend the parametric study to other values of the contraction angle ϕ, it will be necessary to treat the boundary conditions not by the bounce-back rule but with a second-order scheme, using e.g. the approaches described in [7]. This will ensure consistency with the formal order of accuracy of the LBM within the flow domain. Systematic comparisons between DNS and LES will provide, on the one hand, further advanced validation cases for near-wall SGS modeling and, on the other hand, new information on the specifics of LBM codes as turbulence solvers. While this is important for establishing confidence in the reliability of such codes, their excellent computational efficiency on medium- and large-scale problems can already be considered established.
References

1. Lammers P, Beronov KN, Brenner G, Durst F (2003) Direct simulation with the lattice Boltzmann code BEST of developed turbulence in channel flows. In: Wagner S, Hanke W, Bode A, Durst F (eds) High Performance Computing in Science and Engineering, Munich 2004. Transactions of the Second Joint HLRB and KONWIHR Status and Result Workshop. Springer, Berlin
2. Beronov KN, Durst F (2004) Efficiency of lattice Boltzmann codes as moderate Reynolds number turbulence solvers. In: Wagner S, Hanke W, Bode A, Durst F (eds) High Performance Computing in Science and Engineering, Munich 2004. Transactions of the Second Joint HLRB and KONWIHR Status and Result Workshop. Springer, Berlin
3. Eggels JGM (1996) Int J Heat Fluid Flow 17:307-323
4. Toschi F, Amati G, Succi S, Benzi R, Piva R (1999) Phys Rev Lett 82(25):5044-5047
5. Dupuis A, Chopard B (2002) J Comput Phys 178:161-174
6. Krafczyk M, Tölke J, Luo L (2003) Int J Mod Phys B 17(1/2):33-39
7. Yu D, Mei R, Luo L, Shyy W (2003) Prog Aerospace Sci 39(4):329-367
8. Lammers P, Beronov KN, Volkert R, Brenner G, Durst F (2004) Lattice BGK direct numerical simulation of fully developed turbulence in incompressible plane channel flow (submitted to Comput Fluids)