Computational Methods and Experimental Measurements XIV
WIT Press publishes leading books in Science and Technology. Visit our website for a current list of titles: www.witpress.com
WITeLibrary Home of the Transactions of the Wessex Institute. Papers presented at Computational Methods and Experimental Measurements XIV are archived in the WIT eLibrary in volume 48 of WIT Transactions on Modelling and Simulation (ISSN 1743-355X). The WIT eLibrary provides the international scientific community with immediate and permanent access to individual papers presented at WIT conferences. Visit the WIT eLibrary at www.witpress.com.
FOURTEENTH INTERNATIONAL CONFERENCE ON COMPUTATIONAL METHODS AND EXPERIMENTAL MEASUREMENTS
CMEM XIV CONFERENCE CHAIRMEN C.A. Brebbia Wessex Institute of Technology, UK G. M. Carlomagno University of Naples Federico II, Italy
INTERNATIONAL SCIENTIFIC ADVISORY COMMITTEE I. Abdalla A. Beinat Z. Bielecki A. Britten L. Cascini R. Cerny G. Dalla Fontana J. Everett S. Fattorelli L. Fryba W. Graf L. Guerriero
C. Karayannis J. Kompenhans R. Liebe O. Manca P. Prochazka H. Sakamoto K. Takayama M. Trajkovic M. Tsutahara F. Viadero Rueda M. Wnuk G. Zappala
Organised by Wessex Institute of Technology, UK Sponsored by WIT Transactions on Modelling and Simulation
WIT Transactions

Transactions Editor
Carlos Brebbia
Wessex Institute of Technology
Ashurst Lodge, Ashurst
Southampton SO40 7AA, UK
Email: [email protected]
Editorial Board B Abersek University of Maribor, Slovenia Y N Abousleiman University of Oklahoma, USA P L Aguilar University of Extremadura, Spain K S Al Jabri Sultan Qaboos University, Oman E Alarcon Universidad Politecnica de Madrid, Spain A Aldama IMTA, Mexico C Alessandri Universita di Ferrara, Italy D Almorza Gomar University of Cadiz, Spain B Alzahabi Kettering University, USA J A C Ambrosio IDMEC, Portugal A M Amer Cairo University, Egypt S A Anagnostopoulos University of Patras, Greece M Andretta Montecatini, Italy E Angelino A.R.P.A. Lombardia, Italy H Antes Technische Universitat Braunschweig, Germany M A Atherton South Bank University, UK A G Atkins University of Reading, UK D Aubry Ecole Centrale de Paris, France H Azegami Toyohashi University of Technology, Japan A F M Azevedo University of Porto, Portugal J Baish Bucknell University, USA J M Baldasano Universitat Politecnica de Catalunya, Spain J G Bartzis Institute of Nuclear Technology, Greece A Bejan Duke University, USA
M P Bekakos Democritus University of Thrace, Greece G Belingardi Politecnico di Torino, Italy R Belmans Katholieke Universiteit Leuven, Belgium C D Bertram The University of New South Wales, Australia D E Beskos University of Patras, Greece S K Bhattacharyya Indian Institute of Technology, India E Blums Latvian Academy of Sciences, Latvia J Boarder Cartref Consulting Systems, UK B Bobee Institut National de la Recherche Scientifique, Canada H Boileau ESIGEC, France J J Bommer Imperial College London, UK M Bonnet Ecole Polytechnique, France C A Borrego University of Aveiro, Portugal A R Bretones University of Granada, Spain J A Bryant University of Exeter, UK F-G Buchholz Universitat Gesanthochschule Paderborn, Germany M B Bush The University of Western Australia, Australia F Butera Politecnico di Milano, Italy J Byrne University of Portsmouth, UK W Cantwell Liverpool University, UK D J Cartwright Bucknell University, USA P G Carydis National Technical University of Athens, Greece J J Casares Long Universidad de Santiago de Compostela, Spain, M A Celia Princeton University, USA A Chakrabarti Indian Institute of Science, India
A H-D Cheng University of Mississippi, USA J Chilton University of Lincoln, UK C-L Chiu University of Pittsburgh, USA H Choi Kangnung National University, Korea A Cieslak Technical University of Lodz, Poland S Clement Transport System Centre, Australia M W Collins Brunel University, UK J J Connor Massachusetts Institute of Technology, USA M C Constantinou State University of New York at Buffalo, USA D E Cormack University of Toronto, Canada M Costantino Royal Bank of Scotland, UK D F Cutler Royal Botanic Gardens, UK W Czyczula Krakow University of Technology, Poland M da Conceicao Cunha University of Coimbra, Portugal A Davies University of Hertfordshire, UK M Davis Temple University, USA A B de Almeida Instituto Superior Tecnico, Portugal E R de Arantes e Oliveira Instituto Superior Tecnico, Portugal L De Biase University of Milan, Italy R de Borst Delft University of Technology, Netherlands G De Mey University of Ghent, Belgium A De Montis Universita di Cagliari, Italy A De Naeyer Universiteit Ghent, Belgium W P De Wilde Vrije Universiteit Brussel, Belgium L Debnath University of Texas-Pan American, USA N J Dedios Mimbela Universidad de Cordoba, Spain G Degrande Katholieke Universiteit Leuven, Belgium S del Giudice University of Udine, Italy G Deplano Universita di Cagliari, Italy I Doltsinis University of Stuttgart, Germany M Domaszewski Universite de Technologie de Belfort-Montbeliard, France J Dominguez University of Seville, Spain
K Dorow Pacific Northwest National Laboratory, USA W Dover University College London, UK C Dowlen South Bank University, UK J P du Plessis University of Stellenbosch, South Africa R Duffell University of Hertfordshire, UK A Ebel University of Cologne, Germany E E Edoutos Democritus University of Thrace, Greece G K Egan Monash University, Australia K M Elawadly Alexandria University, Egypt K-H Elmer Universitat Hannover, Germany D Elms University of Canterbury, New Zealand M E M El-Sayed Kettering University, USA D M Elsom Oxford Brookes University, UK A El-Zafrany Cranfield University, UK F Erdogan Lehigh University, USA F P Escrig University of Seville, Spain D J Evans Nottingham Trent University, UK J W Everett Rowan University, USA M Faghri University of Rhode Island, USA R A Falconer Cardiff University, UK M N Fardis University of Patras, Greece P Fedelinski Silesian Technical University, Poland H J S Fernando Arizona State University, USA S Finger Carnegie Mellon University, USA J I Frankel University of Tennessee, USA D M Fraser University of Cape Town, South Africa M J Fritzler University of Calgary, Canada U Gabbert Otto-von-Guericke Universitat Magdeburg, Germany G Gambolati Universita di Padova, Italy C J Gantes National Technical University of Athens, Greece L Gaul Universitat Stuttgart, Germany A Genco University of Palermo, Italy N Georgantzis Universitat Jaume I, Spain G S Gipson Oklahoma State University, USA P Giudici Universita di Pavia, Italy F Gomez Universidad Politecnica de Valencia, Spain R Gomez Martin University of Granada, Spain
D Goulias University of Maryland, USA K G Goulias Pennsylvania State University, USA F Grandori Politecnico di Milano, Italy W E Grant Texas A & M University, USA S Grilli University of Rhode Island, USA R H J Grimshaw, Loughborough University, UK D Gross Technische Hochschule Darmstadt, Germany R Grundmann Technische Universitat Dresden, Germany A Gualtierotti IDHEAP, Switzerland R C Gupta National University of Singapore, Singapore J M Hale University of Newcastle, UK K Hameyer Katholieke Universiteit Leuven, Belgium C Hanke Danish Technical University, Denmark K Hayami National Institute of Informatics, Japan Y Hayashi Nagoya University, Japan L Haydock Newage International Limited, UK A H Hendrickx Free University of Brussels, Belgium C Herman John Hopkins University, USA S Heslop University of Bristol, UK I Hideaki Nagoya University, Japan D A Hills University of Oxford, UK W F Huebner Southwest Research Institute, USA J A C Humphrey Bucknell University, USA M Y Hussaini Florida State University, USA W Hutchinson Edith Cowan University, Australia T H Hyde University of Nottingham, UK M Iguchi Science University of Tokyo, Japan D B Ingham University of Leeds, UK L Int Panis VITO Expertisecentrum IMS, Belgium N Ishikawa National Defence Academy, Japan J Jaafar UiTm, Malaysia W Jager Technical University of Dresden, Germany Y Jaluria Rutgers University, USA C M Jefferson University of the West of England, UK
P R Johnston Griffith University, Australia D R H Jones University of Cambridge, UK N Jones University of Liverpool, UK D Kaliampakos National Technical University of Athens, Greece N Kamiya Nagoya University, Japan D L Karabalis University of Patras, Greece M Karlsson Linkoping University, Sweden T Katayama Doshisha University, Japan K L Katsifarakis Aristotle University of Thessaloniki, Greece J T Katsikadelis National Technical University of Athens, Greece E Kausel Massachusetts Institute of Technology, USA H Kawashima The University of Tokyo, Japan B A Kazimee Washington State University, USA S Kim University of Wisconsin-Madison, USA D Kirkland Nicholas Grimshaw & Partners Ltd, UK E Kita Nagoya University, Japan A S Kobayashi University of Washington, USA T Kobayashi University of Tokyo, Japan D Koga Saga University, Japan A Konrad University of Toronto, Canada S Kotake University of Tokyo, Japan A N Kounadis National Technical University of Athens, Greece W B Kratzig Ruhr Universitat Bochum, Germany T Krauthammer Penn State University, USA C-H Lai University of Greenwich, UK M Langseth Norwegian University of Science and Technology, Norway B S Larsen Technical University of Denmark, Denmark F Lattarulo Politecnico di Bari, Italy A Lebedev Moscow State University, Russia L J Leon University of Montreal, Canada D Lewis Mississippi State University, USA S Elghobashi University of California Irvine, USA K-C Lin University of New Brunswick, Canada A A Liolios Democritus University of Thrace, Greece
S Lomov Katholieke Universiteit Leuven, Belgium J W S Longhurst University of the West of England, UK G Loo The University of Auckland, New Zealand J Lourenco Universidade do Minho, Portugal J E Luco University of California at San Diego, USA H Lui State Seismological Bureau Harbin, China C J Lumsden University of Toronto, Canada L Lundqvist Division of Transport and Location Analysis, Sweden T Lyons Murdoch University, Australia Y-W Mai University of Sydney, Australia M Majowiecki University of Bologna, Italy D Malerba Università degli Studi di Bari, Italy G Manara University of Pisa, Italy B N Mandal Indian Statistical Institute, India Ü Mander University of Tartu, Estonia H A Mang Technische Universitat Wien, Austria G D Manolis Aristotle University of Thessaloniki, Greece W J Mansur COPPE/UFRJ, Brazil N Marchettini University of Siena, Italy J D M Marsh Griffith University, Australia J F Martin-Duque Universidad Complutense, Spain T Matsui Nagoya University, Japan G Mattrisch DaimlerChrysler AG, Germany F M Mazzolani University of Naples “Federico II”, Italy K McManis University of New Orleans, USA A C Mendes Universidade de Beira Interior, Portugal R A Meric Research Institute for Basic Sciences, Turkey J Mikielewicz Polish Academy of Sciences, Poland N Milic-Frayling Microsoft Research Ltd, UK R A W Mines University of Liverpool, UK C A Mitchell University of Sydney, Australia
K Miura Kajima Corporation, Japan A Miyamoto Yamaguchi University, Japan T Miyoshi Kobe University, Japan G Molinari University of Genoa, Italy T B Moodie University of Alberta, Canada D B Murray Trinity College Dublin, Ireland G Nakhaeizadeh DaimlerChrysler AG, Germany M B Neace Mercer University, USA D Necsulescu University of Ottawa, Canada F Neumann University of Vienna, Austria S-I Nishida Saga University, Japan H Nisitani Kyushu Sangyo University, Japan B Notaros University of Massachusetts, USA P O’Donoghue University College Dublin, Ireland R O O’Neill Oak Ridge National Laboratory, USA M Ohkusu Kyushu University, Japan G Oliveto Universitá di Catania, Italy R Olsen Camp Dresser & McKee Inc., USA E Oñate Universitat Politecnica de Catalunya, Spain K Onishi Ibaraki University, Japan P H Oosthuizen Queens University, Canada E L Ortiz Imperial College London, UK E Outa Waseda University, Japan A S Papageorgiou Rensselaer Polytechnic Institute, USA J Park Seoul National University, Korea G Passerini Universita delle Marche, Italy B C Patten, University of Georgia, USA G Pelosi University of Florence, Italy G G Penelis, Aristotle University of Thessaloniki, Greece W Perrie Bedford Institute of Oceanography, Canada R Pietrabissa Politecnico di Milano, Italy H Pina Instituto Superior Tecnico, Portugal M F Platzer Naval Postgraduate School, USA D Poljak University of Split, Croatia V Popov Wessex Institute of Technology, UK H Power University of Nottingham, UK D Prandle Proudman Oceanographic Laboratory, UK
M Predeleanu University Paris VI, France M R I Purvis University of Portsmouth, UK I S Putra Institute of Technology Bandung, Indonesia Y A Pykh Russian Academy of Sciences, Russia F Rachidi EMC Group, Switzerland M Rahman Dalhousie University, Canada K R Rajagopal Texas A & M University, USA T Rang Tallinn Technical University, Estonia J Rao Case Western Reserve University, USA A M Reinhorn State University of New York at Buffalo, USA A D Rey McGill University, Canada D N Riahi University of Illinois at UrbanaChampaign, USA B Ribas Spanish National Centre for Environmental Health, Spain K Richter Graz University of Technology, Austria S Rinaldi Politecnico di Milano, Italy F Robuste Universitat Politecnica de Catalunya, Spain J Roddick Flinders University, Australia A C Rodrigues Universidade Nova de Lisboa, Portugal F Rodrigues Poly Institute of Porto, Portugal C W Roeder University of Washington, USA J M Roesset Texas A & M University, USA W Roetzel Universitaet der Bundeswehr Hamburg, Germany V Roje University of Split, Croatia R Rosset Laboratoire d’Aerologie, France J L Rubio Centro de Investigaciones sobre Desertificacion, Spain T J Rudolphi Iowa State University, USA S Russenchuck Magnet Group, Switzerland H Ryssel Fraunhofer Institut Integrierte Schaltungen, Germany S G Saad American University in Cairo, Egypt M Saiidi University of Nevada-Reno, USA R San Jose Technical University of Madrid, Spain F J Sanchez-Sesma Instituto Mexicano del Petroleo, Mexico
B Sarler Nova Gorica Polytechnic, Slovenia S A Savidis Technische Universitat Berlin, Germany A Savini Universita de Pavia, Italy G Schmid Ruhr-Universitat Bochum, Germany R Schmidt RWTH Aachen, Germany B Scholtes Universitaet of Kassel, Germany W Schreiber University of Alabama, USA A P S Selvadurai McGill University, Canada J J Sendra University of Seville, Spain J J Sharp Memorial University of Newfoundland, Canada Q Shen Massachusetts Institute of Technology, USA X Shixiong Fudan University, China G C Sih Lehigh University, USA L C Simoes University of Coimbra, Portugal A C Singhal Arizona State University, USA P Skerget University of Maribor, Slovenia J Sladek Slovak Academy of Sciences, Slovakia V Sladek Slovak Academy of Sciences, Slovakia A C M Sousa University of New Brunswick, Canada H Sozer Illinois Institute of Technology, USA D B Spalding CHAM, UK P D Spanos Rice University, USA T Speck Albert-Ludwigs-Universitaet Freiburg, Germany C C Spyrakos National Technical University of Athens, Greece I V Stangeeva St Petersburg University, Russia J Stasiek Technical University of Gdansk, Poland G E Swaters University of Alberta, Canada S Syngellakis University of Southampton, UK J Szmyd University of Mining and Metallurgy, Poland S T Tadano Hokkaido University, Japan H Takemiya Okayama University, Japan I Takewaki Kyoto University, Japan C-L Tan Carleton University, Canada M Tanaka Shinshu University, Japan E Taniguchi Kyoto University, Japan
S Tanimura Aichi University of Technology, Japan J L Tassoulas University of Texas at Austin, USA M A P Taylor University of South Australia, Australia A Terranova Politecnico di Milano, Italy E Tiezzi University of Siena, Italy A G Tijhuis Technische Universiteit Eindhoven, Netherlands T Tirabassi Institute FISBAT-CNR, Italy S Tkachenko Otto-von-Guericke University, Germany N Tosaka Nihon University, Japan T Tran-Cong University of Southern Queensland, Australia R Tremblay Ecole Polytechnique, Canada I Tsukrov University of New Hampshire, USA R Turra CINECA Interuniversity Computing Centre, Italy S G Tushinski Moscow State University, Russia J-L Uso Universitat Jaume I, Spain E Van den Bulck Katholieke Universiteit Leuven, Belgium D Van den Poel Ghent University, Belgium R van der Heijden Radboud University, Netherlands R van Duin Delft University of Technology, Netherlands P Vas University of Aberdeen, UK W S Venturini University of Sao Paulo, Brazil
R Verhoeven Ghent University, Belgium A Viguri Universitat Jaume I, Spain Y Villacampa Esteve Universidad de Alicante, Spain F F V Vincent University of Bath, UK S Walker Imperial College, UK G Walters University of Exeter, UK B Weiss University of Vienna, Austria H Westphal University of Magdeburg, Germany J R Whiteman Brunel University, UK Z-Y Yan Peking University, China S Yanniotis Agricultural University of Athens, Greece A Yeh University of Hong Kong, China J Yoon Old Dominion University, USA K Yoshizato Hiroshima University, Japan T X Yu Hong Kong University of Science & Technology, Hong Kong M Zador Technical University of Budapest, Hungary K Zakrzewski Politechnika Lodzka, Poland M Zamir University of Western Ontario, Canada R Zarnic University of Ljubljana, Slovenia G Zharkova Institute of Theoretical and Applied Mechanics, Russia N Zhong Maebashi Institute of Technology, Japan H G Zimmermann Siemens AG, Germany
Computational Methods and Experimental Measurements XIV EDITORS C.A. Brebbia Wessex Institute of Technology, UK G.M. Carlomagno University of Naples Federico II, Italy
Published by

WIT Press
Ashurst Lodge, Ashurst, Southampton, SO40 7AA, UK
Tel: 44 (0) 238 029 3223; Fax: 44 (0) 238 029 2853
E-Mail: [email protected]
http://www.witpress.com

For USA, Canada and Mexico

Computational Mechanics Inc
25 Bridge Street, Billerica, MA 01821, USA
Tel: 978 667 5841; Fax: 978 667 7582
E-Mail: [email protected]
http://www.witpress.com

British Library Cataloguing-in-Publication Data
A Catalogue record for this book is available from the British Library

ISBN: 978-1-84564-187-0
ISSN: 1746-4064 (print)
ISSN: 1743-355X (on-line)

The texts of the papers in this volume were set individually by the authors or under their supervision. Only minor corrections to the text may have been carried out by the publisher. No responsibility is assumed by the Publisher, the Editors and Authors for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. The Publisher does not necessarily endorse the ideas held, or views expressed by the Editors or Authors of the material contained in its publications.

© WIT Press 2009

Printed in Great Britain by Athenaeum Press Ltd.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the Publisher.
Preface
This book contains the majority of the papers presented at the 14th International Conference on Computational Methods and Experimental Measurements (CMEM/09), held in the Algarve (Portugal) in 2009. The conference series is unique in its field and is now part of a long tradition: the first meeting took place in Washington DC in 1981, and the conference has been reconvened every two years since, in several locations, with uninterrupted success.

The main scope of the meeting is to provide the international technical and scientific community with a forum to discuss the interaction and complementary aspects of computational methods and experimental measurements, with particular emphasis on their advantageous integration. It is well known that steady progress in computer performance and numerical techniques is producing a continuous growth of computational simulations, which nowadays influence an ever-widening range of engineering problems as well as our everyday activities. As these simulations continue to expand and improve, they still need to be validated, which can only be accomplished by dedicated experimental tests. Furthermore, because of their incessant development, experimental techniques are becoming more complex and sophisticated, so that both their operation and their data collection can only be performed by means of computers. Finally, it must be stressed that, for the majority of measurements, the data obtained must be processed by numerical methods.

This volume contains a substantial number of excellent scientific papers, which present advanced approaches to modern research problems. They have been grouped into the following sections:

• Computational and experimental methods
• Experimental and computational analysis
• Direct, indirect and in-situ measurements
• Detection and signal processing
• Data processing
• Fluid flow
• Heat transfer and thermal processes
• Material characterization
• Structural and stress analysis
• Industrial applications
• Forest fires
The Editors are very grateful to all the authors for their valuable contributions and to the Members of the International Scientific Advisory Committee, as well as other colleagues, for their help in reviewing the papers published in this book. The Editors Algarve, 2009
Contents

Section 1: Computational and experimental methods

Heat and moisture transport in porous materials involving cyclic wetting and drying
R. Černý, J. Maděra, J. Kočí & E. Vejmelková .......... 3

Influence of material characteristics of concrete and thermal insulation on the service life of exterior renders
J. Maděra, V. Kočí, E. Vejmelková, R. Černý, P. Rovnaníková, M. Ondráček & M. Sedlmajer .......... 13

A procedure for adaptive evaluation of numerical and experimental data
J. Krok, M. Stanuszek & J. Wojtas .......... 25

Gear predictor of manual transmission vehicles based on artificial neural network
A. M. Wefky, F. Espinosa, M. Mazo, J. A. Jiménez, E. Santiso, A. Gardel & D. Pérez .......... 37

Non-thermal, chemical destruction of PCB from Sydney tar ponds soil extract
A. J. Britten & S. MacKenzie .......... 47

Section 2: Experimental and computational analysis

Multi-scale FE analyses of sheet formability based on SEM-EBSD crystal texture measurement
H. Sakamoto, H. Kuramae, E. Nakamachi & H. Morimoto .......... 61

Direct simulation of sounds generated by collision between water drop and water surface
M. Tsutahara, S. Tajiri, T. Miyaoka, N. Kobata & H. Tanaka .......... 71

Experimental and numerical analysis of concrete slabs prestressed with composite reinforcement
R. Sovják, P. Máca, P. Konvalinka & J. L. Vítek .......... 83

Measures in the underground work method to determine the mathematical relations that predicts rock behaviour
S. Torno, J. Velasco, I. Diego, J. Toraño, M. Menéndez, M. Gent & J. Roldán .......... 95

Implementation and validation of a strain rate dependent model for carbon foam
G. Janszen & P. G. Nettuno .......... 105

Statics and dynamics of carbon fibre reinforcement composites on steel orthotropic decks
L. Frýba, M. Pirner & Sh. Urushadze .......... 117

Study of the thermo-physical properties of bitumen in hydrocarbon condensates
A. Miadonye, J. Cyr, K. Secka & A. Britten .......... 125

Section 3: Direct, indirect and in-situ measurements

Modelling energy consumption in test cells
D. Braga, Y. Parte, M. Fructus, T. Touya, M. Masmoudi, T. Wylot & V. Kearley .......... 137

Evaluation of insulation systems by in situ testing
I. Enache, D. Braga, C. Portet & M. Duran .......... 147

Monitoring coupled moisture and salt transport using single vertical suction experiment
Z. Pavlík, J. Mihulka, M. Pavlíková & R. Černý .......... 157

Application of image analysis for the measurement of liquid metal free surface
S. Golak .......... 169

Damage assessment by automated quantitative image analysis – a risky undertaking
P. Stroeven .......... 179

Section 4: Detection and signal processing
Special session chaired by A. Kawalec

Antenna radiation patterns indication on the basic measurement of field radiation in the near zone
M. Wnuk .......... 191

Sub-ppb NOx detection by a cavity enhanced absorption spectroscopy system with blue and infrared diode lasers
Z. Bielecki, M. Leszczynski, K. Holz, L. Marona, J. Mikolajczyk, M. Nowakowski, P. Perlin, B. Rutecka, T. Stacewicz & J. Wojtas .......... 203

Multispectral detection circuits in special applications
Z. Bielecki, W. Kolosowski, E. Sedek, M. Wnuk & J. Wojtas .......... 217

Modification of raised cosine weighting functions family
C. Lesnik, A. Kawalec & J. Pietrasinski .......... 229

Technique for the electric and magnetic parameter measurement of powdered materials
R. Kubacki, L. Nowosielski, R. Przesmycki .......... 241

Acoustic watermark server effectiveness
Z. Piotrowski & P. Gajewski .......... 251

Intrapulse analysis of radar signal
A. Pieniężny & S. Konatowski .......... 259

Neural detection of parameter changes in a dynamic system using time-frequency transforms
E. Swiercz .......... 271

Section 5: Data processing

A versatile software-hardware system for environmental data acquisition and transmission
G. Zappalà .......... 283

Modelling of the precise movement of a ship at slow speed to minimize the trajectory deviation risk
J. Malecki .......... 295

Automated safe control of a Self-propelled Mine Counter Charge in an underwater environment
P. Szymak .......... 305

A novel financial model of long term growing stocks for the Taiwan stock market
S.-H. Liang, S.-C. Liang, L.-C. Lien & C.-C. Liang .......... 315

Section 6: Fluid flow

On the differences of transitional separated-reattached flows over leading-edge obstacles of varying geometries
I. E. Abdalla .......... 329

A second order method for solving turbulent shallow flows
J. Fe & F. Navarrina .......... 341

Numerical analysis of compressible turbulent helical flow in a Ranque-Hilsch vortex tube
R. Ricci, A. Secchiaroli, V. D’Alessandro & S. Montelpare .......... 353

Turbulence: a new zero-equation model
K. Alammar .......... 365

Mesh block refinement technique for incompressible flows in complex geometries using Cartesian grids
C. Georgantopoulou, G. Georgantopoulos & S. Tsangaris .......... 369

Application of the finite volume method for the supersonic flow around the axisymmetric cone body placed in a free stream
R. Haoui .......... 379

Some aspects and aerodynamic effects in repairing battle damaged wings
S. Djellal & A. Ouibrahim .......... 389

Section 7: Heat transfer and thermal processes

Natural and mixed convection in inclined channels with partial openings
A. Andreozzi, B. Buonomo, O. Manca & S. Nardini .......... 401

Heat flux reconstruction in the grinding process from temperature data
J. Irša & A. N. Galybin .......... 413

Analysis of non-isothermal fluid flow past an in-line tube bank
M. Alavi & H. Goshayeshi .......... 425

Transport phenomenon in a jet type mold cooling pipe
H. Kawahara & T. Nishimura .......... 437

Two-phase modelling of nanofluid heat transfer in a microchannel heat sink
C. T. Nguyen & M. Le Menn .......... 451

Numerical investigation of sensible thermal energy storage in high temperature solar systems
A. Andreozzi, N. Bianco, O. Manca, S. Nardini & V. Naso .......... 461

Dynamic modelling of the thermal space of the metallurgical walking beams furnaces
D. Constantinescu .......... 473

Section 8: Material characterisation

Investigation of shape recovery stress for ferrous shape memory alloy
H. Naoi, M. Wada, T. Koike, H. Yamamoto & T. Maruyama .......... 485

Growth behavior of small surface cracks in coarse and ultrafine grained copper
M. Goto, S. Z. Han, Y. Ando, N. Kawagoishi, N. Teshima & S. S. Kim .......... 497

Section 9: Structural and stress analysis

Numerical simulation of structures using generalized models for data uncertainty
W. Graf, J.-U. Sickert & F. Steinigen .......... 511

A dynamic model for the study of gear transmissions
A. Fernandez del Rincon, F. Viadero, R. Sancibrian, P. Garcia Fernandez & A. de Juan .......... 523

Long-term behaviour of concrete structures reinforced with pre-stressed GFRP tendons
J. Fornůsek, P. Konvalinka, R. Sovják & J. L. Vítek .......... 535

Application of the finite element method for static design of plane linear systems with semi-rigid connections
D. Zlatkov, S. Zdravkovic, B. Mladenovic & M. Mijalkovic .......... 547

Blade loss studies in low-pressure turbines – from blade containment to controlled blade-shedding
R. Ortiz, M. Herran & H. Chalons .......... 559

Section 10: Industrial applications

Finding the “optimal” size and location of treatment plants for a Jatropha oil plantation project in Thailand
J. E. Everett .......... 571

An industrial ship system for the flat development of undevelopable surfaces: algorithm and implementation
E. M. Soto, X. A. Leiceaga & S. García .......... 579

Section 11: Forest fires

Assessment of the plume theory predictions of crown scorch or crown fire initiation using transport models
V. Konovalov, J.-L. Dupuy, F. Pimont, D. Morvan & R. R. Linn .......... 593

Spotting ignition of fuel beds by firebrands
C. Lautenberger & A. C. Fernandez-Pello .......... 603

Impact of fuel-break structure on fire behaviour simulated with FIRETEC
F. Pimont, J.-L. Dupuy & R. R. Linn .......... 613

A new model of information systems for public awareness about wildfires
P.-Y. Badillo & C. Sybord .......... 623

Combustion modelling for forest fires: from detailed to skeletal and global models
P. A. Santoni .......... 633

Author Index .......... 645
Section 1 Computational and experimental methods
Computational Methods and Experimental Measurements XIV
Heat and moisture transport in porous materials involving cyclic wetting and drying R. Černý, J. Maděra, J. Kočí & E. Vejmelková Czech Technical University in Prague, Faculty of Civil Engineering, Department of Materials Engineering and Chemistry, Czech Republic
Abstract
Computational modeling of coupled heat and moisture transport in porous building materials with hysteretic moisture transport and storage parameters under difference-climate conditions is presented in the paper. A diffusion-type model is used for the description of coupled heat and moisture transport. An empirical procedure is chosen to describe the path between the transport and storage parameters corresponding to wetting and drying. In a practical example of computer simulation, a concrete wall provided with exterior thermal insulation is analyzed. The computational results reveal very significant differences between the moisture and relative humidity profiles calculated using the model with hysteretic parameters and those calculated without hysteresis. As the differences are on the dangerous side from the hygrothermal point of view, the application of hysteretic moisture transport and storage parameters in computational models can be considered quite important for service life analyses of multi-layered systems of building materials.
Keywords: moisture transport, hysteresis, computer simulation.
1 Introduction
Heat and moisture transport calculations are quite common in service life analyses of multi-layered systems of building materials. They make it possible to identify potential weak points in building envelopes from the hygrothermal point of view, thus allowing one to react to possible danger in time and to prevent excessive damage caused, for instance, by the accumulation of liquid water in specific parts of a structure. Although it has been known for years that the moisture transport and storage parameters of many porous materials may exhibit considerable hysteresis, most calculations are still performed with parameters measured during the adsorption phase, an apparent consequence of the difficulties experimentalists face in measuring some parameters in the desorption phase. In this paper, the effect of hysteresis of moisture transport and storage parameters on calculated moisture and relative humidity fields is investigated for a characteristic case of a concrete wall with exterior thermal insulation, and the possible consequences of neglecting the hysteresis of these parameters for service life calculations are analyzed.

WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press, www.witpress.com, ISSN 1743-355X (on-line), doi:10.2495/CMEM090011
2 Mathematical model
The diffusion model proposed by Künzel [1] was used for the description of coupled heat and moisture transport. The heat and moisture balance equations were formulated in the form
(dH/dT) ∂T/∂t = div(λ grad T) + Lv div[δp grad(ϕ ps)],   (1)

(∂ρv/∂ϕ) ∂ϕ/∂t = div[Dϕ grad ϕ + δp grad(ϕ ps)],   (2)
where H is the enthalpy density, Lv heat of evaporation, λ thermal conductivity, T temperature, ρv partial density of moisture, ϕ relative humidity, δp permeability of water vapor, ps partial pressure of saturated water vapor,
Dϕ = Dw dρv/dϕ   (3)
is the liquid water transport coefficient, and DW the capillary transport coefficient (moisture diffusivity). The inclusion of cyclic wetting and drying processes into the model was done using different moisture transport and storage parameter functions in the wetting and drying phases and calculating the path between the parameter functions corresponding to wetting and drying in every time step. For describing the path between the adsorption and desorption isotherms, an empirical procedure was chosen which follows Pedersen's hysteretic model [2]. The actual value of moisture content, w, is determined using the equation

w = wp + ξ (ϕa − ϕp),   (4)
where ϕa is the actual value of relative humidity and ϕp the value of relative humidity from the previous calculation step, and ξ is the slope of the hysteretic parameter, which is calculated as
ξ = [ad (wp − wa)² ξd + aa (wp − wd)² ξa] / (wd − wa)²,   (5)
where wp is the value of moisture content from the previous calculation step, wa and wd are the values of moisture content for the adsorption and desorption cycles, ξa and ξd the tangent values for adsorption and desorption at the points wa and wd, and aa and ad the correction coefficients. An example of the calculation of the path between the adsorption and desorption isotherms is given in Figure 1.
Figure 1: Example of application of hysteresis to sorption isotherms during drying and wetting cycles.
In the case of moisture diffusivity, a modification of (5) was necessary in order to express the hysteretic effect in a more accurate way. The modified equation can be expressed as
ξ = [ad (ln κp − ln κa)² ln ξd + aa (ln κp − ln κd)² ln ξa] / (ln κd − ln κa)²,   (6)
where κp is the value of moisture diffusivity from the previous calculation step.
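The update rules (4)-(6) are simple enough to sketch in code. The following Python fragment, written for this text rather than taken from the paper, mirrors the equations term by term; all numerical inputs, including the correction coefficients aa and ad, are illustrative assumptions.

```python
import math

# Sketch of the Pedersen-type hysteresis updates, eqs. (4)-(6).
# Variable names mirror the text; all numerical values below are assumptions.

def hysteretic_slope(w_p, w_a, w_d, xi_a, xi_d, a_a, a_d):
    """Slope xi of the hysteretic path, eq. (5).

    w_p        -- moisture content from the previous step
    w_a, w_d   -- moisture contents on the adsorption/desorption isotherms
    xi_a, xi_d -- tangents of the isotherms at the points w_a and w_d
    a_a, a_d   -- empirical correction coefficients
    """
    return (a_d * (w_p - w_a) ** 2 * xi_d
            + a_a * (w_p - w_d) ** 2 * xi_a) / (w_d - w_a) ** 2

def hysteretic_slope_kappa(ln_k_p, ln_k_a, ln_k_d, xi_a, xi_d, a_a, a_d):
    """Slope for moisture diffusivity, eq. (6): same form, but in log space."""
    return (a_d * (ln_k_p - ln_k_a) ** 2 * math.log(xi_d)
            + a_a * (ln_k_p - ln_k_d) ** 2 * math.log(xi_a)) / (ln_k_d - ln_k_a) ** 2

def moisture_update(w_p, phi_a, phi_p, xi):
    """Actual moisture content, eq. (4): w = w_p + xi * (phi_a - phi_p)."""
    return w_p + xi * (phi_a - phi_p)

# One step with made-up numbers: w_p lies between the two isotherm values.
xi = hysteretic_slope(0.05, 0.04, 0.06, 0.1, 0.3, a_a=1.0, a_d=1.0)
w = moisture_update(0.05, 0.60, 0.50, xi)   # xi = 0.1, w = 0.06
```

In an actual simulation these functions would be evaluated at every time step and grid point, with wa, wd, ξa and ξd interpolated from the measured adsorption and desorption curves.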
3 Materials and building envelope
A simplified building envelope system was chosen for the investigation of the effect of hysteresis on the calculated moisture and relative humidity fields. The load-bearing structure was made of high performance concrete containing metakaolin (600 mm). Mineral wool (140 mm) was used as the exterior thermal insulation. Lime-cement plaster (10 mm) was applied on both the exterior and interior sides. The building envelope was exposed from the inside to constant conditions (temperature equal to 21°C and relative humidity equal to 55%) and from the outside to climatic conditions corresponding to the reference year for Prague (Fig. 2).
Figure 2: Scheme of the studied envelope including boundary conditions (inside: constant temperature T = 21°C and constant relative humidity ϕ = 55%; outside: climatic data from the TRY; layer thicknesses 10, 600, 140 and 10 mm).
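For reference, the layer stack and boundary conditions of Fig. 2 can be written out as plain data. This is our own restatement of the setup, with field names invented for the sketch.

```python
# Layer stack of the studied envelope, interior to exterior (thicknesses in mm),
# and the boundary conditions used in the simulation. Field names are our own.
WALL = [
    ("lime-cement plaster (LCP)", 10),
    ("HPC with metakaolin (HPCM)", 600),
    ("mineral wool (MW)", 140),
    ("lime-cement plaster (LCP)", 10),
]

INTERIOR = {"T_C": 21.0, "phi": 0.55}          # constant interior conditions
EXTERIOR = "reference year for Prague (TRY)"   # hourly climatic data

total_mm = sum(thickness for _, thickness in WALL)
print(total_mm)  # 760
```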
Table 1: Basic parameters of materials of the studied building envelope.

Parameter   Unit       HPCM         MW                  LCP                LPMH
ρ           [kg/m3]    2366         170                 1550               1745
whyg        [m3/m3]    0.107        0.0073              0.03               0.024
wsat        [m3/m3]    0.13         0.89                0.40               0.33
λdry        [W/mK]     1.56         0.055               0.70               0.84
λsat        [W/mK]     2.09         1.20                2.40               2.40
κ           [m2/s]     see Fig. 4   5.1E-10·e^(3.12w)   7.3E-7·e^(3.2w)    3.9E-8
c           [J/kgK]    730          1000                1200               610
µ           [-]        21           45                  7                  13
The initial conditions were chosen as follows: relative humidity 89% and a constant temperature profile equal to 21°C. The basic material parameters of concrete (HPCM), mineral wool (MW), lime-cement plaster (LCP) and hydrophobized lime plaster modified by metakaolin (LPMH) are shown in Table 1, where the following symbols are used: ρ – bulk density, c – specific heat capacity, µ – water vapor diffusion resistance factor, λdry – thermal conductivity in dry conditions, λsat – thermal conductivity in water saturated conditions, κ – moisture diffusivity, whyg – hygroscopic moisture content by volume, wsat – saturated moisture content by volume.

Fig. 3 presents the sorption isotherms of concrete used in the simulations. The data were obtained by experiments performed at the Department of Materials Engineering and Chemistry, Faculty of Civil Engineering, Czech Technical University in Prague [3, 4].

Figure 3: Sorption isotherm of concrete.

Figure 4: Moisture diffusivity of concrete.

Fig. 4 presents the moisture diffusivity of concrete. The moisture diffusivity vs. moisture content function for adsorption was derived according to (7)-(9), where two input parameters were used: the normalized pore distribution curve f(r), Rmin < r < Rmax, and the average value of moisture diffusivity κav determined in a common sorptivity experiment [5]. The tortuosity effect was linearized for the sake of simplicity, i.e., an assumption of n = 1 was adopted:
κr(w) = (w/wsat)ⁿ · [Rmax² f(Rmax) ∫_{Rmin}^{R} r² f(r) dr] / [R² f(R) ∫_{Rmin}^{Rmax} r² f(r) dr],   (7)

κ(wsat) = (3/2) κav,   (8)

w(R) = wsat ∫_{Rmin}^{R} f(r) dr.   (9)
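To make eqs. (7)-(9) concrete, the following Python sketch evaluates them numerically for an illustrative log-normal pore distribution, taking the saturation scaling in (8) as κ(wsat) = (3/2)κav. The distribution and all numbers are assumptions for illustration; the paper uses measured pore size distributions.

```python
import numpy as np

# Numerical sketch of eqs. (7)-(9): moisture diffusivity from a normalized
# pore distribution f(r). The log-normal f(r) below is illustrative only.

def trapz(y, x):
    """Trapezoid rule; the same rule is applied cumulatively below."""
    return float(np.sum(np.diff(x) * 0.5 * (y[1:] + y[:-1])))

def kappa_from_pores(r, f, kappa_av, n=1):
    """Return w(R)/w_sat and kappa(w) on the ascending pore-radius grid r."""
    # eq. (9): w(R)/w_sat = integral of f(r) from R_min to R
    w_rel = np.concatenate(([0.0], np.cumsum(np.diff(r) * 0.5 * (f[1:] + f[:-1]))))
    # running integral of r^2 f(r), needed by eq. (7)
    r2f = r ** 2 * f
    I = np.concatenate(([0.0], np.cumsum(np.diff(r) * 0.5 * (r2f[1:] + r2f[:-1]))))
    # eq. (7): relative diffusivity, with the linearized tortuosity n = 1
    kappa_r = w_rel ** n * (r[-1] ** 2 * f[-1] * I) / (r ** 2 * f * I[-1])
    # eq. (8): scale so that kappa at saturation equals (3/2) * kappa_av
    return w_rel, 1.5 * kappa_av * kappa_r

r = np.linspace(1e-9, 1e-6, 2000)                 # pore radii, m
f = np.exp(-0.5 * ((np.log(r) - np.log(1e-7)) / 0.5) ** 2)
f /= trapz(f, r)                                  # normalize the distribution
w_rel, kappa = kappa_from_pores(r, f, kappa_av=1e-9)
```

At full saturation w_rel reaches 1 and kappa reaches 1.5·kappa_av; the shape of kappa(w) between the endpoints is controlled entirely by f(r).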
Contrary to the sorption isotherms, where the desorption curves can be obtained by common experimental techniques, the desorption curve of moisture diffusivity had to be set empirically, as no quite reliable techniques are available at present for its experimental determination. Based on the results of the experiments and computational analyses described in [6], it was estimated to be one order of magnitude lower than the adsorption curve. The moisture transport and storage parameters of the mineral wool insulation and of the exterior and interior plasters were assumed to be the same in both the adsorption and desorption phases. The water vapor adsorption and moisture diffusivity of mineral wool are so low that hysteretic effects are within the error range of the experimental methods in any case. The thickness of the renders is much smaller than that of both the concrete and the mineral wool, so the effect of hysteresis in their hygric parameters on the moisture transport in the wall as a whole can be neglected.

Figure 5: Moisture content (a) and relative humidity (b) profiles for January 1.

4 Results of computer simulations and discussion
Three different simulations were performed, combining the effects of hysteresis of moisture transport and storage parameters: a simulation with hysteresis of the sorption isotherm only, with hysteresis of moisture diffusivity only, and with hysteresis of both the sorption isotherm and the moisture diffusivity. Calculations without hysteresis were done as well for the sake of comparison, using the data for the adsorption phase, as is usual in computational simulations where hysteresis is neglected.
4.1 Hysteresis of sorption isotherm

Figs. 5(a), (b) show the moisture content and relative humidity profiles calculated for January 1, which can be considered characteristic for the winter period. The calculation with hysteretic effects led to an increase of moisture content in the envelope, whereas the relative humidity was significantly lower. Figs. 6(a), (b) summarize the moisture and relative humidity fields during the whole simulated time period of 5 years calculated with hysteresis.
Figure 6: Moisture content (a) and relative humidity (b) fields during a 5-year period.
4.2 Hysteresis of moisture diffusivity

The moisture and relative humidity profiles for January 1 (Figs. 7(a), (b)) show that both the moisture content and the relative humidity calculated with hysteresis were higher, which was a consequence of the lower moisture diffusivity in the drying phase.

Figure 7: Moisture content (a) and relative humidity (b) profiles for January 1.
Figs. 8(a), (b) show the moisture and relative humidity fields during the whole simulated time period of 5 years calculated with hysteresis.
Figure 8: Moisture content (a) and relative humidity (b) fields during a 5-year period.

Figure 9: Moisture content (a) and relative humidity (b) profiles for January 1.

Figure 10: Moisture content (a) and relative humidity (b) fields during a 5-year period.
4.3 Hysteresis of both sorption isotherm and moisture diffusivity

The moisture and relative humidity profiles (Figs. 9(a), (b)) were very similar to those of the simulations presented in 4.1, which means that the hysteretic effect of the sorption isotherm had a remarkably higher influence on the simulation results.
Figs. 10(a), (b) present the moisture and relative humidity fields during the whole simulated time period of 5 years calculated with hysteresis.
Figure 11: Calculation of hysteresis of moisture diffusivity (a) and sorption isotherm (b).
Figs. 11(a), (b) show how the hysteretic effects were manifested in the values of moisture diffusivity and sorption isotherm of concrete used by the model during the whole 5-year simulation period. While the water vapour sorption parameters stayed mostly near the centerline of the area demarcated by the adsorption and desorption curves, the moisture diffusivities oscillated between the maximum and minimum values.

Figure 12: Moisture content (a) and relative humidity (b) in a characteristic locality in concrete, 1 cm from the interface between concrete and thermal insulation.
4.4 Comparison of the effects of hysteresis of moisture transport and storage parameters

Figs. 12(a), (b) present a comparison of the effects of hysteresis of moisture transport and storage parameters on the moisture content and relative humidity in a characteristic locality in concrete, 1 cm from the interface between concrete and thermal insulation. As for the relative humidity profiles, it is obvious that all values came close to each other in the 5th year of simulation. The moisture content profiles were quite different. The results of the simulation without hysteretic effects and with hysteresis of moisture diffusivity only were very similar, but they differed from the other simulations in a significant way. Besides, the yearly oscillations of water content had not yet become steady after 5 years. Involving the hysteretic effects of both moisture diffusivity and sorption isotherm caused the highest increase of moisture content at the chosen point. Similar results were also obtained for other points within the concrete part of the wall.
5 Conclusions
The computer simulations of coupled heat and moisture transport in this paper have shown that the application of hysteretic moisture transport and storage parameters can be considered quite important in service life analyses of multi-layered systems of porous building materials. The results indicated very significant differences between the moisture and relative humidity profiles calculated using the model with hysteretic parameters and without hysteresis. As the differences were always on the dangerous side, it can be concluded that neglecting the hysteretic effects while desorption is in progress can lead to an underestimation of the damage risk due to the presence of water in a structure, which is rather undesirable in any service life analysis.
Acknowledgement This research has been supported by the Czech Science Foundation, under grant No 103/07/0034.
References
[1] Künzel, H. M., Simultaneous Heat and Moisture Transport in Building Components. PhD Thesis, IRB Verlag: Stuttgart, 1995.
[2] Pedersen, C. R., Combined Heat and Moisture Transfer in Building Constructions. PhD Thesis, Report 214, Thermal Insulation Laboratory, TU Denmark, 1990.
[3] Vejmelková, E. & Černý, R., Application of alternative silicate binders in the production of high performance materials beneficial to the environment. Proceedings of the 2008 World Sustainable Building Conference [CD-ROM], ASN Events Pty Ltd: Balnarring, Victoria, pp. 520-525, 2008.
[4] Pernicová, R., Pavlíková, M., Pavlík, Z. & Černý, R., Vliv metakaolinu na mechanické, tepelné a vlhkostní vlastnosti vápenných omítek [Influence of metakaolin on the mechanical, thermal and hygric properties of lime plasters]. Metakaolin 2007, VUT FAST: Brno, pp. 70-77, 2007.
[5] Černý, R. & Rovnaníková, P., Transport Processes in Concrete, 1st ed., Spon Press: London, pp. 26-29, 2002.
[6] Pel, L., Černý, R. & Pavlík, Z., Moisture and Ion Transport. WP5 2-Years Report of the EU 6th Framework Programme Project SSPI-CT-2003-501571, TU Eindhoven: Eindhoven, 2006.
Influence of material characteristics of concrete and thermal insulation on the service life of exterior renders
J. Maděra¹, V. Kočí¹, E. Vejmelková¹, R. Černý¹, P. Rovnaníková², M. Ondráček³ & M. Sedlmajer³
¹Czech Technical University in Prague, Faculty of Civil Engineering, Department of Materials Engineering and Chemistry, Czech Republic
²Brno University of Technology, Faculty of Civil Engineering, Institute of Chemistry, Czech Republic
³Brno University of Technology, Faculty of Civil Engineering, Institute of Technology of Building Materials and Components, Czech Republic
Abstract
An assessment of the service life of exterior renders of building structures using a combined computational-experimental approach is presented in the paper. In the experimental part, the durability of selected renders and concretes is determined in terms of their frost resistance. A diffusion-type model is used for the description of coupled heat and moisture transport, aimed at the identification of the number of frost cycles in a real structure. The computational implementation of the model leads to a system of two non-linear partial differential equations with the moisture accumulation function as an additional condition. In a practical application of the model, a concrete wall provided with an exterior thermal insulation system and both exterior and interior renders is analyzed. The influence of different material compositions of the building envelope on the service life of exterior renders is analyzed to meet the main objective of the paper. Different types of concrete, thermal insulation materials and renders are under consideration. Conclusions on the most advantageous material composition with respect to the service life of exterior renders are drawn.
Keywords: computational analysis, coupled heat and moisture transport, concrete wall, thermal insulation system.

doi:10.2495/CMEM090021
1 Introduction
Degradation of exterior renders is caused by many factors. The principal ones are chemical and mechanical corrosion. Mechanical corrosion includes the influence of weather conditions, particularly the destructive effect of freezing water contained in the exterior render. The phase change of this water is accompanied by a volume increase, and this is the main mechanism leading to material destruction. Different exterior renders are frost-resistant to varying degrees, depending on their material characteristics. The frost resistance can be determined by experimental methods. However, it is more complicated to determine the number of freezing cycles that arise during the year in an investigated exterior render applied to a real building structure. A freezing cycle can appear only when two conditions are met: the first condition is the presence of overhygroscopic (liquid) moisture, the second a temperature below the freezing point of water. So we have to observe the hygrothermal performance of the exterior render and compare the thermal and hygric states in parallel. Computational analysis is the best instrument for this operation. The number of freezing cycles depends in the first instance on the climatic conditions and the material composition used in the building envelope. Based on experimental and computational results we can then design an optimal material composition of the building envelope with respect to its service life.
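The two conditions for a freezing cycle stated above translate directly into a time-series test. The sketch below is ours, not the paper's code; it counts contiguous runs of samples in which both conditions hold, assuming hourly series of temperature and moisture content.

```python
# A freezing cycle can start only when overhygroscopic (liquid) moisture is
# present AND the temperature is below the freezing point of water.

def count_freezing_cycles(T_K, w, w_hyg, T_freeze=273.15):
    """Count contiguous runs of samples where both conditions hold."""
    cycles = 0
    active = False
    for T_i, w_i in zip(T_K, w):
        frozen = (w_i > w_hyg) and (T_i < T_freeze)
        if frozen and not active:
            cycles += 1
        active = frozen
    return cycles

# Toy series: one three-hour freezing episode.
T = [275.0, 272.0, 271.0, 272.0, 275.0]
w = [0.05, 0.05, 0.05, 0.05, 0.01]
print(count_freezing_cycles(T, w, w_hyg=0.03))  # 1
```

The Discussion later refines such a raw count by also considering the duration of each cycle and the lag between consecutive cycles.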
2 Experimental
The determination of the frost resistance of exterior render materials and concretes was accomplished under laboratory conditions. For the renders, specimens of size 40 × 40 × 160 mm were made; for the concretes, 100 × 100 × 400 mm. The temperature in the laboratory was 21±1°C, the relative humidity 45±5%. Water-saturated specimens of the renders, lime-cement plaster (LCP) and hydrophobic lime plaster modified by metakaolin (LPMH), were cyclically frozen and thawed until their damage became obvious. One freezing and thawing cycle consisted of putting the specimens into a plastic bag and then into a freezing box for 6 hours, after removal keeping the specimens in the laboratory at a temperature of 20±1°C for 2 hours, and then putting the specimens into water for 16 hours. These cycles were repeated until the damage of the specimens was visible. The damage of the LPMH specimens is shown in Figure 1.

Table 1: Number of freezing cycles causing damage of material.

                             LCP     LPMH   CF      CM      CS      CR
Number of freezing cycles    > 103   40     > 100   > 100   > 100   > 100
Frost resistance tests of the concretes, i.e. concrete modified by fly ash (CF), metakaolin (CM) and slag (CS), and the reference concrete without any modification (CR), were carried out according to ČSN 73 1322/Z1:1968 [1]. The samples were tested after 28 days of concrete maturing and standard curing. The total test required 100 freezing and thawing cycles. One cycle consisted of 4 hours of freezing at -20°C and 2 hours of thawing in 20°C warm water.
Figure 1: Hydrophobic lime plaster modified by metakaolin after 40 freezing cycles.

3 Computational
3.1 Description of construction

In this paper we assumed a concrete wall made from different types of concrete (CF, CM, CS or CR) provided with a thermal insulation system (EPS or mineral wool). The wall is provided with lime-cement plaster on the interior side and with modified lime plaster on the exterior side. We also assumed the cases when the thermal insulation system is not present. To compare the different hygrothermal behaviour of concrete in dependence on its modification, we also made a simulation of a simple concrete wall. The material combinations are shown in Figure 2. The hygrothermal performance was investigated in the exterior plaster at a point just under the surface and in the concrete at a point close to the interface with the render.

3.2 Input parameters

As input parameters we need to know the characteristics of the used materials and the boundary and initial conditions.
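The simulation matrix described in 3.1 can be enumerated programmatically. The sketch below is our own bookkeeping, with None standing for the plaster-only case; none of the names come from the paper.

```python
from itertools import product

# Four concretes, each with EPS, mineral wool, or no insulation (plasters
# only), plus the bare simple-concrete-wall reference runs.
concretes = ["CF", "CM", "CS", "CR"]
insulations = ["EPS", "mineral wool", None]   # None = plasters only

walls = list(product(concretes, insulations))
bare = [(c, "bare") for c in concretes]       # simple concrete wall, no render

print(len(walls) + len(bare))  # 16
```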
Figure 2: Scheme of building envelope.

Table 2: Basic material characteristics of concretes.

Parameter     Unit       CF         CM         CS         CR
ρ             [kg/m3]    2356       2366       2334       2380
ψ             [%]        12.5       13.0       9.7        12.3
c             [J/kgK]    692        728        720        672
µ (dry cup)   [-]        44.63      32.44      17.70      15.80
µ (wet cup)   [-]        17.18      20.99      8.99       6.60
λdry          [W/mK]     1.550      1.565      1.632      1.660
λsat          [W/mK]     1.940      2.085      2.077      2.085
κ             [m2/s]     6.49e-9    4.09e-9    3.77e-9    7.15e-9
whyg          [m3/m3]    0.074685   0.106943   0.089000   0.083300

Table 3: Basic material characteristics of plasters.

Parameter   Unit       LCP      LPMH
ρ           [kg/m3]    1550     1745
ψ           [%]        40       33
c           [J/kgK]    1200     610
µ           [-]        7        10
λdry        [W/mK]     0.700    0.845
λsat        [W/mK]     2.40     2.40
κ           [m2/s]     7.3e-7   3.9e-8
whyg        [m3/m3]    0.040    0.024

The basic material characteristics of the analyzed materials are shown in Tables 2, 3 and 4. The following symbols are used: ρ – bulk density [kg/m3], ψ – porosity [%], c – specific heat capacity [J/kgK], µ – water vapour diffusion resistance factor [-], λdry – thermal conductivity in dry conditions [W/mK], λsat – thermal conductivity in water saturated conditions [W/mK], κ – moisture diffusivity [m2/s], whyg – hygroscopic moisture content by volume [m3/m3]. All these parameters were measured in the laboratory of transport processes at the Department of Materials Engineering and Chemistry, Faculty of Civil Engineering, Czech Technical University in Prague [2–3].

Table 4: Basic material characteristics of thermal insulation materials.

Parameter   Unit       EPS       Mineral wool
ρ           [kg/m3]    50        170
ψ           [%]        97        89
c           [J/kgK]    1300      840
µ           [-]        50        3
λdry        [W/mK]     0.040     0.055
λsat        [W/mK]     0.560     1.200
κ           [m2/s]     2.1e-11   5.1e-10
whyg        [m3/m3]    0.001     0.0073
Initial and boundary conditions should be as realistic as possible. Therefore, for the exterior we used climatic data for Prague in the form of a Test Reference Year (TRY), which contains average climatic data for 30 years. On the interior side we used a constant relative humidity of 55% and a temperature of 21°C. The simulation started on 1 July and was run for 5 years.

3.3 Computational model

The computations were accomplished with the computational program TRANSMAT 7.1, which was developed at the Department of Materials Engineering and Chemistry, Faculty of Civil Engineering, Czech Technical University in Prague on the basis of the general finite element package SIFEL. The mathematical formulation of coupled transport of heat and moisture leads to a system of partial differential equations, which are solved by the finite element method. In the particular case in this paper, Künzel's model was used [4]:
(dρv/dϕ) ∂ϕ/∂t = div[Dϕ grad ϕ + δp grad(ϕ ps)],   (1)

(dH/dT) ∂T/∂t = div(λ grad T) + Lv div[δp grad(ϕ ps)],   (2)
where ρv is the partial density of moisture, ϕ relative humidity, δp permeability of water vapour, ps partial pressure of saturated water vapour, H enthalpy density, Lv heat of evaporation, λ thermal conductivity and T temperature,
Dϕ = Dw dρv/dϕ   (3)

is the liquid moisture diffusivity coefficient, with DW the capillary transport coefficient.
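As a minimal illustration of how eqs. (1)-(3) are time-stepped, the fragment below advances the moisture balance alone with an explicit 1-D finite-difference scheme, holding temperature and all material functions constant. This is a toy sketch with made-up coefficients, not the TRANSMAT/SIFEL finite element solver.

```python
import numpy as np

def moisture_step(phi, dx, dt, D_phi, delta_p, p_s, drho_dphi):
    """One explicit step of (drho_v/dphi) dphi/dt = div[D_phi grad phi
    + delta_p grad(phi p_s)]; with p_s held constant the flux reduces to
    (D_phi + delta_p * p_s) grad phi."""
    D_eff = (D_phi + delta_p * p_s) / drho_dphi
    lap = (np.roll(phi, -1) - 2.0 * phi + np.roll(phi, 1)) / dx ** 2
    phi_new = phi + dt * D_eff * lap
    phi_new[0], phi_new[-1] = phi[0], phi[-1]   # Dirichlet boundary values
    return phi_new

x = np.linspace(0.0, 0.76, 77)                  # 760 mm wall, 10 mm grid
phi = np.full_like(x, 0.89)                     # initial relative humidity
phi[0], phi[-1] = 0.55, 0.80                    # interior/exterior boundaries
for _ in range(1000):                           # 1000 steps of 60 s
    phi = moisture_step(phi, x[1] - x[0], dt=60.0,
                        D_phi=1e-10, delta_p=1e-11, p_s=2500.0, drho_dphi=50.0)
```

The explicit step is stable here because dt·D_eff/dx² is about 3e-4; a real multi-layer simulation with strongly nonlinear material functions would use an implicit finite element discretization, as the paper does.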
3.4 Results of computational simulation

The results are summarized in a set of figures. Every figure shows the time dependence of moisture and temperature at the same time during the fourth year of simulation. This is advantageous with regard to the interpretation of the simulation results. There are two horizontal lines in each figure, which represent the hygroscopic moisture content and the temperature of the freezing point. From the vast number of figures produced in the computational simulations, only the most representative are shown; the rest are only described. At the beginning, the hygrothermal properties of a simple concrete wall are analyzed. This allows us to get a realistic image of the differences in the hygrothermal behaviour of the different types of concrete in dependence on their modifications. The analysis of the other building envelopes then follows, utilizing this knowledge.

3.4.1 Simple concrete wall
The type of concrete most predisposed to the creation of freezing cycles is the reference concrete. However, due to the low moisture content during the studied period, there is not a single freezing cycle. As we can see in Figure 3, the overhygroscopic moisture content is reached only in the summer months, when the temperature is above zero.

Figure 3: Hygrothermal performance of concrete, simple concrete wall (CR), 2 mm under the surface.
In the concrete modified by fly ash, the overhygroscopic moisture content is reached once per reference year, but as in the previous case this happens in the summer months, so there is no possibility of the creation of freezing cycles. In the other types of concrete the overhygroscopic moisture content is not reached at all.
3.4.2 Concrete wall without thermal insulation system
If we consider a concrete wall provided only with plasters, the worst material combination is to use the reference concrete without any modifications. This material combination leads to the creation of two freezing cycles in the exterior plaster. The cycles took 2 and 13 hours and arose on the 325th and 326th days of the reference year (Figure 4). The other types of concrete do not give suitable conditions for the freezing of water.

Figure 4: Hygrothermal performance of exterior plaster, concrete wall (CR) provided with exterior plaster, 2 mm under the surface.
In all cases, the water contained in concrete does not get frozen. All the results are similar to result on Figure 5. 3.4.3 Concrete wall provided with thermal insulation system EPS as the insulation material provided with plaster reliably keeps the concrete wall from effects of freezing cycles no matter which type of concrete is under consideration. Nevertheless, disadvantage of this material combination lies in abnormal strain of exterior plaster caused by weather conditions. In the simulation we counted more than 25 freezing cycles in every material combination in exterior plaster per reference year (Figure 6). Duration of freezing cycles is different. The longest one takes 36 hours, the shortest one only 1 hour. Single cycles are separated by tiny temperature or moisture fluctuations which raise their final number. Mineral wool has the same effect as EPS and protects the concrete wall from increase of moisture content and decrease of temperature at the same time. This prevents water in the wall getting overhygroscopic and getting frozen. Anyway, exterior plaster applied on mineral wool is abnormally exposed too. The number of freezing cycles during one referent year is little bit smaller then in plaster WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line)
20 Computational Methods and Experimental Measurements XIV applied on EPS and counting about 25 cycles in every material combination (Figure 7). As we can see, the cycles are separated by tiny moisture and temperature fluctuations, too. Freezing point of water Temperature Hygroscopic moisture content Moisture content
Figure 5: Hygrothermal properties of concrete, concrete wall (CM) provided with exterior plaster, 2 mm from the material interface. [Plot: freezing point of water, temperature, hygroscopic moisture content and moisture content versus time (1275-1625 days); temperature 200-320 K, moisture content by volume 0-0.180 m3/m3.]
Figure 6: Hygrothermal performance in exterior plaster, concrete wall (CS) provided with EPS and exterior plaster, 2 mm under the surface. [Plot: freezing point of water, temperature, hygroscopic moisture content and moisture content versus time (1275-1625 days); temperature 200-320 K, moisture content by volume 0-0.080 m3/m3.]
Figure 7: Hygrothermal performance in exterior plaster, concrete wall (CF) provided with mineral wool and exterior plaster, 2 mm under the surface. [Plot: freezing point of water, temperature, hygroscopic moisture content and moisture content versus time (1275-1625 days); temperature 200-320 K, moisture content by volume 0-0.080 m3/m3.]

4 Discussion
In an assessment of the impact of freezing cycles on the service life of exterior renders, it is not enough to consider only the frost resistance determined in the laboratory. It is important to realize that the number of freezing cycles depends on the material combination, not only on the characteristics of the materials used. As important as the duration of a freezing cycle is the lag between two cycles. If a freezing cycle is too short, the water cannot solidify into ice and cannot disrupt the structure of the plaster. By the same token, if the lag between two cycles is too short, the ice cannot melt back into water and refreeze again, so the destructive effect of the second cycle can be neglected and both can be considered as one. When we evaluate the number of freezing cycles taking these considerations into account, we obtain a new, reduced number of freezing cycles (Table 5). In this computer simulation only the liquid moisture brought by rain was considered. However, there can be locations on a building which are exposed to water originating from other sources; this can be the case for the socle part of a building and for places with poorly resolved construction details. In these cases, the number of freezing cycles during one year could be much higher. Although a simple concrete wall built from any of the investigated types of concrete does not show indications of freezing of the contained moisture, the wall made from the reference concrete is very close to it. As we considered only a reference year, which is based on the long-term average of relative humidity and temperature, freezing cycles cannot be completely excluded in every particular year; in real weather conditions, deviations from the average values may appear which could lead to the creation of some freezing cycles. Basically, this
Table 5: Number of freezing cycles after reduction.

Wall assembly                                  Layer           CF   CM   CS   CR
Simple concrete wall                           concrete         0    0    0    0
Concrete wall with exterior plaster            concrete         0    0    0    0
                                               plaster LCP      0    0    0    0
                                               plaster LPMH     0    0    0    2
Concrete wall with EPS and exterior plaster    concrete         0    0    0    0
                                               EPS              0    0    0    0
                                               plaster LCP      0    0    0    0
                                               plaster LPMH    20   20   19   20
Concrete wall with mineral wool and            concrete         0    0    0    0
exterior plaster                               mineral wool     0    0    0    0
                                               plaster LCP      0    0    0    0
                                               plaster LPMH    20   21   19   21
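The cycle-reduction rule described in the discussion (cycles too short for ice to form are discarded; cycles separated by too short a thaw are merged into one) can be sketched as follows. The hourly sampling and both threshold values are assumptions for illustration; the paper does not state them.

```python
import numpy as np

# Hypothetical helper: count effective freezing cycles in an hourly
# temperature series. A cycle shorter than `min_cycle_h` cannot form ice,
# and two cycles separated by a thaw shorter than `min_lag_h` are merged.
def count_reduced_cycles(temps_K, step_h=1.0, t_freeze=273.15,
                         min_cycle_h=2.0, min_lag_h=2.0):
    below = temps_K < t_freeze
    # extract runs of consecutive below-freezing samples as (start, length)
    runs, start = [], None
    for i, b in enumerate(below):
        if b and start is None:
            start = i
        elif not b and start is not None:
            runs.append((start, i - start)); start = None
    if start is not None:
        runs.append((start, len(below) - start))
    # merge cycles separated by a lag shorter than min_lag_h
    merged = []
    for s, n in runs:
        if merged and (s - (merged[-1][0] + merged[-1][1])) * step_h < min_lag_h:
            ps, pn = merged[-1]
            merged[-1] = (ps, s + n - ps)
        else:
            merged.append((s, n))
    # discard cycles too short for the water to solidify
    return sum(1 for _, n in merged if n * step_h >= min_cycle_h)
```

Applied to the simulated temperature histories, a rule of this kind yields the reduced cycle counts of Table 5.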
is caused by the lowest value of moisture diffusivity among the studied concretes, which does not allow a relatively fast release of the contained moisture. Considering the wall provided with exterior plaster only, the reference concrete again gives the worst results. There are some freezing cycles for all types of concrete; however, the water contained in the exterior plaster applied on the reference concrete freezes twice per reference year. This limits the service life of the plaster to approximately 15 years. In the remaining cases, damage caused by weather conditions is not the main cause of degradation. If we investigate the concrete wall provided with a thermal insulation system, namely EPS or mineral wool, the favourable thermal insulating properties ensure that the temperature in the concrete never drops below zero, which makes freezing of water impossible. However, both thermal insulation materials have a very low value of moisture diffusivity. Therefore, the moisture cannot be transported into the concrete in sufficient amount, and its level in the exterior plaster remains relatively high for a long time. A spell of low temperature then causes this water to freeze. The surface layer (2 mm) of the exterior plaster will resist for only 2 years; but the overall thickness of the plaster is 5 mm, so the service life could be doubled.
5 Conclusions
Comparing all the results obtained in this paper, from the point of view of frost resistance the best option is to use concrete modified by fly ash. Although in most cases concrete is not the material that will be damaged by freezing of water, its type still has a relatively high influence on the service life of the other materials of the wall, renders in particular.
Nowadays, most concrete buildings are provided with thermal insulation. Based on the results of this simulation, it is more advantageous to use EPS, but the differences between EPS and mineral wool are not large. Thus, the sole criterion of water freezing in the exterior render is not decisive for the choice of thermal insulation material.
Acknowledgement

This research has been supported by the Czech Science Foundation, under grant No. 103/07/0034.
References

[1] ČSN 73 1322/Z1:1968 Concrete testing - Hardened concrete - Frost resistance. Czech Standardization Institute, Prague, 2003.
[2] VEJMELKOVÁ, E. - ČERNÝ, R.: Application of Alternative Silicate Binders in the Production of High Performance Materials Beneficial to the Environment. In: Proceedings of the 2008 World Sustainable Building Conference [CD-ROM]. Balnarring, Victoria: ASN Events Pty Ltd, 2008, pp. 520-525.
[3] PERNICOVÁ, R. - PAVLÍKOVÁ, M. - PAVLÍK, Z. - ČERNÝ, R.: The influence of metakaolin on the mechanical, thermal and hygric properties of lime plasters (in Czech). In: Metakaolin 2007. Brno: VUT FAST, 2007, pp. 70-77.
[4] KÜNZEL, H.M.: Simultaneous Heat and Moisture Transport in Building Components, Ph.D. Thesis. IRB Verlag, Stuttgart, 1995.
A procedure for adaptive evaluation of numerical and experimental data

J. Krok1, M. Stanuszek2 & J. Wojtas2
1 Institute of Computer Methods in Mechanics, Cracow University of Technology, Cracow, Poland
2 Institute of Computer Modeling, Cracow University of Technology, Cracow, Poland
Abstract

The work addresses an extended formulation of a new approach proposed to control the error of experimental data. It includes: the development of postprocessing techniques to approximate data given in discrete form, a posteriori error estimation (evaluation) of measured data, and the definition of a reliability index of experimental data. The theoretical considerations and numerical analysis are based on the adaptive Meshless Finite Difference Method (MFDM). Keywords: meshless FDM, experimental data approximation and smoothing, a posteriori error estimation of experimental data, adaptive methods.
1 Introduction
Almost all numerical procedures of computational mechanics involve a discretization process in which the continuous model is transformed into a discrete one. A discretization process also takes place within the experimental setup, and it constitutes a key point of computer simulation. It has a strong influence on the exactness, efficiency, generality and usefulness of the obtained results. A correct discretization strategy and control of the discretization process very often decide whether a solution of the analysed task is obtained at all. One may investigate the exactness of the calculations (simulation), i.e. how to measure the error due to the numerical simulation (coming from the discretization process), minimize the obtained error and effectively eliminate it. The question of verification applies here as well, i.e. whether one can assess the received results by
theoretical considerations (using an available classical solution), and whether the proposed estimation procedure will be safe and applicable to other difficult and complex tasks, such as the estimation of the different errors of experimental data. In the present work, the efficiency of a new, coherent concept of a posteriori error estimation of experimental or numerical data (results of the FEM (Finite Element Method) or FDM (Finite Difference Method)) is considered, together with estimation of the mesh density taking an equal error distribution into account. An approach using discrete functions and the MWLS (Moving Weighted Least Squares) method with constraints defined by the governing differential equations of the theory (e.g. equations of equilibrium, boundary conditions, etc.) was applied. Several ways of error estimation, as well as of distributing the experimental points, were proposed. The suggested procedures of error estimation and of predicting the density of the experimental point distribution were tested on solutions of certain mechanical problems. The introduced approach belongs to the so-called problem-oriented, a posteriori constructed estimators. Special attention was given to defining an error functional with additional conditions, i.e. constraints. Such an approach makes the estimation of the error of separate components possible (e.g. one component of the strain or stress state, or one element of the body's potential energy [3], may be analyzed). Error estimators are mainly based on the behaviour of the total energy, recording and responding to its change (e.g. the FEM estimators of Zienkiewicz-Zhu [9]); by following only the change of the total energy of the deformed body, part of the information on the change of its components is lost. The problem of approximation and error estimation of experimental and/or numerical data has been considered by many researchers during the last three decades.
Among these, the paper of Karmowski and Orkisz [1] provided the first concept of coupling analytical and physical information on the case within the solution of the problem. Following this idea, a concept of a physically based local-global approximation was introduced. Simultaneously, Łukasiewicz and Stanuszek [5, 6] developed their own concept of filtering and verification of numerical as well as experimental data. They formulated an error functional as a combination of a least-squares approximation of the error with constraints in the form of theoretical equations. The presented idea turned out to be very attractive, especially in problems of mechanics, where the equations of equilibrium and continuity, as well as the boundary conditions, have to be satisfied. An extension of the idea of physically based approximation to experimental data was delivered recently by Magiera [7], where certain non-statistical considerations for a posteriori estimation were presented. The next important stage in constructing a correct estimator of experimental data was achieved by Krok and Wojtas, who defined [2, 4] the density distribution of experimental points as a function of the error distribution; several ideas on converting the data errors to the node density distribution were also considered there. The current paper follows and extends the approaches presented in those two works.
2 Formulation of the problem
2.1 General remarks

During the process of collecting experimental and/or calculating numerical data, several typical situations may be encountered:
a) it is impossible to avoid errors of measurement or calculation; in other words, it is impossible to obtain the results of such processes without errors;
b) zones with large gradients of the measured and/or calculated function indicate the domains in which a high density of points is required;
c) the application of data smoothing (approximation techniques) lowers the amount of information available to assess the results of an FEM and/or FDM analysis.
The user is strongly encouraged to make use of all available physical, theoretical and numerical information to estimate the error distribution and, based on it, predict a corrected node distribution.

2.2 Definition of the error functional

To begin, let us assume a "raw" data vector ũ = {ũ_1, ..., ũ_n} with components being the values measured and/or numerically calculated at n discrete points of a grid θ. Additional information on the model of the analyzed system is provided by constraints and can be presented as the following set of equations:

H u = f    (1)

in which H[k×m] is the matrix resulting from the application of the constraints and u = {u_1, ..., u_m} is the vector of unknown, corrected data, usually located at m points of a grid γ differing from the grid θ; the vector f = {f_1, ..., f_k} represents the right-hand side of the constraint equations.
Figure 1: The considered domain and points: o - with measured data ũ (set θ), ∆ - with unknown data u (set γ).
When the grid of points with measured or calculated data differs from the grid of the verified (unknown) data (θ ≠ γ), one has to map the vector u = {u_1, ..., u_m} onto the grid θ using, for example, the MWLS (Moving Weighted Least Squares) technique:

û = A u    (2)

where û = {û_1, ..., û_n} are the values of the verified data approximated at the n points of the grid θ used for the measured and/or calculated data. The matrix A[n×m] results from the MWLS procedure for the verified (γ) and measured/calculated (θ) points, which in general are different. The constraints described by eq. (1) can be satisfied in the least-squares sense. Therefore the calculated value of u has to minimize the following error functional R:

R(u, λ) = (1/2) (A u - ũ)^T (A u - ũ) + (H u - f)^T λ    (3)

where λ represents a vector of Lagrange multipliers. In other words, the vector u should satisfy the equality constraints (1) and be as close as possible to the measured and/or numerically calculated values ũ. The functional (3) has to be represented numerically by a discrete model of the system in terms of finite difference operators constructed on irregular grids, based on the approximation (2). From the standard minimization procedure with respect to the vectors u and λ,

∂R/∂u = A^T A u - A^T ũ + H^T λ = 0,    ∂R/∂λ = H u - f = 0    (5)

one obtains two separated sets of linear equations leading to λ and u:

λ = (H (A^T A)^{-1} H^T)^{-1} H (A^T A)^{-1} A^T ũ - (H (A^T A)^{-1} H^T)^{-1} f    (6)

u = (A^T A)^{-1} A^T ũ - (A^T A)^{-1} H^T C^{-1} {H (A^T A)^{-1} A^T ũ - f}    (7)

where C = H (A^T A)^{-1} H^T, or, in matrix form,

[ A^T A   H^T ] [ u ]   [ A^T ũ ]
[ H       0   ] [ λ ] = [ f     ]    (8)

The solution of the set of equations (8) gives the vector u for which the error functional reaches its minimum value and the constraint equations (1) are exactly satisfied. The procedure described above was implemented to verify and correct data determined while calculating the stress distribution in a 2D disc loaded by a concentrated force.
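A minimal numerical sketch of the saddle-point system (8): the operators below are synthetic three-point stand-ins (an identity mapping A and a single sum constraint), not the MWLS-generated matrices of the paper.

```python
import numpy as np

# Solve min ||A u - u_tilde||^2 subject to H u = f via the system (8).
def filter_data(A, u_tilde, H, f):
    m = A.shape[1]; k = H.shape[0]
    K = np.block([[A.T @ A, H.T],
                  [H, np.zeros((k, k))]])
    rhs = np.concatenate([A.T @ u_tilde, f])
    sol = np.linalg.solve(K, rhs)
    return sol[:m], sol[m:]          # corrected data u, multipliers lambda

A = np.eye(3)                         # grids coincide: theta == gamma
u_tilde = np.array([1.1, 1.9, 3.2])   # noisy "measurements"
H = np.array([[1.0, 1.0, 1.0]])       # single constraint: sum(u) = 6
f = np.array([6.0])
u, lam = filter_data(A, u_tilde, H, f)
```

The corrected vector satisfies the constraint exactly while staying as close as possible to the raw data, as eqs. (6)-(7) prescribe.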
3 Numerical implementation
In order to demonstrate the functionality of the proposed verification procedure, a number of trial calculations were made. With reference to works [5, 6], an example of verification of the plane stress state in a half-plane loaded by a concentrated force P at the boundary is presented. Based on [8], the theoretical stress distribution (σ_xx and σ_yy) inside the half-plane loaded along the edge can be presented in the form

σ_xx = (2P/π) x^2 y / (x^2 + y^2)^2,    σ_yy = (2P/π) y^3 / (x^2 + y^2)^2    (9)

The force P was assumed equal to 5 kN per running metre. The size of the domain (fig. 1) is given in [m] and the stress distribution in [kPa]. Calculations were performed on an irregular grid of nodes with the equilibrium and continuity equations in the form

∂²σ_xx/∂x² - ∂²σ_yy/∂y² = 0,    ∆(σ_xx + σ_yy) = 0    (10)

The relative error used in the tests takes the form

ε_R = |σ^exact - σ| / σ^exact · 100%    (11)
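The exact stresses (9) and the relative error (11) translate directly into code; a short sketch with the coordinate convention of eq. (9) and P = 5 kN/m:

```python
import numpy as np

# Flamant half-plane stresses (9) under a boundary point load P = 5 kN/m,
# and the relative error measure (11) used throughout the tests.
P = 5.0

def stresses(x, y):
    r2 = x**2 + y**2
    s_xx = 2.0 * P / np.pi * x**2 * y / r2**2
    s_yy = 2.0 * P / np.pi * y**3 / r2**2
    return s_xx, s_yy

def rel_error(sigma_exact, sigma):
    return np.abs(sigma_exact - sigma) / np.abs(sigma_exact) * 100.0  # [%]
```

Evaluating `stresses` on the node grid yields the exact input field; `rel_error` is the measure plotted for the corrupted and filtered data.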
The irregular distribution of nodes is presented in fig. 2, and fig. 3 shows the exactly calculated (9) σ_xx stress component.

Figure 2: Discretization of the domain. [Irregular node distribution with the concentrated force P applied at the boundary; x from 1.00 to 6.00, y from -2.50 to 2.50.]

Figure 3: σ_xx stress distribution. Input data - exact values. [Contour plot; x from 1.00 to 6.00, y from -2.00 to 2.00.]
To assess the effectiveness of the proposed filtering procedure, the above exact solution was artificially disturbed by local errors reaching up to 50% of the initial values of the stresses; the stresses σ_yy remained unchanged. Figure 4 shows a view of the disturbed stress state and fig. 5 depicts the relative error of the corrupted data.
Figure 4: σ_xx stress distribution with random errors - corrupted data. [Contour plot; x from 1.00 to 6.00, y from -2.00 to 2.00.]

Figure 5: Relative error of σ_xx - corrupted data. [Contour plot; x from 1.00 to 6.00, y from -2.00 to 2.00.]
The prepared corrupted data were subjected to the filtering process with two types of constraint equations applied: a) the constraint equations (10) were imposed only on the internal nodes of the domain; b) in addition to the previous case, boundary conditions of the first kind (assumed values of the function on the edge of the domain) were introduced. The results of the filtration in the first case are presented in figs 6 and 7 below. One can observe a high efficiency of the procedure for the internal nodes and a low one for the boundary nodes (fig. 7). This is the effect of the FDM approximation of the constraint equations on the internal nodes of the domain only.
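Case (b) can be realized by appending one identity row per boundary node to the constraint system H u = f; the row-appending form below is an assumed implementation detail, which the paper does not spell out.

```python
import numpy as np

# Append first-kind (Dirichlet-type) boundary conditions to H u = f:
# one identity row per boundary node, prescribed value on the right-hand side.
def add_boundary_conditions(H, f, boundary_idx, boundary_vals, n_unknowns):
    rows = np.zeros((len(boundary_idx), n_unknowns))
    rows[np.arange(len(boundary_idx)), boundary_idx] = 1.0
    return np.vstack([H, rows]), np.concatenate([f, boundary_vals])
```

The augmented (H, f) pair then enters the same saddle-point solve as before, which is why the boundary values are reproduced exactly after filtering.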
Figure 6: Stress σ_xx distribution after filtering. Constraints (10) imposed only on the internal nodes of the domain. [Contour plot; x from 1.00 to 6.00, y from -2.00 to 2.00.]

Figure 7: Relative (11) errors of σ_xx after filtering of the corrupted data. Constraints (10) imposed only on the internal nodes of the domain. [Contour plot; x from 1.00 to 6.00, y from -2.00 to 2.00.]
Very often certain boundary conditions in the analysed domain are explicitly known and should be imposed within the solution obtained from the filtering process. The result of such a procedure is presented in fig. 8, where the theoretically calculated values of the approximated function were applied at the boundary points of the analysed domain. It is worth pointing out that the maximum local relative error of σ_xx was reduced from 50% to less than 8% (fig. 9). To estimate the quality of the numerical solution, one can calculate the distribution of the so-called local approximation error defined by the equation

ε_L = |σ^approx - σ| / σ^approx    (12)

The distribution of this error is presented in figs 10 and 11. It is easy to notice that an estimation of the required density of the node distribution based on such a measure is destined to fail.
Figure 8: Stress distribution σ_xx after filtering of the corrupted data. Constraints (10) with boundary conditions imposed. [Contour plot; x from 1.00 to 6.00, y from -2.00 to 2.00.]

Figure 9: Relative (11) errors of σ_xx after filtering of the corrupted data. Constraints (10) with boundary conditions imposed. [Contour plot; x from 1.00 to 6.00, y from -2.00 to 2.00.]

Figure 10: Relative (12) errors of σ_xx. Constraints (10) imposed on the internal nodes of the domain. [Contour plot; x from 1.00 to 6.00, y from -2.00 to 2.00.]

Figure 11: Relative (12) errors of σ_xx. Constraints (10) with boundary conditions imposed. [Contour plot; x from 1.00 to 6.00, y from -2.00 to 2.00.]
For further consideration, a parameter called the "efficiency index" will be introduced, defined by the equation

ψ = e^approx / e^exact    (13)

This parameter equals ψ = 0.41 when only the constraints (10) are imposed, while when the boundary conditions are satisfied ψ = 0.84. The efficiency index here plays the role of a measure of the approximation quality; the perfect situation occurs when the estimated and exact solutions are identical, i.e. ψ = 1.00. The high value of the efficiency index in the second case is explained by fig. 12, where one can note a similar distribution of the absolute errors of the corrupted and estimated σ_xx.
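A sketch of the efficiency index (13); the paper does not state which global error measure enters e^approx and e^exact, so the Euclidean norm of the nodal error vectors used here is an assumption.

```python
import numpy as np

# Efficiency index (13): ratio of the estimated to the exact global error.
# psi = 1 means the estimator reproduces the exact error perfectly.
def efficiency_index(e_approx, e_exact):
    return np.linalg.norm(e_approx) / np.linalg.norm(e_exact)
```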
Figure 12: Errors |σ^approx - σ| and |σ^exact - σ| of σ_xx. Second case.
We now provide a way of expressing the local error in terms of the nodal densities for which one can obtain an even error distribution; this is the essential task underlying adaptive technologies. Based on our considerations, it turns out that defining such a nodal distribution requires a local as well as a global concentration index. Let us introduce the local index of nodal density correction, defined as

ξ^L_σi = |σ_i^exact - σ_i^approx| / ē_σ    (14)

The average error in the analysed domain may be calculated as (n being the number of nodes)

ē_σ = sqrt( (1/n) Σ_{i=1}^{n} (σ_i^exact - σ_i^approx)^2 )    (15)

The total weighted norm of the stresses is given by

‖σ‖_U = sqrt( (1/n) Σ_{i=1}^{n} (σ_i^exact)^2 )    (16)

Finally, the global index of nodal density correction (grid refinement) is defined as

ξ_σ = ē_σ / (η ‖σ‖_U)    (17)

where η corresponds to the permissible level of the error distribution. Based on the norms listed above, a global-local index of increasing grid density (type I) may be defined using the square of the global index:

ξ̄^L_σi = ξ_σ^2 · ξ^L_σi = ē_σ / (η^2 ‖σ‖_U^2) · |σ_i^exact - σ_i^approx|    (18)

Another possible global-local index of increasing grid density (type II) may be defined using the linear form of the global refinement index:

ξ̄^L_σi = ξ_σ · ξ^L_σi = 1 / (η ‖σ‖_U) · |σ_i^exact - σ_i^approx|    (19)
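The norms (14)-(17) and the type-II index (19) translate directly into code; a sketch over per-node arrays of exact and approximated stress:

```python
import numpy as np

# Error norms (15)-(16), the global index (17), the local index (14) and
# the type-II global-local refinement index (19).
def refinement_indices(sigma_exact, sigma_approx, eta=0.10):
    diff = np.abs(sigma_exact - sigma_approx)
    e_bar = np.sqrt(np.mean((sigma_exact - sigma_approx) ** 2))  # (15)
    norm_U = np.sqrt(np.mean(sigma_exact ** 2))                  # (16)
    xi_global = e_bar / (eta * norm_U)                           # (17)
    xi_local = diff / e_bar                                      # (14)
    xi_II = diff / (eta * norm_U)                                # (19)
    return xi_global, xi_local, xi_II
```

Note that the type-II index is simply the product of the global and local indices, which is why ē_σ cancels in eq. (19).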
Figure 13: Local-global mesh refinement index (18). Permissible error η = 0.10. [Contour plot; x from 1.00 to 6.00, y from -2.00 to 2.00.]
The distribution of the global-local refinement index (19) is depicted in fig. 14.

Figure 14: Local-global mesh refinement index (19). Permissible error η = 0.10. [Contour plot; x from 1.00 to 6.00, y from -2.00 to 2.00.]
It is easy to observe from the results presented above that refinement of the grid is necessary within the areas exhibiting a high gradient of the sought function. Moreover, it is worth noting that in the first case the required grid density was overestimated (a refinement index of about 20), while in the second attempt the refinement index takes the realistic value of 4 (verified during the analysis).
4 Conclusions
In this work, the development of an adaptive approach to the collection of experimental as well as numerical data was presented. An adaptive procedure of experimental and numerical data collection based on an a posteriori error analysis of the data was proposed. It includes estimation of a new grid density taking into account an equal distribution of the error (using different error norms); thus an adaptive procedure of experiment or numerical discretization planning becomes possible. The numerical approximation was executed using the adaptive Meshless Finite Difference Method with certain additional constraints taken into consideration. The paper also presents a new concept of error estimation with the use of a so-called global-local estimator. The implementation of such estimators allows the grid of numerical or experimental nodes to be refined. The obtained results show a good efficiency of the proposed adaptive procedure.
References

[1] W. Karmowski, J. Orkisz, Physically Based Method of Enhancement of Experimental Data - Concept, Formulation, and Application to Identification of Residual Stresses, Proc. of the IUTAM Symp. on Inverse Problems in Engng Mech., Tokyo, Japan, Springer-Verlag, 1993, pp. 61-70.
[2] J. Krok, An Extended Approach to Error Control in Experimental and Numerical Data Smoothing and Evaluation Using the Meshless FDM, Revue Européenne des Éléments Finis, no. 7-8/2002, pp. 913-935.
[3] J. Krok, Meshless FDM Based Approach to Error Control and Evaluation of Experimental or Numerical Data, II MIT Conf. on Comp. Fluid and Solid Mechanics, Cambridge, MA, USA, 2003.
[4] J. Krok, J. Wojtas, An Adaptive Approach to Experimental Data Collection Based on A Posteriori Error Estimation of Data, Comp. Meth. in Mechanics - CMM-2007, Spała-Łódź, June 2007, pp. 1-13.
[5] S. A. Łukasiewicz, M. Stanuszek, J. A. Czyż, Filtering of the Experimental or FEM Data in Plane Stress and Strain Fields, Experimental Mechanics, June 1993, pp. 139-147.
[6] S. A. Łukasiewicz, M. Stanuszek, Constrained, Weighted, Least Square Technique for Correcting Experimental Data, 6th Int. Conf. on Comp. Methods and Experimental Measurements 93, Vol. 2: Stress Analysis, pp. 467-480, Elsevier Applied Science, London/New York, 1993.
[7] J. Magiera, Non-statistical Physically Reasonable Technique for A Posteriori Estimation of Experimental Data Error, Computer Assisted Mechanics and Engineering Sciences, 13, pp. 593-611, 2006.
[8] S. Timoshenko, J. N. Goodier, Theory of Elasticity, McGraw-Hill, New York/Toronto/London, 1951.
[9] O. C. Zienkiewicz, R. L. Taylor, The Finite Element Method, Vols. I-III, 6th ed., Butterworth-Heinemann, Oxford, 2005.
Gear predictor of manual transmission vehicles based on artificial neural network

A. M. Wefky, F. Espinosa, M. Mazo, J. A. Jiménez, E. Santiso, A. Gardel & D. Pérez
Department of Electronics, University of Alcala, Spain
Abstract

Nearly all mechanical systems involve rotating machinery (i.e., a motor or a generator), with gearboxes used to transmit power and/or change speed. In vehicles, there is a specific nonlinear relationship between the size of the tires, the linear velocity, the engine RPM, the gear ratio of the differential, and the gear ratio of the transmission, and for each car there is a specific range of transmission gear ratios for each gear. On the other hand, the gear value is an indication of the driver's behaviour and the road conditions, and should therefore be considered when establishing non-pollutant driving guidelines. In this paper, two novel feed-forward artificial neural network (ANN) models have been developed and tested, with the gear as the network output and the velocity of the engine (RPM) and the velocity of the car (in km/h) as the network inputs. Numerous experiments were made using two commercial cars. The prediction efficiency of the proposed models is high (the generalization mean square error is about 0.005). After testing with two different vehicles, the conclusion is that, on one hand, the structure of the ANN model is suitable, while on the other hand each vehicle has its own specific model parameters. This paper shows that it is difficult to develop a universal model that predicts the gear based on the RPM and speed of any car. Keywords: feed-forward artificial neural networks, gear predictor, manual transmission.
1 Introduction
The drivetrain system of the automobile consists of the following parts: engine, transmission, drive shaft, differential, and driven wheels. Firstly, the
transmission is a gear system that adjusts the ratio of the engine speed or engine regime (RPM) to the vehicle speed. Mainly, it enables the engine to operate within its optimal performance range regardless of the vehicle speed. In a manual transmission, the driver selects the correct gear ratio from a set of possible gear ratios (usually five or six for modern passenger cars); for each gear there is a specific gear ratio. An automatic transmission, by contrast, selects this gear ratio by means of an automatic control system. Secondly, the drive shaft is used on front-engine, rear-wheel-drive vehicles to couple the transmission output shaft to the differential input shaft; in front-wheel-drive automobiles, a pair of drive shafts couples the transmission to the drive wheels through flexible joints known as constant velocity (CV) joints. Thirdly, the differential has the following three purposes. The first is the right-angle transfer of the rotary motion of the drive shaft to the wheels. The second is to allow each driven wheel to turn at a different speed, because the external wheel must turn faster than the internal wheel when the vehicle is turning a corner. The third is the torque increase provided by the gear ratio; the gear ratio also affects fuel economy. In front-wheel-drive cars, the transmission, differential, and drive shafts are known collectively as the transaxle assembly. The combination of drive shaft and differential completes the transfer of power from the engine to the rear wheels [1]. Finally, the car's tires can almost be thought of as a third type of gearing: if the circumference of the tires is L, then for every complete revolution of the wheel the car travels L meters. Eqn. (1) shows the formula relating the overall gear ratio (i.e., the gear ratios of the transmission (grt) and differential (grd)), the size of the tires (Ct), the speed of the car (vc), and the engine speed (ve); the overall gear ratio gro is the product of grd and grt [2].
gro = grd · grt = Ct · ve / vc    (1)
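Eqn. (1) can be inverted to deduce the gear from a measured RPM/speed pair. In the sketch below the tire circumference CT and the per-gear overall-ratio ranges are invented placeholders for illustration, not the measured Peugeot values of Table 1.

```python
# Deduce the gear from eqn (1): gro = Ct * ve / vc.
CT = 1.8                       # tire circumference [m] (assumed)
GEAR_RANGES = {                # hypothetical overall-ratio range per gear
    1: (12.0, 18.0), 2: (7.0, 12.0), 3: (5.0, 7.0),
    4: (3.8, 5.0), 5: (2.8, 3.8),
}

def overall_ratio(rpm, speed_kmh):
    ve = rpm / 60.0            # engine speed [rev/s]
    vc = speed_kmh / 3.6       # car speed [m/s]
    return CT * ve / vc

def gear_from_ratio(gro):
    for g, (lo, hi) in GEAR_RANGES.items():
        if lo <= gro < hi:
            return g
    return None                # ratio outside every range (e.g. clutch slip)
```

This range-lookup step is the rule-based baseline that the ANN model replaces.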
One of the challenges of the MIVECO research project, in which the authors are involved, is to establish a relationship between the driver's behaviour and the road conditions on one side and non-pollutant driving guidelines on the other. Consequently, knowledge of the gear value and its relationship with the gas-emission measurements is required. However, in most engine control systems it is difficult to find a sensor for the transmission gear selector position.
Figure 1: Procedure used to test and train the ANN. [Block diagram: inputs RPM and Velocity → Artificial Neural Network → Vehicle's Gear.]
This paper proposes a novel artificial neural network (ANN) model to predict the overall gear ratio gro based on the engine RPM and the corresponding speed of the car; see Figure 1. The model has been trained, validated, and tested with experimental tests using two commercial vehicles: a Peugeot 205 and a Peugeot 405. ANNs have been used widely in recent years in various fields such as finance [3], medicine [4], industry [5] and engineering [6, 7], due to their computational speed, their ability to handle complex nonlinear functions, and their robustness and great efficiency, even in cases where full information on the studied problem is absent.
2 Methodology
Figure 2 shows the process followed to evaluate the capability of a feedforward artificial neural network to predict the gear based on the corresponding velocity of the car and RPM of the engine. That procedure was applied to two different vehicles, as explained in the following sections.
Figure 2: Procedure used to test and train the ANN: (1) measure the engine RPM and the car speed; (2) deduce the corresponding gear; (3) normalize and divide the entire data set into training, validation, and testing sets; (4) train the neural network model using the training set; (5) test the model with data from the same car; (6) test the model with data from the other car.
2.1 First case study: Peugeot 205

The engine RPM and the car speed, shown in Figure 3, as well as the overall gear ratio gro, were measured in different zones under different driving conditions. The instantaneous values of the gear were deduced from the overall gear ratio, since there is a specific range of the overall gear ratio for each gear, as illustrated in Table 1. The overall gear ratio, the corresponding gear signal, and the filtered gear signal are shown in Figure 4. The horizontal dashed lines in the graph of the overall gear ratio represent the boundaries of the ranges indicated in Table 1. The overall gear ratio, and therefore the gear signal, oscillates back and forth around some boundaries in the areas marked by ellipses. These oscillations in the gear signal happen within a very short time period, which is impossible in reality; consequently, the gear signal was filtered to remove these repetitive changes.

Table 1: Ranges of gro with the corresponding gear.

    Gear       From        To
    Neutral    20          -
    First      11.87628    20
    Second     6.74544     11.87628
    Third      4.87968     6.74544
    Fourth     3.83916     4.87968
    Fifth      3.12156     3.83916

Figure 3: Engine RPM and car velocity of the PEUGEOT 205.
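The table lookup and the oscillation filtering described above can be sketched as follows. This is a hedged illustration, not the authors' code: the gear boundaries are the Peugeot 205 values from Table 1, while the minimum-duration threshold used to suppress impossibly fast gear changes is an assumption.

```python
# Map an overall gear ratio to a gear using Table 1-style boundaries,
# then drop gear changes shorter than a minimum number of samples
# (the paper filters out physically impossible rapid oscillations).
# Boundary values are the Peugeot 205 ranges; min_len is an assumption.

BOUNDS = [(11.87628, 1), (6.74544, 2), (4.87968, 3), (3.83916, 4), (3.12156, 5)]

def gear_from_ratio(gro):
    if gro >= 20.0:
        return 0                    # neutral
    for lower, gear in BOUNDS:
        if gro >= lower:
            return gear
    return 5                        # below fifth-gear lower bound: clamp

def filter_gear(gears, min_len=3):
    """Replace runs shorter than min_len samples with the previous stable gear."""
    out, i = [], 0
    while i < len(gears):
        j = i
        while j < len(gears) and gears[j] == gears[i]:
            j += 1
        if j - i >= min_len or not out:     # keep long runs (and the first run)
            out.extend(gears[i:j])
        else:
            out.extend([out[-1]] * (j - i)) # absorb short glitch into prior gear
        i = j
    return out
```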
2.2 Second case study: Peugeot 406

In the INSIA laboratories in Madrid, the engine RPM was measured during a test of the New European Driving Cycle (NEDC) on rolling roads. The NEDC consists of four repeated ECE-15 driving cycles and an Extra-Urban Driving Cycle (EUDC); the EUDC and only one of the four ECE-15 driving cycles were used. For the speed of the vehicle, the standard cycle values were used. The gear signal was deduced using the clutch signal and the RPM signal: a gear change event is always synchronized with activation of the clutch signal, and when the gear changes from a low to a higher position, the RPM level suddenly decreases, whereas a change from a high to a lower position does not show this sudden drop. The instantaneous values of the RPM, reference velocity, clutch, and the deduced gear signals are plotted in Figure 5.
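The deduction rule described above — gear changes coincide with clutch events, and the direction is inferred from the RPM step — can be sketched as follows. This is a minimal illustration under assumptions: the signal encoding (per-sample clutch flags and RPM values) and the 300 RPM step threshold are the sketch's choices, not from the paper.

```python
# Hedged sketch of the gear-deduction rule: a gear change happens at a
# clutch event; a sudden RPM drop marks an upshift, otherwise a downshift.
# The threshold `drop` (RPM) and the signal encoding are assumptions.

def deduce_gears(clutch, rpm, start_gear=0, drop=300.0):
    gear, gears = start_gear, []
    for i, pressed in enumerate(clutch):
        if pressed and i > 0 and not clutch[i - 1]:   # clutch just activated
            if rpm[i] - rpm[i - 1] < -drop:           # sudden RPM drop
                gear = min(gear + 1, 5)               # upshift
            else:
                gear = max(gear - 1, 0)               # otherwise: downshift
        gears.append(gear)
    return gears
```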
Figure 4: Overall gear ratio, gear, and filtered gear.
2.3 Artificial neural network modelling

Numerous neural networks are available for function approximation problems. A multilayer perceptron (MLP) feedforward neural network trained with backpropagation was chosen to analyze the data because it has many properties useful for the vehicle gear prediction problem, and it can efficiently learn large data sets. To obtain good generalization, the entire data set was divided into training (60%), validation (20%), and testing (20%) groups. MLPs are more powerful than single-layer networks, which are only able to solve linearly separable classification problems [11]. A single-hidden-layer feedforward network with a sufficiently large number of neurons satisfies the "universal approximation" property [8, 12, 13, 14]. A single-hidden-layer neural network (1-S-1), with S sigmoid neurons in the hidden layer and linear neurons in the output layer, can produce a response that is a superposition of S sigmoid functions [11]. Moreover, a single-hidden-layer feedforward network with any bounded nonlinear transfer function and N-1 hidden neurons can represent any N input-target relations exactly (with zero error) [15, 16, 17, 18]. However, a network with two hidden layers can represent any N input-target relations with a negligibly small error using only (N/2)+3 hidden neurons; this means that a network with two hidden layers is better than a network with one hidden layer in terms of the number of training samples required [15]. Consequently, in this paper both single- and double-hidden-layer networks were used, with sigmoid hidden neurons and linear output neurons.
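The network family used here (two inputs, S sigmoid hidden neurons, one linear output, i.e. 2-S-1) can be written down compactly. The sketch below is not the authors' implementation; it only shows the forward pass of that architecture, with randomly initialized parameters standing in for trained ones.

```python
import numpy as np

# Forward pass of a 2-S-1 MLP: inputs [RPM, speed], S logistic-sigmoid
# hidden neurons, one linear output neuron. A sketch, not the paper's code.

rng = np.random.default_rng(0)

def init_mlp(n_in=2, n_hidden=10, n_out=1):
    return {
        "W1": rng.normal(0.0, 0.5, (n_hidden, n_in)), "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.5, (n_out, n_hidden)), "b2": np.zeros(n_out),
    }

def forward(p, x):
    # Hidden layer: logistic sigmoid; output layer: linear.
    h = 1.0 / (1.0 + np.exp(-(p["W1"] @ x + p["b1"])))
    return p["W2"] @ h + p["b2"]    # superposition of S sigmoid responses
```

With linear output neurons, the network output is exactly the weighted superposition of S sigmoid functions described in the text.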
Figure 5: RPM, reference velocity, clutch, and the gear.
When a particular training algorithm fails on an MLP, it could be for one of two reasons: the learning rule fails to converge to the proper values of the network parameters, perhaps due to unsuitable network initialization; or the given network is unable to implement the desired function, perhaps due to an insufficient number of hidden neurons. To avoid the first possibility, the neural network models were trained and tested 10 times and the network with the lowest mean square error was chosen. Concerning the second possibility, there is no theory yet that tells how many hidden neurons are needed to approximate a given function. In most situations, there is no way to determine the best number of hidden neurons without training several networks and estimating the generalization error of each. However, the designer must take into consideration that the MLP of minimum size is less likely to learn noise during the training phase and consequently generalizes better to unseen data. The methods to achieve this design objective are network growing and network pruning. In network growing, we start with a small MLP and then add a new hidden neuron or a new hidden layer whenever we are unable to meet the design specifications. In network pruning, we start with a large MLP and then prune it by eliminating certain weights in an orderly manner. With too few hidden neurons, high training error and high generalization error result from underfitting and high statistical bias; with too many hidden neurons, low training error but still high generalization error result from overfitting and high variance [7, 10, 11, 13].
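The model-selection loop described above — a 60/20/20 split, several trainings from different random initializations, and selection of the network with the lowest error — can be sketched generically. Here `train` and `mse` are placeholders for whatever training routine and error measure are used; they are assumptions of the sketch, not functions from the paper.

```python
import random

# Sketch of the selection procedure: split 60/20/20, train from several
# random initializations, keep the model with the lowest validation MSE.
# `train` and `mse` are caller-supplied stand-ins (assumptions).

def split_data(pairs, seed=0):
    data = pairs[:]
    random.Random(seed).shuffle(data)
    n = len(data)
    return data[: int(0.6 * n)], data[int(0.6 * n): int(0.8 * n)], data[int(0.8 * n):]

def best_of_restarts(train, mse, train_set, val_set, restarts=10):
    best_model, best_err = None, float("inf")
    for seed in range(restarts):          # 10 restarts, as in the paper
        model = train(train_set, seed=seed)
        err = mse(model, val_set)
        if err < best_err:
            best_model, best_err = model, err
    return best_model, best_err
```

Selecting on a held-out set rather than on the training error is what guards against picking a restart that merely memorized the training data.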
3 Results
The mean square errors resulting from testing single and double hidden layer feedforward neural networks with the data of the same vehicle are shown in Tables 2 and 3. The minimum size model architecture that met the design goal (about 0.005) for the PEUGEOT 205 and PEUGEOT 406 was 2-10-1, with 10 hidden neurons.

Table 2: MSE resulting from testing single hidden layer networks.

    Number of         Testing Mean Square Error (MSE)
    hidden neurons    PEUGEOT 205          PEUGEOT 406
    1                 0.022929804113799    0.030737847949962
    2                 0.008113754649602    0.013009639928021
    3                 0.007150878530542    0.007953683727784
    4                 0.006916124765493    0.007533362116994
    5                 0.006770174929462    0.007356262805218
    6                 0.006801600634453    0.006821096479489
    7                 0.007161435831671    0.006249000207199
    8                 0.007262385239038    0.005906782786126
    9                 0.006665429326786    0.005840597665814
    10                0.006486881856067    0.005527620454196
    11                0.006699752798641    0.004349509945806
    12                0.006961214546624    0.005195854106452
    13                0.006687424328677    0.004042310392445
    14                0.006733573608595    0.005189990010043
    15                0.006861530582299    0.003762549738464

Table 3: MSE resulting from testing double hidden layer networks.

    Number of hidden     Testing Mean Square Error (MSE)
    neurons (S1 = S2)    PEUGEOT 205          PEUGEOT 406
    1                    0.021251738513978    0.029045230801033
    2                    0.007897066497980    0.007546159875325
    3                    0.007403216890455    0.007741855207261
    4                    0.007136833232957    0.003671035573348
    5                    0.007266126627429    0.002963149051319
    6                    0.006853332450965    0.002682340983058
    7                    0.006382907932383    0.002440590330384
    8                    0.006284173922234    0.002303214429815
    9                    0.006515939055022    0.002130067441743
    10                   0.006873106767166    0.002144081137843
Figure 6: Testing results of the proposed model for P-205.

Figure 7: Testing results of the proposed model for P-406.
The proposed models showed satisfactory results when tested with data from the same vehicle, as shown in Figures 6 and 7. On the other hand, the models failed to predict the gear signal when they were tested with data from the other vehicle.
4 Discussion
The neural network models succeeded in predicting the gear given only the corresponding engine RPM and car speed. Two model structures have been studied. Taking a mean square error of about 0.005 as the goal, it can be deduced that the simplest solution is a model with only one hidden layer and 10 neurons in that layer. On the other hand, the same neural network model failed to predict the gear signal when it was tested with the data of the other vehicle. This means that the parameters (weights and biases) of the model must be calculated for each vehicle.
5 Conclusion
An approach to predict the vehicle's gear based on the engine regime (RPM) and the vehicle's velocity (km/h) using feedforward neural networks has been presented. Two neural network models were evaluated on two different vehicles. The proposed ANN model structure has only one hidden layer and 11 neurons (10 hidden plus 1 output). This model predicts the gear of manual transmission vehicles with an acceptable mean square error (about 0.005). However, the model parameters calculated for one car cannot be extended to another vehicle; they must be recalculated for each case.
References
[1] Norman P.M., Gerald L., Charles W. Battle, Edward C.J., Understanding Automotive Electronics, Elsevier Science, USA, 2003.
[2] http://en.wikipedia.org/wiki/Main
[3] Y. Bodyanskiy, S. Popov, Neural network approach to forecasting of quasiperiodic financial time series, European Journal of Operational Research, 175(3) (2006) 1357-1366.
[4] M. Frize, C.M. Ennett, M. Stevenson, H.C.E. Trigg, Clinical decision support systems for intensive care units: using artificial neural networks, Medical Engineering & Physics, 23(3) (2001) 217-225.
[5] M. Soleimani-Mohseni, B. Thomas, P. Fahlen, Estimation of operative temperature in buildings using artificial neural networks, Energy and Buildings, 38 (2006) 635-640.
[6] Y.J. Chen, Y.M. Chen, C.B. Wang, H.C. Chu, T.N. Tsai, Developing a multi-layer reference design retrieval technology for knowledge management in engineering design, Expert Systems with Applications, 29(4) (2005) 839-866.
[7] S. Haykin, Neural Networks: A Comprehensive Foundation, MacMillan College Publishing Company, New York, 1994.
[8] O. Nelles, Nonlinear System Identification: From Classical Approaches to Neural Networks and Fuzzy Models, Springer-Verlag, Berlin, 2001.
[9] M. Khare, S.M. Shiva Nagendra, Artificial Neural Networks in Vehicular Pollution Modelling, Springer, 2007.
[10] T.I. Maris, L. Ekonomou, G.P. Fotis, A. Nakulas, E. Zoulias, Electromagnetic field identification using artificial neural networks, Proceedings of the 8th WSEAS International Conference on Neural Networks, Vancouver, British Columbia, Canada, June 19-21, 2007.
[11] A.A.M. Khalaf, M.A.M. Abo-Eldahab, M.M. Ali, System modelling using neural networks in the presence of noise, Proceedings of the 10th IEEE International Conference on Electronics, Circuits and Systems (ICECS 2003), Vol. 2, 14-17 Dec. 2003, pp. 467-470.
[12] M.T. Hagan, H.B. Demuth, Neural Network Design, PWS Publishing Company, 1996.
[13] C.M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, Oxford, 1995.
[14] W. Sarle, Neural network frequently asked questions, 1997. ftp://ftp.sas.com/pub/neural/FAQ.html
[15] G.-B. Huang, L. Chen, C.-K. Siew, Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks, 17(4) (2006) 879-892.
[16] S. Tamura, M. Tateishi, Capabilities of a four-layered feedforward neural network: four layers versus three, IEEE Transactions on Neural Networks, 8(2) (1997) 251-255.
[17] M.A. Sartori, P.J. Antsaklis, A simple method to derive bounds on the size and to train multilayer neural networks, IEEE Transactions on Neural Networks, 2(4) (1991) 467-471.
[18] S.-C. Huang, Y.-F. Huang, Bounds on number of hidden neurons of multilayer perceptrons in classification and recognition, Proceedings of the IEEE International Symposium on Circuits and Systems, 1-3 May 1990, pp. 2500-2503.
[19] G.-B. Huang, H.A. Babri, Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions, IEEE Transactions on Neural Networks, 9(1) (1998) 224-229.
Non-thermal, chemical destruction of PCB from Sydney tar ponds soil extract A. J. Britten & S. MacKenzie Cape Breton University, Sydney, Nova Scotia, Canada
Abstract The Sydney Tar Ponds, in Sydney, Nova Scotia, Canada, contain more than 700,000 tonnes of contaminated sediments including PAH, hydrocarbon compounds, coal tar, PCB, coal dust, and municipal sewage. An important source of contamination is the PCB, which cause adverse health effects in humans as well as environmental problems for the surrounding ecosystems through bioaccumulation and resistance to environmental breakdown. There are various processes for the remediation of contaminated sites; the most commonly used methods include incineration, solvent washing and/or extraction, stabilization/solidification, and base-catalyzed soil remediation. A recent and more environmentally friendly method for remediation is the "Sonoprocess™". The claim is that PCB are destroyed in a non-thermal way using a sodium reaction and high-frequency vibration to remove the chlorine atoms from the biphenyl. In this study, the process is modified to suit the Tar Ponds matrix and is tested on samples of PCB- and PAH-contaminated soil from the Tar Ponds. A steel bar (with a chamber containing the contaminated soil, sodium, and solvent attached to the end) is brought to its resonance frequency to destroy harmful contaminants. The energy which is generated is used to vibrate the PCB extract with sodium to break the C-Cl bonds. The soil mixture is removed and washed, resulting in clean, safe soil and a sodium chloride byproduct. The remaining solution from the extraction could possibly be used as a low-grade fuel. GC-ECD and GC-MS were used to identify and to quantify the compounds present before and after the PCB destruction process. PCB present at 160 mg/kg in soil were reduced to <0.25 mg/kg after extraction treatment, and the concentrated oil extract containing 400 mg/kg PCB had no detectable amount of PCB after the sodium/sonic process. Chromatograms, mass spectra, and mass spectral interpretation are included in the paper.
Keywords: PCB, non-thermal, GC-ECD, GC-MS, tar ponds, Sonoprocess™. doi:10.2495/CMEM090051
1 Introduction
The Sydney Tar Ponds is considered a highly contaminated site. "The Tar Ponds themselves are actually a tidal estuary of thirty-three hectares that contain over 700,000 tonnes of contaminated sediments including Polycyclic Aromatic Hydrocarbons [PAH], hydrocarbon (HC) compounds, coal, tar, Polychlorinated Biphenyls [PCB], coal dust, and municipal sewage" [1]. Environmental analysis of the Tar Ponds site and the degree of contamination showed that levels of PAH in the marine life were 200 times higher than elsewhere in Cape Breton [1]. In the Sydney Tar Ponds, PAH are one of the major contributors to the contamination of the site. PAH are hydrophobic organic compounds with a varying number of aromatic rings fused together, differing in the arrangement of the rings [2-3]. These environmental contaminants are produced through the incomplete combustion of fossil fuels such as oil and of other organic materials including coal and wood [2-5]. The five major sources of PAH in the environment are heating of homes, industrial sources, power plants, incineration, and transportation [5]. PCB are another source of contamination present in the environment, including the Sydney Tar Ponds [1]. They differ in chemical structure and toxicity according to the number and arrangement of chlorine atoms on the biphenyl structure [6]. The most dangerous PCB are those in the coplanar conformation, with chlorine atoms in the para positions as well as two or more in the meta positions [7]. Coplanar PCB are those in which the two biphenyl rings share the same plane. In meta- and para-substituted PCB, the chlorine atoms are attached two and three carbons away, respectively, from the carbon bonded to the second phenyl ring; in ortho-substituted PCB, the chlorine atoms are attached to the carbons directly adjacent to the one bonded to the second phenyl ring.
If chlorine atoms were substituted in the ortho positions, the steric hindrance of the substituents would eliminate the coplanar conformation. Contaminated soils are of great concern because of the adverse and carcinogenic health effects on humans and animals, due to bioaccumulation and the persistent presence of the contaminants in the environment [8-14]. PCB are man-made chemical compounds that were used in industrial practices due to their low vapour pressure and dielectric properties, as well as being non-flammable and chemically and thermodynamically stable [6,13-15]. Sources of PCB in the environment include waste sites and landfills, incineration of non-PCB-containing waste, improper storage, and high-temperature chemical reactions between carbon and chlorine [10-12]. Within the Sydney Tar Ponds site, approximately 5% (45,000 tonnes) of the total contamination is due to PCB levels greater than 50 parts per million [15], the internationally accepted threshold for defining material as PCB-contaminated. This 5% is also contaminated by PAH compounds. The main source of these contaminants is SYSCO (Sydney Steel Corporation), and possibly CN. CN used PCB oil as a lubricant within transformers; when the PCB oil was spilled, it went into the ground, leaked into the sewers, and into the North or South ponds. Therefore, high levels of contamination are present at sewage outlets and runoff sites. Areas in the Sydney Tar Ponds with the highest levels of PCB include the MAID (Municipal Ash and Industrial Disposal) site at 25 ppm and the Coke Ovens area at 61 ppm [16]. Other areas with significant levels of PCB contamination include NOCO (the area north of the Coke Ovens), Domtar, the Benzol Plant, surface waterways, and the Sydney Landfill. Treatment processes for the decontamination of PCB-polluted areas can be carried out in a number of ways. The government of Canada favours removal and treatment of contaminated soil and sediment, a solution often coupled with incineration. Incineration is the most extensively used remediation technology today [16]. A drawback of this technique is the high cost of construction and operation: the incinerator must be kept above approximately 540°C for long periods of time in order for the process to be effective, complete, and without the release of dioxin or other toxic compounds [9,11,12,15]. Also, incineration is not perceived as socially acceptable by the public; it is viewed as unsafe and dangerous for the environment as well as for the health and well-being of the surrounding residents. If incineration is not applied to a contaminated area, the government of Canada is often in favour of containment, leaving the contamination in place. Solvent extractions are used where organic solvents can be employed to separate and concentrate contaminants in the solvent [7,11,15,16]; this mixture can then be incinerated for remediation. Solvent washing and extraction followed by dechlorination has been shown to be an effective decontamination method [7,8,16]. Base-catalyzed soil remediation is a promising technique in which sodium is added to the soil to break down PCB; the result is treated soil, biphenyl, and sodium chloride [6]. An alternative to incineration and landfill is Sonic Environmental's process.
In combination with this technique, the Terra-Kleen solvent extraction process is used to remove the PCB from the soil in a non-thermal way through the use of a nontoxic solvent [16]. Combined, they are commercially known as the Sonoprocess™. The non-thermal destruction of PCB and PAH contamination is a socially acceptable method of remediation. Solvent extraction is a valuable resource for the removal of PCB, PAH, pesticides, dioxins, DDT, and petroleum products from soil. In the Sonoprocess™, a 2.8 ton steel bar is brought to its resonance frequency using an electromagnetic drive system at each end (Figure 1a). The contaminated soil, sodium, and the solvent are contained within a cell which is attached to the sonic generator (Figure 1b). The energy generated from the resonance frequency is captured and used to vibrate and separate the soil particles, releasing the PCB for fragmentation with sodium. The soil mixture is removed and washed, resulting in clean, safe soil and sodium chloride. The remaining solution from the soil extraction could possibly be used as a low-grade fuel. Gas chromatography (GC) is a widely used method for the analysis of soil extracts [6]. The detection methods used in combination with GC were electron capture, flame ionization, and mass spectrometry [10,13,14,17], the latter including Electron Impact [EI] and Negative/Positive Ion Chemical Ionization [NCI/PCI].
The Electron Capture Detector (ECD) provides a high degree of sensitivity and selectivity, which is of great value in environmental analysis [18]. The ECD is a commonly used detector; however, it cannot be used to study all types of chemicals. It is sensitive to compounds with electronegative atoms and functional groups, such as halogenated compounds (e.g. PCB) and some PAH. The radioactive source in the detector emits high-energy electrons (β particles) that ionize the detector gas, creating thermal electrons. Electrophilic compounds present in the sample capture these electrons, and the remaining electrons are collected at the detector. The ECD provides sensitivity down to picogram levels and a linear range of approximately 10^5 [18].
Figure 1: a. steel bar with electromagnetic drive system; b. reaction cell.
The Flame Ionization Detector (FID) is a non-selective detector which can also be applied to environmental analysis [18]. The sample components are burned in a flame of hydrogen and air, producing ions; the ions are collected at the detector to construct the chromatogram of the sample. The FID can produce chromatograms of all oxidizable carbon-containing compounds in the sample, and it provides a stable detector for analysis with a sensitivity of approximately 10^-10 g/s and a linear range of 10^7. The Mass Spectrometer Detector or Mass Selective Detector (MSD) is a very sensitive detector for low-concentration samples. It can be used for both quantitative and qualitative analysis and can be applied to identify unknown substances present in the sample based on GC retention time plus a characteristic spectrum of mass-to-charge ratio versus ion abundance for molecular ions and fragment ions. Ionization of sample components can be achieved in a number of ways, including EI, NCI, and PCI. EI often results in a high degree of fragmentation of sample molecules. Chemical ionization, whether negative or positive, is a 'softer' ionization technique due to lower-energy molecule-ion collisions (rather than the molecule-electron collisions of EI) and therefore results in less fragmentation. Because the molecular ion or pseudo-molecular ion is often of high relative abundance, this technique can be especially useful for the analysis of more fragile compounds and can, in some cases, increase the selectivity and sensitivity for the components of interest.
2 Experimental
2.1 Instrumentation

A Hewlett Packard 6890N Network GC System and a 7693B Series Injector from Agilent Technologies were used. Gas chromatography was combined with FID, ECD and MS (EI, PCI, and NCI). The GC column was a J&K Scientific ICB-5 (30.0 m length x 0.25 mm internal diameter x 0.25 µm film thickness). The conditions for each of the methods were varied and the most suitable were applied.

2.2 Samples

Samples of contaminated soil were collected by the Sydney Tar Ponds Agency from the Sydney Tar Ponds.

2.3 Sample preparation

For the preparation of the soil samples collected from the Sydney Tar Ponds (STP), approximately 10 grams of the soil sample was mixed in a beaker with 100 mL of acetone (Anachemia AC-0150 UN-1090 CAS 67-64-1, 99.5% min). This solution was spiked with 1000 ppm Chrysene-D12 in acetone and was placed in an ultrasonic bath for a twenty-minute extraction. The soil-acetone mix was filtered with Whatman Student Grade Filter Paper (11 cm) into a filter flask. The extraction was carried out an additional two times with clean acetone each time. The soil extract was transferred into a 500 mL separatory funnel, and the filter flask was rinsed with benzene to remove leftover heavy oil. After extraction the soil was saved for further analysis. The separatory funnel was rinsed with benzene four times to ensure complete transfer of the extract. A rotary evaporator (Rotovapor) was used to remove the acetone and benzene from the extract, and the sample was then collected into vials for analysis.
3 Results and discussion
The Sydney Tar Ponds soil sample collected and extracted contained high levels of PAH as well as PCB contamination. The soil samples were treated and analyzed using various gas chromatographic techniques. The PAH compounds remaining in the treated soil sample were analyzed using mass spectrometry; a standard of PAH compounds was used to identify as many PAH as possible based on retention time and mass spectral matching. The Sydney Tar Ponds soil sample was analyzed using GC-ECD. Figure 2 shows the chromatograms of the Sydney Tar Ponds soil extract as well as a PCB standard; the standard shows the peaks corresponding to PCB compounds present in the contaminated soil. Figure 3 shows the treated soil extract and the PCB standard once again. From these chromatograms, one may observe that the treatment process was successful in the destruction of PCB.

Figure 2: GC-ECD of a. tar ponds extract (upper, red); b. PCB standard (lower, black).

Figure 3: GC-ECD of a. treated tar ponds extract (lower, red); b. PCB standard (upper, black).

The remaining peaks appear to be PCB compounds based on their retention times; however, analysis with mass spectrometry determined that these compounds are not PCB. PCB are not detected following the treatment process; therefore, it can be concluded that the PCB compounds were destroyed during treatment of the Sydney Tar Ponds soil sample. To confirm this observation, samples were analyzed by an external Canadian Association for Environmental Analytical Laboratories (CAEAL) accredited lab, which also used GC-ECD; it confirmed that the PCB had been destroyed during treatment. Prior to treatment, the PCB concentration in soil was 160 mg/kg and in the concentrated oil, 400 mg/kg. After treatment, the concentration of PCB in soil was <0.25 mg/kg and was below the GC-ECD detection limit in the oil. For further evidence that the PCB compounds were removed during the treatment process, the treated soil sample was analyzed using NCI, an ionization technique with high sensitivity for PCB compounds. Again, PCB were no longer detected. Figure 4 shows GC-NCI MS chromatograms of the Sydney Tar Ponds soil extract before and following treatment; the peaks which are PCB compounds are indicated on the figure. These compounds are no longer detected following treatment.
Figure 4: GC-NCI MS chromatograms of a. treated tar ponds extract; b. untreated tar ponds extract (*PCB in untreated, not present in treated).
In Figure 5, three chromatograms are shown between the significant retention times of 20 to 26 minutes. This figure illustrates the differences between the untreated soil sample extract, the effect of using the sep-pak, and the treatment of the soil extract sample. One may notice that the Sydney Tar Ponds soil extract contains many compounds, based on the number of peaks present in the chromatogram. These compounds include PCB and PAH, as well as others which were not of interest for this particular study. It is the presence of these contaminants which causes environmental problems for the surrounding ecosystems, in addition to adverse health effects in humans and other organisms; it is therefore of great importance to identify as many of these compounds as possible so that the toxicity of the area can be evaluated. Also, from Figure 5, one might assume that some compounds remaining in the treated soil sample are PCB, due to their retention times as compared to the soil extract. Although these compounds have the same retention times as PCB, it was determined from their mass spectra that they are not PCB, owing to the lack of the characteristic isotopic cluster for chlorine atoms. Compounds can have the same retention times and not be the same compounds, because more than one compound may elute from the column at the same time; they were simply not resolved from one another in the untreated soil mass spectrum. The remaining compounds in the treated soil sample were of a hydrocarbon nature, which is not of concern.
Figure 5: GC-NCI MS chromatograms of a. tar ponds extract; b. tar ponds extract filtered through sep-pak; c. treated tar ponds extract.
Negative Ion Chemical Ionization provides a more selective method of ionization and results in a chromatogram with increased resolution compared to PCI (Figure 6). The NCI method of ionization produced sharp, narrow, and
well-resolved peaks in the soil sample both before and after treatment, whereas the resolution of neighbouring peaks is quite low for the PCI method of ionization. Therefore, the preferred method of ionization for environmental analysis is Negative Ion Chemical Ionization.
Figure 6:
GC-MS of treated tar ponds extract: a. NCI; b. PCI.
From the GC-EI of the treated soil sample, many of the remaining PAH compounds and other aromatic compounds were identified (Figure 7). Some of these compounds included naphthalene, biphenyl, acenaphthene, fluorene, phenanthrene, anthracene, and pyrene. All of the USEPA 16 priority pollutant PAH were present as well as many higher molecular weight PAH which are suspected carcinogens.
4 Conclusion
The soil samples collected and extracted from the Sydney Tar Ponds contained high levels of contamination, including PAH and PCB. The PCB were no longer detected following the treatment process, which was therefore successful in destroying the PCB. This result was verified through the use of GC-MS, GC-FID, and GC-ECD, as well as by an external lab. The PAH compounds present in the treated Sydney Tar Ponds sample were analyzed by GC-MS and identified using a PAH standard mixture for spectral matching and retention time confirmation.
Figure 7:
GC-EI MS of treated tar ponds extract. J&K Scientific ICB-PAH column. 1. naphthalene; 2. biphenyl; 3. acenaphthene; 4. fluorene; 5. phenanthrene; 6. anthracene; 7. pyrene.
Acknowledgements
Financial support was provided by an Office of Research and Academic Institutes, Cape Breton University RAP Grant; the Atlantic Canada Opportunities Agency (ACOA); Enterprise Cape Breton Corporation (ECBC); J&K Scientific, Inc.; and the Sydney Tar Ponds Agency (STPA). Dr. Rod McElroy's (Sonic Environmental Solutions, Inc.) assistance with the Sonoprocess™ is greatly appreciated.
References
[1] Haalboom, B.; Elliott, S.J.; Eyles, J.; Muggah, H., The risk society at work in the Sydney Tar Ponds. Canadian Geographer 2006, 50, 227-241.
[2] Haapea, P.; Tuhkanen, T., Integrated treatment of PAH contaminated soil by soil washing, ozonation and biological treatment. Journal of Hazardous Materials 2006, 136, 224-250.
[3] Chaspoul, F.; Barban, G.; Gallice, P., Simultaneous GC/MS Analysis of Polycyclic Aromatic Hydrocarbons and their Nitrated Derivatives in Atmospheric Particulate Matter from Workplaces. Polycyclic Aromatic Compounds 2005, 25, 157-167.
[4] Yurchenko, S.; Mölder, U., The determination of polycyclic aromatic hydrocarbons in smoked fish by gas chromatography mass spectrometry with positive-ion chemical ionization. Journal of Food Composition and Analysis 2005, 18, 857-869.
[5] Bjorseth, A.; Ramdahl, T., eds, Handbook of Polycyclic Aromatic Hydrocarbons: New York, New York, 1985.
[6] Stover, D., Recipe for PCB destruction: Add baking soda. Popular Science 1993, 243, 25.
[7] Majid, A.; Argue, S.; Sparks, B.D., Removal of Aroclor 1016 from contaminated soil by Solvent Extraction Soil Agglomeration Process. Journal of Environmental Engineering & Science 2002, 1, 59-64.
[8] Chu, W.; Kwan, C.Y., Remediation of contaminated soil by a solvent/surfactant system. Chemosphere 2003, 53, 9-20.
[9] Seok, J.; Hwang, K., Thermo-chemical destruction of polychlorinated biphenyls (PCBs) in waste insulating oil. Journal of Hazardous Materials 2005, 124, 133-138.
[10] Nobbs, D.; Chipman, G., Contaminated site investigation and remediation of chlorinated aromatic compounds. Separation & Purification Technology 2003, 31, 37-44.
[11] Wu, W.; Xu, J.; Zhao, H.; Zhang, O.; Liao, S., A practical approach to the degradation of polychlorinated biphenyls in transformer oil. Chemosphere 2005, 60, 944-950.
[12] Van Gerven, T.; Geysen, D.; Vandecasteele, C., Estimation of the contribution of a municipal waste incinerator to the overall emission and human intake of PCBs in Wilrijk, Flanders. Chemosphere 2004, 54, 1303-1308.
[13] Shimura, M.; Hayakawa, T.; Kyotani, T.; Ushiogi, T.; Kimbara, K., Bioremediation of polychlorinated biphenyl contaminated sludge and ballast. Proceedings of the Institution of Mechanical Engineers Part F Journal of Rail & Rapid Transit 2003, 217, 285-290.
[14] Kontsas, H.; Pekari, K., Determination of polychlorinated biphenyls in serum using gas chromatography–mass spectrometry with negative chemical ionization for exposure estimation. Journal of Chromatography B 2003, 791, 117-125.
[15] Magar, V.S., PCB Treatment Alternatives and Research Directions. Journal of Environmental Engineering 2003, 129, 961-965.
[16] Sonic Environmental Solutions, Inc., 2006.
[17] Gurprasad, N.P.; Haidar, N.A.; Manners, T.G., Applications of Negative Ion Chemical Ionization Mass Spectrometry Technique in Environmental Analysis. Communications in Soil Science & Plant Analysis 2002, 33, 3449-3456.
[18] Karasek, F.W.; Onuska, F.I., Open Tubular Column Gas Chromatography in Environmental Sciences: New York, New York, 1984.
Section 2 Experimental and computational analysis
Multi-scale FE analyses of sheet formability based on SEM-EBSD crystal texture measurement
H. Sakamoto¹, H. Kuramae², E. Nakamachi³ & H. Morimoto⁴
¹Graduate School of Science and Technology, Kumamoto University, Japan
²Department of Technology Management, Osaka Institute of Technology, Japan
³Graduate School of Life and Medical Science, Doshisha University, Japan
⁴The Furukawa Electric Co. Ltd., Japan
Abstract
Asymmetric rolling is applied to the processing of aluminum alloy sheet to control the micro-crystal structure and texture in order to improve the mechanical properties. The formability of asymmetrically rolled (ASR) aluminum alloy sheet and of conventionally symmetrically rolled (SR) sheet is examined experimentally by the limiting dome height (LDH) test. The micro-crystalline morphological changes, such as texture evolution under various proportional straining paths, are observed using SEM-EBSD measurement. Simultaneously, failures of the sheets are detected to establish the forming limits in the in-plane principal strain space, i.e. the forming limit diagram. The SEM-EBSD measurements show a significant effect of the strain path on texture evolution. They reveal that the ASR sheet develops the shear texture more strongly than the SR sheet, which can be related to the improved formability obtained by asymmetric rolling. However, finite element simulations predicting the deformation-induced texture evolution of asymmetrically rolled sheet metals have not been investigated rigorously. In this study, crystallographic homogenized finite element (FE) codes were developed and applied to simulate the LDH tests, and the results were compared with experiment. It is shown that this dynamic explicit crystallographic homogenization FEM code is a comprehensive tool for predicting plastically induced texture evolution.
Keywords: multi-scale, SEM-EBSD, crystal texture, sheet formability, LDH test.
doi:10.2495/CMEM090061
1 Introduction
Aluminum alloys have superior strength, mechanical properties and corrosion resistance, but they are used less than steel because of their poor formability [1]. Generally, the formability of sheet metal is evaluated by the Lankford value (r-value), which is closely related to the crystal texture [2]. The accumulation of the {111} orientation in the sheet plane increases the r-value and improves sheet formability [3,4]. In FCC materials such as aluminum alloys, <111>//ND orientations are developed by shear deformation, so the asymmetric rolling process was adopted in order to induce texture evolution by shear deformation. In order to clarify the effect of texture development during plastic deformation on metal forming, crystallographic homogenized finite element (FE) codes [5] were developed. Crystallographic homogenization FE analysis, however, requires substantial computation time because of the multi-scale and crystalline viscoplastic analyses. Therefore, a parallel analysis method [6] is applied to the crystallographic homogenization FE analysis and implemented on a PC cluster system. The parallel analysis system incorporates a dynamic workload balancing technique to improve parallel performance. This paper describes its application to three-dimensional sheet forming analyses, namely LDH test analyses.
2 Asymmetric rolling process and textures
The starting material for the ASR tests is hot-rolled plate of A6022 alloy (6 mm thickness). The plate was submitted to asymmetric warm rolling at 250°C in two passes: from 6.0 mm to 3.0 mm in the first pass (thickness reduction of 50%) and from 3.0 mm to 1.0 mm in the second pass (thickness reduction of 66.7%). The ratio of lower roll velocity to upper roll velocity, the asymmetric ratio η, was set to 1.75 and 2.0, respectively. The roll diameter shown in Fig. 1 is 450 mm. No solution treatment was applied to this 1 mm thick A6022-ASR plate. The textures of the ASR and SR plates were measured using the SEM-EBSD system shown in Fig. 2. Table 1 shows the SEM-EBSD measurement conditions.
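The quoted reduction ratios follow directly from the pass thicknesses; a one-line check (illustrative only):

```python
def reduction(t0, t1):
    """Thickness reduction of one rolling pass, as a percentage."""
    return 100.0 * (t0 - t1) / t0

# Two asymmetric warm-rolling passes reported for the A6022 plate:
print(reduction(6.0, 3.0))  # first pass  → 50.0
print(reduction(3.0, 1.0))  # second pass → 66.666...
```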
Figure 1: Schematic view of asymmetric rolling.
Figure 2: Photograph and schematic view of SEM-EBSD measurement system.
Figure 3: Comparison of crystal morphologies between the symmetrically- and asymmetrically-rolled sheets, (a) A6022-T43 and (b) A6022-ASR (scale bars: 80 µm).
Table 1: SEM-EBSD measurement conditions.
  Measurement magnification:          ×2000
  Measurement interval (pixel size):  0.95 µm
  Measured size:                      64 × 50 = 3200 pixels

Table 2: Grain size and number of grains.
               Grain size [µm]        Number of grains
               RD        TD
  A6022-T43    27.8      28.9         58
  A6022-ASR    6.57      6.30         1122
Figures 3(a) and (b) show 2D crystal orientation maps of A6022-T43 (symmetric) and A6022-ASR (asymmetric). The average grain size was obtained by the intercept method; the grain sizes and numbers of grains are shown in Table 2. From Fig. 3 and Table 2, asymmetric rolling produced a reduction in grain size and strong {111} orientations in the sheet plane [7].
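The intercept (mean linear intercept) method mentioned above can be sketched as follows; the scan line here is hypothetical and chosen only to illustrate the counting rule (test-line length divided by the number of grains crossed).

```python
import numpy as np

def mean_intercept_length(grain_ids, pixel_size):
    """Mean linear intercept grain size along one scan line.

    grain_ids: 1D sequence of grain labels along the line (one per pixel);
    a grain boundary is counted wherever the label changes.
    """
    grain_ids = np.asarray(grain_ids)
    boundaries = np.count_nonzero(grain_ids[1:] != grain_ids[:-1])
    line_length = len(grain_ids) * pixel_size
    # boundaries + 1 grains are intersected by the line
    return line_length / (boundaries + 1)

# Hypothetical scan line of 64 pixels (0.95 µm each, as in Table 1)
# crossing 4 grains:
line = [1]*20 + [2]*10 + [3]*24 + [4]*10
print(mean_intercept_length(line, 0.95))  # µm
```

Averaging such intercept lengths over many lines in RD and TD gives directional grain sizes like those in Table 2.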
3 Formability evaluation of asymmetric rolled sheet on LDH and texture
Figure 4 shows the forming limit diagram (FLD) of A6022-ASR and A6022-T43, where the in-plane maximum and minimum principal strains are plotted in the principal strain space together with the forming limit curves (FLC) obtained by the LDH test. This figure shows that the formability of A6022-ASR is improved, especially at the strain path ε1/ε2 = tan 60° and in bi-axial stretching. Fig. 5 shows texture evolutions of A6022-ASR under three strain path conditions, corresponding to the strain paths in Fig. 4. The {111} orientation forms as the strain path approaches the equi-biaxial state from the plane strain state (Fig. 5). The textures in the 60° direction (Fig. 5(b)) and equi-biaxial (Fig. 5(c)) conditions are clearly different from the texture under plane strain (Fig. 5(a)), which shows low formability.
Figure 4: Forming limit curves of A6022-T43 and A6022-ASR (major principal strain ε1 versus minor principal strain ε2; 1 mm thickness asymmetric and symmetric sheets).

4 Finite element analyses
4.1 Elastic/crystalline viscoplastic constitutive equation
We employed a strain-rate-dependent crystal plasticity constitutive equation. The crystalline viscoplastic shear strain rate of power-law form defined on the slip system a is expressed as follows:

γ̇^(a) = γ̇₀ sgn(τ̃^(a)) |τ̃^(a)/τ_Y^(a)|^(1/m),   (1)

where

τ̃^(a) = τ^(a) − B^(a),   (2)

and τ^(a), τ̃^(a) and B^(a) are the resolved shear stress, the effective resolved shear stress and the resolved back stress on the slip system a, respectively. τ_Y^(a) is the reference shear stress, γ̇₀ is the reference shear strain rate, and m is the coefficient of strain rate sensitivity. The hardening evolution equation of the reference shear stress is given by

τ̇_Y^(a) = Σ_{b=1}^{N} h_ab |γ̇^(b)|,   (3)
Figure 5: Texture evolutions of the asymmetric A6022-ASR (t = 1 mm) sheet under three strain path conditions: (a) plane strain; (b) 60° direction; (c) equi-biaxial ({111} and {100} pole figures).
where N is the total number of slip systems (N = 12 for the FCC crystal and N = 48 for the BCC crystal). The hardening coefficients h_ab for the hyperbolic tangent saturation equation are expressed as follows:

h_ab = q_ab h(γ),   (4)

h(γ) = h₀ sech²[ h₀ γ / (τ_s − τ₀) ],   (5)

where the matrix q_ab describes the self and latent hardening. In the case of FCC aluminum, the parameters q_ab are set to q_c = 1 for coplanar or collinear slip systems, q_v = 1.2 for slip systems with mutually perpendicular Burgers vectors, and q_l = 1.3 for the others. For the BCC steel case, we adopted q_c = q_v = q_l = 1. γ is the accumulated shear strain over all the slip systems, h₀ is the initial hardening modulus, and τ₀ and τ_s are the critical (initial) and saturated resolved shear stresses, respectively. These values are determined by parameter identification through comparison with the experimental results. A three-dimensional polycrystalline macro-continuum is formed by periodic microscopic structures of a representative volume element (RVE), as shown in Fig. 6.

4.2 SEM-EBSD measured 3D RVE and its micro finite element modeling
We obtained the distribution of crystal orientations in a 3D parallelepiped box region of the aluminum alloy (A6022) sheet metal, which we call the reference box, as shown in Fig. 7. The measurement intervals in the three directions RD, TD and ND were 3.8 µm, 3.8 µm and 5.0 µm, which correspond to the size of the unit voxel. Figure 8 shows the initial texture of the RVEs for micro finite element modeling.
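A minimal numerical sketch of the power-law slip rate (1) and the latent-hardening matrix of eqs (4)-(5) is given below. The grouping of the 12 FCC systems into coplanar triplets is a simplifying assumption for illustration only (the perpendicular-Burgers-vector case q_v is omitted); a real implementation builds q_ab from the actual slip-system geometry.

```python
import numpy as np

def slip_rate(tau_eff, tau_y, gamma0_dot=0.001, m=0.02):
    """Power-law viscoplastic shear rate on one slip system, cf. eq. (1):
    gamma_dot = gamma0_dot * sign(tau_eff) * |tau_eff / tau_y|**(1/m).
    (gamma0_dot and m are placeholder values, not the identified ones.)"""
    return gamma0_dot * np.sign(tau_eff) * np.abs(tau_eff / tau_y) ** (1.0 / m)

def latent_hardening_matrix(q_c=1.0, q_l=1.3, n_sys=12):
    """Simplified q_ab for FCC: q_c on 3x3 coplanar blocks (4 {111}
    planes x 3 systems each), q_l elsewhere. Illustrative block
    structure only."""
    q = np.full((n_sys, n_sys), q_l)
    for g in range(n_sys // 3):
        q[3*g:3*g + 3, 3*g:3*g + 3] = q_c  # coplanar triplet
    return q

# At tau_eff = tau_y the slip rate equals the reference rate gamma0_dot:
print(slip_rate(1.0, 1.0))
print(latent_hardening_matrix()[0, :4])
```

Because 1/m is large (here 50), the rate drops steeply below the reference stress and rises steeply above it, which is what makes the response nearly rate-independent.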
Figure 6: Macroscopic continuum and micro polycrystal structure (RVE and crystal orientations; homogenization between macro-continuum and micro-structure).

Figure 7: Macroscopic continuum and micro-structure.

Figure 8: Initial texture of SEM-EBSD measured RVEs.
4.3 LDH test analyses
The limiting dome height (LDH) test is simulated using the parallel dynamic explicit crystallographic homogenized FE code with the SEM-EBSD measured RVE model. The dome is formed by stretching a flat sheet over a flat punch; the tool setup is shown in Fig. 9. The number of finite elements of the macro-continuum is 448. The FE model of the micro-structure, with 216 crystal orientations, is divided into 27 (= 3×3×3) solid elements, and the 216 crystal orientations are assigned to each integration point of the micro FE model. In this study, a cluster of AMD Opteron 1.8 GHz PCs connected by a Gigabit Ethernet network is used for the parallel FE analyses. The parallel analysis time and allocated memory per CPU for the LDH problem are 6.5 hours and 683 MB, respectively, using 38 CPUs. Linear speed-up is achieved by the parallel method [6].
Figure 9: Tool set-up for FE analysis of the problem (die, blank holder and punch; 50.0 mm).

Figure 10: Thickness strain distribution of LDH tests by FEM simulation.
Figure 10 shows the thickness strain distribution at the forming limit. Thickness strain localization occurs around the punch shoulder, which is caused by the flat punch. Thinning occurs in the asymmetrically rolled alloy (A6022-ASR) under the plane strain and 75° strain conditions. The thickness strain distributions of the asymmetrically rolled aluminum sheet metal A6022-ASR are compared with experimental results in Fig. 11. The thickness strain distributions along the centerline in the RD direction of the blank obtained by the FE analysis agree well with the experimental results.
Figure 11: Comparison of thickness strain distributions of A6022-ASR along the specimen centre line, FE analysis versus experiment, for blank widths of 80 mm (plane strain), 90 mm (75°), 100 mm (60°) and 110 mm (equi-biaxial).

5 Conclusions
In this study, a parallel crystalline homogenization algorithm based on the dynamic explicit method has been implemented on a PC cluster and applied to the LDH test problem for evaluating automotive sheet metal forming. SEM-EBSD measured three-dimensional RVEs were constructed for the micro polycrystal model. The forming limit diagram of the asymmetrically rolled aluminum alloy A6022-ASR indicates higher formability than the symmetrically rolled aluminum sheet metal A6022-T43 in the 60° and 75° directions.
References
[1] Japan Institute of Light Metals (Eds.), Aluminum Product and the Manufacturing Technology (in Japanese), Japan Institute of Light Metals, Japan, 2001.
[2] Nagashima, S., (Ed.), Texture (in Japanese), Maruzen Co. Ltd., 1984.
[3] Sakai, T., Yoneda, K. and Osugi, S., Microstructure and Texture Control of Al-Mg Alloy Sheets by Differential Speed Rolling, Proc. of the 14th International Conference on Textures of Materials, pp. 597-602, 2005.
[4] Inoue, H. and Inakazu, N., Estimation of r-Value in Aluminum Alloy Sheets by Quantitative Texture Analysis (in Japanese), Journal of Japan Institute of Light Metals, Vol.44, No.2, pp. 97-103, 1994.
[5] Nakamachi, E., Tam, N. N. and Morimoto, H., Int. J. Plasticity, 23, 450-489, 2007.
[6] Kuramae, H., Okada, K., Yamamoto, M., Tsuchimura, M., Sakamoto, H. and Nakamachi, E., Parallel Performance Evaluation of Multi-scale Finite Element Analysis based on Crystallographic Homogenization Method, Computational Plasticity part 1, Eds. D. R. J Owen et al., CIMNE, Barcelona, pp. 622-625, 2005.
[7] Nakamachi, E., Development of Multi-Scale Finite Element Analysis Codes for High Formability Sheet Metal Generation, Materials Processing and Design: Modeling, Simulation and Applications (Proc. of Numiform 2007), American Institute of Physics, pp. 215-220, 2007.
Direct simulation of sounds generated by collision between water drop and water surface
M. Tsutahara¹, S. Tajiri¹, T. Miyaoka¹, N. Kobata¹ & H. Tanaka²
¹Graduate School of Engineering, Kobe University, Japan
²Universal Shipbuilding Corporation, Japan
Abstract
In this paper, we present applications of the finite difference lattice Boltzmann method (FDLBM) to direct simulations of fluid dynamic sound. A two-particle model is used to simulate two-phase flows, and by introducing fluid elasticity the sound propagation inside the liquid is recovered. The sounds generated by bubbles and by water drop collisions on shallow and deep water are successfully simulated.
Keywords: fluid dynamic sound, lattice Boltzmann method, two-phase flow, underwater sound, bubble, water drop.
1 Introduction
The lattice Boltzmann method [1–6] is now a very powerful tool of computational fluid dynamics (CFD). This method differs from ordinary CFD methods based on the Navier-Stokes equations in that it is based on particle motions. The most successful models so far are for incompressible fluids, but several models for thermal compressible flows have been proposed, including our own [7–13]. On the other hand, this method has a great advantage in simulating multi-phase flows, because the interface is determined automatically without special treatment [14–17]. We propose a new model for liquids that considers the elasticity of the liquid, so that the speed of sound propagating inside the liquid is correctly realized. A simulation of a water drop colliding with the water surface and the resulting sound emission is performed.
doi:10.2495/CMEM090071
2 Basic equations
2.1 Discrete BGK equation
The basic equation of the finite difference lattice Boltzmann method is the following discrete BGK equation:

∂f_i^k/∂t + c_iα ∂f_i^k/∂x_α = −(1/τ)(f_i^k − f_i^eq k)   (1)

where f_i^k is the distribution function, i.e. the number density of particles having velocity c_iα; the subscript i represents the direction of particle translation and α the Cartesian co-ordinates. f_i^eq k is the local equilibrium distribution function. The term on the right-hand side represents the collision of particles, and τ is the single relaxation time factor. The superscript k distinguishes the phases of the two-particle model: k = G represents the gas phase and k = L the liquid phase. The macroscopic variables, density ρ and flow velocity u, are obtained by

ρ^k = Σ_{i=1}^{m} f_i^k = Σ_{i=1}^{m} f_i^eq k   (2)

ρu = Σ_{k=G,L} Σ_{i=1}^{m} f_i^k c_i = Σ_{k=G,L} Σ_{i=1}^{m} f_i^eq k c_i   (3)

where m represents the number of particle velocities. The pressure of the gas phase is P = ρ^G/3.

2.2 Interface treatment
In order to obtain a sharp interface for the immiscible two-particle model, artificial separation techniques are sometimes introduced. In this study, the phase separation or re-color technique of Latva-Kokko and Rothman [18, 19] is employed. In this technique, an additional term is introduced into the discrete BGK equation as

∂f_i^k/∂t + c_iα ∂f_i^k/∂x_α = −(1/τ)(f_i^k − f_i^eq k) + (f_i^k − f_i′^k)   (5)

where f_i′^k is the re-distributed function calculated from the gradient of the interface. f_i′^k is given by
f_i′^G = [ρ^G/(ρ^G + ρ^L)] (f_i^G + f_i^L) + κ [ρ^G ρ^L/(ρ^G + ρ^L)²] ( f_i^eq G(0) + f_i^eq L(0) ) cos φ_i

f_i′^L = [ρ^L/(ρ^G + ρ^L)] (f_i^G + f_i^L) − κ [ρ^G ρ^L/(ρ^G + ρ^L)²] ( f_i^eq G(0) + f_i^eq L(0) ) cos φ_i   (6a,b)
where κ is the separation parameter controlling the thickness of the diffusive interface, and f_i^eq k(0) is the equilibrium distribution function for zero velocity, representing natural or pure diffusion. The sums of f_i′^G and f_i′^L are not changed by the collision, so the density, the momentum and the energy are conserved. φ_i is the angle between the density gradient and the particle velocity, given by

cos φ_i = (G · c_i)/(|G| |c_i|)   (7)

G(x) = Σ_i c_i [ρ^G(x + c_i) − ρ^L(x + c_i)]   (8)
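Equation (7) can be sketched for the D2Q9 velocity set used later in Sec. 2.8. This is an illustrative fragment only; the rest particle and a vanishing colour gradient are handled by returning cos φ = 0.

```python
import numpy as np

# D2Q9 particle velocities, cf. eq. (21); index 0 is the rest particle.
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)

def cos_phi(colour_gradient):
    """cos(phi_i) between the colour gradient G and each velocity c_i,
    eq. (7); zero for the rest particle and for |G| = 0."""
    G = np.asarray(colour_gradient, dtype=float)
    out = np.zeros(len(C))
    Gn = np.linalg.norm(G)
    if Gn == 0.0:
        return out
    for i, c in enumerate(C):
        cn = np.linalg.norm(c)
        if cn > 0.0:
            out[i] = np.dot(G, c) / (Gn * cn)
    return out

# Interface normal pointing in +x: the +x particle aligns fully (cos = 1),
# the -x particle anti-aligns (cos = -1), diagonals give ±1/sqrt(2).
print(cos_phi([1.0, 0.0]))
```

The sign of cos φ_i is what pushes gas-phase mass toward the gas side of the interface and liquid-phase mass toward the liquid side in eq. (6a,b).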
2.3 Introduction of external forces
External forces are introduced into the equilibrium distribution function as an impulsive force per unit mass. The equilibrium distribution function f_i^eq k = f_i^eq k(t, ρ^k, u_α) is modified by the external force F_α as

ρ → ρ,   (9)

u_α → u_α + τ F_α   (10)

where the impulsive force includes the gravitational force in Sec. 2.4, the surface tension force in Sec. 2.5, and the acceleration modification due to the density difference in Sec. 2.6.

2.4 Gravitational force
The gravitational force can be introduced by expressing the force in (10) as

F_α = g_α δ_αβ   (11)

where g_α represents the direction of the gravitational force. However, this force is not employed in this study, because compression waves are generated when the gravitational force is introduced inside the deep water described later, and these waves hardly damp. Instead of introducing the gravitational force, the initial velocity of the drop is imposed; details are given in Sec. 4.

2.5 Surface tension
We employ the model proposed by Gunstensen et al. [20] and the Continuum Surface Force (CSF) method [21]. In CSF, the surface tension force F_S is given by

F_S = σ Κ n̂   (12)

where σ is the surface (interfacial) tension coefficient, Κ is the curvature of the interface, and n̂ is the unit normal vector of the interface, n̂(x) = n(x)/|n(x)|. The normal vector on the interface n(x) is calculated by
n(x) = ∂[ρ^G(x) − ρ^L(x)]/∂x   (13)

The curvature Κ is calculated by

Κ = −(∇ · n̂) = (1/|n|) [ (n/|n|) · ∇|n| − (∇ · n) ]   (14)
2.6 Model for two fluids with large density difference
He et al. [22] have proposed a fluid model for density differences up to about 40, and Inamuro et al. [23] have proposed a model with density differences up to 1000. However, the liquid phase is completely incompressible in Inamuro's model, so sound in the liquid phase cannot be simulated with it. We therefore propose a novel model for two-phase flows with a large density difference. The density difference is realized by changing the acceleration, and this effect is introduced into the model by an impulsive force for each particle:

F_i^n = −a + a′ = −[ −(1/ρ) ∂P/∂x + (µ/ρ) ∇²u ] + [ −(1/(m̄ρ)) ∂P′/∂x + (µ′/(m̄ρ)) ∇²u ]   (15)

where P′ is the effective pressure, a is the acceleration cancelling the original force term, and a′ is the newly introduced acceleration due to the density and viscosity of the fluid under consideration. m̄ represents an averaged density and µ′ an averaged viscosity:

m̄ = Σ_{k=G,L} m^k ρ^k / Σ_{k=G,L} ρ^k,   µ′ = Σ_{k=G,L} µ^k ρ^k / Σ_{k=G,L} ρ^k   (16a,b)

The density ratio of the two fluids is given as m^L/m^G; in the case of water and air it is about 800, the ratio of viscosities is about 70, and both change continuously across the interface.

2.7 Compressibility of liquid
A simple model of bulk elasticity is introduced to account for the compressibility of the liquid:

P′ = P₀ + β(ρ − ρ₀)   (17)

where β is a parameter controlling the elasticity, corresponding to the bulk modulus of elasticity, P₀ is a reference pressure, and ρ₀ is a reference density, fixed to unity in this study. In the liquid phase, we do not consider temperature changes. The sound speed of the liquid phase is given as

c_s^L = √(ΔP′/Δρ) = √(β^L/m^L)   (18)
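Equation (18) ties the adjustable bulk modulus β to the liquid sound speed. With the values quoted later in the paper (β^L = 6400 and an averaged liquid density of 800, all in lattice units), the check below gives c_s^L = √8 ≈ 2.83, roughly 4.9 times the D2Q9 gas-phase sound speed 1/√3. This is an illustrative sketch only.

```python
from math import sqrt

def liquid_sound_speed(beta_L, m_L):
    """Liquid-phase sound speed from eq. (18): c_s^L = sqrt(beta_L / m_L)."""
    return sqrt(beta_L / m_L)

# Lattice-unit parameters quoted in Sec. 3: beta_L = 6400, density ratio 800.
c_L = liquid_sound_speed(6400.0, 800.0)
c_G = 1.0 / sqrt(3.0)       # D2Q9 gas-phase sound speed, cf. Sec. 2.8
print(c_L, c_L / c_G)       # ≈ 2.83 and ≈ 4.9
```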
2.8 Two-dimensional 9-velocity model
In this study, the two-dimensional 9-velocity (D2Q9) model [1–6] is used, whose equilibrium distribution function is given by

f_i^eq = ω_i ρ [ 1 + 3 u_α c_i,α + (9/2) u_α u_β c_i,α c_i,β − (3/2) u² ]   (19)

where

ω₀ = 4/9,  ω₁₋₄ = 1/9,  ω₅₋₈ = 1/36.   (20)

The velocity set is shown in Fig. 1 and expressed as

c_i = (0, 0)  for i = 0
c_i = ( cos[π(i − 1)/2], sin[π(i − 1)/2] )  for i = 1–4   (21)
c_i = √2 ( cos[π(i − 9/2)/2], sin[π(i − 9/2)/2] )  for i = 5–8

The sound velocity in the gas phase is c_s^G = 1/√3 in this model. The scheme for the calculation is the third-order upwind scheme (UTOPIA) in space and the second-order Runge-Kutta method for the time integral.
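The equilibrium distribution (19) with the weights (20) and velocity set (21) conserves density and momentum exactly, which is easy to verify numerically. The sketch below assumes the standard D2Q9 ordering (rest particle first, then axis velocities, then diagonals).

```python
import numpy as np

# D2Q9 weights and velocities from eqs (20)-(21).
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)

def f_eq(rho, u):
    """Equilibrium distribution of eq. (19) for density rho and velocity u."""
    cu = C @ u
    return W * rho * (1.0 + 3.0 * cu + 4.5 * cu**2 - 1.5 * (u @ u))

rho, u = 1.0, np.array([0.05, -0.02])
f = f_eq(rho, u)
print(f.sum())    # recovers rho, cf. eq. (2)
print(C.T @ f)    # recovers rho*u, cf. eq. (3)
```

Because Σω_i c_i,α c_i,β = (1/3)δ_αβ for this set, the same moments also reproduce the gas-phase pressure P = ρ^G/3 and sound speed 1/√3.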
3 Underwater sound augmented by air bubbles

As a preliminary test, we briefly describe the simulation of underwater sound augmented by air bubbles. The initial condition is shown in Fig. 1. A circular cylinder is set in a uniform flow of water, and three bubbles approach the cylinder. The computation parameters are as follows. The number of grid points is 251×154. The flow velocity is U = 0.2 (normalized by the particle velocity given in (21)), and the kinematic viscosity of water is ν = 0.01. The separation parameter and the surface tension are κ = 2.0 and σ = 0.00001, respectively. The ratio of gas-liquid densities is 1:800 (air and water) and the liquid bulk elasticity is β^L = 6400. The non-dimensional time is defined by T* = tU/D.
Figure 1: Initial position of circular cylinder and bubbles.
Figure 2(a) shows the bubble positions and the sound pressure field around the cylinder. An Aeolian tone can be detected, but at this scale the pattern cannot be shown. The circular disturbance is one that occurs at the early stage when the flow suddenly starts. Figure 2(b) shows that when the bubbles are stretched, strong sound is emitted, and the bubbles also produce strong dipole-like sound when they are involved in the Kármán vortices (Fig. 2(c) and (d)). The direction of the dipole is perpendicular to the flow, unlike Aeolian tones, for which the direction is along the flow.
Figure 2: Bubble position and underwater sound: (a) T* = 62; (b) T* = 64; (c) T* = 66; (d) T* = 68.
4 Sound generated by a water drop colliding with a shallow water surface

The sound generated by a water drop colliding with the water surface is very well known, and it has been studied intensively [24, 25]. In this section, we calculate the sound generated by the collision of a water drop with shallow water using the above-mentioned model. We chose this problem because the results other than the sound, such as the splash shape, can be compared with other simulation results. In this calculation, we did not impose the gravitational force, because internal compression waves are generated just after the gravitational force is imposed. Instead, an initial velocity was imposed, as shown in Fig. 3. The parameters of the calculation are as follows. The number of grid points is 503 × 401 and the minimum grid size is 2×10⁻⁵. The time increment is set to 2×10⁻⁶. The density ratio is m^L/m^G = 1000, the kinematic viscosity of the gas is ν^G = 1×10⁻⁶, and the bulk elastic modulus of the liquid is 6400. The phase separation coefficient is κ = 0.9, the surface tension coefficient σ = 1×10⁻⁷, the droplet diameter D = 2×10⁻³, the water depth D_f = 0.075D, and the initial velocity of the droplet U = 0.02. It should be noted that these parameters are all non-dimensional values based on the minimum particle speed c = 1, the reference time t = 1, and the reference length c/t = 1.
Figure 3: Calculation domain for the impact of a drop on a thin film of the same liquid. A mirror boundary condition was employed for the central domain, and the upper boundary condition was a free boundary.

Figure 4: Log-log plot of the spread factor r* = r/D versus dimensionless time t* = Ut/D for Re = 100, 500 and 1000. The solid line corresponds to the power law R = (DUt)^1/2.
Figure 4 shows the relationship between the non-dimensional time t* = Ut/D and the spread factor r* = r/D, where U is the initial velocity of the drop, D is the diameter of the drop and r is the radius of the splash root. The relationship fits the power law well. Figure 5 shows the shape of the splash; it should be noted that three small gas bubbles are also generated and trapped in the water. The sound generated at the collision is shown in Fig. 6. In this figure the pressure fluctuation p* = (p - p0)/p0 is shown, where p0 is the initial pressure, or the pressure at infinity. The sounds propagating into the gas phase and into the liquid phase are shown simultaneously. It is also seen that the sound propagating into the gas phase has a complicated directivity, and that this directivity depends on the depth of the water.
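The square-root spreading law of Fig. 4 implies a slope of exactly 0.5 on the log-log plot. A quick numerical check, using the non-dimensional D and U quoted in the text:

```python
import math

# Non-dimensional parameters from the text
D = 2e-3   # droplet diameter
U = 0.02   # initial velocity of the drop

def spread_radius(t):
    """Splash-root radius under the power law R = sqrt(D*U*t)."""
    return math.sqrt(D * U * t)

# Slope of log r* versus log t* between two dimensionless times
t1, t2 = 0.1 * D / U, 1.0 * D / U          # t* = 0.1 and t* = 1
r1, r2 = spread_radius(t1) / D, spread_radius(t2) / D
slope = (math.log(r2) - math.log(r1)) / math.log(t2 / t1)
print(slope)  # the log-log slope of the solid line in Fig. 4, 0.5
```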
Figure 5: Splash at an early stage of the collision between the drop and shallow water.

Figure 6: Sound field at the same time as in Fig. 5.
5 Sound generated by a water drop colliding with a deep water surface

In this section we calculate the sounds generated by the collision of a water drop with a deep water surface, propagating into both the gas phase and the water phase, using the above-mentioned model. As before, we did not impose the gravitational force, because internal compression waves are generated just after the gravitational force is imposed; instead, an initial velocity was imposed as shown in Fig. 7(b). The parameters of the calculation are given as follows.

5.1 Simulation parameters

A non-uniform Cartesian grid is used, with a fine grid concentrated near the horizontal water surface and the drop, as shown in Fig. 7. The number of grid points is 303 × 501, the minimum grid size is 2 × 10⁻⁵, and the time increment is set to 2 × 10⁻⁶. The diameter of the drop is D = 2.0 × 10⁻³, and the initial height, or distance between the water surface and the drop, is H = 0.05D. The density ratio is mL/mG = 800, and the viscosities of the gas and the liquid are, respectively, µG = 1.2 × 10⁻⁶ and µL = 6.4 × 10⁻⁵. The surface tension coefficient is σ = 3.9 × 10⁻⁴, the bulk elastic modulus of the liquid is βL = 6400, the phase separation coefficient is κ = 0.9, and the initial velocity of the droplet is U = 0.02. The sound speed in the gas phase is csG = 0.578 and that in the liquid is csL = 2.50, so the ratio of the two sound speeds is 4.33. It should be noted that these parameters are all non-dimensional values based on the minimum particle speed c = 1, the reference time t = 1, and the reference length c/t = 1. The governing non-dimensional parameters are the Reynolds number Re = ρLUD/µL = 500, the Mach number Ma = U/csL = 0.008, and the Weber number We = ρLU²D/σ = 1.6.
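These dimensionless groups can be cross-checked directly from the listed values. A sketch, in which ρL is taken as the density ratio mL/mG = 800 in the non-dimensional units of the text:

```python
# Cross-check of Re, Ma and We from the Section 5.1 simulation parameters.
# All values are the non-dimensional ones listed in the text.
rho_L, U, D = 800.0, 0.02, 2.0e-3
mu_L, c_sL, sigma = 6.4e-5, 2.50, 3.9e-4

Re = rho_L * U * D / mu_L       # Reynolds number
Ma = U / c_sL                   # Mach number
We = rho_L * U**2 * D / sigma   # Weber number

print(Re, Ma, round(We, 2))  # compare with Re = 500 and We = 1.6 in the text
```

This reproduces Re = 500 and We ≈ 1.6; note that the ratio U/csL evaluates to 0.008.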
Figure 7: Grid system near the drop and water surface, and initial position of the drop: (a) grid system in a half domain; (b) initial position of the drop. A mirror boundary condition was employed for the central domain, and the upper boundary condition is a free boundary.
Figure 8: Sounds emitted into the gas phase and the liquid phase.
5.2 Sounds propagating in the gas and liquid phases

The sounds propagating into the gas phase and the liquid phase at a very early stage of the collision are shown in Fig. 8. The sound in the gas phase is generated first, as the drop moves suddenly, and then again at the collision of the drop with the water surface. The underwater sound, on the other hand, is generated at the collision and is seen to propagate about four times faster than the sound propagating into the gas phase. It is noted that a sound pattern appears inside the drop: the sound originally generated at the collision plane (the horizontal plane) propagates upward into the drop and downward into the deep water. The sound goes up, reflects at the surface of the drop and travels downward again. Some of this sound passes through the neck formed by the contact of the drop with the water surface and goes out into the deep water, while the rest reflects and goes back into the drop.
The reflections continue, and the sound is emitted into the water in groups; two groups are shown in Fig. 6. We shall discuss this phenomenon later. The directivity of the sound propagation is shown in Fig. 9, which is calculated by taking sound pressure data at the points shown on the left of Fig. 9 and taking the maximum pressure fluctuation ΔP* = |p*max - p*min|. In the gas phase, the directivity appears complicated, and strong directivity is seen in the direction of 30 degrees from the horizontal surface; this sound is generated by the splash. In the liquid phase, the directivity is dipole-like, as has been reported for rainfall sounds.

Figure 9: Observing points and directivity of sound. The normalized amplitude of the pressure fluctuation ΔP*f = |P*f,max - P*f,min| (×10⁻⁷) is plotted against the normalized distance from the impact point, r* = r/c.
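The directivity measure described above reduces, at each observation angle, to the peak-to-peak pressure fluctuation of the recorded trace. A minimal sketch with synthetic placeholder traces (the angular amplitude profile below is hypothetical, not simulation data):

```python
import math

# At each observation angle, record p*(t) and take dP = p*_max - p*_min,
# as in the construction of Fig. 9. Traces here are synthetic placeholders.

def directivity(traces):
    """Map angle (deg) -> peak-to-peak pressure fluctuation."""
    return {angle: max(p) - min(p) for angle, p in traces.items()}

def trace(amplitude, n=200):
    """Hypothetical damped oscillation standing in for a pressure record."""
    return [amplitude * math.exp(-0.01 * i) * math.sin(0.3 * i) for i in range(n)]

# Hypothetical amplitude profile that peaks at 0 degrees and vanishes at +/-90
traces = {angle: trace(1e-7 * (1 + math.cos(math.radians(2 * angle))))
          for angle in (-90, -45, 0, 45, 90)}
dP = directivity(traces)
print(max(dP, key=dP.get))  # angle with the strongest emission (0 here)
```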
Conclusions

A two-phase flow model of the finite difference lattice Boltzmann method with a large density difference, and including the elasticity of the liquid, has been proposed. Direct simulations of aerodynamic sound and underwater sound were successfully performed, in particular of the sound generated at interfaces between gas and liquid.
References
[1] Qian, Y.H., Succi, S. and Orszag, S.A., Recent advances in lattice Boltzmann computing, Ann. Rev. of Comp. Phys. III, D. Stauffer ed., World Scientific, pp. 195-242, 1995.
[2] Rothman, D.H. and Zaleski, S., Lattice-Gas Cellular Automata, Cambridge University Press, 1997.
[3] Chopard, B. and Droz, M., Cellular Automata Modeling of Physical Systems, Cambridge University Press, 1998.
[4] Chen, S. and Doolen, G.D., Lattice Boltzmann method for fluid flows, Ann. Rev. Fluid Mech., Ann. Rev. Inc., pp. 329-364, 1998.
[5] Wolf-Gladrow, D.A., Lattice-Gas Cellular Automata and Lattice Boltzmann Models, Lecture Notes in Mathematics, Springer, 2000.
[6] Succi, S., The Lattice Boltzmann Equation for Fluid Dynamics and Beyond, Oxford, 2001.
[7] Alexander, F.J. et al., Lattice Boltzmann thermodynamics, Phys. Rev. E, 47, pp. R2249-R2252, 1993.
[8] Chen, Y., et al., Thermal lattice Bhatnagar-Gross-Krook model without nonlinear deviations in macrodynamic equations, Phys. Rev. E, 50, pp. 2776-2783, 1994.
[9] Takada, N. and Tsutahara, M., Proposal of lattice BGK model with internal degrees of freedom in lattice Boltzmann method, Transactions of JSME B, 65-629, pp. 92-99, 1999 (in Japanese).
[10] McNamara, G.R., et al., Stabilization of thermal lattice Boltzmann models, J. Stat. Phys., 81(1/2), pp. 395-408, 1995.
[11] Kataoka, T. and Tsutahara, M., Lattice Boltzmann model for the compressible Navier-Stokes equations with flexible specific-heat ratio, Phys. Rev. E, 67, pp. 036306-1-4, 2004.
[12] Watari, M. and Tsutahara, M., Two-dimensional thermal model of the finite-difference lattice Boltzmann method with high spatial isotropy, Phys. Rev. E, 69, pp. 035701-1-7, 2004.
[13] Tsutahara, M., Takada, N. and Kataoka, T., Lattice Gas and Lattice Boltzmann Methods, Corona-sha, 1999 (in Japanese).
[14] Swift, M.R., Orlandini, E., Osborn, W.R. and Yeomans, J.M., Lattice Boltzmann simulations of liquid-gas and binary-fluid systems, Phys. Rev. E, 54, pp. 5041-5052, 1996.
[15] Swift, M.R., Osborn, W.R. and Yeomans, J.M., Lattice Boltzmann simulation of non-ideal fluids, Phys. Rev. Lett., 75, pp. 830-833, 1995.
[16] Shan, X. and Chen, H., Lattice Boltzmann model for simulating flows with multiple phases and components, Phys. Rev. E, 47, pp. 1815-1819, 1993.
[17] Shan, X. and Chen, H., Simulation of non-ideal gases and liquid-gas phase transitions by the lattice Boltzmann equation, Phys. Rev.
E 49, pp. 2941-2948, 1994.
[18] Latva-Kokko, M. and Rothman, D.H., Diffusion properties of gradient-based lattice Boltzmann models of immiscible fluids, Phys. Rev. E, 71, 056702, 2005.
[19] Latva-Kokko, M. and Rothman, D.H., Static contact angle in lattice Boltzmann models of immiscible fluids, Phys. Rev. E, 72, 046701, 2005.
[20] Gunstensen, A.K., Rothman, D.H., Zaleski, S. and Zanetti, G., Lattice Boltzmann model of immiscible fluids, Phys. Rev. A, 43, pp. 4320-4327, 1991.
[21] Brackbill, J.U., Kothe, D.B. and Zemach, C., A continuum method for modeling surface tension, J. Comput. Phys., 100, pp. 335-354, 1992.
[22] He, X., Chen, S. and Zhang, R., A lattice Boltzmann scheme for incompressible multiphase flow and its application in simulation of Rayleigh-Taylor instability, J. Comput. Phys., 152, pp. 642-663, 1999.
[23] Inamuro, T., Ogata, T., Tajima, S. and Konishi, N., A lattice Boltzmann method for incompressible two-phase flows with large density differences, J. Comput. Phys., 198, pp. 628-644, 2004.
[24] Pumphrey, H.C., Crum, L.A. and Bjørnø, L., Underwater sound produced by individual drop impacts and rainfall, J. Acoust. Soc. Am., 85(4), pp. 1518-1526, 1989.
[25] Prosperetti, A. and Oguz, H.N., The impact of drops on liquid surfaces and the underwater noise of rain, Ann. Rev. Fluid Mech., 25, pp. 577-602, 1993.
Experimental and numerical analysis of concrete slabs prestressed with composite reinforcement
R. Sovják¹, P. Máca¹, P. Konvalinka¹ & J. L. Vítek²
¹Experimental Centre, Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic
²Department of Concrete and Masonry Structures, Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic
Abstract

The behaviour of concrete slabs prestressed with glass fibre reinforced polymer (GFRP) composite bars is investigated in this paper. The main advantage of GFRP bars is their high strength to self-weight ratio. On the other hand, their Young's modulus is very low compared to steel reinforcement, which is the main cause of unacceptable deflections. To eliminate such deflections and to utilize the high tensile strength of GFRP bars, it is very useful to pretension the bars. During the experimental work a set of three concrete slabs 4.5 m long was cast. Each slab was prestressed with four GFRP bars. The slabs were subjected to four-point bending. Each specimen was subjected to ten load cycles and afterwards loaded until failure. The deflection under each loading point and in the middle of the beam span was recorded. Moreover, the stress at the lower and upper surfaces was measured. The experimental procedure was modelled numerically in nonlinear finite element software. Brick elements were used for meshing and the full Newton-Raphson method was used for the calculation. A special bond-slip law (GFRP-concrete) is included, based on the experimental results of pull-out tests. A stress-strain diagram and the stress and crack development were plotted. The experimental and numerical results show a statistically significant correlation. The large deflection typical of GFRP reinforced slabs is observed, as well as very early crack propagation. The serviceability limit state (SLS) is exceeded much earlier than bar rupture is observed. For this reason, it is recommended that the design of GFRP reinforced structures be governed by SLS criteria.

Keywords: composites, concrete, GFRP, numerical modelling, prestressing.

doi:10.2495/CMEM090081
1 Introduction
The service lifetime of reinforced concrete structures is in many cases determined by the durability of the reinforcement itself. For this reason much attention is being paid to improving the resistance of the reinforcement to aggressive environments. Because steel bars are currently the most widely used type of reinforcement, several options for improving their corrosion resistance have been developed. One way is to improve the properties of the concrete itself by decreasing its permeability, increasing the concrete cover and waterproofing the concrete. Another way is to use epoxy-coated bars. However, it has been proven that none of these measures, nor their combination, can eliminate the long-term risk of steel corrosion [1]. The ultimate way to provide corrosion resistance is to use bars made from stainless steel, but this solution is very expensive and not always possible. Therefore, in recent years non-metallic reinforcement has gained a great deal of interest from many researchers.

Fibre-reinforced polymer (FRP) reinforcement is by nature corrosion resistant. Therefore it can be successfully used in highly corrosive environments such as bridge decks, off-shore structures and slabs in chemical factories, where high corrosion resistance is required. Furthermore, FRP reinforcement has other valuable properties such as magnetic transparency, thermal non-conductivity and generally higher tensile strength than steel. The behaviour of FRP bars, however, is different from that of steel and is highly dependent on the type of fibre and the production process. FRP reinforcement is linear-elastic up to failure and its elastic modulus is typically lower than that of steel. For instance, FRP bars reinforced with glass fibres (GFRP) typically have a modulus of elasticity of 15 to 25% of that of steel. Furthermore, FRP bars with many different surface textures are available on the market, and every producer makes a unique type of bar. For this reason the design of structures reinforced with FRP bars must be approached with maximal caution.

As mentioned above, the low elastic modulus is the biggest issue that needs to be considered when designing concrete structures reinforced with GFRP bars. The main problem is that the overall stiffness of a GFRP reinforced concrete member decreases significantly after the concrete cracks in the tension zone. From cracked section analysis [2] it can easily be calculated that deflections and crack widths will be much larger for a concrete member reinforced with GFRP bars than for one reinforced with steel. One method of eliminating unacceptable deflections at the serviceability limit state (SLS) is to pretension the GFRP reinforcing bars. This paper describes both experimental and numerical analyses of concrete slabs reinforced with pre-tensioned GFRP bars. Many researchers have reported [3, 4, 10] that pretensioning of GFRP bars helps to decrease deflections and crack widths.

1.1 Research significance

Composite reinforcement is a relatively new material and it has the potential to be widely applied in structures where special properties such as corrosion resistance
or magnetic wave transparency are required. In fact, FRP reinforcement has been implemented in several structures all over the world. However, composite materials, including FRP, are very dependent on the production technology and process. Therefore every producer creates a unique product which is very similar to other products but not exactly the same, much like fingerprints. FRP bars consist of two main components, fibres and matrix, and each component can be chosen according to the specific needs of the customer. This provides the much-needed adjustability and variability of building materials. Moreover, the final characteristics of the entire rod differ from those that would be obtained by a simple summation of the properties of the fibres and the matrix; this effect is called synergism [5]. Based on these facts it is very important to conduct extensive research in this specific area in order to describe, clarify and completely understand the behaviour of concrete structures reinforced with the FRP reinforcement that is available locally.
2 Material characteristics

Before the loading tests were performed, the exact mechanical characteristics of the materials used in this research were determined. This is very important, as it helps to reduce the number of variables. The elastic modulus, modulus of rupture (MOR) and compressive strength of the concrete were measured. Cylinders were used for the modulus of elasticity measurement and 150 mm cubes were used for the ultimate compressive strength determination. The average results of these measurements are summarized in Table 1. Furthermore, basic properties of the GFRP, such as the modulus of elasticity and the axial coefficient of thermal expansion, were determined. The tensile strength was determined by the producer and was not checked by the researchers. These material characteristics are compared with standard steel properties in Table 2.

Table 1: Material characteristics of the concrete used.

Property | Size of test specimen | Average value
Compressive strength | cubes 150 mm | 36.55 MPa
MOR | beams 100 x 100 x 400 mm | 6.08 MPa
Modulus of elasticity | cylinders Ø150 x 400 mm | 30.8 GPa

Table 2: Material characteristics of GFRP bars in comparison with standard steel.

Property | GFRP | Steel B500
Tensile strength [MPa] | 650 | 500
Modulus of elasticity [GPa] | 40 | 210
Axial coefficient of thermal expansion [1/°C] | 6e-06 | 12e-06
Stress-strain diagram | Linear-elastic | Bi-linear with hardening
3 Experimental procedure

The experimental work was done in the laboratories of the Experimental Centre at the Faculty of Civil Engineering, Czech Technical University in Prague. Three slabs reinforced with prestressed GFRP bars were tested in a four-point bending test. The slabs were 4.5 m long with a clear span of 4 m. The loading points were approximately at the thirds of the span. The setup of the experiment is shown in Figure 1 and the cross section of the slab is shown in Figure 2.
Figure 1: Slab dimensions.

Figure 2: Cross-section.
Each slab was reinforced with four GFRP bars with a diameter of 14 mm. The bars were pretensioned to a stress of 215 MPa. This stress level corresponds to one third of the ultimate tensile strength of the bar as provided by the producer. At both ends of the slab three stirrups (at 50 mm centres) were inserted to provide reinforcement against the lateral stresses induced by the pre-tensioned bars. As can be seen in Table 1, the concrete used to cast the slabs had an average compressive strength of approximately 36 MPa; the concrete cover was 43 mm.

3.1 Pretensioning procedure

The pretensioning procedure was relatively simple. A wooden mould was constructed on the floor of the laboratory and both the stirrups and the GFRP tendons were inserted inside. The GFRP tendons were 6 m long with resin-filled anchors at both ends. The anchors on one side of the mould were inserted into a fixed holder (Fig. 3) and on the other side the anchors were gripped in a specially developed jacking mechanism. This mechanism consisted of an anchor holder and a steel screw bar. During the pre-tensioning procedure a load transducer cell was attached to the
end of the screw bar, and the bar was tightened with a torque wrench. The jacking mechanism used was similar to that presented in Fig. 3 (note that Fig. 3 shows a different experiment with 5 CFRP bars). When the tensile force measured by the load transducer reached 33 kN, which corresponds to a stress of 215 MPa in the GFRP bar, the pretensioning procedure was finished. After that the screw bar was fixed in position by a nut.
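The quoted jacking force is consistent with the bar geometry: a 14 mm bar stressed to 215 MPa carries roughly 33 kN. A quick check:

```python
import math

# Prestressing force in one GFRP tendon: F = sigma * A
d_mm = 14.0          # bar diameter from the text
sigma_MPa = 215.0    # pretensioning stress (about one third of 650 MPa)

area_mm2 = math.pi * (d_mm / 2.0) ** 2      # cross-sectional area, ~153.9 mm^2
force_kN = sigma_MPa * area_mm2 / 1000.0    # MPa * mm^2 = N, so divide by 1000

print(round(force_kN, 1))  # ~33.1 kN, matching the 33 kN jacking force
```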
Figure 3: The passive end holder (on the right) and the jacking mechanism (on the left).
3.2 Loading

After the GFRP tendons were pretensioned, concrete was cast in the moulds and vibrated. Each slab was properly cured by moisturising its surface until testing. Twenty-eight days after casting, the slabs were tested in four-point bending. At the beginning of the experimental procedure each slab was loaded ten times to 30% of its calculated ultimate strength; this load level approximately corresponded to the SLS. After the ten cycles, the loading force was increased in constant force increments until failure. Deflections were measured with LVDTs at the middle of the span and under the loading points. Strain gauges were attached at the bottom and top of the slab.
4 Numerical model

To be able to simulate similar tests in the future and to successfully predict the behaviour of GFRP reinforced members, a numerical model was created in the ATENA 3D software. This software is developed commercially by Cervenka Consulting Ltd.; version 3.3.2 was used in this research programme. ATENA is a three-dimensional nonlinear finite element program developed primarily for reinforced concrete design. The calculation was performed using the full Newton-Raphson method, which was selected to match the experimental loading procedure. The basic principle of this method is to increase the loading force in small increments, since the problem is generally nonlinear. The stiffness matrix is recalculated in every step and after each iteration. Additionally, a line search can be added to the Newton-Raphson method in order to speed up the procedure when negligible nonlinearity is expected; this helps to decrease the number of iterations in every step significantly.
Table 3: Equations for the SBETA model.
4.1 Concrete

The material characteristics of the concrete were determined according to the constitutive SBETA model (CCSbetaMaterial) implemented in ATENA 3D [6]. The parameters were estimated from the ultimate cubic strength.

4.2 GFRP

A series of experiments [7] proved that linear-elastic behaviour is typical for glass based composites. Therefore a linear-elastic model was chosen, with the characteristics shown in Table 2.

4.3 Reinforcement bond model

The basic property of the reinforcement bond model is the bond-slip relationship. This relationship defines the bond strength as a function of the current slip between the reinforcement and the surrounding concrete, the so-called bond-slip law [8]. GFRP reinforcement is distinctive in its surface coating, and therefore a special bond model was developed in order to describe the concrete-GFRP interaction. During previous experimental work [8] the slip of sand-coated bars was measured. Similar bars were used in this experiment, and therefore the slip values between bar and concrete (Fig. 5) were used in the ATENA reinforcement model.

4.4 FE mesh

The size and type of the finite element mesh have very important implications for the numerical result. For this particular design, linear elements were used with nodes only at their corners. The elements were cubic with a dimension of
Figure 4: Stress-strain diagram for concrete.

Figure 5: Bond-slip law (bond stress in MPa, with measured values of 5.029, 9.702 and 12.353 MPa, versus slip in mm up to 0.3 mm).

Figure 6: Finite element brick model with nodes only at the corners.
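A bond-slip law of the kind shown in Fig. 5 can be represented as a piecewise-linear interpolation table mapping slip to bond stress. A sketch with hypothetical breakpoints (not the measured values of Fig. 5):

```python
import bisect

# Piecewise-linear bond-slip law: bond stress (MPa) as a function of slip (mm).
# The (slip, stress) breakpoints below are hypothetical, for illustration only.
BREAKPOINTS = [(0.0, 0.0), (0.05, 5.0), (0.15, 10.0), (0.3, 12.5)]

def bond_stress(slip):
    """Linearly interpolate bond stress; constant beyond the last breakpoint."""
    slips = [s for s, _ in BREAKPOINTS]
    if slip <= slips[0]:
        return BREAKPOINTS[0][1]
    if slip >= slips[-1]:
        return BREAKPOINTS[-1][1]
    i = bisect.bisect_right(slips, slip)
    (s0, t0), (s1, t1) = BREAKPOINTS[i - 1], BREAKPOINTS[i]
    return t0 + (t1 - t0) * (slip - s0) / (s1 - s0)

print(bond_stress(0.1))  # interpolated between 5.0 and 10.0 MPa
```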
50 mm; such elements are referred to as bricks (Fig. 6). The mesh was generated using the mesh generator T3D, which is integrated into the ATENA software. Moreover, contact elements were implemented in order to model the interface between the steel plates and the concrete slab. The steel plates were used to distribute the load across the entire width of the concrete slab.

4.5 Basic equations

As mentioned before, the solution method is nonlinear and the standard Newton-Raphson method is used. The principle of this method lies in solving a set of nonlinear equations, which can be written as:
K(p) Δp = q - f(p),   (1)

where q is the vector of total applied joint loads, f(p) is the vector of internal joint forces, Δp is the deformation increment due to the loading increment, p are the deformations of the structure prior to the load increment, and K(p) is the stiffness matrix, relating loading increments to deformation increments.
Figure 7: Full Newton-Raphson method.
The right-hand side of eqn. (1) represents the out-of-balance forces during a load increment. In general the stiffness matrix is deformation dependent, i.e. a function of p, but this is usually neglected within a load increment in order to preserve linearity; in this case the stiffness matrix is calculated from the value of p pertaining to the level prior to the load increment. The set of equations (1) is nonlinear because of the nonlinear properties of the internal forces:

f(kp) ≠ k f(p),   (2)

and because of the nonlinearity of the stiffness matrix:

K(p) ≠ K(p + Δp),   (3)

where k is an arbitrary constant. The concept of solving a nonlinear set of equations by the full Newton-Raphson method is depicted in Figure 7.
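The incremental scheme of eqn. (1) can be sketched for a scalar case. The hardening-spring internal force used here is purely illustrative and is not the SBETA model:

```python
# Minimal sketch of the incremental full Newton-Raphson scheme of eqn (1),
# K(p) * dp = q - f(p), for a hypothetical scalar hardening spring
# f(p) = k0*p + c*p**3 (illustrative only).

def internal_force(p, k0=100.0, c=50.0):
    return k0 * p + c * p**3

def tangent_stiffness(p, k0=100.0, c=50.0):
    return k0 + 3.0 * c * p**2  # K(p), recomputed every iteration

def solve(q_total=250.0, n_increments=10, tol=1e-10):
    p = 0.0
    for step in range(1, n_increments + 1):
        q = q_total * step / n_increments       # load applied in small increments
        for _ in range(50):                     # Newton iterations within the step
            r = q - internal_force(p)           # out-of-balance force
            if abs(r) < tol:
                break
            p += r / tangent_stiffness(p)       # dp = r / K(p)
    return p

p = solve()
print(p, internal_force(p))  # the internal force balances the total load
```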
5 Results

The stress-strain diagram of concrete members reinforced with GFRP bars is bilinear. The first, so-called un-cracked elastic part corresponds to small increments of deflection under a rising bending moment. The second linear part is called cracked-elastic; there it is assumed that the tensile stress is carried solely by the reinforcement. Because the GFRP bar behaviour is linear-elastic, the second part of the stress-strain diagram of the slab is also linear-elastic. As rather large deflections were observed during the experimental procedure (Figs. 8 and 9), it is evident that the serviceability limit state will be the limiting factor when designing concrete members with GFRP reinforcement. Prestressing of GFRP tendons exploits the high tensile strength of GFRP and performs significantly better in terms of deflection and crack development compared to non-prestressed elements. Figure 8 shows the crack widths and the stress field in the numerical model of the slab at collapse, as well as the failure of the slab during the experimental part of the work. The GFRP bars were pre-stressed to 30% of their ultimate strength according to the ACI 440.4 [10] recommendations, which corresponds to a force of 33 kN in every tendon.
Figure 8: Stress field and crack formation at failure (numerical model) and slab failure (experimental model).
The correlation coefficient confirmed a very good agreement between the numerical model and the real structure, as shown in Fig. 10. It is therefore possible to predict the behaviour of concrete structures with an altered shape or even a slightly different FRP reinforcement type. However, it is important to pay attention to the input values and material relations inserted into the numerical model, as the model is very sensitive.
Figure 9: Load-deflection diagram.

Figure 10: Relationship between numerical and experimental values on a logarithmic scale.

6 Conclusions and further research
A high strength-to-weight ratio and corrosion resistance are characteristics that make GFRP an exceptional structural material. On the other hand, its low modulus of elasticity causes larger deflections and early crack development. Prestressing significantly enhances the behaviour of concrete members with GFRP reinforcement: specifically, prestressing decreases deformations and therefore prevents early crack development. By prestressing, the GFRP reinforcement is better utilized in tension, and therefore the use of prestressed GFRP is more economical. The description of the behaviour of concrete members reinforced with FRP tendons should provide guidance for engineers and designers when using this new type of reinforcement; at present there is no design guide or code for FRP structures in the Czech Republic. A basic understanding of GFRP application in concrete prevents eventual problems and summarizes what to expect when dealing with GFRP as internal reinforcement in concrete structures. A numerical predictive model has been developed based on the experimental outcome in order to evaluate the behaviour of other concrete members with different shapes or even slightly different FRP reinforcement. Concrete structures for civil purposes are usually designed for a fifty-year lifespan, bridges even for a hundred. This report is part of a research programme that has shown so far how concrete structures reinforced with FRP bars behave under load. Nevertheless, the long-term behaviour of such structures is still not well described and more research is needed in this area. The researchers suggest conducting experimental work focused on permanently loaded elements as well as high-cycle fatigue tests. These will be the next tasks in the field of fibre reinforced polymers.
Acknowledgement

This research has been supported by the Ministry of Education of the Czech Republic under project no. MSM6840770031.
References
[1] Mosley, C.P., Tureyen, A.K. and Frosch, R.J., (2008), Bond strength of nonmetallic reinforcing bars, ACI Structural Journal, Vol. 105, no. 5.
[2] Procházka, J., Bradáč, J., Krátký, J., Filipová, J. and Hanzlová, H., (2003), Concrete Structures: Design according to EUROCODE 2.
[3] Burke, C.R. and Dolan, C.W., (2001), Flexural design of prestressed concrete beams using FRP tendons, PCI Journal, 46, pp. 76-87.
[4] El-Hacha, R., (2005), Prestressing concrete structures with FRP tendons (ACI 440.4R-04), pp. 1635-1642.
[5] Michna, Š., (2006), Composite Materials.
[6] Červenka, J., (2005), ATENA Program Documentation: Tutorial for Program ATENA 3D [online], [seen 2008-05-13]. Available at: http://www.cervenka.cz/papers
[7] Sovják, R., Máca, P., Fornůsek, J., Konvalinka, P. and Vítek, J., (2009), Determination of material characteristics of GFRP bar from nano to macro level, In: Workshop 2009, CTU in Prague.
[8] Fornůsek, J., (2007), Bc. Thesis: Bond Between GFRP Bars and Concrete Depended on Bar's Surface.
[9] Sovják, R., Konvalinka, P. and Vítek, J., (2008), Concrete structures with nonmetallic reinforcement, In: Contributions to numerical and experimental investigation of building materials and structures, vol. 1, pp. 73-76.
[10] ACI 440.4R-03, (2003), Prestressing concrete structures with FRP tendons, American Concrete Institute, Farmington Hills, Mich.
[11] Červenka, V. and Jendele, L., (2007), ATENA Theory, Part 1.
Measures in underground works: a method to determine the mathematical relations that predict rock behaviour
S. Torno, J. Velasco, I. Diego, J. Toraño, M. Menéndez, M. Gent & J. Roldán
School of Mines, Oviedo University, Asturias, Spain
Abstract
The measurement of perforation parameters was introduced in geological prospecting and in the mining, oil and gas industries, and has become an efficient tool for identifying in detail and predicting geological and geomechanical characteristics while drilling is being carried out. The field measurements and the calibration requirements of the tools and equipment used constitute the main obstacle in establishing prediction models. Analogously, in underground excavations, continuous knowledge of the geological and geomechanical properties of the rock mass in front of the excavation face during the operations is of great importance for optimum design and for planning the work carried out by the equipment and machinery. Through intensive measurement campaigns, the present work presents the mathematical relations obtained in tunnel boring, both by percussion perforation for blasting with a jumbo and by a tunnel boring machine (TBM). For percussion perforation blasting, parameters such as the uniaxial compressive strength, the specific cut energy and the destruction energy were used to establish the models and to adjust the drilling parameter recorder (DPR) tools. For the TBM, taking into account the large number of variables that intervene in the tunnel boring process, and based on easily obtained parameters such as the already defined specific cut energy and the penetration index, we have reached a relation between the latter parameters and the rock mass rating index (RMR), which allows the geological exploration carried out previously to the project to be adjusted and subsequently used for prediction.
Keywords: tunnel boring machine, jumbo, mathematical relation, drilling parameter recorder.
doi:10.2495/CMEM090091
1 Introduction
The perforation parameter measurements are an efficient tool, allowing one to identify in detail and predict the geological and geomechanical characteristics while drilling is being carried out. The field measurements and the calibration requirements of the tools and equipment used constitute the main obstacle to obtaining prediction models.
Figure 1: Jumbo used in percussion perforation.

Figure 2: Jumbo machine detail.
In this research, the mathematical relations obtained from tunnel excavations, considering percussion perforation for blasting by a jumbo (Fig. 1 and Fig. 2) and excavation by a tunnel boring machine (TBM) (Fig. 3), are presented.
These mathematical relations are useful, in the percussion perforation case, to adjust the drilling parameter recorder (DPR) tools, and in the case of TBM to predict the rock mass geomechanical index (RMR).
Figure 3: Disc cutters of a tunnel boring machine.
Teale [1] introduced the concept of specific energy as the energy necessary to excavate a unit volume of rock, and established that it can be used as an index of the mechanical properties of the rock mass. The correlation between the perforation parameters and the geological characteristics of the rock mass in drill hole blasting in coal mining was tested by Scoble et al. [2], who demonstrated the relation between the penetration velocity variation, the rotation torque variation, the rotation velocity, the specific cut energy and the rock characteristics. Schunnesson [3] established that the correlation between the parameters obtained from monitoring the perforation equipment and the rock characteristics is not direct: the correlation may not occur when the perforation crosses broken or atypical rock, and external factors such as the tricone bit and the different parts of the drill string can also influence the perforation parameters. Turtola [4], through the geological interpretation of DPR data obtained from rotation perforation in drill hole blasting, identified the main rock types based on the penetration velocity variation. Likewise, Mozaffari [5], using DPR together with an image analysis system, concluded that the measurements of penetration velocity, rotation torque and specific energy while boring provide relevant information about the mechanical properties of the rock mass. We have come to the conclusion that DPR are systems in which calibration 'in situ', together with other relations between variables, is needed to obtain a good interpretation. In tunnel boring by TBM, it is difficult to predict the geomechanical conditions of the rock mass in front of the machine, owing to the observation difficulties and the large number of variables which control the excavation process. Methods for predicting the geomechanical properties of the rock mass have been developed from machine operation parameters and geological-geotechnical tunnel profiles (Rostami et al. [6] and Ozdemir [7]).
2 Underground excavations. Mathematical models
2.1 Parameters in percussion perforation and TBM excavation

Teale [1] indicated that the work carried out in breaking each unit volume of rock is related to the uniaxial compressive strength of that rock. Further research in this field has been carried out by Mellor [8], Reddish and Yasar [9], Ersoy [10] and others. Two types of specific energy can be distinguished in rotation perforation: the energy required to remove a unit volume of rock during rotation perforation (SEv), Teale [1], and the energy required to generate a new surface area (SEa), Paithankar and Misra [11]. The specific energy is an index of the mechanical efficiency of cutting and can be considered as the sum of the thrust (pressure) energy et and the rotation energy er:
et = F/A   [kJ/m3]   (1)

er = 2πNT/(AV)   [kJ/m3]   (2)

where F is the contact pressure (kN), A the excavated section (m2), N the cutterhead rotation velocity (rpm) and T the cutterhead rotation torque (kN·m). If p denotes the penetration per revolution, the total specific energy becomes:

SE = F/A + 2πT/(Ap)   [kJ/m3]   (3)

The ratio T/p is the rotation torque necessary to drill a rock length p in one revolution; it is therefore considered a useful specific energy indicator. The specific destruction work Wz [kJ/m3] is a measure of the energy required for destruction, i.e. for the creation of new surfaces or cracks in the rock, and it allows the comparison of different rock materials. In Fig. 4, Young's modulus corresponds to the slope of the linear part of the curve from the start of loading to the breaking point, and the area under the stress-strain curve is the specific destruction work. Thuro [12] verified, by comparing the penetration velocities of different materials with their corresponding specific destruction work, that the specific work is a parameter which correlates well with the perforation velocity. The relation between the uniaxial compressive strength and the specific cut energy in various rock types was analyzed by Reddish and Yasar [9], who proposed the following equation relating the specific energy (SE) to the rock uniaxial compressive strength:

SE [MJ/m3] = 9.927 (UCS [MPa]) - 73.71   (4)

In tunnel and mining gallery excavation by TBM, we emphasize the empirical prediction methods of the Norwegian Institute of Technology (NTH), Rostami et al. [6], the mathematical methods of the Colorado School of Mines (CSM), Ozdemir [7], the net penetration parameter P (ratio between the average advance V and the rotation velocity ω):

P (mm/rev) = V (mm/min) / ω (rpm)   (5)

and the penetration index Ip, which represents the thrust Sc that must be transmitted to a cutter to penetrate 1 mm per revolution, and is also used for the indirect detection of rock mass quality variations:

Ip (kN/mm) = Sc (kN) / P (mm)   (6)

Based on our experience in underground works, and later as researchers in TBM tunnel execution, we have contributed mathematical relations which can help in setting up prediction methods and systems.
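The quantities of eqns (1)-(6) can be evaluated directly from a drilling record. The following sketch is not from the paper; the input values are hypothetical and serve only to illustrate the formulas:

```python
import math

def specific_energy(F_kN, A_m2, N_rpm, T_kNm, V_m_per_min):
    """Total specific energy, eqn (3) = eqn (1) + eqn (2), in kJ/m3."""
    e_t = F_kN / A_m2                                          # thrust term, eqn (1)
    e_r = 2 * math.pi * N_rpm * T_kNm / (A_m2 * V_m_per_min)   # rotation term, eqn (2)
    return e_t + e_r

def se_from_ucs(ucs_mpa):
    """Reddish and Yasar correlation, eqn (4): SE [MJ/m3] from UCS [MPa]."""
    return 9.927 * ucs_mpa - 73.71

def penetration_index(Sc_kN, P_mm):
    """Penetration index Ip, eqn (6), in kN/mm."""
    return Sc_kN / P_mm

# hypothetical drilling record: 100 kN thrust over a 0.01 m2 section,
# 120 rpm, 5 kN·m torque, 2 m/min net advance, 250 kN thrust per cutter
se = specific_energy(100.0, 0.01, 120.0, 5.0, 2.0)
P = 2000.0 / 120.0                 # eqn (5): V (mm/min) / ω (rpm)
Ip = penetration_index(250.0, P)
```

Note that with N in rev/min, T in kN·m, A in m2 and V in m/min, the factor 2π (rad/rev) makes the rotation term come out directly in kJ/m3.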
Figure 4: Wz, the specific destruction work, estimated from the stress-strain curve of a rock sample under unconfined compression.
2.2 Mathematical relations in percussion perforation

During the excavation phase of the Cabrejas tunnel in the Guadarrama mountain chain (Spain), rock geomechanical characteristics related to boreability that differed from the ones predicted in the project were observed. For this reason it was sometimes necessary to change the planned excavation system, for example turning from mechanical excavation to excavation by explosives. In order to avoid this, and to establish a prediction model giving knowledge of the rock type in front of the excavation, a follow-up and control of the rock resistance during the excavation was carried out by normalized trials, such as the uniaxial compressive strength of rock samples. In Fig. 5 the unconfined uniaxial compressive strength values obtained with the Franklin test, in a total of 460 trials, are shown.
Figure 5: Unconfined uniaxial compressive strength by the Franklin test.

Figure 6: MWD (Measurement While Drilling) graphic record of a 4 m drill hole.
In this tunnel perforation, a jumbo equipped with a data register system (DPR) was used, which records (Fig. 6) the advancing depth (mm), the penetration velocity (dm/min) (PR), the percussion pressure (HP), the advance pressure (FP) and the damping pressure (bar) (DP), the rotation velocity (rpm) (RS), the rotation pressure or force (bar) (RP), and both the water flow (l/min) (WF) and pressure (bar) (WP).

2.3 Mathematical relations in TBM excavation

Taking into account that in the tunnel boring machine case the excavation is carried out by penetration, the specific rotation energy can be related to the RMR or Bieniawski index. Fig. 7 shows the correlation between the two indicated parameters and the application of this correlation to new excavation fronts, based on a campaign of 14,200 measurements in a tunnel executed in the Guadarrama mountain chain, in Spain [13]. The tunnel is excavated in both plutonic and metamorphic rocks. The TBM has a cutting power of 3,500 kW, a torque of 14,216 kNm, 53 simple discs and 4 double discs, all 17 inches in diameter. Important relations between the specific rotation energy and the RMR geomechanical index (Fig. 8) were achieved from the data and the relations obtained.
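The correlation underlying Figs. 7 and 8 is, in essence, a relation fitted to logged (energy, RMR) pairs. A minimal sketch of how such a calibration can be built with ordinary least squares is shown below; the data points are invented placeholders, not the 14,200 field measurements:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# placeholder (energy, RMR) pairs standing in for logged TBM data
energy = [10.0, 20.0, 30.0, 40.0]   # specific rotation energy, arbitrary units
rmr = [35.0, 45.0, 55.0, 65.0]      # Bieniawski RMR

a, b = fit_line(energy, rmr)
rmr_predicted = a * 25.0 + b        # prediction for a new excavation front
```

Once calibrated on field measurements, such a relation lets the machine parameters recorded at the face be turned into a running RMR estimate ahead of direct observation.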
Figure 7: Relation between rotation energy and RMR.

3 Conclusions
A very important aspect in tunnel excavation is to predict the geological and geomechanical characteristics in front of the tunnel, particularly in sections where the rock formation is expected to change considerably, allowing adequate measures to be taken before reaching problematic rock mass formations. The prediction models, which are the basis of the automatic systems to be implemented in the perforation equipment, can be estimated through traditional mathematics. In this methodology, model adjustments through wide measurement campaigns are fundamental. The perforability parameters recorded by the DPR tools, and their adjustment by the corresponding measurement campaigns, allow the rock mass behaviour to be predicted and the perforation parameters to be optimized. The specific cut energy, the uniaxial compressive strength and the specific destruction energy play an important role in this prediction methodology.

Figure 8: Relation between specific rotation energy and RMR index.
References
[1] Teale, R., The concept of specific energy in rock drilling. Int. J. Rock Mech. Sci., Vol. 2, pp. 57-73, 1965.
[2] Scoble, M.J., Peck, J. & Hendricks, C., Correlation between rotary drill performance parameters and borehole geophysical logging. Mining Science and Technology, Vol. 8, pp. 301-312, 1989.
[3] Schunnesson, H., Drill process monitoring in percussive drilling for location of structural features, lithological boundaries and rock properties, and for drill productivity evaluation. Thesis (PhD), Lulea University of Technology, Sweden, 1997.
[4] Turtola, H., Utilization of measurement while drilling to optimize blasting in large open pit mining. Thesis (PhD), Lulea University of Technology, Sweden, 2001.
[5] Mozaffari, S., Measurement while drilling system in Aitik Mine. Thesis (PhD), School of Applied Geosciences and Mining, Lulea University of Technology, Sweden, 2007.
[6] Rostami, J., Ozdemir, L. & Ilso, B., Comparison between CSM and NTH hard rock TBM performance prediction models. Proc. of the Annual Technical Meeting of the Institute of Shaft Drilling Technology, Las Vegas, 1996.
[7] Ozdemir, L., CSM computer model for TBM performance prediction, Colorado School of Mines, 2003.
[8] Mellor, M., Normalization of specific energy. Int. J. Rock Mech. Sci., Vol. 9, pp. 661-663, 1972.
[9] Reddish, D.J. & Yasar, E., A new portable rock strength index test based on specific energy of drilling. Int. J. Rock Mech. Sci., Vol. 33, no. 5, pp. 543-548, 1996.
[10] Ersoy, A., Automatic drilling control based on minimum drilling specific energy using PDC and WC bits. Mining Technology (Trans. Inst. Min. Metall. A), Vol. 112, pp. 86-96, 2003.
[11] Paithankar, A.G. & Misra, G.B., Critical appraisal of the Protodyakonov index. Int. J. Rock Mech. Sci., Vol. 13, pp. 249-251, 1976.
[12] Thuro, K., Drillability prediction: geological influences in hard rock drill and blast tunnelling. Geol. Rundsch., Vol. 86, pp. 426-438, 1997.
[13] Suarez Tardáguila, J.L., Metodología para el seguimiento y control del terreno en el interior de los túneles de Guadarrama. Ingeotúneles, libro 12 (Capítulo 16), ed. Entorno Gráfico, pp. 329-360, 2007.
Implementation and validation of a strain rate dependent model for carbon foam G. Janszen & P. G. Nettuno Politecnico di Milano, Dipartimento di Ingegneria Aerospaziale, Italy
Abstract
Carbon foam showed good ballistic performance for relatively small fragment impacts: low density samples (0.56 g/cm3 and 0.24 g/cm3) were able to stop, and in some cases hold, a 5 mm diameter stainless steel sphere shot at speeds up to 240 m/s by a compressed air gun. The results were used to calibrate and benchmark an Ls-Dyna model which had to be based only on a few easy-to-measure material parameters. Therefore, performing only static compressive loading characterization tests, a suitable cellular Ls-Dyna material model was chosen. To justify the promising energy dissipation results, which cannot be due to the static performance alone, a strain rate dependency was assumed. Since ceramic materials have inhomogeneities of the same size as the foam pores, a strain rate law typical of ceramics was applied. Similar relations were applied to both foams, and the calibration coefficient was determined from a test at a single impact velocity. The same model was then used to reproduce the impact at different impact velocities, and very good agreement between experimental results and simulation was achieved.
Keywords: carbon foam, ballistic impact, strain-rate effect.
1 Introduction
doi:10.2495/CMEM090101

Foams, both natural and engineered, have been widely used in very different fields for a long time. The applications vary from packaging and shock mitigation to lightweight structures for the automotive and aerospace fields, and comprehensive references covering their general characteristics and fields of application are available [1]. However, carbon foams represent a relatively novel group of materials for which a widely available characterization database is not easily accessible. The reasons to investigate them are justified not only by the need to cover this lack of knowledge, but also by the chance to use them in different applications. They can be used as catalysts and electrodes, while the high surface area combined with the nature of pure glassy carbon makes carbon foam well suited for filtration systems and fuel cells [2]. No specific application suggestion or example is available for carbon foam as a structural component: there are just a few characterization investigations for various configurations, moving from simple sandwich panels used in quasi-static tests [3] to studies about hypervelocity impact protection for space vehicles [4]. Moreover, the similar coefficient of thermal expansion suggests the use of carbon foams in carbon composite sandwich panels. Based on these reasons, and on the fact that cellular structures usually allow better energy dissipation performance with respect to the fully dense solid [1], a complete investigation program has been developed, aimed at understanding and modelling the mechanical behaviour of these materials in the quasi-static and high strain-rate regimes. The ballistic tests showed that, in spite of their limited fracture toughness, these foams were able to dissipate the kinetic energy of relatively small spherical stainless steel bullets in an efficient way. One interesting feature was that, even if the foams showed a quasi-static fragile behaviour, the dimensions of the damage after the high speed impacts were limited. The crushing mechanism involved at the micromechanical scale can provide for bullet entrapment, passing through impact velocity reduction and rebound control. This is of particular interest for all those protective applications in which not only is target trespassing considered fatal, but protection from bullet rebound also prevents catastrophic failures.
In this paper we present an overview of the low strain rate characterization campaign performed on two carbon foams, with particular interest in the confined compression condition which leads to uniaxial strain. There was no chance to test and measure the dynamic properties of the materials under high strain rate loading conditions; therefore a phenomenological relationship between compressive strength σ and strain rate ε̇ [5] was used to model the material behaviour, and was then implemented in an Ls-Dyna material model to simulate the ballistic experiments. This relationship is generally used for ceramics and is based on their micro-crack evolution; the micro-cracks have sizes comparable with the cell dimensions in the tested carbon foams. It is reported in eqn. (1).
σ = σc0 + B · ε̇^N   (1)

The exponent N is 1/3, which is a common value for these materials.
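As a numerical illustration of eqn (1), the strengthening at high strain rate can be evaluated as follows; the σc0 and B values here are placeholders, not the calibrated foam parameters:

```python
def dynamic_strength(sigma_c0, B, strain_rate, N=1.0/3.0):
    """Strain-rate-enhanced compressive strength, eqn (1): sigma = sigma_c0 + B * rate**N."""
    return sigma_c0 + B * strain_rate ** N

# placeholder values: quasi-static strength 10 (stress units), B = 0.28,
# strain rate 1000 1/s; since 1000**(1/3) = 10, the strength rises by B * 10
sigma = dynamic_strength(10.0, 0.28, 1000.0)
```

With N = 1/3, a thousand-fold increase in strain rate multiplies the rate-dependent term by ten, which is what makes the dynamic response depart markedly from the static curves.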
2 Quasi-static mechanical characterization
A complete quasi-static experimental measurement campaign was performed to better understand the material behaviour and its characteristics. In the following subsections an overview of the results is presented.
2.1 Uniaxial compression

Typical measurements with an MTS machine under a mono-axial compressive stress condition showed a highly brittle mechanical behaviour of both investigated foams. During the tests, performed in accordance with the norm [7] at a speed of 0.25 mm/min, the specimens reached ultimate failure at very low deformation values, showing a sudden loss in load bearing capability at strain values of about 1.5%. The cracks evolved in a nearly instantaneous and unstable manner, in the form of large and long separations along the load application axis. Examples of specimen failure and the stress-strain curve are reported in Figure 1.
Figure 1: Example of cylindrical specimen failure under 1-D stress loading condition and the corresponding stress-strain curve (240 kg/m3 foam).
Figure 2: Stress-strain curves measured under lateral confinement for uniaxial strain loading conditions (560 kg/m3 and 240 kg/m3 foams).
2.2 Confined compression

Confined compression experiments demonstrated a dramatic change in material behaviour, mostly a wide continuation of the load capability up to very large strains. As can be noticed in Figure 2, instead of a brittle nature with a sudden load decay, the lateral confinement restricted the evolution of the cracks in the specimen. This is reflected in a three-zone stress-strain curve. The first, elastic portion mainly coincides with the unconfined results. The second stage is, for the less dense foam, basically a wide constant stress plateau, while the denser foam still exhibits a noticeable stress increment; this part of the curve is what is referred to as the brittle crushing plateau [1], and it is the main mechanism providing energy dissipation and shock mitigation. The last portion corresponds to the densification strain, the limit at which the porosity of the material has been filled by the broken fragments and ligaments that constituted the cellular structure of the foam. This stage, which occurs at strain values of 0.8 and 0.5 for the less and more dense foam respectively, corresponds to carbon dust compression, with a steep rise in the stress value.
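The three-zone shape described above can be idealised as a piecewise curve. The sketch below is illustrative only, with placeholder moduli, crushing stress and densification slope rather than measured values:

```python
def confined_stress(strain, E=1.0e9, sigma_cr=1.0e7, eps_dens=0.8, k_dens=5.0e8):
    """Idealised confined response: elastic rise, flat crushing plateau, densification."""
    eps_elastic = sigma_cr / E                          # end of the elastic zone
    if strain <= eps_elastic:
        return E * strain                               # zone 1: elastic
    if strain <= eps_dens:
        return sigma_cr                                 # zone 2: brittle crushing plateau
    return sigma_cr + k_dens * (strain - eps_dens)      # zone 3: densification

stresses = [confined_stress(s) for s in (0.005, 0.4, 0.9)]
```

A flat plateau matches the less dense foam; for the denser foam, zone 2 would carry a positive slope instead of a constant value.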
3 Computational analyses
The simulation of the ballistic impact was computed using LS-Dyna 971 r. 3.2.1 in double precision mode. This is a commercial state-of-the-art code, derived from the public software DYNA 3D developed by the Lawrence Livermore National Laboratories in the seventies, and is particularly suited for modelling transient nonlinear dynamic mechanical interactions such as impacts, crashes and penetrations. Model development followed an iterative revision process as the understanding of the experimental evidence progressed. An overview of the material models chosen to describe the dynamic response of the involved materials is presented in Section 3.2 and its subsections.

3.1 Problem definition

The impact problem analyzed in the present work consists of a spherical steel impactor with a nominal diameter of 5 mm and a mass of 0.5 grams, colliding at different speeds with a solid block of carbon foam. The velocity values were in accordance with the experiments (240 m/s, 210 m/s and 75 m/s) and perpendicular to the foam surface. The carbon foam surface hit is a square with a side of 50 mm, while the depth of the block is 40 mm. The bullet impacts the carbon foam at the central point of the top face: a condition close to reality but not always verified in the ballistic experiments. The whole model is composed of 8-node hexahedral solids with a single integration point. Different meshes were used to model the domain, both for the target and for the impactor, allowing various combinations of refinement of the two model parts.
The second refers to the extension, application and WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line)
Computational Methods and Experimental Measurements XIV
109
calibration of the micromechanical models relating compressive strength and high strain rate commonly used for ceramics. These latter ones exhibit a brittle compressive behaviour similar to the carbon foam under quasi static compression. In addition to this, they show an evident and strong raise of the compressive strength in case of dynamic loading process. This property, which is generically common for the foams too, justifies the measured ballistic performances. It is worth to notice that the approach of considering the compressive confined behaviour of similar materials involved in ballistic impact has been already used [6].
Figure 3: Finite element model of the ballistic experiments presented in the present work.
3.2.1 Steel bullet

Considering that the spherical bullet did not undergo any significant deformation due to the impact with the target, and that its elastic modulus is two orders of magnitude higher than that of the foams, a simple model was used for it in order to limit computational costs. The characteristics were those of AISI 4130 steel as reported by MIL-HDBK-5H, 1/12/98 (Table 1).

Table 1: Bullet material data (AISI 4130 steel, MIL-HDBK-5H, 1/12/98).

Density [kg/m3]: 7833
E [GPa]: 200
G [GPa]: 76

The material model is number 1 (MAT_ELASTIC) in the Ls-Dyna library [8], used for isotropic elastic materials and needing only the density, Young's modulus and Poisson ratio.

3.2.2 Carbon foam target

The procedure to identify the correct material relied first on the definition of a material model in the Ls-Dyna library which could include the peculiarities shown by the foams in the experimental campaign. These are represented by stress-strain curves input by points, and different behaviours depending on the instantaneous strain-rate value. Referring to the measured static confined compression curves, different stress-strain relationships, varying the B parameter of eqn. (1), were computed covering a range of strain rate values. Once a good agreement between the experimental damage in the target due to the impact and the value resulting from the analysis with the standard mesh was reached, the B parameter was fixed and the computations for the other impact speeds were performed.
Figure 4: Stress-strain curves used in the computations for the 240 kg/m3 foam.

Figure 5: Stress-strain curves used in the computations for the 560 kg/m3 foam.
Extensive considerations and trials led to material type 83, Mat Fu Chang Foam. Interesting applications of this model can be found in the automotive [9] and aerospace [10] fields. To include the fragile behaviour, material 83 has been completed by an erosion criterion in the form of a limit on the maximum principal strain, corresponding to the value identified as the densification strain. In fact, during the experimental tests, once the maximum strain was reached, the foams were transformed completely into dust. At this stage we considered that this carbon dust was free to evacuate behind the bullet and/or through the pores of the target. Therefore, once the maximum principal strain reaches the limit, the corresponding element is deleted and a new contact surface is defined by the chosen "contact eroding surface to surface" algorithm.

3.2.2.1 Low density carbon foam target. For this foam the parameter B was found to be 0.28, meaning that the stress-strain curves used for compression were the ones reported in Figure 4. The erosion value for the principal strain was 0.8.

3.2.2.2 High density carbon foam target. For this foam the parameter B was found to be 1, leading to the stress-strain curves represented in Figure 5. The limit for erosion for the principal strain is 0.5.
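The erosion step can be sketched as simple element bookkeeping (hypothetical data structures, not Ls-Dyna internals): elements whose maximum principal strain reaches the densification limit are deleted and drop out of the contact definition.

```python
def erode(max_principal_strain, eps_limit):
    """Return the ids of surviving elements; the others are eroded (deleted)."""
    return [eid for eid, eps in max_principal_strain.items() if eps < eps_limit]

# three hypothetical elements; with the low-density limit of 0.8, element 1 is eroded
alive = erode({0: 0.20, 1: 0.85, 2: 0.50}, eps_limit=0.8)
```

After each erosion pass the solver rebuilds the contact surface from the surviving elements, which is what the "contact eroding surface to surface" algorithm provides.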
Figure 6: The model (right) correctly represented the uniaxial strain that resulted in the ballistic experiment.

Figure 7: Comparison between test sample and simulation for the impact against the light foam at 240 m/s.
4 Validation results and discussion
The type and extent of the damage were very well represented by the simulation, which caught the uniaxial deformation process highlighted by the ballistic experiments. The shape and size of the holes were always close to those of the bullet, without any significant propagation, although the fragile nature of the material could potentially involve cracking instabilities. As represented in Figures 6, 7 and 8, the macro aspects of the impacts were respected for both foam densities.
Figure 8: Comparison between test sample and simulation for the impact against the denser foam at 240 m/s.
Once the relationship between strain rate and compressive strength was set, only the impact speeds were varied. The results were satisfactory in both cases, as reported in Table 2.

Table 2: Comparison of the hole depth values between computations and experimental results. The boxed cells are the ones used to calibrate the compressive strength - strain rate relation.

Foam         Impact velocity [m/s]   Standard mesh hole depth [mm]   Experimental hole depth [mm]
240 kg/m3    75                      4.3                             4
240 kg/m3    210                     23                              22
240 kg/m3    240                     28                              28
560 kg/m3    210                     8                               8
560 kg/m3    240                     8.9                             9
4.1 Mesh sensitivity Several meshes of both target and impactor were used to perform computations once the material curves were set, basing on the standard mesh results. In Tables 3 and 4 the main features of the target and bullet meshes are respectively reported. All target meshes were designed with a progressive refinement of the elements size in the central area which is involved by the impact. This has been done to understand the effect of the element size perpendicular to the impact speed and its thickness, avoiding excessive elongation ratio in the velocity direction. In a consistent way, the bullet meshes were refined dimensioning the elements to assure best interaction between the parts involved in the contact. It WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line)
Table 3: Target meshes main features.

Mesh kind   Number of elements on the impact plane   Central element diagonal [mm]   Element thickness [mm]   Total number of elements
standard    2,500                                    0.49                            1                        100,000
2           3,800                                    0.28                            0.6                      252,000
4           1,600                                    0.6                             0.5                      128,000

Table 4: Bullet meshes main features.

Mesh kind   Element projected diagonal [mm]   Total number of elements
standard    0.49                              864
3           0.3                               6912
It must be noted that the projected diagonals are the dimensions correlated with the size of the elements of the impact area on the target. The computations showed high stability with respect to mesh sensitivity. The maximum difference, 7%, was reached in the case of target 4 vs. bullet 3, using the standard case as baseline, as reported in Table 5. Large differences in element size did not produce dramatic changes in the results: target mesh 2 has elements of half the size on the impact face compared with the standard case, yet the results proved consistent. Comparing the standard target mesh with mesh 4, the ratio between the element thickness and the central element diagonal passes from 2 to 1; this appears to influence the material stiffness slightly. Due to the bias of the mesh on the top surface, however, this ratio holds only for the central area, as the element thickness is constant.

Table 5: Low density foam computations results summary.

Impact velocity [m/s]   Standard mesh hole depth [mm]   Bullet 3 vs target std [mm]   Bullet 3 vs target 2 [mm]   Bullet 3 vs target 4 [mm]   Experimental hole depth [mm]
75                      4.3                             4.4                           4.5                         4.4                         4
210                     23                              22                            22.3                        21.5                        22
240                     28                              27                            27.4                        26                          28

Table 6: High density foam computations results summary.

Impact velocity [m/s]   Standard mesh hole depth [mm]   Bullet std vs target 2 [mm]   Bullet 3 vs target std [mm]   Bullet 3 vs target 2 [mm]   Bullet 3 vs target 4 [mm]   Experimental hole depth [mm]
210                     8                               7.5                           8                            7.7                         7.2                         8
240                     8.9                             9.1                           9.1                          9.4                         8.5                         9
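The 7% figure quoted above can be reproduced from the low-density results of Table 5 by comparing every refined-mesh prediction with the standard-mesh baseline; a small sketch (values transcribed from Table 5):

```python
# Mesh-sensitivity spread for the low-density foam (Table 5): relative
# deviation of each mesh combination from the standard-mesh hole depth.
baseline = {75: 4.3, 210: 23.0, 240: 28.0}  # standard mesh, depth in mm
refined = {
    "bullet 3 / target std": {75: 4.4, 210: 22.0, 240: 27.0},
    "bullet 3 / target 2":   {75: 4.5, 210: 22.3, 240: 27.4},
    "bullet 3 / target 4":   {75: 4.4, 210: 21.5, 240: 26.0},
}

def max_deviation(baseline, refined):
    """Largest relative deviation (fraction) from the baseline depths."""
    return max(
        abs(depth - baseline[v]) / baseline[v]
        for combo in refined.values()
        for v, depth in combo.items()
    )

# ~0.071, i.e. the 7% case: target 4 at 240 m/s (26 mm vs. 28 mm)
spread = max_deviation(baseline, refined)
```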
5 Conclusions
A complete experimental investigation under quasi-static compressive stress-strain conditions allowed us to understand the mechanical behaviour of low and high density carbon foams. Under lateral confinement the foam showed a wide brittle crushing plateau, which gives the stress-strain curve the same shape as that of an elastic-plastic material. This part of the stress-strain curve shows a very low strain hardening coefficient for the low density foam and a noticeable one for the high density foam. The modulus of elasticity was not modified by the confinement. Passing from uniaxial stress to uniaxial strain, the failure evolution changed without influencing the elastic behaviour. Therefore, under confined multiaxial loading conditions these foams absorb much more energy than in simple compression, owing to the extended crushing plateau. In ballistic experiments with small fragments (about 1/10 of the target thickness) the foam experienced confined compression failure, associated with a very limited extension of the failure area. Although the extended plateau region provides good energy dissipation, this alone did not account for the ballistic results in terms of spherical bullet penetration into the target. Therefore a phenomenological relationship between strain rate and compressive strength used for ceramics, which have some points in common with these foams, was adopted. It was then modified and calibrated on a single impact test speed. The stress-strain curves obtained by applying this relationship were then validated against impact test computations at different velocities, yielding close agreement between experimental and numerical results. Additional computations performed with finer mesh combinations of the domain showed good consistency of the results.
In conclusion, a simple model, suitable for use in numerical simulations, accurately describes the high strain rate behaviour of carbon foams under one-dimensional strain loading conditions.
References

[1] Gibson, L.J. & Ashby, M.F., Cellular Solids: Structure and Properties, Second Edition, Cambridge (1999).
[2] Oak Ridge National Laboratory (ORNL), Dept. of Energy website.
[3] Sarzynski, M.D., Carbon Foam Characterization: Sandwich Flexure, Tensile and Shear Response, Master Thesis, (2003).
[4] Grujicic, M., Pandurangan, B., Zhao, C.L., Biggers, S.B. & Morgan, D.R., Hypervelocity impact resistance of reinforced carbon-carbon/carbon-foam thermal protection systems, Applied Surface Science, 2005.
[5] Lankford, J., Mechanism Responsible for Strain-Rate-Dependent Compressive Strength in Ceramic Materials, Communications of the American Ceramic Society, (1981).
[6] Forquin, P., Arias, A. & Zaera, R., An experimental method of measuring the confined compression strength of high performance concretes to analyse their ballistic behaviour. 8th International Conference on Mechanical and Physical Behaviour of Materials under Dynamic Loading, J. Cirne, R. Dormeval, et al. (eds), J. Phys. IV France 134 (2006) 629-634. DOI: 10.1051/jp4:2006134097.
[7] RILEM TC 148-SSC: Test Methods for compressive softening, 2001.
[8] LS-DYNA Keyword User's Manual, LSTC, May 2007, Version 971.
[9] Sambamoorthy, Halder, Characterization and Component Level Correlation of Energy Absorbing PU Foams using LS-DYNA Material Models, Lear Corporation.
[10] Chocron, Walker, Nicholls, Dannemann, Anderson, Analytical model of the confined compression test used to characterize brittle materials, Journal of Applied Mechanics, March 2008, vol. 75.
Statics and dynamics of carbon fibre reinforcement composites on steel orthotropic decks
L. Frýba, M. Pirner & Sh. Urushadze
Institute of Theoretical and Applied Mechanics, Academy of Sciences of the Czech Republic, v.v.i., Prosecká, Prague, Czech Republic
Abstract

Carbon fibre reinforced composites (CFRC) have become an increasingly applicable material in civil engineering over the last decades. The advantages of CFRC are its high stiffness and strength. The steel-composite interaction, which could forestall the development of fatigue cracks, was tested on simple beams. It is proposed that this material be applied to real steel bridge construction, especially to repair damage to orthotropic decks. The paper examines the influence of material characteristics with and without CFRC. Orthotropic deck details have been investigated under static and dynamic loads, and several theoretical and experimental methods have been applied. The experiments on orthotropic deck details focus on crack propagation, stress concentrations and fatigue life. The most vulnerable detail is the spatial crossing of the deck with the cross girder and the longitudinal rib, where stress concentration appears. The static tests were carried out on a real-size part of the orthotropic deck. The specimens are subjected to a many times repeated load.
Keywords: carbon fibre reinforced composite, orthotropic deck, fatigue life, dynamic load, crack propagation.
1 Introduction

Steel bridges with orthotropic decks are spatial structures in which the bridge deck is supported by cross-girders and longitudinal stiffeners. Therefore, they provide
different static properties in two orthogonal directions (orthotropic = orthogonally anisotropic). The orthotropic deck serves to distribute loads on the bridge. Steel bridges with orthotropic decks have been built in several countries since the Second World War, and their number in service now reaches many thousands worldwide. Their advantage is a low constructional height, appreciated especially in towns. However, being fully welded, orthotropic decks raise specific problems regarding fatigue under dynamic loads. The most vulnerable point of this type of structure is the spatial crossing of the bridge deck, cross-girder and longitudinal rib. Stress concentrations appear at those places and form a possible source of fatigue cracks [1, 2]. Therefore, many calculations and tests of orthotropic decks have been conducted, e.g. [3]. Today, plastic materials are applied in many cases, which suggested the idea of applying carbon fibre reinforcement, glued onto the steel plates, to reduce the formation and propagation of fatigue cracks in orthotropic decks.
2 CFRC composites

Our idea is to test Carbon Fibre Reinforcement Composites (CFRC) which, glued onto the specimen, could forestall the development of fatigue cracks. After a search of the materials offered on the market, we chose the carbon fibre composite shown in Fig. 1 (type Carbonpree 300 Biaxial) and the glue Sikadur 30.

Figure 1: Carbon fibre reinforcement composites (CFRC).
Before application of the glue, the surface of the steel specimens must be carefully brushed. The CFRC was tested separately (Fig. 2) as well as glued onto a steel bar. The stress-strain diagrams are shown in Figs. 3 and 4. The form of the Czech standard specimen for tensile tests is shown in Fig. 5.

Figure 2: Tensile test of CFRC.
Figure 3: The stress-strain diagram of the steel bar.
Figure 4: The stress-strain diagram of a steel bar with glued CFRC.

Figure 5: Standard specimen for tensile tests.

3 Orthotropic decks
The experiments on orthotropic deck details focus on crack propagation, stress concentrations and fatigue life. The most vulnerable detail is the spatial crossing of the deck with the cross girder and the longitudinal rib, where stress concentration appears. Therefore, the specimen of type A (see Fig. 6) was designed and tested. The first specimen, A1, was subjected to static forces from 0 to 477 kN in steps of 20 kN on a GTM 500 kN loading machine. After completing these tests, we intend to glue the CFRC onto some specimens of type A, subject them to alternating forces and compare the results with those of the bare steel specimens.
Figure 6: The specimen of type A.

Figure 7: Static test of the specimen A1, supported on two bearings.
Figure 8: Fatigue crack length a as a function of the number N of stress cycles (specimen A2).

Figure 9: Fatigue crack on the specimen A2.
4 Fatigue cracks in the steel orthotropic decks
The first step of our investigation comprised the study of fatigue crack propagation. The specimen A2 was first cycled with a force of 10 kN to 210 kN on an MTS 250 kN test machine at a frequency of 3 Hz for N = 5 527 812 cycles without any crack. After that, the specimen A2 was cycled with a force of 10 kN to 410 kN on a GTM 500 kN test machine at a frequency of 2 Hz. The relation between the fatigue crack length a and the number N of stress cycles is shown in Fig. 8. The test ended at N = 1 543 930 cycles, when the specimen lost its overall rigidity. The propagation of the fatigue crack plotted in Fig. 8 can be seen in Fig. 9.
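A growth curve such as that of Fig. 8 can be reduced to a crack growth rate da/dN by finite differences; a sketch of that reduction follows, where the sample points are illustrative placeholders only, not values digitised from the experiment:

```python
# Estimate the fatigue crack growth rate da/dN [mm/cycle] between successive
# (N, a) samples by forward differences. Sample points are illustrative
# placeholders, NOT read from Fig. 8.
def growth_rates(cycles, lengths):
    """Forward-difference da/dN between consecutive samples."""
    return [
        (lengths[i + 1] - lengths[i]) / (cycles[i + 1] - cycles[i])
        for i in range(len(cycles) - 1)
    ]

N = [200_000, 600_000, 1_000_000, 1_400_000]  # stress cycles (illustrative)
a = [10.0, 40.0, 90.0, 170.0]                 # crack length [mm] (illustrative)
rates = growth_rates(N, a)
# monotonically increasing rates reproduce an accelerating crack growth
```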
5 Conclusions

Our future research will address the steel-composite interaction on selected details of orthotropic decks, focusing on the adhesive gluing.
Acknowledgement

This research work was supported by the Grant Agency of the Czech Republic under grant number 103/08/1340. The identification code of the research project of the Institute of Theoretical and Applied Mechanics is AVOZ 20710524.
References

[1] De Corte, W. & Van Bogaert, Ph., Fatigue assessment of orthotropic bridge decks through high frequency strain gauge measurements. Proc. of the Third European Conference on Structural Control, 3ECSC, 12-15 July 2004, Vienna University of Technology, Vienna, Austria.
[2] Frýba, L., Dynamics and fatigue of orthotropic decks. In G. Augusti, C. Borri, P. Spinelli (eds): Structural Dynamics, EURODYN 96. Balkema, Rotterdam, Brookfield, 1996, Vol. 2, pp. 753-758.
[3] Frýba, L. & Gajdoš, L., Fatigue properties of orthotropic decks on railway bridges. Engineering Structures, 21 (1999), No. 7, pp. 639-652.
Study of the thermo-physical properties of bitumen in hydrocarbon condensates
A. Miadonye, J. Cyr, K. Secka & A. Britten
Department of Chemistry, Cape Breton University, Sydney, Canada
Abstract

The reduction of viscosity with inappropriate diluents could lead to asphaltene precipitation. To use condensates effectively, however, their effects on asphaltene precipitation and deposition must be well examined. In this study, the influence of inter-particle interactions of asphaltene on the viscosity and heat of mixing in bitumen-condensate mixtures is compared to that of bitumen in aromatic solvents and in condensate-toluene mixtures. Three bitumens, two heavy oils, and five condensates from different reservoirs were examined. The results indicate that the presence of aromatic solvent in bitumen delays the onset of asphaltene precipitation due to the presence of dipole-dipole interactions as well as hetero-molecular interactions. In pure condensates, strong hydrogen bonding and moderate homo-molecular interactions are predominant and therefore result in reduced asphaltene precipitation and a low enthalpy of mixing. The onset of asphaltene precipitation in bitumen and heavy oils is delayed in the presence of aromatic solvents, the delay decreasing with reduction in the aromatic solvent content of the condensates.
Keywords: thermo-physical property, bitumen, hydrocarbon condensates, solvents, asphaltene, calorimetry, viscosity, heavy oil.
1 Introduction

Bitumen and heavy oil contain high concentrations of asphaltenes, heteroatoms, and maltenes, and possess high densities and viscosities; these characteristics make their production, transportation and refining very difficult [1, 2]. The addition of hydrocarbon diluents promotes the reduction of viscosity and has been employed in several upstream bitumen recovery technologies with some measure of success [3]. Traditionally, producers have used large quantities of
aromatic solvents in the vapour extraction (Vapex) process to reduce the viscosity of bitumen under reservoir conditions. However, due to environmental concerns and the high costs of toluene and xylene, the focus has shifted to paraffinic diluents, preferably natural gas condensates. Using low molecular weight paraffinic hydrocarbons to reduce the viscosity of bitumen results in asphaltene precipitation and deposition. This can cause severe clogging of wells and pipelines and totally halt production, leading to a loss of revenue [4]. Asphaltenes are defined, according to ASTM standards (ASTM D2007, ASTM D2006, and ASTM D4124), as the n-pentane and n-heptane insoluble and toluene soluble petroleum fraction of complex molecular structure. In efforts to remediate these problems, calorimetric methods offer valuable insight into the heat effects and phase transformations surrounding asphaltene precipitation. However, it has proven very difficult to provide standard evaluations due to their heteroatom content, complex molecular structure, and polarity [5-7]. Heat is either evolved or absorbed during the creation or destruction of these molecular bonds, and this provides the framework for this study. Prior work in our group addressed aspects of asphaltene threshold multiphase behaviour in pure solvents such as hexane, toluene and different naphthas [8, 9]. In this study, different bitumen samples are mixed with various binary mixtures of condensates and toluene, and their heats of mixing are determined. The weight percent of asphaltene precipitation and the changes in viscosity at various concentrations are examined together with the heat of mixing to evaluate changes in the thermo-physical properties of the mixtures at the onset of asphaltene precipitation.
2 Experimental section

2.1 Materials

Five bitumens, Husky Golden Lake, Celtic, Cold Lake, Plover Lake and Rush Lake (from Alberta, Canada), supplied by the Alberta Research Council; Mongstad light crude oil (from Norway) and naphtha, supplied by the Esso Refinery (Halifax, Canada); and natural gas condensates (from Sable Island, Nova Scotia, Canada), supplied by the Exxon-Mobil Corporation, were selected and used as received for this study. The hydrocarbon solvents (n-pentane, ethylbenzene, toluene and xylenes) and methanol are commercially available analytical grade chemicals (Sigma-Aldrich, HPLC grade, 99.9+%) and were used as received.

2.2 Experimental methods

Asphaltene precipitation was carried out using a 1:40 mL bitumen:n-pentane ratio. The mixture was agitated using a G10 gyratory shaker for 24 hours, left to stand for two hours, and then filtered using a 7.0 cm Whatman filter paper. The asphaltenes remain as the residue on the filter paper, which was rinsed with an excess of n-pentane until it turned colourless. The precipitated asphaltenes were placed in a desiccator for evaporation of any excess liquid. Masses were
recorded until a final constant mass was obtained. In a separate experiment, the filtrate, a mixture of excess n-pentane and the deasphalted oil, was vacuum distilled to recover the n-pentane and the deasphalted bitumen.

To determine the solubility of bitumen in condensates and toluene, a Parr 1455 solution calorimeter was used to measure the enthalpies of mixing. The calorimeter was interfaced with a desktop PC for automatic data acquisition and analysis. The calorimeter's Dewar unit is equipped with a microprocessor-based thermometer with a 0.0008 heat leak constant for measurements at atmospheric pressure and within ±5°C of room temperature. The calorimeter was standardized with tris[hydroxymethyl]aminomethane (THAM) and 0.1 M hydrochloric acid in accordance with the procedure specified in the manufacturer's manual. To measure the enthalpy of mixing, masses ranging from 0-20 g of the pure bitumen or bitumen-diluent mixtures were placed in a rotating sealed glass sample cell inside the Dewar (solvent chamber). Thermal equilibrium is allowed to be reached before the sample is released into the solvent chamber. The mixture reaches a new equilibrium temperature as the reaction completes, thus producing the thermogram. As depicted in Figure 1, the thermogram provides the initial and final temperatures, which are obtained from the trend line equations fitted to the temperatures before and after the mixing period. The heat values, and hence the enthalpy of mixing, are calculated in accordance with the procedure and equation given in Parr's manual.

Figure 1: Typical solution calorimetry thermograph for mixing solvent with heavy petroleum blend.
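The trend-line construction described above can be sketched in a few lines: fit straight lines to the temperatures before and after mixing, extrapolate both to the mixing time, and take the temperature step between them. The sample data and the energy equivalent of the calorimeter below are illustrative placeholders, not Parr's calibration values:

```python
# Temperature step at the mixing time from pre/post trend lines, as in the
# thermogram analysis described above. All numeric data are illustrative.
def linear_fit(ts, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    sxx = sum((t - mt) ** 2 for t in ts)
    sxy = sum((t - mt) * (y - my) for t, y in zip(ts, ys))
    slope = sxy / sxx
    return slope, my - slope * mt

def mixing_delta_t(pre, post, t_mix):
    """Temperature change read off the two trend lines at the mixing time.
    pre and post are lists of (time [s], temperature [deg C]) samples."""
    s1, b1 = linear_fit([t for t, _ in pre], [y for _, y in pre])
    s2, b2 = linear_fit([t for t, _ in post], [y for _, y in post])
    return (s2 * t_mix + b2) - (s1 * t_mix + b1)

pre = [(0, 23.90), (50, 23.90), (100, 23.90)]      # flat baseline
post = [(200, 23.60), (300, 23.62), (400, 23.64)]  # slow recovery after mixing
dT = mixing_delta_t(pre, post, t_mix=150)          # about -0.31 deg C
heat = -480.0 * dT  # J, with a hypothetical 480 J/K energy equivalent
```

A temperature drop at the mixing time gives a positive heat value, i.e. endothermic mixing, matching the thermograms observed for these blends.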
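The asphaltene content bookkeeping of section 2.2 (weigh the filter residue until the mass is constant, then normalise by the initial bitumen mass) can be sketched as follows; the masses and the 5 mg stability tolerance are hypothetical choices, not values from the paper:

```python
# Asphaltene weight percent from the precipitation procedure of section 2.2.
# All numeric values below are hypothetical examples.
def asphaltene_wt_percent(residue_masses_g, bitumen_mass_g, tol_g=0.005):
    """Use the final residue mass; insist the last two weighings agree."""
    final = residue_masses_g[-1]
    if abs(residue_masses_g[-2] - final) > tol_g:
        raise ValueError("residue mass not yet constant; keep desiccating")
    return 100.0 * final / bitumen_mass_g

weighings = [1.742, 1.698, 1.681, 1.680]  # successive weighings [g]
wt = asphaltene_wt_percent(weighings, bitumen_mass_g=10.0)
```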
3 Results and discussion
The results obtained for the thermo-physical property transformations when bitumen is mixed with light oil, condensates and naphtha are illustrated in Figures 2 to 6. Enthalpies of mixing for pure Mongstad light oil, and for a 50/50 w/w mixture of Rush Lake bitumen with the Mongstad oil, are shown in Figure 2.

Figure 2: Enthalpy of mixing for pure light oil and 50:50 w/w Rush Lake bitumen and light oil mixture in toluene.
The trend lines gave parabolic patterns of endothermic nature, and it is observed that the presence of Rush Lake bitumen in the Mongstad oil causes the enthalpy values to decrease, shifting the parabola closer to the x-axis. This suggests that the Rush Lake bitumen present in the sample retains the energy which would otherwise be used to mix the bitumen with the toluene. Figure 3 shows the enthalpies of mixing at lower concentrations of bitumen and bitumen blends in 100 mL toluene. It is observed that the presence of Rush Lake bitumen causes the enthalpy values to decrease. The presence of Rush Lake bitumen therefore removes more heat from the condensate-toluene mixture, absorbing the heat normally dispersed on mixing pure condensates with toluene. The results indicate that
the higher the mass of bitumen in the bitumen-condensate mixture, the lower the enthalpy of mixing. It takes more energy to dissolve the large molecules of bitumen during mixing. This is evident from the fact that replacing the condensates with the higher viscosity Mongstad light oil further reduced the heat of mixing.

Figure 3: Enthalpy of mixing for pure diluents and different compositions of bitumen and diluents.
When the asphaltenes from the samples were recovered and weighed, it was observed that the amounts of asphaltene in the mixtures were inversely proportional to the diluent percentage. Figure 4 depicts the enthalpies of mixing for asphalted and deasphalted bitumen. The results suggest that the presence of bitumen in the mixture withholds the energy usually released upon mixing. This energy is contained within the molecular components of the bitumen, with lower enthalpies of mixing for asphalted bitumen than for deasphalted bitumen. The de-asphalted bitumen and the asphalted bitumen were each made into solutions containing 10 g of bitumen in 20 mL of toluene, and the viscosities of each were determined to see the effect of asphaltene on the viscosity. The
Figure 4: Enthalpy of mixing for 50:50 w/w Plover Lake bitumen:condensate mixture in toluene.
original bitumen mixtures (Figures 5 and 8) show large reductions in viscosity with increasing weight percent of condensates. The large reduction in viscosity at a 50:50 w/w ratio, compared to blends with larger ratios of bitumen, is not observed for the enthalpies of mixing. This observation indicates that asphaltene alone is not responsible for the endothermic property of the mixture. In Figure 6, the effect of solvent type on the solubility of bitumen is shown for Cold Lake bitumen. For the same bitumen and a similar volume percent of solvent, bitumen mixed with toluene showed lower viscosities at different temperatures than bitumen-naphtha mixtures. Aromatic solvents such as toluene have been shown to be good solvents for bitumen as they dissolve asphaltene [1, 4, 8, 9]. As noted in the Introduction, asphaltenes are defined as toluene soluble; the solubility in toluene could well be attributed to the polar nature of asphaltene. Consequently, n-paraffins, with no inherent dipolar molecules, will increase hetero-molecular interactions, leading to asphaltene precipitation at high solvent concentration. To examine further the mixing characteristics of bitumen and condensates, the enthalpies of mixing for low molecular weight paraffin and aromatic
Figure 5: Viscosity-temperature relationship for different bitumen-condensate mixtures.
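Fig. 5 plots viscosity on a logarithmic axis against temperature, over which the data are roughly straight lines; on that assumption, viscosity at an intermediate temperature can be interpolated. A sketch follows, where the two anchor points are illustrative, not values read from the figure:

```python
import math

# Interpolate viscosity assuming log10(viscosity) is linear in temperature,
# as suggested by the straight traces of a log-viscosity plot such as Fig. 5.
# The anchor points are hypothetical, not digitised from the figure.
def log_linear_viscosity(t1, v1, t2, v2, t):
    """Viscosity at temperature t, with log10(v) linear between two anchors."""
    slope = (math.log10(v2) - math.log10(v1)) / (t2 - t1)
    return 10.0 ** (math.log10(v1) + slope * (t - t1))

# e.g. a hypothetical blend at 300 cSt (15 C) thinning to 30 cSt (40 C)
v_25 = log_linear_viscosity(15.0, 300.0, 40.0, 30.0, 25.0)
```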
Figure 6: Effects of solubility and temperature on the viscosity of Cold Lake bitumen.
Figure 7: Enthalpy of mixing for several binary systems with a high level of miscibility.
solvents, and for methanol-water systems, were determined, as illustrated in Figure 7. The slope of enthalpy versus volume percent of methanol decreases with increasing concentration, showing that the enthalpies of mixing distilled water with methanol are exothermic. The result represented in Figure 7 is the well-known standard enthalpy diagram for the methanol-water mixture. In contrast, the non-polar hydrocarbon solvents exhibited endothermic enthalpies of mixing. The toluene-naphtha mixture is more endothermic than the ethylbenzene-naphtha and xylene-naphtha mixtures. This observation is in agreement with the results obtained for bitumen-condensate mixtures. Both ethylbenzene and xylene are slightly larger molecules than toluene, and their enthalpies of mixing are lower. This suggests that molecular size, and possibly structure, affect the heat of mixing. Previous work showed that this argument holds for hydrocarbon solvent mixtures [8]. The presence of extra pendant methyl groups on the aromatic solvents (ethylbenzene and xylene) allows for greater homo-molecular interactions with naphtha, resulting in a lower enthalpy of mixing.
4 Conclusions

The enthalpy of mixing and phase transformations of bitumen in paraffinic diluents such as natural gas condensates and naphtha are reduced with the addition of aromatic solvent due to the occurrence of dipole-dipole interactions. Low
enthalpy of mixing and reduced asphaltene precipitation are observed for bitumen-light oil mixtures, illustrating the influence of hydrogen bonding and homo-molecular interactions in bitumen solubility. These results also show clearly that a bitumen blend is stable at low concentrations of condensates (up to 5 per cent), but higher concentrations of condensates would require an aromatic solvent additive to prevent or reduce the precipitation of asphaltene.

Figure 8: Comparison of changes in asphaltene content and viscosity with condensate composition for different bitumens.
Acknowledgements

This work was funded by an NSERC Undergraduate Student Research Award (USRA). The support of CBU-ORAI is highly appreciated. Thanks to Loree D'Orsay for her data analysis contributions.
References

[1] Reynolds, J.G., Metals and heteroatoms in heavy oils. Petroleum Chemistry and Refining, ed. J.G. Speight, Taylor and Francis: Washington DC, 1999.
[2] Speight, J.G., Petroleum analysis and evaluation. Petroleum Chemistry and Refining, ed. J.G. Speight, Taylor and Francis: Washington DC, 1999.
[3] Miadonye, A., Singh, B., Huang, S.S., Srivastava, R. & Puttagunta, V.R., Modelling the Effect of Dissolved Gases on the Viscosity of Heavy Oils. Chemical Engineering Research and Design, 73 (A2), pp. 208-213, 1995.
[4] Das, S.K. & Butler, R.M., Mechanism of the vapour extraction process for heavy oil and bitumen. Journal of Petroleum Science, 12, pp. 219-231, 1995.
[5] Porte, G., Zhou, H. & Lazzeri, V., Reversible Description of Asphaltene Colloidal Association and Precipitation. Langmuir, 19, pp. 40-47, 2003.
[6] Alboudwarej, H., Beck, J., Svrcek, W.Y., Yarranton, H.W. & Akbarzadeh, K., Sensitivity of Asphaltene Properties to Separation Techniques. Energy & Fuels, 16, pp. 462-469, 2002.
[7] Akbarzadeh, K., Alboudwarej, H., Svrcek, W.Y. & Yarranton, H.W., A Generalized Regular Solution Model for asphaltene precipitation from n-alkane diluted heavy oils and bitumens. Fluid Phase Equilibria, 232, pp. 129-140, 2005.
[8] Miadonye, A., Evans, L. & McKenna, T.M., Study of Asphaltene Precipitation in Dilute Solution by Calorimetry. Computational Methods in Multiphase Flow III, eds. A.A. Mammoli & C.A. Brebbia, WIT Press, 50, pp. 13-21, 2005.
[9] Wallace, D., Henry, D., Miadonye, A. & Puttagunta, V.R., Viscosity and Solubility of Mixtures of Bitumen and Solvent. Fuel Science & Technology International, 14 (3), pp. 465-478, 1996.
Section 3 Direct, indirect and in-situ measurements
Modelling energy consumption in test cells
D. Braga¹, Y. Parte², M. Fructus², T. Touya², M. Masmoudi², T. Wylot³ & V. Kearley³
¹ ACTIS, Avenue de Catalogne, 11300 Limoux, France
² Institut de Mathématique de Toulouse, 31062 Toulouse, France
³ TRADA Technology Ltd, High Wycombe, HP14 4ND, U.K.
Abstract

In situ measurements of energy consumption in test cells are often carried out to predict the thermal performance of an insulating product. Design engineers need a tool that can use available in situ measurements to model the thermal performance of the test cell at any location worldwide and also permit comparison of different insulation products. Towards this goal, we present GAP, the Global Assimilation Process, a neural network based meta-modelling technique. The key features of this method are a zero-memory minimization routine and a regularization technique that avoids over-training of the network. We present the theory and application of the GAP software to predict the thermal performance of mineral wool and multi-foil insulation products. GAP based meta-models are used to predict the thermal performance of these insulation products at different test sites. It is shown that a properly trained GAP neural network model can accurately predict the energy consumption in test cells at any location.
Keywords: in situ measurements, neural network, GAP, multifoil insulation.
1 Introduction
Thermal properties of insulation materials, such as the U-value or R-value, are measured using controlled laboratory tests that employ simplistic specimen geometries and boundary conditions [1, 2] or by established standards [3]. However, a detailed report from the Building Research Establishment Ltd. [4] shows considerable differences between standard calculation methods such as [3] and in situ measurements. The difference arises from the fact that laboratory conditions and standards do not simulate factors such as wind, solar radiation, relative humidity,
doi:10.2495/CMEM090131
infiltration, etc., which influence in situ performance. Hygrothermal performance modelling software that claims to model these factors is inadequate, as it employs simplifying assumptions for wind profiles and solar radiation. Realistic estimation of the in situ performance of insulation products requires a tool that can couple the model and the measured data. Moreover, such a model should be computationally inexpensive so that design engineers can use it to model the thermal performance of a test cell at any location worldwide and also compare different insulation products. Towards this goal, we present the Global Assimilation Process (GAP), a neural-network-based meta-modelling technique. In section 2 we give a description of GAP, whose key features are the use of algorithmic differentiation to develop a low-memory Levenberg-Marquardt algorithm and the use of a regularization technique, by which it provides neural networks that do not "overfit" the given data. Using GAP, the neural network is trained to predict in situ consumption data corresponding to various meteorological conditions. The in situ data collection strategy and test set-up are discussed in section 3. Finally, in section 4 it is shown that a properly trained neural network can accurately predict the energy consumption in test cells located at other geographic locations.
2 GAP theory
The core idea of GAP is to build a neural network model of the energy consumption in test cells as a function of meteorological parameters. The model parameters are tuned by minimizing the difference between the measured and simulated consumption. We assume that energy consumption is a continuous function ψ defined from R^{n_I} (n_I input variables, e.g. meteorological parameters) into R^{n_O} (n_O output variables, e.g. energy consumption). In the following subsections we describe the neural network model and the training algorithm.

2.1 Neural networks
We consider three-layer neural networks, as they are universal approximators for continuous functions [5-7]. The first layer is the input layer and contains n_I + 1 cells corresponding to the n_I input variables, plus an additional cell called the bias. The second (or intermediate) layer is called the hidden layer and consists of n_H hidden cells, n_H usually being increased with the complexity of the function to be approximated. The third layer is the output layer and contains n_O cells. Each cell c_j of one layer is connected to each cell c_i of the following layer, and each of these links is associated with a weight w_{ij}. If we denote by x^l_i the state of cell c_i of layer l, then the state of cell c_j of the second layer is given by

    x^2_j = f\left( \sum_{k=1}^{n_I} w^1_{jk} x^1_k + w^1_{j,n_I+1} \right)    (1)
where f is the activation function given by eqn. (2):

    f(z) = \frac{1}{1 + e^{-z/10}} \quad \forall z \in \mathbb{R}    (2)
This means that some basis functions are built between the input and hidden layers. The state of cell c_i of the output layer is finally given by a linear approximation in this basis:

    x^3_i = \sum_{j=1}^{n_H} w^2_{ij} x^2_j    (3)
We will now consider the following vectorial notations:
• X^1 = (x^1_1, ..., x^1_{n_I})^T is the input vector, \bar{X}^1 = (x^1_1, ..., x^1_{n_I}, 1)^T, and X^3 = (x^3_1, ..., x^3_{n_O})^T is the output vector.
• W^1 is the n_H × (n_I + 1) matrix formed with the weights w^1_{jk}, and W^2 is the n_O × n_H matrix formed with the w^2_{ij}; W = (W^1, W^2) ∈ R^{n_H × (n_I+1)} × R^{n_O × n_H}.
• F is the function defined for all X^2 = (x^2_1, ..., x^2_{n_H})^T by F(X^2) = (f(x^2_1), ..., f(x^2_{n_H}))^T.
With these notations, the response R of the neural network to the input X^1 with the weights W is simply given by

    X^3 = R(W, X^1) := W^2 F(W^1 \bar{X}^1)    (4)
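As a concrete illustration, the response R(W, X^1) of eqn. (4) can be sketched in a few lines of NumPy. The layer sizes and weight values below are arbitrary placeholders for illustration, not values from the paper.

```python
import numpy as np

def activation(z):
    # Eqn (2): f(z) = 1 / (1 + exp(-z/10))
    return 1.0 / (1.0 + np.exp(-z / 10.0))

def network_response(W1, W2, x):
    """Three-layer network response R(W, X^1) of eqn (4).

    W1 : (n_H, n_I + 1) hidden-layer weights (last column = bias weights)
    W2 : (n_O, n_H)     output-layer weights
    x  : (n_I,)         input vector
    """
    x_bar = np.append(x, 1.0)          # append the bias cell: X^1 -> (X^1, 1)
    hidden = activation(W1 @ x_bar)    # eqn (1): states of the hidden layer
    return W2 @ hidden                 # eqn (3): linear output layer

# Toy example with arbitrary sizes: 5 inputs, 8 hidden cells, 1 output.
rng = np.random.default_rng(0)
W1 = rng.uniform(-0.1, 0.1, size=(8, 5 + 1))   # small random initial weights
W2 = rng.uniform(-0.1, 0.1, size=(1, 8))
y = network_response(W1, W2, rng.normal(size=5))
print(y.shape)  # (1,)
```

The linear output layer means the hidden cells act as a learned basis in which the output is expanded, exactly as described above.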
2.2 Training of neural networks
Consider the observation set Ω with n_P observations

    Ω = {(X_i, Y_i), i = 1, ..., n_P}    (5)

in which each X_i ∈ R^{n_I} is a vector corresponding to the input variables, and Y_i ∈ R^{n_O} is the response to the input X_i. For each observation we denote by r_i(W) = R(W, X_i) − Y_i the residual, and r(W) = (r_1(W), ..., r_{n_P}(W))^T is called the residual vector. The difference between the neural network output and the observed response is called the discrepancy function and is given by eqn. (6):

    G_Ω(W) = (r_1, ..., r_{n_P}) ∈ R^{n_O × n_P}    (6)

In order to make the neural network a good approximation model, we minimize the difference between the network output and the observed response, i.e. we look for the weights \hat{W} which solve the following minimization problem:

    \min_W h_Ω(W) := \frac{1}{2} \| G_Ω(W) \|^2    (7)
The minimization problem is solved using the zero-memory Levenberg-Marquardt algorithm described in section 2.3. First, the set of patterns Ω is divided into three parts: the training set Ω_T, the generalization set Ω_G, and the validation set Ω_V. The initial weights W_0 are set to small random values between −0.1 and 0.1, and the neural network is trained in the following three phases.
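The data split and weight initialization just described can be sketched as follows. The 60/20/20 split proportions are an illustrative assumption, as the paper does not state them; only the weight range [−0.1, 0.1] comes from the text.

```python
import numpy as np

def split_patterns(X, Y, rng, f_train=0.6, f_gen=0.2):
    """Divide the observation set into training, generalization and
    validation parts (Omega_T, Omega_G, Omega_V).  The 60/20/20
    proportions are an assumption for illustration only."""
    n = len(X)
    idx = rng.permutation(n)
    n_t, n_g = int(f_train * n), int(f_gen * n)
    t, g, v = idx[:n_t], idx[n_t:n_t + n_g], idx[n_t + n_g:]
    return (X[t], Y[t]), (X[g], Y[g]), (X[v], Y[v])

def init_weights(n_i, n_h, n_o, rng):
    """Small random initial weights in [-0.1, 0.1], as in the paper."""
    W1 = rng.uniform(-0.1, 0.1, size=(n_h, n_i + 1))
    W2 = rng.uniform(-0.1, 0.1, size=(n_o, n_h))
    return W1, W2

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
Y = rng.normal(size=(100, 1))
train, gen, valid = split_patterns(X, Y, rng)
W1, W2 = init_weights(5, 8, 1, rng)
```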
2.2.1 First training phase
We look for the size of the hidden layer n_H that allows a good training of the neural network on the patterns from Ω_T. This means that n_H is automatically increased, and

    h_{Ω_T}(\hat{W}_0) = \min_W h_{Ω_T}(W)    (8)

is solved until h_{Ω_T}(\hat{W}_0) becomes smaller than a threshold precision η > 0.

2.2.2 Regularization phase
We now add a Tikhonov regularization term to the functional h_{Ω_T} and look for the regularization parameter β that forces the weights involved in the definition of the basis functions to remain small. In this way, we try to define smooth and stretched basis functions that prevent the neural network from oscillating too much.
• If h_{Ω_G}(\hat{W}_0) < η, then we set \hat{β} = 0;
• otherwise, for several increasing values of β > 0, we look for \hat{W}_β that solves

    \min_W h_{Ω_T}(W) + \frac{β}{2(n_I+1) n_H} \| W^1 \|^2    (9)

In this minimization problem the weights are initialized to the value obtained at the end of the first training phase. We denote by \hat{β} the value of β for which h_{Ω_G}(\hat{W}_β) is the smallest.

2.2.3 Final training phase
We finally perform a new training phase on the set Ω_T ∪ Ω_G, using the regularization parameter \hat{β} provided by the previous step and \hat{W}_β as initial weights:

    \min_W h_{Ω_T ∪ Ω_G}(W) + \frac{\hat{β}}{2(n_I+1) n_H} \| W^1 \|^2    (10)

2.3 Levenberg-Marquardt algorithm
The Levenberg-Marquardt algorithm is a combination of the steepest descent method and the Gauss-Newton algorithm [8-10]. Iterative descent algorithms consist in defining a descent direction d; the new point W_+ is obtained from the current point W using the update rule

    W_+ = W + d    (11)

The descent direction d is given by [10]

    \left( J(W)^T J(W) + α I \right) d = −J(W)^T r(W)    (12)
where α > 0 and J(W) = ∇r(W)^T is the Jacobian matrix. The Levenberg-Marquardt algorithm is then the following:
• Choose an initial point W_0 and a real number α_0 > 0; set k = 0.
• Compute d_k, solution of (J(W_k)^T J(W_k) + α_k I) d_k = −J(W_k)^T r(W_k).
• If h_Ω(W_k + d_k) < h_Ω(W_k), set W_{k+1} = W_k + d_k, choose α_{k+1} < α_k, increase k and go to the previous step; otherwise, decrease α_k and go to the previous step. Stop if h_Ω(W_k + d_k) < η.

2.3.1 Memory reduction and adjoint computation
As seen in eqn. (12), the Levenberg-Marquardt algorithm usually requires the computation of the inverse of J(W)^T J(W) + αI, whose size may be quite large in some cases. For memory reduction, at least in terms of storage, the linear system in eqn. (12) can be solved using the conjugate gradient method, which requires only matrix-vector products. We only need to compute the left-hand side of eqn. (12) in an efficient way. This can be done in two steps.
1. We first compute z = J(W)d. This quantity can be rewritten as

    J(W)d = \lim_{ε \to 0} \frac{r(W + εd) − r(W)}{ε} = \left. \frac{∂ r(W + εd)}{∂ε} \right|_{ε = 0}    (13)

so J(W)d corresponds to the differentiation of a vector-valued function r with respect to a single parameter ε. This can be done very efficiently using the forward mode of algorithmic differentiation.
2. Then, we have to compute J(W)^T z, which can be rewritten as

    J(W)^T z = \sum_{i=1}^{n_P} ∇r_i(W) z_i = ∇\left( \sum_{i=1}^{n_P} r_i(W) z_i \right) = ∇\left( r(W)^T z \right)    (14)
In this form, J(W)^T z corresponds to the differentiation of a scalar function with respect to several parameters, and the reverse mode of algorithmic differentiation is particularly efficient in this case [11-13]. The computation of the right-hand side of eqn. (12) is realized in the same manner as the second step of the computation of the left-hand side.
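A minimal sketch of this matrix-free idea follows, assuming the two matrix-vector products are available as black-box functions. Here they are supplied explicitly from a toy dense Jacobian rather than by algorithmic differentiation, so the dense solve is only a cross-check; the real method never forms J^T J.

```python
import numpy as np

def solve_lm_step(jvp, vjp, r, alpha, n, tol=1e-10, max_iter=200):
    """Solve (J^T J + alpha I) d = -J^T r by conjugate gradient,
    using only jvp(v) = J v (forward mode) and vjp(z) = J^T z
    (reverse mode), as in eqns (12)-(14)."""
    def matvec(v):
        return vjp(jvp(v)) + alpha * v   # (J^T J + alpha I) v, matrix-free
    b = -vjp(r)                          # right-hand side of eqn (12)
    d = np.zeros(n)
    res = b - matvec(d)
    p = res.copy()
    rs_old = res @ res
    for _ in range(max_iter):
        Ap = matvec(p)
        step = rs_old / (p @ Ap)
        d += step * p
        res -= step * Ap
        rs_new = res @ res
        if np.sqrt(rs_new) < tol:
            break
        p = res + (rs_new / rs_old) * p
        rs_old = rs_new
    return d

# Toy cross-check against a dense solve.
rng = np.random.default_rng(0)
J = rng.normal(size=(12, 6))
r = rng.normal(size=12)
alpha = 0.5
d = solve_lm_step(lambda v: J @ v, lambda z: J.T @ z, r, alpha, 6)
d_dense = np.linalg.solve(J.T @ J + alpha * np.eye(6), -J.T @ r)
print(np.allclose(d, d_dense))  # True
```

Since J^T J + αI is symmetric positive definite for α > 0, conjugate gradient is guaranteed to converge, and only vectors of the size of W ever need to be stored.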
3 Experimental setup
3.1 Description of the test cells
Two test cells, located at Limoux, France, with a roof surface area of 35 m², outside floor dimensions of 4 × 7 m and a height of 3 m, were used for the test. Each cell is built without windows and there is no ventilation. The roof, with an inclination of 36°, is made up of rafters of 8 × 11 cm with a spacing of 48 cm between adjacent rafters, and has clay tiles. The roof ridge has a north-south orientation. The floor is made of wood paving and the under-floor gap is over-insulated with 40 cm of mineral wool. Access to each test volume is through an airlock in the gable wall, and thus the thermal exchange takes place through the walls and roof alone. The inside temperature of each cell is maintained at 23°C using two fan heaters of 1 kW output. The airlock is heated to 1°C less than the main cell and acts as a guard cell. One test
cell has the TS9 multi-foil insulation of ACTIS [14] and the other has 20 cm of mineral wool insulation. The layout of each cell is in accordance with the manufacturer's instructions [14]. Infrared pictures taken outside and inside the test cells do not reveal any significant differences between the two kinds of cell.

3.2 Instrumentation and measurements
Each cell is equipped with two temperature sensors located 1.5 m above the floor and placed in an open grey PVC tube to shield them from air movements. Energy consumption in each test cell is measured by recording current and voltage using calibrated instruments. Weather parameters, namely the outside temperature, relative humidity, wind direction, wind speed and total solar radiation, were recorded every minute by a dedicated weather station at the site. The period of measurement was from 1 December 2005 to 28 February 2006. Measurements are carried out every minute and recorded by dedicated data logger units. All quality control checks pertaining to the instrumentation were made [14].
4 Simulations using GAP 4.1 Training of neural network
The consumption in a test cell is a function of meteorological parameters, namely the difference between the outside temperature and the temperature inside the cell, wind speed, wind direction, relative humidity and global solar radiation. Using GAP, two neural networks pertaining to the data of the two test cells were developed. A total of 1513 observations spanning the period 1/12/2005 to 31/12/2005 were used to train the networks. These networks were validated by predicting the consumption values for the period 1/1/2006 to 28/02/2006. Figs. 1 and 2 show the comparison between measured and simulated energy consumption corresponding to these experiments. Table 1 shows the measured and simulated energy consumption in these test cells. The difference between measured and simulated consumption is less than 1%. It is observed that the energy consumption in the test cell with TS9 material is 4% less than in the test cell with mineral wool.
Figure 1: Comparison of measured and simulated energy consumption in a test cell with TS9 insulation, located at Limoux, France.
Figure 2: Comparison of measured and simulated energy consumption in a test cell with mineral wool insulation, located at Limoux, France.
Table 1: Measured and simulated net energy consumption (kWh) in test cells located at Limoux, France.

             Cell with TS9    Cell with mineral wool
Measured     137              143
Simulated    136              142
Figure 3: Comparison of measured and simulated energy consumption in a test cell with multi-foil insulation, located at TRADA, U.K.
4.2 Prediction using neural network
These trained and validated neural networks are used to predict the energy consumption of similar test cells, with multi-foil insulation similar to TS9 and with mineral wool insulation, located at TRADA in the United Kingdom [15]. The weather data and the consumption were measured for the period 1/1/2006 to 28/2/2006. Figs. 3 and 4 show the comparison of measured and simulated consumption. Table 2 provides the values of measured and simulated consumption. For the cell with multi-foil insulation the simulated consumption differs by 4%, whereas for the test cell with mineral wool insulation the difference is 3%. As part of the long-term in situ data collection strategy, ACTIS planned to set up new test cells in the United Kingdom. The weather data spanning the
Figure 4: Comparison of measured and simulated energy consumption in a test cell with mineral wool insulation, located at TRADA, U.K.
Table 2: Measured and simulated net energy consumption (kWh) in test cells at TRADA, United Kingdom.

             Cell with multi-foil insulation    Cell with mineral wool
Measured     165                                138
Simulated    159                                134
period of 1/2/2006 to 28/02/2006 at eight different locations in the United Kingdom, namely Manchester, Norwich, London, Plymouth, Cardiff, Aberdeen, Newcastle and Belfast, is available. The weather characteristics at these locations differ from those at Limoux, France. Fig. 5 shows the range of measured values for outside temperature, wind direction, wind speed and global solar radiation at Limoux and at these locations in the United Kingdom. The minimum temperature at these locations is higher than at Limoux, whereas the global solar radiation values are lower than at Limoux. In order to estimate the energy consumption in test cells that would be built in the near future at these sites, the neural network model trained using the weather data and consumption details from Limoux, France is used to simulate the in situ energy consumption in test cells with TS9 and mineral wool insulation materials. Table 3 shows the predicted energy consumption for a test cell with TS9 and with mineral wool insulation at each of the eight sites, together with the difference in energy consumption between the two insulations. The predicted consumption in the test cell with multi-foil insulation is lower than in the test cell with mineral wool insulation at Plymouth, Cardiff, Aberdeen, Newcastle and Belfast, whereas it is higher by 2% at Manchester and by 1% at Norwich and London. The standard deviation of the predicted consumption values for the test cell with mineral wool is 9.7, whereas the corresponding value for the test cell with multi-foil insulation is 6.5, possibly indicating that multi-foil insulation is more robust to varying weather conditions than mineral wool insulation.
Figure 5: Comparison of weather data at eight different locations in the U.K. and at Limoux, France.
Table 3: Simulated consumptions of test cells with ACTIS TS9 and mineral wool insulation at eight different locations in the United Kingdom.

              Energy consumption (kWh)            Difference
Location      Mineral wool (A)   ACTIS TS9 (B)    (B−A)/A × 100
Manchester    135                138              2%
Norwich       136                137              1%
London        129                130              1%
Plymouth      155                128              −17%
Cardiff       132                126              −5%
Aberdeen      154                143              −7%
Newcastle     144                143              −1%
Belfast       140                136              −3%
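As a quick sanity check, the Difference column of Table 3 can be recomputed from columns (A) and (B); the tabulated percentages are reproduced when the difference is taken relative to the mineral wool consumption (A).

```python
# Recompute the "Difference" column of Table 3 from (A, B) consumption pairs.
data = {
    "Manchester": (135, 138), "Norwich": (136, 137), "London": (129, 130),
    "Plymouth": (155, 128), "Cardiff": (132, 126), "Aberdeen": (154, 143),
    "Newcastle": (144, 143), "Belfast": (140, 136),
}
diffs = {site: round((b - a) / a * 100) for site, (a, b) in data.items()}
print(diffs)  # 2, 1, 1, -17, -5, -7, -1, -3: matches the table
```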
5 Conclusion
GAP is the result of a powerful combination of several techniques: the use of a zero-memory minimization method, a specific activation function that guarantees a minimal change of weights, and the use of a Tikhonov regularization technique in order to build smooth and stretched basis functions. GAP is used to generate a neural network model that predicts the energy consumption of test cells as a function of meteorological parameters using in situ data. It is demonstrated with examples that properly trained networks can accurately predict the energy consumption of houses located at other locations as well.
References
[1] ISO 8302, Thermal insulation. Determination of steady-state thermal resistance and related properties - Guarded hot plate apparatus.
[2] ISO 8990:1994, Thermal insulation. Determination of steady-state thermal transmission properties - Calibrated and guarded hot box.
[3] ISO 6946, Building components and building elements - Thermal resistance and thermal transmittance - Calculation method.
[4] Doran, S., Field investigation of the thermal performance of construction elements as built. Technical Report 78132, Building Research Establishment Ltd, November 2001.
[5] Cybenko, G., Continuous valued neural networks with two hidden layers are sufficient. Technical report, Department of Computer Science, Tufts University, Medford, Massachusetts, 1988.
[6] Cybenko, G., Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2, pp. 303-314, 1989.
[7] Hornik, K., Stinchcombe, M. & White, H., Multilayer feedforward networks are universal approximators. Neural Networks, 2, pp. 359-366, 1989.
[8] Levenberg, K., A method for the solution of certain problems in least squares. Quarterly of Applied Mathematics, 2, pp. 164-168, 1944.
[9] Marquardt, D., An algorithm for least-squares estimation of nonlinear parameters. SIAM Journal on Applied Mathematics, 11, pp. 431-441, 1963.
[10] Nocedal, J. & Wright, S., Numerical Optimization. Springer Series in Operations Research and Financial Engineering, Springer, 2nd edition, 2006.
[11] Gilbert, J.C., Le Vey, G. & Masse, J., La différentiation automatique de fonctions représentées par des programmes. Technical Report 1557, INRIA, 1991.
[12] Griewank, A., Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. SIAM: Philadelphia, USA, 2000.
[13] Rall, L.B. & Corliss, G.F., An introduction to automatic differentiation. Computational Differentiation: Techniques, Applications, and Tools, eds. M. Berz, C.H. Bischof, G.F. Corliss & A. Griewank, SIAM: Philadelphia, USA, pp. 1-17, 1996.
[14] ACTIS. http://www.actis-isolation.com.
[15] Kearley, V., Multifoil testing at TRADA. Technical report, TRADA Technology Ltd, United Kingdom.
Evaluation of insulation systems by in situ testing

I. Enache1, D. Braga1, C. Portet1 & M. Duran2
1 Research and Development Department, Actis, Limoux, France
2 EMM, Bruxelles, Belgium
Abstract
Due to global warming, the energy performance of buildings is now a crucial subject. In order to gain an accurate understanding of the energy loss of a building, we have developed real-time in situ tests of the thermal performance of building insulation systems. The test cells have pitched roofs with two gable walls, have the same interior and exterior dimensions, are placed in outside weather conditions and are constructed with the same materials (apart from the roof and gable insulation). All cells are heated in the same way: the temperature inside each test cell is maintained at the same specified level in winter by fan heaters. By comparing the air temperature within the test cell to the outside weather conditions and monitoring the energy required to maintain the internal temperatures, the real-life thermal efficiency of each insulation system can be estimated. This paper deals with the results obtained at several test centres around Europe using a thin multi-layer reflective insulation product for the insulation of the first test cell, mineral wool for the second, and no insulation for the third.
Keywords: in situ testing, real life thermal efficiency, thin multifoil insulation.
1 Introduction
Reducing CO2 emissions has become one of the most important challenges for all industrial sectors. Concerning the building industry, there has to be a significant improvement in thermal performance. New regulations continuously reinforce the requirements on the contribution of building products to energy saving.
doi:10.2495/CMEM090141
Nowadays, the estimate of the energy demand of a building is made with dedicated software that allows a study of the impact of different solutions contributing to the building energy requirement. Insulation products are a key parameter. Their thermal properties are estimated by standardized guarded hot plate or hot box measurements. However, important differences between the standard calculation [1,2] and in situ measurements can be observed. A detailed report from the Building Research Establishment Ltd [3] concluded that there are important differences, in certain cases equivalent to almost 30%, between the calculation of the coefficient of thermal transmission U under the norm ISO 6946 [4] and the in situ measurements made by the Alba Building Science company on the walls of buildings built between 1995 and 1999. In this context, the European Multifoil Manufacturers association (EMM) has chosen to categorise different insulation systems using in situ tests. This method has the advantage of taking into account the influence of real conditions on the thermal performance of different insulation solutions. In this way the technique gives much more realistic information about the thermal behaviour of the tested insulation product once installed. The in situ tests developed by EMM and presented here concern three identical buildings insulated with different insulation systems: one insulated with a thin multi-layer reflective product (MF), one insulated with 200 mm of mineral wool (MW), and one left uninsulated. The in situ measurements are also compared with simulation results obtained with TRNSYS® software.
2 Experimental part
2.1 Structural description of the test cells
Three test cells in timber frame, representative of an attic that can be converted, with a roof surface of about 35 m², outside floor dimensions of 4 x 7 m and a height of 3 m, were used for the described tests (figure 1). Access to each test cell is gained through an insulated airlock situated on the gable wall. The gable walls and the attached airlock are made of 23 mm thick plywood. The roof (36° pitch) is traditional, made up of rafters of 8 x 11 cm with a roof void of 48 cm, whilst the roofing is made of clay tiles. The floor is timber with an under-floor void and is over-insulated with 20 cm of polystyrene and 10 cm of mineral wool. In order to obtain a reasonable accuracy in the thermal performance of the insulation system, the cells have no windows and no controlled ventilation system. Also, the temperatures of the airlock entries and under-floor spaces are maintained at the same level as inside; therefore the thermal exchanges take place only through the walls insulated with the tested material.

2.2 Insulation set-up
The insulation products tested are (figure 2):
- A non-commercial thin (45 mm) multi-layer reflective insulation product (MF) with a core thermal resistance of 1.25 m²K/W measured with standard methods [1,2].
- Mineral wool of 200 mm (MW) with a declared thermal conductivity of 0.040 W/m·K (RD = 5 m²K/W). An additional, continuous vapour control layer made of polyethylene foil (0.2 mm) was placed between the mineral wool and the plasterboard.
- The last cell had no insulation above the plasterboard.
The layout was made in accordance with the manufacturers' instructions [5,6]. An HPV under-tile liner was placed under the tiles in each cell.

2.3 Test method
In order to analyse the thermal behaviour of the insulation systems in different conditions, the three cells have been placed on exposed sites in three different
Figure 1: Photograph of the test cells.

Figure 2: Scheme of the roof constructions: under-tile liner and ventilated air gap, MF or 200 mm MW, vapour control layer, plasterboard (for the non-insulated cell, the same configuration was used without any insulation).

Table 1: Characteristics of the test sites.

Site                        Average temperature  Temperature variation  Wind speed  Solar radiation
North of Europe (lat 54°N)  Low                  Very low               High        Low
Centre of Europe (43°N)     High                 High                   High        High
South of Europe (40°N)      High                 Very high              Low         Very high
regions of Europe with very different weather conditions. The typical weather characteristics of the three sites are shown in table 1.

2.3.1 Instruments inside the testing cells
Each cell has been equipped with two temperature sensors located 1.50 m above the floor. The sensors are placed in an open grey PVC tube in order to protect them from air movements which might affect the measurements. Seven other temperature sensors are installed in each cell in order to monitor the temperature distribution in the cell volume. During tests in winter conditions, the temperature in the cells is maintained at a constant level with two fan heaters of 1 kW. Current and voltage measurements using calibrated transducers allow the determination of the exact energy consumption in each cell. A meteorological station located near the cells permanently records the following weather parameters: temperature, relative humidity, pressure, global solar radiation and wind speed. All measurements are registered at a constant rate (one per minute).
Before the test started, new measurements of the cell interior dimensions and air tightness were performed. As we can see in table 2, the three cells are very similar. Table 2:
Interior dimensions and air tightness of the test cells.
Surface (m²) MF
North of MW Europe Not insulated MF
Centre of MW Europe Not insulated MF
South of MW Europe Not insulated
43.62 44.18 44.41 44.19 45.31 44.71 43.93 44.51 44.59
Volume (m3) 27.78 28.37 28.45 28.93 30.32 30.18 28.15 28.72 28.79
WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line)
n50 (h-1) 4.60 5.00 4.85 4.50 4.35 4.40 5.29 5.57 5.72
Computational Methods and Experimental Measurements XIV
Table 3:
Daily weather conditions and energy consumptions on the south of Europe site (29/01/2008 to 11/03/2008, readings at 09:00). Columns: date, temperature (°C), wind speed (m/s), solar radiation (W/m²), and daily energy consumption (kWh) of the MF, MW and non-insulated cells; days with faulty readings are marked "Invalid data".
3 Experimental and simulation results

3.1 Experimental measurements

In situ testing was performed during the 2007-2008 winter. The weather conditions encountered during this period and the energy needed to maintain the temperature set point in the cells (21°C for the north of Europe site, 23°C for the two other sites) are presented in tables 3 to 5. The energy consumption of the three test cells over the entire test period on each site is given in table 6. The energy consumption depends on the number of days of test, but also on the weather conditions and the temperature inside the cells. Table 7 gives the thermal transmittance of each cell on the three test sites.

Table 4:
Daily weather conditions and energy consumptions on the centre of Europe site (14/01/2008 to 21/02/2008, readings at 09:00). Columns: date, temperature (°C), wind speed (m/s), solar radiation (W/m²), and daily energy consumption (kWh) of the MF, MW and non-insulated cells; days with faulty readings are marked "Invalid data".
Table 5: Daily weather conditions and energy consumptions on the north of Europe site (01/02/2008 to 09/03/2008, readings at 09:00). Columns: date, temperature (°C), wind speed (m/s), solar radiation (W/m²), and daily energy consumption (kWh) of the MF, MW and non-insulated cells; days with faulty readings are marked "Invalid data".
Table 6: Energy consumptions on the three test sites during the entire test period.

Site               Days of test   Energy consumption (kWh)
                                  MF      MW      Not insulated
South of Europe    34             229.5   219.0   810.3
Centre of Europe   27             223.5   197.4   841.4
North of Europe    34             245.0   165.0   948.0
Table 7: Thermal transmittance of the tested cells and energy savings of the insulated cells.

Site               Thermal transmittance (W/m²K)    Energy savings
                   MF     MW     Not insulated      MF     MW
South of Europe    0.37   0.35   1.32               72%    73%
Centre of Europe   0.37   0.32   1.39               73%    77%
North of Europe    0.39   0.26   1.50               74%    83%
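The energy savings in table 7 follow directly from the totals of table 6 as 1 - E_insulated/E_not_insulated. A quick cross-check, using only the values printed in table 6:

```python
# Energy savings of the insulated cells relative to the non-insulated one,
# computed from the Table 6 totals; the results match the Table 7 percentages.
totals = {  # site: (MF, MW, not insulated), all in kWh, from Table 6
    "South of Europe":  (229.5, 219.0, 810.3),
    "Centre of Europe": (223.5, 197.4, 841.4),
    "North of Europe":  (245.0, 165.0, 948.0),
}

savings = {
    site: (round(100 * (1 - mf / ni)), round(100 * (1 - mw / ni)))
    for site, (mf, mw, ni) in totals.items()
}
# savings["North of Europe"] == (74, 83), as in Table 7
```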
Table 8: Comparison between in situ measurements and the Trnsys® simulation. "Diff." is the relative difference (Trnsys - in situ); "Air change" is the air change rate needed to fit the measurements (h-1).

South of Europe
Cell  Configuration            In situ (kWh)  Trnsys (kWh)  Diff.   Air change (h-1)
MW    Without thermal bridges  218.96         121.11        -45%    0.73
MW    Thermal bridges (Sext)   218.96         156.51        -29%    0.46
MF    Without thermal bridges  229.55         243.83        +6%     -
MF    Thermal bridges (Sext)   229.55         328.99        +43%    -

Centre of Europe
Cell  Configuration            In situ (kWh)  Trnsys (kWh)  Diff.   Air change (h-1)
MW    Without thermal bridges  197.39         128.09        -35%    0.50
MW    Thermal bridges (Sext)   197.39         163.21        -17%    0.24
MF    Without thermal bridges  223.46         257.08        +15%    -
MF    Thermal bridges (Sext)   223.46         344.27        +54%    -

North of Europe
Cell  Configuration            In situ (kWh)  Trnsys (kWh)  Diff.   Air change (h-1)
MW    Without thermal bridges  164.99         126.95        -23%    0.28
MW    Thermal bridges (Sext)   164.99         166.43        +1%     -
MF    Without thermal bridges  245.03         241.54        -1%     0.03
MF    Thermal bridges (Sext)   245.03         327.81        +34%    -
Table 7 shows that the thermal performance of MF is higher in the south and centre of Europe than in the north of Europe. MW shows the opposite trend, with higher performance in the north of Europe and lower in the south. The same table presents the energy savings of each insulated cell compared to the non-insulated cell. In the north of Europe the MW cell achieves higher energy savings than the MF cell, whereas on the two other sites the cells insulated with MF and with MW achieve very similar savings.

3.2 Simulation results

The results presented in this section were obtained with the TRNSYS® software [7]. The simulations were performed using the exact geometry of the cells and the weather conditions recorded on each test site. The input thermal properties of each wall of the structure were determined using standardised methods of measurement and calculation [1,2,5]. The simulation results in terms of energy consumption are compared with the measurements in table 8. If the linear thermal bridges are not taken into account, the simulation results for the MW cell fit the measurements with an air infiltration rate between 0.28 h-1 and 0.73 h-1. If the thermal bridges are taken into account, by considering the external surface of the cell as the heat loss surface, then the air infiltration rate needed to fit the measurements is considerably reduced (maximum value 0.46 h-1).
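The percentage differences in table 8 are simply (Trnsys - in situ)/in situ, rounded to whole percent; for example, for the south of Europe cells:

```python
# The difference column of table 8: relative deviation of the simulated
# energy consumption from the in situ measurement, in percent.
def diff_pct(trnsys_kwh, in_situ_kwh):
    return round(100.0 * (trnsys_kwh - in_situ_kwh) / in_situ_kwh)

# South of Europe, values from table 8:
mw = (diff_pct(121.11, 218.96), diff_pct(156.51, 218.96))  # (-45, -29)
mf = (diff_pct(243.83, 229.55), diff_pct(328.99, 229.55))  # (6, 43)
```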
For the MF cell, the simulations overestimated the energy needed to maintain the temperature set point in the cells even without taking the linear thermal bridge effect into account. Moreover, as shown in table 2, the air tightness of the cells is very similar, so the same infiltration level found for the MW cell should apply to the MF one. In that case the difference between simulations and measurements reaches up to 70%. This inconsistency could be explained by an underestimation of the thermal performance of MF together with its adjacent air gaps. This raises the question of the adequacy of traditional methods, and of the need for a new method suitable for determining the correct resistive characteristics of cavities with reflective walls. A three-dimensional CFD model coupling the different heat transfer mechanisms could allow a better understanding of the thermal behaviour of the MF product.
4 Conclusion
The in situ tests performed in regions of Europe with different weather conditions have shown that the thermal performance of the MF product is clearly underestimated by the standard measurements and calculations currently employed. The weather conditions appear to have a strong impact on the thermal performance of the different insulation systems. The difference between calculation and in situ measurement is lowest on the north of Europe site, probably because the test conditions there are very similar to those imposed by the standard methods, i.e. very low temperature variation. The largest difference between calculation and measurement is obtained in the south of Europe, where the weather conditions are completely different from those imposed by the standards, with high temperature variation over the course of one day. This clearly shows that the current standards are not appropriate for determining the true thermal performance of MF products. The protocol detailed in this paper also allows the direct determination of the energy saving of a given insulation system in comparison with a non-insulated cell. The MF solution provides a significant energy saving and can therefore be an interesting alternative, especially for old buildings where space for thick insulation is not available.
Acknowledgement

We would like to thank all the people who helped us in this project: T. Labrousse, F. Laché, B. Saintpeyre, B. Sanchez, T. Bonnafoux.
References

[1] ISO 8302, Thermal insulation - Determination of steady-state thermal resistance and related properties - Guarded hot plate apparatus.
[2] ISO 8990, Thermal insulation - Determination of steady-state thermal transmission properties - Calibrated and guarded hot box.
[3] Report BRE n° 78132, Field investigations of the thermal performance of construction elements as built, June 2001.
[4] ISO 6946, Building components and building elements - Thermal resistance and thermal transmittance - Calculation method.
[5] ACTIS guideline: http://www.actis-isolation.com
[6] Example of set-up: http://www.knaufinsulation.co.uk/PDF/Book_3_2_2_Pitched_Roofs_Rafter_Level.pdf
[7] http://sortware.cstb.fr/soft/present.asp?page_id=fr!Trnsys
Monitoring coupled moisture and salt transport using single vertical suction experiment Z. Pavlík, J. Mihulka, M. Pavlíková & R. Černý Department of Materials Engineering and Chemistry, Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic
Abstract

A new method for the simultaneous monitoring of coupled moisture and chloride ion transport is presented in this paper. The experiment consists of one-sided vertical uptake of a 1.0 M NaCl solution into a sample of calcium silicate based material. A rod-shaped sample is used for the determination of moisture and chloride concentration profiles in simulated 1-D water and chloride solution transport. For the measurement, advanced Time Domain Reflectometry (TDR) sensors are used. The sensors allow moisture content assessment on the basis of relative permittivity measurement, and chloride concentration monitoring based on electrical conductivity measurement. From the measured data, moisture and chloride concentration profiles are obtained. The experimentally determined chloride concentration and moisture profiles are then used to identify the apparent chloride diffusion coefficient and the moisture diffusivity by inverse analysis using a simple diffusion model. Finally, the calibration procedure of the applied measuring method is discussed, and practical recommendations are given for the application of the combined TDR/electrical conductivity sensors to the monitoring of coupled moisture and salt solution transport.
Keywords: moisture, salt concentration, relative permittivity, electrical conductivity, calcium silicate.
WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line) doi:10.2495/CMEM090151
1 Introduction
Moisture and salt induced damage is considered one of the essential problems in the decay of building materials and structures. It is evident that rising moisture content in buildings and their materials leads to serious negative effects, such as degradation of materials (disintegration of inorganic plasters, porous stones and ceramic bricks, binder decomposition, surface erosion, etc.). It also promotes biological devaluation of constructions (mould growth) and worsens the hygienic conditions of the interior climate. Water can deteriorate building materials and structure surfaces by acid decomposition reactions. A typical example is sulphur dioxide, which dissolves in water and partly forms sulphurous acid, and sulphur trioxide, which forms an acid as well. Both acids decompose lime and lime-mixed binders in coatings [1, 2]. The final result is the formation of gypsum, CaSO4·2H2O. The reactions can be expressed in a simplified manner as

CaCO3 + SO2 + 1/2 O2 + H2O → CaSO4·2H2O + CO2   (1)

and

CaCO3 + SO3 + 2H2O → CaSO4·2H2O + CO2.   (2)
Gypsum has a large molar volume and, under favourable humidity conditions, large gypsum crystals form. As a result, crystallisation pressure breaks down the surface layer of lime based materials. The negative effect of moisture on the compressive and bending strength of load-bearing materials is also significant. In areas where the temperature fluctuates around 0°C, water gives rise to freeze-thaw weathering that particularly deteriorates the porous structure of building materials. Ice has a volume about 9% larger than the same mass of liquid water, and its crystallisation pressure damages the solid porous structure of materials. Part of the building damage assigned to negative moisture effects would not arise if only pure water were present. Water very often merely serves as a transport medium for other harmful pollutants that take part in the surface degradation of building materials. Water transport in porous materials also enables salt transport and accumulation, and salt accumulation in specific places of building structures can consequently lead to their failure or destruction. Among the actions of water soluble salts in building materials, salt crystallisation, salt hydration, hygroscopic water absorption, efflorescence and leaching have the most harmful effects on the properties of building materials and structures. Salt crystallisation is a physico-chemical degradation process related to the formation of saturated and oversaturated salt solutions due to water evaporation. Once the solubility limit is exceeded, salt crystals grow and exert crystallisation pressures on the walls of the pore space. Depending on the strength of the material, damage of its porous structure is initiated. Salt hydration concerns salts that are able to bind a certain defined number of water molecules in their crystal lattice. They form hydrates, which is accompanied by volume changes and hydration pressures. For building materials, the most dangerous salts are those that change their form under standard climatic conditions: sodium sulphate, sodium carbonate and calcium nitrate. An example is the hydration of calcium nitrate, where the first transition occurs at 30°C and the second at 100°C:

Ca(NO3)2·4H2O ↔ Ca(NO3)2·3H2O ↔ Ca(NO3)2.   (3)

Although the problems of buildings and their materials related to moisture and water soluble salts are generally known and proven, the exact description of the coupled moisture and salt transport mechanism in porous media remains an open field for building physicists and engineers. On this account, the main motivation of the presented work is to contribute to the explanation of salt solution transport and to the identification of parameters that can be used for its more exact characterisation and description.
2 Method for simultaneous measurement of moisture and salt concentration
Water and salt ions possess many anomalous properties, which also affect the properties of a porous material. Accordingly, there exist various methods for determining the moisture and salt content in porous materials, and various moisture and salt concentration meters. The presence of salt ions can negatively affect the accuracy of moisture measurement methods, especially relative methods in which the measured physical quantity depends on salt concentration. Therefore, for simultaneous moisture and salt concentration measurement, a method whose accuracy is not influenced by the presence of salt ions must be chosen. As stated in the literature, high frequency microwave methods, based on permittivity determination correlated to moisture content, can be used for this type of measurement. In this paper we introduce the TDR (Time Domain Reflectometry) method for moisture measurement, combined with simultaneous assessment of salt concentration by means of electrical conductivity measurement. The TDR technique is a specific methodology among the microwave impulse techniques. The principle of a TDR device consists in launching electromagnetic waves and measuring the amplitude of the wave reflections together with the time intervals between launching the waves and detecting the reflections. The time (velocity) of pulse propagation depends on the apparent relative permittivity of the porous material, which can be expressed using the formula

εr = (c tp / (2 Ls))²   (4)
where εr is the complex relative permittivity of the porous medium, c the velocity of light (3·10⁸ m/s), tp the time of pulse propagation along the probe rods measured by the TDR meter, and Ls the length of the sensor rod inserted into the measured porous medium. The determination of moisture content using permittivity measurements is then based on the fact that the static relative permittivity of pure water is approximately 80 at 20°C [3], while for most dry building materials it ranges from 2 to 6. For the evaluation of moisture content from measured relative permittivity values, three basic approaches can be used. The first is the use of empirical conversion functions generalised for a certain class of materials. On the basis of the analysis performed in [4], it can be stated that the empirical conversion functions used in current research for TDR data conversion are anything but universal; they are always limited to specific groups of materials. The second is the application of dielectric mixing models, which assume knowledge of the relative permittivities of the material matrix, water, air and other parameters that cannot be measured directly but have to be determined by empirical calibration of the model. Dielectric mixing models have been tested in many practical applications and their prospects for further use seem better than those of the empirical conversion functions [5-7]. The third method consists in empirical calibration for the particular material using a reference method, such as the gravimetric method. This method is the most reliable to date and was also used in this work for the calibration of the TDR method for moisture measurement in calcium silicate based material. As mentioned above, the measurement of salt concentration (in our case of chlorides) was done by means of electrical conductivity measurement, which is in a clear relation with salt concentration. The calibration was done by measuring the salt concentration with an ion selective electrode together with measuring the electrical conductivity. In this way, the salt-concentration dependent electrical conductivity was obtained.
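Eq. (4) is easy to express in code; a minimal sketch, where the travel time value is illustrative rather than a measurement from this study:

```python
# Eq. (4): apparent relative permittivity from the TDR pulse travel time
# t_p and the rod length L_s.  The example numbers are illustrative only.
C_LIGHT = 3.0e8  # velocity of light, m/s

def relative_permittivity(t_p, L_s):
    """eps_r = (c * t_p / (2 * L_s))**2 -- eq. (4)."""
    return (C_LIGHT * t_p / (2.0 * L_s)) ** 2

# With the 53 mm rods of the LP/ms miniprobes, a travel time of 1.0 ns gives
eps_r = relative_permittivity(1.0e-9, 0.053)  # ~8.0, typical of a moist material
```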
3 Experimental
The experiment for the determination of moisture and salt concentration profiles was done under the conditions of one-sided vertical uptake of a 1.0 M NaCl solution into a sample of calcium silicate based material. A rod-shaped sample was used for the determination of moisture and chloride concentration profiles in simulated 1-D water and chloride solution transport. The sample size was 50/100/300 mm and all lateral sides of the sample were vapour-proof insulated with epoxy resin to ensure 1-D moisture and salt solution transport. Eight two-rod LP/ms miniprobes (Easy Test) were placed in the studied sample for the monitoring of complex relative permittivity and electrical conductivity. The sensors are made of two 53 mm long parallel stainless steel rods, 0.8 mm in diameter and separated by 5 mm [3]. The sensor's sphere of influence is approximately a cylinder around the rods, about 7 mm in diameter and about 60 mm in height. The accuracy of the relative permittivity and electrical conductivity readings and the measuring range of the applied sensors are given in Table 1.

Table 1: Accuracy and measuring range of the applied TDR sensors.

Measured quantity            Measuring range   Accuracy
Relative permittivity ε      2 ÷ 90            Absolute error: ±1 for 2 ≤ ε ≤ 6, ±2 for ε ≥ 6
Electrical conductivity σ    0 ÷ 1 S/m         Relative error: ±5%

For the TDR measurements in this paper, the cable tester TDR/MUX/mts produced by Easy Test was used; it is based on TDR technology with a sin²-like needle pulse with a rise time of about 250 ps. The working frequency of this device is 1.8 GHz for the relative permittivity measurement. The measuring procedure was as follows. First, the sensors were placed into the sample and sealed with silicone gel; since the material is rather soft, the sensors were inserted by simple impression. Then the sample was put into a vessel containing the 1.0 M NaCl water solution and the suction started. The complex relative permittivity and electrical conductivity were continuously monitored and stored on a computer. After a specific time interval, the experiment was interrupted and the sample was cut into eight separate pieces, each containing one sensor. Finally, the sensors were removed and in each piece the moisture content and chloride concentration were measured by reference methods: the moisture content by the gravimetric method and the chloride concentration by an ion selective electrode (pH/ION 340i device) applied to leaches from the particular sample pieces. In this way, the empirical calibration curves of the TDR method for calcium silicate were determined. The suction experiment was realised on a calcium silicate material with high thermal insulation properties, high total open porosity (87%) and low bulk density (230 kg/m³); chemically it is formed by Ca2SiO4. The sample arrangement and measuring setup are shown in Fig. 1.
4 Determination of moisture diffusivity and chloride diffusion coefficient
Moisture diffusivity and the chloride diffusion coefficient represent necessary input data for the computational modelling of coupled moisture and chloride ion transport in porous building materials. Their knowledge is also important for the evaluation of the moisture and salt solution transport properties of specific materials, and plays an important role in the design of building structures. In this paper, inverse analysis of the experimentally measured moisture and chloride concentration profiles was used for the computational identification of the moisture diffusivity and chloride diffusion coefficient of calcium silicate. Inverse analysis is based on the assumed mode of salt solution transport. In the presented work we have assumed a purely diffusive mechanism for both moisture and chloride ion transport. The chloride diffusion coefficient is therefore considered an apparent parameter that also includes the effects of chloride ion binding on the pore walls, advection of salt ions, surface diffusion and osmotic effects. Compared to simple Fick's diffusion, the dependence of the moisture diffusivity on moisture content, κ(w), and of the apparent diffusion coefficient on salt concentration, D(C), is considered. In this case, the salt mass balance is expressed by the equation

∂C/∂t = div(D(C) grad C),   (5)

where C [kg/m³] is the salt concentration in kg per volume of the dry porous body and D [m²/s] the apparent salt diffusion coefficient. In this way the salt solution transport is formally described by the same parabolic equation, with the same boundary and initial conditions, as is usually used for the description of water transport. Therefore, the calculation of concentration-dependent diffusion coefficients from the measured salt concentration profiles can be done using basically the same inverse methods as those for the determination of the moisture-dependent moisture diffusivity or temperature-dependent thermal conductivity. In this paper, this type of model was employed for the determination of both the D(C) and κ(w) functions. In the calculations, we assume that the concentration field C(x, t) and the moisture content field w(x, t) are known from the experimental measurements, as are the initial and boundary conditions of the experiment. Using the Matano method [8] and applying two Boltzmann transformations, we arrive at the following final formulas for the apparent salt (in our case chloride) diffusion coefficient
D(C0) = [1 / (2 t0 (dC/dz)|z=z0)] ∫z0→∞ z (dC/dz) dz,   (6)

and the moisture diffusivity

κ(w0) = [1 / (2 t0 (dw/dz)|z=z0)] ∫z0→∞ z (dw/dz) dz,   (7)

where C0 = C(z0, t0) is the salt concentration at position z0 and time t0, w0 = w(z0, t0) the corresponding moisture content, and z the space variable. The integrals in equations (6) and (7) are solved by common numerical methods, such as Simpson's rule. Details of the inverse analysis procedure can be found e.g. in [9].

Figure 1: Experimental setup of the vertical suction experiment.

Figure 2: Relative permittivity as a function of moisture content.
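Eq. (6) can be evaluated numerically on a measured concentration profile. A minimal sketch, where the profile, grid and use of the trapezoidal rule (the paper uses Simpson's rule) are our own illustrative choices:

```python
# Numerical evaluation of eq. (6): apparent diffusion coefficient D(C0)
# from a concentration profile C(z) recorded at time t0.
# The profile below is synthetic, for illustration only.
import math

def apparent_D(z, C, t0, i0):
    """D(C0) = (1 / (2 t0 (dC/dz)|z0)) * integral_{z0}^{z_end} z (dC/dz) dz."""
    n = len(z)
    # central-difference derivative dC/dz, one-sided values copied at the ends
    dCdz = [0.0] * n
    for i in range(1, n - 1):
        dCdz[i] = (C[i + 1] - C[i - 1]) / (z[i + 1] - z[i - 1])
    dCdz[0], dCdz[-1] = dCdz[1], dCdz[-2]
    f = [z[i] * dCdz[i] for i in range(n)]
    # trapezoidal integration from z0 = z[i0] to the end of the profile
    tail = sum((z[i + 1] - z[i]) * (f[i] + f[i + 1]) / 2.0
               for i in range(i0, n - 1))
    return tail / (2.0 * t0 * dCdz[i0])

z = [0.005 * k for k in range(61)]                  # positions, 0-0.3 m
C = [50.0 * math.exp(-(x / 0.08) ** 2) for x in z]  # synthetic profile, kg/m3
D0 = apparent_D(z, C, t0=24 * 3600.0, i0=20)        # D at C0 = C(z[20])
```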
5 Experimental and computational results
Figs. 2 and 3 present the calibration curves of the applied measuring techniques for moisture and chloride concentration measurement. The measured data show the dependence of relative permittivity on moisture content and of electrical conductivity on chloride concentration, knowledge of which is essential for the applicability of the methods to salt solution transport monitoring. For calibration purposes, the experimental data were smoothed by simple polynomial relations that can be considered empirical calibration curves of TDR for the measurement of chloride water solution transport in calcium silicate. The experimentally measured moisture and chloride concentration profiles are presented in Figs. 4 and 5. The data give clear evidence of the velocity of moisture and chloride ion propagation in the calcium silicate material. Fig. 6 shows the moisture diffusivity as a function of moisture content, and Fig. 7 the apparent chloride diffusion coefficient as a function of chloride concentration. These figures reveal the necessity of considering, in the balance equations, the dependence of the transport parameters on moisture and salt concentration.
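The polynomial smoothing step can be illustrated as follows; the (moisture, permittivity) pairs here are hypothetical, not the measured calibration data of Fig. 2:

```python
# Sketch of the calibration step: fit a quadratic eps(w) to calibration
# pairs and invert it to turn a permittivity reading into a moisture
# content.  The data pairs are hypothetical.
import numpy as np

w   = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8])      # moisture content, m3/m3
eps = np.array([2.5, 6.1, 10.9, 24.1, 42.1, 64.9])  # relative permittivity

a, b, c = np.polyfit(w, eps, deg=2)   # eps = a*w**2 + b*w + c

def moisture_from_eps(eps_meas):
    """Invert the fitted quadratic; return the physically meaningful root."""
    roots = np.roots([a, b, c - eps_meas])
    return float(min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0))

w_meas = moisture_from_eps(30.0)      # ~0.47 m3/m3 for this fit
```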
Figure 3: Dependence of measured electrical conductivity on chloride concentration.

Figure 4: Moisture profiles for calcium silicate measured by TDR.
Looking at the data presented in Fig. 6, the moisture diffusivity varies over four orders of magnitude. A similar trend was found for the concentration-dependent chloride diffusion coefficient. In the range of lower concentrations, typically up to 0.005 g/g, the apparent diffusion coefficient is close to the diffusion coefficient of chlorides in water. In the range of higher concentrations, on the other hand, it increases very rapidly.
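A concentration-dependent D(C) is straightforward to carry into a numerical solution of eq. (5). A minimal explicit finite-difference sketch, where the D(C) function, boundary concentration and grid are invented for illustration, not the measured dependence of Fig. 7:

```python
# Explicit 1-D finite-difference step for eq. (5),
# dC/dt = d/dz( D(C) dC/dz ), with a concentration-dependent D.

def step(C, dz, dt, D):
    """One explicit Euler step; D evaluated at cell faces by averaging."""
    new = list(C)
    for i in range(1, len(C) - 1):
        D_right = 0.5 * (D(C[i]) + D(C[i + 1]))
        D_left  = 0.5 * (D(C[i - 1]) + D(C[i]))
        flux = D_right * (C[i + 1] - C[i]) - D_left * (C[i] - C[i - 1])
        new[i] = C[i] + dt * flux / dz**2
    return new  # boundary values kept fixed (Dirichlet conditions)

D = lambda c: 1.0e-9 * (1.0 + 0.05 * c)   # hypothetical D(C), m2/s
C = [100.0] + [0.0] * 60                  # solution reservoir at z = 0, kg/m3
dz, dt = 0.005, 600.0                     # 5 mm grid; dt < dz**2/(2*Dmax), stable
for _ in range(1000):                     # roughly a week of uptake
    C = step(C, dz, dt, D)
```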
Figure 5: Chloride concentration profiles measured by TDR.

Figure 6: Moisture diffusivity of calcium silicate.

Figure 7: Apparent chloride diffusion coefficient of calcium silicate.

6 Conclusions
The experiment presented in this paper has proven the capability of combined TDR sensors for the simultaneous monitoring of moisture and salt concentration in porous building materials. This finding is very promising for future work, especially for building practice, which requires comprehensive, precise and reliable methods for moisture and salinity measurement. The assessment of the chloride diffusion coefficient and moisture diffusivity provides important information for the practical application of the studied calcium silicate material, with regard to its intended use in interior thermal insulation systems of building envelopes. The data can also find use in the computational modelling of moisture and chloride ion transport in calcium silicate based materials, which can be useful, for example, in assessing damage caused by salt action.
Acknowledgement This research has been supported by the Czech Ministry of Education, Youth and Sports, under project No. MSM 6840770031.
References [1] Rovnaníková, P., Environmental pollution effects on other building materials (Chapter 7). Environmental Deterioration of Materials, ed. A. Moncmanová, WIT Press, Southampton, pp. 217-247, 2007.
[2] Moncmanová, A., Environmental factors that influence the deterioration of materials (Chapter 1). Environmental Deterioration of Materials, ed. A. Moncmanová, WIT Press, Southampton, pp. 1-21, 2007. [3] Malicki, M. & Skierucha, W.M., A Manually Controlled TDR Soil Moisture Meter Operating with 300 ps Rise-Time Needle Pulse. Irrigation Science, Vol. 10, pp. 153-163, 1989. [4] Fiala, L., Pavlík, Z., Jiřičková, M., Černý, R., Sobczuk, H. & Suchorab, Z., Measuring Moisture Content in Cellular Concrete Using The Time Domain Reflectometry Method. CD-ROM Proceedings of 5th International Symposium on Humidity and Moisture, J. Brionizio, P. Huang (eds.), Inmetro, Rio de Janeiro, paper No. 103, 2006. [5] Dobson, M.C., Ulaby, F.T., Hallikainen, M.T. & El-Rayes, M.A., Microwave dielectric behavior of wet soil, Part II: Dielectric mixing models, IEEE Trans. Geosci. Remote Sensing GE-23, pp. 35-46, 1985. [6] Jacobsen, O.H. & Schjonning, P., Comparison of TDR Calibration Functions for Soil Water Determination. Proceedings of the Symposium Time-Domain Reflectometry - Applications in Soil Science, L. W. Petersen and O. H. Jacobsen (eds.), Danish Institute of Plant and Soil Science, Lyngby, pp. 25-33, 1995. [7] Pavlík, Z., Fiala, L., Pavlíková, M., Černý, R., Sobczuk, H. & Suchorab, Z., Calibration of the Time Domain Reflectometry Method for Measuring Moisture Content in AAC of Various Bulk Densities, ISEMA 2007, Hamamatsu: Shizuoka University, pp. 151-158, 2007. [8] Matano, C., On the relation between the diffusion coefficient and concentration of solid metals. Jap. J. Phys., 8, pp. 109-115, 1933. [9] Fiala, L., Pavlík, Z., Pavlíková, M. & Černý, R., Water and Chloride Transport Properties of Materials of Historical Buildings. Recent Developments in Structural Engineering, Mechanics and Computation, Rotterdam: Millpress Science Publishers, pp 581-582, 2007.
Application of image analysis for the measurement of liquid metal free surface S. Golak Faculty of Materials Science and Metallurgy, Department of Electrotechnology, Silesian University of Technology, Poland
Abstract During the induction melting and stirring of molten metal a meniscus forms on the surface of the bath. This phenomenon affects the processes occurring at the metal–gas interface, since it changes the size of the real free surface of the metal. Numerical simulations indicate a significant scale of this phenomenon during typical induction melting processes. Experimental measurement of the real free surface of induction-melted metals can be quite difficult because of the rapid changes in the surface shape and the high temperature of the molten metal. Therefore, a fast non-contact method using a laser and image analysis was proposed. However, in the case of liquid metals an optical method of measurement has its limitations, as many molten metals shine. The problem was resolved by the use of a green laser and a narrow-band optical filter. This paper presents the methodology and problems of non-contact meniscus shape measurement. Keywords: liquid metal, free surface, laser measurement.
1 Introduction
The processes of induction melting, stirring and refining of liquid metals in crucibles and ladles are becoming more and more popular in the metallurgical industry. This situation promotes constant development of the equipment used in the above-mentioned processes, which in turn means continuous improvement of the design methods. One of these is to expand the potential of simulating metallurgical processes, and indeed it can be observed nowadays that more and more physical phenomena are taken into account in such simulations. At the beginning, just one-dimensional or quasi two-dimensional analytical models of electromagnetic and temperature fields were constructed. Then, together with the development of computer science, there appeared numerical tools for the simulation of these fields and their coupling. Greater availability of computers and higher computing power contributed to progress in computational fluid dynamics, which made it possible to simulate the liquid metal hydrodynamics occurring in real metallurgical devices. Initially, the simulations of electromagnetic and hydrodynamic fields were conducted as separate stages. First, the distribution of electromagnetic forces acting on the metal was calculated, followed by the determination of the liquid metal velocity field induced by these forces. Those simulations assumed that the geometry of the liquid metal is not subject to change. The most frequent assumption was even that the surface of the liquid metal was perfectly flat. In fact, each metal forms on its surface a meniscus, which is the resultant of the surface energy of the crucible walls and the total surface tension of the liquid metal. This process intensifies significantly under the influence of an electromagnetic field. The geometry of the liquid metal is distorted, and in effect the actual contact area between the metal and the atmosphere is bigger than in the case of a flat surface (Blacha et al. [1], Golak and Przylucki [2,3]). The simulations conducted in our department have shown that the surface area after deformation may be increased by as much as 150 percent. Information on the real shape that the surface assumes is of great significance in the analysis of the melting, stirring and refining processes. During metal melting and heating it is the geometry of the melt that determines the actual power emitted in the charge, and consequently the capacity of the equipment in question.
The distribution of the velocity field in induction stirrers determined for a flat free surface will be substantially different from the distribution for the deformed surface. Yet, the greatest importance of the free surface can be seen in refining and other processes occurring at the liquid metal–atmosphere phase interface. Processes of this type show an almost linear dependency on the actual size of the metal free surface. For this reason an inaccurate estimation of the bath shape may completely distort the results of even the most accurate simulations. The above-mentioned simulations conducted in our department revealed the scale of the phenomenon and its significance. However, the results of numerical simulation should be verified experimentally in order to validate the methodology applied and make any necessary corrections. To do this, measurements of the real shape of the bath must be performed and compared with the simulation results. In this way not only the calculations of the bath shape may be verified, but also, indirectly, the calculations of the electromagnetic forces and velocity field can be confirmed, as the obtained meniscus is a resultant of the processes represented by these simulations. Measurements of a liquid metal are by no means an easy task because of its high temperature and chemical activity. That is why non-contact methods are often preferred. The level of liquid metal may be determined by point measurement performed with the use of radar, ultrasonic, and recently also laser sensors.
doi:10.2495/CMEM090161
Within the range of applications discussed, however, the point measurement proves to be insufficient. It turns out to be necessary that the whole shape of the liquid metal should be scanned with adequate resolution. The paper presents an attempt to adapt a widely used technique of 3D laser scanning to measure the geometry of a liquid metal in induction crucibles.
2 Method description
Laser measurement of the level is most often performed by the triangulation method because of its accuracy and relative simplicity (e.g. Mass et al. [4]). It consists in projecting a laser beam at a certain angle onto the surface whose distance is to be measured, and recording the beam reflection with a camera.
Figure 1: Level measurement by the method of diffused light recording (laser tilted at angle φ; the camera sensor records the spot position L; a, b, c are geometric values).
Two techniques of measurement may be distinguished here. In one of them the light diffused on the surface is recorded, while in the other the beam reflected from the surface is projected onto a screen. The choice between these methods of level measurement (and consequently of surface profile measurement) of the liquid metal depends on several factors. The first method is applicable when the laser beam is diffused on the surface of the liquid metal, which occurs when various kinds of impurities, micro-bubbles and the like are present on it. Despite this inconvenience, the method is widely used for the simplicity of the measurement procedure. Figure 1 presents a schematic diagram of the measurement of the metal level by the method of diffused light recording. As can be seen, a shift in the level of the liquid causes a shift of the recorded light spot of the laser beam. The coordinates of the surface point can be estimated by solving equation set (1):

c (b − x) = (a − y) L,    y = x tan φ    (1)

where: x, y – coordinates of the surface point; φ – the tilt of the laser; L – the position of the laser beam spot recorded by the camera; a, b, c – geometric values (see fig. 1).

In the case of some metals which melt at high temperatures (e.g. copper) a problem of their shining occurs. The spectrum of this light has a distribution similar to that of a black body (fig. 2), described by Planck's law.
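To make the geometry concrete, the system can be solved in closed form for the surface point (x, y) given a sensor reading L. The sketch below assumes the reconstructed form of eqn (1) above; the function names and all numeric values are illustrative, not taken from the paper.

```python
import math

def spot_position(x, y, a, b, c):
    """Forward model of eqn (1): sensor reading L produced by a surface
    point (x, y), assuming c*(b - x) = (a - y)*L."""
    return c * (b - x) / (a - y)

def surface_point(L, phi, a, b, c):
    """Invert eqn (1) for (x, y): eliminate x via x = y*ctg(phi), which
    gives y = (a*L - c*b) / (L - c*ctg(phi))."""
    ctg = 1.0 / math.tan(phi)
    y = (a * L - c * b) / (L - c * ctg)
    x = y * ctg
    return x, y
```

A round trip through the forward model recovers the original point, which is a useful sanity check when calibrating the geometric values a, b, c.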
Figure 2: Normalized Planck distribution for the temperature of liquid copper (wavelength range 300–700 nm).
Laser light reflected from the metal surface is then suppressed and its recording is not possible. The solution here is a narrow-band filter on the camera lens that removes the spectrum components radiated by the metal and leaves only the part of the spectrum in which the laser beam light is contained. Semiconductor red lasers, which are most frequently used, emit light of wavelength 650 nm, which falls within the part of the spectrum radiated by the heated metal. For this reason the use of a laser with a shorter wavelength is suggested. Green semiconductor lasers with a wavelength of 532 nm have recently become available on the market. Within this range of radiation hot metal shines less brightly, and owing to filtration it is easier to distinguish between the laser light and the metal shine. The method of point measurement of the liquid metal can be extended to linear measurement. In order to do this, a dot line must be projected onto the metal surface. In 3D laser scanners a mechanical system deflecting the laser beam is commonly used. In the case of induction devices, however, quickly changing processes often have to be recorded. Having in mind the fast digital cameras so easily available today, it must be concluded that the mechanical system would become the factor limiting the speed of measurement. Therefore, what was applied in the presented solution was multiplication of the laser beam by an optic method based on an interference grid. The selection of a suitable head allows a dot line to be projected onto the surface of the liquid metal. Since the induction machines considered here are usually characterized by full axial symmetry, the measurement of the radial component of the curvature makes it possible to determine the full geometry of the surface. The optic method can also be used to project a dot matrix, which will allow a full scanning of the surface.
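The advantage of the shorter wavelength can be checked directly from Planck's law. The sketch below compares the black-body spectral radiance at 532 nm and 650 nm near the melting point of copper; the temperature value is an assumed illustration, not a measurement from the paper.

```python
import math

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a black body (Planck's law) [W / (m^2 sr m)]."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2.0 * H * C ** 2 / wavelength_m ** 5) / math.expm1(x)

# Assumed bath temperature near the melting point of copper (~1360 K):
# the thermal background at 532 nm is an order of magnitude weaker than
# at 650 nm, which is why the green laser plus a 532 nm narrow-band
# filter separates the beam from the metal glow far better than a red one.
T = 1360.0
ratio = planck_radiance(532e-9, T) / planck_radiance(650e-9, T)
```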
When the laser ray is not diffused on the metal surface, the method of the reflected beam should be applied. Figure 3 shows the idea of this measurement. The coordinates of the surface point are estimated by solving equation set (2).
y = b − H − (a − x) tan(φ − 2α),    y = x tan φ    (2)

where: x, y – coordinates of the surface point; α – the tilt of the surface; φ – the tilt of the laser; H – the position of the laser beam spot on the projection panel recorded by the camera; a, b – geometric values (see fig. 3).

The technique is sensitive to the curvature of the surface from which the distance is measured. As figure 4 shows, even a small tilt of the surface which reflects the beam causes a significant change in its direction. A shift of the light spot on the screen resulting from the change of the beam direction may be far greater than the shift resulting from the change in the metal level.
Because in the application considered a meniscus occurs on the surface of the liquid metal, the method of point measurement based on the reflected beam is completely useless. However, multipoint measurement allows the calculation of the metal surface curvature on the basis of the measurements performed by the method of the reflected beam, according to the solution of equation set (3).
For 0 ≤ i ≤ N:
    y_i = b − H_i − (a − x_i) tan(φ_i − 2α_i)
    y_i = x_i tan φ_i
For 0 < i < N:
    tan α_i = y_{i−1} (x_i − x_{i+1}) / [(x_{i−1} − x_i)(x_{i−1} − x_{i+1})]
            + y_i (2x_i − x_{i−1} − x_{i+1}) / [(x_i − x_{i−1})(x_i − x_{i+1})]
            + y_{i+1} (x_i − x_{i−1}) / [(x_{i+1} − x_{i−1})(x_{i+1} − x_i)]
At the ends:
    tan α_0 = y_0 (2x_0 − x_1 − x_2) / [(x_0 − x_1)(x_0 − x_2)]
            + y_1 (x_0 − x_2) / [(x_1 − x_0)(x_1 − x_2)]
            + y_2 (x_0 − x_1) / [(x_2 − x_0)(x_2 − x_1)]
    tan α_N = y_{N−2} (x_N − x_{N−1}) / [(x_{N−2} − x_{N−1})(x_{N−2} − x_N)]
            + y_{N−1} (x_N − x_{N−2}) / [(x_{N−1} − x_{N−2})(x_{N−1} − x_N)]
            + y_N (2x_N − x_{N−2} − x_{N−1}) / [(x_N − x_{N−2})(x_N − x_{N−1})]
And on the crucible axis:
    y_0 (2x_0 − x_1 − x_2) / [(x_0 − x_1)(x_0 − x_2)]
            + y_1 (x_0 − x_2) / [(x_1 − x_0)(x_1 − x_2)]
            + y_2 (x_0 − x_1) / [(x_2 − x_0)(x_2 − x_1)] = 0    (3)
where: x_i, y_i – coordinates of the i-th surface point; α_i – the tilt of the surface, estimated from the derivative of a parabolically approximated shape function; φ_i – the tilt of the i-th laser beam; H_i – the position of the i-th laser beam spot on the projection panel recorded by the camera; a, b – geometric values; N – the number of laser beams and estimated surface points. The last equation of the set represents the known, zero value of the surface tilt on the axis of the crucible. Unfortunately, although the application of the above method solves the problem of the meniscus curvature affecting the measurement of the beam
reflection, it does not eliminate the errors resulting from local changes in the tilt of the surface. The solution to this problem lies in the assumption that the curvature of the surface caused by the meniscus changes much more slowly than the local deformations of the surface shape. The method then relies on measuring the surface shape over a given period of time (short enough to record the momentary shape of the meniscus), and then averaging the obtained results in time.
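The tilt estimation embedded in equation set (3) — differentiating the parabola through three neighbouring surface points, with one-sided stencils at both ends — can be sketched as follows; the helper name is hypothetical.

```python
def parabolic_tilt(x, y):
    """Estimate tan(alpha_i) at every sample point from the derivative of
    the Lagrange parabola through three neighbouring points, as in eqn (3);
    one-sided stencils are used at both ends of the profile."""
    n = len(x)
    tilt = []
    for i in range(n):
        j = min(max(i, 1), n - 2)          # centre of the three-point stencil
        x0, x1, x2 = x[j - 1], x[j], x[j + 1]
        y0, y1, y2 = y[j - 1], y[j], y[j + 1]
        xi = x[i]
        # derivative of the interpolating parabola evaluated at x[i]
        d = (y0 * (2 * xi - x1 - x2) / ((x0 - x1) * (x0 - x2))
             + y1 * (2 * xi - x0 - x2) / ((x1 - x0) * (x1 - x2))
             + y2 * (2 * xi - x0 - x1) / ((x2 - x0) * (x2 - x1)))
        tilt.append(d)
    return tilt
```

Because the stencil is exact for quadratics, a parabolic meniscus profile is differentiated without error; the axis condition of eqn (3) then simply requires the returned tilt at the first (axis) point to vanish.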
Figure 3: Level measurement by the method of reflected light recording (laser tilted at φ; the reflected beam leaves the surface at φ − 2α and hits the projection panel at position H; a, b are geometric values).
Figure 4: Influence of surface tilt (a small tilt of the reflecting surface produces a large shift of the spot on the projection panel).
3 Method application
On the basis of the measurement methods developed, an experimental setup was designed in which the technique of the diffused beam was applied. A green semiconductor laser with a wavelength of 532 nm was used. The model chosen for recording the light (both reflected and diffused) was a colour camera, uEye UI-1640-C, with a resolution of 1280 × 1024, a CMOS matrix, a scanning frequency of 25 Hz at full resolution, and the possibility of increasing the frequency (at the expense of resolution) to 254 Hz. The camera was equipped with a Pentax C1614-M lens with a focal length of 16 mm. So instrumented, the camera monitors an area of 12 cm by 10 cm from a distance of 0.5 m. A 532 nm narrow-band interference filter with a bandwidth of 3 nm was also added to the measurement system. Figure 5 presents the arrangement of the stand for the measurement by the method of the diffused beam. The theoretical accuracy of the measurement of coordinates in this arrangement can be calculated from eqn (4) and is equal to about 10^-4 m. In reality, the accuracy is reduced as the tilt of the metal surface increases, because of the blurring of the light spot. For this reason the lowest accuracy is obtained near the crucible walls.

Δy = (ac ctg φ − bc) ΔL / (L − c ctg φ)²,    Δx = Δy ctg φ    (4)

where: Δx, Δy – measurement errors of the coordinates; φ – the tilt of the laser; L – the position of the laser beam spot recorded by the camera sensor; a, b, c – geometric values (fig. 1); ΔL – the accuracy of the camera sensor.
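Eqn (4) can be evaluated numerically to check the order of magnitude of the error. All numbers below (the geometry a, b, c, the sensor accuracy ΔL and the operating point L) are assumptions chosen only to illustrate the propagation; they are not the actual parameters of the setup.

```python
import math

def level_error(phi, L, a, b, c, dL):
    """Propagate the camera-sensor accuracy dL into the level error dy and
    the horizontal error dx, following the reconstructed eqn (4):
    dy = |a*c*ctg(phi) - b*c| * dL / (L - c*ctg(phi))**2,  dx = dy*ctg(phi)."""
    ctg = 1.0 / math.tan(phi)
    dy = abs(a * c * ctg - b * c) * dL / (L - c * ctg) ** 2
    dx = dy * ctg
    return dx, dy

# Illustrative (assumed) values: c = 16 mm, working distance a = 0.5 m,
# b = 0.1 m, laser tilt 1 rad, sensor reading 3 mm, a few-micron accuracy.
dx, dy = level_error(phi=1.0, L=0.003, a=0.5, b=0.1, c=0.016, dL=3.6e-6)
```

With numbers of this order the level error dy lands around 10^-4 m, consistent with the theoretical accuracy quoted above.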
Figure 5: Experimental setup (green laser, head with interference grid, camera lens with interference filter, crucible and coil).
4 Conclusion
The method of laser measurement of the surface profile may be successfully applied to record the surface of a liquid metal. The highest accuracy is obtained by the variant of this method in which the light diffused on the metal surface is recorded. This technique avoids the problems with the influence of the surface curvature on the change in the direction of the laser beam, which is why it was applied in our experimental setup. The measurements performed with the method incorporating the reflected beam allow us to calculate the curvature of a liquid metal on which the laser beam is not diffused; however, the disadvantages are the complexity and lower accuracy of the measurement. Further studies on the issues discussed in this paper should be devoted to the expansion of the method so that the whole surface of the metal can be scanned, which would allow its application to problems without axial symmetry.
Acknowledgement This research work was carried out within project No. N508 034 31/1889, financially sponsored by the Polish Ministry of Science and Higher Education.
References [1] Blacha, L., Fornalczyk, A., Przyłucki, R. & Golak, S., Kinetics of the evaporation process of the volatile component in induction stirred melts. 2nd International Conference on Simulation and Modelling of Metallurgical Processes in Steelmaking STEELSIM 2007, Graz, Austria, pp. 389-395, 2007. [2] Golak, S. & Przyłucki, R., Oxidation of the surface of a liquid metal in the induction furnaces. Acta Metallurgica Slovaca, 13, pp. 256-259, 2007. [3] Golak, S. & Przyłucki, R., The optimization of an inductor position for minimization of a liquid metal free surface. Electrotechnical Review, 11/2008, SIGMA-NOT, pp. 163-164, 2008. [4] Mass, H.G., Hentschel, B. & Schreiber, F., An optical triangulation method for height measurements on water surfaces. Videometrics VIII (Electronic Imaging 2003), ed. S. El Hakim, SPIE Proceedings Series Vol. 5013, pp. 103-109, 2003.
Damage assessment by automated quantitative image analysis – a risky undertaking P. Stroeven Faculty of Civil Engineering and Geosciences, Delft University of Technology, The Netherlands
Abstract This paper presents the economic concept of describing damage in concrete by a partially linear-planar system. The practical cases of prevailing compressive or tensile stresses are elaborated. These require only the quantitative analysis of the image patterns of vertical sections by sweeping test lines. Further, it is demonstrated that automation of quantitative image analysis generally yields biased information. Keywords: concrete, image analysis, automation, sweeping test line, vertical section.
1 Introduction
Concretes undergo a process of degradation during the lifetime of the engineering structure. The degree of degradation can be reflected by characteristics of the internal damage structure. The nature of this problem is three-dimensional (3D). Access to the relevant 3D information would require a random set of section images, which is a laborious and time-consuming operation. So, most investigations are of a 2D nature only. A method introduced by the author makes it possible to reduce such efforts tremendously [1-3]. It assumes the crack structure to be composed, in the most general case, of 3D, 2D and 1D portions. The method is therefore discussed for the practical cases of concrete under prevailing compressive stresses and prevailing tensile stresses, respectively. Damage characteristics under such conditions can be assessed on a single so-called vertical section. A parallel set of such sections may of course be necessary to reduce scatter to acceptable proportions. However, the efforts for sawing and quantitative image analysis are dramatically reduced.
doi:10.2495/CMEM090171
The images of such vertical sections can be analyzed by the directed secants approach [4-9], whereby the number of intersections is determined per unit of line length for a stepwise rotating grid of parallel lines. Information can be obtained on the total crack length per unit of area and on the degree and direction of crack orientation. This is 2D information; however, it is readily possible, without investment of extra labour, to assess the 3D specific crack surface area and the spatial orientation distribution of the cracks. This will be elaborated for the aforementioned practical cases. Due to the repetitive character of such investigations, one would be tempted to use an automated set up. However, the paper will demonstrate that the information is generally seriously biased [10, 11]. This will be accomplished mathematically as well as by a visualization method proposed by Underwood [12]. This approach gives detailed insight into the level of bias as a function of conditions, such as the degree of prevailing orientation in the damage structure. Since damage is a fractal phenomenon, the obtained results will fundamentally be a function of the magnification of the images. This allows performing a comparative study only.
2 Damage assessment
The damage structure according to the Stroeven concept [13] is denoted as a partially linear-planar structure. The 3D portion encompasses small flat crack elements dispersed isotropic uniformly random (IUR) in space. The 2D portion collects only small crack elements that are parallel to an orientation plane, but otherwise "randomly" distributed. The 1D portion unites crack elements all parallel to an orientation axis, but otherwise "randomly" dispersed. When the 2D portion can be neglected, a so-called partially linearly oriented system is obtained. This model can be used in situations where compressive stresses are prevailing. Alternatively, for high tensile stresses the partially planar oriented damage model can be employed, in which the 1D portion is neglected. Damage can be seen as surfaces distributed in space (representing the two crack surfaces at very small distances). Crack density is commonly expressed as total surface area, S, per unit of volume, V. So, the leading descriptor of the damage structure is SV (in mm^-1). Alternatively, in 2D the total crack length, L, per unit of area, A, yields information on LA. Measurements are made by superposition of line grids on images, of which fig. 1 (left) reveals only a small part (a pore is visible at the bottom). Contrast was improved by applying a fluorescent spray. This author has extensively used this method of directed secants over the past 30 years. Incidentally, other researchers in concrete technology have also used this very method [14-17]. Fig. 1 (right) shows grid orientations on a full-size hand-made copy of a section image. Larger aggregate grains are visible as crack-free areas.
2.1 Concrete in compression
Uniaxial compressive stresses produce predominantly cracks that are parallel to the stress direction. In a more general set up, we assume a portion of cracks
Figure 1: Load-induced cracks in a section plane visualized by fluorescent spray (left), and application of the method of directed secants for the assessment of intersection densities (right).
distributed isotropic uniformly random (IUR); this is denoted as the SV3 component. The remaining portion consists of cracks parallel to the orientation axis, denoted as the SV1 component. Total crack density is the summation of both components: SV1 + SV3 = SV. The proper approach (in technical as well as economic terms) is sampling by vertical sections. Hence, the specimen should be cut to yield one or more image planes parallel to the orientation axis. Such section images can provide the 3D information on SV. Averaging over more vertical images reduces the scatter around the average, and thus improves the reliability of the results. The results are unbiased, which means that averaging over an increasing number of images will bring the average closer and closer to the population value we are interested in. The analysis of the images is accomplished by line scanning. A grid of parallel lines is superimposed on the crack pattern, successively in the direction of the orientation axis (indicated by index ∥) and perpendicular to it (indicated by index ⊥), as shown in fig. 1. The following relationships can be derived:

PL∥ = (1/2) SV3    and    PL⊥ = (1/2) SV3 + (2/π) SV1    (1)
Hence, crack density is obtained by simple mathematical manipulation, yielding

SV = (π/2) PL⊥ + (2 − π/2) PL∥    (2)

P in eqns (1) and (2) stands for the number of intersections of grid lines and cracks. The constants account for the probabilities that cracks appear in the section image [6,7].
2.2 Concrete in tension
The methodology is very similar. The vertical section is again parallel to the tensile stresses. The grid is also successively superimposed in the stress direction and perpendicular to it, with the same indices accounting for the position of the grid. In this case we have

PL∥ = (1/2) SV3 + SV2    and    PL⊥ = (1/2) SV3    (3)

Again, simple manipulation will yield

SV = SV2 + SV3 = PL∥ + PL⊥    (4)

Here, SV2 stands for the portion of cracks perpendicular to the tensile stresses, with SV = SV2 + SV3. To accomplish such operations, the contrast should be improved by ink penetration or by the application of a fluorescent spray (applied in the case of fig. 1) or dye. Details can be found in the relevant literature [5,8].
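As a consistency check, eqns (1)–(4) can be exercised numerically: synthesize intersection counts from known SV components and recover SV. A minimal sketch (the function names are ours, not from the paper):

```python
import math

def sv_compression(p_par, p_perp):
    """Eqn (2): SV = (pi/2)*PL_perp + (2 - pi/2)*PL_par."""
    return (math.pi / 2) * p_perp + (2 - math.pi / 2) * p_par

def sv_tension(p_par, p_perp):
    """Eqn (4): SV = PL_par + PL_perp."""
    return p_par + p_perp

# Synthetic compression case, eqn (1): SV3 = 0.4, SV1 = 0.6 (mm^-1)
p_par = 0.5 * 0.4                          # (1/2)*SV3
p_perp = 0.5 * 0.4 + (2 / math.pi) * 0.6   # (1/2)*SV3 + (2/pi)*SV1
recovered_sv = sv_compression(p_par, p_perp)   # should give SV1 + SV3 = 1.0
```

Substituting eqn (1) into eqn (2) cancels the π/4 terms exactly, so the recovered SV equals SV1 + SV3 regardless of the degree of orientation; the same holds for eqns (3) and (4) in tension.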
3 Biases due to automated set up
3.1 Analogue images
An elegant way to reveal the differences in outcomes between quantitative image analysis by directed secants applied to analogue and to digitised images is to make use of the earlier mentioned Stroeven concept. Hence, LA is assumed to consist of two portions, a "random" one, denoted by LAr, and a fully oriented one, indicated by LAo. The latter "sticks" (short straight elements forming part of the 2D crack) run parallel to the orientation axis, which supposedly makes an angle β with the positive x-axis. This strategy allows dealing with both portions separately. The rose of intersections per unit of grid line length (intersection densities) of the random portion approximates (for very large images) a circle around the origin with radius PLr. The rose of intersection densities for the oriented portion approximates a circle through the origin with PLo(β) = 0 and PLo(ς) = PLo max. Note that β = ς + π/2 and PLo(θ) = PLo max cos(θ − ς) for an arbitrary angle θ. When combined, the rose of intersection densities is obtained for a partially linear structure of lineal features in a plane, as shown in fig. 3 [10].
Figure 2: Rose of intersection densities for oriented (left) and random (right) line segments in a plane (together forming the 2D crack pattern) for an analogue image.
Figure 3: Rose of intersection densities for random (dashed line) plus oriented line segments in a plane (continuous line; small circles) shown in fig. 2. The crack pattern is supposed to encompass only a relatively small lineal portion (a so-called weakly oriented pattern).
3.2 Digitized images
The smooth contours of the cracks can be conceived, in conventionally digitized images, as replaced by two orthogonal sets of mono-size sticks, as shown in fig. 4. As before, a distinction can be made between the "random" portion and the oriented one running parallel to an orientation axis enclosing an angle β with the positive x-axis; again β = ς + π/2. The random portion consists of two equally large sub-sets of sticks oriented in the respective coordinate directions {x,z}. This leads to two equally large roses of intersection densities that run through the origin and are orthogonally oriented. The circle diameter is PLr. The summation yields the symmetric flower-like rose displayed in fig. 5 at the bottom. In an arbitrary direction, the intersection density is given by
PLr(θ) = PLr (sin θ + cos θ) = √2 PLr cos(θ − π/4)    (5)
A striking but expected observation is the preferred orientation in a direction enclosing an angle π/4 with the positive x-axis; the random portion is reflected significantly biased by a digitised image, with maximum value √2 PLr. The projected portions of the oriented fraction of the cracks in the x- and z-directions are L'A(0) = LAo sin ς and L'A(π/2) = LAo cos ς, respectively. This can be transformed to intersection counts per unit of grid line length, whereby the horizontal system of oriented sticks leads to two circles sharing the origin of the polar system and having the z-axis as symmetry line. The intersection with the z-axis is PLo sin ς. The second system of oriented sticks, parallel to the z-axis, is modelled in an analogous way by two circles sharing the origin of the polar system and having the x-axis as symmetry line. The intersection with the x-axis is PLo cos ς. The two pairs of circles and their summation are displayed in fig. 5 (top). The intersection density in an arbitrary direction of the oriented portion is

PLo(θ) = PLo sin θ sin ς + PLo cos θ cos ς = PLo cos(θ − ς)    (6)
The direction of the principal axis of the rose is found by setting the first derivative of eqn. (6) to zero. This occurs for θ = ς, whereby PLo(ς) = PLo.
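The two roses of eqns (5) and (6) can be summed and inspected numerically. The sketch below is a minimal illustration; the orientation angle ς and the density values are hypothetical numbers chosen only to contrast a strong oriented signal with a weak one.

```python
import numpy as np

def rose_random(theta, P_Lr):
    # Eqn (5): digitized random portion, sqrt(2)*P_Lr*cos(theta - pi/4)
    return np.sqrt(2.0) * P_Lr * np.cos(theta - np.pi / 4)

def rose_oriented(theta, P_Lo, sigma):
    # Eqn (6): oriented portion, P_Lo*cos(theta - sigma)
    return P_Lo * np.cos(theta - sigma)

theta = np.linspace(0.0, np.pi / 2, 100001)
sigma = np.radians(75.0)  # hypothetical crack orientation angle

# Strong signal: oriented portion dominates, so the peak stays close to sigma
strong = rose_oriented(theta, P_Lo=10.0, sigma=sigma) + rose_random(theta, P_Lr=1.0)
# Weak signal: random portion dominates, so the peak is pulled toward pi/4
weak = rose_oriented(theta, P_Lo=1.0, sigma=sigma) + rose_random(theta, P_Lr=10.0)

peak_strong = theta[np.argmax(strong)]
peak_weak = theta[np.argmax(weak)]
print(np.degrees(peak_strong), np.degrees(peak_weak))
```

The printed peak directions illustrate the bias discussed below eqn (6): only a strong oriented signal keeps the rose maximum near ς.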
Figure 4: Detail of a conventionally digitized section image of a particulate structure whose surfaces, like cracks, are dispersed in 3D space. Smooth surface contours are replaced by orthogonal sets of straight line segments.
Figure 5: Rose of PL(θ) values for oriented (top) and random (bottom) line segments in a digitized image of a crack pattern.
The rose for the partially oriented lineal crack features is obtained by summation of the total roses at the top and bottom of fig. 5. The mathematical expression for this system is obtained upon adding eqns. (5) and (6). In both cases it becomes obvious that the direction of preferred orientation, β = ς + π/2, is only reflected by the experimental data when the signal is very strong (i.e., PLo/PLr ≫ 1); when it is weak, the apparent preferred orientation direction will be very close to π/4. Intermediate situations will be biased to an unknown degree when used for predicting the direction of preferred orientation.
4 Discussion
A more detailed insight into the differences between quantitative image analysis outcomes obtained on analogue and digitized images can be achieved by superimposing the relevant estimates assessed on the different images for the fully oriented and random portions separately. Fig. 6 (top) presents the fully oriented portions as reflected by the two approaches. In the first and third quadrants the solutions are identical, whereas they are distinctly different in the other two quadrants. Digitized images only offer correct information along the axes of (four-connexity) digitization (fig. 6, bottom). All other measurements will be biased to an unknown degree. Only when confronted with very strong signals might one successfully assess such geometric parameters by an automated approach to digitized images. This is confirmed by equating to zero the first derivative of the aforementioned expressions for the analogue and digitized images, which yields (in the first quadrant) PLo sin(θ − ς) = 0 and PLo sin(θ − ς) + √2 PLr sin(θ − π/4) = 0, respectively. Only for the analogue image is the correct solution (θ = ς) obtained.

Figure 6: Digitization-induced biases for the oriented (top) and random (bottom) portions of the rose of intersection densities. The dashed line represents the analogue image, the solid line the digitized image subjected to the automated model.

Estimation of the total length of lineal features in a plane, LA, is based on

LA = (π/2) P̄L(θ) = (π/2) ∫[0, π/2] [PLo cos(θ − ς) + √2 PLr cos(θ − π/4)] dθ / ∫[0, π/2] dθ

This holds for analogue as well as for digitized images. The respective solutions are

LA = (π/2) PLr + PLo (anal.)  and  LA = 2 PLr + √2 cos(ς − π/4) PLo (digit.)    (7)

LA as obtained from the digitized image is always biased (i.e., overestimated); even for very strong signals, the ratio of digitized to analogue image information is √2 cos(ς − π/4). For very weak signals this ratio yields 4/π. For mixed situations, the bias can be anywhere between √2 and 4/π, with even an influence of the direction of preferred orientation.

A parameter used to characterize the strength of preferred orientation is the degree of orientation, ω. This parameter can be derived for a space system (ω3), as well as for a planar one (ω2), from the information in this paper. As an example, for ω2 this leads to

ω2 = 100 PLo / (1.572 PLr + PLo) = 100 / (1 + 1.572 PLr/PLo) (%) (analogue)    (8)

ω2 = 100 [1 − 1.73 (PLr/PLo) cos(ς + π/4)] / [1 + (PLr/PLo)(1 + cos(ς + π/4))] (%) (digitized)    (9)

The correct signal for the analogue image declines smoothly from 100% for strong signals to 0% for weak signals. For the digitized image it also declines from 100% for strong signals, but for weak signals it ends anywhere between 100% and 0%, depending on the orientation direction in the image field. So the information will generally be biased to an unknown degree.
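The overestimation of LA can be checked numerically. The sketch below assumes the two solutions of eqn (7) in the forms LA = (π/2)PLr + PLo (analogue) and LA = 2PLr + √2 cos(ς − π/4) PLo (digitized); the density values and the angle ς are hypothetical.

```python
import numpy as np

def LA_analogue(P_Lr, P_Lo):
    # Eqn (7), analogue image: LA = (pi/2)*P_Lr + P_Lo
    return np.pi / 2 * P_Lr + P_Lo

def LA_digitized(P_Lr, P_Lo, sigma):
    # Eqn (7), digitized image: LA = 2*P_Lr + sqrt(2)*cos(sigma - pi/4)*P_Lo
    return 2.0 * P_Lr + np.sqrt(2.0) * np.cos(sigma - np.pi / 4) * P_Lo

sigma = np.radians(30.0)  # hypothetical orientation angle
# Very strong signal (P_Lr -> 0): bias ratio approaches sqrt(2)*cos(sigma - pi/4)
r_strong = LA_digitized(0.0, 1.0, sigma) / LA_analogue(0.0, 1.0)
# Very weak signal (P_Lo -> 0): bias ratio approaches 4/pi
r_weak = LA_digitized(1.0, 0.0, sigma) / LA_analogue(1.0, 0.0)
print(r_strong, r_weak)  # both ratios exceed 1: the digitized estimate overestimates
```

The weak-signal ratio reproduces the 4/π value quoted in the text, and the strong-signal ratio stays between 1 and √2 depending on ς.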
5 Conclusions
Spatial damage structures can be analyzed by application of quantitative image analysis with a sweeping test line system on orthogonal section images only. For the practical cases of prevailing compressive or tensile loadings, (a set of) vertical sections will do, dramatically reducing the effort required for sample preparation and image analysis. The choice to automate the quantitative image analysis operation is a risky one, because characteristic measures of the damage structure, like total crack length (or specific crack surface area) and the degree and direction of prevailing crack orientation, will be seriously biased.
References
[1] Stroeven, P., Shah, S.P., Use of radiography-image analysis for steel fiber reinforced concrete. Testing and Test Methods of Fiber Cement Composites, ed. R.N. Swamy, Constr. Press: Lancaster, pp. 345-353, 1978.
[2] Stroeven, P., de Haan, Y.M., Structural investigations on steel fiber reinforced concrete. High Performance Reinforced Cement Composites, eds. H.W. Reinhardt & A.E. Naaman, E & FN Spon: London, pp. 407-418, 1992.
[3] Stroeven, P., Damage evolution in concrete; application of stereology to quantitative image analysis and modelling. Advanced Materials for Future Industries: Needs and Seeds, eds. I. Kimpara, K. Kageyama & Y. Kagawa, SAMPE: Tokyo, pp. 1436-1443, 1991.
[4] Nemati, K.M., Stroeven, P., Stereological analysis of micro-mechanical behaviour of concrete. Mat. Struct., 34, pp. 486-494, 2000.
[5] Reinhardt, H.W., Stroeven, P., den Uijl, J.A., Kooistra, T.R., Vrencken, J.H.A.M., Einfluss von Schwingbreite, Belastungshöhe und Frequenz auf die Schwingfestigkeit von Beton bei niedrigen Bruchlastwechselzahlen. Betonw. & Fertigteil-Techn., 44, pp. 498-503, 1978.
[6] Stroeven, P., Hu, J., Gradient structures in cementitious materials. Cem. Concr. Comp., 29, pp. 313-323, 2007.
[7] Stroeven, P., Hu, J., Stereology: historical perspective and applicability to concrete technology. Mat. Struct., 39, pp. 127-135, 2005.
[8] Stroeven, P., Some observations on microcracking in concrete subjected to various loading regimes. Engr. Fract. Mech., 35(4/5), pp. 775-782, 1990.
[9] Stroeven, P., Geometric probability approach to the examination of microcracking in plain concrete. J. Mat. Sc., 14, pp. 1141-1151, 1979.
[10] Stroeven, P., Stroeven, A.P., Dalhuisen, D.H., Image analysis of 'natural' concrete samples by automated and manual procedures. Cem. Concr. Comp., 23, pp. 227-236, 2001.
[11] Chaix, J.M., Grillon, F., On the rose of direction measurements on the discrete grid of an automatic image analyser. J. Microsc., 184, pp. 208-213, 1996.
[12] Underwood, E.E., Quantitative Stereology, Addison-Wesley: Reading (MA), 1970.
[13] Stroeven, P., Structural modelling of plain and fibre reinforced concrete. J. Comp., 13, pp. 129-139, 1982.
[14] Stang, H., Mobasher, B., Shah, S.P., Quantitative damage characterization in polypropylene fibre reinforced concrete. Cem. Concr. Res., 20, pp. 540-558, 1990.
[15] Carcassès, M., Ollivier, J.P., Ringot, E., Analysis of microcracking in concrete. Acta Stereol., 8(2), pp. 307-312, 1989.
[16] Ringot, E., Automatic quantification of microcracks network by stereological method of total projections in mortars and concrete. Cem. Concr. Res., 18, pp. 35-43, 1988.
[17] Nemati, K.M., Generation and interaction of compressive stress-induced microcracks in concrete. PhD Thesis, University of California: Berkeley, 1994.
Section 4 Detection and signal processing Special session chaired by A. Kawalec
Antenna radiation patterns indication on the basic measurement of field radiation in the near zone

M. Wnuk
Faculty of Electronics, Military University of Technology, Poland
Abstract

The paper presents a method of measuring antenna radiation patterns in the near zone. Using analytical methods, the measured data are then transformed into radiation patterns in the far field. This technique is used for measurements in closed areas, which enables researchers to control the environmental characteristics. The calculated radiation patterns are as precise as range measurements in the far field. It needs to be noted, however, that this method requires more complex and expensive calibration procedures as well as more sophisticated software, and the radiation patterns are not obtained in real time.
Keywords: antenna radiation, near zone, far field.
1 Introduction

An antenna is one of the important components of a radio communication system. It is designed to convert the input current into an electromagnetic field and to emit it into the surrounding space (transmitting antenna), or the reverse (receiving antenna). The antenna is therefore a device which matches the waveguide to free space. Due to its location between a transmitting or receiving device and the space, the requirements set for an antenna are imposed both by the conditions of propagation of electromagnetic waves in space and by the interaction of the antenna, as an element of the given device, with its operation [1]. Its parameters and patterns affect not only effective information transfer but also meeting the compatibility conditions, i.e. the antenna should not disturb the operation of other systems, particularly along the lateral lobe radiation lines. That is why in recent years the antenna measurement technique has developed very rapidly. It must ensure high accuracy of measurements, which must be taken at a receding signal level. This results from the necessity to measure lateral lobes below -40 dB. In order to meet these requirements the measurements should be taken in special conditions, which are ensured in anechoic chambers.
2 Antenna radiation zones

The area surrounding the antenna may typically be split into three zones of the electromagnetic field generated by that antenna: the reactive near-field, the radiating near-field and the far-field (Figure 1). The far-field expands to infinity and is that area of space in which the electromagnetic field changes with distance r from the transmitting antenna according to the exp(-jkr)/r relationship, where k = 2π/λ and λ is the wave length. It is assumed that the far-field expands from the distance

Rg = 2D²/λ + λ    (1)

from the studied antenna to infinity, where D is the largest geometrical dimension of the studied antenna.
The factor λ added to the 2D²/λ relationship covers the case in which the maximum geometrical dimension of the antenna is smaller than the wave length λ. The space area stretching between the antenna and the conventional limit of the far-field is called the near-field, with the reactive near-field expanding between the studied antenna and distance λ. In the reactive near-field the electric and magnetic field phases are almost in quadrature, and the Poynting vector is of a complex nature. The imaginary part of the Poynting vector is responsible for storing the energy of the electromagnetic field near the antenna surface, and the real part relates to the energy emitted by the antenna.

Figure 1: Zones around the antenna (field distribution versus distance; reactive near-field up to λ, near-field up to 2D²/λ, far-field beyond).
At a distance greater than λ from the antenna the electromagnetic field has a complex nature and changes significantly as a function of distance from the antenna; this is the so-called radiating near-field. Antenna measurements in the near-field are usually taken in the radiating near-field. Sometimes areas of the electromagnetic field around the antenna are named with terms borrowed from optics, such as the Fresnel zone and the Fraunhofer zone. The Fraunhofer zone is a term equivalent to the far-field, whereas the Fresnel zone expands from the distance [1]

Rd = (D/2)(D/(2λ))^(1/3) + λ    (2)

to the conventional limit of the far-field.
Traditional methods require the measurements to be taken in the far-field. It is difficult to meet this requirement in the case of antennas operating in the microwave range. For instance, the radius of this zone, for an L = 3 m antenna with a working frequency of f = 9 GHz, expands to a distance of over 540 m. It should be noted that the size of the largest anechoic chambers does not exceed 50 m. Hence the need to develop methods which allow reduction of the size of the measured structure to dimensions suitable for confined spaces such as an anechoic chamber. This problem may be solved by taking measurements in the near-field, at the so-called (4÷10)λ distance. There are many methods of measurement in the near-field; three of them are generally used: the planar method (Figure 2(a)), the cylindrical method (Figure 2(b)) and the spherical method (Figure 2(c)). Each of the methods has both advantages and disadvantages. Spherical scanning requires larger anechoic chambers compared with the remaining methods. Cylindrical scanning proves excellent for measurement of area monitoring radars. Planar scanning is limited by the angular sector which allows measurement of the main beam and the nearest lateral lobes. The main advantage of planar scanning consists in the dense and uniform distribution of sampling points in the grid. In the case of scanning in a polar (uniaxial) or bipolar (biaxial) plane it is possible to obtain a scanning plane larger than that offered by the anechoic chamber.
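The far-field limit quoted above follows directly from eqn (1); a minimal sketch (the helper name is arbitrary):

```python
# Far-field (Fraunhofer) limit of eqn (1): Rg = 2*D**2/lam + lam
c = 299_792_458.0  # speed of light, m/s

def far_field_limit(D, f):
    lam = c / f
    return 2.0 * D**2 / lam + lam

# The paper's example: a D = 3 m antenna at f = 9 GHz
Rg = far_field_limit(3.0, 9e9)
print(round(Rg, 1))  # a little over 540 m, far beyond any anechoic chamber
```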
Figure 2: View of setups for antenna measurements in the near-field: (a) spherical method; (b) cylindrical method; (c) rectangular planar method.
In the methods of antenna measurement in the near-field, the values of the electromagnetic field are measured at discrete points on a preset surface. The sampling points are located in the nodes of a suitably defined grid inscribed on this surface.
Three types of sampling point grids are applied; they are presented in Fig. 3. Excessively dense sampling is not necessary for accurate representation of the electromagnetic field. In practice, a ca. 10-20% redundancy coefficient is introduced. Redundant sample density occurs mostly in the patterns in which the lines of sampling points converge into a single central point (Fig. 3(b), 3(c)). Such patterns of sampling point grids are often used in practice due to the advantageous kinematic systems of the scanner.

Figure 3: Measurement grids: (a) rectangular grid, with spacings ∆x and ∆y and sample points (m∆x, n∆y, zt); (b) uniaxial planar grid; (c) biaxial planar grid.
The radiating area of the near-field expands between the distance equal to the wave length λ from the antenna and the distance determined by formula (1). Beyond this distance we have the far-field, where the angular distribution of energy does not oscillate with distance and the radiated power falls off with distance. The dimension of the measurement area is important when considering the accuracy of the planar measurement technique in the near-field.

Figure 4: Clarification of the size of the angular sector of the area of importance of the measured pattern (sampling plane versus antenna aperture plane).
The size and location of the measurement area define the value of the angular sector of the area of importance. The size of this angular sector depends, i.a., on the size of the scanning surface of the electromagnetic field and on the distance of this surface from the aperture of the tested antenna [7]. The computed radiation pattern in the far-field will be precise within the ±ΘS area, where

ΘS = arctg((a − D)/(2zt))    (3)
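Eqn (3) is readily evaluated; in the sketch below the scan-plane width, antenna diameter and scan distance are hypothetical values chosen only to show the order of magnitude of the valid angular sector.

```python
import math

def valid_angle(a, D, z_t):
    # Eqn (3): half-angle of the region where the computed far-field
    # pattern is accurate, for a planar scan of width a at distance z_t
    return math.atan((a - D) / (2.0 * z_t))

# Hypothetical numbers: 3 m scan plane, 0.6 m dish, scan at about 4 wavelengths (~0.13 m at 9 GHz)
theta_s = math.degrees(valid_angle(a=3.0, D=0.6, z_t=0.13))
print(round(theta_s, 1))
```

A wide scan plane close to the aperture gives a large valid sector; moving the scan plane away shrinks it quickly.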
Total angular coverage may be obtained in the spherical system by adding measurements in the near-field along the whole spherical surface of the near-field. Regardless of the chosen measurement method, the equipment for carrying out specific measurements is similar. The differences are primarily caused by: the arrangement of the measuring instruments with respect to the source of the measurement signal and the tested antenna, the type of measurements to be taken, and the required level of automation of measurements. The equipment required for taking measurements of antenna patterns in the near-field consists of 4 major sub-systems which may be controlled from one central control panel. These are:
• positioning and control subsystem,
• receiving subsystem,
• signal source subsystem,
• measurement data saving and processing subsystem.
It should be emphasized that when taking measurements in the near-field the results obtained should be transformed, with the use of analytical methods, into data suitable for computing radiation patterns in the far-field. The computed radiation patterns are as accurate as range measurements in the far-field. Depending on the required accuracy it is necessary to use more complex and expensive calibration procedures and more complicated software, and the radiation patterns are not obtained in real time.
3 Theoretical basis for determination of the antenna pattern based on near-field measurement
Modern planar scanning techniques in measurements of the antenna near-field are based on representation of the field in the form of a planar wave spectrum. Electromagnetic waves of a given frequency may be represented as a superposition of elementary planar waves of the same frequency. Further considerations are based on a rectangular x, y, z coordinate system (Figure 5). In a passive and lossless area of free space, the Maxwell equations describing the phenomenon of electromagnetic wave propagation may be transformed into homogeneous second-order Helmholtz equations [9],
∇²E + k²E = 0    (4)
∇²H + k²H = 0    (5)
∇·E = ∇·H = 0    (6)
On the assumption that the observations of the components of the electric and magnetic field vectors, for a wave sinusoidally variable in time, are carried out at the same moment t = tp at all studied points of space, the terms dependent on the independent variable t representing time have been dropped in the above equations. Due to the linearity of the said operators and the linearity of the medium in which the described phenomenon of electromagnetic wave propagation takes place, it is fairly easy to prove that the equations below satisfy the set of equations (4), (5), (6) and the boundary requirements in the plane z = 0:

E(x, y, z) = ∫∫ A(kx, ky) exp(−j k·r) dkx dky    (7)

H(x, y, z) = ∫∫ k × A(kx, ky) exp(−j k·r) dkx dky    (8)

k · A(kx, ky) = 0    (9)

where the integrals extend over −∞ < kx, ky < +∞, and:
k = kx ix + ky iy + kz iz - the wave vector, indicating the direction of propagation of the wave described by wave equations (4), (5), (6),
k² = k·k - the square of the wave number (k is the length of the wave vector),
r = x ix + y iy + z iz - the vector indicating the observation point,
A(kx, ky) = Ax(kx, ky) ix + Ay(kx, ky) iy + Az(kx, ky) iz - the vector describing the planar wave spectrum.
Figure 5: Location of the antenna in the reference arrangement adopted in the analysis (tested antenna at the origin (0,0,0), sampling plane, and observation point (x, y, z) at distance r and angles Θ, ϕ).

The integrand A(kx, ky) exp(−j k·r) occurring in relationships (7) and (9) represents a homogeneous planar wave propagating along the direction determined by vector k; therefore a monochromatic wave emitted through the aperture may be recorded as a superposition of planar waves with the same frequency, different amplitudes and different directions of expansion. Equation (9) in turn, which is a natural consequence of the Gauss law for a
passive area expressed in the form of equation (6), allows distinguishing two independent components (here Ax(kx, ky) and Ay(kx, ky)) of vector A(kx, ky):

Az(kx, ky) = −(1/kz) (Ax(kx, ky) kx + Ay(kx, ky) ky)    (10)
In order to determine the value of the electric field for an aperture located in the far-field, the following relationship was obtained with the use of expression (7):

E(r, θ, ϕ) = (j/(2π)) (exp(−jkr)/r) kz A(kx, ky) = (j/(2π)) (exp(−jkr)/r) kz [Ax(kx, ky), Ay(kx, ky), Az(kx, ky)]    (11)

where kx = k sin θ cos ϕ, ky = k sin θ sin ϕ, kz = k cos θ, and Az(kx, ky) is expressed by (10).
The necessity to determine the components Eθ(r, θ, ϕ) and Eϕ(r, θ, ϕ) of the far-field in spherical coordinates implies carrying out transformations which bring relationship (11) to the form:

E(r, θ, ϕ) = (jk exp(−jkr)/(2πr)) [(Ax(kx, ky) cos ϕ + Ay(kx, ky) sin ϕ) iθ + cos θ (Ay cos ϕ − Ax sin ϕ) iϕ] = Eθ(kx, ky) iθ + Eϕ(kx, ky) iϕ    (12)
In the next step, depending on the method of polarisation of the tested aperture, we determine the co-polarisation Eco(θ, ϕ) and cross-polarisation Ecross(θ, ϕ) patterns.
For polarization of the antenna Ex:

Eco(θ, ϕ) = Eθ(θ, ϕ) cos ϕ − Eϕ(θ, ϕ) sin ϕ = Ax(kx, ky)(cos²ϕ + sin²ϕ cos θ) + Ay(kx, ky) sin ϕ cos ϕ (1 − cos θ)    (13)

Ecross(θ, ϕ) = Eθ(θ, ϕ) sin ϕ + Eϕ(θ, ϕ) cos ϕ = Ax(kx, ky) sin ϕ cos ϕ (1 − cos θ) + Ay(kx, ky)(sin²ϕ + cos θ cos²ϕ)    (14)

In plane E (ϕ = 0):

Eco(θ, ϕ) = Ax(kx, ky)    (15)
Ecross(θ, ϕ) = Ay(kx, ky) cos θ    (16)

In plane H (ϕ = π/2):

Eco(θ, ϕ) = Ax(kx, ky) cos θ    (17)
Ecross(θ, ϕ) = Ay(kx, ky)    (18)
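The decomposition can be cross-checked numerically. The sketch below assumes the Ludwig-3 style definitions Eco = Eθ cos ϕ − Eϕ sin ϕ and Ecross = Eθ sin ϕ + Eϕ cos ϕ, with Eθ and Eϕ taken from the bracket of eqn (12) and the common amplitude factor dropped; the function name and sample values are hypothetical.

```python
import numpy as np

def copol_crosspol(Ax, Ay, theta, phi):
    # E_theta and E_phi from the bracket of eqn (12), common factor dropped
    E_t = Ax * np.cos(phi) + Ay * np.sin(phi)
    E_p = np.cos(theta) * (Ay * np.cos(phi) - Ax * np.sin(phi))
    # Ludwig-3 style co- and cross-polar components, eqns (13)-(14)
    E_co = E_t * np.cos(phi) - E_p * np.sin(phi)
    E_cx = E_t * np.sin(phi) + E_p * np.cos(phi)
    return E_co, E_cx

theta = 0.7        # an arbitrary observation angle, rad
Ax, Ay = 1.2, 0.5  # hypothetical spectrum components at this direction

co_E, cx_E = copol_crosspol(Ax, Ay, theta, 0.0)        # E plane, cf. eqns (15)-(16)
co_H, cx_H = copol_crosspol(Ax, Ay, theta, np.pi / 2)  # H plane, cf. eqns (17)-(18)
print(co_E, co_H)  # Ax and Ax*cos(theta), respectively
```

The E-plane and H-plane values reproduce the special cases (15)-(18) term by term.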
In order to determine the far-field pattern it is necessary to know the components Ax(kx, ky) and Ay(kx, ky) of the planar wave spectrum vector A(kx, ky). In the case of observation of the electric field vector in the z = zt plane, the vector equation (7) adopts the following form:
Ex(x, y, z = zt) = ∫∫ [Ax(kx, ky) exp(−j kz zt)] exp(−j kx x) exp(−j ky y) dkx dky    (19)

Ey(x, y, z = zt) = ∫∫ [Ay(kx, ky) exp(−j kz zt)] exp(−j kx x) exp(−j ky y) dkx dky    (20)

Ez(x, y, z = zt) = ∫∫ [Az(kx, ky) exp(−j kz zt)] exp(−j kx x) exp(−j ky y) dkx dky    (21)

where the integrals extend over −∞ < kx, ky < +∞ and

kz = (k² − kx² − ky²)^(1/2)    (22)

Selection of the sample spacing allows obtaining equations for the Fourier integrals whose structure is actually a modified version of the two-dimensional discrete Fourier transform or its inverse. Because of the considerable number of sampling points acquired while scanning the measurement plane, the choice of effective numerical algorithms for data processing becomes essential. A good example is offered by the fast Fourier transform (FFT) and inverse fast Fourier transform (IFFT) algorithms, which may be applied to determine the transforms based on row/column type decomposition. For a finite number of observation points (2N sampling points) and a rectilinear domain (s ∈ (−∆s, ∆s)) the expression may be as follows:
F̃(s) = Σ (n = −N+1 to N) ψ(s − n∆s) [sin w(s − n∆s) / (w(s − n∆s))] F(n∆s)    (23)
The approximating function ψ(s) is designed to ensure quick convergence of the approximation error with a growing rate of oversampling χ = (π/w)/∆s (π/w is the maximum admissible spacing, the Nyquist spacing, resulting from the sampling theorem) and minimisation of the so-called truncation error resulting from the finite size of the measurement grid. On the other hand, in the case of approximation over the angular variable domain ϕ (ϕ ∈ (−∆ϕ, ∆ϕ)), the following rule should be applied:

F̃(ϕ) = Σ (m = −M+1 to M) DMn(ϕ − m∆ϕ) ΩMr(ϕ − m∆ϕ) F(m∆ϕ)    (24)

where:
DMn(ϕ) = sin((2Mn + 1) ϕ/2) / ((2Mn + 1) sin(ϕ/2)) - the Dirichlet function,
and ΩMr(ϕ) plays the role of a function which reduces the value of the truncation error. In view of the fact that this paper describes the application of planar scanning, errors for this case shall not be analysed.
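The chain of eqns (19)-(21) and (11) amounts to: sample the field on the plane, Fourier-transform it to recover the spectrum, weight by kz, and read off the far-field pattern. The sketch below is a self-contained illustration under simplifying assumptions: a synthetic, uniformly illuminated square aperture is "sampled" in its own plane (so the exp(−j kz zt) back-propagation factor of eqn (19) drops out), only the Ex component is used, and only the ϕ = 0 pattern cut is formed; all sizes and names are hypothetical.

```python
import numpy as np

lam = 1.0                 # work in units of wavelength
k = 2 * np.pi / lam
dx = dy = lam / 2         # half-wavelength (Nyquist) sample spacing
M = N = 64

x = (np.arange(M) - M // 2) * dx
y = (np.arange(N) - N // 2) * dy
X, Y = np.meshgrid(x, y, indexing="ij")

# Synthetic measurement: uniformly illuminated 4-lambda square aperture
D = 4 * lam
Ex = ((np.abs(X) <= D / 2) & (np.abs(Y) <= D / 2)).astype(complex)

# Spectrum A_x(kx, ky): 2-D DFT of the sampled field (eqn (19) inverted)
A = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(Ex))) * dx * dy
kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(M, d=dx))

# Far-field cut in the phi = 0 plane: E(theta) ~ kz * A_x(kx, 0), eqn (11),
# with kx = k*sin(theta); restrict to the visible region |kx| < k
vis = np.abs(kx) < k
kz = np.sqrt(k**2 - kx[vis] ** 2)
pattern = np.abs(kz * A[vis, N // 2])
pattern /= pattern.max()

theta = np.degrees(np.arcsin(kx[vis] / k))
print(theta[np.argmax(pattern)])  # main beam at broadside (0 degrees)
```

For a scan plane located at zt away from the aperture, the spectrum would additionally be multiplied by exp(+j kz zt) before the kz weighting; it is omitted here because the synthetic field is taken in the aperture plane itself.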
4 Principles of field sampling in the near-field
In this paper the antenna was tested with the use of the planar method with a rectangular grid of sample points. This choice was based upon its major advantages: the low cost of the scanning mechanism, the smallest amount of computations and a stationary tested antenna. Data acquisition in a planar near-field is done over a rectangular x-y grid (Figure 3(a)), with maximum sample spacing in the near-field

∆x = ∆y = λ/2    (25)

The measurement procedure requires selecting the zt plane surface, at a distance from the tested antenna, where the measurements are taken. The zt distance should be located at least two or three wavelengths from the tested antenna, between the tested antenna and the near-field interaction limit. The plane in which the measurements are taken is split into a rectangular grid with M x N points spaced ∆x and ∆y apart and defined by coordinates (m∆x, n∆y, zt), where:

−M/2 ≤ m ≤ M/2 − 1  and  −N/2 ≤ n ≤ N/2 − 1    (26)
The values M and N are determined by the linear dimensions of the sampling plane divided by the sampling spaces. Measurements are taken until the signal at the plane edges reaches a level 40 dB below the highest signal level inside the measured plane. Defining a and b as the width and height of the measured plane, M and N are determined by the expressions:

M = a/∆x + 1  and  N = b/∆y + 1    (27)
The selected sampling spaces in the measurement grid should be smaller than half the wave length and should meet the Nyquist sampling criterion. If plane z = zt is located in the far-field of the source, the sampling space may grow to its maximum value λ/2. Points of the rectangular grid are spaced by the grid spacing:

∆x = π/kxo  and  ∆y = π/kyo    (28)

where kxo and kyo are real numbers and the largest values of kx and ky, respectively, such that f(kx, ky) ≅ 0 for |kx| > kxo or |ky| > kyo.
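The sampling rules of eqns (25)-(27) can be packaged in a few lines. In the sketch below the floor in M and N is a practical assumption for non-integer ratios, and the scan-plane dimensions and frequency are hypothetical.

```python
import math

c = 299_792_458.0  # speed of light, m/s

def plan_scan(f, a, b):
    # Eqn (25): sample spacing at the Nyquist limit, dx = dy = lambda/2
    lam = c / f
    dx = lam / 2
    # Eqn (27): number of samples along each side of the a x b scan plane
    # (floored to an integer count, a practical assumption)
    M = math.floor(a / dx) + 1
    N = math.floor(b / dx) + 1
    return dx, M, N

# Hypothetical scan of a 3 m x 3 m plane at 9 GHz
dx, M, N = plan_scan(9e9, 3.0, 3.0)
print(round(dx * 1000, 2), M, N)  # ~16.66 mm spacing, 181 x 181 samples
```

Even a modest scan plane thus requires tens of thousands of sampling points, which is why the FFT-based processing of section 3 is essential.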
5 Verification of the experiment
Determination of the radiation pattern of the antenna based on data from measurements taken in the near-field requires application of an advanced mathematical tool. Because of the complicated computing techniques, large volumes of input data and the requirement of graphical interpretation of the computation results, the Matlab 6.5 programme was used. Two computer programmes were developed. The first of them determines theoretical distributions of the electric field on the surface of the antenna aperture, as well as distributions of the intensity and phase of the electric field in a plane parallel to the aperture plane at distance zt from it. The second programme determines the cross-section of the radiation pattern of the tested parabolic antenna based on data from measurements or on theoretical data determined by the first programme. In order to check the correctness of the developed measurement setup concept, antenna measurements were carried out with a symmetrical dish reflector with diameter D = 0.6 m. The radiant element used was a half-wave dipole, combined with a circular convergent mirror. In this case the far-field occurs at a distance of 7.2 m, whereas the size of the available anechoic chamber allows measurements at a maximum distance of 5 m. The figures below present measurements of the amplitude and phase pattern of the electromagnetic field of the tested antenna, taken in the near-field in an anechoic chamber. The amplitude and signal phase were measured with the use of the HP 8530A vector analyzer with accuracy to two decimal places. The results of the measurements were standardized, with the 0 dB signal level adopted as the reference. The phase difference was computed by a frequency converter based on the ratio of the signal received from the antenna to the reference signal. Figures 6 and 7 present the measured distributions of the electric field amplitudes in the scanning plane for component Ex.
Figure 6: Spatial standardized amplitude pattern Ex [dB] of the tested antenna in the near-field, versus sampling directions x and y.

Figure 7: Standardized amplitude of electric field intensity.
Figure 8 presents the measured distribution of the electric field phase in the scanning plane.
The kierunkowa programme was used for determination of the following patterns: the theoretical one (based on data generated by the pole_bliskie programme) and the one obtained from computations (based on formula (11)). Both patterns are presented in one figure, Figure 9, which additionally presents the radiation pattern obtained from measurements. In such a diagram, comparisons and verification of the experiment may be done.
Figure 8: Spatial phase pattern of the tested antenna in the near-field (phase in degrees, versus sampling directions x and y).
The patterns have been marked with numbers and colours respectively: blue, the cross-section of the antenna radiation pattern determined from measurements (No 1); red, the cross-section of the antenna radiation pattern computed by the kierunkowa programme based on data received from the antenna near-field (No 2); green, the theoretical cross-section of the antenna radiation pattern generated by the pole_bliskie and kierunkowa programmes (No 3).

Figure 9: Radiation patterns of the tested antenna (measured, calculated and theoretical patterns versus angle).
While comparing the patterns of Figure 9, one may observe a contraction of patterns 2 and 3 as compared with pattern 1. This is caused not only by phase disturbances but also by failure to meet the far-field requirement during the measurement of pattern No 1. The difference between the far-field limit and the distance at which the measurements were taken, which amounts to 2.85 m, is so significant that it has a direct impact on the pattern form.
6
Conclusions
The analysis so far assumed that the components of the electric field vector tangent to the measurement plane are measured precisely at a point. In reality such a probe does not exist, and the antenna used for the measurements has finite geometrical dimensions; the values of the amplitude and phase are therefore averaged over its surface. The impact of the probe radiation pattern is also significant. The pattern was further distorted by the imprecise, non-automatic scanning setup: neither the precision of positioning the probe in the vertical plane nor the time needed to complete the measurements is satisfactory. The measurement results obtained confirmed the correctness of the adopted design assumptions and of the implemented algorithms. Radiation patterns transformed into the far-field agree with the theoretical patterns and with the results of comparative measurements.
Sub-ppb NOx detection by a cavity enhanced absorption spectroscopy system with blue and infrared diode lasers
Z. Bielecki1, M. Leszczynski2, K. Holz2, L. Marona2, J. Mikolajczyk1, M. Nowakowski1, P. Perlin2, B. Rutecka1, T. Stacewicz3 & J. Wojtas1
1 Military University of Technology, Poland
2 Institute of High Pressure Physics Unipress, Poland
3 Warsaw University, Poland
Abstract
This paper presents the possibilities of applying cavity enhanced absorption spectroscopy (CEAS) to nitrogen oxide (NOx) detection. The CEAS technique is based on an off-axis arrangement of an optical cavity. In this system, the concentration of an absorbing gas is determined by measuring the decay time of a light pulse trapped in an optical cavity. The measurements are not sensitive to fluctuations of the laser power or of the photodetector sensitivity. The setup includes a resonant optical cavity built with spherical mirrors of high reflectance; pulsed lasers are used as the light sources. NOx detection is carried out in the blue and infrared ranges. The signal is registered with a newly developed low noise photoreceiver. The features of the designed system show that it is possible to build a portable trace-gas sensor whose sensitivity could be comparable with that of chemical detectors. Such a system has several advantages: relatively low price, small size and weight, low power consumption, and the possibility of detecting other gases. Keywords: CEAS, NOx sensor.
1
Introduction
doi:10.2495/CMEM090191
Cavity ringdown spectroscopy (CRDS) is a high-sensitivity absorption-measurement technique [1–4]. It is based on measurement of the changes in the
relaxation time of a high-finesse optical cavity. The changes depend on the absorbance of the species filling the cavity. In CRDS, intensity measurements are replaced with time measurements, and hence the method is not sensitive to laser intensity fluctuations. The technique is based on the phenomenon of light trapping inside an optical cavity composed of two mirrors with a very high reflectivity coefficient (R > 0.99995). A pulse of laser light is injected into an optical cavity (resonator) equipped with spherical, highly reflective mirrors (Figure 1). The pulse undergoes multiple reflections in the resonator, and after each reflection part of the light leaves the resonator because the mirror reflectivity is below 100%. The light leaving the cavity is registered with a photoreceiver; the amplitude of the optical signal decreases exponentially.
Figure 1: Idea of the CRDS technique.
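The exponential decay described above can be illustrated with a short numerical sketch (not from the paper; the decay time and trace parameters below are assumed): the ring-down time τ is recovered by a linear fit to the logarithm of the signal.

```python
import numpy as np

# Synthetic ring-down trace I(t) = I0 * exp(-t / tau). Since
# ln I(t) = ln I0 - t / tau, a linear fit to the logarithm recovers tau.
# tau_true and the time axis are assumed illustrative values.
tau_true = 2.0e-6                      # decay time, s
t = np.linspace(0.0, 1.0e-5, 500)      # time axis, s
signal = np.exp(-t / tau_true)         # noise-free decay, I0 = 1

slope, _ = np.polyfit(t, np.log(signal), 1)   # slope = -1 / tau
tau_fit = -1.0 / slope
print(f"fitted tau = {tau_fit:.3e} s")
```

With a real, noisy trace the same fit would be applied after baseline subtraction, or a nonlinear exponential fit would be used instead.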
The decay rate of the trapped laser pulse depends on the mirror reflectivity coefficient R, the resonator length L, and the extinction, which consists of absorption and scattering of light in the absorber filling the cavity. Therefore, by measuring the resonator quality, the extinction coefficient can be determined [2]. The resonator quality can be characterised by the radiation decay time constant τ

τ = L / {c[(1 − R) + αL]},    (1)
where c is the speed of light and α is the extinction coefficient [3]. The decay time τ is measured once when the cavity is empty (α = 0), and then when the cavity is filled with the absorber (α > 0). By comparing the decay times for these two cases, and assuming that absorption dominates, the absorber concentration N can be found:

N = (1 / cσ)(1/τ − 1/τ₀),    (2)
where σ is the absorption cross section and τ₀, τ are the time constants of the exponential decay of the output signal for the empty resonator and for the resonator filled with the absorber, respectively [4]. Assuming that the relative precision of the τ determination is

X = (τ₀ − τ) / τ₀,    (3)
the detectable concentration limit N_L is given by

N_L = X / (cστ₀).    (4)
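Equations (2)-(4) can be evaluated directly; the sketch below uses assumed illustrative values for σ, τ and τ₀, not the paper's measurements.

```python
# Direct evaluation of eqs. (2)-(4) with assumed illustrative values.
c = 3.0e10              # speed of light, cm/s
sigma = 6.0e-19         # assumed absorption cross section, cm^2
tau0 = 2.0e-6           # decay time of the empty cavity, s
tau = 1.8e-6            # decay time with the absorber, s

N = (1.0 / (c * sigma)) * (1.0 / tau - 1.0 / tau0)   # eq. (2)
X = (tau0 - tau) / tau0                              # eq. (3)
N_L = X / (c * sigma * tau0)                         # eq. (4)
print(f"N = {N:.2e} cm^-3, X = {X:.2f}, N_L = {N_L:.2e} cm^-3")
```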
High sensitivity of the absorption measurement is achieved by increasing the effective optical path length up to several kilometres within the very small volume of the optical cavity. Pulsed or AM-modulated cw lasers are used. However, in order to avoid multimode excitation of the cavity (and multiexponential changes of the registered light), a CRDS setup requires spatial mode filtering and matching of the laser light to the resonator. In order to match the laser wavelength to the mirror reflectivity spectrum and to avoid broadband fluorescence pulses from the laser, spectral filtering must also be applied. These demands cause some disadvantages: the optical system becomes complicated and requires rigorous vibration isolation as well as temperature stability to operate with high sensitivity. These limitations pose engineering challenges for field deployment. Subsequently, several modifications of CRDS have been developed [5–7]. One of them is cavity enhanced absorption spectroscopy (CEAS), based on an off-axis arrangement of the laser and the optical cavity. The light is repeatedly reflected by the mirrors, as in the CRDS technique, but the light spots on the mirror surfaces are spatially separated, and the light fills the whole volume of the cavity. Avoiding spot overlap eliminates light interference and the sharp resonances of the cavity; consequently, a smooth and broad absorption spectrum can be observed. In practical implementations, the off-axis design of the CEAS technique additionally eliminates optical feedback from the cavity to the laser and reduces the sensitivity to vibration, which is especially important when diode lasers are used in the CEAS setup. However, instabilities in the light source intensity may decrease the signal-to-noise ratio. In this work we report on off-axis CEAS-based sensors applied to NO2, NO and N2O measurement. Such sensors are well suited for field measurements because of these advantages.
Furthermore, the simplicity of the CEAS setup reduces its cost.
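The kilometre-scale effective path mentioned above can be checked with the standard approximation L_eff ≈ L/(1 − R); here, as an assumption, the cavity length and mirror reflectivity quoted later in Section 4 are used.

```python
# Effective optical path of a high-finesse cavity: L_eff ≈ L / (1 - R).
# L and R are assumptions borrowed from the 50 cm cavity and the mirror
# reflectivity quoted in Section 4.
L = 0.5                 # cavity length, m
R = 0.999976            # mirror reflectivity at 414 nm
L_eff = L / (1.0 - R)   # effective path length, m
print(f"effective path ≈ {L_eff / 1000.0:.1f} km")
```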
2
Analysis of NOx absorption
The described method is based on measuring changes in the absorption of optical radiation. The changes are caused by trace amounts of the detected substances in the air. For this reason, both familiarity with the characteristic absorption spectra of the substances and proper selection of the diagnostic radiation source are very important. So far, the research and experimental work has focused on the detection of nitrogen dioxide. Note that the maximum of the nitrogen dioxide absorption spectrum lies in the range of 400–450 nm, and there are no absorption interferences from other gases or vapours normally present in the air in this spectral range. For this reason, it can be assumed that the intensity changes of the registered radiation passed through the investigated air are caused by changes of the nitrogen dioxide concentration. This assumption does not take into account the influence of scattering by aerosols and smoke present in the air; however, these can be removed from the investigated gas using special filters. For the other nitrogen oxides, an analysis of detection possibilities based on spectroscopy in the infrared spectral range was performed.
Figure 2: Absorption cross-section of nitrogen dioxide and the spectrum of the matched laser.
For this purpose, a detailed analysis of the absorption spectra was done. The required information was obtained from the database of the United States Environmental Protection Agency (EPA, www.epa.gov). The wavelength range of 2–10 μm was considered, since NOx measurements might suffer interference from other compounds commonly present in the air. The influence of water vapour and of other selected compounds was therefore also analysed. Figure 3 presents the absorption characteristic of nitrous oxide (N2O) in the range of 4.43–4.58 μm. It is worth noting that carbon monoxide (CO) and carbon dioxide (CO2) also contribute slightly in this spectral range. Because measurements could be distorted by air gases (e.g. CO2), N2O detection should be conducted in the first absorption band, i.e. 4.45–4.49 μm. For this spectral range, quantum cascade lasers seem the most suitable. Figure 3 also shows the possibility of matching the Alpes #sb1840UP laser to nitrous oxide detection using the cavity ring down spectroscopy method. Figure 4 shows the detailed absorption characteristic of nitric oxide (NO) in the range 5.10–5.60 μm. In this region H2O absorption is also observed; due to this fact, NO detection should be realised in the first absorption band, i.e. at 5.20–5.30 μm. The figure also presents the possibility of matching a quantum cascade laser to nitric oxide detection using the cavity ring down spectroscopy method.
The absorption spectra presented in Figures 3 and 4 are deeply modulated and sometimes split into trains of single peaks arising from the vibrational structure of the molecular energy levels. In contrast, the experimental spectra are expected to be smoother due to pressure broadening in the atmosphere. The most beneficial wavelengths for detecting the individual oxides of nitrogen occur in the regions listed in Table 1.
Figure 3: Absorption cross-section for nitrous oxide and the spectrum of the Alpes #sb1840UP laser [8, 9].
Figure 4: Absorption cross-section for nitric oxide and the spectrum of the Alpes #sb1770DN laser [8, 9].
Table 1: Detection wavelength ranges of NO2, NO and N2O.

nitrogen dioxide (NO2): 0.35-0.45 μm
nitrogen oxide (NO): 5.2-5.3 μm
nitrous oxide (N2O): 4.45-4.49 μm
Analysis of the parameters of the laser systems offered by Alpes Lasers [9] shows that a wide range of semiconductor laser radiation sources is available on the market, which can be used for spectral analysis within a broad range of infrared radiation. Lasers are also available for the wavelength ranges presented above (coinciding with the absorption maxima of the nitrogen oxides). A very important feature of these lasers is that the wavelength of the emitted radiation is tunable to some extent. In addition, the broad absorption bands of the investigated compounds, compared with the emission spectra of the lasers, allow selection of the laser with the best energetic parameters (pulse energy, peak power).
3
Lasers used in NO2 sensor
The pulsed laser diodes were constructed at the Institute of High Pressure Physics Unipress in collaboration with its spin-off TopGaN. In this proprietary technology, a number of technological steps are unique patented solutions, such as: i) the application of high-pressure-grown GaN crystals with a very low dislocation density, ii) plasma enhanced molecular beam epitaxy (MBE), iii) misorientation of the GaN substrates, leading to special features of p-type GaN and of the electrical contacts.

Figure 5: Schematic drawing of the violet laser diode (a), and I-V characteristics for laser diodes grown on GaN substrates with different miscuts (b).
Figure 5(a) shows the laser diode structure: the substrate is a 100 micron thick GaN crystal grown at high hydrostatic pressure, with a dislocation density as low as 100 cm⁻² (the world's lowest value) and a free electron concentration above 10¹⁹ cm⁻³, which makes the lower electrical contact to the N-face of GaN highly conductive. A special feature of the substrates used is the misorientation angle, optimised for a low series resistance of the structure with electrical contacts; Figure 5(b) shows the advantage of such substrate preparation. The epitaxial structures were grown using plasma enhanced molecular beam epitaxy (MBE) or metalorganic chemical vapour phase epitaxy (MOVPE). The active layer consists of 2-5 quantum wells (3-4 nm) of InGaN with an In content of about 10%, and GaN barriers 6-8 nm thick. The emitted light is confined by two claddings of AlGaN/GaN short period (2 nm/2 nm) superlattices, chosen to obtain high doping effectiveness and strain compensation. To achieve good light confinement, the claddings are grown with thickness and Al content close to the critical conditions for mismatch-related defect generation. The dislocation density in such a laser-diode structure is about 10⁵ cm⁻², which means just a few dislocations per laser stripe. An important issue in nitride-based laser diodes is the small contrast in refractive indices between the GaN waveguide and the AlGaN lower cladding layer, resulting in a leakage of the electromagnetic wave into the substrate. Applying a thick cladding with a high Al content would create defects (misfit dislocations and cracks); therefore, we have developed a new cladding that eliminates such leakage. Figure 6 shows the far-field patterns of the laser emission for the standard and improved lower AlGaN claddings.

Figure 6: Far field patterns of the violet laser diodes for standard (a) and modified (b) claddings.
In the first picture, a leakage of the electromagnetic wave into the substrate can be seen. Figure 7(a) shows the L-I characteristics for those two lasers. The stripe width of 5-10 microns results in multimode emission and a peak width of around 1 nm. The length of the lasers was 500-1000 microns.
Figure 7: L-I characteristics for lasers with standard and modified cladding that eliminates the leakage of light into the substrate (a), and spectral dependence of the light transmittance at the back mirror of the laser diode (b).
The back laser mirror consists of five λ/4 pairs of SiO2/TiO2. The front mirror is covered with 50 Å of Al2O3. Figure 7(b) shows the spectral dependence of the transmission of the back mirror. It can be seen that at 400 nm almost 95% of the light is reflected by the back mirror, and the ratio between the light intensities emitted through the front and back mirrors is about 150. All the improvements mentioned above give the laser diode technical parameters close to those reported by Japanese manufacturers such as Nichia, Sony, or Sanyo. Table 2 shows those parameters.

Table 2: Laser diode technical parameters.

Density of threshold current: 5-8 kA/cm²
Voltage at threshold: 7-8 V
Slope efficiency: 0.3-0.5 W/A
Pulse duration: 50 ns
Power at peak: 200-2500 mW
The laser diodes are packaged in standard 5.6 mm cans and mounted in laser diode modules manufactured by TopGaN (Figure 8(a)). The highest power obtained was 2.5 W for a 50 micron stripe, as shown in Figure 8(b). However, for such wide stripes filamentation is observed, which must be corrected with special optics.
4
Detection system
The detection system was constructed at the Institute of Optoelectronics MUT. Light at 414 nm from the diode laser (TopGaN) was directed into a 50 cm long optical cavity consisting of two concave, highly reflective mirrors (R > 0.999976 at 414 nm, Los Gatos Research). The laser beam was directed into the cavity using a diffraction grating and a mirror. The radiation leaving the cavity was registered with a photomultiplier (R7518, Hamamatsu), which is characterised by high gain (1.1×10⁷), high speed (pulse rise time of 6.4 ± 0.1 ns) and low dark current. The PMT was equipped with an interference filter whose bandpass was matched to the laser line. The signal from the PMT is fed to a transimpedance preamplifier built around an AD8038 operational amplifier, which is characterised by a wide dynamic range.

Figure 8: 405 nm laser module (a), and optical power versus current for the 50 micron wide laser stripes (b).
In order to obtain the signal-to-noise ratio (S/N) of the photoreceiver, we analysed its noise equivalent scheme [13]. The calculated value of S/N of the first stage of the photoreceiver (PMT plus preamplifier) is equal to 115. Next, the signal from the photoreceiver output was digitised (at a 100 MS/s sampling rate) using a 12-bit USB oscilloscope CS 328 (CleverScope). The signal-to-noise ratio of the detection system was additionally improved by the use of coherent averaging. In the system, the S/N is directly proportional to the square root of the number of averaged samples (n_sampl):

(S/N)_total = √n_sampl · (S/N)_FSP,    (5)
where (S/N)_FSP is the signal-to-noise ratio of the first stage of the photoreceiver. If n_sampl = 10⁴, the S/N of our photoreceiver is equal to 1.1×10⁴. Sampling of ambient air was accomplished through a measurement cell made of aluminium coated inside with Teflon. The cavity mirrors were mounted on the two ends of the cell and adjusted using O-ring mounts and three fine-pitched screws. The cavity is equipped with a gas inlet and outlet. In the investigation, a mixing system supplied from a bottle with NO2, and additionally from a source of pure nitrogen, was applied; it allows precise gas mixing and the preparation of the assumed NO2 concentration. The measurement was performed under a steady flow of gas through the cavity. Moreover, a measurement with a good detection limit also requires good filtration of the investigated air. This is necessary in order to avoid light scattering by aerosol particles as well as dust deposition on the mirror surfaces.
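The √n improvement of eq. (5) can be sketched numerically; all parameters below are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Coherent-averaging sketch for eq. (5): averaging n_traces repetitions of
# a noisy repetitive signal improves S/N by about sqrt(n_traces).
rng = np.random.default_rng(0)
n_samples, n_traces = 1000, 100
signal = np.sin(np.linspace(0.0, 2.0 * np.pi, n_samples))

traces = signal + rng.normal(0.0, 1.0, size=(n_traces, n_samples))
snr_single = signal.std() / (traces[0] - signal).std()
averaged = traces.mean(axis=0)
snr_avg = signal.std() / (averaged - signal).std()

improvement = snr_avg / snr_single
print(f"S/N improvement ≈ {improvement:.1f}, sqrt(n) = {n_traces ** 0.5:.1f}")
```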
5
Experimental results
Figure 9 presents a photograph of the CEAS system. The diode laser generated radiation pulses (414 nm) with a duration of about 50 ns and a repetition rate of 10 kHz; their peak power was about 250 mW. The laser beam was directed into the cavity using mirrors. A filter composed of a diffraction grating and an iris diaphragm was used to eliminate the broadband fluorescence of the laser diode. In order to determine the signal-to-noise ratio, noise measurements were performed with a spectrum analyser (SR 770). The noise voltage above 20 Hz was below 1 μV. The transimpedance preamplifier noise was dominant (about 90%) in the first stage of the photoreceiver. The S/N of the developed system with coherent averaging of 10⁴ samples reached a value of 1400.
Figure 9: Photograph of the portable NO2 optoelectronic sensor.
Table 3: NO2 optoelectronic sensor parameters.

1. Sensitivity (NO2): 2.8×10⁻⁹ cm⁻¹ (0.2 ppb)
2. Measurement range (NO2): 0.2 ppb – 43 ppm
3. Resolution of measurements: 0.2 ppb
4. Uncertainty: 0.3%
5. SNR: about 1400
6. Measurement time: 1 min
7. Interfaces: USB 2.0
The software controlling the data acquisition was developed using LabVIEW 7.1. It provides the possibility of determining the signal decay time and the NO2 concentration. The main sensor parameters are presented in Table 3. Thanks to the optimisation of the first stage of the photoreceiver and coherent averaging of the measured signal, the detection limit reaches the value of 2.8×10⁻⁹ cm⁻¹. Therefore, measurements of NO2 concentrations of 0.2 ppb with an uncertainty of 0.3% are possible. Detailed analyses are given in our earlier works [10–14].
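As a rough cross-check of the quoted detection limit, the minimum detectable absorption coefficient can be converted into a mixing ratio; the NO2 cross section near 414 nm and the air number density below are assumed order-of-magnitude literature values, not figures from the paper.

```python
# Order-of-magnitude conversion of alpha_min into a mixing ratio.
# sigma_no2 and n_air are assumed literature-scale values.
alpha_min = 2.8e-9      # minimum detectable absorption coefficient, cm^-1
sigma_no2 = 6.0e-19     # assumed NO2 cross section at ~414 nm, cm^2
n_air = 2.5e19          # air number density at ~1 atm, 20 C, cm^-3

N_min = alpha_min / sigma_no2        # minimum detectable NO2 number density
ppb = N_min / n_air * 1e9            # mixing ratio in parts per billion
print(f"estimated detection limit ≈ {ppb:.2f} ppb")
```

Under these assumptions the result is of the same order as the 0.2 ppb given in Table 3.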
6
Future work
Sensors based on optical-absorption spectroscopy determine the presence of chemical vapours by comparing light-absorption spectra from multiple air samples. The spectra can be measured remotely with lidar or passive spectrometers, or in air samples drawn into optical cavities or other chambers. The spectra of analytes and interferents are fitted to the measured spectrum of each air sample to determine the best-fitting combination of concentrations. A sample is determined to contain a given analyte when the fit yields a concentration that is statistically greater than ambient. The sensitivity and selectivity of a sensor are therefore determined by the ability to separate spectral differences due to the analyte concentration from spectral differences due to instrument noise or fluctuations in ambient conditions. Choosing analyte spectral features with the best combination of absorption strength and spectral uniqueness is therefore key to sensor performance. Progress in the construction of CEAS sensors depends strongly on the investigation of new radiation sources, optical systems and photoreceivers. The development of new quantum cascade lasers and heterostructure detectors makes a new generation of CEAS sensors possible.
Figure 10: Block diagram of a system used for vapour IM detection.
Figure 10 presents a block diagram of a system which can be used for the detection of improvised material (IM) vapours. It will include a sampling and preconcentration system and three sensors: NO2, NO, and N2O. The preconcentrator will adsorb explosive molecules from an inlet supply of air flowing at a high rate and in a relatively large volume. The adsorbing medium will then be heated quickly to desorb the explosive material into NO2, NO and N2O, which will be directed into the sensors. The NO2 sensor was analysed earlier. In the NO and N2O sensors we would like to use quantum cascade lasers from Alpes Lasers, and a detection system with a Polish detector from Vigo System S.A. [15]. The active element of the detector is a multilayer Hg1-xCdxTe heterostructure optimised for radiation detection at a wavelength of about 5 µm. The photodetector element is mounted on a three-stage thermoelectric cooler in a hermetic housing; the cooler keeps the detector at an operating temperature of about 200 K at an ambient temperature of 20 °C.
7
Conclusions
In this paper, a portable NOx optoelectronic sensor was described. The CEAS technique, one of the most sensitive laser spectroscopy methods, was applied in the sensor. Thanks to the theoretical analysis of the main parameters of the optical cavity and of the signal processing chain, in particular the signal-to-noise ratio, a signal processing system was developed which ensures the registration of low-level signals and decay time measurements with an uncertainty below 0.3%. The system consists of a PMT, a low noise transimpedance preamplifier, and a 12-bit digital signal processing circuit; moreover, a coherent averaging technique was applied. The features of the designed sensor show that it is possible to build a portable NOx sensor with sub-ppb sensitivity. Such a system has several advantages: relatively low price, small size and weight, and the possibility of detecting other gases.
References
[1] O'Keefe A., Deacon D.A., "Cavity ringdown optical spectrometer for absorption measurements using pulsed laser sources", Rev. Sci. Instrum., Vol. 59, No. 12, 2544-2551, 1988.
[2] Engeln R., Berden G., Peeters R., Meier G., "Cavity enhanced absorption and cavity enhanced magnetic rotation spectroscopy", Rev. Sci. Instrum., Vol. 69, No. 11, 3763-3769, 1998.
[3] Kasyutich V.L., Canosa-Mas C.E., Pfrang C., Vaughan S., Wayne R.P., "Off-axis continuous-wave cavity-enhanced absorption spectroscopy of narrow-band and broadband absorbers using red diode lasers", Appl. Phys. B, Vol. 75, 755-761, 2002.
[4] Merienne M.F., Jenouvrier A., Coquart B., "The NO2 absorption spectrum. I: absorption cross-sections at ambient temperature in the 300-500 nm region", J. Atmos. Chem., Vol. 20, No. 3, 281-297, 1995.
[5] Pipino A.C.R., "Ultrasensitive surface spectroscopy with a miniature optical resonator", Phys. Rev. Lett., Vol. 83, No. 15, 1999.
[6] Pipino A.C.R., "Evanescent wave cavity ring-down spectroscopy for ultra-sensitive chemical detection", Proc. SPIE, Vol. 3535, 1998.
Computational Methods and Experimental Measurements XIV
215
[7] van Leeuwen N.J., Diettrich J.C., Wilson A.C., "Periodically locked continuous-wave cavity ring-down spectroscopy", Appl. Opt., Vol. 42, pp. 3670-3677, 2003.
[8] http://www.epa.gov/ttn/emc/ftir/aedcdat1.html
[9] http://www.alpeslasers.ch/lasers-on-stock/index.html
[10] Wojtas J., Czyżewski A., Stacewicz T., Bielecki Z., "Sensitive detection of NO2 with cavity enhanced spectroscopy", Optica Applicata, Vol. 36, No. 4, pp. 461-467, 2006.
[11] Czyżewski A., Wojtas J., Stacewicz T., Bielecki Z., Nowakowski M., "Study of optoelectronic NO2 detector using cavity enhanced spectroscopy", Proc. SPIE, Optics and Optoelectronics, Optical Sensors, 6585-68, 2007.
[12] Wojtas J., Stacewicz T., Bielecki Z., Czyżewski A., Nowakowski M., "NO2 monitoring setup applying cavity enhanced absorption spectroscopy", The International Conference on Computer as a Tool, EUROCON 2007, Warsaw, September 9-12, Conference Proceedings, pp. 1205-1207, 2007.
[13] Wojtas J., Bielecki Z., "Signal processing system in the cavity enhanced spectroscopy", Opto-Electron. Rev., Vol. 16, No. 4, pp. 44-51, 2008.
[14] Wojtas J., Bielecki Z., Mikołajczyk J., Nowakowski M., "Signal processing system in portable NO2 optoelectronic sensor", Sensor+Test 2008 Proceedings, Nurnberg, Germany, 6-8 May, pp. 105-108, 2008.
[15] www.vigo.com.pl
Multispectral detection circuits in special applications
Z. Bielecki1, W. Kolosowski1, E. Sedek2, M. Wnuk1 & J. Wojtas1
1 Military University of Technology, Poland
2 Telecommunication Research Institute, Poland
Abstract
In this paper, the first stages of receivers of optical radiation are described. Special attention was paid to the selection of a detector adequate for the range of radiation being detected. Detectors of optical radiation, including UV, visible, and IR detectors, are characterised. The methods of broadening the transmission bandwidth are also discussed. Moreover, the design criteria of low noise preamplifiers used with the mentioned detectors are presented. Keywords: photoreceiver, noise model, low noise electronic circuits.
1
Introduction
doi:10.2495/CMEM090201
Receivers of optical radiation are used in many up-to-date fields of science and technology, determining the current level of technological progress. The most important fields of their application are industrial automation, robotics, space technology, medicine, and military technology. The optical radiation range covers electromagnetic waves longer than gamma rays but shorter than millimetre waves. Optical radiation from an object (signal source) is detected with a photoreceiver, which consists of optics, a detector, a preamplifier and a signal processing system. All objects continually emit radiation with a wavelength distribution that depends on the temperature of the object and its spectral emissivity. However, free-space optical signal transmission is required for most of the mentioned applications, and the radiation is then attenuated by the processes of scattering and absorption. Scattering changes the direction of a radiation beam; it is caused by absorption and subsequent re-radiation of energy by suspended particles. For small particles,
compared with the radiation wavelength, the process is known as Rayleigh scattering and exhibits a λ⁻⁴ dependence. However, scattering by gas molecules is negligibly small for wavelengths longer than 2 μm. Also, smoke and light mist particles are usually smaller than IR wavelengths, so IR radiation can penetrate further through smoke and mist than visible radiation. However, rain, fog and aerosol particles are larger, and consequently they scatter IR and visible radiation to a similar degree. The total radiation received from any object is the sum of the emitted, reflected and transmitted radiation. Objects that are not blackbodies emit only a fraction of the blackbody radiation, and the remaining fraction is either transmitted or, in the case of opaque objects, reflected. When the scene is composed of objects and backgrounds of similar temperatures, the reflected radiation tends to reduce the available contrast. However, reflections of hotter or colder objects significantly affect the appearance of a thermal scene. Reflected sunlight is negligible for 8–14 μm imaging, but it is important in the UV, VIS and 3–5 μm bands. In general, the 8–14 μm band is preferred for high performance thermal imaging because of its higher sensitivity to objects at ambient temperature and its better transmission through mist and smoke [1, 2]. However, the 3–5 μm band may be more appropriate for hotter objects, or when sensitivity is less important than contrast. Additional differences also occur; for example, a virtue of the MWIR (Medium Wave Infrared) band is that smaller diameter optics are required to obtain a given resolution. Furthermore, some MWIR detectors may operate at higher temperatures (thermoelectric cooling) than in the LWIR (Long Wave Infrared) band, where detectors require cryogenic cooling (to about 77 K).
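The λ⁻⁴ scaling implies a very large ratio between the Rayleigh scattering of visible and LWIR radiation; a one-line sketch with typical assumed wavelengths:

```python
# Rayleigh scattering scales as lambda**-4: comparing visible light at
# 0.55 um with LWIR radiation at 10 um (assumed representative wavelengths).
lam_vis, lam_lwir = 0.55, 10.0       # micrometres
ratio = (lam_lwir / lam_vis) ** 4
print(f"Rayleigh scattering, visible vs LWIR: ≈ {ratio:.0f}x stronger")
```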
Ultraviolet detection has been accomplished with two different classes of device: the photomultiplier tube (PMT) and solid-state detectors. The PMT is a vacuum tube in which radiation falls on a photocathode, causing electrons to be emitted. Photocathode materials such as SbKCs and CsTe can be used, which exhibit maximum sensitivity at about 400 nm and 235 nm, respectively. In photoemissive devices, the photon must have sufficient energy to eject an electron from the photocathode material. The emitted electrons are accelerated to strike another plate, the dynode, causing the emission of a number of secondary electrons. The process is repeated several times, leading to gains of 10⁶ or more. The resulting current is amplified by an external circuit. PMTs are relatively high-cost, require high-voltage power supplies, and are susceptible to magnetic fields that distort electron trajectories. Photoemissive devices produce negligible dark backgrounds at room temperature and can be constructed to be inherently solar blind. The alternative approach has been to use semiconductor ultraviolet detectors. In most semiconductor devices, the photon causes an electron to transition into the conduction band. Currently, the most common devices are silicon-based CCDs, in which the detection process requires energies of approximately one electron volt. These detectors have excellent sensitivity in the visible and near-IR range, but they are also sensitive to thermally induced backgrounds at room temperature. Semiconductor detectors made of GaN or other wide-band-gap materials, such as SiC, AlN and diamond, have an activation energy
higher than 3.2 eV, making these devices inherently solar blind and UV sensitive without a thermally induced background. While there is increased interest in UV monitoring, the detection methods have not, in general, kept pace with the demand. However, the search for next-generation semiconductor materials has led to the development of new kinds of UV detector. In particular, UV detectors based on a new material, gallium nitride, show very competitive properties, such as high responsivity, low dark current, very low input power requirements and good rejection of visible light. The latter is particularly important, since optical filters used to reject visible light cause complications in the design of optical systems and make it more difficult to ensure reproducible measurements. The visible rejections (325 nm:400 nm response ratio) are 750:1 and 0.85:1 for GaN and Si detectors, respectively. Another very important benefit of GaN-based detectors is that it is possible to tailor this wide-band-gap semiconductor to fit a desired UV response range. When GaN and AlN form the alloy AlGaN, the band gap can be changed from 3.4 to 6.2 eV, which corresponds to a shift of the optical cut-off from 365 nm to 200 nm. Within this range a particular cut-off can be achieved by controlling the composition of the alloy. The band gap also determines the maximum operating temperature. At high temperatures the band gap starts to collapse, so that the material can no longer act as a semiconductor. The nitride-based semiconductors have a band gap that is 3 to 5 times wider and, in consequence, are more stable at high temperatures. Current detectors can operate well at up to 85 °C; moreover, if material quality improves, the detectors will operate at even higher temperatures. The natural rejection of visible light, high-temperature operation, low operating voltage and flexibility of the optical cut-off will allow this material to be used as a next-generation tool in the field of UV detection.
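The relation between alloy band gap and optical cut-off quoted above follows from λc ≈ hc/Eg. A minimal sketch (not from the paper) reproduces the 365 nm and 200 nm endpoints of the AlGaN tuning range:

```python
# Sketch (not from the paper): optical cut-off wavelength vs. band gap,
# lambda_c [nm] ~ 1240 / Eg [eV], reproducing the AlGaN range quoted in the text.
def cutoff_nm(band_gap_ev):
    """Cut-off wavelength in nm for a semiconductor with the given band gap."""
    return 1239.84 / band_gap_ev  # hc/e expressed in eV*nm

print(round(cutoff_nm(3.4)))  # 365 -> GaN end of the AlGaN alloy range
print(round(cutoff_nm(6.2)))  # 200 -> AlN end of the range
```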
Typical applications of UV detectors include: UV curing and drying, combustion monitoring, arc detection, phototherapy, sterilization control, spectroscopy, biological agent detection, solar irradiance measurement, industrial process monitoring, missile and artillery fire detection, and climatological and biological studies.
2 Detection theory
The common problem for any type of photon detector is how to terminate the photodetector with a suitable load resistor, trading off bandwidth against signal-to-noise ratio. This applies to a wide family of detectors, including phototubes, photoconductors, photodiodes, etc., all of which can be described by a current generator Iph with a stray capacitance C across it. Let us consider the equivalent circuit of a photodetector terminated with a load resistor RL, as shown in Fig. 1. This is the basic circuit for detection. The output signal is taken either as a voltage V = I·RL or as a current I. Two noise contributions are added to the signal: one is the Johnson (or thermal) noise of the resistance RL, and the second is the quantum noise.
Figure 1: A general noise equivalent circuit for a photodetector.
The total current I = Iph + Id is the sum of the signal current (Iph) and the dark current (Id). Associated with this current is the quantum (or shot) noise arising from the discrete nature of electrons and photoelectrons. Its quadratic mean value is given by [3]

$$ I_n^2 = 2q\,(I_{ph} + I_d)\,\Delta f , \qquad (1) $$

where q is the electron charge and Δf is the observation bandwidth. A general noise equivalent circuit for a photodetector is shown in Fig. 1. The above two fluctuations are added to the useful signal, and the corresponding noise generators are placed across the device terminals. Since the two noises are statistically independent, their quadratic mean values must be combined to give the total fluctuation as

$$ I_n^2 = 2q\,(I_{ph} + I_d)\,\Delta f + 4kT\Delta f / R_L . \qquad (2) $$
Bandwidth and noise optimisation impose opposite requirements on the value of RL. To maximise Δf one should use the smallest possible RL, whilst to minimise $I_n^2$ the largest possible RL is required. A photodetector can have good sensitivity using very high load resistances, but then only modest bandwidths (≈ kHz or less), or it can be made fast by using low load resistances (e.g. RL = 50 Ω), but at the expense of sensitivity. Using Eq. (2), we can evaluate the relative weight of the two terms in the total noise current. In general, the best possible sensitivity performance is achieved when the shot noise is dominant compared to the Johnson noise, i.e.

$$ 2q\,(I_{ph} + I_d)\,\Delta f \gg 4kT\Delta f / R_L , \qquad (3) $$

which implies the condition

$$ R_L \geq R_{L\,min} = \frac{2kT/q}{I_{ph} + I_d} . \qquad (4) $$
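Eqn (4) can be evaluated directly. The sketch below (illustrative only) uses the same example figures as the text, Id = 5 pA at T = 300 K:

```python
# Sketch of eqn (4): R_Lmin = (2kT/q) / (Iph + Id), the load resistance above
# which shot noise dominates Johnson noise. Values match the text's example.
k = 1.380649e-23     # Boltzmann constant, J/K
q = 1.602176634e-19  # electron charge, C

def r_l_min(i_ph, i_d, temperature=300.0):
    return (2 * k * temperature / q) / (i_ph + i_d)

# The paper's example: Id = 5 pA, negligible signal current, T = 300 K
print(r_l_min(0.0, 5e-12))  # ~1.0e10 ohm, i.e. the ~10 G-ohm quoted in the text
```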
At low signal levels, very high values of resistance are required; for example, for Id = 5 pA, not an unusually low dark current, RLmin = 10 GΩ. However, if we use a resistor termination value RL < RLmin, then the total noise can be written as

$$ I_n^2 = 2q\,(I_{ph} + I_d)\,\Delta f \left( 1 + R_{L\,min}/R_L \right) . \qquad (5) $$

This means that the noise performance is degraded by the factor RLmin/RL, compared to the intrinsic limit allowed by the dark current level. Thus, using RL < RLmin means that the shot-noise performance is reached only at a level of current not less than

$$ I_{ph} + I_d = 2kT / qR_L . \qquad (6) $$

From this equation, we see that, for a fast photodiode with a 50 Ω load at T = 300 K, this current has the very large value of 1 mA. The above considerations are valid for photodetectors without any internal gain, G. Let us extend the calculation of the signal-to-noise ratio (S/N) to photodetectors having internal gain. In this case, the shot noise can be expressed in the form

$$ I_n^2 = 2q\,(I_{ph} + I_d)\,\Delta f \, G^2 F , \qquad (7) $$

where F is the excess noise factor accounting for the extra noise introduced by the amplification process. Of course, in a non-amplified detector, F = 1 and G = 1. The total noise is the sum of the shot noise and the thermal noise, and we obtain the S/N ratio

$$ \frac{S}{N} = \frac{I_{ph}}{\left[ 2q\,(I_{ph} + I_d)\,\Delta f\, F + \dfrac{4kT\Delta f}{R_L G^2} \right]^{1/2}} . \qquad (8) $$

3 Response time of the first stage of a photoreceiver
The response time of the first stage of a photoreceiver can be shortened in many ways. Here, the possibility of such shortening in photoreceivers with p-i-n photodiodes is described. The high-frequency properties of p-i-n photodiodes depend on the lifetime of minority carriers. In photodiodes with a Schottky barrier, the carrier lifetime is negligible (10⁻¹⁴ s) when compared with the minority carrier lifetime in p-i-n photodiodes [4]. The response time of a p-i-n photodiode depends on: the drift time of carriers through the depletion region, the diffusion time of carriers to the depletion region, and the RC time constant of the detector load circuit. The influence of the diffusion time (τd) of carriers to the depletion region can be neglected, assuming that the majority of carriers are generated in the depletion region. The drift time of minority carriers through the depletion region depends on its width and on the carrier velocity (i.e., on the reverse bias voltage). The drift time can be reduced by narrowing the depletion region; however, this increases the junction capacitance. The detector capacitance depends on the photodiode area (A), its resistivity (ρ), and the reverse bias voltage:

$$ C_d = A \left[ \frac{\varepsilon}{2\mu\rho\,(V_b + 0.5)} \right]^{1/2} , \qquad (9) $$

where ε is the semiconductor permittivity, μ the carrier mobility and Vb the reverse bias voltage.
A decrease in the RC time constant of the first stage of a photoreceiver, and hence a broadening of the transmission band, can be obtained by using a lower-capacitance photodiode and by increasing the reverse bias voltage. However, the higher bias voltage can increase the dark current of the photodiode, i.e., the noise. The next factor affecting the photodiode response time (tr) depends mainly on its capacitance and on the input resistance of the preamplifier. The time constant of this circuit is

$$ \tau_{RC} = \frac{(R_L + R_s)\,R_{sh}}{R_L + R_s + R_{sh}}\, C = R_{eq} C , \qquad (10) $$

where Rs is the series resistance of the photodiode, C is the sum of the photodiode capacitance and the input capacitance of the preamplifier, and Req is the resultant resistance, i.e., the parallel connection of the photodiode resistance Rsh with the load resistance RL in series with Rs. The response time and transmission bandwidth can thus be shaped by selection of the capacitance of the first stage of a photoreceiver. A decrease in the width of the depletion region reduces the photodiode sensitivity, and hence the possibility of detecting optical signals of low amplitude. Thus, a compromise between the response time of a photoreceiver and its sensitivity is necessary.
Figure 2: Factors affecting the response time of the first stage of a photoreceiver.
The above considerations show that the detector response time is

$$ t_T = \left( t_r^2 + t_t^2 + t_d^2 \right)^{1/2} , \qquad (11) $$

and the 3 dB limit frequency is given as

$$ f_{3dB} = \frac{1}{2\pi R_{eq} C} . \qquad (12) $$
Usually, a designer can obtain a broadening of the transmission bandwidth of a photoreceiver when a detector of low capacitance and a preamplifier of low input resistance are used.
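Eqns (10) and (12) together make this trade-off quantitative. The sketch below (component values hypothetical, chosen only for illustration) computes Req and the resulting 3 dB frequency:

```python
# Sketch of eqns (10) and (12) with hypothetical component values:
# Req is the resistance seen by the total capacitance C, and f3dB = 1/(2*pi*Req*C).
import math

def r_eq(r_l, r_s, r_sh):
    """Parallel combination of the photodiode shunt resistance Rsh and (RL + Rs)."""
    return (r_l + r_s) * r_sh / (r_l + r_s + r_sh)

def f_3db(r_l, r_s, r_sh, c):
    return 1.0 / (2 * math.pi * r_eq(r_l, r_s, r_sh) * c)

# Hypothetical fast termination: RL = 50 ohm, Rs = 5 ohm, Rsh = 10 Mohm, C = 2 pF
print(f_3db(50.0, 5.0, 10e6, 2e-12))  # ~1.4 GHz for this low-resistance termination
```

Raising RL toward the GΩ values needed for shot-noise-limited sensitivity collapses f3dB to the kHz range, which is exactly the bandwidth-sensitivity compromise described in the text.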
Figure 2 presents the factors influencing the response time of the first stage of a photoreceiver. Of course, broadening the transmission bandwidth of the first stage of a photoreceiver will increase the noise, i.e., decrease its sensitivity.
4 Selection of the first stage of a photoreceiver
As the active element in a preamplifier, a bipolar or FET transistor, or an integrated circuit with a bipolar, FET or MOSFET input transistor, can be used. The main selection criteria are the value of the detector resistance and the range of transmitted frequencies [5–7]. Because low-intensity signals reach a photoreceiver, a very important task is to optimize the noise of the photodetector-preamplifier system, i.e., to obtain the maximum S/N ratio. The first stages of complex electronic devices significantly affect the total noise level of a system, so preamplifiers have to fulfil very special requirements. Optimum parameters of a preamplifier can be determined on the basis of an analysis of the particular noise sources contributing to the total equivalent input noise of the photodetector-preamplifier system, and on the basis of a calculation of the equivalent input noise. The level of the equivalent noise at the input of a photodetector-preamplifier is uniquely determined by the detector noise Vnd, the background noise Vnb, and the equivalent noise sources Vn and In. For non-correlated noise components, the total equivalent noise at the input of a photodetector-preamplifier is defined as

$$ V_{ni}^2 = V_{nd}^2 + V_{nb}^2 + V_n^2 + I_n^2 R_d^2 , \qquad (13) $$

where Rd is the detector resistance. It can generally be stated that if the transistor of the first stage of a preamplifier has a high level of current noise, this transistor cannot operate with a high-resistance detector: the high current noise of the input transistor produces a high equivalent input noise at high detector resistance. On the contrary, a low-resistance detector can operate well with a low-voltage-noise preamplifier. In [1, 3], detailed analyses have been performed on the selection of preamplifiers for various detectors of optical radiation. For optical radiation detectors, voltage preamplifiers, transimpedance preamplifiers, and charge preamplifiers are used.
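Eqn (13) also shows numerically why a high-current-noise input stage is incompatible with a high-resistance detector. The sketch below uses hypothetical noise densities (all values invented for illustration):

```python
# Sketch of eqn (13) with hypothetical noise figures: the total equivalent input
# noise combines detector, background and preamplifier voltage/current noise.
import math

def total_input_noise(v_nd, v_nb, v_n, i_n, r_d):
    """RMS equivalent input noise voltage density; all inputs per sqrt(Hz)."""
    return math.sqrt(v_nd**2 + v_nb**2 + v_n**2 + (i_n * r_d)**2)

# Hypothetical high-resistance detector: Rd = 1 Gohm, In = 0.5 fA/sqrt(Hz)
v = total_input_noise(v_nd=5e-9, v_nb=1e-9, v_n=4e-9, i_n=0.5e-15, r_d=1e9)
print(v)  # ~5.0e-7 V/sqrt(Hz): the In*Rd term dominates for a high-resistance detector
```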
Obtaining a high signal-to-noise ratio with a voltage preamplifier causes a narrowing of the transmission bandwidth of the system. With a transimpedance preamplifier, for the same detector resistance, a higher bandwidth can be obtained. Charge preamplifiers form a separate group. Figure 3 presents a simplified scheme of the first stage of a photoreceiver with a transimpedance preamplifier and its noise sources. Such a system is commonly used for UV, VIS and IR detectors, for both photoconductive detectors and photodiodes. Improvement in the S/N ratio can be obtained by:
- decreasing the influence of background radiation (narrowing the field of view of the detector by means of cooled diaphragms and optical filters),
- decreasing the generation-recombination noise produced by thermally induced carriers in the semiconductor (lower detector operating temperature),
- decreasing the thermal noise of the detector (cooling),
Figure 3: Simplified equivalent scheme of the first stage of a photoreceiver with noise sources.
- decreasing the thermal noise of the load resistance (using high RL values and, in some applications, an additional decrease in the detector operating temperature),
- using an optimised ultra-low-noise preamplifier [8–11].
5 Experimental results
For amplification of the signals from UV detectors, voltage and transimpedance amplifiers are most frequently used. The basic idea behind increasing the input impedance of the preamplifier is the reduction of thermal noise. However, high resistances RL narrow the band of the input stage of a photoreceiver. A preamplifier of high input impedance presents a significant load resistance to a detector, so it does not ensure a wide range of signal changes. The problem of large signal changes has been solved in transimpedance preamplifiers. For amplification of the signals from UV detectors, we used a transimpedance preamplifier. In the preamplifier, integrated circuits of the AD548, AD549, AD795 and OPA129 type were used. Noise currents of 1.8 fA/Hz¹ᐟ², 0.5 fA/Hz¹ᐟ², 0.6 fA/Hz¹ᐟ² and 0.1 fA/Hz¹ᐟ², respectively, were obtained at f = 1 kHz. In the case of IR detectors (e.g. HgCdTe), we also used transimpedance preamplifiers, but optimized for voltage noise. Integrated circuits of the LT1028 and AD797 type were used in the input stage of the preamplifier. These preamplifiers can be applied both to photodiodes and to photoresistors. When a photoresistor is connected, a biasing resistor RL is required. For this system the noise voltage was below 1 nV/Hz¹ᐟ² at f = 1 kHz.
Many problems have to be overcome during the construction of a low-noise preamplifier for low-resistance detectors (e.g., HgCdTe of resistance below 100 Ω). This is due to the fact that detectors with resistances of the order of 20 Ω produce a noise voltage lower than 0.6 nV/Hz¹ᐟ², i.e., below the noise voltage generated in the best available amplifying elements. So the question arises: is it possible to build a preamplifier with a noise voltage below the detector noise? It appears that this can be achieved when several identical preamplifiers are connected in parallel and their output signals are added. Such a system of signal processing ensures a reduction of the equivalent input noise according to the relationship

$$ V_{n\,total} = V_n \, n^{-1/2} , \qquad (14) $$

where Vn is the noise voltage of a single preamplifier and n is the number of amplifying stages. For a preamplifier with the AD797, a noise voltage of 0.3 nV/Hz¹ᐟ² was obtained for n = 9 and f = 10 kHz.
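Eqn (14) can be checked against the reported figure. The sketch below assumes the typical 0.9 nV/Hz¹ᐟ² input voltage noise of the AD797 (a datasheet value, not stated in the paper); with n = 9 it reproduces the 0.3 nV/Hz¹ᐟ² quoted above:

```python
# Sketch of eqn (14): summing n identical preamplifier outputs reduces the
# uncorrelated voltage noise by sqrt(n). The 0.9 nV/sqrt(Hz) figure is the
# typical AD797 datasheet value (an assumption, not quoted in the paper).
import math

def parallel_noise(v_n, n):
    return v_n / math.sqrt(n)

print(parallel_noise(0.9e-9, 9))  # 3e-10, i.e. the 0.3 nV/sqrt(Hz) quoted in the text
```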
6 Example of low noise photoreceivers in military applications
In recent years, terrorism threats have increased. This is facilitated by the large number of infrared-guided missiles, the limited possibilities of airport protection, easy access to airplane schedules, and the fact that airplanes, especially passenger planes, are easy targets. To prevent such attacks, the following protective measures are used: decreasing the target's signature, camouflage smoke, pyrotechnic sources, radiation generators, multi-spectral detection systems and blinding systems [13, 14]. Some of these measures have a limited range of use; e.g., flares should not be used in urbanized areas. Flares are also ineffective against missiles attacking at close range, and they do not protect an airplane against armour-piercing missiles. Multi-spectral systems, e.g. active radio systems, passive infrared systems and passive ultraviolet systems, are free from these drawbacks. Progress in aviation technology has reduced the minimum flight altitude to 20–50 m, and similar trends can be observed in rocket technology. Such objects are hard to detect because of the physics of microwave propagation in the ground zone. Based on the radiolocation equation, the maximum detection distance depends on the direction toward the target and is written as

$$ r_{max}(\theta, \varphi) = r_{mo} F(\theta, \varphi) . \qquad (15) $$

This formula describes a surface in the spherical coordinates (r, θ, φ), with the radiolocation station as the reference point. This surface divides the area around the radiolocation station into two regions: region I, where the target is detectable, and region II, where the target is invisible (Fig. 4a).
Taking the ground impact into consideration, it is necessary to multiply the free-space range equation of the radiolocation station by the interferometric multiplier. The radiolocation station's range is then calculated using the formula

$$ r_{max} = r_{mo} F(\theta)\, \Phi(\theta) , \qquad (16) $$

where F(θ) is the characteristics multiplier and Φ(θ) is the interferometric multiplier. In Fig. 4b the radiolocation station visibility is shown.
Figure 4: Radiolocation station visibility zone in free space (a) and radiolocation station visibility (b).
Figure 4b reveals a dead zone. To detect targets in this area, optoelectronic systems are used. These systems operate in the UV and IR spectral regions and can be installed on a shared mast together with the radiolocation antenna. The range of such a system is determined by the formula

$$ d\,[\mathrm{km}] = 3.56 \left( \sqrt{h_a\,[\mathrm{m}]} + \sqrt{h_c\,[\mathrm{m}]} \right) , \qquad (17) $$

where ha is the mounting height of the optoelectronic device in metres and hc is the target height in metres. Original devices were developed at the Institute of Optoelectronics, Military University of Technology (MUT). One of them is a laboratory model of a passive locator of flying objects. This device includes thermodetection modules with a PV HgCdTe detector from the Polish company Vigo System Ltd., optimized for the spectral range 3–4.2 µm. The passive locator can detect thermal objects from several kilometres (right-hand part of Fig. 5). The left-hand part of Fig. 5 shows the same thermal object, but in the UV spectral range. In such detection systems we used photoreceivers with AlGaN detectors.
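Eqn (17) is the standard line-of-sight horizon formula; a minimal sketch (heights hypothetical, chosen only for illustration) gives a feel for the achievable range:

```python
# Sketch of eqn (17) with hypothetical heights: optical line-of-sight range for
# a mast-mounted sensor (ha) observing a low-flying target (hc).
import math

def optical_range_km(h_a_m, h_c_m):
    return 3.56 * (math.sqrt(h_a_m) + math.sqrt(h_c_m))

# Hypothetical case: sensor on a 25 m mast, target at the 50 m minimum-altitude limit
print(optical_range_km(25.0, 50.0))  # ~43 km
```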
Figure 5: View of the thermal object in the UV and IR wave ranges [12].

7 Conclusions
An increase in the signal-to-noise ratio in the first stages of photoreceivers can be obtained as a result of decreasing the noise originating from the background, minimising the detector noise, selecting an optimal point of detector operation, and applying a low-noise preamplifier matched to the detector. A decrease in background noise can be achieved by reducing the detector's angle of view, applying a selective filter matched to the spectrum of the signal, and applying cooled choppers for long-wavelength detectors. Reduction of the detector noise is achieved mainly by a lower operating temperature, a narrower noise bandwidth of the system, and a selected optimal working point. The noise contribution of the preamplifier to the total equivalent noise of the first stage of a photoreceiver is lowest under noise-matching conditions. The value of the detector resistance is the main criterion for the selection of a low-noise preamplifier: for low-resistance detectors, preamplifiers optimised with respect to noise voltage should be used, and for high-resistance detectors, preamplifiers optimised with respect to noise current. Further increase in the signal-to-noise ratio at the receiver output is possible through the application of adequate signal modulation and demodulation techniques. The results of the above analyses were used in experimental investigations, i.e., several optical detection devices were designed, built and tested. The results presented in this work do not cover all the problems related to maximisation of the S/N ratio in optical receivers.
References
[1] A. Rogalski, Z. Bielecki. Chapter "Detection of optical radiation", in Handbook of Optoelectronics. Taylor & Francis, New York, London, pp. 73–117 (2006).
[2] Z. Bielecki, J. Mikolajczyk. Chapter "Passive infrared detection", in Photonics Handbook, Laurin Publishing, USA, pp. H123–H125 (2005).
[3] A. Rogalski, Z. Bielecki. Detection of optical radiation. Bulletin of the Polish Academy of Sciences, vol. 52, no. 1, pp. 43–66 (2004).
[4] Z. Bielecki. Photoreceiver with avalanche C-30645 photodiode. IEE Proceedings Optoelectronics, vol. 147, pp. 234–236 (2000).
[5] C.D. Motchenbacher, J.A. Connelly. Low-Noise Electronic System Design. Wiley, New York (1995).
[6] Z. Bielecki. Maximisation of signal to noise ratio in infrared radiation receivers. Opto-Electron. Rev., vol. 10, pp. 209–216 (2002).
[7] Z. Bielecki. Some problems of optimization of signal-to-noise ratio in infrared radiation receivers. Proc. SPIE, vol. 5125, pp. 238–245 (2002).
[8] Z. Bielecki, M. Brudnowski. Method of popcorn-noise reduction. Opto-Electron. Rev., vol. 11, no. 1, pp. 45–50 (2003).
[9] Z. Bielecki. Readout electronics for optical detectors. Opto-Electron. Rev., vol. 12, no. 1, pp. 129–137 (2004).
[10] Z. Bielecki, W. Kolosowski, R. Dufrene, E. Sedek, J. Wojtas. Photoreceiver for BLU/UV detection. Proc. SPIE, vol. 5472, pp. 383–390 (2004).
[11] R. Cwirko, Z. Bielecki, J. Cwirko, L. Dobrzanski. Low-frequency noises as a tool for UV detectors characterisation. Opto-Electron. Rev., vol. 14, no. 2, pp. 155–160 (2006).
[12] Z. Bielecki, K. Kopczynski, M. Kwasny, Z. Mierczyk. Monitoring zagrozen bezpieczenstwa [Monitoring of security threats; in Polish]. III Miedzynarodowa Konferencja Naukowa "Zarzadzanie kryzysowe", Szczecin, conference proceedings, pp. 310–320 (2005).
[13] Z. Bielecki, W. Kołosowski, M. Muszkowski, E. Sędek. Chapter "Phase shifters or optoelectronic delay lines application to sequential analysis of space adaptive phased arrays antennas", in Computational Methods and Experimental Measurements, WIT Press, Southampton, Boston, UK, pp. 241–249 (2005).
[14] E. Sędek, Z. Bielecki, M. Muszkowski, W. Kołosowski, G. Różański, M. Wnuk. Optoelectronic system for phase array antenna beam steering. In Computational Methods and Experimental Measurements, WIT Press, Southampton, Boston, UK, pp. 801–808 (2007).
Modification of raised cosine weighting functions family C. Lesnik, A. Kawalec & J. Pietrasinski Department of Electronics, Military University of Technology, Poland
Abstract
A modification of the known family of raised cosine weighting functions with the power of n, obtained by convolution with an auxiliary rectangular window of variable duration, is considered in the paper. The modified function family is derived in a general and exact form, in both the time and frequency domains, using the constant-length convolution window preparation technique. The derived weighting function is applied to the synthesis of radar chirp signals with nonlinear frequency modulation (NLFM). Chosen examples from the simulation research into the features of the modified weighting function family, and of the NLFM radar signals synthesised with it, are presented in the paper.
Keywords: weighting functions, radar signal synthesis, nonlinear frequency modulation signal, mainlobe, sidelobe, matched filtration.
1 Introduction
Radar signal synthesis is one of the most important problems of modern radiolocation. The signals transmitted into the observed space should have very specific features. The so-called matched filter is a very specific part of a radar receiver; its main task is to maximize the SNR (Signal to Noise Ratio) at its output, and in this way the radar range may be maximized too. In the case of radar chirp signal transmission, the echo signal observed at the receiver output has the shape of a very short pulse, so good radar range resolution can be achieved. The product of the matched filtration of a received signal is an output signal having a mainlobe and sidelobes. The presence of sidelobes is of course an unwanted effect, because it makes the detection of weak echo signals difficult. WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line) doi:10.2495/CMEM090211
An effective technique of sidelobe attenuation is the application of a suitable weighting function. These so-called weighting windows modify the matched filter transmittance. A resulting filter mismatch is an additional effect of the weighting procedure; it degrades the output SNR and in this way shortens the radar range. The application of radar signals of NLFM type is a lossless technique of sidelobe attenuation. The principle of stationary phase is used for NLFM signal synthesis, as detailed elsewhere in [1–3]. In such an approach, suitable weighting functions are applied too. When the principle of stationary phase is used, an unwanted accumulation of numerical errors can be observed. In order to decrease it, knowledge of the exact, analytical form of the weighting function applied for the signal synthesis is desired. There are several known techniques for the synthesis of weighting functions having the features expected in a given application. One of them is based on the convolution operation. To be more precise, known methods include: time convolution of parent windows, described by Harris [4] and Nuttall [5]; multiple time convolution of rectangular windows, described by Wen [6] and Dai and Gretsch [7]; and multiple time auto- and cross-convolution of other well-known windows, described by Reljin et al. [8, 9]. These so-called convolution windows have greater sidelobe attenuation and faster sidelobe decay. Unfortunately, the output signal mainlobe width increases. A modification of the known family of raised cosine weighting functions, obtained by the convolution of these functions with an auxiliary rectangular window of variable duration, is considered in the paper. It enables fine-tuning of the weighting function properties. The exact analytical formula of the modified function is derived in the time and frequency domains.
The paper is organized as follows. The known idea of convolution weighting function preparation is presented in section 2 as an introductory remark. The exact closed formula of the modified raised cosine weighting function family in the continuous time and frequency domains is derived in section 3. Examples of chosen features of the modified raised cosine weighting function family are discussed in section 4, together with an example of its application to the synthesis of a radar signal of NLFM type. A few concluding sentences are given in section 5.
2 Convolution weighting function design
The general form of a weighting function with finite time duration, wT(t), can be described as the product

$$ w_T(t) = w(t)\, r_T(t) , \qquad (1) $$

where w(t) is the considered time-unlimited function and rT(t) is a unitary rectangular window:
$$ r_T(t) = \mathrm{rect}\!\left(\frac{t}{T}\right) = \begin{cases} 1 & \text{for } |t| \le T/2 \\ 0 & \text{elsewhere} \end{cases}, \qquad T > 0 , \qquad (2) $$

where T is the rectangular window duration. Equation (1) is equivalent to the convolution operation in the frequency domain:

$$ W_T(\omega) = \frac{1}{2\pi}\left[ W(\omega) \ast R_T(\omega) \right] , \qquad (3) $$
where W(ω) and RT(ω) are the Fourier transforms of w(t) and rT(t), respectively. Let us consider a modification of the weighting function wT(t) described by eqn (1), based on its convolution with an auxiliary rectangular window rτ(t) of variable duration τ:

$$ w_\tau(t) = w_T(t) \ast r_\tau(t) = \left[ w(t)\, r_T(t) \right] \ast r_\tau(t) , \qquad (4) $$

where wτ(t) is the modified weighting function. The auxiliary rectangular window rτ(t) is defined as

$$ r_\tau(t) = \frac{1}{\tau}\,\mathrm{rect}\!\left(\frac{t}{\tau}\right) = \begin{cases} 1/\tau & \text{for } |t| \le \tau/2 \\ 0 & \text{elsewhere} \end{cases}, \qquad \tau > 0 , \qquad (5) $$

where τ is the variable duration of the auxiliary rectangular window. The factor 1/τ makes the maximum value of the weighting function described by eqn (4) independent of the auxiliary rectangular window duration; this results from the fact that the area under rτ(t) is unitary, as described by Brandwood [10]. The convolution operation described by eqn (4) corresponds in the frequency domain to

$$ W_\tau(\omega) = \frac{1}{2\pi}\left[ W(\omega) \ast R_T(\omega) \right] R_\tau(\omega) , \qquad (6) $$

where Wτ(ω) and Rτ(ω) are the Fourier transforms of wτ(t) and rτ(t), respectively. As a result of the convolution operation, the modified weighting function wτ(t) is different from zero within the interval from −(T+τ)/2 up to (T+τ)/2, and its duration is (T+τ). In order to keep the final weighting function duration constant (equal to T), it is necessary to suitably scale eqns (4) and (6) in time and frequency according to the Fourier transform scaling property
$$ x\!\left(\frac{t}{a}\right) \leftrightarrow a\, X(a\omega) , $$

where the scale factor is defined as

$$ a = \frac{T}{T + \tau} . $$

As a result of scaling eqns (4) and (6), the general forms of the modified weighting function in the time and frequency domains are:

$$ w_\tau(t) = \left[ w\!\left(\frac{t}{a}\right) r_T\!\left(\frac{t}{a}\right) \right] \ast r_\tau\!\left(\frac{t}{a}\right) , \qquad (7) $$

$$ W_\tau(\omega) = \frac{a}{2\pi}\left[ W(a\omega) \ast R_T(a\omega) \right] R_\tau(a\omega) . \qquad (8) $$
The scaled modified weighting function described by eqn (7) is non-zero on the interval from $-T/2$ to $T/2$, so its duration is $T$. The window obtained in this way is called the constant-length (CON) window and is described by Reljin et al. [9]. The presented method will be applied to the well-known and very useful family of cosine windows, used among other things for the synthesis of radar and sonar signals with nonlinear frequency modulation (NLFM).
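As a numerical sketch of the construction in eqns (4)-(8): a raised-cosine base window (eqn (9)) is convolved with a unit-area rectangle and the time axis is rescaled by a = T/(T+τ), giving a CON window of duration T. The function name and parameter values are illustrative, not the paper's optima.

```python
import numpy as np

def con_window(N=4001, tau_frac=0.3, k=0.08, n=2, T=1.0):
    """CON window sketch, eqns (4)-(8): convolve a raised-cosine base window
    (eqn (9)) with a unit-area rectangle r_tau, then rescale the time axis by
    a = T/(T + tau) so that the result again has duration T."""
    tau = tau_frac * T
    a = T / (T + tau)                                   # scale factor
    t = np.linspace(-(T + tau) / 2, (T + tau) / 2, N)
    dt = t[1] - t[0]
    wT = np.where(np.abs(t) <= T / 2,
                  k + (1 - k) * np.cos(np.pi * t / T) ** n, 0.0)
    r = np.where(np.abs(t) <= tau / 2, 1.0 / tau, 0.0)  # unit-area rectangle
    w = np.convolve(wT, r, mode="same") * dt            # eqn (4)
    return a * t, w / w.max()                           # w(0) = 1 after normalization

t, w = con_window()
print(t[0], t[-1], w.max())   # support is [-T/2, T/2], peak 1
```

The rescaling only relabels the time axis, so the window values themselves are those of the convolution in eqn (4).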
3 Modification of the raised cosine weighting functions family – general solution

Let us consider the family of cosine windows with power $n$ (raised cosine with power $n$), having the general form

$$
w_T^{(n)}(t) = \left[ k + (1-k)\cos^n\!\left(\pi \frac{t}{T}\right) \right] \operatorname{rect}\!\left(\frac{t}{T}\right),
\tag{9}
$$

where $k$ is a real parameter of the function, $k \in \langle 0, 1 \rangle$, $n$ is an integer parameter of the function, $n \ge 1$, and $T$ is the weighting function duration. Applying well-known trigonometric identities, the general description of the function of eqn (9) with the $n$-th power becomes

$$
w_T^{(n)}(t) = \left[ A + B \sum_{i=0}^{s-1} \binom{n}{i} \cos\!\left(n_i \pi \frac{t}{T}\right) \right] \operatorname{rect}\!\left(\frac{t}{T}\right),
\tag{10}
$$
where

$$
s = \left\lfloor \frac{n+1}{2} \right\rfloor, \quad \lfloor x \rfloor \text{ denoting the largest integer not exceeding } x,
$$

$$
A = \begin{cases}
k & \text{for odd } n,\; n = 1, 3, 5, \ldots\\[4pt]
k + \dfrac{1-k}{2^n}\dbinom{n}{n/2} & \text{for even } n,\; n = 2, 4, 6, \ldots
\end{cases}
$$

$$
B = \frac{1-k}{2^{n-1}}, \qquad n_i = n - 2i,
$$

and $\binom{x}{y}$, $x \ge y$, are binomial coefficients. In the general case, therefore, the weighting function described by eqn (10) is, in the time domain, a sum of a constant component and time-limited, amplitude-scaled cosine functions. Their periods are integer submultiples of the fundamental period, which equals $2T$. Considering the general description of eqn (10) in the frequency domain, one obtains

$$
W_T^{(n)}(\omega) = \frac{1}{2\pi}\left[ W^{(n)}(\omega) \ast R_T(\omega) \right].
\tag{11}
$$
Using the Fourier transform pairs

$$
1 \leftrightarrow 2\pi\delta(\omega), \qquad \cos(\omega_0 t) \leftrightarrow \pi\delta(\omega - \omega_0) + \pi\delta(\omega + \omega_0),
$$

one obtains

$$
W^{(n)}(\omega) = A\,2\pi\delta(\omega) + B\pi \sum_{i=0}^{s-1} \binom{n}{i} \left[ \delta\!\left(\omega - n_i \frac{\pi}{T}\right) + \delta\!\left(\omega + n_i \frac{\pi}{T}\right) \right].
\tag{12}
$$

Applying the Fourier transform of the rectangular window,

$$
\operatorname{rect}\!\left(\frac{t}{T}\right) \leftrightarrow T\,\mathrm{Sa}\!\left(\frac{\omega T}{2}\right),
$$

where $\mathrm{Sa}(\cdot)$ denotes the function $\mathrm{Sa}(x) = (\sin x)/x$, and using the properties of the delta distribution, the final general form of the cosine function family with power $n$ in the frequency domain is

$$
W_T^{(n)}(\omega) = A T\,\mathrm{Sa}\!\left(\frac{\omega T}{2}\right) + \frac{BT}{2} \sum_{i=0}^{s-1} \binom{n}{i} \left[ \mathrm{Sa}\!\left(\left(\omega - n_i \frac{\pi}{T}\right)\frac{T}{2}\right) + \mathrm{Sa}\!\left(\left(\omega + n_i \frac{\pi}{T}\right)\frac{T}{2}\right) \right].
\tag{13}
$$

This function is a sum of functions of the $(\sin x)/x$ type, whose parameters and locations on the radian frequency axis depend on the power $n$ and the window duration $T$.
The general form of the modified weighting function in the time domain, defined on separate time intervals, follows from the convolution of the function in eqn (10) with the auxiliary rectangular window of eqn (5):

for $-\dfrac{T}{2} - \dfrac{\tau}{2} \le t < -\dfrac{T}{2} + \dfrac{\tau}{2}$

$$
w_\tau^{(n)}(t) = \left\{ \frac{A}{\tau}\left(\frac{T}{2} + \frac{\tau}{2}\right) + B\frac{T}{\tau\pi} \sum_{i=0}^{s-1} \frac{\binom{n}{i}}{n_i} \sin\!\left(n_i \frac{\pi}{2}\right) + \frac{A}{\tau}t + B\frac{T}{\tau\pi} \sum_{i=0}^{s-1} \frac{\binom{n}{i}}{n_i} \sin\!\left[n_i \frac{\pi}{T}\left(t + \frac{\tau}{2}\right)\right] \right\} \frac{1}{N_\tau},
\tag{14}
$$

for $-\dfrac{T-\tau}{2} \le t < \dfrac{T-\tau}{2}$

$$
w_\tau^{(n)}(t) = \frac{A}{N_\tau} + \frac{B}{N_\tau} \sum_{i=0}^{s-1} \binom{n}{i} \mathrm{Sa}\!\left(n_i \frac{\pi}{T}\frac{\tau}{2}\right) \cos\!\left(n_i \frac{\pi}{T} t\right),
\tag{15}
$$

for $\dfrac{T-\tau}{2} \le t \le \dfrac{T+\tau}{2}$

$$
w_\tau^{(n)}(t) = \left\{ \frac{A}{\tau}\left(\frac{T}{2} + \frac{\tau}{2}\right) + B\frac{T}{\tau\pi} \sum_{i=0}^{s-1} \frac{\binom{n}{i}}{n_i} \sin\!\left(n_i \frac{\pi}{2}\right) - \frac{A}{\tau}t - B\frac{T}{\tau\pi} \sum_{i=0}^{s-1} \frac{\binom{n}{i}}{n_i} \sin\!\left[n_i \frac{\pi}{T}\left(t - \frac{\tau}{2}\right)\right] \right\} \frac{1}{N_\tau}.
\tag{16}
$$

In order to fulfil the condition $w_\tau^{(n)}(0) = 1$, eqns (14), (15) and (16) are normalized by the coefficient $N_\tau$, which depends on $\tau$:

$$
N_\tau = A + B \sum_{i=0}^{s-1} \binom{n}{i} \mathrm{Sa}\!\left(n_i \frac{\pi}{T}\frac{\tau}{2}\right).
\tag{17}
$$

After scaling in time by the coefficient

$$
a = \frac{T}{T+\tau},
$$
the time interval borders and the final general form of the modified weighting function in the time domain become:

for $-\dfrac{T}{2} \le t < -\dfrac{(T-\tau)T}{2(T+\tau)}$

$$
w_\tau^{(n)}(t) = \left\{ \frac{A}{\tau}\left(\frac{T}{2} + \frac{\tau}{2}\right) + B\frac{T}{\tau\pi} \sum_{i=0}^{s-1} \frac{\binom{n}{i}}{n_i} \sin\!\left(n_i \frac{\pi}{2}\right) + \frac{A}{\tau}\frac{t}{a} + B\frac{T}{\tau\pi} \sum_{i=0}^{s-1} \frac{\binom{n}{i}}{n_i} \sin\!\left[n_i \frac{\pi}{T}\left(\frac{t}{a} + \frac{\tau}{2}\right)\right] \right\} \frac{1}{N_\tau},
\tag{18}
$$

for $-\dfrac{(T-\tau)T}{2(T+\tau)} \le t < \dfrac{(T-\tau)T}{2(T+\tau)}$

$$
w_\tau^{(n)}(t) = \frac{A}{N_\tau} + \frac{B}{N_\tau} \sum_{i=0}^{s-1} \binom{n}{i} \mathrm{Sa}\!\left(n_i \frac{\pi}{T}\frac{\tau}{2}\right) \cos\!\left(n_i \frac{\pi}{T}\frac{t}{a}\right),
\tag{19}
$$

for $\dfrac{(T-\tau)T}{2(T+\tau)} \le t \le \dfrac{T}{2}$

$$
w_\tau^{(n)}(t) = \left\{ \frac{A}{\tau}\left(\frac{T}{2} + \frac{\tau}{2}\right) + B\frac{T}{\tau\pi} \sum_{i=0}^{s-1} \frac{\binom{n}{i}}{n_i} \sin\!\left(n_i \frac{\pi}{2}\right) - \frac{A}{\tau}\frac{t}{a} - B\frac{T}{\tau\pi} \sum_{i=0}^{s-1} \frac{\binom{n}{i}}{n_i} \sin\!\left[n_i \frac{\pi}{T}\left(\frac{t}{a} - \frac{\tau}{2}\right)\right] \right\} \frac{1}{N_\tau}.
\tag{20}
$$

The normalization coefficient remains that of eqn (17). From eqns (18), (19) and (20) it follows that the slopes of the modified weighting function in the time domain are formed by the sum of a constant component, a linear function and sine functions. In its central part, the modification amounts to scaling the weighting function in amplitude. Analysing the exact version of the modified weighting function in the frequency domain, and based on eqn (6), one obtains

$$
W_\tau^{(n)}(\omega) = W_T^{(n)}(\omega)\,R_\tau(\omega).
\tag{21}
$$
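The normalization coefficient of eqn (17) can be cross-checked numerically: since r_τ has unit area, (w_T ∗ r_τ)(0) is the average of w_T over [−τ/2, τ/2], which should equal N_τ. A minimal sketch with illustrative parameter values:

```python
import numpy as np
from math import comb

def N_tau(n, k, T, tau):
    """Normalization coefficient, eqn (17)."""
    s = (n + 1) // 2
    A = k if n % 2 else k + (1 - k) / 2**n * comb(n, n // 2)
    B = (1 - k) / 2 ** (n - 1)
    Sa = lambda x: np.sinc(x / np.pi)          # Sa(x) = sin(x)/x
    return A + B * sum(comb(n, i) * Sa((n - 2 * i) * np.pi * tau / (2 * T))
                       for i in range(s))

# numeric check: the tau-average of w_T around t = 0 must equal N_tau
T, tau, n, k = 1.0, 0.4, 3, 0.1
u = np.linspace(-tau / 2, tau / 2, 20001)
wT = k + (1 - k) * np.cos(np.pi * u / T) ** n      # eqn (9); |u| < T/2 here
avg = np.sum((wT[:-1] + wT[1:]) / 2 * np.diff(u)) / tau
print(abs(avg - N_tau(n, k, T, tau)))               # ~0 (quadrature error only)
```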
Using the Fourier transform of the auxiliary rectangular window, $R_\tau(\omega) = \mathrm{Sa}(\omega\tau/2)$, together with eqn (13) and the normalizing coefficient $N_\tau$, one obtains

$$
W_\tau^{(n)}(\omega) = \frac{AT}{N_\tau}\,\mathrm{Sa}\!\left(\frac{\omega\tau}{2}\right) \mathrm{Sa}\!\left(\frac{\omega T}{2}\right) + \frac{BT}{2N_\tau}\,\mathrm{Sa}\!\left(\frac{\omega\tau}{2}\right) \sum_{i=0}^{s-1} \binom{n}{i} \left[ \mathrm{Sa}\!\left(\left(\omega - n_i\frac{\pi}{T}\right)\frac{T}{2}\right) + \mathrm{Sa}\!\left(\left(\omega + n_i\frac{\pi}{T}\right)\frac{T}{2}\right) \right].
\tag{22}
$$

The final general description of the modified weighting function after the scaling operation is

$$
W_\tau^{(n)}(\omega) = \frac{ATa}{N_\tau}\,\mathrm{Sa}\!\left(\frac{a\omega\tau}{2}\right) \mathrm{Sa}\!\left(\frac{a\omega T}{2}\right) + \frac{BTa}{2N_\tau}\,\mathrm{Sa}\!\left(\frac{a\omega\tau}{2}\right) \sum_{i=0}^{s-1} \binom{n}{i} \left[ \mathrm{Sa}\!\left(\left(\omega - n_i\frac{\pi}{Ta}\right)\frac{Ta}{2}\right) + \mathrm{Sa}\!\left(\left(\omega + n_i\frac{\pi}{Ta}\right)\frac{Ta}{2}\right) \right].
\tag{23}
$$

From eqn (23) it follows that the auxiliary window is responsible for amplitude modulation of the individual components of the modified weighting function; it changes the widths of their lobes and their locations on the radian frequency axis.
4 Simulation results

A simulation model of the modified weighting function generator and of the pulse radar signal generator was prepared for testing. It was implemented in the LabWindows/CVI (National Instruments) environment. The model makes it possible to test the features of both the weighting functions and the generated radar NLFM signals. The main task of the research was to study the influence of the auxiliary rectangular window duration on the characteristics of the generated signals. A number of characteristics were obtained; a few of them are presented below. Normalized magnitude spectra of the modified weighting function for different values of the power $n$ are presented in fig. 1. The values of the parameters $k$ and $\tau$ were found iteratively so as to achieve the minimum sidelobe level. For comparison, the normalized magnitude spectra of the non-modified weighting function and of the rectangular window are also shown in the same figures. The former was prepared for the same value of the parameter $n$ and for the optimum value of the parameter $k$ that gives the minimum sidelobe level. One can conclude that, thanks to the modification, the sidelobe level decreases by about 14 dB up to 27 dB, depending on the value of $n$, with respect to the non-modified weighting function. The sidelobe attenuation is achieved at the cost of mainlobe broadening. For example, a comparable sidelobe level can be obtained for the modified window with $n = 2$ and the non-modified window with $n = 6$, but in the second case the mainlobe width at the $-3$ dB level is 1.35 times wider.
Figure 1: Normalized magnitude spectra of the modified, non-modified and rectangular windows for different $n$ (panels for $n = 1$ to $6$; normalized frequency 0–10, magnitude 0 to $-120$ dB).
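The sidelobe-level versus mainlobe-width trade-off discussed above can be reproduced with a short sketch; the value k = 0.08 is illustrative, not one of the optimum values found in the paper.

```python
import numpy as np

def spectrum_metrics(w, nfft=1 << 16):
    """Peak sidelobe level [dB] and two-sided -3 dB mainlobe width [FFT bins]."""
    W = np.abs(np.fft.rfft(w, nfft))
    WdB = 20 * np.log10(np.maximum(W / W[0], 1e-12))
    # the first local minimum marks the edge of the mainlobe
    edge = np.argmax((WdB[1:-1] < WdB[:-2]) & (WdB[1:-1] < WdB[2:])) + 1
    return WdB[edge:].max(), 2 * int(np.argmax(WdB < -3))

t = np.linspace(-0.5, 0.5, 1024)
for n in (1, 2, 6):                      # raised cosine with power n, k = 0.08
    psl, width = spectrum_metrics(0.08 + 0.92 * np.cos(np.pi * t) ** n)
    print(f"n={n}: peak sidelobe {psl:.1f} dB, -3 dB width {width} bins")
```

Raising n lowers the sidelobes at the cost of a wider mainlobe, which is the trade-off visible in fig. 1.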
The design of the transmitted NLFM radar signal was the next research step. The matched filter output is the autocorrelation function of the filter input signal, and the autocorrelation function is determined by the inverse Fourier transform of the energy spectral density. Hence, to generate an NLFM signal of uniform amplitude, one can use a signal whose magnitude spectrum is described by the square root of a weighting function, for example the raised cosine with power $n$. Finding the nonlinear frequency law using the stationary-phase principle was suggested by Fowle [1] and by Cook and Bernfeld [2]; this method was applied in this paper. A number of the resulting characteristics are presented here. The normalized magnitude of the NLFM signal observed at the matched filter output, versus time normalized to the pulse duration, is depicted in figs. 2 and 3. The characteristics are shown for magnitude spectra described by the non-modified and modified weighting functions and for two different values of the $BT$ product.
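The relationship used here (the matched-filter output is the inverse Fourier transform of the energy spectral density) can be illustrated numerically: if |S(f)|² is shaped as a raised-cosine window, the compressed pulse inherits the window's sidelobe behaviour (for k = 0.08, n = 2 the shaping is close to the classic Hamming taper). The values below are illustrative, not the paper's optimized parameters.

```python
import numpy as np

nfft, nband = 8192, 257
f = np.linspace(-0.5, 0.5, nband)
k, n = 0.08, 2
esd_band = k + (1 - k) * np.cos(np.pi * f) ** n   # |S(f)|^2 shaped as the window

esd = np.zeros(nfft)
esd[:nband] = esd_band          # band position only rotates the output phase
out = np.abs(np.fft.ifft(esd))  # compressed-pulse (matched-filter) envelope
out_dB = 20 * np.log10(out / out.max())

edge = np.argmax((out_dB[1:-1] < out_dB[:-2]) & (out_dB[1:-1] < out_dB[2:])) + 1
print(f"peak time sidelobe: {out_dB[edge:nfft // 2].max():.1f} dB")
```

A flat (rectangular) energy spectral density would instead give roughly -13 dB time sidelobes, which is why the spectrum is tapered.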
Figure 2: Normalized magnitude of the matched filter output signal for the non-modified (a: $k = 0.113$, $\tau = 0$) and modified (b: $k = 0.2153$, $\tau = 0.535T$) weighting function, $BT = 100$.

Figure 3: Normalized magnitude of the matched filter output signal for the non-modified (a: $k = 0.09$, $\tau = 0$) and modified (b: $k = 0.094$, $\tau = 0.415T$) weighting function, $BT = 100$.
As a result of the recursive search for the minimum sidelobe level by varying the parameters $k$ and $\tau$, an improvement of this level of 0.8 dB was obtained for $BT = 100$ and of 1.8 dB for $BT = 300$.
5 Conclusion

The main goal of the paper is the derivation of a general analytical formula for the modified weighting function and its application to the synthesis of NLFM-type radar signals. The essence of the modification is the convolution of the known family of raised-cosine weighting functions with an auxiliary rectangular window of variable duration. Selecting the duration of the auxiliary rectangular window gives the opportunity to fine-tune the window's features. As a result, a decrease of the sidelobe level by 14 dB up to 27 dB was achieved for the chosen parameters, with the simultaneous and expected increase of the mainlobe width. Moreover, an improvement of the NLFM radar signal parameters was observed at the output of the matched filter. The scale of this effect is relatively small and depends on the $BT$ product. It is worth adding that the described method may also be applied to fine-tune other types of weighting functions. The modified weighting function described in the paper may be used not only for radar signal synthesis but also for harmonic analysis and filter design.
Acknowledgements This work was supported by the National Center for Research and Development for the years 2007-2010 under Commissioned Research Project PBZ-MNiSWDBO-04/I/2007.
References

[1] Fowle, E.N., The Design of FM Pulse Compression Signals. IEEE Transactions on Information Theory, IT-10, pp. 61-67, 1964.
[2] Cook, C.E. & Bernfeld, M., Radar Signals: An Introduction to Theory and Application, Artech House: Boston and London, pp. 34-56, 1993.
[3] Levanon, N. & Mozeson, E., Radar Signals, John Wiley & Sons: Hoboken, pp. 87-88, 92, 129, 2004.
[4] Harris, F.J., On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform. Proceedings of the IEEE, 66, pp. 51-83, 1978.
[5] Nuttall, A.H., Some Windows with Very Good Sidelobe Behaviour. IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-29, pp. 84-91, 1981.
[6] Wen, P., A Fast and High-Precision Measurement of Distorted Power Based on Digital Filtering Techniques. IEEE Transactions on Instrumentation and Measurement, 41, pp. 403-406, 1992.
[7] Dai, X. & Gretsch, R., Quasi-Synchronous Sampling Algorithm and its Applications. IEEE Transactions on Instrumentation and Measurement, 43, pp. 204-209, 1994.
[8] Reljin, I.S., Reljin, B.D., Papić, V.D. & Kostić, P., New Window Functions Generated by Means of Time Convolution – Spectral Leakage Error. Proc. of the 9th Conference MELECON, Tel Aviv, pp. 878-881, 1998.
[9] Reljin, I.S., Reljin, B.D. & Papić, V.D., Extremely Flat-Top Windows for Harmonic Analysis. IEEE Transactions on Instrumentation and Measurement, 56, pp. 1025-1041, 2007.
[10] Brandwood, D., Fourier Transforms in Radar and Signal Processing. Artech House: Boston and London, pp. 40-53, 102-105, 2003.
Technique for the electric and magnetic parameter measurement of powdered materials

R. Kubacki, L. Nowosielski & R. Przesmycki
Faculty of Electronics, Military University of Technology, Poland
Abstract

A measurement technique for the electric and magnetic properties of powdered ferrites has been developed. The technique allows one to determine the relative permittivity and permeability of powdered materials. Measurements were taken in a coaxial transmission line, which guarantees broadband frequency coverage. The calculations were based on the scattering parameters of the measured powdered material and of two plastic walls. The values of relative permittivity (ε’, ε”) and permeability (µ’, µ”) have been determined for a ferrite powder in the frequency range from 200 MHz to 1200 MHz.
Keywords: absorbing materials, permittivity and permeability measurements.
1 Introduction

With the increasing number of devices emitting electromagnetic radiation, there is a need to develop and introduce into practice materials and techniques that protect against unwanted radiation. This is important for EMC purposes as well as for protecting people against harmful radiation. There are two ways in which materials can shield against radiation: by reflecting or by absorbing the incident electromagnetic energy. The ideal absorber should have low reflectivity and a high absorption of the incident electromagnetic energy. The electromagnetic field incident on the boundary surface of any material is reflected, and the level of reflected energy is a function of the internal parameters of the material. In general, the shielding effectiveness of a material or configuration of materials is a measure of its ability to attenuate electromagnetic energy. This ability depends on both its reflection and absorption properties. Energy that is neither reflected nor absorbed by the material is transmitted from one side to the other.
doi:10.2495/CMEM090221
For a given material, the amount of energy transmitted has a complex dependence on the angle of incidence and the polarization of the incoming electromagnetic wave. There are a few ways to obtain a minimal value of reflected power; the most often applied technique is to cover the protected walls with graphite materials having a pyramidal layer structure. Unfortunately, such structures operate correctly only at rather high (microwave) frequencies. At frequencies lower than 10 MHz, the pyramidal layer technique does not guarantee a decrease of the level of reflected energy because, generally speaking, the wavelength is much larger than the height of the pyramids. Another way to obtain low reflectivity is to use materials having not only electric but also magnetic properties; ferrites are such materials. Nowadays, the walls of anechoic chambers are covered by broadband absorbers composed of two layers: the first is a pyramidal-structure polyurethane foam loaded with graphite, and the second is a sintered ferrite. Commercially available monolithic ferrite materials guarantee good absorption properties and a low level of reflected field; however, in many practical uses such a solid form of absorber cannot be introduced. From this point of view there is a need to develop absorbers that fulfil the following requirements:
- operate at desirable frequencies or over a wide frequency band;
- be a solid or flexible form of material;
- have the desired level of reflectivity and absorption.
New materials can be used for specific studies, e.g. stealth, but in this case composite materials should rather be developed. It is possible to simulate the properties of an absorber having the above properties; however, the constitutive material parameters, e.g. the relative permittivity εr and permeability µr, of such materials or of any of their components must be determined. In many cases the components are in powder form. In fact, most absorbing material is usually composed of powder components joined together with an epoxy bonder. The important thing is to assess the electric and magnetic parameters of the components of the mixture while they are still in powder form, before mixing.
2 Network analyzer measurements

There are many methods that can be used to determine the permittivity εr and the permeability µr of materials. Resonant cavity techniques offer high measurement accuracy but can mainly be used at a single frequency. Waveguide methods, on the other hand, make measurements possible over a wide frequency band, but they have some disadvantages when the measured materials are lossy. Measurements in waveguides are based on measuring the scattering parameters (Sik). In this paper the scattering parameters were measured according to the approach given by Baker-Jarvis [1]. This approach refers to a toroidal material
Figure 1: Coaxial transmission line with a sample of the material under test (air sections L1 and L2, sample length Lm, total section length Ls).
under test inserted into a coaxial transmission line – fig. 1. In this method the specimen is treated as homogeneous. Baker-Jarvis [1] described the technique of determining the propagation constant (γm) and the characteristic impedance (Zm) from the measured scattering parameters. However, in order to determine the values of γm and Zm of the material under test (Lm), the scattering parameters of the sample must be extracted from the scattering parameters of the whole coaxial section (Ls). In this case the airline sections of lengths L1 and L2 must be compensated. This is an important task, needed to minimize measurement errors due to uncontrolled displacement of the sample. To minimize these errors, a method was developed based on nonlinear optimization of the calculation with the inserted airline sections [1]. Measurements of the scattering parameters of the sample inserted into the coaxial line were taken using a vector network analyzer system. However, special additional accessories are necessary for the calibration process. Effective auxiliary calibration tools were developed and described in [2, 3]. These tools are in fact three airlines of different lengths, depending on the measured frequency range. This special calibration process guarantees the highest accuracy of measurements. Taking the calibration into account, the computational program can compensate for errors caused by any potential displacement of the sample. The above-mentioned measurement technique is also called the reflection/transmission method. This method is well adapted to the broadband characterization of isotropic and homogeneous materials; however, the technique has some disadvantages. On the one hand, at frequencies where

$$
L_m \sqrt{\varepsilon_r \mu_r} = n \frac{\lambda}{2}
$$

($n$ an integer), $S_{11}$ vanishes, and in such a case noise affects the measurements; consequently, the phase uncertainty of the network analyzer is large. On the other hand, when the material is lossy (or the sample is thick) the value of $S_{21}$ can also vanish. The reflection/transmission method should therefore be applied to the characterization of materials whose permittivity and permeability are approximately known.
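The half-wavelength condition above can be turned into a quick check of which frequencies to avoid for a given low-loss sample; the function name and sample values below are illustrative.

```python
import numpy as np

c = 299_792_458.0  # speed of light in vacuum, m/s

def s11_blind_freqs(L_m, eps_r, mu_r, n_max=5):
    """Frequencies where L_m * sqrt(eps_r * mu_r) = n * lambda / 2, so S11 of
    a low-loss sample vanishes and the reflection measurement becomes noisy."""
    return [n * c / (2 * L_m * np.sqrt(eps_r * mu_r)) for n in range(1, n_max + 1)]

# e.g. a 10 mm sample with eps_r = 5, mu_r = 2
for fb in s11_blind_freqs(0.01, 5.0, 2.0):
    print(f"{fb / 1e9:.2f} GHz")
```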
3 Measurement method of the relative permittivity and permeability of powdered materials

There are few waveguide methods that allow one to measure the complex relative permittivity εr of non-solid materials. In the case of liquid materials, measurement methods are easy to realize, especially when the liquids are lossy. For example, a typical waveguide setup used for measuring liquid biological tissues consists of a coaxial open-ended line applied to the liquid surface (Kubacki et al. [4]). In this case the measurement of the complex reflection coefficient is sufficient to determine the two parameters of εr (ε’, ε”) of the liquid. However, when the material also has magnetic properties µr (µ’, µ”), measurements based only on the reflection coefficient are not sufficient; the transmission coefficient should also be measured. This can be done using waveguides or coaxial lines, where two complex scattering parameters can be measured. The method described in this paper allows one to measure the complex relative permittivity εr and permeability µr of non-solid (mainly powdered) materials over a wide frequency range – from a few kHz to a few GHz. The measurement technique is based on a method developed for coaxial transmission lines, in which the material under test must have toroidal form. In the proposed method, the material under test is inserted into a coaxial line between two plastic walls, as shown in fig. 2. This allows one to measure powdered materials. The values of the electric (ε’, ε”) and magnetic (µ’, µ”) parameters of the measured powder are extracted mathematically from the measured scattering parameters of the whole sample (S11, S22, S12, S21). The mathematical evaluation that allows one to extract the electromagnetic data of the powder from the measured data of the whole sample has been derived on the basis of scattering parameters describing two-port networks, as presented in fig. 3. In this figure, the scattering parameters of the plastic walls are denoted Eik, while the scattering parameters of the measured powdered material are denoted Sik.
Figure 2: Sketch of the coaxial line with a sample of the material under test between two plastic walls (wall thickness Lw, sample length L).
Scattering parameters of plastic walls (Eik) and powder under test (Sik).
Taking into account the reciprocal and source-free networks, the scattering matrix satisfies the following condition: Sik=Ski. In our case for symmetrical twoport networks: S11=S22, S12=S21 and also E11=E22, E12=E21. For each network it is possible to formulate the following scattering equations: b1 S11 S12 a1 (1) b = S 2 21 S 22 a 2 Scattering matrix (1) refers to a single network and for the sample composed of three layers such matrix becomes much more complicated. To receive mathematical formulas that allow one to extract scattering parameters of networks representing the powder substrate, the so called graph method of calculation has been introduced. The graph method has been derived from the theory of power flow in the network branches. As an example of how to use graph method in practice the reflection coefficient at the gate A-A (fig. 3) has been presented in fig. 4. A
S21
S22
ΓA
S11
Sik
E11
Eik
S12 A
Figure 4:
The graph method used for the example analysis of the reflection coefficient at the gate A-A.
The final formulas for the scattering parameters Swik of the whole sample (material under test and two plastic walls) are as follows:

$$
S_{w11} = E_{11} + E_{21}^2\,\frac{(1 - E_{11}S_{11})\,S_{11} + E_{11}S_{21}^2}{(1 - E_{11}S_{11})^2 - E_{11}^2 S_{21}^2}
\tag{2}
$$

$$
S_{w21} = \frac{E_{21}^2\,S_{21}}{(1 - E_{11}S_{11})^2 - E_{11}^2 S_{21}^2}
\tag{3}
$$

where Swik represents the scattering parameters of the whole sample. Formulas (2) and (3) allow one to determine the scattering parameters of the specimen (S11, S21) from the scattering data of the whole sample (Sw11, Sw21).
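Formulas (2) and (3) can be cross-checked against a direct flow-graph cascade of the three two-ports (wall, powder, wall). The sketch below uses arbitrary symmetric test values; the `cascade` helper is an illustration, not code from the paper.

```python
import numpy as np

def cascade(A, B):
    """Cascade two 2-port S-matrices (a11, a12, a21, a22) by flow-graph rules."""
    a11, a12, a21, a22 = A
    b11, b12, b21, b22 = B
    d = 1 - a22 * b11
    return (a11 + a12 * b11 * a21 / d,
            a12 * b12 / d,
            a21 * b21 / d,
            b22 + b21 * a22 * b12 / d)

# symmetric wall (Eik) and powder (Sik) S-parameters, arbitrary test values
E11, E21 = 0.2 + 0.1j, 0.9 - 0.2j
S11, S21 = 0.3 - 0.05j, 0.7 + 0.3j
E = (E11, E21, E21, E11)
S = (S11, S21, S21, S11)
w11, _, w21, _ = cascade(E, cascade(S, E))   # wall -> powder -> wall

den = (1 - E11 * S11) ** 2 - E11**2 * S21**2
Sw11 = E11 + E21**2 * ((1 - E11 * S11) * S11 + E11 * S21**2) / den   # eqn (2)
Sw21 = E21**2 * S21 / den                                            # eqn (3)
assert np.isclose(w11, Sw11) and np.isclose(w21, Sw21)
```

Inverting the same two formulas numerically yields the specimen parameters S11 and S21 from the measured Sw11 and Sw21.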
4 Experiments and results

Measurements of the complex relative permittivity and permeability were carried out using a vector network analyzer (VNA). The system consisted of a 7 mm coaxial airline equipped with measurement cables and LPC7 connectors. The center conductor of the coaxial airline is 3.04 mm in diameter, which gives the holder a 50 Ω characteristic impedance. The system measures the magnitude and phase of four scattering parameters (S11, S22, S12, S21). First, empty coaxial airlines were employed to calibrate the system. The additional tools consisted of three auxiliary airlines used to realize the calibration; these airlines serve as sample holders as well as calibration tools. They are presented in fig. 5. The whole measurement setup is presented in fig. 6. Since the various components of the VNA introduce magnitude and phase uncertainties, the calibration process is important in order to obtain the highest measurement accuracy [1]. Calibration removes the systematic measurement uncertainties of the system. The calibration coefficients were determined by solving a set of simultaneous equations generated from the linear fractional transformation. After calibration, when the system operated with error correction, the measurements were updated by the calibration coefficients. The calibration data and the sample measurements were acquired using the MultiCal program, then processed with a special program developed for extracting the values of permittivity and permeability.
Figure 5: Coaxial airlines as sample holders and/or calibration tools.

Figure 6: S-parameter test of the vector network analyzer and a coaxial line with sample.
Figure 7: The infinite reflections and transmissions according to the theory of optical rays (successive ray amplitudes $\rho E_0$, $(1-\rho^2)e^{-\gamma L}E_0$, $-\rho(1-\rho^2)e^{-2\gamma L}E_0$, $\ldots$ for a sample of length $L$).
The final formulas for the scattering parameters of the powder (S11, S21) were extracted from the scattering parameters of the whole sample using (2) and (3). On the other hand, these scattering parameters can also be determined by analyzing the electric field at the specimen interfaces. In order to determine the material properties from the scattering data, one needs the relationship between the electric field incident at the boundary of the specimen and the field propagating through the material. The reflection and transmission coefficients can be interpreted as an infinite summation of optical rays when the incident electromagnetic field is treated as a plane wave. Such a situation is typical of a coaxial line, where the TEM mode propagates. The infinite reflections and transmissions are shown in fig. 7.
The reflections and transmissions of the rays are presented according to the theory of optics (fig. 7), taking into account the two boundary conditions: "air/sample" and "sample/air". The reflection coefficient (S11) consists of infinitely many rays, of which the first is reflected at the first air/sample interface (ρ) and all the others are transmitted rays arising from multiple reflections inside the sample. The transmission coefficient (S21), in turn, is the infinite summation of the rays transmitted through the second, "sample/air", interface. This infinite sum of components is in fact a geometric series, yielding the following final sums:

$$
S_{11} = \rho\,\frac{1 - e^{-2\gamma L}}{1 - \rho^2 e^{-2\gamma L}}
\tag{4}
$$

$$
S_{21} = \frac{(1 - \rho^2)\,e^{-\gamma L}}{1 - \rho^2 e^{-2\gamma L}}
\tag{5}
$$

where, in the coaxial transmission line with the fundamental TEM mode, the reflection coefficient is

$$
\rho = \frac{\sqrt{\mu_r} - \sqrt{\varepsilon_r}}{\sqrt{\mu_r} + \sqrt{\varepsilon_r}}
$$

and the propagation constant is

$$
\gamma = j\,\frac{\omega}{c}\sqrt{\varepsilon_r \mu_r}.
$$

Formulas (4) and (5) are functions of the unknown parameters εr and µr of the specimen. The values of these parameters (ε’, ε”, µ’, µ”) can be obtained by solving the complex equations (4) and (5); the iterative procedure yields a very stable solution for a specimen of arbitrary length. For the purpose of validating the solution, a sample of solid material was measured in the configuration with two plastic walls, and the values of ε’, ε”, µ’, µ” were determined using the proposed method. The obtained data were confirmed by measuring the same solid sample (without plastic walls) using the reflection/transmission method described in [1, 3]; the values of ε’, ε”, µ’ and µ” measured by the proposed method were in good agreement with it. As an example of powdered material measurements, the complex permittivity and permeability of a ferrite powder were measured using the proposed method. The ferrite powder was obtained in a ball mill from the solid-state material (a spinel-class ferrite, named G-175), yielding a granulate of (0.2–2) µm. For this powdered material, the relative values of the permittivity (ε’, ε”) versus frequency are presented in fig. 8, while the relative values of the permeability (µ’, µ”) are presented in fig. 9. The obtained data for the powdered ferrite were significantly lower than those of the solid-state ferrite. This is because the magnetic domains of the ferrite were destroyed during the milling process. Further investigation should be undertaken to characterize the relationship between the granulate volume and the decrease in permeability.
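Equations (4) and (5) can be inverted numerically, in the spirit of the iterative procedure mentioned above. The sketch below is a round-trip check under assumed material values; the function names, the initial guess and the choice of `scipy` solver are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import fsolve

c = 299_792_458.0   # m/s

def forward(eps_r, mu_r, f, L):
    """S11, S21 of a sample of length L in a TEM line, eqns (4) and (5)."""
    rho = (np.sqrt(mu_r) - np.sqrt(eps_r)) / (np.sqrt(mu_r) + np.sqrt(eps_r))
    z = np.exp(-1j * 2 * np.pi * f / c * np.sqrt(eps_r * mu_r) * L)  # e^(-gamma L)
    den = 1 - rho**2 * z**2
    return rho * (1 - z**2) / den, (1 - rho**2) * z / den

def extract(S11, S21, f, L, guess=(4.0, 0.3, 1.8, 0.05)):
    """Solve eqns (4)-(5) iteratively for (eps', eps'', mu', mu'')."""
    def residual(x):
        s11, s21 = forward(x[0] - 1j * x[1], x[2] - 1j * x[3], f, L)
        return [s11.real - S11.real, s11.imag - S11.imag,
                s21.real - S21.real, s21.imag - S21.imag]
    return fsolve(residual, guess)

# round-trip check with assumed material values (eps_r = 5 - 0.5j, mu_r = 2 - 0.1j)
f, L = 500e6, 0.01
S11, S21 = forward(5.0 - 0.5j, 2.0 - 0.1j, f, L)
print(extract(S11, S21, f, L))
```

For electrically long or very lossy samples, additional care with the branch of the complex logarithm is needed, as noted in the discussion of the reflection/transmission method above.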
Figure 8: Values of the relative permittivity (ε’, ε”) of the ferromagnetic powder as a function of frequency (200-1200 MHz).
Figure 9: Values of the relative permeability (µ’, µ”) of the ferromagnetic powder as a function of frequency (200-1200 MHz).
In the described measurements all powder grains were small compared to the wavelength, so the material could be treated as homogeneous. Measurement problems can appear in the measurement transmission line when the material under test is heterogeneous. This can happen when the material contains inclusions or when the powder grains are not small compared to the wavelength. Under these conditions higher-order modes can be excited in the sample by diffraction on the heterogeneities, so the parameters εeff and µeff of such a sample cannot be deduced by the classical calculation, which assumes that only the dominant mode propagates. Moreover, in structures with local inclusions, local resonances are excited. When higher-order modes appear, part of the energy is lost to the measurement owing to metallic losses in the coaxial line and in the inclusions.
5 Conclusions
A measurement technique for the electric and magnetic properties of powdered ferrites has been developed. This technique seems to be useful for lossy materials. Based on it, values of the relative permittivity (ε’, ε”) and permeability (µ’, µ”) of ferromagnetic powders can be measured in the frequency range from 200 MHz to 1200 MHz. The aim of this work was to investigate the measurement technique for powdered materials, because it could be an important tool for simulating the final properties of absorbing materials composed of conductor and ferrite materials. However, it turned out that in the case of ferrite powder such an attempt fails, because the values of the relative permeability (µ’, µ”) of the powder were significantly lower than those of the solid-state ferrite. Further investigation should be directed at explaining the difference in permeability between the powdered and solid-state ferrites.
Acknowledgement The authors would like to thank Ryszard Frender from the Telecommunications Research Institute, Warsaw, Poland for providing ferrite powders for the measurements.
References
[1] Baker-Jarvis J., Transmission/reflection and short-circuit line permittivity measurements, NIST Technical Note, 2005.
[2] Marks R.B., A multiline method of network analyzer calibration, IEEE Trans. Microwave Theory Tech., 42, pp. 1205-1215, 1991.
[3] Wiatr W., Frender R., Żebrowski M., Characterization of microwave absorbing materials using a wideband T/R measurement technique, Conference Proceedings of the International Conference on Microwaves, Radar and Wireless Communications, Wrocław, Poland, 2, pp. 475-478, 2008.
[4] Kubacki R., Sobiech J., Wardak K., The comparison of dielectric properties of young and mature animal tissues in microwaves, Przegląd Elektrotechniczny, 5, pp. 43-45, 2006.
Acoustic watermark server effectiveness Z. Piotrowski & P. Gajewski Telecommunications Institute, Faculty of Electronics, Military University of Technology, Poland
Abstract This paper describes the technology of an acoustic watermark Internet server, which distributes digital sound together with additional inaudible hidden data. We describe the main architecture of this server (software modules) as well as the main features of this system. The characteristic specification of the implemented watermarking server engine is also presented. Data hiding is one of the most important technologies of Digital Rights Management dedicated to digital multimedia systems. Using hidden signals just below the original host level, together with a perceptual model, we can create another digital space for very important data. This data represents the index value for multimedia identification purposes (ID). In this paper we also describe experimental results using the internet infrastructure and a dedicated algorithm for audio content fingerprinting. The implemented algorithm operates in the frequency domain and uses orthogonally distributed frequency components allocated along the spectrum bins. We also computed several metrics for a fixed number of audio tracks to show the effectiveness of the watermark server and algorithm. The following metrics were computed: the time of signal processing for watermark embedding and decoding, the Signal-to-Mask Ratio (SMR), the Mean Square Error (MSE), the Normalised Mean Square Error (NMSE), the Peak Signal to Noise Ratio (PSNR) and the Audio Fidelity (AF). Keywords: data hiding, watermarking, watermarked sound metrics, digital rights management.
1 Introduction
Nowadays, an important problem in multimedia distribution systems is protection against the loss of intellectual work through, e.g., the illegal copying and distribution of digital sound without paying the author or producer. This problem
doi:10.2495/CMEM090231
has been increasing dramatically for several years; thus producers and artists have had to decrease their royalties from records to keep the price per single CD or DVD competitive with that of the so-called digital pirates. The problem is also important for young music producers and composers. They sometimes produce well-recognised sound tracks of various kinds of music (hip-hop, house, pop, etc.) in their home studios and, after the time-consuming track creation and development process, simply release the music to the public for free. Our proposal is an internet system for digitally signing music. The idea is simply to mark the sound track with an additional imperceptible signature [3] using an acoustic internet server. After this stage, the so-called watermarked sound tracks are made available to the public. Among other things, this method has the advantages of open access for all users all over the world and allocation of an individual signature to the dedicated user.
2 The main system architecture
The system architecture is presented in Figure 1. The system consists of an internet server with a built-in database. All user terminals are connected to the internet. The server distributes both private and public signatures (ID), and the user can allocate the proper meta-data to these signatures. The main web page is shown in Figure 2.
Figure 1: The main system architecture.
The main features of the acoustic watermark internet server are:
- Watermark embedding. (The watermark is a dedicated acoustic signal that is added to the original host signal and is completely inaudible in the presence of the host signal. The watermark represents the digital signature as an integral number.)
- Watermark decoding.
- Sound track watermarking using a signature (the signature value is presented only to the logged-in user). Only a logged-in user can decode the sound track.
The server has a built-in algorithm based on frequency-domain processing to embed the watermark.
Figure 2: The main web page of the acoustic watermark server.
2.1 Software modules The developed system is based on a professional database using PostgreSQL technology and PHP scripts, as well as a watermarking engine. The watermarking engine ensures effective embedding of the watermark signal into the original host sound track. The sound track is stored in the database as a *.wav format file. 2.1.1 Database implementation The base language for communicating with database internet systems is SQL [1]. SQL is a structural language used to save, send and, generally, manage data in relational database systems. Taking into consideration the character of the project, a database management system based on an open-source licence was chosen. The basic servers used for these purposes are MySQL, Firebird and PostgreSQL. We expect very large data volumes; thus the most suitable choice was the PostgreSQL server. PostgreSQL [2] is an object-relational open-source system compiled for the most popular operating systems, e.g. Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, Mac OS X, Solaris, Tru64) and Windows.
The main table TSignatures implemented in the database structure is presented in Figure 3. The watermark signature is saved as digital data in the TSignatures table. The signature number (the “IW” field) is also the table's main key, which auto-increments in successive rows of signature data. The “Status” field allows one to save information about used signatures. The fields “IDSystem” and “IDuser” are foreign keys referencing the coding system and the users.
Figure 3: The main table TSignatures implemented in the database system.
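As an illustration of the table layout just described, a hypothetical re-creation in SQLite (chosen here only to keep the sketch self-contained; the paper's server is PostgreSQL, and the Signature column name and type are assumptions):

```python
import sqlite3

# Hypothetical re-creation of the TSignatures table described in the text.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE TSignatures (
        IW        INTEGER PRIMARY KEY AUTOINCREMENT,  -- signature number, main key
        Signature BLOB NOT NULL,                      -- watermark signature data
        Status    INTEGER DEFAULT 0,                  -- whether the signature was used
        IDSystem  INTEGER,                            -- foreign key: coding system
        IDuser    INTEGER                             -- foreign key: user
    )""")
con.execute("INSERT INTO TSignatures (Signature, IDSystem, IDuser) VALUES (?, ?, ?)",
            (b"\x01\x02", 1, 42))
row = con.execute("SELECT IW, Status FROM TSignatures").fetchone()
```

The auto-incrementing IW key and the default Status value mirror the behaviour described for the production table.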
2.1.2 PHP scripts The project was written in PHP (interpreter version 5.2) using object-oriented programming and the MVC (model-view-controller) model. Scripts work in two basic modes:
- server-side scripts (response to user actions – controller, and content generation – view);
- CLI (command line interface) scripts – sound signal processing control.
2.1.2.1 Server-side scripts The object model requires dividing the programme code into classes so that each object definition is collected in one file. There are several implemented classes:
- Main – the main class controls all project processes. This class is responsible for session initialization, database connection and cookie reading. The two main methods are: makeAction(), which controls the processing of data transmitted using forms, and registerFlags(), responsible for sending the required data to the view.
- User – this class represents the user of the web server. The user can be logged in or can be a guest. This class allows for user registration, registration confirmation and the login process.
- userAdmin – this class is similar to the User class, but has methods allowing for user management.
- files – this class receives files transmitted from the user, stores them in a secure manner and is responsible for processing the files by initializing CLI scripts.
- coder – this class represents the coding and decoding system.
- cms – this class manages the web content.
Additionally there are other, external classes under the GNU GPL licence:
- phpMailer – allows electronic mail to be sent using external SMTP servers.
- xajax – a library for quick integration of the AJAX technique with PHP mechanisms.
- PHPTal – a TAL (Template Attribute Language) implementation based on the XML language.
- getId3 – this class allows for quick file header decoding and obtaining information about the file.
- reCaptcha – this class is connected with the recaptcha.org service, using a human recognition module named Completely Automated Public Turing test to tell Computers and Humans Apart.
2.1.2.2 CLI scripts CLI scripts control the time-consuming processes. There are time restrictions on PHP script execution, typically 30 seconds; thus time-consuming operations must be executed using CLI scripts. There are many benefits of using CLI scripts:
- faster view generation (web page generation) for the end user;
- easy management of the dedicated processes by the administrator;
- robustness against random connection breaking by the web browser;
- the possibility of moving a process to another machine;
- hardware reallocation to the processes.
The client browser can communicate with the web service using server-side scripts only.
3 Watermarking engine
The watermarking engine consists of a watermark coder and decoder compiled to binary form. A frequency-domain based algorithm was used for watermark embedding and decoding [2]. The implemented algorithm operates in the frequency domain [4] and uses orthogonally distributed frequency components allocated along the spectrum bins. A psychoacoustic correction method for watermark signal shaping was used. We computed the basic characteristics of this algorithm using input and output signals; the input is the host signal and the output is the watermarked signal. Each of the frequency components carries about one bit of information in its amplitude; thus the more frequency components, the higher the output data payload. Figure 4 presents the watermark signal as orthogonally distributed frequency components.
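The idea of carrying one bit per frequency component can be pictured with a deliberately simplified frequency-domain sketch — not the authors' engine: one assumed bit per chosen spectrum bin, carried in the bin's magnitude, with the depth parameter alpha standing in for the psychoacoustic shaping:

```python
import numpy as np

def embed_bits(host, bits, bins, alpha=0.1):
    """Scale the magnitude of selected FFT bins up (bit 1) or down (bit 0)."""
    X = np.fft.rfft(host)
    for b, k in zip(bits, bins):
        X[k] *= (1 + alpha) if b else (1 - alpha)
    return np.fft.irfft(X, n=len(host))

def decode_bits(host, marked, bins):
    """Non-blind decoding: compare bin magnitudes against the original host."""
    H, W = np.fft.rfft(host), np.fft.rfft(marked)
    return [1 if abs(W[k]) > abs(H[k]) else 0 for k in bins]
```

A real engine would decode blindly and shape alpha per bin from a perceptual model; the sketch only shows where the payload lives in the spectrum.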
4 Experimental results
Based on the compiled version of the watermark algorithm, we computed several signal metrics. In Figure 5 the basic watermarked signal metrics are presented.
Figure 4: Output watermark signal before the embedding process.
Figure 5: Basic metrics of the watermarked signal.
We use the following metrics to compute the basic parameters of the watermarked signal (x(n) – host signal samples, y(n) – watermarked output samples, N – number of samples):
Porig – power [dB] of the host acoustic signal
Pzwcorr – power [dB] of the corrected watermark signal (corrected means shaped according to the Just Noticeable Difference level, JND = 0)
Pout – power [dB] of the output acoustic signal with the watermark signal
SNR = Σ x²(n) / Σ [x(n) − y(n)]² – Signal-to-Noise Ratio
MSE = (1/N) Σ |x(n) − y(n)|² – Mean Square Error
NMSE = Σ [x(n) − y(n)]² / Σ x²(n) – Normalised Mean Square Error
PSNR = N·max x²(n) / Σ [x(n) − y(n)]² – Peak Signal to Noise Ratio
AF = 1 − Σ [x(n) − y(n)]² / Σ x²(n) – Audio Fidelity
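A compact sketch of these metrics in their standard forms (variable names are illustrative; dB scaling is assumed for SNR and PSNR):

```python
import numpy as np

def watermark_metrics(x, y):
    """Quality metrics of a watermarked signal y against the host x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    e = x - y                                              # embedding error
    mse = np.mean(e ** 2)                                  # Mean Square Error
    nmse = np.sum(e ** 2) / np.sum(x ** 2)                 # Normalised MSE
    snr = 10 * np.log10(np.sum(x ** 2) / np.sum(e ** 2))   # SNR [dB]
    psnr = 10 * np.log10(np.max(x ** 2) / mse)             # Peak SNR [dB]
    af = 1.0 - nmse                                        # Audio Fidelity
    return {"SNR": snr, "MSE": mse, "NMSE": nmse, "PSNR": psnr, "AF": af}
```

Note the built-in consistency: AF = 1 − NMSE, and SNR in dB equals −10·log10(NMSE).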
We computed the server's average processing time for a *.wav format file. In Table 2 we show the results of this experiment for fixed duration times of the *.wav file (sampling frequency 44100 Hz, 16 bits/sample). Software tool versions are presented in Table 1.

Table 1: Software tools installed on the experimental server.

no. | tool       | version
1   | OpenBSD    | 4.3
2   | PostgreSQL | 8.2.6
3   | PHP 5      | 5.2.5
4   | Perl       | 5.8.8
5   | Apache     | 1.3
6   | cgi        | 1.1
7   | SSL        | yes

Table 2: Average *.wav file processing time by server.

duration time [s] | mono (*.wav) [s] | stereo (*.wav) [s]
5                 | 7.5              | 8.6
10                | 9.3              | 10.8
30                | 13.8             | 15.2

5 Conclusions
An acoustic watermark server based on a built-in PostgreSQL database and a watermarking engine (compiled to binary code) can be very helpful for Digital Rights Management as a basic tool for acoustic hidden fingerprinting (the unique ID signature is represented by the watermark signal). The relatively high processing time for *.wav standard files can be decreased by better server configuration, as well as software optimization. The watermark signal is inaudible in the presence of the host signal.
References
[1] ISO/IEC 9075-1:2008, Information technology – Database languages – SQL – Part 1: Framework (SQL/Framework).
[2] Piotrowski Z., Gajewski P., Novel method for watermarking system operating on the HF and VHF radio links, Computational Methods and Experimental Measurements XIII, CMEM XIII, Southampton, Boston, WIT Press, pp. 791-800, 2007.
[3] Methods for the Subjective Assessment of Small Impairments in Audio Systems Including Multichannel Sound Systems, Recommendation ITU-R BS.1116.
[4] Lyons R.G., Understanding Digital Signal Processing, Prentice Hall PTR, 2004.
Intrapulse analysis of radar signal A. Pieniężny & S. Konatowski Department of Electronics, Military University of Technology, Poland
Abstract ELINT/ESM electronic intelligence in its primary layer uses parameter measurements of intercepted radar signals. Nowadays modern radar uses more and more complex waveforms. Some waveforms are developed intentionally to make their interception almost impossible. The main distinctive features of a modern radar signal are hidden in its time-frequency structure. In the near past the problem of radar signal feature extraction was considered in the time or frequency domain separately, because radar waveforms were relatively simple. Today, however, the signals should be observed simultaneously in both domains. The time-frequency distribution concept offers a new approach to radar signal classification/identification. The paper presents some results of applying the compressive-receiver concept and the Hough transform to intra-pulse modulation analysis of radar signals. Linear frequency modulation within the pulse was considered. Keywords: signal spectrum, chirp transform, compressive receiver, Hough transform, intra-pulse modulation.
1 Introduction
Electronic intelligence (ELINT) and electronic warfare support measures (ESM) devices for modern electronic warfare (EW) systems detect the presence of potentially hostile radar (e.g. [13, 14]). The first class of EW systems (ELINT) performs mainly the identification and location of unknown emitters, in order to determine their parameters, which are subsequently included in the threat library. The operation and effectiveness of ELINT/ESM systems are determined not only by their technical performance but also by the correctness of the threat libraries. The ESM system itself is particularly sensitive to this factor due to its automatic mode of operation. Modern battlefield requirements are very demanding with respect to ELINT/ESM performance and reliability. This type of equipment has to be highly sophisticated with respect to its hardware and software
doi:10.2495/CMEM090241
architecture. The state of the art in microwave receiver technology and in signal processing methods and techniques gradually meets modern battlefield challenges. The typical ELINT/ESM equipment structure usually consists of an antenna (or antennas), a microwave receiver typically providing two intermediate frequency (IF) outputs, and a demodulating section processing these IF outputs. The receiver is used for radar signal interception, whereas the demodulating section is used to determine signal parameters. Intrapulse modulation is becoming a distinctive feature of a signal. Considering this, it seems reasonable to introduce it as an important factor in the deinterleaving process. The deinterleaving process is basically based on measurements of signal time parameters. The following time parameters are considered primary (measured for each pulse): time of arrival (TOA), pulse width (PW), amplitude (A), radio frequency (RF, carrier frequency) and frequency modulation on pulse (FMOP). Time parameters are associated with the angle of arrival (AOA). For each pulse a specific description, the pulse descriptor word (PDW), containing the primary parameters is created. On the basis of the PDW the deinterleaving process is performed. Today, to provide a low probability of interception and identification of radar signals, radar waveforms are more and more complex. The waveform complexity is realized mainly by intrapulse modulation. Thus the signal frequency structure has become a very important signal descriptor. The time-frequency distribution concept offers a new approach to radar signal processing. The principle of wideband signal compression using surface acoustic wave (SAW) dispersive delay lines (DDL) to measure the instantaneous spectrum of short pulsed radar signals is presented.
Furthermore, results of image processing techniques, represented by the Hough transform, applied to intra-pulse modulation analysis of radar signals are shown. Linear frequency modulation within the pulse was considered.
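The primary-parameter set described above can be pictured as a simple record; the field layout below is an illustrative assumption, not the PDW format of any particular ESM system:

```python
from dataclasses import dataclass

@dataclass
class PDW:
    """Pulse descriptor word: primary parameters measured for each pulse."""
    toa: float   # time of arrival [s]
    pw: float    # pulse width [s]
    amp: float   # amplitude [arbitrary units]
    rf: float    # radio (carrier) frequency [Hz]
    fmop: float  # frequency modulation on pulse [Hz]
    aoa: float   # associated angle of arrival [deg]
```

Deinterleaving then amounts to clustering a stream of such records by compatible parameter values.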
2 Compressive receiver concept
A good understanding of the compressive receiver (CR) performance requires a short discussion of the fundamentals of the chirp transform. The chirp transform is based on the classical Fourier transform, in which a linear dependency between frequency and time (delay) has been introduced [1, 2, 4, 8]:

F(ω) = ∫_{−∞}^{+∞} f(t) e^(−jωt) dt,  ω = 2µτ   (1)

where: f(t) – input signal, ω – frequency corresponding to time τ, µ – frequency-to-time conversion factor. After substituting the equality ω = 2µτ, applying the identity 2τt = τ² + t² − (τ−t)², and suitable factorization, the first equation is converted into a frequency-to-time scaling problem:

S(τ) = e^(−jµτ²) ∫_{−∞}^{+∞} s(t) e^(−jµt²) e^(jµ(τ−t)²) dt   (2)

where, in comparison with equation (1), F(ω) and f(t) have been replaced by S(τ) and s(t) respectively. The last expression describes the full chirp
algorithm. Its electrical implementation, presented in Fig. 1, requires three operations. First, the input signal s(t) is multiplied by a chirp reference signal. Secondly, the resultant product is convolved in a pulse compression filter with an equal but opposite frequency-to-time slope µ. Thirdly, a multiplication by a second chirp removes the phase distortion introduced by the first chirp. In spectral analysis applications, such as signal spectrum measurement, the second chirp multiplication is omitted, since the desired information is the envelope of the Fourier transform. In such a case the chirp algorithm (transform) becomes the version which produces the signal power spectrum only:

V(τ) = | ∫_{−∞}^{+∞} s(t) e^(−jµt²) e^(jµ(τ−t)²) dt |   (3)
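The three multiply-convolve-multiply operations can be checked numerically with their discrete analogue, Bluestein's chirp decomposition of the DFT; the normalized slope µ = π/N used below is an assumption made to match the DFT exactly:

```python
import numpy as np

def chirp_transform(x):
    """DFT computed by the multiply-convolve-multiply chirp decomposition
    (Bluestein), with normalized chirp slope mu = pi/N."""
    N = len(x)
    n = np.arange(N)
    chirp = np.exp(-1j * np.pi * n ** 2 / N)   # down-chirp e^(-j*mu*n^2)
    a = x * chirp                              # M: pre-multiply by the chirp
    m = np.arange(-(N - 1), N)
    b = np.exp(1j * np.pi * m ** 2 / N)        # C: compression-filter chirp
    conv = np.convolve(a, b)                   # linear convolution, length 3N-2
    return conv[N - 1:2 * N - 1] * chirp       # M: post-multiply removes phase
```

Dropping the final multiplication, as in eq. (3), leaves the magnitude (power spectrum) unchanged, which is why a CR used for spectrum measurement can omit the second chirp.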
The chirp algorithm arrangement (3), referred to as the M-C (multiply-convolve) release, has a practical hardware implementation including a chirp generator, mixer, compression filter, amplifier and envelope detector. The radio frequency (RF) signal is multiplied by the linearly sweeping reference signal to produce a linearly sweeping intermediate frequency. This signal is subsequently applied to the pulse compression filter connected to the intermediate frequency (IF) amplifier. The compressed output signal envelope is reproduced by the detection circuit. The output video signal appears at a time determined by the input signal frequency values; it reflects the instantaneous spectrum of the input signal. The M-C algorithm is recommended for the analysis of short pulsed radar signals. To clarify the CR performance, some registrations have been made. The results are presented in Fig. 2 to Fig. 6. Fig. 4 and Fig. 5 show one of the most significant features of the CR, which is the ability to analyze signals that are simultaneously present at its input. Fig. 3 presents CR output video signals for pulsed wideband linear frequency modulated (LFM) signals and Fig. 6 shows the CR response for a single pulse of an intercepted narrowband signal.
Figure 1: Chirp algorithm arrangement.
Figure 2: CR performance for CW signals, (2µs/div).
Figure 3: CR performance for pulsed LFM signal, (2µs/div).
Figure 4: CR output signals for two input signals: pulsed LFM signal and CW signal, (0.5µs/div).
Figure 5: CR output signals for two input signals: pulsed LFM signal and CW signal, (0.5µs/div).
Figure 6: CR input and output signals, pulsed unmodulated signal, (5µs/div).
The CR output signal reflects the instantaneous spectrum of the analysed signal. The CR performs signal analysis in particular time samples, referred to as duty cycles. This results in signal spectrum averaging over the reference signal time and bandwidth. In other words, the CR output signal expresses the energy of the signal under analysis. Thus, with respect to time-frequency distribution terms, to collect the total signal energy it is necessary to gather the signals existing in several duty cycles of the CR. Therefore the time-frequency signal representation will be referred to the plane (duty cycle number, frequency expressed by time) over which the CR output signal will be evaluated. As results from the registrations, the CR operation consists of several duty cycles in which the instantaneous spectrum of a signal is yielded. For a particular cycle, the signal spectrum is represented by the video pulse position relative to the beginning of a reference signal. The pulse position is determined by the input signal frequency value. Thus to evaluate the spectrum it is sufficient to measure the time intervals between the CR output video pulse and the
pulse triggering the reference signal. Another factor that should be taken into account is the CR output pulse amplitude variability.
3 Compressive receiver output signal processing
The CR output is a train of many short pulses with positions determined by the input signal frequency value. There also exist side lobes associated with these pulses. Thus, to measure the frequency of the input signal, the centre of the main pulse must be measured while the side lobes are neglected. To determine the signal frequency, time parameter measurements should be performed. The simplest approach is to compare the CR output signal with fixed thresholds: when the processed pulse crosses these thresholds, it is declared a legible output. This approach has two main shortcomings. Firstly, the amplitude of the output pulse changes with the input signal level. Secondly, the fixed-threshold detection scheme does not, in general, distinguish between main and side lobes. Thus, to overcome the problems of CR output signal processing, a digitizing technique is recommended. For the examinations the practical CR model has been used [3, 5–7].

Figure 7: Measurement system block diagram.
Figure 8: CR and receiver video outputs, (pulsed LFM signal, pulse width 10µs).
The measurement system block diagram is presented in Fig. 7. Some results of the examinations are presented in Figs. 9-12 (in Fig. 10 the upper trace represents the CR output, the lower trace the receiver video output). Fig. 8 combines the instantaneous video of the intercepted signal and its instantaneous spectrum, represented by the CR output video for a single pulse. Figs. 9-12 present the spectra of two types of radar signals - pulsed linear frequency modulated and narrowband (unmodulated) (in Figs. 10 and 12 the record number denotes time, the sample number denotes frequency). The number of pulses under analysis depends on the digitizer threshold. As can be seen, in the case of a pulse train it is possible to estimate the frequency deviation on pulse (on the basis of the pulse train); however, it is not possible to determine the modulation type.
Figure 9: CR output signals, pulsed unmodulated signal (pulse width 5µs).
Figure 10: CR output signals, pulsed unmodulated signal.
Figure 11: CR output signals, pulsed LFM signal (pulse width 10µs).
Figure 12: CR output signals, pulsed LFM signal (pulse width 10µs).
4 Wigner-Hough transform
Methods of time-frequency analysis enable the creation of a radar signal image representing its instantaneous frequency [11, 12]. By processing such an image one can determine the intrapulse parameters of a radar signal. The Hough transform is a technique used in image processing that can also be applied here [15]. This technique enables global features of an image to be analyzed on the basis of local ones (the point being the favourite one). The method relies on the detection of lines that can be represented in parametric form: straight lines, circles, polynomials [9, 10, 15]. The problem analyzed in the paper may be considered as a sequential application of two stages: firstly the Wigner-Ville transform (WV) is calculated [11, 12] and secondly the Hough transform is applied. It is possible to apply here the joint Wigner-Hough transform, i.e. to apply the Hough transform to the Wigner-Ville transform of a signal. In particular, the method will be used to analyze multi-component signals with linear intra-pulse frequency modulation. The aim of the paper is to present the WHT performance with application to multi-component radar signal processing. The Wigner-Hough
transform of a complex signal s(t) is defined as a transformation from the time domain to the (f, g) parameter domain according to the expression [10]:
WH_s(f, g) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} s(t + τ/2) s*(t − τ/2) e^(−j2π(f+gt)τ) dτ dt   (4)
The transform defined by equation (4) can be considered as an integral of the Wigner-Ville transform:

WH_s(f, g) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} W_s(t, ν) δ(ν − f − gt) dt dν = ∫_{−∞}^{∞} W_s(t, f + gt) dt   (5)

where δ(ν − f − gt) is the delta function and the Wigner-Ville transform is defined as [11]:

W_s(t, f) = ∫_{−∞}^{∞} s(t + τ/2) s*(t − τ/2) e^(−j2πfτ) dτ   (6)
For practical applications the discrete Wigner-Hough transform is needed. The discrete WHT for a sequence x(n), n = 0, 1, ..., N−1 (where N is even) is given by [10]:

WH_x(f, g) = Σ_{n=0}^{N/2−1} Σ_{k=−n}^{n} x(n+k) x*(n−k) e^(−j4πk(f+gn)) + Σ_{n=N/2}^{N−1} Σ_{k=−(N−1−n)}^{N−1−n} x(n+k) x*(n−k) e^(−j4πk(f+gn))   (7)
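A direct (slow, but transparent) evaluation of eq. (7) on a grid can be sketched as follows; the frequency f and chirp rate g are in normalized units, and the search grid in the usage below is an assumption for illustration:

```python
import numpy as np

def wht(x, fs, gs):
    """Discrete Wigner-Hough transform magnitude on a (f, g) grid,
    using the two-branch double sum of eq. (7)."""
    N = len(x)
    out = np.zeros((len(fs), len(gs)))
    for i, f in enumerate(fs):
        for j, g in enumerate(gs):
            acc = 0j
            for n in range(N):
                kmax = n if n < N // 2 else N - 1 - n   # branch of eq. (7)
                k = np.arange(-kmax, kmax + 1)
                acc += np.sum(x[n + k] * np.conj(x[n - k])
                              * np.exp(-4j * np.pi * k * (f + g * n)))
            out[i, j] = abs(acc)
    return out
```

For an LFM signal with instantaneous frequency f0 + g0·n, the kernel x(n+k)x*(n−k) equals e^(j4πk(f0+g0·n)), so all terms add in phase exactly at the grid point (f0, g0), which is the peak-search property the text describes.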
Figure 13: WV of two LFM signals embedded in noise.
Figure 14: WHT of two LFM signals.
As follows from (7), the WHT for signals with linear frequency modulation (LFM) reaches its maximum value at the point with coordinates (f0, g0). This means that detection and parameter estimation of LFM signals embedded in noise can be considered as a peak search in the parameter domain (f, g). An example of WHT application in the case of multi-component signals interfered with by additive Gaussian noise is presented in Figs. 13 and 14. Fig. 13 presents the WV of two LFM signals for a signal-to-noise ratio (SNR) equal to 0 dB. Both signals are LFM signals and have the same amplitude and the same frequency-to-time slope but different carrier frequencies. The signals were simulated; however, they could reflect possible real values. On the basis of the WV image it is difficult to identify the signal type. When the WHT is used, two peaks representing the analyzed signals are observed in the transformed WV image. Fig. 14 shows two important advantages of the WHT:
first, in the parameter domain only the signal peaks are observed, because the cross terms were cancelled; second, the WHT gives an integration gain with respect to noise as a result of the integration. Parameter estimation of multi-component LFM signals is performed simultaneously for all elements in the (f, g) plane, represented here by r and theta (Fig. 14). When a peak exceeds a predetermined threshold, it is declared that an LFM signal is present, and its parameters f0 and g0 (r, theta) are the coordinates of the peak.
5
Noise influence on the WHT
When the signal is embedded in noise, the WHT becomes a random variable WHT_{s+v}(f_0, g_0), and the peak of the WHT moves to the point of coordinates (f_0 + \delta f, g_0 + \delta g). In this case the WHT properties are described by the output SNR, defined according to [10] as:

SNR_{OUT} = \frac{WHT_s^2(f_0, g_0)}{\mathrm{var}\{WHT_{s+v}(f_0, g_0)\}}   (8)

As follows from expression (8), the output SNR is determined by WHT_s(f, g) (the WHT of the signal) and by WHT_{s+v}(f, g) (the WHT of the signal embedded in noise). The accuracy of signal parameter estimation is determined by the variances of the random variables \delta f and \delta g. Assuming that the noise is a zero-mean white Gaussian process, the following expression for SNR_{OUT} is obtained [10]:

SNR_{OUT} = \frac{N^4 A^4 / 4}{N^3 A^2 \sigma_n^2 / 2 + N^2 \sigma_n^4 / 2} = \frac{(N^2/2)\, SNR_{IN}^2}{N \cdot SNR_{IN} + 1}   (9)

where the input SNR_{IN} is defined as A^2/\sigma_n^2, with A the signal amplitude, \sigma_n^2 the noise variance and N the number of samples. The last expression reveals a threshold effect. When the input SNR is high (SNR_{IN} >> 1), expression (9) is approximated by SNR_{OUT} = N \cdot SNR_{IN}/2, i.e. an integration gain proportional to the number of integrated samples is present. When the input SNR is low (SNR_{IN} << 1), the output SNR can be worse than the input one. Thus it is possible to determine the point separating the two types of behaviour, corresponding to low and high SNR values, which is inversely proportional to the number of samples N. Concluding, it should be stressed that the advantages of the analysed method, apart from the SNR threshold, are the integration gain and the cancellation of cross terms. These advantages come at the expense of computational complexity.
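The threshold effect of eqn (9) is easy to check numerically; a small sketch (the value of N is illustrative):

```python
def snr_out(snr_in, n):
    """Output SNR of the WHT peak according to eqn (9)."""
    return (n ** 2 / 2.0) * snr_in ** 2 / (n * snr_in + 1.0)

n = 64
gain_high = snr_out(100.0, n) / 100.0   # ~ n/2: integration gain above threshold
loss_low = snr_out(1e-4, n) / 1e-4      # < 1: below threshold the WHT loses SNR
```

The first ratio approaches the integration gain N/2, while the second shows the output SNR falling below the input SNR for weak signals.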
6
Examples of simulation
The theoretical approach has been examined by simulation. The signal parameters were chosen close to real radar signal parameters (particularly with respect to the pulse width and intermediate frequency values). The results of the simulations are presented in Figs. 15-22. Presented are LFM signals overlapping in frequency and time, having the same or opposite time-to-frequency slopes. Such a situation is typical of a dense radar signal environment. The task is to distinguish the signals. The problem has been examined experimentally with the purpose of determining whether the signals can be detected and separated.
Figure 15: Waveform, spectrum and WV of two overlapping signals.
Figure 16: WHT of two overlapping signals.
Figure 17: WHT projection onto the parameter plane.
Figure 18: Extracted lines from WHT.
Figure 19: WV of two overlapping signals, SNR = -3 dB.
Figure 20: WHT of two overlapping signals, SNR = -3 dB.
Figure 21: Waveform, spectrum and WV of two overlapping signals.
Figure 22: WHT of two overlapping signals.
The results of the examinations show that, using the WHT, it is possible to extract a signal hidden in noise. Overlapping signals are separated correctly. In the case of simultaneous signals the SNR is lowered considerably as a result of the cross terms in the WV image. The example presented in Fig. 21 is interesting because the signals overlap in frequency and time, which is not rare in a dense radar signal environment.
7
Conclusions
The compressive receiver is an efficient tool for time-frequency signal analysis. Its speed of operation recommends it for application in a radar signal environment to measure frequency parameters, especially of short-time narrowband and wideband LFM signals. Because the CR output signal represents the instantaneous spectrum of the input signal, the pulse time of arrival is needed. The CR output signals are extremely short (narrow) and their time of appearance at the output has a stochastic nature, implied by the random temporal relations between the reference and input signals. This causes losses in the CR output signal. The main conclusion is that the compressive receiver is a high-speed Fourier transformer which, aided by task-oriented software realising the functions of a virtual measurement instrument, provides an effective means to measure the frequency parameters of radar signals. The most significant feature of the CR is its ability to analyse signals overlapping in frequency and time, which recommends the CR for use in the dense signal environment typical of ELINT/ESM applications. The combination of a time-frequency distribution with an image processing technique creates new possibilities in signal processing: it transforms signal detection and parameter estimation into a problem of line detection in an image. The image is represented by the time-frequency distribution of a signal. This approach can also be applied to non-modulated signals and to signals with a nonlinear type of frequency modulation. The combined Wigner-Hough transform is robust against disturbances in the image. The method is also effective when segmentation of low-quality images is used. This is a considerable advantage of the Hough transform, because it makes it possible to extract particular shapes in distorted
images. Such a property is successfully used for parameter estimation of low-power LFM signals, especially those buried in noise. The basic disadvantage of the Hough transform is its requirement for a large accumulator memory and its considerable computational complexity. This, however, can be overcome by the use of parallel processing. The current status of the problem analysed here allows one to state that there is a need for a practical experiment in which the WHT technique is applied to the signal at the output of an ELINT/ESM receiver, with the purpose of detecting the signal and measuring the parameters of its intrapulse modulation.
Acknowledgements
This work was supported by the National Center for Research and Development for the years 2007-2010 under Commissioned Research Project PBZ-MNiSW-DBO-04/I/2007.
References
[1] Breuer K.D., Levy J.S., Paczkowski H.C., The compressive receiver: a versatile tool for EW systems, Microwave Journal, vol. 10, (October 1989), pp. 81-98.
[2] Campbell C.K., Applications of surface acoustic wave and shallow bulk acoustic wave devices, Proceedings of the IEEE, vol. 77, no. 10, (October 1989), pp. 1453-1484.
[3] Kawalec A., Pieniężny A., Radar signal feature extraction system based on SAW dispersive delay lines, Molecular and Quantum Acoustics, vol. 26, Poland, 2005, pp. 175-182.
[4] Kočemasov V.N., Dolbnja E.V., Sobol' N.V., Akustoelektronnye fur'e-processory (Acoustoelectronic Fourier processors), Radio i Svjaz', Moskva, 1987, in Russian.
[5] Moule G.L., SAW compressive receiver for radar intercept, IEE Proceedings, pt. F, Communications, Radar and Signal Processing, vol. 129, no. 3, (1982), pp. 180-186.
[6] Paradowski L., About some version of SAW chirp spectrum analyzer – analysis and performance, Proceedings of the 5th Conference on ACOUSTOELECTRONICS'91, 10-13 September 1991, Varna, Bulgaria, World Scientific Publishing Co. Pte. Ltd., Singapore, 1991, pp. 123-131.
[7] Paradowski L., Pieniężny A., Radar signal processing in time microscale using frequency-to-time and time-to-frequency procedures and transducers, Proceedings of the 7th International Conference on Signal Processing Applications & Technology, Boston, Massachusetts, USA, October 7-10, 1996, Miller Freeman Inc., 1996, vol. 2, pp. 1500-1504.
[8] Tsui J.B.Y., Microwave receivers with electronic warfare applications, John Wiley and Sons, New York, 1986, pp. 278-328.
[9] Barbarossa S., Lemoine O., Analysis of nonlinear FM signals by pattern recognition of their time-frequency representation, IEEE Signal Processing Letters, vol. 3, no. 4, April 1996.
[10] Barbarossa S., Analysis of multicomponent LFM signals by a combined Wigner-Hough transform, IEEE Transactions on Signal Processing, vol. 43, no. 6, June 1995.
[11] Claasen T.A.C.M., Mecklenbrauker W.F.G., The Wigner distribution – a tool for time-frequency signal analysis, Part I: Continuous time signals, Philips Journal of Research, vol. 35, no. 3, 1980.
[12] Cohen L., Time-frequency distribution – a review, Proceedings of the IEEE, vol. 77, no. 7, July 1989.
[13] Schleher D.C., Introduction to electronic warfare, Artech House Inc., Dedham, MA, 1986.
[14] Stephens J.P., Advances in signal processing for Electronic Warfare, IEEE AES Systems Magazine, November 1996.
[15] Żorski W., Metody segmentacji obrazów oparte na transformacie Hougha (Image segmentation methods based on the Hough transform), Instytut Automatyki i Robotyki, Wydział Cybernetyki WAT, Warszawa 2000, in Polish.
Neural detection of parameter changes in a dynamic system using time-frequency transforms
E. Swiercz
Electrical Faculty, Bialystok Technical University, Poland
Abstract
In this paper a neural detector of internal parameter changes in a stationary, nonlinear SISO dynamic system, represented by a discrete model of the NARX type, is considered. The system analysed in this paper is described by the nonlinear difference equation y(k) = f(y(k-1), …, y(k-p), u(k), …, u(k-q), Θ), where f is a nonlinear function, y(k-1), …, y(k-p) are the output samples, u(k), …, u(k-q) are the input samples and Θ is a vector of internal parameters of the system. The values of the vector Θ can change at random moments in time, but these values belong to a finite set, so that the detection of parameter changes can be considered as the classification of signals acquired for different values of the changeable parameters. Such a formulation of the problem is suitable for industrial applications, where the change of parameters can model selected faults (or changes of an operating point) of an industrial dynamic system. To decrease the dimensionality of the classified data, the extraction of specific characteristics from a time-frequency transform of an output signal produces a vector of features Φ, which constitutes the decision space for classification. As an intelligent approach to such a complex problem is justified, the extracted signal features are the inputs of a neural network. The LVQ (Learning Vector Quantisation) neural network has been chosen because of its ability to learn data classification, where similar input vectors are grouped into a region represented by a so-called codebook vector (CV). Such an approach corresponds to pointing out the most probable values of the vector Θ. The detection ability of the LVQ network, both in a non-noisy and a noisy environment, is examined in detail in the paper.
Keywords: time-frequency transforms, detection of abnormal states of a dynamic system, LVQ neural classifier.
doi:10.2495/CMEM090251
1
Introduction
The complexity of the systems that we use on a day-to-day basis is constantly growing. Hence abnormal operation of a system can occur more frequently, so that a variety of computer-aided methods for the evaluation of system reliability is strongly required. Automatic detection of changes in a system state is an important subject, because such a change can reflect undesired faults of a system. Early detection of these changes allows one to protect a system, for example by changing a control algorithm. The concept of detection of a current system state presented in this paper assembles methods and algorithms of signal processing, feature extraction and feature classification with the use of neural networks. Mathematically, the state of a discrete SISO system can be described by a set of difference state equations and an output equation with a specific collection of parameters Θ:

x(k+1) = \Lambda(x(k), u(k), \Theta),
y(k) = \Gamma(x(k), u(k), \Theta),   (1)

or by a direct input-output difference equation with another, equivalent set of parameters Ξ:

y(k) = f(\Delta^p y(k), \ldots, y(k), \Delta^q u(k), \ldots, u(k), \Xi),   (2)
where f is a nonlinear function. From a signal point of view, tracking parameter changes can be understood as detection of the non-stationarity of a system over a long time horizon. In this paper a nonlinear dynamic model whose parameters change at random instants of time is analysed. It can be assumed that each change of parameter values creates a new non-nominal model of the dynamic system. It is also assumed that there is a finite number N of changes, hence the same number of corresponding models (classes) {ω1, ω2, …, ωN} is formed. Thus the detection of parameter changes can be formulated as a multi-model classification. To categorise a current system state into classes, a vector capturing unique features of the system has to be created, based on observation of the output signal y(k). Afterwards a classifier is applied to the feature vectors to assign the data to one of several classes. Most classification algorithms require probabilistic information: P(ωi), the class prior probability; p(y|ωi), the class-conditional density; and P(ωi|y), the posterior probability, which is rarely given a priori [1]. The stochastic classification rules most frequently use the following approaches: discriminant functions and the optimal Nearest Neighbour classifiers. Each of these rules has advantages and disadvantages. Other approaches utilise expert systems or other artificial intelligence techniques, such as neural networks and fuzzy logic [2]. In the present study, in order to classify the signals, a feature vector Φ is formulated by time-frequency processing of the output signal. Time-frequency representations originating from the Wigner-Ville transformation require a troublesome optimisation of a transformation kernel to minimise the classification error. Due to the great amount of data produced by such processing, an approach resulting in compressed data is required. Feature
extraction is a method of data dimension reduction. Both the discrete wavelet transform (DWT) and the continuous wavelet transform (WT), as affine time-frequency transformations, are frequently used for this task [3]. The main advantage of wavelets is that they have a varying window size, wide for low frequencies and narrow for high ones, which leads to an optimal time-frequency resolution in all frequency ranges. Furthermore, because the windows are adapted to the transients of each scale, wavelets are able to process non-stationary signals. Many papers address the DWT as a discrete decomposition with multi-scale wavelet transformation of signals for feature extraction. Unlike continuous wavelet algorithms, discrete algorithms are represented by a collection of a finite number of decomposition coefficients, which is a compressed form of signal representation. A vector Φ can be built, for example, from: the mean of the absolute values of the wavelet decomposition coefficients, the maximum of the absolute values of the wavelet coefficients, the average power of the wavelet coefficients, the standard deviation of the wavelet coefficients, the absolute sum of the wavelet coefficients at each resolution level, the ratio of the absolute mean values of adjacent sub-bands, and the distribution distortion of the coefficients [4-7]. By signal processing methods we can thus reduce the signal (i.e., the original waveform) to a lower dimension represented by a vector Φ. Next, the vector Φ is fed to the input of a neural LVQ classifier, which performs the classification and points out the most probable values of the parameter vector Θ.
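As an illustration of DWT-based dimension reduction, the sketch below computes a small feature vector Φ with a hand-rolled Haar filter bank (the paper uses db3, sym3 and rbio2.2 wavelets; Haar is chosen here only to keep the example self-contained, and the particular features are a subset of those listed above):

```python
import numpy as np

def haar_step(x):
    """One level of the Haar filter bank: half-band low-pass (approximation)
    and high-pass (detail) branches, each downsampled by 2."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def dwt_features(x, levels=4):
    """Feature vector Phi: energy, mean absolute value and variance of the
    detail coefficients at each decomposition level."""
    a = np.asarray(x, dtype=float)
    feats, details = [], []
    for _ in range(levels):
        a, d = haar_step(a)
        details.append(d)
        feats += [np.sum(d ** 2), np.mean(np.abs(d)), np.var(d)]
    return np.array(feats), a, details

rng = np.random.default_rng(0)
signal = rng.standard_normal(64)        # length must be divisible by 2**levels
phi, approx, details = dwt_features(signal)
```

Because the Haar filter bank is orthonormal, the energies of the approximation and detail coefficients sum to the signal energy (Parseval), which is a convenient sanity check on any DWT implementation.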
2
The formulation of the problem
The approach proposed in the paper consists of four steps:
• exciting the system with non-stationary signals;
• transforming the output signal through chosen time-frequency transforms;
• extracting the feature vector Φ from characteristic points of a time-frequency transformation;
• detection and classification of the vector Θ by an LVQ network.
An important problem in successful detection, treated here as classification, is the choice of an excitation signal of the dynamic system that enhances the unique properties of the system. The excitation signal should be located in an essential frequency band of the dynamic system. In the presented method, the excitation signal u(k) consists of two Gaussian atoms, well separated both in the time domain and in the time-frequency plane, called non-stationary Gaussian atoms. A Gaussian atom is a short signal with a Gaussian envelope, modulated by the signal m(t) = e^{jνt}. The excitation signal is non-stationary because it cannot be written as a discrete sum of sinusoids [8]:

u(t) = \sum_{k \in N} A_k \exp[j(2\pi \nu_k t + \phi_k)].   (3)
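A two-atom excitation of the form of eqn (3), with the Gaussian envelope added as described above, can be sketched as follows (the centres, widths and normalised frequencies are illustrative values, not those used in the paper):

```python
import numpy as np

def gaussian_atom(n, center, width, freq, amp=1.0, phase=0.0):
    """Gaussian-envelope atom: A exp(-((n-c)/w)^2) * exp(j(2*pi*nu*n + phi))."""
    envelope = amp * np.exp(-((n - center) / width) ** 2)
    return envelope * np.exp(1j * (2 * np.pi * freq * n + phase))

n = np.arange(256)
# two atoms, well separated in time (centres 64 and 192) and in frequency
u = gaussian_atom(n, 64, 12.0, 0.10) + gaussian_atom(n, 192, 12.0, 0.35)
```

The envelopes keep the atoms separated in time, while the two carrier frequencies keep them separated along the frequency axis of the time-frequency plane.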
The frequencies of the exciting atoms have to be selected very carefully, according to the frequency properties of the system. Choosing atoms (generally not only two) as the excitation signal allows one to expect that, after the nonlinear processing performed by the dynamic system, the atoms remain equally well separated. From this point of view the output signal is also non-stationary. Time-frequency processing of the atoms after the nonlinear processing also preserves the separation of their localisation in the time-frequency plane and emphasises even subtle differences resulting from parameter changes, which are invisible in the time domain. Selecting, in a specific way, a finite number of characteristic points of the time-frequency transformation allows one to formulate a low-dimension feature vector Φ. In this study the wavelet transform T_x(t, a; Ψ) is considered as an example of time-frequency processing:

T_x(t, a; \Psi) = \int_{-\infty}^{+\infty} y(s)\, \Psi^*_{t,a}(s)\, ds.   (4)

The set of wavelets Ψ_{t,a}(s) is created in the following way:

\Psi^*_{t,a}(s) = |a|^{-1/2}\, \Psi\!\left(\frac{s - t}{a}\right).   (5)
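Eqns (4)-(5) can be approximated numerically one (t, a) point at a time; a minimal sketch using the Mexican hat as the mother wavelet Ψ (the wavelet choice, the test signal and the grids are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def mexican_hat(x):
    """Mexican hat (Ricker) mother wavelet: normalised second derivative of a Gaussian."""
    return (2.0 / (np.sqrt(3.0) * np.pi ** 0.25)) * (1.0 - x ** 2) * np.exp(-x ** 2 / 2.0)

def cwt_point(y, s, t, a):
    """Numerical approximation of eqn (4): T_x(t, a) = integral of y(s) Psi*_{t,a}(s) ds,
    with Psi_{t,a}(s) = |a|^{-1/2} Psi((s - t)/a) as in eqn (5)."""
    psi = np.abs(a) ** -0.5 * mexican_hat((s - t) / a)
    ds = s[1] - s[0]
    return np.sum(y * psi) * ds          # rectangle-rule quadrature

s = np.linspace(-5.0, 5.0, 1001)
y = np.exp(-((s - 1.0) / 0.5) ** 2)                  # Gaussian bump centred at s = 1
T = np.array([cwt_point(y, s, t, 0.5) for t in s])   # one row of the (t, a) plane
```

The row T peaks where the translated wavelet aligns with the bump, i.e. near t = 1, illustrating the time localisation that the feature extraction step relies on.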
The variable a represents the scale, whereas t is the translation of the mother wavelet. Calculating wavelet coefficients at every possible scale generates a lot of data. If we choose scales based on powers of two, so-called dyadic scales, we can formulate an indexed family of wavelets from the mother wavelet function in the form

\Psi_{i,j}(x) = 2^{-i/2}\, \Psi(2^{-i} x - j).   (6)
The wavelets defined in such a way play a substantial role in the multiresolution decomposition of a signal at different decomposition levels. The mother wavelet function Ψ for i = 0, j = 0 must satisfy the multiresolution condition related to the scaling function φ(x):

\phi(x) = 2 \sum_k h(k)\, \phi(2x - k),   (7)

\Psi(x) = 2 \sum_k g(k)\, \phi(2x - k),   (8)
where h and g can be viewed as the filter coefficients of half-band low-pass and high-pass filters, respectively. The J-level wavelet decomposition can be computed as follows:

f_0(x) = \sum_k a_{J+1,k}\, \phi_{J+1,k}(x) + \sum_{j=0}^{J} \sum_k d_{j+1,k}\, \Psi_{j+1,k}(x),   (9)

where a_{j,k} and d_{j,k} are the approximation (AC) and detail coefficients (DC). The discrete wavelet transform can be defined by a collection of approximation and detail coefficients. At each decomposition level a signal is divided into two sub-bands by two filters: a low-pass and a high-pass. Approximation coefficients correspond to the low band and detail coefficients correspond to the high band
respectively. The selection of an appropriate wavelet and of the number of decomposition levels is a key problem, because each scale of the decomposition represents a particular coarseness and unique properties of the signal under study [9]. Multiresolution analysis leads to hierarchical and fast algorithms for computing the DWT coefficients. In this paper the following quantities were chosen for creating the vector Φ: the energies of the approximation and detail coefficients in sub-bands for selected levels of decomposition, the means of the wavelet coefficient absolute values in sub-bands for selected levels of decomposition, the variances of the wavelet coefficients in sub-bands for selected levels of decomposition, the average power of the decomposition coefficients in sub-bands for selected levels of decomposition, and the ratios of the averages of the mean detail coefficients in sub-bands. It is also assumed that the parameters of the nonlinear system from a set Θ = (θ1, θ2, …, θM) can change in a stepped way at random time instants ki, creating non-nominal models corresponding to N classes ωi, i = 1, …, N. Between changes the parameter values are constant. The goal of the classifier is the selection of the most probable value of the element of the vector Θ, which corresponds to the selection of the class ωi. The time interval between changes 〈ki, ki+1〉 is long enough to perform the classification.
2.1 Neural identifier of parameter values
Learning Vector Quantisation (LVQ) is a supervised version of vector quantisation, similar to the Self-Organising Maps (SOM) based on Kohonen's work [10, 11]. The classes for each input pattern are predefined. The goal of the LVQ algorithm is to define class boundaries based on prototypes covering the input space of samples, in such a way that the boundaries divide the space into the best approximation of the regions occupied by the data belonging to each class. The prototypes are also called 'codebook vectors' (CVs); each of them represents a region labelled with a class. They are localised in the centre of a class or decision region ('Voronoi cell') in the input space. A class can be represented by an arbitrary number of CVs, but one CV represents one class only. The division (tessellation) of the input space performed by the set of CVs is optimal if all data within one cell belong to the same class. From the neural network point of view, the LVQ network is built as a feedforward net with one hidden layer of neurons, fully connected with the input layer. A CV can be described as a hidden neuron ('Kohonen neuron'), or as the vector of the weights between all input neurons and the regarded Kohonen neuron. The codebook vectors are constructed during supervised learning of the network: learning modifies the weights in accordance with the adaptation rules and changes the position of a CV in the input space. Classification after learning relies on finding the Voronoi cell specified by the CV with the smallest distance to the input vector and assigning it to the labelled class. Most frequently, the Euclidean distance is used for the comparison between an input vector and the class representatives. The node of the particular class which has the smallest distance is declared the winner.
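A minimal LVQ1 training sketch in the spirit of the description above (the prototype counts, learning-rate schedule and the toy two-class data are illustrative assumptions; in the paper the inputs are wavelet feature vectors):

```python
import numpy as np

def lvq1_fit(X, y, n_proto_per_class=3, lr=0.1, epochs=30, seed=0):
    """LVQ1: the winning codebook vector moves toward a sample of its own
    class and away from a sample of a different class."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), n_proto_per_class, replace=False)
        protos.append(X[idx])                 # initialise CVs on class samples
        labels += [c] * n_proto_per_class
    W, wl = np.vstack(protos).astype(float), np.array(labels)
    for epoch in range(epochs):
        alpha = lr * (1.0 - epoch / epochs)   # decaying learning rate
        for i in rng.permutation(len(X)):
            j = int(np.argmin(np.sum((W - X[i]) ** 2, axis=1)))  # winner CV
            step = alpha * (X[i] - W[j])
            W[j] += step if wl[j] == y[i] else -step
    return W, wl

def lvq1_predict(X, W, wl):
    """Assign each sample the label of its nearest codebook vector."""
    d = np.sum((X[:, None, :] - W[None, :, :]) ** 2, axis=2)
    return wl[np.argmin(d, axis=1)]

# toy data: two well-separated classes, 40 samples each
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (40, 2)), rng.normal(5.0, 0.3, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
W, wl = lvq1_fit(X, y)
```

With three prototypes per class the trained codebook tessellates the plane into Voronoi cells whose labels reproduce the two clusters.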
3
Numerical experiments
The dynamic system under study is a second-order nonlinear system with the state equations given by eqn (10) and a vector of internal parameters Θ = (a, b, c, d). The nominal values of the system parameters are [a, b, c, d] = [1.0, 1.0, 3.0, 1.5].
x_1(k+1) = x_2(k),
x_2(k+1) = b\, x_2(k) + c\, x_1^3(k) + d\, x_1^2(k) + a\, u(k),   (10)
y(k) = x_1(k).

Graphs of the system output equilibrium points for the nominal set of parameters versus a constant input signal u (i.e. the static characteristics of the system) and the family of amplitude-frequency characteristics (the system gain for a harmonic input of a single, changeable frequency) are other forms of system description, fig. 1. In the numerical experiments performed in this study it was assumed that c was the only changeable parameter and could change in the range [-2; +2] around its nominal value. It was also assumed that the current value of that parameter would be assigned to the five-element set Θc ∈ {1; 2; 3; 4; 5}, i.e. the actual value of c "around" 2 will be identified as cdes = 2. In that way five classes (clusters) were established in the range of variability of the c parameter; the values close to nominal represented one of these classes. The characteristics of the system over the range of parameter changes are presented graphically in fig. 2. In the series of extensive numerical experiments, the system was excited by two Gaussian atoms with frequencies from the essential frequency band of the system, as can be seen in fig. 1. From the examination of a variety of features of the wavelet decomposition of the output signal, it turned out that Daubechies wavelets (db family), symlet wavelets (sym family) and reverse biorthogonal spline wavelets (rbio family) have the best separation properties to be exploited for the classification task considered in this study.
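A direct open-loop simulation of eqn (10) is straightforward; the sketch below uses the nominal parameters and a small-amplitude Gaussian-atom-like input (the input amplitude, length and frequency are illustrative values, chosen only to keep this sketch well behaved):

```python
import numpy as np

def simulate(u, a=1.0, b=1.0, c=3.0, d=1.5, x0=(0.0, 0.0)):
    """Simulate the second-order nonlinear state equations of eqn (10)."""
    x1, x2 = x0
    y = np.empty(len(u))
    for k, uk in enumerate(u):
        y[k] = x1                              # output equation y(k) = x1(k)
        x1, x2 = x2, b * x2 + c * x1 ** 3 + d * x1 ** 2 + a * uk
    return y

n = np.arange(128)
# small Gaussian atom as input, centred at sample 32
u = 0.005 * np.exp(-((n - 32) / 8.0) ** 2) * np.cos(2 * np.pi * 0.1 * n)
y_nom = simulate(u)            # nominal c = 3.0
y_off = simulate(u, c=5.0)     # perturbed class: changed value of c
```

Even for this small excitation the two parameter settings yield measurably different outputs, which is what makes the classification of c from output-signal features feasible.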
Figure 1: Examples of graphs describing the nonlinear system.
Figure 2: Examples of graphs describing parameter changes of the system.
To prepare the training data for the LVQ classifier, a set of 205 system output signals (41 per class) was obtained from simulations. To obtain the set of signals representing each class (corresponding to the class centre value cdes ∈ Θc), simulations were performed for c changing in the range [-0.4; +0.4] around the centres, with a step equal to 0.02. Then the discrete wavelet decomposition of the output signals was performed up to the fourth decomposition level for the wavelet families mentioned above: Daubechies of order 3 (db3), symlets of order 3 (sym3) and reverse biorthogonal spline of order 2.2 (rbio2.2). The feature extraction stage consisted of computing the following sets of values: energies of the detail coefficients, means of the coefficient absolute values, variances of the decomposition coefficients, average power of the decomposition coefficients, and ratios of the averages of the mean detail coefficients. Detailed examination of the above characteristic values revealed that combinations of selected subsets of them are sufficiently good to create the classification space, i.e. the feature vectors to be passed to the LVQ inputs. Fig. 3 shows examples of characteristic variables extracted from the wavelet decomposition (with db3 employed at level 4), which reveal good separation properties.
Figure 3: Energies of detail coefficients (Ed(1)-Ed(4)) and variances of coefficients (Evar(A4), Evar(D2)-Evar(D4)) versus the parameter value, for db3 up to the fourth decomposition level.
Table 1: Classification accuracy [%] – db3 wavelets, 4th decomposition level.

Input data / Competitive neurons          Input noise variance σ²
                                    0.01   0.02   0.03   0.04   0.05   0.06   0.07   0.09   0.10
Coefficients energy, 25 neurons     99.5  100.0   99.0   98.5   97.6   96.1   93.7   90.2   88.3
Coefficients energy, 15 neurons     98.5   98.5   98.1   98.1   98.1   96.1   95.1   91.7   89.8
Coefficients variance, 25 neurons   99.5  100.0   99.0   98.5   97.6   96.1   93.7   90.2   88.3
Coefficients variance, 15 neurons   98.5   98.5   98.1   98.1   98.1   96.1   95.1   91.7   89.8

Table 2: Classification accuracy [%] – rbio wavelets, 4th decomposition level.

Input data / Competitive neurons          Input noise variance σ²
                                    0.01   0.02   0.03   0.04   0.05   0.06   0.07   0.09   0.10
Coefficients energy, 25 neurons     99.0   98.5   98.1   97.6   96.6   96.1   96.1   95.1   95.1
Coefficients energy, 15 neurons     98.5   97.5   97.6   97.1   96.1   95.6   95.1   94.6   94.1
Coefficients variance, 25 neurons  100.0   99.5   99.0   98.5   98.1   97.6   97.1   96.1   96.1
Coefficients variance, 15 neurons   99.5   99.0   98.1   98.1   96.6   96.1   95.1   94.1   93.2
Among the several LVQ architectures examined, two of them, with 15 neurons (3 per class) and with 25 neurons (5 per class) in the competitive layer, showed the best performance during training. For a variety of feature combinations, the classification accuracy obtained by the above networks during training ranged from 97% to 100%. For further analysis, the above LVQ structures with five-element input vectors, created from the mean energies of the coefficients (at the approximation and detail levels) and the coefficient variances (at the same levels), were chosen. Tables 1, 2 and 3 show the classification accuracies of the networks when the excitation signal is contaminated with white noise (random numbers of zero mean and normal distribution) of increasing variance. These cases correspond to the situation when the desired, exact values of the input flow cannot be provided due to inaccuracies of the actuators functioning in the system. As could be expected, increased noise
Table 3: Classification accuracy [%] – sym wavelets, 4th decomposition level.

Input data / Competitive neurons          Input noise variance σ²
                                    0.01   0.02   0.03   0.04   0.05   0.06   0.07   0.09   0.10
Coefficients energy, 25 neurons     99.5  100.0   99.0   98.5   97.6   96.1   93.7   90.2   88.3
Coefficients energy, 15 neurons     98.5   98.5   98.1   98.1   98.1   96.1   95.1   91.7   89.8
Coefficients variance, 25 neurons   99.5  100.0   99.0   98.5   97.6   96.1   93.7   90.2   88.3
Coefficients variance, 15 neurons   98.5   98.5   98.1   98.1   98.1   96.1   95.1   91.7   89.8

Table 4: Classification accuracy [%] for extended range of parameter variability. Wavelet decomposition at the 4th level. Input noise variance σ² = 0.

Number of neurons in Kohonen layer             25                    15
Wavelet family                         db3   rbio   sym      db3   rbio   sym
Range of parameter changes
Classification features – mean energy of coefficients
(0.40; 0.45]                          70.0   70.0   70.0     70.0   70.0   70.0
(0.45; 0.50]                          63.0   60.0   63.0     62.0   60.0   62.0
Classification features – mean variance of coefficients
(0.40; 0.45]                          70.0   66.0   70.0     70.0   70.0   70.0
(0.45; 0.50]                          63.0   60.0   63.0     62.0   63.0   62.0
variance causes a deterioration of the classification, but even with a relatively large noise variance the classification quality is acceptable (especially for the reverse biorthogonal spline wavelets). The next experiment examined the extrapolation ability of the LVQ classifiers (although good performance of a network outside its training range should not be expected in any case). Table 4 shows the accuracy for values of the c parameter which lie outside the interval (i.e. symmetrically on both sides of the central point from Θc) from which the values for network training were chosen. The border ±0.5 means that a parameter c having such a value can equally likely be assigned to two neighbouring classes. As can be seen, the classification accuracy for c values from outside the training range decreases significantly.
4
Conclusions
The detection of parameter changes in a nonlinear dynamic system and the identification of the values of a changeable parameter, via the classification of features extracted from
the output signal, have been considered. The detection scheme consisted of system excitation with a non-stationary signal, data pre-processing with the use of the discrete wavelet transform, feature extraction by aggregation of the properties of the wavelet coefficients, and intelligent classification with the use of LVQ networks. Simulation experiments confirmed that the wavelet decomposition has the ability to separate signal features for different ranges of parameter values, and that the classification system is robust to noise and has a certain extrapolation ability.
Acknowledgements
This work was supported by the Bialystok Technical University research project No. W/WE/3/07.
Section 5 Data processing
A versatile software-hardware system for environmental data acquisition and transmission
G. Zappalà
CNR Institute for Marine Coastal Environment, Section of Messina, Italy
Abstract
In recent years, increasing importance has been given to knowledge of the marine environment, both to help detect and understand global climate change phenomena, and to protect and preserve those coastal areas where multiple interests converge (linked to tourism, recreational or productive activities) and that suffer the greatest impact from anthropogenic activities. This has in turn stimulated the start of research programmes devoted to the monitoring and surveillance of these particular zones, coupling the needs for knowledge, sustainable development and the exploitation of natural resources. Modern instruments rely on an electronic heart; an integrated hardware-software system developed in Messina is presented here, used in various versions to control data acquisition and transmission on buoys or on ship-based instrumentation. The system was originally conceived to implement a PC-like architecture; basically, it comprises a Pentium family CPU, a variable number of RS-232 ports to connect measuring instruments and communication devices, an analog-to-digital converter, power outputs to drive the actuators and to switch the measuring systems on and off, a satellite and/or cellular phone modem and a GPS; mass storage is supplied by Disk on Chip (DOC) devices. The software allows full control of the system and the connected instruments, both in local and remote mode, using a special set of macro-commands. A "sequence manager" can be activated to run pre-programmed macro-command sequences. The macro-commands manage the data acquisition and transmission, the mission programming, the system hardware and the measuring instruments. The whole system can be connected to another computer (local laptop or remote desktop) using terminal software; however, to use its capabilities fully and easily, a remote control program has been written.
Thanks to the hardware-software architecture, it is easy to upgrade the system to more powerful processors without the need to completely rewrite the software.
Keywords: environmental monitoring, data acquisition and transmission.
doi:10.2495/CMEM090261
1 Introduction
There is an increasing need for data to be available in real time or near real time, in order to activate proper measures in emergency situations. Cabled or wireless data transmission can be used. The former allows the transmission of a larger amount of data, but only at coastal sites, while the latter gives greater flexibility in terms of application to different environments; moreover, using mobile (either terrestrial or satellite) phone services it is possible to locate the data centre in the most convenient place, without any need for proximity to the sea. To obtain a good synopticity of observations in both the spatial and temporal domains, it is necessary to complement traditional ship observations with measurements from fixed stations (buoys moored in sites chosen to be representative of wider areas, or to constitute a sentinel against the arrival of pollutants), satellite observations, the use of ships of opportunity, and newly developed instruments, like gliders, or towed sliding devices, like the SAVE.
2 Software-hardware interaction: design goals
Several different (and somewhat contrasting) needs must be faced in the system design:
- The CPU must offer high elaboration speed and low power consumption; in practice, power consumption vs. speed is a compromise, sometimes managed using sleep (halt) states;
- The software must be portable among different hardware platforms. This means it should be written in a high-level language, but high-level languages (either interpreted or compiled) are not as efficient as low-level ones, so they require higher elaboration speed (and more memory, which increases power consumption);
- The software must be very efficient. This means it must be strictly tailored to the hardware, but this contrasts with the former requirement;
- Mission programming (i.e. setting the kind and frequency of measurements) must be performed "on the fly", without recoding the managing software; this can be obtained using sequence files of macro-commands;
- New instruments must be added to the system easily and quickly.
3 The firmware
The software architecture was designed to be as modular as possible. It is based on three main modules (the "event machine", the "sequence manager" and the "parser"), a number of "device" and "instrument" modules fitting the installed instruments and the data communication hardware, and some auxiliary function modules, written in various high- and low-level languages to run under ROMDOS.
3.1 The event machine
As shown in Fig. 1, the event machine runs in an endless loop, waiting for one of the following events:
- arrival of a character from the communication device;
- arrival of a character from the keyboard (during maintenance);
- occurrence of a timed event;
- occurrence of a "position" event.
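The polling loop of Fig. 1 can be sketched as follows; the event sources and handlers here are illustrative stand-ins for the real device drivers and the sequence manager:

```python
import time

def event_machine(sources, handlers, idle, max_iterations=None):
    """Endless polling loop in the spirit of Fig. 1: each pass checks the
    event sources in order (modem, keyboard, timed, position) and dispatches
    the first pending one to its handler; with nothing pending the CPU idles.

    sources:  {event name: zero-argument "is pending?" callable}
    handlers: {event name: handler callable}
    max_iterations bounds the loop for demonstration; the firmware loops forever.
    """
    count = 0
    while max_iterations is None or count < max_iterations:
        count += 1
        for name, pending in sources.items():
            if pending():
                handlers[name]()
                break
        else:
            idle()  # no event: e.g. a CPU halt/sleep state to save power

# Minimal demonstration: a queued "modem" character triggers its handler once.
modem_queue = ["Z"]
log = []
event_machine(
    sources={"modem": lambda: bool(modem_queue),
             "timed": lambda: False},
    handlers={"modem": lambda: log.append(modem_queue.pop(0)),
              "timed": lambda: log.append("timed")},
    idle=lambda: time.sleep(0.001),
    max_iterations=3,
)
```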
[Figure 1 is a flowchart: after modem and peripheral initialization, the loop checks in turn for characters from the modem, characters from the keyboard, timed or spatial events (ship version only) and the minute of the hour (buoy version only); a pending event activates the corresponding input manager or the sequence manager, otherwise the CPU is halted for 5 minutes.]

Figure 1: The event machine activity.
3.2 The modem input manager
As shown in Fig. 2, when a character arrives from the modem, it activates a routine that scans the input string looking for the "Z" character, which is recognised as the start of a remote command sequence; the following characters, up to the "LF", are then considered the command and sent to the parser to be interpreted.
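A minimal sketch of that scan; whether the leading "Z" remains part of the string handed to the parser is an assumption here:

```python
def extract_commands(stream):
    """Scan incoming characters for remote commands: 'Z' marks the start of a
    command sequence, which runs up to the line feed and is then handed to
    the parser. Characters before the 'Z' are ignored as line noise."""
    commands, current = [], None
    for ch in stream:
        if current is None:
            if ch == "Z":
                current = "Z"          # start of a command sequence
        elif ch == "\n":
            commands.append(current)   # complete command goes to the parser
            current = None
        else:
            current += ch
    return commands
```

For example, `extract_commands("xxZT1\nZD\n")` ignores the leading noise and yields the two command strings `"ZT1"` and `"ZD"`.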
[Figure 2 is a flowchart: each character read from the buffer is checked; on a "LF" the accumulated string is tested for being a macro-command and, if so, passed to the parser, after which control returns to the event machine main loop.]

Figure 2: The modem input manager activity.
3.3 The keyboard input manager The keyboard input manager, whose activity is shown in Fig. 3, allows one to locally interact with the data acquisition system during setup and test phases. When a character arrives from the keyboard it is compared against the list of available commands and, if recognised, it is then executed.
From this level, pressing ALT+"a key" makes it possible to activate several utilities and sub-menus, made available according to the number and kind of measuring instruments fitted in the system. The main keyboard commands are:
- F1 to show the list of available commands
- ALT+C to issue local commands
- ALT+D to run a DOS shell
- ALT+Q to quit
- ALT+R to run a macro-command sequence
- ALT+S to run the modem SMS and e-mail management menu
- ALT+T to activate the terminal utility
[Figure 3 is a flowchart: each character read from the buffer is checked against the list of valid commands; if valid, the command is executed before control returns to the event machine main loop.]

Figure 3: The keyboard input manager activity.
3.4 The sequence manager
The sequence manager is activated when an event occurs; according to the type of event, it opens the proper sequence file containing the macro-commands for that event and reads it line by line, sending each line to the "parser" to be interpreted and executed.
3.5 The parser
The "parser" receives as input a string supposed to contain a valid macro-command which, if found and recognised, is executed.
The syntax of the commands is very simple: every macro-command begins with the letter Z, followed by a letter identifying the command and by the variable parameters (if any). The macro-commands (which can be combined into sequences using a simple text editor) allow full control of the system, the connected instruments and the data transmission, both in local and remote mode. Conditional branch commands are also available; this feature can be very useful in case of partial operativity of the system due, for instance, to a low battery level or the failure of some instruments.
3.6 The "device" modules
These modules are written in assembly language to fit the peculiarities of the computer hardware, and are installed as Interrupt Service Routines so that they can be called quickly and easily.
3.7 The "instrument" modules
These modules are written to interface the computer with various measuring instruments, taking into account their peculiarities. To obtain maximum flexibility, parameters can be passed to the modules to specify the instrument address and the command to be performed. The full set of commands (not supported by all instruments) includes:
- instrument reset
- instrument test
- measurement performing and storing
Usually these modules do not need high elaboration speed, so they are written in compiled Basic or C.
3.8 The data transmission routines
The system can receive remote commands via modem, either in real time or as SMS (in this case a delay is possible). It is possible to choose among different data transmission systems, using either terrestrial or satellite modems.
The first method uses a service offered by some telecom providers that enables e-mails to be sent encoded in an SMS; this solution is usually quite expensive, and should be used only where an Internet connection cannot be obtained. The second method simply connects the measuring station to the Internet using the TCP-IP protocol, so allowing e-mails to be sent directly. The data path was designed to be fault tolerant:
- A copy of the collected data is locally stored in the measuring station;
- A copy of the sent e-mail is also locally stored in the measuring station;
- In case of failure of the receiving mail server, the sending server (managed by the telecom company) will retry the delivery for some time, allowing the defective server to be repaired without loss of messages;
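The macro-command syntax of Section 3.5 ("Z", a command letter, optional parameters) lends itself to a table-driven parser; the command letters and their meanings below are invented for illustration:

```python
def parse_macro_command(line, command_table):
    """Parse and execute one macro-command: 'Z' + a letter identifying the
    command + optional parameters. Returns the handler's result, or raises
    ValueError on a malformed or unknown command."""
    line = line.strip()
    if len(line) < 2 or not line.startswith("Z"):
        raise ValueError("not a macro-command: %r" % line)
    letter, params = line[1], line[2:]
    try:
        handler = command_table[letter]
    except KeyError:
        raise ValueError("unknown command letter: %r" % letter)
    return handler(params)

# Hypothetical command table: 'M' performs a measurement, 'S' sends data.
table = {
    "M": lambda params: "measure instrument %s" % params,
    "S": lambda params: "send data",
}
```

Because the dispatch table is just a dictionary, adding a new instrument command amounts to registering one more entry, which matches the design goal of adding instruments easily and quickly.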
Computational Methods and Experimental Measurements XIV
289
- All the mail messages are numbered, making it possible to detect undelivered ones and ask the remote station for retransmission;
- As soon as they are received, the mail messages are immediately assimilated into the data bank, and a copy remains for some time in the mail server.
Periodic back-up operations are performed on the database. The command-data flow is described in Fig. 4.
[Figure 4 is a block diagram: the measuring instruments feed the data acquisition and transmission computer, which keeps a local data storage; real-time data and commands pass through a terrestrial and/or satellite modem, while SMS commands and e-mails (sent using SMS or TCP-IP) pass through a mail box into the DATA BANK.]

Figure 4: The command-data flow.

4 The hardware
The data acquisition and transmission system was originally conceived to implement a PC-like architecture running a ROMDOS (a ROM-based operating system similar to MS-DOS) environment; this choice was due to the availability both of industry-grade boards and of native software development tools in high- and low-level languages (assemblers, Basic and C compilers, debuggers…). The boards comply with the PC/104 standard (IEEE P996.1), which defines the dimensional and electrical characteristics of these devices. Basically, the standard defines a size (90 x 96 mm) and a connector layout (replicating the standard ISA bus signals found in the traditional PC-AT). Boards can be stacked one above the other, to add new peripherals and functions to the system. Usually, the board set could comprise:
- A CPU board including:
  - RAM and ROM (usually rewritable) memory
  - Disk-on-chip or Disk-on-module memory
  - keyboard and video interface
  - floppy-hard disk interface
  - serial port interface (up to 4)
  - parallel port interface
  - other interfaces (USB, audio, mouse…) not used in this application
- Serial ports expansion board (usually 8 ports/board)
- Analog to digital converter (up to 16 single-ended input channels) with 12 bit (for system diagnostics) or 16 bit (for measurement purposes) resolution
- Remote communication device board (GSM-GPRS modem), often integrated with a GPS module; sometimes this board is substituted with devices connected to serial ports, obtaining better performance (higher data transmission speed, satellite communication, integrated TCP-IP stack…)
- Digital I/O board
- Relay board to switch the connected instruments on and off, or semiconductor power interfaces driven by the digital I/O board or by the parallel printer interface usually included in the CPU board
According to the measuring needs and to power supply availability, the hardware can be fully or only partly implemented.
5 Utility software
Utility programs were developed in Visual Basic, running in the Windows environment, to help manage the systems.
5.1 The control terminal
This program integrates a terminal program with some utilities: an "application sequence generator", a "launch scheduling module", an "HEX file generator", a "file upload manager" and a "remote file manager module". From the main terminal window it is possible to customize and use the terminal program, or to activate the specific utilities.
5.1.1 The application sequence generator
The "application sequence generator" is a menu-driven utility that generates the "sequence" files containing the macro-commands for mission programming.
5.1.2 The launch scheduling module
This module was developed to enable the setting of probe release parameters during operation from ships of opportunity.
5.1.3 The HEX file generator
It is possible to transfer files to the data acquisition and transmission system by coding them in a specifically designed format, somewhat similar to an ASCII-coded hexadecimal dump, with checksum and transmission order information.
Although not very fast (it increases the amount of data to be transmitted), this format is a reliable and safe way to encode small amounts of data (e.g. the "sequence files" generated by the "application sequence generator").
5.1.4 The file upload manager
This utility was designed to perform automatic uploads of "HEX" files to the data acquisition and transmission systems in the buoys; it can work in "manual", "automatic" or "blind" mode. The file transfer happens one line at a time, with the first line (numbered 0000) containing the file name and the last line (numbered 9999) signalling the end of file. In "blind" mode, the lines are sent sequentially, at timed intervals, without waiting for any feedback; in "manual" mode, the user waits for feedback from the remote computer, then chooses to press the ENTER key to send the next line, or BACKSPACE to resend the most recently transmitted one; in "automatic" mode, everything is managed by the program. It is possible to start (or resume) the file transmission at any line number.
5.1.5 The remote file manager module
This module gives remote control of the measuring station file system, offering DIR, COPY, DELETE and RENAME facilities.
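The paper does not give the exact "HEX" line layout, but its stated features (4-digit line numbers, file name on line 0000, end marker on line 9999, checksum information) can be sketched; the field order and the modulo-256 sum checksum below are assumptions, not the real format:

```python
def encode_hex_file(name, data, chunk=16):
    """Encode a small file into numbered ASCII lines, loosely in the spirit of
    the "HEX file generator": line 0000 carries the file name, data lines are
    hex-encoded chunks, and line 9999 marks the end of file. Each line ends
    with a two-digit modulo-256 sum checksum of its payload (an assumption)."""
    def line(number, payload):
        checksum = sum(payload.encode("ascii")) % 256
        return "%04d%s%02X" % (number, payload, checksum)

    lines = [line(0, name)]
    for i in range(0, len(data), chunk):
        lines.append(line(i // chunk + 1, data[i:i + chunk].hex().upper()))
    lines.append(line(9999, ""))
    return lines
```

Numbering every line is what lets the upload manager resume a transfer at an arbitrary line, as described above.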
6 Some applications
Several versions of the data acquisition and transmission system were built: the first one was used in a network of coastal monitoring buoys; a second one equips an automatic multiple launcher for expendable probes; microcontroller-based versions were also designed to equip small instruments.
Figure 5: A buoy near Messina (left) and a close-up view of the control electronics (right).
6.1 The "buoy" version
This release was used at the beginning of the century to equip a network of coastal monitoring buoys in the South of Italy, one of which is shown in Fig. 5. These buoys, described by Zappalà et al. [1], were equipped with:
- a meteo station measuring air temperature, wind direction and speed, pressure, and solar radiation;
- a system for pumping water samples from 5 depths to measure water temperature, conductivity, dissolved oxygen, turbidity and fluorescence;
- 5 in situ temperature probes at the same sampling depths;
- an experimental colorimetric Nutrient Probe Analyzer sequentially measuring, in a few minutes, ammonia, nitrate, nitrite and orthophosphate;
- a water sampler for microbiological laboratory analysis, able to collect, fix with formalin and store up to eight 250 ml samples;
- an Acoustic Doppler Current Profiler.
6.2 The "launcher" version
A main goal of a Ship Of Opportunity Programme (SOOP) is the provision of sea temperature profiles in (near) real time. The use of commercial ships and expendable probes reduces costs in comparison with research ship cruises. Greater cost effectiveness is achieved using automated systems requiring minimum personnel effort. An autonomous multiple launcher, able to automatically collect eight temperature profiles using a software-programmable sampling strategy, was developed in the framework of the Mediterranean Forecasting System – Toward Environmental Prediction project (MFSTEP). The device, shown in Fig. 6, is described by Zappalà et al. [2, 3]. In the "launcher" version the program monitors the position of the hosting ship, comparing GPS data against the programmable set of launch points-times.
Figure 6: The launcher on the deck of the Magnaghi ship (left) and its control electronics (right).
7 Discussion
The need for advanced systems to survey water quality and meteorological parameters, on both medium- and long-term scales, has in recent years stimulated the development of different kinds of coastal and offshore buoys and platforms, like those described, among others, by Grisard [4], Griffiths et al. [5], Paul [6], Seim et al. [7] and Pinardi et al. [8]. The systems described here constitute a further improvement and application of the know-how acquired in past programmes, described in Crisafi et al. [9], Zappalà et al. [10], Zappalà [11], Zappalà et al. [12] and Zappalà [13], and were funded in the framework of Italian ("SAM" Sistemi Avanzati di Monitoraggio – Advanced Monitoring Systems) and European ("MFSTEP" Mediterranean Forecasting System Towards Environmental Prediction) programmes. The observing systems can perform meteorological observations and measurements of the physical, chemical and physico-chemical parameters characterizing sea water state and quality, as well as current speed and direction; the measuring devices range from "static" sensors (e.g. Pt100 for temperature) to colorimetric analyzers for nutrients; a water sampler, taking samples for further laboratory determinations, may also be included in the systems to complete the series of measurable parameters. Water measurements can be carried out in situ at fixed depths, on samples pumped from various depths into a measurement chamber, using expendable profiling probes, or with profiling instruments like the SAVE described by Zappalà et al. [14], which adopted a reduced, microcontroller-based version of the "launcher" electronics with a software subset. New releases of the software and of the sequences can be uploaded to the system without suspending its normal activity. The macro-commands make it possible to manage the data acquisition and transmission, the mission programming, the system hardware and the measuring instruments.
Thanks to the hardware-software architecture, it is easy to upgrade the system to more powerful processors without the need to completely rewrite the software, which can be easily coded using standard development packages.
References
[1] Zappalà, G., Caruso, G. & Crisafi, E., The “SAM” integrated system for coastal monitoring. Proc. of the 4th Int. Conf. on Environmental Problems in Coastal Regions, Coastal Environment IV, ed. C.A. Brebbia, WIT Press: Southampton, pp. 341-350, 2002.
[2] Zappalà, G. & Manzella, G., An automatic multiple launcher for expendable probes. Proc. of the Fourth International Conference on EuroGOOS. Operational Oceanography: Present and Future, eds. H. Dahlin, N.C. Flemming, P. Marchand & S.E. Petersson, European Commission Publication Office, pp. 188-191, 2006.
[3] Zappalà, G., Reseghetti, F. & Manzella, G., Development of an automatic multiple launcher for expendable probes. Ocean Sciences, 3, pp. 173-178, 2007. Online: www.ocean-sci.net/3/173/2007/
[4] Grisard, K., Eight years experience with the Elbe Estuary environmental survey net. Proc. of OES-IEEE OCEANS ’94 Conference, I, pp. 38-43, 1994.
[5] Griffiths, G., Davis, R., Eriksen, C., Frye, D., Marchand, P. & Dickey, T., Towards new platform technology for sustained observations. Proc. of OceanObs 99. Online: www.bom.gov.au/OceanObs99/Papers/Griffiths.pdf
[6] Paul, W., Buoy Technology. Marine Technology Society Journal, 35 (2), pp. 54-57, 2001.
[7] Seim, H., Werner, F., Nelson, J., Jahnke, R., Mooers, C., Shay, L., Weisberg, R. & Luther, M., SEA-COOS: Southeast Atlantic Coastal Ocean Observing System. Proc. of MTS-IEEE OCEANS 2002 Conference, I, pp. 547-555, 2002.
[8] Pinardi, N., Allen, I., Demirov, E., De Mey, P., Korres, G., Laskaratos, A., Le Traon, P.Y., Maillard, C., Manzella, G. & Tziavos, C., The Mediterranean Ocean Forecasting System: first phase of implementation (1998-2001). Annales Geophysicae, 21, pp. 3-20, 2003.
[9] Crisafi, E., Azzaro, F., Zappalà, G. & Magazzù, G., Integrated automatic systems for oceanographic research: some applications. Proc. of OES-IEEE OCEANS ’94 Conference, I, pp. 455-460, 1994.
[10] Zappalà, G., Alberotanza, L. & Crisafi, E., Assessment of environmental conditions using automatic monitoring systems. Proc. of MTS-IEEE OCEANS ’99 Conference, II, pp. 796-800, 1999.
[11] Zappalà, G., Advanced technologies: equipments for environmental monitoring in coastal areas. Science-technology synergy for research in marine environment – Developments in Marine Technology, eds. L. Beranzoli, P. Favali & G. Smriglio, Elsevier: Amsterdam, pp. 261-268, 2002.
[12] Zappalà, G., Caruso, G. & Crisafi, E., Design and use of advanced technology devices for sea water monitoring.
Operational Oceanography. Implementation at the European and Regional Scales, eds. N.C. Flemming, S. Vallerga, N. Pinardi, H.W.A. Behrens, G. Manzella, D. Prandle & J.H. Stel, Elsevier: Amsterdam, Oceanography Series, 66, 2002.
[13] Zappalà, G., A software set for environment monitoring networks. Proc. of the Int. Conf. on Development and Application of Computer Techniques to Environmental Studies X, Envirosoft 2004, eds. G. Latini, G. Passerini & C.A. Brebbia, WIT Press: Southampton, pp. 3-12, 2004.
[14] Zappalà, G., Marcelli, M. & Piermattei, V., Development of a sliding device for extended measurements in coastal waters. WIT Transactions on Ecology and the Environment, eds. D. Prats Rico, C.A. Brebbia & Y. Villacampa Esteve, WIT Press: Southampton, pp. 187-196, 2008.
Modelling of the precise movement of a ship at slow speed to minimize the trajectory deviation risk
J. Malecki
Faculty of Mechanics and Electrical Engineering, Polish Naval Academy, Poland
Abstract
In this paper, soft computing techniques applied to the design of an autopilot for the control of the precise movement of a hydrographic ship in the horizontal plane along a desired trajectory are considered. The reference trajectory is determined by particular way-points, and the command signals are generated by independent fuzzy controllers. The minimization of the risk of deviating from the trajectory has been considered as a function of control quality in an environment with, and without, disturbances. To demonstrate effectiveness and robustness, selected results of the executed simulation tests are presented.
Keywords: precise control, slow speed of ship, minimisation of deviation risk, artificial intelligence.
1 Introduction
Modelling of a safe ship's movement with the aid of soft computing techniques, considering all real operating conditions, is a complicated process due to problems with the evaluation of the coefficients of the dynamics and with the description of external forces and sea environmental disturbances. A hydrographic ship has been considered as the control plant [6]. Contemporary ships are often equipped with control systems allowing complex manoeuvres and operation execution. The automatic control of a ship is a difficult task due to its nonlinear dynamics. Moreover, the dynamics can change according to modifications of the drive's configuration, which is suited to the task the ship is executing [2].
doi:10.2495/CMEM090271
Nowadays fuzzy systems find wide practical application in the control of special ship motions, ranging from soft regulatory control in consumer products to the accurate control and modelling of complex nonlinear systems. In this work the use of a fuzzy autopilot in the control of a ship's movement has been proposed [4]. The paper is organized in the following way: it starts with a brief description of the dynamical and kinematical equations of ship motion. Then the problem of motion along a predefined trajectory is discussed. Next, the fuzzy control law is described and the results of simulation tests are presented. Finally, selected conclusions are given.
2 Modelling of precise ship movement - equations of motion
General movement of a ship in 6 degrees of freedom (DOF) is described with the aid of the following vectors [2, 3, 6]:
\[
\eta = [x, y, z, \Phi, \Theta, \Psi]^T, \quad
\upsilon = [u, v, w, p, q, r]^T, \quad
\tau = [X, Y, Z, K, M, N]^T \tag{1}
\]
where:
η: vector of position and orientation in the earth-fixed frame;
x, y, z: coordinates of position;
Φ, Θ, Ψ: coordinates of orientation;
υ: vector of linear and angular velocities in the body-fixed frame;
u, v, w: linear velocities along the longitudinal, transversal and vertical axes;
p, q, r: angular velocities about the longitudinal, transversal and vertical axes;
τ: vector of forces and moments acting on the ship in the body-fixed frame;
X, Y, Z: forces along the longitudinal, transversal and vertical axes;
K, M, N: moments about the longitudinal, transversal and vertical axes.
Simplified equations of the ship's precise motion are described in the following form [6]:
\[
\begin{aligned}
m\left[\dot{u} - vr + wq - x_G\left(q^2 + r^2\right) + y_G(pq - \dot{r}) + z_G(pr + \dot{q})\right] &= X \\
m\left[\dot{v} - wp + ur - y_G\left(r^2 + p^2\right) + z_G(qr - \dot{p}) + x_G(qp + \dot{r})\right] &= Y \\
I_z\dot{r} + \left(I_y - I_x\right)pq - (\dot{q} + rp)I_{yz} + \left(q^2 - p^2\right)I_{xy} + (rq - \dot{p})I_{zx}\qquad & \\
{} + m\left[x_G(\dot{v} - wp + ur) - y_G(\dot{u} - vr + wq)\right] &= N
\end{aligned} \tag{2}
\]
In general, the dynamical and kinematical equations of motion [1, 2] can be expressed in the following form [3, 5, 6]:
\[
\begin{aligned}
M\dot{v} + C(v)v + D(v)v + g(\eta) &= \tau \\
\dot{\eta} &= J(\eta)v
\end{aligned} \tag{3}
\]
where:
M: inertia matrix, including the rigid-body and added-mass inertia matrices;
C(v): matrix of Coriolis and centripetal terms, including the rigid-body and added-mass Coriolis and centripetal matrices;
D(v): hydrodynamic damping matrix;
g(η): vector of restoring forces and moments;
J(η): velocity transformation matrix, transforming parameters between the body-fixed and earth-fixed coordinate systems.
3 Precise control of a ship in a horizontal plane
For the aim of analysis and mathematical description of a ship's motion in the horizontal plane, three coordinate systems (fig. 1) have been defined [3, 5, 6]:
− the global coordinate system OXY, also called the earth-fixed coordinate system;
− the local coordinate system O0X0Y0, fixed to the body of the ship;
− the reference coordinate system WPiXiYi; this system is not fixed.
The precise control system of the ship has been designed under the following assumptions:
− the ship moves with varying linear velocities u, v and the angular velocity r;
− the coordinates of position x, y and the heading Ψ are measurable;
− the desired trajectory is a broken line defined by a set of way-points WP1, WP2, WP3, etc. with coordinates respectively (x1, y1), (x2, y2), (x3, y3), etc.;
− the command signal consists of three components τ = [X, Y, N].
It has been assumed that the Xi-axis of the reference coordinate system WPiXiYi coincides with the straight line passing through the points (xi, yi) and (xi+1, yi+1). The reference system can be obtained from the global one as a result of a transformation consisting of a translation along the vector O0WPi and a rotation
about Z-axis around the angle φi . A main goal of the precise steering system of the ship motion in the horizontal plane is to minimize mean square deviations ∆y and ∆Ψ measured in the system WPiXiYi, (fig. 1): – ∆y is a perpendicular distance of the ship’s centre of gravity to the predefined trajectory; − ∆Ψ is a local heading angle defined as the angle between the track reference line and the ship’s centreline. WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line)
Figure 1: Three coordinate systems used in the description of a ship's motion in the horizontal plane: O0X0Y0 – earth-fixed system, OXY – body-fixed system, and WPiXiYi – reference system.
The structure of the control system described above is presented in fig. 2. The assumed cost function Jcost takes the following form:

Jcost = min Σt (λy·∆y²(t) + λΨ·∆Ψ²(t))    (4)

where:

∆y(t) = −sin(φi)·(x0(t) − xi) + cos(φi)·(y0(t) − yi)    (5)
Figure 2: General structure of the control system of the ship.
x0(t), y0(t) – instantaneous coordinates of the ship's centre of gravity in the global system O0X0Y0;
xi, yi – coordinates of the point WPi in the system O0X0Y0;
φi – angle of rotation of the reference coordinate system with respect to the global one, described in the following form:

φi = arctan[(yi+1 − yi)/(xi+1 − xi)]    (6)

∆Ψ(t) = Ψ(t) − φi;
Ψ(t) – instantaneous course of the vehicle in the system O0X0Y0;
t – time;
λy, λΨ – constant coefficients.
Each time the ship's location (x0(t), y0(t)) at the time t satisfies:

[xi − x0(t)]² + [yi − y0(t)]² ≤ ρ²    (7)
where (see fig. 1):
ρ – radius of the circle of acceptance; when condition (7) is satisfied, the next way-point should be selected and the reference coordinate system WPiXiYi changed into WPi+1Xi+1Yi+1;
φi+1 – angle of rotation, calculated according to [6].
The new way-point (xi+1, yi+1) is updated corresponding to the new reference coordinate system.
4 Fuzzy control law

To design the fuzzy proportional-derivative controller (FPD), a controller adopted from [6, 7] has been used (fig. 3).

Figure 3: Fuzzy control structure.
The membership functions of the fuzzy sets are defined for the input variables – the error signal ek(t) = ηdk − ηk(t) and the derived change in error ∆ek(t) = ηk(t) − ηk(t−1) – and for the output variable, the command signal τk(t). Here k is equal to the number of the DOF. The notation is taken as follows: N – negative, Z – zero, P – positive, S – small, M – medium and B – big. The rules presented in Table 1, chosen from MacVicar-Whelan's standard base of rules, have been used as the control rules [1, 7]. The unknown parameters of the proposed fuzzy controllers have been determined with the aid of the fuzzy base of knowledge.

Table 1: The fuzzy controller's base of rules: the command signal τk as a function of the error signal ek (seven fuzzy sets, NB to PB) and the derived change in error ∆ek (three fuzzy sets: N, Z, P).

5 Simulation study
Figures 4 and 5 contain selected results of the simulation studies considered in this article. The simulations of the automatic steering system are based on data from a real hydrographic ship, whose mathematical model was used in this study [5, 6]. The simulation studies of the developed automatic steering system were executed in the Matlab environment. The thrust forces generated by the mathematical model of the main propulsion (two screws) are presented in fig. 4. Disturbances of the sea environment – wind 5°B, waves and a sea current of 1 m/s – influence the ship [6]. A fuzzy proportional-derivative controller (PD) was used as the regulator in the automatic steering system. The same thrust forces generated by the main propulsion and the bow thruster, without the influence of disturbances, are illustrated in fig. 5. In this case a classical proportional-integral-derivative regulator (PID) was used in the automatic steering system.
Figure 4: Automatic steering system of a ship with a fuzzy proportional-derivative regulator (PD). Thrust forces of the main propulsion Fs1, Fs2 (upper plot) and thrust forces of the bow thruster Fss (lower plot).
Figure 5: Automatic steering system of a ship with a classical proportional-integral-derivative regulator (PID). Thrust forces of the main propulsion Fs1, Fs2 (upper plot) and thrust forces of the bow thruster Fss (lower plot).
6 Conclusion
The present state of knowledge associated with the design of fuzzy automatic steering systems is only weakly formalized, and this determines the effectiveness of such systems. Their effectiveness usually rests on the experience and intuition of the designer and on the experience and practical knowledge of helmsmen and navigators. To a large degree, these define the base of the fuzzy steering algorithm and decide how the control law describes the action of the steered process [3, 6]. In this work, fuzzy regulators with 5–7 fuzzy sets per variable were used. Too small a number of these sets does not allow the essence of the steered object to be captured, while too large a number causes a considerable and inconvenient growth in the number of parameters describing the steering algorithms. The conducted research allows one to state that the use of fuzzy logic for the precise steering of ship movement enables the execution of hydrographic tasks with high precision. In this paper, the use of fuzzy controllers for precise control along a desired trajectory has been described. The presented results of the numerical research prove that the applied approach to designing autopilots gives positive effects, providing suitable performance of the proposed control system. Another advantage of the discussed control system is its flexibility with regard to changes in the dynamic properties of the ships and in the performance index. Further work is needed to identify the best fuzzy structure of the autopilot and to test the robustness of this approach in the presence of environmental disturbances.
Acknowledgement

The present work was partially supported by the State Committee for Scientific Research in Poland under Grant no. N504 - O 0033 32.
References
[1] Driankov, D., Hellendoorn, H., & Reinfrank, M., An introduction to fuzzy control, Springer-Verlag, 1993.
[2] Fossen, T.I., Marine control systems, Marine Cybernetics AS, New York, 2002.
[3] Garus, J., Using of softcomputing techniques to modelling of motion of underwater robot under conditions of environmental disturbances, Polish Journal of Environmental Studies, vol. 16, no. 4B, pp. 34-27, 2007.
[4] Kacprzyk, J., Multistage fuzzy control, John Wiley and Sons, New York, 1997.
[5] Malecki, J., Mathematical model for control of ship at slow speed, Polish Journal of Environmental Studies, vol. 16, no. 4B, pp. 126-129, 2007.
[6] Malecki, J., Mathematical model of ship as object of precise steering, Scientific Magazine Polish Naval Academy, no. 169/K, pp. 289-302, Gdynia, 2007 (in Polish).
[7] Yager, R.R., & Filev, D.P., Essentials of fuzzy modelling and control, John Wiley and Sons, 1994.
Automated safe control of a Self-propelled Mine Counter Charge in an underwater environment

P. Szymak
Department of Electrical Engineering and Electronics, Polish Naval Academy, Poland
Abstract

The operation of a Self-propelled Mine Counter Charge (SMCC) in an underwater environment is exposed to disturbances of its movement. The main disturbances in this kind of environment are sea currents. Another difficulty in SMCC operation, particularly with automated control, is the nonlinear dynamics of the torpedo-shaped body of the SMCC. In this paper the automatic control system of an SMCC called Gluptak is presented, which can support the execution of a counter mine mission. Additionally, two control methods are presented that can counteract the sea current influence in the case of a lack of sea current measurement on board the SMCC. For the purpose of Gluptak's control, the use of classical PD and artificial intelligence controllers has been considered, particularly with fuzzy data processing. A mathematical model of the SMCC and selected results of the numerical research are presented. Keywords: underwater vehicle, counter mine mission, automatic control, fuzzy logic.
1 Introduction

One of the main development directions of military underwater technology is that of robots used to identify and destroy naval mines. Using these unmanned vehicles enables exploration at greater depths and in more hazardous conditions.
WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line) doi:10.2495/CMEM090281
The Polish implementation of this technology is the Self-propelled Mine Counter Charge (SMCC) Głuptak (fig. 1). It was designed in the Underwater Technology Department of Gdańsk University of Technology. The SMCC is a disposable unit, remotely operated and powered from on board. It is shaped like a torpedo (fig. 1) and carries mine disposal equipment to the detected and classified target. Once the target has been identified by the vehicle's sonar and TV cameras (fig. 1), this equipment is used to initiate the mine's explosive [3].
Figure 1: Principal components of SMCC Gluptak.
SMCC Gluptak has a specific propulsion system, consisting of four 3-blade screw propellers in the horizontal plane and a single 3-blade screw propeller in a tunnel in the vertical plane. Each thruster is electrically driven and has 50 W of power. The propulsion system enables the underwater vehicle to move through water at a maximum speed of 3 m/s and allows control of the SMCC's movement in four degrees of freedom (two translational motions, along the longitudinal axis of symmetry xo and the vertical axis of symmetry zo, and two rotations, around the lateral axis of symmetry yo and around the zo axis).
2 Mathematical model

For the purpose of simulating Gluptak's movement, a nonlinear model in six degrees of freedom has been adopted [1], in which movement is analysed in two coordinate systems: 1) the body-fixed coordinate system xoyozo, which is movable, and 2) the earth-fixed coordinate system xyz, which is immovable.
For the purpose of the movement description, the notation of physical quantities according to SNAME (The Society of Naval Architects and Marine Engineers) has been adopted [2]. SMCC Gluptak's movement is described by six equations of motion, where the first three equations represent the translations and the last three represent the rotations (about the three axes of symmetry: longitudinal xo, lateral yo and vertical zo). These six equations can be expressed in the compact form [1]:

Mν̇ + C(ν)ν + D(ν)ν + g(η) = τ    (1)

Here ν = [u, v, w, p, q, r]T is the body-fixed linear and angular velocity vector, η = [x, y, z, φ, θ, ψ]T is the vector of earth-fixed position coordinates and Euler angles, and τ = [X, Y, Z, K, M, N]T is the vector of forces and moments of force acting on the underwater vehicle. M is the inertia matrix, which is the sum of a rigid-body inertia matrix MRB and an added mass inertia matrix MA. C is the Coriolis and centripetal matrix, which is the sum of a rigid-body and an added mass Coriolis and centripetal matrix. D is the hydrodynamic damping matrix and g is the restoring forces and moments matrix. Under the assumptions that the SMCC has three planes of symmetry, that it moves at low speed in a viscous liquid, and that the origin of the movable coordinate system is placed at the vehicle's centre of gravity, the matrices take a specific form with nonzero values only in the diagonal elements [1]. According to [5], these elements were calculated on the basis of the geometrical parameters of SMCC Gluptak. The Coriolis and centripetal matrices were omitted because their numerical values are small and unimportant in the computer simulation.
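As a rough illustration of how model (1) behaves once the Coriolis/centripetal and restoring terms are dropped, the sketch below integrates the remaining diagonal system with explicit Euler steps. The matrix entries and thrust value are illustrative placeholders, not Gluptak's identified parameters, and the damping is assumed linear:

```python
import numpy as np

# Illustrative diagonal entries only -- not Gluptak's identified values.
M = np.diag([60.0, 90.0, 90.0, 5.0, 8.0, 8.0])      # rigid-body + added mass inertia
D = np.diag([30.0, 300.0, 300.0, 4.0, 10.0, 10.0])  # (linear) hydrodynamic damping

def step(nu, tau, dt=1.0 / 18.0):
    """One explicit-Euler step of M*nu_dot + D*nu = tau.
    Coriolis/centripetal and restoring terms are omitted, as in the paper."""
    nu_dot = np.linalg.solve(M, tau - D @ nu)
    return nu + dt * nu_dot

nu = np.zeros(6)                                 # [u, v, w, p, q, r]
tau = np.array([50.0, 0.0, 0.0, 0.0, 0.0, 0.0])  # constant surge thrust X = 50 N
for _ in range(2000):                            # about 111 s of simulated time
    nu = step(nu, tau)
print(round(nu[0], 3))                           # surge speed settles near X/Duu = 50/30
```

With linear damping the surge subsystem is first order with time constant Muu/Duu = 2 s, so 2000 steps of 1/18 s are ample for convergence.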
3 Architecture of the control system

The automated control system of SMCC Gluptak consists of (fig. 2): 1) a supervisory control unit, which is responsible for setting the values of the movement's parameters and for turning individual controllers on and off at the proper moments, and 2) four controllers – of course, trim, translation and draught – which generate the adequate control signals (fig. 2). Because the hydrodynamic damping in the yo and zo axes is almost 10 times bigger than in the xo axis, and there is about four times more thrust in the xo axis than in the zo axis, the draught controller is useless in practice. As there is a lack of information about the sea current's influence on the underwater vehicle, it has been assumed that the SMCC should approach its target with a constant velocity of 0,5 m/s. Therefore, the controller of translation controls the translational velocity relative to the target of the mission. The value of the velocity is obtained from an underwater trackpoint system, which is used to navigate below the surface of the water.
Figure 2: Architecture of the automated control system of SMCC Gluptak.
The controller of translation is a classical proportional-integral-derivative (PID) controller, whose action in discrete time is based on the simple equation:

un = kp·εn + ki·Σεn + kd·∆εn    (2)
Here un is the control signal, εn is the control error, Σεn is the sum of recent errors at the n-th instant of time and ∆εn is the derived change in error at the n-th instant of time. kp, ki and kd are the amplification factors of, respectively, the proportional, integral and derivative elements. All the controller's settings were tuned experimentally with the assistance of direct control quality indexes, such as the rise time, the settling time and the value of the first overshoot. For the accepted control step, equal to 1/18 s, the following amplification factors have been obtained: kp = 55, ki = 0,1 and kd = 21. For the purpose of course and trim control, two types of proportional-derivative controllers have been used: 1) classical PD, and 2) based on the fuzzy logic method, FPD [5]. The action of the classical PD controller in discrete time is based on the simple equation:
un = kp·εn + kd·∆εn    (3)
Here un is the control signal, εn is the control error and ∆εn is the derived change in error at the n-th instant of time. kp and kd are the amplification factors of, respectively, the proportional and derivative elements.
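The discrete laws (2) and (3) differ only in the integral term, so a single sketch covers both; the gains below are the paper's quoted translation-controller settings, with the 1/18 s control step absorbed into the gains as in the tuned values:

```python
class PID:
    """Discrete PID law of eq. (2); setting ki = 0 gives the PD law of eq. (3)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.err_sum = 0.0    # running sum of recent errors
        self.prev_err = 0.0

    def update(self, err):
        self.err_sum += err
        delta = err - self.prev_err   # derived change in error
        self.prev_err = err
        return self.kp * err + self.ki * self.err_sum + self.kd * delta

pid = PID(kp=55, ki=0.1, kd=21)   # translation-controller gains quoted above
u = pid.update(0.5)               # 55*0.5 + 0.1*0.5 + 21*0.5 = 38.05
```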
All of the controller’s settings were tuned in an experimental way with the assistance of direct control quantity indexes, such as the rise time, the setting time and the value of the first overshoot. For the accepted step of the control, equal to 1/18 s, the following amplification factors have been received: kp = 2,9 and kd = 256. The action of FPD controllers is based on the fuzzy inference (fig. 3). Use of fuzzy inference in FPD controllers depends on the selection: number, type and position parameters of the membership function of the input and output variables and fuzzy inference rules, which create a base of rules. FPD
set value of parameter p
error e + _
Fuzzy Inference System
amplification derivative
control signal τ
UNDERWATER VEHICLE
vector of state [ν,η]
change of error ∆e actual value of parameter pa
Figure 3:
Block diagram of the fuzzy proportional-derivative controller FPD.
For the purpose of the FPD control of trim and course, a simple structure of the fuzzy inference system has been accepted, which was used in earlier research [4]: 1) three fuzzy sets with external trapezoidal membership functions and internal triangular membership functions for the input signals: the error (intersection points of the functions equal to [–0,2; 0,5] and [0,2; 0,5]) and the change in error (intersection points of the functions equal to [–0,07; 0,3] and [0,07; 0,3]); 2) five singletons for the output signal – the moment of force M (with coordinates equal to –1, –0,65, 0, 0,65 and 1). The amplification factors for the normalized membership functions were tuned with the assistance of simulations of the mathematical model and were evaluated with the assistance of direct indexes. The following values of the amplification factors have been obtained: 90° for the error signal, 3° for the change in error signal and 4,5 N for the control signal.
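A minimal sketch of this kind of inference: three input sets (N, Z, P), five output singletons at −1, −0,65, 0, 0,65 and 1, and weighted-average defuzzification. The 3 × 3 rule base and the membership widths are illustrative assumptions (the paper does not list its rules here); only the scaling factors 90°, 3° and 4,5 N are taken from the text:

```python
def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def trap_left(x, a, b):
    """1 for x <= a, falling linearly to 0 at b."""
    return 1.0 if x <= a else (0.0 if x >= b else (b - x) / (b - a))

def trap_right(x, a, b):
    return 1.0 if x >= b else (0.0 if x <= a else (x - a) / (b - a))

def fuzzify(x):
    # Width 0.4 puts the N/Z and Z/P crossings at (+-0.2, 0.5) on the
    # normalized universe, matching the error-signal intersections above.
    return {'N': trap_left(x, -0.4, 0.0),
            'Z': tri(x, -0.4, 0.0, 0.4),
            'P': trap_right(x, 0.0, 0.4)}

SINGLETONS = {'NB': -1.0, 'NS': -0.65, 'Z': 0.0, 'PS': 0.65, 'PB': 1.0}
RULES = {('N', 'N'): 'NB', ('N', 'Z'): 'NS', ('N', 'P'): 'Z',
         ('Z', 'N'): 'NS', ('Z', 'Z'): 'Z',  ('Z', 'P'): 'PS',
         ('P', 'N'): 'Z',  ('P', 'Z'): 'PS', ('P', 'P'): 'PB'}

def fpd(error_deg, derror_deg, ke=90.0, kde=3.0, ku=4.5):
    me, mde = fuzzify(error_deg / ke), fuzzify(derror_deg / kde)
    num = den = 0.0
    for (se, sde), out in RULES.items():
        w = min(me[se], mde[sde])          # Mamdani min for AND
        num += w * SINGLETONS[out]
        den += w
    return ku * num / den if den else 0.0  # weighted average of singletons

print(round(fpd(90.0, 0.0), 3))            # saturated error, no change: 0.65 * 4.5
```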
4 Strategy of approach to the target of the SMCC mission

During the process of designing the approach strategy to a mission target carried out by Gluptak, the following factors should be taken into consideration: 1) the torpedo-shaped body of the SMCC, where the only efficient direction of travel is translational movement along the xo axis; 2) the potential possibility of control in four degrees of freedom reduced to three degrees of freedom (the proper draught should be achieved by an adequate trim and forward movement); 3) the specific construction of the control cable, called the umbilical cord, in the form of a thin optical waveguide unrolling from a spool (this type of cable generates almost no resistance to movement, but it has a limited length, and the speed of its unrolling depends on the trajectory of the SMCC and the action of the sea current).
Because of the presented factors, it has been assumed that the SMCC should take a starting position relative to the target of the mission such that the eventual sea current will act on it along its xo axis, opposite to its translational velocity vector. In this case, the underwater vehicle will move forward along a line close to a straight line and its cable will unroll backwards, preventing eventual damage of the cable by the underwater vehicle's thrusters. Under sea current action, the effect of the underwater vehicle being pushed aside can be observed. When the SMCC is equipped with a sea current measuring device, there is no problem with counteraction: a virtual aim of the mission can be calculated, taking into consideration the direction and velocity of the sea current. In the case of a lack of sea current measurement on board the SMCC, a correction of the set value of the controlled variables should be executed.
For the purpose of counteracting the sea current, two control methods have been developed: 1) continuous updating of the set value of the controlled variable (if the underwater vehicle is pushed aside by the sea current, then a new set value of the controlled variable is calculated); 2) correction of the set value of the controlled variable on the basis of bearing (after achieving the set value of the controlled variable, the bearing in the horizontal or the vertical plane is calculated; changes of the bearing in time indicate sea current action, which gives the possibility of correcting the set value of the controlled variable). The second correction method is based on the simple equation:

pn = pn−1 + kb·∆bn    (4)

Here pn is the set value of the controlled variable at the n-th instant of time, pn−1 is the set value at the (n−1)-th instant, kb is the gain factor and ∆bn is the bearing error at the n-th instant of time.
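The bearing-based correction (4) is a one-line update; in the sketch below the gain kb = 0,5 is an illustrative value, not the identified one:

```python
def correct_set_value(p_prev, bearing_set, bearing_measured, kb=0.5):
    """Eq. (4): p_n = p_(n-1) + kb * delta_b_n.
    kb = 0.5 is an illustrative gain, not the paper's identified value."""
    delta_b = bearing_set - bearing_measured   # bearing error
    return p_prev + kb * delta_b

# A persistent bearing error (the sea current pushing the vehicle aside)
# nudges the set course back toward the target step by step:
course = 45.0
for measured in (40.0, 42.0, 44.0):
    course = correct_set_value(course, 45.0, measured)
print(course)  # 45 + 0.5*(5 + 3 + 1) = 49.5
```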
5 Numerical research

The computer simulation was realized in the Windows/Matlab environment. The simulation was executed under the influence of a sea current with the following parameters: a specified velocity Vc and a direction of action αc. With the aim of evaluating the tested controllers, direct control quality indexes have been used.
Figure 4: Simulation results of the course's control: a) based on the fuzzy logic method FPD, b) classical PD.
At the beginning, the actions of the course controllers – classical PD and based on the fuzzy logic method (FPD) – have been compared (fig. 4). The settling time for the PD controller is almost 40% bigger (12,26 s) than the settling time for the FPD controller (8,74 s). Moreover, fuzzy logic methods are more robust to the nonlinearity of the object and the influence of disturbances [4]. Because of the accepted approximation of the SMCC in the form of a cylinder, the received results of the numerical research for the trim controllers are similar to the presented results for the course controllers. Then, the action of the whole automated control system in the horizontal plane XY has been examined (fig. 5). The main task of the automatically controlled SMCC Głuptak was to reach the target. Additionally, it had to move along a line as close to a straight line as possible. This condition is very important in an underwater environment with many obstacles, especially during the carrying out of a counter mine mission. Moreover, a comparison of the two control methods has been executed in the presence of sea currents acting from different directions (fig. 5). As can be observed, the second method (correction of the set value of the controlled variable on the basis of bearing) is better than the first one (continuous updating of the set value of the controlled variable). The use of the added bearing signal gives the possibility to move along a safer trajectory, closer to a straight line.
6 Conclusion

In general, the received results of the executed numerical research confirmed that the automated control system of the SMCC Głuptak can be successfully used to steer this underwater vehicle along a desired trajectory, especially along a line to the target of a counter mine mission. In particular, it should be underlined that the introduction of an added bearing signal into the control system gives indirect information about the effects of the sea current, which is very significant in the case of a lack of this type of information
Figure 5: Automated control of the SMCC Gluptak to the target using the methods: a), c) continuous updating of the set value; b), d) correction of the set value on the basis of bearing; in an environment with an acting sea current: a), b) Vc = 1 m/s, αc = 60°; c), d) Vc = 1 m/s, αc = 90°.
on board an underwater vehicle. The correction of the set value of the controlled variable on the basis of bearing enables the approach to a target along a safer trajectory than continuous updating of the set value of the controlled variables. In addition, the controller based on the fuzzy logic method (FPD) enables better and faster regulation of the trim and course angles than a classical PD controller.
References
[1] Fossen, T.I., Guidance and Control of Ocean Vehicles, John Wiley & Sons Ltd., 1994.
[2] Nomenclature for Treating the Motion of a Submerged Body Through a Fluid, Technical and Research Bulletin, The Society of Naval Architects and Marine Engineers – SNAME, no. 3-47, 1989.
[3] Kubaty, T., Rowiński, L., Mine counter vehicles for Baltic navy, http://www.underwater.pg.gda.pl/publikacje.
[4] Szymak, P., Using of fuzzy logic method to control of underwater vehicle in inspection of oceanotechnical objects, Artificial Intelligence and Soft Computing, Polish Neural Network Society, Warsaw, 2006, pp. 163-168.
[5] Szymak, P., report on grant: Using of fuzzy logic method to control of naval unit underwater vehicle – load, Gdynia, 2006 (in Polish).
A novel financial model of long term growing stocks for the Taiwan stock market

S.-H. Liang1, S.-C. Liang1, L.-C. Lien2 & C.-C. Liang3
1 Department of Research, LC Investment Workshop, Taiwan, ROC
2 Department of Investment Business Management, Da-Yeh University, Taiwan, ROC
3 Department of Mechanical and Automation, Da-Yeh University, Taiwan, ROC
Abstract

Adopting the investment concepts of Rule#1 used by Phil Town and the formula for identifying and evaluating the stocks of tomorrow used by Michael Moe, this paper constructs a novel financial model of long-term growing stocks for the Taiwan stock market. The investment themes are derived from the intersection of emerging industries, megatrends, and hot areas for future growth. Then, candidate good companies are selected using the three-circle method. To evaluate the candidature of excellent companies, this paper uses Town's four Ms method (Meaning, Moat, Management, and Margin of safety) and the Big Five numbers (Return on Invested Capital (ROIC), sales growth rate, earnings per share (EPS) growth rate, and equity or Book Value per Share (BVPS) growth rate, and Free Cash Flow (FCF) growth rate), together with Moe's four Ps (People, Product, Potential, and Predictability). Also, the EPS growth rate is used to rank the companies. Furthermore, this paper finds the sticker price by estimating the future earnings growth and price/earnings ratio, checks the sticker price by the discounted cash flow method, the Price Earnings to Growth Ratio (P/E/G) method and the Price to Sales Ratio (P/S) method, and then calculates the price with a Margin of Safety (MOS). Finally, Taiwan long-term stock investment is studied by using this simple and step-by-step financial model. This paper aims to present a useful model that individual investors can apply easily. Keywords: financial model, long term, growing stocks, Taiwan, 4Ms, 4Ps.
WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line) doi:10.2495/CMEM090291
1 Introduction

Taiwan is moving forward to being a developed country, and its financial environment is becoming more and more liberal, global, and diversified. Because of the complexity of financial merchandise, people think that investment is an extremely specialized subject. There is much literature about investment, but it focuses on either inferring theory or providing technical analysis. This literature often confuses investors, and its results are so hard to understand that investors do not know how to use them. In Taiwan, individual investors make up a large share of the stock market. Although there is plenty of stock market information, individual investors usually do not know which stock to invest in. A simple, direct, and step-by-step investment model is needed for individual investors to make investment decisions correctly and independently.
In 2006, Phil Town developed a successful investment strategy using the first rule of investment (Rule#1), which was pioneered by Columbia University's Benjamin Graham and strictly followed by famed investor Warren Buffett – Don't Lose Money (Town [1]). Rule#1 is a sensible and pragmatic step-by-step guide – methodically researched and terrifically accessible. To be financially secure after retirement, investors need something more effective – something guaranteed to protect investors' principal and earn a solid rate of return. By an intriguing process that prevents them from losing money, investors end up making more money than they had ever imagined. It essentially comes down to buying shares of companies only when the numbers – and the intangibles – are on the investor's side. The basic formula behind Rule#1 is simple – much more like shopping:
1. Finding a good business that the investor understands.
2. Knowing what it is worth – exactly what it is worth – by predicting its future stock prices.
3. Buying the stock at 50% off and selling it at full price when the market corrects its value.
4. Repeat until very rich.
In Rule#1, Phil Town offered investors roadmaps as easy to read as these to:
1. Set up a brokerage account.
2. Utilize the Internet to access the same market data that the big guys on Wall Street use.
3. Set up a "watch list" of the good businesses investors want to buy.
4. Read signals that say "buy" and "sell".
5. Achieve a guaranteed 15% or better rate of return on investments.
Back in 1992, stock analyst Michael Moe predicted that a humble Seattle coffee company would become a long-term superstar. Since then he has made similar great calls in high technology and other sectors, earning a reputation as one of today's most insightful market experts (Moe [2]). Michael Moe, now co-founder and CEO of ThinkEquity Partners, shows how winners like Dell, eBay, and Home Depot could have been spotted in their start-up phase and how an investor can find Wall Street's future giants. For Wall Street insiders and individual investors alike, Moe's 2007 book, Finding The Next Starbucks: How to Identify and Invest in the Hot Stocks of Tomorrow, is an indispensable guide to spotting growth opportunities. Moe's objective is to identify and invest in the small companies that can become big companies – what Moe calls the stars of tomorrow – the fastest growing, most innovative companies in the world.
The hunt for these companies offers the greatest potential reward, but it can also be very dangerous for the unprepared. In reality, finding the best stocks means finding the best companies – over time a stock's performance aligns with how the company performs. Great companies and great investors are both systematic and strategic in achieving their objectives. The philosophy adopted in this paper is that earnings growth drives stock price, and Moe's core principles, the 10 Commandments, guide the investment process. Moe also has a framework for looking at industries that benefit from secular tailwinds – megatrend analysis. A discipline – the 4Ps (people, product, potential, predictability) – is used to analyze the core fundamentals of a giant growth company. The valuation methodology then gives an investor a perspective on the relative value of a company – the Price/Earnings ratio (P/E) relative to growth, and the Price/Sales ratio (P/S) versus margins and growth. According to Moe's philosophy, earnings growth is what drives stock price over time, so a simple solution would be to find companies with high earnings growth and hang on for the ride. While that is true, investing in high-growth enterprises is even better than that because of the way compound interest works, which explains why growth investing has such huge potential rewards. In 2007, Malkiel's concept of a random walk down Wall Street (Malkiel [3]) provided a guided tour of the complex world of finance and practical advice on investment opportunities and strategies. Malkiel examines popular investing techniques, including technical analysis and fundamental analysis, in light of academic research on these methods. Through detailed analysis, Malkiel notes significant flaws in both techniques, concluding that, for most investors, following these methods would produce inferior results compared with passive strategies.
Basically, Town's methodology – a simple strategy for successful investing – and Moe's methodology – how to identify and invest in the hot stocks of tomorrow – are primarily based on fundamental analysis, assisted by technical analysis. This paper adopts the investment concepts of Rule#1 used by Town and the formula for identifying and evaluating the stocks of tomorrow used by Moe to establish a simple, easy-to-apply model for investors. Candidate excellent stocks must be found first. This paper then uses the Big Five numbers to predict each candidate stock's long-term future. It also combines Town's and Moe's concepts to identify the giant growth companies, then calculates their intrinsic values and finds a margin of safety. Finally, this paper uses technical analysis tools – MACD, Stochastics, and Moving Average – to decide when to buy and sell the stock. This paper may provide a useful reference for individual investors investing in the Taiwan Stock Market.
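The decision rule used with these three tools later in the model is that all of them must agree before acting. The sketch below is a hypothetical illustration of that rule (the function names and parameter choices – 12/26/9 periods for MACD, 14 for the stochastic oscillator, 10 for the moving average – are common textbook defaults, not values prescribed by Town or by this paper):

```python
def ema(values, span):
    """Exponential moving average with smoothing factor 2/(span + 1)."""
    alpha = 2.0 / (span + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def macd_bullish(prices, fast=12, slow=26, signal=9):
    """True when the MACD line sits above its signal line."""
    macd = [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]
    return macd[-1] > ema(macd, signal)[-1]

def stochastic_bullish(prices, period=14):
    """True when %K is in the upper half of the recent trading range."""
    window = prices[-period:]
    lo, hi = min(window), max(window)
    k = 100.0 * (prices[-1] - lo) / (hi - lo) if hi > lo else 50.0
    return k > 50.0

def ma_bullish(prices, period=10):
    """True when the latest price is above its simple moving average."""
    return prices[-1] > sum(prices[-period:]) / period

def all_three_say_buy(prices):
    """Enter only when MACD, the stochastic, and the moving average agree."""
    return macd_bullish(prices) and stochastic_bullish(prices) and ma_bullish(prices)
```

A steadily rising price series triggers all three signals, a falling one triggers none, and mixed series are filtered out – which is the point of requiring agreement.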
2
Financial model construction
This paper constructs a simple, successive, and strategic model for long-term stock investment, based on Town's modified Rule#1, that is well suited to the Taiwan Stock Market. It adopts Town's first priority in the Rule#1 concept, which is to help investors find with certainty excellent companies that have a proven track record of producing strong growth and investment returns. This paper uses Town's "4M" test, four checks that determine whether a company is in good financial condition and will produce strong long-term growth. The first M is Meaning: finding a company that an investor understands and believes in is essential. In this paper, the 3-circle analysis method, coupled with Moe's megatrends, six emerging industries, and 16 hot areas for future growth, is used to choose candidate excellent companies. The second M is Moat: companies with strong competitive advantages achieve the most predictable returns. If competition is so fierce that a company is struggling to keep ahead, an investor cannot reasonably trust long-term growth estimates. Town's method can be used to recognize what Warren Buffett calls a company's "Moat" and to use financial statistics such as Return on Capital, earnings growth, and revenue growth to build evidence of a strong competitive advantage. Once excellent company characteristics have been identified, confirm that the Big Five numbers – ROIC (Return on Investment Capital), Equity growth rate, EPS growth rate, Sales growth rate, and Cash growth rate – are all over 10% and not declining. The third M is Management: this M focuses on the executive board of the company. The CEO of the company must be carefully reviewed by reading articles and letters to shareholders, and the company's annual and quarterly reports must be analyzed to determine whether the company is open and honest with its owners. Town covers what to look for and what to avoid. In addition, Moe's 4Ps method – People, Product, Potential, and Predictability – is used to estimate the potential of the hot stocks of tomorrow. The fourth M is Margin of Safety: Town offers a simple method for determining a stock's true "sticker price" by estimating future earnings growth and the price/earnings ratio.
If a company is selling for 50% of this computed price, it may be time to buy, but only if it passes the final M. Determine the growth rate of the business, determine the historically typical earnings multiple (the Price/Earnings ratio, P/E), and determine the current trailing-twelve-months EPS. Grow the EPS at that growth rate for ten years, then multiply the future EPS by the P/E to get the future price. A minimum acceptable rate of return of 15% is adopted, so the sticker price (the current value) is the future price divided by four, since 1.15^10 ≈ 4. According to Town's idea, never buy retail: buy at 50% below retail. The Margin of Safety price is the sticker price divided by two. Moe's Discounted Cash Flow approach, P/E-to-Growth ratio (P/E/G), and P/S are also adopted as the valuation methodology for a growing company. The 4Ms get investors to a wonderful business at an attractive price. From that point, this paper refers to technical tools such as group signals, insider trading, MACD, Stochastics, and Moving Averages to help investors decide when to get in and out. In this paper, a systematic and strategic financial model is constructed in detail for identifying and investing in the fastest-growing companies in the world. It starts with the first M (Meaning) and then proceeds to the second M (Moat), the third M (Management, with a disciplined valuation approach), and finally the fourth M (Margin of Safety). These are integrated into the process used to identify long-term growing stocks, as shown in Figure 1.
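The sticker-price arithmetic above can be condensed into a few lines. This is a minimal sketch (the helper name and sample inputs are hypothetical); the divide-by-four step is just discounting at 15% for ten years, since 1.15^10 ≈ 4.05:

```python
def sticker_price(current_eps, growth_rate, future_pe, years=10, min_return=0.15):
    """Rule#1-style valuation: grow EPS forward, apply a historical P/E,
    then discount the future price back at the minimum acceptable return."""
    future_eps = current_eps * (1 + growth_rate) ** years
    future_price = future_eps * future_pe
    sticker = future_price / (1 + min_return) ** years   # roughly future_price / 4
    margin_of_safety = sticker / 2                       # "never buy retail"
    return sticker, margin_of_safety

# Hypothetical inputs: current EPS 2.0, 20% growth, historical P/E of 20
sticker, mos = sticker_price(2.0, 0.20, 20)   # sticker ~ 61.2, MOS ~ 30.6
```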
First M: Does the business have meaning to you? – Finding the candidate excellent stocks
Megatrends: Knowledge Economy; Globalization; The Internet; Consolidation; Branding; Demographics; Outsourcing; Convergence
Emerging Industries: Technology; Health Care; Alternative Energy; Media/Education; Business/Consumer Services
Hot Areas for Future Growth: Web 2.0; Online Advertising – One-on-One Marketing; On Demand – Software as a Service; Just-For-Me Media; The Phone is My Life; The ABCs (and the three Gs) of Biotechnology; Digital Doctor; Healthy, Wealthy, and Wise; Education in the Knowledge Economy; The Power of Women; Simply the Best – Premium Brands; Minority to Majority; Safe and Secure; Alternative Energy; Nanotechnology
Three Circle Method: Passion – What do you love to do? Money – Where do you earn/spend money? Talent – What are you good at?

Second M: Does the business have a wide Moat? – Predicting the candidate stock's long-term future using the Big Five numbers
Calculate Return on Investment Capital (ROIC): at least 10 percent per year, averaged over the last ten years; check 1. the ten-year average, 2. the past five-year average, 3. last year's value
Calculate the four growth rates (Equity, EPS, Sales, and Cash): each must exceed 10 percent per year on average over the last ten years; make sure the rates of growth are not slowing down

Figure 1: Procedure to identify and evaluate long-term stocks.
Third M: Does the business have great Management? – Identifying the great growth companies
Phil Town's concept: 1. Great CEO – an honest, owner-oriented, and driven person who is out there thinking about how to make the investor money. 2. The most important qualities of a Level Five leader: (1) owner-oriented; (2) driven. 3. Little tricks: (1) checking insider trading activity; (2) considering the CEO's compensation.
Michael Moe's concept: 1. People – follow the leader. 2. Product – what's the claim to fame? 3. Potential – how big could this become? 4. Predictability – how visible is the growth?

Fourth M: Does the business have a big Margin of Safety? – Valuation methodology
Verify by: 1. Discounted Cash Flow formula; 2. P/E to Growth (P/E/G); 3. Price to Sales (P/S)
Finding the Margin of Safety (MOS): to get a big MOS, buy the business for 50 percent off the sticker price
The right time to buy or sell: use three technical tools – MACD, Stochastics, and Moving Average. When all three tools say "buy", it's time to get in; when all three say "sell", it's time to get out.

Figure 1: Continued.

3
Numerical study – example of the Taiwan stock market
Shi-Zha and Xin are a couple who decided that if they really wanted to retire comfortably after 20 years, they would have to do more with their money than just compound it in a treasury bond. Shi-Zha is a professor in a Mechanical and Automation department and Xin is an automotive engineer. Shi-Zha and Xin used the financial model constructed in this paper in 2007 and tried to find good companies to invest in. The detailed investment study process is described as follows:
3.1 First M: Meaning
Shi-Zha and Xin thought about the three circle method – Passion, Talent, and Money – and coupled it with the megatrends, emerging industries, and hot areas for future growth to decide what kind of business they would be proud to invest in. The results of the three circle method analysis are shown in Figure 2. Shi-Zha and Xin noticed right away that Research appeared in all three circles, while shipbuilding, vehicles, and energy were in the intersection of a megatrend (globalization), an emerging industry (alternative energy), and a hot area for future growth (alternative energy). These three industries encompass 158 businesses that Shi-Zha and Xin can look at on the Market Observation Post System (MOPS). So far, all 158 businesses look wonderful to Shi-Zha and Xin, because all of them have some meaning attached to them.
Passion (What do you love to do?): Travel/Food; Investment (stocks/mutual funds); Exercise; Research (shipbuilding, vehicles, energy, and national defence)

Money (Where do you earn/spend money?): Money in – Education (Mechanical); Research (shipbuilding, vehicles, energy, and national defence); Investment (stocks/mutual funds). Money out – Travel/Food; Insurance; Kids' education; Shopping (books)

Talent (What are you good at?): Education (Mechanical and Automation); Research (shipbuilding, vehicles, energy, and national defence); Management

Figure 2: Three circle method for Shi-Zha and Xin.
3.2 Second M: Moat
Shi-Zha and Xin know that the moat is critical. They refer to the Five Moats (the Big Five numbers) to predict the future. All of the Big Five numbers should be equal to or greater than 10 percent per year for the last 10 years (Table 1). First, the ROIC values of the 158 business stocks are calculated; 31 stocks meet
Table 1: Requirement of the Big Five numbers. Each of the five numbers – Return on Investment Capital (ROIC, also ROC or ROI), Sales (revenue) growth rate, Earnings per Share (EPS) growth rate, Equity (book value, BVPS) growth rate, and Free Cash Flow (FCF) growth rate – must be at least 10% per year over the last ten years.

Table 2: The stocks meeting the ROIC requirement (annual ROIC for 2001–2006 with last-year, last-five-year, and last-six-year averages; individual values omitted here). The 31 qualifying stocks are: DEPO AUTO PARTS INDUSTRIAL (6605), FBT (4535), NAK SEALING TECHNOLOGIES CORPORATION (9942), Wistron NeWeb Corporation (6285), HOLUX Technology Inc. (3431), Hu Lane Associate Inc. (6279), MPI (6223), Motech Industries, Inc. (6244), Celxpert Energy Corporation (3323), BRIGHT LED ELECTRONICS CORP (3031), Richtek Technology Corp. (6286), POWERCOM CO., LTD (3043), ENERMAX (8093), Power Mate Technology Co., LTD (8109), O-TA PRECISION INDUSTRY CO., LTD. (8924), INTEGRATED SERVICE TECHNOLOGY, Inc. (3289), Super Dragon Technology Co., Ltd (9955), MACAUTO INDUSTRIAL CO., LTD. (9951), ELAN MICROELECTRONICS CORP (2458), HOLTEK SEMICONDUCTOR INC. (6202), AME, Inc (3188), Actron Technology Corporation (8255), Simplo Company, Ltd. (6121), TSTI (8099), Chenfull International Co., Ltd. (8383), GMT (8081), Topower Computer Industrial Co., Ltd. (3226), Sea Sonic Electronics Co., Ltd. (6203), POWERTECH INDUSTRIAL CO., LTD. (3296), WAH LEE INDUSTRIAL CORP. (3010), and Advanced International Multitech Co., Ltd. (8938).

Table 3: The stocks meeting the equity growth rate requirement (annual equity growth rate for 2001–2006; individual values omitted here): DEPO AUTO PARTS INDUSTRIAL (6605), Motech Industries, Inc. (6244), HOLUX Technology Inc. (3431), MPI (6223), Richtek Technology Corp. (6286), Actron Technology Corporation (8255), Simplo Company, Ltd. (6121), and GMT (8081).
the requirement (Table 2). Next, the Equity growth rate is checked, and eight stocks (Table 3) meet the requirement. Those eight stocks are then verified against the EPS growth rate, and six stocks (Table 4) pass. When the Sales growth rate of these six stocks is calculated (Table 5), all of them satisfy the requirement. Finally, on calculating the cash growth rate of these six stocks, only two stocks (Table 6) remain. Shi-Zha and Xin then record the Big Five numbers of both stocks together (Table 7).

Table 4: The stocks meeting the EPS growth rate requirement (annual EPS growth rate for 2001–2006; individual values omitted here): DEPO AUTO PARTS INDUSTRIAL (6605), Motech Industries, Inc. (6244), MPI (6223), Richtek Technology Corp. (6286), Simplo Company, Ltd. (6121), and GMT (8081).

Table 5: The stocks meeting the sales growth rate requirement (annual sales growth rate for 2001–2006; the same six stocks as in Table 4).

Table 6: The stocks meeting the free cash growth rate requirement: Motech Industries, Inc. (6244) and GMT (8081).

Table 7: The Big Five numbers of the excellent stocks – the 6-, 5-, 3-, and 1-year average growth rates of ROIC, equity, EPS, sales, and free cash flow for Motech Industries, Inc. (6244) and GMT (8081); individual values omitted here.
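The sequential screen recorded in Tables 2–7 can be summarised in code. The sketch below is illustrative only – the helper names and sample growth figures are invented, not the MOPS data used in the paper – but it reproduces the filtering order (ROIC, then equity, EPS, sales, and free cash flow growth) and the 10% floor:

```python
def passes_floor(annual_rates, floor=10.0):
    """A metric passes when its multi-year average meets the floor and the
    most recent year also meets it (a simple proxy for 'not slowing down')."""
    avg = sum(annual_rates) / len(annual_rates)
    return avg >= floor and annual_rates[-1] >= floor

def big_five_screen(stocks):
    """Filter stocks metric by metric, in the order used in the paper."""
    survivors = stocks
    for metric in ["roic", "equity", "eps", "sales", "fcf"]:
        survivors = {name: data for name, data in survivors.items()
                     if passes_floor(data[metric])}
    return survivors

# Invented annual rates (%), oldest to newest, for two fictitious stocks
stocks = {
    "A": {"roic": [16, 22, 17, 23, 21, 16], "equity": [13, 22, 18, 25, 20, 15],
          "eps": [22, 41, 38, 22, 27, 12], "sales": [14, 40, 25, 27, 14, 11],
          "fcf": [30, 25, 40, 20, 15, 12]},
    "B": {"roic": [9, 8, 7, 9, 8, 7], "equity": [5, 6, 4, 5, 6, 5],
          "eps": [2, 3, 1, 2, 3, 2], "sales": [4, 5, 3, 4, 5, 4],
          "fcf": [1, 2, 1, 2, 1, 2]},
}
survivors = big_five_screen(stocks)   # only "A" survives the five filters
```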
3.3 Third M: Management
After finding the two stocks, Shi-Zha and Xin need to know the fundamental information about both businesses. From the MOPS website of Taiwan, Shi-Zha and Xin can find how each business has been built and operated, the competitors of each business, the risks of each business, the management and operating strategy of each business, and – the most important point – the senior executives of each business. Shi-Zha and Xin apply the four Ps (people, product, potential, and predictability) to integrate the information they have gathered, to differentiate the constitution of each business, and then to find the excellent long-term growing stock.
3.4 Fourth M: Margin of Safety
To calculate the Margin of Safety price, Shi-Zha and Xin need to know the sticker price (intrinsic value). The calculation of the sticker price and the Margin of Safety price is shown in Table 8.
Table 8:
The calculation of the sticker price and the Margin of Safety.

Calculated item – Motech Industries Inc. / Global Mixed-mode Technology Inc.:
1. Current EPS (at 2007.12.29): 16.01 NT$ / 11.18 NT$
2. Estimated (future) EPS in ten years: 128 NT$ / 89.44 NT$
3. Estimated EPS growth rate for the next ten years (taken as the last ten years' equity growth rate): 22.2% / 21.21%
4. Estimated PE in ten years: 35.13 / 16.48
5. Minimum acceptable rate of return from this investment: 15% / 15%
6. Future market price: 4496.64 NT$ / 1473.97 NT$
7. Sticker price: 1124.16 NT$ / 368.49 NT$
8. Price of margin of safety: 562.08 NT$ / 184.25 NT$

4
Conclusion
This paper has constructed a financial model of long-term growing stocks in the Taiwan Stock Market. From the study presented in this research, the following conclusions can be drawn: 1. The financial model constructed from Town's and Moe's concepts is quite suitable for the Taiwan Stock Market. By calculating the Big Five numbers, excellent long-term growth companies can be found. In particular, when calculating the ROIC numbers, the inferior companies can be dropped first. 2. MOPS of Taiwan only offers the past six years of information; therefore, this paper modified the financial model to use the past six years of data rather than the past ten years. 3. Basically, the financial model constructed in this paper belongs to fundamental analysis (i.e. the firm-foundation theory). The key point is that fundamental analysis relies on some tricky forecasts of the extent and duration of future growth. The foundation of intrinsic value may thus be less dependable than is claimed.
4.
Before individual investors adopt the financial model of this paper, it is essential to bear three important caveats [3] in mind: (1) Expectations about the future cannot be proven in the present. (2) Precise figures cannot be calculated from undetermined data. (3) What is growth for the goose is not always growth for the gander.
References
[1] Town, P., Rule #1: The Simple Strategy for Successful Investing, Crown Publishers, 2006.
[2] Moe, M., Finding the Next Starbucks: How to Identify and Invest in the Hot Stocks of Tomorrow, Portfolio, 2006.
[3] Malkiel, B.G., A Random Walk Down Wall Street: The Time-tested Strategy for Successful Investing, W.W. Norton, 2007.
Section 6 Fluid flow
On the differences of transitional separated-reattached flows over leading-edge obstacles of varying geometries I. E. Abdalla Faculty of Technology, Department of Engineering, De Montfort University, Leicester, UK
Abstract
Large-eddy simulation (LES) of transitional separated-reattached flow over three different geometries – a square surface-mounted obstacle (referred to hereafter as the obstacle), a forward-facing step (FFS), and a square leading-edge plate aligned horizontally with the flow – has been performed using a dynamic sub-grid scale model. The Reynolds number, based on the uniform inlet velocity and the plate thickness or obstacle/step height, varies in the range 4.5–6.5 × 10³. The mean LES results for the three geometries compare reasonably well with the available experimental and DNS data. The obstacle and the FFS are characterised by an additional separated region upstream of the separation line compared with the square leading-edge plate, and this is thought to have led to some of the differences observed in both the flow topology and the turbulence spectra downstream of the leading edge for the three geometries. The spectra obtained using a standard Fourier transform at positions downstream of the leading edge of the plate clearly capture the characteristic shedding frequency, but not in the case of the obstacle and the FFS. However, the spectral content at locations within the upstream separated region for the obstacle and FFS indicates that the upstream bubble is unstable via the Kelvin-Helmholtz (K-H) mechanism, which might influence the spectral content, the instability mechanism, and the flow topology of the downstream separated region. Keywords: large-eddy simulation, transition to turbulence, coherent structures, shedding.
doi:10.2495/CMEM090301
1 Introduction
The physics of transitional and turbulent separated-reattached flows generated by leading-edge obstacles is relevant to many applications in engineering and the environment. The calculation of wind loads on structures, the spread of pollutants in the vicinity of buildings, the turbo-machinery industry, and the aerodynamics of road vehicles and aircraft are a few of these applications. Of interest in this paper are the transitional separated-reattached flows created by sharp leading-edge geometries: a square leading-edge plate aligned horizontally with the flow, an obstacle, and a FFS. The flow over a surface-mounted bluff body is complex when compared with other leading-body geometries such as the backward-facing step and the square leading-edge plate. The complication arises from an additional separation in the upstream region caused by the obstruction of the flow, which raises the questions of whether the upstream separated region behaves as a closed or an open bubble and what influence such a separation bubble has on the downstream separated-reattached flow. This study compares a few fundamental features associated with separated-reattached flows for the three geometries mentioned above.
2 Details of the numerical computation
The filtered Navier-Stokes equations are discretised on a staggered grid using the finite volume method. Motions smaller than the control volume are averaged out and are accounted for by a subgrid-scale model; a standard dynamic subgrid model in Cartesian coordinates has been employed in the present study. The explicit Adams-Bashforth scheme is used for the momentum advancement. The Poisson equation for pressure is solved using an efficient hybrid Fourier multigrid method. The spatial discretisation is second-order central differencing, which is widely used in LES owing to its non-dissipative and conservative properties. More details of the mathematical formulation and numerical methods can be found in Yang and Voke [1] and Abdalla et al. [2].
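The second-order Adams-Bashforth advancement mentioned above takes the form u^{n+1} = u^n + Δt(3/2 f^n − 1/2 f^{n−1}), where f denotes the discretised right-hand side of the momentum equation. A minimal sketch on a scalar model problem (illustrative only, not the solver's actual code; the first step is bootstrapped with forward Euler):

```python
import math

def adams_bashforth2(f, u0, dt, nsteps):
    """Explicit 2nd-order Adams-Bashforth: u_{n+1} = u_n + dt*(1.5*f_n - 0.5*f_{n-1}).
    The first step uses forward Euler to provide the second starting level."""
    u = u0
    f_prev = f(u)
    u = u + dt * f_prev                      # forward Euler startup
    for _ in range(nsteps - 1):
        f_curr = f(u)
        u = u + dt * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
    return u

# Model problem du/dt = -u, exact solution exp(-t); integrate to t = 1
u_end = adams_bashforth2(lambda u: -u, 1.0, 0.001, 1000)
```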
3 Flow configuration, mesh and boundary conditions
For the three simulations, a free-slip but impermeable boundary is applied at the lateral boundaries. In the spanwise direction, the flow is assumed to be statistically homogeneous and periodic boundary conditions are used. No-slip boundary conditions are used at all other walls. At the inflow boundary, a uniform velocity profile is applied; at the outflow boundary, a convective boundary condition is applied. The two-dimensionality of the flow was broken by means of a random perturbation (20% of the inflow velocity) applied for a limited number of time steps (exactly 250 time steps for the three simulations) at the very early stages of each simulation. For the obstacle case, the grid consists of 288 × 128 × 64 cells along the streamwise, wall-normal and spanwise directions respectively, with domain dimensions of 35h × 8h × 4h (h is the obstacle height).

Figure 1: The computational domain and mesh used for the obstacle geometry.

In terms of wall units based on the friction velocity downstream of reattachment at x/h = 27, the streamwise mesh sizes vary from ∆x+ = 6.77 to ∆x+ = 43.04, while ∆z+ = 10.625 and, at the wall, ∆y+ = 1.28. The time step used in this simulation is 4.75 × 10−6 s (0.001425 h/U0). The simulation ran for 129,000 time steps, equivalent to more than 5 flow passes through the domain (or residence times), to allow the transition and turbulent boundary layer to be established, i.e. the flow to reach a statistically stationary state. The averaged results were gathered over a further 249,900 steps, with a sample taken every 10 time steps (24,990 samples), averaged over the spanwise direction too, corresponding to more than 10 flow passes or residence times. The computational domain and mesh used for the obstacle case are shown in Figure 1 as an illustration of how the grid is stretched for proper resolution of the flow features. For the FFS, the grid consists of 320 × 220 × 64 cells along the streamwise, wall-normal and spanwise directions respectively, with domain dimensions of 25h × 8h × 4h (h is the step height). The time step used in this simulation is 1.5 × 10−6 s (0.010125 h/U0). The FFS case ran for a total of 404,000 time steps, with the sampling for the mean field started 100,000 steps after the start of the run. In terms of wall units based on the friction velocity downstream of reattachment at x/h = 23, the streamwise, wall-normal and spanwise mesh sizes are ∆x+ = 19.98, ∆z+ = 10.94 and, at the wall, ∆y+ = 1.135. For the blunt leading-edge flat plate, the grid consists of 256 × 212 × 64 cells along the streamwise, wall-normal and spanwise directions respectively, with domain dimensions of 25D × 16D × 4D, where D is the plate thickness.
In terms of wall units based on the friction velocity downstream of reattachment at x/xR = 2.5 (xR is the mean reattachment length), the streamwise mesh sizes vary from ∆x+ = 9.7 to ∆x+ = 48.5, with ∆z+ = 20.2 and, at the wall, ∆y+ = 2.1. The time step used in this simulation is 0.001885 D/U0. The simulation ran for 70,000 time steps to allow the transition and turbulent boundary layer to become established. The averaged results presented below were then gathered over a further 399,000 steps, with a sample taken every 10 time steps (39,900 samples), averaged over the spanwise direction too, corresponding to around 28 flow-through or residence times.
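The wall-unit spacings quoted for the three grids follow from the friction velocity u_τ = (τ_w/ρ)^(1/2), with ∆x+ = u_τ ∆x/ν. A small sketch with illustrative values (these numbers are not the simulation's data):

```python
import math

def friction_velocity(dudy_wall, nu, rho=1.0):
    """u_tau = sqrt(tau_w / rho), with tau_w = rho * nu * (dU/dy) at the wall."""
    tau_w = rho * nu * dudy_wall
    return math.sqrt(tau_w / rho)

def wall_units(spacing, u_tau, nu):
    """Convert a physical mesh spacing into wall units: delta+ = u_tau*spacing/nu."""
    return u_tau * spacing / nu

# Illustrative values: air-like viscosity and a made-up wall velocity gradient
nu = 1.5e-5                                  # kinematic viscosity, m^2/s
u_tau = friction_velocity(6000.0, nu)        # = sqrt(0.09) = 0.3 m/s
dx_plus = wall_units(1.0e-3, u_tau, nu)      # a 1 mm spacing gives dx+ = 20
```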
4 Results and discussion
The time-mean length of the separation region is an important parameter characterising separated-reattached flows. The method used here to determine the mean reattachment location is described in Hung et al. and involves determining the location at which the mean velocity Ū = 0 at the first grid point away from the wall. As an example, Figure 2 shows that the predicted mean reattachment length downstream of the leading edge for the obstacle is ≈ 15.5h. Bergeles and Athanassiadis [3] reported a value of xR/h = 11 for a turbulent boundary layer of thickness 0.48h, and Durst and Rastogi [4] reported xR/h = 16, also under turbulent boundary layer conditions. Similar scatter has been reported for the fence geometry: Tropea and Gackstatter [5] reported xR/h = 17 under transitional flow conditions, the DNS study of Orellano and Wengle [6] reported xR/h = 13.2 (12.8 for the LES with the Smagorinsky model), and Larsen [7] reported xR/h = 11.7 from experimental work conducted at a high turbulence intensity. Comparing the current LES results with those above, it is clear that the LES prediction is within the expected range for the current transitional flow. The predicted mean reattachment length downstream of the step is 8.1h. Ko's simulation [8] predicted this length as 5.5h, and the measured value from the Moss and Baker [9] experiment is 4.8h. Compared with these, the current LES appears to have over-predicted this parameter, but once again the difference is thought to be due to the high Reynolds number and the turbulent nature of the flows in the work cited here. However, the FFS flow bears many similarities to the square leading-edge geometry, for which the predicted mean reattachment length is 6.5D.
Castro and Epik [10] reported a value of 7.7D for a separated-reattached flow generated by a square leading edge flat plate at Re_D = 6500, which is comparable to both the FFS and the square leading edge plate.
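The reattachment criterion used above (the streamwise location where the near-wall mean velocity changes sign) can be sketched numerically. This is an illustrative reconstruction, not the authors' code; the near-wall velocity profile below is synthetic and made up purely to demonstrate the zero-crossing search.

```python
import numpy as np

def reattachment_length(x, u_wall):
    """Locate x where the time-averaged near-wall velocity crosses zero
    from negative (reversed flow) to positive, with linear interpolation
    between the two bracketing grid points."""
    for i in range(len(x) - 1):
        if u_wall[i] < 0.0 <= u_wall[i + 1]:
            frac = -u_wall[i] / (u_wall[i + 1] - u_wall[i])
            return x[i] + frac * (x[i + 1] - x[i])
    raise ValueError("no reattachment point found")

# synthetic near-wall mean velocity: reversed flow up to x/h = 15.5
x = np.linspace(0.0, 20.0, 201)     # x/h
u = 0.02 * (x - 15.5)               # crosses zero at x/h = 15.5
print(reattachment_length(x, u))    # ≈ 15.5
```

In practice the profile would be the spanwise- and time-averaged streamwise velocity at the first off-wall grid point, as in Figure 2.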
Figure 2: Profile of the mean velocity Ū at the first grid point away from the wall.
Figure 3: Obstacle flow: profiles of mean streamwise velocity Um/U0 at six streamwise locations measured from the separation line (leading edge). Left to right x/xR = 0.05, 0.2, 0.4, 0.6, 0.8, 1.025. Also shown are measurements by Tropea and Gackstatter [5] (triangles), Larsen [7] (squares) and the DNS data of Orellano and Wengle [6] (circles) at Re = 3,000.

4.1 Mean and rms velocities

The current LES results for the obstacle and FFS are compared with relevant experimental and DNS data obtained from similar previous studies. Figure 3 compares the mean streamwise velocity distribution U/U0 at six locations downstream of the obstacle leading edge with the experimental data of Tropea and Gackstatter [5] (available at only three locations), Larsen [7] and the DNS data of Orellano and Wengle [6]. The results show good agreement with the data of Larsen [7] and the DNS data of Orellano and Wengle [6]. The free-stream velocities of the data from Tropea and Gackstatter [5] are larger than those predicted by the LES and the other two data sets, and peak at lower y-values. This difference could be attributed to the blockage ratio used by Tropea and Gackstatter [5], which is very low (2, compared with 5 in the case of Orellano and Wengle [6] and 8 for the current LES). Profiles of the rms streamwise velocity, urms, normalised by U0, at the same six stations are shown in Figure 4. The agreement between the LES results and the data of Larsen [7] and the DNS data of Orellano and Wengle [6] is encouraging. No measured data were presented by Tropea and Gackstatter [5]. It is worth mentioning that data from the FFS and the square leading edge plate (not shown here) show a similar agreement with comparable experimental and computational data.
4.2 Differences in the flow field for the three cases

As mentioned above, the main difference between the obstacle and FFS on the one hand and the square leading edge plate on the other is the upstream separated region, present in the first two and absent in the latter. Therefore, the main question raised here is: what will be the influence of the separated
Figure 4: Obstacle flow: profiles of mean streamwise turbulent intensity urms /U0 at six streamwise locations measured from the separation line (leading edge). Left to right x/xR = 0.05, 0.2, 0.4, 0.6, 0.8, 1.025. Also shown are measurements by Larsen [7] (square) and the DNS data of Orellano and Wengle [6] (circle) at Re = 3,000.
region upstream of the obstacle and FFS on the bubble formed downstream of the leading edge for these two geometries? Another interesting question is how the flow dynamics associated with the obstacle and FFS compare with those of separated-reattached boundary layers caused by a similar leading edge geometry but with no upstream separated region, such as the flow associated with the square leading edge plate. To address these points, the spectra of the flow, both upstream of the FFS and the obstacle and in the separated region downstream of the leading edge for the three geometries, will be examined first, followed by flow visualisation to study the flow structure both upstream (for the FFS and the obstacle) and downstream for the three geometries.

4.2.1 Turbulence spectra

Extensive data for the obstacle, FFS and square leading edge plate at different locations, both in the upstream (for the obstacle and FFS) and downstream separated regions, have been collected and processed using both the conventional Fourier transform and wavelet spectra (not shown here). For more specifics about data locations and other relevant details the reader is referred to [11] and [2]. Figures 5a and b show the spectra of the streamwise velocity u and the wall-normal component v at a point immediately upstream of the separation line for the obstacle flow. The spectra obtained using the Fourier transform clearly show a sharp frequency peak (band) centred at approximately 105 Hz for both velocity components, equivalent to a normalised value of 5.425 U0/xR. In almost all the work done on separated-reattached flows, the low-frequency peak in the region close to the separation line is attributed to the flapping of the shear layer.
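The spectral analysis described above can be sketched as follows. This is an illustrative reconstruction, not the authors' post-processing: the sampling rate and velocity record are synthetic, and the scales U0, xR and h are hypothetical placeholders chosen so that the normalised peak frequency reproduces the values quoted in the text.

```python
import numpy as np

# Synthetic velocity signal containing a 105 Hz component plus weak noise.
fs = 2000.0                        # sampling frequency, Hz (assumed)
t = np.arange(0.0, 4.0, 1.0 / fs)
rng = np.random.default_rng(0)
u = 0.1 * np.sin(2 * np.pi * 105.0 * t) + 0.01 * rng.standard_normal(t.size)

# Simple periodogram via the real FFT; locate the dominant frequency.
spec = np.abs(np.fft.rfft(u - u.mean()))**2
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
f_peak = freqs[np.argmax(spec)]            # ≈ 105 Hz

# Normalisations used in the paper (placeholder scales, not measured data):
U0 = 1.0                                   # assumed free-stream velocity scale
xR = 5.425 / 105.0 * U0                    # chosen so f*xR/U0 matches 5.425
h = 0.242 / 105.0 * U0                     # chosen so St = f*h/U0 matches 0.242
print(f_peak, f_peak * xR / U0, f_peak * h / U0)
```

A production analysis would use windowed, averaged spectra (e.g. Welch's method) on the actual LES time series rather than a raw periodogram.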
However, the normalised value of the frequency in the case of the obstacle flow (5.425 U0/xR) is much higher than the corresponding value of what is termed the
low-frequency motion observed in separated-reattached flows over different geometries, which lies in the range 0.1 U0/xR ≤ f ≤ 0.18 U0/xR. Hence, it cannot be attributed to the flapping of the separated boundary downstream of the leading edge of the obstacle. The only plausible explanation for this amplified frequency is the K-H instability in the shear layer that forms as a result of the small upstream separated region. In fact, the Strouhal number based on the obstacle height, the free-stream velocity and the observed frequency (St = f h/U0) is St = 0.242, which lies in the range 0.225 ≤ St ≤ 0.275 reported by Abdalla and Yang [11] for the K-H instability. This provides further evidence that the observed frequency in the current case is due to the K-H instability mechanism. Extensive data for both the obstacle and FFS downstream of the leading edge were processed, and the spectra show no trace of any amplified frequency. An example of such spectra is shown in Figures 5c and d, which correspond to the streamwise velocity at x/xR ≈ 0.5 and the centre of the shear layer for the obstacle and FFS, respectively. In comparison, most of the data collected from the square leading edge plate simulation downstream of the leading edge show a peak
Figure 5: (a) Obstacle axial velocity at x/h = −0.25, y/h = 1.05; (b) obstacle wall-normal velocity at x/h = −0.25, y/h = 1.05; (c) obstacle axial velocity at x/xR = 0.5; (d) FFS axial velocity at x/xR = 0.5; (e) square leading edge pressure at x/xR = 0.35; (f) square leading edge pressure at x/xR = 0.75.
band of frequency, the normalised value of which is in the range 0.6 ≤ f xR/U0 ≤ 0.8. A sample of this is shown in Figures 5e and f, corresponding to the pressure spectra at x/xR ≈ 0.5 and 0.75 and the centre of the shear layer. The centre of the shear layer is defined as the y-location where the rms value of the streamwise velocity (urms) attains its maximum. This represents the first difference between separated boundary layers on sharp leading edge geometries with an upstream separated region (such as the obstacle and FFS) and those without such a region, such as the square leading edge plate. It is clear that the K-H instability that dominates the upstream separated region has an influence on the spectral content of the separated region downstream of the leading edge for the obstacle and FFS. The disappearance of the shedding frequency from the spectra of the downstream separated region in the obstacle and FFS cases indicates that transition to turbulence may occur much faster, and that the turbulence intensity upstream of the leading edge is higher, compared with the square leading edge plate flow where such a frequency is quite apparent. This might have an effect on the nature of the instability mechanism in the downstream separated region as well as on the flow topology. However, it remains unclear how, and through which mechanisms, the upstream region influences the downstream region, and a separate study focusing on this point is necessary.

4.2.2 Flow structure

Figure 6 shows low-pressure isosurfaces visualising the flow topology upstream and shortly downstream of the leading edge for the obstacle flow. The FFS shows similar features and is not included here. Figure 6 displays three main features associated with the flow structure upstream of the obstacle and FFS.
The first feature, common to both the obstacle and FFS flows, is the existence of a quite distorted 2D structure, mainly developed at x/h = −1.0, which exhibits clearly undulating edges. Shortly downstream of the leading edge, the boundary layer rolls up, leading to the formation of 2D K-H vortices that convect downstream.
Figure 6: Flow structure of the upstream separated region of the obstacle.
The second feature displayed by the flow structure upstream of the two geometries is the existence of a 2D structure spanning the computational domain and attached to the vertical side of the obstacle/FFS immediately below the leading edge. This structure seems to be of purely 2D nature when viewed from above, but when looked at from below it appears very unstable and displays a non-planar (wavy) edge. The reason for this waviness is what could be the third feature of the flow upstream of the obstacle/FFS: the development of vertical rib-like (streaky) structures that seem to develop from the structure itself, or perhaps connect it to the small 3D structures at the upstream corner of the obstacle/FFS. At the lower corner there exists a smaller-scale structure, most likely a result of the interaction (collision) with the vertical wall of the 2D structure that develops at x/h = −1, as described above. Most importantly, the structure attached to the vertical side of the obstacle/FFS (below the leading edge) gives the impression that there is no interaction at all between it and the flow downstream of the obstacle/FFS edge. This would mean that the upstream separation bubble is a closed one and behaves in a way typical of a 2D bubble flow. However, on some occasions the 2D structure seems to be broken up by the action of the longitudinal vertical rib-like structures. This may indicate that parcels of fluid from the upstream separation bubble could be released into the downstream separated region, rendering the upstream bubble an open one. Downstream of the leading edge of the three geometries, the flow topology is characterised by 2D K-H rolls formed as a result of the roll-up of the laminar boundary layer.
The differences are associated with the scenarios of the evolution of these 2D coherent structures downstream of the leading edge and their eventual break-up into smaller 3D structures, leading to a fully turbulent flow. Figures 7a and b show low-pressure isosurfaces at an arbitrary instant in time, displaying the flow structure downstream of the obstacle and the square leading edge plate, respectively. It is worth mentioning that extensive data for both the obstacle and FFS were processed, and all show features similar to those in Figure 7a. The flow structures clearly show that the 2D K-H rolls convect downstream and eventually disintegrate into smaller turbulent structures. For the square leading edge plate flow the picture is slightly different. As shown in Figure 7b, the 2D K-H rolls are shed shortly downstream of the leading edge and maintain their 2D nature while convecting downstream. At some stage, the continuously distorted spanwise K-H rolls are subjected to axial stretching and are eventually transformed into streamwise structures. It is reasonable to assume that the evolving streamwise vortices interact with the spanwise vortices by aligning more vorticity from the spanwise into the streamwise vortices, thus making the latter grow and become larger while degrading the coherence of the spanwise vortical rolls. In other words, the 2D Kelvin-Helmholtz rolls are transformed into distinct streamwise vortical tubes. The well-known Λ-shaped vortices, commonly associated with flat plate boundary layers, are clearly seen in the square leading edge plate flow.
Figure 7: Flow topology downstream of the leading edge: (a) the obstacle flow, (b) the square leading edge plate.

The differences in the downstream flow topology and the two distinct routes of evolution of the 2D K-H rolls strongly indicate that the nature of the turbulent flow and the instability mechanisms for the two types of flow are also distinct. It is also a strong indication that the K-H unstable upstream separated region associated with the obstacle and FFS has some influence on the downstream turbulence characteristics and flow topology.
5 Conclusion

A comparison of some aspects of separated-reattached flows over an obstacle, a FFS and a square leading edge plate indicates two fundamental differences. The characteristic shedding frequency captured in the square leading edge plate flow is not apparent in the obstacle and FFS flows. The flow topology downstream of the obstacle and FFS leading edge shows direct disintegration of the 2D K-H rolls into smaller 3D structures, whereas for the square leading edge plate flow the 2D K-H rolls transform first into distinct streamwise structures before breaking down into smaller 3D structures. It is most likely that these two differences represent an influence of the K-H unstable upstream separated region associated with the obstacle and FFS on the downstream region. A detailed quantitative analysis is required to quantify this influence.
References

[1] Yang, Z., Voke, P.R. Large-eddy simulation of boundary-layer separation and transition at a change of surface curvature. Journal of Fluid Mechanics, 439, 305–333, 2001.
[2] Abdalla, I.E., Cook, M.J., Yang, Z. Computational analysis and flow structure of a transitional separated-reattached flow over a surface-mounted obstacle and a forward-facing step. International Journal of Computational Fluid Dynamics, 13(1), 25–57, 2009.
[3] Bergeles, G., Athanassiadis, N. The flow past a surface-mounted obstacle. ASME Journal of Fluids Engineering, 105, 461–463, 1983.
[4] Durst, F., Rastogi, A.K. Turbulent flow over two-dimensional fences. In Turbulent Shear Flows 2, Springer-Verlag: Berlin, 218–231, 1980.
[5] Tropea, C.D., Gackstatter, R. The flow over two-dimensional surface-mounted obstacles at low Reynolds number. Journal of Fluids Engineering, 107, 489–494, 1984.
[6] Orellano, A., Wengle, H. Numerical simulation (DNS and LES) of manipulated turbulent boundary layer flow over a surface-mounted fence. European Journal of Mechanics B/Fluids, 19, 765–788, 2000.
[7] Larsen, P.S. Database on tc-2C and tc-2D fence-on-wall and obstacle-on-wall test cases. Report AFM-ETMA 95-01, ISSN 0590-8809, TU Denmark, 1995.
[8] Ko, S.H. Computation of turbulent flows over backward- and forward-facing steps using a near-wall Reynolds stress model. CTR Annual Research Briefs, Stanford University/NASA Ames, 75–90, 1993.
[9] Moss, W.D., Baker, S. Re-circulating flows associated with two-dimensional steps. Aeronautical Quarterly, 151–172, 1980.
[10] Castro, I.P., Epik, E. Boundary layer development after a separated region. Journal of Fluid Mechanics, 374, 91–116, 1998.
[11] Abdalla, I.E., Yang, Z. Numerical study of a separated-reattached flow on a blunt plate. AIAA Journal, 43, 2465–2474, 2005.
A second order method for solving turbulent shallow flows

J. Fe & F. Navarrina
Grupo de Métodos Numéricos en Ingeniería, University of A Coruña, Spain
Abstract

A second order finite volume model for the resolution of the two-dimensional shallow water equations with a turbulent term is presented. It is shown that, if a first order upwind method is used to discretize the hydrodynamic equations, a considerable amount of numerical viscosity (or diffusion) is produced. For this reason a second order method has been developed, which makes use of the mean gradient of the variables in a cell. To compare the first and second order methods, the Cavity Flow problem is used. Then a backward step problem is solved, using the k − ε turbulence model to calculate the turbulent viscosity at every point. The results are compared with experimental measurements and they confirm the good behavior of the model.

Keywords: finite volumes, shallow water equations, numerical viscosity, turbulent term, gradient mean values.
1 Introduction

The two-dimensional shallow water equations (2D-SWE) describe the behavior of free surface flows in which the ratio of the depth to the horizontal dimensions is small and the magnitude of the vertical velocity component is much smaller than that of the horizontal velocity components. This situation can be found, for instance, in the flow in channels and rivers. The 2D-SWE take the turbulence effects into account both through the frictional terms and through the second derivatives term. The latter may not be significant in many practical problems where only an estimate of the energy losses is needed. However, its inclusion may become very important for an accurate simulation of recirculating flows.
doi:10.2495/CMEM090311
In a first approach we discretized the hydrodynamic equations with a first order finite volume method. An important point when working with this method is to properly calculate the numerical flux at the cell edges. The upwinding of the flux term has proved to be a useful technique, but it produces a considerable amount of numerical diffusion. For this reason a second order method has been developed that makes use of the mean gradient of the velocity components in a cell. To compare the first and second order methods for uniform viscosity values, the Cavity Flow problem has been used with three viscosity values. Then a Backward Step problem has been solved and its results have been compared with experimental measurements. The eddy viscosity values at every point have been generated with the depth-averaged k − ε model. In this work: 1) we describe the first and second order hydrodynamic models; 2) we compare them on the Cavity Flow problem; 3) we combine the second order hydrodynamic model with the first order k − ε turbulence model, comparing the obtained results with experimental measurements.
2 The shallow water equations

The 2D-SWE system in conservative form is expressed as
$$\frac{\partial \mathbf{U}}{\partial t} + \frac{\partial \mathbf{F}_1}{\partial x} + \frac{\partial \mathbf{F}_2}{\partial y} = \mathbf{G}, \qquad (1)$$
the vector of unknowns $\mathbf{U}$ and the flux terms being
$$\mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix}, \qquad \mathbf{F}_1 = \begin{pmatrix} hu \\ hu^2 + \tfrac{1}{2}gh^2 \\ huv \end{pmatrix}, \qquad \mathbf{F}_2 = \begin{pmatrix} hv \\ huv \\ hv^2 + \tfrac{1}{2}gh^2 \end{pmatrix}, \qquad (2)$$
and the source term being
$$\mathbf{G} = \begin{pmatrix} 0 \\ gh(S_{0x} - S_{fx}) + S_{t1} \\ gh(S_{0y} - S_{fy}) + S_{t2} \end{pmatrix}. \qquad (3)$$
In the above expressions h is the fluid depth, u and v are the horizontal velocity components and g is the gravity acceleration. $S_{0x}$, $S_{0y}$ are the geometric slopes and $S_{fx}$, $S_{fy}$ are the friction slopes
$$S_{fx} = \frac{n^2 u \sqrt{u^2 + v^2}}{R_h^{4/3}}, \qquad S_{fy} = \frac{n^2 v \sqrt{u^2 + v^2}}{R_h^{4/3}}, \qquad (4)$$
where $R_h$ is the hydraulic radius. Finally, $S_{t1}$, $S_{t2}$ are the turbulent terms
$$S_{t1} = \frac{\partial}{\partial x}\left(2\nu_t h \frac{\partial u}{\partial x}\right) + \frac{\partial}{\partial y}\left(\nu_t h \left(\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\right)\right), \qquad (5)$$
$$S_{t2} = \frac{\partial}{\partial x}\left(\nu_t h \left(\frac{\partial v}{\partial x} + \frac{\partial u}{\partial y}\right)\right) + \frac{\partial}{\partial y}\left(2\nu_t h \frac{\partial v}{\partial y}\right), \qquad (6)$$
where the coefficient $\nu_t$ is a variable called eddy (or turbulent) viscosity.
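The algebraic content of eqns (1)-(4) can be assembled directly. The following is a minimal numerical sketch (not the paper's code): it builds the conserved variables, the fluxes and the gravity/friction part of the source term for a single state; the turbulent terms $S_{t1}$, $S_{t2}$ require velocity gradients and are left out here.

```python
import numpy as np

g = 9.81  # gravity acceleration, m/s^2

def swe_vectors(h, u, v, n, Rh, S0x, S0y):
    """Conserved variables U, fluxes F1, F2 (eqn 2) and the
    gravity/friction source G (eqns 3-4, turbulent terms omitted)."""
    U = np.array([h, h * u, h * v])
    F1 = np.array([h * u, h * u**2 + 0.5 * g * h**2, h * u * v])
    F2 = np.array([h * v, h * u * v, h * v**2 + 0.5 * g * h**2])
    speed = np.hypot(u, v)
    Sfx = n**2 * u * speed / Rh**(4.0 / 3.0)   # Manning friction slopes, eqn (4)
    Sfy = n**2 * v * speed / Rh**(4.0 / 3.0)
    G = np.array([0.0, g * h * (S0x - Sfx), g * h * (S0y - Sfy)])
    return U, F1, F2, G

U, F1, F2, G = swe_vectors(h=1.0, u=2.0, v=0.0, n=0.03, Rh=1.0, S0x=0.001, S0y=0.0)
print(F1)   # [2.0, 8.905, 0.0]
```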
3 Discretization of the equations

3.1 Construction of the finite volume mesh

The finite volumes used in this work are based on a triangular discretization of the domain (see Figure 1). For each node I, the barycenters of all the triangles that have the common vertex I, as well as the midpoints of the corresponding edges, are considered. The boundary $\Gamma_i$ of the cell $C_i$ is defined by these points. By $\Gamma_{ij} = AMB$ we represent the part of $\Gamma_i$ that is also part of $\Gamma_j$. The outward normal vector to $\Gamma_{ij}$ is $\boldsymbol{\eta}_{ij}$. The norm of $\boldsymbol{\eta}_{ij}$, $\|\boldsymbol{\eta}_{ij}\|$, is the length of the edge, and $\tilde{\boldsymbol{\eta}}_{ij} = (\tilde{\alpha}_{ij}, \tilde{\beta}_{ij})^T$ is the corresponding unit vector. The subcell $T_{ij}$ is the union of the triangles AMI and MBI.

Figure 1: Finite volumes construction.

3.2 Discretization of the hydrodynamic equations

At this point we wish to integrate the 2D-SWE, which results in
$$\int_{C_i} \frac{\partial \mathbf{U}}{\partial t}\, dA + \int_{C_i} \nabla \cdot \mathbf{F}\, dA = \int_{C_i} \mathbf{G}\, dA, \qquad (7)$$
where the operator $\nabla$ stands for $\left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}\right)$ and $\mathbf{F} = (\mathbf{F}_1, \mathbf{F}_2)$. If we apply the Gauss theorem to the flux term, we obtain
$$\int_{C_i} \frac{\partial \mathbf{U}}{\partial t}\, dA + \int_{\Gamma_i} \mathbf{F} \cdot \tilde{\boldsymbol{\eta}}\, dA = \int_{C_i} \mathbf{G}\, dA. \qquad (8)$$
The details of the application of the FVM to the 2D-SWE, making use of the upwind Van Leer Q-scheme [1], can be found in Fe et al. [2]. The discretized expression of the 2D-SWE that corresponds to node I is
$$\frac{\mathbf{U}_i^{n+1} - \mathbf{U}_i^n}{\Delta t}\, A_i + \sum_{j \in K_i} \|\boldsymbol{\eta}_{ij}\|\, \boldsymbol{\phi}_{ij}^n = \sum_{j \in K_i} A_{ij} \left(\boldsymbol{\psi}_{ij}^n + \boldsymbol{\psi}_{\nu ij}^n\right), \qquad (9)$$
in which $\mathbf{U}_i^n$ and $\mathbf{U}_i^{n+1}$ are approximations to the solution of eqn (1) within each cell $C_i$ at time steps $t^n$ and $t^{n+1}$. $A_i$ and $A_{ij}$ are the cell and subcell areas, and $K_i$ represents the set of neighboring nodes of I. The numerical flux $\boldsymbol{\phi}_{ij}^n$ is the approximation of $\mathbf{Z} = \mathbf{F} \cdot \tilde{\boldsymbol{\eta}}$ at $\Gamma_{ij}$, $j \in K_i$, at $t = t^n$, and it is given by
$$\boldsymbol{\phi}_{ij}^n = \frac{\mathbf{Z}(\mathbf{U}_i^n, \tilde{\boldsymbol{\eta}}_{ij}) + \mathbf{Z}(\mathbf{U}_j^n, \tilde{\boldsymbol{\eta}}_{ij})}{2} - \frac{1}{2}\left|\mathbf{Q}(\mathbf{U}_Q^n, \tilde{\boldsymbol{\eta}}_{ij})\right| (\mathbf{U}_j^n - \mathbf{U}_i^n). \qquad (10)$$
$\mathbf{Q}$ is the Jacobian matrix of $\mathbf{Z}$. $|\mathbf{Q}|$ is defined as $\mathbf{X} |\boldsymbol{\Lambda}| \mathbf{X}^{-1}$, where $|\boldsymbol{\Lambda}|$ is the diagonal matrix given by the absolute values of the eigenvalues of $\mathbf{Q}$ and $\mathbf{X}$ is the eigenvectors matrix of $\mathbf{Q}$. $\mathbf{U}_Q$ represents the vector of variables at the midpoint between I and J.

The numerical source in eqn (9) has two terms. The first of them is calculated as
$$\boldsymbol{\psi}_{ij}^n = (\mathbf{I} - |\mathbf{Q}|\, \mathbf{Q}^{-1})(\tilde{\mathbf{G}}_0 + \tilde{\mathbf{G}}_f), \qquad (11)$$
where the numerical geometric and friction slopes are, respectively,
$$\tilde{\mathbf{G}}_0 = \begin{pmatrix} 0 \\ g\, \dfrac{h_i^n + h_j^n}{2}\, \dfrac{H_j - H_i}{d_{ij}}\, \tilde{\alpha} \\ g\, \dfrac{h_i^n + h_j^n}{2}\, \dfrac{H_j - H_i}{d_{ij}}\, \tilde{\beta} \end{pmatrix}, \qquad \tilde{\mathbf{G}}_f = \begin{pmatrix} 0 \\ g h_i^n (-S_{fx})_i^n \\ g h_i^n (-S_{fy})_i^n \end{pmatrix}, \qquad (12)$$
$d_{ij}$ being the normal distance from I to $\Gamma_{ij}$. It can be noted that the numerical geometric slope $\tilde{\mathbf{G}}_0$ is upwinded [1], while the numerical friction slope $\tilde{\mathbf{G}}_f$ is discretized pointwise [3], which is a widespread method to treat this term. In the second term, the numerical source takes the form $\boldsymbol{\psi}_{\nu ij}^n = \tilde{\mathbf{G}}_t$, the numerical turbulent slope being
$$\tilde{\mathbf{G}}_t = \begin{pmatrix} 0 \\ \dfrac{\nu_{ti} + \nu_{tj}}{2}\, \dfrac{h_i^n + h_j^n}{2} \left( 2\, \dfrac{u_{xi}^n + u_{xj}^n}{2}\, \tilde{\alpha} + \left( \dfrac{u_{yi}^n + u_{yj}^n}{2} + \dfrac{v_{xi}^n + v_{xj}^n}{2} \right) \tilde{\beta} \right) \\ \dfrac{\nu_{ti} + \nu_{tj}}{2}\, \dfrac{h_i^n + h_j^n}{2} \left( \left( \dfrac{v_{xi}^n + v_{xj}^n}{2} + \dfrac{u_{yi}^n + u_{yj}^n}{2} \right) \tilde{\alpha} + 2\, \dfrac{v_{yi}^n + v_{yj}^n}{2}\, \tilde{\beta} \right) \end{pmatrix}. \qquad (13)$$
The eddy viscosities $\nu_{ti}$ and $\nu_{tj}$ have no time indices since it is assumed that they are constant at each node throughout the hydrodynamic computational process. The values $u_{xi}^n$, $u_{yi}^n$, $v_{xi}^n$, $v_{yi}^n$ represent the averages of the derivatives of u, v over the cell $C_i$ at time $t = t^n$:
$$u_{xi}^n = \left\langle \frac{\partial u}{\partial x} \right\rangle_{C_i, t^n}, \qquad u_{yi}^n = \left\langle \frac{\partial u}{\partial y} \right\rangle_{C_i, t^n}, \qquad (14)$$
$$v_{xi}^n = \left\langle \frac{\partial v}{\partial x} \right\rangle_{C_i, t^n}, \qquad v_{yi}^n = \left\langle \frac{\partial v}{\partial y} \right\rangle_{C_i, t^n}. \qquad (15)$$
These averaged values can be calculated from the values at the cell edges, and a way to calculate them is shown in Fe et al. [2]. Equation (9) thus provides a time explicit method to calculate the variables, at every node I and at every time step, from the previous time step values at node I and its neighboring nodes.
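The upwind flux of eqn (10) can be illustrated in a reduced setting. The sketch below (an assumption-laden illustration, not the 2D scheme of the paper) applies the same construction to the 1D restriction of the system, with variables (h, hu): $\phi = \tfrac{1}{2}(Z_i + Z_j) - \tfrac{1}{2}|Q(U_Q)|(U_j - U_i)$, where $|Q| = X|\Lambda|X^{-1}$ is built from the eigendecomposition of the flux Jacobian evaluated at the midpoint state.

```python
import numpy as np

g = 9.81

def flux(Uv):
    """1D shallow water flux Z(U) for U = (h, hu)."""
    h, hu = Uv
    return np.array([hu, hu**2 / h + 0.5 * g * h**2])

def abs_jacobian(Uv):
    """|Q| = X |Lambda| X^{-1} from the eigenstructure of the Jacobian."""
    h, hu = Uv
    u = hu / h
    c = np.sqrt(g * h)
    lam = np.array([u - c, u + c])               # eigenvalues of Q
    X = np.array([[1.0, 1.0], [u - c, u + c]])   # right eigenvectors
    return X @ np.diag(np.abs(lam)) @ np.linalg.inv(X)

def phi(Ui, Uj):
    """Upwind numerical flux of eqn (10), 1D restriction."""
    UQ = 0.5 * (Ui + Uj)                         # midpoint state
    return 0.5 * (flux(Ui) + flux(Uj)) - 0.5 * abs_jacobian(UQ) @ (Uj - Ui)

Ui = np.array([1.0, 0.5])
Uj = np.array([0.8, 0.3])
print(phi(Ui, Uj))
```

When the two states coincide, the dissipation term vanishes and the numerical flux reduces to the exact flux; the second term is precisely the upwind dissipation responsible for the numerical viscosity discussed in Section 4.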
4 The second order model

4.1 The cavity flow test: first results

Now we are going to test the first order model by using the Cavity Flow problem, a classical benchmark for the two-dimensional Navier-Stokes equations (2D-NSE). The 2D-SWE are obtained from the three-dimensional Navier-Stokes equations and are different from the 2D-NSE. The latter do not take into account the third dimension in space, but only the velocities and pressures of a theoretical planar flow, whereas the 2D-SWE consider the third dimension by means of the variable depth, the pressure being expressed as a function of the depth. However, both systems produce very similar results with uniform viscosity values, and the results of this test may be very useful to assess the ability of the proposed model to accurately represent viscous flows. The problem consists in obtaining the velocity field in a square domain of 1 × 1 m². A regular mesh of 81 × 81 nodes is employed. The boundary conditions, of
Dirichlet type, are: u = 1, v = 0 at the upper side and u = v = 0 (the no-slip condition) at the other three.

Figure 2: First order model: a) ν = 0.01; b) ν = 0.001; c) ν = 0.0001.

In the Cavity Flow problem, the form of the streamlines depends on the flow Reynolds number Re. Taking the values V = 1 and L = 1 for the velocity and length scales, the resulting Re is
$$Re = \frac{VL}{\nu} = \frac{1}{\nu}, \qquad (16)$$
which allows us to simulate Reynolds numbers of 100, 1000 and 10000 by varying the viscosity value. The resulting streamlines are shown in Figure 2, and it can be observed that the difference between the three cases is much smaller than what should be expected with a difference in viscosity values of almost 0.01 m²/s. The reason must be found in the high numerical viscosity (or diffusion) inherent in first order upwind methods. This numerical viscosity is known to depend on the mesh size and, even with a reasonably fine mesh such as the one used here, its effects are unacceptable. A method to reduce it within a first order scheme has already been proposed [2], but it reduces the stability as well. For this reason we have developed a second order model.

4.2 The mean gradient

An important aspect when working with the finite volume method is to properly calculate the numerical flux at the cell edges. In the first order method described before, it was supposed that the values of u, v were uniform within each of the two cells $C_i$ and $C_j$ having a common interface $\Gamma_{ij}$. The main point in obtaining a second order model is to employ instead a linear approximation for the variables within each cell. To this end we have used the mean gradient of the velocity components, which was calculated in Section 3. This mean gradient is obtained from the values at the cell edges, thus involving the variable values at the adjacent cells. Let us take, for instance, the variable u and call $u_i$, $u_j$ its uniform values within $C_i$ and $C_j$. The linear reconstruction at each side of $\Gamma_{ij}$ produces the following values of u:
- Side $C_i$: $u_i^* = u_i + \left\langle \nabla u \right\rangle_{C_i} \cdot (\mathbf{r}_M - \mathbf{r}_I)$.
- Side $C_j$: $u_j^* = u_j + \left\langle \nabla u \right\rangle_{C_j} \cdot (\mathbf{r}_M - \mathbf{r}_J)$,
Figure 3: a) First order scheme, b) Second order scheme.
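The comparison in Figure 3 can be reproduced numerically: with the linear reconstruction described above, a smooth field yields matching interface values $u_i^* = u_j^*$, whereas the first order scheme sees the full jump $u_j - u_i$. A minimal sketch with invented geometry (cell nodes rI, rJ, interface midpoint rM):

```python
import numpy as np

def interface_values(ui, grad_i, rI, uj, grad_j, rJ, rM):
    """Extrapolate each cell value linearly to the interface midpoint M
    using the mean gradient of the cell (second order reconstruction)."""
    u_star_i = ui + np.dot(grad_i, rM - rI)   # side Ci
    u_star_j = uj + np.dot(grad_j, rM - rJ)   # side Cj
    return u_star_i, u_star_j

# sample a smooth field u(x, y) = x + 2y on two neighboring cells
rI, rJ = np.array([0.0, 0.0]), np.array([1.0, 0.0])
rM = 0.5 * (rI + rJ)                          # interface midpoint
grad = np.array([1.0, 2.0])                   # exact mean gradient of the field
ui, uj = 0.0, 1.0                             # nodal values of u
u_i, u_j = interface_values(ui, grad, rI, uj, grad, rJ, rM)
print(u_j - u_i)   # 0.0: the reconstruction removes the interface jump
```

Since the upwind dissipation of eqn (10) is proportional to the interface jump, reducing $u_j^* - u_i^*$ directly reduces the numerical viscosity.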
Figure 4: Second order model: a) ν = 0.01; b) ν = 0.001; c) ν = 0.0001.
Figure 5: Reference streamlines: a) ν = 0.01; b) ν = 0.001; c) ν = 0.0001.

M ∈ Γ_ij being the midpoint between I and J. In this way the difference between the u values on both sides of the interface, which is responsible for the added numerical viscosity, is reduced (see Figure 3). The streamlines obtained using the second order scheme are shown in Figure 4, and we see that they agree very well with the reference streamlines taken from Vellando et al. [4] (Figure 5).

4.3 The turbulence model

As has been shown, the 2D study of viscous fluids with Reynolds numbers below 10,000 can be carried out by using constant values for the viscosity. To represent real turbulent flows, however, it is necessary to calculate the turbulent viscosity ν_t at every point. To obtain it we have used the depth-averaged k − ε turbulence
model [5], in which $\nu_t$ is calculated as
$$\nu_t = c_\mu \frac{k^2}{\varepsilon}, \qquad (17)$$
k and ε being the turbulence kinetic energy and the dissipation rate per unit mass, respectively. They are given by the transport equations
$$\frac{\partial k}{\partial t} + u \frac{\partial k}{\partial x} + v \frac{\partial k}{\partial y} = \frac{\partial}{\partial x}\left(\frac{\nu_t}{\sigma_k} \frac{\partial k}{\partial x}\right) + \frac{\partial}{\partial y}\left(\frac{\nu_t}{\sigma_k} \frac{\partial k}{\partial y}\right) + P_h + P_{kV} - \varepsilon, \qquad (18)$$
$$\frac{\partial \varepsilon}{\partial t} + u \frac{\partial \varepsilon}{\partial x} + v \frac{\partial \varepsilon}{\partial y} = \frac{\partial}{\partial x}\left(\frac{\nu_t}{\sigma_\varepsilon} \frac{\partial \varepsilon}{\partial x}\right) + \frac{\partial}{\partial y}\left(\frac{\nu_t}{\sigma_\varepsilon} \frac{\partial \varepsilon}{\partial y}\right) + c_{1\varepsilon} \frac{\varepsilon}{k} P_h + P_{\varepsilon V} - c_{2\varepsilon} \frac{\varepsilon^2}{k}. \qquad (19)$$
A way to implement these equations together with the hydrodynamic equations has been fully described in Fe et al. [6].

4.4 Measurement of the velocity and turbulent kinetic energy

The model has been applied to a real channel, and experimental measurements of the velocity components have also been made, from which we have calculated the turbulent kinetic energy k. The experimental data were obtained at the Hydraulics Laboratory of the Civil Engineering School of A Coruña with SONTEK Micro Acoustic Doppler Velocimeters, which produce only a small distortion of the velocity field. They are highly accurate (10⁻³ m/s) and take between 80 and 250 measurements per second. As the output values have a maximum frequency of 50 Hz, each one represents an average of several measurements. In our case, 2500 values of the velocity components were obtained at every point, during a period of 100 s. The velocimeters were placed at a distance of 9.36 cm from the bottom, at 368 points. The system gives the three mean velocities u, v, w and the three standard deviations σx, σy, σz. Since the model is two-dimensional, only the x and y deviations have been taken into account, which seems more consistent with the previous hypotheses. The experimental value of the turbulent kinetic energy is then calculated as
$$k = \frac{1}{2}\left(\sigma_x^2 + \sigma_y^2\right). \qquad (20)$$
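Eqns (17) and (20) can be sketched numerically. The velocity record below is synthetic (the real data are the 2500-sample ADV records described above), and the dissipation rate used to evaluate $\nu_t$ is an arbitrary placeholder, since ε comes from the transport equation (19) rather than from the measurements.

```python
import numpy as np

c_mu = 0.09                                    # usual k-epsilon constant

# synthetic velocity record: 2500 samples, as in the measurement campaign
rng = np.random.default_rng(1)
u = 0.30 + 0.05 * rng.standard_normal(2500)    # streamwise samples, m/s
v = 0.00 + 0.04 * rng.standard_normal(2500)    # transverse samples, m/s

k = 0.5 * (np.std(u)**2 + np.std(v)**2)        # eqn (20), horizontal deviations
eps = 1.0e-3                                   # dissipation rate, assumed value
nu_t = c_mu * k**2 / eps                       # eqn (17)
print(k, nu_t)
```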
4.5 Description of the installation and boundary conditions

The experimental domain consisted of a horizontal glass channel with an abrupt expansion, commonly known as a Backward Step. The dimensions can be seen in Figure 6. For the experimental process a discharge of Q = 20.2 l/s was employed, and a depth of h = 24.2 cm was measured at the end of the channel. These were used as the upstream and downstream boundary conditions of the numerical model, respectively. At the walls, the friction velocity condition described in Fe et al. [6] was considered.
Computational Methods and Experimental Measurements XIV

Figure 6: Dimensions of the domain (measurement zone; lengths 4.5 m, 1 m and 3.5 m; widths 0.206 m, 0.297 m and 0.503 m).
Figure 7: First order model. Streamlines. General view.
Figure 8: Second order model. Streamlines. General view.
4.6 Results

The model was first applied with a first order discretization. The influence of using the k-ε model to generate the νt values was insignificant, due again to the numerical viscosity introduced by the upwinding, and the predicted reattachment length is too short (Figure 7). The second order model was then used. The resulting streamlines reproduce much more closely those obtained from the experimental measurements. They are presented in Figure 8 in a general view, and in Figure 9 in an enlarged view for comparison with the experimental results (Figure 10). The computational results for k are shown in Figure 11. The levels are well predicted, but their position is less accurately captured. Due to the simplifying hypotheses made for the walls, the model fails to reproduce the high k levels near the right wall of the channel.
Figure 9: Second order model. Streamlines and velocity vectors.

Figure 10: Experimental measures. Streamlines and velocity vectors.

Figure 11: Second order model. Turbulent kinetic energy (contour levels k = 0.0025 to 0.015).

Figure 12: Experimental measures. Turbulent kinetic energy (contour levels k = 0.0025 to 0.015).

5 Conclusions

We have shown that the inclusion of the turbulent term improves the accuracy of the representation of 2D recirculating viscous flows, provided that the numerical viscosity produced by the upwinding is reduced. To this end, a second order discretization of the velocity components has been proposed, achieving an accurate resolution of the velocity field in the three Cavity Flow tests solved. The second order model, in combination with a k-ε model, has produced a remarkable improvement over the first order model in a Backward Step problem. The proposed model has correctly calculated the position and length of the eddy, and the computed k levels are close to those obtained from the experimental measures.
Acknowledgements

This work has been partially supported by Grants # DXPCTSU-CEOU-2007/009 and PGDIT06TAM 118001PR of the Xunta de Galicia and by research fellowships of the Universidad de A Coruña and the Fundación de la Ingeniería Civil de Galicia.
References
[1] Bermúdez, A., Dervieux, A., Desideri, J. & Vázquez, M.E., Upwind schemes for the two dimensional shallow water equations with variable depth using unstructured meshes. Comput. Methods Appl. Mech. Eng., 155, pp. 49–72, 1998.
[2] Fe, J., Cueto-Felgueroso, L., Navarrina, F. & Puertas, J., Numerical viscosity reduction in the resolution of the shallow water equations with turbulent term. International Journal for Numerical Methods in Fluids, 58, pp. 781–802, 2008.
[3] Brufau, P., Vázquez-Cendón, M.E. & García-Navarro, P., A numerical model for the flooding and drying of irregular domains. International Journal for Numerical Methods in Fluids, 39, pp. 247–275, 2002.
[4] Vellando, P., Puertas, J. & Colominas, I., SUPG stabilized finite element resolution of the Navier-Stokes equations. Applications to water treatment engineering. Computer Methods in Applied Mechanics and Engineering, 191, pp. 5899–5922, 2002.
[5] Rodi, W., Turbulence models and their application in hydraulics. A state-of-the-art review, IAHR monograph, Balkema: Rotterdam, 1993.
[6] Fe, J., Navarrina, F., Puertas, J., Vellando, P. & Ruiz, D., Experimental validation of two depth-averaged turbulence models. International Journal for Numerical Methods in Fluids; published on-line in Wiley InterScience (www.interscience.wiley.com), DOI: 10.1002/fld.1880.
Numerical analysis of compressible turbulent helical flow in a Ranque-Hilsch vortex tube

R. Ricci, A. Secchiaroli, V. D'Alessandro & S. Montelpare
Dipartimento di Energetica, Facoltà di Ingegneria, Università Politecnica delle Marche, Italy
Abstract

A numerical analysis of the internal flow field in a Ranque-Hilsch vortex tube (RHVT) has been conducted in order to improve understanding of its fluid-dynamic behaviour. The flow field in an RHVT is compressible, turbulent and helical with a very high degree of swirl; hence its numerical simulation is a challenging task. Particular attention has been paid to turbulence modelling, so both RANS and LES approaches have been employed. In particular, axisymmetric RANS simulations have been conducted using RNG k-ε and linear RSM (Reynolds Stress differential Model) closure models, while fully three-dimensional LESs have been performed using the Smagorinsky and Germano-Lilly sub-grid scale (SGS) models. Results showed that the choice of turbulence closure model is a crucial issue in the prediction of the flow field in an RHVT; in fact, the different simulations exhibit some differences in the description of the velocity vector components. In each simulation, the flow governing equations have been solved using the commercial finite volume code FLUENT™ 6.3.26. Flow patterns in this device have also been investigated by means of the Helical Flow Index, or normalized helicity; the Power Spectral Density (PSD) of the velocity magnitude has also been calculated, showing a good agreement with K41 theory. An improved understanding of the flow field inside the RHVT can lead to a correct prediction of the fluid dynamic and thermal behaviour of the outlet jets, fundamental information for defining the cooling performance of this device.

Keywords: Ranque-Hilsch vortex tube, swirl flows, RANS, LES, turbulent spectrum.
doi:10.2495/CMEM090321
1 Introduction
The Ranque-Hilsch vortex tube (RHVT) is a simple device able to split a compressed gas flow into two low-pressure flows with temperatures respectively higher and lower than that of the inlet gas, Fig. 1. This effect, called the Ranque-Hilsch effect or "thermal separation", is due solely to the fluid dynamic behaviour of the device. The RHVT consists of a circular tube with an inlet section, where the compressed gas enters tangentially through several azimuthally arranged nozzles. The high-pressure flow (by means of a very strong swirling motion) is split into two streams: the hot one flows near the internal wall of the tube, and the cold one along the axis (Fig. 1). These gas streams leave the device through two axial outlet sections located on opposite sides of the tube (counter-flow tube). Although the vortex tube is a simple device, the fluid dynamic effect that produces thermal separation is extremely complex and not completely understood. Many efforts to explain this phenomenon have been made in the past, based on theoretical, numerical and experimental analyses; a complete review can be found in Eiamsa-ard and Promvonge [4]. In this work a numerical analysis of the internal flow field in an RHVT has been conducted using the computational model defined by Ricci et al. [7].
Figure 1: RHVT commercial model used in this work: Exair® 25 scfm.

2 RHVT model description
The computational model has been built by introducing several simplifications that involve the inflow and outflow sections. The inlet section has been modified following Skye et al. [1], while the hot outlet section has been represented as an axial outflow (instead of the radial one reported in previous papers), closer to the real geometry. The computational models are shown in figs. 2 and 3. An axisymmetric domain has been used for the RANS simulations, while three-dimensional grids have been built for the LESs. A grid independence procedure, for both the RANS simulations and the LESs, has been performed by means of the Richardson extrapolation technique (Roache [14]), ensuring a negligible influence of grid spacing on the results. The grid features used in these simulations are described in Ricci et al. [7].
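The Richardson extrapolation used for the grid independence study can be sketched generically as follows (a minimal implementation of the idea described in Roache [14]; the function names and the three-grid order estimate are illustrative, not the authors' code):

```python
import math

def richardson_extrapolate(f_fine, f_coarse, r, p):
    """Richardson extrapolation of a grid functional (e.g. a point velocity):
    estimate of the grid-converged value from solutions on two grids with
    refinement ratio r and order of accuracy p."""
    return f_fine + (f_fine - f_coarse) / (r**p - 1.0)

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy from three solutions on grids with a
    constant refinement ratio r (coarse -> medium -> fine)."""
    return math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
```

For a second order scheme the observed order should approach 2 as the grids are refined; a large gap between extrapolated and fine-grid values signals that the grid still influences the result.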
Figure 2: Sketch of a RANS grid.

Figure 3: Sketch of a LES grid.

3 Mathematical model
The complete set of governing equations is given by the Navier-Stokes equations, in which gravity effects are neglected, the stress tensor is related to the strain rate tensor by the constitutive relations for Newtonian fluids, and the thermal flux vector is expressed by Fourier's postulate. The thermophysical properties of the air supplying the device are represented by a third order polynomial function. Boundary conditions are imposed, following Skye et al. [1], by prescribing pressure, total temperature, velocity vector components and mass flow rate at the computational inlet, in addition to pressure values at the outlets, as in Table 1. No-slip and adiabatic conditions are set at the solid boundaries.
4 Turbulence models
The turbulent behaviour of the fluid inside the RHVT has been analyzed in this work by means of RANS and LES approaches. Since the medium is a compressible fluid, density and temperature fluctuations must be taken into account (in addition to velocity and pressure ones). In the case of compressible flows it is useful to eliminate the density turbulent fluctuations from the mean motion equations using Favre's (or mass-weighted) average (for more details see Wilcox [9]).

Table 1: Boundary conditions.

Parameter                          Value
Pressure inlet pin [Pa]            700000
Hot pressure outlet ph [Pa]        101325
Cold pressure outlet pc [Pa]       101325
Inlet total temperature Tin [K]    300
Tangential velocity vθ [m/s]       47.3
Radial velocity vr [m/s]           9.5
This approach has been used in the present work too. Two turbulence models have been used for the RANS equations closure: RNG k-ε and a linear RSM (Wilcox [9]). In contrast to the RANS approach, the Large Eddy Simulation formulation must be unsteady and three-dimensional, so neither geometrical nor analytical simplifications are possible. Filtering in the LESs has been performed using the Favre filter for each flow function f, eq. (1), as described in Erlebacher et al. [10].
\[ \tilde f(\mathbf{x},t) = \frac{\displaystyle\int_{\Omega\times[0,T]} \rho(\mathbf{y},t')\, f(\mathbf{y},t')\, G(\mathbf{x}-\mathbf{y},t')\, d^3\mathbf{y}\, dt'}{\displaystyle\int_{\Omega\times[0,T]} \rho(\mathbf{y},t')\, G(\mathbf{x}-\mathbf{y},t')\, d^3\mathbf{y}\, dt'} \]   (1)

\[ G(\mathbf{x}-\mathbf{y}) = \begin{cases} 1/V & \mathbf{y}\in V \\ 0 & \text{otherwise} \end{cases} \]   (2)
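The density weighting in eq. (1) can be illustrated on discrete data (a minimal sketch; the sample arrays are invented):

```python
import numpy as np

def favre_average(f, rho):
    """Density-weighted (Favre) average: f_tilde = <rho*f> / <rho>.
    For constant density it reduces to the ordinary Reynolds average."""
    f = np.asarray(f, dtype=float)
    rho = np.asarray(rho, dtype=float)
    return float(np.sum(rho * f) / np.sum(rho))
```

In a compressible flow, samples carrying more mass contribute more to the average, which is exactly what removes the explicit density-fluctuation terms from the mean equations.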
Two subgrid-scale models have been used in the LES: Smagorinsky [11] and Germano-Lilly (dynamic) [12]. Through the finite volume discretization, a simplification of the unresolved terms in the LES equations can be obtained by using the filter function G(x − y) defined by (2), i.e. by setting the filter width equal to the grid width; in this case the Leonard stress tensor and the cross stress tensor can be neglected.
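As an illustration of the first of the two SGS models, the Smagorinsky eddy viscosity can be sketched as below (the textbook 2D form nu_sgs = (Cs·Δ)²|S|; the value Cs = 0.1 and the gradient inputs are illustrative, and the paper relies on FLUENT's own implementation):

```python
import numpy as np

def smagorinsky_viscosity(dudx, dudy, dvdx, dvdy, delta, cs=0.1):
    """Smagorinsky SGS eddy viscosity (2D sketch):
    nu_sgs = (cs*delta)**2 * |S|, with |S| = sqrt(2*S_ij*S_ij) and
    S_ij the resolved strain-rate tensor."""
    s11 = dudx
    s22 = dvdy
    s12 = 0.5 * (dudy + dvdx)                       # symmetric off-diagonal term
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * delta)**2 * s_mag
```

The dynamic (Germano-Lilly) model replaces the fixed constant cs with a value computed locally from a test filter; that step is not reproduced here.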
5 Numerical methods
The flow governing equations have been solved by means of a second order Finite Volume Method (see Jasak [13]). In the RANS equations, the convective terms of the mass, momentum and energy conservation equations have been discretized with a SOU scheme, while a QUICK scheme was used for the k, ε and Reynolds stress transport equations. In the Large Eddy Simulations, the convective terms were discretized with a low-diffusion MUSCL scheme, and time integration was performed with a second order accurate implicit scheme. The inviscid flux treatment was performed by means of a Roe flux scheme. Further information and details of all the numerical schemes used in this work can be found in the FLUENT user's guide [8].
6 Results
Owing to the high computational cost, the simulated physical time has been set equal to 20 µs. The achievement of a steady condition has nevertheless been evaluated by monitoring the time history of the integral pressure on the central section of the RHVT (fig. 4). The simulated physical time is probably large enough to allow the achievement of a steady condition for the swirl velocity, the axial velocity (the most important components) and the temperature, as shown in figs. 5, 6 and 7.
Figure 4: Integral pressure value versus time step number (LES, Smagorinsky model).
The LES results presented in this paper have been obtained by averaging instantaneous quantities over a number of time steps corresponding to the steady condition. A complete comparison between the velocity profiles simulated by the different turbulence models employed in this work is reported in figs. 8 and 9 for several positions along the tube axis. These figures show that, in high swirl conditions, the simplest turbulence model predicts velocity profiles very different from the LES and RSM ones. All the mathematical models used in this work show that the swirl velocity is the highest component, as reported in Skye et al. [1], Farouk and Farouk [2], Eiamsa-ard and Promvonge [3] and Behera et al. [5]. However, only the LESs and the RSM closure are able to predict radial profiles of swirl velocity similar to the Rankine vortex ones that are typical of these flow conditions.
Figure 5: Axial velocity versus time at the point 50 mm from the hot exit and 1.75 mm in the vertical direction from the axis (Germano-Lilly model).

Figure 6: Swirl velocity versus time at the point 50 mm from the hot exit and 1.75 mm in the vertical direction from the axis (Germano-Lilly model).
Moreover, the LES and RSM approaches show a good agreement in the prediction of the swirl velocity profile, although the maximum value is overestimated by the RSM approach. Except for the regions near the axis and near the wall, the simulations with the RNG k-ε turbulence model show a swirl velocity field very similar to a rigid body rotation. These results are in agreement with a previous work (Behera et al. [5]) using the same turbulence closure, even though in an RHVT with different geometrical characteristics. The RNG k-ε model, as expected, produced predictions with important differences with respect to those obtained by RSM and LES for the velocity field in highly swirling flows: in particular, swirl velocity values are underestimated, and hence axial velocity is overestimated, by this model. The axial velocity profiles show a substantial difference between the RNG k-ε and RSM calculations, mainly in the prediction of the maximum value. In fact, the RSM closure shows a velocity decrease near the axis, while the RNG k-ε simulations place the maximum axial velocity on the axis. The LES results are close to those obtained by the RANS simulation with RSM, and far from the results obtained using a Boussinesq-type closure. The SGS models used in this work have not shown considerable differences in the predicted swirl velocity profiles; however, the dynamic model predicts higher axial velocity values than Smagorinsky's in the central zone of the tube, while near the wall Smagorinsky's model predicts higher values.
Figure 7: Temperature versus time at the point 50 mm from the hot exit and 1.75 mm in the vertical direction from the axis (Germano-Lilly model).
The degree of swirl of the flow field is usually characterized by the swirl number S, the axial flux of swirl momentum divided by the axial flux of axial momentum, expressed in (3), in which u (bold) represents the velocity vector, w the swirl velocity component, u the axial velocity component, ρ the fluid density and r the distance of the generic fluid element from the RHVT axis.
\[ S = \frac{\displaystyle\int_A \rho\, r\, w\, \mathbf{u}\cdot d\mathbf{A}}{R \displaystyle\int_A \rho\, u\, \mathbf{u}\cdot d\mathbf{A}} \]   (3)
Figure 8: Swirl velocity profiles obtained with all turbulence models used in this work at several distances from the hot outlet.

Figure 9: Axial velocity profiles obtained with all turbulence models used in this work at several distances from the hot outlet.
\[ S = \frac{\displaystyle\sum_f \rho_f\, r_f\, w_f\, \mathbf{u}_f\cdot \mathbf{A}_f}{R \displaystyle\sum_f \rho_f\, u_f\, \mathbf{u}_f\cdot \mathbf{A}_f} \]   (4)
The swirl number has been calculated numerically, by means of a second order finite volume method, over the LES data on several radial planes along the RHVT axis, as in (4), where the subscript f means that the variable is evaluated on a face of the computational volume. This computation has shown that the most intense swirl is located near the inflow section; moreover, a swirl velocity increase is observed close to the hot outlet, where the axial velocity decreases. The results are reported in fig. 10 and are almost independent of the subgrid-scale model used.
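On a plane normal to the axis, where u·A_f reduces to u_f·A_f, the face summation in (4) can be sketched as follows (array names are illustrative; in practice the face values would be sampled from the LES fields):

```python
import numpy as np

def swirl_number(rho, r, w, u, area, R):
    """Discrete swirl number on a radial plane, eq. (4):
    S = sum_f(rho_f * r_f * w_f * (u.A)_f) / (R * sum_f(rho_f * u_f * (u.A)_f)).
    The plane is assumed normal to the axis, so (u.A)_f = u_f * A_f."""
    rho, r, w, u, area = (np.asarray(a, dtype=float) for a in (rho, r, w, u, area))
    numerator = np.sum(rho * r * w * u * area)        # axial flux of swirl momentum
    denominator = R * np.sum(rho * u * u * area)      # R * axial flux of axial momentum
    return float(numerator / denominator)
```

Evaluating this on successive planes gives the axial distribution of S, as in fig. 10.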
Figure 10: Swirl number calculated on several radial planes along the RHVT axis.
To improve the understanding of the instantaneous flow features, the Helical Flow Index Ψ (5) has been used in the analysis of the LES results. The HFI is a parameter defined as in Morbiducci et al. [15], varying between -1 and 1 by virtue of the Cauchy-Schwarz inequality: |Ψ| = 1 when the flow is purely helical, and Ψ = 0 when the flow is purely axial or circumferential. HFI contours on several planes inside the tube are reported in Fig. 11. This analysis showed an instantaneous pure helical flow near the wall (in some radial planes), an almost purely axial flow near the tube axis and a hybrid motion in the rest of the domain, where 0 < |Ψ| < 1.
\[ \Psi(\mathbf{x},t) = \frac{\mathbf{u}\cdot(\nabla\times\mathbf{u})}{|\mathbf{u}|\,|\nabla\times\mathbf{u}|} \]   (5)
Figure 11: HFI instantaneous contours on several radial planes (Smagorinsky model).

Figure 12: PSD calculated at a point by LES (Germano-Lilly model).
A spectral analysis of the flow field using the LES data has been carried out. The Power Spectral Density has been obtained by means of a DFT algorithm. The calculations refer to a point located on a vertical diameter at a distance of 50 mm from the hot exit and 1.75 mm from the axis. Broad (third octave) and narrow band spectra for the Smagorinsky and Germano-Lilly model data are presented in Figs. 12 and 13.
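A minimal DFT-based PSD estimate of this kind can be sketched as follows (a plain one-sided periodogram; the paper's third-octave band averaging and any windowing are not reproduced, and the scaling convention here is one common choice):

```python
import numpy as np

def psd(signal, dt):
    """One-sided periodogram of a time series sampled at interval dt:
    returns (frequencies, spectral density). The mean is removed so the
    zero-frequency bin does not dominate."""
    n = len(signal)
    x = np.asarray(signal, dtype=float) - np.mean(signal)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(n, dt)
    S = (2.0 * dt / n) * np.abs(X)**2   # one-sided density estimate
    return f, S
```

Plotting S against f on log axes, the inertial sub-range can be compared against the Kolmogorov (K41) f^(-5/3) slope.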
7 Conclusions
Both RANS and LES approaches have been tested using different RANS closures and SGS models. Results showed that the flow in the tube is split into two helical, co-axial, co-rotating streams with different thermal features, the hot one placed near the internal wall of the tube and the cold one near the axis. Flow patterns and velocity profiles show a good agreement with the results reported in Skye et al. [1], Farouk and Farouk [2], Eiamsa-ard and Promvonge [3] and Behera et al. [5].
Figure 13: PSD calculated at a point by LES (Smagorinsky model).
In the LES with the Germano-Lilly model, some differences have been observed in the prediction of the axial velocity field, while the swirl velocity prediction seemed to be unaffected by the SGS model variation. Due to the high computation time, it was not possible to increase the simulated physical time; hence for the radial velocity, which has a slower dynamic behaviour, a stationary condition has probably not been reached. The swirl number calculation showed that the most intense swirl is located near the inflow; moreover, a swirl velocity increase is observed close to the hot outlet, where the axial velocity decreases. Preliminary results of the spectral analysis of the LES data showed a good description of the inertial sub-range.
References
[1] Skye, H.M., Nellis, G.F. & Klein, S.A., Comparison of CFD analysis to empirical data in a commercial vortex tube. Int. J. of Refrigeration, 29, pp. 71–80, 2006.
[2] Farouk, T. & Farouk, B., Large eddy simulations of the flow field and temperature separation in the Ranque-Hilsch vortex tube. Int. J. of Heat and Mass Transfer, 50, pp. 4724–4735, 2007.
[3] Eiamsa-ard, S. & Promvonge, P., Numerical investigation of the thermal separation in a Ranque-Hilsch vortex tube. Int. J. of Heat and Mass Transfer, 50(5-6), pp. 821–832, 2007.
[4] Eiamsa-ard, S. & Promvonge, P., Review of Ranque-Hilsch effect in vortex tubes. Renewable and Sustainable Energy Reviews, 12, pp. 1822–1842, 2008.
[5] Behera, U., Paul, P.J., Kasthurirengan, S., Karunanithi, R., Ram, S.N. & Dinesh, K., CFD analysis and experimental investigations towards optimizing the parameters of Ranque-Hilsch vortex tube. Int. J. of Heat and Mass Transfer, 48, pp. 1961–1973, 2005.
[6] Ricci, R., Secchiaroli, A., Montelpare, S. & D'Alessandro, V., Fluid dynamics analysis of Ranque-Hilsch vortex tube. Rivista del Nuovo Cimento (to appear), 2009.
[7] Ricci, R., D'Alessandro, V., Secchiaroli, A., Montelpare, S. & Mazzieri, M., Raffreddamento di componenti elettronici di potenza mediante dispositivi ad effetto Ranque-Hilsch: simulazione numerica del flusso interno ed esterno [Cooling of power electronic components by Ranque-Hilsch effect devices: numerical simulation of the internal and external flow]. Proceedings of 26th UIT National Heat Transfer Conference, Ed. ETS, pp. 527–532, 2008.
[8] Fluent Inc., FLUENT 6.3.26 Users' Guide, 2006.
[9] Wilcox, D.C., Turbulence Modelling for CFD, DCW Industries: California, USA, 1994.
[10] Erlebacher, G., Hussaini, M.Y., Speziale, C.G. & Zang, T.A., Toward the large-eddy simulation of compressible turbulent flows. J. of Fluid Mechanics, 238, pp. 155–185, 1992.
[11] Smagorinsky, J., General circulation experiments with the primitive equations, I. The basic experiment. Monthly Weather Review, 91, pp. 99–164, 1963.
[12] Germano, M., Piomelli, U., Moin, P. & Cabot, W.H., A dynamic subgrid-scale eddy viscosity model. Physics of Fluids A, 3(7), pp. 1760–1765, 1991.
[13] Jasak, H., Error analysis and estimation for the Finite Volume method with applications to fluid flows. PhD Thesis, Imperial College, London, UK, 1996.
[14] Roache, P.J., Verification and Validation in Computational Science and Engineering, Hermosa Publishers, 1998.
[15] Morbiducci, U., Del Gaudio, C., D'Avenio, G., Calducci, A. & Barbario, V., A mathematical description of blood spiral flow in vessels: application to a numerical study of flow in arterial bending. J. of Biomechanics, 38(7), pp. 1375–1386, 2005.
Turbulence: a new zero-equation model

K. Alammar
Department of Mechanical Engineering, King Saud University, Saudi Arabia
Abstract

A new zero-equation turbulence model for averaged turbulent fluid flow was developed. The new model, which is based on the Boussinesq hypothesis, incorporates the wall effect and predicts the behaviour of this effect with increasing distance from the wall. It is applied to incompressible two-dimensional and axisymmetric turbulent flows over a flat plate and in pipes, respectively. In the fully turbulent regions, predictions agree well with published measurements of the skin friction coefficient over the range of Reynolds numbers considered, up to 10 million for both the flat plate and the pipe, without adjustments to the model. No wall functions were implemented. For a given roughness, the skin friction for the pipe was shown to be independent of the Reynolds number in two test cases, namely Re = 1 million and 10 million. The skin friction and velocity profiles are presented and assessed.

Keywords: turbulence modelling, wall effect, flat plate, pipe, skin friction.
1 Introduction
The problem of turbulence dates back to the days of Claude-Louis Navier and George Gabriel Stokes, as well as others in the early nineteenth century. The search for its solution was a source of great despair for many notably great scientists, including Werner Heisenberg, Horace Lamb and many others. The complete description of turbulence remains one of the unsolved problems of modern physics. The history of turbulence spans a period of nearly two centuries, and one could spend considerable time and effort compiling its lengthy history and the many efforts that have been made to resolve it. However, this paper is not intended to present a history of turbulence. Instead, the objective of the present work is to present a zero-equation model that may pave the way to the solution of averaged turbulent flows. For validation, incompressible flows over a smooth flat plate and in a pipe are considered.

doi:10.2495/CMEM090331
2 Theory

Starting with the incompressible Navier-Stokes equations in Cartesian index notation, and applying Reynolds decomposition, averaging, and the Boussinesq hypothesis [1], we have:

\[ \frac{\partial \bar u_i}{\partial x_i} = 0 \]   (1)

\[ \rho\left(\frac{\partial \bar u_i}{\partial t} + \bar u_j \frac{\partial \bar u_i}{\partial x_j}\right) + \frac{\partial \bar p}{\partial x_i} - \frac{\partial}{\partial x_j}\left[\mu\left(\frac{\partial \bar u_i}{\partial x_j} + \frac{\partial \bar u_j}{\partial x_i}\right)\right] = \frac{\partial}{\partial x_j}\left[\mu_t\left(\frac{\partial \bar u_i}{\partial x_j} + \frac{\partial \bar u_j}{\partial x_i}\right)\right] \]   (2)

For simplicity, the normal stresses (except for the thermodynamic pressure) and body forces are neglected. Here \( \mu_t = C\,\mathrm{Re}_w\,\mu \) is the dynamic eddy viscosity, where C is a non-dimensional function of the wall roughness: for a smooth wall it is one constant, and for isotropic roughness it is a different constant. \( \mathrm{Re}_w = u_i \rho d / \mu \), where d is the normal distance from the wall. If eq. (2) is normalized, the shear stresses take the following form:
\[ \frac{\partial}{\partial x_j}\left[\left(\mathrm{Re}^{-1} + C\,\frac{u_i d}{U L}\right)\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)\right] \]   (3)
The second term in the brackets is a non-dimensional number attributed to the wall; clearly, it dominates in high Reynolds number flows. In the absence of walls, however, a difficulty arises, because turbulence is known to exist even in the absence of wall effects. In that case, one plausible length scale would be the mean free path instead of the distance from the wall. This would give rise to second-order effects that are negligible in the presence of walls, and therefore should not affect the results of the present study.
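The closure itself is a one-line algebraic formula. A sketch follows (note that µ cancels, leaving µt = C ρ |u| d, a mixing-length-like form with the wall distance as the length scale; the sample values are invented):

```python
def eddy_viscosity(C, u_mag, rho, d, mu):
    """Zero-equation eddy viscosity of the model: mu_t = C * Re_w * mu,
    with Re_w = u_mag * rho * d / mu. Since mu cancels, this is
    mu_t = C * rho * u_mag * d. C = 0.016 for the smooth-wall cases."""
    re_w = u_mag * rho * d / mu
    return C * re_w * mu
```

Because no transport equation is solved for mu_t, the closure adds essentially no cost to the mean-flow solver.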
3 The numerical procedure
Assuming two-dimensional and axisymmetric flow for the flat plate and the pipe, respectively, equations (1) and (2) were solved using SIMPLE [2] and second-order schemes. The constant C was set to 0.016 for both cases. The Gauss-Seidel iterative method was used, on a 32-bit laptop, with a structured grid of 100,000 cells.
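The Gauss-Seidel iteration named above can be sketched generically (a dense illustrative version for a small system; the actual solver operates on the sparse systems arising from the SIMPLE discretization):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10000):
    """Gauss-Seidel iteration for A x = b: sweep through the unknowns,
    updating each one in place using the latest available values."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:   # converged
            break
    return x
```

Convergence is guaranteed for the diagonally dominant matrices typical of finite volume discretizations.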
4 Results and discussion
The mean velocity and skin friction coefficient for the flat plate and pipe are depicted in fig. 1; the data are presented on a log scale. For the flat plate, the Reynolds number extends from zero (leading edge) to 1.0×10⁷. The agreement with the measurements of Wieghardt and Tillman [3] is good starting from roughly Re = 5×10⁵, i.e., after transition. Laminar effects are expected to lower the skin friction within the transitional region. For the pipe flow, the Reynolds number extends from 10,000 to 1.0×10⁷. Again, good agreement is attained with the measurements of Moody [4], except within the transitional region.

Figure 1: Skin friction coefficient and velocity profiles (Cf×10⁻² or Un versus yn or Re×10⁻⁷; theory compared with the Wieghardt data for the plate and the Moody data for the pipe).
The mean velocity profiles are presented for Re = 5.0×10⁶ in both cases. Clearly, the boundary layer is considerably thinner for the pipe. Since the pipe walls surround the flow, turbulence production is greater in the pipe flow; this energizes the boundary layer more and hence thins it. y⁺ was of the order of 0.001 for the pipe and 1.0 for the plate. Though not shown in fig. 1, the effect of wall roughness was demonstrated with a test case in which C was set to 0.7. This value was chosen to reproduce a skin friction coefficient of 0.0026 at Re = 1.0×10⁷, which corresponds to a relative roughness parameter of 0.003 in the Moody chart. Using the same value for C, no significant change in the skin friction coefficient was predicted at Re = 1.0×10⁶, hence predicting the independence of the skin friction from the Reynolds number at certain roughness levels.
5 Conclusions
Steady, incompressible turbulent flow over a flat plate (two-dimensional) and in a pipe (axisymmetric) was simulated using a new zero-equation turbulence model. The model predicts the behaviour of the wall effect with increasing distance from the wall. The theoretical results agreed well with published measurements for both the flat plate and the pipe in the fully turbulent regions. The developed model is applicable to fluid flow in general; other transport equations, including those for energy and species, are solved accordingly.
Nomenclature
C    non-dimensional function of wall roughness
Cf   skin friction coefficient, = τw / (0.5 ρU²)
D    pipe diameter, m
d    normal distance from the wall, m
L    length scale, m; D for the pipe and x1 for the plate
Re   = UρL/µ
Rew  = ui ρd/µ
U    velocity scale, m/s; centreline value for the pipe and freestream for the plate
Un   velocity normalized by the centreline value for the pipe and the freestream for the plate
ui   velocity component, m/s
xi   Cartesian coordinate, m
y+   non-dimensional wall distance
µ    fluid dynamic viscosity, Pa·s
ρ    fluid density, kg/m³
τw   wall shear stress, Pa
References
[1] Schlichting, H. & Gersten, K., Boundary-Layer Theory, Springer: Berlin, 2000.
[2] Patankar, S.V. & Spalding, D.B., A calculation procedure for heat, mass and momentum transfer in three-dimensional parabolic flows. International Journal of Heat and Mass Transfer, 15, pp. 1787–1806, 1972.
[3] Wieghardt, K. & Tillman, W., On the Turbulent Friction Layer for Rising Pressure, NACA TM-1314, 1951.
[4] Moody, L.F., Friction factors for pipe flow. Transactions of the A.S.M.E., pp. 671–684, 1944.
Mesh block refinement technique for incompressible flows in complex geometries using Cartesian grids

C. Georgantopoulou¹, G. Georgantopoulos² & S. Tsangaris¹
¹Fluids Section, School of Mechanical Engineering, National Technical University of Athens, Greece
²Aerodynamic Lab, Hellenic Air Force Academy, Greece
Abstract

The present study presents a block refinement technique for the simulation and computation of flows inside domains with arbitrarily shaped bounds. The discretisation of the physical domains is achieved using Cartesian grids only: the curvilinear geometries are approximated in Cartesian co-ordinates by Cartesian grid lines. In order to achieve the best approximation of the original contour, we choose the saw-tooth method to determine the appropriate approximated Cartesian points. The refinement method is based on the use of a sequence of nested rectangular meshes on which the numerical simulation takes place. The method is applied to the solution of the incompressible Navier-Stokes equations, for steady laminar flows, based on a cell-centred approximate projection. We present the numerical simulation of internal and external flows for different values of the Reynolds number. The utility of the algorithm is tested by comparing its convergence characteristics and accuracy to those of standard single grid and BFC grid algorithms.

Keywords: grid generation, incompressible flows, nested grids, subgrids, numerical simulation.
1 Introduction
When the body-fitted structured curvilinear (BFC) grid approach was introduced for solving flow problems, the use of Cartesian grids was almost abandoned. The benefit of BFC is that the boundary surface is fitted with a new co-ordinate line based on the body contour [1]. The main problem is that if you have to simulate
doi:10.2495/CMEM090341
a complex, multiply connected domain with sharp boundaries, it is difficult to automatically generate a grid of good quality. Current BFC algorithms are still strongly dependent on the problem to be solved and require a lot of computational and human time effort. In order to avoid these particular problems, there has recently been a great development of Cartesian grids. In Cartesian grid methods the numerical grid is generated automatically, with simplified data structures and formulations for the numerical fluxes. Cartesian grid generation was used by Clarke et al. [2] and Falle and Giddings [3] to calculate steady compressible flows [4]. Coirier and Powell [5] used a Cartesian methodology for the steady transonic solution of the Euler equations and in [6] performed accuracy and efficiency assessments of the method. It is a cell-centred method with an interesting treatment of boundary conditions. Smith and Johnston [7] developed a grid generation procedure that uses a Cartesian embedded unstructured approach for complex geometries. Adaptive mesh refinement algorithms have been used extensively to solve a variety of problems in hyperbolic conservation laws and have more recently been extended to incompressible flows [8–10]. Wang [11] developed a quadtree-based adaptive Cartesian/quadrilateral grid generator and flow solver based on cell cutting [12–14], and Deister et al. [15] present a refined Cartesian grid based on octrees. In the present paper we present a Cartesian grid approach based on a saw-tooth method for approximating curvilinear geometry bounds. This technique is based on Chai et al. [16], who present the saw-tooth Cartesian method for a heat transfer problem on a complex geometry.
The emphasis is placed on presenting an improved, accurate Cartesian approximation of curvilinear geometries and the corresponding fluid solution, in comparison with those of the body-fitted structured curvilinear method. We apply a nested refinement algorithm based on those of Jesse et al. [17], Martin and Colella [9], and Berger and Colella [18], in which refined regions are organized into unions of a small number of nested rectangular blocks. Refinement is performed in space and the method is cell-centred finite volume, which allows the use of a single set of cell-centred solvers. It is applied to steady, incompressible flow fields for the Navier–Stokes numerical simulation [19]. The flow solver is based on a pseudo-compressibility technique (Pappou and Tsangaris [20]).
2 Grid generation
The main problem in Cartesian grid generation for a curvilinear geometry is that we have to use a technique that creates an approximate Cartesian bound as close to the initial curvilinear bound as possible. The new approximate bound is formed only by grid lines, on either the x or z-axis. The method used is called saw-tooth and it has been chosen as the most appropriate for the finite volume, cell-centred numerical simulation of flow fields. In order to apply the saw-tooth approximation we project the original contour of the curvilinear geometry onto a Cartesian grid. This complex contour is described by a set of data points on either the x or z-axis. The second step of the
procedure is the specification of the approximated Cartesian points for the representation of the geometry by the saw-tooth method. If an original data point is on the x-axis, we calculate the distance between this point and its neighbouring grid nodes in the same direction (x). According to the smallest distance we choose the corresponding grid node as the Cartesian approximated point [21].
2.1 Mesh refinement technique
The main problem with the above method is that, if you want to decrease the approximation error with respect to the initial curvilinear geometry, you have to cluster the uniform grid, and in many cases a huge grid size is needed. In order to overcome this problem we choose a block refinement technique using a hierarchical structured grid approach. The method is based on a sequence of nested rectangular meshes in which the numerical simulation takes place. We simulate the domain using as many refined grids as we need. A point of the physical domain can be contained in several grids; the solution of the variables at this point is taken from the finest grid containing it.
2.2 Block-nested refinement algorithm
The proposed nested algorithm contains several levels of grids. We create a coarse level at the beginning and solve the domain. We name this coarse level m=0 and each subsequent refined sub-grid m+1. The coarsest grid is uniform in the x and z directions respectively. We define an integer refinement factor, as in [9], I = Δx^m/Δx^{m+1} = Δz^m/Δz^{m+1}. For convenience the above factor should be a power of two. Once we have created the coarse grid we simulate the flow field and calculate the variables. We have already defined the limits of the refinement levels and we proceed with the calculation to the next refinement level. The sub-grid bounds must lie on a grid line of the previous-level grid.
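The saw-tooth point selection described above can be sketched in a few lines; this is an illustrative reading of the rule (the uniform grid spacing and the `snap`/`saw_tooth` names are assumptions, not the authors' code):

```python
# Saw-tooth approximation (sketch): snap each contour data point to the
# nearest node of a uniform Cartesian grid, so that the approximate
# bound is made of grid lines only.

def snap(x, dx):
    """Grid-line coordinate nearest to x, for uniform spacing dx."""
    return round(x / dx) * dx

def saw_tooth(contour, dx, dz):
    """Replace each original (x, z) contour point by the closest grid
    node, i.e. keep whichever neighbouring grid line is nearer."""
    return [(snap(x, dx), snap(z, dz)) for x, z in contour]

# Example: three contour points projected onto a grid of spacing 0.1
approx = saw_tooth([(0.32, 0.18), (0.47, 0.26), (0.61, 0.31)], 0.1, 0.1)
```

Clustering the grid reduces the distance between the original contour and its saw-tooth image, which is exactly the approximation error the block refinement technique targets.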
As we use staggered grids and the variable values are expressed at the cell centres, we consider pseudo-cells all around the physical domain and the sub-grids too. In this way we estimate the variables using interpolation between pseudo-cells and their neighbouring cells. The pseudo-cells of each sub-grid m lie on level m−1. We continue this process for all the sub-grids. Once the simulation is completed on all sub-grids and we have the flow field results at level m_max, we re-solve the problem on the coarser levels to ensure conservation. In this step of the procedure we have to be careful, because we can apply the numerical simulation only on rectangular sub-grids. We find a new solution, this time under the influence of the fine levels. In addition, we must satisfy both Dirichlet and Neumann matching conditions along coarse–fine and fine–coarse interfaces; that is why we prescribe the velocity values but solve for the pressure. The grid algorithm comprises multiple levels. Once the Cartesian approximate geometry bound has been created, the grid generation and numerical simulation procedure is as follows:
1. Create a coarse Cartesian grid (level m=0); simulate and solve the flow field.
2. Transfer the solution to the next grid level (m+1).
3. Solve the flow field on the new sub-domain.
4. Transfer the solution to the next level (m+2) with new boundary conditions. (Repeat the procedure for all the levels.)
5. Simulate and solve the flow on the last sub-domain (level m_max).
6. Transfer the solution to the coarser grid level (m_max−1) as its boundary conditions.
7. Solve the sub-domain under the influence of the refined grid results. (Repeat the procedure for all the levels.)
8. Solve the coarsest, initial sub-domain (level m=0).
9. Take the solution of the variables from the finest grid.
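The sweep over the grid hierarchy listed above can be summarised as a driver loop; `solve`, `interpolate_down` and `average_up` are hypothetical placeholders for the paper's flow solver and interface transfer operators, not its actual routines:

```python
# Block-nested refinement driver (sketch): a downward coarse-to-fine
# sweep passing boundary conditions, then an upward fine-to-coarse
# sweep re-solving the coarser levels under the refined results.

def solve_hierarchy(levels, solve, interpolate_down, average_up):
    """levels[0] is the coarse grid (m=0), levels[-1] the finest sub-grid."""
    m_max = len(levels) - 1

    # Downward sweep: solve each level, handing BCs to the next sub-grid.
    solve(levels[0])
    for m in range(1, m_max + 1):
        interpolate_down(levels[m - 1], levels[m])  # BCs for sub-grid m
        solve(levels[m])

    # Upward sweep: feed fine results back and re-solve coarser levels.
    for m in range(m_max - 1, -1, -1):
        average_up(levels[m + 1], levels[m])        # fine values into level m
        solve(levels[m])

    return levels[-1]  # the solution is taken from the finest grid
```

The two sweeps mirror steps 1–5 and 6–9 of the procedure: boundary data travel down by interpolation and conservation is restored on the way back up.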
2.3 Boundary conditions
If the grids are adjacent, the boundary conditions of one grid are provided by the other. If they are not adjacent, the boundary conditions are established either by the coarser-level condition or by the physical boundary condition. Let us consider that we have already solved on the initial coarse grid and have to continue the numerical simulation on a sub-grid. In order to specify the boundary conditions at coarse grid and sub-grid interfaces, we denote by
u^{m+1}(i,k) and w^{m+1}(i,k) the values of the velocity components on the sub-grid pseudo-cells; u^m(l,n) and w^m(l,n) are the corresponding coarse grid values in the physical domain. Every interpolation takes place either on the x-axis or on the y-axis. If we consider that we apply the new velocity values on the x-axis (figure 1), interpolation is applied as follows:

u^{m+1}(i,k) = [u^m(l,n) + u^m(l+1,n)] / 2   and   w^{m+1}(i,k) = [w^m(l,n) + w^m(l+1,n)] / 2.   (1)

Also,

u^{m+1}(i,k) = u^{m+1}(i+1,k) = … = u^{m+1}(i+I−1,k).   (2)

Therefore, if the refinement factor is set equal to 2 (I=2), the above relation becomes:

u^{m+1}(i,k) = u^{m+1}(i+1,k).   (3)

Figure 1: Linear interpolation in order to transfer the velocity values to a coarse–fine interface.

The relation between i and l is:

l = 2i − 1.   (4)
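For a refinement factor I = 2, the coarse-to-fine transfer of eqs. (1)–(3) amounts to averaging two neighbouring coarse cells and copying the result to each group of I fine pseudo-cells. A minimal 1-D sketch (the flat array layout is an assumption for illustration):

```python
def coarse_to_fine(u_coarse, I=2):
    """Fill fine-level pseudo-cell values from a 1-D row of coarse
    cell-centre values: each pair of neighbouring coarse cells is
    averaged (eq. (1)) and the average is shared by the I fine cells
    of the group (eqs. (2)-(3))."""
    u_fine = []
    for l in range(len(u_coarse) - 1):
        avg = 0.5 * (u_coarse[l] + u_coarse[l + 1])
        u_fine.extend([avg] * I)   # the I fine values coincide
    return u_fine
```

For example, the coarse row [1, 3, 5] yields the fine pseudo-cell row [2, 2, 4, 4] when I = 2.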
As we have assigned the velocity values on the boundary bounds, we must apply a condition for the pressure. Assuming that we simulate an axisymmetric flow, the pressure vertical derivative at the interface is estimated as follows:

∂p/∂n = [ (1/Re)(∂²u/∂x² + ∂²u/∂y² + (1/y) ∂u/∂y) − u ∂u/∂x − v ∂u/∂y ] n_x
      + [ (1/Re)(∂²v/∂x² + ∂²v/∂y² + (1/y) ∂v/∂y − v/y²) − u ∂v/∂x − v ∂v/∂y ] n_y,   (5)

where ∂p/∂n is the pressure vertical derivative, Re the Reynolds number, n_x and n_y the components of the unit normal vector, and u and v the axial and vertical velocity components respectively. In order to transfer the boundary values through a fine–coarse interface, we once more apply interpolation and estimate the pressure vertical derivative as above. With the same symbols, the interpolation between the velocity values is

u^m(l,n) = [u^{m+1}(i,k) + u^{m+1}(i+1,k) + … + u^{m+1}(i+I−1,k)] / I,   (6)

where I is the refinement factor.
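The reverse, fine-to-coarse transfer of eq. (6) is a plain average over the I fine cells underlying each coarse cell; sketched below for one 1-D row of cell-centre values (the indexing is illustrative):

```python
def fine_to_coarse(u_fine, I):
    """Average each block of I consecutive fine cell values into one
    coarse value, as in eq. (6)."""
    return [sum(u_fine[i:i + I]) / I for i in range(0, len(u_fine), I)]
```

With I = 2, the fine row [1, 3, 5, 7] collapses to the coarse row [2, 6]; averaging rather than sampling is what lets the upward sweep restore conservation.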
3 Results
In order to examine the accuracy of the above method, we present the numerical simulation of the flow field inside a stenosed tube and around the symmetric airfoil NACA0012. We check the accuracy of the results by comparing them with those of a uniform Cartesian grid with the same base grid size.
3.1 Steady flow inside a stenosed tube
In stenosed tubes, the appearance of large recirculation zones, the high gradients of the wall shear stresses and the strong increase of the pressure drop are the main effects, depending on the severity of the stenosis. The grid generation and numerical method described above were used for the calculation of steady flow inside a stenosed tube. The stenotic area is 0.25% of the inlet area. The numerical refinement grid used is level=1 and I=2 (base grid: 401×26). The Re number, based on the maximum inlet velocity and the radius of the inlet, was set equal to 100. The boundary conditions are summarized in table 1. In order to control the accuracy of the proposed method, we simulated the current flow field using two uniform grids: a curvilinear one sized 501×31 and a uniform Cartesian grid sized 401×26. A part of the block-nested numerical grid is presented in figure 2, where it is shown that the whole domain is split into four sub-grids. Two velocity profiles along the flow field are presented in figure 3 and the pressure distribution along the stenosed tube is presented in figure 4. The agreement between the block-nested algorithm results and the curvilinear grid results is very satisfactory, while the uniform Cartesian grid results present a greater relative error. It is important to note that we managed to improve the numerical results by the use of only one sub-grid, with a refinement factor equal to two, around the stenotic area.

Table 1: Boundary conditions.

Lower bound:  ∂u/∂y = 0,  w = 0,  ∂p/∂y = 0
Upper bound:  u = 0,  w = 0,  ∂p/∂y = 0
Inlet:        u = 2(1 − y²),  w = 0,  ∂p/∂x = 0
Outlet:       ∂u/∂x = 0,  w = 0,  p = const
Figure 2: Part of the block-nested numerical grid used, 401×26, L=1, I=2.
Figure 3: Velocity profiles along the flow field at x=4.0 and x=5.0 (adaptive Cartesian, 401×26, L=1, I=2; uniform Cartesian, 401×26; uniform curvilinear, 501×31).

Figure 4: Pressure distribution along the stenosed tube.
3.2 Flow around airfoil NACA0012
In the second test case we simulate the flow field around a symmetric airfoil (figure 5). It is impractical to solve the airfoil domain using a uniform Cartesian grid, because a huge number of Cartesian grid cells would be needed to achieve the desired approximation of the geometry bound. In order to avoid this memory problem, we apply the block-nested algorithm with two grid levels (m=2), while the integer refinement factor is equal to four (I=4). The base grid size is 61×51 and the whole computational domain consists of 25,586 cells. The corresponding uniform Cartesian grid comprises 796,416 cells and its use for the airfoil numerical simulation is time-consuming and unprofitable. Regardless of the time problem, we solved the fluid flow around the airfoil using a 320×315 uniform Cartesian grid and found that the use of the 61×51 refined grid decreases the CPU time by 93%. We applied free stream conditions and prescribed the velocity on all the boundary limits except the [CD] limit (figure 5), where the pressure value is given. The angle of attack is equal to 0° and the Reynolds number is 100. We present two axial velocity profiles along the flow field (figure 6). The comparison is made with the corresponding results of [22].
Figure 5: Physical domain around airfoil NACA0012.

Figure 6: Axial velocity profiles of fluid flow around the airfoil, Re=100.

4 Conclusions
This paper proposes a method for the approximation of complex curvilinear geometries using Cartesian co-ordinates only. In order to achieve the best geometry approximation, close to the initial curvilinear bound, we applied the saw-tooth method in combination with a grid block refinement technique. We use a cell-centre discretisation, and boundary values are transferred at the interfaces by interpolation. We presented the numerical simulation of two flow fields: inside a stenosed tube and around a symmetric airfoil. For the stenosed tube, we created the approximate Cartesian geometry by the saw-tooth method and applied a block-nested grid with one level and a refinement factor equal to two. We solved the incompressible Navier–Stokes equations on four separate sub-grids, using linear interpolation to transfer the boundary conditions. A comparison of the axial velocity results between the block Cartesian grid, the uniform Cartesian grid and the curvilinear grid gave satisfactory results. By the use of the block-nested grid we succeeded in improving the accuracy of the results relative to the corresponding uniform Cartesian grid.
In the second test case we examined the fluid flow around a NACA0012 airfoil. Airfoils are 'thin' bodies, and the use of a uniform Cartesian grid is very unprofitable; sometimes the algorithm even fails to converge. The use of the block-nested grid is therefore necessary and provides many advantages in CPU memory and convergence time. A comparison of the axial velocity results was made between the block Cartesian grid and results from the literature. The differences appearing between the profiles are due to the different simulation methods, types of grid, residuals and, certainly, digitisation error. By the use of the block-nested grid we succeeded in improving the convergence times, sometimes decreasing them by over 90%. It is important to note that the flow rate is conserved in both of the above test cases, in spite of the differences depicted in the above velocity profiles.
The above numerical solutions prove that the Cartesian block refinement method is stable and accurate enough, regardless of the fact that the produced Cartesian bound is less accurate than the curvilinear one. The block Cartesian method is simple and gives a convergent and grid-independent solution for complex curvilinear geometries, while also accomplishing a reduction in CPU memory and simulation computing time.
References
[1] Thompson, J.F., Thames, F.C. and Mastin, C.W., "Automatic numerical generation of body-fitted curvilinear coordinate system for field containing any number of arbitrary two-dimensional bodies", J. of Computational Physics, Vol. 15, pp. 299–319, 1974.
[2] Clarke, D.K., Salas, M.D. and Hassan, H.A., "Euler calculations for multielement airfoils using Cartesian grids", AIAA J., Vol. 24, pp. 353–356, 1986.
[3] Falle, S. and Giddings, J., "An adaptive multigrid applied to supersonic blunt body flow", Numerical Methods in Fluid Dynamics, 1988.
[4] Pember, R.B., Bell, J.B., Colella, P., Crutchfield, W.Y. and Welcome, M.L., "An adaptive Cartesian grid method for unsteady compressible flow in irregular regions", J. of Comp. Physics, Vol. 120, pp. 278–304, 1995.
[5] Coirier, W.J. and Powell, K.G., "Solution-adaptive Cartesian cell approach for viscous and inviscid flows", AIAA J., Vol. 34, pp. 938–945, 1996.
[6] Coirier, W.J. and Powell, K.G., "An accuracy assessment of Cartesian-mesh approaches for the Euler equations", J. of Computational Physics, Vol. 117, pp. 121–131, 1995.
[7] Smith, R.J. and Johnston, L.J., "A novel approach to engineering computations for complex aerodynamic flows", Proceedings of the 4th Int. Conf. on Numerical Grid Generation in CFD, pp. 271–285, 1994.
[8] Almgren, A.S., Bell, J.B., Colella, P., Howell, L.H. and Welcome, M.L., "A conservative adaptive projection method for the variable density incompressible Navier–Stokes equations", J. of Comp. Physics, Vol. 142, pp. 1–46, 1998.
[9] Martin, D. and Colella, P., "A cell-centered adaptive projection method for the incompressible Euler equations", J. of Computational Physics, Vol. 163, pp. 271–312, 2000.
[10] Howell, L.H. and Bell, J.B., "An adaptive mesh projection method for viscous incompressible flow", SIAM J. Sci. Comput., Vol. 18, pp. 996–1007, 1997.
[11] Wang, Z.J., "A quadtree-based adaptive Cartesian/quad grid flow solver for Navier–Stokes equations", Computers and Fluids, Vol. 27, pp. 529–549, 1998.
[12] Tuncer, I.H., "Two-dimensional unsteady Navier–Stokes solution method with moving overset grids", AIAA J., Vol. 35, pp. 471–476, 1997.
[13] Agresar, G., Linderman, J.J., Tryggvason, G. and Powell, K.G., "An adaptive, Cartesian, front-tracking method for the motion, deformation and adhesion of circulating cells", J. of Comp. Physics, Vol. 143, pp. 346–380, 1998.
[14] Udaykumar, H.S., Kan, H.C., Shyy, W. and Tran-Son-Tay, R., "Multiphase dynamics in arbitrary geometries on fixed Cartesian grids", J. of Comp. Physics, Vol. 137, pp. 366–405, 1997.
[15] Deister, F., Rocher, D., Hirschel, E.H. and Monnoyer, F., "Adaptively refined Cartesian grid generation and Euler flow solutions for arbitrary geometries", Notes on Num. F.D., Vieweg, Braunschweig/Wiesbaden, 1998.
[16] Chai, J.C., Lee, H.S. and Patankar, S.V., "Treatment of irregular geometries using a Cartesian coordinates finite-volume radiation heat transfer procedure", Numer. Heat Transfer, Vol. 26, pp. 179–197, 1994.
[17] Jesse, J.P., Fiveland, W.A., Howell, L.H., Colella, P. and Pember, R.B., "An adaptive mesh refinement algorithm for the radiative transport equation", J. of Comput. Physics, Vol. 139, pp. 380–398, 1998.
[18] Berger, M.J. and Colella, P., "Local adaptive mesh refinement for shock hydrodynamics", J. of Comput. Physics, Vol. 83, pp. 64–84, 1989.
[19] Georgantopoulou, Chr.G., Pappou, Th.J. and Tsaggaris, S.G., "Cartesian grid generator for N-S numerical simulation of flow fields in curvilinear geometries", Proc. of the 4th GRACM, pp. 526–534, 2002.
[20] Pappou, Th. and Tsangaris, S., "Development of an artificial compressibility methodology using Flux Vector Splitting", Int. J. for Num. Methods in Fluids, Vol. 25, pp. 523–545, 1997.
[21] Georgantopoulou, C. and Tsangaris, S., "Block mesh refinement for incompressible flows in curvilinear domains", Applied Mathematical Modelling, Vol. 31, pp. 2136–2148, 2006.
[22] Georgantopoulou, G.Ch., Georgantopoulos, G.A., Pappou, I.Th. and Tsangaris, S., "Application of Cartesian grids to the solution of flow fields around curvilinear geometries" (in Greek), Recent Advances in Mechanics and the Related Fields, Vol. 1, p. 76, 2003.
Application of the finite volume method for the supersonic flow around the axisymmetric cone body placed in a free stream R. Haoui Department of Mechanical Engineering, University of Science and Technology, Bab Ezzouar, Algeria
Abstract
The aim of this study is to determine the supersonic flow parameters around an axisymmetric cone body by the finite volume method. A code is written to capture the oblique shock wave behind a cone placed in a supersonic free stream. The numerical method uses the Flux Vector Splitting method of Van Leer (Flux Vector Splitting for the Euler Equations, Lecture Notes in Physics, 170, pp. 507–512, 1982). Time stepping is used as a parameter to ensure the convergence of the solution. The CFL coefficient and the mesh size are the other two parameters used to steady the convergence (Haoui et al., Condition de convergence appliquée à un écoulement réactif axisymétrique, 16ème CFM, n°738, Nice, France, 2003). The shock wave is detached when the point angle is large or the Mach number is low. The Mach contours clearly show the evolution of the flow from the free stream to behind the body. The precision of the calculations is of order 10⁻⁸. For the same free-stream Mach number, when the point angle increases, the detached shock moves farther from the body. The computer code also captures the expansion waves on the convex part of the body. Keywords: axisymmetric, supersonic flow, cone body, finite volume, oblique shock wave.
1 Introduction
A numerical technique is proposed to predict the supersonic flow around an axisymmetric cone body. An explicit formulation with the finite volume method
doi:10.2495/CMEM090351
is used for this purpose. The numerical method uses split flux schemes. This procedure is sufficient because it gives results in excellent agreement with experimental values and convergence is assured with high precision. The oblique shock is attached to the cone, where a sudden increase of the pressure takes place. The aim is to determine the pressure distribution around the cone body model and the variation of the shock angle as a function of the Mach number and the inclination of the cone point angle. Fig. 1 shows the setup of the cone and the computational domain.
Figure 1: Configuration of the cone body, θ=20°, placed in a free stream at M=4 together with the computational domain.
2 Equations
The basic equations are those of Euler, written in 3D vector form as follows:

∂W/∂t + ∇·F = Ω,   (1)

where the vector of conservative variables W, the flux F through a surface of unit normal n = (n_x, n_y, n_z) and the source vector Ω are

W = (ρ, ρu, ρv, ρw, ρE)ᵀ,   F·n = (ρV, ρuV + p n_x, ρvV + p n_y, ρwV + p n_z, (ρE + p)V)ᵀ,   (2)

Ω = (0, 0, 0, 0, 0)ᵀ,   (3)

with V = u n_x + v n_y + w n_z. This system of equations is closed by the equation of state and the non-linear relation e = f(T), where the specific total energy is

E = e(T) + (u² + v² + w²)/2.   (4)
The first term corresponds to the translation-rotation internal energy and the second term to the kinetic energy.
3 Method of resolution
The unsteady non-linear PDEs are solved using an explicit form. The time is considered as an iterative parameter. The steady state solution is obtained when the residue of the relative variation of density is about 10⁻⁸. The two-dimensional axisymmetric form is obtained by applying the perturbation method to the three-dimensional flow problem (Haoui et al. [2]). Fig. 2 shows the mesh form used in the computational domain.
Figure 2: Mesh form (cells indexed i−1, i, i+1 in x and j−1, j, j+1 in y).

Integrating eqn (1) over a cell (i, j), we obtain the system

[W_{i,j}^{n+1} − W_{i,j}^{n}] / Δt_{i,j} · aire(i, j) + Σ_{interfaces} F_{i',j'} · n_{i',j'} mes(i', j') = Ω_{i,j} aire(i, j),   (5)

or

W_{i,j}^{n+1} = W_{i,j}^{n} − Δt_{i,j} / aire(i, j) · Σ_{interfaces} F_{i',j'} · n_{i',j'} mes(i', j') + Δt_{i,j} Ω_{i,j},   (6)

where the local time step is limited by the CFL condition

Δt_{i,j} = CFL · min[ Δx / (√(u² + v²) + a) ],   (7)

Δx is the minimum mesh size and a the speed of sound. For mes(i, j) and aire(i, j) see [2].
Interface
v
θ
u Figure 3 : Interface with the normale. where
is obtained from
, through the rotation R, as follows: cos sin
where : ,
cos
sin
sin cos
,
(9)
(10)
The transformation R is written broadly as cos sin
cos sin
sin cos
sin cos
(11)
In addition, each interface i+1/2, we know the two neighboring states i and i+1. We can thus calculate the flow F through the interface, the overall flow , being deducted from F by applying the inverse of the rotation: ,
.
(12)
This property makes it possible to use only one component of flux f (F, for example) to determine the breakdown of two-dimensional flows. In addition, this method is easier to implement and less expensive than the breakdown of two. The expressions of and in 1-D dimensional flux
WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line)
Computational Methods and Experimental Measurements XIV
383
and , where which are those of is defined as the W transformed by the rotation R, can be written in the following manner: 1 1 1
2
|
|
0
1
(13)
1
0
1 1 1
2
|
|
1
(14)
1 where
4 Results and discussion Fig. 4 shows a configuration of (59x51) nodes. In our calculation, a refined mesh of (232x201) is used. The CPU time per iteration per mesh point of the perfect gas is found to be 25 µs/iter/mp. The convergence is assured when the residue value equals10-6 (fig. 5). In fig. 6 and 7 we show the variation of the Mach number and the pressure distribution along the cone surface. The sudden variation of all parameters confirms the precision of the solution. The number of iterations is found to be around 3000. Figs. 8 and 9 show the Mach-contours around the cone body, for fig.8 the oblique shocks can be observed. In fig.9 the cone body is placed in tube shock where we can observe the reflected shock on the wall of shock tube and on the surface cone body. In the case where the semiangle of cone equals 20°, the incident shock angle decreases when the Mach number of the free stream increases. The test values exist in literature and give the shock angle in the function of Mach number of free stream [4]. The shock angle is observed with shadowgraph methods. In table 1, we observe the small difference between the experimental (Shapiro [4]) and computational values (also see fig. 10). In the case when a semi-angle of cone is chosen to equal 60°, the shock is detached at M=2 (fig. 11). If the Mach number of a free stream increases, the shock is nearer to the cone body. In table 2, we have some examples, also see fig. 12. The mesh grid of (174x268) points and CFL=0.4 are recommended. The number of iterations is about 10000. WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line)
384 Computational Methods and Experimental Measurements XIV
Figure 4: Computational domain.
Figure 5: Variation of the residue.
Figure 6: Mach number.
Figure 7: Ratio of pressure distribution.
Figure 8: Mach contours.
Figure 9: Axisymmetric cone body placed in tunnel. Mach=2.0, angle=20°.
Table 1: Attached shock angle for θ=20°.

Mach number   Computational   Experimental
1.4           53.13°          53°
1.6           46.40°          46°
1.8           42.27°          42°
2.0           38.88°          39°
Figure 10: Shock angle.
Figure 11: Mach contours.
Table 2: Detached shock position.

Mach number   Angle θ   Shock position
2.5           45°       1 mm
2.5           60°       2.7 mm
2.0           45°       3 mm
2.0           60°       20 mm
1.5           45°       4 mm
1.5           60°       42 mm
Figure 12: Position of detached shock.
5 Conclusion
The finite volume method applied to a supersonic flow problem gives good results and captures the oblique shocks, which are physically correct. The angle of the oblique shock wave depends on the cone point angle and the Mach number of the free stream. The computational results obtained with the chosen method are in good agreement with experimental observations. Our calculation code was also tested in a case where the cone is placed in the shock tube; the shock reflected on the wall of the tube is clearly visible. When the cone point angle exceeds a maximum value, the shock wave is detached from the cone body, confirming the consistency of the code and the precision of the calculations. When the cone point angle increases, the detached shock moves away from the blunt body. Respecting the boundary conditions, especially near the wall, plays a decisive role in the accuracy of the calculations, particularly in the case of reactive flow around a blunt body.
References
[1] Van Leer, B., Flux Vector Splitting for the Euler Equations, Lecture Notes in Physics, 170, pp. 507–512, 1982.
[2] Haoui, R. et al., Condition de convergence appliquée à un écoulement réactif axisymétrique, 16ème CFM, n°738, Nice, France, 2003.
[3] Haoui, R., Zeitoun, D. & Gahmousse, A., Chemical and vibrational nonequilibrium flow in a hypersonic axisymmetric nozzle, IJTS, Vol. 40, article 8, pp. 787–795, 2001.
[4] Shapiro, A.H., The Dynamics and Thermodynamics of Compressible Fluid Flow, Vol. II, The Ronald Press Company, New York, p. 664, 1954.
Some aspects and aerodynamic effects in repairing battle damaged wings

S. Djellal¹ & A. Ouibrahim²
¹Fluid Mechanics Laboratory, EMP, Bordj El Bahri, Algiers, Algeria
²Laboratoire Energétique, Mécanique et Matériaux, Université de Tizi-Ouzou, Algeria
Abstract

This paper presents an experimental investigation conducted to estimate the aerodynamic effects of aircraft battle-damage repair patches, with particular attention to the wings. Three repair schemes are considered: lower (repair at the intrados), upper (repair at the extrados) and full repair (both intrados and extrados at the same time). Preliminary results clearly showed that repair patches improve the aerodynamic performance of the damaged aircraft model. The full repair achieves the best recovery of the aerodynamic performance, provided that the patch is thin enough. Repairs performed on only one surface (upper or lower) were also shown to provide significant aerodynamic improvements. The improvement decreases as the patch thickness increases; above a certain thickness, the fineness falls below that of the damaged unrepaired model. Keywords: aerodynamic performance, battle-damage, fineness, patch, repair.
1 Introduction
Military airplanes may return from combat missions with some form of battle damage affecting their aerodynamic performance. Damage may also occur on the ground, between two operations; in operational scenarios, this damage should be treated as battle damage regardless of its origin. To ensure availability, airplanes must be restored as quickly as possible, through procedures and repair techniques that differ substantially from those used in peacetime. Successful repairs of battle damage
WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line) doi:10.2495/CMEM090361
result from quick and accurate assessment, coupled with simple and efficient repair techniques. In a similar way, we should point out the interest in airplane safety that arose after the Aloha accident [1, 2] for civil aviation and after the Royal Australian Air Force (RAAF) Macchi accident [2] for military aviation, both associated with the presence of cracks [3]. If the number of cracks is limited and the crack or hole size is small compared to the damaged component, repairs are less expensive; otherwise the component should be replaced [4]. Possible repairs include replacement of a fastener, filling, damage removal or patching. Metallic reinforcement with rivets and bolts [5] is the traditional method for repairing cracked airplane structures. The use of metallic sheets involves problems such as fatigue, galvanic corrosion and difficult inspection after repair [6]. Baker [2] has been a pioneer in the design and assessment of repairs since 1970 at the Australian Research Laboratory (ARL), where he worked for the RAAF. Among his colleagues, Jones introduced adhesively bonded repairs into military aviation. Unfortunately, very little work has been devoted to the aerodynamic effects of repairs, and only for the case of two-dimensional wings [7]. The objective of this investigation is to obtain estimates of the aerodynamic effects of aircraft battle-damage repair (ABDR) patches for an aircraft model. The investigation concerns the influence of different repair schemes and thicknesses.
2 Repair modelling
The effect of repair on the performance of the model is investigated with a battle context in mind, where in some combat situations it may be appropriate to carry out rapid temporary repairs. Such repairs may consist of simple patches installed over the battle-damaged area to cover perforations in the skin. Hence, three repair schemes are considered in our study:
• Upper repair: extrados (upper surface) repair with no lower surface repair.
• Lower repair: intrados (lower surface) repair with no upper surface repair.
• Full repair: repairs on both upper and lower surfaces of the wing.
The aircraft model used is the same as the one studied by Djellal [8] concerning the effect of the damage. In this study, we consider two damage holes of the same relative diameter (Table 1), representing 40% of the local chord. These holes are located at different spanwise wing positions: in the root region and between the tip region and the m.a.c. (mean aerodynamic chord), labelled position 1 and position 2 respectively in Fig. 1. Both are centred at mid-chord. The hole diameter and position 1 were selected so as to maximize the performance loss due to the damage [8], hence expecting a larger improvement upon repair. Position 2 is selected for comparison purposes.
Repair patches are made of aluminium sheet and are circular in shape. A previous study on repaired two-dimensional wings showed that a circular patch recovers slightly more of the aerodynamic performance lost to damage than a square one [9]. Patches were attached to the surface of the wing with 0.2 mm double-sided sticking tape. Table 1 gives the damage diameters subjected to repair as well as the dimensions of the patches used for the two positions.
Figure 1: Model and repair patches.

Table 1: Damage and repair dimensions.

                          Position 1   Position 2
Damage diameter           24.5 mm      18.5 mm
Circular patch diameter   35 mm        25 mm

3 Experimental program
The experimental setup used (Fig. 2) has been presented in detail elsewhere [8] and is described here only briefly. All tests were conducted in the subsonic closed-return wind tunnel at the Polytechnic School of Algiers, which has a 0.6 m diameter test section and is capable of wind speeds up to 48 m/s. For a velocity of 45 m/s, the Reynolds number, based on the wing mean chord, was 1.5×10⁵. Although this value is much smaller than the full-scale cruise flight value (3.9×10⁶), it is still not below the critical values, so the flow behaviour and the influence of the repairs can be understood and explained. Forces and moments were measured on a three-component balance (Fig. 2). Each component is measured to an accuracy of 0.05%. The acquisition system gives repeatability of the lift coefficient CL to within 0.5%, and of the drag coefficient CD to within 0.2%.
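As a quick consistency check on the quoted test conditions, the Reynolds number relation can be sketched as follows. The mean-chord and air-viscosity values are assumptions chosen to reproduce the reported Re = 1.5×10⁵; neither is stated explicitly in the paper.

```python
# Reynolds number based on the wing mean chord: Re = V * c / nu.
# nu (air kinematic viscosity) and the 0.05 m chord are assumed values,
# chosen so that the 45 m/s tunnel speed reproduces the reported Re.
def reynolds(velocity_m_s, chord_m, nu_m2_s=1.5e-5):
    return velocity_m_s * chord_m / nu_m2_s

re_tunnel = reynolds(45.0, 0.05)
print(f"Re = {re_tunnel:.3g}")  # Re = 1.5e+05
```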
Figure 2: Experimental setup.

4 Undamaged and damaged states
Before studying the influence of repairs, tests on the undamaged and damaged model were carried out. A convenient way to present the results and compare the undamaged, damaged and repaired states is to plot the fineness, i.e. the ratio of the lift to drag coefficients (CL/CD), as a function of the angle of incidence, rather than the lift-versus-drag polar in which the angle of incidence does not explicitly appear. The results, given in Fig. 3, are used as a reference in the following sections. Besides the aerodynamic performance losses [8], the effect of the damage is also characterized by a shift of the angle of maximum fineness towards 10°, instead of 8° for the undamaged case. This shift arises because the damage hole feeds the upper surface with additional air flow from the lower surface, disturbing the pressure field distribution. This disturbance considerably affects CL and CD.
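The fineness curve described above can be computed directly from measured coefficient pairs. A minimal sketch follows; the CL/CD values are illustrative placeholders (shaped so the maximum falls at 8°, as reported for the undamaged wing), not data from the paper.

```python
def fineness_curve(alpha_deg, cl, cd):
    """Return the fineness f = CL/CD at each incidence and the angle of
    maximum fineness."""
    f = [l / d for l, d in zip(cl, cd)]
    i_max = max(range(len(f)), key=f.__getitem__)
    return f, alpha_deg[i_max]

# Illustrative values only, not measurements from the paper.
alpha = [0, 2, 4, 6, 8, 10]
cl = [0.10, 0.25, 0.40, 0.55, 0.70, 0.78]
cd = [0.020, 0.032, 0.050, 0.068, 0.082, 0.105]
f, alpha_max = fineness_curve(alpha, cl, cd)
print(alpha_max)  # 8
```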
5 Repaired states
The influence of repairs on the aerodynamic performance at position 1 of the wing is considered through two different repair parameters: the scheme and the thickness.

5.1 Influence of repair schemes

Three repair configurations were studied for each damage scenario, namely full, upper and lower repairs. In this section, we compare the effects of these configurations for the damage located at position 1. The thickness of the circular patch used here was 0.2 mm.
Figure 3: Influence of damage on fineness (undamaged, damaged at position 1, damaged at position 2).
Figure 4 shows that all repair schemes improved the fineness values. The full repair, as expected, provides the best improvement over most of the incidence range, while the upper and lower repair curves are practically identical. As also shown in Fig. 4, all repairs restore the undamaged angle of maximum fineness (8°). At low incidence (α < 6°), the fineness values are close to those of the undamaged state, while above the angle of maximum fineness (α > 8°) the improvement (increment) decreases up to α = 14° (stall angle) [8]. In the lower and upper schemes, repairing only one surface does not completely restore the geometry of the wing surfaces, so the hole remaining on the unrepaired surface acts as a circular cavity, inducing a pressure drag increment. The flow structure around such cavities is a complex subject [10] and is beyond the scope of this work.
Figure 4: Lift to drag ratio due to repairs (undamaged, damaged, upper, lower and full repairs).
To quantify the recovery gain ∆f_recov (in percent) of the fineness from the damaged state, taken here as the reference, we define, at a given incidence and for each type of repair, ∆f_recov/repairs as follows:

\[ \Delta f_{\mathrm{recov/repairs}}(\%) = \left. \frac{f_{\mathrm{repair\ scheme}} - f_{\mathrm{damaged}}}{f_{\mathrm{damaged}}} \right|_{\alpha} \qquad (1) \]

Thus, Fig. 5, obtained from Fig. 4 and Eq. (1), clearly shows:
• All the repair schemes improve the fineness with respect to the damaged state.
• The repair type that comes closest to the undamaged values is the full repair. The recovered fineness ∆f_recov/full repair varies in this case from 35% at 0°, up to 45% at 2°, and then decreases down to 10% at 12°.
• The upper and lower repairs display the same values, ∆f_recov/upper repair and ∆f_recov/lower repair, over the whole range of incidence, lower than ∆f_recov/full repair of the full repair.
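Eq. (1) reduces to a one-line computation. A sketch follows; the numbers in the example are illustrative, chosen only to match the order of magnitude of the ~45% peak recovery reported above.

```python
def fineness_recovery_pct(f_repair, f_damaged):
    """Eq. (1): percentage fineness recovery relative to the damaged
    state, both finenesses taken at the same incidence alpha."""
    return 100.0 * (f_repair - f_damaged) / f_damaged

# Example: an (illustrative) damaged fineness of 5.0 restored to 7.25.
print(fineness_recovery_pct(7.25, 5.0))  # 45.0
```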
Figure 5: Percentage of recovery in fineness due to repairs (upper, lower and full repairs).
5.2 Influence of repair thickness

In this section, we consider a circular patch with a full scheme over the damage hole located at position 1. Measurements were carried out for four plates, of thickness h = 0.2, 0.6, 1 and 1.4 mm. To generalise our results, we use a nondimensional representation of the thickness: the ratio of each thickness to the local chord. These relative thicknesses will henceforth be referred to as ē1, ē2, ē3 and ē4, equal respectively to 0.3%, 1%, 1.6% and 2.3%. Although some of these thicknesses may appear large with respect to the real size of the wing, they are considered in the present study in order to determine the largest acceptable repair thickness. It can be noticed in Fig. 6 that all the thicknesses improve the fineness for angles of incidence up to 14°. Moreover, the fineness improvement decreases as the relative thickness ē increases. For angles of incidence larger than 14°, all the thicknesses used lead to fineness values below that of the unrepaired wing. In short, the larger the thickness, the smaller the range of incidence angles over which the fineness improvement is worthwhile compared to the unrepaired state. In order to grasp the influence of the thickness on each wing surface (extrados and intrados), we have also considered the lower and upper repairs, where we again find that varying the thickness yields different improvements for each repair scheme. The lower scheme is the least sensitive to the increase of the patch thickness; for the largest value of ē it even exhibits better values than the two other schemes, including the full one (Fig. 7).
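The relative thicknesses quoted above can be reproduced by taking the local chord implied by Table 1, where the 24.5 mm damage diameter is stated to be 40% of the local chord. The back-calculated chord of 61.25 mm is our inference, not a value given explicitly in the paper.

```python
# Local chord inferred from Table 1: 24.5 mm is 40% of the local chord.
C_LOCAL_MM = 24.5 / 0.40  # = 61.25 mm (inferred, not stated in the text)

def relative_thickness_pct(h_mm, chord_mm=C_LOCAL_MM):
    """Relative thickness e_bar = h / c_local, expressed in percent."""
    return 100.0 * h_mm / chord_mm

e_bars = [round(relative_thickness_pct(h), 1) for h in (0.2, 0.6, 1.0, 1.4)]
print(e_bars)  # [0.3, 1.0, 1.6, 2.3] -- matches e1..e4 quoted in the text
```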
Figure 6: Influence of repair thickness on the fineness (ē1 to ē4, damaged and undamaged states).
It can be concluded that the full scheme gives lower performance than the other two as the patch thickness increases. This can be explained by the drag induced by the presence of the patches on both surfaces of the wing, compared to the upper and lower repairs where only one surface contributes to the drag increase.
6 Conclusion
The present investigation has addressed two aspects of the influence of battle damage repair patches on the aerodynamic performance of an aircraft model: the repair scheme and the thickness of the patch. First of all, it is shown that, for all repair schemes, battle damage repair patches can achieve substantial reductions in drag coefficient as well as increases in lift coefficient, close to those of the undamaged wing depending on the incidence angle.
Figure 7: Effect of repair thickness on fineness for the different schemes (full, upper and lower), at relative thicknesses of 0.3% and 2.3%.
Second, the full repair scheme appears, globally, to be the most appropriate for recovering the aerodynamic performance lost through battle damage, provided that the patch is thin enough. Repairs to just one surface (upper or lower) also produce significant improvements in lift and drag coefficients. The prevention of flow through the wing seems to be the major factor behind the improvement. Third, increasing the patch thickness reduces the improvement, so that, for a large enough patch thickness, the repair no longer brings any improvement; rather the opposite, relative to the unrepaired (unpatched) model. Consequently, repair by patches involves a compromise between the improvement due to the removal of flow through the damage hole and the performance loss due to the plates installed on the wing to close the hole. In conclusion, to repair battle-damaged surfaces on a wing, it is preferable to use the full scheme with a circular patch as thin as possible.
References
[1] Pitt P., Multiple-site and Widespread Fatigue Damage in Aging Aircraft, Engineering Failure Analysis, Vol. 4, December 1997, pp. 237–257.
[2] Jones R., Assessing and Maintaining Continued Airworthiness in the Presence of Wide Spread Fatigue Damage: an Australian Perspective, Engineering Fracture Mechanics, Vol. 60, No. 1, 1998, pp. 109–130.
[3] Jones R. and Chui W. K., Composite Repairs to Cracks in Thick Metallic Components, Composite Structures, Vol. 44, January 1999, pp. 17–29.
[4] Klug J. C. and Sun C. T., Large Deflection Effects of Cracked Aluminium Plates Repaired with Bonded Composite Patches, Composite Structures, Vol. 42, July 1998, pp. 291–296.
[5] Seo D. C. and Lee J. J., Fatigue Crack Growth Behaviour of Cracked Aluminium Plate Repaired with Composite Patch, Composite Structures, Vol. 57, July 2002, pp. 323–330.
[6] Avdelidis N. P., Moropoulou A. and Marioli Riga Z. P., The Technology of Composite Patches and their Structural Reliability Inspection Using Infrared Imaging, Progress in Aerospace Sciences, Vol. 39, May 2003, pp. 317–328.
[7] Render P. M., De Silva S., Walton A. J. and Mani M., Experimental Investigation into the Aerodynamics of Battle Damaged Airfoils, Journal of Aircraft, Vol. 44, No. 2, 2007, pp. 539–549.
[8] Djellal S., Ouibrahim A. and Render P. M., The Influence of Battle Damage on the Aerodynamic Characteristics of a Model of an Aircraft, WSEAS Transactions on Fluid Mechanics, Vol. 1, January 2006, pp. 89–95.
[9] Djellal S., Ouibrahim A. and Render P. M., Effects of Battle Damage Repairs on the Aerodynamics of a Wing, 2èmes Journées Nationales sur l'aéro-Gazodynamique et Turbomachines, 5–6 December 2005, Oran, Algeria.
[10] Rowley C. W., Juttijudata V. and Williams D. R., Cavity Flow Control Simulations and Experiments, 43rd AIAA Aerospace Sciences Meeting, Reno, NV, January 10–13, 2005.
Section 7 Heat transfer and thermal processes
Natural and mixed convection in inclined channels with partial openings

A. Andreozzi¹, B. Buonomo², O. Manca² & S. Nardini²
¹DETEC - Università degli Studi di Napoli Federico II, Italy
²DIAM - Seconda Università degli Studi di Napoli, Aversa (CE), Italy
Abstract

Natural and mixed convection of air in an inclined channel, with the lower wall heated at uniform heat flux and with an obstacle at either the entrance or the exit of the channel, is investigated: natural convection experimentally and mixed convection numerically. The study is carried out for several heat fluxes. In mixed convection the Reynolds number is equal to 150, which lies in the laminar regime. The natural convection investigation is carried out at Rayleigh numbers Ra = 6.68×10⁵ and 2.67×10⁶. The effect of the obstacle position and height is investigated by flow visualization and wall temperature distribution. Keywords: natural convection, mixed convection, numerical and experimental analysis, inclined channels.
1 Introduction
Natural and mixed convection heat transfer within horizontal or inclined channels and open-ended cavities has been studied widely in engineering and science applications. This is due to its importance in many applications, such as electronic cooling, fire research, solar collector systems, chemical vapor deposition (CVD) systems and nuclear reactors [1–5]. For inclined and horizontal channels heated from below, the buoyancy force can induce secondary flow that enhances the local heat transfer. Moreover, the onset point of the secondary flows is important because it delineates the region after which the two-dimensional laminar flow becomes three-dimensional and a transition from laminar to turbulent flow is observed [6–15]. In fact, fluid layers heated from below can develop thermal instabilities due to the resulting top-heavy situation. The buoyancy forces that overcome the stabilizing
effects of viscous and thermal diffusion cause the onset of secondary motions. The heating of a fluid from the bottom plate in any case involves the development of secondary motions in spanwise sections. For these configurations the secondary motions imply that the flow is three-dimensional, and different patterns of secondary motions can be detected for different aspect ratios and inclination angles of the plates with respect to the gravity vector. Understanding, manipulating and controlling the secondary motions in open-ended cavities is important, and there is a need for further numerical and experimental investigations of three-dimensional natural and mixed convection in cavities, particularly in horizontal and inclined channels. A number of studies investigated the instability of natural convection over inclined or slightly inclined plates heated at uniform wall temperature or heat flux, as reviewed in [6–8]. Both natural and mixed convection between parallel plates have been thoroughly investigated recently, as shown in [9–11, 14–16] for mixed convection and in [12, 13, 17] for natural convection. The possible flow stabilization or reduction of thermal instabilities has been investigated mainly in mixed convection [11, 18–20]. An experimental investigation of the mixed convection of air in a channel with a bottom heated horizontal wall was carried out in [18] to study the possible stabilization and elimination of the buoyancy-driven unstable longitudinal, transverse and mixed vortex flow by tapering the top wall. An experimental investigation of the mixed convection of air in a bottom heated horizontal flat duct with top plate heating was accomplished in [11], again to study the possible stabilization and elimination of the buoyancy-driven unstable longitudinal, transverse and mixed vortex flow.
Experiments on the onset and development of the buoyancy-driven secondary air flow and the enhancement of heat transfer in horizontal convergent and divergent channels were carried out in [19]. An experimental flow visualization, combined with transient temperature measurements, to explore the possible stabilization of the buoyancy-driven vortex flow in mixed convection of air in a bottom heated horizontal flat duct by placing a rectangular solid block on the duct bottom was carried out in [20]. It seems that the control of secondary motion in natural and mixed convection in horizontal or inclined parallel-plate channels by a partial opening at the inlet or outlet section has not been investigated in depth. In this paper an experimental investigation is conducted on natural convection in inclined channels and a numerical study is carried out on mixed convection in horizontal channels, both heated from below with the inlet or outlet section partially opened.
2 Experimental apparatus
The experiments were carried out in natural convection. The investigated configuration was a channel with the bottom wall heated at uniform heat flux and unheated top and side walls. The heated wall consisted of 400×530 mm sandwiched phenolic fiberboard plates. The top and side walls were made of Plexiglas. The channel spacing b was measured to an accuracy of ±0.25 mm by a dial-gauge equipped caliper. An obstruction, made of a glass plate 2.0
mm thick, was located on the lower wall either at the inlet or at the outlet section as shown in Fig. 1. The cavity was 400 mm long, 475 mm wide and 40.0 mm high and was open to the ambient along its right and left edges. In order to reduce conductive heat losses, a 150 mm Polystyrene block was affixed to the rear face of the heated wall. The plate facing the channel was 3.2 mm thick and its surface adjacent to the internal air was coated with a 35 μm thick nickel-plated copper layer. The low emissivity of nickel (0.05) minimized the radiation effects on heat transfer. The rear plate was 1.6 mm thick. Its back surface was coated with a 17.5 μm thick copper layer, which was the heater. The plate was heated by passing a direct electrical current through the heater, which had a serpentine shape. Its runs were 19.6 mm wide with a gap of nearly 0.5 mm between each one, giving each heater a total length of 9.0 m. The expected electrical resistance was 0.50 Ω. The narrow gaps between the runs, together with the relatively high thickness (4.8 mm) of the resulting low-conductivity fiberglass, were suitable to maintain a nearly uniform heat flux at the plate surface. The electrical power supplied by the heater was evaluated by measuring the voltage drop across the plates and the current passing through them. To avoid electrical contact resistances, thick copper bars, soldered both to the electric supply wire and to the ends of the heater, were bolted together. The dissipated heat flux per board was evaluated to an accuracy of ±2%. The entire apparatus was located within a room, sealed to eliminate extraneous air currents.
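The quoted 0.50 Ω heater resistance can be roughly cross-checked from the stated trace geometry. This is a sketch using a textbook copper resistivity; the resistivity value is an assumption, so the result is only indicative.

```python
RHO_CU = 1.7e-8  # ohm*m, copper resistivity near room temperature (assumed)

def trace_resistance(length_m, width_m, thickness_m, rho=RHO_CU):
    """R = rho * L / A for a conductor of rectangular cross-section."""
    return rho * length_m / (width_m * thickness_m)

# Serpentine heater: 9.0 m long, 19.6 mm wide, 17.5 um thick copper layer.
r_heater = trace_resistance(9.0, 19.6e-3, 17.5e-6)
print(f"R ~ {r_heater:.2f} ohm")  # of the same order as the quoted 0.50 ohm
```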
Figure 1: Investigated configurations in natural convection: (a) inlet obstacle; (b) outlet obstacle.
The wall temperatures were measured by 36 ungrounded 0.50 mm OD iron-constantan thermocouples embedded in each fiberboard plate and in contact with the outer layer. They were located at twelve longitudinal stations, at three different z values. Fifteen thermocouples were affixed to the rear surface of the plates and embedded in the Polystyrene to enable the evaluation of conductive heat losses. The ambient air temperature was measured by a shielded thermocouple placed near the leading edge of the channel. A Kaye Instruments K170 ice point was used as a reference for the thermocouple junctions. A National Instruments SCXI data acquisition module and a PC were used for data collection and reduction. The data acquisition was performed through the Labview™ software. Calibration of the temperature measuring system showed an estimated precision of the thermocouple-readout system of ±0.1°C.
Figure 2: a) Sketch of the smoke generation arrangement; b) sketch of the visualization arrangement.
Smoke for flow visualization was generated by burning incense sticks in a steel tube connected to a compressor. The smoke was injected through a glass heat exchanger to reduce its temperature and then sent into a plenum. Its temperature, measured by a thermocouple, turned out to be close to that of the ambient air entering the cavity. It was then driven to the test section through a small slot situated under the leading edge of the bottom plate, along the plate width. A sketch of the apparatus is reported in Fig. 2: the longitudinal view of the smoke generation arrangement in Fig. 2a, and the visualization set-up in Fig. 2b. Preliminary tests were carried out to determine the plenum location so as not to interfere with the air flow at the inlet section. The visualization was made possible by means of a laser sheet generated by a He-Ne laser source. The laser sheet was produced by placing a mirror near the end of the test section at an angle of 45° to the direction of the main flow, behind which a cylindrical lens was placed to enlarge the beam as needed. Small adjustments were allowed by means of a micrometer screw system, in order to take photographs at different locations along the z axis. The same arrangement was used to obtain pictures in the yz plane at several x locations. The still camera was a digital Nikon D100.
3 Geometrical description and numerical procedure
The numerical investigation was carried out on mixed convection in a horizontal channel. The domain consisted of a principal channel and two channels with adiabatic walls, one upstream of the principal channel and the other downstream (Fig. 3). The principal channel was made up of a uniformly heated horizontal wall at uniform heat flux, a parallel adiabatic wall located above, and two adiabatic vertical side walls. The geometrical parameters of the problem were the distance between the horizontal walls, b, the length of the heated plate, L, the width of the channel, W, and the height of the obstacle, h. The characteristic dimensionless numbers were the Reynolds, Rayleigh and Richardson numbers, based on the distance b between the horizontal plates. The
aim of this paper was to investigate the effect of the Reynolds and Rayleigh numbers on the thermal and fluid dynamic behavior of mixed convection in a horizontal channel heated from below, and the effect of an obstacle in the inlet or outlet section. The mixed convective flow in the horizontal channel was considered incompressible. The analysis was transient and the thermophysical properties were assumed constant with temperature, except for the density, as suggested by the Boussinesq approximation. The thermophysical properties of the fluid were evaluated at the ambient temperature, T0, equal to 300 K.
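The dimensionless groups based on b can be sketched as below. The air properties at 300 K are approximate textbook values, and the exact flux-based Rayleigh number definition used in the paper is not given, so these numbers are indicative only and need not match the reported values.

```python
# Approximate properties of air at T0 = 300 K (assumed values).
G = 9.81            # m/s^2, gravitational acceleration
BETA = 1.0 / 300.0  # 1/K, ideal-gas expansion coefficient at T0
NU = 1.6e-5         # m^2/s, kinematic viscosity
ALPHA = 2.2e-5      # m^2/s, thermal diffusivity
K = 0.026           # W/(m K), thermal conductivity

def rayleigh_flux(q_w, b):
    """One common flux-based form: Ra = g*beta*q_w*b^4 / (nu*alpha*k)."""
    return G * BETA * q_w * b**4 / (NU * ALPHA * K)

def richardson(ra, re, pr=0.71):
    """Ri = Gr / Re^2 = Ra / (Pr * Re^2)."""
    return ra / (pr * re**2)

ra = rayleigh_flux(150.0, 0.040)  # q_w and b from the numerical setup
print(f"Ra ~ {ra:.2e}, Ri ~ {richardson(ra, 150.0):.0f}")
```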
Figure 3: Configuration numerically investigated.
The commercial CFD code Fluent 6.2 [21] was employed to solve the governing equations. The imposed boundary conditions were: uniform heat flux and no-slip condition on the lower plate; adiabatic wall and no-slip condition on the other solid walls; pressure inlet or pressure outlet conditions on the open boundaries. Computations were made for qw = 150 W/m², L = 400 mm, b = 40.0 mm, and W = 500 mm. The grid was uniform inside the horizontal channel, with 51×21×41 points, whereas the node spacing followed an exponential law inside the auxiliary channels. This grid was the right compromise between solution accuracy and computational time. A time step of 10⁻¹ was employed after tests on different time steps. This analysis was performed for the highest Rayleigh number considered, in order to obtain a test valid also for lower Ra values.
4 Results and discussion
Experimental investigations were carried out for qw = 60 and 240 W/m², an inclination angle of 10°, b = 40.0 mm and h = 10.0 and 20.0 mm (h/b = 0.25 and 0.50). Fig. 4 reports the visualization of the flow along the longitudinal section, with and without obstacle, for qw = 60 and 240 W/m² (Ra = 6.68×10⁵ and 2.67×10⁶). When the obstacle is absent (h/b = 0), the fluid moves adjacent to the lower wall up to about 1/3 L, where it separates and moves towards the upper wall; it then advances close to the unheated upper wall. For the lowest value of the Rayleigh number, separation occurs closer to the exit section than for the highest Rayleigh number. When the inlet obstacle is present, a recirculation zone develops and the fluid reattaches to the lower wall at a distance from the inlet section of about b for h/b = 0.25, whereas for h/b = 0.5 this distance is much larger. When the obstacle is at the exit section, separation occurs similarly to the case with no
obstacle, even if for Ra = 2.67×10⁶ the x value at which separation takes place is larger. However, the presence of the obstacle slows down the flow. For the highest Rayleigh number, separation seems to take place further from the inlet section.

Figure 4: Flow visualization in the longitudinal section z = 0 for the two Rayleigh numbers (Ra = 6.68×10⁵ and 2.67×10⁶): h/b = 0; h/b = 0.25 and 0.50 with inlet obstacle; h/b = 0.25 and 0.50 with outlet obstacle.
Separation from the lower wall gives rise to secondary motions which develop along the channel. They are very clear in the flow visualizations of the channel transversal sections. Fig. 5 reports flow visualization pictures for the configurations without obstacle and with an obstacle at the outlet section with h/b = 0.5. The considered sections are at 100, 200 and 395 mm from the channel inlet section. In the section at x = 100 mm and Ra = 6.68×10⁵, secondary motions start to develop; this is clearer when the obstacle is absent. For the highest value of Ra, plumes are present at x = 100 mm, extending no more than b/3 from the lower wall. When the obstacle is present, secondary motions are absent at x = 100 mm. At x = 200 and 395 mm secondary motions occur in each considered configuration. They are less developed and less intense when the outlet obstacle is present.
Computational Methods and Experimental Measurements XIV
Ra=6.68x105
407
Ra=2.67x106
no obstacle: h/b=0 and x=100 mm
no obstacle: h/b=0 and x=200 mm
no obstacle: h/b=0 and x=395 mm
outlet obstacle: h/b=0.50 and x=100 mm
outlet obstacle: h/b=0.50 and x=200 mm
outlet obstacle: h/b=0.50 and x=395 mm Figure 5:
Flow visualization in several transversal sections and for two Rayleigh numbers.
Results of the numerical investigation on mixed convection in a horizontal channel are reported in Figs. 6-8 for Re = 150 and Ra = 1.67×10^6, with and without an inlet or outlet obstacle with h/b = 0.5. Velocity vectors in the longitudinal section z = 0 mm, in Fig. 6, show the differences among the considered configurations. In particular, for the no-obstacle case, in Fig. 6a, the flow separates from the lower wall at about x = 250 mm and, at this location, a recirculation zone is evident close to the upper wall. When an obstacle is present at the inlet section, in Fig. 6b, a vortex behind it is noticed and, at about a distance b from the channel inlet section, the flow rejoins the lower wall; in this configuration the flow does not separate from the lower wall. When an obstacle is located at the exit, in Fig. 6c, the flow separates from the lower wall at about x = 340 mm and, hence, close to the exit. Temperature fields in the transversal sections at x = 0, 100, 200, 300 and 400 mm are presented in Figs. 7a-7c. When there is no obstacle, in Fig. 7a, secondary motions are developing in the section at x = 100 mm close to the side walls, whereas at x = 200 mm secondary effects are clear in the whole section. At x = 300 mm secondary motions are present in the entire section and at x = 400 mm the temperature is quasi-uniform in the section. When an obstacle is present at the inlet section, in Fig. 7b, plumes are developing at x = 100 mm, whereas secondary motions are absent in the side zones. The temperature close to the heated wall seems to be less uniform than in the configuration without obstacle. At higher x values the plumes appear more developed. In the configuration with an obstacle at the channel exit, in Fig. 7c, the channel behavior, in terms of air temperature and secondary motions, seems very similar to that of the channel without obstacle, reported in Fig. 7a. In fact, at the channel inlet section temperature increases are not evident, and at x = 100 mm the air temperature increases only in the central zone of the transversal section, whereas secondary motions develop in the zone adjacent to the side walls.

Figure 6: Velocity vectors in the longitudinal plane z = 0 mm for Re = 150 and q_w = 150 W/m^2: (a) no obstacle; (b) inlet obstacle; (c) outlet obstacle.
Finally, in Fig. 8 the surface temperature on the heated wall is shown for the configurations with an obstacle. The large difference between the two cases is clear. In fact, when the obstacle is located at the inlet section, in Fig. 8a, warmer zones are present close to the entrance, whereas when the obstacle is at the outlet section higher temperature values occur in the zone between x = 100 mm and 200 mm.
5 Conclusions
An experimental investigation of natural convection in air, in an inclined channel with the lower wall heated at uniform heat flux and an obstacle located either at the inlet or at the outlet section of the channel, was carried out. The flow visualization results highlighted the effect of secondary motions along the heated wall
Figure 7: Temperature fields in transversal sections for Re = 150 and q_w = 150 W/m^2: (a) no obstacle; (b) inlet obstacle; (c) outlet obstacle.
Figure 8: Temperature field on the heated wall for Re = 150 and q_w = 150 W/m^2: (a) inlet obstacle; (b) outlet obstacle.
and showed that the separation from the lower heated plate strongly depended on the location and height of the obstacle. When the obstacle is at the channel entrance, a recirculation zone develops behind it; when the obstacle is at the exit section, the flow motion is similar to that in the case with no obstacle. Furthermore, a numerical investigation was carried out on mixed convection in a horizontal channel with and without an obstacle. For the no-obstacle case a recirculation zone was present in the central part of the channel, whereas for the case of the obstacle at the entrance the recirculation was behind it. When the obstacle was at the exit, no recirculation was detected.
Acknowledgement

This work was funded by the Regione Campania Legge 5 and SUN 2007 research grants.
Nomenclature

b     channel spacing, m
g     acceleration due to gravity, m s^-2
Gr    Grashof number, Gr = g β q_c b^4 / (k ν^2)
h     height of the obstacle, m
k     thermal conductivity, W m^-1 K^-1
L     length of the heated wall, m
Pr    Prandtl number
q     heat flux, W m^-2
Ra    Rayleigh number, Ra = Gr Pr
Re    Reynolds number, Re = u_i b / ν
T     temperature, K
u_i   average velocity at the inlet section of the channel, m s^-1
x     horizontal coordinate distance, m
y     vertical coordinate distance, m
z     coordinate along the width of the plates, m
W     width of the plate, m

Greek symbols
β     volumetric coefficient of expansion, K^-1
θ     inclination angle from the horizontal, °
ν     kinematic viscosity, m^2 s^-1

Subscripts
c     convective
w     wall
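As a worked example of the Gr and Ra definitions above (an illustrative sketch: the air property values below are typical ambient-air assumptions, not values taken from the paper):

```python
# Assumed air properties near ambient conditions (not from the paper).
g = 9.81          # m/s^2, acceleration due to gravity
beta = 3.3e-3     # 1/K, volumetric expansion coefficient
nu = 1.6e-5       # m^2/s, kinematic viscosity
k = 0.026         # W/(m K), thermal conductivity
Pr = 0.71         # Prandtl number

q = 60.0          # W/m^2, wall heat flux (one of the experimental values)
b = 0.040         # m, channel spacing

# Gr = g * beta * q_c * b^4 / (k * nu^2); Ra = Gr * Pr
Gr = g * beta * q * b**4 / (k * nu**2)
Ra = Gr * Pr
print(f"Gr = {Gr:.3g}, Ra = {Ra:.3g}")  # order 10^5, comparable to the Ra quoted in the text
```

With film-temperature properties instead of these ambient-air guesses, the result shifts toward the Ra = 6.68×10^5 quoted in the text.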
References

[1] Gebhart, B., Jaluria, Y., Mahajan, R. & Sammakia, B., Buoyancy-Induced Flows and Transport, Hemisphere Publishing Corporation, Washington, D.C., 1988.
[2] Manca, O., Morrone, B., Nardini, S. & Naso, V., Natural convection in open channels, in Computational Analysis of Convection Heat Transfer, WIT Press, Southampton, Boston, 2000.
[3] Bianco, N., Morrone, B., Nardini, S. & Naso, V., Air natural convection between inclined parallel plates with uniform heat flux at the walls, Journal of Heat and Technology, 18(2), pp. 23-45, 2000.
[4] Kasayapanand, N. & Kiatsiriroat, T., Enhanced heat transfer in partially open square cavities with thin fin by using electric field, Energy Conversion and Management, 50(2), pp. 287-296, 2009.
[5] Chong, D., Liu, J. & Yan, J., Effects of duct inclination angle on thermal entrance region of laminar and transition mixed convection, International Journal of Heat and Mass Transfer, 51, pp. 3953-3962, 2008.
[6] Jeschke, P. & Beer, H., Longitudinal vortices in a laminar natural convection boundary layer flow on an inclined flat plate and their influence on heat transfer, Journal of Fluid Mechanics, 432, pp. 313-339, 2001.
[7] Kimura, F., Yoshioka, T., Kitamura, K., Yamaguchi, M. & Asami, T., Fluid flow and heat transfer of natural convection at a slightly inclined, upward-facing, heated plate, Heat Transfer-Asian Research, 31, pp. 362-375, 2002.
[8] Biertümpfel, R. & Beer, H., Natural convection heat transfer increase at laminar-turbulent transition in the presence of instationary longitudinal vortices, International Journal of Heat and Mass Transfer, 46, pp. 3109-3117, 2003.
[9] Nicolas, X., Revue bibliographique sur les écoulements de Poiseuille-Rayleigh-Bénard: écoulements de convection mixte en conduites rectangulaires horizontales chauffées par le bas, International Journal of Thermal Sciences, 41, pp. 961-1016, 2002.
[10] Lin, T. F., Buoyancy driven flow and thermal structure in a very low Reynolds number mixed convective gas flow through a horizontal channel, International Journal of Heat and Fluid Flow, 24, pp. 299-309, 2003.
[11] Chen, S. W., Chang, C. Y., Lir, J. T. & Lin, T. F., Stabilization and elimination of transient unstable mixed convective vortex flow of air in a bottom heated horizontal flat duct by top plate heating, International Journal of Heat and Mass Transfer, 47, pp. 4137-4152, 2004.
[12] Andreozzi, A., Buonomo, B., Manca, O. & Nardini, S., Three dimensional transient natural convection in horizontal channels heated from below, Proc. Energy: Production, Distribution and Conservation, 2, pp. 773-782, Milan, May 14-17, 2006.
[13] Andreozzi, A., Buonomo, B., Manca, O. & Nardini, S., Experimental investigation on the effect of longitudinal aspect ratio on natural convection in inclined channels heated below, Proc. 8th Biennial Conference on Engineering Systems Design and Analysis, paper ESDA2006-95526, Torino, Italy, July 19-22, 2006.
[14] Tuh, J. L., Transverse recirculations in low Reynolds number mixed convective gas flow over a model heated substrate, Experimental Thermal and Fluid Science, 32, pp. 293-308, 2007.
[15] Chiu, H. C. & Yan, W. M., Mixed convection heat transfer in inclined rectangular ducts with radiation effects, International Journal of Heat and Mass Transfer, 51, pp. 1085-1094, 2008.
[16] Foglia, G., Lazzari, S. & Manca, O., Experimental investigation on mixed convection in a horizontal channel, Journal of Heat and Mass Transfer, 1, pp. 27-48, 2007.
[17] Andreozzi, A., Buonomo, B., Manca, O. & Nardini, S., Effect of transversal aspect ratio on natural convection in an inclined channel with heated bottom wall, Proc. of 13th Int. Flow Visualization Symposium, paper n. 193080424, July 1-4, Nice, France, 2008.
[18] Tseng, W. S., Lin, W. L., Yin, C. P., Lin, C. L. & Lin, T. F., Stabilization of buoyancy-driven unstable vortex flow in mixed convection of air in a rectangular duct by tapering its top plate, Journal of Heat Transfer, 122, pp. 58-65, 2000.
[19] Liu, C. W. & Gau, C., Onset of secondary flow and enhancement of heat transfer in horizontal convergent and divergent channels heated from below, International Journal of Heat and Mass Transfer, 47, pp. 5427-5438, 2004.
[20] Chen, S. W., Shu, D. S., Lir, J. T. & Lin, T. F., Buoyancy driven vortex flow and its stability in mixed convection of air through a blocked horizontal flat duct heated from below, International Journal of Heat and Mass Transfer, 49, pp. 3655-3669, 2006.
[21] Fluent Incorporated, Fluent 6.2 User Manual, 2006.
Heat flux reconstruction in the grinding process from temperature data J. Irša & A. N. Galybin Wessex Institute of Technology, Southampton, UK
Abstract

The problem considered in this paper deals with the reconstruction of heat flux from temperature measurements when, due to lack of data, numerical integration is impossible. The problem is reduced to the determination of the unknown complex coefficients of a piecewise linear holomorphic function representing the heat flux and of a piecewise quadratic holomorphic function representing the complex temperature distribution. This leads to an over-determined system of linear algebraic equations, which is affected by experimental errors. The reconstruction of the heat flux is unique; however, the reconstruction of the flux directions is non-unique: it contains one free additive parameter. The method is useful in situations where limited data on temperature are provided. Keywords: heat flux, temperature measurement, holomorphic function, complex variables.
1 Introduction
In heat flow problems, specifically in heat conduction where a solid has varying temperature, information on the heat flux is essential. The measurement of heat flux is not an easy task and requires special devices, such as heat flux sensors. These sensors are more expensive than commonly used thermometers and their accuracy is also lower, as they usually measure a difference in temperature. In problems such as grinding, the heat flux is important because of thermal damage and residual stresses. In order to avoid such damage, many analytical solutions and numerical simulations have been provided, with different approaches to obtaining the heat flux shape [1, 2]. The current assumption for the heat flux entering the workpiece is a triangular [2] or polynomial (Q = a + bx + cx^2) [1] shape.
doi: 10.2495/CMEM090381
Heat flux is related to the gradient of temperature; therefore, having the temperature as a function, it is possible to obtain the heat flux very easily. However, the temperature is usually given as a dataset at only some locations, or as an image. Applying complex variable theory to heat flow shows that the temperature is a harmonic function for steady-state heat conduction. Therefore, one can introduce a holomorphic function built from the temperature and its complex harmonic conjugate. The derivative of this holomorphic function is related to the heat flux, which gives very useful properties. The main idea is to approximate the holomorphic function of the complex temperature distribution by a piecewise quadratic holomorphic function, and the complex heat flux by a piecewise linear holomorphic function, in finite subdomains. These functions obey continuity along the subdomain interfaces, on top of which two more types of equations are provided to utilize the data. A somewhat similar method was used in [3, 4] with stress trajectories. The next section describes the theoretical background of steady-state heat conduction, followed by the problem formulation. The numerical method is then explained, with a discussion of uniqueness. The subsequent sections present model examples, where ideal data are used to estimate the accuracy, as well as real data from the grinding process.
2 Steady state temperature distribution
When steady state prevails, the temperature function T(x,y) inside a 2D body is harmonic [5, 6]. Heat flux is defined by Fourier's law:

Q = −k grad T    (1)

where T(x,y) is the temperature at the point (x,y) and k is the thermal conductivity of the material of the solid body. A steady-state temperature distribution prevails if there is no heat source or sink, i.e. if there is no heat accumulation, ∇·Q = 0. Introducing the complex heat flux through its components,

Q = Q_x − iQ_y,  where  Q_x = −k ∂T/∂x,  Q_y = −k ∂T/∂y    (2)

one obtains the relation:

∂Q/∂z̄ = (1/2)(∂Q_x/∂x + ∂Q_y/∂y) = −(k/2)(∂²T/∂x² + ∂²T/∂y²) = 0    (3)

Expression (3) shows that the steady-state temperature T is a harmonic function. This guarantees the existence of a harmonic conjugate function H; both satisfy the Cauchy-Riemann (CR) equations, so we can write:
f(z) = T(x,y) + iH(x,y)    (4)

where f(z) is a holomorphic function called the complex temperature distribution (CTD), whose real part is associated with the temperature. Very useful is the derivative of the CTD, which is related to the heat flux in a somewhat different form. Using the chain rule and the CR equations for the real and imaginary parts of f(z), ∂T/∂x = ∂H/∂y and ∂H/∂x = −∂T/∂y, and substituting (2), one obtains the relationship between the derivative of the CTD and the heat flux, referred to hereafter as the complex heat flux (CHF):

∂f/∂z = −(1/k)(Q_x − iQ_y) = −(1/k)|Q| e^{−iα}    (5)

where |Q| is the magnitude of the heat flux at a point and α is the direction of the heat flux.
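Relation (5) can be checked numerically for a simple holomorphic function (an illustrative sketch, not part of the paper; the sample function f(z) = z^2 and k = 1 are arbitrary choices):

```python
import numpy as np

k = 1.0                      # thermal conductivity (assumed)
f = lambda z: z**2           # sample holomorphic CTD, with T = Re f
z0 = 0.7 + 0.3j              # evaluation point
h = 1e-6                     # finite-difference step

# Temperature is the real part of the CTD.
T = lambda x, y: np.real(f(x + 1j*y))

# Heat flux components from Fourier's law, Q = -k grad T (central differences).
Qx = -k * (T(z0.real + h, z0.imag) - T(z0.real - h, z0.imag)) / (2*h)
Qy = -k * (T(z0.real, z0.imag + h) - T(z0.real, z0.imag - h)) / (2*h)

# Derivative of the CTD (analytically f'(z) = 2z for this sample).
df = 2*z0

# Eq. (5): df/dz = -(1/k) * (Qx - i*Qy)
chf = -(Qx - 1j*Qy) / k
assert abs(df - chf) < 1e-5
print(Qx, Qy, chf)
```

For T = Re(z^2) = x^2 - y^2 the fluxes are Qx = -2x and Qy = 2y, so the reconstructed CHF equals 2z as expected.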
3 Formulation of the problem
Let Ω be a simply connected domain, or a subdomain of a bigger domain which is not necessarily simply connected and bounded. Let temperature measurements be known at N discrete points z_j (j = 1,…,N) located within Ω. The general problem is formulated as follows: given the discrete data on temperature T_j, j = 1,…,N, find the field of heat flux magnitudes and heat flux directions within the domain Ω. No restrictions are imposed on the data type; the data can be distributed uniformly or randomly. To obtain the data from an image, a computer code has been designed in MATLAB, which enables one to pick up the data from the screen from an image presenting temperatures.

3.1 Uniqueness of reconstruction

The problem deals with the reconstruction of a function that can be represented in complex exponential form, f(z) = ρ(x,y) e^{iφ(x,y)}, whose modulus and argument are linked by the Cauchy-Riemann (CR) equations. Therefore, having one of the two functions known, the other can be reconstructed by integration of the CR equations. Indeed, assume, for instance, that the argument φ is known. Taking the logarithm of both parts, one obtains the holomorphic function ln f(z) = ln ρ(x,y) + iφ(x,y). The real and imaginary parts of the right-hand side must satisfy the CR equations, which allows one to find the modulus by integration:

ln ρ(x,y) = ∫ (∂φ(x,y)/∂y) dx + p    (6)

This formula indicates that the reconstruction can be achieved, but it will always contain one undetermined positive factor e^p as a free parameter.
In the case when data are known at discrete points, the reconstruction is, in general, non-unique and can have any finite number of free parameters, as is evident from the following. Let an exact solution f(z) = u(x,y) + iv(x,y) of the problem be found by any method. Obviously, the modulus must be real; therefore the solution satisfies:

Im[f(z_j) e^{−iφ(z_j)}] = 0,  j = 1…N    (7)

Let us consider a polynomial whose roots are placed at the data points. This polynomial can be written in the form:

P(z) = Σ_{j=1}^{N} C_j (z − z_j)^{m_j}    (8)

where the C_j are arbitrary complex parameters and the m_j are arbitrary positive integers. It is evident from (8) that the sum f(z) + P(z) satisfies condition (7), and therefore it is also a solution of the problem. Since the m_j are arbitrary, this solution has a finite number of parameters, defined as the degree of the polynomial P(z) plus one, which is always greater than N. In order for the solution to make sense, it has to be sought in a certain class of functions. Hereafter the solution is sought as a piecewise linear holomorphic function; it will be shown that this set of functions allows reconstructing the flow with high accuracy.

If the temperature function is known analytically as the linear function T(x,y) = a + bx + cy, the CTD can be reconstructed from the CR equations. The CTD then has one free parameter d, affecting only H:

f(z) = T(x,y) + iH(x,y) = a + bx + cy + i(−cx + by + d)    (9)

On differentiating (9), the free parameter d vanishes. Hence the CHF, and consequently the heat flux, can be reconstructed from temperature uniquely, whereas the complex conjugate function H contains a free additive parameter.
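The vanishing of the free parameter under differentiation can be verified numerically (a small sketch with arbitrary coefficients a, b, c; not from the paper):

```python
a, b, c = 1.0, 2.0, -0.5          # arbitrary coefficients of T = a + bx + cy

def ctd(z, d):
    """CTD of eq. (9): f(z) = a + bx + cy + i(-cx + by + d) = a + (b - ic)z + id."""
    return a + (b - 1j*c)*z + 1j*d

def chf(d, h=1e-6, z0=0.4 + 0.2j):
    """CHF as the derivative of the CTD, by a central difference."""
    return (ctd(z0 + h, d) - ctd(z0 - h, d)) / (2*h)

# The free parameter d affects H = Im f but not the heat flux.
assert abs(chf(d=0.0) - chf(d=10.0)) < 1e-9
assert abs(chf(d=0.0) - (b - 1j*c)) < 1e-6
print(chf(0.0))
```

The derivative equals b − ic for every value of d, illustrating why the heat flux is unique while H is not.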
4 Method of solution
The proposed method deals with the reconstruction of the complex temperature distribution (CTD) f(z) from known discrete data on temperature. The domain is divided into n smaller subdomains (elements) of arbitrary shape. In order to obtain a piecewise linear CHF, the CTD must be approximated by a quadratic function. Both approximations obey continuity along all interfaces (at certain collocation points located on the interfaces). The complex heat flux is thus approximated by the piecewise linear function:

Φ_CHF^{(m)}(z) = a^{(m)} + b^{(m)} z,  m = 1…n    (10)
The approximating function for the complex temperature distribution is therefore the integral of (10):

Φ_CTD^{(m)}(z) = c^{(m)} + a^{(m)} z + (1/2) b^{(m)} z²,  m = 1…n    (11)

Here a^{(m)} = a_0^{(m)} + i a_1^{(m)}, b^{(m)} = b_0^{(m)} + i b_1^{(m)}, c^{(m)} = c_0^{(m)} + i c_1^{(m)} are 3 unknown complex constants; therefore 6 real constants are unknown in every element. Different approximating functions could be used instead of a linear holomorphic function for the complex heat flux; it will be demonstrated that the use of these two particular functions provides results of very high accuracy.

4.1 Discretization of the domain

The grid used in this study consists of square elements with collocation points on their interfaces. The latter are necessary in order to enforce continuity between the elements. The collocation points (CP) are distributed regularly. An example of such a grid with known data is shown in Figure 1.
Figure 1: Discretization of the domain by square elements; collocation points – stars, data – dots.
The minimum number of collocation points strictly depends on the number of unknowns in the approximating function as well as on the element type. Given that the square elements are placed in a regular grid with J elements along the x-axis and K elements along the y-axis, with 6 unknown coefficients in every element, the total number of unknowns N_unkn is:

N_unkn = 6 N_elem = 6JK    (12)

The required number of collocation points depends on the number of element interfaces, N_int = (J−1)K + (K−1)J. Using the same number of collocation points on each interface, and keeping in mind that the separation of each complex equation into real and imaginary parts gives two real equations at every collocation point, the required number of collocation points per interface is:

N_cp = N_unkn / (2 N_int) = 6JK / (2[(J−1)K + (K−1)J]) = 3JK / (2JK − J − K)    (13)

For a domain where J or K is equal to 1, the required number of collocation points per interface is N_cp = 3; in the general case N_cp ≥ 2. Because additional condition equations are imposed, the required N_cp is lowered; therefore N_cp = 2 is used further on. There are no formal restrictions on the mesh used: other types of mesh could be employed, such as triangular or polygonal elements with straight or curvilinear sides. The square grid is convenient from the computational point of view because the distances between collocation points are regular, which has a favorable influence on the structure of the matrix of the system of linear equations, expressed in lower condition numbers of that matrix.

4.2 Equations of continuity and condition equations
(14)
The second equation of continuity for CHF is expressed following: (m) ( m l ) CHF ( z k ) CHF ( z k ) 0, m 1 n; k 1 N CP
(15) These equations present the first set of equations for the system of linear algebraic equations (SLAE). The condition equation containing T comes from CTD, where the real part of it is equal to the temperature, therefore:
(m) Re CTD (z j ) T j ,
j 1 N
(16) this condition equation is satisfied in elements, where the data occurs. The second condition equation is due to free parameter c1 . As was mentioned, we are unable from temperature data obtain the parameter c1 , therefore from equations (14), (15) and (16) we are able to recover only the differences of those constants between adjacent elements. Therefore specific condition is used, which provides the sum of all constants c1 across whole domain zero. The complex conjugate function H, can be obtained by adding any real parameter. The condition is following:
Imc 0 N
(i )
i 1
WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line)
(17)
The SLAE formed by equations (14)-(17) is over-determined: it has 4 N_int N_cp + N + 1 equations and 6 N_elem unknowns. It should be noticed that the system can be solved even without equation (17); however, this would decrease the rank of the matrix by one, a higher CN would arise, and regularization would have to be used.

4.3 Solution of the linear system

By extracting the real and imaginary parts of the complex equations (the equations of continuity and the condition equations) the real SLAE is obtained and rewritten in matrix form:

Ax = b    (18)

where A ∈ R^{m×n}, b ∈ R^m and m > n. Here x is the vector of the unknown real coefficients, of length n; it consists of the real and imaginary parts of the unknown complex coefficients. The vector b is known from the temperature measurements, while the matrix A of the SLAE depends on the approximating functions and on the type and size of the elements. Since the matrix A is not square, the system is over-determined; the left-hand side Ax does not exactly equal b, and thus the system is inconsistent. However, an approximate solution x* can be found by means of the least squares method [5] (LSQR) that minimizes the residual:

‖Ax* − b‖₂    (19)

where ‖·‖₂ stands for the L2 norm. If the arising system is well-posed and not large, then the inversion of the matrix does not meet any difficulties and the solution takes the form [7]:

x* = (AᵀA)⁻¹ Aᵀ b    (20)

The condition number (CN) is used in the numerical examples to control the well-posedness of the SLAE. In the case of a high CN, methods for ill-posed problems should be used [8].
5 Model examples
5.1 Synthetic example: f(z) = 1 + z + z² + z³

The CTD function is chosen as f(z) = 1 + z + z² + z³ and the thermal conductivity as k = 1. 64 uniformly distributed data points and 169 elements were used. This provides 2561 equations, consisting of 1248 equations of continuity for the CTD, the same number for the CHF, 64 condition equations from the known data, and 1 condition equation due to the free parameter. The system contained 1014 unknown parameters. The CN of the system was 371, with errors in the heat flux of 4.3% (maximum) and 1.3% (average). The residual was 15.9. Comparisons between the ideal and reconstructed temperature, heat flux and temperature profiles are shown in Figure 2, Figure 3 and Figure 4.
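The bookkeeping of this synthetic example can be checked against eqs. (12)-(13) and the equation count of Section 4.2 (a sketch; the 13×13 grid, N_cp = 2 and the data count are taken from the text above):

```python
# Grid of the synthetic example: 169 square elements, i.e. J = K = 13.
J = K = 13
N_data = 64                        # temperature data points

N_elem = J * K
N_unkn = 6 * N_elem                # eq. (12): 6 real unknowns per element
N_int = (J - 1) * K + (K - 1) * J  # number of element interfaces
N_cp = 2                           # collocation points per interface

# Each collocation point carries 2 complex continuity equations (CTD and CHF),
# i.e. 4 real equations; plus the data conditions (16) and the sum condition (17).
continuity_eqs = 4 * N_int * N_cp
total_eqs = continuity_eqs + N_data + 1

assert N_unkn == 1014
assert continuity_eqs // 2 == 1248   # 1248 for the CTD and 1248 for the CHF
assert total_eqs == 2561
print(N_unkn, total_eqs)
```

The counts reproduce the reported system size: 2561 equations for 1014 unknowns.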
5.2 Experimental data 2: real data (grinding)

Real temperature data were obtained from an image of the workpiece temperature after treatment in a grinding process (Figure 5); the data were picked from [1]. In this example two types of errors are present: the first is due to the numerical integration of Planck's law used in [1], and the second is due to the manual picking of the data (see the data description). 105 data points and 400 elements were used for the reconstruction. The arising system consisted of 6187 equations and 2400 unknowns. The condition number was CN = 2.8e3 and the residual 6.3e3. The reconstructed temperature and heat flux are shown in Figure 6. The heat flux entering the workpiece at its surface is shown in Figure 7 (left) and the heat flux directions in Figure 7 (right); as was shown, however, one parameter remains free. This means that, to obtain the real directions, one constant has to be added to the reconstructed directions.
6 Conclusion
The present study investigates the heat flux shape, based on temperature measurements. The method comes from complex variable theory applied to
Figure 2: Temperature comparison: ideal (left) and reconstructed (right).

Figure 3: Heat flux comparison: ideal (left) and reconstructed (right).
Figure 4: Profiles of heat flux: ideal (dots) and reconstructed (triangles).

Figure 5: Temperature data of the workpiece after treatment, derived from the experiment of [1].
Figure 6: Reconstructed temperature of the workpiece (left) and recovered heat flux (right).

Figure 7: Recovered heat flux profile on the ground surface (left) and heat flux directions (right).
steady-state heat flow, where a holomorphic function arises. The real part of this function is related to the temperature and its derivative is related to the heat flux. This theory was used to reconstruct the heat flux from temperature data when numerical integration cannot be performed due to lack of data. The method was tested on ideal data, where the errors were below 2%. In the case of real measurement data, an image of the workpiece after treatment, resulting from thermography [1], was used. The experimental case showed that even with very high errors the method is able to provide reasonable results. Moreover, it was shown that, based on the data used, neither the common triangular shape of the heat flux [2] nor a polynomial of second order [1] is accurate enough to represent the shape of the heat flux entering the surface of the workpiece during the grinding process. The method also recovered the flux directions, with one free additive parameter.
Acknowledgement The authors are grateful to EPSRC for the financial support of this work through the Research Grant EP/E032494/1.
References

[1] Brosse, A., Naisson, P., Hamdi, H. & Bergheau, J.M., Temperature measurement and heat flux characterization in grinding using thermography, Journal of Materials Processing Technology, 201, pp. 590-595, 2008.
[2] Malkin, S. & Guo, C., Model based simulation of grinding process.
[3] Galybin, A.N. & Mukhamediev, Sh.A., Determination of elastic stresses from discrete data on stress orientations, International Journal of Solids and Structures, 41(18-19), 2004, pp. 5125-5142.
[4] Mukhamediev, Sh.A., Galybin, A.N. & Brady, B.H.G., Determination of stress fields in elastic lithosphere by methods based on stress orientations, International Journal of Rock Mechanics and Mining Sciences, 43(1), 2006, pp. 66-88.
[5] Rahman, M., Complex Variables and Transform Calculus, Computational Mechanics Publications, Southampton, 1997.
[6] Kwok, Y.K., Applied Complex Variables, Cambridge University Press, Cambridge, 2002.
[7] Golub, G.H. & Van Loan, C.F., Matrix Computations, The Johns Hopkins University Press, London, 1996.
[8] Tikhonov, A.N. & Arsenin, V.Y., Solution of Ill-posed Problems, Winston/Wiley, New York, 1977.
Analysis of non-isothermal fluid flow past an in-line tube bank M. Alavi & H. Goshayeshi Department of Mechanical Engineering, Islamic Azad University, Mashhad Branch, Iran
Abstract

In this paper the two-dimensional, non-isothermal fluid flow past an in-line tube bank is numerically analyzed by the finite element method. The flow is assumed to be incompressible, laminar and unsteady. To stabilize the discretized continuity and momentum equations, the streamline-upwind/Petrov-Galerkin scheme is employed, and the energy equation is solved using the Taylor-Galerkin method. A Reynolds number of 100, a Prandtl number of 0.7, and pitch-to-diameter ratios (PDRs) of 1.5 and 2.0 are chosen for this investigation. Having obtained the flow and temperature fields, the local skin friction coefficient and the local Nusselt number are calculated for the tubes in the bundle at different times. A comparison of the present results with the experimental results of other investigators showed good overall agreement. Keywords: finite-element, tube bank, in-line, skin friction, Nusselt number.
1 Introduction
The study of fluid flow around a tube bank is of importance in many engineering applications, such as heat exchangers and nuclear and chemical reactors. In the past it was not possible to apply numerical methods, so results were obtained experimentally; experimental methods, however, are costly and limited in scope. The pioneers of experimental work on this subject were Bergelin et al. [1], who presented an investigation of heat transfer and fluid friction for flow across banks of tubes. Oda et al. [2] investigated the heat transfer processes in tube banks in cross-flow, and Massey [3] also
predicted the flow and heat transfer in tube banks in cross-flow. The first numerical treatment of fluid flow past a tube bank can be ascribed to Launder and Massey [4], who presented a numerical prediction of viscous flow and heat transfer in tube banks. Fujii et al. [5] presented a numerical analysis of laminar flow and heat transfer of air in an in-line tube bank. Chen et al. [6] carried out finite element solutions of laminar, incompressible flow and heat transfer of air around three and four isothermally heated horizontal cylinders in a staggered and an in-line tube bank, respectively. Fornberg [7] performed a numerical study, by the finite difference method, of steady flow through a single infinite row of cylinders with symmetric periodic conditions. Dhaubhadel et al. [8] presented a finite element solution to the problem of steady flow across an in-line bundle of cylinders for Reynolds numbers up to 400 and pitch-to-diameter ratios (PDRs) of 1.5 and 2.0. Gowda et al. [9] carried out a finite element simulation of transient laminar flow past an in-line tube bank five tubes deep, solving the two-dimensional unsteady Navier-Stokes and energy equations. Wang et al. [10] performed a finite-volume analysis of forced-convection heat transfer in laminar, two-dimensional, steady cross-flow in banks of plain tubes in staggered arrangements. Recently, El-Shaboury and Ormiston [11] studied, by the finite-volume method, forced-convection heat transfer in laminar, two-dimensional, steady cross-flow in banks of plain tubes in square and non-square in-line arrangements. In this paper the numerical simulation of a two-dimensional, laminar, incompressible, non-isothermal fluid flow is presented, and calculations are carried out for a tube bank five tubes deep at different times.
The flow can be assumed two-dimensional because the tube length is effectively infinite, and laminar because the governing equations are formulated for laminar conditions.
2 Governing equations and solution procedures
The transient continuity, momentum, and energy equations in dimensionless form for the incompressible, laminar flow of a Newtonian fluid are, respectively,

\nabla \cdot \mathbf{V} = 0 \quad (1)

\frac{D\mathbf{V}}{Dt} = -\nabla P + \frac{1}{Re}\nabla^{2}\mathbf{V} \quad (2)

\frac{D\theta}{Dt} = \frac{1}{Pe}\nabla^{2}\theta \quad (3)
where V is the dimensionless velocity vector, t is the dimensionless time, P is the dimensionless pressure, Re is the Reynolds number, θ is the dimensionless temperature and Pe is the Peclet number. The dimensionless variables are defined as follows:
\mathbf{V} = \frac{\mathbf{V}}{U_{\infty}}, \quad \mathbf{X} = \frac{\mathbf{X}}{D}, \quad t = \frac{t\,U_{\infty}}{D}, \quad P = \frac{P}{\rho U_{\infty}^{2}}, \quad \theta = \frac{T - T_{\infty}}{T_{w} - T_{\infty}},

Re_{\infty} = \frac{\rho U_{\infty} D}{\mu}, \quad Re_{max} = \frac{\rho U_{max} D}{\mu}, \quad Pe = \frac{\rho C_{p} U_{max} D}{k} \quad (4)
where X is the coordinate vector, D is the tube diameter, t is the time, ρ is the fluid density, P is the pressure, µ is the fluid viscosity, Cp is the specific heat, U∞ is the free-stream velocity, Umax is the average velocity at the minimum flow cross-section ((Umax / U∞) = PDR / (PDR − 1)), T is the temperature, T∞ is the free-stream temperature, Tw is the wall temperature and k is the fluid thermal conductivity.
The continuity and momentum equations are solved by the Galerkin approximation, and the system of discretized equations for each element is

M^{(e)} \dot{Z}^{(e)} + K^{(e)} Z^{(e)} + F^{(e)} = 0 \quad (5)

where, for each typical element (e), M^{(e)} is the mass matrix, Z^{(e)} is the vector of unknown variables, K^{(e)} is the stiffness matrix containing the linear terms, and F^{(e)} is the vector containing the convective nonlinear terms, as defined in reference [12]. To calculate the flow and pressure fields, the final system of equations, obtained after assembling the element equations over all elements, must be solved; a Fortran program was written for this purpose. To apply upwinding, a Petrov-Galerkin scheme is used in which the weighting function for a typical node is modified so that the upwind node is weighted more heavily than the downwind node [13].
To solve the energy equation and obtain the temperature field, a Taylor-Galerkin scheme is used [14, 15]. In this method, a Taylor expansion of the temperature, second order in time, is employed. The discretized equation for a typical element is

\left( \frac{1}{\Delta t} M^{(e)} + \frac{1}{2 Pe} K_{d}^{(e)} \right) \theta^{(e)\,n+1} = \left[ \frac{1}{\Delta t} M^{(e)} - \frac{1}{2 Pe} K_{d}^{(e)} - \left( K_{a}^{(e)} + K_{bd}^{(e)} \right) \right] \theta^{(e)\,n} \quad (6)

where M^{(e)}, K_{d}^{(e)}, K_{a}^{(e)} and K_{bd}^{(e)} are the mass, diffusion, advection and balancing-diffusion matrices of the element, respectively. In all the above calculations, the continuity, momentum and energy equations are solved in unsteady form with a dimensionless time step of 0.01. Assuming the transport properties are constant within each time step, the continuity and momentum equations are solved first; then, using the obtained flow field, the energy equation is solved and the temperature field is computed. These calculations are repeated in successive time steps until the steady state is reached.
3 Physical model and boundary conditions
The physical model of flow around an in-line tube bank with five tubes in the flow direction and a depth of one tube is shown in figure 1. In this model, following reference [16], the computational domain extends 5 tube diameters upstream (Lus) and 20 tube diameters downstream (Lds) of the bank.
Figure 1: Physical model of flow around an in-line tube bank.
The computational domain and the boundary conditions for the tube bank used in this work are shown in figure 2. The domain is divided into four-node isoparametric quadrilateral elements. The velocity and the temperature are calculated at the four nodes of each element, and the pressure at its centre. Some of the elements of the mesh around the tubes are shown in figure 3. Three grids were used in the computations: coarse (6869 elements), medium (12684 elements) and fine (21679 elements); the mesh study showed that refining beyond these element counts did not change the results. As can be seen, around the tubes, where the velocity and temperature gradients are greater, a finer mesh is adopted. The no-slip and constant-temperature (T = Tw) boundary conditions are applied on the tube surfaces. At the inflow, the x-component of the velocity and the temperature are set equal to the free-stream velocity, U∞, and the free-stream temperature, T∞, respectively. At the outlet, the x-direction gradients of u and of the temperature and the v-component of velocity are set to zero, i.e. fully developed boundary conditions are enforced. On the axis of symmetry, the y-direction gradients of u and of the temperature and the v-component of velocity are zero. Moreover, as a reference value, the pressure is set to zero at the outflow. The initial conditions are

V(X, 0) = 0 \quad (7)
\theta(X, 0) = 0 \quad (8)

The mathematical model of the problem is expressed by the coupled system of eqns (1) through (3) with the initial conditions (7) and (8) and the boundary conditions shown in figure 2. A Reynolds number of 100, a Prandtl number of 0.7, and PDRs of 1.5 and 2.0 are chosen for the investigation.
Figure 2: Boundary conditions in the computational domain.
Figure 3: A portion of the finite element mesh in the in-line array.

4 Results and discussion
4.1 Streamlines

Figure 4 shows the streamlines for flow past an in-line tube bank with five tubes in the flow direction, at Reynolds number 100 and PDR = 1.5, at different times until the steady state is reached. At the beginning, when the flow first comes into contact with the tubes, its presence is not yet felt by the tubes: the inertia forces dominate the viscous forces and the streamline pattern resembles that of an inviscid flow (figure 4a). At t = 0.5 the symmetry is lost and the incipience of separation can be seen on all the tubes (figure 4b). A pair of symmetric eddies forms behind each tube, and the recirculating eddies keep growing with time until the steady state is reached (figure 4c).
Figure 4: Streamlines in the in-line tube bank for PDR = 1.5: (a) t = 0.1, (b) t = 0.5, (c) steady state.
4.2 Isotherms

The isotherms for flow past an in-line tube bank with five tubes in the flow direction, at Reynolds number 100 and PDR = 1.5, are shown in figure 5 at different times until the steady state. From these it is possible to predict the amount of heat flow at different times and at various points in the tube bank: where the isotherms are closer together, the temperature gradient is greater and the heat transfer is higher. The isotherms at t = 0.1 (figure 5(a)) are crowded over the entire tubes and are symmetrical. As time passes, this symmetry is lost because of the recirculation regions between the tubes (figure 5(b)). The growth of the isotherms follows the growth of the streamlines. In figure 5(c), it can be seen that, at the steady state, the isotherms are crowded over the front half of the first tube, indicating a high radial heat flux, where the temperature difference is around 300 to 400 K. Over the other tubes, low-velocity recirculating flow interacts with parts of the front half of each subsequent tube, indicating a lower radial heat flux.
Figure 5: Isotherms in the in-line tube bank for PDR = 1.5: (a) t = 0.1, (b) t = 0.5, (c) steady state.
4.3 Skin friction coefficient

The shearing action between the fluid and the tube surface, expressed as the local skin friction coefficient, has also been investigated. For laminar, incompressible flow past a tube bank, after calculation of the flow field, the local skin friction coefficient (Cf) for each tube is defined as

C_{f} = \frac{\tau_{w}}{\frac{1}{2} \rho V_{max}^{2}} = \frac{2}{Re_{max}} \left. \frac{\partial V_{t}}{\partial n} \right|_{w} \quad (9)
where τw is the tube wall shear stress, Vt is the dimensionless tangential velocity and n is the dimensionless wall-normal coordinate. Figure 6 shows these coefficients for a PDR of 1.5 at the steady state. At the first instant, the distribution of the friction coefficient is the same for all the tubes, because the flow initially behaves like a potential flow. As time passes, the local skin friction coefficient decreases, and that of the first tube departs from those of the following tubes. The maximum local skin friction coefficient is the same for all the tubes except the first, where it is slightly larger; it occurs at an angle of almost 80°. For the tubes other than the first, the minimum value of Cf, initially on the rear half of the cylinder, shifts to the front portion of the tube as time progresses, settling at an angle of 30°. Figure 7 shows the local skin friction coefficients for a PDR of 2.0 at the steady state. As can be seen, they decrease over all the tubes compared with a PDR of 1.5, because the velocity gradients are lower at higher PDRs. In this figure, the Cf results are compared with Ref. [12] and show good overall agreement.
Figure 6: The local skin friction coefficients for PDR = 1.5 at the steady state (tubes 1, 2 and 5).

Figure 7: The local skin friction coefficients for PDR = 2.0 at the steady state [9].
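A minimal post-processing sketch for eqn (9), assuming the wall-normal gradient of the dimensionless tangential velocity has already been extracted from the finite element solution (the sample values below are illustrative, not the paper's data):

```python
# Hypothetical post-processing of eqn (9): local skin friction from the
# wall-normal gradient of the dimensionless tangential velocity.
import numpy as np

def local_skin_friction(dVt_dn_wall, Re_max):
    """C_f = (2 / Re_max) * dVt/dn at the wall (all quantities dimensionless)."""
    return 2.0 * np.asarray(dVt_dn_wall, dtype=float) / Re_max

# Example: Re_max = 300 (Re = 100 with PDR = 1.5 gives Umax/U_inf = 3);
# gradients sampled around a tube surface (illustrative values only).
cf = local_skin_friction([0.0, 75.0, 150.0, 75.0, -15.0], 300.0)
# A negative C_f marks reversed (recirculating) near-wall flow.
```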
WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line)
4.4 Nusselt number

The Nusselt number is one of the vital parameters of interest to the designer. The local Nusselt number and the bulk temperature are defined as

Nu = -\frac{D}{T_{w} - T_{b}} \left( \left. \frac{\partial T}{\partial n} \right|_{w} \right) = -\frac{1}{1 - \theta_{b}} \left( \left. \frac{\partial \theta}{\partial n} \right|_{w} \right) \quad (10)

T_{b} = \frac{\int_{D/2}^{S_{D}/2} \rho u T \, dy}{\rho \, (S_{D}/2) \, U_{\infty}} \quad (11)

where D is the tube diameter, Tw is the wall temperature, Tb is the bulk temperature at the minimum cross-section and SD is the transverse pitch.
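Eqns (10) and (11) can be evaluated numerically once the fields are available; the sketch below uses a trapezoidal rule for the bulk-temperature integral. Function names and the example profiles are assumptions of this illustration, not the paper's implementation:

```python
# Illustrative numerical evaluation of eqns (10)-(11).
import numpy as np

def bulk_temperature(y, u, T, rho, S_D, U_inf):
    """T_b = (integral of rho*u*T dy over D/2 <= y <= S_D/2) / (rho*(S_D/2)*U_inf)."""
    f = rho * u * T
    integral = np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(y))  # trapezoidal rule
    return integral / (rho * (S_D / 2.0) * U_inf)

def local_nusselt(dT_dn_wall, D, T_w, T_b):
    """Nu = -D/(T_w - T_b) * dT/dn at the wall, eqn (10)."""
    return -D / (T_w - T_b) * dT_dn_wall
```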
Figure 8: The local Nusselt number for PDR = 1.5 (tubes 1, 2 and 5) at t = 0.1 and at the steady state.
Figure 8 shows the distribution of the local Nusselt number around the tubes for a PDR of 1.5 at dimensionless time 0.1 and at the steady state. In the initial phase, t = 0.1, the distribution of the local Nusselt number is almost the same on all the tubes; the temperature gradient is intense around every tube, so the Nusselt number is large. As time passes, the local Nusselt number decreases, and that of the first tube departs from those of the second and subsequent tubes. The thinner boundary layer over the first tube leads to a higher temperature gradient and, thus, to a higher heat transfer rate at the tube surface. The maximum Nusselt number for the first tube occurs in the region of the front stagnation point and then decreases as the fluid moves on, with the growth of the thermal boundary layer; for the subsequent tubes it occurs at an angle of around 60°. The minimum Nusselt number for the first tube corresponds to the boundary layer separation point; for the subsequent tubes the Nusselt number is minimum at the stagnation point, because that point lies in the recirculation region behind the preceding tube.
Figure 9 shows the distribution of the local Nusselt number around tubes 1, 2 and 5 at the steady state for PDR = 2.0. As can be seen, the values decrease over all the tubes. In this figure the Nu results are compared with [12, 13] and show good overall agreement.
Figure 9: The local Nusselt number for PDR = 2.0 at the steady state [11].

5 Conclusions
The problem of two-dimensional non-isothermal fluid flow past an in-line tube bank has been numerically simulated by the finite element method. The flow was assumed to be incompressible, laminar and unsteady. A Reynolds number of 100, a Prandtl number of 0.7, and PDRs of 1.5 and 2.0 were chosen for the investigation. At the beginning, when the flow first comes into contact with the tubes, its presence is not yet felt by the tubes: the inertia forces dominate the viscous forces, the streamline pattern resembles that of an inviscid flow, and the distributions of the friction coefficient and of the local Nusselt number are almost the same on all the tubes. At this first instant the isotherms are crowded over the entire tubes and are symmetrical, the temperature gradient is intense around the tubes, and the Nusselt number is therefore large. As time passes, a pair of symmetric eddies forms behind each tube and grows until the flow reaches the steady state, while the local skin friction coefficient and the local Nusselt number decrease until they reach their steady-state values. For the tubes other than the first, the minimum value of the local skin friction coefficient, initially on the rear half of the cylinder, shifts to the front portion of the tube as time progresses. The growth of the isotherms follows the growth of the streamlines. At the steady state, the thinner boundary layer over the first tube leads to a higher temperature gradient and thus to a higher heat transfer rate at the tube surface. The maximum Nusselt number for the first tube occurs in the region of the front stagnation point and then decreases as the fluid moves on, with the growth of the thermal boundary layer; for the subsequent tubes it occurs at an angle of around 60°. The minimum Nusselt number for the first tube corresponds to the boundary layer separation point; for the subsequent tubes the Nusselt number is minimum at the stagnation point, because that point lies in the recirculation region behind the preceding tube. The local skin friction coefficient and the local Nusselt number are found to depend on the pitch-to-diameter ratio and are higher for smaller PDRs. A comparison of the present results with the experimental results of other investigators shows good overall agreement.
References
[1] Bergelin, O.P., Brown, G.A. & Doberstein, S.C., Heat transfer and fluid friction during flow across banks of tubes – IV. Trans. ASME, 74, pp. 953-960, 1952.
[2] Oda, S., Kostic, Z.G. & Sikmonovic, S., Investigation of the heat transfer processes in tube banks in cross-flow. Int. Seminar on Recent Developments in Heat Exchangers, Trogir, Yugoslavia, 1972.
[3] Massey, T.H., The prediction of flow and heat transfer in banks of tubes in cross-flow. Ph.D. Thesis, Central Electricity Research Laboratories, Leatherhead, Surrey, February, 1976.
[4] Launder, B.E. & Massey, T.H., The numerical prediction of viscous flow and heat transfer in tube banks. Trans. ASME, J. Heat Transfer, 100, pp. 565-571, 1978.
[5] Fujii, M., Fujii, T. & Nagata, T., A numerical analysis of laminar flow and heat transfer of air in an in-line tube bank. Numer. Heat Transfer, 7, pp. 89-109, 1984.
[6] Chen, C.K., Wong, K.L. & Cleaver, J.W., Finite element solutions of laminar flow and heat transfer of air in a staggered and an in-line tube bank. Int. J. Heat and Fluid Flow, 7, pp. 291-300, 1986.
[7] Fornberg, B., Steady incompressible flow past a row of circular cylinders. J. Fluid Mech., 225, pp. 655-671, 1991.
[8] Dhaubhadel, M.N., Reddy, J.N. & Telionis, D.P., Penalty finite element analysis of coupled fluid flow and heat transfer for in-line bundle of cylinders in cross-flow. Int. J. Nonlinear Mech., 21, pp. 361-373, 1986.
[9] Gowda, Y.T.K., Patnaik, B.S.V.P., Narayana, P.A.A. & Seetharamu, K.N., Finite element simulation of transient laminar flow and heat transfer past an in-line tube bank. Int. J. Heat and Fluid Flow, 19, pp. 49-55, 1998.
[10] Wang, Y.Q., Penner, L.A. & Ormiston, S.J., Analysis of laminar forced convection of air for cross-flow in banks of staggered tubes. Numer. Heat Transfer, Part A, 38, pp. 819-845, 2000.
[11] El-Shaboury, A.M.F. & Ormiston, S.J., Analysis of laminar forced convection of air cross-flow in in-line tube banks with nonsquare arrangements. Numer. Heat Transfer, Part A, 48, pp. 99-126, 2005.
[12] Heinrich, J.C. & Pepper, D.W., Intermediate Finite Element Method: Fluid Flow and Heat Transfer Applications. Chapter 10, pp. 209-255, 1999.
[13] Brooks, A.N. & Hughes, T.J.R., Streamline upwind/Petrov-Galerkin formulations for convection dominated flows with particular emphasis on the incompressible Navier-Stokes equations. Computer Methods in Applied Mech. and Eng., 32, pp. 199-259, 1982.
[14] Usmani, A.S., Cross, J.T. & Lewis, R.W., A finite element model for the simulations of mould filling in metal casting and the associated heat transfer. Int. J. Numer. Meth. in Eng., 35, pp. 788-806, 1992.
[15] Arefmanesh, A. & Alavi, M.A., A hybrid finite difference-finite element method for solving the 3-D energy equation in non-isothermal flow past over a tube. Int. J. of Numer. Meth. for Heat and Fluid Flow, 18, pp. 50-66, 2008.
[16] Tezduyar, T.E. & Shih, R., Numerical experiments on downstream boundary of flow past cylinders. J. Engng. Mech., 117, pp. 854-871, 1991.
Transport phenomenon in a jet type mold cooling pipe

H. Kawahara¹ & T. Nishimura²
¹Shipping Technology, Oshima National College of Maritime Technology, Japan
²Department of Mechanical Engineering, Yamaguchi University, Japan
Abstract

The problem with jet type mold cooling pipes is that the heat transfer and flow pattern become unclear near the end of the cooling pipe, i.e. in the region where the jet impinges on the pipe end. The aim of this study was to elucidate the transport phenomena in a jet type mold cooling pipe. This was achieved by conducting flow visualization and measuring the mass transfer coefficient in a channel with the same shape as the cooling pipe actually used for mold cooling, and predicting the heat transfer by analogy: heat transfer coefficients were inferred from Sherwood numbers using the analogy between mass transfer and heat transfer. The heat transfer coefficient showed almost the same behavior as the flow pattern, and tendencies similar to previous research were exhibited at the stagnation point.
Keywords: jet type mold cooling pipe, heat transfer coefficient, mass transfer coefficient, analogy, flow pattern.
1 Introduction
Impinging jets are widely used in heating and cooling applications due to their excellent heat transfer characteristics. To optimize heat transfer, an understanding of the temperature field as well as the velocity field is essential, in particular near the impingement surface, where the flow characteristics dominate the heat transfer process. Heat transfer distributions of impinging jets and the effects of various geometric and flow parameters on heat transfer are well
documented, for example by Miranda and Campos [1, 2], Kayansayan and Kucuka [3], Hrycak [4], Gau and Chung [5] and Lee et al. [6]. Jet cooling pipes are used across a wide range of temperatures in die cooling, probes for cryogenic surgery and other applications. Coolants are diverse, ranging from liquid nitrogen to water. In practice, these pipes are used especially with dies to improve quality, by regulating die temperature and preventing sticking. Die cooling can be classified into two basic types: straight-flow and jet. With the straight-flow type, lines providing a flow of cooling water are placed so as to follow the surface of the molded part; this approach is mainly used to cool the entire part uniformly. However, molded parts with a complex form cannot be properly cooled with straight flow alone, so jet cooling pipes are inserted at such points to provide spot cooling and keep the temperature at the proper level. The problem with jet cooling pipes is that the heat transfer and flow pattern become unclear near the end of the cooling pipe, i.e. in the region where the jet impinges on the pipe end. At present, the heat transfer coefficient is calculated by assuming a double-pipe annular channel and using the associated empirical formula; however, when more sensitive temperature control is necessary, accurate values must be measured and used. Also, the cooling pipe is made of metal, so its internal flow has not been observed; yet since heat transfer is closely related to the flow of the coolant, observing the flow pattern is also important. This research examined the transport phenomena in a jet cooling pipe, using channels with the same shape as the actual cooling pipe, by visualizing the flow pattern, measuring the flow velocity distribution at the jet outlet, and measuring heat transfer.
Here, both the local distribution and the average of the heat transfer must be measured, but since the test section is small and its shape is complex compared with a jet impinging on a flat plate, direct measurement of heat transfer is difficult. Therefore, mass transfer was measured using the electrode reaction method, which enables particularly exact local measurement, and heat transfer was predicted from the analogy between mass transfer and heat transfer.
2 Experimental apparatus and method
2.1 Experimental apparatus

Fig. 1 shows the jet type cooling pipe used in the experiment. The cooling pipe has a dual structure, with an inner and an outer pipe. The outer pipe of the actual cooling equipment is fashioned by drilling a hole in the die, but in this experiment it was fabricated from acrylic resin to enable visualization. Its inner diameter is do2 = 22 mm, and its end is worked into a hemispherical shape. To enable use of the electrode reaction method, one pipe was fabricated with 9 embedded point electrodes, and another with the entire region near the end made into an electrode. The inner pipe is made of stainless steel, and is the same as that used in the actual cooling equipment. It has an inner diameter di of 8 mm and
an outer diameter do1 of 10 mm. It is inserted into the outer pipe and delivers the working fluid for cooling as a jet. The distance from the end of the outer pipe to the jet outlet of the inner pipe was varied in 5 mm increments over the range 5 to 25 mm. Fig. 2 shows a schematic diagram of the experimental apparatus. Working fluid is delivered from the tank by a centrifugal pump; it passes through the flow meter and the test section, and returns to the tank. The temperature of the working fluid is controlled by a heat exchanger in the tank. The flow rate range was set to 0.25-2.0 ml/min. The Reynolds numbers used were jet Reynolds numbers based on the flow velocity ui at the inner pipe outlet, defined below.
Re_{j} = \frac{u_{i} d_{i}}{\nu} \quad (1)

where ν is the kinematic viscosity. The Rej range starts from 75.
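A small sketch of eqn (1); converting the volumetric flow rate to the mean exit velocity via u_i = Q/(π d_i²/4) is an assumption of this example and is not stated in the paper:

```python
# Illustrative evaluation of eqn (1): jet Reynolds number from a
# volumetric flow rate Q (assumed conversion to mean exit velocity).
import math

def jet_reynolds(Q_m3s, d_i, nu):
    """Re_j = u_i * d_i / nu, with u_i the mean velocity at the inner pipe exit."""
    u_i = Q_m3s / (math.pi * d_i**2 / 4.0)  # mean velocity from flow rate
    return u_i * d_i / nu
```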
Figure 1: Shape of the test section: (a) flow visualization; (b) point electrodes (local mass transfer measurement); (c) whole electrode (do2 = 22 mm, di = 8 mm, do1 = 10 mm).
2.2 Method of measuring mass transfer using the electrode reaction method

The jet type cooling pipe is a device for achieving heat exchange, but in this research, instead of directly measuring heat transfer, mass transfer was measured, since it is analogous to heat transfer. The electrode reaction method was used for the measurement. The following formula gives the mass flux normal to the electrode surface.
J = -\frac{Z F}{R T} D C \frac{\partial \phi}{\partial y} - D \frac{\partial C}{\partial y} + C U \quad (2)
Figure 2: Schematic of the experimental apparatus: (1) receiving tank; (2) pump; (3) flow control valve; (4) flow meter; (5) test section; (6) inner pipe; (7) thermometer; (8) heat exchanger.
where:
- J is the mass flux at distance y from the electrode surface [mol/(cm² s)];
- D is the diffusion coefficient [cm²/s];
- C is the concentration [mol/cm³];
- Z is the ion valence [-];
- F is the Faraday constant (= 96500 [C/mol]);
- R is the gas constant [cm²/(s² K)];
- T is the fluid temperature [K];
- ∂φ/∂y is the potential gradient;
- ∂C/∂y is the concentration gradient;
- U is the velocity at which a volume element of the solution moves along the y axis [cm/s].
The terms on the right side of the above equation are, from the left: the migration term, diffusion term and convection term. Flux at the electrode is as follows.
J = \frac{I}{A Z F} \quad (3)

where I and A are the current [A] and the electrode surface area [cm²], respectively. A mixture of potassium ferricyanide and potassium ferrocyanide of the same concentration is used as the electrolyte solution. Therefore, the following reactions occur at the electrodes.
Cathode plane (reduction):

[Fe(CN)₆]³⁻ + e⁻ → [Fe(CN)₆]⁴⁻ \quad (4)

Anode plane (oxidation):

[Fe(CN)₆]⁴⁻ → [Fe(CN)₆]³⁻ + e⁻ \quad (5)
Since this is an oxidation-reduction reaction involving only the transfer of electrons, it can be assumed that there is no net flow in the direction y normal to the electrode surface. Thus U = 0, i.e. the convection term in eqn (2) vanishes. Migration is the phenomenon whereby ions in solution are moved by the electric force due to an electric field. If an unreactive supporting electrolyte (NaOH) is added in a large amount, it carries the migration current and suppresses migration of the reacting species. The potential gradient seen by the reacting electrolyte then becomes 0, and the migration term in eqn (2) vanishes. As a result, only the diffusion term remains on the right side of eqn (2).
J = -D \frac{\partial C}{\partial y} \quad (6)
J is proportional to the difference in concentration between the electrode surface and the bulk fluid, and is given by

J = k_{D} (C_{b} - C_{w}) \quad (7)

where kD is the mass transfer coefficient [cm/s], and Cb and Cw are the bulk fluid concentration [mol/cm³] and the concentration at the electrode surface [mol/cm³], respectively. The current obtained under diffusion control is defined as the limiting current Id. The reaction of eqn (4) proceeds extremely rapidly under diffusion control, so the concentration of ferricyanide ions at the electrode surface can be regarded as Cw = 0. Thus eqn (7) becomes

J = k_{D} C_{b} \quad (8)
Substituting eqn (3) into eqn (8) and rearranging,

k_{D} = \frac{I_{d}}{A Z F C_{b}} \quad (9)
Since A, Z and F are constants, the mass transfer coefficient kD can be found by assaying the fluid concentration Cb and measuring the limiting current. The Schmidt number (Sc), which corresponds to the Prandtl number (Pr) in heat transfer, and the Sherwood number (Sh), which corresponds to the Nusselt number (Nu), can be found as follows.
Sc = ν / D   (10)
Sh = kD di / D = Id di / (A Z F Cb D)   (11)
where ν is the kinematic viscosity of the fluid.
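Formulas (9)-(11) reduce the measurement to simple arithmetic. The following sketch evaluates them in the CGS units used by the text; every numerical value below is an illustrative placeholder, not data from the paper:

```python
import math

# Illustrative placeholder values in the paper's CGS units, not measured data.
F = 96485.0                       # Faraday constant [C/mol]
Z = 1                             # electrons transferred per ion
A = math.pi * (0.05 / 2.0) ** 2   # area of a 0.5 mm diameter point electrode [cm^2]
Cb = 1.0e-5                       # bulk concentration [mol/cm^3]
D = 7.0e-6                        # diffusion coefficient [cm^2/s]
nu = 1.0e-2                       # kinematic viscosity [cm^2/s]
di = 1.0                          # inner-pipe diameter [cm]
Id = 2.0e-5                       # measured limiting current [A]

kD = Id / (A * Z * F * Cb)        # Formula (9): mass transfer coefficient [cm/s]
Sc = nu / D                       # Formula (10): Schmidt number
Sh = kD * di / D                  # Formula (11): Sherwood number
```

With these placeholder values kD is of order 10^-2 cm/s and Sc is of order 10^3, the large-Schmidt-number regime the electrode reaction method relies on.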
Next, the wall shear stress is defined. Since the electrode surface is circular, the velocity gradient sr at the wall surface can be calculated from the limiting current according to the following formula.
sr = 3.55 × 10^15 Id^3 / (D^2 Cb^3 dE^5)   (12)
where dE is the point electrode diameter. From Newton's law of viscosity, shear stress is:
τ = μ (du/dy)   (13)
where μ is the fluid viscosity [g/(cm s)]. Therefore, the wall shear stress can be determined as follows.
τw = μ sr = 3.55 × 10^15 μ Id^3 / (D^2 Cb^3 dE^5)   (14)
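The chain from limiting current to wall shear stress, Formulas (12) and (14), can be sketched as a small helper (the function name is ours; the 3.55 × 10^15 factor is the paper's empirical constant for the point-electrode method in these units):

```python
def wall_shear_stress(Id, mu, D, Cb, dE):
    """Formulas (12) and (14): wall shear stress from the limiting current.

    Units follow the paper's CGS system: Id [A], mu [g/(cm s)],
    D [cm^2/s], Cb [mol/cm^3], dE [cm].
    """
    s_r = 3.55e15 * Id ** 3 / (D ** 2 * Cb ** 3 * dE ** 5)  # Formula (12)
    return mu * s_r                                         # Formula (14)
```

Note the cubic sensitivity to the limiting current: doubling Id multiplies τw by 8, which is why the bulk concentration Cb must be assayed carefully.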
It is also possible to predict the distribution of local mass transfer rates from the wall shear stress when mass transfer occurs over the entire wall surface. If S is the distance along the flow direction from the stagnation point at the wall, and sr(S) is the velocity gradient at the wall, then the local concentration gradient can be expressed as follows, provided the flow does not separate.
(∂C/∂y)|y=0 = Cb [sr(S)]^(1/2) / { Γ(4/3) [9D ∫0^S sr(x)^(1/2) dx]^(1/3) }   (15)
where Γ is the gamma function. Substituting Formula (6) and Formula (8) into Formula (15),
kD Cb / D = Cb [sr(S)]^(1/2) / { Γ(4/3) [9D ∫0^S sr(x)^(1/2) dx]^(1/3) }   (16)
Thus, using the following formula, it is possible to calculate the flow direction distribution of local Sherwood numbers, which are the dimensionless mass transfer amounts.
ShL(S) = kD di / D = di [sr(S)]^(1/2) / { Γ(4/3) [9D ∫0^S sr(x)^(1/2) dx]^(1/3) }   (17)
Rewriting this formula with dimensionless numbers, it becomes:
ShL(S) = Sc^(1/3) Rej^(2/3) [τw / (½ ρ ui^2)]^(1/2) / { 18^(1/3) Γ(4/3) [∫0^(S/di) (τw / (½ ρ ui^2))^(1/2) d(x/di)]^(1/3) }   (18)
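Formula (17) can be evaluated numerically for any measured wall velocity-gradient profile sr(x). The helper below is our own sketch (standard library only, trapezoidal integration); for a constant gradient it reproduces the classical S^(-1/3) decay of the Lévêque-type solution:

```python
import math

def local_sherwood(S, s_r, di, D, n=2000):
    """Formula (17): local Sherwood number at distance S from the
    stagnation point, given the wall velocity-gradient profile s_r(x) [1/s].
    Valid only while the flow does not separate, as the text requires.
    """
    xs = [S * i / n for i in range(n + 1)]
    ys = [math.sqrt(s_r(x)) for x in xs]
    # trapezoidal rule for the integral of sqrt(s_r(x)) from 0 to S
    integral = sum((ys[i] + ys[i + 1]) / 2.0 * (xs[i + 1] - xs[i]) for i in range(n))
    return di * math.sqrt(s_r(S)) / (math.gamma(4.0 / 3.0) * (9.0 * D * integral) ** (1.0 / 3.0))
```

For a uniform gradient sr(x) = const, the integral grows linearly with S, so ShL falls off as S^(-1/3), i.e. the boundary layer thickens downstream exactly as the text describes.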
where ρ is the fluid density [g/cm3], and ui is the average velocity at the inner-pipe nozzle exit.

2.3 Analogy between mass transfer and heat transfer

The mass transfer rate is proportional to the concentration gradient, and is given by Fick's law.
J = D (dC/dy)   (19)
The amount of heat transfer is proportional to the temperature gradient, and is given by Fourier's law.
q = λ (dT/dy)   (20)
where λ is the thermal conductivity, and T is the fluid temperature. Because of these relationships, transport by molecular motion is similar in mass transfer and heat transfer, and an analogy holds between the two. In this experiment, since the electrode reaction is diffusion controlled and no convection occurs at the surface, the concentration boundary layer assumes a state very similar to the thermal boundary layer in heat transfer. Therefore, mass transfer and heat transfer correspond as follows and can be interchanged: C ⇔ T, Sc ⇔ Pr, Sh ⇔ Nu. This experiment employed the Colburn j-factor analogy, which is widely used in experiments and is particularly effective for turbulent flow [7]. The j factors for mass transfer and heat transfer are, respectively:
jD = Sh / (Re Sc^(1/3))   (21)
jH = Nu / (Re Pr^(1/3))   (22)
Here, the relationship jD=jH holds, and thus the amount of heat transfer can be inferred by analogy as follows:
Nu = h di / λ = Sh Pr^(1/3) / Sc^(1/3)   (23)
h = λ Sh Pr^(1/3) / (di Sc^(1/3))   (24)
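The conversion in Formulas (23) and (24) is a one-liner once Sh is measured. A hedged sketch with illustrative placeholder values (not the paper's data):

```python
# Hedged sketch of Formulas (23)-(24): inferring the heat transfer
# coefficient from a measured Sherwood number via the Colburn analogy
# (jD = jH). All numerical values are illustrative placeholders.
Sh = 1500.0        # measured Sherwood number
Sc = 1430.0        # Schmidt number of the electrolyte
Pr = 6.18          # Prandtl number quoted in the paper's figures
lam = 0.6          # thermal conductivity of water [W/(m K)]
di = 0.01          # inner-pipe diameter [m]

Nu = Sh * Pr ** (1.0 / 3.0) / Sc ** (1.0 / 3.0)   # Formula (23)
h = Nu * lam / di                                 # Formula (24) [W/(m^2 K)]
```

Only the ratio (Pr/Sc)^(1/3) enters, so the electrolyte's large Schmidt number scales the measured Sh down to a plausible Nusselt number.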
where h is the heat transfer coefficient. Heat transfer coefficients in the jet-type cooling pipe of the die are compared with values from an empirical formula for fully developed turbulent flow through the annular channel of a dual pipe, calculated using the following formula [8].
h de / λ = 0.02 (do/di)^0.53 Re*^0.8 Pr^(1/3)   (25)
Here, Re* is the Reynolds number for the annular channel, expressed as follows.
Re* = de uo / ν   (26)
where de is the equivalent diameter, and uo is the cross-section average velocity in the annular channel. The hydraulic diameter, given below, is used as the equivalent diameter.
de = 4 Ac / l   (27)
where Ac is the cross-sectional area of the annular channel, and l is the wetted perimeter length.
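For the annular channel between the inner pipe (diameter di) and the outer pipe (inner diameter do), Formula (27) reduces to the familiar de = do - di. A minimal sketch with illustrative diameters:

```python
import math

def equivalent_diameter(Ac, l):
    """Formula (27): hydraulic diameter de = 4 * Ac / l."""
    return 4.0 * Ac / l

# Annulus between an inner pipe (diameter di) and an outer pipe (do);
# both diameters are illustrative placeholders.
do, di = 0.02, 0.01
Ac = math.pi * (do ** 2 - di ** 2) / 4.0   # cross-sectional area
l = math.pi * (do + di)                    # wetted perimeter (both walls)
de = equivalent_diameter(Ac, l)            # equals do - di for an annulus
```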
3 Experimental results
3.1 Visualization of flow

Fig. 3 shows a flow visualization photograph for L/di=0.63. The white line in the figure indicates the position of the point electrodes. In this experiment, the vortex that arises near the cooling pipe tip is defined as Vortex-A, and the vortex that arises downstream of it is defined as Vortex-B. At Rej=106, the flow follows the wall surface and Vortex-A does not occur. However, when the Rej number is increased, Vortex-A occurs along the side surface of the inner pipe and, accompanying it, Vortex-B also appears. The flow for L/di=0.63 is steady, and Vortex-A and Vortex-B expand toward the downstream side up to around Rej=1000. At Rej numbers above that, the flow gradually becomes unsteady, but Vortex-A continues to exist. In the range 603
Figure 3: Flow visualization photographs by the aluminum powder method for L/di=0.63 (Rej = 106, 713, 1141, 2282).
3.2 Wall surface shear stress

Fig. 4 shows the results of measuring wall surface shear stress. Here, I indicates the region where the flow fluctuates periodically, and II indicates the transition zone from laminar to turbulent flow. At L/di=0.63, the shear stress values can be classified into No. 2-5 and No. 6-9. For No. 2-5, the τw value increases greatly from low Rej numbers; as the visualization photographs in Fig. 3 show, the flow is constricted by Vortex-A at the position of these electrodes, so a high flow velocity is maintained there. At Rej=1000 or higher, the increase in τw slows down, which may be due to changes in the flow velocity distribution at the nozzle outlet. There is also a tendency for No. 4 and 5 to match No. 6-9 at low Rej numbers, showing that Vortex-A has not yet developed. At No. 6-9 with low Rej numbers, τw does not increase very much, and then it increases greatly once Rej=1000 is exceeded. These electrodes are positioned where the flow constricted at Vortex-A spreads out again after passing Vortex-A; since the cross-sectional area of the flow increases, the flow velocity near the wall slows down, and the τw value is therefore small. The increase of τw above Rej=1000 may be because the flow becomes turbulent. At L/di=0.63, Vortex-B grows large, and at No. 7-9, τw decreases rapidly in the range Rej=500-900, confirming entry into the Vortex-B region. No. 6 exhibits a peculiar phenomenon in which τw abruptly increases and then decreases midway. This may be because Vortex-A expanded to the No. 6 position as the Rej number increased, and then shrank again as the flow became turbulent; No. 6 lies between Vortex-A and Vortex-B.
The results for L/di=1.88 are basically the same as for L/di=0.63. The major difference is the behavior relating to Vortex-B. Vortex-B at L/di=1.88 fluctuates periodically, and thus does not remain steady and grow large. Therefore, the reducing effect of Vortex-B on τw does not appear to the same extent as at L/di=0.63. The phenomenon in which τw decreases when the electrode enters the Vortex-B region is not observed at No. 9, so it can be seen that Vortex-B has not developed as far as No. 9. When L/di=3.13, the flow is more unstable than at L/di=1.88, and thus the slope changes at an even lower Rej number.
Figure 4: Relationship between wall shear stress and the Reynolds number (τw vs. Rej for L/di=0.63, 1.88 and 3.13; electrodes No. 2-9; region I: periodic fluctuation, region II: laminar-to-turbulent transition).
3.3 Mass transfer and heat transfer

Fig. 5 shows the results for average mass transfer measured using the overall electrodes No. (1)-(3). Here, the stagnation point value is that measured at point electrode No. 1. The average mass transfer measured at No. (1)-(3) is smaller than the value at the stagnation point. This is because the concentration boundary layer develops with the stagnation point as its origin, and its thickness there is extremely small. The boundary layer then gradually develops and thickens over No. (1)-(3) on the downstream side, and thus the
Sh number becomes smaller. For the same reason, the Sh number gradually decreases moving from upstream to downstream over No. (1)-(3). With L/di=0.63, Vortex-B grows large at No. (2) and (3), and thus a large increase in the Sh number is not seen until close to Rej=1000.
Figure 5: Average mass transfer coefficient measured in the overall electrodes No. (1)-(3) and at the stagnation point (Pr=6.18, for L/di=0.63, 1.88 and 3.13), compared with the turbulent cylindrical-annulus correlation.
The heat transfer coefficients were calculated from these Sh numbers using the analogy equation for mass transfer and heat transfer, Formula (24). Fig. 6 shows the average value of the heat transfer coefficients for each electrode of No. (1)-(3). The solid line is the turbulence correlation equation (Formula (25)) for the annular channel of a dual pipe. For comparison with the annular channel of the dual pipe, the Re* number in the annular channel was used as the Reynolds number. All of the measured values are larger than the correlation equation for the annular channel of a dual pipe, and thus this correlation equation cannot be used for design of the equipment. Heat transfer increased locally, particularly in the No. (1) region. At No. (2) and (3), the slope of the heat transfer coefficient became equal to the slope of the correlation equation for the annular channel from the region where the Re* number increases and the flow begins to become unsteady. Fig. 6 also shows the average values of the heat transfer coefficients for No. (1)-(3). As a result, the
Figure 6: Heat transfer coefficient in each electrode (average for No. (1)-(3), Pr=6.18; dotted line: Nu = 0.3 Re*^0.6 Pr^(1/3); solid line: turbulent cylindrical-annulus correlation).
average values of heat transfer coefficients form a straight line, as shown by the dotted line in the figure. The correlation equation is as given below.
Nu = 0.3 Re*^0.6 Pr^(1/3)   (28)
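A hedged sketch of this fitted correlation, Formula (28) (the function name is ours; Pr = 6.18 as quoted in the figures):

```python
def nusselt_jet_cooling(Re_star, Pr):
    """Formula (28): average Nu over electrodes No. (1)-(3)."""
    return 0.3 * Re_star ** 0.6 * Pr ** (1.0 / 3.0)

# Example evaluation at an annular-channel Reynolds number of 500:
Nu = nusselt_jet_cooling(500.0, 6.18)
```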
4 Conclusions
In this research, the aim was to elucidate the transfer phenomena in a jet-type cooling pipe. This was achieved by conducting flow visualization and measuring mass transfer coefficients using a cooling pipe with the same shape as that actually used for die cooling, and predicting heat transfer based on the analogy. The findings of this research are as follows.
a. Through visualization using aluminum powder, a primary vortex (Vortex-A) and a secondary vortex (Vortex-B) were observed on the downstream side of the stagnation point of the impinging jet. These vortices vary greatly in size as the Rej number increases. The secondary vortex also fluctuates periodically immediately before the flow starts to become turbulent.
b. Point electrodes were embedded in the outer pipe of the cooling pipe, and wall shear stress was measured using the electrode reaction method. The values varied in accordance with the expansion and shrinkage of the primary and secondary vortices observed through visualization. It was thus found that the primary vortex accelerates the flow near the wall surface, while the secondary vortex decelerates it.
c. The heat transfer coefficients were inferred from Sh numbers by using the analogy between mass transfer and heat transfer. The heat transfer coefficients showed almost the same behavior as the flow pattern, and
tendencies similar to previous research were exhibited at the stagnation point. Also, a correlation equation for finding heat transfer coefficients was obtained from the average value for No. (1)-(3).
References
[1] Miranda, J.M. & Campos, J.B.L.M., Impinging jets confined by a conical wall: laminar flow predictions, AIChE Journal, 45(11), pp. 2273-2285, 1999.
[2] Miranda, J.M. & Campos, J.B.L.M., Impinging jets confined by a conical wall - High Schmidt mass transfer in laminar flow, International Journal of Heat and Mass Transfer, 44, pp. 1269-1284, 2001.
[3] Kayansayan, N. & Kucuka, S., Impingement cooling of a semi-cylindrical concave channel by confined slot-air-jet, Experimental Thermal and Fluid Science, 25, pp. 383-396, 2001.
[4] Hrycak, P., Heat transfer from a row of impinging jets to concave cylindrical surface, International Journal of Heat and Mass Transfer, 24, pp. 407-419, 1981.
[5] Gau, C. & Chung, C.M., Surface curvature effect on slot-air-jet impinging cooling flow and heat transfer process, Transactions of the ASME, 113, pp. 858-864, 1991.
[6] Lee, D.H., Chung, Y.S. & Kim, D.S., Turbulent flow and heat transfer measurement on a curved surface with a fully developed round impinging jet, International Journal of Heat and Fluid Flow, 18, pp. 160-169, 1997.
[7] Mizushima, T., Hara, K. & Kyuno, T., Heat and mass transfer coefficients in double tube cooler condenser, Kagaku Kikai, 16, pp. 338-344, 1952.
[8] Kobayashi, S. & Iida, K., Idou-ron, Asakura-shoten, 1996.
Two-phase modelling of nanofluid heat transfer in a microchannel heat sink

C. T. Nguyen1 & M. Le Menn2
1 Department of Mechanical Engineering, Faculty of Engineering, Université de Moncton, Moncton, New Brunswick, Canada
2 UFRSSI Lorient, Université de Bretagne-Sud, Lorient, France
Abstract

The problem of steady, two-dimensional, laminar forced flow and heat transfer of a nanofluid, a water-Al2O3 mixture, circulating inside a microchannel of 0.1 mm thickness and 25 mm length was numerically studied. The nanofluid considered is composed of saturated water and alumina (Al2O3) nanoparticles with different average diameters, 36 nm and 47 nm. All fluid properties are assumed temperature-dependent and evaluated using classical two-phase mixture formulas, while for thermal conductivity and dynamic viscosity, recent in-house experimental data were used. The particle diffusion coefficient due to Brownian motion was estimated using the Einstein relation. The fluid exhibits a parabolic axial velocity profile and uniform temperature and particle concentration profiles at the inlet; the usual no-slip and uniform wall temperature conditions prevail on both walls; at the exit, the 'outflow boundary' condition is imposed. The system of governing equations (conservation of mass, momentum, energy and species) was successfully solved using a FEM embedded within a powerful commercial code and a 28558-cell non-uniform grid. Results obtained for the Reynolds number range of 200-2500 clearly show the beneficial effects of using nanofluids on the heat transfer coefficient. Results using the two-phase model show that the spatial distribution of particle concentration is highly non-uniform; it varies considerably in the vicinity of the heated walls, while it remains nearly uniform in a large central region of the channel. The effects of particle concentration and size were also studied. A 'single-phase fluid vs. two-phase fluid model' comparison has clearly shown a certain discrepancy among the results obtained.
Keywords: microchannel, laminar forced flow, heat transfer, nanofluids, alumina-water nanofluid, two-phase fluid model, single-phase fluid model.
doi:10.2495/CMEM090411
1 Introduction
Recent advances in manufacturing processes have permitted the fabrication of nanometre-scale solid particles, which, in turn, have created a new class of very special fluids, called 'nanofluids'. The latter refers to a two-phase mixture composed of a continuous phase, most often a saturated liquid, and suspended 'nanoparticles'. In spite of a striking lack of data, nanofluids appear to constitute a very interesting alternative for various thermal applications, especially those where high heat transfer rates are required. In fact, the capability of nanofluids for heat transfer enhancement has clearly been established for internal flow configurations; see in particular Pak and Cho [1], Li and Xuan [2], Wen and Ding [3], Yang et al. [4] and Maïga et al. [5, 6]. A similar heat transfer enhancement was also observed experimentally for a closed-liquid system intended for cooling of high-powered electronic components, Roy et al. [7] and Nguyen et al. [8], as well as in cases of free convection (Polidori et al. [9]) and laminar mixed convection flow (Ben Mansour et al. [10-12] and Popa et al. [13]). With regard to fluid flow and heat transfer in microchannels, it is worth mentioning that this topic has received special attention from researchers over the past decades (reviews of relevant works may be found in Garimella and Sobhan [14], Morini [15], Steinke and Kandlikar [16] and Chen [18]). Unfortunately, there are few studies/data regarding nanofluid flow and heat transfer in microchannels. Faulkner et al. [19, 20] were likely the first to find experimentally that surface heat fluxes in excess of 275 W/cm2 may be achieved using ceramic-based nanoparticles in water. Koo and Kleinstreuer [21], in their numerical study, observed that a combination of a high-Pr fluid, a high channel aspect ratio and a high particle thermal conductivity would maximize the heat transfer rate.
Chein and Huang [22], considering Cu-nanoparticles in water, have clearly shown an improvement of heat transfer. Jang and Choi [23], taking into account the Brownian motion effect for 6 nm copper-in-water and 2 nm diamond-in-water nanofluids, have shown that nanofluids reduce both the thermal resistance and the temperature difference between the heated walls and the coolant. Tsai and Chein [24] performed an optimization study of microchannel heat sink performance using copper-water and carbon-nanotube-water nanofluids. Chein and Chuang [25] studied a CuO-water nanofluid experimentally in a trapezoidal-cross-section microchannel. They found that more heat can be absorbed using nanofluids at low flow rates, while at high flow rates, nanoparticles did not contribute to extra heat absorption. In a recent numerical study performed for a 3D-rectangular-cross-section microchannel, heat transfer enhancement was again observed for Al2O3-water and CuO-water nanofluids (Nguyen et al. [26]). In this work, we have numerically studied the problem of laminar forced flow and heat transfer of an Al2O3-water nanofluid, with two different particle sizes, 36 nm and 47 nm, inside a two-dimensional flat microchannel. Both the single-phase fluid and two-phase fluid models were used. Some significant results showing the effects of the nanofluid on the heat transfer coefficient, as well as a comparison between the two models, are presented and discussed.
2 Mathematical formulation and numerical method

2.1 Governing equations and boundary conditions

The problem under study consists of a steady, forced and laminar flow of a nanofluid flowing in a straight rectangular-cross-section microchannel, Fig. 1, with dimensions a=65mm, e=0.1mm and length L=25mm. The nanofluid is assumed incompressible with temperature-dependent properties. The compression work is negligible, but both viscous dissipation and axial heat diffusion were considered. Because of the channel geometry, and in order to reduce the computing times, it was decided to consider a 2D microchannel of 0.1mm thickness and 25mm length. The general conservation equations of mass, momentum, energy and species are as follows (Eckert and Drake [26]):
∇⋅(ρV) = 0   (1)
∇⋅(ρV Vi) = -∇P + ∇⋅(µ∇Vi)   (2)
∇⋅(ρV CpT) = ∇⋅(k∇T) + Φ   (3)
∇⋅(V c) = ∇⋅(D∇c)   (4)
V, P and T are the fluid velocity vector, pressure and temperature; Vi is a velocity component; c and D are the particle mass fraction and diffusion coefficient; Φ is the dissipation function; ρ, µ, k and Cp are, respectively, the fluid density, dynamic viscosity, thermal conductivity and specific heat; all properties are temperature-dependent. The non-linear and highly coupled governing equations (1)-(4) must appropriately be solved subject to the following boundary conditions: at the channel inlet, the fluid possesses a parabolic axial velocity profile and uniform temperature and particle concentration profiles; at the outlet, the 'outflow boundary' condition (i.e. ∂2/∂x2=0 for all variables) is assumed. On the solid walls, the usual no-slip and uniform wall temperature conditions are specified.
Figure 1: Geometry of the microchannel considered.
The non-linear and highly coupled governing equations (1-4) must appropriately be solved subject to the following boundary conditions: at the channel inlet, the fluid possesses a parabolic axial velocity profile, a uniform temperature and particle concentration profiles; at the outlet, the ‘outflow boundary’ condition (i.e. ∂2/∂x2=0 for all variables) is assumed. On the solid walls, a usual no-slip and uniform wall temperature conditions are specified. 2.2 Nanofluid thermo-physical properties For the nanofluid considered, water-Al2O3 with 36 nm and 47 nm average particle-sizes, effective thermal and physical properties can be evaluated using WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line)
454 Computational Methods and Experimental Measurements XIV classical formulas already derived and known for a two-phase fluid (subscripts p, bf and nf refer, respectively, to the particles, the base fluid and the nanofluid): ρnf = (1 – φ)·ρbf + φρp (5) (ρ·Cp)nf = (1 – φ)· (ρ·Cp)bf + φ(ρ·Cp)p (6) For Al2O3 particles, the following data are used: Cp=773 J/kg K and ρ=3600 and 3880 kg/m3 for 36nm and 47nm sizes. For nanofluid thermal conductivity and dynamic viscosity, in-house experimental data, Angue Mintsa et al. [28] and Nguyen et al. [29], were used. These data are specific to the nanofluids studied. The diffusion coefficient D due to the particle Brownian motion was estimated using the following formula, often referred as the ‘Einstein’s relation’, which relates the particle diffusion coefficient to the drag coefficient (see Berg [30] and Einstein [31]): D (T) = kBT /kdrag = kBT /(6πµRp) (7) where kB is the Boltzmann constant, kB=1.3806504 x10-23 m2kg/s2K, µ the mixture dynamic viscosity and Rp the particle radius. Typical values of the coefficient D are ranging from 4 to 6x 10-12 m2/s for considered particles under ambient condition. 2.3 Numerical method and code validation The system of governing equations (1)-(4) was successfully solved by using a ‘finite-element’ method embedded within a powerful commercial code named COMSOL Multiphysics. The matrixes resulting from the spatial discretization process have been solved using an efficient and iterative matrix decomposition technique. In order to ensure the accuracy and independence of results with respect to the number of elements used, several non-uniform grids were thoroughly probed and a 28558-quadratic-triangular-elements grid system was adopted. It covers the space of a half of the channel thickness and possesses very fine and highly packed elements near boundaries. 
The convergence was based on the verification of the variation of variables between iterations, and also on the global mass and energy balance over the domain, which was kept as low as 0.01%.

Table 1: Summary of the cases studied.

Model | Particle size | Particle fraction φ | Flow Reynolds number
Single-phase fluid model | 36nm | 3.1%, 6%, 9% | 200, 500, 1000, 1500, 2000, 2500
Single-phase fluid model | 47nm | 3.1%, 6%, 9% | 200, 500, 1000, 1500, 2000, 2500
Two-phase fluid model | 36nm | 3.1%, 6%, 9% | 200, 500, 1000, 1500, 2000, 2500
Two-phase fluid model | 47nm | 3.1%, 6%, 9% | 200, 500, 1000, 1500, 2000, 2500
Water | - | 0% | 200, 500, 1000, 1500, 2000, 2500
The computer model was successfully validated by comparing numerical results obtained for the axial velocity profile and the pressure gradient with the well-known theoretical results for the classic case of a 2D laminar forced Poiseuille flow in a channel (Le Menn [32]). The code was then used to perform some seventy-two cases for distilled water and the two nanofluids considered (Table 1).
Figure 2: Spatial distribution of the particle volume fraction along the vertical distance from the wall (Re=200; 36nm and 47nm particles at φ=3.1%, 6% and 9%).

Figure 3: Effect of wall temperature on the particle volume fraction profile.

3 Results and discussion
Some significant results obtained by using the two-phase model are presented and discussed first, followed by a comparison between the single-phase and two-phase models. Figure 2 shows, for example, a typical (zoomed) spatial distribution of the particle volume fraction along the vertical distance from the wall for the case Re=200. It is observed that, for a given particle volume fraction specified at
the channel inlet, the particle concentration clearly exhibits a large variation in the vicinity of the wall, while in the central region of the channel it remains almost constant. For this case, the spatial variation of particle concentration is observed within approximately one-sixth of the channel half-thickness. It is also observed that the 47 nm particle size exhibits a slightly higher concentration gradient in the near-wall region than the 36 nm particles, which may appear somewhat paradoxical. It is worth noting that the particle concentration distribution is governed by the convection-diffusion process; hence it depends not only on the particle diffusion coefficient but also on the local velocities and temperatures. Figure 3 also shows that, for a given particle size, higher wall temperatures induce a higher gradient of the particle concentration near the solid wall. Such behaviour is obviously due to the increase of the particle diffusion coefficient with temperature.

3.1 On the nanofluid heat transfer behaviour in a micro-channel

From the results obtained and shown in Fig. 4, it is clear that the use of a nanofluid produces an appreciable increase of heat transfer in a constant-wall-temperature microchannel heat sink, compared to that obtained using water. In fact, for a given particle size, the average heat transfer coefficient ratio, hnanofluid / hwater, is always higher than unity. This ratio clearly increases with the particle volume fraction. Thus, for the particle fractions considered, 3.1%, 6% and 9%, this ratio has values of 1.55, 2.1 and 4.3, respectively, for the 36 nm size, and 1.5, 2.25 and 3.3 for the 47 nm size. It is interesting to observe that this ratio remains nearly constant over the range of Reynolds numbers studied. Such a beneficial effect due to the nanofluid can also be seen in Fig.
5, which shows the variation of the average Nusselt number as a function of Re, φ and particle size. One can see that the Nusselt number clearly increases with the Reynolds number (the convection effect) and with the particle volume fraction. It may be expected that a combination of high Reynolds numbers and a high particle volume fraction can produce interesting heat transfer rates. There seems to be no clear demarcation between the 36 nm and 47 nm particle sizes regarding the heat transfer behaviour (it is worth noting that the definitions of the Re and Nu numbers also include nanofluid properties). It is very interesting to mention that the above nanofluid heat transfer performance and behaviour appear consistent with those observed by others (see in particular Chen [18], Koo and Kleinstreuer [21], Chein and Chuang [25] and Nguyen et al. [26]).

3.2 On the comparison 'single-phase v/s two-phase modelling'

As mentioned previously, both the two-phase and single-phase fluid models were used in the present study. The results obtained from these models were compiled and compared as follows. Figure 6 shows, at first, the results for the ratio hnanofluid / hwater as a function of Re for two particular cases (36 nm with φ=6%, and 47 nm with φ=9%) using both models, considering, in particular, all temperature-dependent properties. One can see that for the first case, the 36 nm-6% particle
Figure 4: Variation of hnanofluid / hwater with respect to Re, φ and particle size.

Figure 5: Variation of Nu with respect to Re, φ and particle size.
volume fraction, there is a slight difference between the two models. However, for the second case, the 47 nm-9% particle volume fraction, both models predict, surprisingly, approximately the same values for the ratio hnanofluid / hwater. Figure 7 finally shows the results for the average Nusselt number as a function of Re, obtained using the single-phase fluid model (constant properties) and the two-phase model (variable properties). These results eloquently show that, on a global basis, there is a notable discrepancy between the predictions of the two models. This discrepancy, which exists for all Reynolds numbers considered, becomes more pronounced at higher values. For the cases studied, the discrepancy appears to be more important for the 6%-36 nm case than for the 9%-47 nm one. Although we may expect that the results obtained using the two-phase variable-properties fluid model are more realistic than the ones using the simple single-phase constant-properties model, it is obvious that more results and experimental data will be needed to validate this finding.
Figure 6: hnanofluid / hwater as a function of Re (two-phase vs. single-phase model).

Figure 7: Nu as a function of Re (two-phase vs. single-phase model).

4 Conclusion
In this paper, the problem of the laminar forced convection of Al2O3-water nanofluids inside a constant-wall-temperature two-dimensional microchannel has been numerically investigated. Both the single-phase and the two-phase fluid models were used. Results have shown that the particle concentration exhibits a large spatial variation in the vicinity of the solid wall, while it remains almost constant in the central area of the channel. The wall heat transfer has been found to increase appreciably with an increase of the flow Reynolds number and of the particle volume fraction. There is a clear discrepancy between the results predicted by the variable-property two-phase fluid model and those of the conventional constant-property single-phase fluid model.
Acknowledgements We wish to thank the Natural Sciences and Engineering Research Council of Canada for financial support of this work and the Université de Bretagne-Sud (France) for granting financial aid to Mr. Le Menn.
Numerical investigation of sensible thermal energy storage in high temperature solar systems
A. Andreozzi1, N. Bianco1, O. Manca2, S. Nardini2 & V. Naso1
1 DETEC – Università degli Studi di Napoli Federico II, Italy
2 DIAM – Seconda Università degli Studi di Napoli, Aversa (CE), Italy
Abstract A study on sensible thermal energy storage (TES) for high temperature solar systems is carried out numerically. The high temperature TES is cylindrical, the fluid and solid thermo-physical properties are temperature independent and the radiation heat transfer mechanism is neglected. A parametric analysis is carried out. The commercial CFD code Fluent is used to solve the governing equations in the transient regime. Numerical simulations are carried out at different mass velocity values of the heat-carrying fluid and different porosities of the storage medium. The results show the effects of the porosity and of the working fluid mass velocity on the stored thermal energy and on the storage time. Keywords: solar energy, sensible thermal energy storage, numerical analysis.
1 Introduction
Thermal energy storages (TESs) are employed in many solar thermal plants in order to ensure the continuity in energy supply and minimize instability. Moreover, the use of TES for thermal applications, such as space and water heating, cooling and air-conditioning, has recently received attention [1–4]. The storage purpose is twofold: to increase the value of the power generated by strongly reducing its randomness and to improve the plant economics by using the available hardware for more hours a year. Recent concentrating solar plant projects incorporate heat storage that allows the system to operate for some 6–12 hours in the absence of solar irradiance [5].
doi:10.2495/CMEM090421
Three types of TES systems are generally employed in high temperature applications: latent heat storage systems, molten salt storage systems and sensible heat storage systems. Cost effective systems demand the use of inexpensive storage materials, which usually exhibit a low thermal conductivity. Essential for the successful development of a storage system is sufficient heat transfer between the fluid and the storage material. Recently, a high temperature TES was studied in a lab-scale cylindrical storage tank experiment [6]. A heat exchanger in the TES was used to separate the two fluids: the storage medium and the heat transfer fluid. A multi-tank sensible-heat storage system for storing thermal energy, with a two-tank molten salt system, was proposed in [7]. In a high concentrating solar receiver, the temperature reaches values in the range from 800 °C to 1800 °C and the fluid employed in the plant is often a gas, such as air. In air based solar energy utilization systems, storage of hot air is not possible due to its low density; a denser medium is required for storing thermal energy. In sensible TES, energy is stored by changing the temperature of a storage medium, such as water, oil, rock beds, bricks, sand or soil. The amount of energy input to a sensible-heat TES device is proportional to the difference between the final and initial temperatures of the storage medium, its mass and its specific heat. Each medium has its own advantages and disadvantages. A packed bed generally represents the most suitable energy storage unit for air based solar energy systems, as mentioned in [8]. An extensive literature review of research work was presented in [9,10] and, more recently, in [11,12]. Several investigations on fixed bed energy storage use the model originally developed by Schumann [13]. This is a one-dimensional two-phase transient model that enables the prediction of the axial and temporal distribution of the solid and fluid temperatures. Experiments using steel spheres to determine the heat transfer coefficient between the fluid and the solid, for air as the working fluid, were accomplished in [14]. A modified version of Schumann's model was employed in [15] to solve the model equations using gas as the working fluid. A model with the fixed bed divided into two regions, one near the wall with high void fraction and the other in the bed central region, was studied in [16]. The results and method given in [15] were employed in [16] to evaluate the temperature distribution and in [17] to verify the high computational times found when the ratio of the specific heat of the solid to the specific heat of the fluid was high. Investigations concerning the effects of radial and axial dispersion in the bed were presented in [18,19]. How a single phase model can be derived from the continuous solid phase model was demonstrated in [20]. A storage unit of the fixed type using Schumann's model, including possible variations in the fluid inlet temperature, was modelled in [21]. It was found that the optimization of the packed bed design should aim at maximizing the ratio of total energy availability to total pumping energy, which increases when the size of the material elements increases [22]. However, the thermal performance of the system may deteriorate due to the smaller contact area available for heat transfer [23]. A numerical study on packed bed thermal models, suitable for sensible and latent heat thermal storage systems, was accomplished in [11]. An experimental
investigation on the effect of the system and operating parameters on heat transfer and the pressure drop characteristics of a packed bed solar energy storage system with large sized elements of storage material was conducted in [12]. Solid media sensible heat storage materials were investigated in the experimental study reported in [24]. A thermodynamic procedure to analyze the energy and exergy balances of a rock bed thermal storage system was presented in [25]. A numerical investigation on the sensitivity of the long-term performance simulations of solar energy systems to the degree of stratification in both liquid and packed bed storage units was carried out in [26]. In this paper a high temperature sensible TES is numerically analyzed and a parametric analysis is accomplished. In the formulation of the model it is assumed that the system geometry is cylindrical, the fluid and the solid thermophysical properties are temperature independent, and the radiation heat transfer mechanism is neglected. The commercial CFD Fluent code is used to solve the governing equations in the transient regime. Numerical simulations are carried out at different mass velocities of the heat-carrying fluid and porosity of the storage medium. The results show the effects of the porosity and of the working fluid mass velocity on the stored thermal energy and on the storage time.
2 Mathematical description and numerical procedure
The proposed prototype of high temperature sensible TES is a cylinder whose diameter is equal to 0.60 m and whose height is equal to 1.0 m. The storage material is steel and it is assumed to be a porous medium. The heat-carrying fluid is air. The thermo-physical properties of the steel and the air, reported in Tables 1 and 2, are temperature independent. The radiation heat transfer mechanism is neglected.

Table 1: Steel thermo-physical properties.

  c [J/kg K]   502.48
  ρ [kg/m3]    8030
  k [W/m K]    16.27

Table 2: Air thermo-physical properties.

  c [J/kg K]   1006.43
  ρ [kg/m3]    1.225
  k [W/m K]    0.0242
Figure 1: Computational domain with imposed boundary conditions.
Thanks to the axisymmetry of the physical model, a two-dimensional analysis was carried out. The employed computational domain is sketched in Fig. 1, where the imposed boundary conditions are reported. The temperature of the hot air entering the channel, Tin, is equal to 1473 K. The commercial CFD code Fluent [27] was employed to solve the governing equations in the transient and laminar regime. A rectangular mesh with 76800 cells has been employed. The initial temperature of the solid elements and of the heat-carrying fluid is assumed to be equal to 1073 K. Air, the heat-carrying fluid, is assumed to be an incompressible ideal gas. Numerical simulations are carried out for different mass velocity values. The mass velocity, G, is the ratio of the module mass flow rate, ṁ, to the free flow area of the storage module cross section, S, which can be expressed as the product of the frontal area of the module and its porosity:

G = ṁ / S   (1)

For assigned G, different porosity values were considered. Results are given in terms of the thermal energy stored in the porous medium after the considered time interval, Qstored, as a function of time. The stored thermal energy is evaluated as

Qstored = ρeff ceff ( ∫V T dV − Ti V )   (2)

where V = 0.283 m3 and Ti = 1073 K. ρeff and ceff are evaluated by:

ρeff = ε ρf + (1 − ε) ρs ;   ceff = ε cf + (1 − ε) cs   (3)

In Table 3, the effective density and specific heat values, evaluated for different porosity values, are reported. The viscous and inertial resistance coefficients, 1/α and C2 respectively, depend on G and ε. In Table 4 they are reported for different porosity values.
A grid dependence analysis was carried out for G=0.245 kg/m2s, Ti=1200 K, Tamb=300 K, ε=0.40, 1/α=6.00x10^5 m^-2 and C2=1312.5 m^-1. Six meshes were considered: 10 x 30, 20 x 60, 40 x 120, 80 x 240, 160 x 480 and 320 x 960. The comparison among the results allows one to employ in the simulations the 160 x 480 mesh as the most appropriate to obtain adequate accuracy at a convenient computational cost. A similar analysis was carried out for different time steps; the most advantageous time step was 5 s. The convergence criteria were 10^-3 for the residuals of the velocity components and 10^-6 for the residuals of the energy.

Table 3: Effective density and specific heat of the porous medium for different porosity values.

  ε      ρeff [kg/m3]   ceff [J/kg K]
  0.20   6424.20        603.27
  0.30   5621.30        653.60
  0.35   5219.93        678.86
  0.40   4818.49        729.18
  0.45   4417.05        725.25
  0.50   4015.61        754.45
  0.60   3212.70        804.85

Table 4: Viscous and inertial resistance coefficients for different porosity values.

  ε      1/α [m-2]   C2 [m-1]
  0.20   3846153     14000
  0.30   1307189     3629
  0.35   827814      2122.4
  0.40   600000      1312.5
  0.45   358551      845
  0.50   240030      280
  0.60   107526      259
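As a quick consistency check (not part of the original paper), eq. (3) with the property values of Tables 1 and 2 reproduces the entries of Table 3; the sketch below evaluates the effective properties for one porosity value.

```python
# Effective porous-medium properties per eq. (3), with the steel and air
# values of Tables 1 and 2. Illustrative consistency check, not from the paper.
RHO_F, C_F = 1.225, 1006.43    # air: density [kg/m^3], specific heat [J/kg K]
RHO_S, C_S = 8030.0, 502.48    # steel: density [kg/m^3], specific heat [J/kg K]

def rho_eff(eps):
    """Effective density: rho_eff = eps*rho_f + (1 - eps)*rho_s."""
    return eps * RHO_F + (1 - eps) * RHO_S

def c_eff(eps):
    """Effective specific heat: c_eff = eps*c_f + (1 - eps)*c_s."""
    return eps * C_F + (1 - eps) * C_S

# eps = 0.35 row of Table 3: rho_eff = 5219.93 kg/m^3, c_eff = 678.86 J/kg K
print(rho_eff(0.35), c_eff(0.35))
```

Note that the ceff entries printed for ε = 0.40 and 0.45 in Table 3 deviate slightly from eq. (3), possibly a transcription artefact; all ρeff entries match.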
3 Results and discussion
Numerical simulations are carried out at the following mass velocity values: 0.20 kg/m2s, 0.30 kg/m2s and 0.40 kg/m2s, for an inlet air temperature equal to 1473 K. The porosity varies from 0.20 to 0.60. The stored thermal energy for a mass velocity G=0.20 kg/m2s and different porosity values is reported in Fig. 2. The profiles show that at early times the stored energy is higher for higher porosity values, and that the curves cross at an inversion time which increases with the porosity value. Moreover, increasing the porosity determines a substantial decrease of the stored thermal energy at long times. At each porosity the stored thermal energy reaches an asymptotic value, which indicates the thermal saturation of the reservoir. The time at which thermal saturation is reached increases with decreasing porosity, due to the increase in the thermal capacity of the reservoir: a lower thermal inertia causes a temperature increase in less time, and a smaller thermal capacity determines the heat storage saturation in less time, too. The stored thermal energy values are reported in Table 5 for the first eight hours of computational simulation. The choice of this time period allows this sequence of values to be compared for the different porosities. For ε = 0.45 and after 4 hours, the stored thermal energy reaches 95% of the maximum stored thermal energy value, which is attained after 8 hours. Moreover, the previous observation is confirmed and the best condition for thermal energy storage is that obtained for a porosity equal to 0.40 after three hours, with Qstored=3.43x10^5 kJ.
Figure 2: Stored thermal energy vs. time for G=0.20 kg/m2s and various porosity values. [Plot: Qstored [J] vs. t [s] up to 35000 s for ε = 0.20 to 0.60.]
Table 5: Stored thermal energy values [kJ] for the first eight hours of computational simulation and G=0.20 kg/m2s.

  ε      3600 s   7200 s   10800 s   14400 s   18000 s   21600 s   25200 s   28800 s
  0.20   150545   249630   325596    375233    404451    419274    427647    432088
  0.30   165968   266798   334424    373683    394171    404327    409534    412137
  0.35   172768   272400   334322    367535    384404    392709    396618    398400
  0.40   185031   285228   342594    371606    384977    391266    394156    395447
  0.45   182430   274022   321423    343735    353897    358319    360177    360911
  0.50   186881   272402   312521    329944    337129    339952    341093    341539
  0.60   187578   253941   278580    287312    290290    291261    291579    291681
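The 95% figure quoted above can be checked directly against Table 5. The sketch below (values hard-coded from the ε = 0.45 row of Table 5; not part of the original paper) computes the fraction of the 8-hour stored energy reached each hour.

```python
# Fraction of the 8-hour stored energy reached each hour, for G = 0.20 kg/m^2 s
# and eps = 0.45. Values in kJ, transcribed from Table 5 (3600 s ... 28800 s).
q_eps045 = [182430, 274022, 321423, 343735, 353897, 358319, 360177, 360911]

q_max = q_eps045[-1]                  # stored energy after 8 hours (28800 s)
fractions = [q / q_max for q in q_eps045]

# After 4 hours (14400 s, index 3) the store holds about 95% of its 8-hour value.
print(round(fractions[3], 3))  # -> 0.952
```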
Increasing the mass velocity to G=0.30 kg/m2s, in Fig. 3, the differences with respect to the previous case are significant in terms of thermal saturation time. In fact, for this configuration, the stored thermal energy profile for a porosity of 0.60 reaches its asymptotic value after about 10000 s, whereas in the first case this took about 15000 s. This is due to an increase in the convective heat transfer between the air and the porous medium. It is also evident in Table 6, where the stored thermal energy values are reported for the first eight hours of computational simulation. In this case, for a porosity equal to 0.40, Qstored=3.41x10^5 kJ, very close to the previous value for G=0.20 kg/m2s, is attained after 2 hours. It is about 86% of the saturation value (Qmax=3.96x10^5 kJ). Thus a higher value of G leads to a shorter time to reach a good thermal storage level. The variation in the stored thermal energy profiles is more marked when the mass velocity is increased from G=0.30 kg/m2s to G=0.40 kg/m2s, as observed in Fig. 4. For the lowest porosity values the profiles still show an increasing trend with higher stored thermal energy values, whereas for the higher porosity values a sudden increase in the thermal energy stored in the solid medium is observed. A progressive change in the material behaviour at increasing mass velocity values must be underlined, the larger the porosity the larger the change. A reduction in the time necessary to reach the steady state, passing from about 10000 s for G=0.30 kg/m2s to about 7500 s for G=0.40 kg/m2s at ε=0.60, is also observed, together with a considerable decrease in the inversion time. The value of this time between ε=0.20 and ε=0.60 changes
Figure 3: Stored thermal energy vs. time for G=0.30 kg/m2s and various porosity values. [Plot: Qstored [J] vs. t [s] up to 35000 s for ε = 0.20 to 0.60.]
Table 6: Stored thermal energy values [kJ] for the first eight hours of computational simulation and G=0.30 kg/m2s.

  ε      3600 s   7200 s   10800 s   14400 s   18000 s   21600 s   25200 s   28800 s
  0.20   188403   319531   393888    423381    433180    436145    436926    437152
  0.30   206097   330770   387898    407012    412586    414029    414390    414483
  0.35   213655   331778   379522    394244    398396    399466    399721    399777
  0.40   227702   340926   381475    392818    395587    396218    396362    396393
  0.45   223207   321104   351540    359157    360943    361325    361401    361414
  0.50   226780   312389   335368    340527    341565    341757    341792    341798
  0.60   222384   279047   289661    291420    291684    291720    291725    291726
from about 7000 s for G=0.20 kg/m2s to about 4000 s for G=0.40 kg/m2s. For this configuration the best condition for thermal energy storage is obtained for ε=0.20, as observed in Table 7. In fact, after 2 hours the highest Qstored value is equal to 3.76x10^5 kJ for ε=0.20, which is about 86% of Qmax at this porosity.
Figure 4: Stored thermal energy vs. time for G=0.40 kg/m2s and various porosity values. [Plot: Qstored [J] vs. t [s] up to 35000 s for ε = 0.20 to 0.60.]
Table 7: Stored thermal energy values [kJ] for the first eight hours of computational simulation and G=0.40 kg/m2s.

  ε      3600 s   7200 s   10800 s   14400 s   18000 s   21600 s   25200 s   28800 s
  0.20   228993   376386   425931    435473    437005    437213    437237    437240
  0.30   248076   376379   408565    413761    414430    414505    414512    414513
  0.35   255517   371042   395671    399281    399736    399787    399791    399792
  0.40   270274   374796   393724    396128    396377    396399    396401    396401
  0.45   262398   347110   359920    361273    361405    361416    361417    361417
  0.50   263517   332258   340914    341730    341794    341799    341799    341799
  0.60   250651   288679   291539    291716    291725    291725    291725    291725
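As a plausibility check (not part of the original paper), the asymptotic values in Tables 5-7 can be anticipated from a simple energy balance: at saturation the whole storage volume sits at the inlet temperature, so Qmax ≈ ρeff ceff V (Tin − Ti), with ρeff and ceff from eq. (3). The sketch below reproduces the 28800 s column of Table 7 to within about 0.5%.

```python
# Saturation estimate Q_max = rho_eff * c_eff * V * (T_in - T_i), compared with
# the 28800 s column of Table 7. Illustrative check, not part of the paper.
RHO_F, C_F = 1.225, 1006.43            # air properties (Table 2)
RHO_S, C_S = 8030.0, 502.48            # steel properties (Table 1)
V, T_IN, T_I = 0.283, 1473.0, 1073.0   # volume [m^3], inlet and initial T [K]

def q_max_kj(eps):
    """Theoretical saturation energy [kJ] for porosity eps, via eq. (3)."""
    rho = eps * RHO_F + (1 - eps) * RHO_S
    c = eps * C_F + (1 - eps) * C_S
    return rho * c * V * (T_IN - T_I) / 1e3

# Table 7 values at 28800 s [kJ] for three porosities:
table7_8h = {0.20: 437240, 0.35: 399792, 0.60: 291725}
for eps, q_tab in table7_8h.items():
    rel_err = q_max_kj(eps) / q_tab - 1
    print(f"eps={eps:.2f}: estimate {q_max_kj(eps):.0f} kJ vs table {q_tab} kJ "
          f"({100 * rel_err:+.2f}%)")
```

The small systematic overshoot is consistent with the simulations being stopped just short of full saturation.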
4 Conclusions
A numerical investigation on thermal energy storage at high temperature was carried out. The storage is a component of a high temperature (>800 °C) thermal solar plant. The thermal storage was cylindrical; the storage medium was a solid, modelled as a porous medium. The transient regime analysis was accomplished in the heating phase of the thermal storage with different mass velocity and porosity values. Results showed that, for the considered parameter values, thermal saturation of the TES was attained within eight hours for all considered porosity values, and the larger the mass velocity the shorter the saturation time. For an assigned mass velocity value, the larger the porosity the higher the stored thermal energy at early times. After the inversion time the trend is the opposite, i.e., the larger the porosity the lower the maximum stored thermal energy, Qmax. However, the best porosity value to reach about 0.85 Qmax in the shortest time is 0.40 for the lowest considered G values and 0.20 for the highest considered G values.
Acknowledgement This work was supported by MIUR with Articolo 12 D. M. 593/2000 Grandi Laboratori “EliosLab”.
Nomenclature
c        specific heat, J kg-1 K-1
C2       inertial resistance coefficient, m-1
G        mass velocity, kg m-2 s-1
k        thermal conductivity, W m-1 K-1
ṁ        mass flow rate, kg s-1
Q        thermal energy, J
Qstored  stored thermal energy, J
S        free flow area of module storage cross section, m2
t        time, s
T        temperature, K
V        volume, m3

Greek symbols
1/α      viscous coefficient, m-2
ε        porosity
ρ        density, kg m-3

Subscripts
amb      ambient
eff      effective
f        fluid
i        initial
in       entering
max      maximum value
s        solid
References
[1] Dincer, I. & Rosen, M., Thermal energy storage: Systems and applications, John Wiley & Sons, 2002.
[2] Paksoy, H. Ö., Thermal energy storage for sustainable energy consumption: Fundamentals, case studies and design, Springer, 2007.
[3] Beckmann, G. & Gilli, P.V., Thermal energy storage: Basics, design, applications to power generation and heat supply, Springer, 2002.
[4] Sharma, A., Tyagi, V.V., Chen, C.R. & Buddhi, D., Review on thermal energy storage with phase change materials and applications, Renewable and Sustainable Energy Reviews, 13, pp. 318-345, 2009.
[5] Angelino, G. & Invernizzi, C., Binary conversion cycles for concentrating solar power technology, Solar Energy, 82(7), pp. 637-647, 2008.
[6] Vaivudh, S., Rakwichian, W. & Chindaruksa, S., Heat transfer of high thermal energy storage with heat exchanger for solar trough power plant, Energy Conversion and Management, 49, pp. 3311-3317, 2008.
[7] Salomoni, V. A., Majorana, C. E., Giannuzzi, G. M. & Miliozzi, A., Thermal-fluid flow within innovative heat storage concrete systems for solar power plants, International Journal of Numerical Methods for Heat and Fluid Flow, 18(7-8), pp. 969-999, 2008.
[8] Coutier, J. P. & Farber, E. A., Two applications of a numerical approach of heat transfer process within rock beds, Solar Energy, 29, pp. 451-462, 1982.
[9] Barker, J. J., Heat transfer in packed beds, Industrial & Engineering Chemistry, 57, pp. 43-51, 1965.
[10] Balakrishnan, A., Pei, R. & David, C. T., Heat transfer in packed bed systems - a critical review, Industrial and Engineering Chemistry, Process Design and Development, 18, pp. 30-40, 1979.
[11] Ismail, K. A. R. & Stuginsky, R. Jr., A parametric study on possible fixed bed models for pcm and sensible heat storage, Applied Thermal Engineering, 19, pp. 757-788, 1999.
[12] Singh, R., Saini, R. P. & Saini, J. S., Nusselt number and friction factor correlations for packed bed solar energy storage system having large sized elements of different shapes, Solar Energy, 80, pp. 760-771, 2006.
[13] Schumann, T. E. W., Heat transfer: a liquid flowing through a porous prism, J. Franklin Inst., 208, pp. 405-416, 1929.
[14] Furnas, C. G., Heat transfer from a gas stream to a bed of broken solids II, Industrial & Engineering Chemistry, 22, pp. 721-731, 1930.
[15] Handley, D. & Heggs, P. J., The effect of thermal conductivity of the packing material on transient heat transfer in a fixed bed, International Journal of Heat and Mass Transfer, 12, pp. 549-570, 1969.
[16] Gross, D. J., Hickox, C. E. & Hackett, C. E., Numerical simulation of dual-media thermal energy storage systems, Trans. ASME, Journal of Solar Energy Engineering, 102, pp. 287-297, 1980.
[17] Beasley, D. E. & Clark, J. A., Transient response of a packed bed for thermal energy storage, International Journal of Heat and Mass Transfer, 27, pp. 1659-1699, 1984.
[18] Gunn, D. J., Axial and radial dispersion in fixed beds, Chemical Engineering Science, 42, pp. 363-373, 1987.
[19] Gunn, D. J., Ahmed, M. M. & Sabri, M. N., Radial heat transfer to fixed beds of particles, Chemical Engineering Science, 42, pp. 2163-2171, 1987.
[20] Vortmeyer, D. & Schaefer, R. J., Equivalence of one and two-phase models for heat transfer processes in packed beds: one dimensional theory, Chemical Engineering Science, 29, pp. 485-491, 1974.
[21] Sowell, E. F. & Curry, R. L., A convolution model of the rock bed thermal storage units, Solar Energy, 24, pp. 441-449, 1980.
[22] Torab, H. & Beasley, D. E., Optimization of packed bed thermal energy storage unit, Journal of Solar Energy Engineering, 109(3), pp. 170-175, 1987.
[23] Sagara, K. & Nakahara, N., Thermal performance and pressure drop of packed beds with large storage materials, Solar Energy, 47(3), pp. 157-163, 1991.
[24] Laing, D., Steinmann, W. D., Tamme, R. & Richter, C., Solid media thermal storage for parabolic trough power plants, Solar Energy, 80(10), pp. 1283-1289, 2006.
[25] Navarrete-González, J. J., Cervantes-de Gortari, J. G. & Torres-Reyes, E., Exergy analysis of a rock bed thermal storage system, International Journal of Exergy, 5(1), pp. 18-30, 2008.
[26] Arias, D. A., McMahan, A. C. & Klein, S. A., Sensitivity of long-term performance simulations of solar energy systems to the degree of stratification in the thermal storage unit, International Journal of Energy Research, 32(3), pp. 242-254, 2008.
[27] Fluent Incorporated, Fluent 6.2, User Manual, Lebanon, NH, 2005.
Computational Methods and Experimental Measurements XIV
473
Dynamic modelling of the thermal space of metallurgical walking beam furnaces
D. Constantinescu
Materials Science and Engineering, University POLITEHNICA of Bucharest, Romania
Abstract
The aim of this paper is to establish the basics of a model that helps evaluate the heat transfer parameters and the energy consumption of walking beam furnaces for rolling mills. The heating of alloyed and high-alloyed steel billets prior to rolling is analysed. The temperature gradients, which produce internal thermal stresses in the heated material, are a problem that influences the design of the aggregate: if the thermal stresses exceed the tensile strength, they can destroy the finished product. The thermal stresses are mostly due to poor correlation of the billet heating process in typical rolling-mill furnaces with the mechanical and thermal characteristics of the heated material and with the dynamics of the gases. Using physical and mathematical modelling, correlations are established between the thermal process, the dynamics of the gases and the particularities of the furnace, in order to obtain the conditions for modelling a variable geometry of the aggregate. The particularities of the steels and of the furnace are analysed in order to reach an optimum geometrical model of the thermal space. Physical and mathematical models are used to establish a new variable geometry. Saving energy and metal through better chemical, thermal and dynamic processes also means a cleaner environment. A new arrangement of the burners inside the furnace and a new variable geometry of the thermal space can lead to energy and metal savings. The conclusions of the study are applied to a new design of the furnace, including dynamic aspects of the geometry of the thermal space.
Keywords: furnace, metallurgy, modelling, dynamics of the gases, thermal space.
doi:10.2495/CMEM090431
1 Temperature and thermal stresses in steel billets
A study of the mechanism of thermal stresses and the establishment of the critical thermal values was required in order to include their influence on the mathematical remodelling of the thermal space [1-3].

1.1 The case of cylindrical billets

Using the Bessel functions, with the following notation:
θc: furnace temperature
θmi: temperature in the centre of the billet
θm0: initial temperature of the billet
θms: temperature at the surface of the billet (its value is determined for the upper or lower surface)
θmf: average final temperature of the billet
the following were deduced for a cylindrical billet:
- temperature at the surface:
$$\theta_{ms} - \theta_c = (\theta_{m0} - \theta_c)\,[\nu_1 \varphi_1 J_0(n_1 R)] \qquad (1)$$

- average final temperature:

$$\theta_{mf} - \theta_c = (\theta_{m0} - \theta_c)\,\nu_1 \varphi_1 \frac{2 J_1(n_1 R)}{n_1 R} \qquad (2)$$
The temperature in the section of the cylinder may be expressed by:

$$\theta_m = \theta_c - (\theta_c - \theta_{mi})\, J_0\!\left[\frac{r}{R}\left(2t - \frac{R}{\lambda}\right)\right] \qquad (3)$$

$$t = \frac{\theta_{ms} - \theta_c}{\theta_{mf} - \theta_c} = \frac{J_0(n_1 R)}{\dfrac{2}{n_1 R} J_1(n_1 R)} \qquad (4)$$

Only the first term of the series was used, so the condition $a\tau/R^2 \ge 0.3$ must be fulfilled (a: thermal diffusivity, τ: time).
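As a rough numerical illustration of the first-mode profile behind eq. (3), the sketch below evaluates a profile of the form θm(r) = θc − (θc − θmi)·J0(n1·r), using a truncated power series for J0. All parameter values (n1, radii, temperatures) are illustrative assumptions, not data from the paper.

```python
import math

def j0(x, terms=25):
    """Bessel function J0 via its power series (adequate for moderate |x|)."""
    return sum((-1) ** k * (x / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def billet_temperature(r, theta_c, theta_mi, n1):
    """First-mode radial temperature profile of a cylindrical billet:
    theta_m(r) = theta_c - (theta_c - theta_mi) * J0(n1 * r).
    In the paper, n1 follows from the boundary condition; here it is assumed."""
    return theta_c - (theta_c - theta_mi) * j0(n1 * r)

R = 0.07          # billet radius, m (140 mm diameter, as in figure 1)
theta_c = 860.0   # furnace temperature, degC (case A1)
theta_mi = 300.0  # centre temperature at the instant considered, degC (illustrative)
n1 = 10.0         # first eigenvalue, 1/m (illustrative)

# Temperature at the centre, mid-radius and surface
profile = [billet_temperature(r, theta_c, theta_mi, n1) for r in (0.0, 0.5 * R, R)]
print(profile)  # rises monotonically from the centre towards the surface
```

Since J0(0) = 1, the profile equals θmi at the axis and approaches the furnace temperature towards the surface, which is the qualitative behaviour shown in figures 1-4.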
Examples of the application of the model for the temperatures are presented in figures 1-4. For information about the values of the thermal stresses in the round-section billet, the Bessel function J0(n1R) is used [4, 5]. If R is the cylinder radius and r the current radius, then for r = R the stresses σtgs and σaxs at the surface of the billet are:

$$\sigma_{tgs} = \sigma_{axs} = \frac{\beta E}{1-\nu}\,\Delta\theta\,[\nu_1 \varphi_1 J_0(n_1 R)] \qquad (5)$$
Figure 1: Temperatures in the cylindrical billet (140 mm, case A1; furnace temperature θc = 860°C).

Figure 2: Analysis of the temperature in the cylindrical billet (curves tm1 at the surface, tm0 in the centre and intermediate radii r/R = 0.5-0.95; furnace temperature 860°C).

Figure 3: Detail of the temperature at the surface (r/R = 1) (tm1 in figure 2).

Figure 4: Detail of the temperature in the centre (r/R = 0) (tm0 in figure 2).
For r = 0, in the axis, σtgax and σrax are:

$$\sigma_{r\,ax} = \sigma_{tg\,ax} = \frac{\beta E}{1-\nu}\,\Delta\theta\,[\nu_1 \varphi_1] \qquad (6)$$

For the whole section of the cylinder, if the heating is symmetrical (Δθ1 = Δθ2):

$$\sigma_r = \sigma_t = \frac{\beta E}{1-\nu}\,\Delta\theta\,\nu_1 \varphi_1 \frac{2 J_1(n_1 R)}{n_1 R} \qquad (7)$$

β: coefficient of thermal dilatation
E: Young's modulus
ν: Poisson coefficient
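To give a feel for the magnitudes in eqs. (5)-(7), the sketch below evaluates their common form σ = βE/(1−ν)·Δθ·(mode factor) with assumed, order-of-magnitude material data for steel; none of these numbers come from the paper.

```python
def thermal_stress(beta, E, nu, delta_theta, mode_factor):
    """Common form of eqs. (5)-(7): sigma = beta*E/(1-nu) * delta_theta * factor,
    where mode_factor stands for the bracketed Bessel-series term of the
    respective equation (surface, axis, or whole-section form)."""
    return beta * E / (1.0 - nu) * delta_theta * mode_factor

beta = 13e-6        # thermal expansion coefficient, 1/K (typical steel, assumed)
E = 180e9           # Young's modulus at temperature, Pa (assumed)
nu = 0.3            # Poisson coefficient (assumed)
delta_theta = 50.0  # temperature difference across the section, K (illustrative)
factor = 0.8        # illustrative value of the bracketed term [nu1*phi1*J0(n1 R)]

sigma = thermal_stress(beta, E, nu, delta_theta, factor)
print(f"{sigma / 1e6:.0f} MPa")
```

Even a 50 K gradient yields stresses on the order of 100 MPa, which is why the paper treats the correlation of heating rate with the steel's strength as a design constraint.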
ν1, φ1: coefficients of the Bessel function series
Δθ: temperature gradient

1.2 The case of rectangular-section billets

In the case of billets with rectangular section of dimension X, the admitted thermal stresses reported to the variable x axis, equations (8) and (9) [6], are proposed:
- axial and tangential stress in the axis of the billet, σxax and σyax:

$$\sigma_{x\,ax} = \sigma_{y\,ax} = \frac{\beta E}{1-\nu}\,\Delta\theta\,[\nu_1 \varphi_1] = \frac{\beta E}{1-\nu}\,\Delta\theta\,\frac{\theta_c - \theta_{mi}}{\theta_c - \theta_{m0}} \qquad (8)$$

- axial and tangential stress at the surface of the billet, σxs and σys:

$$\sigma_{xs} = \sigma_{ys} = \frac{\beta E}{1-\nu}\,\Delta\theta\,[\nu_1 \varphi_1 \cos(n_1 X)] = \frac{\beta E}{1-\nu}\,\Delta\theta\,\frac{\theta_c - \theta_{ms}}{\theta_c - \theta_{m0}} \qquad (9)$$
The values of the thermal stresses over the whole section of the rectangular billet are:

$$\sigma_m = \frac{\beta E}{1-\nu}\,\Delta\theta\,\nu_1 \varphi_1 \frac{\sin(n_1 X)}{n_1 X} = \frac{\beta E}{1-\nu}\,\Delta\theta\,\frac{\theta_c - \theta_{mf}}{\theta_c - \theta_{m0}} \qquad (10)$$

1.3 Influence of the thermal regime on the furnace design

Figure 5 explains the relation between the heating conditions necessary for a specific category of steels and the geometry of the vault. The model that connects the thermal stress and the thermal field of the aggregate is obtained by computer simulation.

Figure 5: Maximum admitted temperature of the aggregate (θc admis) in accordance with the maximum admitted temperature gradient Δθ admis (billet Φ = 300 mm; if the temperature of the aggregate is 1300°C, the resulting values are θs real, θi real and Δθ real).
Figure 6: Basic profile of the vault, used to establish the curve of the vault (beginning temperature: 750°C).
The results concerning the modelling of the thermal fields in the furnace are used as a component of the remodelling of the thermal space of the aggregate. Starting from the diagrams that show the thermal regime of the aggregate, computer modelling was proposed for the design of the vault of the furnace (figure 6) [7].
2 The problem of energy and mass transfer
In order to efficiently manage the mathematical model that describes the geometry of the thermal space of the aggregate, it is necessary to know the relations between the temperature of the steel, the temperature of the thermal insulation and the temperature of the flue gases. The thermal output ηt gives the energetic efficiency; it is strongly connected to the energy transfer problems. Equation (11) [7] was established:

$$\eta_t = 1 + \lambda_a v_{0a}\,\frac{\theta_a c_a}{Q_{cb}} + \frac{\theta_{cb} c_{cb}}{H_i} - \frac{v_{0t}\,\theta_{ga} c_p}{Q_{cb}} - (\lambda_a - 1)\,v_{0a}\,\frac{\theta_{ga} c_a}{Q_{cb}} \qquad (11)$$

where the following notation is used:
v0a: theoretical volume of combustion air related to the thermal unit of the fuel [m³N/10³ kJ]
λa: combustion air excess coefficient
θa: combustion air temperature, °C
ca: thermal capacity of the air, kJ·m⁻³N·K⁻¹
cp: thermal capacity of the flue gases
Hi: thermal energy of the fuel, kJ·m⁻³N

Defining "the factor of the fuel" [7] as $K_{cb} = 1 + \theta_{cb} c_{cb} / H_i$:

$$\eta_t = K_{cb} + \frac{1}{Q_{cb}}\left(\lambda_a v_{0a}\theta_a c_a - v_{0t}\theta_{ga} c_p - (\lambda_a - 1)\,v_{0a}\theta_{ga} c_a\right) \qquad (12)$$
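Equations (11) and (12) are algebraically the same statement, with the fuel factor Kcb absorbing the constant terms. The sketch below evaluates both with arbitrary illustrative values (not furnace data) and checks that they agree:

```python
def eta_eq11(lam_a, v0a, v0t, theta_a, theta_ga, theta_cb,
             c_a, c_p, c_cb, H_i, Q_cb):
    """Thermal output per eq. (11)."""
    return (1.0
            + lam_a * v0a * theta_a * c_a / Q_cb
            + theta_cb * c_cb / H_i
            - v0t * theta_ga * c_p / Q_cb
            - (lam_a - 1.0) * v0a * theta_ga * c_a / Q_cb)

def eta_eq12(lam_a, v0a, v0t, theta_a, theta_ga, theta_cb,
             c_a, c_p, c_cb, H_i, Q_cb):
    """Thermal output per eq. (12), using the fuel factor K_cb."""
    K_cb = 1.0 + theta_cb * c_cb / H_i
    return K_cb + (lam_a * v0a * theta_a * c_a
                   - v0t * theta_ga * c_p
                   - (lam_a - 1.0) * v0a * theta_ga * c_a) / Q_cb

# Illustrative inputs only; units as in the notation list above
args = dict(lam_a=1.1, v0a=0.26, v0t=0.28, theta_a=350.0, theta_ga=900.0,
            theta_cb=20.0, c_a=1.3, c_p=1.5, c_cb=1.6, H_i=35000.0, Q_cb=35000.0)
print(eta_eq11(**args), eta_eq12(**args))  # identical by construction
```

A useful consequence is that only the bracket of eq. (12) depends on the operating point, so the effect of air preheating (θa) or of excess air (λa) on ηt can be read off term by term.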
The connection between the temperature of the gases, the temperature of the thermal insulation and the temperature of the billets is described by the equation:

$$\theta_g(\alpha_{gp}\varepsilon_p + \alpha_c) = \theta_p(\alpha_{gp}\varepsilon_p + \alpha_c + \chi\alpha_{pm}\varepsilon_{pm} - \chi\alpha_{gpm}\varepsilon_{pm}) - \theta_m(\chi\alpha_{pm}\varepsilon_{pm} - \alpha_{gpm}\varepsilon_p) + q_{ex} \qquad (13)$$

αgpm: heat exchange coefficient from the gases to the metallic material, considering that the temperature of the gases equals the temperature of the thermal insulation, kJ·m⁻²·h⁻¹·K⁻¹
αpm: radiation heat exchange coefficient between the thermal insulation and the metal, kJ·m⁻²·h⁻¹·K⁻¹
αgp: radiation heat exchange coefficient between the gases and the thermal insulation, kJ·m⁻²·h⁻¹·K⁻¹
αc: convection heat exchange coefficient between the gases and the thermal insulation, kJ·m⁻²·h⁻¹·K⁻¹
θg: temperature of the flue gases, °C
θp: temperature of the thermal insulation inside the furnace, °C
ε: thermal emissivity coefficients
χ = s/S, where s is the heated surface of the billets (m²) and S the surface of the thermal insulation (m²)
qex: the conduction thermal flow

Referring to equation (13), the complex heat exchange in the furnace is characterised by:
- the heat exchange coefficient between the thermal insulation and the billets:

$$\alpha_1 = \frac{\alpha_{gp}\varepsilon_p + \alpha_c + \chi\alpha_{pm}\varepsilon_{pm} - \chi\alpha_{gm}\varepsilon_p}{\alpha_{gp}\varepsilon_p + \alpha_c}\,(\alpha_{gm}\varepsilon_m + \alpha_c) + \frac{q_{ex}}{\theta_p - \theta_m} \qquad (14)$$
- the heat exchange coefficient between the flue gases and the billets:

$$\alpha_2 = \frac{(\alpha_{pm}\varepsilon_{pm} - \alpha_{gpm}\varepsilon_p)(\alpha_{gp}\varepsilon_p + \alpha_c)}{\alpha_{gp}\varepsilon_p + \alpha_c + \chi(\alpha_{pm}\varepsilon_{pm} - \alpha_{gpm}\varepsilon_p)} - \frac{q_{ex}}{\theta_g - \theta_m} + \alpha_{gm}\varepsilon_m + \alpha_c \qquad (15)$$
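Equation (13) is linear in θg, so the flue gas temperature follows directly once the exchange coefficients are known. The sketch below solves it for θg with purely illustrative coefficient values (not furnace data) and substitutes the result back into both sides as a self-check:

```python
def flue_gas_temperature(theta_p, theta_m, a_gp, a_pm, a_gpm, a_c,
                         eps_p, eps_pm, chi, q_ex):
    """Solve eq. (13) for theta_g, the flue gas temperature."""
    rhs = (theta_p * (a_gp * eps_p + a_c
                      + chi * a_pm * eps_pm - chi * a_gpm * eps_pm)
           - theta_m * (chi * a_pm * eps_pm - a_gpm * eps_p)
           + q_ex)
    return rhs / (a_gp * eps_p + a_c)

# Illustrative inputs: temperatures in degC, coefficients in kJ/(m2 h K)
params = dict(theta_p=1200.0, theta_m=900.0, a_gp=400.0, a_pm=350.0,
              a_gpm=300.0, a_c=60.0, eps_p=0.8, eps_pm=0.8, chi=0.6,
              q_ex=5000.0)
theta_g = flue_gas_temperature(**params)
print(round(theta_g, 1))  # theta_g sits above theta_p when the billets are colder
```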
For particular cases, simpler forms of the equations may be established. For example, if the fuel is natural gas, the temperature of the flue gases is:

$$\theta_g = \alpha_p\theta_p + \alpha_m\theta_m + \frac{q_{ex}}{0.8\,\alpha_{gp} + \alpha_c} \qquad (16)$$

where

$$\alpha_p = \frac{0.8\,\alpha_{gp} + \alpha_c + \chi(\alpha_{pm} - \alpha_{gpm})}{0.8\,\alpha_{gp} + \alpha_c} \qquad (17)$$

$$\alpha_m = \chi\,\frac{0.8\,(\alpha_{pm} - \alpha_{gpm})}{0.8\,\alpha_{gp} + \alpha_c} \qquad (18)$$
Equation (16) can also take different forms according to the model of the furnace.
3 Dynamics of the gases in the aggregate
In order to obtain the most adequate thermal regime, it is necessary to ensure advanced circulation and recirculation of the gases and to keep the parameters of their dynamics permanently under control. A study on a physical model is certainly also necessary. In order to accomplish this study and to transfer the results to the real case, some non-dimensional criteria must be established; the theory of similitude was applied. Figure 7 presents the physical model for the walking beam furnace and an experimental result concerning the dynamics of the gases.
Figure 7: The experimental physical model for the recirculation of the flue gases in the case of the walking beam furnace.
The recirculation of the gases has some particular characteristics explained by the geometrical limits of the thermal space. In figure 8 (senses 2 and 2'), the circulation is at the superior surface of the billets and produces a secondary-degree recirculation. The flow recycled in the "primary heating zone" and "secondary heating zone" can be calculated using equation (19):

$$m_r = m_0\left(0.2\,\frac{X_f}{r_0} - 1\right) \qquad (19)$$

mr: mass of recycled gases
m0: mass of the gases at the exit from the burner
r0: burner radius

As a result of experiments and mathematical modelling, the influence of the temperature on the general dynamics of the gases in the continuous linear furnace (walking type furnace, figure 8) was remarked. It is also to be remarked that the extraction of the flue gases does not have a uniform distribution: the maximum speed is reached in the central tap holes, simultaneously with low speeds through the lateral tap holes (figure 9).
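Eq. (19) gives the recycled mass directly from the distance along the jet; a minimal sketch, with X_f and r_0 values chosen for illustration only:

```python
def recycled_mass(m0, x_f, r0):
    """Eq. (19): m_r = m0 * (0.2 * x_f / r0 - 1).
    Physically meaningful only where the bracket is non-negative,
    i.e. for x_f >= 5 * r0 downstream of the burner."""
    return m0 * (0.2 * x_f / r0 - 1.0)

m0 = 1.0   # primary jet mass at the burner exit (normalised)
r0 = 0.05  # burner radius, m (illustrative)
for x_f in (0.5, 1.0, 2.0):  # distance along the jet, m (illustrative)
    print(x_f, recycled_mass(m0, x_f, r0))
```

The linear growth of m_r with X_f is what makes the burner arrangement and the tap-hole layout (section 4) effective levers on the recirculation rate.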
Figure 8: Influence of the temperature and of the jet of the gases on the dynamics in the furnace, depending on the heating zone.

Figure 9: Speed distribution in the tap holes of the furnace (speed of the flue gases Wev [m/s] versus the number of the tap hole; modelling data for a two-zone walking beam furnace with a high fuel flow).
If the temperature is higher at the superior levels of the furnace than at the hearth level, it will influence the dynamic regime differently in the two zones (figure 8). Introducing the hypothesis of very close values of the thermal capacity for the primary and recycled jets, the recycling coefficient can be expressed. The dependence of the dynamic recycling coefficient K on the temperatures is presented in figure 10.
4 Design of the thermal space of the furnace
The dynamic design of the furnace obtained by applying the mathematical model may assure a 41% to 55% decrease of the heating time (from 153 minutes to 90 minutes), corresponding to a similar decrease of the energy consumption [8]. It is also possible to reduce the oxidation of the steel to 0.8-1.4%. Practically, this means proposing a variable geometry of the thermal space. The basic schema corresponding to the thermal diagram is presented in figure 11. The model includes some particularities of the geometry of the furnace:
• there are three groups of burners:
- group A1: burners using air from the heat recovery unit R1, which can assure a higher temperature at the superior level of the furnace
- group A2: burners using air from R2, in order to assure a lower temperature at the inferior levels of the furnace
- group A3: special AFRP burners, in function only in special conditions, in connection with the geometry of the thermal space in the first heating zone
• groups A1 and A2 are connected to the tap-hole system in order to control the dynamic regime of the gases
• the exit of the flue gases is at the level of the vault
Figure 10: Establishing the temperature of the mixed jets ta using the recycling coefficient.

Figure 11: A basic schema for modelling the geometry of the continuous furnace (walking beam furnace).
• the evacuation system of the gases assures the condition θb < θv in the first zone of the furnace
• in order to assure the dynamic and thermal regime, the geometry of the vault (especially in the first zone) is decisive
• the low oxidation level of the steel is also highly influenced by the profile of the vault in the first zone
5 Discussion
The results of the studies on the thermal stresses are the basis of the remodelling of the geometry of the thermal space of the furnace. Within this, the aspects regarding the design of the vault are essential in determining the dynamic and thermal solutions. If a wide range of steels with different thermal properties is heated in the furnace, a variable geometry of the thermal space will bring important savings of energy and metal. Starting from the diagrams of the variation of the maximum admitted temperature of the furnace, it is possible to model the vault of the continuous working furnace. Using the proposed general solutions for the remodelling of the thermal regime, a better control of the temperatures in each heating zone of the furnace can be obtained and correlated with the necessary temperatures of the billets. It is also possible to control the temperature of the thermal insulation and, by this, to save thermal energy. Using the results of the modelling, the flue gas temperature in each heating zone of the furnace can be controlled in connection with the temperature of the steel. The basics of the general solution of the model allowed establishing the arrangement of the burners in connection with the design of the furnace and the necessary output. The design of the furnace can also be changed in view of the thermal and dynamic particularities of the flue gases.
References
[1] Constantinescu, D. & Nicolae, A., A model regarding the heating of steel billets in view of plastic deformation, Metalurgia, Bucuresti, vol. I, no. 4, p. 22, 1996.
[2] Constantinescu, D. & Nagy, D., Tensiuni termice in semifabricatele din otel incalzite in vederea deformarii plastice [Thermal stresses in steel semi-finished products heated for plastic deformation], BRAMAT '99, Brasov, vol. III, section III, p. 108, 1999.
[3] Constantinescu, D., Thermal stresses in steel billets and temperature level in furnaces for rolling mills, Scientific Bulletin UPB, vol. 62, no. 1/2000, p. 127, ISSN 1454-2331.
[4] Heiligenstaedt, W., Thermique appliquée aux fours industriels, Dunod, Paris, 1979.
[5] Stanley, P. & Dulieu-Smith, J.P., The determination of crack-tip parameters from thermoelastic data, Experimental Techniques, vol. 20, no. 2, 1996.
[6] Constantinescu, D., Thermal stresses when heating semi-finished products for rolling, SHMD 2002, Metalurgija, 3/2002, p. 257, ISSN 0543-5846, Zagreb.
[7] Constantinescu, D. & Mazankova, M., Heat exchange, energy and metal saving in the furnaces for billets reheating, 5th International Metallurgical Conference "Continuous casting of billets and modelling of steelmaking processes", vol. I, pp. 343-353, October 2003, Trinec, Czech Republic.
[8] Constantinescu, D., Energy saving at the continuous thermal aggregates applying the variable geometry of the thermal space obtained by mathematical modeling, BRAMAT 2007, Brasov, Romania, February 2007, ISSN 1223-9631.
WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line)
Section 8 Material characterisation
Investigation of shape recovery stress for ferrous shape memory alloy
H. Naoi¹, M. Wada², T. Koike², H. Yamamoto² & T. Maruyama³
¹Faculty of Engineering, Hosei University, Japan
²Graduate School of Engineering, Hosei University, Japan
³Awaji Materia Co. Ltd., Japan
Abstract
Ferrous shape memory alloy has recently been developed. The shape recovery characteristics of Fe-28%Mn-6%Si-5%Cr shape memory alloy, determined by uniaxial prestrain tests, are reported. The alloy has drawn strong attention for application to structural components such as steel pipe joints and crane rail joints. In order to apply the alloy to joints, the joint strength must be estimated for structural design, so the investigation of the shape recovery stress is significant. In this study, Fe-28%Mn-6%Si-5%Cr alloy was melted; thin plates were manufactured by rolling and were solution treated. Uniaxial tests were conducted to evaluate the shape recovery stress. Plate specimens were pulled to a designated strain in the longitudinal direction by tensile test equipment. They were then heated for shape recovery while the pulling load was controlled at zero until the displacement reached a designated value. After reaching the designated displacement, the specimen was fully restricted without further displacement, and the stress induced in the specimen was measured during heating up to above the austenitic transformation temperature and cooling down to room temperature. Stress-strain-temperature diagrams were measured through the heating and cooling process. We conclude that the stress induced by restriction increases with the increase of the restricted strain.
Keywords: ferrous shape memory alloy, shape recovery stress, uniaxial prestrain, pipe joints, solution treatment, austenitic transformation, restricted strain, ε martensite transformation, shape recovery strain.
doi:10.2495/CMEM090441
1 Introduction
Ferrous shape memory alloy is lower-cost than non-ferrous shape memory alloys such as Ti-Ni, and its formability is better. The application of the alloy to fittings for tubing, screws for tightening, and joints for structural connection has been investigated (Wada et al. [3]). In the practical use of the ferrous shape memory alloy, the shape recovery strain and the shape recovery stress are important. Here, the shape recovery stress is defined as the stress caused in the shape memory alloy when the shape recovery strain is restrained by controlling the displacement. When the displacement is restrained almost completely, the shape recovery stress in the alloy is assumed to be roughly 200-250 MPa or less. The mechanism by which the shape recovery stress appears is not yet clarified. In practical use, the displacement is often restricted partway through the shape recovery process, so the relationship between the shape recovery stress and the restricted strain under various conditions is investigated.
2 Characteristics of ferrous shape memory alloy

2.1 Mechanical properties and transformation temperatures

Fe-28%Mn-6%Si-5%Cr alloy (mass %) is used for the test specimens. They are rolled out to thin plate of 0.8 mm thickness. The rolling direction of the specimen is defined as LD and the transversal direction across the plate width as TD. Afterwards, as the solution treatment for the shape memory treatment, the plates are heated at 950°C for one hour and air-cooled. The mechanical characteristics of the test specimens, measured in the LD and TD directions (Wada et al. [4]), are shown in Table 1.
Table 1: Mechanical properties of the Fe-28%Mn-6%Si-5%Cr shape memory alloy.

In the LD and TD directions respectively, the 0.2% proof stress is 271 MPa and 267 MPa, the tensile strength is 815 MPa and 811 MPa, and the elongation is 38.6% and 37.9%. Little plastic anisotropy is observed in the test specimens.
The martensitic transformation start temperature is observed between 253 K and 298 K, and the austenitic transformation finish temperature between 403 K and 458 K.

2.2 Deformation behaviour in the shape recovery process in the uniaxial prestrain test

Fig. 1 shows the schematic stress-strain and temperature-strain curves in the shape recovery process of the uniaxial test. The strain applied to the stretched specimen by plastic deformation is defined as the prestrain εp, and the strain measured from the change of the gauge length before and after the heat treatment is defined as the shape recovery strain εr. When a fixed amount of deformation is applied to the alloy and it is heated at a fixed temperature repeatedly, a larger shape recovery strain develops. This phenomenon is called the training effect.
Figure 1: Stress-strain and temperature-strain diagram in the shape recovery process.
The shape recovery strains of Fe-28%Mn-6%Si-5%Cr alloy measured without and with training, for tensile and compressive prestrain, are reported as shown in fig. 2. In the uniaxially prestrained specimens, the shape recovery strain εr increases in absolute value to a maximum of -2% at 6% prestrain εp, and then decreases slightly with increasing prestrain. The slight reduction of εr above 6% εp may be attributed to the slip bands generated by greater deformation, since it is reported (Otsuka et al. [6]) that the maximum volume fraction of the stress-induced martensite phase is approximately 30% in the alloy. The absolute shape recovery strains at compressive prestrain are lower than those at tensile prestrain because of the different deformation behaviour caused by friction force. As there is little friction force in tensile prestraining, the state of stress is estimated to be uniaxial.
Figure 2: Prestrain vs. shape recovery strain in uniaxial tensile and compressive tests.
Figure 3: Prestrain and shape recovery strain in the longitudinal direction LD for multiaxial forming.
2.3 Deformation behaviour in the multiaxial prestrain test

The shape recovery characteristics for uniaxial stretching without a training process and for biaxial stretch forming by hydraulic bulge forming are investigated. The prestrain ratio β of the width-direction prestrain component to the LD component is defined by eqn. (1):

$$\beta = \varepsilon_{pW} / \varepsilon_{pL} \qquad (1)$$

The relationship between prestrain and shape recovery strain in the longitudinal direction LD for multiaxial forming is reported (Naoi et al. [1]) as shown in fig. 3, where the shape recovery characteristic for uniaxial stretching is calculated. The shape recovery strain in multiaxial forming at the prestrain ratio β = -0.5 is close to that of the uniaxial tensile test. However, as the prestrain ratio β increases, the absolute value of the shape recovery strain εrl decreases. The reason is considered (Wada et al. [2]) to be that a complete dislocation is introduced and the shape recovery behaviour is interrupted when a strain exceeding that which produces the stress-induced martensitic transformation is supplied to the alloy.
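The prestrain ratio of eqn. (1) can be computed directly. Note that for uniaxial tension of a plastically incompressible sheet the width strain is about −0.5 times the length strain, which is consistent with β = −0.5 corresponding to the uniaxial tensile case in fig. 3. A minimal sketch (strain values illustrative):

```python
def prestrain_ratio(eps_p_w, eps_p_l):
    """Eqn. (1): beta = eps_pW / eps_pL (width over longitudinal prestrain)."""
    return eps_p_w / eps_p_l

eps_p_l = 0.06            # 6% longitudinal prestrain, as in fig. 2 (illustrative)
eps_p_w = -0.5 * eps_p_l  # width strain for incompressible uniaxial tension
beta = prestrain_ratio(eps_p_w, eps_p_l)
print(beta)  # -0.5, the uniaxial tension point in fig. 3
```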
3 Shape recovery stress in the uniaxial tensile test

3.1 Experimental procedure

3.1.1 Test specimen
The ferrous shape memory alloy Fe-28%Mn-6%Si-5%Cr is used for the test specimens. After melting and solidification, plates of 0.8 mm thickness are manufactured by rolling. Tensile test specimens are machined to 100 mm in length, 19 mm in width and 0.8 mm in thickness. Then, as both the shape memory heat treatment and the solution treatment, furnace heating at 1000°C for 1 hour followed by air cooling is conducted. In order to prevent thermal transfer to the tensile test equipment during the heating of the shape recovery treatment, a ceramic plate of 5 mm thickness is bonded to the chuck part of the tensile test specimen as a heat insulator. Thereafter the prestrain is applied and the shape recovery treatment is conducted. A tensile prestrain of 5.5 to 6% is given to the specimen by the tensile equipment.

3.1.2 Shape recovery heat treatment and restriction of displacement
Shape recovery heat treatment and restriction of the specimen displacement are conducted in the experimental equipment. A heat insulator is installed at the top and bottom of the band heater in order to heat the tensile test specimen at a uniform and constant temperature. The heating and cooling rates are controlled at 12°C/minute and 6°C/minute, respectively. After the specimen is prestrained in the tensile test machine, it is installed and clamped in the equipment, where its stress and strain can be controlled at designated values. At the first stage, the stress of the specimen is controlled to be zero and the strain of the specimen is measured by a position meter. Then, at the second stage, after a designated interval, the displacement of the specimen is completely restricted during heating and cooling as the shape recovery treatment. The axial load generated during the shape recovery treatment is measured by the load cell, and the uniaxial stress is calculated.
The experimental conditions cover three states of displacement restraint: firstly, the displacement due to shape recovery is completely restricted from the first stage of heating; secondly, it is restricted in the middle of the shape recovery process; and thirdly, it is free from restriction.

3.2 Results and discussion

3.2.1 Shape recovery stress generated by restriction of displacement
Fig. 4 shows the transition of the shape recovery stress σ generated in the specimen when the prestrain εp is applied and the displacement is then completely restricted through the shape recovery heat treatment, from heating at 375°C to cooling down to room temperature. Fig. 5 shows the transition of the shape recovery strain when the displacement of the specimen is not restricted during heating and cooling between room temperature and 375°C.
Figure 4: Transition of shape recovery stress at complete restraint of displacement.

Figure 5: Transition of shape recovery strain when free from restraint of displacement.
In the heating process from room temperature to 375°C, a small compressive stress is first generated by the restrained thermal expansion, as shown from point "s" to point "a" in fig. 4. The tensile stress then becomes roughly 90 MPa at point "b", whose temperature is 205°C. The appearance of the tensile stress is explained by the compressive displacement that appears at the transformation temperature (T0) from ε-martensite to γ-austenite, this displacement being restricted by the equipment, as shown from point "a" to point "b" in fig. 5. The shape recovery stress at point "b" corresponds to
the fluid stress generated by a uniaxial compression strain of about 1% at 375°C, shown in fig. 5. According to our past research, when a stress whose direction is opposite to that of the shape recovery strain is applied to the material, the absolute value of the shape recovery strain decreases. The shape recovery compressive strain therefore decreases, because the experiment from point "a" to point "b" is conducted under tensile stress, and the fluid stress due to plastic forming is considered to be gradually suppressed. The stress generated by thermal shrinkage increases as the cooling proceeds from 348°C to the transformation temperature T0 of 130°C, as shown from point "b" to "c" in fig. 5. The shape recovery stress saturates at about 200 MPa at the transformation temperature (T0) of 130°C, after which the generated stress decreases slightly. The phenomena of the cooling process behave similarly to those of the heating process. The shape recovery stress from point "c" to point "f" decreases slightly; the reason is considered to be the transformation from γ-austenite to ε-martensite that appears under the stress generated by the restricted strain. The generated internal stress and the chemical free energy are considered to be balanced during the cooling process: namely, when the specimen is under tensile stress, the ε-martensite is stable above the temperature T0 and the γ-austenite is stable below the temperature T0. Another considered reason for the decrease in the stress is the Bauschinger effect, because the tensile deformation is performed after the compressive one in this shape recovery process.
Figure 6: Effect of heating temperature on shape recovery stress by complete restraint of displacement.
3.2.2 Effect of heating temperature on shape recovery stress

Fig. 6 shows the effect of heating temperature on the shape recovery stress under complete restraint of displacement when the shape recovery heat treatment temperature is set at 280°C, 375°C and 450°C. The shape recovery stress at the heating temperature of 280°C is 25 MPa smaller than that at 375°C. On the other hand, the shape recovery stress at the heating temperature of 450°C begins to fall slightly above 370°C; however, the shape recovery stress after cooling is almost equal to that of the specimen heated at 375°C. The thermally generated stress decreases with increasing temperature because the γ-austenite transformation completes above a heating temperature of 360°C. Namely, the shape recovery stress reaches its maximum value when the heat treatment temperature is above 360°C.

3.2.3 Effect of consumed shape recovery strain on shape recovery stress

We define the "consumed shape recovery strain" as the shape recovery strain that appears before the specimen's displacement begins to be restricted. Fig. 7 shows the effect of the consumed shape recovery strain on the shape recovery stress that appears when the shape recovery treatment is completed. The shape recovery stress decreases linearly as the consumed shape recovery strain increases, and there is a significant point at 1.2% consumed shape recovery strain. From fig. 5, the displacement restraint is applied during the heating process when the consumed shape recovery strain is up to roughly 1.0-1.2%, and during the cooling process when the consumed shape recovery strain is 1.2% or more. The shape recovery stress derives from the transformation stress from ε-martensite to γ-austenite in the heating process and from the thermal stress of the temperature reduction in the cooling process. At point "B" in fig. 7, the shape recovery stress is roughly 135 MPa and the consumed shape recovery strain is 0.6%.
Figure 7: Effect of consumed shape recovery strain on shape recovery stress.

4 Shape recovery stress in ferrous shape memory pipe joints
4.1 Experimental procedure

4.1.1 Connecting method by ferrous shape memory joint

When the ferrous shape memory alloy is applied to pipe joints, the diameter of the alloy pipe joint is expanded as a pre-strain process, and then two steel pipes are connected by the alloy pipe joint while the shape recovery heat treatment is conducted. In the pre-strain process, the tapered mandrel expanding method is mainly used. Fig. 8 shows the process flow of connecting the ferrous shape memory alloy pipe joint. When the ferrous shape memory alloy is applied to pipe joints, it is known that the connecting strength of the pipe joint changes with the clearance between the pipe joint and the steel pipe. The relation between the shape recovery stress and the consumed shape recovery strain of the ferrous shape memory alloy pipe joint is investigated in this examination. It is assumed that the restraint of the shape recovery of the alloy begins when the alloy pipe joint comes into contact with the steel pipe.
Figure 8: Connecting of ferrous shape memory alloy pipe joint and steel pipes.
When the shape memory alloy pipe joint contacts the steel pipe and its deformation is restrained, the circumferential strain εc of the pipe joint is calculated. The circumferential strain εc is defined by eqn. (2), where Dji is the inside diameter of the alloy pipe joint and Dmo is the outside diameter of the connected steel pipe. This circumferential strain εc is taken as the consumed shape recovery strain.
εc = (Dji − Dmo) / Dmo (2)
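As a numerical check of eqn. (2), here is a minimal Python sketch (the function name is illustrative, not from the paper) evaluating the consumed circumferential strain for the joint dimensions quoted in section 4.1.2:

```python
def circumferential_strain(d_ji: float, d_mo: float) -> float:
    """Eqn. (2): consumed shape recovery strain from the inside diameter
    Dji of the alloy joint and the outside diameter Dmo of the steel pipe
    (both in mm; the ratio is dimensionless)."""
    return (d_ji - d_mo) / d_mo

# Section 4.1.2: Dmo = 60.5 mm and Dji ranges from 60.5 to 61.8 mm,
# which reproduces the quoted strain range of 0 to 2.15%.
for d_ji in (60.5, 61.8):
    print(f"Dji = {d_ji:.1f} mm -> eps_c = {circumferential_strain(d_ji, 60.5):.2%}")
```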
When the steel pipe is tightened by the ferrous shape memory alloy joint, the hoop stress of the alloy pipe joint is assumed to be the shape recovery stress.

4.1.2 Test specimen

In the examination, the steel pipe is inserted into the shape memory alloy joint as shown in fig. 8. The pipe joint is made of the Fe-28%Mn-6%Si-5%Cr alloy described in section 2.1. After the alloy is melted, rods are manufactured by hot forging, and the specimen pipes are machined from them. Afterwards, a solution treatment is given at 1050°C for 90 minutes. The diameter expansion ratio is set at 6% by pushing in a mandrel with a taper angle of 5°, as shown in fig. 9.
The diameter expansion is conducted again after the shape recovery treatment, and a training treatment is given at 600°C for 30 minutes. Afterwards, the inside and outside surfaces of the test specimen are machined. To connect the carbon steel pipe with the alloy joint, the shape recovery heat treatment is conducted at 350°C for 30 minutes. The grade of the carbon steel pipe is STKM13A per the Japanese Industrial Standards. The dimensions of the steel pipe are 60.5 mm in outside diameter, 3.8 mm in wall thickness and 20 mm in length. The dimensions of the pipe joint are 66.5 to 67.8 mm in outside diameter Djo, 60.5 to 61.8 mm in inside diameter Dji, 3 mm in thickness t and 15 mm in length L. Eleven dimensional variants were prepared and, with two specimens per variant, the total number of test specimens is 22. As a result, the consumed shape recovery strain εc in the circumferential direction calculated by eqn. (2) ranges from 0 to 2.15%. As shown in fig. 9, a multi-axial stress is generated in the test specimen when the tapered mandrel expanding method is conducted.
Figure 9: Tapered mandrel expanding method and multi-axial stress.

Figure 10: Measurement of shape recovery stress.
4.1.3 Measurement of shape recovery stress

Fig. 10 shows the measurement of the shape recovery stress. In order to measure the circumferential strain, two plastic strain gauges are bonded to the outside surface of the alloy pipe joint at positions 2 mm from both pipe ends after the connecting is completed. When the part carrying the strain gauge is cut off, that part of the test specimen shrinks, and the circumferential strain εθ is measured. The circumferential shape recovery stress σθ is calculated from eqn. (3), where Ej is the Young's modulus of the alloy pipe joint.
σθ = Ej εθ (3)
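Eqn. (3) can be sketched in the same way; note that the paper does not report a value for Ej, so the modulus below is an assumed placeholder, not a measured value:

```python
def recovery_stress(e_j: float, eps_theta: float) -> float:
    """Eqn. (3): circumferential shape recovery stress (MPa) from the
    strain released when the gauged part is cut off."""
    return e_j * eps_theta

E_J = 170e3        # MPa; assumed illustrative Young's modulus, NOT from the paper
eps_theta = 0.001  # hypothetical released strain of 0.1%
print(recovery_stress(E_J, eps_theta))  # about 170 MPa
```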
4.2 Results and discussion

4.2.1 Shape recovery stress and consumed shape recovery strain

The circumferential shape recovery stress σθ calculated from the measured strain by eqn. (3) is plotted in fig. 11 against the consumed shape recovery strain, in comparison with the uniaxial test results.
Figure 11: Consumed shape recovery strain and shape recovery stress.
The shape recovery stress of the pipe joints decreases with increasing consumed shape recovery strain. The approximate straight line obtained by regression analysis is given by eqn. (4):

σθ = −87.1 × εθ + 175 (4)

The shape recovery stress σθ at which the consumed shape recovery strain εθ becomes 0 is about 170 MPa for the alloy pipe joint and about 190-240 MPa for the uniaxially prestrained specimen. Moreover, the consumed shape recovery strain εθ at which the shape recovery stress σθ becomes 0 is 2.0% for the pipe joint and 1.5% for the uniaxially prestrained specimen. One reason is considered to be that the former received the training treatment while the latter did not. When the training treatment is applied to the uniaxially prestrained specimen, as observed in fig. 2, the shape recovery strain reaches 3.7%. However, the shape recovery strain of the pipe joint is 2.0%, smaller than that of the uniaxially prestrained specimen. The reason is considered to be that the expansion by the tapered mandrel is conducted under a multiaxial stress of circumferential tension and longitudinal compression.

4.2.2 Estimation of joint strength for shape memory alloy

When the steel pipe is tightened by the pipe joint, the contact pressure p on the contact surface is calculated from the balance of forces and expressed as eqn. (5), where D and t are the diameter and wall thickness of the joint, respectively.
p = 2tσθ / D (5)
When the collapse strength of the steel pipe is larger than that of the pipe joint, the joint strength F can be estimated from the balance of forces and eqn. (5) as eqn. (6), where L′ is the contact length between the pipe joint and the steel pipes and μ is the coefficient of friction between them.
F = πDL′·p·μ = 2πtL′σθμ (6)
We conducted strength tests on the alloy pipe joint, and the strength can be estimated by eqn. (6) when the coefficient of friction is taken as 0.31.
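The chain from eqns. (4)-(6) can be sketched as below; t, D, μ and the regression coefficients come from the paper, while L′ = 15 mm (the full joint length) is our assumption for the contact length, and the function names are illustrative:

```python
import math

def sigma_theta(eps_pct: float) -> float:
    """Eqn. (4): regression line; strain in %, stress in MPa."""
    return -87.1 * eps_pct + 175.0

def contact_pressure(t: float, sigma: float, d: float) -> float:
    """Eqn. (5): p = 2 t sigma_theta / D (MPa for mm/MPa inputs)."""
    return 2.0 * t * sigma / d

def joint_strength(t: float, l_contact: float, sigma: float, mu: float) -> float:
    """Eqn. (6): F = 2 pi t L' sigma_theta mu (N for mm/MPa inputs)."""
    return 2.0 * math.pi * t * l_contact * sigma * mu

s = sigma_theta(0.6)                    # consumed strain of 0.6%
p = contact_pressure(3.0, s, 60.5)      # t = 3 mm, D = 60.5 mm
f = joint_strength(3.0, 15.0, s, 0.31)  # L' = 15 mm (assumed), mu = 0.31
print(f"sigma_theta = {s:.1f} MPa, p = {p:.1f} MPa, F = {f/1e3:.1f} kN")
```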
5 Conclusions

We investigated the shape recovery stress of a ferrous shape memory alloy in this study. The following items were clarified.
(1) The shape recovery stress increases with increasing restriction of displacement.
(2) The heating temperature affects the shape recovery stress: the higher the temperature, the larger the shape recovery stress.
(3) The shape recovery stress changes linearly with the consumed shape recovery strain for the alloy pipe joints, and an equation expressing their relation is presented.
(4) The strength of the shape memory alloy pipe joint is calculated.
Growth behavior of small surface cracks in coarse and ultrafine grained copper

M. Goto¹, S. Z. Han², Y. Ando¹, N. Kawagoishi³, N. Teshima⁴ & S. S. Kim⁵
¹Department of Mechanical Engineering, Oita University, Japan
²Korea Institute of Materials Science, Republic of Korea
³Department of Mechanical Engineering, Kagoshima University, Japan
⁴Department of Mechanical Engineering, Oita National College of Technology, Japan
⁵Engineering Research Center, Gyeongsang National University, Republic of Korea
Abstract

Since the fatigue life of a plain specimen of a ductile metal is controlled mainly by the propagation life of a small surface crack, clarifying the growth behavior of small cracks is crucial to the safe design of smooth members. However, little has been reported on the growth behavior of small surface cracks in ultrafine grained (UFG) metals. In the present study, stress-controlled fatigue tests of coarse grained (CG) and UFG copper were conducted. The surface damage evolution during cyclic stressing was observed by optical microscopy, and the growth behavior of small surface cracks was monitored by a plastic replication technique. The physical background of fatigue damage in CG and UFG copper is discussed from the viewpoints of the initiation and growth behavior of small surface cracks.

Keywords: equal channel angular pressing, copper, ultrafine grain, fatigue, small surface crack.
doi:10.2495/CMEM090451

1 Introduction

Copper has been widely used as a base material in various electronics-based industries. Recent developments in electronic apparatus, however, mean that higher strength, more ductile materials are now required to meet these needs. Improving the strength of copper requires the development of highly alloyed Cu materials. Alloyed Cu, however, possesses inherently lower conductivity than its unalloyed counterpart. Therefore, in order to overcome this inherent shortcoming of conventional Cu alloys, the development of pure Cu with a submicron grained structure is currently being explored by employing severe plastic deformation, such as ECAP (equal channel angular pressing) processing [1-4]. Before applying ultrafine grained (UFG) copper to the components of machines and structures, its fatigue damage should be clarified. Since the fatigue lives of machine components and structures are mainly controlled by the growth life of a fatigue crack, the crack growth behavior should be clarified for the design of safe machine components and structures. Recently, the growth behavior of millimeter-range cracks in UFG metals was studied with compact-tension [5-7] and single edge-notched specimens [8, 9]. On the other hand, the fatigue life of smooth specimens is approximately controlled by the growth life of a small surface crack. Nisitani and Goto [10], Goto and Nisitani [11] and Goto and Knowles [12] showed that the crack growth life from an initial size to 1 mm accounted for about 70% of the fatigue life of plain specimens of many conventional grain-sized metals. This means that the growth behavior of small cracks must be clarified to estimate the fatigue life of plain members. For both coarse grained (CG) and UFG copper, to the authors' knowledge, little has been reported in the literature on the growth behavior of a small surface crack [13]. In the present study, stress-controlled fatigue tests of CG and UFG copper were conducted.
The surface damage evolution during cyclic stressing was observed by optical microscopy, and the growth behavior of a small surface crack was monitored using a plastic replication technique (PRT). The physical background of fatigue damage was discussed from the viewpoints of the initiation and growth behavior of small surface cracks.
2 Experimental procedures

The material used was 99.99% pure oxygen-free copper. Before the ECAP process, the material was annealed at 500˚C for 1 hr. In the ECAP die, the inner and outer angles of the channel intersection were 90 and 45˚, respectively. Repetitive ECAP was accomplished by route Bc (after each pressing, the billet bar was rotated around its longitudinal axis through 90˚). Four passes of extrusion resulted in an equivalent shear strain of about 3.9 [14]. MoS2 was used as a lubricant at each pressing, and the pressing speed was 5 mm/sec. After ECAP, the microstructure and mechanical properties were studied. Transverse cross sections of the ECAP-processed bars were cut to prepare specimens for transmission electron microscopy. The specimens were mechanically polished to a thickness of 100 µm, then subjected to twin-jet electropolishing. The solution was 200 ml CH3OH plus 100 ml HNO3. The jet thinning was conducted at –30˚C.
Round bar specimens of 5 mm diameter (Fig. 1) were machined from the annealed and ECAP processed bars. Although the specimens had a circumferential notch (ρ = 20 mm and c = 0.25 mm, where ρ = radius and c = depth of the notch), the strength reduction factor for this geometry was close to one, and therefore the specimens could be considered plain specimens. Before testing, all the specimens were electropolished to remove about 25 µm from the surface layer in order to facilitate observations of the surface state. All tests were carried out at room temperature using a rotating bending fatigue machine operating at 3000 rev/min. The surface damage evolution during cyclic stressing was observed by optical microscopy. The measurement of crack length was conducted using a PRT. The crack length, l, is a length measured along the circumferential direction of the surface. The stress value referred to in this paper is that of the nominal stress amplitude, σa , at the minimum cross-section.
Figure 1: Shape of the specimen.
3 Experimental results and discussion

3.1 Microstructure and mechanical properties of the materials

Figure 2 shows typical microstructures of CG and UFG copper. The grain size of the CG copper was about 100 µm. For the UFG copper, refinement of the structure is evident: granular grains with an average size of 300 nm are formed, and the grain boundary (GB) areas involve a high population of dislocations. The SADP (selected area diffraction pattern of the center area, 1 µm in diameter) consists of rings of diffraction spots, showing that the GBs have high angles of misorientation. The fraction of high-angle GBs might be relatively low [15]. In addition, the GBs have lost their sharpness and exhibit a 'spotty' contrast and broad contours, suggesting non-equilibrium GBs containing random networks of GB dislocations [16].

Figure 2: Microstructure of the materials; (a) CG copper, (b) UFG copper.

The change in heat flow value was measured using a differential scanning calorimeter in a nitrogen environment with a ramping rate of 1.67˚C/min up to 500˚C. The measurements showed that the change in the heat flow value (CHFV) after the 4th pressing of ECAP was CHFV = 0.85 J/g. The rationale for the heat-flow measurement is that the heat flow of pure copper is controlled solely by the generation and/or annihilation of dislocations and GBs during ECAP processing, unlike other alloys with either precipitation hardening or solid solution hardening mechanisms. As might be expected, the CHFV of fully annealed copper is almost zero. The large value of CHFV after the 4th pressing of ECAP means that redundant strain energy related to high-density dislocations is stored in the material, showing the instability of the microstructure.

The mechanical properties of an annealed bar were 183 MPa tensile strength, 64% elongation, and a Vickers hardness number of 63. After eight passes of ECAP, these values changed to 438 MPa, 28%, and 141, respectively. The relationship between the Vickers hardness number, HV, and the number of pressings showed that a dramatic increase in HV (HV = 63 to 128) occurs from the first pressing; the subsequent increases become milder, followed by a saturation trend after the 3rd pressing.

Figures 3(a) and (b) show the post-fracture surfaces of tensile specimens of CG and UFG copper, respectively. The fracture surface of CG copper shows many dimples. The sample processed by ECAP shows a flat fracture surface including a couple of dimples, compared with the fracture surface of the annealed sample. There is no significant difference in the dimple size between the two samples, in spite of the two orders of magnitude difference in grain size. Nevertheless, the dimples are shallower for the UFG copper because of its decreased ductility.

Figure 3: Fracture surface of tensile specimens; (a) CG copper, (b) UFG copper.
3.2 Formation behavior of fatigue damage

Figure 4 shows the S-N curves of CG and UFG copper. For the UFG copper examined under stress-controlled testing, the enhancement in fatigue life is obvious [17-19]. The degree of enhancement increases sharply with increasing stress amplitude. In the long-life regime in excess of N = 10⁷ cycles, however, the fatigue life to failure of UFG copper tends to coincide with that of CG copper.

Figure 4: S-N curves (stress amplitude σa versus number of cycles to failure Nf) for annealed (CG) and ECAPed (UFG) copper.
Figures 5(a) and (b) show the typical surface damage of post-fatigued specimens of CG and UFG copper, respectively. For CG copper, slip bands were formed along limited slip planes within a small number of grains, and the length of the slip bands was nearly equivalent to the grain size. In contrast, the surface of UFG copper was covered by a high population of damaged areas. Figure 5(c) is an SEM micrograph of the highlighted area in Fig. 5(b), showing protrusions and intrusions [20]. Surprisingly, their sizes are larger than the grain size.
Figure 5: Surface morphology of post-fatigued specimens at σa = 120 MPa; (a) CG copper (Nf = 4.739×10⁵), (b) UFG copper (Nf = 3.75×10⁶), (c) SEM micrograph of the highlighted area in (b). Scale bars: 20 µm in (a) and (b), 5 µm in (c).
Figure 6 shows the formation process of surface damage at σa = 120 MPa for UFG copper, where the number of cycles to failure was Nf = 3.75×10⁶. It is found that there is a pronounced time lag in the onset of a significant enlargement of the damaged areas. In addition, whole-surface observations of the specimen indicated that the damaged regions were formed at an early stage of cycling, and that the number and area of these regions increased slowly with further cycling up to a specific number of cycles, depending on the material and stress amplitude. Once this specific number of cycles had been exceeded, both the number and area of the damaged regions showed a significant rise. The number of cycles prior to the start of this remarkably large extension of the damaged regions was about half the fatigue life (N ≈ 1.7×10⁶).
Figure 6: Change in surface state of UFG copper during repeated stressing at σa = 120 MPa (Nf = 3.75×10⁶); N = 0, 1.2×10⁶, 1.9×10⁶, 2.3×10⁶, 2.9×10⁶ and 3.4×10⁶ cycles. Scale bar: 20 µm.
Figure 7: Change in surface state around a major crack observed by a plastic replication technique; (a) CG copper, σa = 130 MPa, Nf = 1.535×10⁵; (b) UFG copper, σa = 120 MPa, Nf = 2.3×10⁶. Scale bars: 30 µm.
Figure 7 shows the initial growth behavior of a major crack, which led to the final fracture of the specimen, monitored by the PRT. At an early stage of cycling,
for CG copper, slip bands were formed within a grain, and a grain-sized crack was initiated from the slip bands at about 20% of the fatigue life. For UFG copper, a crack of 30 µm length was initiated at about 20% of the fatigue life.

3.3 Growth behaviour of a small crack

Figure 8 shows the crack growth curves (ln l vs. N relation). The relationship for UFG copper can be approximated by a straight line independent of the stress amplitude. For CG copper, however, the relationship depends on the stress amplitude. Namely, the growth curves at stresses above σa = 100 MPa are nearly represented by a straight line. At stresses below σa = 90 MPa, a linear relation of the growth curve nearly holds for crack lengths in excess of l = 0.3 mm, whereas each plot for l < 0.3 mm deviates upward from an extension of the linear relation for l > 0.3 mm.

Figure 8: Crack growth curves (ln l vs. N relation); (a) CG copper, (b) UFG copper.
The linear relation shown in Fig. 8 means that the crack growth rate (CGR, dl/dN) should be proportional to l at a constant stress amplitude. Figure 9 shows the CGR versus crack length relation. The CGR for CG copper is nearly proportional to the crack length (dl/dN ∝ l), with the exception of CGRs below dl/dN = 10⁻⁶ mm/cycle. For UFG copper, however, the dl/dN ∝ l relation nearly holds even in the extremely low CGR range (dl/dN < 10⁻⁶ mm/cycle), in spite of some fluctuating plots around dl/dN = 10⁻⁸ mm/cycle. To compare the growth resistance of a small crack between CG and UFG copper, the relations at σa = 80 and 120 MPa for CG copper (Fig. 9(a)) were redrawn as dashed lines in Fig. 9(b). The comparison at σa = 120 MPa indicates that the CGR of CG copper is about three times larger than that of UFG copper. In addition, the relation at 80 MPa for CG copper is nearly equivalent to that at 85 MPa for UFG copper. These results differ from the growth behavior of other ECAPed Al and steel alloys, in which finer microstructures exhibit higher growth rates [5, 6, 8]. However, the experiments on those alloys were conducted with long cracks initiated from artificial notches. This means that the applied stress amplitudes were sufficiently small and the crack propagated under the small-scale yielding condition. In the present fatigue tests, the stress amplitudes were excessively large compared with the stresses applied to those alloys. For CG copper, especially, the ratios of stress amplitude to tensile strength, σa/σu, were 0.43 and 0.66 for σa = 80 and 120 MPa, respectively. Accordingly, the crack at σa = 120 MPa propagates under large-scale yielding, showing an accelerated CGR compared with UFG copper with its higher tensile strength. The dependency of CGR on stress amplitude was studied, and it showed that dl/dN is nearly proportional to σaⁿ at a constant crack length. The value of n was about 7.5 and 4.4 for CG and UFG copper, respectively. Considering the stress dependency and the relation (dl/dN ∝ l) in Fig. 9 together, the CGR of the present copper is uniquely determined by the term σaⁿl; namely, the small-crack growth law (SCGL) defined by eqn. (1) holds [10, 21]:

dl/dN = C·σaⁿ·l (1)

where C and n are constants determined by the material.
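At constant stress amplitude, eqn. (1) integrates to l(N) = l₀·exp(Cσaⁿ·N), which is exactly the straight ln l vs. N line of fig. 8. A small Python sketch of this closed-form solution (the constant C is an arbitrary illustrative value chosen only to give readable numbers; only n = 4.4 for UFG copper comes from the paper):

```python
import math

C, n = 7e-15, 4.4          # C assumed for illustration; n = 4.4 (UFG copper)
sigma_a, l0 = 120.0, 0.03  # stress amplitude (MPa) and initial crack length (mm)
rate = C * sigma_a ** n    # per-cycle relative growth, dl/dN = rate * l

def crack_length(cycles: float) -> float:
    """Closed-form solution of eqn. (1) at constant stress amplitude."""
    return l0 * math.exp(rate * cycles)

# ln l increases by the same amount over equal cycle intervals (linear plot):
steps = [math.log(crack_length(N)) for N in (0.0, 1e5, 2e5)]
print(steps[1] - steps[0], steps[2] - steps[1])  # equal increments
```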
Figure 9: Crack growth data (dl/dN vs. l relation); (a) CG copper, (b) UFG copper.
Figure 10 shows the dl/dN versus σaⁿl relation. It is evident that the SCGL is available for estimating the CGR of a small surface crack propagating with a growth rate above 10⁻⁶ and 10⁻⁷ mm/cycle for CG and UFG copper, respectively. The dashed line in Fig. 10(b) is the relationship obtained from UFG copper processed by twelve pressing cycles of ECAP (grain size: 300 nm). There is no significant difference in the relationship between UFG copper with four and twelve ECAP process cycles. Many researchers have reported that the growth rate of fatigue cracks can be unified in terms of the stress intensity factor range ∆K. Here, the parameter ∆K is the effective parameter for a crack when the condition of small-scale yielding at the crack tip is satisfied. The value of ∆K for an infinite plate with a crack is given by ∆K = ∆σ(πa)^1/2. This equation indicates that the stress range has to be higher for a small crack in order to obtain the same growth rate as for a larger crack. Therefore, when a sufficiently small crack propagates at a finite growth rate (for example, 10⁻⁶ to 10⁻³ mm/cycle), the condition of small-scale yielding is not usually satisfied; thus the growth rate of a small crack cannot be determined uniquely by ∆K. Nisitani and Goto showed that the term σaⁿl is an effective parameter for determining the CGR of a small crack in many ductile materials, where n is a material constant. The expression σaⁿl (n = 3) was first proposed by Frost and Dugdale [22], who applied it to comparatively large cracks for which the condition of small-scale yielding nearly holds. Now σa³l can be considered an approximation of ∆K, whereas σaⁿl in the present study is a parameter for crack propagation under large-scale yielding. Moreover, an effective and convenient method based on the SCGL, in which the effect of mechanical properties is partly considered, has been proposed for predicting the fatigue life of smooth members of many CG metals [23, 24]. The prediction was in good agreement with the experimental results. Thus, Fig. 10(b) indicates the applicability of the SCGL to the evaluation of the fatigue life of UFG metals.
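Integrating the SCGL between fixed crack sizes gives a growth life N ∝ σa⁻ⁿ, so the exponent n measures how strongly life depends on stress. A short sketch comparing the exponents reported here (n ≈ 7.5 for CG and n ≈ 4.4 for UFG copper); the constant C and the crack-size limits cancel in the ratio:

```python
def life_ratio(stress_factor: float, n: float) -> float:
    """Relative change in small-crack growth life when the stress amplitude
    is multiplied by stress_factor, from N proportional to sigma_a**(-n)."""
    return stress_factor ** (-n)

# Effect of a 10% increase in stress amplitude on the growth life:
print(f"CG  (n = 7.5): life x {life_ratio(1.1, 7.5):.2f}")  # roughly halved
print(f"UFG (n = 4.4): life x {life_ratio(1.1, 4.4):.2f}")
```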
dl/dN mm/cycle
(a)
UFG n = 4.4
1
10-5
1
-6
10
130MPa 120MPa 100MPa 90MPa 80MPa
-7
10
-8
10
10
13
10
14
n
σa l
Figure 10:
4
10
15
n (MPa) mm
10
16
dl/dN mm/cycle
-5
10
10-4
Annealed n = 7.5
(b)
1 1 UFG copper produced by 12 pressing cycles
10-6
200MPa 160MPa 120MPa 100MPa 85MPa
10-7
10-8 107
108
109
n σa l
n (MPa) mm
1010
dl/dN vs. σan l relation; (a) CG copper (n = 7.5), (b) UFG copper (n = 4.4, dashed line indicates the relation for UFG copper processed by 12 pressing cycles).
4 Conclusions
The main results of the present study can be summarized as follows:
(1) The enhancement in fatigue life due to ECAP was remarkably large in the short-life regime, yet not so pronounced in the long-life regime, as reflected by the fatigue strength of UFG copper at N = 10⁷ cycles, σw, which coincided with that of fully annealed CG copper.
(2) For CG copper, a major crack, which led to the final fracture of the specimen, was initiated from slip bands with a dimension of the grain size. For UFG copper, it was initiated from intrusions with a dimension of over ten times the grain size. For both coppers, the crack growth life from an initial size to fracture accounted for about 80% of the fatigue life of the plain specimens.
(3) At high stress amplitudes (σa > 100 MPa), the growth rate of a small crack in CG copper was larger than that in UFG copper. At σa = 80 MPa, however, the CGR of CG copper was nearly equivalent to that of UFG copper. In CG copper, the ratio of σa = 100 MPa to the tensile strength was about 0.55. In the range σa > 100 MPa, therefore, the crack in CG copper propagates under large-scale yielding, showing an accelerated CGR compared with UFG copper with its higher tensile strength.
(4) The growth rate of a small crack cannot be unified in terms of the stress intensity factor range ∆K, but it is uniquely determined by the term σaⁿl, where n is a material constant whose value was about 7.5 and 4.4 for CG and UFG copper, respectively. For CG copper, the term could estimate the growth rate of cracks propagating with dl/dN > 10⁻⁶ mm/cycle. For UFG copper, the term was applicable to cracks with extremely low growth rates (dl/dN > 10⁻⁷ mm/cycle).
Acknowledgement This study was supported by a Grant-in-Aid (20560080) for Scientific Research (C) from the Ministry of Education, Science and Culture of Japan as well as a grant from the Fundamental R&D Program for Core Technology of Materials funded by the Ministry of Commerce, Industry and Energy, Republic of Korea.
Section 9 Structural and stress analysis
Numerical simulation of structures using generalized models for data uncertainty
W. Graf, J.-U. Sickert & F. Steinigen
Department of Civil Engineering, Institute for Structural Analysis, Technische Universität Dresden, Germany
Abstract
The challenging task in computational engineering is to model and predict numerically the behaviour of engineering structures in a realistic manner. Besides sophisticated computational models and numerical procedures that map physical phenomena and processes onto structural responses, an adequate description of the available data, covering the content of the provided information, is of prime importance. Generally, the availability of information in engineering practice is limited by the available resources. Far beyond the capability to specify crisp values, data are imprecise, diffuse, fluctuating, incomplete, fragmentary and frequently expert-specified. Besides objective characteristics like randomness, available data are influenced by subjectivity to a considerable extent. This impedes the specification of probabilistic models with crisp parameter values to describe the uncertainty. By applying imprecise probabilities, objective as well as subjective components of the uncertainty can be considered simultaneously. The uncertainty model fuzzy randomness provides a sophisticated procedure for handling imprecise probabilities. Since fuzziness, randomness, and fuzzy randomness can be processed simultaneously, it is denoted a generalized uncertainty model. The models are demonstrated by means of a numerical example to emphasize their features and to underline their applicability.
Keywords: computational methods, numerical simulation, data uncertainty.
doi:10.2495/CMEM090461

1 Introduction
The load-bearing behaviour of structures during their lifetime is influenced by many static and dynamic alterations. Whether a nonlinear numerical simulation of the load-bearing behaviour leads to realistic results depends on the quality and complexity of the computational models and methods as well as on the reliability of the available input data. In most cases the available input data can only be described reliably as uncertain variables. Considering this data uncertainty in a numerical analysis requires adequate computational models for processing uncertain data. Fig. 1 shows three examples of the computation of the load-bearing behaviour of structures under consideration of uncertain data. The result of a loading test of a textile-reinforced concrete bridge is also drawn in fig. 1a) in order to validate the numerical results. The structural behaviour of an assembly of a vehicle body (fig. 1b)) of a commercial car during a crash represents a dynamic problem. As displayed in fig. 1c), the destruction of structures by blasting can also be simulated as a dynamic system under consideration of data uncertainty.
Figure 1: Fields of application of uncertain structural analysis: a) textile-reinforced concrete bridge, load p [kN/m²] versus displacement v3(1735) [m], comparing the uncertain numerical simulation with the loading test; b) vehicle body assembly during a crash; c) destruction of a structure by blasting.
2 Modeling of uncertain variables
The parameters – for geometry, material, load etc. – of the numerical simulations during the lifetime of structures are generally uncertain parameters. The following mathematical models are available to describe uncertainty (see also fig. 2), whereby fuzziness and randomness are considered as special cases of the generalized model fuzzy randomness [1]. The choice of model depends on the available data.
Figure 2: Mathematical models of uncertainty: a) randomness, described by a probability density function f(x); b) fuzziness, described by a membership function μ(x); c) fuzzy randomness, described by a fuzzy probability density function f̃(x).
The advancement of the traditional probabilistic uncertainty model enables the additional consideration of epistemic uncertainty. Thereby, epistemic uncertainty is associated with human cognition, which is not limited to a binary measure. In contrast, interval mathematics is limited to a binary assessment. Advanced concepts allow a gradual assessment of intervals. This extension can be realized with the uncertainty characteristic fuzziness, quantified by means of fuzzy set theory. If sufficient statistical data exist for a parameter and the reproduction conditions are constant, the parameter may be described stochastically. Thereby the choice of the type of the probability distribution function affects the result considerably.

2.1 Fuzzy variables

Often the uncertainty description for parameters is based on pure expert judgment or on samples which are not validated statistically. Then the description by the uncertainty model fuzziness is recommended. This model comprehends both objective and subjective information. The uncertain parameters are characterized with the aid of a membership function μ(x), see fig. 2b) and eq. (1). The membership function μ(x) assesses the gradual membership of elements x to a set. Fuzzy variables

x̃ = {(x; μ(x)) | x ∈ X};  μ(x) ≥ 0 ∀ x ∈ X   (1)

may be utilized to describe the imprecision of structural parameters directly as well as to specify the parameters of fuzzy random variables.

2.2 Fuzzy random variables

If, e.g., the reproduction conditions vary during the period of observation or if expert knowledge completes the statistical description of the data, an adequate uncertainty quantification succeeds with fuzzy random variables. The theory of fuzzy random variables is based on the uncertainty model fuzzy randomness, which represents a generalized model because it joins both stochastic and non-stochastic properties. A fuzzy random variable X̃ is defined as the fuzzy set of its originals, whereby each original is a real-valued random variable X.
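As a concrete illustration of eq. (1), a triangular fuzzy number ⟨a, b, c⟩ can be coded directly; the helper below also returns the interval bounds of an α-level set, which reappear in the α-level optimization of sect. 4. This is a sketch under the assumption of a triangular membership shape, which the text itself does not prescribe:

```python
def tri_membership(x, a, b, c):
    """Membership function mu(x) of a triangular fuzzy number <a, b, c>,
    cf. eq. (1): mu rises linearly from a to the peak b and falls to c."""
    if a < x <= b:
        return (x - a) / (b - a)
    if b < x < c:
        return (c - x) / (c - b)
    return 1.0 if x == b else 0.0

def alpha_cut(alpha, a, b, c):
    """Interval bounds [x_l, x_r] of the alpha-level set of <a, b, c>."""
    return a + alpha * (b - a), c - alpha * (c - b)
```

For example, for the fuzzy number ⟨5.7, 6.0, 6.3⟩ the α = 0 cut is the full support [5.7, 6.3], while the α = 1 cut collapses to the peak 6.0.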
Figure 3: Fuzzy realizations of a fuzzy random variable.
The representation of fuzzy random variables presented in this paper is based on the definition of fuzzy random variables according to [2]. The space of the random elementary events Ω is introduced. Here, e.g., the measurement of a structural parameter may be an elementary event ω. Each elementary event ω ∈ Ω generates not only a crisp realization, like the displayed dots, but a fuzzy realization x̃(ω) = x̃, in which x̃ is an element of the set F(ℝ) of all fuzzy variables on ℝ. Fig. 3 shows, as an example, five fuzzy realizations x̃ of a fuzzy random variable X̃. Each fuzzy variable is defined as a convex, normalized fuzzy set whose membership function μ(x) is at least segmentally continuous. As a special case, crisp realizations such as x̃4 = x(ω4) may also be considered. Accordingly, a fuzzy random variable X̃ is the fuzzy result of the mapping given by

X̃: Ω → F(ℝ)   (2)
Based on this formal definition, a fuzzy random variable is described by its fuzzy probability distribution function (fuzzy pdf) F̃(x). The function F̃(x) is defined as the set of real-valued probability distribution functions F(x), which are gradually assessed by the membership μ(F(x)). F(x) is the pdf of the original X and is referred to as a trajectory of F̃(x). As a result, a fuzzy functional value F̃(xi) belongs to each value xi, see fig. 4. Thus, F̃(x) represents a fuzzy function as defined in sect. 3.1. A fuzzy probability density function is defined accordingly:

f̃(x) = {(f(x); μ(f(x))) | f ∈ f̃};  μ(f(x)) ≥ 0 ∀ f ∈ f̃   (3)
Figure 4: Fuzzy probability density and cumulative distribution function; at each value xi the fuzzy functional value F̃(xi) lies between the α-level bounds Fα=0,l(xi) and Fα=0,r(xi), with the trajectory Fα=1(xi) at membership 1.
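The fuzzy pdf of fig. 4 can be made concrete by letting the originals be Gaussian cdfs whose mean runs through the α-cut of a triangular fuzzy mean; scanning the cut yields the left and right bounds of F̃(x) at a point. A sketch — Gaussian originals and a triangular fuzzy mean are assumptions chosen for illustration:

```python
import math

def gauss_cdf(x, mean, sd):
    """Cumulative distribution function of one Gaussian original F(x)."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def fuzzy_cdf_bounds(x, mean_tri, sd, alpha, n=200):
    """Bounds F_{alpha,l}(x) and F_{alpha,r}(x) of the fuzzy probability
    distribution function F~(x): each mean inside the alpha-cut of the
    triangular fuzzy mean <a, b, c> defines one trajectory F(x)."""
    a, b, c = mean_tri
    lo, hi = a + alpha * (b - a), c - alpha * (c - b)
    vals = [gauss_cdf(x, lo + k * (hi - lo) / n, sd) for k in range(n + 1)]
    return min(vals), max(vals)
```

At α = 0 the bounds bracket the crisp value obtained at α = 1, reproducing the funnel shape of F̃(xi) sketched in fig. 4.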
3 Modeling of uncertain functions

3.1 Fuzzy function

In the case that parameters depend on crisp or uncertain conditions, they are considered as fuzzy functions x̃(t̃) of fuzzy arguments t̃ or as fuzzy processes x̃(τ̃). The arguments may be the time τ̃, the spatial coordinates θ̃, and further parameters, e.g. temperature. A fuzzy function x̃(t̃) enables the formal description of at least piecewise continuous uncertain structural parameters in F(ℝ). The following definition of fuzzy functions is introduced. Given are
• the fundamental sets T ⊆ ℝ and X ⊆ ℝ,
• the set F(T) of all fuzzy variables t̃ on the fundamental set T,
• the set F(X) of all fuzzy variables x̃ on the fundamental set X.
Then the uncertain mapping of F(T) to F(X) that assigns exactly one x̃ ∈ F(X) to each t̃ ∈ F(T) is referred to as a fuzzy function, denoted by

x̃(t̃): F(T) → F(X)   (4)

x̃(t̃) = {x̃t = x̃(t̃) ∀ t̃ | t̃ ∈ F(T)}   (5)

In fig. 5 a fuzzy process x̃(τ) is presented, which assigns a fuzzy quantity x̃(τi) to each time τi.

Figure 5: Fuzzy process x̃(τ).
For the numerical simulation the bunch parameter representation of a fuzzy function is applied:

x̃(s̃, t) = {x̃t = x̃(s̃, t) ∀ t | t ∈ T}   (6)

For each crisp bunch parameter vector s ∈ s̃ with the assigned membership value μ(s), a crisp function x(t) = x(s, t) ∈ x̃(t) with μ(x(t)) = μ(s) is obtained. The fuzzy function x̃(t) may thus be represented by the fuzzy set of all real-valued functions x(t) ∈ x̃(t) with μ(x(t)) = μ(x(s, t)) = μ(s):

x̃(t) = x̃(s̃, t) = {(x(s, t), μ(x(s, t))) | μ(x(s, t)) = μ(s) ∀ s | s ∈ s̃}   (7)
which may be generated from all possible real vectors s ∈ s̃. For every t ∈ T, each of the crisp functions x(t) takes values which are simultaneously contained in the associated fuzzy functional value x̃(t). The real functions x(t) of x̃(t) are defined for all t ∈ T and are referred to as trajectories. Numerical processing of fuzzy functions x̃(t) = x(s̃, t) demands the discretization of their argument t in space and time.

3.2 Fuzzy random function

According to eqs. (2) and (4), a fuzzy random function is the result of the uncertain mapping

X̃(t): F(T) × Ω → F(ℝ)   (8)

Thereby, F(X) and F(T) denote the sets of all fuzzy variables on X and T, respectively [4]. At a specific point t, the mapping of eq. (8) leads to the fuzzy random variable X̃t = X̃(t). Therefore, fuzzy random functions are defined as a family of fuzzy random variables X̃t:

X̃(t) = {X̃t = X̃(t) ∀ t | t ∈ T}   (9)

For the numerical simulation, the bunch parameter representation of a fuzzy random function is again applied. For each crisp bunch parameter vector s ∈ s̃ with the assigned membership value μ(s), a real random function X(t) = X(s, t) ∈ X̃(t) with μ(X(t)) = μ(s) is obtained. The fuzzy random function X̃(t) may thus be represented by the fuzzy set of all real random functions X(t) ∈ X̃(t):

X̃(t) = X(s̃, t) = {(X(t), μ(X(t))) | X(t) = X(s, t); μ(X(t)) = μ(s) ∀ s | s ∈ s̃}   (10)

which may be generated from all possible real vectors s ∈ s̃. Thereby, every t ∈ T is simultaneously contained in the associated fuzzy random function X̃(t). The real random function X(t) ∈ X̃(t) is defined for all t ∈ T and referred to as a trajectory. Numerical processing of a fuzzy random function X̃(t) = X(s̃, t) requires the discretization of its argument t in space and time.
Computational Methods and Experimental Measurements XIV
517
:=0 :=1
x f(x)
~ X(J) J1
J2
J3
J
˜ , ). Figure 6: Fuzzy random process X( j
4 Fuzzy stochastic analysis

Fuzzy stochastic analysis is an appropriate computational model for processing uncertain data using the uncertainty model fuzzy randomness. Basic terms and definitions related to fuzzy randomness have been introduced, inter alia, by [2]. The formal description of fuzzy randomness chosen by these authors is, however, not suitable for formulating the uncertainty encountered in engineering problems. A suitable form of representation within the scope of numerical engineering problems is given by the so-called α-discretization of [1] and [3]. The numerical simulation under consideration of fuzzy variables and fuzzy functions (fuzzy analysis) may formally be described by the mapping

M(t): x̃(t) → z̃(t)   (11)

According to eq. (11), the fuzzy variables x̃ and the fuzzy functions x̃(t) are mapped to the fuzzy results z̃(t) with the aid of the crisp analysis algorithm M(t). Any arbitrary deterministic fundamental solution may be used as algorithm M(t). On the basis of point and time discretization, fuzzy functional values of the function x(s̃, t) are determined at points in space θj, at times τi, and for further parameter values. The numerical simulation is carried out with the aid of α-level optimization [3]. For the fuzzy variables and the fuzzy function x(s̃, t), the input subspace Eα assigned to the level α is formed. By applying the mapping model M(τi), the extreme values zα,l(θj, τi) and zα,r(θj, τi) of the fuzzy result variable z̃(θj, τi) are computed. These points are the interval bounds of the α-level sets and enable the numerical description of the convex membership function of the fuzzy result variable z̃(θj, τi). For the computation of z̃(θj, τi+1) at the time point τi+1, the procedure must be restarted at τ = 0 due to the interaction within the mapping model. Fuzzy stochastic analysis allows the mapping of fuzzy random input variables onto fuzzy random result variables. In the field of engineering, fuzzy stochastic analysis can be applied to static and dynamic structural analysis and to the assessment of structural safety, durability and robustness. Two different approaches for the computation of the fuzzy random result variables have been developed. The first variant (fig. 7) is based on the bunch parameter representation of fuzzy random variables [4]. The second variant utilizes the l-r representation of fuzzy random variables. The variant to be preferred depends on the engineering problem, the available uncertain data, and the desired results [5].
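The α-level optimization of eq. (11) can be sketched as a search for the extreme values of the crisp mapping M over the input subspace Eα. A plain grid search over the α-cuts of triangular fuzzy inputs — an illustrative simplification of the optimization actually required — already shows the structure:

```python
import itertools

def alpha_level_optimization(M, fuzzy_inputs, alphas, n=20):
    """For each level alpha, the alpha-cuts of the triangular fuzzy inputs
    <a, b, c> span the input subspace E_alpha; the extreme values z_l and
    z_r of M over E_alpha (here found by grid search) are the interval
    bounds of the fuzzy result z~ at that alpha-level."""
    result = {}
    for alpha in alphas:
        axes = []
        for a, b, c in fuzzy_inputs:
            lo, hi = a + alpha * (b - a), c - alpha * (c - b)
            axes.append([lo + k * (hi - lo) / n for k in range(n + 1)])
        zs = [M(*x) for x in itertools.product(*axes)]
        result[alpha] = (min(zs), max(zs))
    return result

# Hypothetical crisp mapping M standing in for a deterministic solver:
bounds = alpha_level_optimization(lambda u, v: u * u + v,
                                  [(-1.0, 0.0, 1.0), (2.0, 3.0, 4.0)],
                                  alphas=[0.0, 1.0])
```

Stacking the resulting intervals over all α-levels yields the convex membership function of z̃; in practice a more efficient optimizer replaces the grid search, since each evaluation of M is a full structural analysis.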
Figure 7: Fuzzy stochastic analysis (FSA): the fuzzy analysis encloses the stochastic analysis, which in turn encloses the deterministic computational analysis d.
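The three nested loops of fig. 7 can be sketched directly: the outer fuzzy analysis varies the bunch parameter s, the middle stochastic analysis (reduced here to plain Monte Carlo sampling, an illustrative simplification) generates random realizations, and the innermost call is the deterministic computational analysis d. The response function and the limit value below are assumptions:

```python
import random

def fsa_failure_probability(d, s_values, limit, n_samples=2000, seed=0):
    """Nested loops of fuzzy stochastic analysis (fig. 7, sketch): returns
    the failure probability P(d > limit) for each bunch parameter s, so that
    attaching mu(s) to each entry yields a fuzzy failure probability."""
    rng = random.Random(seed)
    samples = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    probs = {}
    for s in s_values:                    # fuzzy analysis: loop over bunch parameters
        fails = sum(1 for u in samples    # stochastic analysis: Monte Carlo
                    if d(s, u) > limit)   # deterministic analysis d
        probs[s] = fails / n_samples
    return probs

# Hypothetical response: displacement grows with bunch parameter s and load u.
p = fsa_failure_probability(lambda s, u: s * (1.0 + 0.3 * u),
                            [0.9, 1.0, 1.1], limit=1.3)
```

Reusing the same sample set for every s keeps the comparison between bunch parameters consistent and makes the computed failure probability monotone in s for a monotone response.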
5 Numerical example

The numerical simulation is applied to a T-beam floor construction. A section of the T-beam floor construction with two beams is shown in fig. 8. The time-dependent reliability with respect to the serviceability limit state is computed with the aid of FSA. The serviceability limit state is defined by a maximum displacement of v3 = 3.0 cm in the middle of the beam. Both fuzzy random and real random input variables are taken into consideration.

(Figure 8 data: statical system with 6.0 m span, beam height 44.0 cm, beam spacing 2.0 m, concentrated supports and nodal loads P; layered model with textile strengthening; thickness of reinforcement layers [mm]: steel 0.221 (plate longitudinal u/b), 0.050 (plate transverse u/b), 0.402 (beam b), textile 0.2; material properties [N/mm²]: textile 620 tex glass, ET = 74000, fT = 1400; concrete C 12/15; steel S 500, Es = 210000, fy = 500.)
Figure 8: Geometry, material, and FE model.

For the deterministic computational analysis of the FSA, the physically nonlinear analysis with hybrid finite folded plate elements is applied. RC structures
with textile strengthening are described appropriately with the multi-reference-plane model (MRM). The MRM is utilized to describe multi-layered composite materials with a discontinuous multi-Bernoulli kinematics [6]. An MRM element comprises k+1 layered sub-elements and k interfaces. The sub-element i with its corresponding reference plane RPi (i = 0, ..., k) is subdivided into si sub-layers (concrete and steel sub-layers, or fine-grained concrete and textile sub-layers). In order to describe the composite structure comprised of reinforced concrete and textile strengthening, different nonlinear material laws are applied to the individual sub-layers of concrete, steel, and textile. Endochronic material laws for concrete and steel are utilized for general loading, unloading, and cyclic loading processes, taking into account the accumulated material damage during the load history. In the case of cyclic loading, the textile-reinforced fine-grained concrete layers are split into sub-layers of fine-grained concrete and of textile reinforcement. The endochronic material law for concrete is adapted to the fine-grained concrete. A nonlinear elastic-brittle material law is used for the textile reinforcement.
Figure 9: Fuzzy process d̃K: deterioration dK(τ) over time τ [a], decreasing from 1 towards roughly 0.976–0.981 at τ = 100 a depending on the membership level μ(dK).
The structure is discretized by means of 156 MRM elements. The beams are modeled with 12 concrete layers and the plate part of the floor with five concrete layers. The steel reinforcement is specified as a uniaxial smeared layer in each case. Crack formation, tension stiffening, and steel yielding are taken into consideration. The T-beam floor was designed for the dead weight g of the floor construction, an additional load g1, and a live load p1. Due to a conversion, it is necessary to dimension the floor additionally for a live load p2 and point loads P. For this reason textile strengthening is applied to the underside of the construction (fig. 8). The loads g1, p1, and p2 are modeled as uniformly distributed superficial loads. The point loads P are modeled as nodal loads of magnitude P = 30 kN. The live load p1 is modeled as a real random variable (Gumbel distribution with a = 2.565 and b = 5.699) and the live load p2 as a fuzzy random variable (Gumbel distribution with the bunch parameters σp2 = 0.5 kN/m² and Ẽp2 = ⟨5.7, 6.0, 6.3⟩ kN/m²). The concrete compressive strength is specified in this case as a Gaussian distributed fuzzy random variable (σ̃c = ⟨2.0, 2.5, 3.0⟩ N/mm² and Ec = 20 N/mm²). The fine-grained concrete tensile stress σt = (0.3 + Δc)·fgc^(2/3) is modeled as a random variable by means of the Gaussian distributed parameter Δc with EΔc = 0 and σΔc = 0.01. The time-dependent reliability with respect to the serviceability limit state requires the consideration of deteriorating effects. The deterioration is influenced by several factors which are not precisely known [7]. This information deficit leads to uncertainty and, in the end, to an uncertain service life. The uncertain deterioration is simplified by the fuzzy process d̃K = e^(−κ(τ,s)), which acts on the global stiffness matrix, see fig. 9. The processes κ(τ, s) in this example are determined as κ(τ, s) = 0.0001 if τ ≤ τ0, and κ(τ, s) = 0.0001 + (1/6000)·(e^(0.02·τ·s) − e^(0.02·τ0·s)) if τ > τ0, with τ0 = 20 years and the bunch parameter s ∈ s̃ = ⟨0.9, 1.0, 1.1⟩, see fig. 9. As results of the uncertain numerical simulation, fig. 10 shows the time-dependent fuzzy reliability.
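The deterioration process d̃K = e^(−κ(τ,s)) and the piecewise definition of κ(τ, s) translate directly into code; evaluating it for the three characteristic bunch parameter values s ∈ ⟨0.9, 1.0, 1.1⟩ produces the band of trajectories sketched in fig. 9 (the constants are taken from the text as transcribed, so their absolute values should be treated with caution):

```python
import math

TAU0 = 20.0  # years; deterioration starts after tau0

def kappa(tau, s):
    """Process kappa(tau, s) of the example: constant up to tau0, then
    growing exponentially with the bunch parameter s in <0.9, 1.0, 1.1>."""
    if tau <= TAU0:
        return 0.0001
    return 0.0001 + (math.exp(0.02 * tau * s) - math.exp(0.02 * TAU0 * s)) / 6000.0

def deterioration(tau, s):
    """Fuzzy deterioration d_K = exp(-kappa(tau, s)) acting on the global
    stiffness matrix."""
    return math.exp(-kappa(tau, s))
```

A larger bunch parameter s accelerates the stiffness loss, so the s = 1.1 trajectory bounds the fuzzy process from below and s = 0.9 from above.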
Figure 10: Time-dependent fuzzy reliability: failure probability P(τ) over time τ [a] with membership μ(P).
Acknowledgement The authors gratefully acknowledge the support of the German Research Foundation.
References
[1] Möller, B.; Beer, M. Fuzzy Randomness. Uncertainty in Civil Engineering and Computational Mechanics, Springer: Berlin, Heidelberg, 2004.
[2] Puri, M.L.; Ralescu, D. Fuzzy random variables. Journal of Mathematical Analysis and Applications, 114 (1986), pp. 409–422.
[3] Möller, B.; Graf, W.; Beer, M. Fuzzy structural analysis using α-level optimization. Computational Mechanics, 26 (2000), pp. 547–565.
[4] Sickert, J.-U.; Beer, M.; Graf, W.; Möller, B. Fuzzy probabilistic structural analysis considering fuzzy random functions. In: 9th International Conference on Applications of Statistics and Probability in Civil Engineering, Rotterdam, Millpress, 2003, pp. 379–386.
[5] Möller, B.; Graf, W.; Sickert, J.-U.; Reuter, U. Numerical simulation based on fuzzy stochastic analysis. Mathematical and Computer Modelling of Dynamical Systems (MCMDS), Taylor & Francis, 13 (2007) 4, pp. 349–364.
[6] Möller, B.; Graf, W.; Hoffmann, A.; Steinigen, F. Numerical simulation of structures with textile reinforcement. Computers and Structures, 83 (2005), pp. 1659–1688.
[7] Möller, B.; Graf, W.; Sickert, J.-U.; Beer, M. Time-dependent reliability of textile strengthened RC structures under consideration of fuzzy randomness. Computers and Structures, 84 (2006), pp. 585–603.
A dynamic model for the study of gear transmissions
A. Fernandez del Rincon, F. Viadero, R. Sancibrian, P. Garcia Fernandez & A. de Juan
Dept. Structural and Mechanical Engineering, University of Cantabria, Santander, Spain
Abstract
In this work a previous model developed by the authors for the quasi-static analysis of spur gear transmissions supported by ball bearings was modified, extending its capabilities to dynamic analysis. The model combines a finite element and an analytical formulation, achieving sufficient accuracy and computational efficiency to make dynamic analysis feasible. The non-linearity associated with the contact among teeth was included, taking into account the flexibility of gears, shafts and bearings. Furthermore, parametric excitations originating both from the gears and from the bearing supports, as well as clearance, were also taken into account. An example of a simple transmission is presented, providing several results obtained using the proposed model. Nevertheless, in spite of its usefulness, particularly in the case of variable torque loads, and its improved capabilities compared with other procedures, this approach still requires a high computational effort. As a consequence, in those cases where the transmission operates under stationary conditions the formulation can be simplified by using a pre-calculated value of the gear tooth stiffness as a function of the angular position. Once again, the original model is useful, taking advantage of its computational efficiency in the calculation of these stiffness coefficients throughout a meshing period. The improved model is applied to the same transmission and the consequences of a misleading calculation of the stiffness coefficients are shown. It was then used to study the vibratory behaviour under different levels of applied torque, showing the modifications suffered by the orbits, the meshing contact forces and particularly the spectra obtained for the bearing forces.
Keywords: gear dynamics, transmission error, bearings, tooth contact.
doi:10.2495/CMEM090471
1 Introduction
The increase in the demands for more efficient and reliable gear transmissions, with higher levels of torque and speed, gives rise to a growing interest in the development of analytical tools that provide a deeper understanding of the dynamics of gear transmissions. This kind of tool can be applied to improve the dynamic performance, reducing the level of noise and vibration, but it also serves as an excellent support for developing new techniques of detection and prediction in the field of condition monitoring based on vibratory measurements. Such techniques require the set-up of a condition monitoring system that is normally based on field measurements in order to define the normal vibratory behaviour. Therefore, the development of more accurate models for gear dynamics could serve as a basis for future improvements in this technique and could also increase the capability to predict the progress of a certain fault. There is a huge number of publications related to the dynamics of gear transmissions. A good review of the work, as well as an interesting introduction to the problems involved in this kind of element, was provided by Ozguven and Houser [1]. The main phenomena involved in gear dynamics are the parametric excitation due to the variable number of meshing teeth as well as the non-linearity present as a consequence of the backlash. In this sense, Kahraman and Singh [2] propose the classification of dynamic gear models as Linear Time Invariant (LTI), Linear Time Variant (LTV), Non Linear Time Invariant (NLTI) and Non Linear Time Variant (NLTV), depending on whether or not these features are included. In summary, the most important aspect in formulating a good dynamic model for a gear transmission is the procedure followed to include the meshing contact forces. Nevertheless, there are other components in gear transmissions that present the same kind of dynamic phenomena described for gears.
These components are bearings. Bearings behave in a similar way to gears. They also have a parametric excitation, due to the variable number of rolling elements transmitting the load to the supports. However, they also present non-linearity as a consequence of the Hertzian contact and the clearance [3]. It is clear from the point of view of condition monitoring that both elements work together, and the features of both elements add up to give the final vibration signature of the transmission. Therefore, condition monitoring requires the development of dynamic models that provide an accurate description of the dynamic forces, as opposed to the most general models, where the interest is mainly focused on the determination of the resonances for design purposes. That means accurate models for gear dynamics should include both gears and bearings in the formulation, taking into account the parametric excitation and non-linearity originating from each element. Pursuing this objective, a quasi-static model including those features was presented by the authors in a previous work [4]. The procedure for gear contact force calculation employs a hybrid approach dividing the elastic deflections of the teeth into two different contributions: global deflections, including bending and
shearing, and local deflections in the vicinity of the theoretical contact points. Global deflections were obtained by means of a plane strain (or plane stress) finite element model taking into account the elastic coupling between successive teeth under load. Local deflections, instead, were approached by a non-linear formulation derived by Weber and Banaschek for two-dimensional problems. Bearings were also included, taking into account the clearance, the contact non-linearity and the variation in the number of loaded rolling elements. In this case, a two-dimensional model was developed in which only deflections of Hertzian type are considered, neglecting bending and shearing of the races and rolling elements. This model was subsequently used for the simulation of several kinds of gear defects [5]. In this paper, the model is further extended to carry out dynamic calculations. Its usefulness is demonstrated by simulating the vibratory behaviour of a simple transmission. A faster solution was achieved by pre-calculating the meshing stiffness for a given torque applied to the transmission. This procedure is compared with the original one, and the consequences of an inaccurate estimation of the meshing stiffness are analysed. Special attention was given to the resulting spectra as the load condition is modified. That is one of the main drawbacks when introducing and setting up condition monitoring systems in machinery working under variable load conditions (wind turbines, rolling mills, etc.). Different load levels produce different behaviour and, as a consequence, the alarm levels should be adapted on the basis of experimental measurements. Thus, a good model used during the design task could be a very profitable tool to extend the life of the transmission.
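The hybrid split between a linear global (finite element) compliance and a non-linear local contact deflection can be sketched as a one-unknown root-finding problem: given an imposed approach delta, find the force F whose combined global and local deflections match it. This is only a minimal illustration, not the authors' implementation; the single-exponent local law delta_loc = (F/k_nl)^(1/q), the exponent q and all numeric values below are assumptions standing in for the Weber–Banaschek formulation.

```python
def contact_force(delta, c_global, k_nl, q=10/9, tol=1e-12, it=200):
    """Solve delta = c_global*F + (F/k_nl)**(1/q) for F >= 0 by bisection.

    c_global: linear global (FE) compliance; k_nl, q: illustrative
    parameters of the non-linear local deflection law."""
    if delta <= 0.0:
        return 0.0                      # profiles separated: no contact force
    lo, hi = 0.0, delta / c_global      # at hi the global term alone equals delta
    for _ in range(it):
        mid = 0.5 * (lo + hi)
        d = c_global * mid + (mid / k_nl) ** (1.0 / q)
        if d < delta:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol * (1.0 + hi):
            break
    return 0.5 * (lo + hi)
```

Because the local term only adds deflection, the force from the global compliance alone always brackets the solution from above, so the bisection is guaranteed to converge.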
2 Model description
A simple transmission was analysed, as shown in the schema in fig. 1. This transmission is composed of a pair of twin spur gears mounted on elastic shafts that are supported by two ball bearings. Inertia is lumped at the centre of each element (gear or bearing), allowing translational and rotational movement in the plane. An additional rotational-only inertia is included at the output, and a constant rotational speed is assumed at the input. This results in a dynamic model with 19 degrees of freedom (dof).

Figure 1: Schema and dimensions of a simple transmission (three 50 mm shaft spans, shaft diameter Ø = 40 mm; bearings b11, b12, b21, b22; gears R11, R21; constant input speed Ω; output torque applied at the inertia J22).
Following the procedure described in [4], gear contact forces are obtained based on the model proposed by Andersson and Vedmar [6], where tooth deformations are divided into two groups: one near the contact, treated by an analytical non-linear formulation of Hertzian type, and another of elastic nature determined by means of a dedicated finite element model. This kind of formulation achieves an efficient treatment of gear tooth contacts, without requiring a highly refined mesh at the contact surfaces. As a consequence, the computational effort is reduced compared with conventional finite element models, allowing the analysis of dynamic problems. Contact forces in the line of action (LOA) were further improved by the inclusion of friction and damping. Friction forces were added assuming a Coulomb model with a constant friction coefficient, taking into account the reversal of this force when the contact takes place in the vicinity of the pitch point. In order to avoid numerical problems, a smoothing formulation based on the hyperbolic tangent function was adopted according to the following expression

\[ (\vec F_f)_{1i} = -F_i\, f \tanh\!\left(\frac{\vec v_{P_i(1/2)} \cdot \vec t_{1i}}{v_0}\right) \vec t_{1i}; \qquad (\vec F_f)_{2i} = -(\vec F_f)_{1i}; \tag{1} \]

where Ff1i and Ff2i are the force vectors representing the friction forces at contact i on gears 1 and 2, f is the friction coefficient, Fi is the contact force at contact i, vPi(1/2) is the relative velocity between the contacting points on each contact surface, t1i is a unit vector tangent to the contacting surfaces and v0 is a threshold level to smooth the transition when the relative velocity is null. Conventional models consider damping as an overall term accounting for all dissipative effects, neglecting the oil present between the contacts. However, the intermediate layer of oil between two approaching surfaces gives rise to a damping effect known as squeeze film damping.
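A minimal sketch of the smoothed friction law of eq. (1), assuming planar vectors; the threshold v0 and the numeric values in the example are illustrative, not taken from the paper:

```python
import math

def friction_force(F_i, f, vP, t1, v0=1e-3):
    """Coulomb friction at contact i, smoothed with tanh around zero
    sliding velocity (v0 is the smoothing threshold), as in eq. (1)."""
    vt = vP[0] * t1[0] + vP[1] * t1[1]      # sliding velocity along t1
    s = math.tanh(vt / v0)                  # smooth replacement for sign(vt)
    Ff1 = (-F_i * f * s * t1[0], -F_i * f * s * t1[1])
    Ff2 = (-Ff1[0], -Ff1[1])                # opposite force on gear 2
    return Ff1, Ff2
```

Away from the pitch point the tanh saturates and the force reduces to plain Coulomb friction; near zero sliding velocity it passes smoothly through zero, avoiding the discontinuity that troubles numerical integrators.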
In this model, this phenomenon was the only dissipative source considered for gear damping, neglecting other sources such as the hysteretic damping of the contacting surfaces. Following this assumption, the squeeze damping was defined by the following expression

\[ \vec F_{Am} = 12\pi\eta\, b \left(\frac{1}{2h}\,\frac{\chi_1 \chi_2}{\chi_1+\chi_2}\right)^{3/2} \vec v_n; \tag{2} \]

where η is the dynamic viscosity, b the gear face width, h the thickness of the lubricant film, χi the curvature radius of contacting surface i and |vn| the modulus of the relative velocity along the normal of the contacting profiles. Assuming that the oil is present around the contacting teeth, the value of h is given by the minimum distance between the contacting profiles, obtained from the position of each gear. In order to avoid the discontinuity when the lubricant thickness is null, a threshold was defined. As the model considers several potential contact points, the damping force is obtained for each one, taking into account not only the contacts on the active line of action but also those on the reverse line of action. Regarding bearings, the model proposed in [4] was kept unmodified except for the capability of including waviness and localized defects, which will not be
discussed in this work. Random relative sliding of the rolling elements with respect to the cage was also added, bearing in mind the smoothing out of the spectral content due to the ball pass frequency. Taking into account the torsional and flexural deflection of the shafts and the formulations described in the previous paragraphs for gears and bearings, the block diagram shown in fig. 2 is obtained. There, the connection between blocks can be made either by a linear translational/rotational spring with a viscous damper or by a non-linear function. Non-linear functions are represented by a double-sense arrow (green for gears and blue for bearings). The meshing force calculation includes normal contact forces, friction and squeeze damping, applying the procedure described previously. In contrast, bearing damping is added in the block diagram as an equivalent translational viscous damping with the same value for any direction in the plane of movement. The same type of damping was used for connecting shafts.
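The squeeze-film gear damping of eq. (2), with the film-thickness threshold mentioned above, can be sketched as follows; the clamp value h_min and the numbers in the test are illustrative assumptions:

```python
import math

def squeeze_damping_force(eta, b, h, chi1, chi2, vn, h_min=1e-7):
    """Squeeze-film damping force of eq. (2).

    eta: dynamic viscosity [Pa s]; b: gear face width [m]; h: oil film
    thickness [m], clamped to h_min to avoid the singularity at h = 0;
    chi1, chi2: curvature radii of the profiles [m]; vn: normal relative
    velocity [m/s]."""
    h = max(h, h_min)
    chi_eq = chi1 * chi2 / (chi1 + chi2)    # equivalent curvature radius
    return 12.0 * math.pi * eta * b * (chi_eq / (2.0 * h)) ** 1.5 * vn
```

The force is linear in the approach velocity but grows steeply as the film thins, which is why the threshold on h is needed for numerical robustness.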
Figure 2: Block diagram of a simple transmission (degrees of freedom Xibj, Yibj, Θibj and XiRj, YiRj, ΘiRj connected by stiffness/damping elements K, C and torsional elements KT, CT; non-linear bearing forces fibj; meshing force Fmesh1R12R1; input ΘIn(t), output torque TOut at Θ2J2).
Taking a reference frame with the z-axis oriented along the shaft centre line, from left to right in fig. 1, and the y-axis defined by the line between the gear centres, X and Y are the translational degrees of freedom along the x- and y-axes while Θ is the rotational degree of freedom around the z-axis. The subscript T denotes torsional properties, b means bearing and R gear. Following this nomenclature, Xibj means the displacement along the x-axis of bearing j
belonging to shaft i. The degrees of freedom associated with bearings and gears are grouped in the vectors qibj = {xibj, yibj, Θibj}T and qiRj = {xiRj, yiRj, ΘiRj}T, arriving at the following set of dynamic equations

\[\dot\theta_{In}=\omega;\]
\[m_{1b1}\ddot x_{1b1}+C_{1b1R1}(\dot x_{1b1}-\dot x_{1R1})+C_{1b1}\dot x_{1b1}+K_{1b1R1}(x_{1b1}-x_{1R1})+f_{1b1x}(q_{1b1})=0;\]
\[m_{1b1}\ddot y_{1b1}+C_{1b1R1}(\dot y_{1b1}-\dot y_{1R1})+C_{1b1}\dot y_{1b1}+K_{1b1R1}(y_{1b1}-y_{1R1})+f_{1b1y}(q_{1b1})=0;\]
\[J_{1b1}\ddot\theta_{1b1}+C_{T1J1b1}(\dot\theta_{1b1}-\dot\theta_{In})+C_{T1b1R1}(\dot\theta_{1b1}-\dot\theta_{1R1})+K_{T1J1b1}(\theta_{1b1}-\theta_{In})+K_{T1b1R1}(\theta_{1b1}-\theta_{1R1})=0;\]
\[m_{1R1}\ddot x_{1R1}+C_{1b1R1}(\dot x_{1R1}-\dot x_{1b1})+C_{1R1b2}(\dot x_{1R1}-\dot x_{1b2})+K_{1b1R1}(x_{1R1}-x_{1b1})+K_{1R1b2}(x_{1R1}-x_{1b2})+f_{1R1x}(q_{1R1},q_{2R1},\dot q_{1R1},\dot q_{2R1})=0;\]
\[m_{1R1}\ddot y_{1R1}+C_{1b1R1}(\dot y_{1R1}-\dot y_{1b1})+C_{1R1b2}(\dot y_{1R1}-\dot y_{1b2})+K_{1b1R1}(y_{1R1}-y_{1b1})+K_{1R1b2}(y_{1R1}-y_{1b2})+f_{1R1y}(q_{1R1},q_{2R1},\dot q_{1R1},\dot q_{2R1})=0;\]
\[J_{1R1}\ddot\theta_{1R1}+C_{T1b1R1}(\dot\theta_{1R1}-\dot\theta_{1b1})+C_{T1R1b2}(\dot\theta_{1R1}-\dot\theta_{1b2})+K_{T1b1R1}(\theta_{1R1}-\theta_{1b1})+K_{T1R1b2}(\theta_{1R1}-\theta_{1b2})+f_{1R1\theta}(q_{1R1},q_{2R1},\dot q_{1R1},\dot q_{2R1})=0;\]
\[m_{1b2}\ddot x_{1b2}+C_{1R1b2}(\dot x_{1b2}-\dot x_{1R1})+C_{1b2}\dot x_{1b2}+K_{1R1b2}(x_{1b2}-x_{1R1})+f_{1b2x}(q_{1b2})=0;\]
\[m_{1b2}\ddot y_{1b2}+C_{1R1b2}(\dot y_{1b2}-\dot y_{1R1})+C_{1b2}\dot y_{1b2}+K_{1R1b2}(y_{1b2}-y_{1R1})+f_{1b2y}(q_{1b2})=0;\]
\[J_{1b2}\ddot\theta_{1b2}+C_{T1R1b2}(\dot\theta_{1b2}-\dot\theta_{1R1})+K_{T1R1b2}(\theta_{1b2}-\theta_{1R1})=0;\]
\[m_{2b1}\ddot x_{2b1}+C_{2b1R1}(\dot x_{2b1}-\dot x_{2R1})+C_{2b1}\dot x_{2b1}+K_{2b1R1}(x_{2b1}-x_{2R1})+f_{2b1x}(q_{2b1})=0;\]
\[m_{2b1}\ddot y_{2b1}+C_{2b1R1}(\dot y_{2b1}-\dot y_{2R1})+C_{2b1}\dot y_{2b1}+K_{2b1R1}(y_{2b1}-y_{2R1})+f_{2b1y}(q_{2b1})=0;\]
\[J_{2b1}\ddot\theta_{2b1}+C_{T2b1R1}(\dot\theta_{2b1}-\dot\theta_{2R1})+K_{T2b1R1}(\theta_{2b1}-\theta_{2R1})=0;\]
\[m_{2R1}\ddot x_{2R1}+C_{2b1R1}(\dot x_{2R1}-\dot x_{2b1})+K_{2b1R1}(x_{2R1}-x_{2b1})+f_{2R1x}(q_{1R1},q_{2R1},\dot q_{1R1},\dot q_{2R1})+C_{2R1b2}(\dot x_{2R1}-\dot x_{2b2})+K_{2R1b2}(x_{2R1}-x_{2b2})=0;\]
\[m_{2R1}\ddot y_{2R1}+C_{2b1R1}(\dot y_{2R1}-\dot y_{2b1})+K_{2b1R1}(y_{2R1}-y_{2b1})+f_{2R1y}(q_{1R1},q_{2R1},\dot q_{1R1},\dot q_{2R1})+C_{2R1b2}(\dot y_{2R1}-\dot y_{2b2})+K_{2R1b2}(y_{2R1}-y_{2b2})=0;\]
\[J_{2R1}\ddot\theta_{2R1}+C_{T2b1R1}(\dot\theta_{2R1}-\dot\theta_{2b1})+K_{T2b1R1}(\theta_{2R1}-\theta_{2b1})+f_{2R1\theta}(q_{1R1},q_{2R1},\dot q_{1R1},\dot q_{2R1})+C_{T2R1b2}(\dot\theta_{2R1}-\dot\theta_{2b2})+K_{T2R1b2}(\theta_{2R1}-\theta_{2b2})=0;\]
\[m_{2b2}\ddot x_{2b2}+C_{2R1b2}(\dot x_{2b2}-\dot x_{2R1})+C_{2b2}\dot x_{2b2}+K_{2R1b2}(x_{2b2}-x_{2R1})+f_{2b2x}(q_{2b2})=0;\]
\[m_{2b2}\ddot y_{2b2}+C_{2R1b2}(\dot y_{2b2}-\dot y_{2R1})+C_{2b2}\dot y_{2b2}+K_{2R1b2}(y_{2b2}-y_{2R1})+f_{2b2y}(q_{2b2})=0;\]
\[J_{2b2}\ddot\theta_{2b2}+C_{T2b2J2}(\dot\theta_{2b2}-\dot\theta_{2J2})+C_{T2R1b2}(\dot\theta_{2b2}-\dot\theta_{2R1})+K_{T2b2J2}(\theta_{2b2}-\theta_{2J2})+K_{T2R1b2}(\theta_{2b2}-\theta_{2R1})=0;\]
\[J_{2J2}\ddot\theta_{2J2}+C_{T2b2J2}(\dot\theta_{2J2}-\dot\theta_{2b2})+K_{T2b2J2}(\theta_{2J2}-\theta_{2b2})=T_{Out}.\]
These equations, expressed in matrix form, give rise to the following expression

\[ [M]\ddot q + [C]\dot q + [K]q + f_b(q) + f_R(q,\dot q) = f_{Ext}(t); \qquad q = \{q_{1b1}, q_{1R1}, q_{1b2}, q_{2b1}, q_{2R1}, q_{2b2}, \theta_{Out}\}^T \tag{3} \]

The non-linear terms are included in the vectors fb and fR, while the matrices M, C and K have constant coefficients. Numerical integration of the dynamic equations was carried out combining Matlab/Simulink® tools. The general equation (3) was reformulated for implementation in Simulink, using function blocks with Matlab functions for the non-linear terms and an ode45 solver for the integration.
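The reformulation of eq. (3) for time integration can be illustrated on a single-dof analogue: the second-order equation is rewritten in first-order state form and stepped with an explicit Runge–Kutta scheme. A fixed-step RK4 stands in here for Matlab's adaptive ode45; the names and the oscillator used in the example are illustrative, not the authors' 19-dof implementation.

```python
def rhs(t, y, m, c, k, f_nl, f_ext):
    """First-order form of eq. (3) for a single dof:
    m*q'' + c*q' + k*q + f_nl(q, q') = f_ext(t)."""
    q, qd = y
    qdd = (f_ext(t) - c * qd - k * q - f_nl(q, qd)) / m
    return (qd, qdd)

def rk4(rhs_fun, y0, t0, t1, n, *args):
    """Fixed-step 4th-order Runge-Kutta (stand-in for Matlab's ode45)."""
    h = (t1 - t0) / n
    t, y = t0, tuple(y0)
    for _ in range(n):
        k1 = rhs_fun(t, y, *args)
        k2 = rhs_fun(t + h/2, tuple(yi + h/2*ki for yi, ki in zip(y, k1)), *args)
        k3 = rhs_fun(t + h/2, tuple(yi + h/2*ki for yi, ki in zip(y, k2)), *args)
        k4 = rhs_fun(t + h, tuple(yi + h*ki for yi, ki in zip(y, k3)), *args)
        y = tuple(yi + h/6*(a + 2*b + 2*c_ + d)
                  for yi, a, b, c_, d in zip(y, k1, k2, k3, k4))
        t += h
    return y
```

As a sanity check, an undamped unit oscillator integrated over one period returns to its initial displacement, which is the behaviour any correct implementation of this state-space form must reproduce.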
3 Application example
Following the model described in the previous section, a numerical example is presented here. Table 1 compiles the main physical parameters defining the gears. Each shaft is supported by a pair of 209 single-row radial deep-groove ball bearings [7], whose dimensions and features are given in Table 2. Finally, the shaft data are grouped in Table 3. The shaft mass was lumped on the gears and bearings, and a pair of couplings (represented by their stiffness and damping) were placed at the input and output of the transmission. The shaft stiffness was calculated on the basis of a constant radius and the lengths shown in fig. 1.
Table 1: Spur gear set data.
Number of teeth: 28
Module (m): 3.175 mm
Elasticity modulus: 210 GPa
Poisson's ratio: 0.3
Pressure angle: 20°
Rack addendum: 1.25·m
Rack dedendum: 1·m
Rack tip rounding: 0.25·m
Gear tip rounding: 0.05·m
Gear face width: 6.35 mm
Gear shaft radius: 20 mm
Mass (miR1): 0.7999 kg
Gear inertia (JiR1): 4.0·10⁻⁴ kg·m²
Oil viscosity: 0.004 Pa·s

Table 2: Bearing data [7].
Contact stiffness: 1.2·10¹⁰ N/m³ᐟ²
Number of balls: 9
Radial clearance: 15 µm
Outer race diameter: 77.706 mm
Inner race diameter: 52.291 mm
Inner groove radius: 6.6 mm
Outer groove radius: 6.6 mm
Ball diameter: 12.7 mm
m1b1 = m2b2: 0.490 kg
m2b1 = m1b2: 0.245 kg
J1b1 = J2b2: 9.8·10⁻⁵ kg·m²
J2b1 = J1b2: 4.9·10⁻⁵ kg·m²
Bearing damping (5%): 334.27 N·s/m

Table 3: Shafts data.
Output inertia: J2J2 = 3.56·10⁻⁴ kg·m²
Input/output torsion stiffness: KT1J1b1 = KT2b2J2 = 4.0·10⁵ Nm/rad
Input/output torsion damping (1%): CT1J1b1 = CT2b2J2 = 3.5761 Nms/rad
Shaft torsion stiffness: KTib1R1 = KTiR1b2 = 4.0·10⁵ Nm/rad
Shaft torsion damping: CTib1R1 = CTiR1b2 = 0 Nms/rad
Shaft flexion stiffness: Kib1R1 = KiR1b2 = 6.24·10⁸ N/m
Shaft flexion damping (1%): Cib1R1 = CiR1b2 = 31.6 Ns/m

4 Pre-calculation of meshing stiffness
Although the model allows dynamic simulations, it still requires a considerable amount of time to provide results. That means it could be used as an analysis tool rather than a design tool. With the aim of extending its capabilities, the original formulation was analysed in search of the most expensive computational task, arriving at the conclusion that the solution of the non-linear system of equations present in the original formulation [4] was the critical one. Consequently, this task was avoided by pre-calculating the contact stiffness for a given load by means of a previous quasi-static analysis, where the accuracy of the original model can be exploited. When the load and rotational speed are stationary and the system is not operating at resonance, this approach should be valid. In this way, the model structure
remains unchanged, including the friction and squeeze damping mechanisms, and only the meshing contact forces are modified. It should be pointed out that an incorrect calculation of the meshing stiffness does not lead to a different behaviour in the global sense, as the rms level of the vibration remains more or less the same. Nevertheless, the time record and the resulting spectra will have a different shape. The torque level is crucial in this task, as it determines the parametric excitation due to the meshing, modifying its spectral decomposition and, as a consequence, the final vibratory behaviour of the transmission. Some authors neglect this fact and propose a torque-independent stiffness based on analytical formulations [8, 9]. Furthermore, as opposed to those models, in this one the quasi-static analysis provides the stiffness for each tooth pair contact as a function of the angular position of the gear mounted on shaft 1, instead of the more common approach of a single global value. In this way, each contact can be analysed individually, giving better knowledge of how the load is shared by the tooth pairs. In order to validate the pre-calculated contact stiffness model and to analyse the consequences of an incorrect calculation, three analyses were carried out. The first used the original dynamic model with a torque of 100 Nm. The second was done under the same torque, using the pre-calculated stiffness corresponding to this torque (100 Nm). Finally, a third analysis was carried out, again with an applied torque of 100 Nm, but this time with the pre-calculated stiffness obtained under a torque of 10 Nm, which can be considered similar to the torque-independent approach. In all models the rotational speed of the input shaft was 1000 r.p.m. and the data output was recorded in a file with a sampling frequency of 75 kHz.

Figure 3: DTE obtained under different assumptions for meshing stiffness calculation (DTE [rad] over six meshing cycles; original model vs. meshing stiffness pre-calculated from 100 Nm and from 10 Nm).
For the comparison, the Dynamic Transmission Error (DTE) was selected, as it is directly related to the gear meshing behaviour. In figure 3, six meshing
cycles are presented for the original model along with those using the pre-calculated meshing stiffness. Differences are clearly apparent when the torque used for the meshing stiffness calculation is wrong. While the model with pre-calculated stiffness based on a torque of 100 Nm gives practically the same DTE as the model without pre-calculation, the model based on a torque of 10 Nm provides a completely different response, tending to overestimate the resultant DTE. In conclusion, the pre-calculation of a torque-dependent meshing stiffness provides the same results with a faster computation. Nevertheless, the torque used to calculate the meshing stiffness should agree with that used for the dynamic simulation, giving inaccurate results otherwise.
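At run time, a pre-calculated stiffness of this kind reduces to a table lookup over the angular position of gear 1, periodic over one angular pitch. A minimal sketch, with an assumed piecewise-linear interpolation and illustrative numbers:

```python
import bisect

def mesh_stiffness(theta, angles, k_values):
    """Linear interpolation of a pre-calculated meshing stiffness table.

    angles: increasing grid from 0 to one angular pitch of gear 1;
    k_values: stiffness at each grid point. The table is treated as
    periodic, so any theta is wrapped into [0, pitch)."""
    pitch = angles[-1]                      # table covers one full pitch
    x = theta % pitch
    i = bisect.bisect_right(angles, x) - 1
    i = min(i, len(angles) - 2)
    w = (x - angles[i]) / (angles[i + 1] - angles[i])
    return (1 - w) * k_values[i] + w * k_values[i + 1]
```

Evaluating this lookup replaces the non-linear contact solve at every time step, which is the source of the speed-up discussed above.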
5 Numerical simulation and discussion

Using the model with pre-calculated meshing stiffness, simulations were carried out with a constant rotational speed at the input of 1000 r.p.m. and several loads at the output, ranging from 10 Nm to 100 Nm; the data output was again recorded in a file with a sampling frequency of 75 kHz. To reduce the transient until the system reaches stationary conditions, the central position of the bearings and gears was obtained from a previous quasi-static analysis and the results were used as initial conditions for the integration. In the same way, an initial rotational velocity was imposed only on the torsion dofs. Some results are shown in the following figures. First, the orbits for each torque level are presented in figure 4, where it can be observed that the model captures the deflection of the shafts and bearings. Higher torque values shift the orbit along the LOA. The non-linear nature of the bearing forces is also visible, as the orbits tend to be closer together as the load is increased. The variable bearing compliance gives rise to a substantial enlargement of the orbit shape along the LOA, with several oscillations. At the same time, the orbit is spread in the off-line of action (OLOA) direction, as the clearance provides a lower bearing stiffness in this direction and the model allows OLOA forces when the contact takes place at the rounded tips. This effect is highlighted when friction is considered, because the low OLOA stiffness gives a wider orbit in this direction, as can be seen in figure 5, where orbits for a torque of 100 Nm with friction coefficients (f) of 0, 0.03 and 0.05 are presented. In figure 6 the meshing forces for each contact are presented for the extreme values of applied torque (10 Nm on the left and 100 Nm on the right), normalized by the corresponding static case. A lower torque gives a wider period of single contact, with important singularities when changes take place in the number of active contacts.
On the other hand, a higher torque reduces the period of single contact and appears quieter, with singularities only when the tooth begins to support the entire load.

Figure 4: Orbits for several torque levels (10 Nm to 100 Nm). Dashed line is the bearing clearance.

Figure 5: Orbit when friction is added; a) f = 0; b) f = 0.03; c) f = 0.05.

Figure 6: Normalized meshing contact forces; a) 10 Nm; b) 100 Nm.

The force transmitted towards the bearings, and its spectral decomposition, is more interesting from the point of view of condition monitoring. Bearing LOA force spectra are presented in figure 7 as a function of the applied torque, normalizing the frequency by the Gear Mesh Frequency (GMF). The ball pass frequency appears at low frequencies, and is also present as a side band around the mesh frequency and its multiples. That is not a common situation in real machinery, because of the noise and the quasi-periodic character of the bearing vibrations due to the slip of the cage. The addition of noise to the resultant signal, as well as the random angular position of the ball bearings, reduces this phenomenon, enabling the model results to approximate experimental measurements.

Figure 7: Spectrum of the bearing forces (b11) in the line of action, for applied torques from 10 to 100 Nm (frequency normalized by the GMF).

Each of the GMF harmonics follows a different path as the torque increases from 10 to 100 Nm, showing the model's capability to capture this feature. For example, at low torques the second GMF harmonic is preponderant, while at high loads the 5th GMF harmonic becomes the largest. A clearer picture of the modification of the GMF harmonics with the load can be seen in the bar chart presented in figure 8, where only the first five harmonics of the GMF are considered.
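For reference, the characteristic frequencies named here follow directly from the data of Tables 1 and 2. This sketch assumes a zero contact angle and approximates the bearing pitch diameter by the mean of the race diameters; both are simplifications not stated in the paper.

```python
import math

def gmf_hz(n_teeth, rpm):
    """Gear mesh frequency in Hz."""
    return n_teeth * rpm / 60.0

def bpfo_hz(n_balls, rpm, d_ball, d_pitch, alpha=0.0):
    """Ball pass frequency over the outer race, inner race rotating,
    contact angle alpha (assumed zero here)."""
    fs = rpm / 60.0
    return 0.5 * n_balls * fs * (1.0 - (d_ball / d_pitch) * math.cos(alpha))

# Tables 1-2 data: 28 teeth, 9 balls, 12.7 mm balls, 1000 rpm input;
# pitch diameter approximated as the mean of the race diameters
print(gmf_hz(28, 1000))                               # ~466.7 Hz
print(bpfo_hz(9, 1000, 12.7, (77.706 + 52.291) / 2))  # ~60.3 Hz
```

With these numbers the ball pass frequency falls well below the GMF, consistent with it appearing at the low end of the spectra and as side bands around the mesh harmonics.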
Figure 8: Amplitude of the first 5 GMF harmonics (1X to 5X) of the bearing force (b11) on the line of action, for applied torques from 10 to 100 Nm.
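Reading harmonic amplitudes such as those of figure 8 out of a simulated force record amounts to evaluating the spectrum at multiples of the GMF. A minimal single-frequency DFT sketch on synthetic data; the amplitudes and record length are illustrative, not the paper's results:

```python
import cmath, math

def harmonic_amplitude(signal, fs, freq):
    """Amplitude of a single frequency component via one-bin DFT."""
    n = len(signal)
    s = sum(x * cmath.exp(-2j * math.pi * freq * k / fs)
            for k, x in enumerate(signal))
    return 2.0 * abs(s) / n

# synthetic 1 s force record at the paper's 75 kHz sampling rate:
# 10 N at 1x GMF plus 3 N at 2x GMF (illustrative values)
fs, gmf = 75000.0, 466.67
x = [10 * math.sin(2 * math.pi * gmf * k / fs)
     + 3 * math.sin(2 * math.pi * 2 * gmf * k / fs) for k in range(75000)]
print(harmonic_amplitude(x, fs, gmf))      # ~10 N
print(harmonic_amplitude(x, fs, 2 * gmf))  # ~3 N
```

Evaluating the DFT only at the GMF multiples of interest is cheaper than a full FFT when just a handful of harmonics, as in figure 8, are tracked against load.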
6 Conclusions
A model for the dynamic analysis of a gear transmission supported by bearings was presented. The model is based on an efficient formulation and solution of the meshing contact forces, together with a non-linear model for the bearings. In order to improve the computation time, a pre-calculated value of the meshing stiffness was used, simplifying the resulting system of equations. The torque must be selected carefully in order to obtain accurate results, and special attention should be paid when the load cannot be considered stationary, using the original model in those cases. The model features were demonstrated on a simple transmission, paying particular attention to the variation of the bearing force spectrum with the load and showing the capability of the model to capture this behaviour.
Acknowledgements This paper has been developed in the framework of the Project DPI2006-14348 funded by the Spanish Ministry of Science and Technology.
References

[1] H. N. Ozguven, D. R. Houser, Mathematical models used in gear dynamics – a review, Journal of Sound and Vibration, 121(3), pp. 383-411, 1988.
[2] A. Kahraman, R. Singh, Interactions between time-varying mesh stiffness and clearance non-linearities in a geared system, Journal of Sound and Vibration, 146(1), pp. 135-156, 1991.
[3] S. Fukata, E. H. Gad, T. Kondou, T. Ayabe, H. Tamura, On the radial vibrations of ball bearings (computer simulation), Bulletin of the JSME, 28, pp. 899-904, 1985.
[4] F. Viadero, A. Fernández del Rincón, R. Sancibrian, P. García Fernández, A. de Juan, A model of spur gears supported by ball bearings, WIT Transactions on Modelling and Simulation, 46, pp. 711-722, 2007.
[5] A. Fernández del Rincón, F. Viadero, R. Sancibrian, P. García Fernández, A. de Juan, Esfuerzos de contacto en engranajes exteriores de dientes rectos con defectos, soportados mediante rodamientos, 8º Congreso Iberoamericano de Ingeniería Mecánica, CIBIM8, Cuzco (Perú), 2007.
[6] A. Andersson, L. Vedmar, A method to determine dynamic loads on spur gear teeth and on bearings, Journal of Sound and Vibration, 267(5), pp. 1065-1084, 2003.
[7] T. A. Harris, Rolling Bearing Analysis, John Wiley & Sons Inc., 2001.
[8] F. Chaari, W. Baccar, M. S. Abbes, M. Haddar, Effect of spalling or tooth breakage on gearmesh stiffness and dynamic response of a one-stage spur gear transmission, European Journal of Mechanics A/Solids, 2008.
[9] S. Wu, M. J. Zuo, A. Parey, Simulation of spur gear dynamics and estimation of fault growth, Journal of Sound and Vibration, 2008.
Long-term behaviour of concrete structures reinforced with pre-stressed GFRP tendons

J. Fornůsek¹, P. Konvalinka¹, R. Sovják¹ & J. L. Vítek²
¹Experimental Centre, Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic
²Department of Masonry and Concrete Structures, Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic
Abstract

Nowadays, composite materials are used ever more often in every part of industry, including civil engineering. The use of these composite materials in civil engineering is innovative, and many questions about them remain unanswered; the relaxation of glass fibre reinforced polymer (GFRP) tendons in pre-stressed concrete is one of them. Knowledge of the long-term behaviour of pre-stressed GFRP tendons is very important for correct design: underestimating the long-term changes in the GFRP tendons can lead to serious problems or to the collapse of a structure. This paper presents two long-term experiments. One of them is the relaxation of pre-stressed GFRP tendons and the second is the creep of a concrete slab reinforced with pre-stressed GFRP tendons. The first experiment shows that the relaxation of pre-stressed GFRP tendons is very high. A GFRP tendon was pre-stressed up to 37% (237.9 MPa) of its tensile strength (654.0 MPa). The decrease of tensile stress when the experiment was closed (after 132 days) was about 10.5%. Based on the experimental data, a numerical viscoelastic model consisting of Kelvin links was developed. The modulus of elasticity of the fibres and matrix was determined by the nanoindentation method; the other parameters were fitted from the experimental data. The chosen numerical model corresponds very well with the experimental data, but for the best outcome a longer experiment should be carried out. The numerical model and the fitting of parameters were implemented in MATLAB 2007a. The creep test of the slab shows the long-term behaviour of a structure reinforced with GFRP tendons; it was ended after one year. A concrete slab pre-stressed with GFRP tendons was subjected to a four-point loading test with a constant load. During the year, deflections and strain were recorded and hence the creep curve is plotted. Keywords: GFRP, creep, relaxation, long-term behaviour, pre-stressing.
WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line) doi:10.2495/CMEM090481
1 Introduction
Composite materials are being used more and more frequently in all parts of industry, and civil engineering is no exception. The use of these non-traditional materials raises many questions that should be answered. No national design guidelines exist for these structures in the Czech Republic. This gap leads to very limited use of, and mistrust in, this type of reinforcement; experiments are trying to fill it with new knowledge. The long-term behaviour of composite reinforcement, and of structures reinforced with it, is one of the most important topics for the correct design of structures, as underestimating the long-term behaviour can lead to serious problems during the life of the structure.
2 Characteristics of composite reinforcement
Composite reinforcement for the internal reinforcing of concrete structures consists of fibres and a matrix (FRP – fibre reinforced polymers). The fibres are the main carriers of the mechanical properties. Nowadays three different types of fibres are used – aramid (AFRP), carbon (CFRP) and glass (GFRP). The matrix usually consists of vinylester, epoxy or other polymers. There are many differences between common steel reinforcement and composite (FRP) reinforcement, and using FRP reinforcement has advantages and disadvantages. The most outstanding advantage is its immunity to corrosion in an aggressive environment. Other advantages are the low weight, high tensile strength and transparency to electromagnetic radiation (GFRP). These qualities can be exploited for structures in very aggressive environments, for example for the restoration of structures and for antenna covers. On the other hand there are a few disadvantages, such as the low modulus of elasticity, fragility and, not least, the high purchase price. The low elasticity modulus means high deflections and deformations of FRP reinforced structures; this effect can be reduced by pre-stressing the structures. The diameter of the GFRP tendons used was 14 mm. There were some differences between the quoted and the measured modulus of elasticity.

Table 1: Basic characteristics of GFRP tendons (values quoted by PREFA KOMPOZITY a.s. / measured in the Experimental Centre, FCE).
Modulus of elasticity: 32.14 GPa / 41.11 GPa
Tensile strength: 654.66 MPa / > 600 MPa (maximum load of the machine)
Thermal expansion: 5·10⁻⁶ K⁻¹ / 7.5·10⁻⁶ K⁻¹
The glass vs. matrix ratio was 73:27 (c1 = 0.73; c2 = 0.27).
3 Characteristics of the GFRP tendons used
The GFRP tendons were made by PREFA KOMPOZITY, a.s., Brno, Czech Republic. The basic characteristics of the tendons are shown in table 1.
4 Creep test of a concrete slab pre-stressed with GFRP tendons
4.1 Properties of the concrete slab The concrete slab was cast in January 2008. The GFRP tendons were prestressed before casting, therefore the so called pre tensioned concrete slab was performed. The scheme of the slab is shown in figure 1. The dimensions of the slab were 4500 x 600 x 200 mm (L x W x H). The slab was designed as the simple beam. There were two pairs (four sticks) of vibrating wire (VW) strain gages embedded in the concrete slab. The first pair was embedded in the end of the slab and the second pair was embedded in the middle of the slab. One VW strain gage of the pair was embedded on the upper side of the slab and the second on the lower side. Pre-stressing was applied 14 days after casting. The strength of the pre-stressing force was circa 32 kN in each tendon (which means tension circa 215 MPa). Three displacement sensors were placed under the slab for measuring of deflection. One of them was placed in the middle and the other under the concentrated loads. Four cylinders were also cast as referential samples for measuring of the shrinkage and temperature effects at the same time. In each cylinder there was an embedded VW strain gage. Two of the cylinders were loaded at the same time as the slab. 4.2 Creep experiment behaviour Load was applied 28 days after casting. The weight of each concentrated load (F) was 1000 kg, which means a force of 10 kN. Regular measurement of the strain and deflection was accomplished since the load was set. The duration of the experiment was designed as one year. The time dependent graph of the strain is shown in figure 2 and the time-deflection graph of the slab is shown in figure 3. The graph of strain of the cylinders is shown in figure 4. The deflection of the slab is going to be stabilized on the value of about 20 mm in the middle. After the end of this experiment the measured relaxation of the GFRP tendons and the creep of the slab should be unified into the numerical model. 
This model should verify whether the relaxation of the GFRP tendons, together with a numerical creep model of the concrete, corresponds to the creep measured on the slab.
5 Relaxation of the GFRP tendon
The long-term behaviour of GFRP tendons has not been thoroughly investigated so far, so a test of time-dependent tension losses was set up. This experiment is closely linked with the creep test. It was designed to run for one year but was cut short after about 4 months because of insufficient space in the experimental hall.

Figure 1: Scheme of the concrete slab pre-stressed with GFRP tendons.

5.1 Stressing bed for measuring the time-dependent tension decrease

A special stressing bed for measuring the relaxation was designed in the Experimental Centre. The main features of the stressing bed are shown in figure 6. The construction of the stressing bed takes into account the slip of the pre-stressed GFRP tendon in the steel anchors as well as the effect of temperature. The principle of the stressing bed is very simple. The GFRP tendon with steel anchors (on both sides) is placed into the stressing bed. One end is
Figure 2: Time-strain graph of the slab (strain [µm/m] against time [days]; upper and lower gages at the support and at the middle of the slab).

Figure 3: Time-deflection graph of the slab (deflection [mm] against time [days]; under the left weight, in the middle and under the right weight).
placed on two load cells and the other end is held in a movable part. The movable part is joined to the fixed profile with a threaded bar, and the pre-stress is applied by rotating a nut on this bar.

5.2 Experimental procedure (relaxation test)

The 5400 mm long GFRP tendon was pre-stressed in the stressing bed to 37% (239.7 MPa) of its tensile strength (654.7 MPa), which corresponds to a force of about 36.6 kN. Once the requested load was reached, the strain was held constant at 0.662% (35.73 mm). Measurements were taken at regular intervals: short ones at the beginning of the experiment (about 5-10 minutes) and longer ones towards the end (2-4 days). The loss of tension was 3.29% after the first 24 hours, 7.3% after 28 days and 10.5% at the end of the experiment after 132 days, when it had to be stopped because of the lack of space. The measured data are shown in figure 5.
6 Nanoindentation of the GFRP bars used
The GFRP tendon consists of two components: glass fibre and a vinylester matrix. To obtain a good viscoelastic model it is necessary to determine the characteristics of each component. The nanoindentation method was chosen for the determination of the moduli of elasticity. This method is based on indenting the material to micrometre depths while measuring force and penetration depth simultaneously [7]. The elasticity moduli of the GFRP components determined in this way are:

Eglass = 52.58 GPa, Ematrix = 5.45 GPa.

Figure 4: Creep and shrinkage of the cylinders (strain [µm/m] against time [days]; loaded cylinders 1 and 2, unloaded cylinders 3 and 4).
Using the rule of mixtures

EGFRP = cglass Eglass + cmatrix Ematrix,  (1)

where ci is the content of component i, the modulus of the composite is EGFRP = 39.85 GPa. This value is very close to the value from table 1 measured in the Experimental Centre, so the modulus of elasticity of the GFRP tendons used is about 40 GPa. Figure 7 shows the incisions left by the nanoindentation tip in a glass fibre.
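As a quick numerical check, eq. (1) can be evaluated in a few lines. The fibre volume fraction used below (0.73) is an assumption chosen to reproduce the reported EGFRP, since the actual contents ci are not stated here.

```python
def rule_of_mixtures(fractions, moduli):
    """Effective modulus of a composite from component contents c_i and moduli E_i (eq. (1))."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "contents must sum to 1"
    return sum(c * e for c, e in zip(fractions, moduli))

# Moduli from the nanoindentation tests (GPa); the fibre content 0.73
# is an assumed value, not taken from the paper.
E_glass, E_matrix = 52.58, 5.45
c_glass = 0.73
E_gfrp = rule_of_mixtures([c_glass, 1 - c_glass], [E_glass, E_matrix])
print(round(E_gfrp, 2))  # -> 39.85
```

With this assumed fibre content the mixing rule reproduces the reported composite modulus of about 40 GPa.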
7 Viscoelastic model
The viscoelastic model is based on the theory of linear viscoelasticity [5] and was developed to fit the experimental data. The model consists of two parallel Kelvin chains (figure 8), one for the glass fibre and one for the matrix. The first two parameters of the model (Eglass, Ematrix) were determined with the nanoindentation method (chapter 6). The other parameters (Eglass,i, ηglass,i, Ematrix,i, ηmatrix,i) were determined with the program GAFPF (Genetic Algorithm Function Parameters Finder), which was written especially for this task. The basic equations (2)-(4) [5] lead to a system of differential equations, which was solved with the fourth-order Runge-Kutta method [9]. The relaxation of the model was calculated with eq. (5). The final approximation of the measured data by the viscoelastic model, i.e. the model outcome compared to the measured data against time, is shown in figure 9.

Figure 5: Decrease of stress against time (stress [MPa] against time [days]; the temperature effect and the slip in the anchor were taken into account).
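The computation described above (a Kelvin chain integrated with a fourth-order Runge-Kutta scheme) can be sketched as follows. This is a simplified single-chain illustration with invented parameters, not the fitted GAFPF model.

```python
# Sketch: one Kelvin chain (a spring E0 in series with N Kelvin units
# (E_i parallel to a dashpot eta_i)) relaxing under a constant total
# strain eps0, integrated with classical 4th-order Runge-Kutta.

def relax(E0, units, eps0, t_end, n_steps):
    """Return the stress history sigma(t) of a Kelvin chain held at strain eps0."""
    eps = [0.0] * len(units)                      # strain in each Kelvin unit

    def deriv(eps):
        sigma = E0 * (eps0 - sum(eps))            # series spring carries the stress
        return sigma, [(sigma - E_i * e) / eta_i  # d(eps_i)/dt for each unit
                       for (E_i, eta_i), e in zip(units, eps)]

    h = t_end / n_steps
    history = []
    for _ in range(n_steps):
        sigma, k1 = deriv(eps)
        history.append(sigma)
        _, k2 = deriv([e + 0.5 * h * k for e, k in zip(eps, k1)])
        _, k3 = deriv([e + 0.5 * h * k for e, k in zip(eps, k2)])
        _, k4 = deriv([e + h * k for e, k in zip(eps, k3)])
        eps = [e + h / 6.0 * (a + 2 * b + 2 * c + d)
               for e, a, b, c, d in zip(eps, k1, k2, k3, k4)]
    return history

# Illustrative parameters only (moduli in GPa, viscosities in GPa*day):
stress = relax(E0=52.6, units=[(30.0, 50.0), (80.0, 500.0)],
               eps0=0.00662, t_end=132.0, n_steps=2000)
# stress relaxes monotonically from E0*eps0 towards its long-term value
```

The fitted model in the paper uses two such chains in parallel (glass and matrix) with the relaxation combined through eq. (5).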
Figure 6: Stressing bed for pre-stressing and FRP relaxation tests.
Figure 7: Incisions in glass fibre from the nanoindentation tip.
Figure 8: Viscoelastic model (two parallel Kelvin chains: springs Eglass, Eglass,1, Eglass,2 with dashpots ηglass,1, ηglass,2 for the glass fibre, and springs Ematrix, Ematrix,1, Ematrix,2 with dashpots ηmatrix,1, ηmatrix,2 for the matrix).
εglass(t) = εglass,1(t) + εglass,2(t) + σglass(t)/Eglass,  (2)
εmatrix(t) = εmatrix,1(t) + εmatrix,2(t) + σmatrix(t)/Ematrix,  (3)
ε(t) = εglass(t) = εmatrix(t),  (4)

with the initial conditions at t = 0: εglass,1 = εglass,2 = 0 and εmatrix,1 = εmatrix,2 = 0. The relaxation of the whole tendon is obtained from the relaxation functions of the components as

RGFRP(t) = c1 Rglass(t) + c2 Rmatrix(t).  (5)

Figure 9: Comparison of experimental data and viscoelastic model.

8 Conclusion
The main objective of this paper was the determination of the long-term behaviour of GFRP tendons. The following conclusions were drawn from the experimental results:
- The modulus of elasticity quoted by the manufacturer was about 20% lower than that measured in the Experimental Centre.
- The nanoindentation method determined the elasticity moduli of the glass fibres and the matrix: Eglass = 52.58 GPa and Ematrix = 5.45 GPa.
- The GFRP tendon relaxed very markedly. The tendon was pre-stressed to 37% of its tensile strength; the loss of tension after the first 24 hours was about 3.3%, after 28 days 7.3% and at the end of the experiment (132 days) 10.5% (figure 5). The experiment was ended too early to obtain more significant results; ideally it should run for about two years or more.
- The viscoelastic model corresponds to the long-term behaviour observed during the experiment. However, it is impossible to make predictions for the longer term because of the short duration of the experiment.
Acknowledgements This research has been supported by the Ministry of Industry and Trade under no. 1H-PK2/57, the Ministry of Education of the Czech Republic under no. MSM 6840770031 and an internal grant of the CTU in Prague under no. CTU0801611.
References
[1] ACI 440.4R-03, Prestressing Concrete Structures with FRP Tendons, American Concrete Institute: Farmington Hills, MI, 2003.
[2] Bažant, Z.P. & Jirásek, M., Inelastic Analysis of Structures.
[3] Bittnar, Z. & Šejnoha, J., Numerical Methods of Mechanics 1 (in Czech), CTU: Prague, 1992.
[4] Dupák, J., Fornůsek, J., Konvalinka, P., Litoš, J. & Sovják, R., Measurement and testing line for composite reinforcement in concrete, utility design request, Prague, 2008.
[5] Jirásek, M. & Zeman, J., Material Deformation and Failure (in Czech), CTU: Prague, 2006.
[6] Phan-Thien, N., Understanding Viscoelasticity - Basics of Rheology, Springer: Berlin - Heidelberg, 2002.
[7] Savková, J. & Bláhová, O., Nanoindentation measurement of thin layers - principles, methods, effects (in Slovak), Research Centre - New Technology, University of West Bohemia, 2006.
[8] Šejnoha, M. & Zeman, J., Micromechanical Analysis of Composites, CTU: Prague, 2002.
[9] Vitásek, E., Numerical Methods (in Czech), SNTL: Prague, 1987.
[10] The MathWorks, Inc., MATLAB 2007a Help, The MathWorks, Inc.: Natick, MA, 2007.
WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line)
Application of the finite element method for static design of plane linear systems with semi-rigid connections
D. Zlatkov, S. Zdravkovic, B. Mladenovic & M. Mijalkovic
Civil Engineering and Architectural Faculty of Nis, Serbia
Abstract
In real constructions, particularly in prefabricated ones, structural connections can be neither absolutely rigid nor ideally elastic, but semi-rigid, which significantly changes the stresses and strains in the structure. Hence, there is a need to carry out the structural analysis and design taking into account the level of rigidity of the connections. For that purpose the ratio between the real and the absolutely rigid fixing of the member ends is assumed to be between zero and one. The design procedure for structures with semi-rigid connections under static load based on the classical deformation method has already been described in our previous works. Having in mind that the matrix formulation of a problem is more convenient for contemporary structural analysis, it is applied to the design of the considered systems and is described in detail in this paper. The formation of the stiffness matrix for a bar with semi-rigid connections, as well as of the vector of equivalent load, is shown to depend on the level of rigidity of the joint connections. These matrices can be introduced into well-known computer programs to modify them for the static design of a plane linear system whose connections are semi-rigid. A numerical example regarding a two-floor reinforced concrete frame with a span of 24 m of the AMONT prefabricated structural system in Morava Krusce, Serbia, is included in the paper. Keywords: semi-rigid connections, stiffness matrix, interpolation function, vector of equivalent load, prefabricated reinforced concrete structural system.
WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line) doi:10.2495/CMEM090491
1 Introduction
Many worldwide researches, based on numerical simulations and experimental results, have indicated that a great number of connections of members in joints of linear systems can be classified neither as ideally pinned nor as absolutely rigid. The results of an integral project devoted to attesting the static and dynamic stability of typified modules of the reinforced concrete (RC) AMONT structural system of industrial halls, realized in IZIIS, Skopje, Macedonia (Ristic et al. [5]), together with the Institute of Civil Engineering and Architectural Faculty at Nis, confirmed this statement. It has been noticed from the tests performed that the level of rigidity of connections is of great importance, particularly in the case of precast structures, because even a low level of rigidity in precast connections affects the redistribution of action effects. With the purpose of including the real rigidity of connections in the structural design (Milicevic et al. [2]), the ratios between the real and the absolutely rigid fixing of the member ends are assumed to be μik and μki (0 ≤ μik, μki ≤ 1), where:

μik = φ*ik / φi,  μki = φ*ki / φk.  (1)

With this assumption about the level of rigidity, the design procedure for structures with semi-rigid connections is developed using the deformation method. The classical formulation of the deformation method for systems with standard connections has already been described (Djuric and Jovanovic [1]), as well as for systems with semi-rigid connections (Milicevic and Zdravkovic [2]). In this paper, the deformation method is applied in matrix formulation taking into account assumption (1), and the derivation of the stiffness matrix and the vector of equivalent load has been carried out using the variational procedure.
2 Matrix analysis of systems with semi-rigid connections
Matrix analysis can be considered as a special case of the finite element method, a well-known method of numerical structural analysis. In the matrix formulation of the force method and the deformation method, the basic element is a member treated as a one-dimensional finite element. The system is discrete, composed of member elements which are interconnected at discrete points (the joints of the system). Fig. 1 shows the simplest model of a straight prismatic member of length l and constant cross-section area, exposed to bending in the plane xOy of the local coordinate system. The cross-section moment of inertia is I and the modulus of elasticity of the material is E. If the influence of axial forces on the deformation of the member is neglected, the generalized displacements are the transversal displacements (vi, vk) and the rotations (φi, φk) of the member ends; thus the element has four degrees of freedom, two at each joint. The generalized forces are the shear forces Ti, Tk and the bending moments Mi, Mk at the joints i and k. The convention for the positive directions of displacements and forces is presented in fig. 1.
Figure 1: Generalized displacements and forces at the member ends.
The relation between the vector of generalized forces and the vector of generalized member displacements is

R = k q + Q,  (2)

where

q^T = [q1 q2 q3 q4] = [vi φi vk φk],
R^T = [R1 R2 R3 R4] = [Ti Mi Tk Mk],
Q^T = [Q1 Q2 Q3 Q4] = [Ti* Mi* Tk* Mk*],

k = | k11 k12 k13 k14 |
    | k21 k22 k23 k24 |
    | k31 k32 k33 k34 |
    | k41 k42 k43 k44 |

are the generalized displacement vector, the generalized force vector, the equivalent load vector and the member stiffness matrix, respectively. Besides the direct determination of the stiffness matrix and the equivalent load vector based on the clear geometric and physical meaning of their elements, the variational procedure of derivation, based on the stationarity of the member potential energy, is very often used in matrix analysis. In the case of a straight member bending in a plane, the relationship between the displacement v(x) of any point of the member axis and the displacement parameters at the member ends is most easily obtained starting from the homogeneous differential equation of bending

EI d⁴v(x)/dx⁴ = 0,  (3)

whose solution can be written as a polynomial of the third order:

v(x) = α1 + α2 x + α3 x² + α4 x³.  (4)
550 Computational Methods and Experimental Measurements XIV Coefficients i (i=1,2,3,4) are defined from boundary conditions at the member ends. Interpolation functions in the shape of Hermit’s polynomials determined for a fixed-end member are given in Sekulovic 3. In the case of a member with semi-rigid connections at the i and k ends, the interpolation functions (2) can be derived from differential equation (1) and the boundary conditions. When unit translation q1=1 is applied to the joint i of a member, while all other generalized displacements are equal to zero, what follows can be written according to fig. 1. b 1 *ik ik ( 1 ik ) ki ik ; aik l
b 1 *ki ki ( 1 ki ) ik ik aki l
(5)
where ik and ki are the rigidity level of joint connections at the ends of a member, which can be determined numerically or experimentally.
Figure 2:
Physical meaning of the elements of stiffness matrix.
The boundary conditions for semi-rigid connections at the ends i and k of member ik, from which the coefficients αi (i = 1, 2, 3, 4) appearing in expression (4) are determined, are written in the following form (for q1 = 1):

v(x)|x=0 = vi = 1:  α1 = 1,
φ(x)|x=0 = φ*ik:  α2 = φ*ik,
v(x)|x=l = vk = 0:  α1 + α2 l + α3 l² + α4 l³ = 0,
φ(x)|x=l = φ*ki:  α2 + 2 α3 l + 3 α4 l² = φ*ki.  (6)

Then, according to (4), applying separately the unit displacement q1 = 1 of joint i, the unit rotation q2 = 1 at end i, the unit displacement q3 = 1 of joint k and the unit rotation q4 = 1 of joint k, while all other generalized displacements are equal to zero, the interpolation functions N*1, N*2, N*3 and N*4 can be determined. The interpolation functions N*m (m = 1, ..., 4) represent Hermite polynomials of the first order and their diagrams are shown in fig. 2. In the limit case when a
member is rigidly connected at its ends i and k (rigidly fixed-end member), that is μik = μki = 1, expressions (8) take the already known values (Sekulovic [3]). The matrix of interpolation functions can be written in the following form:

N* = [N*1(x) N*2(x) N*3(x) N*4(x)],  (7)

where the interpolation functions N*m(x) (m = 1, ..., 4) are third-order polynomials in x whose coefficients depend on the fixity factors μik, μki and the member length l (expressions (8)); for μik = μki = 1 they reduce to the standard Hermite polynomials of a fixed-end member.
The interpolation function N*m(x) represents the elastic line of a semi-rigidly fixed-end member due to the generalized displacement qm = 1 (m = 1, 2, 3, 4), while all other generalized displacements are qn = 0, n ≠ m.

2.1 Stiffness matrix of a semi-rigidly fixed-end member

The stiffness matrix of a semi-rigidly connected member is obtained from the second derivatives of the interpolation functions:

k = EI ∫0l N*''(x)^T N*''(x) dx,  i.e.  kmn = EI ∫0l N*m''(x) N*n''(x) dx,  (9)

where the elements kmn (m, n = 1, ..., 4) are obtained as expressions (10): functions of EI, the member length l and the fixity factors μik and μki, which for μik = μki = 1 reduce to the well-known stiffness coefficients of a rigidly fixed-end member.
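As a sanity check of eq. (9), the integral can be evaluated numerically in the rigid limit μik = μki = 1, where the interpolation functions are the standard Hermite polynomials; the recovered entries should match the classical beam values 12EI/l³, 6EI/l², 4EI/l and 2EI/l. The values of E, I and l below are illustrative.

```python
# Numerical evaluation of k_mn = EI * int_0^l N_m''(x) N_n''(x) dx
# in the rigid limit, using the standard Hermite shape functions.

def hermite_dd(x, l):
    """Second derivatives of the four Hermite shape functions."""
    return [
        -6.0 / l**2 + 12.0 * x / l**3,   # N1''
        -4.0 / l + 6.0 * x / l**2,       # N2''
         6.0 / l**2 - 12.0 * x / l**3,   # N3''
        -2.0 / l + 6.0 * x / l**2,       # N4''
    ]

def stiffness(EI, l, n=2000):
    """Midpoint-rule integration of eq. (9) over [0, l]."""
    h = l / n
    k = [[0.0] * 4 for _ in range(4)]
    for i in range(n):
        dd = hermite_dd((i + 0.5) * h, l)
        for m in range(4):
            for p in range(4):
                k[m][p] += EI * dd[m] * dd[p] * h
    return k

k = stiffness(EI=1.0, l=2.0)
# expect k[0][0] = 12*EI/l^3 = 1.5, k[1][1] = 4*EI/l = 2.0, k[0][1] = 6*EI/l^2 = 1.5
```

The semi-rigid case only changes the shape functions; the same quadrature recovers the elements (10) once N*m'' includes the fixity factors.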
When the effect of axial forces on the deformation is taken into account, the stiffness matrix of a semi-rigidly fixed-end member can be written as follows:

    | EF/l   0    0   -EF/l   0    0  |
    |       k11  k12    0    k13  k14 |
k = |            k22    0    k23  k24 |  (11)
    |                  EF/l   0    0  |
    | sym.                   k33  k34 |
    |                             k44 |
2.2 Vector of equivalent load

In the matrix analysis of structures, the external action along individual members can be replaced by concentrated loads at the member ends. In the case of bending in a plane, the load vector components are presented in fig. 3; they are equal to the negative values of the reactions of the elastically fixed member under the given external action, which can be a load normal to the member axis as well as a temperature difference between the upper and lower surfaces of the member:

Q* = [Q1 Q2 Q3 Q4]^T = -[Ti*(o) + Ti*(t), M*ik(o) + M*ik(t), Tk*(o) + Tk*(t), M*ki(o) + M*ki(t)]^T.  (12)

Figure 3: Vector of equivalent load.
The vector of equivalent load for a member with semi-rigid end connections is

Q* = ∫0l p(x) N*(x)^T dx.  (13)
For example, in the case of a uniform load, where p(x) = p = const, it can be written in the following form:

Q* = p ∫0l [N*1(x) N*2(x) N*3(x) N*4(x)]^T dx.  (14)

After integration, the equivalent load vector (15) is obtained as a function of p, l and the fixity factors μik and μki; in the limit case of a rigidly fixed-end member (μik = μki = 1) it reduces to the well-known values

Q* = [pl/2, pl²/12, pl/2, -pl²/12]^T.  (15)
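Eq. (13) can be checked numerically in the same rigid limit: integrating the Hermite shape functions against a uniform load should recover the classical fixed-end values pl/2 and pl²/12. The values of p and l below are illustrative.

```python
# Numerical evaluation of Q_m = int_0^l p * N_m(x) dx for a uniform
# load p, using the standard (rigid-limit) Hermite shape functions.

def hermite(x, l):
    """The four Hermite shape functions of a fixed-end member."""
    t = x / l
    return [1 - 3 * t**2 + 2 * t**3,   # N1
            x * (1 - t)**2,            # N2
            3 * t**2 - 2 * t**3,       # N3
            x * (t**2 - t)]            # N4

def equivalent_load(p, l, n=2000):
    """Midpoint-rule integration of eq. (13) for p(x) = p = const."""
    h = l / n
    return [sum(p * hermite((i + 0.5) * h, l)[m] * h for i in range(n))
            for m in range(4)]

Q = equivalent_load(p=10.0, l=3.0)
# expect approximately [p*l/2, p*l^2/12, p*l/2, -p*l^2/12] = [15.0, 7.5, 15.0, -7.5]
```

Replacing the shape functions by the semi-rigid N*m(x) yields the general vector (15) as a function of μik and μki.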
For other cases of loading, such as a linearly distributed load, a concentrated force or a concentrated moment, the vectors of equivalent load are derived in the same way (Zlatkov [4]). In the case of a constant temperature difference along the member axis, the equivalent load vector is obtained as follows:
Q*t = (EI αt Δt / h) ∫0l N*''(x)^T dx,  (16)

where Δt is the temperature difference between the lower and upper surfaces, h is the depth of the cross section and αt is the coefficient of thermal expansion. After integration, the equivalent temperature load vector (17) is obtained as a function of the fixity factors μik and μki; its limit values for μik = μki = 1 and for μik = 1, μki = 0 correspond to the known thermal end reactions of a fixed-end member and of a propped cantilever, respectively.
3 Some results of the static design of a system with semi-rigid connections
The derived expressions for the stiffness matrix and the equivalent load vectors can be incorporated in available software in order to enable the structural design of linear systems with semi-rigid connections. In this paper the well-known computer program STRESS, intended for the linear elastic analysis of plane or space structures, is applied to illustrate the theoretical approach presented above. The design of structures with semi-rigid connections using STRESS differs from the standard procedure only in the commands containing data about members, i.e. it is necessary to form the stiffness matrices of the members. Instead of giving the cross-section characteristics of prismatic members (area and moments of inertia for the main axes) through the command MEMBER PROPERTIES PRISMATIC, the properties of a member are described through the command STIFFNESS GIVEN, in which case the basic stiffness matrix elements for semi-rigidly connected members are input directly.
The load cannot be included by the command MEMBER LOADS as load distributed along the member length; it has to be represented as equivalent end load and input by the command MEMBER END LOADS in the following form:

M START FORCE X 1 Y 2 Z 3 MOMENT X 4 Y 5 Z 6
M END FORCE X 1 Y 2 Z 3 MOMENT X 4 Y 5 Z 6

where M is the notation of a member, FORCE X, Y, Z are the end forces in the directions x, y, z with their numerical values 1, 2, 3 and MOMENT X, Y, Z are the end moments in the directions x, y, z with their numerical values 4, 5, 6.
Figure 4: Static scheme of a RC frame in the AMONT prefabricated structural system.
The real frame structure under real loading (presented in fig. 4) is considered as an illustration of this application to frames with semi-rigid connections. This is a two-floor RC frame of the AMONT prefabricated structural system, Morava Krusce, Serbia, with a span of 24 m, a column cross section of 50x50 cm and beam cross sections as shown in fig. 4. From the results of tests of the AMONT connections (Ristic [5]), it has been found that the connection of column to foundation is almost absolutely rigid, so the level of rigidity μ61 = μ72 = μ83 = 1 (fixed-end member) is adopted in the numerical example, while the connection of beam to column behaves as 75% fixed, so μ12 = μ21 = μ23 = μ32 = μ45 = μ54 = μ41 = μ53 = μ27 = 0.75 is adopted. The computer program MKS2 was written for the calculation of the elements of the basic stiffness matrix for various combinations of the levels of rigidity of connections at the member ends according to (10) (Zlatkov [4]). The stiffness submatrices required as input data in STRESS (Stankovic [6]) for all members of the system (shown in fig. 4) are:

k1* = k2* = [580833.3 0 0; 0 1606.5 9639.4; 0 9639.4 82547.2],
k3* = [490974.2 0 0; 0 1770.2 21242.1; 0 21242.1 364022.7],
k4* = k6* = [195403.0 0 0; 0 25837.3 56196.1; 0 56196.1 162968.8],
k5* = [195403.0 0 0; 0 20997.5 42147.1; 0 42147.1 122226.6],
k7* = k8* = [2266666.7 0 0; 0 32751.4 56684.8; 0 56684.8 141712.0].
The equivalent load vector components for members 1 and 2 are:
Q*12 = Q*21 = Q*23 = Q*32 = 330.93 kN, M*12 = M*21 = M*23 = M*32 = 558.444 kNm,
and for member 3:
Q*45 = Q*54 = 277.92 kN, M*45 = M*54 = 937.98 kNm.
A diagram of bending moments for the adopted rigidity levels of connections of the chosen AMONT frame is shown in fig. 5; the values of bending moments for the case of members absolutely fixed in the joints are shown in brackets. The differences between the two models are evident and show that a calculation allowing for the real rigidity of connections is justified, because it is closer to the real behaviour of the structure.
Figure 5: Diagram of bending moments for μ = 1.0 and μ = 0.75 (values in brackets are for μ = 1.0).

4 Conclusion
In this paper the stiffness matrix and the vector of equivalent load for systems with semi-rigid connections are derived using the variational procedure on the basis of the deformation method. These terms can be incorporated in available software in order to enable the structural design of linear systems with semi-rigid connections. The well-known computer program STRESS, intended for the linear elastic analysis of plane or space structures, is applied to illustrate the presented theoretical approach. A numerical example of a model with semi-rigid connections, representing a real frame structure under real loading, is considered. Comparing the obtained static and deformation values with those for the model with standard connections, significant differences are observed, meaning that care has
to be taken in practical civil engineering design, particularly in the case of precast RC systems. The presented procedure enables a designer to choose a model with a different level of rigidity at each member end in a joint, which can be determined experimentally, and to find out what the redistribution of the internal forces looks like depending on the actually achieved level of rigidity of the connections. This is particularly useful in the case of a very important structure, which can be tested after construction with the aim of finding out its real behaviour, including the levels of rigidity of its connections. The results of the tests can then be used for designing a more realistic model for further analysis in the case of accidental loading, such as seismic forces, or in the case of a change of the purpose of the construction, when it is economically justified.
Acknowledgement This research is supported by the Ministry of Science of the Republic of Serbia, within the framework of the project Experimental and Theoretical Research of Real Connections at Reinforced Concrete and Composite Structures under Static and Dynamic Loading, No 16001, for the period 2008–2010.
References
[1] Djuric, M. & Jovanovic, P., Teorija okvirnih konstrukcija, Gradjevinska knjiga: Beograd, 1972.
[2] Milicevic, M. & Zdravkovic, S., Uticaj stepena krutosti veze na velicinu kriticnog opterecenja i promenu naprezanja u linijskim sistemima. Simpozijum Nova tehnicka regulativa vo gradeznoto konstrukterstvo, pp. D8-1-D8-7, Skopje, 1986.
[3] Sekulovic, M., Matrix Analysis of Structures, Gradjevinska knjiga: Beograd, 1991.
[4] Zlatkov, D., Analiza linijskih sistema sa polukrutim vezama stapova u cvorovima, pp. 1-233, Gradjevinski fakultet Nis, 1998.
[5] Ristic, D., Micov, V., Zisi, N. & Dimitrovski, T., Attesting of Static and Dynamic Stability of the Typified Modulus of a Hall Program of the Precast RC Structural System AMONT-Krusce, IZIIS: Skopje, 1998.
[6] Stankovic, S. & Djordjevic, D.J., Stress, programski sistem za staticki proracun inzenjerskih konstrukcija - uputstvo za koriscenje programa, Trion Computers: Nis, 1991.
[7] Milicevic, M., Zdravkovic, S. & Zlatkov, D., Matrix analysis of systems with semi-rigid connections of members. Proceedings of the 21st Yugoslav Congress of Theoretical and Applied Mechanics, pp. 265-268, Nis, 1995.
[8] Milicevic, M., Zdravkovic, S., Zlatkov, D. & Kostadinov, B., Matrix formulation of design and testing of structures with semi-rigid connections. Structural Engineers World Congress, p. 266, San Francisco, California, 1998.
Blade loss studies in low-pressure turbines - from blade containment to controlled blade-shedding
R. Ortiz¹, M. Herran²,³ & H. Chalons²
¹Onera, DADS/CRD, Structural and Dynamic Strength Unit, Lille, France
²Turbomeca-Safran Group, Methods and Development Tools for Mechanical Design and Analysis, Bordes, France
³Université de Lyon, CNRS, Insa-Lyon, LaMCoS, Villeurbanne, France
Abstract
Activities performed in the "Structural Design and Dynamic Strength" research unit of ONERA-Lille mainly aim at analysing the behaviour of aeronautical structures under dynamic transient loadings (crash, impact, explosion, etc.). In this frame, recent studies have been performed in collaboration with TURBOMECA (French turbine manufacturer) to evaluate and develop numerical methodologies for the analysis of "blade-shedding" events in low-pressure turbines. Blade-shedding is part of a safety process to preserve turbine disks from burst in the event of an over-speed regime; it consists in generating controlled blade ruptures in order to decrease the centrifugal loads sustained by the disks. Such a procedure however leads, on the one hand, to impacts of the released blades onto the turbine rings and containment shields and, on the other hand, to unbalanced transient centrifugal loads in the turbine, which create severe loads at the engine mounting components and bolted assemblies. Keywords: blade-off, low pressure turbine, Finite Element, Explicit code, blade shedding.
1 Introduction
These engine mounting loads constitute dimensioning parameters for the engine structure and are estimated with a whole engine test. The proposed works WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line) doi:10.2495/CMEM090501
therefore consist in evaluating the capacity of explicit Finite Element (FE) methodologies to handle such a dynamic event, with specific emphasis laid upon the understanding of the blade rupture process, up to the impact with the containment shield, and the prediction of the loads supported by the engine mounting components. The ultimate objective is to contribute to reducing the duration and cost of the turbine design phase (including the suppression or simplification of certification engine tests). The simulations performed so far have already identified some needs in terms of model improvements, in order to better predict numerically the blade-shedding loads transmitted to the engine mounting points, which represent a key point in the turbine certification process. A full-scale 3D FE model of a recent turbine and its structural environment (protection panel, engine mounting components, etc.) has then been developed with the Europlexus code for feasibility purposes: the blade trajectories after rupture, their impact on and penetration through the turbine rings, and their stopping by the containment shield can now be studied.
2 State of the art
The prediction of the engine mounting loads using the finite element method requires a specific study of the mechanical phenomena occurring in the “blade shedding” phase, together with a simulation of the engine behaviour. In the literature, the problem has no direct equivalent, because Turbomeca is the first manufacturer to use this technology; the nearest problem is the blade-out event on the free turbine. Blade-out is defined as the release of one blade at the maximum defined rotational speed. The mechanical problem is then to demonstrate that no high-energy debris is ejected from the engine and that the deceleration of the unbalanced rotor, due to the blade loss, does not lead to hazardous behaviour of the engine. For example, Sun [1] has recently simulated the transient rotor dynamics due to the loss of one blade and investigated the effect of thermal growths on the ball bearings.

First, a blade-shedding engine test was analysed in order to understand the scenario of the phenomenon. The analysis [2] gives a global idea of the different sequences of blade shedding: the release of all the blades takes a few milliseconds, while high vibration levels in the structure and loads on the mountings are observed for a few tenths of a second. Nevertheless, the sequence of blade release has not been identified and remains one of the main unknowns of the problem; the transient rotor dynamics therefore needs to be simulated. In the past, Guilhen et al. [3] proposed a Newmark algorithm with α = 1/2 and β = 1/2 to solve the problem, together with a modal reduction to save computation time. More recently, work [4] has been carried out to model the transient rotor dynamics with beam elements in the Europlexus explicit finite element code; it allows, on the one hand, simulating the mounting loads due to the transient unbalance (caused by the loss of a blade) and, on the other hand, identifying the rotor/stator contact which leads to the failure of the blade.
Thanks to this comprehension of the rotor dynamics, the scenario of blade release can be identified, or defined with better confidence. On the other hand, the impact of the blades on the stator parts remains a major mechanical problem. In the past, the main subject was the dimensioning of the containment shield: a number of investigators have performed experimental studies and created analytical models of the rotor burst and containment process (for example [5] and [6]). In order to reduce the time and cost of development of new engines, the Finite Element Method is now used and makes it possible to predict, comparatively, the containment ability of the shield [7, 8]. Herran et al. [9] also conducted simulations of blade impact in order to define the model required for the fragmentation of the blade. In that work, modelling the blade profiles with thick shells led to the assumption of no deformation through the thickness of the material, and precluded refinement of the profile. Even if a choice of volume (solid) elements to model these blades would seem more appropriate, this thick-shell blade modelling is retained, mainly to avoid overly fine meshes that would lead to excessive computation times. This simplification was assumed to be acceptable. These simulations thus allow us to define a methodology for FE modelling of the free turbine, which ONERA has incorporated into a recent model of a helicopter engine.
Figure 1: Plastic strain of two blades on a “blade shedding” shield [9].
In the present article, the work focuses on modelling the majority of the engine with volume and shell elements, and on solving the problem with the explicit Newmark algorithm implemented in the Europlexus software. The simulation is conducted over 10 ms in order to represent the whole of the highly non-linear phase.
3 Blade shedding modelling
3.1 Dynamic non-linear analysis
The Europlexus code used in this paper is well adapted to fast dynamic phenomena, taking into account geometric non-linearities (large displacements, large rotations, large strains) and material non-linearities (plasticity, viscoplasticity, etc.). Europlexus is an explicit dynamic software package using a Newmark time integration scheme. At each time step, the calculation proceeds as follows:
- The values u^n, v^n, a^n are initially known (the time is t^n).
- Velocity calculation with a predictor scheme:

    v^{n+1/2} = v^n + (Δt^n / 2) · a^n                     (1)

- Displacement calculation for the next time step:

    u^{n+1} = u^n + Δt^n · v^{n+1/2}                       (2)

- Internal force calculation using the behaviour law.
- Acceleration calculation for the next time step:

    a^{n+1} = M^{-1} (F_ext^{n+1} − F_int^{n+1})           (3)

- Final velocity calculation:

    v^{n+1} = v^{n+1/2} + (Δt^n / 2) · a^{n+1}             (4)
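The four updates (1)–(4) can be written as one function. The sketch below is a pedagogical illustration of this half-step explicit Newmark scheme, not the Europlexus implementation; `internal_force` stands in for the behaviour-law evaluation and `mass` for a lumped mass.

```python
def explicit_newmark_step(u, v, a, dt, mass, f_ext, internal_force):
    """One step of the half-step explicit Newmark scheme, eqs. (1)-(4)."""
    v_half = v + 0.5 * dt * a            # (1) velocity predictor
    u_new = u + dt * v_half              # (2) displacement update
    f_int = internal_force(u_new)        # behaviour law
    a_new = (f_ext - f_int) / mass       # (3) acceleration from equilibrium
    v_new = v_half + 0.5 * dt * a_new    # (4) final velocity
    return u_new, v_new, a_new

# Usage on a single linear oscillator (k = 4 N/m, m = 1 kg, u0 = 1 m);
# the scheme is stable here because dt is well below the stability limit.
k, m, dt = 4.0, 1.0, 0.01
u, v = 1.0, 0.0
a = -k * u / m
for _ in range(1000):
    u, v, a = explicit_newmark_step(u, v, a, dt, m, 0.0, lambda x: k * x)
```

For a linear oscillator this scheme keeps the total energy bounded near its initial value, which is a quick sanity check of an implementation.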
The contact forces are obtained by the penalty method. The major difficulty in this work is to find the most judicious compromise between the mesh size of the model and the time step: in fast dynamics, the stable time step shrinks with the smallest element size, so computation times grow rapidly as the mesh is refined. We will therefore focus mainly on conducting the full calculation for the first non-linear phase (around 10 ms).

3.2 FE methodology

3.2.1 Mesh rules
The CRD unit of the DADS department has extensive experience in computing crash and impact at high speed. The implementation methodology is to build a CAD model adapted to future needs such as mesh size; in this context, the blade-loss calculation becomes easier. In order to correctly represent the bending stresses with volume elements, it is necessary to discretize the thickness with at least three elements. For small thicknesses, the elements would become too small, resulting in calculations too long for a simple parametric study. That is why the structure is mainly meshed with shell elements, and bricks are used only where the thickness is large, or to add missing mass. The screws were not represented with their full (real) geometry; beam elements are used instead. They model the mechanical links in terms of tension and torsion efforts (BSHT and BSHR Europlexus elements). These kinds of beam elements are also used to model the shaft bearings; the rotor is linked to the stator model with a beam element.

Table 1: Mass and gravity centre position (CAD/FE model).

                          Engine part       FE model   CAD      %
Mass (g)                  Part 1 (Fig. 2)   8900       9330     -4.6
Mass (g)                  Part 2 (Fig. 2)   8509       8815     -3.5
CG (Z, rotor axis) (mm)   Part 1            1046       1047.5   -
CG (Z, rotor axis) (mm)   Part 2            652.8      651.8    -
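The mesh-size/time-step compromise discussed above follows from the stability (Courant) condition of explicit schemes: the time step must stay below the time a stress wave needs to cross the smallest element. A rough sketch with illustrative steel-like values (these numbers are assumptions, not data from the paper, and real codes use per-element criteria):

```python
import math

def critical_time_step(l_min, young, density, safety=0.9):
    """Courant-type stability estimate for an explicit dynamic code:
    dt <= l_min / c, with c the dilatational wave speed sqrt(E/rho).
    Simplified 1D estimate for illustration only."""
    c = math.sqrt(young / density)       # wave speed, m/s
    return safety * l_min / c            # stable time step, s

# Steel-like values: E = 210 GPa, rho = 7800 kg/m^3, 1 mm smallest element.
dt = critical_time_step(1.0e-3, 210e9, 7800.0)
```

With these values the stable step is below 0.2 µs, which is why refining thin regions with solid elements quickly makes a 10 ms simulation expensive.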
3.2.2 Mass and inertia analysis
The next figures show two complex engine parts, and Table 1 compares the real mass and gravity centre position of the CAD model with those of the FE model.
Figure 2: Example of two FE pieces of the complete engine.
The difference between the complete CAD model and the FE model is acceptable: lower than 5% with respect to the mass, gravity centre position and moments of inertia.

3.2.3 Engine mounting load methodology and calculation

3.2.3.1 FE model of the complete engine
Figure 3 shows the complete model mesh used for the “blade-shedding” calculation.
Figure 3: Complete FE model of the engine.
The complete FE model contains:
- 254 278 nodes,
- 760 DST3 (3-node shell elements),
- 83 439 Q4GS (4-node shell elements),
- 91 768 CUB8 (8-node hexahedral brick elements),
- 19 584 SHB8 (8-node hexahedral thick-shell elements for the blade profiles),
- 10 BSHT (spring elements used specifically for the bearings),
- 388 POUT (beam elements).

3.2.3.2 Engine mounting loads modelling
This section presents the method used to calculate the mounting loads during the “blade shedding” phenomenon. The engine is linked to the helicopter by an isostatic mounting defined by two connecting rods at the front of the engine and a CV joint at the rear. Figure 4(a) shows the front mounting modelled with hexahedral elements and the rear linking tube meshed with shell elements.
Figure 4: Front (a) and rear (b) mountings FE modelling.
A kinematic rigid-element relationship then defines a “master” node at each mounting; this “master” node represents the average position of the mounting (Figure 4(b)). A beam element is connected at one end to this “master” node, and at the other end to the helicopter mounting node. In the case of an engine mounting, this node is clamped, so the helicopter loads are overestimated. For the rear mounting, the CV joint is realized by a user beam element (BSHT element) which allows freedom of rotation and provides the forces for this mounting; the bending moment on the mounting is then deduced.

3.2.3.3 Blades-loss method: “blade shedding”
For the blade-loss and complete FE simulation, the methodology is as follows:
- first perform a prestressing calculation (shaft screw) with damping;
Figure 5: Prestressing calculation and centrifugal forces calculation.
Indeed, to ensure the cohesion of the rotating parts of the rotor, a pretension stress is applied by a tie-bolt. The tie-bolt is set during the assembly of the shaft and ensures the cohesion between the disks and the shaft; it is put in tension with a screw (Figure 5). The remaining elastic force of the tie-bolt ensures perfect disk–shaft cohesion. This assembly is essential because, if the tie-bolt were to break, the rotating parts would separate. The shaft is therefore modelled with volume elements, in order to apply the prestressing method correctly and to take the gyroscopic effects into account. It is worth noting that using a dynamic code with an explicit algorithm for an axial prestressing calculation (with a continuous mesh) constitutes a unique and original method. This method can detect the opening of the coupling (no tension on the border elements) and the plastification of the rotor, and takes into account all the effects on the rotor dynamics of the free turbine.
- restart with the displacements, stresses and hardening parameters of the first prestressing calculation;
- apply centrifugal forces, with damping, to the whole structure: a calculation verifying the stability of the model is performed before starting the calculations of rotor rotation and centrifugal forces (Figure 5);
- apply the rotor initial velocity: the same restart method used for the stabilisation calculation is applied, so the calculation restarts with the final conditions of the previous calculation; these conditions are now the initial conditions used to simulate the blade-loss and “blade shedding” phenomenon (Fig. 6);
- perform a “blade shedding” calculation with a Turbomeca released-blade scenario.
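The “apply centrifugal forces” step above amounts to loading every lumped nodal mass with m·ω²·r directed radially outward from the rotation axis. A minimal sketch (the numbers are illustrative assumptions, and this is not the Europlexus input syntax):

```python
import math

def centrifugal_forces(nodes, omega):
    """Radial loads m * omega^2 * r for a rotor spinning about the z axis.
    `nodes` holds (mass_kg, x_m, y_m) tuples; returns (fx, fy) per node."""
    forces = []
    for m, x, y in nodes:
        r = math.hypot(x, y)
        if r == 0.0:                      # node on the axis: no radial load
            forces.append((0.0, 0.0))
            continue
        f = m * omega ** 2 * r            # magnitude of the radial force
        forces.append((f * x / r, f * y / r))
    return forces

# A 10 g lumped mass at radius 0.1 m, shaft at an assumed 50 000 rpm:
omega = 50000.0 * 2.0 * math.pi / 60.0    # rad/s
(fx, fy), = centrifugal_forces([(0.010, 0.1, 0.0)], omega)
```

Even this small mass carries a radial load of roughly 27 kN at that speed, which shows why the damped stabilisation step before releasing the blades matters.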
Figure 6: Sequential and simultaneous blades-loss calculation.
Figure 7: Engine mounting, bearing loads and rotor displacement (sequential and simultaneous blades-off).
A simultaneous blade-loss calculation is also performed, for comparison, to observe the significant bearing efforts in the case of a non-symmetrical blades-off (Figure 6).

3.3 Engine mounting efforts
The transient response of the engine during blade shedding has been analysed numerically, and the efforts in the engine mounting components have been specifically post-processed. The calculation was run for 10 to 13 ms, which is relevant compared with the sequential blades-off duration of only 5 ms. Figure 7 shows the different loads calculated in the shaft bearings and bolted assemblies; an important unbalance can be observed. A first comparison with measurements has shown an acceptable correlation. The ratio between the sequential blades-off case and the simultaneous case is close to 10 for the front engine mountings and only 1.3 for the rear linking; with respect to the shaft bearing, this ratio is 5. The rotor displacement can also be observed in figure 7, with the same comparison against a simultaneous blade-off. The displacement of the rotor with a sequential blade-loss sequence is significantly higher than in the simultaneous case (ratio close to 9). During this first non-linear phase, the rotor loads are introduced by the transient unbalance due to the sequential blades-off.
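The large ratio between the two scenarios can be understood as vector cancellation: blades released simultaneously and symmetrically produce unbalance contributions that largely cancel, whereas a sequential release leaves a large net rotating force. A sketch with purely illustrative numbers (not engine data):

```python
import math

def net_unbalance(released, omega):
    """Resultant rotating force from released blades, each described by
    (mass_kg, cg_radius_m, angular_position_rad): each blade contributes
    m * omega^2 * r along its own angular direction."""
    fx = sum(m * omega ** 2 * r * math.cos(th) for m, r, th in released)
    fy = sum(m * omega ** 2 * r * math.sin(th) for m, r, th in released)
    return math.hypot(fx, fy)

omega = 3000.0                                          # rad/s, illustrative
blades = [(0.02, 0.12, 2 * math.pi * k / 8) for k in range(8)]
f_sequential_start = net_unbalance(blades[:1], omega)   # first blade released
f_simultaneous = net_unbalance(blades, omega)           # all eight at once
```

The single released blade produces a net force of tens of kN, while the symmetric simultaneous release cancels almost exactly; real engines fall between these idealised extremes.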
4 Conclusion
The complete methodology to simulate the blade shedding phenomenon has been achieved in this work, using the dynamic explicit code Europlexus. This particular mechanical protection, naturally coupled with the classical electronic protection, is used in case of overspeed. Furthermore, the simulation was conducted taking the shaft assembly into account, taking care to simulate the prestressing due to the tie-bolt. A comparison between two simulations was performed, one reproducing a sequential blades-off and the other simulating a simultaneous loss of the blades. Even if the second simulation is of limited practical interest, it helps us to observe the structure loads produced by a significant unbalance. The work presented in this paper allows us to analyse the dynamic response of a turboshaft engine during blade shedding. Finally, the rotor unbalance load depends on the blade-off sequence. An FE model using an explicit Newmark time algorithm allows one to observe the first non-linear phase. Recently, Turbomeca has used a simplified model to analyse the high vibration levels up to 200 ms, taking the gyroscopic effects into account [3]. All the theoretical results obtained in this work will require comparison with tests in future work.
Acknowledgements
The authors gratefully acknowledge Turbomeca for its support. This work takes place in the framework of the MAIA mechanical research and technology program sponsored by CNRS, ONERA and the SAFRAN Group.
References
[1] Sun, G., Palazzolo, A., Provenza, A., Lawrence, C. & Carney, K., Long duration blade loss simulations including thermal growths for dual-rotor gas turbine engine. Journal of Sound and Vibration, 316, pp. 147-163, 2008.
[2] Guilhen, P.M., Berthier, P., Ferraris, G. & Lalanne, M., Instability and unbalance response of dissymmetric rotor bearing systems. ASME J. Vibr. Ac. Str. Rel. Des., 110, July 1988.
[3] Herran, M., Rapport annuel de thèse: Prédiction des efforts aux attaches lors d’un blade shedding [Annual PhD report: prediction of mounting loads during blade shedding]. Tech. Rep., Turbomeca, 2008.
[4] Herran, M., Nélias, D. & Ortiz, R., Implementation of rotor dynamics effects into the Europlexus code for the prediction of transient dynamic loading of engine mountings due to blade shedding unbalance. Proceedings of ASME Turbo Expo 2009: Power for Land, Sea and Air, June 8-12, 2009, Orlando, USA.
[5] Mangano, G.J., Studies of engine rotor fragment impact on protective structures. AGARD Conf. Proc., Impact Damage Tolerance of Structures, 41st Meeting of the Structures and Materials Panel, 186, 1975.
[6] McCarthy, D., Types of rotor failure and characteristics of fragments. Workshop held at the Massachusetts Institute of Technology, NASA CP-2017, pp. 65-92, 1977.
[7] Ng, K.Y., Turbine rotor burst containment analysis using MSC/Dytran, an analytical approach to predicting primary containment. MSC 1996 World Users' Conference Proceedings.
[8] Kraus, A. & Frischbier, J., Containment and penetration simulation in case of blade loss in a low pressure turbine. Proceedings of the DYNAmore LS-DYNA Forum 2002, 19-20 September, Bad Mergentheim, 2002.
[9] Herran, M., Chalons, H., Nélias, D. & Ortiz, R., Modélisation de l’impact d’une pale contre un blindage lors du “blade shedding” [Modelling of the impact of a blade against a shield during blade shedding]. Colloque Vibrations Chocs et Bruit, Lyon, France, 2008.
Section 10 Industrial applications
Finding the “optimal” size and location of treatment plants for a Jatropha oil plantation project in Thailand
J. E. Everett
Emeritus Professor of Information Management, School of Business, The University of Western Australia, Australia
Abstract
Curcas Energy is developing a sustainable energy project in Thailand to produce Jatropha oil as a biological source of diesel fuel. The company is providing Jatropha seeds and advice to smallholder farmers. The crop is being grown as hedgerows on otherwise unused land. The nuts will be transported from widely distributed collection points to central plants where the oil will be extracted. The extraction plants can be any integer multiple of a basic unit, with larger plants having advantages of scale that have to be weighed against the lesser transport costs to smaller, more widely distributed treatment plants. This paper describes a modelling tool, developed in Excel, which enables planners to compare the costs and benefits of alternative plant locations and sizes. The collection point locations and estimated production rates are entered into the model, together with potential treatment plant locations, costs and capacities, and transport costs per kilometre. The model computes the preferred plant destination for each collection point, after taking into account production rates and treatment capacities, and plots a map of all the collection points and plants, colour-coded to indicate the destination plant for each collection point. The planner can iteratively adjust the plant locations, capacities and costs to explore the wide range of suggested alternatives to be considered.
Keywords: transport, sustainable energy, Jatropha, biofuel, Thailand.
1 Introduction
Curcas Energy, an Australian company based in Perth, is developing a sustainable energy project to produce Jatropha oil in Thailand. The Jatropha tree originated in South America but is now found widely distributed throughout tropical Asia. The trees, which grow to about three metres, can be grown on marginal land, producing poisonous nuts containing about thirty per cent oil. The nuts are often called “candle-nuts”, and are used for this purpose in some Asian countries. The oil can be extracted and used as a biological source of diesel fuel [1, 2], and provides an excellent feedstock for aviation fuel [3, 4]. The waste can be used for generating electricity or converted into fertiliser or agrichar. Smallholding farmers throughout the country are being supplied with Jatropha seeds and cultivation expertise. It is envisaged that the Jatropha trees will be grown as hedgerows and on wasteland, and will not be using land that could otherwise be used for growing food crops. So, unlike palm oil, the use of Jatropha as a fuel source does not compete with food resources or food-producing land. The farmers will deliver their nut crop to local collection points. From the collection points, the nuts will be transported to treatment plants. The treatment plants can be any integer multiple of a basic processing unit. Larger treatment plants can be run at lesser unit cost, but generally involve greater transport distances. This paper describes a modelling tool, developed in Excel, which enables planners to compare the costs and benefits of alternative plant locations and sizes. The collection point locations (northing and easting) and estimated production rates are entered into the model, together with potential treatment plant locations, costs and capacities, and transport costs per kilometre.
WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press, www.witpress.com, ISSN 1743-355X (on-line), doi:10.2495/CMEM090511
The model computes the preferred (least-cost feasible) plant destination for each collection point, after taking into account production rates and treatment capacities, and plots a map of all the collection points and plants, colour-coded to indicate the destination plant for each collection point. The planner can iteratively adjust the plant locations, capacities and costs to explore the wide range of alternatives to be considered. The problem could be posed as a quadratic optimisation problem, or could adapt one of a number of the standard transportation models described in management science textbooks, such as Winston and Albright [5]. However, it was chosen to design a heuristic model, realised in Excel using Visual Basic (VBA) code. This approach has the advantage that the model is readily comprehensible to the user, and adaptable, so that the user can explore the universe of possibilities instead of being constrained to an “optimum” solution which may ignore a range of criteria not explicit in the model. In the author’s experience, this interaction between the user and a heuristic spreadsheet model can greatly facilitate the development of a useful solution, satisfying both the explicit criteria of the model and the user’s implicit, sometimes subjective, criteria.
2 The model
One frequent criticism of using spreadsheets for computation is that data and computations are not well separated, and equations may easily be overwritten without the user realising. This objection is overcome by using VBA macros instead of placing equations in the cells to carry out the computations. A further safeguard is obtained by protecting the workbook, with only the data entry cells being unlocked, so that vital parts of the worksheets cannot be inadvertently overwritten.

The model comprises an Excel workbook with two worksheets. One, named “CollectPts”, defines the collection point locations and reports their allocation to treatment plants. The second worksheet, “Plants”, defines the treatment plants and the cost factors, and contains charts reporting the locations and assignments of collection points to treatment plants.

2.1 Data specification
The example to be discussed uses hypothetical data, in no way related to the actual cost estimates. The example assumes one hundred collection points, delivering to a choice of four treatment plant locations, each treatment plant having a chosen number of presses. In practice, multiple alternative treatment plant locations, and varying numbers of presses at each plant, can be chosen to explore the feasible solution domain.

2.1.1 Collection points
One line record per collection point on the CollectPts worksheet specifies its name, the northing and easting location in km, and the expected production rate in kilotonnes (kt) per year. Following computation, these records are extended to give the distance to each potential treatment plant destination, and the chosen destination.

2.1.2 Treatment plants
One line record per treatment plant on the Plants worksheet specifies its name, the northing and easting location in km, and the number of presses to be installed. Computation cells extend the record to show the plant capacity and costing.

Table 1: Cost and other parameters.

# of presses   Limit (kt/yr)   Cap ($M)   Annual cost ($M/yr)   $/kt
1              40              5.84       1.17                  1.24
2              80              9.61       1.92                  1.20
3              120             13.39      2.68                  1.18
4              160             17.16      3.43                  1.17
5              200             20.94      4.19                  1.16

Transport/kt.km: $150    Distance factor: 140%    Cost of capital/yr: 12%
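With the Table 1 parameters, the yearly transport cost of a single route follows directly from the production rate, the road distance and the $150/kt.km rate, where the road distance is taken as 140% of the straight-line distance. A sketch, using collection point Col05 at (79, 74) and plant D at (80, 80) from the worked example:

```python
def road_km(n1, e1, n2, e2, factor=1.40):
    """Road distance: 140% of the Pythagorean straight-line distance."""
    return factor * ((n1 - n2) ** 2 + (e1 - e2) ** 2) ** 0.5

def transport_usd_per_year(kt_per_year, km, rate=150.0):
    """Yearly trucking cost at $150 per kt.km."""
    return kt_per_year * km * rate

# Col05 at (79, 74) delivering 1.68 kt/yr to plant D at (80, 80):
d = road_km(79, 74, 80, 80)          # ~8.5 km (Table 2 rounds this to 8)
cost = transport_usd_per_year(1.68, d)
```

The computed distance of about 8.5 km matches the 8 km reported for Col05 in the inflow table, a useful check that the distance convention is applied consistently.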
2.1.3 Cost and other parameters
Table 1 shows the cost and other parameters to be specified on the Plants worksheet. The treatment plants are modular, with the production limit or capacity proportional to the number of presses, but with the capital cost increasing rather less than linearly with the number of presses, so the cost per kt declines as the number of presses is increased. The cost of transporting a kt of nuts for one km is also to be specified, along with the yearly cost of capital (enabling capital and running costs to be combined for evaluation). The distance factor will be discussed below.

2.1.4 Travel distances
The distances from collection points to treatment plants are unknown, and infeasible to ascertain for the vast number of routes being considered. However, Thailand is well endowed with minor roads, which are not generally straight linkages, so the distance from any collection point to a treatment plant is assumed to be a factor (for example 140%) multiplied by the straight-line distance. The straight-line distance is calculated as the Pythagorean hypotenuse: the square root of the summed squared northing and easting distances.

2.2 Computation
Computation comprises two steps, which may then be further iterated with user interaction. Pressing the “Compute Inflow” button and the “Revise Routes” button respectively initiates the two computation steps. Each of these buttons runs VBA macros to carry out the computations described below.

2.2.1 Compute Inflow
The Compute Inflow macro extends the collection point records (on the CollectPts worksheet) to report the distance to each plant, and identifies the nearest plant.

Table 2: The initial computed inflow (total 292 kt/yr).

Point   N    E    kt/yr   km to plant A/B/C/D    Near   Send to   km   Next best   km   Extra
Col00   31   61   1.95    45 / 44 / 50 / 73      B      B         44   -           -    -
Col01   51   16   4.37    64 / 48 / 36 / 98      C      C         36   -           -    -
Col02   76   99   3.05    57 / 122 / 85 / 27     D      D         27   A           57   30
Col03   29   30   2.54    63 / 13 / 45 / 99      B      B         13   -           -    -
Col04   79   69   4.40    26 / 98 / 49 / 15      D      D         15   A           26   11
Col05   79   74   1.68    30 / 102 / 55 / 8      D      D         8    A           30   22
Col06   13   97   1.84    87 / 92 / 103 / 96     A      A         87   -           -    -
Col07   6    12   0.90    104 / 34 / 85 / 140    B      B         34   -           -    -

The implied load to each plant is computed, and compared with the plant’s capacity limit. For each collection point initially assigned to an overloaded treatment plant, the next nearest treatment plant is identified, and the extra distance required to get there is reported. Table 2 shows the top part of the report after computing the inflow. In this example, plant D is overloaded, so the collection points nearest to plant D are
each provided with a “next best” plant and the extra distance that would be incurred by going there instead of to the nearest plant. Of the collection points shown, Col02, Col04 and Col05 are each closest to plant D, with plant A being their next closest. Col02 is 27 km from its nearest plant D, and 57 km from the next nearest plant A, so an extra 30 km would be required to go to plant A instead of plant D.

2.2.2 Revise routes
If, as in this case, the initial computed inflow overloads one or more plants, then either the number of presses at those plants can be increased, or some of the collection points going there can be rerouted, by pressing the Revise Routes button. During the Revise Routes operation, the collection point list is sorted to put those collection points going to the overloaded plant (or plants) at the top, sorted in increasing extra distance that would be incurred if they were rerouted to the next nearest plant. This extra distance can be considered an opportunity cost of re-routing. Working down the list, the collection points are reassigned to the next nearest plant until the plant overload has been removed. In this case, plant D was overloaded by 15.48 kt/yr, so the top five candidates were rerouted to plant A, as shown in Table 3.

Table 3: Rerouting from the overloaded plant (total 292 kt/yr).

Point   N    E    kt/yr   km to plant A/B/C/D    Near   Send to   km   Next best   km   Extra
Col97   46   92   3.71    51 / 92 / 76 / 50      D      A         51   A           51   1
Col80   97   54   2.69    49 / 113 / 56 / 44     D      A         49   A           49   5
Col50   50   97   3.91    55 / 100 / 81 / 48     D      A         55   A           55   7
Col89   94   58   0.93    44 / 110 / 55 / 36     D      A         44   A           44   8
Col04   79   69   4.40    26 / 98 / 49 / 15      D      A         26   A           26   11
Col91   99   62   0.97    51 / 119 / 63 / 37     D      D         37   A           51   14
Col82   97   63   2.44    47 / 116 / 61 / 33     D      D         33   A           47   14
Col05   79   74   1.68    30 / 102 / 55 / 8      D      D         8    A           30   22
Col64   73   81   3.92    33 / 101 / 60 / 10     D      D         10   A           33   23
Col58   87   74   3.61    40 / 111 / 61 / 13     D      D         13   A           40   27

The list is then re-sorted, as shown in Table 4. Note that Col04 has been rerouted from plant D and now goes to plant A. But Col02 and Col05 still go to plant D, because enough other collection points, with lesser opportunity cost in terms of extra distance, can be re-routed to eliminate the plant overload. It is possible that, after the above redistribution, some plants will still be overloaded. The Revise Routes operation can be repeated as many times as needed, until all the overloading has been eliminated, provided the total plant capacity exceeds the total collection point production.
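The Revise Routes pass described above — sort the overloaded plant’s collection points by the extra distance to their next-best plant, then reassign from the cheapest upward until the overload disappears — can be sketched as follows. This is a hypothetical re-implementation of the procedure, not the paper’s actual VBA macro:

```python
def revise_routes(assign, demand, dist, capacity):
    """One pass of the Revise Routes heuristic.  `assign` maps collection
    point -> plant, `demand` maps point -> kt/yr, `dist[c][p]` is the road
    distance (km) from point c to plant p, `capacity` maps plant -> kt/yr."""
    inflow = {p: 0.0 for p in capacity}
    for c, p in assign.items():
        inflow[p] += demand[c]

    def next_best(c, p):
        # nearest plant other than p
        return min((q for q in capacity if q != p), key=lambda q: dist[c][q])

    for p in capacity:
        candidates = [c for c in assign if assign[c] == p]
        # smallest opportunity cost (extra km) first
        candidates.sort(key=lambda c: dist[c][next_best(c, p)] - dist[c][p])
        for c in candidates:
            if inflow[p] <= capacity[p]:
                break                      # overload removed
            q = next_best(c, p)
            assign[c] = q
            inflow[p] -= demand[c]
            inflow[q] += demand[c]
    return assign

# Toy data: plant D (capacity 8 kt/yr) is initially overloaded by two points.
dist = {"c1": {"A": 11, "D": 10}, "c2": {"A": 30, "D": 8}}
new_assign = revise_routes({"c1": "D", "c2": "D"},
                           {"c1": 5.0, "c2": 5.0}, dist,
                           {"A": 80.0, "D": 8.0})
```

In the toy run, "c1" (extra distance 1 km) is rerouted to A while "c2" (extra distance 22 km) stays at D — the same behaviour the paper reports for Col04 versus Col05.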
Table 4: The revised routes (total 292 kt/yr).

Point   N    E    kt/yr   km to plant A/B/C/D    Near   Send to   km
Col00   31   61   1.95    45 / 44 / 50 / 73      B      B         44
Col01   51   16   4.37    64 / 48 / 36 / 98      C      C         36
Col02   76   99   3.05    57 / 122 / 85 / 27     D      D         27
Col03   29   30   2.54    63 / 13 / 45 / 99      B      B         13
Col04   79   69   4.40    26 / 98 / 49 / 15      D      A         26
Col05   79   74   1.68    30 / 102 / 55 / 8      D      D         8
Col06   13   97   1.84    87 / 92 / 103 / 96     A      A         87
Col07   6    12   0.90    104 / 34 / 85 / 140    B      B         34
2.3 Results
When a feasible, acceptable route allocation has been achieved, tabular and graphical reports of the cost and allocation results can be seen on the Plants worksheet.

2.3.1 Cost results
Table 5 shows the cost result report. The number or locations of treatment plants, or the number of presses assigned to each, can be changed and the program rerun to explore the solution domain.

Table 5: Cost results.

                                                 Annual plant operating cost     Inflow/yr        Truck    Limit
Plant   N    E    Presses   Limit     Capital    Fix $M   $/kt   Var $M  Tot $M  kt    kt.km      $M/yr    used
                            (kt/yr)   ($M)
A       63   60   2         80        9.61       1.92     1.20   84.97   86.89   71    3,562      0.53     89%
B       20   32   3         120       13.39      2.68     1.18   128.16  130.84  109   4,505      0.68     91%
C       60   40   2         80        9.61       1.92     1.20   87.04   88.97   73    2,397      0.36     91%
D       80   80   1         40        5.84       1.17     1.24   49.40   50.57   40    699        0.10     100%
Total             8         320       38.46      7.69     1.20   349.58  357.27  292   11,163     1.67     91%

Total inflow: 292 kt/yr.
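Two of the derived columns in Table 5 can be reproduced from the others: the truck cost is the kt.km moved multiplied by the $150/kt.km rate of Table 1, and the capacity utilisation is inflow over plant limit. A quick consistency check:

```python
# (limit kt/yr, inflow kt/yr, kt.km moved) per plant, taken from Table 5:
rows = {"A": (80, 71, 3562), "B": (120, 109, 4505),
        "C": (80, 73, 2397), "D": (40, 40, 699)}

checks = {}
for plant, (limit, inflow, kt_km) in rows.items():
    truck_musd = kt_km * 150 / 1e6       # trucking cost, $M/yr
    used_pct = 100.0 * inflow / limit    # capacity utilisation, %
    checks[plant] = (truck_musd, used_pct)
# e.g. plant A: ~0.53 $M/yr and ~89% utilisation, matching Table 5
```

All four rows round to the published values, confirming the two rates are applied uniformly across plants.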
2.3.2 Allocation results
Figure 1 plots the locations of the collection points and of the treatment plants, and identifies the treatment plant allocated to each collection point. Figure 2 shows the capacity limit for each plant and the total yearly tonnage allocated to it. It also shows the tonnage redistributed from plant D to plant A to remove the initial overloading.
3 Discussion
The example discussed has shown how an Excel workbook, backed up by VBA macros, can be used as a decision tool for a complex transportation problem.
Figure 1: Allocation of collection points to treatment plants.
Figure 2: Redistribution from overloaded plant D to plant A.
The problem could probably have been recast as a quadratic optimisation to identify a unique optimum. However, it is argued that doing so would reduce the solution space and ignore any non-quantified criteria. In practice, it is found that a spreadsheet-based tool as described here provides opportunity for fruitful human-machine interaction, allowing the computer to do the tedious calculations whilst still enabling the human to include judgemental, not directly quantifiable, factors, and to explore the wide range of possible alternative policies.
An industrial ship system for the flat development of undevelopable surfaces: algorithm and implementation

E. M. Soto¹, X. A. Leiceaga² & S. García²
¹AZTECA Consulting de Ingeniería, Vigo, Spain
²Esc. Tec. Sup. de Ingenieros Industriales, Univ. de Vigo, Vigo, Spain
Abstract
This article presents an implementation, in an Industrial Ship System, of a new algorithm for the flat development of undevelopable surfaces, which are characteristic of the plates used for the construction of the ship hull and are designed as B-spline patches. The method is based on a structured triangular decomposition of the surface, from which the flat development is obtained by minimizing the differences between distances on the 3D surface and in the flat development. A complete tool integrated in the Industrial Ship System has been obtained, which provides precision and rapidity, and allows us to obtain all the information necessary for later production: cutting and forming.
Keywords: undevelopable surface, B-spline, mesh, design patches, hull strake, production.
1
Introduction
This article presents a complete tool, integrated in a Naval Engineering System [5], which performs the flat development of the complex surfaces that form the steel plates of the ship's hull. At the moment, in Spanish shipyards the production of curved steel plates is carried out by combined processes [1]: mechanical forming and thermo-mechanical forming. First the plate goes through rolls and then, in order to obtain the exact curvature, it is subjected to combined heat and cold processes; this method is known as "forming by line heating" [2].
WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line) doi:10.2495/CMEM090521
With the developed application, starting from the 3D plate model designed in a generic program (MicroStation V8, Bentley Systems), its flat development and the lines necessary for its later manufacture in the shipyard are obtained [1]. For this it has been necessary:
- To create the mesh of the surface, owing to the difficulty of obtaining a uniform mesh with the selected program's own tools.
- To develop the algorithm to obtain the equivalent flat surface.
- To identify the reference lines for manufacture and verification.
The programming work has been carried out in MDL (MicroStation Development Language), the native language of MicroStation, similar to C.
2
Resolution of mesh
The ship hull is designed using a set of B-Spline surfaces (Figure 1). The constructive patches do not coincide with these B-Spline surfaces.
Figure 1: Set of B-Spline surfaces that compose the ship hull.

Figure 2: Constructive patch made up of 4 design surfaces.
The first difficulty is to obtain a uniform mesh in the constructive patches composed of several B-Spline surfaces, as shown in Figure 2. We therefore decided to program a meshing application covering all the cases; this tool keeps the possibility of visualizing the result, so that the optimal mesh can be selected for each plate.
The meshing algorithm is made up of the following steps:
Step 1: Make an approximate triangulation of the surface. Search for the edges (condition: without neighbouring edges).
Step 2: Obtain the best direction of projection to avoid loss of information, according to different criteria (Figure 3).
Figure 3: Direction of projection.
Step 3: Obtain the best approximation of the edges of the constructive patch as B-Splines.
Step 4: Make a flat surface on which to project the edges obtained in the previous step. This flat surface will be perpendicular to the selected direction of projection.
Step 5: Project the curves obtained as limits of the patch onto the flat surface. Thus, we obtain a flat Spline surface, with which we can work in a simpler way.
Step 6: Extract the isoparametric curves of this flat surface. We obtain the surface mesh as the intersections of these curves. The intersection points form a regular, square mesh, optimal for later processing.
Step 7: Project these points onto the original surface. If the projection does not intersect the original surface, we search in the triangulation of the original surface.
This is the solution adopted; the results are good, and it is the only approach that has not produced any problems with surfaces composed of several design patches.
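As an illustration of Steps 4-7, the sketch below builds a regular (p+1)×(q+1) grid on the projection plane and lifts each point onto the surface. It simplifies drastically: the surface is a hypothetical explicit height field z = f(x, y) rather than a B-Spline patch, so "projecting onto the surface" reduces to evaluating f; the real tool performs these steps on B-Spline geometry through MicroStation MDL.

```python
# Illustrative sketch of Steps 4-7, under a strong simplification: the
# surface is an explicit height field z = f(x, y), so lifting a planar grid
# point onto the surface just means evaluating f. Both f and the grid
# bounds are hypothetical.
import math

def f(x, y):
    """Hypothetical hull-like patch (stands in for the B-Spline surface)."""
    return 0.1 * x * x + 0.05 * math.sin(y)

def regular_mesh(x0, x1, y0, y1, p, q):
    """(p+1) x (q+1) regular grid on the projection plane, lifted onto f."""
    pts = []
    for i in range(p + 1):
        row = []
        for j in range(q + 1):
            x = x0 + (x1 - x0) * i / p       # Step 6: regular planar grid
            y = y0 + (y1 - y0) * j / q
            row.append((x, y, f(x, y)))      # Step 7: project onto the surface
        pts.append(row)
    return pts

mesh = regular_mesh(0.0, 4.0, 0.0, 2.0, p=8, q=4)
print(len(mesh), len(mesh[0]))   # 9 5
```

The regular structure of the resulting grid is what makes the later development step tractable, since every node has well-defined row, column and diagonal neighbours.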
3
Development algorithm
The proposed method is based on a non-structured triangular decomposition of the surface, from which its flat development is obtained by minimizing [3] the difference between each distance in the flat development and the same distance on the original surface. The development of the surface is done by minimization of an "application error" (see below): the difference between distances on the real surface and the same distances on the developed surface between neighbouring points, by application of local isometries between the real surface and the mesh of the developed surface.
Figure 4: Mesh is projected onto the original surface.
\[
E(v_{1x}, v_{1y}, \ldots, v_{ix}, v_{iy}, \ldots, v_{pq\,x}, v_{pq\,y}) = \sum_{k=1}^{l} \left[ d_k^{3D} - d_k^{2D} \right]^2
\tag{1}
\]
where:
p: number of points distributed in parametric direction u.
q: number of points distributed in parametric direction v.
v_ix, v_iy: variables of the unknown function, corresponding to the coordinates x and y in the development surface D, for the p·q points of S.
d_k^3D: distance in S (3D) between neighbouring points.
d_k^2D: distance in D (2D) between the corresponding neighbouring points.
l = 4·p·q − 3·(p+q) + 2: total number of segments connecting neighbouring points.
i = 1, 2, …, p·q: index over the nodes of the parametric net.
The concept of development is to look for the minimum of this function [3], that is to say, to look for a vector v* of dimension p·q such that:
\[
E(v^{*}) = \min E(v) = \min \sum_{k=1}^{l} \left[ d_k^{3D} - d_k^{2D} \right]^2
\tag{2}
\]
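A minimal sketch of the error function of eq. (1), under an illustrative data layout (nested lists of node coordinates): it sums the squared differences between 3D and 2D segment lengths over rows, columns and both diagonals.

```python
# Sketch of the error E of eq. (1): squared difference between each 3D
# segment length and the corresponding 2D length in the candidate
# development, summed over the segments connecting neighbouring nodes.
# The grid-of-tuples data layout is illustrative.
import math

def error(pts3d, pts2d):
    """pts3d: (p+1)x(q+1) grid of (x,y,z); pts2d: same grid of (x,y)."""
    p, q = len(pts3d) - 1, len(pts3d[0]) - 1
    # neighbour offsets: same row, same column, right and left diagonal
    offs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    E = 0.0
    for i in range(p + 1):
        for j in range(q + 1):
            for di, dj in offs:
                a, b = i + di, j + dj
                if 0 <= a <= p and 0 <= b <= q:
                    d3 = math.dist(pts3d[i][j], pts3d[a][b])
                    d2 = math.dist(pts2d[i][j], pts2d[a][b])
                    E += (d3 - d2) ** 2
    return E

# A flat 3D patch developed onto itself has zero error:
flat3 = [[(i, j, 0.0) for j in range(3)] for i in range(4)]
flat2 = [[(i, j) for j in range(3)] for i in range(4)]
print(error(flat3, flat2))   # 0.0
```

Each interior segment is counted exactly once because only "forward" neighbour offsets are visited, which matches the segment count l of eq. (1).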
In this way, a flat triangular mesh is obtained which approximates the real one, where the difference between corresponding distances on the 3D surface and on the plane is minimum. We must also consider the contour conditions in the minimization process, in order to achieve greater efficiency in the result.
3.1 Step 1: Definition of the parametric space of the mesh
The first step of the algorithm consists of arranging the data while maintaining the parametric space defined according to Figure 5.
3.2 Step 2: Division into two meshes for the application of the algorithm
The procedure followed in this algorithm is to construct two new meshes: the original one (Mesh A) and another turned by 180° with respect to the first (Mesh B). That is to say, one begins the development from the node (0,0) and the
Figure 5: Definition of the parametric space and distances of a point (i, j).
other, applying a rotation of angle 180°, from the node (p,q) (the opposite node). We calculate the 180°-turned mesh by copying the coordinates in the opposite sense of the numeration: we begin with the coordinates of node (p,q) and proceed until arriving at (0,0). Thus, we obtain two meshes with the same coordinates but ordered in two different ways. The same functions can be applied to both meshes, and thus we obtain two developments from opposite ends of the same surface.
3.3 Step 3: Previous calculations
First we carry out the calculations of the distances and the verification angles: edges, diagonals, perimeter and patch area, which tell us how good the obtained approximation is. The distances on the 3D surface are measured according to the formula:
\[
d_{(i,j):(i,j+1)} = \sqrt{(x_{i,j} - x_{i,j+1})^2 + (y_{i,j} - y_{i,j+1})^2 + (z_{i,j} - z_{i,j+1})^2}
\tag{3}
\]
All the distances represented for a generic point (i,j) in Fig. 5 must be considered: same row, same column, right diagonal, left diagonal. There are singular nodes, the nodes of edges and vertices, for which not all the distances exist; these values are eliminated. In order to obtain the angles, sines and cosines needed for the first points of the approximation, we use the law of cosines with the distances already defined. First, we calculate the cosine, equation (4), and from it the angle and the sine:
\[
\cos \gamma_2 = \frac{d_{(0,0):(0,1)}^2 + d_{(0,0):(1,0)}^2 - d_{(1,0):(0,1)}^2}{2 \, d_{(0,0):(0,1)} \, d_{(0,0):(1,0)}}
\tag{4}
\]
3.4 Step 4: Approach calculations
Thus, we have all the values necessary to begin the calculation of the approximation; we apply the same calculations to the two meshes, Mesh A and Mesh B.
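The pre-calculations of Step 3, distances via eq. (3) and the corner angle via the law of cosines, eq. (4), can be sketched as follows; the node coordinates used in the example are hypothetical:

```python
# Sketch of the Step 3 pre-calculations: 3D distances (eq. (3)) and the
# corner angle from the law of cosines (eq. (4)). Coordinates are
# hypothetical.
import math

def d3(a, b):
    """Euclidean distance of eq. (3)."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def corner_angle(p00, p01, p10):
    """Angle at node (0,0) between its row and column neighbours, eq. (4)."""
    a = d3(p00, p01)                 # d_(0,0):(0,1)
    b = d3(p00, p10)                 # d_(0,0):(1,0)
    c = d3(p10, p01)                 # d_(1,0):(0,1)
    cos_g = (a * a + b * b - c * c) / (2.0 * a * b)
    return math.acos(cos_g)

# For a flat right-angled corner the angle is 90 degrees:
ang = corner_angle((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(round(math.degrees(ang), 1))   # 90.0
```

The sine then follows from the angle, and these values seed the first nodes of the developed mesh before the minimization starts.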
In order to obtain the starting points with which to minimize the error function, we calculate, for each mesh, the average of the coordinates fitted to the distances in rows of the original surface (v¹_(i,j)) and of those fitted to the distances in columns (v²_(i,j)), which until now were kept separate:
\[
\left( v^{0}_{(i,j)x}, \; v^{0}_{(i,j)y} \right) = \left( \frac{v^{1}_{(i,j)x} + v^{2}_{(i,j)x}}{2}, \; \frac{v^{1}_{(i,j)y} + v^{2}_{(i,j)y}}{2} \right)
\tag{5}
\]
where i = 0, 1, 2, …, p and j = 0, 1, 2, …, q, crossing all the points of the mesh. Applying the function programmed from this algorithm, two 2D meshes (Mesh A and Mesh B) are obtained, each defined by the (p+1)·(q+1) initial vectors (v⁰_(i,j)x, v⁰_(i,j)y), that is to say, the (x, y) coordinates of the (p+1)·(q+1) nodes corresponding to the nodes of the 3D mesh.
3.5 Step 5: Merge the two meshes
In order to merge these meshes we must, in the first place, undo the 180° rotation previously applied to Mesh B. Once the two developed meshes have the same node ordering, we merge them using coefficients by rows and columns, so that the coordinate values of the nodes coinciding with the axes of the development mesh have more weight, because these have a smaller error. In Mesh A these are row 0 and column 0, whereas in Mesh B they are row p and column q. The coefficients for Mesh A are determined by the following formulas, and the coefficients of Mesh B are 1 minus the Mesh A coefficients:
\[
Coef.colum = 1 - \frac{nf \cdot p}{N - p} - \frac{1}{p - 1}
\tag{6}
\]
\[
Coef.fila = 1 - \frac{nc \cdot q}{N - q} - \frac{1}{q - 1}
\tag{7}
\]
where: nc = column number of the node; nf = row number of the node; p = number of columns; q = number of rows; N = total number of nodes. Applying the weighted average of the two meshes from the equations:
\[
Coef.ColumA \cdot x_1 + Coef.ColumB \cdot x_2 = x
\tag{8}
\]
\[
Coef.FilaA \cdot y_1 + Coef.FilaB \cdot y_2 = y
\tag{9}
\]
we obtain the (x, y) coordinates of the developed 2D mesh. In addition, we calculate the edges, diagonals and area (with the same function applied to the 3D mesh) and compare them with the data obtained from the 3D mesh, to give us an idea of the error made in the development.
3.6 Step 6: Relocate
This step consists of fitting the mesh to the 3D distances of the edges, diagonals and perimeter. For this, we first measure the necessary distances to compare with those of the 3D mesh; then we take the difference between the 2D and 3D distances and move the coordinates in x and y accordingly, from the centre of the mesh.
First we move the lines individually and later we correct the edges, so that the mesh is kept as uniform as possible. Since there is displacement, we must re-compute the measurements between nodes of the 2D mesh; we must do so whenever nodes are displaced.
3.7 Step 7: Calculation of the error (E)
The error is defined in equation (1) as the sum of squared differences between the 3D distances and the same distances in the 2D mesh, according to the four specified directions (horizontal, vertical and the two diagonals).
3.8 Step 8: Iterations
The iteration process diminishes the error function [5] by applying the following calculations:
1. Calculation of the factor, from the calculated errors:
\[
\frac{2E}{d_{(i,j)-(a,b)}}
\tag{10}
\]
2. Calculation of the difference of the coordinates between one iteration and the previous one (it_a and it_b being consecutive iterations):
\[
\left( v^{it\_a}_{(i,j)x} - v^{it\_b}_{(i,j)x} \right)
\tag{11}
\]
3. Calculation of the divergence of the coordinates, that is, the product of the two previous quantities:
\[
\frac{2E}{d_{(i,j)-(a,b)}} \cdot \left( v^{it\_a}_{(i,j)x} - v^{it\_b}_{(i,j)x} \right)
\tag{12}
\]
4. The total divergence is the sum of the divergences of the coordinates which affect the neighbouring nodes, whilst still considering the signs.
5. Calculation of the 2D mesh from the previous mesh and the divergence multiplied by a coefficient (by default 0.1):
\[
\nabla E(v) = \sum_{i} \frac{\partial E}{\partial v_i} \cdot v_i
\tag{13}
\]
After finding the mesh of the iteration, we redo all the calculations to compare the results and to prepare the values needed for the next iteration:
• Calculate the distances.
• Calculate the error (difference between 3D and 2D).
• Calculate the edges, diagonals and angles, and calculate the difference with respect to the real values.
Once all this is calculated, we include a second process that consists of relocating the nodes from the centre of the meshes, to improve the approximation:
• Calculate lines and relocate.
• Calculate frames and relocate.
• Calculate the 2D distances.
• Relocate the edges.
• Measure distances in the 2D mesh.
• Calculate frames.
• Calculate lines.
• Calculate the error.
• Measure edges and diagonals.
• Compare the distance errors with those which we must finally reach.
A maximum value of the error has been set to finish the iterations; when the program reaches that error, the algorithm ends. We also implemented the possibility of introducing a fixed number of iterations, to verify the convergence of the algorithm.
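The iteration loop can be sketched as plain coordinate descent on the error E of eq. (1), using a finite-difference gradient in place of the paper's divergence terms; the step factor 0.1 matches the default coefficient quoted above, while the patch, mesh size and iteration count are illustrative.

```python
# Sketch of the iterative flattening as coordinate descent on E (eq. (1)),
# with a finite-difference gradient instead of the paper's divergence
# terms. Step factor 0.1 is the default coefficient quoted in the text;
# the test patch is hypothetical.
import math

def seg_error(p3, p2):
    """E of eq. (1): sum over row/column/diagonal segments of (d3D - d2D)^2."""
    p, q = len(p3) - 1, len(p3[0]) - 1
    E = 0.0
    for i in range(p + 1):
        for j in range(q + 1):
            for di, dj in ((0, 1), (1, 0), (1, 1), (1, -1)):
                a, b = i + di, j + dj
                if 0 <= a <= p and 0 <= b <= q:
                    E += (math.dist(p3[i][j], p3[a][b])
                          - math.dist(p2[i][j], p2[a][b])) ** 2
    return E

def develop(p3, iters=100, step=0.1, h=1e-6):
    """Coordinate descent on E, starting from the planar (x, y) projection."""
    p2 = [[[float(x), float(y)] for x, y, _ in row] for row in p3]
    for _ in range(iters):
        for row in p2:
            for v in row:
                for k in (0, 1):
                    base = seg_error(p3, p2)
                    v[k] += h
                    g = (seg_error(p3, p2) - base) / h   # finite difference
                    v[k] -= h + step * g                  # undo probe, descend
    return p2, seg_error(p3, p2)

# Gently curved 4 x 3 patch: its planar projection is not isometric,
# and the descent brings the error down.
p3 = [[(i, j, 0.05 * i * i) for j in range(3)] for i in range(4)]
E_init = seg_error(p3, [[[float(x), float(y)] for x, y, _ in r] for r in p3])
flat, E = develop(p3)
print(E < E_init)   # True
```

A production implementation would use the analytic gradient (the divergence terms of eqs. (10)-(12)) rather than finite differences, and would interleave the relocation passes listed above between descent sweeps.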
4
Result: a functional tool
The result is a functional tool for the development of non-developable surfaces, implemented in the generic program MicroStation. We analyse its operation with a block of the fore peak (Figure 6), where the curvatures of the hull are most complex. The patches to develop for this block are those of Figure 7, where they are separated to show their different forms and curvatures.
Figure 6.

We apply the programmed tool "Develop patches", which carries out the explained algorithm. First we select the surface, in our case patch 02, as we see in Fig. 7. A dialogue window appears, where we input the values of the different variables for the development.
4.1 Selection of parameters
In the first part of the dialogue window we select the mesh density, the mesh tolerances, and the marks for the final developed surface.
Figure 7.

Figure 8: Selection of parameters to develop the surface.
With respect to the mesh we must indicate the number of rows (p) and the number of columns (q). With respect to the error or mesh tolerance: chord height, angle normal to the surfaces, maximum side width. These variables belong to the MicroStation mesh tool; they have default values and it is not necessary to fill them in. Regarding the production marks that we want to appear on the developed surface, we have the variables: mark guide lines to roll; mark frames; mark water lines; distance between lines of heating; distance from the mark to the edge. These marks are obtained by connection with the System Data Base, where the coordinates are stored.
4.2 Output options
In the second part of the window, we select the output options (Fig. 9), which can be grouped in: options to visualise the 2D mesh solution, shown in the working file or in another file; options to visualise the 3D mesh of the surface to develop, shown in the working file or in another selected file.
Figure 9: Output options.
Options to verify the output data: developed 2D surface and marks. Each set of elements is written to its own level, so the data can be shown or hidden as desired. A comparative table with the dimensions obtained in both meshes can also be produced.
4.3 Developed 2D mesh
After setting the parameters and output options, we push the Develop button and obtain the developed mesh as a B-Spline surface, as shown in the images of Fig. 10. The left image is the mesh of the 2D surface, and the right figure shows the marked lines, frames and guidelines to roll, each in its corresponding level. In Fig. 11 we see the comparative tables between the measurements of the 2D and 3D surfaces, for verification of the approximation. In addition, these distances are also used for later weld fillet calculations and cutting-machine times, among others. All these data are exportable as a TXT file for study or later calculations.
5
Conclusions
We emphasise the following conclusions among others:
Optimal meshes for each surface. We obtain meshes that best adjust to each surface. In this respect, the tool allows the execution and visualisation of 3D meshes, so that we can select the one we consider closest to the real surface, and thus optimal for obtaining the flat development.
The tool is fast. It is important to emphasise the rapidity of the tool; the graphical methods used at present [4], had we programmed them, would be slower.
Definition of a finished tool. As the paper has shown, a tool implemented in the System Integral of Engineering and Design has been developed. It achieves sufficient accuracy in the flat development of the non-developable surfaces that compose the ship hull, and in addition it gives all the information necessary for production: positioning, cutting and forming.
Unique 3D design model. We emphasise too that the developed planes are generated directly from the three-dimensional patches of the hull model. A
unique design model is used for calculations, analysis and production, thereby preventing the incoherences that take place at present due to the existence of several models.
Figure 10: Left: developed surface mesh. Right: lines and marks.

Figure 11: Report of verification of distances.
References
[1] Ship Design and Construction, The Society of Naval Architects and Marine Engineers: New York, 2003.
[2] Aplicación técnica de calor para conformado, Astilleros Españoles, S.A., November 1990.
[3] Press, W.H., Numerical Recipes, Cambridge University Press: Cambridge, 1992-1996.
[4] Pardo, E., Trazado de líneas y desarrollos del buque, Editorial Gustavo Gili, S.A.: Barcelona, 1984.
[5] Gill, P.E., Murray, W. & Wright, M.H., Practical Optimization, Academic Press, 1981.
[6] Leiceaga, X., Zapatero, F.G., Rodríguez, M. & Prieto, J., Proyectación asequible de buques. El modelo 3D al servicio de la ingeniería naval. Revista Ingeniería Naval, technical article, October 2003.
[7] Clausen, H.B., Plate Forming by Line Heating, Technical University of Denmark: Kgs. Lyngby, 2000.
Section 11 Forest fires
Assessment of the plume theory predictions of crown scorch or crown fire initiation using transport models

V. Konovalov¹,², J.-L. Dupuy¹, F. Pimont¹, D. Morvan³ & R. R. Linn⁴
¹I.N.R.A. Ecologie des Forêts Méditerranéennes (UR 629), Site Agroparc, Domaine de Saint Paul, F-84914 Avignon, France
²ICMM, Ural Branch of RAS, Perm, Russia
³UNIMECA, Université de la Méditerranée, Marseille, France
⁴EES2, Los Alamos National Laboratory, Los Alamos, USA
Abstract The aim of our work is to numerically study crown scorch and crown fire ignition as the effects of a fire line spreading through surface fuel under a tree canopy. The objective was to assess the usual assumptions made when one uses the Van Wagner criteria, based on plume theory, to estimate crown scorch or crown fire ignition. The Van Wagner criteria are indeed simple predictive models for crown scorch height or crown fire initiation occurrence. For this purpose the FIRESTAR 2D and FIRETEC wildfire simulators are used. We simulated the fire line by a heat source at ground level and mainly investigated the temperature field. As a first step, we tested the sensitivity of the simulations to different simulation parameters of the wildfire models. As a second step, we ran computations of thermal plumes with no-wind and with no-canopy, for the first comparison to the plume theory. The influence of crown existence on the temperature field above the heat source, as well as on crown scorch and fire ignition conditions, was then investigated. As a third step, the effect of a wind to the plume was shown for the no-canopy and canopy cases. Keywords: crown scorch and crown fire ignition, plume theory, van Wagner criteria, FIRESTAR 2D and FIRETEC wildfire simulators.
WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line) doi:10.2495/CMEM090531
1
Introduction
The Van Wagner criteria are simple predictive models for crown scorch height or crown fire initiation occurrence, based on plume theory. These criteria are widely used by forest engineers to assess the risk of crown scorch or of crown fire ignition in the prescribed fire operations. Following some preliminary work [1, 2], Van Wagner derived from the plume theory a formula relating the crown scorch height with the linear fire front intensity [3]:
\[
h_s = \frac{11.6 \, I^{2/3}}{60 - t_a}
\tag{1}
\]
where hs (m) is the crown scorch height, I (kW/m) is the linear fire front intensity, and ta (°C) is the ambient temperature. The numerical constant was derived from experimental data. Crown scorch was assumed to appear at the usual value of 60 °C. This temperature threshold is well adapted for the prediction of tree foliage necrosis, even if higher thresholds should be used for vegetative buds [4]. Vegetative buds indeed respond more slowly to a heat flux than needles, due to their lower surface-to-volume ratio. Here, we consider that the fuel elements are in thermal equilibrium with the gaseous phase, which is well supported for foliage (except during the water evaporation process), but not for buds. A basic assumption of plume theory is to consider points far enough from the heat source. It also means that the main mechanism of heat transfer is convection, since heat conduction or radiation should only be significant close to the source. The Van Wagner criterion also assumes that the plume structure is not significantly affected by the presence of a canopy. In [3], Van Wagner investigated the effect of a weak wind on crown scorch. By a weak wind, we mean that the plume structure is not destroyed by the wind, but just inclined [2]. A simple correction gives the scorch height including the effect of wind:
\[
h_s = \frac{3.94 \, I^{7/6}}{\left( 0.107 \, I + U^3 \right)^{1/2} \left( 60 - t_a \right)}
\tag{2}
\]
where U (m/s) is the wind velocity. In the present study, we intended to assess the main assumptions made in the above theory. To investigate numerically crown scorch and crown fire ignition as effects of a fire line spreading through surface fuel under a tree canopy, we used two different physically-based fire models: FIRESTAR 2D [5] and FIRETEC [6]. In both models, a heat source was implemented as a rectangular area located in the middle of the domain at ground level. The heat source represents a steady fire line of given intensity (power in kW/m). The influence of a canopy and of an ambient wind were also included. The sensitivity of the simulations to different simulation parameters of the wildfire simulators was tested.
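The two Van Wagner criteria are straightforward to evaluate numerically; the sketch below codes eqs. (1) and (2) with the constants as quoted (11.6, 3.94 and 0.107). The input intensity, temperature and wind speed are illustrative.

```python
# Numerical form of eqs. (1) and (2): crown scorch height for the no-wind
# and weak-wind cases. Inputs: intensity I (kW/m), ambient temperature
# ta (deg C), wind speed U (m/s). Example values are illustrative.

def scorch_height(I, ta):                    # eq. (1)
    return 11.6 * I ** (2.0 / 3.0) / (60.0 - ta)

def scorch_height_wind(I, ta, U):            # eq. (2)
    return 3.94 * I ** (7.0 / 6.0) / ((0.107 * I + U ** 3) ** 0.5 * (60.0 - ta))

I, ta = 500.0, 25.0
print(round(scorch_height(I, ta), 1))            # no-wind scorch height, m
print(round(scorch_height_wind(I, ta, 0.0), 1))  # eq. (2) at U = 0: nearly the same
print(round(scorch_height_wind(I, ta, 3.0), 1))  # a 3 m/s wind lowers the plume
```

Note that at U = 0, eq. (2) reduces to 3.94/0.107^0.5 · I^(2/3)/(60 − ta) ≈ 12.0 · I^(2/3)/(60 − ta), which is consistent with the constant of eq. (1) to within a few percent.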
2
Wildfire simulator parameters
2.1 FIRETEC simulations
FIRETEC is a 3D coupled fire-atmosphere model developed at Los Alamos National Laboratory (New Mexico, USA). It is based on conservation of mass, momentum, species and energy in a compressible-gas formulation. Turbulence modelling within FIRETEC is close to the large-eddy simulation approach. The 3D geometry of the numerical simulation is very interesting in terms of fire behaviour analysis, but requires a significant computational cost. For that reason, the spatial resolution remains limited to the order of 1 m at ground level (in practice, 2 m cells are usually selected). The combustion model treats vegetation pyrolysis and combustion as a single process and is based on a probability density function for temperature. In the present study, the heater was defined as a line of cells in the middle of the domain at ground level. We tested the effect of the spatial resolution on the simulation outputs. For that purpose, we considered numerical domains of a given size (160 m in length, 20 m in width and 615 m in height) with cyclic boundary conditions on the lateral domain sides, but with different mesh sizes along the x, y and z axes. Mesh resolutions were characterized by their cell size at ground level: 1m × 1m × 1m, 1m × 1m × 2m, 1m × 2m × 1m, 2m × 2m × 1m, 2m × 2m × 2m, and 4m × 4m × 4m. It should be noted that cell heights increase along the vertical axis. Fig. 1 shows the effect of the mesh resolution on the temperature field output for the case of a heater power of 250 kW/m.
Figure 1: Vertical profiles of the temperature rise, as predicted by the FIRETEC model.
The curves obtained with logarithmic axes for all the different meshes were almost linear. In addition, their slopes are generally close to −1, as predicted by plume theory (Fig. 1). However, the slopes depended on mesh resolution, and some discrepancies from the −1 slope appeared at the lowest resolution (curve 4 in Fig. 1). The −1 slope in log scale is a requirement of plume theory, obtained by dimensional analysis. But it must be specified that plume theory predicts the maximum temperature on the plume axis (a point temperature), whereas FIRETEC predicts an average temperature over the cell volume. FIRETEC predictions should match plume theory when the temperature gradients are small enough. The conclusion of these preliminary results was the following: in order to reproduce the plume temperatures predicted by plume theory, and the other field outputs of the plume above the heater, we need to use a rather fine mesh, especially in the vertical direction. The standard mesh resolution 2m × 2m × 2m does not seem sufficient.
2.2 FIRESTAR 2D simulations
FIRESTAR is a 2D-geometry wildfire behaviour simulator developed at the Mediterranean University (Marseille, France). The combustion processes are described in more detail than in FIRETEC and the spatial resolution is higher, but the 2D assumption raises questions. The turbulence model is based on a k-ε approach. The mesh size in FIRESTAR can be refined in the area of interest, here the heat source and the plume. In this study, the heater was also defined as a set of cells in the middle of the domain at ground level. The mesh size was 0.2m × 0.1m at the burner location. The numerical domain was 100 m long and 50 m high. We tested three different modes for the simulation of the heat source. In the first two, we injected a mass of CO in the heat source area that reacted with oxygen to release heat.
Two models of combustion rate were tested for the simulated burner: the first was based on the Eddy Dissipation Concept; the second assumed that either CO or oxygen was fully consumed by combustion in a time step, so that the rate of combustion was limited by the amount of CO or oxygen according to the stoichiometric requirement ('mixed is burned' model, MIB). In the third mode, we directly injected heat through an artificial heat source term in the equation for energy conservation of the gaseous phase. The temperature outputs of these three modes of the heat source were found to differ only near the heat source, whatever the time, or far from the heat source only shortly after the heat start. The FIRESTAR turbulence model is a k-ε model (k stands for the turbulent kinetic energy and ε for its dissipation rate). Different k-ε models are implemented in FIRESTAR (standard, RNG, low Reynolds). The predictions of FIRESTAR 2D obtained with the standard and the Nam and Bill sets of parameters were compared with the plume theory predictions. The predictions of FIRESTAR 2D with the two sets of turbulence parameters were close together and, beyond some threshold value of height, they showed the same trend as
plume theory. The threshold of height increased with power (about 2 and 10 m respectively for 100 and 1600 kW/m) (Fig. 2). This observation illustrates the fact that plume theory should only apply ‘far enough’ from the heat source.
Figure 2: Vertical profiles of gas temperature rise as predicted by the FIRESTAR 2D model with two sets of turbulence parameters (standard, Nam and Bill) and by plume theory. a) Power 100 kW/m; b) Power 1600 kW/m.
The set of standard parameters gave results closer to plume theory for low powers; the set of Nam and Bill parameters gave results closer to plume theory for high powers.
3
No-wind simulations (FIRESTAR 2D)
3.1 No-canopy simulations
We first considered the case without any crown and compared the results of FIRESTAR simulations ('virtual crown') with the predictions of the Van Wagner formula for crown scorch. In both models, crown scorch was assumed to occur when the gas temperature reached 60 °C. Fig. 3(a) shows the minimum intensity necessary to get ignition of canopy fuel elements at a given height above the ground. In these 'virtual crown' simulations, we considered that ignition occurred as soon as the gaseous phase reached the ignition temperature of the fuel (600 K). The agreement in crown scorch predictions is excellent. The predictions of FIRESTAR were fitted to a power law of the fire intensity, as shown in Fig. 3(b), leading to the following formula:
$$h_i = \frac{18.7\,I^{2/3}}{600 - T_a}, \tag{3}$$

where Ta (K) is the ambient absolute temperature.
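As an illustration, the fitted relation of Eq. (3) can be evaluated numerically. The sketch below is not from the paper; it assumes the fire intensity I is in kW/m and the ambient temperature Ta in K, with the ignition height returned in m:

```python
def crown_ignition_height(intensity, t_ambient=300.0):
    """Minimum canopy ignition height from the fitted power law of Eq. (3):
    h_i = 18.7 * I**(2/3) / (600 - Ta).

    intensity: fire intensity I (assumed kW/m); t_ambient: ambient
    absolute temperature Ta (K).
    """
    return 18.7 * intensity ** (2.0 / 3.0) / (600.0 - t_ambient)


# A stronger fire ignites canopy fuel elements at a greater height.
heights = [crown_ignition_height(i) for i in (100.0, 400.0, 1600.0)]
print(heights)
```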
Figure 3: Crown scorch heights (a) and crown fire ignition heights (b) as predicted by FIRESTAR 2D in the absence of a tree canopy ('virtual crown').
3.2 Tree canopy influence

We defined a tree canopy model based on field measurements of a variety of Aleppo pine (Pinus halepensis) forests in Greece [7]. The maximum needle density was 0.16 kg/m3 and the average value was about 0.08 kg/m3. We added a second family of fuel representing the smallest twigs (0-6 mm diameter), with a maximum density half that of the needles and the same vertical profile. This tree canopy model is called 'light crown' in the following. We also defined a very dense tree canopy by setting the maximum needle density to 0.8 kg/m3 (five times denser than the light crown). This second canopy model is called 'dense crown' in the following (we ignored twigs in this case). The dense crown should be viewed as a limiting case, not as a realistic canopy. We used an area-to-volume ratio of 10000 m-1 and a material density of 800 kg/m3 for needles (data measured on Pinus halepensis, INRA).

As a first step, we considered canopies with crown base heights ranging from 3 to 20 m. Based on the temperature fields of a large set of simulations, we looked for the minimum intensity that caused scorch at the crown base. The results are plotted in Fig. 4. For a given intensity, the crown scorch height can be slightly higher in the presence of a canopy than in the no-canopy case ('virtual crown'); however, this effect was negligible. Fig. 5 shows scorch heights computed for different pine densities. We also plotted the plume theory prediction and the prediction of FIRESTAR with no crown ('virtual crown') for comparison. Clearly, the presence of the canopy increased the crown scorch height with respect to the 'virtual crown' or plume theory predictions, and this effect was stronger for the dense canopy.
Figure 4: Minimum intensity necessary to get scorch at crown base height level.
Figure 5: Crown scorch heights as a function of the heat source intensity for crowns ranging between 3 and 13 m.
4 Wind case results (FIRETEC)
Here, we considered the effect of wind on the plume above the heater for the no-canopy and canopy cases, using the FIRETEC code. The domain size was the same as in the preliminary results, and a mesh resolution of 2 m × 2 m × 2 m was used at ground level. In order to represent an infinite plume along the y axis, cyclic boundary conditions were used on the lateral sides of the domain. Because the turbulence model of FIRETEC is based on an LES approach, the most significant part of the turbulence is explicitly resolved by the model. In order to obtain mean results, the outputs of FIRETEC were averaged over a significant period of time.
A vertical wind profile was introduced at the inlet boundary as

$$U_x = U_0 \left( \frac{z}{H} \right)^{1/7}, \tag{4}$$
where the reference height H is equal to 5 m. The wind speed is defined here by the reference velocity U0. We tested two wind velocity levels (0.1 m/s and 1 m/s) and two power levels (250 kW/m and 1000 kW/m) (Fig. 6).
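The one-seventh power-law inlet profile of Eq. (4) is straightforward to evaluate. The following is an illustrative sketch, not the FIRETEC implementation:

```python
def inlet_wind(z, u0, h_ref=5.0):
    """Inlet wind profile of Eq. (4): U_x = U0 * (z / H)**(1/7).

    z: height above ground (m); u0: reference velocity U0 (m/s);
    h_ref: reference height H (m), equal to 5 m in the paper.
    """
    return u0 * (z / h_ref) ** (1.0 / 7.0)


# At the reference height the profile returns U0 exactly.
print(inlet_wind(5.0, 1.0))  # → 1.0
```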
Figure 6: Images of the plume temperature field for two wind velocity levels and two power levels. a) Power 250 kW/m, wind 0.1 m/s; b) Power 1000 kW/m, wind 0.1 m/s; c) Power 250 kW/m, wind 1.0 m/s; d) Power 1000 kW/m, wind 1.0 m/s.
One can see a significant inclination of the plume even with a rather weak wind (1.0 m/s). We observed a similar effect with the FIRESTAR 2D simulator. As expected, the plume is colder when the wind speed is higher. We then tested the effect of the presence of a tree canopy on the plume. The canopy density was 0.085 kg/m3 and the canopy extended between 3 and 13 m in both cases. A pre-computation of the wind field using cyclic conditions at the inlet and outlet boundaries of the domain was performed to properly establish the turbulence due to the canopy layer.
Figure 7: Images of the plume temperature field for power 250 kW/m and wind 1.0 m/s for the no-canopy (a) and canopy (b, density 0.085 kg/m3) cases.
Figure 7 illustrates that the presence of the canopy makes the plume much less inclined. Hence we expect a significant effect of the canopy layer on the plume and on the temperature distribution, as compared to the predictions of plume theory, which does not account for the canopy. These results with wind and a canopy are preliminary, and the effect of the canopy on the plume should be further investigated before drawing any conclusions.
References
[1] Thomas, P.H., The size of flames from natural fires. Proceedings of the Ninth International Symposium on Combustion, Academic Press Inc.: New York, N.Y., pp. 844–859, 1963.
[2] Thomas, P.H., The effect of wind on plumes from a line heat source. Joint Fire Research Organization, Fire Research Station, Boreham Wood, England, Fire Research Note 572, 1964.
[3] Van Wagner, C.E., Height of crown scorch in forest fires. Canadian Journal of Forest Research, 3(3), pp. 373–378, 1973.
[4] Michaletz, S.T. & Johnson, E.A., A heat transfer model of crown scorch in forest fires. Canadian Journal of Forest Research, 36, pp. 2839–2851, 2006.
[5] Morvan, D. & Dupuy, J.L., Modeling the propagation of a wildfire through a Mediterranean shrub using a multiphase formulation. Combustion and Flame, 138, pp. 199–210, 2004.
[6] Linn, R.R. & Cunningham, P., Numerical simulations of grass fires using a coupled atmosphere-fire model: basic fire behavior and dependence on wind speed. Journal of Geophysical Research, 110, D13107, 2005.
[7] Mitsopoulos, I.D. & Dimitrakopoulos, A.P., Canopy fuel characteristics and potential crown fire behaviour in Aleppo pine (Pinus halepensis Mill.) forests. Annals of Forest Science, 64, pp. 287–299, 2007.
Spotting ignition of fuel beds by firebrands
C. Lautenberger & A. C. Fernandez-Pello
Department of Mechanical Engineering, University of California, Berkeley, CA, USA
Abstract

Wind can carry fire-lofted embers or molten/burning metal particles generated by powerline interactions long distances, where they may land on and ignite fuel beds remote from the source. This process, known as spotting, is a common mechanism of wildland and wildland-urban interface fire propagation. The physical processes leading to spot fire initiation after an ember or heated particle has landed are not yet quantitatively understood. To provide insight into spot fire initiation, this paper presents a comprehensive 2D numerical model for the potential ignition of a porous fuel bed by an ember or hot metal particle. The model consists of a computational fluid dynamics (CFD) representation of the gas phase coupled to a heat transfer and pyrolysis model that simulates condensed-phase phenomena. The coupled model is used to simulate the ignition of a powdered cellulose porous fuel bed by glowing pine embers in a laboratory experiment. The model provides information regarding the mechanisms that lead to ignition, smolder, or flame propagation on a porous fuel bed that agrees qualitatively with experimental observations. This work provides the foundation for a more complete study of the problem in which the effects of different factors (moisture content, humidity, temperature, porosity, particle size/heat content, etc.) are quantified.

Keywords: spotting, embers, ignition.
1 Introduction
Firebrand spotting is a primary mechanism for the spread of both wildland and wildland-urban-interface (WUI) fires. Spotting can lead to rapid fire spread because firebrands generated by burning vegetation are lofted by the fire plume and transported downwind to ignite secondary fires or structures far from the fire front. In addition to propagation by firebrand spotting, many wildland fires are initiated by heated or burning metallic particles generated from different sources such as powerline interactions or conductor clashing in high winds, overheated catalytic converters, and hot work/welding.

The three primary steps in the formation of spot fires are: 1) firebrand/metal particle generation; 2) firebrand lofting and ember/particle transport; and 3) ignition (or non-ignition) of fuels after a firebrand/particle lands. Of these, the least understood aspect of the spot fire formation process is what happens after a firebrand or heated particle lands on a target fuel bed, i.e. whether or not flaming ignition (or smoldering ignition followed by transition to flaming) occurs. This highly complex process depends on several factors including: the size and state of the brand (smoldering/glowing, flaming), the characteristics of the fuel bed on which it lands (temperature, density, porosity, moisture content), and the environmental conditions (temperature, humidity, wind velocity). Three types of ignition mechanisms that may occur are: 1) smoldering ignition; 2) piloted gas-phase ignition induced by the brand or particle itself; and 3) prolonged smolder followed by spontaneous transition to flaming.

Ignition of fuels by firebrands and heated surfaces has been studied primarily experimentally, in particular by workers at NIST [1–4]. Jones [5–7] applied "hot spot" theory to investigate the problem analytically. Zvyagils'kaya and Subbotin [8] and Grishin et al. [9] applied numerical models that considered a porous condensed phase representing natural vegetation. However, there have been few modeling studies that couple a porous condensed-phase model (to simulate the target fuel bed) to a gas-phase code (to simulate the exterior "ambient").

doi:10.2495/CMEM090541
This coupled approach is required to faithfully simulate the three ignition mechanisms described above, and considerable progress is still needed before models reach this point and can be considered predictive. The objective of this work is to develop a model to simulate the smoldering ignition of powdered cellulose fuel beds by glowing pine embers. The source code, executable files, and sample input files are freely available through an open-source project known as Gpyro [10].
2 Model description
2.1 Physical configuration

The physical configuration modeled here is the ember-initiated smolder of a powdered cellulose fuel bed with air flowing over its surface. In addition to simulating this physical problem with a computer model, a few qualitative laboratory experiments were conducted, making it possible to qualitatively compare the behavior predicted by the model to that seen experimentally. To introduce the physical configuration simulated here, the experiments are described briefly below.

The bench-scale test apparatus consists of a small-scale wind tunnel 38 cm in length, 13.5 cm in width, and 8 cm in height. Powdered cellulose is placed in an aluminum sample holder that is 12 cm in length with a 4 cm by 4 cm cross section. The sample is conditioned in an oven at 110 ºC for approximately 1 hour
to remove most moisture near the surface before testing. To reduce heat losses, the sample holder is lined with fiberglass insulation (3 mm thick) and then embedded flush in the bottom wall of the wind tunnel. Compressed air flows through a converging duct into the test section at a prescribed velocity. An experiment begins by dropping a firebrand (ember) onto the powdered cellulose. Embers are generated by immersing pine cylinders of different sizes in a premixed propane flame for 40 s. Glowing embers are dropped onto the target fuel surface after flaming combustion has ceased. The qualitative information from the experiments is used to help formulate the model and verify its predictive capabilities.

In this work, a two-dimensional "slice" down the centerline of the experimental apparatus is modeled. Use of a 2D (instead of 3D) computational domain significantly reduces the required CPU time and is commensurate with the qualitative nature of simulations at this early stage of model development. The computational domain used in the modeling is shown in Figure 1. To limit CPU and storage requirements, a subsection of the wind tunnel 25.6 cm in length is modeled (the total length of the wind tunnel is 38 cm).
Figure 1: Computational domain (a 25.6 cm long, 8 cm high subsection of the 38 cm wind tunnel, with oxidizer flow over the powdered cellulose sample and the firebrand on its surface).
The model description that follows is split into three parts: 1) condensed phase, which applies inside the powdered cellulose; 2) gas phase, which applies in the exterior ambient; and 3) boundary/initial conditions.

2.2 Condensed-phase (porous fuel bed)

The condensed-phase computational model formulation includes the two-dimensional conservation equations for a combustible porous material undergoing thermal and oxidative reactions. The equations are solved numerically, with details given in [11].
2.2.1 Governing equations

Assumptions inherent in the condensed-phase model formulation are given in [11]. The resultant two-dimensional conservation equations (and auxiliary relations such as Darcy's law and the ideal gas law) that apply inside the powdered cellulose are (see [10, 11] for details):

Condensed-phase mass:
$$\frac{\partial \bar{\rho}}{\partial t} = -\dot{\omega}'''_{fg} \tag{1}$$

Condensed-phase species:
$$\frac{\partial (\bar{\rho}\bar{Y}_i)}{\partial t} = \dot{\omega}'''_{fi} - \dot{\omega}'''_{di} \tag{2}$$

Gas-phase mass:
$$\frac{\partial (\rho_g \bar{\psi})}{\partial t} + \frac{\partial \dot{m}''_x}{\partial x} + \frac{\partial \dot{m}''_z}{\partial z} = \dot{\omega}'''_{fg} \tag{3}$$

Gas-phase species:
$$\frac{\partial (\rho_g \bar{\psi} Y_j)}{\partial t} + \frac{\partial (\dot{m}''_x Y_j)}{\partial x} + \frac{\partial (\dot{m}''_z Y_j)}{\partial z} = -\frac{\partial \dot{j}''_{j,x}}{\partial x} - \frac{\partial \dot{j}''_{j,z}}{\partial z} + \dot{\omega}'''_{fj} - \dot{\omega}'''_{dj} \tag{4a}$$

$$\dot{j}''_{j,x} = -\bar{\psi}\rho_g D_j \frac{\partial Y_j}{\partial x}, \qquad \dot{j}''_{j,z} = -\bar{\psi}\rho_g D_j \frac{\partial Y_j}{\partial z} \tag{4b}$$

Condensed-phase energy:
$$\frac{\partial (\bar{\rho}\bar{h})}{\partial t} + \frac{\partial (\dot{m}''_x h_g)}{\partial x} + \frac{\partial (\dot{m}''_z h_g)}{\partial z} = -\frac{\partial \dot{q}''_x}{\partial x} - \frac{\partial \dot{q}''_z}{\partial z} + \dot{Q}'''_s + \sum_{i=1}^{M} \left( \dot{\omega}'''_{fi} - \dot{\omega}'''_{di} \right) h_i \tag{5a}$$

$$\dot{q}''_x = -\bar{k}\frac{\partial \bar{T}}{\partial x}, \qquad \dot{q}''_z = -\bar{k}\frac{\partial \bar{T}}{\partial z} \tag{5b}$$

Thermal equilibrium between the phases:
$$T_g = \bar{T} \tag{6}$$

Darcy's law:
$$\dot{m}''_x = -\frac{K}{\nu}\frac{\partial P}{\partial x}, \qquad \dot{m}''_z = -\frac{K}{\nu}\frac{\partial P}{\partial z} \tag{7}$$

Ideal gas law:
$$\rho_g = \frac{P\bar{M}}{R T_g} \tag{8}$$

The pressure is determined by substituting Eq. (8) and Eq. (7) into Eq. (3) and solving the resultant pressure evolution equation. In the equations above, a subscript i refers to the condensed phase and a subscript j refers to the gas phase. An overbar denotes a weighted or averaged quantity, i.e. $\bar{k} = \sum X_i k_i$. See Lautenberger [11] for details.

2.2.2 Source terms

The governing equations presented in the previous section contain several source terms attributed to chemical reactions ($\dot{\omega}'''_{fi}$, $\dot{\omega}'''_{di}$, $\dot{\omega}'''_{fj}$, $\dot{\omega}'''_{dj}$, $\dot{\omega}'''_{fg}$, and $\dot{Q}'''_s$) that must be quantified. These source terms are presented below in generalized form. Heterogeneous reaction stoichiometry is written in general form as:

$$1 \text{ kg } A_k + \sum_{j=1}^{N} \nu'_{j,k} \text{ kg gas } j \rightarrow \nu_{B,k} \text{ kg } B_k + \sum_{j=1}^{N} \nu''_{j,k} \text{ kg gas } j \tag{9a}$$

$$\nu_{B,k} = \frac{\bar{\rho}_{B_k}}{\bar{\rho}_{A_k}} \tag{9b}$$

Each reaction k converts a condensed-phase species having index A_k to a condensed-phase species having index B_k. Gases may be consumed or produced in the process. The destruction rate of condensed-phase species A_k by reaction k is calculated as either thermal or oxidative pyrolysis:

$$\dot{\omega}'''_{dA_k} = \left( \bar{\rho}\bar{Y}_{A_k} \right)_{\Sigma} \left[ \frac{\bar{\rho}\bar{Y}_{A_k}}{\left( \bar{\rho}\bar{Y}_{A_k} \right)_{\Sigma}} \right]^{n_k} Z_k \exp\left( -\frac{E_k}{R\bar{T}} \right) \quad (\text{for } n_{O_2,k} = 0) \tag{10a}$$

$$\dot{\omega}'''_{dA_k} = \left( \bar{\rho}\bar{Y}_{A_k} \right)_{\Sigma} \left[ \frac{\bar{\rho}\bar{Y}_{A_k}}{\left( \bar{\rho}\bar{Y}_{A_k} \right)_{\Sigma}} \right]^{n_k} \left( 1 + Y_{O_2} \right)^{n_{O_2,k}} Z_k \exp\left( -\frac{E_k}{R\bar{T}} \right) \quad (\text{for } n_{O_2,k} \neq 0) \tag{10b}$$

The formation rate of condensed-phase species B_k by reaction k is related to condensed-phase bulk density ratios as:

$$\dot{\omega}'''_{fB_k} = \nu_{B,k}\,\dot{\omega}'''_{dA_k} = \frac{\bar{\rho}_{B_k}}{\bar{\rho}_{A_k}}\,\dot{\omega}'''_{dA_k} \tag{11}$$

The formation rate of all gases (the conversion rate of condensed-phase mass to gas-phase mass) by reaction k is:

$$\dot{\omega}'''_{fg,k} = \left( 1 - \nu_{B,k} \right) \dot{\omega}'''_{dA_k} = \left( 1 - \frac{\bar{\rho}_{B_k}}{\bar{\rho}_{A_k}} \right) \dot{\omega}'''_{dA_k} \tag{12}$$

The formation and destruction rates of gaseous species j from condensed-phase reaction k are calculated as:

$$\dot{\omega}'''_{fj,k} = \nu''_{j,k}\,\dot{\omega}'''_{dA_k} = \dot{\omega}'''_{fg,k}\,\max\left( y_{s,j,k}, 0 \right) \tag{13a}$$

$$\dot{\omega}'''_{dj,k} = \nu'_{j,k}\,\dot{\omega}'''_{dA_k} = -\dot{\omega}'''_{fg,k}\,\min\left( y_{s,j,k}, 0 \right) \tag{13b}$$

where y_{s,j,k} is the N by K species yield matrix; see Lautenberger [11] for details. Associated with each reaction k is a heat of reaction:

$$\dot{Q}'''_{s,k} = -\dot{\omega}'''_{fg,k}\,\Delta H_{vol,k} \tag{14}$$

The total source terms appearing in the conservation equations are obtained by summing over all reactions.
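To make the rate expressions concrete, the destruction rate of Eq. (10) can be sketched as follows. This is an illustrative reimplementation under the notation above, not the Gpyro source code; parameter names are chosen for readability:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def destruction_rate(rho_y, rho_y_sigma, n, z_pre, e_act, temp,
                     y_o2=0.0, n_o2=0.0):
    """Destruction rate of condensed-phase species A_k, per Eq. (10).

    rho_y: current bulk mass (rho*Y) of the species; rho_y_sigma: its
    reference (initial) value; n: reaction order; z_pre, e_act: Arrhenius
    pre-exponential factor and activation energy; temp: temperature (K).
    For oxidative reactions (n_o2 != 0) the rate is scaled by
    (1 + y_o2)**n_o2, making it sensitive to the local oxygen mass fraction.
    """
    rate = (rho_y_sigma * (rho_y / rho_y_sigma) ** n
            * z_pre * math.exp(-e_act / (R * temp)))
    if n_o2 != 0.0:
        rate *= (1.0 + y_o2) ** n_o2
    return rate
```

Note that the rate vanishes as the species is consumed (rho_y → 0 for n > 0), and summing such rates over all reactions yields the total source terms of Eqs. (11)-(14).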
2.2.3 Numerical solution methodology

The governing equations described above yield a system of coupled algebraic equations that are solved numerically. Due to the nonlinearity introduced by the source terms and temperature-dependent thermophysical properties, a fully implicit formulation is adopted for the solution of all equations. The gas-phase species, gas-phase momentum, and condensed-phase energy conservation equations are solved using a computationally efficient tridiagonal matrix algorithm (TDMA). The two-dimensionality of the governing equations is handled using a line-by-line TDMA. The condensed-phase mass and condensed-phase species conservation equations are solved with a customized fully implicit solver that uses relaxation to prevent divergence. Convective terms are fully upwinded. Additional details are given in [11]. The condensed phase uses a nominal 1 mm by 1 mm grid spacing. The time step is set by the gas-phase code to satisfy the CFL condition required for stability.

2.2.4 Reaction mechanism and material properties

The reaction mechanism used here is based on a mechanism developed previously to simulate the oxidative pyrolysis of white pine [11]. Since the powdered cellulose samples are dried before conducting an experiment, the moisture evaporation step is excluded, and the mechanism consists of three steps:
(15.1)
cellulose + ν O2cell O 2 → ν char char + ν op oxidative pyrolysate
(15.2)
char + ν O2char O 2 → ν ash ash + ν cop char oxidation products
(15.3)
The ν coefficients in Eq. (15) are related to bulk density ratios and the species yield matrix discussed earlier (see Eq. (9) and Eq. (13)). 2.3 Gas-phase (exterior ambient) The pyrolysis model described in Section 2.2 is coupled to Fire Dynamics Simulator (FDS) Version 5.1.3 [12], where it is applied as a boundary condition. The gas-phase equations solved by FDS and the solution methodology are described in detail in the FDS Technical Reference [12]. When applying FDS in this paper, the following simplifications and approximations are made: • 2D elliptic flow • Gas-phase dynamic viscosity is the molecular value (rather than the effective value calculated from the Smagorinsky model) • Single-step irreversible Arrhenius combustion reaction The FDS gas-phase routines are modified only minimally in this work (to permit coupling to the condensed-phase and facilitate specification of a volumetric heat source representing a glowing ember) so the reader is referred to the FDS Technical Reference [12] for complete details of the gas-phase model. WIT Transactions on Modelling and Simulation, Vol 48, © 2009 WIT Press www.witpress.com, ISSN 1743-355X (on-line)
2.4 Boundary and initial conditions

The boundary and initial conditions on the gas phase (handled by FDS) and the powdered cellulose (handled by the pyrolysis model discussed in Section 2.2) are described below. Due to its importance in the simulations, the boundary condition applicable to the ember is also discussed.

2.4.1 Boundary conditions

For the gas-phase calculation (exterior ambient), the upper wall of the duct is modeled as an FDS 'INERT' boundary condition (an impermeable wall with temperature maintained at the ambient value of 20 ºC). Air is introduced from the left by a prescribed velocity boundary condition (0.5 m/s) with specified gas-phase mass fractions (0.23 for oxygen, 0.77 for nitrogen, and 0 for the remaining gases), and gases leave the computational domain on the right via an FDS 'OPEN' boundary condition. Referring to Figure 1, the boundary condition at the bottom wall is modeled as steel for 0 cm < x < 6 cm and 18.5 cm < x < 25.6 cm. For these solid surfaces, the gas-solid coupling is handled directly by FDS. The boundary condition at the bottom wall is powdered cellulose for 6 cm < x < 18.5 cm, and the coupling between FDS and the powdered cellulose is described in greater detail below.

For the powdered cellulose, the three bounding surfaces that do not abut the gas phase are modeled as impermeable and perfectly insulated. The powdered cellulose abuts the gas-phase exterior ambient at x = 0. At this interface, there is full coupling between the powdered cellulose (simulated using the pyrolysis model described earlier) and the gas phase (simulated using FDS). That is, the temperature of the powdered cellulose is calculated by the pyrolysis model and passed to FDS. Similarly, the convection or diffusion of gas-phase species into or out of the powdered cellulose is calculated by the pyrolysis model and passed to FDS as a mass flux.
2.4.2 Initial conditions

The gas phase is initially quiescent (zero velocity) with a temperature of 20 ºC and a background pressure of 101.3 kPa. The initial gas-phase mass fractions are 0.23 for oxygen, 0.77 for nitrogen, and zero for the remaining species. The powdered cellulose is also initially at a temperature of 20 ºC. The initial condensed-phase mass fractions are 1.0 for cellulose, 0.0 for char, and 0.0 for ash.

2.4.3 Ember model

The ember is treated as a volumetric heat source. The order of magnitude of the volumetric heat source is estimated as:
$$\dot{Q}''' \approx \frac{\left( \rho_v - \rho_a \right) \Delta H_c}{t_b} \approx \frac{400 \text{ kg/m}^3 \times 10 \text{ MJ/kg}}{900 \text{ s}} \approx 4 \text{ MW/m}^3 \tag{16}$$
where it is assumed that the brand leaves an ash with a density (ρa) that is negligible in comparison to that of the virgin brand (ρv ≈ 400 kg/m3), that the brand is completely consumed in 15 minutes (tb = 900 s), and that the average heat of combustion is 10 MJ/kg (lower than typical values for wood due to an assumed incompleteness of combustion). This is a crude estimate of the heat release rate per unit volume of a glowing ember, so Q̇''' is treated as a parameter.
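The order-of-magnitude estimate of Eq. (16) can be checked numerically. This is a sketch of the arithmetic only, using the assumed values stated above:

```python
def ember_heat_source(rho_virgin=400.0, rho_ash=0.0, dh_c=10.0e6, t_burn=900.0):
    """Volumetric heat source of a glowing ember, Eq. (16):
    Q''' ~ (rho_v - rho_a) * dH_c / t_b, returned in W/m^3.

    rho_virgin: virgin brand density (kg/m^3); rho_ash: residual ash
    density (kg/m^3, assumed negligible); dh_c: effective heat of
    combustion (J/kg); t_burn: burnout time (s).
    """
    return (rho_virgin - rho_ash) * dh_c / t_burn


print(ember_heat_source() / 1.0e6)  # MW/m^3, roughly 4 as in the paper
```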
3 Results
Two different ignition source strengths (representing the ember) are investigated here: 4 MW/m3 and 6 MW/m3. The air velocity and temperature are 0.5 m/s and 20 °C, respectively. In the first case, the temperatures are low enough that minimal smoldering occurs, with a thin char layer forming only near the cellulose surface where it abuts the heat source representing the ember. In the second simulation, the ignition source strength is increased by 50%, causing considerable smolder to occur. This leads to gas-phase ignition after ~55 s. Figure 2 shows the calculated gas-phase temperatures at 60 s (approximately 5 s after gas-phase ignition) and at 90 s (approximately 35 s after gas-phase ignition). The flame has started to spread both upstream (against the oncoming flow) and downstream (in the same direction as the oncoming flow). This is qualitatively consistent with experiments conducted with flaming brands, which show flame propagation both upstream and downstream.
Figure 2: Gas-phase temperatures for the 6 MW/m3 ember. (a) 60 s; (b) 90 s.
The condensed-phase temperature profile is shown in Figure 3 at the same times as in Figure 2. Five seconds after ignition, the calculated temperature contour is similar to that for the 4 MW/m3 ember at 60 s, but the heated area is larger due to the 50% greater heat release rate of the ignition source. By 35 s after ignition, the size of the heated region has increased considerably due to convective and radiative heating from the gas-phase flame. The model calculates the concentrations of various gas-phase species inside the decomposing porous solid (cellulose in this case). This is critical for predicting the transition from smolder to flaming, as well as for accounting for differences in burning behavior in inert and oxidative environments.
Figure 3: Condensed-phase temperatures for the 6 MW/m3 ember. (a) 60 s; (b) 90 s.

4 Concluding remarks
The model and resultant computer simulations presented here appear to be capable of discerning between conditions that will or will not lead to initiation of a spot fire after landing of an ember. Additional work is required to characterize practical materials and to better understand the boundary condition between the ember or heated particle and the target fuel bed. Particularly challenging is determining the material properties and reaction kinetics of various fuels that must be supplied as input to the model.
Acknowledgements This work was funded by NSF under Award 0730556, “Tackling CFD Modeling of Flame Spread on Practical Solid Combustibles”. The authors would like to thank Sonia Fereres for setting up and running the experiments.
References
[1] Manzello, S.L., Cleary, T.G., Shields, J.R., Maranghides, A., Mell, W., and Yang, J.C., "Experimental Investigation of Firebrands: Generation and Ignition of Fuel Beds," to appear in Fire Safety Journal (2008).
[2] Manzello, S.L., Cleary, T.G., Shields, J.R., and Yang, J.C., "On the ignition of fuel beds by firebrands," Fire and Materials 30: 77–87 (2006).
[3] Manzello, S.L., Cleary, T.G., Shields, J.R., and Yang, J.C., "Ignition of mulch and grasses by firebrands in wildland-urban interface fires," International Journal of Wildland Fire 15: 427–431 (2006).
[4] Pitts, W., "Ignition of Cellulosic Fuels by Heated and Radiative Surfaces," NIST Technical Note 1481, March 2007.
[5] Jones, J.C., "Predictive Calculations of the Effect of an Accidental Heat Source on a Bed of Forest Litter," Journal of Fire Sciences 11: 80–86 (1993).
[6] Jones, J.C., "Further Calculations Concerning the Accidental Supply of Heat to a Bed of Forest Material," Journal of Fire Sciences 12: 502–505 (1994).
[7] Jones, J.C., "Improved Calculations Concerning the Ignition of Forest Litter by Hot Particle Ingress," Journal of Fire Sciences 13: 350–356 (1995).
[8] Zvyagils'kaya, A.I. and Subbotin, A.N., "Influence of Moisture Content and Heat and Mass Exchange with the Surrounding Medium on the Critical Conditions of Initiation of Surface Fire," Combustion, Explosion, and Shock Waves 32: 558–564 (1996).
[9] Grishin, A.M., Dolgov, A.A., Zima, V.P., Kryuchkov, D.A., Reino, V.V., Subbotin, A.N., and Tsvyk, R.Sh., "Ignition of a Layer of Combustible Forest Materials," Combustion, Explosion, and Shock Waves 34: 613–620 (1998).
[10] http://code.google.com/p/gpyro
[11] Lautenberger, C.W., "A Generalized Pyrolysis Model for Combustible Solids," PhD Dissertation, Department of Mechanical Engineering, University of California, Berkeley, 2007.
[12] http://repositories.cdlib.org/cpl/fs/LautenbergerPhD/
[13] McGrattan, K., Hostikka, S., Floyd, J., Baum, H., and Rehm, R., "Fire Dynamics Simulator (Version 5) Technical Reference Guide," NIST Special Publication 1018-5, 2007.
Impact of fuel-break structure on fire behaviour simulated with FIRETEC
F. Pimont1, J.-L. Dupuy1 & R. R. Linn2
1 I.N.R.A. Ecologie des Forêts Méditerranéennes (UR 629), Site Agroparc, Domaine de Saint Paul, Avignon, France
2 Los Alamos National Laboratory, MS D401, Los Alamos, NM, USA
Abstract

This study focuses on the effects of fuel structure, and in particular its spatial heterogeneity, in the context of fuel-break design. The coupled atmosphere-wildfire behaviour model HIGRAD/FIRETEC is used to simulate wind fields and fire propagation in a complex landscape including forest-to-break and break-to-forest transitions. Two different Mediterranean ecosystems are used here: a Pinus halepensis (light canopy) and a Pinus pinaster (dense canopy). In both ecosystems, two forest zones are separated by a 200 m break. The study is separated into two parts. First, the break-induced winds are simulated with FIRETEC. The impact of the break structure (cover fraction, clump size) on the mean wind and turbulence statistics is shown. A significant increase in wind velocity and turbulence is observed when the cover fraction is reduced within the break. In addition, at low cover fraction, the introduction of tree clumps also induces wind acceleration. In the second part of the study, a fire line is ignited in the area upwind of the break and the fire propagation is computed using the pre-computed wind fields of the first part. The fire propagates in the upwind forest area before crossing the break and propagating in the downwind forest area. A decrease of fire intensity occurs after several meters of propagation on the fuel-break. This intensity decrease is significant when the cover fraction is lower than or equal to 25%, but negligible at 50%. In addition, in the Pinus pinaster canopy, the fuel structure and especially the clump size affect the fire damage.

Keywords: fire behaviour, fuel effects, fuel-break, physically-based model.
doi:10.2495/CMEM090551
1 Introduction
Forest fire prevention is mainly based on fuel reduction, in order to reduce fire intensity and crowning activity. However, the impact of fuel-break planning and forest protection policies on fire behaviour is not yet well known. They generally induce heterogeneous spatial patterns of vegetation, with variations in fuel distribution and continuity. A better understanding of the effects of fuel spatial pattern on fire intensity and crowning activity is frequently requested by forest managers for fuel-break planning.

Several case studies can be found in the literature. Some of them are descriptions of real wildfires [1] and others are modelling works [2–5]. Dupuy and Morvan [2], Linn et al. [3] and Pimont et al. [4] used physics-based models to assess fire propagation under different conditions of fuel treatment; however, they did not integrate both transitions, from forest to clearing and from clearing to forest, which affect both the ambient conditions (wind) and the fire. In [5], the authors studied wildfire propagation at landscape scale in the context of multiple fuel-breaks, but the effects of each break on fire behaviour were postulated.

Wind simulation accuracy is generally known to be critical for detailed fire behaviour prediction [6]. In canopies, wind flows are dominated by a turbulent regime, with short periods of strong gusts due to the Kelvin-Helmholtz instability [7]; they are locally strongly affected by clearings of several times the height of the canopy [8]. These effects should be included in models in order to reproduce fire behaviour in such complex landscapes. HIGRAD/FIRETEC is a three-dimensional physically-based model, which has begun to be validated through comparisons with experimental data, not only in grasslands [9] but also for crown fires in complex situations [10].
Simulations can be run at a resolution of two meters near the ground, and effects due to vegetation structure can be taken into account explicitly at this scale, in order to investigate the impact of fuel structure on fire behaviour [3, 4]. In addition, FIRETEC's ability to accurately simulate wind flows and turbulence over complex configurations such as fuel-breaks has already been demonstrated against detailed experimental data [11]. In the present study, we used FIRETEC to assess the impact of fuel-break structure on both wind flows and fire behaviour in maritime and Aleppo pine canopies. Fuel structure is described by the cover fraction and the tree clump dimension.
2 Simulation characteristics
The HIGRAD/FIRETEC modelling system is a three-dimensional, two-phase transport model that solves the conservation equations of mass, momentum, energy and chemical species. A detailed description of the physical and chemical formulation of the model is available in [9–11]. FIRETEC includes representations of vegetation in order to simulate turbulent flows and fire propagation at fine (metre) scales within and above heterogeneous vegetation canopies.
Figure 1: Top view of the fuel scene (ignition zone in the upwind forest, fuel-break, downwind forest; x from 0 to 640 m, y from 0 to 320 m).

Table 1: Fuel physical characteristics.

                        ρ (kg.m-3)   σ (m-1)   M (%)   CBH (m)   H (m)
Canopy: - Aleppo           0.1        10000     100      4.75      12
        - Maritime         0.5         5000     100      4.75      12
Understorey                1.0         5000      70      0         0.5

ρ: bulk density; σ: surface-area-to-volume ratio; M: moisture content; CBH: crown base height; H: height.

In the present study, the size of the computational domain was set to 640 m × 320 m × 615 m (fig. 1), with a horizontal resolution of 2 m. The mesh was stretched in the vertical direction, from a 1.5 m resolution near the ground to 40 m at the top.

2.1 Fuel characteristics

The canopy height was set to h = 12 m in the stand, with a 50 cm understorey. The physical characteristics of the fuel are given in table 1. The forest had a cover fraction of C = 75% and a tree clump size of L = 4 m (table 2). The fuel-break area had the same understorey, but the canopy was modified: for the two pine species (Pa and Pm), C ranged between 0 and 75% (Het0, Het25, Het50, Het75) and L ranged between 4 and 20 m (a, b, c). Vegetation homogenized at the fuel-break scale was also tested (Hom25 was spatially homogeneous but had the same load as Het25a, b or c); it can be seen as the limiting case of an infinitely small clump size.

2.2 Wind simulation

The initial wind flow was considered in equilibrium with the ground, which means that the initial velocity profile was logarithmic. The technique used to
resolve the conservation equations is similar to the large-eddy simulation (LES) technique. For the analysis here, we mainly considered the mean streamwise velocity in the middle of the fuel-break, averaged over 3600 s of simulated time. Other flow statistics were also considered, but are not shown in the present paper.

Table 2: Spatial characteristics of canopy fuel on the fuel-break area.
Case                      Pine species    L (m)   C (%)   ρmax (kg.m-3)   LAI    Load (kg.m-2)
PmHet75a (no treatment)   Maritime pine     4      75         0.5          7.2       2.7
PmHet50a                                    4      50         0.5          4.8       1.8
PmHet25a                                    4      25         0.5          2.4       0.9
PmHet25b                                   10      25         0.5          2.4       0.9
PmHet25c                                   20      25         0.5          2.4       0.9
PmHom25                                     0     100         0.17         2.4       0.9
PmHet0                                      -       0         0.0          0.0       0.0
PaHet75a (no treatment)   Aleppo pine       4      75         0.1          2.9       0.54
PaHet50a                                    4      50         0.1          1.9       0.36
PaHet25a                                    4      25         0.1          0.97      0.018
PaHet25b                                   10      25         0.1          0.97      0.018
PaHet25c                                   20      25         0.1          0.97      0.018
PaHom25                                     0     100         0.033        0.97      0.018
PaHet0                                      -       0         0.0          0.0       0.0
L: clump size; C: cover fraction; ρmax: maximum bulk density; LAI: leaf area index.

2.3 Fire simulation

A fire line was ignited in the upwind forest area (fig. 1). The environmental wind fields used as boundary conditions were derived from the wind simulation described above, in order to reproduce appropriate ambient conditions (mean wind profile and turbulence). Cyclic boundary conditions in the y-direction were used in order to simulate an infinite fire line arriving at the fuel-break. Here, we studied the fire behaviour during its propagation, mainly in terms of fire intensity (computed from the predicted fuel consumption). We also compared the impact of the fuel treatments on fuel consumption on the fuel-break.
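The logarithmic equilibrium inflow described in section 2.2 can be sketched as a neutral log-law profile. The friction velocity and roughness length below are illustrative assumptions, not values taken from the FIRETEC setup:

```python
import math

def log_wind_profile(z, u_star=0.5, z0=0.1, kappa=0.4):
    """Neutral logarithmic wind profile u(z) = (u*/kappa) * ln(z/z0).

    u_star (friction velocity, m/s) and z0 (roughness length, m) are
    illustrative assumptions, not parameters from the paper.
    """
    if z <= z0:
        return 0.0
    return (u_star / kappa) * math.log(z / z0)

# Velocity increases monotonically with height above the roughness length.
heights = [2.0, 6.0, 12.0, 24.0]
profile = [log_wind_profile(z) for z in heights]
```

Such a profile is only the initial condition; the resolved canopy drag then reshapes it into the inflected profiles discussed in section 3.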
3 Wind simulations
The drag force within the canopy induced shear in the wind velocity profile, which caused the development of Kelvin-Helmholtz instabilities, explicitly resolved by the code. The mean vertical profile of streamwise velocity was characterized by an inflection near z = 2/3 h, as can be seen in fig. 2a and b (simulations PaHet75a and PmHet75a). The inflection was stronger in maritime pine than in Aleppo pine, due to a higher leaf area density.
3.1 Effect of C on mean streamwise velocity

In the presence of a fuel-break, the fuel reduction was associated with a decrease in the cover fraction (C reduced to between 0 and 50%), which resulted in a lower average drag force on the break area. This drag reduction induced an increase of the mean streamwise velocity in comparison with the forest values and a weakening of the inflection (fig. 2).
Figure 2: Streamwise mean velocity profile in the middle of the fuel-break.
For a given cover fraction (C = 25%, for example), increasing the clump size L (from 4 to 20 m) increased the mean streamwise velocity (fig. 3a, maritime pine), and the homogenized case showed the lowest values: the larger the clump size, the higher the mean streamwise velocity. These clump size effects followed the same trend in Aleppo pine, but their magnitude was almost negligible, because the leaf area density is much smaller than in maritime pine. The clump size also affected the spatial variability of the mean streamwise velocity. Fig. 3b shows the variability of the mean streamwise velocity along the y axis, normalized by its mean value along that axis; it increased with the clump size.
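One simple way to quantify the normalized spanwise variability plotted in fig. 3b is the standard deviation of the time-averaged streamwise velocity along y, divided by its spanwise mean. The velocity samples below are made up for illustration, not FIRETEC output:

```python
import statistics

def spanwise_variability(u_mean_y):
    """Std-dev of the mean streamwise velocity along y, normalized by its
    spanwise mean (a coefficient of variation).

    u_mean_y: time-averaged streamwise velocity sampled along the y axis.
    The sample values below are hypothetical, for illustration only.
    """
    mean = statistics.fmean(u_mean_y)
    return statistics.pstdev(u_mean_y) / mean

# Larger clumps leave wider gaps, so the velocity varies more along y.
small_clumps = [3.0, 3.2, 2.9, 3.1]   # hypothetical samples, L = 4 m
large_clumps = [2.0, 4.5, 1.8, 4.2]   # hypothetical samples, L = 20 m
```

With these made-up samples, the large-clump case yields the larger variability, consistent with the trend reported for fig. 3b.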
4 Fire simulation
4.1 Fire propagation within the break (case study PaHet25a)

The case PaHet25a was the Aleppo pine forest with a fuel-break of 25% cover fraction and 4 m clump size. After ignition, the fire propagated from the upwind forest area (fig. 4b), through the fuel-break (fig. 4c) and into the downwind forest area (fig. 4d). Fig. 4a shows the fire intensity as a function of the position x of the firefront in the domain. The greyed area represents the fuel-break
position. At the beginning of the simulation, the intensity increased until the fire propagation was established. After the transition onto the fuel-break, the intensity decreased; the minimum value was obtained once the fire had travelled 60 m within the fuel-break. Downwind of the leading edge, the fire intensity increased back up to the values of the upwind forest area.
Figure 3: Effects of clump size L on mean profiles and spatial variability of the mean flow, in the middle of the maritime pine fuel-break.
The rate of spread was close to 0.8 ms-1, with no significant modification on the fuel-break.

4.2 Effects of structure parameters

Here, we compare the intensity diagrams for the different canopy structures. Fig. 5 shows the intensity for the different cover fractions (from 0 to 75%) for Aleppo (a) and maritime (b) pine. As expected, the intensity on the fuel-break decreased with cover fraction. However, it is worth noting that a fuel reduction to 50% cover (instead of 75%) barely reduced fire intensity, whereas a reduction to 25% or even 0% was much more efficient. In addition, the intensity magnitude was much stronger in maritime pine than in Aleppo pine. This result is mainly due to the fact that the propagation in Aleppo pine was characterized by torching at this moderate wind speed, whereas the propagation in maritime pine was characterized by crowning (at high cover fraction). This difference is explained by the different bulk densities of the two pine species [4]. The effects of clump size on fire intensity were almost negligible in both ecosystems at 25% cover fraction (fig. 6). However, the homogenization of fuel at the fuel-break scale (Hom25) had significant effects on fire behaviour: for Aleppo pine, fire intensity and flame heights were lower in the homogenized case than in the heterogeneous cases (fig. 6a and 7a&c); for maritime pine, fire intensity and
Figure 4: Fire behaviour on an Aleppo pine fuel-break (C = 25%, L = 4 m).

Figure 5: Intensities for different cover fractions in both pine stands.
Figure 6: Intensities for different clump sizes in both pine stands.
flame heights were higher in the homogenized case than in the heterogeneous cases (fig. 6b and 7b&d). The effects of fuel homogenization were thus opposite in the two ecosystems.
5 Discussion and conclusion
5.1 Comparison to some reference data

Observed crown fire intensities usually range between 8000 and 40000 kWm-1, with some exceptional events up to 150000 kWm-1 [12]. In our simulations, intensities ranged between 7000 kWm-1 for passive crown fires in Aleppo pine and 25000-30000 kWm-1 for active crown fires in dense maritime pine. These values are reasonable considering that the wind is moderate in our simulations (12-13 kmh-1 mean velocity and 25-30 kmh-1 during gusts). In the present study, the rate of spread (ROS) was only slightly affected by canopy fuel structure and generally ranged between 0.8 and 1 ms-1; it was not much affected by the different fuel treatments. In fact, the fuel treatments only deal with the canopy structure and do not affect the understorey, so our results are consistent with the idea that the ROS is controlled by the understorey fuel. Compared to [13], the ROS obtained in our study seems slightly too fast, and also a little faster than those we previously obtained with FIRETEC [4]. The main difference between these runs and the previous ones is the use of an infinite fireline (cyclic boundary condition). The simulation is three-dimensional (note the crown streets generated by the code in fig. 4); however, the infinite line shields the fire from the lateral indrafts that would cool it. This argument is supported by the fact that, in the experiment used for comparison in [10], the fire spread in the middle of the plot is much faster than close to the lateral sides, which are affected by these indrafts. Indeed, the rate of spread would probably have been much faster with a much larger
firefront. Such an argument might explain the high values obtained in this particular study. In addition, we used a precomputed wind field for the ambient wind conditions, which includes resolved turbulence and gusts that are also likely to affect fire spread, instead of a constant ambient wind.
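As a rough consistency check on these orders of magnitude, fireline intensity can be estimated with Byram's classical formula I = H·w·R (heat of combustion × fuel consumed per unit area × rate of spread). This formula is standard in fire science but is not the diagnostic actually computed by FIRETEC, and the fuel figures below are illustrative assumptions:

```python
def byram_intensity(heat_kj_per_kg, fuel_kg_per_m2, ros_m_per_s):
    """Byram's fireline intensity I = H * w * R, in kW/m."""
    return heat_kj_per_kg * fuel_kg_per_m2 * ros_m_per_s

# Illustrative inputs: H ~ 18000 kJ/kg, ~1.8 kg/m2 of fuel consumed
# (assumed), and R = 0.8 m/s as reported for the simulations.
intensity = byram_intensity(18000.0, 1.8, 0.8)  # ~26000 kW/m
```

With these assumed inputs, the estimate falls in the 25000-30000 kWm-1 range reported for active crown fires in dense maritime pine.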
Figure 7: Fire behaviour on different fuel-breaks.
5.2 Fuel structure effects

The cover fraction had a strong effect on fire intensity, with a threshold effect between 25 and 50%: a light thinning to 50% cover fraction is likely to be inefficient. The results also suggest that fire behaviour and heterogeneity effects differ between the two ecosystems. With the moderate wind considered, the propagation in Aleppo pine is characterized by sparse torching, whereas the propagation is much more active in maritime pine. These results were already observed on a smaller domain and with a finite fire line [4]. In addition, a detailed analysis of the wind flow, not presented here, shows that the plumes in maritime pine are strong enough to entrain downwind fresh air (negative streamwise velocity downwind of the fire), whereas in Aleppo pine the fire is simply pushed by the wind (streamwise velocity always positive). This behaviour can be understood in terms of "plume-dominated" versus "wind-driven" fires. To summarize, this study of break-induced winds and fires provides a better understanding of the impact of break structure. A significant increase of wind velocity and turbulence was observed with cover fraction reduction, and the introduction of heterogeneous tree clumps induced wind acceleration. Fire intensity decreased after several meters of propagation within
622 Computational Methods and Experimental Measurements XIV the fuel-break. The intensity reductions were significant when the cover fraction was lower or equal to 25%. In addition, in Pinus pinaster canopy, fuel structure affected fire damage.
References
[1] Lambert, B., Casteignau, D., Costa, M., Etienne, M., Guiton, J-L., Rigolot, R., Analyse après incendie de six coupures de combustible. Réseau Coupures de Combustible, Editions de la Cardère, 85 p., 1999.
[2] Dupuy, J.L. & Morvan, D., Numerical study of a crown fire spreading toward a fuel break using a multiphase physical model. International Journal of Wildland Fire, 14(2), pp. 141–151, 2005.
[3] Linn, R.R., Winterkamp, J., Colman, J.J., Edminster, C., Bailey, J., Modeling interactions between fire and atmosphere in discrete element fuel beds. International Journal of Wildland Fire, 14, pp. 37–48, 2005.
[4] Pimont, F., Linn, R.R., Dupuy, J-L., Morvan, D., Effects of vegetation description parameters on forest fire behaviour with FIRETEC. Forest Ecology and Management, 234S, S120, 2006.
[5] Finney, M.A., Selia, R.C., McHugh, C.W., Ager, A.A., Bahro, B., Agee, J.K., Simulation of long-term landscape-level fuel treatment effects on large wildfires. International Journal of Wildland Fire, 16, pp. 712–727, 2007.
[6] Butler, B., Forthofer, J., Finney, M., McHugh, C., Stratton, R., Bradshaw, L., The impact of high resolution wind field simulations on the accuracy of fire growth predictions. Forest Ecology and Management, 234S, S85, 2006.
[7] Raupach, M.R., Bradley, E.F., Ghadiri, H., A wind tunnel investigation into the aerodynamic effect of forest clearings on the nesting of Abbott's Booby on Christmas Island. Internal report, CSIRO Centre for Environmental Mechanics, Canberra, 1987.
[8] Raupach, M.R., Finnigan, J.J., Brunet, Y., Coherent eddies and turbulence in vegetation canopies: the mixing-layer analogy. Boundary-Layer Meteorology, 78, pp. 351–382, 1996.
[9] Linn, R.R., Cunningham, P., Numerical simulations of grass fires using a coupled atmosphere-fire model: basic fire behavior and dependence on wind speed. Journal of Geophysical Research, 110, D13107, 19 pp., 2005.
[10] Linn, R.R., Canfield, J., Winterkamp, J., Cunningham, P., Colman, J.J., Edminster, C., Goodrick, S.L., Numerical simulations of fires similar to the International Crown Fire Modelling Experiment. Proceedings of the Sixth Symposium on Fire and Forest Meteorology, American Meteorological Society, Canmore, Alberta, 25–27 October 2005.
[11] Pimont, F., Dupuy, J-L., Linn, R.R., Dupont, S., Validation of FIRETEC wind-flows over a canopy and a fuel-break. Submitted to International Journal of Wildland Fire, 19 p.
[12] Trabaud, L., Les feux de forêts. Editions France Sélection, 278 p., 1989.
[13] Taylor, S.W., Wotton, B.M., Alexander, M.E., Dalrymple, G.N., Variation in wind and crown fire behaviour in a northern jack pine-black spruce forest. Canadian Journal of Forest Research, 34, pp. 1561–1576, 2004.
A new model of information systems for public awareness about wildfires

P.-Y. Badillo1 & C. Sybord2
1IRSIC-Medi@SIC Laboratory, Aix-Marseille Université, France
2Université Lumière de Lyon, France
doi:10.2495/CMEM090561
Abstract

In the framework of the FIRE PARADOX European project, we have to define a public awareness strategy concerning a new management of wildfires. This strategy relies firstly on the identification of stakeholders and communication flows. To grasp the complexity of the topic we construct a first information system, and we then propose to enrich this information system so that it allows the analysis, validation and organization of information exchanges, in order to create attitudes and behaviours in favour of the general interest. This decision-making information system could thus be a socio-organizational support for communication, based on systemics. This theoretical choice makes it possible to approach the transverse structure and the multidimensional conception of the decision-making information system.

Keywords: public awareness strategy, communication, information, risk and uncertainty, decision-making information system.
1 Introduction
In the 1970s, at the time of the computerization of the French administrative areas called départements, and before the implementation of public decentralization, an information system (a database called PROMETHEE) was created within a framework of inter-departmental cooperation ("Entente Interdépartementale") for fire-fighting. This database covers the departments of the South of France and has recorded the main characteristics of forest fires for each year. In the French context of wildfire prevention and fire-fighting, this database, founded upon inter-departmental cooperation, is today an Information System (I.S.) at the disposal of decision makers, in particular for prevention (when, where and how to prevent). In reference to the theory of the general system, we define the I.S. as the system that manages the exchanges between the operating system and the system of control. This definition was enriched by the introduction of the systems of knowledge defined by Ermine [1], which represent the knowledge held by the stakeholders of an organization in relation to information processing. However, even modified by the introduction of the concepts of knowledge inheritance, the systemic approach to the I.S. does not allow one to effectively take into account the experience of the main stakeholders involved in wildfire prevention and fire-fighting. That is why we propose a new model of I.S., which could serve as a support for communication and decision for public awareness about wildfires.

The first part of this paper explains the basis of the project. Our team is involved in the FIRE PARADOX project (6th PCRD). We have to set up a public awareness strategy (Badillo and Bourgeois [2]) concerning a new management of fire, the "fire paradox management", which aims to be more efficient and more ecological in the fight against wildland fires. FIRE PARADOX is a European integrated project on fire management, coordinated by the Instituto Superior de Agronomia, Universidade Técnica de Lisboa, Portugal (see http://www.fireparadox.org). Concerning public awareness, the objective is to define and propose a communication strategy at the level of the European Union and/or various countries regarding the "fire paradox management". We propose a public awareness strategy relying on an intelligence approach that shows the communication flows among stakeholders and implies the construction of an information system, which synthesizes the complexity of the system.

The second part of this paper deals with a model of I.S. that could allow the analysis, validation and organization of information exchanges, in order to create attitudes and behaviours in favour of the general interest. We qualify this I.S. as decisional. This decision-making information system (D.M.I.S.) could be a socio-organizational support for communication, based on systemics. This theoretical choice makes it possible to approach the transverse structure and the multidimensional conception of the D.M.I.S.
2 A strategic intelligence approach: basis of the information system
Firstly we identify the stakeholders, and then we establish the first basis for an information system.

2.1 Stakeholders: identification of the targets for communication

Many stakeholders are the targets of information and communication campaigns about wildfires:
a) Property owners within or at the boundary of a forest: some mayors send letters to residents in this situation, in order to remind them of some of their obligations (targets are often selected through a geographical information system, as is the case in the city of Venelles, near Aix-en-Provence in the South-East of France).
b) Local residents: they can be reached through the press or through booklets.
c) Farmers and foresters: their associations and the ministries that deal with them are key targets.
d) Motorists: messages on roads and motorways are a good complement to campaigns, especially to discourage throwing burning cigarettes out of car windows, particularly when it is windy.
e) Tourists and forest hikers: in France, forest hiking is prohibited or regulated when there is a risk of fire; a special information phone number is displayed in the press.
f) Pupils: they are a very important target to motivate their families; comics can be a well-adapted communication support.
g) Other targets, such as businesses located at the boundary of a forest.
h) Local elected representatives are another main target.
i) Media: journalists are also, of course, an important target.
2.2 The maze of today's information stakeholders

In a systemic approach, we translated into a conceptual model the multiplicity of the target stakeholders and information stakeholders and the various information systems related to prevention and action directed against forest fires. Our analysis shows that there are three main categories of information stakeholders who practice some of the Key Success Factors for a communication campaign:
- Associations of volunteers involved in wildfire prevention.
- Firemen.
- Towns and villages, and associations of towns or villages.

In terms of timing, their actions take place essentially during two periods of the year:
- The season for bush and land clearing.
- The fire season.
Table 1 presents some actions related to the different information stakeholders and the seasons. We have developed a first information system concerning communication on fire in France: an information system on the information and communication concerning the prevention of and the fight against wildfires. Initially at the conceptual level, it is a data model of the MERISE type, applied here to the French case. We have distinguished three main levels: the national level; the intermediary level, which in France is mainly the "département" level (at the present stage of our work we consider that, concerning information flows about fires, the Regional level is the aggregation of the "département" levels); and the local (municipal) level.
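A minimal sketch of such a conceptual data model follows; the entity and attribute names are our own illustration, not the actual MERISE schema built in the project:

```python
from dataclasses import dataclass, field

# Hypothetical entities: a stakeholder situated at one of the three levels,
# and a message sent from one stakeholder to another through some channel.
@dataclass
class Stakeholder:
    name: str
    level: str  # "national", "departement" or "local"

@dataclass
class Message:
    sender: Stakeholder
    target: Stakeholder
    channel: str  # e.g. leaflet, poster, press release

@dataclass
class InformationSystem:
    messages: list = field(default_factory=list)

    def record(self, msg: Message):
        self.messages.append(msg)

    def targets_at(self, level: str):
        """Names of all stakeholders targeted at a given level."""
        return {m.target.name for m in self.messages if m.target.level == level}

# Hypothetical flow: a prefect informs a mayor, who informs residents.
prefect = Stakeholder("Prefect", "departement")
mayor = Stakeholder("Mayor", "local")
residents = Stakeholder("Local residents", "local")

isys = InformationSystem()
isys.record(Message(prefect, mayor, "official statement"))
isys.record(Message(mayor, residents, "leaflet"))
```

The point of the sketch is only to show how targets, channels and levels can be queried once the flows are recorded in a structured model.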
Table 1: Examples of Key Success Factors for a local public sensitization campaign.
2.3 A national information system to describe the main information stakeholders

This information system gives a very clear comprehension of the complexity of the communication flows, which are driven by many stakeholders providing many messages to various target populations, with different goals, through various media channels. At the national level, different ministries (Agriculture, Internal Affairs, Ecology...) are involved in preventive actions concerning wildfires; at the "département" level, the Ministers delegate responsibilities to the Prefect, in charge of coordinating actions of prevention with the
administrative services. The Prefects make sure that the laws are applied and that the Prevention of Risks Plans (PRP) are implemented at the local level of the municipalities. The "département" is managed by the elected members of the "Conseil Général". Each departmental fire service has different local fire stations under its responsibility. The mayors of the town halls have the official responsibility of organising the emergency services and, according to the PRP, have to disseminate information on the prevention of wildfires. At the
Figure 1: A simplified representation of the French information and communication system concerning fire prevention and fighting against fire (Badillo and Bourgeois [3]). [The diagram links the national level (ministries of the State's administration), the "département" level (Prefect, State services, "Conseil Général", fire service) and the local level (municipality, municipal associations, municipal information and communication, media), through delegation, coordination and information flows, down to the local information supports: leaflets, posters, municipal events, briefings, reports, statements/press releases.]
local level, many actions concerning different targets are developed with different communication tools, such as leaflets, posters, municipal events, reports, official statements or press releases.
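The delegation and information links of figure 1 can be sketched as a small directed graph; the edge list below is a deliberately simplified and partly assumed reading of the figure, not its exact content:

```python
from collections import deque

# Simplified communication flows read off figure 1 (labels abridged,
# some links assumed for illustration).
FLOWS = {
    "Ministries": ["Prefect"],
    "Prefect": ["State services", "Fire service"],
    "Conseil General": ["Fire service"],
    "State services": ["Municipality"],
    "Municipality": ["Municipal communication", "Fire service"],
    "Municipal communication": ["Leaflets", "Posters", "Municipal events"],
    "Media": ["Local residents"],
}

def reachable(source):
    """All nodes reachable from `source` by following communication flows."""
    seen, queue = set(), deque([source])
    while queue:
        node = queue.popleft()
        for nxt in FLOWS.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

A breadth-first traversal of such a graph makes it easy to check, for instance, which local information supports a national decision can ultimately reach.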
3 The proposed methodological approach: towards a decision-making information system
In information-watch activities, the majority of decisions are mostly unstructured and thus refer to the Simonian model [4]. From this point of view, the contribution of a D.M.I.S. to decisional practices raises the question of the relations between knowledge, information, action and stakeholders. This organization defines the overall character of the D.M.I.S., which we tackle through a systemic approach (section 3.1), outline as a transverse structure (section 3.2) and relate to knowledge engineering (section 3.3).

3.1 The theoretical choice of a systemic approach

Our choice of a systemic approach makes it possible to treat the D.M.I.S. as a "stakeholders-machines" steering system, itself belonging to the general system "organization", in reference to the theory of the general system (Le Moigne [5]). The general system is structured in three subsystems, represented in figure 2:
Figure 2: Representation of the general system "organization".
The operating system transforms raw materials (material and immaterial inputs) into finished products (material and immaterial products). The information system records and memorizes the operations (processes) of the operating system. The system of control coordinates information and processes by using its cognitive capacities of self-organization. For wildfires, the I.S. corresponds to the database called PROMETHEE, which allows the optimal assignment of the Canadair planes, for example in Corsica. The elements at the base of the D.M.I.S. are the I.S., the system of control, and the relations and the
interactions between these two systems, allowing "activable" decisions. In reference to Argyris [6] on "activable" knowledge, we propose to define an "activable" decision as a decision "being at the same time valid and being able to be in motion" ([6], p. 257) at the level of the three systems of the organization. This space of study is represented in grey in figure 2 and matches the organisational and informational dimensions of decisional activities. Figure 2 is completed by figure 3, which represents the cognitive and temporal dimensions of a decisional practice. This "OIDK Model" (Operation, Information, Decision, Knowledge) is the reference for the systems of knowledge defined by Ermine [1]. It makes it possible to identify and characterize the knowledge and cognitive flows of a system, and in particular of a D.M.I.S. (again shown in grey). The advantage of such a systemic representation is that it globally takes into account four dimensions inherent in the practice of decisions in every organization. The relations and interactions between the systems give the D.M.I.S. its "communicating" character. However, it is precisely on this point that this systemic representation presents a major disadvantage: the systems (operation, information, decision), hierarchically superposed, generate a vertical coordination that prevents almost any learning dynamics and devalues, in the long term, the knowledge system.

3.2 Outline of a transverse structure

Under these conditions, and according to our assumption that knowledge is the strategic axis of the design and development of the D.M.I.S., we rotate figure 3 by 90°, which gives figure 4: the social organization of the D.M.I.S. Cognitive and informational flows are unchanged. On the other hand, the traditional hierarchical links, inherited from Taylor's model, disappear in favour of transverse links allowing a more open circulation of information and knowledge, in turn generating a "real" sharing of knowledge and information.
Figure 3: The "OIDK Model" and the space of definition of a D.M.I.S.
Figure 4: The social organization of the D.M.I.S.
For public awareness about wildfires, the bottom of figure 4 represents the knowledge of stakeholders (such as shepherds and hunters, firemen, foresters…). For decisional practices, this social organization makes it possible to "plan" the complexity of the decision-making process before the decision is taken. Hence the question of the relationship between knowledge and action (decision) arises, as one of the major concerns of knowledge engineering.

3.3 The multidimensional conception of the D.M.I.S.

In our investigations of the D.M.I.S., the real issue is the impact of the D.M.I.S. on the "performance" of decisional actions, organized collectively and individually (in reference to each stakeholder who intervenes in the decision-making process). The socio-cognitive aspects of learning inherent in the decision-making process are thus taken into account. The decisional activity rests on the triptych activity, knowledge, organization proposed by Teulier and Girard ([7], p. 391), in accordance with figure 5.
Figure 5: Triangular relations between the concepts of knowledge, activity and organization.
The advantage of this triptych is that it considers the cognitive and organisational dimensions of a decisional practice. Its disadvantage is that it obscures the technological dimension of the D.M.I.S., which is developed with information and communication technologies (ICT). We therefore propose figure 6, which supplements figure 5 and includes the four dimensions specific to the D.M.I.S. This multidimensional conception of the D.M.I.S. is coherent with distributed cognition; this distributed cognition is implemented by artefacts, which help to carry an action through to a successful conclusion. We have chosen to represent these artefacts by the paradigm of Multi-Agent Systems, with the aim of integrating the collective and individual cognitive activities that intervene in decision-making.
Figure 6: Multidimensional conception of the D.M.I.S.

4 Conclusion
We have proposed a new model of I.S. This model integrates the stakeholders’ knowledge as the basis of the information/communication process. Taking into account the complexity of the I.S. (see [8]), we have shown the links between the various systems of an organization and the decisional practices. We called this I.S. a D.M.I.S. This type of I.S. could become a driver of performance rather than a mere indicator of it. For public awareness about wildfires, this I.S. allows one to take into account the various kinds of knowledge held by the various stakeholders. The concrete consequence could be better communication on the prevention of wildfires.
References
[1] Ermine, J.L., Les systèmes de connaissances, Hermès: Paris, 1996.
[2] Badillo, P.-Y. & Bourgeois, D., Strategic information and communication on forest fires, with some benchmarks on natural disasters. Report to the European Commission: synthesis of the state of the art, in the framework of Module 11, FIRE PARADOX European Integrated Project (project no. FP6-018505, 6th PCRD), April 2008.
[3] Badillo, P.-Y. & Bourgeois, D., Communication, risk and complexity: a new approach applied to the “Fire Paradox” project. IAMCR 2007 International Conference on Media, Information, Communication: Celebrating 50 Years of Theories and Practices, UNESCO, Paris, 23–25 July 2007.
[4] Simon, H.A., Administration et processus de décision, Economica: Paris, 1983.
[5] Le Moigne, J.L., La Théorie du Système Général, PUF: Paris, 1977 & 1994.
[6] Argyris, C., Savoir pour agir, Interéditions: Paris, 1995.
[7] Teulier, R. & Girard, N., Modéliser les connaissances pour l’action dans les organisations (Chapter 18). Ingénierie des connaissances, eds R. Teulier, J. Charlet & P. Tchounikine, L’Harmattan: Paris, 2005.
[8] Amabile, S. & Caron-Fasan, M.-L., Contributions à une Ingénierie des Systèmes d'Information Orientée Complexité (Chapter 3). Faire de la recherche en systèmes d'information, ed. F. Rowe, Vuibert: Paris, pp. 67–78, 2002.
Combustion modelling for forest fires: from detailed to skeletal and global models

P. A. Santoni
University of Corsica, UMR 6134 SPE, Corte, France
Abstract

This work aims to improve the understanding of the combustion of vegetative fuels. The degradation gases released by some Mediterranean species were determined. They mainly consist of CO, CH4, CO2, H2O and other hydrocarbons. A study of the oxidation of a CO/CH4/CO2 mixture was then performed with a perfectly stirred reactor (PSR) at atmospheric pressure over the temperature range 773–1273 K. Mole fraction profiles as a function of temperature were compared to the numerical predictions obtained with the PSR code from CHEMKIN II using the full mechanism GRI-Mech 3.0. A skeletal mechanism was developed from the full mechanism and tested for laminar flames obtained with samples of crushed Mediterranean species. Different skeletal and global mechanisms were also tested. The comparison between measured and predicted temperatures shows that the common assumption of carbon monoxide burning in air is not appropriate at this scale: methane should be included in the modelling to perform reliable simulations. This result leads to the proposal of a simple combustion mechanism to be included in models of wildland fire.
Keywords: combustion modelling, degradation gases, forest fire.
doi:10.2495/CMEM090571

1 Introduction

The understanding of the mechanisms that control the ignition and the spread of wildland fires is a major objective for the scientific community. Over the last fifty years, statistical, empirical and physical models have been proposed to simulate forest fires [1]. For the last ten years, physical models have tended to include more physical mechanisms; among them, combustion kinetics have been poorly investigated [2]. The combustible part of the devolatilization products is
generally considered to be carbon monoxide burning in air [3], whatever the vegetation species. Although a detailed reaction mechanism (over three hundred reactions and 50 species) is possible for an accurate description of the chemistry, it is currently impractical for predicting wildfires because of the computational time. Using simplified kinetic mechanisms is thus attractive. In this paper a modelling approach leading from full mechanisms to global models is presented. The gases released by the degradation of two pines (Pinus laricio, Pinus pinaster) and a heather (Erica arborea) were first determined with a tube furnace. A perfectly stirred reactor (PSR) was then used as a test environment to investigate the burning of these gases. The recorded molar fractions were compared to the predictions obtained with the PSR code from CHEMKIN II [4] using the full mechanism GRI-Mech 3.0 [5]. A skeletal mechanism was then developed from the full mechanism. This mechanism was tested to simulate laminar flames obtained from the burning of crushed forest fuels. A primitive-variable formulation was used to solve the transient conservation equations obtained for the modelling of the laminar flames. A mixture representative of the degradation gases was also defined thanks to this mechanism. Then, because of the expensive computational time required, different skeletal and global mechanisms with fewer reactions were investigated. The aim was to provide a reliable kinetic mechanism that decreases the computational time without losing accuracy, suitable for inclusion in models of fire spread. The paper is organized as follows. In the first section we present the study of the degradation gases, the PSR environment and the elaboration of the skeletal mechanism. In the second section, the laminar flame experiments are presented.
The numerical method to simulate such flames and the comparison between the different skeletal and global mechanisms are detailed.
2 From detailed to skeletal models
2.1 Degradation gases

The tube furnace apparatus used as pyrolyser is shown in fig. 1. It is made of a cylindrical furnace 43.5 cm long with an internal diameter of 6.5 cm. The reactor inside is 86 cm long with an inner diameter of 5 cm. Experiments were
Figure 1: Schematic of the tube furnace (1 thermocouples, 2 temperature controller, 3 bearing, 4 nitrogen injection, 5 electric furnace, 6 combustion boat, 7 air suction, 8a–c valves, 9 gas samplers).
conducted for three fuels: two pines (Pinus pinaster, Pinus laricio) and a heather (Erica arborea). The fuel samples were heated from 280 to 430°C. Gases were collected into a balloon directly attached to a gas chromatograph. At least three repetitions were carried out. The degradation gases obtained mainly consist of CO2, CO, CH4, H2O and lower amounts of hydrocarbons (Table 1).

Table 1: Mass fractions of the main pyrolysis gases released by the degradation of the vegetative fuels (x = 6 or 8 and y = 6, 8 or 10).

Gas    Pinus laricio   Pinus pinaster   Erica arborea
CO     0.140           0.257            0.141
CH4    0.040           0.078            0.026
H2O    0.074           0.047            0.047
CO2    0.616           0.536            0.718
C2H4   0.008           0.010            0.004
C2H6   0.015           0.016            0.005
C3Hx   0.016           0.008            0.007
C4Hy   0.090           0.048            0.052
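As a cross-check of Table 1, the mass fractions can be converted to mole fractions, which is the composition a PSR or flame code actually needs at the inlet. The sketch below is not from the paper; the molecular weights are standard values, and C3Hx/C4Hy are omitted because their exact formulas (x = 6 or 8, y = 6, 8 or 10) are not fixed.

```python
# Sketch (not from the paper): converting the Table 1 mass fractions for
# Pinus pinaster into mole fractions, x_i = (Y_i/W_i) / sum_j (Y_j/W_j).
# Molecular weights W are standard values in g/mol; C3Hx and C4Hy are
# omitted because their exact formulas are not specified in Table 1.

W = {"CO": 28.01, "CH4": 16.04, "H2O": 18.02,
     "CO2": 44.01, "C2H4": 28.05, "C2H6": 30.07}  # g/mol

Y_pinaster = {"CO": 0.257, "CH4": 0.078, "H2O": 0.047,
              "CO2": 0.536, "C2H4": 0.010, "C2H6": 0.016}  # Table 1

def mole_fractions(Y, W):
    """Normalize Y_i/W_i to obtain mole fractions of the listed species."""
    n = {sp: y / W[sp] for sp, y in Y.items()}  # mol per gram of mixture
    total = sum(n.values())
    return {sp: v / total for sp, v in n.items()}

x = mole_fractions(Y_pinaster, W)
```

On these numbers CO2 remains the dominant species on a molar basis as well, which is consistent with the strong dilution of the combustible fraction noted throughout the paper.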
2.2 Combustion of degradation gases in perfectly stirred conditions 2.2.1 The perfectly stirred reactor The combustion of a gas mixture representative of the degradation gases of Pinus pinaster was investigated with a PSR. To simplify the study, we only considered CO, CH4 and CO2. The experimental device is composed of a reactor, a sampling system and a chromatographic system. It was applied for the first
Figure 2: The perfectly stirred reactor (stick of fuel introduction, mixing zone, heating furnace, quartz reactor, sampling probe, thermocouple, gas exhaust, GC, temperature measurement).
time for mixtures encountered in wildland fires. The reactor consists of a small sphere of 60 mm diameter (see fig. 2) with four nozzles (0.3 mm i.d.) for the admission of gases to achieve the stirring. All the gases were preheated before injection to minimize temperature gradients in the PSR. A regulated heating furnace maintained the reactor at the desired temperature. The oxidizer and the fuel flowed separately until they reached the mixing point. To ensure a low temperature rise in the reactor, the reactants were diluted by a factor of 9.2 from the composition obtained with the tubular furnace. The present experiments were performed at steady state for a residence time τ of 1.3 s. The temperature of the gases in the PSR was varied stepwise in the range 773–1273 K. Samples of the reacting mixture were analyzed by gas chromatography to measure H2, O2, CO, CH4, C2H2, C2H4, C2H6 and CO2.

2.2.2 Kinetics modelling

The PSR code from the CHEMKIN II package [4] was used with the full mechanism GRI-Mech 3.0 [5] to model the combustion of the mixture. The species conservation equation and the conservation of energy are given by:

$$\dot{m}\left(Y_k - Y_k^{*}\right) - \dot{\omega}_k W_k V = 0 \quad (1)$$

$$\dot{m}\sum_{k=1}^{K}\left(Y_k h_k - Y_k^{*} h_k^{*}\right) + Q = 0 \quad (2)$$
where the subscript k denotes species k, and K, Y, W, ω̇, h, V, Q and ṁ represent respectively the number of species, the mass fraction, the molecular weight, the volumetric molar rate of reaction, the specific enthalpy, the reactor volume, the heat loss and the mass flow rate. Inlet quantities are marked with an asterisk. For each reaction, the forward and backward rate coefficients are in the modified Arrhenius form. Figure 3 displays the predicted and measured molar fractions for CO and CH4.
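To make eqn (1) concrete, consider the simplest case of a single fuel consumed by a first-order reaction with rate coefficient k(T). With residence time τ = ρV/ṁ and ω̇ = −k[F], eqn (1) reduces to (Y − Y*)/τ = −kY, so the outlet mass fraction is Y = Y*/(1 + kτ). The sketch below uses illustrative rate parameters, not the paper's mechanism.

```python
import math

# Sketch (illustrative values, not the paper's mechanism): the species
# balance (1) for a single fuel consumed by a first-order reaction.
# With tau = rho*V/m_dot and omega = -k*[F], eqn (1) reduces to
# (Y - Y*)/tau = -k*Y, i.e. Y = Y*/(1 + k*tau).

R = 1.987  # cal/(mol K)

def arrhenius(A, beta, E, T):
    """Modified Arrhenius rate coefficient k = A * T**beta * exp(-E/(R*T))."""
    return A * T**beta * math.exp(-E / (R * T))

def psr_steady_state(Y_in, tau, A, beta, E, T):
    """Outlet mass fraction of a fuel consumed by a first-order reaction."""
    k = arrhenius(A, beta, E, T)
    return Y_in / (1.0 + k * tau)

# Hypothetical first-order fuel, residence time 1.3 s as in the experiments.
tau = 1.3
Y_out_cold = psr_steady_state(0.02, tau, A=1e10, beta=0.0, E=40000.0, T=773.0)
Y_out_hot = psr_steady_state(0.02, tau, A=1e10, beta=0.0, E=40000.0, T=1273.0)
```

Sweeping T over 773–1273 K as in the experiments reproduces the expected qualitative behaviour: almost no conversion at the cold end and near-complete consumption at the hot end.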
Figure 3: Experimental data and PSR code output with Full and Skeletal Mechanisms (FM and SM) for CH4 and CO.
Table 2: Skeletal mechanism. Rate coefficients are in the modified Arrhenius form k = A T^β exp(−E/RT), with A in mol-cm-s units and E in cal/mol.

No.  Reaction                    A (mol.cm.s)   β      E (cal/mol)
1.   O+H2<=>H+OH                 3.87E+04       2.7    6260.0
2.   O+HO2<=>OH+O2               2.00E+13       0.0    0.0
3.   O+CH4<=>OH+CH3              1.02E+09       1.5    8600.0
4.   O+CH3<=>H+CH2O              5.06E+13       0.0    0.0
5.   O+CH2O<=>OH+HCO             3.96E+13       0.0    3540.0
6.   2O+M<=>O2+M (a)             1.20E+17       -1.0   0.0
7.   H+O2<=>O+OH                 2.65E+16       -0.7   17041.0
8.   H+O2+H2O<=>HO2+H2O          1.13E+19       -0.8   0.0
9.   H+O2+AR<=>HO2+AR            7.00E+17       -0.8   0.0
10.  H+O2+N2<=>HO2+N2            2.60E+19       -1.2   0.0
11.  H+2O2<=>HO2+O2              2.08E+19       -1.2   0.0
12.  H+HO2<=>2OH                 8.40E+13       0.0    635.0
13.  H+HO2<=>O2+H2               4.48E+13       0.0    1068.0
14.  H+CH4<=>CH3+H2              6.60E+08       1.6    10840.0
15.  H+CH2O<=>HCO+H2             5.74E+07       1.9    2742.0
16.  H+O2+M<=>HO2+M              2.80E+18       -0.9   0.0
17.  H+CH2O(+M)<=>CH3O(+M)       5.40E+11       0.5    2600.0
18.  H+CH3(+M)<=>CH4(+M)         1.39E+16       -0.5   536.0
19.  2OH<=>O+H2O                 3.57E+04       2.4    -2110.0
20.  OH+H2<=>H+H2O               2.16E+08       1.5    3430.0
21.  OH+HO2<=>O2+H2O             1.45E+13       0.0    -500.0
22.  OH+CH4<=>CH3+H2O            1.00E+08       1.6    3120.0
23.  OH+CH2O<=>HCO+H2O           3.43E+09       1.2    -447.0
24.  OH+CO<=>H+CO2               4.76E+07       1.2    70.0
25.  HO2+CH3<=>O2+CH4            1.00E+13       0.0    0.0
26.  HO2+CH3<=>OH+CH3O           3.78E+13       0.0    0.0
27.  HO2+CO<=>OH+CO2             1.50E+14       0.0    23600.0
28.  O2+CH2O<=>HO2+HCO           1.00E+14       0.0    40000.0
29.  O2+CO<=>O+CO2               2.50E+12       0.0    47800.0
30.  HCO+O2<=>HO2+CO             1.35E+13       0.0    400.0
31.  HCO+M<=>H+CO+M              1.87E+17       -1.0   17000.0
32.  HCCO+O2<=>OH+2CO            3.20E+12       0.0    854.0
33.  CH3+O2<=>O+CH3O             3.56E+13       0.0    30480.0
34.  CH3+O2<=>OH+CH2O            2.31E+12       0.0    20315.0
35.  CH3+CH2O<=>HCO+CH4          3.32E+03       2.8    5860.0
36.  CH3O+O2<=>HO2+CH2O          4.28E-13       7.6    -3530.0
37.  2CH3(+M)<=>C2H6(+M)         6.77E+16       -1.2   654.0
38.  O+C2H6<=>OH+C2H5            8.98E+07       1.9    5690.0
39.  H+C2H6<=>C2H5+H2            1.15E+08       1.9    7530.0
40.  OH+C2H6<=>C2H5+H2O          3.54E+06       2.1    870.0
41.  CH3+C2H6<=>C2H5+CH4         6.14E+06       1.7    10450.0
42.  C2H5+O2<=>HO2+C2H4          8.40E+11       0.0    3875.0
43.  H+C2H4(+M)<=>C2H5(+M)       5.40E+11       0.5    1820.0
44.  OH+C2H4<=>C2H3+H2O          3.60E+06       2.0    2500.0
45.  CH3+C2H4<=>C2H3+CH4         2.27E+05       2.0    9200.0
46.  C2H3+O2<=>HO2+C2H2          1.34E+06       1.6    -384.0
47.  H+C2H2(+M)<=>C2H3(+M)       5.60E+12       0.0    2400.0
48.  O+C2H2<=>H+HCCO             1.35E+07       2.0    1900.0
49.  O+C2H4<=>CH3+HCO            1.25E+07       1.8    220.0

(a) The third-body efficiencies are Z(H2) = 2; Z(H2O) = 6; Z(CH4) = 2; Z(CO) = 1.5; Z(CO2) = 2; Z(C2H6) = 3; Z(Ar) = 0.7.

A general agreement between measurements and predictions can be noted in fig. 3. The most important reactions for the combustion were retained in the skeletal mechanism, which was derived from the full mechanism through sensitivity analysis. It is listed in Table 2 and involves 22 species and 49 reactions. The initiation reaction is R6. The oxidation of hydrogen and the formation of radicals are due to R1, R2, R7–R13, R16 and R19–R21. Carbon monoxide is oxidized by R24, R27 and R29. The methane break-up occurs through R3, R14, R18, R22 and R25. The C1 chain then continues toward CH3O and CH2O via R4, R17, R26 and R33–R36. CH2O forms HCO by R5, R15, R23 and R28. The HCO radical is converted to CO mainly by R30 and R31. The C2 chain is activated by R37 and R49. C2H6 is then converted to C2H2 through R38–R47. C2H2 forms HCCO by R48 and HCCO is converted to CO by R32.
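The Table 2 entries plug directly into the modified Arrhenius form. A minimal sketch, using parameters copied from Table 2 (E in cal/mol, hence R in cal units; the choice of 1200 K is an arbitrary point inside the PSR range):

```python
import math

# Sketch: evaluating rate coefficients from Table 2 in the modified
# Arrhenius form k = A * T**beta * exp(-E/(R*T)). E is in cal/mol, so
# R = 1.987 cal/(mol K). (A, beta, E) triplets are copied from Table 2.

R = 1.987  # cal/(mol K)

def k_arrhenius(A, beta, E, T):
    return A * T**beta * math.exp(-E / (R * T))

R7  = (2.65e16, -0.7, 17041.0)   # H+O2 <=> O+OH      (chain branching)
R24 = (4.76e7,   1.2,    70.0)   # OH+CO <=> H+CO2    (main CO oxidation)
R22 = (1.00e8,   1.6,  3120.0)   # OH+CH4 <=> CH3+H2O (methane break-up)

T = 1200.0  # K, inside the experimental range 773-1273 K
k7, k24, k22 = (k_arrhenius(*p, T) for p in (R7, R24, R22))
```

Evaluating the branching reaction R7 over 773–1273 K shows the strong temperature sensitivity (positive E dominating the negative β) that drives the ignition behaviour seen in fig. 3.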
3 From skeletal to global models

3.1 Laminar flames from vegetative fuels

The skeletal mechanism was tested with laminar diffusion flames of crushed vegetative fuels. Crushing yields a laminar flame and avoids fuel-geometry effects on the burning. The experimental device is shown in fig. 4.

Figure 4: Sketch of the experimental apparatus.
The fuel sample was in the shape of a cylinder (diameter of 3.5 cm, depth of 5 mm and mass of 1.5 g). The mass loss was measured. Ignition was performed with ethanol. Eleven thermocouples (type K, 50 µm wires) were positioned along the flame axis. Five repetitions were made. During the first 60 s of the burning the flame fluctuates, then it becomes laminar. Its height decreases from 4 to 1.5 cm and, after 120 s, extinction begins.

3.2 Balance equations and kinetics modelling

The crushed sample was represented by a burner. The model solved the two-dimensional, unsteady, laminar, reactive Navier-Stokes equations coupled with radiation and species transport. The flow was calculated using a finite-volume procedure and the radiative transfer equation was solved with the discrete ordinates method:

$$\frac{\partial \rho}{\partial t} + \vec{\nabla}\cdot\left(\rho\vec{V}\right) = 0 \quad (3)$$

$$\frac{\partial \rho Y_i}{\partial t} + \rho\vec{V}\cdot\vec{\nabla} Y_i = \vec{\nabla}\cdot\left(\rho D_{ij}\vec{\nabla} Y_i\right) + \dot{\omega}_i \quad (4)$$

$$\frac{\partial \rho\vec{V}}{\partial t} + \rho\vec{V}\cdot\vec{\nabla}\vec{V} = -\vec{\nabla} p + \rho\vec{g} + \vec{\nabla}\cdot\bar{\bar{\tau}} \quad (5)$$

$$\frac{\partial \rho e}{\partial t} + \rho\vec{V}\cdot\vec{\nabla} e = \vec{\nabla}\cdot\left(\lambda\vec{\nabla} T\right) + \vec{\nabla}\cdot\vec{R}_g + \vec{\nabla}\cdot\left(\sum_{i=1}^{N}\rho h_i D_{ij}\vec{\nabla} Y_i\right) - \vec{\nabla}\cdot\left(p\vec{V}\right) + \vec{\nabla}\cdot\left(\bar{\bar{\tau}}\cdot\vec{V}\right) + \dot{Q} \quad (6)$$

$$\frac{dI(\vec{r},\vec{s})}{ds} + a\,I(\vec{r},\vec{s}) = a\,\frac{\sigma T^4}{\pi} \quad (7)$$

Five mechanisms were tested to model the combustion:
a. The skeletal mechanism previously presented (49 reactions, 22 species).
b. The skeletal mechanism of [6] for methane (23 reactions, 14 species).
c. The skeletal mechanism of [2] for wildland fire (22 reactions, 14 species).
d. The global mechanism (GM1), which considers only CO as fuel:

$$\mathrm{CO} + 0.5\,\mathrm{O_2} \Leftrightarrow \mathrm{CO_2} \quad (8)$$

$$\dot{\omega}_{CO} = 2.239\times10^{12}\,[\mathrm{CO}][\mathrm{H_2O}]^{0.5}[\mathrm{O_2}]^{0.25}\exp\left(-\frac{1.3\times10^{8}}{R\,T}\right) \quad (9)$$

$$\dot{\omega}_{CO_2} = 5\times10^{8}\,[\mathrm{CO_2}]\exp\left(-\frac{1.3\times10^{8}}{R\,T}\right) \quad (10)$$

e. The global mechanism (GM2), for which the oxidation of CO is given by eqns. (8)-(10) and the oxidation of CH4 is incomplete:

$$\mathrm{CH_4} + 1.5\,\mathrm{O_2} \Rightarrow \mathrm{CO} + 2\,\mathrm{H_2O} \quad (11)$$

$$\dot{\omega}_{CH_4} = 5.012\times10^{11}\,[\mathrm{CH_4}]^{0.7}[\mathrm{O_2}]^{0.8}\exp\left(-\frac{2\times10^{8}}{R\,T}\right) \quad (12)$$
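The GM1/GM2 source terms are cheap to evaluate, which is the whole point of the global approach. A minimal sketch of eqns (9), (10) and (12), assuming SI units with molar concentrations in kmol/m3 and R = 8314 J/(kmol K) (consistent with activation energies of order 10^8 J/kmol); the concentration values used below are illustrative, not from the paper:

```python
import math

# Sketch of the GM1/GM2 source terms, eqns (9), (10) and (12).
# Assumed units: concentrations in kmol/m3, R = 8314 J/(kmol K),
# activation energies 1.3e8 and 2e8 J/kmol as in the text.

R = 8314.0  # J/(kmol K)

def w_CO(CO, H2O, O2, T):
    """Forward CO oxidation rate, eqn (9)."""
    return 2.239e12 * CO * H2O**0.5 * O2**0.25 * math.exp(-1.3e8 / (R * T))

def w_CO2(CO2, T):
    """Reverse (CO2 dissociation) rate, eqn (10)."""
    return 5e8 * CO2 * math.exp(-1.3e8 / (R * T))

def w_CH4(CH4, O2, T):
    """Incomplete CH4 oxidation rate of GM2, eqn (12)."""
    return 5.012e11 * CH4**0.7 * O2**0.8 * math.exp(-2e8 / (R * T))

# Illustrative flame-zone state (hypothetical concentrations).
w_co_hot = w_CO(0.01, 0.005, 0.02, 1500.0)
w_co_cool = w_CO(0.01, 0.005, 0.02, 1200.0)
```

Note the H2O dependence in eqn (9): with a dry mixture the global CO rate vanishes, which is one practical reason the water released by degradation must be kept in the burner composition.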
The evaluation of these models was carried out based on their computational time and their accuracy against the experimental results of the laminar flame from Pinus laricio samples at 60 s (the beginning of the laminar stage of the flame).

3.3 Composition of the gas mixture

Our skeletal mechanism was used to test whether the C2 hydrocarbons have to be taken into account for the modelling of the laminar flame. Two mixtures were tested. For mixture 1, the mass fractions of CO, H2O, CH4 and the C2 hydrocarbons are those given in Table 1, while for mixture 2 the C2 species were omitted. For both mixtures, the mass fraction of CO2 is set so that the sum of all mass fractions equals 1. Table 3 gives the conditions at the burner.

Table 3: Burner inputs for the determination of the gas composition.

                        Mixture 1   Mixture 2
CO                      0.140       0.140
CH4                     0.040       0.040
H2O                     0.074       0.074
CO2                     0.722       0.746
C2H4                    0.008       -
C2H6                    0.015       -
Mass flow rate (kg/s)   4.47E-6     4.47E-6
Radius (cm)             1.47        1.47
Figure 5 displays the comparison between predicted and observed temperatures along the flame axis. The range of experimental temperatures is represented by vertical lines. The results computed with mixture 1 overestimate the temperatures while results obtained with mixture 2 match the experiments. In the thermal plume, the predictions are slightly higher. The thermal plume
Figure 5: Comparison between observed and predicted temperatures with our skeletal mechanism and two mixtures, along the flame axis.
becomes progressively turbulent with height, and the cooling due to mixing with air is underestimated by the laminar simulation. According to these results, to match the experiments one has to consider the mass fractions of CO, CH4 and H2O obtained during the gas analysis and to neglect the other species.

3.4 Test of other skeletal and global models

Since our mechanism is time consuming (about one week on our workstation), the other skeletal mechanisms, from [2] and [6], were tested. Figure 6 presents a comparison of the three skeletal mechanisms along the flame axis. The temperature curves are similar. The mechanism elaborated by [2], however, generates higher temperatures than the others. In the flame zone, the temperatures predicted by the mechanism of [6] are in the range of the experimental data, with a reduced computational time. However, the computational time remains too long (twice the duration required to compute the flow field without reaction).
Figure 6: Comparison between observed and predicted temperatures obtained with our skeletal mechanism and those of Zhou [2] and Peters [6].
Thus global mechanisms were examined. Mixture 2 (see Table 3) was used to test mechanism GM2. Since GM1 does not consider methane, two compositions, called mixtures 3 and 4 and derived from mixture 2, were used (see Table 4).

Table 4: Burner inputs for the determination of the global mechanism.

Composition   Mixture 2     Mixture 3   Mixture 4
Mechanism     GM2 and [6]   GM1         GM1
CO            0.140         0.180       0.338
CH4           0.040         -           -
H2O           0.074         0.074       0.074
CO2           0.746         0.746       0.588
LHV (MJ/kg)   3.42          1.82        3.42
Mixture 3 was established according to Grishin’s hypothesis [3], for which the mass fraction of CO is set equal to the sum of those of CO and CH4. In mixture 4, the mass fraction of CO is chosen to give the same low heating value as mixture 2. Figure 7 shows the experimental and simulated temperatures along the flame axis obtained with these global mechanisms and the skeletal mechanism of [6]. GM1 with mixture 3 significantly underestimates the temperature (by about 200°C): the energy released is too low. The association of mixture 4 with mechanism GM1 provides better results; however, the position of the maximum temperature is predicted with an error of 25%. Conversely, mechanism GM2 matches the experimental temperatures and the predictions of the skeletal mechanism of [6]. Moreover, the computational time with GM2 is equal to the calculation time for the cold flow. It meets the criteria of accuracy and computational time required for the definition of a combustion model usable in forest fire simulation.
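The LHV column of Table 4 can be checked from standard lower heating values of the two combustible species. The sketch below assumes roughly 10.1 MJ/kg for CO and 50.0 MJ/kg for CH4 (textbook values, not stated in the paper); H2O and CO2 contribute nothing.

```python
# Sketch verifying the LHV column of Table 4 from assumed standard lower
# heating values: ~10.1 MJ/kg for CO and ~50.0 MJ/kg for CH4 (textbook
# values, not given in the paper). Inert species contribute no heat.

LHV = {"CO": 10.1, "CH4": 50.0}  # MJ/kg (assumption)

def mixture_lhv(Y):
    """Lower heating value of a gas mixture from species mass fractions."""
    return sum(Y.get(sp, 0.0) * h for sp, h in LHV.items())

lhv_mix2 = mixture_lhv({"CO": 0.140, "CH4": 0.040})  # Table 4 lists 3.42
lhv_mix3 = mixture_lhv({"CO": 0.180})                # Table 4 lists 1.82
lhv_mix4 = mixture_lhv({"CO": 0.338})                # Table 4 lists 3.42
```

This also makes the construction of mixture 4 transparent: its CO mass fraction is essentially the mixture 2 heating value divided by the heating value of CO, so GM1 is fed the same chemical energy even though methane is absent.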
Figure 7: Experimental and computed temperatures obtained with the global mechanisms and Peters and Kee’s mechanism along the flame axis.

4 Conclusion
The main contributions of this work can be summarized as follows:
- The gases released during the degradation of forest fuels were identified.
- Their combustion was studied with a PSR, leading to the elaboration of a skeletal mechanism including the C2 species.
- This mechanism was validated for a laminar diffusion flame.
- A global combustion mechanism including methane was proposed.
Prospects concern the testing of this global mechanism under turbulent conditions at laboratory and field scales.
References
[1] Pastor, E., Zarate, L., Planas, E. & Arnaldos, J., Mathematical models and calculation systems for the study of wildland fire behaviour. Progress in Energy and Combustion Science, 29, pp. 139–153, 2003.
[2] Zhou, X. & Mahalingam, S., Evaluation of reduced mechanism for modelling combustion of pyrolysis gas in wildland fire. Combustion Science and Technology, 171, pp. 39–70, 2001.
[3] Grishin, A.M., Mathematical Modeling of Forest Fires and New Methods of Fighting Them, Publishing House of Tomsk University: Tomsk, 390 p, 1997.
[4] Kee, R.J., Rupley, F.M. & Miller, J.A., Chemkin-II: A FORTRAN Chemical Kinetics Package for the Analysis of Gas-Phase Chemical Kinetics, Report No. SAND 89-8009, Sandia National Laboratories, 1989.
[5] GRI-Mech 3.0, http://www.me.berkeley.edu/gri_mech/
[6] Peters, N. & Kee, R.J., The computation of stretched laminar methane-air diffusion flames using a reduced four-step mechanism. Combustion and Flame, 68, pp. 17–29, 1987.
Author Index
Abdalla I. E.............................. 329 Alammar K. ............................. 365 Alavi M.................................... 425 Ando Y. ................................... 497 Andreozzi A..................... 401, 461 Badillo P.-Y. ............................ 623 Bianco N. ................................. 461 Bielecki Z. ....................... 203, 217 Braga D............................ 137, 147 Britten A. J................................. 47 Britten A. ................................. 125 Buonomo B.............................. 401
Gardel A. ................................... 37 Gent M. ..................................... 95 Georgantopoulos G.................. 369 Georgantopoulou C.................. 369 Golak S. ................................... 169 Goshayeshi H........................... 425 Goto M..................................... 497 Graf W. .................................... 511 Han S. Z................................... 497 Haoui R.................................... 379 Herran M. ................................ 559 Holz K. .................................... 203
Černý R.......................... 3, 13, 157 Chalons H. ............................... 559 Constantinescu D. .................... 473 Cyr J......................................... 125
Irša J. ....................................... 413
D’Alessandro V. ...................... 353 de Juan A. ................................ 523 Diego I. ...................................... 95 Djellal S. .................................. 389 Dupuy J.-L. ...................... 593, 613 Duran M................................... 147
Kawagoishi N. ......................... 497 Kawahara H. ............................ 437 Kawalec A. .............................. 229 Kearley V................................. 137 Kim S. S................................... 497 Kobata N.................................... 71 Kočí J........................................... 3 Kočí V. ...................................... 13 Koike T.................................... 485 Kolosowski W. ........................ 217 Konatowski S........................... 259 Konovalov V. .......................... 593 Konvalinka P. .................... 83, 535 Krok J. ....................................... 25 Kubacki R................................ 241 Kuramae H................................. 61
Enache I. .................................. 147 Espinosa F.................................. 37 Everett J. E............................... 571 Fe J........................................... 341 Fernandez del Rincon A. ......... 523 Fernandez-Pello A. C............... 603 Fornůsek J................................ 535 Fructus M................................. 137 Frýba L. ................................... 117 Gajewski P............................... 251 Galybin A. N............................ 413 Garcia Fernandez P.................. 523 García S. .................................. 579
Janszen G................................. 105 Jiménez J. A............................... 37
Lautenberger C. ....................... 603 Le Menn M. ............................. 451 Leiceaga X. A. ......................... 579 Lesnik C................................... 229 Leszczynski M......................... 203
Liang C.-C. .............................. 315 Liang S.-C................................ 315 Liang S.-H. .............................. 315 Lien L.-C. ................................ 315 Linn R. R. ........................ 593, 613 Máca P. ...................................... 83 MacKenzie S.............................. 47 Maděra J. ............................... 3, 13 Malecki J.................................. 295 Manca O. ......................... 401, 461 Marona L. ................................ 203 Maruyama T. ........................... 485 Masmoudi M............................ 137 Mazo M...................................... 37 Menéndez M. ............................. 95 Miadonye A. ............................ 125 Mihulka J. ................................ 157 Mijalkovic M. .......................... 547 Mikolajczyk J. ......................... 203 Miyaoka T.................................. 71 Mladenovic B. ......................... 547 Montelpare S............................ 353 Morimoto H. .............................. 61 Morvan D................................. 593 Nakamachi E.............................. 61 Naoi H...................................... 485 Nardini S.......................... 401, 461 Naso V. .................................... 461 Navarrina F. ............................. 341 Nettuno P. G. ........................... 105 Nguyen C. T. ........................... 451 Nishimura T............................. 437 Nowakowski M........................ 203 Nowosielski L.......................... 241 Ondráček M. .............................. 13 Ortiz R. .................................... 559 Ouibrahim A. ........................... 389 Parte Y. .................................... 137 Pavlík Z.................................... 157 Pavlíková M............................. 157
Pérez D. ..................................... 37 Perlin P. ................................... 203 Pieniężny A. ............................ 259 Pietrasinski J............................ 229 Pimont F. ......................... 593, 613 Piotrowski Z. ........................... 251 Pirner M................................... 117 Portet C.................................... 147 Przesmycki R........................... 241 Ricci R. .................................... 353 Roldán J. .................................... 95 Rovnaníková P........................... 13 Rutecka B. ............................... 203 Sakamoto H. .............................. 61 Sancibrian R. ........................... 523 Santiso E.................................... 37 Santoni P. A............................. 633 Secchiaroli A. .......................... 353 Secka K.................................... 125 Sedek E.................................... 217 Sedlmajer M. ............................. 13 Sickert J.-U. ............................. 511 Soto E. M................................. 579 Sovják R. ........................... 83, 535 Stacewicz T. ............................ 203 Stanuszek M. ............................. 25 Steinigen F............................... 511 Stroeven P................................ 179 Swiercz E................................. 271 Sybord C.................................. 623 Szymak P. ................................ 305 Tajiri S. ...................................... 71 Tanaka H. .................................. 71 Teshima N. .............................. 497 Toraño J. .................................... 95 Torno S. ..................................... 95 Touya T. .................................. 137 Tsangaris S. ............................. 369 Tsutahara M............................... 71 Urushadze Sh........................... 117
Vejmelková E. ....................... 3, 13 Velasco J.................................... 95 Viadero F. ................................ 523 Vítek J. L. .......................... 83, 535 Wada M. .................................. 485 Wefky A. M............................... 37 Wnuk M........................... 191, 217
Wojtas J. .................... 25, 203, 217 Wylot T.................................... 137 Yamamoto H. .......................... 485 Zappalà G. ............................... 283 Zdravkovic S. .......................... 547 Zlatkov D................................. 547
...for scientists by scientists
Computational Methods and Experimental Measurements XIII Edited by: C.A. BREBBIA, Wessex Institute of Technology, UK and G.M. CARLOMAGNO, University of Naples, Italy
Containing papers presented at the Thirteenth International conference in this well established series on Computational Methods and Experimental Measurements (CMEM), these proceedings review state-of-the-art developments on the interaction between numerical methods and experimental measurements. Featured topics include: Computational and Experimental Methods; Experimental and Computational Analysis; Computer Interaction and Control of Experiments; Direct, Indirect and InSitu Measurements; Particle Methods; Structural and Stress Analysis; Structural Dynamics; Dynamics and Vibrations; Electrical and Electromagnetic Applications; Biomedical Applications; Heat Transfer; Thermal Processes; Fluid Flow; Data Acquisition, Remediation and Processing and Industrial Applications WIT Transactions on Modelling and Simulation, Vol 46 ISBN: 978-1-84564-084-2 2007 928pp £295.00/US$585.00/€442.50 eISBN: 978-1-84564-282-2
Applied Numerical Analysis M. RAHMAN, Dalhousie University, Canada
This book presents a clear and well-organised treatment of the concepts behind the development of mathematics and numerical techniques. The central topic is the application of numerical methods and the calculus of variations to physical problems. Based on the author’s course taught at many universities around the world, the text is primarily intended for undergraduates in electrical, mechanical, chemical and civil engineering, physics, applied mathematics and computer science. Many sections are also directly relevant to graduate students in the mathematical and physical sciences. More than 100 solved problems and approximately 120 exercises are also featured. ISBN: 1-85312-891-0
2004 408pp+CD-ROM £165.00/US$259.00/€245.00
Computational Methods and Experimental Measurements XII Edited by: C.A. BREBBIA, Wessex Institute of Technology, UK and G.M. CARLOMAGNO, University of Naples, Italy
This volume contains most of the papers presented at the 12th International Conference on Computational Methods and Experimental Measurements (CMEM), held in Malta in 2005. These biennial conferences provide a forum to review the latest work on the interaction between computational methods and experimental measurements. New types of experiments allowing for more reliable interpretation of physical systems result in a better virtual representation of reality, while experimental results are themselves increasingly dependent on specialised computer codes. It is only through the harmonious, progressive development of the experimental and computational fields that the engineering sciences will be able to advance. This volume contains over 80 papers grouped into the following sections: Computational and Analytical Methods; Experimental and Computational Analysis; Direct, Indirect and In-Situ Measurements; Particle Methods; Structural and Stress Analysis; Structural Dynamics; Dynamics and Vibrations; Electrical and Electromagnetic Applications; Bioengineering Applications; Heat Transfer; Thermal Processes and Fluid Flow. WIT Transactions on Modelling and Simulation, Vol 41 ISBN: 1-84564-020-9 2005 944pp £365.00/US$579.00/€545.00
New Solutions in Contact Mechanics J. JÄGER, Lauterbach Verfahrenstechnik, Germany
The result of around 20 years of research by the author, this book features some previously unpublished solutions that will be useful for scientific investigation and mechanical design. A boundary element algorithm for contact with friction is discussed, and a demonstration version with 800 contact points is included on an accompanying CD-ROM. All of the chapters are more or less self-contained, while the derivations used are suitable for undergraduate students. Readers will also find new information which may aid further developments in this field. Contents: Introduction; Summary of the Basic Equations; Solutions for Plane Contact; Solutions for Axi-Symmetric Profiles; General Contact and Comparison with FEM and BEM Programs; Computation of Contact Problems; Gauss-Seidel Method for Frictional Contact Problems; Numerical Results for Incremental Load Histories; Basic Equations and Solutions for Impact; Tangential and Torsional Impact of Spheres; Numerical Results for Impact; Applications. ISBN: 1-85312-994-1
2004 328pp+CD-ROM £129.00/US$209.00/€199.00
Elastic and Elastoplastic Contact Analysis Using Boundary Elements and Mathematical Programming A. FARAJI, University of Tehran, Iran
This book presents a general elastic and elastoplastic analysis method for the treatment of two- and three-dimensional contact problems between two deformable bodies undergoing small displacements, with and without friction. The author’s approach uses the Boundary Element Method (BEM) and Mathematical Programming (MP). It is applied to contacting bodies pressed together by a normal force sufficient to expand the contact area, mainly through plastic deformation, and subsequently acted on by a tangential force less than that necessary to cause overall sliding. The method described in this book is straightforward and applicable to practical engineering problems. Series: Topics in Engineering, Vol 45 ISBN: 1-85312-733-7 2005 144pp £55.00/US$95.00/€79.00
Integral Equations and their Applications M. RAHMAN, Dalhousie University, Canada
For many years, the subject of functional equations has held a prominent place in the attention of mathematicians. In more recent years this attention has been directed to a particular kind of functional equation, the integral equation, wherein the unknown function occurs under the integral sign. The study of this kind of equation is sometimes referred to as the inversion of a definite integral. While scientists and engineers can already choose from a number of books on integral equations, this new book encompasses recent developments, including some preliminary background on the formulation of integral equations governing the physical situation of the problems. It also contains elegant analytical and numerical methods, and covers the important topic of variational principles. Primarily intended for senior undergraduate students and first-year postgraduate students of engineering and science courses, students of the mathematical and physical sciences will also find many sections of direct relevance. The book contains eight chapters, pedagogically organised, and is specially designed for those who wish to understand integral equations without having an extensive mathematical background. Some knowledge of integral calculus, ordinary differential equations, partial differential equations, Laplace transforms, Fourier transforms, Hilbert transforms, analytic functions of complex variables and contour integration is expected on the part of the reader. ISBN: 978-1-84564-101-6 2007 384pp £126.00/US$252.00/€189.00 eISBN: 978-1-84564-289-1