Structural design optimization considering uncertainties
Structures and Infrastructures Series ISSN 1747-7735
Book Series Editor:
Dan M. Frangopol Professor of Civil Engineering and Fazlur R. Khan Endowed Chair of Structural Engineering and Architecture Department of Civil and Environmental Engineering Center for Advanced Technology for Large Structural Systems (ATLSS Center) Lehigh University Bethlehem, PA, USA
Volume 1
Structural design optimization considering uncertainties
Edited by
Yiannis Tsompanakis, Department of Applied Sciences, Technical University of Crete, University Campus, Chania, Crete, Greece
Nikos D. Lagaros & Manolis Papadrakakis, Institute of Structural Analysis & Seismic Research, Faculty of Civil Engineering, National Technical University of Athens, Zografou Campus, Athens, Greece
LONDON / LEIDEN / NEW YORK / PHILADELPHIA / SINGAPORE
Colophon

Book Series Editor: Dan M. Frangopol
Volume Editors: Yiannis Tsompanakis, Nikos D. Lagaros and Manolis Papadrakakis
Cover illustration: Objective space of the M-3OU multi-criteria optimization problem (Nikos D. Lagaros, September 2007)

This edition published in the Taylor & Francis e-Library, 2008. "To purchase your own copy of this or any of Taylor & Francis or Routledge's collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk."

Taylor & Francis is an imprint of the Taylor & Francis Group, an informa business.

© 2008 Taylor & Francis Group, London, UK

All rights reserved. No part of this publication or the information contained herein may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, by photocopying, recording or otherwise, without prior written permission from the publishers. Although all care is taken to ensure the integrity and quality of this publication and the information herein, no responsibility is assumed by the publishers or the authors for any damage to property or persons as a result of operation or use of this publication and/or the information contained herein.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.

Library of Congress Cataloging-in-Publication Data
Structural design optimization considering uncertainties / edited by Yiannis Tsompanakis, Nikos D. Lagaros & Manolis Papadrakakis.
p. cm. – (Structures and infrastructures series, ISSN 1747-7735)
Includes bibliographical references and index.
ISBN 978-0-415-45260-1 (hardcover : alk. paper) – ISBN 978-0-203-93852-2 (e-book)
1. Structural optimization. I. Tsompanakis, Yiannis, 1969– II. Lagaros, Nikos D., 1970– III. Papadrakakis, Manolis, 1948–
TA658.8.S73 2007
624.1/7713–dc22
2007040343

Published by: Taylor & Francis/Balkema
P.O. Box 447, 2300 AK Leiden, The Netherlands
e-mail: [email protected]
www.balkema.nl, www.taylorandfrancis.co.uk, www.crcpress.com

ISBN 0-203-93852-6 (Master e-book ISBN)
ISBN13 978-0-415-45260-1 (Hbk)
ISBN13 978-0-203-93852-2 (eBook)
Structures and Infrastructures Series: ISSN 1747-7735, Volume 1
Table of Contents
Editorial IX
About the Book Series Editor XI
Foreword XIII
Preface XV
Brief Curriculum Vitae of the Editors XXI
List of Contributors XXIII
Author Data XXV

PART 1 Reliability-Based Design Optimization (RBDO) 1

1 Principles of reliability-based design optimization 3
Alaa Chateauneuf, University Blaise Pascal, France

2 Reliability-based optimization of engineering structures 31
John D. Sørensen, Aalborg University, Aalborg, Denmark

3 Reliability analysis and reliability-based design optimization using moment methods 57
Sang Hoon Lee, Northwestern University, Evanston, IL, USA
Byung Man Kwak, Korea Advanced Institute of Science and Technology, Daejeon, Korea
Jae Sung Huh, Korea Aerospace Research Institute, Daejeon, Korea

4 Efficient approaches for system reliability-based design optimization 87
Efstratios Nikolaidis, University of Toledo, Toledo, OH, USA
Zissimos P. Mourelatos, Oakland University, Rochester, MI, USA
Jinghong Liang, Oakland University, Rochester, MI, USA

5 Nondeterministic formulations of analytical target cascading for decomposition-based design optimization under uncertainty 115
Michael Kokkolaras, University of Michigan, Ann Arbor, MI, USA
Panos Y. Papalambros, University of Michigan, Ann Arbor, MI, USA

6 Design optimization of stochastic dynamic systems by algebraic reduced order models 135
Gary Weickum, University of Colorado at Boulder, Boulder, CO, USA
Matt Allen, University of Colorado at Boulder, Boulder, CO, USA
Kurt Maute, University of Colorado at Boulder, Boulder, CO, USA
Dan M. Frangopol, Lehigh University, Bethlehem, PA, USA

7 Stochastic system design optimization using stochastic simulation 155
Alexandros A. Taflanidis, California Institute of Technology, CA, USA
James L. Beck, California Institute of Technology, CA, USA

8 Numerical and semi-numerical methods for reliability-based design optimization 189
Ghias Kharmanda, Aleppo University, Aleppo, Syria

9 Advances in solution methods for reliability-based design optimization 217
Alaa Chateauneuf, University Blaise Pascal, France
Younes Aoues, University Blaise Pascal, France

10 Non-probabilistic design optimization with insufficient data using possibility and evidence theories 247
Zissimos P. Mourelatos, Oakland University, Rochester, MI, USA
Jun Zhou, Oakland University, Rochester, MI, USA

11 A decoupled approach to reliability-based topology optimization for structural synthesis 281
Neal M. Patel, University of Notre Dame, Notre Dame, IN, USA
John E. Renaud, University of Notre Dame, Notre Dame, IN, USA
Donald Tillotson, University of Notre Dame, Notre Dame, IN, USA
Harish Agarwal, General Electric Global Research, Niskayuna, NY, USA
Andrés Tovar, National University of Colombia, Bogota, Colombia

12 Sample average approximations in reliability-based structural optimization: Theory and applications 307
Johannes O. Royset, Naval Postgraduate School, Monterey, CA, USA
Elijah Polak, University of California, Berkeley, CA, USA

13 Cost-benefit optimization for maintained structures 335
Rüdiger Rackwitz, Technical University of Munich, Munich, Germany
Andreas E. Joanni, Technical University of Munich, Munich, Germany

14 A reliability-based maintenance optimization methodology 369
Y.-T. Wu, Applied Research Associates Inc., Raleigh, NC, USA

15 Overview of reliability analysis and design capabilities in DAKOTA with application to shape optimization of MEMS 401
Michael S. Eldred, Sandia National Laboratories, Albuquerque, NM, USA
Barron J. Bichon, Vanderbilt University, Nashville, TN, USA
Brian M. Adams, Sandia National Laboratories, Albuquerque, NM, USA
Sankaran Mahadevan, Vanderbilt University, Nashville, TN, USA

PART 2 Robust Design Optimization (RDO)

16 Structural robustness and its relationship to reliability 435
Jorge E. Hurtado, National University of Colombia, Manizales, Colombia

17 Maximum robustness design of trusses via semidefinite programming 471
Yoshihiro Kanno, University of Tokyo, Tokyo, Japan
Izuru Takewaki, Kyoto University, Kyoto, Japan

18 Design optimization and robustness of structures against uncertainties based on Taylor series expansion 499
Ioannis Doltsinis, University of Stuttgart, Stuttgart, Germany

19 Info-gap robust design of passively controlled structures with load and model uncertainties 531
Izuru Takewaki, Kyoto University, Kyoto, Japan
Yakov Ben-Haim, Technion, Haifa, Israel

20 Genetic algorithms in structural optimum design using convex models of uncertainty 549
Sara Ganzerli, Gonzaga University, Spokane, WA, USA
Paul De Palma, Gonzaga University, Spokane, WA, USA

21 Metamodel-based computational techniques for solving structural optimization problems considering uncertainties 567
Nikos D. Lagaros, National Technical University of Athens, Athens, Greece
Yiannis Tsompanakis, Technical University of Crete, Chania, Greece
Michalis Fragiadakis, University of Thessaly, Volos, Greece
Vagelis Plevris, National Technical University of Athens, Athens, Greece
Manolis Papadrakakis, National Technical University of Athens, Athens, Greece

References 599
Author index 631
Subject index 633
Editorial
Welcome to the New Book Series Structures and Infrastructures. Our knowledge of how to model, analyze, design, maintain, manage and predict the life-cycle performance of structures and infrastructures is continually growing. However, the complexity of these systems continues to increase and an integrated approach is necessary to understand the effect of technological, environmental, economic, social and political interactions on the life-cycle performance of engineering structures and infrastructures. In order to accomplish this, methods have to be developed to systematically analyze structure and infrastructure systems, and models have to be formulated for evaluating and comparing the risks and benefits associated with various alternatives. We must maximize the life-cycle benefits of these systems to serve the needs of our society by selecting the best balance of the safety, economy and sustainability requirements despite imperfect information and knowledge. In recognition of the need for such methods and models, the aim of this Book Series is to present research, developments, and applications written by experts on the most advanced technologies for analyzing, predicting and optimizing the performance of structures and infrastructures such as buildings, bridges, dams, underground construction, offshore platforms, pipelines, naval vessels, ocean structures, nuclear power plants, and also airplanes, aerospace and automotive structures. The scope of this Book Series covers the entire spectrum of structures and infrastructures. Thus it includes, but is not restricted to, mathematical modeling, computer and experimental methods, practical applications in the areas of assessment and evaluation, construction and design for durability, decision making, deterioration modeling and aging, failure analysis, field testing, structural health monitoring, financial planning, inspection and diagnostics, life-cycle analysis and prediction, loads, maintenance strategies, management systems, nondestructive testing, optimization of maintenance and management, specifications and codes, structural safety and reliability, system analysis, time-dependent performance, rehabilitation, repair, replacement, reliability and risk management, service life prediction, strengthening and whole life costing. This Book Series is intended for an audience of researchers, practitioners, and students world-wide with a background in civil, aerospace, mechanical, marine and automotive engineering, as well as people working in infrastructure maintenance, monitoring, management and cost analysis of structures and infrastructures. Some volumes are monographs defining the current state of the art and/or practice in the field, and some are textbooks to be used in undergraduate (mostly seniors), graduate and
postgraduate courses. This Book Series is affiliated with Structure and Infrastructure Engineering (http://www.informaworld.com/sie), an international peer-reviewed journal which is included in the Science Citation Index. If you would like to contribute to this Book Series as an author or editor, please contact the Book Series Editor (
[email protected]) or the Publisher (
[email protected]). A book proposal form can be downloaded at www.balkema.nl. It is now up to you, authors, editors, and readers, to make Structures and Infrastructures a success.

Dan M. Frangopol
Book Series Editor
About the Book Series Editor
Dr. Dan M. Frangopol is the first holder of the Fazlur R. Khan Endowed Chair of Structural Engineering and Architecture at Lehigh University, Bethlehem, Pennsylvania, USA, and a Professor in the Department of Civil and Environmental Engineering at Lehigh University. He is also an Emeritus Professor of Civil Engineering at the University of Colorado at Boulder, USA, where he taught for more than two decades (1983–2006). Before joining the University of Colorado, he worked for four years (1979–1983) in structural design with A. Lipski Consulting Engineers in Brussels, Belgium. In 1976, he received his doctorate in Applied Sciences from the University of Liège, Belgium, and holds an honorary doctorate degree (Doctor Honoris Causa) and a B.S. degree from the Technical University of Civil Engineering in Bucharest, Romania. He is a Fellow of the American Society of Civil Engineers (ASCE), American Concrete Institute (ACI), and International Association for Bridge and Structural Engineering (IABSE). He is also an Honorary Member of both the Romanian Academy of Technical Sciences and the Portuguese Association for Bridge Maintenance and Safety. He is the initiator and organizer of the Fazlur R. Khan Lecture Series (www.lehigh.edu/frkseries) at Lehigh University. Dan Frangopol is an experienced researcher and consultant to industry and government agencies, both nationally and abroad. His main areas of expertise are structural reliability, structural optimization, bridge engineering, and life-cycle analysis, design, maintenance, monitoring, and management of structures and infrastructures. He is the Founding President of the International Association for Bridge Maintenance and Safety (IABMAS, www.iabmas.org) and of the International Association for Life-Cycle Civil Engineering (IALCCE, www.ialcce.org), and Past Director of the Consortium on Advanced Life-Cycle Engineering for Sustainable Civil Environments (COALESCE). He is also the Chair of the Executive Board of the International Association for Structural Safety and Reliability (IASSAR, www.columbia.edu/cu/civileng/iassar) and the Vice-President of the International Society for Health Monitoring of Intelligent Infrastructures (ISHMII, www.ishmii.org). Dan Frangopol is the recipient of several prestigious awards including the 2007 ASCE Ernest Howard Award, the 2006 IABSE OPAC Award, the 2006 Elsevier Munro Prize, the 2006 T. Y. Lin Medal, the 2005 ASCE Nathan M. Newmark Medal, the 2004 Kajima Research Award, the 2003
ASCE Moisseiff Award, the 2002 JSPS Fellowship Award for Research in Japan, the 2001 ASCE J. James R. Croes Medal, the 2001 IASSAR Research Prize, the 1998 and 2004 ASCE State-of-the-Art of Civil Engineering Award, and the 1996 Distinguished Probabilistic Methods Educator Award of the Society of Automotive Engineers (SAE). Dan Frangopol is the Founding Editor-in-Chief of Structure and Infrastructure Engineering (Taylor & Francis, www.informaworld.com/sie), an international peer-reviewed journal which is included in the Science Citation Index. This journal is dedicated to recent advances in maintenance, management, and life-cycle performance of a wide range of structures and infrastructures. He is the author or co-author of over 400 refereed publications, and co-author, editor or co-editor of more than 20 books published by ASCE, Balkema, CIMNE, CRC Press, Elsevier, McGraw-Hill, Taylor & Francis, and Thomas Telford, and an editorial board member of several international journals. Additionally, he has chaired and organized several national and international structural engineering conferences and workshops. Dan Frangopol has supervised over 70 Ph.D. and M.Sc. students. Many of his former students are professors at major universities in the United States, Asia, Europe, and South America, and several are prominent in professional practice and research laboratories. For additional information on Dan M. Frangopol’s activities, please visit www.lehigh.edu/~dmf206/
Foreword
The aim of structural optimization is to achieve the best possible design by maximizing benefits under conflicting criteria. Uncertainties are unavoidable in the structural optimization process. Therefore, a realistic optimal design process should definitely consider uncertainties. Two broad types of uncertainty have to be considered: (a) uncertainty associated with randomness, the so-called aleatory uncertainty, and (b) uncertainty associated with imperfect modeling, the so-called epistemic uncertainty. It has been clearly demonstrated that both aleatory and epistemic uncertainties can be treated, separately or combined, and analyzed using the principles of probability and statistics. Structural reliability theory has been developed during the past decades to handle problems considering such uncertainties. This continuous development has had considerable impact in recent years on structural optimization. The purpose of this book is to present the latest research findings in the field of structural optimization considering uncertainties. A wide variety of topics are covered by leading researchers. The first part (Chapters 1 to 15) is devoted to reliability-based design optimization, and the second part (Chapters 16 to 21) deals with robust design optimization. To provide the reader with a good overview of pertinent literature, all cited papers and additional references on the topics discussed, are collected in a comprehensive list of references. The Book Series Editor would like to express his appreciation to the Editors and all Authors who contributed to this book. It is his hope that this first volume in the Structures and Infrastructures Book Series will generate a lot of interest and help engineers to design the best structural systems under uncertainty.
Dan M. Frangopol
Book Series Editor
Bethlehem, Pennsylvania
November 2, 2007
Preface
Uncertainties are inherent in engineering problems and the scatter of structural parameters from their nominal ideal values is unavoidable. The response of structural systems can sometimes be very sensitive to uncertainties encountered in the material properties, manufacturing conditions, external loading conditions and analytical and/or numerical modelling. In recent years, probabilistic-based formulations of optimization problems have been developed to account for uncertainties through stochastic simulation and probabilistic analysis. Stochastic analysis methods have been developed significantly over the last two decades and have stimulated the interest for the probabilistic optimum design of structures. There are mainly two design formulations that account for probabilistic systems response: Reliability-Based Design Optimization (RBDO) and Robust Design Optimization (RDO). The main goal of RBDO methods is to design for safety with respect to extreme events. RDO methods primarily seek to minimize the influence of stochastic variations on the mean design. The selected contributions of this book deal with the use of probabilistic methods for optimal design of different types of structures and various considerations of uncertainties. This volume is a collective book of twenty-one self-contained chapters, which present state-of-the-art theoretical advances and applications in various fields of probabilistic computational mechanics. The first fifteen chapters of the book are focused on RBDO theory and applications, while the rest of the chapters deal with advances in RDO and combined RBDO-RDO theory and applications. Apart from the reference list that is given separately for each chapter, a complete list of references is also provided for the reader. In order to obtain contributions that cover a wide spectrum of engineering problems, the problem of optimum design is considered in a broad sense. The probabilistic framework allows for a consistent treatment of both cost and safety. In what follows a short description of the book content is presented. In the introductory chapter by Chateauneuf, the fundamental theoretical and computational issues related to RBDO are described and the advantages of RBDO compared to conventional deterministic optimization approaches are outlined. This chapter emphasizes the role of uncertainties in deriving a “true’’ optimal solution, defined as the best compromise between cost minimization and safety assurance. The presented RBDO formulations cover various important probabilistic issues (theoretical, computational and practical), such as multi-component reliability analysis, safety factor calibration, multi-objective applications, as well as a great variety of engineering applications, such as topology, maintenance and time-variant problems.
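To fix ideas about the two formulations introduced above, a compact generic statement may be given; the notation is assumed here for illustration only and is not taken from any particular chapter. RBDO minimizes a cost measure subject to reliability (failure-probability) constraints, whereas RDO trades off the mean and the dispersion of the performance:

\[
\text{RBDO:}\quad \min_{\mathbf{d}} \; C(\mathbf{d}) \quad \text{s.t.} \quad P\big[g_i(\mathbf{d},\mathbf{X}) \le 0\big] \le P_{f,i}^{\,t}, \quad i = 1,\dots,m
\]
\[
\text{RDO:}\quad \min_{\mathbf{d}} \; \big\{\, \mu_F(\mathbf{d}),\; \sigma_F(\mathbf{d}) \,\big\}
\]

where \(\mathbf{d}\) denotes the design variables, \(\mathbf{X}\) the random variables, \(g_i\) the limit-state functions (with \(g_i \le 0\) indicating failure), \(P_{f,i}^{\,t}\) the target failure probabilities, and \(\mu_F\), \(\sigma_F\) the mean and standard deviation of a performance function \(F\).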
The theoretical basis for reliability-based structural optimization is described by Sørensen within the framework of Bayesian statistical decision theory. This contribution presents the latest findings in RBDO with respect to three major types of decision problems with increased degree of complexity and uncertainty: a) decisions with given information (e.g. planning of new structures), b) decisions when new information is provided (e.g. for re-assessment and retrofitting of existing structures), c) decisions involving planning of experiments/inspections to obtain new information (e.g. for inspection planning). Furthermore, RBDO issues related to decisions with systematic reconstruction are also discussed. Reliability-based, cost-benefit problems are formulated and exemplified with structural optimization. Illustrative examples are presented including a simple introductory example, a decision problem related to bridge re-assessment and a reliability-based decision problem for offshore wind turbines. Lee, Kwak and Huh deal with reliability analysis and reliability-based design optimization using moment methods. By using this approach, a finite number of statistical moments of a system response function are calculated and the probability density function (PDF) of the system response is identified by empirical distribution systems, such as the Pearson or the Johnson system. In this chapter, a full factorial moment method (FFMM) procedure is introduced for reliability analysis calculations. A response surface augmented moment method (RSMM) is developed to construct a series of approximate response surface for enhancing the efficiency of FFMM. The probability of failure is calculated using an empirical distribution system and the first four statistical moments of system’s performance function are calculated from appropriate design simulations. The design sensitivity of the probability of failure, required during RBDO process, is calculated in a semi-analytic way using moment methods. As stated in the chapter by Nikolaidis, Mourelatos and Liang, a designer faces many challenges when applying RBDO to engineering systems. The high computational cost required for RBDO and the efficient computation of the system failure probability are the two principal challenges. As a result, most RBDO studies are restricted to the safety levels of the individual failure modes. In order to overcome this deficiency, two efficient approaches for RBDO are presented in this chapter. Both approaches apportion optimally the system reliability among the failure modes by considering the target values of the failure probabilities of the modes as design variables. The first approach uses a sequential optimization and reliability assessment (SORA) approach, while the second system RBDO approach uses a single-loop method where the searches for the optimum design and for the most probable failure points proceed simultaneously. The two approaches are illustrated and compared on characteristic design examples. Moreover, it is shown that the single-loop approach, enhanced with an active set strategy, is considerably more efficient than the SORA approach. According to the work of Kokkolaras and Papalambros, design subproblems are formulated and solved so that their solution can be integrated to represent the optimal design of the decomposed system. This approach requires appropriate problem formulation and coordination of the distributed, multilevel system design problem. 
The presented analytical target cascading (ATC) is a methodology suitable for multilevel optimal design problems. Design targets are cascaded to lower levels using the modelbased, hierarchical decomposition of the original design problem. An optimization problem is posed and solved for each design subproblem to minimize deviations from
propagated targets. Solving the subproblems and using an appropriate coordination strategy the overall system compatibility is preserved. The required computational effort motivated Weickum, Allen, Maute and Frangopol to address the need for developing efficient numerical probabilistic techniques for the reliability analysis and design optimization of stochastic dynamic systems. This work seeks to alleviate the computational costs for optimizing dynamic systems by employing reduced order models. The key to utilize reduced order models in stochastic analysis and optimization lies in making them adaptable to design changes and variations of the random parameters. For this purpose, an extended reduced order model (EROM) method, which is a reduced order model accounting for parameter changes, is integrated into stochastic analysis and design optimization. The application of the proposed EROM is tested both for deterministic and probabilistic optimization of the characteristic connecting rod example. Taflanidis and Beck consider a two stage framework for efficient implementation of RBDO of dynamical systems under stochastic excitation (e.g. earthquake, wind or wave loading), where uncertainties are assumed for both the excitation characteristics and the structural model adopted. In the first stage a novel approach, the so called stochastic subset optimization (SSO), is implemented for iteratively identifying a subset of the original design space that has high probability of containing the optimal design variables. The second stage adopts a stochastic optimization algorithm to pinpoint, if needed, the optimal design variables within that subset. Topics related to the combination of the two different stages, in order to enhance the overall efficiency of the presented methodology, are also discussed. An illustrative example for the seismic retrofitting, via viscous dampers, is presented. The minimization of the expected lifecycle cost is adopted as the design objective, in which the cost associated with damage caused by future earthquakes is calculated by stochastic simulation via a realistic probabilistic model for the structure and the ground motion that involves the formulation of an effective loss function model. Kharmanda discusses in his contribution issues related to RBDO formulation and solution procedures. The RBDO formulation is defined as a nonlinear mathematical programming problem in which the mean values of uncertain system parameters are used as design variables and its weight or cost is optimized subjected to prescribed probabilistic constraints. In this chapter, recent developments for the efficient RBDO problem solving using semi-numerical and numerical techniques are presented. Following a detailed description of the proposed methods, their efficiency is demonstrated in computationally demanding dynamic applications. The obtained results as well as the computational implications of the methods are compared and their advantages and disadvantages are highlighted in a comprehensive manner. In the contribution by Chateauneuf and Aoues, the main objective is to apply appropriate numerical methods in order to solve RBDO problems more efficiently. A comprehensive description of the most commonly used RBDO formulations and the corresponding numerical methods is provided. 
A good RBDO algorithm should satisfy the conditions of efficiency (computation time), precision (accuracy of finding the optimum), generality (capability to deal with different kinds of problems) and robustness (stability of the convergence for any admissible initial point, local or global convergence criteria, etc). All these aspects are discussed in detail, and effective solutions are proposed via characteristic test examples.
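To make concrete why the efficiency criterion dominates this discussion, the following minimal sketch (not taken from the book; the limit state, cost function and target reliability index are invented for illustration) shows the classical nested, double-loop structure of RBDO: an outer design-optimization loop that, at every candidate design, calls an inner reliability analysis. Here the toy limit state g = R - S is linear in Gaussian variables, so the inner analysis collapses to a closed-form reliability index; in realistic problems it is itself an iterative FORM or simulation procedure, which is exactly what makes the nested scheme expensive.

import numpy as np
from scipy.optimize import minimize

BETA_TARGET = 3.0              # assumed target reliability index
MU_S, SIG_S = 10.0, 2.0        # assumed load statistics (mean, standard deviation)
COV_R = 0.1                    # assumed coefficient of variation of the resistance

def beta(d):
    # Inner "loop": reliability index for g = R - S with R ~ N(d, COV_R*d), S ~ N(MU_S, SIG_S).
    mu_r, sig_r = d[0], COV_R * d[0]
    return (mu_r - MU_S) / np.hypot(sig_r, SIG_S)

cost = lambda d: d[0]          # outer-loop objective: cost taken proportional to the mean resistance

res = minimize(cost, x0=[25.0], bounds=[(10.0, 40.0)], method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda d: beta(d) - BETA_TARGET}])
print(f"optimal mean resistance = {res.x[0]:.2f}, attained beta = {beta(res.x):.2f}")

Every outer iteration triggers at least one inner reliability evaluation (plus the finite-difference evaluations needed for gradients), which is why decoupled, single-loop and approximation-based strategies such as those surveyed in this chapter have been developed.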
In the chapter by Mourelatos and Zhou, possibility and evidence theories are used to account for uncertainty in structural design with incomplete and/or fuzzy information. A sequential possibility-based design optimization (SPDO) method is presented, which decouples the design loop and the reliability assessment of each constraint and is also capable of handling both random and possibilistic design variables. Furthermore, a computationally efficient optimum design formulation using evidence theory is presented, which can handle a mixture of epistemic and aleatory uncertainties. Numerical examples demonstrate the application of possibility and evidence theories in probabilistic optimum design and highlight the trade-offs among reliability-based, possibility-based and evidence-based design approaches. In the chapter by Patel, Renaud, Tillotson, Agarwal, and Tovar, the mode of failure is considered to be the maximum deflection of the structure in reliability-based topology optimization (RBTO). A decoupled approach is employed in which the topology optimization stage is separate from the reliability analysis. The proposed decoupled reliability-based design optimization methodology is an approximate technique to obtain consistent reliable designs at lower computational expense. An efficient non-gradient Hybrid Cellular Automaton (HCA) method has been implemented in the proposed decoupled approach for evaluating density changes, while the strain energy for every new design is evaluated via finite element structural analyses. The chapter by Royset and Polak presents recent advances in combining Monte Carlo sampling and nonlinear programming algorithms for RBDO problems, utilizing effective approximation techniques that can lead to a reduction of the excessive computational cost. More specifically, they present an approach where the reliability term in the problem formulation is replaced by a statistical estimate of the reliability obtained by means of Monte Carlo sampling. The authors emphasize the calculation of an “adaptive optimal’’ sample size, which is achieved using sample-adjustment rules by solving auxiliary optimization tasks during the evolution of the RBDO process. The efficiency of the methods is verified in a number of numerical examples arising in the design of various types of structures having single or multiple limit-state functions, in which reliability terms are included in both objective and constraint functions. Rackwitz and Joanni describe theoretical and practical issues leading to cost-efficient optimization formulations for existing aging structures. In order to establish an efficient methodology for optimizing maintenance, an elaborate model based on renewal theory, using systematic reconstruction or repair schemes after suitable inspection, is formulated from a life-cycle cost perspective. The presented implementation shows the impact of the choice of the objective function, the risk acceptability and the transient behaviour of the failure rate. The emphasis is on concrete structures, but the described methodology can be applied to any material and any type of engineering structure. In particular, minimal age-dependent block repairs and maintenance by inspection and repair have been studied via an illustrative example. Wu describes in his contribution a reliability-based damage tolerance (RBDT) approach that provides a systematic procedure for probabilistic fracture-mechanics damage tolerance analysis with maintenance planning under various uncertainties.
Moreover, he presents the successful integration of RBDT in the proposed reliabilitybased maintenance optimization (RBMO) methodology, focusing on efficient sampling and other computational strategies for handling the uncertainties related to structural maintenance issues (fatigue, failure, inspection, repair, etc). A comparison of different
versions of the proposed RBMO for analytical benchmark examples as well as for realistic test cases is presented. Eldred, Bichon, Adams and Mahadevan present an overview of recent research related to first and second-order reliability methods. They outline both the forward reliability analysis of computing probabilities for specified response levels (using the so-called RIA, i.e. the reliability index approach) and the inverse reliability analysis of computing response levels for specified probabilities (the performance measure approach or PMA). A number of algorithmic variations are described and the effect of different limit state approximations, probability integrations, warm starting, most probable point search algorithms, and Hessian approximations is discussed. Relative performance of these reliability analysis and design algorithms is presented for several benchmark test problems as well as for real-world applications related to the probabilistic analysis and design of micro-electro-mechanical systems (MEMS) using the DAKOTA software. Hurtado aims at exploiting the complementary nature of RDO and RBDO probabilistic optimization approaches, using effective expansion techniques. Under this viewpoint, an efficient approximate methodology that integrates RDO and RBDO is proposed, in an effort to allow the designer to foresee the implications of adopting RDO or RBDO in the optimization process of probabilistic applications and to combine them in an optimum manner. On this basis, the concept of “robustness assurance’’ in structural design is introduced, in a similar manner to the “quality assurance’’ in the construction phase. For this purpose, a practical method for robust optimal design interpreted as entropy minimization is presented. Illustrative examples are presented to elucidate the advantages of the proposed approach. The robustness function is a measure of the performance of structural systems and expresses the greatest level of non-probabilistic uncertainty at which any constraint on structural performance cannot be violated. Kanno and Takewaki propose an efficient scheme for robust design optimization of trusses under various uncertainties. The structural optimization problem is formulated in the framework of an info-gap decision theory, aiming at maximizing the robustness function and is solved using semi-definite programming methods. Characteristic truss examples are used to demonstrate the efficiency of the proposed methodology. In his chapter, Doltsinis advocates the importance of an elaborate consideration of random scatter in industrial engineering with regard to reliability, and for securing standards of operation performance (robustness). For this purpose, synthetic Monte Carlo sampling and analytic Taylor series expansion that offer alternatives of stochastic analysis and design improvement are described. The robust optimum design problem is formulated as a two-criteria task that involves minimization of mean value and standard deviation of the objective function, while randomness of the constraints is also considered. Numerical applications justify the efficiency of the proposed approach are presented with linear and nonlinear structural response. Takewaki and Ben-Haim present a robust design concept, capable of incorporating uncertainties for both demand (loads) and capacity (various structural design parameters) of a dynamically loaded structure. 
Since uncertainties are prevalent in many cases, it is necessary to satisfy critical performance requirements, rather than to optimize performance, and to maximize the robustness to uncertainty. In the proposed implementation, the so called, “info-gap models of uncertainty’’ are used to represent
uncertainties in the Fourier amplitude spectrum of the dynamic loading and the basic structural parameters related to the vibration model of the structure. Furthermore, earthquake input energy is introduced as a new measure of structural performance for passively controlled structures, and uncertainties in the damping coefficients of the control devices are also considered. Ganzerli and De Palma focus on the use of convex models of uncertainty with genetic algorithms for optimal structural design. Together with probability theory and fuzzy sets, convex models can be considered part of the so-called “uncertainty triangle’’. Following a literature review on convex models and their applications, a description of convex models theory is presented as an efficient alternative for dealing with problems having severe structural uncertainties. Subsequently, applications including the use of convex models of uncertainty combined with genetic algorithms for the optimal structural design of trusses are demonstrated, and directions for further research in this area are given. In the last chapter, Lagaros, Tsompanakis, Fragiadakis, Plevris and Papadrakakis present efficient methodologies for performing standard RBDO and combined reliability-based and robust design optimization (RRDO) of stochastic structural systems in a multi-objective optimization framework. The proposed methodologies incorporate computationally efficient structural optimization and probabilistic analysis procedures. The optimization part is performed with evolutionary methods, while the probabilistic analysis is carried out with the Monte Carlo Simulation (MCS) method using the Latin Hypercube Sampling (LHS) technique for the reduction of the sample size. In order to reduce the excessive computational cost and make the whole procedure feasible for real-world engineering problems, the use of Neural Network (NN) based metamodels is incorporated in the proposed methodology. The use of NN is motivated by the time-consuming repeated FE solutions required in the reliability analysis phase and by the evolutionary optimization algorithm during the optimization process. The editors of this book would like to express their deep gratitude to all the contributors for their most valuable support during the preparation of this volume and for their time and effort devoted to the completion of their contributions. In addition, we are most grateful to the Book Series Editor, Professor Dan M. Frangopol, for his kind invitation to edit this volume, for preparing the foreword of this book, and for his constructive comments and suggestions offered during the publication process. Finally, the editors would like to thank all the personnel of Taylor and Francis Publishers, especially Germaine Seijger, Richard Gundel, Lukas Goosen, Tessa Halm, Maartje Kuipers and Janjaap Blom, for their most valuable support for the publication of this book.
Yiannis Tsompanakis
Nikos D. Lagaros
Manolis Papadrakakis
September 2007
Brief Curriculum Vitae of the Editors
Yiannis Tsompanakis is Assistant Professor in the Department of Applied Sciences of the Technical University of Crete, Greece, where he teaches courses in structural and computational mechanics as well as earthquake engineering. His scientific interests include computational methods in structural and geotechnical earthquake engineering, structural optimization, probabilistic mechanics, structural assessment and the application of artificial intelligence methods in engineering. Dr. Tsompanakis has published many scientific papers and is the co-editor of several books in computational mechanics. He is involved in the organization of minisymposia and special sessions in international conferences, as well as special issues of scientific journals as guest editor. He serves as a board member of various conferences, organized the COMPDYN-2007 conference together with the other editors of this book and acts as a co-editor of the resulting selected papers volume.
Nikos D. Lagaros is Lecturer of structural dynamics and computational mechanics in the School of Civil Engineering of the National Technical University of Athens, Greece. His research activity is focused on the development and application of novel computational methods and information technology to structural and earthquake engineering analysis and design. In addition, Dr. Lagaros has provided consulting and expert-witness services to private companies and federal government agencies in Greece. He also serves as a member of the editorial board and reviewer of various international scientific journals. He has published numerous scientific papers, and is the co-editor of a number of forthcoming books, one of which deals with innovative soft computing applications in earthquake engineering. Nikos Lagaros is co-organizer of COMPDYN 2007 and co-editor of its selected papers volume.
Manolis Papadrakakis is Professor of Computational Structural Mechanics in the School of Civil Engineering at the National Technical University of Athens, Greece. His main fields of interest are: large-scale, stochastic and adaptive finite element applications, nonlinear dynamics, structural optimization, soil-fluid-structure interaction and soft computing applications in structural engineering. He is co-Editor-in-chief of the Computer Methods in Applied Mechanics and Engineering Journal, an Honorary Editor of the International Journal of Computational Methods, and an Editorial Board member of a number of international scientific journals. He is also a member of both the Executive and the General Council of the International Association for Computational Mechanics, Chairman of the European Committee on Computational Solid and Structural Mechanics and Vice President of the John Argyris Foundation. Professor Papadrakakis has chaired many international conferences and presented numerous invited lectures. He has written and edited various books and published a large variety of scientific articles in refereed journals and book chapters.
List of Contributors
Adams, B.M., Sandia National Laboratories, Albuquerque, NM, USA Agarwal, H., General Electric Global Research, Niskayuna, NY, USA Allen, M., University of Colorado at Boulder, Boulder, CO, USA Aoues, Y., University Blaise Pascal, France Beck, J.L., California Institute of Technology, CA, USA Ben-Haim, Y., Technion, Haifa, Israel Bichon, B.J., Vanderbilt University, Nashville, TN, USA Chateauneuf, A., University Blaise Pascal, France De Palma, P., Gonzaga University, Spokane, WA, USA Doltsinis, I., University of Stuttgart, Stuttgart, Germany Eldred, M.S., Sandia National Laboratories, Albuquerque, NM, USA Fragiadakis, M., University of Thessaly, Volos, Greece Frangopol, D.M., Lehigh University, Bethlehem, PA, USA Ganzerli, S., Gonzaga University, Spokane, WA, USA Huh, J.S., Korea Aerospace Research Institute, Daejeon, Korea Hurtado, J.E., National University of Colombia, Manizales, Colombia Joanni, A.E., Technical University of Munich, Munich, Germany Kanno, Y., University of Tokyo, Tokyo, Japan Kharmanda, G., Aleppo University, Aleppo, Syria Kokkolaras, M., University of Michigan, Ann Arbor, MI, USA Kwak, B.M., Korea Advanced Institute of Science and Technology, Daejeon, Korea Lagaros, N.D., National Technical University of Athens, Athens, Greece Lee, S.H., Northwestern University, Evanston, IL, USA Liang, J., Oakland University, Rochester, MI, USA Mahadevan, S., Vanderbilt University, Nashville, TN, USA Maute, K., University of Colorado at Boulder, Boulder, CO, USA Mourelatos, Z.P., Oakland University, Rochester, MI, USA Nikolaidis, E., University of Toledo, Toledo, OH, USA Papadrakakis, M., National Technical University of Athens, Athens, Greece Papalambros, P.Y., University of Michigan, Ann Arbor, MI, USA Patel, N.M., University of Notre Dame, Notre Dame, IN, USA Plevris, V., National Technical University of Athens, Athens, Greece Polak, E., University of California, Berkeley, CA, USA Rackwitz, R., Technical University of Munich, Munich, Germany
Renaud, J.E., University of Notre Dame, Notre Dame, IN, USA Royset, J.O., Naval Postgraduate School, Monterey, CA, USA Sørensen, J.D., Aalborg University, Aalborg, Denmark Taflanidis, A.A., California Institute of Technology, CA, USA Takewaki, I., Kyoto University, Kyoto, Japan Tillotson, D., University of Notre Dame, Notre Dame, IN, USA Tovar, A., National University of Colombia, Bogota, Colombia Tsompanakis, Y., Technical University of Crete, Chania, Greece Weickum, G., University of Colorado at Boulder, Boulder, CO, USA Wu, Y.-T., Applied Research Associates Inc., Raleigh, NC, USA Zhou, J., Oakland University, Rochester, MI, USA
Author Data
Adams, B.M. Sandia National Laboratories PO Box 5800, MS 1318 Albuquerque, NM 87185-1318 USA Phone: (505)284-8845 Fax: (505)284-2518 Email:
[email protected] Agarwal, H. General Electric Global Research Niskayuna, New York, 12309 USA Phone: (574) 631-9052 Fax: (574) 631-8341 Email:
[email protected] Allen, M. Research Assistant Center for Aerospace Structures Department of Aerospace Engineering Sciences University of Colorado at Boulder Boulder, CO 80309-0429, USA Phone: (303) 492 0619 Fax: (303) 492 4990 Email:
[email protected] Aoues, Y. Laboratory of Civil Engineering University Blaise Pascal Complexe Universitaire des Cézeaux, BP 206 63174 Aubière Cedex, France Phone: +33(0)473407532 Fax: +33(0)473407494 Email:
[email protected]
Beck, J.L. Professor Engineering and Applied Science Division California Institute of Technology Pasadena, CA 91125 USA Phone: (626) 395-4139 Fax: (626) 568-2719 Email:
[email protected] Ben-Haim, Y. Professor Faculty of Mechanical Engineering Technion – Israel Institute of Technology Haifa 32000, Israel Phone: 972-4-829-3262 Fax: 972-4-829-5711 Email:
[email protected] Bichon, B.J. PhD Student Civil and Environmental Engineering Vanderbilt University VU Station B 351831 Nashville, TN 37235 USA Phone: 615-322-3040 Fax: 615-322-3365 Email:
[email protected] Chateauneuf, A. Professor Polytech’Clermont-Ferrand Department of Civil Engineering University Blaise Pascal Complexe Universitaire des Cézeaux, BP 206 63174 Aubière Cedex, France Phone: +33(0)473407526 Fax: +33(0)473407494 Email:
[email protected] De Palma, P. Professor Department of Computer Science School of Engineering and Applied Science Gonzaga University Spokane, WA 99258-0026 USA
Phone: 509-323-3908 Email:
[email protected] Doltsinis, I. Professor Institute for Statics and Dynamics of Aerospace Structures Faculty of Aerospace Engineering and Geodesy University of Stuttgart Pfaffenwaldring 27 D-70569 Stuttgart, Germany Phone: 0711-685-67788 Fax: 0711-685-63644 Email:
[email protected] Eldred, M.S. Sandia National Laboratories P.O. Box 5800, Mail Stop 1318 Albuquerque, NM 87185-1318 USA Phone: (505)844-6479 Fax: (505)284-2518 Email:
[email protected] Fragiadakis, M. Lecturer Faculty of Civil Engineering University of Thessaly Pedion Areos, Volos 383 34, Greece Phone: +30-210-748 9191 Fax: +30-210-772 1693 Email:
[email protected] Frangopol, D.M. Professor of Civil Engineering and Fazlur R. Khan Endowed Chair of Structural Engineering and Architecture Department of Civil and Environmental Engineering Center for Advanced Technology for Large Structural Systems (ATLSS Center) Lehigh University 117 ATLSS Drive, Imbt Labs Bethlehem, PA 18015-4729, USA Phone: 610-758-6103 or 610-758-6123 Fax: 610-758-4115 or 610-758-5553 Email:
[email protected] Ganzerli, S. Associate Professor Department of Civil Engineering School of Engineering
Gonzaga University Spokane, WA 99258-0026 USA Phone: 509-323-3533 Fax: 509-323-5871 Email:
[email protected] Huh, J.S. Senior Researcher Engine Department/KHP Development Division Korea Aerospace Research Institute 45 Eoeun-Dong, Yuseong-Gu Daejeon 305-330, Republic of Korea Phone: +82-42-860-2334 Fax: +82-42-860-2626 Email:
[email protected] Hurtado, J.E. Professor Universidad Nacional de Colombia Apartado 127 Manizales Colombia Phone: +57-68863990 Fax: +57-68863220 Email:
[email protected] Joanni, A.E. Research Engineer Institute for Materials and Design Technical University of Munich D-80290 München, Germany Phone: +49 89 289-25038 Fax: +49 89 289-23096 Email:
[email protected] Kanno, Y. Assistant Professor Department of Mathematical Informatics Graduate School of Information Science and Technology University of Tokyo, Tokyo 113-8656, Japan Phone & Fax: +81-3-5841-6906 Email:
[email protected] Kharmanda, G. Dr Eng Faculty of Mechanical Engineering
University of Aleppo Aleppo – Syria Phone: +963-21-5112 319 Fax: +963-21-3313 910 Email:
[email protected] Kokkolaras, M. Associate Research Scientist, Research Fellow Optimal Design (ODE) Laboratory Mechanical Engineering Department University of Michigan 2250 G.G. Brown Bldg. 2350 Hayward Ann Arbor, MI 48109-2125, USA Phone: (734) 615-8991 Fax: (734) 647-8403 Email:
[email protected] Kwak, B.M. Samsung Chair Professor Center for Concurrent Engineering Design Department of Mechanical Engineering Korea Advanced Institute of Science and Technology 373-1 Guseong-dong, Yuseong-gu Daejeon 305-701 Republic of Korea Phone: +82-42-869-3011 Fax: +82-42-869-8270 Email:
[email protected] Lagaros, N.D. Lecturer Institute of Structural Analysis & Seismic Research Faculty of Civil Engineering National Technical University of Athens Zografou Campus Athens 157 80, Greece Phone: +30-210-772 2625 Fax: +30-210-772 1693 Email:
[email protected] Lee, S.H. Postdoctoral Research Fellow Department of Mechanical Engineering Northwestern University 2145 Sheridan Road Tech B224 Evanston IL 60201, USA Phone: +1-847-491-5066
Fax: +1-847-491-3915 Email:
[email protected] Liang, J. Graduate Research Assistant Department of Mechanical Engineering Oakland University Rochester, MI 48309-4478 USA Phone: (248) 370-4185 Fax: (248) 370-4416 Email:
[email protected] Mahadevan, S. Professor Civil and Environmental Engineering Vanderbilt University VU Station B 351831 Nashville, TN 37235, USA Phone: 615-322-3040 Fax: 615-322-3365 Email:
[email protected] Maute, K. Associate Professor Center for Aerospace Structures Department of Aerospace Engineering Sciences University of Colorado at Boulder Room ECAE 183, Campus Box 429 Boulder, Colorado 80309-0429, USA Phone: (303) 735 2103 Fax: (303) 492 4990 Email:
[email protected] Mourelatos, Z.P. Professor Department of Mechanical Engineering Oakland University Rochester, MI 48309-4478 USA Phone: (248) 370-2686 Fax: (248) 370-4416 Email:
[email protected] Nikolaidis, E. Professor Mechanical Industrial and Manufacturing Engineering Department
4035 Nitschke Hall The University of Toledo Toledo, OH 43606 USA Phone: (419) 530-8216 Fax: (419) 530-8206 Email:
[email protected] Papadrakakis, M. Professor Institute of Structural Analysis & Seismic Research Faculty of Civil Engineering National Technical University of Athens Zografou Campus Athens 157 80, Greece Phone: +30-210-772 1692 & 4 Fax: +30-210-772 1693 Email:
[email protected] Papalambros, P.Y. Professor Director, Optimal Design (ODE) Laboratory University of Michigan 2250 GG Brown Building Ann Arbor, Michigan 48104-2125 USA Phone: (734) 647-8401 Fax: (734) 647-8403 Email:
[email protected] Patel, N.M. Graduate Research Assistant Design Automation Laboratory Aerospace and Mechanical Engineering 365 Fitzpatrick Hall of Engineering University of Notre Dame Notre Dame, Indiana 46556-5637 USA Phone: (574) 631-9052 Fax: (574) 631-8341 Email:
[email protected] Plevris, V. PhD Candidate Institute of Structural Analysis & Seismic Research Faculty of Civil Engineering National Technical University of Athens
Zografou Campus Athens 157 80, Greece Phone: +30-210-772-2625 Fax: +30-210-772-1693 Email:
[email protected] Polak, E. Professor Emeritus, Professor in the Graduate School Department of Electrical Engineering and Computer Sciences University of California at Berkeley 255M Cory Hall 94720-1770 Berkeley, CA USA Phone: 510-642-2644 Fax: 510-841-4546 Email:
[email protected] Rackwitz, R. Professor Institute for Materials and Design Technical University of Munich D-80290 München, Germany Phone: +49 89 289-23050 Fax: +49 89 289-23096 Email:
[email protected] Renaud, J.E. Professor Design Automation Laboratory Aerospace and Mechanical Engineering 365 Fitzpatrick Hall of Engineering University of Notre Dame Notre Dame, Indiana 46556-5637 USA Phone: (574) 631-8616 Fax: (574) 631-8341 Email:
[email protected] Royset, J.O. Assistant Professor Operations Research Department Naval Postgraduate School Monterey, California 93943 USA Phone: 1-831-656-2578 Fax: 1-831-656-2595 Email:
[email protected]
Sørensen, J.D. Professor Department of Civil Engineering Aalborg University Sohngardsholmsvej 57 9000 Aalborg, Denmark Phone: +45 9635 8581 Fax: +45 9814 8243 Email:
[email protected] Taflanidis, A.A. Ph.D Candidate Engineering and Applied Science Division California Institute of Technology Pasadena, CA 91125 USA Phone: (626) 379-3570 Fax: (626) 568-2719 Email:
[email protected] Takewaki, I. Professor Department of Urban and Environmental Engineering Graduate School of Engineering Kyoto University Kyotodaigaku-Katsura, Nishikyo-ku, Kyoto 615-8540 Japan Phone: +81-75-383-3294 Fax: +81-75-383-3297 Email:
[email protected] Tillotson, D. Research Assistant Design Automation Laboratory Aerospace and Mechanical Engineering 365 Fitzpatrick Hall of Engineering University of Notre Dame Notre Dame, Indiana 46556-5637 USA Phone: (574) 631-8616 Fax: (574) 631-8341 Email:
[email protected] Tovar, A. Assistant Professor Department of Mechanical and Mechatronic Engineering Universidad Nacional de Colombia
Cr. 30 45-03, Of. 453-401 Bogota, Colombia Phone: +57-3165320 - 3165000 ext. 14062 Fax: +57-3165333 - 3165000 ext. 14065 Email:
[email protected] Tsompanakis, Y. Assistant Professor Department of Applied Sciences Technical University of Crete University Campus Chania 73100, Crete, Greece Phone: +30 28210 37 634 Fax: +30 28210 37 843 Email:
[email protected] Weickum, G. Graduate Research Assistant Center for Aerospace Structures Department of Aerospace Engineering Sciences University of Colorado at Boulder Room ECAE 188, Campus Box 429 Boulder, Colorado 80309-0429 USA Phone: (303) 492 0619 Fax: (303) 492 4990 Email:
[email protected] Wu, Y.-T. Fellow, Applied Research Associates, Inc. 8540 Colonnade Center Dr., Ste 301 Raleigh, NC 27615 USA Phone: 919-582-3335 or 919-810-1788 Email:
[email protected] Zhou, J. Graduate Research Assistant Department of Mechanical Engineering Oakland University Rochester, MI 48309-4478 USA Phone: (248) 370-4185 Fax: (248) 370-4416 Email:
[email protected]
Part 1
Reliability-Based Design Optimization (RBDO)
Chapter 1
Principles of reliability-based design optimization Alaa Chateauneuf University Blaise Pascal, France
ABSTRACT: Reliability-Based Design Optimization (RBDO) aims at searching for the best compromise between cost reduction and safety assurance, by controlling the structural uncertainties throughout the design process, which cannot be achieved by deterministic optimization. This chapter describes the fundamental concepts in RBDO. It aims to explain the role of uncertainties in deriving the optimal solution, where emphasis is put on the comparison with conventional deterministic optimization. The interest of the RBDO formulation can also be extended to cover different design aspects, such as multi-component reliability analysis, safety factor calibration, multi-objective applications and time-variant problems.
1 Introduction

The design of structures must fulfill a number of different criteria, such as cost, safety, performance and durability, leading to conflicting requirements to be simultaneously considered by the engineer. Therefore, the challenge in the design process is how to define the best compromise between contradictory design requirements. Moreover, the complexity of the design process does not allow for simultaneous optimization of all the design criteria with respect to all the parameters. Traditionally, this complexity is reduced by dividing the process into simpler sub-processes where each requirement can be handled separately. The designer can hence concentrate his effort on only one goal, generally the cost, and then check whether the other requirements can be, more or less, fulfilled. If necessary, further adjustments are introduced in order to improve the obtained solution. However, this procedure cannot assure performance-based optimal design. In structural engineering, deterministic optimization procedures have been successfully applied to systematically reduce the structure cost and to improve the performance. However, uncertainties related to design, construction and loading lead to structural behavior which does not correspond to the expected optimal performance. The gap between expected and obtained performances is even larger when the structure is optimized, as the remaining margins are reduced to their lower bounds; in other words, the optimal structure is usually sensitive to uncertainties. In deterministic design, the propagation of uncertainties is usually hidden by the use of the well-known “safety factors’’, without direct connection with reliability specifications. Traditionally, the optimal cost is sought by iterative search procedures, while the required reliability level is assumed to be ensured by the applied safety factors, as described by the design codes of practice. As a matter of fact, these safety factors are calibrated for average
design situations and cannot ensure consistent reliability levels for specific design conditions. They may even lead to poor design, as the optimization procedure will search for the weakest region in the domain covered by the code of practice. This weakest region often presents not only the lowest cost but also the lowest safety. The deterministic optimal design is pushed to the admissible domain boundaries, leaving very little space for safety margins in design, manufacturing and operating processes. Moreover, the optimization process leads to a redistribution of the roles of uncertainties, which can only be controlled by reliability assessment on the basis of sensitivity measures. For these reasons, Deterministic Design Optimization (DDO) cannot ensure appropriate reliability levels. If the DDO solution is more reliable than required, unnecessary construction and manufacturing costs could have been avoided; if the reliability is lower than required, the economic solution is not really achieved, because the increased failure rate leads to failure losses higher than the expected savings. In this sense, Reliability-Based Design Optimization (RBDO) becomes a very powerful tool for robust and cost-effective designs (Frangopol 1995). The RBDO aims to find a balanced design by reducing the expected total cost, which is defined in terms of the initial cost (i.e. including design, manufacturing, transport and construction costs), the failure cost, the operation cost and the maintenance costs. In addition, the RBDO takes the benefit of driving the search procedure by the well-controlled variables having a great impact on the total cost. On the other hand, the variables with high uncertainties are penalized independently of their mechanical role. In this sense, system robustness is achieved as the role of highly uncertain and fluctuating variables is diminished during the optimization process. Contrary to the DDO, the solution does not lie in the weakest domain of the design code of practice; a better compromise is defined by satisfying the target reliability levels. The RBDO can also be applied for robust design purposes, where the mean values of random variables are used as nominal design parameters, and the cost is minimized under a prescribed probability. Therefore, the solution of RBDO provides not only an improved design but also a higher level of confidence in the design. From the practical point of view, solving RBDO problems is a heavy task because of the nested nonlinear procedures: optimization procedure, reliability analysis and numerical simulation of structural systems. Several methods have been developed for solving this problem efficiently, in order to allow for complex industrial applications; this topic will be discussed in a subsequent chapter by Chateauneuf and Aoues. This chapter aims at describing the RBDO principles, in order to give a clear vision of the links between the classical deterministic approach and the reliability-based one. It emphasizes the fact that deterministic optimization, based on safety factor considerations, is no longer sufficient for safety control and assurance. Reliability-Based Design Optimization has the advantage of ensuring a minimum cost without affecting the target safety level. At the end of the present chapter, the use of the RBDO in different kinds of engineering problems is briefly discussed in order to show how wide the application spectrum can be.
2 Historical background

Since the beginning of the twentieth century, the need for a rational way to consider structural safety motivated a number of researchers, such as Forsell (1924), Wierzbicki
(1936) and Lévi (1948). At the conference on structural safety held in Liège in 1948 by the Association Internationale des Ponts et Charpentes, Torroja stated, probably for the first time, that the reduction of the total cost has to include not only the construction cost, but also the expected failure cost:

CT = CI + CF
(1)
where CT is the expected total cost, CI is the initial cost (i.e. design and construction cost) and CF is the expected failure cost. This expression has been readily accepted, as an increase of the construction cost should lead to a higher safety margin and thus to a lower failure probability. Even though the formulation of the RBDO has been known since 1948 (and even earlier), its direct application was impossible because of the difficulties related to the computation of the failure probability for realistic structures. With the development of the reliability theory starting in the 1950s, solution procedures became available in the 1970s and were improved in the 1980s, in order to allow for the analysis of practical engineering structures. However, even now, the difficulty of estimating the failure cost still remains a major problem, especially when dealing with human lives and environmental deterioration. On the basis of the target reliability index, the RBDO was really born in the second half of the 1980s and developed during the 1990s. Nowadays, the industrial applications of RBDO still face many difficulties due to the very high computational effort required to solve large-scale systems. Most practical applications of structural optimization require at least three conflicting goals (Kuschel and Rackwitz 1997):
– Low structural cost, including or not the expected failure cost.
– High reliability levels for components and systems.
– Good structural performance under various operating conditions.
Actually, the new trend is to include the inspection, maintenance, repair and operating costs in the definition of the expected total cost CT , in order to reach a performance-based design on the basis of multi-criteria considerations (Frangopol 2000). A comprehensive overview of these approaches is given by Frangopol and Maute (2003).
3 Reliability analysis

The design of structures requires the verification of a certain number of rules resulting from the knowledge of physics and mechanics, combined with the experience of designers and constructors. These rules come from the necessity to limit the loading effects such as stresses and displacements. Each rule represents an elementary event, and the occurrence of several events leads to a failure scenario. In addition to the deterministic variables d to be used in the system control and optimization, the uncertainties are modeled by stochastic variables affecting the failure scenario. The knowledge of these variables is, at best, no more than statistical information, and we admit a representation in the form of random variables X, whose realizations are noted x. For a given design rule, the basic random variables are defined by their probability distribution with some statistical parameters (generally, the mean and the standard deviation).
Figure 1.1 Joint distribution and failure probability.
The safety is defined as the state where the structure is able to fulfil all the operating requirements, mechanical and serviceability, for which it is designed, during the whole lifetime. To evaluate the failure probability with respect to a given failure scenario, a performance function G(x, d) is defined by the condition of good operation of the structure. The limit between the state of failure G(x, d) ≤ 0 and the state of safety G(x, d) > 0 is known as the limit state surface G(x, d) = 0. Having the performance function G(x, d), known also as the limit state function or the safety margin, it is possible to evaluate the probability of failure by integrating the joint probability density over the failure domain (Figure 1.1):

Pf(d) = ∫_{G(x,d) ≤ 0} fX(x, d) dx    (2)
It is to be noted that the joint density function fX(x, d) depends on the design parameters d only when the distribution parameters belong to the design variables; this is especially the case when the mean value is considered as a design variable in the optimization process. There is a special case when the performance function is simply written as the margin between the resistance R and the load effect S, where both variables are independent normal random variables. The performance function and the failure probability are simply given by:

G(X, d) = R − S,    Pf(d) = Φ(−β(d)),    with    β(d) = (mR − mS) / √(σR² + σS²)    (3)
where Φ(·) is the standard Gaussian cumulative distribution function, β(d) is the reliability index, and mR, mS, σR and σS are respectively the means and standard deviations of the resistance and the load effect. For this simple configuration, the optimization variable could be the mean design strength and probably, in some cases, the mean load effect.
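For the linear R − S margin of equation 3, the index and the failure probability can be evaluated directly. The short sketch below illustrates this closed form; the numerical statistics are illustrative assumptions, not values taken from the chapter.

```python
# Illustrative sketch of equation (3): reliability index and failure
# probability of the linear margin G = R - S with independent normal
# variables. The numerical values are assumed for illustration only.
from scipy.stats import norm

m_R, s_R = 500.0, 50.0   # assumed mean and standard deviation of the resistance
m_S, s_S = 350.0, 35.0   # assumed mean and standard deviation of the load effect

beta = (m_R - m_S) / (s_R**2 + s_S**2) ** 0.5   # reliability index
p_f = norm.cdf(-beta)                           # Pf = Phi(-beta)

print(f"beta = {beta:.3f}, Pf = {p_f:.3e}")
```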
It is to be noted that standard deviations can also be taken as optimization variables if the relationship between the quality control and the structural cost can be established. In practice, the performance function cannot be written in a simple linear form of normal variables, and equation 3 can rarely be applied. It is thus necessary to evaluate, more or less precisely, the failure probability as given in equation 2. Direct integration is impossible even for small structures due to: 1) the high required precision, 2) the computation cost of the mechanical response, and 3) the multidimensional space. Numerical methods have to be applied to give an approximation of the failure probability. Three methods are commonly used for this purpose:
– Monte Carlo simulation, which allows the failure probability to be estimated for any general problem. It has two main advantages: 1) the possibility to deal with practically any mechanical or physical model (linear, nonlinear, continuous, discrete, . . .) and 2) the simple implementation without any modification of the mechanical model (e.g. finite element software), which is considered as a black box. However, the two main drawbacks are: 1) the very high computational time, especially for realistic structures with low failure probability, and 2) the numerical noise due to random sampling, leading to non-monotonic estimates during simulations; consequently, it becomes impossible to get an accurate and stable evaluation of the response gradient. Although the computation time can be reduced by using importance sampling and other variance reduction techniques, the numerical noise still remains a serious difficulty for practical applications in RBDO.
– First- and Second-Order Reliability Methods, known as FORM/SORM, which are based on the approximation of the performance function in the standard Gaussian space by using polynomial series. An optimization algorithm is applied to search for the design point, also called the most probable failure point or β-point, which is the failure point nearest to the origin in the normalized space. Then, linear (FORM) or quadratic (SORM) approximations are adopted for the performance function in order to get an asymptotic approximation of the failure probability. It is generally accepted that FORM is sufficient for the majority of practical engineering systems. In the RBDO context, FORM/SORM techniques have the advantages of: 1) high numerical efficiency; and 2) direct computation of the gradients of the reliability index, and consequently of the failure probability. The main drawbacks are: 1) the limited precision and convergence problems in some cases, especially for highly nonlinear limit states; and 2) the computation time for a large number of random variables.
– Response Surface Methods (RSM), which are commonly used to approximate the mechanical response of the structure by building what is called a meta-model. Quadratic polynomials are shown to be suitable for localized approximation of structural systems. The largest part of the computational cost lies in the evaluation of the polynomial coefficients. Then, the failure probability can be simply evaluated by using the response surface, which is an analytical expression, instead of the mechanical model itself (generally, a complex finite element model). The advantages are mainly: 1) the reduction of the computation time for a moderate number of random variables; and 2) the possibility of coupling reliability and optimization algorithms to achieve high efficiency. The most common drawback lies in the large number of mechanical calls for moderate and high numbers of variables.
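The following minimal sketch illustrates the crude Monte Carlo estimator mentioned above for a simple R − S margin; the statistics and sample size are assumed for illustration, and no variance reduction is applied.

```python
# Crude Monte Carlo estimate of Pf for G = R - S (illustrative values).
# The scatter of repeated runs illustrates the numerical noise discussed
# above; importance sampling or other variance reduction is not applied.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                                   # number of samples (assumed)
R = rng.normal(500.0, 50.0, n)                # assumed resistance
S = rng.normal(350.0, 35.0, n)                # assumed load effect

G = R - S
p_f = np.mean(G <= 0.0)                       # average of the failure indicator
cov = np.sqrt((1.0 - p_f) / (p_f * n)) if p_f > 0 else float("inf")

print(f"Pf ~= {p_f:.3e} (coefficient of variation of the estimate ~ {cov:.2f})")
```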
Figure 1.2 Reliability index and the Most Probable Failure Point (MPP).
In the First Order Reliability Method, the failure probability Pf is approximated in terms of the reliability index β according to the expression:

Pf(d) = Pr[G(X, d) ≤ 0] ≈ Φ(−β(d))    (4)
where Pr[·] is the probability operator and Φ(·) is the standard Gaussian cumulative distribution function. The invariant reliability index β, introduced by Hasofer and Lind (1974), is evaluated by solving the constrained optimization problem (Figure 1.2):

β = min ‖u‖ = min √( Σ_i (Ti(x))² )    (5)
under the constraint G(T(x), d) ≤ 0, where ‖u‖ is the distance between the median point (corresponding to the space origin) and the failure subspace in the normalized space u, and T(x) is an appropriate probabilistic transformation, i.e. ui = Ti(x). The image of the performance function G(x) in the normalized space is noted Gu(u, d) = G(T(x), d). The solution of this problem is called the Most Probable Failure Point, the design point or the β-point; it is noted P*, x* or u*, depending on whether the physical or the normalized space is considered. At this point, the following relationship holds: β = ‖u*‖. For the case of two random variables, Figure 1.3 illustrates the important points involved in structural design: the mean point represents the average stress and strength at operation; the characteristic values are loading and resistance values that can be guaranteed in the design process (they correspond to a small probability of finding a higher loading level or a lower strength; percentiles of 95% or 5% are commonly adopted); and finally the Most Probable Failure Point (MPP), where the failure configuration has the highest joint probability density. While the reliability analysis aims at finding the Most Probable Failure Point, the design procedure aims at setting the characteristic and mean values of strength and dimensions according to economical considerations.
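A minimal sketch of the constrained minimization in equation 5 is given below: the most probable failure point is searched in the standard normal space with a general-purpose optimizer. The limit state, the variable statistics and the use of SLSQP are illustrative assumptions, not the chapter's own algorithm.

```python
# Sketch of equation (5): find the point of minimum distance to the origin in
# the standard normal space lying on the limit state G = 0. The limit state
# and the statistics below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

m = np.array([38.0, 54.0])      # assumed means of the physical variables
s = np.array([3.8, 2.7])        # assumed standard deviations

def g_physical(x):
    # assumed nonlinear performance function G(x) = x1 * x2 - 1500
    return x[0] * x[1] - 1500.0

def g_u(u):
    # mapping u -> x for independent normal variables: x = m + s * u
    return g_physical(m + s * u)

res = minimize(lambda u: u @ u,                 # minimize ||u||^2
               x0=np.array([-1.0, -1.0]),
               constraints={"type": "eq", "fun": g_u},  # stay on G = 0
               method="SLSQP")

u_star = res.x
beta = np.linalg.norm(u_star)
print(f"MPP u* = {u_star}, beta = {beta:.3f}, Pf ~ {norm.cdf(-beta):.3e}")
```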
Figure 1.3 Mean, characteristic and design points.
As an alternative to equation 2, the reliability level of a structure can also be characterized by the performance function Pp defined as:

Pp(d, p) = ∫_{G(x,d) ≤ p} fX(x, d) dx    (6)
where the subscript p is the performance measure (in standard reliability, p is set to zero). This formulation can be useful for specific RBDO formulations (see chapter by Chateauneuf and Aoues).

3.1 System reliability analysis
Due to optimization, the structural components are strongly stretched close to the limit state, and their contribution in the overall safety becomes significant. That is why structural reliability cannot be correctly computed unless the complete system is considered, by taking into consideration the contributions of all the failure modes through appropriate modeling of system configurations, material behavior, load variability, strength uncertainty and statistical correlation. As structures are made of the assembly of several members, the overall ultimate capacity is highly conditioned by the redundancy degree. For many structures, several components can reach their ultimate capacity much before reaching the overall structural failure load. On the other hand, the structure could contain a number of critical members, leading to the overall failure if any one of them fails. In this context, the system reliability can be quite different from the reliability of its components. In the last decades, many research works have been dedicated to compute the system reliability, especially for series and parallel systems. A series system, representing a “weakest-link’’ chain, fails if any link fails; superstructures and building foundations are generally good examples of a series system. A parallel system implies that each component contributes more or less in the structural good-standing; the system failure takes place if all components fail.
Practical expressions for system reliability include lower and upper bounds for both series and parallel systems; some of these bounds consider the correlation between pairs of potential failure modes. Also, more complex system models involving mixed series-parallel systems can be used (Ditlevsen and Madsen 1996). For series and parallel systems, the first order approximation of the failure probabilities can be computed as follows:

Pf = Pr[ ∪_j {Gj(X, d) ≤ 0} ] ≈ 1 − Φm(β(d), ρ)    for a series system
Pf = Pr[ ∩_j {Gj(X, d) ≤ 0} ] ≈ Φm(−β(d), ρ)    for a parallel system    (7)
where Φm(·, ρ) is the multi-dimensional standard normal distribution function, β(d) is the vector of the reliability indices for the different modes and ρ is the matrix of correlations between the failure modes. For practical RBDO analysis, the failure probability can be estimated by the Ditlevsen bounds (Ditlevsen 1979), which are written for a series system as:
Pf1 + Σ_{j=2}^{m} max[ Pfj − Σ_{k=1}^{j−1} Pfjk , 0 ] ≤ Pfs ≤ Σ_{j=1}^{m} Pfj − Σ_{j=2}^{m} max_{k<j} Pfjk    (8)
where m is the number of dominant failure modes, Pfj is the failure probability of mode j and Pfjk is the probability of the intersection of modes j and k; in this expression, the failure probabilities follow a decreasing order.
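The Ditlevsen bounds of equation 8 are straightforward to evaluate once the individual and pairwise failure probabilities are available, as in the sketch below; the probability values are assumed for illustration.

```python
# Sketch of the Ditlevsen bounds (8) for a series system. pf[j] are the
# individual mode probabilities (sorted in decreasing order) and pf2[j, k]
# are the pairwise intersection probabilities; all values are assumed.
import numpy as np

pf = np.array([1.0e-3, 6.0e-4, 2.5e-4])        # assumed P_fj, decreasing order
pf2 = np.array([[0.0,    4.0e-5, 1.0e-5],      # assumed P_fjk (symmetric)
                [4.0e-5, 0.0,    8.0e-6],
                [1.0e-5, 8.0e-6, 0.0]])

m = len(pf)
lower = pf[0] + sum(max(pf[j] - pf2[j, :j].sum(), 0.0) for j in range(1, m))
upper = pf.sum() - sum(pf2[j, :j].max() for j in range(1, m))

print(f"{lower:.3e} <= Pf,series <= {upper:.3e}")
```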
4 Formulation of Reliability-Based Design Optimization

The Reliability-Based Design Optimization (RBDO) aims at finding the optimal solution that fulfills the prescribed reliability requirements. The fluctuation of loads, the variability of material properties and the uncertainties regarding the analysis models all contribute to make the performance of the optimal design different from the expected one. In this sense, the optimization process has a large effect on the structural reliability. It is today well recognized that the safety factor approach cannot ensure the required safety levels, as it does not explicitly consider the probability of failure regarding the performance criteria. In other words, the optimal design resulting from deterministic optimization procedures does not necessarily meet the reliability targets. The RBDO allows us to consider the evolution of the safety margins, leading to the best compromise between the life-cycle cost and the required reliability. This task is rather complicated due to the inherent non-deterministic nature of the input information. For this reason, many analysis methods have been developed to deal with the statistical nature of data. The efficiency of the process is mandatory to deal with realistic engineering problems (Kharmanda et al. 2002). The solution based on reliability concepts is rather robust, as the uncertain parameters are penalized during the design process, while greater reliance is placed on the well-controlled parameters.
4.1 Insufficiency of Deterministic Design Optimization
In Deterministic Design Optimization (DDO), the aim is to reduce the initial structural cost CI(d) under a number of constraints gj(d, γ), j = 1, 2, . . . , ng, where d is the vector of design parameters and γ is the vector of partial safety factors. The optimization problem is thus written:

min_d CI(d)    subject to    gj(d, γ) ≤ 0    for j = 1, 2, . . . , ng    (9)
In this problem, the structural safety is assumed to be ensured by the introduction of the safety factors within the constraint equations. A typical constraint for stress can be written as g = σ − fy/γ, where σ is the applied stress, fy is the yield stress and γ is the safety factor. Usually the strength fy is defined either by the mean value or by the characteristic value; the former is common in mechanical engineering and the latter is common in civil engineering. In the DDO, it is assumed that the safety factors are appropriate whatever the chosen optimal configuration. For most systems, it can be shown that the safety level is not independent of the selected optimal design parameters. Figure 1.4a illustrates how the deterministic optimal design is defined on a constraint which is simply described by shifting the limit state. It can be said that the failure limit state g(d) is transformed into a safe limit state g(d, γ) by introducing the safety factor to account for uncertainties. Starting from the initial point x0, the DDO is based on the use of classical optimization algorithms to find the optimal design d*, which is generally located on the boundary of the reduced design space, including the safety factor. The Reliability-Based Design Optimization aims at finding the optimal solution such that the failure limit state is kept sufficiently far from the operating point. In other words, the failure surface must lie on the iso-reliability level corresponding to the prescribed safety target (Figure 1.4b). It is clear that even for simple cases, the solution can be quite different from the deterministic optimization, where a homothetic reduction of the design space is applied.
Figure 1.4 Comparison of optimal points corresponding to DDO and RBDO.
Figure 1.5 Distribution of the global safety factor.
In this sense, the RBDO can really ensure the optimal cost without compromising the structural safety. As a matter of fact, the uncertainties related to structural geometry, material properties and loading lead to stochastic cost, strength and stress in the structure, which automatically leads to random safety factors. When the optimal design is defined, it is possible to see the effect of uncertainties on the global safety factor. This can be carried out by plotting the distribution of the strength-stress ratio, as illustrated in Figure 1.5. In this case, structural failure is observed when the safety factor realizations become less than unity. The failure probability can thus be computed by evaluating either Pr[γ ≤ 1] or Pr[G(d*, X) ≤ 0]. It should also be emphasized that the system uncertainties may lead to a random total cost, which can be considered in one of the two following ways:
– If the optimal configuration is specified, the structure realization involves random variations in material properties, geometrical parameters, material unit cost, construction costs and failure costs. These variabilities and fluctuations produce a random total cost, whose probabilistic distribution depends on the inherent random variables. The goal of RBDO is usually to minimize the expected total cost.
– If the structural realizations are considered for cost optimization, a scatter of the optimal solutions is obtained, as a different optimum is found for each structural realization. In other words, even if random variables are not involved in the cost, the optimal deterministic solution changes according to structure and loading fluctuations. Therefore, the total cost becomes random as solutions differ in terms of input uncertainties. This leads to what can be seen as a lack of robustness.
Deterministic optimization is even worse when multiple constraints or multiple components are considered. The difficulty lies in how to set the safety factors in order to ensure a simultaneously safe and optimal design. As illustrated in Figure 1.6, while the deterministic optimum leads to uncontrolled safety levels with respect to the various limit states (due to the application of either the same safety factor or an inconsistent set of safety factors), the reliability-based design optimization looks for the situation where the safety levels can be simultaneously controlled for all the limit states. In this case, the optimum design is oriented such that safety requirements are fulfilled with respect to the degrees of uncertainty; in practice, greater margins are taken for largely scattered variables, while small margins are considered for well-controlled variables.
Figure 1.6 Comparison between DDO and RBDO solutions.
Figure 1.7 Two-link structure under vertical force.
In other words, the design is driven by the variables with small uncertainties. That is why the Reliability-Based Design Optimization (RBDO) aims at searching for the best compromise between cost reduction and reliability assurance, by taking the system uncertainties into consideration; therefore, the RBDO ensures an economical and safe design. It offers a good alternative to the safety factor approach, which is based on deterministic considerations and cannot take account of the reduction of safety margins during the optimization procedure. In order to illustrate this idea, let us consider the two-bar system shown in Figure 1.7. The system is supported at the end nodes and a vertical load P is applied at the internal node. The bar cross-sections are noted S1 and S2 for bars 1 and 2, respectively. The
design criteria are related to member strengths and nodal displacements; buckling is assumed to be neglected. Under stress and deflection constraints, the deterministic optimization problem is written:

min_{S1,S2} V = 1000 S1 + 750 S2
subject to:
g1 = F1 − fY S1/γσ ≤ 0
g2 = F2 − fY S2/γσ ≤ 0    (10)
g3 = v − vL/γv ≤ 0

and with:

F1 = 0.6P,    F2 = 0.8P,    v = (P/E)(480/S1 + 360/S2)    (11)
where E is the Young’s modulus, fY is the yield stress, vL is the limit displacement, γσ is the load safety factor and γv is the displacement safety factor. They are calculated by:
γσ = min( fY S1/(0.6P), fY S2/(0.8P) ),    γv = E S1 S2 vL / [P (480 S2 + 360 S1)]    (12)
For deterministic optimization, these safety factors are taken as γσ = 1.5 and γv = 1.2. Considering only the two resistance limit states, Figure 1.8 shows the deterministic optimum solution in the space of the design variables S1 and S2.
Figure 1.8 Deterministic design and inconsistent reliability levels.
This optimum is obtained at the intersection of the shifted limit states, due to the application of the safety factor of 1.5. If the uncertainties on the cross-sections are considered by using independent normal variables with the same standard deviation, we obtain directly the iso-reliability contours in this superposed design/random space. It is clearly observed that the most probable failure points P1* and P2* do not lie on the same reliability level. As the system represents a series combination, bar 2 reduces the overall structural reliability and the target safety is not met. A more rational approach implies that the two MPPs should lie on the same reliability contour; for more complex systems, this should take into account the failure costs and the system reliability models for an effective setting of the MPP locations. It has to be stressed that even for this simple problem the task is not easy; one can imagine how complex it is to set appropriate safety factors for nonlinear limit states (e.g. the displacement constraint) with non-Gaussian variables, including statistical correlations. For this reason, the DDO approach clearly cannot give a convenient solution with a consistent treatment of uncertainties.
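As a rough illustration of the deterministic problem of equations 10 to 12, the sketch below solves the two-bar DDO with a general-purpose optimizer. The load, material data and displacement limit are assumed values (they are not specified in the text), so the numbers serve only to show the mechanics of the formulation.

```python
# Sketch of the deterministic optimization (10)-(11) of the two-bar example.
# Load, Young's modulus, yield stress and displacement limit are assumed
# values; units are N and mm.
import numpy as np
from scipy.optimize import minimize

P, E, fY = 1.0e5, 2.1e5, 235.0      # assumed load, Young's modulus, yield stress
vL = 3.0                            # assumed displacement limit [mm]
g_sigma, g_v = 1.5, 1.2             # safety factors used in the chapter

def volume(S):                      # objective V = 1000 S1 + 750 S2
    return 1000.0 * S[0] + 750.0 * S[1]

def feasibility(S):
    # returns values that must all be >= 0 (equivalent to g1, g2, g3 <= 0)
    S1, S2 = S
    v = P / E * (480.0 / S1 + 360.0 / S2)
    return [fY * S1 / g_sigma - 0.6 * P,
            fY * S2 / g_sigma - 0.8 * P,
            vL / g_v - v]

res = minimize(volume, x0=[600.0, 600.0],
               constraints={"type": "ineq", "fun": feasibility},
               bounds=[(1.0, None), (1.0, None)], method="SLSQP")

S1, S2 = res.x
print(f"S1* = {S1:.1f} mm^2 (0.9 P/fY = {0.9 * P / fY:.1f}), "
      f"S2* = {S2:.1f} mm^2 (1.2 P/fY = {1.2 * P / fY:.1f})")
```

With these assumed data the displacement constraint is inactive and the optimum sits on the two shifted stress limit states, which is exactly the behaviour discussed above.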
4.2 Reliability-based design optimization
Although the total cost includes all costs from construction until destruction and recycling, including the in-service costs, the high complexity of engineering systems leads to difficulties in dealing with both aspects: design and maintenance uncertainties. A common procedure consists in separating the design into two steps. In the first step, the structure is designed in order to avoid "failure'' configurations, with respect to the limit states (ultimate, serviceability, fatigue, . . .). At this design stage, the optimization is applied to assure the structural survival (or good-standing) with the lowest cost. In the second step, the maintenance planning is optimized for the structure designed in the first step. In this way, the design optimization is first carried out to define the optimal structural configuration, for which Reliability-Based Maintenance Optimization (RBMO) is performed to define the maintenance-inspection-replacement policy. In this sense, the total cost minimization is carried out in two stages: minimizing the initial and failure costs in the first stage and then minimizing the maintenance cost in the second stage. Of course, this implies an approximation, as some design variables can play different roles in the cost components. For some engineering systems, the decoupling of design and maintenance costs may not lead to globally optimal costs, due to the interaction between design decisions, deterioration rates and time-dependent failure probability. In other words, the variation of some variables may increase the failure cost and decrease the maintenance costs, and vice-versa. However, this approach is widely accepted in engineering practice. It also has a practical advantage, as the optimal design becomes independent of the maintenance policy, where the operating conditions (loading, environment, deterioration rates, costs, . . .) may vary strongly over the structure lifespan. The total cost CT depends on two kinds of variables (Kharmanda et al. 2002b):
– Design variables, noted d, which are the deterministic control parameters. They should be optimized for cost reduction. They can be mechanical parameters (e.g. geometrical dimensions, material properties) or probabilistic parameters (e.g. means of random distributions).
Figure 1.9 Evolution of the costs in function of the failure probability.
– Random variables, noted X, whose realizations are x, representing the uncertainties and the fluctuations in the system configuration. Each of the random variables is defined by a probabilistic distribution. They usually represent geometrical, material or loading uncertainties.
Basically, the RBDO aims at minimizing the total expected cost CT (Figure 1.9), which is given in terms of the initial cost CI (including design, manufacturing, transport and construction costs) and the direct failure cost Cf (Torroja 1948, Ditlevsen and Madsen 1996):

min_d CT(d) = CI(d) + Cf Pf(d)    subject to    gj(d) ≤ 0    (13)
A more rigorous mathematical notation consists in writing E[CT(d, X)] instead of CT(d), as what is optimized is the expectation, not the cost itself (which is a random function); however, for simplicity, the notation CT(d) is maintained to indicate the expectation. This problem can also be written in terms of the utility function U(d) as follows (Frangopol 1995):

max_d U(d) = B(d) − CI(d) − L(d)    subject to    gj(d) ≤ 0    (14)
where B is the benefit derived from the system operation, CI is the initial construction cost and L is the expected loss due to inspection, maintenance and failure. This total cost expression indicates that the possible increase of the initial cost should be balanced by a decrease in the risk CF (i.e. the product Cf Pf). The minimization is carried out for the design parameters such as member sizes, structural configuration
and material parameters. These design parameters may correspond to probabilistic distribution parameters: the cost is related to the mean value when it represents the nominal design value, and to the standard deviation when it represents the quality control and dispersion reduction aspects. Usually, the cost of consequences is taken as fixed, but in fact it should be a function of the failure probability. This means that the failure cost Cf (e.g. reconstruction cost, direct damage cost and pollution) is assumed independent of the failure probability Pf, and consequently the expected failure cost can be written CF = Cf × Pf. This expression holds as long as the failure rate remains below a commonly accepted level. However, with abnormal failure rates, the failure cost Cf becomes a function of the failure probability according to indirect damage (negative publicity, market losses, public opinion on the company/authority, accelerated effects, . . .). For example, in the automotive industry, if a defect is observed in a few cars, the failure cost for each car can be assumed equal to the repair of that car (plus some indemnity for the car owner). But if the defect is observed in a large number of cars, the company has to repair all the produced cars, besides the social damage to the company itself, which can be translated into significant sales losses. In other domains, such as nuclear energy, the failure cost can be a jump function, as a single accident in a nuclear power plant leads to very high economic, social and political consequences. To account for this failure rate dependence, it could be appropriate to estimate the failure cost by nonlinear functions, such as:

CF = E[Cf] ≈ Cf(Pf) × Pf    (15)
where Cf(Pf) may take one of the following forms:

Polynomial:    Cf(Pf) = Cf0 (1 + Pf^α)
Exponential:   Cf(Pf) = Cf0 for Pf ≤ Pf0;    Cf(Pf) = Cf0 exp(µ (Pf − Pf0)^α) for Pf > Pf0
Sigmoidal:     Cf(Pf) = Cf0 + Cf1 / [1 + exp(−µ (Pf − Pf0))]
where Cf0 and Cf1 are respectively the basic and the extra failure costs, Pf0 is the probability threshold, and α, µ are parameters to be estimated in terms of failure consequences. More generally, the expected total cost CT can be expressed in terms of all the costs involved in the structural system, from birth to death. It thus includes inspection, maintenance, repair and operating costs (Frangopol 2003), leading to: CT = CI + CF + CM + CS + CR + CU + CD
(16)
where CI is the initial construction cost, CF is the expected failure cost, usually defined as: CF = Cf × Pf , CM is the expected preventive maintenance cost, CS is the expected inspection cost, CR is the expected repair cost, CU is the expected use cost and CD is the expected recycling and destruction cost, which is particularly important for sensitive structures, such as nuclear powerplants.
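The three failure-cost models listed above are easy to implement; the sketch below codes them directly, with arbitrary assumed parameter values.

```python
# Sketch of the three failure-cost models Cf(Pf): polynomial, exponential
# and sigmoidal. All parameter values are arbitrary assumptions.
import numpy as np

Cf0, Cf1 = 1.0e6, 4.0e6          # assumed basic and extra failure costs
Pf0, alpha, mu = 1.0e-3, 0.5, 10.0

def cf_polynomial(pf):
    return Cf0 * (1.0 + pf**alpha)

def cf_exponential(pf):
    pf = np.asarray(pf, dtype=float)
    return np.where(pf <= Pf0, Cf0,
                    Cf0 * np.exp(mu * np.maximum(pf - Pf0, 0.0)**alpha))

def cf_sigmoidal(pf):
    return Cf0 + Cf1 / (1.0 + np.exp(-mu * (np.asarray(pf, dtype=float) - Pf0)))

for pf in (1.0e-4, 1.0e-3, 1.0e-2):
    print(pf, cf_polynomial(pf), float(cf_exponential(pf)), float(cf_sigmoidal(pf)))
```

The expected failure cost then follows as CF ≈ Cf(Pf) × Pf, which reduces to the usual fixed-cost expression when Pf stays below the threshold Pf0.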
In practice, the design objective of only minimizing the expected total cost is not yet applicable, and it is somewhat dangerous from the human point of view. For example, if the designer underestimates the failure consequences with respect to the initial cost, the optimal solution will allow for high failure rates, leading to the acceptance of low-reliability structures. The extrapolation to rich and poor countries or cities also leads to low reliability levels in poor countries (or cities) because of the lower failure costs, as human lives and constructions have statistically lower monetary values. One can imagine the political consequences of such a strategy. At least theoretically, the correct estimation of the failure cost should lead to coherent results. The problem of cost estimation is even more complicated when talking about whole-lifetime management, because the failure cost may change along the structure lifetime due to socio-economic considerations (e.g. the life quality of the society). In all cases, special care is strongly required when minimizing the expected total cost, even when other reliability constraints are considered. Due to the difficulties in estimating the failure cost Cf (especially when dealing with human lives, environmental deterioration, political consequences, . . .), the direct use of the above equation is not that easy. For design purposes, an alternative to the expected total cost formulation is usually to minimize the initial cost under a prescribed reliability constraint (Moses 1977):

min_d CI(d)    subject to    Pf(d) ≤ Pft,    dL ≤ d ≤ dU    (17)
where dL and dU are respectively the lower and upper bounds of the design variables and Pft is the admissible failure probability, which is set on the basis of the engineering state of knowledge and experience. An equivalent formulation is defined in terms of the target reliability index βt:

min_d C(d)    subject to    β(d) ≥ βt,    dL ≤ d ≤ dU    (18)
This formulation has the advantage of avoiding the failure cost computation. Nevertheless, the failure consequences can be indirectly included by selecting suitable target safety levels. In civil engineering, it is common to use an admissible failure probability of 10^−4 for the ultimate limit state and of 10^−2 for the serviceability limit state. More refined target values are given in the Eurocodes, in terms of the severity of the economic consequences and the number of exposed persons. In principle, the target system reliability should be determined by social and economical considerations. There is no general rule, so far, to select the target value of the system reliability index. Furthermore, the designer's experience and preferences still play an important role in the process. A reasonable choice consists in taking the reliability of old design codes as a target for the new codes. Nevertheless, the choice of the target value is still very important in system reliability-based optimization, because it is the regulator of the reliability indices of the failure modes.
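For a one-dimensional design variable and a linear Gaussian margin, formulation 18 reduces to activating the reliability constraint, as in the sketch below; the cost model, the statistics and the target index are illustrative assumptions.

```python
# Sketch of formulation (18) for a single design variable d (a cross-section
# area): find the cheapest design whose reliability index reaches the target.
# The mechanical/cost model is an illustrative assumption: R = d * fy with a
# random yield stress fy, S a random load effect, cost proportional to d.
from scipy.optimize import brentq

m_fy, s_fy = 235.0, 20.0        # assumed yield stress statistics [N/mm^2]
m_S, s_S = 60_000.0, 9_000.0    # assumed load-effect statistics [N]
beta_t = 3.8                    # assumed target reliability index

def beta(d):
    # closed-form index of the linear margin G = d*fy - S (normal variables)
    return (d * m_fy - m_S) / ((d * s_fy) ** 2 + s_S ** 2) ** 0.5

# beta(d) increases monotonically with d and the cost grows with d, so the
# optimum activates the constraint beta(d) = beta_t
d_opt = brentq(lambda d: beta(d) - beta_t, 1.0, 10_000.0)
print(f"optimal area d* = {d_opt:.1f} mm^2, beta(d*) = {beta(d_opt):.2f}")
```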
The above formulation represents two embedded optimization problems (Enevoldsen and Sørensen 1994; Enevoldsen 1994). The outer one concerns the search for the optimal design variables to minimize the cost, and the inner one concerns the evaluation of the reliability index in the space of random variables. The coupling between the optimization and reliability problems is a complex task and leads to a very high calculation cost. The major difficulty lies in the evaluation of the structural reliability, which is carried out by a particular optimization procedure. In the random variable space, the reliability analysis implies a large number of mechanical calls, while in the design variable space, the search procedure modifies the structural configuration and hence requires the re-evaluation of the reliability level at each iteration. For this reason, the solution of these two problems (optimization and reliability) requires very large computational resources, which seriously reduces the applicability of this approach. This topic will be intensively discussed later on in the chapter by Chateauneuf and Aoues. In general, the RBDO can be formulated according to one of the following forms:
– RBDO1: Minimize the design cost under reliability and structural constraints:

min_d CI(d) subject to β(d) ≥ βt and gj(d) ≤ 0,    or    min_d CI(d) subject to Pf(d) ≤ Pft and gj(d) ≤ 0

where βt is the target reliability index and Pft is the maximum allowable failure probability. When the first order approximation is applied, the relationship between these two forms is given by Pf = Φ(−β) or β = −Φ^{−1}(Pf).
– RBDO2: Maximize the reliability under cost and structural constraints:

max_d β(d) subject to CI(d) ≤ CIt and gj(d) ≤ 0,    or    min_d Pf(d) subject to CI(d) ≤ CIt and gj(d) ≤ 0
– RBDO3: Maximize the reliability per unit cost under structural constraints:

max_d β(d)/CI(d) subject to gj(d) ≤ 0,    or    max_d 1/[Pf(d) CI(d)] subject to gj(d) ≤ 0

which is equivalent to minimizing the ratio cost/reliability:

min_d CI(d)/β(d) subject to gj(d) ≤ 0,    or    min_d CI(d) · Pf(d) subject to gj(d) ≤ 0

This kind of formulation is particularly useful when there is no limitation on the total cost in RBDO2.
Figure 1.10 Perforated beam subjected to uniform load.
– RBDO4: Minimize the total expected cost under reliability and structural constraints:

min_d CI(d) + Cf Pf(d) subject to β(d) ≥ βt and gj(d) ≤ 0,    or    min_d CI(d) + Cf Pf(d) subject to Pf(d) ≤ Pft and gj(d) ≤ 0
These formulations are considered as the basic forms of reliability-based design optimization, where the goal is to better redistribute the material within the structure by taking into account the effects of uncertainties and fluctuations.
4.3 Illustration on perforated simple beam
A simply supported beam, with length L = 2 m and height h = 0.3 m, is perforated by 5 holes of mean radius mR. The beam is subjected to a uniformly distributed load P with mean value 1 MN/m and a coefficient of variation of 15%. The maximum stress is located at point A in Figure 1.10. Under the effect of geometrical uncertainties, the nominal hole radius mR has to be designed on the RBDO basis. In Figure 1.11, the initial, failure and total costs are plotted as a function of the mean hole radius. The minimum cost corresponds to mR = 7.5 cm and to a failure probability of 1.07 × 10^−4. Figure 1.12 shows the expected total cost for different values of consequence severity. It is observed that the hole radius should be decreased for higher consequence costs, in order to reduce the probability of failure and therefore the risk. The optimal solutions found for each failure cost case are: Low: mR = 7.9 cm (Pf = 3.4 × 10^−3), Moderate: mR = 7.5 cm (Pf = 1.1 × 10^−4), High: mR = 7.1 cm (Pf = 4.3 × 10^−6) and Very High: mR = 6.7 cm (Pf = 3.7 × 10^−7). It can be observed that the failure probability levels are very sensitive to the failure consequences, showing that special care should be taken in estimating these consequences, as they drastically change the optimal solution.
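The sketch below reproduces the qualitative trade-off of Figure 1.11 with a one-dimensional sweep of the mean hole radius. The reliability and cost models are purely hypothetical stand-ins (the chapter's mechanical model of the perforated beam is not reproduced here), so the numerical output should not be compared with the values quoted above.

```python
# Sketch of the total-cost trade-off of Figure 1.11. pf_model() and
# initial_cost() are hypothetical stand-ins for the actual reliability and
# cost analyses; their coefficients are chosen only for a plausible shape.
import numpy as np
from scipy.stats import norm

C_f = 2.0e5                               # assumed failure (consequence) cost

def pf_model(m_r):
    # hypothetical: reliability index decreasing with the mean hole radius [m]
    beta = 250.0 * (0.090 - m_r)
    return norm.cdf(-beta)

def initial_cost(m_r):
    # hypothetical: larger holes save material, hence a lower initial cost
    return 3000.0 - 20000.0 * m_r

radii = np.linspace(0.065, 0.080, 151)    # mean hole radius [m]
total = initial_cost(radii) + C_f * pf_model(radii)
best = radii[np.argmin(total)]
print(f"optimal mean hole radius ~ {100 * best:.2f} cm, "
      f"Pf ~ {pf_model(best):.2e}, total cost ~ {total.min():.0f}")
```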
5 Multi-component RBDO

In practical structural systems, the overall failure generally depends on a certain number of components, where each one may have several failure modes, arranged in series and/or parallel systems.
Figure 1.11 Initial, failure and total costs of the perforated beam.
Figure 1.12 Expected total costs in function of the failure consequence costs.
During the optimization of redundant structures, the contribution of the various members is highly redistributed and the prediction of the most important components is not easy. Some components that are insignificant at the beginning of the RBDO procedure can become very important in the neighborhood of the optimal point. That is why the structural reliability cannot be correctly computed without considering the whole system and taking account of all the failure modes. In this case, the constraint on system reliability becomes a computational challenge because of the different levels of embedded optimization loops. So, the system RBDO has common limitations due to the system reliability computation and the necessity to make some approximations in practical cases (e.g. bounds, reduction of failure paths, . . .). This is probably the main reason why the system approach is less popular than the component approach. Another difficulty arises from the fact that the component assembly is a logical combination (i.e. union and intersection of events) rather than just an algebraic operation, which is hard to deal with in system optimization, as sensitivity computation is not easy for
logical operators. For example, the derivation of the union of two events is not simple to handle when one of them is totally or partially included inside the other, as the derivative operator can only capture the dominant event sensitivity. This difficulty is emphasized by the fact that the failure mode combination is strongly related to the few significant failure modes at a given instance of the computing process. However, as the design variable values change in each iteration, the significant failure modes are not always the same, which greatly influences the convergence of the optimization procedure. Fortunately, in practice the significant failure modes identified in the system reliability analysis tend to stabilize after a few iterations.

5.1 System RBDO formulation

The system RBDO can be formulated either at the component level or at the system level (Enevoldsen 1994). At the component level, the RBDO can be written by specifying the target reliability for each one of the structural components, leading to:

min_d CI(d)    subject to    βi(d) ≥ βti    and    gj(d) ≤ 0    (19)
where βi(d) and βti are respectively the reliability index and the target index for the i-th component. Each one of the component reliability constraints includes a minimum reliability requirement for a specific failure mode at a specific location in the structure. For example, a member has several critical cross-sections which may fail according to several modes, such as yielding, cracking and excessive deformation, in addition to member buckling failure and structural instability. At the system level, the RBDO is formulated by only specifying the target system reliability for the whole structure:

min_d CI(d)    subject to    βsys(d) ≥ βt    and    gj(d) ≤ 0    (20)
where βsys(d) and βt are respectively the reliability index and the target index for the whole system. The system reliability is generally evaluated by the use of upper and lower bounds. Some authors combined the constraints on component and system reliabilities, but this approach could lead to either redundant or inconsistent constraints. Aoues and Chateauneuf (2007) proposed a scheme for consistent RBDO of structural systems. The basic idea consists in updating the component target safety levels in order to fulfill the overall system target. In the main optimization loop, the cost function is minimized under the constraints that the component reliability indices must satisfy the updated target values:

min_d C(d)    subject to    βj(d) ≥ βtj^Updated,    dL ≤ d ≤ dU    (21)
where βtj^Updated is the updated target reliability index for the j-th failure mode and βj(d) is the reliability index for the considered design configuration.
Figure 1.13 Overhanged beam with variable cantilever depth.
In the updating procedure, the target indices are adjusted to meet the system reliability requirement. This can be performed by solving the problem:

min_{βt^Updated} Σ_{j=1}^{mp} (βtj^Updated − βj)²    subject to    βsys(βtj^Updated, ρjk) ≥ βt    (22)

which is solved for the updated target indices βtj^Updated.
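A minimal sketch of the updating subproblem (22) for two failure modes is given below, using the first-order series-system expression of equation 7 for βsys; the current component indices, the mode correlation and the system target are assumed values.

```python
# Sketch of the target-updating subproblem (22): adjust the component target
# indices as little as possible while the first-order series-system index
# meets the system target. All numerical values are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal, norm

beta_now = np.array([2.2, 2.2])   # assumed current component indices
rho, beta_t_sys = 0.3, 2.0        # assumed mode correlation and system target

def beta_sys(b):
    # two-mode series system: Pf,sys ~ 1 - Phi_2(b1, b2; rho), cf. equation (7)
    cov = [[1.0, rho], [rho, 1.0]]
    pf_sys = 1.0 - multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(b)
    return -norm.ppf(pf_sys)

res = minimize(lambda b: np.sum((b - beta_now) ** 2),
               x0=beta_now,
               constraints={"type": "ineq",
                            "fun": lambda b: beta_sys(b) - beta_t_sys},
               method="SLSQP")

print("updated component targets:", np.round(res.x, 3),
      "beta_sys =", round(float(beta_sys(res.x)), 3))
```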
Overhanged reinforced concrete be am
In order to show the interest of system analysis, an overhanged beam with variable cantilever depth is considered, as shown in Figure 1.13 (Aoues and Chateauneuf 2007). With a constant breadth of 20 cm, the beam is defined by the middle-span depth d1 and the cantilever end depth d2 . The span is L = 8 m and the cantilever length is Lc = 3 m. The beam is subjected to uniformly distributed loads q and q/8 as illustrated in Figure 1.13. In order to reduce the negative moments, two tension rods are acting at the cantilever ends, modeled by the tensile force P. The concrete strength is taken as fcu = 25 MPa and the steel yield strength is fY = 200 MPa. An extreme loading case is considered where q = 40 kN/m and P = 30 kN; leading to the maximum moments M(x = 0.75) = 11.25 kNm and M(x = 3) = −90 kNm. The considered random variables are the applied loads and the effective depth of RC cross-sections, which are considered as normally distributed to allow for easy graphical illustrations. For a given cross-section, the design equation is written by:
Gi = fY Asi
f Y As i di − 2(0.85fcu b)
− Mi
(23)
24
Structural design optimization considering uncertainties Table 1.1 Statistical data for random variables. Random variable
Mean
St-deviation
Middle span depth d1 Cantilever end depth d2 Reference moment M
md1 md2 mM = 90 kNm
σd1 = 5 cm σd2 = 2.5 cm σM = 18 kNm
The reinforcement is chosen as As1 =12 cm2 and As2 = 6 cm2 , leading to the limit states: G1 = 0.24(d1 − 0.02824) − M1 G2 = 0.12(d2 − 0.01412) − M2
(24)
which can be written in the normalized space by probabilistic transformation: H1 = 0.24(md1 + σd1 ud1 − 0.02824) − (mM + σM uM ) 2 H2 = 0.12(md2 + ρσd2 ud1 + 1 − ρ σd2 ud2 − 0.01412) −0.125(mM + σM uM )
(25)
where M is a reference moment (equal to M1 ), ui are the normalized variables and ρ is the correlation between d1 and d2 . The distribution parameters are given in Table 1.1. The correlation between d1 and d2 is taken as ρ = −0.6. As this situation is considered as extreme one, the allowable failure probability of the system is set to: Pf _system = 0.05 (naturally, this is a conditional probability as it assumes that extreme situation occurs). The reliability solution leads to the direction cosines: αd1 = 0.55 and αM = −0.83 for the limit state H1 , and αd1 = −0.48, αd2 = 0.64 and αM = −0.60 for the limit state H2 . Thus, the correlation between these two limit states is equal to 0.233. The overall RC volume in this beam is computed by V = 0.2(11 d1 + 3 d2 ). To take account for the workmanship in the cost calculation, the depths are set to the power 3. The final cost of RC is estimated by 150a/m3 . The system RBDO is applied to the structure, by adopting two considerations: 1) the target reliability index is the same for all the limit states, and 2) the target reliability indexes are adapted to find a better solution, under the satisfaction of the system target. In the first case, the target system failure probability of 0.05 is reached when both components have reliability indexes of 1.943, knowing the correlation of 0.233. In the second case, the target of 0.05 is searched, where the cost is to be set as low as possible. The interest of the adaptive target strategy is shown by comparing these two RBDO formulations, as indicated in Table 1.2. For the same system reliability level, the adaptive target methodology allows us to significantly decrease the structural cost, by better distributing the material within the structure. Figure 1.14 compares the failure domains for both solutions (the 2D graph is given for the limit states projected on the plane uM = 0). As the system failure probability is the same for both formulations, the decrease of the margin for H1 implies the increase of the margin for H2 . For the same system reliability, the adaptive approach allows us to reach a cost reduction of 12.4%. Figure 1.14 shows also the beam profile obtained
P r i n c i p l e s o f r e l i a b i l i t y-b a s e d d e s i g n o p t i m i z a t i o n
25
Table 1.2 RBDO formulation and solutions. Considered aspect
Component-based formulation
System-based formulation
Formulation
Minimize: 300(11m3d1 + 3m3d2 ) under: β1 ≥ 1.9434 and: β2 ≥ 1.9434
Minimize: 300(11m3d1 + 3m3d2 ) under: βsys ≥ 1.64485
Failure point U ∗ :
u∗d1 = −1.07; u∗d2 = 0; u∗M = 1.61 ∗ ∗ ud1 = 0.93; ud2 = −1.24; u∗M = 1.17
u∗d1 = −0.91; u∗d2 = 0; u∗M = 1.37 ∗ ∗ ud1 = −1.58; ud2 = −2.11; u∗M = 1.98
Reliability levels at optimum:
β1 = 1.9434 β2 = 1.9434 Pfsys = 0.05
β1 = 1.6487 β2 = 3.2959 Pfsys = 0.05
Optimum design:
m∗d1 = 57.8 cm m∗d2 = 16.9 cm CT = 64.3a
m∗d1 = 55.2 cm m∗d2 = 21.1 cm CT = 56.3a
H1
ud
2
H2 ud
1
Limit states for identical component reliabilities
Limit states for adaptive targets
16.9 cm 21.1 cm 55.2 cm 57.8 cm Design with identical component reliabilities
Design with adaptive targets
Figure 1.14 Failure domains and optimum design for identical and adaptive formulations.
by the two approaches. It is clear that the adaptive target approach tries to decrease the depth where cost is widely involved, without decreasing the overall system safety.
6 RBDO issues The interest of RBDO is not limited to the design of new structures, but it also offers a powerful tool to solve a large class of structural problems. The RBDO is applied to various levels of reliability assessment, design, maintenance and codification. Some of these issues are briefly presented in this section. 6.1
Multicriteria approach for RBDO
As a matter of fact, the RBDO is a multicriteria optimization problem where the objective is to minimize the costs and to maximize the safety (Kuschel and Rackwitz 1997). It is generally acceptable that reliability and economy have conflicting requirements
26
Structural design optimization considering uncertainties
which must be considered simultaneously in the optimization process. The usual formulations aims either to combine these two objectives in only one weighted objective or to deal with one of these objectives as an optimization constraint. A more rational formulation can be stated as real multicriteria problem where the designer can get the Pareto optimal configurations in order to make consistent choices in the design process. As an example, Frangopol (2003) proposed a four-objective vector for bridge structures: f(d, x) = [V(d), PfCOL (d, x), Pf YLD (d, x), Pf DFM (d, x)]
(26)
where V(d) is the material volume, Pf COL (d, x) is the collapse probability, Pf YLD (d, x) is the first yield probability and Pf DFM (d, x) is the excessive deformation probability. This problem can be solved by any general multi-criteria technique. 6.2
C o d e c al i b r at io n b y R B DO
The design codes of practice must fit a certain objective for the whole applicable domain. Many actual design codes derive from a reliability-based calibration procedure to determine the partial safety factors to be applied in design (Sørensen et al. 1994). The objective of these codes is generally to keep the structural reliability above the specified target level (Ang and De Leon 1997). The problem of defining the safety factors is solved by the minimization of a penalization function for all the design situations covered by the design code (Gayton et al. 2004); the optimization problem is thus (Ditlevsen & Madsen 1996): min f (γi ) = γi
L
W(ωj , βj (γi ), βt )
(27)
j=1
where W( · ) is a penalty function, γi are the partial safety factors, βj (γi ) is the safety index for the j-th situation and βt is the target reliability. Several kinds of penalty function have been proposed in the literature. The simplest one is defined by the weighted least square function: W1 (γi ) = ωj (βj (γi ) − βt )2
(28)
This function has the advantage of being very simple and the solution of the optimization problem (equation 27) can be greatly simplified if βj (γi ) has a simple explicit expression. Nevertheless, this function is symmetrical with respect to βt , i.e. it only depends on the difference βj − βt , and structures with a reliability index smaller than the target are not more penalized than structures with higher reliability index. Another function can take the following form (Lind 1977): W2 (γi ) = ωj (k(βj (γi ) − βt ) + exp(−k(βj (γi ) − βt )) − 1)
(29)
where k > 0 is the curvature parameter. This function penalizes the reliability indexes which are smaller than the target, compared to those higher than the target. When the parameter k increases this function becomes more penalizing for βj < βt than the least square function. For large values of k, the penalty goes to infinity for βj < βt , and so
reliability indexes lower than the target become forbidden. Other penalty functions can be proposed on the basis of socio-economic measures of the gap between the code and its objective. In such a case, the relationship between cost and target reliability index must be known. Classically, the goal of the design codes is to minimize the expression (27) for the whole spectrum of design situations. Nevertheless, new evolutions of some codes of practice tend to homogenize the risk (i.e. the product of the failure probability by the consequences) instead of the reliability level (or failure probability). As an example, the RBDO calibration could take the form:

min_{d,γ} CT(d, γ) = Σ_{i=1}^{L} CTi(d, γ)
subject to  Σ_{j=1}^{L} W(ωj, βj(d, γ), βt) ≤ ε    (30)
            gj(d) ≤ 0
            d^L ≤ d ≤ d^U

where ε is an acceptable tolerance for target fitting.

6.3 Topology based RBDO

Knowing that the RBDO concerns mostly shape optimization, the application to topology optimization is a new research field (Kharmanda et al. 2004). The basic idea concerns the use of uncertainties as a control parameter for topology selection. In fact, the reliability constraint allows us to get a robust structural topology. Figure 1.15 illustrates the fact that different topologies can be suitable for the same ground structure. Usually, the comparison in deterministic topology optimization is only related to the minimized mean compliance, without observing the solution dispersion. The principle of reliability-based topology robustness consists in defining the topology which is less sensitive to system uncertainties. The main difficulty in dealing with topology lies in the fact that topology optimization is a qualitative approach, while the reliability-based design is a quantitative
Figure 1.15 RBTO and principle of reliability-based topology robustness.
approach. The coupling of the two methods requires special developments to overcome formulation and efficiency problems.

6.4 Time-variant RBDO

Every designer knows well that system information is not perfect and that its validity is limited by system aging. In fact, most of the phenomena involved in the total cost function are time-variant: for example, the loading fluctuation over the structural lifetime, the deterioration of material properties with time, the variation of operating and maintenance costs, and the monetary fluctuation of failure costs. All these time-variant phenomena lead to a time-variant optimal solution. However, the designer must take decisions at a given stage of the project (largely before the construction or the manufacturing of the system), as a function of the data available at that stage. The resulting solution is optimal only in the first part of the structure lifetime, as it does not account for aging and long-term exposure. In time-variant RBDO (Kuschel and Rackwitz 1998), the ideal scheme consists in designing the system for the best optimal solution, considering the whole lifetime of the system. In this case, the utility function takes the form:

max_d  U(p, d, T) = B(d, T) − CI(d) − L(p, d, T)
subject to  gj(d) ≤ 0    (31)

with

B(d, T) = ∫_0^T b(t) d(t) (1 − Pf(p, d, t)) dt    (32)
L(p, d, T) = ∫_0^T Cf(p, d) f(p, d, t) d(t) dt
where b(t) is the benefit derived from the existence of the system, Cf is the failure cost, f(p, d, t) is the probability density of the time to failure, d(t) is the discount function (or capitalization function) and T is the system age.
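As a rough numerical illustration of (31)–(32), the benefit and loss integrals can be evaluated by simple quadrature once the benefit rate, discount function and time-to-failure model are specified. The sketch below assumes a constant benefit rate, continuous discounting and an exponential time to failure; all values are illustrative assumptions, not data from the text.

```python
import numpy as np

r, T = 0.05, 50.0                 # discount rate, system age/horizon [years]
lam = 1e-3                        # assumed constant failure rate
Cf, CI = 1.0e6, 2.0e5             # failure cost, initial cost (assumed)
b = lambda t: 5.0e4               # benefit rate per year (assumed constant)
disc = lambda t: np.exp(-r * t)   # discount (capitalization) function d(t)
f_life = lambda t: lam * np.exp(-lam * t)   # density of the time to failure
Pf = lambda t: 1.0 - np.exp(-lam * t)       # failure probability up to time t

trap = lambda y, x: float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)
t = np.linspace(0.0, T, 2001)

B = trap(b(t) * disc(t) * (1.0 - Pf(t)), t)   # benefit integral in (32)
L = trap(Cf * f_life(t) * disc(t), t)         # loss integral in (32)
print(f"U = B - CI - L = {B - CI - L:,.0f}")  # objective of (31)
```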
6.5 Coupled reliability-based design and maintenance planning
Although in design practice, and due to system complexity, the maintenance planning is often considered as an independent step, reliability-based optimization can also be applied to a coupled set of design and maintenance parameters. In this case, the problem is formulated as:

min_d  CT(d) = CI(d) + CF(d) + CM(d)
subject to  gj(d) ≤ 0    (33)
At the design stage, the maintenance cost is minimized by selecting the best set of parameters. At this stage, there is no available site information (as the system is
not constructed yet) and a priori hypotheses have to be formulated. Generally, regular maintenance intervals are chosen at this stage. The maintenance cost is usually a function of the type of inspection method mS, the number of inspections in the remaining lifetime nS, and the times of the different inspections t. The maintenance cost can be described by (Enevoldsen and Sørensen 1994):

CM(d, p) = CPM(d, p) + CINS(d, p) + CREP(d, p)    (34)
where CM is the expected maintenance cost, CPM is the preventive maintenance cost, CINS is the expected inspection cost, CREP is the expected cost of repairs, and p is the vector of maintenance parameters. Enevoldsen and Sørensen (1994) suggested using the following expressions to evaluate the inspection and repair costs:

CINS(d, mS, nS, t) = Σ_{i=1}^{nS} CSi(mS) (1 − Pf(d, ti)) 1/(1 + r)^ti
CREP(d, nS, t) = Σ_{i=1}^{nS} CRi PRi(d, ti) 1/(1 + r)^ti    (35)
where CSi is the ith inspection cost, Pf is the failure probability in the time interval [0, ti ], r is the discount rate, CRi is the cost of a repair at the ith inspection and PRi is the probability of performing a repair after the ith inspection for surviving components.
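A minimal numerical sketch of equation (35) is given below. The inspection times, unit costs and probability models are assumed values chosen only to illustrate how the expected inspection and repair costs accumulate with discounting.

```python
import numpy as np

r = 0.04                                     # discount rate
t_insp = np.array([5.0, 10.0, 15.0, 20.0])   # inspection times t_i [years]
C_S = np.full(t_insp.shape, 2.0e3)           # inspection costs C_Si (assumed)
C_R = np.full(t_insp.shape, 5.0e4)           # repair costs C_Ri (assumed)
lam = 2e-3
Pf = 1.0 - np.exp(-lam * t_insp)             # failure probability in [0, t_i]
P_rep = np.array([0.02, 0.05, 0.10, 0.15])   # repair probabilities P_Ri (assumed)

disc = 1.0 / (1.0 + r) ** t_insp
C_INS = np.sum(C_S * (1.0 - Pf) * disc)      # first sum of equation (35)
C_REP = np.sum(C_R * P_rep * disc)           # second sum of equation (35)
print(f"expected inspection cost = {C_INS:,.0f}")
print(f"expected repair cost     = {C_REP:,.0f}")
```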
7 Conclusions

RBDO is a powerful tool for the robust design of structural systems. The explicit consideration of the safety level allows us to optimize the total cost, and the solution becomes less sensitive to system uncertainties. Contrary to traditional deterministic design optimization, RBDO allows us to modulate the safety margins as a function of the uncertainty effects for each variable, in order to reach economic, safe, efficient and robust designs. In this sense, the safety factors are optimally defined within the system, compared to deterministic design where the safety factors are set before undergoing the optimization process. RBDO is still an active research field, extending the possibilities for new applications. Design, topology and time-variant reliability-based optimization are very promising fields on the way to performance-based design for cost-effectiveness, durability and lifetime management of structural systems.
References

Ang, A.H.-S. & De Leon, D. 1997. Determination of optimal target reliabilities for design and upgrading of structures. Structural Safety 19:91–103.
Aoues, Y. & Chateauneuf, A. 2007. Reliability-based optimization of structural systems by adaptive target safety application to RC frames. Structural Safety. Article in Press.
Ditlevsen, O. 1979. Narrow reliability bounds of structural systems. Journal of Structural Mechanics 7:435–451.
Ditlevsen, O. & Madsen, H. 1996. Structural Reliability Methods. John Wiley & Sons.
Enevoldsen, I. 1994. Reliability-based optimization as an information tool. Mech. Struct. & Mach. 22:117–135.
Enevoldsen, I. & Sørensen, J.D. 1994. Reliability-based optimization in structural engineering. Structural Safety 15:169–196.
Frangopol, D.M. 1995. Reliability-based optimum structural design. In: Probabilistic structural mechanics handbook, edited by C. Raj Sundararajan, Chapman Hall, USA, 352–387.
Frangopol, D.M. 1999. Life-cycle cost analysis for bridges. In: Bridge safety and reliability. ASCE, Reston, Virginia, 210–236.
Frangopol, D.M. 2000. Advances in life-cycle reliability-based technology for design and maintenance of structural systems. In: Computational mechanics for the twenty-first century. Edinburgh: Saxe-Coburg Publishers, 257–270.
Frangopol, D.M. & Maute, K. 2003. Life-cycle reliability-based optimization of civil and aerospace structures. Computers & Structures 81:397–410.
Gayton, N., Mohamed-Chateauneuf, A., Sørensen, J.D., Pendola, M. & Lemaire, M. 2004. Calibration methods for reliability-based design codes. Structural Safety 26(1):91–121.
Hasofer, A.M. & Lind, N.C. 1974. An Exact and Invariant First Order Reliability Format. J. Eng. Mech., ASCE, 100, EM1:11–121.
Kharmanda, G., Mohamed-Chateauneuf, A. & Lemaire, M. 2002. Efficient reliability-based design optimization using a hybrid space with application to finite element analysis. Journal of Structural and Multidisciplinary Optimization 24(3):233–245.
Kharmanda, G., Mohamed-Chateauneuf, A. & Lemaire, M. 2002. CAROD: Computer-Aided Reliable and Optimal Design as a concurrent system for real structures. Journal of Computer Aided Design and Computer Aided Manufacturing CAD/CAM 1(1):1–12.
Kharmanda, G., Olhoff, N., Mohamed-Chateauneuf, A. & Lemaire, M. 2004. Reliability-based topology optimization. Struct. Multidisc. Optim. 26:295–307.
Kuschel, N. & Rackwitz, R. 1997. Two basic problems in reliability-based structural optimization. Mathematical Methods of Operations Research 46:309–333.
Kuschel, N. & Rackwitz, R. 1998. Structural optimization under time-variant reliability constraints. Proceedings of the eighth IFIP WG 7.5 Working conference on Reliability and Optimization of Structural Systems, edited by Nowak, University of Michigan, Ann Arbor, Michigan, USA, 27–38.
Kuschel, N. & Rackwitz, R. 2000. A new approach for structural optimization of series system. In: R.E. Melchers & M.G. Stewart (eds). Proceedings of the 8th International conference on applications of statistics and probability (ICASP) in Civil engineering reliability and risk analysis, Sydney, Australia, December 1999, Vol. 2, pp. 987–994.
Lemaire, M., in collaboration with Chateauneuf, A. & Mitteau, J.C. 2006. Structural reliability. ISTE, UK.
Lind, N.C. 1977. Reliability based structural codes, practical calibration. Safety of structures under dynamic loading, Trondheim, Norway, 149–160.
Madsen, H.O. & Friis Hansen, P. 1991. Comparison of some algorithms for reliability-based structural optimization and sensitivity analysis. In: C.A. Brebbia & S.A. Orszag (eds): Reliability and Optimization of Structural Systems, Springer-Verlag, Germany, 443–451.
Moses, F. 1977. Structural System Reliability and Optimization. Comput. Struct. 7:283–290.
Moses, F. 1997. Problems and prospects of reliability based optimization. Engineering Structures 19(4):293–301.
Rackwitz, R. 2001. Reliability analysis, overview and some perspectives. Structural Safety 23:366–395.
Sørensen, J.D., Kroon, I.B. & Faber, M.H. 1994. Optimal reliability-based code calibration. Structural Safety 15:197–208.
Chapter 2
Reliability-based optimization of engineering structures

John D. Sørensen, Aalborg University, Aalborg, Denmark
ABSTRACT: The theoretical basis for reliability-based structural optimization within the framework of Bayesian statistical decision theory is briefly described. Reliability-based cost benefit problems are formulated and exemplified with structural optimization. The basic reliability-based optimization problems are generalized to the following extensions: interactive optimization, inspection and repair costs, systematic reconstruction, re-assessment of existing structures. Illustrative examples are presented including a simple introductory example, a decision problem related to bridge re-assessment and a reliability-based decision problem for offshore wind turbines.
1 Introduction

The theoretical basis for reliability-based structural optimization can be formulated within the framework of Bayesian statistical decision theory, mainly developed and described in the period 1940–60, see for example (Raiffa & Schlaifer 1961), (Aitchison & Dunsmore 1975), (Benjamin & Cornell 1970) and (Ang & Tang 1975). By statistical decision theory it is possible to solve a large number of decision problems where some of the parameters are modeled as uncertain. The uncertain parameters are modeled by stochastic variables or stochastic processes. Uncertain costs and benefits can thus be accounted for in a rational way. A large number of “simple’’ examples for application of statistical decision theory within structural and civil engineering are given in e.g. (Benjamin & Cornell 1970), (Rosenblueth & Mendoza 1971) and (Ang & Tang 1975). During the last decades significant achievements have been obtained in the development of efficient numerical techniques which can be used in solving problems formulated by statistical decision theory. Especially the development of FORM (First Order Reliability Methods), SORM (Second Order Reliability Methods) and simulation techniques to evaluate the reliability of components and systems has been important, see e.g. (Madsen et al. 1986). In the same period efficient methods to solve non-linear optimization problems have also been developed, e.g. the sequential quadratic optimization algorithms (Schittkowski 1986) and (Powell 1982). These developments have made it possible to solve problems formulated in a decision theoretical framework. Examples are:
• Reliability-based inspection and repair planning for offshore structures and concrete structures, formulated as a preposterior decision problem, see e.g. (Kroon 1994), (Engelund 1997), (Skjong 1985), (Thoft-Christensen & Sørensen 1987),
(Fujita et al. 1989), (Madsen et al. 1989), (Madsen & Sørensen 1990), (Fujimoto et al. 1989), (Sørensen & Thoft-Christensen 1988) and (Faber et al. 2000).
• Reliability-based structural optimization problems and associated techniques for sensitivity analysis and numerical solution. Basic formulations of reliability-based structural optimization are given in e.g. (Murotsu et al. 1984), (Frangopol 1985), (Sørensen & Thoft-Christensen 1985) and (Enevoldsen & Sørensen 1994). System aspects are considered in e.g. (Enevoldsen & Sørensen 1993), interactive reliability-based optimization in (Sørensen et al. 1995) and optimization with time-variant reliability in e.g. (Kuschel & Rackwitz 1998). Further it is noted that a one-level approach for reliability-based optimization is described in (Streicher & Rackwitz 2002) based on an idea in (Madsen & Hansen 1992).
In section 2 a short description of Bayesian decision theory for engineering decisions is given and in section 3 reliability-based structural optimization problems are formulated. Only time-invariant reliability problems are considered. Three levels of decision problems with increasing degree of complexity can be identified: (1) decisions with given information (e.g. for new structures), (2) decisions with new information (e.g. for existing structures), (3) decisions involving planning of experiments/inspections to obtain new information (e.g. for inspection planning). Further, interactive optimization aspects are discussed. In order to solve reliability-based optimization problems it is important to have accurate and numerically effective methods to evaluate probabilities of different events and expectations. In section 4 some probabilistic methods, such as FORM/SORM, are briefly mentioned. Also, techniques are described for sensitivity analyses to be used in the numerical solution of the optimization problems using general optimization algorithms. In section 5 illustrative examples are presented, including applications with re-assessment of a concrete bridge and with reliability-based design of a support structure for wind turbines.
2 Decision theory for engineering decisions

Engineers are often in the situation of having to take decisions on the design of a new structure or on the repair/maintenance of existing structures where statistical information is available. In the following it is shown how Bayesian statistical decision theory can be used for making such decisions in a rational way, see (Raiffa & Schlaifer 1961) and (Benjamin & Cornell 1970) for a detailed description. An important difficulty in Bayesian statistical decision theory when applied in civil and structural engineering is that it can be difficult to assign values to the cost of failure, or of not acceptable behavior, especially when loss of human lives is involved. One solution is to calibrate the cost models to existing structures or to base the decisions on comparisons with alternative solutions. Further, organizational factors can have a rather significant influence in the decision process. These factors often have an influence which is not rational from a cost-benefit point of view. Examples are the influence of the organizational structure, personal preferences and organizational culture. The first problem to consider is that of making rational decisions when some of the parameters defining the model are uncertain, but a statistical description of the
Figure 2.1 Decisions with given information.
parameters is available, i.e. the statistical information is given. The uncertain parameters are modeled by n stochastic variables X = (X1, X2, . . . , Xn). The density function of the stochastic variables is fX(x, θ) where θ are statistical parameters, for example mean values, standard deviations and correlation coefficients. Further, it is assumed that a decision has to be taken between a number of alternatives which can be modeled by design/decision variables z = (z1, z2, . . . , zN). In figure 2.1 a decision model with one discretized variable z is shown. The decision is taken before the realization by nature of the stochastic variables is known. Besides the decision variables z and the uncertain variables X, also a cost function C(z, X) is introduced in the decision model in figure 2.1. When a decision z has been taken and a realization x of the stochastic variables appears, then the value obtained is denoted C(z, x) and represents a numerical measure of the consequences of the decision and the realization obtained. C(z, X) is assumed to be related to money and represents in general costs minus benefits, if relevant. As an example, the design parameters z could be the geometrical parameters of a structural system (cross-sectional dimensions and topology), the stochastic variables X could be loads and material strengths and the objective function C could be the cost of the structure. In some decision problems it can be difficult to specify the cost function, especially if consequences not directly measurable in money are involved, for example personal preferences. However, as described in (von Neumann & Morgenstern 1943), rational decisions can be taken if the cost function is made such that the expected value of the cost function is consistent with the personal preferences. Thus, if the decision-maker wants to act rationally, the strategy z which minimizes the expected cost has to be chosen as

C* = min_z E_X[C(z, X)] = min_z ∫ C(z, x) fX(x) dx    (1)

E_X[·] is the expectation with respect to the joint density function of the stochastic variables X. C* is the minimum cost corresponding to the optimal decision z*. The optimization problem can be generalized to include benefits B(z) such that the total expected benefits minus costs, Z, are maximized. (1) is then written

Z* = max_z Z(z) = max_z {B(z) − E_X[C(z, X)]} = max_z {B(z) − ∫ C(z, x) fX(x) dx}    (2)
where it is assumed that the benefits are not dependent on the stochastic variables X.
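The expectations in (1)–(2) can be estimated by simulation when no closed form is available. The following sketch, with an assumed cost model and distributions (not taken from the chapter), estimates E_X[C(z, X)] by Monte Carlo for a few candidate decisions z and picks the one maximizing the expected benefits minus costs.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_cost(z, n=200_000):
    """Monte Carlo estimate of E_X[C(z, X)] for an assumed cost model."""
    S = rng.normal(20.0, 5.0, n)                 # load (assumed Normal)
    R = rng.lognormal(np.log(50.0), 0.10, n)     # strength (assumed LogNormal)
    failed = z * R < S                           # failure indicator
    construction = 100.0 * z                     # construction cost (assumed)
    C_f = 1.0e5                                  # failure cost (assumed)
    return construction + C_f * failed.mean()

B = 5.0e3                                        # benefit, independent of z
candidates = [0.8, 0.9, 1.0, 1.1, 1.2]
Z = {z: B - expected_cost(z) for z in candidates}    # cf. equation (2)
z_star = max(Z, key=Z.get)
print(f"z* = {z_star}, Z(z*) = {Z[z_star]:.1f}")
```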
3 Reliability-based structural optimization

The formulations given above can be used in a number of cases related to the design of structures. As mentioned in section 2 they can e.g. be used in a design situation where z models the design variables (size and shape variables in a structural system), X models uncertain loads and material parameters, B models the benefits and C models the total expected costs of design and of possible failure. As mentioned, only time-invariant reliability problems are considered.
3.1 Basic reliability-based optimization formulations

First, it is assumed that
• There is no systematic reconstruction of the structure in case of failure
• Discounting can be ignored

The total expected cost-benefits can then be written

Z(z) = B(z) − C(z) = B(z) − CI(z) − Cf PF(z)    (3)
where CI(z) and Cf model the costs due to construction and failure, B(z) models the benefits and PF(z) is the probability of failure. Failure/no failure should here be considered in a general sense as satisfactory/not satisfactory behavior. The optimal design z* is obtained from the optimization problem:

max_z Z(z) = max_z {B(z) − CI(z) − Cf PF(z)}    (4)

(4) can equivalently be formulated as a reliability-constrained optimization problem

max_z B(z) − CI(z)
subject to  β(z) ≥ βmin    (5)

where the generalized reliability index is defined by

β(z) = −Φ^(−1)(PF(z))    (6)

Φ is the standard normal distribution function. βmin can be a code-specified minimum acceptable reliability level related to annual or lifetime reference time intervals. Other design constraints can be added to (5) if needed. (4) and (5) give the same optimal decision if βmin is chosen as the reliability level corresponding to the optimal solution z* of (4): βmin = β(z*), i.e. there is a close connection between βmin and Cf/CI. This can easily be seen by considering the Kuhn-Tucker optimality conditions for (4) and (5). (5) is a two-level optimization problem, since the calculation of the reliability index β by FORM requires an optimization problem to be solved, see section 4.
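For a single design variable and a linear limit state with Normal variables, the two formulations (4) and (5) can be compared directly, since β(z) is available in closed form. The sketch below (cost figures and distributions are assumptions made for illustration) maximizes the total expected cost-benefit of (4); choosing βmin = β(z*) in (5) then reproduces the same design.

```python
from math import sqrt
from scipy.optimize import minimize_scalar
from scipy.stats import norm

mu_R, sig_R = 50.0, 5.0        # strength parameters (assumed)
mu_S, sig_S = 20.0, 5.0        # load parameters (assumed)
C_f, B = 1.0e5, 5.0e3          # failure cost and benefit (assumed)

def beta(z):                   # reliability index of g = z*R - S, cf. (6)
    return (z * mu_R - mu_S) / sqrt((z * sig_R) ** 2 + sig_S ** 2)

def C_I(z):                    # construction cost, assumed linear in z
    return 100.0 * z

def neg_Z(z):                  # negative of the objective in (4)
    return -(B - C_I(z) - C_f * norm.cdf(-beta(z)))

res = minimize_scalar(neg_Z, bounds=(0.5, 2.0), method="bounded")
print(f"z* = {res.x:.3f}, beta(z*) = {beta(res.x):.2f}")
```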
The optimization problem in (5) can be generalised to the following element reliability-based structural optimization problem:

max_z Z(z) = B(z) − C(z) = B(z) − [ Σ_{i=1}^{mD} CIi Vi(z) + Σ_{i=1}^{mP} Cfi Φ(−βi(z)) ]
subject to  βi(z) ≥ βi^min,    i = 1, . . . , M
            BI,i(z) ≥ 0,       i = 1, . . . , mI
            BE,i(z) = 0,       i = 1, . . . , mE
            zi^l ≤ zi ≤ zi^u,  i = 1, . . . , N    (7)
where z = (z1, . . . , zN) are the design (or optimization) variables. The optimization variables are assumed to be related to parameters defining the geometry of the structure (for example diameter and thickness of tubular elements) and coordinates (or related parameters) defining the geometry (shape) of the structural system. The objective function C consists of a deterministic and a probabilistic part with mD and mP terms, respectively. Vi is e.g. a volume in the ith deterministic term and CIi is the cost per volume of the ith term modelling the construction costs. Vi is assumed to be deterministic. If stochastic variables influence Vi then design values, see below, are assumed to be used to calculate Vi. In the probabilistic part, Cfi is the cost due to failure of failure mode i. βi, i = 1, . . . , mP are reliability indices for the mP failure modes. The general formulation of (7) allows the objective function to model both the structural weight and the total expected costs of construction and failure. The constraints in (7) are based on the reliability indices βi, i = 1, . . . , M for M failure modes. βi^min, i = 1, . . . , M are the corresponding lower limits on the reliabilities. BI,i, i = 1, . . . , mI and BE,i, i = 1, . . . , mE define the deterministic inequality and equality constraints in (7) which can ensure that response characteristics such as displacements and stresses do not exceed codified critical values. Determination of the inequality constraints usually includes finite element analyses of the structural system. The inequality constraints can also include general design requirements for the design variables. Finally, also simple bounds are included as constraints. The variables (parameters) used to model the structure are characterized as stochastic or deterministic depending on whether the variable is modelled as stochastic or deterministic, and as design or fixed depending on whether the variable is a design (optimization) variable or a fixed constant. The optimization problem in (5) can further be generalised to the following system reliability-based structural optimization problem:

max_z Z(z) = B(z) − C(z) = B(z) − [ Σ_{i=1}^{mD} CIi Vi(z) + Σ_{i=1}^{mP} Cfi Φ(−βi(z)) ]
subject to  βS(z) ≥ βmin
            BI,i(z) ≥ 0,       i = 1, . . . , mI
            BE,i(z) = 0,       i = 1, . . . , mE
            zi^l ≤ zi ≤ zi^u,  i = 1, . . . , N    (8)
where βS is the system reliability index. If failure of the structure can be modelled by a series/parallel system then βS can be obtained from:

βS(z) = −Φ^(−1)(Pf(z))    (9)
where Pf(z) is the probability of failure of the system, e.g. obtained by FORM/SORM techniques.
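When the component failure events are combined in a series system, simple bounds on Pf(z), and hence on βS in (9), follow directly from the component reliability indices. A small sketch with assumed component values:

```python
from scipy.stats import norm

betas = [3.8, 4.1, 4.5]                 # component reliability indices (assumed)
p = [norm.cdf(-b) for b in betas]       # component failure probabilities

p_low = max(p)                          # fully dependent components
p_up = min(sum(p), 1.0)                 # independent / union upper bound
beta_up = -norm.ppf(p_low)              # corresponds to the lower Pf
beta_low = -norm.ppf(p_up)
print(f"{beta_low:.2f} <= beta_S <= {beta_up:.2f}")   # cf. equation (9)
```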
3.2 Interactive optimization
In the practical solution of an optimization problem it will often be very relevant to be able to make different types of interaction between the user and the numerical formulation/solution of the design problem. The basic types of interactive optimization which influence the formulation of the optimization problems are, see (Haftka & Kamat 1985) and (Sørensen et al. 1995):
• include (delete) a design (optimization) variable
• include (delete) a constraint
• modify a constraint or
• modify (change) the objective function.
In order to investigate the effect of interactive optimization on the optimality criteria, (9) is restated as the following general optimization problem:

min_z C(z)
subject to  ci(z) = 0,  i = 1, . . . , mE
            ci(z) ≥ 0,  i = mE + 1, . . . , m    (10)
First order necessary conditions that have to be satisfied at a (local) optimum point z* are given by the Kuhn-Tucker conditions. If the optimization process has almost converged, a good guess on the optimal design is available. A modification of the optimization problem is then specified by the user. In (Sørensen et al. 1995) the details are described. Figure 2.2 illustrates the data flow in interactive structural optimization. The modules used are:
• User interface
• OPT: general optimization algorithm
• REL: module for reliability analysis, e.g. FORM, incl. optimization
• FEA: finite element program module
• DSA: module for calculating design sensitivity coefficients.

3.3 Generalization: include inspection and repair costs
The basic decision problems considered in section 2 can, as mentioned, be generalized to be used in reliability-based experiment and inspection planning, see figure 2.3. If e models the inspection times and qualities, and d models the repair decision given the uncertain inspection result S, the optimization problem can be written:

max_{e,d} Z(e, d) = B0 − {CIN^0(e) + CR^0(e, d) PR(e, d) + Cf^0 PF(e, d)}    (11)

where B0 models the benefits, CIN^0 models the inspection costs, CR^0 models the repair costs, PR is the probability of repair and PF is the probability of failure, both obtained using stochastic models for S and X.
Figure 2.2 Data flow in interactive optimization, from (Sørensen et al. 1995).
Figure 2.3 Decisions with given information.
(11) can be further generalised if the total expected costs are divided into construction, inspection, repair and failure costs, and a constraint related to a maximum annual (or accumulated) failure probability PF^max is added. If the inspections performed at times T1, T2, . . . , TN are part of e, the optimization problem can be written

max_{e,d} Z(e, d) = B(e, d) − {CI(e, d) + CIN(e, d) + CR(e, d) + CF(z, e)}
subject to  ei^l ≤ ei ≤ ei^u,     i = 1, . . . , N
            PF,t(e, d) ≤ PF^max,  t = 1, 2, . . . , TL    (12)
where B is the expected benefits, CI is the initial costs, CIN is the expected inspection costs, CR is the expected costs of repair and CF is the expected failure costs. The annual probability of failure in year t is PF,t. The N inspections are assumed to be performed at times 0 ≤ T1 ≤ T2 ≤ · · · ≤ TN ≤ TL.
The total capitalised benefits are written

B(e, d) = Σ_{i=1}^{N} Bi (1 − PF(Ti)) 1/(1 + r)^Ti    (13)
The ith term represents the capitalized benefits in year i given that failure has not occurred earlier, Bi is the benefits in year i, PF(Ti) is the probability of failure in the time interval [0, Ti] and r is the real rate of interest. The total capitalised expected inspection costs are written

CIN(e, d) = Σ_{i=1}^{N} CIN,i(e) (1 − PF(Ti)) 1/(1 + r)^Ti    (14)
The ith term represents the capitalized inspection costs at the ith inspection when failure has not occurred earlier, CIN,i is the inspection cost of the ith inspection, PF(Ti) is the probability of failure in the time interval [0, Ti] and r is the real rate of interest. The total capitalised expected repair costs are

CR(e, d) = Σ_{i=1}^{N} CR,i PRi(e, d) 1/(1 + r)^Ti    (15)
CR,i is the cost of a repair at the ith inspection and PRi is the probability of performing a repair after the ith inspection when failure has not occurred earlier and no earlier repair has been performed. The total capitalised expected costs due to failure are estimated from

CF(e, d) = Σ_{t=1}^{TL} CF(t) PF,t PCOL|FAT 1/(1 + r)^t    (16)
where CF(t) is the cost of failure at time t. PCOL|FAT is the conditional probability of collapse of the structure given failure of the considered component.
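The four capitalized terms (13)–(16) can be accumulated directly once an inspection plan and the underlying probabilities are given. The sketch below uses assumed annual failure probabilities, repair probabilities and cost figures purely for illustration.

```python
import numpy as np

r, T_L = 0.05, 20                           # interest rate, service life [years]
T = np.array([5.0, 10.0, 15.0])             # inspection times T_i
B_i, C_insp, C_rep = 1.0e4, 500.0, 2.0e4    # benefit, inspection, repair (assumed)
C_fail, P_col_fat = 5.0e5, 0.1              # failure cost, P(collapse | failure)

years = np.arange(1, T_L + 1)
pf_annual = np.full(T_L, 1e-4)              # annual failure probabilities P_F,t
PF = lambda t: 1.0 - np.prod(1.0 - pf_annual[: int(t)])   # P_F in [0, t]
P_rep = np.array([0.05, 0.10, 0.20])        # repair probabilities P_Ri (assumed)

B   = sum(B_i * (1 - PF(Ti)) / (1 + r) ** Ti for Ti in T)        # (13)
CIN = sum(C_insp * (1 - PF(Ti)) / (1 + r) ** Ti for Ti in T)     # (14)
CR  = float(np.sum(C_rep * P_rep / (1 + r) ** T))                # (15)
CF  = float(np.sum(C_fail * pf_annual * P_col_fat / (1 + r) ** years))  # (16)
print(f"B = {B:,.0f}, C_IN = {CIN:,.0f}, C_R = {CR:,.0f}, C_F = {CF:,.0f}")
```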
3.4 Generalization: include systematic reconstruction
The following assumptions are made: (1) the structure is assumed to be systematically rebuilt in case of failure, (2) only initial costs CI(z) and direct failure costs CF are included, (3) the benefits per year are b and (4) failure events are assumed to be modeled by a Poisson process with rate λ. The probability of failure is PF(z). The optimal design is determined from the following optimization problem, see e.g. (Rackwitz 2001):

max_z Z(z) = b/(rC0) − CI(z)/C0 − (CI(z)/C0 + CF/C0) · λPF(z)/(r + λPF(z))
subject to  zi^l ≤ zi ≤ zi^u,  i = 1, . . . , N
            PF(z) ≤ PF^max    (17)
where z^l and z^u are lower and upper bounds on the design variables. PF^max is the maximum acceptable probability of failure, e.g. with a reference time of one year. This type of constraint is typically required by regulators. The optimal design z* is determined by solution of (17). If the constraint on the maximum acceptable probability of failure is omitted, then the corresponding value PF(z*) can be considered as the optimal probability of failure related to the failure event and the actual cost-benefit ratios used. The failure rate λ and the probability of failure can be estimated for the considered failure event if a limit state equation g(X1, . . . , Xn, z) and a stochastic model for the stochastic variables (X1, . . . , Xn) are established. If more than one failure event is critical, then a series-parallel system model of the relevant failure modes can be used.
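A minimal numerical sketch of the renewal formulation (17) is given below: Z(z) is evaluated on a grid of designs and the best design respecting PF(z) ≤ PF^max is selected. The normalized cost model, the failure-probability model and PF^max are assumptions chosen for illustration.

```python
import numpy as np
from scipy.stats import norm

b, r, lam = 0.08, 0.05, 1.0           # benefit rate/C0, interest rate, event rate
C0, CF, PF_max = 1.0, 10.0, 1e-4      # reference cost, failure cost/C0, limit

CI = lambda z: 0.8 + 0.2 * z          # normalized initial cost (assumed linear)
PF = lambda z: norm.cdf(-(2.0 + 2.0 * z))   # failure probability per event (assumed)

def Z(z):                             # objective of equation (17)
    return (b / (r * C0) - CI(z) / C0
            - (CI(z) / C0 + CF / C0) * lam * PF(z) / (r + lam * PF(z)))

zs = np.linspace(0.0, 3.0, 301)
feasible = [z for z in zs if PF(z) <= PF_max]
z_star = max(feasible, key=Z)
print(f"z* = {z_star:.2f}, Z(z*) = {Z(z_star):.3f}, PF(z*) = {PF(z_star):.1e}")
```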
3.5 Generalisation: optimal re-assessment of existing structures
In the re-assessment of structures and engineering systems, engineers are often involved in decisions on the repair and/or strengthening of an existing system/structure where some statistical information is available. In the following it is shown how Bayesian statistical decision theory can be used for making such decisions in a rational way. The theoretical basis is described in detail in e.g. (Raiffa & Schlaifer 1961) and (Benjamin & Cornell 1970). It is assumed that the decision is taken on behalf of the owner of the structure, and that a cost-benefit approach is used with constraints related to minimum safety requirements specified by national/international codes of practice and/or the society. The same principles can be applied in the case of other decision makers. It is noted that the optimal solution from the cost-benefit problem should be used as one input to the decision process. The decision problem on possible repair and/or strengthening in a re-assessment situation is illustrated in figure 2.4. It is assumed that the design variables in the initial design situation are denoted z. After the initial design, information about the uncertain variables influencing the behaviour of the structure is collected, and is denoted S. Often this information will be collected in connection with the re-assessment. The decision variables at the time TR of re-assessment are denoted d. The uncertain variables describing the state of nature are denoted X.
Figure 2.4 Decisions in re-assessment with given information. The vertical line illustrates the time of re-assessment.
The decision is taken before the realization by nature of the stochastic variables is known. Besides the decision variables d and the uncertain variables X, also a cost-benefit function Z(z, S, d, X) is introduced in the decision model. When a decision d in the re-assessment problem has been taken and a realisation x of the stochastic variables appears, then the value obtained is denoted Z(z, S, d, x) and represents a numerical measure of the consequences of the re-assessment decision and the realisation obtained. Z(z, S, d, x) is assumed to be measured in monetary units and represents in general costs minus benefits, if relevant. Illustrative examples of the decision variables z and d, and the stochastic variables S and X are:
• z: design parameters, e.g. geometrical parameters of a structural system (cross-sectional dimensions and topology). The design parameters are already chosen at the initial design, and are therefore fixed at the time of re-assessment.
• S: information collected, e.g. concrete compression strengths obtained from samples taken from the structure, measured wave heights, non-failure of the structure, no-find of defects by an inspection.
• d: design parameters in the re-assessment, e.g. geometrical parameters of a repair (cross-sectional dimensions and topology).
• X: stochastic variables, representing e.g. loads and material strengths.
In some decision problems it can be difficult to specify the cost function, especially if consequences not directly measurable in money are involved, for example personal preferences. However, as described in (von Neumann & Morgenstern 1943), rational decisions can be taken if the cost function is made such that the expected value of the cost function is consistent with the personal preferences. If the information S is related to the stochastic variables X, then a predictive density function (updated density function) fX(x|s) of the stochastic variables X taking into account a realization s can be obtained using Bayesian statistical theory, see (Lindley 1976) and (Aitchison & Dunsmore 1975). If the decision-maker wants to act rationally, taking into account the information s, the strategy d which maximizes the expected cost-benefits has to be chosen from

Z* = max_d E_{X|s}[Z(z, s, d, X)]    (18)
E_{X|s}[·] is the expectation with respect to the predictive (updated) density function fX(x|s). In the following the initial design variables z are not written explicitly. Z* is the maximum cost-benefit corresponding to the optimal decision. If the benefits are not dependent on the stochastic variables then the optimization problem can be written:

Z* = max_d Z(d) = max_d {B(d) − E_{X|s}[C(s, d, X)]}    (19)
where the future benefits are denoted B and the future costs are denoted C. Both benefits and costs should be discounted to the time of the re-assessment. The optimization formulation can also be generalised to include decision variables related to experiment planning.
In the following, time-invariant reliability problems are considered. It is assumed that there is no systematic reconstruction of the structure in case of failure and that discounting can be ignored. The total expected cost-benefits can then be written

Z(d) = B(d) − C(d) = B(d) − CS(d) − Cf Pf(d)    (20)
where CS(d) and Cf model the costs due to repair/strengthening after the re-assessment and due to failure, B(d) models the benefits and Pf(d) is the probability of failure updated with the information s. Failure/no failure should here be considered in a general sense as satisfactory/not satisfactory behaviour. In the case where the information S models (one or more) events modelled by an event margin {h(d, X) ≤ 0}, and failure is modelled by a limit state function g(d, X), the updated probability of failure is obtained from:

Pf(d) = P(g(d, X) ≤ 0 | h(d, X) ≤ 0)    (21)
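The conditional probability in (21) can be estimated directly by simulation: sample the stochastic variables, keep the realizations satisfying the observed event {h ≤ 0}, and evaluate the failure frequency among them. The limit state, event margin and distributions below are hypothetical stand-ins used only to illustrate the updating effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

R = rng.normal(50.0, 5.0, n)            # resistance (assumed)
S = rng.normal(30.0, 8.0, n)            # load (assumed)
d = 1.0                                 # design value after re-assessment

g = d * R - S                           # limit state function g(d, X)
h = 48.0 - R                            # event margin, e.g. proof-load survival
observed = h <= 0                       # realizations consistent with {h <= 0}

Pf_prior = np.mean(g <= 0)
Pf_updated = np.mean(g[observed] <= 0)  # Monte Carlo estimate of equation (21)
print(f"prior   Pf = {Pf_prior:.2e}")
print(f"updated Pf = {Pf_updated:.2e}")
```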
In the case where the information S is related to measurements of the stochastic variables X, the (updated) density function fX(x|s) is used. The optimal design d* is obtained from the optimization problem

max_d Z(d) = max_d {B(d) − CS(d) − Cf Pf(d)}    (22)
(22) can equivalently be formulated as a reliability-constrained optimization problem

max_d B(d) − CS(d)
subject to  β(d) ≥ βmin    (23)
where the generalised reliability index is defined by β(d) = −Φ^(−1)(Pf(d)). βmin is a code-specified minimum acceptable reliability level related to annual or lifetime reference time intervals. Other design constraints can be added to (23) if needed. (22) and (23) give the same optimal decision if βmin is chosen as the reliability level corresponding to the optimal solution d* of (22): βmin = β(d*), i.e. there is a close connection between βmin and Cf/CS. This can easily be seen by considering the Kuhn-Tucker optimality conditions for (22) and (23). The basic decision problems considered above can be generalized to be used in reliability-based experiment and inspection planning as described in section 3.3.
3.6 Numerical solution of decision problems
Numerical solution of the decision problems requires the solution of one or more optimization problems. Since the optimization problems formulated are generally continuous with continuous derivatives, sequential quadratic optimization algorithms such as (Schittkowski 1986) and (Powell 1982) can be expected to be the most effective, see (Gill et al. 1981). These algorithms require that values of the objective function and the constraints be evaluated together with gradients with respect to the decision variables. The probabilities in the optimization problems can be evaluated using FORM techniques, see (Madsen et al. 1986). Associated with the FORM estimates of the
probabilities, sensitivities with respect to parameters are also obtained. If the decision problem includes analysis of a structural system, the finite element method in combination with sensitivity analyses can be used. The sensitivity analyses can be based on the direct or adjoint load method in combination with the discrete quasi-analytical method or with the continuum method.
4 Reliability analysis and sensitivity analysis

As mentioned in the previous section the evaluation of the probability of failure events is an integral part of decision analysis and reliability-based structural optimization problems. Further, the decision analysis involves the evaluation of expected values of the costs. Both the relevant failure probabilities and expected values can be determined using modern reliability analysis techniques. If all variables in the reliability problem can be modelled as time-invariant random variables, the failure probability PF(z) for a given limit state equation g(x, z) can be evaluated as

PF(z) = P(g(X, z) ≤ 0) = ∫_{g(x,z)≤0} fX(x, z) dx    (24)
where fX(x, z) is the joint density function of the stochastic variables X. The integral in (24) plays a central role in the reliability analysis and has therefore been devoted special attention over the last decades. As the integral in general has no analytical solution, it is easily realised that its solution or numerical approximation becomes a major task for integral dimensions larger than, say, 6 and for small probabilities. Sufficiently accurate approximations have been developed which are based on asymptotic integral expansions. These FORM/SORM methods are standard in reliability analysis and commercial software, see e.g. (Madsen et al. 1986). Also simulation methods can in many cases be very effective alternatives to FORM/SORM methods. By FORM analysis the failure surface is approximated by its tangent at the design point. On the basis of the linearised failure surface the probability of failure can be approximated by, see (9):

PF(z) ≈ Φ(−β(z))    (25)
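A compact FORM sketch of the Hasofer–Lind/Rackwitz–Fiessler type iteration is shown below for independent Normal variables; it locates the design point in standard normal space and returns β together with the first order estimate (25). The limit state and its parameters are assumptions made for illustration only.

```python
import numpy as np
from scipy.stats import norm

mu = np.array([50.0, 20.0])      # means of (R, S) (assumed)
sig = np.array([5.0, 5.0])       # standard deviations of (R, S) (assumed)
z = 1.0                          # design variable

g = lambda x: z * x[0] - x[1]                  # limit state g = z*R - S
grad_g = lambda x: np.array([z, -1.0])         # gradient with respect to (R, S)

u = np.zeros(2)                  # start at the origin of u-space
for _ in range(50):              # HL-RF iteration
    x = mu + sig * u             # transform to the physical space
    gu = grad_g(x) * sig         # chain rule: gradient in u-space
    fac = (gu @ u - g(x)) / (gu @ gu)
    u_new = fac * gu             # projection onto the linearized surface
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)
print(f"beta = {beta:.3f}, PF ~ Phi(-beta) = {norm.cdf(-beta):.2e}")
```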
Most optimization algorithms for solution of the reliability-based optimization problems formulated in section 3 require that the sensitivities with respect to objective functions and reliability estimates can be determined efficiently. By a FORM analysis these derivatives can be computed numerically by the finite difference method. However, it is more efficient to use a semi-analytical expression. For an element analysis the derivative of the first order reliability index β with respect to a parameter p, which may be a design variable z, is

∂β/∂p = (1/|∇u g(u*; p)|) · ∂g(u*; p)/∂p    (26)
If a gradient-based algorithm is used in order to locate the design point the gradient vector ∇u g(u∗ ; p) is already available and it is only necessary to determine the derivative
of the failure function with respect to the parameter p. The derivative of the first order estimate of the probability of failure with respect to p is

∂Pf/∂p = −ϕ(−β) ∂β/∂p    (27)
where ϕ denotes the density function of a standard normally distributed variable. Also for series and parallel systems, semi-analytical expressions for the derivatives of the first order reliability index can be derived. The following optimization problem, corresponding to the general optimization problems defined in section 3, is considered:

min_z CI(z, p) = C0(z, p) + Σ_j Cj(z, p) Pj(z, p)
subject to  Pf(z, p) ≤ Pf^max    (28)

where z are decision/design variables and p are quantities defining the costs and/or the stochastic model. Pj denotes a probability (failure or repair), Pf denotes a failure probability and Pf^max is the maximum accepted failure probability. The sensitivity of the total expected costs C with respect to the elements in p is obtained from, see (Haftka & Kamat 1985) and (Enevoldsen 1994),

dC/dpi = Σ_j Cj dPj/dpi + λ dPf/dpi    (29)

where λ is the Lagrangian multiplier associated with the constraint in (28). The sensitivity of the decision variables z with respect to pi can be calculated using the formulas given below, which are obtained from a sensitivity analysis of the Kuhn-Tucker conditions related to the optimization problem defined in (28). dz/dpi is obtained from

[ A    B ] [ dz/dpi ]   [ C ]
[ B^T  0 ] [ dλ/dpi ] = [ 0 ]    (30)
The elements in the matrix A and the vectors B and C are

Ars = ∂²C0/(∂zr ∂zs) + Σ_j [ ∂²Cj/(∂zr ∂zs) Pj + 2 (∂Cj/∂zr)(∂Pj/∂zs) + Cj ∂²Pj/(∂zr ∂zs) ] + λ ∂²Pf/(∂zr ∂zs)    (31)

Br = ∂Pf/∂zr    (32)

Cr = −∂²C0/(∂zr ∂pi) − Σ_j [ ∂²Cj/(∂zr ∂pi) + (∂Cj/∂zr)(∂Pj/∂pi) ]    (33)
It is seen that the sensitivity of the objective function (the total expected cost) with respect to some parameters can be determined on the basis of the first order sensitivity
coefficients of the probabilities and of the cost functions, see (29). However, calculation of the sensitivities of the decision parameters is much more complicated because it involves estimation of the second order sensitivity coefficients of the probabilities, see e.g. (Enevoldsen 1994).
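Once the matrix A and the vectors B and C of (31)–(33) have been assembled, the sensitivities dz/dpi and dλ/dpi follow from a single linear solve of the block system (30). The numerical values below are assumed for illustration; in practice the entries come from the FORM results and the cost model.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])            # A_rs, equation (31) (assumed values)
B = np.array([0.05, 0.02])            # B_r = dPf/dz_r, equation (32)
C = np.array([-0.5, -0.2])            # C_r, equation (33)

K = np.block([[A, B[:, None]],
              [B[None, :], np.zeros((1, 1))]])
rhs = np.concatenate([C, [0.0]])

sol = np.linalg.solve(K, rhs)         # block system of equation (30)
dz_dp, dlam_dp = sol[:-1], sol[-1]
print("dz/dp_i      =", dz_dp)
print("dlambda/dp_i =", dlam_dp)
```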
5 Examples

5.1 Example 1 – Simple cost-benefit analysis

In this section a simple, introductory example is presented. A structural component is considered. It is assumed to have strength R and load S, which for simplicity are both Normal distributed:
Load S: expected value µS = 20 kN and Coefficient of Variation = 25%
Strength R: expected value µR = 50 kN/m² and Coefficient of Variation = 10%
The design variable z represents the cross-sectional area. The limit state equation is written:

g = zR − S    (34)
In the initial design situation z = z0 = 1 m² is chosen. The corresponding reliability index is β = (1 · 50 − 20)/√((1 · 5)² + 5²) = 4.24 and the probability of failure is Pf = Φ(−4.24) = 1.1 · 10⁻⁵. The benefits and cost of failure are B0 = 10 and CF = 10⁷. New information has been collected. It consists of n = 5 tests with samples of similar components with the following results: 51, 53, 56, 57 and 58 kN/m². The mean value of the test results is X̄ = 55 kN/m². For updating, Bayesian statistics is used. It is assumed that the strength has a known standard deviation σR = 4 kN/m². The expected value is assumed to have a prior which is Normal distributed with expected value µ0 = 50 kN/m² and standard deviation σ0 = 3 kN/m². It is noted that these assumptions are consistent with the initial model for the strength (µR = 50 kN/m² and COV = 10%). The (updated) posterior for the expected value becomes Normal distributed with expected value of µR equal to µ'' = (n X̄ σ0² + µ0 σR²)/(n σ0² + σR²) = 53.7 kN/m² and standard deviation of µR equal to σ'' = √((σ0² σR²)/(n σ0² + σR²)) = 1.5 kN/m². The predictive (updated) distribution for the strength becomes Normal distributed with expected value of R equal to µ = µ'' = 53.7 kN/m² and standard deviation of R equal to σ = √(σR² + (σ0² σR²)/(n σ0² + σR²)) = 4.3 kN/m². The updated reliability index and probability of failure become β = (1 · 53.7 − 20)/√((1 · 4.3)² + 5²) = 5.12 and Pf = 1.56 · 10⁻⁷. At time TR the following two alternative re-design situations are considered:
1) Continue with the existing design. The cost-benefits become: Z = B0 − CF Pf = 10 − 10⁷ · 1.56 · 10⁻⁷ = 8.44
Figure 2.5 Cost-benefit as function of design variable z.
2) Use a modified design with increased benefits. The design variable is chosen to be z = 1.1 m². The benefits are assumed to be changed to B(z) = B0 + (z − z0) · 0.5. The cost of the design change is assumed to be CI(z) = 1 + (z − z0) · 2. The updated reliability index and probability of failure become β = (1.1 · 53.7 − 20)/√((1.1 · 4.3)² + 5²) = 5.68 and Pf = 6.60 · 10⁻⁹. The cost-benefits become:
Z = B0 + (z − z0) · 0.5 − (1 + (z − z0) · 2) − CF Pf = 10 + (1.1 − 1) · 0.5 − (1 + (1.1 − 1) · 2) − 10⁷ · 6.60 · 10⁻⁹ = 8.78
Since the cost-benefits are larger for the modified design than for continuing with the existing design, the modified design should be chosen. In figure 2.5 the cost-benefits are shown as a function of z. It is seen that the optimal decision is to choose a modified design with z = 1.12. It is noted that the known information could also be in the form of an event, e.g. an inspection, and that there could be many more decision alternatives.

5.2 Example 2 – Repair decision for concrete bridge

A road bridge with concrete columns is considered. The total expected lifetime is assumed to be TL. The concrete columns are exposed to chloride ingress due to the spread of de-icing salts on and below the bridge. There are some indications that
chloride has penetrated the concrete and that corrosion of the reinforcement could be expected within the next few years. Therefore a re-assessment is performed at time TR as illustrated in figure 2.4. Chloride ingress is one of the most common destructive mechanisms for this type of structure. The most typical type of chloride-initiated corrosion is pitting corrosion, which may locally cause a substantial reduction of the cross-sectional area and cause maintenance and repair actions which can be very costly. Further, the corrosion may make the reinforcement brittle, implying that failure of the structure might occur without warning. The probabilistic analysis of the time to initiation of corrosion in concrete structures is in this example based on models described in (Engelund & Sørensen 1998). At the time of re-assessment it is assumed that chloride profiles are taken from representative parts of the concrete columns. The estimation of the time to initiation of corrosion is based on these chloride profiles combined with prior knowledge. A chloride profile consists of a number of measurements of the chloride concentration as a function of the distance to the surface, y. Using the chloride profiles, the surface concentration and the diffusion coefficient can be estimated. It is assumed that diffusion (transportation) of chlorides into the concrete can be described by a one-dimensional diffusion model where C(y, t) is the content of chloride at time t in the depth y, D(y, t) is the coefficient of diffusion (transportation) at time t in the depth y, CS is the surface concentration and Cinit is the initial chloride concentration. It is assumed that the diffusion coefficients can be written:
D(y, t) = D0(y) (t0/t)^a    (35)

where D0(y) is the reference diffusion coefficient at the reference time t0 and a is an age coefficient (0 < a < 1). Models for the diffusion coefficient can include different diffusion coefficients at different depths. Based on n measurements in one chloride profile, the surface concentration cS, the coefficient of diffusion D0 and the age coefficient a can be estimated using the Maximum Likelihood method, see (Engelund & Sørensen 1998). Next, using Bayesian statistics, a predictive (updated) distribution for the stochastic variables X can be obtained. On the basis of the available information described above, the decision maker has to decide which repair/maintenance strategy should be applied. As an example, three different strategies are described below based on the models in (Engelund & Sørensen 1998). All the costs given below are in some monetary unit. It is assumed that the repair is carried out before the probability of any critical event, such as total collapse of the bridge, becomes significant. Therefore, in the following the optimization problem is solved without any restriction on the probability of some critical event.

Strategy 1: consists of a cathodic protection. This strategy is implemented when corrosion has been initiated at some point. In order to determine when corrosion is initiated, inspections are carried out each year, beginning five years before the expected time of initiation of corrosion. The cost of these inspections is 25 each year except for the last year before expected initiation of corrosion, where the cost is 100. The cost of
the cathodic protection is 1000 and the cost of running the cathodic protection is 20 each year.

Strategy 2: is implemented when 5% of the surface of the bridge columns shows minor signs of corrosion, e.g. small cracks and discolouring of the surface. The repair consists of repairing the minor damages and applying a cathodic protection. As for strategy 1, the costs related to this strategy are the costs of the repair and the costs of an extended inspection programme which starts three years before the expected time of repair. However, by this strategy, also the costs related to running the cathodic protection must be taken into account. The cost of repair is 2000, the cost of inspection for the three years before the repair is 100 each year and the cost of running the cathodic protection is 30 each year.

Strategy 3: repair is performed as a complete exchange of concrete and reinforcement in the corroded areas. The strategy is implemented when 30% of the surface of the bridge columns shows distinct signs of corrosion, such as cracking and spalling of the cover. The costs related to this strategy are the cost of the repair and the cost of an extended inspection programme which starts three years before the expected time of repair. The cost of repair is 3000 and the cost of inspection in the three years before repair is 200 each year. Traffic restrictions in the year of repair of the bridge decrease the benefits by 1000.

The total expected costs for maintenance/repair are determined from

CS(z1, z2, z3) = Σ_{i=TR}^{TL} Pi(z) Ci(z)    (36)
where z = (z1, z2, z3) denotes the three repair/maintenance options, Pi(z) is the probability that repair/maintenance is performed in year i and Ci(z) is the total cost of the repair strategy if the repair is performed in year i:

Ci(z) = Σ_{j=TR}^{TL} Ci,j(z) 1/(1 + r)^(j−TR)    (37)
Ci,j(z) is the repair/maintenance cost in year j if the repair is performed in year i. These costs can be found in the descriptions of the repair strategies. The costs are discounted to the time of re-assessment TR using the real rate of interest r. The expected benefits in the remaining lifetime are determined from

B(z) = Σ_{i=TR}^{TL} B0 1/(1 + r)^(i−TR) − Σ_{i=TR}^{TL} Pi(z) Bi(z)    (38)
where

Bi(z) = Σ_{j=TR}^{TL} Bi,j(z) 1/(1 + r)^(j−TR)    (39)
B0 is the basic annual benefit from use of the bridge and Bi,j (z) is the loss of benefits in year j due to repair in year i, e.g. due to traffic restrictions.
The optimal repair strategy is obtained by solving the optimization problem

max_z B(z) − CS(z)    (40)
The expected costs are determined using the predictive stochastic model for the surface concentration cS, the coefficient of diffusion D0 and the age coefficient a obtained using the available information.
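Numerically, (36)–(40) reduce to accumulating discounted costs and benefits for each strategy, weighted by the probability that the repair falls in a given year, and selecting the strategy with the largest B(z) − CS(z). The repair-year probabilities, discounted strategy costs and benefit losses in the sketch below are assumed values, not those of this example.

```python
import numpy as np

r, T_R, T_L = 0.05, 0, 30
years = np.arange(T_R, T_L + 1)
B0 = 500.0                                         # basic annual benefit (assumed)
disc = 1.0 / (1.0 + r) ** (years - T_R)

# assumed probability P_i that repair is performed in year i, per strategy
P_repair = {
    "strategy 1": np.where(years == 8, 0.6, 0.0) + np.where(years == 12, 0.4, 0.0),
    "strategy 2": np.where(years == 14, 1.0, 0.0),
    "strategy 3": np.where(years == 20, 1.0, 0.0),
}
C_i = {"strategy 1": 1600.0, "strategy 2": 2600.0, "strategy 3": 3600.0}   # discounted C_i(z), assumed
B_loss = {"strategy 1": 0.0, "strategy 2": 0.0, "strategy 3": 1000.0}      # benefit loss B_i(z), assumed

def objective(name):
    C_S = float(np.sum(P_repair[name] * C_i[name]))                        # equation (36)
    B = float(np.sum(B0 * disc) - np.sum(P_repair[name] * B_loss[name]))   # equation (38)
    return B - C_S                                                         # equation (40)

for name in P_repair:
    print(f"{name}: B - C_S = {objective(name):,.0f}")
print("optimal strategy:", max(P_repair, key=objective))
```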
5.3 Example 3 – Optimal design of offshore wind turbines
Wind turbines for electricity production are increasing drastically these years, both in production capability and in size. Offshore wind turbines with an electricity production of 2–5 MW are now being produced. The main failure modes are fatigue failure of wings, hub, shaft and main tower, local buckling of the main tower, and failure of the foundation. This example considers reliability-based optimization of the tower and foundation, see (Sørensen & Tarp-Johansen 2005a) and (Sørensen & Tarp-Johansen 2005b).

5.3.1 Formulation of reliability-based optimization problems for wind turbines

Reliability-based optimization problems can be formulated in different ways, e.g. with or without systematic reconstruction. In this example it is assumed that the control system is performing as expected, a single wind turbine is considered and the wind turbine is systematically reconstructed in case of failure. It is noted that the probability of loss of human lives is assumed to be negligible. The main design variables are denoted z = (z1, . . . , zN), e.g. diameter and thickness of the tower and the main dimensions of the wings. The initial (building) costs are CI(z), the direct failure costs are CF, the benefits per year are b and the real rate of interest is γ. Failure events are assumed to be modelled by a Poisson process with rate λ. The probability of failure is PF(z). The optimal design can thus be determined from the following optimization problem, see section 3.4:
max_z W(z) = b/(γC0) − CI(z)/C0 − (CI(z)/C0 + CF/C0) · λPF(z)/γ
subject to  zi^l ≤ zi ≤ zi^u,  i = 1, . . . , N
            PF(z) ≤ PF^max    (41)

where z^l and z^u are lower and upper bounds on the design variables. C0 is the reference initial cost corresponding to a reference design z0. PF^max is the maximum acceptable probability of failure, e.g. with a reference time of one year. This type of constraint is typically required by regulators. The optimal design z* is determined by solution of (41). If the constraint on the maximum acceptable probability of failure is omitted, then the corresponding value PF(z*) can be considered as the optimal probability of failure related to the failure event and the actual cost-benefit ratios used. The failure rate λ and the probability of failure can be estimated for the considered failure event, if a limit state equation, g(X1, . . . , Xn, z), and a stochastic model for the
Figure 2.6 Design variables in wind turbine example (not to scale).
stochastic variables, (X1, . . . , Xn), are established. If more than one failure event is critical, then a series-parallel system model of the relevant failure modes can be used. An offshore 2 MW wind turbine with monopile foundation is considered, see figure 2.6. The wind turbine tower has height h = 63 m and a diameter which increases linearly from D at the bottom to DT at the top. The tower is divided into three sections, each with height h/3 and each with the same thickness: t1 in the top section, t2 in the middle and t3 in the bottom section. The diameter and thickness of the monopile are constant: DP and tP. Tower and monopile are made of structural steel. The distance from the bottom of the tower to the water surface is hw = 7 m and the distance from the water surface to the sea bed (the water depth) is d = 9 m. Wind and wave loads on the tower itself are neglected. The following failure modes are included: (a) yielding in cross sections in the tower just above and below changes in thickness, (b) local stability in cross sections in the tower just above and below changes in thickness, (c) fatigue in cross sections just above and below changes in thickness, and (d) yielding in the monopile in the cross-section with maximum bending moment. The stochastic model for the extreme loading at the top of the tower is described in (Sørensen & Tarp-Johansen 2005a) and (Sørensen & Tarp-Johansen 2005b). For the failure mode yielding of a cross-section the limit state function is written:

σ = N/A + M/W ≥ Fy    (42)
where the cross-sectional forces in the cross-section are the normal force N, a shear force Q and a bending moment M. Further, A is the cross-sectional area (= πt(D − t)), W is the cross-sectional section modulus and Fy is the yield stress.
The cross-sectional forces are calculated from the stochastic variables HT, MT and NT. The yield stress, Fy, is modelled as a LogNormal variable with coefficient of variation (COV) = 0.05 and characteristic values (5 percentile) equal to 235 MPa and 340 MPa for the tower and the mono-pile, respectively. For the failure mode local buckling of a cross-section the limit state function is written:

σ = N/A + M/W ≥ Fyc    (43)
where the local buckling strength Fyc is estimated by the model in (ISO 19902 2001). The cross-sectional forces are calculated from the stochastic variables HT and MT. The yield stress, Fy, is modelled as for yielding failure. Model uncertainty is introduced through a factor XB multiplied on Fyc. XB is assumed LogNormal distributed with expected value 1 and COV = 0.10. For the failure mode fatigue failure, SN-curves and linear damage accumulation by the Miner rule are used. It is assumed that the SN-curve is bilinear and can be described by:

N = K1 (Δs)^(−m1)   for N ≤ NC    (44)
N = K2 (Δs)^(−m2)   for N > NC    (45)
where Δs is the stress range, N is the number of cycles to failure, K1, m1 are the material parameters for N ≤ NC, K2, m2 are the material parameters for N > NC, and ΔsC is the stress range corresponding to NC. Further, it is assumed that the total number of stress ranges for a given fatigue critical detail can be grouped in nσ groups/bins such that the number of stress ranges in group i is ni per year. In a deterministic design check the design equation can be written:

Σ_{Δsi ≥ ΔsC} ni TF / (K1C Δsi^(−m1)) + Σ_{Δsi < ΔsC} ni TF / (K2C Δsi^(−m2)) ≤ 1    (46)
where Δsi = Mi/z is the stress range in group i, Mi is the bending moment range, z is a design parameter, KiC is the characteristic value of Ki (log KiC equal to the mean of log Ki minus two standard deviations of log Ki), TF = FDF · TL is the fatigue life time, TL is the service life and FDF is the Fatigue Design Factor, which can be considered as a fatigue safety factor. In a reliability analysis the reliability index (or the probability of failure) is calculated using the limit state function associated with (46). This limit state equation can be written:

g = 1 − Σ_{Δsi ≥ ΔsC} ni TL / (K1 Δsi^(−m1)) − Σ_{Δsi < ΔsC} ni TL / (K2 Δsi^(−m2))    (47)
where Δsi = XS Mi/z is the stress range in group i, and XS is a stochastic variable modelling model uncertainty related to the fatigue wind load and to the calculation of the relevant fatigue stresses for a given wind load. XS is assumed LogNormal distributed with
Table 2.1 Stochastic model. D: Deterministic; N: Normal; LN: LogNormal.

Variable    Distribution   Expected value          Standard deviation
Xstress     LN             1                       COVstress = 0.05
Xwind       LN             1                       COVwind = 0.15
TL          D              TF/FDF = 20 years       –
m1          D              3                       –
log K1      N              12.151 + 2 · 0.20       0.20
m2          D              5                       –
log K2      N              15.786 + 2 · 0.25       0.25

log K1 and log K2 are fully correlated.
mean value 1 and COV = √(COVwind² + COVstress²). log Ki is modelled by a Normal distributed stochastic variable according to a specific SN-curve. Representative statistical parameters are shown in Table 2.1. The basic SN-curve used corresponds to the SN 90 curve in (EC 3 2003). The optimal design is determined from the following optimization problem:

max_z W(z) = b/(γC0) − CI(z)/C0 − (CI(z)/C0 + CF/C0) · λPF(z)/γ
subject to  zi^l ≤ zi ≤ zi^u,  i = 1, . . . , N
            PF(z) ≤ PF^max
            ω1(z) ≥ ωL                                            (48)

where PF^max is the maximum acceptable annual probability of failure, ω1 is the lowest natural frequency of the wind turbine structure and ωL is a minimum acceptable eigenfrequency. The probability of failure is estimated by the simple upper bound PF ≈ Σ_{i=1}^{N} Φ(−βi), where βi is the annual reliability index in failure element i of the N failure elements/failure modes. The following design/optimization variables related to the tower and pile model are used: DT is the diameter at the tower top, D is the diameter of the tower at the bottom, t1, t2 and t3 are the thicknesses of the tower sections, DP is the diameter of the monopile, tP is the thickness of the monopile, and HP is the length of the monopile. The initial costs are modelled by:
CI = C0,foundation [1/2 + (1/2)(Vmono/Vmono,0)] + {C0,tower [1/4 + (3/4)(Vtower/Vtower,0)] + CI,blades + CI,powertrain}_turbine + CI,others    (49)
where Vmono,0 and Vtower,0 are reference cross-sectional areas for the mono-pile foundation and the tower, respectively. Thus, the model is a linear model that gives the initial costs for designs that deviate from a given reference. The term CI,others accounts for initial costs connected to external and internal grid connections that are of course independent of the extreme load. Because, in current practice, the design of the blades and the power train are driven by fatigue and operation loads respectively, the dependence of the initial costs of these main parts of the turbine on the extreme load is assumed negligible in this model. The following model is used for the normalised initial costs at the considered site
CI/C0 = (1/6)[1/2 + (1/2)(Vmono/Vmono,0)] + (1/2){(1/3)[1/4 + (3/4)(Vtower/Vtower,0)] + 1/3 + 1/3}_turbine + 1/3    (50)

The ratios appearing in this formula will be site specific. For a far offshore site the grid connection will become a larger part of the total costs. Likewise, the foundation costs will depend on the water depth. For other sites the cost ratios may e.g. be 1/4, 5/12 and 1/3 for the foundation, the turbine, and the other costs, respectively. For the reference turbine Vmono,0 = 25.5 m³ and Vtower,0 = 14.0 m³, which have been derived from the following reference values: h = 63 m, hw = 7 m, DT = 2.43 m, tT = 17 mm, DB = 3.90 m, tB = 29 mm, hP = 41 m, tP = 49.5 mm, and DP = 4.1 m. Thus

CI/C0 = 1/12 + (1/12)(Vmono/23.2 m³) + {1/24 + (1/8)(Vtower/14.0 m³) + 1/3}_turbine + 1/3    (51)
It is noted that out of the total initial costs only a minor part depends on the loads, because the study is restricted to the support structure. For a gravity foundation the normalised failure costs are estimated to be:

CF,foundation / C0,foundation = 1/6    (52)
Compared to this, the failure costs for the turbine are negligible. The turbine failure costs could be virtually zero if one simply left the turbine at the bottom of the sea like a shipwreck, a solution that would hardly be accepted by environmentalists. It is noted that, at least in Denmark, it is for aesthetic reasons not accepted to rebuild the turbine a little away from the collapsed turbine, whereby the failure costs could otherwise practically vanish. Indeed, Danish building licences demand that a new turbine which replaces a collapsed turbine must be situated at the exact same spot; that is, the space cannot even be left unused. Assuming that the damage to the grid is small, the failure costs become:

CF/C0 = 1/36    (53)
For the considered site and turbine the following assumptions are made: the site climate is given (A = 10.8, k = 2.4), the rated power is specified (2 MW), and the turbine height and rotor
Table 2.2 Optimal values of design variables, objective function and natural frequency.

γ        0.03      0.05      0.10      0.05      0.05
b/C0     1/8       1/8       1/8       1/10      1/8
CF/C0    1/36      1/36      1/36      1/36      1/360
DT       2.92 m    2.89 m    2.81 m    2.89 m    2.77 m
D        4.00 m    4.00 m    4.00 m    4.00 m    4.00 m
t1       20 mm     20 mm     20 mm     20 mm     20 mm
t2       28 mm     29 mm     25 mm     29 mm     28 mm
t3       35 mm     33 mm     32 mm     33 mm     33 mm
DP       5.41 m    5.40 m    4.93 m    5.40 m    5.31 m
tP       21 mm     20 mm     20 mm     20 mm     20 mm
HP       34.7 m    34.7 m    34.7 m    34.7 m    34.7 m
W        3.264     1.602     0.359     1.102     1.603
ω1       2.71      2.67      2.43      2.67      2.63
diameter are given. The average power is then 1095 kW and, with an assumption of 2% down time, the annual average production may be computed. In the Danish community, subsidising currently ensures that the market price for 1 kWh of wind turbine generated electric power is 0.43 DKK/kWh. From this one should subtract, as a lifetime average, 0.1 DKK/kWh for operation and maintenance expenses. The normalised average benefits per year become approximately

b/C0 = 1/8    (54)
The real rate of interest γ is assumed to be 5% because, as argued, a purely monetary reliability optimization is considered. Assuming a lowest acceptable tower frequency of 0.33 Hz, the frequency constraint becomes ω1 ≥ 2π · 0.33 Hz = 2.07 s⁻¹. The optimal design is determined from the optimization problem (48). The following bounds on the design variables are used: thicknesses between 20 mm and 50 mm, tower diameter between 2 m and 4 m, and monopile diameter between 2 m and 6 m. The optimal values of the design variables are shown in Table 2.2, including cases where the real rate of interest γ is 3%, 5% and 10%, b/C0 is 1/8 and 1/10, and CF/C0 is 1/36 and 1/360. In Table 2.3 reliability indices for the different failure modes and for the system are shown. It is seen that
•  For increasing rate γ the dimensions and the value of the objective function decrease, as expected. Further, the corresponding system reliability indices and eigenfrequencies also decrease slightly.
•  The optimal dimensions are not influenced by a change in the benefits – only the value of the objective function decreases with decreasing benefits per year.
Table 2.3 Optimal values of reliability indices for failure modes and system – first value is for local buckling/yielding and second value is for fatigue.

γ                          0.03        0.05        0.10        0.05        0.05
b/C0                       1/8         1/8         1/8         1/10        1/8
CF/C0                      1/36        1/36        1/36        1/36        1/360
Top section     Top        13.6/4.90   13.4/4.79   12.9/4.52   13.4/4.79   12.7/4.52
                Bottom     5.63/4.25   5.55/4.20   5.37/4.08   5.55/4.20   5.28/4.01
Middle section  Top        7.96/5.62   8.02/5.64   6.91/5.01   8.02/5.64   7.39/5.20
                Bottom     5.12/3.67   5.22/3.72   4.35/3.50   5.22/3.72   4.83/3.54
Bottom section  Top        6.37/4.32   6.08/4.09   5.79/3.95   6.08/4.09   5.95/4.01
                Bottom     5.08/3.60   4.85/3.49   4.66/3.26   4.85/3.49   4.86/3.49
Pile                       7.09/4.09   7.09/3.98   7.09/3.47   7.09/3.98   7.09/3.86
System                     3.41        3.34        3.06        3.34        3.26
•  For decreasing failure costs the optimal dimensions, the objective function, the system reliability level and the eigenfrequency decrease slightly.
•  The system reliability index β is 3.1–3.4.
•  In this example the fatigue failure mode has the smallest reliability indices (largest probabilities of failure).
•  The frequency constraint is not active.
The example shows that the optimal reliability level related to structural failure of offshore wind turbines is of the order of a probability per year equal to 2 · 10−4 − 10−3 corresponding to an annual reliability index equal to 3.1–3.4. This reliability level is significantly lower than for civil engineering structures in general.
6 Conclusions The theoretical basis for reliability-based structural optimization within the framework of Bayesian statistical decision theory is briefly described. Reliability-based cost benefit problems are formulated and exemplified with structural optimization. The basic reliability-based optimization problems are generalized to the following extensions: interactive optimization, inspection and repair costs, systematic reconstruction, re-assessment of existing structures. Illustrative examples are presented including a simple introductory example, a decision problem related to bridge re-assessment and a reliability-based decision problem for offshore wind turbines.
References Aitchison, J. & Dunsmore, I.R. 1975. Statistical Prediction Analysis. Cambridge University Press, Cambridge. Ang, H.-S.A. & Tang, W.H. 1975. Probabilistic concepts in engineering planning and design, Vol. I and II, Wiley.
Benjamin, J.R. & Cornell, C.A. 1970. Probability, Statistics and Decision for Civil Engineers. Mc-Graw-Hill. EN 1993-1-9 2003. Eurocode 3: Design of steel structures – Part 1–9: Fatigue. Enevoldsen, I. & Sørensen, J.D. 1993. Reliability-Based Optimization of Series Systems of Parallel Systems. ASCE Journal of Structural Engineering, Vol. 119, No. 4, pp. 1069–1084. Enevoldsen, I. 1994. Sensitivity Analysis of a Reliability-Based Optimal Solution. ASCE, Journal of Engineering Mechanics. Enevoldsen, I. & Sørensen, J.D. 1994. Reliability-based optimization in structural engineering. Structural Safety, Vol. 15, pp. 169–196. Enevoldsen, S. & Sørensen, J.D. 1998. A Probabilistic Model for Chloride-Ingress and Initiation of Corrosion in Reinforced Concrete Structures. Structural Safety, Vol. 20, pp. 69–89. Engelund, S. 1997. Probabilistic models and computational methods for chloride ingress in concrete. Ph.D. thesis, Department of Building Technology and Structural Engineering, Aalborg University. Faber, M.H., Engelund, S. Sørensen, J.D. & Bloch, A. 1989. Simplified and generic risk based inspection planning. Proc. OMAE2000, New Orleans. Frangopol, D.M. 1985. Sensitivity of reliability-based optimum design. ASCE, Journal of Structural Engineering, Vol. 111, No. 8, pp. 1703–1721. Fujimoto, Y., Itagaki, H., Itoh, S., Asada, H. & Shinozuka, M. 1989. Bayesian Reliability Analysis of Structures with Multiple Components. Proceedings ICOSSAR 89, pp. 2143–2146. Fujita, M., Schall, G. & Rackwitz, R. 1989. Adaptive Reliability Based Inspection Strategies for Structures Subject to Fatigue. Proceedings ICOSSAR 89, pp. 1619–1626. Gill, P.E., Murray, E.W. & Wright, M.H. 1981. Practical Optimization. Academic Press. Haftka, R.T. & Kamat, M.P. 1985. Elements of Structural Optimization. Martinus Nijhoff, The Hague. ISO 19902 2001. Petroleum and natural gas industries – Fixed steel offshore structures. Kroon, I.B. 1994. Decision Theory Applied to Structural Engineering Problems. Ph.D. thesis, Department of Building Technology and Structural Engineering, Aalborg University. Kuschel, N. & Rackwitz, R. 1998. Structural optimization under time-variant reliability constraints. Proc. 8th IFIP WG 7.5 Conf. On Reliability and optimization of structural systems, University of Ann Arbor, pp. 27–38. Lindley, D.V. 1976. Introduction to Probability and Statistics from a Bayesian Viewpoint, Vol. 1 + 2. Cambridge University Press, Cambridge. Madsen, H.O. & Friis-Hansen, P. 1992. A comparison of some algorithms for reliabilitybased structural optimization and sensitivity analysis. Proc. IFIP WG7.5 Workshop, Munich, Springer-Verlag, pp. 443–451. Madsen, H.O. & Sørensen, J.D. 1990. Probability-Based Optimization of Fatigue Design Inspection and Maintenance. Presented at Int. Symp. On Offshore Structures, University of Glasgow. Madsen, H.O., Krenk, S. & Lind, N.C. 1986. Methods of Structural Safety. Prentice-Hall. Murotsu, Y., Kishi, M., Okada, H., Yonezawa, M. & Taguchi, K. 1984. Probabilistically optimum design of frame structures. Proc. 11th IFIP Conf. On System modeling and optimization, Springer-Verlag, pp. 545–554. Madsen, H.O., Sørensen, J.D. & Olesen, R. 1989. Optimal Inspection Planning for Fatigue Damage of Offshore Structures. Proceedings ICOSSAR 89, pp. 2099–2106. Powell, M.J.D. 1982. VMCWD: A FORTRAN Subroutine for Constrained Optimization. Report DAMTP 1982/NA4, Cambridge University, England. Rackwitz, R. 2001. Risk control and optimization for structural facilities. Proc. 20th IFIP TC7 Conf. 
On System modeling and optimization, Trier, Germany. Raiffa, H. & Schlaifer, R. 1961. Applied Statistical Decision Theory. Harvard University Press, Cambridge, Mass.
Rosenblueth, E. & Mendoza, E. 1971. Reliability optimization in isostatic structures. J. Eng. Mech. Div. ASCE, pp. 1625–1642. Schittkowski, K. 1986. NLPQL: A FORTRAN Subroutine Solving Non-Linear Programming Problems. Annals of Operations Research. Skjong, R. 1985. Reliability-Based Optimization of Inspection Strategies. Proc. ICOSSAR’85 Vol. III. pp. 614–618. Streicher, H. & Rackwitz, R. 2002. Structural optimization – a one level approach. Proc. Workshop on Reliability-based Design and Optimization – rbo02, IPPT, Warsaw. Sørensen, J.D. & Thoft-Christensen, P. 1985. Structural optimization with reliability constraints. Proc. 12th IFIP Conf. On System modeling and optimization, Springer-Verlag, pp. 876–885. Sørensen, J.D. & Thoft-Christensen, P. 1988. Inspection Strategies for Concrete Bridges. Proc. IFIP WG 7.5, Springer Verlag, Vol. 48, pp. 325–335. Sørensen, J.D., Thoft-Christensen, P., Siemaszko, A., Cardoso, J.M.B. & Santos, J.L.T. 1995. Interactive reliability-based optimal design. Proc. 6th IFIP WG 7.5 Conf. On Reliability and Optimization of Structural Systems, Chapman & Hall, pp. 249–256. Sørensen, J.D. & Tarp-Johansen, N.J. 2005a. Reliability-based optimization and optimal reliability level of offshore wind turbines. International Journal of Offshore and Polar Engineering (IJOPE), Vol. 15, No. 2, pp. 1–6. Sørensen, J.D. & Tarp-Johansen, N.J. 2005b. Optimal Structural Reliability of Offshore Wind Turbines. CD-rom Proc. ICOSSAR’2005, Rome. Thoft-Christensen, P. & Sørensen, J.D. 1987. Optimal Strategies for Inspection and Repair of Structural Systems. Civil Engineering Systems, Vol. 4, pp. 94–100. von Neumann, J. & Morgenstern, O. 1943. Theory of Games and Economical Behavior. Princeton University Press.
Chapter 3
Reliability analysis and reliability-based design optimization using moment methods
Sang Hoon Lee, Northwestern University, Evanston, IL, USA
Byung Man Kwak Korea Advanced Institute of Science and Technology, Daejeon, Korea
Jae Sung Huh Korea Aerospace Research Institute, Daejeon, Korea
ABSTRACT: Reliability analysis methods using the design of experiments (DOE) are introduced and integrated into a reliability-based design optimization (RBDO) framework with a semi-analytic design sensitivity analysis (DSA) for the reliability measure. A procedure using the full factorial DOE with optimal levels and weights is introduced and named the full factorial moment method (FFMM) for reliability analysis. The probability of failure is calculated using an empirical distribution system and the first four statistical moments of the system performance function calculated from the DOE. To enhance the efficiency of FFMM, a response surface augmented moment method (RSMM) is developed to construct a series of approximate response surfaces approaching that of FFMM. A semi-analytic design sensitivity analysis for the probability of failure is proposed in combination with FFMM and RSMM. It is shown that the proposed methods are accurate and effective, especially when the inputs are non-normal.
1 Introduction One of the fundamental problems in the structural reliability theory is the calculation of the probability of failure which is defined as a multifold probability integral of the joint probability density function of random variables over the domain of structural failure. Because the analytic calculation of this integral is practically impossible, many approximate methods and simulation methods are developed so far (Madsen et al. 1986, Kiureghian 1996, Bjerager 1991). Among the methods, the first order reliability method (FORM) (Hasofer & Lind 1974, Rackwitz & Fiessler 1978) is considered to be one of the most efficient computational methods and over the past three decades, contributions from numerous studies have made FORM the most popular reliability method. The reliability based design approaches (Lee & Kwak 1987–1988, Enevoldsen & Sørensen 1994, Frangopol & Corotis 1996, Tu et al. 1999, Youn et al. 2003) have adopted FORM as their main analysis tool for reliability due to its efficiency. The difficulties in FORM such as numerical difficulty in finding the most probable failure point (MPFP), errors involved in the nonlinear failure surface including the possibility of multiple design points (Kiureghian & Dakessian 1998), and errors caused
by non-normality of variables (Hohenbichler & Rackwitz 1981) are well recognized, and efforts to overcome these have also been made. They include the second order reliability method (SORM) (Fiessler et al. 1979, Breitung 1984, Koyluoglu & Nielsen 1994, Kiureghian et al. 1987), advanced Monte Carlo simulation (MCS) such as importance sampling (Bucher 1988, Mori & Ellingwood 1993, Melchers 1989), directional sampling (Bjerager 1988, Nie & Ellingwood 2005) and the response surface based approaches (Faravelli 1989, Bucher & Bourgund 1990, Rajashekhar & Ellingwood 1993). However, finding the MPFP is still a numerically difficult task in FORM and often the error involved degrades the accuracy of the final results. In this chapter, we investigate another route for structural reliability, the moment method. The moment method calculates the probability of failure by computing the statistical moments of the performance function and fitting the moments with some empirical distribution system such as the Pearson system, Johnson system, Gram-Charlier series, and so on (Johnson et al. 1995). For this purpose, the performance function must be computed for a set of well-designed calculation points, often called quadrature points or designed experimental points. Compared with FORM, the moment method has the advantages that it does not involve the difficulties of searching for the MPFP and that the information of the cumulative distribution function (CDF) is readily available. Relatively few attempts at reliability analysis using the moment method have been reported. For statistical moment estimation, Evans (1972) proposed a quadrature formula which uses 2n² + 1 nodes and weights for a system with n random variables and applied it to tolerance analysis problems. Li & Lumb (1985) adopted Evans' quadrature formula in structural reliability analysis in combination with the Pearson system. Rosenblueth (1981) devised a 2n point estimate method and (Hong 1996) proposed a nonlinear system of equations for point estimates of probability in combination with the Johnson distribution system and Gram-Charlier series. Zhao & Ono (2001) proposed a point estimate method using the Rosenblatt transformation and kn point concentration, where k is the number of quadrature points for each random variable. Taguchi (1978) proposed a design of experiments (DOE) technique which uses three-level experiments for each random variable to calculate the mean and standard deviation of the performance function for tolerance design. Taguchi's method was improved by (D'Errico & Zaino 1988). These methods can treat only normally distributed random variables, and the DOE becomes a 3^n full factorial design when n random variables are under consideration. Actually, the levels and weights proposed by D'Errico & Zaino are equivalent to the nodes and weights in the Gauss-Hermite quadrature formula (Abramowitz & Stegun 1972, Engels 1980). Seo & Kwak (2002) extended D'Errico & Zaino's method to treat non-normal distributions by deriving an explicit formula of three levels and weights for general distributions. In addition to the strong points of the moment method mentioned above, the moment method using DOE has several good aspects. It is very easy and simple to use and does not involve any deterioration of accuracy or additional effort for treating non-normal random variables. However, the common problem of the moment based methods is the numerical efficiency. The methods often tend to become very expensive as the number of random variables increases.
To overcome this shortcoming, (Lee & Kwak 2006) developed a new moment method integrating the response surface method with the 3^n full factorial DOE. (Huh et al. 2006) developed a response surface approximation scheme based on the moment method and applied it to the design study of a precision nano-positioning system.
In this chapter, we present our previous developments of moment methods which utilize DOE for statistical moment estimation and propose a RBDO framework with a semi-analytic design sensitivity analysis in combination with the moment methods. In section 2, the full factorial moment method (FFMM) is introduced with an explanation on the selection of optimal DOE. In section 3, the response surface augmented moment method (RSMM) is introduced and the accuracy and efficiency of RSMM are compared with other methods via several examples. In section 4, a RBDO procedure is proposed using FFMM and RSMM with a semi-analytic design sensitivity analysis. Section 5 provides some discussions on moment methods and the proposed RBDO procedure and concluding remarks.
2 Reliability analysis using full factorial moment method

The probability of failure of a system is defined by a multifold probability integral as

Pf = Pr[g(X) < 0] = ∫_{g(x)<0} fX(x) dx    (1)

where X is the vector of input random variables, g(x) is the system performance function whose negative value indicates the failure state, and fX(x) is the joint probability density function (PDF) of X. In the moment method, the probability of failure is calculated from the PDF of g(X), which is found by fitting the first four statistical moments of the system performance function with empirical distribution systems, as in Equation 2:

Pf = ∫_{g(x)<0} fX(x) dx = ∫_{−∞}^{0} f_{g(X)}(g(x)) dg(x)    (2)
where fg means the PDF of g(X). One essential procedure for this calculation is the accurate estimation of the statistical moments of g(X). Since the empirical distribution systems require high order moments usually up to fourth order, approximate methods using perturbation or Taylor series expansion are not adequate in most cases. In this section, we introduce a moment estimation scheme based on the design of experiments (DOE) which is applicable to general non-normal random variables. And a reliability analysis procedure using the Pearson system is introduced with examples.
2.1 Design of experiments for statistical moment estimation
For a random variable X, the k-th order statistical moments of a one-dimensional function g(X) can be calculated using a quadrature formula with m nodes as follows:

E{g^k} = ∫_{−∞}^{∞} [g(x)]^k fX(x) dx ≈ w1[g(µ + α1σ)]^k + w2[g(µ + α2σ)]^k + ··· + wm[g(µ + αmσ)]^k    (3)
where fX(x) is the probability density function of X, and µ and σ denote the mean and standard deviation of X, respectively. To estimate accurately up to the fourth moment, which is often required by empirical distribution systems such as the Pearson system, at least a three-node quadrature rule is necessary, and the parameters αi and wi can ideally be found by solving the following moment matching equations (Engels 1980):

µk = ∫_{−∞}^{∞} (x − µ)^k fX(x) dx = w1(α1σ)^k + w2(α2σ)^k + ··· + wm(αmσ)^k,   (k = 0, 1, . . . , 2m − 1)    (4)
where µk is the k-th statistical moment of the random variable X, which can be calculated from the PDF of X, and 2m − 1 is the polynomial order of the quadrature rule. By introducing levels, li = µ + αiσ, Equation 4 can be rewritten in terms of li and wi from the point of view of an experimental design. In the equations, there are 2m unknowns and the number of equations is also 2m. Thus, if we provide the values of µk, li and wi are uniquely determined. It is not a simple task to find the solution of Equation 4 algebraically. For the 3-level DOE, (Seo & Kwak 2002) derived a simple explicit formula for li and wi, which will be discussed in section 2.2. For general cases, Equation 4 can be solved with a numerical method, but from the experience of the authors it is found that solving Equation 4 for cases m ≥ 7 is very difficult. When there are n random variables in the system and the same number of levels m is used for each random variable, the DOE becomes an m^n full factorial design from the product quadrature rule, and the first four statistical moments of the system response function g(X) can be calculated as follows:

µg = Σ_{i1=1}^{m} w1·i1 ··· Σ_{in=1}^{m} wn·in g(l1·i1, . . . , ln·in)    (5)

σg = [Σ_{i1=1}^{m} w1·i1 ··· Σ_{in=1}^{m} wn·in (g(l1·i1, . . . , ln·in) − µg)²]^{1/2}    (6)

√β1g = [Σ_{i1=1}^{m} w1·i1 ··· Σ_{in=1}^{m} wn·in (g(l1·i1, . . . , ln·in) − µg)³] / σg³    (7)

β2g = [Σ_{i1=1}^{m} w1·i1 ··· Σ_{in=1}^{m} wn·in (g(l1·i1, . . . , ln·in) − µg)⁴] / σg⁴    (8)
where wi·j and li·j mean the j-th weight and level of the i-th variable, and µg, σg, √β1g and β2g denote the mean, standard deviation, skewness and kurtosis of g(X), respectively. If we know a priori that g(X) shows more nonlinear dependence on some of the random variables, we can use more levels for those variables. In general, we can use a different number of levels for each random variable, e.g. m1, . . . , mn instead of m in Equations
5–8, and the total number of experiments then becomes m1 · m2 ··· mn. The extension to the general case is straightforward.
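As an illustration of Equations 5–8, the following small Python sketch evaluates the first four moments of a response over a full factorial grid of per-variable levels with product weights. The response function and the use of standard normal inputs (for which the three-level rule reduces to the Gauss-Hermite points, as noted below in section 2.2) are chosen only for illustration and are not part of the chapter's examples.

```python
import itertools
import numpy as np

def ffmm_moments(g, levels, weights):
    """First four moments of g(X) from a full factorial DOE (Eqs. 5-8).
    levels[i], weights[i]: 1-D arrays with the levels/weights of variable i."""
    vals, wts = [], []
    for combo in itertools.product(*(range(len(l)) for l in levels)):
        x = np.array([levels[i][j] for i, j in enumerate(combo)])
        w = np.prod([weights[i][j] for i, j in enumerate(combo)])  # product weight, Eq. 24-style
        vals.append(g(x))
        wts.append(w)
    vals, wts = np.array(vals), np.array(wts)
    mu = np.sum(wts * vals)
    sigma = np.sqrt(np.sum(wts * (vals - mu)**2))
    skew = np.sum(wts * (vals - mu)**3) / sigma**3
    kurt = np.sum(wts * (vals - mu)**4) / sigma**4
    return mu, sigma, skew, kurt

# Illustration: a linear response of three standard normal variables.
# For a normal variable the three-level rule coincides with Gauss-Hermite:
# levels mu - sqrt(3)*sigma, mu, mu + sqrt(3)*sigma with weights 1/6, 2/3, 1/6.
lv = [np.array([-np.sqrt(3.0), 0.0, np.sqrt(3.0)])] * 3
wt = [np.array([1/6, 2/3, 1/6])] * 3
print(ffmm_moments(lambda x: 3*x[0] - 2*x[1] + 5*x[2], lv, wt))
# expected: mean 0, standard deviation sqrt(38), skewness 0, kurtosis 3
```

Because the tensor-product rule is exact for polynomials of degree up to 2m − 1 in each variable, the four moments of this linear response are reproduced exactly by the 3³ design.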
2.2 Determination of optimal levels and weights
In this section, we look further into how the optimal levels and weights for statistical moment estimation can be found. As mentioned in section 2.1, they can be found by solving the moment matching equation, Equation 4. When we use three levels, that is, m is chosen as 3, the system of equations can be written as follows:

w1 + w2 + w3 = 1    (9)
w1 l1 + w2 l2 + w3 l3 = µ    (10)
w1(l1 − µ)² + w2(l2 − µ)² + w3(l3 − µ)² = σ²    (11)
w1(l1 − µ)³ + w2(l2 − µ)³ + w3(l3 − µ)³ = √β1 σ³    (12)
w1(l1 − µ)⁴ + w2(l2 − µ)⁴ + w3(l3 − µ)⁴ = β2 σ⁴    (13)
w1(l1 − µ)⁵ + w2(l2 − µ)⁵ + w3(l3 − µ)⁵ = µ5    (14)
Since it is difficult to solve these equations algebraically, (Seo & Kwak 2002) replaced the condition on the fifth moment (Eq. 14) with l2 = µ and obtained an explicit formula for li and wi in terms of the first four statistical moments of the input random variable as follows:

{l1, l2, l3} = { µ + (√β1 σ)/2 − (σ/2)√(4β2 − 3β1),   µ,   µ + (√β1 σ)/2 + (σ/2)√(4β2 − 3β1) }ᵀ    (15)

{w1, w2, w3} = { [(4β2 − 3β1) + √β1 √(4β2 − 3β1)] / [2(4β2 − 3β1)(β2 − β1)],   (β2 − β1 − 1)/(β2 − β1),   [(4β2 − 3β1) − √β1 √(4β2 − 3β1)] / [2(4β2 − 3β1)(β2 − β1)] }ᵀ    (16)
The levels and weights in Equations 15 and 16 can be calculated very easily, but they are exact only for symmetric distributions. For asymmetric distributions such as the lognormal or exponential distribution, the first level l1 can be located outside the domain on which the distribution is defined. For example, for an exponential
Figure 3.1 Levels and weights from the moment matching equation and by Seo & Kwak's formula for an exponential distribution (λ = 1). [Plot of weight versus level for the proposed 3-level rule and Seo & Kwak's 3-level rule, together with the PDF; Seo & Kwak's first level lies outside the distribution domain.]
distribution defined on (x ≥ 0) with distribution parameter λ (Hahn & Shapiro 1967), the four statistical moments are calculated as

µ = 1/λ,   σ = 1/λ,   √β1 = 2,   β2 = 9    (17)

and the levels by Equation 15 are given as follows:

l1 = (1/λ)(2 − √6) < 0,   l2 = 1/λ,   l3 = (1/λ)(2 + √6)    (18)
It is seen that the first level is located outside the domain of the distribution, and this may cause severe numerical problems when applying the DOE for moment estimation. The other problem with the levels and weights from Equations 15 and 16 is that, since the requirement on the fifth moment is replaced with the requirement that the mid-level should be the mean value of the random variable, the accuracy of high moment estimation degrades for a nonlinear performance function. As will be illustrated in the examples, the levels and weights given by Equations 15 and 16 may not be optimal in terms of accuracy, unless all the distributions are symmetric. For this reason, it is preferable to solve Equation 4 directly with numerical methods, and numerical equation-solving algorithms such as the modified Powell hybrid method (More et al. 1980) can be used for this purpose. Figure 3.1 compares the levels and weights obtained by solving Equation 4 with numerical methods with those obtained by Equations 15 and 16 for the exponential distribution example mentioned above. It is found that the levels and weights obtained directly from Equation 4 are free from the
Figure 3.2 Three level DOE for different distributions: (a) normal distribution (µ = 5, σ = 1); (b) uniform distribution (0 ≤ x ≤ 10); (c) Rayleigh distribution; (d) beta distribution (η = 0.3, γ = 0.6). Notations of the distribution parameters are from (Hahn & Shapiro 1967).
problems found in Seo & Kwak’s levels and weights. In Figure 3.2, levels and weights for different distributions are depicted for a three level case. The extension of the procedure for finding levels and weights to cases with more than three levels is straightforward, but there are two difficulties in calculation. Firstly, the calculation of high order moments of input random variable can be complicated and tedious depending on the PDF of the input variable. For example, when calculating the levels and weights for a five level DOE (m = 5), the moments of input X should be provided up to ninth order, which might be very complicated for some distributions. Secondly, the solution of Equation 4 becomes very difficult or sometimes impossible. From our experience, the calculation up to five level DOE turns out to be manageable, but the calculation for more levels than five was not successful in many cases. When the PDF of a distribution has the same form with the weight function in a Gaussian quadrature formula (Abramowitz & Stegun 1972), the levels and weights can be directly derived from the nodes and weights of the Gaussian quadrature rule instead of solving Equation 4. Among the well known distributions, the normal distribution matches with the Gauss-Hermite quadrature, the exponential distribution matches with the Gauss-Laguerre quadrature, and the uniform distribution matches with the Gauss-Legendre quadrature formula.
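The following Python sketch illustrates both routes for the exponential example above: Seo & Kwak's closed-form three-level rule (Eqs. 15–16), whose first level falls below x = 0, and levels and weights satisfying the full moment-matching conditions of Equation 4, which for the exponential PDF are simply the three-point Gauss-Laguerre nodes and weights. The numerical values below follow from Equation 17 and are shown only as a check.

```python
import numpy as np

# Exponential distribution with lambda = 1: mu = sigma = 1,
# skewness sqrt(beta1) = 2, kurtosis beta2 = 9 (Eq. 17).
mu, sigma = 1.0, 1.0
sqrt_b1, b2 = 2.0, 9.0
b1 = sqrt_b1**2

# Seo & Kwak's closed-form three-level rule (Eqs. 15-16).
half_span = 0.5 * np.sqrt(4*b2 - 3*b1)
levels_sk = np.array([mu + 0.5*sqrt_b1*sigma - half_span*sigma,
                      mu,
                      mu + 0.5*sqrt_b1*sigma + half_span*sigma])
denom = 2*(4*b2 - 3*b1)*(b2 - b1)
weights_sk = np.array([((4*b2 - 3*b1) + sqrt_b1*np.sqrt(4*b2 - 3*b1)) / denom,
                       (b2 - b1 - 1) / (b2 - b1),
                       ((4*b2 - 3*b1) - sqrt_b1*np.sqrt(4*b2 - 3*b1)) / denom])
print("Seo & Kwak levels :", levels_sk)    # first level is negative, i.e. outside x >= 0
print("Seo & Kwak weights:", weights_sk)

# Levels/weights satisfying the full moment-matching conditions (Eq. 4):
# for the exponential PDF these are the Gauss-Laguerre nodes and weights.
nodes, weights = np.polynomial.laguerre.laggauss(3)
print("Moment-matched levels :", nodes)    # all inside x >= 0
print("Moment-matched weights:", weights)

# Check of the raw moments E[X^k] = k!: both rules match up to k = 4,
# only the moment-matched rule also reproduces k = 5.
for k in range(6):
    print(k, np.sum(weights * nodes**k), np.sum(weights_sk * levels_sk**k))
```

This reproduces the situation plotted in Figure 3.1: the closed-form rule places a level at a negative abscissa, whereas the fully moment-matched rule keeps all levels inside the domain of the exponential distribution.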
2.3 Empirical distribution systems

Once the four statistical moments of g(X) are obtained, the PDF of g(X) can be approximated by empirical distribution systems, such as the Johnson system and the Pearson system (Johnson et al. 1995, Hahn & Shapiro 1967). In our approach, the Pearson system of distributions is adopted, which approximates the PDF of a random variable X as the solution of the following differential equation:

(1/f(x̄)) df(x̄)/dx̄ = −(x̄ + a)/(c0 + c1 x̄ + c2 x̄²)    (19)

where f is the PDF to be found, x̄ is x − µ, and c0, c1, c2 and a are coefficients determined from the four statistical moments of X. The relation is given as

c0 = (4β2 − 3β1)(10β2 − 12β1 − 18)⁻¹ σ²
c1 = a = √β1 (β2 + 3)(10β2 − 12β1 − 18)⁻¹ σ    (20)
c2 = (2β2 − 3β1 − 6)(10β2 − 12β1 − 18)⁻¹

The shape of f(x̄) changes considerably with the character of the roots of the following equation:

c0 + c1 x̄ + c2 x̄² = 0    (21)
Pearson classified the types of distributions into seven groups according to the types of the roots of Equation 21, as summarized in Table 3.1.

Table 3.1 Pearson system of distributions (classifications and corresponding distributions), with κ = β1(β2 + 3)² / [4(2β2 − 3β1 − 6)(4β2 − 3β1)].

Type       Case                         Distribution
Type I:    κ < 0                        Beta
Type II:   β1 = 0, β2 < 3               Beta (symmetric)
Type III:  2β2 − 3β1 − 6 = 0            Gamma
Type IV:   0 < κ < 1                    No match
Type V:    κ = 1                        Inverse Gaussian
Type VI:   κ > 1                        Beta prime
Type VII:  β1 = 0, β2 > 3               Student's t

It is notable that the type of a distribution is determined solely by the skewness and kurtosis. As a special case, β1 = 0, β2 = 3 corresponds to the normal distribution. The Pearson system is a convenient tool for finding a PDF from the first four statistical moments of a random variable; however, it should be noted that the PDF found by the Pearson system is not the unique solution corresponding to the moments. For this reason it is important to understand the mathematical background and assumptions of the Pearson system: for example, it can represent
only a PDF which has one mode. More detailed explanations about the Pearson system can be found in (Johnson et al. 1995).
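A compact Python sketch of the coefficient relations (Eq. 20) and of the classification in Table 3.1 is given below. It is a minimal illustration, not the authors' implementation; the exponential distribution (√β1 = 2, β2 = 9) is used as a check, for which 2β2 − 3β1 − 6 = 0 and the Pearson fit correctly reduces to a gamma (Type III) distribution.

```python
import numpy as np

def pearson_coefficients(sigma, skew, kurt):
    """Coefficients of the Pearson differential equation (Eqs. 19-20).
    skew is sqrt(beta1) (skewness), kurt is beta2 (kurtosis)."""
    b1, b2 = skew**2, kurt
    denom = 10*b2 - 12*b1 - 18
    c0 = (4*b2 - 3*b1) / denom * sigma**2
    c1 = a = skew * (b2 + 3) / denom * sigma
    c2 = (2*b2 - 3*b1 - 6) / denom
    return a, c0, c1, c2

def pearson_type(skew, kurt, tol=1e-9):
    """Classify the Pearson type from skewness and kurtosis (Table 3.1)."""
    b1, b2 = skew**2, kurt
    if abs(b1) < tol and abs(b2 - 3.0) < tol:
        return "Normal (limiting case)"
    if abs(2*b2 - 3*b1 - 6) < tol:
        return "Type III (Gamma)"
    kappa = b1*(b2 + 3)**2 / (4*(2*b2 - 3*b1 - 6)*(4*b2 - 3*b1))
    if abs(b1) < tol:
        return "Type II (symmetric Beta)" if b2 < 3 else "Type VII (Student's t)"
    if kappa < 0:
        return "Type I (Beta)"
    if abs(kappa - 1) < tol:
        return "Type V (Inverse Gaussian)"
    return "Type IV" if kappa < 1 else "Type VI (Beta prime)"

print(pearson_coefficients(1.0, 2.0, 9.0))  # -> a = c1 = 1, c0 = 1, c2 = 0 (exponential case)
print(pearson_type(2.0, 9.0))               # -> Type III (Gamma)
print(pearson_type(0.0, 3.0))               # -> Normal (limiting case)
```

For the exponential case the coefficients give df/dx̄ = −f, i.e. the fitted PDF is indeed the exponential density, which illustrates how the system recovers a known distribution from its first four moments.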
2.4 Procedure of reliability analysis using full factorial moment method
The procedure for reliability analysis using the full factorial DOE and the Pearson system is summarized in Figure 3.3. This procedure for calculating the probability of failure using the full factorial experimental set is named in our work the full factorial moment method (FFMM).
2.5 Examples
Two examples are provided to demonstrate the accuracy of FFMM in statistical moment estimation and reliability calculation. The first example is a simple linear polynomial function (Eq. 22) whose statistical moments can be calculated analytically. FFMM is applied with several different input distributions and compared with the exact solution to verify its accuracy. The input setting of the problem is summarized in Table 3.2.

g(x1, x2, x3) = 3x1 − 2x2 + 5x3    (22)
The 3^n full factorial DOE has been applied and the results of the moment estimation are summarized in Table 3.3. It is seen that FFMM provides very accurate results regardless of the types of input distributions. The second example of FFMM is an application to the overrunning clutch assembly known as Fortini's clutch (Fig. 3.4). This problem has been discussed by several authors, including (Greenwood & Chase 1990, Creveling 1997). The contact angle
Figure 3.3 Overall procedure of the full factorial moment method: (1) from the given input distributions, calculate the first 2m − 1 moments of each Xi; (2) find the levels {li·1, . . . , li·m} and weights {wi·1, . . . , wi·m} (section 2.2); (3) run the full factorial DOE and evaluate µg, σg, √β1g, β2g (Eqs. 5–8); (4) fit the Pearson system (Eqs. 19–20) to obtain fg(X)(g(x)) and Pr[g(X) < 0].
Table 3.2 Four different settings of input distributions and their parameters.

Case  Distribution   x1                         x2                                  x3
(a)   Exponential    λ1 = 1.0                   λ2 = 2.0                            λ3 = 3.0
(b)   Gamma          η1 = 1.0, λ1 = 1.5         η2 = 3.0, λ2 = 5.0                  η3 = 3.0, λ3 = 1.0
(c)   Lognormal      µ̂1 = 0.1, σ̂1 = 0.1         µ̂2 = 0.5, σ̂2 = 0.1                  µ̂3 = 1.0, σ̂3 = 0.1
(d)   Mixed          λ1 = 4.0 (exponential)     µ̂2 = 0.5, σ̂2 = 0.1 (lognormal)      a3 = 4.0, b3 = 1.0 (Weibull)

∗ The notations of the distribution parameters are from (Hahn & Shapiro 1967).
Table 3.3 Results of moment estimation using FFMM for the linear performance function.

Case  Method     µg        σg        √β1g       β2g       µ5g
(a)   Exact      3.6667    3.5746    1.3412     6.2969    1.3944e4
      Proposed   3.6667    3.5746    1.3412     6.2969    1.3944e4
(b)   Exact      15.800    8.9152    1.0805     4.7962    8.3428e5
      Proposed   15.800    8.9152    1.0805     4.7962    8.3428e5
(c)   Exact      13.678    1.4482    0.25520    3.1306    16.871
      Proposed   13.678    1.4482    0.25520    3.1306    16.871
(d)   Exact      1.9680    1.5131    0.18862    3.2369    21.712
      Proposed   1.9680    1.5131    0.18862    3.2369    21.712
Figure 3.4 The overrunning clutch assembly (Fortini’s clutch).
Y is given in terms of the independent component variables X1, X2, X3 and X4 as follows:

Y = arccos[ (X1 + 0.5(X2 + X3)) / (X4 − 0.5(X2 + X3)) ]    (23)
Reliability analysis and optimization using moment methods
67
Table 3.4 Distribution types and parameters of input random variables in clutch example. Component
Distribution
Mean
Standard deviation
Parameters for non-normal variables
X1 X2 X3 X4
Beta Normal Normal Rayleigh
55.29 mm 22.86 mm 22.86 mm 101.60 mm
0.0793 mm 0.0043 mm 0.0043 mm 0.0793 mm
γ1 = η1 = 5.0 (55.0269 ≤ x1 ≤ 55.5531) σˆ 4 = 0.1211 (x4 ≥ 101.45)
Table 3.5 Results of moment estimation and probability calculation for Fortini’s clutch example. Moment
3n (S&K ∗ )
3n (MM∗∗ )
5n (MM∗∗ )
FORM
MCS
Mean STD Skewness Kurtosis Pr [y < 5◦ ] Pr [y < 6◦ ] Pr [y < 7◦ ] Pr [y < 8◦ ] Pr [y < 9◦ ] Pr [5◦ < y < 9◦ ] Function call
0.1219 0.0117 −0.0578 2.9216 0.00159 0.07266 0.50430 0.93617 0.99925 0.99767 81
0.1219 0.0117 −0.0497 2.8488 0.00124 0.07288 0.50467 0.93570 0.99943 0.99819 81
0.1219 0.0117 −0.0530 2.8827 0.00140 0.07272 0.50452 0.93595 0.99934 0.99794 625
• • • • diverge 0.08777 (40) 0.52037 (25) 0.93564 (15) 0.99921 (15) • No. in ( )
0.1219 0.0117 −0.0523 2.8822 0.00122 0.07388 0.50265 0.93666 0.99926 0.99804 1,000 k
∗ Seo & Kwak’s three level formula (Eqs. 15, 16) ∗∗ Levels and weights from Equation 4.
For comparison, FORM with HL-RF algorithm (Hasofer & Lind 1974, Rackwitz & Fiesseler 1978) and Monte Carlo simulation (MCS) are applied together with FFMM to calculate the probability of the contact angle being outside the allowable range. And to check the difference in the accuracy made by the selection of levels and weights, the Seo & Kwak’s three level DOE (Eqs. 15 and 16) is also tried. The results are summarized in Table 3.5. In this example, FORM has some numerical difficulty in treating the non-normal distributions when the probability is small. During the calculation of the probability Pr(y < 5◦ ), the MPFP search point goes far outside the domain of the non-normal random variables and this results in the divergence of HL-RF algorithm. On the contrary, FFMM shows good accuracy throughout the range of y values. It is seen that there is a subtle difference in the results by Seo & Kwak’s 3 level DOE and DOE obtained by directly solving the moment matching equation (Eq. 4). This difference becomes more significant as the system performance function becomes more nonlinear. The PDF found by the Pearson system is plotted in Figure 3.5 with the histogram obtained by MCS. It is shown that the Pearson system gives very accurate PDF estimation in this problem. The number of function calls required for calculating the probability is also listed in Table 3.5. The number of function evaluations in FFMM seems significantly bigger than that of FORM, and it increases very rapidly as the number of random variables increases. This becomes the weak point of FFMM, which hinders its application to more practical engineering problems.
68
Structural design optimization considering uncertainties
PDF by MCS PDF by Pearson system
0.7 0.6 0.5 0.4 f (y)
Failure region
0.3
Failure region
0.2 0.1 0.0 3
4
5
6
7
8
9
10
y (degree)
Figure 3.5 Probability density function of contact angle y.
3 Response surface augmented moment method

The FFMM introduced in section 2 provides good accuracy and the ability to treat non-normal distributions, as shown in section 2.5. However, the number of function evaluations required by FFMM increases exponentially with the number of random variables, and it is often prohibitive for applications to practical engineering problems where the evaluation of the system performance function requires considerable time and computational resources. To tackle this problem, a novel way to integrate the response surface approximation with the 3^n FFMM was developed and named the response surface augmented moment method (RSMM) (Lee & Kwak 2006). In RSMM, instead of performing expensive full factorial experiments, experiments are selectively performed at the points with larger weight and the rest of the data are approximated by a second order response surface. This response surface is updated with the addition of experiments one by one until convergence in the probability of failure is achieved. In this section, the overall procedure of RSMM is introduced and some important concepts utilized in the method are explained, together with examples of reliability analysis.
3.1 Overall procedure
Two strategies are taken in developing RSMM. Firstly, to reduce the number of function evaluations, the experimental data are used not only for moment estimation but also for function approximation. Experiments important in the probability calculation are selectively performed and the rest of the data in the full factorial design are approximated using a response surface. Secondly, the initially simple response surface is updated progressively with the addition of experiments by introducing new cross product terms into the approximation model. The overall procedure of RSMM is as follows:
(a) Establish the 3^n full factorial DOE with levels and weights obtained by solving Equation 4.
Figure 3.6 Experiment layout of RSMM at the initial approximation stage (two-variable case: weight 4/9 at the centre point, 1/9 at the axial points and 1/36 at the corner points).
(b) Calculate the performance function g(x) at the 2n + 1 experimental points located on the mid-level axes. Usually, the weight on the mid-level is much larger than the rest. Figure 3.6 shows the example of a case with two normal variables. The numbers in the figure are the weights imposed on the experimental points, calculated by

    wi = w1i · w2i ··· wni = Π_{j=1}^{n} wji    (24)

    where wi is the overall weight imposed on the i-th experimental point and wji is the weight of the j-th variable at the i-th experimental point. The circled experimental points in the figure are those at which the experiments for the initial approximation are performed.
(c) With the 2n + 1 data obtained in step (b), build a quadratic response surface using least-squares estimation (Myers & Montgomery 1995) without cross product terms (a small sketch of this fit is given after the procedure):

    g̃(x) = a + Σ_{i=1}^{n} bi xi + Σ_{i=1}^{n} ci xi²    (25)

(d) Using g̃(x), complement the data at the points where experiments are not performed and calculate the first four statistical moments of g(x) with Equations 5–8. Then obtain the probability of failure Pr[g(x) < 0] using the Pearson system, just as done in FFMM.
(e) Calculate the influence index at the points where experiments are not performed. The influence index, κi, at the i-th experimental point is defined as follows:

    κi = | dPf / dg̃(xi) |    (26)
where xi is the vector x at the i-th experimental point. The influence index is a measure devised to quantify the relative importance of the experiments in calculating the probability. Detailed explanations about the influence index are given in the subsequent section.
(f) Perform one additional experiment at the point with the biggest κi.
(g) With the data obtained in step (f), update g̃(x). A new cross product term may be added to g̃(x) as in Equation 27,

    g̃(x) = a + Σ_{i=1}^{n} bi xi + Σ_{i=1}^{n} ci xi² + Σ_{k=1}^{nmix} dk xi(k) xj(k)    (27)

    where nmix is the number of cross product terms included in the formulation and i(k) and j(k) are the indices of the first and second variable in the k-th cross product term, respectively, where i(k) < j(k). The way of updating the response surface approximation is discussed in the subsequent section.
(h) With the updated g̃(x), calculate the probability of failure as in step (d).
(i) Repeat steps (e) to (h) until the value of the probability of failure converges.
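The following Python sketch illustrates the initial least-squares fit of step (c): the 2n + 1 axial experiments determine the constant, linear and pure quadratic coefficients of Equation 25. The helper name and example levels are illustrative only and are not part of the method's implementation.

```python
import numpy as np

def fit_axial_quadratic(g, mid, levels):
    """Fit g~(x) = a + sum b_i x_i + sum c_i x_i^2 (Eq. 25) from 2n+1 axial points.
    mid: the mid-level point (length n); levels[i]: the two off-centre levels of variable i."""
    n = len(mid)
    pts = [np.array(mid, dtype=float)]
    for i in range(n):
        for l in levels[i]:
            p = np.array(mid, dtype=float)
            p[i] = l
            pts.append(p)
    X = [np.concatenate(([1.0], p, p**2)) for p in pts]   # basis [1, x_i, x_i^2]
    y = [g(p) for p in pts]
    coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    a, b, c = coef[0], coef[1:n + 1], coef[n + 1:]
    return lambda x: a + b @ np.asarray(x) + c @ np.asarray(x)**2

# Illustration with two variables (levels chosen arbitrarily for the sketch).
g = lambda x: x[0]**2 + 2*x[1] + 0.5*x[0]*x[1]
gt = fit_axial_quadratic(g, mid=[0.0, 0.0], levels=[(-1.0, 1.0), (-1.0, 1.0)])
print(gt([0.5, -0.5]))   # approximation only: the x1*x2 interaction is not yet captured
```

The interaction term of the true function is not reproduced by this initial surface, which is exactly why RSMM adds cross product terms (Eq. 27) as further experiments become available.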
RSMM tries to find a reliable solution by taking the converged value of probability as its final solution after sufficient updates of the approximation with successive additions of experiments. One important procedure in RSMM is then the arrangement of the order of experiment addition. It is obvious that to reduce the number of experiments effectively a higher priority should be given to the experimental point which can bring in the greatest change of probability when the approximation at that point is replaced with the real experimental data. Influence index is devised to compare the magnitudes of the expected change of probability at the points where experiments are not performed. The change of probability can be roughly estimated as follows:
Pf
dPf · (g(xi ) − g(x ˜ i )) d g(x ˜ i)
(28)
where the term g(xi) − g̃(xi) is the approximation error at the i-th experimental point and dPf/dg̃(xi) is the derivative of Pf with respect to g̃(x) at the i-th experimental point, which indicates the importance of the point in the calculation of Pf in the currently approximated system. Since g(xi) − g̃(xi) cannot be estimated unless the value of g(xi) is calculated with an additional experiment, the influence index, κi, is defined as the absolute value of the coefficient term dPf/dg̃(xi). This derivative denotes the sensitivity of Pf due to a change of the estimated value g̃(x) at xi. The influence index κi can be calculated very efficiently without additional evaluations of g(x). Pf is a function of the four statistical moments of g(X) and can be expressed as follows:

Pf = Pf(µg, σg, √β1g, β2g)    (29)
Equations 5–8 for µg, σg, √β1g and β2g can be rewritten as follows:

µg ≈ Σ_{i=1}^{nexp} wi ĝ(xi)    (30)

σg ≈ [Σ_{i=1}^{nexp} wi (ĝ(xi) − µg)²]^{1/2}    (31)

√β1g ≈ [Σ_{i=1}^{nexp} wi (ĝ(xi) − µg)³] / σg³    (32)

β2g ≈ [Σ_{i=1}^{nexp} wi (ĝ(xi) − µg)⁴] / σg⁴    (33)

where nexp is the total number of experimental points, which is equal to 3^n, and wi is the weight imposed on the i-th experimental point calculated by Equation 24. ĝ(xi) is defined in RSMM as follows:

ĝ(xi) = g(xi)  if the experiment at xi is performed;   ĝ(xi) = g̃(xi)  if the experiment at xi is not performed yet    (34)

The derivative of Pf can be written as follows:

dPf/dg̃(xi) = (∂Pf/∂µg)·dµg/dg̃(xi) + (∂Pf/∂σg)·dσg/dg̃(xi) + (∂Pf/∂√β1g)·d√β1g/dg̃(xi) + (∂Pf/∂β2g)·dβ2g/dg̃(xi)
           ≈ (ΔPf/Δµg)·dµg/dg̃(xi) + (ΔPf/Δσg)·dσg/dg̃(xi) + (ΔPf/Δ√β1g)·d√β1g/dg̃(xi) + (ΔPf/Δβ2g)·dβ2g/dg̃(xi)    (35)

The terms ΔPf/Δµg, ΔPf/Δσg, ΔPf/Δ√β1g and ΔPf/Δβ2g can be calculated using the finite difference method from the Pearson system program, and the rest of the terms can be obtained by differentiating Equations 30–33 as follows:

dµg/dg̃(xi) = wi    (36)

dσg/dg̃(xi) = (wi/σg)(g̃(xi) − µg)    (37)

d√β1g/dg̃(xi) = (3wi/σg³)(g̃(xi) − µg)² − 3wi/σg − (3√β1g/σg)·dσg/dg̃(xi)    (38)

dβ2g/dg̃(xi) = (4wi/σg⁴)(g̃(xi) − µg)³ − (4wi√β1g)/σg − (4β2g/σg)·dσg/dg̃(xi)    (39)
Structural design optimization considering uncertainties
3.3 U pd ate o f r es po ns e s ur fac e appr ox i m a t i o n Once an additional experiment is performed, we can update the response surface with the newly obtained data. In RSMM, not only the regression coefficients but also the regression bases are updated to improve the accuracy of the approximation for the given set of observations. At the initial stage of RSMM, a total of 2n + 1 polynomial terms are used in the response surface of up to quadratic terms. The cross product terms are introduced into the approximation model during the update so that we can account for the interactions among the random variables. As we may use at most the same number of terms in the approximation model with the number of observations and experiments are added one by one, one cross product term may be added in one updating step. To select the appropriate cross product term to be added, a simple procedure is established as follows. Let the coordinate of the experimental point where experiment has been newly performed denoted by xN = {x1·N , x2·N , . . . , xn·N } where xi·N is the value of variable xi at xN which takes a value among {li·1 , li·2 , li·3 }. Suppose the response surface before update is given as,
g(x) ˜ = a+
n
bi xi +
i=1
= a+
n
n
ci x2i +
i=1
bi ξi +
i=1
n i=1
nmix
dk xi(k) xj(k)
k=1
ci ξi2 +
nmix
d k ξi(k) ξj(k)
(40)
k=1
where ξi is the coded variable defined as follows: ξi =
xi − (li·1 + li·3 )/2 (li·3 − li·1 )/2
(41)
and a, bi , ci and d k are the regression coefficients when g(x) ˜ is expressed in the coded variables. (a) (b)
(c)
List up the variables which do not have mid-level at xN , say xi·N = li·2 . Among all possible combinations with the variables listed in step (a), select the cross product terms that are not included in the former approximation, Equation 40 and add them into C, the set of candidate cross product terms. At the initial stage of RSMM, C is a null set. If C is null, that is, all addible cross product terms are already used in the former approximation, no term is added at this updating step. If C has only one element, it is added to the regression bases. If C has more than one element, build g(x) ˜ including each cross product term in C and compare the residual sum of squares of each model. If there is a cross product term which makes the regression matrix singular, it is discarded. Choose the model corresponding to the minimum residual sum of squares. If all the RS models have the same residual sum of squares, then go to step (d). Else, go to step (e).
Reliability analysis and optimization using moment methods
(d)
Calculate coefficient sum CSij defined as Equation 42 for all members of C, CSij = |bi | + |ci | + |bj | + |cj |
(e)
73
(42)
where bi and ci are the coefficients in Equation 40. Choose xi xj that has the greatest value of CSij and add it to the regression bases. If a term is added, then remove it from the candidate set C and go to the next stage of RSMM.
The reason why cross product terms which consist only of variables off the mean-axis at currently or formerly executed experiments are added is to prevent the singularity or ill-conditioning during the least square estimation. With this adaptive update of response surface approximation, the number of terms in the response surface model is kept as small as possible while including the interaction effect that might exist among variables.
3.4
Examples
Two examples are taken to check accuracy and efficiency of RSMM. For comparison, the results by FFMM, MCS, FORM and method proposed by (Zhao & Ono 2001) are provided. In FORM, the HL-RF algorithm (Rackwitz & Fiessler 1978) is used to find MPFP. The method of Zhao & Ono is a recently reported moment based reliability method which uses samples solely on the mean axis of each variable while in RSMM non-axial samples are also utilized. The first example is taken from (Masen et al. 1986, Kiureghian et al. 1987 and Zhao & Ono 2001). The performance function representing the failure in one plastic collapse mechanism of a one bay frame is given as, g(x) = x1 + 2x2 + 2x3 + x4 − 5x5 − 5x6
(43)
where the variables are statistically independent and log-normally distributed with the means µ1 = . . . µ4 = 120, µ5 = 50, µ6 = 40 and standard deviations σ1 = . . . σ4 = 12, σ5 = 15, σ6 = 12. The probabilities Pr [g(x) < 0] calculated are summarized in Table 3.6. It is observed that the RSMM and FFMM give equally accurate results for present example while
Table 3.6 Results of moment estimation and reliability analysis of the first example.
Mean STD Skewness Kurtosis Pr[g(x) < 0] Function calls
FORM
Zhao & Ono
FFMM
RSMM
MCS
• • • • 9.430e-3 50
270.000 103.271 −0.528 3.650 1.219e-2 42
270.000 103.271 −0.523 3.612 1.212e-2 729
270.000 103.271 −0.523 3.612 1.212e-2 16
269.990 103.174 −0.530 3.623 1.213e-2 1,000 k
74
Structural design optimization considering uncertainties
P1
P2
P3
P4
P5
P6
E1, A1
2m E2, A2 E1, A1
PNT1
E2, A2
DISP1 6@4 m
Figure 3.7 Truss structure with 23 members.
Table 3.7 Input random variables for truss example. Variables
(unit)
Distribution type
Mean
Standard deviation
1 2 3 4 5 6 7 8 9 10
E1 (N/m2 ) E2 (N/m2 ) A1 (m2 ) A2 (m2 ) P1 (N) P2 (N) P3 (N) P4 (N) P5 (N) P6 (N)
lognormal lognormal lognormal lognormal gumbel gumbel gumbel gumbel gumbel gumbel
2.1 × 1011 2.1 × 1011 2.0 × 10−3 1.0 × 10−3 5.0 × 104 5.0 × 104 5.0 × 104 5.0 × 104 5.0 × 104 5.0 × 104
2.1 × 1010 2.1 × 1010 2.0 × 10−4 1.0 × 10−4 7.5 × 103 7.5 × 103 7.5 × 103 7.5 × 103 7.5 × 103 7.5 × 103
FORM shows rather erroneous result due to the non-normality of variables. Zhao & Ono’s method also gives good results for the present example. Since the performance function is approximated exactly with the quadratic response surface without cross product terms, convergence is achieved just after the initial approximation in RSMM. The second example is a truss structure with 23 members, as shown in Figure 3.7. Ten random variables are considered which are summarized in Table 3.7. It is assumed that all the horizontal members have perfectly correlated Young’s modulus and cross sectional areas with each other and so is the case with the diagonal members. The requirement of this problem is that the displacement at PNT1 in Figure 3.7 should not exceed 0.11 m. The performance function is defined as, g(x) = 0.11 − DISP1
(44)
The displacement at PNT1 is calculated using the commercial software, ANSYS 6.0.
Reliability analysis and optimization using moment methods
75
0.009
Pf
0.008 0.007 0.006 0.005 0.004 25
30 35 40 Number of function call
45
Figure 3.8 Convergence history of probability of failure in truss example.
RSMM finds a solution with 24 additional experiments after the initial approximation with 21 experiments. At the final stage, the response surface model of g(x) is obtained with 17 cross product terms as follows:

g̃(ξ(x)) = 2.8070 + 1.2598ξ1 + 0.2147ξ2 + 1.2559ξ3 + 0.2133ξ4 − 0.1510ξ5 − 0.4238ξ6 − 0.6100ξ7 − 0.6100ξ8 − 0.4238ξ9 − 0.1510ξ10
          − 0.1978ξ1² − 0.0362ξ2² − 0.2016ξ3² − 0.0346ξ4² + 0.0023ξ5² + 0.0008ξ6² + 0.0036ξ7² + 0.0036ξ8² + 0.0008ξ9² + 0.0023ξ10²
          − 0.0042ξ1ξ2 − 0.3022ξ1ξ3 − 0.0110ξ1ξ4 + 0.0381ξ1ξ5 + 0.0871ξ1ξ6 + 0.1232ξ1ξ7 + 0.1232ξ1ξ8 + 0.0871ξ1ξ9 + 0.0346ξ1ξ10
          + 0.0041ξ2ξ3 + 0.0110ξ3ξ4 + 0.0261ξ3ξ5 + 0.0831ξ3ξ6 + 0.1172ξ3ξ7 + 0.1172ξ3ξ8 + 0.0832ξ3ξ9 + 0.0296ξ3ξ10        (45)
The convergence history of the probability of failure is depicted in Figure 3.8. At the first approximation, the probability is calculated as 0.00451, and the converged value is 0.00880. The final distribution of g(x) is Pearson type I, a beta distribution, as shown in Figure 3.9. The results are compared in Table 3.8 with those of the other methods. As in the previous example, the result of RSMM shows good agreement with those of FFMM and MCS with 100,000 samples, whereas the results of FORM and Zhao & Ono's method show some discrepancy from the other results. In terms of numerical efficiency, RSMM shows very good performance compared to the other methods. FORM and Zhao & Ono's method also show good numerical efficiency, but their accuracy is not satisfactory for the present example. The total elapsed time for RSMM is 90 seconds on a Pentium 4 machine, whereas FORM takes 151 seconds.
Figure 3.9 Probability density function of g(x) in the truss example (PDF by MCS versus PDF by the Pearson system; the failure region g(x) < 0 is shaded).

Table 3.8 Results of moment estimation and reliability analysis of truss example.

Method              Mean     STD      Skewness   Kurtosis   Pr[g(x) < 0]    Function calls
FORM                •        •        •          •          5.019 × 10−3    77
Zhao & Ono's (5n)   0.0306   0.0111   −0.4989    3.4289     4.356 × 10−3    50
Zhao & Ono's (7n)   0.0307   0.0110   −0.5708    3.4120     4.357 × 10−3    70
FFMM                0.0306   0.0109   −0.2009    3.0786     8.821 × 10−3    59,049
RSMM                0.0306   0.0109   −0.2008    3.0785     8.804 × 10−3    45
MCS                 0.0307   0.0111   −0.4789    3.3826     8.330 × 10−3    100,000
4 Reliability-based design optimization using moment methods
The reliability-based design optimization (RBDO) problem can generally be formulated as follows:

Minimize   W(d, z)
subject to Pr[gi(d, z, x) < 0] ≤ pi,  i = 1, . . . , m
           Pr[∪_{i=1}^{m} {gi(d, z, x) < 0}] ≤ p0        (46)
where W and gi are the objective function and the limit state functions, and d, z, x are the vectors of design variables, state variables and random variables, respectively.
Figure 3.10 Flowchart of RBDO using RSMM or FFMM (initial design and distributions of the random variables; optimization engine, e.g. SQP or MMFD, evaluating W, Pr[gi < 0] and the sensitivities dW/dd and dPf/dd via RSMM or FFMM until a feasible converged design is found).
The design variable can be a deterministic variable or the mean value of a random variable existing in the system. The first constraint is imposed on the component failure probabilities and the second on the system failure probability, which can be evaluated using the reliability bounds concept (Ditlevsen 1979). The formulation in Equation 46 can also be applied to tolerance synthesis problems (Creveling 1997, Lee & Woo 1990); in this case the tolerance, which is usually defined as some multiple of the standard deviation of a dimension, becomes a design variable. In this section, an RBDO procedure using FFMM and RSMM is introduced together with an approximate design sensitivity analysis for the probabilistic constraints.
4.1 Procedure of RBDO using moment methods
FFMM and RSMM can be combined with a mathematical programming algorithm for RBDO. The overall procedure is depicted in Figure 3.10. The optimization engine calls RSMM or FFMM whenever it needs to evaluate a probabilistic constraint. The procedure follows the double-loop strategy, in which constraint feasibility is checked at every design point visited during the optimization. The efficiency of the procedure is discussed in section 5; a minimal sketch of this double-loop coupling is given below.
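The sketch below only illustrates the coupling between an optimizer and a probability evaluator; it is not the authors' code. The names `weight` and `moment_reliability` and the numbers used are hypothetical placeholders, and `moment_reliability` stands for any FFMM/RSMM-type estimate of Pr[g < 0] as a function of the design (here, the mean values of the random variables).

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

P_ALLOW = 1.0e-3                      # allowable failure probability

def weight(d):
    # objective function (e.g. structural weight); placeholder expression
    return d[0] ** 2 + d[1]

def moment_reliability(d):
    # stand-in for a moment-method estimate of Pr[g < 0]; here a simple
    # first-order estimate for g = d0*d1 - 20 with unit standard deviations
    g_mean = d[0] * d[1] - 20.0
    g_std = np.hypot(d[1] * 1.0, d[0] * 1.0)
    return norm.cdf(-g_mean / g_std)

constraints = [{"type": "ineq",       # P_ALLOW - Pf >= 0
                "fun": lambda d: P_ALLOW - moment_reliability(d)}]

result = minimize(weight, x0=[3.0, 10.0], method="SLSQP",
                  bounds=[(1.0, 10.0), (1.0, 50.0)], constraints=constraints)
print(result.x, weight(result.x), moment_reliability(result.x))

Every constraint evaluation inside the SQP loop triggers a full reliability analysis, which is exactly the double-loop cost that the design sensitivity analysis of the next subsection tries to reduce.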
4.2 Design sensitivity analysis
Since one evaluation of a probabilistic constraint usually requires a considerable number of performance function evaluations, it is important to provide the design sensitivity of the probabilistic constraint in an analytic or semi-analytic way during the RBDO process. In this section, a semi-analytic design sensitivity analysis based on the moment method is presented (Lee & Kwak 2005). Both RSMM and FFMM can be adopted in this procedure, and the calculation is done without any additional g(x) evaluations. Instead,
the experimental data or the response surface model previously obtained in the reliability analysis is utilized. The procedure is as follows. Since the probability of failure is a function of the four statistical moments µg, σg, √β1g and β2g, the design sensitivity of Pf can be written as

dPf/dd = (∂Pf/∂µg)(dµg/dd) + (∂Pf/∂σg)(dσg/dd) + (∂Pf/∂√β1g)(d√β1g/dd) + (∂Pf/∂β2g)(dβ2g/dd)
       ≈ (ΔPf/Δµg)(dµg/dd) + (ΔPf/Δσg)(dσg/dd) + (ΔPf/Δ√β1g)(d√β1g/dd) + (ΔPf/Δβ2g)(dβ2g/dd)        (47)
The terms ΔPf/Δµg, ΔPf/Δσg, ΔPf/Δ√β1g and ΔPf/Δβ2g can be calculated using the finite difference method with the Pearson system program, and the remaining terms can be obtained from Equations 5–8 as follows:

dµg/ddk   = Σ_{ik=1}^{3} (∂µg/∂lk·ik)(dlk·ik/ddk)   + Σ_{ik=1}^{3} (∂µg/∂wk·ik)(dwk·ik/ddk)        (48)
dσg/ddk   = Σ_{ik=1}^{3} (∂σg/∂lk·ik)(dlk·ik/ddk)   + Σ_{ik=1}^{3} (∂σg/∂wk·ik)(dwk·ik/ddk)        (49)
d√β1g/ddk = Σ_{ik=1}^{3} (∂√β1g/∂lk·ik)(dlk·ik/ddk) + Σ_{ik=1}^{3} (∂√β1g/∂wk·ik)(dwk·ik/ddk)      (50)
dβ2g/ddk  = Σ_{ik=1}^{3} (∂β2g/∂lk·ik)(dlk·ik/ddk)  + Σ_{ik=1}^{3} (∂β2g/∂wk·ik)(dwk·ik/ddk)       (51)
where dk is the design variable related with the k-th random variable. The partial derivatives in Equations 48–51 can be calculated by directly differentiating Equations 5–8 as follows:

∂µg/∂lk·ik = Σ_{i1=1}^{3} w1·i1 · · · Σ_{ik−1=1}^{3} wk−1·ik−1 Σ_{ik+1=1}^{3} wk+1·ik+1 · · · Σ_{in=1}^{3} wn·in · wk·ik · ∂g(l1·i1, . . . , ln·in)/∂lk·ik        (52)

∂µg/∂wk·ik = Σ_{i1=1}^{3} w1·i1 · · · Σ_{ik−1=1}^{3} wk−1·ik−1 Σ_{ik+1=1}^{3} wk+1·ik+1 · · · Σ_{in=1}^{3} wn·in · g(l1·i1, . . . , ln·in)        (53)
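The sums in Equations 52 and 53 run over the full factorial grid of levels, so they can be evaluated directly on the stored FFMM experiments. The following is a minimal sketch of Equation 53 (and of the mean itself); the function name and calling convention are illustrative only, and the loop makes explicit that the cost grows as 3^n, which is the motivation for RSMM.

from itertools import product
import numpy as np

def mean_and_weight_derivative(g, levels, weights, k, m):
    # levels[v], weights[v] : the three levels/weights of variable v
    # g                     : callable evaluated at a grid point
    # returns (mu_g, d mu_g / d w[k][m])  -- Eqs. (5) and (53)
    n = len(levels)
    mean, d_mean_dw = 0.0, 0.0
    for idx in product(range(3), repeat=n):
        point = [levels[v][idx[v]] for v in range(n)]
        w_all = np.prod([weights[v][idx[v]] for v in range(n)])
        mean += w_all * g(point)
        if idx[k] == m:                      # only grid points using level m of variable k
            w_rest = w_all / weights[k][m]   # product of the other variables' weights
            d_mean_dw += w_rest * g(point)
    return mean, d_mean_dw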
For lack of space, the derivation is presented only for the mean. The partial derivatives of σg, √β1g and β2g with respect to lk·ik and wk·ik can be obtained in the same way. It is noted that to calculate the partial derivatives of the moments with respect to lk·ik, we have to calculate the partial derivative of the performance function g with respect to lk·ik. When FFMM is used, it is calculated using backward or forward difference
schemes applied to the previously obtained experimental data. For a three-level case, the scheme can be formulated as follows:

∂g(l1·i1, . . . , ln·in)/∂lk·1 ≅ −[(2h1 + h2)/(h1(h1 + h2))] g(l1·i1, . . . , lk·1, . . . , ln·in) + [(h1 + h2)/(h1h2)] g(l1·i1, . . . , lk·2, . . . , ln·in) − [h1/(h2(h1 + h2))] g(l1·i1, . . . , lk·3, . . . , ln·in)        (54)

∂g(l1·i1, . . . , ln·in)/∂lk·2 ≅ −[h2/(h1(h1 + h2))] g(l1·i1, . . . , lk·1, . . . , ln·in) + [(h2 − h1)/(h1h2)] g(l1·i1, . . . , lk·2, . . . , ln·in) + [h1/(h2(h1 + h2))] g(l1·i1, . . . , lk·3, . . . , ln·in)        (55)

∂g(l1·i1, . . . , ln·in)/∂lk·3 ≅ [h2/(h1(h1 + h2))] g(l1·i1, . . . , lk·1, . . . , ln·in) − [(h1 + h2)/(h1h2)] g(l1·i1, . . . , lk·2, . . . , ln·in) + [(h1 + 2h2)/(h2(h1 + h2))] g(l1·i1, . . . , lk·3, . . . , ln·in)        (56)

where h1 = lk·2 − lk·1 and h2 = lk·3 − lk·2. This finite difference scheme can be extended to cases where more than three levels are used in the DOE. When RSMM is used, the partial derivative can be calculated from the previously obtained response surface model g̃ as follows:

∂g(l1·i1, . . . , ln·in)/∂lk·m = 0 if ik ≠ m;  ∂g(l1·i1, . . . , ln·in)/∂lk·m = ∂g̃(x)/∂xk evaluated at x = (l1·i1, . . . , lk·m, . . . , ln·in) if ik = m,  m = 1, 2, 3        (57)
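A compact sketch of the unevenly spaced three-point differences of Equations 54–56 is given below; it is a generic helper (not the authors' code), and the quick check uses a quadratic test function for which the three-point formulas are exact.

def three_point_derivatives(g1, g2, g3, h1, h2):
    # Derivative of g at the three unevenly spaced levels l1, l2 = l1 + h1,
    # l3 = l2 + h2, given the function values g1, g2, g3 (Eqs. 54-56).
    d_l1 = (-(2*h1 + h2) / (h1*(h1 + h2)) * g1
            + (h1 + h2) / (h1*h2) * g2
            - h1 / (h2*(h1 + h2)) * g3)
    d_l2 = (-h2 / (h1*(h1 + h2)) * g1
            + (h2 - h1) / (h1*h2) * g2
            + h1 / (h2*(h1 + h2)) * g3)
    d_l3 = (h2 / (h1*(h1 + h2)) * g1
            - (h1 + h2) / (h1*h2) * g2
            + (h1 + 2*h2) / (h2*(h1 + h2)) * g3)
    return d_l1, d_l2, d_l3

# quick check with g(x) = x**2 on levels 1.0, 1.5, 2.5
print(three_point_derivatives(1.0, 2.25, 6.25, 0.5, 1.0))  # ~ (2.0, 3.0, 5.0)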
The derivatives dlk·ik/ddk and dwk·ik/ddk in Equations 48–51 can be obtained from the relationship between lk·ik, wk·ik and dk. Since lk·ik and wk·ik are determined from the four statistical moments of xk (Eq. 4), the derivatives can be written as follows:

dlk·ik/ddk = (∂lk·ik/∂µxk)(dµxk/ddk) + (∂lk·ik/∂σxk)(dσxk/ddk) + (∂lk·ik/∂√β1xk)(d√β1xk/ddk) + (∂lk·ik/∂β2xk)(dβ2xk/ddk)        (58)

dwk·ik/ddk = (∂wk·ik/∂µxk)(dµxk/ddk) + (∂wk·ik/∂σxk)(dσxk/ddk) + (∂wk·ik/∂√β1xk)(d√β1xk/ddk) + (∂wk·ik/∂β2xk)(dβ2xk/ddk)        (59)
When the explicit formulas for the levels and weights derived by Seo & Kwak (2002) are used, the partial derivatives in Equations 58 and 59 can be obtained by directly differentiating those formulas. If Equation 4 is solved numerically to obtain lk·ik and wk·ik, the partial derivatives can be calculated with the finite difference method. The derivatives dµxk/ddk, dσxk/ddk, d√β1xk/ddk and dβ2xk/ddk are determined from the definition of the optimization problem. In RBDO, the design variable dk is the mean value of xk, that is, µxk, and it is usually assumed that the distribution characteristics do not change during the optimization, so dµxk/ddk becomes 1
and the other derivatives become 0. In the tolerance synthesis problem, dk is the tolerance of dimension xk, which is 3σxk for the usual definition of tolerance, and the other moments are assumed invariant, so dσxk/ddk becomes 1/3 and the other derivatives become 0. With these, all the derivatives and partial derivatives necessary to evaluate Equations 48–51 have been derived, and the design sensitivity of the failure probability can be calculated from Equation 47. The whole procedure described in this section may seem somewhat tedious, but it is easy to implement and its computational efficiency is good; a compact sketch of the assembly of Equation 47 is given below.
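The following is a minimal sketch of that assembly, under the stated assumptions: `pearson_pf` is a hypothetical stand-in for the Pearson-system probability evaluation, and `dmoments_dd` collects the derivatives of the four moments from Equations 48–51.

import numpy as np

def dPf_dd(moments, dmoments_dd, pearson_pf, rel_step=1e-3):
    # Semi-analytic sensitivity of the failure probability (Eq. 47).
    # moments     : [mu_g, sigma_g, sqrt_beta1_g, beta2_g]
    # dmoments_dd : d(moment)/dd for the same four moments (Eqs. 48-51)
    # pearson_pf  : callable returning Pr[g < 0] for a given moment vector
    moments = np.asarray(moments, dtype=float)
    sens = 0.0
    for j in range(4):
        h = rel_step * max(abs(moments[j]), 1.0)   # finite difference step
        up, down = moments.copy(), moments.copy()
        up[j] += h
        down[j] -= h
        dPf_dmoment = (pearson_pf(up) - pearson_pf(down)) / (2.0 * h)
        sens += dPf_dmoment * dmoments_dd[j]       # chain rule, Eq. 47
    return sens

In RBDO the design variable is the mean of a random variable, so only dµ/dd = 1 contributes to `dmoments_dd` unless the distribution shape also changes with the design.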
4.3 Examples
In this section, we present two examples of RBDO performed by RSMM and the design sensitivity analysis explained in the earlier section. The first example is a mathematical problem introduced in (Xiao et al. 1999). The problem is stated as follows:

Minimize   W(d) ≡ πd1² + d2
subject to Pf1 ≡ Pr[g1 ≡ X1³X2 − 95.5 ≤ 0] ≤ 0.0010
           Pf2 ≡ Pr[g2 ≡ X1²X2 − 70.7 ≤ 0] ≤ 0.0010
           1.0 ≤ d1 ≤ 2.0,  20.0 ≤ d2 ≤ 50.0        (60)
where the design variables d1 and d2 are the mean values of the random variables X1 and X2, respectively, and X1 and X2 follow normal distributions with standard deviations 0.1 and 3.0. The initial design is (1.5, 35.0) and the optimization is performed with sequential quadratic programming (SQP). For comparison, the reliability index approach (RIA) is also applied, and the results are summarized in Table 3.9. It is seen that the two methods converge to similar solutions with the second constraint active. However, slight differences in the values of d2 and of the objective function are noticed. When MCS is applied to the final designs found by RSMM and RIA for verification, the actual value of Pf2 is calculated as 0.001012 for RSMM

Table 3.9 Optimization results of first example.

                 RSMM                              RIA
                 W         d1       d2             W         d1       d2
Initial          42.069    1.500    35.000         42.069    1.500    35.000
Final            41.564    2.000    28.998         41.459    2.000    28.893

Probability      Initial      Final        Final by MCS*    Initial      Final        Final by MCS*
Pf1              1.7917e-1    2.5110e-6    8.5000e-6        1.7097e-1    7.6580e-6    1.0000e-5
Pf2              2.6086e-1    9.9975e-4    1.0120e-3        2.5203e-1    9.9958e-4    1.1060e-3

Function calls   W: 27        g1: 342      g2: 342          W: 49        g1: 1011     g2: 972

* Sample size: 1,000,000.
and 0.001106 for RIA. The result of FORM contains a larger error than that of RSMM; in fact, RIA violates the constraint slightly more than RSMM does. RSMM provides a more accurate estimate of the probability of failure than FORM, and this is the reason why the two methods converge to different solutions. The numbers of function calls are also listed in the table. It should be noted that during the optimization with RIA the design sensitivity is calculated with the finite difference method, so judging the efficiency of the method by the number of function calls alone may not be appropriate in this case. Even allowing for this, however, the RBDO by RSMM shows good performance in terms of efficiency. The second example is the optimization of the truss structure introduced in section 3. In this example, a symmetry condition is imposed on the geometry and the load condition with respect to the axis of symmetry (Fig. 3.11). Three new random variables, X1, X2, X3, are introduced which determine the shape of the truss structure, so there are 10 random variables in total. X1, X2, X3 are normally distributed with a standard deviation of 2 cm, and the distribution parameters of the other variables are the same as in Table 3.7. The system requirement is that the displacement at the center point should not exceed 11.5 cm. The optimization problem is formulated as follows:

Minimize   Weight(d)
subject to Pf ≡ Pr[G(X) < 0] ≤ 0.05,  where G(X) = 11.5 − |disp1|
           100 < di < 400,  i = 1, 2, 3        (61)

where di is the mean value of random variable Xi, with an initial value of 200 cm. We apply SQP to solve this problem, and optimization with RIA is also carried out for comparison. The results are summarized in Table 3.10. It is seen that RIA and RSMM find somewhat different solutions. RIA converges to a solution with the constraint active, whereas in the RSMM result the constraint is not active. We perform MCS with 100,000 samples to verify the reliability analysis at the final designs found by the two methods; the results are listed in Table 3.10 as Pf_MCS. It is seen that a greater error in the probability is involved in the result of RIA, and the final design found by RIA actually violates the constraint by a relatively large amount. RSMM finds the solution with 903 function evaluations of G(x), whereas RIA requires 4004. As in the first example of this section, the design sensitivity is calculated using the finite difference method in the case of RIA, so a direct comparison of the numerical efficiency
Figure 3.11 Truss structure with 23 members (RBDO example); shape variables x1, x2, x3, symmetric loads P1–P3, member properties E1, A1 and E2, A2, monitored displacement Disp1.
Table 3.10 Optimization result of truss structure with RSMM and RIA.

                      RSMM                           RIA
d (cm)                115.143, 155.181, 220.897      110.142, 166.210, 204.371
weight/w0             0.9806                         0.9775
Pf                    0.0456                         0.0500
Pf_MCS*               0.0465                         0.0632
G(x) evaluations      903                            4004

* Sample size: 100,000.
via the number of function evaluations is not appropriate. Even allowing for that fact, however, the numerical efficiency of RSMM appears satisfactory in this problem.
5 Conclusions
So far, the moment-based reliability analysis methods FFMM and RSMM have been introduced and illustrated by examples. The following strong points of the moment methods can be identified. Firstly, they do not involve the difficulties of searching for the most probable failure point (MPFP) encountered in FORM/SORM; in particular, the procedure of FFMM is very simple and easy to use. Secondly, not only the probability value but also the PDF and the cumulative distribution function (CDF) of the system response function are made available, which gives deeper insight into the statistical characteristics of the engineering system. Thirdly, they do not use any transformation to deal with non-normal distributions and are therefore free from the associated deterioration of accuracy and efficiency suffered by other existing methods. At the same time, the moment methods have certain drawbacks and limitations, and these should be well recognized before application. The information of the MPFP is very important in calculating small probabilities in the tail region, but moment methods do not make use of it. Also, the approximation using only a finite number of moments places some limitation on the accuracy. For this reason, moment methods are often considered more suitable for high-probability problems. In most of our test cases, the probability calculation using the Pearson system gives reliable results up to about the 4-sigma level, corresponding to a probability of the order of 10−5. At probability levels below 10−5, however, the failure probability found by the Pearson system might not be reliable. The non-uniqueness of the PDF mentioned in section 2.3 should also be kept in mind. The accuracy of the moment estimation is determined by the integration order provided by the method and by the degree of nonlinearity of the performance function in the high-probability region, which is defined in terms of the coefficients of variation of the variables. Large system nonlinearity and large coefficients of variation of the variables can degrade the accuracy of the moment estimation. The accuracy of the moment calculation can be improved by introducing more levels into the DOE, at a substantially higher computational cost. In RSMM, the probability converges to the value that would be found by FFMM, since all the samples are taken from the set of full factorial designs. The convergence is expedited because the samples important to the probability are selectively taken in
the early stage of RSMM, while the rest of the samples are approximated with a response surface. The initial approximation is made using samples with high weights, and additional samples are selected considering the impact they will have on the probability. Up to now, we have covered examples with uni-modal distributions, for which the levels with the highest weights are the mid-levels; in the case of a non-uni-modal shape, the selection of the initial set of experiments must be changed accordingly. One difficulty in RSMM is determining the stopping criterion. A tight criterion is desirable, but a compromise must be made between accuracy and computational cost; this is a topic for future study. Since the current version of RSMM is based on the 3^n FFMM, in cases where a more accurate DOE is necessary an extension of the method should be made accordingly. A semi-analytic design sensitivity analysis has been proposed in combination with FFMM and RSMM and has been shown to be robust and accurate through several tests. The proposed RBDO procedure has been applied successfully to simple RBDO problems, and its efficiency turns out to be comparable to or even better than the conventional RIA. However, it should be noted that RIA is a classical approach; nowadays much more efficient algorithms and strategies are available in the field of RBDO, such as the performance measure approach (Lee & Kwak 1987–88, Tu et al. 1999, Youn & Choi 2003) and single-loop strategies like the sequential optimization and reliability assessment (SORA) method (Du & Chen 2004). A comparison with those methods is not provided in this chapter; however, in our comparative study it was recognized that the efficiency of the RBDO procedure proposed here cannot match that of a single-loop method like SORA, because of its double-loop nature. It is not easy to adopt the single-loop optimization strategy into RBDO with the moment methods. However, the better accuracy of the moment methods compared with the first-order reliability approximation remains a strong point relative to FORM-based RBDO methods.
References Abramowitz, M. & Stegun, I.A. 1972. Handbook of mathematical functions. 10th ed. New York: Dover. Bjerager, P. 1988. Probability integration by directional simulation. ASCE Journal of Engineering Mechanics 114(8):1285–1302. Bjerager, P. 1991. Methods for structural reliability computation. In: F. Casciati (ed.), Reliability problems: general principles and applications in mechanics of solid and structures. New York: Springer Verlag. Breitung, K. 1984. Asymptotic approximation for multi-normal integrals. ASCE Journal of Engineering Mechanics 10(3):357–366. Bucher, C.G. 1988. Adaptive sampling: an iterative fast Monte-Carlo procedure. Structural Safety 5(2):119–126. Bucher, C.G. & Bourgund, U. 1990. A fast and efficient response surface approach for structural reliability problems. Structural Safety 7:57–66. Creveling, C.M. 1997. Tolerance design: A handbook for developing optimal specification. Cambridge, MA: Addison-Wesley. D’Errico, J.R. & Zaino, N.A. 1988. Statistical tolerancing using a modification of Taguchi’s method. Technometrics 30(4):397–405. Ditlevsen, O. 1979. Narrow reliability bounds for structural systems. Journal of Structural Mechanics 7:453–472.
Du, X. & Chen, W. 2004. Sequential optimization and reliability assessment for efficient probabilistic design. ASME Journal of Mechanical Design 126(2):225–233. Enevoldsen, I. & Sørensen, J.D. 1994. Reliability-based optimization in structural Engineering. Structural Safety 15:169–196. Engels, H. 1980. Numerical quadrature and qubature. New York: Academic Press. Evans, D.H. 1972. An application of numerical integration techniques to statistical tolerancing, III – general distributions. Technometrics 14:23–35. Faravelli, L. 1989. Response surface approach for reliability analysis. ASCE Journal of Engineering Mechanics 115(12):2763–2781. Fiessler, B. et al. 1979. Quadratic limit states in structural reliability. ASCE Journal of Engineering Mechanics 105(4):661–676. Frangopol, D.M. & Corotis, R.B. 1996. Reliability-Based Structural System Optimization: State-of-the-Art versus State-of the-Practice. In F.Y. Cheng (ed.), Analysis and Computation: Proceedings of the Twelfth Conference held in Conjunction with Structures Congress XIV pp. 67–78. Greenwood, W.H. & Chase, K.W. 1990. Root sum squares tolerance analysis with nonlinear problems. ASME Journal of Engineering for Industry 112:382–384. Hahn, G.J. & Shapiro, S.S. 1967. Statistical models in engineering. John Wiley & Sons: New York. Hasofer, A.M. & Lind, N.C. 1974. Exact and invariant second order code format. ASCE Journal of Engineering Mechanics 100(1):111–121. Hohenbichler, M. & Rackwitz, R. 1981. Nonnormal dependent vectors in structural reliability. ASCE Journal of Engineering Mechanics 107:1127–1238. Hong, H.P. 1996. Point-estimate moment-based reliability analysis. Civil Engineering Systems 13(4):281–294. Huh, J.S. et al. 2006. Performance evaluation of precision nanopositioning devices caused by uncertainties due to tolerances using function approximation moment method. Review of Scientific Instrument 77:15–103. Johnson, N.L. et al. 1995. Continuous univariate distributions. New York: John Wiley & Sons. Kiureghian, A.D. et al. 1987. Second-order reliability approximations. ASCE Journal of Engineering Mechanics 113(8):1208–1225. Kiureghian, A.D. 1996. Structural reliability methods for seismic safety assessment: a review. Engineering Structures 95:412–24. Kiureghian, A.D. & Dakessian, T. 1998. Multiple design points in first and second-order reliability. Structural Safety 20:37–49. Koyluoglu, H.U. & Nielsen, S.R.K. 1994. New approximations for SORM integrals. Structural Safety 13:235–246. Lee, S.H. & Kwak, B.M. 2005. Reliability based design optimization using response surface augmented moment method. Proceedings of 6th World Congress on Structural and Multidisciplinary Optimization, Rio de Janeiro, Brazil. Lee, S.H. & Kwak, B.M. 2006. Response surface augmented moment method for efficient reliability analysis. Structural Safety 28:261–272. Lee, T.W. & Kwak, B.M. 1987–88. A reliability-based optimal design using advanced first order second moment method. Mechanics of Structures and Machines 15(4):523–542. Lee, W.J. & Woo, T.C. 1990. Tolerances: their analysis and synthesis. ASME Journal of Engineering for Industry 112:113–121. Li, K.S. & Lumb, P. 1985. Reliability analysis by numerical integration and curve fitting. Structural Safety 3:29–36. Madsen, H.O. et al. 1986. Methods of structural safety. Englewood Cliffs: Prentice-Hall. Melchers, R.E. 1989. Importance sampling in structural systems. Structural Safety 6(1):3–10.
Moré, J.J. et al. 1980. User guide for MINPACK-1. Argonne National Labs Report ANL-80-74. Argonne, Illinois. Mori, Y. & Ellingwood, B.R. 1993. Time-dependent system reliability analysis by adaptive importance sampling. Structural Safety 12(1):59–73. Myers, R.H. & Montgomery, D.C. 1995. Response surface methodology: process and product optimization using designed experiments. New York: John Wiley & Sons. Nie, J. & Ellingwood, B.R. 2005. Finite element-based structural reliability assessment using efficient directional simulation. ASCE Journal of Engineering Mechanics 131(3):259–267. Rackwitz, R. & Fiessler, B. 1978. Structural reliability under combined random load sequences. Computers and Structures 9:489–494. Rajashekhar, M.R. & Ellingwood, B.R. 1993. A new look at the response surface approach for reliability analysis. Structural Safety 12:205–220. Rosenblueth, E. 1981. Two-point estimate in probabilities. Applied Mathematical Modeling 5:329–335. Seo, H.S. & Kwak, B.M. 2002. Efficient statistical tolerance analysis for general distributions using three-point information. International Journal of Production Research 40(4):931–44. Taguchi, G. 1978. Performance analysis design. International Journal of Production Research 16:521–530. Tu, J. et al. 1999. A new study on reliability-based design optimization. ASME Journal of Mechanical Design 121(4):557–564. Xiao, Q. et al. 1999. Computational strategies for reliability based multidisciplinary optimization. Proceedings of the 13th ASCE EMD Conference. Youn, B.D. & Choi, K.K. 2003. Hybrid analysis method for reliability-based design optimization. ASME Journal of Mechanical Design 125(2):221–232. Zhao, Y.G. & Ono, T. 2001. Moment methods for structural reliability. Structural Safety 23:47–75.
Chapter 4
Efficient approaches for system reliability-based design optimization Efstratios Nikolaidis University of Toledo, Toledo, OH, USA
Zissimos P. Mourelatos & Jinghong Liang Oakland University, Rochester, MI, USA
ABSTRACT: Two efficient approaches for series system reliability-based design optimization (RBDO) are presented. Both approaches apportion optimally the system reliability among the failure modes by considering the target values of the failure probabilities of the modes as design variables. The first approach uses a sequential optimization and reliability assessment (SORA) approach. It approximates the coordinates of the most probable points of the failure modes as the design changes through linear extrapolation. The second system RBDO approach uses a single-loop method where the searches for the optimum design and for the most probable failure points proceed simultaneously. The efficiency and robustness of the single-loop based approach is enhanced through an easy to implement active set strategy. The two approaches are illustrated and compared on design examples. It is shown that both approaches yield more efficient designs than a conventional component RBDO formulation. Moreover, it is shown that the single-loop approach is considerably more efficient than the SORA approach.
1 Introduction
This section presents an overview of reliability-based design optimization (RBDO). Two groups of methods for performing component RBDO efficiently are explained. The section concludes with an outline of this chapter.
1.1 RBDO: Benefits and challenges
Competitive pressures compel manufacturers to minimize weight and cost while maintaining an acceptable safety level. Designers face significant uncertainty such as uncertainty in the operating conditions, material properties and the geometry of a design. In this chapter, the term “uncertainty’’ refers to both random and epistemic uncertainties. In deterministic design, uncertainty is traditionally accounted for by using safety factors and conservative design (characteristic) values for strength and loads. This empirical approach often leads to overdesign or occasionally to unsafe designs. Moreover, deterministic design is not adequate for novel designs involving new materials and geometries. RBDO provides safer and more efficient designs than deterministic design optimization because it explicitly accounts for uncertainty using probability theory. As a result, RBDO is being increasingly accepted as an effective design tool for aerospace, automotive, civil and ocean engineering structures. An overview of RBDO is given in
(Frangopol & Maute 2004) with applications to aerospace, civil and microelectromechanical system design. Studies have demonstrated that RBDO can produce a more efficient design than a deterministic approach without sacrificing safety, or alternatively, RBDO can yield a safer design than a deterministic approach for a given maximum allowable cost. Here the safety of a design is measured by its system reliability. A designer faces many challenges when applying RBDO to practical problems. The high computational cost and the consideration of the system failure probability in RBDO are two principal challenges that the study presented in this chapter tries to address. Finding the probability of failure of a design requires repeated structural analyses for different sets of values of the random variables, which may be computationally expensive, especially when finite element analysis is required (Madsen et al. 1986, Moses 1995, Melchers 2001). Also, the computational cost is grossly compounded when calculating the probability of failure of different designs during the search for the optimum reliability-based design. For example, if a second moment method is employed to calculate the probability of failure, then two nested optimizations must be performed. This approach, called the double-loop (DLP) approach, requires one optimization for finding the optimum values of the design variables (outer loop) and a second optimization (inner loop) for finding the most probable point (MPP), which is needed for estimating the probabilities of the failure modes. Another principal challenge in RBDO is to consider the system reliability. Calculating system reliability is expensive, especially if one wants to account for the probability of the intersection of the failure modes. As a result, most RBDO studies constrain only the safety levels of the individual failure modes. Thus, the user must decide the minimum allowable safety level (or equivalently safety index) of each failure mode (constraint) instead of specifying only an acceptable system safety level. This approach has several shortcomings. First, the required safety levels of the failure modes are usually not optimal. Second, it does not allow consideration of the cost required to achieve a certain reliability level for each failure mode. Finally, it does not account for the interactions of the failure modes (e.g., the probability of these modes occurring simultaneously). We believe that we can obtain considerably better designs by enabling the user to specify the minimum acceptable safety level of the system only, and allowing the optimizer to determine the required safety levels of the failure modes, instead of asking the user to select them.
1.2 Progress in reducing the computational cost of RBDO
To address the problem of the high computational cost required by DLP approaches, two new classes of RBDO formulations have been recently proposed. The first class decouples the RBDO process into a sequence of cycles consisting of a deterministic design optimization followed by a reliability assessment of the found optimum (Du & Chen 2004, Royset et al. 2001). The constraints in the deterministic optimization dictate that the design does not fail at a checking point, which is some approximation of the MPP. The Sequential Optimization and Reliability Assessment (SORA) method uses the reliability information from the previous cycle to shift the violated deterministic constraints into the feasible domain.
SORA appears to be similar to the safety-factor approach reported in (Wu et al. 2001). Another decoupled approach has also been
proposed in (Royset et al. 2001). However, it is restricted to deterministic design variables in the design optimization loop. The second class of RBDO methods converts the problem into an equivalent, single-loop deterministic optimization by integrating the two optimization loops into one. The approach in (Thanedar & Kodiyalam 1992) uses a mean value first-order reliability method. However, it is numerically inaccurate or unstable due to a wrong estimation of the probabilistic constraints. The single-loop, single-vector (SLSV) approach (Chen et al. 1997) is the first attempt at a truly single-loop approach. It improves the RBDO computational efficiency by eliminating the inner reliability estimation loops. However, it requires a probabilistic active set strategy for identifying the active constraints, which may hinder its practicality. A single-level RBDO approach has also been reported in (Kuschel & Rackwitz 2000, Streicher & Rackwitz 2004, Agarwal et al. 2004). It integrates the nested optimization loops into one by enforcing the Karush-Kuhn-Tucker (KKT) optimality conditions of the inner loop as equality constraints in the outer design optimization loop. In doing so, however, it increases the number of design variables because it uses the standard normal variates of each constraint as additional design variables. This can increase the computational cost substantially, especially for practical problems with many design variables and constraints. Furthermore, the approach requires second-order derivatives that are computationally costly and difficult to calculate accurately. One of the system RBDO approaches presented in this chapter uses the single-loop RBDO of Liang et al. (2004). This single-loop system RBDO approach is summarized in section 2.3.
1.3 Approaches for system RBDO
This chapter presents two RBDO approaches for series systems. The probability of failure of a series system is approximated using the first-order or the second-order Ditlevsen upper bounds (Ditlevsen 1979). In the first-order bound, the system failure probability is approximated by the sum of the failure probabilities of all failure modes. This approximation is accurate for most systems whose probability of failure is low (e.g., less than 10−5). The second-order bound provides a more accurate approximation of the system failure probability than the first-order bound by accounting for the joint probabilities of pairs of failure modes. The first approach, which was first reported in (Ba-abbad et al. 2006), uses a sequence of deterministic optimization problems. The MPPs of the failure modes are approximated using linear extrapolation. This approach is a modified formulation of the SORA method in which the reliabilities of all failure modes are considered as design variables. As a result, the approach allows for an optimal apportionment of the reliability of a system among its failure modes. The second approach for system RBDO utilizes the single-loop RBDO approach of (Liang et al. 2004) to determine the optimum design. This approach approximates the MPPs in each design iteration using a relation representing the KKT optimality conditions instead of linear extrapolation (Ba-abbad et al. 2006). To facilitate convergence, an active set strategy is used for identifying the critical failure modes whose failure probabilities significantly affect the system failure probability in each
iteration. The failure probabilities of the remaining non critical failure modes are assumed zero. Three major developments are involved in the above two approaches for system RBDO. The fundamental development is the approach that allows for an optimal apportionment of the reliability of a system among its components. Although this approach increases the number of design variables, it does not affect significantly the algorithmic efficiency because the objective function is not a function of the additional design variables. Furthermore, the inclusion of the failure probabilities of the modes in the design variable set does not increase significantly the cost of calculating the constraints. The second major development is the use of approximations (SORA or a single-loop approach) to solve efficiently the RBDO problem. The single-loop approach is more efficient than the SORA approach because the single-loop approach eliminates the sequence of solutions of deterministic optimization problems. Moreover, the single-loop approach does not approximate the MPP using extrapolation. Finally, the third development is an active set strategy used by the single-loop system RBDO approach. This strategy updates the active constraint set in each iteration to ensure algorithmic stability. Both system RBDO approaches presented in this chapter provide more efficient designs than component RBDO approaches because the system RBDO approaches account for the relation between the cost (or weight) and the reliability of each failure mode. Specifically, these approaches optimize the reliabilities of the failure modes using information about the sensitivity derivatives of the reliabilities of the modes and the sensitivity derivatives of the cost with respect to the design variables. An optimality condition for the reliabilities of the modes that involves these sensitivities is presented in section 4. The details of the proposed system RBDO approaches for series systems including algorithms for these approaches are described in section 2. The efficiency and accuracy of the two approaches are demonstrated and compared in section 3, using two examples that involve a cantilever beam and an internal combustion engine. Section 4 presents an optimality condition for a general, system RBDO problem and shows that the optimum cantilever beam design satisfies this condition. Section 5 presents the conclusions.
2 System RBDO methods
The two system RBDO approaches are presented in this section. First, a general form of the series system RBDO problem is presented in subsection 2.1. An equivalent performance measure approach formulation, in which the failure probabilities of the constraints are considered as design variables, is derived. This formulation is used as a basis to develop the SORA-based and single-loop based approaches in sections 2.2 and 2.3.
2.1 Formulation of the system RBDO problem
The general system RBDO problem seeks the most efficient design whose system probability of failure does not exceed a maximum allowable value pf^all. The formulation of
the system RBDO problem is as follows:

min_{d, µX} f(d, µX, µP)        (1)
subject to  psys = P[∪_{i=1}^{n} {Gi(d, X, P) ≤ 0}] ≤ pf^all
            dL ≤ d ≤ dU,  µX^L ≤ µX ≤ µX^U

where d ∈ Rk is the vector of deterministic design variables, X ∈ Rm is the vector of random design variables and P ∈ Rq is the vector of random parameters. Symbols µX and µP denote the mean values of the random variables and random parameters, respectively. A bold letter indicates a vector, an upper-case letter indicates a random variable or parameter and a lower-case letter indicates a realization of a random variable or parameter. The probability of failure of a system with n failure modes is denoted by psys and is equal to P[∪_{i=1}^{n} Gi(d, X, P) ≤ 0], where Gi(d, X, P) is the performance function of the ith failure mode. In subsequent formulations of the RBDO problem, the side constraints on the design variables are omitted for simplicity. The system failure probability is a function of the probabilities pfi of the failure modes and of their joint probabilities,
psys = Σ_{i=1}^{n} pfi − Σ_{i=2}^{n} Σ_{j<i} pfij + · · · + (−1)^{n−1} pf1,2,...,n        (2)
Let us constrain the probabilities of failure of the modes to be no greater than some bounds pfi^t, called target probabilities, and consider these probabilities as design variables. Then the system RBDO formulation in Eq. (1) becomes

min_{d, µX, pf1^t, ..., pfn^t} f(d, µX, µP)        (3)
subject to  pfi ≤ pfi^t,  i = 1, 2, . . . , n
            psys ≈ Σ_{i=1}^{n} pfi^t − Σ_{i=2}^{n} Σ_{j<i} pfij + · · · + (−1)^{n−1} pf1,2,...,n ≤ pf^all
In this formulation, the optimizer should determine the optimum values of the target failure probabilities of the modes, besides the values of the original design variables, d and µX . Consider a performance measure approach (PMA) formulation (Tu et al. 1999, Youn et al. 2003) of problem (2) in which the constraints on the failure probabilities of the modes are written in terms of their safety indices, βi . In the PMA formulation, instead of checking if the minimum distance of the MPP from the origin is no less than
the target value of the safety index of each failure mode, we check if the performance function is nonnegative at the MPPs, XMPP(βi^t), PMPP(βi^t). This formulation is

min_{d, µX, β1^t, ..., βn^t} f(d, µX, µP)        (4)
subject to  Gi[d, XMPP(βi^t), PMPP(βi^t)] ≥ 0,  i = 1, 2, . . . , n
            psys ≈ Σ_{i=1}^{n} Φ(−βi^t) − Σ_{i=2}^{n} Σ_{j<i} pfij + · · · + (−1)^{n−1} pf1,2,...,n ≤ pf^all
where Φ is the cumulative distribution function of a standard normal random variable (zero mean, unit standard deviation), Φ(−βi^t) = pfi^t, and XMPP and PMPP are the values of the random design variables and parameters at the MPP of each constraint. Symbol βi^t denotes the target value of the safety index of the ith failure mode. As Eq. (3) indicates, for each mode, XMPP and PMPP are functions of the target value of its safety index. The above RBDO problem is too expensive to solve for most practical problems because the probabilities of failure of the modes need to be calculated each time the design changes. Therefore, two efficient approaches that use approximations of the system failure probability in terms of the failure probabilities of the modes and approximations of the MPP are presented below. The first approach uses linear extrapolation to find the MPP of a design as a function of its safety index, while the second solves the optimality conditions for the MPP.
2.2 SORA-based system RBDO approach
2.2.1 Formulation
Assume that the system failure probability is equal to the sum of the probabilities of the failure modes, psys ≈ Σ_{i=1}^{n} pfi ≈ Σ_{i=1}^{n} Φ(−βi^t). This is a conservative approximation, and it is accurate for small failure probabilities (of the order of 10−5; Liang et al. 2007). The key idea of the SORA approach is to approximate the coordinates of the most probable point as a function of the value of the safety index. Let UMPP(βi) = T^{−1}[XMPP(βi), PMPP(βi)] denote the vector of reduced values of the random variables and parameters at the MPP, and T the transformation from the space of the reduced variables to the space of the original variables. The MPP, UMPP(βi), is approximated as a function of the value of the safety index βi, given the MPP UMPP(βi0) for a baseline value of the safety index βi0, as follows (Fig. 4.1):
UMPP(βi) = (βi/βi0) UMPP(βi0)        (5)
Figure 4.1 Approximation of the MPP as a function of the safety index.
This approximation allows recasting the system RBDO problem formulation in Eq. (4) as a deterministic optimization problem as follows:

min_{d, µX, β1^t, ..., βn^t} f(d, µX, µP)        (6)
subject to  Gi[d, T((βi^t/βi0) UMPP(βi0))] ≥ 0,  i = 1, 2, . . . , n
            psys ≈ Σ_{i=1}^{n} Φ(−βi^t) ≤ pf^all
In the above formulation, the target failure probabilities of the modes have been replaced by the corresponding target values of the safety indices. Since there is a one-to-one relation between the probability of failure and the safety index of a mode, this substitution of the design variables does not change the solution of the optimization problem. The solution of Eq. (6) is a design whose failure modes have safety indices approximately equal to βi^t. These approximate values are called herein "projected values of the safety indices". The approximation of the design point as a function of the safety index in Eq. (5) is only valid in a trust region around the baseline value of the safety index. The progress of the optimization should be monitored in each iteration, and the change in the value of the safety index should be constrained within some move limit so as to remain in the trust region. Available methods for optimization using trust regions can be used for this purpose (Moré & Sørensen 1983, Steihaug 1983, Sørensen 1994).
2.2.2 Algorithm
Figure 4.2 describes the system RBDO algorithm. Each cycle of this algorithm consists of three operations: (a) reliability analysis using a First-Order Reliability Method
Figure 4.2 Algorithm of a SORA-based system RBDO method (select initial design; perform PMA analysis to find the MPP for the minimum acceptable value of the safety index; approximate the MPP as a function of the safety index; solve the approximate deterministic optimization problem; repeat until convergence).
(FORM) of the initial design or of the design obtained from the previous cycle, to check if this design has acceptable reliability, (b) PMA reliability analysis of the design to determine the MPPs of the failure modes, UMPP(βi0), and (c) approximate deterministic optimization to update the optimum design and find the target values of the probabilities of the failure modes, pfi^t. First, we perform a deterministic optimization using a factor of safety to find an initial design. Then we perform FORM reliability analysis of the deterministic optimum. At this stage we can identify the non-critical failure modes, that is, those failure modes of the deterministic optimum design that do not affect the system reliability significantly because they have negligible probability of failure compared to the failure probabilities of the remaining critical modes. The target values of the safety indices of these modes are removed from the set of design variables to facilitate convergence. Then we perform an inverse reliability analysis (PMA) of the deterministic optimum assuming equal probabilities of failure, which are obtained by dividing the allowable failure probability of the system by the number nc of critical failure modes, pfi^t = pf^all/nc. Finally, we perform an approximate deterministic optimization considering the target values of the safety indices of the nc critical failure modes, β1^t, . . . , βnc^t, as design variables in addition to the original design variables. Now the optimizer seeks both the optimum values of the design variables and the optimum target values of the safety indices to minimize the objective function (i.e. cost or weight). In this case, the optimization problem formulation (6) becomes
min_{d, µX, β1^t, ..., βnc^t} f(d, µX, µP)
subject to  Gi[d, T((βi^t/βi0) UMPP(βi0))] ≥ 0,  i = 1, 2, . . . , nc        (6.b)
            psys ≈ Σ_{i=1}^{nc} Φ(−βi^t) ≤ pf^all
Once we have found the optimum, we check the failure probabilities of all failure modes (including the non-critical ones) using FORM. At this step, we may remove from (or add to) the set of design variables the target values of the safety indices of failure modes with low (or high) failure probabilities. Then we perform PMA analysis for the optimum target values of the safety indices found from the deterministic optimization. Finally, we solve the approximate deterministic optimization problem again. We repeat this process until convergence.
2.3 Single-loop RBDO approach
The proposed approach is based on the single-loop RBDO algorithm of Liang et al. (2004), which is referred to herein as a component, single-loop algorithm. It is based on an equivalent deterministic optimization formulation, which eliminates the need for inner reliability loops without increasing the number of design variables. For completeness, a brief overview of the component, single-loop RBDO algorithm is given below.
2.3.1 Overview of a component single-loop RBDO
Designers replace the constraint on the system reliability with constraints dictating that the reliabilities of the components are greater than or equal to some target values, in order to circumvent the calculation of the system reliability. These target values are often chosen based on judgment and experience. A typical component RBDO problem is formulated as

min_{d, µX} f(d, µX, µP)        (7)
subject to  Ri = P[Gi(d, X, P) ≥ 0] ≥ Ri^t,  i = 1, 2, . . . , n
where Ri = 1 − pfi is the actual reliability level of the ith constraint (or failure mode) and Ri^t is the corresponding minimum allowable reliability. A method solving the optimization problem (7) directly constitutes the double-loop RBDO method. This method employs two nested optimization loops: the design optimization loop (outer) and the reliability assessment loop (inner). The latter is needed for the evaluation of each probabilistic constraint. If the probability of failure is estimated using FORM, then every time the design optimization loop calls for a constraint evaluation, a reliability assessment loop is executed that searches for the MPP in the standard normal space. If the random variables are not normal, a nonlinear transformation maps the original space to the standard (or reduced) normal space. Using an R-percentile formulation (Du & Chen 2004), the RBDO problem (7) can be expressed as

min_{d, µX} f(d, µX, µP)        (8)
subject to  Gi(d, X, P) ≥ 0,  i = 1, 2, . . . , n
where the vectors X and P are evaluated at the MPP, i.e. X = XMPP and P = PMPP for each constraint. The objective function is minimized subject to constraints that are evaluated in the X space. It is, therefore, necessary to have a consistent relationship between the vectors d, µX, µP, for which the objective function is evaluated, and the vectors d, X, P, for which the constraints are evaluated. This is done by solving the Karush-Kuhn-Tucker (KKT) optimality conditions (Papalambros & Wilde 2000) of the reliability loops in the design optimization loop for X and P. Using the PMA method, the performance measure Gp = min_U G(U) is minimized on
the beta-circle H(U) = ||U|| − βt = 0 in the standard normal space U. At the optimal point, according to the KKT optimality condition, the gradient ∇G(U) of the limit state and the gradient ∇H(U) of the beta-circle are collinear and point in opposite directions. This condition yields
U = −βt · α
(9)
where α = ∇GU(d, X, P)/||∇GU(d, X, P)||
(10)
is the normalized gradient of the constraint in U-space. Based on Eq. (9), the following relations between X, P and µX , µP hold for normal random variables, X = µX − σ · βt · α,
P = µP − σ · βt · α
(11)
where α = σ · ∇GX,P(d, X, P)/||σ · ∇GX,P(d, X, P)||. Using Eqs (11), the double-loop RBDO problem (8) is transformed into the following single-loop, equivalent deterministic optimization problem:

min_{d, µX} f(d, µX, µP)        (12)
subject to  Gi(d, Xi, Pi) ≥ 0,  i = 1, 2, . . . , n
where Xi = µX − σ · βi^t · αi, Pi = µP − σ · βi^t · αi, and αi = σ · ∇Gi,X,P(d, Xi, Pi)/||σ · ∇Gi,X,P(d, Xi, Pi)||. Symbol αi represents the normalized gradient of the ith constraint and σ is the standard deviation vector of the random variables X and random parameters P. In the single-loop RBDO problem (12), the objective function is evaluated at the mean point d, µX, µP and the constraints are calculated at the point d, X, P. The relationship of Eq. (11) is used to evaluate the constraints consistently with the values of the design variables. The single-loop method does not search for the MPP of each constraint in each iteration. Instead, in each iteration, an approximation of the MPPs of the active constraints is used for each constraint. The sequence of MPP approximations converges to the correct MPP. This dramatically improves the efficiency of the single-loop method without compromising accuracy. Figures 4.3a and 4.3b show a combined flowchart of the component single-loop method and the proposed system single-loop RBDO method (see section 2.3.2). A minimal sketch of the MPP approximation of Eq. (11) follows.
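The sketch below only illustrates the update of Eq. (11) for normal random variables; the function name and the numerical example are assumptions, not the authors' implementation.

import numpy as np

def single_loop_point(mu_x, sigma_x, beta_t, grad_g):
    # Approximate MPP used by the single-loop method (Eqs. 11-12):
    # X = mu_X - sigma * beta_t * alpha, where alpha is the normalized
    # gradient of the constraint in standard normal space.
    scaled = sigma_x * np.asarray(grad_g, dtype=float)
    alpha = scaled / np.linalg.norm(scaled)
    return mu_x - sigma_x * beta_t * alpha

# example: constraint G = X1*X2 - 15 with gradient (X2, X1) at the mean point
mu = np.array([4.0, 5.0]); sig = np.array([0.4, 0.5])
print(single_loop_point(mu, sig, beta_t=3.0, grad_g=[5.0, 4.0]))

Because the gradient is re-evaluated at the point of the previous iteration rather than found by an inner MPP search, each constraint evaluation costs a single gradient call, which is the source of the method's efficiency.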
Figure 4.3a Overview of the single-loop RBDO algorithm (initialize the design point, distribution data, bounds and target failure probabilities; compute the initial constraint gradients; then repeat the calculations of Figure 4.3b, for problem (12) in component RBDO or problem (19) in system RBDO, until convergence).
The component, single-loop algorithm consists only of the operations in boxes with a solid border; operations in boxes with a dashed border belong to the system single-loop algorithm. For the component single-loop method, the initial point d0, µX0, µP and the target safety index vector βt for all constraints are first specified. The user also specifies the standard deviation vector σ for the random variables and random parameters and the upper and lower bound vectors (ub and lb, respectively) for all deterministic and probabilistic design variables. The initial point d0, X0, P0 that is needed to evaluate the constraints is taken equal to d0, µX0, µP, i.e. X0 = µX0 and P0 = µP. At this point, the initial normalized gradient vector α for the ith constraint is calculated as
(13)
At the kth iteration of the optimization loop, the objective function is calculated at point dk , µkX , µP . For the evaluation of the constraints, the algorithm checks if the optimizer has changed the design vector µkX compared with the previous iteration;
98
Structural design optimization considering uncertainties
dk, kX Change? Yes, k k1
No
Calculate ␣ik⋅∇Gi(X,P)(dk1,Xk1 ,Pk1 )/|| •∇Gi(X,P) (dk,Xk1 ,Pk1 )|| i i i i
Yes ptf ≤ ε ? i
No
Assign CF(i) 0; bti 0
Assign CF(i) 1 Calculate bti Φ−1(1ptf ) i
Assign MaxBeta max(bti ) CF (i) 0?
Yes
βi t MaxBeta
Perform new iteration
Calculate Xki Xk βit•␣ki •, Pik Pbti •␣ki • Yes
CF (i) 0?
Calculate Gi(dk,Xki ,Pki ) No
No
Gi 0? Yes Update: C F (i) 1
Calculate Gi(dk,Xki ,Pik) for active constraints n n Calculate ∑ ptfi ∑ max pfij i1 i=2 j
Is f minimized?
Figure 4.3b Calculations performed in one iteration of single-loop algorithm.
if not, the current gradient vector αk is used to calculate Xk = µkX − σ · βt · αk and Pk = µP − σ · βt · αk for each constraint. If µkX has changed from the previous iteration, the normalized gradient vector αk is updated before it is used to calculate Xk and Pk , which are needed for the constraint evaluation.
E f f i c i e n t a p p r o a c h e s f o r s y s t e m r e l i a b i l i t y-b a s e d d e s i g n o p t i m i z a t i o n
99
This is an essential step for keeping the design variable vector µX and the X, P vectors consistent, resulting in a robust and stable algorithm. Furthermore, it greatly improves the efficiency since the algorithm avoids unnecessary gradient evaluations. When non-normal variables are used, equivalent σXN and µN X (Haldar & Mahadevan 2000) are calculated every time the optimizer updates the design variables and design parameters. The main advantage of the component single-loop method is that it eliminates the repeated reliability analysis loops without increasing the number of design variables or adding equality constraints. Instead of performing nested design optimization and reliability loops, it solves an equivalent single-loop deterministic optimization problem. The consistency between the design variable vector d, µX , µP and vector d, X, P needed to evaluate the constraints makes the single-loop algorithm robust. It should be noted that the component single-loop RBDO algorithm does not require an active constraint set as is the case with the SLSV algorithm (Chen et al. 1997). The active constraint set is simply identified by the algorithm. This is a significant advantage, which simplifies the implementation of the single-loop algorithm and enhances its robustness and efficiency.
2.3.2 Single-loop approach for seri es sy stem RB D O The component single-loop RBDO approach of the previous subsection handles RBDO problems in which each critical failure mode of a series system has a predetermined minimum target safety index βit . Thus the user arbitrarily assigns a minimum safety level for each failure mode instead of letting the optimizer determine this level in order to achieve some required system reliability. A single-loop RBDO approach for series systems is proposed in this subsection. The optimizer determines the optimal values of the target failure probabilities of all failure modes besides the original design variables d, µX and µP . The user specifies a system reliability level and the optimizer allocates optimally the specified system reliability among its failure modes. The optimal reliability allocation and the optimal reliable design are determined simultaneously using an equivalent single-loop RBDO formulation. The proposed series system approach is a modification of the component singleloop approach of the previous section. According to Eq. (12), a target safety index βit = −−1 (ptfi ) is needed for each constraint (failure mode). However, the optimizer must determine the target failure probability ptfi of each failure mode by apportioning the allowable system probability of failure pall among all failure modes. A natural way f to do this is to include all the target values of the failure probabilities of the constraints, ptfi , into the design variable set. In each iteration, the optimizer determines each ptfi and the corresponding target safety index βit = −−1 (ptfi ) is calculated so that transformations Xi = µX − σ · βit · αi and Pi = µP − σ · βit · αi in Eq. (12) hold. Simultaneously, we must make sure that the system failure probability does not exceed the maximum allowable its value psys ≤ pall . The system failure probability is approximated by the f upper second order Ditlevsen bound, psys = ni=1 ptfi − ni=2 max (pfij ), where Pfij is the j
joint probability between the ith and jth failure modes.
Based on the first-order reliability analysis (FORM), the failure set is approximated by a polyhedral set bounded by the tangent hyperplanes at the MPP points. In this case, p_fi^t = Φ(−β_i^t), where β_i^t is the safety index for the ith failure mode. Similarly, p_fij is obtained by approximating the joint failure set using the tangent hyperplanes at the MPP points of the two failure modes,

  p_fij = Φ(−β_i, −β_j; ρ_ij) = ∫_{−∞}^{−β_i} ∫_{−∞}^{−β_j} φ(x, y; ρ_ij) dx dy    (14)
where φ( · , · ; ρ) is the PDF of a bivariate normal vector with zero means, unit variances, and a correlation coefficient ρ, given by

  φ(−β_i, −β_j; ρ) = [1/(2π√(1 − ρ²))] · exp[−(β_i² + β_j² − 2ρβ_iβ_j)/(2(1 − ρ²))]    (15)
In Eq. (14), Φ( · , · ; ρ) is the bivariate normal CDF, which has the property

  ∂²Φ/(∂x ∂y) = ∂Φ/∂ρ    (16)
Combining Eqs (14) to (16) yields

  p_fij = Φ(−β_i, −β_j; 0) + ∫_0^{ρ_ij} ∂Φ(−β_i, −β_j; z)/∂ρ dz
        = Φ(−β_i)Φ(−β_j) + ∫_0^{ρ_ij} φ(−β_i, −β_j; z) dz    (17)
Note that p_fij is directly related to the degree of correlation between the failure modes, expressed by the correlation coefficient ρ_ij, which varies between −1 (fully, negatively correlated) and +1 (fully, positively correlated). In this work, p_fij is obtained by approximating the joint failure set by the tangent hyperplanes at the corresponding MPP points of the two failure modes. If U denotes the MPP point in standard normal space, the safety margins G_i(U) and G_j(U) are then replaced by the linear safety margins M_i = β_i − Σ_{r=1}^m α_ir U_r and M_j = β_j − Σ_{s=1}^m α_js U_s, so that the correlation coefficient ρ_ij is given by

  ρ_ij = ρ[M_i, M_j] = Σ_{r=1}^n α_ir α_jr = cos(υ_ij)    (18)
After the correlation coefficient ρ_ij between two active failure modes is calculated, Eq. (17) provides the joint probability of failure needed in evaluating the constraint Σ_{i=1}^n p_fi^t − Σ_{i=2}^n max_j(p_fij) ≤ p_f^all.
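As a concrete illustration, the joint failure probability of Eqs (17)-(18) can be evaluated numerically as sketched below. This is a hedged sketch, not the chapter's code: it assumes the unit normalized gradients α_i and α_j are available from the reliability analyses, uses SciPy quadrature, and is meant for |ρ_ij| < 1 (the fully correlated cases are handled separately, as discussed later in the text).

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def bivariate_normal_pdf(x, y, rho):
    """phi(x, y; rho): standardized bivariate normal density, Eq. (15)."""
    c = 1.0 / (2.0 * np.pi * np.sqrt(1.0 - rho**2))
    q = (x**2 + y**2 - 2.0 * rho * x * y) / (2.0 * (1.0 - rho**2))
    return c * np.exp(-q)

def joint_failure_probability(beta_i, beta_j, alpha_i, alpha_j):
    """p_fij per Eqs (17)-(18): rho_ij = alpha_i . alpha_j, then integrate phi over [0, rho_ij]."""
    rho = float(np.dot(alpha_i, alpha_j))                      # Eq. (18)
    independent = norm.cdf(-beta_i) * norm.cdf(-beta_j)        # Phi(-beta_i) * Phi(-beta_j)
    correction, _ = quad(lambda z: bivariate_normal_pdf(-beta_i, -beta_j, z), 0.0, rho)
    return independent + correction                            # Eq. (17)
```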
The component single-loop RBDO algorithm of Eq. (12) is therefore modified as follows:

  min_{d, µ_X, p_f1^t, ..., p_fn^t}  f(d, µ_X, µ_P)    (19)

  subject to  G_i(d, X_i, P_i) ≥ 0,  i = 1, 2, . . . , n
              p_sys = P[∪_{i=1}^n G_i(d, X_i, P_i) ≤ 0] ≈ Σ_{i=1}^n p_fi^t − Σ_{i=2}^n max_j(p_fij) ≤ p_f^all

  where  X_i = µ_X − σ·β_i^t·α_i;   P_i = µ_P − σ·β_i^t·α_i,
         α_i = σ·∇G_i^{X,P}(d, X_i, P_i) / ‖σ·∇G_i^{X,P}(d, X_i, P_i)‖,   i = 1, 2, . . . , n
If the target probability of the ith failure mode p_fi^t is very small and the corresponding safety index β_i^t = −Φ^(−1)(p_fi^t) is very large, the optimization algorithm of problem (19) becomes computationally inefficient and, in most cases, fails to converge because the constraints become insensitive to the values of the design variables p_fi^t. This is also true for any DLP RBDO algorithm based on the PMA approach if a very large target safety index is used. That is why the PMA approach with a "small'' target safety index is superior to the RIA approach, in which the safety index of inactive constraints is very large (Youn et al. 2003).

To avoid this problem, the proposed single-loop system RBDO algorithm uses an active set strategy that is very easy to implement because all target failure probabilities are included in the design variable set. In every iteration of the optimization process, p_fi^t is compared with a small predefined threshold value ε. If p_fi^t ≤ ε, the ith constraint is probabilistically inactive and its probability of failure is assumed zero (i.e. p_fi^t = 0 for all inactive constraints). Therefore, in each iteration, the active constraint set is easily identified and updated. The predefined threshold value ε depends on the problem and can be set equal to a fixed small value or to a small percentage of the largest failure probability of the modes. In this study the threshold was set equal to 10^(−7), which corresponds to a safety index of 5.1993.

The flowchart of the proposed single-loop, series system RBDO algorithm is shown in Figs. 3a and 3b. First, the algorithm initializes all design variables and parameters (including the probabilities of failure of each failure mode), and specifies lower and upper bounds of the design variables and the standard deviations. Subsequently, the initial gradients for all constraints are calculated. After initialization, the flowchart is similar to that of the component single-loop method. The only difference is the implementation of an active set strategy, which is explained in detail below. The following two points regarding the efficiency and stability of the proposed system single-loop algorithm are important.
•  The increased number of design variables increases the number of iterations of the optimizer but does not affect appreciably the computational cost in each iteration. The new approach adds the target failure probabilities of the constraints, p_fi^t, to the design variable set, thereby increasing the number of design variables by the number of constraints. This increases the number of iterations of the optimizer needed to achieve convergence. However, the computational cost in each iteration does not increase appreciably because the biggest part of this cost is due to gradient calculation, which is easy to perform. The reasons are that the objective function is not a function of the target failure probabilities of the modes and each constraint is a linear function of its safety index.

•  A probabilistic active set strategy is used to increase the efficiency and stability of the algorithm. It has been mentioned that in any iteration, if the probability of failure of the ith constraint is smaller than a threshold value ε, the constraint is assumed probabilistically inactive. It is very easy to check if a constraint is smaller than the threshold because all failure probabilities are design variables. As indicated in Fig. 3b, a constraint flag CF(i), which is set equal to zero for inactive constraints and one for active constraints, is used to identify inactive constraints. The safety indices of all inactive constraints are set equal to the maximum safety index of the active constraints. Fig. 3b shows the details of the procedure identifying inactive constraints.
After a safety index is calculated or assigned for each constraint, the algorithm calculates the quantities X_i = µ_X − σ·β_i^t·α_i and P_i = µ_P − σ·β_i^t·α_i for the ith constraint, relating X, P and µ_X, µ_P at the current MPP approximation using the KKT conditions (see Section 2.1). At this point, the feasibility of all inactive constraints is checked by calculating the value of each inactive constraint at (X_i, P_i), which allows the active constraint set to be updated (see Fig. 3b) in each iteration. Subsequently, the value of all active constraints at (X_i, P_i) is obtained, as well as the system probability of failure p_sys = Σ_{i=1}^n p_fi^t − Σ_{i=2}^n max_j(p_fij).

In calculating the joint failure probability p_fij, we first check the value of the correlation coefficient ρ_ij. If ρ_ij is equal to 1, the two failure modes are fully positively correlated and their joint failure probability can be approximated by min(p_fi^t, p_fj^t). If ρ_ij is equal to −1, the two failure modes are fully negatively correlated and their joint failure probability can be assumed zero. If ρ_ij ≈ 0, the failure modes are independent and p_fij^t = Φ(−β_i)·Φ(−β_j). In any other case, p_fij^t = Φ(−β_i)Φ(−β_j) + ∫_0^{ρ_ij} φ(−β_i, −β_j; z) dz is used in problem (19), and the system single-loop RBDO algorithm continues similarly to the component single-loop version.
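A compact sketch of how the system failure probability of problem (19) might be assembled with the probabilistic active-set strategy is given below. This is an assumed helper, not the chapter's code; the joint probabilities p_fij are taken as already computed, e.g., with the routine sketched earlier.

```python
import numpy as np

def system_failure_probability(p_t, p_joint, eps=1e-7):
    """Second-order Ditlevsen upper bound with the probabilistic active-set strategy.

    p_t     : target mode failure probabilities p_fi^t (design variables), shape (n,)
    p_joint : joint failure probabilities p_fij, shape (n, n), zero on the diagonal
    eps     : threshold below which a mode is flagged probabilistically inactive (CF(i) = 0)
    """
    p_t, p_joint = np.asarray(p_t, float), np.asarray(p_joint, float)
    active = p_t > eps                              # constraint flags CF(i)
    p = np.where(active, p_t, 0.0)                  # inactive modes contribute nothing
    pj = np.where(np.outer(active, active), p_joint, 0.0)
    p_sys = p.sum()
    for i in range(1, len(p)):                      # i = 2, ..., n in the chapter's notation
        p_sys -= pj[i, :i].max()                    # subtract max_j p_fij over j < i
    return p_sys, active
```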
3 Applications

This section demonstrates the accuracy and efficiency of the proposed SORA and single-loop approaches for series system RBDO problems using a beam example and an internal combustion engine design example. Deterministic optimization,
Figure 4.4 Cantilever beam under vertical and lateral bending (width w, thickness t, length L = 100 in, tip loads Y and Z).
a component single-loop method, and the proposed series system single-loop RBDO approaches are compared. In all cases, the same initial point and similar convergence criteria are used. MATLAB was used for the deterministic optimization and the single-loop approaches. The add-in tool "Solver'' in MS-Excel was used for constrained optimization in the SORA system RBDO approach.

3.1 A cantilever beam example

Consider a cantilever beam in vertical and lateral bending (Wu et al. 2001) (see Fig. 4.4). The beam is loaded at its tip by vertical and lateral loads Y and Z, respectively. Its length L is equal to 100 in. The width w and thickness t of the cross-section are deterministic design variables. The objective is to minimize the weight of the beam. This is equivalent to minimizing the cross-sectional area f = w·t, assuming that the material density and the beam length are constant. Two non-linear failure modes are considered. The first failure mode is yielding at the fixed end of the beam; the other failure mode is that the tip displacement exceeds the allowable value D_0 = 2.2535 in. In the single-loop system RBDO approach the problem is formulated as

  min_{w, t, p_f1^t, p_f2^t}  f = w·t

  subject to  P[G_i(X) ≥ 0] ≥ 1 − p_fi^t,  i = 1, 2
              G_1(S_Y, Z, Y, w, t) = S_Y − 600·Y/(w·t²) − 600·Z/(w²·t)
              G_2(E, Z, Y, w, t) = D_0 − (4L³/(E·w·t))·√[(Y/t²)² + (Z/w²)²]
              G_sys = 0.0027 − p_sys = 0.0027 − (p_f1^t + p_f2^t − p_f12) ≥ 0
              0 ≤ w, t ≤ 5

where G_1 and G_2 are the limit state functions corresponding to the two failure modes. In the SORA approach, the problem formulation is identical but the system probability
Table 4.1 Comparison of RBDO methods for the beam example.

                      Deterministic      Component        System single-loop
                      optimization       single-loop      (SORA)
  w = x1              2.3520             2.4484           2.6093 (2.6209)
  t = x2              3.3263             3.8884           3.6126 (3.6001)
  p_f1^t              -                  0.00135          0.002412 (0.002328)
  p_f2^t              -                  0.00135          0.000426 (0.000372)
  p_f12               -                  Neglected        0.0001389 (Neglected)
  p_sys               -                  0.00270          0.00270
  Objective f(X)      7.8235             9.5202           9.4263 (9.4356)
  G_1(X)              640.3600           0                0
  G_2(X)              0                  0                0
  No. of F.E.         83                 115              624*

* This number refers to the number of function evaluations of the single-loop approach. The number of function evaluations for the SORA approach could not be determined.
Figure 4.5 Optimization history for the beam example (objective value versus iteration number for deterministic optimization, component single-loop, and system single-loop/JPDF).
of failure is approximated by the sum of the failure probabilities of the two modes. Design variables w and t are deterministic, while Y, Z, S_Y and E are normally distributed random parameters with Y ~ N(1000, 100) lb, Z ~ N(500, 100) lb, S_Y ~ N(40 000, 2000) psi and E ~ N(29·10^6, 1.45·10^6) psi; S_Y is the random yield strength, Z and Y are mutually independent random loads in the vertical and lateral directions, respectively, and E is the Young's modulus. A safety index β_i^t = 3, which corresponds to a maximum allowable probability of failure of 0.00135, is used for both constraints in the component single-loop approach (see Eq. 7). Table 4.1 and Fig. 4.5 compare the efficiency of the deterministic, component single-loop and system single-loop optimizations and the resulting optimum designs. A common initial point (w = 2.5, t = 2.5) and common convergence conditions were used for all three optimizations.
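For reference, the two limit-state functions of the beam can be coded directly from the definitions above. The following sketch only evaluates them at nominal parameter values and at the Table 4.1 system single-loop design; it performs no reliability analysis.

```python
import numpy as np

L, D0 = 100.0, 2.2535          # length [in] and allowable tip displacement [in]

def g1(Sy, Z, Y, w, t):
    """Yielding at the fixed end: yield strength minus bending stress."""
    return Sy - (600.0 * Y / (w * t**2) + 600.0 * Z / (w**2 * t))

def g2(E, Z, Y, w, t):
    """Tip-displacement limit state: allowable minus actual tip displacement."""
    disp = 4.0 * L**3 / (E * w * t) * np.sqrt((Y / t**2)**2 + (Z / w**2)**2)
    return D0 - disp

# nominal check at the means of the random parameters (Y, Z, Sy, E)
w, t = 2.6093, 3.6126          # system single-loop optimum from Table 4.1
print(g1(40000.0, 500.0, 1000.0, w, t), g2(29e6, 500.0, 1000.0, w, t))
```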
Both the component and system RBDO problems have the same allowable system failure probability p_f^all = 0.0027, which corresponds to β_sys^all = 2.78215; the difference is that in component RBDO the probabilities of both failure modes were constrained to be less than or equal to 0.00135, whereas in system RBDO the maximum allowable failure probabilities of the modes were determined by the optimizer. The component and system single-loop methods converged in six and nine iterations, respectively (see Fig. 4.5). The deterministic optimization converged in eight iterations. The SORA system RBDO approach required four cycles, each involving a deterministic optimization and a reliability analysis. A total of 25 iterations were required by the optimizer (seven in each of the first three cycles and four in the fourth).

Due to the optimal apportionment of the allowable system reliability, both system RBDO approaches resulted in an optimum cross-sectional area of approximately 9.43 in², which is better than the component optimum of 9.5202 in². The optimizer allocated a much lower probability of failure to the second failure mode than to the first, indicating that the reliability of this mode can be increased at a much lower cost (cross-sectional area) than the reliability of the first mode. Note that for the component approach, the system failure probability is 0.0027 (0.00135 + 0.00135). Both constraints are active in the component and system approaches, while only the second constraint is active in the deterministic optimization. The single-loop system approach yielded a more efficient design than SORA because the former approximates the system failure probability more accurately than the latter, but the difference between the two optimum designs is small.

The last row of Table 4.1 shows the number of function evaluations for the deterministic, component single-loop and system single-loop optimizations. The number of function evaluations of the SORA system RBDO approach could not be determined in the MS-Excel Solver. Each call of the objective function or any constraint counts as a function evaluation. The system RBDO uses more function evaluations than the component RBDO due to the active constraint set strategy. Also, the component RBDO uses more function evaluations than the deterministic optimization. However, both the component and system single-loop approaches are efficient, as evidenced by the low number of function evaluations.

The optimal apportionment of the system probability of failure among the failure modes indicates the significance of each mode in the overall system probability of failure. In this beam example, the first failure mode is much more significant than the second, because the failure probability allocated to the first mode is several times larger than that of the second (cf. Table 4.1). Therefore, if we want to change the problem formulation in order to reduce the system probability of failure and/or the area of the optimum design, we must allocate more resources to reduce the probability of failure of the first mode. For example, if a different material could be selected for the beam, it would be better to increase the yield stress than the Young's modulus, because yielding is the most important mode. An important advantage of a system RBDO approach is that it allocates the system reliability among the failure modes by accounting for the cost of increasing reliability. An optimality condition relating the sensitivity derivatives of the reliabilities of the failure modes and the sensitivity derivatives of the cost (objective) function will be presented in Section 4. This condition indicates that high reliability is allocated to failure modes whose reliability can be increased at low cost, that is, a small increase in the objective function is required in order to increase the reliability of these modes.
3.2 Internal combustion engine example

This example addresses a flat head design of an internal combustion engine from a thermodynamic viewpoint (Papalambros & Wilde 2000, McAllister & Simpson 2003). Design variables are the cylinder bore b, compression ratio c_r, exhaust valve diameter d_E, intake valve diameter d_I, and the revolutions per minute (RPM) at peak power divided by 1000, ω. The goal is to obtain preliminary values for these variables that maximize the power output per unit displacement while meeting specific fuel economy and packaging constraints. The problem is stated as:

  Find:  b, d_I, d_E, c_r, ω, p_f1^t, p_f2^t, . . . , p_f9^t

  max  f = ω·[3688·η_t(c_r, b)·η_v(ω, d_I) − FMEP(c_r, ω, b)]/120

  where
    FMEP = 4.826(c_r − 9.2) + 7.97 + 0.253·[8V/(πN_c)]·ω·b^(−2) + 9.7·10^(−6)·{[8V/(πN_c)]·ω·b^(−2)}²
    η_t = 0.8595·(1 − c_r^(−0.33)) − S_v·(1.5/ω)^0.5
    S_v = 0.83·[8 + 4c_r + 1.5(c_r − 1)·b³·πN_c/V]/[(2 + c_r)·b]
    η_v = η_vb·(1 + 5.96·10^(−3)·ω²)/{1 + [9.428·10^(−5)·4V/(πN_c·C_s)·(ω/d_I²)]²}
    η_vb = 1.067 − 0.038·e^(ω − 5.25)                       for ω ≥ 5.25
    η_vb = 0.637 + 0.13ω − 0.014ω² + 0.00066ω³              for ω ≤ 5.25

  subject to:
    P[G_i(X) ≥ 0] ≥ 1 − p_fi^t,   i = 1, 2, . . . , 9
    G_sys = 0.006539 − [Σ_{i=1}^9 p_fi^t − Σ_{i=2}^9 max_j(p_fij^t)] ≥ 0
where:
  G_1 = 400 − 1.2·N_c·b                                       (min. bore wall thickness)
  G_2 = b − [8V/(200πN_c)]^0.5                                (max. engine height)
  G_3 = 0.82b − d_I − d_E                                     (valve geometry & structure)
  G_4 = d_E − 0.83d_I                                         (min. valve diameter ratio)
  G_5 = 0.89d_I − d_E                                         (max. valve diameter ratio)
  G_6 = 0.6C_s − 9.428·10^(−5)·(4V/(πN_c))·(ω/d_I²)           (max. mech. index)
  G_7 = −0.045·b − c_r + 13.2                                 (knock-limit compression ratio)
  G_8 = 6.5 − ω                                               (max. torque converter RPM)
  G_9 = 230.5·Q·η_tw − 3.6·10^6                               (max. fuel economy)
  η_tw = 0.8595·(1 − c_r^(−0.33)) − S_v
  V = 1.859·10^6 mm³,  Q = 43 958 kJ/kg,  C_s = 0.44,  N_c = 4
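The thermodynamic relations above translate directly into code. The following Python sketch reproduces the objective (power output per unit displacement) from the listed constants; evaluating it at the component single-loop optimum of Table 4.3 gives a value of roughly 51, consistent with that table. It is an illustration of the published formulas, not the chapter's simulation code.

```python
import numpy as np

V, Q, Cs, Nc = 1.859e6, 43958.0, 0.44, 4      # displacement [mm^3], fuel value, constants

def fmep(cr, w_rpm, b):
    s = 8.0 * V / (np.pi * Nc) * w_rpm / b**2
    return 4.826 * (cr - 9.2) + 7.97 + 0.253 * s + 9.7e-6 * s**2

def eta_t(cr, b, w_rpm):
    Sv = 0.83 * (8.0 + 4.0*cr + 1.5*(cr - 1.0)*b**3*np.pi*Nc/V) / ((2.0 + cr)*b)
    return 0.8595 * (1.0 - cr**-0.33) - Sv * np.sqrt(1.5 / w_rpm)

def eta_v(w_rpm, dI):
    if w_rpm >= 5.25:
        eta_vb = 1.067 - 0.038 * np.exp(w_rpm - 5.25)
    else:
        eta_vb = 0.637 + 0.13*w_rpm - 0.014*w_rpm**2 + 0.00066*w_rpm**3
    mach = 9.428e-5 * 4.0*V/(np.pi*Nc*Cs) * w_rpm/dI**2
    return eta_vb * (1.0 + 5.96e-3 * w_rpm**2) / (1.0 + mach**2)

def power_per_displacement(b, dI, cr, w_rpm):
    """Objective f of the engine problem (d_E does not enter the objective)."""
    return w_rpm * (3688.0 * eta_t(cr, b, w_rpm) * eta_v(w_rpm, dI) - fmep(cr, w_rpm, b)) / 120.0

# component single-loop optimum of Table 4.3 (b, d_I, c_r, omega): approximately 51
print(power_per_displacement(82.1333, 35.8430, 9.3446, 5.3141))
```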
Table 4.2 Distribution parameters and bounds of design variables.

  Design variable                        Standard deviation   Lower bound   Upper bound
  Cylinder bore, b, mm                   0.4                  70            90
  Intake valve diameter, d_I, mm         0.15                 25            50
  Exhaust valve diameter, d_E, mm        0.15                 25            50
  Compression ratio, c_r                 0.05                 6             12
  RPM at peak power/1000, ω              0.25                 5             12
Many of the above expressions are valid only within the limited range of bore-to-stroke ratio 0.7 ≤ b/s ≤ 1.3. More information on the problem definition can be found in (Papalambros & Wilde 2000). All design variables are assumed normally distributed with standard deviations and bounds as shown in Table 4.2.

First, the deterministic optimization problem was solved. For the component single-loop approach, a target safety index of 3 (which corresponds to a failure probability of 0.00135) was assumed for each failure mode. The system probability of failure was therefore equal to 0.00675 (= 5 × 0.00135, for the five modes that turn out to be active; cf. Table 4.3), assuming that all modes are disjoint. The assumed system probability of failure of 0.00675 was checked using Monte Carlo (MC) simulation with importance sampling. It was found that the actual probability of failure is 0.006539 instead. The latter was used as the maximum allowable system probability of failure for both the single-loop and SORA system RBDO approaches. It was also verified using MC simulation that the joint probabilities of failure for all pairs of active constraints are negligible for this example. The same initial point of (80, 35, 40, 11, 6) and the same convergence conditions were used in all optimizations.

Table 4.3 compares the results from deterministic optimization, component single-loop RBDO, system single-loop RBDO and SORA system RBDO. In the deterministic optimization, the constraint values are given at the optimum point. For the component and system approaches, the constraint values are given at their corresponding MPPs. Finally, in the system approaches, the active constraint values are given at their MPP points corresponding to the different β values calculated by the algorithm, while the inactive constraint values are given at their approximate MPPs (the point on the β-circle closest to the limit state) corresponding to the assigned maximum β (see Section 2.3.2). As shown in Table 4.3, the system single-loop problem has the same active constraint set as the component single-loop problem. Also, the deterministic optimization has the same active set, excluding the sixth constraint. Table 4.3 shows that the single-loop and SORA-based approaches yielded practically identical designs.

Table 4.3 also shows the apportionment of the specified 0.006539 system probability of failure among all failure modes. The most critical mode is the sixth one, followed by the third and first, with probabilities of failure of 0.002370, 0.001665 and 0.001448, respectively. The deterministic optimum has the highest output power but the least reliability. The component and the system optima are very similar (see the values of the design variables) and have almost the same output power (50.9713 and 51.1023, respectively) and system reliability. However, the system reliability approach is superior to the component
Table 4.3 Comparison of results for the engine design example.

  Design variables    Deterministic    Component       System          System
                      optimization     single-loop     single-loop     SORA
  b                   83.3333          82.1333         82.1419         82.1413
  d_I                 37.3406          35.8430         35.8456         35.8356
  d_E                 30.9927          30.3345         30.3641         30.3645
  c_r                 9.4500           9.3446          9.3174          9.3170
  ω                   6.0720           5.3141          5.3598          5.3639
  p_f1                -                0.00135         0.001448        0.001441
  p_f3                -                0.00135         0.001665        0.001544
  p_f4                -                0.00135         0.000811        0.000722
  p_f6                -                0.00135         0.002370        0.002595
  p_f7                -                0.00135         0.000232        0.000223
  p_sys               -                0.006539        0.006539        0.006539
  Objective f(X)      55.6677          50.9713         51.1023         51.1013
  G_1(X)              0                0               0               0
  G_2(X)              6.4088           4.0088          3.548016        3.14*
  G_3(X)              0                0               0               0
  G_4(X)              0                0               0               0
  G_5(X)              2.2404           0.9633          0.700382        0.63*
  G_6(X)              0.0211           0               0               0
  G_7(X)              0                0               0               0
  G_8(X)              0.4280           0.4359          0.0968          0.01*
  G_9(X)              0.1179           0.0892          0.0771          0.0476*
  No. of F.E.         309              471             1290            N/A

* The values of the performance function correspond to a safety index of 4.5 for modes 2, 5, 8, and 9 for the SORA system RBDO. Therefore, they should not be compared to the results of the single-loop RBDO.
reliability approach for the following two reasons. First, the system approach allows the designer to control the system reliability directly, whereas the component reliability approach only allows the designer to control the reliabilities of the failure modes. For example, if the designer did not know that modes 2, 5, 8 and 9 were insignificant and distributed the minimum allowable reliability equally among the modes, then he/she would obtain a design with lower power output than the maximum achievable one and unnecessarily high reliability. Second, the system reliability approach helps the designer identify the significant failure modes that contribute the most to the overall system reliability. This is very important considering that, if we wanted to change the problem formulation, resources should be targeted towards improving the reliability of those "critical'' failure modes.

Figure 4.6 shows the optimization histories and the last row of Table 4.3 shows the number of function evaluations for all approaches except SORA. The computational cost of the single-loop system approach is higher than that of the component approach due to the active set strategy. In general, the more failure modes a problem has, the higher the system computational cost is expected to be. The SORA-based RBDO was considerably slower than its single-loop counterpart. The SORA approach converged in two cycles involving a total of 22 iterations; the first cycle involved 15 iterations while the second involved 7 iterations.
Figure 4.6 Optimization histories for the engine design example (objective value versus iteration number for deterministic optimization, component single-loop, and system single-loop/JPDF).
4 Optimality condition for series system RBDO and validation of the optimum of the beam example

Consider the general system RBDO problem formulation in Eq. (1). The Lagrangian, L, of this constrained optimization problem is

  L = f + λ(p_f^all − p_sys)    (20)

At the optimum, the constraint on the system safety index is usually active, that is, the system failure probability assumes its maximum allowable value in order to minimize the objective function f. At the optimum, the gradient of the Lagrangian is zero. Therefore, the optimality conditions become

  ∂L/∂y_k = 0  ⇒  ∂f/∂y_k = λ·∂p_sys/∂y_k    (21)

where y_k is the kth design variable. From Eq. (21) we obtain the following optimality criterion for the series system RBDO problem:

  (∂f/∂y_k) / (∂p_sys/∂y_k) = λ = constant    (22)

Equation (21) says that at the optimum, the iso-cost surface (the locus of all designs with constant objective function f) and the iso-reliability surface (the locus of all designs with the same system reliability, or equivalently the same system failure probability) have the same slope, that is, they are tangent to each other. This equation can be used to validate the optimum design obtained by the SORA or single-loop system RBDO approaches. Note that there is no reason for the failure probabilities of the modes to be equal at the optimum.
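A simple way to check Eq. (22) numerically at a candidate beam optimum is sketched below. The system failure probability p_sys(w, t) is assumed to be supplied by an external reliability analysis (e.g., FORM with the Ditlevsen bound); it is a hypothetical callable and is not implemented here.

```python
def optimality_ratio(p_sys, w, t, h=1e-4):
    """Check Eq. (22) at a candidate optimum of the beam problem (f = w*t):
    the ratio (df/dy_k)/(dp_sys/dy_k) should be the same for y_k = w and y_k = t.

    p_sys : callable p_sys(w, t) returning the system failure probability
            (e.g. a FORM/Ditlevsen-bound analysis; assumed supplied by the user).
    """
    df_dw, df_dt = t, w                                     # gradients of f = w * t
    dp_dw = (p_sys(w + h, t) - p_sys(w - h, t)) / (2 * h)   # central differences
    dp_dt = (p_sys(w, t + h) - p_sys(w, t - h)) / (2 * h)
    return df_dw / dp_dw, df_dt / dp_dt                     # both should equal lambda
```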
Figure 4.7 Optimality condition for beam design: iso-reliability curve (system failure probability 0.0027) and iso-cost curve (area 9.43 in²) in the w-t plane, showing the system reliability optimum and the component RBDO optimum; any deviation from the system reliability optimum results in a heavier or less reliable design.
We will check if the optimum beam design of the first example (Table 4.1) satisfies the above optimality condition. Figure 4.7 compares the optimum designs obtained by the proposed system RBDO approach and by component RBDO. It is observed that the system reliability optimum has the smallest area among all the designs with the same system reliability (P_f = 0.0027, system reliability = 1 − 0.0027, system safety index = 2.782). Iso-reliability curves (curves representing designs with system reliability 1 − 0.0027) and iso-cost curves (designs with area = 9.43 in²) are tangent at the point representing the optimum reliability-based design. Any deviation from this optimum will result in a design with larger area or an infeasible design (violation of the minimum system reliability constraint). The design obtained by component RBDO lies on the same iso-reliability curve as the RBDO optimum but it has a larger area. This shows that the proposed approach saves area by apportioning reliability in an optimal way among the failure modes of a system.
5 Conclusions

Two approaches for system RBDO were presented that allow for an optimal apportionment of the reliability of a series system among its failure modes (constraints). These approaches use SORA and single-loop RBDO algorithms to determine the optimum design. The target values of the failure probabilities of the failure modes are considered as design variables. An active set strategy is used for algorithmic stability. The active constraint set is updated in each iteration during the optimization process. The first-order and second-order Ditlevsen upper bounds are used to approximate conservatively the probability of failure of a series system. The proposed algorithms ensure
overall system reliability rather than an arbitrary reliability for each failure mode, as is the case with component RBDO methods. The user specifies an acceptable system reliability level instead of deciding arbitrarily on a minimum reliability level for each failure mode, which is usually not optimal. The efficiency and robustness of the two approaches were demonstrated on two design examples involving a beam and an internal combustion engine. The results were compared with deterministic optimization and the conventional RBDO formulation. It was shown that both system RBDO approaches identified identical optimal designs that have the specified system reliability and provide an optimal reliability for each failure mode. In doing so, the algorithms for the two system RBDO approaches identified the critical failure modes that contribute the most to the system reliability. The single-loop system RBDO approach was found to be considerably more efficient than its SORA counterpart, because the former performs only one deterministic optimization while the latter performs a sequence of optimizations. Moreover, the single-loop approach avoids the extrapolation of the MPPs of the SORA approach. The results from the examples also showed that the number of function evaluations is higher for the system approaches than for the component approach, due to the active set strategy. In general, the more failure modes a problem has, the higher the system computational cost is expected to be.
Acknowledgments

This study was performed with funding for the last two authors from the General Motors Research and Development Center and the Automotive Research Center (ARC), a U.S. Army Center of Excellence in Modeling and Simulation of Ground Vehicles at the University of Michigan. The support is gratefully acknowledged. Such support does not, however, constitute an endorsement by the funding agencies of the opinions expressed in the chapter.
Nomenclature

Latin symbols
  d: deterministic design variables
  f: objective function
  G_i(d, X, P): ith deterministic constraint (performance function of the ith failure mode of a system)
  P: random parameters
  P_MPP: values of the random parameters at the Most Probable Point
  p_sys: actual system failure probability
  p_f^all: maximum allowable failure probability of a system
  p_fi: actual failure probability of the ith mode of a system
  p_fi^t: target failure probability of the ith mode of a system
  T: transformation from the space of reduced variables (U or Z) to the space of the original variables (X)
  U_MPP: values of the vector of the random variables and the random parameters at the Most Probable Point in the reduced space (Z- or U-space)
  X: random variables
  X_MPP: values of the random variables at the Most Probable Point in the space of the original variables (X-space)
  y: set of all design variables (both deterministic and random)

Greek symbols
  α: normalized gradient of the performance function of a failure mode
  β_sys^all: minimum allowable value of the safety index of a system
  β_i: actual value of the safety index of the ith mode of a system
  β_i^t: target value of the safety index of the ith mode of a system
  µ_X: mean values of random variables
  µ_P: mean values of random parameters
  ∇: gradient operator

Abbreviations
  CF: Constraint Flag
  DLP: Double Loop
  FORM: First Order Reliability Method
  IC: Internal Combustion
  KKT: Karush-Kuhn-Tucker
  MPP: Most Probable Point
  PMA: Performance Measure Approach
  RBDO: Reliability-Based Design Optimization
  SLSV: Single-Loop, Single-Vector
  SORA: Sequential Optimization and Reliability Assessment
References

Agarwal, H., Renaud, J.E., Lee, J.C. & Watson, L.T. 2004. A Unilevel Method for Reliability Based Design Optimization. 45th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Palm Springs, CA.
Ba-abbad, M., Nikolaidis, E. & Kapania, R. 2006. A New Approach for System Reliability-Based Design Optimization. AIAA Journal 44(5):1087-1096.
Chen, X., Hasselman, T.K. & Neill, D.J. 1997. Reliability Based Structural Design Optimization for Practical Applications. Proceedings of the 38th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference.
Ditlevsen, O. 1979. Narrow Reliability Bounds for Structural Systems. Journal of Structural Mechanics 3:453-472.
Du, X. & Chen, W. 2004. Sequential Optimization and Reliability Assessment Method for Efficient Probabilistic Design. ASME Journal of Mechanical Design 126(2):225-233.
Frangopol, D.M. & Maute, C. 2004. Reliability-Based Optimization of Civil and Aerospace Structural Systems. In Engineering Design Reliability Handbook, Chapter 24, CRC Press, Boca Raton, Florida.
Haldar, A. & Mahadevan, S. 2000. Probability, Reliability and Statistical Methods in Engineering Design. John Wiley & Sons, Inc.
Kuschel, N. & Rackwitz, R. 2000. Optimal Design under Time-Variant Reliability Constraints. Structural Safety 22(2):113-128.
Liang, J., Mourelatos, Z.P. & Tu, J. 2004. A Single-Loop Method for Reliability-Based Design Optimization. International Journal of Product Development, Interscience Enterprises Limited, Great Britain (in press).
Liang, J., Mourelatos, Z.P. & Nikolaidis, E. 2007. A Single-Loop Approach for System Reliability-Based Design Optimization. ASME Journal of Mechanical Design (accepted).
Madsen, H.O., Krenk, S. & Lind, N.C. 1986. Methods of Structural Safety. Prentice-Hall, Inc.
McAllister, C.D. & Simpson, T.W. 2003. Multidisciplinary Robust Design Optimization of an Internal Combustion Engine. ASME Journal of Mechanical Design 125(1):124-130.
Melchers, R.E. 2001. Structural Reliability Analysis and Prediction. John Wiley & Sons.
Moré, J.J. & Sorensen, D.C. 1983. Computing a Trust Region Step. SIAM Journal on Scientific and Statistical Computing 3:553-572.
Moses, F. 1995. Probabilistic Analysis of Structural Systems. In Probabilistic Structural Mechanics Handbook: Theory and Industrial Applications, edited by C. Raj Sundararajan, Chapman & Hall, 166-187.
Papalambros, P.Y. & Wilde, D.J. 2000. Principles of Optimal Design: Modeling and Computation. 2nd Edition, Cambridge University Press.
Royset, J.O., Der Kiureghian, A. & Polak, E. 2001. Reliability-based Optimal Design of Series Structural Systems. Journal of Engineering Mechanics 607-614.
Sørensen, D.C. 1994. Minimization of a Large Scale Quadratic Function Subject to an Ellipsoidal Constraint. Department of Computational and Applied Mathematics, Rice University, Technical Report TR94-27.
Steihaug, T. 1983. The Conjugate Gradient Method and Trust Regions in Large Scale Optimization. SIAM Journal on Numerical Analysis 20:626-637.
Streicher, H. & Rackwitz, R. 2004. Time-Variant Reliability-Oriented Structural Optimization and a Renewal Model for Life-cycle Costing. Probabilistic Engineering Mechanics 19(1-2):171-183.
Thanedar, P.B. & Kodiyalam, S. 1992. Structural Optimization Using Probabilistic Constraints. Structural Optimization 4:236-240.
Tu, J., Choi, K.K. & Park, Y.H. 1999. A New Study on Reliability-Based Design Optimization. ASME Journal of Mechanical Design 121:557-564.
Wu, Y.-T., Shin, Y., Sues, R. & Cesare, M. 2001. Safety-Factor Based Approach for Probabilistic-Based Design Optimization. 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Seattle, WA.
Youn, B.D., Choi, K.K. & Park, Y.H. 2003. Hybrid Analysis Method for Reliability-Based Design Optimization. ASME Journal of Mechanical Design 125:221-232.
Chapter 5
Nondeterministic formulations of analytical target cascading for decomposition-based design optimization under uncertainty

Michael Kokkolaras & Panos Y. Papalambros
University of Michigan, Ann Arbor, MI, USA
ABSTRACT: Analytical target cascading (ATC) is a methodology for optimal design of hierarchically decomposed systems. In this chapter, we present non-deterministic formulations of ATC to account for uncertainties in decomposition-based optimal system design. Depending on the amount of available information, we adopt either a probabilistic or a robust optimization approach to formulate the multilevel design optimization problem, and use appropriate techniques to estimate uncertainty propagation. We demonstrate the application of all presented ATC formulations using an engine optimal design example, and discuss the obtained results.
1 Introduction

The dictionary definition of a system is "an organized integrated whole made up of diverse but interrelated and interdependent parts,'' and complex is one of its synonyms (Merriam-Webster 2007). It is not surprising then that developing large engineering systems is accomplished by assigning the task of designing the diverse but interrelated parts to different teams, and that the challenge is to organize these activities so that the parts can be integrated successfully to form the whole. Accordingly, large engineering systems are typically decomposed into subsystems, subsystems are decomposed into components, components are decomposed into parts, and so on. This results in a multilevel hierarchy, an example of which is shown in Figure 5.1. Different teams (or individuals) are then assigned the optimal design
Figure 5.1 Example of a hierarchically decomposed multilevel system (level i = 0: system; level i = 1: subsystems j = 1, ..., n; level i = 2: components j = 1, ..., m).
problem of each element in this hierarchy. If these design teams are not given exact specifications, they focus on their own objectives without taking into consideration interactions with other elements. This situation is compounded when design variables are shared among elements; if the obtained values of shared design variables are not equal in all elements, the system design is inconsistent and cannot be realized. Hierarchical decomposition facilitates the use of decentralized optimization approaches that aid systems engineers to identify interactions among elements at lower levels and to transfer this information to higher levels, and has become standard design practice, as evidenced by the organizational structure of engineering companies (Haimes et al. 1990). Analytical target cascading (ATC) is a methodology for solving such hierarchical multilevel optimal design problems. Design targets are cascaded to lower levels using the model-based hierarchy. An optimization problem is posed and solved for each design subproblem to minimize deviations from propagated targets. Solving the subproblems according to an appropriate coordination strategy yields overall system compatibility. The deterministic formulation of the ATC methodology assumes that complete information of the system design problem is available, and that design decisions can be implemented precisely. These assumptions imply that optimization results are as good (and therefore useful) as the design and simulation/analysis models used to obtain them, and that they are meaningful only if they can be realized exactly. In reality, these assumptions do not hold. We are rarely in a position to represent a physical system without using approximations, have complete knowledge on all of its parameters, or control the design variables with high accuracy. Therefore, uncertainty is inherently present in simulation-based design of complex engineering systems. The analysis models used for the simulation depend on assumptions and include many approximations and empirical constants. Also, advanced yet relatively immature technologies are often associated with uncertainty. The designer is not sure about the validity of the decisions he/she has made, and would like to be able to perform optimization studies under uncertainty. It is therefore imperative to represent uncertainties and take them into account during the early design assessment process. Uncertainty identification, representation, and quantification are the cornerstones of design optimization under uncertainty. Given the design model and the necessary analysis/simulation models, the designer must first identify all possible sources of uncertainty. Then, she/he must choose an appropriate means to represent and quantify them. A popular approach is to represent them as random variables, and quantify them by means of some probability distribution utilizing expertise and data. This approach is useful when there are sufficient data to infer probability distributions for the considered random variables. It should be adopted since a plethora of techniques exists for solving probabilistically formulated optimal design problems. However, in many situations the designer does not have the necessary information available. In this case, he/she must assume that the uncertain quantities can take any value within intervals that are used to quantify uncertainty. 
In this chapter, we review the ATC methodology, and we extend its deterministic formulation using both probabilistic and interval analysis approaches. We address the issue of representing uncertain quantities as optimization variables, formulate the
associated nondeterministic design problems appropriately, and present techniques for estimating uncertainty propagation through the multilevel hierarchy of decomposed systems. The proposed methodologies are applied to a simple engine design example to illustrate the introduced concepts.
2 Analytical target cascading

Analytical target cascading (ATC) is a mathematical methodology for translating ("cascading'') overall system design targets to element specifications based on a hierarchical multilevel decomposition (Michelena et al. 1999; Papalambros 2001; Kim 2001; Kim et al. 2003). The objective is to assess interactions and identify possible tradeoffs among elements early in the design development process, and to determine specifications that yield a consistent system design with minimized deviation from system design targets. For an engineering corporation, ATC provides a means to dictate technical objectives to different design teams, knowing a priori that these goals can be achieved without conflicting with those of other teams. Consistent system design can then be accomplished with minimum communication overhead, i.e., maximum efficiency, avoiding costly iterations late in the process.

ATC operates by formulating and solving a minimum deviation optimization problem for each element in the hierarchy. Assuming that responses of higher-level elements are functions of responses of lower-level elements, it aims at minimizing the gap between what upper-level elements "want'' and what lower-level elements "can.'' Similarly, if design variables are shared among some elements at the same level, their consistency is coordinated by their common parent element at the level above. The ATC process is proven to be convergent when using a specific class of coordination strategies (Michelena et al. 2003), and has been successfully applied to a variety of optimal design problems, e.g., (Kim et al. 2002; Kokkolaras et al. 2002; Kim et al. 2003). We refer the reader to the above references for a detailed description of ATC. Here, we present the concept and the general mathematical formulation.

The key assumption of the ATC methodology is that there is a functional dependency in the hierarchical, multilevel system decomposition. Assuming that element j at level i has n_ij children, this functional dependency is expressed as

  r_ij = f_ij(r_(i+1)1, . . . , r_(i+1)n_ij, x_ij, y_ij)    (1)

where r_ij are element responses, r_(i+1)1, . . . , r_(i+1)n_ij denote children responses, x_ij represent local design variables, and y_ij denote local shared design variables (i.e., design variables that this element shares with other elements at the same level). The mathematical formulation of problem p_ij for element j at level i is

  min  ||r_ij(r_(i+1)1, . . . , r_(i+1)n_ij, x_ij, y_ij) − r_ij^u||₂² + ||y_ij − y_ij^u||₂²
        + Σ_{k=1}^{n_ij} ||r_(i+1)k − r_(i+1)k^l||₂² + Σ_{k=1}^{n_ij} ||y_(i+1)k − y_(i+1)k^l||₂²    (2)

  with respect to  r_(i+1)1, . . . , r_(i+1)n_ij, x_ij, y_ij, y_(i+1)1, . . . , y_(i+1)n_ij
  subject to  g_ij(r_ij, x_ij, y_ij) ≤ 0

where the coordinating variables for the shared design variables of the children are denoted by y_(i+1)1, . . . , y_(i+1)n_ij, and superscripts u (l) are used to denote response and shared
Figure 5.2 ATC information flow at element j of level i: response and shared-variable values r_ij^u, y_ij^u are cascaded down from the parent and r_ij^l, y_ij^l are passed up to it; the children pass up r_(i+1)k^l, y_(i+1)k^l and receive r_(i+1)k^u, y_(i+1)k^u; the element optimization problem p_ij uses the analysis/simulation model r_ij = f_ij(r_(i+1)k_1, . . . , r_(i+1)k_cij, x_ij, y_ij).
variable values that have been obtained at the parent (children) problem(s) and have been cascaded down (passed up) as design targets (consistency parameters). Shared design variables are restricted to exist only among elements at the same level having the same parent. The top-level problem of the hierarchy is a special case: at this level (i = 0) there is only one element (j = 1, the system), and the responses cascaded from above are the given system design targets T = R_01^u (there is no parent element); also, since this is the sole element of the level, there exist no shared variables. The bottom-level problems are also a special case since they have no children. Finally, note that although communication among levels, i.e., updating parameter values associated with the ATC process, is bi-directional, functional dependency is strictly hierarchical.

Figure 5.2 illustrates the information flow of the ATC process at element j in level i. Assuming that all the parameters have been updated using the solutions obtained at the parent and children problems, Problem (2) is solved to update the parameters of the parent and children problems. This process is repeated until the variables in all optimization problems do not change significantly after consecutive iterations. The sequence in which the subproblems are solved is called a coordination strategy. As in any distributed multidisciplinary optimization methodology, the choice of coordination strategy among the many available alternatives is critical. In contrast to other methodologies for multilevel system design, global convergence properties have been proven for a specific class of coordination strategies under standard smoothness and convexity assumptions (Michelena et al. 2003). Nevertheless, case studies have also demonstrated that the ATC process may terminate successfully in practice when other coordination schemes are used (Michelena et al. 2001; Kim et al. 2002; Louca et al. 2002; Kokkolaras et al. 2004).

It is emphasized that ATC should not be viewed either solely or merely as a design optimization methodology. ATC addresses the early part of the product development process (cf. Figure 5.3). Its purpose is to account for the interrelations of the system
Figure 5.3 ATC in the product development process (enterprise target setting produces design targets; the ATC process produces part specifications; once targets and specifications are feasible, part design embodiment leads to the final system design).

Figure 5.4 Hierarchical bi-level system (engine simulation: brake-specific fuel consumption from power loss due to friction; piston-ring/cylinder-liner subassembly: power loss, oil consumption, blow-by and liner wear rate from ring and liner surface roughness and liner material properties, i.e., Young's modulus and hardness).
parts, identify possible tradeoffs, and determine optimal and consistent design specifications that match the design targets as closely as possible (i.e., it can also be used to check whether the given design targets can be achieved by the available means). Once this is accomplished, the design embodiment for each part can be carried out concurrently or outsourced.
3 Application to engine design

In this section, we apply the ATC methodology to a simple yet illustrative simulation-based optimal design example to demonstrate the introduced concepts. Specifically, we consider a V6 gasoline engine as the system, which is decomposed into six subsystems, each of which represents the piston-ring/cylinder-liner subassembly of a single cylinder. The system simulation predicts engine performance in terms of brake-specific fuel consumption. Although the engine has six cylinders, they are all designed to be identical. For this reason, we can actually consider only one subsystem. The associated bi-level hierarchy, shown in Figure 5.4, includes the engine as a system at the top level and the piston-ring/cylinder-liner subassembly as a subsystem at
the bottom level. The ring/liner subassembly simulation takes as inputs the surface roughness of the ring and the liner and the liner's Young's modulus and hardness, and computes power loss due to friction, oil consumption, blow-by, and liner wear rate. The engine simulation then takes as input the power loss and computes the brake-specific fuel consumption of the engine. Commercial software packages were used to perform the simulations. A detailed description of the problem can be found in (Chan et al. 2004).

Due to the simplicity of the given problem structure, we use a simplified version of the notation introduced earlier. Since there are only two levels with only one element in each, we skip element indices and denote the upper-level element with subscript 0 and the lower-level element with subscript 1. We use second indices to denote the components of the design variable vector of the lower-level element optimization problem. The design problem is to find optimal values for the piston-ring and cylinder-liner surface roughness design optimization variables x_11 and x_12, respectively, and optimal values for the design optimization variables representing the material properties (Young's modulus x_13 and hardness x_14) of the liner that yield minimized brake-specific fuel consumption, i.e., system response R_0. The optimal design problem includes constraints on liner wear rate, oil consumption, and blow-by. The power loss due to friction, i.e., subsystem response R_1, links the two levels. The top- and bottom-level ATC problems are formulated as

  min_{R_1}  (R_0(R_1) − T)² + (R_1 − R_1^l)²    (3)

and

  min_{x_11, x_12, x_13, x_14}  (R_1(x_11, x_12, x_13, x_14) − R_1^u)²    (4)
  subject to  liner wear rate = g_11(x_11, x_12, x_13, x_14) ≤ 2.4 × 10^(−12) m³/s
              blow-by = g_12(x_11, x_12, x_13, x_14) ≤ 4.25 × 10^(−5) kg/s
              oil consumption = g_13(x_11, x_12, x_13, x_14) ≤ 15.3 × 10^(−3) kg/hr
              1 µm ≤ x_11, x_12 ≤ 10 µm
              80 GPa ≤ x_13 ≤ 340 GPa
              150 BHV ≤ x_14 ≤ 240 BHV
respectively. The fuel consumption target T was set to zero to achieve the best fuel economy possible.
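A minimal sketch of this bi-level coordination for Problems (3) and (4) is given below, using SciPy's SLSQP optimizer. The simulation models f0 (engine) and f1 (ring/liner subassembly) and the constraint function g are assumed to be available as black-box callables (they are commercial codes in the chapter and are not reproduced here); g(x) is assumed to return the three constraint values already shifted so that g(x) <= 0 is feasible.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def atc_bilevel(f0, f1, g, x0, bounds, T=0.0, tol=1e-4, max_iter=20):
    """Alternate between the top-level problem (3) and bottom-level problem (4).

    f0 : engine model, R0 = f0(R1)     (brake-specific fuel consumption)
    f1 : subassembly model, R1 = f1(x) (power loss due to friction)
    g  : vector-valued constraint function, g(x) <= 0 (wear, blow-by, oil consumption)
    """
    x, R1_low = np.array(x0, float), f1(x0)          # initial "what the child can"
    for _ in range(max_iter):
        # top level: choose the power-loss target closest to the child's value, Problem (3)
        top = minimize(lambda R1: (f0(R1[0]) - T)**2 + (R1[0] - R1_low)**2,
                       [R1_low], method="SLSQP")
        R1_up = top.x[0]
        # bottom level: match the cascaded target subject to local constraints, Problem (4)
        bot = minimize(lambda x_: (f1(x_) - R1_up)**2, x, method="SLSQP",
                       bounds=bounds,
                       constraints=NonlinearConstraint(g, -np.inf, 0.0))
        x_new, R1_new = bot.x, f1(bot.x)
        if abs(R1_new - R1_low) < tol and np.linalg.norm(x_new - x) < tol:
            break
        x, R1_low = x_new, R1_new
    return x, R1_new, f0(R1_new)
```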
3.1 Deterministic design optimization results
It is desired to minimize power loss due to friction in order to optimize engine operation and thus maximize fuel economy. Therefore, it was anticipated that the bottom-level optimization problem would yield a design with as smooth surfaces (low surface roughnesses) as possible without violating the bounds or the nonlinear design constraints. The ATC process of solving Problems (4) and (3) iteratively converged after two iterations. The obtained deterministic optimal ring/liner subassembly design is shown in Table 5.1. The ring surface roughness and the liner’s Young’s modulus optimal values are at their lower bounds; the liner surface roughness and hardness have interior optimal values. Figure 5.5 shows the two-dimensional projection of the design space
Table 5.1 Deterministic optimal ring/liner subassembly design.

  Variable   Description                       Value
  µ_X11      Ring surface roughness, [µm]      1.0
  µ_X12      Liner surface roughness, [µm]     3.5
  x_13       Liner Young's modulus, [GPa]      80
  x_14       Liner hardness, [BHV]             175
Figure 5.5 Two-dimensional projection of the design space (liner surface roughness versus ring surface roughness, showing the liner wear rate, blow-by and oil consumption constraint boundaries, contours of decreasing brake-specific fuel consumption, and the optimal design).
spanned by the two surface roughness variables when the liner Young’s modulus and the liner hardness are kept fixed at 80 GPa and 175 BHV, respectively. The liner surface roughness is not at its lower bound because the oil consumption constraint is active: increased liner surface roughness is required to maintain an optimal oil film thickness in order to avoid excessive oil consumption.
4 The probabilistic ATC formulation

In this section, the ATC formulation is extended to account for uncertainties. Adopting a probabilistic framework, we model uncertain quantities as random variables
(denoted by upper-case Latin symbols). In general, we use the terms random design optimization variable and random design optimization parameter to differentiate between random variables that are design optimization variables and random variables that are design optimization parameters in the optimization problems. Here, to avoid confusion, and without loss of generality, we assume that all design optimization parameters are deterministic, and we omit them in the mathematical formulations. We use the means of random design variables as optimization variables and assume that their standard deviation is known or has been estimated with sufficient accuracy. The objective and the constraints must be reformulated. We replace the objective function with its expectation, and we now require that the probability of violating a constraint is less than some pre-specified probability of failure. The probabilistic formulation of Problem (2) is (Kokkolaras et al. 2006)

  min  ||E[R_ij] − µ_Rij^u||₂² + ||µ_Yij − µ_Yij^u||₂²
        + Σ_{k=1}^{n_ij} ||µ_R(i+1)k − µ_R(i+1)k^l||₂² + Σ_{k=1}^{n_ij} ||µ_Y(i+1)k − µ_Y(i+1)k^l||₂²    (5)

  with respect to  µ_R(i+1)1, . . . , µ_R(i+1)n_ij, µ_Xij, µ_Yij, µ_Y(i+1)1, . . . , µ_Y(i+1)n_ij
  subject to  P[g_ijk(R_ij, X_ij, Y_ij) > 0] ≤ P_fk,  k = 1, 2, . . . , M_ij
  with  R_ij = f_ij(R_(i+1)1, . . . , R_(i+1)n_ij, X_ij, Y_ij)
where M_ij is the number of design constraints, P[ · ] denotes probability measure, and P_fk is a pre-specified probability of failure for design constraint k. Liu et al. considered more than one moment to represent random variables in the ATC optimization problems (Liu et al. 2006).

4.1 Uncertainty propagation
In a multilevel hierarchy, responses (outputs) of lower-level elements are inputs to higher-level elements. This is an issue of utmost importance in design optimization of hierarchically decomposed systems under uncertainty, since the solution of probabilistic optimization problems requires moment estimation of high-level random optimization variables that are functions of low-level random optimization variables. In other words, we need appropriate techniques for uncertainty propagation.

Consider element j at level i. By solving Problem (5), we obtain optimal values µ*_R(i+1)1, . . . , µ*_R(i+1)n_ij, µ*_Xij, and µ*_Yij. Using the functional dependency relation R_ij = f_ij(R_(i+1)1, . . . , R_(i+1)n_ij, X_ij, Y_ij), we must now estimate the moments (typically the first two, mean and standard deviation) of the responses R_ij, since the latter constitute random optimization variables of the parent probabilistic optimal design problem. This needs to be done for all problems at all levels of the hierarchy. An efficient and accurate technique is therefore required for propagating uncertainties through the multilevel hierarchy. We assume that all element responses in the multilevel hierarchy are uncorrelated.

Many probabilistic design methods and software packages use a first-order Taylor expansion about the current mean design to estimate the mean and standard deviation of propagated random responses. We have found that while the mean values can be estimated relatively accurately, standard deviation estimates are unacceptably inaccurate in many cases (Youn et al. 2004; Kokkolaras et al. 2004). Thus, we propose
an uncertainty propagation technique we developed based on the highly efficient and accurate Advanced Mean Value (AMV) method (Wu et al. 1990). The AMV method has been originally proposed as a computationally efficient method for generating the cumulative distribution function (CDF) of a response R = f (X) that is a random variable (Wu et al. 1990). It uses a simple correction to compensate for errors introduced by a utilized Taylor series approximation. Based on the CDF definition, we have the following first-order relation between the CDF value of R at a particular value f0 and the reliability index β: P[f ≤ f0 ] = P[g ≤ 0] = (−β)
(6)
where g(X) = f(X) − f_0 and Φ is the standard normal cumulative distribution function. According to the AMV method, if the random variables X are uncorrelated and normally distributed with means μ_X and standard deviations σ_X, the most probable point (MPP) of failure (or design point) in the standard normal space can be computed by

U^* = -\beta \, \frac{\Sigma_X \nabla g_{lin}(\mu_X)}{|\Sigma_X \nabla g_{lin}(\mu_X)|} = -\beta \, \frac{\Sigma_X \nabla f(\mu_X)}{|\Sigma_X \nabla f(\mu_X)|} \tag{7}

where g_lin(X) is a linear approximation of g(X) at μ_X and Σ_X is a diagonal matrix whose diagonal is the vector σ_X. In the original space the MPP coordinates are

X^* = \Sigma_X U^* + \mu_X \tag{8}
Note that for random variables that are not normally distributed, a nonlinear transformation is needed according to the Rackwitz-Fiessler method (Haldar and Mahadevan 2000). The AMV method corrects the CDF value of R in Equation (6) with

P[f \le f(X^*)] = \Phi(-\beta) \tag{9}
by replacing the f_0 value corresponding to the reliability index β with f(X*). The process of Equations (6) through (9) is repeated for a few (different) β values, so that a region of the CDF of R is constructed. The derivative of that CDF region provides the corresponding probability density function (PDF) value. The obtained CDF and PDF values are finally used to compute equivalent mean and standard deviation at the current design point. This AMV-based technique is used to estimate the mean and standard deviation of each response for all the elements of the multilevel hierarchy according to the discussion in Section 4.1. The technique is computationally efficient since it requires only a single linearization of the performance function at the mean value and an additional function evaluation at each required CDF level. Reference (Wu 1994) provides more details regarding the accuracy and efficiency of the AMV method on several applications.

4.1.1 Illustrative examples

The linearization (or MVFOSM-based or method of moments) and AMV-based techniques were used to estimate the first two moments of several nonlinear functions. All
random variables were assumed to be normal. Test functions and input statistics are presented in Table 5.2 and results are summarized in Table 5.3. One million samples were used for the Monte Carlo simulations. By inspecting Table 5.3, it can be seen that while the mean-related errors of the linearization approach are within acceptable limits, standard deviation errors can be quite large. The AMV-based moment estimation method always performs better and never exhibits unacceptable errors.

4.2 Probabilistic engine design

We now apply the probabilistic ATC methodology to our bi-level engine design problem. Here, the root mean square (RMS) of asperity height is used to represent asperity roughness, which is assumed to be normally distributed. Thus, the surface roughness design variables are now normal random design optimization variables. The probabilistic formulations of the top- and bottom-level ATC problems are

\begin{aligned}
\min_{\mu_{R_1}} \quad & (E[R_0] - T)^2 + (\mu_{R_1} - E[R_1]^l)^2 \\
\text{with} \quad & R_0 = f_0(R_1)
\end{aligned} \tag{10}
Table 5.2 Test functions and input statistics.

#   Function                                                  Input statistics
1   X_1^2 + X_2^2                                             X_1 ~ N(10, 2), X_2 ~ N(10, 1)
2   -exp(X_1 - 7) - X_2 + 10                                  X_1,2 ~ N(6, 0.8)
3   1 - X_1^2 X_2 / 20                                        X_1,2 ~ N(5, 0.3)
4   1 - (X_1 + X_2 - 5)^2 / 30 - (X_1 - X_2 - 12)^2 / 120     X_1,2 ~ N(5, 0.3)
5   1 - 80 / (X_1^2 + 8 X_2 + 5)                              X_1,2 ~ N(5, 0.3)
Table 5.3 Estimated moments and errors relative to Monte Carlo simulation (MCS) results.

#             1        2        3         4         5
μ_lin         200.0    3.6321   −5.25     −1.0333   −0.1428
μ_AMV         203.4    3.6029   −5.3495   −1.0380   −0.1454
μ_MCS         205.0    3.4921   −5.3114   −1.0404   −0.1448
ε_lin [%]     −2.44    4.00     −1.15     −0.68     −1.30
ε_AMV [%]     −0.78    3.17     0.71      −0.23     0.41
σ_lin         44.72    1.9386   0.8385    0.1166    0.00627
σ_AMV         45.20    0.9013   0.8423    0.1653    0.00631
σ_MCS         45.10    0.9327   0.8407    0.1653    0.00630
ε_lin [%]     −0.84    107.85   −0.26     29.46     −0.47
ε_AMV [%]     0.22     −3.36    0.19      0         0.15
and

\begin{aligned}
\min_{\mu_{X_{11}}, \mu_{X_{12}}, x_{13}, x_{14}} \quad & (E[R_1] - \mu^u_{R_1})^2 \\
\text{subject to} \quad & P[\text{liner wear rate} = G_{11}(X_{11}, X_{12}, x_{13}, x_{14}) > 2.4 \times 10^{-12}\ \text{m}^3/\text{s}] \le P_f \\
& P[\text{blow-by} = G_{12}(X_{11}, X_{12}, x_{13}, x_{14}) > 4.25 \times 10^{-5}\ \text{kg/s}] \le P_f \\
& P[\text{oil consumption} = G_{13}(X_{11}, X_{12}, x_{13}, x_{14}) > 15.3 \times 10^{-3}\ \text{kg/hr}] \le P_f \\
& P[X_{11} < 1\ \mu\text{m}] \le P_f, \quad P[X_{11} > 10\ \mu\text{m}] \le P_f \\
& P[X_{12} < 1\ \mu\text{m}] \le P_f, \quad P[X_{12} > 10\ \mu\text{m}] \le P_f \\
& 80\ \text{GPa} \le x_{13} \le 340\ \text{GPa} \\
& 150\ \text{BHV} \le x_{14} \le 240\ \text{BHV} \\
\text{with} \quad & R_1 = f_1(X_{11}, X_{12}, x_{13}, x_{14})
\end{aligned} \tag{11}
respectively. The standard deviation of the surface roughnesses was assumed to be 1.0 μm, and remained constant throughout the ATC process. The assigned probability of failure P_f was 0.13%, which corresponds to the target reliability index β = 3. The fuel consumption target T was simply set to zero to achieve the best fuel economy possible. Note that since the random variables are normally distributed, the associated linear probabilistic bound constraints are reformulated as deterministic. For example,

P[X_{11} < 1\ \mu\text{m}] \le P_f \;\Leftrightarrow\; P[X_{11} - 1\ \mu\text{m} < 0] \le P_f \;\Leftrightarrow\; \Phi\!\left( \frac{0 - (\mu_{X_{11}} - 1\ \mu\text{m})}{\sigma_{X_{11}}} \right) \le \Phi(-\beta) \;\Rightarrow\; -\frac{\mu_{X_{11}} - 1\ \mu\text{m}}{\sigma_{X_{11}}} \le -\beta \;\Leftrightarrow\; \frac{\mu_{X_{11}} - 1\ \mu\text{m}}{\sigma_{X_{11}}} \ge \beta \;\Leftrightarrow\; \mu_{X_{11}} - 1\ \mu\text{m} \ge \beta\sigma_{X_{11}} \;\Leftrightarrow\; \mu_{X_{11}} \ge 1\ \mu\text{m} + \beta\sigma_{X_{11}} \;\Leftrightarrow\; \mu_{X_{11}} \ge 4\ \mu\text{m}

Similarly, the other three probabilistic bound constraints in Problem (11) are reformulated as

\mu_{X_{11}} \le 7\ \mu\text{m}; \quad \mu_{X_{12}} \ge 4\ \mu\text{m}; \quad \mu_{X_{12}} \le 7\ \mu\text{m}
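As a quick numerical check of this conversion, the following minimal sketch (Python with SciPy is assumed here purely for illustration; it is not part of the chapter's implementation) recovers the target reliability index from the assigned probability of failure and the resulting deterministic bounds on the roughness means.

```python
from scipy.stats import norm

Pf = 0.0013                 # assigned probability of failure (0.13%)
sigma = 1.0                 # assumed standard deviation of surface roughness [um]
lower, upper = 1.0, 10.0    # original probabilistic bounds on X11, X12 [um]

beta = -norm.ppf(Pf)        # target reliability index, approximately 3

# P[X < lower] <= Pf  <=>  mu >= lower + beta*sigma
# P[X > upper] <= Pf  <=>  mu <= upper - beta*sigma
mu_min = lower + beta * sigma
mu_max = upper - beta * sigma
print(f"beta = {beta:.2f},  {mu_min:.2f} um <= mu <= {mu_max:.2f} um")
```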
The obtained probabilistic optimal ring/liner subassembly design is shown in Table 5.4.

Table 5.4 Probabilistic optimal ring/liner subassembly design.

Variable   Description                       Value
μ_X11      Ring surface roughness [μm]       4.00
μ_X12      Liner surface roughness [μm]      6.15
x_13       Liner Young's modulus [GPa]       80
x_14       Liner hardness [BHV]              240

Table 5.5 Reliability analysis results.

Constraint         Active   P_f       MCS P_f
Liner wear rate    No       <0.13%    0%
Blow-by            No       <0.13%    0%
Oil consumption    Yes      0.13%     0.16%

Table 5.6 Estimated moments and errors relative to Monte Carlo simulation (MCS).

Response       Power loss [kW]   Fuel consumption [kg/kWhr]
μ_lin          0.3950            0.5341
μ_AMV          0.3922            0.5431
μ_MCS          0.3932            0.5432
ε_lin [%]      0.45              −0.01
ε_AMV [%]      −0.25             −0.01
σ_lin          0.0481            0.00757
σ_AMV          0.0309            0.00760
σ_MCS          0.0311            0.00759
ε_lin [%]      54.6              −0.25
ε_AMV [%]      −0.64             0.13

The ring surface roughness optimal value is at its probabilistic lower
minimum, while the liner's Young's modulus and hardness optimal values are at their deterministic lower and upper bounds, respectively. The liner surface roughness variable has an interior optimal value because the oil consumption constraint is probabilistically active. Constraint activity in probabilistic design optimization indicates that the constraint's MPP lies on the target reliability circle. The probabilistic optimal values of the surface roughness optimization variables have changed relative to their deterministic counterparts to accommodate the uncertainty, i.e., the optimum shown in the two-dimensional projection of the design space (Figure 5.5) moved to the inside (we cannot show the location of the probabilistic optimum in the same figure because it lies in a different two-dimensional projection of the design space due to the change in the liner hardness optimal value). A Monte Carlo simulation was performed to assess the accuracy of the reliability analyses of the probabilistic constraints. One million samples were generated using the mean and standard deviation values of the design variables, and the constraints were evaluated using these samples to calculate the probability of failure. Results are summarized in Table 5.5. For the active probabilistic constraint, the obtained design is 0.03% less reliable than required. This error is due to the first-order reliability approximation used in the probabilistic optimization problem. Propagation of uncertainty was modeled using the AMV-based technique described in Section 4.1. Table 5.6 summarizes the estimated moments for the two responses of the bi-level hierarchy. Results obtained using the first-order approximation approach (linearization) are included to illustrate the large error that may be introduced. Specifically, it can be seen that the standard deviation estimate of the power loss (necessary for solving the top-level probabilistic optimization problem) is 0.0481 kW
when using a first-order approximation. This value is 54.6% larger than the Monte Carlo simulation estimate of 0.0311 kW. Such large errors will be propagated during the ATC process and yield useless design results. Using the AMV-based approach, we obtained an estimate of 0.0309 kW, which is only 0.64% smaller than the Monte Carlo estimate. Using the AMV-based technique is advantageous because CDFs and PDFs can be generated with high efficiency. In our example, power loss (the subsystem response) is a highly nonlinear function of the subsystem's inputs. In fact, its PDF is multimodal, as shown in Figure 5.6. This figure depicts (a) the PDF obtained using the AMV-based technique and (b) the frequency diagram generated from a histogram that was obtained using Monte Carlo simulation with one million samples. The agreement is quite satisfactory and illustrates the usefulness of the AMV-based approach to propagate uncertainty for highly nonlinear functions.

Figure 5.6 Power loss uncertainty: (a) PDF obtained using the AMV-based technique and (b) frequency diagram obtained using Monte Carlo simulation.
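To make the propagation step concrete, the following is a minimal Python sketch of an AMV-type moment estimate: a single gradient of the response at the mean fixes the MPP direction, one extra function evaluation per CDF level provides the corrected CDF points of Equations (6)-(9), and an equivalent mean and standard deviation are then computed from those points. This is an illustration under our own assumptions (including the made-up response function standing in for the power loss simulation), not the chapter's implementation.

```python
import numpy as np
from scipy.stats import norm

def amv_moments(f, mu, sigma, betas=np.linspace(-4.0, 4.0, 41), h=1e-6):
    """Approximate mean/std of R = f(X), X ~ N(mu, diag(sigma^2)), from an AMV-type
    CDF construction: one linearization at the mean plus one function evaluation
    per CDF level (cf. Equations (6)-(9))."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    # single linearization of the performance function at the mean design
    grad = np.array([(f(mu + h*e) - f(mu - h*e)) / (2*h) for e in np.eye(len(mu))])
    alpha = sigma * grad
    alpha /= np.linalg.norm(alpha)                  # MPP direction in standard normal space
    # MPP for CDF level Phi(-beta): u* = -beta*alpha, x* = mu + sigma*u*
    r_vals = np.array([f(mu + sigma * (-b) * alpha) for b in betas])
    cdf_vals = norm.cdf(-betas)                     # corrected CDF values assigned to f(x*)
    # equivalent moments from the (response, CDF) pairs via probability weights
    order = np.argsort(cdf_vals)
    r, c = r_vals[order], cdf_vals[order]
    w = np.diff(np.concatenate(([0.0], c, [1.0])))
    r_mid = np.concatenate(([r[0]], 0.5*(r[1:] + r[:-1]), [r[-1]]))
    mean = np.sum(w * r_mid)
    std = np.sqrt(np.sum(w * (r_mid - mean)**2))
    return mean, std

# illustrative nonlinear response (a stand-in, not the engine simulation)
f = lambda x: x[0]**2 * np.sin(x[1]) + x[1]
print(amv_moments(f, mu=[2.0, 1.0], sigma=[0.2, 0.1]))
```

A mean-value first-order estimate would instead use only the gradient at the mean, which is what produces the large standard deviation errors reported in Table 5.6.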
5 The ATC formulation for interval uncertainty quantification

The probabilistic approach is very useful and should be adopted when the designer has sufficient data to model uncertain quantities as random variables with appropriate probability distributions. When this is not the case, it is imperative to assume that the uncertain quantities can take any value within a range. Note that this is not equivalent to assuming a uniform distribution, as it does not imply that the probability of taking a specific value in a range is equal to that of any other value within that range. We view the interval analysis approach as a special case of possibility theory (Dubois and Prade 1988), where information availability is limited to a minimum. Designs obtained using possibility-based design optimization (PBDO) methods are typically conservative compared to the ones obtained using probabilistic design optimization, also known as reliability-based design optimization (RBDO), methods. Possibility-based designs sacrifice
additional optimality compared to RBDO designs to account for lack of uncertainty information and avoid constraint violation. According to possibility theory, the possibility π(A) of event A occurring provides an upper bound on the probability P(A) of that event occurring, i.e., P(A) ≤ π(A). From the design point of view, we can conclude that what is possible may not be probable, and what is impossible is also improbable. If the possibility of violating a constraint is zero, then the probability of violating the same constraint will also be zero. If feasibility of a constraint g is formulated in negative null form (g ≤ 0), the constraint is always satisfied if π(g > 0) = 0. By introducing the notion of membership functions and α-cuts, we can relax this requirement as π(g > 0) ≤ α, provided that 0 < α ≪ 1 (Zadeh 1978). It can be shown that if the maximum possibly attainable value of the constraint g at the corresponding α-cut is less than or equal to zero, i.e., g^α_max ≤ 0, the possibility of violating this constraint is less than α (Mourelatos and Zhou 2005). In general, membership functions express how the ranges of values that bound the uncertain quantities shrink as the amount of information increases. The α-cuts denote levels of information, starting at the lowest (α = 0), where the range is largest, and increasing to the highest (α = 1), where the range is the smallest (possibly a crisp value). In this work, we will assume that the lowest level of information is available, where α is equal to zero. Therefore, we do not have to consider membership functions and higher α-cuts, thus eliminating ad hoc selections, but also maximizing the conservative nature of the obtained designs. Given an interval uncertainty in a design variable X, the process of identifying the maximum attainable value g_max of a constraint g(X) requires the solution of an optimization problem. Given a nominal value X_N for the design variable X, we first identify the uncertainty interval [(1 − δ_X)X_N, (1 + δ_X)X_N], where δ_X denotes the relative deviation from the nominal value X_N. Then, we solve the simple bound-constrained problem

\begin{aligned}
\max_{x} \quad & g(x) \\
\text{subject to} \quad & (1 - \delta_X) X_N \le x \le (1 + \delta_X) X_N
\end{aligned} \tag{12}
to compute g_max. In a design optimization problem with many constraints where design variables are subject to interval uncertainty, finding the optimal design involves a nested optimization process known as robust optimization. An outer-loop optimization generates a sequence of iterates of nominal value vectors X_N for the uncertain design variables X. For each iterate X_N, an inner-loop optimization problem like the one formulated in Equation (12) is solved for each constraint. These worst-case optimization problems (also referred to as "anti-optimization'' problems (Elishakoff et al. 1994)) may involve a larger number of optimization variables, but are only bound-constrained. The primary purpose of solving these problems is to obtain the maximal (worst) value of each constraint g that may be attained due to the uncertainty in X. These constraint values are used in the outer-loop optimization, where the worst objective value is maximized and the worst constraint value must be feasible. Nevertheless, the inner-loop optimal values X*_N can be used to attempt to control uncertainty, i.e., what values to strive for and what values to avoid, if possible. The ATC formulation for design variables and parameters that are subject to interval uncertainty is a straightforward application of the robust optimization problem
formulation. The implication of dealing with intervals is that two values must be matched for each uncertain quantity that links two elements: the "worst-case'' value (computed by solving a maximization problem of the form presented in Equation (12)), and the "best-case'' value (computed by solving a minimization problem). The ATC formulation for interval uncertainties is

\begin{aligned}
\min \quad & \|R_{ij_w} - R^u_{ij_w}\|_2^2 + \|R_{ij_b} - R^u_{ij_b}\|_2^2 + \|Y_{ij_w} - Y^u_{ij_w}\|_2^2 + \|Y_{ij_b} - Y^u_{ij_b}\|_2^2 \\
& + \sum_{k=1}^{n_{ij}} \|R_{(i+1)k_w} - R^l_{(i+1)k_w}\|_2^2 + \sum_{k=1}^{n_{ij}} \|R_{(i+1)k_b} - R^l_{(i+1)k_b}\|_2^2 \\
& + \sum_{k=1}^{n_{ij}} \|Y_{(i+1)k_w} - Y^l_{(i+1)k_w}\|_2^2 + \sum_{k=1}^{n_{ij}} \|Y_{(i+1)k_b} - Y^l_{(i+1)k_b}\|_2^2 \\
\text{with respect to} \quad & R_{(i+1)1_N}, \ldots, R_{(i+1)n_{ij_N}}, X_{ij_N}, Y_{ij_N}, Y_{(i+1)1_N}, \ldots, Y_{(i+1)n_{ij_N}} \\
\text{subject to} \quad & g_{ij_{max}}(R_{(i+1)1_N}, \ldots, R_{(i+1)n_{ij_N}}, X_{ij_N}, Y_{ij_N}) \le 0 \\
\text{with} \quad & \{R_{ij_w}, R_{ij_b}\} = f_{ij}(R_{(i+1)1_N}, \ldots, R_{(i+1)n_{ij_N}}, X_{ij_N}, Y_{ij_N})
\end{aligned} \tag{13}
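The inner-loop evaluation that feeds g_ij_max in formulation (13) is simply the bound-constrained maximization of Equation (12). The sketch below (Python with SciPy; a crude multi-start stand-in for the global search that the worst case really requires, applied to a hypothetical constraint function) illustrates that step.

```python
import numpy as np
from scipy.optimize import minimize

def worst_case(g, x_nominal, delta, n_starts=8, seed=0):
    """Inner-loop 'anti-optimization' of Equation (12): maximize a constraint g over
    [(1-delta)*xN, (1+delta)*xN] using multi-start local search (a rough substitute
    for the global solution the worst case requires)."""
    xN = np.asarray(x_nominal, float)
    lo, hi = (1.0 - delta) * xN, (1.0 + delta) * xN
    bounds = list(zip(lo, hi))
    rng = np.random.default_rng(seed)
    best_x, best_g = None, -np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(lambda x: -g(x), x0, bounds=bounds, method="L-BFGS-B")
        if -res.fun > best_g:
            best_g, best_x = -res.fun, res.x
    return best_x, best_g

# hypothetical constraint in negative null form (g <= 0 feasible), for illustration only
g = lambda x: x[0] * np.sin(3.0 * x[1]) - 0.5
x_star, g_max = worst_case(g, x_nominal=[1.0, 2.0], delta=0.1)
print("worst-case point:", x_star, " g_max:", g_max)
```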
5.1 ATC-based optimization results

The ATC process for design optimization problems with interval uncertainty variables is illustrated in this section using the same engine design problem (Kokkolaras et al. 2006). As in the probabilistic case, the considered uncertain quantities are ring and liner surface roughnesses; the root mean square (RMS) of asperity height is used to represent and quantify surface roughness. Here, let us assume that we do not have sufficient data to infer that surface roughness is normally distributed. Instead, we assume that it exhibits deviations from nominal values that can be quantified by an interval. This surface roughness interval uncertainty is propagated through the simulation hierarchy to estimate intervals for power loss and fuel consumption. Since uncertainty information is available at the bottom level, we first formulate and solve the bottom-level problem

\begin{aligned}
\min_{X_{11_N}, X_{12_N}, x_{13}, x_{14}} \quad & (R_{1_w} - R^u_{1_w})^2 + (R_{1_b} - R^u_{1_b})^2 \\
\text{subject to} \quad & \text{max. liner wear rate} = G_{11_{max}}(X_{11_N}, X_{12_N}, x_{13}, x_{14}) \le 2.4 \times 10^{-12}\ \text{m}^3/\text{s} \\
& \text{max. blow-by} = G_{12_{max}}(X_{11_N}, X_{12_N}, x_{13}, x_{14}) \le 4.25 \times 10^{-5}\ \text{kg/s} \\
& \text{max. oil consumption} = G_{13_{max}}(X_{11_N}, X_{12_N}, x_{13}, x_{14}) \le 15.3 \times 10^{-3}\ \text{kg/hr} \\
& 2\ \mu\text{m} \le X_{11_N} \le 9\ \mu\text{m} \\
& 2\ \mu\text{m} \le X_{12_N} \le 9\ \mu\text{m} \\
& 80\ \text{GPa} \le x_{13} \le 340\ \text{GPa} \\
& 150\ \text{BHV} \le x_{14} \le 240\ \text{BHV}
\end{aligned} \tag{14}
where X11 and X12 are (uncertain) ring and liner surface roughness design variables, respectively, x13 and x14 are (deterministic) liner Young’s modulus and hardness design variables, respectively, and R1 is power loss due to friction (subscripts w and b denote worst and best possible values due to interval uncertainty, respectively, while superscript u denotes target value from the upper level). According to the interval analysis approach, at the outer-loop optimization we determine nominal values X11N and X12N (as well as optimal values for x13 and x14 ), while solving five inner-loop optimization
problems given the (assumed invariant) surface roughness interval uncertainty: one best-case scenario for the power loss, one worst-case scenario for the power loss, and one worst-case scenario each for oil consumption, blow-by, and wear rate. Since we do not have information from the top-level problem yet, i.e., target values for R^u_1w and R^u_1b, we assume these to be equal to zero. Once the power loss uncertainty interval [R_1b, R_1w] has been obtained, we compute the midpoint and the percentage deviation from the endpoints to pass this uncertainty information to the top-level problem, which is formulated as

\min_{R_{1_N}} \quad (R_{0_w} - T_w)^2 + (R_{0_b} - T_b)^2 + \omega \left( (R_{1_w} - R^l_{1_w})^2 + (R_{1_b} - R^l_{1_b})^2 \right) \tag{15}
where R_0 denotes fuel consumption. The symbol T denotes fixed engine design target values, while the superscript "l'' denotes interval target values from the lower level, so that the top-level problem does not consider solutions that are too far from what the bottom level can provide. The weight ω can be adjusted to emphasize consistency rather than fuel consumption optimality. At the outer-loop optimization of this problem we determine nominal values of power loss while solving two inner-loop optimization problems given the quantified (at the lower level) power loss interval uncertainty: one best-case scenario for the fuel consumption and one worst-case scenario for the fuel consumption. After the top-level problem is solved (note that the desired fuel consumption interval target values may not be achieved), the power loss interval and the corresponding uncertainty are updated and passed down to the bottom-level problem, which is then solved again, and so on. We assume that the ATC coordination process has converged when the quantities no longer change significantly. Table 5.7 reports the results obtained assuming δ_X = 0.1 (10%) for both the ring and the liner surface roughness uncertainty. The power loss links the two problems. In order to achieve the best (minimal) fuel consumption possible, we set the top-level problem target values for both the worst and the best fuel consumption equal to zero. Of course, these target values are unattainable. Therefore, the power loss interval computed by solving the bottom-level problem ([0.277, 0.369]) cannot be matched exactly when solving the top-level problem. By increasing the values of the weight ω, we increase consistency, i.e., interval matching for the power loss ([0.263, 0.356] for ω = 1000).

Table 5.7 Results of the ring/liner problem using the interval ATC formulation.

Bottom level
X_11N [μm]   X_12N [μm]   x_13 [GPa]   x_14 [BHV]   R_1b [kW]   R_1w [kW]   δ_R1 [%]
2.06         5.87         80           240          0.277       0.369       15

Top level
            R_1b [kW]   R_1w [kW]   δ_R1 [%]   R_0b [kg/kWhr]   R_0w [kg/kWhr]   δ_R0 [%]
ω = 1       0.176       0.238       15         0.486            0.499            1.3
ω = 10      0.253       0.343       15         0.502            0.522            2
ω = 1000    0.263       0.356       15         0.504            0.525            2

It is interesting that while the power loss uncertainty is invariantly quantified at 15% around the interval midpoint, the fuel consumption uncertainty
changes for different weight values (from 1.3% to 2% around the interval midpoint). This implies that uncertainty is not always invariant with respect to the design point, as assumed in many design under uncertainty methodologies.
6 Conclusions

We presented how analytical target cascading (ATC), a methodology for design optimization of hierarchically decomposed multilevel systems, can account for uncertainties. We first assumed that we have sufficient information available to model the uncertain quantities as random variables and used the popular and powerful probabilistic framework to reformulate the ATC problems as reliability-based design optimization (RBDO) problems. We used the moments of the random variables as optimization variables. Recognizing that first-order approximations may yield inaccurate estimates of standard deviations of propagated random variables, we developed an uncertainty propagation technique that is based on the advanced mean value (AMV) method. This technique can be used to generate approximate CDFs and PDFs that yield sufficiently accurate estimations of means and standard deviations of propagated random variables. A simple yet illustrative bi-level example was used to demonstrate the probabilistic ATC methodology. The results showed that the probabilistic formulation of the ATC process can be applied successfully using a bottom-up coordination. The computationally efficient AMV-based technique for the required propagation of uncertainties produced standard deviation estimates that were much more accurate than the ones obtained using first-order approximations, ensuring the meaningfulness of the ATC results. We then considered the case where we have incomplete uncertainty information available, and we assumed ranges for the uncertain quantities, adopting an interval analysis approach to formulate and solve robust optimization ATC problems (also known as worst-case optimization or anti-optimization). The interval analysis approach yields design solutions that are conservative relative to the ones obtained using a probabilistic design approach, especially as interval uncertainty increases. However, the interval analysis approach ensures feasibility at all times. In terms of computational cost, the nested optimization of the interval analysis approach seems to be less expensive than the required reliability analysis (analytical or simulation-based) in the probabilistic approach. It is also less challenging numerically since the inner-loop optimization problems are simple bound-constrained problems. The main challenge is that the inner-loop problems require global solutions to ensure consideration of the worst-case scenario. One of the advantages of the interval analysis approach is that the solution of the inner-loop problems provides information to the designer with respect to the beneficial or adverse effects of uncertainty so that, if possible, resources can be allocated to control critical uncertainty quantities. A significant finding is that interval uncertainty does not necessarily propagate symmetrically or invariantly.
References Chan, K.Y., Kokkolaras, M., Papalambros, P.Y., Skerlos, S.J. & Mourelatos, Z. 2004. Propagation of uncertainty in optimal design of multilevel systems: Piston-ring/cylinder-liner case study. In Proceedings of the SAE World Congress, Detroit, Michigan, Paper No. 2004-01-1559.
Dubois, D. & Prade, H. 1988. Possibility Theory. New York: Plenum Press. Elishakoff, I., Haftka, R.T. & Fang, J.J. 1994. Structural design under bounded uncertainty – optimization with anti-optimization. International Journal of Computers and Structures 53(6):1401–1405. Haimes, Y.Y., Tarvainen, K., Shima, T. & Thadathil, J. 1990. Hierarchical Multiobjective Analysis of Large-Scale Systems. Hemisphere Publishing Corporation, pages 41–42. Haldar, A. & Mahadevan, S. 2000. Probability, Reliability, and Statistical Methods in Engineering Design. John Wiley & Sons, p. 205. Kim, H.M. 2001. Target Cascading in Optimal System Design. PhD thesis, University of Michigan. Kim, H.M., Kokkolaras, M., Louca, L.S., Delagrammatikas, G.J., Michelena, N.F., Filipi, Z.S., Papalambros, P.Y., Stein, J.L. & Assanis, D.N. 2002. Target cascading in vehicle redesign: A class VI truck study. International Journal of Vehicle Design 29(3):1–27. Kim, H.M., Michelena, N.F., Papalambros, P.Y. & Jiang, T. 2003. Target cascading in optimal system design. ASME Journal of Mechanical Design 125(3):474–480. Kim, H.M., Rideout, D.G., Papalambros, P.Y. & Stein, J.L. 2003. Analytical target cascading in automotive vehicle design. ASME Journal of Mechanical Design 125(3):481–489. Kokkolaras, M., Fellini, R., Kim, H.M., Michelena, N.F. & Papalambros, P.Y. 2002. Extension of the target cascading formulation to the design of product families. Structural and Multidisciplinary Optimization 24(4):293–301. Kokkolaras, M., Louca, L.S., Delagrammatikas, G.J., Michelena, N.F., Filipi, Z.S., Papalambros, P.Y., Stein, J.L. & Assanis, D.N. 2004. Simulation-based optimal design of heavy trucks by model-based decomposition: An extensive analytical target cascading case study. International Journal of Heavy Vehicle Systems 11(3-4):402–432. Kokkolaras, M., Mourelatos, Z.P. & Papalambros, P.Y. 2004. Design optimization of hierarchically decomposed multilevel systems under uncertainty. In Proceedings of the ASME Design Engineering Technical Conferences, Salt Lake City, Utah, Paper No. DETC2004/DAC-57357. Kokkolaras, M., Mourelatos, Z.P. & Papalambros, P.Y. 2006. Design optimization of hierarchically decomposed multilevel systems under uncertainty. ASME Journal of Mechanical Design 128(2):503–508. Kokkolaras, M., Mourelatos, Z.P. & Papalambros, P.Y. 2006. Impact of uncertainty quantification on design decisions for a hydraulic-hybrid powertrain engine. In Proceedings of the 47th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Newport, Rhode Island, Paper No. AIAA-2006-2001. Liu, H., Chen, W., Kokkolaras, M., Papalambros, P.Y. & Kimk H.M. 2006. Probabilistic analytical target cascading – a moment matching formulation for multilevel optimization under uncertainty. ASME Journal of Mechanical Design 128(4):991–1000. Louca, S., Kokkolaras, M., Delagrammatikas, G.J., Michelena, N.F., Filipi, Z.S., Papalambros, P.Y. & Assanis, D.N. 2002. Analytical target cascading for the design of an advanced technology heavy truck. In Proceedings of the 2002 ASME International Mechanical Engineering Congress and Exposition, New Orleans, LA. Paper No. IMECE-2002-32860. Merriam-Webster on-line (www.m-w.com), accessed April 2007. Michelena, N.F., Kim, H.M. & Papalambros, P.Y. 1999. A system partitioning and optimization approach to target cascading. In Proceedings of the 12th International Conference on Engineering Design, Munich, Germany. 
Michelena, N.F., Louca, L., Kokkolaras, M., Lin, C.-C., Jung, D., Filipi Z., Assanis, D., Papalambros, P.Y., Peng, H., Stein, J. & Feury, M. 2001. Design of an advanced heavy tactical truck: A target cascading case study. SAE 2001 Transactions – Journal of Commercial Vehicles. Also appeared in the Proceedings of the 2001 SAE International Truck and Bus Meeting and Exhibition, Chicago, IL, Paper No. 2001-01-2793. Michelena, N.F., Park, H. & Papalambros, P.Y. 2003. Convergence properties of analytical target cascading. AIAA Journal 41(5):897–905.
Mourelatos, Z.P. & Zhou, J. 2005. Reliability estimation and design with insufficient data based on possibility theory. AIAA Journal 43(8):1696–1705. Papalambros, P.Y. 2001. Analytical target cascading in product development. In Proceedings of the 3rd ASMO UK/ISSMO Conference on Engineering Design Optimization, Harrogate, North Yorkshire, England. Wu, Y.T. 1994. Computational methods for efficient structural reliability and reliability sensitivity analysis. AIAA Journal 32(8):1717–1723. Wu, Y.T., Millwater, H.R. & Cruse, T.A. 1990. Advanced probabilistic structural analysis method of implicit performance functions. AIAA Journal 28(9):1663–1669. Youn, B.D., Kokkolaras, M., Mourelatos, Z.P., Papalambros, P.Y., Choi, K.K. & Gorsich, D. 2004. Uncertainty propagation techniques for probabilistic design of multilevel systems. In Proceedings of the 10th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Albany, New York, Paper No. AIAA-2004-4470. Zadeh, L.A. 1978. Fuzzy sets as a basis for a theory of possibility. Fuzzy sets and systems 1:3–28.
Chapter 6
Design optimization of stochastic dynamic systems by algebraic reduced order models Gary Weickum, Matt Allen & Kurt Maute University of Colorado at Boulder, Boulder, CO, USA
Dan M. Frangopol Lehigh University, Bethlehem, PA, USA
ABSTRACT: This chapter addresses the need for efficient numerical stochastic techniques in the analysis and design optimization of dynamic systems. Most stochastic analysis techniques result in a heavy computational burden, the cost of which is amplified if embedded into a design optimization framework. This work seeks to alleviate the computational costs of analyzing dynamic systems by reduced order modeling techniques. The key to utilizing reduced order models for stochastic analysis and optimization lies in making them adaptable to design changes and variations in random parameters. This chapter presents an extended reduced order modeling method approximating the response of a dynamic system in the space of design and random parameters. The extended reduced order modeling technique is embedded into a stochastic analysis and design optimization framework. The accuracy and computational efficiency of extended reduced order models are verified with the stochastic analysis and design optimization of a linear structural dynamic system. Stochastic analyses are performed using Monte Carlo simulation, the first-order reliability method, and polynomial chaos expansion. The utility of the extended reduced order modeling method for design optimization purposes is illustrated by solving deterministic and reliability-based design optimization problems. Comparison of the stochastic analysis and design optimization results using full and reduced order models shows that the overall computational costs can be significantly diminished by the extended reduced order modeling method presented.
1 Introduction

Stochastic and reliability analyses of static structures are well explored, and mature computational procedures have been developed (Bjerager 1990; Ghanem and Spanos 1991; Schuëller 1997; Schuëller 2001). The integration of these analysis tools into design optimization processes has been widely accomplished in the design of static structural systems, as shown by (Enevoldsen and Sørensen 1994a,b; Chandu and Grandhi 1995; Yu, Choi, and Chang 1997a,b, 1998; Grandhi and Wang 1998; Luo and Grandhi 1997), among others. However, limited work has been done on formal methods to include reliability analyses in the design of dynamic systems. There has been a considerable amount of work done on optimization of the harmonic response of a dynamic system, such as optimization with eigenvalue criteria (Haug and Choi 1982; Masur 1984; Diaz and Kikuchi 1992). The dynamic systems of interest in this
work are those that require a time integration to find the transient response of the system. The computational costs associated with dynamic analyses of realistic models with a large number of degrees of freedom hinder their inclusion into stochastic-based optimization methods. Stochastic analysis techniques, such as Monte Carlo simulation and the first-order reliability method, attempt to characterize a system's probabilistic response due to random or uncertain inputs. A deterministic analysis of a system, assuming no input uncertainty, requires one system analysis. In contrast, the common factor to all stochastic-based analysis methods is the requirement of multiple deterministic analyses of the system at various points in the uncertain or random variable space. Design optimization seeks to find the optimal system within a design space satisfying a set of constraints. The common thread of all optimization algorithms is the necessity of multiple system analyses within the design variable space. One link between stochastic analysis and optimization is the requirement of multiple analyses of altered system configurations in a parameter space. This requirement is emphasized in a stochastic-based design optimization framework, as design criteria in the optimization procedure are now stochastic in nature. Therefore, the key to incorporating any computationally expensive system into a stochastic design framework is to decrease the expense of analyzing systems altered in a parameter space. Surrogate models have been developed allowing the approximation of the system response as a function of the design parameters based on performance predictions from high-fidelity simulation models. Surrogate models may be broadly characterized as data fit (local, multi-point, or global approximations), multi-fidelity (omitted physics, coarsened discretization or tolerances), or reduced order model (ROM) surrogates. A ROM mathematically reduces the system modeled, while still capturing the physics of the governing partial differential equations (PDEs), by projecting the original system response onto a computed set of basis functions. For example, the projection reduces the number of degrees of freedom (DOFs) in a large finite element or finite volume model (O(10^4 to 10^9) DOF) down to a handful of basis coordinates (typically O(10^0 to 10^2)). Thus, the ROM case is distinguished from the data fit case in that it is still intimately tied to the original PDEs and retains the physics, and is distinguished from the multi-fidelity case in that it is derived directly from the original high fidelity model and does not require multiple models of differing fidelity. ROMs have proven a successful means of reducing the computational costs of a system's response in time (Ravindran 1999; Thomas, Dowell, and Hall 2001; Legresley and Alonso 2001; Willcox and Peraire 2001). However, ROMs typically approximate the response of only one particular configuration and are therefore of limited use for design optimization and stochastic analysis purposes. The utility of these ROMs lies in a particular system's time integration, and any changes in the design may render the ROM inaccurate. The key missing component for the application of a ROM within a reliability-based design optimization (RBDO) framework, and the focus of this work, is the extension of ROMs into the space of the design and uncertainty parameters.
To date, most approximation methods used in this field consider the physical analysis as a black-box tool, and build a response surface on the results. In contrast, the objective of this work is to build an approximation technique capturing the physical nature of a system through the inclusion of
the partial differential equations governing the system response. The feasibility and differences, in terms of both computational cost and accuracy, of the approximation methods will be studied herein.

1.1 Reduced order models
The construction of the ROM surrogate model from the dynamic analysis of the finite element model is discussed. The governing equation of interest is a structural dynamic response

M \ddot{u} + F^{int}(u, \dot{u}) = f^{ext}(u, \dot{u}, t) \tag{1}
where M is the mass matrix, F^int is the internal force, u is the displacement, and the external force is f^ext(u, u̇, t). The dynamic response is either linear or nonlinear depending on the internal forces F^int and the external forcing function f^ext(u, u̇, t), which depend on the time, t, as well as the displacements, u, and velocities, u̇. For large systems, the calculation of the linear dynamic response is costly and the cost is further increased for nonlinear systems. The cost of the dynamic response is reduced using a reduced order model, which is a low dimensional approximation. Following a Galerkin type projection scheme, the displacements of the system response are approximated by k basis vectors (\Phi) and generalized variables \eta as follows:

u(t) = \sum_{j=1}^{k} \eta_j(t) \, \phi_j = \Phi \, \eta(t) \tag{2}
The reduction of the dynamic response is performed using the approximation of (2) in (1) and premultiplying by \Phi^T as shown below:

\Phi^T M \Phi \, \ddot{\eta}(t) + \Phi^T F^{int}(\eta(t), \dot{\eta}(t)) = \Phi^T f^{ext}(u, \dot{u}, t) \tag{3}
The system, originally n × n, is reduced to a k × k system, where k < n. The force vectors are reduced from n × 1 for the full system to k × 1 for the reduced system. The reduced system may be written as:

M_R \, \ddot{\eta} + F_R^{int}(\eta, \dot{\eta}) = f_R^{ext}(\eta, \dot{\eta}, t) \tag{4}
where M_R, F_R^int, and f_R^ext are dependent upon the basis vectors and the system matrices for a particular design. For linear systems, F_R^int(\eta, \dot{\eta}) and f_R^ext(\eta, t) are linear functions of \eta and \dot{\eta} only, and are calculated as follows:

F_R^{int}(\eta, \dot{\eta}) = \Phi^T K^{int} \Phi \, \eta(t) + \Phi^T D \Phi \, \dot{\eta}(t) \tag{5}

f_R^{ext}(\eta, t) = \Phi^T f^{ext}(t) + \Phi^T K^{ext} \Phi \, \eta(t) \tag{6}
where K int and K ext are the stiffness matrices associated with the internal and external forces, and D is the damping matrix accounting for viscous damping.
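As a concrete illustration of the projection in Equations (2)-(6), the following minimal Python sketch reduces a small linear system onto a given basis. It is an illustrative assumption of this presentation (a toy 3-DOF chain with made-up damping and loading), not the chapter's finite element implementation.

```python
import numpy as np

def reduce_linear_system(M, K, D, f_ext, Phi):
    """Galerkin projection of a linear structural dynamic system onto a basis Phi
    (n x k): returns the k x k reduced matrices and the reduced load vector."""
    MR = Phi.T @ M @ Phi
    KR = Phi.T @ K @ Phi
    DR = Phi.T @ D @ Phi
    fR = Phi.T @ f_ext            # reduced external load (spatial part)
    return MR, KR, DR, fR

# small illustrative example: 3-DOF spring-mass chain, reduced to 2 basis vectors
n = 3
M = np.eye(n)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = 1e-3 * K                          # assumed light stiffness-proportional damping
f_ext = np.array([0.0, 0.0, 1.0])     # unit load on the last DOF
Phi = np.linalg.qr(np.random.default_rng(0).standard_normal((n, 2)))[0]
MR, KR, DR, fR = reduce_linear_system(M, K, D, f_ext, Phi)
print(MR.shape, KR.shape, fR.shape)   # (2, 2) (2, 2) (2,)
```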
In nonlinear systems, the internal and external forces are dependent on the displacements and the reduced force vectors are updated by either one of the following two methods. "On-line'' approximation evaluates the forces in the full order model using the approximation of the displacements from (2). The second method approximates the forces F_R^int(\eta, \dot{\eta}) and f_R^ext(\eta, \dot{\eta}, t) by explicit functions of \eta and t a priori, that is "off-line'', and does not require any computations involving the full order model when using the ROM. The advantage of using an on-line approximation is that the system response is calculated more accurately. Using an off-line approximation, the CPU time is decreased at the cost of accuracy. In the following, only linear dynamic problems will be considered and both off- and on-line approaches studied. The key to building an effective ROM is to identify a set of basis vectors capturing the physics of a system. Two of the most common methods of identifying appropriate vectors are eigen analyses and snapshot methods, such as the proper orthogonal decomposition (POD). This chapter uses eigenmodes as the basis.

1.2 Eigenmodes
In structural dynamics, eigenmodes are the most common means of reducing a system. The eigenmodes \phi and eigenvalues \omega^2 of an undamped system are the results of the following eigenvalue problem:

\left( K - \omega_i^2 M \right) \phi_i = 0 \tag{7}

where \omega_i is the frequency (rad/sec) of the eigenmode \phi_i. The greatest benefit of using eigenmodes as a reduced basis is the un-coupling of equation (3) due to the orthogonality with respect to the system matrices. The projected mass matrix is \phi_i^T M \phi_j = \delta_{ij}, where \delta_{ij} is the Kronecker symbol. The projected stiffness matrix is \phi_i^T K \phi_j = \delta_{ij} \omega_i^2. Using all the modes yields the same response as the full order model. In most cases, higher modes tend not to contribute much to the system response so only low modes are considered. For large-scale numerical models, computing eigenmodes can be prohibitively costly. In this case as well as for nonlinear problems, other approaches for constructing basis vectors must be used, such as proper orthogonal decomposition.
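A minimal sketch of how such a modal basis can be computed, and its orthogonality properties checked, follows. Python with SciPy is assumed here purely for illustration; the chapter's own computations are not based on this snippet.

```python
import numpy as np
from scipy.linalg import eigh

def modal_basis(K, M, n_modes):
    """Solve the undamped eigenvalue problem (K - w^2 M) phi = 0 and return the
    lowest n_modes mass-normalized eigenmodes as the ROM basis (Equation (7))."""
    w2, Phi = eigh(K, M)                 # generalized symmetric eigenproblem
    idx = np.argsort(w2)[:n_modes]       # keep the lowest-frequency modes
    Phi = Phi[:, idx]                    # eigh mass-normalizes: Phi^T M Phi = I
    return np.sqrt(w2[idx]), Phi

# reuse the 3-DOF chain from the previous sketch
n = 3
M = np.eye(n)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
omega, Phi = modal_basis(K, M, n_modes=2)
print(np.round(Phi.T @ M @ Phi, 6))      # identity: orthonormality w.r.t. M
print(np.round(Phi.T @ K @ Phi, 6))      # diag(omega^2)
```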
2 Extended ROM in structural dynamics

The goal of this work is to use an "extended reduced order model'' (E-ROM) to approximate the system response with respect to changes in the design and uncertainty parameters. Using a full order model tends to be computationally expensive for optimization and more so for an uncertainty analysis. The proposed method involves building a ROM, using eigenmodes as a basis, for the original design (in optimization cases) or mean design (in uncertainty analyses). The ROM is only applicable to the original design. When performing an uncertainty analysis or optimization, the E-ROM must be capable of capturing the system response in the spaces of the uncertainty parameters (r) and design variables (p). An extension of a ROM into a physical parameter space has been considered by (Kirsch 2002) as a reanalysis approach. This work is an extension of these ideas into the field of stochastic
Figure 6.1 Comparison of matrix and eigenmode approximation methods in terms of computational cost and accuracy. "Ini'' is the initial design, "TS1'' and "TS2'' are the first and second order Taylor Series approximations respectively, "SCA'' and "FCA'' are a single and a full CA, and "Full'' is a fully recomputed basis and matrices.
analysis and design optimization. To formulate an effective E-ROM, the set of basis functions (\phi) and system matrices (M, C and K) are considered. Five methods are studied to deal with the changes in the eigenmodes and four for updating the system matrices. Using the initial modes or recomputing the modes are obvious options. Following classical perturbation methods, the eigenmodes at the altered design can be approximated by Taylor Series expansion (Kleiber and Hien 1992; Nieuwenhof and Coyotte 2002) and a combined approximation (CA) method (Kirsch 2002; Kirsch 2003; Kirsch 2001; Kirsch 1999; Kirsch 2000). There are two methods analyzed within CA, a single and a full CA. The difference between single and full CA is explained in Section 2.4. Figure 6.1 shows the approximate computational cost and accuracy of each combination of updating the system matrices and eigenmodes. The reader may note that utilizing updated eigenmodes is not a practical option due to the computational costs of an eigen analysis. The main costs for approximating the system matrices and the eigenmodes are due to the gradient computations. The first and second order sensitivities of M and K can be evaluated either based on the analytically derived finite element formulations or by finite differencing at comparable costs. The sensitivities of the eigenmodes are discussed in the next section.

2.1 Eigenmode sensitivity analysis
In an effort to approximate the change of a system, the derivatives of the eigenmodes of the initial system with respect to the design variables are needed to build a first order approximation of the new design. Since finite difference methods would drastically increase the computational costs, by requiring the solution of additional eigensystems for each design variable, an analytical approach is used (Adelman and Haftka 1986; Mills-Curran 1988; Dailey 1989).
The derivative of the eigensystem (7) with respect to a system parameter p_j, expanded using the product rule, results in:

\left( \frac{\partial K}{\partial p_j} - 2\omega_i \frac{d\omega_i}{dp_j} M - \omega_i^2 \frac{\partial M}{\partial p_j} \right) \phi_i + \left( K - \omega_i^2 M \right) \frac{d\phi_i}{dp_j} = 0; \qquad \frac{d}{dp_j}\left( \phi_i^T M \phi_i \right) = 0 \tag{8}

Premultiplying (8) by \phi_i^T eliminates the second term and the gradients of the eigenvalues with respect to the system parameter are found:

\frac{d\omega_i}{dp_j} = \frac{\phi_i^T \left( \dfrac{\partial K}{\partial p_j} - \omega_i^2 \dfrac{\partial M}{\partial p_j} \right) \phi_i}{2 \omega_i \, \phi_i^T M \phi_i} \tag{9}

The solution for the sensitivities of the eigenvalues is substituted into d\omega_i/dp_j of the first term in (8), resulting in the following system:

\left( K - \omega_i^2 M \right) \frac{d\phi_i}{dp_j} = - \left( \frac{\partial K}{\partial p_j} - 2\omega_i \frac{d\omega_i}{dp_j} M - \omega_i^2 \frac{\partial M}{\partial p_j} \right) \phi_i; \qquad \frac{d}{dp_j}\left( \phi_i^T M \phi_i \right) = 0 \tag{10}

This system is solved for d\phi_i/dp_j, considering the norm of the eigenvector, by

\frac{d\phi_i}{dp_j} = \frac{d\tilde{\phi}_i}{dp_j} - \left( \phi_i^T M \frac{d\tilde{\phi}_i}{dp_j} + \frac{1}{2} \phi_i^T \frac{\partial M}{\partial p_j} \phi_i \right) \phi_i; \qquad \frac{d\tilde{\phi}_i}{dp_j} = -\tilde{K}^{+} \left( \frac{\partial K}{\partial p_j} - 2\omega_i \frac{d\omega_i}{dp_j} M - \omega_i^2 \frac{\partial M}{\partial p_j} \right) \phi_i \tag{11}

where \tilde{K}^{+} is the generalized inverse (Adelman and Haftka 1986; Dailey 1989) of the singular matrix \tilde{K} = K - \omega_i^2 M.
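A compact numerical sketch of Equations (9) and (11) is given below (Python with SciPy is assumed for illustration; the Moore-Penrose pseudo-inverse plays the role of the generalized inverse, and the hypothetical matrix sensitivities dK, dM stand in for quantities obtained from the finite element formulation). It is a sketch under these assumptions rather than the chapter's implementation.

```python
import numpy as np
from scipy.linalg import eigh

def eigen_sensitivity(K, M, dK, dM, i=0):
    """Sensitivities of the i-th eigenvalue/eigenmode following Equations (9)-(11)."""
    w2, V = eigh(K, M)
    wi, phi = np.sqrt(w2[i]), V[:, i]                  # eigh mass-normalizes: phi^T M phi = 1
    dw = phi @ (dK - w2[i] * dM) @ phi / (2.0 * wi)    # Equation (9)
    rhs = -(dK - 2.0 * wi * dw * M - w2[i] * dM) @ phi
    dphi_t = np.linalg.pinv(K - w2[i] * M) @ rhs       # generalized-inverse solution
    dphi = dphi_t - (phi @ M @ dphi_t + 0.5 * phi @ dM @ phi) * phi   # Equation (11)
    return dw, dphi

# finite-difference check on a 3-DOF chain, perturbing one stiffness entry
n = 3
M = np.eye(n)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
dK = np.zeros((n, n)); dK[0, 0] = 1.0     # hypothetical dK/dp for one design parameter
dM = np.zeros((n, n))
dw, dphi = eigen_sensitivity(K, M, dK, dM, i=0)
eps = 1e-6
w_fd = (np.sqrt(eigh(K + eps * dK, M)[0][0]) - np.sqrt(eigh(K, M)[0][0])) / eps
print(dw, w_fd)                           # analytical vs. finite-difference d(omega)/dp
```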
2.2 Numerical model: connecting rod
The application of the aforementioned methodology is tested on a rod, shown in Figure 6.2, used in various past studies (Bennet and Botkin 1985; Zhang, Beckers, and Fleury 1995). The rod is clamped at the inner circumference of the left hole, and a transient force is applied to the inner circumference of the right hole. The rod is lightly damped using Rayleigh damping, with α = 10^{-5} and β = 10^{-5}. The beam is modeled using 400 isoparametric 4-node elements, resulting in a total of 936 degrees of freedom. All computations are performed within MATLAB utilizing the CALFEM finite
Figure 6.2 Finite element model of connecting rod (design parameters p1-p5, applied force at the right hole, thickness 3 mm).
element toolbox (Austrell, Dahlblom, Lindemann, Olsson, Olsson, Persson, Petersson, Ristinmaa, Sandberg, and Wernberg 1999). Two geometric parameters, p1 and p2, control the horizontal positions of the center points of the left and right circular segments of the center hole, as depicted in Figure 6.2. The radii of the circular segments are kept constant. The rod has an overall length of 51 mm, a thickness of 3 mm, a Poisson ratio of 0.3, and a Young's modulus of E = 7.2 × 10^5 N/mm^2. Modal and sensitivity analyses are performed on the initial system, and are used to approximate the response associated with different design changes. The ROM is based on four eigenmodes, resulting in a decrease from 936 degrees of freedom to 4. The greatest benefit of this reduction lies in the decreased computational cost of the time integration, which is reduced by a factor of 110.

2.3 First order Taylor series approximation of eigenmodes
Once the geometry of the rod is altered, for optimization or uncertainty analysis, the eigenmodes themselves change. In an effort to track the change, a first order Taylor Series approximation about the original system is used to describe the basis at any set of system parameters p:

\Phi(p) \approx \Phi_0 + \frac{\partial \Phi_0}{\partial p} (p - p_0) \tag{12}
The derivatives of the basis are found following the method described in Section 2.1. The utility of the approximation is studied in the following example. The reduced system is built at the original system. The parameter p1 in Figure 6.2 is altered in the design of the rod, and the eigenvalues are approximated at the design change. The design change represents a shift in the left circular segment of the center hole by 0.75 mm (full range −4 ≤ p2 ≤ 4). To isolate the effects of the approximated eigenvalues on the design change, the actual mass and stiffness matrices of the altered system are used. The plot on the left of Figure 6.3 uses the eigenmodes from the initial design to approximate the system response at the design change. The plot on the right of Figure 6.3
Figure 6.3 Actual and approximated responses: vertical displacement at the right end of the rod over time for the initial and altered designs. "FA'' denotes a full model analysis and "E-ROM'' an extended reduced order model analysis, with no update (left) and with an update (right) of the basis.
demonstrates the accuracy of the first order Taylor Series approximation of the eigenmodes. Therefore, using the eigenmodes from the initial design is not sufficiently accurate, but using a Taylor Series approximation leads to acceptable approximation errors.

2.4 Combined approximation of eigenmodes

Combined approximation (CA) (Kirsch 2002; Kirsch 2003; Kirsch 2001; Kirsch 1999; Kirsch 2000) is a reanalysis method used to approximate the basis vectors due to a change in system parameters. The new basis is approximated as a linear combination of another basis:

\tilde{\phi}_i(p) = y_1 r_1 + y_2 r_2 + \cdots + y_n r_n \tag{13}
where y_i are constants and r is the basis used for CA. A binomial series expansion about the original design is often chosen as the reduced basis (Kirsch 2002; Kirsch 2003). In this study, two different methods are used, both of which require eigenmodes and derivatives with respect to the design/random variables. The first method approximates the ith eigenmode \tilde{\phi}_i through the corresponding mode \phi_i and its derivatives \partial\phi_i/\partial p_j:

\tilde{\phi}_i(p) \approx y_1 \phi_i + y_2 \frac{\partial \phi_i}{\partial p_1} + \cdots + y_{n+1} \frac{\partial \phi_i}{\partial p_n} \tag{14}
This approach is labeled "single CA'' and is equivalent to a first order Taylor series expansion if y_1 = 1 and y_{i>1} = \Delta p_{i-1}. The second method uses all modes and their derivatives for approximating the ith eigenmode \tilde{\phi}_i:

\tilde{\phi}_i(p) \approx y_1 \phi_i + y_2 \frac{\partial \phi_i}{\partial p_1} + \cdots + y_{n+1} \frac{\partial \phi_i}{\partial p_n} + \cdots + y_m \phi_m + y_{m+1} \frac{\partial \phi_m}{\partial p_1} + \cdots + y_k \frac{\partial \phi_m}{\partial p_n} \tag{15}
This approach is labeled "full CA''. To find the coefficients y, the newly assembled system matrices are reduced by the basis r. Once the reduced matrices M_CA = r^T M r and K_CA = r^T K r are found, the following eigenvalue problem is solved to find y:

K_{CA} \, y = \lambda \, M_{CA} \, y
(16)
where y are the eigenmodes of (16). In a single CA, only the first eigenmode from (16) is used in (14) to approximate the mode at the design change. This is done for each of the i modes, requiring i separate eigen analyses. A full CA requires only one eigen analysis, but its reduced eigenvalue problem is as large as the number of modes and derivatives. A single CA returns the same number of modes (i), whereas a full CA returns as many modes as there are modes and derivatives. A study is performed to determine which approximation technique (Taylor Series, single CA, or full CA) yields a better approximation of the eigenmodes with respect to the full order response. To do this, the E-ROM is calibrated at an initial design and then the
Figure 6.4 Dissipation energy approximating eigenmodes by Taylor Series and CA.
system parameter p1 from Figure 6.2 is varied up to a Δp of 1 mm. The energy dissipation from t = 0.4 ms to t = 1.0 ms is used as the performance metric. The results of the study are illustrated in Figure 6.4. The top and bottom figures represent two different studies, started at different values of p0. The figures on the left represent the recorded energy dissipation values for the three different approximation methods compared with the full order updated model. The figures on the right represent the percent error of the three approximation methods with respect to the full system analysis. In the top two graphs of Figure 6.4, both CA techniques are able to approximate the dissipation energy better over the entire Δp range. In the bottom two figures, the Taylor Series captures the response well up to a Δp of approximately 0.85 but then diverges rapidly, whereas both CA techniques are more consistent in approximating the response. In this example, the full CA leads to better approximations over the parameter intervals considered than the single CA. For the remainder of this study, both CA approximation techniques will be used. The reader may note that a linear approximation of the eigenmodes captures well the nonlinear behavior of the system response with respect to system parameters. This illustrates the idea that an approximation before the physical solution of the system, i.e. of the eigenmodes, is better than a similar approximation of the output of the system.
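The single CA of Equations (13)-(16) can be sketched in a few lines. The following illustrative Python snippet (an assumption of this presentation, using a toy 3-DOF system and a hypothetical stiffness sensitivity rather than the connecting rod model) approximates one eigenmode at an altered design from the initial mode and its derivative.

```python
import numpy as np
from scipy.linalg import eigh

def single_ca_mode(K_new, M_new, phi_i, dphi_list):
    """Single combined approximation of one eigenmode at an altered design: the mode
    and its design derivatives form the reduced basis r, and the small eigenproblem
    of Equation (16) yields the coefficients y of Equation (14)."""
    r = np.column_stack([phi_i] + list(dphi_list))     # basis: mode + its derivatives
    K_ca, M_ca = r.T @ K_new @ r, r.T @ M_new @ r      # reduced matrices at the new design
    lam, Y = eigh(K_ca, M_ca)
    phi = r @ Y[:, 0]                                  # single CA keeps the first mode only
    return phi / np.sqrt(phi @ M_new @ phi)            # mass-normalize

n = 3
M = np.eye(n)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
w2, V = eigh(K, M)
dK = np.zeros((n, n)); dK[0, 0] = 1.0                  # hypothetical stiffness sensitivity
dlam = V[:, 0] @ dK @ V[:, 0]                          # eigenvalue sensitivity (mass-normalized mode)
dphi = np.linalg.pinv(K - w2[0] * M) @ (-(dK - dlam * M) @ V[:, 0])
K_new = K + 0.3 * dK                                   # altered design (delta_p = 0.3)
phi_ca = single_ca_mode(K_new, M, V[:, 0], [dphi])
# compare with the exact first mode of the altered design (agreement up to sign)
print(np.round(phi_ca, 4), np.round(eigh(K_new, M)[1][:, 0], 4))
```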
2.5 Approximation of mass and stiffness matrices

While the computation of eigenmodes after a parameter change is not feasible, calculating the altered mass and stiffness matrices is a possibility, as the computational costs of doing so do not increase as drastically with respect to degrees of freedom as do the time integration and eigen-analysis. Therefore, first-order and second-order approximations of the system matrices are compared against the actual mass and stiffness matrices in the analysis of an altered system. In comparing these, the approximated eigenvalues will be used for all altered responses, as their utility has already been illustrated. The positive effect of using the actual M and K is illustrated in Figure 6.5. Therefore, the difference between not updating M and K, a first order approximation, and a second order approximation is established. The negative effect of using the initial matrices for the analysis of an altered system is illustrated in the top left of Figure 6.5, where the poor approximation has drastically smaller displacements. The attempt to approximate M and K with a first-order approximation results in an even worse description of the altered system, as shown in the top right of Figure 6.5. The reader may note the first-order approximation results in a drastic reduction in stiffness and a large erroneous displacement. A linear approximation for the stiffness matrix is therefore not an option. The efforts to use a second order approximation for M and K are shown in the bottom left of Figure 6.5. The approximated model fails to deviate significantly from the original model and capture the change of the structural response due to the design change. However, some utility in the second order approximation is demonstrated if the design change between the altered and initial system is small. The results for a system change 1/5 the size used in the other examples (original change of 0.75 mm) are illustrated in the bottom right of Figure 6.5. Although the change in the response of the system is significantly smaller, the second order approximation effectively captures the change. The study of approximating the mass and stiffness matrices illustrates that updating the system matrices is a versatile and effective method. However, there is still utility in a second-order estimation of the matrices for small system changes, due to the computational savings in forgoing additional assembly computations. The E-ROM approximation recommended is a full CA and recalculation of M and K, with the acceptance of a second order approximation of M and K for small design changes. Further application of the suggested E-ROM will be discussed in the subsequent sections.
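For small design changes, the accepted second-order update of the matrices amounts to a truncated Taylor expansion about the initial design. A minimal sketch for a single parameter change Δp follows; the sensitivity matrices are hypothetical placeholders assumed to come from the finite element formulation or finite differencing.

```python
import numpy as np

def matrices_second_order(M0, dM, d2M, K0, dK, d2K, dp):
    """Second-order Taylor update of the mass and stiffness matrices about the
    initial design for a single design-parameter change dp."""
    M = M0 + dM * dp + 0.5 * d2M * dp**2
    K = K0 + dK * dp + 0.5 * d2K * dp**2
    return M, K
```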
3 E-ROMs for uncertainty analysis
To illustrate the utility of the E-ROM in uncertainty analysis, the rod in Figure 6.2 is used, and the horizontal locations of the centers of the circular segments of the center hole are modeled as a manufacturing uncertainty by the two uncertainty variables r1 and r2 . The radii of the circular segments are kept constant. A normal distribution is assigned to the horizontal positions of the centers of both circular segments with a standard deviation of 0.2 mm. As a performance measure, the amount of energy dissipated from the system between 0.4 ms and 1.0 ms is measured and utilized to evaluate altered design configurations. The E-ROM is used in three different uncertainty analysis methods in the subsequent sections, and the results compared against using the full order system.
Figure 6.5 Approximated response for first and second order approximations of the system matrices M and K.
3.1 Monte Carlo Simulation
The most general uncertainty analysis is Monte Carlo Simulation (MCS). In general, MCS is impossible for most realistic dynamic systems due to the computational costs of each simulation, and the high number of computations required for an accurate solution. This cost is significantly reduced by utilizing the E-ROM due to the reduced cost of time integration. However, the E-ROM proposed still requires assembly of the altered system matrices, which is expensive for a large number of samples. MCS is performed here not as a proposed solution of alleviating the computational burden, but as a means of demonstrating the effectiveness of the E-ROM in the uncertainty space. A Monte Carlo analysis is performed on both the full order and the E-ROM system. 10,000 samples are taken in all, and the same sample points are used for both the full
146
Structural design optimization considering uncertainties
order model and E-ROM. Each sample point represents one particular realization of the uncertainty parameters, for which a dynamic analysis is carried out and the energy dissipated is recorded. To test the framework in terms of an uncertainty analysis, a failure surface is created by picking a critical energy dissipation level of −16.5 mJ. If a design failed to dissipate at least 16.5 mJ of energy, that is Ed ≥ −16.5 mJ, then the design is considered unsafe. The E-ROM, calibrated at the initial design, was first tested within the range of possible sample points, the mean design plus or minus 4 standard deviations, and the results were sufficiently accurate. The MCS using the full order system analysis returned a probability of failure of 19.72%, and the MCS utilizing the E-ROM returned a probability of failure of 19.4% for a single CA and 19.8% for a full CA. The difference between these predictions is 0.3% for a single CA and 0.05% for the full CA. This is the error between the full order model and E-ROM analyses, since the same sample points were used.
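The sampling loop itself is straightforward; the sketch below (Python, with a made-up surrogate standing in for either the full-order or the E-ROM dissipated-energy analysis) illustrates the failure-probability estimate used here.

```python
import numpy as np

def mcs_failure_probability(dissipated_energy, r_mean, r_std, e_crit=-16.5,
                            n_samples=10_000, seed=0):
    """Monte Carlo estimate of the probability of failure: a design fails when the
    dissipated energy Ed exceeds the critical level, Ed >= e_crit [mJ]."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(r_mean, r_std, size=(n_samples, len(r_mean)))
    ed = np.array([dissipated_energy(r) for r in samples])
    return np.mean(ed >= e_crit)

# hypothetical stand-in for the rod's dissipated-energy analysis (illustration only)
fake_ed = lambda r: -17.0 + 0.8 * (r[0] - 5.0) + 0.5 * (r[1] - 5.0)**2
pf = mcs_failure_probability(fake_ed, r_mean=np.array([5.0, 5.0]), r_std=0.2)
print(f"estimated Pf = {pf:.3%}")
```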
3.2 First Order Reliability Method
To address the computational burden of Monte Carlo analysis for uncertainty quantification, the First Order Reliability Method (FORM) is studied with the E-ROM approximation. FORM employs an approximation of the limit state function at the most probable point (MPP) of failure (Hasofer and Lind 1974). FORM requires the first-order derivatives to linearize the failure surface at the MPP, and is therefore considered accurate as long as the curvature of the failure surface in the space of the random variables is not too large at the MPP. The MPP is determined by solving an optimization problem in the standard normal space of the random parameters. FORM based RBDO methods are often used within the structural design community (Enevoldsen and Sørensen 1994b; Enevoldsen and Sørensen 1994a; Frangopol and Corotis 1996; Yu, Choi, and Chang 1997a; Grandhi and Wang 1998; Luo and Grandhi 1997). However, FORM based RBDO methodologies are still in their infancy for multi-physics and dynamic systems, because the additional cost of RBDO is magnified by high analysis costs (Allen, Raulli, Maute, and Frangopol 2004; Allen and Maute 2004). The major expense of FORM lies in the MPP search, which has certain characteristics that suit the E-ROM well. The search is conducted in the standard normal space of the random parameters, centered about the mean design; therefore, the E-ROM is calibrated at the mean design. Also, the objective of the optimization process is to minimize the distance to the origin, which tends to keep the search close to the calibration point. For low reliability requirements the MPP lies close to the origin and no recalibration is required; for high reliability requirements, however, a recalibration and trust region framework are typically required. FORM analyses were performed using both the full order system and the E-ROM. The analyses were performed on the initial configuration of the rod, and the failure criterion was the same dissipation energy limit of −16.5 mJ. The quantitative results of the FORM analyses are summarized in Table 6.1. The two optimization problems converged to slightly different MPPs, leading in turn to small differences in the reliability index and the calculated probability of failure. However, the difference between the two model results is small. To improve the convergence of the E-ROM based MPP search towards the solution of the full order model, the E-ROM could be recalibrated at the MPP initially found and the MPP search continued.
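A minimal sketch of an MPP search of the kind described above, using a generic constrained optimizer in the standard normal space; the limit state function and its coefficients are an illustrative stand-in, not the rod model:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Illustrative limit state in the standard normal space of (u1, u2).
def g(u):
    e_d = -16.75 + 1.2 * 0.2 * u[0] + 0.8 * 0.2 * u[1]   # dissipated energy, mJ
    return -16.5 - e_d              # g <= 0 corresponds to failure (e_d >= -16.5 mJ)

# MPP search: minimize the squared distance to the origin on the surface g(u) = 0.
res = minimize(lambda u: u @ u, x0=np.array([0.5, 0.5]),
               constraints={"type": "eq", "fun": g})
u_mpp = res.x
beta = np.linalg.norm(u_mpp)        # reliability index
p_f = norm.cdf(-beta)               # FORM estimate of the failure probability
```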
Table 6.1 FORM results of full and E-ROM model analysis.

Analysis method      MPP s1   MPP s2   Beta    Pf
Full model           0.837    0.194    0.859   19.52%
E-ROM Single CA      0.848    0.179    0.867   19.30%
E-ROM Full CA        0.839    0.192    0.861   19.46%
Table 6.2 PCE based MCS results of full and E-ROM model analysis.

Analysis method      MCS (%)   FORM (%)   PCE 1st order (%)   2nd order (%)   3rd order (%)
Full model           19.7      20.38      20.8                20.1            19.8
E-ROM Single CA      19.4      20.42      21.8                19.8            19.5
E-ROM Full CA        19.8      20.32      20.7                20.1            19.7

3.3 Polynomial chaos expansion based Monte Carlo Simulation
The obvious drawback of running MCS is the number of simulations required to obtain an error low enough to accurately represent the PDF of the model response. The major drawbacks of FORM are its inability to capture nonlinearities in the failure surface, and the limitation of providing only one particular probability measure instead of a complete PDF. The third uncertainty analysis technique discussed utilizes a polynomial chaos expansion (PCE) of the system output, which allows an MCS to be conducted at a relatively low computational cost compared to a full MCS (Xiu, Lucor, Su, and Karniadakis 2002; Field, Red-Horse, and Paez 2000; Field 2002; Nurdin 2002). PCE uses system analyses at collocation points to build a polynomial approximation of the model response. Depending on the accuracy needed, a different number of collocation points is used; the more collocation points, the more accurately the PCE represents the model response. The computational cost of this method lies in the system analyses at the collocation points, and it is here that the E-ROM is utilized to reduce the cost. Once the model is sampled and the PCE is built, an MCS can be conducted on the PCE approximation to obtain the PDF of the model response at a low computational cost. If a sufficiently large number of samples is taken for the MCS, the majority of the error is due to the PCE approximation, and not the MCS. Considering the same limit state function as used previously, first, second, and third order PCEs are built and the probability of failure is calculated. The results are shown in Table 6.2. The probabilities of failure for MCS, FORM, and PCE are all within one percent of each other. Between the two E-ROM variants, the full CA predicts the stochastic response more accurately, with an error between the methods of at most 0.1%.
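The following sketch illustrates the PCE idea on a toy problem: an inexpensive surrogate is fitted to model evaluations at a few collocation points and then sampled by MCS. The stand-in model, the collocation grid, and the polynomial basis are illustrative assumptions, not the actual E-ROM or the PCE implementation used in this chapter:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in for the (expensive) analysis at a collocation point;
# in this chapter it would be the E-ROM transient analysis.
def model(r1, r2):
    return -16.75 + 1.2 * r1 + 0.8 * r2 + 2.0 * r1 * r2       # mJ

# Collocation points: a small tensor grid scaled by sigma = 0.2 mm
# (roots of the degree-3 probabilists' Hermite polynomial: 0, +-sqrt(3)).
pts = 0.2 * np.array([-np.sqrt(3.0), 0.0, np.sqrt(3.0)])
R1, R2 = np.meshgrid(pts, pts)

def basis(r1, r2):
    """Second-order polynomial basis in the two random variables."""
    return np.column_stack([np.ones_like(r1), r1, r2, r1**2, r1 * r2, r2**2])

coeffs, *_ = np.linalg.lstsq(basis(R1.ravel(), R2.ravel()),
                             model(R1.ravel(), R2.ravel()), rcond=None)

# Cheap MCS on the surrogate instead of the full model.
s1, s2 = rng.normal(0, 0.2, 100_000), rng.normal(0, 0.2, 100_000)
e_d = basis(s1, s2) @ coeffs
p_f = np.mean(e_d >= -16.5)
```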
4 Deterministic optimization with E-ROMs

While the intended use of the proposed method is to alleviate the computational costs of stochastic-based optimization, its utility is demonstrated first with a deterministic optimization problem. The E-ROM approach sufficiently models a system within a certain region around the design point for which the E-ROM was calibrated. Most design optimization problems, unlike FORM optimization problems, contain variables with bounds larger than the trust region of the E-ROM. Therefore, an adaptation strategy for updating the trust region is used to incorporate the E-ROM into optimization problems with large bounds. In this study, the trust region framework of (Giunta and Eldred 2000; Eldred, Giunta, and Collis 2004) is used. The initial bounds of the trust region are from −2 to 2 for both design variables, while the global bounds range from −4 to 4 for both design variables. For the optimization, the connecting rod's energy dissipation is used as the objective to maximize. The two design variables are the horizontal positions of the center points of the left and right circular segments of the center hole, as depicted in their initial configuration in Figure 6.2. The deterministic optimization problem is as follows:

\min_{s} \; (-E_{diss}) \quad \text{subject to} \quad -4 \le s_i \le 4 \qquad (17)
where Ediss is the energy dissipated and si are the optimization variables describing the center hole geometry. The contour of the dissipation energy is shown on the right of Figure 6.6. The contour lines in the figure are for illustrative purposes only, obtained by sampling the design space to compare with the optimization results. The results of the deterministic optimization are shown in Table 6.3. All three models converged to the same deterministic optimum. Each recalibration is more expensive than a function evaluation of the full model, roughly twice its cost, because obtaining the basis includes derivatives with respect to the design variables. The function evaluations of single and full CA also accrue computational costs, but not as much as a function evaluation of the full model. For large systems, the function evaluations of the full order model would be significantly more costly than a function evaluation of the E-ROM. Comparing single and full CA, each recalibration requires the same cost; performing a full CA function evaluation is more costly than a single CA because more basis vectors are used with full CA.
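A rough sketch of a trust-region model-management loop of the general type referenced above: a local surrogate is recalibrated at the trust-region center, optimized within the region, and the region is expanded or shrunk based on the ratio of actual to predicted improvement. The objective, the linear surrogate, and the acceptance thresholds are illustrative assumptions, not the E-ROM or the implementation of Giunta and Eldred:

```python
import numpy as np
from scipy.optimize import minimize

def f_full(s):
    """Stand-in for the expensive full-model objective -E_diss (illustrative)."""
    return -(10.0 - (s[0] - 1.5) ** 2 - 0.1 * (s[1] - 4.0) ** 2)

def calibrate(center, h=1e-3):
    """Stand-in for an E-ROM recalibration: a local linear model built by
    finite differences about the current trust-region center."""
    f0 = f_full(center)
    grad = np.array([(f_full(center + h * np.eye(2)[i]) - f0) / h for i in range(2)])
    return lambda s: f0 + grad @ (s - center)

center, radius = np.zeros(2), 2.0                    # initial trust region
lb, ub = -4.0, 4.0                                   # global design bounds
for _ in range(25):
    surrogate = calibrate(center)                    # recalibrate at the center
    box = [(max(lb, c - radius), min(ub, c + radius)) for c in center]
    cand = minimize(surrogate, center, bounds=box).x
    actual = f_full(center) - f_full(cand)           # actual improvement
    predicted = surrogate(center) - surrogate(cand)  # improvement predicted by surrogate
    if predicted > 0 and actual / predicted > 0.25:  # good agreement: accept, maybe expand
        center, radius = cand, min(2.0 * radius, ub - lb)
    else:                                            # poor agreement: reject and shrink
        radius *= 0.5
    if radius < 1e-3:
        break
```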
Table 6.3 Full and E-ROM model deterministic optimization results.

Analysis method      s1      s2    Function evaluations   Equivalent recalibration cost
Full model           1.548   4.0   27                     –
E-ROM Single CA      1.551   4.0   122                    16
E-ROM Full CA        1.549   4.0   144                    16
5 RBDO with E-ROMs

In general, the solution of optimization problems with stochastic criteria is significantly more expensive than the solution of problems with purely deterministic criteria. Virtually all stochastic analysis procedures require additional analyses of the system for various points within the uncertainty space around the mean design. These analyses are required for each evaluation of a stochastic criterion within each iteration of the design optimization. If these analyses are solved using an E-ROM instead of a full system model, with similar convergence results, then significant computational savings are realized. The framework is tested on an RBDO of the rod in Figure 6.2. The energy dissipated is again used as the objective to be maximized in the design problem. However, a reliability-based constraint is imposed on the system. The constraint limits the standard deviation of the dissipation energy to less than 300 µJ, making the RBDO problem as follows:

\min_{s} \; (-E_{diss}) \quad \text{subject to} \quad \sigma_E - 300 \le 0 \quad \text{and} \quad -4 \le s_i \le 4 \qquad (18)
where Ediss is the energy dissipated and σE is the standard deviation of the energy dissipated. The constraint on the standard deviation forces the optimization towards a more robust design, limiting the sensitivity of the system performance to uncertainties. The standard deviation is found by a Monte Carlo simulation based on PCE, as highlighted in Section 3. This method was chosen due to its computational efficiency and its ability to obtain the entire PDF of the output response. The derivatives of the standard deviation are obtained by finite differencing. The positions of the centers of the circular segments of the center hole are now treated as both the design variables and the random variables. The design variables represent the mean or intended design, and a normal distribution is assigned to the horizontal positions of the center points of the left and right circular segments of the center hole, each with a standard deviation of 0.2 mm. The radii of the circular segments are kept constant. To illustrate this academic example problem, the constraint is explored in the design and uncertainty space by sampling uniformly throughout. The results of the sampling are illustrated in the contour plot in Figure 6.6. The plot on the left of Figure 6.6 represents the contours of the standard deviation of the energy dissipated in µJ. The reader may note that the constraint boundary has been highlighted in the figure, and the feasible and infeasible regions identified. The initial design is feasible, but the deterministic optimum is infeasible. Figure 6.6 on the right overlays the constraint boundaries onto the contour plot of the objective, to give the reader the general idea of the RBDO problem.
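A minimal sketch of how the stochastic constraint in (18) and its finite-difference derivatives might be evaluated, reusing a fixed set of noise samples; the surrogate for the dissipated energy is an illustrative stand-in for the PCE/E-ROM based estimator described above, and its numerical scale is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
u1, u2 = rng.normal(0.0, 0.2, (2, 50_000))           # fixed noise samples, sigma = 0.2 mm

def sigma_E(s):
    """Standard deviation of dissipated energy at mean design s, by MCS on a
    cheap surrogate (stand-in for the PCE built from E-ROM analyses)."""
    e_d = 10.0 + 1.2 * (s[0] + u1) + 0.8 * (s[1] + u2) + 2.0 * (s[0] + u1) * (s[1] + u2)
    return np.std(e_d)

def constraint(s):
    return sigma_E(s) - 300.0                          # sigma_E - 300 <= 0, as in (18)

def constraint_grad(s, h=1e-3):
    """Forward finite differences, reusing the same noise samples for every call."""
    g0 = constraint(s)
    return np.array([(constraint(np.asarray(s) + h * np.eye(2)[i]) - g0) / h
                     for i in range(2)])
```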
5.1 Results
The RBDO problem is solved using the full order system analyses and two E-ROM approximation methods, single and full CA. The optimization results are summarized in Table 6.4. Each of the methods has similar search directions, step sizes and
[Figure 6.6 Contour plots of objective and stochastic constraint of RBDO problem in the space of the optimization variables. Left panel: standard deviation of energy; right panel: energy and standard deviation of energy. Markers: deterministic optimum, RBDO optimum, infeasible solution, initial design (starting point).]
Table 6.4 Full and E-ROM model RBDO results.

Analysis method      s1     s2     Iterations   Time (min)
Full model           1.31   2.81   16           130
E-ROM Single CA      1.35   2.84   25           50.8
E-ROM Full CA        1.27   2.86   13           21.5
converged solutions. The E-ROM is recalibrated within each trust region step. Within each trust region, the stochastic analyses and function evaluations are computed using the E-ROM. The E-ROM framework converges quickly to the general solution of the RBDO problem, as it did for the deterministic optimization. However, it is recommended to impose a weak convergence criterion on the E-ROM optimization and to switch to the full model to fine tune the optimization variables in the vicinity of the solution. The benefit of reliability-based design optimization is demonstrated through the standard deviation of the dissipation energy. The standard deviation is 579 µJ at the deterministic optimum, 299 µJ at the RBDO optimum and 171 µJ at the initial design. The stochastic response is characterized by a probability density function with a mean and standard deviation, as opposed to a deterministic formulation where the output is characterized by a single value. The objective of the optimization is to maximize the energy dissipated by the system, which moves the mean design into regions with a greater standard deviation in dissipation energy. With no stochastic constraint, the deterministic
Table 6.5 Computational costs of E-ROM and full model analysis.

                     Full model   E-ROM
Assembly             1.11 sec     1.11 sec
Time integration     5.59 sec     0.051 sec
optimization achieves a large level of energy dissipation, but results in a design whose performance is highly susceptible to uncertainties.

5.2 Computational cost
The computational costs, measured by CPU time, of the above examples are reported in Tables 6.4 and 6.5. Since the overall analysis and optimization times are proportional to the computational time required to analyze one altered design configuration, the computational costs associated with each individual analysis are reported in Table 6.5. The cost of each analysis consists of two main components: the cost to assemble the mass and stiffness matrices for the new design and the cost to perform the time integration for the transient analysis of the new design. The computational savings of the reduced time integration over the full time integration is a factor of 110. The overall computational savings per analysis is significantly smaller, approximately 43%, due to the relatively large assembly time. As the number of degrees of freedom of a system increases, the full integration time generally increases at a higher rate than the required assembly time, thus increasing the savings with the E-ROM approach. The overall computational savings of the E-ROM in an optimization framework depend on the cost of recalibration and the frequency of recalibration required. Again, the E-ROM is most beneficial in an RBDO framework, which requires many analyses about a mean design that is used as the recalibration point. Table 6.4 demonstrates the effectiveness of the E-ROM in saving CPU time for the RBDO problem. The single CA E-ROM saves approximately 39% of the time it takes the full order model to run and the full CA E-ROM saves 71%. When moving to larger models, the time saved by running an E-ROM will be even more significant.
6 Conclusions

A computational framework has been presented that enables reliability-based design optimization of dynamic systems by reducing the associated computational costs. The framework utilizes reduced order models extended into the parameter space of the design and random variables. This extension allows for the analysis of altered designs by the E-ROM at a significant reduction in computational cost. Various approaches for constructing an E-ROM were studied. Numerical studies showed that the system matrices need recomputing at each altered design and cannot be approximated by a Taylor series expansion. In contrast, the reduced basis can be well approximated with a first order Taylor series expansion, using the full combined approximation approach. The utility of the approach was tested on a linear elastic rod subjected to a time-varying load. The E-ROM was used for various stochastic analysis techniques and compared against the full model analyses. The E-ROM's ability to capture the essential characteristics of
the system was demonstrated by both deterministic and reliability-based design optimization examples. The E-ROM framework proved to converge to the solution with significantly less computational effort than the full system model.
Acknowledgments The authors acknowledge the support by the National Science Foundation under grant DMI-0300539. The opinions and conclusions presented are those of the authors and do not necessarily reflect the views of the sponsoring organizations.
References Adelman, H. & Haftka, R. 1986. Sensitivity analysis of discrete structural systems. AIAA Journal 24:823–832. Allen, M. & Maute, K. 2004. Reliability-based design optimization of aeroelastic structures. Structural and Multidisciplinary Optimization 27(4):228–242. Allen, M., Raulli, M., Maute, K. & Frangopol, D. 2004. Reliability-based analysis and design optimization of electrostatically actuated MEMS. Computers and Structures 82(13–14): 1007–1020. Austrell, P.-E., Dahblom, O., Lindemann, J., Olsson, A., Olsson, K.-G., Persson, K., Petersson, H., Ristinmaa, M., Sandberg, G. & Wernbergk, P.-A. CALFEM: A finite element toolbox to MATLAB, Version 3.3., Structural Mechanics, LTH, Sweden, 1999. http://www.byggmek.lth.se/Calfem/index.htm. Bennet, J. & Botkin, M. 1985. Structural shape optimization with geometric description and adaptive mesh refinement. AIAA Journal 23:458–464. Bjerager, P. 1990. On computational methods for structural reliability analysis. Structural Safety 9(2):79–96. Chandu, S. & Grandhi, R. 1995. General purpose procedure for reliability based structural optimization under parametric uncertainties. Advances in Engineering Software 23:7–14. Dailey, R. 1989. Eigenvector derivatives with repeated eigenvalues. AIAA Journal 27: 486–491. Diaz, A. & Kikuchi, N. 1992. Solution to shape and topology eigenvalue optimization problems using a homogenization method. International Journal for Numerical Methods in Engineering 35:1487–1502. Eldred, M.S., Giunta, A.A. & Collis, S. 2004. Second-order corrections for surrogate-based optimization with model hierarchies. In AIAA 2004-44570, 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 30 August – 1 September 2004, Albany, NY. Enevoldsen, I. & Sørensen, J.D. 1994a. Reliability-based optimization as an information tool. Mechanics of Structures & Machines 22(1):117–135. Enevoldsen, I. & Sørensen, J.D. 1994b. Reliability-based optimization in structural engineering. Structural Safety 15:169–196. Field, R. 2002. Numerical methods to estimate the coefficients of the polynomial chaos expansion. In 15th Engineering Mechanics Conference, Columbia University, NY. ASCE. Field, R., Red-Horse, J. & Paez, T. 2000. A nondeterministic shock and vibration application using polynomial chaos expansions. In 8th ASCE Joint Specialty Conference on Probabilistic Mechanics and Structural Reliability, South Bend, IN. Frangopol, D. & Corotis, R. 1996. Reliability-based structural system optimization: Stateof-the-art verse state-of-the-practice. In Cheng (ed.), Analysis and Computation: Proceedings of the Twelfth Conference held in Conjunction with Structures Congress XIV, pp. 67–78.
Ghanem, R.G. & Spanos, P.D. 1991. Stochastic finite element: a spectral approach, Springer. Giunta, A.A. & Eldred, M.S. 2000. Implementation of a trust region model management strategy in the dakota optimization toolkit. In AIAA/USA/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Long Beach, CA. Grandhi, R. & Wang, L. 1998. Reliability-based structural optimization using improved two-point adaptive nonlinear approximations. Finite Elements in Analysis and Design, 35–48. Hasofer, A. & Lind, N. 1974. Exact and invariant second-moment code format. J. of Engineering Mechanics 100:111–121. Haug, E.J. & Choi, K.K. 1982. Systematic occurrence of repeated eigenvalues in structural optimization. Journal of Optimization Theory and Applications 38:251–274. Kirsch, U. 1999. Efficient, accurate reanalysis for structural optimization. AIAA Journal 37(12):1663–1669. Kirsch, U. 2000. Combined approximations – a general reanalysis approach for structural optimization. Structural and Multidisciplinary Optimization 20(2):97–106. Kirsch, U. 2001. Exact and accurate solutions in the approximate reanalysis of structures. AIAA Journal 39(11):2198–2205. Kirsch, U. 2002. A unified reanalysis approach for structural analysis, design, and optimization. Structural and Multidisciplinary Optimization 25(1):67–85. Kirsch, U. 2003. Approximate vibration reanalysis of structures. AIAA Journal 41(3): 504–511. Kleiber, M. & Hien, T. 1992. The Stochastic Finite Element Method, Basic Perturbation Technique and Computer Implementation. Wiley. Legresley, P. & Alonso, J. 2001. Investigation of nonlinear projection for POD based reduced order models for aerodynamics. In AIAA 2001-16737, 39th Aerospace Sciences Meeting & Exhibit, January 8–11, 2001, Reno, NV. Luo, X. & Grandhi, R. 1997. Astros for reliability-based multidisciplinary structural analysis and optimization. Computers and Structures 62:737–745. Masur, E. 1984. Optimal structural design under multiple eigenvalue constraints. International Journal of Solids and Structures 20:117–120. Mills-Curran, W. 1988. Calculation of eigenvector derivatives for structures with repeated eigenvalues. AIAA Journal 26(7):867–871. Nieuwenhof, B. & Coyotte, J. 2002. A perturbation stochastic finite element method for the time-harmonic analysis of structures with random mechanical properties. In 5th World Congress on Computational Mechanics, Vienna, Austria. WCCM. Nurdin, H. 2002. Mathematical modeling of bias and uncertainty in accident risk assessment. Mathematical Sciences, University of Twente, The Netherlands. Ravindran, S. 1999. Proper orthogonal decomposition in optimal control of fluids. Technical report, NASA TM-1999-209113. Schuëller, G. 1997. A state-of-the-art report on computational stochastic mechanics. Probabilistic Engineering Mechanics 12:197–321. Schuëller, G. 2001. Computational stochastic mechanics – recent advances. Computers & Structures 79:2225–2234. Thomas, J., Dowell, E. & Hall, K. 2001. Three-dimensional transonic aeroelasticity using proper orthogonal decomposition based reduced order models. In 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials (SDM) Conference, April 2001, Seattle, WA, AIAA Paper 2001-1526. Willcox, K. & Peraire, J. 2001. Balanced model reduction via the proper orthogonal decomposition. In 15th AIAA Computational Fluid Dynamics Conference, June 11–14, Anaheim, CA, AIAA 2001-2611.
Xiu, D., Lucor, D., Su, C.-H. & Karniadakis, G. 2002. Stochastic modeling of flow-structure interactions using generalized polynomial chaos. Journal of Fluids Engineering 124:51–59. Yu, X., Chang, K. & Choi, K. 1998. Probabilistic structural durability prediction. AIAA Journal 36(4):628–637. Yu, X., Choi, K. & Chang, K. 1997a. A mixed design approach for probabilistic structural durability. Journal of Structural Optimization 14(2–3):81–90. Yu, X., Choi, K. & Chang, K. 1997b, January. Reliability and durability based design sensitivity analysis and optimization. Technical Report R97-01, Center for Computer Aided Design University of Iowa. Zhang, W.-H., Beckers, P. & Fleury, C. 1995. A unified parametric design approach to structural shape optimization. International Journal for Numerical Methods in Engineering 38:2283– 2292.
Chapter 7
Stochastic system design optimization using stochastic simulation Alexandros A. Taflanidis & James L. Beck California Institute of Technology, CA, USA
ABSTRACT: Engineering design in the presence of uncertainties often involves optimization problems that include as objective function the expected value of a system performance measure, such as expected life-cycle cost or failure probability. For complex systems, this expected value can rarely be evaluated analytically. In this study, it is calculated using stochastic simulation techniques which allow explicit consideration of nonlinear characteristics of the system and excitation models, as well as complex failure modes. At the same time, though, these techniques involve an unavoidable estimation error and significant computational cost which make the associated optimization challenging. An efficient framework, consisting of two stages, is presented here for such optimal system design problems. The first stage implements a novel approach, called Stochastic Subset Optimization, for iteratively identifying a subset of the original design space that has high plausibility of containing the optimal design variables. The second stage adopts some stochastic optimization algorithm to pinpoint, if needed, the optimal design variables within that subset. Topics related to the combination of the two different stages for overall enhanced efficiency are discussed. An illustrative example is presented that shows the efficiency of the proposed methodology; it considers the retrofitting of a four-story structure with viscous dampers. The minimization of the expected lifetime cost is adopted as the design objective. The expected cost associated with damage caused by future earthquakes is calculated by stochastic simulation using a realistic probabilistic model for the structure and the ground motion.
1 Introduction In engineering design, the knowledge about a planned system is never complete. First it is not known in advance which design will lead to the best system performance in terms of the specified metric; it is therefore desirable to optimize the performance measure over the space of design variables that define the set of acceptable designs. Second, modeling uncertainty arises because no mathematical model can capture perfectly the behavior of a real system and its future excitation. In practice, the designer chooses a model that he or she feels will adequately represent the behavior of the system when built; however, there is always uncertainty about which values of the model parameters will give the best representation of the system, so this parameter uncertainty should be quantified. Furthermore, whatever model is chosen, there will always be an uncertain prediction error between the model and system responses. For an efficient engineering design, all uncertainties, involving future events as well as the modeling of the system, must be explicitly accounted for. A probability logic approach provides a rational and
consistent framework for this purpose (Jaynes 2003). In this case, this process is often called stochastic system design. In this context, consider some controllable parameters that define the system design, referred to herein as design variables, ϕ = [ϕ1, ϕ2, . . . , ϕnϕ] ∈ Φ ⊂ Rnϕ, where Φ denotes the bounded admissible design space. Also consider a model class that is chosen to represent a system design and its future excitation, where each model in the class is specified by an nθ-dimensional vector θ = [θ1, θ2, . . . , θnθ] lying in Θ ⊂ Rnθ, the set of possible values for the model parameters. Because there is uncertainty in which model best represents the system behavior, a PDF (probability density function) p(θ|ϕ), which incorporates available knowledge about the system, is assigned to these parameters. The objective function for a robust-to-uncertainties design is, then, expressed as:

E_\theta[h(\varphi, \theta)] = \int_\Theta h(\varphi, \theta)\, p(\theta|\varphi)\, d\theta \qquad (1)
where Eθ[·] denotes expectation with respect to the PDF for θ and h(ϕ, θ): Rnϕ × Rnθ → R denotes the performance measure of the system, referred to also as the loss function; possible examples for h(ϕ, θ) are the life-cycle cost (see (37) later) or the indicator function for system failure so that (1) gives the failure probability (see (6) later). We then have the optimal stochastic design problem:

\text{Minimize } E_\theta[h(\varphi, \theta)] \quad \text{subject to} \quad f^c(\varphi) \ge 0 \qquad (2)
where f c(ϕ) corresponds to a vector of constraints. Such optimization problems, arising in decision making under uncertainty, are typically referred to as stochastic optimization problems (e.g. Ruszczynski & Shapiro 2003, Spall 2003). In structural engineering, stochastic design problems are usually related to the expected life-cycle cost of a structure (e.g. Ang & Lee 2001) or to its reliability, quantified in terms of the probability of failure P(F|ϕ) (e.g. Enevoldsen & Sørensen 1994, Gasser & Schuëller 1997). Many variants of such problems have been posed, typically expressed in one of the three following forms: (a) optimization of the system reliability given deterministic constraints (related, for example, to construction cost), (b) optimization of the cost of the structure given reliability constraints, or (c) optimization of the expected life-cycle cost of the structure. Approaches have been suggested for transforming the latter problem into one of the former two. This is established by approximating the cost related to future damages to the structure in terms of its failure probability (Sørensen et al. 1994). In this setting, Reliability-Based Design Optimization (RBDO), i.e. design considering reliability measures in the objective function or the design constraints, has emerged as one of the standard tools for robust and cost-effective design of structural systems. An alternative design methodology that also accounts for probabilistic system response is Robust Design Optimization (RDO). RDO primarily seeks to minimize the influence of stochastic variations on the mean design; as such, it focuses on reduction of the mean performance rather than looking at optimizing the response that exceeds some acceptable thresholds, as RBDO does. Still, RBDO and RDO represent only a portion of the potential problems encountered in robust-to-uncertainties system design optimization.
In this study we discuss general stochastic system design problems that involve as objective function the expected value of a system performance measure. Some special attention is given to problems with reliability objectives, i.e. when that expected value corresponds to a failure probability. This class of problems, which belongs to RBDO, will be referred to herein as ROP (reliability objective problems). We focus on analysis methods that are applicable to complex systems, involving, for example, nonlinear models with high-dimensional uncertainties. These types of problems appear often in the study of dynamic systems when the excitation is modeled as a stochastic process, for example, in the field of dynamic reliability (Au & Beck 2003b). An efficient framework for analysis and optimization is presented here for such design problems using stochastic simulation techniques to evaluate the system performance.
2 Optimal stochastic system design using stochastic simulation

2.1 General case
We consider the optimization described by (2), which may be equivalently formulated as:

\varphi^* = \arg\min_{\varphi \in \Phi} E_\theta[h(\varphi, \theta)] \qquad (3)
where the deterministic constraints are taken into account by appropriate definition of the admissible design space Φ. In the probabilistic setting described earlier, model uncertainties may be incorporated in the system description as a model prediction error, i.e. an error between the response of the actual system and that of the assumed model. This error can be modeled probabilistically as a random variable (Beck & Katafygiotis 1998) and augmented into θ to form an uncertain parameter vector comprised of both the uncertain model parameters and the model prediction error. For optimization (3), the integral in (1) must be evaluated. A particular source of difficulty for structural design when complex systems are considered is the high computational cost associated with this evaluation. To reduce this computational effort, many specialized approximate approaches have been proposed for structural optimization (e.g. Enevoldsen & Sørensen 1994, Gasser & Schuëller 1997, Jensen 2005). These approaches include using response surface methods to approximate the structural behavior or using some proxy for the structural reliability in ROP (for example, a reliability index obtained through FORM or SORM). These specialized approaches may work satisfactorily under certain conditions, but are not proved to converge to the solution of the original design problem. For this reason such approaches are avoided in this study. Instead, evaluation of the integral in (1) through stochastic simulation techniques is considered. In this setting, an unbiased estimate of the expected value in (1) can be obtained using a finite number, N, of random samples of θ, drawn from p(θ|ϕ):

\hat{E}_{\theta,N}[h(\varphi, \Omega_N)] = \frac{1}{N} \sum_{i=1}^{N} h(\varphi, \theta_i) \qquad (4)
where ΩN = [θ1 . . . θN], and vector θi denotes the sample of the uncertain parameters used in the ith simulation. This estimate involves an unavoidable error eN(ϕ, ΩN). The optimization in (3) is, then, only approximately equivalent to:

\varphi^*_N = \arg\min_{\varphi \in \Phi} \hat{E}_{\theta,N}[h(\varphi, \Omega_N)] \qquad (5)
However, if the stochastic simulation procedure is a consistent one, then as N → ∞, Eˆ θ,N [h(ϕ, ΩN )] → Eθ [h(ϕ,θ)] and ϕ∗N → ϕ∗ . The existence of the estimation error eN (ϕ, ΩN ), which may be considered as noise in the objective function, contrasts with classical deterministic optimization where it is assumed that one has perfect information. Figure 7.1a illustrates the difficulties associated with eN (ΩN , ϕ). The curves corresponding to simulation-based evaluation of the objective function have non-smooth characteristics, a feature which makes application of gradient-based algorithms challenging. Also, the estimated optimum depends on the exact influence of the estimation error, which is not the same for all evaluations; different runs of the algorithm converge to different solutions, which do not necessarily correspond to the true optimum. An efficient framework, consisting of two stages, is discussed in the following sections for such optimizations. The first stage implements a novel approach, called Stochastic Subset Optimization (SSO) (Taflanidis & Beck 2007a, Taflanidis & Beck 2007b) for efficiently exploring the sensitivity of the objective function to the design variables and iteratively identifying a subset of the original design space that has high plausibility of containing the optimal design variables. The second stage adopts some appropriate stochastic optimization algorithm to pinpoint the optimal design variables using information from the first stage. Topics related to the combination of the two different stages for enhanced overall efficiency are discussed. Before presenting this framework some special characteristics of ROP are considered.
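A toy illustration of the estimation error eN(ϕ, ΩN): two independent sample sets give different estimates of the same objective, and the discrepancy shrinks as N grows. The one-dimensional loss function is artificial and unrelated to the models of this chapter:

```python
import numpy as np

def loss(phi, theta):
    """Artificial loss h(phi, theta); not one of the models in this chapter."""
    return (phi - 2.0) ** 2 + 0.5 * theta ** 2 + phi * theta

def estimate(phi, n, seed):
    theta = np.random.default_rng(seed).standard_normal(n)   # samples from p(theta|phi)
    return loss(phi, theta).mean()                            # estimate of E_theta[h(phi, theta)]

for n in (100, 10_000):
    e1, e2 = estimate(1.5, n, seed=1), estimate(1.5, n, seed=2)
    print(n, e1, e2, abs(e1 - e2))    # different Omega_N give different (noisy) estimates
```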
[Figure 7.1 (a) Analytical and simulation-based (sim) evaluation of objective function (analytical, sim N = 1000, sim N = 4000) and (b) comparison between the two candidate loss functions for reliability objective problems, IF(ϕ, θ) and Pε(g̃(ϕ, θ)), plotted against g̃(ϕ, θ).]
2.2 Reliability objective problems
In a reliability context, the robust probability of failure (Papadimitriou et al. 2001) can be employed to include probabilistic model uncertainties when evaluating the performance of a system. This probability quantifies the performance by giving a measure of the plausibility of the occurrence of system failure, based on all available information, and it is expressed as:

P(F|\varphi) = E_\theta[I_F(\varphi, \theta)] = \int_\Theta I_F(\varphi, \theta)\, p(\theta|\varphi)\, d\theta \qquad (6)
where IF(ϕ, θ) is the indicator function of failure, which is 1 if the system that corresponds to (ϕ, θ) fails, i.e. its response departs from the acceptable performance set, and 0 if it does not. An equivalent expression can also be used for the robust failure probability when a model prediction error, ε(ϕ, θ), is assumed. Let g(ϕ) > 0 and g̃(ϕ, θ) > 0 be the limit state quantities defining the system's and model's failure respectively, and let the model prediction error be defined in such a way that the relationship ε(ϕ, θ) = g̃(ϕ, θ) − g(ϕ) holds; then, if Pε(·) is the cumulative distribution function of the model prediction error ε(ϕ, θ) conditional on (ϕ, θ), and noting that g(ϕ) > 0 is equivalent to ε(ϕ, θ) < g̃(ϕ, θ), the robust failure probability can be equivalently expressed as (Taflanidis & Beck 2007a):

P(F|\varphi) = \int_\Theta P_\varepsilon(\tilde{g}(\varphi, \theta))\, p(\theta|\varphi)\, d\theta \qquad (7)
where in this case the vector θ corresponds solely to the uncertain parameters for the system and excitation model, i.e. excluding the prediction error. Thus, the loss function in ROP corresponds either to (a) h(ϕ, θ) = IF(ϕ, θ) or (b) h(ϕ, θ) = Pε(g̃(ϕ, θ)), depending on which formulation is adopted, (6) or (7). Both of these formulations are used in the two stages of the suggested framework. In Figure 7.1b these two loss functions are compared when ε is Normally distributed with mean 0 and standard deviation 0.01.
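A small numerical comparison of the two candidate loss functions, assuming (as in Figure 7.1b) a zero-mean Normal prediction error with standard deviation 0.01; the limit state values are dummy inputs:

```python
import numpy as np
from scipy.stats import norm

def loss_indicator(g_tilde):
    """(a) h = I_F: 1 when the model limit state indicates failure (g_tilde > 0)."""
    return (g_tilde > 0).astype(float)

def loss_smoothed(g_tilde, sigma_eps=0.01):
    """(b) h = P_eps(g_tilde): probability that eps < g_tilde for a N(0, 0.01) error,
    a smoothed version of the indicator."""
    return norm.cdf(g_tilde / sigma_eps)

g = np.linspace(-0.05, 0.05, 11)     # dummy limit state values
print(loss_indicator(g))
print(loss_smoothed(g))
```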
3 Stochastic subset optimization

SSO is an efficient algorithm for exploring the sensitivity of stochastic design optimization problems using a small number of system analyses (Taflanidis & Beck 2007a, Taflanidis & Beck 2007b).

3.1 Augmented problem

Consider, initially, the modified positive loss function hs(ϕ, θ): Rnϕ × Rnθ → R+ defined for a constant s as:

h_s(\varphi, \theta) = h(\varphi, \theta) - s, \quad \text{where} \quad s < \min_{\varphi, \theta} h(\varphi, \theta) \qquad (8)
and note that Eθ [hs (ϕ, θ)] = Eθ [h(ϕ, θ)] − s. Since the two expected values differ only by a constant, optimization of the expected value of h(·) is equivalent, in terms of the
optimal design choice, to optimization of the expected value of hs(·). In the SSO setting we focus on the latter optimization. The basic idea in SSO is the formulation of an augmented problem where the design variables are artificially considered as uncertain with distribution p(ϕ) over the design space Φ. In the setting of this augmented stochastic design problem, define the auxiliary PDF π(ϕ, θ) as:

\pi(\varphi, \theta) = \frac{h_s(\varphi, \theta)\, p(\varphi, \theta)}{E_{\varphi,\theta}[h_s(\varphi, \theta)]} \qquad (9)
where p(ϕ, θ) = p(ϕ)p(θ|ϕ) and the normalizing integral in the denominator corresponds to the expected value in the augmented uncertain space:

E_{\varphi,\theta}[h_s(\varphi, \theta)] = \int_\Phi \int_\Theta h_s(\varphi, \theta)\, p(\varphi, \theta)\, d\theta\, d\varphi \qquad (10)
This expected value will not be explicitly needed, but it can be obtained through stochastic simulation, which leads to an expression similar to (4) but with the pair [ϕ, θ] defining the uncertain parameters. The transformation of the loss function in (8) may be necessary to ensure that π(ϕ, θ) ≥ 0. For most structural design problems h(ϕ, θ) ≥ 0 and the transformation in (8) is usually unnecessary, which is always the case for ROP. However, in some cases it may be advantageous to choose s near the minimum of h(ϕ, θ) to increase the efficiency of SSO (see later). In terms of the auxiliary PDF, the objective function Eθ[hs(ϕ, θ)] can be expressed as:

E_\theta[h_s(\varphi, \theta)] = \frac{\pi(\varphi)}{p(\varphi)}\, E_{\varphi,\theta}[h_s(\varphi, \theta)] \qquad (11)

where the marginal PDF π(ϕ) is equal to:

\pi(\varphi) = \int_\Theta \pi(\varphi, \theta)\, d\theta \qquad (12)
Define, now, J(ϕ) as:

J(\varphi) = \frac{\pi(\varphi)}{p(\varphi)} = \frac{E_\theta[h_s(\varphi, \theta)]}{E_{\varphi,\theta}[h_s(\varphi, \theta)]} \qquad (13)
Since Eϕ,θ [hs (ϕ, θ)] can be viewed simply as a normalizing constant, minimization of Eθ [hs (ϕ, θ)] is equivalent to the minimization of the quotient J(ϕ) = π(ϕ)/p(ϕ). For this purpose the marginal PDF π(ϕ) in the numerator must be evaluated. Samples of this PDF can be obtained through stochastic sampling/simulation techniques (Robert & Casella (2004)). These techniques will give sample pairs [ϕ, θ] that are distributed according to π(ϕ, θ). Their ϕ component corresponds to samples from the marginal distribution π(ϕ). Appendix A briefly discusses two appropriate sampling algorithms, one using a direct approach to Monte Carlo (MC) simulation and one using Markov Chain Monte Carlo (MCMC) simulation. SSO is based on exploiting the information in these samples.
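As a hedged sketch of the sampling step (not the specific algorithms of Appendix A), a generic random-walk Metropolis chain targeting π(ϕ, θ) ∝ hs(ϕ, θ)p(ϕ)p(θ|ϕ) might look as follows, with an artificial loss function, a uniform p(ϕ) and a standard normal p(θ|ϕ):

```python
import numpy as np

rng = np.random.default_rng(3)

def h_s(phi, theta):
    return (phi - 2.0) ** 2 + theta ** 2 + 0.1      # artificial positive loss

def log_target(phi, theta, lo=-4.0, hi=4.0):
    """log pi(phi, theta) up to a constant: uniform p(phi) on [lo, hi],
    standard normal p(theta|phi)."""
    if not (lo <= phi <= hi):
        return -np.inf
    return np.log(h_s(phi, theta)) - 0.5 * theta ** 2

x = np.array([0.0, 0.0])
chain = []
for _ in range(20_000):                             # random-walk Metropolis
    y = x + rng.normal(0.0, 0.5, 2)
    if np.log(rng.uniform()) < log_target(*y) - log_target(*x):
        x = y
    chain.append(x.copy())
phi_samples = np.array(chain)[:, 0]                 # samples from the marginal pi(phi)
```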
3.2 Subset analysis
An analytical approximation for π(ϕ) based on these samples for ϕ can be established using, for example, the maximum entropy method (Ching & Hsieh 2007), histograms (Au 2005) or kernel density estimators. Experience indicates that for challenging problems, including, for example, cases where the dimension nϕ is not small (e.g. larger than two) or the sensitivity for a design variable is complex, such methods may be problematic and are generally unreliable as means of approximating π(ϕ) (Taflanidis & Beck 2007a). In the SSO framework, such an approximation for π(ϕ) is avoided. The sensitivity analysis is performed by looking at the average value of J(ϕ) over I, H(I), which for any subset of the design space I ⊂ Φ with volume VI is defined as:

H(I) = \frac{\int_I J(\varphi)\, d\varphi}{V_I} = \frac{1}{V_I} \int_I \frac{\pi(\varphi)}{p(\varphi)}\, d\varphi = \frac{1}{E_{\varphi,\theta}[h_s(\varphi, \theta)]}\, \frac{1}{V_I} \int_I E_\theta[h_s(\varphi, \theta)]\, d\varphi \qquad (14)

To simplify the evaluation of H(I), a uniform distribution is chosen for p(ϕ). Note that p(ϕ) does not reflect the uncertainty in ϕ but is simply a device for formulating the augmented problem and thus can be selected according to convenience. Finally, H(I) and an estimate of it based on the samples from π(ϕ), obtained as described previously, are given, respectively, by:

H(I) = \frac{V_\Phi}{V_I} \int_I \pi(\varphi)\, d\varphi \quad \text{and} \quad \hat{H}(I) = \frac{N_I/V_I}{N_\Phi/V_\Phi} \qquad (15)
where NI and NΦ denote the number of samples from π(ϕ) belonging to the sets I and Φ, respectively, and VΦ is the volume of the design space Φ. The estimate for H(I) is equal to the ratio of the volume density of samples from π(ϕ) in sets I and Φ. The coefficient of variation (c.o.v.) for this estimate depends on the simulation technique used for obtaining the samples from π(ϕ). For a broad class of sampling algorithms this c.o.v. may be expressed as:

\text{c.o.v.}\,\hat{H}(I) = \sqrt{\frac{1 - P(\varphi \in I)}{N \cdot P(\varphi \in I)}} \approx \sqrt{\frac{1 - N_I/N_\Phi}{N \cdot N_I/N_\Phi}}, \qquad P(\varphi \in I) = \int_I \pi(\varphi)\, d\varphi \qquad (16)
where N = NΦ/(1 + γ), γ ≥ 0, is the effective number of independent samples. If direct MC techniques are used then γ = 0, but if MCMC sampling is selected then γ > 0 because of the correlation of the generated samples. Ultimately, the value of γ depends on the characteristics of the algorithm used. See (Au & Beck 2003b) for a formula for γ when the Metropolis-Hastings algorithm is used. For the uniform PDF for p(ϕ), note that H(I) is equal to the ratio:

H(I) = \frac{\int_I E_\theta[h_s(\varphi, \theta)]\, d\varphi / V_I}{\int_\Phi E_\theta[h_s(\varphi, \theta)]\, d\varphi / V_\Phi} \qquad (17)

where the integrals in the numerator and denominator could be considered as the "average set content'' in I and Φ respectively. Thus H(I) expresses the average sensitivity of Eθ[hs(ϕ, θ)] to ϕ within the set I ⊂ Φ.
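The estimate Ĥ(I) in (15) and its c.o.v. in (16) follow directly from counting samples; a minimal sketch for a hyper-rectangular subset (the function and argument names are illustrative):

```python
import numpy as np

def h_hat_and_cov(phi_samples, lower, upper, v_phi, gamma=0.0):
    """Estimate H(I) from (15) and its c.o.v. from (16) for a hyper-rectangle
    I = [lower, upper]; gamma > 0 accounts for MCMC sample correlation."""
    inside = np.all((phi_samples >= lower) & (phi_samples <= upper), axis=1)
    n_i, n_phi = inside.sum(), len(phi_samples)
    v_i = np.prod(np.asarray(upper) - np.asarray(lower))
    h_hat = (n_i / v_i) / (n_phi / v_phi)
    n_eff = n_phi / (1.0 + gamma)                  # effective independent samples
    cov = np.sqrt((1.0 - n_i / n_phi) / (n_eff * n_i / n_phi))
    return h_hat, cov
```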
3.3 Subset optimization

Consider a set of admissible subsets A in Φ that have some predetermined shape and some size constraint, for example related to the set volume, and define the optimization:

I^* = \arg\min_{I \in A} H(I) \qquad (18)
This definition is motivated by the fact that, as explained above, minimization of Eθ [hs (ϕ, θ)] is equivalent to minimization of J(ϕ) and that H(I) corresponds to the volume average integral of J(ϕ) over subset I. Based on the estimate in (15), optimization (18) is approximately equal to identification of the subset that contains the smallest volume density NI /VI of samples: NI ˆ Iˆ = arg min H(I) = arg min I∈A I∈A VI
(19)
Note that the relationship between the position in the design space of a set I ∈ A and the number of sample points in it is non-differentiable. Thus, methods appropriate for non-smooth optimization problems, such as genetic algorithms, should be chosen for optimization (19). The evaluation of the objective function for this problem involves small computational effort; thus, the optimization can be efficiently solved if an appropriate algorithm is chosen. If set A is properly chosen, for example if its shape is 'close' to the contours of Eθ[hs(ϕ, θ)] in the vicinity of ϕ∗, then ϕ∗ ∈ I∗ for the optimization in (18). This argument is not necessarily true for the optimization in (19) because only estimates of H(I) are used. Î is simply the set, within the admissible subsets A, that has the largest likelihood, in terms of the information available through the obtained samples, of including ϕ∗. This likelihood defines the quality of the identification and ultimately depends (Taflanidis & Beck 2007b) on the ratio of average set content, given by H(I) (see (17)). Taking into account the fact that the set content in the neighborhood of the optimal solution is the smallest in Φ, it is evident that smaller values of Ĥ(I) correspond to greater plausibility for the set Î to include ϕ∗. Since only estimates of H(I) are available in the stochastic identification, the quality depends, ultimately, on both: (a) the estimate Ĥ(Î) and (b) its coefficient of variation (defining the accuracy of that estimate). Smaller values for these parameters correspond to better quality of identification. Both of them should be used as a measure of this quality.
3.4 Details for ROP
When SSO is implemented for ROP, selection of IF (ϕ, θ) as the loss function is beneficial because it simplifies the task of simulating samples from π(ϕ, θ). In this case these samples correspond simply to failed samples, i.e. samples that lead to failure of the system (IF (ϕ, θ) = 1), and the auxiliary PDF π(ϕ, θ) is simply the PDF for the augmented uncertain parameter vector conditioned on failure of the system, i.e. p(ϕ, θ|F). Similarly, the marginal π(ϕ) corresponds to p(ϕ|F). Monte Carlo simulation can then be used for simulating samples from p(ϕ|F). For design problems that involve small failure probabilities this approach may be inefficient because 1/PF trials are needed on
the average in order to simulate one failed sample, where PF is the failure probability in the augmented design problem, defined, similarly to (10), as:

P_F = \int_\Phi \int_\Theta I_F(\varphi, \theta)\, p(\varphi, \theta)\, d\theta\, d\varphi \qquad (20)
Other stochastic simulation methods, such as Subset Simulation (Au & Beck 2001), should be preferred in such cases.

3.5 Implementation issues
3.5.1 Resolution for design variables and iterative identification
The size of the admissible subsets I defines (a) the resolution of ϕ∗ and (b) the information about the accuracy of Ĥ(I) that is extracted from the samples from π(ϕ). Selecting a smaller size for the admissible sets leads to better resolution for ϕ∗. At the same time, though, this selection leads to smaller values for the ratio NI/NΦ (since a smaller number of samples is included in smaller sets) and thus it increases the c.o.v. (reduces the accuracy) of the estimation, as seen from (16). In order to maintain the same quality for the estimate, the effective number of independent samples must be increased, which means that more simulations must be performed. Since we are interested in subsets in Φ with small average set content, the required number of simulations to gather accurate information for subsets with small size is large. To account for this characteristic and to increase the efficiency of the identification process, an iterative approach can be adopted. At iteration k, additional samples in Îk−1 (where Î0 = Φ) that are distributed according to π(ϕ) are obtained. A region Îk ⊂ Îk−1 for the optimal design parameters is then identified as above. The quality of the identification is improved by applying such an iterative scheme, since the ratio of the samples in Îk−1 to those in Îk is larger (and thus the c.o.v. for Ĥ(Îk) smaller) than the equivalent ratio when comparing Îk and the original design space Φ. The samples [ϕ, θ] available from the previous iteration, whose ϕ component lies inside set Îk−1, can be exploited for improving the efficiency of the sampling process. In terms of the algorithms described in Appendix A this may be established, for example, by (a) forming better proposal PDFs and/or (b) using the samples already available as seeds for MCMC simulation. Some guidelines for the MCMC simulation are given later on, in the context of the example considered. Another way to improve the efficiency in this iterative process is to continually update hs(ϕ, θ) in (8) by re-defining s:

h_{s,k}(\varphi, \theta) = h(\varphi, \theta) - s_k, \quad \text{where} \quad s_k = \min_{\varphi \in \hat{I}_k,\, \theta} h(\varphi, \theta) \qquad (21)
Figure 7.2 illustrates this concept. For choice hs,2 (ϕ, θ), which corresponds to a larger value of s, the sensitivity of the objective function, in the SSO setting, is larger and a candidate region for the optimal choice is more easily discernible (better quality is established) based on samples from π(ϕ). If hs (ϕ, θ) is reformulated, though, the ancillary density π(ϕ, θ) changes and the samples from the previous iteration cannot provide useful information for the next iteration unless the previous and the next loss
[Figure 7.2 Influence of selection of s in SSO: (a) objective function Eθ[hs(ϕ, θ)] for hs,1(ϕ, θ) = h(ϕ, θ) (s = 0) and hs,2(ϕ, θ) = h(ϕ, θ) − 0.10 (s = 0.10); (b) histograms of samples from π(ϕ) obtained through MC simulation.]
functions hs(ϕ, θ) are similar. For cases where the sensitivity of the objective function is small, our experience indicates that the re-formulation of the loss function can be beneficial (assuming that s can be set to a larger value). When the sensitivity is quite high, though, it is preferable to keep the same loss function and use the samples available to improve the efficiency when generating new samples.

3.5.2 Selection of admissible subsets

Proper selection of the geometrical shape and size of the admissible sets is important for the efficiency of SSO. The geometrical shape should be chosen so that the challenging, non-smooth optimization (19) can be efficiently solved while the sensitivity of the objective function to each design variable is still fully explored. For example, a hyper-rectangle or a hyper-ellipse can be appropriate choices, with the latter expected to be closer to the objective function contours but requiring more computational complexity, especially in high dimensions. The size of the admissible subsets is related to the quality of identification as discussed earlier. The size can be determined by incorporating a constraint on either (i) the volume ratio δ = VI/VΦ or (ii) the number of samples ratio ρ = NI/NΦ. The first choice cannot be directly related to any of the measures of quality of identification; thus proper selection of δ is not straightforward. The second choice allows for directly controlling the coefficient of variation (see (16)) and thus one of the parameters influencing the quality of identification. This characterization for
the admissible subsets is adopted here. The subset optimization in (19) corresponds, finally, to the identification of the subset that has the smallest estimated average value, within the class of subsets that guarantee a specific c.o.v. for that estimate:

\hat{I} = \arg\min_{I \in A_\rho} N_I/V_I = \arg\max_{I \in A_\rho} V_I, \qquad A_\rho = \{I \subset \Phi : \rho = N_I/N_\Phi\} \qquad (22)
The same comment as for the optimization in (19) applies in this case; an algorithm appropriate for non-smooth optimization should be selected. The volume (size) of the admissible subsets in this scheme is adaptively chosen so that the ratio of samples in the identified set is equal to ρ. The choice of the value for ρ affects the efficiency of the identification. If ρ is large, fewer samples are required for the same level of accuracy (c.o.v. in (16)). However, a large value of ρ means that the size of the identified subsets decreases slowly (larger sets are identified), and a slowly decreasing sequence requires more steps to converge to the optimal solution. The choice of the constraint ρ is thus a trade-off between the number of samples required in each step and the number of steps required to converge to the optimal design choice. In the applications we have investigated so far it was found that choosing ρ = 0.1–0.2 yields good efficiency.
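A crude random-search sketch of the subset identification in (22) for hyper-rectangular admissible sets: each candidate box is scaled about a random center until it contains exactly a fraction ρ of the samples, and the largest such box is kept. In practice a genetic algorithm, as suggested above, would be used, and the design-space bounds would also be enforced; everything here is illustrative:

```python
import numpy as np

def identify_subset(phi_samples, lo, hi, rho=0.2, n_candidates=2000, rng=None):
    """Random search for a large hyper-rectangle containing exactly a fraction
    rho of the samples from pi(phi) -- an approximation of optimization (22)."""
    rng = rng or np.random.default_rng()
    n, d = phi_samples.shape
    k = int(rho * n)                          # number of samples the box must contain
    best_vol, best_box = -np.inf, None
    for _ in range(n_candidates):
        c = rng.uniform(lo, hi, d)            # candidate box center
        w = rng.uniform(0.1, 1.0, d)          # candidate half-width aspect ratio
        r = np.max(np.abs(phi_samples - c) / w, axis=1)
        scale = np.sort(r)[k - 1]             # grow the box until it holds exactly k samples
        vol = np.prod(2.0 * scale * w)
        if vol > best_vol:
            best_vol = vol
            best_box = np.column_stack([c - scale * w, c + scale * w])
    return best_box
```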
3.5.3 Influence of dimension of design variables vector
For a specific reduction of the volume of the search space in some step of the set identification, δk = VIk/VIk−1, the mean reduction of the length for each design variable is δk^{1/nϕ}. The mean total length reduction over all variables in niter iterations is:

\prod_{k=1}^{n_{iter}} \sqrt[n_\varphi]{\delta_k} \approx \left(\sqrt[n_\varphi]{\delta_{mean}}\right)^{n_{iter}} = (\delta_{mean})^{n_{iter}/n_\varphi} \qquad (23)
where δmean is the geometric mean of the volume reductions over all of the iterations. It is evident from (23) that, for the same mean total length reduction, the number of iterations is proportional to the dimension of the design space. This proportionality relationship has been verified in the examples considered in (Taflanidis & Beck 2007a; Taflanidis & Beck 2007b). Assuming that the mean total length reduction over all variables adequately describes the computational efficiency of SSO, this argument shows that this efficiency decreases only linearly with the dimension of the design space, so SSO should be considered appropriate even for problems that involve a large number of design variables.
SSO algori thm and sampl e i mpl ement a t i o n e x a m p l e
The SSO algorithm is summarized as follows (Figure 7.3 illustrates some of the key steps of the algorithm). Initialization: Define the desired geometrical shape for the subsets I. Decide on the constraint value ρ in (22) and the desired number of samples N. The latter should be
166
Structural design optimization considering uncertainties
(b) Identification of set Î1
(a) Initial samples through MC w2
8 6
8 6 4 2
4 2 100
w2
200
300
400
500
600
200
300
400
500
600
(d) Magnification of previous plot
(c) Retained samples in set Î1
8 6
4
4
3
2
2 100
200
300
400
500
600
(e) New samples through MCMC and set Î2 w2
100
4 3 2 100
200
300 w1
400
500
100
200
300
400
500
(f) Last step, stopping criteria are satisfied 4.5 4 3.5 3 2.5 150
200 w1
250
Figure 7.3 Illustration of some key steps in SSO for an example with a two-dimensional design space. The X in the plots corresponds to the optimal solution.
chosen such that the c.o.v. for Ĥ(I) in (16) is equal to some pre-specified value for the given value of ρ. For example, for a c.o.v. of 5% and a choice of ρ = 0.2, N should be 1600 if direct MC techniques are used.
Step 1: Simulate, for example using MC simulation, NΦ = N samples from π(ϕ, θ). Identify subset Î1 as the solution of the optimization problem (22) and keep only the NÎ1 samples whose ϕ component belongs to the subset Î1.
Step k: Use some sampling technique, such as the Metropolis-Hastings algorithm, to obtain in total N samples from π(ϕ, θ) inside the subset Îk−1. Identify subset Îk:

\hat{I}_k = \arg\max_{I \in A_{\rho,k}} V_I, \qquad A_{\rho,k} = \{I \subset \hat{I}_{k-1} : \rho = N_I/N\} \qquad (24)
Keep only the NÎk samples whose ϕ component belongs to the subset Îk and exploit them in the next iteration.
Stopping criteria: At each step, estimate the ratio

\hat{H}(\hat{I}_k) = \frac{N_{\hat{I}_k}\, V_{\hat{I}_{k-1}}}{N_{\hat{I}_{k-1}}\, V_{\hat{I}_k}} \qquad (25)
and its coefficient of variation according to the appropriate expression (depending on the algorithm used). Based on these two quantities and the desired quality of the identification (see next section), decide on whether to stop or to proceed to step k + 1. Table 7.1 presents the results for a sample run of the SSO algorithm for the design problems considered in (Taflanidis & Beck 2007a). Problem D1 involves 2 design
Table 7.1 Results from a sample run of the SSO algorithm for two design problems.

Problem D1 (nϕ = 2)            Iteration of SSO
                               1        2        3
δk = V_Îk / V_Îk−1             0.370    0.314    0.270
δk^(1/nϕ)                      0.608    0.561    0.489
Ĥ(Îk)                          0.541    0.636    0.835
(V_Î3 / V_Φ)^(1/nϕ)            0.167

Problem D2 (nϕ = 4)            Iteration of SSO
                               1        2        3        4        5        6
δk = V_Îk / V_Îk−1             0.442    0.378    0.346    0.323    0.269    0.223
δk^(1/nϕ)                      0.815    0.784    0.767    0.754    0.720    0.687
Ĥ(Îk)                          0.453    0.529    0.577    0.619    0.743    0.896
(V_Î6 / V_Φ)^(1/nϕ)            0.183
variables, nϕ = 2, whereas problem D2 involves 2 more design variables, in total nϕ = 4. Figure 7.3 presents graphically the results for the sample run in problem D1. In these examples the shape of the subsets I was selected as hyper-rectangles and ρ was chosen equal to 0.2. Figure 7.3 clearly illustrates the dependence of the quality of the identification on Ĥ(Îk), which expresses (see (15)) the difference in volume density of samples from π(ϕ) inside and outside of the identified set Îk (corresponding to the interior rectangle in these plots). In the first iteration, this difference is clearly visible. As SSO evolves, the difference becomes smaller and by the last iteration (Figure 7.3f), it is difficult to visually discriminate which region in the set has smaller volume density of samples from π(ϕ). This corresponds to a decrease in the quality of the identification. In order to maintain plausibility for the identified set to contain the optimal solution, the iterative process stops. This figure also shows the capability of SSO to explore the sensitivity of the objective function with respect to each design variable. Within the initial design space (Figure 7.3a) the sensitivity with respect to design variable ϕ2 appears to be significantly higher, based on the density of failed samples. The set identified by SSO corresponds to a larger size reduction for that variable (Figure 7.3b); thus it efficiently captures the difference in sensitivities. Note that in order to take advantage of this capability, no proportionality relationship should be enforced for the dimensions of the admissible subsets in different directions. Looking now at the results in Table 7.1, it is clear that as the identification process in SSO evolves, the reduction in the size of the identified subsets, expressed by δk, becomes larger and δk approaches the value for ρ (selected here as 0.2). Also the value of Ĥ(Îk) increases, which corresponds to a reduction in the quality of identification. All these patterns can be theoretically justified (Taflanidis & Beck 2007a) assuming that as the SSO identification progresses, regions of the design space with smaller sensitivity to the objective function are approached. The influence of the number of design variables on the efficiency of SSO is also evident from the results in Table 7.1. As mentioned earlier, the quantity δk^(1/nϕ) corresponds to the mean length reduction per design variable. For D2 that reduction is much smaller, even though the values for δk are of the same level for the two design problems, which leads to more iterations until a set with small sensitivity to the objective function is identified. The average total length reduction is similar (see the last row in Table 7.1, which is equivalent to the expression (23)) but the number of required iterations for D2 is double. Since design
case D2 has double the number of design variables, this verifies the proportionality dependence (argued in Section 3.5.3) between the required number of iterations and the number of design variables needed to establish the same average reduction per design variable. Note also that the mean reduction in size per design variable (last row of Table 7.1) is significant. This means that the size of the subset identified by SSO is considerably smaller than the original design space. Since this identification requires a small number of iterations, it verifies the efficiency of the SSO algorithm.

3.6 Convergence to optimal solution

The algorithm described above can adaptively identify a relatively small sub-region for the optimal design variables ϕ∗ within the original design space. The efficiency of convergence to ϕ∗ depends a lot on the sensitivity of the objective function around the optimal point. If that sensitivity is large then SSO will ultimately converge to a "small'' set Î, satisfying at the same time the accuracy requirements that make it highly likely that ϕ∗ is in Î. The center point of this set, denoted herein as ϕSSO, gives the estimate for the optimal design variables. In cases where this sensitivity is not large enough, such convergence will be problematic and will require increasing the number of samples in order to satisfy the requirement for the quality of identification. Another important topic related to the identification in such cases is that there is no measure of the quality of the identified solution, i.e. how close ϕSSO is to ϕ∗, that can be directly established through the SSO results. If the identification is performed multiple times and a sequence {ϕSSO,i} is obtained, the c.o.v. of {Êθ[h(ϕSSO,i, θ)]} could be considered a good candidate for characterizing this quality. This might not always be a good measure though. For example, if the choice for admissible subsets is inappropriate for the problem considered, it could be the case that consistent results are obtained for ϕSSO (small c.o.v.) that are far from the optimal design choice ϕ∗. Also, this approach involves higher computational cost because of the need to perform the identification multiple times. For such cases, it would be more computationally efficient (instead of increasing N in SSO and performing the identification multiple times) and more accurate (in terms of identifying the true optimum) to combine SSO with some other optimization algorithm for pinpointing ϕ∗. A discussion of topics related to such algorithms is presented next.
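The stopping bookkeeping summarized in Table 7.1 is simple to reproduce. The sketch below is a minimal illustration (not the authors' code) that recomputes, from the reported δk and Ĥ(Îk) values of problem D2, the per-variable length reduction and the cumulative volume ratio, and applies the stopping rule discussed above; the 0.80 threshold for Ĥ(Îk) is the guideline quoted later in Section 5.1.

```python
# Minimal sketch of the SSO stopping bookkeeping; delta and H values are those
# reported in Table 7.1 for problem D2; the 0.80 threshold is the Section 5.1 guideline.
n_phi = 4                    # number of design variables (problem D2)
rho = 0.2                    # target volume-reduction ratio per iteration
H_threshold = 0.80           # stop when the quality measure H(I_k) exceeds this

delta = [0.442, 0.378, 0.346, 0.323, 0.269, 0.223]   # delta_k = V_Ik / V_Ik-1
H_hat = [0.453, 0.529, 0.577, 0.619, 0.743, 0.896]   # estimated H(I_k) per iteration

volume_ratio = 1.0
for k, (d, H) in enumerate(zip(delta, H_hat), start=1):
    volume_ratio *= d                                 # V_Ik / V_Phi after k iterations
    mean_length_reduction = d ** (1.0 / n_phi)        # per-variable size reduction, delta_k^(1/n_phi)
    print(k, round(mean_length_reduction, 3), round(volume_ratio ** (1.0 / n_phi), 3))
    if H > H_threshold:                               # quality of identification has degraded; stop
        break
```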
4 Stochastic optimization algorithms

We go back to the original formulation of the objective function, i.e. (1). In principle, though, the techniques discussed here are applicable to the case that the loss function h(θ, ϕ) is replaced by hs(θ, ϕ) used in the SSO setting (given by (8)).
4.1 Common random numbers
The efficiency of stochastic optimizations such as (5) can be enhanced by the reduction of the absolute and/or relative importance of the estimation error eN (ϕ, ΩN ). The absolute importance may be reduced by obtaining more accurate estimates of the objective function, i.e. by reducing the variance of the estimates. This can be established in various ways, for example by using importance sampling or by selecting a larger sample size N in (4), but these typically involve extra computational effort. It is, thus,
more efficient to seek a reduction in the relative importance of the estimation error. This means reducing the variance of the difference of the estimates Êθ,N[h(ϕ1, Ω1N)] and Êθ,N[h(ϕ2, Ω2N)] that correspond to two different design choices ϕ1 and ϕ2. This variance can be decomposed as:

var(Êθ,N[h(ϕ1, Ω1N)] − Êθ,N[h(ϕ2, Ω2N)]) = var(Êθ,N[h(ϕ1, Ω1N)]) + var(Êθ,N[h(ϕ2, Ω2N)]) − 2cov(Êθ,N[h(ϕ1, Ω1N)], Êθ,N[h(ϕ2, Ω2N)])    (26)
If Êθ,N[h(ϕ1, Ω1N)] and Êθ,N[h(ϕ2, Ω2N)] are evaluated independently, their covariance is zero; deliberately introducing dependence increases the covariance (i.e. increases their correlation) and thus decreases their variability (the variance on the left). This decrease in the variance improves the efficiency of the comparison of the two estimates; it may be considered as creating a consistent estimation error. In a simulation-based context this task is achieved by adopting common random number streams (CRN), i.e. Ω1N = Ω2N, in the simulations generating the two different estimates. Figure 7.4a shows the influence of such a selection: the curves that correspond to CRN are characterized by consistent estimation error and are smoother. Also note that the absolute influence of the estimation error for the case that corresponds to larger N (curve (iii)) is, as expected, smaller. Two important questions regarding the use of CRN are: will the variance be reduced (efficiency)? Is this the best one can do (optimality)? The answer to both these questions depends on the way the random sample θ (input) influences the sample value of the loss function h(ϕ, θ) (output) in each simulation. Optimality can be proved only in special cases but efficiency can be guaranteed under mild conditions (Glasserman & Yao 1992). Continuity and monotonicity of the output with respect to the random number input are key issues for establishing efficiency. If h(ϕ, θ) is sufficiently smooth then the two aforementioned requirements for CRN-based comparisons can be guaranteed,
Figure 7.4 Illustration of some points in CRN-based evaluation of the objective function for (a) general stochastic design problems (Eθ[h(ϕ, θ)] versus ϕ; curves: (i) analytical, (ii) sim N = 4000, (iii) CRN sim N = 4000, (iv) CRN sim N = 1000) and (b) a reliability objective problem (P(F|ϕ) versus ϕ; curves: (i) analytical, (ii) CRN sim using IF, (iii) CRN sim using Pε).
as long as the design choices compared are not too far apart in the design variable space. In such cases it is expected that use of CRN will at least be advantageous (if not optimal). If the systems compared are significantly different, i.e. correspond to ϕ that are not close, then CRN does not necessarily guarantee a consistent estimation error. This might occur if the regions of Θ that contribute most to the integral of the expected value for the two systems are drastically different and the CRN streams selected do not efficiently represent both of these regions. This feature is also indicated in Figure 7.4a; the estimation error is not consistent along the whole range of ϕ for the CRN curves (compare the objective function for curve (iv) for large and small values of ϕ) but for local comparisons it appears to be consistent.

For ROP, CRN does not necessarily have a similar effect on the calculated output if formulation (6) is adopted, since the indicator function IF(ϕ, θ) is discontinuous. Thus the aforementioned requirements for establishing efficiency of CRN cannot be guaranteed. It is therefore beneficial to use the formulation (7) for the probability of failure in CRN-based optimizations. For design problems where no prediction error in the model response is actually assumed, a small fictitious error should be chosen so that the optimization problems with and without the model prediction error are practically equivalent, i.e. correspond to the same optimum. Figure 7.4b illustrates this concept; the influence on P(F|ϕ) of the two different loss functions in Figure 7.1b and the advantage of selecting Pε(g̃(ϕ, θ)) are clearly demonstrated.
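The variance-reduction mechanism of (26) is easy to see numerically. The following sketch uses a toy smooth loss function (an assumption for illustration only, not the chapter's model) and compares the variance of the difference of two estimates computed with independent streams against the CRN case.

```python
# Toy illustration of eq. (26): CRN makes the difference of two estimates much less variable.
import numpy as np

def loss(phi, theta):
    # hypothetical smooth loss h(phi, theta); only its smoothness in theta matters here
    return (phi - theta) ** 2

rng = np.random.default_rng(0)
N, phi1, phi2, runs = 1000, 1.0, 1.1, 500
diff_indep, diff_crn = [], []
for _ in range(runs):
    th1, th2 = rng.standard_normal(N), rng.standard_normal(N)
    diff_indep.append(loss(phi1, th1).mean() - loss(phi2, th2).mean())  # independent streams
    th = rng.standard_normal(N)
    diff_crn.append(loss(phi1, th).mean() - loss(phi2, th).mean())      # common stream (CRN)
print("variance, independent streams:", np.var(diff_indep))
print("variance, common random numbers:", np.var(diff_crn))            # markedly smaller
```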
4.2 Exterior sampling approximation
Solution approaches to optimization problems using stochastic simulation are based on either interior or exterior sampling techniques (Ruszczynski & Shapiro 2003). Interior sampling methods resample ΩN at each iteration of the optimization algorithm. On the other hand, exterior sampling approximations (ESA) adopt the same stream of random numbers throughout all iterations in the optimization process, thus transforming problem (5) into a deterministic one, which can be solved by any appropriate routine. These methods are also commonly referred to as sample average approximations (Royset & Polak 2004) and they are closely related to CRN. The CRN cases in Figure 7.4 actually correspond to ESA. Several asymptotic results are available for ESA and their rate of convergence under weak assumptions. For finite sample sizes, the optimal solution depends on the sample ΩN selected. Figure 7.4a clearly demonstrates this issue (compare the optimum values in the CRN curves (iii) and (iv)). Usually ESA is implemented by selecting N "large enough'', typically much higher than it would be for interior sampling methods, in order to get better quality estimates for the objective function and thus more accurate solutions to the optimization problem. See (Ruszczynski & Shapiro 2003) for more details and (Royset & Polak 2007) for a computationally efficient iterative approach that adaptively implements higher accuracy estimates as the algorithm converges to the optimal solution. The quality of the solution obtained through ESA is commonly assessed by solving the optimization problem multiple times, for different independent random sample streams. Even though the computational cost for the ESA deterministic optimization is typically smaller than that of the original stochastic search problem, the overall efficiency may be worse because of the requirement to perform the optimization multiple times.
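As a small sketch of the ESA idea (with an assumed quadratic toy loss, not the chapter's structural model), the sample set ΩN is generated once and held fixed, which turns the sample-average objective into an ordinary deterministic function that any standard routine can minimize.

```python
# Minimal exterior sampling approximation (sample average approximation) sketch.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
Omega_N = rng.standard_normal(2000)        # fixed random number stream, reused at every iteration

def objective_esa(phi):
    # deterministic sample-average objective for the fixed Omega_N
    return np.mean((phi[0] - Omega_N) ** 2 + 0.1 * phi[0] ** 2)

result = minimize(objective_esa, x0=[2.0], method="Nelder-Mead")
print(result.x)   # optimum of the sample-average problem; it depends on the particular Omega_N drawn
```

Re-solving with a different ΩN gives a slightly different optimum, which is why the quality of an ESA solution is usually assessed over several independent streams.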
4.3 Appropriate stochastic optimization algorithms

Both gradient-based and gradient-free algorithms can be used in conjunction with CRN or ESA and can be appropriate for stochastic optimizations. Gradient-based algorithms use derivative information to iterate in the direction of steepest descent for the objective function. Only local designs are compared in each iteration, which makes the implementation of CRN efficient and allows for application of stochastic approximation, which can significantly improve the computational efficiency of stochastic search methods (Kushner & Yin 2003). The latter can be established by applying an equivalent averaging across the iterations of the algorithm instead of establishing higher accuracy estimates at each iteration. In simple examples, the loss function h(ϕ, θ) (or even the limit state function g̃(ϕ, θ) in ROP) is such that the gradient of the objective function with respect to ϕ can be obtained through a single stochastic simulation analysis (Royset & Polak 2004, Spall 2003). In many structural design problems, though, the models used are generally complex, and it is difficult, or impractical, to develop an analytical relationship between the design variables and the objective function. Finite difference numerical differentiation is often the only possibility for obtaining information about the gradient vector, but this involves a computational cost which increases linearly with the dimension of the design parameters. Simultaneous-perturbation stochastic approximation (SPSA) (Kleinmann et al. 1999, Spall 2003) is an efficient alternative search method. It is based on the observation that one properly chosen simultaneous random perturbation in all components of ϕ provides as much information for optimization purposes in the long run as a full set of one-at-a-time changes of each component. Thus, it uses only two evaluations of the objective function, in a direction randomly chosen at each iteration, to form an approximation to the gradient vector. Gradient-free optimization methods, such as evolutionary algorithms, direct search and objective function approximation methods, are based on comparisons of design choices that are distributed in large regions of the design space. They require information only for the objective function, which makes them highly appropriate for stochastic optimizations (Beck et al. 1999, Lagaros et al. 2002) because they avoid the difficulty of obtaining derivative information. They involve, though, significant computational effort if the dimension of the design variables is high. Use of CRN in these algorithms may only improve the efficiency of the comparisons in special cases; for example, if the size (volume) of the design space is "relatively small'' and thus the design variables being compared are always close to each other. More detailed discussion of algorithms for stochastic optimization can be found in (Spall 2003; Ruszczynski & Shapiro 2003). Only SPSA is briefly summarized here.
4.3.1 Simultaneous-perturbation stochastic approximation using common random numbers
The implementation of SPSA using CRN takes the iterative form:

ϕk+1 = ϕk − αk gk(ϕk, ΩkN),   ϕk+1 ∈ Φ    (27)
where ϕ1 is the chosen point to initiate the algorithm and the jth component for the CRN simultaneous perturbation approximation to the gradient vector in the kth iteration, gk(ϕ, ΩkN), is given by:

gk,j = [Êθ,N(ϕk + ck∆k, ΩkN) − Êθ,N(ϕk − ck∆k, ΩkN)] / (2ck∆k,j)    (28)
where ∆k ∈ Rnϕ is a vector of mutually independent random variables that defines the random direction of simultaneous perturbation for ϕk and that satisfies the statistical properties given in (Spall 2003). A symmetric Bernoulli ±1 distribution is typically chosen for the components of ∆k. The selection of the sequences {ck} and {αk} is discussed in detail in (Kleinmann et al. 1999). A choice that guarantees asymptotic convergence to ϕ∗ is αk = α/(k + w)^β and ck = c1/k^ζ, where 4ζ − β > 0, 2β − 2ζ > 1, with w, ζ > 0 and 0 < β < 1. This selection leads to a rate of convergence that asymptotically approaches k^(−β/2) when CRN is used (Kleinmann et al. 1999). The asymptotically optimal choice for β is, thus, 1. In applications where efficiency using a small number of iterations is sought, use of smaller values for β is suggested in (Spall 2003). For complex structural design optimizations, where the computational cost for each iteration of the algorithm is high, the latter suggestion should be adopted. Implementation of CRN contributes to reducing the variance of the gradient approximation in (28) and thus the variability in estimating ϕk; for example, the rate of convergence is k^(−β/3) when CRN is not used. Regarding the rest of the parameters for the sequences {ck} and {αk}: w is typically set to 10% of the number of iterations selected for the algorithm and the initial step c1 is chosen "close'' to the standard deviation of the measurement error eN(ΩN, ϕ1). This last selection prevents the finite difference gradient from getting excessively large in magnitude but might be inefficient if the standard deviation of the error changes dramatically with ϕ. The value of α can be determined based on the estimate of g1 and the desired step size for the first iteration. Some initial trials are generally needed in order to make a good selection for α, especially when little prior insight is available for the sensitivity of the objective function to each of the design variables. Typically SPSA is implemented adopting interior sampling techniques. Convergence of the iterative process is judged based on the value of ‖ϕk+1 − ϕk‖ in the last few steps, for an appropriately selected vector norm. Note that since the progress of the algorithm at each step depends on the sample ΩkN and the randomly chosen perturbation direction, convergence cannot be judged based on the value of |Êθ,N[h(ϕk+1, Ωk+1N)] − Êθ,N[h(ϕk, ΩkN)]| (because the two estimates are evaluated using different streams of random samples and thus include different estimation error) or on the value of ‖ϕk+1 − ϕk‖ at the last step only (because this value depends on the random search direction chosen). This notion of convergence, though, depends on the selection of the sequence {αk}; for example, selection of small step sizes might in some cases give a false impression that convergence has been established, even though this is not true. Such problems can be avoided by restarting the SPSA algorithm at the converged optimal solution and monitoring the behavior for some small number of iterations. Blocking rules can also be applied in order to avoid potential divergence of the algorithm, especially in the first iterations (Spall 2003).
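A compact sketch of an SPSA iteration with CRN, following (27)-(28), is given below. The toy objective and all tuning constants are illustrative assumptions only; they are chosen to satisfy the conditions on β and ζ quoted above (here β = 0.8, ζ = 0.25), and the projection onto Φ and blocking rules are omitted for brevity.

```python
# Minimal SPSA-with-CRN sketch, eqs. (27)-(28); toy objective, illustrative tuning constants.
import numpy as np

rng = np.random.default_rng(2)
n_phi, N, n_iter = 2, 500, 60
alpha0, c1, w, beta, zeta = 0.5, 0.1, 6, 0.8, 0.25    # satisfy 4*zeta - beta > 0 and 2*beta - 2*zeta > 1

def E_hat(phi, omega):
    # estimate of E_theta[h(phi, theta)] using the sample set omega (toy loss function)
    return np.mean((phi[0] - omega) ** 2 + (phi[1] - 0.5 * omega) ** 2)

phi = np.array([2.0, -1.0])
for k in range(1, n_iter + 1):
    a_k = alpha0 / (k + w) ** beta
    c_k = c1 / k ** zeta
    delta = rng.choice([-1.0, 1.0], size=n_phi)       # symmetric Bernoulli +-1 perturbation
    omega = rng.standard_normal(N)                    # same stream for both evaluations (CRN)
    g = (E_hat(phi + c_k * delta, omega) - E_hat(phi - c_k * delta, omega)) / (2 * c_k * delta)
    phi = phi - a_k * g                               # iteration (27); projection onto Phi omitted
print(phi)
```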
5 Framework for stochastic optimization using stochastic simulation

5.1 Outline of the framework
As already mentioned, a two-stage framework for stochastic system design may be established by combining the algorithms presented in the previous two sections. In the first stage, SSO is implemented in order to efficiently explore the sensitivity of the objective function and adaptively identify a subset ISSO ⊂ Φ containing the optimal design variables. In the second stage, any appropriate stochastic optimization algorithm is implemented in order to pinpoint the optimal solution within ISSO. The specific algorithm selected for the second stage determines the level of quality that should be established in the SSO identification. If a method is chosen that is restricted to search only within ISSO (typically characteristic of gradient-free methods), then better quality is needed: the iterations of SSO should stop at a larger set, establishing greater plausibility that the identified set includes the optimal design point. If, on the other hand, a method is selected that allows for convergence to points outside the identified set, lower quality may be acceptable in the identification. Our experience indicates that a value around 0.75–0.80 for Ĥ(Îk), with a c.o.v. of 4% for that estimate, indicates high certainty that Îk includes the optimal solution. Of course, this depends on the characteristics of the problem too, and particularly on the selection of the shape of admissible subsets, but this guideline has proved to be robust in the applications we have considered so far.

The efficiency of the stochastic optimization considered in the second stage is influenced by (a) the size of the design space Φ, defined by its volume VΦ, and, depending on the characteristics of the algorithm chosen, by (b) the initial point ϕ1 at which the algorithm is initiated, and (c) the knowledge about the local behavior of the objective function in Φ. For example, topic (b) is important for gradient-based algorithms whereas topic (c) is relevant for iterative algorithms that require user insight for selecting appropriate step sizes (like SPSA). The SSO stage gives valuable insight for all these topics and can, therefore, contribute to increasing the efficiency of convergence to the optimal solution ϕ∗. The set ISSO has smaller size (volume) than the original design space Φ. Also, it is established that the sensitivity of the objective function with respect to all components of ϕ is small within it. This allows for efficient normalization of the design space (in selecting step sizes) and choice of approximating functions. In Taflanidis & Beck (2007a), a 60% reduction of the overall computational cost for convergence to the optimal solution was reported when comparing the combined framework discussed here to SPSA optimization (without the SSO stage). In that study, the following guidelines were suggested for tuning the SPSA parameters using information from SSO: ϕ1 should be selected as the center of the set ISSO and the parameter α chosen so that the initial step for each component of ϕ is smaller than a certain fraction (chosen as 1/10) of the respective size of ISSO, based on the estimate for g1 from (28). This estimate should be averaged over ng (chosen as 6) evaluations because of its importance for the efficiency of the algorithm. Also, no movement in any direction should be allowed that is greater than a quarter of the size of the respective dimension of ISSO (blocking rule).
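These tuning guidelines translate into a few lines of bookkeeping. The sketch below only illustrates those rules (the gradient estimate g1 shown is a hypothetical placeholder, not a computed value); the ISSO bounds are the ones later reported in Table 7.3.

```python
# Sketch of the SPSA tuning rules derived from the SSO output (Section 5.1 guidelines).
import numpy as np

I_SSO = np.array([[5857., 6980.],     # per-variable bounds of the set identified by SSO (Table 7.3)
                  [4539., 6045.],
                  [4085., 5517.],
                  [1841., 2959.]])
lengths = I_SSO[:, 1] - I_SSO[:, 0]
phi_1 = I_SSO.mean(axis=1)            # starting point: center of I_SSO

g1_hat = np.array([0.8, -0.4, 0.3, -0.2])        # placeholder for the averaged estimate of g1 from (28)
alpha = np.min(0.1 * lengths / np.abs(g1_hat))   # first step <= 1/10 of each I_SSO dimension
max_move = 0.25 * lengths                        # blocking rule: no move beyond 1/4 of a dimension
step = np.clip(-alpha * g1_hat, -max_move, max_move)
print(phi_1, phi_1 + step)
```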
The information from the SSO stage can also be exploited in order to reduce the variance of the estimate Eθ[h(ϕ, ΩN)] by using importance sampling. This choice is discussed next.
5.2 Importance sampling
Importance sampling (IS) is an efficient variance reduction technique. It is based on choosing an importance sampling density pis(θ|ϕ) to generate samples in regions of Θ that contribute more to the integral of Eθ[h(ϕ, θ)]. The estimate for Eθ[h(ϕ, θ)] is given in this case by:

Êθ,N[h(ϕ, ΩN)] = (1/N) Σ_{i=1}^{N} h(ϕ, θi) R(θi|ϕ)    (29)

where the samples θi are simulated according to pis(θ|ϕ) and

R(θi|ϕ) = p(θi|ϕ)/pis(θi|ϕ)    (30)

is the importance sampling quotient. The main problem is how to choose a good IS density. The optimal density is simply the PDF that is proportional to the absolute value of the integrand of (1), |h(ϕ, θ)|p(θ|ϕ) (Robert & Casella 2004), leading to the selection:

pis,opt(θ|ϕ) = |h(ϕ, θ)| p(θ|ϕ) / Eθ[|h(ϕ, θ)|]    (31)
Samples for θ that are distributed proportional to hs (ϕ, θ)p(θ|ϕ) when ϕ ∈ ISSO are available from the last iteration of the SSO stage. They simply correspond to the θ component of the available sample pairs [ϕ, θ]. Re-sampling can be performed within these samples, using weighting factors |h(ϕi , θi )|/hs (ϕi , θi ) for each sample, in order to approximately simulate samples proportional to |h(ϕ, θ)|p(θ|ϕ) when ϕ ∈ ISSO . The efficiency of this re-sampling procedure depends on how different hs (ϕi , θi ) and h(ϕi , θi ) are. In most cases the difference will not be significant and good efficiency can be established. Alternatively, hs (ϕi , θi ) can be used as loss function in the second stage of the optimization. In this case there is no need to modify the samples from SSO. This choice would be inappropriate if s was negative because it makes the loss function less sensitive to the uncertain parameters θ, thus possibly reducing the efficiency of IS. In such design problems it is better to use the original loss function h(ϕ, θ). The samples simulated proportional to |h(ϕ, θ)|p(θ|ϕ) can be finally used to create an importance sampling density pis (θ|ϕ) to use in (30), since the set ISSO is small. Various strategies have been discussed in the literature for such an adaptive importance sampling (see for example (Au & Beck 1999)). For problems with high-dimensional vector θ, the efficiency of IS can be guaranteed only under strict conditions (Au & Beck (2003a)). An alternative approach can be applied for such cases: the uncertain parameter vector is partitioned into two sets, Θ1 and Θ2 . Θ1 is comprised of parameters that individually do not significantly influence the loss function (they have significant influence only when viewed as a group), for
example, the white noise sequence modeling the stochastic excitation in dynamic reliability problems, while Θ2 is comprised of parameters that have individually a strong influence on h(ϕ, θ). The latter set typically corresponds to a low-dimensional vector. IS is applied for the elements of Θ2 only. This approach is similar to the one discussed in (Pradlwater et al. 2007) and circumvents the problems that may appear when applying IS to design problems involving a large number of uncertain parameters.
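A minimal sketch of the importance sampling estimate (29)-(30) with the partitioning just described is shown below; the loss function and the densities are toy assumptions, with θ1 standing in for the high-dimensional group (e.g. the white-noise sequence) that is left at its original distribution and θ2 for the low-dimensional group for which an IS density is built.

```python
# Importance sampling estimate (29)-(30), applied only to the low-dimensional group Theta_2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
N = 5000

def h(phi, theta1, theta2):
    # hypothetical loss; theta1 plays the role of the high-dimensional "white noise" group
    return np.maximum(theta2 + 0.1 * theta1.sum(axis=1) - phi, 0.0)

phi = 2.0
theta1 = rng.standard_normal((N, 50))            # Theta_1: sampled from its original PDF
p2 = stats.norm(0.0, 1.0)                        # original PDF of Theta_2
pis2 = stats.norm(1.5, 1.0)                      # assumed IS density for Theta_2
theta2 = pis2.rvs(size=N, random_state=rng)
R = p2.pdf(theta2) / pis2.pdf(theta2)            # importance sampling quotient, eq. (30)
estimate = np.mean(h(phi, theta1, theta2) * R)   # eq. (29)
print(estimate)
```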
6 Illustrative example: optimization of the life-cycle cost of an office building

The retrofitting of a symmetric, four-story office building with linear viscous dampers is considered. The building is a non-ductile reinforced concrete, perimeter moment-frame structure. The plan dimensions of the building are 45 m × 45 m and the height of each story is 3.9 m. The perimeter frames in the two building directions are separated from each other, which allows structural analysis in each direction to be done separately. Because of the symmetry of the building, analysis of only one of the directions is necessary.

6.1 Probabilistic structural model
A class of shear-frame models (illustrated in Figure 7.5) with hysteretic behavior and deteriorating stiffness and strength is assumed (using a distributed element model assumption for the deteriorating part (Iwan & Cifuentes 1986)). The lumped mass of the top story is 935 ton while it is 1215 ton for the bottom three. The initial inter-story stiffnesses ki of all the stories are parameterized by ki = k̂i θk,i, i = 1, . . . , 4, where [k̂i] = [700.0, 616.1, 463.6, 281.8] MN/m are the most probable values and θk,i are nondimensional uncertain parameters, assumed to be correlated Gaussian variables with mean value one and covariance matrix with elements Σij = (0.1)² exp[−(i − j)²/2²]. For each story, the post-yield stiffness coefficient αi, stiffness deterioration coefficient βi, over-strength factor γi, yield displacement δy,i and
Figure 7.5 Structural model assumed in the study: retrofitting scheme (four-story shear frame with lumped masses mi and viscous dampers) and illustration of the deteriorating stiffness and strength characteristics of the i-th story restoring force.
displacement coefficient ηi have mean values 0.1, 0.2, 0.3, 0.22% of story height and 2, respectively (see Figure 7.5 for proper definitions of some of these parameters). All these parameters are treated as independent Gaussian variables with c.o.v. 10%. The structure is assumed to be modally damped. The damping ratios for all modes are treated similarly as Gaussian variables with mean values 5% and coefficients of variation 10%.
6.2 Probabilistic site seismic hazard and ground motion model
In order to estimate the earthquake losses, probability models are established for the seismic hazard at the structural site and for the ground motion, as in (Au & Beck 2003b). Seismic events are assumed to occur following a Poisson distribution and so are independent of previous occurrences. The uncertainty in moment magnitude M is modeled by the Gutenberg-Richter relationship (Kramer 2003) truncated on the interval [Mmin, Mmax] = [5.5, 8], leading to the PDF:

p(M) = b exp(−b·M) / [exp(−b·Mmin) − exp(−b·Mmax)]    (32)
and expected number of events per year:

v = exp(a − b·Mmin) − exp(a − b·Mmax)    (33)
The regional seismicity factors are selected as b = 0.9 loge (10) and a = 4.35 loge (10), leading to v = 0.25. For the uncertainty in the event location, the epicentral distance, r, for the earthquake events is assumed to follow a lognormal distribution with median 20 km and logarithmic standard deviation 0.4. Figure 7.7a illustrates the PDFs for M and r. For modeling the ground motion, the methodology described in Boore (2003) is adopted (also characterized as the “stochastic method’’). This methodology, which was initially developed for generating synthetic ground motions, is reinterpreted here to form a probabilistic model for the earthquake excitation. According to this model, the time-history (output) for a specific event magnitude, M, and source distance, r, is obtained by modulating a white-noise sequence Zw (input) through the following steps: (i) the sequence Zw is multiplied by an envelope function e(t; M, r); (ii) this modified sequence is then transformed to the frequency domain; (iii) it is normalized by the square root of the mean square of the amplitude spectrum; (iv) the normalized sequence is multiplied by a radiation spectrum A(f ; M, r); and finally (v) it is transformed back to the time domain to yield the desired acceleration time history. The characteristics for A(f ; M, r) and e(t; M, r) are presented in Appendix B. Figure 7.6a shows functions A(f ; M, r) and e(t; M, r) for different values of M and r = 15 km. It can be seen that as the moment magnitude increases, the duration of the envelope function also increases and the spectral amplitude becomes larger at all frequencies with a shift of dominant frequency content towards the lower-frequency regime. Figure 7.6b shows a sample ground motion for M = 6.7 and r = 15 km.
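For illustration, the seismic hazard model just described can be sampled directly; the sketch below (not the chapter's code) draws M from the truncated Gutenberg-Richter PDF (32) by inverting its CDF, draws r from the stated lognormal distribution, and evaluates the event rate (33).

```python
# Sampling the seismicity model of Section 6.2: M from (32) by inverse CDF, r lognormal.
import numpy as np

rng = np.random.default_rng(4)
b = 0.9 * np.log(10.0)
a = 4.35 * np.log(10.0)
M_min, M_max = 5.5, 8.0
v = np.exp(a - b * M_min) - np.exp(a - b * M_max)     # expected events per year, eq. (33), about 0.25

u = rng.uniform(size=10000)
e_min, e_max = np.exp(-b * M_min), np.exp(-b * M_max)
M = -np.log(e_min - u * (e_min - e_max)) / b          # inverse CDF of the truncated PDF (32)

r = rng.lognormal(mean=np.log(20.0), sigma=0.4, size=10000)   # median 20 km, log-std 0.4
print(round(float(v), 3), float(M.mean()), float(np.median(r)))
```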
Figure 7.6 (a) Radiation spectrum A(f; M, r) and envelope function e(t; M, r) for M = 6, 6.7, 7.5 and r = 15 km and (b) sample ground motion for M = 6.7, r = 15 km.
6.3 Expected life-cycle cost

The objective function in the stochastic design problem is the expected life-cycle cost of the structure for a life-time of tlife = 60 years after the retrofit. This cost, C(ϕ), as a function of the design variables is given by (Porter et al. 2004):

C(ϕ) = Cd(ϕ) + [(1 − e^(−rd·tlife))/(rd·tlife)] ∫Θ L(ϕ, θ) v tlife p(θ) dθ    (34)
where Cd(ϕ) is the cost of the viscous dampers, rd equals the discount rate (taken here as 2.5%) and L(ϕ, θ) is the expected cost given the earthquake event and the system specified by the pair [ϕ, θ]. The uncertain parameter vector in this design problem consists of the structural model parameters, θs, the seismological parameters θg = [M, r] and the white noise sequence, Zw, so θ = [θs, θg, Zw]. The term in the brackets in (34) is the present worth factor, which is used in order to calculate the present value of the expected future earthquake losses (Porter et al. 2004). The earthquake damage and loss are calculated assuming that after each event the building is quickly restored to its undamaged state. The cost of the dampers at each floor is estimated based on their maximum force capacity Fud,i as Cd,i(ϕ) = $80 (Fud,i)^0.8. This simplified relationship comes from fitting a curve to the cost of some commercially-available dampers. The viscosity of the dampers is selected assuming that the maximum force capacity is established at a velocity of 0.2 m/sec. The earthquake losses are estimated adopting the methodology described in (Goulet et al. 2007, Porter et al. 2004). The components of the structure are grouped into nas damageable assemblies. For each assembly j, nd,j different damage states are designated and a fragility function is established for each damage state dk,j. These functions quantify the probability that the component has reached or exceeded that damage state conditional on some engineering demand parameter (EDPj). Damage state 0 always corresponds to an undamaged condition. Each fragility function is a conditional cumulative log-normal distribution with median xm and logarithmic standard deviation bm, as
Table 7.2 Characteristics of fragility functions and expected repair costs for each story.

Structural Components
  dk,j               xm                 bm     nel       $/nel
  1 (light)          1.4 δy,i           0.2    22        2000
  2 (moderate)       (δy,i + δp,i)/2    0.35   22        9625
  3 (significant)    δp,i               0.4    22        18200
  4 (severe)         δu,i               0.4    22        21600
  5 (collapse)       3%                 0.5    22        34300

Contents
  1 (damage)         0.6g               0.3    100       3000

Partitions
  1 (patch)          0.33%              0.2    500       180
  2 (replace)        0.7%               0.25   500       800

Acoustical Ceiling
  1 (damage)         1g                 0.7    103 m2    25

Paint
  1 (damage)         0.33%              0.2    3500 m2   25
presented in Table 7.2. Indirect losses because of (a) fatalities and (b) building downtime, i.e. loss of revenue while the building is being repaired, are ignored in this study. The expected losses in the event of the earthquake are given by:

L(ϕ, θ) = Σ_{j=1}^{nas} Σ_{k=1}^{nd,j} P[dk,j | ϕ, θ] Ck,j    (35)
where P[dk,j|ϕ, θ] is the probability that the assembly j will be in its kth damage state and Ck,j is the corresponding expected repair cost. Table 7.2 summarizes the characteristics for the fragility functions (xm, bm) and the expected cost $/nel. The nel in this table corresponds to the number of elements that belong to each damageable assembly in each direction of each floor. For the structural contents and the acoustical ceiling, the maximum story absolute acceleration is used as EDP and for all other assemblies the maximum inter-story drift ratio is used. For estimating the total wall area requiring a fresh coat of paint, the simplified formula developed in (Goulet et al. 2007) is adopted. According to this formula a fraction of the undamaged wall area is also repainted, considering the desire of the owner to achieve a uniform appearance. This fraction depends on the extent of the damaged area and is chosen here based on a lognormal distribution with median 0.25 and logarithmic standard deviation 0.5.
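The loss aggregation in (35) reduces, for each assembly, to evaluating the lognormal fragility curves at the computed EDP and weighting the repair costs of Table 7.2. The sketch below illustrates this for the partition assembly of one story; the EDP value used is an arbitrary assumption for illustration, not a simulation result.

```python
# One term of the double sum in (35): expected repair cost of the partitions of one story.
import numpy as np
from scipy import stats

def damage_state_probabilities(edp, x_m, b_m):
    # P[reach or exceed state k | EDP] from lognormal fragilities; differencing then gives
    # P[being in state k] (states ordered by increasing median x_m)
    p_exceed = stats.norm.cdf(np.log(edp / np.asarray(x_m)) / np.asarray(b_m))
    p_exceed = np.append(p_exceed, 0.0)
    return np.maximum(p_exceed[:-1] - p_exceed[1:], 0.0)

x_m, b_m = [0.0033, 0.007], [0.2, 0.25]                 # Table 7.2 medians (drift ratios) and b_m
cost_per_element, n_el = np.array([180.0, 800.0]), 500  # Table 7.2 repair costs and element count
edp = 0.005                                             # assumed peak inter-story drift ratio
P_dk = damage_state_probabilities(edp, x_m, b_m)
expected_loss = n_el * np.sum(P_dk * cost_per_element)
print(P_dk, round(float(expected_loss), 1))
```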
6.4 Optimal damper design
The maximum force capacities of the dampers in each floor are the four design variables ϕ = [Fud,i : i = 1, . . . , 4]. The initial design space for each variable is set to [0, 13000] kN for Fud,1 and Fud,2 and [0, 8000] kN for Fud,3 and Fud,4. Results for a sample run of the optimization algorithm are presented in Table 7.3. For the SSO stage only the sets I3 and ISSO are reported here. Also, lIi denotes the length of set I in the direction of the ith design variable.

6.4.1 Stochastic subset optimization

The objective function (34) can be written as:

C(ϕ) = Eθ[hs(ϕ, θ)] = ∫Θ { Cd(ϕ) + [(1 − e^(−rd·tlife))/(rd·tlife)] L(ϕ, θ) v tlife } p(θ) dθ    (36)
Table 7.3 Results from the optimization (sample run).

  ϕ        I3 (kN)         ISSO (kN)       ϕSSO (kN)   ϕ∗ (kN)   liSSO/liΦ
  Fud,1    [3610, 7657]    [5857, 6980]    6418        6420      0.094
  Fud,2    [3557, 7756]    [4539, 6045]    5292        5195      0.126
  Fud,3    [4034, 7095]    [4085, 5517]    4801        4481      0.179
  Fud,4    [1566, 4751]    [1841, 2959]    2400        2060      0.139

  Êθ[h(ϕSSO, θ)] = 0.438 × 10^6 $     Êθ[h(ϕ∗, θ)] = 0.430 × 10^6 $     (VISSO/VΦ)^(1/nϕ) = 0.131

Thus, the loss function used in the SSO stage of the optimization is:

hs(ϕ, θ) = Cd(ϕ) + [(1 − e^(−rd·tlife))/(rd·tlife)] L(ϕ, θ) v tlife    (37)
The parameter selections for SSO are: ρ = 0.2, N = 2000, s = 0. The shape for the sets I is selected as a hyper-rectangle and the adaptive identification is stopped when Ĥ(Îk) becomes larger than 0.80. The optimization in (24) is performed using a genetic algorithm. In total, 6 iterations of the SSO algorithm are performed. After 3 iterations the loss function hs(ϕ, θ) is reformulated by choosing s = $200,000. Algorithm 1 (Appendix A) is used for sampling in the 1st and 4th iterations and Algorithm 2 in all others. For the MCMC simulation (Algorithm 2) a global proposal PDF equal to p(Zw) is chosen for the white noise sequence, to avoid the problems with the high-dimensionality of the uncertain parameter vector, and local random walk proposal PDFs are used for all other parameters. A uniform PDF centered at the current sample, with wide spread (covering 0.7 of the current subset Îk−1 at each iteration k), is chosen for the proposal PDF for ϕ. This is a proposal PDF that is easy to sample from and still approximates the form of π(ϕ), which is expected to look like a convex function with small sensitivity as the identification converges to a set near the optimal design variables. A global uniform proposal PDF could also be chosen for ϕ, as regions with small sensitivity are approached. Such a global proposal PDF avoids rejecting samples due to their ϕ component, in the candidate sampling step, falling outside the given subset Îk−1 at iteration k, which can occur with a local uniform PDF and which increases the correlation in the generated Markov Chain. For the rest of the uncertain parameters, θs and θg, independent conditional Gaussian distributions are chosen, centered at the current sample with standard deviation equal to ½ the standard deviation of the samples retained from the previous step. Ultimately, the efficiency of the MCMC simulation (Algorithm 2) depends a lot on the quality of the selected proposal PDFs. In cases where such PDFs cannot be easily chosen, MC simulation can be used. The results in Table 7.3 show that SSO efficiently identifies a subset for the optimal design variables and leads to a significant reduction of the size (volume) of the search space (see the last two columns of Table 7.3). The converged optimal solution in the second stage, ϕ∗, is close to the center ϕSSO of the set that is identified by SSO; also the objective function at that center point Eθ[h(ϕSSO, θ)] is not significantly different from the optimal value Eθ[h(ϕ∗, θ)]. Thus, selection of ϕSSO as the design choice leads to a sub-optimal design that is, though, close to the true optimum in terms of both the design vector selection and its corresponding performance. This agrees with the
findings of all of our other studies and indicates that the sole use of SSO might be adequate for many problems (see Taflanidis & Beck 2007a for a more thorough comparison and discussion).
6.4.2 Simultaneous-perturbation stochastic approximation with common random numbers

For the second stage of the optimization framework the formulation of the objective function in (34) is adopted. Stochastic simulation is used in order to estimate only the second part, since the cost of the dampers can be deterministically evaluated, so:

h(ϕ, θ) = [(1 − e^(−rd·tlife))/(rd·tlife)] L(ϕ, θ) v tlife    (38)
Following the discussion in Section 5.2, importance sampling densities are established for the structural model parameters and the seismological parameters, M and r, but not for the high-dimensional white-noise sequence. Figure 7.7b illustrates this concept for M and r. A truncated lognormal distribution is selected for the IS PDF for M (with median 7 and logarithmic standard deviation 0.1) and a lognormal for r (with median 15 and logarithmic standard deviation 0.4). Note that the IS PDF for M is significantly different from its initial distribution; since M is expected to have a strong influence on h(ϕ, θ), the difference between the distributions is expected to have a big effect on the accuracy of the estimation. The respective difference between the PDFs for r is much smaller. For the structural model parameters this difference is negligible, and the IS PDFs were approximated to be Normal distributions, like p(θs), with a slightly shifted mean value but the same variance. The c.o.v. for Êθ,N[h(ϕ, ΩN)] for a sample
Figure 7.7 Details about importance sampling densities formulation for M and r: (a) initial PDFs p(M), p(r) and samples and (b) samples from the SSO stage and IS PDFs pis(M), pis(r).
size N = 1000 is 16% without using IS and 4% when IS is used. This c.o.v. is of the same level for all values of ϕ ∈ ISSO, since the ISSO set is relatively small. Note that the c.o.v. varies according to 1/√N (Robert & Casella 2004); thus, the sample size for direct estimation (i.e. without use of IS) of the objective function with the same level of accuracy as in the case when IS is applied would be approximately 16 times larger. This illustrates the efficiency increase that can be established by the IS scheme discussed earlier. The converged optimal solution in the second stage is included in Table 7.3. Forty iterations were needed in the second stage of the framework using a sample size of N = 1000. This computational cost can be characterized as small. Convergence is judged by looking at the norm ‖ϕk+1 − ϕk‖∞ for each of the 5 last iterations. If that norm is less than 0.2% (normalized by the dimension of the initial design space), then we assume that convergence to the optimal solution has been established.

6.4.3 Efficiency of the two-stage optimization framework

To evaluate the efficiency of the optimization framework, the same optimization was performed without the use of SSO in the first stage. In this case the starting point for SPSA was selected as the center of the design space Φ and α was chosen so that in the first iteration the movement for any design variable is not larger than 5% of the respective dimension of the design space. In this case IS is not implemented; since search inside the whole design space Φ is considered, it is unclear how samples of θ can be obtained to form the IS densities, and separately establishing an IS density for each design choice ϕ is too computationally expensive. The larger variability of the estimates caused the gradient-based algorithm to diverge in the first couple of iterations. Thus a larger value for the sample size, N = 3000, was used. The required number of iterations for convergence of the algorithm in a sample run (and the total number of system simulations) was 102 (612,000). When the combined framework was used the corresponding numbers were 40 (80,000). This comparison illustrates the efficiency of the proposed two-stage optimization framework. The better starting point of the algorithm and the smaller size of the search space (which allows for better normalization) that the SSO subset identification provides are the features that contribute to this improvement in efficiency.
6.5 Efficiency of seismic protection system
The expected lifetime cost for the structure in each direction without the dampers is estimated as $1.1 million. The expected lifetime cost of the retrofitted system is $430,000, so the addition of the viscous dampers significantly improves the performance of the structural system. Of this amount, $267,000 corresponds to the cost for the installation of the viscous dampers and $163,000 to the present worth of the expected repair cost for damage from future earthquakes. Figure 7.8 shows the decomposition of the expected lifetime repair cost into its different components for both the initial structure and the retrofitted structure. Only minor changes occur in the distribution of the total cost over its different components. Note that the relative importance of the repair cost for acceleration-sensitive assemblies increases by the addition of the dampers, as expected, but still the importance of this cost remains small, practically negligible.
Figure 7.8 Breakdown of expected lifetime repair costs for (a) the initial structure (no dampers): structural 36%, paint 29%, partitions 32%, contents 2%, ceiling 1%; and (b) the retrofitted structure (with dampers): structural 33%, paint 33%, partitions 30%, contents 3%, ceiling 1%.
7 Conclusions

The robust-to-uncertainties design of engineering systems is of great importance. In this study, we discussed stochastic optimization problems that entail as objective function the expected value of a general system performance measure. We focused on problems that involve complex models and high-dimensional uncertain parameter vectors. Stochastic simulation was considered for evaluation of the system performance. This simulation-based approach allows explicit consideration of (a) nonlinearities in the models assumed for the system and its future excitation and (b) complex failure modes. The only constraint on the complexity of the system description stems from the accessible computational power, since a large number of simulations of the system response is needed. The constant advances in computer technology (hardware and software related) are continuously reducing the significance of this constraint.

A two-stage framework for the associated optimization problem was discussed. The first stage implements an innovative approach, called Stochastic Subset Optimization (SSO), for efficiently exploring the sensitivity of the objective function to the design variables and adaptively identifying sub-regions within the original design space that (a) have high likelihood of including the optimal design variables and (b) are characterized by small sensitivity with respect to each design variable. SSO is combined in the second stage with some other stochastic optimization algorithm for overall enhanced efficiency and accuracy of the optimization process. Simultaneous perturbation stochastic approximation was considered for this purpose in this study and suggestions for enhanced efficiency of the overall framework were given. With respect to SSO, guidelines for establishing good quality in the identification and stopping criteria for the iterative process were suggested. Topics related to the use of common random numbers for the second stage of the optimization framework were extensively discussed. Implementation of importance sampling for this stage was also considered by using the information available in the last iteration of SSO. In all discussions, special attention was given to optimization problems that involve the reliability of a system as the objective function.
An example was presented that shows the efficiency of the proposed methodology and illustrates a systematic way to design structural systems under stochastic earthquake excitation considering all important probabilistic information. The example considered the retrofitting of a four-story non-ductile reinforced concrete office building with viscous dampers. The minimization of the expected lifetime cost was adopted as the design objective. A realistic probabilistic model was presented for representing future ground motions. An efficient and accurate methodology for estimating the damages caused by earthquake events was adopted. The structural performance was evaluated by nonlinear simulation that incorporates all important characteristics of the behavior of the structural system and all available information about the structural model and the expected future earthquakes. In this example, SSO was shown to efficiently identify a set that contains the optimal design variables and to improve the efficiency of SPSA when combined in the context of the suggested optimization framework. The center of the set identified by SSO was found to be close to the true optimal values in terms of both the design variables and the corresponding performance. Thus, use of SSO solely would lead to a sub-optimal design that is close, though, to the optimal one. For better resolution and accuracy the combined two-stage framework should be preferred.
Appendix A

Two algorithms that can be used for simulating samples from π(ϕ, θ) are discussed here.

Algorithm 1: Accept-reject method, which can be considered a direct Monte Carlo approach. First, choose an appropriate proposal PDF f(ϕ, θ) and then generate a sequence of independent samples as follows:

(1) Randomly simulate a candidate sample [ϕc, θc] from f(ϕ, θ) and u from uniform (0,1).
(2) Accept [ϕ, θ] = [ϕc, θc] if

    hs(ϕc, θc) p(ϕc, θc) / [M f(ϕc, θc)] > u,  where  M > max_{ϕ,θ} [hs(ϕ, θ) p(ϕ, θ)/f(ϕ, θ)]    (39)

(3) Return to (1) otherwise.
Algorithm 2: Metropolis-Hastings algorithm, which belongs to Markov Chain Monte Carlo (MCMC) methods and is expressed through the iterative form:

(1) Randomly simulate a candidate sample [ϕ̃k+1, θ̃k+1] from a proposal PDF q(ϕ̃k+1, θ̃k+1 | ϕk, θk).
(2) Compute the acceptance ratio:

    rk+1 = [hs(ϕ̃k+1, θ̃k+1) p(ϕ̃k+1, θ̃k+1) q(ϕk, θk | ϕ̃k+1, θ̃k+1)] / [hs(ϕk, θk) p(ϕk, θk) q(ϕ̃k+1, θ̃k+1 | ϕk, θk)]    (40)
(3) Simulate u from uniform (0,1) and set

    [ϕk+1, θk+1] = [ϕ̃k+1, θ̃k+1] if rk+1 ≥ u, and [ϕk, θk] otherwise    (41)
In this case the samples are correlated (the next sample depends on the previous one) but follow the target distribution after a burn-in period, i.e. after the Markov chain reaches stationarity. The algorithm is particularly efficient when samples that follow the target distribution are already available, since then no burn-in period is needed. Assume, in this setting, that there are Na samples [ϕ, θ] and a total N > Na are desired. Starting from each of the Na original samples, [N/Na] samples are generated by the above process. Since the initial samples are distributed according to π(ϕ, θ), the Markov Chain generated in this way is always in its stationary state and all samples simulated follow the target distribution. Note that knowledge of the normalizing constant in the denominator of π(ϕ, θ) is not needed for either of the two algorithms. The efficiency of both these sampling algorithms depends on the proposal PDFs f(ϕ, θ) and q(ϕ, θ). These PDFs should be chosen to closely resemble hs(ϕ, θ)p(ϕ, θ) and still be easy to sample from. If the first feature is established then most samples are accepted and the efficiency of the algorithm is high. For Metropolis-Hastings the proposal PDFs can either be global (independent), i.e. q(·) = q(ϕ̃k+1, θ̃k+1), or establish a local random walk, i.e. q = q(ϕ̃k+1, θ̃k+1|ϕk, θk). In the latter case, the spread of the proposal PDFs is particularly important because it affects the size of the region covered by the Markov Chain samples. Excessively large spread may reduce the acceptance rate, increasing the number of repeated samples and thus slowing down convergence and creating correlation between samples. Small spread does not allow for efficient investigation of the whole region of the uncertain parameters and creates correlation between samples because of their proximity. If the dimension of the uncertain parameter vector is high, a typical characteristic of dynamic problems where the excitation is modeled using a white-noise sequence, the efficiency of the MCMC simulation process might be reduced (Au & Beck 2001) because high correlation might exist between the current and the next chain state. For ROP the modified Metropolis-Hastings algorithm, discussed in detail in (Au & Beck 2003b), can be used in these cases (assuming that the loss function is described by the indicator function of failure). For general stochastic design problems, a global PDF should be chosen for parameters that individually do not significantly influence the objective function, but have significant influence only when viewed as a group. The white-noise sequence in dynamic problems typically belongs to this category.
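For concreteness, the two algorithms can be written down in a few lines. The sketch below uses a toy one-dimensional ϕ and θ with assumed densities (it is not the chapter's implementation); with the proposal f = p and the bound M = max hs = 1, the accept-reject condition (39) reduces to comparing hs with a uniform variate, and a symmetric random-walk proposal makes the q terms in (40) cancel.

```python
# Minimal sketch of Appendix A: accept-reject and Metropolis-Hastings sampling from
# pi(phi, theta) proportional to h_s(phi, theta) * p(phi, theta), for toy 1-D phi and theta.
import numpy as np

rng = np.random.default_rng(5)

def h_s(phi, theta):               # illustrative non-negative loss, bounded by 1
    return np.exp(-0.5 * (phi - theta) ** 2)

def p(phi, theta):                 # illustrative joint PDF: uniform phi on [0, 1], standard normal theta
    inside = 1.0 if 0.0 <= phi <= 1.0 else 0.0
    return inside * np.exp(-0.5 * theta ** 2) / np.sqrt(2.0 * np.pi)

def accept_reject():               # Algorithm 1 with proposal f = p and M = max h_s = 1
    while True:
        phi_c, theta_c = rng.uniform(), rng.standard_normal()
        if h_s(phi_c, theta_c) > rng.uniform():     # condition (39) with f = p and M = 1
            return phi_c, theta_c

def metropolis_hastings(n_steps, start):            # Algorithm 2 with a symmetric random-walk proposal
    phi, theta = start
    chain = []
    for _ in range(n_steps):
        phi_c, theta_c = phi + 0.2 * rng.standard_normal(), theta + 0.2 * rng.standard_normal()
        r = h_s(phi_c, theta_c) * p(phi_c, theta_c) / (h_s(phi, theta) * p(phi, theta))
        if r >= rng.uniform():                      # acceptance ratio (40); q terms cancel (symmetric q)
            phi, theta = phi_c, theta_c
        chain.append((phi, theta))
    return chain

seed_sample = accept_reject()       # a sample already following pi, so no burn-in is needed
print(seed_sample, metropolis_hastings(200, seed_sample)[-1])
```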
Appendix B

According to the stochastic method (Boore 2003), the total amplitude spectrum A(f; M, r) for the acceleration time history is expressed as a product of the source, path and site contributions:

A(f; M, r) = (2πf)² S(f; M) · (1/R) · exp[−πfR/(Q(f)βs)] · Am · exp(−π·ko·f) / [1 + (f/fmax)^8]^(1/2)    (42)
Here S(f; M) is the "equivalent two-corner point-source spectrum'' given by (Atkinson & Silva 2000):

S(f; M) = C·Mw·[ (1 − e)/(1 + (f/fa)²) + e/(1 + (f/fb)²) ]    (43)
where Mw is the seismic moment (in dyn-cm), which is connected to the moment magnitude, M, by the relationship log10 Mw = 1.5(M + 10.7), and the constant C is given by C = RΦ·V·F/(4π·Ro·ρs·βs); RΦ = 0.55 is the average radiation pattern, V = 1/√2 represents the partition of total shear-wave velocity into horizontal components, F = 2 is the free surface amplification, ρs = 2.8 g/cm³ and βs = 3.5 km/s are the density and shear-wave velocity in the vicinity of the source, and Ro = 1 is a reference distance. The frequencies fa and fb in (43) are given by log10 fa = 2.181 − 0.496 M and log10 fb = 2.41 − 0.408 M, respectively, and e is a weighting parameter described by the expression log10 e = 0.605 − 0.255 M. For the rest of the parameters in (42), the term 1/R is the geometric spread factor, where R = [h² + r²]^(1/2) is the radial distance from the earthquake source to the site, with log10 h = 0.15M − 0.05 representing a moment-dependent, nominal "pseudo-depth''. The term exp[−πfR/(Q(f)βs)] accounts for elastic attenuation through the earth's crust with Q(f) = 180 f^0.45 a regional quality factor. The quotient factor in (42) is related to near-surface attenuation with parameters fmax = 10 and ko = 0.03. Finally, Am is a near-surface amplification factor which is described through the empirical curves for generic rock sites given by (Boore & Joyner 1997). An alternative approach suggested by (Au & Beck 2003b) (instead of using the empirical curves) would be to set Am to an average constant value equal to 2.

The envelope function for the earthquake excitation is represented by (Boore 2003):

e(t; M, R) = a (t/tn)^b exp(−c (t/tn))    (44)
where b = −λ ln(η)/[1 + λ(ln(λ) − 1)], c = b/λ, a = [exp(1)/λ]^b and tn = 0.1R + 1/fa, with λ = 0.2, η = 0.05.
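The ground motion model of Appendix B is straightforward to evaluate numerically. The sketch below computes the radiation spectrum (42)-(43) and the envelope (44) exactly as the formulas are quoted above (with Am set to the constant value 2 mentioned as the alternative to the empirical amplification curves); it is an illustration, not the authors' code.

```python
# Evaluation of the quoted formulas (42)-(44) of Appendix B; A_m is taken constant (= 2).
import numpy as np

def radiation_spectrum(f, M, r):
    Mw = 10.0 ** (1.5 * (M + 10.7))                      # seismic moment (dyn-cm)
    rho_s, beta_s, R0 = 2.8, 3.5, 1.0
    C = 0.55 * (1.0 / np.sqrt(2.0)) * 2.0 / (4.0 * np.pi * R0 * rho_s * beta_s)
    fa = 10.0 ** (2.181 - 0.496 * M)
    fb = 10.0 ** (2.41 - 0.408 * M)
    e = 10.0 ** (0.605 - 0.255 * M)
    S = C * Mw * ((1.0 - e) / (1.0 + (f / fa) ** 2) + e / (1.0 + (f / fb) ** 2))   # eq. (43)
    h = 10.0 ** (0.15 * M - 0.05)                        # pseudo-depth
    R = np.sqrt(h ** 2 + r ** 2)
    Q = 180.0 * f ** 0.45
    path = np.exp(-np.pi * f * R / (Q * beta_s)) / R     # geometric spreading and elastic attenuation
    site = 2.0 * np.exp(-np.pi * 0.03 * f) / np.sqrt(1.0 + (f / 10.0) ** 8)        # A_m = 2
    return (2.0 * np.pi * f) ** 2 * S * path * site      # eq. (42)

def envelope(t, M, r):
    lam, eta = 0.2, 0.05
    fa = 10.0 ** (2.181 - 0.496 * M)
    R = np.sqrt((10.0 ** (0.15 * M - 0.05)) ** 2 + r ** 2)
    tn = 0.1 * R + 1.0 / fa
    b = -lam * np.log(eta) / (1.0 + lam * (np.log(lam) - 1.0))
    c, a = b / lam, (np.exp(1.0) / lam) ** b
    return a * (t / tn) ** b * np.exp(-c * (t / tn))     # eq. (44)

f = np.logspace(-1, 1.5, 200)
print(float(radiation_spectrum(f, 6.7, 15.0).max()), float(envelope(5.0, 6.7, 15.0)))
```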
References

Ang, H.-S.A. & Lee, J.-C. 2001. Cost optimal design of R/C buildings. Reliability Engineering and System Safety 73:233–238.
Atkinson, G.M. & Silva, W. 2000. Stochastic modeling of California ground motions. Bulletin of the Seismological Society of America 90(2):255–274.
Au, S.K. 2005. Reliability-based design sensitivity by efficient simulation. Computers and Structures 83:1048–1061.
Au, S.K. & Beck, J.L. 1999. A new adaptive importance sampling scheme. Structural Safety 21:135–158.
Au, S.K. & Beck, J.L. 2001. Estimation of small failure probabilities in high dimensions by subset simulation. Probabilistic Engineering Mechanics 16:263–277.
Au, S.K. & Beck, J.L. 2003a. Importance sampling in high dimensions. Structural Safety 25(2):139–163.
Au, S.K. & Beck, J.L. 2003b. Subset simulation and its applications to seismic risk based on dynamic analysis. Journal of Engineering Mechanics 129(8):901–917.
Beck, J.L., Chan, E., Irfanoglu, A. & Papadimitriou, C. 1999. Multi-criteria optimal structural design under uncertainty. Earthquake Engineering and Structural Dynamics 28(7):741–761.
Beck, J.L. & Katafygiotis, L.S. 1998. Updating models and their uncertainties. I: Bayesian statistical framework. Journal of Engineering Mechanics 124(4):455–461.
Boore, D.M. 2003. Simulation of ground motion using the stochastic method. Pure and Applied Geophysics 160:635–676.
Boore, D.M. & Joyner, W.B. 1997. Site amplifications for generic rock sites. Bulletin of the Seismological Society of America 87:327–341.
Ching, J. & Hsieh, Y.-H. 2007. Local estimation of failure probability function and its confidence interval with maximum entropy principle. Probabilistic Engineering Mechanics 22:39–49.
Enevoldsen, I. & Sørensen, J.D. 1994. Reliability-based optimization in structural engineering. Structural Safety 15(3):169–196.
Gasser, M. & Schuëller, G.I. 1997. Reliability-based optimization of structural systems. Mathematical Methods of Operations Research 46:287–307.
Glasserman, P. & Yao, D.D. 1992. Some guidelines and guarantees for common random numbers. Management Science 38:884–908.
Goulet, C.A., Haselton, C.B., Mitrani-Reiser, J., Beck, J.L., Deierlein, G., Porter, K.A. & Stewart, J.P. 2007. Evaluation of the seismic performance of a code-conforming reinforced-concrete frame building – from seismic hazard to collapse safety and economic losses. Earthquake Engineering and Structural Dynamics 36(13):1973–1997.
Iwan, W.D. & Cifuentes, A.O. 1986. A model for system identification of degrading structures. Earthquake Engineering and Structural Dynamics 14:877–890.
Jaynes, E.T. 2003. Probability theory: the logic of science. Cambridge, UK: Cambridge University Press.
Jensen, H.A. 2005. Structural optimization of linear dynamical systems under stochastic excitation: a moving reliability database approach. Computer Methods in Applied Mechanics and Engineering 194:1757–1778.
Kleinmann, N.L., Spall, J.C. & Naiman, D.C. 1999. Simulation-based optimization with stochastic approximation using common random numbers. Management Science 45(11):1570–1578.
Kramer, S.L. 2003. Geotechnical earthquake engineering. New Jersey: Prentice Hall.
Kushner, H.J. & Yin, G.G. 2003. Stochastic approximation and recursive algorithms and applications. New York: Springer.
Lagaros, N.D., Papadrakakis, M. & Kokossalakis, G. 2002. Structural optimization using evolutionary algorithms. Computers and Structures 80(7–8):571–589.
Papadimitriou, C., Beck, J.L. & Katafygiotis, L.S. 2001. Updating robust reliability using structural test data. Probabilistic Engineering Mechanics 16:103–113.
Porter, K.A., Beck, J.L., Shaikhutdinov, R.V., Au, S.K., Mizukoshi, K., Miyamura, M., Ishida, H., Moroi, T., Tsukada, Y. & Masuda, M. 2004. Effect of seismic risk on lifetime property values. Earthquake Spectra 20:1211–1237.
Pradlwater, H.J., Schuëller, G.I., Koutsourelakis, P.S. & Champris, D.C. 2007. Application of line sampling simulation method to reliability benchmark problems. Structural Safety 29(3):208–221.
Robert, C.P. & Casella, G. 2004. Monte Carlo Statistical Methods. New York, NY: Springer.
Royset, J.O. & Polak, E. 2004. Reliability-based optimal design using sample average approximations. Probabilistic Engineering Mechanics 19:331–343.
Royset, J.O. & Polak, E. 2007. Efficient sample size in stochastic nonlinear programming. Journal of Computational and Applied Mathematics (in press).
Ruszczynski, A. & Shapiro, A. 2003. Stochastic Programming. New York: Elsevier.
Sørensen, J.D., Kroon, I.B. & Faber, M.H. 1994. Optimal reliability-based code calibration. Structural Safety 15:197–208.
Stochastic system design optimization using stochastic simulation
187
Spall, J.C. 2003. Introduction to stochastic search and optimization. New York: WileyInterscience. Taflanidis, A.A. & Beck, J.L. 2007a. Stochastic subset optimization for optimal reliability problems. Journal of Probabilistic Engineering Mechanics (In press). Taflanidis, A.A. & Beck, J.L. 2007b. Stochastic subset optimization for stochastic design. ECCOMAS Thematic Conference on Computational Methods in Structural Dynamics and Earthquake Engineering, Rethymno, Greece, 13–16 June.
Chapter 8

Numerical and semi-numerical methods for reliability-based design optimization

Ghias Kharmanda
Aleppo University, Aleppo, Syria
ABSTRACT: In the Reliability-Based Design Optimization (RBDO) model for robust system design, the mean values of uncertain system variables are usually used as design variables, and the cost is optimized subject to prescribed probabilistic constraints, defined by a nonlinear mathematical programming problem. Therefore, an RBDO solution that reduces the structural weight in uncritical regions provides not only an improved design but also a higher level of confidence in the design. In this work, we present recent developments of the RBDO model from two points of view: reliability and optimization. Next, we present our own recent developments of the reliability-based design optimization model. Finally, we demonstrate the efficiency of our methods on different applications.
1 Introduction

When Deterministic Design Optimization (DDO) methods are used, deterministic optimum designs are frequently pushed to the design constraint boundary, leaving little or no room for tolerances (or uncertainties) in design, manufacture, and operating processes. Deterministic optimum designs obtained without consideration of uncertainties could therefore lead to unreliable designs, which calls for Reliability-Based Design Optimization (RBDO). The objective of RBDO is to design structures that are both economic and reliable. However, the coupling between the mechanical modeling, the reliability analyses and the optimization methods leads to very high computational cost and weak convergence stability (Feng & Moses 1986). To overcome these difficulties, two points of view have been considered. From a reliability point of view, RBDO involves the evaluation of probabilistic constraints, which can be executed in two different ways: either using the Reliability Index Approach (RIA) or the Performance Measure Approach (PMA) (see Tu et al. 1999, Youn et al. 2003). The major difficulty lies in the evaluation of the probabilistic constraints, which is prohibitively expensive and even diverges for many applications. From an optimization point of view, we have two categories of methods: numerical and semi-numerical methods. For the first category, a hybrid method based on the simultaneous solution of the reliability and the optimization problems has successfully reduced the computational time (Kharmanda et al. 2002). Next, an improved hybrid method has been proposed to improve the optimum value of the objective function beyond that obtained by the hybrid method (Mohsine et al. 2005). However, the hybrid and improved hybrid RBDO problems are more complex than
that of deterministic design and may not lead to local optima. For the second category, an Optimum Safety Factor (OSF) method has been proposed to compute safety factors satisfying a required reliability level without demanding additional computing cost for the reliability evaluation (Kharmanda et al. 2004b). However, the OSF method cannot be used in all cases, such as modal analysis. So a safest point method has been proposed to deal with these problems (Kharmanda et al. 2006). We finally note that the developments based on the reliability point of view are less efficient than those based on the optimization point of view, because the latter provide reliability-based optimum designs without additional computing cost for the probabilistic (reliability) constraints and lead, at least, to local optima. The numerical methods need much more computing time than the semi-numerical ones, but when the optimum values must be improved, the numerical methods have to be used despite their expensive operations.
2 Two points of view for developing the RBDO model

2.1 Reliability view point

The work of (Tu et al. 1999, Youn et al. 2003) depends on the development of several approaches based on a reliability point of view. Here, two design requirements are coupled for each probabilistic constraint: the performance requirement is described implicitly by the performance measure, and the reliability requirement is approximated explicitly by the first- or second-order reliability index. The conventional Reliability Index Approach (RIA) for RBDO has been developed and applied to design against fatigue crack initiation of a road arm of the M1A1 tank, successfully obtaining an optimum shape design of the component, see (Youn et al. 2003). However, it was found that the computational requirement of RIA is extremely intensive because the evaluation of each probabilistic constraint during an overall RBDO iteration is quite expensive. To alleviate this computational burden, it was proposed to develop a Performance Measure Approach (PMA) for RBDO. In this approach, the reliability constraint is defined from the design perspective (rather than from the reliability analysis perspective) to measure the design constraint violation. The prescribed reliability requirement (such as six-sigma design, see Koch et al. 2004) was assumed to be satisfied, and the probabilistic performance measure (the value of the limit-state function) that satisfies this prescribed reliability requirement was used to measure the degree of the reliability constraint violation. Using PMA, an inverse reliability analysis problem was associated with the evaluation of the reliability constraint, and a nonlinear ball constraint optimization problem was proposed for this inverse reliability analysis problem. The inverse reliability analysis problem in the proposed PMA was solved in a far more efficient and stable way than the conventional RIA. From a broader perspective, it was shown that the probabilistic constraints can be evaluated using either RIA or PMA. However, there are two major advantages in PMA compared to RIA, see (Youn et al. 2004). First, it is found that the performance measure approach is inherently robust and is more effective when the reliability constraint is inactive. This fact is not surprising, since it is easier to minimize a complicated cost function subject to a simple constraint function than to minimize a simple cost function subject to a complicated constraint function. The inverse reliability analysis problem of PMA provides
this benefit. Secondly, and more significantly, PMA always yields a solution, whereas RIA does not yield solutions for certain types of distributions. The major difficulty is associated with the reliability evaluation. So, we found it more efficient to base our developments on the optimization point of view.
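As a concrete illustration of the two ways of evaluating a probabilistic constraint discussed above, the following sketch contrasts RIA (compute the reliability index and require β ≥ β_t) with PMA (fix ‖u‖ = β_t and require the minimum performance measure to remain non-negative). The limit state H and all numerical values are hypothetical, chosen only for illustration; this is not code from the chapter.

```python
import numpy as np
from scipy.optimize import minimize

beta_t = 3.0  # target reliability index

def H(u):
    # Hypothetical limit state in the normalized space; failure when H(u) <= 0.
    return 2.0 + 0.3 * u[0] + 0.5 * u[1] + 0.05 * u[0] * u[1]

# RIA: beta = min ||u|| subject to H(u) = 0; the design constraint is beta >= beta_t.
ria = minimize(lambda u: np.linalg.norm(u), x0=np.array([0.1, 0.1]),
               constraints=[{'type': 'eq', 'fun': H}], method='SLSQP')
beta = ria.fun
print("RIA: beta =", beta, "constraint satisfied:", beta >= beta_t)

# PMA: performance measure = min H(u) subject to ||u|| = beta_t;
# the design constraint is that this minimum stays >= 0.
pma = minimize(H, x0=np.array([0.0, -beta_t]),
               constraints=[{'type': 'eq',
                             'fun': lambda u: np.linalg.norm(u) - beta_t}],
               method='SLSQP')
print("PMA: performance measure =", pma.fun, "constraint satisfied:", pma.fun >= 0.0)
```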
2.2 Optimization view point
Not surprisingly, efforts were directed towards the development of efficient techniques and general-purpose programs to perform the reliability analysis. These programs and procedures compute the reliability index of a structure for a defined failure mode, but do not provide an optimum set of design parameters that improves the reliability of the structure for given reliability information. Since the reliability index is computed iteratively, an enormous amount of computer time is involved in the whole design process. Two categories of methods have been developed. For the first category, called numerical methods, a hybrid method based on the simultaneous solution of the reliability and the optimization problems has successfully reduced the computational time (Kharmanda et al. 2002). Next, an improved hybrid method has been proposed to improve the optimum value of the objective function beyond that obtained by the hybrid method (Mohsine et al. 2005). However, the hybrid and improved hybrid RBDO problems are more complex than that of deterministic design and may not lead to local optima. For the second category, called semi-numerical methods, an optimum safety factor (OSF) method has been proposed to compute safety factors satisfying a required reliability level without demanding additional computing cost for the reliability evaluation (Kharmanda et al. 2004a). However, the OSF method cannot be used in all cases, such as modal analysis. So a safest point method has been proposed to deal with these problems (Kharmanda et al. 2006). In the next sections, we present our developments and some applications.
3 Numerical RBDO methods

3.1 Classical method (CM)

3.1.1 Basic formulation

The classical reliability-based optimization is performed by nesting the following two problems:

1. Optimization problem:

   min_x f(x)
   subject to g_k(x) ≤ 0,  k = 1, ..., K
              β(x, u) ≥ β_t                                        (1)

where f(x) is the objective function, g_k(x) ≤ 0 are the associated constraints, β(x, u) is the reliability index of the structure, and β_t is the target reliability.
Figure 8.1 (a) Physical space or X-Space and (b) normalized space or U-Space.
2. Reliability analysis: the reliability index β(x, u) is equal to the minimum distance between the limit state function H(x, u) and the origin, see Figure 8.1b. This index is determined by solving the minimization problem:

   min_u d(u) = √(Σ_i u_i²)
   subject to H(x, u) ≤ 0                                          (2)

where d(u) is the distance in the normalized random space, defined as above, and H(x, u) is the performance function (or limit state function) in the normalized space, defined such that H(x, u) ≤ 0 implies failure, see Figure 8.1b. In the physical space, the image of H(x, u) is the limit state function G(x, y), see Figure 8.1a.
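The nesting of problems (1) and (2) can be sketched as a double loop: an outer optimizer over the design variables x and, for every candidate x, an inner optimization in the normalized space that returns β(x). The sketch below assumes normally distributed variables with a 10% coefficient of variation and a hypothetical limit state G; the cost function, bounds and numbers are illustrative only, and the inner search is written with the active constraint H = 0, where the minimum of (2) lies when the mean point is safe.

```python
import numpy as np
from scipy.optimize import minimize

beta_t, gamma = 3.0, 0.1   # target reliability index, coefficient of variation

def G(y):
    # Hypothetical limit state in the physical space; failure when G(y) <= 0.
    return y[0] * y[1] - 4.0

def beta(x):
    """Inner problem (2): beta = min ||u|| subject to H(x, u) = 0."""
    H = lambda u: G(x * (1.0 + gamma * u))        # y_i = m_i + sigma_i u_i, sigma_i = gamma m_i
    res = minimize(lambda u: np.dot(u, u), x0=np.zeros(2),
                   constraints=[{'type': 'eq', 'fun': H}], method='SLSQP')
    return np.sqrt(max(res.fun, 0.0))

# Outer problem (1): minimize the cost subject to beta(x) >= beta_t.
cost = lambda x: x[0] + x[1]                      # hypothetical objective f(x)
res = minimize(cost, x0=np.array([3.0, 3.0]), bounds=[(1.0, 10.0)] * 2,
               constraints=[{'type': 'ineq', 'fun': lambda x: beta(x) - beta_t}],
               method='SLSQP')
print(res.x, beta(res.x))
```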
3.1.2 Further development
According to the sub-problems (1) and (2), the classical solution consists in minimizing two Lagrangians:

   min L1(x, u, λ_k, λ_β) = f(x) + λ_β [β_t − β(x, u)] + Σ_k λ_k g_k(x)      (3a)
   min L2(u, λ_H) = d(u) + λ_H H(x, u)                                        (3b)

where λ_k, λ_β and λ_H are, respectively, the Lagrangian multipliers for the constraints, the reliability index and the limit state function (λ_k ≥ 0, λ_β ≥ 0 and λ_H ≥ 0). The optimality
conditions of these two Lagrangians are, respectively,

   ∂L1/∂x_i = ∂f/∂x_i − λ_β ∂β/∂x_i + Σ_k λ_k ∂g_k/∂x_i = 0        (4a)
   ∂L1/∂λ_β = β_t − β(x, u) = 0                                     (4b)
   ∂L1/∂λ_k = g_k(x) = 0                                            (4c)

and

   ∂L2/∂u_j = ∂d/∂u_j + λ_H ∂H/∂u_j = 0                             (5a)
   ∂L2/∂λ_H = H(x, u) = 0                                           (5b)
It has been demonstrated that the classical approach needs a high computational time and may lead to weak convergence stability. Furthermore, it is very difficult to implement on a machine (see Kharmanda et al. 2001, 2002).

3.2 Hybrid method (HM)
3.2.1 Basic formulation
The solution procedure in two separate spaces requires large computational time, especially for large-scale structures (Feng & Moses 1986). In order to improve the numerical performance, the hybrid approach consists in minimizing a new form of the objective function F(x, y) subject to a limit state as well as deterministic and reliability constraints, i.e.,

   min_{x,y} F(x, y) = f(x) · d_β(x, y)
   subject to G(x, y) ≤ 0
              g_k(x) ≤ 0,  k = 1, ..., K
              d_β(x, y) ≥ β_t                                       (6)

The minimization of the function F(x, y) is carried out in the Hybrid Design Space (HDS) of deterministic variables x and random variables y. Here, d_β(x, y) is the distance in the hybrid space between the optimum point and the design point, d_β(x, y) = d(u). Since the random variables and the deterministic ones are treated in the same space (HDS), it is very important to know the types of the random variables used (continuous and/or discrete) and the distribution law that has been used. The normalized variable u is used to evaluate the reliability index (2). However, the reliability index can also be obtained in terms of the probability of failure as:

   β = −Φ⁻¹(P_f)                                                    (7)

where Φ is the cumulative distribution function and P_f is the probability of failure. In many engineering applications, the evaluation of the failure probability can be carried out in several ways (Ditlevsen & Madsen 1996).
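A minimal sketch of the hybrid formulation (6) is given below: a single optimization over the concatenated vector (x, y), where d_β(x, y) is the distance between the mean point and the design point measured in the normalized space, and the relation (7) between β and P_f is available through scipy.stats.norm. The limit state, cost function and all numbers are hypothetical illustrations, not the chapter's examples.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

beta_t, gamma = 3.0, 0.1

f = lambda x: x[0] + x[1]                      # hypothetical cost
G = lambda x, y: y[0] * y[1] - 4.0             # hypothetical limit state, failure when G <= 0

def d_beta(x, y):
    # Distance in the hybrid space for normal variables: u_i = (y_i - m_i) / (gamma m_i).
    return np.linalg.norm((y - x) / (gamma * x))

F = lambda z: f(z[:2]) * d_beta(z[:2], z[2:])   # objective of (6), z = [x1, x2, y1, y2]

cons = [{'type': 'ineq', 'fun': lambda z: -G(z[:2], z[2:])},               # G(x, y) <= 0
        {'type': 'ineq', 'fun': lambda z: d_beta(z[:2], z[2:]) - beta_t}]  # d_beta >= beta_t
z0 = np.array([3.0, 3.0, 2.0, 2.0])             # feasible start: G(y0) = 0, d_beta ~ 4.7
res = minimize(F, z0, bounds=[(1.0, 10.0)] * 4, constraints=cons, method='SLSQP')

print("optimum means:", res.x[:2], "design point:", res.x[2:])
print("P_f corresponding to beta_t:", norm.cdf(-beta_t))   # relation (7)
```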
Figure 8.2 Hybrid design spaces: (a) normal law, (b) lognormal law and (c) uniform law.
3.2.2 Further development

The hybrid Lagrangian is written as

   L_H(x, y, λ) = f(x) · d_β(x, y) + λ_β [β_t − d_β(x, y)] + λ_G G(x, y) + Σ_k λ_k g_k(x)      (8)
The optimality conditions of this Lagrangian are

   ∂L_H/∂x_i = d_β(x, y) ∂f/∂x_i + [f(x) − λ_β] ∂d_β/∂x_i + λ_G ∂G/∂x_i + Σ_k λ_k ∂g_k/∂x_i = 0      (9a)
   ∂L_H/∂y_i = [f(x) − λ_β] ∂d_β/∂y_i + λ_G ∂G/∂y_i = 0                                               (9b)
   ∂L_H/∂λ_β = β_t − d_β(x, y) = 0                                                                     (9c)
   ∂L_H/∂λ_G = G(x, y) = 0                                                                             (9d)
   ∂L_H/∂λ_k = g_k(x) = 0                                                                              (9e)
The optimality conditions (9) represent the optimal solution as a linear combination of the different gradients of f, d_β, G and g_k. At convergence, the distance d_β tends toward the reliability index β, which in turn tends toward β_t when the associated constraint is active. By comparing the conditions (9) with the optimality conditions of the classical formulation (see (4) and (5)), we can note that the only difference in the search direction lies in the coupled term ∂G/∂x_i. In fact, two cases may occur depending on the type of the optimization variable x_i.

Case 1: x_i is a deterministic mechanical parameter (e.g. x_i is a parameter of the limit state). In this case, the limit state sensitivity takes the form (Ditlevsen & Madsen 1996)

   ∂G/∂x_i = η ∂d_β/∂x_i                                            (10)

with the norm η

   η = ‖ (∂G/∂y_j) (∂T_j⁻¹(x, u)/∂u_j) ‖ = ‖ ∂H/∂u_j ‖              (11)

Case 2: x_i is a probability distribution parameter of the random variable y_i (e.g. x_i is the mean of y_i). In this case, x_i is a pure probability variable and has no effect on the limit state function, leading to ∂G/∂x_i = 0. In this case, we obtain

   ∂H/∂x_i = ∂G/∂y_j   for i = j
   ∂H/∂x_i = 0         for i ≠ j                                    (12a)

where

   ∂G/∂y_i = η ∂d_β/∂y_i                                            (12b)
From (10) and (12), we can see that the gradient vectors of G and d_β are co-directional, which means that there is no modification of the search direction. The introduction of this result in the first optimality condition of the hybrid Lagrangian (9a) leads to

   ∂L_H/∂x_i = d_β(x, y) ∂f/∂x_i + [f(x) − λ_β + ηλ_G] ∂d_β/∂x_i + Σ_k λ_k ∂g_k/∂x_i = 0      (13)

The comparison of the optimality conditions for the classical and hybrid approaches gives the relationships between the Lagrangian multipliers in the two formulations:

   λ_β = [λ_β − f(x) − ηλ_G] / d_β(x, y)                                                      (14)

and

   λ_H = λ_G / [f(x) − λ_β]                                                                   (15)
These developments show that the solution of problem (8) respects exactly the optimality conditions of the initial problem, given by (4a) and (5b), where the two phenomena were separated. In other words, the hybrid Lagrangian definition does not introduce any modification in the optimality conditions. In the literature, the hybrid method has been successfully applied to several examples (Kharmanda et al. 2001–2003). An industrial application to a lorry brake system design (for the KNORR-BREMSE Company) has been successfully carried out during the PhD thesis of (Mohsine 2006).

3.3 Improved hybrid method (IHM)
3.3.1 Basic formulation

Using the hybrid method, we can obtain local optima and the designer may then select the best optimum. In the improved hybrid method, we introduce the design point and the optimum solution into the objective function, and the constraints are evaluated at the design point and at the optimum solution, as follows:

   min_{x,y} F(x, y) = f(x) · d_β(x, y) · f(m_y)
   subject to G(x, y) = 0
              g_k(x) ≤ 0
              g_j(m_y) ≤ 0
              d_β(x, y) ≥ β_t                                       (16)

The random vector y has mean values m_y and standard deviations σ_y. f(m_y) is the optimal objective function and g_j(m_y) is the constraint with which we can control the optimal configuration. The solution of this problem depends on two important points. It can be carried out simultaneously in the hybrid design space (HDS).
3.3.2 Further development
We show the equivalence of the improved method and the classical (initial) one. The improved hybrid Lagrangian is written as

   L_I(x, y, λ) = f(x) · d_β(x, y) · f(m_y) + λ_β [β_t − d_β(x, y)] + λ_G G(x, y) + Σ_k λ_k g_k(x) + Σ_j λ_j g_j(m_y)      (17)
In order to write the optimality conditions of the improved hybrid Lagrangian, we note that the derivatives of f(m_y) and of g(m_y) with respect to y are nil:

   ∂f(m_y)/∂y |_{y*} = 0                                            (18)

and

   ∂g(m_y)/∂y |_{y*} = 0                                            (19)
Because the value m_y coincides with the optimal solution of the objective function, and we differentiate with respect to the random variables, we introduce a function Q such that m_y = Q(y). This gives:

   ∂(f ∘ Q)/∂y (y) |_{y*} = 0                                       (20)
So the optimality conditions of the improved hybrid Lagrangian are:

   ∂L_I/∂x_i = d_β(x, y) · f(m_y) ∂f/∂x_i + [f(x) · f(m_y) − λ_β] ∂d_β/∂x_i + λ_G ∂G/∂x_i + Σ_k λ_k ∂g_k/∂x_i = 0      (21a)
   ∂L_I/∂y_i = [f(x) · f(m_y) − λ_β] ∂d_β/∂y_i + λ_G ∂G/∂y_i = 0                                                       (21b)
   ∂L_I/∂λ_β = β_t − d_β(x, y) = 0                                                                                      (21c)
   ∂L_I/∂λ_G = G(x, y) = 0                                                                                              (21d)
   ∂L_I/∂λ_k = g_k(x) = 0                                                                                               (21e)
   ∂L_I/∂λ_j = g_j(m_y) = 0                                                                                             (21f)
The optimality conditions (21) represent the optimal solution as a linear combination of the different gradients of f, d_β, G and g_k. At convergence, the distance d_β tends toward the reliability index β, which in turn tends toward β_t when the associated constraint is active. By comparing the conditions (21) with the optimality conditions of the classical formulation (see (4) and (5)), we can note that the only difference in the search direction lies in the coupled term ∂G/∂x_i. In fact, two cases may occur depending on the type of the optimization variable x_i.

Case 1: x_i is a deterministic mechanical parameter (e.g. x_i is a parameter of the limit state). In this case, the limit state sensitivity takes the form (Ditlevsen & Madsen 1996)

   ∂G/∂x_i = η ∂d_β/∂x_i                                            (22)

with the norm η

   η = ‖ (∂G/∂y_j) (∂T_j⁻¹(x, u)/∂u_j) ‖ = ‖ ∂H/∂u_j ‖              (23)

Case 2: x_i is a probability distribution parameter of the random variable y_i (e.g. x_i is the mean of y_i). In this case, x_i is a pure probability variable and has no effect on the limit state function, leading to ∂G/∂x_i = 0. In this case, we obtain

   ∂H/∂x_i = ∂G/∂y_j   for i = j
   ∂H/∂x_i = 0         for i ≠ j                                    (24)

where

   ∂G/∂y_i = η ∂d_β/∂y_i                                            (25)
From (22) and (24), we can see that the gradient vectors of G and d_β are co-directional, which means that there is no modification of the search direction. The introduction of this result in the first optimality condition of the improved hybrid Lagrangian (21a) leads to

   ∂L_I/∂x_i = d_β(x, y) · f(m_y) ∂f/∂x_i + [f(x) · f(m_y) − λ_β + ηλ_G] ∂d_β/∂x_i + Σ_k λ_k ∂g_k/∂x_i = 0      (26)

The comparison of the optimality conditions for the classical and improved hybrid approaches gives the relationships between the Lagrangian multipliers in the two formulations:

   λ_β = [λ_β − f(x) · f(m_y) − ηλ_G] / [d_β(x, y) · f(m_y)]                                                    (27)
and

   λ_G = λ_G / [f(x) · f(m_y) − λ_β]                                (28)

These developments show that the solution of problem (17) respects exactly the optimality conditions of the initial problem, given by (4) and (5), where the two phenomena were separated. In other words, the improved hybrid Lagrangian definition does not introduce any modification in the optimality conditions. Applications in transient analysis have demonstrated the main benefit of the improved hybrid method: the structural performance is improved by minimizing the objective function further than with the hybrid method (Mohsine et al. 2005). To conclude this section, we can compare the three kinds of numerical methods: the classical, hybrid and improved hybrid RBDO methods. The classical method leads to very high computational cost and weak convergence. The hybrid method has successfully reduced the computing time relative to the classical one (Kharmanda et al. 2001–2003). The improved hybrid method can improve the optimum value of the objective function beyond the hybrid method (Mohsine et al. 2005). In the presented numerical methods, the reliability-based design optimization problem has two kinds of variables, random and deterministic, which still leads to expensive procedures and may not yield local optima. In the next section, we present two semi-numerical methods that reduce the scale of the optimization problem.
4 Semi-numerical RBDO methods

4.1 Optimum safety factor method (OSF)

4.1.1 Basic formulation

It is our aim that the safety factors should be independent of the engineering experience. In fact, the engineering experience is based on experimental work, design knowledge, etc. However, when designing a new type of structure, we usually need some experimental background for proposing suitable safety factors. When applying safety factors the initial cost will increase, and this increase should not be too large. Given that sensitivity analysis plays a very important role and can provide us with the influence of the parameters on the structure studied, we will use this concept in the proper direction and combine it with the reliability analysis. The main disadvantage of the Deterministic Design Optimization (DDO) procedure is that it may not satisfy an appropriate required reliability level. Although the reliability level of the structure is improved when using the hybrid RBDO, this approach leads to a saving of computational time (which may then be made available for the reliability analysis). Thus, our OSF approach consists in using both sensitivity analysis and reliability analysis to overcome the disadvantages of DDO and of RBDO by numerical methods (Kharmanda & Olhoff 2003, Kharmanda et al. 2003, 2004c, Kharmanda & Olhoff 2007). Table 8.1 shows the different formulations of the optimum safety factors for normal, lognormal and uniform distributions.
Table 8.1 Optimum safety factors for normal, lognormal and uniform distributions.

   Law         OSF
   Normal      S_fi = 1 + γ_i · u*_i
   Lognormal   S_fi = [1 / √(1 + γ_i²)] · exp(√(ln(1 + γ_i²)) · u*_i)
   Uniform     S_fi = 1 − √3 γ_i (1 − 2Φ(u*_i))
Figure 8.3 (a) Design point modeling and (b) optimum solution modeling.
4.1.2 Further development

Let us consider an example of only two normalized variables u_1 and u_2 (see Figure 8.3a). For an assumed failure scenario H(u) ≤ 0, the design point P* is calculated by

   min_u d² = u_1² + u_2²
   subject to H(u_1, u_2) ≤ 0                                       (29)

The Lagrangian function for the problem (29) can be written as:

   L(u, λ, s) = d²(u) + λ · [H(u) + s²]                             (30)
where the inequality constraint in (29) is adjoined by the Lagrange multiplier λ, after having converted the inequality constraint into the equality H(u) + s² = 0, by introducing
the real slack variable s. The optimality conditions for this Lagrangian are:

   ∂L/∂u_i = ∂d²/∂u_i + λ ∂H/∂u_i = 0,   i = 1, 2                   (31a)
   ∂L/∂λ = H(u) + s² = 0                                            (31b)
   ∂L/∂s = 2sλ = 0                                                  (31c)
The optimality condition for L with respect to s yields the so-called switching condition sλ = 0, and the necessary condition ∂²L/∂s² ≥ 0 for a minimum of L implies that the Lagrangian multiplier λ must be non-negative, i.e., λ ≥ 0. So, due to the condition (31c), we distinguish between two cases:

Case 1: If the real slack variable is non-zero (s ≠ 0), the Lagrangian multiplier has to be zero (λ = 0) and the limit state constraint must be less than zero (H(u) < 0), which corresponds to the case of failure.

Case 2: If the real slack variable is zero (s = 0), the Lagrangian multiplier is non-negative (λ ≥ 0) and the limit state is defined by the equality constraint H(u) = 0. The solution here is found on the limit state function and represents the design point.

The first case is not suitable for our reliability-based study, whereas the second one is the basis of our approach. Since we have only two normalized variables u_1 and u_2, equation (31a) can be written as:

   ∂L/∂u_1 = ∂d²/∂u_1 + λ ∂H/∂u_1 = 0                               (32a)
   ∂L/∂u_2 = ∂d²/∂u_2 + λ ∂H/∂u_2 = 0                               (32b)

Using the square distance d² in equation (29), we get:

   2u_1 + λ ∂H/∂u_1 = 0  ⇔  u_1 = −(λ/2) ∂H/∂u_1                    (33a)
   2u_2 + λ ∂H/∂u_2 = 0  ⇔  u_2 = −(λ/2) ∂H/∂u_2                    (33b)

From Figure 8.3, at the design point P*, the tangent of α is given by tan α = u_2/u_1, and using equations (33a) and (33b), we get:

   u_2/u_1 = tan α = (∂H/∂u_2)/(∂H/∂u_1)                            (34)
Equation (34) shows the relationship between the distribution of the normalized vector components and the sensitivity of the limit state function. Problem (29) gives us
the reliability index β as the minimum distance between the limit state function and the origin (Hasofer & Lind 1974). This way, the resulting reliability index may be lower or higher than the target reliability index β_t. As we wish to satisfy a required target reliability level, we now write:

   β_t² = u_1² + u_2²                                               (35)

Using equations (34) and (35), we get:

   β_t² = u_2² + [(∂H/∂u_1)² / (∂H/∂u_2)²] u_2²                     (36)

or

   u_2² [(∂H/∂u_1)² / (∂H/∂u_2)² + 1] = β_t²                        (37)

Here, the value of the normalized vector components principally depends on the percentage of the limit state gradient. So u_2 is written as:

   u_2 = β_t √[ (∂H/∂u_2)² / ((∂H/∂u_1)² + (∂H/∂u_2)²) ]            (38)

In general, when considering the normal distribution law, the normalized variable u_i is given by:

   u_i = (y_i − m_i)/σ_i,   i = 1, ..., n                           (39)
The standard deviation σ_i can be related to the mean value m_i by:

   σ_i = γ_i · m_i,   i = 1, ..., n                                 (40)

This way we introduce the safety factors S_fi corresponding to the design variables x_i. The design point can be expressed by:

   y_i = S_fi · m_i,   i = 1, ..., n                                (41)

By (40) and (41), we replace σ_i and y_i in equation (39) and get:

   u_i = (S_fi − 1)/γ_i,   i = 1, ..., n                            (42)
Using equation (42), we can write (38) in the following form:

   (S_f2 − 1)/γ_2 = β_t √[ (∂H/∂u_2)² / ((∂H/∂u_1)² + (∂H/∂u_2)²) ]      (43)

or, in terms of S_f2:

   S_f2 = 1 + γ_2 · β_t √[ (∂H/∂u_2)² / ((∂H/∂u_1)² + (∂H/∂u_2)²) ]      (44)

The calculation of the normalized gradient ∂H/∂u is not directly accessible because the mechanical analysis is carried out in the physical space, not in the standard space. The computation of the normalized gradient is carried out by applying the chain rule to the physical gradient ∂G/∂x:

   ∂H/∂u_i = (∂G/∂y_k) (∂T_k⁻¹(u, y)/∂u_i),   i = 1, ..., n,  k = 1, ..., K      (45)

where T⁻¹(y, u) is the probabilistic transformation function. After some algebra, the normalized gradient can be written as:

   ∂H/∂u_i = ∂G/∂y_i,   i = 1, ..., n                               (46)

The distribution of the components of the vector u can thus be measured by the sensitivity analysis of the limit state function with respect to the design point vector y:

   S_f2 = 1 ± γ_2 · β_t √[ |∂G/∂y_2| / (|∂G/∂y_1| + |∂G/∂y_2|) ]          (47)

For a single limit state problem with n design variables, summing from j = 1 to n, equation (47) can thus be written in the following form:

   S_fi = 1 ± γ_i · β_t √[ |∂G/∂y_i| / Σ_{j=1}^{n} |∂G/∂y_j| ],   i = 1, ..., n      (48)
with the optimum values u*_i of the normalized vector:

   u*_i = ±β_t √[ |∂G/∂y_i| / Σ_{j=1}^{n} |∂G/∂y_j| ],   i = 1, ..., n

Here, the sign ± depends on the sign of the derivative, i.e.

   ∂G/∂y_i > 0  ⇔  S_fi > 1,   i = 1, ..., n                        (49)
   ∂G/∂y_i < 0  ⇔  S_fi < 1,   i = 1, ..., n                        (50)
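The OSF formulas (48)-(50) and Table 8.1 can be sketched as a small helper that maps the limit-state sensitivities at the design point to safety factors. This is a hedged reading of the equations above (in particular, the lognormal and uniform rows follow the reconstruction of Table 8.1); the function name and all inputs are illustrative, not part of the chapter.

```python
import numpy as np
from scipy.stats import norm

def optimum_safety_factors(dG_dy, gamma, beta_t, law="normal"):
    """Sketch of eqs (48)-(50): dG_dy are limit-state sensitivities at the design point."""
    dG_dy = np.asarray(dG_dy, dtype=float)
    gamma = np.broadcast_to(np.asarray(gamma, dtype=float), dG_dy.shape)
    weight = np.abs(dG_dy) / np.sum(np.abs(dG_dy))
    u_star = np.sign(dG_dy) * beta_t * np.sqrt(weight)   # optimum normalized coordinates
    if law == "normal":
        return 1.0 + gamma * u_star
    if law == "lognormal":
        return np.exp(np.sqrt(np.log(1.0 + gamma**2)) * u_star) / np.sqrt(1.0 + gamma**2)
    if law == "uniform":
        return 1.0 - np.sqrt(3.0) * gamma * (1.0 - 2.0 * norm.cdf(u_star))
    raise ValueError("unknown distribution law: " + law)

# Example with hypothetical sensitivities (negative derivatives give S_f < 1, as in (50)).
print(optimum_safety_factors([-1.0, -0.7, -0.8], gamma=0.1, beta_t=3.0))
```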
Using these safety factors, we can satisfy the required reliability level and avoid the complexity of the problem. In the literature, the OSF method has been successfully applied to several static examples (Kharmanda et al. 2003–2004c). For transient analysis, (Yang et al. 2005) from Ford Motor Company compared the results and efficiencies of different RBDO methods on an exhaust system. The objective was to minimize the weight of the system subject to constraints that the reliability of the resultant forces in each frequency region should be less than specified values. All in all, 144 constraints were imposed, but many of them were inactive. (Yang et al. 2005) tested several RBDO methods. They concluded that: '(Kharmanda et al. 2004c) also used structural safety factors, based on the sensitivity of the limit-state function, for RBDO. In addition to its simplified computational framework to completely decouple the optimization and the reliability analyses, the method has two advantages:

1. It incorporates the partial safety-factor concept with which most designers are familiar. And, theoretically, safety factors do not have to be tied to the individual random variables and thus the MPPs (Most Probable Points).
2. It produces progressively improved reliable designs in the initial steps that help designers keep track of their designs.'
According to the experience of Ford Motor Company, our method is considered a very good active constraint strategy (for problems with many constraints). For modal analysis, it has been applied to a special case (Kharmanda et al. 2004d), where the reliability-based optimum solution was determined subject to a prescribed eigen-frequency f_n. But if the failure interval [f_a, f_b] is given, it is very difficult to determine the safest solution using the OSF method. So we have to develop an efficient method to find the best point, corresponding to the eigen-frequency, for a given frequency interval.
Table 8.2 Mean values by the safest point method for normal, lognormal and uniform distributions.

   Law         Mean values
   Normal      m_i = (y_i^a + y_i^b)/2,   i = 1, ..., n
   Lognormal   m_i = √(1 + γ_i²) · exp[ ln(y_i^a y_i^b)/2 ],   i = 1, ..., n
   Uniform     m_i = (y_i^a + y_i^b)/2,   i = 1, ..., n
Figure 8.4 The safest point at frequency f_n.
4.2 Safest point method (SP)

4.2.1 Basic formulation

The safest structure under free vibrations for a given interval of eigen-frequency is found at the safest position of this interval, where the safest point has the same reliability index relative to both sides of the interval. The use of the hybrid method here needs multiple procedures and a high computing time (Kharmanda et al. 2007). Thus, we present efficient formulations of the safest point method for normal, lognormal and uniform distributions (see Table 8.2).

4.2.2 Further development
Let us consider a given interval [f_a, f_b]. For the first mode shape, to get the reliability-based optimum solution for a given interval, we consider the equality of the reliability indices:

   β_a = β_b                                                        (51a)

with

   β_a = √(Σ_{i=1}^{n} (u_i^a)²)   and   β_b = √(Σ_{i=1}^{n} (u_i^b)²)      (51b)
To verify the equality (51a), we propose the equality of each term. So we have:

   u_i^a = −u_i^b,   i = 1, ..., n                                  (52)

According to the normal distribution law, the normalized variable u_i is given by (39); with (52), we get:

   (y_i^a − m_i)/σ_i = −(y_i^b − m_i)/σ_i,   i = 1, ..., n          (53)
To obtain equality between the reliability indices (see equation (51a)), the mean value of each variable corresponds to the structure at f_n. So the mean values of the safest solution are located in the middle of the variable interval [y_i^a, y_i^b], as follows:

   x_i = m_i = (y_i^a + y_i^b)/2,   i = 1, ..., n                   (54)
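Table 8.2 and equation (54) amount to a one-line computation per variable once the interval bounds y_i^a and y_i^b are known. The sketch below follows that reading; the lognormal row reflects the reconstruction of Table 8.2 above (the √(1 + γ²) factor is inferred), and all inputs are illustrative.

```python
import numpy as np

def safest_point_means(y_a, y_b, gamma=None, law="normal"):
    """Sketch of Table 8.2 / eq. (54): means of the safest solution for bounds [y_a, y_b]."""
    y_a, y_b = np.asarray(y_a, dtype=float), np.asarray(y_b, dtype=float)
    if law in ("normal", "uniform"):
        return 0.5 * (y_a + y_b)                      # midpoint of the variable interval
    if law == "lognormal":
        g = np.asarray(gamma, dtype=float)
        return np.sqrt(1.0 + g**2) * np.exp(0.5 * np.log(y_a * y_b))
    raise ValueError("unknown distribution law: " + law)

# Illustrative call: bounds of two design variables at f_a and f_b.
print(safest_point_means([0.11, 0.20], [0.15, 0.28]))
```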
In recent publications (Kharmanda et al. 2006, 2007), we found that the safest point method is more suitable for modal analysis than the other methods, which are complex to implement and slow to converge in this kind of study. To conclude this section, the OSF method is simple to implement, can satisfy required reliability levels, has only a single type of variable y, and needs only a single, simple optimization process to determine the design point. This method can be successfully used for static and transient analysis problems, but for a modal analysis where the aim is to optimize a structure for a given eigen-frequency interval, the safest point method is very suitable for finding the best structure. Finally, comparing the numerical and semi-numerical methods, we can note that the computational time when using the numerical methods is very high relative to the semi-numerical methods, because we deal with two kinds of variables in the numerical methods but with only one kind in the semi-numerical methods. As a result, the numerical methods solve both the optimization and the reliability problems by numerical procedures, whereas the semi-numerical methods solve the reliability problem in analytical form and the optimization problem by a numerical procedure, which leads to a reduction of the computing time. In the next section, we present some numerical examples that compare the different methods, with the object of showing their advantages depending on the studied cases.
5 Numerical applications

We study three examples: static, modal and transient cases, in order to provide the designer with the suitable method for each case.
Figure 8.5 Layout of the tri-material cantilever beam.
5.1 Static analysis: A cantilever tri-material beam

The objectives of the following static analysis are to demonstrate that:

1. the DDO procedure cannot satisfy the required reliability level,
2. semi-numerical methods such as OSF are simple to implement relative to numerical ones such as the hybrid method,
3. semi-numerical methods such as OSF efficiently reduce the problem scale relative to numerical ones such as the hybrid method, which leads to a reduction of the computing time.
The design problem under consideration pertains to a short tri-material cantilever beam of length L = 100 mm, height H = 50 mm and width T = 20 mm, which is loaded by a distributed pressure q = 15 N/mm². The beam structure is composed of three layers of material (Figure 8.5) with different Young's moduli E1 = 200 GPa, E2 = 100 GPa and E3 = 150 GPa, Poisson's ratios ν1 = 0.3, ν2 = 0.1 and ν3 = 0.2, and yield stresses σ_1^y = 48 MPa, σ_2^y = 18 MPa and σ_3^y = 42 MPa. The heights of the three layers are: H1 = 10 mm, H2 = 30 mm, and H3 = 10 mm. To optimize the tri-material beam structure, the mean values m_H1, m_H2 and m_H3 of the heights H1, H2 and H3 are the design variables. The physical heights H1, H2 and H3 are elements of the vector of random variables. The target reliability index is taken to be β_t = 3, and the standard deviations are given by σ_H1 = 0.1 m_H1, σ_H2 = 0.1 m_H2 and σ_H3 = 0.1 m_H3. During the subsequent design optimization processes, we consider all variables to be bounded by upper and lower limits.

5.1.1 Optimization procedures
DDO procedure: The objective is to minimize the volume subject to the design constraints and to consider a safety factor S_f that is applied to the stresses and based on engineering experience. The structure has to be designed by considering the maximum allowable values σ_j^w = σ_j^y / S_f, j = 1, 2, 3 for the von Mises stresses σ_j^max, j = 1, 2, 3 at the most critical points in each of the three layers of different material. Thus, the structural optimization problem, with the safety factor taken into account, can be written as

   min_{H1,H2,H3} Volume(H1, H2, H3)
   subject to σ_1^max(H1, H2, H3) ≤ σ_1^w = σ_1^y / S_f
              σ_2^max(H1, H2, H3) ≤ σ_2^w = σ_2^y / S_f
              σ_3^max(H1, H2, H3) ≤ σ_3^w = σ_3^y / S_f                        (55)

The associated reliability evaluation, without consideration of the safety factor, can be written in the form

   min_{u_H1,u_H2,u_H3} d(u_H1, u_H2, u_H3)
   subject to σ_1^y − σ_1^max(H1, H2, H3; u_H1, u_H2, u_H3) ≤ 0
              σ_2^y − σ_2^max(H1, H2, H3; u_H1, u_H2, u_H3) ≤ 0
              σ_3^y − σ_3^max(H1, H2, H3; u_H1, u_H2, u_H3) ≤ 0                (56)

Here, we take the value of the global safety factor applied to the yield stresses to be S_f = 1.5. This way the allowable stresses will be: σ_1^w = 32, σ_2^w = 12 and σ_3^w = 28 MPa. After having optimized the structure according to (55), the resulting volume is found to be V_DDO = 43 252 mm³. The reliability index depends on the distribution law, and the optimum values of the reliability index for the three different types of distribution are found to be β_DDO = 3.5127. Using DDO we cannot control a required reliability level. However, by integrating the reliability concept into the design optimization process (thereby performing RBDO), we can satisfy the reliability constraint.

Hybrid procedure: The classical method implies very high computational cost and exhibits weak convergence stability. So we use the hybrid method to satisfy the required reliability level (within an admissible tolerance of 1%). In the hybrid procedure of RBDO, we minimize the product of the volume and the reliability index subject to the limit state functions and the required reliability level. The hybrid RBDO problem is written as
   min_{m_H1,m_H2,m_H3,H1,H2,H3} Volume(H1, H2, H3) · d_β(m_H1, m_H2, m_H3, H1, H2, H3)
   subject to σ_1^max(H1, H2, H3) ≤ σ_1^y
              σ_2^max(H1, H2, H3) ≤ σ_2^y
              σ_3^max(H1, H2, H3) ≤ σ_3^y
              d_β(m_H1, m_H2, m_H3, H1, H2, H3) ≥ β_t                          (57)

This optimization process is carried out in a hybrid design space. The resulting optimal value of the reliability index is found to be d_β = 3.0001 ≈ β_t (i.e., 0.03% higher than the target reliability index). The resulting optimum volume is determined as Vol_hybrid = 41 782 mm³. The experience of the designer with finite element software
Table 8.3 Safety factor values (β = 3).

   Variables   ∂σ1/∂yi    ∂σ2/∂yi    ∂σ3/∂yi    Sf
   H1          −1.0520    −0.2160    −0.7318    0.8255
   H2          −0.7452    −0.2041    −0.6119    0.8458
   H3          −0.8432    −0.6796    −0.8271    0.8108
plays a very important role in improving the objective function and controlling the convergence. Although the method yields results that satisfy the required reliability level within admissible tolerances, the problem is a complex optimization problem and needs a large number of iterations to converge and to improve the value of the objective function.

OSF procedure: This method includes three main steps:

1. The first step is to obtain the design point (the Most Probable Point). Here, we minimize the volume subject to the design constraints without consideration of the safety factors. This way the optimization problem is simply written as:

   min_{H1,H2,H3} Volume(H1, H2, H3)
   subject to σ_1(H1, H2, H3) ≤ σ_1^y
              σ_2(H1, H2, H3) ≤ σ_2^y
              σ_3(H1, H2, H3) ≤ σ_3^y                                          (58)

   The design point is found to correspond to the maximum von Mises stresses σ_1^max = 47.335 MPa, σ_2^max = 17.177 MPa and σ_3^max = 41.999 MPa, which are almost equal to the yield stresses σ_1^y, σ_2^y and σ_3^y.
2. The second step is to compute the optimum safety factors for the normal distribution. In this example, the number of deterministic variables is equal to that of the random ones. During the optimization process, we obtain the sensitivity values of the limit state with respect to all variables, so there is no need for additional computational cost. Table 8.3 shows the results leading to the values of the safety factors, namely the sensitivity results for the different limit state functions.
3. The third step is to calculate the optimum solution. This encompasses the inclusion of the values of the safety factors in the values of the design variables in order to evaluate the optimum solution, as illustrated in the sketch after this list.
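As a small check of step 3, the design-point heights obtained in step 1 can be divided by the optimum safety factors of Table 8.3 (equation (41), y_i = S_fi · m_i, solved for the means). The numbers below are taken from Tables 8.3 and 8.4; the few-line script is only a verification sketch.

```python
import numpy as np

# Design-point heights (step 1) and optimum safety factors from Table 8.3 (normal law).
y_star = np.array([7.7576, 18.482, 7.3930])   # H1, H2, H3 at the design point
s_f    = np.array([0.8255, 0.8458, 0.8108])   # S_f for H1, H2, H3

m_opt = y_star / s_f                           # eq. (41): m_i = y_i / S_fi
print(m_opt)   # ~ [9.397, 21.852, 9.118], consistent with the OSF column of Table 8.4
```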
5.1.2 Discussion

Table 8.4 presents the different results of the DDO and RBDO procedures. Both RBDO procedures can satisfy the required reliability level β_t = 3, but the DDO cannot. The DDO may lead to high or low reliability levels because it does not control the reliability. In order to demonstrate the efficiency of the OSF (semi-numerical) method relative to the hybrid (numerical) procedure, we discuss below the results obtained by these procedures. The design obtained by the OSF method is the best solution relative to the design obtained by the hybrid RBDO procedure, as the objective is to provide the
Table 8.4 Results for the DDO and RBDO procedures.

   Variables   DDO       RBDO (Hybrid method)   RBDO (OSF method)
   m_H1        8.6285    8.3992                 9.3974
   m_H2        25.232    24.753                 21.851
   m_H3        9.3910    8.6298                 9.1176
   σ_1^max     31.152    33.096                 34.347
   σ_2^max     11.026    12.059                 12.134
   σ_3^max     27.999    29.718                 30.915
   H1          7.5034    7.4942                 7.7576
   H2          18.961    18.726                 18.482
   H3          7.4072    7.4368                 7.3930
   σ_1^y       47.009    47.488                 47.335
   σ_2^y       17.007    17.075                 17.177
   σ_3^y       41.598    41.997                 41.999
   β           3.5127    3.0001                 3.0000
   Volume      43 252    41 782                 40 366
best compromise between cost and safety. The OSF methodology satisfies the required reliability level β_t = 3 and gives a smaller structural volume than the hybrid method for the same reliability level. In order to improve the structure resulting from the hybrid method, the designer can obtain several local optima and then select the best solution. The resulting optimum volume obtained by OSF (V_OSF = 40 366 mm³) is smaller than the volume determined by the hybrid method by 3.39%. In general, the DDO is simple to implement, but it has two kinds of optimization variables, x and u, and also needs two optimization procedures: the first determines the optimal solution using the safety factor, and the second yields the value of the reliability index. Note that DDO cannot perform design subject to a required reliability level. The hybrid method, as a numerical method, can generally satisfy the required reliability level, but it has two types of optimization variables, x and y, and also needs to solve a single, complex optimization problem. This means that the designer needs more iterations to obtain several local optima in order to improve the objective function, and the hybrid method is complex to implement exactly. The OSF, as a semi-numerical method, is simple to implement, can satisfy required reliability levels, has only a single type of variable y, and needs only a single, simple optimization process to determine the design point. It is demonstrated that the OSF method possesses several advantages: a smaller number of optimization variables, good convergence stability, lower computing time, and satisfaction of required reliability levels (see also Kharmanda et al. 2002, 2004c).

5.2 Modal analysis: An aircraft wing
The objectives of the following modal analysis are to demonstrate that the safest point method is the most suitable to use for the modal cases because of its simple
Figure 8.6 Aircraft wing.
implementation and its computing time reduction relative to the other methods. The wing is uniform along its length, with the cross-sectional area illustrated in Figure 8.6a. It is firmly attached to the body of the airplane at one end. The chord of the airfoil has the dimensions and orientation shown in Figure 8.6b. The wing is made of low-density polyethylene with a Young's modulus of 38e3 psi, a Poisson's ratio of 0.3, and a density of 8.3E-5 lbf-sec²/in⁴. Assume the side of the wing connected to the plane is completely fixed in all degrees of freedom. The wing is solid and the material properties are constant and isotropic. Here, we can consider three structures: the first structure must be optimized subject to the first frequency value f_a of the given interval, the second one must be optimized at the end frequency value f_b of the interval, and the third structure must be optimized subject to a frequency value f_n that verifies the equality of the reliability indices relative to both sides of the given interval (see Figure 8.4). Let us consider the interval [16, 18] Hz as the given interval to design the structure. This way, we consider the frequency values as follows: f_a = 16 Hz, f_b = 18 Hz and f_n = ? Hz, where f_n must verify the equality of reliability indices β_a = β_b. Table 8.5 shows that the safest point method provides the solution with a good computational time relative to the HM.

Table 8.5 Results for the Hybrid and SP procedures when β_a = β_b.

   Parameter   Initial    Optimum (Hybrid method)   Optimum (SP method)
   A           0.13295    0.13391                   0.12300
   B           0.24112    0.20138                   0.22838
   C           0.30834    0.29656                   0.29963
   D           0.26316    0.20562                   0.22668
   A1          0.11295    0.12331                   0.11301
   B1          0.20112    0.24105                   0.21578
   C1          0.24834    0.28214                   0.27162
   D1          0.18316    0.26306                   0.23855
   A2          0.15295    0.14441                   0.13320
   B2          0.28112    0.24120                   0.24121
   C2          0.36834    0.31071                   0.30939
   D2          0.34316    0.26330                   0.26406
   Fn          17.9030    17.1080                   16.9790
   Fa          14.3580    16.0990                   16.0000
   Fb          21.8460    17.9530                   17.9510
   Volume      6.18645    5.55177                   5.83910
   Time (s)    —          25                        151

5.3 Transient analysis: A triangular plate

The objective of the following transient analysis is to demonstrate that the improved hybrid method can provide the designer with a better optimum value than the hybrid method. A triangular plate structure, illustrated in Figure 8.7, is subjected to a pressure of 200 MPa. The Young's modulus is 207 GPa and the Poisson's ratio is 0.3. The thickness parameters of this plate are R0 = 10 mm and T1 = 30 mm. The fillet radius is FIL = 10. The yield stress is σ_y = 235 MPa. The optimization problem is to find the optimum value of the structural volume subject to the maximum stress (transient response). This hybrid RBDO problem can be expressed as:

   min_{x,y} Volume(x) · d_β(x, y)
   subject to σ_max(y) − σ_y = 0
              σ_k(y) − σ_y ≤ 0
              d_β(x, y) ≥ β_t                                                  (59a)
and the improved hybrid RBDO problem can be presented as follows:

   min_{x,y} Volume(x) · d_β(x, y) · Volume(m_y)
   subject to σ_max(y) − σ_y = 0
              σ_k(y) − σ_y ≤ 0
              d_β(x, y) ≥ β_t                                                  (59b)

Here, we regroup T1, R0 and FIL in a random vector y; to optimize the design, the means m_T1, m_R0 and m_FIL are regrouped in a deterministic vector x, and their fixed standard deviations equal 0.1 m_x. The normalized variable u_i is given by:

   u_i = (y_i − m_i)/σ_i,   i = 1, ..., n                           (60)

where the mean m_i and the standard deviation σ_i are two parameters of the distribution, usually estimated from the available data. Table 8.6 shows the hybrid and improved hybrid results. Both the improved and the hybrid RBDO satisfy the required reliability level β_t. However, the optimal volume obtained by the improved hybrid method is less than that obtained by the hybrid method. The volume reduction is almost 26%, which leads to more economic structures.
Figure 8.7 Geometry and boundary conditions of the triangular plate structure.
Table 8.6 Results for the RBDO procedures by HM and IHM.

   Parameter   HM        IHM
   T1          24.985    24.058
   FIL         8.5833    9.1013
   R0          7.3251    9.8216
   σy          234.92    235.04
   mT1         29.678    26.092
   mFIL        10.600    9.1062
   mR0         7.6991    6.0869
   σw          204.51    216.42
   Volume      105 874   78 250
   β           3.8096    3.8
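The volume reduction quoted above can be verified directly from the last rows of Table 8.6; the short check below is only arithmetic on the tabulated values.

```python
v_hm, v_ihm = 105_874.0, 78_250.0          # optimum volumes from Table 8.6
reduction = (v_hm - v_ihm) / v_hm
print(f"{reduction:.1%}")                   # ~26.1%, i.e. almost 26%
```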
In this example, we demonstrate that the improved hybrid method can improve the structural performance relative to the hybrid method but it needs a more complex model (complex implementation) than the hybrid method.
6 Conclusions

For the static analysis, we first demonstrated that the DDO procedure may lead to low or high reliability levels because it relies on a global safety factor proposed from engineering experience (it cannot control the reliability levels). However, all RBDO (Reliability-Based Design Optimization) methods respect the required reliability level. Comparing the RBDO methods, it has been demonstrated that the classical approach needs a high computing time relative to the hybrid method and has weak convergence stability (see Kharmanda et al. 2001, 2002). The improved
Table 8.7 Advantages and disadvantages of the DDO and RBDO procedures.

DDO
   Advantages: Simple to implement
   Disadvantages: No satisfaction of reliability requirements; Two optimization processes; Two types of optimization variables x, u; May lead to local optima

RBDO
1. Numerical methods
   CM
      Advantages: Satisfaction of reliability requirements
      Disadvantages: Weak convergence stability; High computing time; Very complicated to implement; Two types of optimization variables x, u; May lead to local optima
   HM
      Advantages: Satisfaction of reliability requirements
      Disadvantages: Single, complex optimization process; Two types of optimization variables x, y; Complicated to implement; May lead to local optima
   IHM
      Advantages: Satisfaction of reliability requirements; Improvement of the objective function
      Disadvantages: Single, complex optimization process; Two types of optimization variables x, y; Very complicated to implement; More iterations to improve the objective; May lead to local optima
2. Semi-numerical methods
   OSF
      Advantages: Simple to implement; Satisfaction of reliability requirements; Single, simple optimization process; Reduction of computing time; Single type of variables y
      Disadvantages: Leads, at least, to local optima
   SP
      Advantages: Simple to implement; Satisfaction of reliability requirements; Double simple optimization processes; Reduction of computing time; Single type of variables y
      Disadvantages: Used only for modal analysis
hybrid method needs a complex model to improve the optimum value of the objective function relative to the hybrid method. The hybrid method has a good convergence stability that makes it suitable for RBDO problems as a numerical method. However, the hybrid RBDO problem is more complex than that of deterministic design and may not lead to local optima. To overcome both drawbacks, an Optimum Safety Factor (OSF) method has been proposed that provides reliability-based optimum designs without additional computing cost for the probabilistic (reliability) constraints and leads, at least, to local optima (Kharmanda et al. 2004c). As a result, the OSF, being a semi-numerical method, is very efficient for RBDO problems in static cases because of its simple implementation and the reduction of the number of optimization variables. If the designer needs to improve the objective function, the hybrid and improved hybrid methods are suitable, by testing several starting points and then selecting the best solution. The improved hybrid method provides us with a better
solution than the hybrid method, but needs a more complex implementation (Mohsine et al. 2005, Kharmanda & Olhoff 2007). For modal analysis, the hybrid method has been applied to a special case of a structure performing free vibrations (Kharmanda et al. 2003), where the reliability-based optimum solution was determined subject to a prescribed eigen-frequency f_n. The optimum safety factor method has also been applied to a special case of a structure performing free vibrations (Kharmanda et al. 2004a), where the reliability-based optimum solution was determined subject to a prescribed eigen-frequency f_n. But if the failure interval [f_a, f_b] is given, we cannot determine the reliability-based optimum solution using the optimum safety factor method, and the hybrid method necessitates a complex procedure to optimize three structures simultaneously to obtain the equality between the reliability indices. The semi-numerical method called the Safest Point (SP) method is very suitable for the modal cases because of its simple implementation and small computing time (Kharmanda et al. 2006, 2007). For transient analysis, the hybrid and improved hybrid methods (numerical methods) and the OSF method (semi-numerical method) are suitable to be used. When saving computational time and/or needing a simple implementation, the OSF method is the best approach to be used. However, to obtain several solutions and improve the optimum value of the objective function, we use the hybrid and the improved hybrid methods (Mohsine et al. 2006). The improved hybrid method provides the designer with a better local optimum than the hybrid one, but the hybrid method is simpler to implement than the improved hybrid method. As a general conclusion, the DDO is simple to implement, but it has two kinds of optimization variables, x and u, and also needs two optimization procedures: the first determines the optimal solution using the safety factor, and the second yields the value of the reliability index. Note that DDO cannot perform design subject to a required reliability level. All numerical and semi-numerical RBDO methods satisfy the required reliability level, but they differ in computing time, convergence stability, simplicity of implementation, improvement of the objective function value, kinds of variables, and suitable uses.
References

Ditlevsen, O. & Madsen, H. 1996. Structural Reliability Methods. John Wiley & Sons.
Feng, Y.S. & Moses, F. 1986. A method of structural optimization based on structural system reliability. J. Struct. Mech. 14:437–453.
Kharmanda, G., Mohamed, A. & Lemaire, M. 2001. New hybrid formulation for reliability-based optimization of structures. The Fourth World Congress of Structural and Multidisciplinary Optimization, WCSMO-4, Dalian, China, 4–8 June 2001.
Kharmanda, G., Mohamed, A. & Lemaire, M. 2002. Efficient reliability-based design optimization using hybrid space with application to finite element analysis. Structural and Multidisciplinary Optimization 24:233–245.
Kharmanda, G., Mohamed, A. & Lemaire, M. 2003. Integration of reliability-based design optimization within CAD and FE models. In: Recent Advances in Integrated Design and Manufacturing in Mechanical Engineering, Kluwer Academic Publishers.
Kharmanda, G., El-Hami, A. & Olhoff, N. 2004a. Global Reliability-Based Design Optimization. In: Frontiers on Global Optimization, C.A. Floudas (ed.), 255 (20), Kluwer Academic Publishers.
Kharmanda, G. 2004b. Two points of view for developing reliability-based design optimization. NT2F4 (New Trends in Fatigue and Fracture IV), Aleppo, Syria, 10–12 May 2004.
Kharmanda, G., Olhoff, N. & El-Hami, A. 2004c. Optimum values of structural safety factors for a predefined reliability level with extension to multiple limit states. Structural and Multidisciplinary Optimization 27:421–434.
Kharmanda, G., Olhoff, N. & El-Hami, A. 2004d. Recent Developments in Reliability-Based Design Optimization (Keynote Lecture). In: Computational Mechanics, Proc. Sixth World Congress of Computational Mechanics (WCCM VI in conjunction with APCOM'04), Sept. 5–10, 2004, Beijing, China. Tsinghua University Press & Springer-Verlag.
Kharmanda, G. & Olhoff, N. 2007. Extension of optimum safety factor method to nonlinear reliability-based design optimization. Journal of Structural and Multidisciplinary Optimization, to appear.
Kharmanda, G., Altonji, A. & El-Hami, A. 2006. Safest point method for reliability-based design optimization of freely vibrating structures. 1st International Francophone Congress for Advanced Mechanics, IFCAM01, Aleppo, Syria, 2–4 May 2006.
Kharmanda, G., Makhloufi, A. & El-Hami, A. 2007. Efficient computing time reduction for reliability-based design optimization. Qualita 2007, 20–22 March 2007, Tangier, Morocco.
Koch, P.N., Yang, R.J. & Gu, L. 2004. Design for six sigma through robust optimization. Structural and Multidisciplinary Optimization 26:235–248.
Mohsine, A., Kharmanda, G. & El-Hami, A. 2006. Improved hybrid method as a robust tool for reliability-based design optimization. Structural and Multidisciplinary Optimization 32:203–213.
Mohsine, A. 2006. Contribution à l'optimisation fiabiliste en dynamique des structures mécaniques. Thèse de doctorat, INSA de Rouen, France (in French).
Tu, J., Choi, K.K. & Park, Y.H. 1999. A new study on reliability-based design optimization. Journal of Mechanical Design, ASME 121(4):557–564.
Youn, B.D. & Choi, K.K. 2004. Selecting Probabilistic Approaches for Reliability-Based Design Optimization. AIAA Journal 42(1):124–131.
Yang, R.J., Chuang, C., Gu, L. & Li, G. 2005. Experience with approximate reliability-based optimization methods II: an exhaust system problem. Structural and Multidisciplinary Optimization 29:488–497.
Chapter 9
Advances in solution methods for reliability-based design optimization Alaa Chateauneuf & Younes Aoues University Blaise Pascal, France
ABSTRACT: The solution of Reliability-Based Design Optimization implies high computational efforts due to the coupling of reliability and optimization problems. The probabilistic constraint is the key constraint in RBDO, which requires considerable computational effort and reveals the classical iterative problems of numerical efficiency, accuracy and stability. To solve the RBDO systems, three approaches are commonly used: the two-level approach, the one-level approach and the decoupled approach. A good algorithm should satisfy the conditions of efficiency, precision, generality and robustness. This chapter describes the recent advances in numerical methods for RBDO solution, in order to give a comprehensive overview of the basis and characteristics of the different approaches. The numerical applications on simple structures allow us to compare the efficiency of the RBDO approaches.
1 Introduction
Reliability-Based Design Optimization (RBDO) aims at searching for the best compromise between cost reduction and reliability assurance, by considering system uncertainties. Although the basic RBDO ideas were established more than thirty years ago, the solution is still not easy, even for simple structures. The difficulty lies in the consideration of the reliability constraints, which require a large computational effort and involve classical numerical problems, such as convergence, accuracy and stability. The situation becomes worse when finite element and CAD models are involved, especially when material and geometrical nonlinearities are considered. While the optimization process is carried out in the space of the design variables, the reliability analysis is performed in the space of the random variables, where a lot of numerical calculations are required to evaluate the failure probability. Consequently, in order to search for the optimal structural configuration, the design variables are repeatedly changed, and each set of design variables corresponds to a new random variable space which then needs to be manipulated to evaluate the structural reliability at that point (Murotsu et al. 1994). Because of the many repeated searches needed in these two spaces, the computational time of such an optimization becomes the main problem. Figure 9.1 shows the main models involved in the RBDO of engineering structures. The nested optimization, reliability, CAD and finite element models involve nonlinear iterative numerical procedures, where the problems of convergence, precision and computation time are omnipresent.
Figure 9.1 Nested models in the RBDO of engineering structures: optimization problem (design space), reliability problem (random variable space), CAD model (geometrical variables), finite element model (nodal variables), nonlinearities and transient behavior (mechanical variables).
For practical design, the cost of a simple finite element analysis (which is already a time-consuming procedure) is multiplied by a factor between ten and several thousand, which cannot be afforded in the design process. The computational scheme is thus a major problem that researchers have to overcome in order to allow for practical applications. Generally speaking, a good solver should satisfy the conditions of efficiency (computation time), precision (accuracy in finding the optimum), generality (capability to deal with different kinds of problems, with or without a large number of variables) and robustness (stability of the convergence for any admissible initial point, local or global convergence criteria, etc.). In the last decade, many advanced methods and techniques have been intensively developed in both fields: optimization and reliability. This chapter aims to describe the most common numerical methods used to solve RBDO problems. After describing the basic formulation, the two-level, one-level and decoupled approaches are presented and discussed. Numerical applications are then presented for illustration and comparative purposes. For more details about the presented approaches, the reader is encouraged to consult the original works referenced at the end of this chapter.
2 Basic RBDO formulation
Basically, the RBDO problem is defined as the minimization of either the initial cost or the expected total cost (i.e. initial and expected failure costs), subject to the constraint of an admissible failure probability Pft, in addition to the other structural constraints. As mentioned above, the particularity of RBDO lies in the computation of the reliability constraint, which involves additional computational effort and convergence difficulties. This constraint can be evaluated by one of the reliability methods, such as FORM/SORM, RSM or even Monte Carlo (Ditlevsen and Madsen 1996, Rackwitz 2001, Lemaire 2006). The RBDO is formulated as:

$$\min_{d} \; f(d) \quad \text{subject to} \quad P_f(d) \le P_{ft} \;\text{ and }\; g_j(d) \le 0 \qquad (1)$$
Figure 9.2 Reliability index solution and probabilistic transformation: physical space (failure domain G(x, d) ≤ 0, safe domain) and normalized space (failure domain Gu(u, d) ≤ 0, MPP P*, reliability index β, direction cosines α).
where d is the vector of design variables, f(d) is the objective function, gj(d) are the structural deterministic constraints, Pf(d) is the failure probability of the structure and Pft is the admissible failure probability. In the First Order Reliability Method (FORM), the failure probability Pf is given as a function of the reliability index β:

$$P_f(d) = \Phi(-\beta(d)) \approx \Pr\big[G(X, d) \le 0\big] \qquad (2)$$
where X is the vector of random variables (whose realization is noted x), Pr[·] is the probability operator and Φ(·) is the standard Gaussian cumulative distribution function. It is to be noted that the design variables d may be either independent deterministic variables or distribution parameters, especially the mean values, of some of the random variables. These two cases should be carefully taken into account when computing the gradient vectors. The reliability level is defined by an invariant reliability index β, as defined by Hasofer and Lind (1974), which is evaluated by solving the constrained optimization problem:

$$\beta = \min \, \lVert u\rVert = \sqrt{\sum_i \big(T_i(x)\big)^2} \quad \text{under} \quad G(T(x), d) \le 0 \qquad (3)$$

where ‖u‖ is the distance between the median point and the failure subspace in the normalized space u, and Ti(·) is an appropriate probabilistic transformation, i.e. ui = Ti(x). The image of the performance function G(x, d) in the normalized space is written Gu(u, d) = G(x, d) (Figure 9.2). The solution of this problem is called the Most Probable Failure Point (MPP), the design point or the β-point, where β = ‖u*‖; it is noted P*, or x* and u* in the physical and normalized spaces, respectively. In fact, the term Most Probable Point is not rigorous from the probabilistic point of view: P* is just the point corresponding to the maximum joint density in the failure domain. However, in RBDO, the term MPP is preferred to the term design point, as it avoids confusion between design optimization and design for reliability.
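As a concrete illustration of problem (3), the reliability index and the MPP can be obtained with a general-purpose constrained optimizer. The sketch below is a minimal example; the limit state G = d·R − S, the two independent normal variables and all numerical values are hypothetical placeholders, not the finite element models discussed in this chapter.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical data: resistance R and load S, independent normal variables.
mR, sR = 300.0, 30.0
mS, sS = 150.0, 30.0
d = 1.0  # design variable scaling the resistance

def T_inv(u):
    """Map standard normal u back to the physical space (independent normals)."""
    return np.array([mR + sR * u[0], mS + sS * u[1]])

def G(x, d):
    """Performance function: failure when G <= 0."""
    R, S = x
    return d * R - S

# Problem (3): beta = min ||u|| subject to G(T^-1(u), d) <= 0
res = minimize(lambda u: np.linalg.norm(u),
               x0=np.array([-3.0, 3.0]),          # start inside the failure domain
               constraints=[{'type': 'ineq',
                             'fun': lambda u: -G(T_inv(u), d)}],  # enforce G <= 0
               method='SLSQP')

u_star = res.x                       # MPP in the normalized space
beta = np.linalg.norm(u_star)        # reliability index, eq. (3)
Pf = norm.cdf(-beta)                 # FORM failure probability, eq. (2)
print(f"beta = {beta:.3f}, MPP = {u_star}, Pf = {Pf:.2e}")
```

For this linear Gaussian case the result can be checked against the closed form β = (d·mR − mS)/√((d·sR)² + sS²); for a real structure G would be evaluated by the mechanical model.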
For the reliability problem described in equation 3, the Kuhn-Tucker optimality conditions are written:

$$\nabla_u \lVert u\rVert + \lambda\, \nabla_u G_u(u^*, d) = 0, \qquad G_u(u^*, d) = 0 \qquad (4)$$

where ∇u is the gradient operator in the normalized space and λ is the Lagrange multiplier. The solution of the above equations leads to λ = 1/‖∇u Gu(u*, d)‖, and hence the reliability problem must satisfy the conditions:

$$G_u(u^*, d) = 0, \qquad \nabla_u G_u(u^*, d)\cdot u^* + \lVert \nabla_u G_u(u^*, d)\rVert\, \lVert u^*\rVert = 0 \qquad (5)$$
The optimization process is carried out in the space of the design parameters d, which are deterministic. In parallel, the solution of the reliability problem is performed in the space of the random variables by solving the optimization problem in equation 3. Traditional reliability-based design optimization requires a double-loop iteration procedure, where the reliability analysis is carried out in the inner loop for each change in the design parameters, in order to evaluate the reliability constraints. The computational time of this procedure is extremely high due to the multiplication of the number of iterations in both the optimization and the reliability problems, involving a very high number of mechanical analyses. Recent developments in the literature aim at solving the numerical difficulties through three main approaches:
– Two-level approaches, which are based on the improvement of the traditional double-loop approach by increasing the efficiency of the reliability analysis.
– Mono-level approaches, which aim at solving simultaneously the optimization and the reliability problems within a single loop dealing with both design and random variables.
– Decoupled approaches, where the reliability constraint is replaced by an equivalent deterministic (or pseudo-deterministic) constraint, involving some additional simplifications.
In the following sections, the basic ideas behind these approaches will be briefly described.
3 Two-level approaches
A straightforward way to solve RBDO problems is a two-level approach, where the outer loop aims to solve the optimization problem by improving the design variables d and the inner loop aims to solve the reliability problem by dealing with the random variables x. In order to reduce the computational effort in the two-level formulation, two RBDO approaches have been proposed to deal with the probabilistic constraints:
– Reliability Index Approach (RIA), which considers the cost reduction under the reliability index constraint.
– Performance Measure Approach (PMA), which involves an inverse reliability problem as an alternative constraint.

3.1 Reliability Index Approach (RIA)
Traditionally, the RBDO procedure is solved in two spaces: the space of design variables, corresponding to the deterministic physical space, and the space of Gaussian random variables, obtained by probabilistic transformation of the random physical variables. In the classical approach, the RBDO is calculated by nesting the two following problems:
• optimization problem under reliability constraints:

$$\min_{d} \; f(d) \quad \text{subject to} \quad \beta(d) \ge \beta_t \;\text{ and }\; g_j(d) \le 0 \qquad (6)$$
where f(d) is the objective function, gj(d) are the associated deterministic constraints, β(d) is the reliability index of the structure and βt is the target reliability.
• calculation of the reliability index β(d):

$$\min_{x} \; \lVert u\rVert = \sqrt{\sum_i \big[T_i(x, d)\big]^2} \quad \text{subject to} \quad G(x, d) \le 0 \qquad (7)$$
where ‖u‖ is the distance between the origin and the considered point in the normalized random space, G(x, d) is the limit state function and Ti(·) is the probabilistic transformation to the normalized space. The solution of this RBDO problem consists in solving the two nested optimization problems. For each new set of design parameters, the reliability analysis is performed in order to get the new MPP, corresponding to a given reliability level. As illustrated in Figure 9.3, this procedure leads to a slow convergence scheme and zigzagging, due to the sequential changes of the optimal point and the Most Probable Point. The method is somewhat similar to relaxation procedures, known for their low convergence rate. Actually, it is well established that RIA converges slowly or even fails to converge for a number of problems (Choi and Youn 2002).

3.2 Performance Measure Approach (PMA)
This method is based on an inverse reliability analysis, where the performance function level is specified as a constraint, instead of the reliability index itself (Tu et al. 1999, 2000). The performance measure is written:

$$G_p(d) = F_G^{-1}\big[\Phi(-\beta_t);\, x, d\big] \qquad (8)$$
Figure 9.3 Illustration of (a) RIA and (b) PMA searches (limit state G(d) = 0, target reliability index βt, MPP).
where Φ(·) is the standard Gaussian cumulative distribution function, FG(·) is the CDF of the performance function G(·) and F_G^{-1}(·) its inverse. In the standard Gaussian space, the performance measure is directly evaluated at the Most Probable Failure Point P*, such that the target reliability can be satisfied:

$$G_p(d) = G\big(x^*, d \;\big|\; \lVert u^*\rVert = \beta_t\big) \qquad (9)$$
The RBDO is then formulated as:

$$\min_{d} \; f(d) \quad \text{subject to} \quad G_p(d) \ge 0 \;\text{ and }\; g_j(d) \le 0 \qquad (10)$$
where Gp(·) is obtained by solving the problem defined in equation 9. PMA is shown to be efficient and robust, since it is easier to minimize a complicated objective function subjected to a simple constraint than to minimize a simple objective function subjected to a complicated constraint. However, several numerical examples using PMA show inefficiency and instability in the assessment of the probabilistic constraints during the RBDO process, even with the Advanced Mean Value (AMV) or Hybrid Mean Value (HMV) methods (Youn and Choi 2004a). For this reason, Youn and Choi (2004b) proposed a coupling of HMV with the Response Surface Method, specifically developed for reliability and optimization analyses. In Figure 9.3, the search scheme of PMA is compared with RIA. While RIA is zigzagging, PMA goes first to the hyper-sphere with a radius equal to the target reliability index, then iterations are carried out on this hyper-sphere. This is the reason why convergence is faster and more stable in the case of PMA.
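A minimal sketch of the inverse reliability step used in PMA, based on the Advanced Mean Value iteration, is given below; the limit state, the transformation to the normalized space and the data are hypothetical stand-ins (two independent normal variables), not the examples of this chapter.

```python
import numpy as np

# Hypothetical limit state in the standard normal space (failure when G <= 0).
mR, sR, mS, sS = 300.0, 30.0, 150.0, 30.0

def G_u(u):
    R = mR + sR * u[0]
    S = mS + sS * u[1]
    return R - S

def grad_G_u(u, h=1e-6):
    """Forward-difference gradient of G in the normalized space."""
    g0 = G_u(u)
    return np.array([(G_u(u + h * e) - g0) / h for e in np.eye(len(u))])

def amv_performance_measure(beta_t, n_iter=20, tol=1e-8):
    """AMV iteration: search the minimum of G on the sphere ||u|| = beta_t."""
    u = np.zeros(2)
    for _ in range(n_iter):
        a = grad_G_u(u)
        a /= np.linalg.norm(a)
        u_new = -beta_t * a            # step onto the target hyper-sphere
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return G_u(u), u                   # performance measure Gp and its MPP

Gp, u_mpp = amv_performance_measure(beta_t=2.0)
print(f"Gp = {Gp:.2f} at u* = {u_mpp}")   # Gp >= 0 means the target is met
```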
Although many applications, such as those given by Frangopol (1995) and Nikolaidis and Burdisso (1988), are based on RIA algorithms, PMA is increasingly used for large-scale problems. Lee et al. (2002) have conducted a comparative study between RIA and PMA, where RIA was shown to be less efficient for high reliability levels. They analyzed several examples and concluded that conventional RIA is not computationally attractive compared with the recently introduced target-performance-based approaches.
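To make the nesting of equations (6) and (7) explicit, the following sketch couples an outer design loop with an inner FORM analysis for the same kind of deliberately simple academic case used above (one design variable, two normal variables); it only illustrates the double-loop structure, not the applications cited in this chapter.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical case: capacity d*R versus load S (both normal).
mR, sR = 200.0, 20.0
mS, sS = 300.0, 60.0
beta_t = 3.0

def G(x, d):
    R, S = x
    return d * R - S          # failure when G <= 0

def beta_of(d):
    """Inner loop, eq. (7): FORM reliability index for a given design d."""
    x_of_u = lambda u: np.array([mR + sR * u[0], mS + sS * u[1]])
    res = minimize(lambda u: np.linalg.norm(u), x0=np.array([-3.0, 3.0]),
                   constraints=[{'type': 'ineq',
                                 'fun': lambda u: -G(x_of_u(u), float(d))}],
                   method='SLSQP')
    return np.linalg.norm(res.x)

# Outer loop, eq. (6): minimize the design variable under beta(d) >= beta_t.
out = minimize(lambda d: d[0], x0=np.array([3.0]),
               constraints=[{'type': 'ineq',
                             'fun': lambda d: beta_of(d[0]) - beta_t}],
               bounds=[(0.5, 10.0)], method='SLSQP')

d_opt = out.x[0]
print(f"optimal d = {d_opt:.3f}, beta = {beta_of(d_opt):.2f}")
```

Every outer iteration triggers a full inner reliability analysis, which is precisely the computational burden the mono-level and decoupled approaches of the next sections try to avoid.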
4 Mono-level approaches
The mono-level methods aim at improving the efficiency of RBDO procedures by introducing the reliability analysis in the same loop as the optimization. The basis of the one-level approaches consists in solving both the optimization and reliability problems without nesting them. In this way, parallel convergence can be reached in both the design and random spaces, and computational cost may be saved. Among the mono-level approaches in the literature, one can cite the well-known works of Madsen and Friis Hansen (1992) and Kuschel and Rackwitz (1997), which are based on reformulating the RBDO problem. The solution can then be obtained by traditional nonlinear optimization algorithms.

4.1 Total cost formulation
The work of Madsen and Friis Hansen (1992) belongs to the earliest efforts on this topic. They proposed a combined method integrating the expected failure cost in the objective function. The proposed mono-level formulation is written as:

$$\min_{d} \; C_T(d) = C_I(d) + C_f\, \Phi(-\lVert u\rVert) \quad \text{subject to} \quad G_u(u, d) = 0 \;\text{ and }\; \frac{u}{\lVert u\rVert} = -\frac{\nabla_u G_u(u, d)}{\lVert \nabla_u G_u(u, d)\rVert} \qquad (11)$$
The last condition can also be written:

$$\nabla_u G_u(u, d)\cdot u + \lVert \nabla_u G_u(u, d)\rVert\, \lVert u\rVert = 0 \qquad (12)$$
This formulation has the advantage of being solvable by standard optimization algorithms, but it requires the explicit implementation of the probabilistic transformation, as well as the computation of second order derivatives. The numerical examples carried out by the authors showed a very large number of mechanical calls, compared to two-level RBDO models. Despite the high computational cost, further improvements of the combined method were still possible to make it an attractive alternative to the classical nested RBDO.

4.2 Formulation with optimality conditions
Kuschel and Rackwitz (1997) have developed two formulations for RBDO: either by minimizing the expected total cost, or by maximizing the structural reliability for a
given cost. In this mono-level approach, the reliability constraints are replaced by the Karush-Kuhn-Tucker conditions of the first order reliability problem. These optimality conditions are then introduced as new constraints in the mono-level optimization problem. The total cost formulation is written as:

$$\begin{aligned}
\min_{d} \;\; & f(d, u) = C_I(d) + C_f(d)\, \Phi(-\lVert u\rVert) \\
\text{subject to} \;\; & G_u(u, d) = 0 \\
& \nabla_u G_u(u, d)\cdot u + \lVert \nabla_u G_u(u, d)\rVert\, \lVert u\rVert = 0 \\
& \Phi(-\lVert u\rVert) \le P_{ft} \\
& u = T(x, d) \\
\text{and} \;\; & g_j(d) \le 0
\end{aligned} \qquad (13)$$
The maximum reliability formulation is written as:

$$\begin{aligned}
\max_{d} \;\; & \lVert u\rVert \\
\text{subject to} \;\; & G_u(u, d) = 0 \\
& \nabla_u G_u(u, d)\cdot u + \lVert \nabla_u G_u(u, d)\rVert\, \lVert u\rVert = 0 \\
& C_I(d) + C_f(d)\, \Phi(-\lVert u\rVert) \le C_t \\
& u = T(x, d) \\
\text{and} \;\; & g_j(d) \le 0
\end{aligned} \qquad (14)$$
To allow for an efficient solution of both problems, the sensitivity operators are provided in the algorithm. The reliability index sensitivity is given by (Enevoldsen and Sørensen 1994):

$$\frac{\partial \beta}{\partial d_k} = \frac{1}{\lVert \nabla_u G_u(u^*, d)\rVert}\, \frac{\partial G_u(u^*, d)}{\partial d_k} \qquad (15)$$

and the expected cost sensitivities are computed as follows:

$$\frac{\partial C_T(u, d)}{\partial d_k} = \frac{\partial C_I(d)}{\partial d_k} + \Phi(-\lVert u\rVert)\, \frac{\partial C_f(d)}{\partial d_k}, \qquad
\frac{\partial C_T(u, d)}{\partial u_i} = -C_f(d)\, \varphi(\lVert u\rVert)\, \frac{u_i}{\lVert u\rVert} \qquad (16)$$
The authors have applied this approach to several examples and showed its efficiency.
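The sensitivity (15) only uses quantities that are already available at the MPP. The short numerical check below is built on the same kind of hypothetical two-variable linear limit state as the earlier sketches, where the closed-form index is available for comparison; the data and the finite-difference step are assumptions for illustration only.

```python
import numpy as np

# Hypothetical limit state G(x, d) = d*R - S with independent normal R, S.
mR, sR, mS, sS = 200.0, 20.0, 300.0, 60.0

def G_u(u, d):
    return d * (mR + sR * u[0]) - (mS + sS * u[1])

def beta_form(d):
    """Closed-form FORM index for this linear Gaussian case."""
    return (d * mR - mS) / np.hypot(d * sR, sS)

d = 3.0
beta = beta_form(d)
grad_u = np.array([d * sR, -sS])                    # gradient of G_u in u-space
u_star = -beta * grad_u / np.linalg.norm(grad_u)    # MPP

# Sensitivity (15): d(beta)/dd = (1/||grad_u G||) * dG/dd evaluated at the MPP
h = 1e-6
dG_dd = (G_u(u_star, d + h) - G_u(u_star, d)) / h
dbeta_dd = dG_dd / np.linalg.norm(grad_u)

# Finite-difference check on the closed-form index
dbeta_dd_check = (beta_form(d + h) - beta_form(d)) / h
print(f"analytic sensitivity: {dbeta_dd:.4f}, check: {dbeta_dd_check:.4f}")
```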
4.3 Hybrid formulation
A mono-level approach has also been introduced through the hybrid formulation proposed by Kharmanda et al. (2002), which combines deterministic and random variables.
The RBDO formulation is based on defining a new objective function F(x, d) which integrates cost and reliability aspects as follows:

$$\begin{aligned}
\min_{d,\,x} \;\; & F(d, x) = f(d)\cdot T_\beta(x, d) \\
\text{subject to} \;\; & g_j(d) \le 0 \\
& T_\beta(x, d) \ge \beta_t \\
\text{and} \;\; & G(x, d) \le 0
\end{aligned} \qquad (17)$$
where Tβ(x, d) is the image of ‖u(x, d)‖ in the physical space (while ‖u(x, d)‖ measures a straight-line distance in the normalized space, Tβ generally corresponds to a curve). The minimization of the function F(x, d) is carried out in the hybrid space of deterministic and random variables. An example of this hybrid design space is given in Figure 9.4, where the reliability levels Tβ are represented by ellipses (case of a normal joint distribution), the objective function levels are given by solid curves and the limit state function is represented by dashed lines. Two important points can be observed: the optimal solution Pd* and the reliability solution Px* (i.e. the design point found on the curves G(x, d) = 0 and Tβ = βt). This hybrid space contains all information about the RBDO model (e.g. optimal points, sensitivities, reliability levels, objective function iso-values and constraints). The optimality conditions for this hybrid formulation are:

$$\begin{aligned}
& \nabla_x F(d, x) - \lambda\, \nabla_x T_\beta(x, d) + \nabla_x G(x, d) = 0 \\
& \nabla_d F(d, x) + \textstyle\sum_j \lambda_j\, \nabla_d g_j(d) - \lambda\, \nabla_d T_\beta(x, d) + \nabla_d G(x, d) = 0 \\
& \lambda_j\, g_j(d) = 0 \\
& \lambda\,\big(\beta_t - T_\beta(x, d)\big) = 0 \\
& G(x, d) = 0
\end{aligned} \qquad (18)$$
Figure 9.4 Hybrid design space (objective function levels, reliability levels Tβ = βt, limit state G(x, y) = 0, optimal solution Pd* and reliability solution Px*).
It can be shown that these optimality conditions satisfy the initial two-level RBDO formulation (Kharmanda et al. 2003). While the method is theoretically attractive, numerical applications have shown that special care must be taken in the implementation of such a procedure, in order to ensure efficiency and convergence. Kaymaz and Marti (2006) have developed a specific formulation to apply two- and one-level approaches to elastoplastic structural behavior. In this study, the one-level approach required a more complex formulation and a larger number of optimization variables, but no convergence difficulties were observed. According to Royset et al. (2001), the mono-level approach may have several disadvantages: 1) even with first order optimization algorithms, the mono-level approach requires second-order derivatives, 2) an explicit formulation of the probabilistic transformation is required, 3) the mono-level approach is not suitable for system reliability constraints. Nevertheless, the mono-level approach seems very attractive, but it still requires specific developments.
5 Decoupled approaches
The idea of decoupling the optimization and reliability problems is very attractive, as nested loops can be avoided and many reliability analyses can be saved. This is generally carried out by defining a specific approximation and an equivalent deterministic parameter. However, the main challenge lies in the specification of an equivalent RBDO problem able to reach sufficient accuracy. A basic idea consists in defining an equivalent deterministic constraint in terms of the standard deviation of the performance function. The optimal design is then searched for under an approximated percentile of the performance function; this method is known as the Approximate Moment Approach (AMA). The updating of the equivalent deterministic constraint can be carried out by performing a reliability analysis after each convergence to new optimal points. Starting from the initial point, an alternative solution consists in performing a reliability analysis to determine the Most Probable Failure Point (MPP), and hence the reliability index and the safety factors. The new limit state equation at the MPP is then used as a constraint in the deterministic optimization analysis, whose outputs are the new design parameters. These two steps can be solved in sequence until convergence (Torng and Yang 1993; Zou et al. 2004). Der Kiureghian and Polak (1988), Kirjner-Neto et al. (1998) and Royset et al. (2001) developed decoupled approaches by reformulating the RBDO problem as a deterministic semi-infinite optimization problem, where an outer approximation method allows the reliability problem to be solved independently of the optimization scheme. In this approach, the reliability constraint is first transformed into an infinite number of deterministic limit state constraints, and then the application of an outer programming algorithm allows the RBDO problem to be solved. In a recent work, Ching and Hsu (2006) proposed a method to transform the reliability constraint into a deterministic constraint, by introducing the so-called limit state factor, multiplying the nominal limit state. Once the equivalent deterministic constraint is defined, the RBDO can be solved as a classical deterministic optimization
problem. However, it remains to be proved whether this approach can be applied to real engineering structures, with several limit states involving a large number of design and random variables.
5.1 Approximate Moment Approach (AMA)
This approach transforms the probabilistic constraints into an approximated deterministic constraint, given by a percentile of the performance function (similar to the characteristic value approach in classical design). The RBDO problem is written as (Koch et al. 2007):

$$\min_{d} \; f(d) \quad \text{subject to} \quad m_G(d) + k\, \sigma_G(x, d) \le 0 \qquad (19)$$
where mG and σG are respectively the mean and the standard deviation of the performance function, and k is a coefficient to be specified for a given safety level. Unlike other methods, the AMA does not require a reliability analysis, as the only required information is the first and second moments of the performance function. While the mean is approximately computed in terms of the mean values of the random variables, the variance is based on a first order development of the performance function, which can be written for independent variables as:

$$\sigma_G^2 = \sum_i \left( \frac{\partial G(x, d)}{\partial x_i}\bigg|_{x = m_X} \right)^{2} \sigma_{X_i}^2 \qquad (20)$$
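A minimal sketch of the moment constraint (19)-(20) is given below, with the mean of G evaluated at the mean point and the variance obtained from a finite-difference first order expansion; the performance function, the data and the step size are hypothetical placeholders.

```python
import numpy as np

# Hypothetical performance function written as a "required" constraint G <= 0
# (bending stress minus allowable stress for a rectangular section).
def G(x):
    P, L, w, t, f_allow = x
    return 6.0 * P * L / (w * t**2) - f_allow

m_x = np.array([10e3, 2.0, 0.10, 0.20, 200e6])      # mean values
s_x = np.array([1.5e3, 0.02, 0.002, 0.004, 16e6])   # standard deviations
k = 3.0                                              # safety level coefficient

def moment_constraint(G, m_x, s_x, k, h=1e-6):
    """AMA constraint m_G + k*sigma_G <= 0, eqs (19)-(20)."""
    m_G = G(m_x)
    grad = np.empty_like(m_x)
    for i in range(len(m_x)):
        dx = np.zeros_like(m_x)
        dx[i] = h * max(abs(m_x[i]), 1.0)
        grad[i] = (G(m_x + dx) - m_G) / dx[i]
    sigma_G = np.sqrt(np.sum((grad * s_x) ** 2))     # first order variance
    return m_G + k * sigma_G

print("constraint value:", moment_constraint(G, m_x, s_x, k))
# A value <= 0 means the approximate reliability requirement is met.
```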
The method is efficient, as practically no extra cost is required with respect to standard deterministic optimization. However, this approach implies many simplifications and cannot lead to accurate reliability results. Consequently, the error in the reliability estimation does not allow for a convenient RBDO procedure and in many cases leads to meaningless results. The main defect lies in the assumption that the random variables and the performance function are normally distributed, which is far from appropriate for most engineering structures. The other strong assumption lies in the computation of the variance of G, which assumes a linear combination of the random variables, leading to a very limited field of application.
5.2 Sequential Optimization with Reliability Assessment (SORA)
The Sequential Optimization with Reliability Assessment (SORA) is based on a single-loop strategy composed of a sequence of deterministic optimizations and reliability analyses. For each loop, the deterministic optimization is carried out, then the performance measure is checked and updated. The new value of the performance measure is then used in the next loop as a constraint limit.
Three major ideas are introduced in the SORA method (Du & Chen 2002):
– A reliability percentile formulation is used to evaluate the design feasibility at the desired reliability level,
– An equivalent deterministic optimization is applied in order to reduce the number of reliability analyses,
– An efficient MPP search algorithm (the Modified Advanced Mean Value method, MAMV) is used for the inverse reliability evaluation.
The use of the reliability percentile instead of a full reliability analysis leads to a reduction of the computational time. This percentile allows for the identification of the feasible domain in design optimization. For a given reliability level R = 1 − Pf, the percentile reliability performance is given by:

$$G_p = G(x^*, d) \quad \text{such that} \quad \Pr\big[G(X, d) \ge G_p\big] = R \qquad (21)$$
The RBDO model can thus be written as:

$$\min_{d} \; f(d) \quad \text{subject to} \quad G(x^*, d) \ge 0 \qquad (22)$$
This formulation has the advantage of being fully deterministic, so it can be solved by any classical optimization algorithm. However, the solution of equation 21 requires several calls to the structural model, which reduces the efficiency of the approach.
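The decoupling can be sketched as a simple sequence: an equivalent deterministic optimization with the constraint evaluated at the current inverse MPP, followed by an inverse reliability analysis (here a single AMV step, which is exact for a linear limit state) that updates this point. The example below reuses the hypothetical two-variable case from the earlier sketches and is not one of the cited applications.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical case: capacity d*R versus load S, R and S independent normals.
mR, sR, mS, sS = 200.0, 20.0, 300.0, 60.0
beta_t = 3.0

def G(x, d):
    R, S = x
    return d * R - S                      # failure when G <= 0

def inverse_mpp(d):
    """Minimum of G on the sphere ||u|| = beta_t; one AMV step is exact here
    because G is linear in the normalized variables."""
    grad_u = np.array([d * sR, -sS])
    u = -beta_t * grad_u / np.linalg.norm(grad_u)
    return np.array([mR + sR * u[0], mS + sS * u[1]])

d, x_mpp = 3.0, np.array([mR, mS])        # initial design and shifting point
for cycle in range(20):
    # 1) equivalent deterministic optimization, constraint checked at x_mpp
    res = minimize(lambda dv: dv[0], x0=[d], bounds=[(0.5, 10.0)],
                   constraints=[{'type': 'ineq',
                                 'fun': lambda dv: G(x_mpp, dv[0])}],
                   method='SLSQP')
    d_new = res.x[0]
    # 2) reliability assessment: update the inverse MPP for the new design
    x_new = inverse_mpp(d_new)
    if abs(d_new - d) < 1e-6:
        d, x_mpp = d_new, x_new
        break
    d, x_mpp = d_new, x_new

print(f"SORA design d = {d:.3f} after {cycle + 1} cycles, MPP = {x_mpp}")
```

For this linear case the sequence converges in a few cycles to the same design as the double-loop approach, but each cycle only requires one deterministic optimization and one inverse reliability evaluation.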
5.3 Sequential approximate programming (SAP)
In this method (Chen et al. 2006; Yi et al. 2006), a sequence of approximate programming problems is solved until the optimum point is identified. In each sub-programming problem, the reliability analysis is approximated at the current MPP. By using a suitable linearization, a recurrence formula derived from the optimality conditions at the MPP is developed in order to approximate the reliability index and its derivatives. At each step, the previously found MPP is taken as the linearization point. The use of response sensitivities improves the efficiency of the proposed algorithm. This procedure enables concurrent convergence of the design optimization and the reliability calculations. The optimization problem is written:

$$\min_{d} \; f(d) \quad \text{subject to} \quad \tilde{\beta}(x, d) \ge \beta_t \;\text{ and }\; g_j(d) \le 0 \qquad (23)$$
where β̃ is the approximated reliability index, obtained through the recurrence formula:

$$\begin{aligned}
\tilde{\beta}\big(x, d^{(r+1)}\big) &= \tilde{\beta}\big(x, d^{(r)}\big) + \sum_l \left(\frac{\partial \tilde{\beta}(x, d)}{\partial d_l}\right)^{(r)} \big(d_l^{(r+1)} - d_l^{(r)}\big) \\
\tilde{\beta}\big(x, d^{(r)}\big) &= \frac{G^{(r)} - \sum_i \left(\dfrac{\partial G}{\partial u_i}\right)^{(r)} u_i^{(r)}}{\lVert \nabla_u G^{(r)}\rVert}
= \frac{G^{(r)}}{\lVert \nabla_u G^{(r)}\rVert} - \sum_i \alpha_i^{(r)}\, u_i^{(r)} \\
u_i^* &= -\alpha_i^{(r)}\, \tilde{\beta}\big(x, d^{(r)}\big)
\end{aligned} \qquad (24)$$
where the indices (r) and (r + 1) indicate the iteration numbers and αi(r) is the direction cosine for the variable ui.
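The recurrence (24) only uses the current value and gradient of G in the normalized space. A small sketch, with hypothetical numbers standing in for a finite element response:

```python
import numpy as np

def sap_beta_update(G_val, grad_G_u, u):
    """First-order reliability index approximation used in SAP, eq. (24)."""
    norm_g = np.linalg.norm(grad_G_u)
    alpha = grad_G_u / norm_g                  # direction cosines
    beta_tilde = (G_val - grad_G_u @ u) / norm_g
    u_star = -alpha * beta_tilde               # next linearization point
    return beta_tilde, u_star

# Hypothetical current iterate: G value, its u-space gradient, current MPP guess
beta_tilde, u_star = sap_beta_update(G_val=45.0,
                                     grad_G_u=np.array([60.0, -60.0]),
                                     u=np.array([-1.5, 1.5]))
print(f"beta_tilde = {beta_tilde:.3f}, next linearization point = {u_star}")
```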
5.4 Probabilistic sufficiency factor
Wu et al. (2001) and Qu and Haftka (2003) introduced the probabilistic sufficiency factor in order to replace the RBDO by a series of deterministic optimizations, by converting the reliability constraints into equivalent deterministic constraints. For a prescribed failure probability Pft, the probabilistic sufficiency factor Psf is given by solving:

$$\Pr\big[\gamma \le P_{sf}\big] = P_{ft} \qquad (25)$$
where γ is the global safety factor, defined as the random ratio between strength and stress. This means that the probabilistic sufficiency factor is simply the percentile of the safety factor that corresponds to the target failure probability. Qu and Haftka (2003) proposed to compute Psf by Monte Carlo simulation. Once Psf is known, the RBDO problem can be written as:

$$\min_{d} \; f(d) \quad \text{subject to} \quad 1 - P_{sf} \le 0 \;\text{ and }\; g_j(d) \le 0 \qquad (26)$$

It can be seen that the sufficiency factor constraint is equivalent to the target reliability constraint. The drawback of the method lies in the use of Monte Carlo simulations, which is generally very time consuming and presents significant numerical noise.
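A rough Monte Carlo sketch of equation (25) follows: the safety factor γ is sampled as the strength-to-stress ratio and Psf is read as the Pft-percentile of the sample; the distributions and values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
Pft = 1e-3                                   # prescribed failure probability

# Hypothetical strength and stress models
strength = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)
stress = rng.gumbel(loc=150.0, scale=20.0, size=n)

gamma = strength / stress                    # global safety factor samples
Psf = np.quantile(gamma, Pft)                # eq. (25): Pr[gamma <= Psf] = Pft

print(f"probabilistic sufficiency factor Psf = {Psf:.3f}")
# Psf >= 1 indicates that the target failure probability is satisfied, eq. (26).
```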
5.5 Single-loop double vector (SLDV)
In the single-loop double-vector (SLDV) method, there are two variable vectors: one for the mean values (design parameters) and one for the MPP values. This method has been improved by Chen et al. (1997), who proposed to work with only one vector, leading to the single-loop single-vector (SLSV) approach, on the basis of a first order approximation of the limit state.
6 System reliability optimization
Progress in System Reliability-Based Design Optimization (SRBDO) has been relatively slow, because it depends on the system reliability analysis, where the computational time and the numerical instability lead to many difficulties in the SRBDO formulation. The identification of the relevant failure modes is a time consuming process, mainly because the design variables change at each iteration of the optimization procedure. Consequently, the relevant failure modes also change during the optimization iterations (e.g. a failure mode which is the most important at a given iteration may become negligible in the following iterations, due to design variable changes). The redundancy of the system must be taken into account in the SRBDO process (Murotsu et al. 1994); it was found that a non-redundant structure needs a higher safety margin than a redundant one in order to achieve the same acceptable level of damage tolerance. Moses (1997) indicated that although many efforts have been made to compute the system reliability, the fundamental idea of the system reliability problem is to extrapolate the analysis of component reliability and performance to an overall structural risk assessment. Feng and Moses (1986) proposed an algorithm to identify the failure modes through incremental loading models, in order to introduce them in the system reliability constraint of the reliability-based optimization formulation. Different frameworks have been proposed for SRBDO: the system reliability may be considered as a single probabilistic constraint (Moses 1997); the system reliability may be replaced by the reliability indexes of the significant failure modes (Rackwitz 2001; Enevoldsen and Sørensen 1993), which is an alternative to the original formulation; and finally the system reliability constraint and the component reliability constraints may be simultaneously taken into account. An alternative approach to SRBDO can be based on multi-criteria optimization (Frangopol 1995; Kuschel and Rackwitz 2000). In an early work, Enevoldsen and Sørensen (1993) proposed a sequential strategy to solve the RBDO of structural systems. The use of sensitivity operators for the cost and the reliability index allows the authors to ensure stable convergence of the RBDO algorithm under system reliability constraints. More recently, Kuschel and Rackwitz (2000) proposed a mono-level approach for the reliability-based optimization of series systems. From another point of view, Fu and Frangopol (1990) proposed to deal with the RBDO of structural systems as a multi-objective optimization problem. This leads to a consistent decision-making procedure for structural design and assessment. Although the system is usually considered either as a macro-component or as independently acting components for which safety constraints are specified separately, Aoues and Chateauneuf (2007) proposed a scheme for consistent RBDO of structural systems. The basic idea consists in updating the component target safety levels in order to meet the overall system target and to avoid over-designed components. In the main optimization loop, the cost function is minimized under the constraints that the component reliability indexes must satisfy the adapted target values. In the inner updating procedure, the target indexes are adjusted to
meet the system reliability requirement. The proposed formulation is written in the form:

$$\begin{aligned}
\min_{d} \;\; & C(d) \\
\text{subject to} \;\; & \beta_j(d) \ge \beta_{tj}^{\mathrm{Updated}} \\
& d^L \le d \le d^U
\end{aligned} \qquad (27)$$
where β_tj^Updated is the updated target reliability index for the jth failure mode and βj(d) is the reliability index for the considered design configuration. The system reliability depends on its component reliabilities as well as on the correlations ρjk between the different failure modes; it can be expressed as:

$$\beta_{sys} = f(\beta, \rho) \qquad (28)$$
where β is the reliability index vector and ρ is the matrix of correlations between the failure modes. The embedded updating procedure is expressed as a least-square minimization of the difference between the updated targets and the actual indexes, under the constraint of satisfying the required system safety. The procedure aims to solve the system:

$$\begin{aligned}
\min_{\beta_{tj}^{\mathrm{Updated}}} \;\; & \sum_{j=1}^{m_p} \big(\beta_{tj}^{\mathrm{Updated}} - \beta_j\big)^2 \\
\text{subject to} \;\; & \beta_{sys}\big(\beta_{tj}^{\mathrm{Updated}}, \rho_{jk}\big) \ge \beta_{t\_sys}
\end{aligned} \qquad (29)$$
where the updated targets β_tj^Updated are themselves the optimization parameters. The optimal solution corresponds to the best quadratic fit between the component indexes βj and the corresponding target indexes β_tj^Updated, under the constraint of satisfying the system target; this constraint is always active at the optimal solution: β_sys(β_tj^Updated, ρjk) = β_t_sys. The updating procedure plays a key role as it searches for the best values of the target indexes which drive the reliability indexes of the structural components.
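A compact sketch of the target-updating step of equations (28)-(29) is given below for a hypothetical series system with three failure modes. For simplicity, the system index is computed assuming independent failure modes (the correlation matrix ρ of eq. (28) is ignored), which is an assumption of this illustration only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def beta_sys(betas):
    """System index for a series system, assuming independent failure modes."""
    p_sys = 1.0 - np.prod(1.0 - norm.cdf(-np.asarray(betas)))
    return -norm.ppf(p_sys)

# Hypothetical current component reliability indexes and system target
beta_j = np.array([3.2, 3.6, 4.1])
beta_t_sys = 3.7

# Updating procedure, eq. (29): least-square fit of the updated targets
res = minimize(lambda bt: np.sum((bt - beta_j) ** 2),
               x0=beta_j,
               constraints=[{'type': 'ineq',
                             'fun': lambda bt: beta_sys(bt) - beta_t_sys}],
               method='SLSQP')

beta_t_upd = res.x
print("updated component targets:", np.round(beta_t_upd, 3))
print("system index at targets  :", round(beta_sys(beta_t_upd), 3))
```

At the solution the system constraint is active, consistently with the observation above that the optimum always lies on β_sys = β_t_sys.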
7 Numerical applications
In this section, three examples are presented in order to illustrate the application of RBDO methods. In the first example, a steel hook is optimized by a mono-level approach. The second example concerns a bracket structure, where different methods are compared on a highly nonlinear performance function. Finally, statically determinate and redundant trusses are optimized to show the numerical efficiency of the applied algorithms.

7.1 Steel hook
The RBDO is applied to the design of the steel hook shown in Figure 9.5 (Kharmanda et al. 2002).
Figure 9.5 Hook configuration and finite element mesh (dimensions a, b, c, d, e, f, thicknesses t1, t2, t3, radii R2, R3 and height L).
Table 9.1 Random and design variables.

Variable   Mean   Std-deviation
a          ma     3
b          mb     2
c          mc     4
d          md     4
e          me     4
f          mf     4
t1         mt1    1
t2         mt2    1
t3         mt3    1
F          400    20
The hook is loaded by a shaft in contact with the circular surface of radius R1, and supported by an axis through the upper hole of radius R2. While the upper part has a uniform thickness, a trapezoidal cross-section is chosen for the curved part in order to better redistribute the stresses. The following dimensions are fixed: the loading radius R1 = 190 mm, the hanging hole R2 = 100 mm, the fillet radius R3 = 100 mm and the hook height L = 1200 mm. The material is construction steel with Young's modulus E = 200 GPa and yield stress fY = 235 MPa. The hook is modeled by 1602 solid 20-node elements, with 18,600 degrees of freedom; the applied load F = 400 kN is distributed over 30 elements on the contact surface. In this study, the optimal design is to be found under reliability considerations. The mean values of the dimensional properties (ma, mb, mc, md, me, mf, mt1, mt2, mt3) are considered as design parameters d, while the applied force F and the geometric variables (a, b, c, d, e, f, t1, t2, t3) are taken as random variables X, as given in Table 9.1. For this problem, the target reliability index is set to βt = 3.35, corresponding to a failure probability of 4 × 10^-4.
7.1.1 DDO solution
In the Deterministic Design Optimization, the structural volume is minimized under an allowable stress constraint, corresponding to the yield stress divided by a suitable safety factor:

$$\min_{d} \; V(d) \quad \text{subject to} \quad S_F\, \sigma_{max} \le f_Y \qquad (30)$$
where SF, the global safety factor related to the loading force F, is set to 1.5 according to common practice. The optimal volume is found to be VDDO-1.5 = 0.2927 × 10^8 mm^3 and the optimal design is given in Table 9.2. For this solution, a reliability analysis is carried out, leading to the reliability index β = 7.49, which is much higher than the target value βt = 3.35. Following this result, a cost reduction has been decided by decreasing the global safety factor to SF = 1.25. In this case, the optimal volume is decreased to VDDO-1.25 = 0.2508 × 10^8 mm^3 and the corresponding reliability index is β = 3.64 > βt. Three disadvantages can be observed in the DDO approach: the first one is that the safety factors given in recommendations are not always suitable for structural systems, the second one is the difficulty of a reasonable choice of the safety factor because of its critical role on manufacturing cost and structural reliability, and the third one is the bad distribution of the safety margins among the different variables due to the global scaling of the safety level. For these reasons, it is very important to integrate the reliability analysis in the optimization process.
RBD O soluti on
The Reliability-Based Design Optimization is formulated by introducing explicitly the reliability constraint: min V(d) d
subject to
β(d) ≥ βt
(31)
where the reliability index is calculated by the solution of: min u(x, d) x
subject to
σmax (x, d) ≥ fY
(32)
In this formulation, the stress becomes a random function. The hybrid formulation is applied to solve the RBDO problem, leading to the optimal design parameters and partial safety factors indicated in Table 9.2. While DDO is based on global safety factor for loading F (SF = 1.25), RBDO shows that some parameters of the structure such as dimensions, can also play a very important role on the structural safety (γt2 = 1.306 and γF = 1.068). Therefore, RBDO satisfies the required reliability level by adding or removing material where it is necessary, and hence improves the structural performance
234
Structural design optimization considering uncertainties Table 9.2 Optimal solutions obtained by DDO and RBDO. Variable
a b c d e f t1 t2 t3 F Volume Reliability
DDO SF = 1.5
SF = 1.25
125.7 74.5 187.1 216.5 185.8 173.5 39.4 10.4 13.2 – 0.2927 × 108 7.49
135.4 78.1 191.1 219.3 191.4 181.2 30.8 10.0 10.5 – 0.2508 × 108 3.64
DDO stress distribution
RBDO optimum
Safety factors
110.7 80.0 198.2 198.2 198.1 151.6 27.8 13.1 10.1 – 0.235 × 108 3.36
1.005 1.006 1.001 1.001 1.002 1.000 1.007 1.306 1.006 1.068
RBDO stress distribution
Figure 9.6 Optimal solutions for DDO and RBDO.
by reducing the structural volume in uncritical regions. This can be understood as a better distribution of the safety factors. Figure 9.6 shows the stress distributions resulting from DDO and RBDO procedures. It can be seen that stress field is more homogeneous for Reliability-Based Design Optimization than the distribution in the Deterministic Design Optimization. 7.1.3 Bra c k et st r u ct u r e Figure 9.7 shows a two-member bracket supporting a vertical load P applied at a distance L from the wall hinge. The member AB, with 60◦ of inclination, is linked to member CD through a pin-joint at B. Both members have rectangular cross-sections: wAB × t for AB and wCD × t for CD, w stands for width and t for thickness. It is aimed
A d v a n c e s i n s o l u t i o n m e t h o d s f o r r e l i a b i l i t y-b a s e d d e s i g n o p t i m i z a t i o n
235
L L /3
C
D
B PW
E 60˚ E
w
t Cross section E-E A
Figure 9.7 The parameterization of the bracket structure.
It is aimed to optimally define the values of the parameters t, wAB and wCD, by considering the uncertainties in material and geometrical properties. The two design constraints are:
• the maximum bending stress σb in member CD must be less than the yield stress fY, taken as 225 MPa for the steel used. The maximum bending stress σb is located at point B and is given by the following formulas:

$$\sigma_b = \frac{6\, M_B}{w_{CD}\, t^2}, \qquad \text{with: } M_B = \frac{P L}{3} + \frac{\rho g\, w_{CD}\, t\, L^2}{18} \qquad (33)$$

• the compression force FAB in member AB must be less than the buckling load Fb. The normal force in member AB is given by:

$$F_{AB} = \frac{1}{\cos\theta}\left(\frac{3P}{2} + \frac{3\rho g\, w_{CD}\, t\, L}{4}\right) \qquad (34)$$

and the buckling load of member AB is written as:

$$F_b = \frac{\pi^2 E I}{L_{AB}^2} = \frac{\pi^2 E\, t\, w_{AB}^3}{12\left(\dfrac{2L}{3\sin\theta}\right)^{2}} \qquad (35)$$

Therefore, it is aimed to minimize the structural weight under the two limit states:

$$G_1 = f_Y - \sigma_b(w_{CD}, t, L, P), \qquad G_2 = F_b(w_{AB}, t, L) - F_{AB}(w_{CD}, t, L, P) \qquad (36)$$
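For concreteness, the limit states (33)-(36) translate directly into code. In the sketch below, the loading and material values are the mean values of Table 9.3, the section sizes are trial values close to the RBDO optimum of Table 9.4, and θ is taken as 60° measured as in equation (35); the resulting margins are only indicative and depend on these assumptions.

```python
import numpy as np

g0 = 9.81                                   # gravity (m/s^2)

def bracket_limit_states(P, L, w_AB, w_CD, t, rho, E, f_Y,
                         theta=np.radians(60)):
    """Limit states G1 (bending) and G2 (buckling), eqs (33)-(36);
    failure when G <= 0."""
    M_B = P * L / 3.0 + rho * g0 * w_CD * t * L**2 / 18.0            # eq. (33)
    sigma_b = 6.0 * M_B / (w_CD * t**2)
    F_AB = (3.0 * P / 2.0
            + 3.0 * rho * g0 * w_CD * t * L / 4.0) / np.cos(theta)   # eq. (34)
    L_AB = 2.0 * L / (3.0 * np.sin(theta))
    F_b = np.pi**2 * E * t * w_AB**3 / (12.0 * L_AB**2)              # eq. (35)
    return f_Y - sigma_b, F_b - F_AB                                 # eq. (36)

# Mean values of Table 9.3 with trial section sizes (SI units)
G1, G2 = bracket_limit_states(P=100e3, L=5.0, w_AB=0.0608, w_CD=0.1569,
                              t=0.2091, rho=7860.0, E=200e9, f_Y=225e6)
print(f"G1 = {G1/1e6:.1f} MPa margin, G2 = {G2/1e3:.1f} kN margin")
```

In an RBDO run, these two functions would be the performance functions handed to the reliability analysis, with the random variables of Table 9.3 replacing the mean values.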
The deterministic optimization is performed by using partial safety factors corresponding to the live and dead load factors γs = 1.5 and γp = 1.35, respectively. For the bending stress, the partial factor is γr = 1.5; hence, in DDO, the admissible bending stress is fY/γr. The random variables are given in Table 9.3, where the design variables are considered as the distribution means of the geometrical properties.
Table 9.3 Statistical data of the random variables.

Parameter            Symbol      Mean    C.O.V   Distribution
Applied load         P (kN)      100     0.15    Gumbel
Young's modulus      E (GPa)     200     0.08    Gumbel
Yield stress         fy (MPa)    225     0.08    Lognormal
Unit mass            ρ (kg/m3)   7860    0.10    Weibull
Length               L (m)       5       0.05    Normal
Width of member AB   wAB (m)     wAB     0.05    Normal
Width of member CD   wCD (m)     wCD     0.05    Normal
Thickness            t (m)       t       0.05    Normal
Table 9.4 Summary of the numerical results in the design of the bracket structure.

Design method   Weight (kg)   βG1    βG2    Iterations   CPU    G-eval   wAB (cm)   wCD (cm)   t (cm)
DDO             787.17        4.86   2.94   9            0.07   40       6.13       20.21      26.94
RIA             678.18        1.99   2.00   5            0.45   2340     6.08       15.68      20.91
PMA             678.88        2.00   2.01   7            0.57   2736     6.08       15.69      20.91
SORA            678.88        2.00   2.01   22           0.39   1340     6.08       15.69      20.91
For a target reliability βt = 2 (corresponding to a failure probability of about 2.3%), the reliability-based optimization problem is written:

$$\min_{w_{AB},\, w_{CD},\, t} \; W = \rho g\, t\, L \left(\frac{4\sqrt{3}}{9}\, w_{AB} + w_{CD}\right) \quad \text{subject to} \quad \beta_1 \ge 2 \;\text{ and }\; \beta_2 \ge 2 \qquad (37)$$
where β1 and β2 are the reliability indexes related to G1 and G2, respectively. Table 9.4 compares the optimization results and the computational effort for the different methods. It can first be seen that deterministic optimization often leads to high cost and reliability, while the RBDO approaches lead to a better fit of the target safety. The Reliability Index Approach (RIA), the Performance Measure Approach (PMA) and the Sequential Optimization with Reliability Assessment (SORA) lead to the same design point, corresponding to a weight reduction of 12.7%. However, the computational cost of SORA is much lower than for the other RBDO methods: 1340 evaluations of the performance function instead of 2340 and 2736 evaluations for RIA and PMA, respectively. At the optimal RBDO point, Table 9.5 indicates the Most Probable Failure Point and the corresponding partial safety factors. The load factor is lower for the buckling limit state, as the reliability is more affected by the other random variables, especially by the width wAB. Compared to DDO, these results show the advantage of RBDO in adjusting the partial safety factors in terms of the reliability sensitivity with respect to the uncertain variables, which have different influences on the different failure modes.
Table 9.5 The most probable point and partial safety factors for the RBDO solution.

                 P* (kN)   E* (GPa)   fY* (MPa)   ρ* (kg/m3)   L* (m)   wAB* (cm)   wCD* (cm)   t* (cm)
G1               126.74    197        212.48      7739         5.10     6.08        15.37       20.06
G2               117.45    191        224.28      7754         5.17     5.67        15.70       20.56
Safety factors
γG1              1.26      1.04       1.06        1.01         1.03     1.00        1.02        1.04
γG2              1.17      1.05       1.00        1.01         1.03     1.07        1.00        1.02
Figure 9.8 Inclined bracket structure (inclination angle α of member AB, height h, cross-section w × t).
The bracket structure is now considered by introducing the inclination of the bar AB as an additional design parameter (Figure 9.8). This inclination is defined by the angle α. The normal force FAB in member AB takes the form:

$$F_{AB} = \frac{L}{h \sin\theta}\left(P + \frac{\rho g\, w_{CD}\, t\, L}{2\cos\alpha}\right) \qquad (38)$$

and the maximum bending moment is:

$$M_B = \frac{P L}{3} + \frac{\rho g\, w_{CD}\, t\, L^2}{18\cos\alpha} \qquad (39)$$
The angle α introduces a high degree of nonlinearity in the limit state functions, allowing the stability of the RBDO methods to be tested. Even with many initial trials, the Reliability Index Approach (RIA) could not converge, because of the limit state nonlinearity. The Performance Measure Approach (PMA) did not converge when the AMV algorithm (Advanced Mean Value method) was applied to perform the inverse reliability analysis and to estimate the performance measure. However, the use of the HMV algorithm (Hybrid Mean Value) allows PMA to converge. This result confirms that the HMV algorithm is more convenient for highly nonlinear limit states. The optimal inclination of the bracket is 25.4° for DDO and 24.5° for RBDO. Figure 9.9 shows the convergence of the performance measure and the reliability index during the optimization iterations.
Figure 9.9 (a) Performance measure in PMA and SORA. (b) Reliability index during PMA iterations.
Table 9.6 Numerical results in the design of the bracket structure with inclination.

Design method   Weight (kg)     βG1    βG2    Iterations   G-eval   CPU (s)   wAB (cm)   wCD (cm)   t (cm)   α (°)
DDO             716.96          4.87   2.77   22           147      0.16      5.36       20.24      27.00    25.43
RIA             Not converged   –      –      –            –        –         –          –          –        –
PMA             556.44          2.07   2.00   13           6790     1.42      5.37       15.69      20.93    24.64
SORA            556.45          2.07   2.00   30           1744     0.51      5.37       15.69      20.93    24.53
For both limit states, the reliability indexes converge to the target value βt = 2. It can generally be observed that PMA converges more slowly than SORA for this kind of problem. Table 9.6 confirms these results by indicating 6790 mechanical calls for PMA, against only 1744 calls for SORA. Once again, SORA has proven to be more efficient and robust than RIA and PMA.

7.2 Timber truss
The design of timber trusses is usually carried out by checking the ultimate cross-section capacities with respect to the ultimate limit state. However, as these structures are made of the assembly of several members, the overall ultimate capacity is highly conditioned by the degree of redundancy. In many structures, several components can reach their ultimate capacity largely before the overall structural failure load is reached. On the other hand, the structure could contain a number of critical members that produce the overall failure if any one of them fails, even for redundant structures. In this context, the system reliability can be greatly different from the reliability of its components. The numerical applications are carried out for two roof trusses (Chateauneuf and Noret 2005), where the depth and the breadth of the horizontal, bracing and upper roof members are considered as design variables.
Table 9.7 Model parameter data.

Parameter                  Value
Young's Modulus            11 GPa
Poisson's ratio            0.25
Timber density             420 kg/m3
Distance between trusses   5 m
Truss span                 20 m
Truss height               5.77 m
Beam breadth               0.10 m
Table 9.8 Statistical data of the random variables.

Parameter                                Characteristic value   Mean     C.O.V   Distribution
Permanent load (roof load) (kN/m2)       479.2                  384.6    0.15    Normal
Variable load (concentrated load) (kN)   1422.3                 1071.4   0.25    Gumbel
Snow (kN/m2)                             932.5                  625      0.30    Normal
Wind (kN/m2)                             400.8                  301.8    0.20    Weibull
Bending strength (MPa)                   14.16                  24       0.25    Lognormal
Tension strength (MPa)                   8.26                   14       0.25    Lognormal
Compression strength (MPa)               12.39                  21       0.25    Lognormal
Young's Modulus (MPa)                    8654.8                 11 000   0.13    Lognormal
The two trusses correspond to statically determinate and indeterminate structures, respectively. In the RBDO analysis, the uncertainties of the strength and of the applied loads are modeled by random variables, as detailed in Table 9.8. The characteristic values correspond to a percentile of 95% for the loading and 5% for the timber strength. The target failure probability is set to 10^-4, which corresponds to βc = 3.7. The RBDO algorithms are implemented in the Matlab environment (Mathworks Inc. 2007), where the optimization toolbox is applied to solve the system. The mechanical computation is carried out by the finite element method, using the CALFEM library (CALFEM 2007). The comparative study is performed for different RBDO methods. The limit state functions considered in this application are:

$$\begin{cases}
G = \left(\dfrac{\sigma_{c,d}}{f_{c,d}}\right)^{2} + \dfrac{\sigma_{m,d}}{f_{m,d}} \le 1 & \text{in compression} \\[2mm]
G = \dfrac{\sigma_{t,d}}{f_{t,d}} + \dfrac{\sigma_{m,d}}{f_{m,d}} \le 1 & \text{in tension}
\end{cases} \qquad (40)$$
where σc,d , σt,d , σm,d , fc,d , ft,d and fm,d are respectively the design values of stress and strength in compression, tension and bending.
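The two member checks of equation (40) translate directly into code; in the study, the member stresses come from the CALFEM finite element analysis, whereas the values below are hypothetical placeholders for a single member.

```python
def timber_checks(sigma_c, sigma_t, sigma_m, f_c, f_t, f_m):
    """Combined axial-bending utilizations of eq. (40); values <= 1 are safe."""
    compression = (sigma_c / f_c) ** 2 + sigma_m / f_m
    tension = sigma_t / f_t + sigma_m / f_m
    return compression, tension

# Hypothetical design stresses and strengths (MPa) for one member
comp, tens = timber_checks(sigma_c=6.0, sigma_t=0.0, sigma_m=5.0,
                           f_c=21.0, f_t=14.0, f_m=24.0)
print(f"compression check = {comp:.2f}, tension check = {tens:.2f}")
```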
Figure 9.10 Truss with rigid joints.

Table 9.9 Initial design values and bounds.

Design variable                      Initial design   Lower bound   Upper bound
D1 (members 1, 2, 3) (cm)            30               2             100
H1 (members 1, 2, 3) (cm)            10               2             100
D2 (members 3, 4, 5, 6) (cm)         40               2             100
H2 (members 3, 4, 5, 6) (cm)         10               2             100
D3 (members 7, 8, 9, 10, 11) (cm)    10               2             100
H3 (members 7, 8, 9, 10, 11) (cm)    10               2             100
Table 9.10 Optimal results according to different RBDO methods.

Design method   Optimal weight (kg)   min{β1, β2, …, β11}   Optimization iterations   Reliability iterations   FEA calls   CPU (s)
DDO             928.51                4.37                  6                         –                        49          0.36
RIA             802.61                3.70                  12                        1095                     103 521     749
PMA             802.61                3.70                  12                        807                      70 323      510
SORA            802.49                3.70                  15                        171                      2610        20
In the Deterministic Design Optimization, the partial safety factors are taken from the Eurocodes; the strength modification factor is also introduced, in DDO as well as in RBDO, to account for humidity and load duration.

7.2.1 Statically determinate truss
The truss, illustrated in Figure 9.10, is formed by 11 rectangular timber members. The cross-sections are rectangular with breadth b and depth d, where the initial values of the six design variables are given in Table 9.9. It should be mentioned that this truss presents 11 performance functions, involving 11 reliability constraints in the RBDO procedure. The aim is thus to keep the lowest reliability index above the target level of 3.7. Table 9.10 compares the results obtained by the different methods: DDO, RIA, PMA and SORA. All the RBDO approaches converge to the optimal weight of 802.6 kg, which is 14% lower than the deterministic result.
Table 9.11 Iterations of the reliability analysis.

Method   0    1    2    3    4    5    6    7    8    9    10   11   12   Total
RIA      88   71   89   89   89   89   89   90   89   78   78   78   78   1095
PMA      63   62   62   62   62   62   62   62   62   62   62   62   62   807
SORA     69   51   51   –    –    –    –    –    –    –    –    –    –    171
Table 9.12 Numerical comparison of the optimal design.

Method   d1      b1      d2      b2      d3      b3
DDO      14.64   13.91   36.53   18.26   11.74   11.15
RIA      13.11   12.45   34.39   17.19   10.72   10.20
PMA      13.11   12.45   34.39   17.19   10.72   10.20
SORA     13.11   12.45   34.39   17.19   10.72   10.18
While the number of optimization iterations is comparable for the different RBDO methods, the number of reliability iterations is much higher for RIA and even for PMA. The number of Finite Element Analyses (FEA) is huge for these methods: more than 100,000 runs for RIA and 70,000 runs for PMA. It should be noted that these FEA include those necessary to compute the constraint gradients by finite difference techniques. The fifth column in Table 9.10 gives the number of iterations required either to perform the reliability analyses in RIA or the inverse reliability analyses in PMA and SORA. Table 9.11 shows how these reliability iterations (inner loop) are distributed over the optimization iterations (outer loop). In the decoupled method (i.e. SORA), this number corresponds to the number of reliability iterations at each cycle of the equivalent deterministic design. The optimal designs obtained by these optimization methods are detailed in Table 9.12. All the RBDO methods converge to the same optimal design, where all the dimensions are lower than those from the deterministic optimization. The iteration history is illustrated in Figure 9.11, where the decoupled approach (SORA) can be easily distinguished. Figure 9.12 compares the characteristics of the RBDO methods on the basis of the evaluation criteria: cost, safety, number of iterations, FEA calls and CPU time.
7.2.2 Braced truss
Let us consider the same truss layout with additional members forming an X bracing system; the truss has now 25 members. The cross-section depths and breadths are noted d1 , b1 for members 1 to 6, d2 , b2 for members 7 to 12, d3 , b3 for members 13 to 25. The redundant truss configuration implies a large amount of internal force redistribution during the optimization process. Among the 25 limit states, the critical failure modes change along the optimization iterations. The structural response becomes strongly nonlinear due to interdependence of design and random variables.
Figure 9.11 History of the structural weight (DDO, RBDO-RIA, RBDO-PMA and SORA).
Figure 9.12 Numerical performance of the design optimization methods (optimal cost, reliability index, iterations, FEA calls and CPU time).
The Reliability Index Approach could not converge in this example because of the limit state nonlinearity and of the probabilistic transformations. In addition, the bracing members in this example have large reliability indexes; their evaluation implies a very high computation time and leads to the divergence of the optimization algorithm. The low number of finite element calls in SORA explains why the CPU time is so low for this method compared to PMA. This advantage is even larger for more complex structural models.
Figure 9.13 Braced truss with 25 members (L1 to L25).
Table 9.13 Optimization results for the truss with X bracing.

Design method   Optimal weight (kg)   min{β1, β2, …, β25}   Optimization iterations   Reliability iterations   FEA calls   CPU (s)
DDO             1080.54               5.234                 5                         –                        42          1.67
RIA             No convergence        –                     –                         –                        –           –
PMA             682.24                3.700                 6                         989                      76 153      1175
SORA            677.87                3.699                 17                        327                      5962        97
Table 9.14 Optimal designs of the truss with X bracing.

Method   d1      b1      d2      b2      d3      b3
DDO      30.23   15.11   30.18   15.09   10.43   9.90
PMA      23.02   11.51   24.45   12.22   8.51    8.08
SORA     16.51   15.69   24.49   12.24   8.49    8.07
Although PMA and SORA lead to almost the same structural weight and reliability index, the optimal design parameters are quite different in the two approaches, especially for the dimensions b1 and d1, as indicated in Table 9.14. Figure 9.14 shows the iteration history for the three methods: DDO, PMA and SORA. Although SORA requires more iterations than PMA, it involves a lower number of reliability analyses, leading to a global reduction of the computational cost. It proves, once more, its capacity to deal with engineering structures by ensuring convergence stability and efficiency.
8 Conclusions
As briefly described in the previous sections, a very intensive research activity is being carried out in the field of RBDO solution methods. Three approaches are usually adopted: two-level, mono-level and decoupled approaches. Although significant progress has been made in developing efficient numerical methods, the application to practical engineering structures is still a challenge, given the complexity of realistic industrial systems. In order to select a method, the designer has to search for a reasonable compromise between the accuracy, the efficiency and the robustness of the applied RBDO algorithm.
Figure 9.14 History of the optimal design (DDO, RBDO-PMA and SORA).
As a basic choice, the two-level approach requires less development effort to carry out RBDO. In this category, the performance measure approach leads to a more robust and efficient scheme than the conventional reliability index approach. Globally, the decoupled approaches, such as the Sequential Optimization with Reliability Assessment, are very interesting, as they are stable and highly efficient, since many reliability analyses can be avoided. In all cases, the RBDO algorithms should be considered with special care and the results should be carefully validated by the designer, especially for complex structural systems where several failure points and local optima often co-exist.
References Aoues, Y. & Chateauneuf, A. 2007. Reliability-based optimization of structural systems by adaptive target safety application to RC frames. Structural Safety. Article in Press. CALFEM, A finite element toolbox to MATLAB, Version 3.3, Division of Structural Mechanics and Division of Solid Mechanics, Lund University, Sweden, http://www.civeng.ucl.ac.uk/ Chateauneuf, A. & Noret, E. 2005. System reliability-based optimization of redundant timber trusses. In: J.D. Sørensen (ed.). Reliability and optimization of structural systems, Proceedings of the IFIP WG7.5 Working Conference on reliability and optimization of structural systems, Aalborg, Denmark, May. Chen, X., Hasselman, T.K. & Neill, D.J. 1997. Reliability based structural design optimization for practical applications. Proceedings of the 38th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and material conference, Kissimmee, Florida, AIAA-97-1403. Cheng, G., Xu, L. & Jiang, L. 2006. A sequential approximate programming strategy for reliability-based structural optimization. Computers and Structures. Article in Press.
Ching, J. & Hsu, W.-C. 2006. Transforming reliability limit state constraints into deterministic limit state constraints. Structural Safety. In Press. Choi, K.K. & Youn, B.D. 2002. On probabilistic approaches for reliability-based design optimization. In: 9th AIAA/NASA/USA/ISSMO symposium on Multidisciplinary Analysis and Optimization, September 4–6, Atlanta, GA, USA. Der Kiureghian, A. & Polak, E. 1988. Reliability-based optimal design: a decoupled approach. In: A.S. Nowak (ed.). Reliability and optimization of structural systems, Proceedings of the 8th IFIP WG7.5 Working Conference on reliability and optimization of structural systems, Chelsea, MI, USA: Book Crafters. pp. 197–205. Ditlevsen, O. & Madsen, H.O. 1996. Structural reliability methods. John Wiley & Sons. Du, X. & Chen, W. 2002. Sequential optimization and reliability assessment method for efficient probabilistic design. ASME, design engineering technical conferences and computers and information in engineering conference, DETC2002/DAC-34127, Montreal, Canada. EN 1995-1-1, Eurocode 5: Design of timber structures; Part 1-1: General rules and rules for buildings. Comité Européen de Normalisation, 2005. Enevoldsen, I. & Sørensen, J.D. 1993. Reliability-based optimization of series systems of parallel systems. Journal of Structural Engineering 119(4):1069–1084. Enevoldsen, I. & Sørensen, J.D. 1994. Reliability-based optimization in structural engineering. Structural Safety 15:169–196. Feng, Y.S. & Moses, F. 1986. A method of structural optimization based on structural system reliability. J. Struct. Mech. 14(4):437–453. Fu, G. & Frangopol, D.M. 1990. Reliability-based Vector optimization of structural systems, J. of Struct. Engrg. ASCE 116(8):2143–2161. Hasofer, A.M. & Lind, N.C. 1974. An Exact and Invariant First Order Reliability Format. J. Eng. Mech. ASCE 100, EM1:11–121. Kaymaz, I. & Marti, K. 2006. Reliability-based design optimization for elastoplastic mechanical structures. Computers and Structures. Article In Press. Kharmanda, G., Mohamed-Chateauneuf, A. & Lemaire, M. 2002. Efficient reliability-based optimization using a hybrid space with application to finite element analysis. Journal of Structural and Multidisciplinary Optimization 24(3):233–245. Kirjner-Neto, C., Polak, E. & Der Kiureghian, A. 1998. An outer approximations approach to reliability-based optimal design of structures. Journal Optim. Theory Appl. 98(1):1–16. Koch, P.N., Yang, R.J. & Gu, L. Design for six sigma through robust optimization. Structural and Multidisciplinary optimization. In Press. Kuschel, N. & Rackwitz, R. 1997. Two basic problems in reliability-based structural optimization. Mathematical Methods of Operations Research 46:309–333. Kuschel, N. & Rackwitz, R. 2000. A new approach for structural optimization of series system. In: R.E. Melchers, M.G. Stewart (ed.). Proceedings of the 8th International conference on applications of statistics and probability (ICASP) in Civil engineering reliability and risk analysis, Sydney, Australia, December 1999, Vol. 2, pp. 987–994. Lee, J.O., Yang, Y.S. & Ruy, W.S. 2002. A comparative study on reliability index and target performance based probabilistic structural design optimization. Computers and Structures (80):257–269. Lemaire, M., in collaboration with Chateauneuf, A. & Mitteau, J.C. 2006. Structural reliability. ISTE, UK. Madsen, H.O. & Friis Hansen, P. 1992. Comparison of some algorithms for reliability-based structural optimization and sensitivity analysis. In: R. Rackwitz & P. 
Thoft-Christensen (eds): Reliability and optimization of structural systems, Proceedings of the 4th IFIP WG7.5 Working conference on Reliability and Optimization of Structural Systems, Munich, Germany, September 1991. Berlin: Springer. pp. 443–451. Mathworks Inc. www.mathworks.com, 2007.
Moses, F. 1997. Problems and prospects of reliability based optimization. Engineering Structures 19(4):293–301. Murotsu, Y., Shao, S. & Watanabe, A. 1994. An approach to reliability-based optimization of redundant structures. Structural Safety 16:133–143. Nikolaidis, E. & Burdisso, R. 1988. Reliability-based optimization: a safety index approach. Computer and Structures 28(6):781–788. Qu, X. & Haftka, R.T. 2003. Design under uncertainty using Monte Carlo simulation and probabilistic sufficiency factor. In: Proceedings of DET’03 conference, Chicago, IL,USA. Rackwitz, R. 2001. Reliability analysis, overview and some perspectives. Structural Safety 23:366–395. Royset, J.O., Der Kiureghian, A. & Polak, E. 2001. Reliability-based optimal structural design by the decoupling approach. Reliability Engineering and System Safety 73:213–221. Torng, T.Y. & Yan, R.J. 1993. Robust structural system design using a system reliabilitybased design optimization method. In: P.D. Spanos & Y.T. Wu (ed.), Probabilistic Mechanics: Advances in structural reliability methods, Springer-Verlag, NY:534–549. Tu, J., Choi, K.K. & Park, Y.H. 1999. A new study on reliability-based design optimization. Journal of Mechanical Design 121:557–564. Tu, J., Choi, K.K. & Park, Y.H. 2000. Design potential method for robust system parameter design. AIAA Journal 39(4):667–677. Youn, B.D. & Choi, K.K. 2004. Selecting probabilistic approaches for reliability-based design optimization. AIAA Journal 42(1):124–131. Youn, B.D. & Choi, K.K. 2004. A new response surface methodology for reliability-based design optimization. Computeres and Structures 82:241–256. Yi, P., Cheng, G. & Jiang, L. 2006. A sequential approximate programming strategy for performance-measure based probabilistic structural design optimization. Structural Safety. Article in Press. Wu, Y.T., Shin, Y., Sues, R. & Cesare, M. 2001. Safety factor based approach for probabilitybased design optimization. In: Proceedings of the 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Seattle, WA, USA, Paper n◦ AIAA 2001-1522. Zou, T., Mahadevan, S. & Sopory, A. 2004. A reliability-based design method using simulation techniques and efficient optimization approach. ASME Design Engineering technical conferences, Salt Lake City, Utah, DETC2004/DAC-57457.
Chapter 10
Non-probabilistic design optimization with insufficient data using possibility and evidence theories Zissimos P. Mourelatos & Jun Zhou Oakland University, Rochester, MI, USA
ABSTRACT: Early in the engineering design cycle, it is difficult to quantify product reliability due to insufficient data or information for modeling the uncertainties. Design decisions are therefore, based on fuzzy information that is vague, imprecise, qualitative, linguistic or incomplete. The uncertain information is usually available as intervals with lower and upper limits. In this chapter, the possibility and evidence theories are used to account for uncertainty in design with incomplete information. Possibility-based and evidence-based design optimization methods are presented which handle a combination of probabilistic and non-probabilistic design variables. Also, a computationally efficient sequential possibility-based design optimization (SPDO) method is implemented, which decouples the design loop and the reliability assessment of each constraint. Two numerical examples demonstrate the application of possibility and evidence theories in design and highlight the trade-offs among reliability-based, possibility-based and evidence-based designs.
1 Introduction
Engineering design under uncertainty has recently gained a lot of attention. Uncertainties are usually modeled using probability theory. In Reliability-Based Design Optimization (RBDO), variations are represented by standard deviations which are typically assumed constant, and a mean performance is optimized subject to probabilistic constraints (Lee et al. 2002, Liang et al. 2007, Tu et al. 1999, Wu et al. 2001, Youn et al. 2001). In general, probability theory is very effective when sufficient data is available to quantify uncertainty using probability distributions. However, when sufficient data is not available or there is a lack of information due to ignorance, the classical probability methodology may not be appropriate. For example, during the early stages of product development, quantification of the product's reliability or compliance to performance targets is practically very difficult due to insufficient data for modeling the uncertainties. A similar problem exists when the reliability of a complex system is assessed in the presence of incomplete information on the variability of certain design variables, parameters, operating conditions, boundary conditions etc. Uncertainties can be classified into two general types: aleatory (stochastic or random) and epistemic (subjective) (Klir & Filger 1988, Klir & Yuan 1995, Oberkampf et al. 2001, Sentz & Ferson 2002, Yager et al. 1994). Aleatory or irreducible uncertainty is related to inherent variability and is efficiently modeled using probability theory. However, when data is scarce or there is a lack of information, the probability theory is not
useful because the needed probability distributions cannot be accurately constructed. In this case, epistemic uncertainty, which describes subjectivity, ignorance or lack of information, can be used. Epistemic uncertainty is also called reducible because it can be reduced with increased state of knowledge or collection of more data. Formal theories to handle uncertainty have been proposed in the literature including evidence theory or Dempster – Shafer theory (Klir & Filger 1988, Yager et al. 1994), possibility theory (Dubois & Prade 1988) and interval analysis (Moore 1966). Two large classes of fuzzy measures, called belief and plausibility measures, respectively, characterize the mathematical theory of evidence. They are mutually dual in the sense that one of them can be uniquely determined from the other. Evidence theory uses plausibility and belief (upper and lower bounds of probability) to measure the likelihood of events. When the plausibility and belief measures are equal, the general evidence theory reduces to the classical probability theory. Therefore, the classical probability theory is a special case of evidence theory. Possibility theory handles epistemic uncertainty if there is no conflicting evidence among experts (Klir & Filger 1988). It uses a special subclass of dual plausibility and belief measures, called possibility and necessity measures, respectively. In possibility theory, a fuzzy set approach is common, where membership functions characterize the input uncertainty (Zadeh 1965). Even if a probability distribution is not available due to limited information, lower and upper bounds (intervals) on uncertain design variables are usually known. In this case, interval analysis (Moore 1966, Muhanna & Mullen 1999, Muhanna & Mullen 2001) and fuzzy set theory (Zadeh 1965) have been extensively used to characterize and propagate input uncertainty in order to calculate the interval of the uncertain output. An efficient method for reliability estimation with a combination of random and interval variables is presented in (Penmetsa & Grandhi 2002). However, it is not implemented in a design optimization framework. A few design optimization studies have been also reported, where some or all of the uncertain design variables are in interval form (Du, Sudjianto & Huang 2005, Gu et al. 1998, Rao & Cao 2002). Optimization with input ranges has also been studied under the term antioptimization (Elishakoff et al. 1994, Lombardi & Haftka 1998). Anti-optimization is used to describe the task of finding the “worst-case’’ scenario for a given problem. It solves a two-level (usually nested) optimization problem. The outer level performs the design optimization while the inner level performs the anti-optimization. The latter seeks the worst condition under the interval uncertainty. A decoupled approach is suggested in (Lombardi & Haftka 1998) where the design optimization alternates with the anti-optimization rather than nesting the two. It was mentioned that this method takes longer to converge and may not even converge at all if there is strong coupling between the interval design variables and the rest of the design variables. A “worst-case’’ scenario approach using interval variables has also been considered in multidisciplinary systems design (Du & Chen 2000, Gu et al. 1998). Very recently, possibility-based design algorithms have been proposed (Choi et al. 2004, Mourelatos & Zhou 2005) where a mean performance is optimized subject to possibilistic constraints. 
It was shown that more conservative results are obtained compared with the probability-based RBDO. A comprehensive comparison of probability and possibility theories is given in (Nikolaidis et al. 2004) for design under uncertainty.
Evidence theory is more general than probability and possibility theories, even though the methodologies of uncertainty propagation are completely different (Bae et al. 2004, Oberkampf & Helton 2002). It can be used in design under uncertainty if limited, and even conflicting, information is provided from experts. Furthermore, the basic axioms of evidence theory allow to combine aleatory (random) and epistemic uncertainty in a straightforward way without any assumptions (Bae et al. 2004). Evidence theory however, has been barely explored in engineering design. One of the reasons may be its high computational cost due mainly to the discontinuous nature of uncertainty quantification. Evidence-based methods have been only recently used to propagate epistemic uncertainty (Bae et al. 2004) in large-scale engineering systems. Although a computationally efficient method is proposed in (Bae et al. 2004), the design issue is not addressed. We are aware of only one study which propagates epistemic uncertainty using evidence theory and also performs a design optimization (Agarwal et al. 2004). The optimum design is calculated for multidisciplinary systems under uncertainty using a trust region sequential approximate optimization method with surrogate models representing the uncertain measures as continuous functions. In this chapter, the possibility and evidence theories are used to account for uncertainty in design with incomplete information. The formal theories to handle uncertainty are first introduced using the theoretical fundamentals of fuzzy measures. The chapter highlights how the possibility theory can be used in design. A computationally efficient and accurate hybrid (global-local) optimization approach is presented for calculating the confidence level of “fuzzy’’ response, combining the advantages of the commonly used vertex and discretization methods. A possibility-based design optimization method is subsequently described where all design constraints are expressed possibilistically. The method gives a conservative solution compared with all conventional reliability-based designs obtained with different probability distributions. A general possibility-based design optimization method is also presented which handles a combination of random and possibilistic design variables. Furthermore, a sequential algorithm for possibility-based design optimization (SPDO) is presented. It decouples a double-loop PBDO process into a sequence of cycles composed of a deterministic design optimization followed by a set of worst-case possibility evaluation loops. The series of deterministic and possibility loops is repeated until convergence is achieved. A computationally efficient design optimization method is also described based on evidence theory, which can handle a mixture of epistemic and random uncertainties. The method can be used when limited and often conflicting, information is available from “expert’’ opinions. The algorithm quickly identifies the vicinity of the optimal point and the active constraints by moving a hyper-ellipse in the original design space, using an RBDO algorithm. Subsequently, a derivative-free optimizer calculates the evidence-based optimum, starting from the close-by RBDO optimum, considering only the identified active constraints. The computational cost is kept low by first moving to the vicinity of the optimum quickly and subsequently using local surrogate models of the active constraints only. The chapter is organized as follows. 
Section 2 gives an introduction to fuzzy measures. Section 3 describes the fundamentals of possibility theory based on fuzzy measures as well as some numerical methods for propagating non-probabilistic
uncertainty, which are essential in possibility-based design. A detailed formulation of Possibility-Based Design Optimization (PBDO) where design constraints are satisfied possibilistically, is presented in section 4. Section 5 presents a detailed formulation of an Evidence-Based Design Optimization (EBDO) method and its implementation. Section 6 introduces the sequential algorithm for possibility-based design optimization. All principles are demonstrated with two examples in section 7. Results are compared among deterministic optimization, RBDO, PBDO, EBDO and SPDO. Finally, a summary and conclusions are given in section 8.
2 Fuzzy measures
The evidence and possibility theories are based on the mathematical foundation of fuzzy measures, which also provide the foundation of fuzzy set theory. Before we introduce the basics of fuzzy measures, it is helpful to review the notation used for set representation. A universe X represents the entire collection of elements having the same characteristics. The individual elements in the universe X are denoted by x, which are usually called singletons. A set A is a collection of some elements of X. All possible sets of X constitute a special set called the power set ℘(X). A fuzzy measure is defined by a function g: ℘(X) → [0, 1] which assigns to each crisp (Ross 1995) subset of X a number in the unit interval [0, 1]. The assigned number in the unit interval for a subset A ∈ ℘(X), denoted by g(A), represents the degree of available evidence or belief that a given element of X belongs to the subset A. In order to qualify as a fuzzy measure, the function g must obey the following three axioms:
Axiom 1 (boundary conditions): g(Ø) = 0 and g(X) = 1.
Axiom 2 (monotonicity): For every A, B ∈ ℘(X), if A ⊆ B, then g(A) ≤ g(B).
Axiom 3 (continuity): For every sequence (Ai ∈ ℘(X), i = 1, 2, …) of subsets of ℘(X), if either A1 ⊆ A2 ⊆ … or A1 ⊇ A2 ⊇ … (i.e., the sequence is monotonic), then lim_{i→∞} g(Ai) = g(lim_{i→∞} Ai).
A belief measure is a function Bel: ℘(X) → [0, 1] which satisfies the three axioms of fuzzy measures and the following additional axiom (Klir & Filger 1988): Bel(A1 ∪ A2 ) ≥ Bel(A1 ) + Bel(A2 ) − Bel(A1 ∩ A2 )
(1)
The axiom (1) can be expanded for more than two sets. For A ∈ ℘(X), Bel(A) is interpreted as the degree of belief, based on available evidence, that a given element of X belongs to the set A. A plausibility measure is a function Pl : ℘(X) ⇒ [0, 1]
(2)
which satisfies the three axioms of fuzzy measures and the following additional axiom (Klir & Filger 1988) Pl(A1 ∩ A2 ) ≤ Pl(A1 ) + Pl(A2 ) − Pl(A1 ∪ A2 )
(3)
Every belief measure and its dual plausibility measure can be expressed with respect to the non-negative function
m: ℘(X) → [0, 1]   (4)
such that
m(Ø) = 0 and ∑_{A∈℘(X)} m(A) = 1   (5)
The function m is called Basic Probability Assignment (BPA) due to the resemblance of Eq. (5) with a similar equation for probability distributions. The basic probability assignment m(A) is interpreted either as the degree of evidence supporting the claim that a specific element of X belongs to the set A or as the degree to which we believe that such a claim is warranted. At this point, it should be noted that the BPA is very different from the probability distribution function. Basic probability assignments are defined on sets of the power set (i.e., on A ∈ ℘(X)), whereas the probability distribution functions are defined on the singletons x of the power set (i.e., on x ∈ ℘(X)). Every set A ∈ ℘(X) for which m(A) > 0 is called a focal element of m. Focal elements are subsets of X on which the available evidence focuses; i.e. available evidence exists. Given a BPA m, a belief measure and a plausibility measure are uniquely determined by
Bel(A) = ∑_{B⊆A} m(B)   (6)
Pl(A) = ∑_{B∩A≠Ø} m(B)   (7)
which are applicable for all A ∈ ℘(X). In Eq. (6), Bel(A) represents the total evidence or belief that the element belongs to A as well as to various subsets of A. The Pl(A) in Eq. (7) represents not only the total evidence or belief that the element in question belongs to set A or to any of its subsets but also the additional evidence or belief associated with sets that overlap with A. Therefore, we have
Pl(A) ≥ Bel(A)   (8)
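For illustration, a minimal Python sketch of Eqs (6) and (7) is given below; the finite universe, the focal elements and the BPA values are hypothetical, and focal elements are represented simply as frozensets.

def belief(A, m):
    # Eq. (6): sum of m(B) over all focal elements B contained in A
    return sum(mB for B, mB in m.items() if B <= A)

def plausibility(A, m):
    # Eq. (7): sum of m(B) over all focal elements B intersecting A
    return sum(mB for B, mB in m.items() if B & A)

# Hypothetical BPA on a universe X = {1, ..., 5}; the BPAs of the focal elements sum to 1
m = {frozenset({1, 2}): 0.5, frozenset({2, 3, 4}): 0.3, frozenset({4, 5}): 0.2}
A = frozenset({1, 2, 3})
print(belief(A, m), plausibility(A, m))   # 0.5 and 0.8, so Bel(A) <= Pl(A) as in Eq. (8)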
Probability theory is a subset of evidence theory. When the additional axiom of belief measures (see Eq. (1)) is replaced with the stronger axiom Bel(A ∪ B) = Bel(A) + Bel(B) where A ∩ B = Ø
(9)
we obtain a special type of belief measures which are the classical probability measures. In this case, the right hand sides of Eqs (6) and (7) become equal and therefore
Bel(A) = Pl(A) = ∑_{x∈A} m(x) = ∑_{x∈A} p(x)   (10)
for all A ∈ ℘(X), where p(x) is the classical probability distribution function (PDF). Note that the BPA m(x) is equal to p(x). Therefore with evidence theory, we can simultaneously handle a mixture of input parameters. Some of the inputs can be described probabilistically (random uncertainty) and some can be described through expert opinions (epistemic uncertainty with incomplete data). In the second case, the range of each input parameter will be discretized using a finite number of intervals. The BPA value for each interval must be equal to the PDF area within the interval. It should be noted that according to evidence theory, the Bel(A) and Pl(A) bracket the true probability P(A) (Klir & Filger 1988), i.e. Bel(A) ≤ P(A) ≤ Pl(A)
(11)
Evidence obtained from independent sources or experts must be combined. If the BPA's m1 and m2 express evidence from two experts, the combined evidence m can be calculated by the following Dempster's rule of combining (Sentz & Ferson 2002)
m(A) = [∑_{B∩C=A} m1(B) m2(C)] / (1 − K)   for A ≠ Ø   (12)
where
K = ∑_{B∩C=Ø} m1(B) m2(C)   (13)
represents the conflict between the two independent experts. Dempster's rule filters out any conflict, or contradiction, among the provided evidence by normalizing with the complementary degree of conflict. It is usually appropriate for relatively small amounts of conflict, where there is some consistency or sufficient agreement among the opinions of the experts. Yager (Yager et al. 1994) has proposed an alternative rule of combination where all degrees of contradiction are attributed to total ignorance. Other rules of combining can be found in (Sentz & Ferson 2002). The possibility theory is a subcase of the general evidence theory. It can be used to characterize epistemic uncertainty when incomplete data is available. It applies only when there is no conflict in the provided body of evidence. In such a case, the focal elements of the body of evidence are nested and the associated belief and plausibility measures are called consonant. In contrast, when there is conflicting evidence, the belief and plausibility measures are dissonant. A family of subsets of the universal set is nested if they can be ordered in such a way that each is contained within the next. Thus, A1 ⊂ A2 ⊂ · · · ⊂ An are nested sets. Consonant belief and plausibility measures are usually known as necessity measures n and possibility measures π, respectively. Therefore, if there is no conflicting information, n(A) = Bel(A) and π(A) = Pl(A). The necessity and possibility are dual measures, related by
n(A) = 1 − π(Ā)   (14)
where Ā is the complement of set A.
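As a small numerical illustration of Dempster's rule of combining, Eqs (12) and (13), the following Python sketch combines two hypothetical expert opinions given as BPAs on frozensets; the example assumes a conflict K < 1.

def dempster_combine(m1, m2):
    K = 0.0                                     # degree of conflict, Eq. (13)
    combined = {}
    for B, mB in m1.items():
        for C, mC in m2.items():
            inter = B & C
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mB * mC
            else:
                K += mB * mC
    # Eq. (12): normalize by the complementary degree of conflict (assumes K < 1)
    return {A: v / (1.0 - K) for A, v in combined.items()}, K

# Two hypothetical expert opinions on the same parameter
m1 = {frozenset({"low", "medium"}): 0.6, frozenset({"high"}): 0.4}
m2 = {frozenset({"medium"}): 0.7, frozenset({"medium", "high"}): 0.3}
m, K = dempster_combine(m1, m2)
print(K, m)   # K = 0.28; the combined BPAs again sum to 1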
3 Fundamentals of possibility theory
This section highlights the fundamentals of possibility theory as it was originally introduced in the context of fuzzy set theory (Zadeh 1978). In the fuzzy set approach to possibility theory, focal elements are represented by a-cuts of the associated fuzzy set. Focal elements are subsets that are assigned nonzero degrees of evidence. The possibility theory can be used to bracket the true probability based on the fuzzy set approach at various confidence intervals (a-cuts). The advantage of this is that as the design progresses and the confidence level on the input parameter bounds increases, the design need not be reevaluated to obtain the new bounds of the response. Similarly to the probability measures, which are represented by the probability distribution functions, the possibility measures can be represented by the possibility distribution function r: X → [0, 1] such that
π(A) = max_{x∈A} r(x)   (15)
It can be shown that possibility measures are formally equivalent to fuzzy sets. In this equivalence, the membership grade of an element x corresponds to the plausibility of the singleton consisting of that x. Therefore, a consonant belief structure is equivalent to a fuzzy set of X. A fuzzy set is an imprecisely defined set that does not have a crisp boundary. It provides instead, a gradual transition from "belonging" to "not belonging" to the set. A function can be defined such that the values assigned to the elements of the set are within a specified range and indicate the membership grade of these elements in the set. Larger values denote higher degrees of set membership. Such a function is called a membership function and the set defined by it a fuzzy set. The membership function µA by which a fuzzy set A is usually defined has the form µA: X → [0, 1] where [0, 1] denotes the interval of real numbers from 0 to 1, inclusive. Given a fuzzy subset A of X with membership function µA, Zadeh (Zadeh 1978) defines a possibility distribution function r associated with A as numerically equal to µA, i.e. r(x) = µA(x) for all x ∈ X. Then, he defines the corresponding possibility measure π as
π(A) = sup_{x∈A} r(x)   for each A ∈ ℘(X)   (16)
Eq. (16) is equivalent to Eq. (15) when X is finite. In the fuzzy set approach to possibility theory, focal elements are represented by a-cuts of the associated fuzzy set. For the remaining of this discussion, we will follow the fuzzy set approach to possibility theory. Eq. (11) states that the true probability is bracketed by the belief and plausibility measures. If we know the possibility distribution function µY (y) of the response Y, then the true probability P(Y) can be also bracketed as n(Y) ≤ P(Y) ≤ π(Y)
(17)
where the necessity n(Y) and possibility π(Y) measures are calculated from Eqs (14) and (16), respectively. The “extension principle’’ (Klir & Filger 1988, Ross 1995,
Figure 10.1 Triangular possibility distribution for a fuzzy variable (normal point xN; at a given a-cut the variable lies in the interval [xαL, xαU], whose bounds are at distances dL(a) and dU(a) from xN).
Yager et al. 1994) is used to calculate the possibility distribution function µY(y) of the response.
3.1 Fuzzification process and extension principle
The process of quantifying a fuzzy variable is known as fuzzification. If any of the input variables is imprecise, it is considered fuzzy and must therefore be fuzzified in order for the uncertainty to be propagated using fuzzy calculus. The fuzzification is done by constructing a possibility distribution, or membership function, for each imprecise (fuzzy) variable. Details can be found in (Ross 1995). The membership function takes values in the [0, 1] interval. Here, we use convex normal possibility distributions to characterize the fuzzy variables. An example of a convex normal triangular possibility distribution is shown in Fig. 10.1. The point for which the possibility is equal to one is called the normal point. The possibility distribution is convex since it is strictly decreasing to the left and right of the normal point. At each confidence level, or a-cut, a set Xa is defined as
Xa = {x: xaL ≤ x ≤ xaR, a ∈ [0, 1]}   (18a)
which is a monotonically decreasing function of a; i.e.
a1 > a2 ⇒ Xa1 ⊂ Xa2   for every a1, a2 ∈ [0, 1]   (18b)
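As a short illustration of Eq. (18a), the sketch below (with hypothetical values) returns the a-cut interval of a triangular possibility distribution whose normal point is xN and whose support extends dL to the left and dU to the right of xN.

def alpha_cut(xN, dL, dU, a):
    # a-cut interval [xaL, xaR] of a triangular membership function
    return xN - (1.0 - a) * dL, xN + (1.0 - a) * dU

print(alpha_cut(10.0, 2.0, 3.0, 0.0))   # full support: (8.0, 13.0)
print(alpha_cut(10.0, 2.0, 3.0, 0.5))   # nested, narrower interval: (9.0, 11.5)
print(alpha_cut(10.0, 2.0, 3.0, 1.0))   # collapses to the normal point: (10.0, 10.0)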
Due to the convexity of the possibility distribution function, all sets generated at different a-cuts are nested according to Eq. (18b). Therefore, the convexity and normality of the possibility distribution function satisfies the basic requirement of nested sets (no conflicting evidence) in possibility theory. After the fuzzification of the imprecise input variables, the “extension principle’’ is used to propagate the epistemic uncertainty through the transfer function in order to calculate the fuzzy response. The “extension principle’’ calculates the possibility distribution of the fuzzy response from the possibility distributions of the fuzzy input variables. In particular, given the transfer function y = f (x), where the output y depends
on the N independent fuzzy inputs x = {x1, . . . , xN}, the "extension principle" states that the possibility distribution µY of the output is given by
µY(y) = sup_{x: y = f(x)} {min_j [µXj(xj)]}   (19)
where "sup" denotes the supremum operator that gives the least upper bound. The above equation can be interpreted as follows. For a crisp value of the output y, there may exist more than one combination of crisp values of input variables x resulting in the same output. The possibility of each combination is given by the smallest possibility value for all fuzzy input variables. The possibility that y = f(x) is given by the maximum possibility for all these combinations. Note that in probability theory, the probability of an outcome is equal to the product of the probabilities of the constituent events. In fuzzy set theory however, the possibility of an outcome is equal to the minimum possibility of the constituent events. If the outcome can be reached in many ways, then the outcome probability, in probability theory, is given by the sum of the probabilities of all the ways. In fuzzy theory, the possibility of the outcome is given by the maximum possibility of all the possibilities (Ross 1995). The direct ("brute force") solution of Eq. (19) is practically intractable except for simple cases involving one or two fuzzy variables. The computational effort increases exponentially with increasing number of fuzzy input variables. For this reason, approximate numerical techniques have been proposed, among which the discretization method (Akpan et al. 2002) and the vertex method (Penmetsa & Grandhi 2002) are the most popular ones. In the discretization method, the domain of each fuzzy variable i, 1 ≤ i ≤ N, is discretized with Mi discrete values at each a-cut. Then the output y is evaluated at all ∏_{i=1}^{N} Mi possible combinations for each a-cut. Subsequently, Eq. (19) is used to calculate the possibility distribution of the output. The range of the output is defined by the minimum and maximum response from all combinations. Although this method can be very accurate, the associated computational cost is practically prohibitive. In the vertex method, all the binary combinations of only the extreme values of the fuzzy variables at an a-cut are fed into the deterministic transfer function. The bounds of the fuzzy response are then obtained at the a-cut by choosing the maximum and minimum responses. The procedure is repeated for all a-cuts of interest. The method has the potential to give accurate bounds of the response based on the bounded input. However, when the transfer function exhibits minima or maxima within the domain defined by the extreme values of the input variables, the vertex method is inaccurate. This is due to the fact that the function is evaluated only at the binary combinations of the input variable bounds. For a problem with N fuzzy input variables, the required number of function evaluations for the vertex method is A ∗ 2^N, where A is the number of a-cuts. In general, the vertex method is computationally more efficient compared with the discretization method. However, the required computational effort grows exponentially with the number of input fuzzy variables (Ross 1995). For this reason, most of the reported applications are restricted to very few fuzzy variables (Chen & Rao 1997, Mullen & Muhanna 1999, Rao & Sawyer 1995).
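The vertex method described above can be sketched in a few lines of Python; the transfer function and the a-cut intervals are hypothetical, and the example also shows the weakness discussed in the text, since an interior minimum is missed.

from itertools import product

def vertex_method(f, acut_intervals):
    # evaluate f only at the 2^N binary combinations of the interval extremes
    responses = [f(*x) for x in product(*acut_intervals)]
    return min(responses), max(responses)

f = lambda x1, x2: x1**2 + x1 * x2              # hypothetical transfer function
print(vertex_method(f, [(-1.0, 2.0), (0.5, 1.5)]))
# gives (-0.5, 7.0); the true minimum over this box is about -0.56 at an interior point,
# which the vertex method cannot detect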
A hybrid (global-local) optimization method has been reported in (Mourelatos & Zhou 2005), which ensures computational efficiency without loss of accuracy. An optimization algorithm is used to calculate the minimum and maximum values of the response at each a-cut. Because the global minimum and maximum values of the response are needed, a derivative free, global optimizer called DIRECT (DIvisions of RECTangles), is used in order to avoid being trapped at a local optimum and obtain therefore, an inaccurate solution. DIRECT is a modification of the standard Lipschitzian approach that eliminates the need to specify a Lipschitz constant (Jones et al. 1993). Although global optimizers may get close to the global optimum quickly, it takes them longer to achieve a high degree of accuracy because they usually have a slow rate of convergence. This suggests that the best performance can be obtained by combining DIRECT with a gradient-based local optimizer in a hybrid approach. In this work, DIRECT is first used, followed by a local optimizer based on Sequential Quadratic Programming (SQP). DIRECT provides a converged global optimum based on "loose" convergence criteria. Subsequently, the DIRECT solution is used as starting point for SQP, which identifies the optimum accurately and efficiently.
3.2 A mathematical example
The following two-variable, six-hump camel function (Wang 2003) is used
y(x1, x2) = 4x1^2 − 2.1x1^4 + (1/3)x1^6 + x1x2 − 4x2^2 + 4x2^4,   x1,2 ∈ [−2, 2]
to illustrate the accuracy and efficiency of the hybrid optimization method of the previous section and compare it with the vertex and discretization methods. For demonstration reasons, the following simple triangular membership functions are used for the two input variables x1 and x2:
µXi(xi) = −xi/2 + 1 for 0 ≤ xi ≤ 2, and µXi(xi) = xi/2 + 1 for −2 ≤ xi ≤ 0,   i = 1, 2
Fig. 10.2 shows the contour plot of the six hump camel function. The H’s indicate all extreme points. Points H2 and H5 with coordinates (0.0898, −0.7127) and (−0.0898, 0.7127) respectively, are two global optima with an equal function value of ymin = −1.0316. The calculated membership functions of the response y using the vertex, discretization and hybrid optimization methods are plotted in Fig. 10.3. Ten a-cuts are used for all three methods. For the discretization method, the range of each input fuzzy variable, at each a-cut, is equally split in 15 divisions. It is known that if the input membership functions are convex normal, the response membership function must also be convex normal. The justification is that when the input uncertainty increases (low a-cut values), the uncertainty of the response must remain the same or increase. As shown in Fig. 10.3, the response membership function obtained by the vertex method is not convex and therefore, it is wrong.
Figure 10.2 Contour plot for mathematical example.
Figure 10.3 Response membership function for mathematical example (vertex, discretization and hybrid optimization methods; 10 a-cuts).
As explained in section 3.1, the discretization method evaluates the function not only at the upper and lower limits of the input variables at each alpha cut but also between the bounds. Thus, it can capture the extreme points that might be present in between the upper and lower bounds. At each alpha cut, all combinations are obtained and the minimum and maximum response values are calculated in order to get the response membership function. It is clear that the response becomes more accurate as the number of divisions per alpha cut increases. As shown in Fig. 10.3,
Table 10.1 Accuracy and efficiency comparison of vertex, discretization and hybrid optimization methods.

                 Vertex   Discretization   Hybrid Optimization
Lower Bound      47.73    −1.01            −1.03
Upper Bound      55.73    55.73            55.73
No. of F.E.      4        256              140
the response membership function calculated with the discretization method is convex and normal. The uncertainty decreases as the level of confidence increases (increasing a-cut values). The major disadvantage of this method is that, as the number of design variables increases and the number of divisions per a-cut also increases, the method becomes computationally very expensive. In this example, the number of a-cuts is 10 and the number of divisions per a-cut is 15. Therefore, the number of function evaluations is 10 ∗ (15 + 1)^2 = 2560. The response membership function of the six-hump camel function is also calculated using the proposed hybrid optimization method. The result is identical with that obtained with the discretization method (see Fig. 10.3). Table 10.1 summarizes the lower and upper bound values of the response at the zero a-cut, as calculated by the vertex, discretization and hybrid optimization methods. The vertex method is very efficient but inaccurate. The hybrid optimization method, however, has the same accuracy as the "brute force" discretization method but is much more efficient.
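The optimization-based propagation of this example can be sketched as follows, assuming SciPy is available; a simple multi-start local optimizer stands in for the DIRECT/SQP hybrid described above, and the a-cut box follows from the triangular memberships, |x1|, |x2| ≤ 2(1 − a).

import numpy as np
from scipy.optimize import minimize

def camel(x):
    x1, x2 = x
    return 4*x1**2 - 2.1*x1**4 + x1**6/3 + x1*x2 - 4*x2**2 + 4*x2**4

def bounds_at_alpha(a, n_starts=20, seed=0):
    half = 2.0 * (1.0 - a)                      # a-cut half-width of both inputs
    box = [(-half, half), (-half, half)]
    rng = np.random.default_rng(seed)
    lo, hi = np.inf, -np.inf
    for _ in range(n_starts):                   # multi-start to approximate the global extrema
        x0 = rng.uniform(-half, half, 2)
        lo = min(lo, minimize(camel, x0, bounds=box).fun)
        hi = max(hi, -minimize(lambda x: -camel(x), x0, bounds=box).fun)
    return lo, hi

for a in (0.0, 0.5, 1.0):
    print(a, bounds_at_alpha(a))                # at a = 0 the bounds approach (-1.03, 55.73)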
4 Possibility-based design optimization
In deterministic design optimization, an objective function is minimized subject to satisfying a set of constraints. In Reliability-Based Design Optimization (RBDO), where all design variables are characterized probabilistically, an objective function is usually minimized subject to the probability of satisfying each constraint being greater than a specified high reliability level. In this section, a methodology is presented on how to use possibility theory in design. We will show that the possibility-based design is conservative compared with all RBDO designs obtained with different probability distributions. In RBDO, some optimality is usually sacrificed in order to accommodate the random uncertainty. The possibility-based design sacrifices a little more optimality in order to accommodate the lack of probability distribution information. It therefore encompasses all RBDO designs obtained with different distributions. According to Eq. (11), the probability P(A) of event A is bracketed by the belief Bel(A) and plausibility Pl(A); i.e. Bel(A) ≤ P(A) ≤ Pl(A). We have also mentioned that for consonant (no conflicting evidence) belief structures, the plausibility measures are equal to the possibility measures, resulting in η(A) ≤ P(A) ≤ π(A), where η and π are the necessity and possibility measures, respectively (see Eq. 17). This means that the possibility π(A) provides an upper bound to the probability P(A). From the design point of view, we can thus conclude (Klir & Filger 1988, Ross 1995, Zadeh 1978) that what is possible may not be probable, and what is impossible is also improbable.
Figure 10.4 Used notation in possibility-based design optimization (membership function µG(g) of constraint g, with normal point gN, a-cut bounds gαmin and gαmax, support bounds gmin and gmax, and possibility of violation α1 at g = 0).
Note that for an impossible event A, the possibility π(A) is zero. If we therefore, make sure that the possibility of violating a constraint is zero, then the probability of violating the same constraint will be also zero. If feasibility of a constraint g is expressed with the positive null form g ≥ 0, the constraint is always satisfied if π(g ≤ 0) = 0
(20)
The possibility π in Eq. (20) is calculated using Eq. (16). Fig. 10.4 shows the membership function µG(g) of constraint g. The possibility of set A = {g: gmin ≤ g ≤ gαmin, α ∈ [0, 1]} is π(A) = α and the possibility of set B = {g: gαmin ≤ g ≤ gαmax, α ∈ [0, 1]} is π(B) = 1. Similarly, the possibility of constraint violation is π(g ≤ 0) = α1. Eq. (20) can be relaxed as
π(g ≤ 0) ≤ α
(21)
where the a-cut level is small; i.e. α << 1. Based on Fig. 10.4, the relation (21) is satisfied if
gαmin ≥ 0   (22)
where gαmin is the global minimum of g at the a-cut. Eq. (22) is analogous to the R-percentile formulation (Tu et al. 1999) of a probabilistic constraint in RBDO. The possibilistic constraint of Eqs (21) or (22) becomes active if gαmin = 0. Based on this discussion, a possibility-based design optimization (PBDO) problem can be formulated as
min_{d, xN} f(d, xN, pN)
subject to π(gi(d, X, P) ≤ 0) ≤ α,   i = 1, . . . , n   (23)
dL ≤ d ≤ dU,   xL ≤ xN ≤ xU
where d ∈ Rk is the vector of deterministic design variables, X ∈ Rm is the vector of possibilistic design variables, P ∈ Rq is the vector of possibilistic design parameters and xN and pN are the normal point vectors for the possibilistic design variables and parameters, respectively. According to the used notation, a bold letter indicates a vector, an upper case letter indicates a possibilistic variable or parameter and a lower case letter indicates a deterministic variable or a realization of a possibilistic variable or parameter. Feasibility of the ith deterministic constraint is expressed with the positive null form gi ≥ 0. The possibilistic design variables are represented with convex normal possibility distributions (membership functions). Note that they may not be necessarily triangular. The superscript N denotes the normal point of each distribution where the membership function value is equal to one. Subscripts L and U denote lower and upper bounds, respectively. In PBDO, we will assume that the membership functions of the possibilistic design variables have a constant shape and that their normal points are design variables moving within predetermined bounds. This is analogous to RBDO where the PDF of each random design variable stays constant while its mean value is a design variable. Based on Eq. (22), the PBDO formulation (23) is equivalent to
min_{d, xN} f(d, xN, pN)
subject to giαmin ≥ 0,   i = 1, . . . , n   (24)
dL ≤ d ≤ dU,   xL ≤ xN ≤ xU
The PBDO formulation (23) or (24) is a double-loop optimization problem where an optimization is performed (inner loop) when the design optimization (outer loop) calls for a possibilistic constraint evaluation. It should be noted that the PBDO optimum at a = 1 coincides with the deterministic optimum.
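A minimal sketch of the double-loop formulation (24) is given below, assuming SciPy is available; the objective, the limit state and the a-cut half-widths are hypothetical. The inner loop computes gαmin over the a-cut box centred at the normal points xN, and the outer loop treats that worst-case value as an ordinary inequality constraint.

import numpy as np
from scipy.optimize import minimize

delta = np.array([0.2, 0.2])                    # assumed a-cut half-widths of the two variables
g = lambda x: x[0] + x[1] - 3.0                 # hypothetical limit state, feasible when g >= 0
f = lambda xN: xN[0]**2 + xN[1]**2              # objective evaluated at the normal points

def g_alpha_min(xN):                            # inner loop: worst case of g over the a-cut box
    box = list(zip(xN - delta, xN + delta))
    return minimize(g, xN, bounds=box).fun

res = minimize(f, x0=np.array([2.0, 2.0]), bounds=[(0.0, 5.0)] * 2,
               constraints=[{"type": "ineq", "fun": g_alpha_min}], method="SLSQP")
print(res.x)   # about (1.7, 1.7): the deterministic optimum (1.5, 1.5) shifted by the a-cut half-widths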
4.1 PBDO with a combination of random and possibilistic variables
Reliability-based design optimization (RBDO) provides optimum designs in the presence of only random (or aleatory) uncertainty (Liang et al. 2007, Tu et al. 1999, Wu et al. 2001). A typical RBDO problem is formulated as (Liang et al. 2007)
min_{d, µY} f(d, µY, µZ)
subject to P(gi(d, Y, Z) ≥ 0) ≥ Ri = 1 − pfi,   i = 1, . . . , n   (25)
dL ≤ d ≤ dU,   µYL ≤ µY ≤ µYU
where Y ∈ R is the vector of random design variables and Z ∈ Rr is the vector of random design parameters. For a variety of practical applications however, there may not be enough information to characterize all design variables and parameters probabilistically. A subset
of them can therefore be characterized possibilistically using membership functions. A possibility-based design optimization problem with a combination of random and possibilistic (or fuzzy) variables can be formulated as
min_{d, xN, µY} f(d, µY, µZ, xN, pN)
subject to giαmin ≥ 0,   i = 1, . . . , n   (26)
dL ≤ d ≤ dU
µYL ≤ µY ≤ µYU,   xL ≤ xN ≤ xU
with
giαmin = min_{x, U, p} gi(d, U, x, p),   i = 1, . . . , n,
subject to βi = βti,   xLα(xN) ≤ x ≤ xUα(xN),   pLα ≤ p ≤ pUα
where βt is the target reliability index. Note that xLα and xUα are the lower and upper limits of X at an a-cut. Problem (26) represents a double-loop performance measure approach (PMA) optimization sequence. The design optimization of the outer loop calls a series of possibilistic constraints in the inner loop. Each possibilistic constraint is in general, a global optimization problem. The inner loop calculates the minimum value of each limit state function considering that the realizations of possibilistic variables X vary between xLα and xUα , the realizations of possibilistic parameters P vary between pαL and pαU and the reliability requirement βi = βti is satisfied. It therefore, calculates the worst-case scenario.
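The inner worst-case evaluation of Problem (26) can be sketched as follows for one possibilistic variable and two standard-normal variables, assuming SciPy is available; the limit state, the target reliability index and the a-cut interval are hypothetical.

import numpy as np
from scipy.optimize import minimize

beta_t = 3.0                                    # assumed target reliability index
x_lo, x_hi = 0.8, 1.2                           # assumed a-cut interval of the possibilistic variable

def g(z):                                       # z = [u1, u2, x]; hypothetical limit state
    u1, u2, x = z
    y1 = 5.0 + 0.5 * u1                         # realizations of the random variables from u-space
    y2 = 3.0 + 0.3 * u2
    return y1 + y2 - 6.0 * x                    # feasible when g >= 0

res = minimize(g, x0=[0.0, -beta_t, 1.0],
               bounds=[(-beta_t, beta_t), (-beta_t, beta_t), (x_lo, x_hi)],
               constraints=[{"type": "eq",
                             "fun": lambda z: z[0]**2 + z[1]**2 - beta_t**2}],
               method="SLSQP")
print(res.fun)   # g_alpha_min for this constraint; a negative value means it is violated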
5 Evidence-based design optimization (EBDO)
In this section, a methodology is presented on how to use evidence theory in design. We will show that the evidence theory-based design is more conservative compared with all RBDO designs obtained with different probability distributions and less conservative compared with the PBDO design. If feasibility of a constraint g is expressed with the non-negative null form g ≥ 0, we have shown that Bel(g ≥ 0) ≤ P(g ≥ 0) ≤ Pl(g ≥ 0) where P(g ≥ 0) is the probability of constraint satisfaction. Therefore, P(g < 0) ≤ pf is satisfied if
Pl(g < 0) ≤ pf   (27)
where pf is the probability of failure which is usually a small prescribed value. The above statement is equivalent to: P(g ≥ 0) ≥ R is satisfied if
Bel(g ≥ 0) ≥ R   (28)
where R = 1 − pf is the corresponding reliability level.
Hence, an evidence theory-based design optimization (EBDO) problem can be formulated as
min_{d, xN} f(d, xN, pN)
subject to Pl(gi(d, X, P) < 0) ≤ pfi,   i = 1, . . . , n   (29)
dL ≤ d ≤ dU,   xLN ≤ xN ≤ xUN
where X ∈ Rm and P ∈ Rq are the vectors of uncertain design variables and parameters. The superscript "N" indicates the nominal value of uncertain variables or parameters. The uncertainty is provided by expert opinions. It should be noted that the plausibility measure is used instead of the equivalent belief measure in Problem (29). The reason is that at the optimum, the failure domain for each active constraint is usually much smaller than the safe domain over the frame of discernment (FD) (domain of all focal elements with nonzero combined BPA; see next section). As a result, the computation of the plausibility of failure is much more efficient than the computation of the belief of the safe region.
5.1 Assessing Bel and Pl with Dempster–Shafer theory
Evidence theory can quantify epistemic uncertainty, even when the experts provide conflicting evidence. This section shows how to propagate epistemic uncertainty through a given model (transfer function), which is necessary in calculating the plausibility of constraint violation in Problem (29). The uncertainty propagation will be illustrated using the following simple transfer function
y = f(a, b)
(30)
where a ∈ A, b ∈ B are two independent input parameters and y is the output. The combined BPA's for both a and b are obtained from Dempster's rule of combining of Eq. (12) if multiple experts have provided evidence for either a or b. With combined information for each input parameter, we define a vector c = [aci, bcj], needed to calculate the output y, as
C = A × B = {c = [aci, bcj], aci ∈ A, bcj ∈ B}
(31)
where subscript c stands for “combined’’ and i, j indicate focal elements. Taking advantage of assumed parameter independency, the BPA for c is mc (hij ) = m(aci )m(bcj )
(32)
where hij = [aci , bcj ] and aci , bcj denote intervals such that a ∈ aci and b ∈ bcj . Equation (32) can be used to calculate the combined BPA structure for the entire domain C. For every (a, b) ∈ c |c ∈ C, needed to evaluate the output y, the combined BPA mc is used. A representative combined BPA structure is shown in Fig. 10.5. The Cartesian product C of Eq. (31) is also called frame of discernment (FD) in the literature. It consists of all focal elements (rectangles in Fig. 10.5 with nonzero combined BPA) and can be viewed as the finite sample space in probability theory.
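A small sketch of Eqs (31) and (32) follows; the interval focal elements and BPA values for the two parameters are hypothetical.

m_a = {(0.0, 1.0): 0.6, (1.0, 2.0): 0.4}        # combined BPA of parameter a (interval: BPA)
m_b = {(5.0, 6.0): 0.7, (6.0, 8.0): 0.3}        # combined BPA of parameter b

m_c = {}                                        # joint BPA structure over C = A x B
for a_int, m1 in m_a.items():
    for b_int, m2 in m_b.items():
        m_c[(a_int, b_int)] = m1 * m2           # Eq. (32): product BPA of focal cell h_ij

print(sum(m_c.values()))                        # the joint BPAs still sum to 1
print(m_c)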
Figure 10.5 Representative BPA structure for two parameters a and b.
Figure 10.6 Schematic illustration of focal element contribution to belief and plausibility measures.
Let a domain F be defined as
F = {g: g = f(a, b) − y0 > 0, (a, b) ∈ c, c = [ac, bc] ⊂ C}
(33)
where y0 is a specified value. According to evidence theory, Bel(F) ≤ pf ≤ Pl(F)
(34)
where pf = P(g > 0) is the true probability. The Bel(F) and Pl(F) are calculated using Eqs (6) and (7) where set A is equal to set F of Eq. (33) and B is a rectangular domain (focal element) such that B ⊆ A for Eq. (6) and B ∩ A ≠ Ø for Eq. (7). B ⊆ A means that the focal element must be entirely within the domain g > 0 and B ∩ A ≠ Ø means that the focal element must be entirely or partially within the domain g > 0 (see Fig. 10.6). In order to identify if a focal element B satisfies B ⊆ A or B ∩ A ≠ Ø, the following minimum and maximum values of g must be calculated
[gmin, gmax] = [min_x g(x), max_x g(x)]   (35)
for xL ≤ x ≤ xU where (xL , xU ) defines the focal element domain. For monotonic functions, the vertex method (Penmetsa & Grandhi 2002) can be used to calculate the minimum and maximum values in Eq. (35) by simply identifying the minimum and
Figure 10.7 Geometrical interpretation of the EBDO algorithm (the frame of discernment moves within the feasible region bounded by g1(x1, x2) = 0 and g2(x1, x2) = 0 as the objective reduces; point B is the RBDO optimum obtained with the hyper-ellipse, located near the deterministic and EBDO optima and the MPP of the active constraint).
maximum values among all vertices of the focal element domain. For non-monotonic functions, a global optimizer is needed. If, for a focal element, gmin and gmax are both positive, the focal element will contribute to the calculation of belief and plausibility. On the other hand, if gmin and gmax are both negative, the focal element will not contribute to the calculation of belief or plausibility. If, however, gmin is negative and gmax is positive, the focal element will not contribute to the belief but it will contribute to the plausibility calculation. This is shown schematically in Fig. 10.6. In summary, the following tasks are performed in order to calculate the belief and plausibility of the failure region:
1. For each input parameter, combine the evidence from the experts by combining the individual BPA's from each expert using Dempster's rule of combining (Eq. (12)).
2. Construct the BPA structure for the m-dimensional frame of discernment, where m is the number of input parameters. Assuming independent input parameters, Eq. (32) is used.
3. Identify the failure region space (set F of Eq. (33)).
4. Use Eqs (6) and (7) to calculate the belief and plausibility measures of the failure region. The failure region must be identified only within the frame of discernment. The true probability of failure is bracketed according to Eq. (34).
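Tasks 3 and 4 can be sketched as follows for a monotonic limit state, so that the vertex evaluation of Eq. (35) applies; the limit state and the joint BPA structure are hypothetical.

from itertools import product

def cell_bounds(g, cell):
    vals = [g(*v) for v in product(*cell)]      # evaluate g at the vertices of the focal cell
    return min(vals), max(vals)

def failure_measures(g, joint_bpa):
    bel = pl = 0.0
    for cell, m in joint_bpa.items():
        gmin, gmax = cell_bounds(g, cell)
        if gmin > 0:                            # cell entirely inside the failure region g > 0
            bel += m
            pl += m
        elif gmax > 0:                          # cell partially inside the failure region
            pl += m
    return bel, pl                              # Bel(F) <= pf <= Pl(F), Eq. (34)

g = lambda a, b: a + b - 7.5                    # hypothetical limit state, failure when g > 0
joint_bpa = {((0.0, 1.0), (5.0, 6.0)): 0.42, ((0.0, 1.0), (6.0, 8.0)): 0.18,
             ((1.0, 2.0), (5.0, 6.0)): 0.28, ((1.0, 2.0), (6.0, 8.0)): 0.12}
print(failure_measures(g, joint_bpa))           # (0.0, 0.58) for this data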
5.2 Implementation of the EBDO algorithm
A computationally efficient solution of Problem (29) is presented here. As a geometrical interpretation of it, we can view the design point (d, x) moving within the feasible domain so that the objective f is minimized (see Fig. 10.7). If the entire FD is in the
feasible domain, the constraints are satisfied and are inactive. A constraint becomes active if part of the FD is in the “failure’’ region so that the plausibility of constraint violation is equal to pf . In general, Problem (29) represents movement of a hyper-cube (FD) within the feasible domain. In order to save computational effort, the bulk of the FD movement, from the initial design point to the vicinity of the optimal point (point B of Fig. 10.7), can be achieved by moving a hyper-ellipse which contains the FD. The center of the hyper-ellipse is the “approximate’’ design point and each axis is arbitrarily taken equal to three times the standard deviation of a hypothetical normal distribution. This assumes that each dimension of the FD hyper-cube is equal to six times the standard deviation of the hypothetical normal distribution. The hyper-ellipse can be easily moved in the design space by solving a RBDO problem. The RBDO optimum (point B of Fig. 10.7) is in the vicinity of the solution of Problem (29) (EBDO optimum). The RBDO solution also identifies all active constraints and their corresponding most probable points (MPP’s). The maximal possibility search algorithm (Choi et al. 2004) can also be used to move the FD hyper-cube in the feasible domain. It should be noted that the 3-sigma axes hyper-ellipse is arbitrary. The size of the hyper-ellipse is not however, crucial because it is only used to calculate the initial point (point B of Fig. 10.7) of the EBDO algorithm. The latter calculates the true EBDO optimum accurately. From our experience, a 3 to 4-σ size works fine. At this point, we generate a local response surface of each active constraint around its MPP. In this work, the Cross-Validated Moving Least Squares (CVMLS) (Tu & Jones 2003) method is used based on an Optimum Symmetric Latin Hypercube (OSLH) (Ye et al. 2000) “space-filling’’ sampling. A derivative-free optimizer calculates the EBDO optimum. It uses as initial point the previously calculated RBDO optimum which is close to the EBDO optimum. Problem (29) is solved, considering only the identified active constraints. For the calculation of the plausibility of failure Pl(g < 0) of each active constraint, an algorithm presented in (Mourelatos & Zhou 2005) is used. It identifies all focal elements which contribute to the plausibility of failure. The computational effort is significantly reduced because accurate local response surfaces are used for the active constraints. The cost can be much higher if the optimization algorithm evaluates the actual active constraints instead of their efficient surrogates (response surfaces). It should be noted that a derivative-free optimizer is needed due to the discontinuous nature of the combined BPA structure. The DIRECT derivative-free, global optimizer is used (Jones et al. 1993).
6 A sequential algorithm for possibility-based design optimization (SPDO)
The computational effort of the double-loop approach of Problem (26) may be prohibitive, especially for large-scale applications. For this reason, a Sequential algorithm for Possibility-based Design Optimization (SPDO) method is proposed in this section. It decouples the double-loop PBDO process of Problem (26) by using successive cycles composed of a deterministic design optimization followed by a set of possibilistic evaluation loops. In each cycle, the deterministic optimization and the possibilistic
Figure 10.8 Shifting of feasible domain for only uncertain variables.
evaluations are decoupled. The latter are conducted after the deterministic optimization. If, at the deterministic optimum of a cycle, a particular possibilistic constraint is violated, a "shifting" vector is determined which moves the constraint boundary within the deterministic feasible domain. The "shifted" constraints are then used to perform a new deterministic design optimization. The series of deterministic and possibilistic evaluation loops continues until convergence is achieved, i.e. the objective function is minimized without violating any possibilistic constraint. At convergence the magnitude of the "shifting" vector is zero. The idea of using a "shifting" vector was originally proposed in (Du & Chen 2004).
6.1 SPDO with only possibilistic variables

Before we present the proposed SPDO algorithm for a combination of possibilistic and random variables, we introduce the approach when there are no random variables. For clarity, we initially assume, without loss of generality, that there are no deterministic design variables or possibilistic design parameters. In this case, Problem (24) reduces to
\[
\begin{aligned}
\min_{x^N}\; & f(x^N) \\
\text{subject to}\; & g_i^{\alpha_{\min}} \ge 0, \quad i = 1, \ldots, n \\
& x^L \le x^N \le x^U
\end{aligned}
\tag{36}
\]
For illustration purposes, Fig. 10.8 gives a geometrical interpretation using only two possibilistic variables X1 and X2.
The normal points of the two possibilistic variables are denoted by x_1^N and x_2^N, respectively. The deterministic and possibilistic constraint boundaries are denoted by g(x_1^N, x_2^N) = 0 and π[g(x_1, x_2) ≤ 0] ≤ α, respectively. Because the possibility-based design is more conservative than the deterministic design, its feasible region is reduced compared with that of the deterministic design. Problem (36) is solved using a sequence of cycles. Each cycle is composed of a deterministic optimization followed by a possibilistic evaluation. During the first cycle, the following deterministic problem is solved
\[
\begin{aligned}
\min_{x^N}\; & f(x^N) \\
\text{subject to}\; & g_i(x^N) \ge 0, \quad i = 1, \ldots, n \\
& x^L \le x^N \le x^U
\end{aligned}
\]
The optimal point x^N = (x_1^N, x_2^N) is on the boundary of the active deterministic constraints. A possibilistic evaluation at the desired α-cut is then implemented for each constraint at x^N = (x_1^N, x_2^N) in order to determine the worst-case point x_worst = (x_{1,worst}, x_{2,worst}). The following problem is solved for the ith constraint,
\[
\begin{aligned}
\min_{x}\; & g_i(x) \\
\text{subject to}\; & x^N - \delta^L(\alpha) \le x \le x^N + \delta^U(\alpha)
\end{aligned}
\]
where δ^L(α) and δ^U(α) are the lower and upper bounds of x at the desired α-cut (see Fig. 10.1). If the solution x_{i,worst} (worst-case point) of the above problem is deterministically infeasible, we must force it at least onto the deterministic constraint in order to ensure feasibility of the ith possibilistic constraint. This can be achieved by using a "shifting" vector SP = (SP_1, SP_2), similarly to (Du & Chen 2004). In this case, the deterministic optimization of the next cycle is formulated as
\[
\begin{aligned}
\min_{x^N}\; & f(x^N) \\
\text{subject to}\; & g_i(x_1^N - SP_1,\; x_2^N - SP_2) \ge 0, \quad i = 1, \ldots, n \\
& x^L \le x^N \le x^U
\end{aligned}
\]
For multiple possibilistic constraints, the boundary of each constraint is shifted inside the deterministic feasible region by the distance between the deterministic optimal point and the worst-case point. The new feasible region is smaller in comparison with that of the previous cycle. In general, the deterministic optimization problem for the kth cycle is
\[
\begin{aligned}
\min_{{}^{k}d,\, {}^{k}x^N}\; & f({}^{k}d,\, {}^{k}x^N,\, p^N) \\
\text{subject to}\; & g_i({}^{k}d,\, {}^{k}x^N - {}^{k}SP,\, {}^{k-1}p_{\text{worst}}) \ge 0, \quad i = 1, \ldots, n \\
& d^L \le {}^{k}d \le d^U, \quad x^L \le {}^{k}x^N \le x^U
\end{aligned}
\tag{37}
\]
where the left superscript indicates the cycle number. The "shifting" vector for the possibilistic design variables is ^k SP = ^{k-1}x^N - ^{k-1}x_worst. Because the "shifting" vector
Figure 10.9 Shifting of feasible domain for a combination of uncertain and random variables.
idea cannot be used for the possibilistic design parameters P, the worst-case vector ^{k-1}p_worst from the previous cycle is used. For the first cycle, the worst-case vector p_worst is assumed equal to the nominal point p^N. Subsequently, n possibilistic evaluation problems are solved (one for each possibilistic constraint). The following problem is solved for the ith constraint
\[
\begin{aligned}
\min_{x,\, p}\; & g_i({}^{k}d,\, x,\, p) \\
\text{subject to}\; & {}^{k}x^N - \delta^L(\alpha) \le x \le {}^{k}x^N + \delta^U(\alpha) \\
& p^L(\alpha) \le p \le p^U(\alpha)
\end{aligned}
\tag{38}
\]
and its solution determines ^k x_worst and ^k p_worst which are used in the next cycle. Problems (37) and (38) are repeated for a few cycles until convergence is achieved.

6.2 SPDO with both possibilistic and random variables
A sequential approach is described in this section for a mixture of possibilistic and random variables. For demonstration purposes, Fig. 10.9 shows a geometric interpretation for a hypothetical problem with one random design variable (Y_1), one possibilistic design variable (X_2), one deterministic constraint g(µ_{Y1}, x_2^N) = 0 and one possibilistic constraint π[g(µ_{Y1}, x_2) ≤ 0] ≤ α. For this general case, Problem (26) is solved. In the first cycle, the following deterministic optimization is performed
\[
\begin{aligned}
\min_{d,\, x^N,\, \mu_Y}\; & f(d,\, \mu_Y,\, \mu_Z,\, x^N,\, p^N) \\
\text{subject to}\; & g_i(d,\, \mu_Y,\, \mu_Z,\, x^N,\, p^N) \ge 0, \quad i = 1, \ldots, n \\
& d^L \le d \le d^U, \quad \mu_Y^L \le \mu_Y \le \mu_Y^U, \quad x^L \le x^N \le x^U
\end{aligned}
\tag{39}
\]
A possibilistic evaluation is then implemented for each constraint at the optimal point (d, µ_Y, x^N) of Problem (39) in order to identify the worst-case point (x_worst, p_worst) at the desired α-cut. The following problem is solved for the ith constraint,
\[
\begin{aligned}
\min_{U,\, x,\, p}\; & g_i(d,\, U,\, x,\, p) \\
\text{subject to}\; & \|U\| = \beta_{t_i} \\
& x^N - \delta^L(\alpha) \le x \le x^N + \delta^U(\alpha) \\
& p^L(\alpha) \le p \le p^U(\alpha)
\end{aligned}
\tag{40}
\]
The solution of the above problem is the worst-case point (d, Y_{i,MPP}, Z_{i,MPP}, x_{i,worst}, p_{i,worst}), where (Y_{i,MPP}, Z_{i,MPP}) is the worst-case most probable point (MPP) for the ith constraint. If point (d, Y_{i,MPP}, Z_{i,MPP}, x_{i,worst}, p_{i,worst}) is deterministically infeasible, we must force it at least onto the deterministic constraint in order to ensure feasibility. This is achieved by using a "shifting" vector SS = {SS_1, SS_2, . . .} for the random variables and a "shifting" vector SP = {SP_1, . . . , SP_m} for the m possibilistic variables. In this case, the deterministic optimization of the next cycle is
\[
\begin{aligned}
\min_{d,\, x^N,\, \mu_Y}\; & f(d,\, \mu_Y,\, \mu_Z,\, x^N,\, p^N) \\
\text{subject to}\; & g_i(d,\, \mu_Y - SS,\, {}^{1}Z_{\text{MPP}},\, x^N - SP,\, {}^{1}p_{\text{worst}}) \ge 0, \quad i = 1, \ldots, n \\
& x^L \le x^N \le x^U
\end{aligned}
\tag{41}
\]
In summary, the deterministic optimization problem for the kth cycle is
\[
\begin{aligned}
\min_{{}^{k}d,\, {}^{k}x^N,\, {}^{k}\mu_Y}\; & f({}^{k}d,\, {}^{k}\mu_Y,\, \mu_Z,\, {}^{k}x^N,\, p^N) \\
\text{subject to}\; & g_i({}^{k}d,\, {}^{k}\mu_Y - {}^{k}SS,\, {}^{k-1}Z_{\text{MPP}},\, {}^{k}x^N - {}^{k}SP,\, {}^{k-1}p_{\text{worst}}) \ge 0, \quad i = 1, \ldots, n \\
& d^L \le {}^{k}d \le d^U, \quad x^L \le {}^{k}x^N \le x^U, \quad \mu_Y^L \le {}^{k}\mu_Y \le \mu_Y^U
\end{aligned}
\tag{42}
\]
where the left superscript indicates the cycle number and the "shifting" vectors for the random design variables and the possibilistic design variables are ^k SS = ^{k-1}µ_Y - ^{k-1}Y_MPP and ^k SP = ^{k-1}x^N - ^{k-1}x_worst, respectively. Each constraint has its own "shifting" vectors. After the deterministic optimization, n possibilistic evaluation problems are solved (one for each constraint). The possibilistic assessment problem for the ith constraint is,
\[
\begin{aligned}
\min_{U,\, x,\, p}\; & g_i({}^{k}d,\, U,\, x,\, p) \\
\text{subject to}\; & \|U\| = \beta_{t_i} \\
& {}^{k}x^N - \delta^L(\alpha) \le x \le {}^{k}x^N + \delta^U(\alpha) \\
& p^L(\alpha) \le p \le p^U(\alpha)
\end{aligned}
\tag{43}
\]
The solution of Problem (43) determines the worst-case point (^k d, Y_{i,MPP}, Z_{i,MPP}, x_{i,worst}, p_{i,worst}) which is used in the next cycle. The sequence of Problems (42) and (43) is repeated until convergence.
Figure 10.10 Flowchart for the SPDO algorithm.
Figure 10.10 shows the flowchart of the proposed SPDO algorithm for a combination of possibilistic and random variables. The details of the algorithm have already been provided in this section. More information is provided in (Zhou & Mourelatos 2007).
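The decoupling idea can be sketched in a few lines of Python for the simpler case of Section 6.1 (only possibilistic variables). The objective f, the constraint g, the α-cut half-widths delta and the bounds below are hypothetical stand-ins, and scipy's general-purpose optimizer replaces whatever deterministic and worst-case optimizers would be used in practice; the loop structure follows Problems (37)-(38).

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-ins: objective, possibilistic constraint (g >= 0 required),
# alpha-cut half-widths of assumed triangular membership functions, and bounds.
f = lambda x: x[0] + x[1]
g = lambda x: x[0] * x[1] - 2.0
delta = lambda alpha: np.array([0.5, 0.5]) * (1.0 - alpha)
x_lower, x_upper = np.array([0.1, 0.1]), np.array([5.0, 5.0])

def spdo(alpha, x0, cycles=10, tol=1e-6):
    """Sequential PBDO with only possibilistic variables (Problems (37)-(38)):
    each cycle solves a deterministic optimization with the constraint boundary
    shifted inward, then searches for the worst case inside the alpha-cut box
    around the new nominal point; cycling stops when the shift settles and the
    worst-case point is feasible."""
    x_nom = np.asarray(x0, dtype=float)
    shift = np.zeros_like(x_nom)
    for _ in range(cycles):
        res = minimize(f, x_nom,
                       constraints=[{"type": "ineq", "fun": lambda x: g(x - shift)}],
                       bounds=list(zip(x_lower, x_upper)))
        x_nom = res.x
        box = list(zip(x_nom - delta(alpha), x_nom + delta(alpha)))
        x_worst = minimize(g, x_nom, bounds=box).x       # worst-case evaluation
        new_shift = x_nom - x_worst if g(x_worst) < -tol else shift
        if np.linalg.norm(new_shift - shift) < tol and g(x_worst) >= -tol:
            break
        shift = new_shift
    return x_nom

print(spdo(alpha=0.0, x0=[2.0, 2.0]))
```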
7 Examples

In this section, the possibility-based and evidence-based design algorithms as well as the SPDO are demonstrated with a cantilever beam example and a pressure vessel example. In both examples, comparisons are made with deterministic design and reliability-based design results. It should be noted that theoretically, the possibility and reliability-based results cannot be compared because the possibility and reliability theories are based on different axioms. However, for practical purposes, we attempt to compare them by arbitrarily using membership functions which "resemble" the probability density functions used in the reliability-based results.
Figure 10.11 Cantilever beam under vertical and lateral bending.
7.1 A cantilever beam example

In this example, a cantilever beam in vertical and lateral bending (Wu et al. 2001) is used (see Fig. 10.11). The beam is loaded at its tip by the vertical and lateral loads Y and Z, respectively. Its length L is equal to 100 in. The width w and thickness t of the cross-section are deterministic design variables. The objective is to minimize the weight of the beam. This is equivalent to minimizing f = w·t, assuming that the material density and the beam length are constant. Two non-linear failure modes are used. The first failure mode is yielding at the fixed end of the cantilever; the other failure mode is that the tip displacement exceeds the allowable value of D_0 = 2.5. The PBDO problem is formulated as
\[
\begin{aligned}
\min_{w,\, t}\; & f = w\, t \\
\text{subject to}\; & g_j^{\alpha_{\min}} \ge 0, \quad j = 1, 2 \\
& g_1(y, Z, Y, w, t) = y - \left(\frac{600}{w t^2}\, Y + \frac{600}{w^2 t}\, Z\right) \\
& g_2(E, Z, Y, w, t) = D_0 - \frac{4 L^3}{E w t} \sqrt{\left(\frac{Y}{t^2}\right)^2 + \left(\frac{Z}{w^2}\right)^2} \\
& 0 \le w, t \le 5
\end{aligned}
\tag{44}
\]
where g_1 and g_2 are the limit states corresponding to the two failure modes.
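The two limit states of Eq. (44) are easy to code directly. The sketch below evaluates them at the nominal parameter values quoted below in the text (Y = 1000, Z = 500, y = 40 000, E = 29 × 10^6); the trial values of w and t are hypothetical.

```python
import numpy as np

L, D0 = 100.0, 2.5   # beam length (in) and allowable tip displacement

def g1(y, Z, Y, w, t):
    """Yield limit state at the fixed end, Eq. (44)."""
    return y - (600.0 / (w * t**2)) * Y - (600.0 / (w**2 * t)) * Z

def g2(E, Z, Y, w, t):
    """Tip-displacement limit state, Eq. (44)."""
    return D0 - (4.0 * L**3 / (E * w * t)) * np.sqrt((Y / t**2)**2 + (Z / w**2)**2)

w, t = 2.5, 3.9   # hypothetical trial design
print(g1(40000.0, 500.0, 1000.0, w, t), g2(29e6, 500.0, 1000.0, w, t))
```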
The design variables w and t are deterministic. In the RBDO study of (Liang et al. 2007), Y, Z, y and E are normally distributed random parameters with Y ~ N(1000, 100) lb, Z ~ N(500, 100) lb, y ~ N(40 000, 2000) psi and E ~ N(29 × 10^6, 1.45 × 10^6) psi; y is the random yield strength, Z and Y are mutually independent random loads in the vertical and lateral directions respectively, and E is the Young modulus. A reliability index β = 3 has been used in (Liang et al. 2007) for both constraints. For the PBDO case, Y, Z, y and E are possibilistic parameters described with the triangular membership functions (x^N − 3σ, x^N, x^N + 3σ), where x^N is the normal point of each variable and σ is the standard deviation used in the RBDO study. The frame of discernment defined by the (x^N − 3σ, x^N + 3σ) coordinates is also used in EBDO. Table 10.2 compares the deterministic optimization, RBDO, PBDO and EBDO results. The PBDO optimum (objective function) with α = 0 is higher than the RBDO optimum. Because it represents the worst-case design, it provides an upper bound of
Table 10.2 Comparison of PBDO, EBDO and RBDO optima for the cantilever beam example.

                      Determ.    Reliability   Possibility optimum      Evidence optimum
                      optimum    optimum       α = 0.1      α = 0       pf = 0.1    pf = 0.0013
Design variables
  w                   2.0470     2.4781        2.5298       2.5901      2.4534      2.5028
  t                   3.7459     3.8421        4.1726       4.210       3.6162      3.9902
Objective f(w, t)     7.6679     9.5212        10.556       10.901      8.8721      9.9868
Constraints
  g1(x)/y             0          0             0            0           0           0.0032
  g2(x)/D0            0          0.1436        0.15         0.168       0.00428     0.0835
all RBDO optima obtained with different distributions, as long as these distributions have similar variability ranges (e.g. different beta distributions defined over the same range). For a higher α-cut (α = 0.1), the PBDO optimum reduces. It should be noted that the PBDO optimum at α = 1 coincides with the deterministic optimum. The last two rows of Table 10.2 show the normalized values of the two constraints at the optimum. The first constraint is normalized by the mean yield strength y = 40 000 and the second constraint is normalized by the allowable tip displacement D_0 = 2.5. Although both constraints are active at the deterministic optimum, only the first constraint is active for both the RBDO and PBDO optima.

The EBDO problem formulation is the same as Problem (44) but with different constraints. The new constraints are Pl(g_i < 0) ≤ p_f, i = 1, 2. The uncertain parameters P = [Y, Z, y, E] have the BPA structure of Table 10.3. The BPA for each interval of an uncertain parameter is assumed to be equal to the area under the PDF used in RBDO, in order to compare the EBDO design with the corresponding RBDO design. This is not how the BPA is obtained in general. As mentioned, expert opinions are used to construct the BPA structure. If, however, a random variable or parameter is described probabilistically, equivalent BPA values within specified intervals are calculated as equal to the area under the PDF (a small numerical sketch of this construction is given after Table 10.3). In doing so, the evidence theory can be used to handle a mixture of probabilistic and non-probabilistic variables.

The last two columns of Table 10.2 show the EBDO results for p_f = 0.1 and 0.0013 (β = 3). As expected, the deterministic optimum of 7.6679 is less than the RBDO optimum of 9.5212 which, in turn, is less than the EBDO optimum of 9.9868 at p_f = 0.0013 (β = 3). For p_f = 0.1, the EBDO optimum reduces. Furthermore, the EBDO optimum of 9.9868 at p_f = 0.0013 is better than the worst-case PBDO optimum of 10.901 (α = 0). Although only the first constraint is active for the RBDO and PBDO optima, both constraints are active for the EBDO optima, similarly to the deterministic case.

7.2 A pressure vessel example

This example considers the design of a thin-walled pressure vessel (Lewis & Mistree 1997) which has hemispherical ends, as shown in Fig. 10.12. The design objective is to
Table 10.3 BPA structure for y, Y, Z and E.

Z                          y (×10³)
Interval     BPA (%)       Interval    BPA (%)
[200 300]    2.2           [35 37]     6.1
[300 400]    13.6          [37 38]     9.2
[400 450]    15            [38 39]     15
[450 500]    19.2          [39 40]     19.2
[500 550]    19.2          [40 41]     19.2
[550 600]    15            [41 42]     15
[600 700]    13.6          [42 43]     9.2
[700 800]    2.2           [43 45]     7.1

Y                          E (×10⁶)
Interval       BPA (%)     Interval       BPA (%)
[700 800]      2.2         [26.5 27.5]    10
[800 900]      13.6        [27.5 28.5]    21
[900 1000]     34.1        [28.5 29]      13.5
[1000 1100]    34.1        [29 29.5]      13.5
[1100 1200]    13.6        [29.5 30.5]    21
[1200 1300]    2.4         [30.5 31.3]    21
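As stated in the text, each BPA in Table 10.3 equals the area under the corresponding normal PDF between the interval end points. The short sketch below reproduces the column for the load Z ~ N(500, 100); the small differences from the tabulated values are rounding only.

```python
from scipy.stats import norm

def interval_bpas(mean, std, edges):
    """BPA of each interval = area under the normal PDF between its edges (in %)."""
    cdf = norm(mean, std).cdf
    return [round(100.0 * (cdf(b) - cdf(a)), 1) for a, b in zip(edges[:-1], edges[1:])]

# Interval edges of Table 10.3 for the load Z ~ N(500, 100):
print(interval_bpas(500.0, 100.0, [200, 300, 400, 450, 500, 550, 600, 700, 800]))
# -> [2.1, 13.6, 15.0, 19.1, 19.1, 15.0, 13.6, 2.1], i.e. the tabulated column up to rounding
```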
Figure 10.12 Thin-walled pressure vessel.
calculate the radius R, mid-section length L and wall thickness t in order to maximize the volume while avoiding yielding of the material in both the circumferential and radial directions under an internal pressure P. Geometric constraints are also considered. The material yield strength is Y. A safety factor SF = 2 is used.
Table 10.4 BPA structure for R, L, t, P and Y.

R                      L                    t                      BPA (%)
[RN − 6.0  RN − 4.5]   [LN − 12  LN − 9]    [tN − 0.4  tN − 0.3]   0.13
[RN − 4.5  RN − 3.0]   [LN − 9   LN − 6]    [tN − 0.3  tN − 0.2]   2.15
[RN − 3.0  RN]         [LN − 6   LN]        [tN − 0.2  tN]         47.72
[RN  RN + 3.0]         [LN  LN + 6]         [tN  tN + 0.2]         47.72
[RN + 3.0  RN + 4.5]   [LN + 6   LN + 9]    [tN + 0.2  tN + 0.3]   2.15
[RN + 4.5  RN + 6.0]   [LN + 9   LN + 12]   [tN + 0.3  tN + 0.4]   0.13

P               Y                     BPA (%)
[800  850]      [208000  221000]      0.13
[850  900]      [221000  234000]      2.15
[900  1000]     [234000  260000]      47.72
[1000  1100]    [260000  286000]      47.72
[1100  1150]    [286000  299000]      2.15
[1150  1200]    [299000  312000]      0.13
The PBDO problem is stated as
\[
\begin{aligned}
\max_{R_N,\, L_N,\, t_N}\; & f = \frac{4}{3}\pi R_N^3 + \pi R_N^2 L_N \\
\text{subject to}\; & g_j^{\alpha_{\min}} \ge 0, \quad j = 1, \ldots, 5
\end{aligned}
\]
where
\[
\begin{aligned}
g_1(X) &= 1.0 - \frac{P (R + 0.5 t)\, SF}{2 t Y} \\
g_2(X) &= 1.0 - \frac{P (2 R^2 + 2 R t + t^2)\, SF}{(2 R t + t^2)\, Y} \\
g_3(X) &= 1.0 - \frac{L + 2 R + 2 t}{60} \\
g_4(X) &= 1.0 - \frac{R + t}{12} \\
g_5(X) &= 1.0 - \frac{5 t}{R}
\end{aligned}
\]
and the side constraints are 0.25 ≤ t_N ≤ 2.0, 6.0 ≤ R_N ≤ 24, 10 ≤ L_N ≤ 48.
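The objective and the five limit states above can be coded directly. The sketch below evaluates them at the deterministic optimum reported later in Table 10.5, using the safety factor SF = 2 and the nominal values P = 1000 and Y = 260 000 quoted later in the text.

```python
import math

SF = 2.0   # safety factor used in the example

def vessel_constraints(R, L, t, P, Y):
    """Normalized limit states g1..g5 of the pressure vessel problem."""
    g1 = 1.0 - P * (R + 0.5 * t) * SF / (2.0 * t * Y)
    g2 = 1.0 - P * (2.0 * R**2 + 2.0 * R * t + t**2) * SF / ((2.0 * R * t + t**2) * Y)
    g3 = 1.0 - (L + 2.0 * R + 2.0 * t) / 60.0
    g4 = 1.0 - (R + t) / 12.0
    g5 = 1.0 - 5.0 * t / R
    return g1, g2, g3, g4, g5

def vessel_volume(R, L):
    return (4.0 / 3.0) * math.pi * R**3 + math.pi * R**2 * L

# Deterministic optimum of Table 10.5 with nominal P = 1000 and Y = 260 000:
print(vessel_volume(11.75, 36.0))                                 # about 22 400
print(vessel_constraints(11.75, 36.0, 0.25, 1000.0, 260000.0))    # g3 and g4 are active (zero)
```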
The EBDO problem formulation is the same but with constraints Pl(g_j(X) < 0) ≤ p_f, j = 1, . . . , 5. For the EBDO case, the uncertainty in the design variables R, L and t and the design parameters P and Y is represented with the combined BPA structure of Table 10.4. To compare results with RBDO, the BPA values of R, L, t, P and Y are taken equal to the area under the PDF of a normal distribution for the intervals shown in Table 10.4. The normal distributions for R, L, t, P and Y have standard deviations
Table 10.5 Comparison of deterministic, RBDO, and EBDO optima for vessel example.

Design variables        Determ.    Reliability   Evidence optimum
                        optimum    optimum       pf = 0.2    pf = 0.0228
RN                      11.750     8.7244        8.333       8.1111
LN                      36.000     33.5186       30.407      26.1852
tN                      0.250      0.269         0.347       0.3472
Objective −f(RN, LN)    22 400     10 791        9053        7644
Table 10.6 Convergence history for the pressure vessel example.

Cycle #   Design variables (RN, µL, µt)   Obj.     g1(X)     g2(X)     g3(X)     g4(X)     g5(X)
α = 0
1         (11.75, 36.0, 0.25)             22 398   −0.2551   −1.5101   −0.2502   −0.3917   0.6897
2         (7.0108, 30.3867, 0.2892)       6132     0.4996    0.0       0.0       0.0       0.0258
3         (7.0107, 30.3867, 0.2893)       6132     0.5       0.0       0.0       0.0       0.0256
α = 0.2
1         (11.75, 36.0, 0.25)             22 398   −0.1857   −1.3713   −0.2202   −0.3167   0.7239
2         (7.9108, 30.3867, 0.2892)       8044     0.4996    0.0       0.0       0.0       0.4326
3         (7.9107, 30.3867, 0.2893)       8044     0.5       0.0       0.0       0.0       0.4325
equal to 1.5, 3, 0.1, 50 and 13 000, respectively. The mean values for parameters P and Y are taken equal to 1000 and 260 000. The intervals for R, L, t, P and Y extend four standard deviations from each side of the normal point, in an attempt to use a variation similar to that of the RBDO study. Finally, EBDO and PBDO use the same frame of discernment. Table 10.5 compares the deterministic optimization, RBDO and EBDO results. Similar conclusions to those of the previous example are drawn. A reliability index β = 2.0 (p_f = 0.0228) has been used in the RBDO study for all constraints. The EBDO maximum volume for p_f = 0.0228 is lower than the corresponding RBDO volume. For comparison purposes, the EBDO optima for p_f = 0.2 and p_f = 0.0228 have also been calculated. As shown in Table 10.5, the EBDO maximum volume increases with increasing p_f, as expected. In this example, the third and fourth constraints are active for the deterministic, RBDO and EBDO optima. Table 10.6 gives the convergence history of the SPDO method for α = 0 and α = 0.2. It lists the values of the design variables, the objective function and the five constraints for each cycle. For both α-cuts, the algorithm converges in three cycles.
Table 10.7 SPDO results and comparisons for the pressure vessel example.

Design variables       Determ.    Double-loop   Double-loop PBDO       SPDO
                       opt.       RBDO          α = 0.2    α = 0       α = 0.2    α = 0
RN                     11.750     8.7244        7.9108     7.000       7.9107     7.0107
µL                     36.000     33.5186       30.483     30.660      30.3867    30.3867
µt                     0.250      0.269         0.2894     0.2997      0.2893     0.2893
Objective f(RN, µL)    22 400     10 791        8062       6150        8044       6132
Constraints
  g1(X)                0.8173     0.5003        0.5        0.55        0.5        0.5
  g2(X)                0.6346     0             0          0.1         0          0
  g3(X)                0          0             0          0           0          0
  g4(X)                0          0             0          0           0          0
  g5(X)                0.8936     0.6891        0.4323     0           0.4325     0.0256
No. of F.E.            96         5904          9470       10 534      1832       2121
Table 10.7 compares the deterministic optimization, RBDO, double-loop PBDO and SPDO results. Two α-cuts (α = 0 and α = 0.2) are used for the possibilistic design. Similarly to the previous example, the proposed SPDO approach gives the same results as the double-loop PBDO with much better efficiency. For example, the number of function evaluations for α = 0 is 2121 for SPDO and 10 534 for double-loop PBDO. As expected, the deterministic optimum of 22 400 is higher than the RBDO optimum of 10 791 which, in turn, is higher than the worst-case (α = 0) PBDO optimum of 6150. Note that in this example the objective is maximized. Also, the PBDO optimum of 8062 (α = 0.2) is higher than the worst-case optimum of 6150 (α = 0). At the deterministic optimum, only the third and fourth constraints are active. However, at the RBDO and PBDO optima, the second constraint is also active. It should also be noted that the computational cost of the double-loop PBDO is usually higher than that of the double-loop RBDO (see Table 10.7) due to the different problem formulation between the two.
8 Conclusions

In this chapter, the possibility and evidence theories were used to assess design reliability with incomplete information. The possibility theory was viewed as a variant of fuzzy set theory. The different types of uncertainty and formal uncertainty theories were first introduced using the fundamentals of fuzzy measures. Subsequently, the commonly used vertex and discretization methods which are used for propagating non-probabilistic uncertainty were reviewed and compared with a hybrid (global-local) optimization method. It was shown that the hybrid optimization method is very efficient and has the same accuracy as the "brute force" discretization method. The possibility theory was also used in design. A possibility-based design optimization method was proposed where all design constraints are expressed possibilistically.
It was shown that the method gives a conservative solution compared with all conventional reliability-based designs obtained with different probability distributions. A general possibility-based design optimization method was also presented which handles a combination of random and possibilistic design variables. Furthermore, a sequential algorithm for possibility-based design optimization (SPDO) was introduced. It decouples a double-loop PBDO process into a sequence of cycles composed of a deterministic design optimization followed by a set of worst-case reliability evaluation loops. The computational cost is kept low, first by using the performance measure approach in reliability analysis and second by decoupling the deterministic design optimization from the worst-case reliability evaluation. A computationally efficient evidence-based design optimization method was also described, which can handle a mixture of epistemic and random uncertainties. A mean performance is optimized subject to the plausibility of constraint violation being small. Uncertainty is quantified using "expert" opinions. Two examples demonstrated the proposed possibility-based and evidence-based design optimization methods. It was shown that both the PBDO and EBDO designs are more conservative compared with the RBDO design. However, the EBDO design is usually less conservative compared with the PBDO design.
References

Agarwal, H., Renaud, J.E., Preston, E.L. & Padmanabhan, D. 2004. Uncertainty Quantification Using Evidence Theory in Multidisciplinary Design Optimization. Reliability Engineering and System Safety 85:281–294.
Akpan, U.O., Rushton, P.A. & Koko, T.S. 2002. Fuzzy Probabilistic Assessment of the Impact of Corrosion on Fatigue of Aircraft Structures. Paper AIAA-2002-1640.
Bae, H.-R., Grandhi, R.V. & Canfield, R.A. 2004. An Approximation Approach for Uncertainty Quantification Using Evidence Theory. Reliability Engineering and System Safety 86:215–225.
Bae, H.-R., Grandhi, R.V. & Canfield, R.A. 2004. Epistemic Uncertainty Quantification Techniques Including Evidence Theory for Large-Scale Structures. Computers and Structures 82:1101–1112.
Chen, L. & Rao, S.S. 1997. Fuzzy Finite Element Approach for the Vibration Analysis of Imprecisely Defined Systems. Finite Elements in Analysis and Design 27:69–83.
Choi, K.K., Du, L. & Youn, B.D. 2004. A New Fuzzy Analysis Method for Possibility-Based Design Optimization. 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, AIAA 2004-4585, Albany, NY.
Du, X. & Chen, W. 2000. An Integrated Methodology for Uncertainty Propagation and Management in Simulation-Based Systems Design. AIAA Journal 38(8):1471–1478.
Du, X. & Chen, W. 2004. Sequential Optimization and Reliability Assessment Method for Efficient Probabilistic Design. ASME Journal of Mechanical Design 126:225–233.
Du, X., Sudjianto, A. & Huang, B. 2005. Reliability-Based Design with a Mixture of Random and Interval Variables. ASME Journal of Mechanical Design 127:1068–1076.
Dubois, D. & Prade, H. 1988. Possibility Theory. New York: Plenum Press.
Elishakoff, I.E., Haftka, R.T. & Fang, J. 1994. Structural Design under Bounded Uncertainty – Optimization with Anti-Optimization. Computers and Structures 53:1401–1405.
Gu, X., Renaud, J.E. & Batill, S.M. 1998. An Investigation of Multidisciplinary Design Subject to Uncertainties. 7th AIAA/USAF/NASA/ISSMO Multidisciplinary Analysis and Optimization Symposium, St. Louis, Missouri.
Jones, D.R., Perttunen, C.D. & Stuckman, B.E. 1993. Lipschitzian Optimization Without the Lipschitz Constant. Journal of Optimization Theory and Applications 73(1):157–181.
Klir, G.J. & Filger, T.A. 1988. Fuzzy Sets, Uncertainty, and Information. Prentice Hall.
Klir, G.J. & Yuan, B. 1995. Fuzzy Sets and Fuzzy Logic: Theory and Applications. Prentice Hall.
Lee, J.O., Yang, Y.O. & Ruy, W.S. 2002. A Comparative Study on Reliability Index and Target Performance Based Probabilistic Structural Design Optimization. Computers and Structures 80:257–269.
Lewis, K. & Mistree, F. 1997. Collaborative, Sequential and Isolated Decisions in Design. Proceedings of ASME Design Engineering Technical Conferences, Paper# DETC1997/DTM-3883.
Liang, J., Mourelatos, Z.P. & Tu, J. 2007. A Single-Loop Method for Reliability-Based Design Optimization. In press International Journal of Product Development. Also, Proceedings of ASME Design Engineering Technical Conferences, 2004, Paper# DETC2004/DAC-57255.
Lombardi, M. & Haftka, R.T. 1998. Anti-Optimization Technique for Structural Design under Load Uncertainties. Computer Methods in Applied Mechanics and Engineering 157:19–31.
Moore, R.E. 1966. Interval Analysis. Prentice-Hall.
Mourelatos, Z.P. & Zhou, J. 2006. A Design Optimization Method using Evidence Theory. ASME Journal of Mechanical Design 128(4):901–908.
Mourelatos, Z.P. & Zhou, J. 2005. Reliability Estimation with Insufficient Data Based on Possibility Theory. AIAA Journal 43(8):1696–1705.
Muhanna, R.L. & Mullen, R.L. 2001. Uncertainty in Mechanics Problems – Interval-Based Approach. Journal of Engineering Mechanics 127(6):557–566.
Mullen, R.L. & Muhanna, R.L. 1999. Bounds of Structural Response for all Possible Loadings. ASCE Journal of Structural Engineering 125(1):98–106.
Nikolaidis, E., Chen, S., Cudney, H., Haftka, R.T. & Rosca, R. 2004. Comparison of Probability and Possibility for Design Against Catastrophic Failure Under Uncertainty. ASME Journal of Mechanical Design 126:386–394.
Oberkampf, W.L. & Helton, J. 2002. Investigation of Evidence Theory for Engineering Applications. AIAA Non-Deterministic Approaches Forum, AIAA 2002-1569, Denver, CO.
Oberkampf, W., Helton, J. & Sentz, K. 2001. Mathematical Representations of Uncertainty. AIAA Non-Deterministic Approaches Forum, AIAA 2001-1645, Seattle, WA, April 16–19.
Penmetsa, R.C. & Grandhi, R.V. 2002. Efficient Estimation of Structural Reliability for Problems with Uncertain Intervals. Computers and Structures 80:1103–1112.
Penmetsa, R.C. & Grandhi, R.V. 2002. Estimating Membership Response Function using Surrogate Models. Paper AIAA 2002-1234.
Rao, S.S. & Cao, L. 2002. Optimum Design of Mechanical Systems Involving Interval Parameters. ASME Journal of Mechanical Design 124:465–472.
Rao, S.S. & Sawyer, J.P. 1995. A Fuzzy Finite Element Approach for the Analysis of Imprecisely Defined Systems. AIAA Journal 33:2264–2370.
Ross, T.J. 1995. Fuzzy Logic with Engineering Applications. McGraw Hill.
Sentz, K. & Ferson, S. 2002. Combination of Evidence in Dempster–Shafer Theory. Sandia National Laboratories Report SAND2002-0835.
Tu, J., Choi, K.K. & Park, Y.H. 1999. A New Study on Reliability-Based Design Optimization. ASME Journal of Mechanical Design 121:557–564.
Tu, J. & Jones, D.R. 2003. Variable Screening in Metamodel Design by Cross-Validated Moving Least Squares Method. Proceedings 44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, AIAA-2003-1669, Norfolk, VA.
Wang, G. 2003. Adaptive Response Surface Method Using Inherited Latin Hypercube Design Points. ASME Journal of Mechanical Design 125:1–11.
Wu, Y.-T., Shin, Y., Sues, R. & Cesare, M. 2001. Safety-Factor Based Approach for Probabilistic-Based Design Optimization. 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Seattle, WA.
Yager, R.R., Fedrizzi, M. & Kacprzyk, J. (eds) 1994. Advances in the Dempster–Shafer Theory of Evidence. John Wiley & Sons, Inc.
Ye, K.Q., Li, W. & Sudjianto, A. 2000. Algorithmic Construction of Optimal Symmetric Latin Hypercube Designs. Journal of Statistical Planning and Inference 90:145–159.
Youn, B.D., Choi, K.K. & Park, Y.H. 2001. Hybrid Analysis Method for Reliability-Based Design Optimization. ASME Journal of Mechanical Design 125(2):221–232.
Zadeh, L.A. 1965. Fuzzy Sets. Information and Control 8:338–353.
Zadeh, L.A. 1978. Fuzzy Sets as a Basis for a Theory of Possibility. Fuzzy Sets and Systems 1:3–28.
Zhou, J. & Mourelatos, Z.P. 2007. A Sequential Algorithm for Possibility-Based Design Optimization. In press ASME Journal of Mechanical Design. Also, Proceedings of ASME Design Engineering Technical Conferences, 2006, Paper# DETC2006-99232.
Chapter 11
A decoupled approach to reliability-based topology optimization for structural synthesis
Neal M. Patel, John E. Renaud & Donald Tillotson, University of Notre Dame, Notre Dame, IN, USA
Harish Agarwal General Electric Global Research, Niskayuna, NY, USA
Andrés Tovar National University of Colombia, Bogota, Colombia
ABSTRACT: Conceptual designs of structures have been generated using topology optimization over the past two decades. However, traditional topology optimization techniques neglect uncertainties that exist in the real-world. In this chapter, this problem is addressed by including the notion of reliability into the design process. A reliability-based topology optimization (RBTO) framework for structural synthesis is proposed using a decoupled reliability-based design optimization (RBDO) approach, so that the topology synthesis is separate from the reliability analysis. In the algorithm presented, a maximum allowable displacement failure mode is considered. Starting from a continuum design space of uniform material distribution and initial uncertain variable values, a deterministic topology optimization is followed by a reliability analysis of the resulting structure to determine the most probable point of failure (MPP) for the current structure. The MPP is determined with respect to the maximum allowable deflection of the structure for a given applied loading. The non-gradient Hybrid Cellular Automaton (HCA) method used for topology optimization is combined with the decoupled approach for RBDO to develop a new continuum-based approach to RBTO. The objective of this chapter is to present the background behind the methods employed and demonstrate capabilities of the RBTO framework using examples.
1 Introduction

Concept designs for minimum compliance structures can be synthesized using topology optimization (Bendsoe and Kikuchi 1988). Traditional techniques neglect variabilities that occur over the life and use of a structure. For example, the structure of a bridge can incur vastly different loading depending on the traffic pattern for a given time of the day. Uncertainties may exist in certain material properties as well. Reliability-based design optimization (RBDO) is a probabilistic optimization method that has been used in design problems to account for variation and uncertainty. The objective of RBDO is to mediate between cost and safety. In deterministic optimization, designs are often driven to the limits of the design constraints, neglecting tolerances in modeling and simulation uncertainties. Therefore, the resulting optimized designs can be unreliable with a high probability of failure when in use. Factor of safety techniques have been
employed as a popular method for accounting for uncertainties and off-design operation, but these designs are typically over-engineered, leading to higher cost since the uncertainties are not necessarily quantified. The probabilistic RBDO approach facilitates the design to a specific risk and target reliability level accounting for the various sources of uncertainty. In probabilistic optimization methods, these variational uncertainties are modeled as random variables. In this respect, the deterministic analysis can be viewed as a special case of the probabilistic analysis, where the deterministic quantities are a trivial instance of the random variables.

Reliability-based topology optimization (RBTO) extends the notion of reliability to the area of topology optimization. In this chapter, we consider a discretized continuum design domain, where the density of each element is used as a design variable. Traditional topology optimization methods drive the topology of a structure to an optimum design based on a single constraint on mass. However, nothing can be said about the reliability of the resulting topology since it does not account for uncertainties and the failure modes that the structure would realistically encounter. Because of the large number of design variables associated with topology optimization problems, the inclusion of RBDO methods could be computationally time prohibitive for large-scale problems because of gradient calculations required in the sensitivity analysis. Therefore, research in this area is concentrated on developing efficient reliability-based topology optimization techniques. Kharmanda et al. (Kharmanda, Olhoff, Mohamed, and Lemaire 2004) proposed a reliability-based methodology for topology optimization using a heuristic strategy that aims to reduce mass while improving the reliability level of the structure without increasing its weight. However, in this approach, the failure mode is purely a linear combination of the random variables and does not have any physical meaning. Mogami et al. (Mogami, Nishiwaki, Izui, Yoshimura, and Kogiso 2006) incorporated reliability-based constraints in the topology optimization method using discrete frame elements and the traditional double-loop approach. Maute and Frangopol (Maute and Frangopol 1998) extended the notion of reliability to Micro-Electro-Mechanical Systems (MEMS) design using topology optimization. In the RBTO framework presented here, a decoupled approach is employed such that the topology optimization is separate from the reliability analysis (Agarwal and Renaud 2006). The decoupled reliability-based design optimization methodology is an approximate technique to obtain consistent reliable designs at a lower computational expense. Starting from an initial design domain of full material and uncertain parameters, such as loads, a complete topology optimization is followed by a reliability analysis of the structure; because the main optimization and the reliability analysis phases are detached, we refer to this as a decoupled approach. Although the RBTO framework can be generalized for use with any topology optimization method, in this work the Hybrid Cellular Automaton (HCA) method is utilized for deterministic continuum structural synthesis of minimum compliance structures (Tovar, Quevedo, Patel, and Renaud 2006). It is assumed that the structural deformation is elastic and loading is static.
The change in density is evaluated locally using a CA rule, while the compliance is evaluated using a global structural analysis via the finite element method (FEM). In the presented methodology, RBTO has the same objective as the deterministic topology optimization: minimize compliance. Typically, maximum deflection and stress are of concern when designing a structure for
maximum stiffness. Here we consider the mode of failure to be the maximum deflection of the structure when loads are applied. Therefore, a constraint on maximum allowable displacement of the structure is implemented as well as a similar displacement constraint formulation for the limit-state function. The utilization of the gradient free HCA method in the RBTO framework adds efficiency to the methodology. In the topology optimization problem, the design variables are the densities of the material elements that make up the design domain. Characteristics of the problem that may have some associated uncertainty are identified as uncertain parameters. The reliability subproblem is applied to the topology generated. A new topology optimization is executed using the uncertain parameter values at the most probable point of failure (MPP), as determined in this subproblem. This process is repeated until convergence. The RBTO framework is applied to two design problems and the final designs are validated using the Monte Carlo simulation. In these problems, the elastic modulus and applied loading are considered as the uncertain parameters, characterized by a normal distribution, and a first-order estimate is used to approximate the failure surface.
2 Reliability-based design optimization

Optimized designs based on a deterministic formulation are usually associated with a high probability of failure due to inherent uncertainties associated with the imposed design constraints. In today's competitive marketplace, it is very important that the resulting designs are both optimum and at the same time reliable. Optimized designs without considering the variability of design variables and parameters can be prone to failure in service. In order to achieve the objective of obtaining reliable optimum designs, a designer must replace a deterministic optimization with a reliability-based design optimization (RBDO), where the critical probabilistic constraints are replaced with reliability constraints, as shown below
\[
\begin{aligned}
\min_{x}\; & f(x, p) \\
\text{subject to}\; & g_D(V) \ge 0 \\
& g_R(x, p) \ge 0 \\
& x^l \le x \le x^u
\end{aligned}
\tag{1}
\]
where x and p represent the design variables and fixed parameters, respectively, and g_R and g_D denote reliability and deterministic constraints. The reliability constraints are either constraints on probabilities of failure corresponding to each probabilistic constraint, or a single constraint on the overall system probability of failure. The reliability constraints (g_R) can be formulated as
\[
g_i^R = P_{\text{allow}_i} - P_i, \quad i = 1, \ldots, k \tag{2}
\]
\[
g^R = P_{\text{allow}_{\text{sys}}} - P_{\text{sys}} \tag{3}
\]
for k constraints, where P_i is the failure probability of the probabilistic constraint g_i^R at a given design and P_{allow_i} is the allowable probability of failure for that failure mode.
The parameter P_sys is the system failure probability at a given design and P_allow_sys is the allowable system probability of failure. These probabilities of failure are usually estimated by employing standard reliability techniques. The reliability analysis is a tool used to compute the reliability index or the probability of failure corresponding to a given failure mode or for the entire system (Haldar and Mahadevan 2001). The reliability analysis involves a probability distribution transformation, the search for the MPP, and the evaluation of the cumulative Gaussian distribution function. The uncertainties are modeled as continuous random variables V = (V_1, V_2, . . . , V_n)^T, with known (or assumed) continuously differentiable distribution functions, F_V(v). The ith random probabilistic constraint is denoted as g_i^R(V, η), where η refers to deterministic parameters, also called limit state parameters. In the following, v denotes a realization of the random variables V. Letting g_i^R(V, η) ≤ 0 represent the failure domain and g_i^R(V, η) = 0 be the so-called limit state function, the time-invariant probability of failure for the ith probabilistic constraint is given by
\[
P_i(\eta) = \int_{g_i^R(v,\, \eta) \le 0} f_V(v)\, dv \tag{4}
\]
where f_V(v) is the joint probability density of V. It is almost impossible to find an analytical solution to the above integral. In standard reliability techniques, a probability distribution transformation T is usually employed, as illustrated in Fig. 11.1. An arbitrary n-dimensional random vector V = (V_1, V_2, . . . , V_n)^T is mapped into an independent standard normal vector U = (U_1, U_2, . . . , U_n)^T. The standard normal random variables are characterized by zero mean and unit variance. The limit state function in U-space can be obtained as g_i^R(x, η) = g_i^R(T(u), η) = G_i^R(u, η) = 0. The failure domain in U-space is G_i^R(u, η) ≤ 0.
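For independent random variables the transformation T reduces to u_i = Φ^{-1}(F_{V_i}(v_i)) applied component by component. The sketch below illustrates this mapping; the two marginal distributions are hypothetical, and correlated variables (which require, e.g., a Nataf or Rosenblatt transformation) are not treated.

```python
from scipy.stats import norm, lognorm

def to_standard_normal(v, marginals):
    """Map a realization v of independent random variables to U-space via
    u_i = Phi^{-1}(F_{V_i}(v_i)).  Dependence (Nataf/Rosenblatt) is not treated."""
    return [norm.ppf(dist.cdf(vi)) for vi, dist in zip(v, marginals)]

# Hypothetical marginals: a normal load and a lognormal modulus.
marginals = [norm(1000.0, 100.0), lognorm(s=0.1, scale=29e6)]
print(to_standard_normal([1100.0, 29e6], marginals))   # approximately [1.0, 0.0]
```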
Figure 11.1 Transformation from the original space to the standard space.
Equation (4) thus transforms to
\[
P_i(\eta) = \int_{G_i^R(u,\, \eta) \le 0} \phi_U(u)\, du \tag{5}
\]
where φ_U(u) is the standard normal density of u. If the limit state function in U-space is affine, i.e., if G^R(u, η) = α^T u + β, then an exact result for the probability of failure is P_f = Φ(−β), where Φ(·) is the cumulative Gaussian distribution function. If the limit state function is close to being linear, i.e., if G^R(u, η) ≈ α^T u + β with β = −α^T u*, where u* is the solution of the following optimization problem
\[
\begin{aligned}
\min_{u}\; & \|u\| \\
\text{subject to}\; & G^R(u, \eta) = 0
\end{aligned}
\tag{6}
\]
then the first-order estimate of the probability of failure is P_f = Φ(−β_p), where α represents the vector of direction cosines at the solution point. The solution u* of the above optimization problem, the so-called design point, β-point or MPP of failure, defines the reliability index β_p = ‖u*‖. This method of estimating the probability of failure is known as the First-Order Reliability Method (FORM) (Haldar and Mahadevan 2001). In the second-order reliability method (SORM), the limit state function is approximated as a quadratic surface (Breitung 1984). However, first-order approximations, P_f(η) ≈ Φ(−β_p), are usually sufficient for most practical cases and, therefore, are used in this chapter. Using the FORM estimate, the reliability constraints in Eq. (2) can be written in terms of reliability indices as follows
\[
g_i^{rc} = \beta_i - \beta_{\text{reqd}_i} \tag{7}
\]
where β_i is the calculated reliability index and β_{reqd_i} = −Φ^{-1}(P_{allow_i}) is the desired reliability index for the ith probabilistic constraint. This is referred to as the reliability index approach (RIA). RIA can be solved as an optimization problem to enforce the constraint in Eq. (2). The reliability index corresponding to a failure mode requires the solution of the optimization problem in (6). Various algorithms have been reported in the literature (P. Lui 1991) to solve for the solution, which typically requires many system analysis evaluations. Moreover, RIA may fail to provide a solution to the FORM problem, especially when the limit state surface is far away from the origin in U-space or when the case G^R(u, η) = 0 never occurs at a particular design variable setting. Thus, the most challenging task is the search for the MPP. To overcome these difficulties in RIA, Choi et al. (Choi, Youn, and Yang 2001) provide an improved formulation to solve the RBDO problem. In this method, known as the performance measure approach (PMA), the reliability constraints are stated by an inverse formulation
\[
g_i^{rc} = G_i^{R,*}(u_i^*, \eta), \quad i = 1, \ldots, k \tag{8}
\]
where G_i^{R,*} is the solution to an inverse reliability analysis (IRA). This optimization problem is stated as
\[
\begin{aligned}
\min_{u}\; & G_i^R(u, \eta) \\
\text{subject to}\; & \|u\| = \beta_{\text{reqd}_i}
\end{aligned}
\tag{9}
\]
where u_i^* is the optimum (the corresponding MPP in IRA) of the ith reliability constraint. Solving RBDO by the PMA formulation is usually more efficient and robust than the RIA formulation, where the reliability is evaluated directly. PMA is, therefore, used in the proposed methodology. The efficiency lies in the fact that the search for the MPP of an inverse reliability problem is easier than the search for the MPP corresponding to an actual reliability.
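The inverse reliability analysis of Problem (9) searches for the minimum of the limit state on the sphere ‖u‖ = β_reqd. The following sketch uses scipy's general-purpose constrained optimizer and a hypothetical linear limit state; it is a minimal illustration, not the MPP search algorithm used in the chapter.

```python
import numpy as np
from scipy.optimize import minimize

def pma_inverse_mpp(G, beta_reqd, n_dim):
    """Problem (9): minimize G(u) subject to ||u|| = beta_reqd.
    Returns the inverse-reliability MPP and the performance measure there."""
    cons = [{"type": "eq", "fun": lambda u: np.linalg.norm(u) - beta_reqd}]
    u0 = np.full(n_dim, beta_reqd / np.sqrt(n_dim))   # start on the sphere
    res = minimize(G, u0, constraints=cons)
    return res.x, res.fun

# Hypothetical linear limit state in U-space (violated when G < 0).
G = lambda u: 3.0 - u[0] - 0.5 * u[1]
u_star, g_star = pma_inverse_mpp(G, beta_reqd=3.0, n_dim=2)
print(u_star, g_star)   # g_star >= 0 would mean the reliability target is met
```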
2.1 Reliability in structural optimization
Reliability in structural design has been developed considerably since the 1970’s (Moses 1973). Haldar and Mahadevan (Haldar and Mahadevan 2001), Haftka et al. (Haftka, Gürdal, and Kamat 1990), among others (Frangopol 1998), present a comprehensive background in structural reliability. Murotsu and Shao (Murotsu and Shao 1989) applied the notion of reliability to shape optimization of truss structures, where nodal coordinates are used as shape design variables with sizing design variables, such as the cross-sectional areas of the truss members. Papadrakakis and Lagaros utilized neural networks and the Monte Carlo simulation to perform reliability-based structural optimization of large-scale structural systems. Royset et al. (Royset, Kiureghian, and Polak 2001) developed a decoupled technique for reliability-based structural optimization where the structural optimization and reliability analysis were separated. In that methodology, a semi-definite optimization algorithm was utilized for the structural optimization. Frangopol and Maute (Frangopol and Maute 2003) reviewed the state of the art in reliability based design in both civil and aerospace structures. The inclusion of reliability was then extended to the design of aeroelastic structures by Allen and Maute (Allen and Maute 2004). In the current work, reliability is explored in the area of topology optimization.
3 Topology optimization

The roots of topology optimization date back to the late 1980's. This computational technique for the optimal distribution of material of continuum structures was first introduced by Bendsøe and Kikuchi (Bendsoe and Kikuchi 1988). Topology optimization can be viewed as a method for developing an initial, or concept, design. The optimization process systematically eliminates and redistributes material throughout the domain to minimize or maximize a specified objective. Early work in topology optimization generally dealt with simple problems that used the assumptions of elastic material properties, linear deformations, and static loading conditions. A comprehensive review of topology optimization can be found in literature by Bendsøe and Sigmund (Bendsøe and Sigmund 1989), Rozvany (Rozvany 1997), and Eschenaueuer and Olhoff (Eschenaueuer and Olhoff 2001).
The objective and constraints considered are a global structural response such as mean compliance, von Mises stresses, eigenfrequencies, or geometrical parameters such as volume (or mass) or perimeter. This can be extended to multiple loading conditions. Traditionally, using the static-elastic assumption, the objective of a structural optimization problem is to achieve minimum compliance or strain energy with a constraint on the mass, or volume V. This can be expressed formally as
\[
\begin{aligned}
\min_{\rho_i}\; & f(\rho) \\
\text{subject to}\; & \sum_{i=1}^{N} \rho_i\, v_i \le V \\
& \rho_{\min} \le \rho_i \le \rho_{\max}
\end{aligned}
\tag{10}
\]
where ρ are the elemental densities. The compliance of a structure due to loading can be expressed as
\[
c(x) = d^T K(x)\, d \tag{11}
\]
where K is the global stiffness matrix, d is the vector of global displacements, and F is the vector of external global forces. The vector x is the set of design variables related to the material state of the elements in the design domain, such as ρ.
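Given an assembled stiffness matrix and load vector, the compliance of Eq. (11) follows directly from the displacement solution. A toy sketch (the 2-DOF matrix below is only a stand-in for an assembled finite element model):

```python
import numpy as np

def compliance(K, F):
    """Compliance c = d^T K d (= F^T d) with d the solution of K d = F, Eq. (11)."""
    d = np.linalg.solve(K, F)
    return d @ K @ d

K = np.array([[4.0, -1.0], [-1.0, 2.0]])   # stand-in for an assembled global stiffness matrix
F = np.array([1.0, 0.5])                   # external load vector
print(compliance(K, F))
```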
3.1 Material parametrization
Ultimately, the goal of topology optimization is to determine a material distribution within the design domain to achieve a specified objective. One can utilize discrete structural elements, known as the ground structure approach, to describe a structure. In this work, continuum elements are used. For continuum structures, the homogenization and density approaches are the two most popular material parameterizations.

3.1.1 The homogenization approach

The initial work in topology optimization of continuum structures was based on composite material models to describe the material properties in all dimensions. This technique, presented in the seminal work of Bendsøe and Kikuchi (Bendsoe and Kikuchi 1988), is referred to as the homogenization approach, which uses composite materials as the basis for describing varying material properties where each element is a microstructure. The homogenization can be viewed as an interpolation model for void and full material. In the homogenization approach, the design domain consists of square cells. Each cell has a rectangular hole at the centroid defined by lengths a and b, as shown in Fig. 11.2. The rectangle is oriented at an angle θ. Ultimately, the density of an element is a function of the variables a_i, b_i, and θ_i. Thus, each element has three variables associated with it. The relationship between the size of the cavity and the material properties is obtained using the homogenization method. This method is typically employed in two stages (Duysinx 1997). In the first stage, the microstructure orientation θ is varied,
Figure 11.2 A unit cell of a microstructure parameterized using the homogenization method.
Figure 11.3 An illustration of the density approach to material parametrization in topology optimization.
based on the principal strains. In the second stage, the microstructure parameters a and b are updated. This approach to material parametrization is typically utilized for linear-elastic material assumptions (Bendsoe and Kikuchi 1988), but has been applied to problems with both material and geometric nonlinearities as well (Yuge and Kikuchi 1995; Yuge, Iwai, and Kikuchi 1999).

3.1.2 The density approach

A second technique for material parametrization deals with the more direct approach of associating just one design variable with each individual material element, as illustrated in Fig. 11.3. This is called the density approach. The material model is defined to allow the material to assume intermediate property values by utilizing an interpolation function. The design variables are the relative densities (x_i) of the elements where
0 represents a void and 1 is full density. The density of a material element can be expressed as
\[
\rho_i(x_i) = x_i\, \rho_0, \qquad 0 < x_i \le 1 \tag{12}
\]
where ρ_0 is the density of the base material. In utilizing the finite element method (FEM), the design variable is mapped to the global stiffness matrix by relating the relative density of an element to its elastic modulus. The solid isotropic material with penalization (SIMP) model (Bendsoe 1989; Zhou and Rozvany 1991) is a commonly utilized interpolation scheme that heuristically relates the relative density to the elastic modulus of each element using the following expression
\[
E_i(x_i) = x_i^p\, E_0 \tag{13}
\]
where p is the penalization parameter (p ≥ 1) and E_0 is the elastic modulus of the base isotropic material. Therefore, we can view the elements of differing relative densities in the design domain as unique isotropic material elements. The power p is used to penalize intermediate densities to drive the elemental densities within the design domain to have either full density (x = 1) or no density (x = 0). Most optimizers require this penalization to generate 0–1 topologies. Although the density approach is used in gradient-based optimization methods because it is a continuous function, this material parametrization can be utilized with non-gradient methods so that material is distributed in a continuous manner from one iteration to the next. This allows the topology to evolve in a smooth, efficient manner. In this RBTO framework, a linear interpolation model (p = 1) is utilized to relate the design variable x_i to the elastic modulus of a material element with an intermediate density, as expressed by
\[
E_i(x_i) = x_i\, E_0 \tag{14}
\]
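A minimal sketch of the interpolation of Eqs. (13)-(14) is given below. The value p = 3 used for illustration is an assumption (a commonly used choice); the text only requires p ≥ 1, and the RBTO framework itself uses p = 1.

```python
import numpy as np

def interpolated_modulus(x, E0, p=3.0):
    """Eq. (13): E_i = x_i^p * E0; p = 1 gives the linear model of Eq. (14)."""
    return np.asarray(x) ** p * E0

densities = np.array([0.2, 0.5, 1.0])
print(interpolated_modulus(densities, E0=29e6, p=3.0))   # penalized intermediate densities
print(interpolated_modulus(densities, E0=29e6, p=1.0))   # linear interpolation, Eq. (14)
```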
3.2 Optimization techniques
Various methodologies have been developed for topology optimizations over the past two decades. Topology optimization algorithms fall into the categories of mathematical programming (MP), optimality criteria (OC), and evolutionary programming methods (Bendsøe and Sigmund 1989). Mathematical programming techniques are mathematically based methods for optimization. OC methods are derived from the Karush-Kuhn-Tucker (KKT) optimality conditions. Evolutionary methods are heuristic, or intuition-based, approaches that use mechanisms inspired by biological evolution, such as reproduction, mutation, and survival of the fittest, to find an optimal solution to a problem. An important distinction between classes of methods is that MP and OC methods utilize continuous design variables whereas evolutionary methods use discrete representations as design variables. Sequential Convex Programming (SCP) is an example of a MP approach used for solving topology design problems. The Method of Moving Asymptotes (MMA), developed by Svanberg (Svanberg 1987), is the most popular SCP algorithm used for structural optimization because of its efficiency. In this method, a strictly convex subproblem is approximated at each iteration based on sensitivity information
at the current design and then solved. The roots of OC methods in continuum-based topology optimization date back to the pioneering work of Bendsøe and Kikuchi. OC methods are primarily suited for problems containing a small number of constraints as compared to the number of design variables. In general, the OC methods are more computationally efficient than conventional MP methods (Rozvany, Bendsoe, and Kirsh 1995). Since the material volume constraint is the only active constraint, an OC method can be used to provide more rapid convergence compared to other optimization schemes. The aforementioned algorithms require gradient information in obtaining the final solution. In contrast, numerous topology optimization methodologies have been developed using evolutionary strategies that do not require gradients. An often used but inefficient approach is to utilize genetic algorithms (GAs) or semi-stochastic techniques. These methods may be more likely to find global solutions, but they require thousands of function calls. Another non-gradient based methodology developed by Xie and Stevens (Xie and Stevens 1997) is called Evolutionary Structural Optimization (ESO). It is based on the concept of progressively removing inefficient material from a structure so that it evolves into an optimal design. Another approach that requires no gradient information and utilizes cellular automata (CA) is the Hybrid Cellular Automaton (HCA) method. Since there is no randomness in the HCA formulation, this method is considered to be a MP method. 3.2.1 Topology s y n t h e s is u s in g h y b r id ce llu lar aut o mat a A cellular automaton (CA) is a discrete model studied in computability theory and mathematics (Wolfram 2002). It consists of an regular grid of cells, or lattice, where each cell is characterized by a finite number of states. The state of each cell at a given time, or generation, is a function of the states of a finite number of neighboring cells, called the neighborhood. Every cell has the same set of rules, which are applied based on information in its neighborhood. These rules are applied to the entire CA lattice each generation. The notion of cellular automata was initially conceived by John von Neumann in the late 1940’s. According to Burks (Burks 1970), the first CA proposed by von Neumann was a two-dimensional square lattice comprised of several thousands cells. Each of these cells had up to 29 possible states. The CA rule required the state of each cell plus its four nearest neighbors, located directly north, south, east, west. This CA model was so complex that it has only been partially implemented on a computer. The von Neumann rule has the so-called property of universal computation, meaning that there exists an initial configuration of the CA which leads to the solution of any computer algorithm. Accordingly, any universal computer circuit (i.e., logical gate) can be simulated by the rule of the automaton. This illustrates that complex and unexpected behavior can emerge from a CA rule. Cellular automata rules are applied over a number of discrete time steps on each CA element based on information collected in its neighborhood. The rules operate on the set of the states of neighboring cells. The rules are applied iteratively for as many time steps as required. Therefore, the global behavior of the CAs is governed by the set of local rules. These rules operate according to local information collected in the neighborhood of each cellular automaton. The final state of a CA is defined by
the state of itself and the states of the CAs within the neighborhood. For example, the information collected from a neighborhood can be expressed as

\bar{S}_i = \frac{1}{\hat{N} + 1} \sum_{j=0}^{\hat{N}} S_j    (15)

where S_0 is the field state of the ith CA and \hat{N} is the number of elements in its neighborhood. This can be viewed as a filtering technique that prevents the numerical instabilities of checkerboarding and mesh dependency. In practice, the size of the neighborhood is often limited to the adjacent cells but can also be extended. Figure 11.4 depicts some common two-dimensional neighborhood layouts. In the cellular automata paradigm, the same neighborhood is applied for all CAs in the lattice. In the context of structural optimization, no state information exists outside of the design domain; therefore, the neighborhood is modified for the boundary elements to include only neighbors within the design region.
One of the first applications of cellular automata to structural design was presented by Inou et al. (Inou, Shimotai, and Uesugi 1994; Inou, Uesugi, Iwasaki, and Ujihashi 1998). CAs have been applied to both discrete and continuous structures. Gürdal and Tatting (Gürdal and Tatting 2000) and Slotta et al. (Slotta, Tatting, Watson, Gürdal, and Missoum 2002) applied cellular automata to truss structures. In that application, a rectangular design domain was composed of an array of truss elements; each cell was composed of a node and the eight trusses from neighboring nodes in a forty-five-degree arrangement. Kita and Toyoda (Kita and Toyoda 2000) presented a methodology that is similar to the HCA method developed by Tovar for structural synthesis in that it utilizes the finite element method for structural analysis. The local update rule was based on the minimization of both the weight of the structure and the deviation between the yield stress and the von Mises equivalent stress for each cell. Furthermore, a two-dimensional isotropic material was considered where the thickness of each CA was the design variable. Hajela and Kim (Hajela and Kim 2001) used a genetic algorithm (GA) based on energy minimization to determine an appropriate CA rule for a two-dimensional continuum.
The Hybrid Cellular Automaton (HCA) method is a computational technique that has demonstrated the ability to act as an optimization tool for the synthesis of optimal topologies. This approach is inspired by the biological process of bone remodeling and was first presented by Tovar (Tovar 2004). As done in other topology optimization methods, the design domain is discretized into material elements.
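As an illustration of the averaging in Eq. (15), the short sketch below computes the effective field state of one cell on a regular lattice. It is only a minimal sketch, assuming a von Neumann (N̂ = 4) neighborhood and a two-dimensional array of field states; the array layout and the function name are illustrative and not taken from the chapter.

import numpy as np

def effective_field_state(S, i, j):
    """Average the field state of cell (i, j) and its von Neumann neighbors.

    S is a 2-D array of field states (e.g., element compliances) on a regular
    CA lattice; neighbors outside the design domain are simply omitted,
    mirroring the boundary treatment described in the text.
    """
    rows, cols = S.shape
    values = [S[i, j]]                                  # S0: the cell's own state
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # north, south, west, east
        ni, nj = i + di, j + dj
        if 0 <= ni < rows and 0 <= nj < cols:
            values.append(S[ni, nj])
    return sum(values) / len(values)                    # Eq. (15): mean over N̂ + 1 states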
Figure 11.4 Typical 2-D neighborhoods for CAs: (a) Empty (N̂ = 0); (b) Von Neumann (N̂ = 4); (c) Moore (N̂ = 8); (d) Extended (N̂ = 24). N̂ is the number of neighboring CAs.
To use the finite element method for structural analysis, the design domain is represented using a finite element model that is discretized using continuum finite elements (FE). The states of the material elements in the design domain are represented using a lattice of CAs, where a one-to-one correspondence between CA and FE generally exists, although this is not a requirement. However, uniformity in the CA discretization is required. A set of local rules is used to determine the material distribution. These rules are applied to the local information collected in the neighborhood of each CA. At a discrete position i and time/iteration k, a CA is defined by a set of states that are operated on by a set of rules belonging to a given neighborhood of the CA. The state of each CA, α_i, is defined by the associated design variables x_i (e.g., density, thickness) and field variables S_i (e.g., compliance). The field variables are computed by a finite element analysis; hence this is a hybrid approach, since each CA is provided global information. The complete state of each cell is expressed by

\alpha_i^{(k)} = \begin{bmatrix} S_i^{(k)} \\ x_i^{(k)} \end{bmatrix}    (16)
where the superscript k denotes that the state applies to a specific iteration. The HCA method has been shown to be an efficient non-gradient-based technique for the design of stiff, or minimum compliance, structures. For the traditional linear-static problem, the algorithm synthesizes or evolves a structure that is equivalent to solving the following problem

\min_{\mathbf{x}} \sum_{i=0}^{N} |S_i(x_i) - S_i^*|
subject to  \mathbf{K}\mathbf{d} = \mathbf{F},  0 < \mathbf{x} \le 1    (17)
where the field variable state S being operated on is compliance. The idea is to drive the state of each CA to a specified target. In the HCA method for minimum compliance design, the density state of each cellular automaton is modified so that the elements in the design domain have uniform compliance. The rules used in this chapter that govern material distribution are control-based. In the case of the design for maximum stiffness, a monotonically decreasing relationship occurs between mass and compliance (Tovar 2004). An inversely proportional relationship exists between elastic modulus and compliance, i.e., when a load is applied to an elastic structure, as its modulus decreases, the compliance increases. Therefore, in the design of stiff structures, mass must be added to reduce the compliance of an element; to increase compliance, mass is removed. The setpoint directly controls the total mass distributed within the design domain, as there is a one-to-one correspondence between the compliance of the structure under a given load and the total mass of the structure. Adapting the principles of fully stressed design (Haftka, Gürdal, and Kamat 1990), HCA is utilized to allocate material based on the compliance of each element. Although numerous distribution rules can be used (Tovar, Patel, Kaushik, and Renaud 2007), a simple proportional error material update is used here. The change in relative density of element i at the kth iteration can be expressed as

\Delta x_i^{(k)} = K_P \left( \bar{S}_i^{(k)} - S_i^{*(k)} \right)    (18)
where K_P is a scaling parameter and \bar{S}_i^{(k)} is the effective field state of a CA, which reflects the average field state of itself and its neighborhood as expressed in Eq. (15). When designing for minimum compliance, S_i = c_i. Note that the setpoint is not necessarily static and can change from one iteration to the next, as explained in the following section.
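A minimal sketch of the proportional update in Eq. (18) is given below, assuming the densities and effective field states are stored as NumPy arrays. The lower density bound and the clipping to the admissible interval are illustrative assumptions; K_P = 0.2 is the value quoted later for the example problems.

import numpy as np

def hca_density_update(x, S_eff, S_target, Kp=0.2, x_min=1e-3):
    """One proportional-error HCA material update, Eq. (18).

    x        : array of relative densities (design variables)
    S_eff    : effective (neighborhood-averaged) field states, Eq. (15)
    S_target : setpoint S*; a scalar or an array of the same shape as x
    """
    dx = Kp * (S_eff - S_target)          # proportional error term
    # Add mass where compliance is above the setpoint, remove it where below,
    # then keep the densities inside their admissible bounds.
    return np.clip(x + dx, x_min, 1.0)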
3.2.1.1 Multiple loading conditions
When a design problem is posed such that loading can exist in multiple, independent scenarios, an analysis must be performed for each loading condition, or load case. In traditional topology optimization of static-elastic problems, a weighted sum of the compliance or strain energy from each load case for each element is often used to represent the final value (Bendsøe and Sigmund 1989). Thus, the final compliance state for the ith element in the design domain is represented by a weighted sum of the compliance for each load case

c_i = \sum_{L=1}^{N_L} \alpha_L c_{iL}    (19)

where c_{iL} is the compliance of the element for load case L and N_L is the total number of load cases. A smaller load can have more of an influence on the final structure by giving more weight to that load case through the weight parameter α_L.

3.2.1.2 Mass control
A mass control scheme is utilized to generate topologies of a specified mass. To accommodate mass control, the appropriate setpoint must be determined. It has been shown by Tovar (Tovar 2004) that for structural optimization there exists a direct relationship between the compliance of a structure when loaded and the final mass of the structure: the higher the setpoint, the lower the mass, and vice versa. The error between the current field state of a CA and the setpoint directly affects the material distribution within the design domain. Therefore, to design for a specific mass, the corresponding setpoint must be determined. Finding this target can be accomplished by simply iterating on the HCA update rules and updating the setpoint, as shown in Fig. 11.5, until the correct mass results after applying the design rule expressed in Eq. (18). The setpoint for the k + 1 HCA iteration is found by iterating on the update

S^{*(j+1)} = S^{*(j)} \left( \frac{M_f^{(k+1)}}{M_f^*} \right)    (20)
where j is an iterator for the sub-loop on the HCA rules in Eq. (18) and M_f^* is the mass fraction target. The mass fraction of a design domain is defined as

M_f = \frac{1}{N} \sum_{i=1}^{N} x_i    (21)
Figure 11.5 Illustration of the HCA material update for mass control using a setpoint update strategy: starting from the current design, the HCA material update rules are applied and the global setpoint S* is adjusted until the mass fraction converges to the target, |M_f^{(k+1)} − M_f^*| ≤ ε.
where N is the number of elements in the design domain. When the mass of the structure at the kth iteration satisfies the mass target, the material update control loop is terminated and the structural analysis is performed on the resulting material distribution for the k + 1 iteration, unless the topology has converged based on the stopping criterion. Thus, the mass constraint is enforced at each HCA iteration.
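The mass-control sub-loop of Eqs. (18), (20) and (21) can be sketched as the small fixed-point iteration below. The tolerance, the iteration cap, and the variable names are assumptions made for illustration; only the update formulas themselves come from the chapter.

import numpy as np

def update_with_mass_control(x, S_eff, Mf_target, S0, Kp=0.2, tol=1e-3, max_iter=50):
    """Iterate the HCA update rule and the setpoint, Eq. (20), until the mass
    fraction of the updated densities, Eq. (21), matches the target."""
    S_star = S0
    x_new = x
    for _ in range(max_iter):
        x_new = np.clip(x + Kp * (S_eff - S_star), 1e-3, 1.0)   # Eq. (18)
        Mf = x_new.mean()                                        # Eq. (21)
        if abs(Mf - Mf_target) <= tol:
            break
        S_star = S_star * (Mf / Mf_target)                       # Eq. (20)
    return x_new, S_star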
3.2.1.3 Displacement constraint
Using the ability to control mass, we can include constraints that are related to mass. For use in the RBTO framework, the HCA method must incorporate a constraint on which we can consider a mode of failure, i.e., a limit state function. Here, we will consider the design of structures that are reliable with respect to the maximum allowable displacement. Kočvara (Kočvara 1997) developed a linear constraint on displacement using a minimum compliance formulation for the optimization of a truss structure. A bi-level approach was proposed, where the primary goal was to satisfy a displacement constraint and the secondary goal was to minimize compliance. A displacement constraint was formulated for continuum-based topology optimization problems by Deqing et al. (Deqing, Yunkang, Zhengxing, and Huanchun 2000) using a dual programming approach. The maximum displacement of a structure is a global behavior, and the total mass of the structure, a global property, is used to control it. First, the relationship between displacement and mass must be quantified. Since HCA operates on local information, a single element is studied. Assuming linearly elastic material behavior, static loading, and constant (time-independent) boundary conditions, we can solve for the nodal displacements using the finite element method; the term linear refers to the relationship between the strains and stresses. To find the solution
for the displacements, a system must be in equilibrium, where the potential energy is at an extremum as stated by the principle of stationary potential energy. The total potential energy Π for a structure that has been discretized into finite elements can be expressed in terms of the internal or strain energy (U) and the work done by the external forces (W) as

\Pi = U - W = \frac{1}{2} \mathbf{d}^T \mathbf{K} \mathbf{d} - \mathbf{F}^T \mathbf{d}    (22)
where K is the global stiffness matrix, d is the global vector of nodal displacements, and F is the vector of external global forces applied at each node. The extremum of the total potential energy of the deformable body is expressed by

\frac{\partial \Pi}{\partial \mathbf{d}} = 0    (23)
From this condition, the resulting equilibrium equation to be solved is

\mathbf{K}\mathbf{d} - \mathbf{F} = 0, \quad \text{or} \quad \mathbf{K}\mathbf{d} = \mathbf{F}    (24)
The static loading assumption requires that the global stiffness matrix K and the force vector F be independent of the displacements d. Constructing K using the material parametrization described by Eq. (14), the displacements can be solved for in Eq. (24). The relationship for the maximum displacement of all nodal degrees of freedom of a two-dimensional four-node element, as a function of the relative density (or elastic modulus), is shown in Fig. 11.6.
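For completeness, a sketch of this solution step is shown below. It assumes a simple power-law scaling of the element stiffness matrices with the relative density, which is a common interpolation but may differ from the chapter's Eq. (14); the data structure used for the element matrices is likewise an assumption.

import numpy as np

def solve_displacements(element_stiffness, densities, F, penal=3.0, Emin=1e-9):
    """Assemble K from element matrices scaled by a power-law density
    interpolation and solve the equilibrium equations Kd = F of Eq. (24).

    element_stiffness : list of (dof_indices, ke) pairs for each element
    densities         : relative density x_i of each element
    F                 : global load vector (fixed dofs assumed already removed)
    """
    ndof = F.size
    K = np.zeros((ndof, ndof))
    for (dofs, ke), xi in zip(element_stiffness, densities):
        scale = Emin + xi ** penal            # stiffness interpolation (assumed form)
        K[np.ix_(dofs, dofs)] += scale * ke   # scatter the element contribution
    return np.linalg.solve(K, F)              # Eq. (24): d = K^{-1} F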
Figure 11.6 The uniaxial relationship between the displacement (d) of an element and its relative density (x), obtained by plotting the compression of a single element with unit height and width (E = F = 1).
For a linear-elastic analysis, the relationship has the form

\delta_{max}(x) = C \frac{1}{x}    (25)
where C is a constant. This inversely proportional relationship is the same as the “compliance–elastic modulus’’ relationship mentioned previously. Characterizing this local relationship, it is observed that displacement can be controlled through mass. However, since the maximum displacement of a structure is a global behavior, the displacement constraint developed in this work is applied globally by penalizing, or reducing, the mass constraint described above until the displacement equality constraint is satisfied. The HCA method is used to drive the topology to the minimum mass that satisfies the displacement constraint.
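One possible reading of this global constraint handling is sketched below: the mass-fraction target is reduced step by step, and the HCA update is applied at each target, until the maximum displacement returned by the finite element analysis reaches the allowable value. The step size and the helper functions fea_max_displacement and hca_mass_step are hypothetical; the chapter does not spell out this loop in code.

def minimum_mass_for_displacement(x, delta_allow, fea_max_displacement,
                                  hca_mass_step, dM=0.02):
    """Reduce the mass-fraction target until the displacement constraint becomes
    active (a sketch of the penalization described in the text, assuming the
    starting design already satisfies the displacement limit)."""
    Mf_target = x.mean()                      # start from the current mass fraction
    while True:
        trial_target = Mf_target - dM
        x_trial = hca_mass_step(x, trial_target)      # HCA update at reduced mass
        if fea_max_displacement(x_trial) > delta_allow:
            return x, Mf_target               # last design that satisfied the limit
        x, Mf_target = x_trial, trial_target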
4 Decoupled RBTO formulation

In the reliability-based topology optimization (RBTO) method developed in this work, the structural synthesis is performed separately from the gradient-based reliability analysis. The HCA topology optimization occurs in sequence with the reliability analysis. Following a topology optimization using the HCA method, a reliability analysis is performed to find the most probable point of failure (MPP), u*, using the performance measure approach (PMA) described in Eq. (9). In this optimization subproblem, the design variables of the topology optimization, i.e., the relative densities of the elements, are fixed and the uncertain parameters, or random variables, are the design variables. The MPP is returned and used as fixed input parameters for the topology optimization. Convergence is achieved when a target reliability index is reached. The general form of the RBTO for a minimum compliance problem can be expressed as

\min_{\mathbf{x}} c(\mathbf{x})
subject to  P_f(\mathbf{V}) = P(G(\mathbf{x}, \mathbf{v}) < 0) \le P_t,  \mathbf{K}\mathbf{d} = \mathbf{F},  0 \le \mathbf{x} \le 1    (26)
where  G = -\Psi + \Psi_{max},  i = 1, \ldots, n  and  j = 1, \ldots, m

for the i density elements and j uncertain variables, where P_t is a tolerance on the probability of failure. The limit state function G states that if the performance parameter Ψ is larger than the limit value Ψ_max, then the system fails. To find the reliability index, the limit state function is approximated at each iteration. In this chapter, PMA is employed with a first-order approximation (FORM) of the limit state function. The optimization algorithm is described in Fig. 11.7, where a deterministic HCA update is executed based on the uncertain variables calculated from a reliability analysis performed at each iteration. This process continues until convergence.
4.1 RBTO methodology
In this methodology, a single limit state constraint on the maximum allowable displacement of the structure is considered. For the example problems considered, the
Figure 11.7 A flowchart of the decoupled approach to the reliability-based topology optimization algorithm: starting from an initial density x(0) and initial values of the uncertain variables u(0), a topology optimization and a reliability analysis (which returns the MPP u*(t+1)) are performed in turn until a convergence test is passed. For the example problems considered, the HCA method is used for the topology optimization to synthesize the structure, and then a reliability analysis is performed using a maximum displacement constraint (Ψ ≡ δmax).
performance parameter Ψ and the limit value Ψ_max in the formulation (26) are the displacements δmax and δ*max, respectively. Two sets of random variables are considered in this work: the elastic modulus of the material E_0 and the loads F_i on the structure. These uncertainties account for material/manufacturing uncertainties and operational uncertainties. Other uncertain parameters may be included. The specific RBTO formulation solved for the design problems presented can be expressed as

\min_{\mathbf{x}} c(\mathbf{x})
subject to  g^D: \mathbf{K}\mathbf{d} = \mathbf{F},  g^R: -\delta_{max} + \delta^*_{max} \le 0,  0 < \mathbf{x} \le 1    (27)
Starting with a fully dense material in the design domain, i.e., all density variables x at their upper bound and the initial values of the uncertain variables set to their mean values, a topology optimization is performed subject to a maximum allowable displacement, followed by a reliability analysis. In the reliability analysis, the optimization subproblem in (9) is solved with the density design parameters fixed and the uncertain variables as design variables to determine the MPP with respect to the constraint imposed. In this implementation, a sequential quadratic programming (SQP) algorithm is utilized to solve for the values of the random variables v at the MPP. For the optimization subproblem in the reliability analysis, a warm-start approach is included as an improvement over previous investigations (Patel, Agarwal, Tovar, and Renaud 2005); the sub-optimization starts from the MPP of the previous iteration. Fixing the resulting set of uncertain variables, a new topology optimization is executed and the process is repeated until convergence. The algorithm is described below, where t denotes the iteration counter for the global RBTO and k, used in the previous sections, represents a local counter for the topology optimization.
Step 1. Define the design domain, deterministic material properties, constraint g^R, and initial design x^(0) (full density). Define the random material properties and random loading conditions.
Step 2. Initialize the design domain densities and set the random variable values V as the fixed design parameters.
Step 3. Perform the topology optimization using the HCA method.
Step 4. Perform the reliability analysis to obtain the random variable values at the MPP.
Step 5. Check for convergence, |(g^{*(t+1)} − g^{*(t)})/g^{*(t)}| ≤ ε_1 and |(u^{(t+1)} − u^{(t)})^T (u^{(t+1)} − u^{(t)})| ≤ ε_2, for the tolerance parameters ε_1 and ε_2. If the convergence criteria are satisfied, the final topology is obtained; otherwise, go to Step 2.
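The five steps above can be collected into a small driver loop. The sketch below is schematic only: the helper functions topology_opt and find_mpp stand in for the HCA synthesis and the SQP-based MPP search, and the back-transformation of the random variables with the inverse of Eq. (28) is an assumption about how the MPP is passed back to the topology optimization.

import numpy as np

def decoupled_rbto(x0, v_mean, v_std, topology_opt, find_mpp,
                   eps1=1e-3, eps2=1e-3, max_cycles=10):
    """Sketch of the decoupled RBTO cycle (Steps 1-5): alternate the
    deterministic topology optimization with a PMA reliability analysis,
    warm-starting each MPP search from the previous one.

    topology_opt(x, v) -> updated densities for fixed random-variable values v
    find_mpp(x, u0)    -> (u_mpp, g_value): the MPP in standard normal space
                          and the limit state value there
    """
    v_mean = np.asarray(v_mean, dtype=float)
    v_std = np.asarray(v_std, dtype=float)
    x, u, g_prev = x0, np.zeros_like(v_mean), None       # Steps 1-2
    for _ in range(max_cycles):
        v = v_mean + v_std * u                 # back-transform (inverse of Eq. (28))
        x = topology_opt(x, v)                 # Step 3: HCA synthesis, v fixed
        u_new, g = find_mpp(x, u)              # Step 4: reliability analysis
        # Step 5: convergence on the limit state value and on the MPP
        if (g_prev is not None and g_prev != 0
                and abs((g - g_prev) / g_prev) <= eps1
                and np.dot(u_new - u, u_new - u) <= eps2):
            return x
        u, g_prev = u_new, g
    return x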
5 Numerical examples

The RBTO framework presented is applied to two example problems. The first problem is a Michell-type structure that considers a single concentrated load. The second example considers the loading conditions of a three-bar truss. A normal distribution is assumed for each random variable v_j in the reliability analysis, which is expressed by

u_j = \frac{v_j - m_{v_j}}{\sigma_{v_j}}    (28)

where m_{v_j} and σ_{v_j} denote the mean value and the standard deviation of the jth random variable, respectively. The parameter u_j represents the random variable transformed into the standard space. For these examples, the Poisson's ratio is ν = 0.33. The mean values of the random parameters are used to generate the structural topology for the first RBTO cycle. Furthermore, for these problems the range of uncertainty about the mean is assumed to be 5% of the mean. This may be an unrealistically large uncertainty for the elastic modulus of the material, but the large uncertainty is used to demonstrate the methodology. The design rule scale parameter in Eq. (18) is K_P = 0.2 for the topology synthesis.
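For reference, the transformation of Eq. (28) and its inverse are simple to implement. In the sketch below, the 5% figure is interpreted as the standard deviation relative to the mean, which is one reading of the text rather than a statement from the chapter, and the numerical values are those quoted for the Michell example.

import numpy as np

# Mean values of the random parameters for the Michell example (E in Pa, F in N)
v_mean = np.array([200e9, 100.0])
# "5% of the mean" is read here as the standard deviation (an assumption)
v_std = 0.05 * np.abs(v_mean)

def to_standard_space(v):
    return (v - v_mean) / v_std        # Eq. (28)

def from_standard_space(u):
    return v_mean + v_std * u          # inverse transformation used at the MPP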
5.1 Michell-type structure problem
This example considers a two-dimensional 2 m × 1 m beam structure that is fully constrained at one end and loaded at the free end. The elastic modulus of the material used is E = 200 GPa and a concentrated load of F = 100 N is applied at the midpoint of the lower boundary of the beam as shown in Fig. 11.8. These values represent the mean values that are varied in the reliability analysis component of the methodology to find the MPP. The design domain is discretized into 5000 elements. For this example, a constraint on the maximum allowable displacement δmax = 1 × 10−5 m is imposed. The resulting topologies for each algorithm cycle are shown in Fig. 11.9. Around 30 FEA analyses are required for the topology synthesis and 23 analyses are required for the reliability analysis during each RBTO cycle. A summary of the HCA performance with the deflection constraint and the subsequent reliability analyses is tabulated in Table 11.1. As expected, the elastic modulus E is driven to a lower value and the load F is increased to a higher value. In this example, the resulting change in structural characteristics is quite significant. A mass increase of 33% is required to obtain a reliable design as compared to the structure synthesized using the mean values of the random
Figure 11.8 The 2 m × 1 m design domain with a single load F.
Figure 11.9 The resulting topologies for the Michell-type structure after each RBTO cycle for β = 3: (a) Cycle 1, Mf = 0.359; (b) Cycle 2, Mf = 0.478.

Table 11.1 Summary of FEA evaluations and uncertain parameter values for the Michell-type structure.

RBTO Cycle   HCA Iters   Reliability FEA evals   E (GPa)   F (kN)
1            33          23                      178.1     −110.2
2            29          23                      178.1     −110.2
Figure 11.10 Comparison of Michell-type structures for varying prescribed levels of reliability with respect to a constraint on the maximum allowable displacement δmax = 1 × 10−5 m. The baseline run (β = 0) is deterministic (no uncertainty) and 50% reliable. The mass fractions Mf of the resulting topologies are: β = 0 (50%): 0.359; β = 0.5 (69.15%): 0.388; β = 1 (84.13%): 0.392; β = 2 (97.72%): 0.431; β = 3 (99.87%): 0.478.

Table 11.2 Validation of the Michell topologies using a 10,000-sample Monte Carlo simulation.

Reliability   Expected   Actual
β = 0.5       69.15%     69.11%
β = 1         84.13%     82.94%
β = 2         97.72%     97.43%
β = 3         99.87%     99.87%
variables. Figure 11.10 shows the increase in mass required to satisfy an increase in prescribed reliability. An excellent correlation between the prescribed level of reliability and the actual reliability of the structures is shown in Table 11.2. This confirms that FORM accurately predicted the failure surface in each case.

5.2 Truss structure problem

In this example, we consider the three-bar truss problem subject to three loading conditions presented by Duysinx (Duysinx 1997), shown in Fig. 11.11. The elastic
Figure 11.11 The 2 m × 1 m design domain with three load cases F1, F2, and F3.
Figure 11.12 The resulting topologies for the “three bar’’ structure after each RBTO cycle for β = 3: (a) Cycle 1, Mf = 0.269; (b) Cycle 2, Mf = 0.338.

Table 11.3 Summary of FEA evaluations and uncertain parameter values for the three bar problem.

RBTO Cycle   HCA Iters   Reliability FEA evals   E (GPa)   F1 (kN)   F2 (kN)   F3 (kN)
1            28          47                      177.0     −100.0    −299.9    −438.4
2            27          47                      177.0     −200.0    −300.0    −438.5
modulus of the material used is E = 200 GPa. The mean values for the three load cases are as follows: −10 kN, −30 kN, and −40 kN. The maximum displacement constraint is considered with respect to all load cases, i.e., the constraint is violated if the maximum displacement exceeds the allowable value for any loading. The design domain is discretized into 5000 elements. For this example, a constraint on the maximum allowable displacement δmax = 1 × 10−5 m is imposed. The resulting topologies generated at each algorithm cycle are shown in Fig. 11.12, where we see that the RBTO algorithm converges after 2 cycles. Fewer than 30 FEA analyses are required for the topology synthesis and fewer than 40 analyses are required for the reliability analysis during each RBTO cycle. A summary of the HCA performance with the deflection constraint and the reliability analyses is tabulated in Table 11.3. It is observed that the load cases F1 and F2 do not change after each
Figure 11.13 Comparison of “three bar’’ structures for varying levels of reliability with respect to a constraint on the maximum allowable displacement δmax = 1 × 10−5 m. The baseline run (β = 0) is deterministic (no uncertainty) and 50% reliable. The mass fractions Mf of the resulting topologies are: β = 0 (50%): 0.269; β = 0.5 (69.15%): 0.278; β = 1 (84.13%): 0.288; β = 2 (97.72%): 0.309; β = 3 (99.87%): 0.338.
reliability analysis since the third load case F3 dominates. Although the evolution of the structure is more subtle in this problem, the topologies following the initial reliability analyses distribute approximately 26% more mass compared to the initial structure synthesized using the mean values of the random variables. Again, mass is the driver in generating a more reliable design, as illustrated in Fig. 11.13. Using a Monte Carlo simulation to validate the reliability of each structure, it was determined that the first-order approximation used in the reliability analysis accurately predicts the failure surface in this problem. These results are tabulated in Table 11.4.
6 Conclusions

In this chapter, a new methodology for reliability-based topology optimization (RBTO) is presented. A decoupled approach for reliability-based design optimization is combined with the HCA method for structural synthesis. The objective could be generalized
Table 11.4 Validation of the “three bar’’ topologies using a 10,000-sample Monte Carlo simulation.

Reliability   Expected   Actual
β = 0.5       69.15%     69.11%
β = 1         84.13%     82.94%
β = 2         97.72%     97.43%
β = 3         99.87%     99.87%
and applied to different types of problems. The reliability formulation used in this investigation is known as the performance measure approach (PMA), where the reliability index β is included as a constraint in the subproblem and the random variables are driven to the most probable point of failure (MPP) for the current structural design with respect to a displacement constraint. The MPP is required to satisfy the specified reliability index β. This RBTO methodology facilitates structural designs that are reliable with respect to a specified performance parameter. Using a maximum allowable displacement constraint as a failure mode, we observe that the reliable topology requires more mass for each example problem, compared to the initial deterministic topology. For both example cases, where uncertainties in the elastic modulus and applied loads were considered, only two algorithm cycles are required for convergence to a design. The excellent correlation shown between the prescribed and actual reliabilities demonstrates that the First-Order Reliability Method (FORM) is sufficient for problems that use the static-elastic assumptions. The inclusion of the Hybrid Cellular Automaton (HCA) method adds to the efficiency of the proposed methodology for the design of structures under the static-elastic assumptions, as in previous investigations. The use of the HCA method in the RBTO framework could be of great benefit in other design problems, such as aeroelastic design, where gradients are not easily computed for the topology synthesis. However, the FORM approximation must be investigated for use with other design problems.
References

Agarwal, H. & Renaud, J.E. 2006. A new decoupled framework for reliability based design optimization. AIAA Journal 44(7):1524–1531.
Allen, M. & Maute, K. 2004. Reliability-based optimization of aeroelastic structures. Struct. Multidisc. Optim. 27:228–242.
Bendsøe, M.P. 1989. Optimal shape design as a material distribution problem. Comp. Meth. Appl. Mech. Engrg. 1:193–202.
Bendsøe, M.P. & Kikuchi, N. 1988. Generating optimal topologies in optimal design using a homogenization method. Comp. Meth. Appl. Mech. Engrg. 71:197–224.
Bendsøe, M.P. & Sigmund, O. 1989. Topology Optimization: Theory, Methods and Applications. Springer-Verlag, Berlin.
Breitung, K. 1984. Asymptotic approximations for multinormal integral. Journal of Engineering Mechanics 110(3):357–366.
Burks, A. 1970. Essays on Cellular Automata, Chapter Von Neumann's self-reproducing automata, pp. 3–64. University of Illinois Press.
Choi, K.K., Youn, B.D. & Yang, R. 2001. Moving least square method for reliability-based design optimization. In The Fourth World Congress of Structural and Multidisciplinary Optimization (WCSMO-4).
Deqing, Y., Yunkang, S., Zhengxing, L. & Huanchun, S. 2000. Topology optimization design of continuum structures under stress and displacement constraints. Applied Mathematics and Mechanics 21:1–26.
Duysinx, P. 1997. Layout optimization: A mathematical programming approach. Technical Report DCAMM report No. 540, University of Liege.
Eschenauer, H.A. & Olhoff, N. 2001. Topology optimization of continuum structures: A review. Applied Mechanics Reviews 54:331–390.
Frangopol, D.M. 1998. Probabilistic structural optimization. Progress in Structural Engineering and Materials 1(2):223–230.
Frangopol, D.M. & Maute, K. 2003. Life-cycle reliability-based optimization of civil and aerospace structures. Computers and Structures 81:397–410.
Gürdal, Z. & Tatting, B. 2000. Cellular automata for design of truss structures with linear and nonlinear response. In Proceedings of the 41st AIAA Structures, Structural Dynamics, and Materials Conference, Number 2000-1580, 2000, April 3–6, Atlanta, Georgia.
Haftka, R.T., Gürdal, Z. & Kamat, M.P. 1990. Elements of Structural Optimization. Kluwer Academic Publishers, Dordrecht, The Netherlands, 2nd ed.
Hajela, P. & Kim, B. 2001. On the use of energy minimization for CA based analysis in elasticity. Struct. Multidisc. Optim. 23:24–33.
Haldar, A. & Mahadevan, S. 2001. Probability, Reliability and Statistical Methods in Engineering Design. New York: Wiley.
Inou, N., Shimotai, N. & Uesugi, T. 1994. Cellular automaton generating topological structures. In 2nd European Conference on Smart Structures and Materials, Number 2361-08, 1994, October, Glasgow, United Kingdom, pp. 47–50.
Inou, N., Uesugi, T., Iwasaki, A. & Ujihashi, S. 1998. Self-organization of mechanical structure by cellular automata. Fracture and Strength of Solids 145(9):1115–1120.
Kharmanda, G., Olhoff, N., Mohamed, A. & Lemaire, M. 2004. Reliability-based topology optimization. Struct. Multidisc. Optim. 26:295–307.
Kita, E. & Toyoda, T. 2000. Structural design using cellular automata. Struct. Multidisc. Optim. 19:64–73.
Kočvara, M. 1997. Topology optimization with displacement constraints: a bilevel programming approach. Struct. Optim. 4:256–263.
Liu, P.-L. & Kiureghian, A.D. 1991. Optimization algorithms for structural reliability. Structural Safety 9:161–177.
Maute, K. & Frangopol, D.M. 1998. Reliability-based design of MEMS mechanisms by topology optimization. Computers and Structures 81:813–824.
Mogami, K., Nishiwaki, S., Izui, K., Yoshimura, M. & Kogiso, N. 2006. Reliability-based structural optimization of frame structures for multiple failure criteria using topology optimization techniques. Struct. Multidisc. Optim. 32(4):299–311.
Moses, F. 1973. Design for reliability: concepts and applications. Wiley.
Murotsu, Y. & Shao, S. 1989. Optimum shape design of truss structures based on reliability. Struct. Multidisc. Optim. 2(2):65–76.
Patel, N.M., Agarwal, H., Tovar, A. & Renaud, J.E. 2005. Reliability based topology optimization using the hybrid cellular automaton method. In 1st AIAA Multidisciplinary Design Optimization Specialist Conference, 2005, April 18–21, Austin, Texas.
Royset, J.O., Kiureghian, A.D. & Polak, E. 2001. Reliability-based optimal structural design by the decoupled approach. Reliability Engineering and Systems Safety 73:213–221.
Rozvany, G.I.N. 1997. Topology Optimization in Structural Mechanics. Springer.
Rozvany, G.I.N., Bendsoe, M.P. & Kirsh, U. 1995. Optimality criteria: a basis for multidisciplinary optimization. Appl. Mech. Rev. 48:41–119.
Slotta, D., Tatting, B., Watson, L., Gürdal, Z. & Missoum, S. 2002. Convergence analysis for cellular automata applied to truss design. Engineering Computations 19(8):953–969.
Svanberg, K. 1987. The method of moving asymptotes – a new method for structural optimization. Int. J. Numer. Meth. Engrg. 24:359–373.
Tovar, A. 2004. Bone Remodeling as a Hybrid Cellular Automaton Optimization Process. Ph.D. thesis, University of Notre Dame.
Tovar, A., Patel, N.M., Kaushik, A.K. & Renaud, J.E. 2007. Optimality conditions of the hybrid cellular automata for structural optimization. AIAA Journal.
Tovar, A., Quevedo, W., Patel, N. & Renaud, J.E. 2006. Topology optimization with stress and displacement constraints using the hybrid cellular automaton method. In Proceedings of the 3rd European Conference on Computational Mechanics, 2006, June 5–8, Lisbon, Portugal.
Wolfram, S. 2002. A New Kind of Science. Wolfram Media.
Xie, Y.M. & Stevens, G. 1997. Evolutionary Structural Optimization. Springer-Verlag, London.
Yuge, K. & Kikuchi, N. 1995. Optimization of a frame structure subjected to a plastic deformation. Struct. Optim. 10:197–208.
Yuge, K., Iwai, N. & Kikuchi, N. 1999. Optimization of 2-D structures subjected to nonlinear deformations using the homogenization method. Struct. Optim. 17(4):286–299.
Zhou, M. & Rozvany, G.I.N. 1991. The COC algorithm, part II: Topological, geometrical and generalized shape optimization. Comp. Meth. Appl. Mech. Engrg. 89:309–336.
Chapter 12
Sample average approximations in reliability-based structural optimization: Theory and applications

Johannes O. Royset, Naval Postgraduate School, Monterey, CA, USA
Elijah Polak, University of California, Berkeley, CA, USA
ABSTRACT: This chapter describes recent advances in combining Monte Carlo sampling and nonlinear programming algorithms for reliability-based structural optimization. Specifically, we present an approach where the reliability term in the problem formulation is replaced by a statistical estimate of the reliability obtained by means of Monte Carlo sampling. This replacement introduces a sampling error and gives rise to sample average approximations. The chapter presents rules for adjusting the sample size effectively.
1 Introduction Cost efficient bridges, building frames, aircraft wings, and other mechanical structures can be achieved by formulating and solving nonlinear optimization problems. However, such problems become significantly harder to solve when a structure’s reliability is accounted for in the problem formulation. This difficulty is caused by the fact that the failure probability of most structures, as well as the corresponding gradient with respect to design variables, cannot be computed exactly, but must be approximated. The difficulty is further aggravated by the challenge of deriving suitable expressions for the gradient of the failure probability. One possible approach to overcome these difficulties is to estimate the failure probability and its gradient using Monte Carlo sampling. This chapter describes recent advances in combining Monte Carlo sampling and nonlinear programming algorithms for reliability-based structural optimization. Other approaches for such optimization include (successive) first-order approximations (Madsen and Friis Hansen 1992; Enevoldsen and Sørensen 1994; Kuschel and Rackwitz 2000; Royset et al. 2006), gradient-free heuristics, (Itoh and Liu 1999; Nakamura et al. 2000; Beck et al. 1999), response surfaces (Gasser and Schuëller 1998; Igusa and Wan 2003), and surrogate functions (Torczon and Trosset 1998; Eldred et al. 2002). However, a review of these approaches is beyond the scope of this chapter. Here, we present an approach where the failure probability in the problem formulation is replaced by a statistical estimate obtained by means of Monte Carlo sampling. This replacement introduces a sampling error and gives rise to approximate optimization problems. Such approximate problems are referred to as sample average approximations.
Even if a sample average approximation is solvable by some nonlinear programming algorithm to sufficient accuracy, the design obtained might be far from the optimal design of the original problem due to the induced sampling error. This deficiency can be overcome by constructing a sample average approximation with a large sample size, which tends to have a small sample error and hence tends to have optimal designs near the optimal designs of the original problem. Unfortunately, a large sample size implies high computational cost. For example, each sample point may involve a finite element analysis of the structure at hand. Hence, applying a nonlinear programming algorithm to a sample average approximation with a large sample size is usually impractical. On the other hand, as we have already mentioned, a small sample size is computationally less expensive, but may lead to designs far from an optimal one. Intuition and empirical evidence indicate that the following adaptive approach is efficient. Initially, consider a sample average approximation with a small sample size, i.e., a coarse, but inexpensive approximation, and apply some optimization algorithm to achieve a certain amount of design improvement. When this design improvement is achieved, refine the approximation by increasing the sample size and apply the optimization algorithm to this refined approximation. Initiate the optimization from the improved design achieved at the coarser approximation level, i.e., the calculations on the refined approximation is “warm started.’’ Repeat the process until an acceptable design is achieved. This adaptive approach avoids spending excessive computational effort on estimating the failure probability of the relatively poor designs produced by the early iterations of the optimization algorithm. Increasingly large efforts are expended only as better and better designs are achieved and accurate estimates of the failure probability are needed to ensure further design improvements. This approach tends to be efficient because coarse estimates of the failure probability (and its gradient) are usually sufficient to steer an optimization algorithm towards better designs in the early stages of the calculations. We have derived a theory for the described adaptive increase in sample size (Royset and Polak 2007; Polak and Royset 2007). In this chapter, we review some of these theoretical results and show their application to reliability-based structural optimization. Section 2 formally defines the reliability-based structural optimization problem. Section 3 discusses the properties of the failure probability as a function of the design variables and derives an expression for its gradient. The gradient is derived for general structural systems consisting of an arbitrary number of unions and intersections of failure events. A Monte Carlo estimate of this gradient is used to direct the calculations towards better designs. Section 4 describes the basic algorithmic approach, which involves approximately solving a sequence of sample average approximations with increasing sample size. Section 5 presents sample-adjustment rules that ensure computational efficiency and theoretical convergence. Clearly, a rapid increase in sample size may result in many algorithmic iterations on computationally costly sample average approximations. In fact, too rapid increase in sample size may prevent convergence to a solution. 
By contrast, a slow increase in sample size may lead to unnecessarily many iterations with coarse approximations. Hence, it is important to balance the increase of sample size with the progress of the optimization algorithm towards an optimal design. We present two different sample-adjustment rules: (i) a feedback rule specifying an increase in sample size whenever the optimization algorithm’s progress falls below a threshold value
and (ii) the solution of an auxiliary optimization problem that determines the “optimal’’ sample size at every iteration using estimated values of rate of convergence, computational cost, distance to optimal design, and sampling error. In Section 6, we illustrate the theoretical results with three numerical examples arising in design of various mechanical structures. Structures with both a single and with multiple limit-state functions are considered and reliability terms are included in both objective and constraint functions. We also present an example with a nontraditional objective: determine several “good’’ designs that are significantly different. This objective is useful when qualitative factors such as practical, esthetic, social, and political requirements are especially important. In such situations, the designer may seek to generate several “good’’ designs with respect to quantitative factors (e.g., cost and reliability) using some optimization algorithm and then select among these designs using his or her judgment regarding the other, qualitative factors. Finally, the concluding remarks of this study are presented in Section 7.
2 Problem formulation

Consider the design of a mechanical structure such as a bridge, a building frame, or an aircraft wing. Let x be an n-dimensional vector of design variables, for example related to the size and form of the structure, and let c0(x) and c(x) be the initial and failure costs, respectively, of the structure given design x. Furthermore, let p(x) be the failure probability of the structure, given design x, to be defined precisely below. Then, the reliability-based design optimization problem takes the form

\min_{x} \{ c_0(x) + c(x) p(x) \mid p(x) \le q, \ x \in X \}    (1)
where q is a bound on the failure probability, X is a constraint set for x defined in terms of J constraint functions fj(x), j ∈ J = {1, 2, . . . , J}, i.e.,

X = \{ x \mid f_j(x) \le 0, \ j \in \mathbf{J} \}    (2)
The objective function in (1) consists of the initial cost plus the expected cost of failure. The constraint functions represent restrictions on shape and form of the structure, amount and location of steel reinforcement, as well as other factors. We assume that there are no integer restrictions on the design variables x. We also assume that the constraint and cost functions are fairly simple functions, e.g., analytic expressions, that can easily be evaluated. Hence, the challenge is associated with the failure probability p(x). When the failure cost is positive, i.e., c(x) > 0 for all x ∈ X, (1) is equivalent to the following problem

\min_{x_0, x} \{ c_0(x) + c(x) x_0 \mid p(x) \le x_0, \ 0 \le x_0 \le q, \ x \in X \}    (3)
where x0 is an auxiliary design variable (Royset et al., 2006). The transformation from (1) to (3) is beneficial for numerical reasons; the multiplication of a presumably large failure cost c(x) with a presumably small, inaccurately estimated failure probability p(x) in (1) may cause numerical difficulties. Hence, we always recommend solving (3) instead of (1). Consequently, we focus primarily on problems with a deterministic
objective function and a failure probability constraint as in (3). To simplify the notation and without loss of generality, we consequently consider the problem:

\mathbf{P}: \quad \min_{x} \{ c(x) \mid p(x) \le q, \ x \in X \}    (4)
where c(x) is some deterministic objective (cost) function. Mechanical structures are assessed using one or more performance measures, e.g., displacement and stress levels at various locations in the structure. In this chapter, we consider the general case of “system failure’’ where the (system) failure probability is defined by a collection of performance measures. Specifically, failure occurs when certain combinations of the performance measures are unsatisfactory. Let gk(x, u), k ∈ K = {1, 2, . . . , K}, be a collection of K limit-state functions describing the relevant performance measures. The functions gk(x, u) depend on the design x and the realization u of a standard normal random m-vector U. This random vector incorporates the uncertainty in the structure and its environment. Note that a limit-state function given in terms of multivariate normal (possibly with correlation) and lognormal random vectors can be transformed into one defined in terms of a standard normal vector using a smooth bijective mapping. A limit-state function given in terms of random vectors governed by other distributions can also be transformed, possibly by introducing an approximation. Hence, the limitation to a multivariate standard normal distribution is in many applications not restrictive (see e.g. Chapter 7 of (Ditlevsen and Madsen 1996) and (Liu and Kuo 2003; Akgul and Frangopol 2003; Holicky and Markova 2003)). By convention, gk(x, u) ≤ 0 represents unsatisfactory performance of the k-th measure. Hence, we define the failure probability of the structure as p(x) = P[F(x)], where the failure domain

F(x) = \bigcup_{i \in I} \bigcap_{k \in C_i} \{ g_k(x, U) \le 0 \}    (5)
with Ci ⊂ K and I = {1, 2, . . . , I} defining the combinations of performance measures that lead to structural failure. For example, suppose that a structure is defined with three limit-state functions, i.e., K = 3, representing stress level (g1(x, u)), displacement at location A (g2(x, u)), and displacement at location B (g3(x, u)). Also, suppose that the structure is defined to have failed if the first performance measure (stress level) is unsatisfactory, regardless of the displacement levels, and it is also defined to have failed if both of the displacement measures are unsatisfactory, regardless of the stress level. In this case, I = {1, 2}, C1 = {1}, and C2 = {2, 3}, i.e.,

F(x) = \{ g_1(x, U) \le 0 \} \cup (\{ g_2(x, U) \le 0 \} \cap \{ g_3(x, U) \le 0 \})    (6)
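The event in Eq. (6) is easy to express directly, and rewriting it in the min–max form used in the next section makes the later sampling formulas concrete. The function names below are illustrative only; g ≤ 0 denotes unsatisfactory performance, as in the text.

def system_fails(g1, g2, g3):
    """Failure indicator for the example of Eq. (6): the system fails if the
    stress limit state g1 is violated, or if both displacement limit states
    g2 and g3 are violated."""
    return (g1 <= 0) or (g2 <= 0 and g3 <= 0)

def system_fails_minmax(g1, g2, g3):
    """The same event in min-max form, with I = {1, 2}, C1 = {1}, C2 = {2, 3}."""
    return min(g1, max(g2, g3)) <= 0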
3 Failure probability and gradient

Since P, see (4), is a nonlinear optimization problem it would be natural to apply a standard nonlinear programming algorithm to this problem. However, such an approach requires two assumptions to be satisfied. First, all the functions in P must be at least once differentiable (with respect to the design variables x) with continuous gradients. We refer to this assumption as the smoothness assumption. Second, we must be able to
compute, relatively easily, all the functions and their gradients for any given design x. We refer to this assumption as the computability assumption. Since we assume that the cost function c(x) and constraint functions fj (x) are all analytic functions or in some other form satisfying our two assumptions (smoothness and computability), the challenge is associated with the failure probability p(x). Due to the complicated form of p(x) it is not clear whether the assumptions are satisfied. In fact, it appears unlikely that the computability assumption is satisfied due to the m-dimensional integral in the definition of p(x). It is also difficult to perceive situations under which the smoothness assumption is satisfies when the limit-state functions are not differentiable. Hence, we assume throughout this chapter that the limit-state functions gk (x, u) are differentiable with respect to both arguments and have continuous gradients. If the limit-state functions are not differentiable and/or the design variables are restricted to integers, then the theory and algorithms derived in this chapter are not applicable. This section rewrites the expression for the failure probability in a form that is equivalent, under weak assumptions, to the original definition. As seen in the following, this effort results in an expression that satisfies the smoothness assumption and that lends itself to estimation of both the failure probability and its gradient. This effort would have been unnecessary if we were only interested in computing the failure probability and not in optimization. Standard Monte Carlo sampling (possibly with importance sampling) would have sufficed in such a situation (see, e.g., (Ditlevsen and Madsen 1996)). However, within an optimization algorithm we also need the gradient of the failure probability and the gradient is not easily available from the definition of the failure probability. In (Uryasev 1995), we find a theoretical expression for the gradient of the failure probability. However, this expression may involve surface integrals, which are difficult to estimate in practice. In (Marti 1996) (see also (Marti 2005)), an integral transformation is presented, which, when it exists, leads to a simple expression for the gradient of the failure probability. However, it is not clear under what conditions this transformation exists. As in (Uryasev 1995), (Tretiakov 2002) assumes that the failure domain F(x) is bounded and given by a union of events. With this restriction, an expression for the gradient of the failure probability involving integration over a simplex is derived. In principle, this integral can be evaluated by Monte Carlo sampling. However, to the authors’ knowledge, there is no computational experience with estimation of failure probabilities for highly reliable mechanical structures using this expression. In (Royset and Polak, 2004a; Royset and Polak, 2004b) we find expressions for the failure probability and its gradient that can be estimated by Monte Carlo and importance sampling. However, the expressions are limited to the case with one performance measure (i.e., K = 1). In Section 9.2 of (Ditlevsen and Madsen 1996), an expression for the gradient of the failure probability is suggested, without a complete proof, for the case with one performance measure. This expression is based on a form of p(x) that has been found computationally efficient in applications. In (Royset and Polak 2007) a generalization of this special-case formula was derived and formally proven. 
We proceed by describing the expression for the failure probability given in (Royset and Polak 2007). It can be shown that the failure domain is equivalently expressed as

F(x) = \left\{ \min_{i \in I} \max_{k \in C_i} g_k(x, U) \le 0 \right\}    (7)
As in (Deak 1980) (see alternatively (Ditlevsen et al., 1987; Bjerager 1988), and Section 9.2 of (Ditlevsen and Madsen 1996)), we observe the following fact: If the standard normal random vector U = RW and R² is Chi-square distributed with m degrees of freedom, then W is a random vector, independent of R, uniformly distributed over the surface of the m-dimensional unit hypersphere. Note that W represents a direction and R a positive length. Hence, we obtain from the total probability rule that

p(x) = E\left[ P\left[ \min_{i \in I} \max_{k \in C_i} g_k(x, RW) \le 0 \,\middle|\, W \right] \right]    (8)
Here, P[{min_{i∈I} max_{k∈Ci} gk(x, RW) ≤ 0 | W}] is the conditional probability of a failure event in the random direction W for a given x. This conditional probability takes a particularly simple form if the safe domain, i.e., the complement of the failure domain F(x)^c, is “star-shaped.’’ A safe domain is star-shaped if in any direction w one passes from the safe to the failure region only once when moving from the origin of the u-space¹ in the direction of w; see (Royset and Polak 2007) for a mathematically precise definition. When the safe domain is star-shaped, the expression inside the expectation in (8) equals 1 − χ²_m(r²(x, W)), where χ²_m(·) is the Chi-square cumulative distribution function with m degrees of freedom and r(x, W) is the minimum distance in direction W from the origin of the u-space to the surface of F(x). This distance can be expressed in terms of the minimum distances in direction W from the origin to the surface of {gk(x, RW) ≤ 0}, k ∈ K. Let rk(x, W) denote this distance. Then, r(x, W) = min_{i∈I} max_{k∈Ci} rk(x, W)
k∈Ci
(9)
Since χ²_m(·) and the square function (on the positive domain) are strictly increasing, we find that
p(x) = E[φ(x, W)]
(10)
where 2 2 (rk (x, W))} φ(x, W) = max min{1 − χm i∈I
k∈Ci
(11)
This is the new expression for the failure probability we will use in the following. As noted earlier, this expression is equivalent to the original definition of the failure probability under the assumption of a star-shaped safe domain, see (Royset and Polak 2007) for a proof. We observe that when the safe domain is not star-shaped, (10) may overestimate the failure probability. Hence, it is conservative to assume a star-shaped safe domain. For a given design x, it is possible to obtain an indication whether the star-shape assumption is satisfied by computing an estimate N j=1 IF (x) (uj )/N of p(x), where u1 , u2 , . . . , uN are realizations of independent, identically distributed standard normal vectors and IF (x) (uj ) = 1 if uj ∈ F(x), and zero otherwise. If this estimate is significantly smaller than 1 We refer to the m-dimensional space of realizations of U as the “u-space.’’
the one of (10), then the star-shape assumption is violated. We also note that equivalent assumptions were adopted by (Tretiakov 2002; Ditlevsen et al., 1987; Bjerager 1988) and Section 9.2 of (Ditlevsen and Madsen 1996). The main advantage of the new expression for the failure probability (10) over the original expression p(x) = P[F(x)] is that a useful expression for the gradient of the failure probability can be derived. At first glance, it appears that the gradient of p(x) in (10) is simply the expectation of the gradient of φ(x, W) with respect to x. However, closer examination shows that φ(x, W) is not differentiable with respect to x due to its max-min form. Hence, we define the set of active limit-state functions K̂(x, W) as those limit-state functions that define the surface of the failure domain F(x) in the direction W. More precisely,
(12)
where ˆ W) = I(x, ˆ i (x, W) = C
max r (x, W) = max r (x, W) i ∈ I min k k
(13)
k ∈ Ci max rk (x, W) = rk (x, W)
(14)
i ∈I k∈Ci
k∈Ci
k ∈Ci
ˆ Using the definition of the set of active limit-state functions K(x, W), we derive the subgradient of φ(x, W) as ∂φ(x, W) = conv
ˆ k∈K(x,W)
2fχm2 (r2k (x, W))rk (x, W)
∇x gk (x, rk (x, W)W) ∇u gk (x, rk (x, W)W)T W
(15)
where conv{·} denotes the convex hull, fχm2 ( · ) is the Chi-square probability density function with m degrees of freedom, and ∇x gk (x, u) and ∇u gk (x, u) the gradient of gk (x, u) with respect to x and u, respectively. Informally, the expression in the brackets 2 2 (rk (x, W)) obtained using implicit of (15) is the gradient with respect to x of 1 − χm differentiation. This leads to the following expression for the gradient of the failure probability (see (Royset and Polak 2007) for a proof) ∇p(x) = E[dφ (x, W)]
(16)
where dφ (x, W) is any element of the subgradient ∂φ(x, W). We note that (16) is only valid if the safe domain is bounded, i.e., the minimum distance in every direction w from the origin of the u-space to the surface of F(x) is bounded from above by some (large) number. This may not always be the case in applications. However, it is always possible to define an artificial limit-state function gK+1 (x, U) = ρ − U, with a sufficiently large ρ > 0, replace I by I + 1, and set CI = {K + 1}. Then, F(x) satisfies the assumption about a bounded safe domain. This is equivalent to enlarging the failure domain. The probability associated with the enlarged failure domain is slightly larger than the one associated with the original failure domain. The difference, however, is no
314
Structural design optimization considering uncertainties
2 greater than 1 − χm (ρ2 ) and therefore negligible for sufficiently large ρ. Consequently, this boundedness assumption is not restrictive in practice. From the above derivation we see that the failure probability is differentiable with a continuous gradient given by (16), i.e., the failure probability satisfies the required smoothness assumption for nonlinear optimization. However, for this to have any practical value, we also need to be able to compute the failure probability and its gradient, i.e., we need the computability assumption to be satisfied. Clearly, (10) and (16) cannot, in general, be evaluated analytically, but must be estimated by Monte Carlo sampling. Let w1 , w2 , . . . , wN be a set of N sample points, each generated by independent sampling from the uniform distribution on the m-dimensional unit hypersphere. Given this sample, we define the estimate of (10):
pN (x) =
N
φ(x, wj )/N
(17)
j=1
Since W corresponds to a direction, this type of Monte Carlo simulation is referred to as directional sampling (Bjerager 1988). It is well-known (see, e.g., (Rubinstein and Shapiro 1993) for a proof) that pN(x) converges to p(x) uniformly over any closed and bounded set, as N → ∞. Hence, at least in principle, we can obtain an accurate estimate of the failure probability by computing (17) with a large N. (Of course, a large sample size may be prohibitive computationally.) We now consider an estimate of the gradient (16). Since φ(x, w) is not differentiable with respect to x, we see that pN(x) is generally not differentiable either. However, since φ(x, w) has a subgradient, see (15), it can be shown that pN(x) also has a subgradient denoted by ∂pN(x), see (Royset and Polak 2007). This subgradient is given by

∂pN(x) = (1/N) Σ_{j=1}^{N} ∂φ(x, wj)    (18)
see (15) for the expression for ∂φ(x, wj). It is shown in (Royset and Polak 2007) that the subgradient ∂pN(x) converges (shrinks) to ∇p(x) uniformly over any closed and bounded set, as N → ∞. We note that there is typically no need to estimate the subgradient ∂pN(x), but only one of its elements. To generate such an element, proceed as follows: (i) obtain N sample points w1, w2, . . . , wN, (ii) for each sample point wj determine one active limit-state function, i.e., find one element in K̂(x, wj), and compute the numerical value of the vector within the brackets of (15) for the active limit-state function, and (iii) average the numerical values over all the sample points.
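To make these steps concrete, the following Python sketch (ours, not the authors' implementation; the function names, the root-finding bracket r_max and the cut-set description of the failure domain are illustrative assumptions) estimates pN(x) by directional sampling, see (17), and accumulates one element of the subgradient, see (15) and (18):

    import numpy as np
    from scipy.stats import chi2
    from scipy.optimize import brentq

    def sample_directions(N, m, rng):
        """Draw N directions uniformly on the m-dimensional unit sphere."""
        w = rng.standard_normal((N, m))
        return w / np.linalg.norm(w, axis=1, keepdims=True)

    def estimate_p_and_subgradient(x, g_list, grad_x_list, grad_u_list,
                                   cut_sets, m, N, r_max=20.0, seed=0):
        """Return pN(x) and one element of the subgradient (18).

        g_list[k](x, u), grad_x_list[k](x, u), grad_u_list[k](x, u) evaluate the
        k-th limit-state function and its gradients; cut_sets is a list of index
        lists C_i, so the failure domain is the union over i of the intersections
        over k in C_i of {g_k <= 0} (assumed star-shaped with bounded safe domain).
        """
        rng = np.random.default_rng(seed)
        W = sample_directions(N, m, rng)
        x = np.asarray(x, dtype=float)
        p_sum, d_sum = 0.0, np.zeros_like(x)
        for w in W:
            def root(k):
                # r_k: distance along w at which g_k changes sign.
                f = lambda r: g_list[k](x, r * w)
                return brentq(f, 1e-8, r_max) if f(1e-8) * f(r_max) < 0 else r_max
            r = np.array([root(k) for k in range(len(g_list))])
            # Distance to the failure domain: min over cut sets of max within a cut set.
            per_set = [max(r[k] for k in C) for C in cut_sets]
            i_act = int(np.argmin(per_set))
            r_act = per_set[i_act]
            k_act = max(cut_sets[i_act], key=lambda k: r[k])   # one active index as in (12)
            p_sum += chi2.sf(r_act**2, df=m)                   # phi(x, w)
            u_act = r_act * w
            denom = float(np.dot(grad_u_list[k_act](x, u_act), w))
            d_sum += (2.0 * chi2.pdf(r_act**2, df=m) * r_act
                      * np.asarray(grad_x_list[k_act](x, u_act)) / denom)
        return p_sum / N, d_sum / N        # estimate (17) and one element of (18)

In this sketch the limit-state functions, their gradients and the cut-set structure are supplied by the user, and rk(x, w) is obtained by bracketed root finding along each sampled direction.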
4 Algorithm based on sample average approximations

In this section, we follow (Royset and Polak 2007; Polak and Royset 2007) and present an algorithm that utilizes the sample average estimates of the failure probability and its gradient derived in the previous section, see (17) and (18). The algorithm carries out nonlinear optimization iterations on a sequence of sample average approximations for
the original problem P. Given the sample points w1, w2, . . . , wN, we define the sample average approximation of P as the following optimization problem:

PN: min_x {c(x) | pN(x) ≤ q, x ∈ X}    (19)
It is noted that the only difference between P and PN is that p(x) has been replaced by its sample average. Intuitively, PN becomes a better approximation to P as N increases. In fact, under weak assumptions, a global minimum of PN converges to a global minimum of P, as N → ∞, see (Royset and Polak 2007) and more generally Chapter 6 of (Ruszczynski and Shapiro 2003) and references therein. Since we can evaluate pN(x) for a given sample, PN satisfies our computability assumption. However, PN does not satisfy our smoothness assumption since (17) is generally not differentiable – it only has a subgradient (18). Hence, standard nonlinear programming algorithms may perform poorly when applied to PN. As seen in Subsection 5.1 below, we are able to overcome this difficulty by utilizing the fact that P satisfies the smoothness assumption. In this section, we proceed under the assumption that there is some optimization algorithm that can effectively be applied to PN. As discussed in Section 1, the simplest scheme for approximately solving P would be to select some sample size N and apply some optimization algorithm to PN for a number of iterations. The obtained design would be an estimate of the optimal design of P. However, this may be a poor estimate if the sample size is small, and if the sample size is large, the computational cost may be prohibitive. In (Royset and Polak 2007; Polak and Royset 2007), the following adaptive scheme is proposed.

Conceptual Algorithm for Solving P.

Step 0. Select an initial design x0, an initial sample size N, and sample w1, w2, . . . , wN. Set iteration counter j = 0.
Step 1. Consider the sample average approximation PN and compute a new design xj+1 by carrying out one iteration of some optimization algorithm applied to PN. This iteration is initialized by the current design xj.
Step 2. Use some sample-adjustment rule and determine if the sample size should be augmented. If the sample size should be augmented, replace N by some larger N and generate additional sample points to complement the existing sample points.
Step 3. Replace j by j + 1, and go to Step 1.

The conceptual algorithm describes an adaptive scheme, but does not specify how Steps 1 and 2 can be implemented. What optimization algorithm can be used in Step 1? What sample-adjustment rule should be used in Step 2? At first glance, the first question appears easier. However, as discussed above, PN may not satisfy the smoothness assumption and standard nonlinear programming algorithms may perform poorly. In fact, as we will see in Subsection 5.1 below, care must be taken when selecting the optimization algorithm in Step 1 to ensure convergence of the overall algorithm. The second question appears to be difficult and embodies the following fundamental trade-off. A rapid increase in sample size may result in many iterations with large N and hence high computational cost. As we see in Subsection 5.1 below, there is also a theoretical concern; a rapid increase in sample size may prevent convergence to an optimal design.
Conversely, a slow increase in sample size may lead to unnecessarily many iterations on coarse sample average approximations. The next section discusses two approaches for implementing Step 2. We also briefly discuss the implementation of Step 1. There is also a third question that is not addressed in the conceptual algorithm: when to stop the calculations? As in all nonlinear programming, this is a fundamentally difficult question that is substantially aggravated by the presence of sample averages. A simple approach would be to augment the sample size until it reaches a "sufficiently large'' number, e.g., an N that results in a coefficient of variation for pN(x) of less than 5%. Then, keep that sample size for a number of iterations until the optimization algorithm in Step 1 ceases to make substantial progress from iteration to iteration. Another approach is to simply run the algorithm until the dedicated time is consumed. Techniques for checking whether a given design is close to optimal include statistical testing, see e.g. Section 6.4 of (Ruszczynski and Shapiro 2003). A further discussion of stopping criteria and solution quality is beyond the scope of this chapter.
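For orientation, a minimal Python skeleton of the conceptual algorithm might look as follows (our sketch, not the authors' implementation; the optimizer step, the sample-adjustment rule and the failure-probability estimator are user-supplied callables, and the stopping test is simply the coefficient-of-variation heuristic above combined with a small-step check):

    import numpy as np

    def conceptual_algorithm(x0, N0, optimizer_step, next_sample_size, sample_gen,
                             estimate_p, max_iter=200, cov_target=0.05, step_tol=1e-6):
        """Adaptive loop: one optimizer iteration on P_N per pass, then possibly grow N."""
        x, N = np.asarray(x0, dtype=float), N0
        samples = sample_gen(N)                              # w_1, ..., w_N
        p_hat = None
        for _ in range(max_iter):
            x_new = optimizer_step(x, samples)               # Step 1: one iteration on P_N
            N_new = next_sample_size(x, x_new, samples, N)   # Step 2: sample-adjustment rule
            if N_new > N:
                samples = np.concatenate([samples, sample_gen(N_new - N)])
                N = N_new
            p_hat, cov = estimate_p(x_new, samples)          # estimate of p_N(x) and its c.o.v.
            small_step = np.linalg.norm(x_new - x) < step_tol
            x = x_new                                        # Step 3
            if cov <= cov_target and small_step:             # crude stopping heuristic
                break
        return x, N, p_hat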
5 Selection of sample sizes

The conceptual algorithm presented in the previous section needs a sample-adjustment rule (see Step 2). There are two main concerns when constructing a sample-adjustment rule: (i) theoretical convergence and (ii) computational efficiency. This section presents two different rules. The first rule satisfies (i), but its efficiency is sensitive to input parameters. The second rule has weaker convergence properties, but allocates samples optimally in some sense.

5.1 Feedback rule

The first sample-adjustment rule for Step 2 of the conceptual algorithm augments the sample size when the progress of the optimization algorithm in Step 1 is sufficiently small. This rule is motivated by the following observation: when the optimization algorithm in Step 1 is making small progress towards an optimal design of the current sample average approximation PN, the current design is probably near that optimal design. Hence, there is little to be gained from computing even better designs for PN; it is better to increase the sample size N and start to calculate with a more accurate sample average approximation. In (Royset and Polak 2007), the progress of the optimization algorithm in Step 1 is measured in terms of a function FN(x′, x″) defined by

FN(x′, x″) = max{c(x″) − c(x′) − γψN(x′)+, ψN(x″) − ψN(x′)+}    (20)

where

ψN(x) = max{pN(x) − q, max_{j∈J} fj(x)}    (21)
ψN(x)+ = max{0, ψN(x)}, and the parameter γ > 0. The function FN(x′, x″) measures how much "better'' the design x″ is compared to design x′. Suppose that x′ is a feasible design for PN. Then, ψN(x′) ≤ 0 and ψN(x′)+ = 0 and, hence,

FN(x′, x″) = max{c(x″) − c(x′), ψN(x″)}    (22)
We see that if FN(x′, x″) ≤ −ω, with ω being some positive number, then the objective function in PN for design x″ is reduced by at least the amount ω compared to the value for design x′. Additionally, x″ is feasible for PN because ψN(x″) ≤ −ω. Suppose that x′ is not a feasible design for PN. Then, ψN(x′) > 0. When FN(x′, x″) ≤ −ω, the constraint violation for PN at x″ is reduced by at least the amount ω compared to the value at x′ because ψN(x″) − ψN(x′) ≤ −ω. The above observation leads to the following sample-adjustment rule: If FN(xj, xj+1) is no larger than a threshold, then the progress is sufficient and the current sample size is kept. (Note that FN(xj, xj+1) is a negative number and that it measures the decrease in cost or constraint violation. Hence, a large negative number corresponds to large progress towards an optimal design.) If FN(xj, xj+1) is larger than the threshold, then the progress is too small and the sample size is increased. The challenge with this rule is to determine an appropriate threshold. In (Royset and Polak 2007), we find a sample-size dependent threshold that results in the following sample-adjustment rule: If

FN(xj, xj+1) > −η (√((log log N)/N))^τ    (23)
then augment the sample size. Otherwise, keep the current sample size in the next iteration. Here, η is a positive parameter and τ is a parameter strictly between 0 and 1. Since the threshold is increasing (approaches zero from below) with increasing sample size, the rule becomes successively more stringent. For large N, the sample size is only increased if the optimization algorithm in Step 1 of the conceptual algorithm makes only tiny progress (FN(xj, xj+1) is close to zero). This means that for large N, it is necessary to solve the sample average approximation to near optimality before the sample size is increased. On the other hand, for small N, the sample size is increased even if the optimization algorithm in Step 1 is making relatively large progress. Hence, the rule avoids having to solve low-precision sample average approximations to high accuracy before switching to a larger sample size. But, the rule eventually forces the algorithm to solve high-precision sample average approximations to high accuracy. The double logarithmic form of the threshold in (23) relates to the Law of the Iterated Logarithm (see (Royset and Polak 2007) and references therein). It is shown in (Royset and Polak 2007) that this exact form of the sample-adjustment rule guarantees convergence of the conceptual algorithm when implemented with a specific optimization algorithm in Step 1. This specific optimization algorithm is motivated by the Polak-He algorithm (see Section 2.6 of (Polak 1997)) and takes the following form. For any current design xj and current sample size N, the next iterate is

xj+1 = xj + λN(xj, d) hN(xj, d)    (24)

where d is any element in the subgradient ∂pN(xj), see (18) and its subsequent paragraph, and the stepsize λN(xj, d) is given by Armijo's rule:

λN(xj, d) = max_{k∈{0,1,2,...}} {β^k | FN(xj, xj + β^k hN(xj, d)) ≤ β^k α θN(xj, d)}    (25)
Here, α ∈ (0, 1] and β ∈ (0, 1) are parameters, and

θN(x, d) = −min_{z∈Z} {z^T bN(x) + z^T BN(x, d)^T BN(x, d) z/(2δ)}    (26)

with parameter δ > 0, the (J + 2)-dimensional unit simplex Z given by

Z = {z | Σ_{j=1}^{J+2} zj = 1, zj ≥ 0 for all j}    (27)

the (J + 2)-dimensional vector (γ as in (20))

bN(x) = (γψN(x)+, ψN(x)+ − pN(x) + q, ψN(x)+ − f1(x), . . . , ψN(x)+ − fJ(x))^T    (28)

and the n × (J + 2) matrix

BN(x, d) = (∇c(x), d, ∇f1(x), . . . , ∇fJ(x))    (29)

Finally, the search direction is

hN(xj, d) = −BN(xj, d) ẑ/δ    (30)
where ẑ is any optimal solution of (26). The problem in (26) is quadratic and can be solved in a finite number of iterations by a standard QP-solver (e.g. quadprog (Mathworks, Inc. 2004)). Usually, the one-dimensional root-finding problems in the evaluation of rk(x, w), needed in (15), cannot be solved exactly in finite computing time. One possibility is to introduce a precision parameter that ensures a gradually better accuracy in the root finding as the algorithm progresses. Alternatively, we can prescribe a rule saying that the root-finding algorithm should terminate after CN iterations, with C being some constant. For simplicity, we do not discuss the issue of root finding further. In fact, this issue is not problematic in practice. The root-finding problems can be solved in a few iterations with high accuracy using standard algorithms. Hence, the root-finding problems are solved with fixed precision for all iterations in the algorithm, giving a negligible error. The feedback rule (23) requires the user to determine the parameters η and τ as well as the amount of sample size increase. To avoid a quick increase in sample size and corresponding high computational costs, the parameter τ is typically set to 0.9999. However, it is nontrivial to determine an efficient value for the parameter η. If η is large, then the sample size tends to be augmented frequently. Hence, η should be small to avoid costly sample average approximations in the early iterations. However, an η that is too small may result in an excessive number of iterations for each sample average approximation. Overall, in Section 6 we see empirically that the numerical value of η may influence computing times significantly. Furthermore, neither the conceptual algorithm nor the feedback rule specify how much the sample size should be increased – only when to increase it. Typically, the user specifies a rule of the form: replace N by ξN, with ξ > 1, whenever the sample size needs to be increased. Naturally, the
computational efficiency may vary with the amount increased each time. We note that (Royset and Polak 2007) proves that the conceptual algorithm with the sample-adjustment rule (23) and the optimization algorithm (24) is guaranteed to converge to a solution for any τ ∈ (0, 1), η > 0, and sample size increase. Hence, the above discussion only relates to how fast the algorithm will converge. As indicated in the previous paragraph, it can be difficult to select efficient values for the parameter η as well as an efficient sample size increase every time the algorithm is prompted by the sample-adjustment rule. Typically, some numerical experimentation and parameter tuning for the problem at hand is needed. In the next subsection, we describe an alternative, more complex sample-adjustment rule that avoids such tuning.
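As an illustration of the two main ingredients of this subsection, the following sketch (ours; the parameter defaults and the use of a general-purpose solver in place of a dedicated QP code are assumptions) implements the feedback test (23) with a geometric sample-size increase, and the direction-finding subproblem (26)–(30):

    import numpy as np
    from scipy.optimize import minimize

    def feedback_increase(F_val, N, eta=1e-6, tau=0.9999, xi=4, N_max=None):
        """Return the next sample size: grow N by the factor xi when progress is too small."""
        Nb = max(N, 3)                                             # keep log log N well defined
        threshold = -eta * (np.sqrt(np.log(np.log(Nb)) / Nb)) ** tau   # as in (23)
        if F_val > threshold:                                      # progress too small
            N_new = int(np.ceil(xi * N))
            return min(N_new, N_max) if N_max else N_new
        return N

    def direction_finding(b, B, delta=1.0):
        """Solve (26) over the simplex Z and return (theta_N, h_N) per (26) and (30)."""
        J2 = b.size                                    # J + 2 components
        H = B.T @ B / delta
        obj = lambda z: z @ b + 0.5 * z @ H @ z
        grad = lambda z: b + H @ z
        cons = ({"type": "eq", "fun": lambda z: z.sum() - 1.0},)   # sum(z) = 1
        z0 = np.full(J2, 1.0 / J2)
        res = minimize(obj, z0, jac=grad, bounds=[(0.0, None)] * J2,
                       constraints=cons, method="SLSQP")
        z_hat = res.x
        return -obj(z_hat), -B @ z_hat / delta         # theta_N and search direction h_N

Given bN(x) and BN(x, d), one Polak-He-type iteration (24) then amounts to calling direction_finding, performing the Armijo search (25) on FN, and applying feedback_increase to decide whether the sample size should grow.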
5.2 Efficient scheme
In this subsection, we present the sample-adjustment scheme given in (Polak and Royset 2007), which modifies a methodology originally developed in (He and Polak 1990). Instead of having a simple sample-adjustment rule as in Subsection 5.1 for Step 2 of the conceptual algorithm, the scheme in (Polak and Royset 2007) consists of a precalculation step that determines the “optimized’’ sample size for subsequent iterations. In the pre-calculation step, the user selects a required accuracy of the final design (e.g., a feasible design with cost within 5% of the minimum cost) and solves an auxiliary optimization problem that determines the sample size for each iteration (e.g., 100, 100, 100, 200, 200, 300, etc., sample points, for iterations 1, 2, 3, 4, 5, 6, etc., respectively). Hence, whenever the conceptual algorithm reaches Step 2, it simply looks up the prescribed sample size from the output of the auxiliary optimization problem. The objective function of the auxiliary optimization problem, to be derived below, is the total computational work needed to obtain a solution of required accuracy, and the constraint is that the required cost reduction be achieved. Let a stage be a number of iterations carried out by the conceptual algorithm for a constant sample size. The decision variables in the auxiliary problem are (i) the number of stages, s, to be used, (ii) the sample size Ni to be used in stage i, i = 1, 2, . . . , s, and (iii) the number of iterations ni to be carried out in stage i. For example, 100, 100, 100, 200, 200, and 300 sample points, for iterations 1, 2, 3, 4, 5, and 6 respectively, correspond to three stages, with stage 1 consisting of three iterations (n1 = 3) and sample size 100 (N1 = 100), stage 2 consisting of two iterations (n2 = 2) and sample size 200 (N2 = 200), and stage 3 consisting of one iteration (n3 = 1) and sample size 300 (N3 = 300). While the number of stages s has to be treated as an integer variable, the variables Ni and ni can be treated as continuous variables and rounded at the end of their optimization. In practice, it turns out that the optimal number of stages s∗ hardly ever exceeds 10, with 3–7 being a most likely range for s∗ . Incidentally, if one assigns the number of stages to be s > s∗ , and then solves the reduced auxiliary optimization problem for the Ni and ni , the optimal solution will consist of several Ni being equal, so that the total number of distinct stages is s∗ . The auxiliary problem depends on a sampling-error bound, on the initial distance to the optimal value, and on the rate of convergence of the optimization algorithm applied in Step 1 of the conceptual algorithm. All of these may have to be estimated. As a result, it may be presumptuous to call the solution of the auxiliary optimization
problem an "optimal strategy,'' and hence we will call it an "efficient strategy.'' As we will see from our numerical results, despite the use of estimated quantities, the efficient strategy is considerably more effective than the obvious alternatives.

5.2.1 Auxiliary optimization problem

We begin by deriving the auxiliary optimization problem. First we penalize the constraint in P to convert it into an equivalent, unconstrained min-max problem. This simplifies the derivation since it avoids distinguishing between feasible and infeasible designs. For a given parameter π > 0, we define

c̃(x) = c(x) + π max{0, p(x) − q, f1(x), f2(x), . . . , fJ(x)}    (31)

c̃N(x) = c(x) + π max{0, pN(x) − q, f1(x), f2(x), . . . , fJ(x)}    (32)

and the unconstrained problem

P̃: min_x c̃(x)    (33)

We refer to π as a penalty since it adds a positive number to the objective function c(x) for any infeasible design x. If P is calm (see, e.g., (Burke 1991; Clarke 1983)) and π is sufficiently large, then the design x is a local minimizer of P̃ if and only if it is a local minimizer of P. Similarly, the unconstrained problem

P̃N: min_x c̃N(x)    (34)
is equivalent to PN for sufficiently large π. An appropriate penalty π can be selected using well-known techniques such as the one in Section 2.7.3 of (Polak 1997). The implementation of such techniques is beyond the scope of this chapter and we assume in the following that a sufficiently large penalty π > 0 has been determined so that optimal solutions of P̃ and P̃N are feasible for P and PN, respectively. As above, we assume that each sample point is independently generated and that sample points are reused at later stages, i.e., for all stages i = 2, 3, . . . , s, the sample at stage i consists of the Ni−1 sample points at stage i − 1 and of Ni − Ni−1 new, independent sample points. To construct an auxiliary optimization model for determining the number of stages, the sample size at each stage, and the number of iterations to be performed at each stage, we introduce the following assumptions. Suppose that the optimization algorithm in Step 1 of the conceptual algorithm is linearly convergent with a rate of convergence coefficient independent of the sample size in the sample average approximations. That is, for any stage i and iteration j, the cost of the design at the next iteration, x_{j+1}^i, and that of the current design, x_j^i, relate to the cost of the optimal design x*_{Ni} of P̃_{Ni} as follows:

c̃_{Ni}(x_{j+1}^i) − c̃_{Ni}(x*_{Ni}) ≤ θ (c̃_{Ni}(x_j^i) − c̃_{Ni}(x*_{Ni}))    (35)
where θ ∈ (0, 1) is the rate of convergence coefficient. Hence, every iteration of the optimization algorithm reduces the remaining distance to the optimal value by a factor θ.
Many optimization algorithms including the Pshenichnyi-Pironneau-Polak Min-Max Algorithm (see Section 2.4.1 of (Polak 1997)) are linearly convergent. Next, we assume that for any design x the sampling error is given by

|c̃N(x) − c̃(x)| ≤ Δ(N)    (36)
where Δ(N) is a strictly decreasing positive function with Δ(N) → 0, as N → ∞. We return to the form of Δ(N) below, but for now we only assume that such a function exists. To simplify the notation, we deviate from the numbering scheme of the conceptual algorithm and let j denote the iteration number within the current stage (and not from the beginning). Then, x_j^i is the design at iteration j of the i-th stage. Hence, we plan to compute the designs x_0^1, x_1^1, . . . , x_{n1}^1 on stage 1, x_0^2, x_1^2, . . . , x_{n2}^2 on stage 2, . . . , and x_0^s, x_1^s, . . . , x_{ns}^s on stage s. To make use of "warm'' starts, we set x_0^i = x_{n_{i−1}}^{i−1}, i.e., the last design of the current stage is taken as the initial design of the next stage. Let x* and x*_N be optimal designs for P̃ and P̃N, respectively. Then, in view of (36) we have that

c̃(x*) ≤ c̃(x*_N) ≤ c̃N(x*_N) + Δ(N)    (37)

c̃N(x*_N) ≤ c̃N(x*) ≤ c̃(x*) + Δ(N)    (38)
We refer to the distance between the cost c̃(x) of some design x and the cost c̃(x*) of an optimal design x* for P̃ as the cost error of design x. Here, the term "error'' refers to the discrepancy between x and x*. For any stage i = 1, 2, . . . , s, we define the cost error after the last iteration of the i-th stage by

ei = c̃(x_{ni}^i) − c̃(x*)    (39)
Also let e0 = c̃(x_0^1) − c̃(x*). Using (36)–(38) and (35), we obtain that for all i = 1, 2, . . . , s,

ei ≤ c̃_{Ni}(x_{ni}^i) − c̃_{Ni}(x*_{Ni}) + 2Δ(Ni)    (40)

   ≤ θ^{ni} [c̃_{Ni}(x_0^i) − c̃_{Ni}(x*_{Ni})] + 2Δ(Ni)    (41)

   ≤ θ^{ni} [c̃(x_{n_{i−1}}^{i−1}) − c̃(x*)] + 4Δ(Ni)    (42)

   ≤ θ^{ni} e_{i−1} + 4Δ(Ni)    (43)
Hence,

es ≤ e0 θ^{k0(s)} + 4 Σ_{i=1}^{s} θ^{ki(s)} Δ(Ni)    (44)
where ki(s) = Σ_{l=i+1}^{s} nl if i < s and ki(s) = 0 if i = s. We observe that (44) gives an upper bound on the cost error after completing s stages with ni iterations and Ni sample points at stage i. As shown in (Polak and Royset 2007), the cost error is guaranteed to vanish as the number of stages s increases to infinity. This shows that such gradual sample
size increase can lead to asymptotic convergence. This is a valuable result, but in this subsection we aim to determine efficient sample-adjustment schemes, i.e., schemes that minimize the computing time to reach a specific reduction in cost error from an initial value. To be able to construct efficient sample-adjustment schemes we need to quantify the computational effort associated with one iteration of the optimization algorithm used in Step 1 of the conceptual algorithm as a function of the sample size N. Suppose that this computational effort is given by the positive function w(N) for any design x. We are now ready to present the auxiliary optimization problem. Given an initial cost error e0 > 0 and a required fractional reduction in cost error ε ∈ (0, 1), we seek to determine the number of stages s as well as sample sizes Ni and numbers of iterations ni at each stage i, i = 1, 2, . . . , s, such that the computational effort to reach a cost error of εe0 is minimized. We note that the cost error is the discrepancy between the cost of the current design and the cost of the optimal design of P̃. In view of (44), this optimization problem takes the following form:

D(e0, ε):   min_{s, ni, Ni} Σ_{i=1}^{s} ni w(Ni)

subject to   e0 θ^{k0(s)} + 4 Σ_{i=1}^{s} θ^{ki(s)} Δ(Ni) ≤ εe0

             Ni+1 ≥ Ni,   i = 1, 2, . . . , s − 1

             s, ni, Ni integer,   i = 1, 2, . . . , s    (45)
The objective function in D(e0, ε) represents the total computational effort needed to carry out the planned iterations. The first constraint ensures that the cost error has at least been reduced to the required level εe0 and the second set of constraints ensures that the sample size is nondecreasing. The estimation of the parameters defining problem D(e0, ε) is discussed in the next subsection.

5.2.2 Implementation of auxiliary optimization problem

The auxiliary optimization problem D(e0, ε) involves the work and sampling-error functions w(N) and Δ(N) as well as the rate of convergence parameter θ and the initial cost error e0 = c̃(x_0^1) − c̃(x*). All these quantities must be determined before D(e0, ε) can be solved. We deal with these issues one at a time. In view of (17) and (18), the computing effort required to evaluate pN(x) and an element of the subgradient grows linearly in N. Hence, the work associated with one iteration of the optimization algorithm used in Step 1 of the conceptual algorithm is proportional to N and we set the work function w(N) = N. The (almost sure) sampling error Δ(N) can be determined using the Law of the Iterated Logarithm, see (Royset and Polak 2007). However, √((log log N)/N) is a pessimistic estimate of the sampling error "typically'' experienced. Since our goal is to determine an efficient number of stages, sample sizes, and numbers of iterations, it appears more reasonable to assume that the sampling error is proportional to 1/√N as proposed by classical estimation theory: For a given design x, it follows under weak assumptions from the Central Limit Theorem that pN(x) is approximately normally
distributed with mean p(x) and variance σ(x)²/N for large N, where σ(x)² = Var[φ(x, W)]. Hence, for sufficiently large N,

P[|pN(x) − p(x)| ≤ 1.96σ(x)/√N] ≥ 0.95    (46)

However, we are primarily interested in the difference between c̃N(x) and c̃(x). Since the max-function in (32) only makes the variance less, it follows that

P[|c̃N(x) − c̃(x)| ≤ Δ(N)] ≥ 0.95    (47)

when Δ(N) = 1.96πσ(x)/√N. This error expression appears to be appropriate for our auxiliary optimization problem, and we set

Δ(N) = 1.96π max σ(x)/√N    (48)

where the maximization is over all designs examined in a preliminary calculation described below. We determine σ(x), θ, and e0 in an estimation phase consisting of n0 iterations of the optimization algorithm in Step 1 of the conceptual algorithm applied to P̃_{N0}, with N0 being a small sample size. Let {x_j^0, j = 0, . . . , n0} be the iterates computed in this estimation phase. Each time p_{N0}(x) is computed, the corresponding variance σ(x)² is estimated by

σ(x)² = Σ_{j=1}^{N0} (φ(x, wj) − p_{N0}(x))²/(N0 − 1)    (49)
We always retain the largest σ(x)-value computed and use that in the calculation of Δ(N), see (48). The rate of convergence parameter θ is estimated by the solution of the following least-squares problem, in which the optimal value c̃_{N0}(x*_{N0}) of P̃_{N0} is also estimated:

min_{θ̂, ĉ} Σ_{j=0}^{n0} [(ĉ + (c̃_{N0}(x_0^0) − ĉ) θ̂^j) − c̃_{N0}(x_j^0)]²    (50)
This least-squares problem minimizes the squared error between the calculated cost at each iteration, c̃_{N0}(x_j^0), and the nonlinear model ĉ + (c̃_{N0}(x_0^0) − ĉ)θ̂^j. The nonlinear model estimates that the cost of the design at iteration j is the optimal cost ĉ plus the initial cost error c̃_{N0}(x_0^0) − ĉ reduced by a factor. The factor is simply the rate of convergence coefficient raised to the power of the number of iterations. Using the results of the least-squares calculations, we estimate θ by θ̂ and c̃_{N0}(x*_{N0}) by ĉ. Finally, we (coarsely) estimate the initial cost error e0 = c̃(x_0^1) − c̃(x*) by ê0 = c̃_{N0}(x_0^0) − ĉ. We have now established procedures for estimating all the unknown quantities in D(e0, ε). D(e0, ε) is a nonlinear integer program that appears difficult to solve directly, but this fact can be circumvented by the following observations. First, the restriction of D(e0, ε) obtained by fixing s to a number in the range 5–10 tends to be insignificant since more than 5–10 stages is rarely advantageous and fewer than 5–10 stages is still effectively allowed in the model by setting Ni = Ni+1 for some i. Second, Ni, and to some extent also ni, tend to be large integers. Hence, a continuous relaxation
with rounding of the optimal solutions to the nearest integers is justified. In view of these observations, D(e0, ε) can be solved approximately using a standard nonlinear programming algorithm.

5.2.3 Overall algorithm with efficient sample-adjustment scheme

We now summarize our approach and discuss how the auxiliary optimization problem can be integrated in an algorithm for solving P. As indicated above, the process of solving the auxiliary optimization problem must be preceded by an estimation phase where parameters are determined. This leads to the following overall algorithm for solving P approximately.

Algorithm with Efficient Sample-Adjustment Scheme.

Parameters. Number of iterations in estimation phase n0, sample size in estimation phase N0, maximum number of stages s, and constraint penalty π > 0.

Data. Required fractional reduction in cost error ε > 0, initial design x_0^0, and independent sample points w1, w2, . . . .

Step 0. Compute the variance estimate σ(x_0^0)² using (49).
Step 1. For j = 0 to n0 − 1, perform:
Sub-step 1.1. Compute the next design x_{j+1}^0 by starting from x_j^0 and carrying out one iteration of some optimization algorithm applied to PN0.
Sub-step 1.2. Compute the variance estimate σ(x_{j+1}^0)² using (49).
Step 2. Set σ̂² equal to the largest variance estimate encountered in Steps 0 and 1.
Step 3. Determine θ̂ and ĉ as the optimal solution of (50).
Step 4. Set Δ̂(N) = 1.96πσ̂/√N, and determine ni and Ni by solving

min_{ni, Ni} Σ_{i=1}^{s} ni Ni   subject to   ê0 θ̂^{k0(s)} + 4 Σ_{i=1}^{s} θ̂^{ki(s)} Δ̂(Ni) ≤ ε ê0

             Ni+1 ≥ Ni,   i = 1, 2, . . . , s − 1

             ni, Ni ≥ 1,   i = 1, 2, . . . , s    (51)
Step 5. For i = 1 to s, perform:
Sub-step 5.1. Set the first design of the current stage equal to the last design of the previous stage, i.e., x_0^i = x_{n_{i−1}}^{i−1}.
Sub-step 5.2. For j = 0 to ni − 1, compute the next design x_{j+1}^i by starting from x_j^i and carrying out one iteration of some optimization algorithm applied to PNi.

We note that the optimization algorithm used in Sub-Step 1.1 should be identical to the one used in Sub-Step 5.2 since the former sub-step is used to estimate the behavior of the latter. However, any nonlinear programming algorithm can be used in Steps 3 and 4. The proposed algorithm consists of three phases: estimation of parameters (Steps 0–3), solution of the auxiliary optimization problem (Step 4), and main iterations (Step 5). This represents the simplest implementation of our idea. Alternatively, we can adopt a moving-horizon approach, where Step 5 is completed only for i = 1, followed by Step 4, then by Step 5 for i = 1 again, followed by Step 4, etc. Hence,
the sample-adjustment plan is re-optimized after each stage, which may lead to an improved plan. With re-optimization, it is also possible to re-compute σ̂, using all previous iterates, as well as θ̂ and ĉ. Other implementations can also be imagined. In the following numerical study, we adopt the simple implementation described above.
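To indicate how the estimation phase and the auxiliary problem might be realized in practice, the following Python sketch is offered (ours, not the authors' implementation; the solver choice, the fixed number of stages s, the bounds and all names are illustrative, and the integer variables are simply rounded after a continuous solve). It fits θ̂ and ĉ as in (50) and then solves a continuous relaxation of (51):

    import numpy as np
    from scipy.optimize import least_squares, minimize

    def fit_rate(costs):
        """Fit c_hat and theta_hat to the estimation-phase costs, per (50)."""
        costs = np.asarray(costs, dtype=float)
        def residuals(p):
            c_hat, theta_hat = p
            j = np.arange(costs.size)
            return (c_hat + (costs[0] - c_hat) * theta_hat ** j) - costs
        sol = least_squares(residuals, np.array([costs.min(), 0.5]),
                            bounds=([-np.inf, 1e-6], [np.inf, 0.999]))
        c_hat, theta_hat = sol.x
        return theta_hat, c_hat, costs[0] - c_hat      # theta_hat, c_hat, e0_hat

    def plan_stages(theta, e0, eps, sigma_max, pi, s=5, n_max=200, N_min=50, N_max=1e6):
        """Continuous relaxation of (51): choose n_i, N_i minimizing sum n_i * N_i."""
        delta = lambda N: 1.96 * pi * sigma_max / np.sqrt(N)   # error model (48)
        unpack = lambda v: (v[:s], v[s:])                      # n_1..n_s, N_1..N_s
        def work(v):
            n, N = unpack(v)
            return float(np.dot(n, N))
        def error_bound(v):                                    # left side of the constraint
            n, N = unpack(v)
            k = np.concatenate([np.cumsum(n[::-1])[::-1][1:], [0.0]])   # k_i(s)
            return e0 * theta ** (k[0] + n[0]) + 4.0 * np.sum(theta ** k * delta(N))
        cons = [{"type": "ineq", "fun": lambda v: eps * e0 - error_bound(v)}]
        cons += [{"type": "ineq", "fun": lambda v, i=i: unpack(v)[1][i + 1] - unpack(v)[1][i]}
                 for i in range(s - 1)]
        v0 = np.concatenate([np.full(s, 10.0), np.linspace(N_min, 10 * N_min, s)])
        bounds = [(1.0, n_max)] * s + [(N_min, N_max)] * s
        res = minimize(work, v0, bounds=bounds, constraints=cons, method="SLSQP")
        n_opt, N_opt = unpack(res.x)
        return np.rint(n_opt).astype(int), np.rint(N_opt).astype(int)

The rounded output of plan_stages plays the role of the sample-size schedule looked up in Step 2 of the conceptual algorithm.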
6 Numerical examples

We illustrate our sample-adjustment approaches using three numerical examples. The examples are implemented in Matlab 7.0 (Mathworks, Inc. 2004) on a 2.8 GHz PC running Microsoft Windows 2000.

6.1 Feedback rule and efficient scheme
This subsection presents a comparative study of the two sample-adjustment approaches given in Section 5. The numerical results of this subsection were reported in (Polak and Royset 2007).

Example 1

The first example arises in the optimal design of a short structural column with a rectangular cross section of dimensions x1 × x2. Hence, x = (x1, x2) is the design vector. The column is subjected to bi-axial bending moments V1 and V2, which, together with the yield strength V3 of the material, are considered to be independent, lognormally distributed random variables. The column is also subject to a deterministic axial force af. This gives rise to a failure probability

p(x) = P[{G(x, V) ≤ 0}]    (52)
where the random vector V = (V1, V2, V3) and G(x, V) is a limit-state function defined by

G(x, V) = 1 − 4V1/(x1 x2² V3) − 4V2/(x1² x2 V3) − (af/(x1 x2 V3))²    (53)
As discussed in Section 2, this limit-state function can be transformed into one given in terms of a standard normal vector U. Let g1(x, U) be this transformed limit-state function. Since the resulting safe domain is not bounded, we introduce an auxiliary limit-state function g2(x, U) = ρ − ||U||, where ρ = 6.5 in this example. (This introduces negligible error.) Then, we redefine the failure probability of the structure as

p(x) = P[{g1(x, U) ≤ 0} ∪ {g2(x, U) ≤ 0}]    (54)
which is in the form considered in this chapter. We seek a design of the column which satisfies the constraints defined by f1(x) = −x1, f2(x) = −x2, f3(x) = x1/x2 − 2, f4(x) = 0.5 − x1/x2, f5(x) = x1x2 − 0.175, and minimizes p(x). This is problem (1) with c0(x) = 0, c(x) = 1, and J = 5. As discussed above, pN(x) does not satisfy the smoothness assumption. Hence, care must be taken when selecting an optimization algorithm for Step 1 in the conceptual algorithm or in Sub-Steps 1.1 and 5.2 in the algorithm with efficient sample-adjustment scheme. For simplicity in
these numerical tests, we ignore the fact that the smoothness assumption may be violated and use the Pshenichnyi-Pironneau-Polak Min-Max Algorithm (see Section 2.4.1 of (Polak 1997)) as the optimization algorithm for solving PN. No detrimental behavior of the Pshenichnyi-Pironneau-Polak Min-Max Algorithm was observed because of this simplification. (Note that since p(x) is smooth, pN(x) is, for practical purposes, effectively smooth for large N.) The parameters for the algorithm with efficient sample-adjustment scheme were selected to be n0 = 25, N0 = 50, s = 5, and π = 2. We note that π = 2 suffices to ensure feasibility. Finally, the required fractional reduction in cost error was ε = 0.01 and the initial point was chosen to be x_0^0 = (√0.175, √0.175). The auxiliary optimization problem yielded a sample-adjustment strategy of three stages with 25, 8, and 8 iterations, with sample sizes 50, 251, and 1621, respectively, which was executed in 458 seconds. Note that this computing time includes the estimation phase (30 seconds) and the solution time of the auxiliary optimization problem (3 seconds). For comparison, we also solve the problem using the feedback rule of Subsection 5.1 to adjust the sample size. We experiment with the thresholds

−η (√((log log N)/N))^τ    (55)

and

−η/√N    (56)
for determining if the progress is "small'' in Step 1 of the conceptual algorithm. We note that (55) is the same as in (23). This threshold formula guarantees convergence as proven in (Royset and Polak 2007). The threshold in (56) leads to a heuristic algorithm, but offers the advantage that the threshold tends to zero faster for increasing N as compared to (55). In the numerical tests, we set τ = 0.9999. As mentioned above, it is difficult to select an effective value of η, so we experiment with a range of values. Furthermore, we must determine how much the sample size should increase when prompted by the sample-adjustment rule. In this example, we selected five stages with sample sizes equally spaced between the minimum and maximum sample sizes given by the auxiliary optimization problem, i.e., 50, 443, 836, 1228, and 1621. We used the same random seed in both algorithms. We ran the algorithm with the feedback rule until c̃1621(·) was equal to the cost achieved in the last iteration of the algorithm with the efficient scheme. We did not augment the sample size beyond 1621, but continued computing iterates at that stage until the target cost value was achieved. This is a somewhat favorable stopping criterion for the algorithm with the feedback rule because this algorithm might augment, prematurely, the sample size beyond 1621 resulting in long computing times. The computing times for the algorithm with the feedback rule are summarized in Table 12.1 for various values of the parameter η and for the two threshold formulae (55) and (56). In Table 12.1, the row with η = ∞ gives the computing time for a fixed sample size equal to the largest sample size 1621 for all iterations.
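Before turning to the timing results, a small sketch of the Example 1 problem data may be helpful (ours; the numerical value of af is a placeholder since it is not restated in this section, and the probability transformation, the directional-sampling estimator and the optimizer are omitted):

    import numpy as np

    A_F = 500.0   # deterministic axial force a_f (placeholder value, not given above)

    def G(x, V):
        """Limit-state function (53) for the short column with cross section x1 x x2."""
        x1, x2 = x
        V1, V2, V3 = V                      # bending moments and yield strength
        return (1.0
                - 4.0 * V1 / (x1 * x2**2 * V3)
                - 4.0 * V2 / (x1**2 * x2 * V3)
                - (A_F / (x1 * x2 * V3))**2)

    def deterministic_constraints(x):
        """f_1, ..., f_5 <= 0 as listed above."""
        x1, x2 = x
        return np.array([-x1, -x2, x1 / x2 - 2.0, 0.5 - x1 / x2, x1 * x2 - 0.175])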
Table 12.1 Computing times [seconds] for the algorithm with feedback rule for sample adjustment as applied to Example 1. The algorithm with efficient sample-adjustment scheme computes the same design in 458 seconds.

η        Threshold (55)    Threshold (56)
∞        980               980
10−1     1044              1036
10−2     1084              654
10−3     678               675
10−4     675               677
10−5     682               676
10−6     476               477
10−7     574               554
10−8     603               601
10−9     898               901
As seen from Table 12.1, a fixed sample size can result in poor computing times compared to an adaptive scheme using a feedback rule. However, in the adaptive schemes there is a trade-off between solving the approximating problems accurately at an early stage (i.e., using small η), potentially wasting time, and solving the early approximations too coarsely (i.e., using large η), leading to many iterations at stages with high computational cost. In the efficient sample-adjustment scheme of Sub-Section 5.2, the trade-off is balanced by solving the auxiliary optimization problem. In the feedback rule, the user needs to consider the trade-off manually by selecting a value for the parameter η. If the right balance is found, i.e., a good η, then the feedback rule can be efficient. In fact, the feedback rule with η = 10−6 is only marginally slower than the efficient scheme. Of course, it is difficult to select η a priori. To illustrate this difficulty, we repeated the example for the higher accuracy ε = 0.005. Then, the efficient scheme increased the sample size up to 6473 and solved the problem in 1461 seconds. From Table 12.1 it appears that η = 10−6 is a good choice. We selected this value and re-solved the problem using the feedback rule with five stages equally spaced in the range [50, 6473] as above. The computing time turned out to be 4729 seconds. Hence, η = 10−6 was not efficient in this case.

Example 2

The second example considers the design of a simply supported reinforced concrete T-girder for minimum cost according to the specifications in (American Association of State Highway and Transportation Officials 1992), using the nine design variables x = (As, b, hf, bw, hw, Av, S1, S2, S3), where As is the area of the tension steel reinforcement, b is the width of the flange, hf is the thickness of the flange, bw is the width of the web, hw is the height of the web, Av is the area of the shear reinforcement (twice the cross-section area of a stirrup), and S1, S2 and S3 are the spacings of
Table 12.2 Computing times [seconds] for the algorithm with feedback rule for sample adjustment as applied to Example 2. The algorithm with efficient sample-adjustment computes the same design in 1001 seconds.

η        Threshold (55)    Threshold (56)
∞        >36000            >36000
10−2     >12600            7416
10−3     2004              1990
10−4     2256              2342
10−5     6721              2327
10−6     1209              1608
10−7     11108             >7200
shear reinforcements in the high, medium, and low shear force zones of the girder, respectively. We model uncertainty using eight independent random variables collected in a vector V. We assumed that the girder can fail in four different modes corresponding to bending stress in mid-span and shear stress in the high, medium, and low shear force zones. Structural failure occurs if any of the four failure modes occur. This gives rise to four nonlinear, smooth limit-state functions Gk(x, V), k = 1, 2, 3, 4, whose exact form is rather complicated and is given in (Royset et al. 2006). This results in a failure probability p(x) = P[∪_{k=1}^{4} {Gk(x, V) ≤ 0}]. As above, these limit-state functions can be transformed into ones given in terms of a standard normal vector U. Let gk(x, U) be these transformed limit-state functions. Since the resulting safe domain is not bounded, we introduce an auxiliary limit-state function g5(x, U) = ρ − ||U||, where ρ = 10 in this example. (This introduces negligible error.) Then, we redefine the failure probability of the structure as

p(x) = P[∪_{k=1}^{5} {gk(x, U) ≤ 0}]    (57)
which is in the form considered in this chapter. We also imposed 24 deterministic, nonlinear constraints as described in (Royset et al. 2006). Algorithm parameters were selected to be n0 = 50, N0 = 50, s = 5, and π = 1. Finally, the required fractional reduction in cost error ε = 0.0001 and the initial point x_0^0 = (0.01, 0.5, 0.5, 0.5, 0.5, 0.0005, 0.5, 0.5, 0.5) were chosen. The algorithm with the efficient sample-adjustment scheme gave three stages with 65, 20, and 20 iterations, with sample sizes 50, 373, and 2545, respectively. The total computing time was 1001 seconds. Again we compared this result with that obtained using the algorithm with the feedback rule. Here, we use five stages of equally spaced sample sizes between 50 and 2545. Using the same stopping criterion as for the first example, we obtained the computing times in Table 12.2. We observe that the computing times using the feedback rule can be significantly longer than those achieved using the efficient scheme. We also
Figure 12.1 Truss for Example 3 (seven members numbered 1–7; two 10 m spans; height 8.66 m; load L applied at mid-span).
note that an approach with a fixed sample size of 2545 for all iterations takes more than 10 hours (see the first row in Table 12.2).

6.2 Alternative objective functions

We conclude this chapter by demonstrating how our solution methodology can also solve other problems than P (and (1) and (3)). Typically, engineers need to account for not only quantitative factors such as cost and reliability, but also esthetic, social, and political requirements. Most esthetic, social, and political requirements are qualitative in nature and cannot easily be incorporated into numerical models. Even quantitative factors may not fully represent reality due to imprecise models and lack of data. In this subsection, we show how multiple optimization models can be formulated and solved to account for this situation. We adopt an approach originally proposed in (Brill Jr. 1979) for public sector planning: determine a small set of design alternatives that satisfy the stated requirements, are "good'' with respect to the stated objective, and are also dispersed in the design space. Instead of searching for one optimal design or an efficient frontier, as in single- and multi-objective optimization, respectively, this approach seeks several design alternatives (e.g., 3–12) that the engineer and the decision maker can further assess using qualitative objectives. As pointed out in (Brill Jr. 1979), the best design from the perspective of the decision maker may not be located on the efficient frontier, as assumed by a multi-objective optimization formulation, due to the fact that not all objectives are included in the multi-objective formulation. Furthermore, by seeking a dispersed set of design alternatives, the engineer and decision maker are presented with a wide range of alternatives which may stimulate new considerations and ideas about designs, objectives, and constraints. See also (White 1996; Drezner and Erkut 1995) for similar approaches. We illustrate this approach with an example.

Example 3

Consider the simply supported truss in Figure 12.1. The truss is subject to a random load L in its mid-span. L is lognormally distributed with mean 1000 kN and standard
deviation 400 kN. Let Sk be the yield stress of member k. Members 1 and 2 have lognormally distributed yield stresses with mean 100 N/mm² and standard deviation 20 N/mm². The other members have lognormally distributed yield stresses with mean 200 N/mm² and standard deviation 40 N/mm². The yield stresses of members 1 and 2 are correlated with correlation coefficient 0.8. However, their correlation coefficients with the other yield stresses are 0.5. Similarly, the yield stresses of members 3–7 are correlated with correlation coefficient 0.8, but their correlation coefficients with the yield stresses of members 1 and 2 are 0.5. The load L is independent of the yield stresses. Let V = (S1, S2, . . . , S7, L). The design vector x = (x1, x2, . . . , x7), where xk is the cross-section area (in 1000 mm²) of member k. The truss fails if any of the members exceed their yield stress. (We ignore the possibility of buckling.) This gives rise to seven limit-state functions:

Gk(x, V) = Sk xk − L/ζk,   k = 1, 2, . . . , 7    (58)
where ζk is a factor given by the geometry and loading of the truss. From Figure 12.1, we determine that ζk = 1/(2√3) for k = 1, 2, and ζk = 1/√3 for k = 3, 4, . . . , 7. Using a Nataf distribution (see (Ditlevsen and Madsen 1996), Section 7.2), we transform these limit-state functions into limit-state functions given in terms of a standard normal random vector U. Let gk(x, U) denote these transformed limit-state functions. Since the resulting safe domain is not bounded, we introduce an auxiliary limit-state function g8(x, U) = ρ − ||U||, where ρ = 20 in this example. (This introduces negligible error.) Then, we redefine the failure probability of the structure as

p(x) = P[∪_{k=1}^{8} {gk(x, U) ≤ 0}]    (59)
which is in the form considered in this chapter. We impose the constraint that the failure probability should be no larger than 0.001350, i.e., p(x) ≤ q = 0.001350. We also impose the 14 deterministic constraints 0.5 ≤ xk ≤ 2, k = 1, 2, . . . , 7, that limit the allowable area of each member to be between 500 mm² and 2000 mm². We initially seek a design of the truss that minimizes the cost of the truss, i.e., we aim to solve P. Since all members are equally long, the cost c(x) = Σ_{k=1}^{7} xk. We use the conceptual algorithm implemented with the feedback rule (23) for sample-adjustment, with parameters η = 0.002 and τ = 0.9999, and optimization algorithm (24) for Step 1, with parameters α = 0.5, β = 0.8, γ = 2, and δ = 1. The sample size is initially 375 and is increased by a factor of 4 every time it is prompted by the sample-adjustment rule. However, the sample size is not increased beyond 24000. We start the calculations with initial design x0 = (1.000, 1.000, . . . , 1.000) and stop when a feasible solution for P24000 is found. The resulting design is given in the first row of Table 12.3. With the motivation that a decision maker may want to be presented with a small set of good designs, from which he or she may select, we formulate an optimization model that generates substantially different designs. Specifically, suppose that we have a set of existing design alternatives xd, d ∈ D. Let ĉ be the smallest cost over all existing design alternatives, i.e., ĉ = min_{d∈D} c(xd). Then, the following optimization model provides a
Table 12.3 Alternative designs for Example 3. The first row gives the optimal design, but the subsequent rows are at most 10% more costly.

Design   x1       x2       x3       x4       x5       x6       x7       Dispersion
1        1.138    1.156    1.118    1.107    1.119    1.113    1.108    –
2        1.169    2.000    1.089    1.096    1.096    1.103    1.091    0.8451
3        1.982    1.164    1.100    1.100    1.102    1.104    1.092    0.8449
4        1.124    1.146    1.110    1.946    1.109    1.111    1.100    0.8393
5        1.121    1.145    1.113    1.108    1.109    1.949    1.100    0.8367
6        1.123    1.147    1.107    1.109    1.947    1.111    1.101    0.8286
7        1.122    1.146    1.944    1.109    1.109    1.110    1.104    0.8269
8        1.123    1.146    1.106    1.108    1.110    1.110    1.941    0.8331
9        1.087    1.595    1.536    1.104    1.107    1.119    1.098    0.6085
design that is no more costly than aĉ, with a > 1, and that is as "different'' compared to the existing designs xd as possible:

max_{x0, x} {x0 | p(x) ≤ q, x ∈ X, c(x) ≤ aĉ, ||x − xd|| ≥ x0, d ∈ D}    (60)
Here, x0 is an auxiliary design variable that we seek to maximize. The last set of constraints in (60) ensures that the differences (measured in the Euclidean distance) between the new design x and the existing designs xd are all no smaller than x0. Hence, (60) maximizes the smallest difference between a new design and the existing designs, while ensuring that the new design is feasible and no more costly than aĉ. We note that (60) is in the form P (after redefining the cost and constraint functions) and, hence, it can be solved by the conceptual algorithm described in Section 4. Using the same algorithm parameters as in the beginning of this example, we obtain the designs reported in Table 12.3. In this table, the first row reports the optimal design. The second row is obtained by solving (60) with a = 1.1 and D consisting only of the design in the first row. We observe that the design in the second row is substantially different from the one in the first row, even though it is no more than 10% more costly. The last column of Table 12.3 shows that the second design lies 0.8451 "away'' from the first design measured in the Euclidean distance. The remaining rows in Table 12.3 are computed in a similar manner, but with D now consisting of all the designs in the rows above. We note that all the designs cost no more than 10% more than the minimum cost. It is seen from Table 12.3 that the minimum cost design (row 1) distributes the material evenly between the different members. However, good designs can also be achieved by selecting one of the members to have cross-section area close to 2 (rows 2–8). Moreover, good designs can be found by setting two members to approximately 1.5 (last row). Naturally, it becomes harder and harder to find a "different'' design as the set of existing designs D grows, i.e., the last column of Table 12.3 tends to decrease for later designs. Hence, after some solutions of (60) with steadily increasing D, the designs we generate will not be substantially different
compared to the ones already computed. This is an interactive process, which should be ended whenever a useful set of designs has been generated and further calculations will provide only limited insight.
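For completeness, the following schematic sketch (ours; the Euclidean norm and all names are assumptions) shows how (60) can be recast in the form P by treating (x0, x) as the design vector, −x0 as the cost, and the cost cap and dispersion requirements as additional deterministic constraints; in practice p(x) would again be replaced by its sample average pN(x):

    import numpy as np

    def dispersion_problem(existing_designs, c, p_estimate, q, a=1.1):
        """Build cost and constraint callables for (60) from the designs found so far."""
        X_d = [np.asarray(xd, dtype=float) for xd in existing_designs]
        c_min = min(c(xd) for xd in X_d)                # c-hat: smallest cost so far

        def cost(y):                                    # y = (x0, x); minimizing -x0 maximizes x0
            return -y[0]

        def constraints(y):                             # every entry must be <= 0
            x0, x = y[0], y[1:]
            g = [c(x) - a * c_min]                      # cost within the factor a of the minimum
            g += [x0 - float(np.linalg.norm(x - xd)) for xd in X_d]   # ||x - x_d|| >= x0
            g += [p_estimate(x) - q]                    # failure-probability constraint p(x) <= q
            return np.array(g)

        return cost, constraints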
7 Conclusions

We have presented an approach for solving reliability-based optimal structural design problems using Monte Carlo sampling and nonlinear programming. The approach replaces failure probabilities in the problems by Monte Carlo estimates with increasing sample sizes, and solves the resulting approximate problems with increasing precision. We have also described rules for adjusting the sample sizes, which ensure theoretical convergence and computational efficiency. The numerical examples show empirically that the sample-adjustment rules can reduce computing times substantially compared with an implementation using a fixed sample size. The approach in this chapter is directed towards reliability-based structural optimization problems where the design variables are not restricted to be integers and the relevant limit-state functions are differentiable with continuous gradients. Furthermore, the approach requires many limit-state function evaluations, which (currently) prevent its application to problems involving, e.g., computationally intensive finite element analysis. We note, however, that the sample-adjustment rules described in this chapter dramatically reduce the number of limit-state function evaluations compared to an approach with a fixed sample size. Consequently, the results of this chapter open the possibility for solving, to high accuracy, many previously intractable reliability-based structural optimization problems.
References Akgul, F. & Frangopol, D.M. 2003. Probabilistic analysis of bridge networks based on system reliability and Monte Carlo simulation. In A. Der Kiureghian, S. Madanat & J.M. Pestana (eds), Applications of Statistics and Probability in Civil Engineering, Rotterdam, Netherlands, pp. 1633–1637. Millpress. American Association of State Highway and Transportation Officials (1992). Standard specifications for highway bridges. Washington, D.C.: American Association of State Highway and Transportation Officials. 15th edition. Beck, J.L., Chan, E., Irfanoglu, A. & Papadimitriou, C. 1999. Multi-criteria optimal structural design under uncertainty. Earthquake Engineering & Structural Dynamics 28(7):741–761. Bjerager, P. 1988. Probability integration by directional simulation. Journal of Engineering Mechanics 114(8):1288–1302. Brill Jr., E.D. 1979. The use of optimization models in public-sector planning. Management Science 25(5):413–422. Burke, J.V. 1991. Calmness and exact penalization. SIAM J. Control and Optimization 29(2):493–497. Clarke, F. 1983. Optimization and nonsmooth analysis. New York, New York: Wiley. Deak, I. 1980. Three digit accurate multiple normal probabilities. Numerische Mathematik 35:369–380. Ditlevsen, O. & Madsen, H.O. 1996. Structural reliability methods. New York, New York: Wiley.
Ditlevsen, O., Oleson, R. & Mohr, G. 1987. Solution of a class of load combination problems by directional simulation. Structural Safety 4:95–109. Drezner, Z. & Erkut, E. 1995. Solving the continuous p-dispersion problem using nonlinear programming. The Journal of the Operational Research Society 46(4):516–520. Eldred, M.S., Giunta, A.A., Wojtkiewicz, S.F. & Trucano, T.G. 2002. Formulations for surrogate-based optimization under uncertainty. In Proceedings of the 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Paper AIAA-2002-5585, Atlanta, Georgia. Enevoldsen, I. & Sørensen, J.D. 1994. Reliability-based optimization in structural engineering. Structural Safety 15(3):169–196. Gasser, M. & Schuëller, G.I. 1998. Some basic principles in reliability-based optimization (RBO) of structures and mechanical components. In Stochastic programming methods and technical applications, K. Marti & P. Kall (eds), Lecture Notes in Economics and Mathematical Systems 458, Springer-Verlag, Berlin, Germany. He, L. & Polak, E. 1990. Effective diagonalization strategies for the solution of a class of optimal design problems. IEEE Transactions on Automatic Control 35(3):258–267. Holicky, M. & Markova, J. 2003. Reliability analysis of impacts due to road vehicles. In A. Der Kiureghian, S. Madanat & J.M. Pestana (eds), Applications of Statistics and Probability in Civil Engineering, Rotterdam, Netherlands, pp. 1645–1650. Millpress. Igusa, T. & Wan, Z. 2003. Response surface methods for optimization under uncertainty. In Proceedings of the 9th International Conference on Application of Statistics and Probability, A. Der Kiureghian, S. Madanat & J. Pestana (eds), San Francisco, California. Itoh, Y. & Liu, C. 1999. Multiobjective optimization of bridge deck maintenance. In Case Studies in Optimal Design and Maintenance Planning if Civil Infrastructure Systems, D.M. Frangopol (ed.), ASCE, Reston, Virginia. Kuschel, N. & Rackwitz, R. 2000. Optimal design under time-variant reliability constraints. Structural Safety 22(2):113–127. Liu, P.-L. & Kuo, C.-Y. 2003. Safety evaluation of the upper structure of bridge based on concrete nondestructive tests. In A. Der Kiureghian, S. Madanat & J.M. Pestana (eds), Applications of Statistics and Probability in Civil Engineering, Rotterdam, Netherlands, pp. 1683–1688. Millpress. Madsen, H.O. & Friis Hansen, P. 1992. A comparison of some algorithms for reliability-based structural optimization and sensitivity analysis. In Reliability and Optimization of Structural Systems, Proceedings IFIP WG 7.5, R. Rackwitz & P. Thoft-Christensen (eds), SpringerVerlag, Berlin, Germany. Marti, K. 1996. Differentiation formulas for probability functions: the transformation method. Mathematical Programming 75:201–220. Marti, K. 2005. Stochastic Optimization Methods. Berlin: Springer. Mathworks, Inc. 2004. Matlab reference manual, Version 7.0. Natick, Massachusetts: Mathworks, Inc. Nakamura, H., Miyamoto, A. & Kawamura, K. 2000. Optimization of bridge maintenance strategies using GA and IA techniques. In Reliability and Optimization of Structural Systems, Proceedings IFIP WG 7.5, A.S. Nowak & M.M. Szerszen (eds), Ann Arbor, Michigan. Polak, E. 1997. Optimization. Algorithms and consistent approximations. New York, New York: Springer-Verlag. Polak, E. & Royset, J.O. 2007. Efficient sample sizes in stochastic nonlinear programming. J. Computational and Applied Mathematics. To appear. Royset, J.O., Der Kiureghian, A. & Polak, E. 2006. 
Optimal design with probabilistic objective and constraints. J. Engineering Mechanics 132(1):107–118. Royset, J.O. & Polak, E. 2004a. Implementable algorithm for stochastic programs using sample average approximations. J. Optimization. Theory and Application 122(1):157–184.
334
Structural design optimization considering uncertainties
Royset, J.O. & Polak, E. 2004b. Reliability-based optimal design using sample average approximations. J. Probabilistic Engineering Mechanics 19(4):331–343. Royset, J.O. & Polak, E. 2007. Extensions of stochastic optimization results from problems with simple to problems with complex failure probability functions. J. Optimization. Theory and Application 133(1):1–18. Rubinstein, R. & Shapiro, A. 1993. Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization by the Score Function Method. New York, NY: Wiley. Ruszczynski, A. & Shapiro, A. 2003. Stochastic Programming. New York, New York: Elsevier. Torczon, V. & Trosset, M.W. 1998. Using approximations to accelerate engineering design optimization. In Proceedings of the 7th AIAA/USAF/NASA/ISSMO Symp. on Multidisciplinary Analysis and Optimization, AIAA Paper 98-4800, St. Louis, Missouri. Tretiakov, G. 2002. Stochastic quasi-gradient algorithms for maximization of the probability function. A new formula for the gradient of the probability function. In Stochastic Optimization Techniques, New York, pp. 117–142. Springer. Uryasev, S. 1995. Derivatives of probability functions and some applications. Annals of Operations Research 56:287–311. White, D.J. 1996. A heuristic approach to a weighted maxmin dispersion problem. IMA Journal of Mathematics Applied in Business and Industry 7:219–231.
Chapter 13
Cost-benefit optimization for maintained structures
Rüdiger Rackwitz & Andreas E. Joanni
Technical University of Munich, Munich, Germany
ABSTRACT: In this chapter the theoretical and practical issues for setting up effective cost-benefit optimization formulations for existing aging structures are presented. These formulations include deterioration and failure models as well as inspection and repair models. An elaborate optimization methodology based on renewal theory, which uses systematic reconstruction or repair schemes after suitable inspection and adopts a life-cycle cost perspective, is formulated and implemented for maintained concrete structures.
1 Introduction

Many civil engineering structures are exposed not only to loads but also to the technical or natural environment. They are aging because of wear, corrosion, fatigue and other phenomena. At a certain age they need to be inspected and, possibly, repaired or replaced. Many aging phenomena are rather complex and far from fully understood in their physical and chemical context. For concrete structures the most important aging phenomena in temperate climates are corrosion due to carbonation and/or chloride attack, for steel structures it is rusting and fatigue. Moreover, the concepts for cost-benefit optimization of such structures are not very well developed, although it is known that the cost for maintenance can be considerable and, in the long term, can even exceed the cost of the initial investment. It should be clear that only a rigorous life-cycle consideration can fully account for all cost involved, and that design rules and maintenance strongly interact. While the techniques for design optimization appear sufficiently developed, no clear concepts exist for optimizing maintenance. In this contribution suitable failure models for physically based deterioration phenomena are first reviewed. Their computation is essentially based on FORM/SORM (see, for example, (Rackwitz 2001)) which can be shown to be accurate enough for the purpose under discussion. Several schemes for computing first passage time distributions are discussed. Failure time models for series systems are also given. This is followed by some remarks about classical renewal theory, Bayesian updating, inspection and repair models. Then, the well-known renewal theory (Rosenblueth and Mendoza 1971; Rackwitz 2000) for cost-benefit optimization of structures is outlined. It is extended and generalized to optimal and integrated inspection and maintenance strategies. When setting up suitable maintenance strategies we follow closely the concepts developed in classical reliability theory as described, for example, in (Barlow and Proschan 1965; Barlow and Proschan 1975) which we find still very valid and which, to our knowledge,
have not been applied to structures so far (see, however, (Van Noortwijk 2001)). In particular, we study minimal, age-dependent and block repairs and maintenance by inspection and repair. The models are generalized for maintenance optimization of series systems. Some special optimization techniques are briefly reviewed. An example illustrates aspects of the theory. Clearly, the considerations are no longer valid if other than economic reasons exist to repair and/or retrofit an existing structure.
2 Preliminaries

2.1 Failure models without deterioration

As a matter of fact, there are very few exact, time-variant failure models available which are amenable to practicable computation. In some cases consideration of (stationary or non-stationary) time-variant actions and time-variant structural state function is necessary. Let G(X(t), t) be the structural state function such that G(X(t), t) ≤ 0 denotes failure states and X(t) a random process. Examples of such processes are the Gaussian and related processes and the rectangular wave renewal processes. But X(t) can also include simple random variables. Then, the failure time distribution can be computed numerically by the outcrossing approach. A well-known upper bound is

F(t) \le \int_0^t \nu(\tau)\,d\tau \le 1 \qquad (1)

with the outcrossing rate (more specifically, the downcrossing rate)

\nu(\tau) = \lim_{\Delta \to 0} \frac{1}{\Delta}\, P(\{G(X(\tau), \tau) > 0\} \cap \{G(X(\tau + \Delta), \tau + \Delta) \le 0\}) \qquad (2)

This upper bound is only tight for small probabilities. Frequently, an asymptotic result is used (Cramér and Leadbetter 1967)

F(t) \approx 1 - \exp\left[-\int_0^t \nu(\tau)\,d\tau\right] \qquad (3)

with

f(t) \approx \nu(t)\,\exp\left[-\int_0^t \nu(\tau)\,d\tau\right] \qquad (4)
Equation (3) implies a non-homogeneous Poisson process of failure events with intensity ν(t). For stationary failure processes Equation (3) reduces to a homogeneous Poisson process and simplifies somewhat. In general, computations are done by first transforming the original process and/or random variables into the so-called standard space of uncorrelated standard normal variates (Hohenbichler and Rackwitz 1981) which enables the use of FORM/SORM (see, for example, (Rackwitz 2001)) provided that the dependence structure of the two events {G(X(t), t) > 0} and {G(X(t + Δ), t + Δ) ≤ 0} can be determined in terms of correlation coefficients. Some computational details are given in (Streicher and Rackwitz 2004). However, the relevant conditions must be fulfilled, i.e. the outcrossing events must become independent
and rare asymptotically. For example, the independence property is lost if X(t) contains not only (mixing) random processes but also simple random variables. Therefore, in many cases this approach yields only crude approximations. An alternative approach will be discussed in the next subsection.

2.2 Failure models for deterioration
Obviously, the outcrossing approach can also be applied if there is deterioration. It appears as if it performs better if the outcrossing rate is increasing with time. For aging structures a closed-form failure time (first passage time) distribution is hardly available except for some special, usually oversimplifying cases. The log-normal, inverse Gaussian or Weibull distribution function with a suitable deterioration mechanism for the mean (or other parameters) has been used. They, at most, can serve as approximations. Realistic failure models must be derived from physical multi-variable deterioration models (cumulative wear, corrosion, fatigue, etc.). For (monotonically and continuously) deteriorating structures a widely used failure model is as follows. Let G(X, t) = g(U, t) be the (differentiable) structural state function of a structural component with G(X, t) = g(U, t) ≤ 0 the failure domain. X is a vector of random variables and time t is a parameter. Transition from X to U denotes the usual probability transformation from the original into the standard space of variables (Hohenbichler and Rackwitz 1981). Within FORM/SORM the probability of the time to first failure is

F(t) = P(T \le t) = P(g(U, t) \le 0) \approx \Phi(-\beta(t))\,C(t) \qquad (5)

for t ≥ 0 and the failure density is

f(t) = \frac{\partial F(t)}{\partial t} \approx -\varphi(\beta(t))\,\frac{\partial \beta(t)}{\partial t}\,C(t) + \Phi(-\beta(t))\,\frac{\partial C(t)}{\partial t} = -\varphi(\beta(t))\,\frac{-\frac{\partial}{\partial t} g(u^*, t)}{\|\nabla_u g(u^*, t)\|}\,C(t) + \Phi(-\beta(t))\,\frac{\partial C(t)}{\partial t} \qquad (6)
T is the time to first entrance into a failure state. Φ( · ) and ϕ( · ) denote the univariate standard normal distribution function and corresponding density, respectively. β(t) is the (geometrical) reliability index. C(t) is a correction factor evaluated according to SORM and/or importance sampling which can be neglected in many cases. In Equation (6) it frequently can be assumed that C(t) does not vary with t. Clearly, this model does not take account of the randomness in the deterioration process caused by a (large) number of small disturbances which, however, is small to negligible for cumulative deterioration phenomena, at least for larger t. A numerical computation scheme for first-passage time distributions under less restrictive conditions than the outcrossing approach can also be given. It is based on the following lower bound formula

F(t) = P(T \le t) \ge P\left(\bigcup_{i=1}^{n} \{G(X(t_i), t_i) \le 0\}\right) \qquad (7)
with t = t_n and t_i < t denoting a narrow, not necessarily regular time spacing of the interval [0, t]. As demonstrated by examples in (Au and Beck 2001), the lower bound

F(t) = P(T \le t) = 1 - P(G(X(\theta), \theta) > 0 \text{ for all } \theta \text{ in } [0, t])
 \ge P\left(\bigcup_{i=0}^{n} \{g(U(\theta_i), \theta_i) \le 0\}\right) \approx P\left(\bigcup_{i=0}^{n} \{\alpha(\theta_i)^T U(\theta_i) + \beta(\theta_i) \le 0\}\right)
 = 1 - P\left(\bigcap_{i=0}^{n} \{Z_i \le \beta(\theta_i)\}\right) = 1 - \Phi_{n+1}(\beta; R) \qquad (8)
to the first-passage time distribution turns out to be surprisingly accurate for all values of F(t), if the time-spacing τ = θ_i − θ_{i−1} is chosen sufficiently close and where θ_i = iτ and t = θ_n. Here again, a probability distribution transformation from the original space into the standard space is performed and the boundaries of each failure domain are linearized. The last line represents a first order approximation (Hohenbichler and Rackwitz 1983) where Φ_n(·; ·) is the n-dimensional standard normal integral with β = {β(θ_i)} the vector of reliability indices of the various components in the union and the dependence structure of the events is determined in terms of correlation coefficients R = {ρ_ij = α(θ_i)^T α(θ_j)}. Suitable computation schemes for the multinormal integral even for high dimensions and arbitrary probability levels have been proposed, for example in (Hohenbichler and Rackwitz 1983; Gollwitzer and Rackwitz 1988; Pandey 1998; Ambartzumian et al. 1998; Genz 1992). It would appear that slight improvements can be achieved if the probabilities for the individual events are determined by SORM (or any other suitable improvement) and an equivalent value of β_e(θ) is computed from β_e(θ) = −Φ^{−1}(Φ(−β(θ)) C_SORM). This computation scheme is approximate but quite general if the correlation structure of the state functions in the different points in time can be established. In (Au and Beck 2001) a Monte Carlo method is used to compute Equation (7) which can be recommended if high accuracy requirements are imposed – at the expense of in part considerable numerical effort. The special case of equi-dependent (equi-correlated) components is worth mentioning. In this case we simply have (see, for example (Dunnett and Sobel 1955))

F_e(t) = 1 - \int_{-\infty}^{\infty} \varphi(\tau) \prod_{i=1}^{n} \Phi\!\left(\frac{\beta_i - \sqrt{\rho}\,\tau}{\sqrt{1-\rho}}\right) d\tau \qquad (9)
For equi-reliable components (no variation of resistance quantities with time) this result simplifies further. The corresponding values of the density function needed when taking Laplace transforms as required later are most easily calculated by f (θi ) = (F(θi ) − F(θi−1 ))/τ or a higher order differentiation rule. For equi-reliable components Equation (9) has a decreasing risk function.
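As an illustration of how Equation (9) can be evaluated in practice, the following minimal Python sketch approximates the equi-correlated first-passage bound by one-dimensional numerical quadrature. The slowly decreasing reliability index history and the correlation value are hypothetical placeholders, not values taken from this chapter.

import numpy as np
from scipy.stats import norm

def first_passage_equicorrelated(betas, rho, n_quad=400):
    # Equation (9): F_e = 1 - int phi(tau) prod_i Phi((beta_i - sqrt(rho) tau)/sqrt(1-rho)) dtau
    tau = np.linspace(-8.0, 8.0, n_quad)
    phi = norm.pdf(tau)
    args = (betas[:, None] - np.sqrt(rho) * tau[None, :]) / np.sqrt(1.0 - rho)
    prod = np.prod(norm.cdf(args), axis=0)      # product over the time points i
    y = phi * prod
    survival = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(tau))   # trapezoid rule
    return 1.0 - survival

betas = np.linspace(3.8, 2.9, 10)               # hypothetical beta(theta_i) history
print(first_passage_equicorrelated(betas, rho=0.6))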
The results obtained so far carry over to systems without any further conceptual difficulty. Only the numerical computations become more involved. Any system can be reduced to a minimal cut set system so that its failure probability is represented as

P_f(t) = P(T \le t) = F(t) = P\!\left(\bigcup_{i=1}^{s} \bigcap_{j=1}^{m_i} \{T_{ij} \le t\}\right) \qquad (10)
Assume that the failure times of the parallel systems can be determined which, in general, can involve quite some numerical effort. The remaining series system then is computed as

P_f(t) = P(T \le t) = F(t) = P\!\left(\bigcup_{i=1}^{s} \{T_i \le t\}\right) = 1 - P\!\left(\bigcap_{i=1}^{s} \{T_i > t\}\right) \le \sum_{i=1}^{s} P(T_i \le t) \qquad (11)

where usually the failure and survival events are dependent. The upper bound in Equation (11) is less useful for larger, low reliability systems. Equation (8) can be combined with Equation (11), especially if the parallel systems can be represented sufficiently well by equivalent, linearly bounded failure domains of the components (Gollwitzer and Rackwitz 1983). Some specific results for the computation of series systems are given in (Streicher and Rackwitz 2004). The failure densities are obtained by differentiation. Note that, by definition, a series system fails if any of its components fails. In passing it is also noted that the formulation in Equation (11) also includes failure due to extreme disturbances. And it should be clear that the series system model must be applied if several hazards are present. Deterioration of structural resistance is frequently preceded by an initiation phase. In this phase failure is dominated by normal (extreme-value) failure. Structural resistance is virtually unaffected. Only in the succeeding phase resistances degrade. Examples are crack initiation and crack propagation or chloride penetration into concrete up to the reinforcement and subsequent reduction of the reinforcement cross-section by corrosion and, similarly, for initial carbonation and subsequent corrosion. In many cases the initiation phase is much longer than the actual degradation phase. Let T_i denote the random time of initiation, T_e the random time to normal (first-passage extreme-value) failure and T_d the random time from the end of the initiation phase to deterioration failure with degraded resistance. Then,

F(t) = P(T \le t) = P[(\{T_i > t\} \cap \{T_e \le t\}) \cup (\{T_i \le t\} \cap \{T_e < T_i\}) \cup (\{T_i \le t\} \cap \{T_e > T_i\} \cap \{T_i + T_d \le t\})]
 = P[\{T_i > t\} \cap \{T_e \le t\}] + P[\{T_i \le t\} \cap \{T_e < T_i\}] + P[\{T_e > T_i\} \cap \{T_i + T_d \le t\}] \qquad (12)

Note, extreme-value failure during the initiation phase and failure in the deterioration phase are mutually exclusive. Assume that T_i is independent of the other two variables.
If the variables T_e and T_d can also be assumed independent, the following formula can be used

F(t) = F_e(t)\,\bar{F}_i(t) + \int_0^t f_i(\tau)\,[F_e(\tau) + (1 - F_e(\tau))\,F_d(t - \tau)]\,d\tau \qquad (13)
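The convolution in Equation (13) is easy to evaluate numerically once the three phase models are fixed. The sketch below assumes, purely for illustration, an exponential initiation time and Weibull models for the extreme-value and deterioration failure phases; these distributions are placeholders, not models prescribed by the chapter.

import numpy as np
from scipy.stats import expon, weibull_min
from scipy.integrate import quad

# hypothetical phase models (placeholders)
Fi = lambda t: expon(scale=30.0).cdf(t)                 # initiation time Ti
fi = lambda t: expon(scale=30.0).pdf(t)
Fe = lambda t: weibull_min(c=1.5, scale=200.0).cdf(t)   # extreme-value failure Te
Fd = lambda t: weibull_min(c=2.5, scale=25.0).cdf(t)    # deterioration failure Td

def F(t):
    # Equation (13): F(t) = Fe(t)(1-Fi(t)) + int_0^t fi(tau)[Fe(tau)+(1-Fe(tau))Fd(t-tau)] dtau
    integrand = lambda tau: fi(tau) * (Fe(tau) + (1.0 - Fe(tau)) * Fd(t - tau))
    integral, _ = quad(integrand, 0.0, t)
    return Fe(t) * (1.0 - Fi(t)) + integral

print([round(F(t), 4) for t in (10.0, 30.0, 60.0)])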
2.3 The renewal model

A sufficiently general setting is to assume that the structure fails at a random time in the future. After failure or serious deterioration it is systematically renewed by reconstruction or retrofit/repair. Reconstruction, repair or retrofit reestablish all (stochastic) structural properties. The times between failure (renewal) events have identical distribution functions F(t), t ≥ 0 with probability densities f(t) and are independent. The sequence of failures and renewals then forms an ordinary renewal process. Renewal theory allows for a useful refinement which will be found to be important for the problem under discussion, namely the distribution of the time to the first event can have distribution function F_1(t) ≠ F(t), t ≥ 0 (see (Cox 1962) for details). The process of renewals is then denoted as a modified or delayed renewal process. The independence assumption between failure times needs to be verified carefully. In particular, one has to assume that loads and resistances in the system are independent for consecutive renewal periods and there is no change in the design rules after the first and all subsequent failures (renewals). Even if designs change, failure time distributions must remain the same. But the model allows for a different design rule for the initial design which can be one of the reasons for F_1(t) ≠ F(t). Throughout the chapter the point process of renewals is an orderly point process, that is multiple occurrences of renewals in a small time interval are excluded (Cox and Isham 1980). The renewal function for a modified renewal process which will be used extensively later on is (Cox 1962)

E[N(t)] = M_1(t) = \sum_{n=1}^{\infty} n\,P(N(t) = n) = \sum_{n=1}^{\infty} n\,(F_n(t) - F_{n+1}(t)) = \sum_{n=1}^{\infty} F_n(t)
 = F_1(t) + \sum_{n=1}^{\infty} \int_0^t F_n(t - u)\,dF(u) = F_1(t) + \int_0^t M_1(t - u)\,dF(u) \qquad (14)
with N(t) the random number of renewals and F_n(t) = P(N(t) ≥ n) = P(T_n ≤ t) the distribution function of the time to the n-th renewal. The renewal intensity (or, if applied to failure processes, the unconditional failure rate) is obtained upon differentiation

m_1(t) = \lim_{dt \to 0} \frac{P(\text{one renewal in } [t, t + dt])}{dt} = \frac{dM_1(t)}{dt} = \sum_{n=1}^{\infty} f_n(t) \qquad (15)

For ordinary processes the index '1' is omitted. The last expression in Equation (14) is called 'renewal equation'. As pointed out in (Cox 1962), m(t) (or m_1(t)) has a limit

m(t \to \infty) = \lim_{t \to \infty} m(t) = \frac{1}{E[T_f]} \qquad (16)
for f(t) → 0 if t → ∞. In approaching the limit m(t) can be strictly increasing, strictly decreasing or oscillate in a damped manner around 1/E[T_f]. For ordinary renewal processes m(t) then tends to be large around t = E[T_f], 2E[T_f], . . . and small around t = 0, (3/2)E[T_f], (5/2)E[T_f], . . .. For a Poisson process with parameter λ it is constant, i.e. m(t) = λ. If there are oscillations they die out more rapidly for larger dispersions of the failure time distribution. In many examples oscillations have been found when the risk function is increasing. Also, in many cases the failure rate is increasing for small t. Only for some special models, especially those with very large coefficient of variation of failure times, m(t) is decreasing. The transient behavior of m(t) will later be of interest. Unfortunately, Equation (14) has closed-form solutions for only very few special mathematical failure models (see (Streicher et al. 2006) for a list of relevant references) and otherwise can be computed directly only with extreme numerical effort. In general, Equation (14) or Equation (15) have to be determined numerically. A particularly suitable numerical method is proposed in (Ayhan et al. 1999). It makes use of the upper and lower sum in Riemann-Stieltjes integration for the discrete version of Equation (14). Because M(t) is non-decreasing, we have the following bounds for M(t) = F(t) + \int_0^t M(t - s)\,dF(s)
M_{LB}(k\tau) = F(k\tau) + \sum_{i=1}^{k} M_{LB}((k - i)\tau)\,\Delta F(i\tau) \le M(k\tau) \le F(k\tau) + \sum_{i=1}^{k} M_{UB}((k - i + 1)\tau)\,\Delta F(i\tau) = M_{UB}(k\tau) \qquad (17)

for equal partitions of length τ in [0, t] with ΔF(iτ) = F(iτ) − F((i − 1)τ) and nτ = t. The resulting system of linear equations is solved easily. If the first failure time distribution is different from the others one obtains by one additional convolution
M_1(t) = F_1(t) + \int_0^t F_1(t - s)\,dM(s) \qquad (18)

which, in turn, is bounded by

M_{1,LB}(k\tau) = F_1(t) + \sum_{i=1}^{k} \inf_{(i-1)\tau \le x \le i\tau} F_1(t - x)\,(M_{LB}(i\tau) - M_{LB}((i-1)\tau)) \le M_1(k\tau)
 \le F_1(t) + \sum_{i=1}^{k} \sup_{(i-1)\tau \le x \le i\tau} F_1(t - x)\,(M_{UB}(i\tau) - M_{UB}((i-1)\tau)) = M_{1,UB}(k\tau) \qquad (19)
m1 (t) is obtained by numerical differentiation. The computation methods in Equations (17) and (19) are useful whenever interest lies in the (unconditional) failure rate or risk acceptance questions. Other approximation methods have also been proposed.
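A minimal sketch of the bounding scheme of Equation (17) is given below; it solves the discretized renewal equation recursively for the lower and upper sums. The Weibull failure time model is a hypothetical placeholder.

import numpy as np
from scipy.stats import weibull_min

def renewal_bounds(F, t_max, n):
    # Equation (17):
    #   M_LB(k*tau) = F(k*tau) + sum_i M_LB((k-i)*tau) * dF(i*tau)
    #   M_UB(k*tau) = F(k*tau) + sum_i M_UB((k-i+1)*tau) * dF(i*tau)   (solved for M_UB(k*tau))
    tau = t_max / n
    Fk = F(tau * np.arange(n + 1))
    dF = np.diff(Fk)                       # dF[i-1] = F(i*tau) - F((i-1)*tau)
    M_lb = np.zeros(n + 1)
    M_ub = np.zeros(n + 1)
    for k in range(1, n + 1):
        M_lb[k] = Fk[k] + sum(M_lb[k - i] * dF[i - 1] for i in range(1, k + 1))
        rhs = Fk[k] + sum(M_ub[k - i + 1] * dF[i - 1] for i in range(2, k + 1))
        M_ub[k] = rhs / (1.0 - dF[0])      # the i = 1 term contains M_UB(k*tau) itself
    return M_lb, M_ub

F = weibull_min(c=2.0, scale=50.0).cdf     # hypothetical aging failure time model
M_lb, M_ub = renewal_bounds(F, t_max=100.0, n=400)
print(M_lb[-1], M_ub[-1])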
For aging components with increasing risk function the following bounds on the renewal function are given in (Barlow and Proschan 1965, p. 54)

\frac{t}{E[T_f]} - 1 \le M(t) \le \frac{t\,F(t)}{\int_0^t (1 - F(\tau))\,d\tau} \le \frac{t}{\int_0^t (1 - F(\tau))\,d\tau} \qquad (20)

The sharper upper bound in Equation (20) turns out to be remarkably close to the exact result for small t. Under suitable conditions one also has

m(t) = \frac{d}{dt} M(t) \le \frac{d}{dt}\,\frac{t\,F(t)}{\int_0^t (1 - F(\tau))\,d\tau} \qquad (21)
Again, the upper bound for Equation (21) is found to be very close to the exact result up to approximately E[T]. It approaches the limit 1/E[T] for large t. The lower bound obtainable from Equation (20) by differentiation is generally less useful. Equation (21) can be used with advantage in Sections 4.4 and 4.5.

2.4 Updating the probabilistic model
There are many types of updating of a probabilistic model depending on the type of information collected during the experimental and numerical investigations. In general, one can distinguish between variable updating and event updating. In a Bayesian context information is collected about a variable by taking (independent) samples and testing them. This leads to an improved estimate of the parameters of the distribution of a variable. Let xn be values of a sample of size n and θ a parameter (vector), then an improved posterior distribution is
f(\theta \mid x_n) = \frac{L(x_n \mid \theta)\,f(\theta)}{\int_{\theta} L(x_n \mid \theta)\,f(\theta)\,d\theta} \qquad (22)

where L(x_n | θ) is the likelihood function and f(θ) the prior density. The Bayesian or predictive density function is

f(x \mid x_n) = \int_{\theta} f(x \mid \theta)\,f(\theta \mid x_n)\,d\theta \qquad (23)
For many important distributions analytical results are available (Aitchison and Dunsmore 1975). Updating by events is generally more difficult. We show this for the model from Equation (5) and previous informative events B = \bigcap_i B_i. For example, such events could be the knowledge about the maximum load in the past, some measured damage indicator or just the knowledge that the structure has survived up to the present time. Then, we have two types of observations, namely equalities and inequalities which require different treatment. For B = \bigcap_i \{b_i(X, t_0) \le 0\} it is

F(t \mid B) = \frac{P(\{g(X, t) \le 0\} \cap \bigcap_i \{b_i(X, t_0) \le 0\})}{P(\bigcap_i \{b_i(X, t_0) \le 0\})} \qquad (24)
It is assumed that the observation events B can always be written in the form given. In most cases the observation and decision point is t_0 = 0. Within FORM one can write for one observation event

F(t \mid B) = \frac{\Phi_2(-\beta_g(t), -\beta_b(t_0); \rho)}{\Phi(-\beta_b(t_0))} \qquad (25)

where Φ_2(x, y; ρ) is the two-dimensional normal integral and ρ = α_g^T α_b with α_g, α_b the two normalized gradients of the limit state functions. This scheme applies analogously if more than one event has to be considered. For B = {b(X, t_0) = 0} we have

F(t \mid B) = \frac{\frac{\partial}{\partial \beta_b} \int_{-\infty}^{\beta_b} P(Z_g \le -\beta_g(t) \mid Z_b(t_0) = z)\,\varphi(z)\,dz}{\varphi(\beta_b(t_0))} = \Phi\!\left(\frac{-\beta_g(t) + \rho(t, t_0)\,\beta_b(t_0)}{\sqrt{1 - \rho(t, t_0)^2}}\right) \qquad (26)
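For a single inequality observation event, Equation (25) only requires the univariate and bivariate normal integrals, which standard libraries provide. The reliability indices and the correlation in the following sketch are hypothetical illustration values.

from scipy.stats import norm, multivariate_normal

def updated_failure_probability(beta_g_t, beta_b_t0, rho):
    # Equation (25): F(t|B) = Phi_2(-beta_g(t), -beta_b(t0); rho) / Phi(-beta_b(t0))
    num = multivariate_normal(mean=[0.0, 0.0],
                              cov=[[1.0, rho], [rho, 1.0]]).cdf([-beta_g_t, -beta_b_t0])
    return num / norm.cdf(-beta_b_t0)

# hypothetical values: prior reliability index 3.0, observation index 1.5, correlation 0.7
print(updated_failure_probability(3.0, 1.5, 0.7))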
3 Cost-benefit optimization

3.1 General
It is generally accepted that the ultimate target to be achieved in structural design including proper maintenance is to maximize the net benefit derived from the structure over its lifetime, subject to constraints related to safety and serviceability. For technical facilities the following objective has been proposed by (Rosenblueth and Mendoza 1971) based on earlier proposals in economics for cost benefit analysis: Z(p) = B(p) − C(p) − D(p)
(27)
A facility is financially optimal if Equation (27) is maximized. It is assumed that all quantities in Equation (27) can be measured in monetary units. p is the vector of all safety relevant parameters. B(p) is the (expected) benefit derived from the existence of the facility, C(p) is the cost of design and construction and D(p) is the (expected) cost in case of failure. Later we will also include all expenses for maintenance in D(p). Statistical decision theory dictates that expected values are to be taken. In the following it is assumed that C(p) and D(p) are differentiable in each component of p. The facility has to be optimized during design and construction at the decision point which is taken as t = 0. Now it is a well-established principle of cost-benefit analysis that future costs and benefits must be discounted, using a compound interest formula. A continuous discounting function is assumed for analytical convenience which is accurate enough for all practical purposes. δ(t) = exp [−γt]
(28)
γ is a time-independent, time-averaged interest rate. In most cost-benefit analyses a tax and inflation-free discount rate should be taken. If a discrete discount rate γ' is given, one converts with γ = ln(1 + γ'). The principles of choosing appropriate discount rates are thoroughly discussed in (Rackwitz et al. 2005).
Cost and benefits may differ for the different parties involved having different economic objectives, e.g. the owner, the builder, the user and society. Also, the discount rate may vary among the different parties in their cost-benefit analysis. A facility makes sense only if Z(p) is positive within certain parameter ranges for all parties involved.

3.2 Derivations

A complete cost-benefit analysis must include not only the direct and indirect cost for possible failure and for maintenance of the structure to be built, but also the cost for all future realizations if the concepts of sustainability are applied (Rackwitz et al. 2005). But this is just the situation for the application of renewal theory. It is assumed that structures will be systematically reconstructed after failure and/or maintained. This rebuilding strategy is in agreement with the principles of life cycle engineering and also fulfills the demand for sustainability (Rackwitz et al. 2005). Clearly, it rests on the assumption that future preferences are the same as the present preferences. For regular renewal processes some objective functions based on the renewal model are already derived in (Rosenblueth and Mendoza 1971; Rackwitz 2000; Streicher and Rackwitz 2004) and elsewhere. For existing structures the time to first failure is generally different from the other failure times due to additional experimental and numerical investigations and subsequent updating of the structural state and/or due to repair or retrofit of the existing structure. But there can also be other reasons for assuming f_1(t, p) ≠ f(t, p). Therefore, we derive our model for cost-benefit optimization in full generality. The objective function is given by Equation (27). The expected damage cost D(p) are derived as follows. The discrete cost associated with failure including the reconstruction or repair cost are denoted as C_{V,1} at the first renewal and C_V = C_{V,n} at subsequent renewals. Let θ_i = t_i − t_{i−1} be the times between renewals with density f(t, p) whereas θ_1 = t_1 has density f_1(t, p). The time to the n-th renewal is T_n = \sum_{i=1}^{n} θ_i. Systematic reconstruction is assumed. The discounted expected damage cost are then

D(p) = E\left[\sum_{n=1}^{\infty} C_{V,n} \exp\left(-\gamma \sum_{k=1}^{n} \theta_k\right)\right]
 = E[C_{V,1} \exp(-\gamma\theta_1)] + E\left[\sum_{n=2}^{\infty} C_{V,n} \prod_{k=1}^{n} \exp(-\gamma\theta_k)\right]
 = E[C_{V,1} \exp(-\gamma\theta_1)] + E\left[\sum_{n=2}^{\infty} C_{V,n} \exp(-\gamma\theta_1) \prod_{k=2}^{n-1} \exp(-\gamma\theta_k)\,\exp(-\gamma\theta_n)\right]
 = E[C_{V,1} \exp(-\gamma\theta_1)] + \sum_{n=2}^{\infty} E[\exp(-\gamma\theta_1)]\,E[\exp(-\gamma\theta)]^{n-2}\,E[C_{V,n} \exp(-\gamma\theta_n)]
 = E[C_{V,1} \exp(-\gamma\theta_1)] + E[\exp(-\gamma\theta_1)]\,\frac{E[C_V \exp(-\gamma\theta)]}{1 - E[\exp(-\gamma\theta)]}
 = \frac{C_{V,1}\,E[\exp(-\gamma\theta_1)]}{1 - E[\exp(-\gamma\theta)]} + \frac{-C_{V,1}\,E[\exp(-\gamma\theta_1)]\,E[\exp(-\gamma\theta)] + E[\exp(-\gamma\theta_1)]\,C_V\,E[\exp(-\gamma\theta)]}{1 - E[\exp(-\gamma\theta)]} \qquad (29)
where we have made use of the relation s = \sum_{n=k}^{\infty} a\,q^{n-k} = \frac{a}{1-q} for k < ∞. E[\exp(-\gamma\theta_1)] = \int_0^{\infty} \exp(-\gamma t)\,f_1(t, p)\,dt = f_1^*(\gamma, p) and E[\exp(-\gamma\theta)] = \int_0^{\infty} \exp(-\gamma t)\,f(t, p)\,dt = f^*(\gamma, p) are also denoted as the Laplace transforms of f_1(t, p) and f(t, p). If f(t, p) is a probability density it is f^*(0, p) = 1 and 0 < f^*(\gamma, p) ≤ 1 for all γ ≥ 0. Equation (27) can be rewritten in case of systematic reconstruction after failure with C_{V,1} = (C_1(p) + L) as well as C_V = (C(p) + L) as

Z(p) = B(p) - C(p) - \frac{(C_1(p) + L)\,f_1^*(\gamma, p)}{1 - f^*(\gamma, p)} + \frac{(C_1(p) + L)\,f^*(\gamma, p)\,f_1^*(\gamma, p) - (C(p) + L)\,f_1^*(\gamma, p)\,f^*(\gamma, p)}{1 - f^*(\gamma, p)} \qquad (30)

for the modified renewal process. L is the monetary loss in case of failure including direct failure cost, loss of business and, possibly, the cost to reduce the risk to human life and health (or, better, the compensation cost). If C_1(p) = C(p) the two terms in the numerator of the fourth term cancel. This is usually the case for existing and systematically renewed structures and, therefore

Z(p) = B(p) - C_{ini}(p) - (C(p) + L)\,\frac{f_1^*(\gamma, p)}{1 - f^*(\gamma, p)} \qquad (31)
It has to be mentioned that the design parameters p can be different after the first renewal compared to the initial design. Also, the cost for the initial design C_{ini}(p) can be different from the reconstruction cost C(p). The term

m_1^*(\gamma, p) = \frac{f_1^*(\gamma, p)}{1 - f^*(\gamma, p)} \qquad (32)

is also denoted as the Laplace transform of the renewal intensity. If f_1(t, p) = f(t, p), f_1^*(\gamma, p) in Equation (31) must be replaced by f^*(\gamma, p). The benefit B(p) is also discounted down to the decision point. For a benefit rate b(t) unaffected by possible renewals and negligibly short times of reconstruction (retrofitting) one finds

B = \int_0^{\infty} b(t)\,\exp(-\gamma t)\,dt \qquad (33)
Clearly, the integral must converge imposing some restriction on the form of b(t). If the benefit rate b = b(t) is constant one can integrate to obtain
B = \int_0^{\infty} b\,\exp(-\gamma t)\,dt = \frac{b}{\gamma} \qquad (34)
The upper integration limit is extended to infinity because the full sequence of life cycle benefits is considered. A model which represents realistically the observation that with increasing age of a component its suitability for use diminishes according to b(t) has been established in (Hasofer and Rackwitz 2000). Decreasing benefit was associated with obsolescence in
this reference. But b(t) can have any form. At each renewal the benefit rate starts again at b(0) for systematic reconstruction. The total benefit is already given in (Streicher 2004) and is repeated here in full generality.

B(p) = E\left[\sum_{i=1}^{\infty} \exp\left(-\gamma \sum_{k=1}^{i-1} \theta_k\right) \int_0^{\theta_i} \exp(-\gamma\tau)\,b(\tau)\,d\tau\right]
 = E\left[\int_0^{\theta_1} \exp(-\gamma\tau)\,b(\tau)\,d\tau\right] + E\left[\exp(-\gamma\theta_1) \sum_{i=2}^{\infty} \prod_{k=2}^{i-1} \exp(-\gamma\theta_k) \int_0^{\theta_i} \exp(-\gamma\tau)\,b(\tau)\,d\tau\right]
 = E\left[\int_0^{\theta_1} \exp(-\gamma\tau)\,b(\tau)\,d\tau\right] + E[\exp(-\gamma\theta_1)] \sum_{i=2}^{\infty} E[\exp(-\gamma\theta)]^{i-2}\,E\left[\int_0^{\theta} \exp(-\gamma\tau)\,b(\tau)\,d\tau\right]
 = E\left[\int_0^{\theta_1} \exp(-\gamma\tau)\,b(\tau)\,d\tau\right] + \frac{E[\exp(-\gamma\theta_1)]\,E\left[\int_0^{\theta} \exp(-\gamma\tau)\,b(\tau)\,d\tau\right]}{1 - E[\exp(-\gamma\theta)]} \qquad (35)
Equation (35) can be simplified for the case of systematic reconstruction after failure to

B(p) = \int_0^{\infty} \left(\int_0^{t} \exp(-\gamma\tau)\,b(\tau)\,d\tau\right) f_1(t, p)\,dt + \frac{f_1^*(\gamma, p)}{1 - f^*(\gamma, p)} \int_0^{\infty} \left(\int_0^{t} \exp(-\gamma\tau)\,b(\tau)\,d\tau\right) f(t, p)\,dt
 = \int_0^{\infty} B_D(t)\,f_1(t, p)\,dt + \frac{f_1^*(\gamma, p)}{1 - f^*(\gamma, p)} \int_0^{\infty} B_D(t)\,f(t, p)\,dt \qquad (36)

with

B_D(t) = \int_0^{t} \exp(-\gamma\tau)\,b(\tau)\,d\tau \qquad (37)

For f_1(t, p) = f(t, p) Equation (36) simplifies to:

B(p) = \frac{1}{1 - f^*(\gamma, p)} \int_0^{\infty} B_D(t)\,f(t, p)\,dt \qquad (38)

For completeness, the objective function is also given for the case where the component is given up after failure or a finite service time t_s

Z(p) = \int_0^{t_s} B_D(t)\,f_1(t, p)\,dt - C(p) - L \int_0^{t_s} \exp(-\gamma t)\,f_1(t, p)\,dt \qquad (39)
Because the failure densities, in general, are known only numerically and pointwise, the corresponding Laplace transforms have to be taken numerically. Suitable techniques are presented in (Streicher and Rackwitz 2004) and Section 5. The formulae are easily extended for systems with several components and/or multiple failure modes in series as demonstrated by Equation (11) (see also (Streicher and Rackwitz 2004)). In particular, one component of the system can model replacement due to obsolescence. Non-constant discounting is discussed in (Rackwitz et al. 2005). Optimization of Equation (31) with respect to the design parameter p can be performed by one of the available algorithms (see Section 5). Application to existing, aging but maintained structures requires a few more remarks. It is assumed that the structure is already in use for some time. At a special point in time it will be decided to inspect and possibly repair or retrofit the structure. The cost which occur at this decision point are C_R(p). Clearly, all cost incurred before that point are irrelevant if the decision is to keep the structure rather than demolishing and rebuilding it. The value of C_R(p) can be zero if the structure is left as is but the probabilistic model for the time to first failure f_1(t, p) possibly is updated. Then, renewal of the structure is a question as to when the possibly updated failure rate is no longer acceptable. The modified density f_1(t, p) of the time to first failure has to be determined depending on the repair/retrofitting actions and the information collected about the actual state of the structure. C_R(p) generally differs from C(p), the reconstruction cost after failure, or even exceeds it if retrofitting is more expensive than reconstruction. A maintenance plan for the existing structure has to be designed. After the first renewal due to future failure the regular failure time density f(t, p) is valid.
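As a numerical illustration of this step, the following Python sketch obtains the Laplace transform of a pointwise failure density by simple quadrature and evaluates the objective of Equation (31) with the constant-rate benefit of Equation (34). The lognormal failure density, the cost figures and the discount rate are hypothetical placeholders.

import numpy as np
from scipy.stats import lognorm

def laplace_transform(f, gamma, t_max=500.0, n=5000):
    # f*(gamma) = int_0^inf exp(-gamma t) f(t) dt, truncated at t_max
    t = np.linspace(0.0, t_max, n)
    y = np.exp(-gamma * t) * f(t)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))

f = lognorm(s=0.4, scale=80.0).pdf        # hypothetical failure time density (f_1 = f assumed)
gamma = 0.03                              # continuous discount rate (assumed)
b, C, L = 1.0e5, 1.0e6, 5.0e6             # benefit rate, (re)construction cost, failure loss

f_star = laplace_transform(f, gamma)
Z = b / gamma - C - (C + L) * f_star / (1.0 - f_star)   # Equations (31) and (34)
print(f_star, Z)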
3.3 Application to stationary Poissonian disturbances

Unfortunately, analytic Laplace transforms are available only for a few analytic failure models, for instance the exponential, uniform, gamma, normal and inverse normal distribution. The important exponential distribution with parameter λ corresponding to a Poisson process has f_1^*(\gamma) = f^*(\gamma) = \lambda/(\gamma + \lambda) and, therefore, m^*(\gamma) = \lambda/\gamma. A very useful generalization is when a modified renewal process models disturbance (loading) events (Hasofer 1974; Rosenblueth 1976). Such disturbances generally are extreme events like shocks, explosions, earthquakes, storms or floods. The distribution functions between events are G_1(t) and G(t), respectively. If such an event occurs the failure probability is P_f(p). By definition, the occurrence of disturbance events and the failure events are independent. The density function of the time to the first failure event then is

f_1(t, p) = \sum_{n=1}^{\infty} g_n(t)\,P_f(p)\,R_f(p)^{n-1} \qquad (40)

i.e. the first failure event can occur after the first, second, third, . . . disturbance event and where R_f(p) = 1 − P_f(p). The density of the n-th event can be obtained by recursive convolution so that in terms of Laplace transforms

g_n^*(\gamma) = g_{n-1}^*(\gamma)\,g^*(\gamma) = g_1^*(\gamma)\,[g^*(\gamma)]^{n-1} \qquad (41)
Application to the renewal intensity yields

m_1^*(\gamma, p) = \sum_{n=1}^{\infty} g_1^*(\gamma)\,g_{n-1}^*(\gamma)\,P_f(p)\,R_f(p)^{n-1} = \sum_{n=1}^{\infty} g_1^*(\gamma)\,[g^*(\gamma)]^{n-1}\,P_f(p)\,R_f(p)^{n-1} = \frac{P_f(p)\,g_1^*(\gamma)}{1 - R_f(p)\,g^*(\gamma)} \qquad (42)
For the regular renewal process m_1^*(\gamma, p) has to be replaced by m^*(\gamma, p). Let reconstruction and damage cost be C(p) and L, respectively. Also, as a special case, let the times between disturbances be exponential, resulting in exponential failure time distributions with (failure) rate \lambda P_f(p). Therefore, E(e^{-\gamma t}) = \int_0^{\infty} e^{-\gamma t}\,(\lambda P_f(p))\,e^{-\lambda P_f(p)\,t}\,dt = \lambda P_f(p)/(\gamma + \lambda P_f(p)). Then, if only failures due to such disturbances are considered it is (Rackwitz 2000)

Z(p) = B - C_{ini}(p) - (C(p) + L)\,\frac{\lambda P_f(p)}{\gamma} \qquad (43)

For a series system it is P_f(p) = P(\bigcup_{k=1}^{s} F_k(p)) = 1 - P(\bigcap_{j=1}^{s} \bar{F}_j(p)) in Equation (11) where F_k(p) is the failure event in the k-th mode and \bar{F}_k(p) its complement. Then, the following generalization is possible

Z(p) = B - C_{ini}(p) - (C(p) + L)\,\frac{\lambda}{\gamma}\left[1 - P\!\left(\bigcap_{j=1}^{s} \bar{F}_j(p)\right)\right] \qquad (44)

The benefit B and the initial cost C_{ini}(p) as well as the damage cost are related to the whole system. If there are n different, independent hazards each with rate \lambda_i one derives

Z(p) = B - C_{ini}(p) - \sum_{i=1}^{n} (C_i(p) + L_i)\,\frac{\lambda_i}{\gamma}\left[1 - P\!\left(\bigcap_{j=1}^{s} \bar{F}_{ij}(p)\right)\right] \qquad (45)
These generalizations also apply analogously for the more complicated cases discussed below.
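The simple structure of Equation (43) makes the design optimization explicit once P_f(p) and the cost functions are modeled. The sketch below assumes, for illustration only, that the design parameter p acts directly as a reliability index with P_f(p) = Φ(−p), and that construction cost grows linearly in p; neither assumption comes from the chapter.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

b, gamma, lam = 2.0e5, 0.03, 0.1       # benefit rate, discount rate, disturbance rate (assumed)
C0, c1, L = 1.0e6, 5.0e4, 1.0e7        # hypothetical cost model

def Z(p):
    # Equation (43) with C_ini(p) = C(p) = C0 + c1*p and P_f(p) = Phi(-p)
    Pf = norm.cdf(-p)
    C = C0 + c1 * p
    return b / gamma - C - (C + L) * lam * Pf / gamma

res = minimize_scalar(lambda p: -Z(p), bounds=(1.0, 6.0), method='bounded')
print(res.x, Z(res.x))                 # optimal "reliability index" and net benefit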
4 Preventive maintenance

4.1 Maintenance strategies

Repair after failure is but the simplest maintenance strategy. For aging components, i.e. components with increasing risk function (conditional failure rate) r(t) = f(t)/(1 − F(t)), i.e. r'(t) > 0, the risk of failure with potentially large consequences increases with age and alternative maintenance strategies have been proposed in order to reduce expected failure consequences. The most important alternative is called preventive maintenance at random or fixed times. Preventive maintenance actions can be replacements or
(perfect) repairs. Preventive repairs occur only if corrective renewals have not occurred before due to failure or obsolescence. Note that preventive maintenance is usually suboptimal for non-aging components, i.e. with constant or decreasing risk function. A first strategy repairs a system (component) at age a or after failure, whichever comes first. In (Barlow and Proschan 1965) this strategy is denoted by age replacement. It requires knowledge of the age a of a component. (Barlow and Proschan 1965) also investigate so-called block repairs. In this maintenance strategy the components in a system are repaired either after failure or all at once at a given time d irrespective of their actual age. It is clear for increasing risk functions and, in fact, is shown in (Barlow and Proschan 1965) that the total number of repairs is smaller for age repairs than for block repairs. However, the number of failures (with large consequences) is larger in the first strategy and so, possibly, the total cost. Block repairs also may be organizationally easier. Sometimes they are necessary, i.e. whenever a single repair of a component prevents the whole system from functioning. While knowledge about the actual deterioration state of a component is irrelevant for the block repair strategy, this may be vital for the age repair strategy. An improvement is when repairs are only performed if inspections indicate that they are necessary. Otherwise further inspections and possible repairs are postponed to a later time. A strategy where repairs are preceded by inspections is also denoted as condition-based strategy. In practice, mixtures of these maintenance strategies will also be found.

4.2 Inspections
Inspections should determine the actual state of a component in order to decide on repair or leave it as is. But inspections can rarely be perfect. A decision about repair can only be reached with certain probability depending on the inspection method used. The repair probability depends on the magnitude of one or more suitable damage indicators (chloride penetration depth, crack length, abrasion depth, etc.) measured during inspection. For cumulative damage phenomena the damage indicators increase with time and so does the repair probability PR (t). The parameter t is the time elapsed since the beginning of the deterioration process. For example, the repair probability may be presented as PR (t) = P(S(t, X) > sc ) = P(sc − S(t, X) ≤ 0)
(46)
with S(t, X) a suitable, monotonically increasing damage indicator, X a random vector taking into account of all uncertainties during inspection and sc a given threshold level. If this is exceeded a decision for repair is taken. The vector X usually also includes a random variable modeling the measurement error. Frequently, the damage indicator function S(t, X) reflects the damage progression and has a similar form as the failure function. It involves, at least in part, the same random variables. In this case failure and no repair/repair events become dependent events. It is, of course, possible to consider multidimensional damage indicators and derive repair decisions from an arbitrary combination thereof. A discussion of the details of the efficiency of various inspection methods and the corresponding repair probabilities is beyond the scope of this chapter. They depend on the particular deterioration phenomenon under consideration.
4.3 Repair model

After failure of a system or component it is repaired unless it is given up after failure or it is repaired systematically in the age-dependent maintenance strategy or it is repaired after an indicative inspection in the condition-based maintenance strategy. The name repair is used synonymously for renewal, replacement or reconstruction. Repairs, if undertaken, restore the properties of a component to its original (stochastic) state, i.e. repairs are equivalent to renewals (AGAN = As Good As New) so that the life time distribution of the repaired component is again F(t). The repair times can either be assumed negligibly short or have finite length. The model is a somewhat idealized model. It rests on a number of assumptions the most important of which is probably that repairs fully restore the (stochastic) properties of the component. Imperfect repairs cannot be handled because the renewal argument repeatedly used in the following breaks down. In the literature several models for imperfect repairs are discussed which only partially reflect the situations met in the structures area. An important case is when minimal repairs not essentially changing the initial lifetime are done right after an inspection. If one generalizes this model to a model where a renewal (perfect repair) occurs with probability π but minimal repair with probability 1 − π, one has essentially the model proposed in (Brown and Proschan 1983). This model, in fact, resembles the one studied herein with π = P_R(t). Negligibly short times of inspection and repair are most often only a more or less good approximation. Consideration of finite, random renewal times in the age-repair strategy appears possible but is complicated because inspections and probably also failures cannot occur during repairs. No benefit can be earned during repair times. Another important case is when repairs are delayed, for example due to budget restrictions. It appears possible to handle this case by adding a random delay time to the random repair time. During a delay time the component can still degrade or fail while this is unlikely to happen during repair. Finite renewal times are not considered in this chapter. Some more but still first results are given in (Joanni and Rackwitz 2006). It turns out that for realistic repair times their influence is very small. Inspection/repair at strictly regular time intervals as assumed below is also not very realistic. However, as will be shown in the examples, the objective function is rather flat in the vicinity of the optimal value so that small variations will not noticeably change the results. Repair operations necessarily lead to discontinuities (drops) in the risk function, and similarly in the renewal intensity. They can substantially reduce the number of failures and, thus, corrective renewals. In an effective maintenance scheme the majority of renewals will, in fact, be preventive renewals.

4.4 Age-dependent repairs

It is convenient to start with the general case of replacements (repairs, renewals) at random times T_r with distribution F_r(t) or after failure at random times T_f with distribution F_f(t, p). The renewal time then is the minimum of these times with distribution function

F(t, p) = 1 - (1 - F_f(t, p))(1 - F_r(t)) = 1 - \bar{F}_f(t, p)\,\bar{F}_r(t) \qquad (47)
for independent times T_f and T_r with density

f(t, p) = f_f(t, p)\,\bar{F}_r(t) + f_r(t)\,\bar{F}_f(t, p) \qquad (48)

and where the notation \bar{F}(x) = 1 - F(x) is used. Application of Equation (29) then gives for the damage term of an ordinary renewal process

D(p) = \frac{(C(p) + L)\,f^*_{\bar{F}_r}(\gamma, p) + R(p)\,f^*_{\bar{F}_f}(\gamma, p)}{1 - (f^*_{\bar{F}_r}(\gamma, p) + f^*_{\bar{F}_f}(\gamma, p))} \qquad (49)

and, similarly, for the benefit term with the model in Equation (35)

B(p) = \frac{\int_0^{\infty} B_D(t)\,f_f(t, p)\,\bar{F}_r(t)\,dt + \int_0^{\infty} B_D(t)\,f_r(t)\,\bar{F}_f(t, p)\,dt}{1 - (f^*_{\bar{F}_r}(\gamma, p) + f^*_{\bar{F}_f}(\gamma, p))} \qquad (50)

where f^*_{\bar{F}_r}(\gamma, p) = \int_0^{\infty} \exp(-\gamma t)\,f_f(t, p)\,\bar{F}_r(t)\,dt and f^*_{\bar{F}_f}(\gamma, p) = \int_0^{\infty} \exp(-\gamma t)\,f_r(t)\,\bar{F}_f(t, p)\,dt
are the modified complete Laplace transforms of f_f(t, p)\bar{F}_r(t) and f_r(t)\bar{F}_f(t, p), respectively. R(p) is the cost of repair and B_D(t) is as in Equation (37). The case of random maintenance actions has hardly any practical application except if there is continuous monitoring of the system state. Then, the time until intervention by repair is random and can be defined as the first passage time of a given threshold by the continuous observation process. Alternatively, assume maintenance actions at (almost) fixed intervals a, 2a, 3a, . . . so that f_r(t) = \delta_e(t - a) and F_r(t) = H_e(t - a) (\delta_e(x) = Dirac's delta function, H_e(x) = Heaviside's unit step function). Equation (49) then specializes to

D_M(p, a) = \frac{(C(p) + L)\,f^{**}(\gamma, p, a) + R(p)\,\exp(-\gamma a)\,\bar{F}(p, a)}{1 - (f^{**}(\gamma, p, a) + \exp(-\gamma a)\,\bar{F}(p, a))} \qquad (51)

and similarly Equation (50) to

B_M(p, a) = \frac{\int_0^{a} B_D(t)\,f(t, p)\,dt + B_D(a)\,\bar{F}(p, a)}{1 - (f^{**}(\gamma, p, a) + \exp(-\gamma a)\,\bar{F}(p, a))} \qquad (52)

with f^{**}(\gamma, p, a) = \int_0^{a} \exp(-\gamma t)\,f(t, p)\,dt the incomplete Laplace transform of f(t, p) and \bar{F}(p, a) the probability of survival up to a. The quantity B_D(t) is given in Equation (37). Note that the Laplace transform of a deterministic repair time f_r(t) = \delta_e(t - a) is f^*(\gamma) = \exp(-\gamma a). The repair cost R(p) should be substantially smaller than C(p) + L so that it is worth making preventive repairs and, thus, avoiding the large failure and reconstruction cost in case of failure. Equation (51) goes back to some early work in (Cox 1962; Barlow and Proschan 1965; Fox 1966). In (Van Noortwijk 2001) parallel results are developed for discrete failure models and discrete discounting schemes. Next, assume that the structure is already in use. At a special decision point in time inspection, retrofit or repair at cost C_R(p) takes place. Depending on the state of the structure and the action which is done, an updated failure time density f_1(t, p) for
the time to the first renewal is calculated. Therefore, a new cost-benefit optimization is necessary in order to find optimal replacement intervals and design variables. The first replacement interval a_1 with f_1(t, p) is different from the subsequent intervals a with ordinary failure time density f(t, p). It will further be assumed that for the first renewal the optimized parameter p, which is also valid for all subsequent renewals, is calculated without having regard to the special parameters realized in the existing structure. If the structure undergoes a complete renewal at the decision point it is even possible to introduce the design variables p already in that structure. Then, the existing design variables p have to be augmented by the additional variables. The expected damage cost are then determined according to Equation (49) as

D_{Ma_1-a}(p, a_1, a) = (C(p) + L)\,f_1^{**}(\gamma, p, a_1) + R(p)\,\exp(-\gamma a_1)\,\bar{F}_1(p, a_1)
 + \frac{f_1^{**}(\gamma, p, a_1) + \exp(-\gamma a_1)\,\bar{F}_1(p, a_1)}{1 - (f^{**}(\gamma, p, a) + \exp(-\gamma a)\,\bar{F}(p, a))} \times \left((C(p) + L)\,f^{**}(\gamma, p, a) + R(p)\,\exp(-\gamma a)\,\bar{F}(p, a)\right) \qquad (53)

For constant benefit rate b(t) = b the benefit is as in Equation (34). The expected benefit for a non-constant rate b(t) as in Equation (35) is (Streicher 2004)

B_{Ma_1-a}(p, a_1, a) = \int_0^{a_1} B_D(t)\,f_1(t, p)\,dt + B_D(a_1)\,\bar{F}_1(p, a_1)
 + \frac{f_1^{**}(\gamma, p, a_1) + \exp(-\gamma a_1)\,\bar{F}_1(p, a_1)}{1 - (f^{**}(\gamma, p, a) + \exp(-\gamma a)\,\bar{F}(p, a))} \times \left(\int_0^{a} B_D(t)\,f(t, p)\,dt + B_D(a)\,\bar{F}(p, a)\right) \qquad (54)
with BD (t) from Equation (37). The cost for continuous monitoring and/or maintenance could alternatively also be taken into account in the benefit term by replacing b(t) with b(t) − c(t). The objective function then is ZMa1−a (p, a1 , a) = BMa1−a (p, a1 , a) − CR (p) − DMa1−a (p, a1 , a)
(55)
Repair is interpreted as preventive renewal (replacement of an aging component after a finite time of use a). Renewal after failure is called corrective renewal. Equation (55) can be subject to optimization not only with respect to the design parameter p but also with respect to the inspection/repair intervals a1 and a, respectively. Optimal inspection/ repair intervals do not always exist, as pointed out already in (Fox 1966). They exist for failure models with increasing risk function (Fox 1966). If they do not exist, then it is preferable to wait with renewal until failure unless the failure rate exceeds a given value. When optimizing Equation (55) it is, of course, important that the owner, builder or other party does not only enjoy the benefits but also carries the cost of construction, the cost of failures and the cost for preventive maintenance. Only then, a joint optimization of design and maintenance makes sense. If one is only interested in optimal maintenance it is still possible to optimize the cost for preventive and corrective repairs with respect to the repair intervals keeping the design parameter p constant.
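To make the age-dependent strategy concrete, the following sketch evaluates D_M(p, a) and B_M(p, a) from Equations (51) and (52) for a fixed design and searches for the repair interval a maximizing Equation (55) (with f_1 = f and C_R = 0). The Weibull failure model, the constant benefit rate and the cost figures are hypothetical placeholders.

import numpy as np
from scipy.stats import weibull_min
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

gamma = 0.03
C, L, R, b = 1.0e6, 5.0e6, 1.0e5, 2.0e5          # hypothetical costs and benefit rate
dist = weibull_min(c=2.5, scale=60.0)            # hypothetical aging failure model
f, Fbar = dist.pdf, dist.sf

BD = lambda t: b * (1.0 - np.exp(-gamma * t)) / gamma      # Equation (37) for constant b

def Z_of_a(a):
    f_star_a, _ = quad(lambda t: np.exp(-gamma * t) * f(t), 0.0, a)   # incomplete transform
    surv = np.exp(-gamma * a) * Fbar(a)
    denom = 1.0 - (f_star_a + surv)
    D_M = ((C + L) * f_star_a + R * surv) / denom                     # Equation (51)
    num_B, _ = quad(lambda t: BD(t) * f(t), 0.0, a)
    B_M = (num_B + BD(a) * Fbar(a)) / denom                           # Equation (52)
    return B_M - D_M                                                  # Equation (55), C_R = 0

res = minimize_scalar(lambda a: -Z_of_a(a), bounds=(5.0, 120.0), method='bounded')
print(res.x, Z_of_a(res.x))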
4.5 Block repairs
The damage cost for block repairs are composed of the (discounted) cost of planned systematic renewals at time d (or d_1 for the first interval, where the time to the first failure has the updated failure time density f_1(t, p)) plus the (discounted) cost of failure(s) before d (or d_1). Therefore,

D_B(p, d_1, d) = R(p)\,e^{-\gamma d_1} + (C(p) + L)\,f_1^{**}(\gamma, p, d_1)\,[1 + m_1^{**}(\gamma, p, d_1)]
 + \frac{e^{-\gamma d_1}\left[R(p)\,e^{-\gamma d} + (C(p) + L)\,f^{**}(\gamma, p, d)\,[1 + m^{**}(\gamma, p, d)]\right]}{1 - e^{-\gamma d}} \qquad (56)

where f_{(1)}^{**}(\gamma, p, d_{(1)}) = \int_0^{d_{(1)}} e^{-\gamma t}\,f_{(1)}(t, p)\,dt and m_{(1)}^{**}(\gamma, p, d_{(1)}) = \int_0^{d_{(1)}} e^{-\gamma t}\,m_{(1)}(t, p)\,dt, with m_1(t, p) the renewal intensity for the updated failure model up to the first renewal (until d_1) and m(t, p) the renewal intensity in subsequent intervals (until d) as in Equation (15) (Cox 1962). m_1(t, p) and m(t, p) are given by m_1(t, p) = \sum_{n=1}^{\infty} f_{1,n}(t, p) and m(t, p) = \sum_{n=1}^{\infty} f_n(t, p), respectively (see Equation (15)). Here and in the following the notation x_{(1)} means either x or x_1, whichever is relevant. Remember, integration of m_{(1)}(t) is simply the mean number of renewals in [0, d_{(1)}] but here discounting is introduced additionally. The computation of m_{(1)}(t) is the numerically expensive part (see Equation (17), Equation (19) or Equation (21)). Note that all components are repaired at time d_{(1)} with certainty and cost R(p) but some components are already renewed earlier because they failed. For f_1(t, p) = f(t, p) and d_1 = d Equation (56) simplifies to

D_B(p, d) = \frac{R(p)\,e^{-\gamma d} + (C(p) + L)\,f^{**}(\gamma, p, d)\,[1 + m^{**}(\gamma, p, d)]}{1 - e^{-\gamma d}} \qquad (57)
For benefit rates unaffected by renewals one simply has the results in Equation (34) or (33). The benefit term for the case in Equation (35) is, for finite integration intervals [0, d],

B_B(p, d_1, d) = \int_0^{d_1} B_D(t)\,f_1(t, p)\,dt + m_1^{**}(\gamma, p, d_1) \int_0^{d_1} B_D(t)\,f(t, p)\,dt
 + \frac{e^{-\gamma d_1}}{1 - e^{-\gamma d}}\left[\int_0^{d} B_D(t)\,f(t, p)\,dt + m^{**}(\gamma, p, d) \int_0^{d} B_D(t)\,f(t, p)\,dt\right] \qquad (58)

with B_D(t) in Equation (35). For f_1(t, p) = f(t, p) and d_1 = d Equation (58) simplifies to

B_B(p, d) = \frac{\int_0^{d} B_D(t)\,f(t, p)\,dt + m^{**}(\gamma, p, d) \int_0^{d} B_D(t)\,f(t, p)\,dt}{1 - e^{-\gamma d}} \qquad (59)
The length d (and/or d_1) of a replacement interval can also be subject to optimization with respect to benefits and cost. In general, there is little difference between age-dependent and block repairs unless the failure costs are very large.
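A corresponding sketch for the block repair strategy of Equation (57) is given below. The renewal density m(t, p) is obtained here by solving the renewal equation for the density on a grid; the failure model and cost figures are again hypothetical placeholders.

import numpy as np
from scipy.stats import weibull_min

gamma, C, L, R = 0.03, 1.0e6, 5.0e6, 1.0e5        # hypothetical costs and discount rate
dist = weibull_min(c=2.5, scale=60.0)             # hypothetical aging failure model

def D_block(d, n=2000):
    # Equation (57): D_B(d) = [R e^{-gd} + (C+L) f**(g,d)(1 + m**(g,d))] / (1 - e^{-gd})
    t = np.linspace(0.0, d, n + 1); dt = t[1] - t[0]
    f = dist.pdf(t)
    m = np.zeros_like(t)                          # renewal density: m = f + m * f (convolution)
    for k in range(1, n + 1):
        m[k] = f[k] + np.sum(m[k - 1::-1] * f[1:k + 1]) * dt
    trap = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * dt)
    disc = np.exp(-gamma * t)
    f_star, m_star = trap(disc * f), trap(disc * m)
    return (R * np.exp(-gamma * d) + (C + L) * f_star * (1.0 + m_star)) / (1.0 - np.exp(-gamma * d))

# crude search for the block interval d minimizing the damage term
ds = np.arange(10.0, 101.0, 5.0)
print(min(zip([D_block(d) for d in ds], ds)))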
4.6 Inspection and repair

In the structures and many other areas any expensive maintenance operation is preceded by inspections involving cost I if damage progression and/or changes in system performance are observable. We understand that the inspections are essential inspections leading eventually to decisions about repair or no repair. If there are inspections at times a_{(1)}, 2a_{(1)}, 3a_{(1)}, . . . there is not necessarily a repair because aging processes and inspections are uncertain or the signs of deterioration are vague. Repairs occur only with a certain probability P_R(t), for example according to Equation (46). For cumulative damage phenomena this probability should increase with time as in Equation (46) and should depend on the actual observed damage state. As mentioned before the same (physical or chemical) damage process determines an (observable) damage state but also failure. For this reason inspection results and thus repair events and failure events are dependent. In fact, only if inspections address the same damage process, specifically the same realization, can we expect to make reasonable decisions about repair or no repair. Such dependencies make an analytical and numerical treatment complicated but still computationally manageable. The objective is

Z_{IR}(p, a_1, a) = B_{IR}(p, a_1, a) - C_R(p) - D_{IR}(p, a_1, a)
(60)
where, in generalizing Equation (53),

D_{IR}(p, a_1, a) = N_1 + \frac{N_2\,N_3}{D} \qquad (61)

with:
N_1 = (C(p) + L) \sum_{n=1}^{\infty} \int_{(n-1)a_1}^{na_1} \exp(-\gamma t) \left[\frac{d}{d\theta} P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja_1)\} \cap \{T_1 \le \theta\}\right)\right]_{\theta=t} dt
 + I \sum_{n=1}^{\infty} \exp(-\gamma\,na_1)\,P\!\left(\{\bar{R}(na_1)\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja_1)\} \cap \{T_1 > na_1\}\right)
 + (I + R(p)) \sum_{n=1}^{\infty} \exp(-\gamma\,na_1)\,P\!\left(\{R(na_1)\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja_1)\} \cap \{T_1 > na_1\}\right) \qquad (62a)

N_2 = \sum_{n=1}^{\infty} \int_{(n-1)a_1}^{na_1} \exp(-\gamma t) \left[\frac{d}{d\theta} P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja_1)\} \cap \{T_1 \le \theta\}\right)\right]_{\theta=t} dt
 + \sum_{n=1}^{\infty} \exp(-\gamma\,na_1)\,P\!\left(\{R(na_1)\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja_1)\} \cap \{T_1 > na_1\}\right) \qquad (62b)
N_3 = (C(p) + L) \sum_{n=1}^{\infty} \int_{(n-1)a}^{na} \exp(-\gamma t) \left[\frac{d}{d\theta} P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja)\} \cap \{T \le \theta\}\right)\right]_{\theta=t} dt
 + I \sum_{n=1}^{\infty} \exp(-\gamma\,na)\,P\!\left(\{\bar{R}(na)\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja)\} \cap \{T > na\}\right)
 + (I + R(p)) \sum_{n=1}^{\infty} \exp(-\gamma\,na)\,P\!\left(\{R(na)\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja)\} \cap \{T > na\}\right) \qquad (62c)

D = 1 - \sum_{n=1}^{\infty} \int_{(n-1)a}^{na} \exp(-\gamma t) \left[\frac{d}{d\theta} P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja)\} \cap \{T \le \theta\}\right)\right]_{\theta=t} dt
 - \sum_{n=1}^{\infty} \exp(-\gamma\,na)\,P\!\left(\{R(na)\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja)\} \cap \{T > na\}\right) \qquad (62d)
and

C_R(p) = cost of investigating and/or retrofitting an existing structure
C(p) = reconstruction cost after failure
L = direct damage cost after failure
R(na_{(1)}) = repair event at the n-th inspection
\bar{R}(ja_{(1)}) = no-repair event at the j-th inspection
P_R(ja_{(1)}) = probability of repair after the j-th inspection
\bar{P}_R(ja_{(1)}) = 1 − P_R(ja_{(1)}) = probability of no repair after the j-th inspection
a_{(1)} = deterministic inspection interval
I = cost per inspection
R(p) = repair cost for preventive maintenance.

The first term N_1 in Equation (61) is the replacement cost after first failure or repair, N_3 the replacement cost for subsequent renewal cycles. In both cases the replacement cost include the cost of failure, the cost of inspections given that no failure and no repairs have occurred before and the third term accounts for the cost of inspection and repair given that no failure occurred before. Here, one has to extend the renewal interval to 2a_{(1)}, 3a_{(1)}, . . . if an inspection is not followed by repair and no failure occurred. Since P_R(a_{(1)}) < 1 it is usually sufficient to consider only a few terms in the sums. The higher order terms vanish for P_R(a_{(1)}) → 1 and are significant only for relatively small a_{(1)}. As concerns numerical computations consider the fractional Laplace transform of the failure density given dependencies between no repair and failure events, that is (Joanni and Rackwitz 2006)

f_{(1)}^{***}(\gamma, p, (n-1)a_{(1)} \le t \le na_{(1)}) = \int_{(n-1)a_{(1)}}^{na_{(1)}} \exp(-\gamma t) \left[\frac{d}{d\theta} P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja_{(1)})\} \cap \{T_{(1)} \le \theta\}\right)\right]_{\theta=t} dt \qquad (63)
where T_{(1)} is the random time to failure. Here again, the intersection probabilities can be determined by FORM/SORM but alternative methods such as Monte Carlo simulation can also be used. Remember that a typical intersection event \bigcap_{j=0}^{n-1}\{\bar{R}(ja_{(1)})\} \cap \{T_{(1)} \le t\} after the probability distribution transformation into standard space is given by \bigcap_{j=0}^{n-1}\{s_c - S(ja_{(1)}, U_R) > 0\} \cap \{g_{(1)}(U_F, t) \le 0\} according to Equations (5) and (46), for example. U_R and U_F denote the variables in the random vector defining the damage indicator (including measurement error) and the variables defining failure, respectively. Because U_R and U_F have some components in common the events are dependent. Within FORM/SORM the event boundaries are now linearized in the most likely failure point(s) and the correlation coefficients between the respective state functions are computed. The dependencies can be taken into account by evaluating the corresponding multivariate normal integrals. The differentiation under the integral that is necessary for evaluation of Equation (63) is best done numerically, but can also be performed analytically under certain conditions. For F_1(t) = F(t) the damage term in Equation (61) simplifies to

D_{IR} = \frac{N_3}{D} \qquad (64)
The benefit is given by Equation (33) or (34) if it is unaffected by renewals. It has a similar structure as Equation (61). Generalizing Equation (54) for the model in Equation (35) one obtains

B_{IR}(p, a_1, a) = B_1 + \frac{B_2\,B_3}{D} \qquad (65)
and

B_1 = \sum_{n=1}^{\infty} \int_{(n-1)a_1}^{na_1} B_D^*(t) \left[\frac{d}{d\theta} P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja_1)\} \cap \{T_1 \le \theta\}\right)\right]_{\theta=t} dt
 + \sum_{n=1}^{\infty} B_D^*(na_1)\,P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja_1)\} \cap \{T_1 > na_1\}\right) \qquad (66a)

B_2 = \sum_{n=1}^{\infty} \int_{(n-1)a_1}^{na_1} \exp(-\gamma t) \left[\frac{d}{d\theta} P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja_1)\} \cap \{T_1 \le \theta\}\right)\right]_{\theta=t} dt
 + \sum_{n=1}^{\infty} \exp(-\gamma\,na_1)\,P\!\left(\{R(na_1)\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja_1)\} \cap \{T_1 > na_1\}\right) \qquad (66b)

B_3 = \sum_{n=1}^{\infty} \int_{(n-1)a}^{na} B_D^*(t) \left[\frac{d}{d\theta} P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja)\} \cap \{T \le \theta\}\right)\right]_{\theta=t} dt
 + \sum_{n=1}^{\infty} B_D^*(na)\,P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja)\} \cap \{T > na\}\right) \qquad (66c)
where B_D(t) is given in Equation (37) and

B^{*}_{D}(t) = \int_{(n-1)a_{(1)}}^{t} \exp[-\gamma\tau]\, b(\tau)\, d\tau.   (67)
For F1(t) = F(t) an analogous simplification as in Equation (64) is possible. For independent repair and failure events the intersection signs must simply be replaced by product signs, simplifying the numerical computations considerably. The question is when the independence assumption becomes at least approximately true. This must depend on the case under consideration. Dependencies become weaker for larger measurement errors during inspections and for smaller dependencies between damage indicators and failure processes.

4.7 Preventive maintenance for series systems
By definition, a series system fails if any of its components fails. Consequently, all of its components have to be renewed. This requires only a few modifications of the theory developed in Section 4.6. For a system with s components we have

N_{1s} = (C(p) + L) \sum_{n=1}^{\infty} \int_{(n-1)a_{(1)}}^{na_{(1)}} \exp[-\gamma t] \times (-1)\, \frac{d}{d\theta} P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja_{(1)})\} \cap \bigcap_{m=1}^{s}\{T_{m(1)} > \theta\}\right)\!\Big|_{\theta=t}\, dt
\;+\; I \sum_{n=1}^{\infty} \exp[-\gamma\, na_{(1)}]\, P\!\left(\{\bar{R}(na_{(1)})\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja_{(1)})\} \cap \bigcap_{m=1}^{s}\{T_{m(1)} > na_{(1)}\}\right)
\;+\; (I + R(p)) \sum_{n=1}^{\infty} \exp[-\gamma\, na_{(1)}]\, P\!\left(\{R(na_{(1)})\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja_{(1)})\} \cap \bigcap_{m=1}^{s}\{T_{m(1)} > na_{(1)}\}\right)   (68a)

N_{2s} = \sum_{n=1}^{\infty} \int_{(n-1)a_{(1)}}^{na_{(1)}} \exp[-\gamma t] \times (-1)\, \frac{d}{d\theta} P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja_{(1)})\} \cap \bigcap_{m=1}^{s}\{T_{m(1)} > \theta\}\right)\!\Big|_{\theta=t}\, dt
\;+\; \sum_{n=1}^{\infty} \exp[-\gamma\, na_{(1)}]\, P\!\left(\{R(na_{(1)})\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja_{(1)})\} \cap \bigcap_{m=1}^{s}\{T_{m(1)} > na_{(1)}\}\right)   (68b)

N_{3s} = (C(p) + L) \sum_{n=1}^{\infty} \int_{(n-1)a}^{na} \exp[-\gamma t] \times (-1)\, \frac{d}{d\theta} P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja)\} \cap \bigcap_{m=1}^{s}\{T_{m} > \theta\}\right)\!\Big|_{\theta=t}\, dt
\;+\; I \sum_{n=1}^{\infty} \exp[-\gamma\, na]\, P\!\left(\{\bar{R}(na)\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja)\} \cap \bigcap_{m=1}^{s}\{T_{m} > na\}\right)
\;+\; (I + R(p)) \sum_{n=1}^{\infty} \exp[-\gamma\, na]\, P\!\left(\{R(na)\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja)\} \cap \bigcap_{m=1}^{s}\{T_{m} > na\}\right)   (68c)

D_s = 1 - \sum_{n=1}^{\infty} \int_{(n-1)a}^{na} \exp[-\gamma t] \times (-1)\, \frac{d}{d\theta} P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja)\} \cap \bigcap_{m=1}^{s}\{T_{m} > \theta\}\right)\!\Big|_{\theta=t}\, dt
\;-\; \sum_{n=1}^{\infty} \exp[-\gamma\, na]\, P\!\left(\{R(na)\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja)\} \cap \bigcap_{m=1}^{s}\{T_{m} > na\}\right)   (68d)
to be used in Equation (61) with N1, N2, N3 and D replaced by N1s, N2s, N3s and Ds, respectively. Similar modifications have to be made for the benefit term:
B_{1s} = \sum_{n=1}^{\infty}\left[\int_{(n-1)a_{(1)}}^{na_{(1)}} B^{*}_{D}(t) \times (-1)\, \frac{d}{d\theta} P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja_{(1)})\} \cap \bigcap_{m=1}^{s}\{T_{m(1)} > \theta\}\right)\!\Big|_{\theta=t}\, dt + B^{*}_{D}(na_{(1)})\, P\!\left(\{R(na_{(1)})\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja_{(1)})\} \cap \bigcap_{m=1}^{s}\{T_{m(1)} > na_{(1)}\}\right)\right]   (69a)

B_{2s} = \sum_{n=1}^{\infty}\left[\int_{(n-1)a_{(1)}}^{na_{(1)}} \exp[-\gamma t] \times (-1)\, \frac{d}{d\theta} P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja_{(1)})\} \cap \bigcap_{m=1}^{s}\{T_{m(1)} > \theta\}\right)\!\Big|_{\theta=t}\, dt + \exp[-\gamma\, na_{(1)}]\, P\!\left(\{R(na_{(1)})\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja_{(1)})\} \cap \bigcap_{m=1}^{s}\{T_{m(1)} > na_{(1)}\}\right)\right]   (69b)

B_{3s} = \sum_{n=1}^{\infty} \int_{(n-1)a}^{na} B^{*}_{D}(t) \times (-1)\, \frac{d}{d\theta} P\!\left(\bigcap_{j=0}^{n-1}\{\bar{R}(ja)\} \cap \bigcap_{m=1}^{s}\{T_{m} > \theta\}\right)\!\Big|_{\theta=t}\, dt + \sum_{n=1}^{\infty} B^{*}_{D}(na)\, P\!\left(\{R(na)\} \cap \bigcap_{j=0}^{n-1}\{\bar{R}(ja)\} \cap \bigcap_{m=1}^{s}\{T_{m} > na\}\right)   (69c)
Here, we have again used P(∪_{m=1}^{s} E_m) = 1 − P(∩_{m=1}^{s} Ē_m) in order to retain operations solely on intersections. The series system is a realistic assumption for many, but not for all, civil engineering systems. For example, if one of several bridges in a road connection between A and B fails, or a river dam breaks at a certain point, the infrastructure or flood protection system fails, but only the failed bridge or dam section must be restored in order to make the system functional again. This may require certain modifications of the models outlined so far. If the block maintenance regime is chosen, all components in the system will be restored. But if the age-dependent regime with inspection and repair is chosen, any repair action may also be associated with a specific component. An analytical treatment will then become rather difficult and complex because the components in the system will have different ages. More complex systems can involve considerable numerical effort.
5 Some remarks about suitable optimization methods

5.1 General
It is necessary to say a little about the technical aspects of optimization. When designing and applying appropriate optimization techniques to the objective functions derived in the foregoing sections one faces the problem that, in fact, two optimization tasks have to be solved: (i) optimization with respect to the design parameters p, and (ii) optimization with respect to the standard normal vector u to find the (local) reliability index, at least if FORM/SORM methods are applied. More specifically, the reliability optimization has to be solved for each step of the design parameter optimization. Even if one assumes differentiability of the objective, of the stochastic model and of the structural state function, as well as uniqueness of the solution point(s), the overall optimization is still a formidable task requiring considerable numerical effort. In the recent literature one distinguishes between one-level and bi-level optimization methods. In the bi-level method one optimizer solves the cost-benefit optimization and another, possibly different, optimizer solves the reliability optimization. In the one-level approach both optimization tasks are solved simultaneously by a suitable optimizer. In the following we briefly comment on both concepts. Both usually work, and it is a matter of taste which one is selected. If the above-mentioned smoothness properties do not hold, other optimization procedures are in order.

5.2 Bi-level optimization
In order to obtain the set of parameters for which the objective function Z(p) becomes optimal, the so-called bi-level approach can be chosen. Here, the optimization task in standard normal space for the computation of the required reliability statistics corresponding to a fixed parameter set p is carried out separately, using a sequential quadratic programming or similar method. The results, in turn, serve as input to the main optimization loop for the parameter set p, for which any of the available optimization methods can be employed. Alternatively, the direct search optimization method developed by Powell (1994) can be applied; it does not require derivatives. This approach proved to be robust and reliable and only slightly more expensive than other methods. For the main optimization loop, lower and upper bounds should be imposed on the parameters, and it usually turns out to be advantageous to scale the optimization domain such that its shape becomes a hypercube.
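As a rough illustration of this nesting (and not of the authors' own code), the sketch below solves the inner FORM problem, minimizing the norm of u subject to g(u, p) = 0, with a sequential quadratic programming routine and wraps it in an outer, derivative-free Powell search over the design parameter. The limit state function, the cost figures, the benefit B and the parameter bounds are all invented for the purpose of the example.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def g(u, p):
    # Hypothetical limit state in standard normal space, parameterized by the design p
    return p - (u[0] + 0.5 * u[1])

def form_beta(p):
    # Inner (reliability) problem: beta = min ||u|| subject to g(u, p) = 0
    res = minimize(lambda u: u @ u, x0=np.array([0.1, 0.1]), method='SLSQP',
                   constraints=[{'type': 'eq', 'fun': lambda u: g(u, p)}])
    return np.sqrt(res.fun)

def neg_Z(p_vec, B=5.0, C0=1.0, C1=0.2, L=10.0, lam=1.0, gamma=0.03):
    # Outer (cost-benefit) problem: Z(p) = B - C(p) - (C(p) + L) * lam * Pf(p) / gamma
    p = p_vec[0]
    C = C0 + C1 * p                    # illustrative construction cost
    Pf = norm.cdf(-form_beta(p))       # FORM estimate of the failure probability
    return -(B - C - (C + L) * lam * Pf / gamma)

res = minimize(neg_Z, x0=[2.0], method='Powell', bounds=[(1.0, 6.0)])
print('optimal design parameter:', res.x[0], ' objective Z:', -res.fun)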
5.3 One-level optimization
Let p be a parameter vector which enters both the cost function and the limit state function g(u, p) = 0. The benefit, construction and damage functions as well as the limit state function(s) are differentiable in p and u, and the conditions for the application of FORM/SORM hold. In the so-called β-point u∗ the optimality conditions (Kuhn-Tucker conditions) are (Kuschel and Rackwitz 1997):

g(u, p) = 0
\frac{u}{\|u\|} = -\frac{\nabla_u g(u, p)}{\|\nabla_u g(u, p)\|}   (70)
The geometrical meaning of Equation (70) is that the limit state surface g(u, p) = 0 is perpendicular to the vector of direction cosines of u∗, i.e. the gradient of g is anti-parallel to u∗. The basic idea, first mentioned in Madsen and Friis-Hansen (1992) and elaborated in Kuschel and Rackwitz (1997), is to use these conditions as constraints in the cost optimization problem, thus avoiding a bi-level optimization. It will turn out that this concept is crucial for further numerical analysis. For example, for the model in Equation (43) this leads to:

Z(p) = B - C(p) - (C(p) + L)\,\frac{\lambda P_f(p)}{\gamma}   (71)

subject to

g(u, p) = 0
u_i\,\|\nabla_u g(u, p)\| + (\nabla_u g(u, p))_i\,\|u\| = 0;   i = 1, \ldots, n-1
h_k(p) \le 0,   k = 1, \ldots, q

where hk(p) ≤ 0, k = 1, . . . , q are some constraints on the admissible parameter range.
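For comparison, a one-level version of the same toy problem (with the same invented limit state and cost figures as in the bi-level sketch above) appends the β-point coordinates u to the design variables and imposes the constraints of Equation (71), i.e. g(u, p) = 0 together with the n − 1 collinearity conditions, directly in a single optimization:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def g(u, p):
    return p - (u[0] + 0.5 * u[1])

def grad_g(u, p):
    return np.array([-1.0, -0.5])           # gradient of g with respect to u

def kkt_constraints(x):
    # x = [p, u1, u2]; the n equality constraints of Equation (71)
    p, u = x[0], x[1:]
    dg = grad_g(u, p)
    on_surface = g(u, p)                                               # beta-point lies on g = 0
    collinear = u[0] * np.linalg.norm(dg) + dg[0] * np.linalg.norm(u)  # i = 1 (here n - 1 = 1)
    return np.array([on_surface, collinear])

def neg_Z(x, B=5.0, C0=1.0, C1=0.2, L=10.0, lam=1.0, gamma=0.03):
    p, u = x[0], x[1:]
    C = C0 + C1 * p
    Pf = norm.cdf(-np.linalg.norm(u))        # Pf evaluated at the candidate beta-point
    return -(B - C - (C + L) * lam * Pf / gamma)

res = minimize(neg_Z, x0=[2.0, 1.0, 0.5], method='SLSQP',
               constraints=[{'type': 'eq', 'fun': kkt_constraints}],
               bounds=[(1.0, 6.0), (None, None), (None, None)])
print('optimal p:', res.x[0], ' beta:', np.linalg.norm(res.x[1:]))

Both sketches should return roughly the same optimal parameter; the one-level variant simply trades the inner reliability loop for additional variables and equality constraints.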
One may also add a constraint on the failure rate λPf(p). It is important to reduce the set of gradient conditions in the Kuhn-Tucker conditions by one; otherwise the system of Kuhn-Tucker conditions is overdetermined. It is also important that the remaining Kuhn-Tucker conditions are retained under all circumstances, for example if one or more gradient Kuhn-Tucker conditions become co-linear with one or more of the other constraints; otherwise the β-point conditions are not fulfilled. If there are multiple failure modes, (C(p) + L)λPf(p)/γ must simply be replaced by (C(p) + L)(λ/γ)(1 − P(∩_{j=1}^{s} F̄_j(p))) (see Equation (44)). In this case

Z(p) = B - C(p) - (C(p) + L)\,\frac{\lambda}{\gamma}\left[1 - P\!\left(\bigcap_{j=1}^{s}\bar{F}_j(p)\right)\right]   (72)

subject to

g_k(u_k, p) = 0;   k = 1, \ldots, s
u_{i,k}\,\|\nabla_u g_k(u_k, p)\| + (\nabla_u g_k(u_k, p))_i\,\|u_k\| = 0;   i = 1, \ldots, n_k-1;   k = 1, \ldots, s
h_\ell(p) \le 0,   \ell = 1, \ldots, q

where the Kuhn-Tucker conditions have to be fulfilled separately for each failure mode. Note that there are s distinct independent vectors u_k, and that all failure mode equations are fulfilled simultaneously. For (locally) non-stationary problems, especially aging problems, and for problems with non-Poissonian failures a numerical solution can be proposed. More precisely, the Laplace transform is taken numerically and each value of the failure density is computed by FORM/SORM. The same scheme, however, applies to the full
Laplace transform of non-stationary problems as well:

Z(p) \approx B - C(p) - (C(p) + H)\,\frac{f^{*}(\gamma, p)}{1 - f^{*}(\gamma, p)}   (73)

g(u_j, p, t_j) = 0   for j = 0, 1, \ldots, m
u_{i,j}\,\|\nabla_u g(u_j, p, t_j)\| + (\nabla_u g(u_j, p, t_j))_i\,\|u_j\| = 0;   i = 1, \ldots, n-1;   j = 0, \ldots, m
h_\ell(p) \le 0,   \ell = 1, \ldots, q

where

f^{*}(\gamma, p) \approx \sum_{j=0}^{m} w_j \exp[-\gamma t_j]\, f_T(t_j, p)   (74)
with m the number of time steps and wj the weights for numerical integration in Equation (74). In order to solve the optimization problem a suitable optimization algorithm is required. Unfortunately, off-the-shelf sequential quadratic programming methods turned out to have problems, possibly due to the many equality constraints. Based on sequential linear programming methods (Pshenichnyj 1994), a new optimization algorithm, JOINT 5, has been developed from an earlier algorithm proposed by Enevoldsen and Sørensen (1992). This turned out to be necessary because the tasks in Equations (71), (72) and (73) require special precautions which are not necessarily available in most off-the-shelf algorithms. For example, the algorithm includes a reliable and robust slow-down strategy to improve stability instead of an exact (or approximate) line search, which too often is the reason for non-convergence. A special 'extended' equation system is solved in case of failure in the quadratic subalgorithm, e.g. due to linear dependence of the linearized constraints. In addition, the algorithm contains a careful active set strategy (for further details see Streicher 2004). As in the bi-level method, a suitable scaling of the objective is advantageous. Gradient-based methods need first derivatives of the objective and of all active constraints. In the case of cost optimization under reliability constraints, the first-order Kuhn-Tucker optimality conditions for a design point enter the optimization problem as restrictions. These equations are given in terms of the first derivatives of the limit state function, so their gradients involve second derivatives. Thus, the solution of the quadratic subproblem needs second derivatives, i.e. the complete Hessian of g(u, p). The determination of the Hessian in each iteration step is laborious and can be numerically inexact. In order to avoid this, an approximation by iteration is proposed. The Hessian is first preset with zeros. Note that linear limit state functions always have a zero Hessian matrix. This implies a loss of efficiency, but the overall numerical effort need not rise, because calculation of the Hessian is no longer necessary. In order to improve the results in the case of strongly nonlinear limit state functions, it is possible to evaluate the Hessian after the first optimization run and restart the algorithm. For the improved solution the starting point is the solution of the previous run and the Hessian matrix is kept fixed for the whole run. This iterative improvement with subsequent restarts continues until the results differ only within a given precision, which is usually the case after very few steps. The results can be further improved by including
second-order corrections during reiteration (see Kuschel and Rackwitz 2000). Any other more exact improvement can be taken into account in a similar manner. The techniques proposed enable the solution of quite general problems. They are based on a one-level optimization, but rather strong requirements on the differentiability of the objective, the limit state functions and the other restrictions must be made. Also, a possibly substantial increase of the problem dimension must be expected in extreme cases and, hence, much computing time may be necessary. In passing it is worthwhile to remark that for the bi-level approach a proof of convergence is not yet available, whereas it is available for the one-level approach.
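The numerical Laplace transform of Equation (74) is easy to implement once the failure density f_T(t_j, p) can be evaluated at the grid points, e.g. by FORM/SORM. The sketch below uses composite Simpson weights on an equidistant grid; the Weibull density merely stands in for the FORM/SORM evaluations, and the cost figures in the final lines are invented (H denotes the damage cost of Equation (73)).

import numpy as np
from scipy.stats import weibull_min

def simpson_weights(m, dt):
    # Composite Simpson weights for m + 1 equidistant points (m even)
    w = np.ones(m + 1)
    w[1:-1:2] = 4.0
    w[2:-1:2] = 2.0
    return w * dt / 3.0

def laplace_transform(f_T, gamma, t_max, m=200):
    # f*(gamma, p) ~ sum_j w_j exp(-gamma t_j) f_T(t_j, p), cf. Equation (74)
    t = np.linspace(0.0, t_max, m + 1)
    w = simpson_weights(m, t[1] - t[0])
    return float(np.sum(w * np.exp(-gamma * t) * f_T(t)))

f_T = weibull_min(c=2.5, scale=60.0).pdf    # stand-in for FORM/SORM values of f_T(t, p)
gamma = 0.03
f_star = laplace_transform(f_T, gamma, t_max=400.0)

# Objective of Equation (73) with illustrative cost figures
B, C, H = 5.0e6, 1.0e6, 1.0e7
print(f_star, B - C - (C + H) * f_star / (1.0 - f_star))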
6 Illustrating example – Chloride attack in an existing building

The following, slightly academic example shows several interesting features and is an appropriate test case. Chloride attack due to salting and subsequent corrosion, for example in the entrance area of a parking garage or in a concrete bridge, is considered. A simplified, approximate model for the chloride concentration in concrete is C(x, t) = Cs (1 − erf(x/(2√(Dt)))), where Cs = surface chloride content (measured ≈ 0.5 cm below the surface and extrapolated), x = depth and D = diffusion parameter. A suitable criterion for the time to the start of chloride corrosion of the reinforcement is:
C_{cr} - C_s\left(1 - \mathrm{erf}\!\left(\frac{c}{2\sqrt{Dt}}\right)\right) \le 0   (75)

where Ccr = critical chloride content and c = concrete cover. Inversion gives the initiation time
T_i = \frac{c^2}{4D}\left[\mathrm{erf}^{-1}\!\left(1 - \frac{C_{cr}}{C_s}\right)\right]^{-2}   (76)

The stochastic model is:

Variable   Unit         Distribution   Parameters
Ccr        %            Uniform        0.4, 0.6
Cs         %            Uniform        0.8, 1.2
c          cm           Log-normal     mc, 0.8
D          cm²/year     Uniform        0.5, 0.8
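The initiation-time statistics used below can be reproduced approximately by sampling Equation (76) under the stochastic model of the table. In this sketch the log-normal cover is interpreted as having mean mc = 5.0 cm and standard deviation 0.8 cm; this reading of the table's parameters, like the sample size, is an assumption of the illustration.

import numpy as np
from scipy.special import erfinv

rng = np.random.default_rng(1)
N = 200_000

# Stochastic model of the table above
C_cr = rng.uniform(0.4, 0.6, N)                  # critical chloride content [%]
C_s = rng.uniform(0.8, 1.2, N)                   # surface chloride content [%]
D = rng.uniform(0.5, 0.8, N)                     # diffusion parameter [cm^2/year]
m_c, s_c = 5.0, 0.8                              # cover: assumed mean and std [cm]
sigma = np.sqrt(np.log(1.0 + (s_c / m_c) ** 2))  # log-normal parameters from mean/std
mu = np.log(m_c) - 0.5 * sigma ** 2
c = rng.lognormal(mu, sigma, N)                  # concrete cover [cm]

# Initiation time according to Equation (76)
T_i = c ** 2 / (4.0 * D) * erfinv(1.0 - C_cr / C_s) ** (-2.0)

print('mean initiation time [years]:', T_i.mean())
print('P(T_i <= 12 years):', (T_i <= 12.0).mean())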
The planned concrete cover is mc = 5.0 cm. By drilling small holes and chemically analyzing the drill dust, a chloride concentration of 0.4 has been determined at a depth of 3 cm, with a measurement error of 0.05. Applying Equation (26) and truncating at t = 12 years gives an updated distribution function of the time to the start of corrosion as shown in Figure 13.1, where it is compared with the initiation time distribution. It is seen that chloride penetration occurred slightly more rapidly than expected for renewed structures.

Figure 13.1 Updated distribution for first failure time and subsequent failure time distributions (time in years).

During the initiation time the structure can fail due to time-variant, stationary extreme loading. It is assumed that each year there is an independent extreme realization of the load. Load effects are normally distributed with mean 2.0 and a coefficient of variation of 40%.
Structural resistance is also normally distributed, with a mean three times as large as the mean load effect (i.e. mean resistance 6.0) and a coefficient of variation of 30%. Once corrosion has started, the mean resistance deteriorates at the rate δ(t) = 1 − 0.07t + 0.00002t². The distribution and density functions of the time to first failure are computed using SORM in Equation (13), with the failure time distributions in the initiation phase and in the deterioration phase determined by Equation (7). The structural states in two arbitrary time steps have a constant correlation coefficient of ρ = ρij = 0.973. First, the mean times of the various distributions in Equation (13) are determined. One finds E[Ti] = 51 and E[Td] = 9.4. The mean of Te does not exist. Using the distribution in Equation (13) one determines E[T] = 61. These mean times indicate that virtually no failures occur during initiation. The risk functions for both distributions, assuming the repair probabilities in Figure 13.2, are first increasing but decrease slightly for larger t. Visual inspections, inspections by half-cell potential measurements and chemical analyses are performed at regular intervals a(1). They are followed by renewals (repairs) with probability

P_A(a_{(1)}) = P\!\left[ r(1 + 0.05U_R) - C_{s,R}\left(1 - \mathrm{erf}\!\left(\frac{m_{c,(1)}}{2\sqrt{D_R a_{(1)}}}\right)\right) \le 0 \right]   (77)

shown in Figure 13.2, if a chloride concentration of r at the reinforcement was observed. The term (1 + 0.2UR) models the measurement error, with UR a standard normal variable. Repair times are assumed negligibly short. Remember that the existing structure is already 12 years old and has suffered from chloride attack during the whole period. The first inspection is undertaken after 5 years. For all subsequent renewed structures the first inspection is after 8 years. Erection costs are C(mc, mr) = C0 + C1 mc² + C2 mr, inspection costs are I = 0.02C0, and we have C0 = 10⁶, C1 = C2 = 10⁴, L = 10C0, γ = 0.03. For preventive repairs the cost is R(mc, mr) = 0.6 C(mc, mr).
Figure 13.2 Repair probabilities (probability of repair versus time in years; regular probability, r = 0.42; updated probability, r = 0.43).
Figure 13.3 Age replacement: expected maintenance cost [10^6 MU] versus replacement age a [years] for 1, 2 and 5 units.
mr is the safety factor separating the means of load effect and resistance. The benefit is determined using a decaying benefit rate b(t) = b exp[−0.0001t²], b = 0.15C0, in the model in Equation (35). All costs are in appropriate currency units. It is noted that the physical and cost parameters are somewhat extreme but not yet unrealistic. When optimizing with respect to the inspection interval, the Laplace transforms are taken numerically using Simpson's integration formula. We first show the total cost (preventive and corrective) for the case of systematic age-dependent repairs and system sizes of s = 1, 2 and 5 (Figure 13.3). As expected, the total costs are higher for larger systems and the optimum replacement interval decreases.
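To indicate the kind of computation behind a figure like 13.3, the sketch below evaluates the expected total discounted cost for the plain age-replacement regime (no inspections, instantaneous renewals): a cycle ends either in failure before age a, at the failure cost, or in preventive replacement at age a, at the preventive cost, and the geometric-series argument used for the denominators D above sums over the cycles. The Weibull failure-time model and the cost ratios are illustrative stand-ins, not the example's updated distributions.

import numpy as np
from scipy.stats import weibull_min
from scipy.integrate import quad

def discounted_cost(a, F, f, gamma, C_fail, C_prev):
    # Expected total discounted cost for renewal at age a or at failure, whichever comes first
    disc_fail, _ = quad(lambda t: np.exp(-gamma * t) * f(t), 0.0, a)
    disc_prev = np.exp(-gamma * a) * (1.0 - F(a))
    num = C_fail * disc_fail + C_prev * disc_prev   # discounted cost of one renewal cycle
    den = 1.0 - (disc_fail + disc_prev)             # geometric series over all cycles
    return num / den

dist = weibull_min(c=3.0, scale=40.0)               # illustrative failure-time distribution
gamma, C0 = 0.03, 1.0e6
C_fail, C_prev = 11.0 * C0, 0.6 * C0                # failure renewal vs preventive renewal cost

for a in (6, 12, 18, 24):
    print(a, round(discounted_cost(a, dist.cdf, dist.pdf, gamma, C_fail, C_prev) / 1e6, 3))

For s independent units replaced as a block, F(t) would be replaced by the series-system distribution 1 − (1 − F(t))^s, which raises the cost level and shifts the optimum to shorter intervals, the trend visible in Figure 13.3.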
Figure 13.4 Total cost for inspection and repair: expected maintenance cost [10^6 MU] versus inspection interval a [years] for 1 unit (r = 0.41), 2 units (r = 0.36) and 5 units (r = 0.33); first inspection after 8 years.
Figure 13.5 Expected maintenance cost of an existing n-unit structure in [10^6 MU], with periodic inspections at an interval of a1 and a beginning after 5 and 8 years, respectively (contour plots over a1 and a; panels: n = 1, r1 = 0.43, r = 0.42, D = 1.55×10^6; n = 2, r1 = 0.40, r = 0.36, D = 1.84×10^6; n = 5, r1 = 0.35, r = 0.32, D = 2.30×10^6).
Figure 13.4 shows the results for the inspection/repair strategy. Here, we have also optimized the repair thresholds r; they become more stringent for larger systems. Also, the optimum inspection/replacement intervals are much smaller than in the simple age-dependent case. The differences in cost between systematic age-dependent repairs and repairs after inspections are not large in this example; by parameter changes it is, however, easy to make them larger. The result of an optimization with respect to a1 and a is shown in Figure 13.5 for mc = 5 and mr = 6. One sees that the contour lines are more narrowly spaced for a1 than for a. The optima with respect to a and a1 are rather flat. If, however, the repair probabilities were much smaller than those given in Figure 13.2, no optimum would be found. The inspection intervals depend strongly on the system size.
7 Conclusions

The theory developed earlier for the optimal design and maintenance of aging structural components and systems is extended to the optimal repair and retrofit of existing structures. It is assumed that structures are maintained (inspected and, with a certain probability, repaired) at regular time intervals and systematically reconstructed after failure. Age-dependent and block repairs are studied assuming negligibly short repair times. Three models for the benefit are discussed. Due to updating by additional investigations, the time to first failure usually has different probabilistic characteristics than all other times. Appropriate objective functions for cost-benefit optimization are derived. It is pointed out that inspections, possible repair events and failure events must address the same realization of the damage process if preventive maintenance is to make sense at all. Even if the risk function was initially increasing, maintenance operations will let the risk function drop. Perfect inspections and repairs will reduce the risk function to zero; for imperfect inspections the risk function will drop to finite values. This generally requires the numerical computation of the renewal intensity by differentiating the renewal function, for which tight bounds can be given.
References

Aitchison, J. & Dunsmore, I.R. 1975. Statistical Prediction Analysis. New York: Cambridge University Press.
Ambartzumian, R., Der Kiureghian, A., Ohaniana, V. & Sukiasiana, H. 1998. Multinormal probability by sequential conditioned importance sampling: Theory and application. Probabilistic Engineering Mechanics 13(4):299–308.
Au, S.-K. & Beck, J.L. 2001. First excursion probabilities for linear systems by very efficient importance sampling. Probabilistic Engineering Mechanics 16(3):193–207.
Ayhan, H., Limón-Robles, J. & Wortman, M.A. 1999. An approach for computing tight numerical bounds on renewal functions. IEEE Transactions on Reliability 48(2):182–188.
Barlow, R.E. & Proschan, F. 1965. Mathematical Theory of Reliability. New York: John Wiley & Sons.
Barlow, R.E. & Proschan, F. 1975. Statistical Theory of Reliability and Life Testing: Probabilistic Models. New York: Holt, Rinehart & Winston.
Brown, M. & Proschan, F. 1983. Imperfect repair. Journal of Applied Probability 20:851–859.
Cox, D.R. 1962. Renewal Theory. Monographs on Applied Probability and Statistics. London: Chapman & Hall.
Cox, D.R. & Isham, V. 1980. Point Processes. Monographs on Applied Probability and Statistics. London: Chapman & Hall.
Cramér, H. & Leadbetter, M.R. 1967. Stationary and Related Stochastic Processes. New York: John Wiley & Sons.
Dunnett, C.W. & Sobel, M. 1955. Approximations to the probability integral and certain percentage points of multivariate analogue of Student's t-distribution. Biometrika 42:258–260.
Enevoldsen, I. & Sørensen, J. 1992. Optimization algorithms for calculation of the joint design point in parallel systems. Structural and Multidisciplinary Optimization 4(2):121–127.
Fox, B. 1966. Age replacement with dis