Collaborative Product and Service Life Cycle Management for a Sustainable World
Richard Curran • Shuo-Yan Chou • Amy Trappey Editors
Collaborative Product and Service Life Cycle Management for a Sustainable World Proceedings of the 15th ISPE International Conference on Concurrent Engineering (CE2008)
Richard Curran, PhD Chair of Aerospace Management and Operations TU Delft Faculty of Aerospace Engineering Kluyverweg 1 2629HS Delft The Netherlands
Shuo-Yan Chou, PhD Department of Industrial Management (IM) National Taiwan University of Science and Technology (NTUST) 43 Keelung Road, Section 4 Taipei 106 Taiwan
Amy Trappey, PhD Department of Industrial Engineering and Engineering Management National Tsing Hua University (NTHU) 101 Kuang Fu Road, Section 2 Hsinchu 300 Taiwan
ISBN 978-1-84800-971-4
e-ISBN 978-1-84800-972-1
DOI 10.1007/978-1-84800-972-1

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Control Number: 2008933701

© 2008 Springer-Verlag London Limited

The papers by N.J. Reed et al., by S. Jinks et al. and by J. Cheung et al. are published with kind permission of © copyright 2008 Rolls-Royce plc. All Rights Reserved. Permission to reproduce may be sought in writing to IP Department, Rolls-Royce plc, P.O. Box 31, Derby DE24 8BJ, United Kingdom.

The software disk accompanying this book and all material contained on it is supplied without any warranty of any kind. The publisher accepts no liability for personal injury incurred through use or misuse of the disk.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.

The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use. The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.

Cover design: eStudio Calamar S.L., Girona, Spain

Printed on acid-free paper

9 8 7 6 5 4 3 2 1

springer.com
Preface
There is now an overwhelming body of scientific research and political opinion which agrees that current patterns of energy and materials usage are unsustainable, whether in terms of availability or environmental impact. The problem is twofold. In the short-to-medium-term, the current approach to development is sub-optimal through inefficient utilisation of the world's resources while also causing unnecessary irreversible or long-term damage. In the medium-to-long-term, the earth's depleting resources and biophysical systems will struggle to withstand the exponential burden of over-population even at reduced levels of human ecological footprint. The severity of the problem is evident if one considers that the world's 43 main deltas are predicted to be under water within decades, removing one of the earth's most productive food regions that also happens to correspond to areas of significant human population density.

The challenges that face us tomorrow have already started yesterday and are shaped by the things we do today, or indeed do not do. Our lives today are based on the most basic manifestations of progress, such as quality of sustenance, domestic and social environments, mobility and leisure, and most significantly, are based on convenient and reliable energy production in the consumption and use of the world's resources. However, we are now at a turning point where we need to make decisions with objective reference to our longer-term quality of life, with respect to our own future generations and the 'global ecological justice' for those in all parts of the world. Sir David King (UK Chief Governmental Scientist) is of the opinion that climate change is a bigger threat than global terrorism and is the key challenge for the 21st century. However, the recent Stern Report (2006) proposed that the economics of meeting and working with climate change to achieve a sustainable future is not out of scale with current and future economic potential. Therefore, concurrent engineering through collaborative enterprise will have a crucial role in the 21st Century in the provision of a balanced solution to industrial and economic activity that respects environmental and sustainability requirements.

In the context of sustainable industry, companies must provide their products and services with greater resource efficiency and/or a reduced negative impact on the environment. In industrial processes, this would mean energy efficiency, resource conservation to meet the needs of future generations, safe and skill-enhancing working conditions, low waste production processes, and the use of safe and environmentally compatible materials. This can only be achieved for products and services through a concurrent engineering approach to a life-cycle balanced solution. Until recently the
emphasis in industrial processes has been on improving energy efficiency and, due to legislative requirements, there has been a shift towards improving the safe working conditions and skill-training of the work force. However, the current strategy is to give more emphasis to resource conservation, not only through "reduce, reuse and recycle" strategies, but also through innovative designs and the use of environmentally compatible materials. Materials technology is now seeing the utilisation of nanocomposites to enhance the mechanical properties and biodegradability of polymers, while advanced composites are being used in applications ranging from bridge decks to aircraft wings. Structural composites, polymers and even geopolymers are increasingly used in both the aerospace and construction industries to provide increased structural performance whilst reducing the volume and weight of materials, and the energy used to manufacture them. The value of good design and engineering is becoming more and more prevalent in the balance between meeting customer demands at an acceptable cost, whether economic, social or environmental.

Allied to the current strategy being taken up in many developed countries is the adoption of environmentally friendly and low carbon technologies, in which the release of greenhouse gases, such as carbon dioxide and nitric oxide, is kept to a minimum. Industry and the built environment are enormous users of energy, whether directly in processing or through the treatment of waste; some 40% of CO2 is generated by buildings, with the cement industry alone producing upwards of 5% of the world's CO2 emissions. In tandem with technological and process improvements, the economic incentive for concurrent engineering excellence may be enhanced and aided by certain economic instruments, such as carbon taxation and tradable pollution permits, to name but two debatable examples. However, in today's concurrent and collaborative engineering environment, reduction of carbon dioxide is being achieved by a combination of innovative approaches in the design and manufacturing process, operations, and the utilisation of materials, with supporting recycling and waste management strategies.

Another high profile example of the challenges facing us today is the aerospace industry, which accounts for some 2% of global CO2 emissions but is heavily dependent on oil, an energy source on which the world is overly dependent. The world's oil reserves are finite in the medium term, yet there is an immediate business, leisure and defence dependency on the compressed transportation time offered by air travel. There are also serious ecological impacts of air travel due primarily to pollution but also noise, as identified by ACARE in their VISION 2020 initiative. However, the demand for air transportation is predicted to rise exponentially over the next few decades, leading to a much greater potential impact on the environment. For this reason, the European Union has set targets for the year 2020 that include a reduction of nitric oxide emissions by 80%, carbon dioxide by 50%, noise by 12 dB, and cost by 50%, with a fivefold increase in safety. These targets have set challenges for the aerospace community in terms of innovation and integration that will
necessitate state-of-the-art concurrent engineering practices. The introduction of emission trading in the aviation industry may provide further economic incentive for reaching some of these targets, but dramatically new solutions from a concurrent engineering approach are being demanded in propulsion technologies and fuel, energy consumption, vehicle design, air transportation management and environmental footprint management.

The immediate response of many countries and governments has been to set ambitious targets in the field of renewable energies. For example, the Renewables Obligation in the UK targets an increase in the proportion of electricity provided by renewable sources of at least 10% by 2010, with suppliers to source a specific and annually increasing proportion from renewables until 2027. As well as wind and solar, this has led to renewed interest in marine renewable energy in the form of ocean waves and tidal currents as a vast and virtually untapped resource. However, the concurrent engineering challenge of harnessing this to produce economic and reliable energy is considerable; its commercial exploitation being in its infancy but expanding rapidly. This is all in the context of renewed interest in the potential solution provided through nuclear energy, perhaps best representing the complexity of the trade-offs to be considered in addressing the provision of energy to support our 21st Century lifestyles and patterns of consumption, but in a truly sustainable manner.

It is certain that socially, contemporary and future policy design in relation to combating climate change and managing the transition towards a post-carbon energy economy will require the 'upstreaming' of public engagement and widespread public acceptance and 'buy in'. Equally, the rise in the geo-political importance of 'energy security' has now become coupled with the policy and political debates around climate change and renewable energy generation. An issue here is the politics and deliberate use and misuse of the science around climate change within the popular media, making the whole issue of climate change and our responses to it confusing and incoherent for many citizens, consumers and policy-makers. These social and political considerations must be incorporated into the concurrent and collaborative engineering enterprise in order to make research policy-relevant as well as scientifically and technologically innovative.

It can be concluded that sustainable development is actually very positive in not only seeking technological solutions through a restricted short-term market view but rather, through a more expansive truly concurrent approach that must be adopted in synthesising all of the far-reaching requirements and implications relating to products and their intended operation, service provision and end-of-life. The need for sustainable development is increasingly driving the market to reach for new and innovative solutions that more effectively utilise the resources we have inherited from previous generations; with the obvious responsibility to our future generations. However, these solutions always need to be acceptable to governments, societies, local
communities and the individual consumer, and fundamentally, need to be economically viable in addressing 21st Century needs. Therefore, this will entail a just distribution of the costs, risks and benefits of economic development. The question of 'environmental justice', relative to environmental degradation and social exclusion, is emerging as a subject with enormous resonance in global, national and regional debates over sustainability and is an issue that institutions from the UN to local authorities are increasingly engaging with to promote the objectives of sustainable development. As a concept, environmental justice is explicitly recognised at a policy level by the EU and UK Sustainable Development strategies and in law by key EU and international sustainability instruments such as the UN Rio Declaration, the Aarhus Convention and, via the principle of common but differentiated responsibilities, the UN Kyoto Protocol. It is now true that even in the short-term, serious reputational, financial and legal risks are being faced by those acting in an irresponsible way towards the environment. It is only through interdisciplinary research developed in a truly concurrent and collaborative enterprise context that research solutions can be demonstrated to be "theoretically valid", "environmentally friendly" and irrefutably "economically viable" in the sustainable future.

In closing these thoughts on the future direction of concurrent and collaborative enterprise engineering, served through the International Society for Productivity Enhancement (ISPE), it is encouraging to refer to the proposition expounded by McDonough and Braungart in their book 'Cradle to Cradle'. Essentially, that we need to rethink the way in which we make things in order to revise the 'Cradle to Grave' philosophy of the Industrial Revolution that is inconsistent with nature's principles and sustainable evolution; that human productivity and progress can be positively engineered and managed in harmony with the provision and needs of our natural environment, rather than sustainability being viewed as negative fixed constraints. McDonough and Braungart propose a new and fresh approach that provides an alternative route to utilising and enjoying the resources that nature has provided us, in exploring our future destiny in a more sustainable manner. One century on from the Industrial Revolution, this is now the time of the Sustainable Revolution; requiring holistic technological, process and integrated solutions to evolved socio-economic needs that are currently not well met in a sustainable manner. It might surprise Albert Einstein that he rather well encapsulated the nature of this evolutionary struggle when he stated: "The world will not evolve past its current state of crisis by using the same thinking that created the situation".

And so it is our great pleasure to welcome you to go through the Proceedings of the 15th ISPE International Conference on Concurrent Engineering (CE2008) hosted by Queen's University Belfast in Bangor, Northern Ireland. Previous CE Conferences have been held in São José dos Campos, SP, Brazil (CE2007); Antibes-Juan les Pins, France (CE2006); Dallas, Texas, USA (CE2005); Beijing, China (CE2004); Madeira Island, Portugal (CE2003); Cranfield, UK (CE2002); Anaheim, USA (CE2001); Lyon,
France (CE2000); Bath, UK (CE99); Tokyo, Japan (CE98); Rochester, USA (CE97); Toronto, Canada (CE96); McLean, USA (CE95); and Pittsburgh, USA (CE94). The CE Conference series is organized annually by the International Society for Productivity Enhancement (http://www.ispe-org.net) and constitutes an important forum for international scientific exchange on concurrent and collaborative enterprise engineering. These international conferences attract a significant number of researchers, industrialists and students, as well as government representatives, who are interested in the recent advances in concurrent engineering research and applications. Concurrent engineering is a well recognized engineering approach for productivity enhancement that anticipates all product life cycle process requirements at an early stage in the product development and seeks to architect product and processes in a simultaneous and integrated manner. Therefore, it is fitting that this year the CE Conference Series considers "Product and Service Life Cycle Management for a Sustainable World" following on from last year's focus on "Complex Systems Concurrent Engineering: Collaboration, Technology Innovation and Sustainability". You are invited to consider all of the contributions made by this year's participants through the presentation of CE2008 papers collated into this Book of Proceedings, in the hope that you will be further inspired in your work in achieving Product and Service Life Cycle Management for a Sustainable World.
Ricky Curran General Chair CE2008 Queen's University Belfast Northern Ireland – UK
Shuo-Yan Chou Program Chair CE2008 National Taiwan University of Science and Technology - Taiwan
Amy Trappey Program Chair CE2008 National Taipei University of Technology - Taiwan
Program Committee
General Chair: Ricky Curran, Director of the Centre of Excellence for Integrated Aircraft Technology, QUB, UK; and Chair of Aerospace Management and Operations, Faculty of Aerospace Engineering, Technical University of Delft, The Netherlands
Program Chairs:
Shuo-Yan Chou, Professor of Industrial Management, National Taiwan University of Science and Technology, Taiwan; and visiting scholar at the Department of Industrial Engineering and Logistics Management, Hong Kong University of Science and Technology
Amy Trappey, Dean of the College of Business Administration, National Taipei University of Technology, Taiwan; and Professor of Industrial Engineering and Engineering Management, National Tsing Hua University, Taiwan
Organizing Committee
Knowledge Exploitation Chair: Tom Edgar, Director of the NITC, QUB, UK
Industry/Entertainments Chair: Colm Edgar, BPC Manager, NITC, QUB, UK
Management/Publicity Chair: Rory Collins, BPC, NITC, QUB, UK
Administrative Chair: Marie Teresa McGuire, CEIAT, QUB, UK
Logistics Chair: Joe Butterfield, CEIAT, QUB, UK
Conference Materials Chair: Yan Jin, CEIAT, QUB, UK
Conference Venue Chair: Ricky Gault, CEIAT, QUB, UK
Technical Review Chair: Shih-Wei Lin, Chang Gung University, Taiwan
Scientific Chairs: Shuo-Yan Chou, NTUST, Taiwan; Amy Trappey, NTUT, Taiwan
Conference Sponsorship Chairs: Ricky Curran, QUB & TU Delft, The Netherlands Shuichi Fukuda, Stanford University, USA Mark Price, CEIAT, IAT Cluster Director, QUB, UK Members: Brian Abernathy, Thales, UK Simeon Alsop, DELMIA, UK Robert Burke, Bombardier Director, UK Richard Cooper, CEIAT, QUB, UK Graham Collin, FG Wilson, UK Cathy Craig, Psychology, QUB, UK Carl Dalton, Galorath, UK Juliana Early, CEIAT, QUB, UK Brendan Hinds, CEIAT, QUB, UK Peter Hornsby, Materials Cluster Director, QUB, UK John Hsu, Boeing, USA Donna McLennan, Bombardier Manager, UK Tony McNally, Materials Cluster, QUB, UK Adrian Murphy, CEIAT, QUB, UK Raghu Raghunathan, CEIAT, QUB, UK Gordon Spratt, FG Wilson, UK Jian Wang, CEIAT, QUB, UK Brian Welsh, Bombardier Manager, UK
ISPE Advisory Committee Conference Advisory Chairs: Shuichi Fukuda, ISPE President, Stanford University, USA Geilson Loureiro, Brazilian Institute for Space Research (LIT/INPE), Brazil Members:
Ahmed Al-Ashaab, Cranfield University, UK John Cha, Beijing Jiaotong University, China Ricky Curran, Queens University, UK Ricardo Goncalves, UNINOVA, Portugal Parisa Ghodous, University of Lyon, France Geilson Loureiro, INPE, Brazil Jerzy Pokojski, Warsaw University, Poland Rajkumar Roy, Cranfield University, UK Mike Sobolewski, TTU, Texas, USA Amy Trappey, NTUT, Taiwan
International Scientific Committee Ahmed Al-Ashaab, School of Engineering and Built Environment, UK Alain Bernard, Institut de Recherche en Communications et en Cybernétique de Nantes, France Daniel Capaldo Amaral, Universidade de Sao Paulo, Brazil John Barry, Politics, QUB, UK Milton Borsato, Universidade Tecnológica Federal do Paraná, Brazil Robert Burke, Bombardier Director, UK Joe Butterfield, CEIAT, QUB, UK John Cha, Beijing Jiaotong University, China Hsueh-Ching Chen, Chaoyang University of Technology, Taiwan Kai-Ying Chen, National Taipei University of Technology, Taiwan Yu-Kumg Chen, Huafan University, Taiwan Shuo-Yan Chou, National Taiwan University of Science and Technology, Taiwan Chih-Hsin Chu, National Tsing Hua University, Taiwan Richard Cooper, CEIAT, QUB, UK Richard Curran, QUB, UK John Doherty, Qinetic, UK Yulia Ekawati, National Taiwan University of Science and Technology, Taiwan Joao Carlos Ferreira, Universidade Federal de Santa Catarina, Brazil S Fukuda, Stanford University, United States Ricky Gault, CEIAT, QUB, UK Parisa Ghodous, University of Lyon I, France Ricardo Gonçalves, UNINOVA, Portugal Raija Halonen, University of Oulu, Finland Kazuo Hatakeyama, Universidade Tecnológica Federal do Paraná, Brazil George Hutchinson, ISW Director, QUB, UK Kuan-Ying Hwang, Jinwen University of Science and Technology, Taiwan Haruo Ishikawa, The University of Electro-Communications, Japan Gudrun Jaegersberg, University of Applied Sciences, Zwickau, Germany Jeng-Ywan Jeng, National Taiwan University of Science and Technology, Taiwan Yan Jin, CEIAT, QUB, UK Da-Sheng Lee, National Taipei University of Technology, Taiwan Jimmy Lee, Institute for Information Technology, Taiwan
Justin J.Y. Lin, Chaoyang University of Technology, Taiwan Shih-Wei Lin, Chang Gung University, Taiwan Da-Chuan Liu, Industrial and Technological Research Institute, Taiwan Shih-Che Lo, National Taiwan University of Science and Technology, Taiwan Geilson Loureiro, LIT-INPE, Brazil Yiping Lu, Beijing Jiaotong University, China Yuan Ping-Luh, National Taipei University of Technology, Taiwan Donna McLennan, Bombardier Manager, UK Alexandre Moockel, Universidade Tecnológica Federal do Paraná, Brazil Jerzy Pokojski, Warsaw University, Poland Rajkumar Roy, Cranfield University, UK Lukasz Rauch, AGH-University of Science and Technology, Poland Roberto Silvio Ubertino Rosso Jr., UDESC, Brazil Henrique Rozenfeld, USP, Brazil James P. Scanlan, University of Southampton, United Kingdom Shana Smith, National Taiwan University, Taiwan Mike Sobolewski, TTU, Texas, USA Markus Stumptner, University of South Australia, Australia Kai Tang, The Hong Kong University of Science and Technology, Hong Kong Yung Ting, Chung Yuan Christian University, Taiwan Michel van Tooren, TU Delft, The Netherlands Amy J.C. Trappey, National Taipei University of Technology, Taiwan Charles Trappey, National Chiao Tung University, Taiwan Chao-Hua Wang, National Taichung Institute of Technology, Taiwan Jian Wang, CEIAT, QUB, UK Brian Welsh, Bombardier Manager, UK Stefan Wesner, University of Stuttgart, Germany Nel Wognum, University of Twente, the Netherlands Jyh-Cheng Yu, National Kaohsiung First University of Science and Technology, Taiwan
Sponsors
ISPE: International Society for Productivity Enhancement
Contents
Collaborative Engineering………………………...…………………………………..1 Distributed Collaborative Layout Design in Service-Oriented Architecture………..….3 Nan Li, Jianzhong Cha, Yiping Lu Resolving Collaborative Design Conflicts Through an Ontology-based Approach…..11 Moises Dutra, Parisa Ghodous, Ricardo Gonçalves Creating Value Within and Between European Regions in the Photovoltaic Sector….21 Gudrun Jaegersberg, Jenny Ure Agent-based Collaborative Maintenance Chain for Engineering Asset Management...29 David Hsiao, Amy J.C. Trappey, Lin Ma, Yu-Liang Chung, Yong-Lin Kuo
Collaborative Engineering Systems…………………………………………...…….43 Research on the Distributed Concurrent and Collaborative Design Platform Architecture Based on SOA…………………………………………………45 Jia-qing Yu, Jian-zhong Cha, Yi-ping Lu, Nan Li Collaborative Architecture Based on Web-Services…………………………………..53 Olivier KUHN, Moisés Lima Dutra, Parisa Ghodous, Thomas Dusch, Pierre Collet From Internet to Cross-Organisational Networking…………………………………...63 Lutz Schubert, Alexander Kipp, Stefan Wesner Grid-based Virtual Collaborative Facility: Concurrent and Collaborative Engineering for Space Projects………………………………………………………..77 Stefano Beco, Andrea Parrini, Carlo Paccagnini, Fred Feresin, Arne Tøn, Rolf Lervik, Mike Surridge, Rowland Watkins
Cost Engineering……………………………………………………………...……...87 Cost CENTRE-ing: An Agile Cost Estimating Methodology for Procurement………89 R. Curran, P. Watson Cost of Physical Vehicle Crash Testing……………………………………………...113 Paul Baguley, Rajkumar Roy and James Watson Estimating Cost at the Conceptual Design Stage to Optimize Design in terms of Performance and Cost………………………………………………….123 Mohammad Saravi, Linda Newnes, Antony Roy Mileham and Yee Mey Goh
DRONE………………………………………………………………………………131 Design for Sound Transmission Loss through an Enclosure of Generator Set……..133 Matthew Cassidy, Jian Wang, Richard Gault, Richard Cooper Design Tool Methodology for Simulation of Enclosure Cooling Performance……...143 Richard Gault, Richard Cooper, Jian Wang, Graham Collin Using Virtual Engineering Techniques to Aid with Design Trade-Off Studies for an Enclosed Generator Set……………………………………………………......153 Richard Gault, Richard Cooper, Jian Wang, Graham Collin Sound Transmission Loss of Movable Double-leaf Partition Wall………...………..163 Jian Chen, Jian Wang Modelling Correlated and Uncorrelated Sound Sources…………………………….173 Mark Boyle, Richard Gault, Richard Cooper, Jian Wang
Interoperability……………………………………………………………………...183 Backup Scheduling in Clustered P2P Network………………………………………185 Rabih Naim TOUT, Nicolas Lumineau, Parisa Ghodous, Mihai Tanasoiu Towards an Intelligent CAD Models Sharing Based on Semantic Web Technologies………………………………………………………………...…195 Samer ABDUL-GHAFOUR, Parisa Ghodous, Behzad Shariat, Eliane Perna
Towards an Approach for Multiple-View Semantic Model in Product Development……………………………………………………..………….205 Patrick Hofmann, Shaw C. Feng, Gaurav Ameta, Parisa Ghodous, Lihong Qiao, and Ram D. Sriram
Integrated Design……………………………………………………………...……215 Development of a Lightweight Knowledge Based Design System as a Business Asset to Support Advanced Fixture and Tooling Design……….………..217 Nicholas James Reed, James Scanlan, Steven Halliday Near Net-shape Manufacturing Costs……………………………………….………225 Stuart Jinks, James P. Scanlan, Dr S Wiseall Modelling the Life Cycle Cost of Aero-engine Maintenance………………………..233 James Wong, James P. Scanlan, Murat H. Eres Value Driven Design………………………………………………………………...241 Julie Mei Wen Cheung, James Scanlan, Steve Wiseall
Integrated Wing…………………………………………………….…………….....249 A Generic Life Cycle Cost Modeling Approach for Aircraft System………………..251 Y. Xu, Jian Wang, X. Tan, Ricky Curran Cost-Efficient Materials in Aerospace: Composite vs Aluminium…………………..259 Y. Xu, Jian Wang, X. Tan, Ricky Curran A Multi-Fidelity Approach for Supporting Early Aircraft Design Decisions…….….267 John J Doherty, Stephen R H Dean, Paul Ellsmore and Andrew Eldridge Cost Modelling of Composite Aerospace Parts and Assemblies…………………….281 R Curran, M Mullen, N Brolly, M Gilmour, P Hawthorn, S Cowan
Integrated Product Process Development………………..……….…………….…295 A Design Methodology for Module Interfaces……………………………………....297 Régis Kovacs Scalice, Luiz Fernando Segalin de Andrade, Fernando Antonio Forcellini
Reducing the Standard Deviation When Integrating Process Planning and Production Scheduling Through the Use of Expert Systems in an Agent-based Environment……………………………………………………………305 Izabel Cristina Zattar, Joao Carlos Ferreira, Paulo de Albuquerque Botura Extracting Variant Product Concepts Through Customer Involvement Model……...313 Chao-Hua Wang, Shuo-Yan Chou QFD and CE as Methodologies for a Quality Assurance in Product Development….323 Kazuo Hatakeyama, José Ricardo Alcântara
Information Systems………………………………………………..........................331 Integration of Privilege Management Infrastructure and Workflow Management Systems………………………………………………………………...333 Wensheng Xu, Jianzhong Cha, Yiping Lu A Comparative Analysis of Project Management Information Systems to Support Concurrent Engineering……………………………………………...….341 Camila de Araujo, Daniel Capaldo Amaral Location-Aware Tour Guide Systems in Museum…………………………………...349 Tsai Chih Yung, Shuo-Yan Chou, Lin Shih Wen PDM – University Student Monitoring Management System……………………….357 Prof. Jožef Duhovnika, Žiga Zadnik
Knowledge Based Engineering…………………………...….…….……………….373 Multidisciplinary Design of Flexible Aircraft………………….…………………….375 Haroon Awais Baluch, Michel van Tooren Service Oriented Concurrent Engineering with Hybrid Teams using a Multi-agent Task Environment……………………………………………………………………387 Jochem Berends and Michel van Tooren Systems Engineering and Multi-disciplinary Design Optimization………………….401 Michel van Tooren and Gianfranco La Rocca
Application of a Knowledge Engineering Process to Support Engineering Design Application Development……………………………………………………………417 S.W.G. van der Elst and M.J.L. van Tooren
Knowledge Engineering…………………………………………….………..……..433 Knowledge Based Optimization of the Manufacturing Processes Supported by Numerical Simulations of Production Chain…………………………….………435 Lukasz Rauch, Lukasz Madej, Paweł J. Matuszyk Characterization of Products Strategic Planning: a Survey in Brazil…….………..443 Alexandre Moeckel, Fernando Antonio Forcellini Using Ontologies to Optimise Design-Driven Development Processes…….……….451 Wolfgang Mayer, Arndt Muhlenfeld, Markus Stumptner CAD Education Support System Based on Workflow……………………………….461 Kazuo Hiekata, Hiroyuki Yamato, Piroon Rojanakamolsan Configuration Grammars: Powerful Tools for Product Modelling in CAD Systems………………………………………………………………...……469 Egon Ostrosi, Lianda Haxhiaj and Michel Ferney
Ontologies……………………………………….………………….……..………....483 A Semantic Based Approach for Automatic Patent Document Summarization……..485 Amy J.C. Trappey, Charles V. Trappey, Chun-Yi Wu Develop a Formal Ontology Engineering Methodology for Technical Knowledge Definition in R&D Knowledge Management…………………………...495 Amy J.C. Trappey, Ching-Jen Huang, Chun-Yi Wu Ontologia PLM Project : Development and Preliminary Results……………………503 Carla Amodio, Carlos Cziulik, Cássia Ugaya, Ederson Fernandes, Fábio Siqueira, Henrique Rozenfeld, José Ricardo Tobias, Kássio Santos, Marcio Lazzari, Milton Borsato, Paulo Bernarski, Rodrigo Juliano, Simone Araujo. Modelling and Management of Design Artefacts in Design Optimisation…………..513 Arndt Muhlenfeld, Franz Maier, Wolfgang Mayer, Markus Stumptner
PREMADE…………………….…………………………...…….……..………...…521 A Quantitative Metric for Workstation Design for Aircraft Assembly………..……..523 Yan Jin, Ricky Curran, Joseph Butterfield, Robert Burke An Integrated Lean Approach to Aerospace Assembly Jig and Work Cell Design Using Digital Manufacturing…………………………………………………………531 J. Butterfield, A. McClean, Y. Yin, R. Curran, R. Burke, Brian Welch, C. Devenny The Effect of Using Animated Work Instructions Over Text and Static Graphics When Performing a Small Scale Engineering Assembly…………………………….541 Gareth Watson, Dr Ricky Curran, Dr Joe Butterfield, Dr Cathy Craig Digital Lean Manufacture (DLM): A New Management Methodology for Production Operations Integration…………………………………………………...551 R. Curran, R. Collins and G. Poots
Sustainability………………………………………………………..……….…...…573 Simulation of Component Reuse Focusing on the Variation in User Preference……575 Shinsuke Kondoh, Toshitake Tateno, Nozomu Mishima, Mitsutaka Matsumoto Evaluation of Environmental Loads Based on 3D-CAD…………………………….585 Masato Inoue, Yumiko Takashima, Haruo Ishikawa Proposal of a Methodology applied to the Analysis and Selection of Performance Indicators for Sustainability Evaluation Systems……………………...593 Juliano Bezerra de Araujo, Joao Fernando Gomes Oliveira Ocean Wave Energy Systems Design: Conceptual Design Methodology for the Operational Matching of the Wells Air Turbine……………………………...601 R. Curran
Author Index………………………………………………………………………….617
Collaborative Engineering
Distributed Collaborative Layout Design in Service-Oriented Architecture
Nan Li a,1, Jianzhong Cha b, Yiping Lu b and Jia-qing Yu b
a Ph.D. Candidate, School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, China.
b School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, China.

Abstract. Current computer-aided layout design systems only support layout state generation, which is not ideal for engineering layout design based on distributed knowledge and an intelligent environment. This paper proposes a system framework to enable distributed engineering layout design in a service-oriented architecture. A federated layout design system based on the Service-ORiented Computing EnviRonment (SORCER) implements the framework. In order to supply design services to users, distributed design resources and design tools can be wrapped as SORCER service providers, and the users should be wrapped as service requestors so that they can join the federated layout design system. A layout design interface protocol is developed to define standardized design services for the whole layout process. The protocol content includes standard layout component and container representation, design parameters, layout state representation, design constraint representation, human-computer interaction commands, etc. Data interoperability between services is enhanced by design context communication. In order to be freely loaded and used in the federated layout design system, each service needs to implement the interface protocol strictly. This system aims to enable asynchronous distributed collaborative design with ease of alternative design services, reduced design cycles, and improved layout resolution quality.

Keywords. Distributed collaborative design, Service-oriented architecture, Layout, Layout design interface protocol
1 Introduction

Collaborative complex product layout design among designers, manufacturers, suppliers and vendors is one of the keys for designers to improve product design quality, reduce cost, and shorten the design cycle in today's global competition. Distributed intelligent resources participate in layout approach
1 Ph.D. Candidate, School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, No. 3 Shangyuan Residence, Haidian District, Beijing, China; Tel: 8610-51685335; Email: [email protected]; http://www.bjtu.edu.cn/en
development, layout components modelling, design decision making and product information sharing across local boundaries in an Internet-enabled distributed environment. Currently, some approaches for automatic engineering layout design can generate good layout resolutions [1-2], and some integrative computer-aided layout design (CALD) systems support the whole engineering layout design process [5]. But compared to traditional stand-alone CALD systems, there are new issues that need to be resolved in a distributed collaborative CALD system based on service-oriented architecture (SOA), for example: (1) Design service providers and service requestors: design resources, design knowledge and design tools should be wrapped as service providers or service requestors, so that they can work in a distributed computing environment based on SOA. (2) Service registry, service lookup and service proxy. (3) Service management. (4) System security. (5) Layout Design Interface Protocol (LDIP): each layout design service and service requestor needs to implement the LDIP, so that they can join the environment with loose coupling. Because the Service-ORiented Computing EnviRonment (SORCER) [6-8] can deal with most of the issues mentioned above, we build our federated layout design system (FLDS) on top of the SORCER platform. An LDIP was developed for the services in our system.
2 Framework of FLDS based on SORCER

SORCER is a federated service-to-service metacomputing environment that treats service providers as network objects with well-defined semantics of a federated service object-oriented architecture [6].
Figure 1. Framework of FLDS
Figure 1 illustrates the framework of the FLDS. The design requestors should be wrapped as services so that they can join the FLDS. A design proxy—a network object implementing the same LDIP as its service provider—is always ready to be called by service requestors. As shown in Figure 2, the technical details of service registry, service lookup and service employment are hidden by SORCER. The layout design system layer only needs to deal with layout design service building, service management and design process control.
Figure 2. Layered platform of FLDS
The LDIP is fixed and known beforehand by the provider and requestor. Using our mechanism, a requestor can use this fixed protocol and a service description obtained from a service registry to create a proxy for binding to the design service provider and for remote communication over the fixed protocol.
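As a rough illustration of this binding step (not taken from the actual SORCER or Jini API), the Java sketch below shows how a requestor could obtain a proxy from a lookup registry and invoke it through the fixed protocol; the ServiceRegistry facade, the LayoutDesignService interface and the method names are hypothetical stand-ins.

// Hypothetical requestor-side binding sketch; names are illustrative only.
import java.rmi.RemoteException;

public class LayoutRequestor {

    // Stand-in for a Jini/SORCER lookup service; not the real API.
    interface ServiceRegistry {
        <T> T lookup(Class<T> protocol) throws RemoteException;
    }

    // Stand-in for the fixed layout design interface protocol (LDIP).
    interface LayoutDesignService {
        String solveLayout(String designParametersXml) throws RemoteException;
    }

    public String requestLayout(ServiceRegistry registry, String parametersXml)
            throws RemoteException {
        // The registry returns a proxy object that implements the same LDIP
        // as the remote provider; the requestor only ever sees the protocol.
        LayoutDesignService proxy = registry.lookup(LayoutDesignService.class);
        return proxy.solveLayout(parametersXml);   // remote call over the fixed protocol
    }
}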
3 Layout design interface protocol

The LDIP plays an important role in the FLDS. Each service should find a matching service provider according to the protocol. Both design data interoperability and design information communication need the implementation of this fixed protocol. In order to get the same kind of service, a service requestor can employ different design service providers which implement the same LDIP. The main contents of this protocol include: (1) Layout components and containers model format: a standard representation for 3-D layout component and container modelling. If it implements this interface protocol, a general CAD system can supply a layout components modelling service for the FLDS as a service provider. (2) Design parameters: a design service requestor can implement this interface protocol to submit user needs to the FLDS. (3) Layout state description: every service which wants to use a layout resolution should implement this protocol. This interface protocol describes all information of a layout result.
(4) Layout constraints model format: a standard representation for 3-D layout constraint modelling. A constraint modelling tool should implement this interface protocol to supply a constraint modelling service to the FLDS. (5) Algorithm interface: algorithms which implement this interface can supply a layout optimization service for the FLDS. (6) Evaluation parameter structure: an evaluation service should implement this interface protocol to supply an evaluation service for the FLDS. (7) Human-computer interaction command: the command is used to operate some services with a GUI in batch mode; for example, a modelling command stream is used to build component models automatically on the components modelling service. (8) Multimedia report interface protocol: this protocol supports building a multimedia layout result report. A report service should implement this interface for custom-built report generation. The LDIP includes a mass of engineering layout design knowledge. Thus, more information and rules will be added into the protocol structure in the future. Figure 3 illustrates a demonstration of the layout components and containers model format mentioned above: an engine system model. This XML-based model format can be used in the FLDS arbitrarily.
Figure 3. An engine modelling with standard layout components and containers model format
Thanks to their powerful ability to describe domain data, SORCER Context [6] and XML are good carriers for the LDIP. SORCER Context is used as the runtime communication carrier, and XML is employed as the data storage format.
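To make the protocol items (1)-(8) above more concrete, the following sketch shows what a Java rendering of the LDIP could look like, with roughly one method per protocol element and XML strings (SORCER contexts at runtime) as payload. All type and method names here are hypothetical illustrations; the paper does not publish the actual LDIP signatures.

import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical Java rendering of the Layout Design Interface Protocol (LDIP).
// Each method corresponds to one of the protocol items (1)-(8) above; payloads
// are exchanged as XML documents (stored form) or SORCER contexts (runtime form).
public interface LayoutDesignProtocol extends Remote {

    // (1) Components and containers: register a 3-D component/container model.
    void submitComponentModel(String componentModelXml) throws RemoteException;

    // (2) Design parameters: user needs submitted by a requestor.
    void submitDesignParameters(String designParametersXml) throws RemoteException;

    // (3) Layout state: full description of a layout result.
    String getLayoutState() throws RemoteException;

    // (4) Constraints: 3-D layout constraint model supplied by a modelling tool.
    void submitConstraints(String constraintModelXml) throws RemoteException;

    // (5) Algorithm interface: run a layout optimisation pass.
    String optimise(String layoutStateXml, String constraintModelXml) throws RemoteException;

    // (6) Evaluation: score a candidate layout against evaluation parameters.
    double evaluate(String layoutStateXml, String evaluationParametersXml) throws RemoteException;

    // (7) Human-computer interaction commands, e.g. a modelling command stream
    //     used to drive a GUI service in batch mode.
    void executeCommand(String commandStream) throws RemoteException;

    // (8) Multimedia report: build a custom layout result report.
    byte[] buildReport(String layoutStateXml, String reportTemplateId) throws RemoteException;
}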
4 Implementation of FLDS based on SOA

The FLDS builds on top of SORCER to introduce an intelligent distributed collaborative design system. Any knowledge resource or intelligent resource can build its own service according to the LDIP and launch the service into the FLDS as a component. More than one service may implement the same function in the FLDS, which leads to service competition: the users or the service employer can choose the best service from all services with the same function in the FLDS, where "best" means best quality, best efficiency, least cost, etc.

4.1 Services structure of FLDS

Figure 4 illustrates the hierarchical services structure of the FLDS. Users play two roles in our system: service provider and service requestor. As a service provider, a user supplies layout design requirements to other services in the FLDS. As a service requestor, a user can monitor the whole design process and get the final layout design result. Every service is autonomous and can call other services in the FLDS. The employer service does not need to care about what happens in employee services, even when an employee service calls other services in turn.
Figure 4. Hierarchical services structure of FLDS
As shown in Figure 4, the engineering layout design service is an integrated service supplying the whole layout design function. The user only needs to call the engineering layout design service to start the layout design process. Every call between services should follow a matching interface protocol.

4.2 Implementation of pivotal services of FLDS

An FLDS service must be a SORCER service first, and two ways are used to build a SORCER service: to wrap general software as SORCER services or to build SORCER applications directly. Figure 5 illustrates how to build an FLDS service from applications.
Figure 5. Build a FLDS service
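To give a feel for this wrapping step, the hedged sketch below shows how an existing stand-alone layout tool could be exposed through the (assumed) protocol interface from the previous section; registration with the SORCER/Jini lookup service is omitted, because the real SORCER provider base classes are not described in this paper, and all names here are hypothetical.

import java.rmi.RemoteException;

// Hypothetical provider-side wrapper: an existing layout optimisation tool is
// exposed through the assumed LayoutDesignProtocol so the FLDS can employ it.
public class SimulatedAnnealingLayoutProvider {

    private final Object legacyOptimiser;   // the wrapped stand-alone application

    public SimulatedAnnealingLayoutProvider(Object legacyOptimiser) {
        this.legacyOptimiser = legacyOptimiser;
    }

    public String optimise(String layoutStateXml, String constraintModelXml)
            throws RemoteException {
        // 1. Translate the standard XML payloads into the tool's native input.
        // 2. Run the wrapped optimiser (e.g. a simulated annealing code such as [5]).
        // 3. Translate the native result back into the standard layout state format.
        return runLegacyTool(layoutStateXml, constraintModelXml);
    }

    private String runLegacyTool(String state, String constraints) {
        // Placeholder for the call into the legacy application.
        return state;
    }
}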
Some pivotal design tools of the FLDS were developed, and the system was demonstrated through a real engineering application, vehicle engine compartment layout design (as shown in Figure 6).
Figure 6. Pivotal services of FLDS
Not all of the services shown in Figure 3 are necessary for an engineering layout design task. The users or designers can organize the services and add or delete them according to their needs. In the example—vehicle engine compartment layout design (shown in Figure 6)—the user employed the layout evaluation service, knowledge-based layout service, layout algorithm service, layout constraints modelling service, layout components choice service and geometrical modelling service to deal with the task. A service provider can supply a Java-based GUI to service requestors for human-computer interaction. Simple interactions can be wrapped as a SORCER GUI, which can be loaded by the Jini [4] service browser IncaX [3] (as shown in Figure 6). In this case, service users only need to run IncaX and call the service GUI to get human-computer interaction. In contrast, complicated interactive applications should be wrapped into a Rich Client Program (RCP) (such as the knowledge-based layout service shown in Figure 5), so that users must run the integrated RCP to call the services they want.
Figure 7. Jini service browser: Inca X
5 Conclusions

This paper presents a federated system for distributed collaborative engineering layout design in SOA to enhance automatic layout design ability. A layout design interface protocol is developed to define standardized design services for the layout process. As a computing and metacomputing grid environment, SORCER was employed as the underlying platform to build our FLDS—a highly flexible software system. Using the FLDS and the layout design interface protocol, engineers can arbitrarily organize and manage the layout design services. The FLDS enables asynchronous distributed collaborative design with ease of alternative design services, reduced design cycles, and improved layout resolution quality.
Given its computational and modelling complexity, the engineering layout design problem will need to assemble more design resources to enhance design ability in the future.
6 Acknowledgement This research is supported by BJTU Research Foundation under grant No. 2006XZ011.
7 References [1] [2] [3] [4] [5]
[6] [7] [8]
Aladahalli C, Cagan J, and Shimada K, Objective function effect based pattern search—an implementation for 3D component layout. ASME Journal of Mechanical Design 2007; 129(3); 255-265. Cagan J, Degentesh D, and Yin S, A simulated annealing-based algorithm using hierarchical models for general three-dimensional component layout, Comput.-Aided Des. 1998; 30(10); 781–790. Inca X Service Browser for Jini Technology. Available at:
. Accessed on: Mar. 23th, 2008. Jini architecture specification, Version 1.2. Available at: . Accessed on: Mar. 23th, 2008. Li N, Cha J Z, Lu Y P, el a1. A simulated annealing algorithm based on parallel cluster for engineering layout design. In: Complex Systems Concurrent Engineering: Proceedings of 14th ISPE International Conference on Concurrent Engineering, Sao Paulo, 2007; 83–89. Sobolewski M. SORCER: computing and metacomputing intergrid. Retrieved Mar. 23, 2008; from: http://sorcer.cs.ttu.edu/publications/papers/2008/iceis-intergrid08.pdf. Sobolewski M. Service-oriented programming. SORCER Technical Report SL-TR-13. Retrieved Mar. 23, 2008; from: . SORCER Lab. Retrieved March 23, 2008, from: .
Resolving Collaborative Design Conflicts Through an Ontology-based Approach
Moisés Dutra a, Parisa Ghodous b,1 and Ricardo Gonçalves c

a PhD Student, LIRIS Laboratory, University of Lyon 1, France, [email protected].
b Head of the Collaborative Modelling Team, LIRIS Laboratory, University of Lyon 1, France, [email protected].
c Head of the Group for the Research in Interoperability of Systems (GRIS), Uninova Institute, Portugal, [email protected].

Abstract. This paper presents an ontology-based approach to resolving conflicts in collaborative design. In a collaborative design environment, achieving a global design of a product implies that the proposed model is realisable and acceptable to all participants involved in the design project. Whenever this does not happen, we have a conflicting situation. The work presented here is based on the use of ontology modelling (OWL) to represent knowledge and thus to enable a reasoning process. The results of this reasoning, the conflicting axioms detected, are used as the starting point for a conflict resolution process. First, an automatic approach is tried. In case of failure, the next step is direct interaction among the project participants, i.e., negotiation and mediation. A small electrical connector is taken as an example to illustrate our approach.

Keywords. Collaborative design, conflicts, ontologies, constraints, negotiation, case-based reasoning
1 Introduction

The time and resources required to resolve conflicting situations in collaborative design have increased proportionally to the complexity of modern industrial systems. According to [14], more and more companies use geographically distributed knowledge, resources and equipment. The collaborative design process is typically expensive and time-consuming because strong interdependencies between design decisions make it difficult to converge on a single design that satisfies these dependencies and is acceptable to all participants [7]. Concurrent engineering brings new ways of organising design and manufacturing activities, introducing
1 Laboratory of Computer Graphics, Images and Information Systems (LIRIS); Bâtiment Nautibus, 43, bd. du 11 Novembre 1918, 69622 Villeurbanne cedex, France; Tel: +33 (0) 4 72 44 58 84; Fax: +33 (0) 4 72 43 13 12.
deep modifications, such as the concurrent realisation of product life cycle tasks. The collaborative approach also emphasizes the integration of all disciplines that contribute to the product development. Early-stage design is a very important part of this approach, as important decisions are made considering the entire project life cycle [6, 12]. Hence, conflict attenuation and resolution in early-stage design are essential points to be considered. Conflicts can be extremely hungry for resources such as development time, budget and materials. Preventing them at this point – rather than later – is preferable, as it enhances the chances of success for consecutive design phases. This process involves identification and categorisation of conflicts and notification of the different parties involved, in order to put the situation under control as soon as possible [10]. When early conflict detection is not possible, or not successful, a conflict resolution process must be undertaken. This paper presents an approach for conflict resolution in collaborative design that takes into account the results obtained by an ontology-based conflict detection process [2, 3].
2 Conflict dealing in collaborative design

Many approaches have arisen to deal with conflicts in collaborative design. Among them, we chose to highlight the following ones: ontologies; thesauri; prototyping; constraint checking; constraint relaxation; case-based reasoning; rule-based reasoning; priorities management; negotiation and mediation.

Ontologies and thesauri are resources used to resolve linguistic conflicts. While the use of ontologies permits dealing with more complex conflicts, providing exact terminology is an accurate approach to mitigate meaning-based conflicts – the polysemic ones – so for this kind of conflict a thesaurus is suitable [4].

Simulation tools are used to detect conflict inconsistencies [13]. Virtual prototypes permit the detection of structural-level interferences and simulators permit the evaluation of objects being used in the design. The use of these tools envisages detecting eventual conflicts [12].

Constraints are used to represent a system's requirements, in order to enhance the collaboration process. Requirements are represented as groups of variables in spaces of feasible values. Such spaces improve efficiency through avoiding artificial conflicts, improving design flexibility, enhancing change management and assisting conflict resolution [9]. Constraint checking is an automatic task performed to verify the consistency of a given model. Defined constraints may be relaxed during the negotiation process – if necessary – to facilitate the search for a solution.

Case-based reasoning is the process of solving new problems based on solutions to similar past problems. In this case, the most common past solutions are taken as the starting point to solve the new problem [6].

Rule-based reasoning takes predefined rules / statements as parameters to check the given model. It is quite similar to constraint checking, except that the rules
defined by it are not design specifications; instead, they should be seen more like a to-do list, to be followed whenever a conflict appears.

The negotiation process involves direct interaction among designers, to find the best solution for everybody involved in the design project [8]. Priorities management and mediation are two tasks accomplished by the project manager to solve the problem. Priorities can be attributed to designers, to knowledge areas, to specific topics, to product subparts, etc. They are used to establish a "rank of importance". Mediation is a unilateral decision, made when the negotiation process fails and there are no more chances of success. It is an extreme solution and should be avoided as much as possible.

2.1 Using ontology modelling to detect conflicts

The use of ontology modelling in collaborative design has proven to be a prominent approach to detect conflicts in early-stage design [2]. Besides, representing knowledge in the Web Ontology Language (OWL)2 offers a reasonable trade-off between expressibility and decidability, which, when used to verify product specifications in collaborative design, may serve as an efficient conflict attenuator. OWL supports automated reasoning and, to this effect, has a formal semantics based on Description Logics (DL) – typically a decidable subset of First Order Logic – suitable for representing structured information about concepts, concept hierarchies and relationships between concepts. The decidability of the logic ensures DL reasoners can be built to check OWL ontology consistency, i.e., verify whether there are any logical contradictions in two or more ontology axioms. Furthermore, reasoners can be used to infer from the asserted information, e.g., to infer whether a particular ontology concept is a subconcept of another, or whether a particular individual in a given ontology belongs to a specific class. According to [11], a typical OWL reasoner provides at least the standard set of Description Logic inference services, namely: consistency checking, concept satisfiability, classification and realisation.
3 An ontology-based detection of conflicts

In our collaborative architecture [6], designers are grouped by clusters of knowledge and expertise. Each one of these clusters is called an agency. Inside the agencies, designers collaborate to achieve a common design model, and ontologies are used to represent such models [3]. At this stage, intra-agency collaboration is done. Once this step is completed, the common ontologies obtained there are merged together at a higher level, the inter-agency collaboration level. At this higher level, the common ontology obtained is, then, the final design solution (Figure 1).
2 http://www.w3.org/TR/owl-features/
Figure 1. Collaboration levels and common ontologies
Each expert designs his own model according to his expertise and knowledge skills. A mechanical engineer will, naturally, be concerned about the mechanical structure of the product and its components. In the meantime, a thermal engineer will be more concerned about heating and temperature control for a specific part of the same product. In this approach, both models – mechanical and thermal – are merged into a common one, comprising mechanical and thermal specifications. If we consider other knowledge – or interest – areas in such an architecture, there will be one agency for each considered area. Thus, electrical engineers, material engineers, raw material suppliers, manufacturers, people from the distribution department, clients, vendors, marketing people – among others, that is, every expertise involved in the design process – will be contemplated with an agency.

In our architecture, it is mandatory to consider the publication of propositions in public spaces. Publishing a proposition means merging different proposed instances of a product model into the design space. This merging is only possible if there is no interference between the elements, which means, if they are all coherent. To guarantee such a scenario, two operations are processed at the moment of publication: constraint checking and ontology reasoning (Figure 2).
Figure 2. Ontology publication in public spaces
Constraint checking is part of the conflict attenuation approach. It uses predefined rules / statements to ensure that coherent data will be published. This step is not
collaborative, as it does not take into account other proposals, but only the one being published. However, as constraint checking is not enough to guarantee data coherence, an OWL reasoning step is performed right after it. To illustrate this situation, let us take an electrical connector as an example. Such a connector comprises different subparts, e.g., spring, shell, screw, cable, conductor, etc. Considering a small collaborative design project, where two engineers model the same piece differently, according to their personal convictions, two concurrent ontology instances will be given, representing the same product (Figure 3).
Figure 3. Two different points of view through the same product
As can be seen in Figure 3, the second designer's Shell concept is equivalent to the Body concept in the first designer's model. In the second one, however, the Spring concept is no longer linked directly to the Body / Shell concept but, instead, directly to the Case concept. The conflict detection process (which comprises an ontology reasoning) undertaken in [2] has attested the inconsistency of these models, as well as the impossibility of merging them. In this case, we say the Connector concept has been detected to be unsatisfiable. Discovering such information is essential to advance to the next step, the conflict resolution process.
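A detection step of this kind can be reproduced with any standard DL reasoner. The sketch below is only an illustration, not the system's actual code: it assumes the Java OWL API (3.x) and the HermiT reasoner are on the classpath, and the file name and ontology IRI are hypothetical. It loads a merged connector ontology and asks whether the Connector class is satisfiable.

import java.io.File;

import org.semanticweb.HermiT.Reasoner;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.IRI;
import org.semanticweb.owlapi.model.OWLClass;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

public class ConnectorConsistencyCheck {

    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        // Hypothetical file holding the merged proposals of both designers.
        OWLOntology merged = manager.loadOntologyFromOntologyDocument(new File("connector-merged.owl"));

        // HermiT is used here only as an example; any OWLReasonerFactory would do.
        OWLReasoner reasoner = new Reasoner.ReasonerFactory().createReasoner(merged);

        System.out.println("Ontology consistent: " + reasoner.isConsistent());

        OWLClass connector = manager.getOWLDataFactory()
                .getOWLClass(IRI.create("http://example.org/connector#Connector"));
        // An unsatisfiable Connector class signals a conflict to be resolved.
        System.out.println("Connector satisfiable: " + reasoner.isSatisfiable(connector));
    }
}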
4 The conflict resolution approach

In our architecture, all system data and information are stored in a blackboard (Figure 4). This blackboard comprises two subspaces:
• Solution space: This space stores the system database; merged ontologies (produced after the collaboration process); and predefined ontologies (if applicable), to be used as "standard models" by designers.
• Collaboration space: Space where collaboration is done.
Questions Area: can be seen as the "FAQ" area of the system; designers use it to clarify general doubts and problems related to the design process.
Coordination Area: the project manager's workspace.
Interaction Area: the designers' communication area, where they can notify one another, leave messages, make propositions and suggestions, and argue.
Conflicts Area: activated whenever a conflict is detected; it is responsible for the conflict resolution process.
Figure 4. Blackboard and conflicts area
4.1 Resolving a conflict

Once a conflict is detected, the system first tries to resolve it in an automated way. Three options are available for this: definition of priorities, case-based reasoning and rule-based reasoning. Defining priorities is a project manager's task: he is the one in charge of stating that a specific designer or model has priority over another. In this case, and provided there is consistency, the higher-ranked ontology is set as the common model. It must be highlighted, nevertheless, that defining priorities is not mandatory. Next, the system examines past cases. Here it uses an analogy-based approach to make a decision, based on what has already happened in similar situations. However, since this step is far from easy because of the large number of variables involved (in the extreme, one could say that only rigorously identical ontologies can be compared), we do not take the solution obtained as definitive; rather, we send it to the designers concerned for evaluation.
If some rules have been defined, a deduction process may take place. This step uses the same principles the constraint checker applies for conflict attenuation; both approaches are based on ontology rules expressed in the SWRL3 language. A constraint relaxation process may also be undertaken here, if the project manager so decides. If the conflicting situation persists at the end of the automated process, a negotiation process is started. This time designers take part directly in all phases of the task. First of all, each designer concerned by the conflict receives a system notification stating that an ontology merge has failed and a conflicting situation has arisen. This notification must "translate" the detected inconsistency into comprehensible language, since designers are not necessarily ontology experts. In the example shown in Figure 3, the Connector concept was detected to be unsatisfiable: the incoherence arose because the Cable and Conductor concepts – which are disjoint – were asserted to be equivalent. Consequently, a typical notification like the one shown in Figure 5 is sent to the designers concerned.
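How such a reasoner result might be turned into the kind of designer-readable notification of Figure 5 can be sketched as follows; the message wording and the use of the reasoner's list of unsatisfiable classes are illustrative only, not the system's actual implementation.

    import org.semanticweb.owlapi.model.OWLClass;
    import org.semanticweb.owlapi.reasoner.OWLReasoner;

    import java.util.StringJoiner;

    public class ConflictNotifier {
        // Builds a plain-language message of the kind shown in Figure 5 from the
        // classes the reasoner reports as unsatisfiable in the merged ontology.
        public String buildNotification(OWLReasoner reasoner) {
            StringJoiner names = new StringJoiner(", ");
            for (OWLClass c : reasoner.getUnsatisfiableClasses().getEntitiesMinusBottom()) {
                names.add(c.getIRI().toString());
            }
            if (names.length() == 0) {
                return "The ontologies were merged successfully.";
            }
            return "The merging of your proposal has failed: the concept(s) " + names
                 + " cannot be satisfied together with your partners' models. "
                 + "Please contact the designers concerned to resolve the conflict.";
        }
    }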
Figure 5. System notification of the conflict
The recipient designer must then contact his partner(s) in order to resolve the situation. They should talk and attempt to produce together a solution feasible for everybody. They should also check each other's ontologies – by accessing the public workspaces (intra- or inter-agency) – and ask for other designers' opinions. In short, they are expected to collaborate as much as they can to resolve the conflict. However, if this proves impossible for any reason, a mediation process is undertaken by the project manager, who is, at this last stage, the "referee of the quarrel".
3 http://www.w3.org/Submission/SWRL/
5 Closing Remarks

Resolving collaborative design conflicts is a hard task. The differences among designers have to be taken into account, especially in a very large-scale design project: each of them has a different background, different expertise and different cultural and social points of view, and it is never easy to resolve problems when such a diverse set of people is involved. In our collaborative architecture, we chose to use ontologies to model these different kinds of knowledge and expertise. We consider OWL a very efficient approach to representing knowledge in collaborative environments. Moreover, OWL reasoning facilitates coping with conflicts in early-stage design, as it permits the detection of inconsistencies. The latest trends in this research domain – along with the several ongoing projects on ontology merging and aligning [1] – have encouraged us to keep moving in this direction, and our latest results have so far matched our expectations. In our proposal, the whole resolution process relies on the OWL representation of information. Consequently, two different scenarios arise. In the first, a standard ontology is given, i.e. defined before the start of the collaboration process; in this context, designers take this "standard ontology" and use it to build their personal models. In the second, we start from different kinds of representation: each designer works with the format he is used to, e.g. STEP protocols (ISO 10303), the Function-Behaviour-Structure framework [5], natural language, among others. In this case, the collaborative platform is responsible for harmonising them, merging them into a common ontology, which is then used as the "standard" for the reasoning process. This harmonisation process is the next step of our work.
6 References
[1] De Bruijn J, Martin-Recuerda F, Manov D, Ehrig M. State-of-the-art survey on Ontology Merging and Aligning. EU-IST Integrated Project (IP) IST-2003-506826 SEKT, Deliverable D4.2.1 (WP4), Digital Enterprise Research Institute, University of Innsbruck, 2004.
[2] Dutra M, Ferreira da Silva C, Ghodous P, Gonçalves R. Using an Inference Engine to Detect Conflicts in Collaborative Design. To appear in the 14th International Conference on Concurrent Enterprising (ICE 2008), Lisbon, Portugal, June 2008.
[3] Dutra M, Ghodous P. A Reasoning Approach for Conflict Dealing in Collaborative Design. In Proceedings of the 14th International Conference on Concurrent Engineering (CE2007), Springer-Verlag, São José dos Campos, Brazil, July 2007.
[4] Falquet G, Jiang CLM. Conflict Resolution in the Collaborative Design of Terminological Knowledge Bases. In Proceedings of the 12th International Conference on Knowledge Engineering and Knowledge Management (EKAW), Juan-les-Pins, France, October 2000:156-171.
[5] Gero JS, Kannengiesser U. The situated function-behaviour-structure framework. Design Studies, Elsevier, UK, 2004;25(4).
[6] Ghodous P. Modèles et Architectures pour l'Ingénierie Coopérative [in French]. Habilitation Thesis, University of Lyon 1, Lyon, France, 2002.
[7] Klein M. The Dynamics of Collaborative Design: Insights From Complex Systems and Negotiation Research. In Complex Engineered Systems, ISBN 978-3-540-32831-5, Springer Berlin/Heidelberg, 2006:158-174.
[8] Klein M. Supporting Conflict Resolution in Cooperative Design Systems. IEEE Transactions on Systems, Man and Cybernetics, Special Issue on Distributed Artificial Intelligence, 1991;34(6).
[9] Lottaz C, Smith IFC, Robert-Nicoud Y, Falting BV. Constraint-based support for negotiation in collaborative design. Artificial Intelligence in Engineering 14, Elsevier Science Ltd., 2000.
[10] Matta N, Corby O. Conflict Management in Concurrent Engineering: Modelling Guides. Proceedings of the European Conference on Artificial Intelligence, Workshop on Conflict Management, Budapest, Hungary, 1996.
[11] Sirin E, Parsia B, Cuenca Grau B, Kalyanpur A, Katz Y. Pellet: A practical OWL-DL reasoner. Journal of Web Semantics: Science, Services and Agents on the World Wide Web. In E. Wallace (Ed.) Software Engineering and the Semantic Web, 2007;5(2):51-53.
[12] Slimani K, Ferreira da Silva C, Médini L, Ghodous P. Conflict mitigation in collaborative design. International Journal of Production Research, ISSN 0020-7543, Taylor & Francis, 2006.
[13] Sriram RD. Distributed and Integrated Collaborative Engineering Design. Savren, ISBN 0-9725064-0-3, 2002.
[14] Xie H, Neelamkavil J, Wang L, Shen W, Pardasani A. Collaborative conceptual design – state of the art and future trends. Computer-Aided Design, 2002;34:981-996.
Creating Value Within and Between European Regions in the Photovoltaic Sector

Gudrun Jaegersberg a,1 and Jenny Ure b

a School of Economics, University of Applied Sciences at Zwickau, Germany
b School of Informatics, University of Edinburgh, UK
Abstract. The paper presents emerging evidence that Europe is ideally positioned to leverage diverse local strengths and SME-led innovation in renewable energy clusters across regions. The authors highlight the outcomes of a series of action research studies looking at value creation within and between regions in the traditional and the renewable energy sectors, and comment on the emergence of models of competitiveness dependent on local innovation that are consonant with the aim of the Lisbon Strategy to create a competitive Europe based on knowledge-based innovation and support for SMEs.
Keywords. Value creation, photovoltaics, European regions, concurrent enterprise engineering, innovation, competitiveness.
1 Introduction

The Lisbon and Lisbon/Gothenburg strategies [11, 12] show a clear commitment by the European Union to sustainable development, with the aim of fostering economic, social and environmental renewal to make Europe the most competitive and dynamic knowledge-based economy in the world. By setting the highest binding target worldwide for the share of renewable energy in overall energy consumption (20% by 2020)2 and providing both incentives and penalties, the EU may give European companies a degree of first-mover advantage by 2012, when the international agreement replacing the Kyoto Protocol is expected to be adopted by the US, Russia and Japan as well as the EU. To manage the triple bottom line (economic, social and environmental) successfully, Europe requires policies and practices that will leverage its diverse regional strengths and knowledge assets. Innovative forms of enterprise interoperability are required to better integrate the local knowledge of SMEs in the process of value creation, since SMEs are the economic engine of the EU economy.
1 Professor, University of Applied Sciences at Zwickau/Germany, School of Economics, Dr.-Friedrichs-Ring 2a, 08056 Zwickau, Germany; Tel: +49 (0) 375 5363463; Fax: +49 (0) 375 5634104; Email: [email protected]; http://www.fh-zwickau.de
2 Set by the European Council in 2007.
SMEs represent 99% of all European enterprises, contribute two thirds of European GDP and provide 75 million jobs in the private sector [2]. Accordingly, SMEs are core to the implementation of the Lisbon Strategy [11], and policies, practices and models supportive of this are central to the achievement of this ambitious agenda. This paper presents some of the findings from a series of distributed action-research-based projects on the integration of SMEs in the process of wealth creation, highlighting an evolution in the view of regional SMEs as a potential source of value creation. The pilot case is taken from the European photovoltaic sector. The paper will also draw on projects carried out by the authors in the automotive and oil and gas sectors.
2 Photovoltaics – a Distributed Market

Photovoltaic (PV) technology converts light/solar energy into electricity. PV is one of the fast-growing renewable energy sectors across the European single market, with a world-leading cluster in Eastern Germany [16] and supply chain industry clusters in South West Germany, around Berlin and in the Ruhr area. Currently, the PV industry is fragmented across a highly distributed market in Europe. Further agglomerations are developing, especially in the southern European regions of Spain, Portugal, Italy and France, where there is enormous potential to harness solar energy. The Green Electricity Directive (2001/77/EC) was enacted [8] to establish a consistent framework for the promotion of electricity generation from renewable energy sources across the EU. In implementing this directive, each member state has taken its own approach, and there has been some transfer – for example, some member states have introduced incentives modelled on German feed-in tariffs to support rapid development of this industry. An initiative by energy agencies from over eight key European solar energy nations and the European PV Industry Association (EPIA) is being implemented by a PV Policy Core Group to identify strategies for the improvement and harmonisation of current national policy frameworks for PV [15]. It is notable that the PV sector currently has no overarching policy and practice for interregional cooperation. To draw on the enormous potential in disparate regions, the identification and coordination of enabling factors for concurrent engineering of these fragmented processes is needed.
3 The Single Market – a Lever to Move Europe

The most important precondition for leveraging interregional economic cooperation is the European single market with its free movement of the basic factors of production: people3, products/services and capital, i.e. there are no trade barriers4.

3 There are some restrictions, cf. Schengen Treaty.
4 No direct trade barriers; there are indirect ones through, e.g., culture, language and management styles.
The EU has reached a further level of economic integration, presently in 15 member states (the Eurozone), where a common currency facilitates transactions, a common monetary and fiscal policy has been introduced, and value-added tax rates have been harmonised. Europe already has a wide range of financial and policy instruments to support development within and between regions, for example the Structural Funds and the Cohesion Fund to reduce disparities and stimulate the regional economies, the European Regional Development Fund (ERDF), the European Social Fund (ESF), and the INTERREG programmes to develop new solutions to economic, social and environmental challenges. These support the development and leverage of technical and human infrastructure across regions to collective advantage, together with further support from other programmes supporting the exchange of knowledge and people across regions, such as the Framework Programme (FP7), ERASMUS and LEONARDO. The EU has also invested in scalable communication infrastructures such as EGEE [5], with the potential to provide the EU with a high-power platform for distributed research and business collaboration of particular value to SMEs. This investment embodies many of the principles of unity in diversity in Europe, as a technical infrastructure for unifying disparate knowledge and resources across heterogeneous groups.
4 Unity in Diversity: a Value Driver

Policy, funding, training and technology frameworks all have potential roles to play in aligning the heterogeneous and distributed knowledge, skills, strategies and infrastructure already available for researching, supplying, manufacturing and marketing photovoltaic/solar energy products. The sheer diversity of regional strengths in renewables, and in PV in particular, reflects climatic and geographical variation, as Figure 1 demonstrates [13].
Figure 1. Photovoltaic Solar Energy Potential in Europe [13]
There is diversity too in (often historical) regional investment in particular areas such as engineering and manufacturing. For example, the engineering infrastructure of regions in East Germany supports a leading PV cluster that already functions for some SMEs as an R&D centre and production base. In regions such as Southern Spain there is a huge market for solar energy, together with a well-developed service culture and marketing strengths. The Italian cluster builds on design strengths and internationally networked, family-business-based SMEs. The potential for value-adding partnerships is therefore clear, and already evident in companies where case studies are ongoing, which source parts from engineering regions and source production and marketing skills in others. In these regions, PV could turn into a regional growth engine also involving complementary industries such as the construction industry. There is potential for win–win partnerships, with the pooling of complementary strengths and the fostering of synergies.

4.1 Collaborative Benchmarking across Regions

As prior field studies conducted by the authors have revealed, coordination strategies among stakeholders within and across regions can support the development and re-use of successful regional policies and practices more rapidly and cost-effectively than by painful reinvention. Competitiveness and innovation are key cases in point where this has been demonstrated [20]. Earlier trans-regional studies in the UK and Western Australian oil and gas industry [10, 19], and in the automotive supply chain [9], suggest current approaches have moved from an emphasis on cost-efficiency savings to an emphasis on value creation through innovation, based on the application of the local knowledge that SMEs can provide. To identify the nature and extent of collaboration across regions in the PV sector, the authors are carrying out a European field study. We outline emerging evidence from early pilot work on the nature of these models and their implications for policy and practice.

4.2 Transregional Collaboration in PV: Emerging Collaboration Models in Three European Regions

The European pilot study covers a number of regions where researchers and students on placement have used collaborative action research [4] with a range of stakeholders to identify key success factors (drivers), risk factors (barriers) and general lessons learned that are valid across regions and sectors. The PV pilot is presently taking place in the Eastern German cluster, the region of Valencia in Spain and the region of Lombardy in Italy and, in a second stage, will be extended to further European regions.

4.2.1 Germany

Using transregional exchange and mobility programmes such as ERASMUS and LEONARDO, as well as the support of companies, the Saxon Economic Development Corporation and Chambers of Commerce have provided technical and office support for students on placement conducting case studies of SMEs,
identifying stakeholder requirements in different regional contexts. In one of these, a SWOT analysis of the Eastern German cluster has clearly shown that environmental policies such as the German Renewable Energy Sources Act have stimulated early development and subsequent first-mover advantage in wider markets [6]. The cluster is in a growth phase and is now a world leader in PV. The strengths in Eastern Germany were identified in the regional infrastructure in fields such as mechanical engineering, chemicals, semiconductors and optics, which also provide research and manufacturing support for PV clusters. The business model of supply chain production builds on the engineering tradition in this region. Close R&D cooperation between Germany's PV research institutes, such as the Fraunhofer Institute, and PV manufacturers supports regional innovation in production technologies and processes (although this could still be improved by intensifying cooperation with universities). To fully leverage the value of this PV cluster, more effective linkages between the stakeholding players in government, research and industry will be required. The strengths of other regions also provide potential for value-added partnerships that draw on diverse resources, historical investment, local knowledge and local geography.

4.2.2 Spain

The Spanish PV cluster in the region of Valencia is embarking on a growth phase, supported also by policy such as the Real Decreto 426/2004 [17]. As opposed to the German cluster, it builds more on its natural resource, the sun, which has helped the rapid evolution of a PV solar electricity producing market. Traditionally, small family-run companies with strong, culturally rooted networking skills still dominate this market. Stakeholder analysis has highlighted network-centric working practices and business models which tend towards brokerage and transfer between distributed groups, weaving value networks with other sectors, such as the construction industry, as well as with SMEs with particular strengths in other regions. Companies in the PV sector in Spain and Germany were already forging innovative models, brokering parts from different regions to meet the needs of end-users on demand. In sourcing parts, companies mainly benefit from the single European market (unity) and, in particular, from the engineering innovation and manufacturing strengths of the German cluster. Challenges here included a perceived lack of transparency in working practices and a lack of coordination of stakeholders in government, education and industry.

4.2.3 Italy

The "distretti industriali" in northern Italy are the breeding ground for the PV industry, and the Nuovo Conto Energia is intended to incentivise this sector. The studies underway in the regions of Lombardy, Veneto and Emilia Romagna show that the Italian context is also highly (internationally) networked, and reflects the family-based business culture for which the region is famous. Product design is a particular strength here, which could be further leveraged both in northern Italy and trans-regionally across Europe. The case studies suggest that political instability has contributed to the lack of a clear overarching policy to support the development of the industry.
5 Unity in Diversity: a Basis for Innovation and Competitiveness?

The ongoing studies in these very different sectors suggest that innovation and competitiveness would be best served by the strategic alignment of these diverse strengths in terms of:
• Practice: benchmarking solutions to generic problems
• Policy: support for particular economic and business models
• Technology: support for VOs5 via Grids such as EGEE [5]
It is perhaps important here to unpick what we mean by innovation and competitiveness, since there are clear lessons on this from the evolution of the traditional, trans-regional energy sector.

5.1 Competitiveness Through Cost Reduction

This initial approach to competitiveness in trans-regional enterprises is epitomised by the CRINE6 initiative in the UK oil and gas supply chain [3]. This provided a flagship example of cost-efficiencies based on standardisation, scaling and strategic alliancing; however, a number of aspects of these lean strategic alliances combined to undermine many regional SMEs [10, 19]. Many local SMEs disappeared as a result, undermining the ability of large companies in the region to innovate in the application of new techniques on the ground in this very knowledge-based market.

5.2 Competitiveness Through SME-Led Innovation

The subsequent UK initiative to support competitiveness, PILOT [14], supports SME-led innovation as a basis for competitiveness in knowledge-based environments in the development, adaptation and use of technology in the difficult environment of deep sea drilling. Local SMEs have a crucial role here, not only in leveraging their local knowledge and expertise, but also in sustaining regional employment, in supporting regional LMEs, and in the attractiveness of the region as a base for other companies. This has been the basis of useful lifecycle benchmarking with other regions addressing similar challenges, such as Western Australia, as part of other ongoing trans-regional case studies by the authors in the traditional energy sector [10, 19].

5.3 A Model for Transregional Collaboration in PV

The lesson from the experience of trans-national collaboration in the traditional energy industry is that competitiveness in knowledge-based markets is heavily
5 Virtual Organisations (VOs) for transient business collaboration.
6 Cost Reduction to the New Era.
dependent on innovation, much of which is local and SME-led. We argue that these models of innovation and competitiveness are applicable to PV, and they are evident also in collaborative models in other distributed, knowledge-based business sectors [19, 20, 21]. Analysis of the nature of SME collaboration in these case studies has highlighted a range of collaboration models, each configuring cost, risk and value in different ways – from traditional supply chains, dominated by international energy companies diversifying into another engineering sector, through to SMEs acting as brokers across regions, drawing on the particular strengths of each to source parts or market products, and creating value through the strategic alignment of complementary strengths. The authors argue that policy in Europe should be actively supportive of SME-led brokerages of the kind identified in these preliminary studies, working concurrently across regions to source, manufacture and market products within and across Europe's diverse regions, and achieving value for the whole that is more than the sum of the parts.
6 Conclusions and Future Work

The Aho Report [1] to the European Community on R&D and Innovation in Europe underlined the urgent need for new economic models to leverage the diversity of local, community-based knowledge to competitive advantage through innovation. Initial research indicates there are already emerging models that could build more effectively on the unity of Europe's political, legal, economic, educational and technical frameworks, and better exploit the rich diversity of regional strengths and local innovation in real and virtual organizations.7 How this happens in practice (and in policy) is crucial. Will traditional top-heavy models prevail by default, or along the fault lines of existing alliances between large transnationals and national governments? Or will it happen by design, to achieve competitiveness by, through and for the skills, knowledge and resources embedded in local regions drawing on climatic, geographical, social, political or historical assets? Current work mapping models, barriers and enablers in existing regions will be complemented by work in other regions such as Scotland and Portugal, where policy and practice are being aligned in different ways, as a basis for informing policy and practice in this area.
Acknowledgements

The authors would like to thank the many collaborators in research, education, industry and government in participating regions, and the local, national and European funding agencies who have supported the project in different regions.
7 EGEE provides advanced EU-wide support for distributed Grid-enabled research and business collaboration [5].
Citations and References
[1] Aho E. Creating an Innovative Europe. Report of the Independent Expert Group on R&D and Innovation, EC, 2006.
[2] CORDIS/Framework 7. Available at: Accessed on: April 3rd 2008.
[3] Crook J. Crine cuts Britannia costs. Petroleum Review, Vol. 52, No. 620, Sept. 1998, pp. 18-20.
[4] Denzin NK, Lincoln YS. Handbook of Qualitative Research. London: Sage, 2002.
[5] EGEE. Available at: Accessed on: April 8th 2008.
[6] Georgi D. Standortmarketing für die Region Sachsen – Entwicklung transregionaler Clusterstrategien innerhalb der EU auf dem Photovoltaik-Sektor am Beispiel von Sachsen und Italien. Dissertation, University of Applied Sciences, Zwickau, 2006. (Available on request from the authors.)
[7] Gray A, Hay J, March R, Punt A. Can SMEs Survive CRINE? In Proceedings of the Offshore Europe Conference, Aberdeen, 5-8 Sept. 1995, 483-488.
[8] Green Electricity Directive. Available at: Accessed on: April 2nd 2008.
[9] Jaegersberg G, Ure J. Inter-Regional Cluster Strategies: Value-Adding Partnerships between Government, Education and Industry in the Automotive Supply Chain. In M. Sobolewski & P. Ghodous (eds.) Next Generation Concurrent Engineering, ISPE, Inc., 2005, 253-259.
[10] Jaegersberg G, Ure J. Trans-regional Supply Chain Research Network: Developing Innovation Strategies Within and Between Regional Oil and Gas Clusters. In Loureiro G, Curran R (eds.) Complex Systems Concurrent Engineering. London: Springer, 2007, 801-808.
[11] Lisbon Strategy. Available at:
[12] Lisbon/Gothenburg Strategy. Available at: Accessed on: April 3rd 2008.
[13] Map of PV Solar Electricity Potential in European Countries. Available at: Accessed on: April 5th 2008.
[14] Pilot Task Force. Available at: <http://www.pilottaskforce.co.uk/> Accessed on: April 1st 2008.
[15] PV Core Policy Group. Available at:
Agent-based Collaborative Maintenance Chain for Engineering Asset Management

Amy J.C. Trappey a,b, David W. Hsiao b, Lin Ma c, Yu-Liang Chung d and Yong-Lin Kuo d
a Department of Industrial Engineering and Management, National Taipei University of Technology, Taiwan
b Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Taiwan
c School of Engineering Systems, Faculty of Built Environment and Engineering, Queensland University of Technology, Australia
d Mechanics and System Research Laboratories, Industrial Technology Research Institute, Taiwan

Abstract. Engineering asset maintenance nowadays relies mostly on self-maintained experiential rule bases and periodic maintenance, which lacks a concurrent engineering approach. To improve maintenance efficiency and customer relationships, this research proposes a collaborative environment integrated by a research center with strong diagnosis and prognosis expertise. The collaborative maintenance chain jointly combines asset operation sites (i.e., maintenance demanders), the research center (i.e., maintenance coordinator), system providers (i.e., maintenance providers) and suppliers. Meanwhile, to automate communication and negotiation among organizations, multi-agent system techniques are applied. With this agent-based collaborative environment, the overall service level of the engineering asset maintenance chain is increased.

Keywords. Engineering asset, Multi-agent system (MAS), Maintenance chain, Concurrent engineering

Please send all correspondence to Professor Amy Trappey, Department of Industrial Engineering and Management, National Taipei University of Technology, Taipei (10608), Taiwan. E-mail: [email protected]; Tel: +886-2-2771-2171 Ext. 4541; Fax: +886-2-27763996.

1 Introduction

Integrated Engineering Asset Management is a continuous process covering the whole life cycle of an asset, including conceptual design, construction/manufacture, operational use, maintenance, rehabilitation and/or
disposal [1]. When speaking of engineering asset management, how to extend asset operation time is always one of the chief concerns. Therefore, many researchers are devoted to achieving effective and efficient repair and maintenance, e.g., condition monitoring, symptom diagnosis and health prognosis [6], [8], [11]. Moreover, in order to enhance customer relationships and gather more information as a basis for future equipment redesign, system providers have started to offer total after-sales service, including maintenance, rehabilitation and professional consultation, after engineering asset installation. However, recent engineering assets, including manufacturing/production machinery and related equipment (e.g., AGVS, transportation equipment, AS/RS), are much more complex in functional design and more difficult to operate and maintain. As a result, self-maintained experiential rule bases are no longer sufficient for dealing with unpredictable problems [9], [11]. Therefore, enterprises now outsource support to technical centers, which combine the enterprises' historical experience with expert knowledge to deal with the complexity of assets, enabling predictive maintenance actions and better asset utilization. Moreover, different engineering assets may be offered and served by different system providers. An integrated high-level maintenance job that involves multiple sub-systems therefore requires the cooperation of multiple system providers, which increases the difficulty of coordination [4]. To enhance the efficiency of the maintenance chain for engineering assets, this research proposes a collaborative maintenance chain integrated by technical centers. In the proposed collaborative maintenance chain, the technical center acts as the prognosis and diagnosis expert, providing professional consultation – including accurate diagnosis and reliable prognosis – as the basis for subsequent maintenance arrangements. The technical center also acts as the coordinator between maintenance demanders and suppliers. Moreover, multi-agent system (MAS) technology is applied to realize the collaborative maintenance, owing to agents' characteristics of being autonomous, communicative, goal-oriented, proactive, rational, learning and active [2], [3], [7], [10]. In the following sections, the current practice of the engineering asset maintenance chain and its concerns are first depicted. Afterward, the proposed collaborative maintenance chain combined with multi-agent system technology is discussed in detail. Finally, conclusions are drawn.
2 Current Maintenance Practice and Main Concerns

The current maintenance chain for engineering assets mainly contains three tiers of participants: asset operation/user sites, system providers (i.e., the asset maintenance providers) and spare part suppliers. In the current practice, maintenance jobs are either shutdown-driven maintenance or periodic maintenance (Figure 1). However, these two types of maintenance are not able to deal with unexpected shutdowns and consequently cause great damage to the assets and operators.
Figure 1. Periodic maintenance and shutdown-driven maintenance are the primary maintenance actions in current practice
According to field research and interviews with industrial companies (e.g., automatic parking towers and power plants), it is concluded that the current practice of the maintenance chain can be improved in four directions: diagnosis/prognosis, maintenance demand/supply mismatch, spare part overstock, and system/database linkage.

2.1 Prognosis and Diagnosis

In the current practice, prognosis and diagnosis are conducted according to self-maintained experiential rule bases combined with internal condition monitoring data. However, recent engineering assets, including manufacturing/production machinery and related equipment, provide more functions than ever and are more difficult to operate and maintain. Consequently, the lack of experts dealing with symptom diagnosis and health prognosis may result in inefficient maintenance. Thus, the maintenance chain needs experts from the diagnosis and prognosis domains who integrate historical condition monitoring data to support preventive maintenance.

2.2 Maintenance Demand and Supply Mismatch

In a large plant, there are numerous systems which are provided and maintained by different system providers. Therefore, a higher-level maintenance job that requires the involvement of several system providers can pose a significant scheduling problem for both the asset operation site and the system providers. A platform which brings together the suppliers and demanders of after-sales service to coordinate one another's maintenance schedules is therefore required.

2.3 Spare Part Overstock

In the current practice, each system provider forecasts the requirements of maintenance components to prepare its spare part inventory. However, these forecasts cannot match real market requirements and thus result in overstock or a low service level. Therefore, a forum to collaboratively bridge and integrate
maintenance demanders (i.e., asset operation sites), maintenance providers (i.e., system providers) and spare part suppliers in advance is needed. In this forum, they can cooperatively decide production schedules, safety stock levels and lead times.

2.4 Inefficient System and Database Linkage

With the improvement of information and database technology, each company operates more information systems and databases than ever. For example, when maintenance is requested by an asset operation site, the system provider checks the experience rule base, the maintenance schedule and the human resource allocation to generate a maintenance decision for the operation site. Afterward, the operation site adjusts its production/service schedule, maintenance schedule and related systems to support the maintenance decision. This becomes a very complicated problem when a higher-level maintenance job is required, owing to the complex linkage among these information systems and databases. Consequently, better communication and negotiation technology among these information systems is required to increase communication flexibility and efficiency.
3 Agent-based Collaborative Maintenance Chain

3.1 Integrated Maintenance Chain

To solve the problems depicted in the as-is model, this research proposes a new agent-based collaborative maintenance chain which is integrated by a research center with prognosis and diagnosis expertise (Figure 2).
Figure 2. The proposed agent-based maintenance chain is integrated by a service center with prognosis and diagnosis expertise
In the proposed maintenance chain, the asset operation site automatically monitors the asset condition and shares these data with the service center for subsequent diagnosis and prognosis. The research center (service center) receives
the condition monitoring data and carries out the subsequent diagnosis and prognosis. Moreover, the research center also brings together maintenance demanders (asset operation sites) and maintenance providers (system providers), and coordinates suitable maintenance schedules. The system provider (maintenance provider) takes charge of regular, emergency and predictive maintenance for the asset operation site. Furthermore, the system provider also coordinates resources (human resources and spare parts) to accomplish maintenance and repair jobs. The spare part suppliers supply PLCs, monitoring equipment, and related components and materials. In this new collaborative maintenance chain, engineering asset management can be divided into four stages: condition monitoring, prognosis/diagnosis, maintenance decision making, and scheduling and dispatching (Figure 3). These four stages are discussed in detail as follows. In the condition monitoring phase, with the improvement of condition monitoring techniques and database technologies, the asset is hierarchically monitored to provide complete asset information for further asset health prognosis and symptom diagnosis. If the engineering asset requires diagnosis or prognosis, corresponding experts carry out the prognosis and diagnosis jobs. To provide accurate diagnosis and reliable prognosis, the experts need to communicate interactively with the asset sites. After generating the prognosis and diagnosis results, subsequent maintenance decisions – including maintenance start time, maintenance period, maintenance cost and supporting enterprise resources – are made. However, the same maintenance job has different meanings to different departments and organizations. For the production department, how to prevent shutdowns, especially during peak time, is the major concern. For the finance department, how to extend the asset operation life with minimum maintenance cost is the major concern. For the maintenance organization, how to minimize maintenance cost (e.g., least overtime work) or maximize maintenance benefits is the major concern. Consequently, iterative communication and negotiation among these parties are required to reach maintenance decisions that satisfy all of them. After the maintenance decisions are made, the production or service providing department adjusts its dispatching based on the determined schedules. Meanwhile, the maintenance organization re-dispatches its human resource allocation and prepares the corresponding maintenance materials.
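The iterative negotiation among the production, finance and maintenance parties described above can be pictured with a small sketch. All class and field names below (MaintenanceProposal, Party, and so on) are hypothetical and only illustrate the loop of proposal, evaluation and counter-proposal; in the proposed system this exchange is carried out by the agents introduced later in this section.

    import java.util.List;

    // Illustrative sketch of iterative maintenance decision making.
    class MaintenanceProposal {
        long startTime;        // proposed maintenance start time
        int periodHours;       // proposed maintenance period
        double cost;           // proposed maintenance cost
        String resources;      // supporting enterprise resources
    }

    interface Party {
        boolean accepts(MaintenanceProposal p);             // e.g. production, finance, maintenance
        MaintenanceProposal counter(MaintenanceProposal p); // counter-proposal when rejecting
    }

    class DecisionMaking {
        // Iterates until every party accepts the proposal or the round limit is reached.
        static MaintenanceProposal negotiate(MaintenanceProposal initial, List<Party> parties, int maxRounds) {
            MaintenanceProposal current = initial;
            for (int round = 0; round < maxRounds; round++) {
                boolean allAccept = true;
                for (Party party : parties) {
                    if (!party.accepts(current)) {
                        allAccept = false;
                        current = party.counter(current);   // revise and go another round
                    }
                }
                if (allAccept) {
                    return current;   // satisfactory to production, finance and maintenance
                }
            }
            return null;   // no agreement reached within the round limit
        }
    }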
Figure 3. There are four phases of engineering asset management in the proposed maintenance chain
3.2 System Requirements

Since the maintenance chain is jointly integrated by the research center, it still needs information technology to enhance the automation mechanisms and increase chain efficiency. During the diagnosis and prognosis phase, diagnosis and prognosis experts have to communicate with the asset operation site frequently to gather enough information for precise diagnosis and prognosis. Therefore, autonomous information exchange is required to eliminate the constraints of location and time. In making maintenance decisions, multiple participants in the maintenance chain are invited to jointly discuss and negotiate the time, cost and resources related to a certain maintenance job. However, the distributed environment and numerous information systems diminish the discussion efficiency. Therefore, a mechanism that can represent human beings, with a certain level of authority, in carrying out the discussion and negotiation is needed. After maintenance decisions are made, internal production, service and maintenance schedules are changed. These changes affect the subsequent human resource and machine dispatching and the preparation of required materials. To
quickly respond to these changes in a timely manner and keep the enterprise working efficiently, a mechanism that interlinks the scheduling and dispatching effectively and efficiently is required. Based on the above requirements, it is concluded that a mechanism that can represent human beings in discussion, negotiation and decision making is required to complete the proposed integrated maintenance chain. Consequently, agent technology, with its characteristics of being autonomous, communicative, goal-oriented, proactive, rational, learning and active, is embedded in the maintenance chain. In the multi-agent system environment, agents are granted a certain range of authority. Within this authority, agents help the condition monitor, prognosis experts and diagnosis experts carry out data confirmation, data requests, data responses and result confirmation. Afterward, the agents help the production/service manager, finance manager, maintenance provider and spare part supplier carry out the discussion and negotiation of detailed maintenance decisions without being restricted by physical location boundaries or time limitations. Moreover, the agents efficiently and effectively interlink the scheduling and dispatching databases to generate the adjusted arrangement of human resources and related material preparation.

3.3 System Analysis

The proposed MAS for the collaborative maintenance chain mainly contains eight function modules: condition monitoring, production or service scheduling, diagnosis or prognosis, maintenance schedule coordination, maintenance cost coordination, spare part inventory, production or service dispatching, and maintenance dispatching. Condition monitoring focuses on continuous condition monitoring and sends abnormal signals and real-time information to the service center for further diagnosis and prognosis. The production or service scheduling module records and balances the utilization of engineering assets. Diagnosis and prognosis help to find potential symptoms and predict the health of engineering assets. The maintenance schedule coordination module coordinates the maintenance schedule, considering both the asset operation site's constraints (production or service schedule) and the system provider's constraints (human resources and spare part inventory). Maintenance cost coordination focuses on coordinating a maintenance cost acceptable to both the maintenance demander and the maintenance supplier. Spare part inventory continuously checks system providers' inventory levels and reminds system providers of replenishment. Production or service dispatching adjusts production or service human resources and corresponding materials. Maintenance dispatching adjusts maintenance human resources and corresponding spare parts. Figure 4 shows the use case diagram of the agent-based collaborative maintenance chain.
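Since the platform is realized with JADE (see the JADE-based design below), each of these function modules can be advertised as an agent service so that other participants can locate it. The sketch below is a hedged example of a research center agent registering a diagnosis service with JADE's Directory Facilitator; the service type and name strings are assumptions, not values defined by the paper.

    import jade.core.Agent;
    import jade.domain.DFService;
    import jade.domain.FIPAException;
    import jade.domain.FIPAAgentManagement.DFAgentDescription;
    import jade.domain.FIPAAgentManagement.ServiceDescription;

    // Research center agent that advertises a diagnosis service in JADE's
    // Directory Facilitator (DF), so asset-site agents can locate it.
    public class DiagnosisAgent extends Agent {
        @Override
        protected void setup() {
            DFAgentDescription dfd = new DFAgentDescription();
            dfd.setName(getAID());
            ServiceDescription sd = new ServiceDescription();
            sd.setType("diagnosis");                       // assumed service type label
            sd.setName("engineering-asset-diagnosis");     // assumed service name
            dfd.addServices(sd);
            try {
                DFService.register(this, dfd);
            } catch (FIPAException e) {
                e.printStackTrace();
            }
            // behaviours that answer diagnosis requests would be added here
        }

        @Override
        protected void takeDown() {
            try {
                DFService.deregister(this);
            } catch (FIPAException e) {
                e.printStackTrace();
            }
        }
    }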
Figure 4. The use case diagram of the agent-based collaborative maintenance chain
To increase the efficiency of the collaborative maintenance chain, there are corresponding agents in the different departments and organizations to assist synchronous discussion, communication and negotiation. Figure 5 illustrates the agent relationships. At the asset operation site, the Monitoring Agent (MA) continuously monitors the parameters of the engineering assets. When an anomaly is detected, the MA actively informs the asset manager and sends the formatted data to the Asset Agent upon request. The Maintenance Scheduling Agent (MSA) takes charge of arranging the maintenance schedule of a certain engineering asset. The Dispatching Service Agent (DSA) coordinates with the MSA to adjust the subsequent production/service dispatching of human resources and machinery. The Asset Agent (AA) acts as the manager of the engineering asset: it cooperates with the Diagnosis Agent (DA) from the research center to determine diagnosis results, collaborates with the Prognosis Agent (PA) from the research center to determine the risk distribution, and co-works with the Finance Agent (FA) and the Maintenance Decision Support Agent (MDSA) to make the final maintenance decisions. After the maintenance decision is made, the Dispatching Service Agent (DSA) rearranges the subsequent dispatching jobs. In the research center (service center), there are three kinds of agent: the Service System Agent (SSA), the Diagnosis Agent (DA) and the Prognosis Agent (PA). The SSA is the coordinator of maintenance demanders and maintenance suppliers. The DA and PA represent diagnosis experts and prognosis experts, collecting data and generating diagnosis and prognosis results based on pre-developed algorithms. The Maintenance Decision Support Agent (MDSA), System-provider Maintenance Scheduling Agent (SMSA), Human Resource Agent (HRA) and Spare Part Agent (SPA) come from the system provider. While making the maintenance decisions, e.g., maintenance start time, maintenance period and maintenance cost, these agents are invoked and join the virtual forum to discuss with the agents from the asset site and the suppliers. On the supplier site, when the Supplier Interface Agent (SIA) is asked when the spare parts will be available, it communicates with the Inventory Agent (IA) or the Production Line Agent (PLA) to determine the precise time.
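A minimal sketch of the Monitoring Agent's behaviour in JADE is given below. The sampling period, the anomaly test and the local name of the Asset Agent are assumptions made for illustration; the real MA would read the asset's condition monitoring parameters instead of the placeholder sensor call.

    import jade.core.AID;
    import jade.core.Agent;
    import jade.core.behaviours.TickerBehaviour;
    import jade.lang.acl.ACLMessage;

    // Sketch of the Monitoring Agent (MA): it periodically samples asset parameters
    // and, when an anomaly is detected, informs the Asset Agent (AA).
    public class MonitoringAgent extends Agent {
        @Override
        protected void setup() {
            addBehaviour(new TickerBehaviour(this, 5000) {   // sample every 5 s (assumed period)
                @Override
                protected void onTick() {
                    double value = readSensor();
                    if (isAnomalous(value)) {
                        ACLMessage inform = new ACLMessage(ACLMessage.INFORM);
                        inform.addReceiver(new AID("asset-agent", AID.ISLOCALNAME)); // assumed name
                        inform.setContent("anomaly;value=" + value);
                        myAgent.send(inform);
                    }
                }
            });
        }

        private double readSensor() { return Math.random(); }        // placeholder sensor reading
        private boolean isAnomalous(double v) { return v > 0.95; }   // placeholder threshold
    }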
Figure 5. Agent relationship diagram
3.4 System Design

The proposed multi-agent system is based on the Java Agent DEvelopment Framework (JADE) [5], which simplifies the implementation of MAS. JADE follows the Foundation for Intelligent Physical Agents (FIPA) specifications and provides a Graphical User Interface (GUI) to enable users to debug and develop systems more efficiently. Figure 6 shows the MAS architecture of the collaborative maintenance chain. The agent community is contributed by four sites: the asset operation site, the research center site, the system provider site and the supplier site. Moreover, agent communication on JADE is based on the IIOP protocol. The service layer provides interfaces for agents to perform their behaviours based on their pre-defined logic, and the data access layer provides functions for the service layer to access the databases.
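The following sketch shows how part of this agent community could be started on a JADE container. The agent class names are illustrative, and in the distributed deployment each site would run its own container connected to the platform's main container rather than a single local container as shown here.

    import jade.core.Profile;
    import jade.core.ProfileImpl;
    import jade.core.Runtime;
    import jade.wrapper.AgentController;
    import jade.wrapper.ContainerController;

    // Sketch of bootstrapping part of the agent community on a local JADE main container.
    public class MaintenanceChainLauncher {
        public static void main(String[] args) throws Exception {
            Runtime rt = Runtime.instance();
            ContainerController container = rt.createMainContainer(new ProfileImpl());

            // Agent class names are illustrative placeholders.
            AgentController ma = container.createNewAgent(
                    "monitoring-agent", "example.MonitoringAgent", null);
            AgentController da = container.createNewAgent(
                    "diagnosis-agent", "example.DiagnosisAgent", null);
            ma.start();
            da.start();
        }
    }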
Figure 6. System architecture of agent-based collaborative maintenance chain
To clarify the agent interactions, agent communication models are drawn as unified modeling language (UML) sequence diagrams, with communication performatives based on the agent communication language (ACL) specification of FIPA. Figures 7 to 9 depict three critical agent communication models. Figure 7 shows the interactions among the condition monitoring agent, asset agent and diagnosis agent for generating accurate diagnosis results. Figure 8 shows the interactions among the condition monitoring agent, asset agent, diagnosis agent and prognosis agent for determining the asset health prediction. With the predicted asset health distribution, the agents – including the asset agent, production agent, maintenance decision support agent, prognosis agent and enterprise resource agents (i.e., SMSA, HRA and SPA) – cooperate and communicate iteratively to generate satisfactory maintenance and production schedules (Figure 9).
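The FIPA performatives in these models map directly onto JADE's ACLMessage API. The sketch below illustrates the responder side of the exchange in Figure 7, with the Diagnosis Agent answering a REQUEST from the Asset Agent with an INFORM; the content strings and the diagnose() placeholder are assumptions, not the actual protocol payloads.

    import jade.core.Agent;
    import jade.core.behaviours.CyclicBehaviour;
    import jade.lang.acl.ACLMessage;
    import jade.lang.acl.MessageTemplate;

    // Sketch of the responder side of the REQUEST/INFORM exchange in Figure 7.
    public class DiagnosisResponder extends Agent {
        @Override
        protected void setup() {
            addBehaviour(new CyclicBehaviour(this) {
                @Override
                public void action() {
                    ACLMessage request = myAgent.receive(
                            MessageTemplate.MatchPerformative(ACLMessage.REQUEST));
                    if (request != null) {
                        ACLMessage reply = request.createReply();
                        reply.setPerformative(ACLMessage.INFORM);
                        reply.setContent(diagnose(request.getContent()));
                        myAgent.send(reply);
                    } else {
                        block();   // wait until the next message arrives
                    }
                }
            });
        }

        private String diagnose(String monitoringData) {
            return "diagnosis-report:ok";   // placeholder for the pre-developed algorithm
        }
    }

On the asset-site side, the Asset Agent would symmetrically build an ACLMessage with the REQUEST performative, add the Diagnosis Agent's AID as receiver, send it, and then wait for the matching INFORM reply.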
Figure 7. Agent communication model of generating symptom diagnosis
Figure 8. Agent communication model of generating asset health prognosis
Figure 9. Agent communication model of determining production and maintenance schedules
5 Conclusion

The purpose of this research is to provide a complete collaborative maintenance chain architecture and to realize the architecture via multi-agent techniques. Detailed agent relationships and agent communication models are depicted as the basic guidelines for further implementation. This research has four main advantages. First, the complicated diagnosis and prognosis are outsourced to the research center so as to help enterprises keep their focus on their core competences. Second, the research center acts as the coordinator of maintenance suppliers and demanders, which improves chain efficiency and customer satisfaction. Third, the research center provides a forum for maintenance chain participants to discuss their requirements in advance and then determine production schedules, safety stock levels and lead times. With this information considered in advance, the maintenance service level is increased without overstocking. Finally, the agents contribute to consistent communication among enterprises, which enables better capability for dealing with emergent events and reduces physical boundary constraints.
6 Acknowledgements

This research is partially supported by the Industrial Technology Research Institute and the National Science Council, Taiwan.
7 References
[1] CIEAM (CRC for Integrated Engineering Asset Management). Available at: , Accessed on: Mar. 15th 2008.
[2] Davidson, E.M., McArthur, S.D.J., McDonald, J.R., Cumming, T., and Watt, I., "Applying multi-agent system technology in practice: automated management and analysis of SCADA and digital fault recorder data," IEEE Transactions on Power Systems, Vol. 21, Issue 2, pp. 559-567 (2006)
[3] Hossack, J.A., Menal, J., McArthur, S.D.J., and McDonald, J.R., "A multiagent architecture for protection engineering diagnostic assistance," IEEE Transactions on Power Systems, Vol. 18, Issue 2, pp. 639-647 (2003)
[4] Huang, C.J., Trappey, A.J.C., and Yao, Y.H., "Developing an agent-based workflow management system for collaborative product design," Industrial Management and Data System, Vol. 106, No. 5, pp. 680-699 (2006)
[5] JADE (Java Agent DEvelopment framework). Available at: . Accessed on: Mar. 12th 2008.
[6] Majidian, A. and Saidi, M.H., "Comparison of fuzzy logic and neural network in life prediction of boiler tubes," International Journal of Fatigue, Vol. 29, pp. 489-498 (2007)
[7] McArthur, S.D.J., Booth, C.D., McDonald, J.R., and McFadyen, I.T., "An agent-based anomaly detection architecture for condition monitoring," IEEE Transactions on Power Systems, Vol. 20, Issue 4, pp. 1675-1682 (2005)
[8] Sun, Y., Ma, L., Mathew, J., and Zhang, S., "An analytical model for interactive failures," Reliability Engineering & System Safety, Vol. 91, Issue 5, pp. 495-504 (2006)
[9] Sun, Y., Ma, L., Mathew, J., Wang, Y., and Zhang, S., "Mechanical systems hazard estimation using condition monitoring," Mechanical Systems and Signal Processing, Vol. 20, Issue 5, pp. 1189-1201 (2006)
[10] Trappey, A.J.C., Trappey, C.V., and Lin, F.T.L., "Automated silicon intellectual property trade using mobile agent technology," Robotics and CIM, Vol. 22, pp. 189-202 (2006)
[11] Yao, Y.H., Lin, G.Y.P., and Trappey, A.J.C., "Using knowledge-based intelligent reasoning to support dynamic equipment diagnosis and maintenance," International Journal of Enterprise Information Systems, Vol. 2, No. 1, pp. 17-29 (2005)
Collaborative Engineering Systems
Research on the Distributed Concurrent and Collaborative Design Platform Architecture Based on SOA

Jia-qing Yu a,1, Jian-zhong Cha b, Yi-ping Lu b, and Nan Li b
a Ph.D. Student, School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing 100044, China.
b School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, China.

Abstract. The design and development of complex products has become a bottleneck restricting the economic development of countries. Concurrent and collaborative design for complex products is a new mode of design and development based on concurrent engineering and multidisciplinary collaborative design using various CAx fields. At present, the most advanced collaborative technology in the world is based on the Service-Oriented Architecture (SOA) – an important stratagem of the USA to seize the high ground of international strategy in the 21st century. Drawing on the theory of knowledge flow, concurrent engineering and optimization theory in multidisciplinary collaborative design, this paper defines the concept of Distributed Concurrent and Collaborative Design (DCCD). Based on SOA and a distributed intelligent resources environment, the paper presents a novel DCCD design platform architecture. The SOA-based architecture integrates large-scale commercial engineering software tools and expert systems so as to quickly accomplish the design and analysis of complex products such as the railway bogies considered here, which demonstrates the feasibility of the platform architecture. The design and development of railway bogies is taken as an example to demonstrate the application and advancement of the new architecture.

Keywords. Concurrent and collaborative design, SOA, Distributed intelligent resources
1 Introduction

Design is seen by many as the area most in need of collaborative working and where the advantages of concurrent activity will be most prevalent. As engineering design and construction grow in complexity, bigger teams of engineers with widespread, complementary expertise are needed to complete the design task. Co-location of these teams is rare, and yet shared decision making is of paramount
1 Ph.D. Student, School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, No. 3 Shangyuan Residence, Haidian District, Beijing, China; Tel: 861051685335; Email: [email protected]; http://www.bjtu.edu.cn/en
importance. Hence, communication and collaboration have become key issues in terms of efficiency and cost, and distributed design has become a necessity for the future of engineering. With the increase in product complexity in the fields of aviation, aerospace, shipbuilding and railway rolling stock, the development process must consider not only the distributed intelligent resources environment and the concurrent design process across the entire life cycle of manufacturing and assembly, but also the integrated concurrent and collaborative design across the mechanical, control, dynamics and other disciplines. However, traditional product design theory cannot meet the current demands of complex product design. Concurrent engineering, collaborative design and multidisciplinary design optimization theory are emerging in this area. Similarly, as cross-sectoral, cross-region and cross-country alliances of virtual enterprises develop quickly, the design and development environment has changed greatly, and many complex products have to be designed collaboratively by product design staff and other related staff distributed in different places; thus distributed collaborative design technology came into being. Research into distributed collaborative design started in the 1990s, and Cutkosky of the design institute at Stanford was among the first to work in this area [1]. In 1999, the National Institute of Standards and Technology invested 21.5 million dollars in a development team for a project called the Federated Intelligent Product Environment (FIPER), planning to build a collaboratively supported work environment architecture within five years. General Electric successfully applied FIPER to the development of key American weapons equipment [4, 5]. One researcher proposed a product data model and a Web-based open architecture for product data management [8]; another paper introduced a distributed collaborative software development environment and discussed its architecture and the distributed parallel development model [7]; other researchers presented a distributed collaborative product customization system based on Web3D, which provides distributed collaborative product customization for product users in a virtual environment [10]. Addressing the current state of CAD system use in industry, where concurrent design can expedite product development, an intelligent concurrent design system operating over a network has also been studied [6]. From the description above, it can be seen that research in the area of distributed collaborative design has focused on the methods and models of collaborative design. Until now, there has been no effective supporting technology or technique available for the integrated application of concurrent and collaborative design in a distributed intelligent resources environment. Therefore this paper defines the concept of DCCD to support the new architecture introduced below, and then presents a novel DCCD platform architecture in the distributed intelligent resources environment based on SOA, for the first time in this area.
2 The Architecture Design of the SOA-based Distributed Concurrent and Collaborative Design Platform
2.1 The Methodology of Service-Oriented Architecture
SOA is a new paradigm in distributed systems aiming at building loosely coupled systems that are extendible, flexible and fit well with existing legacy systems. By promoting the re-use of basic components called services, SOA is able to offer solutions that are both cost-efficient and flexible. SOA presents an approach for building distributed systems that deliver application functionality as services to either end-user applications or other services. It comprises elements that can be categorized into functional elements and quality-of-service elements [3]. With an SOA, several benefits can be realized that help organizations succeed in today's dynamic business landscape:
• Leverage existing assets
• Easier integration and management of complexity
• More responsive and faster time-to-market
• Reduced cost and increased reuse
• Readiness for what lies ahead
2.2 The Architecture Design of the SOA-based Distributed Concurrent and Collaborative Design Platform
To accomplish a shared mission, developers must work together and utilize distributed resources worldwide and simultaneously. Allowing engineers to work together regardless of geography is a huge potential advantage in an increasingly global market. Similarly, enabling engineers to design concurrently will dramatically increase efficiency whilst reducing errors currently made due to communication breakdowns. This requires all parties to be involved throughout the design process. Building on the existing concept of Concurrent and Collaborative Design [2], this paper defines the concept of DCCD in the distributed intelligent resources environment, as illustrated in Figure 1 below.
Figure 1. Approach of DCCD
Definition 1. Distributed concurrent and collaborative design. In the product design process, the developer must not only consider the design process in the time dimension, but also collaboratively consider the cooperation of different research groups, regardless of geography, in the space dimension, and the distributed intelligent resources environment in the resource dimension. The design process is thus a fully three-dimensional distributed concurrent and collaborative design based on the distributed intelligent environment, i.e. DCCD. Figure 1 shows that DCCD is a three-dimensional design approach. The dimensions of a distributed concurrent and collaborative design approach are:
• Space dimension: the team carries out collaborative design across the space dimension, which covers the mechanical domain and the other domains involved.
• Time dimension: the design is done simultaneously in the time dimension. In design for manufacturing, parts are decomposed according to their features; in design for assembly, parts are composed into products.
• Resources dimension: modern design is based on intelligent resources and mainly depends on external intelligent resources and internal knowledge reserves [9]. In China, there are many existing and potential resources which can support knowledge acquisition in the product design process. Most of these resources reside in scientific research institutes, universities, national and sectoral key laboratories or open laboratories, engineering research centers, etc.
Here, this paper presents a novel architecture for SOA-based DCCD systems, illustrated in Figure 2. The architecture is designed for research on design and development, not for commercial application.
Figure 2. The architecture of the SOA-based DCCD platform
The SOA-based DCCD platform architecture contains four tiers:
1) Tier 1 is the Operational Systems layer. It consists of existing custom-built applications, otherwise called legacy systems, including existing CRM and ERP packaged applications, older object-oriented system implementations, as well as business intelligence applications. The composite, layered architecture of an SOA can leverage such existing systems and integrate them using service-oriented integration techniques.
2) Tier 2 is the Services layer. The services the business chooses to fund and expose reside in this layer. They can be discovered or statically bound and then invoked, or possibly choreographed into a composite service. This service exposure layer also provides for the implementation components and externalizes a subset of their interfaces in the form of service descriptions. Thus, the enterprise components provide service realization at runtime using the functionality provided by their interfaces. The interfaces are exported as service descriptions in this layer, where they are exposed for use; they can exist in isolation or as part of a composite service.
3) Tier 3 is the Application layer. Although this layer is usually out of scope for discussions around an SOA, it is gradually becoming more relevant. There is an increasing convergence of standards, such as Web Services for Remote Portlets Version 2.0 and other technologies, that seek to leverage Web services at the application interface or presentation level. It is also important to note that SOA decouples the user interface from the
components, and that you ultimately need to provide an end-to-end solution from an access channel to a service or composition of services.
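To make the services tier more concrete, the following minimal sketch shows how an enterprise component from the operational-systems tier might be externalized as a service using the standard JAX-WS annotations. The class, method and endpoint names are illustrative assumptions, not the platform's actual code.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical legacy component from Tier 1 (operational systems).
class StrengthComponent {
    double safetyFactor(double loadInKiloNewton) {
        // Placeholder calculation standing in for existing business logic.
        return 250.0 / loadInKiloNewton;
    }
}

// Tier-2 exposure: the component's interface is published as a service
// description (WSDL) that other participants can discover and invoke.
@WebService(serviceName = "StrengthAnalysisService")
public class StrengthAnalysisService {
    private final StrengthComponent component = new StrengthComponent();

    @WebMethod
    public double analyseStaticStrength(double loadInKiloNewton) {
        return component.safetyFactor(loadInKiloNewton);
    }

    public static void main(String[] args) {
        // Publishes the service locally; the address is an assumption.
        Endpoint.publish("http://localhost:8080/dccd/strength", new StrengthAnalysisService());
    }
}
```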
3 Application of the New Platform
We take the design and development of railway bogies as an application example of the new architecture presented above. The railway bogie design and analysis sequence (static strength) is shown in Figure 3.
Figure 3. The railway bogie design and analysis sequence (static strength)
3.1 Services Identification
The design process for static strength analysis is shown in Figure 4.
Figure 4. The design process for static strength analysis
From the client's design and analysis process we can identify four services: a 3D parameter design service, a mesh service, a static strength analysis service and a human expert analysis service. ANSYS is then wrapped as a DCCD architecture service provider and invoked in server mode, with an APDL file as input and result files (*.rst results file, *.db database file, etc.) as output. Using the Java programming language, we define the interface descriptions of the four services mentioned above in the architecture; an illustrative sketch of such interface definitions is given at the end of this section.
3.2 Amendment and Future Work
This platform architecture offers a way of allowing engineers to work together regardless of geography in the current, increasingly global market. Meanwhile, enabling engineers to design concurrently noticeably increases efficiency whilst reducing errors currently made due to communication breakdowns. Enabling all parties to be involved throughout the design process will change the nature of design, spreading input and responsibility. However, the greater the level of concurrency, the higher the level of co-ordination required to ensure a successful product. The tools required to facilitate these activities need to be robust yet sufficiently flexible to ensure their long-term usage. The platform will therefore be amended in the course of application: new functions may be added and unsuitable ones removed. In the future, we will refine the user GUI, program the application programming interfaces for Pro/E and the mesh tool, provide an optimization service and, finally, combine the process management system with the service providers.
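As referenced in Section 3.1, a minimal sketch of what the four service interfaces could look like is given below, using the standard javax.jws annotations. All interface, method and parameter names are assumptions made for illustration and do not reproduce the platform's deployed code.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;

// Illustrative contracts for the four services of the bogie design sequence.

@WebService
interface ParameterDesign3DService {
    // Builds the parametric 3D model; returns an identifier of the generated model.
    @WebMethod String buildModel(String designParameters);
}

@WebService
interface MeshService {
    // Generates a mesh for a given model and returns a mesh identifier.
    @WebMethod String generateMesh(String modelId);
}

@WebService
interface StaticStrengthAnalysisService {
    // The provider wraps ANSYS running in server mode: APDL commands go in,
    // result files (*.rst, *.db) come out; here only identifiers are passed.
    @WebMethod String runAnalysis(String apdlScript);
}

@WebService
interface HumanExpertAnalysisService {
    // Routes the analysis results to a human expert and returns the assessment.
    @WebMethod String reviewResults(String resultFileId);
}
```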
4 Conclusions
Firstly, this paper defines the concept of DCCD: the design process is a fully three-dimensional distributed concurrent and collaborative design based on the distributed intelligent resources environment. The paper then presents a novel DCCD platform architecture. The architecture integrates several large-scale commercial engineering software tools and expert systems to quickly accomplish the design and analysis of complex products such as the railway bogie studied in this paper, and it offers a way of allowing engineers to work together regardless of geography in the current, increasingly global market.
5 Acknowledgements
This research is supported by the BJTU Research Foundation under grant No. 2006XZ011.
6 References
[1] Gao Shuming, He Fazhi. Survey of distributed and collaborative design. Journal of Computer-Aided Design & Computer Graphics 2004; 2(2); 149-157. (in Chinese)
[2] Hu Jie, Peng Ying-hong, Xiong Guang-leng. Research on concurrent and collaborative design based on system theory. Computer Integrated Manufacturing Systems 2005; 2(2); 151-156.
[3] Endrei M, Ang J, Arsanjani A. Patterns: Service-Oriented Architecture and Web Services. Retrieved March 23, 2008, from: .
[4] Soorianarayanan S, Sobolewski M. Monitoring federated services in CE Grids. In: Proceedings of the 11th ISPE International Conference on Concurrent Engineering: Research and Applications, Beijing, China, 2004.
[5] The Federated Intelligent Product Environment (FIPER) – Project Brief (1999). Available at: . Accessed on: Mar. 25th 2008.
[6] Wang Haijun, Meng Xiangxu, Xu Yanning. Concurrent design in the network environment. In: Proceedings of the 8th International Conference on Computer Supported Cooperative Work in Design, Xiamen, 2004; 197-201.
[7] Wu Heng, Zhang Weimin, Zhao Xi-an, et al. The distributed parallel exploitation technology of a distributed cooperation exploitation environment. Computer Engineering & Science 2005; 8; 88-91. (in Chinese)
[8] Wu Jian-wei, Qiu Qing-ying, Feng Pei-en, et al. Management strategy of product data in distributed collaborative design environment. Journal of Zhejiang University (Engineering Science) 2005; 10(10); 1465-1480. (in Chinese)
[9] Xie Youbai. Study on the design theory and methodology. Chinese Journal of Mechanical Engineering 2004; 4(4); 1-9.
[10] Xiong Hongyun, Sun Surong. A distributed collaborative product customization system based on Web3D. In: Proceedings of the 2007 11th International Conference on Computer Supported Cooperative Work in Design, Melbourne, 2007; 926-930.
Collaborative Architecture Based on Web-Services
Olivier Kuhn a,b,c,1, Moisés Lima Dutra a, Parisa Ghodous a, Thomas Dusch b, and Pierre Collet b
a LIRIS laboratory, University of Lyon 1, France
b PROSTEP AG, Germany
c LSIIT laboratory, University Louis Pasteur, France
1 Corresponding author: Olivier Kuhn, LIRIS Laboratory, UMR 5205 CNRS/Université Claude Bernard Lyon 1, Bâtiment Nautibus, Campus de la Doua, 8 Bd Niels Bohr, 69622 Villeurbanne Cedex, France. Email: [email protected] Url: http://liris.cnrs.fr/olivier.kuhn/
Abstract. In this paper we present an enhancement of our collaborative architecture with Web Services for data access and OWL ontologies to define domain concepts. The original platform is a two-level multi-agent system in which communications are made through blackboards. To improve cross-skill collaboration, we enrich shared data with domain ontologies to formally define concepts and enable the reuse of domain knowledge. We also propose to access blackboards via Web Services. In this way we take advantage of standard protocols and allow the integration and reuse of collaborative services.
Keywords: Collaborative Engineering, Web Services, Ontology, Information Systems, Multi-agent
1 Introduction
One of the current challenges of the industrial world is to reduce time to market and to improve the quality of new products and services. Due to the development of collaborative engineering, it has become essential to have efficient tools to share and exchange product-related information during development phases and the associated processes. The modern view of product development [5] leans on communication and is based on simultaneous engineering approaches. On the one hand, concurrent engineering introduced new paradigms that are parallel, distributed and
collaborative [15, 12]. Although this design approach seems easy and its objectives clear, its setup is complex. On the other hand, web-based collaborative design is a hot research topic, as most information resources are located on the web. This implies various difficulties, such as the management of heterogeneous resources, the complexity of finding relevant information and the lack of explicit and formal modeling of the content of scientific resources. The aim of this paper is to present an information system that aids collaborative design based on Web Services (WS) [1]. This system allows experts to express their information needs, find good information sources on the web and integrate them into the system. The originality of this system comes from the exploitation of Semantic Web and Web Services technologies, the use of ontologies, information retrieval techniques and distributed artificial intelligence such as multi-agent systems. In this paper we first present related research work in the collaborative design field. Then we present our collaborative infrastructure, focusing on the Web Services aspects. Finally, we present conclusions and perspectives of this work.
2 Related work
The life cycle of industrial products is complicated. Usually, it involves many persons with different knowledge and expertise, engaged in different activities for several years, and possibly located at different places. During the design process, the different design disciplines need to collaborate and have different views of a product design according to their functional concerns. These views translate into different models of a product, which need to be accommodated in a comprehensive description of the designed product.
2.1 Collaborative platforms
Concurrent engineering (CE) has been the subject of many research activities [7, 15, 13] that have resulted in different platforms embedding several concepts, among which data management. Two solutions are available for data communication during the lifecycle and for application integration: data exchange and data sharing. The former is based on message exchanges: each participant builds his model independently, and the models are then exchanged thanks to standard formats and communication protocols. A well-known collaborative engineering project of this kind is SHADE (SHAred Dependency Engineering) [14, 10]. In the latter, the current solution to the problem is stored in a common repository accessible by all participants and divided into several areas and levels. An example of a project using data sharing is DICE [12], which was developed at MIT. Current trends head toward data sharing with a central repository, as it reduces problems such as data inconsistency and the complexity of the design process.
2.2 Emergence of Web Services
Since the beginning of the decade, Web Services (WS) [1] have become increasingly used, especially by businesses, thanks to the availability of standards like SOAP, WSDL and UDDI. The W3C defines Web Services as software systems designed to support interoperable machine-to-machine interaction over a network (W3C Web Services Glossary: http://www.w3.org/TR/ws-gloss/). These standards enable great interoperability, as SOAP and WSDL are XML-based formats. Web Services are especially used in Service-Oriented Architectures (SOA), where they are loosely coupled and reusable. As Web Services are a relatively young technology, only a few collaborative platforms are based on them, although they present very attractive characteristics for concurrent engineering. Nevertheless, some research has been done on using Web Services in collaborative platform architectures. An SOA collaborative platform that combines collaborative services with "classical" CE tools is presented in [13]. Other work related to the use of Web Services in distributed environments can be found in [8, 2, 9].
3 Proposal
We base our proposal on a collaborative platform previously developed by our team [7]. As resources are more and more often located on the web, we have chosen web-oriented representation languages and protocols: we enhance this distributed architecture for design activities by updating it with new technologies such as Web Services for data access and ontologies expressed in OWL (Web Ontology Language, http://www.w3.org/2004/OWL/) for formal data representation.
3.1 Existing platform
Our team has already developed a collaborative system based on the multi-agency paradigm [7]. Figure 1 shows the architecture of our current system.
Figure 1. Current collaborative architecture [7]: (a) top-level architecture; (b) internal architecture of an agency
At the top level (Fig. 1(a)), several agencies are gathered around a blackboard. An agency is a multi-agent system in which each agent represents the activity within a design discipline. The use of agencies allows the reproduction of the global view of a given participant (client, designer, ...). Each agency communicates its intermediate results to the other agencies through the shared blackboard. For example, when the Designer Agency elaborates the functional model, it communicates it to the other agencies through the blackboard. At the lower level (Fig. 1(b)), the agents in a given agency represent the disciplines taken into account for that participant. Communications between agents are made through a blackboard located in each agency. Each agent, representing a given design discipline, has a knowledge base allowing it to extract an expression of requirements according to its discipline. In each agency, as well as at the global architecture level, the blackboard is composed of two parts: a "data/result workspace" (DW) and a "collaboration workspace" (CW). The DW is accessible by all agents of the agency. The CW is organized into areas: Questions Areas (QA), Coordination Area (CA), Conflicts and Negotiation Area (CNA), and Interaction Area (IA); this workspace too is accessible to all agents of the agency. During the modeling activity, agents put into and retrieve from these workspaces the initial data and the intermediate results of their reasoning. At the beginning of their activity, the initial data correspond to the results obtained by reasoning on the initial requirements. In order to optimize our system and to use new technologies such as Web Services, we propose a new infrastructure in which the business services of each agent are classified and represented by Web Services.
3.2 Web Services based collaborative system
We propose a new collaborative system based on the Semantic Web and Web Services. The use of Web Services brings several advantages compared to the
previous version. First of all, the system is based on standards, which improves interoperability. Secondly, the interfaces provided to the user can be highly dynamic and user-specific. It would also be possible to directly integrate some engineering software. Another interesting point is that all interactions are defined by the system: it proposes advanced services which work as an abstraction layer, meaning that participants are guided and not lost in huge amounts of data.
Global architecture
The first modification concerns the access to the blackboards. All read/write accesses are done via Web Services interfaces. Various sets of actions are defined, each gathering the actions that concern a specific goal, such as defining the needs or designing a Function-Behavior-Structure (FBS) model [6]. Figure 2 shows the updated architecture, where access to the blackboards is provided through Web Services. Each participant is allowed to perform specific actions depending on their discipline, expertise field or point of view. Shared data are located in the blackboard and each participant works on a local copy of them. The system is able to determine who is working with what and informs the persons concerned of updates, thereby trying to attenuate the complex merges caused by unsynchronized work.
Figure 2. New collaborative architecture
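A possible shape for such a blackboard-access interface is sketched below; the operation names and workspace identifiers are assumptions derived from the workspaces described in Section 3.1, not the platform's actual API.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;

// Illustrative Web Service contract for read/write access to a blackboard.
@WebService(name = "BlackboardAccess")
public interface BlackboardAccess {

    // Fetch a local working copy of a workspace, e.g. the data/result workspace
    // ("DW") or one of the collaboration areas ("QA", "CA", "CNA", "IA").
    @WebMethod
    String checkOut(String participantId, String workspaceId);

    // Publish an intermediate result; the platform records who is working on
    // what and notifies the participants concerned by the update.
    @WebMethod
    void commit(String participantId, String workspaceId, String content);

    // Actions this participant may perform, depending on discipline,
    // expertise field or point of view.
    @WebMethod
    String[] allowedActions(String participantId);
}
```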
Semantic data representation
Collaborative workers often have to face misunderstandings caused by differing definitions of some concepts, as a variety of competence fields is involved in the design process. This kind of problem can be prevented with a formal semantic definition and representation of domain concepts. To achieve this, our collaborative platform uses ontologies to describe the data located in the blackboard. The representation language we have chosen is the Web Ontology Language (OWL), a W3C recommendation since 2004. At present we use OWL-DL, a sublanguage named in correspondence with Description Logic; it is the largest subset of OWL that provides decidable reasoning procedures. This way we can use reasoning to find correspondences [3] between several ontologies and also to detect inconsistencies and conflicts [4].
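As an example of the kind of reasoning involved, the fragment below checks a shared OWL-DL model for inconsistencies with the Jena ontology API, used here purely for illustration; the ontology location is a placeholder and the snippet is a sketch of the idea rather than the platform's code.

```java
import java.util.Iterator;

import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.reasoner.ValidityReport;

public class ConsistencyCheck {
    public static void main(String[] args) {
        // OWL-DL model backed by a rule reasoner (Jena 2.x style API).
        OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_DL_MEM_RULE_INF);
        // Placeholder URL for the shared design ontology held in the blackboard.
        model.read("http://example.org/design/shared.owl");

        // Ask the reasoner whether the asserted data are consistent.
        ValidityReport report = model.validate();
        if (!report.isValid()) {
            for (Iterator<?> it = report.getReports(); it.hasNext(); ) {
                System.out.println("Conflict detected: " + it.next());
            }
        }
    }
}
```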
As design experts may not be familiar with ontology concepts, we propose to add an abstraction layer between users and data (Figure 2). To obtain a standardized and interoperable interface, we propose to use Web Services.
Web Services abstraction layer
Concurrent engineering activities need to be structured to minimize and resolve conflicts and divergent work. Previously on the platform, agents had direct access to the blackboard without assistance. Participants would therefore have had to manipulate ontologies directly, and thus needed to be familiar with them. Furthermore, they may have had too much freedom when looking for information, and searches become complicated as projects get larger. Moreover, introducing the concept of ontology may change their working habits. We would like to avoid changing these habits, as that may require training periods and time before adoption, and can cause further problems of adaptation and adoption by participants [11]. To "hide" the ontological representation of data from the users, we have chosen to offer services to which the user interfaces they are used to working with can be connected. We have separated the presentation layer from the core of the collaborative platform, as in a three-tier architecture (Figure 3). Services abstract the data layer with a set of functionalities that are made available to participants according to their objectives. Each functionality of the platform, such as project management or publishing results, is a service available on the network. In this way, the user interfaces the participants are used to working with can be directly connected to the proposed services. To implement these services, we use Web Services technology, as it provides the interoperability we need between our services.
Figure 3. Layers in Three-Tier architecture
In our application, services are also a way to automate the handling of ontologies. The user, through his graphical interface, can use the data structure he
is used to working with, depending on the application. All his actions are translated into Web Services calls, so the use of ontologies remains transparent to the user. As an example, we propose a short design case in which experts from various disciplines are supposed to collaborate. They follow the FBS methodology [6] by defining, firstly, the functionalities of the product; then the behavior, i.e. how the functions are fulfilled; and finally the structure of the product. As we focus on design with the FBS methodology, a related ontology model is defined in the system. This ontology contains the FBS model and the links between the model, the experts and the expertise fields. The experts, following their habits, create their part of the FBS model, i.e. the functions, behaviors and structures related to their domains, and then commit it to the platform via the interface connected to the corresponding Web Services. The services then instantiate the ontology and the links between concepts. So when an expert adds a function, the system knows who added it and semantically enriches this function with the expert's information, such as the expertise domain. In this way, a Web Service in charge of a functionality also enriches data from the context automatically. This abstraction layer works in both directions. On the one hand, as said above, the user accesses data through the Web Services, which make the link between ontologies and data. On the other hand, Web Services enhance interoperability, so our platform can be interfaced with different kinds of clients: a Web site as well as a heavy client or a third-party application in which a plug-in has been developed.
Implementation
To deploy our approach, we have restarted the development of our collaborative platform from scratch, while keeping the organization presented in Section 3.1. We have oriented our development towards a SOA and decomposed the application into three independent layers, as in a three-tier architecture (see Figure 3). At the bottom is the data layer, where the data, the ontologies and their instances are stored using OWL-DL. In the middle is the business layer, the heart of the collaborative platform. It is decomposed into several modules, such as conflict detection, data handling or project management. These modules are implemented as Web Services; the business layer thereby presents high modularity and is open to the inclusion of new services. To provide cross-platform services, we implemented them in Java, which provides useful tools and APIs to handle ontologies and Web Services. Services could be dispatched across the web, but they are currently gathered on our team server and run by an Axis2 (http://ws.apache.org/axis2/) Web Services engine. The use of a UDDI server as a registry for our Web Services is not yet necessary, as they are proposed as an API (application programming interface) to access the platform. The last layer is the presentation layer. Its role is to present the services available in the business layer to the user in a common and transparent way. Today, this is done via a web application developed using Java Server Pages (JSP) technology on a Tomcat server.
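To illustrate the enrichment step described above, the sketch below shows a plain Java class of the kind Axis2 can expose as a Web Service: when an expert commits a function of the FBS model, the service instantiates the ontology and attaches the expert's identity and expertise domain. The namespace, class and property names are assumptions made for the example, not the platform's own vocabulary.

```java
import com.hp.hpl.jena.ontology.Individual;
import com.hp.hpl.jena.ontology.OntClass;
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;

// Illustrative business-layer module; Axis2 can deploy such a POJO as a service.
public class FbsCommitService {

    private static final String NS = "http://example.org/design/fbs#"; // placeholder namespace
    private final OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_DL_MEM);

    // Called when an expert publishes a new function of the FBS model.
    public String addFunction(String expertId, String expertiseDomain, String functionName) {
        OntClass functionClass = model.createClass(NS + "Function");
        Individual function = functionClass.createIndividual(NS + functionName);

        // Semantic enrichment: record who added the function and from which domain.
        Property addedBy = model.createProperty(NS, "addedBy");
        Property domain = model.createProperty(NS, "expertiseDomain");
        function.addProperty(addedBy, expertId);
        function.addProperty(domain, expertiseDomain);

        return function.getURI();
    }
}
```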
4 Conclusion and future orientations
In this paper we have presented an enhancement of our collaborative architecture via Web Services and a semantic representation of data. The global platform architecture is composed of two levels. At the upper level we have agencies and a repository, which is a blackboard comprising several areas. An agency represents a group in which all participants, here agents, have a common point of view on the model/problem. At the lower level, each agency is structured like the upper level, i.e. a blackboard and several agents. To ensure good understanding among experts, we use OWL-DL ontologies. This way we can use reasoning to find correspondences and also detect inconsistencies and conflicts. The second improvement is the utilization of standardized communication protocols between the various modules present in the platform. Each module is implemented as a Web Service and can be invoked using SOAP messages. These Web Services delimit the scope of interactions with the blackboards and make the link with the ontological data representation. Now that the basis of the platform is established, we aim at taking advantage of Web Services and OWL to propose advanced new functionalities. As our proposed architecture is based on loosely coupled Web Services, we can easily integrate new services and also orchestrate and compose services. Automated service composition is an active research field that we will investigate. We will also extend the semantic definitions to services via the service ontology OWL-S. Another issue our research focuses on is user interfaces. We would like to provide personalized interfaces to users depending on their individual needs, taking into consideration the context and also the devices used, which can be PCs, PDAs or even smartphones. All this will be eased by the use of Web Services, which facilitate exchanges between heterogeneous systems.
5 References
[1] Alonso G, Casati F, Kuno H, Machiraju V. Web Services: Concepts, Architectures and Applications. Springer-Verlag, 2004.
[2] Dustdar S, Gall H, Schmidt R. Web services for groupware in distributed and mobile collaboration. In: 12th Euromicro Conference on Parallel, Distributed and Network-Based Processing, pp 241-247, 2004.
[3] Da Silva C. Discovery of semantic mappings between semantic resources in a cooperative environment. Ph.D. thesis, University Lyon 1, 2007.
[4] Dutra M, Ghodous P. A reasoning approach for conflict dealing in collaborative design. In: Complex Systems Concurrent Engineering: Collaboration, Technology Innovation and Sustainability, CE2007, 2007; pp 481-488.
[5] Gero JS, Sudweeks F. Artificial Intelligence in Design '02, 2002.
[6] Gero JS. Design prototypes: a knowledge representation schema for design. AI Magazine 11(4): pp 26-36, 1990.
[7] Ghodous P, Martinez M, Hassas S, Pimont S. Distributed architecture for design activities. International Journal of IT in Architecture, Engineering and Construction. Millpress, 2002.
[8] Kammer PJ. Distributed groupware and Web Services. In: CSCW 2002 Workshop: Network Services for Groupware, New Orleans, LA, 2002.
[9] Li WD, Ong SK, Nee AYC. Integrated and Collaborative Product Development: Technologies and Implementation. World Scientific, 2006.
[10] Olsen GR, Cutkosky M, Tenenbaum J, Gruber T. Collaborative engineering based on knowledge sharing agreements. In: Proc. of the 1994 ASME Database Symposium, 1994.
[11] Orlikowski WJ. Learning from Notes: organizational issues in groupware implementation. In: Proceedings of the 1992 ACM Conference on Computer-Supported Cooperative Work, pp 362-369, 1992.
[12] Sriram RD. Distributed and Integrated Collaborative Engineering Design. Savren. ISBN 0-9725064-0-3, 2002.
[13] Stokic D. A new collaborative working environment for concurrent engineering in manufacturing industry. In: Leading the Web in Concurrent Engineering: Next Generation Concurrent Engineering, CE2006, pp 120-127. ISSN 0922-6389, 2006.
[14] Tenenbaum J, Gruber T, McGuire J, Weber D, Olsen GR. SHADE: technology for knowledge-based collaborative engineering. Journal of Concurrent Engineering: Applications and Research, 1(3), 1993.
[15] Vandorpe D, Ghodous P. Advances in concurrent engineering. In: Proceedings of the 7th ISPE International Conference on Concurrent Engineering: Research and Applications, CE2000, 2000.
From Internet to Cross-Organisational Networking
Lutz Schubert a,1, Alexander Kipp a, Stefan Wesner a
a HLRS – University of Stuttgart, Nobelstr. 19, 70569 Stuttgart, Germany
1 Corresponding author. Email: [email protected]
Abstract. The Internet has become a powerful means of communication and interaction, and various research projects have shown its potential to revolutionize business models and means of cooperation. Only recently has development made significant progress in catching up with research, and a series of products have been brought to market which may well represent the next step towards realizing this revolution. This development will allow flexible resource and capability sharing across the net, as if the corresponding capabilities were locally available – even though this is already possible in principle, new models will allow resources and capabilities to be maintained at the operating-system level, making this completely transparent to the average user. This paper shows how the market is currently changing to host a new range of operating systems and collaboration support that will give rise to completely new capabilities, business models and communities, but that will at the same time make us rethink classical approaches to problem solving. The paper therefore examines recent research approaches to so-called Virtual Organisations and how they contribute to realizing new collaboration modes. It shows how major IT vendors are approaching this vision, where current development may lead, and how this will influence future business models.
Keywords. future internet, platform as a service, collaborative networks, service oriented architectures, virtualisation
1 Introduction
The Internet is no longer just a means of sharing data and information: with the increase in bandwidth and hosts, it has become a new form of resource itself. With the advent of the Grid and, more recently, of Web Services, it has become possible to use and share application logic, code and local resources programmatically over the web. This shifts the need for resource availability away from the actual organization wanting to perform specific tasks to any host available on the web, i.e. a form of outsourcing over the Internet. Say, for example, that company Y needs to acquire more computational power in order to complete a specific calculation in time – since the advent of computational Grids (see e.g. EGEE [8]) it has been possible for scientists to use distributed computational resources in order to get their results in time, without having to spend money on buying
additional computing systems which they may not need as part of their common, day-to-day business. We explain in section 1 of this paper how Web Services and the concept of Virtual Organisations have realized new business models. However, the concept of exposing resources and capabilities over web services and the net is catching on only very slowly – to date, only a few thousand web services [15] exist that are openly available on the internet, and only a few of them are actually of commercial interest, whilst most are provided by academic research or communities, similar to open source tools. Even Amazon and Google, which rank among the most popular commercial organizations providing web services, still offer their services cautiously and with a higher interest in research than in commercial exploitation. As with any new development in the market, the problem behind this lack of commercial support is a mixture of both supply and demand: providers will not want to go through the effort of exposing capabilities via the web with no obvious demand for it and with ongoing problems in resource usage accounting, whilst potential customers do not yet see the benefit of troubling themselves with writing applications for such remote resources when there are still so few commercially interesting capabilities available, which are furthermore difficult to retrieve. Whilst an experimental, research-driven transition is certainly a valid approach to increasing consumer interest, the main problems are thereby not addressed: remote resource usage is still complicated, interoperability issues hinder simple integration, required capabilities and providers are difficult to find, and security breaches and resource misuse are difficult to prevent. Efforts undertaken e.g. by IBM and SUN, as well as by standardization bodies, to reduce these problems by introducing standard means for interoperability, protocols for resource accounting and security strategies have not yet had the impact on the community that was originally hoped for. Framework support that intends to cover the full problem scope, as realized e.g. by Globus, Unicore or gLite, is still cumbersome to use and has hence not found the necessary uptake. Only recently has a completely new set of base capabilities been published that opens up a whole new range of possibilities for future internet-based interactions and cooperation. In section 2 of this paper we examine these technologies, such as Google Apps, Force and in particular the programming foundation .NET 3.5 by Microsoft, which, with its WCF, provides a means of realizing future platforms tightly integrated with and across the Internet. Section 3 shows how such future platforms could be devised, what they will look like and how this will revolutionize the classical ways of enacting Virtual Organisations across the Internet, and of realizing collaborative setups in a completely new, dynamic and community-like fashion. Finally, in section 4 we show how such new business models may find explicit usage in the domain of concurrent engineering, one of the most resource-demanding and complexity-driven domains, which may hence even be considered a testing milestone for complex infrastructures. This will also show up restrictions of the models, as well as outstanding work to be performed in this area.
2 Service Oriented Collaborations: Virtual Organisations
The principle of the internet allows consumption of resources across the web, in particular of simple data sets such as text, multimedia etc. As Grid, ASP and Web Services have shown, however, the internet can go further than that and provide actual application logic over the web: server-hosted code can be executed upon request, just like JavaScript upon opening a website. With increasing bandwidth and processor speed it thus became principally possible to generate cross-organisational business processes over the Internet, where each task is represented by individual parties exposing the corresponding capabilities (e.g. in the EU project GRASP [1]). This led to the concept of electronic Virtual Organisations, where business entities sell their resources, ranging from individual business logic to devices and human capabilities, with the corresponding interfaces exposed to the web. This concept allows business entities to enhance their individual capabilities with resources they do not own themselves but can access and integrate over the web, whilst other entities can make better use of free resources by selling them over the internet. We leave security and contractual issues aside here, as they would go beyond the scope of this paper – please refer e.g. to [20] [21] [24] for more details on business requirements in Virtual Organisations. The particular advantages of this approach consist in (a) increased control through additional supervision mechanisms and (b) the capability to set up, adapt and destroy such collaborations on demand. In particular from the customer perspective, this allows making use of resources at the time of actual need and, moreover, makes the customer more independent of individual providers, as they may in principle be replaced dynamically at runtime so as to maintain constant business logic execution. It is hence in the interest of providers to allow for simple and stable integration, in order to maintain competitiveness. At the latest, the integrated project TrustCoM [11], sponsored by the European Commission, has shown how these capabilities can be used to build up complex collaborative networks that respect the business requirements of each participant and thus allow business entities to extend their capabilities in a stable and contract-managed manner. These collaborations undergo a lifecycle from finding the appropriate collaboration partners, through operation of the VO, down to its termination.
2.1 Lifecycles of Virtual Organisations
Virtual Organisations, and thus most of the current Grid research projects, aim at provisioning resources, capabilities and services across the full lifecycle of a collaboration. This means that a VO middleware will completely replace the necessity to (1: Identification) identify interaction partners according to the collaborative goal, (2: Formation) configure them in order to grant secure access etc., (3: Operation) execute distributed business processes across these participants, potentially requiring reconfiguration of the collaboration in order to address resource failures etc. (Evolution), and (4: Dissolution) finally shut down the VO
again, thus ensuring that the resources are no longer accessible outside the collaboration (cf. Figure 1).
Figure 1. Lifecycle of Virtual Organisations (Preparation, Identification, Formation, Operation, Evolution, Dissolution)
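Read as a state model, the lifecycle can be summarised by the phases of Figure 1; the enum below merely restates those phases and their usual ordering for illustration and is not part of any of the middlewares discussed.

```java
// Illustrative restatement of the VO lifecycle phases shown in Figure 1.
public enum VoPhase {
    IDENTIFICATION,  // find interaction partners matching the collaborative goal
    FORMATION,       // configure partners and grant secure access
    OPERATION,       // execute distributed business processes across participants
    EVOLUTION,       // reconfigure the running collaboration, e.g. on resource failure
    DISSOLUTION;     // shut down the VO and revoke access to shared resources

    // Usual forward transition; Evolution returns the collaboration to Operation.
    public VoPhase next() {
        switch (this) {
            case IDENTIFICATION: return FORMATION;
            case FORMATION:      return OPERATION;
            case OPERATION:      return DISSOLUTION;
            case EVOLUTION:      return OPERATION;
            default:             return this;
        }
    }
}
```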
Addressing the full scope of real business collaborations obviously exceeds the (current) capabilities of internet infrastructures, even considering legal restrictions (see e.g. [25]). However, whilst it is improbable that collaborations in the near future will use only web-based resources, the principles remain relevant for individual interaction partners. This means that VO middlewares can take over the full support of managing stable web-based resource access – we will come back to this specific point in later sections. Readers interested in Virtual Organisations specifically should refer to the TrustCoM framework [2].
2.2 Drawbacks
Notwithstanding the big advantages of Virtual Organisations, the concept did not catch on to the degree originally hoped for – a particular obstacle being the complexity of the steps to be undertaken in order to prepare and use resources across the web. This applies in particular to the additional requirements towards security and control mechanisms needed to maintain the dynamicity and privacy aspects of such collaborations. VO-supporting middlewares so far do not cater for the fact that each service provider has their own way of describing and exposing their capabilities, leading to confusion when trying to integrate the respective resource and thus when writing the overall collaboration description. This obstacle will not be overcome by standardisation efforts alone, but must instead be addressed through semantic interpretation of "intention" (i.e. capability) vs. "expression" (interface description).
2.3 Next-Generation VO Support
Recent VO-related projects, amongst them BREIN [12], try to address these drawbacks by providing more intuitive management of provider issues, in particular with respect to their individual business goals and the means to realise them: by enhancing both the resources and the overall service provider with agent-
capabilities that enable them to align themselves and take cooperative decisions without requiring human interaction – this aims mostly at minor decisions with, in particular, no legal implications. BREIN deals in particular with resource management problems in distributed environments, i.e. where jobs can be dynamically hosted on different machines and resources – be that restricted to local management (e.g. scheduling bus resources at an airport) or more global settings, where service providers expose in particular computing resources (notably for distributed engineering tasks). In either case, failures in schedule execution – be it due to resource shortages, delays etc. – are difficult to deal with in a dynamic fashion. With the dimensions of internet-based collaborations constantly increasing, and hence the scope gaining in complexity, such management will become too complicated for the average resource provider. In such environments, the agent enhancements allow for self-monitoring of the resources, so that failures can be easily recognised and potentially compensated (cf. Figure 2). Currently, BREIN can demonstrate such self-managing resources in the context of bus scheduling at an airport: buses are treated as independent agents that can communicate their capabilities and availabilities – jobs (schedules) are no longer painstakingly assigned to each resource, but published to all resources, which then negotiate who takes over the respective assignment, depending on (a) the relevance of the respective job (here, in particular, the costs of non-fulfilment) and (b) the impact of its execution on the other jobs of the respective resource.
Figure 2. The BREIN-enhanced service provider: an interface providing control and status reports in natural-language terminology, a semantic knowledge base (description of the world, service-provider-specific policies and rules, understanding of itself, definition of its boundary conditions) with a semantic rules engine, and a multi-agent system (whiteboard) whose agents control the actual resources on which deployed jobs and tasks are executed.
In such a simple example, resource management takes place across different providers (across two types of bus managers) in a similar fashion, though typically at a higher level, i.e. not per individual bus but for a set of tasks.
3 New Approaches
Most approaches to realising Virtual Organisations suffer from the same drawback: supporting different messaging formats through standardised interfaces. Unless commercially supported, these solutions will not be able to cope with the changes and advances in research and on the market, and thus will not find the uptake necessary to boost the product in the first instance. Only recently have new approaches in Service Oriented Architectures and Infrastructures been devised which could well introduce such a change in approach, and in particular in commercial awareness of this problem: cloud computing, Platform as a Service and WCF.
3.1 Cloud computing
Amazon was one of the first companies to expose computational resources in a Grid-like manner over Web Service interfaces (Amazon EC2 [1]). This allows the deployment and execution of various jobs on remote computers, just as foreseen in particular by computational grids such as EGEE [8], which commonly find application in particle physics (see e.g. [10]) and related areas. As opposed to the VO approaches, however, Amazon does not control any cross-resource interactions, as necessary for distributed applications, nor does it itself compose complex services into a single interface; i.e. Amazon does not provide abstract "products" [9], but simple computational resources in a restricted manner. As such, it is not possible to make use of these resources in the same fashion as local resources, as the classical Grid vision foresees. However, with the Amazon programming interfaces, these resources can be used to execute code generated for the particular purpose of remote hosting – code providers (rather than service providers) may thus generate sets of applications and tools that can be hosted on demand on Amazon-like cloud computing networks, thus reducing the costs of providing and administrating such machines.
3.2 Platform as a Service
One step further than Amazon EC2 goes Salesforce's "Platform as a Service" [4]: rather than only hosting prepared machine images complying with the Amazon specifications, Salesforce provides a platform for developing and hosting complete applications. This enables complete web-server hosting capabilities, as opposed to pure computational power. Whilst this is in principle comparable to classical web-server hosting, service platforms significantly reduce the development time and in particular the management overhead, notably regarding scaling, load balancing, security etc. Google has since followed this approach with its "Google App Engine" [5], and there are rumours that Microsoft will soon join in with a cloud service called "Red Dog" [6].
3.3 .NET 3.5 Framework
One of the most interesting developments in recent years, however, has been the release of Microsoft's .NET Framework, and in particular its introduction of the following enhancements:
• The Windows Communication Foundation (WCF [7]): WCF provides a new breed of communications infrastructure built around the Web services architecture, providing secure, reliable and transacted messaging along with interoperability.
• The Windows Presentation Foundation (WPF [16]): WPF provides a new presentation system for building visual client applications. The Extensible Application Markup Language (XAML), part of WPF, is a declarative language with flow-control support that allows visible UI elements to be created in declarative XAML markup and the UI definition to be separated from the run-time logic. This allows the transmission of visual user interfaces, e.g. via web services, without the need to consider the underlying service infrastructure.
4 Putting it All Together: Tomorrow's Internet
When looking at the technologies that have become available over recent years, one can note in particular the following main developments:
• web servers (hosts) have become more accessible and available
• computational resources (and thus machines) are offered over the web
• communication frameworks maintain a stronger Service Oriented Architecture approach
• full platforms become available as part of the internet
• applications and web services merge
• semantic enhancements
• internet bandwidth increases constantly
If this development is pursued further and finally merged into future technologies, we stand on the brink of a new, internet-wide infrastructure that allows data, information and code exchange in a completely new way, exceeding the aims of Grid Services, though not yet reaching the envisaged stability and reliability. This development has also been noted by the European research community, which initiated the FIRE initiative [23] to support advanced networking research coupled with large-scale experimentation in order to find solutions that overcome the shortcomings of the current Internet architecture. In the following sections we sketch this new platform in more detail.
4.1 A Vision of Cross-Web Infrastructures
In principle, local computers have become obsolete: in most cases, a simple browser-like interface on a thin client is sufficient to fulfil most daily tasks, given
that the corresponding services are hosted on a remote machine. However, such interaction is still tiresome, due to the lack of connection speed and to "incomplete" interfaces that do not provide the look & feel of local, complex applications. This may change soon: computational power and in particular storage are already subject to outsourcing across the web (cf. above). Web Services and the new .NET framework allow the easy transmission not only of simple commands, but also of full interfaces, as well as the generation and execution of complex code on the fly. Semantic searches enable interface abstractions, and standardisation efforts ensure common messaging, even for complex data structures. Thus, it is in principle possible to host rich applications on remote machines while deploying only the interfaces locally. With standard formats for application descriptions (XAML) and the convergence of messaging protocols between different layers of usage (namely between applications, web forms and web services), any application can in principle be hosted remotely with its form represented locally. The only obstacle is speed: even though actual code execution may become faster through stronger (remote) processor power, interaction speed decreases due to bandwidth limitations (cf. Figure 3).
Figure 3. Remote access through dynamic local interfaces: an empty, browser-like interface on the customer side checks the server cloud, retrieves an application description (XAML), instantiates a local application interface, and then exchanges requests and responses over the World Wide Web with the application instance running on the application host.
In order to overcome this problem, partial deployment is the keyword: we need to distinguish not only between application types that are either computing or interaction intensive, but also between code parts with the same characteristics. As such, interaction intensive code should be hosted locally whilst computational code could run remotely or even in a distributed fashion not only to increase performance speed, but also to increase stability (cf. below). The attentive reader will notice the similarity to the concept of Virtual Organisations discussed above: by splitting up the main functionality into individual components and distributing them across the web, strengths (and expertise) of the individual providers can be exploited optimally, thus leading to better performance, stability and reliability of the overall system.
Microsoft again has undertaken a first step in this direction: with the new .NET framework following strongly the service / component oriented approach comes a new model of software interpretation: the “Just in Time” (JIT) compiler does no longer compile all code ahead of time and stores it as a self-running executable, but compiles the text-based code at the time of need and in a partial fashion: only currently relevant parts are compiled and stored in memory, whilst rarely used features remain uncompiled until time of need – unexpectedly, the result is highly performing. A similar step is taken by Microsoft’s “SoftGrid” [19]: a server farm with preinstalled and preconfigured applications – independent of the application type – allows domain users to access and use these applications on the fly. Instead of just an interface, SoftGrid transfers the whole application to the user – however, in a time- and work-efficient manner, i.e. only the parts of the application are transferred that are currently of need. The commonality is obvious: segmentation of code bits into “relevant” and “less relevant” features that could be executed locally and remotely. Let us take that one step further and distinguish between the sensitivity of data, how critical correct execution is etc. and we will come to Distributed Managed Platforms. 4.2 Distributed Managed Platforms User‘sSystem Application DistributedOperatingSystem I/OManagement StandardisedInterfaces
Standard.Interfaces
Standard.Interfaces
ResourceController
ResourceController
EmbeddedmicroOS
EmbeddedmicroOS
ActualResource
ActualResource
EnhancedResource
EnhancedResource
Figure 4. enhanced hardware resources to build up a distributed OS
Additional aspects, such as privacy, reliability and quality, determine the platform of choice for individual tasks. Managed platforms, such as Salesforce [4] and the Google App Engine [5], allow for replicated, secure execution of code on a remote server farm. Whilst this does not yet meet all requirements, it shows that the concept of Virtual Organisations and its associated business requirements (see e.g. TrustCoM [11]) still hold true, but will have to move to a new level, closer to the actual execution layer of operating systems.
Let us again go one step further and apply this concept to the operating system, and thus the most relevant platform from the user perspective. By linking parts of the execution environment at a low level, remote platforms could be used as part of the local infrastructure, and thus all code execution could be managed in a similar fashion to current multi-core processors (cf. Figure 4); a similar approach towards enhancing operating systems was already foreseen by Andrew Tanenbaum [26]. With the current development on both the hardware and software market, this vision is on the verge of becoming a technical, commercial solution.

4.3 A New Community Model

In this environment, the "Prosumer" [13] is taken to a completely new level, introducing new ways of doing business, but also of general interaction across the web: not only will server farms again become more relevant in the future, but hosting of data and code, as well as the provisioning of storage and computing resources, will also become easier than ever before.

Figure 5. The process of distributed managed application use
With the semantic enhancements of Web 3.0 [14], the actual usage may look as follows: a user initiates a local search engine to look for all applications that provide a specific feature, rather than having a specific name; this is similar to current Desktop Search engines, but with support for semantic enhancements. The actual search is executed across the web and retrieves all remote applications satisfying the user constraints and definitions. Upon execution, interaction-intensive code bits are transferred to the local machine, whilst in particular computationally intensive and data-critical code is distributed across a server farm, which may actually consist of community members offering parts of their local infrastructure. Depending on the relevance of the data and related aspects, code and data may be
replicated, maintained on a single platform, load balanced across different servers, etc. Notably, such a web-oriented platform community will realise more of the initial ideas associated with "the Grid" [22] than any other approach so far. And the technological basis is already there.

4.4 Realisation

Due to lack of space, this section can only outline the basic realisation approach: the architecture of the WCF framework (see [17]) already provides most of the details relevant for cross-organisational communication on a peer-to-peer basis and across different operational layers of the operating system. However, it is mainly restricted to Windows platform machines and as such does not perform well at the low, device-near level that would ideally be required to realise the framework. Almost all current operating systems, however, come at the high cost of providing a huge infrastructure that significantly reduces computational power, available storage etc. Key to the new technology will be the development of embedded micro kernels as a basis for more complex operating systems. Current micro kernels (such as [18]) do not fulfil this purpose and are in fact replacements for full-fledged operating systems, with no higher-level instance to integrate them.
5 The Future of CE

So where is the impact of all this on future concurrent engineering tasks? After all, the technologies described above aim particularly at realising a low-level means of integration that seems at first sight to be of no interest for the coordination of high-level, complex tasks as addressed by CE. As already mentioned in the introduction, the area of concurrent engineering is one of the most demanding for distributed task management, one specific aspect being stable, reliable and secure execution of multiple complex computational tasks. Particularly for design and analysis, a lot of effort is vested in realising and executing HPC machine code that then needs to be carefully maintained so as to ensure correct execution. The future of the internet will not make HPC obsolete, even though followers of the SETI and BOINC movement may claim so: as HPC relies on much faster data transfer than is ever possible over the web, distributed p2p approaches are only valid for optimally parallelisable codes, i.e. with weak data exchange between nodes. However, with a micro kernel per node and for the overall cluster, management of HPC tasks becomes ever simpler: as the micro kernel combination will be unaware of the actual distribution of nodes, but only of their requirements and availabilities, any changes in the infrastructure, be it due to failing nodes or other circumstances, will go unnoticed by the overall execution, thus allowing smooth transitions. On a further, higher level, this applies similarly to all code-based applications across Virtual Organisations, which transform into a mixture of an EGEE-like
community model and TrustCoM-like business collaboration. With further enhancements, such as the ones envisaged by BREIN, a collaborative network could be realised in which node management, security and reliability come implicitly with the system. Whilst the future internet will therefore mainly contribute to stronger interaction models, distributed management tasks on top of it, such as BREIN's VO concept, become simpler and therefore more realistic: current VO approaches spend more time on coping with interoperation and low-level resource management issues than on actually realising management strategies, as the low-level platform has not yet been realised.
6 References

[1] Wesner, S.; Serhan, B.; Dimitrakos, T.; Randal, D.M.; Ritrovato, P. & Laria, G. Overview of an Architecture Enabling Grid Based Application Service Provision, 2nd Across Grid Conference, 2004.
[2] Wesner, S.; Schubert, L. & Dimitrakos, T. (2005), 'Dynamic Virtual Organisations in Engineering', Notes on Numerical Fluid Mechanics and Multidisciplinary Design.
[3] Amazon Elastic Compute Cloud (Amazon EC2) - Beta. Available at: . Accessed on: April 8th 2008.
[4] On-Demand Business Application Platform and Programming Language. Available at: . Accessed on: April 11th 2008.
[5] Google App Engine. Available at: . Accessed on: April 4th 2008.
[6] Mary Jo Foley. Red Dog: Yet another unannounced Microsoft cloud service. Available at: . Accessed on: April 9th 2008.
[7] Windows Communication Foundation. Available at: . Accessed on: March 3rd 2008.
[8] Enabling Grids for E-sciencE. Available at: . Accessed on: March 17th 2008.
[9] Haller, J.; Schubert, L. & Wesner, S. Private Business Infrastructures in a VO Environment. In: Paul Cunningham & Miriam Cunningham (eds), Exploiting the Knowledge Economy - Issues, Applications, Case Studies, 2006, 1064-1071.
[10] LHC Computing. Available at: . Accessed on: March 3rd 2008.
[11] TrustCoM - EU IST Project (IST-2003-01945). http://www.eu-trustcom.com.
[12] BREIN - EU IST Project (IST-034556). http://www.gridsforbusiness.eu.
[13] Wikipedia - Prosumer. http://en.wikipedia.org/wiki/Prosumer. Accessed on: April 15th 2008.
[14] Times Online - Web 3.0. Available at: . Accessed on: April 15th 2008.
[15] SOA4All. Available at: . Accessed on: April 15th 2008.
[16] Windows Presentation Foundation. Available at: . Accessed on: April 15th 2008.
[17] WCF Architecture. Available at: . Accessed on: April 15th 2008.
[18] L4 Micro-Kernel Family. Available at: . Accessed on: April 15th 2008.
[19] SoftGrid. Available at: . Accessed on: April 15th 2008.
[20] Schubert, L.; Wesner, S. & Dimitrakos, T. Secure and Dynamic Virtual Organizations for Business. In: Paul Cunningham & Miriam Cunningham (eds), Innovation and the Knowledge Economy: Issues, Applications, Case Studies, IOS Press, Amsterdam, 2005, 1201-1208.
[21] Wesner, S.; Schubert, L. & Dimitrakos, T. Dynamic Virtual Organisations in Engineering. 2nd Russian-German Advanced Research Workshop on Computational Science and High Performance Computing, March 14-16, 2005.
[22] OGSA, OGSI and GT3. Available at: . Accessed on: April 15th 2008.
[23] FIRE Initiative. Available at: <www.cordis.europa.eu/fp7/ict/fire>. Accessed on: April 15th 2008.
[24] Stanoevska-Slabeva, K.; Figà Talamanca, C.; Thanos, G. & Zsigri, C. Development of a Generic Value Chain for the Grid Industry. Lecture Notes in Computer Science, Volume 4685/2007, Springer, Berlin, 2007, 44-57.
[25] Arenas, A.E.; Wilson, M.D.; Crompton, S.; Cojocarasu, D.; Mahler, T. & Schubert, L. (2008), 'Bridging the Gap between Legal and Technical Contracts', IEEE Special on Virtual Organisations - in print.
[26] Tanenbaum, A.S. (2001), Modern Operating Systems, Prentice Hall PTR, Upper Saddle River, NJ, USA.
Grid-based Virtual Collaborative Facility: Concurrent and Collaborative Engineering for Space Projects

Stefano Beco (a, 1), Andrea Parrini (a), Carlo Paccagnini (b), Fred Feresin (c), Arne Tøn (d), Rolf Lervik (e), Mike Surridge (f), Rowland Watkins (f)

a Elsag Datamat spa, Italy
b Thales Alenia Space Italia S.p.A., Italy
c Thales Alenia Space France, France
d Jotne EPM Technology AS, Norway
e Det Norske Veritas AS, Norway
f University of Southampton IT Innovation Centre, UK
Abstract. In the past decade, the Concurrent Engineering approach has been demonstrated to be very favourable for the assessment and conceptual design of future space missions. At the same time, a remarkable increase in distributed and collaborative computing power has been achieved by designing and prototyping technologies based on Grid technology. For this reason, the European Space Agency awarded a project called Grid-based Distributed Concurrent Design to study how to allow geographically distributed facilities to interact with each other in real time over wide area networks, adopting Grid technology for the purpose of space projects, in order to make the structure deployment reliable, cheap and compatible with Concurrent Facilities. This project resulted in a Virtual Collaborative Facility architecture to be taken as a reference step for a distributed concurrent and collaborative platform for the Space sector. Together with this, a tailored prototype was implemented and deployed to prove the concepts and the architecture according to two common scenarios in Space Projects.

Keywords. Concurrent Engineering, distributed collaborative environment, Grid.
1 Context for Concurrent and Collaborative Engineering

Nowadays, space activities are characterised by increased constraints in terms of cost and schedule, often combined with ever-higher technical and programmatic complexity.
1 Stefano Beco's full coordinates, acting as corresponding author: Elsag Datamat spa, Via Laurentina, 760, I-00143 Rome, Italy. Tel: +39 06 5027 4541. Fax: +39 06 5027 4330. Email: [email protected]
To answer this challenge, Space Agencies and the main industrial Space Integrators have deployed Concurrent Engineering Facilities at their premises (see also [8]) to make available environments where tools from various disciplines can be exploited, enabling concurrent analysis, providing quick results, and increasing data sharing and coherence among engineering options. Thanks to the automated information exchange and the use of interconnected tools, the change from a sequential vision to a concurrent one for space project design allows problems to be tackled and solved, enabling an exploration of several solutions that is not only faster but also deeper, often leading to the possibility of taking real-time decisions. The European Space Agency, at its ESTEC premises, set up the Concurrent Design Facility (CDF) [1, 3] starting in 1998. This has widely demonstrated the advantages of applying the Concurrent Engineering (CE) approach to the assessment and conceptual design of future space missions and has raised an enormous interest among the European partners (academia, scientific communities, industry, other agencies) in the space sector. At the same time, starting from the mid-1990s, a remarkable increase in computing power has been achieved by designing and prototyping technologies, most notably the Grid [4], to support distributing tasks and data on distributed computing centres linked with high-speed networks. With such potential, the capability to organize virtual collaboration and online interaction will become more and more concrete; data and tasks will be shared across geographically wide areas, and whole teams will interact with one another on a regular basis. Grid technology can therefore provide the means for secure connectivity of design environments as well as integrate multiple heterogeneous systems into a powerful virtual "single" system.
2 A Grid-based Virtual Collaborative Facility

Within this framework, the European Space Agency, at the beginning of 2006, awarded a project called Grid-based Distributed Concurrent Design (GDCD)2,3 to study how to allow geographically distributed facilities to interact with each other in real time over wide area networks, adopting Grid technology for the purpose of space projects, in order to make the structure deployment reliable, cheap and compatible with Concurrent Facilities. One of the purposes of this project is in fact to deploy a prototype to interconnect the above-mentioned CDF to other sites run by ESA, national agencies or industrial partners using a Grid-based architecture [6]. The project successfully held its Final Presentation in September 2007, and the paper will summarise the main outcomes.
2 ESA Contract No. 19602/06/NL/GLC
3 The GDCD Consortium is led by Elsag Datamat spa (Italy) and includes:
• Elsag Datamat spa (Italy)
• Thales Alenia Space Italia S.p.A. (Italy)
• Thales Alenia Space France (France)
• Jotne EPM Technology AS (Norway)
• Det Norske Veritas AS (Norway)
• University of Southampton IT Innovation Centre (UK)
The overall objective of a Grid-based distributed concurrent design approach is to combine the application of the CE approach, methods and tools with the emerging Grid technologies. This combination is expected to extend the benefits of the CE approach to a wider context with the aim of improving the overall design and development process of space projects, reducing schedule and cost. The wider context refers both to a geographically distributed architecture and to the application to later phases of the project life-cycle.
3 Functional Requirements for a Virtual Collaborative Facility

Analysing the needs for concurrent and distributed processes in the space sector, the following main issues arose:
• Sharing a common description of a space system, in the form of a machine-processable System Data Model, in order to ease information flow, change propagation and overall consistency;
• Common references and data pools;
• Resource sharing.
Starting from there, four conceptually different scenarios have been identified to represent the high level objectives and requirements identified (Table 1).

Table 1. Scenarios for concurrent and distributed processes in the space sector

Scenario 1 - Objective: Data share. Requirements: Actors shall be able to share a common system description and to seamlessly propagate changes through it between them.
Scenario 2 - Objective: Composed simulation. Requirements: Actors shall be able to perform complex problem solving by linking and sequentially executing simulations and analyses based on tools owned by and residing at each actor's premises and shared in the Virtual Organisation.
Scenario 2.a - Objective: Composed process. Requirements: When the tools used in the chained simulation are COTS tools linked through commercially available or specifically developed interfaces.
Scenario 2.b - Objective: Composed analysis. Requirements: When the chain is built in support of a specific problem resolution, also connecting specifically developed tools through the data/command interfaces available in the pool of tools accessible by the Virtual Organisation.
Scenario 3 - Objective: Parallel simulation. Requirements: Actors shall be able to perform complex problem solving (typically a stochastic analysis) by executing multiple instances of the same simulations and analyses and taking advantage of tools, licences and computing resources shared in the Virtual Organisation.
Scenario 4 - Objective: Parallel computing. Requirements: Actors shall be able to perform complex problem solving (typically running a parallelised code) taking advantage of computing resources shared in the Virtual Organisation.
The analysis of the above scenarios, which are considered representative of the Space Engineering process, helped to derive functional requirements for a Virtual Collaborative Facility. The requirements can be summarised in the following few classes:
• Capability to provide distributed services;
• Capability to support shared data and resources;
• Capability to guarantee secure access to data and resources;
• Capability to support complex problem solving by means of the proper use of distributed services and resources;
• Capability to provide human-to-human collaboration tools.
In order to cope with the above classes of requirements, a technological survey of the state of the art has been carried out, with the goal of identifying the best candidate technologies from which to start the prototyping activities.
4 Virtual Collaborative Facility Architecture

On the basis of the outcomes of the functional requirements, the design of the Virtual Collaborative Facility took place [2]. The VCF architecture is based on a SOA approach and can be represented as in Figure 1.

Figure 1. VCF Service Oriented Architecture (services: Application, Orchestration, Conferencing, Persistence, Publish & Subscribe, Security and Trust, Collaboration, Management)
The white circle representing the Publish & Subscribe service derives directly from the SOA model, providing the publish, find and bind mechanism. All services to be accessed and all users must refer to this mechanism; in Figure 1, the triangle shows a typical interaction based on this mechanism in the case where an application needs to use a Persistence service. The white area on top of the Management service indicates that the publishing topic requires the use of registries that pertain to the management domain. The green circles represent the application domain. A green area lies on top of the Orchestration services, meaning that part of the work needed to execute jobs is due to application/user intervention, and is therefore much more bound to the application context. There are dedicated services for Collaboration and Conferencing, as they play a very important role in the Space Systems design process. From a technological point of view, Figure 2 shows a VCF architecture based on a snapshot of the most suitable technologies available at the time of the study (2006-2007).
Figure 2. VCF Technological Architecture Layers (from top: Application; Collaboration Tools and Video/Audio Conferencing; Management, Orchestration, Persistence, Security and Trust services over a Metadata Schema; Messaging (WS-Addressing, WS-Notification, WS-Security, X509, SAML); XML, SOAP, WSDL; Transport (HTTP, HTTPS); IP)
This architecture view puts in evidence two main things:
• The existence of a common background among services and applications: this is the Metadata background layer, which provides the context that allows the whole picture to work. Based on information standards, metadata enables seamless information exchange. Given well-integrated metadata, information can freely flow from one place to another across boundaries imposed by operating systems, programming languages, locations, and data formats. Both applications and services use metadata in order to perform all the needed operations: discovering and finding services, classifying information and defining service relations. In particular, with respect to Orchestration, metadata is usually considered essential for any dynamic workflow where real-time decisions are being made on which services to tie together to solve a particular problem.
• The layers that build the base of the VCF architecture are all well-adopted open standards, with their benefits. Primarily, there is less chance of being locked in by a specific technology and/or vendor. Since the specifications are known and open, it is always possible to get another party to implement the same solution adhering to the standards being followed. Another major benefit is that it will be easier for systems from different parties or using different technologies to interoperate and communicate with one another. As a result, there will be improved data interchange and exchange.
These are the main reasons why Grid has been chosen as a viable technology for the VCF [9]:
• Grid also enables distributed, collaborative access to remote resources, where resources are not just CPU cycles and storage;
• Most (if not all) Grid implementations are Service Oriented, so fitting with the VCF architecture principles and providing accessibility to resources through remote services;
• Grid offers mechanisms to enforce security, providing an embedded security infrastructure, as users (and services) must be authenticated and authorised to have access to VCF resources and services;
• All Grid implementations offer workflow facilities so that users can create workflows composed of services, and Grid has features/services to orchestrate and enact such workflows (a minimal sketch of such composition follows this list);
• Grid implements standards to enable interoperability, e.g. allowing custom(ised) clients to be easily connected to external services.
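As a minimal, illustrative sketch of what such workflow composition amounts to (the analysis functions below are hypothetical stand-ins, not GRIA or VCF services), chained analyses can be enacted over a shared system model:

```python
from typing import Callable, Dict, List

Step = Callable[[Dict], Dict]


def thermal_analysis(model: Dict) -> Dict:
    # stand-in for a tool hosted at one partner's premises
    model["temperature_margin"] = 12.5
    return model


def structural_analysis(model: Dict) -> Dict:
    # stand-in for a tool hosted at another partner's premises
    model["stress_margin"] = 1.8
    return model


def enact_workflow(steps: List[Step], system_model: Dict) -> Dict:
    """Sequentially execute chained analyses, passing the shared System Data Model along."""
    for step in steps:
        system_model = step(system_model)
    return system_model


if __name__ == "__main__":
    result = enact_workflow([thermal_analysis, structural_analysis], {"mission": "demo"})
    print(result)
```

In a real Grid deployment each step would be a remote, authenticated service call rather than a local function, but the orchestration pattern, a chain of steps sharing one evolving model, is the same.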
The role of Grid in the above set of technologies is to embrace the whole set of services, including the management of the Metadata Schema and Messaging layers. This could be seen as a "light Grid", not strictly related to computing- and data-oriented applications where a "heavy Grid" infrastructure is mandatory. Nonetheless, although the focus of GDCD and of the VCF is more towards a "light Grid", the two are not mutually exclusive.
5 Virtual Collaborative Facility Prototype

The VCF prototype has been implemented as a suitable tailoring of the VCF architecture. It is based on a selected Grid middleware, GRIA [5, 7], chosen after a comparative analysis of the most used and advanced middleware stacks.
GRIA, compared to other Grid implementations, has the following features that are in line with VCF needs:
• It is a "light" B2B Grid middleware, and the scenarios on which the VCF concept is based are not (as for e-Science) long-term, (quasi-)static collaborations but need dynamic and effective management of collaboration actors who can (or rather have to) enter and exit as needed during the lifetime of the collaboration itself;
• It follows a dynamic security management paradigm with dynamic access control, which allows secure management of fast, dynamic collaboration, also taking into account the possibility that actors' access rights could change over time;
• It allows interoperability using standards that enable effective integration with other middleware like .NET/WSE 3.0 [9];
• It is open source, thus allowing it to be customised and extended with domain/application-related "plug-ins".
The VCF prototype has then been deployed on a geographically distributed infrastructure based on several European centres acting either as service providers or clients, according to the two storyboards specified to represent the most common scenarios in Space Projects. Such storyboards were:
• Sharing System Data Model: actors, at distributed facilities' premises, shall be able to share a common Space System description, or part of it, and to seamlessly propagate changes through it between them, adopting the ESA CDF Integrated Design Model (IDM) as System Data Model (Figure 3).
Figure 3. GDCD Storyboard #1: Sharing System Data Model
• Parallel Simulation Analyses: actors, at distributed facilities' premises, shall be able to execute complex analyses by enabling the execution of multiple independent instances of the same model on remotely located machines, taking advantage of licences owned by the different actors joining the VCF. All execution instances will share a common System Data Model, data flow shall be enabled and the code execution shall be performed (Figure 4; a minimal illustration of such independent parallel runs follows the figure).
Figure 4. GDCD Storyboard #2: Parallel Simulation Analyses
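A minimal sketch of the parallel-simulation idea behind this storyboard, using independent local processes as stand-ins for remotely located machines (the model, its parameters and the number of instances are invented for illustration):

```python
from concurrent.futures import ProcessPoolExecutor
import random
import statistics


def run_model_instance(seed: int) -> float:
    """One independent instance of the same simulation model (stochastic analysis)."""
    rng = random.Random(seed)
    # stand-in for a remotely executed simulation run
    return sum(rng.gauss(0.0, 1.0) for _ in range(10_000))


if __name__ == "__main__":
    seeds = range(32)  # 32 independent instances, e.g. one per remote licence/host
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_model_instance, seeds))
    print("mean:", statistics.mean(results), "stdev:", statistics.stdev(results))
```

In the VCF prototype the instances would be dispatched to partners' machines through the Grid middleware rather than to local processes, but the pattern of fanning out independent runs and aggregating their results is the same.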
6 Conclusions

The following statements summarise the main outcomes of the GDCD project:
• Grid is a suitable technology as an e-collaboration enabler for collaborative and concurrent engineering for Space Systems, where the needs are not just driven by computing- and/or data-intensive applications but are mainly related to knowledge and information sharing;
• The VCF Prototype, although just a proof-of-concept providing reduced functionality w.r.t. a full-fledged operational VCF, shows the possibility to perform Grid-based Collaborative and Concurrent Engineering sessions, which represents a big step forward for the current ESA CDF: it would allow involvement of the customer, prime, partners and manufacturers in all Space Systems design phases;
• The VCF Prototype shows with tangible results that the VCF's use of workflows exploiting remote services (tools, simulators, etc.) can support complex computations;
• As verified during the demonstration preparation and execution phases using the VCF Prototype, the VCF will have to face more challenges to be used effectively by collaborating partners:
  - Management of different institutional security policies applicable on industrial projects (firewalls, proxies, NAT, etc.);
  - Easy-to-use or well-known client software is necessary to share data between experts and avoid additional training;
  - The capability to perform integrated (video) conferencing and to allow actors to enter and leave the collaborative session at any time would increase session efficiency.
7 References

[1] Bandecchi M, Melton B, Gardini B, Ongaro F. The ESA/ESTEC Concurrent Design Facility. In: Proceedings of 2nd European Systems Engineering Conference (EuSEC 2000), München, 2000; 329-336. Available at: . Accessed on: May 30th 2008.
[2] Beco S, Parrini A, Paccagnini C. Architecture of a Grid-based Virtual Collaborative Facility for Space Projects. In: 2nd Concurrent Engineering for Space Applications Workshop 2006, ESA Publications Division, Noordwijk, 2006; T3.03. Available at: <http://esamultimedia.esa.int/docs/2006-10-24_AbstractsBook-WebsiteVersion.pdf>. Accessed on: May 30th 2008.
[3] ESA Concurrent Design Facility (CDF). Available at: . Accessed on: May 30th 2008.
[4] Foster I, Kesselman C, Tuecke S. The Anatomy of the Grid: Enabling Scalable Virtual Organizations. International Journal of Supercomputer Applications, 2001; 15(3):220-222. Available at: . Accessed on: May 30th 2008.
[5] GRIA - Service Oriented Collaborations for Industry and Commerce. Available at: . Accessed on: May 30th 2008.
[6] Paccagnini C, Martelli A, Beco S, Bandecchi M. GDCD: Grid-based Distributed Concurrent Design. In: Ouwehand L (ed) Proceedings of DASIA 2006 - DAta Systems In Aerospace, ESA Publications Division, Noordwijk, 2006; 375-378.
[7] Surridge M, Taylor S, De Roure D, Zaluska E. Experiences with GRIA - Industrial Applications on a Web Services Grid. In: Stockinger H, Buyya R, Perrott R (eds) First IEEE International Conference on e-Science and Grid Computing. IEEE Computer Society, Los Alamitos, 2005; 98-105. Available at: . Accessed on: May 30th 2008.
[8] Value Improvement through a Virtual Aeronautical Collaborative Enterprise (VIVACE) project. Available at: . Accessed on: May 30th 2008.
[9] Watkins ER, McArdle M, Leonard T, Surridge M. Cross-Middleware Interoperability in Distributed Concurrent Engineering. In: Fox G, Chiu K, Buyya R (eds) Third IEEE International Conference on e-Science and Grid Computing. IEEE Computer Society, Los Alamitos, 2007; 561-568. Available at: . Accessed on: May 30th 2008.
Cost Engineering
Cost CENTRE-ing: An Agile Cost Estimating Methodology for Procurement

R. Curran (a), P. Watson (b), P. Hawthorne (c) and S. Cowan (c)

a Director of the Centre of Excellence for Integrated Aircraft Technologies, Reader, School of Mechanical and Aerospace Engineering, Queens University Belfast, NI, UK (Professor of Aerospace Management and Operations, TU Delft)
b School of Mechanical and Aerospace Engineering, QUB
c Bombardier Aerospace Belfast (BAB)

Abstract: The paper presents a cost reduction methodology to be deployed within the procurement function for enabling the more efficient operation and cost management of supply chains. Cost CENTRE-ing involves six procedural steps: item Classification; data Encircling; cost driver Normalization; parameter Trending; cost Reduction identification; and negotiated Enforcement. The methodology is applied to industrial case studies at Bombardier Aerospace Belfast for validation. The case studies embrace a representative range of procured aerospace parts, highlighting the generic nature of the method, including outside production, raw material and bought-out (vendor specialized) systems. The proposed methodology is applied in these three validation studies to show that it is effective over a wide range of applications (generic), has been used to significantly reduce the cost of supplied items (accurate), and is being adopted by a leading aerospace manufacturer (relevant). It is concluded that the proposed methodology exhibits all the above because it is based on an improved understanding of supply chain cost management, thereby contributing to the body of knowledge in terms of process understanding, the importance of causal relations in estimating, and the identification of inheritance and family relations in groups of products. It is shown that the Cost CENTRE-ing method provides an agile method for responsive cost analysis, estimation, control and reduction of procured aerospace parts.

Keywords. Cost modelling, procurement, business process modelling
1 Introduction

In an effort to be more competitive, aerospace companies have to embrace a more integrated and concurrent approach to their business processes. The aim is to meet the key requirements of being more cost effective, lean and agile while delivering consistently high quality performance in their practice. This requirement is further set against the backdrop of changeable global events, fluctuating markets, and technological progress in both the commercial and military spheres. Therefore, cost
engineering issues are becoming increasingly dominant in Product Lifecycle Management (PLM) and, as a consequence, the role of procurement is recognized as ever more influential due to its impact on acquisition cost, which is now traded off in a formalized process during the technical design review. In terms of PLM, the procurement process specifically involves all of the activities required to manage the decisions regarding the perceived product and its service, the killer requirements, acquisition, and the disposal of the goods and services. In an effort to address some of the above challenges through practical means, the research presented investigates the development of a methodology and associated tooling for the agile estimating of supply chain cost management, through collaboration between Bombardier Aerospace Belfast and Queens University Belfast. The main aim is to provide an integrated approach that can draw on the in-house engineering experience of the company, their procurement knowledge and the product specification, whilst reacting to market forces. This is integrated into a methodology that is generic and can therefore assimilate whatever information and relevant knowledge is available, and that can be used in an agile manner, i.e. dealing with large amounts of historic information in order to provide a responsive estimating capability that is based on all of the information (past, present and projected) relating to the acquisition of new supply, parts, and assemblies. The following presents the methodology developed and a number of case studies undertaken to validate the accuracy and relevancy of the derived tools.
2 Supply Chain Cost Management

The importance of the procurement function is highlighted by the fact that it is common today for companies to externally procure major portions of their projects, with some buying as much as 80% externally [1, 2]. Momme [3] states that the typical industrial company spends 50-85% of its turnover on purchased goods, including such things as raw materials, components and semi-manufactures. This continues to be an increasing trend whereby industrial firms exploit outsourcing for those products and activities deemed to be (1) performed better by other organizations, therefore offering value improvement opportunities, or (2) outside the company's core business [4]. Therefore, central to the issue and activity of sourcing, for which the procurement function has ownership, is the strategic decision as to whether to internally make or externally source. Yoon & Naadimuthu [5] state that the strategic decision to 'make or buy' can often be the major determinant of profitability, making a significant contribution to the financial health of a company. Value improvement can be attained through leveraging the benefits of external sourcing in supply markets. A recent report from AT Kearney [6] states that industrial leaders are creating value and gaining competitive advantage through the use of supply markets by focusing on four key areas: (1) innovation and growth; (2) value chain optimization; (3) advanced cost management; and (4) risk management and supply continuity. From the areas offering opportunity for value creation, the wider focus of the current research will be that of facilitating improved cost management for sourcing applications, given the 'practical-
industrial' constraint of not always having the required degree of cost and financial breakdown data. This chapter presents the developed approach to cost reduction opportunity (CRO) identification for current parts. It is clear that the enhanced significance of the supply chain has made procurement a strategic function [2] and cost management and assessment a critical activity [7, 8]. It has been proposed that increased attention to cost management is a critical factor in the operational control and sustained improvement of the procurement function, as it provides a quantifiable basis upon which to assess related activities. Fleming [1] states that the objective when sourcing is to "negotiate a contract type and price (or estimated cost and fee) that will result in reasonable contractor risk and provide the contractor with the greatest incentive for efficient and economical performance". Ellram [7] also indicates that the management of costs related to purchased goods remains one of the hottest issues in purchasing. The first component of life cycle cost is that of acquisition cost. The definition of 'cost' from the AICPA Inventory [9] is "the amount, measured in money or cash expended, property transferred, capital stock issued, services performed, or liability incurred, in consideration of goods or services received". Cost and price are often used interchangeably, whereby acquisition price refers to the burden associated with externally acquiring a part from suppliers, and acquisition cost is that incurred due to internal production of a part. Parts can be internally made or alternatively be externally sourced from the extended supply chain, as shown in Figure 1 [10], consisting of the internal and external supply chains as depicted. In this sense, the price of the external supplier is equivalent to the cost of internal production, being integrated into some product that is delivered to a customer.
Figure 1. Elements of extended supply chain (Adapted from Chen [10])
Parts that are externally sourced from best-practice suppliers operating within a competitive marketplace often do not exhibit such a discrepancy between the actual manufacturing cost and the supplier's selling price, the latter including a fair and reasonable mark-up, as illustrated by Scanlan [11] in Figure 2. When, however, orders are placed with suppliers who operate towards the left hand side of the figure, which represents low-efficiency and low-competition markets, then potentially excessive mark-ups are likely to be applied by the selling supplier. It is in the
interest of the buyer to understand actual manufacturing cost as well as to have the ability to assess the quality of potential suppliers before entering into business with them.
Figure 2. Cost and price relationship with market efficiency (effects of volume removed), [11]
Figure 3 shows that unit price is influenced by a number of issues such as (1) procurement strategy and requirements, (2) the technical requirements which directly influence manufacturing cost, (3) the actual cost basis on which the company operates, and (4) the external forces that determine an acceptable market price. All of these are required to interface actively in the activity of negotiation, which is aimed at identifying mutually satisfactory terms for contract specification and price determination with potential suppliers. Procurement strategy and requirements define the need for the types of part required in a given system. Specialist parts, on which a buyer is dependent and for which little internal knowledge exists regarding their design and manufacture, result in supplier leverage and a potentially significant difference between cost and price. For standard parts a small difference between unit cost and price is expected. Understanding the costs involved in the production of a part with other specified requirements enables a procurement buyer to negotiate and determine price and contract particulars with potential suppliers, based upon a platform of informed judgment. A derived element of the research is unit cost assessment based on technical design and manufacturing requirements, or, when only supplier data is available, price analysis reflecting the same technical factors while acknowledging that other variables also have an influence. In the next section, the state-of-the-art practice in cost analysis and cost estimating is reviewed, after which the paper will go on to describe the methodology that has been developed and will include results from case studies to which it has been applied.
Figure 3. Underlying components of Unit Price
3 State-of-the-Art: Procurement Cost Estimating

Companies in all sectors are examining ways to reduce costs, shorten product development times and manage risk [12]. As a means of doing this, greater attention is now being given to the supply chain for potential improvement opportunities. Once identified, the subsequent transactions that occur between companies (buyer to supplier) are characterized by adding value up through the chain and consequent payments down the chain. The procurement function tends to be characterized as exploiting the supply chain in order to develop opportunities for increased profitability. It has been noted [12] that this is envisaged through manipulation of the areas that directly affect asset and resource utilization, as well as profit margins, including: production decisions, outsourcing versus in-house management, the type of supplier relationship sought, and inventory turnover. Typically, it is recommended to formalize best practice procedures for all activities that describe the procurement function. The best practice principles that are identified as procedurally correct need to be supported by facilitating tools that provide quantitative measures of cost, time, risk, quality, etc. In particular, cost modeling tools can easily be related to a wide range of procurement needs as described in the literature [13, 14]. The challenge with developing supporting technologies is that of making them as widely applicable as possible and thus providing scope for system integration and cross-functional knowledge extraction. An important factor to consider when discussing best practices in procurement, however, is that no two companies are exactly alike, and as a result there is no simple generic approach to best practice policy [14]. Best practices often depend on people, suppliers, processes, or other business elements that are specific to a certain situation [13, 14]. Specifically considering cost analysis, Ellram [7] states: "there are many cost management tools and techniques and they continue to proliferate. Thus, it is difficult to determine which type of analysis should be used in a given situation, and time pressure may inhibit the purchaser selecting the right tool". Due to this, it is proposed that a methodology be developed to help procurement management determine what kind of cost assessment technique should be applied to given purchase situations.
Probert [15] recommends the use of sophisticated techniques, which offer greater accuracy, for those classes of parts that are deemed to be of 'high importance' and, conversely, simpler techniques for parts which belong to groups that are thought to be of lesser importance. Purchases may be [1]: (1) big and others small in terms of both quantity and value, (2) some complex whilst others routine, (3) some high risk and others with perhaps no attached risk at all, (4) some requiring a lengthy contract whilst others needing only a short time commitment between the buyer and seller. As procurement needs are different for different purchases, authors [1, 7, 15] recommend categorizing procurements into broad but distinct families before conducting any cost analysis. The old adage of 'not putting all one's eggs in the same basket' is known as portfolio theory, which dates back to financial investment analysis in the 1950s [16-18]. This in fact can help management to focus more thoroughly on problems or issues specific to each category of procured part [1].
Figure 4. Classifying Suppliers / Purchases for Cost Analysis, [7] (a matrix whose axes are the nature of the buy, from one-time to ongoing, and the type of relationship sought with the supplier, from arm's-length to strategic alliance, giving the quadrants Low Impact, Leverage, Strategic and Critical Projects)
Following from this, it is thought that optimal analysis approaches may then be identified for application to each particular grouping [7]. In a similar fashion to Fleming [1], it is acknowledged that before purchasers can choose the right analysis tool for a particular situation, they must understand the nature of the buy (which considers scale, complexity, duration, contract type, dependency/risk, etc.) and the type of supplier relationship sought. Ellram [7] recognizes that this can range potentially from a loose agreement to a strategic alliance, which importantly affects the availability of data as well as how much time or additional resources the organization is perhaps willing to devote to both supplier and cost analysis. Figure 4 provides a matrix of buying situations consisting of varying types of buy and types of supplier relationship sought. Purchases are classified as low impact, leverage, strategic and critical, in terms of their cost and impact on the organization and relationship potential. Ellram [7] acknowledges that the type of
cost analysis techniques used should support the relative importance of the item being purchased, as well as the type of supplier relationship that the organization currently has or desires. Following from Figure 4, Figure 5 highlights potential cost analysis techniques to be used in each of the buying-type situations identified.

Figure 5. Cost Analysis Techniques applicable for various types of supplier relationship and types of buy situations, [7] (axes as in Figure 4; Low Impact, 'Price Analysis Focus': competitive bids, comparison to price lists / catalogues, comparison to established market price indexes, comparison to similar purchases; Leverage, 'Cost Analysis Focus': estimated cost relationships, value analysis, analysis of supplier cost breakdowns, cost estimate / 'should cost', industry analysis, total cost modeling; Strategic, 'Continuous Improvement Focus': open books, target cost analysis, competitive assessment / teardowns, total cost modeling of the supply chain; Critical Projects, 'Life Cycle Cost Focus': total cost modeling and life-cycle costing)
As shown in Figure 5, relatively simple analysis techniques are recommended for low-impact purchases, which focus primarily on analyzing price, where competitive bidding is viewed as the most common basic method of analysis. Moving from low-impact to leverage items, it can be seen that greater attention is given to the analysis of cost rather than price, through supplier cost breakdowns. Price analysis is simpler and faster than cost analysis. The simpler price analysis may be satisfactory for low-impact items; however, cost component understanding is desirable for high-impact parts. Even though cost analysis requires more processing time to practically employ, it generates a greater breakdown of cost information than price analysis and is therefore better able to support informed 'fair-price' negotiation. The technique involving the use of cost estimating relations is similar to that of the price analysis approach [7] of comparing similar purchases at price or sub-component cost levels. 'Should-costs' or cost estimates involve attempting to independently construct the current or potential suppliers' product cost structure(s). Value analysis is a methodology which compares the function of an item or the service it performs to cost, in an attempt to find the best value alternative. The purpose of this is to identify quality or features for which the organization is paying that are not required. Total cost modeling or life cycle cost analysis goes beyond the focus upon suppliers'
cost structures and looks specifically at "the cost of doing business with a particular supplier for a particular item over the life of that item" [7]. In terms of the definition of cost estimating, the Society of Cost Estimating and Analysis (SCEA) states that this is "the art of approximating the probable worth or cost of an activity based on information available at the time" [19]. The Newsletter of the International Cost Engineering Council states that the main function of cost estimation is the provision of independent, objective, accurate and reliable capital and operating cost assessments that can be used for investment funding and project control decisions. In particular, accurate cost estimation is important for cost control, successful bidding for jobs and maintaining a competitive position within the marketplace [20]. There are two main approaches to cost estimation: variant cost estimation based on past experience [21, 22] and generative cost estimation [23]. Curran [21] refers to generative or compilational costing as an approach which seeks to aggregate the various constituent cost elements identified for a given exercise, whereas in variant or relational costing, comparative relation of product-defining parameters is adopted in order to target/interpret causal reasons for cost differences between similar items. According to Humphreys [9], variant (analogy) estimating involves identifying a similar part cost and then using this actual cost as a basis for the estimate of the new part. Generative estimating methods can be further divided into explicit (rule based) cost estimating, rough order of magnitude (ratio) estimating, parametric and feature based cost estimating, as well as detailed estimating potentially using activity based costing (ABC) [24], each of which is often based upon past experience. As well as these, approaches involving the use of artificial intelligence, such as fuzzy logic and neural nets [25, 26], which mimic the human thought process, are rapidly developing. Rough order of magnitude or ratio estimating is a factor based technique which is used to arrive at a preliminary cost estimate inexpensively and quickly [9]. It is based upon the application of a ratio-determined factor, from a previous contract, to a particular variable in order to calculate the value of a second. Parametric estimating is a technique that uses validated Cost Estimating Relationships (CERs) to estimate cost. Parametric cost models [27] statistically estimate part cost based on the correlation between historical cost data and part properties which are considered to be related to cost. Parametric models can use a small number of independent variables or, in the case of feature based modeling, which is more generative in nature, any number of variables can be used to adequately describe the required detail present in an item. As discussed earlier, Activity Based Costing [24, 28, 29] is an accounting practice which specifically aims to identify the activities of an organization and the associated cost of each, using which activity costs are then allocated to cost objects. Artificial intelligence approaches consisting of neural nets and fuzzy logic are now receiving some interest. Using neural nets for costing involves the training of a computer program given product related attributes and cost. A number of researchers are investigating the use of neural nets for cost estimating purposes [30-33].
The neural net [25] learns which product attributes most influence the associated cost and then approximates the functional relationship between the attribute values and cost during the training. Subsequently, when supplied with
product attributes describing new parts, the neural net selects the appropriate relationship function and generates the required cost estimate. Neural networks are entirely data driven models which, through training, iteratively transition from a random state to a final model. Brinke [34] identifies that both neural nets and regression analysis can be used to determine cost functions based on parametric analysis, whereby parametric analysis is becoming an increasingly employed tool in industry for cost estimating purposes. Both techniques use statistical curve fitting procedures; however, neural nets do not depend on assumptions about functional form, probability distribution or smoothness and have been proven to be universal 'approximators' [35, 36]. Brinke [34] states that when the cost parameters are known and the type of function is unknown or cannot be logically argued, then neural networks are suitable to deduce cost functions, but that it is easier to quantify the quality of a result from regression analysis. Smith and Mason [30] indicate that in instances where an appropriate CER can be identified, regression models have significant advantages in terms of accuracy, variability, model creation and model examination. Considering the use of such techniques for cost estimating, it is desirable that causal relationships are known between cost driving independent variables and cost. This subsequently strengthens one's case when attempting to enforce a cost reduction with a current supplier based upon non-disputable causal logic. Neural nets can be used to generate more accurate results than those from the use of regression; however, the challenge associated with the further diffusion and wider implementation of this methodology, according to Cavalieri et al [31], is that of making the approach more transparent to the analyst and developing tools which reproduce, in a comprehensible, easy to use fashion, the behaviour of the network. Finally, with respect to fuzzy logic, a fuzzy expert system is one that uses a collection of fuzzy membership functions and rules to deal quantitatively with imprecision and uncertainty; authors [37-41] agree that the major contribution of fuzzy set theory is the inherent capability of representing vague knowledge. Roy [42], however, states that fuzzy logic applications within the field of cost estimating are not established, well researched or published. It should be noted that each of the estimating methods can be modeled in either a 'top-down' or 'bottom-up' fashion. 'Top-down' involves the formulation of an overall estimate to represent the completed project, which may then be broken down into subcomponents of cost as required. In contrast, 'bottom-up' estimating [39] generates sublevel and component costs first, which may then be aggregated in order to produce an overall estimate. Elements of each of these methods are more or less applicable at various stages of the product life cycle. Further reviews of these methods are provided by Curran [43], Roy [42] and Stewart [44].
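To make the contrast between a regression-based CER and a neural approximation concrete, the following sketch fits both to synthetic historical data; the attribute set, the hidden cost function and the library choices are illustrative assumptions only, not the models used in this paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic "historical" records: part attributes (thickness, length, width in mm)
# and a unit cost with a hidden causal structure plus noise.
X = rng.uniform(low=[2.0, 100.0, 50.0], high=[50.0, 2000.0, 800.0], size=(200, 3))
cost = 5.0 + 0.002 * X[:, 0] * X[:, 1] + 0.01 * X[:, 2] + rng.normal(0.0, 3.0, 200)

# Parametric CER: interpretable coefficients, easy to defend in negotiation.
cer = LinearRegression().fit(X, cost)

# Neural approximation: no assumed functional form, but far less transparent.
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                 random_state=0)).fit(X, cost)

print("CER R^2:       ", round(cer.score(X, cost), 3))
print("Neural net R^2:", round(net.score(X, cost), 3))
print("CER constant and coefficients:", cer.intercept_, cer.coef_)
```

The regression model exposes its coefficients directly, which is the property exploited later in the paper when causal, defensible relations are needed; the neural model may fit better but offers no comparable transparency.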
4 Methodology: Cost CENTRE-ing

Figure 6 indicates the authors' view of procurement practice in unit cost/price analysis and has been constructed from the literature reviewed in this section. It is reflective of the latest cost management research in the area and involves tailoring cost analysis to given types of purchase situation.

Figure 6. Procurement best practice in unit cost or price analysis

The purpose of incorporating improved estimating methodologies within Procurement is essentially to provide
additional information against which sourcing issues may be more readily considered. The research method presented in this paper gives attention to identifying opportunities for cost reduction from currently outsourced parts based upon unjustifiable cost or price variances amongst similar parts. Control follows estimate generation and usually involves comparison with actual and other estimates for the purpose of identifying such variances and then attempting to understand their causes, with a view to bringing cost to a desired baseline. Three types of cost variance are of interest when comparing cost information of similar items:
• Comparison of actual cost to actual cost, or indeed lower level cost components;
• Comparison of actual costs to cost estimates, at any level of aggregation;
• Comparison of an estimate to another estimate developed from a different approach.
Following from this, a methodology has been developed that is termed Cost CENTRE-ing, which has been applied to three industrial case studies: 1) a machining example employing parametrics, as an 'Outside Production' part; 2) a spigot that is
costed using the analogous concept; 3) a TAI valve, as a vendor specialized part, which also uses analogy and performance; due to limited information, 'dissimilarity' is used to establish the design drivers that gave rise to cost discrepancies between valves of a similar nature. As illustrated in Figure 7, the method is broken down into six key areas: (1) Classification, (2) Encircling, (3) Normalization, (4) Trending, (5) Cost Reduction Identification and (6) Enforcement. Steps one to four involve knowledge discovery incorporating data mining and statistical study (e.g. for variable selection, significance and hypothesis testing, trending and optimization), with scope for sensitivity and likelihood testing, which brings in concepts central to probability.

(1) Classification: a key aspect of the methodology, implemented to define families of parts. There is an obvious trade-off here in terms of increasing the complexity through the number of Cost Estimating Relationships (CERs) embodied in the eventual Cost Estimating Methodology (CEM). Classification was developed according to the following descriptors, as taken from a part's Bill of Material: Procurement Part Type, Aircraft Type, Sub-Level Contract, Process, Material Form and Material.

(2) Encircling: involves analysis of a data set's principal components and allows clusters to be identified in order to improve grouping refinement; it proceeds according to Machine Type, Part Size and Batch Size. Figure 8 highlights a hybrid data mining approach involving data exploration, standardization, visualization and reduction with subset generation, as well as statistical testing and iterative evaluation. Considering this, the process of pattern matching that is being used in the presented approach to data grouping is analogous to having degrees of freedom in a formal statistical test.

(3) Normalization: after surveying the more advanced methods being developed, such as neural networks and fuzzy logic, it was decided that Multiple Linear Regression would be used to model the link between part attributes, as independent variables, and unit cost, as the dependent variable. This requires that the data be normalized in order to distil out the key cost drivers to be used in the formulation of parametric relations. There is a trade-off here in terms of the number of drivers, which may be used to optimize a given result, and the corresponding actual improvement considering the additional processing time required to generate the result.

(4) Trending: also considering knowledge discovery, this step allows selection of the appropriate trend which describes the mapping relationship of cost-driving independent variables to the dependent variable. The most appropriate trend to use may change from case to case, although what is common is the means by which the goodness of fit of a relationship may be measured (R2). The trend that best minimizes random variance or error is selected for each case.
Figure 7. The Cost CENTRE-ing methodology
Figure 8. A hybrid approach to data mining
(5) Reduction and (6) Enforcement: these steps are linked to Procurement’s use of the relationships and trends developed to this point. ‘Reduction’ entails the application and comparison of prediction trends to current ‘actuals’, or to results developed by other estimating techniques, for the purpose of identifying Opportunities for Cost Reduction, either by direct total cost comparison at part level or by comparison of sub-cost components (e.g. Make, Material, Treatments, etc.). Once identified, the Procurement function must then decide upon the appropriate course of action to be taken in order to attain reductions through ‘Enforcement’.
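A minimal sketch of how steps (5) and (6) might be supported in practice is given below, assuming a hypothetical table of per-part ‘actuals’ and should-cost estimates broken into Make, Material and Treatments components; the column names and the 15% threshold are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of steps (5)-(6): compare supplier 'actuals' with
# should-cost estimates at sub-cost component level and flag Opportunities for
# Cost Reduction (OCR). Column names and the 15% threshold are assumptions.
import pandas as pd

COMPONENTS = ["make", "material", "treatments"]

def find_ocr(parts: pd.DataFrame, threshold: float = 0.15) -> pd.DataFrame:
    out = parts.copy()
    diff_cols, should_cols = [], []
    for c in COMPONENTS:
        out[f"{c}_differential"] = out[f"{c}_actual"] - out[f"{c}_should_cost"]
        diff_cols.append(f"{c}_differential")
        should_cols.append(f"{c}_should_cost")
    out["total_differential"] = out[diff_cols].sum(axis=1)
    # Flag parts whose total actual cost exceeds should cost by more than the threshold.
    out["ocr_flag"] = out["total_differential"] > threshold * out[should_cols].sum(axis=1)
    return out[out["ocr_flag"]].sort_values("total_differential", ascending=False)
```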
Figures 9 & 10. Outside Production parts and Procurement spend (pie-chart breakdowns by category: Machined Part, Major Assembly, Metal Bond Part, Sheet Metal Part, Systems Part, Systems Hardware, Fastener Hardware, Electrical Hardware, Outside Production, Raw Material, Bought Out)
5 Results and Validation
5.1 Validation 1: In-house Production; Machining Case Study
(1) Classification: Figure 9 shows the breakdown of procurement spend, and Figure 10 that of Outside Production. As stated, the main purpose here is to define and develop families of parts which are similar in nature. (2) Encircling: In Figures 9 and 10, it can be seen that the parts have been categorized in order to group parts with a heightened degree of commonality. Primarily, at this level of distinction it is paramount to choose part attributes that have been closely identified as driving manufacturing cost. These may be as abstract as airframe weight driving unit cost or as direct as part count driving assembly cost. However, it is also important to choose attributes that are already defined at whatever stage of the product life the model is to be utilized. Following on from that, it is important that these are also readily available. When fully linked to design, it will be possible to impose a much greater level of definition, through actual part volume etc, which should increase the accuracy but also the operational complexity of the model. (3) Normalization: A simple initial parametric relation was generated using the Multiple Linear Regression facility within the MS Excel Data Analysis module. The detailed manual cost estimates of the machining times for 850 parts were used as the dependent variable, while the readily available independent variables were taken from part size (thickness, length and breadth). In terms of driving the parametric relation, the size envelope is primarily linked to the material removal, although the relation would be much improved with more detailed attribute data. Work is progressing in also linking part complexity, as driven by key design attributes of the part.
Figure 11. Cost comparisons of 850 parts using the model (machining time in hours for parts listed according to ascending cumulative estimate time; QUB parametric estimate versus Actual)
(4) Trending: Trending was carried out using multiple linear regression. The parametric equation for machining time is of the following form:
MachTime = c0 + c1·T + c2·L + c3·W
where
MachTime is the estimated machining time for a given component made from a billet of thickness T, length L and width W, according to the regression coefficients c1, c2 and c3 and the constant c0. It is interesting to note that the regression statistics return a Multiple R value of 0.71, i.e. the mathematical formulation correlates with the historical data at approximately the 70% level. A Multiple R value of 0.8 would be preferable, which should be feasible by improving the range of independent variables used to characterize the parts, e.g. through additional normalization according to part size and design/machining complexity, as available. The resulting estimates for the 850 parts are shown in Figure 11, where the QUB parametric estimate is compared against the actual time (Actual), calculated as the supply price divided by an averaged rate. Regions where the two are significantly different highlight parts requiring further investigation and potential Opportunities for Cost Reduction. The QUB modeling approach was then applied to an older contract, for which it was assumed that poorer correlation would exist given that the Should Cost approach had not been used on that contract.
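The sketch below reproduces only the general form of this relation (a constant plus coefficients on billet thickness, length and width) together with the Multiple R calculation; the synthetic data and resulting coefficients are purely illustrative and are not the values fitted by the authors.

```python
# Sketch of the parametric relation's form: MachTime = c0 + c1*T + c2*L + c3*W.
# Coefficients are fitted from hypothetical historical estimates; the actual
# values used by the authors are not reproduced here.
import numpy as np

def fit_mach_time(T, L, W, mach_time):
    """Least-squares fit of machining time against billet thickness, length and
    width; returns the coefficients and the Multiple R of the fit."""
    X = np.column_stack([np.ones_like(T), T, L, W])
    coeffs, *_ = np.linalg.lstsq(X, mach_time, rcond=None)
    predicted = X @ coeffs
    multiple_r = np.corrcoef(predicted, mach_time)[0, 1]
    return coeffs, multiple_r

# Example with synthetic data (illustrative only).
rng = np.random.default_rng(0)
T, L, W = rng.uniform(0.5, 5, 850), rng.uniform(10, 200, 850), rng.uniform(10, 100, 850)
mach_time = 0.5 + 0.3 * T + 0.02 * L + 0.03 * W + rng.normal(0, 1, 850)
coeffs, r = fit_mach_time(T, L, W, mach_time)
```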
Figure 12. A detailed comparison of part costs with ‘actuals’ (machining time per part in hours for the 117 parts, listed according to ascending estimate time; Estimate, ROM, QUB and Actual)
(5) Reduction: For this older contract, Figure 12 shows a direct comparison between all cycle time values for the 117 listed parts using each of the four estimate types: detailed, rough order and QUB (parametric) estimates, as well as the ‘actuals’ recorded. It can be seen that a significant number of ‘actuals’ differ markedly from the estimates. Figure 13 provides a cumulative comparison for each of the estimate types, in which the cumulative differentials again imply that the ‘actuals’ are too high. Consequently, a number of these parts were identified and the differentials calculated to estimate the potential savings if the current suppliers were to reduce their prices to the appropriate Should Cost, or else through alternative supplier sourcing. For this case, potential savings of £100,000 were generated through (6) Enforcement.
Figure 13. A comparison of the cumulative cycle times of the parts (cumulative machining time in hours for parts listed according to ascending cumulative estimate time; Estimate, ROM, QUB and Actual)
Figure 14. A typical Off-The-Shelf item: an anti-icing valve
5.2 Validation 2: Off-The-Shelf Items; Anti-icing Valve Case Study
(1-2) Classification/Encircling: This study considers Thermal Anti-Icing (TAI) valves, relating to the Systems Hardware category in Figure 9. Ice protection is the prevention and removal of ice accumulation (anti-icing and de-icing respectively). The pneumatic and electrical systems supply the required heat, from engine bleed hot air, for: wing anti-icing; engine nose cowls and inlets and the centre engine inlet duct; the upper VHF antenna; and fuel filter de-icing (covered further under power plant): see Figure 14 for a sample thermal anti-icing valve. The case study was undertaken with a view towards determining why there is a cost variation between the valves currently being sourced and, ultimately, to improve understanding so as to be better able to cost valves to ‘Should Cost’. As such, the valve was classified within the vendor item group, with the valves identified as an encircled grouping of parts with a particular commonality.
(3) Normalization: The normalization procedure is implemented as set out previously, in order to determine the cost drivers that differentiate the cost of one instance of the encircled group from another. It was found that the cost of a valve is dependent, for example, upon casing and seal materials, performance requirements, testing, and scale of production or order quantities. The valves being examined are particularly challenging as they are vendor-supplied items with little information available beyond the original operational specifications and requirements and the actual buying price. Naturally, the implication is that one is dealing with price as the dependent variable rather than cost, which means that it is less feasible to look for a causal linkage between price and item parameters. Notwithstanding, the more fairly an item is priced, the more likely it is that statistical significance will be found. The initial process followed was that of extracting from the source documents all operational specifications and requirements, with a view towards removing any common characteristics and then analyzing the remaining variables to ascertain their influence on the unit price. It is realized that there are many attributes that together contribute towards any item’s overall cost, as well as other environmental factors that affect the part price. Currently, however, given the lack of cost breakdown data, only basic relationships between unit price and those variables considered to be the major cost drivers have been considered.
Figure 15. Indicative cost benefit modeling with regard to performance specification (PO value in $/part plotted against maximum internal leakage, maximum external leakage and pressure drop through the valve, with linear trend fits)
(4) Trending: As previously, the trending relied on the multiple linear regression technique as the means of linking the available cost drivers to the measure of cost, or more accurately price in this case. Figure 15 plots some of the regression findings from the investigation of the relations between performance drivers and factored order value. Some of these initial relations are of use in terms of a Rough Order of Magnitude (ROM) estimate and also provide some idea of where there might be leverage and a case to push for cost reduction. However, it
should be noted that there is often interaction between these identified performance parameters, and these issues are currently being addressed in order to arrive at more accurate relations. (5) Reduction: It was found from the studies that there was a deviation of almost 50% in the cost of procurement but very little discernible difference in the performance specifications. A more influential parameter was the order quantity, although again there were anomalies in the trending. Ultimately, the anomalies were exploited as Opportunities for Cost Reduction (OCR) in the following phase of negotiated (6) Enforcement.
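One common way to begin accounting for such interactions is to augment the regression with pairwise interaction terms, as in the hedged sketch below; the driver and price arrays are hypothetical, and this is offered as an illustration of the idea rather than the authors' model.

```python
# Sketch (assumed, not the authors' model): extend the linear price model with
# pairwise interaction terms between performance drivers such as internal
# leakage, external leakage and pressure drop through the valve.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def fit_price_with_interactions(drivers: np.ndarray, price: np.ndarray):
    """drivers: (n_valves, n_specs) array of performance parameters;
    price: purchase order value per part. Returns fitted model and expander."""
    expand = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
    X = expand.fit_transform(drivers)   # original drivers plus pairwise products
    model = LinearRegression().fit(X, price)
    return model, expand

# Hypothetical usage: three performance drivers for a handful of valves.
drivers = np.array([[0.4, 1.2, 0.8], [0.6, 1.0, 0.9], [0.5, 1.5, 0.7], [0.7, 1.1, 1.0]])
price = np.array([1050.0, 1120.0, 980.0, 1200.0])
model, expand = fit_price_with_interactions(drivers, price)
```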
5.3 Validation 3: General Supply Items; Spigot Case Study
(1) Classification: Bombardier Aerospace Shorts Methods Procurement currently outsources in the region of 34,000 parts across 618 suppliers for use on aircraft subassembly build contracts. Of those parts, the overall part list was first classified according to the commodity code for ‘Machinings’, which took the part count down to 7,000.
Figure 16. An example of a General Supply item: a spigot
(2) Encircling: In encircling a particular cluster for analysis, those parts used in engine nacelle manufacture were considered, reducing the part count to 840. Of those 840 parts, a further step of filtration was carried out to generate a list of items considered to be similar in nature to a number of other parts within the grouping; this required the main characteristics of a part to be present in each item contained within the ‘Similar to’ part set. The parts list of 840 parts was thereby condensed to a list of ‘Similar to’ part sets containing, in total, a shortlist of 201 parts. In this instance the encircling was driven by a more product-
driven and function-role approach than one based on part families, with groups such as valves, fuselage panels and nosecowls. One such ‘Similar to’ part set existed for a particular style of spigot, which is a member of the ‘Round Bar & Tube’ part family, as shown in Figure 16. (3) Normalisation: The individual items/parts are normalized according to make cost, material cost and treatments. According to the ‘Should Cost’ approach, parts with similar attributes in terms of material, geometry, manufacturing and treatment requirements should have approximately similar make, material and treatment costs. (4) Trending: Again the procurement information is more price oriented and therefore, rather than direct modeling, the lowest component cost within each part set is taken as an initial baseline value to which the others should be brought into line, remembering again that Should Cost (SC) is an estimate of a unit price that accurately reflects reasonably achievable contractor economy and efficiency. (5) Reduction: For each part set, the Opportunities for Cost Reduction (OCR) are identified by calculating the differential between each part's current Make, Treatments and Materials costs and the Should Costs for these cost components within the part set, factoring in the quantity of parts per delivery batch, the rate of usage per year and the expected duration of the build contracts on which the parts are being used [Marquez and Blanchar, (2004)]. This gives the overall potential for savings for each ‘Similar to’ part set. (6) Enforcement: The projected savings across six contracts currently in development with Bombardier Shorts are shown in Figure 17. It is interesting to see that there is a greater potential for savings in three projects. This can be accounted for by the fact that Contracts D, E and F have already been the focus of the Should Cost philosophy for some time. If the other parts in the set have been sourced via one supplier, then Shorts Methods Procurement will contact the supplier and discuss the cost drivers for the set of parts to establish why each is not currently being supplied at Should Cost, and ultimately look to renegotiate part costs. If parts are sourced via a few different suppliers then this process is more complicated but in essence the same, for once analyzed and understood the cost drivers will indicate the true unit cost for an item, so through mutually beneficial discussion it should be possible to bring the items to Should Cost. It should be noted that an activity that requires and develops increased understanding of the cost drivers is beneficial for both the supplier and the customer, and is done not ‘to eat unfairly into supplier profit margins’ but to establish a profitable and sustainable relationship between the two based upon enhanced efficiency and best-practice-driven initiatives.
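A minimal sketch of this reduction calculation is given below, assuming a hypothetical table with per-part Make, Material and Treatments costs, yearly usage and remaining contract duration for each ‘Similar to’ part set; all column names are assumptions introduced for illustration.

```python
# Sketch: for each 'Similar to' part set, take the lowest Make, Material and
# Treatments cost in the set as the should-cost baseline and project the savings
# from bringing the other parts into line. Column names are assumptions.
import pandas as pd

def projected_savings(parts: pd.DataFrame) -> pd.DataFrame:
    components = ["make_cost", "material_cost", "treatments_cost"]
    rows = []
    for set_id, group in parts.groupby("similar_to_set"):
        baseline = group[components].min()                 # should cost per component
        differential = (group[components] - baseline).sum(axis=1)
        annual = (differential * group["usage_per_year"]).sum()
        over_contract = (differential * group["usage_per_year"]
                         * group["contract_years_remaining"]).sum()
        rows.append({"similar_to_set": set_id,
                     "annual_saving": annual,
                     "saving_over_contract": over_contract})
    return pd.DataFrame(rows).sort_values("annual_saving", ascending=False)
```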
Figure 17. Enforced savings for the spigot General Supply case study across a number of contracts (projected savings in £/year for Contracts A to F)
6 Discussion and Conclusion
The paper has presented a cost reduction methodology to be deployed within the procurement function to enable the more efficient operation and cost management of supply chains. It is shown that the Cost CENTRE-ing method provides an agile means of responsive cost analysis, estimation, control and reduction for procured aerospace parts. The methodology is based on the structuring of parts into product families and utilizes both manufacturing and performance cost drivers to establish cost estimating relationships. Case studies have been presented to test the generic relevance. A ‘machinings’ example representing outside production used both specific design and cost data, while a General Supply spigot example used analogy applied to the comparison of sub-cost components. An off-the-shelf anti-icing valve example relied exclusively on broad contract-based information (not specific to the part) with purchase order value as the dependent variable. This is complicated by differing suppliers using differing cost stack-up and allocation policies, including profit, which makes it difficult to identify the causal drivers that affect the cost differentials. The Cost CENTRE-ing method uses ‘comparison’ in early data grouping and refinement, and comparison is also the basis of normalization and trend selection, through the selection of those drivers with the smallest measure of random error that can be linked causally to cost. The methodology is being developed as one component of the QUB Pro-Cost suite of procurement-orientated costing tools that links design information with procurement logistics. The proposed methodology was applied to the three validation studies to show that it is effective across a wide range of applications (generic), has been used to significantly reduce the cost of supplied items (accurate), and is being adopted by a leading aerospace manufacturer (relevant). It is concluded that the proposed methodology
exhibits all the above because it is based on an improved understanding of supply chain costing, thereby contributing to the body of knowledge in terms of process understanding, the importance of causal relations in estimating, and the identification of inheritance and family relations in groups of products. It is envisaged that the application can be further developed into a web-based technology that is more responsive in the identification and control of Lean suppliers who operate on an optimal cost basis.
Cost of Physical Vehicle Crash Testing
Paul Baguley a,1, Rajkumar Roy a and James Watson b
a Decision Engineering Centre, Cranfield University, United Kingdom.
b Cranfield Impact Centre, Cranfield University, United Kingdom.
1 Corresponding Author. E-mail: [email protected]
Abstract: The automotive safety-testing environment currently deploys virtual methods and physical crash testing for new product development and validation in safety testing legislation. Cost benefit analysis of crash testing is considered here by estimating the cost of physical crash testing. This has been achieved via the compilation of detailed process maps and AS-IS analyses of the current physical testing procedures. This leads on to detailed work and cost breakdown structures used in the comparative analysis of cost drivers. The consideration of cost drivers at several stages of the New Product Development process aids Concurrent Engineering. This research considers front and side impact only. Keywords: Physical crash testing, cost estimating, cost of crash testing
1 Introduction
Crash testing, or the study of crashworthiness, is a method of measuring how well a vehicle withstands a crash or sudden impact. European legislation still requires physical testing for final safety analysis and legislation but, due to the high number of iterations and repetitions involved in the overall testing procedure, from design to final approval, even a partial conversion to the virtual simulation testing domain could offer large cost-saving opportunities. It may also have implications for raising car safety standards through increased integrity and more complex parametric testing that would not be possible in physical tests. An investigation has taken place into the potential of introducing wider virtual impact testing within the vehicle development process, assessing the costs and benefits to legislation, society and car manufacturers. The study was undertaken via the formulation of detailed process maps and cost breakdown structures. Data and information have been collected via communication with a number of external parties, including private car manufacturers, academics, public services and other interested parties, and these sources were also utilised in the final validation of outputs. This paper reports on the results of investigating physical crash testing only.
2 Aim and Objectives
The aim of the research is to investigate the cost and benefit of the amount of virtual and physical crash testing undertaken. This is developed using cost estimates of physical crash testing. Only front and side impact physical crash testing is considered. The objectives of the research are to: identify cost breakdown structures for physical crash testing; identify cost drivers and some of their behaviour in physical crash testing; and validate the above using industry experts.
3 Literature Background
The process of physical crash testing has evolved dramatically since the first official test took place in 1969, both as a result of increased societal emphasis on vehicle safety and the utilisation of new technology. It remains the primary measure of vehicle crashworthiness, and indeed the only compulsory method accounted for in official legislation. By law, all new car models must pass certain safety tests before they are sold. These specific safety requisites vary worldwide and are defined by the legislation of the country in which the car is to be licensed. In the UK and other countries in the European Union, legislation controlling vehicle safety and crashworthiness is defined by the United Nations Economic Commission for Europe (UNECE) Working Party on the Construction of Vehicles (WP29) [1]. This legislation largely evolved in complexity throughout the 20th Century as road travel increased, in line with harsher controls on vehicle safety and the increasing introduction of safety features such as seatbelts and airbags. Cost Engineering is a multi-disciplinary profession that includes estimating costs [2], for example the development of cost model equations for estimating the costs of design from a dataset [3].
4 Methodology
Seven companies, three academics, two consultants and 30 hours of interviews were used to collect qualitative and quantitative data about the structure and behaviour of costs in the three domains of virtual testing, physical testing and cost to society. Overall, 56 experts were contacted via a questionnaire; however, not all were able to take part. The development of the domains other than that of physical crash testing is not considered in this paper. The types of questions used in the research can be categorised as quantitative and qualitative. Here is an example of a quantitative question: “In your opinion, what would be the cost drivers involved in physical testing (PT) and their weight?” Here is an example of a qualitative question:
“What are examples of successful applications of virtual testing (VT) replacing PT?” An IDEF0 process map of the physical test domain was produced. The process mapping exercise enabled understanding of the target domain and the identification of cost elements and their associated cost drivers. This resulted in the Cost Breakdown Structure shown in Figure 1.
5 Physical Crash Testing Results
The process of physical crash testing has evolved dramatically since the first official test took place in 1969. It remains the primary measure of vehicle crashworthiness, and indeed the only compulsory method accounted for in official legislation. In order to maintain consistency, legislation and benchmarking abilities, a crash is carried out under specific impact configurations, such as speed, the type and positioning of the dummy passengers, and the number of sensors and external recording equipment. These can be defined by the customer in the original specification documents. A standard crash test takes on the order of a week to complete and comprises five primary stages. Once the prototype arrives at the testing house, approximately 3-4 days are spent preparing the crash scenario, including setting up the rig, monitoring equipment and instrumentation, positioning the vehicle and barrier, and setting up the dummy. This is highly labour intensive. Running the impact test typically takes one full day, including the last-minute preparations and final dummy placement, the crash itself, and the immediate post-crash assessment. After the test, the analysis of the crash data typically takes 2-3 days, wherein the data collected are processed, analysed and translated into a usable form for the client. The Work Breakdown Structure (WBS) in Figure 3 details the physical crash activities only. This aids in identifying the key areas of work throughout the process.
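As a rough illustration of how labour cost accumulates over these stages, the sketch below combines the indicative durations quoted above with assumed crew sizes, an assumed day rate and assumed durations for the stages not quoted; the figures are illustrative and are not measured values from the study.

```python
# Illustrative sketch only: accumulate labour cost over the five key stages of a
# physical crash test; day rate, crew sizes and the certification/rework
# durations are assumptions, not data from the study.
stages = [
    # (stage, assumed days, assumed people working the stage)
    ("Test preparation",             3.5, 6),
    ("Test running",                 1.0, 8),
    ("Analysis of results",          2.5, 3),
    ("Vehicle certification",        0.5, 2),
    ("Disposal/equipment reworking", 1.0, 3),
]
DAY_RATE = 450  # GBP per person-day, assumed

running_total = 0.0
for name, days, people in stages:
    stage_cost = days * people * DAY_RATE
    running_total += stage_cost
    print(f"{name:<30} £{stage_cost:>8,.0f}  (cumulative £{running_total:,.0f})")
```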
6 Cost Analysis of Physical Crash Testing
Directed towards a more qualitative, dynamic analysis, a cost graph analysis was created to map the variation, magnitude and evolution of key cost elements. These cost graph analyses are based on the cost data gathered, the interviews compiled and the process maps formulated, and were carried out along two different dimensions: first, mapping the major cost drivers from the test house point of view (labour, instrumentation costs) and plotting their development over the five key stages of a crash test (Figure 2);
second, taking the major cost-driving customer specifications and monitoring their variation within a crash test over the key stages of a vehicle product development process (Figure 4).
6.1 First analysis
Most of these cost components (aside from those which vary according to customer specifications, such as cameras and accelerometers) are presented to the manufacturer as a fixed overhead cost. In reality, however, the distribution of these costs across the various stages is not uniform; in fact different costs accumulate at different stages. The length of each stage is not uniform either, and largely depends on the specific test in question. Although the running of the test itself is more labour intensive than the preparation stage, it takes place over a shorter period of time, and thus the overall cost accumulation of labour is generally larger in the preparation stage. Dummies are expensive kit to acquire and maintain. In addition to purchasing costs, the post-crash maintenance costs (certifying, positioning, setting up and reworking the dummy), together with any additional instrumentation, form the largest costs associated with the dummy. Naturally, the largest share of dummy costs accumulates within the running of the impact test itself, although costs arise again during the equipment reworking stage. The price of still and high-speed photography and data channels is judged via both the number and the technical specification of the equipment itself. This cost component does vary greatly depending on customer specifications, and understandably peaks in the running of the impact test itself. ‘Consumables and overheads’ are another major cost component in the running of a crash test. This category contains all indirect costs, including the cost of the facility and equipment, administration and power. This is typically charged as a fixed percentage of the crash cost, based on standard overhead calculation procedures, considering factors such as depreciation of facilities. A large selection of computational hardware and software is in use throughout the testing procedure, both for monitoring and for analysing the crash. It peaks in the analysis-of-results stage, where the raw crash data are processed.
6.2 Second analysis
The cost of the prototype is by far the dominant cost driver throughout the development stages, proving a huge expense particularly in the initial prototype testing stage, where the model has been crafted on an individual and developmental basis. The prototype cost drops dramatically at the final model certification stage, where the prototype for testing is simply a car directly off the production line. The number of data channels is low during concept and component testing. The figure peaks and plateaus within the intermediate testing stages. Within the final testing stage there are still a considerable number of data channels to pass the legislative testing, but fewer than in the developmental stages. The price of still photography is judged via the average number of exposures produced at each stage. The price of high-speed photography is judged
via both the number and the technical specification of the high-speed recording equipment. Both factors drop slightly between the initial prototype phase and the complete model testing phase. Monitoring increases for the final stage, in line with the legislative requirements in assessing the car’s crash performance. The number of reference targets usually reaches its maximum within the initial prototype development stage, as this is the first stage where a vehicle is tested in its entirety, and thus the stage where most information is gleaned concerning chassis behaviour under impact.
Figure 1. Cost Breakdown Structure derived from IDEF0 process maps
7 Validation
Validation of the physical crash test results in this paper was performed by (1) Expert A, from the automotive sector, with working experience as a Chief Executive Officer and in academia; (2) Expert B, with 10 years of crash testing experience; (3) Expert C, a cost engineer with about 30 years of experience in the automotive sector; and (4) an automotive-related company. The results were altered in parts to reflect the outcome of the validation process.
Figure 2. Evolution of principal cost drivers throughout the key stages of a full scale crash test
8 Discussion
Physical crash testing is the only established method of carrying out full-scale vehicle crash testing currently accounted for within worldwide legislation. All manufacturers need to complete specific physical legislative scenarios in order to place their vehicle on sale commercially. It allows one to monitor the actual (as opposed to theoretical) behaviour of the car under impact, allowing unpredicted faults to be noticed. Physical crash testing is an extreme expense for manufacturers, especially when taking into account the large variety and number of iterations of tests required throughout the vehicle design process. A multitude of expensive prototypes need to be produced, and each can only be crashed once. Each test can only be performed under a specific, exact configuration, and cannot give any valid data concerning any slight or major scenario variations.
Figure 3. Work Breakdown Structure of Physical Crash Testing
Dummies utilised in standard crash tests do not by any means cover an all-encompassing range of body types, shapes and seating positions, when in reality any variation in these factors could have large implications for the extent of occupant injury. Variations in both the crash scenario and the dummy characteristics could easily be altered in a virtual test for multiple iterations, whereas for a physical test a whole new test would need to be set up, adding extra expense and time to the vehicle certification process.
9 Conclusion
This research has considered the costs of front and side impact in physical crash testing. The cost of a physical crash test is influenced by some significant cost drivers. Depending on the stage in the new product development process, a prototype car to be crashed can cost anything from ten thousand pounds to one million pounds, the first or early prototypes being the most expensive. Other major cost elements are: labour, the crash barrier, instrumentation, crash dummies, and the overheads of the facility and other equipment. A crash dummy can cost of the order of a hundred thousand pounds and must then be maintained and re-worked. A crash test can take of the order of a week for preparation, a day for the crash test and two to
three days to analyse the results. There were eight types of labour involved during the full cycle of a crash test, most of which was required during preparation. The more information that was required from a crash test, the more instrumentation was used. Knowing the cost at the design stage will support concurrent engineering of a vehicle through trade-off analysis among the cost drivers.
Figure 4. Evolution of cost driving customer specifications throughout the key vehicle development stages
Due to the significant cost of physical crash testing, the more strategic use of virtual crash testing is a potential route to cost reduction.
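As an indicative illustration only, the order-of-magnitude figures quoted above can be combined into a rough single-test total; every rate and allocation in the sketch below is an assumption rather than a value reported by the study.

```python
# Illustrative arithmetic only: a rough total for a single physical crash test
# using the order-of-magnitude figures quoted above; all rates are assumptions.
day_rate_per_person = 500          # GBP/day, assumed labour rate
crew_size = 6                      # assumed, of the eight labour types involved
prep_days, test_days, analysis_days = 4, 1, 3

labour = day_rate_per_person * crew_size * (prep_days + test_days + analysis_days)
prototype = 250_000                # anywhere from ~10,000 to ~1,000,000 GBP
dummy_rework = 10_000              # assumed share of a ~100,000 GBP dummy's upkeep
instrumentation = 15_000           # cameras, data channels: varies with the spec
overheads = 0.3 * (labour + instrumentation)   # assumed fixed-percentage overhead

total = labour + prototype + dummy_rework + instrumentation + overheads
print(f"Indicative single-test cost: £{total:,.0f}")
```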
10 Acknowledgements
This work acknowledges funding from the Framework 6 Integrated Project on Advanced Protection Systems (APROSYS, www.aprosys.com). The authors are also grateful to the members of the Decision Engineering Centre for their contribution to the research.
11 References
[1] United Nations Economic Commission for Europe, http://www.unece.org/, first accessed 15/02/2008.
[2] Grant, I.C., Baguley, P., and Roy, R., (2007), "Development of a Cost Engineering Knowledge Audit Tool", 4th International Conference on Digital Enterprise Technology (DET 07), Bath University, 19-21 September 2007.
[3] Roy, R., Kelvesjo, S., Forsberg, S., and Rush, C., (2001), "Quantitative and Qualitative Cost Estimating for Engineering Design", Journal of Engineering Design, Vol. 12, No. 2, pp. 147-162.
Estimating Cost at the Conceptual Design Stage to Optimize Design in terms of Performance and Cost
Mohammad Saravi a,1, Linda Newnes b, Antony Roy Mileham b and Yee Mey Goh b
a Mohammad Saravi, University of Bath, Bath, UK.
b Linda Newnes, Antony Roy Mileham and Yee Mey Goh, University of Bath, UK.
1 PhD Student, Mechanical Engineering Department, University of Bath, Bath, BA2 7AY, UK; Tel: +44 (0) 1225 386131; Email: [email protected]; http://www.bath.ac.uk
Abstract. In the highly competitive business environment, cost estimation is a strategic tool which can be used to assist decision making with regard to products throughout their life cycle. 70 to 80 percent of the life-cycle costs of a product are determined by decisions taken by designers during the early design stages. Therefore it is important to estimate and optimise cost as early and as accurately as possible. The main aim of this research is to use typically available information at the conceptual stage of design to estimate cost, in order to optimise design in terms of performance and cost. The main objective is to employ Design of Experiments (the Taguchi method) to use the sparse information more effectively in order to estimate the cost of a product at the early design stage. This paper presents the current status of the research activity. A case study is introduced which illustrates the initial application of the optimization process. Conclusions are then discussed and the future research described. Keywords. Cost estimation, quality techniques (Taguchi method) and conceptual design
1 Introduction
One of the difficult tasks undertaken by designers is to evaluate the cost of a new design. When designers start to design a new product, cost is a critical factor in determining whether the product will be viable or not. Nowadays a company needs to estimate the cost of the product, and the confidence of that estimate, in order to start to design and manufacture a product in detail. Reliable cost estimation of future products plays a significant part for designers in avoiding investing much time and losing considerable sums on non-economically viable products. Good cost estimation plays a significant part in the performance and effectiveness of a business enterprise, as overestimation can result in loss of business and goodwill, whereas underestimation may lead to financial loss to the enterprise. Asiedu and Eu [1] state that:
• The greater the underestimate, the greater the actual expenditure.
• The greater the overestimate, the greater the actual expenditure.
• The most realistic estimate results in the most economical project cost.
70 to 80% of the product cost is said to be committed by the end of the conceptual design stage [2]. Therefore, it is important to estimate and optimise costs as early as possible, since any changes during production are usually very costly. Because of the importance of conceptual design, cost estimation at this stage should be precise, available as soon as possible, and should provide valuable information to product designers. Despite this importance, obtaining accurate cost estimates at this stage is very difficult. The available data are limited and the designer must depend on the use of various synthetic and parametric techniques in the development of the cost estimates. At this stage, the concept of the product is determined, including the overall shape, the main features and the materials used; however, this limited data makes the estimating process extremely difficult. The main aim of this research is to use quality techniques such as the Taguchi method of Design of Experiments to estimate the cost, using the sparse available information more effectively. In this paper a review of cost estimation techniques, their application in different design stages and the information available at the conceptual stage of design is presented and illustrated via a case study.
2 Importance of Cost Estimation at the Conceptual Design Stage
Figure 1. Manufacturing cost commitment during design. [12]
For efficiently achieving the goal of project cost control, accurately estimating the total cost of a system in the early design stage is necessary. Reducing the cost of a product at the early design stage is more effective than at the manufacturing stage. If reliable cost estimates are available during the early design stage, they can help designers to modify a design in order to achieve both performance and cost targets. Many authors [2, 3, and 12] have pointed out the importance of cost estimation at the design stage. Corbett [2] indicates that 80% of a product's cost is committed during the early design stage. Mileham [4] states that 70-80% of a product's cost is determined during the early design stage. Figure 1 shows the importance of cost estimation at the design
stage and the cost commitment curve. All these concur that early estimates are important.
3 Cost Estimation Techniques
There are several different methods for evaluating future cost, but not all of these are suitable for the whole life cycle; some methods are better than others depending on the context. Farineau [5] explains that cost estimation methods can be divided into four categories: intuitive, analogical, parametric and analytical. Roy [6] describes the cost estimation domain using five methods: traditional, parametric, feature based, case based reasoning and neural networks. Finally, Niazi and Dai [7] categorise cost estimation into qualitative and quantitative approaches. Three key techniques have been used by different researchers to estimate costs at the conceptual stage of design: parametric, neural network and feature-based costing methods. Parametric models are widely used, and are often used as the primary or, in some cases, the sole basis for estimating. They are especially useful at the early stages of design in a program where detailed information is not yet available [8]. Cmarago and Rabenasolo [8] define a parametric model as a series of Cost Estimation Relationships (CERs), ground rules and assumptions, using relationships, variables and constants to describe and define a specific situation. A CER is a mathematical expression in which cost is a function of the cost driver variable(s). Neural networks (NN), or artificial neural networks (ANN), connect simple processing elements, through the connections and element parameters, to exhibit complex global behaviour. NN are particularly effective for complex estimating problems where the relationship between variables cannot be expressed by simple mathematical expressions [9]. Ayed [9] explains that an ANN can simulate the action of a human expert in a complicated decision situation. The growth of CAD and CAM technology has played a significant part in the development of feature-based costing. The feature-based cost estimation methodology deals with the identification of a product’s cost-related features and the determination of the associated costs [7]. Roy [6] explains that products can be described as a number of associated features such as holes, edges, folds etc.
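As a simple illustration of the parametric approach, the sketch below fits a single-driver Cost Estimating Relationship of an assumed power-law form to made-up historical data; neither the functional form nor the numbers are taken from the cited references.

```python
# Minimal sketch of a Cost Estimating Relationship (CER) of the kind described:
# cost as an assumed power-law function of a single driver, fitted to made-up data.
import numpy as np

def fit_power_cer(driver, cost):
    """Fit cost = a * driver**b by linear regression in log space."""
    b, log_a = np.polyfit(np.log(driver), np.log(cost), 1)
    return np.exp(log_a), b

a, b = fit_power_cer(np.array([1.0, 2.0, 4.0, 8.0]), np.array([10.0, 17.0, 30.0, 52.0]))
estimate = a * 5.0 ** b    # cost estimate for a new design with driver value 5.0
```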
4 Conceptual Design & Available Information at this Stage
For this research the focus is on estimating at the conceptual design stage. Design can be broken into three major stages: conceptual design, embodiment design and detailed design [13]. Conceptual design is considered to be the most important stage in design. It is in conceptual design that the basic questions of configuration arrangement, size, weight and performance are answered [10]. In the embodiment and detail design stages, material specifications, dimensions, surface condition and tolerances are specified in the fullest possible detail for manufacturing.
Figure 2. Concept development process. [11]
The concept development stage varies from industry to industry and from product to product. Ulrich and Eppinger [11] propose a generic diagram (Figure 2) which shows the activities that must be considered for all projects. Most companies identify customer needs before the concept development process. At the target specification stage, agreement on the general design functions is achieved. At the "generate product concepts" stage various designs are proposed to meet the target specification. From these, a few will be selected at "select product concepts" for further investigation. At "test product concepts", the team chooses the most appropriate concept. Again, at this stage the designers do not have details of the products, only the overall shape, main features and material type.
5 Selecting the Most Appropriate Concept using DOE The target specifications are set to meet the design requirements. However, they are established before the designers know what constraints the product technology will place on what can be achieved [11].
Figure 3. Adding cost to the screening matrix.
Hence, targets are expressed as a range of values. At the target specification stage the values are wide, and they are reduced in the final specification list. The ideal value in the target specification is the best result that could be hoped for, and the marginal value would make a product borderline for commercial viability [11]. Cost is often not considered in selecting the final concept. Figure 3 shows a concept screening matrix typically used by the designer to select the most appropriate concepts. One concept is used as a reference and the others are compared against it and scored. This research aims to add further criteria to a typical matrix to evaluate cost, shown in the matrix in bold.
5.1 Case Study - Fluid Dispenser for the Elderly
To illustrate this, a pilot study was undertaken: the design of a new fluid dispenser for the elderly, for which the needs of the elderly were identified. The requirements were translated into engineering terms and a target specification created (Table 1). Seven concept designs were created, including a plunger, disposable cup, armband, Camelback and water cooler, all with the same target specification, with the metrics in the target specification being rated in terms of functionality using QFD techniques.
Table 1. The target specification for the fluid dispenser [14]
By using DOE, designers will be able to evaluate cost and assess how changing the values of the product specification can influence cost. For example, in Table 1, if the value of capacity is 1000 ml, what will be the output in terms of cost and how accurately can the cost be estimated? If it is changed to 1400 ml, what is the change in cost? Designers can also study how changing these values influences the confidence level of the cost estimates, or for which values the company can have a higher level of confidence in the cost estimates of its future products.
5.2 Using DOE for the case study concepts to identify key cost factors
DOE has been selected to evaluate the optimum cost for four shortlisted concepts. DOE can be used to determine which factors (controllable and uncontrollable)
affect the output of a process. DOE can also be used to reduce deviation against a target; in particular it has been used to minimize the deviation of a quality characteristic from its target value in order to improve the quality of products. Considerable research has been devoted to improving the quality of products using DOE, but little to optimising the quality of the cost estimate.
Table 2. Selecting factors and cost model for different concepts.
The next step in applying DOE is to select the most appropriate factors (metrics) in the target specification (Table 1). Any metric in Table 1 can be considered as a factor (parameters controlled by designers are called control factors) and its value can be considered as a level. For example, we can name capacity factor A, with 500 ml and 700 ml as its two levels. After selecting the factors, the second step is to run DOE for each concept (Table 2) and the third step is to create response graphs (Figure 4). The graphs shown in Figure 4 can be used by designers to identify the factors that have an impact on cost. After selecting the most appropriate factors (for this study there were five), the next step enables the cost to be optimised (in this case, the cheapest possible cost designers can achieve) for each concept, together with its variance, for use in the screening matrix (Figure 3) to select the most appropriate concept. As Figure 4 shows, for each concept the designers considered five factors (A, B, C, D and E), and each of these factors has two levels. There is a cost model for each concept which can be used to see how changing these values affects cost.
Figure 4. Example of response graph for different factors (concept 1).
This is repeated for each concept and the response graphs are created. Figure 4 shows the response graph used in the Taguchi method by designers to assess the effect of the different levels of each factor. In this example the response modelled is
cost. By using the graphs, designers are able to identify which factors have more effect on cost and how changing their levels can affect cost. For example, changing factor A from level 1 to level 2 can reduce the cost. For this example A2, B1, C1, D2 and E1 reduce the cost; however, A2, E1 and B1 show the greatest impact. DOE can be run for the different concepts and there will be a different result for each of them. Figure 5 shows the estimated costs and their confidence. This means that the minimum cost obtainable for the plunger is 25 with a variance of 8, the minimum cost for the armband is 27 with a variance of 10, and so on.
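A minimal sketch of the kind of main-effects (response) calculation involved is given below for five two-level factors arranged in an L8 orthogonal array; the cost model is a stand-in for a concept's own cost model and its coefficients are assumptions, not the case study data.

```python
# Sketch of a Taguchi-style main-effects (response) calculation for five
# two-level factors using the first five columns of an L8 orthogonal array.
import numpy as np

# L8(2^7) orthogonal array, first five columns used for factors A..E (levels 0/1).
L8 = np.array([
    [0, 0, 0, 0, 0], [0, 0, 0, 1, 1], [0, 1, 1, 0, 0], [0, 1, 1, 1, 1],
    [1, 0, 1, 0, 1], [1, 0, 1, 1, 0], [1, 1, 0, 0, 1], [1, 1, 0, 1, 0],
])

def cost_model(levels):
    """Hypothetical cost model for one concept (pence per unit); assumed values."""
    A, B, C, D, E = levels
    return 30 - 3 * A + 2 * B + 1 * C - 1 * D + 4 * E

costs = np.array([cost_model(run) for run in L8])

# Main effect of each factor: mean cost at level 2 versus mean cost at level 1.
for i, name in enumerate("ABCDE"):
    level1 = costs[L8[:, i] == 0].mean()
    level2 = costs[L8[:, i] == 1].mean()
    best = 2 if level2 < level1 else 1
    print(f"{name}: level1={level1:.1f}, level2={level2:.1f} -> choose level {best}")
```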
Figure 5. Example of assessing cost and variance of different concepts.
5.3 Specification and Cost - Screening Matrix
The next stage of the approach is to utilise the results in the concept screening matrix (Figure 3). In this case, the plunger has been selected as the reference concept and the other concepts are compared to it. Any concept whose performance is better than the plunger on a specified criterion receives a ‘+’, and any concept worse than the plunger receives a ‘–’. If they are the same the value is 0. For this pilot study, a benchmark cost of 27 pence with a variance of 10 has been obtained for the armband; in this case both values are worse than those of the plunger. The disposable cup has a cost of 27p with a variance of 6; here the plunger is cheaper than the disposable cup, but the variance of the disposable cup is better than that of the plunger. The same is done for the last concept. By this method designers are able to estimate cost and the confidence of their estimate, and to consider cost as a factor in selecting the most appropriate concept.
6 Conclusion
Nowadays a company needs to know the estimated cost precisely before it starts detailed design and manufacture of a part. Cost estimation should be precise and quick
to perform, be available as early in the design process as possible, and provide valuable information to product designers. Typically 70-80% of the product cost is committed by the end of the conceptual design stage. The main aim of this research is to use the Taguchi method of Design of Experiments to use the sparse concept information more effectively to estimate the cost of a product. The use of DOE has been demonstrated via a case study to assess the product specification and find the optimum solution with a high level of confidence in the cost estimate. By using this technique designers are able to evaluate cost and assess how changing the values of the product specification can influence cost. The technique can then be used to identify which of the values makes the greater contribution to the product cost, and it also enables negotiation over the specification between needed and preferred requirements. Using the fluid dispenser for the elderly as an exemplar, the way of selecting factors and running DOE for each concept has been presented. How the Taguchi method can be used to estimate the optimum cost with a higher confidence level has also been discussed. The findings are then presented in a typical design screening matrix to assist designers in considering cost as a critical factor in selecting the most appropriate concept.
Design for Sound Transmission Loss through an Enclosure of a Generator Set
Matthew Cassidy a,1, Richard Gault a, Richard Cooper a and Jian Wang a
a School of Mechanical and Aerospace Engineering, Queen's University Belfast.
Abstract: Estimation of the sound transmitted through an enclosure is crucial in its design, so a simple but accurate method is required for the design team. One such method is the mass law, which calculates the transmission loss from the mass of the partition in relation to frequency. It is the purpose of this paper to establish whether the mass law is suitable for predicting the sound transmitted through a canopy. In order to carry out this study, transmission loss tests were completed using an international standard. Keywords: Mass law, sound transmission loss.
1 Introduction The facilities consist of a hemi-anechoic chamber (Figure 1) and a reverberation room. They are used for measuring the sound emissions of generator sets and the acoustic properties of lining material. An aperture between the rooms is used for research into the transmission loss of canopy panels. Transmission loss is the property of a wall or barrier that defines its effectiveness as an isolator of sound. It is also referred to as the sound reduction index, and is computed from the logarithmic ratio of sound power incident to sound power transmitted, eq (1). (1) From previous work on the test facilities, the method used to find the transmission loss is the international standard for measurement of sound insulation in buildings and of building elements using sound intensity, ISO 15186. This standard uses sound intensity to calculate the transmission loss from a reverberation room to a hemi-anechoic chamber. A rotating microphone recorded the sound pressure level in the diffuse field, and an intensity probe measured the transmitted sound. Tests were
1 School of Mechanical and Aerospace Engineering, Queen's University Belfast, Ashby Building, Stranmillis Road, Belfast. E-mail: [email protected]
Figure 1. The hemi-anechoic chamber.
carried out on a steel plate and a lead sheet and compared with transmission loss based on the mass law.
2 Theory Laboratory measurement of the sound transmission of a partition mounted in large side walls may give different results due to other transmission paths from the source room to the receiving room. Examples include radiation of sound due to excitation of fixings and transmission through the structure into the walls. There is a limit, known as the flanking limit, to the insulation that can be obtained by improving only the adjoining partition. The test facilities have a sufficiently high flanking limit (Figure 2).
Figure 2. Test facilities flanking limit.
Sound reduction of a partition can be calculated using the mass law, eq (2):
(2)
where m is the surface density (kg/m2) and f is the centre frequency of the third-octave band. This is derived from eq (1) as follows [1]:
(3)
where Pi is the incident sound pressure and Pt the transmitted sound pressure.
(4)
where ω is the frequency, ρ the density and c the speed of sound. For sufficiently high ω, eq (4) becomes:
(5)
(6)
For standard air properties eq (6) becomes eq (2). The mass law assumes that only the mass of the partition is significant in determining the transmission loss, therefore ignoring the effects of stiffness and damping. Resonances at low frequencies and the coincidence effect at high frequencies cause a deviation from the mass law; stiffness and damping are important in these frequency regions. The mass law assumes a mass-spring system, and above the fundamental frequency the response is governed by the mass of the system. The sound transmission properties of a partition can be divided into three distinct regions (Figure 3). Region 1 is where stiffness and resonance are critical, region 2 is mass controlled, and region 3, which occurs above the critical frequency, is again stiffness controlled, with damping also playing a role. Given that the purpose of this study is to establish the accuracy of the mass law, it is necessary to find these regions to show the frequency range for which the mass law is applicable.
Figure 3. Sound reduction and the relationship with frequency, [2].
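Although eq (2) is not reproduced above, a commonly quoted field-incidence form of the mass law is TL ≈ 20 log10(mf) − 47 dB; the exact constant in the paper's eq (2) may differ (for example for normal rather than field incidence). Under that assumption, a minimal sketch of a mass-law estimate for the two test panels is:

```python
# Sketch: mass-law estimate of transmission loss per one-third-octave band.
# A commonly quoted field-incidence form is TL ~= 20*log10(m*f) - 47 dB; the
# exact constant in eq (2) of the paper may differ (normal vs field incidence).
import math

def mass_law_tl(surface_density_kg_m2, freq_hz):
    """Field-incidence mass-law transmission loss in dB (illustrative form)."""
    return 20.0 * math.log10(surface_density_kg_m2 * freq_hz) - 47.0

steel_6mm = 7850 * 0.006   # surface density of a 6 mm steel plate, kg/m^2 (~47)
lead_3mm = 11340 * 0.003   # surface density of a 3 mm lead sheet, kg/m^2 (~34)

for f in [100, 250, 500, 1000, 1250]:   # centre frequencies in the mass-controlled region
    print(f"{f:5d} Hz  steel {mass_law_tl(steel_6mm, f):5.1f} dB  "
          f"lead {mass_law_tl(lead_3mm, f):5.1f} dB")
```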
3 Method 3.1 Setup Equipment required includes the reverberation room setup containing two speakers and a diffuse-field microphone on a rotating boom. Other items are coaxial cables, network leads, a microphone calibrator, a data acquisition unit, and a computer with the relevant software. The intensity probe is mounted on rods and fixed on a tripod, but this is not at a fixed position and moves over a box grid as shown in Figure 4 and Figure 5. The initial step is to detect all connections to the software; this includes the speakers as outputs and the diffuse-field microphone and the intensity probe as inputs. For the test, the signal generated for the reverberation room through the speakers is sourced from the software, with white noise through one speaker and pink noise through the other. The diffuse-field microphone in the reverberation room is set on a rotating boom with a 64 second cycle. After calibration of the recording equipment the background sound levels are recorded in both rooms so that a check can be carried out later. The next step is to use the graphics equaliser to adjust the source sound levels in the reverberation room. When this is complete the transmitted sound can be recorded. The ISO 15186 test recommends a minimum of 10 seconds of recording per point; for this setup 18 second averages were recorded. There are 96 points to measure, and the average is calculated for each one-third octave band frequency.
Figure 4. Measurement grid for the ISO 15186
Figure 5. Intensity probe used for the ISO 15186 tests
Measurements are exported from the software to a spreadsheet where calculations are carried out. Also exported are the calibration data and background levels to complete the necessary checks. 3.2 Calculations Sound transmission loss is the logarithmic ratio of sound power incident to sound power transmitted as shown in eq (1). It can be evaluated from eq (1) that: (7)
138
Matthew Cassidy, Richard Gault, Richard Cooper and Jian Wang
W1 incident sound power
W2 transmitted sound power
For ISO 15186 the sound intensity is measured; therefore, using the fact that power is equal to intensity times area:
(8)
(9)
I1 incident sound intensity
I2 transmitted sound intensity
S area of the test specimen
Sm area of the measurement surface (Figure 4)
Since sound pressure levels are recorded in the reverberation room, the effective intensity in one direction of a diffuse field is [3]:
(10)
P1 is the source sound pressure
ρc is the acoustic impedance
From eq (8) and eq (10):
(11)
therefore from eq (7), eq (9) and eq (11):
(12)
(13)
To convert sound pressure into sound pressure level, and sound intensity into sound intensity level:
(14)
P0 is the reference sound pressure (2 × 10^-5 Pa)
I0 is the reference sound intensity (10^-12 W/m2)
Eq (14) then becomes:
(15)
LP1 average source sound pressure level
LIn average transmitted sound intensity level
Substituting the values for the constants, eq (15) becomes:
(16)
Eq (16) is used to calculate the transmission loss for the international standard ISO 15186.
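Equation (16) itself is not reproduced above, but the relationship implied by eqs (7)-(15) reduces, for standard air properties, to approximately TL ≈ LP1 − 6 − LIn − 10 log10(Sm/S), the 6 dB term coming from the factor of four in the diffuse-field intensity of eq (10). A sketch under that assumption, with placeholder levels rather than measured data, is:

```python
# Sketch of the intensity-method transmission-loss calculation implied by
# eqs (7)-(15). Under standard air properties this reduces approximately to
#   TL ~= Lp1 - 6 - LIn - 10*log10(Sm/S)
# The numbers below are placeholders, not measured data from the tests.
import math

def transmission_loss(lp1_db, lin_db, s_specimen_m2, s_measurement_m2):
    """Approximate intensity-method transmission loss in dB."""
    return lp1_db - 6.0 - lin_db - 10.0 * math.log10(s_measurement_m2 / s_specimen_m2)

# Example: 95 dB source pressure level, 55 dB average transmitted intensity level,
# 1.2 m^2 specimen measured over a 2.0 m^2 box grid (hypothetical values).
print(f"TL = {transmission_loss(95.0, 55.0, 1.2, 2.0):.1f} dB")
```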
4 Results Figure 6 shows the transmission loss of the 6mm steel plate using the ISO 15186 method. From this graph it is observed that there is very good correlation with the mass law over the frequency range 100Hz to 1250Hz. The difference between the mass law and the measured data is shown in Figure 7; the large differences at the higher and lower frequencies can be accounted for by stiffness and damping.
Figure 6. Transmission loss of steel using the ISO 15186.
Another comparison is the 3mm lead sheet tested using the ISO 15186 to compare its accuracy with the mass law for lead. Both Figure 8 and Figure 9 show acceptable correlation for the 100Hz to 1250Hz, the same as the steel. The issues arising due to the stiffness and damping are observed again, but are not as large due to lead being a limp material.
5 Discussion It is shown in the results that there is a good correlation between the mass law and the measured transmission loss for both the steel plate and the lead sheet. This is specifically concentrated in the frequency range of 100Hz to 1250Hz, from Figure 6 and Figure 8, with a difference of less than 2dB. This is the region controlled by the mass, as shown in Figure 3; therefore region 1, due to stiffness and resonance, lies below 100Hz, and region 3, controlled by damping and stiffness, lies above 1250Hz. For steel the critical frequency is 2500Hz, but since lead is a limp material there is no obvious critical frequency, as there is very little difference between the mass law and the measured data over the tested frequency range.
Figure 7. Difference between mass law and measured for steel.
Figure 8. Transmission loss of lead using the ISO 15186.
In order to accurately predict the transmission loss through a partition it is important to know the relevant frequency range, because anything below 100Hz or above 1250Hz will not be accurately estimated by the mass law. However, in region 1 low frequency structural modes dominate. These modes can be identified through finite element (FE) analysis, and through rearrangement of stiffeners resonances can be avoided. In region 3, stiffness and damping affect the sound emission, but noise can be reduced using absorption lining. For this reason the design team can use the mass law for estimating transmission loss in the frequency range 100Hz to 1250Hz.
Figure 9. Difference between mass law and measured for lead.
6 Conclusion As a simple method of calculating the transmission loss of a partition, the mass law is ideal. However, if greater accuracy is required, or if an important frequency lies outside the mass-controlled range, estimation using the mass law cannot provide the knowledge required for design.
7 Acknowledgements The authors thank FG Wilson, Larne for use of their facilities and personnel to carry out the tests.
8 References
[1] A.P. Dowling, J.E. Ffowcs Williams (1983) Sound and Sources of Sound. John Wiley & Sons.
[2] B.J. Smith, R.J. Peters, S. Owen (1996) Acoustics and Noise Control, Second Edition. Addison Wesley Longman Limited.
[3] David A. Bies, Colin H. Hansen (2003) Engineering Noise Control, Third Edition. Spon Press, London and New York.
Design Tool Methodology for Simulation of Enclosure Cooling Performance
Richard Gault a,1, Richard Cooper a, Jian Wang a and Graham Collin b
a School of Mechanical and Aerospace Engineering, Queen's University of Belfast, Northern Ireland. b FG Wilson Engineering Ltd.
Abstract. Virtual design tools are becoming more and more relevant in industrial applications, enabling upfront design performance characteristics to be known prior to prototype build and testing. The use of these virtual design tool methodologies early in the design cycle can result not only in an increased understanding of the product's performance but also in a lead-time reduction. In this work a detailed cooling performance simulation of an enclosed generator set was computed. Multiple rotating reference frames were used for the cooling fans. In parallel a simplified model was developed, negating the radial cooling fan and replacing the engine cooling fan with a fan model. Measured fan performance data was used as a boundary condition for the model. Both models yielded similar airflow rates and ventilation face velocity profiles. Based on the simplified fan modelling technique for cooling performance prediction, a design tool was implemented allowing the automation of geometry, meshing and solver runs.
Keywords. Virtual Engineering, CFD, airflow, design tool
1 Introduction Power generator sets are used to provide secure electric power for a vast arena of applications, ranging from prime power in remote locations and developing regions to construction power, standby power and emergency power for the grid. Traditionally, designs are generated based on previous experience and empirical algorithms, which usually involve an iterative modification process on a specially developed prototype. With ever increasing global competition, methods for synthetically modelling new products early in and during the design process are essential to reduce lead-time, reduce product cost and increase efficiency. This work is concerned with the development of an engineering design tool methodology to allow cooling airflow and radiator performance to be computed upfront in the design process.
1 Research Fellow, School of Mechanical and Aerospace Engineering, Ashby Building, Stranmillis Road, Belfast, BT9 5AH, Northern Ireland; Tel: +44 (0)28 9097 5642; E-mail: [email protected]
Higher performance engines and ever more cluttered and
compact enclosures make it more difficult for designers to evaluate performance characteristics using traditional methods. Computational Fluid Dynamics (CFD) has become a cost-effective tool for predicting cooling and thermal performance characteristics. The components of a typical engine cooling system consist of an axial fan, shroud and radiator. The cooling fan is typically driven from the engine through a geared pulley system to achieve the required fan rotational speed. Generally this setup is well established for pusher-type axial fans, where the flow is forced into the shroud box and through the radiator fin stack. Cluttered environments, including various types of inlet/outlet louvres, ducts, bends and various cowls, can significantly affect the system pressure loss, resulting in the fan performance being impaired. Various studies have been completed for the analysis of engine compartment flows and thermal performance [1-3]. Using CFD, radiator volumetric flows have been predicted and compared with experimental flow results by Aoki, Janaoka and Hara [1], while Hant and Skynar [2] investigated the effects of placing vents near the fan and radiator. They were able to show using CFD that the correlation between measured and predicted flow rates was similar for various vent locations. Lyu and Ku [3] conducted a numerical and experimental study into the effects of the flow field around an engine and cooling system. They showed that by varying the front vent open area the cooling performance was impaired and that a high degree of non-uniformity in the mean flow velocity exists at the front of the radiator. Lee and Hong [4] showed that the non-uniformity of the cooling airflow decreased the heat transfer of the radiator, thus increasing the coolant inlet temperatures. Numerous shroud treatment studies [5,6] have been performed to try to improve the fan stall margin, as the operational range of industrial axial flow fans is limited by stall. Other effects such as the airflow through the radiator core were examined in conjunction with the cowl geometry including the shroud. Recirculation structures were predicted within the shroud and improvements were made to increase the airflow through the core [7]. This work presents an overview of the steps required to develop a design tool methodology for generator set enclosure airflow including thermal performance and water ingress. The methodology itself can be applied to other relevant areas such as the cooling of compact electronic enclosures. Starting from a detailed numerical model using rotating fans to induce the cooling airflow, geometric simplification steps were undertaken enabling the rotating engine fan to be replaced by a fan model. These simplification steps were necessary to allow realistic design tool implementation including automation of the geometry build, mesh and CFD solver execution. The trade-off in accuracy between the detailed and simplified numerical models was examined in terms of the airflow rates and velocity profiles on the inlets/outlets. The implementation of a design tool environment was undertaken, capturing the applied methodologies.
2 Generator Set Enclosure Airflow Prediction Method 2.1 Multiple rotating reference frame model Using a bottom-up topological approach, the surfaces of a typical generator set, namely air inlet vents, engine, alternator, axial cooling fan, radial cooling fan, shroud and outlet box, were generated. Details such as bolt holes, ribs and stiffeners were not modelled, as the numerical cost, both in generating the mesh and running the solver, would be too great. It was also assumed that this detail would not have a significant impact on the airflow rate prediction, as the flow velocities anticipated inside the enclosure are in the order of 1m/s to 10m/s. Both the axial and radial cooling fan blade profiles were generated through a process of reverse engineering using point cloud data from a series of 3D laser scans of the actual blades themselves. Both the axial and radial fans can be seen in Figures 1(a) and (b) respectively. To minimise turnaround time an unstructured mesh was generated manually using ANSYS-ICEM-CFD. The mesh was typically in the order of 2.5 million elements including triangular, tetrahedral and prismatic elements. Prismatic elements were grown from the fan blade surfaces to capture the boundary layer. Prismatic layers were not grown from any other surface. Figure 2 (a) shows the surface geometry of the enclosed generator, with radial and axial cooling fans installed.
Figure 1. CAD representation of cooling fans generated using reverse engineering techniques for (a) axial fan and (b) radial fan.
R.I. Gault, R.K. Cooper, J. Wang and G.Collin
146
Figure 2. Generator set (a) rotating fan model and (b) simplified fan model.
2.2 Fan curve model One of the main assumptions made for this simplified model was that the airflow could be modelled using a fan boundary, essentially an infinitely thin disc, Figure 2 (b). The fan model relies upon performance data as a boundary condition and when using fan performance data, the application should be similar to the performance measurement setup, otherwise considerable discrepancies in airflow rates etc., can be incurred, [8]. The other assumption made was that the radial fan, used for alternator cooling, would not affect the overall cooling performance of the engine or the general flow structures in the enclosure. This assumption was validated through running the radial cooling fan on its own. The inference of the latter assumption was that the radial fan could be ignored in the modelling, significantly simplifying the geometry build. The geometry in Figure 2 (b) was generated again using a bottom-up approach and meshed using the functionality in GAMBIT. Unstructured tetrahedral meshes were generated, with a nominal mesh element size of 0.05m. The final mesh density was in the order of 300,000 elements. 2.3 CFD Analysis The commercial code Fluent [9] was used for all the flow simulations. The Reynolds Averaged Navier Stokes (RANS) equations were solved and the flow was assumed to be an incompressible gas. Inlet vent boundaries were imposed on the ventilation openings, and an outlet vent boundary on the outlet. Polynomial loss coefficients, determined from experimental data, were imposed on the inlet boundaries. A multiple rotating reference frame model was used to drive the rotation of both fans, Figure 2(a), while performance data was supplied and used in the model as a boundary condition in the case of the fan curve disc, Figure 2(b). Adiabatic no-slip walls were used elsewhere in the domain. The standard k-e turbulence model was used to close the RANS equations. Second order discretisation schemes were used for the governing flow equations. In order to
model a restriction caused by the radiator, loss coefficients were determined from experimental values of the radiator core pressure drop. Standard atmospheric conditions were assumed. A discrete phase model was used to simulate water particles, using the standard spherical drag law. Droplet break-up and coalescence was not modelled and all walls were set as particle escape boundaries. Typically after 2500 iterations, a converged steady state solution was obtained as evidenced by the residuals flat-lining.
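The essence of the fan-curve boundary condition is that the solver balances the fan pressure rise against the system pressure loss. The sketch below illustrates that balance outside any CFD code by intersecting a polynomial fan curve with a quadratic system-loss curve; the coefficients are hypothetical and this is not the Fluent fan model itself.

```python
# Sketch of the idea behind the fan-curve boundary condition: the operating
# point is where the fan pressure rise equals the enclosure system loss.
# The polynomial fan coefficients and the loss coefficient are hypothetical.
import numpy as np

def fan_pressure_rise(q):            # Pa, fitted to measured fan performance data
    return 250.0 - 20.0 * q - 8.0 * q**2

def system_pressure_loss(q):         # Pa, quadratic loss from vents, radiator, ducts
    k_loss = 30.0                    # lumped loss coefficient, Pa/(m^3/s)^2
    return k_loss * q**2

q = np.linspace(0.0, 4.0, 4001)      # volumetric flow rate, m^3/s
imbalance = fan_pressure_rise(q) - system_pressure_loss(q)
q_op = q[np.argmin(np.abs(imbalance))]
print(f"operating point ~ {q_op:.2f} m^3/s, "
      f"pressure rise ~ {fan_pressure_rise(q_op):.0f} Pa")
```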
3 Results Figure 3 shows the contour plots of velocity magnitude on the inlet and outlet vents for both simulations. From the detailed simulation, Figure 3 (a), the volumetric flow rate was 2.4m3/s, compared with 2.7m3/s for the simplified fan model. This 12% difference can be attributed to the validity of the fan performance data, and underlines the importance of using supplied data representative of the real application. Although the flow rates do not correlate exactly, the velocity profile trends on the inlets are very representative. For the detailed model, Figure 3 (a), the maximum velocity magnitude over the majority of the inlets is 3.0m/s. Using the simplified design tool model, Figure 3 (b), the velocity profiles are very similar to those in Figure 3 (a), albeit that the magnitudes are slightly higher, typically 3.3m/s maximum velocity. This is consistent with the higher predicted flow rate. The main differences between the two cases can be seen on the outlet vents of the enclosure. The simplified fan model does not include any swirl in the outflow, an assumption which was deemed acceptable for this type of configuration. Typically the flow structures in the bulk of the enclosure are similar for both models, except downstream of the fan, where swirling effects are present, but not sufficient to affect the flow rates. The simulation of the detailed model using the multiple rotating reference frame model typically took several hours to develop and compute, whereas the simplified fan curve model took considerably less time. Provided the fan performance data is acceptable for the application, these results suggest that the current methodology is acceptable for implementation in a design tool.
Figure 3. Contour plots of velocity magnitude on the inlet and outlet vents for (a) multiple rotating fan model and (b) simplified fan curve model using design tool.
4 Design Tool Methodology The numerical modelling process using performance curves as boundary conditions to drive the model allowed the complex cooling fan geometry to be represented as an infinitely thin fan disc. All the main internal components were represented as simple surfaces. Through this process of geometric simplification, points, curves and surfaces for all the components in the model can be generated with relative ease. The design tool methodology hinges on the complete automation of the geometry build, meshing, application of boundary conditions, initiation of the solver run and results output. There is no set way to develop a design tool interface, but in this instance graphical user interface forms were generated which step through the complete process from geometry selection and arrangement through to analysis and results output. The graphical user interfaces are relatively basic but intuitive. Initially the enclosure design is driven by selection of an engine/alternator arrangement from a preloaded database. Once loaded into the user form, the enclosure dimensions can be input, Figure 4 (a). Following this, a control box geometry can be added, Figure 4 (b), deemed important as it may cause partial blockage of the airflow in the vicinity of a ventilation opening. The cooling system components can then be selected, again from a preloaded database, and arranged accordingly, Figure 5(a). Shroud size and depth can be adjusted manually, as can the spatial locations of all the components inside the enclosure. Once the main components are selected, the arrangement of ventilation inlets/outlet and ducts can be specified, Figure 5 (b). Various other duct types and exhaust silencer geometries can be added to ensure all relevant bodies are included. It should be noted that any design tool should be flexible enough to be able to
allow the input of additional parameters, ensuring that knowledge is captured and designs can evolve. In this method, the database spreadsheet containing fan curves, grille parameters, radiators etc. can be continually updated with new performance data. Introduction of a new geometry requires separate modules to be scripted, containing the additional parameters. This way the tool naturally evolves in complexity. Once the analysis was launched, script files were generated invoking the geometry creation process including the necessary boolean operations; a sketch of this automation chain is given after Figure 5 below. The volumetric and surface meshes were generated automatically and all the boundary surfaces selected. The mesh was exported and read into the CFD solver, where all the solver parameters were applied. Once the chosen number of iterations was achieved, the relevant cooling performance data was exported and loaded into a report-style interface, where decisions were made about the validity of the design. These decisions were aided by an expert system which is capable of warning the user if, for example, sufficient cooling performance was not being met, or if the fan was operating near stall. The potential for water ingress was ranked in terms of low, medium and high risk, based on average inlet grille velocities. Care was taken to ensure that convergence was sufficient, and in this method convergence criteria were loaded along with the cooling results.
Figure 4. Design tool graphical user interface for (a) selection of engine/alternator and enclosure input, and (b) control box input.
Figure 5. Design tool graphical user interface for (a) selection of cooling fan, radiator and charge air performance characteristics and (b) analysis launch.
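The automation chain referred to in Section 4 (geometry script, batch meshing, batch solve, report extraction) can be sketched generically as below. The tool names, script syntax and file names are placeholders invented for illustration; the actual implementation used the commercial pre-processors and solver named earlier.

```python
# Generic sketch of the automation chain: write a geometry/mesh script from
# user inputs, launch the mesher and solver in batch, then collect results.
# Tool names, script syntax and file names are hypothetical placeholders.
import subprocess
from pathlib import Path

def build_case(enclosure_dims, fan_curve_file, workdir="case_001"):
    work = Path(workdir)
    work.mkdir(exist_ok=True)

    # 1. Generate a parametric geometry/mesh script from the selected components.
    length, width, height = enclosure_dims
    mesh_script = work / "build_mesh.jou"
    mesh_script.write_text(
        f"create_enclosure {length} {width} {height}\n"
        f"insert_fan_disc curve={fan_curve_file}\n"
        "mesh size=0.05 export=enclosure.msh\n"
    )

    # 2. Run the mesher and CFD solver in batch mode (placeholder executables).
    subprocess.run(["mesher", "-batch", str(mesh_script)], check=True)
    subprocess.run(["cfd_solver", "-batch", "-i", "solver_settings.jou",
                    "-mesh", str(work / "enclosure.msh")], check=True)

    # 3. Read back the quantities the report interface needs.
    return (work / "cooling_report.txt").read_text()

# Example call (would only work where the placeholder tools exist):
# report = build_case((3.0, 1.1, 1.6), "fan_curve.csv")
```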
5 Conclusions A design tool methodology to aid with the cooling performance prediction of an enclosed generator set was developed. Design tools are becoming more necessary in environments where the shortening of a product's lead-time is desired. Design tools can be used early in the conceptual design stage to ensure that certain performance criteria are being met, prior to prototype build and testing. They can also help in performing trade-off studies and sensitivity analyses. In this work, an airflow ventilation design tool methodology was developed based upon simplification of the main components of an enclosed generator. Replacing the rotating fan model with a fan curve model resulted in an over-prediction of the airflow rate by 12%. However, the velocity profiles on the ventilation inlets, and in the bulk of the enclosure, were very similar. The results have shown that the rotating fan model can be simplified using a fan performance curve, provided that the performance data is measured in a configuration similar to its application. Differences in flow rate can be attributed to the validity of the fan performance data supplied. Supplied fan data should be used with caution unless the measurement procedure is known; in this instance the fan performance measurement data was supplied. A design tool environment was implemented using several graphical interfaces allowing selection and arrangement of the main components. The geometry build, meshing and simulation were automated for a limited number of parameters. There are no hard and fast rules for design tool environment development, and in this instance graphical user interfaces were preferred, making the process as intuitive as possible.
6 Acknowledgements The authors would like to thank FG Wilson for providing financial support for this work.
7 References
[1] K Aoki, Y Janaoka, M Hara. Numerical simulation of three dimensional engine compartment air flow in FWD vehicles. SAE; 1990; paper no 900086.
[2] T Hant and M Skynar. Three-dimensional Navier Stokes analysis of front end flow for a simplified engine compartment. SAE; 1992; paper no 921091.
[3] YG Ku and MS Lyu. Numerical and experimental study of three-dimensional flow in an engine room. SAE; 1996; paper no 960270.
[4] YL Lee and YT Hong. Analysis of the engine cooling including flow nonuniformity over a radiator. Int. J. Vehicle Design 2000; Vol 24 No 1: 121-135.
[5] DC Prince, DC Wisler and DE Hilvers. A study of casing treatment stall margin improvement phenomena. ASME 1975; paper 75-GT-60.
[6] SD Hill, RL Elder and AB Mckenzie. Application of casing treatment to an industrial axial-flow fan. Proceedings of the Institution of Mechanical Engineers 1998; Vol 212 Part A: 225-233.
[7] S Chacko, B Shome, V Kumar, AK Agarwal and DR Katkar. Numerical simulation for improving radiator efficiency by air flow optimization. ANSA & μETA International Congress 2005.
[8] R Biswas, RB Agarwal, A Goswami and V Mansingh. Evaluation of airflow prediction methods in compact electronic enclosures. Semiconductor Thermal Measurement and Management Symposium 1999: 48-53.
[9] Fluent 6.3 User's Guide, Fluent Inc.
Using Virtual Engineering Techniques to Aid with Design Trade-Off Studies for an Enclosed Generator Set
Richard Gault a,1, Richard Cooper a, Jian Wang a, Srinivasan Raghunathan a and Graeme Mawhinney b
a School of Mechanical and Aerospace Engineering, Queen's University of Belfast, Northern Ireland. b FG Wilson Engineering Ltd.
Abstract. The influence of even a single parameter change can be difficult to second-guess at the design stage of a product. There may be many fluid, acoustic or structural performance effects that are sensitive to specific parameters. Modifying the product to satisfy one area of performance may weaken it in others. The sensitivity of parameter changes in terms of performance can be extremely difficult to determine through experimentation. Virtual Engineering techniques were used to highlight the sensitivity of Sound Pressure Level (SPL) and cooling airflow rate for a typical inlet vent of a power generator set, whose open area was reduced to induce a parameter change. For this parameter change, Computational Fluid Dynamics (CFD) simulation showed that the cooling airflow rate was unaffected. An acoustic analysis using the engine and alternator as the noise source showed that the Sound Pressure Levels (SPLs) were sensitive to this parameter change at low frequencies. This work underlines the necessity to always consider both the acoustic and cooling performance as a coupled system during design for this particular product.
Keywords. Virtual Engineering, CFD, Multi-physics, Acoustics, Enclosure
1 Introduction Power generator sets are used to provide secure electric power for a vast arena of applications, ranging from prime power in remote locations and developing regions to construction power, standby power and emergency power for the grid. Traditionally, designs are generated based on previous experience and empirical algorithms, which usually involves an iterative modification process on a specially developed prototype.
1 Research Fellow, School of Mechanical and Aerospace Engineering, Ashby Building, Stranmillis Road, Belfast, BT9 5AH, Northern Ireland; Tel: +44 (0)28 9097 5642; E-mail: [email protected]
With ever increasing global competition, methods for synthetically modelling new products are essential to reduce lead-time, product
cost and remain within noise legislative targets. Virtual Engineering techniques provide a platform to evaluate design trade-off studies prior to design implementation. This paper uses two virtual engineering methods applied to a diesel engine generating set to evaluate the trade-off between cooling airflow and noise for a single parameter change. In general the cooling airflow is induced using a cooling fan driven by the engine. Cluttered environments, including various types of inlet/outlet louvres, ducts, bends and various cowls, can significantly affect the system pressure loss, resulting in the fan performance being impaired. The major elements in the total sound energy radiation from a diesel generating set are due to the action of its vibrating solid surfaces and aerodynamic noise [1]. The presence of ventilation openings significantly weakens the overall sound attenuation characteristic of the enclosure. Frequently, generator sets are installed in areas of high sensitivity where the acceptable noise levels are very low, especially as mains power can fail at any time, day or night. The control of diesel engine noise presents a number of particular problems and also offers scope for a number of treatments designed to give particular benefits to the users. Growing environmental awareness has led to recently tightened European regulations relating to the permissible sound power level of power generating sets. Prediction of noise from a cluttered enclosed diesel generator set is inherently complex due to geometric shape complexities and variable damping factors of the system components, including complex sound transmission mechanisms through multi-layered structures. Further, the sound radiated by the individual sources is influenced by the rest of the machine, altering the radiation directivity. Several noise transmission paths are prevalent, namely the airborne path, the mechanical path and the coupled sound transmission path. Ju et al. [2] used boundary element methods and CFD to aid with the design of an acoustic enclosure for their application. Interestingly, they measured the source noise using a sound intensity method to calculate the sound power, then used this as a boundary condition to drive the source model. Choi, Kim and Lee [3] also considered a simple design method for an engine enclosure considering cooling and noise.
2 Virtual Engineering Technique for Airflow Analysis 2.1 CAD Modelling and Mesh Generation Using a bottom-up topological approach, the surfaces of a typical generator set, namely air inlet vents, engine, alternator, axial cooling fan, radial cooling fan, shroud and outlet box, were generated. Details such as bolt holes, ribs and stiffeners were not modelled, as the numerical cost, both in generating the mesh and running the solver, would be too great. As both the axial and radial cooling fans induce the airflow, their blade profiles were generated through a process of reverse engineering using point cloud data from a series of 3D laser scans of the actual blades themselves. Both the axial and radial fans can be seen in Figures 1(a) and (b) respectively.
Figure 1. CAD representation of cooling fans generated using reverse engineering techniques for (a) axial fan and (b) radial fan.
2.2 Meshing, Pre-Processing and CFD Analysis To minimise turnaround time an unstructured mesh was generated. The mesh was typically in the order of 2.5 million elements including triangular, tetrahedral and prismatic elements. Prismatic elements were grown from the fan blade surfaces to capture the boundary layer. Figure 2 shows the surface geometry of the enclosure.
Figure 2. Generator set CAD model.
The commercial code Fluent [4] was used for the flow simulations. The Reynolds Averaged Navier Stokes (RANS) equations were solved and the flow was assumed to be an incompressible gas. Inlet vent boundaries were imposed on the ventilation openings, along with a nominal turbulence intensity of 1% and length scale of 0.08m. Polynomial loss coefficients, determined from experimental data, were imposed on the inlet boundaries. An outlet vent boundary was used at the exit of the enclosure, with the same imposed loss polynomials as the inlets. A
multiple rotating reference frame model was used to simulate the rotation of both fans. Adiabatic no-slip walls were used elsewhere in the domain. The standard k-e turbulence model was used to close the RANS equations. Second order discretisation schemes were used for the governing flow equations. In order to model a restriction caused by the radiator, loss coefficients were determined from experimental values of the radiator core pressure drop. Standard atmospheric conditions were assumed. Typically after 2500 iterations, a converged steady state solution was obtained as evidenced by the residuals flat-lining and by the drag monitors on the rotating fans. 2.3 CFD Results The flow rate was predicted at 1.92m3/s, for the initial total inlet area of 0.92m2. The inlet vent normal face velocities varied from approximately 0.1m/s to just over 6.0m/s around the top of the inlets, Figure 3(a). The velocity profile exists due to the nature of the flow into the duct in this particular case. For the parameter change, the total inlet area was reduced to 0.52m2, in effect blocking off the portion of the inlet where the normal face velocities were low. The flow rate for this new case was identical to the first computation at 1.92m3/s. The normal to face velocities are all now approximately between 5.0m/s and 6.5m/s. What this suggests is that the open area can be reduced by approximately one third of the original area without any penalty in cooling airflow rate. Also in this instance the pressure drop in the enclosure was similar for both cases.
Figure 3. Normal to face velocities for (a) original inlet vent and (b) reduced area inlet.
3 Virtual Engineering Technique for Airborne Noise Analysis In order to predict the SPLs at the ventilation openings for the same generic CAD model as that detailed earlier, the multi-physics solver RADIOSS [5] was used. Multi-physics is an increasingly used virtual engineering method and involves the modelling of fluid/structure interaction as a coupled system. In effect
it brings together these separate disciplines, enabling the user to model the problem in a more realistic way. RADIOSS is an explicit finite element code, solving the compressible fluid flow equations with solid structures, using an Arbitrary Lagrangian Eulerian formulation (ALE) [6]. For this solver a 3 million element structured grid was generated, adhering to specific mesh sizing criteria in the noise generation and propagation zones. 3.1 Pre-Processing The Young’s modulus, Poisson ratio and density were defined for the material properties of the various components of the enclosure. A fluid/structure interface was defined between the solid surface mesh and the surrounding fluid mesh. Lagrangian x, y and z boundary conditions were defined for the fluid in contact with the solid walls. Non-reflecting boundaries were defined at the ventilation openings.
Figure 4. Global mesh structured grid representation of the generator set for the acoustic analysis.
3.1.1 Engine/Alternator Noise Source Characterisation In order to mimic the noise generation from the vibrating engine and alternator surfaces, a detailed experimental survey of the surface vibration was undertaken using Laser Doppler Vibrometry (LDV). With the instrumentation, vibration data over a frequency band ranging from 50Hz up to 1000Hz was measured. The spacing of measurement points on the engine/alternator surface was 1/6th of the shortest wavelength of interest, in this case approximately 0.06m, avoiding any severe forms of spatial aliasing. For all the surfaces of the engine and alternator, approximately 700 measurement points were required to adhere to the spacing criterion. The sample rate was set at 4kHz for a sample duration of 1.0 second. Figure 5 shows a typical vibration time history for one of the measurement locations on one side of the engine/alternator. A reference vibration on the bore of
the engine was used to ensure that all the subsequent measurements on the various surfaces were phase locked relative to the reference.
Figure 5. Typical vibration measurement for a point on the engine/alternator.
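The sampling choices in the vibration survey follow from two simple checks: a point spacing of roughly 1/6 of the acoustic wavelength at the highest frequency of interest, and a sample rate above the Nyquist limit. A small sketch of those checks, with an assumed engine/alternator surface area as the only invented quantity, is:

```python
# Sketch of the sampling checks behind the vibration survey: measurement-point
# spacing of ~1/6 of the shortest acoustic wavelength of interest, and a sample
# rate comfortably above the Nyquist limit. The surface area is a placeholder.
c_air = 343.0          # speed of sound in air, m/s
f_max = 1000.0         # highest frequency of interest, Hz
f_sample = 4000.0      # LDV sample rate, Hz

wavelength_min = c_air / f_max
spacing = wavelength_min / 6.0               # ~0.057 m, i.e. roughly 0.06 m
print(f"point spacing ~ {spacing:.3f} m")

surface_area = 2.5                           # m^2, hypothetical engine/alternator area
points = surface_area / spacing**2
print(f"approx. measurement points needed: {points:.0f}")

assert f_sample > 2.0 * f_max                # Nyquist criterion satisfied (4 kHz > 2 kHz)
```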
3.2 Acoustic Analysis For the numerical simulation, all 700 vibration measurements were input to corresponding node locations on the engine/alternator surface mesh, Figure 6. The hard-linked fluid/structural mesh ensured that the driving vibrations would propagate into the fluid.
Figure 6. Measured surface vibrations imposed on node points on engine/alternator structural model.
The unsteady pressure fluctuations in the computational domain were computed utilizing the explicit time integration, including non-diffusive streamline upwind Petrov-Galerkin (SUPG) for momentum advection, and Large Eddy Simulation for turbulence. The model was run for a time period of approximately 20ms just sufficient to resolve the lowest frequency of interest, which in this case is 50Hz.
3.3 Acoustic Results During the simulation the unsteady pressures and velocities were recorded at node points on the ventilation openings. A Fourier transform of these pressure time histories was completed and octave-wide plots of Sound Pressure Level (SPL, dBA) were calculated. Figure 7 shows an octave-wide contour plot at a centre frequency of 105Hz. The SPL peak varies between 60dB(A) and 70dB(A) over the complete vent, Figure 7(a). For the case where the inlet vent was reduced in open area, identical to that of the airflow simulation, the SPL over all the open area is typically in excess of 70dB(A), Figure 7(b). At a centre frequency of 210Hz, Figure 8, the SPL tends to vary in magnitude from approximately 85dB(A) to around 77dB(A) at the bottom of the complete inlet, Figure 8(a). For the reduced area inlet, Figure 8(b), the trend in SPL is markedly different to that of the full inlet, Figure 8(a). There is a band of SPL running through the top half of the vent between 80dB(A) and 82dB(A). Further plots of SPL at a centre frequency of 1256Hz can be seen in Figure 9. Interestingly the trends and magnitudes of SPL are very similar for both vent configurations, Figure 9(a) and (b), and show evidence of higher acoustic modes. In practice these higher frequencies are attenuated by acoustic foam. Finally, average plots of SPL over the complete frequency range for both vent cases can be seen in Figures 10 (a) and (b). An increase of approximately 2dB(A) can be seen at the top of the reduced area vent, Figure 10 (b).
Figure 7. SPL dB(A) – CF 105Hz for (a) complete inlet and (b) reduced area inlet.
Figure 8. SPL dB(A) – CF 210Hz for (a) complete inlet vent and (b) reduced area inlet.
Figure 9. SPL dB(A) – CF 1265Hz for (a) complete inlet vent and (b) reduced area inlet.
Figure 10. SPL dB(A) – Average for (a) complete inlet vent and (b) reduced area inlet.
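The post-processing step described in Section 3.3, turning a nodal pressure time history into an octave-band SPL, can be sketched as follows. The synthetic signal, sample rate and band edges are illustrative only, and A-weighting is omitted for brevity.

```python
# Sketch of the post-processing step: Fourier-transform a nodal pressure time
# history and express the power in an octave band as a sound pressure level.
# The synthetic signal and band edges are illustrative; A-weighting is omitted.
import numpy as np

fs = 10000.0                       # solver output sample rate, Hz (placeholder)
t = np.arange(0, 0.02, 1.0 / fs)   # ~20 ms record, as in the simulation
p = 0.2 * np.sin(2 * np.pi * 105 * t) + 0.05 * np.random.randn(t.size)   # Pa

# One-sided mean-square pressure per frequency bin (Parseval-consistent).
P = np.fft.rfft(p)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
ms_bin = 2.0 * np.abs(P) ** 2 / t.size ** 2
ms_bin[0] /= 2.0                   # DC component is not doubled

# Octave band centred at 105 Hz: edges fc/sqrt(2) .. fc*sqrt(2).
fc = 105.0
band = (freqs >= fc / np.sqrt(2)) & (freqs <= fc * np.sqrt(2))

p_ref = 2e-5                       # Pa, reference sound pressure
spl = 10.0 * np.log10(ms_bin[band].sum() / p_ref ** 2)
print(f"octave-band SPL at {fc:.0f} Hz ~ {spl:.1f} dB")
```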
4 Conclusions For a typical enclosed engine generator set, virtual engineering techniques were employed to aid with the understanding of trade-off effects of a single parameter change with respect to cooling airflow and noise. The parameter change was induced by simply blocking off approximately one third of the ventilation open area. The cooling airflow was modelled using CFD, and a multi-physics approach was employed to model the interaction between the vibrating engine/alternator and the surrounding fluid. Several hundred measured surface vibrations were directly input to the engine/alternator mesh model as a surface boundary condition to drive the acoustic propagation. SPLs were predicted over a frequency range of 50Hz to 2000Hz. The overall airflow rate was unaffected by this particular parameter change, but the SPL magnitudes at the lower frequencies were increased by reducing the open area. Differences in acoustic duct modes, as a result of the parameter change, were visible. Virtual engineering techniques can provide upfront design information pertaining to the trade-offs in question, giving insight into performance characteristics. In this work only the airborne path was considered, due to computational constraints. Structural modes were not computed.
5 Acknowledgements The authors would like to thank FG Wilson and Invest Northern Ireland for their grant contribution to the project.
6 References
[1] Mahon LLJ. Diesel Generator Handbook. Butterworth-Heinemann Ltd, 1992; 536-602.
[2] HD Ju, SB Lee, WB Jeong and BH Lee. Design of an acoustic enclosure with duct silencers for the heavy duty diesel engine generator set. Journal of Applied Acoustics 2004; 65: 441-445.
[3] JW Choi, KE Kim and HJ Lee. Simple design method of the engine enclosure considering cooling and noise reduction. Journal of KSNVE 1999; 9(1): 184-8.
[4] Fluent 6.1 User's Guide, Fluent Inc 2003-01-25.
[5] Mecalog, RADIOSS CFD user manual version 4.3, 2001.
[6] Donea J. Arbitrary Lagrangian Eulerian finite element methods. Computational Methods for Transient Analysis 1983; 1: 10.
Sound Transmission Loss of Movable Double-leaf Partition Wall
Jian Chen a, Jian Wang a,1 and Gerard Muckian b
a School of Mechanical and Aerospace Engineering, Queen's University Belfast, Ashby Building, Stranmillis Road, Belfast BT9 5AH, Northern Ireland, UK b Master's Choice Ltd. Silverbridge, Newry, BT35 9LJ, Northern Ireland, UK
Abstract: In this paper, laboratory tests of the Sound Transmission Loss (STL) of movable double-leaf partition walls are presented. Three sets of sample partition walls with different configurations are employed, and all tests were carried out under the guidance of the ISO 140-1 and 140-3 standards. The results show that a bigger air gap, increased frame damping and reduced frame stiffness all benefit the walls' acoustic performance. It was also demonstrated that when movable partition walls are mounted in laboratory tests in a manner similar to the actual construction, their STL is much worse than that of the counterpart drywalls. Keywords: sound transmission loss, movable partition wall, laboratory test.
1 Introduction Movable partition walls are broadly used in the construction industry for subdividing rooms. This kind of partition enjoys the merit of flexible placement owing to its lightweight characteristic. They share a similar double-leaf configuration with fixed partition drywalls. Like all lightweight walls, the sound insulation level of movable partition walls is an important factor that needs to be taken into account during their application. Existing research has studied the sound performance of the double-leaf configuration and the Sound Transmission Loss (STL) of drywalls. Sharp [1] developed an empirical method for predicting the STL of a double panel by analyzing the power radiated from a point- or line-loaded panel. Mead and Pujara [2] proposed to use space-harmonic expansions to study periodic partitions; they set up a two-dimensional model in which the panel is represented as a beam supported by regularly spaced elastic supports. Experimental sound insulation data for different partition configurations has been presented in J.Q. Wang's work [3]. The carefully planned experimental parametric study presented by Hongisto et al. [4] also strengthens the understanding of the double panel's acoustic performance. Wang
1 Corresponding Author. Email: [email protected]
et al. [5] studied the smeared and periodic models for sound transmission across partition walls, and the predictions of the two models were compared on the basis of practical testing results. Most of this research is based on experiments with fixed partition walls. In this paper, laboratory sound transmission tests on three sets of movable partition walls were conducted. Owing to the differences in configuration, the experimental results show that the air gap and the frame's stiffness and damping influence the movable partition walls' STL. Meanwhile, compared to previous research on fixed partition walls, it was also concluded that the movable partition walls' acoustic performance is constrained by the installation methods adopted in the tests.
2 Prediction of sound transmission loss The theories presented here are intended only to estimate the variation in the movable partition walls' STL rather than absolute values. Previous work [5] put forward a periodic model of the sound transmission loss through double-leaf lightweight partitions stiffened with periodically placed studs. In this model, the panels on the two sides are assumed to be infinitely large and stiffened in one direction by studs, each of which is simplified as translational and rotational springs with two pieces of lumped mass attached to the two panels respectively. The STL of a double-leaf partition can be predicted by the following route.
Figure 1. Side view of double-leaf partition wall with studs
The panel transverse displacements $W_i(x,t)$ and the velocity potentials ($\Phi_1$, $\Phi_2$, $\Phi_3$) in the incident, cavity and transmitted areas (Fig. 1) can be presented as

$$W_1(x,t)=\sum_{n=-\infty}^{\infty}\alpha_{1,n}\,e^{j\left[k_x+2n\pi/L\right]x}\,e^{j\omega t} \qquad (1a)$$

$$W_2(x,t)=\sum_{n=-\infty}^{\infty}\alpha_{2,n}\,e^{j\left[k_x+2n\pi/L\right]x}\,e^{j\omega t} \qquad (1b)$$

$$\Phi_1(x,y,t)=I\,e^{j\left[k_x x+k_{y0}y-\omega t\right]}+\sum_{n=-\infty}^{\infty}\beta_n\,e^{j\left[(k_x+2n\pi/L)x-k_{yn}y-\omega t\right]} \qquad (2a)$$

$$\Phi_2(x,y,t)=\sum_{n=-\infty}^{\infty}\varepsilon_n\,e^{j\left[(k_x+2n\pi/L)x+k_{yn}y-\omega t\right]}+\sum_{n=-\infty}^{\infty}\zeta_n\,e^{j\left[(k_x+2n\pi/L)x-k_{yn}y-\omega t\right]} \qquad (2b)$$

$$\Phi_3(x,y,t)=\sum_{n=-\infty}^{\infty}\xi_n\,e^{j\left[(k_x+2n\pi/L)x+k_{yn}y-\omega t\right]} \qquad (2c)$$

where $W_i(x,t)$ is the panel transverse displacement, the coefficients $\alpha_{i,n}$ can be considered as the travelling-wave amplitudes of the structure, $L$ is the spacing between studs, $\omega$ is the angular frequency and $k_x$ is the component of the wave number in the $x$ direction (Fig. 1). With reference to Fig. 2, one has:

$$k_x=k\sin\theta \qquad (3)$$

$$k_y=k\cos\theta \qquad (4)$$

where $k=\omega/c$ is the wave number of the incident plane wave. $k_{yn}$ is the wave number in the $y$ direction, which can be calculated from the following formula [6,7]:

$$k_{yn}=\sqrt{\left(\omega/c\right)^2-\left(k_x+2n\pi/L\right)^2} \qquad (5)$$

Figure 2. One periodic element and notation

When $\omega/c<\left|k_x+2n\pi/L\right|$ the corresponding pressure waves become evanescent, and the appropriate sign convention is then to replace $jk_{yn}y$ in the exponent of equation (2a) by $\gamma_{yn}y$, where $\gamma_{yn}=\sqrt{\left(k_x+2n\pi/L\right)^2-\left(\omega/c\right)^2}$. Corresponding changes are made to equations (2b) and (2c). $\Phi_1$, $\Phi_2$ and $\Phi_3$ represent the velocity potentials in the incident, cavity and transmitted areas respectively. The coefficients $\beta_n$, $\varepsilon_n$, $\zeta_n$ and $\xi_n$ may be considered as the travelling-wave amplitudes of the incident (to the bottom panel), reflected and transmitted waves, which are coupled with the motions of the two panels.

The coefficients $\alpha_{i,n}$ can be found by solving the linear equation system derived using the principle of virtual work for one bay of the partition (Fig. 1) [6,7], shown below:

$$D_1\frac{\partial^4 W_1}{\partial x^4}+m_{p1}\frac{\partial^2 W_1}{\partial t^2}+j\omega\rho_0\left(\Phi_1-\Phi_2\right)=0 \qquad (6)$$

$$D_2\frac{\partial^4 W_2}{\partial x^4}+m_{p2}\frac{\partial^2 W_2}{\partial t^2}+j\omega\rho_0\left(\Phi_2-\Phi_3\right)=0 \qquad (7)$$

where $D_i$ is the flexural stiffness of the panel and $m_{pi}$ is the mass per unit area of the panels. Following the procedures proposed in [5], the power transmission coefficient is:

$$\tau(\theta)=\frac{I_t}{I_i} \qquad (8)$$

where $I_i$ and $I_t$ are the incident and transmitted normal intensities, respectively, given by [6,7]:

$$I_i=\frac{\omega\rho_0 k_{y0}}{2}\left|I\right|^2 \qquad (9a)$$

$$I_t=\frac{\omega\rho_0}{2}\sum_{n=-\infty}^{\infty}\left|\xi_n\right|^2\mathrm{Re}\left[k_{yn}\right] \qquad (9b)$$

Substitution of (8) and (9) into the following equations (10) and (11) completes the calculation of the STL, $R_L$, across a double-leaf partition. The transmission coefficient averaged over all angles of incidence is:

$$\bar{\tau}=\frac{\int_0^{\pi/2}\tau(\theta)\sin\theta\cos\theta\,d\theta}{\int_0^{\pi/2}\sin\theta\cos\theta\,d\theta} \qquad (10)$$

from which the transmission loss is calculated as:

$$R_L=10\log_{10}\left(1/\bar{\tau}\right) \qquad (11)$$
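A short numerical illustration of eqs (3)-(5) and (10)-(11) is given below. Solving the full coupled system (6)-(7) for the space-harmonic amplitudes is beyond a few lines, so a simple single-panel mass-law transmission coefficient is used as a stand-in for τ(θ) purely to demonstrate the angle averaging; the stud spacing and surface density are illustrative, and the output is not a prediction of the periodic model.

```python
# Numeric sketch of eqs (3)-(5) and (10)-(11). The full periodic model requires
# solving the coupled system (6)-(7) for the wave amplitudes; here a simple
# single-panel mass-law transmission coefficient stands in for tau(theta) purely
# to illustrate the angle averaging, so the numbers are not model predictions.
import numpy as np

c, rho0 = 343.0, 1.21          # speed of sound (m/s) and air density (kg/m^3)
L = 0.6                        # stud spacing, m (illustrative)
f = 500.0                      # frequency, Hz
omega = 2 * np.pi * f
k = omega / c

# Eq (5): y-direction wave numbers of the space harmonics (evanescent if imaginary).
theta = np.deg2rad(45.0)
kx = k * np.sin(theta)                                   # eq (3)
for n in range(-2, 3):
    arg = (omega / c) ** 2 - (kx + 2 * n * np.pi / L) ** 2
    kyn = np.sqrt(arg + 0j)
    print(f"n={n:+d}  k_yn = {kyn:.2f}  {'propagating' if arg > 0 else 'evanescent'}")

# Eqs (10)-(11): diffuse-field average of tau(theta) and the resulting STL.
m_eff = 50.0                                             # stand-in surface density, kg/m^2
def tau(th):                                             # single-panel mass-law stand-in
    return 1.0 / (1.0 + (omega * m_eff * np.cos(th) / (2 * rho0 * c)) ** 2)

th = np.linspace(0.0, np.pi / 2, 2000)
w = np.sin(th) * np.cos(th)
tau_bar = np.trapz(tau(th) * w, th) / np.trapz(w, th)
print(f"R_L = {10 * np.log10(1.0 / tau_bar):.1f} dB")
```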
3 Experimental arrangement and measurements Three sets of movable partition walls were tested. Each sample consists of three panels, and there are two categories of panel thickness: 110mm and 150mm. One of the 110mm samples and the 150mm sample are composed of three standard panels; for simplicity, we will call them 110 standard and 150 standard respectively in the following sections. The other 110mm sample is composed of two standard panels and one final panel with a telescoping panel mounted inside, and this sample will be named 110 plus in the following sections. All panels are supported by aluminium frames at their flanks, which play the same role as the aforementioned partition-wall studs. Specifically, the aluminium frames of the 150 standard panels are divided at the centre line and riveted again via a connecting aluminium strip, and rubbers are set at the connection points; this design leads to a lower configuration stiffness coefficient and a better damping characteristic (see Fig. 3).
Figure 3. The configuration of the 150 standard panels
For the standard panels, there is a layer of 15mm MDF board as the outside facing on both sides of the panel, and the boards are screwed to the aluminium frame at the flanks. A layer of 9.5mm gypsum board is screwed to the inside of each MDF board. The inside faces of both gypsum boards are covered by a layer of 2.5mm polymeric acoustic pad. A jack is located in the centre of the panels to allow the extension of the sealing blocks at top and bottom. Five wooden beams are periodically screwed onto one side of the combined board (MDF + gypsum board + acoustic pad) to support the mechanical extending rods and, to some extent, to horizontally strengthen the aluminium frames, despite there being only a touch contact between them. Two layers of 25mm mineral wool are filled into the cavity of the 150mm thick panels (see Fig. 3); for the 110mm thick panels, only one layer of 50mm mineral wool is packed inside. The differences in their configuration can also be seen in Figs. 3 to 5. The area mass data of the materials are listed in Table 1. The whole mass of a 110 standard panel is 168kg and the surface density is 50 kg/m2; for the 150 standard panels, the corresponding values are 189 kg and 56 kg/m2 respectively. Table 1. Surface density of materials
Because a larger space is needed to house the complicated mechanical system in the cavity of the final panel, the gypsum boards are omitted and only one MDF side is covered by a polymeric acoustic pad. Four wooden beams are screwed vertically to the combined board (MDF + acoustic pad) to support the mechanical
system. The telescoping panel is mounted inside the final panel; its main material is 1.8 mm steel sheet, clad on the outside with 1 mm laminate (surface densities are given in Table 1). The combined mass of the final panel and telescoping panel is 135 kg, and the surface density is 40 kg/m2.
Figure 4. Installation of the 110 standard panels
Figure 5. Installation of the 150 standard panels
Figure 6. Installation of the 110 plus panels
The sound insulation of the movable partition walls was tested under reverberant sound conditions, in which sound is incident on one side of the specimen from all directions. All mounting arrangements and tests complied with the
ISO 140-1 and 140-3 standards. The test samples were mounted against a steel frame in a 10.58 m2 aperture between two reverberant chambers, which have been constructed to suppress the transmission of sound by flanking paths. A slam post for the panels to fit into was fitted along the left-hand side of the aperture, and the edges were lined with silicone. Sealing blocks were extended from the top and bottom of the panels to the top and bottom edges of the aperture. For the 110 standard (see Fig. 4) and 150 standard (see Fig. 5) samples, the right-hand side was packed with 38 mm neoprene sponge; for the 110 plus panels, the telescoping panel was extended out to compress a 5 mm neoprene sponge adhered in advance to the right-hand side of the aperture (see Fig. 6). The edges of the sample were packed with closed-cell foam. Additionally, in order to mount the test partitions in a manner as close as possible to the actual construction, the special seal treatments at the joints used when installing drywall for laboratory tests were omitted in all three tests. In each test, a steady sound source with a continuous spectrum in the frequency bands of interest was used to drive an omni-directional loudspeaker, which was located sequentially in two positions in the source chamber. Sound levels were measured simultaneously in both chambers at one-third octave intervals from 100 Hz to 5 kHz, as prescribed in ISO 140-3. The measurements were made using a swept microphone scan in the receiving chamber and a swept microphone in the source chamber to obtain a good average of the sound pressure levels in each chamber. The Sound Reduction Index (R) in decibels (dB) is calculated in each frequency band using the equation:
R = L1 – L2 + 10 log10(S/A) dB
where:
L1 is the average sound pressure level in the source chamber (dB)
L2 is the average sound pressure level in the receiving chamber (dB)
S is the area of the test specimen (m²)
A is the equivalent absorption area in the receiving chamber (m²)
The equivalent absorption area in the receiving chamber was determined from twelve sets of reverberation time measurements using various microphone positions, made in accordance with ISO 354. The Weighted Sound Reduction Index (Rw) in decibels (dB) was calculated by comparing the eighteen values of Sound Reduction Index from 100 Hz to 5 kHz with a defined reference curve that was adjusted until the requirements of ISO 717-1 were met. The Rw rating system has two spectrum adaptation terms (C; Ctr) which take into account different noise source spectra: C relates to higher-frequency noise, while Ctr relates to lower-frequency noise. These terms indicate the performance drop of the wall in the corresponding frequency ranges. For example, an Rw (C; Ctr) of 55 (−1; −4) corresponds to a sound transmission loss of 55 − 4 = 51 decibels if the incident noise is predominantly low frequency.
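A minimal sketch of the band-by-band calculation described above is given below; the input levels and absorption area are illustrative values, not measured data.

```python
import math

def sound_reduction_index(L1, L2, S, A):
    """Sound Reduction Index R = L1 - L2 + 10*log10(S/A), computed per frequency band.

    L1, L2: average sound pressure levels in source/receiving chambers (dB)
    S: specimen area (m^2); A: equivalent absorption area in the receiving chamber (m^2).
    """
    return L1 - L2 + 10.0 * math.log10(S / A)

# Illustrative numbers only:
R_band = sound_reduction_index(L1=95.0, L2=58.0, S=10.58, A=20.0)
print(f"R = {R_band:.1f} dB")

# Applying a spectrum adaptation term, e.g. Rw(C; Ctr) = 55 (-1; -4):
Rw, Ctr = 55, -4
print(f"Low-frequency-weighted performance: {Rw + Ctr} dB")
```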
4 Results and discussion
Fig. 7 shows the test results for the three sets of partition walls. The weighted sound reduction indices for the 150 standard, 110 standard and 110 plus panels are 39 dB, 36 dB and 35 dB respectively, so the 150 standard panels clearly provide better sound insulation than the other two kinds.

Figure 7. Test results (STL in dB versus frequency in Hz for the 110 standard, 150 standard and 110 plus samples)

The STL curve of the 150 standard panels shows only small fluctuations in the band from 100 Hz to 2000 Hz, rising by approximately 1-2 dB per octave; above 2000 Hz the STL rises strongly, at about 7 dB per octave. The STL curve of the 110 standard panels exhibits more pronounced fluctuations: from 100 Hz to 160 Hz there is an increase of more than 10 dB, but beyond that the STL falls with increasing frequency, reaching its minimum at 500 Hz, after which the curve enters a smooth rising section at a rate of 3-5 dB per octave. For the 110 plus panels, the increase in STL is also pronounced in the low-frequency band; from 400 Hz to 2000 Hz the curve shows small increases accompanied by a few small dips, and from 2000 Hz to 4000 Hz it rises at about 4 dB per octave. The more complex aluminium frames of the 150 standard panels have lower structural stiffness and a larger damping coefficient. It is also worth noting that the cavity of the 150 standard panels is thicker than those of the other two kinds. It is therefore to be expected that the 150 standard panels have the best sound insulation, which is in line with the acoustic behaviour of double-leaf partitions. According to Sharp's conclusion [8], the slope of the STL of an uncoupled double wall is 18 dB/octave above the lowest mass-air-mass resonance frequency f0; the experimental results of Hongisto et al. [4] also showed an increase of approximately 10 dB/octave above f0 for a coupled double-leaf wall. However, this characteristic is not evident for all
three test samples. It is also to be expected that the sound-absorbing mineral wool in the cavity weakened all resonance and coincidence dips. Because there is no special seal treatment at the joints, as there is when drywall is installed for laboratory tests, the measured results for these movable partition walls are worse than those of common drywall. Flanking sound leakage was shown to influence the acoustic performance of the movable partition walls to a large extent, giving a loss of about 8 dB compared with the results of Hongisto et al. [4]. Judging by the differences between the STL curves of the 110 standard and 110 plus samples, the lower surface density of the final panel and the use of the telescoping panel weakened the sound insulation.
5 Conclusions
The results of this paper reveal the STL differences between different movable partition walls. The stiffness and damping of the frame affect the STL, and increasing the cavity thickness improves the sound insulation of movable partition walls. The telescoping panel and final panel used in practice are weak points in acoustic performance. In addition, the installation method also influences the STL of movable partition walls to a large extent, giving roughly an 8 dB loss in comparison with common drywalls.
6 Acknowledgments This project is sponsored by DTI, UK and Invest NI (KTP 006294, Noise control through partitions). Special thanks are due to Masters Choice, UK for providing testing samples.
7 References
[1] Sharp B.H., Prediction methods for the sound transmission of building elements, Noise Control Engineering 11, 1978.
[2] Mead D.J. and Pujara K.K., Space-harmonic analysis of periodically supported beams: response to convected random loading, Journal of Sound and Vibration 14(4), 525–541, 1971.
[3] Wang J.Q., An experimental study on sound transmission loss of lightweight panel-stud partitions, Journal of Tongji University, Vol. 2, 79–91, 1981.
[4] Hongisto V., Lindgren M. and Helenius R., Sound insulation of walls – an experimental parametric study, Acta Acustica united with Acustica 88, 904–923, 2002.
[5] Wang J., Lu T.J., Woodhouse J. and Langley R.S., Sound transmission through lightweight double-leaf partitions: theoretical modelling, Journal of Sound and Vibration 286, 817–847, 2005.
[6] Mathur G.P., Tran B.N., Bolton J.S. and Shiau N.-M., Sound transmission through stiffened double-panel structures lined with elastic porous materials, Proceedings of the 14th DGLR/AIAA, 1992.
[7] Lee J.-H. and Kim J., Analysis of sound transmission through periodically stiffened panels by space-harmonic expansion method, Journal of Sound and Vibration 251(2), 349–366, 2002.
[8] Sharp B.H., A study of techniques to increase the sound insulation of building elements, Wyle Laboratories Report WR 73-5, El Segundo, California, USA, 1973.
Modelling Correlated and Uncorrelated Sound Sources
Mark Boyle1, Richard Gault, Richard Cooper and Jian Wang
School of Mechanical and Aerospace Engineering, Queen's University of Belfast, Northern Ireland.
Abstract. In this study the relationship between near-field sound intensity and normal surface velocity, relating to a sound source, was investigated. Predictive techniques will aid the development of an integrated design tool. Correlated and uncorrelated sound sources in the frequency range of 63 Hz to 1000 Hz were examined through an experimental and numerical activity. The sound source consisted of 8 independent loudspeakers in an enclosure. Sound intensity in the near field of the source was measured over an enclosing surface. The Substitution Monopole Technique (SMT), for both correlated and uncorrelated sources, was used to compute normal monopolar surface velocities from the measured near-field sound intensity levels. The calculated normal monopolar surface velocities were used as boundary conditions to drive an acoustic Indirect Boundary Element Model (IBEM) of the sound source. From 63 Hz to 400 Hz the correlated SMT was accurate within +/-2 dB but under-predicts above this frequency. The uncorrelated SMT using pseudo-random phasing was accurate within +/-3 dB across the frequency range. Keywords. Substitution Monopole Technique (SMT), Sound sources.
1 Introduction
Designing products to adhere to strict acoustic emission regulations can be challenging in an increasingly environmentally conscious society. Iterative trial-and-error techniques, including prototype building and testing, can be a very expensive way to support product development. The integration of an accurate acoustic model of a product at an early stage in the design process is essential to evaluate the environmental impact of the product. In addition, sound attenuation design can then be developed concurrently with the design process rather than during product testing, as is usually the case.
1 School of Mechanical and Aerospace Engineering, Queen's University Belfast, Ashby Building, Stranmillis Road, Belfast, BT9 5AH, Northern Ireland. Electronic mail: [email protected]
Mathematical models of small discrete acoustic sources, such as loudspeakers, have been developed and are well understood. Mathematical models of large acoustic sources with complex geometry, comprising multiple sound sources, are less developed. This is partly due to the computation requirements for large complex source models. As computational power is ever increasing, vibro-acoustic computations are becoming more viable. Whatever the application for the vibro-acoustic simulation, whether that be in the aerospace, automotive or rail industry, accurate models of the acoustic sources are an essential component. Having a well described numerical model of the noise source will increase the accuracy of sound transmission loss predictions. The most widely used techniques for modelling acoustic sources involve Equivalent Source Methods (ESM) of which there are several variations. The models use the concept of replacing the complex acoustic sources with an array of smaller multiple sources that act collectively to create the same effect, using minimal input data as boundary conditions, without reducing the accuracy of the model. In developing satisfactory noise source models, there has been a compromise between complexity, accuracy and computational time. The Substitution Monopole Technique (SMT) assumes that the surface of a large source can be represented as a number of smaller sub-surfaces. These sub-surfaces contain a number of monopole sources each with a specific volume velocity.
Figure 1. Complex sound source and equivalent SMT sound source
The main advantage of the SMT method is that even when the transmission path is changed the source model remains unchanged. There are two variants of the SMT, namely correlated and uncorrelated. The correlated monopole method works best for low frequencies and simple vibration patterns. At higher frequencies and with complex structural vibration fields a large amount of input data is required which often makes the method impractical. The SMT using uncorrelated monopoles which assumes random phase is often referred to as the Equivalent Power Volume Velocity (EPVV) method. Verheij [1,2] proposed a model of an acoustic source using uncorrelated monopoles that assumes the phase can be randomised. Radiating sources that are dimensionally similar to the wavelength of the source are directional and randomising the phase would be incorrect. Augusztinivicz et al. [3] discussed the phase problem where a computational BEM model using uncorrelated SMT could not be solved, since phase information is required for the monopole boundary conditions.
In this study, a relatively simple sound source model is developed to further test the SMT method for correlated and uncorrelated sources. A method for uncorrelated SMT with equally distributed pseudo random phasing is proposed.
2 Method
There are several different techniques for modelling complex sound sources, but all follow a similar methodology:
• Experimentation – measurement of the sound source
• Modelling of the source – calculation of the boundary conditions
• Computation – solving of the radiation problem
SMT requires in-situ sound intensity measurements of the source in order to calculate the boundary conditions required for the Indirect Boundary Element Method (IBEM) computation.
2.1 Experimentation
The same physical apparatus is used for the correlated and uncorrelated experiments; the only differences are in the phase of the signal supplied to each loudspeaker. The sound source consists of a rectangular wooden enclosure of dimensions 600 mm, 300 mm and 450 mm for the length, width and height respectively. Eight 107 mm diameter 8-Ohm Eurotec loudspeakers were inserted into the right-hand side face of the enclosure, as shown in Figure 2.
Figure 2. Sound source enclosure over a reflecting plane
Near-field sound intensity measurements were made at centres of an equi-spaced grid, decomposed into sub-grids. The sound intensity level at each grid position was measured using a Brüel & Kjær Sound Intensity Probe Kit Type 3599,
combined with a 50 mm microphone spacer adequate for the 63 Hz to 1000 Hz frequency range. The grid placed over the sound source has dimensions of 1200 mm, 900 mm and 900 mm in length, width and height respectively (Figure 3). The frame was manufactured out of mild steel tubing and the grids were marked out using string. On the right-hand side of the sound source the grid spacing was 150 mm, while all the other enclosing surfaces had a spacing of 300 mm. The normal distance from the measuring location to the surface of the sound source was 300 mm in all planes. The experimental apparatus was placed in a hemi-anechoic chamber of dimensions 32 m, 26 m and 12.5 m for length, width and height respectively, shown in Figure 4.
Figure 3. Sound source and measurement grid with sound intensity probe
Figure 4. Hemi-anechoic chamber
The loudspeakers were driven by two Inter-M QD-4240 4-channel power amplifiers. An 8-channel interface, RME Fireface 400, was directly linked through a firewire cable to a computer using Adobe Audition software, to drive the speakers independently of each other. A Brüel & Kjær Pulse 3560C data acquisition unit was used to capture the experimentally measured sound intensity levels and vibration data.
2.1.1 Correlated Experimental Procedure
Using the Adobe Audition software, sine waves were generated and played through each of the eight loudspeakers at spot frequencies, in phase and of equal voltage. The frequencies investigated were 63 Hz, 100 Hz, 160 Hz, 250 Hz, 400 Hz, 600 Hz and 1000 Hz.
2.1.2 Uncorrelated Experimental Procedure
The 8 loudspeaker sources were used to replicate a random source. It has been statistically verified that a minimum of 8 sound sources can reproduce an acoustically uncorrelated source to sufficient accuracy. By trial and error, it was found that equally spaced phases over a band of approximately 0–270° gave sound power levels (SWL) similar to the average SWL of a large number of repetitions with random phases. Figure 5 shows a typical configuration of the pseudo-random phase angles allocated to each speaker.
Channel           1       2      3        4      5        6      7        8
Phase (degrees)   33.75   67.5   101.25   135    168.75   202.5  236.25   270
Speaker           3       4      8        6      2        7      5        1

Figure 5. Phase angle, channel and speaker number
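The phase assignment of Figure 5 can be generated programmatically; the short sketch below assumes eight channels and a 0–270° span, and the shuffled speaker order is arbitrary rather than the one used in the experiment.

```python
import random

def pseudo_random_phases(n_speakers=8, span_deg=270.0, seed=None):
    """Equally spaced phases over ~0-270 deg, assigned to speakers in a shuffled order.

    Mirrors the configuration style of Figure 5; the shuffling seed is arbitrary.
    """
    phases = [span_deg * (i + 1) / n_speakers for i in range(n_speakers)]  # 33.75, 67.5, ..., 270
    speakers = list(range(1, n_speakers + 1))
    random.Random(seed).shuffle(speakers)
    return dict(zip(speakers, phases))  # speaker number -> phase in degrees

print(pseudo_random_phases(seed=1))
```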
2.2 Modelling
The SMT method was used to calculate the surface velocity boundary conditions of the monopoles at each frequency.
2.2.1 Correlated Monopole Sources
Assuming n equal-strength monopole sources with volume velocity Q_i, the total volume velocity Q_T, from Reynolds [4], is:

Q_T = n Q_i        (1)
It can be shown that the normal velocity v_n for correlated sound sources over n sub-areas is:

v_n = (1 / (n A_i f)) √(2 c Π / (π ρ_0 F_d))        (2)
where f is the signal frequency, c is the speed of sound in air, Π is the sound power, ρ_0 is the density of air, A_i is the monopole area and F_d is the reflection factor.
2.2.2 Uncorrelated Monopole Sources
The cumulative effect of n equal-magnitude uncorrelated monopole sound sources is obtained from the squared volume velocity Q_i^2, Reynolds [4], such that:

Q_T^2 = n Q_i^2        (3)

It can be shown that the normal velocity v_n for uncorrelated sound sources over n sub-areas is:

v_n = (1 / (A_i f)) √(2 c Π / (n π ρ_0 F_d))        (4)
The reflection factor F_d for uncorrelated sources is taken to be 2, from Verheij [2], i.e. each source acts as a baffled monopole radiating into half space.
2.3 Computation
Using SYSNOISE, an acoustics software package [5], an Indirect Boundary Element Method (IBEM) model was developed and used to compute the radiation and scattering in the near field. A two-dimensional mesh of the speaker box was created using an element length of 50 mm, which satisfies the rule of six elements per wavelength at the highest frequency of interest, 1000 Hz. The acoustic source was represented by eight monopoles corresponding to the speaker centres. Each monopole was made up of two mesh elements grouped together, which allowed for equal spacing. This monopole configuration remained the same for both the correlated and uncorrelated models. Velocity boundary conditions with phase information were applied to each monopole group. Figure 6 shows a screen shot of the IBEM model with the boundary conditions applied. The plane below the speaker box mesh represents a symmetry boundary.
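As a hedged sketch of how the boundary conditions of equations (2) and (4), as reconstructed above, translate into numbers, the Python function below computes the monopole normal velocities for the correlated and uncorrelated cases; the air properties and the example power are assumed values for illustration only.

```python
import math

def smt_normal_velocity(power_w, freq_hz, area_m2, n_sources,
                        correlated=True, c=343.0, rho0=1.21, Fd=2.0):
    """Monopole normal velocity per sub-area for the SMT boundary conditions.

    Correlated sources, eq. (2): v_n = (1/(n*A_i*f)) * sqrt(2*c*P / (pi*rho0*Fd)).
    Uncorrelated sources, eq. (4): v_n = (1/(A_i*f)) * sqrt(2*c*P / (n*pi*rho0*Fd)).
    power_w is the sound power of the source obtained from the intensity scan.
    """
    root = math.sqrt(2.0 * c * power_w / (math.pi * rho0 * Fd))
    if correlated:
        return root / (n_sources * area_m2 * freq_hz)
    return root / math.sqrt(n_sources) / (area_m2 * freq_hz)

# Illustrative use at 250 Hz for 8 monopoles of 0.009 m^2 each and 1 mW of sound power:
v_corr = smt_normal_velocity(1e-3, 250.0, 0.009, 8, correlated=True)
v_unco = smt_normal_velocity(1e-3, 250.0, 0.009, 8, correlated=False)
print(f"correlated v_n = {v_corr:.4f} m/s, uncorrelated v_n = {v_unco:.4f} m/s")
```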
Figure 6. IBEM model of the sound source with boundary conditions applied.
3 Results
Sound power levels (SWLs) in the frequency range 63 Hz – 1000 Hz were used for comparison. Computed and measured total SWLs emitted by the correlated sound source, over the frequency range, are shown in Figure 7.
Figure 7. SMT Correlated source sound power level comparison
Computed and measured SWLs emitted by the uncorrelated sound source, over the frequency range, are shown in Figure 8.
Figure 8. SMT Uncorrelated source sound power level comparison
4 Discussion The predicted SWLs using the correlated SMT show a good correlation up to 160Hz, under predicting thereafter. At 1000Hz, the deviation is approximately 8dB from measured. These results clearly indicate that for this particular source, the interference effects between the sources are progressively dominant as the frequency increases beyond 400Hz. This is a direct result of the destructive interference of progressively shorter wavelengths. At 400Hz, the wavelength is 0.85m, similar to the source width, 0.9m. The predicted SWLs using the uncorrelated SMT using experimental phase again show similar trends with magnitudes within +/- 3 dB. Up to frequencies of 250Hz, the SMT model tends to over predict. This may be a result of spatial averaging of the locally measured sound intensities over the measurement area.
5 Conclusions
The SWL results suggest that destructive interference is prevalent at frequencies above 400 Hz, i.e. at wavelengths less than the width of the sound source itself. The spatial trends were shown to be similar in the near field, with lower SWLs at increasing frequencies, a result of this destructive interference. The correlated SMT method tends to under-predict at these higher frequencies. It can be concluded that up to 400 Hz the source does act as a ‘compact source’ and hence the correlated SMT model works. Once the wavelength becomes significantly less than the width of the source, the correlated SMT method is no longer valid.
For the uncorrelated source, the uncorrelated SMT method shows an acceptable correlation with measured data over the entire frequency range, within +/-3dB. The SMT models developed demonstrate sufficient accuracy to form the basis of a design tool to predict the acoustic behaviour of larger and more complex sound sources. With future development this modelling technique could be integrated into initial product design to predict acoustic emissions and to evaluate attenuation methods thus facilitating concurrent engineering.
6 Acknowledgements The authors would like to thank FG Wilson for providing financial support for this work.
7 References
[1] Verheij J.W., On the characterisation of the acoustical source strength of structural components, Proceedings of ISAAC 6: Advanced Techniques in Applied and Numerical Acoustics, 1995; 24.
[2] Verheij J.W., Inverse and reciprocity methods for machinery noise source characterization and sound path quantification. Part 2: Transmission paths, International Journal of Acoustics and Vibration 2, 1997; 1.
[3] Augusztinivicz F. et al., 21st International Seminar on Modal Analysis, ISMA, Leuven, 1996; 55–68.
[4] Reynolds D., Engineering Principles of Acoustics: Noise and Vibration Control, 1st ed., Boston, Allyn and Bacon, 1981; 520–560.
[5] LMS SYSNOISE (Computer Aided Engineering software). Available at: <http://www.lmsintl.com>. Accessed on: May 1st 2008.
Interoperability
Backup Scheduling in Clustered P2P Network
Rabih Tout a,1, Nicolas Lumineau b,1, Parisa Ghodous c,1 and Mihai Tanasoiu d,2
a, b, c University of Lyon 1 – Villeurbanne, France.
d Alter Systems Company – Lyon, France.
Abstract. Peer-to-Peer (P2P) backup systems have become popular solutions for ensuring data availability. The key idea of these systems is to use the shared memory space between peers to store data. Nevertheless, some limits appear because of the large scale and dynamic context of P2P networks and the lack of collaboration between peers. Likewise, as development data become more voluminous and distributed over multiple sites, these data need an optimized way of being backed up. Within the framework of concurrent engineering, our motivation is to create a corporate environment that facilitates communication and collaboration between peers, to better support large-scale, highly dynamic systems and to handle the backup process in an efficient and organized way. In this paper, we propose a collaboration-based approach that efficiently exploits the resources of each peer. Our solution is based on a specific clustered organization of the network in order to improve peer interactions. Performance is validated through simulations. Keywords. Peer-to-Peer, Backup, Network clustering, Scheduling.
1 Introduction
Having a backup strategy is essential to avoid critical data loss and to ensure data durability and availability. Usually, the backup process uses a Client/Server (C/S) architecture and consists of making additional copies of data (replicas) that are distributed over one or more disks (e.g. RAID systems) stored on a centralized server. Thus, when original data is lost after a crash, at least one replica can be found to restore it. Backup efficiency is related to the reliability of the storage medium and to replica management. Using a C/S architecture has some disadvantages, such as congestion or disconnection caused by the central server going down. Moreover, a server's software and hardware are usually very restrictive and expensive. As we can see, traditional
1 Corresponding Author E-mail: [email protected] 2 Corresponding Author E-mail: [email protected]
backup requires special administration and equipment, which is certainly expensive and cumbersome. Some recent studies have started using peer-to-peer (P2P) networks for backup. P2P computing has gained a lot of recognition in recent years with file-sharing software such as Napster3, KaZaA4 and eMule5. In a P2P network, a large number of participating computers can share resources without the need for a centralized controlling server. This decentralization property aims to prevent bottlenecks and single points of failure in the network. P2P backup applications have emerged as an approach that promises to offer a backup process close to the traditional one. These applications use the free hard disk space of computers in a network to store the data of the users of the backup system. This process is transparent to the user, who does not need to know how the backup is done nor on which computers in the network the data is stored; those are details the user does not need to worry about. The backup process consumes a lot of resources, so backup is usually done during hours when PC resources are free. A busy peer may receive many requests from other peers in the system but may not be able to answer any of them; sending all these requests is a waste of time. A good idea is therefore for each peer in the system to present its availability time to the others. In this article we define two schedules that allow each peer to indicate its desired backup time and the time during which it is available to serve other peers. We show how our proposition limits the number of requests sent across the network and gives peers a better understanding of the backup system. Efficient P2P backup management involves a peer collaboration strategy in order to make use of available resources. The remainder of the paper is organized as follows. Section 2 discusses related work. Section 3 introduces our proposition. Section 4 gives an overview of the global system architecture, while section 5 presents the system evolution. Section 6 presents our experimental results. Finally, section 7 concludes the paper.
2 Related works
P2P file-sharing software has gained a lot of success around the world, grouping millions of users in the same network. As P2P networking has become popular and well known in many networking applications, some research projects have started integrating it into backup applications. The most notable research projects on P2P backup are pStore [1], PeerStore [2] and Pastiche [3]. These systems provide the user with the ability to securely back up and restore files from a distributed network of peers. A user first installs a client that helps him generate keys and mark files for backup. Selected files are encrypted to prevent other members of the backup network from seeing their private contents. Each file is then broken into digitally signed blocks using a synchronization algorithm, and the list of block identifiers (or signed metadata) is
3 http://www.napster.com
4 http://www.kazaa.com
5 http://www.emule-project.net
assembled in a way that indicates how the blocks can be reassembled. The blocks are inserted into the P2P network. If the file is changed and backed up again, only the changes to the file are stored. Peers look for trading partners in order to create new replicas of blocks. To retrieve a file, the user specifies its name and version. The metadata, or list of file block identifiers, is retrieved and indicates the location of the blocks for the selected version. When the file blocks are retrieved, their signatures are examined to ensure file integrity. Once a copy of each block has been obtained, the blocks can be decrypted and the original file reassembled. Unlike pStore, which makes no distinction between the actual data and the metadata stored on the network, PeerStore uses Chord [4] only for the metadata, which is much smaller than the raw backup data. Metadata management in PeerStore is accomplished using a distributed hash table (DHT). PeerStore eliminates all blocks that already have a sufficient number of replicas in the network; the number of replicas of each block is determined by consulting the metadata DHT. Pastiche assumes that complete disk images will be created and stored on the backup network. Computers based on the same operating system and/or using similar applications will have huge amounts of redundant data consisting of OS-specific libraries, executable programs, templates, etc. Using structured overlay networks without encryption, all of these similar files would hash to the same key value, and unnecessary bandwidth and storage consumption could be avoided if the redundant data blocks were recognized. In existing P2P backup systems, the backup process is based on trading partnerships between peers. In a small network this does not work efficiently, because nodes cannot find partners easily. These systems do not evolve over time to respond to the existing needs of peers in the network, so the only solution is to have a large number of participating peers for the systems to work well. To establish a partnership, a peer needs to search for and contact other peers in the system until it gets a confirmation. This wasted time is caused by the lack of knowledge about other peers' availability to accept backup data.
3 Proposition
3.1 Network and Group modeling
We first define the network architecture on which our backup system is based. As illustrated in Figure 1, we define a clustered network where peers are organized in groups. A group is a set of collaborating peers supplying a backup service. These peers are gathered around one or more group masters (a kind of super-peer). The group masters are indexed in a DHT by a group id. Peers have two different types of logical connections: connections that bind peers in the same group, to simplify the collaboration process between them and thus provide the backup service, and connections used by a peer to specify the target group that handles its backup.
Figure 1. Network overview (peers, peer groups, group masters and the master network with its DHT; intra-group connections support collaboration, backup connections handle backup)
To model our network, we consider the P2P network as a connected graph G(N, C), where N is the set of nodes representing the peers and C is the set of edges representing logical connections between peers. We consider the partition G = {G_i}_{1≤i≤k} of N, where G_i represents a logical group. The size of a group G_i is the number of peers related to it. Each cluster groups collaborating peers. In order to organise peer collaboration, at least one peer should be elected as the master of the cluster. For the sake of clarity, we suppose that only one group master is related to a group C_i; obviously, a group-master redundancy strategy would be required to ensure the system's robustness. All other peers (besides the master of the cluster) in a cluster C_i are named P_jk with 1 ≤ k ≤ m_i, where m_i is the number of peers inside C_i. We consider the partition L = {CC_k, BC_l} of C, where CC_k represents the logical connections for peer collaboration (i.e. intra-cluster connections) and BC_l represents the logical connections for backup management.
3.2 Schedules
Our network architecture is based on groups of peers supplying a backup service. In order to build an efficient service and to define a collaborative strategy between peers, we consider different schedules, illustrated in Figure 2.
3.2.1. Schedule Definitions
We consider two different schedules stored by each peer. The first one expresses the peer's availability and the second one specifies the time desired for personal data
backup. We represent these schedules respectively by a Peer Service Schedule and a Peer Backup Schedule. These schedules can be set up for monthly, weekly or daily services.
Peer Service Schedule: the Peer Service Schedule (PSS) is defined on each peer and represents the hours during which the peer is able to provide services to other peers in the network. Thus, each peer P_ij stores one peer service schedule, named PSS_j.
Peer Backup Schedule: the Peer Backup Schedule (PBS) also concerns a single peer and represents the time during which the peer wants to back up its own data. Thus, each peer P_ij stores one peer backup schedule, named PBS_j.
Figure 2. Peer and Group schedules expressing needs and availability of peers and groups
To aggregate the information stemming from the individual Peer Service Schedules and Peer Backup Schedules, we respectively define a Group Service Schedule and a Group Backup Schedule handled by the group master.
Group Service Schedule: a Group Service Schedule (GSS) represents the service schedule of the group. It aggregates, by union, the service schedules of all peers belonging to the group. Each group master SP_i stores one group service schedule, named GSS_i.
Group Backup Schedule: a Group Backup Schedule (GBS) aggregates, by union, the backup schedules of all peers belonging to the group. This schedule represents the number of backups accepted for each time slot and is used to express the backup load handled by the group. Thus, each group master SP_i stores one group backup schedule, named GBS_i.
3.2.2. Schedule Representation
The schedules defined above are used in our backup protocol. To complete their definitions, we give more details about the schedule representation. The schedules are stored as a 7 × 12 matrix, with one row per day of the week and one column per time slot; we arbitrarily choose to consider 12 one-hour time periods per day.
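A minimal sketch of this representation is shown below, assuming a boolean 7 × 12 matrix per peer; only the union aggregation used for the group schedules is illustrated, and the load-counting aspect of the GBS is omitted.

```python
import numpy as np

DAYS, SLOTS = 7, 12  # one row per day of the week, one column per one-hour time slot

def empty_schedule():
    """A 7 x 12 boolean availability matrix (False = not available)."""
    return np.zeros((DAYS, SLOTS), dtype=bool)

def group_schedule(peer_schedules):
    """Group schedule: union of the member peers' schedules."""
    gss = empty_schedule()
    for pss in peer_schedules:
        gss |= pss
    return gss

# Illustrative use: two peers available in different slots.
pss_a, pss_b = empty_schedule(), empty_schedule()
pss_a[0, 8:12] = True   # Monday, last four slots
pss_b[2, 0:3] = True    # Wednesday, first three slots
print(group_schedule([pss_a, pss_b]).sum(), "available (day, slot) cells in the group")
```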
3.3 Group indexing
Our network architecture is based on a DHT which indexes all group masters. As detailed in the following, the DHT is used to find the most relevant group able to integrate a peer or to carry out the data backup of a peer. Thus, the criterion we use as a key in our DHT must express the availability of a peer. We therefore propose to derive our keys from the previously defined Group Service Schedule (GSS). The days and the periods are converted to keys as shown in the following conversion table (Figure 3). For example, for Tuesday (03:00 – 04:00) the key will be 00100011.

Day      Mo    Tu    We    Th    Fr    Sa    Su
Digits   000   001   010   011   100   101   111

Time     00:00 – 01:00   01:00 – 02:00   02:00 – 03:00   03:00 – 04:00   …
Digits   00000           00001           00010           00011           …

Figure 3. Conversion table
With this transformation, each group master puts into the DHT the set of keys generated from its own Group Service Schedule. This DHT can then be queried with keys generated from the peer service schedule of any peer; the purpose is to find a group master holding at least every wanted key. Note that we do not need to hash our keys.
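The key generation can be sketched as follows, reusing the 7 × 12 boolean schedule of the earlier example; the day codes and the 3-bit/5-bit layout follow the conversion table of Figure 3.

```python
DAY_CODE = ["000", "001", "010", "011", "100", "101", "111"]  # Mo..Su, as in Figure 3

def schedule_keys(schedule):
    """Generate the 8-bit DHT keys (3-bit day + 5-bit slot) for every available
    cell of a 7 x 12 schedule matrix, e.g. Tuesday 03:00-04:00 -> '00100011'."""
    keys = set()
    for day in range(7):
        for slot in range(12):
            if schedule[day][slot]:
                keys.add(DAY_CODE[day] + format(slot, "05b"))
    return keys
```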
4 Collaborative Backup Management 4.1 Backup protocol
The backup protocol proceeds in two steps. First, the most relevant group for backup management must be retrieved; second, the peers belonging to the selected group execute the backup.
4.2.1. Group retrieval
We consider a peer P_jk which wants to back up its data. If P_jk has no backup connection towards a group (i.e. no connection from P_jk to an entry point in a group), or if its entry point is obsolete (i.e. that peer is disconnected from the network), then P_jk searches for a group able to handle its data backup. To do so, P_jk requests its group master SP_j by sending the set of keys obtained from its PSS.
From these keys, SP_j queries the DHT as illustrated in Figure 4a. In this figure, group G_p is selected, which means that all the keys wanted by P_jk are found on group master SP_p; in other words, group G_p covers all the needs of P_jk.
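A hedged sketch of this group-retrieval step is given below; the DHT is modelled as a plain dictionary from key to the set of group identifiers advertising it, which is an assumption for illustration rather than the actual DHT interface.

```python
def find_covering_group(dht, wanted_keys):
    """Return the id of a group whose advertised keys cover every wanted key.

    dht: dict mapping an 8-bit key string to the set of group ids advertising it.
    wanted_keys: keys generated from the requesting peer's service schedule.
    """
    if not wanted_keys:
        return None
    # Candidate groups are those advertising at least one wanted key.
    candidates = set().union(*(dht.get(k, set()) for k in wanted_keys))
    for group_id in candidates:
        if all(group_id in dht.get(k, set()) for k in wanted_keys):
            return group_id  # this group covers all the needs of the requesting peer
    return None  # no single group covers the request
```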
4.2.2. Block broadcasting
We assume that the group G_p is in charge of executing the backup of P_jk. A coordinator peer is chosen among the least loaded peers in the group; in Figure 4b, the peer P_pq is the coordinator peer for P_jk. After a backup connection is opened between P_jk and P_pq, P_jk sends its data blocks so that P_pq handles the backup process with the other peers in the group. Knowledge of the other peers in the group helps to define a collaborative strategy that distributes the backup process over the most available and least loaded peers.
Figure 4. (a) Group search with the DHT indexing the group masters. (b) Data block forwarding, with block replication and replica broadcasting.
5 Results
To evaluate our approach, we used the P2P simulator PeerSim. Using Java, we implemented schedules for each peer in the system; these schedules are represented by randomly filled matrices. To enforce fairness, each node gives as much storage space as it gets: in all simulations, the disk space allocated to backing up others' data is equal to the amount of data that the node wants to back up. We ran many simulations while varying the number of peers in the system, and compared the P2P backup protocol with and without schedules. In Figure 5, we compare the schedule-based backup to the non-schedule-based (or random) backup. The network size varies between 300 and 15000 peers. As we can see, the percentage of accepted backups in the schedule-based system varies between 80 and 92. In the schedule-based backup, each peer uses the DHT and looks directly for peers offering services. In the non-schedule-based system, each peer searches its neighbourhood, asking each neighbour to back up its data. In this system, the percentage of accepted backups varies between 20 and 26 for a time-to-
live (TTL) = 1. When we increase this TTL, we allow peers to contact more neighbours to ask for a backup service, which increases the chance of finding a peer to back up the data. Note that, as a node interrogates more peers, this process also increases the number of messages sent across the network.
Figure 5. Scheduled queries vs. random queries
6 Conclusion and Future Work
This work presents the architecture of our P2P backup system. We define a new network organization in order to ease the collaboration between peers and achieve the data backup efficiently. We use a set of schedules to organize the network; these schedules express the availability and the needs of each peer and each group of peers. By introducing a collaboration process, not only do we distribute the backup process, but we also exploit peers efficiently in groups according to their availability and their capacity. This minimizes the search time needed to establish a trading partnership between peers. We use a DHT to index groups in order to retrieve the relevant ones according to the needs. Future work should consider the replication strategy.
7 References
[1] C. Batten, K. Barr, A. Saraf, and S. Treptin. pStore: A secure peer-to-peer backup system. Unpublished report, December 2001.
[2] Landers, M., Zhang, H., and Tan, K. PeerStore: Better performance by relaxing in peer-to-peer backup. In Proceedings of the Fourth International Conference on Peer-to-Peer Computing (P2P'04), August 25-27, 2004. IEEE Computer Society, Washington, DC, 72-79.
[3] Cox, L. P., Murray, C. D., and Noble, B. D. Pastiche: making backup cheap and easy. SIGOPS Oper. Syst. Rev. 36, SI (Dec. 2002), 285-298.
[4] F. Dabek, E. Brunskill, M. F. Kaashoek, D. Karger, R. Morris, I. Stoica, and H. Balakrishnan. Building peer-to-peer systems with Chord, a distributed location service. In Proceedings of the 8th IEEE Workshop on Hot Topics in Operating Systems, pages 71-76, 2001.
Towards an Intelligent CAD Models Sharing Based on Semantic Web Technologies
Samer Abdul-Ghafour, Parisa Ghodous, Behzad Shariat and Eliane Perna1
1 LIRIS Research Center, Claude Bernard Lyon I University, Villeurbanne, France.
Abstract. Interoperability among CAD systems is a well-known problem in collaborative product design and development. Existing solutions and standards for product data integration are currently restricted to the processing of geometric data. As a result, the model can hardly be modified, and the original intent of the designer may be misunderstood. Hence, design intent such as construction history, features, parameters and constraints should be consistently maintained, and the modeling terms should be processed semantically both by design collaborators and by intelligent systems. In this paper, we investigate the use of Semantic Web technologies for the development of a common design features ontology, sharable for collaborative design. In our approach, we define the neutral format as an ontology using OWL (Web Ontology Language), and more specifically its sublanguage OWL DL based on Description Logics, which is then enriched from the logical data model with axioms and rules. Rules have been defined using SWRL (Semantic Web Rule Language) to enrich the expressivity of our ontology, for instance for handling composed properties. Keywords. Semantic Interoperability, CAD, Feature-based design, Ontology, SWRL
1 Introduction A major issue in concurrent engineering and collaborative design is the creation and maintenance of a suitable representation for design knowledge that will be shared by many design engineers [1]. This knowledge includes many concepts such as design history, component structure, features, parameters, constraints, and more specific information. Heterogeneous tools and multiple designers are frequently involved in collaborative product development, and designers often use their own terms and definitions to represent a product design. Thus, to efficiently share design information among multiple designers, the design intent should be persistently captured and the semantics of the modeling terms should be semantically processed both by design collaborators and intelligent systems. 1
LIRIS (Lyon Research Center for Images and Intelligent Information Systems), Claude Bernard Lyon I University; 43 Bd. Du 11 novembre 1918, 69622 Villeurbanne, France. Tel: +33 (0)4 72 44 83 09 ; Fax: +33 (0)4 72 44 83 64; Authors e-mails adresses: {samer.abdulghafour , ghodous, behzad.shariat, eliane.perna } @ liris.cnrs.fr; http://liris.cnrs.fr/
Regarding CAD models, designers do not only need to exchange geometric data as is done among many of today’s CAD exchanging tools, but also to abstract engineering knowledge about the design and the product development process [2]. Most of the current commercial CAD systems provide feature-based design for the construction of solid models. Features are devised to carry, semantically, product information throughout its life cycle [3]. Consequently, features should be maintained in a part model during its migration among different applications or while switching systems at the same stage of product life cycle. In order to define design feature terminology for integration, knowledge about feature definitions of different CAD systems should be considered. Current standards, such as ISO 10303, known as STEP (STandard for the Exchange of Product model data) [4] have attempted to solve this problem, but they define only syntactic data representation so that semantic data integration is not possible [5]. Moreover, STEP does not provide a sound basis to reason with knowledge. The importance of capturing and representing real world knowledge in information systems has long been recognized in artificial intelligence, software reuse, and database management. Our research investigates the use of Semantic Web technologies, such as ontologies and semantic web rule languages for the exchange and sharing of product data semantics among various CAD systems. Ontologies have been proposed as an important and natural means of representing real world knowledge for the development of database designs. Furthermore, “Ontology” offers an additional benefit for sharing CAD models by way of its reasoning ability. To achieve collaboration in product development, semantic gap should be filled by integrating seamlessly product development processes into a comprehensive collaborative design environment. This integration is carried out by developing a feature-based design data model that includes semantic information which is sharable and reusable for downstream product development activities. To adequately achieve this, we need a formal method for representing features, such as using formal ontologies. Hence, we propose in our paper a method of sharing CAD models based on the creation of a sharable common design features ontology to enhance the interoperability of CAD systems. Implicit facts and constraints are explicitly represented using OWL and SWRL. The rest of our paper is composed as follows: In section 2, we present an overview of related work in interoperability among CAD models, and an overview on Semantic Web technologies. In section 3, a methodology for sharing CAD models is described. The developed ontology is discussed in section 4. Examples of axioms and reasoning are provided in section 5. Finally, some conclusions are drawn at the end of this paper.
2 Related Work
2.1 CAD models interoperability
The CAD data exchange problem is widely addressed by the international standard ISO 10303, or STEP, which provides a system-independent format for the transmission of data in computer-interpretable form. STEP, like various other neutral formats, has proven successful in the exchange of product geometry at a high quality level.
Nevertheless, the problem lies in the ability to edit the resulting model in the target system. Indeed, design intent, including construction history, parameters, constraints and features, is potentially lost [6]. The problem of defining an appropriate and comprehensive feature taxonomy has been recognized as one of the central problems for sharing CAD models that include information related to features and construction history. Recent advances in feature taxonomy rely on the concept of ontology. Han et al. [5] defined a feature ontology based on the definition of the modeling commands of CAD systems. This is implemented through a declarative approach, which has been developed using the F-Logic language. Practically, the described approach operates on the text files (journals) produced by some commercial CAD packages, which include command histories. Patil et al. [7] proposed an ontology-based framework to enable the exchange of product data semantics across different application domains, using the Product Semantic Representation Language (PSRL) as a tool. In the prototype implementation discussed in [8], the authors show how the PSRL representation can be used to implement a feature taxonomy and ontology.
2.2 Overview of Semantic Web technologies
Semantic Web applications aim to integrate data and knowledge automatically through the use of standardized languages that describe the content of Web-accessible resources. OWL was developed as a formal language for constructing ontologies that provide high-level descriptions of these web resources [8]. Recent work has concentrated on adding rules to OWL to provide an additional layer of expressivity; the Semantic Web Rule Language (SWRL) is one of the results of these activities. SWRL allows users to write Horn-like rules that can be expressed in terms of OWL concepts and that can reason about OWL individuals. Many of OWL's limitations concern the development of OWL properties: as there is no composition constructor, capturing relationships between composite properties is not possible [9]. In [10], the authors state that rules are needed in addition to the ontology to capture (1) dependencies between ontology properties, (2) dependencies between ontologies and other domain predicates, and (3) queries. The necessity of rules is of course dependent on the use of the end application utilizing the OWL knowledge base.
3 Methodology for CAD Models Sharing
Our approach uses Semantic Web technologies for exchanging feature-based CAD models by considering the semantics assigned to product data. This enables data analysis, as well as the management and discovery of implicit relationships among product data, based on semantic modeling and reasoning. In our approach, we define the neutral format as an OWL ontology, which is then enriched from the logical data model with axioms and rules. Thus, CAD models are represented as instances of the ontology defining concepts and relationships among these concepts. Moreover, the ontology offers an additional
benefit for sharing CAD models by way of its reasoning ability, whereby implicit information can be discovered. Figure 1 illustrates the architecture of our approach. There are syntactic and semantic heterogeneities among the different ontologies. Syntactic heterogeneity is settled by homogenizing the applications' syntax into OWL DL; for this purpose only the syntax is changed, and the terminologies remain intact. However, structural and semantic heterogeneity cannot be resolved by syntactic mapping alone, so we need to bridge it by defining axioms and rules. Mapping rules are created to link the application ontologies with the developed common OWL ontology, enriched with axioms and rules. As illustrated in Figure 1, sharing CAD models is carried out by creating a common design feature model that includes semantic product information which is sharable during the product development activities. In reality, however, existing systems use their own sets of terminologies, leading to interaction difficulties. Hence, creating the shared ontology is based on analyzing existing CAD applications.
Figure 1. Architecture of ontology-based CAD models sharing
Afterwards, mapping rules are defined to enable integration between the different ontologies. Ontology mapping is accomplished by defining axioms which enable terms to be inferred as semantically equivalent even though they use different terminologies. OWL expressivity allows the definition of logical classes (intersection, union and complement operators), which enables automatic classification of product components. In addition, OWL restrictions are used to create new specifications of product information by constraining property characteristics such as domain, range, cardinality or value. For example, a new class "ThreadedHoles" is defined as a "hole" specification with a "hasValue" restriction in order to gather automatically all threaded-hole instances. Thus, instances of newly defined classes can be retrieved automatically by means of the OWL DL reasoning ability. SWRL is used to define more general relationships formally and
flexibly. In particular, using OWL and SWRL, implicit relationships can be discovered effectively from the explicit assertions in the form of inferred facts.
4 Design-features Ontology Description
In this section, we propose a feature-based design ontology, with special attention to part design features. In this ontology, the main characteristics of CAD models have been established. The aim is to share CAD models as instances of this ontology, enabling collaborative designers to access design knowledge using semantic queries. A part of the developed ontology is illustrated in Figure 2. It describes the conceptual hierarchy of the main concepts used in CAD models. It concerns not merely geometrical data, but also technical information related to the part, or to the assembly representing the product. In the following, we describe the key concepts defined in our ontology:
Figure 2. A generic view of the developed feature-based design ontology
Model3D: represents the 3D CAD model, including among other data the product structure, parameters and geometric representations. A model is designed and assembled to fulfil a functional need. It consists of either a PartDocument, defining a manufactured component, or a ProductDocument, an assembly of components brought together under specific conditions to form a product and perform functions. Some properties can be defined for a model, e.g. its version and material. A fundamental class of our ontology is "Feature", which constitutes a subset of the form of an object that has some function assigned to it. However, product
information is not merely restricted to geometry; it indeed holds a richer and more complex semantic content (functional, structural, behavioural, technological…). In order to capture this semantics, the meaning of feature has been extended to give it a relevant definition according to the context it is used in, thus bridging the gap between geometry and other product information. Other concepts are added to our ontology to describe feature properties. "Function" represents what the feature is supposed to do: the feature satisfies the engineering requirements largely through its function, and the form of the feature can be viewed as the proposed design solution for the requirement specified by the function. There is also a need to handle the geometrical entities that constitute the feature, for instance a created face. For this reason, a GeometricRepresentationSet is added to our ontology to describe the way the feature is represented, such as its B-Rep (Boundary Representation). Several types of features can exist: analysis features, design features, manufacturing features, etc. Compound features can be generated from primitive features and are related to their sub-features with the property hasSubFeature. Other aspects related to a feature are constraints, parameters, tolerance and behaviour. An important aspect of sharing CAD models semantically is the ability to store the design history, since the order of the arguments is highly meaningful. In our ontology, each PartDocument is composed of an ordered sequence of bodies. A solid body SolidBody is in turn defined by an ordered sequence of solid components characterizing the shape of this body. This is implemented in our ontology with the following SolidBody restriction:

∃ hasSolidBodyComponentList (SolidBodyComponentList ⊔ EmptyList)        (1)
The class SolidBodyComponentList is a subtype of the OWLList class, which specifies the order of statements in OWL. An item of this list may contain a sketch, a part design feature, or a geometric set body. Moreover, the OWLList class has the following characteristics:
• An OWLList element is followed only by another OWLList element.
• The class EmptyList is a subtype of OWLList containing no other elements. It is defined by the following restriction:
(¬∃ isFollowedBy.owl:Thing) ⊓ (¬∃ hasContents.owl:Thing)        (2)
• hasContents is a functional object property defining a pointer to the content of an element of the list.
• isFollowedBy is a transitive property. It has a sub-property hasNext, defined as a functional object property, which points to the tail (sublist) containing the remaining elements.
Figure 3. Description of Solid Body Components List
Figure 3 describes the definition of the class SolidBodyComponentList. This class has two subtypes defining the first and the last components of the list. The following statement describes the restriction applied to the final component list:

(∃ hasNext EmptyList) ⊔ (¬ ∃ hasNext OWLList)   (3)
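To make the ordered-list pattern above more concrete, the following Python sketch mirrors it with plain classes. The class and property names follow the ontology, but the implementation itself is purely illustrative and is not part of the published system.

```python
from dataclasses import dataclass
from typing import Optional, List

# Simplified mirror of the OWLList pattern described above: each list cell points
# to its content (hasContents) and to the tail sublist (hasNext); the empty list
# has neither, as in restriction (2).

@dataclass
class SolidBodyComponent:
    name: str   # e.g. a sketch, a part design feature, or a geometric set body

@dataclass
class SolidBodyComponentList:
    contents: Optional[SolidBodyComponent] = None        # hasContents (functional)
    next: Optional["SolidBodyComponentList"] = None      # hasNext (functional sub-property of isFollowedBy)

    def is_empty(self) -> bool:
        # Mirrors restriction (2): the empty list has no contents and no follower.
        return self.contents is None and self.next is None

    def ordered_components(self) -> List[SolidBodyComponent]:
        """Walk the list, preserving the design history order."""
        items, cell = [], self
        while not cell.is_empty():
            items.append(cell.contents)
            cell = cell.next if cell.next is not None else SolidBodyComponentList()
        return items

# Example: a body built from a sketch followed by a pad and a hole feature.
body_list = SolidBodyComponentList(
    SolidBodyComponent("Sketch.1"),
    SolidBodyComponentList(
        SolidBodyComponent("Pad.1"),
        SolidBodyComponentList(SolidBodyComponent("Hole.1"),
                               SolidBodyComponentList())))  # terminated by the empty list

print([c.name for c in body_list.ordered_components()])    # ['Sketch.1', 'Pad.1', 'Hole.1']
```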
5 Axioms and Reasoning

Semantics in our ontology are encoded using representation and reasoning mechanisms based on Description Logics [11]. OWL DL is designed for use by applications that need to process the content of information instead of just presenting it to humans. It provides a rich set of constructs for defining concepts and explicit semantic relationships, and for creating instances. Furthermore, an ontology can be semantically enriched by defining axioms. An axiom is a core element that enables semantic querying: it is declarative and accepted without proof, so in the logic world it is regarded as a premise for reasoning, providing knowledge in a data model in a form that allows the semantics to be processed by machines. This includes defining subsumption relations for classes and properties, equivalent classes, equivalent properties, and certain property characteristics, e.g. transitivity. This research shows that SWRL rules can be defined to accommodate potential semantic queries and information requests in a collaborative design environment. Rules are defined in our ontology using SWRLTab, a development environment for working with SWRL rules in Protégé-OWL. In the following, we demonstrate examples of how complex types of relationships can be established between concepts of our ontology by creating SWRL rules.

Table 1. BelongTo relationships

Property        Domain              Range
belongToPart    Body                Part
belongToBody    PartDesignFeature   Body
Consider in Table 1 the functional object property "belongToPart", which states a component relationship between a part and a body. Another functional property, "belongToBody", defines a belonging relationship between part design features and their bodies. Rules (4) and (5) are thereby created to explicitly define the relationships
between a part and its component features. Instances of these relationships are inferred using a reasoning engine.

SolidBody(?x) ∧ SolidBodyComponentList(?y) ∧ hasContents(?y, ?z) ∧ hasSolidBodyComponentList(?x, ?y) → belongToBody(?z, ?x)   (4)

SolidBody(?x) ∧ SolidBodyComponentList(?y) ∧ hasContents(?t, ?z) ∧ hasSolidBodyComponentList(?x, ?y) ∧ SolidBodyComponentList(?t) ∧ isFollowedBy(?y, ?t) → belongToBody(?z, ?x)   (5)
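Conceptually, rules (4) and (5) traverse the body's component list and assert one belongToBody fact per listed item. The toy Python sketch below shows the effect of that traversal; the instance names and data structures are invented for illustration, and a real system would obtain these facts from the OWL model through a reasoning engine (e.g. Jess via SWRLTab).

```python
def infer_belong_to_body(body, first_cell, has_contents, has_next):
    """Toy forward chaining over the list pattern: rule (4) covers the first cell
    reached via hasSolidBodyComponentList, and rule (5) covers every cell reached
    from it through isFollowedBy (approximated here by following hasNext)."""
    inferred, cell = [], first_cell
    while cell is not None:
        content = has_contents.get(cell)
        if content is not None:
            inferred.append(("belongToBody", content, body))
        cell = has_next.get(cell)
    return inferred

# Illustrative assertions (instance names are made up for the example).
has_contents = {"List.1": "Sketch.1", "List.2": "Pad.1", "List.3": "Hole.1"}
has_next = {"List.1": "List.2", "List.2": "List.3", "List.3": "EmptyList"}

print(infer_belong_to_body("SolidBody.1", "List.1", has_contents, has_next))
# [('belongToBody', 'Sketch.1', 'SolidBody.1'), ('belongToBody', 'Pad.1', 'SolidBody.1'),
#  ('belongToBody', 'Hole.1', 'SolidBody.1')]
```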
Retrieving semantic relationships between features is a crucial issue for browsing and querying CAD models. For example, to extract hole feature parameters in a CAD model, it is necessary to detect explicitly all inter-feature relationships linked to the hole instance; e.g. if a hole instance is followed by a translation feature applied to that hole, new parameters such as the hole center coordinates should be computed with respect to the translation feature parameters.
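As a small, purely illustrative example of that last point (the function and values below are ours, not part of the ontology), the effective hole center can be recomputed once the reasoner has linked the hole to its translation features:

```python
# Once the inter-feature relationship "hole followed by translation" is known,
# the effective hole center is the original center shifted by each translation.
def effective_hole_center(hole_center, translations):
    x, y, z = hole_center
    for dx, dy, dz in translations:          # every translation feature applied to the hole
        x, y, z = x + dx, y + dy, z + dz
    return (x, y, z)

print(effective_hole_center((10.0, 5.0, 0.0), [(0.0, 12.5, 0.0)]))  # (10.0, 17.5, 0.0)
```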
6 Conclusion

This paper presents a neutral-format ontology for feature-based design modeling. The main objective of developing our ontology is to enhance collaboration among designers. This is achieved by providing the ability to access design knowledge and to retrieve the required information, covering not only facts asserted by designers but also implicit facts inferred by reasoning engines. Thus, this research takes full advantage of Semantic Web technologies to represent complicated relations that are scattered among form features and parts. Since the constraints have been standardized, they can be interpreted and maintained during collaboration. The constraints help to keep the system consistent and the design cost at a minimum by allowing developers a proper interpretation of the complex relations. In the Protégé ontology editor, once the ontology is instantiated, it is possible to create queries to retrieve the needed information from the shared ontology.
7 Acknowledgments

The research described in this paper is carried out in collaboration with the Datakit2 company, a leader in developing CAD/CAM conversion tools.
2 http://www.datakit.com

8 References

[1] Jinxin Lin, Mark S. Fox and Taner Bilgic. A Requirement Ontology for Engineering Design. Concurrent Engineering 1996; 4: 279–291.
[2] Kyoung-Yun Kim, David G. Manley, Hyungjeong Yang. Ontology-based assembly design and information sharing for collaborative product development.
[3] Wong, T.N., "Feature-Based Applications in CAD/CAM", Department of Industrial & Manufacturing Systems Engineering, University of Hong Kong, 1994.
[4] ISO. Industrial automation systems and integration – product data representation and exchange, ISO 10303. International Organization for Standardization (ISO), 1994.
[5] T. Seo, Y. Lee, S. Cheon, S. Han, L. Patil, D. Dutta. Sharing CAD models based on feature ontology of commands history. International Journal of CAD/CAM, 5(1), 2005.
[6] Guk Heon Choi, Dunhwan Mun, Soonhung Han. Exchange of CAD Part Models Based on the Macro-parametric Approach. International Journal of CAD/CAM, vol. 2, no. 1, 2002.
[7] L. Patil, D. Dutta, R. Sriram. Ontology-Based Exchange of Product Data Semantics. IEEE Transactions on Automation Science and Engineering, 2(3):213–225, 2005.
[8] O'Connor M.J., Knublauch H., Tu S.W., Musen M.A., "Writing Rules for the Semantic Web Using SWRL and Jess," 8th International Protégé Conference, Protégé with Rules Workshop, Madrid, Spain, SMI-2005-1079, 2005.
[9] Horrocks I., Patel-Schneider P.F., Bechhofer S., Tsarkov D., "OWL Rules: A Proposal and Prototype Implementation," Journal of Web Semantics, vol. 3, no. 1, 2005.
[10] Golbreich C., Bierlaire O., Dameron O., Gibaud B., "What reasoning support for ontology and rules? The brain case study," 8th International Protégé Conference, Protégé with Rules Workshop, Madrid, Spain, SMI-2005-1079, 2005.
[11] F. Baader and W. Nutt. Basic description logics. In: Description Logic Handbook, 2003.
Towards a Multi-View Semantic Model for Product Feature Description

Patrick Hoffmann1,2, Shaw C. Fenga,1, Gaurav Ameta1,3, Parisa Ghodous2, and Lihong Qiao4

1 National Institute of Standards and Technology, Gaithersburg, Maryland, USA.
2 LIRIS Laboratory, Claude Bernard Lyon 1 University, Lyon, France.
3 Arizona State University, Tempe, Arizona, USA.
4 Department of Industrial and Manufacturing System Engineering, Beihang University, Beijing, China.
Abstract. Multiple perspectives need to be included in a product development process. Engineers from different departments usually have different views on a product design. It is hence necessary to define information structures that support multiple views. This paper provides an analysis and approach to develop a three-level, multi-view semantic model to describe product features. We base our analysis on a three-level conceptualization of engineering design features. The base level is substance, the intermediate level is view, and the top level is purpose. A multi-view semantic model will enhance the semantic integrity of feature information throughout product development for sharing information such as design intent, manufacturing capability, and quality requirements.

Keywords. Feature-based design, feature modeling, interoperability, multi-view model, semantic model.
1 Introduction

Multiple perspectives, including engineering, manufacturing, business, and marketing, need to be included in a product development process. Engineers from different departments usually have different views on a product design. Realizing the need for multiple views of a product, we propose a multi-view semantic model that has a three-level conceptualization of objects in the physical world. In this paper, only "shape features" are in the scope of discussion. When the term "feature" or "product feature" is used, it means a shape feature. Other features, such as functional features and aesthetic features, are out of scope. Fundamental properties of a feature are specified on the base level; this is the substance level.

a Manufacturing Engineering Laboratory, NIST, 100 Bureau Drive, Stop 8263, Gaithersburg, Maryland, 20899, U.S.A.; Tel: +1 301 975-3551; Fax: +1 301 258-9749; Email: [email protected]
These properties are independent of any application viewpoint. An application requires a specific set of properties, namely, application-centric properties. These properties are on the intermediate level, which is the view level. The design intent of a feature is addressed on the top level, the purpose level. This three-level conceptualization assists information model developers in categorizing feature properties. Proper categorization leads to unambiguous feature definitions in communication between different applications of a feature. Meaningful communication between different application software systems requires features to be described with a predefined information structure, to be adaptable to various applications, and to preserve the design intent. The purpose of this model is to provide application-specific views including any relevant information related to a product and its features, so that support for unambiguous data exchange becomes intrinsic in the information model. Feature-based product data exchange faces some limitations when it occurs across the different phases of a product development process. Notably, the link from Computer-Aided Design (CAD) to downstream applications is mainly made by feature recognition, based on the product geometry. The designer's intent is lost during the process. Feature-based exchange is also hindered by divergent definitions of the feature concept. A feature can be described as an encapsulation of the engineering significance of portions of the geometry of a part or assembly [[1]]. The "engineering significance" is application-dependent. Thus, an application-specific feature, such as a design, assembly, manufacturing, or inspection feature, associates a specific meaning with a portion of the part geometry, as shown in Figure 1.
Figure 1. Application-specific attributes related to a hole feature
We explored a methodology to define the meaning of engineering terms more rigorously to enable interoperability among engineering and manufacturing software systems. From a literature study on feature information models for data exchange, we found that different data exchange specifications have slightly different definitions and representations of features. As a rigorous definition of feature is needed to enable interoperability, our focus is on semantic modeling. The paper is organized as follows. Section 2 reviews various approaches to data exchange across applications in product design and manufacturing. Section 3 describes our proposal that the model should be composed of three specific levels conceptualizing features from different application perspectives. Section 4 presents a scenario with an example. Section 5 concludes that the three-level approach is a basis for multi-view semantic modeling of product features.
2 Review of Approaches for Information Exchange Across Applications in Product Development

The ISO 10303 standard series (also known as STEP – STandard for Exchange of Product data) is intended for data exchange between heterogeneous engineering systems. STEP enables the transfer of information such as geometry and topology [[2]], features [[3]], inspection data [[4]], and machining plan data [[5]]. The Dimensional Measuring Interface Standard [[6]] provides communication between CAD systems and Coordinate Measuring Machines. The STEP model of feature is manufacturing-oriented [[7]]. It lacks the generality that is needed for exchange of product data across different applications. It also lacks constraints between features and suffers from limited implementation in commercial systems [[8]]. Exchange of CAD data through STEP does not transmit semantic information such as the axis and curve used to define a part by revolution, but only the raw geometry. The designer's intent that the part should be produced by turning is lost. For exchange of neutral definitions of features, Shah et al. propose an application-independent declarative language for feature definition [[8]]. "N-Rep" describes the shape of form features with a B-Rep representation and maps them to a face adjacency graph for feature recognition. Features are related to one another through topological or geometric constraints, and feature parameters can be derived from other feature parameters by calculation of an arithmetic expression. Features are defined by their shape only. The model can be extended with user-defined features. The design intent is lost. Dartigues et al. propose to use an ontology as a neutral model for "design intent"-preserving conversion between CAD and Computer-Aided Process Planning systems. The approach consists of converting design features by mapping the ontological feature model of the system to a neutral ontological feature model. The approach would require CAx systems to publish an ontological model of their features, which has not happened to date [[9], [11], [12], [13]]. On multiple-view feature modeling, all the approaches focus on building a system where different views of features are defined, and the product model is
progressively and concurrently built. Consistency management and change propagation among views are the main concerns. The multi-view approach facilitates the exchange of information across domains, but does not aim at providing any means for an external application to relate to the system and exchange information. Bronsvoort and Noort [[14]] propose a system that supports conceptual design, assembly design, part detail design and part manufacturing planning. All views update each other. It is possible to add user-defined features. The design intent is made of constraints and connections between application-specific features. The macro-parametric approach consists of recording the succession of construction steps (or history), with the parameters used, when building a model in a feature-based CAD system. The steps can then be "replayed" in another CAD system. The approach is limited to exchange between systems that have the same set of features. We do not know of any adaptation of this approach for exchange throughout the product development stages. Ding et al. [[15]] propose a model to semantically annotate a designed part, for improved communication. The approach provides little support for relating application-specific features.
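As a rough illustration of the macro-parametric idea (our own sketch, not taken from any of the cited systems; the feature names and the ToyCadSystem class are invented), a recorded construction history is simply a list of parameterised steps that can be replayed on a target system, provided the target understands the same feature vocabulary:

```python
# Hypothetical sketch of the macro-parametric approach: record the construction
# history as (feature, parameters) steps, then replay it on another system.

history = [
    ("create_sketch", {"plane": "XY"}),
    ("pad",           {"sketch": "Sketch.1", "depth": 20.0}),
    ("hole",          {"face": "Pad.1/top", "diameter": 15.0, "depth": 10.0}),
]

class ToyCadSystem:
    def __init__(self, name, supported):
        self.name, self.supported, self.features = name, set(supported), []

    def replay(self, steps):
        for feature, params in steps:
            if feature not in self.supported:
                # Exchange breaks down when the two systems' feature sets differ.
                raise ValueError(f"{self.name} has no equivalent of '{feature}'")
            self.features.append((feature, params))

target = ToyCadSystem("SystemB", {"create_sketch", "pad", "hole"})
target.replay(history)
print(target.features)
```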
Figure 2. Model overview
3 Definitions in a Three-level Multi-View Semantic Approach

According to the literature study, geometry is standardized in STEP, a multi-view model is used to describe features from application perspectives, and design intent is poorly communicated. Many generic definitions of feature exist that describe what "feature" can imply. A feature has a specific meaning within the engineering context, is mappable to a generic shape, and the two are related [[16]]. We thus propose to (1) relate features to a portion of the part geometry, through feature placement and feature recognition using a pattern-matching or trace-based recognition method; (2) describe the engineering context by categorizing features in views, and defining their parameters with application-oriented and
application-centric properties; and (3) represent the meaning by intentions b. We also propose to categorize feature information in three different levels, namely substance, view and purpose. Figure 2 shows these levels and how they are inter-related. The goal of the design of this three-level model is to provide open semantics and methods to define new views of a feature.

3.1 Substance Level

Features have to be related to some pieces of geometry in a geometric model of a part. Even though the geometry described is not exactly the same for all the related features c, there is usually some part of the geometry which is shared by the features, by which they can be connected. All the information that describes the product structure, independently of the organization and of the applications, is on the substance level. We divide the substance level as follows: essential properties that are used to represent a part model, such as geometric and topological elements, dimension, location, orientation, and the location of the material side in the boundary representation; and application-oriented properties that are essential to some applications, but not all, such as datums, tolerances, dimensions, and material properties [[17], [18]].

3.2 View Level

We propose to include a meta-model for views so that users can define their own views, according to their perspectives on the product. The view level contains engineering knowledge about the product that is relevant for a particular application. As an example, one could define a view for manufacturing [[19], [20], [21], [22]], with properties like "machine tool" and "tool path," and features such as milling and drilling features. The model of a view should include application-centric properties that are specific to one application, e.g., inspection [[23]], and a feature prototype, which describes how a portion of geometry is interpreted for a particular feature in that application. It thus needs to contain a description of the form of the feature as it should be recognized on the part.

3.3 Purpose Level

On the purpose level, the design intent behind a feature and its properties is described. Application experts who participate in the elaboration of the product know the constraints and specificities of their domain. For example, a manufacturing engineer may indicate that some portion of the design is too expensive to manufacture as is. It is often stated that the manufacturing engineer will be interested to know the designer's intention, but the reciprocal is true as well.
b We distinguish between intent and intention. Intent is a sustained, unbroken commitment or purpose. Intention is an intermittent resolution or an initial aim or plan. Source: http://thesaurus.reference.com/
c A good example of this can be found in [[19]], page 116.
The reasoning about why the product model is what it is constitutes the purpose of an engineering design. Essential or application-centric intents can be used to describe an intention, which guides the choice of some parameters of feature properties. Intentions must include or be related to the following elements: the source of the intention and the feature parameters concerned. For example, a functional intention I1 can state "the arm must pivot so that …". An assembly expert could relate the corresponding features on the two different parts in an assembly-specific view. He could choose the intent "one-dimension translation," and select I1 as the intention source, with a comment that explains the decision.
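One possible way to organise feature information along these three levels is sketched below in Python. The attribute names are entirely our own and are not part of any published schema; the sketch simply illustrates how substance, view and purpose information could be kept separate yet linked.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SubstanceFeature:
    """Substance level: application-independent geometry and topology."""
    geometry_ref: str                     # pointer into the part's B-Rep
    location: tuple
    dimensions: Dict[str, float]          # e.g. {"diameter": 15.0}

@dataclass
class ViewFeature:
    """View level: application-centric interpretation of a substance feature."""
    view_name: str                        # e.g. "detailed design", "manufacturing"
    substance: SubstanceFeature
    properties: Dict[str, str] = field(default_factory=dict)   # e.g. {"machine tool": "drill"}

@dataclass
class Intention:
    """Purpose level: design intent, with its source and affected parameters."""
    identifier: str                       # e.g. "I1"
    source_view: str
    statement: str
    constrained_parameters: List[str] = field(default_factory=list)

hole = SubstanceFeature("face_17", (10.0, 5.0, 0.0), {"diameter": 15.0})
mfg_view = ViewFeature("manufacturing", hole, {"machine tool": "drill", "tool path": "TP-3"})
intent_b = Intention("B", "detailed design",
                     "diameter must stay between 8 mm and 18 mm to limit stress",
                     ["diameter"])
print(mfg_view.view_name, hole.dimensions, intent_b.statement)
```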
4 A Scenario and Example

As features are described relative to their geometry and topology in views, a feature recognizer may be applied to the part file, or users may manually choose the features in the geometry of the part. Users will thus get the application-specific features in which they are interested. Users can edit the values of any parameter of a recognized feature and associate an intention with the modification. If this generates a conflict with another application, the intention is mandatory. While modifying feature parameter values, users have immediate visual access to the intents associated with the current value, and to their views, which can help avoid or solve conflicts.
Figure 3. Example use of the model
Figure 3 illustrates the methodology of the model with an example. A part with a toleranced pattern of features and a datum (substance level) is presented along with two views, detailed design and manufacturing (view level). The datum hole, the pattern hole and some of their parameters are included in both views as design and manufacturing features. On the right of the figure, intentions for detailed design, manufacturing, assembly and maintenance describe how features are inter-related (purpose level). In intention B, the designer expresses that the diameter of the pattern hole needs to be in the range of 8 mm to 18 mm, to minimize the stress on the part. The supplier can supply only bolt-screws of diameters 10 mm, 15 mm, or 20 mm (intention Z). As the manufacturer's machine tool cannot drill holes with a diameter less than 14.2 mm, the diameter of the hole should be 15 mm (intention C). The reason why the datum hole and the pattern hole should be concentric is expressed in intentions N and M from the assembly and maintenance views.
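To make the negotiation in this scenario explicit, the small worked calculation below (our own illustration; the numbers are those of the scenario) shows how the three intentions jointly determine the 15 mm diameter:

```python
# Intention B (design): diameter must lie in [8, 18] mm to limit stress.
# Intention Z (supplier): only bolt-screws of 10, 15 or 20 mm are available.
# Intention C (manufacturing): the machine tool cannot drill below 14.2 mm.

design_range = (8.0, 18.0)
supplier_diameters = [10.0, 15.0, 20.0]
machine_min = 14.2

feasible = [d for d in supplier_diameters
            if design_range[0] <= d <= design_range[1] and d >= machine_min]
print(feasible)   # [15.0] -> the 15 mm diameter recorded in intention C
```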
5 Conclusion

This initial work has identified characteristics that will be used to develop a novel multi-view semantic model intended to support meaningful exchange across the product lifecycle. The model relates features to the STEP definitions for geometry, and to other standards for tolerancing, dimensioning, and process planning. It also integrates views to support feature descriptions in different applications. We proposed to note the intent explicitly and relate it to an engineering design, to better preserve it. We plan to further develop the model based on the three-level conceptualization. We will specify views based on the NIST models for (a) assembly [[1]], (b) process planning [[21]], (c) manufacturing [[20], [22]], and (d) inspection [[18], [23]]. Future work should also include an in-depth review of all constraints needed to connect features.
6 References

[1] Rachuri, S., Han, Y., Foufou, S., Feng, S., Roy, U., Wang, F., Sriram, R., Lyons, K.: A model for capturing product assembly information. Journal of Computing and Information Science in Engineering 6(1) (March 2006) 11–21.
[2] ISO 10303-203: 1994, Industrial automation systems and integration – Product data representation and exchange – Part 203: Application Protocol: Configuration controlled 3D design of mechanical parts and assemblies.
[3] ISO 10303-224: 2006, Industrial automation systems and integration – Product data representation and exchange – Part 224: Application protocol: Mechanical product definition for process planning using machining features.
[4] ISO 10303-219: 2007, Industrial automation systems and integration – Product data representation and exchange – Part 219: Application protocol: Dimensional inspection information exchange.
[5] ISO 10303-238: 2007, Industrial automation systems and integration – Product data representation and exchange – Part 238: Application protocol: Application interpreted model for computerized numerical controllers.
[6] Dimensional Measuring Interface Standard (DMIS), Version 5.1, Dimensional Metrology Standards Consortium, Arlington, TX, 2008.
[7] Pratt, M., Anderson, B., Ranger, T.: Towards the standardized exchange of parameterized feature-based CAD models. Computer-Aided Design 37 (2005) 1251–1265.
[8] Shah, J., Anderson, D., Kim, Y., Joshi, S.: A discourse on geometric feature recognition from CAD models. Journal of Computing and Information Science in Engineering 1(1) (March 2001) 41–51.
[9] Shah, J., D'Souza, R., Medichalam, M.: N-Rep: A neutral feature representation to support feature mapping and data exchange across applications. In: ASME 2004 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Salt Lake City, Utah, USA (2004).
[10] Dartigues, C., Ghodous, P., Gruninger, M., Pallez, D., Ram, S.: CAD/CAPP integration using feature ontology. Concurrent Engineering 15(2) (2007) 237–249.
[11] Patil, L., Dutta, D., Sriram, R.: Ontology-based exchange of product data semantics. IEEE Transactions on Automation Science and Engineering 2 (2005) 213–225.
[12] Brunetti, G., Grimm, S.: Feature ontologies for the explicit representation of shape semantics. International Journal of Computer Applications in Technology 23 (2005) 192–202.
[13] Abdul-Ghafour, S., Ghodous, P., Shariat, B., Perna, E.: A common design-features ontology for product data semantics interoperability. In: Web Intelligence, IEEE/WIC/ACM International Conference on. (2007) 443–446.
[14] Bronsvoort, W.F., Noort, A.: Multiple-view feature modelling for integral product development. Computer-Aided Design 36 (2004) 929–946.
[15] Ding, L., Davies, D.: Sharing information throughout a product lifecycle via a product model. In: International Design Engineering and Technical Conference on Computers and Information in Engineering 2008, to appear, New York, NY, USA (August 2008).
[16] Shah, J., Mantyla, M.: Parametric and Feature-Based CAD/CAM: Concepts, Techniques, and Applications. Wiley-Interscience (1995).
[17] ANSI/ASME Y14.5.1M-1994: Mathematical definition of dimensioning and tolerancing principles. The American Society of Mechanical Engineers, New York (1995).
[18] Feng, S., Yang, Y.: A dimensional and tolerance data model for concurrent design and systems integration. Journal of Manufacturing Systems 4(6) (1995) 406–426.
[19] Schulte, M., Weber, C., Stark, R.: Functional features for design in mechanical engineering. Computers in Industry 23 (1993) 15–24.
[20] Feng, S.: A machining process planning activity model for systems integration. Journal of Intelligent Manufacturing 14(6) (December 2003) 527–539.
[21] Feng, S., Song, Y.: An information model of manufacturing processes for design and process planning integration. Journal of Manufacturing Systems 22(1) (2003) 1–28.
[22] Feng, S.: Manufacturing planning and execution objects foundation interfaces. Journal of Manufacturing Systems 19(1) (2000) 1–17.
[23] Feng, S.: A dimensional inspection planning activity model. Journal of Engineering Design and Automation 2(4) (1996) 253–267.
7 Acknowledgements

The authors gratefully acknowledge helpful discussions with and comments from Thomas Kramer and Xenia Fiorentini. Funding for this work was provided by the NIST Sustainable and Lifecycle Information-based Manufacturing Program.
8 Disclaimer

No approval or endorsement of any commercial product by the National Institute of Standards and Technology (NIST) is intended or implied. Certain commercial products are identified in this paper to facilitate understanding. Such identification does not imply that these products are necessarily the best available for the purpose.
Integrated Design
Development of a Lightweight Knowledge Based Design System as a Business Asset to Support Advanced Fixture and Tooling Design

Nicholas Reed a,1, James Scanlan a and Steven Halliday b

a The University of Southampton, United Kingdom
b Rolls-Royce plc, United Kingdom
Abstract. This paper introduces and describes a continuing programme of work initiated between Rolls-Royce plc and the University of Southampton to create a Knowledge Based System, intended to reduce the demand on currently limited specialist resource and to facilitate future business growth. The paper begins by introducing the working context and provides an explanation of the demands faced by the business. The structure of the Knowledge Based System is given and the rationale detailed with respect to the business context. The paper argues that, in order to provide value to the business, the most benefit will be seen from a lightweight system that supports users' actions rather than introducing automation. The paper concludes with a review of the work completed to date and the proposed future work.

Keywords. Knowledge Based Engineering, Intelligent Systems, Evolutionary Structural Optimisation.
1 Introduction and Background

1.1 The Knowledge Economy and Manufacturing
It is now commonly accepted that the current economy has become the post-industrial, knowledge-driven economy predicted by Bell [3]. This new economy now requires

1 Research Engineer, Computational Engineering and Design Group, School of Engineering Sciences, University of Southampton, Highfield, Southampton SO17 1BJ; Tel: +44 (0) 2380 598 359; Email: [email protected]
© copyright 2008 Rolls-Royce plc. All Rights Reserved. Permission to reproduce may be sought in writing to IP Department, Rolls-Royce plc, P.O. Box 31, Derby DE24 8BJ, United Kingdom.
companies to effectively manage and exploit their knowledge in order to maintain a competitive advantage and maximise their returns [5]. This is particularly important for companies in manufacturing and engineering. Traditionally limited by factors of production such as materials, labour and money, companies are now being forced to consider knowledge as their key competitive advantage; it is unsurprising, therefore, that Knowledge Management in engineering is seen as the next step change since the introduction of CAD/CAM [13]. Knowledge Management has generated a burst of activity and investment in projects; indeed, in 2002, 80 percent of Fortune companies had Knowledge Management staff [4]. The term encompasses an array of issues and approaches, not least in engineering, where Knowledge Based Engineering has been experimented with since the early days of Artificial Intelligence in the 1950s [15]. The impact on manufacturing is potentially immense. With the increase in rapid and repeatable production techniques, many products are becoming knowledge-based goods. These goods "obey a law of increasing returns once you have absorbed the cost of designing or making the first" [17]. With the addition of modern rapid manufacturing, this statement can hold for a manufactured item provided the value is inherent in its design rather than in the material, generating the potential for high returns.

1.2 A Novel Fixture and Tooling Design Technology
In June 2006 Rolls-Royce initiated a programme of work to develop and exploit a novel approach to the design and manufacture of leading-edge fixtures and tooling in the Aerospace sector. Coupling state-of-the-art laser-cutting processes with unique design methods, the products are highly knowledge-orientated, providing customer solutions within ultra-short lead-times. The technology has been developed primarily by a single, highly experienced expert. This expert represents the most valuable asset to the development of the technology and, as the business begins to grow, is placed in increasing demand. Therefore, in September 2006, Rolls-Royce initiated a package of work with the University of Southampton to create a Knowledge Based System that would reduce the demand on the expert and facilitate growth in the business. The system was to be targeted at improving and accelerating new product design and development, in particular for new or less experienced designers.

1.3 Knowledge Based Systems
There is an entire spectrum of tools that fall under the classification of Knowledge Based Systems, ranging from knowledge repositories and basic filing systems to expert systems intended to replicate or replace human capabilities [2]. In this paper, systems will be broadly categorised into two families under the headings of lightweight and heavyweight systems. These terms are intended to
correspond to the degree of automation, knowledge and intelligence embedded in the system, together with the cost and effort required for implementation. Best illustrated by expert systems, heavyweight systems "embody expertise" [9], but require significant investment to capture and embed the knowledge before the system is suitable for use [15]. Such systems are ideal for optimising high-performance components or multi-part systems where performance is crucial and the investment in the system is realised by the end product. Conversely, a lightweight system offers less automation, intelligence and capability but requires lower investment. Note that this classification is not intended to be rigorous, but to indicate the difference between the common approach to knowledge based systems and that taken here.
2 The Proposed System

2.1 Knowledge Management in Business
All businesses exist to generate an income. Knowledge Management and Knowledge Based Systems are designed to add value to a business and must function to support the increasing wealth of a company. Typical heavyweight systems exist generally because they facilitate the production or development of a product that could not have occurred without the system, e.g. optimisation of multi-objective functions. Note that analysis of performance and business benefit is the least developed field of Knowledge Management, and little proof of the value such systems provide has been produced [5, 11]. Here it was proposed that, rather than automating existing processes or gaining capability, the most benefit to the business would be seen by supporting existing roles and formalising work flows using a lightweight system. Although not new, this is different from the typical role of Knowledge Based Engineering as described by Studer [16].

2.2 Application to the Technology
Fixtures and tooling are intended to provide the means for manufacture and assembly, and their common functionality is to "support, locate and clamp the part" in order to perform some operation [10]. Thus, provided the fixture fulfils the required function, its precise geometry or mass is not critical. The key aspect of fixture manufacture is its role as the "critical design-manufacturing link" [6]. For example, if a fixture is required in the production of a
component, a long lead-time in the production of the fixture can cripple production of the end component. Using the repeatable and rapid manufacturing capability of flat-bed laser cutting, the technology drastically reduces the traditionally long lead-time required for the manufacture of fixtures and tooling. The product becomes a knowledge-based product, and the lead-time is given primarily by the time required to produce a design. To add value to the business, the aim here is to improve the final product and reduce this design time by capturing and re-using expert knowledge to facilitate design. Thus a lightweight system was proposed that would be driven by the user to provide access to knowledge or knowledge-based tools on demand.
3 The Knowledge Based System

3.1 System Structure
Initially, time was spent observing designers' interaction with the expert. Three aspects of support were identified, loosely following Lundvall's [12] classification of the forms of knowledge transfer: know-what, know-why and know-how. In short, designers needed to have the knowledge to design, to understand the process of design and to know how to design. Existing strategies detailing frameworks and/or methodologies for Knowledge Based Systems were examined. Most relevant was Hahn's, a framework based heavily on Nonaka and Takeuchi's model [8]. It was found that these frameworks tended to assume a relatively large user population and knowledge base, i.e. including expert databases (the so-called 'yellow pages') and electronic discussion forums. Here, there is one expert and a limited number of users, yet new and existing knowledge must still be managed. A system was therefore proposed that combined three components: a knowledge repository, a methodology and a CAD-orientated toolkit, shown schematically in Figure 1.
Figure 1. Schematic diagram illustrating the intended flow through the system
3.1.1 The Knowledge Database

The knowledge database is designed as the first point of contact for users faced with a new or complex design. Studies indicate that experienced designers rely heavily on past designs [1], and the repository will allow new designers to do so too. The repository is an SQL-driven database storing codified information and rich media about previous designs, including the design drivers, product requirements and materials, together with relevant CAD files, photographs and video files. Following a new solution, information is entered via a form relating to: the client and product requirements, the details of the design, special considerations and design experiences. A search function allows users to search for and retrieve information on existing designs. A large number of past designs were initially codified, and to date there are approximately 400 designs stored. Moving forward, photo annotation will be incorporated into the system to aid feature/device interpretation, and additional functions will be developed to index data more effectively.

3.1.2 The Methodology

The methodology was derived from the experiences of the existing designers, following time spent discussing past designs and observing their approaches to new designs. Currently a package of work is being launched to integrate the knowledge database with the methodology. The intention is to create a linear path through which the designers progress and, at key stages, interface with the system to record the relevant design information. This will be an improvement on the existing system; data entry will become a part of the work flow and establish a best-practice design methodology.
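As a purely illustrative sketch of what such an SQL-driven repository might look like (the table and column names below are our own, not those of the Rolls-Royce system), a minimal schema with a simple keyword search could be:

```python
import sqlite3

# Hypothetical, minimal schema for a design knowledge repository.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE design (
        id INTEGER PRIMARY KEY,
        client TEXT,
        product_requirements TEXT,
        design_drivers TEXT,
        material TEXT,
        special_considerations TEXT,
        cad_file TEXT,            -- path to the associated CAD model
        experience_notes TEXT     -- lessons learned recorded by the designer
    )""")

conn.execute(
    "INSERT INTO design (client, product_requirements, design_drivers, material, "
    "special_considerations, cad_file, experience_notes) VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("Assembly cell 4", "Locate and clamp compressor blade", "ultra-short lead-time",
     "mild steel", "avoid contact with aerofoil surface", "fixtures/blade_fixture_031.prt",
     "laser-cut slots proved adequate for location"))
conn.commit()

# Simple keyword search, the kind of retrieval a new designer might run.
for row in conn.execute(
        "SELECT id, client, design_drivers FROM design "
        "WHERE product_requirements LIKE ?", ("%blade%",)):
    print(row)
```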
3.1.3 The Toolkit

The toolkit encompasses a series of different tools, primarily orientated around the CAD engine, to support and accelerate future design work. Initially, experiments were completed on full product parameterisation. This required a large upfront investment in coding but offered no cross-product value. More value was seen in parameterising the most commonly used parts and creating user-defined features. These allow users to drag and drop existing geometry into their new designs; while not automatic, they help designers complete their most common tasks.

3.2 System Testing
Following implementation of a trial system, a formal trial was conducted to test the methodology and the benefit provided by the system during design. The aim of the test was: “To assess the effectiveness of the current design system, training and associated knowledge in the full development and production of a fixture design”. To complete the test, two user groups were compared. Engineers who were previously unfamiliar with the technology were asked to design a solution to a problem using the knowledge provided by the system. This same task was given to existing designers and the relative approaches and designs compared. All designers completed a design in the time designated with a variety of solutions. All novice engineers found the knowledge repository useful in providing the basis for design solutions and the test indicated that this knowledge base facilitated concept creation by new designers. However, the designers did struggle with implementation and development of their designs. The most common problem seen was in assessing the mechanical performance of structures or shapes. This highlighted the need for additional CAD orientated tools (such as standard design features with validated structural data) to aid design. A development system is now in place in the business, is in regular use and continues to provide a valuable resource to the designers.
4 Current Research

4.1 A Multi-Tool Approach
Following the experiences of the trial, the intention is to develop the system as a portal for different tools to support aspects of the geometry design. These can be continually developed and new tools created.
4.1.1 To Date

To date two tools or streams of work have been completed within Rolls-Royce. Parametrised common features have been created within the CAD software and recently a macro driven design tool to calculate elastic deformation of compliant components. This is currently accessed through Excel and represents a lightweight solution to common geometry calculations.
4.1.2 Proposed

Moving forward, two additional tools are proposed: a similar macro-driven tool to determine bend radii for complex press brake tooling, and an advanced tool for geometry creation of compliant mechanisms using an evolutionary algorithm. The latter tool represents a continued focus on a key functional element of the technology: customisable and precision clamping and locating devices. Offering high value to customers, design capability for these devices is limited to the most experienced designers, often relying on experience to judge the optimum load, for example. The proposed tool will interface directly with the CAD engine and will use Evolutionary Structural Optimisation (ESO) to create an approximate solution. This method was first developed in the early 1990s to optimise a component with respect to volume, by systematically removing low-stressed material to produce the optimum design of a uniformly stressed component with minimum mass [7, 14]. The research here will use the ESO principle to develop a model that will generate the approximate geometry for a sprung or compliant component. A new algorithm will be required in order to optimise compliant geometry with respect to a desired load from the device. The ESO principles offer a fast, easy method for new designers to produce a working solution without relying on expert intuition or trial-and-error testing. This should provide a key support function within the Knowledge Based System.
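The ESO principle can be illustrated very compactly. The sketch below is our own simplification, not the proposed Rolls-Royce tool: it repeatedly removes the lowest-stressed elements from a design domain, with the finite element stress analysis replaced by a placeholder function.

```python
import random

def compute_element_stresses(elements):
    """Placeholder for a finite element analysis of the current design.
    In a real ESO loop this would return the stress in each remaining element."""
    return {e: random.uniform(0.0, 100.0) for e in elements}

def eso(elements, rejection_ratio=0.05, target_fraction=0.6):
    """Remove the least-stressed elements until only target_fraction remain."""
    design = set(elements)
    target = int(len(elements) * target_fraction)
    while len(design) > target:
        stresses = compute_element_stresses(design)
        max_stress = max(stresses.values())
        # Reject elements whose stress is below a fraction of the peak stress.
        low = {e for e, s in stresses.items() if s < rejection_ratio * max_stress}
        if not low:
            rejection_ratio += 0.05        # evolve the rejection criterion
            continue
        design -= low
    return design

initial = [(i, j) for i in range(20) for j in range(10)]   # a coarse 20 x 10 grid
print(len(eso(initial)), "elements kept of", len(initial))
```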
5 Conclusion
The work presented here is intended to demonstrate a less traditional approach to the creation of a Knowledge Based System, focusing on a lightweight, user-driven system with low initial cost and investment (relative to an automated knowledge based system). The system combines a searchable knowledge repository with a collection of tools to provide guidance and support for specific aspects or functions of the design process, such as access to previous designs and rationale, deployable common features, and an ESO-based tool to generate geometry for compliant components. Future work will focus on completing and deploying the ESO tool, continued development of the system and its structure, and an assessment of the impact the system provides to the business process.
6 References

[1] Ahmed, S., K. Wallace, and L. Blessing. Understanding the differences between how novice and experienced designers approach design tasks. Research in Engineering Design 2003. 14(1): p. 1-11.
[2] Beckman, T.J., The Current State of Knowledge Management, in Knowledge Management Handbook, J. Liebowitz, Editor. 1999, CRC Press LLC.
[3] Bell, D. The Coming of Post-Industrial Society. 1974, London: Heinemann Educational Books Ltd.
[4] Bontis, N. The Rising Star of the Chief Knowledge Officer. Ivey Business Journal 2002. 66(4): p. 20-5.
[5] Bose, R. Knowledge Management Metrics. Industrial Management & Data Systems 2004. 104(6): p. 457-468.
[6] Cecil, J. Computer-Aided Fixture Design - A Review and Future Trends. The International Journal of Advanced Manufacturing Technology 2001. 18(11): p. 790-793.
[7] Edwards, C., H. Kim, and C. Budd. An evaluative study on ESO and SIMP for optimising a cantilever tie-beam. Structural and Multidisciplinary Optimization 2007. 34(5): p. 403-414.
[8] Hahn, J. and M.R. Subramani, A framework of knowledge management systems: issues and challenges for theory and practice, in Proceedings of the twenty-first international conference on Information systems. 2000, Association for Information Systems: Brisbane, Queensland, Australia.
[9] Hopgood, A.A. Intelligent Systems for Engineers and Scientists. 2nd ed. 2001: CRC Press LLC.
[10] Hunter, R., et al. Knowledge model as an integral way to reuse the knowledge for fixture design process. Journal of Materials Processing Technology 2005. 164-165: p. 1510-1518.
[11] Kim, J.-A. Measuring the Impact of Knowledge Management. IFLA Journal 2006. 32(4): p. 362-367.
[12] Lundvall, B.A., The Social Dimension of the Learning Economy. 1996: Department of Business Studies, Aalborg University, Denmark.
[13] McMahon, C., A. Lowe, and S. Culley. Knowledge management in engineering design: personalization and codification. Journal of Engineering Design 2004. 15(4): p. 307-325.
[14] Querin, O.M. and G.P. Steven. Evolutionary Structural Optimisation (ESO) using a bidirectional algorithm. Engineering Computations 1998. 15(8): p. 17.
[15] Sandberg, M., Knowledge Based Engineering - In Product Development. 2003, Department of Applied Physics and Mechanical Engineering, Luleå University of Technology.
[16] Studer, R., V.R. Benjamins, and D. Fensel. Knowledge engineering: principles and methods. Data Knowl. Eng. 1998. 25(1-2): p. 161-197.
[17] Tapscott, D., D. Ticoll, and A. Leavy. Digital Capital, Harnessing the Power of Business Webs.
Near Net-shape Manufacturing Costs

Stuart Jinks a,1, Prof J Scanlan b, and Dr S Wiseall c

a EngD student, University of Southampton, UK.
b School of Engineering Sciences, University of Southampton
c Rolls-Royce Plc
Abstract. Improved efficiency in aero engines requires leaner fuel burn, resulting in higher working temperatures and the use of high-temperature alloys. These high-temperature alloys are extremely expensive, and it is widely known that their material costs contribute a significant fraction of the total product cost. Near net-shape manufacturing techniques such as Hot Isostatic Pressing (HIP) provide a way of reducing material costs through an improved buy-to-fly ratio compared to traditional manufacturing routes. Cost modelling of some existing components and processes within Rolls-Royce plc uses a parametric approach, using historical data of similar components and processes to establish cost estimates. The parametric approach is unsuitable for preliminary costing of novel components and processes, where historical data is no longer relevant or there is little production data available. An object-oriented parametric cost model, with discrete event simulation, will remove the reliance on historical data and allow preliminary design of novel components and processes to be conducted. Part of the Resource Efficient Manufacture of high performance hybrid Aerospace Components (REMAC) project is to manufacture a high-performance Nickel-based alloy component via net-shape powder HIPing and complete a cost, energy and environmental assessment.

Keywords. Near net-shape, cost modelling.
1 Aim

The aim of this research is to investigate the costs and environmental impact of powder HIPing and other near net-shape manufacturing methods, and to compare them with current methods of manufacture. Novel methods of simulating process times and activity rates will be investigated, to give greater depth and understanding to the cost models. This forms part of the Cost Modelling Strategy being developed by the research and technology costing team at Rolls-Royce plc.

1 Corresponding Author Email: [email protected]
© copyright 2008 Rolls-Royce plc. All Rights Reserved. Permission to reproduce may be sought in writing to IP Department, Rolls-Royce plc, P.O. Box 31, Derby DE24 8BJ, United Kingdom.
2 Background

2.1 REMAC

Resource Efficient Manufacture of high performance hybrid Aerospace Components (REMAC) is a Department for Business Enterprise and Regulatory Reform (BERR) (formerly the Department of Trade and Industry (DTI)) project. Members of the group are Rolls-Royce plc, Birmingham University, Bodycote plc and Sandvik Osprey Ltd. The scope of REMAC is to manufacture a high-performance Nickel-based alloy component via net-shape powder HIPing to maximise material usage and minimise energy consumption.

2.2 Powder metallurgy

Powder metallurgy (PM) is a range of manufacturing and metal-forming techniques that are used to produce net or near net-shape components from metal powder. PM consists of four major processing stages: manufacture of metal powder, compaction, sintering and secondary operations. These stages are fundamentally the same but are achieved in different ways by different manufacturing techniques.

2.2.1 Manufacture of metal powder

There are three major techniques for manufacturing metal powders: atomisation, mechanical comminution and chemical processes. Atomisation is the process used commercially to produce the largest tonnage of metal powders [1]. In the atomisation process molten metal is broken up into small droplets, which solidify before coming into contact with each other or with a solid surface. The principle is to disintegrate a stream of molten metal by impacting it with a high-pressure gas or liquid (Figure 1 and Figure 2 respectively). Nitrogen and argon are commonly used gases, and water is the most widely used liquid [2]. Centrifugal atomisation is another form of atomisation, where droplets of molten metal are discharged from a rotating source: molten metal is rotated and droplets are thrown off, or the molten metal is poured onto a rotating disk to produce the droplets. Another variant rotates a bar while an electrode melts its free end; this is called the Rotating Electrode Process (REP) (Figure 3) [2].
Figure 1. Vertical Gas Atomizer [5]
Figure 2. Water Atomization Process [5]
Figure 3. Centrifugal Atomization by the Rotating Electrode Process [5]
2.2.2 Powder Hot Isostatic Pressing (PHIP)

In the PHIP process the compaction and sintering stages occur simultaneously. The process involves filling a gas-tight container with powder and degassing it to remove excess air; the container is then sealed. The container and powder are subjected to heat and equal pressure in all directions within a pressure vessel. The sintering temperature is kept below the melting point of the base material, but some of the additives may melt, which results in liquid-phase sintering. After pressurisation the container is removed from the vessel, and the consolidated component is extracted from the container by secondary operations (Figure 4).
Figure 4. Hot Isostatic Pressing Sequence-Schematic [5]
2.3 Cost estimation

Cost estimation was traditionally completed after the design process, but the design process contributes 70-80% of the total avoidable cost of the product life cycle, and after this stage cost implications are often irreversible [4, 11]. Many authors have reviewed and classified cost estimation techniques. Curran et al. [4] classified cost estimation techniques into classic estimation techniques (i.e. analogous, parametric, bottom-up) and advanced estimation techniques (i.e. feature-based estimation, fuzzy logic, and neural networks), and Niazi et al. [8] classified several PCE (Product Cost Estimating) techniques into qualitative and quantitative techniques. Analogous costing is used for products that contain existing components with historical cost data available. Parametric costing employs cost estimating relationships (CERs) and associated mathematical algorithms to establish cost estimates.

2.4 Modelling

A model is a representation of a system with a hypothesis to describe the system, often mathematically [3]. Vanguard Studio, formerly known as Decision Pro, was chosen by DATUM, a Rolls-Royce sponsored project. Vanguard Studio is a visual, object-oriented modelling tool that can create complex distributed models and decision trees. The cost engineering team at Rolls-Royce plc are using Vanguard Studio to model the unit costs of components and processes. The principle that the cost engineering team are working towards is to "write once, use many times". Cost models for processes and materials can be used as child models within parent models for new components. Figure 5 shows an example of a cost model within Vanguard Studio: a hierarchical tree structure is shown with clear naming of cost elements, use of units and application of the component [7].
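As a hedged illustration of these two ideas, a parametric CER and the "write once, use many times" child/parent structure, the sketch below is our own simplification and does not reproduce any Rolls-Royce model or Vanguard Studio syntax:

```python
# Illustrative parametric CER: cost estimated from a driver (here, mass),
# with coefficients that would normally be regressed from historical data.
def machining_cer(mass_kg, a=120.0, b=0.85):
    """Hypothetical cost estimating relationship: cost = a * mass^b (GBP)."""
    return a * mass_kg ** b

# Illustrative "write once, use many times" structure: child cost models
# (material, processes) are reused inside a parent model for a new component.
def material_cost(mass_kg, price_per_kg):
    return mass_kg * price_per_kg

def component_cost(mass_kg, price_per_kg, process_costs):
    return material_cost(mass_kg, price_per_kg) + sum(process_costs)

nickel_alloy_price = 45.0                       # GBP/kg, assumed figure
processes = [machining_cer(12.0), 300.0]        # machining CER plus a fixed HIP cycle charge
print(round(component_cost(12.0, nickel_alloy_price, processes), 2))
```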
Figure 5. Vanguard Studio example
2.5 Simulation

Simulation is the operation of a mathematical model to imitate the internal processes of a system, and not just the results of its operation. Simulation allows testing of the implications of possible operations without having to implement them. ExtendSim has been chosen because of its easy-to-use graphical interface, the relatively low cost of the tool and the ability to validate models compared with spreadsheets. Panko [9], referenced by Tammineni [10], states: "Given data from recent field audits, most large spreadsheets probably contain significant errors". ExtendSim is a simulation tool that utilises two forms of simulation: continuous and discrete event. Continuous simulation is when time advances in equal steps and model values are recalculated at each time step. In discrete event simulation the system changes state as events occur, and only when those events occur. Figure 6 shows a transmitter-receiver system [6], an example of a discrete event simulation model.
Figure 6. A car wash discrete event model
ExtendSim uses blocks as its main building components; each block represents some part of the process being modelled, and blocks are linked with lines. Each block contains procedural information for its part of the process, and blocks can be put into hierarchical groups to aid visual representation. ExtendSim has an open architecture that supports the component object model (COM/ActiveX) and open database connectivity (ODBC). Excel spreadsheets can be embedded into ExtendSim models, and ExtendSim can directly access data from a database, control an application, or be controlled by another application.
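To illustrate what discrete event simulation means in this context, independently of any particular tool, the following minimal Python sketch advances time from event to event for a single HIP vessel serving a queue of parts; all figures are invented.

```python
import heapq

# Minimal discrete event simulation: one HIP vessel, parts arrive every 3 h,
# each HIP cycle takes 8 h. Time only advances when an event occurs.
arrivals = [(3.0 * i, f"part-{i}") for i in range(5)]     # (time, part) pairs, assumed
cycle_time = 8.0

events = [(t, "arrival", p) for t, p in arrivals]
heapq.heapify(events)
queue, vessel_free_at, completions = [], 0.0, {}

while events:
    time, kind, part = heapq.heappop(events)
    if kind == "arrival":
        queue.append(part)
    else:                                   # "done": the vessel has finished a cycle
        completions[part] = time
    if queue and time >= vessel_free_at:
        nxt = queue.pop(0)
        vessel_free_at = max(time, vessel_free_at) + cycle_time
        heapq.heappush(events, (vessel_free_at, "done", nxt))

for part, t in completions.items():
    print(f"{part} finished at {t:.1f} h")
```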
3 Methodology

A novel costing approach is proposed that utilises an object-oriented modelling and discrete event simulation environment to develop generative, process-based cost models for an engine component. The knowledge of powder atomisation and the powder HIPing process is captured in a simulation environment to generate detailed process times and activity rates. This approach will allow comparative scenario evaluation to be conducted at the preliminary design stage. It could also potentially improve the processes physically and economically by identifying the cost drivers. Figure 7 shows a flow chart of how the cost of a component will be derived. Uncertainties will be applied where appropriate, as both Vanguard and ExtendSim can perform sensitivity analysis.
Figure 7. Flow chart showing how cost will be derived
4 Future work

A cost model of the current manufacturing route and multiple cost models reflecting the developments of the powder HIPing route will be completed as the REMAC project continues. These models will be used to compare the powder HIPing route with the current manufacturing route. The methodology of using a discrete event simulation tool to calculate the process times and activity rates to be used within cost models can be applied to other situations, such as the powder bed process and the repair of components. Designers use geometry to define a component, and cost models use this geometry to model the costs; linking these steps to give a cost as a component is being designed would allow designers to see the cost implications of their design choices. An environmental assessment will be carried out by comparing the current method of manufacture to the proposed powder HIPing method. The types of information to be compared are waste material for each process, disposal methods for waste material, recycling methods for consumable and processed material, and energy consumption.
5 Acknowledgements

This work is part of the author's Engineering Doctorate (EngD) at the University of Southampton, supervised by Professor Jim Scanlan and sponsored by Rolls-Royce plc in conjunction with the REMAC project and EPSRC.
6 References

[1] Manufacture of metal powder. Available at: <www.mpif.org/apmi/doc4.htm>. Accessed on:
[2] Gas Atomisation. Available at: . Accessed on: 15th April 2008.
[3] Modelling. Available at: . Accessed on: 15th April 2008.
[4] Curran R, Raghunathan S & Price M. Review of aerospace engineering cost modelling: The genetic causal approach. Progress in Aerospace Sciences 2004; 40:487-534.
[5] German RM. Powder Metallurgy Science. EPMA, 1994.
[6] Imagine That Inc. (Ed.) (2007) ExtendSim User Guide, Imagine That Inc.
[7] Maccalman R & Reuss D. DATUM - Vanguard Example. Available at: . Accessed on: 15th April 2008.
[8] Niazi A, Dia JS, Balabani S & Seneviratne L. Product cost estimation: Technique classification and methodology review. Journal of Manufacturing Science and Engineering 2006; 128:563-575.
[9] Panko R.P. (2000) Spreadsheet errors: what we know, what we think we can do. Proceedings of Spreadsheet Risk Symposium (European Spreadsheet Risk Interest Group).
[10] Tammineni SV. Designer Driven Cost Modelling. University of Southampton, 2007.
[11] Weustink IF, Brinke E, Streppel AH & Kals HJJ. A generic framework for cost estimation and cost control in product design. Journal of Materials Processing Technology 2000; 103:141-148.
Modelling the Life Cycle Cost of Aero-engine Maintenance James S. Wong1, James P. Scanlan2 and Murat H. Eres3 School of Engineering Sciences, University of Southampton, U.K. Abstract. This paper presents an approach to modelling the maintenance Life Cycle Cost (LCC) of an aero-engine which links the capabilities of hierarchical modelling and discrete-event simulation (DES) tools. It follows up on previous work on a component-level hierarchical cost estimation model [1]. It is concluded that, as the calculation of LCC involves a highly diverse set of representations and processes, it is undesirable to use a single software tool for this task. This work seeks to demonstrate how different modelling paradigms can be used in tandem to produce an elegant solution. The individual parts of the model and the results generated are presented and discussed. Essentially, the approach shows how a design parameter can be linked to the resultant LCC to help form cause and effect relationships. Keywords. Life cycle cost, Cost modelling, Cost engineering, Aerospace engines, Gas turbines
1 Introduction Programs like Rolls-Royce's TotalCare® [2] deviate radically from the traditional procurement method in which airlines/operators own and maintain the aero-engines of their fleet. When airlines/operators purchase TotalCare® they pay the Original Equipment Manufacturer (OEM) a fixed dollar rate per flying hour for the missions being flown. Under this contract the OEM assumes the cost of maintenance and support services. Since both the OEM and the airlines/operators seek to ensure that the engines are kept flying with minimal disruption and cost, the conflict of interest that existed before is eliminated. This kind of arrangement has new implications for the OEM, the most significant of which is that the OEM now has to consider the operating performance and costs of
Postgraduate research student, Computational Engineering Design Group, Building 25, Room 2029, University of Southampton, Southampton, SO17 1BJ, U.K.; Tel: +44 (023) 8059 4642; Fax: +44 (023) 8059 3230; Email: [email protected] 2 Professor of Design, Computational Engineering Design Group 3 Senior Research Fellow, Computational Engineering Design Group
its product at the early design stage. Any disruptive event which takes an engine out of operation causes the engine to cease generating income for the OEM; the engine also becomes a cost drain. It is thus incumbent on the OEM to analyse potential areas where operating costs can be reduced to increase revenue. One area of interest is the calculation and prediction of maintenance life cycle cost. Harrison [3] notes that while direct maintenance costs only make up 6-8% of total operating costs, engine maintenance is a significant factor in other important operator cost drivers. It is well established that decisions affecting more than 70% of the total life cycle cost (LCC) of a system are made in the early concept design phase [4, 5]. Therefore, Rolls-Royce has implemented its 'Design for Service' programme, which advocates designing its products and services in tandem [3]. This work aims to develop tools which allow the designer to model maintenance LCC and form cause and effect relationships.
2 Component Level Hierarchical Model The component level hierarchical cost estimation model [1] simulates the maintenance costs for a set of aero-engine components. The hierarchical approach taken for this model uses a bottom-up method, which allows the causes and effects behind the cost estimate to be understood [5]. The model was built in a commercial software package called Vanguard Studio [6], which is currently used in Rolls-Royce as a concept costing tool. Figure 1 shows the model implemented in the Vanguard Studio environment.
Figure 1. Hierarchical model in Vanguard Studio environment
One of the major reasons why Vanguard Studio was selected for this purpose was its visual modelling capability: it generates hierarchical trees to model complex problems. This allows the user to trace how the cost model is structured and to study the cause and effect of each case. Another feature of Vanguard Studio is its web deployment capability, which developers can use to publish models on the Vanguard Studio server; the models on the server can then be run by other users through a standard web browser. Additionally, the development environment uses a scripting language called "DScript", an extended form of the JavaScript® programming language [6], which gives Vanguard Studio powerful analytic capabilities. Finally, the development environment supports most object-oriented programming features, which allows engine components to be modelled as objects. Attempts were made to improve the process and detail by which the maintenance costs were calculated in the hierarchical cost estimation model. However, as the model developed and grew in complexity, some limitations were exposed. Because the maintenance logic of an aero-engine is complicated, the amount of programming script grew substantially, resulting in a significant deterioration in model runtime. The more pressing issue, however, was a loss of transparency within the maintenance logic with regard to how the final costs were calculated. Part of the problem was Vanguard Studio's limited suitability for modelling dynamic processes and loops. Vanguard Studio requires the problem to be described in the form of a hierarchical tree, and modelling a dynamic process in this manner results in tree nodes containing relatively complex programming code; some of the nodes ended up with as many as 800 lines of code. This runs counter to Vanguard Studio's philosophy of solving a problem by dividing it into simpler components.
3 A Hybrid Modelling Approach The hybrid model builds on the component level model described above. The issues discussed suggest that it is perhaps unreasonable to expect a single software tool to possess a wide enough range of capabilities to compute LCC. It is therefore more practical to have a suite of tools which can interact with each other to solve the task at hand. Employing existing tools within an integration environment can save valuable development time and effort. These were the reasons behind the creation of a hybrid hierarchical-discrete event simulation (DES) model. This approach links two different commercial software packages: the first is Vanguard Studio and the second is a discrete event simulation tool named Extend [7], which models dynamic processes or systems. A review of more than 50 DES software packages by Tewoldeberhan [8] identified Extend as a good, cost-effective tool for developing DES models. Arena® was ranked slightly higher in the review in terms of features such as graphics capability, but was also significantly more expensive (approximately ten times the price of Extend).
Figure 2 illustrates how the two programs interact. Vanguard Studio collates all the necessary input data and formats them before passing the data on to a database. Vanguard Studio then executes the Extend model, which extracts the data from the database and runs the dynamic simulation. The results are passed, via the database, back to Vanguard Studio, which has superior statistical analysis functions. Vanguard Studio, in essence, acts as a wrapper or front end for the Extend model. In other words, the user does not interact with the Extend software for each simulation; the Extend model is only modified if the maintenance logic has to be changed.
Figure 2. Framework of the hybrid hierarchical-discrete event simulation model
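The handoff pattern in Figure 2 can be sketched generically as follows. This Python/SQLite stand-in is only meant to illustrate the wrapper, database and simulator round trip; in the actual model these roles are played by Vanguard Studio, a shared database and the Extend model, and every table, column and component name below is invented.

```python
import sqlite3

# Stand-in for the DES tool: reads inputs from the shared database, runs a
# trivial "simulation", and writes results back. In the paper this role is
# played by the Extend model; the schema here is purely illustrative.
def run_simulation(db_path):
    con = sqlite3.connect(db_path)
    rows = con.execute("SELECT component, failure_interval_h, repair_cost "
                       "FROM sim_inputs").fetchall()
    for component, interval, cost in rows:
        visits = int(30000 / interval)          # visits over 30,000 flying hours
        con.execute("INSERT INTO sim_results VALUES (?, ?, ?)",
                    (component, visits, visits * cost))
    con.commit()
    con.close()

# Stand-in for the wrapper (Vanguard Studio in the paper): collate inputs,
# trigger the simulation, then collect results for statistical post-processing.
db = "hybrid_model.db"
con = sqlite3.connect(db)
con.execute("CREATE TABLE IF NOT EXISTS sim_inputs "
            "(component TEXT, failure_interval_h REAL, repair_cost REAL)")
con.execute("CREATE TABLE IF NOT EXISTS sim_results "
            "(component TEXT, shop_visits INTEGER, maintenance_cost REAL)")
con.execute("DELETE FROM sim_inputs")
con.execute("DELETE FROM sim_results")
con.executemany("INSERT INTO sim_inputs VALUES (?, ?, ?)",
                [("compressor_blade", 12000, 4200.0),
                 ("turbine_disc", 20000, 15500.0)])
con.commit()
con.close()

run_simulation(db)

con = sqlite3.connect(db)
for row in con.execute("SELECT * FROM sim_results"):
    print(row)
con.close()
```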
3.1 Hierarchical Component One of the most useful functions in Vanguard Studio is the ‘component-based modelling’ feature. It allows other Vanguard Studio models to be inserted and used in a ‘parent’ model. The advantages of this feature are as follows [6]: 1. The responsibility for building and maintaining each component can be assigned to a person who has direct knowledge of what the component represents. 2. The development process is sped up by dividing the model build into various pieces that fit specific functionalities. 3. The components built can be shared; thus, eliminating the duplication of effort. 4. Knowledge is captured for future builds because individual components can be reused in new projects. The ‘component-based modelling’ function also allows inputs and outputs of a component to be exposed in its parent model. Consequently, the effects of a design
parameter change in a component model can be seen in the parent model. This allows the designer to trace and identify cause and effect relationships. Vanguard Studio unit cost models developed by the DATUM project [9], as well as physics-based life estimation models, were used in this model. Design parameters affecting the manufacturing cost and life of an engine part could then be linked directly to maintenance life cycle cost. 3.2 Discrete Event Simulation Component The discrete event simulation (DES) model performs the calculations for the maintenance costs over a period of time. One of the drawbacks with previous tools for this kind of application is the representation of the underlying logic behind the cost calculations. In previous cost estimation tools, especially purpose-built programming codes and spreadsheet-based programs [10], critical logic statements are often hidden within the lines of code. The result is that these programs become difficult to understand, maintain, debug and modify. They lack transparency, and it is practically impossible to trace whether there are errors in the logic or whether the model actually does what it is supposed to do. The use of Extend addresses these issues, as its graphical interface allows the modelled processes to be visually replicated. Figure 3 shows a modelled process loop and illustrates Extend's ability to represent a process accurately.
Figure 3. A process loop implemented in Extend
Figure 4 shows the engine level process loop as implemented in Extend. This is the highest level of representation in the maintenance model. Each engine comprises a number of modules, which in turn are made up of the required combination of components. Similarly, a component contains a number of 'incidents' representing the various failure modes (e.g. creep, fatigue). The occurrence of each failure mode is determined from statistical sampling of the respective probability distribution (e.g. Weibull, Lognormal). The failure modes determine the actions performed on the engine, and the costs of the various maintenance actions are aggregated to give the total maintenance cost. The simulation ends once certain conditions are met.
Figure 4. Engine level process loop implemented in Extend.
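A minimal sketch of such an engine-level loop is given below, assuming failure times are drawn from Weibull distributions and each occurrence simply adds a repair cost. It ignores the grouping of failures into shop visits and all module-level logic, and every structure name, distribution parameter and cost figure is an invented placeholder rather than data from the model described here.

```python
import random

# Illustrative engine structure: module -> components -> failure modes.
# Weibull shape/scale parameters (flying hours) and repair costs are invented.
ENGINE = {
    "compressor": {
        "blade": [("fatigue", 2.5, 15000, 3000.0), ("erosion", 1.2, 9000, 1200.0)],
        "disc":  [("creep",   3.0, 25000, 18000.0)],
    },
    "turbine": {
        "blade": [("creep",   2.8, 12000, 5000.0), ("oxidation", 1.5, 10000, 2500.0)],
    },
}

def simulate_engine(horizon_h=50000):
    """Very small event loop: generate successive failure occurrences for each
    failure mode until the operating horizon is reached, summing repair cost."""
    total_cost, events = 0.0, []
    for module, components in ENGINE.items():
        for component, modes in components.items():
            for mode, shape, scale, cost in modes:
                t = 0.0
                while True:
                    # random.weibullvariate(alpha, beta): alpha = scale, beta = shape
                    t += random.weibullvariate(scale, shape)
                    if t > horizon_h:
                        break
                    events.append((t, module, component, mode, cost))
                    total_cost += cost
    events.sort()                       # chronological maintenance history
    return total_cost, events

cost, history = simulate_engine()
print(f"maintenance cost over horizon: {cost:,.0f}")
print(f"number of maintenance events : {len(history)}")
```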
4 Results Results from the hybrid model typically include charts and graphs detailing the breakdown of each individual shop visit cost. Figure 6 shows an example of these charts, taken from an LCC model of a set of aero-engine compressor blades. Figure 5 shows the results of a sensitivity analysis performed on five input variables with respect to the total LCC. This feature allows a designer to study the impact each design input has on the resultant LCC. These results were generated by the statistical analysis tools available in Vanguard Studio. Frequency and cumulative distributions of total maintenance cost were also generated, since many of the model inputs are stochastic; for this case, Extend performs repeated random sampling for the Monte Carlo analysis. It should be noted that the maintenance costs calculated only consider the cost of replacement components and repair operations. The
model also assumed an inexhaustible inventory, i.e. infinite resource queues. For a more complete cost estimate, labour rates, repair times and material costs would have to be included.
Figure 6. Graph of shop visit causes
Figure 5. Table of sensitivity analysis result
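The sensitivity study can be illustrated with a simple one-at-a-time perturbation, sketched below against an invented and highly simplified LCC function; the input names, values and cost relationships are placeholders and do not reproduce the Vanguard Studio analysis.

```python
# Crude one-at-a-time sensitivity sketch: perturb each input in turn and
# record the change in the LCC output. The cost function and the baseline
# inputs are invented stand-ins for the model described in the paper.
BASELINE = {"blade_life_h": 12000.0, "repair_cost": 4000.0,
            "scrap_rate": 0.15, "labour_rate": 60.0, "shop_visit_fixed": 250000.0}

def lcc(p, horizon_h=60000):
    visits = horizon_h / p["blade_life_h"]
    per_visit = (p["shop_visit_fixed"]
                 + p["repair_cost"] * (1 - p["scrap_rate"]) * 60   # 60 blades per set
                 + p["labour_rate"] * 400)                          # 400 labour hours
    return visits * per_visit

base = lcc(BASELINE)
print(f"baseline LCC: {base:,.0f}")
for name in BASELINE:
    perturbed = dict(BASELINE)
    perturbed[name] *= 1.10                      # +10% on one input at a time
    swing = lcc(perturbed) - base
    print(f"{name:18s} +10% -> LCC change {swing:+12,.0f}")
```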
5 Conclusions and Future Work A hybrid hierarchical-discrete event simulation cost estimation model was developed to address the limitations of the approach presented in the hierarchical component model [1]. The model ably simulates the maintenance processes and logic and displays them in a clear and concise fashion. It also demonstrates that these two disparate tools can be used in tandem to take advantage of both their capabilities. Critically, the model's value as a design tool lies in its ability to link an input design parameter to the resultant LCC. The developed approach compares favourably with manually programming a model and provides a flexibility and transparency which have been absent from previous tools. Current and future work will focus on developing a full engine model and integrating this work with a fleet simulation model [11].
6 Acknowledgements This work is supported by the IPAS project [12] (DTI Grant TP/2/IC/6/I/10292), which is co-funded by the Technology Strategy Board's Collaborative Research and Development programme (www.innovateuk.org).
7 References [1]
Eres M H, Scanlan J P. A Hierarchical Life Cycle Cost Model for a Set of Aero-Engine Components. 7th AIAA Aviation Technology, Integration and Operations Conference (ATIO), 2007. AIAA 2007-7705. Belfast, N. Ireland. [2] Aviation Industry Group. Why TotalCare can work for every Rolls-Royce Customer. The Engine Yearbook, 2007. [3] Harrison A. Design for Service - Harmonising Product Design with a Services Strategy. GT2006 ASME Turbo Expo: Power for Land, Sea and Air, 2006. [4] Asiedu Y, Gu P. Product Life Cycle Cost Analysis: State of the Art Review. International Journal of Production Research, 1998, Vol 36. [5] Curran R, Raghunathan S, Price M. Review of aerospace engineering cost modelling: The genetic causal approach. Progress in Aerospace Sciences, 2004, Vol 40, pp 487-534. [6] Vanguard Studio Website. Available at: . Accessed on: April 4th 2008. [7] ExtendSim Product Website. Available at: . Accessed on: April 4th 2008. [8] Tewoldeberhan T W, Verbraeck A, Valentin E, Bardonnet G. An Evaluation and Selection Methodology for Discrete-Event Simulation Software. Proceedings of the Winter Simulation Conference, 2002, Vol. 1, pp 67-75. [9] Scanlan J, Rao A, Bru C, Hale P, Marsh R. The DATUM Project: A Cost Estimating Environment for the Support of Aerospace Design Decision Making. Journal of Aircraft, 2006, Vol. 43. ISSN: 0021-8669. [10] Burkett M. DMTrade - A Rolls-Royce tool to model the influence of design changes and maintenance strategies on lifetime reliability and maintenance costs. ASME Turbo Expo 2006: Power for Land, Sea and Air, 2006. GT2006-90023. [11] Yu T T, Scanlan J P, Wills G B. Agent-Based and Discrete-Event Modelling: A Quantitative Comparison. 7th AIAA Aviation Technology, Integration and Operations Conference (ATIO), Belfast, Northern Ireland, 2007. AIAA 2007-7818. [12] IPAS Project Website. Available at: . Accessed on: April 4th 2008.
Value Driven Design – an Initial Study Applied to Novel Aerospace Components in Rolls-Royce plc Julie Cheunga,1, James Scanlana and Steve Wiseallb
a University of Southampton, United Kingdom. b Rolls-Royce plc, United Kingdom.
Abstract. Aero-engine customers today are demanding high quality, low cost products to meet their strict requirements. This consequently identifies cost as an important design parameter throughout the development cycle of the product. This paper describes possible unit cost modelling approaches and introduces the idea of ‘value driven design’ and presents opportunities to conduct multi-disciplinary optimisation for aero-engine designs. This research project aims to develop the concept of designing future generation gas turbines to not only meet performance targets but also to meet cost targets. The research encompasses feasibility studies to explore unit cost modelling strategies in the Cost Engineering group at Rolls-Royce plc, which include cost modelling methodologies for approaching novel components, when historical data does not exist, and whole engine system design. Keywords. Cost engineering, cost estimation, cost modelling, unit cost, value.
1 Introduction Cost plays an increasingly important role in the development of high-performance products and in competition between aerospace companies. Figure 1 illustrates the three major dependent competitive factors for the aerospace industry: cost, performance and reliability. A portion of a product's cost is dictated by decisions on its design [20]. This highlights the significance of treating cost as an independent design parameter that can be controlled during the development cycle [9]. Traditionally, cost tended to be considered late in the development cycle, which can lead to uncontrolled costs, particularly if the design changes. There is now a shift to establish and understand cost drivers throughout the design phase, essentially when design and manufacturing options are considered [13, 19].
Engineering Doctorate (EngD) Student, Computational Engineering Design Research Group, School of Engineering Sciences, University of Southampton, Highfield, Southampton, SO17 1BJ; Email: [email protected]
© copyright 2008 Rolls-Royce plc. All Rights Reserved. Permission to reproduce may be sought in writing to IP Department, Rolls-Royce plc, P.O. Box 31, Derby DE24 8BJ, United Kingdom.
Figure 1. Dependent competitive factors in aerospace products [6]
2 Background 2.1 Cost Engineering and Cost Estimating Cost engineering takes into consideration the design and engineering principles of a product and applies this knowledge to assess trade-off studies, particularly when design options are available [15]. The cost drivers are evaluated and cost reduction opportunities are identified to aid design decision making. This type of activity is particularly beneficial at the early stages of design. Cost estimating provides a forecast of the cost of a product by analysing historical data [9]. The estimation process has improved significantly as industries have become more competitive, and consequently the time to generate a cost estimate has been reduced (Figure 2) [18].
Figure 2. Evolution of cost estimation process [18]
Currently, the research case studies in this paper are working towards the 'future' column in Figure 2, investigating a more automated process by assessing cost modelling tools and techniques that can be applied in Rolls-Royce plc. There is a comprehensive selection of literature on cost estimation and its methodologies. A common classification divides cost estimation techniques into two groups: qualitative and quantitative [13]. Qualitative techniques utilise historical data to generate cost estimates, whereas quantitative techniques obtain cost information from a detailed analysis of the product design and manufacturing processes [12, 13]. The latter can produce a more credible cost estimate; however, it requires a detailed design of the product, which is not readily available at the early stages of design. The qualitative technique can cope without detailed designs, but the question is what happens when no historical data are available, as is the case for novel components. The DATUM project [16] introduced Vanguard Studio (formerly Decision Pro), an object-oriented tool for cost modelling. Library objects were built to capture design and manufacturing knowledge which can be reused (a qualitative technique) for cost estimating alternative product designs. The main advantages of the tool are its ease of use (cost breakdowns are clearly shown, so models can easily be verified), its ability to capture knowledge, and the functions and flexibility of a spreadsheet. An example of this approach is demonstrated in section 4.2 for the whole engine system design. 2.2 What is Value Driven Design? "Value-driven design (VDD) is an improved design process that uses requirements flexibility, formal optimization and a mathematical value model to balance performance, cost, schedule, and other measures important to the stakeholders to produce the best outcome possible" [2]. VDD builds on the concurrent engineering platform and applies multidisciplinary optimisation to the design of large systems. The motivation for value driven design has now been established [2], and this approach is a logical way to facilitate and meet customer requirements. The opportunity to apply this concept to aerospace applications was initially created by Collopy [7], where a value model is developed to represent not just a single dependent variable (e.g. cost) but all the design variables, combining them into a single measure or scoring function. This function indicates the "goodness" of the design: the higher the score, the better the solution. Therefore, this technique can support product designers in making justifiable and effective decisions. As Rolls-Royce plc possesses mature technology for high performance and reliable products [1], the investigation and application of cost engineering is highly relevant to VDD.
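As a purely illustrative sketch of a scoring function of this kind, the code below combines unit cost, specific fuel consumption and mass into a single figure of merit using arbitrary weights. Real VDD value models are typically derived from an economic analysis rather than from chosen weights, and every attribute name and number here is invented.

```python
# Minimal illustration of a single "value" score used to rank design
# alternatives. The weighted, reference-normalised form and all figures are
# assumptions for illustration only, not Collopy's value model.
def design_value(unit_cost, sfc, mass_kg,
                 cost_ref=1.5e6, sfc_ref=0.55, mass_ref=7500.0):
    """Higher is better: reward low cost, low specific fuel consumption and
    low mass, each normalised against a reference design."""
    return (0.4 * (cost_ref / unit_cost)
            + 0.4 * (sfc_ref / sfc)
            + 0.2 * (mass_ref / mass_kg))

candidates = {
    "baseline fan blade": (1.50e6, 0.550, 7500.0),
    "open rotor blade A": (1.85e6, 0.480, 7100.0),
    "open rotor blade B": (1.60e6, 0.505, 7300.0),
}
for name, (cost, sfc, mass) in candidates.items():
    print(f"{name:20s} value = {design_value(cost, sfc, mass):.3f}")
```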
3 Aims and Objectives The aim is to support the concept of designing future generation gas turbines to not only meet performance targets but also to meet cost targets. Initial feasibility studies are carried out to explore unit cost modelling strategies in the Research and Technology Cost Engineering group at Rolls-Royce plc. Exploration of case studies builds on the idea of integrating the cost parameter at the early stages of design, where there are greater opportunities to impact design decisions. The studies will support the investigations of VDD models. The significance of this work will be to influence Rolls-Royce plc design and manufacturing engineers to become aware of unit cost and to support the choice of design solutions and manufacturing processes. This approach is assisted through delivery of tools/techniques and knowledge to support design decision making.
4 Cost Modelling Case Studies 4.1 Novel Component Cost Modelling As stated, a qualitative technique uses historical data to estimate the cost of a new but similar design. However, when a radically new product is introduced, existing data is not available from which to develop a cost estimating relationship (CER) for the new design's unit cost. This case study discusses an approach to cost modelling a new component in Rolls-Royce plc. It is common in the aerospace field to use mass as a prime cost driver because the data is readily available; however, mass is not strictly a cost driver, since reducing mass can incur higher manufacturing costs [8]. Instead, Arago (Figure 3) [4] identifies the primary cost drivers as geometry, materials and method of manufacture. The part geometry defines the complexity of the design [11]; material costs are defined by the production level; and the method of manufacture determines the cost of machinery, resources and elements required to make the product. An open rotor engine fan blade (Figure 4) has been considered to demonstrate the cost modelling approach presented in this case study.
Figure 3. Primary cost drivers [4]
Figure 4. Open rotor aero-engine [3]
A framework for cost modelling machined parts was developed by Shehab [17]. The current methodology for novel component cost modelling adopts a similar approach, whereby the design, manufacturing process and material attributes are modelled. In addition, a discrete event simulation (DES) is introduced to model the dynamic operations of manufacturing the novel component. This is required to generate, and gain credibility for, operation times and rates [14]. The experimentation with a factory model also acts as a capacity study, whereby equipment and manpower are modelled to evaluate whether an annual demand can be met. As a result, the amount of equipment and manpower is determined, and ultimately the manufacturing costs. ExtendSim is a simulation tool whose documentation states that "simulating a system or process provides a quick and cost-effective method for determining impact, value, and cost of changes" [10]. The cost modelling method is illustrated in Figure 5. Design attributes and operating costs are modelled in Vanguard Studio with the factory model running in the background. Common material costs can be extracted from a materials library database and input into the cost model. The output is the component unit cost estimate. Uncertainties can also be modelled in both tool applications to give a three-point estimate rather than a single figure.
Figure 5. Component unit cost modelling methodology
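A deterministic back-of-envelope version of the capacity study is sketched below: it sizes the equipment needed to meet an annual demand and derives a cost per part. The DES factory model described above captures the queuing and variability that this static calculation ignores, and all demand figures, cycle times and rates are invented placeholders.

```python
import math

# Static capacity sketch: how many machines are needed to meet an annual
# demand, and what that implies for manufacturing cost per part.
ANNUAL_DEMAND = 2400          # parts per year (placeholder)
HOURS_PER_YEAR = 3800         # available hours per machine (placeholder shift pattern)
UTILISATION = 0.85            # planned utilisation / availability

operations = [                # (name, hours per part, machine day-rate)
    ("rough_machining", 2.5, 900.0),
    ("finish_machining", 1.8, 1100.0),
    ("inspection", 0.6, 400.0),
]

total_cost = 0.0
for name, hours_per_part, day_rate in operations:
    load_h = ANNUAL_DEMAND * hours_per_part
    machines = math.ceil(load_h / (HOURS_PER_YEAR * UTILISATION))
    annual_cost = machines * day_rate * 220          # 220 working days per year
    total_cost += annual_cost
    print(f"{name:17s} machines: {machines}, annual cost: {annual_cost:>10,.0f}")

print(f"manufacturing cost per part: {total_cost / ANNUAL_DEMAND:,.0f}")
```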
Since manufacturing costs depend on production operations [21], this becomes a generative cost model. The lack of detailed information during the early stages of design means it is difficult to produce a mature generative model; instead, a flexible and robust model that can accommodate design and manufacturing changes is highly desirable. 4.2 Whole Engine System Design There can be hundreds of design parameters for a whole engine. Therefore, engine sections need to be broken down into manageable modules for cost modelling. For each module, a library of existing engine component part cost models is developed. If the user wishes to observe the effect of adding or omitting a particular component, the cost impact can be shown simply by adding or removing the corresponding cost model. This is also relevant to novel aerospace components, where changes to the design at whole engine level can be reflected in the unit cost. Parametric cost estimating relationships (CERs) are modelled for each individual component using Vanguard Studio to evaluate design changes. The cost drivers for each component are extracted to the top level of the engine (Figure 6). The whole engine unit cost model is represented in a hierarchical tree structure with levels: whole engine > module > component > feature. Weustink, too, uses a generic framework to break down a product's cost into assembly, component and feature level [21].
Figure 6. Whole engine unit cost model in Vanguard Studio
A bill of materials (BOM) can be utilised by an engine template, and an automated process can be developed which enters the data into the appropriate nodes. This visually enhances the cost breakdown and allows engineers to understand where the component cost drivers lie. Furthermore, if design rules are modelled, the impact of altering a single design parameter on cost can be observed at the whole engine level: for example, increasing the fan stage diameter will increase the fan case diameter, and as a result the material and manufacturing costs will be affected.
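A minimal sketch of this kind of whole-engine roll-up with one design rule is given below; the tree structure, CER coefficients and the fan-to-case relationship are invented for illustration and are not Rolls-Royce data.

```python
# Sketch of a whole-engine cost roll-up: each component has a simple
# parametric CER, and one design rule propagates a fan diameter change to
# the fan case. All coefficients are illustrative placeholders.
def fan_blade_cost(fan_diameter_m):
    return 18000 + 9000 * fan_diameter_m ** 1.3

def fan_case_cost(case_diameter_m):
    return 40000 + 25000 * case_diameter_m ** 2

def engine_unit_cost(fan_diameter_m):
    case_diameter_m = fan_diameter_m * 1.08      # design rule: case follows fan
    engine = {
        "fan module": {
            "fan blades (set)": fan_blade_cost(fan_diameter_m),
            "fan case": fan_case_cost(case_diameter_m),
        },
        "core module": {
            "hp compressor": 310000.0,           # fixed placeholder CERs
            "hp turbine": 450000.0,
        },
    }
    total = sum(sum(parts.values()) for parts in engine.values())
    return total, engine

for d in (2.8, 3.0):
    total, breakdown = engine_unit_cost(d)
    print(f"fan diameter {d} m -> engine unit cost {total:,.0f}")
```

Comparing the two runs shows how a single design-parameter change propagates through the design rule and the component CERs to the whole-engine unit cost, which is the kind of cause-and-effect visibility the hierarchical tree is intended to give.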
5 Future Work Further research will be carried out on the cost modelling case studies. The novel component study will define an appropriate unit cost modelling methodology for approaching new designs. The cost models and the factory model will be further refined as more manufacturing information becomes available, yielding a robust unit cost model that can also address uncertainty. The Product Lifecycle Management (PLM) tool Unigraphics will also be linked to Vanguard Studio to obtain more accurate geometry parameters; this features as part of the design attributes in Figure 5. An alternative approach to estimating unit cost is to use data mining techniques. Data mining extracts statistical characteristics from a mass of data and helps to develop and analyse cost estimating relationships (CERs) [9]. Bru suggests that data mining facilitates the successful analysis of cost data [5]; his research used data mining to find data relevant for developing cost models and to determine how best to visualise cost data. This presents an opportunity to explore data mining methods and apply them at both component and whole engine level. The whole engine system design case study will be further developed to assess the integration of other variables, e.g. performance and life-cycle costs. This leads on to the idea of value driven design. The work discussed in this paper will apply the unit cost parameter to a value driven model. By exploring other parameters that influence design, the research can lead into multi-objective optimisation, contributing towards achieving sophisticated optimal designs for aerospace products.
6 Acknowledgements This work is part of the author’s Engineering Doctorate (EngD) at the University of Southampton, sponsored by Rolls-Royce plc and the Engineering and Physical Sciences Research Council (EPSRC).
7 References [1] Overview of Rolls-Royce plc and Products. Available at: . Accessed on: October 2007. [2] Home Page of the AIAA Value-Driven Design (VDD) Program Committee. Available at: . Accessed on: November 2007. [3] Flight Global - Image of an Open Rotor Aircraft Engine. Available at: . Accessed on: 1st February 2008. [4] Arago O, Bretschneider S & Staudacher S. A Unit Cost Comparison Methodology for Turbofan Engines. ASME Turbo Expo 2007: Power for Land, Sea and Air Proceedings. 2007 [5] Bru C. Generalisation and Visualisation of Cost Information. Doctor of Philosophy Thesis, University of the West of England, Bristol, 2007.
[6] Cheung JMW. Value Driven Design. University of Southampton Engineering Doctorate Conference, 2008. [7] Collopy P & Horton R. Value Modeling for Technology Evaluation. Published by American Institute of Aeronautics and Astronautics. 2002. [8] Collopy PD & Eames DJH. Aerospace manufacturing cost prediction from a measure of part definition information. Society of Automotive Engineers 2001. [9] Curran R, Raghunathan S & Price M. Review of aerospace engineering cost modelling: The genetic causal approach. Progress in Aerospace Sciences 2004; 40: 487-534. [10] Imagine-That-Inc. ExtendSim User Guide: Imagine That Inc, 2007. [11] Jung J-Y. Manufacturing cost estimation for machined parts based on manufacturing features. Journal of Intelligent Manufacturing 2002; 13: 227-238. [12] Kaufmann M. Cost/Weight Optimization of Aircraft Structures. Licentiate of Technology Thesis, KTH Stockholm, 2008. [13] Niazi A, Dai JS, Balabani S & Seneviratne L. Product cost estimation: Technique classification and methodology review. Journal of Manufacturing Science and Engineering 2006; 128: 563-575. [14] Potter J. The Effectiveness and Efficiency of Discrete-Event Simulation for Designing Manufacturing Systems. Doctor of Engineering Thesis, Cranfield University, 2000. [15] Roy R, Kelvesjo S, Forsberg S & Rush C. Quantitative and qualitative cost estimating for engineering design. Journal of Engineering Design 2001; 12: 147-162. [16] Scanlan J, Rao A, Bru C, Hale P & Marsh R. The DATUM project: a cost estimating environment for the support of aerospace design decision making. Journal of Aircraft 2005; 43: 1022-1028. [17] Shehab EM & Abdalla HS. An intelligent knowledge-based system for product cost modelling. International Journal of Advanced Manufacturing Technology 2002; 19: 49-65. [18] Tammineni SV. Designer Driven Cost Modelling. Doctor of Philosophy Thesis, University of Southampton, 2007. [19] Tammineni SV, Rao AR, Scanlan JP, Keane AJ & Reed PAS. A Hybrid Knowledge Based System for Cost Modelling applied to Aircraft Gas Turbine Design. University of Southampton, 2007. [20] Tirovolis NL & Serghides VC. Unit Cost Estimation Methodology for Commercial Aircraft. Journal of Aircraft 2005; 42: 1377-1386. [21] Weustink IF, Brinke E, Streppel AH & Kals HJJ. A generic framework for cost estimation and cost control in product design. Journal of Materials Processing Technology 2000; 103: 141-148.
Integrated Wing
A Generic Life Cycle Cost Modeling Approach for Aircraft System Yuchun Xua1, Jian Wanga, Xincai Tana, Ricky Currana, Srinivasan Raghunathana, John Dohertyb and Dave Gorec
a Centre of Excellence for Integrated Aircraft Technologies, School of Mechanical and Aerospace Engineering, Queen's University Belfast, Ashby Building, Stranmillis Road, Belfast, BT9 5AH, Northern Ireland, UK. b QinetiQ, Cody Technology Park, Ively Road, Farnborough, Hants, GU14 0LX, England, UK. c Airbus UK, New Filton House, Filton, Bristol, BS99 7AR, England, UK. Abstract: Life cycle cost (LCC) is truly representative of the total cost of an aircraft through its life cycle, and is commonly used for estimating the cost-effectiveness of an aircraft design. To enable LCC estimation at an early stage, an LCC model is being developed for the aircraft wing under the umbrella of the Integrated Wing Advanced Technology Validation Programme in the United Kingdom. Object-oriented and hierarchical approaches are used for the LCC modelling, and the cost estimation is based on a bottom-up approach. The developed LCC model is generic, and can be customized and applied for estimating the costs of other aircraft systems. Keywords: Life cycle cost, Cost engineering, Object-oriented approach
1 Introduction The life cycle cost (LCC) of a product includes all of the costs which are incurred during its life cycle, from the research & development phase through to eventual retirement and disposal [1]. Within the aerospace industry, LCC is not only used by airline operators for making acquisition decisions, but is also increasingly used by aircraft manufacturers to assess the competitiveness of an aircraft's design [2]. Life Cycle Cost Analysis (LCCA) is an evaluation technique applicable to aircraft investment decisions. It enables the total cost comparison of competing design alternatives, each of which is appropriate for implementation of an aircraft project. All of the relevant costs that occur throughout the life of an alternative, not simply the original expenditures, are included. Specifically, when it has been decided that a new aircraft program will be implemented, LCCA will
1 Corresponding Author. E-mail: [email protected]; [email protected]
assist in determining the best and lowest cost way to accomplish the program [3]. In the area of engineering costing within LCCA, some work has been conducted: Curran et al. [2] reviewed the state of the art of aerospace engineering cost modelling and proposed a genetic causal cost modelling approach; Eres et al. [4] developed a hierarchical life cycle cost model for a set of aero-engine components, which provides a robust and maintainable environment for assessing engine maintenance cost; and Wang et al. [5] developed a structured life-cycle cost estimating methodology for air traffic control Decision Support Tools under development by NASA, using a combination of parametric, analogy and expert approaches. These models are specific to their applications and are not easy to customize for a new application case. Some commercial models also claim the capability of conducting LCC analysis, e.g. Relex Life Cycle Cost [6] and SEER-H [7], but the algorithms in these models are usually not fully open to customers, which constrains their application. Within the aerospace industry, most LCC models use parametric methods, which rely on historical data and correlate the cost with some attributes, e.g. design parameters such as mass and maximum speed. Since the LCC is not directly determined by engineering information in parametric models, the impact of technology on LCC usually cannot be represented. Ongoing research has seen the emergence of 'hybrid' life cycle costing approaches, which normally use some engineering information to supplement the main parametric models [8]. This paper introduces a generic object-oriented life cycle costing approach for aircraft systems, which can be customized for different applications. The framework and modelling process for the aircraft wing system are introduced. The work is under the umbrella of the Integrated Wing program [9]. The genetic causal costing approach has been used for the modelling [2].
2 Background In order to achieve the challenging environmental targets set by the Advisory Council for Aeronautics Research in Europe (ACARE) [10], the Integrated Wing Aerospace Technology Validation Programme (IWATVP) [9] was initiated in the United Kingdom to research, improve and validate aircraft wing design and integration techniques. Normally a number of technologies are likely to be available for an aircraft design to meet the targets, so comparison and trade-off studies between different technologies should be conducted. A tool called RETIVO is used to conduct the technology evaluation, and life cycle cost is an important part of RETIVO. Under the umbrella of the IWATVP program, an LCC model is being developed. Several approaches are taken throughout the development process to ensure that a generic model is developed.
3 LCC modeling 3.1 Scope As a subsystem of the aircraft, the wing system also goes through research and design, manufacturing, operation & maintenance, and retirement & disposal phases [11, 12]. Therefore, the life cycle cost can be expressed as:

LCC = C_RDTE + C_MAN + C_OPS + C_DIS    (1)

where LCC is the life cycle cost of the aircraft wing, C_RDTE is the research, design, testing and evaluation cost, C_MAN is the manufacturing cost of the wing, C_OPS is the operation cost associated with the wing, and C_DIS is the disposal cost of the wing. The paper mainly considers these costs from the manufacturer's and airline operator's points of view.
3.2 Architecture The LCC model is designed as a set of functional modules, i.e. it has been modularized as a collection of functional modules, for the following reasons and aims: • to enable each part of the modelling to be delivered separately and to make its progress clear; • to allow different levels of models to be integrated, depending on the level of data available. The model architecture is shown in Figure 1. The functional modules include the Framework module, Initial Sizing Tool module, Design cost module, Manufacturing cost module, Operation cost module, and Disposal cost module. The core part of the LCC model is the Framework module, which communicates externally with RETIVO (which conducts multi-disciplinary optimization of technologies at a higher level) to collect aircraft and wing design parameters, as well as aircraft operating condition parameters. The Framework module passes the aircraft and wing design parameters to the Initial Sizing Tool module, which produces the wing part sizes and attributes. The part attributes are distributed by the Framework module to the appropriate functional modules as required, e.g. to the Manufacturing cost module (including the Materials, Fabricating and Assembling modules) and the Operation cost module (maintenance and flying cost modules). The Framework module finally gathers the costs estimated by each sub-module and outputs the cost breakdown as required, e.g. the LCC and DOC (Direct Operating Cost). The LCC model is being developed in the Microsoft® Excel environment, and all the communication is controlled and achieved by macros.
Figure 1. The architecture of the LCC model
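A generic sketch of the orchestration performed by the Framework module is given below. The real model implements this in Microsoft Excel with macros; every function body, coefficient and parameter name in this Python stand-in is an invented placeholder.

```python
# Generic orchestration sketch of the Framework module: collect design
# parameters, size the parts, dispatch attributes to the cost modules and
# gather the breakdown. All numerical relationships are placeholders.
def initial_sizing(design):
    # Returns a crude part list with masses derived from the wing area.
    area = design["wing_area_m2"]
    return [{"part": "skin",  "mass_kg": 14.0 * area},
            {"part": "spars", "mass_kg": 6.0 * area},
            {"part": "ribs",  "mass_kg": 4.0 * area}]

def manufacturing_cost(parts):
    return sum(p["mass_kg"] * 95.0 for p in parts)     # cost per kg, placeholder

def operation_cost(design, years=25, hours_per_year=3750):
    return design["mtow_kg"] * 0.012 * hours_per_year * years

def disposal_cost(parts):
    return sum(p["mass_kg"] * 1.5 for p in parts)

def framework(design):
    parts = initial_sizing(design)
    breakdown = {
        "RDTE": 0.08 * design["mtow_kg"] * 1000,        # placeholder
        "manufacturing": manufacturing_cost(parts),
        "operation": operation_cost(design),
        "disposal": disposal_cost(parts),
    }
    breakdown["LCC"] = sum(breakdown.values())
    return breakdown

print(framework({"mtow_kg": 72000, "wing_area_m2": 130}))
```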
3.3 Bottom-up approach The LCC model is developed using a bottom-up approach. According to the genetic causal approach, cost is caused either by components or by activities, or by both. Correspondingly, the LCC is estimated by the following procedure: • analyse all the costs associated with each component; • analyse all the component costs within each phase; • analyse the costs throughout the life cycle.
This procedure is expressed in Figure 2, where the "overall volume" of the cube represents the LCC of a product, in this case the LCC of an aircraft wing.
Figure 2. Life Cycle Cost composite: a cube spanning the work breakdown structure (parts), the activities, and the life cycle phases (R&D, manufacturing, operating, disposal)
3.4 Object-oriented Approach The LCC model is being developed using an object-oriented approach. The model is founded on a series of objects, which are modularized and have specific attributes, operations and boundary conditions. Each object is described by attributes, e.g. geometric and material information. An object can be inherited, and it encapsulates the attributes and relationships to other objects throughout the life cycle of the aircraft wing. The operations are functions of the attributes and input parameters; they deliver the outputs of interest to the user. In the life cycle cost modelling of an aircraft wing, all components and activities are assigned to a suitable object by switching the appropriate describing attributes on or off. New objects can be evolved through the definition of new attributes and operations. An object may have more attributes and operations than are needed for a specific component/activity, but these can be "switched off" if they are not required; on the other hand, new customized attributes and operations can be added. In addition to components and activities, a cost item/category of a life cycle phase can also be represented by an object. All of the objects are organized in a hierarchical structure. The object-oriented model allows users to quickly and easily develop a new cost model for other aircraft systems. 3.5 Hierarchy The LCC model is developed upon hierarchical structures, e.g. a hierarchical work breakdown structure and cost breakdown structure. This approach provides some advantages: • to enable cost estimation at all development stages;
• to provide better fidelity of cost estimation. At the early design stage, only a few design parameters are available for the wing design, so the Sizing Tool is used to provide the WBS. As the design process proceeds, more system/part design information becomes available, and the updated system/part information is then used for cost estimation, which consequently provides better fidelity. Similar to the application of a hierarchical work breakdown structure, a hierarchical cost breakdown structure is also used in the cost estimation, depending on the requirements and the information available. For example, manufacturing cost includes recurring cost and non-recurring cost (costs of tooling, jigs, etc.), and recurring cost is more process related. Depending on the accuracy requirement, non-recurring cost may or may not be taken into account in the cost estimation. Once the level at which to conduct cost estimation is selected, the standard time and unit cost for parts/systems at the bottom level need to be assumed. Assumptions can be made either by using industrial standards, or by using parametric models developed from historical data.
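The object-oriented, hierarchical structure described in sections 3.4 and 3.5 can be sketched as follows, assuming a generic cost object whose attributes can be switched on or off and whose cost rolls up recursively through its children; the attribute names, rates and example parts are invented.

```python
# Sketch of a generic cost object: switchable attributes, a hierarchy of
# child objects, and a recursive roll-up. Rates and parts are placeholders.
class CostObject:
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = attributes              # e.g. mass_kg, labour_h
        self.enabled = dict.fromkeys(attributes, True)
        self.children = []

    def switch_off(self, attribute):
        self.enabled[attribute] = False           # attribute kept but ignored

    def add(self, child):
        self.children.append(child)
        return child

    def cost(self, rates):
        own = sum(value * rates[attr]
                  for attr, value in self.attributes.items()
                  if self.enabled[attr])
        return own + sum(c.cost(rates) for c in self.children)

rates = {"mass_kg": 60.0, "labour_h": 45.0, "tooling": 1.0}

wing = CostObject("wing")
wing.add(CostObject("skin panel", mass_kg=850, labour_h=300, tooling=25000))
spar = wing.add(CostObject("front spar", mass_kg=420, labour_h=180, tooling=40000))
spar.switch_off("tooling")                        # e.g. exclude a non-recurring cost

print(f"wing cost estimate: {wing.cost(rates):,.0f}")
```

Switching off the tooling attribute above mimics excluding a non-recurring cost from the estimate, in the spirit of the selectable cost breakdown structure discussed in section 3.5.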
4 Case study A number of case studies have been carried out. As an example, three different aircraft wings were chosen and their life cycle costs estimated. The main design parameters of these aircraft/wings, together with the production and operation parameters, are summarized in Table 1. The DOC and LCC are output for comparing the costs of these three wings; the result is shown in Figure 3. For reasons of confidentiality of the data used in the LCC model, the output results are normalised, with the DOC and LCC of aircraft wing 1 set to 1 unit.

Table 1. Aircraft parameters used in LCC estimation

Main relevant parameters            | Aircraft Wing 1 | Aircraft Wing 2 | Aircraft Wing 3
Maximum take off weight (kg)        | 72000           | 220000          | 320000
Mass of wing with power plants (kg) | 5900            | 20000           | 18000
Wing span (m)                       | 36              | 58              | 76
Wing area (m2)                      | 130             | 360             | 623
Operational life (years)            | 25              | 25              | 25
Aircraft utilization (hours)        | 3750            | 3750            | 3750
Production quantity                 | 300             | 300             | 300
Number of flight test aircraft      | 5               | 5               | 5
Figure 3. Comparison of DOC and LCC for different aircraft wings (normalized cost, with Wing 1 set to 1)
5 Conclusion A generic LCC model for the aircraft wing has been developed based on a bottom-up approach, and development is ongoing. The model is object-oriented and hierarchical, and can be customized for other applications, e.g. to estimate the costs of other aircraft systems. Because the model is developed using a bottom-up approach, it allows the impact of technology on LCC to be validated.
6 Acknowledgments The work described in this paper has been carried out with the financial assistance of the Department for Business, Enterprise and Regulatory Reform (DBERR), under the Integrated Wing Aerospace Technology Validation Programme (IWATVP). The authors are very grateful to Michael Smith and Stuart Alexander of Airbus, and to Darren White, Andrew Eldridge and Paul Ellsmore of QinetiQ, for their support and discussion of the work.
7 References [1]
Asiedu Y, Gu P. Product life cycle cost analysis: state of the art review. Int J Prod Res 1998;36(4): 883–908
[2] Curran R, Raghunathan S, Price M. Review of Aerospace Engineering Cost Modelling: The Genetic Causal Approach. Progress in Aerospace Sciences, 2004, 40: 487-534. [3] U.S. Department of Transportation. Life-Cycle Cost Analysis Primer, Aug. 2002. [4] Eres M H, Scanlan J P. A Hierarchical Life Cycle Cost Model for a Set of Aero-Engine Components. AIAA 2007-7705. [5] Wang J J, Datta K. A Life-Cycle Cost Estimating Methodology for NASA-Developed Air Traffic Control Decision Support Tools. NASA/CR-2002-211395. [6] http://www.relex.com/products/lcc.asp, accessed on 30 Nov 2007. [7] http://www.galorath.com/news_PR-990125.html, accessed on 30 Nov 2007. [8] Marx W J, Mavris D N, Schrage D P. A Hierarchical Aircraft Life Cycle Cost Analysis Model. AIAA-95-3861. [9] Smith S. "'Integrated Wing' Aerospace Technology Validation Programme". AIAA 2007-7892, 7th AIAA Aviation Technology, Integration and Operations Conference (ATIO), Belfast, September 2007. [10] Group of Personalities. European Aeronautics: A Vision for 2020, January 2001. [11] Tan X, Xu Y, Early J, Wang J, Curran R, Raghunathan S. A Framework for Systematically Estimating Life Cycle Cost for an Integrated Wing. AIAA-2007-7809, 7th AIAA Aviation Technology, Integration and Operations Conference (ATIO), Belfast, September 2007. [12] Xu Y, Wang J, Tan X, Curran R, Raghunathan S, Doherty J, Gore D. Life Cycle Cost Modeling for Aircraft Wing Using Object-Oriented Systems Engineering Approach. AIAA 2008-1123.
Cost-Efficient Materials in Aerospace: Composite vs Aluminium X Tana,1, J Wanga, Y Xua, R Currana, S Raghunathana, D Goreb and J Dohertyc
a Centre of Excellence for Integrated Aircraft Technology, School of Mechanical and Aerospace Engineering, Queen's University Belfast, Ashby Building, Stranmillis Road, Belfast, Northern Ireland, BT9 5AH, UK. b Airbus UK, New Filton House, Filton, Bristol, BS99 7AR, UK. c QinetiQ, Cody Technology Park, Ively Road, Farnborough, Hants, GU14 0LX, England, UK. Abstract. Aluminium alloys are a series of traditional materials used in the aerospace industry, and composite materials are increasingly being used to replace some aluminium alloys in airframe structures. To select a cost-efficient material in aerospace design, a framework for the systematic assessment of composites and aluminium is suggested here. Through evaluation of the three Ps, which represent the prices, properties and processing of a material, a life cycle assessment of aluminium and composite can be made. By comparing and trading off the listed baselines of the three Ps, an economic, rational, available and useful material can be logically selected. Keywords. Aluminium, aerospace, composite, cost estimate, material selection
1 Introduction To meet the targets of the European Aerospace Vision (ACARE) 2020 [1], aircraft design should pursue "greener design" [2]. By 2020, some main targets are to be met: a reduction in nitrogen oxide emissions by 80%, in carbon dioxide emissions by 50%, and in noise by 10 dB, together with a five-fold increase in safety and a significant reduction in cost [1]. In order to reduce the weight of an aircraft, and in turn to reduce drag and fuel burn, composite is increasingly used in airframe structures, although aluminium is still a key material in the aerospace industry. As reviewed by Bailey [3], in the age of the Wright brothers a natural composite, wood, was the main material used for aircraft/flyers. Since the 1930s, aluminium alloys have been used for aircraft structures. As the International Aluminium Institute [4] has reviewed, aluminium was not produced on an industrial scale until 1886, and in 1990 annual output of aluminium was one thousand tonnes; in July 2005, however, daily average primary aluminium output reached 64,800 tonnes. Aluminium
Corresponding Author Email : [email protected]
alloys have developed into a series of mature materials in aerospace engineering, and a number of standards and rules have been accepted for their manufacture and application. As reviewed by Niu [5], in a typical aircraft of the 1990s aluminium accounted for about 80% of the structural materials. Great efforts have been made to use composites for aircraft structures. Synthetic composite materials have been investigated since the 1930s in the UK and were applied to fighter aircraft in 1943, as Bailey [3] reviewed. Since then, research and development, manufacturing and application of composites have continued. For example, Peterson [6] reported an investigation into structural composite materials for primary aerospace vehicle structures. Niu [7] summarised in his book the types of composites and the methods of tooling, manufacturing, testing, repair and application of typical composites. Hinrichsen and Bautista [8] reported that Airbus has been investigating and applying composites in its aircraft. As Middleton et al. [9] reviewed, by the 1990s composite materials were extensively applied in the structures of aircraft such as the A320, B767, B757, C-17, B-18, V-22, AN-124, F18 and AV-8B. Delft University of Technology in the Netherlands developed a composite of aluminium-plastic laminates named GLARE (GLAss-REinforced). As Vlot et al. [10] reviewed, GLARE is composed of several very thin layers of aluminium interspersed with layers of glass fibre, bonded together with a matrix such as epoxy. Baker et al. [11] summarised recently developed techniques for various composites applied in aircraft structures, including the types and properties of composites, and their manufacture, testing, evaluation and repair. There are a number of references in the literature that describe how to select a material (e.g. Ashby [12]), what the properties of materials look like (e.g. U.S. Department of Transportation [13]), how to manufacture a material/product (e.g. Kalpakjian and Schmid [14]), and how to estimate the costs of manufacturing processes (e.g. Ehrlenspiel [15]). Compared to the evaluation of a single process, a single property or a product costing, the implementation of a life cycle assessment (LCA) for a material is expensive, challenging and time consuming, and few reports on the LCA of composite and aluminium have been found in the literature. Compared with aluminium alloys, composite materials can be made stronger or lighter, but with current technology they are more expensive to make, more difficult to inspect and test, trickier to repair and maintain, and harder to recycle and reuse. A material has its unique characteristics and properties, its suitable manufacturing processes and methods, and its costs, which depend on its availability and processing complexity. The relative worth of these materials must be assessed when selecting materials for engineering applications in order to achieve a "greener design". Thus a systematic assessment of both composite and metallic materials should be made to properly evaluate their advantages and disadvantages. In this paper, a framework for the systematic assessment of composites and aluminium alloys is developed. After the brief review of composites and aluminium alloys in this section, the typical processes in the life cycles of both composite and aluminium are described. Following a discussion of the criteria for selection of components, a comparison and trade-off of known materials is suggested. Finally, some conclusions are drawn.
2 Factors Relative to Cost For a material, the factors relating to cost are numerous. Basically, the parameters shown in Fig. 1 should be taken into account when selecting a material in engineering design, including aerospace design. The P cube has three P axes: the first P is for prices, the second P is for properties, and the third P is for processing. Through price prediction or cost estimation, a product can be made more cost-efficient and affordable to the user/buyer. Through analysis of the material properties, the product can be made safe, durable, fault-free and aesthetic. Through comparison and analysis of different processes, it can be understood whether or not the processing of the product/part is economic, convenient, local, environmentally friendly, timely, recyclable, maintainable, etc.
Figure 1. Basic factors relative to cost for a material
2.1 Material Price Material price is a measure of value integrated from a number of factors. In terms of material properties, for a given component of a certain shape and dimension, the more the properties are modified or improved, the higher the price of the product will be. In terms of processing, for a given component with a certain amount of material, the more complicated the shape, or the more manufacturing processes needed, the higher the price of the product will also be. Because of a raw material's history, the unit cost of the raw material depends not only on the material itself, which has the specific properties required to meet the user's demand, but also on its shape, dimension and condition, which are obtained by the appropriate processing (possibly a combination of a series of processes).
The economic aspects of material selection are as important as the technological evaluation of the properties/characteristics of materials and the engineering considerations of the processing of materials. The manufacturing cost consists of the costs of materials input, energy consumed, labour, overhead, and hardware and software applied, as well as fixed and capital costs. Typically, manufacturing costs represent about 40-50% of a product's selling price. Facing international competition, a company may minimize its manufacturing costs by lean manufacturing and agile manufacturing. Lean manufacturing is the production of goods using less of everything compared to mass production: less human effort, less manufacturing space, less investment in tools, and less engineering time to develop a new product. Agile manufacturing is a term applied to an organization that has created the processes, tools and training to enable it to respond quickly to customer needs and market changes while still controlling costs and quality. Reducing the costs of processing a product and improving the product properties are always challenges for a manufacturer. 2.2 Material Property A material has its unique properties, which directly affect its performance and service; the material's function in service depends on these properties. Mechanical properties are very important in the selection of materials. Typical mechanical properties are strength, toughness, ductility, hardness, elasticity, fatigue resistance, creep resistance and corrosion resistance. The mechanical properties of materials can be significantly modified by various methods, such as adjusting the chemical composition, heat treatment, or mechanical processing (e.g. stretching). The strength-to-weight ratio and the stiffness-to-weight ratio are particularly important in the design of aerospace and aircraft structures. Physical properties of the materials also affect the performance of the product; example physical properties are density, specific heat, thermal expansion, thermal conductivity, melting point, glass transition temperature, and electrical and magnetic properties. Chemical properties are sometimes a serious consideration for some products; oxidation, corrosion, general degradation of properties, flammability and toxicity are typical chemical properties. The manufacturing properties of materials determine how a process can be selected and applied. 2.3 Material Processing Various costs are involved in processing a material by different methods. Each method has its own conditions, requirements and costs: some methods require expensive machinery, others require extensive labour. For a given material, a number of manufacturing processes may be usable, although the resultant costs will differ. Example processing methods for materials include casting, forming and shaping, machining, joining, fabricating, and finishing. For casting, there are expendable mold casting and permanent mold casting. In forming and shaping,
there are rolling, forging, extrusion, drawing, sheet forming, powder metallurgy, and molding. For machining, there are turning; boring; drilling; milling; planing; shaping; broaching; grinding; ultrasonic machining; chemical, electrical, and electrochemical machining; and high-energy beam machining. Processes of joining can be: welding, brazing, soldering, diffusion bonding, adhesive bonding, and mechanical joining. Understanding the characteristics of material processing, such as castability, formability, machinability, and weldability of materials, is essential for the selection of processing. A specific material always has its particular advantages and limitations in cost, properties, or processing. In the aircraft and aerospace industries, trading off and exploiting the three Ps (price, property and processing) is always considered carefully on the basis of rational analyses, such as life cycle assessment.
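As a hedged illustration of such a rational trade-off (not a method used in this paper), the sketch below scores two candidate materials against the three Ps using hypothetical normalised scores and weights; in practice, life cycle assessment would supply the underlying data.

```python
# Hypothetical three-P trade-off sketch: all scores and weights are illustrative only.
# Each score is normalised to [0, 1], where higher is better for the application.
candidates = {
    "aluminium alloy": {"price": 0.9, "property": 0.6, "processing": 0.8},
    "carbon fibre composite": {"price": 0.4, "property": 0.9, "processing": 0.5},
}

# Assumed relative importance of price, property and processing.
weights = {"price": 0.4, "property": 0.4, "processing": 0.2}

def weighted_score(scores: dict, weights: dict) -> float:
    """Simple weighted sum over the three Ps."""
    return sum(weights[p] * scores[p] for p in weights)

for name, scores in candidates.items():
    print(f"{name}: {weighted_score(scores, weights):.2f}")
```

Changing the weights shifts the balance between cost efficiency and performance, which is exactly the trade-off the P-cube is intended to make explicit.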
3 Composite vs Aluminium
In order to select cost-efficient materials for the next generation of aircraft, in-house tools are being developed for assessing the life cycles of aircraft and their typical materials. As a case study, an example assessment of aluminium and composite for a wing box structure is made here. Table 1 shows typical parameters of the example aircraft.

Table 1 Typical parameters of the example aircraft
Design parameter                              Value
Maximum take off weight, kg                   75000
Mass of wing mounted power plants, kg         6000
Fuselage diameter, m                          4
Fuselage length, m                            40
Wing span, m                                  40
Wing root thickness / chord ratio             0.14
Wing root chord, m                            7
Wing tip chord, m                             2
3.1 Material Prices
Table 2 shows a comparison of estimated material parameters for aluminium (Al) and composite used in an example aircraft wing under two different material selections in design.

Table 2 Comparison of estimated material parameters for Al and composite used in an example aircraft wing, for two different material selections in design (M1 and M2)
Design method   Raw material   Number of parts   Weight, kg   Estimated cost, $
M1              Al             132               29887.41     288790.11
                Composite      34                114.22       10861.63
                Total                            30001.63     299651.74
M2              Al             117               12959.35     91857.08
                Composite      42                1873.30      178141.63
                Total                            14832.65     269998.71
It is assumed that the average price of aluminium alloys is 6 USD/kg, and the average price of composites is 80 USD/kg.
3.2 Material Properties
The aluminium alloys used in aerospace are mainly aluminium-copper (2XXX series) and aluminium-zinc (7XXX series) alloys. Aluminium is chosen by designers because its cost is relatively low, its weight is low, it has fairly high strength, and it is easy to fabricate, so its manufacturing cost is relatively low [16]. Most composite materials used in aerospace are fabricated from carbon fibre reinforced epoxy resins. The significant advantages of composites are low density and good corrosion resistance. For material properties and processes there are a number of references in the literature, for example the U.S. Department of Defense Military Handbooks [17].
3.3 Material Processes
Fig. 2 depicts typical processes in the whole life cycle of (a) composites and (b) aluminium alloys. Typical processes for aluminium alloys are: bauxite mining → anode production → alumina refining → electrolysis → primary casting → shaping/forming (including billet casting, rolling, and extrusion) → manufacturing → performance → recycling and disposal. A cost estimate for these typical processes can be found in our previous work [18]. For composite processing, the three main separate processes are:
- Carbon fibre processing: exploration/exploitation → petroleum refining → acrylonitrile synthesis → polymerisation → spinning → stabilisation → carbonisation → surface treatment → carbon fibre packaging.
- Epoxy resin processing: exploration/exploitation → petroleum refining → epichlorohydrin → bisphenol-A → resin synthesis → storage.
- Mould processing: material selection → tooling → mould fabrication.
Figure 2. Typical processes in the life cycle of materials applied in aerospace engineering: (a) carbon fibre composite, and (b) aluminium alloys.
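As a minimal sketch of how the assumed unit prices enter the comparison (illustrative only; the estimated costs in Table 2 clearly also include processing, parts count and assembly effort beyond raw material), the raw-material contribution of each option can be computed directly from the weights in Table 2:

```python
# Raw-material cost contribution only (illustrative sketch, not the full estimate).
# Unit prices as assumed in Section 3.1; weights taken from Table 2.
unit_price = {"Al": 6.0, "Composite": 80.0}  # USD/kg

weights_kg = {
    ("M1", "Al"): 29887.41, ("M1", "Composite"): 114.22,
    ("M2", "Al"): 12959.35, ("M2", "Composite"): 1873.30,
}

for (design, material), w in weights_kg.items():
    raw_cost = w * unit_price[material]
    print(f"{design} {material}: raw material ≈ {raw_cost:,.2f} USD")
# The difference between these figures and the estimated costs in Table 2
# would be attributable to the processing stages sketched in Figure 2.
```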
4 Conclusions
A framework for the assessment of material selection has been developed, and an example evaluation of both composite and aluminium has been given. To evaluate a material, a comprehensive life cycle assessment should be made, from exploitation through manufacturing and service/operation to disposal. The P cube can be a tool for the systematic assessment of a material. Through analysis of product prices/costs, it can support cost efficiency in each phase of the whole life cycle. Through evaluation of material properties, it can help make a product fault-free, ensuring safety, airworthiness and durability. Through examination of processing, it can identify economic and effective methods to manufacture a product, to test and evaluate it, to repair and maintain it, and to dispose of the aged or damaged product in an environmentally friendly way.
5 Acknowledgments
The work described in this paper has been carried out with the financial assistance of the Department for Business, Enterprise and Regulatory Reform (DBERR), under the Integrated Wing Aerospace Technology Validation Programme
(IWATVP). The authors are very grateful to Michael Smith and Stuart Alexander of Airbus, and Darren White of QinetiQ, for support and discussion of the work.
6 References
[1] Advisory Council for Aeronautics Research in Europe, Strategic Research Agenda, Vol.1, Vol.2, and Executive Summary, October 2004. URL: http://www.acare4europe.org/html/background.shtml [cited 12 September 2007].
[2] The Science and Technology Sub-Group, Air Travel – Greener by Design, Mitigating the Environmental Impact of Aviation: Opportunities and Priorities, Royal Aeronautical Society, UK, July 2005.
[3] Bailey, J.E., Chapter 1: Origins of Composite Materials, in Composite Materials in Aircraft Structures (Ed. D.H. Middleton), Longman Scientific and Technical, Essex, UK, 1990, pp.1-8.
[4] International Aluminium Institute, The Aluminium Industry's Sustainable Development Report, 2007.
[5] Niu, M.C.Y., Airframe Structural Design: Practical Design Information and Data on Aircraft Structures, Second Edition, Hong Kong Conmilit Press Ltd., 1999.
[6] Peterson, G.P., Advanced Composites for Structures, Journal of Aircraft, Vol.3, No.5, 1966, pp.426-430.
[7] Niu, M.C.Y., Composite Airframe Structures: Practical Design Information and Data, Hong Kong Conmilit Press Ltd., 1992.
[8] Hinrichsen and Bautista, The Challenge of Reducing both Airframe Weight and Manufacturing Cost, Air and Space Europe, Vol.3, No.3/4, pp.119-121.
[9] Middleton, D.H. (Ed.), Composite Materials in Aircraft Structures, Longman Scientific and Technical, Essex, UK, 1990.
[10] Vlot, A., and Gunnink, J.W. (Eds.), Fibre Metal Laminates: An Introduction, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2001.
[11] Baker, A., Dutton, S., and Kelly, D., Composite Materials for Aircraft Structures (2nd Edition), American Institute of Aeronautics and Astronautics, Virginia, USA, 2004.
[12] Ashby, M.F., Materials Selection in Mechanical Design, Second Edition, Butterworth-Heinemann, Oxford, 1999.
[13] U.S. Department of Transportation, Metallic Materials Properties Development and Standardization (MMPDS), Replacement Document for MIL-HDBK-5, Scientific Report, DOT/FAA/AR-MMPDS-01, Federal Aviation Administration, January 2003.
[14] Kalpakjian, S., and Schmid, S.R., Manufacturing Processes for Engineering Materials, Fourth Edition, Pearson Education International, N.J., USA, 2003.
[15] Ehrlenspiel, K., Kiewert, A., and Lindemann, U., Cost-Efficient Design, Springer, Berlin, 2007.
[16] Campbell, F.C., Manufacturing Technology for Aerospace Structural Materials, Elsevier, Amsterdam, The Netherlands, 2006.
[17] U.S. Department of Defense, Military Handbooks - MIL-HDBK-17: Composite Materials Handbooks, 2002. Volume 1 - Polymer Matrix Composites Guidelines for Characterization of Structural Materials; Volume 2 - Polymer Matrix Composites Materials Properties; Volume 3 - Polymer Matrix Composites Materials Usage, Design, and Analysis; Volume 4 - Metal Matrix Composites; Volume 5 - Ceramic Matrix Composites.
[18] Tan, X., Wang, J., Xu, Y., Early, J., Raghunathan, S., Gore, D., and Doherty, J., Costing of Aluminium Process for Life Cycle, 46th AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada, January 2008, AIAA-2008-1123.
A Multi-Fidelity Approach for Supporting Early Aircraft Design Decisions
John J. Doherty a,1, Stephen R. H. Dean b, Paul Ellsmore b and Andrew Eldridge b
a Technical Fellow, QinetiQ Ltd, Farnborough, UK.
b Aerospace Consultancy, QinetiQ Ltd, Farnborough, UK.
Abstract: The QinetiQ Aerospace Consultancy group has been actively developing and applying process automation and optimisation capabilities in support of air vehicle assessment and design for over 20 years. These capabilities have evolved greatly during this timeframe from their initial origins as research activities into mature capabilities for underpinning decision making in both civil and military air vehicle projects. In parallel, the same generic approaches have also found usage in weapons, maritime and motorsport design. In recent years effort has focussed on enhancing a number of different, but complementary, capabilities at QinetiQ, each of which has different advantages and disadvantages, but which together better address the needs of air vehicle assessment and design. These capabilities are linked by a common requirement to assess widely differing characteristics concurrently, in order to model the consequences of design decisions, such as technology and system choices, in terms of the overall impact on an air vehicle project. This paper describes these tools in the context of their use within the Integrated Wing project.

Keywords: Aircraft Concept, Design Space, Requirements Selection, Multi-disciplinary Design, Optimisation.
1 Introduction
Within the aerospace industry a typical air vehicle design project can be broadly described by a number of different phases: feasibility, conceptual design, preliminary design, detailed design and manufacturing design. The feasibility phase identifies the customer need and the manufacturer's potential top level business offering and hence establishes the overall business case for the air vehicle design. The conceptual design phase, supported by initial preliminary design activities, evolves potential air vehicle concepts and associated performance datasets, in order to down-select the final air vehicle concept. During the remaining
1 Technical Fellow, Aerospace Consultancy, X80 Building, QinetiQ Ltd, Cody Technology Park, Ively Road, Farnborough, Hants, GU14 0LX, UK; E-mail: [email protected].
phases the definition of this selected air vehicle concept is significantly refined in order to provide the target for subsequent production. It is widely recognized that this conventional aerospace design process is non-ideal. In particular major decisions must frequently be made in each phase based upon information which is insufficiently detailed and immature. More detailed and mature information can potentially be generated using the analysis and design toolsets normally employed in subsequent phases, but generally this cannot be achieved within the timescales dictated by earlier phases and, in addition, often cannot be generated for a sufficient number of possible design alternatives. This issue is particularly constraining at the early stages of design when major design freedoms/decisions associated with the choice of the air vehicle concept and its associated technologies and systems are still open, but these must then be narrowed/down-selected without having access to all the desired supporting information. In order to reduce risk during the design process, the overall business case must be regularly re-evaluated as further information becomes available, in order to check the continued success of the overall project. Inevitably this information often comes too late to fundamentally rethink early design decisions and final compromise solutions may often result. This conventional aerospace design process is facing even greater challenges for addressing the design of future air vehicles. Operational performance drivers for both military and civil air vehicles are becoming increasingly demanding, potentially leading to the future adoption of more novel air vehicles, employing novel technologies and systems. For military air vehicles the thrust is towards mission flexibility, improved survivability and use of unmanned air vehicles. Stringent environmental targets for civil aircraft, such as the ACARE 2020 Vision [1] for 50% reduction in CO2 and 80% reduction in NOX, mean that novel aircraft configurations, employing novel technologies such as flow control and extensive use of composites, must be considered. The addition of these novel factors into the existing non-ideal design process means alternative approaches for supporting design decisions are required, in order to predict and hence decide upon the best combination of aircraft concept, technologies and systems.
2 Integrated Wing Programme
The Integrated Wing Aerospace Technology Validation Programme (IWATVP) [2] brings together industry and researchers within a UK national project funded jointly by the UK Department for Business, Enterprise & Regulatory Reform (DBERR) and the industrial partners. The overall aim of the project is to validate technologies which can lead to a step change in performance for future aircraft, in order to address challenging future operational performance requirements best characterised by the ACARE 2020 Vision. As part of the IWATVP project QinetiQ leads a Work Package focused on Requirements Integration and Optimisation (WPI), shown in Figure 1. A key objective for QinetiQ within WPI is to research and demonstrate the potential for alternative analysis and design approaches to support decision making in the early stages of design. Capabilities for process automation and optimisation, operating at
different levels of modeling fidelity, are reasonably well established at QinetiQ and have been used for a number of years for other air vehicle design studies. Within WPI the potential for these alternative approaches will be demonstrated for supporting design decisions in the context of both conventional and novel civil aircraft concepts, encompassing associated conventional and novel technologies and system choices. Two approaches for supporting design decisions are being investigated within WPI. The first addresses the selection of technologies and systems in the context of broad design space exploration. The second approach enables a more detailed investigation of the impact of technology and system choices for specific aircraft concept types using a multi-disciplinary design optimisation approach. These capabilities, together with a discussion of how they would be used to support the design process, are described in the remainder of this paper.
Figure 1. The Integrated Wing Programme Structure
3 Multi-level Support for Design Decisions
In the early stages of aircraft design, the traditional approach to assessing technology or system choices is through designing a baseline aircraft concept, followed by parametric modeling of the effect of different choices of technologies or systems, in order to assess the potential benefits, costs and risks associated with each design choice. Although this approach can provide a reasonable level of modeling fidelity and hence an understanding of the main trades and payoffs of design choices for the aircraft concept, it reduces the applicability of the assessment to a small area of the concept design space.
An alternative approach is to use a more generalized concept modeling approach, which requires less detail of specific geometry features, in order to avoid constraining the study to a specific region of the concept design space. In particular aircraft concepts are defined by parameters that primarily represent the desired performance characteristics, rather than dictating specific geometry detail. By avoiding the need to down-select a specific aircraft concept (including assumptions about the mission and what level of technology is used), a study does not need to be constrained by existing concept assumptions or constraints that may limit the applicability of certain technologies. In particular it is very likely that certain technologies or systems will be more beneficial for some aircraft configurations than others. This alternative approach means there is flexibility within the overall study to assess design decisions on the basis of the benefits they provide across many different concepts, rather than the effect for one specific concept. Typically for this type of analysis there may be thousands of concepts that need to be analyzed individually. Each analysis comprises multiple calculations using analysis methods such as aerodynamics, mass estimation, mission modeling etc. The process is further complicated by discrete concept types, such as under wing or aft mounted engines, or a series of options in the mission definition. Performing these analyses manually is not practical and so an automated approach is required. Within IWATVP the proposed solution to these issues is the use of QinetiQ's RETIVO (Requirements Exploration, Technology Impact, and Value Optimisation) capability [3], which is described in more detail later. RETIVO is a software approach that allows the user to carry out assessments of concepts and technologies over a very broad design space. As outlined above, this flexibility is gained by adopting a relatively low level of aircraft geometry fidelity and associated performance modeling. RETIVO can be considered to model the design space in a broad but shallow approach. Favorable combinations of aircraft performance requirements and related technologies/systems choices can be explored using more detailed concept analysis and design capabilities. For example QinetiQ's complementary MDCAD (Multi-Disciplinary Concept Assessment and Design) capability [4], which is also described in more detail later, allows a more detailed concept geometry representation to be investigated and designed. Hence MDCAD can be considered as modeling the design space by a narrow but deep approach. This complementary use of RETIVO and MDCAD provides a systematic basis for both broad and deep studies, with the more detailed studies being used to underpin the broader, less detailed studies. In addition the more detailed MDCAD approach can be used to generate modeling information which can be used directly within broader RETIVO studies. Finally MDCAD can provide more detailed and specific data, such as detailed configuration geometry and structure, which can provide the starting point for the subsequent detailed design phase of an aircraft project.
4 Broad Design Space Exploration (RETIVO)
The general structure of RETIVO is shown in Figure 2 and is based around a flexible open software framework that provides the data flow and structure into which individual tools or modules can be integrated, whether they are simple equations, spreadsheets, program executables, or other bespoke or commercial analysis tools. This approach allows trusted modules, developed by individual disciplines for different purposes and without an original need to interface to other tools, to be integrated and re-used. This reduces the time and cost required to develop a RETIVO application suitable for a particular air vehicle project. Each module is "owned" and developed by a specialist discipline, such that the module is underpinned by a detailed understanding of the relevant subject which has been distilled into a rapid method that is suitable for this level of analysis. This ensures that the modules used in RETIVO, which fundamentally use quite a low level of modeling fidelity, still provide results which are trustworthy and which have been validated against higher fidelity sources of information.
(Figure 2 module boxes: Inputs; Engine Converger; Mission; Geometry & Mass; High-speed Aero; Take-off; Cost; Optimiser; Outputs)
Figure 2. General structure of RETIVO
This approach also allows a RETIVO application to be tailored to a specific use; by careful choice of the most appropriate modules available, an appropriate trade off can be made between broad applicability and fidelity for the specific task at hand. For example, the focus within IWATVP is wing technologies, so a geometry and mass module that contains fuselage sizing and mass estimation
techniques relevant to a conventional civil aircraft is appropriate. Generally, as the scope of a study is expanded, many of the existing modules would still be appropriate, whilst some may require enhancement or replacement. The flexibility in the RETIVO framework allows additional modeling capability to be added simply and quickly. All data output by one module is subsequently available to the other modules. Modules that have a loop dependency are iteratively converged via an inner 'converger'. For its use within IWATVP, this availability of data has allowed additional cost and emissions modules to be added to an original core of performance modules. The core modules allow the performance of the aircraft to be assessed with constraints upon performance attributes such as take-off distance, climb rates, and fuel consumption, whilst performing a defined mission. This core performance data is then used within the additional cost and emissions modules. The data produced by any module can be used as the focus for parametric trade studies, as objective and constraint functions as part of an optimisation process, or simply stored for information. One of the most important factors in the choice or development of a module for use within RETIVO is that it should have a relatively wide range of applicability. One of the main benefits of RETIVO, compared to more specific concept design approaches, is that it avoids overly constraining the aircraft configuration modeling, allowing it to assess trends across a broad design space. The correct modeling of these trends across this broad design space is more important than absolute accuracy for a narrow set of configurations. For example, a study might be undertaken to consider how optimum wing area and engine thrust change with decreasing aircraft mass. If the engine module were accurate for large engines but this accuracy diminished for smaller engines, then as the aircraft mass is reduced, and hence wing area and required thrust generally reduce, the engine may be predicted to be heavier than it should be, leading to misleading trends. An engine module which is comparatively less accurate, but which applies equally well to both large and small engines, will allow relative comparisons to be drawn. This requirement for flexibility can be a major challenge to the specialists dealing with a given module, especially where traditional techniques rely on empirical methods. The wider range of applicability of a more physics based approach, such as computational fluid dynamics (CFD) or finite element analysis (FEA), is preferred, though this is not always possible, or practical, given the computational overhead associated with such modeling. Where a fully generalized physics-based approach is not available, it is essential to be aware of the limitations of each module and the constraints that this places on the concepts being studied. An important step in the application of RETIVO within the Integrated Wing project is to model the benefits and drawbacks of technology and system choices within RETIVO. At its simplest level this will take the form of "Technology A is likely to increase the wing weight by y%, but bring a z% saving in zero-lift drag". In this case, Technology A can be modeled within RETIVO by use of 'technology factors' applied to the wing weight and zero-lift drag calculated by the baseline modules.
If it was required that an aircraft concept, employing Technology A, must achieve the same mission as an aircraft concept that does not incorporate this technology, then RETIVO may resize the concept to enable a like-for-like comparison. This ultimately means that the real impact of Technology A will
extend far beyond merely wing weight and zero-lift drag. More complex technology or system choices may involve adding more detailed modeling within RETIVO in order to better represent the associated impact. For example, within Integrated Wing the geometry and mass module has been extended to model a wing box from first basic principles, in order to better capture the structural aspects of novel planform choices. This level of enhanced modeling is often not required or appropriate within RETIVO and hence the majority of technology and system choices will be captured through imposed technology factors.
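As a hedged illustration of the technology-factor idea (a toy sketch only, not the RETIVO implementation; the baseline "module" models, the factor values and the mission function below are all assumptions), applying multiplicative factors to baseline module outputs and propagating them into a mission-level quantity might look as follows:

```python
# Toy technology-factor sketch (illustrative only; not RETIVO's actual modules).
def wing_weight(wing_area_m2):
    # Hypothetical baseline wing-weight model (kg).
    return 50.0 * wing_area_m2

def zero_lift_drag(wing_area_m2):
    # Hypothetical baseline zero-lift drag area (m^2).
    return 0.02 * wing_area_m2

def mission_fuel(weight_kg, drag_area_m2):
    # Hypothetical fuel required to fly a fixed reference mission (kg).
    return 0.15 * weight_kg + 4000.0 * drag_area_m2

# "Technology A": assumed +3% wing weight, -5% zero-lift drag.
factors = {"wing_weight": 1.03, "zero_lift_drag": 0.95}

def fuel_for_concept(wing_area, apply_technology):
    w = wing_weight(wing_area) * (factors["wing_weight"] if apply_technology else 1.0)
    d = zero_lift_drag(wing_area) * (factors["zero_lift_drag"] if apply_technology else 1.0)
    return mission_fuel(w, d)

wing_area = 120.0  # m^2, assumed baseline
print("baseline mission fuel:", fuel_for_concept(wing_area, apply_technology=False))
print("with Technology A:   ", fuel_for_concept(wing_area, apply_technology=True))
```

In RETIVO the concept would additionally be resized so that both variants meet the same mission, which is what propagates the impact of the technology beyond wing weight and drag alone.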
5 Detailed Concept Design (MDCAD)
In order to investigate the impact of design decisions in greater detail than is available within a RETIVO type study, it is necessary to use a more detailed aircraft concept representation. In particular it may be necessary to adopt a more detailed representation of the geometry e.g. 3D configuration geometry, structural layout, packaging of systems, powerplant integration etc. Higher fidelity analysis methods would also be required to sufficiently resolve this additional level of geometry detail. Further, to achieve the realistic level of performance associated with these detailed concept models, it is necessary to ensure that a realistic level of design maturity is incorporated. This effectively means that it is necessary to design each concept, taking into account a wide range of factors, such as payload, range, take-off/landing performance, cruise conditions etc. As for the RETIVO approach, the implications of technology or system choices will then be added into these detailed concept models. Each concept will then be redesigned, in the light of these new design choices, in order to better assess the true associated benefits or penalties. Without this concept redesign step, the predicted impact would correspond to the addition of a technology or system as a post conceptual design step. Within Integrated Wing it is desired to assess how the original conceptual design would have changed, if the technology or system had been integrated from the outset. This enables the true value of a technology or system to be investigated. To meet these requirements QinetiQ has established a Multi-Disciplinary Concept Assessment and Design (MDCAD) capability over many years [5-9]. Development of MDCAD has been driven both by the need to be applicable to novel aircraft configuration design, and to reduce the overall elapsed time for conceptual and preliminary design and performance assessment. The resulting capability uses computational physics based performance prediction tools where appropriate, such as Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA). Lower fidelity modules, such as those used in RETIVO, can also be used in combination where the particular application requirements allow. For example the mission modeling in RETIVO, which is relatively low fidelity, may also suffice for use in MDCAD. The addition of physics based performance prediction tools ensures that the resulting capability can be applicable to novel and conventional concepts. It also means that MDCAD can resolve more subtle effects due to design decisions than is possible in the less detailed models used in RETIVO. MDCAD makes extensive use of generalized Computer Aided Design (CAD) modeling, to provide a common aircraft geometry representation, which is
central to the multi-disciplinary analysis and design process. The full capability is extensively automated to enable numerical optimisation driven design to be completed, in order to incorporate the required level of design maturity with reduced man effort. Within the Integrated Wing project QinetiQ is using MDCAD as a baseline environment, enabling the integration of further technology and system choices to be developed and studied relatively quickly. MDCAD has been built upon high-fidelity computational physics based analysis and optimisation capabilities, much of which already existed within individual discipline groups within QinetiQ [10-13], to establish an integrated multi-disciplinary design optimisation capability. A critical factor necessary to achieve this has been the development of a common computer aided engineering (CAE) environment. This CAE environment consists of two main parts:
- Software framework for process automation and data exchange.
- Rules-based, parametric CAD model generator, which provides a multi-disciplinary, shared parametric representation of the configuration.
A bespoke framework utilizing Python based scripting has been developed and is used to automate the process. The rules-based, parametric CAD model generator is based upon the CATIA V5 commercial software product from Dassault Systèmes, enabling the automated generation of full external aircraft surfaces, structural layouts, local surface features (e.g. blending), deployable devices and internal packaging and systems. Computational physics analysis and optimisation tools are interfaced with this central CAD model within the software framework, to enable rapid analysis and optimisation. The exchange of information between the disciplines is standardized, for example the aerodynamics/structures exchange of loads and aero-elastic displacements. The baseline MDCAD framework used within the Integrated Wing project is shown schematically in Figure 3 for a generic civil aircraft case.
Figure 3. The Baseline MDCAD Framework
The aircraft concept is defined in terms of typical configuration parameters, such as wing planform and the fuselage length/diameter etc. Additionally, more detailed parameters are also specified which define the external aerodynamic surfaces e.g. camber, thickness and twist, and also the primary structural components e.g. spar and rib locations and sizing. Each of the configuration and detailed parameters is available for overall concept optimisation. These parameters are used to drive the CATIA V5 rules-based CAD geometry generator, which creates both the external CAD surfaces and the internal structural geometry. Once the external surfaces have been generated it is then necessary to create the internal structural geometry. Using rules that define the location of the spars, rib and stringer spacing in addition to high-lift and control devices a structural CAD representation of the wing is generated. This CAD representation is used to calculate the capacity of the fuel within the wingbox region and is then translated into a FEA compatible model for use with MSc NASTRAN SOL200. Figure 4 shows the resultant structural geometries of four different planform variations. For these cases, the rules defining the spar location, rib spacing and number of stringers have remained constant although these can also be varied if required.
Figure 4. The structural layout for four different planforms
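As a hedged illustration of how rules-based layout generation of this kind works (a toy sketch; the spar-fraction and rib-pitch rules and the planform values below are assumptions, and the real system operates on full CATIA V5 geometry rather than a simple trapezoidal planform):

```python
# Toy rules-based wing-box layout generator (illustrative; rule values are assumed).
def wing_layout(root_chord, tip_chord, semi_span,
                front_spar_frac=0.15, rear_spar_frac=0.65, rib_pitch=0.6):
    """Place spars at fixed chord fractions and ribs at a fixed pitch along the span."""
    n_ribs = int(semi_span // rib_pitch) + 1
    ribs = []
    for i in range(n_ribs):
        y = i * rib_pitch                                   # spanwise station (m)
        chord = root_chord + (tip_chord - root_chord) * y / semi_span
        ribs.append({"y": round(y, 2),
                     "front_spar_x": round(front_spar_frac * chord, 3),
                     "rear_spar_x": round(rear_spar_frac * chord, 3)})
    return ribs

# Example with assumed planform values (root chord 7 m, tip chord 2 m, semi-span 20 m).
for rib in wing_layout(root_chord=7.0, tip_chord=2.0, semi_span=20.0)[:3]:
    print(rib)
```

Changing only the rule values (or the planform parameters that drive them) regenerates the whole layout, which is what allows the planform variations of Figure 4 to share a common set of structural rules.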
The external CAD surfaces are analyzed using CFD to generate aerodynamic performance information together with aerodynamic loading information. The primary structural components (spar, ribs, skin) are analyzed and their thicknesses optimized using FEA to generate structural weight information. During the structural analysis of the wing the aerodynamic loads are applied, together with
loads associated with fuel carriage and other possible wing components e.g. engine installation, high-lift systems and landing gear. The aerodynamic performance predictions from CFD, the weight predictions from FEA and other aircraft components, together with details of the chosen engine e.g. specific fuel consumption, are fed back into the overall optimisation problem formulation. A mission analysis module is used to calculate the performance of the aircraft from which various optimisation objective functions can be derived e.g. total fuel burn or maximum range. Typically numerous constraints are also included within the process e.g. operational requirements for fuselage cabin size, payload, cruise Mach number etc. Each of the component parts of the overall process is set up as an automated process e.g. CAD generation, CFD grid generation, FEA model generation. The data exchange between these components can also be automated e.g. CAD input to CFD grid generation, transfer of aerodynamic loads into FEA. Hence the overall concept optimisation process can be fully automated and can be run without the requirement for user intervention. The civil aircraft example described above highlights the importance of the central parametric CAD model within MDCAD. This CAD model provides a multi-level link between traditional conceptual design parameters (configuration definition) and preliminary design parameters (detailed features). The use of commercial CAD software within MDCAD also provides other benefits. For example the CAD software provides functionality for calculation of areas and volumes which can be used directly within constraints e.g. payload or fuel volume. Centres of gravity and inertias are also available and can be used as part of the structural model definition. Functionality for calculating distances between component CAD parts or features, or indeed identifying unwanted intersections between parts or features, can also be used within constraints. The importance of a framework for linking the overall MDO process together is also clear from Figure 3. There is a requirement to establish an optimisation process across a network of machines. For example the CATIA V5 software is used on Windows platforms at QinetiQ, whilst the CFD and FEA software is usually run on a Linux multi-processor cluster. QinetiQ uses Python scripting to establish an overall optimisation process across this series of machines. Within the MDCAD process the use of optimisation, and the associated higher-fidelity analysis methods, to directly support concept analysis and design means that the conventional boundaries between conceptual design and preliminary design have been removed, and the two phases have to a large extent been merged. By using physics based analysis methods rather than simpler historically based correlations, the MDCAD approach provides generality to enable novel concepts to be assessed and designed, and improves the accuracy of the performance levels assumed during the conceptual design phase. This approach also results in the output of concepts which are more compatible with later stages of design, helping to reduce the overall design cycle. The MDCAD process, presented in this paper, has been run to demonstrate the importance of optimizing both the aerodynamic and structural characteristics of a configuration simultaneously. For the cases investigated improvements in aircraft performance were noted through alterations to the planform, camber and thickness
profiles, resulting in changes to the external aerodynamic shape of the wing. These changes in external geometry, and the resultant change in the aerodynamic loading on the wing, resulted in a new internal structural geometry. The thicknesses of the internal structural geometry were also optimized to minimize the overall wing weight. The constraints placed on the configuration were satisfied, including those dictating the performance of the aircraft, ensuring that it was able to travel a given range as well as meeting low speed performance and stability criteria. Through the implementation of enhanced or additional modules within the process, the MDCAD capability can be used to further explore novel configurations, technologies or systems and the associated impact on the overall aircraft solution. Within the Integrated Wing Project enhanced versions of the MDCAD capability will be used to investigate the impact of several technology and system choices relevant to fuel systems, composites, landing gear and other systems.
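To make the shape of such an automated concept-optimisation loop concrete, the following minimal sketch (an assumption-laden toy, not the MDCAD implementation; the geometry, aerodynamic and structural "models" are cheap placeholder functions standing in for CAD generation, CFD and FEA) wires surrogate discipline models into a single objective and hands it to a generic optimiser:

```python
# Toy multi-disciplinary optimisation loop (illustrative only; not MDCAD).
from scipy.optimize import minimize

def generate_geometry(x):
    wing_area, thickness = x
    return {"wing_area": wing_area, "thickness": thickness}

def aero_model(geom):
    # Placeholder: drag area grows with wing size and relative thickness.
    return 0.02 * geom["wing_area"] * (1.0 + 2.0 * geom["thickness"])

def structures_model(geom, drag_area):
    # Placeholder: structural weight rises for thin wings carrying high loads.
    return 40.0 * geom["wing_area"] / max(geom["thickness"], 0.05) + 500.0 * drag_area

def mission_fuel(x):
    geom = generate_geometry(x)
    drag_area = aero_model(geom)
    weight = structures_model(geom, drag_area)
    return 0.1 * weight + 3000.0 * drag_area  # objective: fuel-burn proxy

# Constraint placeholder: wing area large enough for a required fuel volume.
cons = [{"type": "ineq", "fun": lambda x: x[0] - 100.0}]
result = minimize(mission_fuel, x0=[150.0, 0.12],
                  bounds=[(80, 300), (0.08, 0.18)],
                  constraints=cons, method="SLSQP")
print(result.x, result.fun)
```

In the real capability each placeholder would trigger CAD regeneration, grid generation and a CFD or FEA run on the appropriate machine, with the framework automating the data exchange between them.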
6 Conclusions
The feasibility, conceptual design and preliminary design phases, when considered as part of one overarching activity, could be viewed as focussing upon an exploration of the 'achievable design space' of aircraft solutions and the resulting identification of the best solution to take forward into detailed design and production. In this context an 'aircraft solution' refers to matching the required performance and cost targets (customer requirements) with a viable aircraft concept which the manufacturer (and customer) is confident will achieve these targets. Ideally the priorities for particular performance and cost targets would be developed based upon an understanding of, and confidence in, these targets actually being achievable and representing some best design balance, such as between performance and cost. During these early phases of design there would hence be a need to be able to trade off requirements, to ensure concept viability and to be able to have confidence in the associated prediction of the anticipated final operational performance and cost for each aircraft concept. To support this overarching design activity there would be a requirement for design capabilities which are fast, broadly applicable and sufficiently accurate. However these requirements conflict in practice, presenting a dilemma for the development of such a design capability. In particular the requirement for a design approach to be both generally applicable and sufficiently accurate could potentially be addressed solely by using relatively high fidelity modeling. However it is widely recognized that this high fidelity modeling is not always needed and can lead to undesirable complexity and a large increase in both manual and computational workload. Such an approach, if used in isolation, would inevitably lead to the possible space of design options being narrowed earlier than desired in order to reduce this workload. This situation would not represent an improvement compared to the traditional aerospace design process. The current paper presents progress towards a more systematic approach for supporting the early phases of design, by using a combination of low and high
fidelity tools, process automation and optimisation. The overall approach, as presented, incorporates two levels of design modeling, one for ‘broad and shallow’ exploration of the design space, the other enabling ‘narrow and deep’ investigations for more specific aircraft concepts. In practice these approaches are intentionally similar and ultimately many of the components can be common to both approaches. This potentially provides the basis for a single hybrid capability to be derived in the future, which would allow the modeling fidelity of different aspects of the design to be chosen according to the specific application requirements. The design processes, which have been presented, use concurrent modeling of many different multi-disciplinary aspects of an aircraft and many associated measures of overall performance. This concurrent modeling ensures the value and viability of design decisions can be assessed in the context of the whole aircraft and the overall top level requirements. There are several drivers for focusing on support for decision making in the early stages of design. Firstly there is a need to ensure that imposed top level requirements are viable and sensibly balanced. There is a desire to facilitate selection of the best aircraft solution which is matched to these requirements. Ultimately there may be an opportunity to de-risk the downstream design phases, by attempting to prevent possible problems from happening, through improved and higher fidelity upstream design.
7 Acknowledgements
The capabilities described have been funded over many years at QinetiQ and most recently through the Integrated Wing project, which is funded jointly by the UK Department for Business, Enterprise & Regulatory Reform and QinetiQ. The authors would also like to express their gratitude to Airbus UK for the numerous discussions to date within the Integrated Wing project.
8 References
[1] ACARE, Strategic Research Agenda, Volumes 1 & 2, 2002. Available at: http://www.acare4europe.org. Accessed on: May 1st 2008.
[2] Integrated Wing Aerospace Technology Validation Programme. Available at: https://www.integrated-wing.org.uk/default.htm. Accessed on: May 1st 2008.
[3] ELLSMORE PD, RESTRICK KE. Application of RETIVO to Civil Aircraft. Paper AIAA-2007-7808, Belfast, 2007.
[4] DOHERTY J, DEAN SRH. MDO-Based Concept Optimisation and The Impact of Technology and Systems Choices. Paper AIAA-2007-7806, 7th AIAA ATIO conference, Belfast, September 2007.
[5] FENWICK SV, HARRIS JapC. The application of Pareto frontier methods in the multidisciplinary wing design of a generic modern military delta aircraft. In proceedings of NATO RTO AVT symposium, Ottawa, October 1999.
[6] FENWICK SV, HARRIS JapC, DEAN SRH. Multi-disciplinary Optimisation to Assess the Impact of Cruise Speed on HSCT Performance. Paper AIAA-2005-4538, Albany, 2004.
[7] BARTHOLOMEW P. Structural Optimisation within the Multidisciplinary Design Process. In proceedings of 5th ASMO UK/ISSMO conference on Engineering Design Optimisation, Stratford-upon-Avon, July 2004.
[8] DOHERTY JJ. Rapid Multi-Disciplinary Analysis and Optimisation of Novel Air Vehicle Configurations. In proceedings of CEAS/KATnet Conference on Key Aerodynamic Technologies, Bremen, 2005.
[9] DOHERTY JJ, DEAN SRH. The Role of Design Optimisation within the Overall Vehicle Design Process. In proceedings of 6th ASMO UK/ISSMO conference on Engineering Design Optimization, Oxford, July 2006.
[10] BARTHOLOMEW P, VINSON S. STARS: Mathematical Foundations. In Software Systems for Structural Optimisation, Birkhauser Verlag, Basel, 1993.
[11] DOHERTY JJ, PARKER NT. Dual Point Design of a Supersonic Transport Wing using a Constrained Optimisation Method. In proceedings of 7th European Aerospace Conference - The Supersonic Transport of Second Generation, Toulouse, 1994.
[12] ROLSTON SC, DOHERTY JJ, EVANS TP, GRENON R, AVERARDO MA. Constrained Aerodynamic Optimisation of a Supersonic Transport Wing: a European Collaborative Study. Paper AIAA-98-2516, Albuquerque, June 1998.
[13] HACKETT KC, REES PH, CHU JK. Aerodynamic Design Optimisation Applied to Civil Transports with Underwing Mounted Engines. In proceedings of ICAS Conference, Melbourne, 1998.
Cost Modelling of Composite Aerospace Parts and Assemblies
R. Curran a,1, M. Mullen b, N. Brolly b, M. Gilmour b, P. Hawthorne c, S. Cowan c
a Director, Centre for Integrated Aerospace Technology (CEIAT); Reader, School of Mech. and Aerospace Engineering, Queens University Belfast, NI, UK (Professor of Aerospace Management and Operations, TU Delft)
b School of Mechanical and Aerospace Engineering, QUB
c Bombardier Aerospace Belfast (BAB)

Abstract. The paper addresses the cost estimation of composite part and assembly aerospace structures. It is shown that SEER-DFM can be used effectively as a tool to estimate the cost of composites, but that the user requires some skill in calibrating the software to the cost environment within any particular company. In collaboration with Bombardier Aerospace Belfast, a range of composite parts and assemblies were used to verify the tool's estimation capabilities, while the true value of the tool was then validated through a particular study on a set of composite airstairs. An opportunity for a 20% cost reduction was identified on the basis of the cost breakdown generated by the study. Finally, a parametric study of the assembly costs showed that it is possible to obtain a relatively accurate estimate of the cost based on weight alone, which would facilitate very early cost estimation before the more detailed information needed to run the SEER model is available. In conclusion, such cost estimating tools can be used to great effect within a concurrent engineering context to control cost and to inform designers of the manufacturing cost implications of their decisions. Subsequently, the estimation capability can also be used to compress the time required for price/cost reductions to be identified and secured through supply chain cost rationalisation, whether up-front at the make/buy decision stage or later during production.

Keywords. Cost Modelling, composite structures, aerospace manufacture
1 Introduction
Advanced Composite Materials (ACM) are being introduced into the aerospace industry and its manufacturing processes at an ever-increasing rate, and aircraft manufacturers are now being differentiated to a large degree by the percentage of composites used in their aircraft. Consequently, the modelling of cost [1] for these components is crucial for their economic viability and integration into a Design for Cost approach.
1 Corresponding Author. Email: [email protected]
Composite use has progressed from secondary structures to main structural components and forecasts indicate that this trend will continue in the foreseeable future [2], albeit tempered by the likely increase in the cost of composite raw materials due to the increased oil prices. Composite structures have been demonstrated to have better performance and less weight than conventional metallic designs. This advantage is most prevalent in the area of aerospace structure design [3]. However, the high cost of manufacturing composites remains an economic barrier to their increased usage. Additionally, the uncertainty regarding both manufacturing difficulties and life-cycle costs presents extra risk that can often discourage the use of composites in applications where they would be most beneficial. Therefore it is crucial to integrate the key requirements of both design performance and cost [4] into the trade-off involved in composite part or assembly selection. This paper presents the investigations of the authors into the estimating capabilities of a commercial cost estimating package for composite aerospace parts and assemblies. SEER-DFM (Design For Manufacture) from Galorath Inc. is a parametric cost estimating tool [4] that can be used to model any manufactured part or assembly. For this report the detailed composites plug-in for SEER-DFM was used which is specifically orientated to aircraft composite parts. SEER-DFM provides a link between the design of a component and its manufacture. The software allows the analyst to view a complex array of the cost, labour, assembly, process, part design, materials and production variables. After presenting a review of the literature, the paper will discuss the basic costing methodology used. Subsequently, verification studies on various parts and assemblies will be presented as the basis for developing the tool to be validated on a new case study. The paper culminates in a parametric analysis of all the assembly results to show the cost trends relative to weight in order to highlight the concurrent potential for the use of such cost modelling tools.
2 Literature Review
Research in the area of ACM (Advanced Composite Materials) manufacturing costing models [5] has developed steadily; early work by Kim [6] was concerned with incorporating cost as a variable in the design of composite structures. He presents a first-order model approximation for composite manufacturing processes and uses a case study to show its use. The first-order approximation models the physical system as a signal having a gain and a feedback, as demonstrated in Figure 1.
Figure 1. First order system schematic
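As a hedged illustration of such a first-order approximation (a generic sketch, not necessarily the exact formulation used by Kim [6]; the steady-state rate v0 and time constant τ below are hypothetical), the process rate can be modelled as rising exponentially towards a steady-state value, and the task time is obtained by accumulating the rate until the required extent of work is completed:

```python
# Generic first-order process-time sketch (illustrative; parameter values are assumed).
import math

def first_order_time(extent, v0, tau, dt=0.01):
    """Time to complete 'extent' of work when the process rate follows
    v(t) = v0 * (1 - exp(-t / tau)), i.e. first-order dynamics."""
    t, done = 0.0, 0.0
    while done < extent:
        t += dt
        done += v0 * (1.0 - math.exp(-t / tau)) * dt
    return t

# Example: 5 m of ply placement at a steady-state rate of 0.5 m/min
# with a 2-minute ramp-up time constant (all values hypothetical).
print(f"estimated lay-up time ≈ {first_order_time(5.0, v0=0.5, tau=2.0):.1f} min")
```

The time constant captures the "ramp-up" penalty of each process step, which is why such models distinguish between many short operations and a few long ones even when the total work content is the same.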
Mawuli [7] developed a cost model which can be applied at the design stages of composite structures in the aviation industry, the same area addressed in this paper using the SEER-DFM software. Mawuli's model is based on the complexity of the aircraft part or assembly, where relating cost to complexity simplifies the model; he also describes the measurement of complexity based on information theory, which is useful for the variable parameters of the cost estimating model. Mack [8] provides a case study on an electric vehicle showing the relationship between the cost and performance criteria of composite materials. It focuses on the decision making process involving these factors for the automotive industry, and on the corresponding decisions in composite material fabrication for medium-volume, high-performance components such as aircraft components. Friedricht et al [9] apply an activity-based costing model to GRP and to the more currently used CFRP counterparts for a blister fairing that formed part of a Rolls-Royce thrust reversing unit. Their costing model compares the associated costs of thermosetting and thermoplastic materials, which are being increasingly investigated in aerospace applications. Eaglesham [10] has been critical of distorted accounting methods that use volume-based allocations of overhead in the modelling of composite manufacture, and advocates an activity-based costing methodology to improve composite part cost estimation at the early design stages. Pas [11], who led a research team in 1998, tackled the lack of software for Cost Estimating Modelling (CEM) of advanced composites, focusing on the parts and processes used in advanced composite manufacture and on the industrial implementation of process-based models using a first-order dynamic law. Continuing the computer-aided research relevant to this project, Boyer [12] created a cumulative database of material costs for a web-based cost estimator, an up-to-date database being essential to web-based cost estimating.
Figures 2 and 3. MIT cost equations [15, 16]
Goel [13] covers the pricing information of custom built composite manufacturing machinery as well as standard equipment. The article studies the price and price drivers of equipment used to manufacture composite parts and composite assemblies that is associated with the non-recurring cost of the composite manufacture. Following the equipment costing, Barlow et al [14] developed a methodology based on applying MIT cost equations [15, 16] to composite manufacturing process steps from which cost variables and constants
can be established to represent an estimated costing of the composite aircraft structure (see Figs. 2 and 3). Haffner [17] has included work on investment cost for production equipment and tooling as well as estimation guidelines for labour and material. He has identified a detailed process plan for each composite manufacturing process. The study shows the consequences of design changes in detail. Haffner considered two modelling techniques, process based and technical cost models, to develop up to 270 associated modelling equations. The development of time-estimating models for advanced composite manufacturing processes is outlined by Stockton et al [18]. Time estimating models are investigated for the second stage of Affordable Manufacture of Composite Aircraft Primary Structures (AMCAPSII) led by British Aerospace. He discusses the models being developed for both part manufacture and assembly, although he only deals with the automated tape lay-up process. Complementary to this report, Choi et al [19] published work on the use of a knowledge based engineering tool to estimate the cost and weight of composite aerospace structures. The process was to be implemented at the conceptual stage of design. The authors used the CATIA V5 knowledge environment to model the components relevant to aerospace engineering, providing the geometric detail needed to accurately estimate cost and weight, which is usually modelled as simple surfaces at the conceptual stages of design. The system they developed used MSC NASTRAN to allow the designer to use a "what if" analysis to explore different configurations for the composite parts, in order to optimise cost efficiency. In addressing the issue of commingled yarn based composite costing, Bernet et al [20] published work explaining a cost estimating procedure and consolidation model. The report explains in detail the determination of the processing conditions necessary to achieve the desired quality at minimum manufacturing cost, and was illustrated with generic composite reference material. The work concludes with the integrated processing technique model being demonstrated for application to complex composite components. A cost model was developed by Wang et al [21] which used artificial neural networks to examine how the constraints imposed by changing market trends affect the identification of cost estimating relationships. In this report a series of experiments were undertaken to select an appropriate network for the fine tuning of the model. The accuracy and robustness of the cost model were developed in order to investigate varying conditions, and guidelines were then presented.
3 Methodology
To develop a robust model that can be reused for any composite aircraft structure, a series of guidelines must be set out to ensure uniformity. This section describes the generic method used to model a composite part in SEER-DFM in a way that can be translated to any aircraft composite part. A flow of procedures and tasks has been developed to provide a quick reference guide for analysts, showing the information required and the order in which it is needed.
SEER-DFM is being used as a tool to develop a cost estimating model for composite parts and assemblies. It uses a breakdown of the process used to manufacture the part in providing the estimation. The process breakdown comes from the historical data collected for each individual part and assembly. This therefore leads to a process costing approach for the recurring cost of the composite components (a minimal sketch of such a roll-up is given below). This procedure allows the part or assembly in question to be modelled in detail. The research approach involved a portion of work that considered the use of the model on a real-world problem, facilitated through collaboration with Bombardier Aerospace Belfast (BAB). This work considered a bonded composite airstair assembly that formed the validation case study for the models developed and verified on other existing parts and assemblies. The airstair assembly includes complex geometries, build-ups, honeycombs and additional manufacturing materials and processes to be added into the software's system configuration file, allowing the model to be tested in sufficient detail. The methodology therefore incorporates three key aspects of research in this project: 1) the specific cost estimating model; 2) the manufacturing process modelling; 3) the capture of knowledge. The final outcome of the work provided a systematic breakdown of the recurring cost to manufacture the composite assembly. The first step for the cost estimation is the analysis of the given problem and the determination of the main cost drivers. SEER-DFM is no different, and before the software is used it is essential that background research into the composite part or assembly is undertaken so that what is being input into the software is fully understood. Essential research is required in the form of:
- Materials used, both in the composite part (resin/fabric) and disposable materials (bagging material, release agent, breather, etc.)
- Labour rates
- Processes used
- Understanding of the company's experience of the manufacturing process in question, i.e. relating to its Technology Readiness Level (TRL)
- Part complexity: shape, dimensions, vertices, build-ups, cores, etc.
If any of the materials or processes used for the part manufacture was found to be missing from the SEER-DFM database, it was manually added to the system configuration file. Starting with these details already known, a relatively fast and accurate estimate was produced by SEER for the recurring costs. The process of data mining is the most labour-intensive part of the estimation process. To input these values into SEER, a new project is opened in the user interface so that new work elements can be inserted through user-forms that request generic information concerning each new element. At this stage the work element is created using the "detailed composites" process. This determines the knowledge databases that will be used in the parameter window, where the program already has a predetermined database for aircraft components. Any of these can be selected for the knowledge database to be used. If this first component were to be used as part of a larger assembly then at this point "insert next element" would be selected; otherwise "OK" is selected.
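The process costing approach referred to above amounts to a roll-up of step times, rates and material cost; the sketch below is illustrative only (hypothetical process steps, times and rates) and does not reproduce SEER-DFM's internal cost model.

```python
# Hypothetical process-costing sketch: recurring cost = material + sum(step time x rate).
# Steps, times and rates are placeholders, not SEER-DFM data.
process_steps = [
    # (step name, labour hours, labour rate in GBP/hour)
    ("ply cutting",      1.5, 45.0),
    ("hand lay-up",      4.0, 45.0),
    ("bagging",          0.8, 45.0),
    ("autoclave cure",   0.5, 60.0),   # attended fraction of the cure cycle
    ("trim and inspect", 1.2, 50.0),
]
material_cost = 950.0  # GBP per part, assumed (prepreg + consumables)

labour_cost = sum(hours * rate for _, hours, rate in process_steps)
recurring_cost = material_cost + labour_cost
print(f"labour: {labour_cost:.2f} GBP, recurring unit cost: {recurring_cost:.2f} GBP")
```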
The available definable parameters now appear in the parameter window at the top right of the same user interface. These parameters relate to the work element selected in the work element window. All the parameters are set to defaults defined by the database that was selected at the initial work element insertion stage. Many predefined composite components are available from the SEER database composite plug-in, many of which are highly specialised to the aerospace industry. These can be accessed by double-clicking on the "Shape/Dimensions" parameter in the parameter window. There are two windows here: one displaying the dimensions and the other displaying the schematic component shape. The correct knowledge database must be selected for the part at the initial "insert new element" stage. These can be selected from:
• Spar: in wing boxes, vertical fin boxes, and horizontal tailplane boxes.
• Rib: in wing boxes, vertical fin boxes, and horizontal tailplane boxes.
• Skin: the two aerodynamic surfaces of a wing box.
• Empennage Panel: the aerodynamic surfaces of a vertical fin and a horizontal tailplane.
• Bottom Spoiler Skin: the non-aerodynamic lower surface of a spoiler.
• I Beam: typically used to stiffen a larger panel, or as part of a load-bearing structure.
• Hat Stiffener: typically used as part of large panels.
• Panel: most are rectangular or trapezoidal in shape, and quite thin in relation to their length and width.
• Other: this option allows the user to create a defined shape based on periphery, dimensions and area.
Using all the parameter data found from resources obtained at the research stage, the program calculates a quick automatic estimate. This estimate is not correct, however, unless all the parameters match the process being modelled; parameters associated with others that have been changed need to be updated. This is done by selecting "calculate now", which updates the material cost per kilogram and the cure times and temperatures for the materials selected. With the SEER-DFM program it is now possible to view an output of the cost estimates that it has calculated. The estimate can then be exported to show a breakdown of the constituent parts in an easily read portable document format (PDF) or Excel spreadsheet for comparison or analysis. The user interface is illustrated in Figure 4.
Figure 4. Section of multi-ply composite estimate
Although the software can be used to populate a cost breakdown structure with estimates, accurate labour rates are essential. Calculation of these rates requires a large amount of data mining and research into the manufacturing processes used and into the company itself, especially in terms of overhead allocation. The equations used for the calculation of these parameters are highlighted in Figure 5.
Figure 5. Equations quoted for direct hourly setup rate
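The specific equations used in the study are those quoted in Figure 5 and are not reproduced here. As a minimal sketch only, assuming a conventional overhead-absorption formulation, a direct hourly rate can be composed as follows; the figures are hypothetical and are not BAB rates.

```java
/** Illustrative only: a simplified direct hourly rate calculation.
 *  The actual equations used in the study are those quoted in Figure 5;
 *  the overhead-absorption form below is an assumption for illustration. */
public class LabourRate {

    /** Direct hourly rate = (direct wages + allocated overheads) / productive hours. */
    static double directHourlyRate(double annualDirectWages,
                                   double allocatedAnnualOverhead,
                                   double productiveHoursPerYear) {
        return (annualDirectWages + allocatedAnnualOverhead) / productiveHoursPerYear;
    }

    public static void main(String[] args) {
        // Hypothetical figures, not company data.
        double rate = directHourlyRate(35_000, 35_000, 1_600);
        System.out.printf("Direct hourly rate: %.2f per hour%n", rate);
    }
}
```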
The research methodology adopted included verification studies on various parts and assemblies as the basis for developing the tool to be validated on a new case study. However, the paper also culminates in a parametric analysis of all the assembly results to show the cost trends relative to weight in order to highlight the concurrent potential for the use of such cost modelling tools.
4 Results
4.1 Parts Results
It is possible with SEER-DFM to perform a complete breakdown of the estimated cost in terms of times and costs. The equations derived earlier for all-up costs originated from this method of breakdown. The material placement of the composite components consists of both cutting time and lay-up time; these two operations require a large proportion of the labour time for manufacture. The cure cycle is relatively similar for most components, as it depends predominantly on the pre-preg cure time defined by the material specification in the system configuration file. The other operations require relatively little time. A large proportion of the material placement time is contributed by the lay-up time of the fabric. This is exceptionally large for a basic component, but this can be attributed to the large size of the part (>110 inches) and the large number of honeycomb inserts involved in its construction. The breakdown of the times can be visualised as exemplified in Figure 6, where the pie chart shows all of the times estimated by SEER-DFM. It can be seen that the main time components in the manufacture of composite components are: lay-up time, tool closing, cure cycle, and cutting time (depending on the component complexity and features). The generation of such breakdowns was repeated for all the individual parts to examine trends and generic breakdowns. It can also be seen that the less dominant operation times only make up a fraction of the time; less
than a quarter of the total time. It can be seen in the methodology that the main time constituents rely heavily on the part dimensions, whereas the smaller time operations are more dependent on the machine rates and setup times.
Figure 6. Pie chart of front spar manufacturing times (mins)
The SEER model was verified using two methods for the parts aspect of the estimation. The first was to ensure that the software could accurately estimate the total standard hours needed to manufacture the part, and in essence to correlate the labour rates used with those at BAB. This was done using the spars from a 200-seater commercial transport aircraft. The larger the size of the component, the larger the contribution to the higher lay-up and tool closing times; these operations mainly rely on drivers associated with distance-per-time rates and consequently are dependent on the dimensions of the component. The parts studies were carried out for a forward spar, a rear spar, and a composite access door. Figure 7 shows a summary of the parts used for the verification of the model. The section has been summarised by percentage error in the estimates in order to give a relative reference comparing standard hours and cost. It can therefore be seen that the MLG door clearly has the largest error between the actual and estimated values, even though the honeycomb costs had been estimated exactly. However, the model was used to achieve a high degree of accuracy with very little calibration through adjusting some of the input parameters.
Figure 7. Summary of results for parts
It was also identified during the analysis of the results that a trend exists in the estimates of the parts. This trend can be seen in Figure 8 and seems to indicate a strong relationship between material cost and overall part cost. There is, however, an exception in the case of the access panel, as can be seen in the corresponding figure, where the cure time was the largest percentage of the total time.
Figure 8. Parts trend analysis
4.2 Assembly Results
A range of assemblies was tested to verify the software, ranging from simple panels assembled with purchased alloy parts to highly complex assemblies with several composite parts, honeycombs and build-ups. The results for the assemblies are similarly accurate to those for the parts; however, the degree of accuracy is not as consistent as that for individual parts. Altogether four assemblies were used for the verification. These consisted of a Kevlar and honeycomb leading edge, trailing edge inboard and outboard flaps, and a leading edge fixed flap. Several of these assemblies included alloy components and, as the tool being verified is to be used specifically for composites, the exact cost for these alloy components was made available so that the tool was only "estimating" the cost of manufacturing and assembling the composite components. It was highlighted from the analysis of the results graphs that the assemblies also followed a trend in the cost breakdown of material and labour, as exemplified in Figure 9. The trend seems to indicate that the assemblies incur a greater cost in the added-value element of the components. The exception to the rule, however, is the Kevlar/Nomex leading edge, which consists of a single composite panel with two simple metallic inserts, namely end plates, which explains the low cost. The anomaly can therefore be explained simply by the fact that it is a large component with very little assembly cost and added value.
Figure 9. Assembly trend analysis
The summary of the results achieved from the assemblies can be seen in Figure 10. The first impression is that the results obtained from SEER-DFM are of a similar accuracy to those for individual parts. The program was also used to look at the more detailed breakdown of the values in order to give a better view of where the tool over-estimated or under-estimated the constituent cost elements, thereby increasing the user's understanding of the cost drivers.
Figure 10. Graph of actual against estimated for assemblies
It can be seen from Figure 11 that the greatest percentage error occurred in the leading edge fixed flap inboard assembly. However, this is still well within the acceptable range identified by the authors. The other assemblies studied in the verification process have very small percentage errors and show great promise in terms of the accuracy of the software for the straightforward costing of composite assemblies.
Figure 11. Summary of estimated percentage errors for assemblies
4.3 Validation Results
With respect to the main assembly considered for validation in this study, the industry objective was to identify cost savings within the composite assembly. The rationale behind this was that BAB believed the supplier to be less competitive for this composite assembly, and consequently the study would facilitate BAB in challenging the supplier's price.
As with the individual part analysis, the cost of an assembled component is decomposed into its smallest cost and time elements. As the airstairs used in this study are a major assembly made up of several smaller assemblies, it is acceptable to analyse one of these sub-assemblies. The sub-assembly considered was the "step assembly". This part was chosen as it involves all aspects of composite manufacture studied so far in this project, namely honeycomb, foam build-ups, drilled holes, adhesive bonding, riveting, cut-outs, inserts and composite structures.
Figure 12. Breakdown of actual composite airstairs
With no accurate should-cost breakdown available to compare with the detailed results from SEER, only the prices charged for each minor assembly of the total assembly, as provided by the supplier, could be used. These are the only guidelines available for identifying savings on each of the minor assemblies. In addition to the high-level price list, a Bill of Materials (BOM) was acquired to provide a breakdown of the materials being used to manufacture the component. Figure 12 presents the breakdown of the total assembly into the minor assemblies. Comparing the results obtained from SEER, it can be seen that the savings have been made on the smaller assemblies. The most significant saving has been identified on the end panel assembly, but the project has also rationalised a difference in cost between the side panel assembly and its opposite hand. Note also that a saving was estimated for all of the sub-assemblies; this is not to be taken literally, as the cost quoted by the supplier will also include a profit margin. A profit of 20% has been assumed, using a crude method of taking the supplier's average net profit over the previous three financial years; this margin is one of the input parameters considered in the fine calibration of the estimates. Even after taking this into consideration, there is still a saving of more than 20% on the total assembly.
4.4 Parametric Summary of Assembly Results
To conclude the results section, a characteristic trend is explored comparing the total cost of each component to the weight of the composite element in that assembly, which is in turn proportional to the component's dimensions. The intention is to highlight a trend or method of calculation that may be used as a tool to approximate the cost of future composite assemblies. The weights used were those obtained from SEER for the calculated weight of the composite element,
dependent on its dimensions and pre-preg density in the assembly. The costs used were the total cost for each entire assembly, including inserts and hardware; this analysis therefore includes the price of these extra components but not their weight contribution. Combining all the previous assembly results, stochastic equations and an associated degree of confidence can be obtained for predicting future costs based only on the weight of the component's composite element. This would primarily be of great use at the early concept stage of design. Figure 13 below shows the combination of the assembly and airstair validation results. The statistical degree of confidence is calculated to be a very reasonable R² value, for both the actual and estimated regression trend lines, of 0.8 and 0.86 respectively. The gap between the trend lines therefore denotes the percentage error of cost as the weight of the assembly varies.
Figure 13. Graph of estimated and actual trends for assemblies and the airstairs
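As an illustration of the kind of weight-based parametric relation described above, the following sketch fits a least-squares line of assembly cost against composite-element weight and reports the R² value; the data points are invented placeholders rather than the study's results.

```java
/** Minimal least-squares fit of cost against weight, with R^2.
 *  Data points are invented placeholders, not the values behind Figure 13. */
public class WeightCostTrend {
    public static void main(String[] args) {
        double[] weightKg = {4.0, 7.5, 12.0, 20.0, 35.0};   // hypothetical composite-element weights
        double[] cost     = {900, 1500, 2300, 3900, 6600};  // hypothetical assembly costs

        int n = weightKg.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += weightKg[i]; sy += cost[i];
            sxx += weightKg[i] * weightKg[i];
            sxy += weightKg[i] * cost[i];
        }
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double intercept = (sy - slope * sx) / n;

        // Coefficient of determination: R^2 = 1 - SS_res / SS_tot
        double meanY = sy / n, ssRes = 0, ssTot = 0;
        for (int i = 0; i < n; i++) {
            double predicted = slope * weightKg[i] + intercept;
            ssRes += Math.pow(cost[i] - predicted, 2);
            ssTot += Math.pow(cost[i] - meanY, 2);
        }
        double r2 = 1 - ssRes / ssTot;

        System.out.printf("cost ~= %.1f * weight + %.1f, R^2 = %.3f%n", slope, intercept, r2);
    }
}
```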
5 Conclusion
It has been shown that SEER-DFM can be used effectively as a tool to estimate the cost of composite parts and assemblies, but the user requires some skill in calibrating the software to the cost environment within any particular company. A range of composite parts and assemblies was used to verify the tool's capabilities, and the true value of the tool was then validated through a particular study on a set of composite airstairs. Rather than proving that the tool could estimate cost, the rationale for the study was to identify whether the supplier was less competitive in their pricing, and if so to highlight the opportunity for price/cost reduction. An opportunity for a 20% reduction was identified and is being negotiated with the supplier, particularly on the basis of the cost breakdown generated by the study and the increased confidence in understanding the constituent costs. Finally, a parametric study of the assembly costs showed that it is possible to obtain a relatively accurate estimate of the cost based on weight alone, which would facilitate very early cost estimation before the more detailed information needed to run the SEER model becomes available. In conclusion, such cost estimating tools can be used to great effect within a concurrent engineering context to control cost and to inform designers of the manufacturing cost implications of their decisions. Subsequently, the
estimation capability can also be used to compress the time required for price/cost reductions to be identified and secured through supply chain cost rationalisation, whether up-front at the make/buy decision stage or even later during production.
6 Acknowledgements
The work was part of a collaborative project between the Procurement and Methods Functions at Bombardier Aerospace Belfast (BAB) and the School of Mechanical and Aerospace Engineering at Queen's University Belfast. The authors would like to acknowledge a great deal of assistance from BAB staff, who included: Philip McIlroy – Procurement Manager; Neil Watson – Procurement Methods Project Engineer; Ruth Henderson – Procurement Methods Project Engineer; Jim Stewart – Procurement Methods Project Engineer; and Mark Walkingshaw – Composite Sourcing Specialist and Composite Materials Contact at BAB. Their time, effort and support are greatly appreciated.
Integrated Product Process Development (IPPD)
A Design Methodology for Module Interfaces
Régis Kovacs Scalice a,1, Luiz Fernando Segalin de Andrade b and Fernando Antonio Forcellini c
a Associate Professor, UDESC/DEPS – State University of Santa Catarina, BR.
b Associate Professor, CEFET – Federal Centre for Technological Education of Santa Catarina, BR.
c Associate Professor, UFSC/GEPP – Federal University of Santa Catarina, BR.
Abstract. This paper describes a design methodology for developing interfaces for modular products. An interface should be understood as the boundary between two or more product modules. Interfaces are essential to guarantee a proper exchange of materials, energy and information among modules, and they also play an important role in product standardization and module interchangeability. The proposed method uses the product architecture to establish the interface requirements, which are employed to define the working principles. A morphological matrix is used to organize and aid the definition of the interface concept variants. An evaluation matrix is used to perform the interface concept analysis and ranking. Evaluation criteria are also presented. This design method was preliminarily employed for the development of a modular product family, demonstrating its viability and potential. Keywords. Interface design, Modular product, Product design.
1 Introduction
During the last decades, a great number of enterprises have been increasing their competitiveness through the use of products with modular architecture, resulting in a higher variety with lower production costs. Various authors [7,10,19,20] point out that the use of modular architecture improves this competitiveness. Miller and Elgard [8] define two attributes that crystallize as carriers of modularity, regardless of the application:
• Modular systems are recognized for their ability to create variety by the combination and interchangeability of different modules. Interchangeability and combinations require the modules to have standardized interfaces and interactions.
• Modules contain essential and self-contained functionalities compared to the product they are part of. Self-contained means that the function is
1 UDESC – State University of Santa Catarina, Production and Systems Engineering Department, Campus Universitário Prof. Avelino Marcante, CEP 89223-100, Joinville, SC – Brazil. Tel.: +55 47 4009-7830. E-mail: [email protected]; [email protected].
carried out within the module and limited to it or, in other words, the module is independent. Based on these features it is possible to perceive the importance of an accurate interface to the integrity of modular product architecture. This paper presents an interface design methodology for the development of modular products, which aims to facilitate the decision making process during the end of the conceptual design and the beginning of the detailed design of modular products. The structure of the proposed procedure was conceived to provide a clear view of module interactions and to make interface standardization simpler, as well as to reduce the development time.
2 Related Work
The development of modular products influences different aspects of the life cycle of a product [3,6,9]. In the production phase, a decrease in the number of processes is observed, and during assembly a decrease in the number of required operations is also noticed. In the use phase, a modular structure can lead not only to a better-performing product, but also to a product that is more easily repaired, maintained and disposed of. In the final disposal of a product, modularity is more suitable for disassembly and reuse. However, the module interchangeability and interoperability required to achieve all these features depend on a proper interface design. The importance of a proper interface design for modular products has been outlined by various authors [1,5,11,18,21]. This importance is related to the influence of interfaces on the final product and on the flexibility of varieties [1]. The emphasis on interface design arises due to the complexity that occurs at the interfaces [18]; thus, interface complexity could be used as a measure of product flexibility [5]. The concept of interface may vary. Miller and Elgard [8] emphasize the difference between interfaces and interactions: the former are the boundaries between modules, while the latter describe inputs and outputs between modules. Zeng [23] establishes the concept of boundary as interactions between a product and its working environment; for the author, there are two types of interactions: structural and physical. Ullman [18] classifies connections as one or more of these types: fixed, not adjustable interface; adjustable interface; separable interface; locating interface; and pivoting interface. Hillströn [4] uses the axiomatic design presented by Suh [17] combined with DFMA (Design for Manufacturing and Assembly) tools to provide a method that assists designers in understanding how interfaces affect product modules, and that aids in the definition of their best position. According to the author, interfaces are functional surfaces that unite two or more modules and carry out at least one of these functions: provide support, transmit power, locate a part in an assembly, provide location for other parts, and transmit motion. Based on these definitions, it is possible to notice that there is a common sense that an interface is an area where there is a flow of energy, material, information
or, at least, a spatial interaction among two or more modules or parts [12]. Thus, standardized interfaces are fundamental to maintaining or increasing product flexibility [14]. To deal with the module interface problem, some design tools and methods have been presented. Pereira et al. [11] present a method that uses a tool called the Interface Evaluation Matrix to determine the relationship among modules, whose structure is very similar to the matrix proposed by Erixon et al. [1]. The differences between these two matrices lie in the parameters evaluated: the first evaluates compatibilities and interactions among modules, while the latter focuses on form compatibility and assembly time. These methods demonstrate the viability of the use of matrix-based tools in interface design. Fixson [2] created a model to evaluate interfaces based on product architecture. This model uses some product parameters to analyse and evaluate proposed interfaces in order to select the most suitable one. Interface design could also be the result of a standardization process. Whitney [22] states that interface standardization arises when:
• interfaces are submitted to heavy loads or tension;
• interfaces do not carry out a main function or affect the product performance;
• interfaces do not consume too many resources, such as space;
• economy of scale is needed;
• they can be designed regardless of the items united by them.
Module interfaces depend on product architecture. Methods such as the one proposed by Stone, Otto and Wood [16] are also useful to establish interface requirements, since they describe how modules connect and interact with other modules. This method uses three heuristics (dominant flow, branch flow and convert-transmit modules) to determine functional interactions and module possibilities. However, as can be seen from this literature overview, there are few design methodologies that focus on interface design. Only a few describe all the steps required to determine interface requirements, to provide the working principles, and to select the most suitable interfaces for a conceived modular architecture.
3 Methodology overview
The methodology structure is based on procedures usually employed in the conceptual design of a product. This structure allows engineers to discuss design alternatives for module interfaces in a similar way to the one they use to develop product concepts. Furthermore, a design methodology fosters and guides the abilities of designers, encourages creativity and, at the same time, drives home the need for objective evaluation of results [10]. A structured overview of the proposed methodology is presented in Figure 1. The first step of the interface design methodology is the definition of interface requirements. An interface requirement is defined as the set of functions that need to be performed by each module interface. To guide the establishment of interface
requirements, five standard interface functions, derived from Hillströn [4], are presented:
• Transmit energy – includes any exchange of energy, including kinetic, potential, human, electric, magnetic, hydraulic and pneumatic;
• Provide support – related to any resource developed to physically sustain modules connected by a particular interface;
• Locate component – linked to the module assembly;
• Provide location for other components – similar to the previous one, but related to assembly with other modules or components;
• Transmit information – includes cognitive exchanges between the module and the user, as well as logical interactions among modules.
Figure 1. Methodology for Interface Design – Overview.
Figure 2 shows an example of evaluation of the five standard interface functions for a cordless phone.
Figure 2. Standard interface functions for a cordless phone.
The use of a matrix relating all the modules of a product is proposed to aid in the definition of the interface requirements. This matrix makes it possible to evaluate
module interfaces one by one and to weigh the importance of the interface functions for each module interface. Figure 3 shows the matrix used to establish the interface requirements for the cordless phone example, based on the interface functions illustrated in Figure 2.
Figure 3. Matrix to establish interface requirements – cordless phone example. (The matrix crosses modules 01 (base) and 02 (phone), the user (US) and the environment (EN) against the five standard interface functions, marking each relationship as strong, medium or weak.)
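The content of such a matrix can be captured in a small data structure. The sketch below is illustrative only: the module names follow the cordless phone example, while the relationship strengths shown are assumptions rather than the values of the original figure.

```java
import java.util.EnumMap;
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative data structure for an interface-requirements matrix:
 *  each module interface is weighted against the five standard functions.
 *  Strength values below are assumed for illustration. */
public class InterfaceRequirements {
    enum Function { TRANSMIT_ENERGY, PROVIDE_SUPPORT, LOCATE_COMPONENT,
                    PROVIDE_LOCATION, TRANSMIT_INFORMATION }
    enum Strength { STRONG, MEDIUM, WEAK }

    public static void main(String[] args) {
        Map<String, Map<Function, Strength>> matrix = new LinkedHashMap<>();

        Map<Function, Strength> baseToPhone = new EnumMap<>(Function.class);
        baseToPhone.put(Function.TRANSMIT_ENERGY, Strength.STRONG);      // e.g. charging contacts
        baseToPhone.put(Function.PROVIDE_SUPPORT, Strength.STRONG);      // cradle holds the handset
        baseToPhone.put(Function.LOCATE_COMPONENT, Strength.STRONG);
        baseToPhone.put(Function.PROVIDE_LOCATION, Strength.WEAK);
        baseToPhone.put(Function.TRANSMIT_INFORMATION, Strength.MEDIUM);
        matrix.put("module 01 (base) <-> module 02 (phone)", baseToPhone);

        matrix.forEach((iface, reqs) -> System.out.println(iface + " : " + reqs));
    }
}
```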
The second step aims to search and define the working principles to carry out the interface requirements. The use of a morphological matrix is recommended to systematically gather the working principles found. The last step is to develop interface concept variants for each interface requirement and evaluate them to define the most suitable ones. Interface concept variants are created by combining working principles of the morphological matrix for each interface requirement. In order to evaluate and determine the most suitable interface concept variant, an evaluation matrix [13] is used. An evaluation matrix must be provided for each interface requirement. As evaluation criteria, eleven technical requirements related to the performance needs of an interface are proposed: tightness, interchangeability, assembly, disassembly, form, material, production, security and ergonomics, cost, maintenance, and power, energy and movements. The final result is a ranking of interface concept variants for each interface requirement. It is important to notice that the definition of the interface to be employed depends on other factors besides the interface design, such as module specification, interchange needs of the system, platform requirements and existing products and modules. All steps of the methodology for interface design for the example of the cordless phone are illustrated in Figure 4.
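A minimal sketch of the last two steps is given below, assuming hypothetical working principles, criteria, weights and scores: concept variants are generated by combining one working principle per interface function from the morphological matrix, and each variant is then scored with a simple weighted evaluation matrix.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of concept-variant generation and evaluation.
 *  Working principles, criteria, weights and scores are hypothetical placeholders. */
public class ConceptEvaluation {
    public static void main(String[] args) {
        // One row of a morphological matrix per interface function (hypothetical principles).
        String[][] principles = {
            {"spline shaft", "coupling", "belt drive"},   // transmit energy
            {"bolted flange", "snap fit"},                // provide support
            {"dowel pins", "keyed slot"}                  // locate component
        };

        // Enumerate all combinations (one principle per function) = concept variants.
        List<List<String>> variants = new ArrayList<>();
        variants.add(new ArrayList<>());
        for (String[] row : principles) {
            List<List<String>> next = new ArrayList<>();
            for (List<String> partial : variants) {
                for (String option : row) {
                    List<String> extended = new ArrayList<>(partial);
                    extended.add(option);
                    next.add(extended);
                }
            }
            variants = next;
        }

        // Evaluation matrix: score each variant against weighted criteria.
        double[] weights = {0.4, 0.3, 0.3};   // e.g. assembly, cost, maintenance (assumed weights)
        for (List<String> v : variants) {
            double score = 0;
            for (int c = 0; c < weights.length; c++) {
                score += weights[c] * (1 + Math.random() * 4);  // stand-in for expert scores from 1 to 5
            }
            System.out.printf("%-45s score %.2f%n", v, score);
        }
    }
}
```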
4 Practical Evaluation
This method was applied in the interface design of a modular product family for the mechanization of Brazilian mussel farming processes [15]. The project benefited particularly from this methodology, not only in driving the process, but also in providing a proper structure to register and recover engineering information regarding interface design. An example of the achieved results is shown in Figure 5, which describes the development steps for the interface between modules 1 (for gathering mussels inside the machines) and 6 (for agitating material in process), established during the
conceptual design of the product family (Figure 5-a). There were three interface functions for this particular interface requirement (Figure 5-b): transmit energy (torque and motion between modules), provide support (module 6 is a rotating shaft) and provide location (ease of assembly).
Figure 5-c presents the interface concepts for the interface between modules 1 and 6. It is possible to notice that all the interface concepts have a common working principle for the interface functions of providing support and locating components. This decision was based on the project constraints. Figure 5-d illustrates the evaluation matrix for the evaluated interface, which shows that a coupling (interface concept #3) was the most suitable option for this particular case. The final result of this interface design is presented in Figure 5-e, which illustrates the detailed design of modules 1 and 6 and shows a photograph of the fabricated interface.
5 Final Comments
The methodology presented in this paper aimed to show the viability of applying the same line of thinking used in product design to the development of module interfaces. This methodology describes all the steps required to determine interface requirements, to provide the working principles, and to select the most suitable interfaces for a conceived modular architecture. The results achieved in the initial use of the proposed method for the mechanization of mussel farming and processing emphasize the effectiveness of the proposed method as a guide for interface definition.
6 References
[1] Erixon G, von Yxkull A, Arnström A. Modularity – the Basis for Product and Factory Reengineering. In: Annals of the CIRP. 1996;v.45/1;p.1-6.
[2] Fixson SK. Product Architecture Assessment: A Tool to link Product, Process, and Supply Chain Design Decisions. Journal of Operations Management 2005;23(3/4):345-369.
[3] Gershenson JK, Prasad G, Allamneni S. Modular product design: a life-cycle view. Journal of Integrated Design and Process Science 1999;3(4).
[4] Hillströn F. Applying Axiomatic Design to Interface Analysis in Modular Product Development. Advances in Design Automation – ASME, DE, 1994;v.4-2.
[5] Hölttä KMM, Otto KN. Incorporating design effort complexity measures in product architectural design and assessment. Design Studies 2005;v.26;n.5;p.463-485.
[6] Ishii K. Modularity: A Key Concept in Product Lifecycle Engineering. Handbook of Life-cycle Engineering, 1998.
[7] Kusiak A, Huang C. Development of modular products. IEEE Transactions on Components, Packaging, and Manufacturing Technology 1996;Part A;v.19(4);p.523-538.
[8] Miller TD, Elgard P. Defining Modules, Modularity and Modularization. In: Proceedings of the 13th IPS Research Seminar, Fuglsoe, 1998.
[9] Newcomb PJ, Bras B, Rosen DW. Implications of Modularity on Product Design for the Life Cycle. Journal of Mechanical Design 1998;120(3);483-491.
[10] Pahl G, Beitz W. Engineering Design: A Systematic Approach. Great Britain: Springer-Verlag London Limited, 1996.
[11] Pereira M, Weingaertner WL, Forcellini FA. Design methodology for reconfigurable precision systems applied to a sclerometer development. In: 38th CIRP International Seminar on Manufacturing Systems, 2005.
[12] Pimmler TU, Eppinger SD. Integration Analysis of Product Decompositions. In: ASME Conference on Design Theory and Methodology, Minneapolis, MN, 1994;p.343-351.
[13] Pugh S. Total Design: Integrated Methods for Successful Product Engineering. Addison-Wesley Publishing Company, 1991.
[14] Sanchez R. Strategic product creation: managing new interactions of technology, markets, and organizations. European Management Journal 1996;14;121–138.
[15] Scalice RK, Forcellini FA, Back N. Development of a modular product family for the mechanization of mussel farming and processing in Santa Catarina. Product: Management & Development 2002;v.1;n.3;p.47-60. Available at: http://pmd.hostcentral.com.br/revistas/vol_01/nr_3/v1n3a05.pdf
[16] Stone RB, Otto KN, Wood KL. Product architecture. In: Product Design: Techniques in Reverse Engineering and New Product Development. Upper Saddle River: Prentice-Hall, 2001. p. 357-410.
[17] Suh NP. The Principles of Design. Oxford University Press, 1990.
[18] Ullman DG. The Mechanical Design Process. New York, USA: McGraw-Hill, 2003;415p.
[19] Ulrich K. The Role of Product Architecture in the Manufacturing Firm. Research Policy 1995;n.24;p.419-440.
[20] Ulrich K, Tung K. Fundamentals of Product Modularity (Issues in Design Manufacture/Integration). ASME, DE, 1991;v.39;p.73-79.
[21] van Wie M, Rajan P, Campbell M, Stone R, Wood K. Representing Product Architecture. In: ASME Design Engineering Technical Conferences – Design Theory and Methodology Conference, Chicago, IL, DETC2003/DTM-48668, 2003.
[22] Whitney DE. Mechanical Assemblies: Their Design, Manufacture, and Role in Product Development. Oxford University Press, 2004.
[23] Zeng Y. Environment-based formulation of design problem. Transactions of the SDPS: Journal of Integrated Design and Process Science 2004;v.8;n.4;p.45-63.
Reducing the Standard Deviation When Integrating Process Planning and Production Scheduling Through the Use of Expert Systems in an Agent-based Environment
Izabel Cristina Zattar a, Joao Carlos Espindola Ferreira b,1 and Paulo Eduardo de Albuquerque Botura c
a Ph.D. Student, Universidade Federal de Santa Catarina, Departamento de Engenharia Mecânica, GRIMA/GRUCON, Brazil.
b Lecturer, Universidade Federal de Santa Catarina, Departamento de Engenharia Mecânica, GRIMA/GRUCON, Brazil.
c Undergraduate Student, Universidade Federal de Santa Catarina, Departamento de Engenharia Mecânica, GRIMA/GRUCON, Brazil.
Abstract. The main objective of this work is the reduction of the standard deviation generated through the routing of production orders in manufacturing resources in a job shop environment. This reduction is obtained through the suggestion of the best machining route for each job order in a simulation, based upon historic simulation data (base of facts) and a set of rules. In order to generate these routes, an expert agent and the expert system were developed. This agent is responsible for processing the information and controlling the rule engine, whereas the expert system code is responsible for the rules that will be chosen to be loaded and applied by the agent. Keywords. Multiagent systems, Expert systems, Routing, Scheduling, Manufacturing
1 Introduction
In dynamic environments, a great obstacle to the integration between process planning and production scheduling is the lack of flexibility for the analysis of alternative resources during the allocation of jobs on the shop floor. In this phase the process plan is treated as fixed, that is, scheduling does not consider all the possible manufacturing combinations. According to Shen et al. [1], the integration problem of manufacturing process planning and scheduling becomes even more complex when both process planning and manufacturing scheduling are to be done at the same time.
1 Universidade Federal de Santa Catarina, Departamento de Engenharia Mecânica. GRIMA/GRUCON, Caixa Postal 476, 88040-900, Florianopolis, SC, Brazil. Tel.: +55(48) 3721-9387 extension 212; Fax: +55(48) 3721-7615; E-mail: [email protected]
In order to solve this problem, a multiagent system was proposed and developed that enables the use of process plans with on-line alternatives, besides helping real-time decision-making about part routes in flexible job shop (functional) layout environments. After implementing the system, a large number of tests was carried out, resulting in a database with more than twelve thousand simulations. By analyzing the results, it was observed that, despite shorter makespans and flow times being attained, the standard deviation was high when compared with other approaches found in the literature. As the problem is significantly complex, involving many parts, resources and alternative plans, an expert agent based on the JESS language (Java Expert System Shell) [2] was implemented which, through the application of rules, filters the information in the database of simulations and provides the system with an adequate suggestion of the route to be executed.
2 Characteristics of the System
The developed system is composed of a simulation model [3] developed in the Java language, according to the FIPA standard [4], using the JADE platform for agent development [5], the Eclipse development environment [6], and the MySQL database [7]. Firstly, the simulation model generates data related to the makespan, flow time, and queue time, based on the simulation results for sets of pre-determined production orders [8]. These data are stored in a database developed in MySQL. Table 1 shows the 24 types of production orders and the parts that compose them. The expert agent uses the knowledge acquired along these simulations to indicate to the production orders which is the best resource to be hired in the first round of negotiations between parts and resources during the following simulation. A detailed description of the negotiation protocol applied between the agents is given in [3]. The code developed in JESS is an interface between the expert agent, which runs in the Java memory domain, and the expert system and its rules, which in turn run in the JESS memory domain. In order for a routing suggestion to be generated, the user initially selects the parts to be analyzed and their respective set-up times (machine and fixturing). This information is then sent to the expert agent responsible for the interface with the expert system. Based on the chosen parameters, the database is searched for simulations that have a similar number of parts, set-ups and amounts. In this work the MySQL database is located on a local server, but it could be stored anywhere, in a distributed manner.
Table 1. Orders used in the simulation model [8]

Order   Parts
1       1, 2, 3, 10, 11, 12
2       4, 5, 6, 13, 14, 15
3       7, 8, 9, 16, 17, 18
4       1, 4, 7, 10, 13, 16
5       2, 5, 8, 11, 14, 17
6       3, 6, 9, 12, 15, 18
7       1, 4, 8, 12, 15, 17
8       2, 6, 7, 10, 14, 18
9       3, 5, 9, 11, 13, 16
10      1, 2, 3, 5, 6, 10, 11, 12, 15
11      4, 7, 8, 9, 13, 14, 16, 17, 18
12      1, 4, 5, 7, 8, 10, 13, 14, 16
13      2, 3, 6, 9, 11, 12, 15, 17, 18
14      1, 2, 4, 7, 8, 12, 15, 17, 18
15      3, 5, 6, 9, 10, 11, 13, 14, 16
16      1, 2, 3, 4, 5, 6, 10, 11, 12, 13, 14, 15
17      4, 5, 6, 7, 8, 9, 13, 14, 15, 16, 17, 18
18      1, 2, 4, 5, 7, 8, 10, 11, 13, 14, 16, 17
19      2, 3, 5, 6, 8, 9, 11, 12, 14, 15, 17, 18
20      1, 2, 4, 6, 7, 8, 10, 12, 14, 15, 17, 18
21      2, 3, 5, 6, 7, 9, 10, 11, 13, 14, 16, 18
22      2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 16, 17, 18
23      1, 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 17, 18
24      1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18
2.1 Implementation
The structure for entering and recovering information from the expert system code is based on the use of static functions implemented in Java, whereas the rules that are applied and the logic of the expert system were implemented in JESS. In this way, the expert system can be used by the simulation program, and interfacing takes place by including the class that contains the static functions in the Eclipse project. Static functions are portions of code that can be executed without belonging to any instantiated object, and can be run by any object that has access to the class that owns the function. For instance, supposing that there is a class called "Utility", with implemented static functions "Save" and "Send", an object of a given class called "Agent" can execute the functions "Save" and "Send" if the "Utility" class is visible to it. In this work it is necessary for the expert agent to execute a static function, passing a vector of simulations as a parameter. The output of this function is the best simulation result. The criterion for choosing what is considered the best simulation is based on a pre-established parameter, such as the shortest makespan for the manufacture of a given group of parts.
Thereafter the vector of simulations is swept in a programming loop included in the code of the static function, and in this way every selected simulation is added to the memory of the expert agent's active facts. Along this sweep, the enabled rule (or rules) are applied, selecting the best simulation to be considered. It is important to remember that, when instantiating a simulation object as a fact in the memory of the expert agent, the object and the fact become two different entities, but with the possibility of one being associated with the other. It is also important to note that, when instantiating a fact in the JESS memory from a Java object, a pointer is created with which the fact can be recovered. This occurs because, through such instantiation, a reference to this object is kept as a pointer, and when this fact returns to the Java program, it is possible to recover the agent to which it is related. The expert agent executes a function whose output is the reference to the object that represents the best simulation, and this is supplied by the expert system. Thus, the chosen Simulation object is passed to the Server Agent (described in detail in [3]), which suggests a route for the production orders.
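Stripped of the JESS machinery, the selection logic described above can be pictured as a plain-Java static function of the following kind. The class and field names are hypothetical stand-ins for the authors' Simulation objects, and the example values anticipate the worked example given later in Section 3.1.

```java
import java.util.List;

/** Plain-Java illustration of the "best simulation" selection performed by the
 *  expert agent; in the actual system the rule is applied in the JESS memory to
 *  facts instantiated from these objects. Names are hypothetical stand-ins. */
public class BestSimulationSelector {

    static class Simulation {                 // stand-in for the authors' Simulation object
        String id;
        double makespanMinutes;
        Simulation(String id, double makespanMinutes) {
            this.id = id;
            this.makespanMinutes = makespanMinutes;
        }
    }

    /** Static function: sweep the vector of candidate simulations and return
     *  the one with the shortest makespan (the pre-established criterion). */
    static Simulation smallerMakespan(List<Simulation> candidates) {
        Simulation best = null;
        for (Simulation s : candidates) {     // each candidate corresponds to a fact added to the rule memory
            if (best == null || s.makespanMinutes < best.makespanMinutes) {
                best = s;
            }
        }
        return best;                          // reference returned to the Server Agent
    }

    public static void main(String[] args) {
        List<Simulation> sims = List.of(new Simulation("simulation 11", 200),
                                        new Simulation("simulation 15", 180));
        System.out.println("Suggested route taken from " + smallerMakespan(sims).id);
    }
}
```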
3 Functioning of the System
3.1 Shortest Makespan Rule
This rule seeks to suggest the route for a given group of orders through the use of the shortest makespan found in its database of facts. As mentioned previously, for a rule to be implemented, it is necessary to instantiate Java objects as facts in the knowledge base of the expert agent, besides instantiating the Java objects from the facts in the knowledge base of this agent. These two instantiations allow two-way communication between the Java memory and the JESS memory. Figure 1 shows the shortest makespan rule and its life cycle.
Figure 1. Rule based on the shortest makespan
Firstly, the static function smallerMakespan is executed by the Server Agent, which is part of the simulation model developed in the JADE platform [3]. This function is responsible for sending to the Expert Agent the attributes of the simulation that are to be queried, such as the parts, amounts, and set-up times. In addition, this function identifies which standard deviation control rule (or rules) should be used among the three that were implemented, namely shortest makespan, shortest flow time, or shortest queue time. After this selection, the Expert Agent loads the selected rules into the JESS memory through the command setRules, and begins to search the database for all the simulations similar to the one that is to be controlled, adding them one by one to the JESS memory and instantiating them as facts through the includeSimulation function. In the JESS memory, the rules defined above are applied to each added fact, selecting the best simulation in each loop called applyRules. After all the recovered simulations have been added, the best simulation, i.e. the one with the shortest makespan for the requested parameters, is made available in the JESS memory through the object called bestSimulation. Finally, the Expert Agent executes a function whose output is a reference to the bestSimulation object, which is sent to the Server Agent, and then the JESS memory is cleaned and prepared for the next search. An example of the functioning of the proposed system is given as follows: consider that simulation 11, composed of the production orders related to parts 1, 5 and 7, with a machine set-up time of 30%, a fixturing set-up time of 10%, and a batch size equal to one, has a makespan equal to 200 minutes. With these same parameters, simulation 15 has a makespan equal to 180 minutes. Both simulations are stored in the MySQL database. During the loop of the function includeSimulation, both simulations 11 and 15, and all the other simulations that have similar parameters, are recovered from the database and instantiated as facts, but when selecting the simulation that outputs the best makespan, only simulation 15 is returned to the Expert Agent.
3.2 Queue Time Rule
In this rule the objective is to obtain the simulation in which a certain part has the shortest queue time, supposing a scenario in which the user wants to manufacture only a certain part with the smallest possible queue. Although this rule works similarly to the previous one, it is more complex in its implementation. Besides the simulation parameters that the Server Agent should supply to the Expert Agent, it is necessary to send the identification of the part whose queue time one wishes to optimize. This identification is made through its PartID, which is stored in the MySQL database. Figure 2 shows the shortest queue time rule and its life cycle. When the static function bestQueueTime is executed by the Server Agent, besides the attributes of the simulation that one wants to query, the identification of the chosen part is also sent. Thereafter the Expert Agent loads the selected rules into the JESS memory through the command setRules, beginning the search in the database for all the simulations similar to the one that is to be controlled, adding them one by one to the JESS memory and instantiating them as facts through the function includeSimulation.
At this point, when beginning the sweep of the vector to insert each Simulation object in the JESS memory, the rule includeJobs is fired in order to add all the parts with PartID, besides associating them with each Simulation object in it.
Figure 2. Rule based on the shortest queue time
Up to this point all the orders were stored in the database with all the jobs that compose each of them. Thus, it is necessary to filter only the PartID of the chosen part to be optimized, and to remove the remaining pieces of information from the JESS memory. That is done through the firing of the rule FilterPartID. Thereafter the best simulation, i.e. the one with the shortest queue time for the requested parameters, will be made available in the JESS memory through the bestSimulation object. Finally, the Expert Agent executes a function whose output is a reference to the bestSimulation, which is sent to the Server Agent, and then the JESS memory is cleaned and prepared for the next search.
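In plain Java, the queue-time variant differs only in that the facts are first filtered to the requested PartID (the FilterPartID step) before the minimum queue time is taken, as in the following illustrative sketch; class names, field names and queue-time values are hypothetical stand-ins.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

/** Plain-Java mirror of the shortest-queue-time rule: filter the facts to the
 *  requested PartID, then keep the simulation with the minimum queue time.
 *  Names and values are hypothetical stand-ins. */
public class BestQueueTimeSelector {

    record JobRecord(int simulationId, int partId, double queueTimeMinutes) {}

    static Optional<JobRecord> bestQueueTime(List<JobRecord> facts, int requestedPartId) {
        return facts.stream()
                    .filter(f -> f.partId() == requestedPartId)        // FilterPartID step
                    .min(Comparator.comparingDouble(JobRecord::queueTimeMinutes));
    }

    public static void main(String[] args) {
        List<JobRecord> facts = List.of(
            new JobRecord(11, 5, 42.0),   // hypothetical queue times
            new JobRecord(15, 5, 31.5),
            new JobRecord(15, 7, 12.0));
        bestQueueTime(facts, 5).ifPresent(best ->
            System.out.println("Best simulation for part 5: simulation " + best.simulationId()));
    }
}
```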
4 Obtained Results
The standard deviation decreased significantly in all the groups of parts simulated when compared with the simulations carried out without the routing suggestion in the first negotiation. Table 2 shows the simulation values obtained for a group
composed of 18 parts and 15 different resources, with a batch size equal to one. Each makespan in this table is the result of 20 replications, with machine set-up times varying between 0% and 70%, and a fixturing set-up time equal to 10%. It is observed that, for the machine set-up time of 30% and the fixturing set-up time of 10%, the average found for the makespan and the standard deviation was equal to 527.52 minutes and 15.40 respectively.

Table 2. Results without the use of the expert system and the shortest makespan rule

Set-up (machine_fixturing)   Makespan (min)   Standard deviation
0%_0%                        534.14           16.57
10%_10%                      527.40           17.74
30%_10%                      527.52           15.40
50%_10%                      531.68           19.84
70%_10%                      533.18           20.28
The same group of 18 parts and 15 resources, with a batch size equal to one, was then submitted to the Expert Agent. In order to demonstrate the reliability of the expert system, Table 3 shows the results obtained for 5 replications.

Table 3. Results with the use of the expert system and the shortest makespan rule
Makespan (min)   set up_0%_0%   set up_10%_10%   set up_30%_10%   set up_50%_10%   set up_70%_10%
Replic. 1        501.38         439.59           437.66           439.53           441.25
Replic. 2        501.31         439.62           437.50           467.56           441.25
Replic. 3        501.31         440.59           434.50           440.62           441.25
Replic. 4        505.31         440.59           434.50           439.53           429.25
Replic. 5        505.31         439.78           434.53           444.72           441.25
Mean             502.93         440.03           435.74           448.11           438.85
St. Dev.         2.18           0.51             1.68             13.16            5.37
By analyzing again the case with machine and fixturing set-up times of 70% and 10% respectively, it is now observed that the values obtained for the average and standard deviation decrease to 438.85 minutes and 5.37
respectively. This occurs because the expert agent does not work with the average of the replications of the simulation model, but with the best result obtained for a given set of parameters. The developed system was run on a Pentium IV 1.6 GHz computer with 1 GB of RAM, under the Microsoft Windows XP operating system.
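The means and sample standard deviations reported in Table 3 can be reproduced directly from the replication values; for instance, for the 70% machine / 10% fixturing set-up column:

```java
/** Reproduces the mean and sample standard deviation reported in Table 3
 *  for the 70% machine / 10% fixturing set-up column. */
public class ReplicationStats {
    public static void main(String[] args) {
        double[] makespans = {441.25, 441.25, 441.25, 429.25, 441.25};  // 5 replications from Table 3

        double mean = 0;
        for (double m : makespans) mean += m;
        mean /= makespans.length;

        double sumSq = 0;
        for (double m : makespans) sumSq += (m - mean) * (m - mean);
        double stdDev = Math.sqrt(sumSq / (makespans.length - 1));      // sample standard deviation

        System.out.printf("mean = %.2f min, st. dev. = %.2f%n", mean, stdDev);  // 438.85 and 5.37
    }
}
```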
5 Conclusions
This work proposed the integration of an expert system developed in JESS with a simulation model developed in Java, in order to reduce the standard deviation generated during the routing of parts in a job shop manufacturing environment. The model allows the use of a large number of rules, and an increase in their complexity, in order to optimize the routing of orders; the effectiveness of the order sequence is evaluated through the makespan, flow time, and queue time. The degree of optimization allowed by the expert system depends mainly on: (a) the amount of available data; (b) the quality of the rules; and (c) how well they reflect the empirical knowledge of the process planners. For future work, in order to further improve the obtained results, it is intended to add other artificial intelligence tools to the system, such as reinforcement learning and neural networks.
6 References
[1] Shen W, Wang L, Hao Q. Agent-Based Distributed Manufacturing Process Planning and Scheduling: A State-of-the-Art Survey. IEEE Transactions on Systems, Man and Cybernetics – Part C: Applications and Reviews 2006;36:563-571.
[2] JESS, the Rule Engine for the Java Platform. Available online. Accessed on: Dec. 13th 2007.
[3] Zattar IC, Ferreira JCE, Rodrigues JGGG, Sousa CHB. Integration between Process Planning and Scheduling Using Feature-Based Time-Extended Negotiation Protocols in a Multi-Agent System. To appear in International Journal of Services Operations and Informatics, 2008.
[4] Foundation for Intelligent Physical Agents (FIPA). Available online. Accessed on: Mar. 13th 2007.
[5] Jade Administrator's Guide, 2005. Available at: http://jade.tilab.com/doc/index.html. Accessed on: Jun. 1st 2007.
[6] Eclipse - an open development platform. Available online. Accessed on: Jan. 3rd 2008.
[7] MySQL. Available online. Accessed on: Dec. 5th 2007.
[8] Kim YK, Park K, Ko J. A symbiotic evolutionary algorithm for the integration of process planning and job shop scheduling. Computers & Operations Research 2003;30:1151-1171.
Extracting Variant Product Concepts Through Customer Involvement Model
Chao-Hua Wang a, Shuo-Yan Chou b
a Associate Professor, Department of Multimedia Design, NTIT, Taiwan, R.O.C.
b Professor, Department of Industrial Management, NTUST, Taiwan, R.O.C.
Abstract. In this article, a customer involvement model is used to establish an efficient support system for product concept development (PCD). The method proposed here, which integrates means-end chain (MEC), Kansei analytical aided design (KAAD) and simple multi-attribute rating technique exploiting ranks (SMARTER) techniques, is illustrated via a case study on the creative design of cell phones. From the case study, the prototype customer involvement in collaborative design (CICD)-enabled product conceptualization approach has demonstrated its effectiveness in the early stage of product development. Keywords. Customer involvement, Means-end chain, Kansei analytical aided design, SMARTER
1 Introduction
The beginning of PCD, when the design requirements are defined, is particularly important. Hence, an organization should put forth considerable effort in capturing the genuine or "real" voice of the customer (VoC) rather than focusing on technological issues during this stage of development. It is known that products should be designed with relevant functionalities, innovative attributes, aesthetic sensibilities, qualities, and values to meet the needs of customers. For SMEs to develop such products, it is a priority to have effective collaboration between marketers, users, managers, engineers and designers, who are possibly geographically distributed. In addition, designers need to differentiate customers' characteristics and determine their overall attributes. In this light, a novel collaborative design strategy that integrates user needs information into the product conceptualization procedure is necessary. Such a system is especially relevant to SMEs, which often have limited resources to materialize the benefits of customer-oriented strategies, and are often required to collaborate with other enterprises.1
Department of Multimedia Design, Taichung Institute of Technology, 129 Sanmin Road, Sec. 3, Taichung, Taiwan, R.O.C. Tel: +886 (4) 2219 6230; Fax: +886 (4) 2219 6231; Email: [email protected]
In order to interpret how users perceive the benefits produced by the product attributes, and what personal values these benefits reinforce, this investigation systematically integrates four modules: a customer involvement module for requirements elicitation, using means-end chain (MEC) techniques with factor analysis; a requirements interpretation and configuration mapping module, using function analysis (FA) methods; a module for transferring user preferences to design criteria, using the Kansei analytical aided design (KAAD) technique; and an alternative design module for variant user needs, using the simple multi-attribute rating technique exploiting ranks (SMARTER) to interpret the different combinations in implementing PCD. With the approach proposed here, designers are able to identify, with some degree of certainty, the attributes that are key design factors for creative product development through collaboration with customers over the Internet.
2 Customer involvement in conceptual design Due to multi-faceted and diversified market needs, the conventional encoding process in conceptual design procedures is no longer adequate for such applications. To avoid interpretation variances resulting from inefficient information transmission, this study subscribes to the concept of customers' direct participation in the process of conceptual design. Designers are expected to play active roles in information transmission, communicating directly with end-users instead of passively waiting for information from the marketing department. For instance, group participation involving various parties through simple and convenient environments would support designers in obtaining objective design assessments. The process is illustrated in Figure 1.
Figure 1. Customer involvement encoding process of concept development
With the participation of customers, designers can more thoroughly understand the precise demands of users, and can obtain, in real time, critical information for responding directly to market changes. To reap such benefits from customers' participation, several requirements are listed as follows.
- Open up the process of design context development to customers' participation. Interpretation differences between designers and users can be reduced by information sharing through interpretations and interactions of formalized cognitive constructs. The process will render customers design co-developers and contributors of specialized knowledge.
- Rapid and effective communication. The communication environment should be real-time and independent of geographical constraints. Multimedia formats of data transmission are necessary to enhance the system's efficacy.
- Knowledge formalization. In the process of conceptual design, designers are required to express the values and significance of products. Hence, the parameters and indices necessary for developing design assessments have to be established. These parameters and indices can be employed to analyze and interpret objective design information.
The details of the framework will be presented in Section 3. Figure 2 also depicts a web-based KAAD that was integrated for the purpose of customers' involvement. This sub-system will be elaborated in Section 4. With the proposed approach, designers will be able to identify the attributes that are key design factors for creative product development. This result, among others, will be demonstrated (in Section 5) via an application case of cell phone design based on the prototype CICD system.
Figure 2. System framework of KAAD within CICD
3 Eliciting and interpreting customers' needs 3.1 Methodology From the perspective of customer-oriented conceptual design, the following tasks in the early stage of product development are considered essential in accomplishing users' satisfaction and implementing designers' goals [1,2]. (1) Requirements elicitation. (2) Requirements interpretation. (3) Configuration mapping. (4) Alternative solutions generation. (5) Concept verification and specification. The rest of Section 3 describes the systematic approach to implementing the above tasks, accompanied by reviews of the literature in the related fields. The reviews of well-established theories in the fields of eliciting, interpreting and transforming customer needs support the proposed framework in this study. 3.2 Requirements elicitation based on MEC technique The MEC theory is based on the assumption that customers demand products because of the expected positive consequences of using them [3]. It links customers' knowledge of product attributes with their knowledge pertaining to the consequences and values. In the context of VoC interpretation, MEC theory is applicable in explaining how product preference and choice are related to the achievement of consumers' central life values [4]. In analyzing specific customer requirements, MEC illustrates the connections between product attributes, the consequences (benefit components) of using the product and the personal values, i.e. the means is the product and the end is the desired value state. In other words, products and services are seen as means to satisfy needs, which are conscious to a varying degree. Figure 3 shows the MEC via the Attributes-Consequences-Values (A-C-V) structure of users' product aspirations. The attributes are the concrete and tangible characteristics of a product; the consequences refer to what the product does or provides to the customer; and the values are intangible, high-order outcomes or ends. The values can also be viewed as cognitive representations of customers' most basic and fundamental needs and goals.
Figure 3. MEC via A-C-V structure of user’s product aspirations
For instance, Figure 4 illustrates a means-end chain in the context of cell phones. It shows that consumers purchase cell phones not only for communication
purposes but also as multi-tasking tools that enhance the quality of life in other ways. Integrated with data-collection methods such as content analysis, the laddering technique can be employed to understand customers' cognitive patterns surrounding various products; customers can be directed to clearly communicate their inner attributes, consequences and values. The outcomes of the MEC method elicited by laddering can take the form of a summary implication matrix (SIM).
Figure 4. A MEC via A-C-V for cell phones
The SIM represents the number of times each objective leads to each other objective. Figure 5 shows the summary implication matrix, which is a square matrix Z whose elements (Zij) reflect how often objective i leads to objective j. This is based on an aggregation across the direct linkages, i.e. without intermediation of other objectives, and the indirect linkages, i.e. with intermediation of other objectives, between the objectives in the ladders of the individual users [5].
Figure 5. A summary implication matrix for objectives linkages analysis
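To make the aggregation concrete, the following is a minimal sketch, in Java, of how individual A-C-V ladders could be accumulated into a summary implication matrix; the objective indices and ladder data are illustrative only and are not taken from the cell phone case study.

```java
import java.util.*;

/** Builds a summary implication matrix Z in which Z[i][j] counts how often
 *  objective i leads to objective j, either directly (adjacent objectives in
 *  a ladder) or indirectly (separated by intermediate objectives). */
public class SummaryImplicationMatrix {

    public static int[][] buildSim(List<int[]> ladders, int numObjectives) {
        int[][] z = new int[numObjectives][numObjectives];
        for (int[] ladder : ladders) {
            for (int from = 0; from < ladder.length - 1; from++) {
                // direct linkage: adjacent objectives in the ladder
                z[ladder[from]][ladder[from + 1]]++;
                // indirect linkages: objectives further up the same ladder
                for (int to = from + 2; to < ladder.length; to++) {
                    z[ladder[from]][ladder[to]]++;
                }
            }
        }
        return z;
    }

    public static void main(String[] args) {
        // hypothetical ladders: attribute 0 -> consequence 2 -> value 4, etc.
        List<int[]> ladders = Arrays.asList(new int[]{0, 2, 4}, new int[]{1, 2, 4});
        System.out.println(Arrays.deepToString(buildSim(ladders, 5)));
    }
}
```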
3.3 Function structure evolving from MEC A function structure can be established by function analysis, which provides the connections between users' concepts and designers' treatments. It can also be employed to analyze the mission and performance requirements of a product system, thereby decomposing them into discrete tasks or activities. The
structure is defined in terms of transformations in flows of matter (M), energy (E) and information (I), and of sub-functions, which consist of a number of parts and components. These sub-functions define the relationships by connecting their inputs and outputs so as to present a feasible system. Accordingly, designers can make the transition from a function structure to solution principles for the product to be developed by means of morphological analysis [6,7]. 3.4 Multiple attributes evaluation of alternative design concepts Two popular methods based on multi-attribute utility theory are SMART (simple multi-attribute rating technique) and SMARTER (simple multi-attribute rating technique exploiting ranks) [8]. The latter is an improved version of the former, and SMARTER is adopted for the evaluation process in this work. SMARTER assumes that the evaluator considers different attributes, and that every attribute's single-dimension utility in his or her mental framework is quantifiable. The method also assumes that the attributes' relative weights are quantifiable. It converts a complex multi-attribute decision problem into a series of single-attribute sub-problems. Each attribute's single-dimension utility function and weight are elicited separately, and they are integrated through a multi-attribute utility function.
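As an illustration of this aggregation step, the sketch below computes the rank-order-centroid weights commonly used with SMARTER and combines single-attribute utilities additively; the number of attributes and the utility values are assumed for the example and do not come from the case study.

```java
import java.util.Arrays;

/** SMARTER-style evaluation with rank-order-centroid (ROC) weights and an
 *  additive multi-attribute utility model. */
public class SmarterEvaluation {

    /** ROC weight of the attribute ranked k-th (1-based) out of n:
     *  w_k = (1/n) * sum_{i=k}^{n} 1/i. */
    public static double[] rocWeights(int n) {
        double[] w = new double[n];
        for (int k = 1; k <= n; k++) {
            double sum = 0.0;
            for (int i = k; i <= n; i++) sum += 1.0 / i;
            w[k - 1] = sum / n;
        }
        return w;
    }

    /** Additive aggregation: overall utility = sum_k w_k * u_k. */
    public static double overallUtility(double[] weights, double[] singleUtilities) {
        double total = 0.0;
        for (int k = 0; k < weights.length; k++) total += weights[k] * singleUtilities[k];
        return total;
    }

    public static void main(String[] args) {
        double[] w = rocWeights(4); // four attributes ranked by importance
        System.out.println("ROC weights: " + Arrays.toString(w));
        double[] u = {0.8, 0.6, 0.9, 0.4}; // hypothetical 0..1 single-attribute utilities
        System.out.println("Overall utility: " + overallUtility(w, u));
    }
}
```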
4 Bridging functional to emotional design 4.1 Design for emotion using KAAD A semantic approach has been used in the implementation of Kansei Engineering [9]. Sets of products sufficiently diverse to evoke a wide range of different emotional responses were used to elicit responses from samples of customers. The results, based on bipolar attribute rating scales, were statistically compared to provide distributions of products across different rating criteria. Analyzing the products that were rated highly on a particular characteristic allows researchers to draw conclusions on the relationships between the perceptual elements and the subjective judgments of customers. Kansei Engineering with semantic approaches, such as semantic differential analysis, can be utilized to facilitate collaborative 'design for emotion'. It involves the evaluation of concepts from an emotional perspective, and the generation of concepts with prediction of consumer behavior [10]. The KAAD portion of the framework proposed in this study (see Figure 2) is based on the above-mentioned Kansei Engineering Type II, the Kansei Engineering System (KES). Specifically, the approach of Kansei Engineering with the semantic differential (SD) method is established.
4.2 Implementing KAAD The Semantic Differential (SD) method is a highly applicable technique in this work. Based on the analysis of a correlation matrix, it measures customers' understanding of, and subjective feelings about, variations of objects (Figure 6). To elicit input data, this study applied an SD design over the semantic space of the product's superficial characteristics and a set of adjective pairs. Measurement scales for the adjective pairs are established by reducing and organizing a list, as the basis of the perception tests. Following the data collection, factor analysis by principal components is performed to identify the semantic axes, which serve to characterize the specific concept of the product's superficial characteristics. Subsequently, clustering analysis of the resulting structures groups similar customer perceptions into respective categories. As a result, the variant superficial product characteristics and the derived categories can be employed in the interpretation phase of customer needs, which aims to identify the technical criteria of the given product.
Figure 6. SD measures customers' understanding and subjective feelings with respect to variational objects (image set versus adjective pairs of the semantic space)
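As a simple illustration of how such ratings can be prepared for the subsequent factor and clustering analyses, the following sketch averages -3..+3 SD ratings into a profile for one product sample; the customers, adjective pairs and values are hypothetical.

```java
import java.util.Arrays;

/** Averages -3..+3 semantic differential ratings collected from several customers
 *  into a mean Kansei profile for one product sample. */
public class SemanticDifferentialProfile {

    /** ratings[customer][adjectivePair] for one product sample, each in -3..+3. */
    public static double[] meanProfile(int[][] ratings) {
        int pairs = ratings[0].length;
        double[] profile = new double[pairs];
        for (int[] customer : ratings) {
            for (int p = 0; p < pairs; p++) profile[p] += customer[p];
        }
        for (int p = 0; p < pairs; p++) profile[p] /= ratings.length;
        return profile;
    }

    public static void main(String[] args) {
        // three customers rating one cell phone sample on four adjective pairs
        int[][] ratings = { {2, -1, 3, 0}, {1, 0, 2, -2}, {3, -2, 2, 1} };
        System.out.println(Arrays.toString(meanProfile(ratings)));
    }
}
```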
5 An application case of CICD The CICD system proposed in this study exploits Kansei Engineering theories for product concept design. The system supports designers in arriving at product concepts that can satisfy customers' needs, which may be abstract values. The systematic approaches proposed in this paper are illustrated in an application case of cell phone design. After the customer preference parameters to be applied in the SMARTER approach had been established by SD analysis, customer specificity ratings (i.e. the descriptive mean values from -3 to 3 for each PAPs set) for all design attributes of the sample products were completed. The attribute preference
ratings can be elicited using SD and SMARTER. In the system prototype, KAAD allows the designers to look up suitable Kansei words and to extract the explicit/implicit design features of the products. Suitable Kansei words are the ones whose Kansei scores are close to the ideal value. Consequently, product concepts obtain a profile that reasonably corresponds to the expected results. When the weightings of the selected Kansei words give a particular product concept a ranking close to the ideal value, this indicates that its quality is attractive with respect to user requirements. In many cases, design experts are more aware of the users' demands than the users themselves. In such cases, Kansei data can be used with the SMARTER technique to identify customer needs and to determine their importance level, and thereby determine the desirable product features for design. Figure 7 presents different product recommendations that were ranked by the priority values of Kansei words. According to the recommendations, product features such as style, interface, functionalities and specifications were extracted and integrated with the results of the previous analysis to implement the variant cell phone conceptual design.
Figure 7. KAAD presents the different recommendations
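A minimal sketch of the ranking idea follows, assuming SMARTER-style weights for a few Kansei words and a weighted deviation from the ideal Kansei values; the concept names, weights and scores are invented for illustration and are not the case-study data.

```java
import java.util.*;

/** Ranks candidate product concepts by how closely their weighted Kansei scores
 *  match the ideal values: the smaller the weighted deviation, the higher the rank. */
public class ConceptRanking {

    /** Weighted absolute deviation from the ideal Kansei profile. */
    public static double deviation(double[] weights, double[] scores, double[] ideal) {
        double d = 0.0;
        for (int k = 0; k < weights.length; k++) {
            d += weights[k] * Math.abs(scores[k] - ideal[k]);
        }
        return d;
    }

    public static void main(String[] args) {
        double[] weights = {0.4, 0.35, 0.25}; // e.g. SMARTER weights of three Kansei words
        double[] ideal   = {2.5, 2.0, 1.5};   // ideal Kansei values
        Map<String, double[]> concepts = new LinkedHashMap<>();
        concepts.put("Concept A", new double[]{2.2, 1.8, 1.0});
        concepts.put("Concept B", new double[]{1.0, 2.4, 1.6});
        concepts.entrySet().stream()
                .sorted(Comparator.comparingDouble(
                        (Map.Entry<String, double[]> e) -> deviation(weights, e.getValue(), ideal)))
                .forEach(e -> System.out.println(e.getKey()));
    }
}
```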
6 Conclusions Overall, the prototype CICD system based on the proposed approaches has demonstrated its effectiveness in user requirements elicitation, product concept interpretation and design stakeholder collaboration in the early stage of new product development. This study can serve as a reference for organizations and designers seeking to gain a competitive edge in the context of NPD.
7 References [1] Calantone, R.J., Chan, K. and Cui, A.S., Decomposing Product Innovativeness and Its Effects on New Product Success, Journal of Product Innovation Management; Volume 23 (5);2006; 408-421.
[2] Yan, W., Khoo, L.P. and Chen, C.-H., A QFD-enabled Product Conceptualisation Approach via Design Knowledge Hierarchy and RCE Neural Network, Knowledge-Based Systems; 18(6); 2005; 279-293. [3] Walker, B.A. and Olson, J.C., Means-end chains: Connecting products with self, Journal of Business Research; 22(2); 1991; 111-119. [4] Jolly, J.P., Reynolds, T.J. and Slocum, J.W., Application of the Means-End Theoretic for Understanding the Cognitive Bases of Performance Appraisal, Organizational Behavior and Human Decision Processes; 41; 1988; 153-179. [5] Pieters, R.M., Baumgartner, H. and Allen, D., A means-end chain approach to consumer goal structures, International Journal of Research in Marketing; 12; 1995; 227-244. [6] Bagozzi, R.P. and Dabholkar, P.A., Consumer Recycling Goals and Their Effect on Decisions to Recycle: A Means-End Chain Analysis, Psychology & Marketing; Vol. 12; 1994; 245-256. [7] Pasquale, C., Michele, G. and Ferruccio, M., Aesthetic and functional analysis for product model validation in reverse engineering applications, Computer-Aided Design; 36(1); January 2004; 65-74. [8] Edwards, W. and Barron, H.F., SMARTS and SMARTER: improved simple methods for multiattribute utility measurement, Organizational Behavior and Human Decision Processes; 60; 1994; 306-325. [9] Kohritani, M., Watada, J., Hirano, H. and Yubazaki, N., Kansei Engineering for Comfortable Space Management, Knowledge-Based Intelligent Information and Engineering Systems; Volume 3682; 2005; 1291-1297. [10] Bouchard, C., Lim, D. and Aoussat, A., Development of a Kansei Engineering System for industrial design: Identification of input data for KES, France; 2004.
QFD and CE as Methodologies for a Quality Assurance in Product Development
Kazuo Hatakeyama 1 and José Ricardo Alcântara 2
Federal University of Technology – Paraná
Abstract. Quality Function Deployment (QFD) combined with Concurrent Engineering (CE) is devised as a support tool for a competitive strategy in product development. In this study, beyond the proposed method, it is intended to develop relations with innovation models, arrangements of innovation and technology transfer, learning in organizations, and how the diffusion of knowledge occurs. QFD can also be one of the main tools of CE, as it identifies the customer's main requirements and translates them into the features required for products. A field survey of an exploratory and descriptive type, using a questionnaire as the data collection technique, was carried out among manufacturing companies in fast-growing sectors, such as the automobile industry, in the State of Paraná in southern Brazil. The sample companies were selected intentionally, through an accessibility criterion, to guarantee the return of answers. The survey indicates that QFD is still little used; the reasons that can be pointed out for this are the use of "home made" methodologies to fulfill customers' requirements, lack of awareness of the methodology, and the lack of adequate training in the use of QFD. It is expected that the findings, if disseminated adequately among local companies, will help to enhance competitive performance beyond the local market scenario. Keywords: Quality Function Deployment; Competitive Strategy; Concurrent Engineering; Development Management; Quality Assurance.
1 Introduction The QFD method was created to aid management in the product development process, allowing each stage of the process flow to be followed up. For this, a set of activities should be performed and completed so that the whole development process is consolidated efficiently and attains the goals defined at the beginning of the project work. The method can be applied to products and services, as well as to intermediate products between clients and suppliers. It can also be applied to improve existing products as well as to develop new ones. Moreover, it is a method
Av. Monteiro Lobato s/n km 0, ponta Grossa, PR, Brazil Telephone: +55 21 41 3220-4878, [email protected] 2 Av. Sete de Setembro nº3165, Curitiba PR, Brazil Telephone: +55 21 41 33104616, [email protected]
to develop products with quality, aiming to fulfill clients' satisfaction. The hope is that this work can be opportune for the implementation of a technique or strategy of interaction between product development stages. This strategy allows better negotiation among specialized teams, assuring the alignment of projects and the compatibility of the designed quality values, besides reducing costs and making it possible to meet clients' demands. For this aim, it is believed that the implantation of a working system based on the CE model becomes paramount for the success of the QFD model.
2 Model Analysis of this method within the CE model shows conformity between the strategic and operational scopes (means versus operational requirements), since it serves mainly as a tool for development and engineering activities, as well as for the quality of scientific and technical services. According to [1], "since the life cycle of new products is becoming shorter and shorter, while technology advances more quickly than ever before, the competitive advantage obtained from technological advancement is quite often of short duration; thus, the innovation effort in modern, competitive enterprises needs to be agile and effective". Therefore, the application of the methodology proposed in this paper becomes essential, within the scope of the enterprise strategy model cited by [2], in enterprises with an aggressive strategy: the main feature of enterprises that seek market leadership, exploit new possibilities and invest in basic research and forecasting, while regarding experimental development as essential. Such enterprises invest in hiring scientists, technologists and technicians, with the R&D department as the backbone of the organizational structure [3].
3 Proposal The QFD model is proposed as an assurance of interaction between stages, through the overlapping of tasks, together with the CE model used for product development. The matrices of the system allow the coordinator to promote negotiation between specialized teams, assuring the alignment of projects and the "compatibility" of the designed quality values. This avoids late modifications of projects, saving rework and making the overlapping of activities viable [4]. Information related to the project must be directed to meet the interests of each member involved. For [5], the information should not only be available, but should also arrive in a timely manner and, above all, in the right place. The adequate management of this flow of information becomes crucial for the success of development methods and models such as QFD and CE. The planning activities are characterized by the need for a quick and effective process of generation and diffusion of knowledge. Today's market demands push companies to be innovators, to pay attention to the cost of the product, to the quality of the product and process, to possess flexibility of
volume and of demand, and to seek the continuous reduction of the time for product development, among other requirements. It is therefore fundamental that the interaction and exchange of ideas between the several teams involved happen in an efficient manner, without loss of time either in waiting for information or in repeating work due to the supply of incorrect information, for example regarding relevant changes to a given project. In this manner, there is a need for collaboration, to form a suitable multifunctional and multidisciplinary team, so as to complete the teams in all the functionalities and perspectives required for product and process development, which matches the proposal of QFD and of CE. It is a question of engineering viewed from a managerial perspective. An expert author [6] states that QFD and CE are fundamental for the "management of development": they seek, through the integration of multidisciplinary and multifunctional teams, the control of the whole development, from its conception until the launch of the product, in order to meet the clients' requirements. Unless sufficient training and practice of the team in the use and application of these methods are provided, the existing auxiliary models can be dismissed as too difficult or useless. Lack of sufficient awareness of the application and its value can lead to misunderstandings about the real value of its application and the results achieved. Time and practice are critical factors for the success of these methodologies [7]. Thus, the formation of a multidisciplinary or interfunctional team is strongly recommended, to avoid losing insights important for the success of this method [3]. Taking these factors into account, [8], within his model, tries to specify the type of problem of interest for the application of QFD. He considers that the application of the method happens when the problems in development are already "well-structured or well-defined", such as when the improvement objectives to be achieved or the development are already clearly stated. This model of interest, dependent on the logic of structure and individual reasoning, relates to two main resources: information (collection, processing and distribution) and work (structure, provision and execution). The same author [8] also uses the questions "WHY", "WHAT" and "HOW" to define each of these resources in his "guide for intervention". All the activities encompassed in the application of QFD, i.e., all the procedures for elaborating and interpreting the tables and matrices utilized, are tasks that vary for each case, depending on the dimension of the defined objectives. In this context, the knowledge of the technical team (education, training and experience), already mentioned, becomes very important for achieving the planned objectives. In the remainder of this paper, the results of the field survey carried out in 2003 are presented. From these results, it becomes clear that the implementation of QFD is strongly related to the practice of CE, and to the measurable and non-measurable benefits highlighted by several authors as a consequence of its application.
4 Survey This survey aimed, among other things, to relate the use of the QFD model with the CE model. The sample was formed by 32 companies among the best and/or the biggest in the State of Paraná, according to regional and/or national records (publications regarding the companies that stand out annually). Answers were returned by 27 companies, a return rate of approximately 73%; of this total, 100% answered the data collection tool. The questionnaire was the tool utilized for data collection since, according to [9], it constitutes the main technique available so far to obtain reliable data. To verify the trustworthiness of the data collection tool, a pilot case study (the sample test, according to [10]) was performed in two selected companies, with the R&D managers, with the tool almost in its finished shape. The sectors of activity of the companies that answered the questionnaire were: "foods" and "automotive", the largest representatives, with 18.5% of the cases each; "electro-electronics" with approximately 15% of the cases; "paper and pulp" with 11% of the cases; "industry and commerce" and "telecommunications", both with approximately 7.4% of the cases; and, with 3.7% of the cases each, "wholesale and external commerce", "beverage", "hygiene", "R&D", "chemical" and "services". For the analysis of the answers to this survey, aiming to evaluate the level of implementation of each analyzed characteristic, it was necessary to establish classification ranges for each question that uses the Likert scale, as can be seen in Table 1.

Table 1. Classification of the levels of implementation for each item analyzed (classification associated to the Likert scale; values result from the summation of points per item analyzed)

Number of companies   Low (L) (minimum score)   Medium (M)    High (H) (maximum score)
1                     1 (DS) to 2 (D)           3 (I)         4 (A) to 5 (AS)
3                     3 to 6                    7 to 11       12 to 15
4                     4 to 8                    9 to 15       16 to 20
20                    20 to 40                  41 to 79      80 to 100
27                    27 to 54                  55 to 107     108 to 135

DS – Disagree Strongly; D – Disagree; I – Indifferent; A – Agree; AS – Agree Strongly.
Given this requirement, it was considered convenient in this survey to rely on the classification criteria adopted by [11]. Thus, three levels of implementation associated to the Likert scale were established: Low (companies are far from reaching the analyzed items), Medium (companies are close to reaching the analyzed items) and High (companies are adopting the analyzed items). The same criteria are adopted for the variations of the Likert scale shown in Table 2.
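A minimal sketch of this scoring scheme follows, assuming the ranges generalize from Table 1 as Low in [n, 2n], Medium in [2n+1, 4n-1] and High in [4n, 5n] for n respondents; this generalization is inferred from the table rather than stated by the authors.

```java
/** Classifies a summed Likert score for n respondents into Low/Medium/High,
 *  following the pattern visible in Table 1. */
public class LikertClassification {

    public static String classify(int respondents, int summedScore) {
        int n = respondents;
        if (summedScore < n || summedScore > 5 * n) {
            throw new IllegalArgumentException("score out of range");
        }
        if (summedScore <= 2 * n) return "Low";
        if (summedScore < 4 * n) return "Medium";
        return "High";
    }

    public static void main(String[] args) {
        System.out.println(classify(27, 112)); // High, as in the 27-company row of Table 1
    }
}
```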
Table 2. Variations of the Likert scale

Punctuation of the scale   Likert scale              Variations in some questions                  Classification
1                          Disagree Strongly         Very bad / There wasn't / Indifferent         Low – L
2                          Disagree                  Moderate / Few / Eventually                   Low – L
3                          Indifferent or neutral    Indifferent / Satisfactory / Will be good     Medium – M
4                          Agree                     Good / Good / Important                       High – H
5                          Agree Strongly            Very good / Very good / Very Important        High – H
Starting with the analysis of the formation of development teams, it can be seen that in almost all companies there is a quality representative. From this fact, the concern with the quality management system can be understood as highly relevant to the development process, as shown in Fig. 1.
Figure 1. Constituents of the development teams (percentile of the answers, %)
The authors of [12] state that competitive product development requires the application of methodologies and techniques that increase speed, efficiency and effectiveness. In view of this, product development was evaluated in the same context presented by [11], i.e., product development occurring in the domain of CE. Fig. 2 shows the strong trend of companies to apply the principles most representative of CE. According to the punctuation and classification scale, "Leader to coordinate the product development process" (110 points - H) and "Design for manufacture" (108 points - H) stand out, i.e., a greater approximation between engineering and production. However, it can be seen that almost all items fall close to full applicability. The items, no less important but with lower punctuations, were "computer tools" (98 points - M) and "multidisciplinary teams" (100 points - M). In the opinion of [13], the use of specialists from several fields is an organizational strategy for the application of CE, as it is for [8] in the development of QFD.
According to [14], decisions about the project should involve people who hold relevant knowledge about several sectors of the company, since they are the ideal people to act during the development activities; hence the importance of adopting multidisciplinary teams. Regarding the affirmation that the company systematically adopts CE in projects, some companies preferred to answer "No" (26% of the cases) or "Eventually" (18% of the cases). Regarding these data, it can be considered, according to the survey presented by Schneider (apud [11]), that few companies in Brazil form an adequate view of what CE is, due to the lack of knowledge of its concepts, and mainly of its domain.
Figure 2. Position of companies related to the concurrent engineering characteristics
Companies that practice CE, or that eventually practice it, amount to 74% of the cases. The results can be evaluated using several approaches, such as Pareto analysis. The Pareto graph presents "Limited availability of persons for projects", "Getting over required alterations with delay" and "Internal communication problems" as the main difficulties faced by those who seek to adopt CE. This fact can be associated with some conditions, such as: the use of a traditional (departmental) organizational structure, which is not recommended for the development of CE ([11]; [15]); the need to integrate several computer tools to support activities within CE [16]; the overload of information for each person involved [5]; and the tendency of some organizations to initiate the CE implementation process without adequate planning, causing several deficiencies in its application [11]. Nevertheless, the improvement of these characteristics can be related to the correct application of tools associated with CE. Fig. 3, therefore, illustrates the main difficulties faced by the companies in adopting CE in product development projects. Group 1 – Lack of experience and training versus knowledge and training. Group 2 – Structural difficulties and lack of support versus objective definition and managerial support. Group 3 – Lack of commitment and conflict of opinions versus improvement of communication, improvement of the work, and evaluation of the method. Group 4 – Inadequate use versus adequate use.
Figure 3. Pareto graph of problems that prevented or obstructed the practice of CE (percentile of the answers, %)
As evidenced, QFD is a method that requires time for its execution, and this time is gradually reduced as knowledge and experience with its practice increase. From these data it can also be added that companies using computational means for development represent 81.5% of the cases, with good integration of these tools in 59.1% of the companies that use them. Those that adopt CE amount to 55%, and those that eventually adopt it to 19% of the cases. Therefore, it can be seen that this method, despite its wide acceptance in the development sector, also reveals conceptual deficiencies in its practice. There is a strong relation among the benefits related to communication, (multifunctional) teamwork and training of the team and the organization, which means a gain in the quality of development.
5 Conclusion The organizational structure adopted by the majority of the companies, which is departmental or conventional, is not the one best adapted to the application of methodologies such as QFD, besides not providing adequate means to form multidisciplinary teams. More detailed surveys should be carried out regarding this possible adaptation problem. As a result, it was noticed that QFD is still scarcely used, the lack of experience being the major problem faced during implementation. This can be directly related to the lack of adequate training and better knowledge of the methodology. A strong tendency of companies to apply the principles most representative of CE was also noticed. However, few companies in Brazil form an adequate view of what CE is, which may be due to the lack of knowledge of its concepts, and mainly of its domain. Organizations whose strategy aims at aggressive competition through innovation should use methods that allow them to be competitive. They cannot be competitive in the national or international market unless they define and operationalize the requirements of the customers. According to our focus, the product development process depends above all on knowledge, involving
practices, computer tools, and the enhancement of people's skills. Due to the continuing evolution of information technology, there is intense work on actions involving knowledge, with information as a means to acquire knowledge. In particular, QFD and CE represent an advance and consolidate a set of methods, techniques, and organizational structures to improve the ability to develop new products. Knowledge is not regarded as static; it is contained in processes and people, and to a great extent it is found in sophisticated methods and in the sharing of tasks for the solution of problems.
6 References
[1] Sbragia, R. Trabalho em equipe e inovação tecnológica, Revista de Administração, São Paulo, v.28, n.1, p.36-43, Jan./Mar., 1993.
[2] Freeman. Motivação e estratégias empresariais, In: Carvalho, H. G. PPGTE/CEFET-PR. 51 transparências: coloridas. (Material da disciplina Tecnologia e Inovação), 2000.
[3] Guimarães, L. M. QFD: ferramenta de suporte a estratégia competitiva, Revista CQ Qualidade, São Paulo, p.50-54, janeiro, 1996.
[4] Peixoto, M. O. C. Uma proposta de aplicação da metodologia desdobramento da função qualidade (QFD) que sintetiza as versões QFD-estendido e QFD das quatro ênfases, EESC, Dissertação (Mestrado) – USP, 1998.
[5] Romero Fo., E. A Contribuição do CAD para Implementação da Engenharia Simultânea. In: 1º CBGDP, Belo Horizonte, Anais, Belo Horizonte: UFMG, 1999, v.1, p.177-185.
[6] Akao, Y. QFD: Past, present, and future, International Symposium on QFD, Linköping, 1997.
[7] Klink, B.; Schlicksupp, H. Criatividade: uma vantagem competitiva, Rio de Janeiro: Qualitymark, 1999.
[8] Cheng, L. C. QFD em desenvolvimento de produto: características metodológicas e um guia para intervenção. Revista Produção Online, Florianópolis, v.3, n.2, 2003.
[9] Gil, A. C. Métodos e técnicas de pesquisa social, 4.ed. São Paulo: Atlas, 1995.
[10] Marconi, M. D. A.; Lakatos, E. M. Técnicas de pesquisa: planejamento e execução de pesquisas, amostragens e técnicas de pesquisas, elaboração, análise e interpretação de dados, 3.ed., São Paulo: Atlas, 1996.
[11] Costa, C. C. E. G. A engenharia simultânea em empresas do setor industrial brasileiro: sua utilização e alternativa de difusão, Dissertação (Mestrado em Tecnologia) – PPGTE, CEFET-PR, 1998.
[12] Peixoto, M.O.C.; Carpinetti, L.C.R. O QFD como facilitador da engenharia simultânea, In: 1º CBGDP, Belo Horizonte, Anais, Belo Horizonte: UFMG, v.1, p.142-151, 1999.
[13] Azevedo, H. J. S.; Sato, G. Y. Gestão do conhecimento em equipes multifuncionais: estudo do núcleo de pesquisa em engenharia simultânea, In: ISDM98, Anais, Curitiba: PUC, v. único, p.227-235, 1998.
[14] Nascimento, C. A. A. M. Aplicação do QFD para identificar pontos críticos do processo de desenvolvimento de produtos a partir dos dados de assistência técnica, Belo Horizonte, Dissertação (Mestrado em Engenharia de Produção) – UFMG, 2002.
[15] Vasconcelos, E. Estrutura das organizações, São Paulo: USP, 1989.
[16] Silva, S. L.; Rozenfeld, H. Estruturação dos conhecimentos envolvidos no desenvolvimento do produto com base em um cenário de engenharia simultânea, In: 1º CBGDP, Belo Horizonte, Anais, Belo Horizonte: UFMG, v.1, p.104-113, 1999.
Information Systems
Integration of Privilege Management Infrastructure and Workflow Management Systems
Wensheng Xu a,1, Jianzhong Cha a and Yiping Lu a
a School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing, China
Abstract. Workflow management systems, especially web-based ones, play an important role in concurrent engineering by supporting the management of dynamic product development processes over the Internet. Most existing workflow systems, however, only provide limited security services such as authentication and authorization of users for workflow applications. A full-fledged, flexible authorization scheme, the Privilege Management Infrastructure (PMI), has yet to be fully integrated with workflow management systems to enhance the security of workflow applications. In this paper, the security weaknesses of existing workflow management systems are first analyzed, then two different approaches to integrating PMI with workflow management systems are proposed, and the enforcement of workflow security policies in workflow systems is also analyzed. Keywords. Workflow management systems, privilege management infrastructure.
1 Introduction Workflow management systems are widely used in concurrent engineering (CE) to manage and automate product development processes. Web-based workflow management systems (WFMS) can integrate distributed processes across or within enterprise boundaries in CE with the support of standard web browsers. Since the users of a web-based workflow management system can be distributed dynamically anywhere on the Internet, system security is a concern for the workflow management system. Appropriate authentication and authorization mechanisms are required for a WFMS to ensure the security of both the users and the workflow management system itself. In 1998, the Workflow Management Coalition (WfMC) proposed a simple guideline for security management in WFMS [7], but there is still much research work to be done on security management in WFMS. Most of the research in this field aims at strengthening the authorization mechanism for end users' access to performing tasks in workflows, based on role-based access control [4, 1], but the authorization mechanism for protecting the interfaces between different components in distributed workflow systems is not sufficiently
Corresponding Author E-mail: [email protected]
considered. Thus the reliability and strength of the authorization mechanisms in current workflow systems are still not satisfactory, so the integration of a full-fledged Privilege Management Infrastructure (PMI) with workflow systems is still needed to enhance the security level of general workflow systems. In this paper, the potential security weaknesses of general workflow systems are first analyzed (Section 2), then two basic models for integrating PMI with workflow systems are proposed (Section 3), the enforcement of workflow security constraints in workflow systems is analyzed (Section 4), and finally the conclusion is given (Section 5).
2 Security weaknesses of ordinary workflow management systems Although the functions, features and user interfaces of the various workflow management systems on the market differ greatly from each other, they still share some common functionalities, components and the generic basic structure that defines these software systems as workflow management systems. Within this generic workflow system structure, several general interfaces can be identified which enable workflow products to interoperate at a variety of levels. The general interfaces in the workflow reference model proposed by the WfMC are shown in Figure 1 [6]. There are generally five interfaces between the workflow enactment service and five other components: the workflow modeling and definition tools (Interface 1), the workflow client applications (Interface 2), the invoked applications (Interface 3), other workflow enactment services (Interface 4), and the administration and monitoring tools (Interface 5). These interfaces take the form of unified workflow APIs and interchange formats for the five respective functional areas, which regulate the interactions between the workflow control component (the workflow engine) and the other system components. Through these five interfaces, external entities, either human users or software applications, are able to perform certain actions on the workflows, i.e. execute or manipulate them, so protection of these interfaces is important. Unfortunately, protection of these five interfaces is not yet fully considered by current workflow applications, and a range of potential security threats exists for the generic workflow management system. Without proper authentication and authorization measures, any entity conforming to the five interface standards can interact with the workflow engine, so potential tampering with the workflow engine is possible. PMI can be adopted in workflow systems to better enforce security policies regarding the various entities at the five interfaces. The core component of a workflow system is the workflow engine, and there are different entities in the workflow systems that need to have access to it. PMI can be employed at the five interfaces to manage access control to the workflow engine and hence increase the security level of workflow systems.
Figure 1. Components and Interfaces in generic Workflow Management Systems [6]
3 Integration models for PMI and workflow systems PMI is an infrastructure for access control management based on the attribute certificate framework, which follows the ITU-T recommendation on directory systems specification [5]. One PMI implementation is PERMIS [2]. In PMI there are several basic components, including the source of authority (SoA), attribute authority (AA), attribute certificate (AC), access control policy certificate, AC repository and policy repository. These components can be grouped into several sub-systems in PMI: the privilege allocation sub-system, the access control policy management sub-system, the privilege verification (PV) sub-system and the privilege decision point (PDP) sub-system. To incorporate PMI into workflow systems to address different security concerns, according to the connections between the PMI service and the workflow system, there are basically two different integration models: the parallel integration model and the embedded integration model. In the parallel integration model there is only one connection point between the PMI service and the workflow engine, while in the embedded integration model there are more connection points between the PMI service and the workflow engine as well as other workflow components in the workflow system. 3.1 Parallel integration model In the parallel integration model, PMI serves as a separate application package and provides an access control service for the workflow engine in the workflow system, and the API interface exists only between the workflow engine and the PMI
service, as shown in Figure 2. In this figure, other parts of the workflow system are omitted for clarity. In this model, the PMI service does not need to know the internal structure of the workflow system; all the information required for access control is provided by the workflow engine. This integration model can serve all five interfaces of the workflow system. For Interface 1, process definitions should be defined and signed by an SoA or its delegated agent in the form of process certificates; the workflow engine can then decide, through the service of PMI, whether a process definition is trusted before it is instantiated and executed. For Interface 2, users are issued with attribute certificates by AAs, and the workflow engine decides whether the users are allowed to perform workflow tasks in workflows. For Interface 3, external applications are signed by an SoA or its delegated agent, and only authorized applications can be invoked by the workflow engine. For Interface 4, different workflow enactment services, i.e. different workflow engines in different workflow domains, can be issued with attribute certificates by a common SoA or by SoAs recognized by each other; then only authorized workflow engines can communicate and cooperate with each other, so that interoperability based on PMI can be achieved. For Interface 5, workflow administration tools can also be issued with attribute certificates, so that only authorized administration tools can operate on the workflow engines; thus interoperability and security between various workflow administration tools and various workflow engines can be ensured.
Figure 2. Parallel integration model for PMI and workflow system
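The following minimal sketch illustrates the single connection point of the parallel model, assuming a generic decision interface; the class and method names are invented for illustration and do not correspond to the PERMIS API or any specific product. The request carries the items listed later in this section: the user's attributes/roles, the requested operation and its parameters, the target object, contextual information, and the workflow process instance.

```java
import java.util.List;
import java.util.Map;

/** Generic, hypothetical PMI decision interface used by the workflow engine. */
interface PmiDecisionService {
    boolean isGranted(List<String> userRoles,
                      String operation,
                      Map<String, String> operationParameters,
                      String targetTask,
                      Map<String, String> context,
                      String processInstanceId);
}

/** The engine delegates every Interface 2 access decision to the PMI service. */
class WorkflowEngine {
    private final PmiDecisionService pmi;

    WorkflowEngine(PmiDecisionService pmi) { this.pmi = pmi; }

    boolean tryAssignTask(String processInstanceId, String task, List<String> roles,
                          Map<String, String> context) {
        return pmi.isGranted(roles, "performTask", Map.of("task", task),
                             task, context, processInstanceId);
    }
}

public class ParallelIntegrationSketch {
    public static void main(String[] args) {
        // toy PMI service that only allows the "engineer" role to perform tasks
        PmiDecisionService pmi = (roles, op, params, target, ctx, proc) -> roles.contains("engineer");
        WorkflowEngine engine = new WorkflowEngine(pmi);
        System.out.println(engine.tryAssignTask("wf-001", "approveDesign",
                List.of("engineer"), Map.of("time", "10:00")));
    }
}
```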
The integration of PMI for Interface 2 is the most important application of PMI in workflow systems, as there is normally a large number of participants in practical workflow systems, and the workflow participants have important
influences on the system security level. Based on the workflow definition, different tasks are assigned to different roles in a workflow. When a workflow process model is instantiated and executed, users are assigned the workflow tasks and can be authorized to perform these tasks and change the states of the workflow. The following information should be passed from the workflow engine to the PDP in PMI in order to make role-based access control decisions: 1) the user's attributes/roles (optionally including the user's ID), 2) the requested operation and its parameters, 3) the requested target object, 4) any environmental or contextual information such as the time of day, and 5) the workflow process instance. There is an obvious disadvantage of the parallel integration model. In order to make context-based access control decisions for workflow process instances, the information about the workflow process instances should be passed from the workflow engine to the PDP, and this information should then be kept in the retained ADI (access decision information) component within the PDP in the PMI service. Only after a workflow instance is completed can this information about the workflow process instance be removed from the retained ADI component [3]. But actually, all the information about the states of the workflow instances is already maintained in the workflow control data component in the workflow system, as shown in Figure 2, so duplicate information about workflow instances exists in the retained ADI component in the PMI service and in the workflow control data component in the workflow system. This may cause data consistency maintenance problems between the two components in case of system crashes or attacks and therefore cause potential security problems. To solve this problem within the framework of the parallel integration model, a transaction management approach should be introduced for the data maintenance in the retained ADI component. The parallel integration model for PMI and workflow systems is a flexible integration model for workflow systems, as it only requires interactions between PMI and the workflow engine, so it needs only a few changes to existing workflow systems. But it causes more potential complications for the PMI service, and thus could adversely affect the overall system performance of the integrated workflow applications. 3.2 Embedded integration model To minimize the disadvantage of the parallel integration model for PMI and workflow systems, the integration structure of the PMI and workflow systems needs to be further revised, so that the duplication of information in the PMI service about process definitions and workflow instances can be eliminated and a more closely integrated PMI-based workflow system can be constructed. An embedded integration model for PMI and workflow systems is shown in Figure 3. In the embedded integration model, the same information should be passed from the workflow engine to the PDP in order to make access control decisions as in the parallel integration model. But compared to Figure 2, a main change has been made in the embedded integration model in Figure 3. Since the PMI service is closely integrated with the workflow system and the PDP has access to the internal workflow control data component, the PDP does not need to maintain an internal
retained ADI component in itself any more. The history information of the workflow instance, such as who has been authorized to perform which previous tasks in this workflow instance, is stored and maintained in the workflow control data component. The workflow control data component is accessible by both the workflow engine and the PDP. When making an access control decision for a workflow participant, the PDP retrieves the history information about the workflow instance from the workflow control data component, and then makes an access control decision based on the access control policy in PMI. In the embedded integration model, the process definitions can even be retrieved by the PDP directly as part of the access control policy in the PMI system, so that redundancy about workflow process definitions in the access control policy can be removed.
Figure 3. Embedded Integration Model of PMI and Workflow Systems
Since process definitions from external process definition tools are the driving source of the workflow engine, they need special protection against potential tampering. To achieve this, an SoA can issue a process definition certificate for each process when the process definition tools are modelling workflows, and store it in a local file repository or a process LDAP. Each process can be identified by its unique process identifier. In the case of a process LDAP, even multiple workflow engines can retrieve the process definitions by the process identifier and validate them based on the certificates via the PV service in the PMI service, and therefore secure interoperability for Interface 1 can be achieved. The above two models are mainly intended for Interface 2 in the workflow system model, for authorizing workflow participants, but they can also work for Interfaces 1, 3, 4 and 5 after further customization in workflow systems.
4 Enforcement of workflow security policies in PMI When working with workflow systems, the enforcement of security policies in PMI should be slightly changed to accommodate workflow systems. Workflow security constraints are part of the PMI access control policy, and they should be enforced in the PDP along with the target access policy (TAP). The PV sub-system can work as normal to validate the attributes of workflow entities against the role assignment policy when PMI is working with workflow systems. In the PDP sub-system, when the PDP is making access control decisions, apart from the normal working procedure of checking the entity attributes, workflow task and actions against the target access policy, further checking of the workflow definition and the workflow constraints is needed to enforce workflow-related security policy, such as a separation of duty policy in a workflow. In the parallel integration model, workflow structure information and workflow security constraints are expressed both in the process definitions in the workflow system and in the PMI access control policy. The PDP maintains a retained ADI component within it and keeps history information about workflow instances in this component. When enforcing the workflow-related security policies in the PMI access control policy, history information stored in the retained ADI component is retrieved by the PDP and checked against the workflow security policies. If the workflow security policies are complied with, a "true" result is returned by this module and is joined with the result of checking the TAP by the PDP for the final decision; otherwise a "false" result is returned, and thus the PDP will certainly return a "denial" result to the workflow engine as the response to the requested action on a requested task by a particular workflow system entity. Since the retained ADI is maintained by the PDP, when a workflow instance is completed, all the history information about this workflow instance should be removed from the retained ADI by the PDP. In the embedded integration model, during the execution of a workflow, the states and history information of the workflow instance tasks are maintained in the workflow control data component; therefore, apart from the normal decision procedure in the PDP, the PDP needs to check the workflow instance states and history information against the workflow security policies, and then return a "true" or "false" result to indicate whether the workflow security policies have been complied with. This result is joined with the TAP checking result in the PDP, and the final decision can be returned for the requested action on the requested task by a workflow system entity. The workflow engine is responsible for maintaining the history information of workflow instances in the workflow control data component; the PDP does not need to maintain this information.
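A minimal sketch of this joint check follows, assuming a simple separation-of-duty constraint and illustrative data structures; the names are invented for illustration and are not taken from PERMIS or any concrete PMI product.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Joins a target access policy (TAP) check with a check of the workflow instance
 *  history against a separation-of-duty constraint, as described in Section 4. */
public class WorkflowPolicyEnforcement {

    /** TAP check: is the requested task allowed for any of the user's roles? */
    static boolean tapAllows(Map<String, Set<String>> taskToAllowedRoles, String task, List<String> roles) {
        Set<String> allowed = taskToAllowedRoles.getOrDefault(task, Set.of());
        return roles.stream().anyMatch(allowed::contains);
    }

    /** Separation-of-duty check: the user must not have performed any conflicting task
     *  earlier in the same workflow instance (history read from the workflow control data
     *  in the embedded model, or from the retained ADI in the parallel model). */
    static boolean sodAllows(Map<String, String> historyTaskToUser, Set<String> conflictingTasks, String user) {
        return conflictingTasks.stream().noneMatch(t -> user.equals(historyTaskToUser.get(t)));
    }

    public static void main(String[] args) {
        Map<String, Set<String>> tap = Map.of("approvePurchase", Set.of("manager"));
        Map<String, String> history = Map.of("raisePurchase", "alice");
        boolean decision = tapAllows(tap, "approvePurchase", List.of("manager"))
                && sodAllows(history, Set.of("raisePurchase"), "alice");
        System.out.println(decision ? "grant" : "deny"); // deny: alice raised the purchase herself
    }
}
```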
5 Conclusion and future work

In this paper, two different integration models are proposed for integrating PMI with workflow systems, fitting different system situations. In the parallel integration model, the workflow instance history information is maintained by both the workflow system and the PMI system, while in the embedded integration model,
only the workflow system needs to maintain the workflow instance history information. By integrating PMI with workflow systems, PMI features such as attribute certificates, policy certificates, separation of duty and delegation of authority can all be supported in workflow systems, and the security level of workflow systems can be greatly enhanced. A workflow process may offer multiple choices and instances for proceeding, resulting in different instantiations and different participants of the process. Currently, access control decisions are based only on previous workflow tasks, not on forthcoming ones, so sub-optimal access control decisions may occur. To reach optimal access control decisions, further analysis of workflow process instances and optimization algorithms for access control may be needed.
6 Acknowledgement

This work is supported by the Beijing Jiaotong University Research Fund (Project No. 2007RC035 and No. 2006XZ011).
7 References

[1] Ahn GJ, Sandhu R, Kang M. Injecting RBAC to secure a web-based workflow system. In: Proceedings of the fifth ACM workshop on role based access control, Berlin, Germany, 2000: 1-10.
[2] Chadwick DW, Otenko A. The PERMIS X.509 role based privilege management infrastructure. In: Proceedings of the seventh ACM symposium on access control models and technologies, Monterey, California, USA, 2002.
[3] Chadwick DW, Xu W, Otenko S. Multi-Session Separation of Duties for RBAC. In: Proceedings of the first international workshop on security technologies for next generation collaborative business applications, Istanbul, Turkey, 2007.
[4] Ferraiolo DF, Cugini J, Kuhn DR. Role based access control: features and motivations. In: Proceedings of the 11th Annual Conference on Computer Security Applications, New Orleans, LA, 1995: 241-248.
[5] Information technology – Open systems interconnection – The directory: public-key and attribute certificate frameworks, ISO/IEC 9594-8:2001. Accessed on: May 16, 2008.
[6] The Workflow Management Coalition. The workflow reference model, TC00-1003. Accessed on: May 16, 2008.
[7] The Workflow Management Coalition. Workflow security considerations, WFMC-TC1019. Accessed on: May 16, 2008.
A Comparative Analysis of Project Management Information Systems to Support Concurrent Engineering

Camila de Araujo a and Daniel Capaldo Amaral b,1

a Postgraduate student, University of São Paulo, BR
b Assistant Professor, University of São Paulo, BR
Abstract. Many organizations have attempted to create information technology systems to support collaborative concurrent engineering, including project management functions, capable of meeting the needs of all types of industries, projects and purposes, i.e., a universal collaboration platform. The result is an exceptional number of IT products, organized under a myriad of tool classes and promising to solve all collaborative engineering problems. However, each firm is a universe in itself, with its own culture, specific product characteristics, language, methods, rules, standards, etc., and this makes all the difference between efficient communication and efficient collaboration. Are the barriers to the design of a collaborative IT concurrent engineering infrastructure similar in all cases? Since these systems are becoming popular, it is important that enterprises know that their first challenge is to support senior and project management in the difficult task of finding the correct IT products in order to create a collaborative engineering environment that is efficient and economically feasible. Keywords. Project Management, Collaborative Concurrent Engineering, Information Technology Systems.
1 Introduction

Collaborative product development projects require a complex mix of planning, evaluation and decision-making. All of these activities, in turn, are based on information generated during the projects. This information should be up to date and available to all. Research on collaborative engineering projects has emphasized the importance of this information to concurrent engineering (CE) by indicating factors that prevent the accomplishment of their objectives (except for those of a political nature) [3]. These factors are: ignorance about what other project teams are doing, failure in controlling project change, different perspectives on project goals,
1 Assistant Professor, University of Sao Paulo, Sao Carlos School of Engineering, Industrial Engineering Department, Integrated Engineering Research Group (EI2). Trabalhador Saocarlense, 400; 13566-590; Sao Carlos-SP; Brazil; Tel +55 (0)16 3373-8289; Fax +55 (0)16 3373-8235; Email: [email protected]; http://www.numa.sc.usp.br/grupoei/
rigidity in planning projects and routines, faulty reactions to sudden changes in the project environment, and unexpected technological hindrances. In addition, Barnes, Pashby and Gibbons [1] carried out a literature review and identified indicators of success with respect to company-company collaborative projects: well defined goals, clearly assigned responsibilities, consensual project planning, proper resources, defined project milestones, synchronized project assessment, effective communication, and assured delivery by collaborators. In order to meet the communication and collaboration needs of engineering projects, mitigating their problems and focusing on the aforementioned factors of success, several IT tools have been created and are currently available to enterprises. Besides engineering applications, these tools include project management applications that may, for instance, assist in monitoring project progress. In spite of this, some problems can be found in today's IT tools. A review of recent research on CE carried out by Li, Fuh and Wong [4] shows an emphasis on engineering application tools to the detriment of project management tools. Rodriguez and Al-Ashaab [5] also present a literature review of collaborative product development systems, and only two of them involve project management functions. Moreover, White and Fortune [6] carried out a survey in which project management software appears as a chief limitation among project management methods/tools/techniques because of its inadequacy for collaborative projects. Although it is possible to verify in the literature that existing project management tools still require improvement as regards their application to collaborative CE, it is not clear what difficulties are encountered by different types of enterprises that carry out collaborative CE projects. This work is part of a research project that aims to investigate these issues in the capital goods industry. Its objective is to present a comparison of IT collaborative platforms for new product project management in order to identify, beyond the best and worst practices, differences in critical success factors in the design and operation of these platforms. This paper presents a first attempt, a comparison among four enterprises with distinct sizes and production strategies.
2 Methods

The method used was the multiple-case study according to Yin [7], with a holistic approach. Three dimensions were considered in each case: the product development process (PDP), the IT infrastructure employed to support this process, and the problems, practices and critical success factors of collaborative CE projects. The data collection instruments were interviews, non-participant observations and document analysis. Models of the enterprises' PDP phases were devised to analyze the data, which included the modeling of the software used to assist the processes. BPMN [2] was employed in the process modeling. The study analyzed capital goods enterprises with distinct sizes (medium versus large) and different manufacturing strategies (only Engineer-To-Order versus a mix of Engineer-To-Order, Make-To-Stock, Make-To-Order and Assembly-To-Order). Figure 1 represents each case.
Figure 1. Cases description
3 Results

The results presented in this article derive from the study of four cases as described in the above section, i.e., capital goods enterprises. The deficient areas are presented without detailed descriptions.

3.1 Enterprise A

3.1.1 The product development process (PDP) and IT infrastructure

Enterprise A's PDP comprised the following areas: Sales, Management, Engineering, Supplies and Manufacture. The IT infrastructure supporting its PDP was composed of:

• PDM (Product Data Management) software, acting on Sales and Management processes;
• ERP (Enterprise Resource Planning) software used after product sales (it has an integration customization and employs PDM data);
• Project management software to manage engineering process activities (additional project management software is also employed to display results and follow customers' tasks);
• Electronic spreadsheet to generate product sales reports for the management of finished projects.
Figure 2 depicts the information macro-flow throughout PDP, showing where IT tools were employed.
Figure 2. Representation of Enterprise A’s information macro-flow
3.1.2 Problems, practices and critical success factors of collaborative CE projects

Enterprise A showed a satisfactory performance in controlling its projects, since its ERP and PDM systems seemed to meet the perceived needs. The customization of these project control tools may reinforce the hypothesis that existing project management tools should be modified. The main problems were found in the following functions:

• Global contract of projects, activities and resources;
• Information exchange between partners via a monitored system;
• Generation of on-line consultation about finished projects;
• Integration of existing databases, discontinuing the use of electronic spreadsheets and project management software;
• On-line monitoring of on-going processes, with swift interactions.
3.2 Enterprise B

3.2.1 The product development process and IT infrastructure

Enterprise B organized its PDP around the following areas: Board, R&D (Research and Development), Thin Films and Industrial Operations. Figure 3 depicts its information macro-flow.
Figure 3. Representation of Enterprise B’s information macro-flow
The IT infrastructure employed by the enterprise comprised the following tools:

• ERP software to oversee production plans, acquisitions and costs;
• Local network area to store engineering data;
• Word processor to enter product configurations and information as well as to prepare reports in general;
• Project management software to monitor the project timetable;
• Electronic spreadsheet to store information generated by the project management software and to establish a database to monitor the development of projects and generate reports on performance indicators.
3.2.2 Problems, practices and critical success factors of collaborative CE projects

Enterprise B displayed a positive aspect as regards project management. It made use of a visual board, which allowed communication of indicators to all involved in the project. However, the main problems were related to the following functions:

• Automatic generation of indicators;
• Integration of project management tools and product data;
• Use of a database to manage resources by using effort data;
• On-line collaboration tools to contact external partners or collaborators from other units.
3.3 Enterprise C

3.3.1 The product development process and IT infrastructure

Enterprise C's PDP consisted of Sales, Engineering, PPC (Production Planning and Control), Production and Quality, as shown by its information macro-flow in Figure 4. The IT tools used in project development were:

• ERP software employed in all involved sectors to control activities focusing on PPC (Sales, specifically, used an ERP module related to the list of products and parts to develop budgets);
• Project management software to control on-going engineering projects and monitor planning management activities;
• CAD software to produce drawings;
• Word processor to prepare project reports;
• Electronic spreadsheet to produce graphic reports with indicators about projects.
Figure 4. Representation of Enterprise C’s information macro-flow
3.3.2 Problems, practices and critical success factors of collaborative CE projects

From the analysis of the information it is possible to affirm that Enterprise C is at a highly positive organizational stage, as it was capable of integrating its tools to meet its project needs. The main problems identified lay in the following functions:

• Use of tools to share product documents;
• Development of reports on performance indicators directly in ERP, since it gathers most project information;
• Adoption of risk analysis in all projects.
3.4 Enterprise D

3.4.1 The product development process and IT infrastructure

Enterprise D's project development took place in the following areas: Directory, Electronic Engineering, Mechanical Engineering and Manufacture. The following IT tools were employed in the process:

• Word processor to develop reports on planning, validation, development and alteration of projects;
• CAD software to design hardware components;
• Electronic spreadsheet to control the list of project materials;
• Project management software to monitor Engineering activities only.
Figure 5 shows information macro-flow in Enterprise D’s project development.
Figure 5. Representation of Enterprise D’s information macro-flow
3.4.2 Problems, practices and critical success factors of collaborative CE projects

Enterprise D's best practices lay in product planning, as it was capable of identifying differential characteristics and opportunities in the market. It also held
weekly meetings to monitor projects and employed helpdesk systems. However, Enterprise D needs to change with respect to the use of IT tools. The main challenges were identified in the following functions:

• Use of project management software to communicate knowledge on project data, e.g., timetable and activities;
• Development of environments to manage documents and workflows;
• Establishment of databases to control materials, thus eliminating the obstacle of employees' tacit knowledge;
• Use of tools to manage documents and workflows;
• Definite implementation and use of ERP to better manage activities related to process plans and the integration of engineering, PPC and purchase activities in Electronic Engineering;
• Generation of information to establish project performance indicators.
4 Final Considerations

The requirements and challenges presented in this study illustrate the difficulties encountered in managing collaborative CE projects, especially concerning the integration of several types of data and their communication to all collaborators involved in the projects. The main contribution of this paper is to present the difficulties in using the features of current tools, as well as the requirements and challenges in developing a new class of project management systems capable of supporting collaborative work in the capital goods industry. In addition, the cases described in this study may be of assistance to professionals interested in building or improving IT infrastructures to support project management in the collaborative concurrent engineering of capital goods.
5 References

[1] Barnes TA, Pashby IR, Gibbons AM. Managing collaborative R&D projects: development of a practical management tool. Int J of Project Management 2006;24:395-404.
[2] BPMN. OMG Final Adopted Specification, 2006. Accessed on: Dec. 15th 2007.
[3] Hameri A, Puittinen R. WWW-enabled knowledge management for distributed engineering projects. Computers in Industry 2003;50:165-177.
[4] Li WD, Fuh JYH, Wong YS. An Internet-enabled integrated system for co-design and concurrent engineering. Computers in Industry 2004;55:87-103.
[5] Rodriguez R, Al-Ashaab A. Knowledge web-based system architecture for collaborative product development. Computers in Industry 2005;56:125-140.
[6] White D, Fortune J. Current practice in project management: an empirical study. Int J of Project Management 2002;20:1-11.
[7] Yin RK. Case Study Research: Design and Methods. Newbury Park, Sage Publications, 1994.
Location-Aware Tour Guide Systems in Museum

Chih-Yung Tsai a, Shuo-Yan Chou b,1 and Shih-Wei Lin c

a National PengHu University of Science and Technology, Taiwan
b National Taiwan University of Science and Technology, Taiwan
c Chang Gung University, Taiwan
Abstract. This study develops a location-aware tour guide system that combines wireless networking, established content in digital archives for museums, indoor locating technology and a geographical information system. A back-propagation neural network algorithm is applied to locate the user, and properties such as the visitor's personal background (e.g., language and age), the content of the visiting materials and visiting times are monitored to provide a customized tour guide service. Keywords. Museum guide system, PDA, WLAN, positioning, neural network
1 Introduction

A museum provides physical surroundings in which people can tour, be entertained and acquire knowledge. Countries all over the world are using museums as a core facility to promote culture, art and tourism by widening their collections and services. Exhibitions in museums generally have descriptions beside them in the form of written boards or pamphlets. However, these media are inconvenient for visually impaired people, children and older people. Therefore, many museums employ guides to provide vivid descriptions. However, limited human resources mean that only group guidance can be provided, and guides are unable to attend to each visitor individually. Recent advances in information and networking technology, along with the increase in ownership of wireless network devices, have led museums to begin to construct wireless guidance systems. Current wireless guidance systems in museums adopt RFID technology, placing RFID tags on the collections. A user may read the specific identification number coded on a collection item through an RFID reader onto his or her PDA to access or save the guidance information via the wireless network [3][13]. However, this approach is expensive. A single RFID reader currently costs over NT$100,000, making it much more expensive than WiFi equipment. Additionally, since the effective reflection distance of the tag is short, the sensitivity of the guidance device to the tag decreases in large crowds.
National Taiwan University of Science and Technology, #43,Sec.4,Keelung Rd.,Taipei,106,Taiwan,R.O.C, 886-2-27333141- 6327; Email: [email protected]
This study uses wireless networking technology, i.e., IEEE 802.11g, to integrate wireless networking with the established museum content in digital archive format, and readily applies a geographic information system to provide a location-aware tour guide system. Visitors may enter their personal location, background (e.g. language, age), the content of the materials they plan to visit and their visiting timeframe to set up a personal handheld tour guide. The route guidance function shortens the time a visitor spends searching for a visiting route and enables the visitor to obtain a wide range of appropriate information, thus increasing satisfaction with the museum service. The rest of this paper is organized as follows. Section 2 summarizes pertinent literature on museum tour guide systems. Section 3 then describes the operation of the user-location system within buildings. Section 4 describes the integration of the proposed location estimation method into a location-aware museum tour guide system. Conclusions are finally drawn in Section 5, along with recommendations for future research.
2 Related Work

2.1 Location positioning techniques

Existing systems for location determination include the Global Positioning System (GPS) [10], Infrared Ray Positioning Systems (IRPS) [12], and Radio Frequency-Based Systems [11]. Significantly, GPS is the most frequently adopted location determination system. However, GPS does not work properly in indoor environments or urban areas due to signal blockage and attenuation, which usually decrease the overall positioning accuracy. The IRPS signal cannot pass through walls, ceilings, floors or large objects in a room, since the emitted signal is commonly reflected by objects. Moreover, a transmitter must be less than 20 feet from any object that it detects, and must not be covered by transparent objects when accepting a tag. Radio frequency-based systems (RFBS) generally use RFID tags. The effective reflection distance of RFID tags is short, and PDA device selectivity is also limited. Therefore, the RFID technique is still under development, and is fairly expensive. The hardware architectures of these three location determination systems are generally difficult to access. However, the wireless local area network (WLAN) technique is highly popular [1][9]. Therefore, this study adopts WLAN to sense and detect a location. WLAN reduces the cost and risk of hardware construction, and uses existing network resources to determine locations. Therefore, it does not influence original network transportation functions. Several recent studies have reported location detection methods in a wireless environment. Fox et al. [2] summarized location tracking systems. Bahl and Padmanabhan [10] adopted IEEE 802.11b access point signals to locate users. This study concentrates only on works that exploit the properties of the communications medium for location determination, without requiring any additional hardware. This study adopts an empirical approach to the problem by considering a
WLAN environment using the IEEE 802.11g standard. The strengths of signals received by wireless terminals from multiple access points at different locations in the building are recorded.

2.2 Digital guide system types

Current digital guidance systems can be classified as follows [4]:

1. Systems that store the audio-video data in the guidance media. Visitors are expected to use these systems on their own. The iPAQ Exhibition Explorer in the Modern Art Museum in San Francisco belongs to this type.
2. Systems that store guidance audio-video data in a server. The content information is accessed from a data bank through the guidance media, which senses the matching code of the collection and requests the appropriate information to be transferred to it. Systems of this type include the electronic MUSS program, the Personal Digital Museum Assistant in Japan, the Wireless Museum PDA Tour Guide System in the Tate Modern Art Gallery in London, and the National Palace Museum Tour Guide System in Taipei.
3. Systems similar to those of type 2, except that the audience can obtain information over the Internet at the exhibition site via wireless transmission through a broadband Access Point (AP). Such systems include the wireless tour guide system in the Explorer Exhibition in San Francisco, the palmtop digital tour guide system in the Getty Museum in L.A., and the wireless tour guide system in the History Museum.
3 Positioning Technique

This study proposes a WLAN infrastructure, since WLANs have better scalability and lower installation and maintenance costs than ad hoc solutions, enabling location systems to be surveyed easily using their own infrastructure and components [5][6]. The access points (APs) are D-Link devices, and the mobile terminal is a Personal Digital Assistant running Windows CE. The network operates in the 2.4 GHz license-free (ISM) band with a data rate of 54 Mbps. Four channels, 1, 6, 7 and 11, were used. The signal strength of the beacon packets was used as the Received Signal Strength Information (RSSI). The coordinates of the locations at which signal strength was measured were chosen and stored on a Personal Digital Assistant. A WLAN based on the 802.11g standard was installed on the ground floor of a building. The ground floor had an area of 35.6 × 24.4 = 872.2 m2 and contained 3 classrooms, 2 bathrooms and 2 storage rooms. Nine Vigor 2600VG APs were installed on the floor; Figure 1 shows their locations. The nine access points operated on channels 1, 6, 7 and 11 to prevent overlap.
Figure 1. The ground floor layout with positions of the access points
Back-propagation neural networks were used to map RSSI signals from the WLAN to two-dimensional coordinates. A back-propagation neural network is a multi-layered feed-forward neural network: source nodes form the input layer, followed by one or more hidden layers and an output layer of neurons. The term "feed forward" indicates that connections go in only one direction, i.e. from input to hidden, or from hidden to output, but not both. A neural network is trained by adjusting the synaptic weights such that the network provides a particular output for a particular input. A back-propagation network is trained by an iterative algorithm called the error back-propagation algorithm. The neural network was trained with signal strength samples from each of the 30 training locations, after preprocessing the data to remove outliers. The vectors used in training the neural network were the average signal strengths obtained from each access point. In the testing process, the average signal strengths obtained from the different access points were taken to build the input vector, which was fed into a neural network with two hidden layers and a two-node output layer representing the coordinates of the location [7]. The experiments were performed using fully connected feed-forward architectures. A ratio of 4:1 was adopted for training and test examples. Thus, from the 6600 collected examples, 5280 were chosen for training and 1320 for testing. Several neural networks were trained with different configurations and learning algorithms. In all cases, the output layer had two neurons corresponding to the X and Y coordinates being estimated. Two hidden layers were adopted, and experiments were performed with 5, 10, 15, 20, 25 and 30 neurons in each layer. Ten different back-propagation variants were compared, and the experimental results were summarized along with the observed accuracy. The best results were obtained with 25 nodes in each of the two hidden layers. For the activation function, the sigmoid function was adopted in the input and hidden layers, and the identity function in the output layer.
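A minimal sketch of this training procedure is given below, using scikit-learn's MLPRegressor as a stand-in for the back-propagation network (two hidden layers of 25 logistic units, identity output, 4:1 train/test split). The RSSI and coordinate arrays are randomly generated placeholders for the measured data, so the printed accuracy is not meaningful; only the workflow is illustrated.

# Sketch: map averaged RSSI vectors (9 APs) to (X, Y) coordinates with a BPN-style MLP.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rssi = rng.uniform(-90, -30, size=(6600, 9))   # placeholder for measured RSSI vectors
xy = rng.uniform(0, 35, size=(6600, 2))        # placeholder for surveyed (X, Y) coordinates

# 4:1 split between training and test examples, as in the experiment described above
rssi_train, rssi_test, xy_train, xy_test = train_test_split(
    rssi, xy, test_size=0.2, random_state=0)

bpn = MLPRegressor(hidden_layer_sizes=(25, 25),   # two hidden layers, 25 neurons each
                   activation="logistic",          # sigmoid hidden units, identity output
                   max_iter=500, random_state=0)
bpn.fit(rssi_train, xy_train)

pred = bpn.predict(rssi_test)
dist_err = np.linalg.norm(pred - xy_test, axis=1)   # per-sample distance error in metres
print(f"share of errors below 2 m: {np.mean(dist_err < 2.0):.2%}")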
Table 1 presents the probability distribution of the distance error for the best configuration. Clearly, the proportion of test samples falls rapidly as the distance error increases: 87.88% of the test samples produced distance errors of less than 2 m. This distribution can be used to estimate the probability of each distance error occurring. Based on the room sizes and the antiques of the museum used as the study site, the target was set at a maximum error of 2 m for delivering the location-aware information, in the same way that a tour guide delivers information to tourists. To locate people in the museum, a maximum error of 2 m was considered adequate, since a person can be reached visually within this range.

Table 1. Probability distribution of the estimated error

Error range    Under 0.5 m   0.5–1 m   1–1.5 m   1.5–2 m   Over 2 m
Probability    25.53%        35.83%    18.33%    8.18%     12.12%
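The 87.88% figure quoted above is simply the cumulative share of the first four error bands of Table 1, as the following quick check shows; the small difference is due to rounding of the individual band values.

# Cumulative share of test samples with a distance error below 2 m (Table 1 bands)
bands = {"under 0.5 m": 25.53, "0.5-1 m": 35.83, "1-1.5 m": 18.33, "1.5-2 m": 8.18, "over 2 m": 12.12}
below_2m = sum(p for band, p in bands.items() if band != "over 2 m")
print(round(below_2m, 2))   # 87.87, i.e. the reported 87.88% up to rounding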
4 System Implementation

4.1 System Framework

The location-aware museum tour guide system consists of four components: the client (PDA), the Location Position Agent, the Context-Aware Agent and the Content Server. (1) The client collects the power signature information and transmits it to the Location Position Agent; additionally, the client acts as a terminal to display the received messages. (2) The Location Position Agent estimates the position of the PDA. Location detection is achieved by combining the strengths of the 802.11b/g wireless access signals; neural networks are used to estimate the position from the RSSI signature information received from the access points. The Location Position Agent stores the results in the Content Server. (3) The Context-Aware Agent sends data to the appropriate recipients. (4) The Content Server stores the messages and the location information of the PDA. The exhibition is shown in multimedia format. The images are mostly stored as *.png or *.jpg files, text files in the BIG5 format, audio files in *.wav or *.mp3 format, and audio-video combinations in *.wma, *.wmv or *.avi format. All these images, texts and audio-video files are stored using Flash in the Content Server.

4.2 Sample Scenario

A scenario is presented to illustrate the use of the location-aware museum tour guide system. Figure 2 shows how the components of the system's architecture interact for this scenario. As the visitor enters the museum, he or she can rent a PDA from the service desk. A visitor who has been to the museum before inputs his or her ID and password to enter the system. Visitors can not only edit personal data, but can also choose the tour guide mode, e.g., preset, alternative or self-assigned. The preset tour guide follows the system's preset guiding route; the alternative guide follows the themes of the exhibition, and the personalized tour guide follows an itinerary set by the visitor. Once the tour guiding mode is selected, the Location Position Agent senses the strengths of the signals from all the access points to which the device can be linked. When the visitor changes location, these signals serve as inputs to the trained BPN, which produces the X and Y coordinates. The Context-Aware Agent then matches the coordinates to the location on the map, which is shown on the visitor's PDA. Figure 3 presents an example of this situation, where the red frame line indicates the floor plan of a particular floor in a museum, the black dot indicates the location of the exhibited material, and the green flag shows the path taken by the visitor. When the visitor selects a particular collection in the exhibition, the Context-Aware Agent transmits information on the surrounding exhibited materials to the visitor's PDA. The displayed size of an exhibited collection changes based on the distance between the visitor and the exhibit, becoming larger as they get closer (see Fig. 7). The system describes the exhibit in detail when the visitor clicks on its image (see Fig. 8).
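The context-aware step of this scenario can be illustrated with the following sketch, which matches an estimated (X, Y) position against exhibit coordinates and pushes content for exhibits within the 2 m delivery radius. The exhibit names, coordinates and function names are purely illustrative and are not part of the implemented system.

# Sketch: deliver content for exhibits near the estimated visitor position.
import math

exhibits = {                       # exhibit id -> (x, y) position on the floor plan
    "bronze-vessel": (4.0, 12.5),
    "jade-cabbage": (10.2, 7.8),
    "calligraphy-scroll": (18.6, 15.1),
}

def nearby_exhibits(x: float, y: float, radius: float = 2.0):
    """Return exhibit ids within the delivery radius, nearest first."""
    hits = [(math.hypot(ex - x, ey - y), name) for name, (ex, ey) in exhibits.items()]
    return [name for d, name in sorted(hits) if d <= radius]

def push_to_pda(x: float, y: float) -> None:
    for name in nearby_exhibits(x, y):
        print(f"sending multimedia content for '{name}' to the visitor's PDA")

push_to_pda(9.1, 8.4)   # visitor standing near the jade cabbage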
Figure 2. Sequence diagram of Museums Tour Guide System
Figure 3. Wireless Positioning
5 Conclusion

This study establishes a prototype tour guide system for the National Palace Museum of Taiwan, based on a context-aware framework in which visitors in different contexts can obtain information customized to their needs. Location is undoubtedly important for understanding the context of mobile users. Location becomes a useful indexing datum from which to infer the overall context used by a
system to provide services and information to mobile users. Moreover, mobile users constantly adjust their contexts, especially their locations. Visitors provide their personal data, special needs and constraints to the guidance system. The system in turn extracts appropriate information from the structured museum contents for the visitors to use during the visit. Such context data are classified by demographic data, preferences and interests, such as age, sex, education, profession, language, preferred media type, time available, special subject of interest, specific assignment, device utilized and location. The location-detection system senses the location of a visitor travelling around the museum, indicates the location to the visitor through the PDA, and then uploads the pre-defined information to the device in real time based on the location. The guidance system can then be designed as a geographic information system (GIS), which analyzes data and provides information based on geographic location. Since the layout of the museum does not change frequently, the floor space is partitioned into identical grids, and the learning algorithm is executed at each grid point. The strengths of the signals from the accessible APs are recorded at each grid point and act as the signature of that grid point. Since the signal strengths change under different environmental conditions, the distribution of the signal strengths from each accessible AP is measured. These signatures at the grid points are recorded and can be used as the reference for location detection, along with the geographic information system established for the museum. Analytical results clearly demonstrate that locations in an indoor environment can be determined using the signal strengths of IEEE 802.11g access points as input samples for training neural networks. The accuracy of location determination depends on the learning algorithm and the number of labeled examples. A reasonable number of labeled samples can yield very good results, with an average absolute distance error of less than 1.1 m. Based on the room sizes and the antiques of the museum utilized as the study site, this study set a maximum error of 2 m as the target for delivering location-aware information, such as tour guidance, to the tourist. A 2 m maximum error is considered reasonable for locating people within the museum, since a person can be reached visually within this range. To help visitors use their own portable mobile devices (e.g. cellular phones) for digital tour guide services in the future, an integrated system of digital tour guide portals with multiple terminals could be developed, so that visitors can access information in the manner that is most appropriate for them. Additionally, the tour guiding material could be classified to satisfy the requirements of different groups, increasing the fun in learning. Users' behavior could be further analyzed based on the membership database established in the tour guide system in order to serve the target users more precisely, and thus improve the operating performance of museums.
6 References

[1] C. Wang, M. Gao, X. F. Wang, "An 802.11 Based Location-Aware Computing: Intelligent Guide System," in Proceedings of the 1st International Conference on Communications and Networking (ChinaCom), 2006, pp. 1-5.
[2] D. Fox, J. Hightower, L. Liao, D. Schulz, and G. Boriello, "Bayesian filtering for location estimation," IEEE Pervasive Computing, Vol. 2, 2003, pp. 24-33.
[3] F. Kusunoki, M. Sugimoto and H. Hashizume, "Toward an Interactive Museum Guide System with Sensing and Wireless Network Technologies," in Proceedings of the IEEE International Workshop on Wireless and Mobile Technologies in Education, 2002, pp. 99-102.
[4] H. Y. Lin, "From Audio to Audiovisual: A Discussion of Museum Planning for Mobile Multimedia Guides," Museum Quarterly, Vol. 20(1), 2006, pp. 97-114.
[5] J. Scott, M. Hazas, "User-friendly surveying techniques for location-aware systems," in Proceedings of Ubicomp 2003, 2003, pp. 44-53.
[6] L. A. Castro and J. Favela, "Continuous Tracking of User Location in WLANs Using Recurrent Neural Networks," in Proceedings of the 6th Mexican International Conference on Computer Science, 2005, pp. 174-181.
[7] L. D. Chou, C. H. Wu, S. P. Ho, C. C. Lee and J. M. Chen, "Requirement Analysis and Implementation of Palm-Based Multimedia Museum Guide Systems," in Proceedings of the 18th International Conference on Advanced Information Networking and Application, 2004, pp. 352-357.
[8] M. Stella, M. Russo and D. Begusic, "Location Determination in Indoor Environment based on RSS Fingerprinting and Artificial Neural Network," in Proceedings of the 9th International Conference on Telecommunications, 2007, pp. 301-306.
[9] N. Patwari, N. J. Ash, S. Kyperountas, O. A. Hero, R. L. Moses and S. N. Correal, "Locating the Nodes: Cooperative localization in wireless sensor networks," IEEE Signal Processing Magazine, Vol. 22(4), 2005, pp. 54-69.
[10] P. Bahl and V. N. Padmanabhan, "RADAR: an in-building RF-based location and tracking system," in Proceedings of the 19th Annual Joint Conference of the IEEE Computer and Communications Societies, 2000, pp. 775-784.
[11] R. Want, A. Hopper, V. Falcao, and J. Gibbons, "The active badge location system," ACM Transactions on Information Systems, Vol. 10(1), 1992, pp. 91-102.
[12] V. Ivan and Z. C. Branka, "WLAN Location Determination Model Based on the Artificial Neural Networks," in Proceedings of the 47th International Symposium ELMAR, 2005, pp. 287-290.
[13] Y. Wang, C. Yang, S. Liu, R. Wang and X. Meng, "A RFID & Handheld Device-Based Museum Guide System," in Proceedings of the International Conference on Pervasive Computing and Applications (ICPCA), 2007, pp. 308-313.
PDM – University Student Monitoring Management System

Jožef Duhovnik a,1, Žiga Zadnik a,1

a Faculty of Mechanical Engineering – LECAD, Ljubljana
Abstract. The studies monitoring information system is an important support to the management of higher education. A survey of a wealth of information systems and their variants has shown that they are conceived without a fundamental model for monitoring the process states that develop during studies and are generated by students. This paper presents the information flow derived from the study process and the necessary administration parts. It provides continuous monitoring of students' activities, which is important for the management of the educational process at each university. To do so, we have applied the basic PDM systems concepts. Due to its extent and recognisability, we have termed it PDM – USMM (University Student Monitoring Management). The system has been upgraded with derivative analysis models, providing quality data for the management of the entire study process. These have been termed "smart registers", providing an analytical view broken down by individual student, year, generation, first school, course group and professor. On the basis of the "smart registers", it is possible to establish a decision-making system. It enables the direct influence of human factors that are important for a successful management of the educational process with students. Keywords. PDM, information system, university, student monitoring, decision making support
1 Introduction

1.1 General

Each process can be defined by its phases, parts, sub-processes, partial implementations etc. It is important to recognize the process in all details and to be able to catalogue it. Cataloguing is important in order to be able to recognize the group of data that defines the process at the entry, during individual parts and at the exit. It is the data and their monitoring during the process that represent the key
1 LECAD Laboratory, Faculty of Mechanical Engineering (Ljubljana), Aškerčeva 6, 1000 Ljubljana, Slovenia, Europe; Tel: +386 (1) 4771 416; Email: [email protected]; [email protected]; http://www.lecad.uni-lj.si
elements for the management of the system itself. Many publications [1][2][3][4][5] therefore present data processing in different environments, and they are often based on the presumption that it is important to ensure data processing and tracking. We believe that the process should first be recognized and recorded in such a form that its repeatability is guaranteed in its entirety. After cataloguing the process, each state should be provided with adequate data and their interaction (e.g. with mathematical terms), which provides the basis for managing the process in an abstracted form.

1.2 The university study monitoring information system

Carrying out the educational process at the university is based on the individual approach of the professor and their assistants. Adult education methods encourage and increase students' interest in specific topics. Students can be directed to certain courses in the first years by recognizing their talents. A successful management of the educational process is therefore related to vital data on the student as a subject. It therefore makes sense to try to assess the data accurately. Through interviews, the university governing body can further encourage students' interests in particular knowledge. However, it cannot enter such interviews without adequate data. It is understandable that the body is bound to strict confidentiality and to respect the highest standards of human dignity. For this reason it is of utmost importance to ensure a high level of data security. The whole studies monitoring information system at the university is actually a product data management system that includes the collection, processing, storing, analysing, arranging and protection of all important data on students during their studies. This is highlighted because we believe that one should not understand a student or a graduate as the end product. The PDM studies monitoring system is presented in the same way to the users of the system. Students, professors and the university governing body are primarily considered the users of the system. With such a presentation of the PDM system it is possible to ensure that the student feels like a subject rather than an object. The student then starts to compete against himself or herself to achieve the best possible results. PDM systems are established at some universities and users make use of them in different ways. They use them as derivatives of traditional PDM systems, such as UNIGRAPHICS – NX and Teamcenter. To a large extent, they are being used as specific software products, following current users' requirements and even using standard databases. The common denominator of these systems is the typical systemic-information approach. It means that we first address the problem of data identification, data processing and application as a statistical indicator of the analysed data. It was this that made us believe that in this case it is necessary to use the PDM systems concept for SME enterprises, as the concept is based on process states and the description of the process itself. From the process it is then possible to develop a model of data structures. During the process, we then try to analyse them at each point, after each important indicator of a process state, which provides for dynamism during analysing. At each change of the process, the data structure changes and is introduced into the analytics. High autonomy
during the collection of data on the process itself is ensured. Most of all, we are not bound to the structure of a large information system, which is in principle lethargic and often deterministic, too. In the case of PDM systems it is necessary to provide a suitably distributed system of locations, ensuring high flexibility. Our case, in which the system is implemented at the university level, calls for extra attention. We start from the fact that each student has access from any location (on campus, at home, while travelling). The same access is also possible for professors, who can enter their course or course group. Specific analyses should be carried out in the system and verified within the university network. In the concrete example, access has been provided at the University of Ljubljana, within the METULJ (Butterfly) network. Internet use must therefore be governed by a specific protection system during data transfer [6][7][8][9][10][11][12]. A standard cryptographic system has been used in our case [13][14][15][16]. The protection system for databases and access to them uses a triple redundancy level, which is, however, not further explained in this paper.
2 Workflow of the student monitoring process

2.1 Studies monitoring process

Studying begins with enrolment in the first or any other year (Figure 1). Generally, those parts of the process are presented that represent individual modules as complete unities. We should stress that the system does not include continuous monitoring of students during their studies, as this was not the purpose of the paper. Due to the specifics of the system, which covers a complete university environment, it has been termed PDM – USMM (University Student Monitoring Management). Students enter the study process in different ways. Entering individual universities is subject to specific selection criteria. These are defined by different factors and should be settable in accordance with the university's quality policy. Criteria are usually public. In the case of special criteria, set by an individual university or faculty for itself, the decision is usually in the domain of the commission that decides whether or not to accept a particular student. As a rule, a student can enrol from anywhere and no direct contact is necessary. For this reason, we have provided three possible ways for students to introduce themselves: Internet access, via postal order, or directly in person. Access via postal order requires the involvement of the university administration. After an interview, a personal visit can be treated in the same way as access via the Internet. Promotions to the next year are similar: access is possible in all three ways. The only difference is the criteria, which are also defined by the university or faculty policy. In general, it is possible to define criteria at several levels. Following the criteria recognized at various universities, we have opted for four levels: A, B, C and D. During the enrolment process it is necessary to provide the user with automatic help and instructions. It is a fact that this is a specific data environment and it is
fair to offer the candidate some assistance with direct enrolment. For this purpose, we use the Direct Help Support (DHS) system. We presume that the candidate has entered the data correctly. After data entry and formal verification, performed automatically, an automatic answering service can notify the candidate that all enrolment data have been entered. After the notification, the enrolment candidate, or the student progressing to the next year, has the right to pay all necessary initial and whole-year expenses immediately. Payment can be made in different ways: via the Internet (banking system), via postal service or directly at the faculty. Because each method is specific, we will not go into its details here. Only once all study expenses have been paid at registration are the student's data visible to the administration staff, who take care of the administrative issues of the study. During this stage of the process, the so-called super registration data control is carried out, where all data are verified and harmonized with the student via e-mail, if necessary. In the case of direct access, harmonisation takes place on the spot. In view of the relatively high number of students, it is possible to arrange an access timetable for
Figure 1: The entire upgrade system process
personal contacts. The timetable can be adapted to the extent of the amendments. It should be specifically stressed that personal contacts are also possible via video systems, specified by each university for itself. We should point out that this is the critical part of the registration and therefore requires highly skilled staff on the part of the university. The student should collect all necessary original documents, confirmations of enrolment etc. before the beginning of study. Original documents are filled out in the student's presence and certified in front of him or her. This prevents advance printing of documents, or printing for those students who cannot begin the study for
various reasons. A special system and criteria are established to refund the paid sums, if necessary. As a rule, the paid sums are non-refundable. After the enrolment, an established study system follows. Study is carried out in different forms, which calls for different methods of continuous recording of performance for each student and for each course. Usually, the following methods are used:

1. Lectures (attendance), colloquia (grades), examination (grade)
2. Lectures (attendance), tutorials (attendance), tutorials (grades), colloquia (grades), examination (grade)
3. Lectures (attendance), tutorials (attendance), homework (grades), colloquia (grades), examination (grade)
4. Lectures (attendance), tutorials (attendance), homework (grades), seminars (grades), examination (grade)
5. Lectures (attendance), seminars (grades), colloquia (grades), examination (grade)
6. Lectures (attendance), seminars (attendance), seminars (grades), colloquia (grades), examination (grade)
7. Lectures (attendance), tutorials (attendance), exhibitions (grades), colloquia (grades), examination (grade)
8. Lectures (attendance), tutorials (attendance), public presentation – concert, examination (grade)
9. Other A
10. Other B

The submitted overview of different approaches to disseminating and absorbing knowledge reveals that each university in general has its own method of study. Differences exist also within faculties. Unification of methods is not appropriate for university study; we only have to ensure comparability of knowledge reproduction among universities with a view to the extent of study and verified grades. Study monitoring is performed weekly. Data input shall take place no later than Friday at 5 p.m. This provides the first analyses of the situation as early as the following Monday at 9 a.m. Analytics is important when it comes to 60 or 100 registered students per year, especially in the first year. We believe that study monitoring in the first year makes the most sense because the transition from secondary school or another environment is the most sensitive for the student. The same goes for the first year of levels 2 and 3 of the Bologna reform. On the basis of the collected data, the faculty governing body can take measures for a more intensive, better and more personal approach to the student. With full recognition of the student monitoring process during his or her study, we wish to point out that the objective of the submitted information system is to produce the analysis of student enrolment (in the first year) and promotion (to the next year) in order to enable total control of enrolment and promotion from different locations. It was also necessary to take into account the possibility of paying student expenses via electronic banking in such a way that each payment is properly recorded and protected on the worldwide web. It was
necessary to check the possibility of monitoring the student at tutorials, lectures, colloquia and examinations, while at the same time providing and ensuring easy integration of the teaching staff into the system. Also considered is the possibility of transferring data on student monitoring into the student performance monitoring system throughout his or her study process. Transition to the so-called electronic index, with all the necessary document support for issuing documents of fulfilled study requirements, is also supposed to be ready. Electronic indexes already exist and have been in use for some time. The purpose of this project is to introduce an information system based on the PDM systems model that allows, among other things, the electronic index. We would like to point out that the electronic index is only a result of a well-established PDM study monitoring system and not its goal. Some vital elements of the PDM USMM system are presented below. In this way, we would like to present elements of the system that significantly round up parts of the process.

2.2 Enrolment conditions

For students, enrolment and promotion conditions are the most important annual dividing line between the previous and the next year of study. It is important to immediately define the difference that specifies different conditions with criteria (Figure 2). This improves transparency during programming and clearly separates the criteria specification.
Figure 2: Scheme of enrolment and promotion conditions, divided into two parts: first year and higher years
Enrolment in the first year is defined by conditions, taking account of learning results from secondary schools in different environments. Conditions are set according to past results at secondary schools. The overall result of the final year and the method of finishing secondary school are important. The matura exam, vocational matura and secondary school diploma can be considered methods of finishing secondary schools. The learning results serve as a basis for ranking enrolment candidates on the enrolment list. If the number of applicants is higher than the number of advertised or available spaces, a pre-defined selection process is carried out. Students who do not meet the set criteria are not accepted to the faculty. The faculty officially publishes the list of accepted candidates according to the set enrolment conditions and begins the enrolment of candidates in the first year. Promotion conditions are laid down differently. They are set out on the basis of the student's work during the preceding period of study. Each student's results are
reviewed, which then determines his or her performance group. A special ranking module ranks students into one of four main groups according to their performance. Each year has four completely independent groups A, B, C and D (Figure 3), which provide the faculty governing body with a tool to shape appropriate policy. Students in group A are students who meet all conditions and have therefore passed all exams. Students from group D can exceptionally re-enter the same year if they have not already made use of this possibility. Groups B and C contain the students who are given extra opportunities to take exams under special conditions. The properties of the new ranking programme are as follows:

• students are better informed of their performance and can make better decisions on their future;
• student groups are public and published on the Internet;
• students can check their ranking anytime, 24 hours a day;
• the information system allows the teaching staff to update student groups regularly via a private network (Intranet);
• student ranking into groups during examination periods (three periods per year) is regularly updated, so students are completely up to date with their latest achievements.

Experience has shown that the level of passing exams improves considerably as each student works towards improving his or her ranking.
Figure 3: Scheme of student groups according to promotion conditions
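The ranking module can be illustrated with the following minimal sketch. The paper does not specify the exact promotion criteria, which are set by each faculty's policy, so the thresholds separating groups B, C and D below are hypothetical placeholders.

# Sketch: rank a student into promotion group A, B, C or D after an examination period.
def promotion_group(ects_earned: int, ects_required: int) -> str:
    if ects_earned >= ects_required:
        return "A"        # all conditions met, all exams passed
    share = ects_earned / ects_required
    if share >= 0.75:     # hypothetical threshold
        return "B"        # extra examination opportunities under special conditions
    if share >= 0.50:     # hypothetical threshold
        return "C"        # extra examination opportunities under special conditions
    return "D"            # may exceptionally repeat the year, if not already used

# Updated after each of the three examination periods per year
for student, earned in [("student-001", 60), ("student-002", 48), ("student-003", 20)]:
    print(student, promotion_group(earned, 60))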
Introducing student ranking and more or less direct access to it has resulted in students being much more committed when they sit for exams, which addresses one of the problems in countries with free education. Students have become aware of their status and have started to make an effort to achieve the highest status as soon as possible. It was also our ambition, when we introduced the PDM USMM information system, to spark the desire to advance to the next year without major difficulties. Competition emerged among students, which further stimulated their studies.
3 Important PDM USMM modules

3.1 Electronic registration

The purpose of electronic registration is the transmission of the student's data in fully electronic form. In the past, enrolment and promotion forms were mainly filled out manually, which was time consuming, tiring and annoying. For these reasons, we opted for fully electronic registration. For the university, it makes collection of the desired data much easier. At the same time, the data are automatically recorded in the central database. By taking complete autonomy of the registration location into account, we have provided the possibility to register from any location in the world. The user therefore does not have to register at the faculty; he or she can do it "at home". Because the location is not specified, the student (user) can also register at the faculty if he or she so wishes. Electronic registration starts with the student entering the electronic registration system. Entry to the system is controlled and is provided by the faculty during the registration period only. Upon entry, the student is made familiar with all necessary instructions to proceed and register. This is followed by the registration process, which continues with entering the main data on the registration form and attaching the candidate's photograph. Throughout the registration process, Direct Help Support (DHS) is available to the user. Once the registration has been completed, the main data and photograph are recorded in the central database. Because data entry is subject to inaccuracies and mistakes, the process continues with the decisive phase of verifying the entered data. During this phase, all data are reviewed and the decision whether or not to continue the process is taken. If the data are correct and credible, the process continues with storing the data in the main database; if they are not correct, the user returns to the phase of entering the main data. Each return is accompanied by a notice informing the user of the error and offering help to eliminate it. The next set includes filling out registration forms for documents that the student needs for various administrative bodies. This set of processes is identical to the set for filling out the registration form, with the only difference being that the remaining documents are filled out instead of the registration form. Direct Help Support is available to the user in this case, too. Entering the data and communication with the database is followed by verifying the accuracy of the entry. If it is correct, the programme directs the user to the next step, storage of the data. As with filling out the registration form, the user is notified of an error and at the same time offered help to eliminate it. Having filled out the registration forms, the electronic registration process is finished and the user can proceed to the next phase. Compared to the manual system, the electronic system has several fundamental advantages. The main advantage is full readability of documents: the system completely eliminates problems of students' different handwriting, as the writing is unified and transformed into computer writing ("all students have the same handwriting"). The other fundamental advantage is that the printed documents are ready for binding and archiving. The third major advantage lies in the fact that the
student has considerably less work in filling out examination papers and forms, as most of the data are automatically pre-completed by the system. In the past, the student had to enter his or her personal data into the required fields, which took more than one hour on average. The electronic system has reduced the time of filling out the forms to about one quarter, which amounts to 15 minutes on average. In these 15 minutes, the student can fill out the whole registration form and all required forms for documentation purposes. Attaching the photograph to the registration form is another particular advantage. In the past, students had to include photographs twice the size required by official document standards. In this system, the photograph size remains the same and is defined by a scan resolution of 300 DPI, which enables quality reproduction at all times.
3.2 Payment of registration fee
Payment of study requirements is an important part of the information system. Upon registration for the next study year, each student has to cover certain financial obligations to the institution (faculty). Institutions and faculties have different requirements regarding payments and levels of charges. The required charges usually include:
• payment of the schooling fee,
• payment of expenses accruing from registration,
• payment of costs associated with the coming study year,
• payment for confirmation of registration and other documents,
• payment of library costs.
Regardless of the extent and level of the charges, the student should pay them in their entirety. Up to the 2007/2008 academic year it was the norm that students paid their obligations to the institution through a bank or postal service. Such an approach is typical of state-run and highly bureaucratic systems. Students collected a payment order at the faculty, took it to a bank or post office and made the payment to the faculty account. Working towards modernizing the payment system, special attention has been paid to improving the payment process. During the development of the electronic registration process we found several ways to save time and money; in this respect, the payment process is similar to electronic registration, but its potential improvements lie in other areas. The purpose of the payment process is that the student makes a payment to the institution (faculty) account. The payment guarantees the student's promotion to the next year; in other words, subject to discharged obligations, the faculty allows the student to continue studying. There are several methods to discharge the obligations. The payment process can be activated as soon as the student successfully completes the electronic registration process. It starts with automated invoice generation. All information necessary for making the payment is displayed on the screen. The new information system provides three payment methods and the user can freely choose between them.
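To make the payment flow described above concrete, the following Python sketch mirrors the three payment methods and the status notifications. All class and function names, and the database interface, are illustrative assumptions made here and are not part of the actual PDM USMM or Estudent implementation.

```python
# Illustrative sketch only: names and structure are assumptions, not the
# actual PDM USMM / Estudent implementation described in the paper.
from enum import Enum


class PaymentMethod(Enum):
    E_BANKING = "electronic banking"
    BANK_OR_POST = "bank or postal service"
    DIRECT = "direct payment at the student office"


class PaymentStatus(Enum):
    PENDING = "pending"
    CONFIRMED = "confirmed"
    FAILED = "failed"


def start_payment(student_id, fee_items, method, db):
    """Generate the invoice and record a pending payment for the student."""
    amount = sum(fee_items.values())   # schooling fee, library costs, ...
    invoice = {"student": student_id, "amount": amount, "method": method}
    db.record(student_id, status=PaymentStatus.PENDING, invoice=invoice)
    return invoice


def confirm_payment(student_id, verified_by_financial_service, db):
    """Called once the financial service has verified the incoming payment.

    Only a confirmed payment releases the documents to the supervision
    (matriculation) step; otherwise the student is sent back to the
    payment selection step.
    """
    if verified_by_financial_service:
        db.record(student_id, status=PaymentStatus.CONFIRMED)
        return "forward documents to supervision matriculation"
    db.record(student_id, status=PaymentStatus.FAILED)
    return "return student to payment method selection"
```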
3.3 Payment over electronic banking
Electronic banking is the first method. The student makes the payment through an electronic banking provider. The provider transfers the amount to the faculty account, generates the receipt and registers it within the system, which ends the payment process. The student can see the status of the payment in the system (notification of a successful payment). Once the payment has been made and verified, all documents are sent to the supervision matriculation process. If the payment is not made, for whatever reason, the student is directed back to the first step, where he or she decides how to proceed with the financial process. Figure 4 shows the electronic banking scheme in the new information system. The scheme includes D, S and T marks, standing for process description, subject and term.
3.4 Payment through a bank or postal service
The second method is payment through a bank or postal service. So far, this has been the most common method of making payments. The student fills out the BN02 form ("payment order") with the payment order data found in the first step of the financial process and takes it to a bank or a post office. He or she pays the specified amount and the bank or post office transfers it to the faculty account. By doing this, the student makes the payment and discharges the obligations. If the payment is successful, which is confirmed by the relevant financial service at the university, the payment confirmation is recorded in the Estudent system.
Figure 4: Scheme of electronic banking process
The student can already see the status of the payment (notification of a successful payment). The process usually takes about two or three days. Once the payment has been made and verified, all documents are sent to the supervision matriculation process.
3.5 Direct payment
The third option is direct payment at the student office of the institution (faculty). The student discharges all obligations at the university and receives a document of
the payment he or she has made. The financial department takes care of recording all information confirming the payment. Physically collected funds are transferred to the faculty (institution) bank account. The student can see the status of the payment in the Estudent system (notification of a successful payment) the next day. Once the payment has been made and verified, all documents are sent to the supervision matriculation process.
3.6 Supervision matriculation
Up to this phase, the registration process applies a self-control principle, using the DHS feature, which ensures a high quality of data input on the user's side. Before the final entry of each registered student, and irrespective of the applied self-control principle, we want to provide a final control exercised by the relevant services at the university or faculty. Experience has shown that essentially every new generation is fully capable of meeting the minimum standards for independent data entry in the registration form, document preparation and payment process. An authorized person should check and verify the actual data before anyone at the faculty or university issues any confirmations; this is the main activity of the super control. The verification covers the registration form, the registration documents and the payment process. A positive control result sends the student a message that he or she can come and collect all original documents and is a full-time student of the University of Ljubljana. The super control should take account of static, unchanging answers. These data relate mostly to personal data, schools completed before enrolment, etc. Photographs should be stored because they are of a static nature. Verification of the photograph on the registration form should also be ensured, because no changes of the photograph content are permitted during the storage process. Photographs with wrong content, such as a cat, a dog or an elephant, are rejected and not stored in the database, and the user is notified of a registration error. There are also dynamic data, such as the means of support or the place of living. Dynamic data can change and are therefore not of key importance. As mentioned above, all documents, together with all details, must be verified here. It is important to be aware that the super control process is crucial because, later on, all data are stored in a suitable, safe database. Super control operators bear a huge responsibility: they must take care of accurate and credible data entry as well as of an error-free main database. Because super control is only possible once the payment process has been completed, a relatively high level of safety of the entered data is ensured. It should be stressed that there may be quite a few differences in the data entered in the registration form, the registration documents and before the payment of expenses, which calls for systematic, continuous control of the registration data. If a process in an individual phase does not unfold as expected, the system is used to send e-mails directly to the addressee with a notification of what is yet to be done. Irrespective of the automated functionality, it is still necessary for super control officers to check the conditions in each phase and, if necessary, to intervene personally with a message
(telephone) or e-mail to each addressee. Through super control, PDM USMM should provide a complete and reliable data management system for registration.
3.7 Student status and performance documents
Every year, the final part of registration is the students' collection of registration documents. As a rule, the student supplies the documents in electronic form, fills out the registration documents and makes the payment. After that, he or she is notified of any (in)accuracies in the submitted data. After reviewing the student's documents, the relevant student office sends the student a notice confirming the end of the registration process. The notice can be sent via e-mail or surface mail. At his or her institution, the student should collect in person the documents confirming and describing his or her status. Once the documents have been collected, the student office prints and certifies all other documents. Before collecting the documents, the student should identify himself or herself. The documents usually consist of certified confirmations of enrolment, a certified index and an extended-status sticker on the student ID. The relevant student office is the main operator for issuing the documents. Students collect the documents according to a detailed, pre-fixed student list. This saves the students precious time because, thanks to the schedule, they know exactly when to collect the documents. The expected frequency of document collection is two to three students per five minutes. Once the documents have been delivered, the whole content of the delivery and the confirmations of delivery, with date and time, should be stored in the archives. Each of the mentioned processes, carried out by administrative personnel at the student office, should support the printing feature.
4 Studies analytics, using "SMART REGISTERS"
During their studies, it is important to monitor students not only at the end of the semester or academic year but also during the rest of the time. Some details have already been presented in the section "Studies monitoring process". Special attention should be paid to students at the beginning of studying at each level. Changes of environment and way of life have a particular influence on students' ability to enter such an environment without additional pressure. For this reason, we are trying to establish the needs for adequate presentation of knowledge and to encourage studying with the use of suitable studying methods. We do that through the analysis of the achieved results of individuals as well as of their peers from secondary schools and other generations. Smart registers denote the data and statistical assessment of the students' current situation, courses and study programme (Figure 5). Smart registers can be broken down into five sub-sections. Each sub-section represents its own set, specifying the details of a carefully chosen analysis and assessment of results. Describing the individual sets is not important for the PDM USMM system and they are therefore not further presented.
5 The influence of the PDM USMM system on users
The student is actually the most important user of the presented system. Our goal was to establish a new, modern PDM system, as known from the literature. For students of technical sciences, we are trying to prove that PDM systems are very useful for services, too, and monitoring the study process in fact falls into this category. Besides, it provides a first test of the students' use of modern computer technologies. It is understandable that very close attention must be paid to natural communication between the student (user) and the recipient of the data. The administrative personnel are the other important user. They carry out the following functions and services within the comprehensive study monitoring process: choosing candidates (partly), publication of lists, updating the lists (partly), results reporting, sending mail, preparation of the registration process, beginning of the registration process, notifying all members of the processes, control over the system functioning, payment control, student monitoring, student registration super control, preparation, storage and review of registration documents, issuing confirmations of registration, indexes and student ID stickers, analyses of students' results, setting enrolment and promotion criteria, assistance and support to students and, finally, the monitoring, execution, directing and controlling features that make up the information system. Figure 6 shows the teaching and administrative personnel scheme.
Figure 5: Scheme of the application of “smart registers”
Figure 6: Teaching personnel scheme
The main representative of the administrative personnel in charge of contacts is the student affairs office. During the registration period, it is the main entity controlling and carrying out the enrolment and promotion processes. It has special contact with each student, which makes it even more important that all actions are carefully planned and executed. The student affairs office also holds all authority concerning the treatment of students. For example, during the registration period, the office is the only entity with the authority to certify all registration documents, and it provides the student with all necessary confirmations of registration and further studies. The teaching personnel consists of professors, a pro-dean, a dean, a pro-rector and a rector. The pro-dean for educational activities is directly in charge of smooth registration. The system precisely defines the responsible persons for the administrative as well as the teaching part of the personnel. The responsible personnel have access to all data in the information system. They can monitor and make decisions on educational activities. Each member of the personnel is responsible for the smooth running of the registration processes in line with the objectives and for finding joint solutions in case of mistakes and misunderstandings.
6 Conclusion
Problems that crop up when students enrol in the first year or progress to the next ones can be eliminated by a methodical definition of the working process and all its details. Only in the second phase, when some process states are programmed, do we engage in the programming of detailed sub-processes. Developing the working process in all its details has resulted in very good tracking of all input data. In this way, we have completely met the quality requirements in line with the ISO 9000 standard. This was also our goal, because we have found that most problems and frustration originate in mistakes made during processing. During the implementation, we considerably lowered the number of registration days and, most of all, the registration time has been reduced from 120 minutes to a mere 5 minutes on average. In this way, important data have been collected that will be used for interviews with students who have problems and have been ranked in groups B and C. The commission that conducts interviews with students of the two groups before registration is thus equipped with important data that enable a good interview and help it direct the student. The interviews have been introduced on the basis of such accelerated treatment of students. They have provided a much higher level of treatment or, better put, a more humane treatment of students. It should be stressed that the full introduction of the PDM USMM system has even reduced the registration expenses, which would otherwise have grown every year. Calculations have shown that the investment in building our own PDM USMM system was returned within a year and a half. The paper has shown that the use of a specific PDM system is of utmost importance also for the performance of services. It can be concluded that defining the working process is crucial for establishing suitable software. The student, as the main user, is spared a huge amount of unnecessary rushing and data preparation. The administrative personnel are less stressed. The management of the educational process becomes much more transparent and gives the faculty and university a more active role in achieving a higher quality of study, also in the area of knowledge transfer.
7 References
[1] Duhovnik, J.; Tavčar, J. PDMS – Product Data Management Systems. Ljubljana: Fakulteta za strojništvo, 2000.
[2] Duhovnik, J.; Tavčar, J. Information flow in CAD process. International conference, Design to Manufacture in Modern Industry, Bled, Slovenia, June 1993.
[3] Miller, E. PDM Market Continues Strong Growth. Computer-Aided Engineering magazine, Nov. 1996.
[4] Eigner, M.; Haesner, D. Konfigurationsmanagement als integrierter Teil von PDM. EDM – Engineering-Data-Management Report, Nr. 3, Dressler Verlag, Heidelberg, 1998.
[5] Tavčar, J.; Duhovnik, J. Trees of Knowledge and Experience in the PDM system. International conference, Design to Manufacture in Modern Industry, Podčetrtek, Slovenia, 1999.
Knowledge Based Engineering
Multidisciplinary Design of Flexible Aircraft
Haroon Awais Baluch a and Michel van Tooren b,1
a Ph.D. Researcher, Faculty of Aerospace Engineering.
b Prof. Dr., Faculty of Aerospace Engineering.
Abstract. The increasing use of fiber composite materials in the design of aircraft structures and the advent of easily available, fast personal computers have forced aircraft design engineers to adopt complex mathematical models, which usually address the trio of flight mechanics, aeroelasticity and controls in one simulation. The adoption of such mathematical models also opens the door to a robust methodology for the multidisciplinary optimization (MDO) of flexible/aeroelastic aircraft. This paper gives an overview of a "closed-loop" design framework, which optimizes the structure of any given component, such as the fuselage of the aircraft, under dynamic loads. The objective is to reduce the weight of the given structure while maintaining the constraints of structural strength and the dynamic stability of the whole aircraft. An optimization problem is presented at the end, where the fuselage structure of a small executive jet is optimized under structural loads due to atmospheric turbulence.
Keywords. MDO, flexible aircraft, fuselage, dynamic loads, minimum weight, stability constraints
1 Introduction
The advent of fast and affordable personal computers has opened a new door to implementing efficient methodologies of multidisciplinary optimization (MDO), which can easily handle the large problems related to aircraft structural design. Previously, optimization problems were mainly limited to relatively small problems [3, 8, 9], in which fuselage panels made of fiber composite materials were optimized under static load conditions. During panel-level optimization, the sensitivities of the equivalent stiffness of that panel with respect to the overall dynamic response and the consequent structural loads of the aircraft are usually neglected, which does not seem to be good practice. An optimization process called "aeroelastic tailoring" of flexible wing and tail sections normally suggests several different combinations of fiber orientations
1 Prof. Dr., Design of Aircraft and Rotorcraft, Faculty of Aerospace Engineering, Delft University of Technology, Kluyverweg 1, 2629 HS Delft, The Netherlands; Tel: +31 (0) 15 27 84794; Fax: +31 (0) 15 27 89564; Email: [email protected]; http://www.lr.tudelft.nl/dar
along the webs and skins of the structure, where the equivalent stiffness of that section or panel is supposed to change in every optimization iteration; that change in stiffness, in turn, also changes the vibration spectra of that component and hence the structural loads. For example, an optimization problem can end up with a design that has generally anisotropic material properties. With anisotropic properties, bending-torsion coupling in a beam-like structure under bending loads is quite obvious, whereas the design engineer has most probably been provided with structural loads based on the assumption of uncoupled bending-torsion deformations. In such a scenario the optimization process should include a kind of "closed-loop" framework in which a new set of loads is calculated for each iteration. The problem is not as simple as stated here. The mathematical model which addresses the trio of flight dynamics, aeroelasticity, and the controls of the fully flexible aircraft is quite complex and requires extra care when used for large optimization problems, especially when a design engineer does not have a prior reference on the sensitivity of the optimization parameters to the structural loads; unnecessary calls of the loads module in the optimization framework are then to be avoided. In this paper we give an overview of a framework [2] that is applicable to optimizing a flexible aircraft under dynamic loads. Section 2 starts with the discrete representation of a fully flexible aircraft. A brief discussion of the dynamic loads and the mathematical model is also presented in Section 2. Section 3 discusses the optimization framework, which is divided into three layers: the panel-level local structural optimization, the section-level structural optimization and, above all, the global-level optimization which takes care of the aircraft stability, where the mass and stiffness matrices and the consequent structural loads are updated in the uppermost layer. The methodology of the structural representation of the fuselage is also discussed in this section. In Section 4 an executive aircraft is subjected to dynamic loads due to a gust while the aft portion of the fuselage is optimized with two different concepts of fiber composite panels. Conclusions are drawn in Section 5.
Figure 1. Aerodynamic and Structural Discretization of a Flexible Aircraft
2 An Overview of Aircraft Dynamic Model
The structure of a flexible aircraft can be discretized into a number of beams. Figure 1 shows a sample aircraft modeled with seven beams to represent the fore and aft fuselage structures, one beam per half wing and half horizontal tail, and one beam for the vertical tail, where the aircraft body axes 'Of' lie at the juncture of the aft and fore fuselage beams. These beams are further discretized into several sections with lumped mass elements 'mi' at their mass centers (c.g.). These mass elements are attached to each other with springs of average stiffness over the two neighboring sections. For each fuselage beam there are two degrees of freedom (d.o.f.) in bending 'u' along the 'y' and 'z' directions of the aircraft body axes 'Of' and one torsion 'ψ' about the longitudinal axis of 'Of'. For each wing and empennage beam there is one bending d.o.f. normal to the plane of the lifting surface and one torsion d.o.f. along the reference axis (r.a.), i.e. the longitudinal axis of its respective coordinate axes at 'Oi'. The aerodynamic model is presented in the form of several strips with particular lift and drag coefficients. The quasi-steady forces and moments on each strip are functions of these coefficients and the local angle of attack of the strip. The instantaneous local angle of attack of a strip is the sum of the torsion angle 'ψ' of that strip and the rigid-body angle, which includes the aircraft pitch angle at 'Of' and the incidence of the lifting surface at its attachment to the fuselage. Dynamic loads are calculated by solving the inertially coupled equations of motion (EoM) [6] of a fully flexible aircraft using the DARLoads computer code. DARLoads is a software tool for the dynamic loads analysis of flexible aircraft, which is being developed by the DAR group of the Faculty of Aerospace Engineering at Delft University [1]. It accepts the aircraft structural and aerodynamic data in the form of local stiffness and lumped mass elements. The aerodynamic data are given in the form of local strips on each lifting surface, where each strip is defined with its particular quasi-steady lift coefficients. All the component-level stiffness, mass and aerodynamic influence coefficients are assembled in the global matrices of the full aircraft, which are then solved in state-space form. The aircraft motion is separated into rigid-body motions with respect to inertial axes on the ground and elastic motions of the aircraft structural components with respect to the aircraft body axes. Considering that the elastic motions, or vibrations about the equilibrium state, are smaller in magnitude than the rigid motions, the EoM can be written in state-space form and linearized into zero- and first-order equations by the perturbation theory of extended aeroelasticity [6]:
\dot{x}^{(0)}(t) = A^{(0)}\, x^{(0)}(t) + B^{(0)}\!\big(x^{(0)}(t)\big)\, u^{(0)}(t) \qquad (1)
The above equation introduces the zero-order state-vector x(0) that represents the rigid body motions i.e. translations and rotations with respect to inertial axes. The control vector, u(0), represents the zero-order control inputs of the elevator, aileron, rudder and the thrust. State space matrices A(0) and B(0) represent the
coefficient matrices for inertia and control forces, respectively. During steady-state flight the zero-order coefficient matrices, A(0) and B(0), remain constant and so does the zero-order state vector x(0)(t). The first-order state vector x(1)(t), which takes account of the vibrations and their effect on the overall response of the aircraft, is governed by:

\dot{x}^{(1)}(t) = \left( A^{*} + B_{x} - B_{u} G \right) x^{(1)}(t) + F_{ext}(t) \qquad (2)
The state matrix A* contains the partial derivatives of the zero-order velocities, stiffness, and damping matrices with respect to the first-order state vector. The coefficient matrix Bx gives the sum of the aerodynamic and gravitational forces and subsequent moments due to the vehicle motion resulting from the external disturbance Fext. Bu multiplied by the closed-loop gain matrix G gives the coefficients of the forces and moments due to the first-order control inputs, which consequently minimize the effects of the external disturbance. Using the mode displacement method, which is based on the internal elastic forces, the total loads along a certain degree of freedom (d.o.f.) u of a component i are expressed as the sum of the static loads and the time integration of the dynamic loads over the steady state:

L_{i_u} = \varphi_{i_u}\, K_{i_u} \left( x_{i_u}^{(0)} + \int_{0}^{\tau} x_{i_u}^{(1)}(t)\, dt \right) \qquad (3)

where K and \varphi are the stiffness matrix and the vector of mode shapes of the component, respectively.
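As an illustration of how Equations (2) and (3) could be evaluated numerically, the following Python sketch integrates the first-order state equation with a simple explicit Euler scheme and recovers the loads with the mode displacement method. The integrator, time step and array shapes are assumptions made for illustration only and do not reproduce the DARLoads code.

```python
# Minimal sketch of the first-order response (Eq. 2) and load recovery
# (Eq. 3); the explicit Euler integrator and the matrix/vector shapes are
# illustrative assumptions, not the DARLoads implementation.
import numpy as np


def simulate_first_order(A_star, B_x, B_u, G, F_ext, x1_0, dt, t_end):
    """Integrate x1_dot = (A* + Bx - Bu G) x1 + Fext(t); return the history."""
    A_cl = A_star + B_x - B_u @ G            # closed-loop first-order state matrix
    steps = int(t_end / dt)
    x1 = np.array(x1_0, dtype=float)
    history = np.zeros((steps, x1.size))
    for k in range(steps):
        x1 = x1 + dt * (A_cl @ x1 + F_ext(k * dt))
        history[k] = x1
    return history


def total_loads(phi_i, K_i, x0_i, x1_history_i, dt):
    """Mode-displacement loads, Eq. (3): static part plus the time integral
    of the first-order (dynamic) elastic deflections of component i."""
    dynamic_part = x1_history_i.sum(axis=0) * dt   # crude rectangle-rule integration
    return phi_i @ (K_i @ (x0_i + dynamic_part))
```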
3 The Optimization Framework
The optimization framework is formulated under the domain of the "Analysis Tools" section of the Design and Engineering Engine (DEE), which is a knowledge-based engineering (KBE) tool to automate the process of multidisciplinary optimization in aircraft preliminary design [4, 7]. The framework is specifically developed for the structural optimization of a fuselage structure; as far as other components like wings and stabilizers are concerned, a small change in the constraint equations and a few design variables would allow a design engineer to use the same routine. The algorithm is segregated into three layers, see Figure 2. The first or uppermost layer starts with the inputs for the initial conditions of the aircraft, which include the flight conditions, the aerodynamics, and the structural properties of the whole aircraft, including the initial structure that is to be optimized later on. The initial conditions, in the form of structured arrays, are then transferred to DARLoads for the dynamic loads analysis. DARLoads gives the output in the form of the internal structural loads and deflections of all the components of the aircraft, as mentioned in Section 2.
Figure 2. The Optimization Framework
DARLoads is followed by a while loop, which optimizes the length Ls of a section Ns of the aft fuselage, see Figure 2, where Ls represents the length between two adjacent fuselage frames. The while loop begins with a section number and, whenever the length Ls of the current section Ns has been optimized by the downstream layers, Ns is incremented to the next section number. It is assumed that the total length Lf of the aft fuselage is fixed during the preliminary sizing of the aircraft, so the condition to remain in the while loop is that the sum of all the optimized lengths of the previous sections does not exceed the total length of the aft fuselage, which also acts as the design space for the next section, i.e. the upper bound of Ls. The while loop calls the 2nd layer by using the fmincon optimization function in Matlab [5]. Loads in the form of shear forces in three directions and the corresponding bending/torsion moments at the root of the current section are read from the DARLoads output. A for loop is called afterwards, in which each panel in the current section is optimized through the 3rd layer of the algorithm. The length of the for loop depends upon the number of panels considered in a circular section of the fuselage. For example, the length of the for loop is 4 if the fuselage is discretized into four panels, i.e. a crown on the top, a keel at the bottom, and two sides [3], but in this paper a fuselage cross-section is discretized into 12 straight panels, see Figure 3. For the turn-by-turn optimization of each panel, the computer program known as WISST [7] is called in each loop. Depending upon the position of a panel along the circumference of the fuselage, the loads in the 3rd layer are converted into the load intensities of the panel under consideration [2]. Depending upon the panel type, i.e. stiffened or sandwich, WISST first initiates a feasible solution for a
certain width out of the total given width of the panel. Readers are referred to Figures 5 and 6 of Reference 7, where a stiffened panel is depicted in the form of a stiffener and called a blade element. The failure criteria taken into account are the strength and stability of the blade element. The solution from the initiator is then transferred to the sizing tool, which takes account of the constraints of ply strength and buckling of a full-width panel while minimizing the objective of weight per length of the panel. Whenever the objective is achieved, the optimized panel is transformed into an equivalent sandwich panel with a fixed skin thickness for each panel along the circumference of the fuselage but with different equivalent material properties of both skin and core, i.e. modulus of elasticity/rigidity, Poisson ratio, density, etc. These equivalent properties are replaced with the new values until the objective and constraints with respect to the section length Ls are achieved in the 2nd layer of the optimization. Table 1 gives a brief overview of the design variables, constraints, and objectives from the local panel level to the full aft fuselage structural level.
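The nested structure of the three layers described above can be summarised in the following Python sketch. Here optimize_panel, panel_intensities and minimize_scalar are placeholders standing in for WISST, the load-intensity conversion and the fmincon-based section sizing, respectively; they are assumptions for illustration rather than the actual implementation.

```python
# Schematic of the three-layer loop: 1st layer walks along the aft fuselage,
# 2nd layer sizes the section length Ls, 3rd layer optimizes each panel.
# All callables passed in are placeholders, not the real WISST/fmincon tools.
def optimize_aft_fuselage(Lf_total, n_panels, loads_at, panel_intensities,
                          optimize_panel, minimize_scalar):
    sections, used_length = [], 0.0
    while used_length < Lf_total:                    # 1st layer: next fuselage section
        upper_bound = Lf_total - used_length         # remaining length bounds the design space
        root_loads = loads_at(used_length)           # section root loads from the loads analysis

        def size_panels(Ls):
            # 3rd layer: optimize every panel around the circumference for this Ls
            return [optimize_panel(panel_intensities(root_loads, p, Ls), Ls)
                    for p in range(n_panels)]

        def weight_per_length(Ls):                    # 2nd layer objective
            return sum(panel.weight for panel in size_panels(Ls)) / Ls

        Ls_opt = minimize_scalar(weight_per_length, lower=0.1, upper=upper_bound)
        sections.append((Ls_opt, size_panels(Ls_opt)))
        used_length += Ls_opt
    return sections
```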
Figure 3. The structural representation of the fuselage
When the upper bound of the design space, as given in the while-loop condition, is exhausted, the equivalent model for each panel is written out to a text file, which includes the bulk data entries of the grids and the corresponding quad elements with their particular equivalent material properties. MSC/Nastran is then called for the static condensation, which gives the new stiffness and mass matrices of the aft fuselage; these are assembled with the rest of the components into the global stiffness and mass matrices of the whole aircraft. If the constraint of real and negative eigenvalues of the aft fuselage structure is not achieved, then
DARLoads is called on again with the new stiffness and mass matrices, which in turn gives a new set of loads for the next iteration.

Table 1. Optimization variables, constraints and objective (objective at every level: min f(x))

Local Panel Level
• f(x) = Weight_Panel / Ls
• Design variables: h = panel height; t1 = facing thickness; t2 = stiffener thickness; n1 = facing stacking seq.; n2 = stiffener stacking seq.; w1 = stiffener spacing; w2 = stiffener width
• Constraints: material failure (Tsai-Hill); skin (facing) buckling loads; stiffener wrinkling loads; panel buckling loads

Local Section Level
• f(x) = Σ (over all panels) Weight_Panel / Ls
• Design variable: Ls = section length
• Constraints: torsion buckling loads

Global Structural Level
• f(x) = Σ (over all sections i) Weight_Section,i / Ls,i
• Constraints: eig(K_AF − λ M_AF) ≤ 0, where eig = eigenvalue solution, M_AF = mass matrix, K_AF = stiffness matrix
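The global-level constraint in Table 1 amounts to a generalized eigenvalue check on the condensed aft-fuselage matrices. A minimal sketch of such a check is given below, following the paper's stated requirement of real and negative eigenvalues; the use of SciPy's generalized eigenvalue solver and the tolerance value are assumptions made for illustration.

```python
# Sketch of the global-level check from Table 1: all generalized eigenvalues
# of (K_AF, M_AF) should be real and non-positive before the loop stops,
# otherwise new loads are requested from the dynamic loads analysis.
import numpy as np
from scipy.linalg import eigvals


def aft_fuselage_is_stable(K_AF, M_AF, tol=1e-9):
    """Return True if every eigenvalue of K_AF x = lambda M_AF x is real and <= 0."""
    lam = eigvals(K_AF, M_AF)                 # generalized eigenvalue problem
    real_enough = np.all(np.abs(lam.imag) < tol)
    non_positive = np.all(lam.real <= tol)
    return bool(real_enough and non_positive)
```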
4 Optimization Example
A twin-jet aircraft flying through a discrete gust is selected as a test case. To get the initial load set, the input data required for the structural and aerodynamic properties of the aircraft are taken from Reference 6. The flight conditions for symmetric flight, together with the dimensions of the outer geometry of the aft fuselage, are given in Reference [2]. After reading the inputs, DARLoads assembles all the required matrices. To obtain the trim condition, DARLoads minimizes the quadratic function of the rigid-body zero-order state vector in Equation (1) and optimizes the control vector for the given speed. The external disturbance in the form of a discrete gust is applied for a period of 1 s, and Equation (2) is numerically solved over 10 s. The response of the aircraft in the form of both rigid and elastic motions is recorded, and the loads along each d.o.f. are extracted using Equation (3). Figure 4 shows the static loads along the length of the aft fuselage during the trim condition, while Figures 5-7 show the dynamic loads at the root section of the fuselage over the simulation time period. The sign convention for the loads follows the axes system shown in Figure 3.
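The paper only states that the discrete gust acts for 1 s within a 10 s simulation; a common choice for such an input is the "1-cosine" profile, which is assumed in the short sketch below purely for illustration (the amplitude and shape are not taken from the paper).

```python
# Illustrative "1-cosine" discrete gust applied for 1 s inside a 10 s
# simulation window; the shape and the 10 m/s amplitude are assumptions --
# the paper only specifies the 1 s gust duration and 10 s simulation time.
import numpy as np


def discrete_gust(t, U_gust=10.0, t_start=0.0, duration=1.0):
    """Vertical gust velocity at time t [s]; zero outside the gust window."""
    if t_start <= t <= t_start + duration:
        return 0.5 * U_gust * (1.0 - np.cos(2.0 * np.pi * (t - t_start) / duration))
    return 0.0


time = np.arange(0.0, 10.0, 0.01)                     # 10 s simulation, as in the paper
w_gust = np.array([discrete_gust(t) for t in time])   # gust velocity history
```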
Figure 4. Static shear and moments in Z-Y Plane
Figure 5. Dynamic shear and moments in Y-Z plane
Figure 6. Dynamic shear and moments in Z-Y plane
Figure 7. Dynamic torsion moment along X-axis
To start the structural optimization problem, the optimization of a foam-filled sandwich fuselage structure is initiated first. The material used in the analysis is given in Table 2. It is expected that a sandwich panel has a higher buckling strength than a stiffened panel, so the upper bound of the design variable Ls in the 2nd layer of the optimization is taken to be the same as the while-loop condition given in the 1st layer of the optimization. The lower bound is set to 0.1 meters. Proceeding in the clockwise direction and starting from panel 1, as shown in Figure 3, the 2nd layer calls the 3rd layer for the optimization of each panel. Meanwhile, the load sets given in Figures 4-7 are integrated using Equation (3) and converted to panel load intensities. The length of the first section is optimized at 3.62 meters. Consequently, the optimizer suggests only two sections with approximately the same lengths but different weights. The optimized lengths and corresponding weights of the fuselage sections are given in Table 3.

Table 2. Material properties

Carbon fiber fabric:
• Flexural modulus, E11 = E22: 45000.0 N-mm-2
• Shear modulus, G12: 4000.0 N-mm-2
• Poisson ratio, ν12: 0.03
• Density: 1.561e-6 kg-mm-3

Foam core:
• Flexural modulus, E11 = E22: 75.0 N-mm-2
• Shear modulus, G12: 24.0 N-mm-2
• Poisson ratio, ν12: 0.0
• Density: 52e-9 kg-mm-3
While keeping in mind the trend of the optimized section lengths in a sandwich structure, the foam-filled stiffened panel optimization is initiated in the second case
with a fixed upper bound, i.e. a length of 2.0 meters. The results are quite different from those of the first case and the fuselage is optimized into five sections with diverging weights. Table 3 shows the section lengths with their weights per length. The length of the second-to-last section settles at the upper bound, which indicates that the analysis could have proceeded further and optimized a section with a larger length, but the upper bound limited it to the given value. The length of the last section is automatically selected as the portion remaining when the sum of the optimized section lengths is subtracted from the total length Lf of the fuselage.

Table 3. Section lengths and weight comparison
Sec. # Ns     Foam-Filled Sandwich Structure (Case 1)        Foam-Filled Stiffened Structure (Case 2)
              Section Length, m   Weight Ratio, kg-m-1       Section Length, m   Weight Ratio, kg-m-1
1             3.6282              238.27                     1.2237              115.13
2             3.4783              215.6                      1.6119              66.46
3             –                   –                          1.8059              62.79
4             –                   –                          2.000               62.56
5             –                   –                          0.465               32.31
Total Weight                      453.87                                         339.25
Comparing the total weights of both design concepts, i.e. the stiffened and the sandwich structure, shows that the foam-filled stiffened panel has an advantage over the sandwich structure. The weight of the frames is not yet included in the design, so the weight of the stiffened structure will increase further. However, a practical structural design of the fuselage requires several frames to support the floor and the connections between the fuselage and the wings or tail sections, which makes it obvious that frames must be included in the sandwich structure too. From this study, the only advantage of the sandwich structure over the stiffened one appears to be in terms of manufacturing. As stated in Reference 3, stiffened panels require several manufacturing processes and many factory hours, whereas sandwich panels are easy to manufacture and require fewer factory hours.
5 Conclusions
An attempt has been made to formulate an optimization algorithm to be used for structural optimization in fuselage design. The algorithm is divided into three layers of optimization, where each layer has its own objective and constraint functions and design variables. The first layer optimizes the full fuselage structure while keeping the constraint of a negative and real eigenvalue solution of the mass and stiffness matrices. The second and third layers take care of the section-level and panel-level optimization, respectively, where the objective is to minimize the ratio of
weight per length of a section or a panel. The constraints in this case are buckling and material strength of the concerned fuselage section or panel. The aft fuselage structure of an executive jet is taken as a test case for the optimization. The structure is designed with two types of concepts, i.e. foam-filled sandwich panels and foam-filled stiffened panels. Structural load sets due to a discrete gust input are created and an optimization problem is solved for each concept. The results show that the foam-filled sandwich structure is quite efficient in terms of panel buckling and cylinder wrinkling, requiring only one frame over the 7.1 m length of the aft fuselage, whereas the stiffened structure requires at least five to six frames. In terms of weight ratio per section length, however, the stiffened structure has an advantage over the sandwich one, although this may not be very practical where a fuselage structure requires quite a number of frames to hold structures like the floor, wings and tail plane. The only advantage of the sandwich structure over the stiffened one seems to be in terms of manufacturing.
6 References [1] Baluch HA, Slingerland R, van Tooren MJL. Dynamic Loads Optimization during Atmospheric Turbulence on a Flexible Aircraft. Young Persons Aerodynamic Conference Royal Aeronautical Society Bristol UK October 29-30 2006. [2] Baluch HA, van Tooren MJL, Schut EJ. Design Tradeoffs for Fiber Composite Fuselages under Dynamic Loads using Structural Optimization. 49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Material Conference 2008. [3] Jhonson RW, Thomson WL, Wilson RD. Study on utilization of advanced composites in fuselage structures of large transports. NASA 1985; CR-172406. [4] La Rocca, G, van Tooren, MJL. Enabling distributed multidisciplinary design of complex products: a KBE approach. J of Design Research 2007; 5: 1605-1613. [5] Matlab Software Package. The Mathworks USA 2004; Version 7.0 - Release 14. [6] Meirovitch L, Tuzcu I. Control of Flexible Aircraft Executing Time-Dependent Maneuvers. J of Guidance, Control and Dynamics 2005; 28: 1291-1300. [7] Schut EJ, van Tooren MJL. Design “Feasilization” using knowledge-based engineering and optimization techniques. J of Aircraft 2007; 44: 1776-1786. [8] Tuttle ME, Zabinsky, ZB. Methodologies for Optimal Design of Composite Fuselage Crown Panels. Proceedings of the 35th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Material Conference Apr 18-20, 1994: 1394-1405. [9] Watson JC. AV-8B composite fuselage design. Journal of Aircraft 1986; 19: 235-238.
Service Oriented Concurrent Engineering with Hybrid Teams using a Multi-agent Task Environment
Jochem Berends a,1 and Michel van Tooren b
a PhD candidate, Delft University of Technology, Delft, Netherlands.
b Professor, Delft University of Technology, Delft, Netherlands.
Abstract. The MDO process for products can be supported by the automation of analysis and optimisation steps. A Design and Engineering Engine (DEE) is a useful concept to structure this automation. To power the automatic analysis, an agent-based framework has been developed to support teams of humans and agents. The agent-based framework seeks to integrate the human and computer engineer into a hybrid design and build team, providing engineering services to the product design team. In this perspective four levels of scoping are identified: the organisational scoping level, the framework or integration level, the tool or engineering service level and the data scoping level. These four scoping levels are a good frame of reference to link the identified actors, the four main established functions of a framework and the recent contributions in engineering framework development.
Keywords. Service Oriented Engineering, Multidisciplinary Design Optimisation, Design and Engineering Engine, Knowledge Based Engineering, Multi-agent Task Environment, Engineering Frameworks.
1 Introduction
Designing advanced engineering systems, like aircraft, is an intrinsically complicated process involving many interwoven elements. Teams of engineers need a technology that will enable them to improve virtual access to their ideas, model the multidisciplinary aspects of a product, manipulate geometry and the related knowledge, and investigate multiple what-ifs about their design. To achieve the above in a reasonable time and with confidence in the reliability of the results, the concept of a Design and Engineering Engine (DEE) [3],[7],[14] is proposed to drive the multi-disciplinary design optimisation (MDO) of aircraft design with engineering teams. At the heart of the DEE, a generative aircraft product model is implemented in a multi-model generator (MMG). This modelling tool, using Knowledge Based Engineering (KBE) methodologies, is able to
1 PhD Researcher, Faculty of Aerospace Engineering, Delft University of Technology, Kluyverweg 1, 2629 HS Delft, The Netherlands, Tel: +31 15 278 5334, Email: [email protected]; http://www.lr.tudelft.nl/dar
Figure 1. The Concept of the DEE to support MDO analysis; left the main process flow; right the Multi-Model Generator and the discipline analysis tools.
generate many different aircraft configurations and variants, using combinations of specifically developed classes of objects, called High Level Primitives (HLPs) [7]. The HLPs provide designers with a powerful concept to capture and re-use not only the geometric aspects of a design, but also provide capability modules, which include rules for the automatic creation of analysis models for various disciplines. Based on the research on the MMG and the HLPs in particular, a framework process primitive has been created and described by Schut et al. [11]. This so-called Engineering Primitive (EP) integrates the methods and knowledge needed to instantiate and "Feasilize" [12] a design. All elements in the DEE can be seen as engineering services contributing to a pool of services. A human operator actor that needs to determine the behaviour of a possible product solution proposal selects the services from this service pool. An automation framework through which the behaviour of a product solution proposal is evaluated is provided by the multi-agent task environment (MATE) [1][2]. This agent framework forms the non-human part of the hybrid team. A prototype framework capable of supporting such distributed and concurrent MDO analysis, using the concept of a DEE, is the TeamMate Multi-Agent Task Environment. This framework is under active development and a prototype has been implemented in several DEE projects, such as a what-if study of a tail-plane design subject to dynamic loads [3], a structural optimisation of a wingbox [13], several master theses and a tool for the design of electrical wire harnesses [4]. Since the framework is the enabler for the DEE, this concept is first explained in the next section.
2 An Overview of the DEE Concept
A Design and Engineering Engine (DEE) (Figure 1) is defined [7] as an advanced design environment, where the design process of complex products is supported and accelerated through the automation of non-creative and repetitive design activities. Figure 1 shows the concept of the DEE. The main components of the DEE are:
• Initiator: responsible for providing feasible starting values for the instantiation of the generative parametric product model.
• Multi-Model Generator (MMG): responsible for instantiation of the product model and for extracting different views on the model in the form of report files to facilitate the discipline specialist tools.
• Analysis (Discipline Specialist) tools: responsible for evaluating one or several aspects of the design in their domain of discipline (e.g. structural response, aerodynamic performance or manufacturability).
• Converger & Evaluator: responsible for checking the convergence of the design solution and the compliance of the product's properties with the design requirements, and for generating a new design vector.
These elements operate in loops. The definition of the product is based on the selection (or creation) of High Level Primitives (HLPs). These are functional building blocks, containing an a priori definition of a family of design solutions. These functional blocks encompass sets of rules that use sets of parameters to initiate objects that represent the product under consideration. The object-oriented approach of the HLPs allows capability modules to specify the representation of the product as desired by the various engineering disciplines.
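A minimal sketch of how these components could interact in one DEE analysis loop is given below; the class and method names (initiator, mmg, converger, view, ...) are illustrative stand-ins and do not reproduce the actual DEE or TeamMate implementation.

```python
# Schematic of one DEE iteration loop built from the components listed above;
# all names are illustrative placeholders, not the actual DEE/TeamMate code.
def run_dee(initiator, mmg, discipline_tools, converger, requirements,
            max_iterations=50):
    design_vector = initiator.feasible_start(requirements)
    for _ in range(max_iterations):
        product_model = mmg.instantiate(design_vector)       # generative product model (HLPs)
        reports = {tool.discipline: tool.analyse(product_model.view(tool.discipline))
                   for tool in discipline_tools}              # structural, aero, cost, ...
        converged, design_vector = converger.evaluate(reports, requirements)
        if converged:
            return product_model, reports
    raise RuntimeError("DEE loop did not converge within the iteration budget")
```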
3 Analysis of the MDO problem domain
Various levels of scoping and several actors have been identified in relation to the MDO problem-solving domain. This differentiation in scope and identification of the actors is necessary to focus the development and implementation of solutions for MDO support frameworks.
3.1 Identification of Process versus Product, Scoping and Actors
As seen in Figure 2a, the scoping starts at the top with the organizational level. On this level, the design process is executed and managed. The interest of the organization is that the design problem that needs to be addressed is solved efficiently (within time and budget). All the human actors that are identified are part of this organizational level, as this scoping level is the interface between the organization and the problem solving itself. Five actors are identified (Figure 2a), of which three are actively part of the Design and Build Team (DBT). All three actors have close relationships with another level of scoping. A DBT is characterized by individual members being
Figure 2. (a) Four levels of scoping (Organization, Agent, Tool and Data) and five actors (maintainer, manager, operator, integrator, specialist) are identified. (b) Relation between process vs product, scoping levels, key advances in framework development [9] and technical requirements [1][10].
responsible for their respective knowledge domains and the whole team being responsible for meeting the team objectives and deliverables. The first actor within the DBT is the operator actor. This actor is responsible for selecting services provided by the framework to produce a problem-solving environment in which an MDO problem is to be solved. This actor does not need to have a full understanding of all the tools that are involved in solving the problem; this understanding and selection process is carried out on the framework level. An integrator actor is responsible for the framework level. The integrator facilitates the cooperation between the organizational level and the tool level. Predominantly, this actor is responsible for ensuring that functions are available on the organizational level in order to operate the framework and that correct interfacing exists between the various tools on the tool level. The third and very important actor is the specialist. The specialist is responsible for the correct functioning of the discipline analysis tools that provide the engineering services to the framework and consequently to the operator. The last two actors are placed outside the DBT as they are mainly facilitating actors. The maintainer ensures the proper functioning of all software and hardware components within all scoping levels. The manager actor ensures that the necessary resources are available for the DBT and guards the time and resource constraints. The framework level or services integration level is the level for which the integrator actor is responsible. There is a one-to-many relationship between the organisation and its frameworks, and a one-to-one relationship between a framework and the problem-solving environment, called a DEE, that it forms with its respective tools. The specialist tools are contained within
Figure 3. For each scoping level a set of market suppliers and applications is identified. On the left the four key advances in framework development by Padula [9] are listed.
the tool level or (engineering) services level, which is the domain of expertise of the specialist actor. The final scoping level is the data level. In essence, all data are a product of the tool level and therefore no direct actor is identified. One could say that the specialist actor is indirectly responsible for this level. However, an integrator actor would like to control this level in order to facilitate inter-communication between the various tools and thus provide a working framework. Figure 2b describes the relations found in four important articles related to framework design to support MDO processes. Salas [10] describes several requirements for framework design that match and overlap with the requirements proposed by Berends [1], resulting in four requirement groups: Resource Management, Resource Interfacing, Process Execution Support and Information Flow Control. Another interesting observation is that the four key advances in framework development described by Padula [9] can be matched to the four scopes described earlier and displayed in Figure 2a.
3.2 Identification of scoping-specific tools
When looking at the role of the four scoping levels within the MDO problem domain, it can be deduced that each level contains a specific part of the MDO solution domain. Moreover, various commercial engineering tool suppliers are active in providing applications used by the engineering-intensive industry, as can be seen in Figure 3, with the note that the figure is far from complete with respect to the market suppliers. On the data level, product lifecycle management tools are found, like Dassault Systemes Enovia MatrixOne, Dassault Systemes Enovia SmarTeam, Siemens UGS
TeamCenter, and Oracle Agile PLM. These data-level tools provide enterprise-integrated management of product data, storage and versioning control, often integrated with product modelling tools. The tool level is the scoping level to which most applications and their (market) suppliers can be linked. Padula [9] also identified the tool scoping level (modularity) as the first level to mature (chronologically), before the data level (data handling), the framework level (parallel processing) and eventually the organisation level (user interfaces) mature (see Figure 3, Figure 2b). Several suppliers can be linked when considering solely the structural design and analysis domain, the product modelling domain and the aerodynamic design and analysis domain. For the structural design and analysis domain, products like MSC MD Nastran, Dassault Systemes Simulia Abaqus, Ansys Structural, Siemens UGS Femap, and Siemens NX Nastran are available. For the aerodynamic design and analysis domain these are AMI VSAero and Ansys Fluent. Finally, for the product modelling domain, Siemens UGS, Dassault Systemes Catia, Dassault Systemes Solidworks and eventually GDL Genworks can be linked. Most of the specialist analysis and modelling tools are placed on this scoping level. The framework or integration level has its own set of tools. The most common engineering frameworks to date within the industry are LMS/Noesis Optimus, Phoenix Integration ModelCenter and Engineous iSight. These frameworks are all equipped with various design space search tools like optimisers, convergers, Design of Experiments (DoE), full factorial or Gaussian search and so forth. The TeamMate research framework under development by the authors is also linked to this scoping level, although it is not portrayed in Figure 3. The last and top level, the organisational level, is the most interesting level. Padula [9] describes the creation of user interfaces as the last advancement in design support frameworks, and one yet to be realised. On the enterprise integration level, high-level tools and applications are to be found. Suppliers of knowledge engineering and management tools, Epistemics PCPack and Mondeca, are linked to this level. This level is in active development and in an embryonic stage. The next release of the TeamMate framework software provides technologies to integrate intuitive design environments in a later phase. Developing such a design environment is a continued focus of research by the authors. The next section handles the design of the TeamMate framework in an abstract way, with the actors and scoping levels described in this section as a background.
3.3 Multidisciplinary Design and Build Teams that include agent team members
When a project is in a detailed design phase, which features a lot of repetitive analysis work, resourceful Specialists start using and creating tools to offload repetitive engineering tasks. These tools tend to be created ad hoc and are totally inflexible whenever another project or problem is concerned. Moreover, these tools are generally poorly documented, so that only the owner is able to operate the tool. However obvious the short-term benefits may be, the long-term investment of these resources is completely wasted. This is mainly because the focus in the detail phase is on the product, blurring the capturing of common process features. Another problem of having Specialists within single project teams is that the cross-learning of these Specialists with other Specialists of the same discipline seated within other projects is limited. It is more likely for a Specialist to acquire knowledge from other specialist areas in their own team than from other Specialists in their own
Figure 4. (a) Traditional matrix structure with DBT project teams. (b) A service oriented organisation structure with Discipline Specialist Teams supporting various projects.
area, which can hamper organisational learning. To tackle these problems, a service-oriented paradigm is proposed. Teams of Specialists develop a collection of tools that provides services to the various engineering projects. Based on a common collection of discipline-specific tools, various project- and product-specific additions can be created. As these additional capabilities are created and maintained by the team of Specialists, most of the product-family-specific additions, i.e. those not specific to a single product, can be reused for other engineering projects. Moreover, the tools (or engineering services) created by the teams of Specialists are connected using the multi-agent framework to form a working DEE and eventually a hybrid DBT, consisting of humans and agents.
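As a rough illustration of this service-oriented idea, the following Python sketch shows how a discipline team could publish tools under capability names so that projects (or agents) consume a service rather than a specific tool. The registry class, the capability name and the toy mass estimate are invented for illustration and do not represent the actual TeamMate implementation.

class ServiceRegistry:
    """Hypothetical registry of discipline-specific engineering services."""
    def __init__(self):
        self._services = {}  # capability name -> callable tool wrapper

    def register(self, capability, tool):
        # A Specialist team publishes a reusable tool under a capability name.
        self._services[capability] = tool

    def request(self, capability, **inputs):
        # A project asks for a capability, not for a specific tool or address.
        return self._services[capability](**inputs)

registry = ServiceRegistry()
# The discipline team registers its common tool once (dummy formula) ...
registry.register("wing.mass_estimate", lambda span_m, area_m2: 22.0 * area_m2 + 3.5 * span_m)
# ... and any project team can consume the service.
print(registry.request("wing.mass_estimate", span_m=34.1, area_m2=122.4))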
4
Design of the Multi-Agent Task Environment Framework
The design of the agent-based framework is inspired by the problem that earlier-generation design support frameworks address the automation of MDO problems often as a top-down execution of a string of individual discipline analysis tools. These strings are executed from start to finish. These support frameworks are often created by (a team of) engineers during the design process in an implicit way and need heavy adaptation when a new MDO problem or product is addressed. This problem is defined as the ad-hoc and inflexibility problem. Moreover, when errors in a particular discipline analysis tool emerge, the highly coupled nature of an execution string often leaves no other possibility than to re-execute all or parts of the tool chain, even when this is not necessary. In theory, only those tools that depend on output data from the discipline tools that produced an error need to be re-executed. Re-executing the whole string is a waste of resources in the form of CPU time. To overcome the identified obstacles a multi-agent task environment is developed that addresses the aforementioned problems in a structured and consistent way: decoupling the knowledge of the product from the process and being able to handle a family of design problems (objective 1). Moreover, the framework should prevent waste of resources when partial re-execution of tools is needed (objective 2) and should avoid channelling all data through a single bottleneck (objective 3). Furthermore, instead of dictating up-front to each tool its address and freezing this in the chain definition, the problem is communicated to the framework, and each agent-and-tool combination uses its communication skills and knowledge of the problem to request information through a specified, but not tool- and address-specific, request (objective 4). Entities in the virtual team of
Figure 5. (a) DEE process flow for support of MDO. (b) DEE translated into a hybrid team layout using a multi-agent system to integrate various tools and discipline specialists.
agents and tools become Knowledge Workers: respecting their own responsibility for data handling and acquisition within and between disciplines. Finally, when working in a multidisciplinary problem domain, a language should be used that facilitates clear communication and avoids engineering-domain-specific language. Engineering-domain-specific language is acceptable for internal communication, but a common engineering language is mandated whenever inter-disciplinary communication is concerned (objective 5). From these objectives the four main functions were drafted and embedded in a set of requirements on which the first release framework software is based. The four main functions are resource management, resource interface, process execution support, and information flow control (Figure 2 and Figure 6). These four established functions form the backbone of the framework design and implementation. Following this review of the first release agents and proposals for a second release, a new set of requirements has been determined based on the excellent work of Salas and Townsend [10], Padula and Gillian [9] and earlier work by the authors [2]. The result is displayed here in Figure 6 for completeness. In this figure several colours and fill patterns are used to denote the origin of the various
Figure 6. The Requirements Discovery Tree for MDO frameworks is a merge of proposed requirements by Salas and Berends [10][1].
requirements as found by the mentioned sources. Based on this set of requirements and the findings of the first release framework software, a second generation of the software has been designed and is being implemented.
4.1 Industrial Network Architectures
When introducing distributed and concurrent engineering services, the physical network architecture wherein these services operate becomes a very important factor in the operation of these services. This was learned from earlier implementations of the engineering framework. Industry and corporations have stringent security policies and consequently have compartmentalised network architectures in place to protect corporate data. Based on a review amongst various commercial partners cooperating in the TeamMate research, several network architectures were identified. Derived from these, several network architecture use cases were drafted to serve as the benchmark for the integration of TeamMate into these architectures. Network architectures outside the scope of those described can be derived from them, or the framework software can be re-configured to suit the alternative architecture.
Figure 7. (a) Single Agent Architecture. (b) Multiple Agent Architecture within a corporate LAN: bidirectional traffic via XML-RPC over HTTP(S).
4.1.1 Single agent architecture
In Figure 7a a simple architecture is displayed, in line with the simplest of use cases: a single engineer. On a single machine in a corporate Local Area Network (LAN) a single release 2.0 package is installed. Each package can contain multiple agents, so the engineer, identified as the operator actor, can generate a simple collection of engineering services. No connections outside the LAN are necessary.
4.1.2 Multi-agent architecture
When multiple computers within a corporate LAN need to work together (Figure 7b), this is possible by installing release 2.0 TeamMate software on each computer. These computers can have different system architectures. As long as firewalls on these computers allow bi-directional connections between the agents, the framework is operational. The agents utilise standardised ports and protocols for their network traffic. No traffic outside the corporate LAN is present.
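The XML-RPC traffic of Figure 7b can be sketched with Python's standard library. The port number and the two exposed methods below are assumptions made for illustration only; the actual TeamMate agents may expose a different interface.

from xmlrpc.server import SimpleXMLRPCServer

# Illustrative capability table of one agent within the corporate LAN.
CAPABILITIES = {"structures.fem_analysis": "idle", "aero.panel_method": "idle"}

def list_capabilities():
    """Return the engineering services this agent can provide."""
    return sorted(CAPABILITIES)

def execute(capability, parameters):
    """Stand-in for running a wrapped discipline tool and returning its result."""
    if capability not in CAPABILITIES:
        return {"status": "unknown-capability"}
    return {"status": "done", "capability": capability, "inputs": parameters}

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)  # port is an assumption
server.register_function(list_capabilities)
server.register_function(execute)
server.serve_forever()  # other agents reach these methods via XML-RPC over HTTP(S)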
4.1.3 Hybrid agent architecture
When combining the two LANs of Figure 7 the landscape changes drastically. Between the corporate LANs there should be a direct connection between all agent installations, which most of the time is not possible due to the security policies in effect. In order to bridge the two LANs a need arises for an agent installation which acts as a proxy and is accessible by anyone within the connected LANs (Figure 8). This agent, denoted in Figure 8 as the MATE server, automatically becomes a master node in the framework and performs master functions, such as the distribution of messages, of a list of capabilities available within the framework and, in rare cases, of data. This architecture is currently enforced by recent changes in the IT infrastructure of Delft University of Technology. Features of the hybrid architecture are:
- Introduction of a dedicated agent acting in a server fashion. This MATE server resides in a so-called demilitarized zone (DMZ), reachable for all agents within the enterprise MATE framework.
- The MATE server can provide other services as well and can open up the web interface (via HTTP or HTTPS) to outside clients. In effect, any agent can provide this web interface service.
- Bidirectional traffic (firewall opened) between the MATE server and the various LANs within the enterprise.
- Bidirectional traffic (firewall opened) between the different LANs and their contained agents.
- Introduction of a polling client in a remote LAN segment, possibly integrated in the enterprise infrastructure via a secure Virtual Private Network (VPN).
Figure 8. Hybrid Multi Agent Architecture within enterprise network architecture.
Figure 9. A feature request by tool developers was to be able to initiate data search requests like the green dotted arrow by the ‘actor’ (in this case another agent) and announce data pattern availability by the tool to the agent.
4.2
Open Standards and Application Programming Interface (API)
The TeamMate 2.0 design is based on web services and open standards. While operating and integrating DEEs and tools in the first release software, it was discovered that the ability of various discipline tools (Matlab, PyCoCo – an application to perform automated FEM analysis [8]) to communicate directly with the framework would be beneficial. Several features were introduced to enable this communication. The main features requested were the ability to instruct the agent to initiate a search request and the ability to inform the agent that new data was available (Figure 9). These features were necessary to be able to integrate search tools (optimisers, convergers) within the framework. Search tools produce a new dataset (variable and parameter vector) for tools within the search loop and need to request the output of the analysis of this dataset. The need for an open interface that is in widespread use and integrated in several programming languages was the basis for the choice of an XML-RPC interface for the second release. All communication between the agents, except data transfer, is performed exclusively via the XML-RPC interface. This interface is known and can be made available to any tool developer that wants to interface with the framework. It might even be possible for a tool developer to mimic the behaviour of the agents by only using the calls to the interface.
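The two requested features of Figure 9 can be sketched from the tool developer's side as plain XML-RPC calls. The endpoint and the method names announce_data and request_data are invented for illustration; the real agent interface may differ.

import xmlrpc.client

# Hypothetical local agent endpoint; address and method names are assumptions.
agent = xmlrpc.client.ServerProxy("http://localhost:8000")

# An optimiser announces that a new design vector is available to its agent ...
agent.announce_data("wing.design_vector", {"span_m": 34.1, "sweep_deg": 27.0})

# ... and requests the matching analysis output without knowing which tool,
# on which machine, will eventually produce it.
result = agent.request_data("wing.analysis_output", {"iteration": 12})
print(result)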
5 Implementation Status
Currently the implementation of the second release of the Multi Agent Tasking Environment software framework is well underway. In May 2008 the current implementation of the MATE framework demonstrated basic functionality to industry in a nationally and internationally funded project, which is described in [4]. It is scheduled to demonstrate the full capabilities of the second release framework in the third quarter of 2008. The second release framework will be tested within several national and European funded projects in very close collaboration with industry.
6
Reference List
[1] Berends JPTJ. Development of a Multi-Agent Task Environment for a Design and Engineering Engine, M.Sc. Thesis, Delft University of Technology, Faculty of Aerospace Engineering, Delft, The Netherlands, 2005.
[2] Berends JPTJ and van Tooren MJL. Design of a Multi-Agent Task Environment Framework to support Multidisciplinary Design and Optimisation, 45th AIAA Aerospace Sciences Meeting and Exhibit, AIAA-2007-0969, Reno, NV, USA, 2007.
[3] Cerulli C, Schut EJ, Berends JPTJ and van Tooren MJL. Tail Optimization and Redesign in a Multi Agent Task Environment, 47th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Newport, RI, USA, 2006.
[4] van der Elst SWG and van Tooren MJL. Application of a Knowledge Engineering Process to Support Engineering Design Application Development, 15th ISPE International Conference on Concurrent Engineering, Belfast, Ireland, 2008.
[5] Fishwick PA, editor. Handbook of Dynamic System Modeling, Chapter 19 "Process Algebra", Chapman & Hall/CRC, ISBN 1-58488-565-3, Boca Raton, FL, USA, 2007.
[6] Hofkamp AT and Rooda JE. "χ (Chi) Language Reference Manual", Eindhoven University of Technology, 2002. Available at . Accessed: June 1st, 2008.
[7] La Rocca G and van Tooren MJL. Enabling Distributed Multidisciplinary Design of Complex Products: A Knowledge Based Engineering Approach, Journal of Design Research, Vol. 5, No. 3, pp. 333-352, Inderscience Enterprises Ltd., 2007.
[8] Nawijn M, van Tooren MJL, Arendsen P and Berends JPTJ. Automated Finite Element Analysis in a Knowledge Based Engineering Environment, 44th AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, USA, 2006.
[9] Padula SL and Gillian RE. Multidisciplinary Environments: A History of Engineering Framework Development, 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Portsmouth, VA, USA, 2006, AIAA 2006-7083.
[10] Salas AO and Townsend JC. Framework Requirements for MDO Application Development, 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, St. Louis, MO, USA, 1998, pp. 261-271, AIAA-1998-4740.
[11] Schut EJ and van Tooren MJL. Engineering Primitives to Reuse Design Process Knowledge, 49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 4th AIAA Multidisciplinary Design Optimization Specialist Conference, Schaumburg, IL, USA, 2008.
[12] Schut EJ and van Tooren MJL. Design 'Feasilisation' using Knowledge Based Engineering and Optimisation Techniques, Journal of Aircraft, Vol. 44, No. 6, 2007, pp. 1776-1786.
[13] Schut EJ, van Tooren MJL and Berends JPTJ. Feasilization of a Structural Wing Design Problem, 49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Schaumburg, IL, USA, 2008.
[14] van Tooren MJL, Nawijn M, Berends JPTJ and Schut EJ. Aircraft Design Support using Knowledge Engineering and Optimisation Techniques, 46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Austin, TX, USA, 2005.
Systems Engineering and Multi-disciplinary Design Optimization
Michel van Tooren and Gianfranco La Rocca
Full Professor Systems Integration Aircraft, TU-Delft, The Netherlands; Assistant Professor Aircraft Design, TU-Delft, The Netherlands.
Abstract. The worlds of Systems Engineering and Multi-disciplinary Design Optimization are distinct disciplines which represent the qualitative and quantitative sides of product development methodology. Merging these two worlds would improve the applicability of both. This will, however, require a substantial range of additional concepts, methods and tools on both sides. The Design and Engineering Engine is such a concept. It sets a framework in which part of the abstractions from Systems Engineering can be implemented using the Multi-disciplinary Design Optimization approach as a structured design domain search. The Design and Engineering Engine adds the concepts of High-Level Primitives and Capability Modules to structure a-priori product family characterization and re-usability of engineering processes. The potential of these concepts has been demonstrated successfully in several pilot projects.
Keywords. Systems Engineering, Multi-disciplinary Design Optimization, Knowledge Based Engineering
1 Introduction Systems Engineering has become the standard framework for product development in most of the high-tech industry. It offers a set of methods and tools that help to structure the process that leads from market need identification to product design documentation. It uses the concept of the System Life Cycle to minimize the risk of developing the wrong product by assuring the timely involvement of the right disciplines in all the Requirements Discovery challenges, the Design for X tasks, the trade-off activities, the reviews and the compliance finding work. The qualitative nature of most of the Systems Engineering concepts and the tendency of the Systems Engineering community to overload the world with abstractions, terminology and yet another tool, however, divides the engineering world in 1
Full Professor Systems Integration Aircraft, Delft University of Technology, Faculty of Aerospace Engineering, Design of Aircraft and Rotorcraft, Kluyverweg 1, 2629 HS Delft, The Netherlands; Tel: +31 (0) 15 2784794; Fax: +31 (0) 15 2789564; Email: [email protected]; http:// www.lr.tudelft.nl/dar
believers and non-believers in Systems Engineering. While Systems Engineering is meant to structure the synthesis of knowledge and competences to achieve robust product development, it can easily turn into an isolated, even esoteric discipline with an engineering dialect of its own, decoupled from those it should unite. When used properly, Systems Engineering creates a structure that provides freedom for creativity and innovation. Multi-disciplinary Design Optimization (MDO) is an extension of Operations Research from the operation of systems into the design phase of systems. MDO also offers a set of methods and tools that supports the product development process. However, this tool set does not consist of qualitative methods but of well-structured mathematical tools. It allows an automated design domain search based on objective function(s) and constraints for a product described by a set of design variables. Although promising, MDO does not yet find wide application in industry. Many of the applications are still closer to Operations Research applied to the requirements analysis phase, using performance parameters of the system under consideration as design variables, than to true design optimization cases that apply first-principle analysis techniques and search methods to a product described with parameters not in the behavioural space but in the 3D world, i.e. describing its constituents, not its performance. This, however, requires complex product models and true automated analysis, a reality not yet achieved. MDO, too, can easily end up decoupled from real engineering practice. Too much focus on mathematics instead of the actual design problem can isolate the MDO expert from the designers and engineers. When used properly it can help to improve time to market and first-time-right performance. Roughly speaking, SE can be considered a qualitative framework of tools to solve ill-posed problems. It helps to discover a set of requirements and deliver a verified solution to these requirements. It assumes a process structure to solve a design problem; it does not assume product solutions to the design problem itself. MDO is a quantitative framework to solve design problems. MDO, like SE, assumes a structure to solve a design problem, but, since it is a mathematical approach to the problem, it also needs, from the start, a description of the solution space itself in the form of design variables. A first conclusion is that SE and MDO both aim at supporting the product development process. SE supports the total engineering effort while MDO helps find the best parameter values for a pre-selected family of design solutions against quantitative requirements with mathematical tools. Therefore MDO should be seen as a tool within the SE context.
2 Connecting SE and MDO
SE and MDO have a shared background and a mutual interest. Both originate from US DoD requirements and were conceived to improve the robustness of the design of complex military equipment. MDO is founded on Operations Research, also a DoD demand, which brought mathematics into solving complex military operational questions, including operations with complex military equipment. The mutual interest is a more widespread use of their principles.
2.1 Principles
To better connect SE and MDO it is important to appreciate the basic assumptions of both. Starting with those of Systems Engineering:
- Each complex product can be seen as a system.
- Each system has a life cycle consisting of different phases: it is conceived (starting with a need (market) or a seed (invention)); it grows (design and engineering); it is born and raised (production); it has a professional life (operation); it needs care (support and maintenance); it dies (phase out + re-use/re-cycling).
- Each life cycle phase generates requirements that have to be taken into account during the design and engineering phase of the complex product.
- Within each phase different disciplines can generate requirements.
- Design of systems requires Design for X: for every discipline that generates requirements one has to know how to express proper requirements for each relevant life cycle phase, have design options for each of these requirements, be able to assess the behaviour of a system synthesized from these options, and specify proper methods to verify compliance with the requirements.
- Each of the life cycle phases can require the development of related systems like a production system (e.g. a new production plant), operational systems (like a ground system for an Unmanned Aerial Vehicle), support and maintenance systems (like new tools and equipment for maintenance), and recycling systems (e.g. if you design a new bottle for beer).
SE comes to life when combined with generic Project Management techniques. It can very well be seen as a standard for phasing a project, providing tools for use within these phases and standardizing a number of deliverables. Concurrent Engineering in this respect can be regarded as a specific implementation of some
of the SE elements. It tries to achieve discovery of requirements and proper judgment of design options by simultaneous involvement of representatives of the different disciplines, and adds the principle of simultaneous activities (doing as much as possible in parallel) to shorten time to market. The basic assumptions of MDO can be summarized as:
- a system can be described as one or more sets of hierarchical or non-hierarchical, bounded or unbounded design variables;
- design constraints originating from different disciplines can be expressed as functions of these design variables;
- a best system can be chosen based on an objective function expressed in the design variables;
- the process of choosing can be implemented as a search algorithm, tuned to the specific characteristics of the problem (e.g. discrete vs continuous variables, availability of gradient information).
MDO needs a computational framework to come to life. Within this framework the optimizer, the design of experiments tool (or any other tool used to come up with start values for the set(s) of design variables), the analysis tools and, if applicable, the meta-models need to be smoothly connected. Creating generic systems that can be sufficiently tuned to company-specific practices and are flexible enough to adjust to new product ideas and new tools is key.
2.2 Tools
The tools used by Systems Engineering can best be explained in the context of project management. The tools are elements of a set of logical definitions, diagrams and methods that support engineers in structuring, starting, executing and reporting the different phases of the product development process. The most important SE-tools focusing on the generic project management process are: Project objective statement - Work Flow Diagrams + Design and Development Logic - Work Breakdown Structure - Gantt Chart - Version Control and Templates. The most important SE-tools focusing on the product design specific items in project planning are: Mission Need Statement - Functional Flow Diagram and Functional Breakdown Structure - Requirements discovery tree / List of Requirements - N2 charts - Technical budgeting / Risk management - Design Option Tree - Design verification and certification / the compliance check list - Trade-off methods and tools - Design recording (including hardware diagrams) - Market analysis - Design for X - Quality Function Deployment (House of Quality). There are many tools available to help the MDO process, ranging from programming languages to domain-specific pre-programmed frameworks which include optimization methods, design of experiments, interfacing to and/or
including analysis tools for different disciplines, meta-modelling tools like response surface techniques, post-processing tools including data-mining and visualization tools. 2.3 The connection Both SE and MDO force a design team to express proper requirements for each life cycle phase, to have design options for each of these requirements and to be able to assess the behaviour of a system synthesized from these options in order to compare the system's behaviour with the requirements and value each result of the search (Design for X). Where SE covers the complete development and allows for gradual discovery of requirements parallel to gradual discovery of solutions, MDO needs a-priori definition of the requirements and needs a-priori definition of the product family to be assessed against these requirements. The strength of MDO is its capability to quantitatively scan a design domain. To have MDO accompanying the SE process and follow the requirements discovery and the design option discovery, it must be delivered in a framework that allows agile specification and execution of a sequence of MDO problems.
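The quantitative character of MDO summarised in Section 2.1 (design variables, constraint functions, an objective and a search algorithm) can be illustrated with a deliberately small example. The objective, constraint and numbers below are invented, and the sketch is a generic constrained optimisation, not the DEE itself.

from scipy.optimize import minimize

# Design variables x = [span, chord]; the functions are toy stand-ins.
def objective(x):
    span, chord = x
    return 1.2 * span * chord + 0.8 * span          # stand-in for e.g. structural mass

def area_constraint(x):
    span, chord = x
    return span * chord - 50.0                      # wing area must exceed 50 m^2

result = minimize(
    objective,
    x0=[30.0, 2.0],                                 # start values, cf. the initiator role
    bounds=[(20.0, 40.0), (1.0, 4.0)],              # bounds on the design variables
    constraints=[{"type": "ineq", "fun": area_constraint}],
    method="SLSQP",
)
print(result.x, result.fun)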
3 Requirements on MDO frameworks
One can look at an MDO framework as a virtual engineering team that tries to find a best fit of a range of design options to the design problem at hand. To do so, such a framework needs to:
- allow quantitative specification of requirements from the different life cycle phases;
- allow distributed parametric multi-disciplinary descriptions of the design options (concepts) for all the system elements;
- estimate initial values for design parameters and variables (named feasilization);
- provide automatic search for optimal values of design parameters and variables;
- provide automatic analysis model input;
- provide linkage to distributed analysis tools to derive all the properties of the system related to the requirements from the different life cycle phases;
- provide automatic interpretation of analysis output results;
- offer flawless connections between all its elements.
The development of these features needs a very thorough understanding of how people currently act and connect to perform their role in the Systems Engineering process and how this can (partially) be taken over by MDO. Only in that way will the resulting computational system successfully support the exploitation of the
Multidisciplinary Design Optimisation (MDO) methodology in a Distributed Design Environment.
4 The Design and Engineering Engine concept The computational system, baptized Design and Engineering Engine (DEE), is constructed such that it resembles the working of a human team in a design process. Its formal definition is: A DEE is an advanced design system to support and accelerate the design process of complex products, through the automation of non-creative and repetitive design activities [5]. A DEE consists of a multidisciplinary collection of design and analysis tools, able to automatically interface and exchange data and information, Figure 1.
Figure 1. The Design and Engineering Engine (DEE)
The proposed DEE's main components are:
1. Reqs / design options specificator
2. Initiator
3. Multi-Model Generator (MMG)
4. Expert tools covering all the analyses required to derive the behaviour of the System in the different Life Cycle phases
5. Converger and Evaluator (the global optimizer)
6. An (agent-based) framework
Each of these components is discussed in the following subsections.
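A minimal sketch of how these components could interact is given below. All functions are placeholders with invented logic; the actual DEE modules are far richer, and the evaluator step shown here is a trivial stand-in for a real search algorithm.

def initiator(requirements, x):
    return x  # placeholder: would return a feasilised start vector

def multi_model_generator(x):
    return {"structures": x, "aerodynamics": x}  # placeholder views per discipline

def converged(behaviour, requirements):
    return behaviour["structures"]["mass"] < requirements["max_mass"]

def run_dee(requirements, x, expert_tools, max_iterations=50):
    x = initiator(requirements, x)
    for _ in range(max_iterations):
        views = multi_model_generator(x)                       # MMG: one view per discipline
        behaviour = {d: tool(views[d]) for d, tool in expert_tools.items()}
        if converged(behaviour, requirements):                 # Converger
            return x, behaviour
        x = {k: v * 0.95 for k, v in x.items()}                # placeholder Evaluator step
    raise RuntimeError("no converged design within the iteration budget")

tools = {"structures": lambda v: {"mass": 120.0 * v["chord"]},
         "aerodynamics": lambda v: {"drag": 0.02 * v["span"]}}
print(run_dee({"max_mass": 250.0}, {"span": 34.0, "chord": 2.4}, tools))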
4.1 Describing design options
Each system is supposed to be a synthesis of design options selected to meet a set of system and subsystem requirements. The user of the DEE can specify which design domain is to be searched, using so-called High Level Primitives (HLPs) as basic building blocks [6]. A primitive is a consistent set of parameters/variables (degrees of freedom, DOF) that describes an elementary design option such that it allows the calculation of the element's behaviour by appropriate disciplinary tools when values are assigned to its DOF. This behaviour includes several spatial appearances as visualized by a CAD program, but it does not start from a unique geometry. The geometry is generated based on the high level primitives. It is true that for conceptual aircraft design geometry plays a very important role, and therefore the HLPs for wing trunks and other airframe building blocks, Figure 2, include geometry representation generators. For other systems, like wire harnesses, geometry plays a much smaller role and HLPs for such systems do not necessarily include geometry generators.
Figure 2. Some High Level Primitives used in a DEE for conceptual aircraft design
The collection of primitives must be such that a multi-scale and high fidelity estimate of the objective function is possible. This objective function describes how the parameters and variables in the HLP, together with the behaviour of the resulting system can be used to quantify the fitness for purpose of the solution under consideration with respect to the underlying requirements. This means that the DEE goes beyond an operations research approach. The system is not represented with a parametric description of its behaviour but a first principle approach is used for the determination of the behaviour and optimization techniques are used to select and size the design options.
4.2 Describing Requirements
Requirements are translated into an objective function, constraint functions and bounds on design parameters/variables. Each of these functions and bounds is based on the parameters and variables in the HLPs or on behaviour computed using an instantiation of the HLPs. Examples can be a minimization of noise, expressed in awakenings, annoyance or money. For each potential design solution under consideration a description is made using HLPs. The HLPs should be such that the parameters and variables involved are sufficient to allow calculation of the behaviour related to the objective function. The same is valid for constraint functions, e.g. deliver enough lift, limit cost to ..., etc. The third way of expressing requirements is to specify bounds on values of the parameters/variables (e.g. size of wing span) in the HLPs. The "degrees of freedom" of the HLPs and/or combinations thereof are the variables/parameters within the DEE process. To be able to use the product model defined by a set of HLPs, for example to visualize the result or to transfer knowledge about the product to an analysis tool, so-called Capability Modules (CMs) are defined [6]. The Capability Modules are formalized versions of engineering operations normally done by humans on HLPs, like generating 3D views or meshing surfaces to prepare for FE analysis. They will be discussed in more detail in the section about the Multi-Model Generator. The currently available primitives are implemented using Knowledge Based Engineering (KBE). This is a technology based on the use of dedicated software tools (i.e. KBE systems) that are able to capture and reuse product and process engineering knowledge [4]. Instead of 'drawing', the engineer 'describes' his ideas in a collection of objects. The following five important "lowest common denominator" features are intrinsic in any generative KBE system:
- Functional Coding Style: programs return values, rather than modifying things in memory or in the model.
- Declarative Coding Style: there is no "begin" or "end" to a KBE model - only a description of the items to be modelled.
- Runtime Value Caching and Dependency Tracking: the system computes and memorizes those things which are required - and only those things which are required (no more, no less).
- Dynamic Data Types: slot values and object types do not have to be specified ahead of time. They are inferred automatically at runtime from the instantiated data. Their data types can also change at runtime. In fact, the entire structure and topology of a model tree can change, depending on the inputs.
- Automatic Memory Management: when an object or piece of data is no longer accessible to the system, the runtime environment automatically reclaims it.
These conditions are not met by (most) parametric CAD systems today. They still are geometry focussed and more fit to record a finalised design than to build
parametric models to start a design. Using a KBE system the designer describes his idea to the computer as a set of objects. Very important for successful usage of KBE systems is that the right knowledge about product and processes is elicited and captured before the actual coding is done. Therefore formalized methods, so-called Knowledge Acquisition Techniques, are employed [7]. In practice the HLPs are implemented as classes in a KBE system. The Capability Modules are implemented in two ways. The preferred option is as classes coupled with the HLPs through the mix-in principle. In some cases it is more practical to define a CM in the methods of an HLP directly. If we combine a proper set of HLPs and CMs we have a formal definition of a product family, including the engineering processes that can be performed on this family. So instead of a drawing we can use a class diagram as defined by UML to record and communicate our thoughts. An example of such a diagram, including the link to instantiations (different aircraft types) of the model, is shown in Figure 3.
Figure 3. A UML View on a collection of HLPs and CMs to define a wide range of aircraft
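The combination of HLPs and CMs through the mix-in principle can be mirrored in a short Python analogue. The real primitives are implemented in a KBE system; the class names, parameters and mesh rule below are invented purely to illustrate the structure.

class MeshingCapability:
    """Capability Module (mix-in): derives an analysis view from the primitive's DOF."""
    def surface_mesh(self, elements_per_metre=4):
        n_span = int(self.span * elements_per_metre)
        n_chord = int(self.root_chord * elements_per_metre)
        return {"panels": n_span * n_chord}          # stand-in for a real surface mesh

class WingTrunkHLP(MeshingCapability):
    """High Level Primitive: a consistent set of parameters describing a wing trunk."""
    def __init__(self, span, root_chord, taper, sweep_deg):
        self.span, self.root_chord = span, root_chord
        self.taper, self.sweep_deg = taper, sweep_deg

    def planform_area(self):
        return 0.5 * self.span * self.root_chord * (1.0 + self.taper)

trunk = WingTrunkHLP(span=12.0, root_chord=3.2, taper=0.45, sweep_deg=25.0)
print(trunk.planform_area(), trunk.surface_mesh())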
4.3 The initiator
The HLPs define an object as a consistent set of parameters and variables. To be able to start a search for the best solution within the design domain defined by the HLPs, we need a start value for each of the parameters and variables. The initiator component of the DEE is responsible for generating such a feasilization [9] and follows the normal behaviour of engineers, namely obtaining an approximated feasible solution to the design problem by assuming:
- a reduced set of requirements;
- an iteratively decomposed (and independent) set of sub-problems;
- simplified design solutions (design options);
- simplified behavioural models (schematic models).
Considering the fact that we want to use the DEE for novel designs, it is important to use first-principle based methods to estimate product behaviour and not use statistical estimates. The feasilization process for box structures (e.g. aircraft wings) can be found in [9]. During the development of this process it became clear that the feasilization requires a DEE by itself with all its components. It also became clear that it is beneficial to describe the combination of the design problem and its verified solution as a so-called generalized Engineering Object, Figure 4. The structure of the object and the selection of its elements is based on the philosophy that for every well-engineered product it is possible to define the following relationship between requirements (divided into functional requirements, performance requirements and constraints), design options and their fitness for purpose:
criteria(behaviour(properties(design_option), testCases)) = designValue
Figure 4. The Engineering Object
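The relationship above can be read as a nested function evaluation. The sketch below only illustrates that reading; the properties, behaviour model and criterion are invented and much simpler than those of a real Engineering Object.

def properties(design_option):
    return {"area": design_option["span"] * design_option["chord"]}

def behaviour(props, test_cases):
    # Estimate the behaviour of the option for each test case (e.g. flight conditions).
    return [{"lift": 0.5 * 1.225 * tc["speed"] ** 2 * props["area"] * 0.6} for tc in test_cases]

def criteria(behaviours):
    # Requirement: every test case must deliver at least 100 kN of lift.
    return min(b["lift"] for b in behaviours) - 100_000.0

design_option = {"span": 34.0, "chord": 3.0}
test_cases = [{"speed": 60.0}, {"speed": 75.0}]
design_value = criteria(behaviour(properties(design_option), test_cases))
print(design_value)  # positive: requirement met; negative: requirement violated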
To obtain the DesignValue (the value of the multi-objective function describing the design problem at hand), we need criteria (requirements) with which we can compare the actual behaviour of our design. This behaviour requires the properties of the product and a set of appropriate test cases with which the behaviour can be estimated. The feasilization process uses this approach to give an initial value to each variable and parameter of each design option. Currently a feasilization process is being implemented for the outer surface definition of wing trunks; an adjoint formulation of an Euler code will be used for airfoil/trunk local optimisation [2].
4.4 The Multi-Model Generator
The Multi-Model Generator (MMG) is a Knowledge Based Engineering (KBE) application [6], providing two functions:
- support of the definition of product models, based on High-Level Primitives;
- support of the creation of multiple views on the product model by means of Capability Modules which generate input files for various analysis tools.
Especially the definition and implementation of the Capability Modules is a complex matter. Getting robust modules to prepare input for FE calculations or for cost calculations requires a thorough understanding of these processes as performed by human engineers. The specialists spend a substantial percentage of their time on inventing work-arounds to deal with program peculiarities and bugs. In general, however, a KBE system will be able to mimic this behaviour when proper knowledge acquisition has been done.
4.5 The Life Cycle Analysis with Expert Tools
The determination of the system behaviour is done with a collection of analysis tools. Depending on the problem at hand the proper selection of tools is made. Since many commercial analysis tools expect a human user to communicate with the tool through an interactive interface, it is not a simple task to have tools flexibly incorporated in the DEE. As far as possible the battle of the engineers with these tools is mimicked with the Capability Modules. Sometimes it is necessary to add additional code between the MMG and the analysis tools to create a robust and sufficiently product-independent linkage between MMG and tool.
4.6 The converger/evaluator
The Converger checks if results from the expert tools are valid (converged). Sometimes this functionality is delivered by the expert tool itself, but in many cases the results from the analysis tool still have to be judged separately. This can be done based on test cases.
The Evaluator is an optimization routine that controls the search in the solution domain. It evaluates each design option analysed with the expert tools for its fit with the requirements. A wide range of methods can be applied: Sequential Linear Programming, Sequential Quadratic Programming, Genetic Algorithms, heuristic methods, etc. Where necessary, a limited amount of behavioural data from the expert tools is approximated with surrogate models (e.g. response surfaces) to make the optimisation affordable. Multi-level optimisation methods like BLISS and Target Cascading have not yet been tried in the DEE context.
4.7 The Agent Based Framework
To make the components of the DEE work as a single service, they have to be flexibly connected, taking into account the fact that they can be distributed over different locations and can be running in different environments. The solution has been found in the SE practice, where design team members solve these communication and geographical issues.
Figure 5. The DEE Agent Based Framework
A DEE can be seen as an MDO equivalent of the Integrated Product Team (IPT) or Design and Build Team (DBT), where multiple actors, with different roles and functionalities, co-operate in the design process, Figure 5. To make the DEE components act as virtual team members, they are wrapped in agents that communicate with each other and, if necessary, with the human team members [1].
The most senior agent performs the master functions, thus acting as project manager. When the master agent is unavailable, an automatic fall-back system transfers the management to the next most senior agent. Four main functions have been identified for the agents:
- Resource Management: which resources are connected and available to the network;
- Resource Interfacing: communication between elements and actors;
- Process Execution Support: transformation process management;
- Information Flow Control: external and internal data and data request management.
The language of the agents is a major issue. Currently XML is used to import and export information about products and processes. Also for the Engineering Object, XML is used as a basis. Work is ongoing to define Domain Specific Languages that ease the communication between the components and between human and component. The language will most likely include UML-like diagrams to communicate with the user and use ontologies to do grammar and semantics checking on communication between humans and the DEE.
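How XML could carry such inter-agent messages is sketched below with Python's standard library. The element and attribute names are invented and do not correspond to the actual TeamMate message or Engineering Object schema.

import xml.etree.ElementTree as ET

# A requesting agent builds a data-request message (names are illustrative).
request = ET.Element("data_request", attrib={"sender": "agent.structures"})
ET.SubElement(request, "pattern").text = "wing.analysis_output"
ET.SubElement(request, "iteration").text = "12"
message = ET.tostring(request, encoding="unicode")
print(message)

# The receiving agent parses the message and routes it to a capable tool.
parsed = ET.fromstring(message)
print(parsed.get("sender"), parsed.findtext("pattern"))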
5 Results and discussion The different components of the DEE concept have been developed and tested in various projects. The MMG approach, using HLPs and CMs has been tried successfully in the 5th Framework project Multi-Disciplinary Optimization of A Blended Wing Body Aircraft (MOB) [8]. Since the completion of that project, the work continued [5], and currently the ICAD based MMG of MOB is being turned into a Genworks International GDL based KBE application. In addition the approach is being applied in other domains like wind energy [3]. The initiator approach has been tested in several structural design projects and proved to be highly successful. Extension to other product domains and other disciplinary fields is on its way. The linkage between MMG and different expert tools has been achieved for various tools. The later development of the agents based framework has helped to make the links more flexible. The expert tools used are mostly commercial codes. For the aerodynamics, however, an adjoint solver based CFD tool has been made in-house to analyse and optimise the outer shape of airfoils and 3D-objects. The future coupling to the MMG will make this a very powerful tool. Different optimisation algorithms have been used in various projects. The implementations used have mostly been commercial tools. Only the Sequential Linear Programming has been implemented in-house.
Although no full MDO case has been solved by the DEE approach yet, the spin-off of the development has resulted in successful applications. Work will therefore continue, aiming for a full implementation of the proposed methodology.
6 Conclusions
MDO can be seen as a potentially very powerful computational methodology within the Systems Engineering process. The implementation of MDO can be structured according to the Design and Engineering Engine concept. It allows an object-based variational approach to design optimisation. The components of a DEE need multiple technologies to make them effective:
- Specifying the design problem: MDO, XML
- Specifying the design space: KBE, XML
- Initiating the search process: heuristics, MDO
- Linking to expert tools: KBE, agents
- Searching the design space: MDO (incl. SLP, SQP, GA, DoE)
- Recording product/process results: XML
Prototypes of each component have been tested in various research programs. A full integration of the tools to perform a real high-fidelity multi-disciplinary design optimisation has not yet been achieved but will remain the ultimate proof of the approach.
7 Acknowledgement The authors want to thank Airbus UK, Airbus Hamburg, Stork Fokker AESP and Genworks International for their continuous support.
8 References
[1] Berends J, van Tooren MJL. Design of a Multi-Agent Task Environment Framework to Support Multidisciplinary Design and Optimisation. 45th AIAA ASME Conference, Reno, USA, 2007.
[2] Carpentieri G, Koren B, van Tooren M. Adjoint-Based Aerodynamic Shape Optimization on Unstructured Meshes. Journal of Computational Physics 224(1):267-287, 2007.
[3] Chiciudean T, La Rocca G, van Tooren M. A Knowledge Based Engineering Approach to Support Automatic Design of Wind Turbine Blades. CIRP Design Conference, Twente, The Netherlands, 2008.
[4] Cooper D, La Rocca G. Knowledge-based Techniques for Developing Engineering Applications in the 21st Century. 7th AIAA ATIO Conference, Belfast, Northern Ireland, 2007.
[5] La Rocca G, van Tooren MJL. Enabling Distributed Multi-Disciplinary Design of Complex Products: a Knowledge Based Engineering Approach. Journal of Design Research 5(3):333-352, 2007.
[6] La Rocca G. Knowledge Based Engineering Techniques to Support Aircraft Design and Multidisciplinary Analysis and Optimisation. PhD Thesis, Delft University of Technology, The Netherlands, to be published.
[7] Milton N, La Rocca G. Knowledge Technologies. Polimetrica, Monza/Milano, 2008.
[8] Morris A, Arendsen P, La Rocca G, et al. MOB - a European project on multidisciplinary design optimisation. 24th ICAS Congress, Yokohama, Japan, 2004.
[9] Schut E, van Tooren MJL. Design "Feasilization" Using Knowledge-Based Engineering and Optimization Techniques. Journal of Aircraft 44(6), 2007.
Application of a Knowledge Engineering Process to Support Engineering Design Application Development
S.W.G. van der Elst and M.J.L. van Tooren
PhD researcher, Design of Aircraft and Rotorcraft, Delft University of Technology, The Netherlands; Professor, Design of Aircraft and Rotorcraft, Delft University of Technology, The Netherlands
Abstract. The design, analysis and optimization process of complex products can be supported by automation of repetitive and non-creative engineering tasks. The Design and Engineering Engine (DEE) is a useful concept to structure this automation. Within the DEE, a product is parametrically defined using Knowledge Based Engineering (KBE) techniques. To develop and successfully implement the concept of the DEE in industry, a Knowledge Engineering (KE) process is developed, integrating KBE techniques with Knowledge Management (KM). The KE process is applied to develop an application supporting the design and manufacturing of aircraft wiring harnesses, focussing on the assignment of electrical signals to connectors. The resulting engineering design application reduces the recurring time of the assignment process by 80%.
Keywords. Knowledge Based Engineering, DEE, Knowledge Engineering, agents, wiring harness.
1 Introduction Today’s prevailing aircraft configurations have seen no significant changes in the last 50 years. Product and process improvements were aimed at increasing performance and reducing cost [16]. However, the targets set by Vision 2020 aim at more affordable, safer and more environmentally friendly air transport [8]. Together with the globally increasing demand for air traffic [1], aircraft design requires a paradigm shift to exceed present design process efficiency and meet the demands. A methodology that supports this paradigm shift is Knowledge Engineering (KE), providing a number of methods and tools that considerably improve the process of acquiring, using and implementing engineering knowledge. 1
[email protected] [email protected] Design of Aircraft and Rotorcraft, Delft University of Technology, Faculty of Aerospace Engineering, Kluyverweg 1, 2629 HS Delft, The Netherlands, www.lr.tudelft.nl/dar 2
Knowledge is a vital component of engineering design and significant reductions in costs and product development time can be realized if engineering knowledge is reused.
Where previously the geometric model took a central position, today design knowledge should have the focus: knowledge should be managed and engineered as a key business asset [6]. Computer systems enriched with logic and engineering knowledge support engineering by automating repetitive and time-consuming processes. This reuse of knowledge decreases the engineering resources required and relieves the engineers from non-value-adding activities, making more time available to exploit their creativity and engineering skills. The Design and Engineering Engine (DEE) is a framework concept to structure this automation. Within the DEE, a product is parametrically defined using Knowledge Based Engineering (KBE) techniques. The analysis of a particular instantiation of this product model is performed by discipline analysis tools, while a search engine provides a strategy to drive the design toward a feasible solution, satisfying functional and performance requirements within constraints. To develop and successfully implement the concept of the DEE in industry, a KE process has been developed. The KE process integrates KBE techniques with Knowledge Management (KM) and ranges from engineering process analysis to training and support for deployed engineering design applications. This paper is structured as follows. First, Knowledge Management and knowledge are discussed in section 2. Second, a short background on the KBE methodology and the concept of a DEE is presented in section 3. In section 4 the structure of the Knowledge Engineering process is discussed, followed by the application of the process to wiring harness design in section 5. The paper concludes with a discussion of the performed research and the next development steps.
2 Knowledge Management
Knowledge Management (KM) addresses the use of techniques and tools to make better use of the intellectual assets in an organization. KM concerns [14]:
- identifying what knowledge is important to an organization;
- deciding what knowledge should be captured to provide appropriate solutions to real-world problems;
- capturing, representing and storing knowledge from domain experts and existing repositories for understanding and reuse.
In order to describe the approach to Knowledge Management more clearly, the concepts of knowledge, Knowledge Acquisition and knowledge base will be explained.
2.1 Knowledge
Knowledge is defined as (i) the information, understanding and skills that you gain through education or experience, (ii) the state of knowing about a particular fact or situation [11]. Knowledge can also be considered as a dynamic concept, strongly linked to the context it is applied to. Knowledge can be thought of as a system, driving a process that takes data and information as input in order to generate decisions or actions. This results in the following definition [14]:
Knowledge is the {ability, skill, expertise} to {manipulate, transform, create} {data, information, ideas} to {perform skilfully, make decisions, solve problems}.
There are various classifications of knowledge. Two important dimensions with which to describe knowledge are: (i) procedural knowledge versus conceptual knowledge; (ii) basic, explicit knowledge versus deep, tacit knowledge. Procedural knowledge concerns processes, tasks and activities. It describes the conditions under which specific tasks are performed, the order in which tasks are performed and the resources required to perform tasks. Conceptual knowledge concerns the description of concepts and their relation to other concepts. Hence, it addresses the ways in which objects are related to one another and their properties. An important form of conceptual knowledge concerns taxonomies, i.e. classes and class membership. Another type of conceptual knowledge addresses attributes of concepts. Basic, explicit knowledge is the type of knowledge residing at the forefront of a specialist's brain and is thought about in a deliberate and conscious way. It is concerned with basic tasks a domain expert performs, basic relationships between concepts, and basic properties of concepts. Deep, tacit knowledge is at the other extreme to basic, explicit knowledge. It is knowledge residing in one's subconscious. It is often built on experiences rather than being taught. It often leads to automatic activities that seem to require no conscious thought. It is described in everyday words and phrases such as 'gut feel', 'hunches', 'intuition', 'instinct' and 'inspiration' [7]. The corporate advantages of captured and stored knowledge are numerous:
- disseminate knowledge to other people within an organization to provide expertise;
- reduce the risk of knowledge loss in domains where only a small number of experts hold vital knowledge;
- reuse knowledge to enrich computer systems to perform tasks normally performed by human domain experts.
Knowledge is captured through the application of knowledge acquisition techniques.
2.2 Knowledge Acquisition Knowledge acquisition is the capturing and structuring of knowledge from humans and already existing repositories in order to enable knowledge reuse. Although the benefits of capturing and using knowledge are manifest, it has long been recognized that knowledge is hard to acquire from domain experts. The difficulties stem from a number of factors. First, domain experts are not good at recalling and explaining everything they know. The tacit knowledge which operates at a subconscious level is hard, if not impossible, to explain. Second, domain experts have different experiences and opinions that require aggregating to provide a single coherent picture. Third, domain experts develop particular conceptualizations and mental shortcuts that are not easy to communicate. Fourth, domain experts use jargon and assume most other people understand the terminology being used. To deal with such difficulties a number of techniques and tools have been developed that considerably improve the process of acquiring knowledge. A diversity of knowledge acquisition tools is presented in the Knowledge Acquisition Matrix, illustrated in Figure 1. The Knowledge Acquisition Matrix provides several tools in order to acquire the different types of knowledge.
Figure 1. Knowledge Acquisition Matrix [7]
Dedicated software tools make the process of acquiring, storing and representing knowledge more efficient and less prone to errors. PCPACK [15] is the most comprehensive of such tools. It is used for creating a knowledge base, a special database storing organizational knowledge and information representing the expertise of a particular domain [14]. To represent the expertise efficiently, the
structure of the knowledge base is identical to the structures that underlie human expertise. Psychologists have found that this is based on four main components: concepts, attributes, values and relations. Furthermore, the appearance of the knowledge base is determined by the purpose of the knowledge. In the case of the development of a DEE based on KBE techniques, the structure will focus on representing the engineering process. The knowledge base will encompass both a current state and a future state, integrating the redesigned engineering process and software architecture for the application.
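The four components mentioned above can be mirrored in a very small data structure, sketched below. This is a generic illustration and not the PCPACK data model; the example concepts are taken loosely from the wiring harness case discussed later.

from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    attributes: dict = field(default_factory=dict)   # attribute -> value
    relations: list = field(default_factory=list)    # (relation, other concept)

knowledge_base = {}

def add_concept(name, **attributes):
    knowledge_base[name] = Concept(name, dict(attributes))

def relate(subject, relation, obj):
    knowledge_base[subject].relations.append((relation, obj))

add_concept("wiring harness", weight_class="light")
add_concept("connector", pin_count=37)
relate("connector", "is part of", "wiring harness")
print(knowledge_base["connector"])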
3 Knowledge Based Engineering
3.1 Knowledge Based Engineering Principles
La Rocca [12] defines KBE as a technology that is based on the use of dedicated software tools, i.e. KBE systems, that are able to capture and reuse product and process engineering knowledge. The main objective of KBE is reducing the time and cost of product development by means of the following:
- automation of repetitive and non-creative design tasks;
- support of multidisciplinary integration from the conceptual phase of the design process.
The KBE cornerstones are rule-based design, object-oriented modeling, and parametric CAD [13]. KBE has its roots in knowledge-based systems (KBS) applied in the field of engineering, hence the name. KBS is based on methods and techniques from artificial intelligence (AI). AI aims at creating intelligent entities [17]. KBE focuses on capturing rules of repetitive, non-creative human processes. Engineers have a product- or object-oriented view of the world, which the object-oriented modeling approach supports. KBE found its first application as a follow-up of CAD to enable designers to reuse models. CAD is based on geometrical primitives, KBE on knowledge primitives.
3.2 Design and Engineering Engine
A DEE is defined [13] as an advanced design environment that supports and accelerates the design process of complex products through the automation of non-creative and repetitive design activities. Figure 2 shows the DEE concept.
Figure 2. The Design and Engineering Engine (DEE)
The main components of the DEE are:
1) Initiator: responsible for providing feasible initial values for the instantiation of the generative parametric product model.
2) Multi-Model-Generator (MMG): responsible for instantiation of the product model and for extracting different views on the model in the form of report files to facilitate the discipline specialist tools.
3) Analysis (discipline specialist) tools: responsible for evaluating one or several aspects of the design in their domain or discipline (e.g. structural response, aerodynamic performance or manufacturability).
4) Converger & Evaluator: responsible for checking convergence of the design solution and compliance of the product's properties with the design requirements, and for generating a new design vector.
These elements operate in iterative loops. The definition of the product is based on the selection or creation of High Level Primitives (HLPs). HLPs are functional building blocks, containing an a priori definition of a family of design solutions. These functional blocks encompass sets of rules that use sets of parameters to instantiate objects representing the product under consideration. The object-oriented approach of the HLPs allows capability modules (CMs) to specify the representation of the product as desired by the various engineering disciplines.
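To make the interaction of these components concrete, the following skeleton sketches the DEE convergence loop. All function and argument names are placeholders invented for illustration; the actual DEE described in this paper is built with dedicated KBE tools, not with this code.

def run_dee(requirements, initiator, mmg, discipline_tools, converger_evaluator,
            max_iterations=50):
    """Minimal DEE control loop: initiate, generate, analyse, evaluate, repeat."""
    design_vector = initiator(requirements)                  # feasible initial values
    for _ in range(max_iterations):
        product_model = mmg(design_vector)                   # instantiate the HLP-based model
        reports = [tool(product_model) for tool in discipline_tools]  # discipline views
        converged, design_vector = converger_evaluator(reports, requirements, design_vector)
        if converged:
            return product_model                             # requirements satisfied
    raise RuntimeError("DEE did not converge within the iteration budget")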
4 The Approach to Knowledge Engineering
Knowledge Engineering (KE) is the engineering discipline integrating KM and KBE techniques by developing computer systems that conduct engineering
processes normally requiring human expertise. It addresses the management of corporate knowledge and the creation of design applications that automate repetitive or time-consuming processes by reusing this knowledge. To successfully implement the concept of the DEE as a suitable framework to structure this automation, a KE process has been developed. It consists of six fundamental phases, as illustrated in Figure 3:
1) The first phase concerns an engineering process analysis, aimed at identifying process improvement opportunities by applying lean principles to the field of knowledge engineering. The deliverable is a value map, addressing key opportunities for the application of KBE techniques.
2) The second phase addresses knowledge acquisition; relevant expert knowledge is captured and validated. The deliverable is a knowledge base, a knowledge repository containing a formal description of the knowledge associated with the concerned process.
3) The third phase focuses on knowledge structuring. The formalized engineering process is analyzed and redesigned, incorporating computer automation strategies. The deliverable is a redesigned engineering process, which forms a blueprint for the DEE.
4) The fourth phase, knowledge application, concerns the software engineering of the DEE components based on the formal model, using KBE principles. The deliverable is a set of stand-alone KBE software modules with full capability to execute the engineering process, requiring only a fraction of the engineering resources needed before.
5) The fifth phase addresses the integration of the KBE modules into a framework to form the DEE. It includes the development of communication interfaces and the deployment. The deliverable is an engineering design application based on the concept of the DEE, offering engineering services.
6) The last phase concerns support, maintenance, and training related to the deployed design application. The main deliverable is a training and maintenance program to train engineers as operators of the application and to ensure the knowledge base is kept up to date.
Figure 3. Knowledge Engineering process
In the next section, the specific phases of the KE process are discussed in more detail as performed during the development of a DEE supporting the design process for aircraft wiring harnesses.
5 Wiring Harness Design Application
Electric aircraft wiring harnesses can comprise hundreds of cables and tens of thousands of wires, providing connectivity between all mission and vehicle systems while ensuring sufficient redundancy and reliability. Electrical wiring design is often performed in parallel with structural design. Consequently, the wiring harness design is subject to changes in the aircraft structure that occur with subsequent design iterations, requiring time-consuming rework for any harnesses affected [18]. The routing for all wires is determined manually and depends strongly on personal knowledge and experience. In addition, the electric wiring design is governed by numerous regulatory and functional design rules. This repetitive, time-consuming and rule-based nature makes aircraft wiring design a key opportunity to develop design applications based on the concept of the DEE and applying the KE process. The development of the application is performed in close cooperation with a major international player on the aircraft electric wiring market, regarding both design and manufacturing.
5.1 Process Analysis
The first phase of the Knowledge Engineering process aims at identifying general process efficiency improvement opportunities regarding the design of aircraft wiring harnesses. The process analysis is mainly aimed at knowledge-intensive processes and at products belonging to a larger product family, ensuring broad applicability of the application. During the analysis of the engineering design process the flow of information and the required expert knowledge are monitored. The analysis focuses on three main process properties:
- Required engineering resources
- Repetitiveness of the engineering activity within the product family
- Complexity of the applicable expert knowledge
The deliverable of the process analysis phase is a value map, providing the identified process improvement opportunities involving the concept of the DEE. For the wiring harness design process, one of the key opportunities involves the assignment of signals at production breaks, where connectors connect the different wiring harnesses (Figure 4). Each wiring harness connector can include up to 150 slots, called pins, to accommodate a signal. The pins can vary in size, as do the signals to be assigned.
Figure 4. Connectors applied at a production break for electric wiring harness
For each production break the signals are assigned to a pin and associated connector, one by one. This process of pin assignment is repetitive and time-consuming for several reasons:
- Separation of signals across multiple wiring harness segments or cables is enforced by numerous opposing design rules and regulations, for example redundancy of flight controls, electromagnetic compatibility or heat dissipation of power cables.
- The increasingly vast quantity of signals to be assigned.
- Rework caused by changes in the input data, for example driven by design iterations of the aircraft structural design.
The development of the DEE, and hence the subsequent process phases, focuses on the pin assignment process as the key identified opportunity.
5.2 Knowledge Acquisition
In the knowledge acquisition phase the expert knowledge involved in the engineering process is formalized and captured. The knowledge acquisition phase is the first step in the development of the actual design application, and forms the foundation for the subsequent phases of the KE process. The quality of the formalized and captured knowledge largely determines the success of the KE process and hence of the resulting DEE as design application. To guarantee a successful result, the acquisition process is performed in close cooperation with the domain experts. The domain experts are important for two main reasons:
- Identification of relevant knowledge rules
- Validation of the quality and completeness of the captured knowledge
Using the different knowledge acquisition techniques, a so-called informal model of the engineering activity is constructed, providing an informal but detailed description of the pin assignment process. It mainly encompasses a detailed activity diagram describing the complete process. The informal model is part of the knowledge base. The iterative knowledge acquisition process of identifying, capturing and validating the expert knowledge is supported by PCPACK. A separate ontology is developed, specifically built to suit the wiring harness domain. The ontology forms a Domain Specific modeling Language (DSL) and supports the communication
between engineers by unambiguously representing the (simplified) real-world problem: each element or concept in the problem domain is represented by an element in the model. The most important knowledge components captured in the informal model are the set of design rules and best practices, many of which are opposing. Some examples of applicable design rules are:
- The ratio of occupied pins over available pins has a settable maximum (design requirement)
- Signals are grouped among connectors to fulfill separation requirements (authority regulations)
- Per connector, signals should be centered and grouped together (manufacturing requirements)
The informal model functions as a detailed engineering handbook, decreasing the knowledge level required to perform the pin assignment process.
5.3 Knowledge Structuring
After the formalization and validation of the domain expert knowledge, the pin assignment process is analyzed. The result is a redesign of the process, called the formal model, which composes the second main item in the knowledge base. It takes the informal model as fundamental input, to ensure the governing functions are sustained. The formal model provides the outline and the process for the improved engineering process, incorporating the software automation of repetitive tasks. Therefore, during this stage of knowledge structuring the configuration of the DEE and the associated KBE modules are defined. Although inheriting the functions of the original process, the redesigned process might consist of entirely different subprocesses or activities. For example, when the objective is to assign 10 signals across 90 available pins while fulfilling all requirements, a human engineer will require a vast amount of time to explore most if not all possibilities. Applying commercial off-the-shelf optimization software results in a much more efficient exploration of the solution space, solving the problem concurrently for all signals and thus increasing the reduction in recurring process time. Due to availability and experience, the optimizer selected to solve the pin assignment problem is ILOG's Cplex [5]. The KBE system selected to develop the MMG is GDL from Genworks [10]. GDL is a new-generation KBE system that combines the power and flexibility of the older ICAD system with new web technology. Its object-oriented programming language is based on standard ANSI Common Lisp. It allows both the manipulation of parametric geometric primitives and the construction of HLPs. Within the concept of the DEE, GDL performs the functions of the Initiator and the MMG; Cplex represents the mathematical analysis tool as well as the Converger & Evaluator, as represented in Figure 5. Since the pin assignment problem is uni-disciplinary (mathematical discipline), no integration of multiple discipline-specific solutions is required before evaluation. Hence, Cplex functions as a general optimizer, not exclusively as a discipline-specific optimizer.
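The gain from handing the problem to a solver can be made plausible with a toy formulation. Even the small example above admits roughly 2 x 10^19 ordered ways of placing 10 signals on 90 pins, which rules out manual enumeration. The sketch below builds a deliberately simplified binary assignment model with PuLP as a stand-in for the Cplex-based setup; the data, the separation rule and the objective are invented for illustration and do not reflect the company's actual rule set.

from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

# Toy data: signal -> separation code, pin -> separation code it may carry.
signals = {"S1": "A", "S2": "A", "S3": "B"}
pins = {"P1": "A", "P2": "A", "P3": "B", "P4": "B"}

prob = LpProblem("pin_assignment", LpMinimize)
x = LpVariable.dicts("assign", [(s, p) for s in signals for p in pins], cat=LpBinary)

for s in signals:                                   # every signal gets exactly one pin
    prob += lpSum(x[(s, p)] for p in pins) == 1
for p in pins:                                      # every pin holds at most one signal
    prob += lpSum(x[(s, p)] for s in signals) <= 1
for s, cs in signals.items():                       # toy separation rule
    for p, cp in pins.items():
        if cs != cp:
            prob += x[(s, p)] == 0

# Stand-in objective: prefer low-numbered pins, a crude proxy for grouping/centring rules.
prob += lpSum(int(p[1:]) * x[(s, p)] for s in signals for p in pins)

prob.writeLP("pin_assignment.lp")   # the kind of report file a solver such as Cplex reads
prob.solve()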
Figure 5. DEE consisting of GDL and Cplex
During the knowledge structuring phase, a large amount of specific domain knowledge is crunched into the formal model, reflecting deep insight into the domain and a focus on the key concepts [9]. The formal model is considered a collaboration between the domain experts and the developers of the design application, the so-called knowledge engineers. Since the development of the KBE modules is iterative, the collaboration must continue throughout the knowledge application and knowledge integration phases as well.
5.4 Knowledge Application
The knowledge application phase addresses the development of the KBE software modules composing the DEE, according to the redesigned pin assignment process as stated in the formal model. The deliverable is a set of stand-alone KBE software modules, providing the full capability of the DEE. Since Cplex is available as commercial off-the-shelf optimization software, the knowledge application phase involves the development of the MMG, encompassing the HLPs and the CMs. The HLPs are composed mostly of conceptual knowledge and represent the building blocks that can be combined and assembled to instantiate a member of the product family. By applying the parameter values generated by the initiator to the HLPs, the MMG is capable of producing an instantiation of this generative product model. For the pin assignment process, the initial or default solution generated by the initiator is empty: no signals are assigned, with the exception of manually pre-assigned signals. Although the pre-assigned signals are incorporated into the final solution, they are left out of scope during the problem analysis, as are the associated pins.
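As an illustration of what an object-oriented HLP with capability-module views might look like, the toy class below models a connector family member. The class and method names are invented for this illustration; the real MMG is written in GDL, whose language is based on Common Lisp, not Python.

class ConnectorHLP:
    """Toy High Level Primitive: one parametric member of a connector family."""

    def __init__(self, n_pins, pin_pitch_mm, preassigned=None):
        self.n_pins = n_pins
        self.pin_pitch_mm = pin_pitch_mm
        self.assignment = dict(preassigned or {})        # pin index -> signal id

    def to_math_view(self):
        """Capability-module view for the mathematical (pin supply) analysis."""
        free_pins = [i for i in range(self.n_pins) if i not in self.assignment]
        return {"supply": len(free_pins), "free_pins": free_pins}

    def to_geometry_view(self):
        """Capability-module view giving pin positions along the connector."""
        return [i * self.pin_pitch_mm for i in range(self.n_pins)]


connector = ConnectorHLP(n_pins=150, pin_pitch_mm=2.54, preassigned={0: "S17"})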
The CMs are responsible for extracting different views on the product model, forming report files to facilitate the discipline specialist tools. The CMs are mainly composed of procedural knowledge. For the pin assignment process, the only incorporated discipline is mathematics. The related CMs extract a mathematical model of the connectors composing the production break, defining the supply of pins as well as the demand generated by the signals per separation code. The CMs define the objective function (minimize the number of pins occupied by a signal) and generate all constraints derived from the applicable design rules. The output is a report file specifying a Linear Programming (LP) problem modeled after the instantiated pin assignment problem. This LP problem can be analyzed and solved efficiently by Cplex. The development of the KBE software modules is performed iteratively and can be considered domain driven. After each iteration cycle, the formal model is adjusted to ensure the model accurately represents the structure and process of the DEE.
5.5 Knowledge Integration
To enable automatic (mathematical) analysis of the pin assignment problem a multi-agent task environment is used. The environment integrates the KBE software modules into a framework and provides communication between the KBE modules through software agents [4]. The design of the multi-agent task environment is inspired by the issues posed by earlier-generation design support frameworks encountered when addressing the automation of Multi-disciplinary Design and Optimization (MDO) problems, often set up as a top-down execution of a string of individual discipline analysis tools. These execution strings are executed from start to finish: when errors in a particular discipline analysis tool emerge, the highly coupled nature of an execution string often leaves no other possibility than to re-execute all or parts of the tool chain. Besides, these support frameworks are often created in an implicit way and need severe adaptation when a new MDO problem or product family is addressed [2]. To overcome the identified obstacles, a multi-agent task environment is developed that addresses the aforementioned problems in a structured and consistent way: decoupling the knowledge of the product from the process and able to handle a family of design problems. The framework should prevent waste of CPU resources when partial re-execution of discipline analysis tools is required. Moreover, instead of fixing the tool chain definition up front, the problem is communicated to the framework and each agent and tool combination uses its communication skills and knowledge of the problem to request information through a specified, but not tool- and address-specific, call [2, 3, 4]. Considering the pin assignment problem, the resulting DEE functions as a stand-alone design application and has not yet been connected to the company's other engineering software tools. GDL and the associated agent have been deployed at the company on-site, whereas the Cplex agent is executed remotely as an engineering service, on request. The software architecture of the application is illustrated in Figure 6.
Figure 6. Software architecture of the pin assignment design application
A Graphical User Interface (GUI) is designed to enable interaction with engineers. The GUI allows the engineers to specify the input data (problem description) and provides identified best practices as execution options, such as grouping of signals. The GUI also enables the engineers to manually adapt the solution provided by the DEE through incorporated selection functionality, and provides different types of report files to accommodate manufacturing as well as design engineers. The GUI illustrates the front view of the set of connectors composing the production break of the wiring harness. The different signal types are color-coded by separation code, to enable easy verification by the engineers. The efficiency improvement gained by the design application is two-dimensional: reduction of time-to-market and reduction of product development cost for all products within the family. The recurring time for the pin assignment process for all production breaks per aircraft is reduced from approximately several hours to a couple of minutes. Taking into account the manual processing of the solution data into the main engineering database, the gross recurring time is approximately half an hour: a recurring time reduction of around 80% is obtained.
5.6 Business Implementation
The final phase of the KE process covers training, support and maintenance regarding the deployed design application. After the deployment and integration of the application within the company's engineering process, engineers will be trained and educated in order to utilize the full potential of the design application.
Furthermore, the knowledge base will be maintained and additional knowledge included, enabling the DEE to solve assignment problems for other product families, for example another series of connectors. For the pin assignment design application, the business implementation phase has not yet been performed and will be the subject of a subsequent paper.
6 Concluding Remarks
A time reduction of approximately 80% is obtained, reducing the recurring time from several hours down to a couple of minutes. Further reductions, up to an estimated 95%, can be obtained if the automatic coupling with the organization's main engineering database is established. Since the formal model describes the structure and process of the pin assignment application, engineers are not only capable of operating the design application; they will also gain a better understanding of the process that is applied in order to obtain the results. Since the formal model encompasses the structured conceptual knowledge within the application, it can be concluded that it contains an accurate diagrammatic model of the MMG. Besides, the DSL makes the source code of the DEE more transparent, since all programmed objects have an equivalent object in the actual problem domain. This straightforward mapping between the software application and the problem domain enables the exploration of automatic model-driven generation of source code directly from the knowledge base. The decoupling of domain knowledge and platform-dependent problem knowledge remains a major challenge within the Knowledge Engineering process.
7 References
[1] Advisory Council for Aeronautics Research in Europe (ACARE). Strategic research agenda, vol 1+2 and executive summary. 2002. Available at: http://www.acare4europe.com.
[2] Berends JPTJ. Development of a Multi-Agent Task Environment for a Design and Engineering Engine. M.Sc. Thesis, Delft University of Technology, Faculty of Aerospace Engineering, Delft, The Netherlands, 2005.
[3] Berends JPTJ, van Tooren MJL. Design of a Multi-Agent Task Environment Framework to support Multidisciplinary Design and Optimisation. 45th AIAA Aerospace Sciences Meeting and Exhibit, AIAA-2007-0969, Reno, NV, USA, 2007.
[4] Berends JPTJ, van Tooren MJL, Schut EJ. Design and Implementation of a New Generation Multi-Agent Task Environment Framework. 49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Schaumburg, IL, USA, 2008.
[5] Cplex Interactive Optimizer, Software package, version 10.2.0. Gentilly, France: ILOG, 2006.
[6] Drucker P. Management challenges for the 21st century. Oxford, United Kingdom: Butterworth-Heinemann, 2001.
[7] Emberey CL, Milton NR, Berends JPTJ, van Tooren MJL, van der Elst SWG, Vermeulen B. Application of Knowledge Engineering Methodologies to Support Engineering Design Application Development in Aerospace. 7th AIAA Aviation Technology, Integration and Operations Conference (ATIO), AIAA-2007-7708, Belfast, Ireland, 2007.
[8] European Commission. European aviation: a vision for 2020. Belgium, 2001. Available at: http://www.cleansky.eu.
[9] Evans E. Domain-Driven Design. Boston, USA: Addison Wesley, 2004.
[10] GDL, Software package, version 1.5.5.7. Genworks International, Birmingham, MI, USA, 2007.
[11] Hornby AS. Oxford Advanced Learner's Dictionary, 6th edition. Oxford University Press, 2000.
[12] La Rocca G. PhD thesis, Delft University of Technology, Delft, The Netherlands (to be published).
[13] La Rocca G, van Tooren MJL. Enabling distributed multi-disciplinary design of complex products: a knowledge based engineering approach. J. Design Research, vol 5, No 3, pp 333-352.
[14] Milton N. Knowledge technologies. Monza, Italy: Polimetrica, 2008.
[15] PCPACK, Software package, version 1.4.4R, Release 5. Epistemics, Nottingham, United Kingdom, 2006.
[16] Pratt PD. History of flight vehicle structures 1903-1990. Journal of Aircraft, vol 41, No 5, 2004.
[17] Russell S, Norvig P. Artificial intelligence: a modern approach. Second edition. Prentice Hall, 2003.
[18] Van der Velden C, Bil C, Yu X, Smith A. An intelligent system for automatic layout routing in aerospace design. Innovations Syst Softw Eng, No 3, pp 117-128, 2007.
Knowledge Engineering
Knowledge Based Optimization of the Manufacturing Processes Supported by Numerical Simulations of Production Chain
Lukasz Rauch a,1, Lukasz Madej a, Paweł J. Matuszyk a
a Department of Applied Computer Science and Modelling, AGH – University of Science and Technology, Krakow, Poland
Abstract. A system dedicated to the optimization of manufacturing processes used in the metal forming industry is presented in the paper. The proposed approach is based on conventional optimization methods supported by Good Practice Guides (GPG), which represent rich engineering knowledge and are commonly applied in industrial practice. Each step of the optimization algorithm includes numerical simulations of the analyzed manufacturing process, while the goal function is calculated with respect to the 'in use' material properties, allowing the number of expensive, time-consuming industrial tests to be minimized. It also leads to higher efficiency of the production chain under consideration. These features result in a system that is flexible enough to face the challenges of the market and the rapid development of customer requirements. Moreover, this combination makes the whole presented solution innovative and unique in the field of manufacturing support systems. Application of the developed software to optimize the flat rolling process with respect to uniformity of final material properties was selected as an example. The obtained results regarding temperature distribution in the slab are presented in the paper. Possibilities of further improvement of the system are finally drawn.
Keywords. Production chain, digital manufacturing, optimization, advanced engineering
1 Introduction
Advanced Manufacturing Techniques (AMT) are crucial, especially in branches where the scale of production is high and even small changes in the manufacturing chain essentially influence the final profitability of a company. Such a branch is represented, among others, by metal forming companies, which manufacture millions of tons of steel products annually and exert a huge impact on the natural environment [10]. Therefore, the process of decision making during the configuration phase of the production cycle is very important. However, since it requires the concurrent application of numerical multi-scale simulations,
1 AGH – University of Science and Technology, Krakow, Poland, Department of Applied Computer Science and Modelling; Tel: +48 12 6172921; Fax: +48 12 6172921; Email: [email protected]; www.isim.agh.edu.pl
optimization methods and knowledge bases, this decision-making process is difficult. Although the knowledge about metallurgical processes and material behaviour under loading conditions is rich, it is rarely used as a complete solution in combination with numerical simulations. The available systems, which should support the everyday work of engineers in metal forming, do not cope with these complicated problems. On the other hand, in order to meet market pressures, modern manufacturing systems have to be intelligent and adjustable, which can be achieved mainly by feeding information back from the final product to the manufacturing stage. Thus, the computer system has to influence the planning and organization of the production line, to create a process which is flexible, reconfigurable and cost efficient. These aspects are widely discussed in [3,8,16]. Therefore, there exists a strong need to create a system for virtual manufacturing, which is able to optimize the parameters of the production process with regard to the following criteria:
- 'in use' properties of the final product – increase of quality,
- economical aspects of production – cost reduction and time savings,
- natural environment issues – energy and resources consumption.
Development of this system is the general objective of the work. A short literature review is presented in the second section to point out the lack of engineering systems able to support the mentioned issues. The methodology to model the entire production chain to predict final material properties is presented in section three. This is followed by a detailed presentation of the proposed system, including the architecture, the main functionality focused on optimization and the role of the knowledge base in the process of calculations. The obtained results, discussion and conclusions are presented in the last sections.
2 State of the art
Numerical algorithms and artificial intelligence approaches dedicated to supporting the decision-making process in Production Planning Systems (PPS) play an important role in everyday business life. These systems manage supply chains, optimize the work of employees, maximize incomes or plan warehouse space. Commercial versions of these systems were developed in the early 1990s and are based on task schedulers or the management of Gantt charts [15]. This simple functionality was constantly developed through the implementation of the following ideas [4]: mass production, flexible manufacturing, computer integrated manufacturing, lean manufacturing and Material Resource Planning (MRP). These steps were necessary to achieve the milestone in the life cycle of PPS, i.e. the conversion from MRP to Enterprise Resource Planning (ERP). Further evolution of the functionality of PPS systems focused on the implementation of methods supporting concurrent engineering and, finally, agile manufacturing. The latter systems were created to manage production processes held in an unstable environment and to satisfy individual, fast-changing customer needs. Following the PPS development, the systems were modified and equipped with algorithms based on artificial intelligence and soft computing, to create Intelligent Manufacturing Systems (IMS). One of the first examples, proposed by
Giachetti in 1998 [6], was based on a formal multi-attribute decision model and a relational database. This system was helpful in the selection of materials and manufacturing processes; however, its functionality became outdated very fast. Nowadays, such approaches often use expert systems [14] or knowledge bases [7]. The first proposes a framework to create a customized rule-based system, using a semantic net structure, where the calculation of semantic hulls allows a solution to be obtained and the optimal decision to be determined. The second suggests creating a knowledge-based "road-map", which facilitates decision making in production planning by introducing flexibility and dynamics into the manufacturing process. As presented, a lack of complex computer systems that employ numerical simulations, optimization methods and a knowledge base at the same time can be noticed. This fact and the needs of the metal forming branch inspired the authors to create a system based on numerical simulations of industrial processes combined with recent scientific knowledge. This system is able to predict the final properties of products and optimize the manufacturing process. To predict the mentioned properties, the proposed system uses a production chain modelling approach, combined with optimization algorithms.
3 Production chain modelling
During the last decade the Finite Element (FE) method has been broadly used in industrial conditions to simulate various forming operations, i.e. cold forging, hot forging, rolling, extrusion, stamping, etc. In these cases the FE simulations provided information regarding only the changes in shape, stress, strain or temperature distribution in one particular deformation process, e.g. forging. The rheological models that were in common use described relations between the flow stress and some external variables such as the strain ε, the strain rate ε̇ or the temperature T:

σ = f(ε, ε̇, T)    (1)

The optimization algorithms had to be applied to obtain numerical results which are in closer agreement with material behaviour under industrial conditions. Due to the limited power of computers in the early 1990s, the number of optimization variables was limited. Computer capabilities have increased noticeably during the last decade and rheological models could be extended to account for the microstructure evolution under loading conditions, i.e. dynamic and static recrystallisation, phase transformation or precipitation. In these approaches, the flow stress becomes a function of the time t, again of some external variables and also of internal variables, such as the densities of mobile ρm and trapped ρt dislocations and the grain size of particular phases D:

σ = f(t, ε, ε̇, T, ρm, ρt, D, ...)    (2)
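For readers unfamiliar with relations of the form of Eq. (1), the snippet below evaluates a generic power-law flow stress with a temperature term of the Zener-Hollomon type. The functional form and all coefficients are illustrative assumptions, not the rheological models used by the authors.

import numpy as np

def flow_stress(strain, strain_rate, temperature_K,
                K=150.0, n=0.15, m=0.1, Q=300e3, R=8.314):
    """Toy flow stress sigma = f(strain, strain rate, T), in MPa.

    Temperature enters through a Zener-Hollomon-like factor Z; the constants
    are placeholders, not fitted material data."""
    Z = strain_rate * np.exp(Q / (R * temperature_K))
    return K * strain ** n * (Z / 1e10) ** m

print(flow_stress(strain=0.3, strain_rate=1.0, temperature_K=1523.0))  # roughly 1.3e2 MPa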
In consequence, the number of optimization variables increased, which resulted in higher accuracy of the numerical results. More information about the selection
of material models for hybrid simulation-design systems can be found in [18,19]. Despite the progress in the capabilities of rheological models, FE simulations and optimization algorithms have mainly been applied to a single deformation process, which prevented accounting for the influence of other operations in the production chain. Thus, a novel approach to the design of manufacturing processes has to be developed. This approach should consider not only one stage of the manufacturing process, but should account for the entire production chain. Modelling of the production chain, which is in the field of interest of researchers in several laboratories [2,12,13], provides the possibility to control the final product properties at the manufacturing stage. It means that the required properties and the specific behaviour of the product under exploitation can be obtained by optimization at the stage of material processing. This allows products with better 'in use' properties to be manufactured. A proper micro-scale material model, which replicates important phenomena such as microstructure evolution [5], phase transformation [17], failure [1] or strain localization [11], has to be implemented into the FE code. Such a model is difficult to use for engineers who are inexperienced in the field of numerical simulations. Selection of the proper optimization algorithm that can be applied to obtain the desired material properties in a particular manufacturing process is even more difficult. This is the reason why the authors decided to continue their previous work [18] and to develop a system which optimizes an objective function composed of required product properties, as well as selects proper models which predict microstructure evolution and properties during all stages of manufacturing.
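A minimal way to picture production chain modelling, as opposed to simulating one isolated operation, is to thread a single material state through a sequence of process-stage models, so that each stage sees the history produced by the previous ones. The sketch below is purely schematic; the stage functions stand in for the FE and microstructure models discussed above.

def run_production_chain(initial_state, stages):
    """Propagate one material state through an ordered list of process-stage models.

    Each stage is a callable state -> state; the history lets later stages (and the
    final 'in use' property estimate) depend on everything that happened before."""
    state, history = dict(initial_state), []
    for stage in stages:
        state = stage(state)
        history.append(dict(state))
    return state, history


# Schematic stage models (placeholders for FE / microstructure simulations).
def reheating(state):
    return {**state, "temperature_C": 1250.0}

def rolling_pass(state):
    return {**state, "thickness_mm": state["thickness_mm"] * 0.8,
            "temperature_C": state["temperature_C"] - 40.0}

final_state, history = run_production_chain(
    {"thickness_mm": 220.0, "temperature_C": 20.0},
    [reheating, rolling_pass, rolling_pass, rolling_pass])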
4 Knowledge based optimization system
4.1 Architecture
Figure 1. DMR basic concept in comparison to conventional approach.
The architecture of the system is described in detail in the authors' other papers [18,19]. This work extends the system capabilities with three additional modules, i.e. a knowledge base dedicated to production processes, modelling of the production chain as a part of the 'Modelling & Simulations' layer, and knowledge based optimization, which is the most innovative part, combining the functionality of numerical simulations and the knowledge base. The whole system is designed to support typical ERP systems by using their external interfaces (Figure 1).
4.2 The role of knowledge base and optimization
The recent knowledge about metallurgical processes is thorough and has the form of GPGs, which provide recipes for how to perform the production process to obtain the expected results. An example of the GPG dedicated to the rolling of flat products made of carbon-manganese steel is presented in Figure 2a. For the purposes of this paper this GPG is called the 'model scheme'. It has the form of a time-temperature-transformation (TTT) plot with the most important characteristic temperatures superimposed. However, since the GPGs are given in a general form, each particular industrial process needs to be reconfigured individually to agree with the assumptions of the GPG. This is the most difficult part of the process, because of the many parameters which are used to set up the real equipment, e.g. velocities, forces, drafts, temperatures, accelerations, etc. Therefore, numerical calculations were implemented to simulate the entire production process according to the values of the input parameters.
Figure 2. Model scheme of 2 phase rolling process (a) and comparison between model rolling scheme and calculation results for optimization purposes (b) [9].
The results obtained from the simulations are compared with the previously described model scheme (Figure 2b). The difference between the results obtained from the simulations and the GPG data is the objective function used in the optimization of the manufacturing parameters. The optimization method is applied and new process parameters, which are closer to the GPG data, are obtained.
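One simple way to turn that difference into a scalar objective is to interpolate the simulated time-temperature path onto the reference times of the GPG model scheme and take the mean squared deviation, as sketched below. The data and the choice of metric are illustrative assumptions, not the exact formulation used in the system.

import numpy as np

def gpg_mismatch(sim_time, sim_temp, gpg_time, gpg_temp):
    """Mean squared deviation of a simulated time-temperature path from the GPG scheme."""
    sim_on_gpg = np.interp(gpg_time, sim_time, sim_temp)   # resample onto reference times
    return float(np.mean((sim_on_gpg - gpg_temp) ** 2))

# Toy data standing in for an FE result and a GPG cooling schedule (time in s, temp in deg C).
t = np.linspace(0.0, 120.0, 50)
objective = gpg_mismatch(t, 1250.0 - 4.0 * t, t, 1250.0 - 3.5 * t)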
5 Results
The proposed approach was verified for a single rolling pass (Figure 3) and its capabilities were proven. However, the main objective of this work is to apply this methodology to simulate an entire production chain that involves several rolling passes.
Figure 3. Scheme of rolling process of flat products with example of obtained results.
Case studies of the rolling process using a reversible mill are analysed in the paper. The main objective was to design the manufacturing path by using the optimization procedure based on the GPG to obtain flat rolled products with a uniform distribution of the final properties. The input data consisted of material information, slab geometry (w x h x l = 1.25 x 0.22 x 11 m), initial temperature (T = 1250 °C), speed-up of the rolls (a = 0.1 m/s²) and roll velocities in each of the 5 passes (v1 = 2.5, v2 = 3, v3 = 4, v4 = 4.6, v5 = 6 m/s). An example of the results obtained for calculations of the 3E5R scheme (three vertical and five horizontal passes) is presented in Figure 4.
Figure 4. Trace of the slab slice through the manufacturing path.
Various parameters were selected as optimisation variables, with particular interest in the speed-up of the rolls (a) and the initial temperature of the slab (T). The system was designed to provide useful industrial data and also to control the process by minimizing the temperature differences along the slab (Figure 5). Due to this solution the uniformity of the distribution of the final material properties increases significantly. Although in reality it is not possible to obtain an exactly uniform temperature distribution throughout the material, large improvements were obtained by the application of the proposed optimization procedures.
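A hedged sketch of this optimisation step is given below: a derivative-free search over the speed-up a and initial temperature T, minimising the temperature spread along the slab returned by a production-chain simulation. Here simulate_rolling is a placeholder for the FE model, and the bounds are assumptions chosen around the case-study values, not the settings actually used.

import numpy as np
from scipy.optimize import minimize

def temperature_spread(params, simulate_rolling):
    """Objective: max-min temperature along the slab surface after the last pass."""
    acceleration, initial_temp = params
    profile = simulate_rolling(acceleration=acceleration, initial_temp=initial_temp)
    return float(np.max(profile) - np.min(profile))

def optimise_schedule(simulate_rolling):
    # Derivative-free search; bounds are illustrative, centred on a = 0.1 m/s^2, T = 1250 deg C.
    return minimize(temperature_spread, x0=[0.1, 1250.0],
                    args=(simulate_rolling,),
                    bounds=[(0.05, 0.5), (1150.0, 1300.0)],
                    method="Powell")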
Figure 5. Distribution of temperature along the slab surface after the rolling process.
6 Conclusions
The developed system to support manufacturing process design through optimization was presented in the paper. The proposed software is dedicated to engineers working in the metal forming industry, with a particular focus on flat rolling steel plants. The proposed methodology employs three important components: numerical simulations, optimization and a knowledge base. This makes the system innovative in comparison to the computer systems currently used in the field of decision making support and virtual engineering. The system offers accurate suggestions on how to reconfigure a manufacturing process to obtain the expected 'in use' material properties. This reduces optimization costs and minimizes the destructive impact on the natural environment. The proposed system has a modular architecture; thus, it can be reconfigured and extended with new modules to simulate other production processes, new optimization methods or new knowledge regarding the good practice guides. In line with these capabilities, future work will be oriented towards adding new functionality to the system, including advanced multi-scale material models.
7 Acknowledgements The financial support of the Polish Ministry of Science and Higher Education, project no. R15 012 03, is acknowledged.
8 References
[1] Allix O. Multiscale strategy for solving industrial problems. Comp. Meth. Appl. Sci. 2006;6:107-126.
[2] Bariani PF, Bruschi S, Ghiotti A. Material testing and physical simulation in modelling process chains based on forging operations. Comp. Meth. in Mat. Sci. 2007;7:378-382.
[3] Bruccoleri M, Lo Nigro G, Perrone G, Renna P, Noto La Diega S. Production planning in reconfigurable enterprises and reconfigurable manufacturing systems. Annals of the CIRP 2005;54:433-436.
[4] Cheng K, Harrison DK, Pan PY. Implementation of agile manufacturing – AI and Internet based approach. J of Mat Proc Tech 1998;76:96-101.
[5] Gawad J, Pietrzyk M. Application of CAFE multiscale model to description of microstructure development during dynamic recrystallisation. Arch. Metall. Mater. 2007;52:257-266.
[6] Giachetti RE. A decision support system for material and manufacturing process selection. J of Intell Manuf 1998;9:265-276.
[7] Halevi G, Wang K. Knowledge based manufacturing system (KBMS). J of Intell Manuf 2007;18:467-474.
[8] Hon KKB, Xu S. Impact of product life cycle on manufacturing systems reconfiguration. Annals of the CIRP 2007;56:455-458.
[9] Hulka K. The role of niobium in low carbon bainitic HSLA steel. Niobium Products Company, Germany, http://www.msm.cam.ac.uk/phase-trans/2005/LINK/10.pdf, 2008.
[10] Lakshman ST, Vijay KJ. Advanced manufacturing techniques and information technology adoption in India: A current perspectives and some comparisons. Int J Adv Manuf Technol 2008;36:618-631.
[11] Madej L, Hodgson PD, Pietrzyk M. Multi-scale rheological model for discontinuous phenomena in materials under deformation conditions. Computational Materials Science 2007;38:685-691.
[12] Madej L, Szeliga D, Kuziak R, Pietrzyk M. Physical and numerical modelling of forging accounting for exploitation properties of products. Comp. Meth. in Mat. Sci. 2007;7:397-405.
[13] Madej L, Weglarczyk S, Packo M, Kuziak R, Pietrzyk M. Application of the life cycle modeling to forging of connecting parts. Proc. MS&T, Detroit, 2007;191-200.
[14] Mahl A, Krikler R. Approach for a rule based system for capturing and usage of knowledge in the manufacturing industry. J of Intell Manuf 2007;18:519-526.
[15] McKay KN, Black GW. The evolution of a production planning system: A 10-year case study. Computers in Industry 2007;58:756-771.
[16] Pereira J, Paulre B. Flexibility in manufacturing systems: a relational and a dynamic approach. European J. Operational Research 2001;130:70-82.
[17] Pietrzyk M, Kuziak R. Coupling the Thermal-Mechanical Finite-Element Approach with Phase Transformation Model for Low Carbon Steels. 2nd ESAFORM Conf. on Material Forming, ed. Covas J., Guimaraes, 1999;525-528.
[18] Rauch L, Madej L, Pietrzyk M. Hybrid system for modeling and optimization of production chain in metal forming. J. Machine Eng. 2008;8:14-22.
[19] Rauch L, Madej L, Weglarczyk S, Pietrzyk M. System for design of the manufacturing process of connecting parts for automotive industry. Archives of Civil and Mechanical Engineering 2008;8 (in press).
Characterization of the Products Strategic Planning: a Survey in Brazil
Alexandre Moeckel a,b,1 and Fernando Antonio Forcellini b,1
a Federal University of Technology - Parana, Brazil.
b Federal University of Santa Catarina, Brazil.
Abstract. This work focuses on products strategic planning, the phase of new product development in which portfolio management takes place and in which it is defined which projects will be developed in the organization. There is a gap in the mechanisms intended to support knowledge management in portfolio management, because the information used for decision making is abstract and unstable. To meet this demand, some requirements of products strategic planning are identified, together with knowledge management best practices that can contribute to decision making in portfolio management. This research intends to check whether the theory described in The Standard for Portfolio Management, published by the Project Management Institute in May 2006, is really applied by organizations in their day-to-day practice, and to identify which knowledge management practices are suitable to support products strategic planning. The parameters of the survey were collected from March until May 2008, with the contribution of project managers, including professionals of PMI Brazilian Chapters. It is possible to improve the ability of organizations to obtain competitive advantages, because consistency in products strategic planning results in smaller deviations in the subsequent phases of new product development.
Keywords. Products strategic planning, portfolio management, knowledge management, new product development.
1 Introduction
Performance in new product development is a determinant factor for the creation of organizational knowledge, but the processes in which this knowledge is managed also need to receive attention. We will discuss the low effectiveness of knowledge management in products strategic planning, especially in portfolio management, where it is common to work with imprecise information that can hardly be gauged.
1 Product and Process Engineering Group (GEPP), Federal University of Santa Catarina (UFSC), Postal Box 476, 88040-900, Florianopolis-SC, Brazil; Tel: +55 (48) 3721 7101; Email: [email protected], [email protected]; http://www.gepp.ufsc.br
Organizations rely on projects and programs in order to achieve their strategic intent. The application of portfolio management allows this interconnection through goal sharing and resource allocation [9]. In portfolio management practices, reasons, restrictions, tendencies and impacts are observed through the insertion of competitive intelligence concepts into the knowledge management perspective, so that projects can be differentiated in accordance with their contribution to reaching the strategic targets of the organization [5]. The following issues, related to existing problems in portfolio management practices, will be addressed: how to align expectations among the stakeholders in portfolio management; how to check the alignment between the decisions made in portfolio management and the organization's strategic goals; how to update the criteria that support decision making; which are the key performance indicators for portfolio management; which methods and techniques are really used for portfolio management; how the relevant information for decision making is obtained and used; among others.
2 Portfolio Management at Products Strategic Planning
Products strategic planning is a cross-functional management process in which the stakeholders need to interact with a large amount of unstructured information and knowledge to succeed in product innovation management. Its purpose is [10] to guarantee that strategic direction, stakeholders' ideas, opportunities and restraints can be systematically mapped and transformed into a project portfolio. A portfolio is a collection of projects or programs and other work that are grouped to facilitate effective management of that work, aiming to reach strategic business objectives [8]. The portfolio includes the products in planning, in development and also those already in commercialization. Portfolio management is a dynamic decision process, where a business's list of active new product projects is constantly updated and revised [1]. Project selection can involve value measurement approaches as well as other decision criteria – for example, to create competence in a strategic area that can be important to guarantee the future of the organization [2]. Portfolio management is the centralized management of one or more portfolios, which includes identifying, prioritizing, authorizing, managing, and controlling projects, programs, and other related work, to achieve specific strategic business objectives. It is accomplished through processes, using relevant knowledge, skills, tools, and techniques that receive inputs and generate outputs [9]. Most of the existing portfolio management tools are designed to maximize the value and then balance a portfolio with the use of visual techniques (for example, a bubble diagram). Few tools in the existing literature are really intended to align a portfolio with the business strategy [4]. Portfolio management in product innovation has emerged as one of the most important functions of top management. It involves the manifestation of the organizational business strategy, which determines where and how resources will be invested in the future. Its main problem lies in the diversity of approaches, which muddle and make difficult the choice of the most suitable one [2].
Figure 1 illustrates the portfolio management processes that are considered by the Project Management Institute (PMI).
[Figure 1. Portfolio management processes at the PMI perspective – a flowchart in which the current strategic plan (goals, definitions and categories, key performance criteria, capacity definition) drives the aligning processes (Identification, Categorization, Evaluation, Selection, Prioritization, Portfolio Balancing, Authorization) and the monitoring and controlling processes (Portfolio Reporting and Review, Component Execution and Reporting), with a feedback loop triggered by strategic change.]
The current strategic plan of the organization is the basis for the decisions involved in the portfolio management processes. The aligning processes group identifies what will be managed in the portfolio, in which categories, and how the components will be evaluated and selected to be incorporated or not into a given project portfolio. The monitoring processes group periodically checks the pre-selected performance indicators, in order to guarantee the alignment of the portfolio components with the organizational strategic objectives. There is a series of correlated processes, from the identification and authorization of portfolio components to the progress review of each component and of the overall portfolio [9]. In these processes, new projects are evaluated, selected, and prioritized; existing projects may be accelerated, killed or put on standby; and resources are reallocated to the active projects [1]. Products strategic planning requires relationships between different areas, involving engineering, sales, directors, distributors, as well as call centers. New concepts are evaluated in terms of money, technologies, competences and risks, considering financial, production, human and market capacities. It is usually difficult to evaluate and quantify some parameters in portfolio management, such as the links between projects and the organizational strategy, to define how much information is necessary for decision making, to anticipate how the marketplace will be, and to understand customer tendencies [5]. The portfolio decision process is marked by uncertain and changing information, dynamic opportunities, multiple goals and strategic considerations, interdependence among projects, and multiple decision-makers and locations [1].
Published in May 2006 by PMI, the first standard for project portfolio management in different kinds of organizations tries to address the following gap in the management-by-projects field: the need for a documented set of processes that represent generally recognized good practices in portfolio management [9]. However, many efforts are needed in order to make this first standard satisfactory across different cultures, countries, and kinds and sizes of organizations, because people do not have the same perspective about project and portfolio management.
2.1 Information Quality at Portfolio Management
After the absence of resources, low information quality is the main problem for portfolio management. It is not constructive to set up a task force to apply complex approaches to portfolio management if there is no quality in the data entry. The quality of the information about new products - sales forecasts, costs and profitability, the evaluation of market trends, as well as the achievement probabilities - is a critical factor for the success of the portfolio management process [3]. Information quality is usually treated in the literature as a hazy and fuzzy notion, which has a common sense understanding. The definitions are based on subjective attributes that are able to be measured, and normally consider only the positive side of the information value [6]. The earned value of portfolio management is directly proportional to the quality of the information available for decision-making and to the capacity of the team involved to use that information with intelligence (its experience), being inversely proportional to the cost necessary to support the decision process and to the team's answer time up to the decision-making. The metaphorical expression below, Equation 1, shows the idea of earned value for portfolio management:

Earned Value(Portfolio Management) ≡ f( (Quality(Information) × Experience(Team)) / (Cost(Process) × Time Interval(Decision Making)) )

Equation 1. Earned value for portfolio management
In a summary of the dimensions of information quality, based on thirteen works from 1996 until 2005, the most cited terms are: accessibility, timeliness, accuracy, relevance, believability, completeness, objectivity, appropriateness, representation, source and understandability [7]. Contextualizing these terms for portfolio management, quality can be related mainly to {current, precise, reliable and relevant} information for decision-making. The degree of information quality is directly proportional to the success that will be enabled in the decision process.
2.2 Knowledge Engineering at Product Innovation Management
A company moves ahead in its product innovation practices when it perceives what the customers require today and will demand tomorrow. However, it is common to take high-impact decisions without an overall comprehension of the context, because knowledge management is not effective.
Knowledge management is being transformed into a critical factor for the survival of organizations. Beyond being aware of the central concepts of several approaches, companies need to reflect on the changes in the emphasis on knowledge at the individual, organizational, and social levels. The future of knowledge management lies in the limits of organizational knowledge creation. The capacity to be flexible, associated with quickness in the generation and re-configuration of competences, will be the main ability desired by corporations [11]. The complexity involved in products strategic planning requires know-how, accumulated wisdom, and complementary models/methodologies based on knowledge engineering to adequately support product innovation management.
3 Preliminary Results of the Survey
The parameters of the survey were collected from March until May 2008, covering fifty-four Brazilian enterprises. Services (35%), Information Technology (22%) and Electronics & Informatics (9%) were the most represented sectors. The majority are medium enterprises that work in a matrix or project/product structure and have a Project Management Office (PMO) established. Each project has up to twenty people involved, and the portfolio has no more than fifty projects. The participants in the survey are managers, supervisors or coordinators of departments related to products strategic planning. Most of them are Project Management Professionals (PMP) certified by the Project Management Institute, with more than twenty projects managed in their careers.
3.1 Strategic, Operational and Behavioural Aspects of Portfolio Management
Forty-five aspects of portfolio management (PM) were evaluated, fifteen related to strategy, fifteen to operations and the last fifteen related to behaviour. Using a five-point Likert scale (1 = disagree strongly, 2 = disagree moderately, 3 = neutral, 4 = agree moderately and 5 = agree strongly), each participant evaluated how the forty-five aspects are considered today in their companies. The mean and the standard deviation were calculated; the few answers outside the two-sigma interval were suppressed, in order to obtain a normal distribution. The results are shown in Table 1. The low scores obtained for most of the aspects indicate that improvements are necessary in the practices and processes involved in portfolio management.
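A minimal sketch of the trimming described above, with invented answers, could look as follows; the exact filtering rule used by the authors may differ in detail.

import numpy as np

def trimmed_mean(answers, k=2.0):
    """Mean of Likert answers after suppressing values outside mean +/- k * std."""
    a = np.asarray(answers, dtype=float)
    mu, sigma = a.mean(), a.std()
    kept = a[np.abs(a - mu) <= k * sigma]
    return round(float(kept.mean()), 1)

print(trimmed_mean([5, 5, 4, 4, 4, 5, 1, 4, 5, 5]))   # the isolated answer of 1 is dropped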
Table 1. Strategic, operational and behavioural aspects of portfolio management

Strategic aspects (score):
- Portfolio management adds efficiency to the enterprise operations execution. (4.6)
- Portfolio management aggregates value to the company production capacity. (4.4)
- Portfolio components selection considers the client needs. (4.4)
- Portfolio balance considers the market position regarding the competition. (4.0)
- Portfolio decisions are tuned with the company vision and mission. (4.0)
- Portfolio components can be measured, classified and prioritized. (3.9)
- The strategic plan is disseminated broadly for portfolio management people. (3.8)
- The portfolio alignment with the company strategic objectives is monitored. (3.8)
- Portfolio components represent the company investments made or planned. (3.8)
- Portfolio revisions (add/exclude components) can occur at any time. (3.8)
- Success is measured by portfolio components aggregate performance. (3.8)
- Portfolio revision considers the value of each component and its relationships. (3.7)
- The strategic plan communicates with clarity and consistency the PM value. (3.4)
- R&D/marketing areas have an active participation in portfolio management. (3.1)
- Innovative projects have priority over derivatives of already existing solutions. (3.1)

Operational aspects (score):
- Portfolio management decisions are supported by the high administration. (4.3)
- Portfolio authorization communicates the expected results for each component. (3.7)
- There is availability of resources to support the collaboration. (3.5)
- There is a Portfolio Manager in the human resources structure of the company. (3.4)
- There is availability of resources to support the knowledge management. (3.4)
- There is agility in the decision process, with little bureaucracy. (3.3)
- There are registration mechanisms available to retain new ideas. (3.3)
- Project managers influence the decision-making about the portfolio. (3.3)
- There is a portfolio management team or staff. (3.3)
- There are tools or techniques that provide recommendations for future action. (3.1)
- There is a Program Manager in the human resources structure of the company. (3.1)
- The Program Manager continuously monitors the overall changes in the portfolio. (3.0)
- The criteria for components evaluation in the portfolio are mainly qualitative. (3.0)
- There are tools or techniques that permit the comparison between expectations. (3.0)
- The final decision belongs to the Program Manager (Portfolio Manager, if available). (2.7)

Behavioural aspects (score):
- People involved in portfolio management have a multidisciplinary profile. (4.1)
- New ideas and approaches are encouraged and valued. (3.9)
- Interaction with people inside and outside of the company is stimulated. (3.8)
- Scientific/technological information is sought before the decision-making. (3.8)
- Expectations are aligned before the decision-making. (3.7)
- There is consensus about the main abilities and competences of the company. (3.7)
- Helpful information for the decision-making is shared. (3.7)
- Communication between the people in portfolio management is efficient. (3.6)
- Knowledge and know-how related to the decision-making are preserved. (3.5)
- There is systematic utilization of lessons learned in previous decisions. (3.4)
- The decision-making occurs without imposition of ideas. (3.4)
- Prioritization considers the quality of the information about each component. (3.4)
- Without information, people stay out of the decision-making. (3.0)
- The choice of the adequate approach for portfolio management is easy. (2.9)
- The information necessary for decision-making is obtained easily. (2.8)
3.2 Some Practical Parameters Relative to Portfolio Management
Considering the methods, tools and techniques used to support portfolio management, the survey identified that the preferred ones are Cost and Benefit Analysis (76%), Scenario Analysis (56%), Expert Judgment (56%), Brainstorming (56%) and Expected Commercial Value (50%). Also cited were Balanced Scorecard (39%), Performance Measurements (39%), Graphical Representations (37%), Capacity Analysis (35%), Scoring Models (33%), Technology Roadmap (28%) and Probability Analysis (15%).
Regarding the key performance indicators used to monitor the portfolio management processes, the following distribution was observed: ROI - Return on Investment (63%), Client Satisfaction Rate (56%), Gross Margin (44%), Retail Sales Percentage (44%), Cost Reduction Rate (41%), Net Present Value (37%), Quality Improvement Rate (33%), IRR - Internal Rate of Return (31%) and Cycle Time Reduction Rate (22%).
Concerning the sources of technological knowledge and information used to support decision-making, the most requested were top management (74%), the portfolio management team (61%), market research (59%), the internal R&D area (41%), the strategic map (37%), the internal marketing area (28%), external collaborators (28%), partnerships with universities and research centers (28%), scientific events (26%), the individual competencies map (24%), scientific publishing (13%) and patent databases (7%).
Asked how the relevant information for decision-making in portfolio management is obtained, 80% said it comes from interactive meetings with the stakeholders; 54% through a collaborative information system (an Intranet, for example), which makes feasible the storage, categorization, relevance attribution and retrieval of information; 35% through diverse annotations (paper, e-mails, diary), in accordance with the personal preferences of the project/program/portfolio manager; and 33% indicated that each person has a preferred system and the information is compiled in meeting summaries.
3.3 Some Practical Parameters Relative to Knowledge Management
With regard to knowledge creation in portfolio management, 52% of the participants believe that new knowledge creation is based on the adaptation of previous knowledge; 50% that it depends on the capacity of each person to freely share his or her knowledge; 39% that there are conditions for people's knowledge in portfolio management to be converted into explicit (structured) knowledge, able to influence the decision process; 30% see the loss of the new knowledge that emerges to support decision-making because of the absence of structuring; and 20% that new knowledge creation is not considered a priority in portfolio management.
Regarding knowledge representation in portfolio management, 50% have difficulties capturing the knowledge in their systems and processes; 43% are concerned with preserving the knowledge (existing know-how); 30% are not prepared to consider it; 28% stimulate creativity and abstraction, providing conditions for knowledge creation, but do not worry about knowledge representation; and 15% do not see knowledge representation in portfolio management as a priority.
4 Conclusion and Future Perspectives
Considering PMI's intention to make its Standard for Portfolio Management generic for all kinds of organizations, the reality of most companies shows that much effort is still necessary to make it happen. Some answers received when the survey questionnaire was sent out reinforce this concern:
- "At present, our structure has no portfolio management, which is why we will not answer the survey." This came from a large Brazilian company (electrical sector), with intensive R&D (several products on the market) and a large export volume.
- "We don't have product development and portfolio management along the lines of your research." This was the answer of a governmental research institute.
This paper provides input for the implementation of a complementary model, supported by knowledge management practices, that will contribute to clarifying and making feasible portfolio management in new product development. In the next stage of the research, metrics will be defined in order to assess the quality of the information available for decision-making in portfolio management, aiming to provide input for product innovation management.
5 References [1] Cooper RG, Edgett SJ, Kleinschmidt EJ. New product portfolio management: practices and performance. Journal of Product Innovation Management 1999;16(4):333-351. [2] Cooper RG, Edgett SJ, Kleinschmidt EJ. Portfolio management for new product development: results of an industry practices study. R&D Management 2001;31(4):361-380. [3] Cooper RG, Edgett SJ, Kleinschmidt EJ. Portfolio management for new products. Basic Books, New York, 2001. [4] Iamratanakul S, Milosevic DZ. Using strategic fit for portfolio management. In: Proceedings of the PICMET'07 Conference on Management of Converging Technologies, Portland, Oregon, USA, August 5-9, 2007. [5] Moeckel A, Forcellini FA. Collaborative product pre-development: an architecture proposal. In: Loureiro G, Curran R (eds) Complex systems concurrent engineering: collaboration, technology innovation and sustainability. Springer, London, UK, 2007. [6] Nehmy RMQ, Paim I. A desconstrução do conceito de "qualidade da informação". Ciência da Informação 1998;27(1):36-45. [7] Parker MB, Moleshe V, De la Harpe R, Wills GB. An evaluation of information quality frameworks for the World Wide Web. In Van Brakel PA (ed.). Proceedings of the 8th Annual Conference on World Wide Web Applications, Bloemfontein, 6-8 September 2006. Cape Town: CPUT, 2006. [8] Project Management Institute. A guide to the project management body of knowledge: 3rd ed. Project Management Institute, Newton Square, PA, 2004. [9] Project Management Institute. The standard for portfolio management. Project Management Institute, Newton Square, PA, 2006. [10] Rozenfeld H, Forcellini FA, Amaral DA, Toledo JC, Silva SL, Alliprandini DH, Scalice RK. Gestão de desenvolvimento de produtos: uma referência para a melhoria do processo. Saraiva, São Paulo, 2006. [11] Tuomi I. The future of knowledge management. Lifelong Learning in Europe 2002;7(2):69-79.
Using Ontologies to Optimise Design-Driven Development Processes
Wolfgang Mayer, Arndt Mühlenfeld and Markus Stumptner
Advanced Computing Research Centre, University of South Australia, Mawson Lakes Blvd, Mawson Lakes, SA 5095, Adelaide, Australia. Email: mayer|arndt.muehlenfeld|[email protected]; Fax: +61 8 8302 3988. This work was funded by the CRC for Advanced Automotive Technology under project 10 Integrated Design Environment. We are grateful to Chris Seeling (VPAC) and Daniel Belton (General Motors/Holden Innovation) for providing a test-bed and domain expertise.
Abstract. While optimisation-driven design has become prevalent in many engineering disciplines, support for designers to effectively use the results of simulation processes has not been addressed satisfactorily, since process monitoring and reuse of simulation results are not well integrated into current development practices. We introduce a framework to integrate Multidisciplinary Design Optimisation processes using ontological engineering, where artefact and simulation models are exploited to yield more effective optimisation-driven development. We show how meta-modelling techniques can overcome representational and semantic differences between analysis disciplines and execution environments. Keywords. Ontological Engineering, Process Modelling, Design Optimisation
1 Introduction
In the design and engineering context, ontologies provide an explicit formalisation of design knowledge that is otherwise distributed among several teams [6]. Ontologies also aid in semantic interoperability between design disciplines due to the introduction of meta-models that serve as a linking element between disciplines [11], providing means to reason about process-, simulation- and domain-specific aspects [3]. As designs become more complex, designers and engineers increasingly rely on tool support to manage not only design artefacts, but also the design processes themselves. In order to store, manipulate, connect and validate processes, semantic representations of processes are desired that are able to convey not only the structure, but also the semantics of process parameters and activities unambiguously. While research in distributed scientific computing [10] and Web Services has led to numerous attempts to represent the "meaning" of individual tasks, most frameworks do not fully address the challenges posed by non-trivial design tasks,
where information is rich in structure and detailed interpretation of semantics is necessary. Conversely, static artefact models fail to capture dynamic aspects of processes and their execution. Hence, effective tool support requires an integrated approach that facilitates the development of adequate models of individual artefacts and tasks, processes and their execution [12]. This paper extends the emphasis from artefact-centric ontologies and process-oriented approaches to a comprehensive unified framework. We outline an ontological representation of typical multidisciplinary optimisation processes within analysis-driven design processes. In this context, ontologies may serve as part of a reusable framework in which reasoning about artefacts, related processes and simulation tasks and their results is exploited to design, enact and adapt product development processes. Although not explored in this paper, the integration of different models and disciplines may also prove useful to guide designers and engineers in navigating analysis results and in exploring design alternatives. We are not primarily concerned with establishing interoperability between engineering applications, but focus on how existing analysis tool chains can be employed more effectively. We illustrate how ontologies of processes and engineering models can be combined in a uniform framework built on top of well-established engineering standards (STEP/EXPRESS [7]) and how declarative mappings between meta-models facilitate automated synthesis and adaptation of complex processes. Our work aims to unify dynamic process enactment, detailed representations of design-related artefacts, and expressive knowledge-representation formalisms to gain a more powerful framework that surpasses the limitations of purely process-oriented and concept-oriented representations. While no single language is likely to satisfy all use cases, the integration of models at the meta-level can be done using a small set of powerful languages. In this paper, we advocate the use of EXPRESS to formalise the necessary meta-models and model transformations, since this language has been shown to be powerful enough to express and execute model mappings within a unified framework [14]. In Section 2, the concepts and requirements underlying model-driven engineering processes are outlined. In Section 3, our approach to ontological process modelling is introduced and the role of ontologies in design scenarios is discussed. Section 4 presents the architecture of our framework and Section 5 discusses our approach to overcoming disparities between different models. Section 6 elaborates on possible benefits of our framework. Section 7 summarises our view on knowledge modelling in the design and engineering context and outlines future research directions.
2 Concurrent Design Optimization Processes The desire to reduce costs and product cycles and increasing product complexity have led to a paradigm shift towards virtualisation. Early adoption of standardised modelling and simulation techniques and frequent model interchange strive to eliminate the need for physical artefacts.
Multidisciplinary Design Optimisation (MDO) is a form of virtual development where rigorous modelling and optimisation techniques are applied early in the design process, to obtain a coarse understanding of different aspects of a design across a number of heterogeneous domains. Disciplines are analysed in parallel and the results are merged to obtain the design alternative representing the “best” compromise. A prerequisite for this is the representation of MDO processes and information flow in a way that permits semantic analysis. For this we have to focus on two views: the traditional product/design modelling view, and the explicit modelling of the process view. We use what we call an Ontology-based approach, where “ontology” is used in the interpretation of [8] as meaning a set of concepts plus logical axioms that describe their interrelations.
3 Ontologies for Engineering Processes
Adequate representations of structure, function and semantic annotation are essential requirements for reasoning about design artefacts as well as design processes, their prerequisites and their results. Ontologies provide the means to represent the relationships between artefacts and sub-artefacts, as well as (material and domain-specific) properties and annotations made by designers/engineers, in a way that is amenable to semantic analysis and translation. Standards like STEP [7] and ontologies to represent function [9] serve as a starting point for the specific purpose of artefact modelling. The process aspects are concerned with the representation and execution of design and engineering processes within and across organisational boundaries. Here, a consistent formal process model makes it possible to connect engineering decisions and artefacts to the processes that induced them. Different formalisations of semantic representations of activities in design and engineering processes have been proposed, but do not satisfy all requirements of the MDO context: models proposed for process flows [1], workflow and Web service execution typically provide means to specify preconditions and effects, but leave the precise ontology to be used unspecified or suffer from limited expressiveness. Similarly, process modelling efforts in the scientific [10] and engineering domains [15] do support dynamic and adaptive processes, but fail to embrace detailed representations of the artefacts and information manipulated by these processes; typically, design artefacts are represented as black boxes. In the MDO context, both process enactment and detailed artefact knowledge are required for successful process management. Conversely, well-established standards like STEP are designed to capture detailed artefact information, but are not specifically designed for managing dynamic processes. We aim to narrow this gap by introducing an ontological framework that combines dynamic process adaptation mechanisms with detailed engineering models to complement existing approaches, in order to support the design and execution of MDO analyses.
4 Ontology Architecture The approach to design support sketched in this paper relies on formal ontologies to represent processes and design knowledge, as well as automated reasoning techniques to build, validate, and translate design processes and design knowledge. However, no single ontology or language is sufficient to represent all desired aspects [12], and different representations must be combined into a unified framework to overcome differences. The explicit use of adaptors [2] has been advocated to bridge gaps between ontologies, including for parametric configuration problems [5]. Although not based on adaptors, [9] show that domain-specific ontologies representing the function of devices can be related via a common model to extend reasoning about roles and functions beyond a single domain. We rely on domain experts and knowledge engineers to identify and represent relevant interactions between domain ontologies and formalise mappings between ontologies (Fig. 1). A layered approach where meta-models are located at the top, unified task and artefact ontologies comprise the intermediate layer, and domain-specific ontologies form the bottom layer in the ontology hierarchy has been adopted. Concrete executable systems, such as CAD environments, optimisation tools and workflow orchestration engines, are located below the knowledge representation layers. The model and simulation repository acts as a conceptual memory component where ontologies, processes and information about (partially) executed simulations (obtained from the execution platform) and their results are stored. This repository is accessed to draw inferences based on historic process information.
Figure 1. Ontological Support Framework
From analysis of individual domains, ontologies of domain-specific concepts, properties and relations are created. Process execution environments, for example workflow enactment systems, are treated in the same way. As a result, a set of domain ontologies is obtained. Domain-independent aspects and processes are
found by generalisation of domain-specific ontologies to form the intermediate layer: by defining suitable ontology mappings, domain-specific knowledge is mapped into the unified common ontology at the intermediate level. Hence, it becomes possible to describe and reason about domain-independent and task-independent concepts, such as execution traces and execution histories. Common ontologies make it possible to design, trace, reason about and execute analysis-driven processes. Support environments developed for the design and execution of distributed scientific experiments have demonstrated that this is feasible in certain domains while hiding much of the underlying formal knowledge representation mechanisms from engineers [10]. The ontologies and inference systems that comprise the intermediate layer serve as a platform to integrate information and processes obtained from different domains and expressed in languages defined by different ontologies. This aspect is vital when dealing with artefact and process information simultaneously, which may not be expressed in a common language. Differences between ontologies, workflow languages and data formats can be overcome at the meta-model level [4].
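As a rough, illustrative sketch (not part of the framework itself) of how a domain-specific concept might be lifted into the unified common ontology at the intermediate level, the following maps two hypothetical tool-specific concept names onto common concepts; the actual mappings are expressed declaratively over EXPRESS/STEP models rather than in a programming language.

```python
# Hypothetical domain-to-common concept mappings (all names are illustrative only)
MAPPINGS = {
    ("crash_analysis", "LoadCaseRun"): "SimulationTask",
    ("cad", "PartGeometry"): "ArtefactModel",
}

def to_common_concept(domain: str, concept: str) -> str:
    """Map a domain-specific concept onto the unified intermediate ontology;
    unmapped concepts pass through unchanged."""
    return MAPPINGS.get((domain, concept), concept)

print(to_common_concept("crash_analysis", "LoadCaseRun"))  # -> SimulationTask
```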
5 Integration by Declarative Abstraction
Translation between the intermediate layer and the ontologies below is accomplished by adaptors that map between domain-independent and domain-specific representations. The same idea is used to instantiate abstract process models at the intermediate level to generate process specifications tailored to specific workflow execution environments, such as [10], to leverage mature platforms for process enactment. We created a generic meta-model [12] of optimisation-driven design processes (Fig. 2) to form the basis for the joint semantic representation. The central components are Processes, DataFlow and ParameterDescriptions, which represent (concrete or abstract) tasks, information flow between tasks, and the structure and meaning of passed data from an engineer's point of view, respectively. The model is generic in that bidirectional translations to and from widely-adopted process modelling languages can be defined. The abstract representation of parametric engineering models as Parameters and DomainModels makes it possible to introduce detailed domain-specific engineering models through specialisation; Constraints serve to specify and record additional semantic relationships expressed by extensions of well-established STEP Application Protocols (APs). The representation of domain models and tasks via abstract parameters and constraints is the foundation of our engineering support framework: critical properties of concrete artefact models and simulation tasks are captured as domain- and analysis-specific abstractions of concrete artefacts and simulations, formalised as ontologies represented in a uniform language based on STEP/EXPRESS. The common representation makes it possible to define relationships between different abstract models, such as equivalence criteria with respect to a given analysis task, which are used to find and retrieve stored optimisation scenarios compatible with the current problem under consideration. The retrieved information may subsequently be used to guide further design optimisation and exploration processes. The relevance of
STEP/EXPRESS in this context is reinforced by the fact that domain-specific models of design artefacts conforming to standardised APs can be obtained directly from engineering tools, such as CAD systems. Hence, abstract representations of design artefacts can be derived automatically once the given abstractions have been specified. Furthermore, APs serve as the domain-specific reference ontologies upon which abstractions are built using declarative knowledge representation paradigms, reducing the effort to build abstractions and increasing the chance of reusing mappings in other projects. Our meta-model is an abstraction of concrete process models used in a particular domain, which in turn are specifications of possible execution scenarios that may occur. The model must be instantiated for a particular domain to obtain domain-specific process models, which are subsequently used to analyse, execute and monitor processes.
Figure 2. MDO Process Meta-Model [12]
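A minimal sketch of how the meta-model entities named above (Processes, DataFlow, ParameterDescriptions, Parameters, DomainModels and Constraints) might be encoded; the field names beyond those mentioned in the text are assumptions, and the paper itself formalises the meta-model in EXPRESS rather than in a programming language.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ParameterDescription:
    name: str
    meaning: str                      # structure and meaning of the passed data

@dataclass
class Constraint:
    expression: str                   # semantic relationship, e.g. an extended STEP AP rule

@dataclass
class Parameter:
    description: ParameterDescription
    domain_model: str                 # reference to a domain-specific engineering model

@dataclass
class Process:
    name: str
    inputs: List[Parameter] = field(default_factory=list)
    outputs: List[Parameter] = field(default_factory=list)
    constraints: List[Constraint] = field(default_factory=list)

@dataclass
class DataFlow:
    source: Process
    target: Process
    payload: ParameterDescription     # information flowing between two tasks
```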
Analogous to the synthesis of execution models through ontology mappings, complex computing environments can be described by ontologies that make it possible to automatically create a workflow to store and access required information, considering the individual capabilities of existing sub-systems. Hence, it becomes possible to supplement certain system components with additional databases holding (meta-)data that cannot be processed by legacy components.
6 Supporting Design and Engineering The architecture introduced in the previous sections facilitates the support of a variety of different modelling and engineering tasks. For space reasons we restrict our presentation to a few representative examples. The main benefits relevant for designers and engineers are as follows: Through model management enabled by our framework, each analysis may be based on a consistent and up-to-date global view of the design process.
Tracing the evolution of designs and analysis results becomes possible. In particular, the origin and assumptions underlying critical models and parameters can be recorded and later exploited to guide analysis efforts. For example, if it is known that a design constraint on a particular artefact part is only a default value, one may opt to re-negotiate the exact value rather than spend much effort on satisfying the constraint (which may become obsolete in later design stages). System support for selection, composition and reuse of analysis processes may help to avoid errors in the setup stages and to shorten turnaround time. Flexible integration of different systems through automatic workflow synthesis and orchestration lessens the burden of integrating disparate systems from the IT personnel's point of view. Process execution. Using automated reasoning technology, process models can automatically be translated into workflow enactment models that are subsequently executed. This makes it possible to automatically monitor, store, and reason about process outcomes that are handled by the MDO environment. Simulation inputs and results can be compared, and possible changes to the process can be suggested and validated. Declarative ontology mappings enable the separation of modelling and reasoning from implementation-dependent aspects, legacy applications and scientific Grid platforms. Simulation reuse. MDO optimisation tasks can be adapted and streamlined if suitable results are available from previous similar analyses. Using ontologies to compare past simulations to the current analysis, reuse of partial results to streamline the current simulations becomes possible. Conceptually, our approach extends the idea of process refinement described in the context of distributed scientific workflows [13] to complex engineering models that cannot be described using the simple meta-data approach provided by well-known Web services and Grid platforms. Optimisation design. Similar reasoning can be applied in the preparation stage before optimisation processes are executed. For example, formal process descriptions can be used to ensure that a suite of experiments leads to compatible results that can subsequently be aggregated into a global view. In addition, potentially inappropriate process inputs can be flagged by comparing parameter settings chosen by engineers with formal models of successful and failed executions stored in the repository. Error handling. Simulations may also benefit from improved robustness of models and execution through semi-automated error recovery. If a simulation aborts due to modelling errors or out-of-range input values, formal ontologies and a repository of models and execution traces can help to determine whether a different model is available that does not exhibit the same problem. Since permissible input values may not be known explicitly for complex models, case-based or machine learning approaches can be adopted to build the necessary descriptions incrementally.
7 Discussion and Outlook
We presented a meta-model-driven framework for the execution and analysis of optimisation-driven engineering processes, where the integration of common and domain-specific ontologies makes it possible to reason about process executions and
simulation results on a meta-level. We concede that the initial effort to create ontology mappings may be considerable for large projects, but believe that the potential benefits of the approach far outweigh its costs. By relying on existing engineering standards, such as the STEP APs and other data representations, the modelling effort can be directed towards ontology mappings at the meta-model layer, rather than towards the lower-level ontologies and the implementation of data format conversions. To further curb the complexity underlying the creation of models, a hierarchy of meta-models is proposed, where domain- and tool-specific models at the lower layers are reconciled into higher-level generic models that make it possible to translate between semantically overlapping domains. The same approach is applied to mediate between heterogeneous infrastructure components. Currently, we are exploring these technologies for consistency assessment of processes and their instances based on a crash simulation scenario in the automotive industry, and investigating the translation of higher-level representations into different workflow representations. Extension and detailed evaluation on additional scenarios and design domains remain future work. Creating mappings between ontologies and complex (common and domain-specific) entities and relations, and exploring different inference mechanisms to support engineers in defining, maintaining and applying such transformations, are additional topics we plan to investigate.
8 References [1] C. Bock and M. Grueninger. PSL: A semantic domain for flow models. Software Systems Modeling, pages 209–231, 2005. [2] B. Chandrasekaran, J. Josephson, and R. Benjamins. The ontology of tasks and methods. In Proc. KAW’98, 1998. [3] C. Dartigues and P. Ghodous. Product data exchange using ontologies. In Proc. (AID’02), pages 617–637. Cambridge, 2002. [4] Laurent Deshayes, Sebti Foufou, and Michael Gruninger. An ontology architecture for standards integration and conformance in manufacturing. In Proc. IDMME, 2006. [5] Dieter Fensel, Enrico Motta, Stefan Decker, and Zdenek Zdráhal. Using ontologies for defining tasks, problem-solving methods and their mappings. In EKAW, volume 1319 of LNCS, pages 113–128, 1997. [6] A. Gómez-Pérez, M. Fernández-López, and O. Corcho. Ontological Engineering. 2004. [7] ISO. 10303-11:1994: Part 11: The EXPRESS language reference manual. ISO, 1994. [8] M. Kifer, G. Lausen, and J. Wu. Logical Foundations of Object-Oriented and Frame-Based Languages. Journal of the ACM, 42(4):741–843, 1995. [9] Yoshinobu Kitamura, Yusuke Koji, and Riichiro Mizoguchi. An ontological model of device function: industrial deployment and lessons learned. Applied Ontology, 1(3–4):237–262, 2006. [10] Bertram Ludäscher et al. Scientific workflow management and the kepler system. Concurrency and Computation: Practice and Experience, 18(10):1039–1065, 2006. [11] Andreas Maier, Hans-Peter Schnurr, and York Sure. Ontology-based information integration in the automotive industry. In ISWC, volume 2870 of LNCS, pages 897–912. Springer, 2003. [12] Franz Maier, Wolfgang Mayer, Markus Stumptner, and Arndt Mühlenfeld. Ontology-based process modelling for design optimisation support. In Proc. DCC’08, 2008.
[13] Simon Miles et al. Connecting scientific data to scientific experiments with provenance. In Proc. E-SCIENCE'07, pages 179–186. IEEE Computer Society Press, 2007. [14] Alain Plantec and Vincent Ribaud. PLATYPUS: A STEP-based integration framework. In Interdisciplinary Information Management Talks (IDIMT-2006), 2006. [15] Richard Sohnius et al. Managing concurrent engineering design processes and associated knowledge. In ISPE CE, pages 198–205. IOS Press, 2006.
CAD Education Support System Based on Workflow
Kazuo Hiekata a,1, Hiroyuki Yamato b and Piroon Rojanakamolsan b
a Assistant Professor, Graduate School of Engineering, The University of Tokyo, Japan
b Graduate School of Frontier Sciences, The University of Tokyo, Japan
Abstract. This paper presents the development and evaluation of a CAD education support framework that provides enhanced design assistance for students undertaking ship design coursework, using a document management system called ShareFast. The software offers the following features: 1. a CAD education framework for self-instruction based on workflows of CAD operations; 2. a network-based system for sharing educational materials and supporting communication between instructors and students; 3. a sophisticated and flexible content management feature based on metadata in RDF (Resource Description Framework) format; and 4. an operations monitoring function that keeps track of students' behaviour in order to find problems in the materials. This educational framework was used in empirical studies with university students, and the results show that the framework can hasten students' learning by means of improved learning materials, and that it also helps the instructor understand the students' problems in real time. Keywords. CAD, CAD education, Document management system, CAD operations monitoring
1 Introduction
Computers were first used in ship design as far back as the late 1950s, although the hardware and software of those days bear little resemblance to present-day facilities.[1] Since then, rapid growth has taken place in the use of computers in the marine industries. It is now becoming more common for educational departments to have several terminals and, in some cases, their own computing power. However, no dramatic improvement in learning efficiency has been reported.[2] Many studies on education exist. One example of using computers for ship design education is at the School of Marine Science and Technology, Newcastle University, United Kingdom. A commercial CAD software package, which serves as a tool for basic design and
Assistant Professor, Graduate School of Engineering, The University of Tokyo, Building of Environmental Studies, Room #274, 5-1-5, Kashiwanoha, Kashiwa-city, Chiba 277-8563, Japan; Tel: +81 (4) 7136 4626; Fax: +81 (4) 7136 4626; Email: [email protected]; http://www.nakl.t.u-tokyo.ac.jp/index-en.html
calculation of ships, has been used to support large design projects undertaken by undergraduates in the teaching of ship design as part of the Marine Design module of the Naval Architecture degree programme since 1995.[3] In another example, in-house educational CAD software developed by a research team of Storch [4] was deployed as a training tool to acquaint students and designers with this design approach in the ship design process. However, ship design education using computers is not very simple. Students have to learn both the design process and the use of ship design software, e.g. CAD software. Moreover, teachers find it difficult to manage a class that involves many information technology aspects. Wright stated that one of the main problems that he encountered in teaching the CAD software was helping students to manage the files associated with the separate modules at each stage of the design process. This paper presents the framework structure and its functions to support CAD education, and explains the experiments conducted with the framework. The empirical evaluation of the experiments and potential improvements of the system are finally presented.
2 Proposed Framework for CAD Education
2.1 Overview
The framework for CAD education developed in this research is illustrated in Fig. 1. The framework consists of three layers: users (instructors and students), the client program, and the server program. The user creates the workflow for CAD operation and uploads related documents (e.g. CAD manuals) to the server using the client program. The program then generates metadata for the workflow and documents automatically, while the server stores all the data and metadata. When the students retrieve the workflow and documents, they use the client program to see the workflow of what they have to do and also to retrieve the related documents. The workflow concept was introduced to the system during the development of a former prototype system. A workflow shows the whole process and is assumed to benefit design education. A text search feature is also provided by the server. The programs are based on the ShareFast system [5] developed by the authors. It is an open source, client/server application for document management based on workflow [6] using RDF metadata, a model making statements about resources in the form of subject-predicate-object expressions written in XML format.[7] Instructors create educational materials for CAD operation using the client program and save them on the server to share the materials with their students. IT infrastructure plays a key role in supporting both the ShareFast system and the ship design software.[8] To ensure its competence and ease of use, the ShareFast system was developed as a complement to ship design software and incorporates many of the principles of the Newcastle Protocol, which defines the requirements for student-friendly software.[9] Advantages and disadvantages of using CAD in design education are also analyzed by Robertson.[10]
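The following sketch is only meant to make the subject-predicate-object idea above concrete: it builds two RDF statements linking a workflow task to a related document using the rdflib library; the namespace, property names and URIs are hypothetical, not ShareFast's actual vocabulary.

```python
from rdflib import Graph, Literal, Namespace, URIRef

SF = Namespace("http://example.org/sharefast#")          # hypothetical vocabulary
g = Graph()

task = URIRef("http://example.org/workflow/surface#task4")
doc = URIRef("http://example.org/docs/create-new-project.pdf")

g.add((task, SF.hasDocument, doc))                       # task -> related manual
g.add((doc, SF.title, Literal("Create new project manual")))

print(g.serialize(format="xml"))                         # RDF/XML, as stored on the server
```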
2.2 Learning by Workflows with Related Documents ShareFast offers the instructors the workflow editor tool which can be used to create the workflow process, and add documents such as design manuals to the related tasks in the workflow. Students can learn the design process by following each task with related documents. Fig. 2 displays how to work with the system.
Workflow helps students understand the design processes more easily as it allows the students to know the whole learning process before going deeper into the detail of each task.
2.3 Educational Material Management Based on RDF
All the workflows and document files are the educational materials for CAD operation. The system assigns a URI (Uniform Resource Identifier) to every file on the server and also defines metadata in RDF format to describe the relationships between the files identified by URIs. By introducing URIs and RDF to this software system, every workflow, task in a workflow and document can be identified by a URI. When the educational materials need to be revised through real use of the system, the instructors and students can discuss the materials based on the common identifiers provided by the software framework. One example is the discussion feature between instructors and students. ShareFast allows students and teachers to add discussion threads to any document, task or workflow. Each discussion thread keeps the URI, a unique address of a resource in the network, of its related document, task or workflow. The discussion thread system, shown by clicking a workflow or task node, acts as a message board for students and teachers to discuss a particular subject at the process level (workflow) and at the task level (task node).
2.4 Operations Monitoring Function
ShareFast keeps track of each student's actions after they log in to the system by writing to a log file, as shown in Table 1. It records the time at which a student logs in to the system, and every click and download of learning material for each workflow and task. This is useful for the teacher to monitor the students' learning activities. This feature utilizes access records based on URIs. The behaviour of students can be roughly extracted from this log file. First, the system separates the log entries by user and removes meaningless records such as duplicates or unused entries. Then the system regards each access to a URI as the start time of the corresponding task. Finally, the report generator reads all the reformed individual log files, generates a real-time graphical report in HTML format and shows it to the user (the instructor).

Table 1. Example of log file contents

User       Time     Session     Task                 URI
Student 1  4:55:15  Login       -                    n/a
Student 2  4:55:43  Login       -                    n/a
Student 3  4:56:22  Login       -                    n/a
Student 1  5:03:16  start_task  Launch CAD           http://www...#1
Student 2  5:03:17  start_task  Launch CAD           http://www...#2
Student 1  5:03:21  start_task  Create new project   http://www...#4
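A rough sketch of the log processing described above: records are grouped by user, each URI access is treated as the start of the corresponding task, and per-task durations are derived from consecutive start times. The record layout follows the simplified Table 1; the third record is invented so that a duration can be computed, and the real report generator emits an HTML report rather than printing a dictionary.

```python
from collections import defaultdict
from datetime import datetime

# (user, time, session, task) records in the simplified Table 1 layout;
# the last record is invented for illustration
log = [
    ("Student 1", "5:03:16", "start_task", "Launch CAD"),
    ("Student 1", "5:03:21", "start_task", "Create new project"),
    ("Student 1", "5:10:02", "start_task", "Define compartments"),
]

def task_durations(records):
    """Seconds spent in each task, assuming the next start_task ends the previous one."""
    per_user = defaultdict(list)
    for user, t, session, task in records:
        if session == "start_task":
            per_user[user].append((datetime.strptime(t, "%H:%M:%S"), task))
    durations = {}
    for user, starts in per_user.items():
        for (t0, task), (t1, _) in zip(starts, starts[1:]):
            durations[(user, task)] = int((t1 - t0).total_seconds())
    return durations

print(task_durations(log))
```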
3 Experiment
3.1 Overview
Two experiments are conducted to evaluate the developed system. A commercial CAD software package for ship design is used in both experiments. To teach the students, the instructors created workflows and added instruction documents and other necessary files to the related task nodes in the workflows. The students began learning from the first task of the first workflow and moved to the next one after finishing the current workflow, while the instructor observed them in the same room. Fig. 3, for example, shows the workflow of the Surface and Compartment module, which defines ship details and creates ship compartments and consists of nine tasks. The students started each workflow by clicking each task node to retrieve instruction materials describing how to work with the CAD software. They were also able to add discussion threads to the workflows and task nodes (tasks) that they had problems with and needed to discuss with the instructor and other students. The operations monitoring function is used only in the second experiment.
3.2 Experiment for Material Improvement
The objective of the first experiment is to clarify the benefits of recorded user behaviour for the improvement of the materials on the ShareFast system. The experiment was conducted on two groups of three students. The instructor checked the feedback from the students of group one to identify problems with the educational materials. The analysis was based on the learning records from the log file and messages posted by the students. As mentioned before, the instructor analyzed the log file keeping track of the students' activities. From the log, it was possible to know the time the students spent on each task, and it could tell the instructor the students' learning progress and behaviours. The instructor also took the messages from the discussion threads into consideration. The discussion threads were helpful not only for handling class communication, but also for storing students' question and answer messages during the class. Table 2 shows some messages in the discussion threads of the Patch & Curve module, their causes, as well as how the problems were responded to. From both the duration of each task and the messages in the discussion threads, the instructor carefully refined the learning materials of trouble-causing tasks and polished some unclear and time-consuming workflows. For example, the message "Cannot complete the surface" indicated that a student did not understand how to complete the ship surface, and the URI kept with this message showed that the problem is in the "Complete the Surface" task of the Patch and Curve module. The instructor was able to analyze the problem based on information about what and where the problem is. Finally, the learning material for this task was extended with more detailed information on how to adjust curves to complete the ship surface. Subsequent results showed that the average learning duration of the task "Complete the Surface" for group-two students was 25 percent shorter than for the first group, resulting from the improvement of the learning material.
Figure 3. Workflow of Surface and Compartment module

Table 2. Example of discussion threads

Message: Cannot complete the surface
Cause: Students didn't know where to add new curves
Response: Explain more about where to add curves to complete the surface in the material

Message: Don't know the meaning of icons
Cause: No explanation in the learning material
Response: Add a description of the accept, edit and profit icons to the material

Message: Don't know how to import hull form
Cause: No explanation in the learning material
Response: Add more information about how to do it in the material
3.3 Experiment for Operations Monitoring Function
The main objective of the second experiment is to evaluate the operations monitoring function. In the prior experiment, the students had to ask the instructor questions when faced with a problem. This time, the system monitors the students' behaviour with the associated URIs and shows their coursework progress. Three students used the proposed system and worked on the coursework. Fig. 4 shows how the monitoring system works. The user interface tells what tasks the students are working on, and how long they have been in those
tasks, in a graphical view. For instance, the instructor could be notified by the real-time report that Student 3 is spending a longer time than expected on the "Adjust isophote and release the surface" task. The instructor went to investigate the problem and found that the student did not know where the Lines View window is; the Lines View window is used in the Release the Surface step. Finally, the student got help before spending more time than necessary. This kind of mechanism helps the class duration as a whole to be effectively controlled.
Figure 4. Real time report notifying students' progress (expected time vs. Student 1, Student 2 and Student 3)
4 Discussion
The first experiment proved that the discussion thread system and the log file paved the way for improving the educational materials, consequently resulting in shortened class duration and fewer problems. With the operations monitoring system, the instructor could notice if any students were performing behind schedule and help these students catch up with the rest of the class. As a result, the learning duration can be effectively controlled. Moreover, according to the problem investigation case of the "Adjust isophote and release the surface" task in the second experiment, the operations monitoring system could also help the instructor know when any student has a problem during the class, so that the problem can be immediately inspected and dealt with. As a result, the instructor could gain a better problem analysis than by investigating problems after class through log files and messages from the discussion thread system. This results in learning material improvement in case the problem is caused by the learning material. The experimental case studies in this paper, however, were conducted with small groups of students. This might not be enough to prove the efficiency of every function of this ShareFast-based educational framework. The authors need to work on further case studies with a larger number of students and more complicated study scenarios.
5 Conclusion
A CAD education support system based on workflow is proposed. The experiment results showed that it was able to establish faster and more efficient CAD education. The instructor can locate students' problems and improve the educational materials from the learning records kept in log files and discussion threads. The instructor will also be notified by the operations monitoring feature if any student is performing behind the class in any given task.
6 Acknowledgement
The authors would like to thank the students who participated in the experiments, and Mr. Masakazu Enomoto for his help. This work was supported by a Grant-in-Aid for Young Scientists (B), No. 20760556. The ShareFast project was supported by the Exploratory Software Project of the Information-technology Promotion Agency, Japan.
7 References [1] Kuo C, MacCallum KJ. Computer Aided Applications in Ship Technology. Computers in Industry 1984; 5(3):211-219. [2] Khosrowjerdi M, Kinzel GL, Rosen DW. Computers in education: Activities, contributions, and future trends. Journal of Computing and Information Science in Engineering 2005; 5(3):257-263. [3] Wright PNH, Hutchison KW, White GDJ. The Use of Tribon Initial Design for Teaching Ship Design, Proceedings of the 9th International Marine Design Conference, May, Ann Arbor, MI, 2006; 699-722. [4] Storch RL, Singh H, Lim SG. Education and Training Software for Functional Volume Design: AccomDesign. Proceedings of the 12th International Conference on Computer Application in Shipbuilding, Busan, Korea, 2005; 461-474. [5] Hiekata K, Naito N, Ando H, Yamato H, Nakazawa T, Takumi K. A Case Study of Design Knowledge Acquisition Using Workflow System, Proceedings of the 12th International Conference on Computer Application in Shipbuilding, Busan, Korea, 2005; 849-861. [6] Hiekata K, Yamato H, Oishi W, Nakazawa T. Ship Design Workflow Management by ShareFast, Journal of Ship Production, 2007; 23(1):23-29. [7] Brickley D, Guha R. RDF Vocabulary Description Language 1.0: RDF Schema. Available at: . Accessed on: Apr. 1st 2008. [8] Hiekata K, Yamato H, Rojanakamolsan P, Oishi W. Design Engineering Educational Framework Using ShareFast: A Semantic Web-Based E-Learning System, Proceedings of the 4th International Conference on Information Technology: New Generations ITNG 2007, Las Vegas, USA, 2007; 317-322. [9] Wright PNH, Birmingham RW. Towards Student Friendly Ship Design Software, Proceedings of the 9th International Marine Design Conference, May, Ann Arbor, MI, 2006; 723-733. [10] Robertson BF, Walther J, Radcliffe DF. Creativity and the Use of CAD Tools: Lessons for Engineering Design Education From Industry, Journal of Mechanical Design 2007; 129(7):753-760.
Configuration Grammars: Powerful Tools for Product Modelling in CAD Systems
Egon Ostrosi a,1, Lianda Haxhiaj b and Michel Ferney c
a,c Laboratoire de Recherche Mécatronique 3M, UTBM, Belfort, France.
b Laboratoire SCOLIA, Université Marc Bloch - Strasbourg 2, France.
Abstract: Engineering design synthesis is considered formal when it is computable, structured and rigorous. Therefore, a design synthesis approach must rely on adequate product descriptions and representations, and also on effective, formal and structured tools and methods. In this paper we propose a product modelling and representation approach based on configuration grammars. Based on the properties of CAD configuration models, a generic configuration grammar is proposed. The representation by the proposed grammar yields configurations composed of primitive elements or primitive configuration features, whose meaning is related to basic engineering structures. Interconnection of these primitive configuration features describes a structural configuration. The proposed grammars are based on feature-component-module-product relationships, considered as adequate structural means for a general product representation. The proposed grammar-based configuration modelling approach is validated by some applications. Keywords: Product family, design for configuration, product configuration, product modelling, computer-aided-design, configuration grammar.
1 Introduction
Design of configurable product families, or design for configuration, has emerged as an efficient way to deal with the new challenges of a constantly dynamic and volatile market [10, 16]. Design for configuration is the process that generates a set of product configurations based on a configuration model and is characterized by a configuration task [2]. An essential characteristic of the conceptual design of a configurable product family is product modeling [15, 16]. The effective modeling of a configurable product family must be capable of representing the complex relationships between the components of a product, on the one hand, and between the members of the family, on the other [7, 14]. Furthermore, the modeling must deal with the problem of generation and derivation of the different products, and thus support the variety of new and innovative products [9].
1
Corresponding Author E-mail: [email protected]
Engineering design synthesis is considered formal when it is computable, structured and rigorous. Therefore, a design synthesis approach must rely on adequate product descriptions and representations, and also on effective, formal and structured tools and methods, capable of handling the complex problems of knowledge extraction from configurable products and of learning new configurable structures [8, 15]. However, in conceptual design, there is no adequate formal representation to support the modeling of configurable products. Today's grammar-based design systems have the potential to allow greater exploration of design alternatives. Grammars can be considered powerful formal tools to represent the strong structural relationships inside configurable products [5-9, 11-14]. Grammars are a paradigm that concentrates on representational structures and underlying transformation mechanisms [4, 17]. Grammars are production systems [4] that can generate designs according to a specific set of user-defined rules [3]. The most widely used grammars in design are shape grammars. They were developed to generate architectural designs [18]. In shape grammars, the grammar production rules are written in terms of shapes. In mechanical design, shape grammars were used to generate coffee makers [1]. In [14] a graph grammar is developed for modeling product structure. In [11-12] a grammar-based optimal configuration design approach is proposed to integrate conceptual design, configuration design and component selection tasks. Grammar-based design systems have the potential to automate the design process and allow a better exploration of design alternatives [3, 8]. This paper proposes and develops a configuration grammar design approach to support computer-aided design for product modeling. The paper is structured as follows. In the first section, the problem of design for configuration of product families is presented and the use of design grammars during conceptual design is introduced.
2 Generation of Configuration Structure
During the design process in a CAD environment, the product (i.e. mechanical system, component, or part) to be designed evolves in structure and in characteristics up to the completion of its design. In fact, the transition from the product to be designed to the designed product can be seen as a progressive and non-deterministic process, each step characterising the acquisition of knowledge. It is a process which, at any moment, can call the previously acquired knowledge into question. The product to be designed is probably complete and globally coherent at the end of its design process. During the design process, the product goes through different representations, which come closer and closer to the final virtual product. Also, starting from the initial phase and going to the final phase, the product goes through intermediate phases. If the complete virtual product represents the finality, then the initial structures and the pathways to attain the final structure are not always the same, deterministic and well known. From these observations, we establish the following hypothesis: Hypothesis: A virtual product has a final structure that is the result of the evolution of a set of significant structures.
This hypothesis is valid for each level of the product: system, part, configuration feature. Moreover, this hypothesis implicitly suggests a mechanism for the generation of configuration feature structures. This mechanism represents the process of evolution of these structures, from an initial phase up to the final phase. Indeed, this mechanism shows that the evolution of configuration feature structures proceeds from the simple to the complex. The same reasoning can be applied at the level of parts, and then at the system level. According to the previous hypothesis, the final structure of the product, described by the set of geometric and topological variables, is the result of the evolution of significant structures. These significant structures can be classified into three groups: primitive structures (terminal); intermediate structures (non-terminal); and the final structure. The evolution requires that these structures have the following properties: Property 1: Each structure is provided with a set of particular elements, called attaching elements. The attaching elements give the structure the possibility to interconnect with other structures. Property 2: From the interaction between structures (primitive or intermediate), a structure superior to them evolves. The interaction between structures is realised by the attaching elements, which produce the joint elements on the one hand, and the tie elements of the generated structure on the other hand. Thus, the generated structures enjoy the property of inheritance. Property 3: The interaction between structures can occur if and only if the structures satisfy the constraints defined on the geometric and topological domains. The mechanism that satisfies the previous properties is the mechanism for structure generation (automaton) (fig. 1), where IU represents the Input Unit, OU the Output Unit, CU the Control Unit and PU the Processor Unit, connected in a feedback loop.
Figure 1. The mechanism for generation of structures.
The mechanism has three storage units and one processor unit. These units are bound by transmission lines. The Input Unit and the Output Unit. The Input Unit prepares the structures and transmits them to the Processor Unit, whereas the Output Unit transmits the structures generated by the Processor Unit back to the Input Unit, if and only if these structures are not final structures. For example, the primitive structure a is composed of two plane, adjacent faces that form a convex angle. These faces play the role of attaching elements (figure 2). The number of attaching elements is m = 2; the first and the second element are identified by 1 and 2, respectively.
Figure 2. The primitive structure of a feature and its graph representation.
Following several cycles of execution, the primitive structures evolve either toward intermediate (non-terminal) structures or toward a final structure. The non-terminal structures enjoy the property of inheritance, and are therefore provided with the set of connection elements. Processor Unit. The Processor Unit satisfies property 2, setting up the evolved structures, i.e. the joint elements and the tie elements of these structures. The functioning of the Processor Unit is based on the production rules for the generation of structures. For example, the Processor Unit uses the following production rules to generate more evolved structures (figure 3) from the primitive structure a (fig. 2): A → a and A → Aa.
Figure 3. Building the structures, the joint elements and the tie elements
Building the joint elements. According to Property 2, the structures (either primitive or intermediate) connect to each other with the help of attaching elements, forming the joint elements. In the case of figure 3, a joint element is formed during the second and third iterations. The unique joint element is formed from the attaching elements II of A and 1 of a. The Processor Unit manages the building of joint elements based on production rules called production rules for the generation of joint elements. In the second iteration, the structure A joins the structure a through the attaching elements II of A and 1 of a, respectively, to form the auto-similar structure A. Thus, a joint element is created. This joint element does not participate anymore in the building of the evolved structures; therefore it is a final joint element, and it will be noted by the symbol ●. The production rule for the generation of joint elements is ● → (II, 1). Building the tie elements. The non-primitive structures must be auto-similar to the primitive structures satisfying the first property; that is to say, such a structure must be provided with some special attaching elements, called tie elements. In the example of figure 3, in the first iteration, the Processor Unit forms two tie elements of the structure A: I and II. In this case the production rules for the generation of tie elements are written in the form I → 1 and II → 2. In the second iteration, the production rules for the generation of tie elements are I → (I, 0) and II → (0, 2). The first rule shows that the tie element I of the structure A is formed from the attaching element I of the first structure, and that the second structure does not participate in the formation of this tie element.
Control Unit. The Control Unit ensures that the semantics of the relations between structures is as correct as their syntax. Thus, the Control Unit verifies that the obligatory conditions on structures, called semantic conditions, are satisfied before the application of the production rule for structure generation. The semantic reasoning can be extended to the joint elements and tie elements. In these cases, we have the semantic conditions for joints and the semantic conditions for connections, respectively. Example of a mechanism for generation of configuration features. The following mechanism (figure 4) can generate a class of features based on the functioning of each unit.
(Figure content: the primitive structure a(1, 2) is loaded in IU; the Processor Unit applies the production blocks [A → a; I → 1, II → 2], [A → Aa; ● → (II, 1); I → (I, 0), II → (0, 2)] and [S → A; Λ → I, Λ → II]; the feedback line returns non-final structures from OU to IU.)
Figure 4. Example of a mechanism for generation of configuration features
The primitive structure, provided with attaching elements (1, 2), is loaded in IU. The content of IU is transmitted to PU. The result is transmitted to and loaded in OU. If the formed structure is the final structure S, then the mechanism stops; otherwise the content of OU is transmitted back and loaded in IU. The structures generated in the first, second and third iterations are shown in figure 3.
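To make the behaviour of the loop easier to follow, the IU → PU → OU cycle of figure 4 can be read as a simple bottom-up rewriting of strings. The Python sketch below is only an illustrative reading of the example (rules A → a, A → Aa, S → A), not the authors' implementation; joint and tie bookkeeping is deliberately omitted.

    # Bottom-up reading of the productions in figure 4: the Processor Unit reduces
    # the current structure with A -> a, A -> Aa and S -> A until the final
    # structure S is reached; non-final results are fed back from OU to IU.

    REDUCTIONS = [("Aa", "A"),   # A -> Aa, applied right-to-left (checked first)
                  ("a",  "A"),   # A -> a
                  ("A",  "S")]   # S -> A : the axiom closes the derivation

    def processor_unit(structure):
        """One PU cycle: apply the first reduction whose right-hand side occurs."""
        for rhs, lhs in REDUCTIONS:
            if rhs in structure:
                return structure.replace(rhs, lhs, 1)
        return structure

    def generate(structure="aaa"):
        """IU -> PU -> OU loop with feedback until the final structure S appears."""
        trace = [structure]
        while structure != "S":
            new_structure = processor_unit(structure)
            if new_structure == structure:      # no rule applicable: stop
                break
            structure = new_structure           # OU feeds the result back to IU
            trace.append(structure)
        return trace

    # generate("aaa") yields the trace ['aaa', 'Aaa', 'Aa', 'A', 'S'].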
3 Formal Representation of Configuration Generation A configuration language describes the generation of configuration structures, joint elements and tie elements. A grammar provides the finite generic description of this language. Thus we will focus on finding a configuration grammar, which provides the generic and productive description of the configuration language. In these conditions, a Configuration Feature Grammar can be defined as an 8-tuple:

\[ G = \left( V^{T}_{feature},\; V^{T}_{joint \cup tie},\; V^{N}_{feature},\; V^{N}_{joint \cup tie},\; S,\; \bullet,\; \Lambda,\; P \right) \qquad (1) \]

where:
- V^T_feature = {a, b, c, ...} is a finite non-empty set of primitive configuration features called the terminal vocabulary of configuration features;
- V^T_joint∪tie = {0, 1, 2, ..., j, ..., m}, m ∈ N, is a finite non-empty set of primitive attaching elements called the terminal vocabulary of joints and ties;
- V^N_feature = {A, B, ..., S} is a finite non-empty set of non-primitive configuration features called the non-terminal vocabulary of configuration features;
- V^N_joint∪tie = {O, I, II, III, ..., Λ} is a finite non-empty set of non-primitive attaching elements called the non-terminal vocabulary of joints and ties;
- S, ● and Λ are respectively the axiom of feature, the axiom of joint and the axiom of tie;
- P : { [α; Γ_α; Δ_α] → [β; Γ_β; Δ_β] } is a finite set of productions or replacement rules.

As usual, we require that V^T_feature ∪ V^N_feature = V_feature and V^T_joint∪tie ∪ V^N_joint∪tie = V_joint∪tie. We also assume in this case that (V^T_feature ∩ V^N_feature = ∅) and (V^T_joint∪tie ∩ V^N_joint∪tie = ∅). The primitives of V^T_joint∪tie = {0, 1, 2, ..., j, ..., m} are used to identify the attaching elements of a configuration feature. Every attaching element of a configuration feature represents a primitive. The null primitive attaching element 0 is not associated with any attaching primitive element, and it is used to denote the relationship "primitive configuration feature not involved in the joint or the tie formation". Interconnections of configuration features can only be made through specified attaching elements, which can be primitives or non-primitives. An analogous interpretation holds for the utility of the null non-primitive attaching element O. The production rules of the configuration grammar have the following format:

\[ \begin{bmatrix} \alpha \\ \Gamma_\alpha \\ \Delta_\alpha \end{bmatrix} \;\to\; \begin{bmatrix} \beta \\ \Gamma_\beta \\ \Delta_\beta \end{bmatrix} \qquad (2) \]

whose three rows correspond respectively to Level 1, Level 2 and Level 3,
where:
- α is called the left-side configuration feature matrix, α = [α_11], i = 1, j = 1;
- β is called the right-side configuration feature matrix, β = [β_ij], i = 1, j = 1, 2, ..., m, where m is the number of configuration features;
- Γ_α is called the left-side joint matrix, Γ_α = [Γ_α,ij], i = 1, 2, ..., n, j = 1, where n is the number of joint elements;
- Γ_β is called the right-side joint matrix, Γ_β = [Γ_β,ij], i = 1, 2, ..., n, j = 1, 2, ..., m;
- Δ_α is called the left-side tie-point matrix, Δ_α = [Δ_α,ij], i = 1, 2, ..., s, j = 1, where s is the number of tie elements;
- Δ_β is called the right-side tie-point matrix, Δ_β = [Δ_β,ij], i = 1, 2, ..., s, j = 1, 2, ..., n.

There are three levels of production rules for the Feature Grammar. The first is the configuration feature level. Physically, this level specifies the way in which the configuration features β_11, β_12, ..., β_1j, ..., β_1m interconnect to form a single non-terminal α = [α_11]. Formally, this level has rules of the form [α_11] → [β_11], or [α_11] → [β_11 β_12 ... β_1j ... β_1m], where: α_11 ∈ V^N_feature; β = [β_11, β_12, ..., β_1j, ..., β_1m] defines an order relation for its configuration features; β_1j ∈ V^T_feature ∪ V^N_feature is a terminal or non-terminal configuration feature component; m is the number of configuration features.

The second level is the joint level. Physically, this level specifies which attaching elements of which configuration features connect at each formed joint. Exactly one line is required per joint. The numbers of columns of the left side and the right side are given respectively by 1 and m, where 1 is the number of columns of α = [α_11] and m is the number of columns of β = [β_11, β_12, ..., β_1j, ..., β_1m]. If the j-th configuration feature (terminal or non-terminal) is not involved in a joint, the null identifier (0 terminal or O non-terminal) appears in the j-th column. Then formally, the joint level has rules of the form [Γ_α,ij] → [Γ_β,ij], or (3):

\[ \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_i \\ \vdots \\ y_n \end{bmatrix} \to \begin{bmatrix} t_{11} & t_{12} & \cdots & t_{1j} & \cdots & t_{1m} \\ t_{21} & t_{22} & \cdots & t_{2j} & \cdots & t_{2m} \\ \vdots & & & & & \vdots \\ t_{i1} & t_{i2} & \cdots & t_{ij} & \cdots & t_{im} \\ \vdots & & & & & \vdots \\ t_{n1} & t_{n2} & \cdots & t_{nj} & \cdots & t_{nm} \end{bmatrix} \qquad (3) \]

where: y_i is a formed joint element; t_ij is an attaching element of the configuration feature j, defined according to the order in the right-side component matrix, that participates in forming the joint element y_i.

The third level is the tie level. Physically, this level forms the attaching elements of the formed configuration feature by giving the correspondence between the external connections for the left side and the right side. It specifies which attaching elements of which configuration features connect at each formed tie. As the ties are special joints, their form of representation and formation is similar to the joint level. Formally, these rules have the form [Δ_α,ij] → [Δ_β,ij], or (4):

\[ \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_i \\ \vdots \\ z_s \end{bmatrix} \to \begin{bmatrix} t_{11} & t_{12} & \cdots & t_{1j} & \cdots & t_{1m} \\ t_{21} & t_{22} & \cdots & t_{2j} & \cdots & t_{2m} \\ \vdots & & & & & \vdots \\ t_{i1} & t_{i2} & \cdots & t_{ij} & \cdots & t_{im} \\ \vdots & & & & & \vdots \\ t_{n1} & t_{n2} & \cdots & t_{nj} & \cdots & t_{nm} \end{bmatrix} \qquad (4) \]

where: z_i is a formed attaching element; t_ij is an attaching element of the configuration feature j, defined according to the order in the right-side component matrix, that participates in forming the attaching element z_i.
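Before turning to semantics, note that a three-level production of the form (2)-(4) can be held in a small record: one feature-level rewrite, one row per formed joint and one row per tie. The sketch below is only a possible encoding suggested by the definitions above; the field names are ours, not the authors'.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Production:
        """A production of the Configuration Feature Grammar, format (2)."""
        lhs_feature: str                 # alpha_11, a non-terminal feature
        rhs_features: List[str]          # beta_11 ... beta_1m, ordered components
        joint_rows: List[List[str]]      # one row [t_i1 ... t_im] per formed joint y_i
        tie_rows: List[List[str]]        # one row per tie element of the new feature

    # Second iteration of figure 3: A -> A a, joint (II, 1), ties I -> (I, 0), II -> (0, 2)
    p = Production(
        lhs_feature="A",
        rhs_features=["A", "a"],
        joint_rows=[["II", "1"]],        # the joint is formed from II of A and 1 of a
        tie_rows=[["I", "0"],            # tie I comes from attaching element I of A
                  ["0", "2"]])           # tie II comes from attaching element 2 of a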
4 Conditional Configuration Grammar The Configuration Feature Grammar represents the purely syntactic point of view of configuration. It does not always allow expressing the full complexity of structural relations between the primitive configuration features composing a configuration. Once the configuration features and their interconnectivity are
specified, then there exists a set of attributes which can be explicitly identified. This set of attributes makes up, along with the knowledge of connectiveness, the required configuration. For instance, in the case of an I-beam, some attributes can be: beam height, beam width, web thickness, flange thickness, fillet radii. The set of values assigned to these attributes must enable the configuration to meet the functional requirements for the beam. Given two finite, non-empty sets D^tpl = {D^tpl_1, D^tpl_2, ..., D^tpl_m} and D^geo = {D^geo_1, D^geo_2, ..., D^geo_n}, called the set of topologic domains and the set of geometric domains respectively, two sets of attributes A^tpl = {a^tpl_1, a^tpl_2, ..., a^tpl_m} and A^geo = {a^geo_1, a^geo_2, ..., a^geo_n}, called the set of topologic attributes and the set of geometric attributes respectively, are defined, where each attribute is associated with a domain. For instance, let us define the attribute a^tpl_1 = "relative positions between two faces". The set of values assigned to this
attribute can be {adjacent, parallel, same support}, which defines the domain D^tpl_1. Any terminal or non-terminal configuration feature can be characterized by a subset of topologic and geometric attributes. Thus, the values of the attributes in their respective domains are the obligatory conditions that a syntactic rule must meet before being applied. If each level of a production rule (representing the knowledge of connectiveness) must meet obligatory conditions before being applied, then a conditional production rule P is defined as follows:

\[ \begin{bmatrix} \alpha \\ \Gamma_\alpha \\ \Delta_\alpha \end{bmatrix} \xrightarrow{\;C\;} \begin{bmatrix} \beta \\ \Gamma_\beta \\ \Delta_\beta \end{bmatrix}, \qquad C = \begin{bmatrix} C_{\alpha \to \beta} \\ C_{\Gamma_\alpha \to \Gamma_\beta} \\ C_{\Delta_\alpha \to \Delta_\beta} \end{bmatrix} \]

where C gathers the semantic conditions associated with each level of a production rule. For instance, the first-level rule α → β must meet the obligatory condition C_{α→β} before being applied. In these conditions, a Conditional Configuration Feature Grammar is defined as G^C = {G, A^{geo∪tpl}, D^{geo∪tpl}, C}, where: G is the Configuration Feature Grammar (defined in the preceding section); C are the three levels of semantic conditions associated with the production rules P; A^{geo∪tpl} is the set of geometric and topologic attributes; D^{geo∪tpl} is the set of geometric and topologic domains.
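Read concretely, a conditional rule is an ordinary syntactic rewrite guarded by a predicate over attribute values. The minimal Python sketch below illustrates that reading under our own assumptions; the attribute name "relative_position" is hypothetical and the rule shown is the example rewrite A → Aa, not a rule prescribed by the paper.

    # A conditional production rule: the syntactic rewrite is applied only if the
    # semantic condition over the attribute values holds.

    def condition_alpha_beta(attributes):
        # Semantic condition C_{alpha->beta}: the two faces must be adjacent.
        return attributes.get("relative_position") == "adjacent"

    def apply_conditional_rule(structure, rule, condition, attributes):
        """Rewrite the first occurrence of the rule's left-hand side, but only
        when the associated semantic condition is satisfied."""
        if not condition(attributes):
            return structure                  # rule blocked by its semantic condition
        lhs, rhs = rule                       # e.g. ("A", ["A", "a"])
        if lhs not in structure:
            return structure
        i = structure.index(lhs)
        return structure[:i] + rhs + structure[i + 1:]

    # Usage: the rule A -> A a fires because the faces are adjacent.
    new_structure = apply_conditional_rule(
        ["A"], ("A", ["A", "a"]), condition_alpha_beta,
        {"relative_position": "adjacent"})    # -> ["A", "a"]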
5 Application From the engineering design point of view, the feature-component-module-product relationships are adequate structural means for a general product representation. In
this section, we give some applications of the proposed grammars in the feature-component-module-product relationships.

5.1 Feature representation. Given the set C2X = {Blind Step, Simple Blind Slot, Blind Slot, Pocket} (figure 5), we have the following inferred Feature Grammar:

G^C_Features = { V^T_structure, V^T_joint∪tie, V^N_structure, V^N_joint∪tie, S, ●, Λ, P }

where:
- V^T_structure = {b};
- V^T_joint∪tie = {0, 1, 2, 3};
- V^N_structure = {A, B, Blind Step, Simple Blind Slot, Blind Slot, Pocket, Feature};
- V^N_joint∪tie = {O, I, II, III, ●, Λ};
- S, ●, Λ = Feature, ●, Λ;
Figure 5: a) Part with features; b) terminal structure
- P = {P0, P1, P2, P3, P4, P5, P6, P7}: the production rules rewrite the axiom Feature into the non-terminal features Pocket, Blind Step, Simple Blind Slot and Blind Slot, and expand these, through the intermediate structures A, B and C, into combinations of the terminal structure b; each rule carries its joint-level and tie-level matrices over the attaching elements {0, 1, 2, 3} and {O, I, II, III, ●, Λ}.
5.2 Product chair representation. In this application [6], the generic structure of a chair consisting of two modules, the lower module of the chair <LOWER_MOD> and the upper module of the chair <UPPER_MOD>, is considered.
Figure 6. First production representing the generation of the non-terminal structure <SEAT-BACK_SUPP> (the joint feature Profiled Connection1 is formed from BlindSlot1 of <SEAT> and Profile1 of <BACK_SUPP>).
The module is a non-terminal, intermediary and independent structure inside the product chair structure: intermediary or non-terminal because it does not constitute by itself a terminal structure of the product. The result of the connection between the two non-terminal modular structures is represented by the generation of the final structure of the product <CHAIR>. Next, we indicate the generation of the upper modular structure <UPPER_MOD>. This non-terminal structure is built starting from the set of terminal structures of the upper module, among them the <SEAT> structure and the <BACK_SUPP> structure. The generation of the lower modular structure <LOWER_MOD> follows the same mechanism of generation. The first production defines the connection between the terminal structure <SEAT> and the terminal structure of the back support <BACK_SUPP>. This connection is carried out through the following joint features (Figure 6): BlindSlot1, belonging to the structure <SEAT>, and Profile1, belonging to the structure <BACK_SUPP>. So, the formal production representing the generation of the structure <SEAT-BACK_SUPP> from the terminal structures <SEAT> and <BACK_SUPP> is the following, P1add:
[<SEAT-BACK_SUPP> → <SEAT> <BACK_SUPP>]                                  Structures Level
[ProfiledConnection1 → BlindSlot1  Profile1]                             Joint Features Level
[Profile2 → 0  Profile2 ; BlindHole5 → BlindHole5  0]                    Tie Features Level
The final structure of the chair <CHAIR> is generated from the connection between the lower module and the upper module, previously generated. The connection between the two modules, <LOWER_MOD> and <UPPER_MOD>, is made through the following joint features: CylindricalNeck2 for the structure <LOWER_MOD> and BlindHole5 for the structure <UPPER_MOD> (Figure 7).
Figure 7. Production for generating the final structure <CHAIR> (the joint feature Cylindrical Connection1 is formed from CylindricalNeck2 and BlindHole5).
The formal production representing the generation of the final structure <CHAIR>, starting from the connection between the <LOWER_MOD> structure and the <UPPER_MOD> structure, is the following, Pn_add:
[<CHAIR> → <LOWER_MOD> <UPPER_MOD>]                                      Structures Level
[CylindricalConnection1 → CylindricalNeck2  BlindHole5]                  Joint Features Level
[CylindricalConnection1 → CylindricalNeck2  BlindHole5]                  Tie Features Level
The generated structure <CHAIR> represents the final product structure. The generated structure has no more tie features, so the tie features level is the same as the joint features level. The conditions of spatial orientation required to connect the non-terminal structures <LOWER_MOD> and <UPPER_MOD> are indicated in the following matrix (5). As in the case of the formal production, the level of tie features is the same as the level of joint features.
Cn:
[<CHAIR> → <LOWER_MOD> <UPPER_MOD>]                                               Structures Level
[O1O2 ≡ O9O10 → O1O2 O9O10 ; (O1O2, O9O10): (0, 0, 0) → O1O2 O9O10]               Joint Features Level
[O1O2 ≡ O9O10 → O1O2 O9O10 ; (O1O2, O9O10): (0, 0, 0) → O1O2 O9O10]               Tie Features Level     (5)
6 Conclusions This paper proposes and develops a formal representation for supporting the computer-aided design approach for product configuration and representation. The characteristics of design for configuration are described by resorting to the structural relationships between configurations, which are very strong in design. The representation by the proposed grammar yields configurations composed of primitive elements or primitive configuration features, whose meaning is related to basic engineering structures. The interconnection of these primitive configuration features describes a structural configuration. The contributions of the proposed grammars are the following: (1) the configuration grammars are defined on the properties of the structures of configurable products; (2) the proposed grammars are based on feature-component-module-product relationships, considered as adequate structural means for a general product representation; (3) the configuration grammars work on multiple levels of abstraction of the significant structures. Once the proposed grammar is established, the design configuration process is in principle straightforward. Given some functional requirements, the problem is one of deciding in which class the input requirements represent a valid configuration. The subsequent CAD modelling is implemented in computers by using the functions of current CAD software.
7 References
[1] Agarwal, M. and Cagan, J., 1998, A blend of different tastes: the language of coffeemakers, Environment and Planning B: Planning and Design, Vol. 25(2), pp. 205-226.
[2] Brown, D.C., 1998, Defining configuring, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, Vol. 12, pp. 301-305, Cambridge University Press.
[3] Chase, S., 2002, A model for user interaction in grammar-based design systems, Automation in Construction, Vol. 11(2), pp. 161-172.
[4] Chomsky, N., 1965, Aspects of the Theory of Syntax, M.I.T. Press, Cambridge, Massachusetts.
[5] Deciu, E.R., Ostrosi, E., Ferney, M., Gheorghe, M., 2008, A configuration grammar design approach for product family modeling in conceptual design, Proceedings of the TMCE 2008, April 21-25, 2008, Izmir, Turkey, Edited by I. Horváth and Z. Rusák, pp. 551-564.
[6] Deciu, E.R., Ostrosi, E., Ferney, M., Gheorghe, M., 2008, Design synthesis based on configuration grammars approach for configurable products modelling, Proceedings of the CIRP Design Conference 2008: Design Synthesis, April 7-9, 2008, Twente, Holland.
[7] Du, X., Jiao, J. and Tseng, M., 2002, Graph grammar based product family modeling, Concurrent Engineering: Research and Applications, Vol. 10(2), pp. 113-128.
[8] Mullins, S., Rinderle, J.R., 1991, Grammatical approaches to design, Part 1: An introduction and commentary, Research in Engineering Design, Vol. 2(3), pp. 121-135.
[9] Ostrosi, E., Ferney, M., 2005, Feature modeling grammar representation approach, AIEDAM, Vol. 19(4), pp. 245-259.
[10] Sabin, D. and Weigel, R., 1998, Product Configuration Frameworks – A Survey, IEEE Intelligent Systems, Vol. 13(4), pp. 32-85.
[11] Schmidt, L.C. and Cagan, J., 1995, Recursive Annealing: A Computational Model for Machine Design, Research in Engineering Design, Vol. 7, pp. 102-125.
[12] Schmidt, L.C. and Cagan, J., 1998, Optimal Configuration Design: An Integrated Approach Using Grammars, Journal of Mechanical Design, Vol. 120(1), pp. 2-8.
[13] Schmidt, L.C., Cagan, J., 1996, Grammars for machine design, in Gero, J.S., Sudweeks, F. (eds.), Artificial Intelligence in Design '96, Kluwer Academic Publishers, pp. 325-344.
[14] Siddique, Z. and Rosen, D., 1999, Product platform design: a graph grammar approach, in Proceedings of DETC'99, ASME Design Engineering Technical Conferences.
[15] Snavely, G.L. and Papalambros, P.Y., 1993, Abstraction as a configuration design methodology, Advances in Design Automation, (New York: ASME), DE-(65)-1, pp. 297-305.
[16] Soininen, T., Tiihonen, J., Männistö, T. and Sulonen, R., 1998, Towards a General Ontology of Configuration, AIEDAM, Vol. 12(4), pp. 357-372.
[17] Starling, A.C. and Shea, K., 2003, A grammatical approach to computational generation of mechanical clock designs, Proceedings of the 14th International Conference on Engineering Design, ICED'03, Stockholm, Sweden.
[18] Stiny, G. and Mitchell, W.J., 1980, The grammar of paradise: on the generation of Mughul gardens, Environment and Planning B, 7, pp. 209-226.
Ontologies
A Semantic Based Approach for Automatic Patent Document Summarization Amy J.C. Trappey,a,b,* Charles V. Trappey,c and Chun-Yi Wub
a Department of Industrial Engineering and Management, National Taipei University of Technology, Taiwan; b Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Taiwan; c Department of Management Science, National Chiao Tung University, Taiwan

Abstract. With the rapid increase of patent documents, patent analysis becomes an important issue for creating innovative products and reducing time-to-market. A critical task for R&D groups is the analysis of existing patents and the synthesis of existing product knowledge. In order to increase knowledge visibility and sharing among R&D groups, a concurrent engineering approach is used to facilitate patent document summarization and sharing. The purpose of this research is to automate patent document summarization as a key step toward efficient patent analysis and knowledge management. The goal of this research is to develop a semantic-based methodology for capturing patents and to summarize this content for design team collaboration. The system automatically marks, annotates, and highlights the nodes of an ontology tree which correspond to words and provides a visual figure summary. Patents for power hand tools and chemical mechanical polishing tools were downloaded to evaluate the automatic summarization system. For these cases, the accuracy of classification reached 93% and 92% respectively, demonstrating 20% summarization improvements over previous methods.

Keywords. Semantic knowledge management, Key-phrases extraction, Document summarization, Text mining, Patent document
1 Introduction High technology firms maintain global competitiveness by introducing innovative products, reducing time-to-market, and maintaining high customer satisfaction. If a conventional product development approach is used, there is a tendency for research and development efforts to be split among groups with a resulting loss in information and knowledge sharing. A critical task for R&D groups is the analysis of patents and the synthesis of product knowledge. The management of key
* Please send all correspondence to Professor Amy Trappey, Department of Industrial Engineering and Management, National Taipei University of Technology, Taipei (10608), Taiwan, E-mail: [email protected]. Tel: +886-2-2771-2171 Ext. 4541; Fax: +886-2-2776-3996
technology knowledge provides a clear understanding of the competitors and better enables the formulation of strategies for legal control of new inventions and processes. In order to increase knowledge visibility and sharing among R&D groups, a concurrent engineering approach is used to facilitate patent document summarization and sharing. By summarizing patent information, information is more readily categorized and managed. Improved use and storage of patent document knowledge increases the success of product lifecycle management. As societies become knowledge oriented and trade increases across borders, the need to guarantee the protection of intellectual property becomes unavoidable. Governments grant inventors the right to make, use, and sell new technological processes and devices. The grant of rights to protect an invention, or a patent, provides the unique details of the invention and specifies exactly what is protected from copying or duplication by competitors. Patent documents are also the key documents used for settling legal disputes [4]. Technology oriented firms must continuously analyze patent databases to protect their own technologies from infringement and to avoid duplicating or copying the inventions of others. Emerging technologies require the filing of thousands of global patents, necessitating a need to better manage and summarize the knowledge. Automated patent analysis is one means to gain competitive advantage. Firms that can best process and map the emergence of technologies become the first to declare new boundaries for their innovations and profit from a time limited monopoly on the new invention. The purpose of this research is to automate patent document summarization as a key step toward efficient patent analysis and knowledge management. Most of the existing patent document management systems use keyword matching as a search method to find related patents. The searches generate irrelevant results when words may represent different meanings in different contexts or when the keyword indices do not preserve the notion of relationships between words. For improved information retrieval, it is necessary to accurately identify the relations between words and the relations between documents. Therefore, the goal of this research is to develop a semantic-based methodology for analyzing patents and to summarize this content for design team collaboration.
2 Related Research This paper integrates four research issues including semantic knowledge management, key-phrase extraction, technical document summarization, and text mining applications for patent analysis and summarization. Semantic knowledge management uses ontologies that define concepts and the relationships between the concepts. Ontologies serve as a bridge to connect documents and systems for automatic knowledge exchange. External systems can access information through the ontology since it provides a common vocabulary to support the sharing and reuse of knowledge. The ontology also serves as a sharable thesaurus describing entities, attributes, relationships and events for the unified knowledge base and user context. Briefly, the purpose of an ontology is to provide a domain of discourse
that is understandable by humans and computers. Moreover, mapping content onto a simplified and standard specification helps to process information and to store it consistently. The concept tree of the ontology serves as an access interface and is useful for calculating the similarity between concept definitions. Key words are often extracted from text using a dictionary approach, a linguistic approach, or a term frequency approach. The dictionary approach uses a pre-defined list to extract the key phrases from the document. This method is easy to implement but it is time-consuming to build and maintain the dictionary. The linguistic approach analyzes the keywords using natural language processing grammar programs, and applies rules to filter low-meaning phrases [1]. Some linguistic programs analyze the sentences according to a grammar but may not weigh key phrases in the title or the headings. Finally, Term Frequency (TF) is based on the hypothesis that high frequency words are most relevant, and this method is considered to be the simplest way to compute the scores to weigh the importance of sentence content [6]. By combining TF with Inverse Document Frequency (TF-IDF), the scoring method is further improved [7]. Additional references for methods to analyze document content are described in Krulwich et al. [5]. Text summaries are essential for minimizing information overloading in the workplace. Instead of having to review the entire text, a concise summary allows users to understand the complete content quickly and easily. The task of text summarization requires selecting from a full text document the important sentences and phrases that will serve as the basis for writing a summary. Common text summarization techniques include corpus-based approaches which account for different writing styles and technical terms [10], keyword-based approaches using keywords provided by authors, paragraph relationship map approaches which compute the similarity between paragraphs, and discourse-based approaches that link words having the same meanings in an article. A Legal Knowledge Management Platform was implemented by Trappey et al. [9] and Hsu et al. [2] to extract keywords automatically, categorize documents, and provide statistical analysis of document metadata. The system presents an electronic document classification and search method applying text mining and neural network technologies to classify semantic documents, especially patent documents.
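For reference, the TF-IDF weighting alluded to above is commonly written as follows (standard formulation, not reproduced from the cited works):

    w(t, d) = tf(t, d) × log( N / df(t) )

where tf(t, d) is the frequency of term t in document d, df(t) is the number of documents containing t, and N is the total number of documents in the collection; high weights single out terms that are frequent in a document but rare across the collection.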
3 The System Methodology The system platform proposed in this paper has two parts. One part extracts key words and phrases. The other part provides the document summary. The patent document summarization algorithm is shown as a procedural flow in Figure 1, where PHT represents Power Hand Tools and CMP represents Chemical Mechanical Polishing. In addition to the text summary, the system uses a comparison mechanism based on a specific ontology defined by domain experts. The system automatically marks, annotates, and highlights the nodes of the ontology
tree which correspond to words in the patent document, and provides a visual figure summary.
Figure 1. The patent document summarization procedure
3.1 Key-Phrase Extraction Before key-phrase extraction, it is necessary to import a domain-specific ontology into the system for extracting domain-specific key words and phrases. Moreover, document content is processed in four steps. First, the system segments words according to several segmentation symbols, such as blank spaces, punctuation marks (e.g., [ ! ] [ , ] [ . ]), and special symbols (e.g., [ @ ] [ # ] [ $ ]). Next, stop words which hold little information are removed from the word list. A lexical dictionary is imported to identify the morphology of words, and the system keeps
the verbs and nouns, which contain more information and best express the content of the document. The system restores the tense and plurality of words to the root word and achieves the goal of integrating term weight with words which have the same root word. After preliminary processing, the system outputs key-phrases automatically using two extraction algorithms. One extraction algorithm uses TF-IDF methods. The algorithm measures term information weight, and extracts top key words or phrases in accordance with their TF-IDF weight. The other extraction algorithm uses a specific ontology embedded in the system to calculate the frequency of mapping words. The procedure for semantic-based extraction uses the following steps. First, the patent documents are uploaded to delete punctuation, blank and special marks. Afterward, the system divides the sentences into words and eliminates the stop words. MontyLingua is used to analyze the remaining words and keep the verbs and nouns of phrases while eliminating prepositions and articles. After deriving the vocabulary, the system calculates the occurrences of verbs and nouns and ranks them in order. The ontology is built by the domain experts to describe the concepts in vocabularies and determine the linking relationships between vocabularies. The ontology contains descriptions of classes, properties and their instances. This set of knowledge terms includes the vocabulary, the semantic interconnections, and suitable rules of inference and logic for the particular domain. The inference includes the transitive, symmetric and reflexive concept relationships. Therefore, an ontology differs from TF-IDF methods in that it represents the relation of classes in patent documents. Through the relationships, the system derives the key phrases including the subclasses of key phrases. For example, the word implementation has the subclass phrase motion unit, which in turn has the subclass phrases power source and electricity. The upper classes of key phrases are also defined (e.g., implementation is the upper class of mechanism) as shown in Figure 2. Thus, the semantic based extraction reduces the number of related words and facilitates the construction of ontologies by refining and assembling components. Consequently, the system acquires key words and phrases from both the ontology based and TF-IDF based methods, and removes duplicates appearing in both lists. Finally, a list of key words and phrases is derived and used as the input and foundation for the summary.
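The two-track extraction just described can be pictured with a short Python sketch. It is a simplification under our own assumptions (a toy stop-word list, a flat dictionary standing in for the ontology, no lemmatization), not the implemented system.

    import re
    from collections import Counter

    STOP_WORDS = {"the", "a", "an", "of", "to", "is", "and"}        # toy list
    # Flat stand-in for the domain ontology: phrase -> parent class (illustrative)
    ONTOLOGY = {"motion unit": "implementation", "power source": "motion unit"}

    def preprocess(text):
        """Segment on blanks/punctuation and drop stop words (steps 1 and 2)."""
        words = re.findall(r"[a-z]+", text.lower())
        return [w for w in words if w not in STOP_WORDS]

    def tf_keywords(words, top_k=10):
        """Term-frequency ranking of the remaining words (the TF part of TF-IDF)."""
        return [w for w, _ in Counter(words).most_common(top_k)]

    def ontology_keywords(text):
        """Count occurrences of phrases that map onto ontology nodes."""
        text = text.lower()
        return {phrase: text.count(phrase) for phrase in ONTOLOGY if phrase in text}

    # The final key-phrase list is the union of both extractions with duplicates removed.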
Figure 2. Graphical tree for the power hand tool ontology (the root Power_Hand_Tool_Ontology has subclasses such as Description, Function, Document, Patent, Performance, Implementation, Control, Mechanism, Motion and Product, the latter covering Nailer, Screwdriver and Stapler).
3.2 Summary Representation Summary representation is completed in four steps as shown in Figure 3. First, similar concepts in different paragraphs are placed in the same cluster. Documents are made of several important concepts or main topics that exist in different paragraphs. For this reason, a clustering algorithm is utilized to gather similar concepts or main ideas. On the basis of the key words extracted, the frequencies of each key word appearing in every paragraph are tallied and the similarity between each paragraph pair is computed. The closer the values, the greater the similarity of the two paragraphs. After computing the similarity values, the paragraph similarity matrix is constructed and the paragraphs are clustered using the k-means algorithm. The Root Mean Square Standard Deviation and the R-Squared correlation coefficient are used to maximize the intra-class similarity while minimizing the inter-class similarity.
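A compact sketch of this clustering step is given below, under our own assumptions (cosine similarity over keyword counts, scikit-learn's k-means); the paper prescribes only the k-means algorithm and the RMSSTD/R-squared criteria, not these exact choices.

    import numpy as np
    from sklearn.cluster import KMeans

    def keyword_matrix(paragraphs, keywords):
        """Row p, column k: frequency of keyword k in paragraph p."""
        return np.array([[p.lower().count(k) for k in keywords] for p in paragraphs])

    def similarity_matrix(freq):
        """Pairwise cosine similarity between paragraph keyword profiles."""
        norms = np.linalg.norm(freq, axis=1, keepdims=True) + 1e-12
        unit = freq / norms
        return unit @ unit.T

    def cluster_paragraphs(freq, n_clusters=3):
        """Group paragraphs carrying similar concepts with k-means."""
        return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(freq)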
Figure 3. The summary representation procedure
3.3 Paragraph Importance and Scores Measuring the importance of each paragraph depends on its information content. The system utilizes the key words and phrases extracted to measure the importance of each paragraph, and then derives the summary as shown in Figure 3. The ontology provides a clear concept of the specifications used for information expression, integration, and system development. The ontology-based method defines basic terms, the relations of vocabularies for specific domains, and the rules for combining terms and relations. If an extracted key word or phrase corresponds to a node of the ontology, it is essential to select the correct position of the node for mapping the distinct importance weights to the leaves and branches of the ontology tree [3]. A node which falls in a lower position has greater weight and provides more information. When words or phrases in each paragraph match the nodes of the ontology tree, it is necessary to record not only term frequency but also their mapping weights. Moreover, if a key word or phrase is extracted using the TF-IDF method, the system records its frequency of occurrence in a paragraph. The proposed system considers how the length of a paragraph affects the possibility of key words or phrases appearing in the paragraph, and picks the highest scoring paragraphs of each cluster as the candidate summary. However, the score of a selected paragraph must be higher than the average score over all paragraphs.
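One way to realise the scoring rule described above (deeper ontology nodes weigh more, scores normalised by paragraph length, the best paragraph of each cluster kept only if it beats the overall average) is sketched below; the weighting scheme is our own illustrative choice.

    def paragraph_score(paragraph, ontology_depth, tfidf_keywords):
        """Depth-weighted ontology hits plus TF-IDF keyword hits, per word."""
        text = paragraph.lower()
        onto = sum(depth for term, depth in ontology_depth.items() if term in text)
        tfidf = sum(text.count(k) for k in tfidf_keywords)
        return (onto + tfidf) / max(len(text.split()), 1)

    def select_summary(paragraphs, labels, scores):
        """Keep the top paragraph of each cluster if it exceeds the global mean score."""
        mean_score = sum(scores) / len(scores)
        chosen = {}
        for p, label, s in zip(paragraphs, labels, scores):
            if s > mean_score and s > chosen.get(label, ("", -1.0))[1]:
                chosen[label] = (p, s)
        return [p for p, _ in chosen.values()]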
Therefore, the paragraphs with the highest scores in each cluster are selected to form the content of the patent summary. 3.4 Summary Template and Tree A summary template is used to combine the candidate summaries. Furthermore, other important patent information such as patent figures and claims is also included, and the text mining techniques proposed by Trappey et al. [8] are implemented. Finally, the system automatically marks, annotates and highlights the nodes of the tree which correspond to words in the patent document and provides a summary tree graph. The summary tree provides a quick view of the key words and phrases appearing in the full patent text. Moreover, the structure of the summary tree represents the relations between words and phrases.
4 Evaluation In order to test the system, two hundred patent documents were retrieved from the World Intellectual Property Organization website and the United States Patent and Trademark Office (USPTO) website. The patents for Power Hand Tool (PHT) and Chemical Mechanical Polishing (CMP) were downloaded to evaluate the automatic summarization system. The classification for PHT includes hand-held nailing or stapling tools, percussive tools, and combination or multi-purpose tools. The main classification of CMP includes polishing compositions, machines, devices, processes for grinding or polishing, and semiconductor devices. After all the patents were downloaded, the proposed system compared its results with the results of the summarization system developed by Trappey et al. [8]. The evaluation uses statistics to represent the summary generation and compression ratio, the ontology-based keyword extraction retention ratio, and the classification accuracy. The compression ratio shows the proportion of the full text that is removed during the summary process (or text shrinkage). In the PHT domain, the average compression ratio is about 21% compared with 19% for Trappey et al.'s summarization results [8]. The retention ratio test is used to evaluate how much information remains in the summary compared to the original text. In this test, ontology-based keyword extraction is used to evaluate how many keywords remain in the summary compared to the keywords extracted from the original text. The average retention ratio is about 90% for the PHT case and surpasses the 75% achieved with earlier research. The proposed system also uses an ontology-based neural network electronic document categorization system to test the accuracy of classification. Compared with the results of the summarization system by Trappey et al. [8], the average accuracy of classification is also improved, as shown in Table 1.
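A plausible formalisation of the first two measures, consistent with the description above but written in our own notation, is:

    compression ratio = 1 - |summary| / |full text|
    retention ratio   = |K(summary) ∩ K(full text)| / |K(full text)|

where |·| denotes text length in words and K(·) denotes the set of ontology-based keywords extracted from a text; classification accuracy is the share of patents assigned to the correct class.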
Table 1. The results of evaluation

Evaluation                 Domain   Paper     Trappey et al.   Improvement
The Compression Ratio      PHT      20.50%    18.84%           8.81%
                           CMP      19.61%     9.95%           97.09%
The Retention Ratio        PHT      89.68%    75.12%           19.38%
                           CMP      78.78%    67.42%           16.85%
Classification Accuracy    PHT      93.00%    77.00%           20.78%
                           CMP      92.00%    76.00%           21.05%
5 Conclusion A methodology for automated ontology-based patent document summarization is developed to provide a summarization system which extracts key words and phrases using a concept hierarchy and semantic relationships. The system automatically abstracts a summary for any given patent based on the domain concepts and semantic relationships extracted from the document. Moreover, the system presents a text summary, relevant important patent information, and a summary tree to enhance representation. The proposed algorithm adopts ontology-based TF-IDF methodologies to retrieve the domain key phrases. These key phrases are used to identify significant paragraphs and form the summary. This methodology is not domain specific and can be used to analyze various domains as long as the domain ontology is defined in advance. In this research, the domains of power hand tool (PHT) and chemical mechanical polishing (CMP) are used to evaluate the summarization system. Using an automatic patent summarization system, enterprises can efficiently analyze technology trends based on the available patents and IP documents and, therefore, refine R&D strategies for competitive advantage.
6 References
[1] Chen, K.H., 1996, Natural Language Processing of Information Retrieval (in Chinese), Bulletin of the Library Association of China, No. 57, pp. 141-153.
[2] Hsu, F.C., Trappey, A.J.C., Hou, J.L., Trappey, C.V., and Liu, S.J., 2004, "Develop a Multi-Channel Legal Knowledge Service Center with Knowledge Mining Capability," International Journal of Electronic Business Management, Vol. 2, No. 2, pp. 92-99.
[3] Hsu, S.H. (Advisor: Prof. Wu, H.Y.), 2003, Ontology-Based Semantic Annotation Authoring and Retrieval (in Chinese), M.S. Thesis, Department of Computer Science, National Dong Hwa University, Hualien, Taiwan.
[4] Kim, N.H., Jung, S.Y., Kang, C.S., and Lee, Z.H., 1999, "Patent Information Retrieval System," Journal of Korea Information Processing, Vol. 6, No. 3, pp. 80-85.
[5] Krulwich, B., 1995, "Learning Document Category Descriptions through the Extraction of Semantically Significant Phrases," Proceedings of the 14th IJCAI Workshop on Data Engineering for Inductive Learning, pp. 1-10.
[6] Luhn, H.P., 1957, "A Statistical Approach to Mechanized Encoding and Searching of Literary Information," IBM Journal of Research and Development, Vol. 1, No. 4, pp. 309-317.
[7] Salton, G., and Buckley, C., 1988, "Term-Weighting Approaches in Automatic Text Retrieval," Journal of Information Processing and Management, Vol. 24, No. 5, pp. 513-523.
[8] Trappey, A.J.C., Trappey, C.V., and Kao, B.H.S., 2006, "Automated Patent Document Summarization," Proceedings of the 10th International Conference on Computer Supported Cooperative Work in Design, May 3-5, Nanjing, China.
[9] Trappey, A.J.C., Hsu, F.C., Hou, A.J.L., Trappey, C.V., and Liu, S.J., 2004, "Designing a Multi-channel Legal Knowledge Service Center Using Data Analysis and Contact Center Technology," Proceedings of the 8th World Multi-Conference on Systemics, Cybernetics and Informatics (SCI), Orlando, Florida, July 18-21, Vol. XVII, pp. 132-136.
[10] Wu, P.F. (Advisor: Prof. Wei, C.P.), 2003, Use of Text Summarization for Supporting Event Detection, M.S. Thesis, Department of Information Management, National Sun Yat-sen University, Kaohsiung, Taiwan.
Develop a Formal Ontology Engineering Methodology for Technical Knowledge Definition in R&D Knowledge Management Ching-Jen Huang,a,* Amy J.C. Trappeyb,c and Chun-Yi Wuc
a Department of Industrial Engineering and Management, National Chin-Yi University of Technology, Taiwan; b Department of Industrial Engineering and Management, National Taipei University of Technology, Taiwan; c Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Taiwan

Abstract. In recent years, many technical disciplines systematically develop standardized ontologies that domain experts agree upon and apply to share and annotate information in their fields. The formal ontological representation facilitates R&D knowledge sharing and re-use by both knowledge workers and application systems. In this research, a system engineering approach for creating and managing domain ontology (called ontology engineering, OE) is proposed. The proposed methodology describes an integrated approach in five steps. The approach, combined with an XML representation schema, is adopted to develop a computer-aided ontology engineering (CAOE) tool for effective engineering knowledge construction. The application of the methodology and the OE tool for R&D knowledge management is then investigated through a case study. The case study depicts the ontology engineering for R&D of a monetary bill inspection machine. The machine design knowledge ontology is applied to the related patent search, analysis and synthesis to support knowledge management during the machine design and development.

Keywords. Ontology, ontological methodology, knowledge management, bill inspection
1 Introduction Ontology is a model of reality of the world and the concepts in the ontology must reflect this reality [5]. One approach to capture real world knowledge is to use an ontology containing and representing facts and relationships about the domain of interest [7]. In recent years, several methodologies and IT tools for building ontology have been reported [8]. Ontology is becoming increasingly important in *
Please send all correspondence to Dr. Ching-Jen Huang, Department of Industrial Engineering and Management, National Chin-Yi University of Technology, Taichung (41101), Taiwan, E-mail: [email protected]. Tel:+886-4-23924505 ext. 7650; Fax: 886-4-24363039
engineering domain since they enable knowledge sharing in a formal and unambiguous manner [10]. Knowledge, in a rapidly growing field such as some engineering disciplines, is usually evolving and, therefore, an ontology development process is required to keep ontological knowledge up-to-date. In this research, a methodology for creating and managing domain ontology is presented and implemented for technical knowledge definition and refinement, mainly in R&D knowledge management. The proposed methodology of ontology engineering (OE) is based on the integration of the approaches proposed by Noy [5] and Gruninger [2]. This research develops five systematic steps to transfer operational knowledge into domain ontology. We implement an XML-based CAOE tool based on Microsoft Visio. Using the CAOE, the domain ontology for technical knowledge definition is built, represented and refined with a readable graphical interface. Afterwards, the R&D staff can save it in a machine-interpretable XML format for further R&D knowledge sharing and management.
2 Background and Related Research Review For past decades, many published papers on ontology methodology have put forward a wide variety of models related to ontology creation and management [9]. A thorough conceptual overview of the history, classifications and functions of ontological methodologies is provided in [3] and [6], including a discussion on various ontology types, functions and a proposed framework. Andreasen [1] focuses on how to specify taxonomically organized ontology enriched with compound categories formed recursively. Sugumaran [7] proposes a heuristics-based ontology creation methodology for ontology creation and management. However, as with most ontological methodologies, Sugumaran's approach is hard to apply to an engineering domain because of the lack of detailed constructive steps [10]. Gruninger [2] develops the TOVE (Toronto Virtual Enterprise) ontology methodology as a guideline for ontology building. TOVE includes six steps for ontology creation, but without a CAOE tool for effective engineering knowledge construction. Noy [5] defines a simple knowledge-engineering methodology with seven steps for ontology building, applying Protégé 2000 as an implementing tool. From the construction point of view, although Gruninger and Noy have proposed means of representing real world knowledge for the development of machine learning and knowledge sharing, they both lack information on how to accomplish these guidelines, in terms of the specific actions and decisions that must be performed in each stage.
3 The Formal Methodology for Ontology Engineering 3.1 The Framework of Ontology Engineering According to the ontological methodologies proposed by Gruninger [2] and Noy [5], this research develops five systematic steps for ontology building. Figure 1
presents the main standard operation procedures (SOPs) of our methodology for systematized ontology building, maintenance and evaluation. The SOPs are executed through the entire lifecycle of the ontology design. The concepts and applications of these SOPs are depicted in detail in the following section. The framework follows the system engineering principles by first identifying the problem domain of interest. Afterward, find all available references best describing the domain knowledge. Then, abstract the critical phrases from all references to define the ontological concepts and relationships.
Figure 1. The SOPs of ontological engineering
3.2 The SOP Steps of Constructing Ontology Knowledge This section focuses on the description of the SOPs of the proposed ontological engineering. An ontology-based patent inquiry case will be used to clarify the SOP steps at work.

SOP1: Define Ontological Problems and the Range of Domain The first step of the proposed methodology is to identify the goal of the ontology and the range of the ontological domain. The 5W1H approach (What, Who, When, Where, Why, How) is applied to define domain questions (DQs) and understand the scope of the ontology. Examples of DQs in the case study are "What functions should be included in the hardware and software of a bill inspection machine (DQ1)?" and "What are the steps in analyzing the behaviors of the bill inspection machine (DQ2)?" Afterwards, researchers can ask further detailed questions, e.g., "How does the machine alert users when the mechanism breaks
down?" (corresponding to DQ1) and "How does the transmission function carry bills for testing?" (corresponding to DQ2). The ontology is built step by step based on the answers to the DQs from domain experts or knowledge engineers.

SOP2: Establish Formal Competency Questions After the goal and range of the domain have been defined, the next step is to establish informal and formal competency questions (ICQ/FCQ) based on the DQs developed in SOP1. ICQs/FCQs are used to define the main keywords and the relationships between keywords in the ontological domain. Generally speaking, the user first delivers ICQs to domain experts and, then, the domain experts translate the ICQs into FCQs to form the final domain knowledge and keywords accurately.

SOP3: Quote Important Existing Reference Information Before constructing the formal domain ontology tree, the knowledge engineers search for existing references, knowledge, information, and documents related to the problem domain via the web or the literature, such as the IEEE Standard Upper Ontology or www.geneontology.org. If such resources are found, they are used to construct the components and structure of the domain expertise of interest in order to form the concepts of the entire engineering domain. Afterward, knowledge engineers can combine the concepts with the outcomes of SOP1 and SOP2 to derive the characteristics of the domain for SOP4 and SOP5.

SOP4: Find Appropriate Phrases in the Ontological Domain Upon the completion of steps SOP1 to SOP3, the related phrases of the engineering domain are sought in order to build the domain ontology. There are five channels to search for and define domain phrases: online (web) search engines, patent databases, related books, journal papers, and product specifications. After discovering and collecting all domain phrases, domain experts choose key phrases and map them onto the nodes and relationships of the existing ontology. In the meantime, the domain expert can search for key phrases from the DQs and scope of SOP1 and enlarge the ontology scope until a satisfactory domain scope is achieved or all the key phrases are located in the appropriate positions. The main outcome of SOP4 is the domain phrase list. The domain phrase list is very useful for defining the final domain ontology tree in SOP5.

SOP5: Classify Phrases into Groups In order to classify phrases into groups, this research proposes three methods of classification: top-down classification, bottom-up classification, and hybrid classification. In top-down classification, the classification starts from the upper phrases and is then extended to the lower phrases to establish the entire ontology tree with nodes and relationship links. In bottom-up classification, the knowledge engineer defines the relationships of phrases from the lower phrases to the upper ones by clustering and stratifying the attributes of phrases. In the hybrid method, the complexity of the classifying process is reduced by starting from known phrases and dynamically choosing the top-down or bottom-up method to build the additional structure of the ontology.
By investigating the five SOPs repeatedly, domain knowledge is generalized into a domain ontology tree. Then, a computer-aided 5-step OE tool (based on Microsoft Visio graphical interfaces) is implemented to present and transfer the ontology into XML format for R&D knowledge management and application. 3.3 Computer-Aided Ontology Engineering (CAOE) Tool In order to transfer the graph-based domain ontology tree derived from the SOPs into a text-based XML format for knowledge presentation, the Visio-based CAOE tool is applied as an intermediary tool for translation in this research. Moreover, this research uses the monetary bill inspection machine as a real world case for patent analysis and OE demonstration. The transfer procedure is described as follows. First, use CAOE to draw the domain ontology tree derived from SOP1 to SOP5. Second, define the attributes of each node, the vocabulary tables and the relationships between nodes in the ontology tree. Finally, after the whole ontology structure of the inspection machine is built, CAOE can save the ontology tree as an XML format file, as shown in Figure 2. Afterwards, the XML file can be shared and delivered for team collaboration in R&D concurrent engineering.
Figure 2. An XML format example of the bill inspection ontology (the ontology tree covers classes such as Machine, Transmission, Main control, Paper inspection, Power animation, Magnetic, Image, Cash count, Cash receive and Cash dispenser, with their components linked by relations such as "has the category of", "is composed of" and "has the component of").
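The save-as-XML step can be sketched as follows; the element and attribute names are illustrative only, not the schema produced by the Visio-based CAOE tool, and the node names are a small hypothetical extract of the bill inspection ontology.

    import xml.etree.ElementTree as ET

    def node_to_xml(name, children, relations):
        """Serialise one ontology node and, recursively, its children.
        `children` maps a node name to its child names; `relations` maps a
        (parent, child) pair to the relation label, e.g. 'is composed of'."""
        element = ET.Element("concept", attrib={"name": name})
        for child in children.get(name, []):
            sub = node_to_xml(child, children, relations)
            sub.set("relation", relations.get((name, child), "has subclass"))
            element.append(sub)
        return element

    children = {"Machine": ["Transmission", "Main control"],
                "Transmission": ["Motor", "Drive belt"]}
    relations = {("Machine", "Transmission"): "is composed of",
                 ("Transmission", "Motor"): "has the component of"}
    tree = ET.ElementTree(node_to_xml("Machine", children, relations))
    tree.write("bill_inspection_ontology.xml", encoding="utf-8", xml_declaration=True)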
4 An Ontology-based Patent Search for Bill Inspection Machine In this section, an ontology-based patent search for the bill inspection machine is illustrated to demonstrate the effectiveness of ontological engineering. 4.1 Application of the OE Methodology Figure 3 shows the process for finding key patents via the domain ontology. The difference between the web-based search and the ontology-based search is marked by a red dotted rectangle denoting the brief ontological engineering process described in Section 3.
Figure 3. Process for finding key patents via domain ontology
For a web-based search, the user first inputs the important domain questions for a basic patent search. The domain questions are analyzed with the MontyLingua tools (http://web.media.mit.edu/~hugo/montylingua/) and term frequency–inverse document frequency (TF-IDF) methods to obtain the key vocabularies and phrases. The user can then proceed with an advanced patent search through the United States Patent and Trademark Office (USPTO) site or other search engines to find the related key patents, phrase by phrase. The retrieved patents tend to be numerous and often only loosely related, so the user must spend considerable effort filtering them to find the useful ones. For an ontology-based search, after the key vocabularies and phrases are resolved by the MontyLingua tools and TF-IDF methods, the user maps them onto ontological attributes and relationships to find key phrases and then proceeds with the advanced patent search. The number of retrieved patents is reduced substantially while being more closely related to the search objective.
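The keyword-extraction step described above can be illustrated with a minimal TF-IDF sketch. It is not the authors' implementation (the paper relies on MontyLingua for phrase extraction); the stop-word list, corpus and query below are hypothetical and purely illustrative.

```python
import math
from collections import Counter

STOP = {"how", "to", "use", "and", "in", "the", "of", "with", "for", "a"}

def tf_idf_keywords(query, corpus, top_k=3):
    """Rank terms of a domain question by TF-IDF against a small reference corpus.

    query  -- the domain question entered by the user (string)
    corpus -- list of reference documents (strings), e.g. patent abstracts
    """
    docs = [doc.lower().split() for doc in corpus]
    terms = [t for t in query.lower().split() if t not in STOP]
    tf = Counter(terms)                              # term frequency within the question
    n_docs = len(docs)
    scores = {}
    for term, freq in tf.items():
        df = sum(1 for d in docs if term in d)       # document frequency
        idf = math.log((1 + n_docs) / (1 + df)) + 1.0
        scores[term] = (freq / len(terms)) * idf
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Hypothetical usage, loosely mirroring Example 1 in Section 4.2
corpus = ["bill inspection with infrared ray sensor",
          "counting machine with rubber transmission coil",
          "ultraviolet ray detector for bank note validation"]
print(tf_idf_keywords("how to use ultraviolet ray and infrared ray in bill inspection", corpus))
```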
4.2 Ontology Implementation and Evaluation
This section illustrates two search examples to explain the use of ontological engineering in the patent search for the bill inspection machine.
Example 1. A user enters the question: “How to use the technique of ultraviolet ray and infrared ray in bill inspection?” Analysing this sentence with the MontyLingua tool yields two key phrases: ultraviolet ray and infrared ray. The user can find the relationship between these two phrases in the Visio vocabulary tables and find other adjacent related words through the ontology tree. Using these two phrases to search patents through USPTO returns 88 patents, including many irrelevant ones. However, if the related words obtained from the ontology tree are used for the advanced patent search, just 12 patents are found within three search iterations, and these patents accord much better with the user’s needs.
Example 2. To further investigate the effectiveness of the ontological methodology, a user keys in the question: “Which functions would affect the speed of cash dispensing in bill inspection?” As in Example 1, two key phrases are obtained: cash dispenser and speed, but no direct relationship between these two phrases is found in the Visio vocabulary tables. To make the search effective, the related crucial phrases inspection, money and function obtained from the ontology tree are added to the list of search terms. The number of patents found is thereby reduced from 21 to 7 key patents.
As the practical results in Examples 1 and 2 demonstrate, applying the ontological engineering methodology to patent search dramatically decreases the time an R&D staff member or knowledge worker spends filtering out irrelevant patents. Table 1 summarises the superiority of the ontology-based method over the web-based (non-ontology) one.

Table 1. Superiority of the ontology-based method
Indexes                  Without ontology (web-based)    Ontology-based
Patent number found      More, but mostly useless        Fewer, but directly relevant
Time                     Much                            Less
Cost                     Much                            Less
First patent search*     88                              25
Final patents chosen*    88                              12
Information relation     Low                             High
Expected content         Discrete                        Detailed
* Numbers of related patents found in Example 1
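As a rough illustration of the expansion step used in Examples 1 and 2, the sketch below shows how key phrases could be broadened with terms adjacent in an ontology tree before querying a patent database. The mini-ontology, its neighbour terms and the query_patents stub are hypothetical stand-ins, not the CAOE data structures.

```python
# Minimal sketch, assuming the ontology is available as an adjacency mapping
ONTOLOGY = {
    "ultraviolet ray": ["inductive component", "detector"],
    "infrared ray":    ["inductive component", "receiver"],
    "cash dispenser":  ["inspection", "money", "function"],
    "speed":           ["motor", "transmission"],
}

def expand_terms(key_phrases, ontology, max_extra=3):
    """Return the key phrases plus a few adjacent ontology terms."""
    expanded = list(key_phrases)
    for phrase in key_phrases:
        for neighbour in ontology.get(phrase, [])[:max_extra]:
            if neighbour not in expanded:
                expanded.append(neighbour)
    return expanded

def query_patents(terms):
    """Placeholder for an advanced patent search (e.g. against USPTO)."""
    print("searching patents for:", " AND ".join(terms))

query_patents(expand_terms(["cash dispenser", "speed"], ONTOLOGY))
```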
5 Conclusion
Ontologies are becoming increasingly important in the concurrent engineering domain since they enable knowledge sharing in a formal and unambiguous way. In this research, a methodology for building a formally defined ontology was proposed. The methodology focuses on identifying and defining the competency problems, technical terms, properties, relationships and constraints needed to model an application domain, and finally constructs a well-defined ontology tree and translates it into a useful XML format of the domain ontology for further application. As a result, the application in the patent analysis domain enabled us to refine our ontology creation process iteratively and integrate it into a methodology for the whole process of ontology development. This methodology is appropriate for application domains where the domain knowledge must be regularly updated.
6 Acknowledgement
This research is partially supported by research grants funded by the National Science Council and the Industrial Technology Research Institute in Taiwan.
7 References [1] Andreasen T, Nilsson JF. Grammatical specification of domain ontologies. Data & Knowledge Engineering 2004; 48:221-230.
[2] Gruninger M, Fox MS. Methodology for the design and evaluation of ontologies. Proceedings of the Workshop on Basic Ontological Issues in Knowledge Sharing, IJCAI-95, Montreal, Canada 1995. http://www.eil.utoronto.ca/enterprise-modelling/entmethod/index.html.
[3] Guarino N. Formal ontology, conceptual analysis and knowledge representation. International Journal of Human-Computer Studies 1995; 43:625-640.
[4] Lin CYI, Ho CS. Generating domain-specific methodical knowledge for requirement analysis based on methodology ontology. Information Sciences 1999; 114:127-164.
[5] Noy NF, McGuinness DL. Ontology development 101: A guide to creating your first ontology. Available at: Access on: Nov. 5th 2007.
[6] Poli R. Ontological methodology. International Journal of Human-Computer Studies 2002; 56:639-664.
[7] Sugumaran V, Storey VC. Ontologies for conceptual modeling: their creation, use, and management. Data & Knowledge Engineering 2002; 42:251-271.
[8] Taboada M, Martinez D, Mira J. Experiences in rising knowledge sources using Protégé and PROMPT. International Journal of Human-Computer Studies 2005; 62:597-618.
[9] Turk Z. Construction informatics: Definition and ontology. Advanced Engineering Informatics 2006; 20:187-199.
[10] Valarakos AG, Karkaletsis A, Alexopoulou D, Papadimitriou E, Spyropoulos CD, Vouros G. Building an allergens ontology and maintaining it using machine learning technique. Computers in Biology and Medicine 2006; 36:1155-1184.
Ontologia PLM Project: Development and Preliminary Results Carla Cristina Amodioa,c, Carlos Cziulika,c, Cássia Ugayaa,c, Ederson Fernandesa,c, Fábio Siqueiraa,c, Henrique Rozenfeldb,c, José Ricardo Tobiasa,c, Kássio Santosa,c, Marcio Lazzaria,c, Milton Borsatoa,c,1, Paulo Bernaskia,c, Rodrigo Julianoa,c, Simone Braníciob,c. a
Graduate School of Mechanical and Materials Engineering (PPGEM), Federal University of Technology in Paraná, Curitiba-PR, Brazil. b Nucleus for Advanced Manufacturing (NUMA), University of São Paulo, São Paulo-SP, São Carlos/SP, Brazil. c Institute Factory of the Millennium (IFM), São Carlos-SP, Brazil.
Abstract. A great difficulty regarding the management of information systems is the fact that much of the knowledge available inside organizations can only be found in an unstructured form. As a consequence, one of the major problems faced by the industrial segment, including capital goods industries, is the low degree of interoperability (the capacity of a system to share and interchange information and applications). This problem is even more serious when considering the whole product lifecycle, where several pieces of software are involved in supporting PLM (Product Lifecycle Management). One of the most promising approaches to address these issues is the structuring of formal ontologies. This paper presents the preliminary development of an ontology (called the Ontologia PLM Project) that aims to ensure transparent interoperability between the systems used for the interchange of information throughout the whole product lifecycle. The project is focused on the capital goods industry and encompasses eight domains of application (DAs). A group of specialists is responsible for analysing each DA. At the moment, the DA groups have defined the sets of related classes and are involved in inserting properties and restrictions. Finally, a small set of axioms has been implemented, allowing the preliminary behaviour of the proposed ontology to be verified. Keywords. ontology, PLM system, interoperability.
1 Introduction
One of the issues that most affects industry is the low level of interoperability amongst existing information systems used during a product's lifecycle. This includes the capital goods industry, which is the main focus of study by the Institute Factory of the Millennium (IFM). In this scenario, one of the most
Graduate School of Mechanical and Materials Engineering (PPGEM), Federal University of Technology in Paraná, Av. Sete de Setembro, 3165, Curitiba-PR, Brazil, 80230-901, Tel: +55 (41) 3310-4648; Fax: +55 (41) 3310-4753; Email: [email protected]; http://www.ppgem.ct.utfpr.edu.br
promising and researched solutions is the application of formal ontologies. These are information structures that guarantee semantic interoperability between different information systems. Once a specific ontology is conceived to support Product Lifecycle Management (PLM), the different software applications used by the various players involved can benefit in such a way that they will be able to interact seamlessly. The focus of this research is to develop an ontology (thus called the Ontologia PLM Project) that can ensure interoperability, so that information can be exchanged amongst distinct systems during a product's lifecycle, therefore avoiding information duplication, misunderstandings and incompatibility issues. This paper describes the preliminary developments of the intended ontology, discussing relevant topics in its structure definition and corresponding implementation. The paper is organized in five sections. Section 2 presents the background for conducting this research. Section 3 describes the main issues related to the Ontologia PLM Project. The preliminary results from the ontology implementation can be found in Section 4. Finally, concluding remarks are provided in Section 5.
2 Practical and Theoretical Background
2.1 The IFM Programme
The Institute Factory of the Millennium (IFM) is a Brazilian organization supported by the Ministry of Science and Technology that involves 800 researchers, allocated in 39 research groups, spread amongst 32 universities and research institutes [1]. The organization's profile is focused on researching manufacturing issues that can be mapped onto the needs of Brazilian industry. In order to manage the activities developed by the involved universities and institutes, the approach adopted by IFM is to define work packages (WPs) and subprojects (SPs).
2.2 WP04-SP02: The Ontologia PLM Project
The main aim of the WP04-SP02 subproject is to disclose the characteristics related to the lifecycle of the capital goods production chain. Additionally, it seeks to examine the issues of lifecycle management that can be addressed by information systems, as well as to develop the referred systems, based on the proposed ontology.
2.3 Capital Goods Industry
The capital goods industry encompasses companies whose main activity is to produce machinery and equipment that can be employed by other companies in their production processes. Heterogeneity is a strong feature of this segment, owing not only to the great variability of products but also to the
diversity in the competitive conditions in this market [2]. The products delivered by this sector can be produced by several approaches (e.g. in batches, at large scale or one-off). Two companies in this sector were visited, with interviews conducted with those responsible for product development. The information gathered allowed us to infer that there is an opportunity to examine the interoperability issues. Additionally, the sharing of data in this segment is in huge demand, since producing a specific type of equipment involves various companies [3].
2.4 PLM Issues
Since WP04-SP02 deals with lifecycle concepts, it is of fundamental importance to understand the relationship between these topics and the industrial segment focused on in this work. PLM is an approach to managing the processes associated with the whole lifecycle of a product, including design, service and final market withdrawal. The idea is to design new technologies that allow efficient access to a centralized product knowledge database to enable collaborative development activities and holistic reasoning by independent professionals and work teams. The complexity and variety characteristic of the modern product development process, in addition to increasing levels of customer sophistication, demand new forms of collaboration among multidisciplinary teams and require highly integrated departments in order to devise strategic, functional, and innovation policies [4]. However, the whole process of creating knowledge, improvements and innovations in a collaborative environment depends on the existence of a structure that enhances peer communication. This field presents serious challenges to accurate representation, since a single piece of information is often related to subjects that are shared by several organizational units (e.g. Marketing, Production, Logistics, amongst others). The case study developed by [5] argues, for example, that data from the projects area are changed along product development as each department extracts or saves the information that is relevant to represent its own characteristics. Another example is given by [6], who recall that in the context of flexible supply networks there is a need for a unique language to integrate data from different suppliers and clients. Complex product modelling is yet another area that requires vocabulary standardization in order to better handle the data associated with a particular product or component, since the use of collaborative networks with suppliers and clients has been intensive [7].
2.5 Interoperability Demands
When dealing with highly knowledge-intensive environments, information structures become critical to the capture, representation, retrieval and reuse of knowledge associated with products [8]. The different terms, expressions and languages employed for the identification of subjects and components (as well as the different programming languages and environments) usually lead to inconsistencies, errors and losses of data. This can mean a waste of time and scarce resources. From this scenario, one of the most promising approaches to address
these issues is the development of formal ontologies. They provide the mechanisms for structuring information and representing knowledge from a set of vocabulary and its definitions, which guarantee the semantic interoperability between different information systems. In the vision of [9], ontologies are the core of any information representation system, and in the absence of one there would be no vocabulary that truly represents the knowledge of a certain reality. The generation of a common domain vocabulary may result in more transparent and objective communication among users and can facilitate the search for knowledge in a given area. In addition, it also helps the sharing of knowledge between information systems. That occurs as one system shares the representation language with others that have similar demands in that domain, eliminating the need to replicate the process of knowledge analysis already performed [9]. Furthermore, as the information is described, codified, and understood by all those involved, the speed and efficiency of the sharing process are enhanced in the area.
2.6 Ontology: Context and Relations
An ontology is an explicit specification of a conceptualization [10]. For information systems, anything that exists (e.g. a physical item or knowledge) can be represented. The knowledge of a domain must be represented in a declarative formalism and have a set of axioms that constrain the possible interpretations of the defined terms. This set of objects, and the describable relationships among them, is reflected in the representational vocabulary with which a knowledge-based program represents knowledge [11]. Ontologies do not have to be limited to conservative definitions and can express the tacit knowledge of the agents involved. The advantages are: i) a vocabulary for knowledge representation; ii) the sharing of knowledge; and iii) an accurate description of the knowledge. One of the most promising approaches for developing ontologies is the one provided by the model proposed by [11].
3 The Ontologia PLM Project 3.1 Development Approach In order to accomplish the aims of the Ontologia PLM Project the following phases have been placed together: (1) Definition of customer needs; (2) Search for existing/similar projects and relevant information; (3) Establishment of PLM application domains (knowledge areas); (4) Capture of motivating scenarios (to build relevant vocabulary); (5) Generation of competence questions (to establish a fundamental taxonomy); (6) Specification of formal terminology (to establish an extended taxonomy, properties and disserted definitions); (7) Generation of formal competence questions (to build assertions for defined terms); (8) Specification of
axioms (to establish necessary and sufficient assertions to completely define terms); (9) Verification of axioms (against reasoning algorithms, e.g. RacerPro); and (10) Ontology proposal. From the literature review, the approach suggested by [11] was adopted to organize and structure the investigation (see Figure 1). This approach is recognised as being one of the most consistent for developing ontologies and provides the formalism demanded for the envisaged system.
Figure 1. Structure for deploying the activities for the Ontologia PLM Project. (Adapted from [11])
3.2 Domains of Application
Once the working framework had been established, the next step was to define the domains of application (DAs) that could characterize the whole product lifecycle management scope. For that, the reference model proposed by [12], as can be seen in Figure 2, has provided the guidance for covering the whole product lifecycle.
Figure 2. Structure for defining the Domains of Application based on the reference model. (Adapted from [12])
The reference model has been thoroughly examined and mapped into the capital goods industry so that the relevant DAs could be established. The members involved in WP04-SP02 were assigned to identify and define the scope of each DA considered suitable for the project. Table 1 contains the description of each of the seven DAs chosen and the respective scope definitions.

Table 1. The main domains of application (DAs) defined for the project
Item    Title                               Scope
DA1     Quality                             Denotes excellence in goods and services, especially the degree to which they conform to requirements and satisfy customers.
DA2     Environment                         The culture that an individual lives in, and the people and institutions with whom they interact.
DA3     After Development Issues            Describes the formal procedures used in such an endeavour, such as the creation of documents, diagrams, or meetings to discuss the important issues to be addressed, the objectives to be met and their strategy.
DA4.1   Marketing                           Marketing includes advertising, distribution and selling.
DA4.2   Product Engineering                 Product Engineering and Process Engineering involve the design and manufacturing aspects of a product.
DA4.3   Process Engineering
DA5     Strategic Planning and Production   Strategic Planning makes the decisions and sets the directions to allocate resources, including capital and people; Production tools and work to make things for use/sale.
DA6     Supply chain                        System of organizations, people, technology, activities, information and resources involved in moving a product or service from supplier to customer.
DA7     Costs                               Can comprise any of the factors of production (including work, capital, land, tax).
Once the research advanced to a deeper understanding of the issues involved (e.g. term classification and the respective definitions), the group realised the need to include two additional DAs: i) DA0, to accommodate terms at a high level of abstraction, so the ontology can have a more general application; and ii) a domain to contain terms derived from standards and industrial best practices.
4 Preliminary Results
From the capture of motivating scenarios and the respective knowledge definitions, it has been possible to establish a set of classes for each domain of application. Table 2 contains some examples of the classes that have been defined for the proposed ontology. The classes are written according to a standard defined within the group, to work as a mnemonic and to help with the traceability of the terms. Additionally, for the formal definition of each class, the members have sought a sound reference (from either the literature or practice).
Table 2. Example of classes and respective definitions for specific DAs
DA     Class              Definition
DA2    LifeCycleStage     Any stage from resources extraction to the final disposition of the product.
DA5    CapacityPlanning   A forward-looking activity which monitors the skill sets and effective resource capacity of the organization.
DA6    ProductionNetwork  A set of inter-firm relationships that bind a group of firms into a larger economic unit.
At the moment, 619 classes and corresponding definitions have been inserted into Protégé. Additionally, 57 general properties have been examined and validated by the group. Figure 3 presents an excerpt of the class tree as provided by the Protégé suite. The arrangement of each primary class and its respective subclasses is still being discussed. However, this preliminary distribution has already provided useful insights into the intended ontology construction.
Figure 3. Excerpt of the Protégé suite, highlighting a set of classes for the Ontologia PLM Project.
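As a rough illustration of how classes such as those in Table 2 end up in a Protégé-compatible ontology, the following sketch uses the owlready2 Python library to declare a few classes and a property programmatically. This is not part of the Ontologia PLM Project tooling (the project works directly in Protégé); the ontology IRI, the class selection and the belongs_to_domain property are illustrative assumptions.

```python
from owlready2 import get_ontology, Thing, ObjectProperty

# Hypothetical IRI; the real project ontology lives inside Protégé
onto = get_ontology("http://example.org/ontologia_plm.owl")

with onto:
    class DomainOfApplication(Thing): pass   # grouping concept for DA0..DA7
    class LifeCycleStage(Thing): pass        # DA2: any stage from resource extraction to disposal
    class CapacityPlanning(Thing): pass      # DA5: forward-looking capacity monitoring activity
    class ProductionNetwork(Thing): pass     # DA6: inter-firm relationships forming a larger economic unit

    class belongs_to_domain(ObjectProperty): # illustrative property linking a class instance to its DA
        domain = [Thing]
        range = [DomainOfApplication]

# Save as RDF/XML so the file can be opened and refined in Protégé
onto.save(file="ontologia_plm_sketch.owl", format="rdfxml")
```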
5 Final Remarks
The knowledge area of product lifecycle management demands more basic steps towards establishing a common vocabulary. From this scenario and a thorough literature review, the approach to develop an ontology was identified. The framework proposed by [11] was chosen for the present development, originating the Ontologia PLM Project. From that, eight domains of application were structured with their respective scopes. These allowed establishing 619 classes
with properties and restrictions that were implemented in a software tool, the Protégé suite. The arrangement of each primary class and its respective subclasses is still being discussed. So far, this preliminary class distribution and the restrictions addressed have already provided useful insights into the intended ontology construction, referred to as the Ontologia PLM Project. The next stages involve finishing the process of inserting the properties and restrictions for each class, which will allow the construction of the needed axioms. Once the axioms are set, it will be possible to validate the ontology in controlled experiments. Mapping strategies that use the proposed ontology may then be developed in order to seamlessly exchange information among different information systems.
6 References [1] IFM (2008). Instituto Fábrica do Milênio. Available from: Accessed on: Feb. 28th 2008. [2] ERBER, F.S.; VERMULN, R.; Nota Técnica sobre Estudo da Competitividade de Cadeias Integradas. [3] ASSOCIAÇÃO BRASILEIRA DA INDÚSTRIA DE MÁQUINAS E EQUIPAMENTOS. (2006). Departamento de Economia e Estatística. Informações IBK. [personal e-mail]. E-mail received for: Jul 20th 2006. [4] RIZZI, C.; REGAZZONI, D. (2007). Conceptual Design Knowledge Management in a PLM Framework, In: GARETTI, M.; TERZI, S.; BALL, P. D.; HAN, S. Product Lifecycle Management Proceeding. Inderscience Publishers. p. 435– 444. [5] CHO, M.; LEE, C.; KIM, D. (2007). A framework for ontology-based manufacturing support systems In: GARETTI, M.; TERZI, S.; BALL, P. D.; HAN, S. Product Lifecycle Management Proceeding. Inderscience Publishers. p. 425-434. [6] SMIRNOV, A.; LEVASHOVA, T.; PASHKIN, M.; SHILOV, N.; KASHEVNIK, A. (2007). Knowledge management in flexible supply networks: architecture and major components. In: GARETTI, M.; TERZI, S.; BALL, P. D.; HAN, S. Product Lifecycle Management Proceeding. Inderscience Publishers. p 73-82. [7] VEGETTI, M.; HENNING, G. P.; LEONE, H. P. Product ontology: definition of an ontology for the complex product modelling domain. In: MERCOSUR CONGRESS ON PROCESS SYSTEMS ENGINEERING, 4., 2005. Proceedings…Costa Verde, RJ: UFRJ. Available from: Accessed on: Feb. 28th 2008. [8] AMERI, F.; DUTTA, D. (2005). Product lifecycle management: closing the knowledge loops. Computer-Aided Design and Applications, v. 2, n.5, p. 577-590. [9] CHANDRASEKARAN, B.; JOSEPHSON, J. R.; BENJAMINS, V. R. (1999) What are ontologies, and why do we need them? IEEE Intelligent Systems, p. 20-26, Jan./Feb. [10] GRUBER, T. R. (1995). Toward principles for the design of ontologies used for knowledge sharing. International Journal of Human-Computer Studies, Vol. 43, Issues 4-5, November 1995, pp. 907-928. [11] USCHOLD, M.; GRUNINGER, M. (1996). Ontologies: Principles, Methods and Applications. Knowledge Engineering Review. 11-2, p. 1-69, Jun. [12] ROZENFELD, H.;FORCELLINI, F.; AMARAL, D.C.; TOLEDO, J C.; SILVA, S.L.; ALLIPRANDINI, D.H; et al. (2006). Gestão de Desenvolvimento de Produtos – uma referência para a melhoria do processo. São Paulo: Saraiva.
[13] MATTHEW, H.; KNUBLAUCH, H.; RECTOR, A.; STEVENS, R.; WROE, C. A Practical Guide To Building OWL Ontologies Using The Protégé-OWL Plugin and CO-ODE Tools Edition 1.0 The University Of Manchester , Stanford University. Available from: < http://www.co-ode.org/resources/tutorials/ProtegeOWLTutorial.pdf >Accessed on: Mar. 08th 2006 [14] INTERNATIONAL ORGANIZATION FOR STANDARDIZATION. ISO14040: 1997. Environmental management: Lifecycle Assessment - Principles and framework. Genebra, 1997. [15] CHRISSIS, M.B.; KONRAD, M.; SHRUM, S.; Capability Maturity Model guidelines for process integration and product improvement, 2003. p.663.
Modelling and Management of Design Artefacts in Design Optimisation Arndt Mühlenfeld, Franz Maier, Wolfgang Mayer, and Markus Stumptnera,11 a
Advanced Computing Research Centre, University of South Australia
Abstract. Complex design processes often embrace various degrees of virtual development, where complex models and simulations replace traditional construction and testing of physical models. However, as the number of models and their inter-relationships grows, managing processes and models becomes increasingly difficult. We describe how to support product development by applying ontologies to manage and guide the design of simulations and to make domain knowledge readily available through context-specific reuse mechanisms. Based on established engineering standards like ISO 10303, we defined domain-specific abstractions and operators to facilitate information reuse. Keywords. Design Optimisation, Ontologies, STEP ISO 10303
1 Introduction
While virtualisation has led to considerable streamlining of product design and development processes, the amount of information to be processed through modelling and simulation has grown to an extent where it has become increasingly difficult to manage and analyse models, their underlying assumptions and the data produced in different steps of a development process. Hence, tools that support designers and engineers in their modelling efforts are desired. In this context, ontologies (i.e., machine-interpretable domain concept definitions) make it possible to capture and query knowledge about models, processes and simulations involved in the development processes. As knowledge becomes explicit (and thereby available for further processing), tools to retrieve and reason about models, simulations and their results make it possible to exploit information acquired through previous processes to guide the execution of the current development
Mawson Lakes, SA 5095, Australia,
muehlenfeld|franz.maier|wolfgang.mayer|[email protected]. Fax: +61 8 8302 3988. This work was funded by the CRC for Advanced Automotive Technology under the IDE project C4-03. We are grateful to Chris Seeling (VPAC) and Daniel Belton (General Motors, Holden Innovation) for providing a test-bed and domain expertise.
process. In particular in multidisciplinary scenarios, where several teams work concurrently to achieve a suitable trade-off between different requirements, it is important to consolidate information managed in different teams and heterogeneous information systems to provide a consistent picture of the current state of the design processes, artefact(s) and models. Here, we investigate how ontological engineering can improve existing design optimisation scenarios. We introduce a method of modelling the design process via task and artefact ontologies to capture knowledge required for subsequent design optimisation. Exploiting captured knowledge, it becomes possible to analyse and compare past modelling efforts with the current analysis scenario to identify potentially redundant simulations and to query and reuse information obtained from previous simulations. Our work builds on established engineering standards like ISO 10303 [5] and extends existing representations with additional ontologies to provide abstract “views” of concrete development artefacts. Explicit formalisations of essential analysis-specific properties of a design alternative provide the means for isolating and browsing similar “compatible” design optimisation sub-processes. Our framework enables reuse of previous results in design optimisation, hence speeding up the optimisation processes by avoiding unnecessary time-consuming numerical simulations. In Section 2, the concepts and underlying principles of design optimisation are outlined. Section 3 introduces our approach to ontological process and artefact modelling and discusses the benefits. This Section concludes with an example of how we apply ontologies to reuse knowledge required for simulation tasks. Section 4 discusses related work in the area. Section 5 summarises our work and gives an outlook on future research directions.
2 Design Optimisation Process Multidisciplinary Design Optimisation (MDO) is a form of virtual development where rigorous modelling and optimisation techniques are applied starting early in the design process, to obtain a coarse understanding of different aspects of a design across a number of heterogeneous domains. Rather than optimising each discipline separately, all disciplines are analysed in parallel and the results are merged with the intent to obtain the best design alternative as a compromise of all included disciplines. For example, developing a new car may require a trade-off between engine power, fuel consumption, stability/crashworthiness, and overall weight. Since modelling and simulation of different aspects may be performed in parallel, on different versions of a design artefact, and at different levels of granularity, ensuring compatibility between models and results is challenging. For example, in the automotive sector, current development practices have resulted in inefficiencies due to inadequate design or simulation models and insufficient data at critical milestones in the stage-gate development process. Here, formal representations of critical properties of models can help to make explicit assumptions and constraints underlying the individual modelling efforts to detect and help resolve such inconsistencies early. In the following section, we propose an architecture to address this compatibility problem.
3 Building a Design Process Ontology Proper representation of important properties of design artefacts and simulation processes is vital for reasoning about the design process, prerequisites, and results. While task-specific aspects and concrete execution scenarios can be captured in process ontologies, process representations must be complemented with (abstractions of) artefact models to enable comparison of different analysis tasks and their resulting models. In this paper, we focus on the role of artefact models in our framework; our approach to capture process-related information has been discussed in [7]. The process model is used to specify and capture the execution of simulation tasks and for automated workflow enactment. It also allows to connect engineering decisions and artefacts to the processes that induced them. The inputs and results of the processes are represented as artefact models. To relate different executed processes, it is necessary to compare the corresponding process elements and artefact representations in both processes. In the following, it is assumed that the artefact models have been annotated with provenance information to capture process-related information. To effectively address these issues, machine-processable representations of design artefacts must be created that support comparing different artefacts with respect to specific criteria. Detailed tool-specific representations of artefacts are directly available from modelling tools (for example, CATIA), but those models are not easily utilised in a wider context due to proprietary representations. This issue has largely been overcome by increased adoption of standards like STEP [5] that have established standardised ontologies and representations for artefacts in particular application domains. Therefore, we base our modelling efforts on the STEP Application Protocols (APs) as reference representation for concrete artefact models. While concrete artefact models, such as detailed geometry information in CAD documents, capture the artefact in full detail, these representations are often too fine-grained to draw useful inferences. Instead, a more abstract representation is desired that captures only those properties that are relevant for a given analysis task. For example, to assess whether two results obtained from finite-element simulations of different parts of a vehicle body may be assembled into a larger model, the detailed geometry information may not be relevant, but the material properties and version number of the simulation software are critical. Similarly, essential requirements and assumptions underlying individual analysis sub-processes can be aggregated into abstract representations (we call those “views”). Since different analysis tasks are likely to address different aspects of a product, a single view is unlikely to be sufficient. Furthermore, a monolithic representation may also hinder reuse, since irrelevant attributes may affect the compatibility test. Hence, abstractions and compatibility criteria between abstract models must be defined with respect to particular analysis goals. Here, STEP/EXPRESS has been identified as suitable framework, since it provides an expressive language that can capture not only concrete representations
of artefacts through APs, but also formalise the aggregation operators to transform concrete representations into abstract views and to define compatibility between abstract views.
Figure 1. Artefact model architecture and operator ontology (conceptual)
The powerful representation in EXPRESS has been shown to lead to simpler development through abstract specification and meta-programming [2]. We use meta-modelling to consolidate different models and compatibility criteria. The meta-models integrate artefact and process models with the domain models of the STEP standard [5]. We use EXPRESS as the formalism for design artefact representation as well as for the specification of concrete and abstract artefact properties and their mappings. Furthermore, the STEP standard comprises a large body of standardised, rigorously defined APs. We can directly access the vast amount of domain knowledge already formalised in STEP by also expressing the engineering model in EXPRESS. Furthermore, the models on the meta-level form a frame of reference for all concrete models and thereby provide the means to express the relations between properties at different levels of abstraction.
3.1 Aggregation of Design Artefact Properties and Meta-Data
We distinguish models at different layers of abstraction. The STEP APs constitute the basis for all subsequent model transformations. Concrete documents representing instances of APs produced by design and engineering tools are depicted as ovals in Fig. 1. For example, the STEP integrated resources (represented by parts 41–58) provide constructs for geometric and topological representation as well as mathematical descriptions and numerical analysis. Elements of the simulation process can be described by ISO 10303. We assume a domain model that utilises parts of AP 214 and covers the analysis results, including the geometry given in a STEP physical file format (Part 21), parametrisation and constraints defined in STEP Part 108, and the finite element mesh defined in Parts 107 and 104. Execution-specific information such as time stamps, machine identifiers, the number of CPUs involved, and different software versions can be directly obtained by extension of core APs.
identified relevant for a given analysis task. The resulting abstract models no longer reflect all engineering details, but are easier to store and manipulate. Each view conforms to an abstract meta-model that specifies the language that is used to describe a view. In our framework, STEP/EXPRESS is used to define both the meta-model and views. Similarly, a specification of the current analysis scenario (Scenario B in Fig. 1) is also represented as abstract view. Hence, the goal of comparing past simulations and their results to the current scenario reduces to the problem of obtaining and comparing suitable abstract views. By comparing abstract views (using function G in Fig. 1) that reflect only relevant properties, compatibility of concrete artefacts (relation G' in Fig. 1) can be assessed. Abstractions from concrete models are also defined in EXPRESS (functions D and E in Fig. 1). By applying transformation operations specified at the meta-level between the meta-models, essential properties of a concrete model are computed to obtain its abstract counterpart. For example, transformation D in Fig. 1 could represent an abstraction from the concrete shape of a geometry model (as specified in STEP Part 42) to yield an abstract model that represents only the extent and mass of the described object. This transformation would be specified as operation on documents conforming to STEP Part 42 to yield an instance of the meta-model describing the extent+mass model. Let the latter be denoted as ExtentMassModel. To relate different abstract views to each other, comparison operators are defined for each abstract view. Each operator is a function that ascertains whether two instances of given views satisfy particular compatibility criteria. In Fig. 1, operation G represents an abstract comparison function that assesses whether two instances of ExtentMassModel are compatible with respect to a specific analysis. Note that G' is difficult to compute directly, since that entails specifying precise comparison operators on varying structures directly in the language defined by the STEP APs. By applying aggregation beforehand, the computation can be simplified considerably. Furthermore, the separation of abstraction and comparison aids the dynamic adaption of transformation and comparison operators based on either operand – a technique that would be difficult to achieve when using the detailed representations. Since abstractions and compatibility criteria depend on the analysis under consideration, an ontology of transformations is defined (depicted in the right part of Fig. 1) that is used to select suitable criteria given a desired analysis. Hence, through selection of suitable aggregation and comparison operations, different abstract views of the same concrete artefact or simulation result can be obtained and compared with the current analysis setting. For example, an ontology of operators may specify that for fuel consumption calculations, two artefacts with equal overall mass are considered equivalent without considering the precise geometric shape. Hence, a suitable abstract comparison operation G can be chosen, together with abstraction functions to automatically compute the abstract models from the concrete artefact models. The successive aggregation of models leads to a formal representation that can be easily stored in a design repository for subsequent query evaluation and reuse. 
By adoption of automated reasoning technology and consolidation of design artefact properties, results of simulation scenarios can be compared and existing MDO data sets can be selected for reuse.
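A minimal illustration of the abstraction-and-comparison idea (functions D and G in Fig. 1) is sketched below in Python rather than EXPRESS. The ExtentMassModel name follows the example in the text; the tolerance value, the fields of the concrete model and the abstraction rule are assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class ConcreteGeometry:          # stand-in for a detailed STEP Part 42 geometry model
    vertices: list               # [(x, y, z), ...]
    density: float               # kg/m^3 (assumed uniform)
    volume: float                # m^3

@dataclass
class ExtentMassModel:           # abstract "view": only extent and mass survive
    extent: tuple                # (dx, dy, dz) bounding box
    mass: float

def abstract_extent_mass(model: ConcreteGeometry) -> ExtentMassModel:
    """Transformation D: reduce a concrete geometry to its extent and mass."""
    xs, ys, zs = zip(*model.vertices)
    extent = (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
    return ExtentMassModel(extent=extent, mass=model.density * model.volume)

def compatible(a: ExtentMassModel, b: ExtentMassModel, rel_tol=0.05) -> bool:
    """Comparison G: two views are compatible if their masses agree within 5%."""
    return abs(a.mass - b.mass) <= rel_tol * max(a.mass, b.mass)

previous = abstract_extent_mass(ConcreteGeometry([(0, 0, 0), (1, 2, 0.5)], 7850.0, 0.0100))
current  = abstract_extent_mass(ConcreteGeometry([(0, 0, 0), (1, 2, 0.6)], 7850.0, 0.0102))
print("reusable:", compatible(previous, current))
```

Comparing the abstract views rather than the full STEP documents is what keeps the compatibility test cheap, as argued above for the relation G'.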
3.2 Example In the following, we illustrate how to utilise the approach for reuse of design knowledge in context of an analysis in the automotive design domain. Setup. Assume that an MDO task is carried out in order to optimise the deformation of a vehicle’s front part in response to a crash impact. Design objectives are high energy absorption but low mass. The shape optimisation problem is investigated while the remaining car body remains unchanged. Different approaches to optimisation are possible: e.g., a morphing approach that keeps the original finite element model and only modifies the coordinates of the nodes, or, an approach based on re-meshing of the existing finite element model. As the intended modifications of the vehicle’s front part are minor, an approach with an unchanged finite element model is sufficient. A corresponding parametric model represented in STEP/EXPRESS is given from which a concrete domain model is generated for given model parameters. Domain Model. The representation of the domain model is achieved with STEP AP214 Core data for automotive mechanical design processes covering data model and requirements of the chosen design discipline. AP214 provides a detailed model of design artefacts including for example the geometry and the parametric model. As a top data model of AP214 the Application Integrated Model (AIM) makes use of STEP Integrated Generic Resources (parts 41-58) for its development. The AIM of a STEP AP is finally used for data exchange. STEP parts, such as Part 52, the mesh-based topology, and Part 49, the process structures and properties, are used to describe the finite element mesh of the vehicle’s front part, and for representation of particular process properties. The geometry of the investigated design artefact can be represented with Part 42. After an optimisation task has completed, the created files are collected and represented in EXPRESS. Geometry, mesh and further results created during the optimisation process are stored in a repository, together with meta-information about the performed simulation. Besides information directly related to our optimisation goal, to maximise energy absorption while keeping the mass of the design part low, this includes meta-information about the execution of the experiment, e.g., Timestamp, MachineID, CPUInformation, OperatingSystem, SoftwareVersion, Duration. This information can also be represented in the EXPRESS modelling language and therefore can be integrated with the meta-model for the design artefacts. Top-Down Reuse. Considering the design of an MDO task, a designer typically selects from different categories of criteria to define the setup of the simulations to be performed. In addition to design objectives, involved disciplines, design variables and constraints, the process steps and involved software systems to execute the scenario have to be defined. MDO task ontologies can serve as meta-models of the optimisation process steps [7]. To make a decision which design artefacts can be selected from the design repository for reuse, firstly, comparison operators have to be developed to connect the constraints and objectives formulated on a high level of abstraction
with the properties attached to design artefacts on the domain layer. To formally define these comparison operators between design artefacts, dependencies between input variables and output responses of previous runs have to be analysed. For example, to maximise the energy absorption and minimise the mass of a vehicle’s front part, input variables that are most significant for these design objectives are derived from previous runs. Hence, the abstraction and comparison operators from Section 3.1 need not be static, but can vary between design scenarios. Summary. The example sketches different categories of design optimisation problems and the necessity of a formalism to compare design artefacts for the purpose of simulation reuse. It illustrates the role the STEP Application Protocols play in this context. Aggregation and abstraction of design artefact properties defined in EXPRESS enables to derive abstract design attributes from a concrete domain. The process is aligned with a meta-model for design artefacts. When retrieving information about existing MDO simulation runs, the requirements provided by the designer are decomposed and traced down to a concrete design repository. Our approach allows the definition of a unified model that supports automated execution as well as a high level view used in organisational decision making in a consistent framework.
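To show how the execution meta-data listed in the example might support top-down reuse, the sketch below stores a few simulation run records and filters them with a simple compatibility rule. The record fields mirror the attributes named in the text (Timestamp, MachineID, SoftwareVersion, and so on); the repository structure, the numeric values and the selection rule are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SimulationRun:
    timestamp: str
    machine_id: str
    software_version: str
    objective: str           # e.g. "energy_absorption"
    mass: float              # kg, aggregated from the artefact view
    energy_absorbed: float   # kJ

repository = [
    SimulationRun("2008-02-01T10:00", "hpc-01", "solver 9.1", "energy_absorption", 41.8, 12.3),
    SimulationRun("2008-02-03T14:30", "hpc-02", "solver 9.1", "energy_absorption", 44.1, 13.0),
    SimulationRun("2008-02-05T09:15", "hpc-01", "solver 8.7", "energy_absorption", 41.5, 11.9),
]

def reusable_runs(repo, objective, target_mass, version="solver 9.1", tol=1.0):
    """Select previous runs whose objective, solver version and mass match the current scenario."""
    return [r for r in repo
            if r.objective == objective
            and r.software_version == version
            and abs(r.mass - target_mass) <= tol]

for run in reusable_runs(repository, "energy_absorption", target_mass=42.0):
    print(run.timestamp, run.machine_id, run.energy_absorbed)
```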
4 Related Work The use of ontologies in design, engineering and process modelling is widespread; hence, the following discussion focuses on selected representative works. Ontologies have been applied in engineering contexts, resulting in several proposals to capture distributed design knowledge [3] and to provide interoperability at the semantic level (as opposed to data structure based data exchange) [6, 8]. The Performance Simulation Initiative (PSI) [9] used ontologies to express dependencies between concurrent activities in dynamic engineering design processes in order to speed-up design cycles through increased concurrency between activities. Different to our work, the focus is on improving process structures; hence, design artefacts are modelled using very abstract ontologies that do not permit the detailed comparison and reuse that we aim to address. Work on model-based interoperability [1] aims at merging EXPRESS and ontologies based on formal logic. While this approach is desirable to apply the strong inference capabilities of Description Logics to EXPRESS models, further research is necessary to extend the current mapping to include complex constraints. Our approach consolidates process and artefact ontologies under a common STEP/EXPRESS meta-model. We chose EXPRESS due to existing artefact ontologies described with the standard, and its suitability for meta-modelling. Other standards, for example the Process Specification Language (PSL), may seem better suited for process modelling. However, PSL lacks support for context relationships, unexpected activity outcomes and needs better definitions of process artefacts [4].
5 Conclusion and Future Work
A number of ontologies have been developed to support designers in annotating and browsing design variants, but most approaches require custom-built interfaces and are not well integrated into existing engineering environments. Our approach of extending the established STEP standard with domain-specific abstractions aims to avoid this limitation by building upon well-established domain ontologies to define and implement a framework to extract relevant properties from concrete domain models produced by actual engineering tools. We have illustrated that transformations within a common meta-model framework make it possible to intelligently reuse partial results stored in a common model repository, at a level of detail that exceeds the capabilities of previous work. We acknowledge that our approach may require significant modelling effort to represent all desired processes, but believe that the use of STEP and established standards helps to considerably lower this barrier. Furthermore, the approach can be applied incrementally, limiting the impact on the overall design process. Based on a case study drawn from the automotive industry, we have begun to isolate and formalise relevant properties and relationships of engineering models, with the immediate goal of transforming, executing and monitoring optimisation processes on top of publicly available execution platforms, replacing the current proprietary implementation. Further work includes extending the set of properties currently considered in our models, and evaluating our approach using the case study and other application scenarios.
6 References [1] C. Agostinho et al. EXPRESS to OWL morphism: making possible to enrich ISO10303 modules. In Proc. of ISPE CE 2007, pages 391–402, 2007. [2] Y. Ait-Ameur, F. Besnard, P. Girard, G. Pierra, and J. C. Potier. Formal specification and metaprogramming in the EXPRESS language. In Proc. SEKE’95, pages 181–188, 1995. [3] R. Fruchter and P. Demian. CoMem: Designing an interaction experience for reuse of rich contextual knowledge from a corporate memory. AI EDAM, 16(3):127–147, 2002. [4] A. G Gunendran et al. Organising manufacturing information for engineering interoperability. In Proc. IESA 2007, pages 587–598, 2007. [5] ISO. 10303-11:1994: Part 11: The EXPRESS language reference manual. ISO, 1994. [6] A. Maier, H.-P. Schnurr, and Y. Sure. Ontology-based information integration in the automotive industry. In ISWC, volume 2870 of LNCS, pages 897–912. Springer, 2003. [7] F. Maier, W. Mayer, M. Stumptner, and A. Mühlenfeld. Ontology-based process modelling for design optimisation support. In Proc. DCC’08, 2008. [8] P. H. P. Nguyen and D. Corbett. Building corporate knowledge through ontology integration. Adv. in Knowledge Acquisition and Management, 4303/2006:223–229, 2006. [9] R. Sohnius et al. Managing concurrent engineering design processes and associated knowledge. In ISPE CE, pages 198–205. IOS Press, 2006.
A Quantitative Metric for Workstation Design for Aircraft Assembly Yan Jina,1, Ricky Currana, Joseph Butterfielda and Robert Burkeb a
School of Mechanical & Aerospace Engineering, Queen's University Belfast, UK.
b Bombardier Aerospace Belfast, UK.
Abstract. This paper studies an activity-time-based metric, called the Non-Value-Added Ratio (NVAR), which is the ratio between the time consumed by all non-value-added activities and the total assembly time, for measuring the goodness of workstation design. With this metric, the manager will have a good sense of shopfloor operation, and the planner will have a target for designing workstations. This metric is specific to an assembly procedure in a workstation. To implement the metric, the shop floor activities are classified into three types, i.e., non-value-added (NVA), value-added (VA) and non-value-added-but-necessary (NVAN) activities. A closed-loop systematic approach is introduced for applying the new metric for continuous productivity improvement. Preliminary results with an industrial case study are presented. Although this metric currently focuses on the aerospace industry, it can be equally well applied to other industry sectors. Keywords. Quantitative metric, Lean manufacture, Workstation design, Aircraft assembly
1 Introduction
Aircraft assembly is a labor-intensive and time-consuming process which accounts for a large percentage of manufacturing cost. There are millions of human operations taking place within workstations on the shop floor in a typical aircraft company. A small percentage of time reduction will be equivalent to millions of US dollars, apart from the benefit of shortening time to market. Shop floor improvement, especially lean workstation design with lean times, has become the major objective of industrial engineers and managers of manufacturing companies for maintaining competitive advantage [1, 2]. However, it is very hard to quantitatively measure the goodness of a workstation design, because it is difficult to predetermine the time/cost accurately and effectively. There is no metric to measure workstation design and to guide the designer in improving the design in a systematic way. Although many companies try to apply the lean principles into
Research Fellow, CEIAT, School of Mechanical & Aerospace Eng., Ashby Building, Stranmillis Rd, Belfast, BT9 5AH, UK; Tel: +44 (0)28 9097 5657; E-mail: [email protected],
workstation design, it is still a big challenge to know the potential or target that could be achieved. Most of the principles stand at a high level and lack detailed information. Line managers always have the concern of whether the current workstation is really lean. Under current competitive pressure, any deficiency or inefficiency can be lethal to the success of a manufacturing company on the global stage. Therefore, a metric which can measure a workstation’s efficiency, so as to help improve productivity, is imperative for a manufacturing company to hold a leading position in the competitive world. In the literature, some researchers study the design of manual assembly workstations with computer-aided tools with the objective of optimal ergonomic and economic values [3, 4]. The design of a typical manual assembly workstation considers the equipment layout and the work area covered by human motions. The manual assembly system generally does not take the walking distance into account, which however is very important in the aerospace industry and carries more weight in aircraft assembly. In addition, numerous iterations with what-if scenarios are required to find the optimal design, while the designer has no sense of the potential for improvement. Some companies design workstations by integrating lean principles, such as JIT, and have gained many benefits2. However, they do not really know how much potential exists for further improvement, although they agree there is still potential to make the workstation leaner. This paper studies an activity-time-based metric for measuring the goodness of workstation design. With this metric, the manager will have a good sense and the planner will have a target for designing workstations. This metric is specific to an assembly process in a workstation. To implement the metric, the shop floor activities are classified into three types, i.e., non-value-added (NVA), value-added (VA) and non-value-added-but-necessary (NVAN) activities. For example, consider the operation “walking five steps, picking up a drill-gun, and then drilling a hole”. In this operation, walking five steps, which could be improved in terms of time, is NVA; picking up a drill-gun, whose consumed time is hard to reduce, is NVAN; drilling a hole, which adds the customer’s value to the final product, is VA. With such a classification, the times consumed by NVA, VA and NVAN activities can be calculated for each operation, and can be further rolled up for each plan. More importantly, such a classification can be seamlessly integrated into advanced digital manufacturing tools. A closed-loop systematic approach is also presented for applying the new metric. Preliminary results from an exemplar study are obtained.
2 Proposed Metric
2.1 The quantitative metric
The metric is named the Non-Value-Added Ratio (NVAR), which is the ratio between the time consumed by all non-value-added activities and the total assembly time, as follows.
NVAR = TNVA / (TVA + TNVA + TNVAN) × 100%
where TNVA represents the total time consumed by NVA activities; TVA is the total time consumed by VA activities; and TNVAN denotes the total time consumed by NVAN activities. The ratio represents the potential for improvement of an assembly plan and provides a good sense of the workstation design. The lower the NVAR value, the better the workstation design. The assumption here is that the sum of the time consumed by VA activities and the time consumed by NVAN activities is constant for an assembly plan, which is quite reasonable.
2.2 Implementation of the metric
To implement the metric, accurate time measurement is the first step. However, time control or measurement is very loose in most companies. Very few companies have set up their standard time systems down to the detailed activity level. This section first introduces the existing predetermined time systems, and then studies the implementation of the metric based on one selected predetermined time system.
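Before turning to the predetermined time systems, a minimal sketch of the NVAR roll-up may help; it simply applies the formula above to classified activity times. The operation names and time values are illustrative, not taken from the industrial case study.

```python
def nvar(activities):
    """Compute NVAR (%) from a list of (category, time) pairs,
    where category is 'VA', 'NVA' or 'NVAN'."""
    totals = {"VA": 0.0, "NVA": 0.0, "NVAN": 0.0}
    for category, time in activities:
        totals[category] += time
    total_time = sum(totals.values())
    return 100.0 * totals["NVA"] / total_time if total_time else 0.0

# Hypothetical operation: walk five steps (NVA), pick up drill-gun (NVAN), drill hole (VA)
plan = [("NVA", 0.05), ("NVAN", 0.02), ("VA", 0.10)]   # times in minutes
print(f"NVAR = {nvar(plan):.1f}%")                      # ≈ 29.4% for these assumed times
```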
non-repetitive, long-cycle operations. It has been reported that the application speed of the BasicMOST® system is up to eight times faster than MTM-2, and it tends to be the system utilized in aerospace, considering the associated factory practice. Without loss of generality, the BasicMOST® system is employed for the implementation of the metric in this paper, as explained below.
2.2.2 Analysis and classification of basic activities
There are three sequence models in BasicMOST®:
- General Move (sequence model: A B G A B P A)
- Controlled Move (sequence model: A B G M X I A)
- Tool Use (sequence model: A B G A B P F/L/C/S/M/R/T A B P A)
where A: action distance; B: body motion; G: gain control; P: placement; M: move controlled; X: process time; I: alignment; F: fasten; L: loosen; C: cut; S: surface treat; M: measure; R: record; and T: think. With these sequence models and the index numbers assigned to each activity, which carry standard predetermined times, the time for each operation can be obtained. As mentioned before, the time consumed by each operation needs to be analyzed and further classified into VA, NVA and NVAN times in order to implement the metric. For instance, the times for the operation "Drill Hole", which is composed of three steps, are analyzed as follows:
a) Hold and place drill gun from operator to hole (Seq. Model: A0 B0 G0 A1 B0 P3 A0; T = 40 TMUs, TVA = 0, TNVAN = P3 = 30 TMUs, TNVA = A1 = 10 TMUs)
b) Push trigger on drill gun at part (Seq. Model: A1 B0 G1 M1 X10 I0 A0; T = 130 TMUs, TVA = X10 = 100 TMUs, TNVAN = G1 + M1 = 20 TMUs, TNVA = A1 = 10 TMUs)
c) Move 1/4 step to next group of holes (Seq. Model: A1 B0 G1 A3 B0 P1 A0; T = 15 TMUs, TVA = 0, TNVAN = 1/4 (G1 + P1) = 5 TMUs, TNVA = 1/4 (A1 + A3) = 10 TMUs)
where T, TVA, TNVAN and TNVA are the process, VA, NVAN and NVA times associated with the shop floor activities. With these breakdowns, the VA, NVA, NVAN and standard process times can easily be obtained by rolling up the times associated with the detailed activities. That is, the standard times of "Drill Hole" are T = 185 TMUs = 0.111 min, TVA = 100 TMUs, TNVAN = 55 TMUs and TNVA = 30 TMUs. Such an analysis and classification is very helpful for keeping the focus on the wastes, which are to be kept to a minimum. Obviously, the process time, represented by X, which adds customer value to the final product, is value-added, while the action distance, represented by A, which can be improved, is non-value-added. All other activities are placed in the non-value-added-but-necessary category. Note that, because aircraft assembly involves a large amount of fastening and disassembly work that does not directly add value but cannot be avoided, the fasten (F) and cut (C) activities are classified as NVAN activities; value-added fastening or cutting work is represented by X when the sequence models are defined.
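As a rough sketch of how this breakdown can be automated (an illustration written for this rewrite, not the authors' actual tool), each BasicMOST® sequence element can carry its index value and a VA/NVA/NVAN class; one index unit corresponds to 10 TMUs, X is treated as VA, A as NVA, and everything else, including F and C, as NVAN, as chosen above. The snippet reproduces the "Drill Hole" figures:

# Each step of "Drill Hole" as (element, index value, frequency).
# Index values are multiplied by 10 to give TMUs; the 1/4-step move
# is modelled with a frequency of 0.25.
DRILL_HOLE = [
    [("A", 1, 1.0), ("P", 3, 1.0)],                                    # a) hold & place drill gun
    [("A", 1, 1.0), ("G", 1, 1.0), ("M", 1, 1.0), ("X", 10, 1.0)],     # b) push trigger
    [("A", 1, 0.25), ("G", 1, 0.25), ("A", 3, 0.25), ("P", 1, 0.25)],  # c) move 1/4 step
]

def classify(element):
    """X (process time) is VA, A (action distance) is NVA, the rest is NVAN."""
    if element == "X":
        return "VA"
    if element == "A":
        return "NVA"
    return "NVAN"   # includes F and C, as classified in the text

totals = {"VA": 0.0, "NVA": 0.0, "NVAN": 0.0}
for step in DRILL_HOLE:
    for element, index, frequency in step:
        totals[classify(element)] += 10 * index * frequency   # TMUs

total_time = sum(totals.values())
nvar = totals["NVA"] / total_time * 100
print(totals, total_time, nvar)   # {'VA': 100.0, 'NVA': 30.0, 'NVAN': 55.0} 185.0 ~16.2%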
2.2.3 Automatic calculation of the metric
Digital methods are now playing a more significant role in process planning activities within the aerospace industry [8]. Digital manufacturing is an emerging software technology that will become a fundamental and liberating component of Product Lifecycle Management (PLM) [9, 10]. Although digital manufacturing can bring many advantages, such as accelerating process and production planning, it also poses many challenges in implementing a full digital enterprise; it is well known that digital manufacturing solutions have to be carefully developed and implemented across the whole enterprise system. To realize intelligent metric analysis in a digital manufacturing environment, the metric must be seamlessly integrated into a suitable process structure for automatic generation. Jin et al. [11] have developed an expert system that performs intelligent time analysis through a digital manufacturing tool by inferring time estimates from a library of standard operations associated with standard run and setup times. Naturally, the NVAR metric can be realized automatically through this approach by associating the VA, NVA and NVAN times with each standard operation in the library.
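A minimal sketch of how the metric could ride on such a library, assuming every standard operation already carries pre-classified VA, NVA and NVAN times (the operation names and all figures other than "drill hole" are hypothetical and not taken from the cited expert system):

# Hypothetical standard-operation library: name -> (VA, NVA, NVAN) times in TMUs.
STANDARD_OPS = {
    "drill hole":       (100.0, 30.0, 55.0),   # figures from the worked example above
    "install fastener": ( 80.0, 20.0, 60.0),
    "deburr edge":      ( 50.0, 10.0, 25.0),
}

def plan_nvar(plan):
    """Roll up the pre-classified times of an assembly plan and return its NVAR in %."""
    va   = sum(STANDARD_OPS[op][0] * n for op, n in plan)
    nva  = sum(STANDARD_OPS[op][1] * n for op, n in plan)
    nvan = sum(STANDARD_OPS[op][2] * n for op, n in plan)
    return 100.0 * nva / (va + nva + nvan)

# A toy assembly plan: (standard operation, number of occurrences).
print(plan_nvar([("drill hole", 12), ("install fastener", 12), ("deburr edge", 4)]))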
[Figure 1 flow chart, summarising the closed loop: benchmark assembly NVAR; production line design and workstation design; calculate NVAR by simulation using the standard times library; compare with the benchmark NVAR and improve the design if needed; build/improve the workstation; record assembly time on the shop floor and recalculate NVAR; compare with the benchmark again; end.]
Figure 1. Flow chart on how to use the metric
2.3 How to use the metric?
Figure 1 shows the flow chart for applying the metric to workstation design. During process planning, once the assembly plan is available the metric can be calculated from current standard times or legacy data as a benchmark. The production line and workstation are first designed virtually, and a new value of NVAR is calculated for this design. The new NVAR is then compared with the benchmark value: if it is better, the design is released for build; otherwise, the design needs to be improved. Once the new NVAR is proved valid on the shop floor, the standard times library is updated, as is the benchmark metric for the next workstation design. Note that the times obtained in this study are benchmark values; the simulation of the workstation and the collection of shop floor time data could not be completed within the tight time frame.
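The loop in Figure 1 can be written down compactly as follows; this is purely an illustrative sketch, and the arguments (design, simulation and shop floor measurement routines, and the standard times library) are placeholders rather than parts of any cited tool:

def design_workstation_with_nvar(assembly_plan, std_times_library,
                                 design_fn, simulate_nvar, shop_floor_nvar,
                                 max_iterations=10):
    """Benchmark, design virtually, simulate NVAR, compare, build, then validate
    on the shop floor and update the standard times library (see Figure 1)."""
    benchmark = simulate_nvar(assembly_plan, std_times_library)   # benchmark assembly NVAR

    design = design_fn(assembly_plan)
    for _ in range(max_iterations):
        nvar = simulate_nvar(design, std_times_library)
        if nvar <= benchmark:            # virtual design is at least as lean as the benchmark
            break
        design = design_fn(design)       # otherwise improve the workstation design

    measured = shop_floor_nvar(design)   # record assembly times on the shop floor
    if measured <= benchmark:
        std_times_library.update(design) # validated times become the next benchmark
    return design, measured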
3 Exemplar Study
An exemplar study was carried out using the Uplock & Apron Assembly shown in Fig. 2, for one of Bombardier's current regional passenger jets. The entire assembly plan contains around 200 operations with standard times; these standard times are averages and may not be lean. Every operation is composed of several sub-operations, each of which is analyzed and represented by one or several sequence models. After analyzing all the sequence models, the NVA, VA and NVAN times can be obtained for each sub-operation and rolled up for each operation; in the same way, all these times, and hence the NVAR value, can be obtained for the entire assembly plan. Figure 3 shows the normalized NVA, VA and NVAN times within the run time, setup time and total time of the Uplock & Apron Assembly. As expected, there is no value-added time in the total setup time. The NVAR value of the overall assembly is 26.31%, which signals a significant potential to reduce the total assembly time and cost of the current workstation and process design.
Figure 2. CAD model of the Uplock & Apron Assembly
A Quantitative Metric for Workstation Design for Aircraft Assembly
529
[Figure 3 bar chart: normalized TNVA, TVA and TNVAN components of the run time, setup time and total time; NVAR = 25.96% for run time, 29.10% for setup time and 26.31% for total time.]
Figure 3. Normalized results of assembly times of Uplock & Apron assembly
4 Conclusion
A quantitative metric has been proposed for workstation design, especially for eliminating the waste of action distance and thus arriving at lean standard times. It is an effective measure of workstation design and provides a good sense of the improvement potential. Shop floor activities are analyzed and classified into three types: value-added, non-value-added, and non-value-added-but-necessary. Based on this classification, the metric can easily be realized and seamlessly integrated into digital manufacturing. The preliminary result of an exemplar study with valid industry times has been presented. Automatic generation and seamless integration with digital manufacturing tools are ongoing work.
5 Acknowledgments
The authors gratefully acknowledge the help and guidance of Brian Welch, Paul Smith, Gerry Mcgrattan, Donna Mcaleenan of Bombardier Belfast, Tom Edgar, Colm Higgins and Rory Collins of the Northern Ireland Technology Centre at Queen's University, and Jason Jones and Simon Allsop of DELMIA UK, without whom this work would not have been possible.
530
Y. Jin, R. Curran, J. Butterfield and R. Burke
6 References
[1] Donald A Dinero, Training Within Industry: The Foundation of Lean, Productivity Press, 2005.
[2] A Weber, "Lean Workstations: Organized for Productivity", Available at: . Accessed 3 Dec. 2007.
[3] WJ Braun, R Rebollar, EF Schiller, "Computer aided planning and design of manual assembly systems", International Journal of Production Research, Vol. 34(8), 1996, pp. 2317-2333.
[4] XF Zha, SYE Lim, "Intelligent design and planning of manual assembly workstations: A neuro-fuzzy approach", Computers & Industrial Engineering, Vol. 44, 2003, pp. 611-632.
[5] BW Niebel, Motion and Time Study, Richard D. Irwin, Inc., Homewood, Illinois, 1982, pp. 451-490.
[6] HB Maynard, GJ Stegemerten, JL Schwab, Methods-Time Measurement, McGraw-Hill, New York, 1948.
[7] KB Zandin, MOST® Work Measurement Systems, Marcel Dekker, 1989.
[8] R Curran et al., "Digital Lean Manufacture (DLM) for Competitive Advantage", 7th AIAA Aviation Technology, Integration and Operations Conference, 18-20 Sep. 2007, Belfast, Northern Ireland.
[9] R Curran et al., "Digital Design Synthesis and Virtual Lean Manufacture", 45th AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada, 8-11 January 2007.
[10] J Butterfield et al., "Optimization of Aircraft Fuselage Assembly Process Using Digital Manufacturing", Journal of Computing and Information Science in Engineering, Vol. 7, 2007, pp. 269-275.
[11] Y Jin et al., "A Structure Oriented, Digital Knowledge Based Approach for Time Analysis of Aircraft Assembly", 8th AIAA Aviation Technology, Integration and Operations Conference, 14-19 Sep. 2008, Anchorage, Alaska (to appear).
An Integrated Lean Approach to Aerospace Assembly Jig and Work Cell Design Using Digital Manufacturing
J. Butterfield1, A. McClean2, Y. Yin3, R. Curran4, R. Burke5, Brian Welch6, C. Devenny7
1-4 School of Mechanical & Aerospace Engineering, Queen's University Belfast.
5-7 Bombardier Aerospace, Airport Road, Belfast, BT3 9DZ.
Abstract. This paper examines the use of integrated digital manufacturing methods for the design of an aircraft panel assembly jig and its associated work cell. The existing jig design and assembly sequence for the Bombardier CRJ700/900 regional jet apron and uplock panel assembly was reviewed. A digital simulation of the new CRJ1000 apron and uplock assembly, incorporating a conceptual design for a new jig, was produced. When the jig format was finalised, its simulated performance was compared with that of the CRJ700/900 to identify any process improvements in terms of tooling cost and panel build time. It was predicted that the digitally assisted changes had brought about a 4.9% reduction in jig cost and a 5.2% reduction in panel assembly time. It was concluded that the reduction in assembly time was due to improved jig functions and ergonomics as well as the application of lean principles to the work cell design. The cost was reduced by applying design for manufacturing and assembly (DFMA) principles through the digital medium, and by the reduction in design iterations made possible by the use of digital manufacturing.
Keywords: Digital Manufacturing, Process Design, Integration, Lean.
1 Introduction
Digital manufacturing methods can help to reduce production cost, time to market and the number of design changes as product development progresses. This is achieved through the optimisation of build processes and support equipment design, which in turn improves product development lead times, design agility, profitability and, ultimately, competitive advantage. Previous work has shown that the use of simulation-assisted learning in the form of animated work instructions can improve operator assembly times for the apron and uplock assembly shown in Figure 1 by 14% [1]. Integrated digital manufacturing methods have also been used to optimise labour usage as assembly processes are established, improving financial efficiency by 19% [2].
1 Research Fellow, Centre of Excellence for Integrated Aircraft Technologies, NITC, Cloreen Park, Belfast, BT8 6ZF. Tel.: ++44(0)2890974878, Fax.: ++44(0)2890974332, Email: [email protected].
The method is equally applicable to the innovation process and its effectiveness [3] as manufacturing processes are designed in the run-up to the productionisation of a complex assembly such as an aircraft fuselage. In quantifying the full impact of digital methods in manufacturing, the methodologies involved in the generation and use of information and knowledge must be addressed in terms of the entire enterprise and not just the individuals who physically build the finished product. The application of the method upstream from the production environment can add further value by allowing process and tool designers to make better informed decisions as product build strategies and jigs are developed. To gain a better understanding of the impact of digital manufacturing methods on aircraft production, the learning process should be viewed from an organisational perspective. Organisational learning is an "umbrella" term which covers several topics including knowledge creation, sharing, transfer and management, organisational memory, organisational forgetting, etc. [4]. Total organisational learning is represented by a learning or cost curve, a line on a graph mapping the decreasing time required to accomplish any task as it is repeated; see Figure 2.
Figure 1. Apron & Uplock With Assembly Jig.
Figure 2. Learning Curve Improvement Map[1].
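For reference, such a curve is commonly summarised by the standard power-law (Wright) learning curve; the formulation below is a textbook expression, not one given by the authors: $T_n = T_1 \, n^{\log_2 r}$, with $0 < r < 1$, where $T_1$ is the time for the first unit, $T_n$ the time for the $n$-th repetition and $r$ the learning rate (for example, $r = 0.8$ means each doubling of cumulative output reduces the unit time by 20%).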
Any reduction in the area below the learning curve will lead to reduced production cycles [5], lower cost and improved competitiveness. Figure 2 shows how learning can be improved, with parallel curve movements made possible by process improvements resulting from management learning (i.e. methods and tool design), implementing lean work practices, etc. The increasing availability of fully integrated digital manufacturing environments based on more traditional CAD platforms has brought manufacturability firmly into the design arena, allowing process design activities to take place concurrently with product development as soon as a bill of materials starts to take shape [6]. The goal of these digital manufacturing methods is to provide the manufacturing community with predictive solutions to create, validate, monitor and control agile, distributed manufacturing production systems geared towards build-to-order and lean production [7]. Although the use of finite element analysis (FEA) is a well established part of the aircraft structural development process, it is not commonly used when considering
manufacturing scenarios. When designing a jig it is important that the tooling designer can consider structural behaviour, so that adequate provision can be made for flexible parts and engineering tolerances can be met. This paper examines the integration of digital manufacturing methods, including FEA used in a manufacturing context, into the lifecycle management of aircraft assembly jig design. The aim of this work is to quantify the benefits, if any, that digital manufacturing methods can now offer to methods engineers and tooling designers when designing the build process for an aircraft panel. The method includes the assessment and simulation of current jig functions, part performance and work cell layout during panel assembly. The outcome will quantify any improvement in panel build time and tooling cost using the time generation and cost calculation tools that have been integrated into the Delmia digital manufacturing modules used for this work. Although the work within this report analyses a specific aerospace product and its associated assembly processes, the general methodology presented here is equally applicable to any aircraft panel, or indeed to the development and productionisation of any complex engineering assembly.
2 Method
Figure 3 shows the main elements which should be included when considering a study in tool design using digital manufacturing methods.
Figure 3. Considerations For an Integrated Approach to Tool Design Using Digital Manufacturing Methods.
Ergonomics looks at the function of the tool within its work cell, from the perspective of the operator in terms of his / her body orientation and reach envelope. Design for manufacture and assembly (DFMA) is required to make the tool design as streamlined as possible. It must perform its function efficiently and accurately by consuming the minimum amount of parts and materials required for its own construction. Work cell layout is important in terms of the economy of movement of both the operator and the components as they move into and around the work area as the panel comes together. The learning curve is important because of the factors introduced earlier. The more efficient the build process the
534
J. Butterfield, A. McClean, Y. Yin, R. Curran, R. Burke, B. Welch and C. Devenny
less time it will take to complete the build and deliver to the customer. Structural performance during assembly must be considered, as this will drive any requirement for part support features on the jig; the position of flexible parts must be controlled if engineering tolerances are to be met. The application of lean principles will minimize or eliminate wastes, including unnecessary part and operator movement and non-conformances in the finished panel. The inclusion of operator input in the process also eliminates intellectual waste. Digital mock-up and assembly simulation is a key part of the whole process: the animation of the panel build allows both the methods engineer and the tooling designer to run through virtual builds and assess 'what if' scenarios, thereby optimizing the panel build and jig design before they enter production.
3 Applied Methodology
The first step was to assess the current CRJ900 build to identify any improvements which could be made, both in terms of its functionality and the material content of the jig's structure. This included the use of digital build simulations to examine the build sequence, finite element analyses to assess the flexibility of individual parts where required, and observational work on the factory floor. A DFMA study and a lean assessment of current build practices were carried out, and the engineering process records (EPR) and non-conformance reports (RNC) for the current apron uplock assembly were reviewed. Based on the current product assessment, the CAD data for the first new jig concept was generated in CATIA V5, and FEA was used to assess the strength of components which had been identified as being flexible, under self-weight and assembly loads. The purpose of this was to identify any changes, based on material savings or additional jig features, that might be required due to the physical properties of the components. The jig geometry was merged with the new uplock design in the Delmia Digital Process for Manufacturing (DPM) environment. A pre-existing EPR was used to animate the series of operations required to assemble the uplock. A process review was carried out with tool designers and methods engineers to bring the conceptual jig design into line with Bombardier tool design standards and standard material usage guides. Time analysis was carried out to compare the assembly of the new CRJ1000 uplock with the equivalent time for the existing CRJ700/900 uplock. The assembly time for the new CRJ1000 uplock was based on the addition of new operations and the removal of redundant activities. Methods-Time Measurement (MTM) technology was used to derive the assembly timings for the activities specified within Delmia Process Engineer (DPE). DPE functionality was enhanced so that, by defining the series of tasks required to complete an activity, the process times were automatically populated. A cost analysis was also carried out for the new CRJ1000 uplock assembly process using the SEER DFM costing package to generate manufacturing costs. The part costing parameters were automatically populated within the DPE software using the CATIA CAD part attributes. Historical data for existing tooling production costs also made it possible to predict the production cost of the new jig.
4 Results
4.1 Structural Performance
The flatness tolerance required on the outer skin surface for the apron and uplock assembly requires that the skin itself is held rigidly by a relatively thick backing plate on the jig during the panel drilling and assembly processes. Although the stress levels in the skin are relatively low, its deflection under its own weight when held by its edges is significantly greater than the required flatness tolerance. This justified the use of relatively thick backing plates and stiffening features on the jig and proved that there would be no material saving in this area for the new CRJ1000 Apron and Uplock tool. Figure 4 shows the 'dog leg' feature on the uplock. Although this component is relatively strong in its upper and lower sections, the strength-providing flanges do not continue through the bend. Three neighbouring components are influenced by the position of the dog leg, so if it is not assembled correctly the positional tolerances of these three other significant pieces are affected. The FEA results for the dog leg (see Figure 4) show that, when it is subjected to its own self-weight load, there is a concentration of stress at the bend where the flanges have been removed to aid component forming. Although the maximum stress is not high enough to cause material yield, the deflections shown in Figure 4(b) are of the same order of magnitude as the positional tolerance required for the lower section of the component. Although there are no significant problems with the position of the dog leg on the current CRJ700/900 jig build, the process of positioning and drilling the dog leg is time consuming. This motivates an additional support feature on the jig to facilitate quicker positioning of the dog leg on the panel while maintaining current levels of positional accuracy.
(a). Dog Leg Stress Contours.
(b). Dog Leg Displacement Contours
Figure 4. FEA of Uplock Dog Leg Feature: Self Weight Load.
4.2 Cost Estimation
A breakdown of the current cost for the manufacture of the CRJ700 jig revealed that 56% of the final cost of the jig is accumulated in the manufacture of the tool. This shows that any improvement in the new CRJ1000 jig which facilitates more cost-effective manufacture will have the greatest impact on final cost. The use of digital manufacturing methods, including animated simulations, could also have a significant effect on the design process, which accounted for 35% of the total jig cost, giving another potential cost saving in this area. The remaining 9% of total jig cost was for materials. Table 1 shows the net change in cost having
taken into account the change in these main cost areas for the new CRJ1000 apron and uplock assembly. Table 1. Cost Changes for CRJ1000 Uplock Jig Relative to CRJ700/900 Jig.
Increase in Cost:   Material +1.1%;  Manufacture +1.1%
Decrease in Cost:   Design -5.8%;  Material -0.8%;  Manufacture -0.5%
Net Change in Cost: -4.9%
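In other words, the net figure is simply the sum of the individual changes: (+1.1%) + (+1.1%) + (-5.8%) + (-0.8%) + (-0.5%) = -4.9%.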
This reduction corresponds to a net 4.9% decrease in the overall cost of the CRJ1000 jig. The majority of this saving is due to a reduction in the number of design iterations, achieved by using knowledge acquired through the simulations generated with digital manufacturing methods. This shows the extent to which management learning has been improved through the use of the simulations during process design. The manufacturing and material costs have increased marginally due to the design changes required after the DFMA exercise, which resulted in a new assembly procedure for the skin, i.e. the addition of the dog leg support feature and hard stops on the upper half of the jig. These increases were offset by cost reductions due to the removal of redundant features, for example the skin retaining clamps and jig frame spindles.
4.3 Time Estimation
The uplock build was divided into three operations, OP10, OP30 and OP50. Each operation was subdivided into a series of repeated standard procedures or tasks required to complete the assembly. The first stage was the fitting of the skin and uplock brackets to the jig, the second stage was the fitting of the main structural items to the inner face of the skin, and the final stage was the attachment of the minor brackets and gussets required to hold the structure together. The standard activities included in the setup time, the completion of the work and signing off were: Clock on Job, Prepare Tools, Review Paperwork, Locate, Fix, Drill, Remove, De-burr, Re-Fix, Install Fastener, Call Inspector/Stamp Paperwork. The location activities were further sub-divided into activities related to small, medium and large parts for timing purposes. The changes in assembly times between the existing CRJ700/900 uplock and the conceptual CRJ1000 uplock were determined by analysing the re-design and, where appropriate, changing the standard times for each activity. Table 2 shows the final outcome in terms of the assembly time differences between the CRJ700/900 and CRJ1000 uplock assemblies.
Table 2. Time Differences Between CRJ700/900 and CRJ1000 Uplock Build Times.
Operation Number:  Difference Between CRJ700/900 & CRJ1000 Build Times
Op 10:   -16.00%
Op 30:   -3.22%
Op 50:   -1.86%
Total:   -5.20%
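Purely as an illustration of the activity-based timing approach described in Section 4.3 (the activity names are those listed above; the standard times and the operation contents are hypothetical), the build-time change for an operation can be estimated by re-pricing its activity counts against the standard times:

# Hypothetical standard times (minutes) for the activities listed in Section 4.3.
STANDARD_TIMES = {
    "Clock on Job": 0.5, "Prepare Tools": 1.0, "Review Paperwork": 1.5,
    "Locate (small)": 0.3, "Locate (medium)": 0.6, "Locate (large)": 1.2,
    "Fix": 0.4, "Drill": 0.2, "Remove": 0.3, "De-burr": 0.3, "Re-Fix": 0.4,
    "Install Fastener": 0.2, "Call Inspector/Stamp Paperwork": 2.0,
}

def operation_time(activity_counts):
    """Total time of an operation from its activity counts."""
    return sum(STANDARD_TIMES[activity] * count
               for activity, count in activity_counts.items())

def percent_change(old_counts, new_counts):
    """Percentage change in build time between the old and redesigned operation."""
    old, new = operation_time(old_counts), operation_time(new_counts)
    return 100.0 * (new - old) / old

# Toy example: the redesign removes some locating and drilling work.
old_op = {"Locate (large)": 4, "Fix": 4, "Drill": 40, "Install Fastener": 40}
new_op = {"Locate (large)": 3, "Fix": 3, "Drill": 34, "Install Fastener": 34}
print(percent_change(old_op, new_op))   # negative value = time saving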
4.4 Quality
When designing the Apron/Uplock, the important factors which affected quality were tolerance build-up, non-conformances and scrapping. A review of production data for the existing CRJ700/900 uplock revealed that 40% of the non-conformances were due to component positional issues. These included part clashes and low clearance conditions related to the relative position of parts and fasteners such as rivets, which required considerable additional design hours to correct after production had begun. The provision of accurate, animated work instructions [1] reduces the potential for scrapping due to operator error. Designing the work cell using simulation to minimise part and final assembly handling and movement also reduces the chance of part damage.
4.5 Health & Safety
The drill holsters on the CRJ1000 are an example of how health and safety could be improved. The recommended placement of the drills on the jig, with air lines coming from above, removes the potential hazard of coiled air supply lines on the floor, which are a tripping hazard. This arrangement also creates a more ergonomically sympathetic design, as the operator reaching distance is dramatically reduced, thereby saving time.
4.6 Lean Considerations
Time can also be saved by providing three separate drills, one for each of the hole sizes used on the uplock. The one-off increase in tooling requirement is offset by the recurring time saving due to the operator not having to change drill bits during the assembly process. The result is a reduction in non-value-added activities and hence a decrease in the overall assembly time. Fastener containers located on the jig eliminate unnecessary movement. Analysis of overall operator motion through the build revealed that the delivery of parts to the station in bags, inside bags, inside a box, used up time as the operator searched for smaller pieces. Figure 5 shows a conceptual work cell layout where the skins are delivered on racks and the remaining parts are positioned on a peg board. This arrangement means that all
parts are conveniently to hand and the operator can see clearly if anything is missing.
(a). Old Version.
(b). Optimised Version.
Figure 5. Conceptual Work Cell Layouts for Apron and Uplock Assembly Build.
5 Discussion
The aim of this work was to quantify the benefits that integrated digital manufacturing methods can offer to methods engineers and tooling designers when designing the build process and jig for an aircraft panel. The cost of the existing CRJ700/900 tool comes from three main areas: jig design (35%, mainly derived from the engineering man hours), material cost (9%) and jig manufacture (56%, arising from the man hours required for the tool build). A combination of digital manufacturing simulations, including FEA calculations, the application of standard DFMA principles and the use of animated build sequences was applied in an effort to improve the performance of the CRJ1000 apron and uplock panel assembly processes. The benefit of this approach was the concurrency of process and tooling design activities which traditionally take place in a linear sequence after product design has been completed. Work cell layout, lean performance, health and safety, and the structural analysis of flexible parts can all be rolled into these activities, facilitating better informed decision making from a manufacturing perspective. The outcome shows that the non-recurring cost of jig design and manufacture could be reduced by 4.9%. This was due mainly to a 25% reduction in design hours as the number of design iterations was reduced by 60%. At each stage of the process, methods engineers and tooling designers were able to make better informed decisions as the process and jig design evolved, through the use of structural analyses for flexible parts and animated build simulations. Having estimated the non-recurring development cost of the CRJ1000 jig relative to its predecessor, the change in the recurring cost, i.e. the assembly time, for the new CRJ1000 apron and uplock was determined. The 16% time reduction in OP10 is due to the omission of the top half of the uplock skin; the saving comes mainly from the improved handling characteristics of the smaller skin. The additional dog leg bridge was the most important aspect of the jig design in terms of the 3.22% time saving for OP30, which is the most time consuming of the three main operations. This process relies on the skill of the operator for positional accuracy.
The addition of the third bridge ensured that the time taken to complete this operation was reduced, as the dog leg was 'jig located'. There is a relatively small improvement of 1.86% in the time required to complete OP50. This is the stage where the smaller items such as brackets and gussets are fitted; although minor changes were implemented in the assembly sequence for these parts, they had little impact on the overall time reduction. Assumptions regarding the structural integrity of an aircraft are based on specified design tolerances. Tolerances have a critical effect on product safety, certification and subsequent customer acceptance of the finished product. The biggest single contributor to non-conformances on the uplock is tolerance build-up; it is therefore very important to eradicate tolerance problems arising from the assembly process. With 40% of current RNCs on the uplock related to positional issues, it was concluded that the use of clash detection and distance/band analysis within DPM could eliminate these non-conformances, but the CAD data would require full product modelling, including fasteners. Health and safety is concerned with protecting the safety, health and welfare of people engaged in work or employment. Digital simulation can play an important part in examining how an operator interacts with their working environment. The jig and work cell must be designed in such a way that no overreaching or sustained awkward body orientations are required to complete an assembly task. The omission of sharp tooling features and the recommendation to feed air supplies to the uplock assembly jig from above have reduced the risk of operator injury in this case. The purpose of creating an egocentric jig design is to promote a lean environment in which the operator has all the equipment and tools he needs around him and only has to move a minimal distance, reducing wasted movement and hence time. By creating an egocentric work cell such as the one shown in Figure 5, the assembly of the component becomes leaner as unnecessary non-value-added time is reduced. The use of an integrated, digital environment meant that, in practical terms, communications were improved between disciplines, as methods engineers and tooling designers had full access to all aspects of the build: tooling cost, assembly time, structural properties of the individual components, etc.
6 Conclusions
The aim of this work was to quantify the benefits of digital manufacturing by using the CRJ apron and uplock assembly as a case study. The main goal was to prove that the use of digital manufacturing could lower the overall build time for a component by improving the process and implementing lean changes. The jig and build process for the new CRJ1000 uplock have been improved by reducing the number of design iterations, improving the levels of concurrency in the process design activities and improving inter-departmental communications. Jig usage and work cell function have been improved ergonomically, health and safety issues have been addressed, and all aspects of jig design and panel build have been made leaner. The use of digital manufacturing methods in the support of these activities
has achieved a reduction in panel assembly time of 5.2% and a reduction in tooling cost of 4.9%.
7 References
[1] J. Butterfield et al. Use of Digital Manufacturing to Improve Operator Learning in Aerospace Assembly. 7th AIAA ATIO, 2nd CEIAT International Conference on Innovation and Integration in Aerospace Sciences, Hastings Europa Hotel, Belfast, Northern Ireland, 18-20 September 2007.
[2] J. Butterfield et al. Assembly Process Optimisation for a Regional Jet Fuselage Section Using Digital Manufacturing Methods. 1st International Conference on Innovation and Integration in Aerospace Sciences, Queen's University Belfast, 4-5 August 2005.
[3] Granath J. A., Adlar N. Organisational Learning Supported by Design of Space, Technical Systems and Work Organisation. Flexible Automation and Intelligent Manufacturing Conference (FAIM 95), 5th International Conference, June 28-30, 1995, Stuttgart, Germany.
[4] Neece O. E. A Strategic Systems Perspective of Organizational Learning Theory: Development of a Process Model Linking Theory and Practice. In: Managing the Human Side of Information Technology: Challenges and Solutions, pp. 182-221, IGI Publishing, Hershey, PA, USA, 2002, ISBN 1-930708-32-7.
[5] Garbaya, S. et al. Experiments of Assembly Planning in a Virtual Environment. Proceedings of the 5th IEEE International Symposium on Assembly and Task Planning, Besancon, France, July 10-11, 2003.
[6] J. Butterfield et al. Optimisation of Aircraft Fuselage Assembly Process Using Digital Manufacturing. American Society of Mechanical Engineers, Journal of Computing and Information Science in Engineering, Volume 7, Number 3, September 2007, pp. 269-275.
[7] Brown, R.G. Driving Digital Manufacturing to Reality. Winter Simulation Conference Proceedings, 2000, Volume 1, pp. 224-228, 10-13 December 2000, Orlando, FL, USA, ISBN 0-7803-6579-8.
The Effect of Using Animated Work Instructions Over Text and Static Graphics When Performing a Small Scale Engineering Assembly
Gareth Watson a,1, Dr Ricky Curran b, Dr Joe Butterfield c and Dr Cathy Craig d
a PhD Student, School of Mechanical and Aeronautical Engineering, Queens University Belfast
b Senior Lecturer, School of Mechanical and Aeronautical Engineering, Queens University Belfast
c Research Fellow, Northern Ireland Technology Centre, Queens University Belfast
d Senior Lecturer, School of Psychology, Queens University Belfast
Abstract. Digital Manufacturing technologies can yield geometrically accurate dynamic assembly sequences to be used as work instructions. An independent groups experiment was carried out in order to investigate the effects of different instructional media on performance on a small scale mechanical assembly task. Twenty four participants completed the assembly task a total of five times on consecutive weekdays. Three types of unimodal instruction sets were designed and delivered via a laptop computer: text only, static CAD diagrams and CAD animation. Build times were recorded for each participant and plotted as a learning curve. Results suggested that the use of animated instructions can reduce initial build times, as the mean build time at build one was 37% and 16% quicker than those of the text and diagrams groups respectively. The beneficial effect diminished after the first build; however, the graphics (diagrams and animation) groups continued to yield quicker mean build times up until build 3. Results are discussed in light of cognitive theories relating to how we process instructional information.
Keywords. Digital Manufacturing, Instructional Media, Mental Models, Cognitive Processing
1 Introduction
Advances in computer-assisted technologies over the last several decades have meant that digital mock-ups, simulations, animations, and immersive and desktop virtual environments can be created which allow the user to visualise concepts,
1 PhD Student, School of Mechanical and Aeronautical Engineering and School of Psychology (Interdisciplinary Project), Queens University Belfast, David Kier Building, 1830 Malone Road, Belfast BT7 1NN, Northern Ireland, UK; Tel: +44 (0) 2890 974686; Email: [email protected]
objects and phenomena in a more realistic, real-time and sometimes dynamic way, thus picking up on certain types of information that could otherwise be hard to convey. For example, when information regarding motion is conveyed via 2D static diagrams, arrows and other symbols must be used in order to 'suggest' or imply motion, whereas in a dynamic visualisation, motion can be observed. The availability of systems like DELMIA, offering a fully integrated digital manufacturing environment, now means that it is possible to generate training materials directly from build sequences and bills of material originating from the central data hub. When product designs and assembly sequences have been created, animated assembly procedures complete with instructional data can be generated as part of the 'process design' procedure. The use of an integrated digital manufacturing platform means that the quality of instructional material and the speed with which it can be delivered may offer an improvement on traditional methods. Comparative studies of different instructional modes have been carried out within the engineering literature, often including virtual reality conditions. For example, Tang conducted a study to test the relative effectiveness of Augmented Reality (AR) instructions and found that the overlay of instructions on actual work pieces (AR) reduced the error rate for the assembly task by 82% [10]. In a study performed to assess the effectiveness of virtual reality in assembly planning, Banerjee reported that subjects could, on average, perform the assembly operations in approximately half the time in the immersive and non-immersive VR environments compared to the traditional environments using engineering blueprints [3]. Baird and Barfield conducted a piece of research in which the instructional type was varied for the construction of a computer motherboard; results showed that, among the four types of instructional media (paper, computer aided, opaque AR and see-through AR), the AR conditions resulted in the fastest completion times and the least number of errors. In order for instructions to facilitate operator learning, it is vital to consider how the end user will process the information presented and turn it into action. Within the psychology literature, there is a limited body of research that considers these issues. Ganier, Gombert and Fayol conducted a study exploring the effects of instructional format on procedural learning when faced with a new device (a household appliance) [6]. Findings suggested that pictures facilitated procedural learning: the execution phase of the task was shortest for text and pictures combined, the reading time was shortest for the pictures, the inspection (of the object) time was longest for the text, and the total time of the task from start to finish was longest for the text. A model is proposed by Ganier as to how a reader of instructions moves from perceiving the instructions to performing an action [6]. According to this model, people either jointly or sequentially activate and/or maintain the goal of the task in Working Memory (limited in terms of both time and processing capacity [1]); encode the instructions; encode the characteristics of the device; elaborate both an integrated representation (mental model) of all these sources and an action plan; and finally, execute the action. The model also implies that users can control the procedure and, within each step, compare the state of the device to the initial goal.
It suggests that users are able to regulate all of these cognitive processes until the
initial goal is achieved. When applied to the findings of the study, the model suggests that pictures facilitate encoding and the integration of information from different sources, yet a combination of the two (multimedia) will allow the formation of a more complete representation, richer than the one induced by the processing of either format alone, while also using up less cognitive effort. Guthrie, Bennett and Weber had previously proposed a Transformational model, whereby information represented verbally in a procedural text must be transformed into a procedure represented behaviourally in a performance. The model proposes that in order for this transformation to be successful, users must form a conceptual model of the performance, encode procedures from the document, engage in self-testing and conduct self-corrections to repair mistakes [7]. A novel approach is adopted in the current study, where digital manufacturing methods are applied in order to produce animated work instructions and compare them to static CAD diagrams and text-only instructions. The end user, a human operator, must also be considered when evaluating their use: what is it about the delivery of assembly information via animated, static or textual means that may or may not facilitate performance? This question can only be addressed through a collaborative approach, amalgamating underlying psychological theory with applied methods in the digital manufacturing domain. The current study compares animated instructions to traditional methods for a small scale assembly carried out on consecutive days, in order to evaluate their effectiveness in a comparative manner and also to explore the effect of the instructional format on the learning curve.
2 Method
2.1 Participants
A total of thirty undergraduate students, postgraduate students and post-doctoral researchers were selected through convenience sampling from the School of Psychology at Queens University Belfast. Participants took part on a voluntary basis and were all effectively novices with regard to the skills required for the assembly. Full ethics approval was granted for the study.
2.2 Apparatus
2.2.1 Assembly Task
Participants were required to complete an assembly task once a day, on five consecutive days, according to the instructions presented. A reverse rotation device was created using an AUTOMAT Engineering Kit. The handheld device comprised 49 separate parts assembled in such a way that when a crank arm was turned in a clockwise direction, the rotation of a propeller at the opposite end of the assembly was reversed. The assembly process was composed of four main sections.
544
G. Watson, R. Curran, J. Butterfield and C. Craig
2.2.2 Stimulus materials
All instructions were monitor based and visual, i.e. there were no auditory elements to the instruction. They were displayed on a DELL Precision M90 laptop computer on the desk where the participant sat. All three instructional types (text, static diagram and animation) were delivered as a Microsoft PowerPoint (2003) presentation.
2.2.3 Monitor Based Text Instructions
A basic set of text instructions was created by the experimenter, having videoed himself carrying out the assembly. Each procedural step was bullet pointed and there was one action per bullet point.
2.2.4 Monitor Based Diagram Instructions
Computer Aided Design (CAD) was used to create the set of 3D static diagram instructions. First, a digital 3D mock-up of the reverse rotation device was created in the CATIA Version 5 software (Dassault Systemes). Each part was created with accurate geometric measurements, and colour and detail were added to enhance realism. Once each part had been recreated individually, the parts were assembled within the software part by part, following exactly the same assembly sequence as the text instructions to maintain equivalence of information. Screen shots were taken after each procedural step.
2.2.5 Monitor Based Animated Instructions
The assembly process was animated within the DELMIA V5 software. This followed exactly the sequence of the text instructions, so as to maintain equivalence of information. The sequence was then segmented into separate chapters encompassing the four main sections of the build.
2.3 Design
A mixed design was used in the current study. The three instructional groups represented the independent groups to be compared; however, each person built the device five times, allowing a within-groups performance assessment as well. Learning curves were plotted, enabling the experimenter to compare both between-groups performance at each build and within-groups performance across builds, to see if one instructional group descended the learning curve more quickly. The independent variable in the current study was the type of instruction, with three levels: text, static diagram and animated instructions. The dependent variable was total build time (extracted from video footage).
2.4 Procedure
Participants were invited into the lab and, having given consent to participate in the study, were randomly assigned to one of the three groups. The experimental protocol was explained verbally from a standardised set of instructions. A demonstration was then given on how to navigate through the slideshow using the software controls, and participants were given the opportunity to practise. The participant was then given a period of time (up to 5 minutes) to preview the instructions and was informed that they could start the build at any time within the five minutes. At this stage, the experimenter set a camcorder to record and retired to the observation room. When the participant had completed the assembly, the experimenter emerged from the observation room and stopped the recording equipment. Trials were carried out on consecutive days, Monday to Friday, in order to keep the time between each build relatively constant.
3 Results
3.1 General Trends
As can be seen from Figure 1a, the general trend is that the text instructions group yielded the slowest build times compared to the diagram and animation groups. However, all mean times improve across consecutive builds, with the most noticeable improvements being evident for builds 2 and 3; for builds 4 and 5 only small differences exist between the groups. The means and standard deviations for each group are shown in Table 1. Mean build times were extracted from video footage and averaged across participants in each instructional group. They are in seconds and represent the time from when the first part is picked up to when the device is completely assembled.
[Figure 1a line chart: Mean Build Times, Text v Diagrams v Animation; total build time (s, 200-1100) against build number (1-5) for the TEXT, DIAGRAMS and ANIMATION groups, with the animation run time shown for reference.]
Figure 1. a) Mean Build Times for all participants in each of the three instructional groups over the five builds. b) CAD model of assembled device.
3.2 Statistical Analyses
The basic assumptions of ANOVA were met, so a mixed-design analysis of variance was conducted to explore the impact of type of instruction (text; diagram; animation) on performance in an assembly task over five builds, as measured by the total build time (s). A main effect of type of instruction was found (F(2, 21) = 4.655; p = 0.021), i.e. the type of instruction affected the time taken to assemble the device. The benefit of animated instructions is quite apparent at build 1 (first exposure to the assembly task), where animation (679.40 s) < diagrams (807.13 s) < text (1076.17 s). The mean build time for the animation group was 37% lower than that of the text group (p = 0.005) and 16% lower than that of the diagrams group (p = 0.077, approaching significance). After build 1, the benefit of animation over static diagrams becomes much less apparent; in fact, the mean build times for the static diagram and animation groups are very similar for the remainder of the builds. However, the 'graphics' groups (animation and static diagrams) continue to demonstrate an advantage over the text group at builds 2 and 3. At build 2, text (653.83 s) is 47% slower than diagrams (443.63 s) and 45% slower than animation (449.90 s) (p = 0.010 and p = 0.009 respectively). At build 3, the mean build time for the text group (471.17 s) is 27% slower than the diagram group (371.00 s) and 20% slower than the animation group (391.10 s). To conclude, there appears to be an immediate benefit of using animated instructions over text and diagrams for the first build, after which this effect diminishes. There then appears to be an advantage of using either graphical means over text up until build 4, where the effect of type of instruction is no longer evident, as an optimum level of performance, regardless of instructional type, has been reached. As well as the differences between the groups at builds 1, 2, 3 and so on, it is clear from Figure 1(a) that the rate of improvement between each build diminishes as builds go on, in accordance with learning curve theory. A mixed-design ANOVA was carried out; however, the assumption of sphericity was violated and so Greenhouse-Geisser corrected values are reported. The analysis revealed a main effect of build (F(1.583, 84) = 128.732; p < 0.005), which confirms that the build number has an effect on build time. This, in itself, is not surprising, as practice will improve build times. What is interesting here is that for the animation and diagram groups, after an initial steep reduction between builds 1 and 2, both curves remain quite flat. For text, there are steep reductions in build times between builds 1 (1076.17 s) and 2 (653.83 s) (F(1,5) = 24.006; p = 0.004), 2 and 3 (471.17 s) (F(1,5) = 31.302; p = 0.003), and 3 and 4 (391.17 s) (F(1,5) = 23.216; p = 0.005). After build 3, all curves start to plateau, suggesting that the minimum assembly time is approached and subsequent builds do not see significant improvements. This would suggest that 'graphic' instructions are advantageous over text-based ones, as they seem to meet the ideal, horizontal level of optimal performance earlier on, i.e. the mean build time reached by the text group at build 5 is reached by the graphic groups at build 3.
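For clarity, and purely as a restatement of the group means already quoted above, the build-1 percentages follow directly from those means: (1076.17 - 679.40)/1076.17 is approximately 37% and (807.13 - 679.40)/807.13 is approximately 16%, while the later comparisons express the text mean relative to each graphics group, e.g. at build 2, 653.83/443.63 - 1 is approximately 47%.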
3.3 Subassemblies
As explained in the method, the entire assembly process in this particular study was divided into four distinct subassemblies. To analyse the results further, each subassembly was explored in isolation over the five builds, in order to see whether the general trend held true for each sub-build, or whether certain sections lend themselves to a more thorough verbal explanation (e.g. text instructions) or to the portrayal of temporal information (e.g. animation). As can be seen from Figure 2, the general trend is not followed exactly throughout each subassembly. For example, during the first build, the mean build time for subassembly 1 is slowest for the static diagrams group. The most pronounced differences occur in subassembly 2, where post hoc tests revealed that early on (builds 1 and 2) text yielded significantly slower build times than animation (Build 1: p = 0.004; Build 2: p = 0.017). The difference between the two graphic groups, diagrams and animation, was not found to be significant. From build 3 onwards, the curves plateau and little separates any of the groups. It is also interesting to note that after build 2, for subassembly 1, the mean build time for the animation group is lower than the actual run time of the animation, suggesting that that particular sequence had been learned and was not being imitated directly from the animation.
[Figure 2: four line charts of mean build times (s) against build number (1-5) for subassemblies 1, 2, 3 and 4, each comparing the TEXT, DIAGRAMS and ANIMATION groups, with the animation run time shown for reference.]
Figure 2. Mean build times for subassembly 1,2,3 and 4 for text, diagrams and animation groups over five builds.
4 Discussion
4.1 Summary of Results
Findings revealed that there was an immediate beneficial effect of animation over static diagrams and text instructions at build 1. This effect diminished after build 1 and, from then on, no further benefit of animation over static diagrams was observed. Text continued to yield the slowest mean build times; however, no differences were significant after build 3. This general trend was not followed through each individual subassembly, suggesting that certain characteristics of a subassembly will be portrayed more effectively through one particular instructional format than another, for example when there are several possible orientations of a part.
4.2 The Construction of Mental Models
Central to both the Ganier model [6] and the Guthrie model [7] put forward in the introduction is the construction of an internal conceptual model, or mental model. The construction of an accurate mental model in a procedural assembly task allows for goal formation and provides a reference point for relating instructional information and for checking progress throughout the build. Both of these models have been applied to findings from studies involving the use of text, diagram and multimedia instructions; however, they have not, until now, been considered when discussing the effectiveness of animated instructions. Animated instructions present an informationally rich external visualisation of the procedure and the device in question, so does this produce an informationally rich internal conceptualisation? Results from the current study suggest that they do, even prior to the first build, as performance in the first build is superior in terms of build time. This finding can be considered in relation to the construction of mental models from instructional materials. Ganier suggests, based on models of text and picture comprehension, that when mental models (and action plans) are constructed from text (verbal information), an internal verbal representation is constructed, followed by a propositional representation which then allows the user to build a 'situational' mental model [5]. In other words, the mental model is built solely from verbal information and is left open to variability in the user's interpretation of the device. Static diagrams allow a more 'direct mapping' as they are essentially an external visualisation and provide spatial information (position in space). Animation also falls into this bracket; however, animation not only provides spatial information but adds another layer of information through its temporal component (change in position in space over time). This extra 'layer' of information means that there is an even more direct mapping from external representation to internal representation, as motion between static frames is portrayed directly and does not have to be implied or inferred. For the conditions where the temporal or spatial element is lacking (diagrams and text respectively), it may be the case that in order to 'complete' or elaborate the mental model that has been constructed from the
processing of the instructions, the build must be completed – so that the user can integrate spatial and temporal components, derived from self-observation, into the internal model. The fact that after build 1 there is no advantage of animated graphics over static ones supports this theory: any temporal information has been integrated from doing the build once. Slower rates of improvement observed in the static and animation groups also support this. Only when this model is complete will the process be learned – this is where the curve would plateau. It is clear that animation can provide a relatively complete mental model from the outset, resulting in a quicker build than diagrams and text at build 1. Guthrie suggests that the encoding phase of their model would be facilitated by information that emphasises temporal elements – as animation does in this context [7].

4.3 The 'Depth of Processing' Theory

The immediate benefit of animation over diagrams and text could be due to a reduced amount of cognitive processing. The instructional animation, because it is detailed in its portrayal of spatial and temporal aspects of the build, may be viewed in a passive way with a mimicry strategy adopted, i.e. follow the screen rather than think about it. The static diagrams, however, require inferences of movement between frames, and the text requires reading and the encoding of verbal information – a deeper level of cognitive processing [9]. Also, in terms of part recognition, when parts are named verbally (as in text) rather than represented visually, there is an additional step involved in relating the name of the part to the physical part. For the diagrams and animation, it is possible to map the part in the instructions directly to the physical part. Although this may suggest that animated instructions do not lend themselves to developing a thorough understanding of the assembly, they do seem an effective strategy for 'getting the job done' when quick performance, rather than thorough understanding, is the preferred outcome. Further work would need to be carried out to see whether this passive style of learning from animation affects retention and the transferability of skills from one task to another.

4.4 Conclusions and Real World Implications

Having discussed the findings, it is evident that for this particular assembly procedure – where the device was abstract and participants were novices in terms of their experience with the assembly kit and the device – there is a benefit in using animated, CAD-modelled instructions. This benefit is an immediate one: at the first exposure to the instructions and task, the richer spatial-temporal elements enable the construction of an accurate and elaborate mental model that is referred to as the participant progresses through the build. The effect then diminishes after build one, from which point there is little to separate the two graphic groups. Although this is a small-scale assembly task, the underlying cognitive processes involved in understanding instructional information and translating it into successful actions are applicable to larger-scale assemblies. Regarding Computer Aided Instruction in a digital manufacturing
environment, the findings of the current study support their use, both by demonstrating their effectiveness and by adding some theoretical justification and explanation as to why they may facilitate performance. The current study suggests that animated assembly instructions will not only reduce first-build assembly times but also facilitate the learning process, i.e. an optimum level of performance is reached at an earlier stage in the learning curve than with widely used traditional means of instruction, such as text.
5 References

[1] Baddeley A. Working Memory. Oxford: Oxford University Press, 1986.
[2] Baird KM, Barfield W. Evaluating the effectiveness of augmented reality displays for a manual assembly task. Virtual Reality, 1999; 4(4), 250-259.
[3] Banerjee A, Banerjee P, Ye N, Dech F. Assembly Planning Effectiveness using Virtual Reality. Presence, 1999; 8(2), 204-217.
[5] Ganier F. Factors Affecting the Processing of Procedural Instructions: Implications for Document Design. IEEE Transactions on Professional Communication, 2004; 47(1), 15-26.
[6] Ganier F, Gombert J, Fayol M. Effets du Format de Presentation des Instructions sur L'Apprentissage de Procedures a L'aide de Documents Techniques. Le Travail Humain, 2000; 63, 121-152.
[7] Guthrie JT, Bennett S, Weber S. Processing Procedural Documents: A Cognitive Model for Following Written Directions. Educational Psychology Review, 1991; 3, 249-265.
[9] Mayer RE, Hegarty M, Mayer S, Campbell J. When static media promote active learning: Annotated illustrations versus narrated animations in multimedia instruction. Journal of Experimental Psychology: Applied, 2005; 11, 256-265.
[10] Tang A, Owen C, Biocca F, Mou W. Comparative Effectiveness of Augmented Reality in Object Assembly. In: Proceedings of CHI 2003, Ft Lauderdale, Florida.
Digital Lean Manufacture (DLM): A New Management Methodology for Production Operations Integration
R. Curran a, R. Collins b, G. Poots b, T. Edgar b, C. Higgins b and J. Butterfield c
a
Director of the Centre of Excellence for Integrated Aircraft Technologies, Reader, School of Mechanical and Aerospace Engineering, Queens University Belfast, NI, UK (Professor of Aerospace Management and Operations, TU Delft). b
Northern Ireland Technology Centre (NITC) and c CEIAT, School of Mechanical and Aerospace Engineering, Queens University Belfast, NI, UK

Abstract. A methodology for the systematic integration of digital manufacturing is presented through Digital Lean Manufacturing (DLM). DLM offers a new management methodology for production operations integration that achieves vertical and horizontal integration of process, tools and systemic manufacturing effort. Vertical integration was achieved through the hierarchical structuring of effort according to business process, integrated cross-functional manufacturing processes, specific manufacturing activity and knowledge capture. Horizontal integration was achieved through the mapping of processes within functional swimlanes and through the specific activity mapping of the Digital, Lean and Manufacturing functions in particular. Validation elements have also been presented in this initial positioning of DLM as a new and novel management methodology for production operations integration.

Keywords. Manufacturing system, digital manufacture, Lean, process modelling, manufacturing integration
1 Introduction

The main aim of the paper is to present a new management methodology for Digital Lean Manufacturing (DLM). The general concept of DLM was first presented by Curran et al. (2007) and the work herein now presents the specific methodology for DLM developed at Queens University Belfast (QUB). QUB are now working primarily with Bombardier Aerospace Belfast (BAB) to validate the methodology as part of the £2.5M DTI-funded PreMade research project, which involves a consortium of 12 partners also including DELMIA, Galorath, Bombardier Transport, Thales, Cardiff University (Lean Engineering Research Centre) and the Welsh Aerospace Forum. However, the paper already includes evidence of early validation, particularly for the lower-level elements of DLM,
while also referring to the higher-level validation garnered from expert industry (BAB) opinion on the methodological approach. Consequently, although the DLM methodology is herein presented in a more generic format, the authors are currently working within the PreMade consortium to develop a specific implementation solution termed DLM-MAP, which is currently assessed to be at a Technology Readiness Level (TRL) of 6, i.e. moving out of the adaptation phase into the validation phase. There are many research groups that are currently looking at an integrated approach to design and digital manufacturing technologies [1-3] and, in a wider context, Product Life Management (PLM) modeling. It is not surprising that manufacture is now being elevated into the digital and virtual worlds, and major aerospace players are now courting such Digital Manufacturing tools. Digital Manufacturing is an emerging software technology that will become a fundamental and liberating component of Product Lifecycle Management (PLM). Bob Klem, GM's Global Director – Information Systems Services, has recently been quoted [4] as saying: "From an IT perspective, the main components are pretty much in place. Our role is in developing an IT toolkit and PLM Management process. The emphasis, or competitive advantage, is in the process and how we use it." This quote encapsulates the rationale underwriting this paper: to present Digital Lean Manufacture (DLM) as a validated methodology for the manufacturing element within PLM management, essentially enforcing an integrated process that effectively utilizes an IT toolkit to facilitate competitive advantage. Consequently, the paper will begin with a brief overview of the state-of-the-art in Digital Manufacture and Lean as the context for the presentation of the DLM methodology. Subsequently, the paper will present an exemplar case study of the type of DLM activity associated with Tier 3 of the methodology, before addressing the more general validation associated with the integrated approach that is presented in Tier 2.
2 State of the Art

There are many research groups who are currently looking at developing a more integrated approach to digital manufacturing [1-3]. This is also being driven by the fact that all major aerospace producers are now using digital manufacturing tools. According to a Dassault Systèmes press conference [5], "DELMIA – the leading digital manufacturing tool from Dassault Systèmes – was selected to join the team developing the new A380, Airbus' 21st century super jumbo aircraft, for the final assembly processes in Airbus' new manufacturing facility in Hamburg, Germany". As well as Boeing [6], Bombardier, Lockheed Martin, etc., other industries are also taking a lead, such as MicroTurbo, Hitachi Zosen and Volkswagen, who are using the Dassault platform as a PLM solution.
Although digital design mock-up is now well established, championed for example by Dassault Aviation on their Falcon aircraft programme, digital manufacture is an emerging software technology that has become a key component of Product Lifecycle Management or PLM. A key component of this is the management and inter-relation of product, process and resource data. For example, product information is defined within an Engineering Bill of Material (E-BOM), each element of which is explicitly linked to the geometric solid model that constitutes part of the digital mock-up. The E-BOM can be used to generate the Manufacturing Bill of Material (M-BOM), which is then organised according to the Work Breakdown Structure (WBS) defined by the process steps required to produce the product. The resources consumed can then be associated with the process stages so that all of this information is held in an inter-relational database that facilitates management, planning and control of the complete process (a simple illustrative sketch of these relationships is given at the end of this passage). Analysis can be performed for optimisation in terms of network optimisation, clash detection, ergonomic study, cost estimation and control, work instruction creation, design modification, etc. Independent surveys and studies conducted by CIMdata [7, 8], a leading independent worldwide strategic consultancy specialising in the use of PLM tools, have confirmed that digital manufacturing technology is "a key component for today's manufacturing industry, thus providing the OEM's proper tools to achieve real savings." The advantages and benefits from such a tool are many, but they have highlighted the reduction in the number of design changes, more effective communication and collaboration, and also savings in tool design: CIMdata have estimated an average improvement in overall production cost of 13%, increased production throughput of 15%, and a 30% improvement in overall time-to-market. As stated in a paper from Delft University [9], aircraft design is the product of a complex process which involves many different fields and techniques. Indeed, concurrent engineering is a collaborative approach that can be simulated in part by a platform such as the Dassault V5 software. However, "a tool to make the concurrent engineering approach feasible on a large scale, as required by a civil aircraft-like product development, is not yet available" [9]. There is a wide range of approaches to general management theory and production operations, including supply chain management [10], Total Quality Management [11], Time Based Competition [12, 13], Business Process Reengineering [14], Theory of Constraints [15, 16], Quick Response manufacturing [17], Agile manufacturing [18], Leagile [19] and Lean thinking [20, 21]. However, Lean thinking is widely regarded as one of the most influential [22] and is treated as a key initiative by the UK government sponsored Foresight Manufacturing 2020 Panel [23]. Similarly, Lean implementation has also formed the focus of the UK government sponsored (DTI) industry forum adaptation schemes in such disparate industries as automotive, ceramics, textiles, oil and gas, metals, process and chemicals, shipbuilding, agrifood, construction, furniture production and aerospace.
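As flagged above, the following is a minimal sketch of the product–process–resource relationships behind an E-BOM, M-BOM and WBS. All part numbers, model references, step names and resources are hypothetical, and the structure is illustrative rather than that of any particular PLM system.

from dataclasses import dataclass

@dataclass
class Part:
    number: str       # E-BOM item, explicitly linked to a solid model in the digital mock-up
    model_ref: str

@dataclass
class ProcessStep:
    name: str         # WBS element defined by a production process step
    consumes: list    # part numbers drawn from the E-BOM
    resources: list   # tools, jigs, labour, etc. associated with the step

# Engineering Bill of Material (illustrative entries only)
e_bom = [Part("P-001", "skin_panel.model"),
         Part("P-002", "stringer.model"),
         Part("P-003", "bracket.model")]

# Manufacturing Bill of Material: the same parts regrouped by process step (WBS)
m_bom = [
    ProcessStep("Locate skin in jig", consumes=["P-001"], resources=["assembly jig"]),
    ProcessStep("Fit stringers",      consumes=["P-002"], resources=["drill", "clamps"]),
    ProcessStep("Attach brackets",    consumes=["P-003"], resources=["rivet gun"]),
]

# The inter-relational view: which resources touch which E-BOM items
for step in m_bom:
    for part_number in step.consumes:
        print(part_number, "->", step.name, "using", ", ".join(step.resources))

This kind of cross-referencing is what allows analyses such as clash detection, cost roll-up and work instruction generation to draw on a single data set.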
In the 1980s, the International Motor Vehicle Programme (IMVP) set about benchmarking the performance of over 90 automotive final assembly and component plants in seventeen countries and found that a subset of these plants consistently achieved a 2:1 ratio of quality and productivity performance when compared to the rest. These 'world class' plants were predominantly Japanese owned and led by Toyota and its supply base, and crucially, they were seen to be using fundamentally different operational practices that aimed to maximise the material flow rate through the supply chain [20]. This excellence led to a number of publications that addressed the Toyota Production System [24, 25] and Just-in-Time manufacturing [26, 27], but the term 'Lean' was first applied by the IMVP researcher John Krafcik [28]. However, the seminal book that documented the IMVP programme and identified the associated Lean practices, 'The Machine that Changed the World' [20], is recognised as one of the most cited and influential works in operations management [29, 30]. A significant proportion of the subsequent 'Lean' literature has focused on the definition of its attributes and characteristics [31, 32], and the application of Lean principles in different industrial settings, including Lean tools and techniques such as 5S [33], SMED [34] and kaizen [35], as well as implementation methods and mapping techniques [36-38]. The authors like to simplify the concept of Lean as: 'being mean with the consumption of time and resources in response to the need for customer value'. Of course, explicit tools and techniques have developed and the concept has evolved from a focus on shop floor operations ('Lean Production'), to the underlying thought process ('Lean Thinking'), and currently to the more holistic appreciation of the value chain involved in the production and supply of a good or service up to its point of consumption ('Lean Enterprise'). However, despite the widespread appreciation of Lean [39, 40], much of the literature tends to relate to the production and delivery processes for high volume, mass production manufacturing firms in long product lifecycle industries such as automotive [41]. Contemporary research into Lean needs to address some apparent underlying challenges, such as: the 'fit' of Lean within different types of operating and production environments; the development of new management accounting methodologies that are supportive rather than resistant to Lean implementation; and the relationship/opportunities between Lean and Information and Communication Technologies (ICT) - notably Digital Manufacturing and ERP [42]. However, one of the key elements that underwrite this paper is the potential for digital manufacture to provide an integrating platform that facilitates the implementation of Lean, as well as the integration with other engineering functions. As part of their assessment [43] of the state-of-the-art in design and manufacture modelling, the US National Research Council constructed Fig. 1. They conclude that: "There is little overlap between manufacturing modelling and simulation tools, or manufacturing process planning, and engineering design tools, reflecting the lack of interoperability between these steps with currently available software".
[Figure 1 labels: Mission Needs; levels – Enterprise, Strategic, Tactical, Execution; functions – Product Planning, Product Architecture, Engineering Design, Manufacturing Engineering, Manufacturing Operations, Field Operations; tools – Enterprise Resource Planning, Forecasting, Engineering Modelling/Simulation/Visualisation, Manufacturing Modelling/Simulation/Visualisation, Computer-Aided Engineering, Design Automation, Computer-Aided Geometric Design, Product Data Management, Logistics, Process Planning, Purchasing, Supervisory Control, Machine Control.]
Figure 1. An assessment of design and manufacture tools, 2004
Currently, there are no tools on the market that link conceptual design, manufacturability, Lean thinking, total cost, simulation and Knowledge Based Engineering. The Foresight Manufacturing 2020 Panel [23] recommended that, strategically, the research community should strengthen its relevance to the future needs of UK manufacturers and develop "tools…to enable real-time modelling and decision-making inside a company and shared with customers and suppliers in the chain" as well as "agile, lean and remote manufacturing technologies and systems greatly to increase value in manufacturing processes, drive out waste and enable mass customisation". It is evident that the fundamental understanding and application expertise does not yet exist in this area of engineering, although a vast amount of research effort has been expended over the last decades. In addition, it is a real concern to note an apparent lack of investment within the UK and EU in simulation technologies, highlighted by the fact that 40% of DELMIA manufacturing simulation software users are now located in Asia [44]. The DTI also states the need for a radical shift in engineering where manufacturing is finally brought into design through modelling and simulation. The November Call of the DTI Technology Programme stated: "Design, Simulation & Modelling are powerful tools that allow designers and developers to envisage new systems, products and services, and facilitate their better design, engineering, manufacture, operation, use and end of life recycling". DLM is presented herein as a key piece of the required solution as, crucially, it facilitates a Lean approach to the integration of manufacturing into a digital platform that can be integrated readily into the design sphere, thereby addressing one of the key challenges of the Aerospace Innovation and Growth Team [45].
3 Digital Lean Manufacture (DLM) Methodology

The primary importance of this paper is the following presentation of the Digital Lean Manufacture (DLM) methodology. The state-of-the-art review has established that the concept of Lean has developed from the shop floor, to the underlying thinking, to the application across the enterprise. Similarly, the development of supporting IT systems has progressed from bespoke software, to Product Life Management (PLM) methodology, to Digital Manufacturing software platforms, as illustrated in Fig. 2. Consequently, the authors are proposing that it is timely to achieve the true integration of these two complementary developments, and therefore the paper presents DLM as a consolidating methodology that facilitates the full exploitation of state-of-the-art digital manufacturing capability while simultaneously offering a systemic IT solution to the true integration of Lean into the production system (see Fig. 2). It is anticipated that in future this will extend to the full enterprise, for example with regard to collaborative and distributed engineering activity, supply chain integration and through-life systems support. However, the already ambitious premise of the paper is fundamentally based on the need for the synthesis of digital manufacturing with production systems and manufacturing process knowledge, and the Lean approach. DLM is presented as a methodology that the authors are currently developing collaboratively with the aerospace industry to that end.
[Figure 2 content: Bespoke Software → PLM → Digital Manufacture, and Lean Manufacture → Lean Thinking → Lean Enterprise, converging into Digital Lean Manufacture (DLM).]
Figure 2. Synthesis of digital manufacturing and Lean developments into Digital Lean Manufacture (DLM)
In developing the DLM methodology, the authors were aware of the need for any manufacturing systems methodology to integrate into the business process. The term business process is here used loosely in reference to the systems engineering concept of structuring the Integrated Product Process Development (IPPD) effort into a sequence of key phases that are separated by major review gates, with associated exit criteria to be satisfied before the business case is established for moving to the next phase of engineering effort. A generic representation of this is shown in Fig. 3, which is essentially a life-cycle view of the operations to be managed at these key life-cycle stages, whether design, manufacturing or service. All of the large aerospace companies, for example, will have their own version of such a business model, defined with additional sub-processes, milestones, etc.; a good example of this in the public domain is the MOD's CADMID cycle.
Figure 3. The chronological business process associated with key phases of the product life cycle
The aerospace business process therefore provides the framework and basic drumbeat to which manufacturing must adhere, as must design. The design effort tends to be seen as the driving function, whereas Fig. 3 highlights that the primary effort is in addressing a life-cycle balanced solution to the product that provides the customer with maximum value. Consequently, once a project has been initiated through a successful bid or market survey, there will already have been conceptual effort expended in providing an initial definition of the product. This engineering effort must be of an integrated nature from the very earliest conception, so that the life-cycle balanced solution is reached. With particular reference to the issues of integration between design and manufacture, this highlights the need to move away from an 'over the wall' approach, where companies tend to start designing before the customer requirements have been identified and then pass their solutions 'over the wall' to manufacturing for them to get on with realizing the physical product. Of course, with a number of initiatives and approaches such as design-build teams, Design for Manufacture, Systems Engineering, IPPD, etc., the folly of the 'over the wall' approach has been recognized, but it still tends to be a major challenge. However, it was shown in Fig. 1 from the US National Research Council that there is a general lack of integration of tools and that this is particularly true of linking manufacturing into the business process. DLM is primarily a methodology which aims to integrate manufacturing processes and the Lean approach within a digital manufacturing design, modelling and simulation platform. Consequently, the main aim is to provide a methodology that presents a process architecture that facilitates integration, where all of the manufacturing design, modelling and simulation tools are provided on the one systems platform, i.e. digital manufacturing. The underlying structure of DLM is illustrated in Figure 4, where Tier 1 represents the business process presented in Figure 3 and falls within the remit of general Product Life Management (PLM). Tier 2 refers to DLM-MAP Process and falls within the remit of systemic manufacture (noun) and the Lean integration of all manufacturing processes; where DLM-MAP is a specific reference to a
software version of the methodology being developed by the authors as part of a DTI-sponsored initiative [42]. Tier 3 refers to DLM-MAP Activity and falls within the remit of specific manufacturing (verb) activities that are facilitated using digital manufacture and Lean principles. Tier 4 refers to Knowledge Capture and falls within the remit of any engineering knowledge, whether tacit or explicit, that needs to be captured, formalized to some extent, and presented to the user for reuse and learning at the functional level. Notwithstanding the latter point, an underlying concern of the authors is that the DLM methodology should not be seen as a static, fixed solution; rather, the framework is generic and flexible enough to encourage the capture of knowledge so that it can be embedded for reuse in the model presented in Figure 4.
Figure 4. The DLM integrated modelling structure
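As an illustration only, the four-tier structure of Figure 4 could be organised in software as a nested record per business-process phase, holding each Tier 2 element together with its Tier 3 modules and Tier 4 knowledge items. All identifiers below are hypothetical and are not the data model of DLM-MAP itself.

# Illustrative only: tier names follow the text; the content strings are hypothetical.
dlm = {
    "tier1_business_process": ["Bid", "Concept", "Detail", "Build"],
    "tier2_process_maps": {
        "Detail": {
            "Generate Work Instructions": {
                "swim_lane": "Digital Environment",
                "tier3_modules": ["activity process map", "user checklist",
                                  "benefits and metrics table"],
                "tier4_knowledge": ["example work instructions", "shop-floor issues log"],
            }
        }
    },
}

# Navigating from the business process down to activity-level support material
element = dlm["tier2_process_maps"]["Detail"]["Generate Work Instructions"]
print(element["tier3_modules"])

Such a structure makes the vertical (hierarchical) integration explicit: a change captured at Tier 4 remains attached to the Tier 2 element and, through it, to the Tier 1 phase it supports.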
Tier 4 relates to the explicit capture of knowledge that will aid the engineer in the implementation of specific Tier 3 activities, but it will be seen that knowledge capture is a fundamental requirement of Tier 2, while Tier 1 will be nurtured by the growth of this understanding and knowledge so that the ensuing 'wisdom' can be used to improve the Tier 1 business process – the latter most probably through increased opportunity for concurrency and improved integration of functional activity and IT systems. Figure 5 illustrates the mapping of the generic business process (Tier 1) to the DLM process integration (Tier 2). As part of the PreMade DTI project [42], the generic business process is defined in manufacturing terms as Bid, Concept, Detail and Build. The validation work on DLM presented in this paper is restricted to these phases, although operational service integration is being investigated in another research programme at QUB [46], with reference to Fig. 3. Notwithstanding, the key phases detailed in Fig. 5 represent the bulk of the core manufacturing processes and operations carried out during the product life-cycle. It is also evident from Figure 5 that each of the key phases is then expanded into systemic process maps within Tier 2. In developing the DLM methodology towards implementation, the intention was to capture and organize all of the
general manufacturing activity and processes into a graphical chart so that roles, chronology and function are made explicit. Moreover, the Tier 2 maps provide the user’s current understanding of the ‘circuitry’, interlinkage and potential dependency between the various cross-functional manufacturing activities.
Figure 5. Mapping of the generic business process (Tier 1) to the DLM process integration (Tier 2)
The Tier 2 perspective offers the fundamental opportunity for: 1) greater concurrency - for lead time reduction, 2) improved robustness – by understanding the dependency and sensitivity between process elements, and 3) explicit definition of the process elements – all of which are implementation deliverables – but equally importantly 4) knowledge capture and reuse – being an improvements oriented deliverable. Furthermore, the integration of Tiers 1 and 2 requires the mapping of all of the information and knowledge captured in Tier 2 into the overriding business process of Tier 1. In practical terms this requires the content of the manufacturing effort from any one phase (typically assimilated into a manufacturing plan) to be assessed relative to appropriate exit criteria from that phase, and to be used as appropriate as an input to the next stage. Specifically, this addresses the challenge of chronological integration (horizontal) as well as the hierarchical integration (vertical) that is being explicitly addressed in the model presented in Figure 4.
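The chronological (horizontal) integration of Tiers 1 and 2 described above can be pictured as a simple gate check: the manufacturing plan content produced in one phase is assessed against that phase's exit criteria before being used as an input to the next phase. The sketch below is illustrative only; the criteria names are hypothetical.

def phase_gate(plan: dict, exit_criteria: list) -> bool:
    """Return True if every exit criterion is satisfied by the phase's manufacturing plan."""
    missing = [c for c in exit_criteria if not plan.get(c, False)]
    if missing:
        print("Hold at gate; outstanding:", missing)
        return False
    return True

# Hypothetical content of a Concept-phase manufacturing plan
concept_plan = {"process routing defined": True,
                "tooling concept agreed": True,
                "cost estimate within target": False}

if phase_gate(concept_plan, list(concept_plan)):
    print("Proceed to Detail phase")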
Figure 6. DLM Tier 2 mapping for the cross-functional manufacturing processes at the concept phase
Figure 6 presents a more detailed view of the content of the DLM-MAP Process charts associated with Tier 2, for the Detail Design phase. In particular, the manufacturing process integration map is divided into the core (horizontal) swim lanes of Digital Environment (D), Lean Thinking (L) and Manufacturing Physical Environment (M); the DLM activity is supplemented with a People swim lane and a Business Process swim lane, which facilitate organizational implementation and place it within the context of the identified business drivers respectively. The time domain has been incorporated through the chronological staging of process elements (nodes) from left to right in Figure 6. The IDEA approach – representing Initiate, Determine, Evaluate and Approve – was conceived for the generic ordering of the key steps in implementing each stage of the business process. It can also be seen that the DLM-MAP solution [42] color-codes primary (blue), secondary (brown) and business (white) elements, and maps all of the key relationships with the critical path. One of the elements within the Approval/Digital Environment cell in Fig. 6 (Generate Work Instructions) is highlighted to show that the DLM-MAP solution allows the user to navigate through to Tier 3 (DLM activity related to specific tasks) by double-clicking these objects. Specific DLM activity guidance and supporting materials are presented to the user in Tier 3 and are represented by DLM knowledge modules that primarily include activity process mapping (see Fig. 7), user checklists (see Fig. 8) and benefits and metrics tables (see Fig. 9), but can be tailored to include any key supporting information or governance that is deemed necessary, e.g. six sigma or Design for Manufacture modules. The examples shown in Figs. 7 through 9 are for the Generate Work Instructions process at Tier 2 that was highlighted in Fig. 6; these DLM knowledge modules are arranged on a Tier 3 workbench that is intended to provide simple and effective help, governance and objective guidance to the domain expert – in this case, work instruction authors.
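Before turning to those examples, a minimal sketch (all field names and example values are hypothetical) of how one Tier 2 process element of the kind shown in Figure 6 might be recorded, capturing its swim lane, IDEA stage, element class, critical-path links and the Tier 3 activity it opens:

from dataclasses import dataclass

@dataclass
class Tier2Element:
    name: str
    swim_lane: str        # Digital, Lean, Manufacturing, People or Business Process
    idea_stage: str       # Initiate, Determine, Evaluate or Approve
    element_class: str    # primary, secondary or business
    predecessors: list    # upstream elements for critical-path mapping
    tier3_activity: str   # activity opened on double-click in DLM-MAP

gen_wi = Tier2Element(
    name="Generate Work Instructions",
    swim_lane="Digital Environment",
    idea_stage="Approve",
    element_class="primary",
    predecessors=["DMU Virtual Build"],
    tier3_activity="work_instructions",
)
print(f"{gen_wi.name}: {gen_wi.swim_lane} / {gen_wi.idea_stage}")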
[Figure 7 content: Planning (inputs) – establish hierarchy of parts, establish hierarchy of processes, establish tooling requirements, DMU virtual build, step-by-step procedure; Authoring (process) – establish hierarchy of need, define instructional format and media type, author instructions, measure instructional content against requirement metrics; Presentation (output) – issue instructions.]
Figure 7. DLM Tier 3 activity mapping: work instructions example
Figure 7 details the specific activity steps that the user should take in the generic generation of digital work instructions and includes the three phases of Planning (inputs), Authoring (process) and Presentation (output). More detailed
information is also presented for each specific step relative to the implementation requirements for a given company, which are likely to be tailored to that company's work environment and needs. Figure 8 shows the checklist, which is structured into Planning Information (inputs), Graphical Content (visual nature) and Textual Content (supporting textual information). Consequently, this aide-memoire helps the user to collate all of the necessary information and to follow due diligence in undertaking the Tier 3 activity.
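As an illustration of how the Figure 7 activity steps and the Figure 8 checklist could be driven from one simple structure, the sketch below walks the three phases in order and flags unchecked items. The step and checklist headings are taken from the figures; everything else is a hypothetical sketch rather than the DLM-MAP implementation.

work_instruction_activity = {
    "Planning (inputs)": ["Establish hierarchy of parts", "Establish hierarchy of processes",
                          "Establish tooling requirements", "DMU virtual build",
                          "Step-by-step procedure"],
    "Authoring (process)": ["Establish hierarchy of need",
                            "Define instructional format and media type",
                            "Author instructions",
                            "Measure content against requirement metrics"],
    "Presentation (output)": ["Issue instructions"],
}

# Hypothetical checklist state for one work instruction package
checklist = {"Planning information": True, "Graphical content": True, "Textual content": False}

for phase, steps in work_instruction_activity.items():
    print(phase)
    for step in steps:
        print("  -", step)

outstanding = [item for item, done in checklist.items() if not done]
print("Checklist items outstanding:", outstanding or "none")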
Figure 8. DLM Tier 3 activity checklist
Figure 9. Tier 3 activity business benefits and metrics
Figure 9 presents the benefits and metrics associated with the work instructions as a means of helping the user appreciate the impact of their work, which can lead to cultural change and improved vertical process integration.
However, the associated metrics offer a very practical and necessary means of providing the user with decision-making criteria and ratings so that they can assess how good their particular solutions are. Key metrics are associated with competitive advantage and therefore include Cost, Quality and Time, but each Tier 3 activity will also have additional metrics of a more engineering nature associated with that activity. Finally, Tier 4 of the DLM methodology represents Knowledge Capture. As for Tier 3, Tier 4 consists of modules, but these tend to be either of a capture nature or of a more supporting and contextual nature – the latter including, for example, sample work instructions for the exemplar provided in this paper, details of shop floor issues, etc. The authors make a distinction in the knowledge content between Tiers 3 and 4 in that Tier 3 primarily holds captured explicit knowledge for reuse, while Tier 4 also helps to capture tacit knowledge and to improve the explicit content formalized at Tier 3. Relative to the knowledge capture element of Tier 4, there tend to be four recognized approaches: direct, indirect, observational and machine-based [47]. General practice recognizes the need to combine as wide a range of techniques as possible to provide a more sequential knowledge capture process facilitated by procedural methods and tools [48-52]. However, in the DLM methodology the authors wanted to incorporate a more flexible knowledge capture process that makes elicitation more intuitive and less time consuming for the DLM-MAP user, i.e. within the demonstrator toolkit being developed in the PreMade project. Consequently, a number of the modules that constitute Tier 4 utilize various elements of the four approaches mentioned by Winstanley [47]. The DLM knowledge capture and reuse approach is essentially a machine- or system-based approach, although this is complementary to the core manufacturing integration function provided by the DLM-MAP interface; this is integrated machine/system-based knowledge capture. In this sense, the knowledge capture is also indirect in that it is a by-product of the user 'just' capturing what they are doing as a statement of their perceived view of best practice. However, the users themselves are required to be observational in capturing their 'expert' approach, rather than a third party interpreting what is happening through interview, participant observation or protocol analysis [53, 54] – especially in terms of the challenge of eliciting the tacit knowledge that informs the decision making of the expert user. In addition, the major role of indirect knowledge capture is primarily addressed through the provision of a flexible and intuitive knowledge capture environment within the Tier 4 user interface. Essentially, this provides an environment and mechanisms for the capture of knowledge that seem to be direct but have been designed to encourage indirect knowledge capture. For example, the user is free to upload any files into the Tier 4 database that they feel are relevant, and is also free to upload comments and opinions. Indirect knowledge capture is also addressed through the correlation of knowledge recorded for all processes represented at Tier
2, for example where expert observations associated with one activity at Tier 3 could impact on another. This element of knowledge refinement requires a specific governance and review process to be put in place for maintaining the knowledge capture and reuse element of DLM. Naturally, direct knowledge capture is exploited as effectively as possible, but without putting undue demand on the user's time and effort. The mechanisms for direct capture should be as sensible and reasonable to the user as possible, and therefore the guiding principle was to minimize the administrative overhead while including only modules that are readily acceptable and relevant to the user. In general, Rush [55] states that knowledge elicitation is the process of collecting, from a human source of knowledge, information that is thought to be relevant to that knowledge [54]. The capture of this information or intelligence [56, 57, 47], including the more tacit elements, is facilitated within the DLM methodology at Tier 4. This is achieved primarily through: 1) on-line structured questionnaires (whether multiple choice or textual in nature); 2) data, information and knowledge repositories structured into sub-categories; 3) electronic blackboard facilities that capture and present a record of either locally or globally related knowledge (as defined by the expert user, who chooses whether the information is made available only at a local (Tier 3) or at a global level (Tiers 1 and 2)); and 4) a radar plot of knowledge capture for any process from Tier 2 that is expanded at the Tier 3 specific DLM activity level.
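A minimal sketch of one Tier 4 capture mechanism described above – an uploaded observation with a user-chosen local or global scope. Field names and the example content are hypothetical; only the local/global distinction follows the text.

from dataclasses import dataclass

@dataclass
class KnowledgeItem:
    author: str
    tier3_activity: str    # the activity the observation was made against
    scope: str             # "local" (Tier 3 only) or "global" (Tiers 1 and 2)
    kind: str              # questionnaire answer, uploaded file, blackboard note, ...
    content: str

note = KnowledgeItem(author="work instruction author",
                     tier3_activity="work_instructions",
                     scope="local",
                     kind="blackboard note",
                     content="Animated view of part orientation reduced queries on station 3.")

def visible_at_tier(item: KnowledgeItem, tier: int) -> bool:
    # Local items are only surfaced on the Tier 3 workbench; global items roll up to Tiers 1 and 2.
    return tier == 3 or item.scope == "global"

print(visible_at_tier(note, 2))  # False for this local note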
[Figure 10 radar plot axes (0–8 rating scale): People (Organisational) Knowledge, Business (Case) Knowledge, Digital (Manufacturing) Knowledge, Manufacturing Knowledge, Lean Knowledge.]
Figure 10. Expert opinion knowledge capture assessment rating
Figure 10 illustrates the novel radar plot visualization tool that is used to assess the level of 'expert' knowledge that has been captured for an associated Tier 3 DLM activity. It can be seen that the knowledge categorization correlates with the (horizontal) functional swim lanes incorporated in the process mapping utilized at Tier 2 (see Fig. 6). The implementation of the Tier 4 knowledge mapping and rating through DLM-MAP also exploits the Web-based capabilities afforded by such a dynamic medium, as opposed to CD deployment. The expert user is requested to input a rating for their understanding of the five key knowledge sectors identified
for the DLM methodology. The Web-based implementation allows the server to add each new assessment as it is input and to average the ratings across all experts' scores within each of the key DLM functions. Consequently, the tool highlights areas of strength and confidence, but also areas where increased knowledge capture activity is needed. It is important to note that the authors believe any toolkit manifesting the DLM methodology must be dynamic in nature in order to capture knowledge and best practice, allowing the use, augmentation and reuse of knowledge. Consequently, a Web-based intranet deployment is envisaged that facilitates company-wide standardization, knowledge capture (prompted via expert user input requests) and networked knowledge/information reuse. The elicitation of knowledge through electronic blackboards (made available locally or globally), online questionnaires and the upload of relevant supporting files (whether text, presentation, animation, spreadsheet, Web page, etc.) further consolidates the value of having an interactive DLM platform solution.
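The server-side averaging of expert ratings described above could be as simple as the sketch below. The sector names follow Figure 10; the 0–8 scale and the example submissions are illustrative assumptions.

from statistics import mean

SECTORS = ["People (Organisational)", "Business (Case)", "Digital (Manufacturing)",
           "Manufacturing", "Lean"]

# Each expert submits one rating per knowledge sector for a given Tier 3 activity.
submissions = [
    {"People (Organisational)": 6, "Business (Case)": 4, "Digital (Manufacturing)": 7,
     "Manufacturing": 5, "Lean": 3},
    {"People (Organisational)": 5, "Business (Case)": 5, "Digital (Manufacturing)": 6,
     "Manufacturing": 6, "Lean": 4},
]

averages = {s: mean(sub[s] for sub in submissions) for s in SECTORS}
weakest = min(averages, key=averages.get)
print(averages)
print("Priority area for further knowledge capture:", weakest)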
4 Exemplar Low-Level Tier 3 DLM Activity Validation: Work Instructions and Learning

This Section aims to present an exemplar of the validation work being carried out in support of the Tier 3 DLM methodology activity mapping and lower-level operational management. Consequently, an example of a specific Tier 3 activity process, which is an element of the cross-functional process mapping of Tier 2 (illustrated in Fig. 6), is the generation of work instructions, with reference also to the associated impact on the learning curve. Essentially, Butterfield et al. [58] conducted a research study into the use of the DLM methodology for work instructions and measured the impact of animated versus paper-based instructions on the learning curve. Total organisational learning is represented by a learning curve (also known as a progress function) or cost curve, which is a line on a graph mapping the decreasing time required to accomplish a task as it is repeated. It represents two facts: firstly, the time to do a job decreases each time that job is repeated; and secondly, each time reduction is smaller with each successive unit. The main aims of the study were: 1) verification that digital manufacture could be used to positive effect in the Lean implementation of work instructions; and 2) knowledge capture of the process to aid in the definition of the Tier 3 information to be encapsulated into the overall DLM methodology. The result of the assembly build experiments conducted by Butterfield et al. [58] was a set of personal learning curves which tracked individual improvements over five builds using the two instructional methods: paper versus animated. This was carried out for an 'Apron Uplock Assembly' within a jig that BAB currently manufacture, being an outer skin plus supporting internal structure type of assembly within a 2 m² envelope. A five-point overall learning curve was generated for both instructional types. Each point was an average of the individual times for build
numbers one through to five. The effectiveness of the two instructional methods was judged on assembly completion times.
Figure 11. Paper work instructions based on engineering process reports (EPR)
Figure 11 shows a hard copy of the paper instructions used for the aircraft panel assembly as well as an isometric view of the panel. These instructions are based on the assembly sequence contained in the Bombardier Aerospace 'Engineering Process Record' (EPR) for this panel. The verbal instructions were presented with the isometric view of the assembled panel, which included individual part numbers as well as sequential code letters indicating the order of the parts in the assembly sequence. The letters were colour-coded according to the sequence operation that they were associated with in the EPR. Figure 12 shows the first frame from the animation used for the aircraft panel digital instructions. In this case the panel assembly was animated in the workshop surroundings where the experiment took place. The opening frames included static views of the tools with a text box displaying a tooling list. All of the components were placed on the tables surrounding the panel assembly jig and, when the animation is activated, the parts are shown moving to their final positions.
Figure 12. Animated work instruction frame
Figure 13 shows the overall learning curves for the two test groups who assembled the aircraft panel. The largest difference between the two instructional types was 17.3%, on build two. However, when the total time taken to build the panel using the two instructional methods was calculated using the power-law
curve fits, the total time taken to carry out the twenty builds was 14% lower for the group using digital instructions.
[Figure 13 axes: assembly time (seconds, 0–7000) versus number of build completions (0–6); series: illustrated average, animated average, and power-law fits to each.]
Figure 13. Learning curves for the Apron Uplock Assembly using paper versus animated instructions
Having validated the potential impact of DLM at Tier 3 for work instruction activity and learning improvements (indicatively recorded to be 14%), the activity mapping was carried out and captured for input into the Tier 3 DLM methodology, together with the other supporting material presented up to the end of Section 3 on the DLM methodology.
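A minimal reconstruction of the power-law (Wright-type) learning-curve comparison described above is sketched below. The five-build mean times are hypothetical placeholders rather than the study's data; the sketch fits T(n) = T1 * n**b to each group in log space and extrapolates the cumulative time over twenty builds.

import numpy as np

def fit_power_law(build_numbers, times):
    """Least-squares fit of T(n) = T1 * n**b in log space; returns (T1, b)."""
    b, log_t1 = np.polyfit(np.log(build_numbers), np.log(times), 1)
    return np.exp(log_t1), b

builds = np.arange(1, 6)
paper_times = np.array([5800.0, 4900.0, 4300.0, 4000.0, 3800.0])     # illustrative only
animated_times = np.array([4700.0, 4050.0, 3700.0, 3500.0, 3350.0])  # illustrative only

for label, times in [("paper", paper_times), ("animated", animated_times)]:
    t1, b = fit_power_law(builds, times)
    total_20 = sum(t1 * n**b for n in range(1, 21))
    print(f"{label}: T1={t1:.0f}s, exponent b={b:.3f}, 20-build total={total_20:.0f}s")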
5 High-Level Integrated Tier 2 DLM Process Validation

Having looked at the specific validation work being carried out for Tier 3 activities, with the work instruction exemplar presented in Section 4, this Section addresses the more general validation of the work. Ultimately, Fig. 14 represents the true validation of the DLM methodology through an improved learning curve relating to all of the manufacturing activity. It is shown that DLM implementation should at least result in a lower build time for the first unit and that this reduction should carry on throughout the production phase, with a significant reduction in the associated production costs. The lead time on orders is therefore compressed through the increased velocity of manufacturing activity and output, in accordance with fundamental Lean principles. The Work Instruction exemplar was used in Section 4, as opposed to Factory Layout for example, as it is very
associated with the business benefit illustrated in Figure 14. Figure 13 indicated a 14% improvement in learning, which translates into a very substantial reduction in cost over the production phase of the product at the highest cost level, e.g. £1.68 billion for 400 units of a £30M aircraft. However, there may be an additional cost and effort in the preliminary work represented by the Virtual Factory in Figure 14, due to the cost of digital manufacturing software, training and the additional time required to implement the DLM approach. This has not yet been researched in detail, but it is important to note that current experience has indicated that there is considerable scope for reducing lead time on the development process prior to production (see Figure 3) and that this may well offset the increase in time and cost associated with implementing the DLM methodology [59, 60].
[Figure 14 labels: Labour (vertical axis), Cumulative units (horizontal axis); Historical unit cost curve, New unit cost curve, Virtual learning, Virtual Factory, Physical Factory.]
Figure 14. Ultimate validation of DLM methodology on learning improvements through improved cost and lead-time reduction
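The headline saving quoted above follows from simple arithmetic; the sketch below reproduces it under the stated illustrative assumption that the 14% learning improvement is applied to the full unit price of a £30M aircraft across 400 units.

unit_price = 30e6          # £30M per aircraft (illustrative)
units = 400
learning_improvement = 0.14

saving = unit_price * units * learning_improvement
print(f"Indicative programme saving: £{saving / 1e9:.2f} billion")  # £1.68 billion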
The DLM methodology has been validated through close collaboration with Bombardier Aerospace Belfast (to be extended to all partners in the PreMade research project), although further research must be undertaken to scientifically establish the extent of the improvement illustrated in Figure 14. Notwithstanding, the DLM methodology presented has been approved at various stages with domain experts at BAB and is currently being implemented by them; the implementation process involves tailoring the generic DLM methodology presented here to suit their system, process and requirements architectures. However, the DLM methodology has also been validated at a business level, as BAB went through an internal process of establishing the business case, with a positive outcome. This alludes to another key contribution of DLM in that it offers industry a third route to Product Life Management (PLM) implementation, as opposed to 1) full-blown acceptance of the need to implement and deploy company-wide PLM on a single platform, or alternatively 2) cherry-picking only certain functionality for isolated deployment of the associated tools. Realistically, Option 1 can only be implemented over a longer time span and is more strategic in nature, while Option 2 has already been happening for some time. However, DLM offers a third option: a structured and integrated methodology for the adoption of PLM
that captures hard-won knowledge capital and facilitates the transition within current manufacturing environments towards a more concurrent, integrated digital manufacturing system, which will also increasingly break down the barriers between the fundamental activities of product design and manufacture [61, 62]. The DLM methodology has been primarily validated at Bombardier Aerospace Belfast (along with other PreMade consortium partners), but the authors are also working closely with the UK Department of Trade and Industry (DTI) – or rather its Technology Strategy Board (TSB) – to roll out the DLM methodology throughout the UK; it has been further validated in its developed form by their Monitoring Officer, subsequent to the original ratification of the Statement of Work.
6 Conclusion

A comprehensive review of the literature pertaining to digital manufacture, Lean and integrated manufacturing practice has been undertaken in establishing the importance of the DLM methodology presented in this paper. Consequently, it has been established that DLM offers a new management methodology for production operations integration that achieves vertical and horizontal integration of process, tools and systemic manufacturing effort. Vertical integration was achieved through the hierarchical structuring of effort according to business process (Tier 1), integrated cross-functional manufacturing processes (Tier 2), specific manufacturing activity (Tier 3) and knowledge capture (Tier 4). Horizontal integration is achieved at Tier 2 through the mapping of processes within functional swimlanes and at Tier 3 through the specific activity mapping of the Digital, Lean and Manufacturing functions in particular. Validation has been presented for a Tier 3 exemplar, while the issue of the general validation of DLM has also been addressed in the final Section. However, the crucial underlying ethos of dynamic, semi-passive knowledge capture and active reuse has been highlighted through the modules incorporated into Tier 4 of the DLM methodology, thereby positioning DLM as a new and novel management methodology for production operations integration.
7 References

[1] Singh, V. (1997), "Systems Integration – Coping with legacy systems"; 8/1, pp. 24-28.
[2] Freedman, S. (1999), "An overview of fully integrated digital manufacturing technology"; Proceedings of the Winter Simulation Conference.
[3] Olds, L. (1997), "Integration of Cost Modeling and Business Simulation into Conceptual Launch Vehicle Design"; Paper No. AIAA 97-3911.
[4] Klem, B. (2008), "article", Automotive Manufacturing Solutions, January/February Edition.
[5] Dassault Systèmes Press Conference (2002), "Delmia Solutions for the Airbus A380"; Fellbach, Germany.
[6] Boeing (2005), Journal of Aircraft Engineering and Aerospace Technology, "Boeing deploys Dassault Systèmes update to digital tools for 787 global team"; 77/6.
[7] CIMdata (2003), "The Benefits of Digital Manufacturing"; White Paper; [Accessed from http://www.cimdata.com].
[8] CIMdata and Dr Michael Grieves (2006), "Digital Manufacturing in PLM Environments"; White Paper; [Accessed from http://www.cimdata.com].
[9] La Rocca, G., Krakers, L. and van Tooren, M.J.L. (2002), "Development of an ICAD Generative Model for Aircraft Design, Analysis and Optimisation"; 13th IIUG Conference, Boston.
[10] Oliver, N. (1999). Rational choice or leap of faith: The creation and defence of a management orthodoxy, University of Cambridge Working Paper Series, February.
[11] Feigenbaum, A.V. (1983). Total Quality Control, McGraw-Hill, New York.
[12] Myers, K., Zumel, N. and Garcia, P. (1999). Automated capture of rationale for the detailed design process. Proceedings of the 11th Conference on Innovative Applications of Artificial Intelligence (IAAI99), AAAI Press, Menlo Park, CA, USA, 876-883.
[13] Stalk, G.J. and Hout, T.M. (1990). Competing Against Time, The Free Press, New York.
[14] Hammer, M. and Champy, J. (1995). Reengineering the Corporation, Nicholas Brealey Publishing, London.
[15] Dettmer, H.W. (1997). Goldratt's Theory of Constraints: A Systems Approach to Continuous Improvement, ASQC Press, Milwaukee, WI.
[16] Goldratt, E.M. and Cox, J. (1993). The Goal, 2nd edition, Gower, Aldershot.
[17] Suri, R. (1998). Quick Response Manufacturing: A Companywide Approach to Reducing Leadtimes, Productivity Press, Portland, Oregon.
[18] Christopher, M., Harrison, A. and van Hoek, R. (1999). Creating the agile supply chain: Issues and challenges, Proceedings of the International Symposium on Logistics, Florence, July.
[19] Naylor, B., Naim, M. and Berry, D. (1999). Leagility: Integrating the lean and agile manufacturing paradigms in the total supply chain, International Journal of Production Economics, Vol. 62, pp. 107-118.
[20] Womack, J.P., Jones, D.T. and Roos, D. (1990). The Machine that Changed the World, Rawson Associates, New York.
[21] Womack, J.P. and Jones, D.T. (1996). Lean Thinking: Banish Waste and Create Wealth in Your Corporation, Simon & Schuster, New York.
[22] Towill, D.R. (1999). Management theory: Is it of any practical use? Or, how does a fad become a paradigm, Engineering Management Journal, June, pp. 111-122.
[23] Foresight Manufacturing 2020 Panel (2000), UK Manufacturing: We Can Make It Better, DTI, 2000, pp. 4 (www.foresight.gov.uk/manu2020).
[24] Monden, Y. (1983). The Toyota Production System, Productivity Press, Portland.
[25] Ohno, T. (1988). The Toyota Production System: Beyond Large-Scale Production, Productivity Press, Portland.
[26] Hall, R.W. (1983). Zero Inventories, McGraw Hill, New York.
[27] Schonberger, R. (1982). Japanese Manufacturing Techniques: Nine Hidden Lessons in Simplicity, The Free Press, New York.
[28] Krafcik, J.F. (1988). Triumph of the lean production system, Sloan Management Review, Vol. 30, No. 1, pp. 41-52.
[29] Lewis, M.A. and Slack, N. (2003). An introduction to general themes and specific issues, In: M.A. Lewis and N. Slack (Eds), Operations Management: Critical Perspectives in Management, Routledge, London.
[30] Naim, M. (1997). The book that changed the world, Manufacturing Engineer, February, pp. 13-16.
[31] Liker, J. (2004). The Toyota Way, McGraw-Hill, New York.
[32] Spear, S. and Bowen, H.K. (1999). Decoding the DNA of the Toyota Production System, Harvard Business Review, Sept-Oct, pp. 97-106.
[33] Hirano, H. (1992). Putting 5S to Work: A Practical Step-by-Step Approach, PHP Institute Inc., New York.
[34] Shingo, S. (1985). A Revolution in Manufacturing: The SMED System, Productivity Press, Cambridge, MA.
[35] Imai, M. (1986). Kaizen: The Key to Japan's Competitive Success, McGraw-Hill, New York.
[36] Hines, P. and Rich, N. (1997). The seven value stream mapping tools, International Journal of Operations and Production Management, Vol. 17, No. 1, pp. 47-64.
[37] Kobayashi, I. (1995). 20 Keys to Workplace Improvement, Productivity Press, Portland, Oregon.
[38] Rother, M. and Shook, J. (1998). Learning to See: Value Stream Mapping to Create Value and Eliminate Muda, The Lean Enterprise Institute, Brookline, MA.
[39] Hines, P., Holweg, M. and Rich, N. (2004). Learning to evolve: A review of contemporary lean thinking, International Journal of Operations and Production Management, Vol. 24, No. 10, pp. 994-1011.
[40] Holweg, M. (2006). The genealogy of lean production, Journal of Operations Management, Pending Publication.
[41] Francis, M. (2006). Incremental NPD cycle time performance: The UK FMCG industry, Cardiff University Innovative Manufacturing Research Centre Working Paper Series, Cardiff, No. WP10.
[42] Curran, R., Butterfield, J., Collins, R., Castagne, S., Jin, Y., Francis, M., Darlington, J. and Burke, R. (2007). Digital Lean Manufacture (DLM) for Competitive Advantage, Proceedings of AIAA ATIO'07 Conference, Belfast.
[43] National Research Council (2004), National Research Council Committee on Bridging Design and Manufacturing.
[44] Delmia User Conference (2004), General Address.
[45] AeIGT (2004), National Aerospace Technology Strategy Implementation Report.
[46] Service Support Solutions (2007), EPSRC-BAE SYSTEMS Research Programme.
[47] Winstanley, G. (1991). Artificial intelligence in engineering. John Wiley & Sons, West Sussex, UK.
[48] Schreiber, G., Akkermans, H., Anjewierden, R., Shadbolt, N., Van de Velde, W. and Wielinga, B. (2000). Knowledge engineering and management: the CommonKADS methodology. A Bradford Book, The MIT Press, Cambridge, Massachusetts.
[49] Bailey, J., Roy, R., Harris, R. and Tanner, A. (2000). Cutting tool design knowledge capture. In: Industrial Knowledge Management - A Micro Level Approach, edited by R. Roy, Springer-Verlag, London, ISBN 1-85233-339-1, 393-411.
[50] Adesola, B., Roy, R. and Thornton, S. (2000). Xpat: a tool for manufacturing knowledge elicitation. In: Industrial Knowledge Management: A Micro-level Approach, edited by Roy, R., Springer, London, 449-476.
[51] Hoffman, R., Shadbolt, R., Burton, M. and Klein, G. (1995). Eliciting knowledge from experts: a methodological analysis. Organisational Behaviour and Human Decision Processes, 62:129-158.
[52] Shadbolt, N. and Milton, N. (1999). From knowledge engineering to knowledge management. British Journal of Management, 10:309-322.
[53] Spradley, J. (1980). Participant observation. Holt, Rinehart & Winston, New York.
[54] Cooke, N. (1994). Varieties of knowledge elicitation techniques. International Journal of Human-Computer Studies, 41(6):801-849.
[55] Rush, C. (2002), Formalisation and Reuse of Cost Engineering Knowledge, PhD Thesis, Cranfield University.
[56] Meyer, M., and J. Booker (1991). Eliciting and analysing expert judgement: a practical guide. Academic Press, London, UK. [57] Meyer, C. (1993). Fast Cycle Time: How to Align Purpose, Strategy and Structure for Speed, Free Press, New York. [58] Butterfield, J, Curran, R, Watson, G, Craig, C, Raghunathan, S, Collins, R, Edgar, T, Higgins, C, Burke, R, Kelly, P, and Gibson, C, (2007), “Use of digital manufacture to improve operator learning in aerospace assembly.” Proceedings of AIAA ATIO’07 Conference, Belfast, AIAA 2007-78565. [59] Curran, R, Kundu, A, Raghunathan, S, and D Eakin (2002). “Costing Tools for Decision Making within Integrated Aerospace Design”, Journal of Concurrent Engineering Research, 9(4), 327-338. [60] Curran, R, Raghunathan, S and M Price (2005). A Review of Aircraft Cost Modeling and Genetic Causal Approach, Progress in Aerospace Sciences Journal, Vol. 40, No 8, 487-534. [61] Curran, R, M. Price, S. Raghunathan, E. Benard, S. Crosby, S. Castagne and P. Mawhinney, (2005), Integrating Aircraft Cost Modeling into Conceptual Design, Journal of Concurrent Engineering: Research and applications, Sage Publications, Vol. 13, No. 4, 321-330. [62] Curran, R, A. K. Kundu, J. M. Wright, S. Crosby, M. Price, S. Raghunathan, E. Benard, (2006), Modeling of aircraft manufacturing cost at the concept stage, The International Journal of Advanced Manufacturing Technology, Pages 1 – 14.
Sustainability
Simulation of Component Reuse Focusing on the Variation in User Preference
Shinsuke Kondoh a,1, Toshitake Tateno b, Nozomu Mishima a, and Mitsutaka Matsumoto a
a National Institute of Advanced Industrial Science and Technology (AIST), Japan.
b Tokyo Metropolitan University, Japan.
1 Corresponding Author. Email: [email protected]
Abstract. The efficiency and validity of product and component reuse should be evaluated as the balance of utility value, environmental load, and cost throughout the whole product life cycle, from the viewpoint of society as well as from those of individual users. This paper proposes an evaluation method for reuse focusing on this balance from both societal and individual viewpoints. To evaluate these balances, a multi-agent consumer model representing users with different preferences for products is introduced, and an algorithm to find a near-optimal reuse solution is developed. A simplified simulation of reuse is conducted to demonstrate the feasibility of our method.
Keywords. Total performance, Reuse, Variation in user preferences, Multi-agent simulation
1 Introduction
Due to growing concern about environmental problems, a transition from the current mass-production and mass-consumption society to a sustainable society is eagerly required. Promoting the reuse of post-used products and components is a quite promising approach to attain this goal. From the environmental viewpoint, reuse sometimes significantly reduces material and energy consumption at the production stage. Reuse is also beneficial from the economic viewpoint, especially when the market consists of a wide variety of users with different preferences for products. Successes in the internet auction business for second-hand products (e.g., personal computers, home appliances, and automobile components) are typical examples of this case. The recent increase in international trade of second-hand products is also attributable to the significant difference in user preferences between developed and developing countries. However, reuse sometimes deteriorates the environmental performance of a society, as well as the service value for individual users, hindering the diffusion of new, innovative and environmentally conscious technologies. In addition, a reuse that is optimal from the viewpoint of each individual user doesn't always improve the environmental and economic performance of a society. Therefore, the efficiency and validity of reuse should be evaluated considering the balance of user value, environmental load and cost from the viewpoint of the whole society as well as those of individual users. Although much of the literature evaluates the potential of reuse to reduce environmental load by using life cycle assessment (LCA) and life cycle simulation (LCS) methods [3,4,6], these issues are hardly addressed. The objective of this study is to propose an evaluation method for reuse focusing on the balance of user value, environmental load and cost, from both societal and individual viewpoints, throughout the whole product life cycle, in order to identify the indispensable factors that make reuse environmentally and economically efficient and feasible. In particular, the variations in user preferences for products and in their operating conditions, which contribute to the success of reuse in many cases, are addressed in this paper. To this end, a multi-agent consumer model representing users with different preferences and operating conditions for products is introduced, together with an algorithm which finds near-optimal component reuse among them. A simplified simulation of component reuse of laptop computers is also conducted to demonstrate the feasibility and validity of our method. The possible increase in utility value, together with decreases in environmental load and cost, achieved by adequate component reuse is also discussed.
2 Approach
Our approach is summarized as follows:
(1) Evaluation of user value. From the engineering viewpoint, user value should be evaluated in relation to the functional performance of a product and the degree of satisfaction of the user throughout the whole product life cycle. To this end, the user value of a product is calculated based on Multi-Attribute Utility Theory (MAUT) [9], which relates the user value to the dominant attributes of a product.
(2) Evaluation of the balance of UV, environmental load and cost. The total performance indicator (TPI) [5], which represents the balance of UV, Life Cycle Environmental load (LCE), and Life Cycle Cost (LCC), is employed in this study to evaluate the environmental and economic performance of reuse from the societal and individual viewpoints.
(3) Reuse considering the variation in user preferences and operating conditions for products. A multi-agent consumer model is introduced to represent users with different preferences and operating conditions for products. In order to evaluate the efficiency and validity of the reuse of products and components, a reuse plan that matches the demand and supply of post-used products (and components) among the users should be determined. Finding the optimal reuse plan is a combinatorial optimization problem, which is difficult to solve when the
calculation space is large. In order to solve this problem, the Contract Net Protocol (CNP) [7] is employed in this study. The rest of this paper is organized as follows: Section 3 presents an evaluation method for the efficiency of product and component reuse based on the total performance indicator (TPI) that we have previously proposed. Section 4 illustrates a multi-agent-based matching algorithm for the demand and supply of post-used products and components, considering the variation in user preferences. Section 5 presents and discusses the results of a simplified simulation of the reuse of laptop computer components. Section 6 concludes the paper.
3 Evaluation of the Balance of UV, LCE, and LCC
3.1 Total Performance Indicator (TPI)
The validity and efficiency of product (and component) reuse should be evaluated as a balance of user value (UV), environmental load (LCE) and cost (LCC) throughout the whole product life cycle. The total performance indicator (TPI) [5] is employed to evaluate this balance. From the individual viewpoint, the TPI of a product consumed by user j is given as follows:

TPI_j = UV_j / (LCE_j + LCC_j)    (1)

where UV_j, LCE_j, and LCC_j represent the utility value obtained by user j, and the environmental load and cost throughout the whole product life cycle, respectively. Since components can be reused in several products of different users, UV_j, LCE_j, and LCC_j should be calculated as the totals of those of the components. The allocation of the UV, LCE, and LCC of a product to each component is executed by the quality function deployment (QFD) [1] method, referring to each component's contribution to those of the product. Assuming that the UV, LCE, and LCC of a whole society can be calculated as the totals over all users in the society, the total performance of a society TPI* is given as follows:

TPI* = \sum_j UV_j / (\sum_j LCE_j + \sum_j LCC_j)    (2)
3.2 Formulation of UV
3.2.1 Definition of UV
In general, the user value (UV) of a product increases as the product's functional performance and the length of its continued use increase. Thus the UV of a product is defined as the time integral of the product value:

UV_j = \int V_j(t) dt    (3)

where V_j(t) represents the value of the product at time t. In order to correlate product value with its functional performance, Multi-Attribute Utility Theory (MAUT) [9], which is widely used in the evaluation of customers' preferences for products and services, is employed as follows:

V_j(t) = \sum_i V_{j,i}(t)    (4)
V_{j,i}(t) = w_{j,i}(t) \cdot FR_{j,i}(t)    (5)

where i, j, V_{j,i}(t), w_{j,i}(t) and FR_{j,i}(t) denote the index of FRs, the index of the user, the product value allocated to FR_i, the weighting factor for FR_{j,i} of user j, and the functional performance of FR_i at time t, respectively. The weighting factor for each FR represents its importance to the user, i.e., the user's preference for a product. In this study, we assume that a product's value is measured by its market price. Therefore, the importance of each FR can be estimated by conjoint analysis [2] of various products with different specifications.
3.2.2 Time variation of UV
Since UV is defined as the time integral of product value, the time variation of UV should be estimated. The performance of an FR deteriorates over time due to aging and wear of its corresponding components. In addition, the importance of each FR may also decrease due to obsolescence caused by rapid technological innovation and changes in market trends. In this paper, these time variations are expressed by the following equations:

w_{j,i}(t) = b_{j,i} \exp(-a_{j,i} t)    (6)
FR_{j,i}(t) = d_{j,i} - c_{j,i} (t - t_0)    (7)

where b_{j,i}, a_{j,i}, c_{j,i}, d_{j,i}, and t_0 denote the initial importance of each FR for user j, its lowering (obsolescence) rate, the deterioration rate of the functional performance of FR_{j,i}, the initial performance of FR_{j,i}, and the starting time of the product use phase, respectively. The obsolescence rate for each FR can be estimated by regression analysis on w_{j,i}(t) at various times t.
where, bj,i, aj,i, cj,i, dj,i, and t0 denote the initial importance of each FR for user j, its lowering (obsolescence) rate, deterioration rate of functional performance of FRj,i, initial performance of FRj,i, and the starting time of product use phase, respectively. Obsolescence rate for each FR can be estimated by regression analysis on wj,i(t) at various time t. 3.3 Formulation of LCE and LCC Focusing on energy using products, the longer a product is continued to use, the higher LCE and LCC of a product become. Thus, the simplest representation of LCE and LCC of a product consumed by user j are given as follows;
Simulation of Component Reuse Focusing on the Variation in User Preference
LCE j
¦ LCE
j ,k
¦ LCC
j ,k
579
(8)
k
LCC j
(9)
k
LCE
j ,k
e j , k lt j , k f j , k
LCC j ,k g j ,k lt j ,k h j ,k
(10) (11)
where k, e_{j,k}, f_{j,k}, g_{j,k}, h_{j,k} and lt_{j,k} denote the index of components, the partial environmental load and cost allocated to component k per unit time during the product use stage, those throughout the rest of the product life cycle (all stages except product use), and the component lifetime, respectively. The LCE and LCC of a product are allocated to its components by using the QFD method, referring to the material and energy consumption of each component at each life cycle stage.
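To make the formulation concrete, the following minimal Python sketch evaluates equations (1) and (3)-(11) for a single user; the FR parameters and component data are illustrative assumptions, not values from the paper.

import math

def utility_value(frs, t_end, dt=1.0):
    """UV_j = integral of V_j(t) dt with V_j(t) = sum_i w_ji(t) * FR_ji(t)  (eqs. 3-5).
    Each FR carries b (initial importance), a (obsolescence rate),
    c (deterioration rate) and d (initial performance); eqs. (6)-(7), with t0 = 0."""
    uv, t = 0.0, 0.0
    while t < t_end:
        v = sum(fr["b"] * math.exp(-fr["a"] * t) *      # w_ji(t), eq. (6)
                max(fr["d"] - fr["c"] * t, 0.0)         # FR_ji(t), eq. (7)
                for fr in frs)
        uv += v * dt                                    # rectangle-rule integration of eq. (3)
        t += dt
    return uv

def life_cycle_totals(components):
    """LCE_j and LCC_j as sums over components of e*lt + f and g*lt + h  (eqs. 8-11)."""
    lce = sum(c["e"] * c["lt"] + c["f"] for c in components)
    lcc = sum(c["g"] * c["lt"] + c["h"] for c in components)
    return lce, lcc

def tpi(uv, lce, lcc):
    """TPI_j = UV_j / (LCE_j + LCC_j)  (eq. 1)."""
    return uv / (lce + lcc)

# Illustrative data: a hypothetical product with two FRs and two components
frs = [{"b": 10.0, "a": 0.02, "c": 0.05, "d": 1.0},
       {"b": 5.0,  "a": 0.05, "c": 0.02, "d": 1.0}]
components = [{"e": 0.3, "f": 40.0, "g": 0.5, "h": 60.0, "lt": 36},
              {"e": 0.1, "f": 20.0, "g": 0.2, "h": 30.0, "lt": 36}]

uv = utility_value(frs, t_end=36)          # 36 months of use
lce, lcc = life_cycle_totals(components)
print(f"UV={uv:.1f}, LCE={lce:.1f}, LCC={lcc:.1f}, TPI={tpi(uv, lce, lcc):.3f}")

The societal indicator TPI* of equation (2) is obtained in the same way by summing UV, LCE and LCC over all users before taking the ratio.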
4 Multi-agent Model for Component Reuse
4.1 Multi-agent consumer model
In order to represent the variations in preferences and operating conditions for products, a multi-agent consumer model is introduced. Variations in initial user preferences and in their changes over time are represented as variations in b_{j,i} and a_{j,i}, respectively. Different operating conditions for products may result in different functional deterioration rates, and in different environmental loads and costs per unit time during the product use stage. Thus, the variations in the physical conditions of products can be represented as variations in c_{j,i}, e_{j,k}, and g_{j,k}. These variations in user preferences and operating conditions also influence the lifetime of products. In this study, the authors assume that a product is disposed of when at least one of its dominant FRs completely loses its functionality or its allocated value. Thus, the lifetime of a product is calculated as the minimum of the physical and value lifetimes of each FR, which are defined as the duration until its functional performance becomes too deteriorated to function well (viz., falls below the lower specification limit of the FR) and the duration until its allocated value becomes too small to be worth maintaining (viz., falls below the lower value limit of the FR), respectively. In this study, we also assume that each user behaves (i.e., buys new or second-hand products and components, and disposes of his or her products) so as to maximize his or her individual TPI. Any reuse that deteriorates his or her TPI is not feasible. Thus, efficient and feasible reuse is expressed as the reuse plan maximizing the total performance of the whole society (TPI*) without deteriorating the TPI of any individual user.
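The disposal rule above can be sketched in a few lines of Python; the fragment assumes t0 = 0, the decay forms of equations (6) and (7), and illustrative lower limits for functional performance and allocated value.

import math

def physical_lifetime(fr, fr_lower):
    """Time until FR_ji(t) = d - c*t falls below its lower specification limit."""
    if fr["c"] <= 0:
        return math.inf
    return max((fr["d"] - fr_lower) / fr["c"], 0.0)

def value_lifetime(fr, value_lower):
    """Time until the allocated value w_ji(t)*FR_ji(t) falls below its lower value limit
    (found numerically, since the product of eqs. (6) and (7) is not linear in t)."""
    t = 0.0
    while t < 1000:                     # arbitrary horizon to keep the search bounded
        v = fr["b"] * math.exp(-fr["a"] * t) * max(fr["d"] - fr["c"] * t, 0.0)
        if v < value_lower:
            return t
        t += 0.5
    return math.inf

def product_lifetime(frs, fr_lower=0.2, value_lower=0.5):
    """Product lifetime = minimum over FRs of physical and value lifetimes (Section 4.1)."""
    return min(min(physical_lifetime(fr, fr_lower), value_lifetime(fr, value_lower))
               for fr in frs)

frs = [{"b": 10.0, "a": 0.02, "c": 0.05, "d": 1.0},
       {"b": 5.0,  "a": 0.05, "c": 0.02, "d": 1.0}]
print(f"estimated product lifetime: {product_lifetime(frs):.1f} months")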
4.2 Multi-agent algorithm for matching demand and supply of post-used components
In order to find the optimal reuse that meets the requirement mentioned in section 4.1, we introduce two kinds of agents, namely component agents and product agents. A product agent corresponds to each product in the market during its service life. In order to maximize its user's individual TPI, it stores and updates information about the residual lifetime and the user's preferences for the product, which are given as the importance of each FR, so as to negotiate with component agents to obtain post-used components that can improve the individual TPI of the user. A component agent corresponds to each component in a post-used product. It stores and updates information about its usage history and functional performance so as to decide whether its corresponding component is reused or not, and into which product it is embedded. The negotiation protocol is decomposed into three phases, as shown in Figure 1.
Phase 1: Specification and condition announcement phase. When a product is disposed of, some of its components may still function well and be suitable for reuse. The component agents associated with the components of End of Life (EOL) products announce their specifications and conditions (e.g., total operation time) to all the product agents so that the possibility of their reuse can be considered.
Phase 2: Bidding phase. A bid represents an offer to reuse the component in the product of a different user. Receiving the announcement message of the component agent, a product agent decides whether or not it should respond with a bid, considering the possibility of improvement in its total utility value. Since the total utility value is given as the time integral of product value, it should bid when the product life can be extended or its functional performance can be improved by embedding the post-used component. If it chooses to respond, it sends a bid-for-reuse message to the component agent with information about its location, estimated lifetime, and the user's preference for the product.
Phase 3: Contracting phase. After receiving bids before the bid-reception deadline, the component agent evaluates each bid based on estimates of the utility value, life cycle cost (including the logistic cost to the location of the product), and environmental load during its next life cycle, which are calculated from the information in the bid-for-reuse message. It calculates the TPI in the next life cycle, selects the best bid and sends a Contract message to the corresponding product agent. A failure to send the Contract message before the bid-accept deadline means that the component agent is rejecting the bid.
Figure 1. The negotiation protocol of component and product agents. At the end of life of a product, component agents send an announcement message (specification of the component and its physical conditions, e.g. total operation time and residual lifetime) to the product agents; in the bidding phase, product agents reply with a bid-for-reuse message (user's preference for the product, estimated residual lifetime, and location of the product); in the contract phase, the component agent returns a contract message accepting the selected bid.
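As a rough illustration of these three phases, the Python sketch below mimics one announcement-bid-contract cycle for a single component; the agent classes, the use of life extension as the bid criterion, and the data values are simplifying assumptions (the paper's agents evaluate the estimated TPI of the next life cycle).

from dataclasses import dataclass

@dataclass
class ComponentAgent:
    spec: str
    residual_life: float          # announced physical condition

@dataclass
class ProductAgent:
    name: str
    remaining_life: float
    def bid(self, comp):
        """Phase 2: bid only if the post-used component would extend product life."""
        gain = comp.residual_life - self.remaining_life
        return gain if gain > 0 else None

def contract_net(component, products):
    """Phase 1: announce; Phase 2: collect bids; Phase 3: award the contract."""
    bids = [(p, p.bid(component)) for p in products]      # announcement and bidding
    bids = [(p, g) for p, g in bids if g is not None]
    if not bids:
        return None                                       # no feasible reuse: send to EOL treatment
    winner, _ = max(bids, key=lambda pg: pg[1])           # pick the best bid (stand-in for next-life TPI)
    return winner

component = ComponentAgent(spec="HDD 40GB", residual_life=24.0)
products = [ProductAgent("laptop-A", 10.0), ProductAgent("laptop-B", 30.0)]
best = contract_net(component, products)
print(best.name if best else "component sent to EOL treatment")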
5 Simulation of Component Reuse for Laptop Computers
To demonstrate the feasibility and effectiveness of the proposed method, we conducted a simplified simulation of component reuse for laptop computers among nearby users. A laptop computer is manufactured at a site located 1,000 [km] from the user site. When it is disposed of, the components that cannot be reused are transported to an EOL treatment site located 100 [km] from the user site. Reusable components are transported among nearby users, and the transportation cost and environmental load among the users are assumed to be negligible. The value of a laptop computer for each user is calculated as the weighted sum of its performance in eight FRs (i.e., FR1: computing speed, FR2: handling of large-capacity data, FR3: storage capacity, FR4: portability, FR5: viewability, FR6: handling of multiple recording media, FR7: aesthetic beauty, and FR8: user friendliness). The performance of each FR is evaluated by its corresponding functional parameter; for example, the performance of FR1 (computing speed) is evaluated by the processing speed of the CPU. The simulation model contains 200 users. Each user has his or her own preference for a laptop computer, and its variation is represented as normally distributed values of a_{j,i} and b_{j,i} in equation (6). The average values are calculated from the results of conjoint analyses executed in two different years, 2002 and 2006 [5]. The variation in product conditions is also represented as normally distributed values of c_{j,i} in equation (7). A laptop computer consists of eight components (i.e., main board, memory card, hard disk drive, CD/DVD drive, power supply, battery, housing, and LCD), and the LCE and LCC of a laptop computer are estimated by referring to the literature [8] and to historical price data of components, respectively. The calculation was conducted from the year 1995 to 2006, assuming that the functional performance of a laptop PC improved every year; thus, multiple types of laptop PC exist in the market. The simulation contains two scenarios: (i) a reuse scenario and (ii) a no-reuse scenario. Five calculations are conducted for each scenario, and the averages of the total production volume, number of reused components, total utility value,
environmental load (evaluated as CO2 emissions), life cycle cost, and total performance of the society TPI* are shown in Table 1.

Table 1. Simulation results

Scenario              Production volume   Number of reused components   UV [kJPY*month]   LCE [kgCO2]    LCC [kJPY]   TPI*
Reuse scenario        948                 1086                          10591970.88       108706.5299    135991.87    87.115
Non-reuse scenario    1047                0                             10304325.86       119903.4839    153839.67    75.87
In the reuse scenario, both the environmental load and the life cycle cost were reduced by approximately 10%, while the total utility value over all products increased. The reduction in total environmental load and cost is due to the reduction in the total production volume of laptop computers, because the reuse of components sometimes extended the lifetime of products. Adequate matching of the supply and demand of reused components was achieved by the multi-agent negotiation algorithm. Note the increase in total utility value in spite of the decrease in total production volume: effective reutilization of recovered components contributes to an increase in total utility value as well as a decrease in life cycle cost and environmental load.
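For readers who wish to set up a similar experiment, a population of heterogeneous consumer agents can be sampled as in the Python sketch below; the means and standard deviations are illustrative assumptions, since the paper derives the averages from conjoint analysis results.

import random

random.seed(0)

def sample_user(n_frs=8):
    """One consumer agent: normally distributed initial importance b, obsolescence rate a,
    and deterioration rate c for each of the eight FRs (illustrative means and spreads)."""
    return {
        "b": [max(random.gauss(5.0, 1.5), 0.1) for _ in range(n_frs)],
        "a": [max(random.gauss(0.03, 0.01), 0.0) for _ in range(n_frs)],
        "c": [max(random.gauss(0.02, 0.01), 0.0) for _ in range(n_frs)],
    }

users = [sample_user() for _ in range(200)]      # 200 users, as in the reported simulation
print(len(users), "users; mean b of FR1:",
      round(sum(u["b"][0] for u in users) / len(users), 2))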
6 Conclusion
This paper proposed an evaluation method for reuse considering the balance of value, environmental load and cost throughout the whole product life cycle from societal and individual viewpoints. Based on this evaluation method, the objective function for realizing adequate and efficient reuse is formulated, and a multi-agent algorithm that maximizes this function is proposed. A simplified simulation of component reuse for laptop computers is conducted, and its results show the effectiveness and feasibility of our method. The results also show that adequate component reuse increases the total utility value over all products while reducing their environmental load and cost throughout their entire life cycles. Future work includes conducting simulations of practical examples (e.g., international trade of second-hand products) to identify key success factors for efficient reuse.
7 References
[1] Akao, K., Quality function deployment, Productivity Press, Cambridge, MA, 1990. [2] Green, P. E., and V. Srinivasan, Conjoint Analysis in Consumer Research: Issues and Outlook, J. of Marketing Research, XV, 132-136, 1978. [3] Hanatani, T., Fukuda, N., and Hiraoka, H., Simulation of Network Agents Supporting Consumer Preference on Reuse of Mechanical Parts, Proc. of the 14th CIRP International Life Cycle Engineering Seminar, 2007, 353-358. [4] Inamura, T., Umeda, Y., Kondoh, S., and Takata, S., Proposal of life cycle evaluation method for supporting life cycle design, In Proc. of the 6th Int. Conf. on EcoBalance, pp.43-46, 2004.
[5] Kondoh, S., Masui, K., and Hattori, M., Total performance analysis of product life cycle considering the deterioration and obsolescence of product value, In Proc. of CARE INNOVATION 2006 (CD-ROM), 2006. [6] Mitsumune, N., Kato, S., Kimura, F., Analysis of Probability of Reuse using Life Cycle Simulation, EcoDesign 2002, Japan Symposium, pp.158-151, 2002 (in Japanese). [7] Smith, R. G., 1980, The contract net protocol: High level communication and control in a distributed problem solver. IEEE Transactions on Computers 29 (12), 1104-1113. [8] Tekawa, M., Miyamoto, S., and Inaba, A., Life Cycle Assessment: An Approach to Environmentally Friendly PCs, In Proc. of the 1997 IEEE International Symposium on Electronics & the Environment, pp.125-130, 1997. [9] Winterfeld, D. V. and Edwards, W., Decision Analysis and Behavioral Research, Cambridge University Press, Cambridge, England, 1986.
Evaluation of Environmental Loads Based on 3D-CAD
Masato Inoue a,1, Yumiko Takashima a, and Haruo Ishikawa a
a Department of Mechanical Engineering and Intelligent Systems – UEC, Japan.
Abstract. The impact of design on the disposal process, referred to as the late stage of the product life cycle, is becoming important. Product lifecycle management tools based on the 3Rs (reuse, recycle, and/or reduce) that build on existing design technologies are needed. Considering environmental loads at the early stages of product design leads to significant reductions in the environmental loads for the whole product life cycle. In the present paper, we propose an object-oriented design system that helps the designer to consider environmental loads at the early stage of product design using a combination of 3D-CAD and an application of evaluation methods for determining environmental loads. Our proposed system compiles existing evaluation formulae and material data for the environmental loads. Designers simply create the 3D-CAD solid model, which carries the attribute information in the 3D-CAD model itself, such as the designer's intention, disposal process, material properties, and so on. The present study applies the proposed system to the evaluation of the environmental loads of an example problem, that is, an office chair model created using 3D-CAD.
Keywords. Life cycle assessment, Design for Environment, Environmental Loads, 3D-CAD
1 Introduction
Since concurrent engineering (CE) attempts to incorporate the various product life cycle stages, i.e., manufacturing, usage, and disposal, from the early stages of product design, this approach is intended to obtain an overall satisfactory design solution which is sympathetic to the environmental loading of the disposal process. Three-dimensional computer-aided design (3D-CAD) systems have helped to significantly develop CE practice [1]. The advantages of 3D-CAD are not only the ease of three-dimensional geometrical development of a product but also collaboration with other design disciplines based on the sharing of design data, productivity improvement, faster product development, earlier production of prototypes, and improved quality through integrated computer-aided engineering (CAE) systems.
1 The University of Electro-Communications (UEC), Department of Mechanical Engineering and Intelligent Systems, 1-5-1 Chofugaoka, Chofu-shi, Tokyo 182-8585, JAPAN; Tel: +81 (0) 42 443 5420; Fax: +81 (0) 42 484 3327; Email: [email protected]; http://www.ds.mce.uec.ac.jp/index-e.html
Product life cycle stages, including material mining, manufacturing, assembly, usage and disposal, all affect the environment. Design for Environment (DfE) focuses on reducing environmental impact as much as possible through all of the product life cycle stages. It is important to address the disposal process as the late stage of the product life cycle. It is possible to control the product life cycle based on 3R technologies (reuse, recycle, and/or reduce) to minimize resources, energy consumption, and the amount of discarded materials. Manufacturing industries try to address disassemblability, cost, and recycling-rate issues, but they must also consider the environmental loads. Target values for the reduction of greenhouse gases are now set to prevent global warming; therefore, addressing the reduction of the environmental loads due to these gases is an important task. One method for the evaluation of product environmental loads is life cycle assessment (LCA). This method is documented in the international standards ISO 14040-49. Several systems based on an evaluation method for environmental problems are currently available, but most of them depend on the designer's manual input, because design information from the 3D-CAD solid model, such as weight and material, cannot be passed to these systems. It is therefore difficult to consider the late stages of the product life cycle by using 3D-CAD. This study aims to develop a system for the evaluation of environmental loads with minimal manual input, through a combination of 3D-CAD and an evaluation program. The present study applies the system to the evaluation of the environmental loads for an office chair model created by 3D-CAD to show the functionality of our system.
2 Method for Evaluation of Environmental Loads
The design and evaluation of products from an environmental perspective is necessary in our modern age of significant environmental issues. In general, design information becomes gradually more detailed through conceptual design to embodiment design and detail design. Therefore, early evaluation leads to the advancement of environmental performance; however, designers cannot adequately evaluate the design because of uncertain design information at the earlier stages of product design. Since in many cases designers have little knowledge about environmental performance, we need to develop a system for determining environmental performance during the early stages of product design. LCA is a methodology for evaluating a product's environmental loads. LCA can evaluate potential environmental loads by determining the amount of resources, energy, and harmful emissions from design, manufacturing, usage, and recycling, through to final disposal. The measures used to evaluate these factors in LCA are CO2, a known greenhouse effect gas, and also NOx and SOx, which cause acid rain. The Japan Vacuum Industry Association (JVIA) has proposed the "JVIA LCA model", which can evaluate the environmental loads of equipment associated with vacuums and pumps [2]. This model assists the designer in figuring out the environmental loads of manufactured equipment and in inventory analysis for environmental loads at the product design stage. The objective of this model is to obtain the information which is
needed for the LCA of a product. The model can calculate the inclusive sum of the environmental loads at each product life cycle stage, such as manufacturing, in-service use, maintenance, reuse and the recycle/disposal process. Moreover, this model can also analyze the environmental loads at each stage of the product life cycle. In this section, we explain the evaluation of the recycle/disposal process, which has no relationship to performance at the design stage. Environmental loads for disassembly, segregation, and cleaning are calculated by multiplying the energy and the charge of sub-materials by embodied environmental load intensities, which express the direct and indirect environmental loads linked to the unit production activity of goods. The environmental loads for transportation to the recycling processor or disposal site are calculated by multiplying the transportation distance by the embodied environmental load intensities. Environmental loads for the disposal process are calculated by multiplying the amount of disposal by the embodied environmental load intensities. As for other sub-materials, environmental loads are calculated by multiplying the charge of sub-materials by the embodied environmental load intensities. Environmental loads for recycled materials are calculated by multiplying their weights by the embodied environmental load intensities, and reused components by multiplying the number of components by the embodied environmental load intensities. Environmental loads (A_j(a)), reduced environmental loads (A_j(b)), and the disposal environmental load (T) are calculated by the following equations, using the necessary charge (C_j) and the embodied environmental load intensities (B_j), where j indexes the items for environmental loads (disassembly/segregation/cleaning, sub-material, transportation, and incineration) or reduced environmental loads (recycle and reuse):

A_j = C_j \times B_j    (1)
T = \sum_{j(a)} A_{j(a)} - \sum_{j(b)} A_{j(b)}    (2)

where T is the disposal environmental load [kg], A_j(a) is the amount of environmental load for item j(a) [kg], A_j(b) is the amount of reduced environmental load for item j(b) [kg], C_j is the charge at j(a) or j(b), and B_j is the embodied environmental load intensity at j(a) or j(b). Embodied environmental load intensities are defined by "Embodied Energy and Emission Intensity Data for Japan Using Input-Output Tables [3]" and "Environmental load data of 4000 social stocks for preliminary LCA [4]".
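A minimal Python sketch of equations (1) and (2) is given below; the charges and intensities are assumed values for illustration only.

def disposal_environmental_load(loads, reductions):
    """T = sum of A_j(a) minus sum of A_j(b), with A_j = C_j * B_j  (eqs. 1-2)."""
    total = sum(c * b for c, b in loads)          # j(a): disassembly, sub-material, transport, incineration
    reduced = sum(c * b for c, b in reductions)   # j(b): recycle, reuse
    return total - reduced

# Illustrative charges C_j and intensities B_j (assumed values, kg basis)
print(disposal_environmental_load(loads=[(12.0, 0.4), (3.0, 1.1)],
                                  reductions=[(5.0, 0.9)]))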
3 Evaluation System of Environmental Loads Based on 3D-CAD
3.1 Structure of the Proposed System
We propose an object-oriented system for the evaluation of environmental loads based on 3D-CAD. This system helps the designer to consider environmental loads at the early stages of product design by combining 3D-CAD with the evaluation of environmental loads, saving the designer's labour. The designer inputs the attribute information, such as the designer's intention (disposal process and material attributes), into the 3D-CAD model itself, and the application then automatically extracts this information. The 3D-CAD solid model and its attributes are defined as objects. SolidWorks (SolidWorks Corporation) is used as the 3D-CAD software, and Visual Studio.NET and Excel (Microsoft Corporation) are also used for the construction of the evaluation application for environmental loads. Figure 1 shows the overview of the proposed system.

Figure 1. Overview of proposed system (the designer creates models and adds attribute information in 3D-CAD (SolidWorks); add-in functions pick up the design information and output it to MS-Excel data files; the application of evaluation for environmental loads loads these data together with databases of sub-material, material, energy and transportation knowledge, and outputs and displays the evaluation results)

We have implemented the proposed system by developing an add-in program for SolidWorks which adds functions such as attribute addition for 3D-CAD models and information export for use in the application for determining environmental loads. Through the interface of the developed add-in program, users can easily input the required attributes. Additionally, emission volumes of greenhouse effect gases, such as CO2, NOx, and SOx, are used as an index for the evaluation of environmental loads.
3.2 Evaluation of Environmental Loads Considering a Disposal Process
In present product design using 3D-CAD, general product information is confined to material density, Young's modulus, and so on. If designers consider the disposal process, they need to define the material and disposal method for the product. We propose to add attribute information for the disposal process to every component model of the product. Adding the attribute at the early stages of product design prepares for the evaluation and saves the designer a lot of trouble.
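The attribute information attached to each component model can be thought of as a small record like the following Python sketch; the field names and the CSV export are hypothetical stand-ins for the add-in's actual schema and MS-Excel output.

from dataclasses import dataclass, asdict
import csv

@dataclass
class ComponentAttributes:
    """Attribute information attached to one 3D-CAD component model
    (hypothetical field names; the add-in would export such records for the evaluator)."""
    name: str
    material: str
    mass_kg: float
    volume_cm3: float
    disposal_process: str      # e.g. "material recycle", "incineration", "reuse"

chair_leg = ComponentAttributes("leg", "aluminium", 1.8, 660.0, "material recycle")

# Export in a spreadsheet-friendly form, standing in for the MS-Excel data file
with open("design_information.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=asdict(chair_leg).keys())
    writer.writeheader()
    writer.writerow(asdict(chair_leg))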
The input of required data depends on manual operation in almost all conventional evaluation software [5]. This system, however, outputs the product design information, including the added attributes, in a format compatible with the evaluation application software. As a result, the number of items that the designer must input into the application and the evaluation time are reduced. The machine used at the recycle factory and the transport method are not information related directly to the CAD model; this information has to be input directly into the application of evaluation for environmental loads. The equations used in the evaluation of environmental loads need not only product and recycling process information but also the various constants mentioned below. It is desirable to be able to modify these constants easily in this system, because of changes or additions brought about by changing circumstances. Therefore, this system assigns the necessary information from the database (MS-Excel files) to the equations of the evaluation for environmental loads. Because the database files also accumulate the evaluation results for environmental loads, the designer can conduct a design examination during the later stages by using the database.
3.3 Application of Evaluation for Environmental Loads
The evaluation equations used in this application are based on the "JVIA LCA model" described in section 2. This system outputs the emission volumes of greenhouse gases, such as CO2 [kg], NOx [kg], and SOx [kg], as the amount of the environmental load. These amounts for each process of the product lifecycle, such as disassembly, segregation, cleaning, sub-material, transportation, incineration, recycle and reuse, are integrated to determine the evaluation result. The equations calculating the amount of environmental loads at each process (A1 - A6) are shown in equations (3) - (8); T, the final evaluation result, is calculated using equation (9). Here, sub-materials are chemicals used in the disassembly, segregation, and cleaning processes. The transportation weight is the summation of every component excluding any reused components, on the basis that the recycling process is delegated to a processor.
A1 is the amount of environmental load for disassembly, segregation, and cleaning:

A1 [kg] = \sum (P_m \times t \times I_p)    (3)

where P_m is the electrical power of the processing machine [kW], t is the used time [h], and I_p is the embodied environmental load intensity for utility electrical power [kg/kWh].
A2 is the amount of environmental load due to sub-materials:

A2 [kg] = \sum (M_s \times I_s)    (4)

where M_s is the amount of used sub-material [kg] and I_s is the embodied environmental load intensity of the process using the sub-material [kg/kg].
A3 is the amount of environmental load for the transportation process:

A3 [kg] = d_t \times M_t \times I_t + (d_t / f) \times I_f    (5)

where d_t is the transportation distance [km], M_t is the transportation weight [kg], I_t is the embodied environmental load intensity for the transport machine [kg/(km kg)], I_f is the embodied environmental load intensity for fuel [kg/l], and f is the fuel consumption [km/l].
A4 is the amount of environmental load for the incineration process:

A4 [kg] = \sum (M_i \times I_i)    (6)

where M_i is the amount of incinerated material [kg] and I_i is the embodied environmental load intensity of the incineration process.
A5 is the amount of reduced environmental load from the recycling process:

A5 [kg] = \sum (M_c \times I_c)    (7)

where M_c is the amount of recycled material [kg] and I_c is the embodied environmental load intensity of the recycling process [kg/kg].
A6 is the amount of reduced environmental load from the reuse process:

A6 [kg] = \sum (N \times I_u)    (8)

where N is the number of components and I_u is the embodied environmental load intensity of the reuse process [kg/number].

T [kg] = A1 + A2 + A3 + A4 - A5 - A6    (9)

These equations calculate the amount of environmental loads by substituting the model information from 3D-CAD and the various constants from the database, together with the settings input directly into this application.
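The following Python sketch evaluates equations (3)-(9) for one impact substance; the input values are illustrative assumptions, and the fuel term of equation (5) is written here in the units-consistent form d_t x I_f / f.

def evaluate_disposal_loads(machines, sub_materials, transport, incinerated,
                            recycled, reused):
    """Equations (3)-(9) for one impact substance (e.g. CO2)."""
    a1 = sum(p * t * ip for p, t, ip in machines)          # (3) disassembly/segregation/cleaning
    a2 = sum(m * i for m, i in sub_materials)              # (4) sub-materials
    d, mt, it, i_f, f = transport
    a3 = d * mt * it + (d / f) * i_f                       # (5) transport machine + fuel
    a4 = sum(m * i for m, i in incinerated)                # (6) incineration
    a5 = sum(m * i for m, i in recycled)                   # (7) recycle credit
    a6 = sum(n * i for n, i in reused)                     # (8) reuse credit
    return a1 + a2 + a3 + a4 - a5 - a6                     # (9)

# Illustrative inputs (assumed values, not the paper's data)
t_co2 = evaluate_disposal_loads(
    machines=[(2.0, 0.5, 0.4)],                 # (Pm [kW], t [h], Ip [kg/kWh])
    sub_materials=[(0.2, 1.5)],                 # (Ms [kg], Is [kg/kg])
    transport=(50.0, 8.0, 0.0002, 2.5, 10.0),   # (dt [km], Mt [kg], It, If [kg/l], f [km/l])
    incinerated=[(4.0, 2.7)],                   # (Mi [kg], Ii)
    recycled=[(3.0, 2.0)],                      # (Mc [kg], Ic [kg/kg])
    reused=[(1, 1.2)],                          # (N, Iu [kg/number])
)
print(f"T (CO2) = {t_co2:.2f} kg")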
4 Application to an Office Chair Model
We have implemented the functions of attribute addition and information output by developing an add-in program for SolidWorks. The application of evaluation for environmental loads is implemented with Visual Studio.NET (Microsoft Corporation). In our proposed application, a designer clicks the Enter key after giving the required inputs (machine used at the recycle factory, transport method, and so on) to get the evaluation result. This application can automatically load solid model information from 3D-CAD and evaluate the environmental loads. Designers can display the results and save the data as MS-Excel files. Moreover, the designer can change the configuration described in section 3.2. In the present research, an office chair as shown in Figure 2 is chosen to illustrate the effectiveness of our proposed system.

Figure 2. 3D-CAD office chair solid model (components: backrest, base of backrest, support of backrest, screws of backrest, seating face, screw of seating face, leg, and base of leg)

In this example, we create a
simplified solid model based on an office chair by using 3D-CAD and then evaluate the environmental loads. Firstly, the designer adds the attribute information to each component model. As a result, the information used by the application of evaluation for environmental loads is output as an MS-Excel file. In this example, the materials used for the office chair are chosen based on the materials obtained from the distributor firm and manufacturer. The total evaluation of environmental loads (see Table 1) is calculated from the product information (name of components, material, weight, volume, and disposal process). This system can calculate the amount of environmental loads and can support the designer in considering the reduction of environmental loads by changing disposal processes and materials.
Table 1. Results of environmental loads
(a) Details of environmental loads
Item of environmental loads            CO2 [kg]   SOx [kg]   NOx [kg]
Disassembly process                       0.000      0.000      0.000
Sub material                              0.000      0.000      0.000
Transportation                            9.807      0.074      0.018
Incineration                             32.540      0.446      1.178
Total                                    42.347      0.520      1.196
(b) Details of reduced environmental loads
Item of reduced environmental loads    CO2 [kg]   SOx [kg]   NOx [kg]
Reuse part                                0.000      0.000      0.000
Material recycle                          9.245      0.122      0.012
Chemical recycle                          0.000      0.000      0.000
Thermal recycle                           0.000      0.000      0.000
Total                                     9.245      0.122      0.012
(c) Determination of environmental loads
Total of all items                       33.102      0.398      1.184

5 Future Work
This study shows an example of the evaluation of environmental loads with emphasis on the disposal process. In further study, we need to add various evaluation items for environmental loads, such as disassemblability, cost, and manufacturing, to our proposed system. As future work, we will develop an agent-oriented system that responds flexibly to arbitrary changes in 3D-CAD solid models and performs multi-objective evaluations. The concept of our future agent-oriented system is shown in Figure 3. This system consists of a 3D-CAD system and an agent-based design evaluation system. A designer creates simplified 3D-CAD solid models and adds the attribute information to each component model. The system agentifies the 3D-CAD solid models with attribute information, and that information is defined as model agents. Moreover, the system agentifies the design objectives of evaluation as evaluation agents. This is a multi-agent system which can evaluate various design objectives, including environmental load evaluation, through a shared memory called a blackboard, by collecting and managing design information autonomously.
Figure 3. Concept of future agent-oriented system (a 3D-CAD system in which solid models with attribute information are agentified as model agents, and an agent-based design evaluation system in which evaluation agents exchange information with the model agents through a shared blackboard, presenting the evaluation results to the designer via an interface)
6 Conclusions
A system for the evaluation of environmental loads is proposed by integrating application software developed to evaluate environmental loads and MS-Excel spreadsheet software with 3D-CAD software. This study applies our proposed system to the evaluation of the environmental loads of the disposal process of an office chair model (an example model) created by 3D-CAD. As a result, our system can calculate the amount of environmental loads and can support the designer in considering methods for reducing environmental loads.
7 References [1] Nahm YE, Ishikawa H. A new 3D-CAD system for set-based parametric design. Int J Adv Manuf Technol 2006;29:137–150. [2] Japan Vacuum Industry Association (JVIA). Available at : < http://www.jvia.gr.jp/e/index.html >. Accessed on: Apr. 1st 2008. [3] Nansai K, Moriguchi Y, Tohno S. Embodied Energy and Emission Intensity Data for Japan Using Input-Output Tables (3EID) – Inventory Data for LCA –. National Institute for Environmental Studies, Japan, 2002. Available at: < http://wwwcger.nies.go.jp/publication/D031/D031.pdf >. Accessed on: Apr. 1st 2008. [4] National Research Institute for Metals Environment-COnscious Materials Research Team. Environmental load data of 4000 social stocks for preliminary LCA. Available at: < http://www.nims.go.jp/ecomaterial/ecosheet/ecosheet.htm >. Accessed on: Apr. 1st 2008. [5] Boothroyd Dewhurst, Inc. Design for Manufacturing and Assembly. Available at: < http://www.dfma.com/ >. Accessed on: May. 26th 2008.
Proposal of a Methodology Applied to the Analysis and Selection of Performance Indicators for Sustainability Evaluation Systems
Juliano Bezerra de Araujo 1 and João Fernando Gomes de Oliveira
Production Engineering Department, Engineering School of Sao Carlos, University of Sao Paulo (EESC/USP), Sao Carlos, Brazil.
1 PhD Student, Nucleus of Advanced Manufacture (NUMA), Production Engineering Department, Engineering School of Sao Carlos, University of Sao Paulo (EESC/USP). Trabalhador São-Carlense Avenue, 400 – Centro. CEP: 13566-590 - São Carlos/SP – Brasil. Phone number: +55 16 3373-9438. E-mail address: [email protected].
Abstract. Companies are interested in investigating the performance of their processes in terms of sustainability, since it provides a framework that integrates environmental, social and economic interests into the business strategy. It is no longer possible to think about economic development without the parallel preservation of the environment and mutual benefit to society. Companies are therefore continually seeking to use their wide-ranging input resources with better efficiency and more responsibility in their products and manufacturing technologies. In a highly competitive economic scenario, the most competitive companies are no longer simply those that use the lowest-cost inputs. As a result, more and more companies have been using performance measurement systems to identify and abandon resource-intensive operations, pursuing efficient production models. The main goal of this paper is therefore to present a robust methodology proposal to be used in the assessment and selection of indicators for sustainability analyses of products and processes. Starting from a set of sustainability indicators obtained from different sustainability performance measurement systems, e.g. GRI, the methodology makes it viable to construct a special set of metrics suited to specific purposes of investigation.
Keywords. Sustainability, Performance measurement systems, Manufacturing technologies
1 Introduction
Sustainability in business activities can be defined as the adopted actions and strategies which fulfill the company's requirements and the different stakeholders' demands, while protecting, enhancing and improving the human and natural resources that may come to be needed in the future [11]. The motivations presented by the economic sector to develop sustainable projects are not entirely altruistic. Recent research has demonstrated that
pursuing sustainable standards in different activities not only brings environmental and social benefits, but also improves the economic value of the firm [5]. Besides, it is not possible nowadays to think about economic development without the parallel improvement of society and the environment. According to Schwarz, Beloff and Beaver [16], it is a premise that economic success is directly linked to the preservation of the environment and to social welfare. In the business segment, there was once the belief that, in order to enhance environmental quality, companies should raise the expenses associated with their products and processes. In other words, there was the idea of an inherent trade-off for environmentally conscious companies: economy versus ecology. Lindle and Porter [14], for instance, opposed this idea, saying that companies should treat pollution as a source of economic loss. When scrap, toxic substances or sources of energy are rejected into the ecosystem as pollution, this is a sign that resources have been insufficiently and ineffectively used. Strategies based on innovation and new technologies have good prospects of increasing efficiency and reducing pollution generation. Investments in technological innovation can improve environmental performance and, at the same time, make the company more competitive, since resources are used without waste [4]. However, an important question still remains: what technology could be considered ideal? To solve this problem, companies have been using performance measurement systems; in their words, "what gets measured, gets managed" [16]. To assess the performance of a technology it is crucial to collect and analyze a sufficient amount of information. All the environmental, social and economic impact aspects and pressure points should be detailed and then classified in sustainability reports. The challenge is to generate and disseminate practical data for decision processes that turn out to be robust, relevant, accurate and feasible in terms of cost for their users [13] [9]. Jin and High [9] explain that reporting the sustainability of operations is a great opportunity to raise market share. According to them, currently 45% of the 250 biggest companies in the US produce a special report on sustainability in their corporate reporting, while in some specific sectors, e.g. the chemical, mining and paper sectors, sustainability reports reach 100% of the participants. The next section of the present paper briefly presents the evolution of the topic of sustainability in manufacturing. Following that, some important sustainability performance measurement systems are discussed, in order to eventually present the framework developed to conduct the analysis and selection of performance indicators for sustainability evaluation systems.
2 Sustainable Manufacturing Development
The first approach used by the economic sector to avoid environmental degradation by industrial activities was the adoption of end-of-pipe pollution control. End-of-pipe technologies try to control pollution generation by
treating and disposing of pollutants at the end of the productive processes. To accomplish this goal, new pieces of equipment and operations are added to the productive chain. These modifications do not bring any change in the quantity of pollutants that are produced; they just increase the quality of their treatment. Because of this, they are named end-of-pipe control [10]. Rusinko [15] considers end-of-pipe control to be cost intensive and not productive, since no competitive advantages are obtainable from it. This approach is concerned basically with internal and external control standards, without the responsibility of overcoming the existing limits. The cleaner production approach, for instance, can be considered as a strategy of continuous pollution prevention applied to products and processes. It incorporates a more efficient use of natural resources, minimizing pollutant generation and the risks to human health and safety. In summary, cleaner production deals with the impact aspects at the source and not at the end of the process, going beyond the end-of-pipe approach. Addressing the responsible sources of impact, and not only their effects, is a positive strategy for both business and the environment [20]. A more developed and updated approach, known as eco-efficiency, focuses on resource efficiency. It aims to enhance environmental performance while benefiting from better economic results. This strategic approach brings the cost elements to attention and gives more importance to the competitiveness of the firm, together with the cleaner production principles [15]. Finally, there is the sustainability approach, defined for the first time in the report "Our Common Future: Report of the World Commission on Environment and Development" [2] and then endorsed by Agenda 21 during Eco 92, Rio de Janeiro. Sustainable development can be defined as the actions that satisfy the needs of the current generations without compromising future generations' ability to develop themselves [20]. According to Sikdar [17], sustainable development is a balance between economic development, environmental concern and social fairness. In some business circles this definition is referred to as the triple bottom line [3]. Therefore, sustainability will only happen when the economic and social situation improves and, at the same time, the environmental capacity is not exceeded. A fair balance between value creation, social care and ecological compatibility should exist, and evidence of this responsible management is indispensable. Metrics and performance measures aim to evaluate, improve and report the company activities used by society to move in the direction of a sustainable world.
3 Sustainability Performance Measurement Systems
Different sustainability performance measurement systems can be used to evaluate products, processes and services. These models came to harmonize the sustainability management process according to clear principles. The information provided by these methodologies is reliable and practical for the stakeholders, without the counterpart of high costs and exhaustive long-term analyses. Five sustainability assessment models were selected from a larger group: ISO 14031 [8]; Labuschagne, Brent and Erck [11]; Verein Deutscher Ingenieure [18];
the Global Reporting Initiative (GRI) [6]; and the Institution of Chemical Engineers (IChemE) [7]. These are relevant and precise approaches which are constantly used by different parties, i.e. government agencies, business units and non-profit organizations, for the construction of sustainability reports. Generally, the sustainability performance evaluation systems follow the business management model "plan-do-check-act" (PDCA) [8] [6] [18]. The first step investigates the sustainability impact aspects. In the next step, the data are collected from several productive and non-productive departments. In the third place, the metrics are analyzed and compared against the performance target specifications. Finally, all the information is checked closely to find improvement opportunities. Labuschagne, Brent and Erck [11] affirm that, until some time ago, sustainability was thought of more in strategic and institutional terms, without considering the economic-operational view of manufacturing activities. Only a small number of metrics were dedicated to operations performance and, at the same time, they were very focused on environmental criteria and oriented basically towards product development. Therefore, a crucial procedure before conducting the selection of the correct performance indicators (next section) is to develop a large set of sustainability criteria or impact aspects that carry out the scenario characterization. The next three figures (1 to 3) specify a number of sustainability impact aspects collected from the respective models.
Figure 1. Environmental impact categories according to five different sustainability performance measurement systems (GRI, ISO 14031, Labuschagne, Brent and Erck, IChemE, and VDI). The environmental impact categories covered are: materials, energy, water, biodiversity, emissions, effluents, waste, suppliers, products and services, compliance, transport, and others; the environmental condition categories are: air, ozone depletion, human conditions, land resources, mineral and energy resources, and others.
Figure 2. Economical impact categories according to five different sustainability performance measurement systems. The categories are grouped under financial health, economic performance and potential financial benefit, and include: customers, suppliers, employees, investors, public sector, research and development funds, liabilities (e.g. environmental), profitability, liquidity and solvency, shareholders, market share, contribution to GDP, trading opportunities, subsidies, and others.
The social impact categories (Figure 3) are sometimes too complex to treat. In these cases, qualitative measures are more appropriate than quantitative ones.
Figure 3. Social impact categories according to five different sustainability performance measurement systems. The categories are grouped under labour practices, human rights, society, product responsibility and stakeholder influence, and include: employment, work relations, health and safety, training and education, diversity and opportunity, strategy and management, non-discrimination, freedom of association and collective bargaining, child labour, forced and compulsory labour, disciplinary practices, security practices, indigenous rights, community, bribery and corruption, competition and pricing, human capital contribution, public benefits (i.e. infrastructure), customer health and safety, products and services, advertising, respect for privacy, decision influence potential, and stakeholder empowerment.
4 Proposal of a Methodology for the Sustainability Performance Indicators Analysis and Selection
The specification of sustainability indicators guarantees a valuable report on the economic, environmental and social condition of the firm. It is necessary to choose, among the many possible indicators, the ones that are influenced by the main impact aspects of the company's activities. However, the current models for sustainability evaluation do not contain a clear methodology for the analysis and selection of these indicators. In this manner, even though many performance measurement system frameworks have been developed, their adoption is often constrained by the fact that they are simple frameworks. Neely et al. [12] have verified that little guidance is provided on how the appropriate measures can be identified, introduced and ultimately used to manage the business. For this reason, a methodology is presented which gives the guidelines required for the selection of metrics. Figure 4 shows the framework suggested for the configuration of the evaluation system. First of all, in order to choose between different metrics it is important to have a considerable database of impact categories. This activity was fulfilled through the complete analysis of several performance measurement systems, presented in Figures 1 to 3.
Figure 4. Sustainability performance measurement system and supporting tools applied to the selection of sustainability metrics.
Starting with the environmental indicators, it is necessary to consider the impacts caused and then to choose the indicators that best represent these circumstances. Among the different tools available to support the evaluation of impact aspects, e.g. environmental audits or life cycle assessment, the environmental inventory and the impact assessment were chosen for the methodology since they
are robust and broad enough. With these tools, it is possible to quantify the material resources, energy, emissions, effluents and other features of products and processes. The U.S. Environmental Protection Agency [4] maintains that the environmental inventory and the impact assessment are appropriate for selecting the main impact aspects, from which the performance indicators are derived. The construction of environmental inventories is divided into four main stages. First, it is necessary to build a process diagram with all the inputs, outputs and transformation processes involved. Second, a plan to collect all the information is developed, including forms or computer interfaces. Third, the data are collected and stored. Finally, the information gathered is reported to the interested parties. To acquire the data, many organizations rely on their own environmental inventory databases or use software for this purpose, e.g. LCA software. In the end, the essential step is to produce comparative measures of the environmental impact of the different aspects and then to choose the indicators that appropriately represent the most important pressure points of the process or product technology. The social dimension of the proposed methodology is concerned with the company's impacts on the social systems in which it operates, as well as the company's relationship with its various stakeholders. Social business sustainability therefore has internal and external focuses. To select the most important indicators, the first option is to conduct surveys with specialists. With their knowledge it is possible to rate the relevance of the several criteria proposed in the social analysis framework. As suggested in the guidelines and principles for social impact assessment (SIA), "the social impact specialist selects the SIA variables for further assessment situations" [1]. As this first option is time- and cost-consuming, another possibility for companies is to apply the core indicators selected from the most recognized sustainability performance evaluation systems, e.g. IChemE (see Figure 3). Veleva and Ellenbecker [19] support the idea that "it is better to approximately measure the right things than to measure the wrong ones with great accuracy and precision". The economic indicators, in turn, take a distinct approach from the usual financial metrics: they should truly represent the value created by the company's activities. A precise control of the value added by the activities is fundamental, considering aspects such as materials cost, labour cost and delays. Other non-financial metrics, e.g. quality and flexibility, are also described in the report in order to provide more arguments during the decision process.
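As an illustration of this selection step (not part of the original methodology description), the following sketch ranks impact aspects from a simple environmental inventory and keeps those that together account for most of the assessed impact; the inventory values and the coverage threshold are assumed for the example.

```python
# Illustrative sketch only: ranks inventory aspects by assessed impact and
# selects indicators covering the dominant "pressure points".
# The inventory values and the 80% coverage threshold are assumptions.

def select_indicators(inventory, coverage=0.80):
    """inventory: dict mapping impact aspect -> assessed impact score."""
    total = sum(inventory.values())
    ranked = sorted(inventory.items(), key=lambda kv: kv[1], reverse=True)
    selected, cumulative = [], 0.0
    for aspect, score in ranked:
        selected.append(aspect)
        cumulative += score
        if cumulative / total >= coverage:
            break
    return selected

# Hypothetical inventory (e.g. normalised LCA impact-assessment results)
inventory = {
    "energy use": 0.35, "materials": 0.25, "emissions to air": 0.20,
    "effluents": 0.10, "solid waste": 0.07, "transport": 0.03,
}
print(select_indicators(inventory))
# -> ['energy use', 'materials', 'emissions to air']: the aspects for which
#    performance indicators would then be defined
```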
5 Conclusions
As previously stated, the current sustainability assessment models do not include an appropriate methodology to analyze and select indicators. Through the proposed methodology, some guidelines were discussed and presented as part of a framework for this specific purpose. It is therefore expected that the sustainability reports for products, processes and services will become more accessible and the key impact aspects more easily traceable.
In future work, it is intended to detail the methodology and its different execution steps through a series of case studies in the manufacturing sector. Companies will take part in this study and will provide information that reflects the impact categories selected.
6 References
[1] Burdge, R.J. A Conceptual Approach to Social Impact Assessment. Social Ecology Press, Wisconsin, 1998.
[2] Brundtland, G.H. Our Common Future: Report of the World Commission on Environment and Development. Oxford University Press, 1987.
[3] Elkington, J. Cannibals with Forks – The Triple Bottom Line of 21st Century Business. New Society Publishers, Canada, 1998.
[4] EPA. Life Cycle Assessment: Principles and Practices. U.S. Environmental Protection Agency (EPA), 2006.
[5] Fiksel, J., McDaniel, J., Mendenhall, C. Measuring Progress towards Sustainability Principles, Process and Best Practices. Battelle Memorial Institute, Ohio, USA, 1999.
[6] GRI. Sustainability Reporting Guidelines. Global Reporting Initiative (GRI), Amsterdam, 2002.
[7] IChemE. The sustainability metrics: sustainable development progress metrics recommended for use in the process industries. Institution of Chemical Engineers (IChemE), Warwickshire, 2002.
[8] ISO 14031. Environmental Performance Evaluation. International Organization for Standardization (ISO), 1999.
[9] Jin, X. and High, K.A. Application of Hierarchical Life Cycle Impact Assessment in the Identification of Environmental Sustainability Metrics, 2004.
[10] Klassen, R.D. and Whybark, D.C. The impact of environmental technologies on manufacturing performance. Academy of Management Journal, 1999.
[11] Labuschagne, C., Brent, A.C., van Erck, R.P.G. Assessing the sustainability performances of industries. Journal of Cleaner Production, 2005.
[12] Neely, A. et al. Performance measurement system design: developing and testing a process-based approach. International Journal of Operations & Production Management, Vol. 20, No. 10, 2000.
[13] Olsthoorn, X., Tyteca, D., Wehrmeyer, W., Wagner, M. Environmental indicators for business: a review of the literature and standardization methods. Journal of Cleaner Production, 2001.
[14] Porter, M.E., van der Linde, C. Green and Competitive: Ending the Stalemate. Harvard Business Review, September-October, 1995.
[15] Rusinko, C.A. Green Manufacturing: An Evaluation of Environmentally Sustainable Manufacturing Practices and Their Impact on Competitive Outcomes. IEEE, 2007.
[16] Schwarz, J., Beloff, B., Beaver, E. Use Sustainability Metrics to Guide Decision-Making. Chemical Engineering Progress, 2002.
[17] Sikdar, S.K. Sustainable Development and Sustainability Metrics. AIChE Journal, 2003.
[18] VDI 4070. Nachhaltiges Wirtschaften in kleinen und mittelständischen Unternehmen: Anleitung zum Nachhaltigen Wirtschaften. Verein Deutscher Ingenieure (VDI), 2006.
[19] Veleva, V. and Ellenbecker, M. Indicators of sustainable production: framework and methodology. Journal of Cleaner Production 9, 519-549, 2001.
[20] WBCSD. Cleaner Production and Eco-efficiency: complementary approaches to sustainable development. Geneva, 1998.
Ocean Wave Energy Systems Design: Conceptual Design Methodology for the Operational Matching of the Wells Air Turbine
R. Curran1
Director of the Centre of Excellence for Integrated Aircraft Technologies (CEIAT), Reader, School of Mechanical and Aerospace Engineering, Queen's University Belfast, NI, UK (Professor of Aerospace Management and Operations, TU Delft)
Abstract. The paper sets out a conceptual design methodology that was employed in the design of a Wells air turbine for OWC ocean wave energy plants. In particular, the operational matching of the performance of the turbine is used as the premise in achieving an optimal design configuration and sizing, given the range and frequency of power bands presented to the turbine over long periods of time. This is in contrast to designing the turbine to accommodate the average power rating delivered by the OWC. It was seen that this resulted in a 5% improvement in power output, with the optimal size of the turbine required to be slightly larger than the average pneumatic power rating would suggest.
Keywords. Ocean wave energy, Wells turbine, design integration, concurrent engineering
1 Introduction
The paper sets out the conceptual design methodology that was employed in the design of a Wells air turbine for the OE Buoy prototype wave energy plant currently deployed in Galway Bay, Ireland. In particular, the operational matching of the performance of the turbine is used as the premise in achieving an optimal design configuration and sizing, given the range and frequency of power bands presented to the turbine over long periods of time. This is in contrast to designing the turbine to accommodate the average power rating delivered by the OWC, which ignores the wide range of power bands in which the turbine has to operate. The lowest and highest power bands can occur relatively frequently due to wave groupiness and also the hydraulic performance of the OWC influencing the occurrence profile, tending to result in negative turbine performance at the lower and higher ranges due to high running losses or separation and stall respectively. In the study, the primary conversion from wave energy to kinetic energy is achieved with a floating Oscillating Water Column (OWC) of the ‘backward bent
Corresponding Author Email : [email protected]
duct’ type, where the wave energy is used to excite vertical oscillations in the water column entrained in a floating plenum chamber with an open submerged base. The work was carried out to design a Wells turbine for this device that would ultimately couple to a generator set for power take-off. The prototype is currently deployed in Galway Bay Ireland. It was decided that a Wells turbine configuration would used for the pneumatic power take-off as the most simple and robust turbine type that has been well tested in the field. It was evident for the pilot OC Buoy plant that a simple monoplane configuration would be most suitable. This provides a good efficiency rating across a sufficiently wide range of flow rates and can easily accommodate the power rating envisaged for the plant, not requiring a biplane configuration for example. Moreover, the monoplane design has a minimum of moving parts and therefore is robust and cheap to manufacture, for example as compared to a counter-rotating design. Other alternative turbine designs include the impulse turbine or the Dennis-Auld turbine. The impulse turbine has been well developed by the Japanese but does not offer significant gains due to its lower efficiency, although it is claimed to work across a wider range of flow rates, should that be required, at a lower rotational speed. The Dennis-Auld turbine is at an early stage of validation and is still under patent. Since the conception of the Wells turbine [1] at Queen’s University Belfast (QUB) there has been considerable effort devoted worldwide to the development of the basic design [2-8]. Similarly, there has been a considerable amount of development effort directed towards the self-rectifying impulse turbine that was first suggested by Kim et al. [9]. The development of the Wells turbine initially included investigations into the geometric variables, blade profile, and number of rotor planes [10], and also the use of guide vanes [11]. Two further design enhancements have been the pitching of a monoplane turbine’s blades and the counter-rotation of a biplane’s rotors [12]. Full scale plant systems [13] have utilized more advanced turbine configurations, encouraged by the successful pioneering of prototype plants such as the 75kW QUB Islay plant that utilized basic monoplane and biplane turbine configurations with standard symmetrical NACA profile blades [14, 15]. The development work for Impulse turbines also included investigation of the basic design parameters but much of the work then went on to focus on the design of the guide vanes and whether these would be fixed or pitching, or even self-pitching [16]. The Impulse turbine has been installed in several plants in Asia and there is still much interest in it as an alternative to the more widely used Wells turbine (used in plants in the UK, Portugal, India and Japan); while the Denniss-Auld turbine has been used in the Port Kembla Australian plant [17-19].
2 Input Pneumatic Power Data
The input data used to drive the design and sizing process was provided by the University of Cork. A distribution of estimated power conversion was gathered from the site in Galway Bay in order to provide the estimated pneumatic power
distribution. These figures are summarised in Figure 1 in terms of a range of average pneumatic power in 2kW bands.
[Plot: % occurrence vs. average pneumatic power state (Wpneu), 0-35kW.]
Figure 1. The input pneumatic power distribution for the OE Buoy
It can be seen that the maximum pneumatic power state was at 31kW and that there is a lot of irregularity in the range from 5-15kW. This may be due to hydrodynamic effects occurring in Galway Bay, where the input data was collected and where the pilot plant will be deployed. However, the distribution might in practice be smoother if data from a much longer time scale were available, i.e. years rather than months. For the full-scale plant that will be next in the development for OceanEnergy, a more regular distribution would be expected in open seas. Notwithstanding this, the data was not smoothed and was used as provided. A large amount of recorded instantaneous wave height data was also provided by Cork University, which showed that the summarised data presented in Figure 1 was actually averaging a significant amount of variation for each of the power states. It was evident that there was a very high occurrence of values around zero, a decreasing occurrence of values around the average and an increasingly low occurrence at values many times higher than the average. This is exemplified in Figure 2, which shows the distribution of pneumatic power recorded for a sea state with an average pneumatic power rating of 17kW. The power has been grouped according to occurrence in 2kW bins or increments, although the first bin is for values of less than 0.25kW, representing 16% of the occurring power bands. It can be seen that values as high as 155kW were recorded, approximately 9 times the average value of 17kW. It will be shown subsequently that the turbine can nevertheless be effectively designed for the seas where there is the largest contribution of power. However, the irregular occurrence of such large pneumatic power surges will have to be accommodated in the plant. Although the turbine tends to be self-limiting by stalling when driven beyond its design range, these power surges will cause large fluctuations in the turbine's axial loading and torque and it is expected that this will have a negative effect on the equipment's survival. One very effective solution to this would be a blow-off valve which dumps excess power, as it is likely that a control strategy would not be able to react
fast enough by increasing the turbine rotational speed, this being a more gradual control strategy that is very effective over greater time periods. These irregular occurrences of very large pneumatic power spikes have not been noted as a feature of non-floating OWC devices and it is possible that the floating OE Buoy is displaying some unusual hydrodynamic effects, e.g. through extreme resonance, and this is an extra complication for the turbine and the general protection of the power take-off system.
[Plot: % occurrence vs. pneumatic power (2kW bins, up to approximately 155kW) for the 17kW sea state.]
Figure 2. An example of the distribution of pneumatic power measured for the OE Buoy
Following on from the previous discussion, it is crucial to consider the actual contribution of power that is being provided by the sea states represented by Figure 2, for example. Therefore, Figure 3 presents the contribution of power available for the 17kW sea state presented in Figure 2. It will be evident from the later presentation of the analysis that this tends to increase the size of the turbine, although it can be seen that there is negligible contribution at power levels of some 4 times the average value of 17kW.
[Plot: % contribution vs. pneumatic power for the 17kW sea state.]
Figure 3. An example of the contribution of pneumatic power measured for the 17kW pneumatic power state
3 Turbine Design and Sizing Strategy
Initially, it was decided that the performance of the turbine should be individually assessed for each of the sea states represented in Figure 1 and that the design and sizing parameters would be chosen to maximise the overall rotational power output. However, it was then decided in consultation with Cork University that this "optimal" size should be decreased to ensure that the turbine would operate more efficiently in the smaller sea states, to increase running time rather than overall efficiency. Specifically, the result will be to reduce overall output but to increase running time and minimise time spent in shutdown, when the converted power would be less than the running losses. It was pointed out in the previous section that the pneumatic power states presented in Figure 1 actually represent the average output from across a wide range of powers experienced during the associated "sea state". It is paramount that this global range is considered relative to the turbine's performance, as the turbine has an effective range of flow rates across which it operates efficiently and these must therefore be matched over the long term. Consequently, there are two elements to the optimal matching: 1) globally matching the turbine's efficiency range to the range of pneumatic power states, which maximises overall power conversion, and 2) locally matching the turbine's efficiency range to the pneumatic power distribution associated with any particular pneumatic power range relating to a given sea state. However, as previously stated, the chosen turbine design was actually chosen to be smaller than the global optimal sizing value in order to increase running time. In order to investigate the more detailed localised performance matching during each of the individual pneumatic power states, a generic distribution was used to generate the representative range of instantaneous pneumatic power values (an illustrative sketch of such a generator is given below). Consequently, Figure 4 presents the general form of the distribution of pneumatic power, for example in terms of occurrence for the 5kW power state. It can be seen that, relative to Figure 2 (noting that the x-axis is extended much further), the majority of occurrence is focused on the smaller powers but that there is a considerable occurrence of powers up to 3 times the average, in agreement with Figure 2, with much higher values also included in the analysis. Therefore, this is in qualitative agreement with the instantaneous distributions of wave height supplied by Cork University. The relevance of these higher values is that the turbine will have to be able to accommodate them, and will do so at a given value of efficiency determined by the turbine's efficiency characteristic. At the higher values of flow rate, relative to some nominal design point, the angle of attack onto the turbine blades approaches the stalling region, at which point the efficiency drops considerably due to boundary layer separation and loss of lift. On the other hand, the lift at the lowest flow rates is not well developed due to the low angles of attack onto the blades and does not provide enough driving torque to overcome the running losses. This highlights the need for matching and the need to perform the analysis at this level of detail. Otherwise, the turbine might be sized more optimistically for the pneumatic power extremes while not being matched to the power bands that occur more often, and which represent the majority of the input power.
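The generic distribution itself is not reproduced in the paper; the sketch below is an illustrative stand-in that assumes an exponential form, which shows the qualitative features described above (high occurrence near zero, a mean equal to the state's average power, and a long tail reaching several times the average). The bin width and cut-off are also assumptions.

```python
import math

def instantaneous_power_bins(mean_kw, bin_kw=2.6, max_factor=10):
    """Illustrative generator of an occurrence distribution of instantaneous
    pneumatic power for a sea state with the given average power.
    Assumes an exponential distribution about the mean (not the original model)."""
    bins = []
    power = 0.0
    while power <= max_factor * mean_kw:
        # probability mass of the exponential distribution falling in this bin
        lo, hi = power, power + bin_kw
        mass = math.exp(-lo / mean_kw) - math.exp(-hi / mean_kw)
        bins.append((lo, mass))
        power = hi
    return bins

# Example: the 5kW state used in Figure 4
for lo, mass in instantaneous_power_bins(5.0)[:5]:
    print(f"{lo:5.1f} kW bin: occurrence {mass:.3f}")
```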
[Plot: occurrence (%) vs. pneumatic power (kW), 0-52kW, for the 5kW state.]
Figure 4. Example of power distribution associated with pneumatic power state
4 Performance Analysis
4.1 Air Turbine Performance
The turbine performance is driven by a number of key variables and parameters, including: V is the air velocity, Q is the airflow, T is torque, P is pressure, and Ω is rotational speed. The turbine performance is usually quoted in terms of the non-dimensional equivalents of flow rate φ, pressure P*, damping ratio B_R, torque T* and efficiency η, which are given by Equations 1 through 5 respectively:

φ = V_A / U_t ,   P* = P / (ρ_A ω² D_t²) ,   B_R = P* / φ ,   T* = T / (ρ_A ω² D_t⁵) ,   η = TΩ / (PQ)      (1-5)

where U_t is the rotational (blade tip) velocity, ρ_A is the density of air, ω is the angular velocity, and D_t is the rotor tip diameter. It is important to consider the non-dimensional characteristics as turbines of various values of geometry, speed and type can then be compared in absolute terms. It is known that the damping ratio B_R can be said to be linear for the majority of the flow range, apart from the stall region where secondary flow effects become more dominant. The linearity is an extremely important characteristic as the turbine can be simply designed to apply a constant level of applied damping to the OWC. This damping should maximise the output from the OWC while also producing the optimal range of airflow rates to maximise the turbine's conversion, as previously pointed out. In the analysis the turbine damping ratio is used to predict the airflow velocities produced by turbines of various sizes, given the range of pneumatic power states/outputs being considered
from the OWC, see Figure 1. The efficiencies corresponding to these velocities will then be used to give the conversion performance of the turbine. It is worth pointing out that variable speed control could be considered for the plant so that the speed can be altered in steps of, say, 50rpm, in order to match the turbine optimally to the output of the OWC. This control technique has been previously utilised on the Islay LIMPET device for the Wells turbine, where the speed is reduced in order to decrease the applied damping and therefore increase the flow rate to a value that produces a higher efficiency. The efficiency characteristic for the turbine was estimated from a synthesis of small-scale steady-state results and full-scale random oscillating flow measurements published for the Islay pilot plant. Much of the literature published on Wells turbine performance tends to project performance enhancements at full scale, relative to laboratory results, due to the reduced drag associated with the increased Reynolds numbers. However, it is the author's opinion that any potential increase in efficiency tends to be countered by the negative dynamic effects of accelerating oscillating flow, although the damping ratio B_R provided by the turbine tends to be slightly higher and the effects of stall seem to be less acute than at small scale. These issues have been taken into account in formulating the instantaneous efficiency characteristic for the monoplane turbine to be used in the OE Buoy, which is required in matching the turbine performance to the individual pneumatic power states.
[Plot: instantaneous efficiency (Eff) vs. flow coefficient (Phi), 0-0.6.]
Figure 5. Non-dimensional efficiency for the OE Buoy monoplane Wells turbine
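Equations 1-5 and an efficiency characteristic of the general shape shown in Figure 5 can be expressed compactly in code. The sketch below is illustrative only: the piecewise-linear efficiency() curve is an assumed stand-in with roughly the features described in the text (running losses near zero flow, a peak just below 60%, progressive stall beyond a flow coefficient of about 0.2), not the actual OE Buoy characteristic.

```python
import math

RHO_AIR = 1.225  # kg/m^3

def flow_coefficient(v_axial, omega, d_tip):
    """phi = V_A / U_t, with U_t the blade tip speed (Eq. 1)."""
    return v_axial / (omega * d_tip / 2.0)

def pressure_coefficient(p, omega, d_tip, rho=RHO_AIR):
    """P* = P / (rho * omega^2 * D_t^2) (Eq. 2)."""
    return p / (rho * omega**2 * d_tip**2)

def efficiency(phi):
    """Assumed piecewise-linear stand-in for the Figure 5 characteristic."""
    points = [(0.00, -0.05), (0.04, 0.0), (0.12, 0.58),
              (0.20, 0.50), (0.30, 0.0), (0.60, -0.10)]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= phi <= x1:
            return y0 + (y1 - y0) * (phi - x0) / (x1 - x0)
    return -0.10  # beyond the tabulated range

# Example: 1500 rpm, 0.8 m tip diameter, 10 m/s axial flow
omega = 1500 * 2 * math.pi / 60
phi = flow_coefficient(10.0, omega, 0.8)
print(round(phi, 3), round(efficiency(phi), 3))
```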
It can be seen from Figure 5 that the instantaneous turbine efficiency η (denoted by Eff) varies considerably with the flow rate φ (denoted by Phi). The running losses due to blade drag at zero flow result in a negative efficiency, and at a flow coefficient of approximately 0.2 stall occurs progressively from the blade root towards the tip, to give the turbine an effective flow range of approximately 0.04-0.3. It is readily evident and well illustrated now that this effective operational range, shown in Figure 5, needs to coincide with the occurrence of pneumatic power typified in Figure 4. The following section establishes that this is achieved
through the level of damping ratio of the turbine, which then provides the pneumatic power at the correct flow rates; the latter being associated with a range of angles of attack that are appropriate for the blades' aerofoil profile. The peak efficiency of just below 60% is conservative relative to more optimistic literature but correlates well with the author's work with full-scale plants. Importantly, the estimation of the turbine efficiency characteristic has been conservative in order not to oversize the turbine, any increased conversion performance being easily accommodated by the electrical system.
4.2 OWC-Turbine Coupling and Power Conversion
Given that the output of the OWC has been estimated (Figure 1) along with the performance of the turbine (Figure 5), the two have to be coupled in order to calculate the final power output. Here, the damping applied by the turbine can be said to act as a gearing ratio between the two that determines the forced velocity of the airflow due to the action of the OWC. Therefore, the damping can be expressed as shown in Equation 6,
B_A = P A_C / V_C      (6)
while, assuming incompressibility of air and a Mach number of less than 0.5, and conservation of mass flow, Equation 6 can be further manipulated to give
B_A = P A_C² / (A_A V_A)      (7)
where A_C and A_A are the cross-sectional areas at the water column surface and the turbine duct respectively, and V_C and V_A are the respective air velocities. Finally,
substituting in Equations 1 and 2, Equation 7 can be expressed in terms of the turbine’s damping ratio BR, as shown in Equation 8 below.
B_A = 4 ρ_A U_t (A_C² / A_A)(P* / φ)      (8)
It has been established that the effective damping ratio B_R, or pressure-flow ratio, of the constant-speed Wells turbine is constant. Consequently, the actual applied damping of the turbine can be calculated for any turbine geometry A_A and then substituted into Equation 7 to provide the pressure-velocity ratio. The pneumatic power provided by the OWC is expressed by Equation 9,

W_p = P V_A A_A      (9)

and Equations 7 and 9 can then be combined to give Equation 10.
V_A = √( W_p A_C² / (B_A A_A²) )      (10)
Therefore, Equation 10 can be used to give the air velocity that corresponds to a particular value of pneumatic power. Equations 8 and 10 show that the geometry of the OWC and turbine, the rotational speed of the turbine and the damping ratio of
the turbine define the above relationship. Consequently, for the fixed-speed machine, it is a simple matter to take the air velocity and to find the efficiency relating to that value. Thereafter, the values of pneumatic power, given by the distributions represented by Figure 4, can be multiplied by the efficiency to give the final power output for that pneumatic power state, the summation of the energy output for all states considered in the distribution in Figure 1 giving the overall output of the plant.
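The coupling chain of Equations 8 and 10 together with the efficiency characteristic lends itself to a short computational sketch. The numerical inputs below are taken from Table 2 later in the paper; the efficiency() curve is the same assumed stand-in as in the previous sketch, not the measured OE Buoy characteristic.

```python
import math

# Inputs taken from Table 2 later in the paper (final design values)
RHO_AIR = 1.225                  # kg/m^3
A_COL, A_DUCT = 12.0, 0.28398    # water column and turbine duct areas, m^2
D_TIP, RPM = 0.8, 1500
OMEGA = RPM * 2 * math.pi / 60
U_TIP = OMEGA * D_TIP / 2
B_R = 0.8                        # linear damping slope P*/phi

# Eq. 8: applied damping of the turbine (Ns/m)
B_A = 4 * RHO_AIR * U_TIP * (A_COL**2 / A_DUCT) * B_R

def efficiency(phi):
    """Assumed piecewise-linear stand-in for the Figure 5 characteristic."""
    pts = [(0.00, -0.05), (0.04, 0.0), (0.12, 0.58),
           (0.20, 0.50), (0.30, 0.0), (0.60, -0.10)]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= phi <= x1:
            return y0 + (y1 - y0) * (phi - x0) / (x1 - x0)
    return -0.10

def converted_power(w_pneumatic):
    """Pneumatic power (W) -> shaft power (W) via Eqs 10 and 1 and the
    assumed efficiency characteristic (eta * W_p)."""
    v_a = (A_COL / A_DUCT) * math.sqrt(w_pneumatic / B_A)   # Eq. 10
    phi = v_a / U_TIP                                         # Eq. 1
    return efficiency(phi) * w_pneumatic

print(round(B_A))                      # ~124,900 Ns/m, cf. Ba in Tables 1 and 2
print(round(converted_power(5000)))    # shaft power (W) for a 5 kW pneumatic input
```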
5 Analysis Results
There were 16 pneumatic power states considered in the analysis, derived from Figure 1, with each state being described by a distribution of the form shown in Figure 4 (based on the form exemplified in Figure 2). Subsequently, the analysis considered the individual values of pneumatic power in order to determine the relevant turbine conversion performance associated with each sea state, the conversion being calculated for that pneumatic power state relative to the occurrence data typified in Figure 4. A final value for the power conversion across a given time period was then calculated given the occurrence data presented in Figure 1. This value of converted energy was the basis of the optimisation procedure, being the objective function to be maximised. However, as previously stated, this optimal design was adjusted to allow the turbine to operate in a greater percentage of the sea states, even though the overall energy output would be lower due to the reduced ability of the turbine to efficiently convert in the larger pneumatic power states, which contain a significant proportion of the available pneumatic power.
[Plot: occurrence (%) vs. air velocity (m/s), approximately 6-27 m/s, for the 5kW state.]
Figure 6. Example of air velocity distribution with pneumatic power state
The conversion of the pneumatic power to a flow rate at a given pressure drop was predicted according to the analysis presented in Section 4.2. Figure 6 presents the occurrence distribution of air velocities for the pneumatic power distribution presented in Figure 4. The predicted flow rate was then used to
determine the turbine performance, taken from the characteristic presented in Figure 5. The matching of the flow rate and the turbine efficiency is exemplified in Figure 7 relative to occurrence, showing the two plots superimposed in order to highlight that the turbine needs to be designed to accommodate the flow range across its most efficient range but also to minimise the amount of time spent running at negative efficiencies at the lowest flow rates.
[Plot: occurrence (%) and turbine efficiency (%/100) vs. flow rate (phi), 0-0.6, for the 5kW state.]
Figure 7. Example of matching the turbine performance to the pneumatic energy distribution
[Plot: torque (Nm) vs. flow rate (phi), 0-0.5, for the 5kW state.]
Figure 8. Example of torque conversion
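Rearranging Equation 5, the instantaneous shaft torque follows directly from the efficiency, the pneumatic power and the rotational speed; a minimal illustration with assumed values:

```python
def shaft_torque(eta, w_pneumatic, omega):
    """T = eta * W_p / Omega, from eta = T*Omega / (P*Q) with W_p = P*Q (Eqs 5 and 9)."""
    return eta * w_pneumatic / omega

# e.g. a 2 kW pneumatic input converted at 50% efficiency at 1500 rpm (157.08 rad/s)
print(round(shaft_torque(0.5, 2000.0, 157.08), 2))  # ~6.37 Nm
```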
Ultimately, the turbine performs with an associated efficiency and produces torque that is then used to drive an alternator, for example. An example of the torque conversion for the pneumatic power state represented in the previous figures is presented in Figure 8. It is important to note that the characteristic is not a distribution of torque but rather the instantaneous performance. In practice the zigzag profile at the stall region, at phi values of over 0.2, would actually be smooth but appears as it does from the analysis due to the low fidelity of the data
points from the turbine efficiency curve, shown in Figure 7. It can also be seen that the turbine runs at a loss at the lowest values of flow rate due to the blade drag. However, it is evident that the torque performance relates well to the majority of flow rates occurring most regularly in Figure 7. Relative to Figures 2 and 3, in terms of the correlation of power contribution to power band occurrence, it is clear that the contribution of power will also tend to shift to the higher powers, thereby better utilising the torque conversion performance presented in Figure 8.
[Plot: % contribution of pneumatic energy and converted energy vs. sea state (Wpneu), 0-35kW.]
Figure 9. Distribution of pneumatic and converted power
Figure 9 shows a plot of the distribution of contributed converted power relative to the contributed pneumatic power (as opposed to the occurrence shown in Figure 1). The energy levels relate to a nominal period of one year, as it is the energy being converted over a period of time that should be used in the calculation of optimal conversion efficiency. However, it is important to note that the period for the analysis is not influential in the optimal sizing as the distribution shown in Figure 1 is assumed to be consistent and representative for any extended time period. Figure 10 shows the distribution of associated conversion efficiencies from Figure 9, being the ratios of the compared pneumatic and converted power values. The figure also plots the distribution of maximum torque values generated by the turbine. It is interesting to note that the turbine's stalling phenomenon has tended to cap the torque rating of the system and acts as a natural form of limiting, the remainder of the energy at the highest flow rates being dissipated in separated and turbulent airflow. However, in practice there may be a fatigue issue for the turbine over its lifetime due to these large oscillations in blade load, as the blades experience catastrophic stall in the larger pneumatic power states. As noted in Section 2, this seems to be more extreme for the floating device and is potentially a major design criterion for the OE Buoy.
[Plot: conversion efficiency (%/100) and maximum torque Tmax (Nm) vs. sea state (Wpneu), 0-35kW.]
Figure 10. Distribution of overall conversion efficiency and the maximum torque values
The optimal efficiency of the turbine using the presented methodology was calculated to be 40% at a radius of 0.48m, converting 21.7kWhr per annum. The average pneumatic power rating of 6.3kW was also used as an alternative optimal design point in order to show the benefit of taking the operational view of the optimisation process. The average pneumatic power method returned an optimal efficiency of 37.5% at a radius of 0.45m, converting 20.7kWhr per annum. Interestingly, the 2.5% improvement in the performance efficiency of the turbine translates to roughly double that, a 5% gain, in terms of the converted output, i.e. a 5% increase in sales revenue. This is due to the turbine plant performing better over the whole range of pneumatic power bands accommodated by the turbine in actual terms, rather than relative to input power.
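The sizing procedure described above amounts to evaluating the annual converted energy for a range of candidate rotor sizes and selecting the best. The sketch below is a heavily simplified, illustrative version: it assumes the efficiency() and instantaneous_power_bins() helpers from the earlier sketches are in scope, and the sea-state occurrence weights are hypothetical placeholders rather than the Figure 1 data.

```python
# Illustrative sizing loop: maximise annual converted energy over tip radius.
# Relies on efficiency() and instantaneous_power_bins() defined in the earlier
# sketches; the sea-state occurrence weights below are hypothetical.
import math

HOURS_PER_YEAR = 8766
# (average pneumatic power in kW, fraction of the year) - hypothetical values
SEA_STATES = [(3, 0.30), (5, 0.25), (9, 0.20), (13, 0.12), (17, 0.08), (25, 0.05)]

def annual_energy_kwh(tip_radius, rpm=1500, a_col=12.0, hub_tip=0.66, b_r=0.8):
    omega = rpm * 2 * math.pi / 60
    u_tip = omega * tip_radius
    a_duct = math.pi * tip_radius**2 * (1 - hub_tip**2)
    b_a = 4 * 1.225 * u_tip * (a_col**2 / a_duct) * b_r          # Eq. 8
    energy = 0.0
    for mean_kw, weight in SEA_STATES:
        for lo_kw, mass in instantaneous_power_bins(mean_kw):
            w_p = (lo_kw + 1.3) * 1000.0                          # bin midpoint, W
            v_a = (a_col / a_duct) * math.sqrt(w_p / b_a)         # Eq. 10
            eta = efficiency(v_a / u_tip)                          # Eq. 1 + curve
            energy += weight * mass * eta * w_p * HOURS_PER_YEAR / 1000.0
    return energy  # kWh per annum (illustrative)

best = max((annual_energy_kwh(r), r) for r in [0.35, 0.40, 0.45, 0.48, 0.52])
print(best)  # (annual energy, best tip radius) under the assumed inputs
```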
6 Final Design Parameters
The overall efficiency of the turbine is presented in Table 1. Also included are the corresponding values for the "optimal" turbine design that was altered in order to increase the amount of time when the system would be running efficiently, i.e. not at an overall loss. It can be seen that the nominal rotational speed has been kept fixed at 1500rpm, while the design of the electrical equipment is reported to allow a considerable speed range. This will allow the speed to be reduced in the smaller seas in order to increase the flow rates so that the pneumatic power is delivered at values over which the turbine is more efficient. Obviously, this will increase the values of overall efficiency towards that of the optimal design but, more importantly, will allow an effective speed control strategy to be put in place that will ensure increased efficient running time. Such a strategy is not in the remit of this report, especially without the electrical system design yet available.
Table 1. Basic design variables for analysis

Parameter             Final design           "Optimal design"
Tip radius (m)        0.4                    0.4
Speed (rpm)           1500 (fixed speed)     2100 (fixed speed)
Damping Ba (Ns/m)     125,000                175,000
Overall efficiency    30%                    40%
To enable the analysis, a number of design parameters are input variables that will be determined through the optimisation procedure, while other input variables are fixed or absolute. These are shown in Table 2, notably including the turbine tip diameter Tdiameter, the hub-to-tip ratio H-T ratio, the rotational speed w (rad/sec), the turbine applied damping Ba (Ns/m) and the turbine's linear damping slope P*/phi.
Table 2. Input design variables for analysis
Variable        Value
w =             157.0796
Gravity =       9.81
Col lgth =      2
Rhub =          0.26383
RHOair =        1.225
Acol =          12
Rtip =          0.4
H-T ratio =     0.659575
Aduct =         0.28398
RPM =           1500
Ut =            62.83185
Ba =            124893.6
P*/phi =        0.8
Ac/Ad =         42.25646
Col width =     6
Annual hr =     8766
Tdiameter =     0.8
Hdiameter =     0.52766
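The derived quantities in Table 2 follow directly from the basic inputs through the relations given earlier; a minimal consistency check (illustrative only, using the values from the table) is shown below.

```python
import math

# Basic inputs from Table 2
rpm, r_tip, r_hub = 1500, 0.4, 0.26383
rho_air, a_col, pstar_over_phi = 1.225, 12.0, 0.8

omega = rpm * 2 * math.pi / 60                 # -> 157.0796 rad/s
u_tip = omega * r_tip                          # -> 62.83185 m/s
a_duct = math.pi * (r_tip**2 - r_hub**2)       # -> 0.28398 m^2
ac_over_ad = a_col / a_duct                    # -> 42.26
hub_tip_ratio = r_hub / r_tip                  # -> 0.6596
b_a = 4 * rho_air * u_tip * (a_col**2 / a_duct) * pstar_over_phi   # Eq. 8 -> ~124,894 Ns/m

print(round(omega, 4), round(u_tip, 5), round(a_duct, 5),
      round(ac_over_ad, 5), round(hub_tip_ratio, 6), round(b_a, 1))
```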
In determining the turbine’s linear damping slope P*/phi, there is an associated calculation to calculate the turbine’s solidity (blade to annular area ratio) which in turn determines the correct value, from historical data. The solidity was determined to be 0.59.
7 Conclusions
The paper has set out a conceptual design methodology that was employed in the design of a Wells air turbine for OWC ocean wave energy plants. In particular, the operational matching of the performance of the turbine is used as the premise in achieving an optimal design configuration and sizing, given the range and frequency of power bands presented to the turbine over long periods of time. This is in contrast to designing the turbine to accommodate the average power rating delivered by the OWC. It was seen that this resulted in a 5% improvement in power output, with the optimal size of the turbine required to be slightly larger than the average pneumatic power rating would suggest. However, it should be noted that the case study used in the paper had a very large distribution of powers at the higher values and that other geographical locations may result in a smaller turbine being better suited than the average pneumatic power rating would suggest. This highlights the benefit of such an operationally oriented design methodology that allows the designer to tailor the sizing relative to the distribution of power rather than being confined to a single solution for a given average pneumatic power rating: the optimal sizing and configuration is also a function of the distribution, albeit to a lesser degree than of the average pneumatic power rating.
8 References
[1] Wells, A.A. (1976). Fluid Driven Rotary Transducer, British Patent Spec. 1 595 700.
[2] Curran, R. and Gato, L.C. The energy conversion performance of several types of Wells turbine designs. Journal of Power and Energy, Proceedings of the IMechE, Part A, 1997, Vol. 211, No. A2, ISSN 0957-6509, pp 55-62.
[3] Curran, R., Whittaker, T.J.T., Raghunathan, S. and Beattie, W.C. (1998). Performance Prediction of the Counterrotating Wells Turbine for Wave Energy Converters, J Energy Engineering, ASCE, Vol. 124, pp 35-53.
[4] Curran, R. and Folley, M. (invited book chapter). Integrated Air Turbine Design for Ocean Wave Energy Conversion Systems, Ocean Wave Power, Springer, submitted.
[5] Folley, M., Curran, R. and Whittaker, T. Comparison of LIMPET contrarotating Wells turbine with theoretical and model test predictions. Ocean Engineering, Volume 33, Issues 8-9, June 2006, pp 1056-1069.
[6] Count, B. (1980). Power from Sea Waves, Academic Press, New York, ISBN 0-12-193550-7.
[7] Mei, C.C. Power extraction from water waves. Journal of Ship Research, 1976, Vol. 20, No. 2, pp 63-66.
[8] Salter, S.H. (1988). World Progress in Wave Energy, Int J Ambient Energy, Vol 10, pp 3-24.
[9] Kim, T.W., Kaneko, K., Setoguchi, T. and Inoue, M. (1988). Aerodynamic performance of an impulse turbine with self-pitch-controlled guide vanes for wave power generator, Proceedings of 1st KSME-JSME Thermal and Fluid Eng Conf, Vol. 2, pp 133-137.
[10] Raghunathan, S. (1995). A Methodology for Wells Turbine Design for Wave Energy Conversion, J Power Energy, IMechE, Vol 209, pp 221-232.
[11] Setoguchi, T., Takao, M., Kaneko, K. (1998). Hysteresis on Wells turbine characteristics in reciprocating flow, International Journal of Rotating Machinery, Vol. 4 (1), pp 17-24.
[12] Raghunathan, S. and Beattie, W.C. (1996). Aerodynamic Performance of Counter-rotating Wells Turbine for Wave Energy Conversion, J Power Energy, Vol 210, pp 431-447.
[13] Falcao, A.F., Whittaker, T.J.T. and Lewis, A.W. (1994). Joule 2, Preliminary Action: European Pilot Plant Study, European Commission Report, JOURCT912-0133, Science Research and Development - Joint Research Centre.
[14] Whittaker, T.J.T., Beattie, W.C., Raghunathan, S., Thompson, A., Stewart, T. and Curran, R. (1997). The Islay Wave Power Project: an Engineering Perspective, Water Maritime and Energy, ICE, pp 189-201.
[15] Whittaker, T.J.T., Thompson, A., Curran, R. and Stewart, T.P. (1997). European Wave Energy Pilot Plant on Islay (UK), European Commission, Directorate General XII, Science, Research and Development - Joint Research Centre, JOU-CT940267.
[16] Setoguchi, T., Santhakumar, S., Maeda, H., Takao, M. and Kaneko, K. (2001). A review of impulse turbines for wave energy conversion, Renewable Energy, Vol. 23, pp 261-292.
[17] Curran, R. (2002). Renewable Energy: Trends and Prospects, Chapter 6: Ocean Energy from Wave to Wire. Editors: Majumdar, S.K., Miller, E.W. and Panah, A.I., The Pennsylvania Academy of Science, pp 86-121.
[18] Finnigan, T. and Auld, D. (2003). Model Testing of a Variable-Pitch Aerodynamic Turbine, Proc. 13th Int. Offshore Mechanics and Arctic Engineering Conf, ISOPE, Vol 1, pp 357-360.
[19] Finnigan, T. and Alcorn, R. (2003). Numerical Simulation of a Variable Pitch Turbine with Spee Curran, R., Denniss, T. and Boake, C. (2000), Multidisciplinary Design for Performance: Ocean Wave Energy Conversion, Proc. ISOPE'2000, Seattle, USA, ISSN 1098-6189, pp 434-441.
Author Index
Alcântara, José Ricardo (1)  323
Amaral, Daniel Capaldo (1)  341
Ameta, Gaurav (1)  205
Amodio, Carla Cristina (1)  503
Andrade, Luiz Fernando Segalin de (1)  297
Araujo, Camila de (1)  341
Araujo, Juliano Bezerra de (1)  593
Baguley, Paul (1)  113
Baluch, Haroon Awais (1)  375
Beco, Stefano (1)  77
Berends, Jochem (1)  387
Bernaski, Paulo (1)  503
Borsato, Milton (1)  503
Botura, Paulo Eduardo de Albuquerque (1)  305
Boyle, Mark (1)  173
Branício, Simone (1)  503
Brolly, N. (1)  281
Burke, Robert (2)  523, 531
Butterfield, Joseph (4)  523, 531, 541, 551
Cassidy, Matthew (1)  133
Cha, Jianzhong (3)  3, 45, 333
Chen, Jian (1)  163
Cheung, Julie (1)  241
Chou, Shuo-Yan (2)  313, 349
Chung, Yu-Liang (1)  29
Collet, Pierre (1)  53
Collin, Graham (1)  141
Collins, R. (1)  551
Cooper, Richard (4)  133, 141, 153, 173
Cowan, S. (2)  89, 281
Craig, Cathy (1)  541
Curran, Ricky (9)  89, 251, 259, 281, 523, 531, 541, 551, 601
Cziulik, Carlos (1)  503
Dean, Stephen R H (1)  267
Devenny, C. (1)  531
Doherty, John J. (3)  251, 259, 267
Duhovnik, Jožef (1)  357
Dusch, Thomas (1)  53
Dutra, Moisées Lima (2)  11, 53
Edgar, T. (1)  551
Eldridge, Andrew (1)  267
Ellsmore, Paul (1)  267
Elst, S.W.G. van der (1)  417
Eres, Murat H. (1)  233
Feng, Shaw C. (1)  205
Feresin, Fred (1)  77
Fernandes, Ederson (1)  503
Ferney, Michel (1)  469
Ferreira, Joao Carlos Espindola (1)  305
Forcellini, Fernando Antonio (2)  297, 443
Gault, Richard (4)  133, 143, 153, 173
Ghafour, Samer Abdul (1)  195
Ghodous, Parisa (5)  11, 53, 185, 195, 205
Gilmour, M. (1)  281
Goh, Yee Mey (1)  123
Gonçalves, Ricardo (1)  11
Gore, Dave (2)  251, 259
Halliday, Steven (1)  217
Hatakeyama, Kazuo (1)  323
Hawthorne, P. (2)  89, 281
Haxhiaj, Lianda (1)  469
Hiekata, Kazuo (1)  461
Higgins, C. (1)  551
Hoffmann, Patrick (1)  205
Hsiao, David W. (1)  29
Huang, Ching-Jen (1)  495
Inoue, Masato (1)  585
Ishikawa, Haruo (1)  585
Jaegersberg, Gudrun (1)  21
Jin, Yan (1)  523
Jinks, Stuart (1)  225
Juliano, Rodrigo (1)  503
Kipp, Alexander (1)  63
Kondoh, Shinsuke (1)  575
Kuhn, Olivier (1)  53
Kuo, Yong-Lin (1)  29
Lazzari, Marcio (1)  503
Lervik, Rolf (1)  77
Li, Nan (2)  3, 45
Lin, Shih-Wei (1)  349
Lu, Yiping (3)  3, 45, 333
Lumineau, Nicolas (1)  185
Ma, Lin (1)  29
Madej, Lukasz (1)  435
Maier, Franz (1)  513
Matsumoto, Mitsutaka (1)  575
Matuszyk, Paweł J. (1)  435
Mawhinney, Graeme (1)  153
Mayer, Wolfgang (2)  451, 513
McClean, A. (1)  531
Mileham, Antony Roy (1)  123
Mishima, Nozomu (1)  575
Moeckel, Alexandre (1)  443
Muckian, Gerard (1)  163
Mühlenfeld, Arndt (2)  451, 513
Mullen, M. (1)  281
Newnes, Linda (1)  123
Oliveira, João Fernando Gomes de (1)  593
Ostrosi, Egon (1)  469
Paccagnini, Carlo (1)  77
Parrini, Andrea (1)  77
Perna, Eliane (1)  195
Poots, G. (1)  551
Qiao, Lihong (1)  205
Raghunathan, Srinivasan (3)  153, 251, 259
Rauch, Lukasz (1)  435
Reed, Nicholas (1)  217
Rocca, Gianfranco La (1)  401
Rojanakamolsan, Piroon (1)  461
Roy, Rajkumar (1)  113
Rozenfeld, Henrique (1)  503
Santos, Kássio (1)  503
Saravia, Mohammad (1)  123
Scalice, Régis Kovacs (1)  297
Scanlan, James P. (4)  217, 225, 233, 241
Schubert, Lutz (1)  63
Shariat, Behzad (1)  195
Siqueira, Fábio (1)  503
Stumptner, Markus (2)  451, 513
Surridge, Mike (1)  77
Takashima, Yumiko (1)  585
Tan, Xincai (2)  251, 259
Tanasoiu, Mihai (1)  185
Tateno, Toshitake (1)  575
Tobias, José Ricardo (1)  503
Tøn, Arne (1)  77
Tooren, M.J.L. van (4)  375, 387, 401, 417
Tout, Rabih (1)  185
Trappey, Amy J.C. (3)  29, 485, 495
Trappey, Charles V. (1)  485
Tsai, Chih-Yung (1)  349
Ugaya, Cássia (1)  503
Ure, Jenny (1)  21
Wang, Chao-Hua (1)  313
Wang, Jian (7)  133, 141, 153, 163, 173, 251, 259
Watkins, Rowland (1)  77
Watson, Gareth (1)  541
Watson, James (1)  113
Watson, P. (1)  89
Welch, Brian (1)  531
Wesner, Stefan (1)  63
Wiseall, Steve (2)  225, 241
Wong, James S. (1)  233
Wu, Chun-Yi (2)  485, 495
Xu, Wensheng (1)  333
Xu, Yuchun (2)  251, 259
Yamato, Hiroyuki (1)  461
Yin, Y. (1)  531
Yu, Jia-qing (2)  3, 45
Zadnik, Žiga (1)  357
Zattar, Izabel Cristina (1)  305