Global Design to Gain a Competitive Edge
Xiu-Tian Yan • Benoit Eynard • William J. Ion Editors
Global Design to Gain a Competitive Edge An Holistic and Collaborative Design Approach based on Computational Tools
Xiu-Tian Yan, BEng, PhD, CEng, MIET, FITL
William J. Ion, Head of Department
Department of Design, Manufacture and Engineering Management (DMEM)
University of Strathclyde
James Weir Building
75 Montrose Street
Glasgow G1 1XJ
UK
ISBN 978-1-84800-238-8
Benoit Eynard, PhD, MAFM, MDS
Department of Mechanical Systems Engineering
University of Technology Compiègne
BP60319
60203 Compiègne Cedex
France
e-ISBN 978-1-84800-239-5
DOI 10.1007/978-1-84800-239-5

British Library Cataloguing in Publication Data
Global design to gain a competitive edge
1. Engineering design - Congresses
I. Yan, Xiu-Tian II. Eynard, Benoit III. Ion, William J.
620'.0042
ISBN-13: 9781848002388

Library of Congress Control Number: 2008928771

© 2008 Springer-Verlag London Limited

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.

The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use.

The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.

Cover design: eStudio Calamar S.L., Girona, Spain
Printed on acid-free paper
9 8 7 6 5 4 3 2 1
springer.com
Preface
The recent rapid globalisation of manufacturing industries has driven a thirst for rapid advances in technological development and expertise in the fields of advanced design and manufacturing, especially at their interfaces. This development has brought economic benefits to, and improved the quality of life of, many people all over the world. Technically speaking, this rapid development also creates many opportunities and challenges for both industrialists and academics, as design requirements and constraints have changed completely in this global design and manufacture environment. Consequently, the way products are designed, manufactured and realised has changed as well. The days of designing for a local market and using local suppliers in manufacturing have gone, if enterprises are to maintain their competitiveness and expand globally towards further success. In this global context, both industry and academia have an urgent need to equip themselves with the latest knowledge, technology and methods developed for engineering design and manufacture. To address this shift in engineering design and manufacture, three key project partners, namely the University of Strathclyde of the United Kingdom, Northwestern Polytechnical University of China, and the Troyes University of Technology of France, organised a third international conference, supported by the European Commission under the Asia Link Programme through the project FASTAHEAD (A Framework Approach to Strengthening Asian Higher Education in Advanced Design and Manufacture). The conference aims to provide a forum for leading researchers, industrialists and other relevant stakeholders to exchange and debate their research results as well as research issues.
The conference focuses on papers describing cutting-edge research topics, fundamental research issues related to global advanced design and manufacture, and recent industrial applications, with the goal of bringing together design and manufacture practitioners from academia, government organisations and industry from all over the world. It aims to cover recent advances and trends in design and manufacturing, and to facilitate knowledge sharing, presentations, interactions and discussions on emerging trends and new challenges in these fields. The particular focus of this conference is on understanding the impact of distributed, team-based design and manufacture on research and industrial practices for global companies. Being the third conference on this theme since 2004, its aims are: (a) to become a regular major forum for international scientific exchange on multi-disciplinary and
inter-organisational aspects of advanced engineering design and manufacturing engineering; and (b) to provide opportunities for presenting and formalising the methods and means for industrial companies to design and manufacture successful products in a globally distributed, team-based environment. It is well known that engineering design activities are mostly undertaken in the developed countries, represented by European, American and Japanese companies, whereas more manufacturing activities are undertaken by companies located in Asia. This trend may start to change as some engineering design work is gradually outsourced to Asian companies as well. The increasing geographical distribution of the tasks involved in the whole product realisation process brings great challenges as well as huge benefits for all stakeholders. It is therefore timely to organise this international conference and bring together leading researchers, academics and industrialists to discuss these issues and promote future research in these important areas. Out of 385 full papers submitted, the organisers, using the review results from international reviewers, finally selected 174 papers for publication. Based on the topics of the papers submitted, the editors have divided them into relevant chapters and produced two books. This book is the first, and contains a selection of refereed papers presented at the third conference. It represents the latest thinking on engineering design and manufacture, mainly from European and Asian perspectives. It includes 85 of the 174 accepted refereed papers, focusing on advances in advanced design and in integrated design and manufacture. This book is therefore a reflection of the key papers presented in all areas related to advanced design, its technologies, and its interface to manufacturing engineering.
More specifically, the book covers the following seven broad topics in engineering design, each of which forms a chapter:

Chapter 1: Front End of Engineering Design
Conceptual design, including shape design and synthesis, engineering guidelines from practical points of view, functional representation, customer requirement capture and so forth, becomes even more important in the context of global design, and the selected papers address its importance and present research findings for global design.

Chapter 2: Engineering Knowledge Management and Design for X
In an era of the knowledge economy, the capture of engineering design and manufacture knowledge, its representation and its management have become very important research and practical issues. Knowledge engineering support at the various stages of the product realisation process is vital to the success of any enterprise. A large selection of papers has been devoted to this topic.
Chapter 3: Detail Design and Design Analysis
Even at a time when innovation and new product development have become the main battleground for competition, rigorous, reliable and new methods to support detail design remain important. This was also identified as an important research topic group among the papers submitted.

Chapter 4: Simulation and Optimisation in Design
The recent rapid development in the computational power of desktop computers has made advanced analysis software tools for product simulation and optimisation available even to small and medium-sized companies as well as to educational users. This has resulted in a huge change in the way engineers conduct their engineering design and manufacture business. Sixteen papers are devoted to the use of these technologies.

Chapter 5: New Mechanism and Device Design and Analysis
Eight papers have been selected to describe new designs and analyses of such devices. They also touch on materials science, focusing on functional ceramic material design and manufacture, and on the design, simulation and optimisation of the associated manufacturing systems.

Chapter 6: Manufacturing Systems Design
The design of manufacturing systems has traditionally been considered part of the manufacturing discipline. Through the papers selected, it is clear that manufacturing systems form an integral part of the product realisation process and hence should be considered during the engineering design process.

Chapter 7: Collaborative and Creative Product Development and Manufacture
Following on from the previous chapter, this chapter deals with the collaborative issues of advanced design and manufacture. The editors deliberately compiled this chapter as the last chapter to reflect its link to Chapter 6. More importantly, it is appropriate to use this chapter to draw a conclusion to the book on global advanced design.
The editors of the book: Xiu-Tian Yan, Benoit Eynard and William J Ion
Acknowledgements
The editors would like to express their sincere thanks to the Advisory Scientific Board for their guidance and help in reviewing papers. The editors would also like to express their gratitude to the extended reviewers and the conference secretariats, Dr Fayyaz Rehman, Professor Geng Liu, Professor Jingting Yuan, Professor Hong Tang and Mrs Youhua Li, for their patience and huge effort in organising the paper review process and answering numerous queries from authors. Without their support, it would have been very difficult to compile this book. The editors would also like to thank Dr. Andrew Lynn for his kind support and maintenance of the conference paper management system, which he originally developed for journal editing purposes. With some deft modification, this system provided the editors with a wonderful tool for managing over eight hundred submissions in total. The editors would also like to thank Mr Frank Gaddis for his help with the design of the book cover. The editors of the book would also like to thank the sponsoring organisations for their support in organising the conference.
The Organisers of the ICADAM 2008 Conference:
- The University of Strathclyde
- Northwestern Polytechnical University
- The University of Technology Troyes
The Conference Sponsors:
- European Commission
- National Natural Science Foundation of China
- Institution of Engineering Designers, UK
- Institution of Mechanical Engineers, UK
- The Design Society – A Worldwide Community
- The Chinese Mechanical Engineering Society
- Shaanxi Mechanical Design Society
- Northwestern Polytechnical University – 111 Project
ICADAM 2008 Organising Committee

Conference Co-Chairmen:
Professor Chengyu Jiang, President of Northwestern Polytechnical University, Xi'an, China
Professor Neal Juster, Pro-Vice Principal of the University of Strathclyde, UK
Dr. Xiu-Tian Yan, The University of Strathclyde, UK

Advisory Scientific Board
Chair: Mr William J Ion, The University of Strathclyde, UK
Dr. Muhammad Abid, Ghulam Ishaq Khan Institute of Sciences and Technology, Pakistan
Professor Xing Ai, Academician of CAE, Shandong University, China
Professor Abdelaziz Bouras, University of Lyon (Lyon II), France
Dr. Michel Bigand, Ecole Centrale de Lille, France
Dr. Jonathan Borg, University of Malta, Malta
Professor David Bradley, University of Abertay, UK
Prof. David Brown, Editor of AIEDAM, Worcester Polytechnic Institute, USA
Professor Yang Cao, Hainan University, China
Professor Keith Case, Loughborough University of Technology, UK
Professor Laifei Cheng, Northwestern Polytechnical University, China
Professor P John Clarkson, University of Cambridge, UK
Professor Alex Duffy, University of Strathclyde, UK
Dr. Shun Diao, China National Petroleum Corporation, China
Professor Benoit Eynard, Troyes University of Technology, France
Professor K Fujita, University of Osaka, Japan
Professor James Gao, Greenwich University, UK
Professor John S. Gero, University of Sydney, Australia
Professor Philippe Girard, University of Bordeaux 1, France
Professor Dongming Guo, Dalian University of Technology, China
Professor Lars Hein, Technical University of Denmark, Denmark
Professor Bernard Hon, University of Liverpool, UK
Professor Imre Horvath, Delft University of Technology, Netherlands
Professor Weidong Huang, Northwestern Polytechnical University, China
Professor Sadrul Islam, Islamic University of Technology, Bangladesh
Professor Chengyu Jiang, Northwestern Polytechnical University, China
Professor Bert Jüttler, Johannes Kepler University, Austria
Professor Neal Juster, University of Strathclyde, UK
Professor Yuanzhong Lei, National Natural Science Foundation of China
Professor Hui Li, University of Electronic Science and Technology of China
Professor Peigen Li, Academician of CAS, HUST, China
Professor Qiang Lin, Hainan University, China
Professor Udo Lindemann, Munchen University of Technology, Germany
Professor Geng Liu, Northwestern Polytechnical University, China
Dr. Muriel Lombard, University of Nancy 1, France
Professor Jian Lu, The Hong Kong Polytechnic University
Professor Chris McMahon, University of Bath, UK
Professor Phil Moore, De Montfort University, UK
Dr. David Nash, University of Strathclyde, UK
Professor Henri Paris, University of Grenoble 1, France
Professor Alan de Pennington, The University of Leeds, UK
Dr. Yi Qin, University of Strathclyde, UK
Professor Geoff Roberts, Coventry University, UK
Professor Dieter Roller, Stuttgart University, Germany
Dr. Lionel Roucoules, Troyes University of Technology, France
Prof. Xinyu Shao, Huazhong University of Science and Technology, China
Professor Hong Tang, Northwestern Polytechnical University, China
Professor Tetsuo Tomiyama, Delft University of Technology, Netherlands
Dr. Chunhe Wang, Institute of Petroleum Exploration & Development, China
Professor Guobiao Wang, National Natural Science Foundation of China
Professor Runxiao Wang, Northwestern Polytechnical University, China
Professor YuXin Wang, Tongji University, China
Professor Richard Weston, Loughborough University of Technology, UK
Professor Yongdong Xu, Northwestern Polytechnical University, China
Dr. Xiu-Tian Yan, The University of Strathclyde, UK
Professor Haichen Yang, Northwestern Polytechnical University, China
Professor Shuping Yi, Chongqing University, China
Prof. Xiao Yuan, Huazhong University of Science and Technology, China
Professor Dinghua Zhang, Northwestern Polytechnical University, China
Professor Litong Zhang, Academician of CAE, Northwestern Polytechnical University, China
Professor Weihong Zhang, Northwestern Polytechnical University, China
Professor Li Zheng, Tsinghua University, China
Extended Paper Review Panel
Ms. Atikah Haji Awang, The University of Strathclyde, UK
Dr. Iain Boyle, The University of Strathclyde, UK
Professor Jonathan Corney, The University of Strathclyde, UK
Mr. Alastair Conway, The University of Strathclyde, UK
Professor Xiaolu Gong, The University of Technology Troyes, France
Dr. Pascal Lafon, The University of Technology Troyes, France
Dr. Shaofeng Liu, The University of Strathclyde, UK
Professor Yuhua Luo, Universitat de Illes Balears, Spain
Mr. Ross Maclachlan, The University of Strathclyde, UK
Dr. Conrad Pace, The University of Malta
Dr. Wenke Pan, The University of Strathclyde, UK
Professor Xiangsheng Qin, Northwestern Polytechnical University, China
Dr. Fayyaz Rehman, The University of Strathclyde, UK
Dr. Sebastien Remy, The University of Technology Troyes, France
Dr. Daniel Rhodes, The University of Strathclyde, UK
Dr. Michael Saliba, The University of Malta
Dr. Hiroyuki Sawada, Digital Manufacturing Research Center, National Institute of Advanced Industrial Science and Technology, Japan
Professor Shudong Sun, Northwestern Polytechnical University, China
Mr. David Steveson, The University of Strathclyde, UK
Professor Shurong Tong, Northwestern Polytechnical University, China
Professor Frank Travis, The University of Strathclyde, UK
Dr. Dongbo Wang, Northwestern Polytechnical University, China
Mr. Wendan Wang, The University of Strathclyde, UK
Dr. Ian Whitfield, The University of Strathclyde, UK
Dr. Qingfeng Zeng, Northwestern Polytechnical University, China
Mr. Remi Zente, The University of Strathclyde, UK
Contents
Chapter 1
Front End of Engineering Design ......................... 1
Computer Aided Design: An Early Shape Synthesis System .......... 3
Alison McKay, Iestyn Jowers, Hau Hing Chau, Alan de Pennington, David C Hogg
Constraints and Shortfalls in Engineering Design Practice .......... 13
Lars Hein, Zhun Fan
Modular Product Family Development Within a SME .......... 21
Barry Stewart, Xiu-Tian Yan
Duality-based Transformation of Representation from Behaviour to Structure .......... 31
Yuemin Hou, Linhong Ji
Automatic Adaptive Triangulation of Surfaces in Parametric Space .......... 41
Baohai Wu, Shan Li, Dinghua Zhang
Research on Modeling Free-form Curved Surface Technology .......... 51
Gui Chun Ma, Fu Jia Wu, Shu Sheng Zhang
Pattern System Design Method in Product Development .......... 61
Juqun Wang, Geng Liu, Haiwei Wang
Development of a Support System for Customer Requirement Capture .......... 71
Atikah Haji Awang, Xiu-Tian Yan
Comparison About Design Methods of Tonpilz Type Transducer .......... 81
Duo Teng, Hang Chen, Ning Zhu, Guolei Zhu, Yanni Gou
Effect for Functional Design .......... 91
Guozhong Cao, Haixia Guo, Runhua Tan
Quality Control of Artistic Scenes in Processes of Design and Development of Digital-Game Products .......... 103
P.S. Pa, Tzu-Pin Su
Chapter 2
Engineering Knowledge Management and Design for X .......... 115

Integration of Design for Assembly into a PLM Environment .......... 117
Samuel Gomes, Frédéric Demoly, Morad Mahdjoub, Jean-Claude Sagot
Design Knowledge for Decision-Making Process in a DFX Product Design Approach .......... 127
Keqin Wang, Lionel Roucoules, Shurong Tong, Benoît Eynard, Nada Matta
Mobile Knowledge Management for Product Life-Cycle Design .......... 137
Christopher L. Spiteri, Jonathan C. Borg
Research on Application of Ontological Information Coding in Information Integration .......... 147
Junbiao Wang, Bailing Wang, Jianjun Jiang and Shichao Zhang
RoHS Compliance Declaration Based on RCP and XML Database .......... 157
Chuan Hong Zhou, Benoît Eynard, Lionel Roucoules, Guillaume Ducellier
Research on the Optimization Model of Aircraft Structure Design for Cost .......... 167
Shanshan Yao, Fajie Wei
Research on the Management of Knowledge in Product Development .......... 177
Qian-Wang Deng, De-Jie Yu
Representing Design Intents for Design Thinking Process Modelling .......... 187
Jihong Liu, Zhaoyang Sun
Application of Axiomatic Design Method to Manufacturing Issues Solving Process for Auto-body .......... 199
Jiangqi Zhou, Chaochun Lian, Zuoping Yao, Wenfeng Zhu, Zhongqin Lin
Port-Based Ontology for Scheme Generation of Mechanical System .......... 211
Dongxing Cao, Jian Xu, Ge Yang, Chunxiang Cui
Specification of an Information Capture System to Support Distributed Engineering Design Teams .......... 221
A. P. Conway, A. J. Wodehouse, W. J. Ion and A. Lynn
Collaborative Product Design Process Integration Technology Based on Webservice .......... 231
Shiyun Li, Tiefeng Cai
Information Modelling Framework for Knowledge Emergence in Product Design .......... 241
Muriel Lombard, Pascal Lhoste
Flexible Workflow Autonomic Object Intelligence Algorithm Based on Extensible Mamdani Fuzzy Reasoning System .......... 251
Run-Xiao Wang, Xiu-Tian Yan, Dong-Bo Wang, Qian Zhao
DSM based Multi-view Process Modelling Method for Concurrent Product Development .......... 261
Peisi Zhong, Hongmei Cheng, Mei Liu, Shuhui Ding
Using Blogs to Manage Quality Control Knowledge in the Context of Machining Processes .......... 273
Yingfeng Zhang, Pingyu Jiang and Limei Sun
Analysis on Engineering Change Management Based on Information Systems .......... 283
Qi Gao, Zongzhan Du, Yaning Qu
Research and Realization of Standard Part Library for 3D Parametric and Autonomic Modeling .......... 293
Xufeng Tong, Dongbo Wang, Huicai Wang
Products to Learn or Products to Be Used? .......... 303
Stéphane Brunel, Marc Zolghadri, Philippe Girard
Archival Initiatives in the Engineering Context .......... 313
Khaled Bahloul, Laurent Buzon, Abdelaziz Bouras
Design Information Revealed by CAE Simulation for Casting Product Development .......... 323
M.W. Fu
An Ontology-based Knowledge Management System for Industry Clusters .......... 333
Pradorn Sureephong, Nopasit Chakpitak, Yacine Ouzrout, Abdelaziz Bouras
Chapter 3
Detail Design and Design Analysis.................... 343
Loaded Tooth Contact Analysis of Modified Helical Face Gears .......... 345
Ning Zhao, Hui Guo, Zongde Fang, Yunbo Shen, Bingyang Wei
Simplified Stress Analysis of Large-scale Harbor Machine’s Wheel .......... 355
Wubin Xu, Peter J Ogrodnik, Bing Li, Jian Li, Shangping Li
Clean-up Tool-path Generation for Multi-patch Solid Model by Searching Approach .......... 365
Ming Luo, Dinghua Zhang, Baohai Wu, Shan Li
Fatigue Life Study of Bogie Framework Welding Seam by Finite Element Analysis Method .......... 375
Pingqing Fan, Xintian Liu, Bo Zhao
Research on Kinematics Based on Dual Quaternion for Five-axis Milling Machine .......... 385
Rui-Feng Guo, Pei-Nan Li
Consideration for Galvanic Coupling of Various Stainless Steels & Titanium, During Application in Water-LiBr Absorption-Type Refrigeration System .......... 395
Muhammad Shahid Khan, Saad Jawed Malik
Real Root Isolation Arithmetic to Parallel Mechanism Synthesis .......... 405
Youxin Luo, Dazhi Li, Xianfeng Fan, Lingfang Li, Degang Liao
Experimental Measurements for Moisture Permeations and Thermal Resistances of Cyclo Olefin Copolymer Substrates .......... 415
Rong-Yuan Jou
Novel Generalized Compatibility Plate Elements Based on Quadrilateral Area Coordinates .......... 425
Qiang Liu, Lan Kang, Feng Ruan
Individual Foot Shape Modeling from 2D Dimensions Based on Template and FFD .......... 437
Bin Liu, Ning Shangguan, Jun-yi Lin, Kai-yong Jiang
Application of the TRIZ to Circular Saw Blade .......... 447
Tao Yao, Guolin Duan, Jin Cai
Chapter 4
Simulation and Optimisation in Design............ 457
Research on Collaborative Simulation Platform for Mechanical Product Design .......... 459
Zhaoxia He, Geng Liu, Haiwei Wang, Xiaohui Yang
Development of a Visualized Modeling and Simulation Environment for Multi-domain Physical Systems .......... 469
Y.L. Tian, Y.H. Yan, R. M. Parkin, M. R. Jackson
Selection of a Simulation Approach for Saturation Diving Decompression Chamber Control and Monitoring System .......... 479
Diming Yang, Xiu-Tian Yan and Derek Clarke
Optimal Design of Delaminated Composite Plates for Maximum Buckling Load .......... 489
Yu Hua Lin
Modeling Tetrapods Robot and Advancement .......... 499
Q. J. Duan, J. R. Zhang, Run-Xiao Wang, J. Li
The Analysis of Compression About the Anomalistic Paper Honeycomb Core .......... 509
Wen-qin Xu, Yuan-jun Lv, Qiong Chen, Ying-da Sun
C-NSGA-II-MOPSO: An Effective Multi-objective Optimizer for Engineering Design Problems .......... 519
Jinhua Wang, Zeyong Yin
Material Selection and Sheet Metal Forming Simulation of Aluminium Alloy Engine Hood Panel .......... 529
Jiqing Chen, Fengchong Lan, Jinlun Wang & Yuchao Wang
Studies on Fast Pareto Genetic Algorithm Based on Fast Fitness Identification and External Population Updating Scheme .......... 539
Qingsheng Xie, Shaobo Li, Guanci Yang
Vibration Control Simulation of Offshore Platforms Based on Matlab and ANSYS Program .......... 549
Dongmei Cai, Dong Zhao, Zhaofu Qu
Study on Dynamics Analysis of Powertrains and Optimization of Coupling Stiffness .......... 561
Wenjie Qin, Dandan Dong
Parametric Optimization of Rubber Spring of Construction Vehicle Suspension .......... 571
Beibei Sun, Zhihua Xu and Xiaoyang Zhang
The Development of a Computer Simulation System for Mechanical Expanding Process of Cylinders .......... 581
Shi-yan Zhao, Bao-feng Guo, Miao Jin
Rectangle Packing Problems Solved by Using Feasible Region Method .......... 591
Pengcheng Zhang, Jinmin Wang, Yanhua Zhu
Aircraft’s CAD Modeling in Multidisciplinary Design Optimization Framework .......... 601
X.L. Ji, Chao Sun
Optimization of Box Type Girder of Overhead Crane .......... 609
Muhammad Abid, Muhammad Hammad Akmal, Shahid Parvez
Chapter 5
New Mechanism and Device Design and Analysis .......... 619

Symmetric Toggle-lever-toggle 3-stage Force Amplifying Mechanism and Its Applications .......... 621
Dongning Su, Kangmin Zhong, Guoping Li
Kinematics and Statics Analysis for Power Flow Planet Gear Trains .......... 631
Zhonghong Bu, Geng Liu, Liyan Wu, Zengmin Liu
Green Clamping Devices Based on Pneumatic-mechanical Compound Transmission Systems Instead of Hydraulic Transmission Systems .......... 641
Guang-ju Si, Ming-di Wang, Kang-min Zhong, Dong-ning Su
Rapid Registration for 3D Data with Overlapping Range Based on Human Computer Interaction .......... 651
Jun-yi Lin, Kai-yong Jiang, Bin Liu, Chang-biao Huang
A New FE Modelling Approach to Spot Welding Joints of Automotive Panels and Modal Characteristics .......... 661
Jiqing Chen, Yunjiao Zhou and Fengchong Lan
Precision Measurement and Reverse Motion Design of the Follower for Spatial Cams .......... 671
Zhenghao Ge, Jingyang Li, Feng Xu, Xiaowei Han
Static Analysis of Translational 3-UPU Parallel Mechanism Based on Principle of Virtual Work .......... 681
Xiangzhou Zheng, Zhiyong Deng, Yougao Luo, Hongzan Bin
A Natural Frequency Variable Magnetic Dynamic Absorber .......... 691
Chengjun Bai, Fangzhen Song
Chapter 6
Manufacturing Systems Design......................... 699
Next Generation Manufacturing Systems .......... 701
R.H. Weston and Z. Cui
Tooling Design and Fatigue Life Evaluation via CAE Simulation for Metal Forming .......... 711
W.L. Chan, M.W. Fu, J. Lu
Modelling of Processing Velocity in Computer-controlled Sub-aperture Pad Manufacturing .......... 721
H. Cheng, Y. Yeung, H. Tong, Y. Wang
Load Balancing Task Allocation of Collaborative Workshops Based on Immune Algorithm .......... 729
XiaoYi Yu, ShuDong Sun
Study on Reconfigurable CNC System .......... 743
Jing Bai, Xiansheng Qin, Wendan Wang, Zhanxi Wang
Development of a NC Tape Winding Machine .......... 753
Yao-Yao Shi, Hong Tang, Qiang Yu
TRIZ-based Evolution Study for Modular Fixture .......... 763
Jin Cai, Hongxun Liu, Guolin Duan, Tao Yao, Xuebin Chen
Study on the Application of ABC System in the Refinery Industry .......... 773
Chunhe Wang, Linhai Shan, Ling Zhou, Guoliang Zhang
The Application of Activity-Based Cost Restore in the Refinery Industry .......... 783
Xingdong Liu, Ling Zhou, Linhai Shan, Fenghua Zhang, Qiao Lin
Research on the Cost Distribution Proportionality of Refinery Units .......... 793
Fen Zhang, Yanbo Sun, Chunhe Wang, Xinglin Han, Qiusheng Wei
Chapter 7
Collaborative and Creative Product Development and Manufacture .......... 803

From a 3D Point Cloud to a Real CAD Model of Mechanical Parts, a Product Knowledge Based Approach .......... 805
A. Durupt, S. Remy, W. Derigent
Research on Collaborative Design Support System for Ship Product Modelling .......... 815
Yiting Zhan, Zhuoshang Ji, Ming Chen
Research on New Product Development Planning and Strategy Based on TRIZ Evolution Theory .......... 825
Fuying Zhang, Xiaobin Shen, Qingping He
ASP-based Collaborative Networked Manufacturing Service Platform for SMEs .......... 835
Y. Su, B.S. Lv, W.H. Liao, Y. Guo, X.S. Chen, H.B. Shi
Virtual Part Design and Modelling for Product Design .......... 843
Bo Yang, Xiangbo Ze, Luning Liu
Integrated Paper-based Sketching and Collaborative Parametric 3D Modelling .......... 855
Franklin Balzan, Philip J. Farrugia, Jonathan C. Borg
Mechanical System Collaborative Simulation Environment for Product Design .......... 865
Haiwei Wang, Geng Liu, Xiaohui Yang, Zhaoxia He
Evolution of Cooperation in an Incentive Based Business Game Environment .......... 875
Sanat Kumar Bista, Keshav P. Dahal, Peter I. Cowling

Author Index .......... 883
Chapter 1 Front End of Engineering Design
Computer Aided Design: An Early Shape Synthesis System .......... 3
Alison McKay, Iestyn Jowers, Hau Hing Chau, Alan de Pennington, David C Hogg
Constraints and Shortfalls in Engineering Design Practice .......... 13
Lars Hein, Zhun Fan
Modular Product Family Development Within a SME .......... 21
Barry Stewart, Xiu-Tian Yan
Duality-based Transformation of Representation from Behaviour to Structure .......... 31
Yuemin Hou, Linhong Ji
Automatic Adaptive Triangulation of Surfaces in Parametric Space .......... 41
Baohai Wu, Shan Li, Dinghua Zhang
Research on Modeling Free-form Curved Surface Technology .......... 51
Gui Chun Ma, Fu Jia Wu, Shu Sheng Zhang
Pattern System Design Method in Product Development .......... 61
Juqun Wang, Geng Liu, Haiwei Wang
Development of a Support System for Customer Requirement Capture .......... 71
Atikah Haji Awang, Xiu-Tian Yan
Comparison About Design Methods of Tonpilz Type Transducer .......... 81
Duo Teng, Hang Chen, Ning Zhu, Guolei Zhu, Yanni Gou
Effect for Functional Design .......... 91
Guozhong Cao, Haixia Guo, Runhua Tan
Quality Control of Artistic Scenes in Processes of Design and Development of Digital-Game Products .......... 103
P.S. Pa, Tzu-Pin Su
Computer Aided Design: An Early Shape Synthesis System
Alison McKay, Iestyn Jowers, Hau Hing Chau, Alan de Pennington, David C Hogg
University of Leeds, Leeds, LS2 9JT, UK
Abstract Today’s computer aided design systems enable the creation of digital product definitions that are widely used throughout the design process, for example in analysis or manufacturing. Typically, such product definitions are created after the bulk of [shape] designing has been completed because their creation requires a detailed knowledge of the shape that is to be defined. Consequently, there is a gulf between the exploration processes that result in the selection of a design concept and the creation of its definition. In order to address this distinction, between design exploration and product definition, understanding of how designers create and manipulate shapes is necessary. The research outlined in this paper results from work concerned with addressing these issues, with the long term goal of informing a new generation of computer aided design systems which support design exploration as well as the production of product definitions. This research is based on the shape grammar formalism. Shape grammars have been applied in a range of domains, commonly to generate shapes or designs that conform to a given style. However, a key challenge that restricts the implementation of shape grammar systems lies in the detection of embedded parts, or sub-shapes, which are manipulated according to shape rules to create new shapes. The automatic detection of sub-shapes is an open research question within the shape grammar community and has been actively explored for over thirty years. The research reported in this paper explores the use of computer vision techniques to address this problem; the results achieved to date show real promise. An early prototype is presented and demonstrated on design sketches of martini glasses taken from a student research project. Keywords: shape synthesis, shape grammar, computer vision, sub-shape detection
1. Introduction
Currently available computer aided design systems enable the creation of digital product definitions that are widely used throughout the design process, for example in analysis or manufacturing. Typically, such product definitions are created after the bulk of [shape] designing has been completed because their creation requires a detailed knowledge of the shape that is to be defined. Consequently, there is a gulf
between the exploration processes that result in the selection of a design concept and the creation of its definition. In order to address this distinction between design exploration and product definition, understanding of how designers create and manipulate shapes is necessary [1]. The research outlined in this paper is concerned with addressing these issues, with the long term goal of informing a new generation of computer aided design systems which support design exploration as well as the production of product definitions. This paper reports on developments towards an automated shape synthesis system intended to augment the generation of design shapes early in the product development process. The system is based on the shape grammar formalism.
2. Background
Shape grammars are a formal production system in which languages of shapes or designs are generated according to shape replacement rules. Their mathematical formalism enables shapes to be manipulated according to their visual structure, rather than according to underlying representations. As a result, designers are free to manipulate formal descriptions of their designs in a manner that reflects the interactive freedom often associated with sketching [1]. When a designer manipulates parts of a design, emergent patterns and associations can be discovered which suggest new features and relations. Shape rules provide a formal mechanism whereby the structure of the design can be reinterpreted according to these emergent patterns, which can then be recognised and manipulated [2]. Such reinterpretation is a vital element in the exploration of designs and is believed to be a decisive component of innovative design [3]. Since their conception, shape grammars have been applied in a variety of disciplines including art and design, architecture, and product design. The majority of these applications have used shape grammars as a formal approach to the analysis of styles and the generation and exploration of design families. Chau et al. [4] provide a comprehensive timeline of research in the application of shape grammars (reproduced in Figure 1). These applications have demonstrated the viability of generative techniques for capturing and reproducing styles in a range of design domains.
Figure 1. Shape grammar applications to designs
The basic elements of a shape grammar include an initial shape (that seeds shape generation) and a set of shape replacement rules, as illustrated in Figure 2. In this example, two shape replacement rules are defined and the initial shape is a square. The first rule replaces a square with a shape consisting of a square and an overlapping rectangle, whilst the second rule replaces a rectangle with a shape consisting of a rectangle and an abutting square. The shapes at the bottom of the figure show a fragment of the network of shapes that can be generated from the initial shape via application of the two shape rules.
Figure 2. A simple two rule grammar
Application of a shape rule involves two key steps. Firstly, the shape on the left-hand side of a rule must be identified embedded under some Euclidean transformation in the shape from which a new shape is to be computed; this is referred to as “sub-shape detection”. This detection is not restricted to recognising sub-shapes according to the structure that initially defines the shapes, but can be applied to any sub-shapes, even if these sub-shapes emerge as a result of previous rule applications. Secondly, the rule is applied by replacing the sub-shape from the left-hand side of the rule with the shape on the right-hand side of the rule, under the defined Euclidean transformation. A key benefit that results from defining a shape grammar is that it becomes possible to generate large networks of shapes, or design families, where multiple avenues of shape synthesis can be explored by designers. The size of the potential shape networks is vast and sometimes indefinite. An example using the initial shape and rules from Figure 2 is given in Figure 3. At each step, a selection of designs is generated and presented, from which one design is chosen (highlighted in red) which is used to seed further shape generation.
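The two steps just described, sub-shape detection and replacement, can be sketched in a few lines of Python for a deliberately restricted case: shapes represented as sets of line segments, matched under translation only. This is an illustrative reconstruction, not the authors' implementation; the function names and the segment representation are our own, and a full shape grammar interpreter must also handle rotation, reflection, scaling, and segments embedded in longer lines (maximal-line representations).

```python
# Illustrative sketch only: a shape is a frozenset of line segments, each
# segment a pair of (x, y) integer endpoints stored in sorted order, so
# that translation preserves the representation. Matching is limited to
# translation, and segments are treated as atomic, so emergent sub-shapes
# embedded in longer lines are not detected.

def translate(shape, dx, dy):
    """Translate every segment of a shape by (dx, dy)."""
    return frozenset(((x1 + dx, y1 + dy), (x2 + dx, y2 + dy))
                     for (x1, y1), (x2, y2) in shape)

def matches(lhs, shape):
    """Yield every translation under which lhs is embedded in shape."""
    anchor = min(lhs)[0]            # first endpoint of lhs's minimal segment
    for (sx, sy), _ in shape:       # try mapping the anchor to each segment start
        dx, dy = sx - anchor[0], sy - anchor[1]
        if translate(lhs, dx, dy) <= shape:
            yield dx, dy

def apply_rule(lhs, rhs, shape):
    """Return all shapes obtainable by one application of the rule lhs -> rhs."""
    return [(shape - translate(lhs, dx, dy)) | translate(rhs, dx, dy)
            for dx, dy in matches(lhs, shape)]
```

Applying `apply_rule` repeatedly to each result, and letting a designer choose which branch to pursue, yields exactly the kind of shape network illustrated in Figure 3.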
KEY:
- The shape from which subsequent shapes are to be computed
- Rule that will be applied
- The shapes computed from the selected shape using the given rule

Figure 3. A network of shapes computed from the two rule grammar in Figure 2
Significant efforts have been directed towards creating systems for automating the application of shape grammars, in order to realise what Smyth and Wallace [5] refer to as a “form synthesis engine” within their model for the synthesis of aesthetic product form. Some progress has been made towards this goal. However, a key challenge that restricts such shape grammar implementations lies in the detection of embedded sub-shapes. For example, Chau et al. [4] describe a 3D shape grammar implementation for curvilinear shapes. Once a sub-shape has been detected, this system can automatically apply a rule. However, sub-shapes have to be identified manually (unless they conform to a particular class of shapes consisting of straight lines and circular arcs). Other significant developments have been presented by Tapia [6], who demonstrated a robust implementation for shapes composed of straight lines in 2D, and Jowers [7], who reports success with shapes composed of 2D Bezier curves. In these works analytical solutions to the sub-shape detection problem are presented and shape grammar implementations are described. However, these analytical solutions have a number of limitations, some of which will be discussed in the next section. Instead, this paper reports an alternative approach to sub-shape detection based on the application of approaches that have been established in the computer vision community. Computer vision is concerned with building systems that obtain information from images, and research in this field has resulted in a range of techniques that enable the identification of shapes in real-world situations. For example, statistical learning algorithms have been used for modelling and recognizing new object categories [8, 9]. In contrast to analytic approaches, which search for sub-shapes in
the mathematical representation of a shape, the method used in this research looks for sub-shapes in visual objects derived from a shape’s mathematical representation. This paper reports early results of an exploration of the application of the techniques used for the recognition of visual objects to sub-shape detection in shape grammar-based design systems.
3. A Computer Vision Based Approach to Sub-shape Detection

Previous approaches that have been used to address the sub-shape detection problem have relied on analytical methods to automatically match sub-shapes under transformation. As a result, a number of difficulties have arisen that severely limit the capabilities of the computational systems built upon these approaches. A key premise in the application of shape grammars lies in the fact that the shape to which a rule might be applied, and so in which sub-shapes are to be detected, is a visual entity that is the result of a shape definition process rather than the shape definition itself. When humans search for a sub-shape, visual similarity implies equality whereas when analytical techniques carry out the same process visual similarity does not necessarily imply equality. For example, the two curve segments highlighted in Figure 4 are visually similar but analytically they are distinct because they are segments of infinite curves which are mathematically distinct (as illustrated by the extended curves).
Figure 4. Visually similar curves that are mathematically distinct
Further difficulties result from the dependency of analytical approaches on the formal structures used to represent shapes. These formal structures restrict the general applicability of sub-shape detection algorithms which are suitable only for particular classes of shapes. For example, Tapia’s system addressed the sub-shape detection problem for shapes composed of 2D lines, but the analytical solution employed cannot be readily extended to the freeform curves that typify consumer product designs. Also, in analytical approaches, matching of sub-shapes is achieved by embedding the sub-shape into the shape that is the subject of the search. For this reason, shapes that can be matched are restricted according to the formal structures that were used to define them, for example lines or Bezier curves,
and the embedding properties of shapes are dependent on the formal structures used to represent the shapes. To overcome these problems, the research in this paper has adopted a computer vision approach that involves comparing images in the form of bitmaps according to a distance metric. Existing applications of this approach include word spotting in Chinese document images, visual navigation of robots, merging of partially overlapping images into a single image, and computer-assisted surgery. For image matching, the algorithm used checks whether a template image (representing the sub-shape to be detected) is present in a test image; the lower the separation distance value, the better the match. If the template image is a sub-shape of the test image, then the distance metric has a value of zero. This metric can therefore be used to determine whether one shape can be embedded in a second. The algorithm has been implemented in an experimental software prototype, where sub-shapes are detected arbitrarily embedded in the target image by considering the distance metric under transformation: currently translation and reflection. The prototype was applied to design sketches produced by undergraduate final year Masters students in Product Design. The application of the prototype is described in the next section.
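The bitmap comparison described above can be illustrated with a small sketch. Since the paper does not name its distance metric, the directed Hausdorff distance is used here as a stand-in: it is zero exactly when every template pixel coincides with a pixel of the test image, i.e. when the template is embedded in it. Binary images are represented as sets of black-pixel coordinates, only translation is searched, and the function names are our own.

```python
# Hedged sketch: binary images as sets of (x, y) black-pixel coordinates.
# The directed Hausdorff distance stands in for the paper's (unnamed)
# separation metric: the smaller the value, the better the match, and a
# value of zero means the template is embedded in the test image.

def directed_hausdorff(template, image):
    """Max over template pixels of the distance to the nearest image pixel."""
    return max(min(((tx - ix) ** 2 + (ty - iy) ** 2) ** 0.5
                   for ix, iy in image)
               for tx, ty in template)

def find_subshape(template, image, tolerance=0.0):
    """Brute-force search over translations; return every offset whose
    distance does not exceed tolerance (0.0 demands exact embedding)."""
    x0 = min(x for x, _ in template)
    y0 = min(y for _, y in template)
    template = {(x - x0, y - y0) for x, y in template}   # anchor at origin
    hits = []
    for dx in range(max(x for x, _ in image) + 1):
        for dy in range(max(y for _, y in image) + 1):
            shifted = {(x + dx, y + dy) for x, y in template}
            if directed_hausdorff(shifted, image) <= tolerance:
                hits.append((dx, dy))
    return hits
```

A non-zero tolerance is what would allow visually similar but mathematically distinct curves, such as those in Figure 4, to be matched; a production system would also replace the brute-force scan with a precomputed distance transform.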
4. An Application of Sub-shape Detection to Design Sketches
The software prototype was evaluated on martini glass designs prepared by undergraduate Masters students in Product Design in preparation for a workshop on shape computation. An example of students’ designs is given in Figure 5.
Figure 5. Example martini glass designs
Designers’ sketches are fed into the system in the form of bitmaps. The software prototype then allows the designer to define a sub-shape to be detected by selecting it as a collection of pixels from the bitmap or by importing an alternative bitmap image. For example, in the screen images in Figure 6, sketches of martini glasses have been imported and are displayed on the left-hand side of each screen. In the top right-hand corner of the screen a stylised image of a martini glass has been imported. In this example, the system searches the image of martini sketches for sub-shapes that match the stylised martini glass image. The sub-shapes identified by the system are highlighted in red on the left-hand side of each screen.
Figure 6. Screen images of the sub-shape detector in operation
In the software prototype, sub-shapes can be detected under both translation and reflection transformations. Effort is currently being directed towards the implementation of more general kinds of transformation operation and on the definition and application of shape replacement rules. Early developments allow rules to be defined by creating either a new shape to act as the right-hand side or by editing or transforming the shape from the left-hand side. Rules are then applied by removing the pixels that form the left-hand side of the rule and replacing them with the pixels that form the right-hand side of the rule.
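The pixel-level rule application just described can be stated compactly. This is a hypothetical reconstruction (the names are ours): given a matched offset from the detection step, the pixels forming the left-hand side of the rule are removed and the right-hand-side pixels are substituted at the same offset.

```python
# Hedged sketch of pixel-level rule application, assuming images and rule
# sides are sets of (x, y) pixel coordinates and that `offset` is a
# translation at which the left-hand side was detected in the image.

def apply_pixel_rule(lhs, rhs, image, offset):
    """Remove the lhs pixels at `offset` and insert the rhs pixels there."""
    dx, dy = offset
    shift = lambda side: {(x + dx, y + dy) for x, y in side}
    return (image - shift(lhs)) | shift(rhs)
```

This covers translation only; supporting reflection, as the prototype does, would add a mirroring step before the shift.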
5. Concluding Remarks
The research reported in this paper indicates that there is potential in exploring further the use of computer vision techniques for sub-shape detection. Automated sub-shape detection is a key prerequisite to achieving the goal of a shape grammar-based shape synthesis system to support design synthesis activities. Our vision for how such a system might augment design activity is illustrated in Figure 7. It can be seen that there are three intertwined cycles.
- The Shape Synthesis System (S3) generating shapes
- The designer designing shapes
- Communication between the two

Figure 7. Three intertwined cycles
The designer designing shapes and the shape synthesis system computing shapes are independent of each other and joined by a third cycle of communication between the two. Information flowing from the designer to the shape synthesis system is envisioned to be in the form of commonly used design descriptions, such as sketches or, as the designing and computation of shapes proceeds, in the form of shape rules. Information flowing back to the designer will be in the form of lattices of computed shapes (as illustrated in Figure 3) that prompt and inspire the designer. We anticipate that such a system will expand the space within which design exploration occurs and so enhance design activity. A key challenge in the next stage of this research lies in the design of the user interface for communication between designers and the shape synthesis system. This interface is critical in order to ensure a fluid interaction between the designer and the designs that are currently being explored, and in order to avoid disruption to the design process.
6. Acknowledgements
The research reported in this paper was carried out as part of the Design Synthesis and Shape Generation project (www.engineering.leeds.ac.uk/dssg/) which is funded through the AHRC1/EPSRC2 Designing for the 21st Century programme. The example martini glass designs were reproduced with permission of Jessica Diniz who graduated with an MDes in Product Design in July 2007.
7. References
[1] Prats, M. and Earl, C.F. Exploration through drawings in the conceptual stage of product design. In: 2nd International Conference on Design Computing and Cognition (DCC'06), Eindhoven. Dordrecht: Springer, 2006, pp. 83-102.
[2] Stiny, G. Introduction to shape and shape grammars. Environment and Planning B: Planning and Design, 1980, 7(3), pp. 343-351.
[3] Suwa, M. Constructive perception: coordinating perception and conception toward acts of problem-finding in a creative experience. Japanese Psychological Research, 2003, 45(4), pp. 221-234.
[4] Chau, H.H., et al. Evaluation of a 3D shape grammar implementation. In: 1st International Conference on Design Computing and Cognition (DCC'04), Cambridge, Massachusetts. Dordrecht: Kluwer, 2004, pp. 357-376.
[5] Smyth, S.N. and Wallace, D.R. Towards the synthesis of aesthetic product form. In: ASME 2000 Design Engineering Technical Conferences and Computers and Information in Engineering Conference (DETC'00), Baltimore, Maryland, 2000.
[6] Tapia, M. A visual implementation of a shape grammar system. Environment and Planning B: Planning and Design, 1999, 26, pp. 59-73.
[7] Jowers, I. Computation with curved shapes: Towards freeform shape generation in design. PhD Thesis, The Open University, 2006.
[8] Heap, A.J. and Hogg, D.C. Wormholes in shape space: Tracking through discontinuous changes in shape. In: IEEE International Conference on Computer Vision, Bombay, 1998.
[9] Baumberg, A. and Hogg, D.C. Learning flexible models from image sequences. In: 3rd European Conference on Computer Vision, Stockholm, 1994.
1 UK Arts & Humanities Research Council
2 UK Engineering & Physical Sciences Research Council
Constraints and Shortfalls in Engineering Design Practice
Lars Hein (1), Zhun Fan (2)
(1) IPU, Produktionstorvet, Building 425, DK-2800 Kgs. Lyngby, Denmark
(2) Department of Mechanical Engineering, DTU, Nils Koppels Allé, DK-2800 Kgs. Lyngby, Denmark
Abstract The effectiveness of Engineering Design in practice is what results from a multitude of processes within the realm of Engineering Design itself. However, in order to understand the phenomenon, the processes whereby Engineering Design as a discipline comes together with disciplines from other areas of the company, to sustain the product development process itself, must be taken into account. Therefore, when companies strive to obtain an attractive level of effectiveness of their engineering design activities, the product development process as a whole must be considered. In this paper the first step in an approach by which to optimize the product development process of a company is suggested. This approach makes it possible to arrive at specific conclusions about the constraints and shortfalls of the engineering design activities in a product development context. Keywords: Industry, Constraints, Product Development, Engineering Design, Effectiveness.
1. Engineering Design and the Product Development Core
The role of Engineering Design in the innovation processes of a company is a central one. Therefore its effectiveness is of great concern not only to the companies that deal with such processes, but also to those that do research into, and teach within, the field of engineering design. However, trying to understand its effectiveness from a purely internal analysis of the engineering design processes and activities leads to an unsatisfactory and incomplete picture. An input/output analysis of the Engineering Design Department of a company yields only the most superficial result, one that is almost impossible to relate to the overall success of the company. This is fair warning that any attempt to optimize the processes of engineering design on the basis of an internal analysis will lead to suboptimization (fig.1).
Figure 1. The Product Development Core (PDC) of a company is where innovation and product development take place. The PDC of the company has many contributors, not only those formally associated with development, such as Engineering Design and Industrial Design.
Some approaches in the research into effectiveness in engineering design deal with relevant engineering design tools and methods, and with the extent to which they are used in an industrial context, with some work reporting a low rate of use of the more complex tools [2]. However, what carries the effectiveness of the engineering design and product development processes is more than tools and methods. Generally, at least seven dimensions of the product development core must be considered in order to come to a satisfactory understanding: the organisational structure, the physical environment, the performance measuring system, the knowledge structure, methods & tools, the social system, and the decision structure (fig.2).
Figure 2. The seven dimensions of the Product Development system [adapted from 1].
2. Constraints and Shortfalls
That the quest to understand the constraints and shortfall of engineering design and of the product development process is relevant is indicated by the frustrations that
are voiced to those who enter into a serious discussion on the subject with people from industry:

- “We don’t get enough from our investments in product development”
- “We use too much energy dealing with our current products, and do not innovate”
- “Arriving at new products takes us too long”
- “We do too many new products of the 5%-improvement kind”
- “We have no control over our product cost”
- “The content of new and powerful technologies in our products is low”
- “Our new products fail to realize the market potential”
- “Our new products’ contribution to the company revenue is too weak”
The understanding in the research community of the engineering design processes has made remarkable progress in the last ten to twelve years [3]. Thus, there is a potential for this understanding to be utilized to reorganize and re-engineer the product development organization in those companies, and to change what is basically an unsatisfactory situation. However, there is no direct relation between realizing that there is a problem and the cure that must be specified to actually change the company and increase its product development effectiveness. This lack of direct correlation between problems and cure is also recognized by the reported work on metrics and benchmarking of engineering design [4, 5] and metrics of product development [6, 7].
3. The Concept of ‘A Diagnosis’
This paper puts forth the hypothesis that, as a first step, a diagnosis may be made of the product development core of a company, leading to an understanding of the underlying illness, or illnesses. This approach is based on the assumption that the product development core of a company shares important characteristics with that of a living organism. The approach is founded on current research-based understanding of the product development processes, combined with the accumulated experience of using the diagnostic tools and procedures in real companies.

3.1 Understanding the Current Product Development Core
It is one of the basic assumptions that an indispensable first step, before making a diagnosis, is to understand how the existing product development core works. One must be able to understand the composition of the system in the seven aforementioned dimensions (fig.2), and understand how the product development tasks are related to the overall strategies and goals of the company. It is also important to gauge the modus operandi and attitudes of key personnel in the
product development core, in order to understand the micro-mechanisms which are the actual generators of innovation and synthesis.

3.2 Understanding the Current Problems
It is another basic assumption that one must understand in detail where the problems lie with the current product development core before any serious attempt at repair can be made: if we do not understand where the problems are, as a reflection of what the company in its current state is capable of, we will not succeed in creating a new and better product development core.

3.3 Seeing the Company ‘Freed from the Ties that Bind’
Before attempting the diagnosis, one must identify if and where the company has been tied down by unwittingly accepting imaginary boundaries, rules, or norms related to its product development. The diagnosis should rest upon an understanding of what product development could and should be like in the company, freed from those ties.

3.4 Understanding the Company’s Environment
Lastly, the environment in which the company must function must be understood. Important aspects of the environment are:

- The market that the company addresses
- The customers that the company caters to
- Direct and indirect competition
- The nature of the applied technology, and the dynamics involved
- The context and reality of the society where the company must function

4. Tools for the Diagnosis
The diagnosis is supported by a number of tools, developed from our current understanding of the engineering design and product development processes. Basically, the tools are organised into three sets: basic reference patterns, gap analysis, and the ‘hypotheses of malfunction’.

4.1 Five Basic Reference Patterns
The five basic reference patterns represent five different facets of product development. They are used to compare what is going on in the company with what are generally known to be healthy and productive patterns. Any major deviation from those patterns points to a potential cause of problems. Composition of the Product Development Core deals with the different organisational elements related to the core. How the contributors and stakeholders
interact is highly important to the function and effectiveness of the engineering design activities, and thus scrutiny of the corresponding patterns is essential.
Figure 3. The five basic reference patterns used in the diagnosis: Composition of the Product Development Core, the structure of the product development tasks, the set of coordinated strategies in the company, the four stages of maturity of the product development system, and the seven dimensions of the product development system.
The structure of the product development tasks deals with the mapping of the often complex pattern of development tasks that the product development core is expected to solve. Here it is important to notice that there are often tasks at high, medium, and low levels in the company, requiring very different competences and measures. A check should be made in order to confirm that the capabilities, resources, and organization are adequate to deal with the tasks with satisfactory results. The coordinated strategies in the company deals with mapping the local strategies of the most important functional areas of the company (typically areas such as Production, Service, Product Development, Quality, and Sales), and checking them for reciprocated consistency and support. The four stages of maturity of the product development system deals with the identification of how far the company has come in its lifecycle, and consequently what the role of the product development core ought to be. Basically, the four stages are: 1. The engineering design stage – the young (and small) company where product development is handled by the engineering design group alone, and where a fair share (if not all) of the commercial awareness is also located. 2. The product development stage – where the technical and the commercial competence and
resources are found in different groups, which must then come together to do product development. 3. The product planning stage – when product development has become so complex that extensive planning and management become necessary. 4. The coordinated strategies stage – when further growth and complexity have made the coordination between strong and self-sufficient individual departments in the company a major problem. The seven dimensions of the product development system deals with analysing how the company has combined elements from all dimensions into a total working pattern, and how this pattern compares to the patterns generally known for their functionality, effectiveness, and reliability. Again: any major deviation from those patterns points to a potential cause of problems.

4.2 Gap Analysis - ‘What We Believe We Are’ vs ‘What We Really Are’
In a company there may be many different (and often conflicting) perceptions among people in different positions about product development - with respect to ‘who we are’, ‘what we are doing’, and ‘how we do it’. And even for all of these individual perceptions, the reality may be something different again. The goal of the gap analysis is to arrive at a realization of the gap between ‘what we believe we are’ and ‘what we really are’.

4.3 Specific Ailments – the ‘Hypotheses of Malfunction’
Maybe the most powerful set of tools for establishing the diagnosis is the stock of mechanisms notoriously known to generate constraints and shortfalls, previously identified in other companies. The relevant ‘hypotheses of malfunction’ may be selected from the armoury on the basis of the initial findings of the diagnosis, and subsequently put to the test. The proof or disproof of the individual hypotheses will often emerge from interviews with key personnel directly involved in the development processes, or with stakeholders in engineering design or the product development results. Currently the stock comprises some 30 to 40 hypotheses, examples of which are:

- “Not enough management focus on product development.”
- “A lag of engineering competences in respect of the tasks to be performed.”
- “The role of engineering design and/or product development in the company is unclear.”
- “The goals set for engineering design activities are weak and unambitious.”
- “No link from company strategy through to the engineering design and product development activities.”
- “The engineering design department is tied down by old debt (= previously unfinished, but unproductive work).”
- “The chair of the engineering design manager (or of the product development manager) is empty.”
Constraints and Shortfalls in Engineering Design Practice 19
- “The handbook and the procedures for the product development process are being ignored.”
- “In the projects, the information related to market and customers is not forthcoming, or is weak and unsubstantiated.”
- “The business acumen has been stifled by bureaucracy.”

5. Testing the Concept, Tools and Procedures of the Diagnosis
Over a period, the concept of ‘diagnosis’ has been put to the test, and the tools and procedures described above have been tested, modified and optimized. In all, more than fifty companies, ranging in size from a few hundred to many thousand employees, have had a diagnosis made either by our group or by our cooperation partners. The companies have predominantly been manufacturing companies, with a number of service-industry companies also present. The diagnosis was always performed by a team of two researchers, and in no instance were they themselves employed by, or affiliated with, the company. The presentation of the findings of the diagnosis to the company always followed the same pattern: the findings and conclusions were written up in a formal report, and subsequently presented to the management board - in some instances this group was supplemented by key personnel from engineering design or from the product development core. The management board then discussed the diagnosis and, as a rule, approved the findings and conclusions. The result was then communicated to all relevant staff, and the next phases, those concerned with the re-engineering of the organization, could begin. The conclusion of the test is that the diagnosis arrived at was approved by the individual company in over 90% of the cases. In more than 80% of the cases the go-ahead was given for the subsequent organization re-engineering activities.
6. Conclusions
In order to deal with the constraints and shortfalls in engineering design practice, the processes by which Engineering Design comes together with disciplines from other areas of the company to sustain the product development process must be taken into account. The accumulated results of research into Engineering Design and product development are now adequate to provide the basis for the development of a structured process by which to reduce or remove the constraints and shortfalls in engineering design practice. However, the application and training
L. Hein and Z. Fan
of this process in real companies is required in order to adapt and make operational the suggested tools and procedures. In this paper, the first step of such a structured process, the diagnosis, is proposed. Through rigorous testing in companies in the Nordic countries, the concept of diagnosis and the corresponding set of tools and procedures have been adapted and demonstrated to yield productive results. Future research should focus on the remaining steps in the process. In addition, the testing and practice of the process should be used to collect data for the subsequent generation of metrics by which to benchmark the engineering design capabilities and effectiveness of a company, to add to the data repository in this field.
7. References
[1] Mørch L., Hein L., “The Seven Dimensions of a Development Organisation”. International Conference on Engineering Design, Proceedings of ICED 90, Zürich, August 1990, Series WDK 19.
[2] Yang M.C., “Design Methods, Tools, and Outcome Measures: A Survey of Practitioners”. Proceedings of the ASME 2007 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, IDETC/CIE 2007, September 4-7, 2007, Las Vegas, Nevada, USA.
[3] Bligh A., Sodhi M., “Designing the Product Development Process”. Proceedings of the ASME 2007 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, IDETC/CIE 2007, September 4-7, 2007, Las Vegas, Nevada, USA.
[4] Acosta L.M.C., Trabasso L.G., Araújo C.S., “Analysis of the Balanced Scorecard Formulation Process for Setting up Engineering Design Performance Metrics”. Proceedings of the 14th International Conference on Engineering Design (ICED 03), the Design Society, 2003.
[5] Acosta L.M.C., Araújo C.S., Trabasso L.G., “A Review of Product Development Performance Metrics Investigations with Emphasis on the Designer Level”. Proceedings of Design 2002, Dubrovnik, Croatia, 2002, pp. 621-626.
[6] O’Donnell F.J., Duffy A.H.B., “Modeling design development performance”. International Journal of Operations & Production Management, Vol. 22, No. 11, 2002.
[7] Driva H., Pawar K., Menon U., “A framework for product development performance metrics”. International Journal of Business Performance Management, Vol. 1, No. 3, 1999, pp. 312-326.
Modular Product Family Development Within a SME Barry Stewart, Xiu-Tian Yan The University of Strathclyde, Department of Design, Manufacturing and Engineering Management, Glasgow UK.
Abstract Product variation is becoming an important factor in companies’ ability to meet customer requirements accurately. Ever-increasing consumer options mean that customers have more choice than ever before, which puts commercial pressure on companies to continue to diversify. This can be a particular problem within Small to Medium Enterprises (SMEs), which do not always have the level of resources to meet these requirements. As such, methods are required that enable companies to produce a wide range of products at the lowest cost and in the shortest time. This paper details a new modular product design methodology that focuses on developing modular product families. The methodology’s operation is described and a case study detailed of how it was used within a SME to define the company’s product portfolio and create a new Generic Product Function Structure from which a new family of product variants can be developed. The methodology lends itself to modular re-use, which has the potential to support rapid development and configuration of product variants. Keywords: Modules, Methodology, Product Family, SME
1. Introduction
In today’s world of high-paced change and ever-increasing consumer options, it is often vital for companies to diversify their product ranges to meet customers’ changing needs. To keep up with such strains, and to help handle the subsequent complexity of the design process, companies have to find new and innovative ways of managing their product development. These factors are particularly relevant within Small to Medium Enterprises (SMEs), where lack of time and resources and competitive market environments mean that constant pressure is put on companies to grow. Modularity is a concept that is being introduced as a means to meet some of these complexities and to help introduce a greater variety of products to the market in shorter times [1]. The heart of research into product modularity is the development of modular products; therefore, methods for developing more modular products are essential [2]. There has been much research carried out into modular design methods [3], with many different techniques and methodologies proposed to help companies create ‘modular’ products. The benefits of such formal tasks are well documented
with reported cost savings of up to 64 times [4] and studies showing that by implementing formal methods, as opposed to relying on designers’ natural instincts, significant savings in time and resources can be achieved [5]. One such piece of work, detailed in previous research [6,7,8], is the GeMoCURE methodology, a modular design methodology that also takes into account product perspectives, lifecycle objectives, modular re-use and product families. One way to meet ever-increasing customer requirements is to introduce product families into a company’s product portfolio. A product family is generally considered to be a group of similar products that are all derived from a common product platform [9]. In order to use such a concept to help companies create product variety, these platforms have to be well defined and implemented, which is one of the goals of the GeMoCURE methodology. It aims to do this by creating a structure of well-defined modules that can either be combined to form a product platform or added to the platform to generate new products. The methodology uses techniques that allow modules to be formed based on product functions and that take into account the different perspectives that are inherent within any product development. Modularity is ideally suited to the concept of design for re-use, i.e. reusing standard, proven components/assemblies/modules in the design of new products. This has the benefit of making a product more reliable (due to the use of proven modules), cheaper due to the reduced resources necessary for development (since a larger proportion of modules designed by others are used), easier to maintain, etc. [10]. The overall objective of the design methodology is to create the greatest product variation while keeping costs, time and resources to a minimum. This paper will examine how the methodology was implemented within a SME, how it is being used, and future objectives.
2. Modular Product Families
The term module is used widely in many different contexts to describe a variety of different concepts. In the realm of product design, Gershenson et al. [3] state that there is no universally agreed definition of what a module comprises. Ulrich and Eppinger [11] put forward the notion that ideal modules are ‘chunks’ of components where each ‘chunk’ represents one function or a series of functions. This definition is backed up by Stone et al. [12], who state that “Modules are defined as physical structures that have a one-to-one correspondence with functional structures”. It is possible to summarise from these definitions and from other prominent research [12,13,14] that the main features that define a module are: structural independence, functional independence, and minimal interfaces or interactions with other modules or outside influences. The definition that has formed the foundation for the GeMoCURE methodology is built on these key points and is given by Smith and Duffy [14], who state that “Modules are commonly described as a group of ‘functionally’ or ‘structurally’ independent components clustered such that ‘interactions are localized within each module and interactions between modules are minimised’ [16]”.
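The Smith and Duffy criterion quoted above can be made concrete by scoring a candidate partition of components into modules: interactions should be localised within modules and minimised between them. The following sketch is illustrative only (the function name and data are not from the paper):

```python
# Sketch: score a candidate modular partition by counting intra- vs
# inter-module interactions, per the definition quoted above.

def interaction_split(dependency, partition):
    """Count intra-module and inter-module interactions.

    dependency -- symmetric matrix (list of lists); dependency[i][j] > 0
                  means components i and j interact.
    partition  -- list of modules, each a set of component indices.
    """
    member = {}
    for m, module in enumerate(partition):
        for c in module:
            member[c] = m
    intra = inter = 0
    n = len(dependency)
    for i in range(n):
        for j in range(i + 1, n):      # each unordered pair once
            if dependency[i][j]:
                if member[i] == member[j]:
                    intra += 1
                else:
                    inter += 1
    return intra, inter

# Two candidate partitions of a four-component product:
dep = [[0, 1, 1, 0],
       [1, 0, 0, 0],
       [1, 0, 0, 1],
       [0, 0, 1, 0]]
good = [{0, 1}, {2, 3}]   # localises 2 of the 3 interactions
bad  = [{0, 3}, {1, 2}]   # localises none of them
print(interaction_split(dep, good))  # (2, 1)
print(interaction_split(dep, bad))   # (0, 3)
```

A better modularisation is simply one with a higher intra count and a lower inter count for the same product.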
Modularisation of products can lead to a wide range of different products, but one of the uses where modularisation can be most effective is in conjunction with a common product platform. The increasingly specific demands of customers have led many companies to introduce large product families to try to meet this wide range of needs and variety. An efficient and effective approach is to build product families based on a common product platform, which allows for the accurate management of product variety [17]. Schellhammer and Karandikar [18] define such a product platform as “…the common basis for multiple product variants targeted to meet specialised requirements for specific applications and markets.” This proposes the idea that a product platform is a common base upon which modules can be added to create a wide variety of products. This type of product architecture lends itself to modular design as it allows modules to be interchanged and reused to create the maximum range of products from the components available. This is supported by Robertson and Ulrich [20], who state that “The platform concept is characterised by the consequent modularisation of a product architecture and the integration of basic (common) elements (components, functions, interfaces, design rules) over a product family.” Schellhammer and Karandikar [18] also define the platform further by declaring that they consider the “product platform to represent a set of functions, features, parameters, components, and information around which a product architecture to base a family of products and technologies can be developed.” This shows that a product platform does not necessarily have to consist of purely physical modules/components and can also contain the underlying technology, the product functions, or even the knowledge associated with the product family.
In this study, the company in focus has the goal of creating a new product family that will feature a standard product platform from which a variety of new products can be developed. In addition, the current products will be structured into well-defined product families from which common modules can be found and stored as potential candidates for re-use in the new family.
3. GeMoCURE Methodology
The GeMoCURE methodology was developed as an integrated approach, combining several methods to allow designers to generate design solutions using modular concepts in a systematic manner. This new methodology contains four significant methods that form the integrated methodology: GEneralisation, MOdularisation, CUstomisation and REconfiguration (GeMoCURE). Figure 1 shows a detailed pictorial representation of the methodology, illustrating all detailed activities and the prescribed sequence of utilising GeMoCURE in a design and manufacturing company. The following sections detail the key process and constituent activities of the GeMoCURE methodology.
3.1 Generalisation
The first stage of this new methodology is called the ‘Generalisation’ stage; it focuses on analysing the current company product portfolio (and any new products being added) and creating generalised and generic product development primitives (PDPs). This generalisation can be undertaken from two perspectives based on the work reported in [6, 19], namely function and structure. Function describes the physical effect imposed on an energy, material or information flow by a design entity, without regard for the working principles or physical solutions used to accomplish this effect. Structure is the most tangible concept, with various approaches to partitioning structure into meaningful constituents such as features and interfaces, in addition to the widely used assemblies and components. Additional perspectives, such as behaviour, solution and life-cycle, can also be used to generalise modules. The output from this stage is a series of PDP models from the two perspectives that provide generic artefact information and knowledge for each PDP. The methodology has been simplified slightly compared with previous applications to reflect the nature of the SME business and the complexity of the product portfolio.
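The generalisation output described above can be pictured as a small data structure: each PDP carries a function view (the effect on energy/material/information flows) and a structure view (components and interfaces). The class and field names below are assumptions for illustration, not the paper's representation:

```python
# A minimal data sketch (names assumed) of a Product Development
# Primitive (PDP) captured from the two generalisation perspectives.
from dataclasses import dataclass, field

@dataclass
class FunctionView:
    verb: str    # the physical effect, e.g. "convert" or "regulate"
    flows: list  # flows acted on: energy / material / information

@dataclass
class StructureView:
    components: list
    interfaces: list = field(default_factory=list)

@dataclass
class PDP:
    name: str
    function: FunctionView
    structure: StructureView

# A hypothetical PDP from a chain-oiler product:
pump = PDP(
    name="oil metering",
    function=FunctionView("regulate", ["material:oil"]),
    structure=StructureView(["valve body", "needle"],
                            [("valve body", "needle")]),
)
print(pump.function.verb)  # regulate
```

Keeping both views on one record is what later lets the cross-viewpoint mapping relate functional clusters to structural ones.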
Figure 1. The GeMoCURE design methodology
3.2 Modularisation
The modularisation processes are at the heart of the GeMoCURE methodology as they help to define the product families, product platforms and the derivable modules that will help to generate product variety. There are two aspects which
have been considered in this approach, namely the identification of generic modules and the identification of distinctive modules; the latter focuses more on deriving modules which give unique features and characteristics to the product. The PDPs that were defined in the Generalisation stage are organised into an optimal product structure using a Dependency Structure Matrix (DSM), which uses a genetic algorithm – based on the dependencies between PDPs – to cluster the PDPs into module candidates, see Figure 2. Based on the module definition given in Section 2, functional modules can be identified by assessing the clusters of components using the Module Identification Module (MIM). This gives a visual display – see Figure 2 – of the strength of the dependencies between PDPs and allows decisions to be made on what makes the best module. From these results the DSM is then used again to map the modular structure from the functional viewpoint to the structural viewpoint. This is called a cross-viewpoint matrix and allows the optimal product structure to be maintained throughout the product architecture. The structural concepts can then be stored in a solution depository where they are accurately mapped and searchable, so that new products have access to them with the option of modular re-use. The function concepts are also brought together at this point and used to define the product families – more detail on this can be found in Section 4. Therefore the two main outputs from the modularisation stage are a depository of identified structural concepts and a well-defined set of product families that describe the company’s product portfolio.
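The methodology clusters the DSM with a genetic algorithm; as a much simpler stand-in with the same intent, the sketch below greedily merges the pair of clusters with the strongest mutual dependency until a module-size limit is reached, so that strongly dependent PDPs end up in the same module candidate. The function, the size limit, and the dependency values are all illustrative assumptions, not the paper's GA or MIM tooling:

```python
# Illustrative greedy DSM clustering (stand-in for the paper's GA).

def greedy_cluster(dsm, max_size):
    """Merge the most strongly coupled clusters until no merge fits."""
    clusters = [{i} for i in range(len(dsm))]

    def coupling(a, b):
        # total dependency strength between two clusters
        return sum(dsm[i][j] + dsm[j][i] for i in a for j in b)

    while True:
        best, pair = 0, None
        for x in range(len(clusters)):
            for y in range(x + 1, len(clusters)):
                if len(clusters[x]) + len(clusters[y]) > max_size:
                    continue
                c = coupling(clusters[x], clusters[y])
                if c > best:
                    best, pair = c, (x, y)
        if pair is None:                 # no admissible merge left
            return [sorted(c) for c in clusters]
        x, y = pair
        clusters[x] |= clusters[y]
        del clusters[y]

# Four PDPs: 0-1 and 2-3 are strongly dependent, weakly coupled otherwise.
dsm = [[0, 3, 0, 1],
       [3, 0, 0, 0],
       [0, 0, 0, 4],
       [1, 0, 4, 0]]
print(greedy_cluster(dsm, max_size=2))  # [[0, 1], [2, 3]]
```

A GA explores many such partitions at once and can escape the local optima a greedy merge gets stuck in, which is why the methodology prefers it for real portfolios.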
Figure 2. (a) A Dependency Structure Matrix; (b) the Module Identification Module (MIM)
3.3 Customisation
The Customisation stage of the process deals with the development of new products within a product family. It is a process of utilising the available modules, which were identified in the Generalisation stage, to meet a new design requirement by firstly defining the new requirement in the correct terms and then tailoring the modules to meet the requirement. The same Generalisation and DSM
techniques are used to describe the new product concept in the same terms as the product family and modules that are in the depository. By comparing the functional concepts and the solution concepts it is possible to generate solutions for the new product. If there are no solutions for certain of the new product functions, then these should be designed to integrate with the chosen module solutions and, once properly defined, can be added into the depository and product family.

3.4 Reconfiguration
Once all the modules have been selected, so that they accurately map the function structure and customer requirements, the final stage of the process is carried out. Reconfiguration takes all the modules and configures them into various product layouts while taking into account design processes, markets, standards, interfaces, etc. The output from this short stage will be the final product design ready for production.
4. SME Product Family Analysis
The GeMoCURE methodology has been implemented, in various forms, within large multinational companies, but the focus of this research is how it can be implemented within a local Scottish SME. The SME in question is a manufacturer of chain oiling systems that are marketed as after-market maintenance devices. They have a small product portfolio of around 8 products but are keen to expand this by introducing a new product family. The GeMoCURE methodology was implemented in this company with the purpose of introducing modularity concepts that can be used in the design of product families and in module re-use. The steps highlighted in Section 3 and in Figure 1 were carried out on the SME’s product portfolio, firstly to identify the functional modules and the structural modules. To identify functions and perspective dependencies, the functional model proposed by Stone et al. [12] was created for each product variant. For the structural concepts a simple structural hierarchy was developed that showed the main structural components and their physical links. These were then added into the DSM and optimised to produce optimal module structures. The functional modules were then mapped onto the structural concepts, using a cross-viewpoint matrix, to create a set of solutions for structural modules. The functional modules were then analysed to assess the commonality that existed between them and their products. For each product family they were then split into three distinct categories: common functions, differentiation-enabling (DE) functions and auxiliary functions. Common functions are those that are present within all product variations within a family (i.e. they comprise the product platform); DE functions are functions that are selectable and can be used to alter the performance or features of the product platform; and auxiliary functions are those that do not affect the main function or product variants but provide some secondary function.
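The commonality split described above can be sketched mechanically: functions present in every variant form the platform (common), functions the designers tag as secondary are auxiliary, and the remaining selectable functions are DE. The function and all variant/function names below are invented for illustration; in particular, the auxiliary category requires designer judgment, modelled here as an explicit tag set:

```python
# Sketch of the common / DE / auxiliary split (illustrative helper).

def split_functions(variants, auxiliary_tags=frozenset()):
    """variants: mapping of variant name -> set of function names."""
    all_funcs = set().union(*variants.values())
    common = set.intersection(*map(set, variants.values()))
    # auxiliary status is a designer decision, supplied as tags
    auxiliary = (all_funcs - common) & set(auxiliary_tags)
    de = all_funcs - common - auxiliary
    return common, de, auxiliary

variants = {
    "basic":   {"store oil", "meter oil", "deliver oil"},
    "touring": {"store oil", "meter oil", "deliver oil", "adjust flow"},
    "race":    {"store oil", "meter oil", "deliver oil", "adjust flow",
                "indicate level"},
}
common, de, aux = split_functions(variants, auxiliary_tags={"indicate level"})
print(sorted(common))  # ['deliver oil', 'meter oil', 'store oil']
print(sorted(de))      # ['adjust flow']
print(sorted(aux))     # ['indicate level']
```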
Once these have been identified they can be arranged into a schematic Generic Product Function Structure (GPFS) for the product family, which shows all of the
options available for product variants within that family – see Figure 3. This structure not only defines the product family but shows all the available configurations, therefore opening up the possibility of rapid configuration of new product variants.
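Because a GPFS is a platform plus selectable DE functions, enumerating the available configurations is mechanical: every variant is the platform plus some subset of the DE options. The sketch below (names are illustrative, not the SME's actual functions) makes that explicit:

```python
# Enumerate every configuration a platform + DE-option structure allows.
from itertools import combinations

def enumerate_variants(platform, de_options):
    variants = []
    for r in range(len(de_options) + 1):
        for picks in combinations(sorted(de_options), r):
            variants.append(sorted(platform) + list(picks))
    return variants

platform = {"store oil", "deliver oil"}
de = {"adjust flow", "indicate level"}
for v in enumerate_variants(platform, de):
    print(v)
# 4 configurations: platform alone, each DE option alone, and both together
```

With n DE options the family spans 2^n configurations, which is why even a small, well-defined GPFS can support a wide product range.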
Figure 3. An example of a Generic Product Function Structure (GPFS) from a SME
5. Product Customisation / Configuration
In order to maintain the product portfolio structure, new product development has to follow the steps of the methodology to enable the product to be defined in terms that will allow the product variations to be generated. When a new customer requirement is identified, the first step is to carry out the Generalisation of the concepts for the new product. This will define the product in terms of its functions and allow the inputs to be put into the DSM for the Modularisation stage. By modularising the function concepts the product can now be optimised into a modular function structure that can be used in the Customisation. Once the new concept has been defined in this way, a function module comparison can be carried out by searching the GPFS of the product families and the solution depository. The modules that find matches can then be allocated into the new product scheme, while for any functions that do not have suitable matches a new design will have to be developed. When these new modules are developed it is necessary to keep as closely as possible to the optimal modular structure that was defined. Once all modules – both new and re-used – have been defined it is possible to create a new GPFS for the new product family. Figure 4 shows how the GPFS for the new SME product family is constructed of both new modules and of re-used modules from other product families. By creating the product family in this manner it is possible to use the modules already used within the company in the new product family, allowing several product variants to be produced and adding to the company’s overall product portfolio. The fact that so many of the modules
are proven, reliable modules that are already in full-scale production allows for rapid configuration of these new variants and a fast time to market.
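The matching step in this customisation process reduces to a lookup: each function of the new concept is searched for in the solution depository; hits become re-used modules and misses become new design tasks. The depository keys and module identifiers below are assumptions for illustration:

```python
# Sketch of the function-module comparison against the depository.

def match_modules(new_functions, depository):
    """depository: mapping of function name -> reusable module id."""
    reused, to_design = {}, []
    for f in new_functions:
        if f in depository:
            reused[f] = depository[f]   # proven module available
        else:
            to_design.append(f)         # needs a new design
    return reused, to_design

depository = {"store oil": "M-TANK-01", "meter oil": "M-VALVE-02"}
reused, to_design = match_modules(["store oil", "meter oil", "heat oil"],
                                  depository)
print(reused)     # {'store oil': 'M-TANK-01', 'meter oil': 'M-VALVE-02'}
print(to_design)  # ['heat oil']
```

The `to_design` list is exactly the set of new modules highlighted in Figure 4, while everything in `reused` carries its production history with it.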
Figure 4. The Generic Product Function Structure (GPFS) for the new product family, highlighting the new modules to be developed.
6. Interface Analysis
When carrying out modular design – with a view to creating product families and using design for re-use principles – it is necessary to create good definitions of how the modules interact with one another. It is especially important to have well-defined and standardised interfaces between the common modules (the product platform) and the derivable modules [21]. Sellgren and Andersson [22] define an interface as a “pair of mating faces between two elements”. In this case this can be expanded to mean the pair of mating faces between two modules. In order to cope with the variety of different products they may end up in, the interfaces on modules have to be designed robustly and should preferably be defined early in the design process [23]. The importance of defining the interfaces within modules is clear, and this has been particularly evident within this SME. When modules are designed to work over several product variants and several product families, it is important that definitions are set down as to how these should be handled. In the methodology an initial stage has been added called ‘Interface Identification’. The purpose of this stage is to look at both the functional and structural modules and assess the interactions with other modules. In the current company portfolio this is a simple task, as the designs are in place and the products are in manufacture; therefore it is simply a case of documenting these and adding them into the depository. By also including interfaces in the module definitions, a better idea is gained of how the modules can actually fit together. It is also critical when designing new function modules, as it is imperative that any new modules that are produced are compatible
with the product family. This way, when the re-used modules have been defined, there will be a definitive list of the interfaces that are present and it will be possible to design new modules to integrate with the product platform.
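The bookkeeping behind ‘Interface Identification’ can be sketched simply: once the platform's interfaces are documented in the depository, a proposed module integrates cleanly only if every interface it presents matches one the platform defines. The check and the interface names below are illustrative assumptions:

```python
# Sketch: check a proposed module's interfaces against the documented
# platform interfaces (names illustrative).

def compatible(module_interfaces, platform_interfaces):
    """Return the interfaces that would block integration (empty = ok)."""
    return sorted(set(module_interfaces) - set(platform_interfaces))

platform = {"oil-line 4mm push-fit", "12V spade connector", "M5 mount"}
new_module = ["oil-line 4mm push-fit", "M5 mount"]
bad_module = ["oil-line 6mm barb", "M5 mount"]
print(compatible(new_module, platform))  # [] -> integrates cleanly
print(compatible(bad_module, platform))  # ['oil-line 6mm barb']
```

A non-empty result flags exactly the mismatches that, in practice, force either a module redesign or an extension of the standard interface set.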
7. Future Work and Conclusions
The focus of the research so far has been implementing the system within a SME and observing how it handles such an environment. This has shown some clear areas of the system that require improvement. The first area is the identification of commonality within the function modules and structural modules, as at present this is done intuitively by the designers. Work is being carried out in parallel with this project looking at introducing algorithms into these stages to ensure that the identification is carried out optimally. The second area is the implementation of a more formal interface strategy within the methodology. It has been realised how important this is to the overall feasibility of a module re-use strategy, and this will be the focus of future research. The primary aim will be to establish standard interface descriptions and allow these to be modelled into the methodology along with the crucial interface attributes. This paper has demonstrated a new design methodology aimed at aiding designers in producing modular products and families. By using a series of tools and methods, a systematic approach to modular design can be achieved that opens up opportunities for module re-use and rapid product configuration, which can lead to reduced product lead times and lower development costs. The methodology consists of four distinct stages: Generalisation, Modularisation, Customisation and Reconfiguration. These four stages were described, showing how the system uses a DSM tool to find the optimal product structure. This structure was then used to sort the company’s products into a range of usable modules and a definition of their product families. The system can also be used to create new product variants from these product families by using a system of differentiable modules that can be altered and added to product platforms to create variants.
One of the main outcomes of the implementation of this methodology has been the need for a standardised system of interfaces to allow for effective module re-use. If modules are to be used in many different product variants it is essential that a standard system of interfaces is devised that will allow for this to be carried out efficiently. This has been pinpointed as a key topic for future work and will be built upon within the methodology to introduce a standard set of interfaces.
8. References
[1] Baxter D., Gao J., Case K., Harding J., Young B., Cochrane S., Dani S. (2007) An engineering design knowledge reuse methodology using process modeling. Research in Engineering Design 18:37-48.
[2] Thyssen J. and Hansen P.K. (2001) Impacts for Modularisation. Proceedings of the International Conference on Engineering Design, ICED ’01, Glasgow, 547-554.
[3] Gershenson J.K., Prasad G.J. and Zhang Y. (2004) Product modularity: measures and design methods. Journal of Engineering Design, 15:33-51.
[4] Synopsys Inc. (1999) Who can afford a $193 Million Chip? Synopsys Design Reuse Cost Model.
[5] Duffy A.H.B. and Ferns A.F. (1999) An Analysis of Design Reuse Benefits. Proceedings of the International Conference on Engineering Design, ICED ’99, Munich.
[6] Smith J.S. (2002) Multi-Viewpoint Modular Design Methodology. Doctoral Thesis, University of Strathclyde, Glasgow.
[7] Yan X.T., Stewart B., Wang W., Tramscheck R., Liggat J., Duffy A.H.B., Whitfield I. (2007) Proceedings of the International Conference on Engineering Design, ICED ’07, Paris.
[8] Wang W.D., Qin X.S., Yan X.T., Tong S.R., Sha Q.Y. (2007) Developing a Systematic Method for Constructing the Function Platform of Product Family, ???
[9] Jiao J.X., Simpson T.W., Siddique Z. (2006) Product Family Design and Platform-Based Product Development: A State-of-the-Art Review. Special Issue on Product Family Design and Platform-Based Product Development, Journal of Intelligent Manufacturing, pp. 1-36.
[10] Pahl G. and Beitz W. (1994) Engineering Design: A Systematic Approach. Springer-Verlag, Berlin and Heidelberg.
[11] Ulrich K.T. and Eppinger S.D. (2003) Product Design and Development, Third Edition. McGraw Hill.
[12] Stone R., Wood K. and Crawford R. (2000) A heuristic method for identifying modules for product architectures. Design Studies 21:5-31.
[13] Gershenson J.K., Prasad G.J., Zhang Y. (2003) Product modularity: definitions and benefits. Journal of Engineering Design, 14:295-313.
[14] Smith J.S. and Duffy A.H.B. (2001) Modularity in Support of Design for Re-Use. Proceedings of the International Conference on Engineering Design, ICED ’01, Glasgow, 195-206.
[15] Huang C., Kusiak A. (1998) Modularity in Design of Products and Systems. IEEE Transactions on Systems, Man and Cybernetics, 28:66-77.
[16] Sosale S., Hashemian M. and Gu P. (1997) Product Modularisation for Re-use and Recycling. ASME, Design Engineering Division, 94:195-206.
[17] Hofer A.P., Gruenenfelder M. (2001) Product Family Management Based on Platform Concepts. Proceedings of the International Conference on Engineering Design, Glasgow, 491-498.
[18] Schellhammer W., Karandikar H. (2001) Metrics for Executing a Product Platform Strategy. Proceedings of the International Conference on Engineering Design, Glasgow, 531-538.
[19] Wie M.V., Bryant C., Bohm M.R., McAdams D.A., Stone R.B. (2005) A Model of Functional-Based Representation. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 19:89-111.
[20] Robertson D., Ulrich K. (1998) Planning for Product Platforms. Sloan Management Review.
[21] Sundgren N. (1999) Introducing Interface Management in New Product Family Development. Journal of Product Innovation Management, 16:40-51.
[22] Sellgren U., Andersson K. (1998) MOSAIC - a Framework and a Methodology for Behavior Modeling of Complex Systems. Proceedings of Produktmodeller ’98, 119-137.
[23] Blackenfelt M., Sellgren U. (2000) Design of Robust Interfaces in Modular Products. Proceedings of the 2000 ASME Design Engineering Technical Conference, Baltimore, Maryland.
Duality-based Transformation of Representation from Behaviour to Structure Yuemin Hou 1,2, Linhong Ji 2 1
Dept. of Mechanical Engineering, Beijing Information Science and Technology University, Beijing, 100085, P. R. China 2 Dept. of Precision Instrument and Mechanology, Tsinghua University, Beijing, 100084, P. R. China
Abstract Behaviour bridges function and structure. The designing process is investigated by analogy with embryo development, and this approach leads to a six-stage designing process: function specification, behaviour representation of function, behaviour induction, behaviour specification, transformation from behaviour to feature, and parameter optimization. Mapping the behaviour onto the features of the structure is a key issue for structure development. The paper presents a bio-inspired mechanism, gene transcription, and a duality-based algorithm to achieve the transformation. The computational model is established and a design case is illustrated to show the method. Keywords: Behaviour, structure, transformation, duality, representation
1. Introduction
Behaviour bridges function and structure. Functions can be described from a device-centric and/or an environment-centric viewpoint, while structure is a configuration of objects[1]. Behaviours may refer to the value(s) or value relations of state variables of interest, properties of an object, and the causal rules that describe the values of the variables under various conditions[1]. In terms of design variables, function variables describe the teleology of the object; structure variables describe the components of the object and their relationships; behaviour variables describe the attributes that are derived, or expected to be derived, from the structure variables of the object[2]. The transformation from behaviour to structure is therefore at the core of designing. Research on function-structure mapping has mainly focused on the operational process and on computer-supported search strategies. Typical mapping models include the concept-detail design methodology[3], axiomatic design[4] and the FBS framework[2]. Mapping strategies involve design grammars[5], X-based reasoning[6-9], mathematical programming, game theory[10] and so on. One way to investigate the mechanism of mapping is to frame design by analogy with
embryogenesis. Literature in this field can be classified into evolutionary design[11] and automatic design[12-13]. Graph-based approaches are a useful tool for establishing a computational design model. Related work in this field has mainly focused on the design of mechanisms and the dynamic analysis of systems, for example the representation of mechanisms and kinematic chains, automatic generation of the kinematic structure of mechanisms, topological analysis of planetary gear trains, identification of connected components of the designed object, and combinatorial representations of multidisciplinary knowledge[14-16]. Duality has mainly been used for transformations between the physical properties of a static system and the geometrical properties of a kinematical system[17-18]. This paper investigates the transformation from behaviour to structure and places emphasis on the mathematical representation of the transformation. (The transformation between functions and behaviours has been discussed in another publication[19].) An embryo approach is used to achieve the transformation. Matrices and vertex-edge-face weighted graphs are used to formalize design information. Duality is used to transform the representation of behaviour into that of structure. A computational transformation model is established and a design case illustrates the method. The remainder of the paper starts with the research methodology, follows with the transformation model and a design example, and ends with the discussion and conclusion.
2. Methodology
The process of product design is progressive, from subjective intention to a detailed description of structures or systems. Key factors in the development of structure may be investigated by analogy with embryogenesis. Embryogenesis is a developmental process that usually begins once the egg has been fertilized. It involves the multiplication of cells and their subsequent growth, movement, and differentiation into all the tissues and organs of a biological life[20].
2.1 Bio-inspired Mechanisms to Map Function into Structure
Basic factors for a biological life to develop from egg to embryo are gene transcription, commitment, cell differentiation and induction[20]. Learning from organisms leads to a progressive design framework consisting of six stages: specifying functions, interpreting functions as behaviours in terms of natural laws, developing behaviours through induction, specifying behaviours, mapping the behaviours to the features of structure, and optimizing the parameters of structure. This process simulates the development of organisms and can be modelled as six models: the Function, Surrogate, Property, Specification, Feature, and Parameter models. The Function model represents the function specification. The Surrogate model expresses the behaviour of systems in terms of properties that can be described by laws, especially natural laws. The Property model represents the developed property set. A control system is also established at this stage. In mechanical
design, the term “property” may denote the stiffness of a structure, the power of a driver, the processing coefficient of a processing unit, the energy transformation coefficient of a sensor or an actuator, etc. The Specification model represents the specified properties of the system. The Feature model relates to the topology and material of the structure, and the Parameter model to the detailed description of artifacts. Bio-inspired mechanisms can be used to achieve the transformations between these models: gene transcription, commitment, cell differentiation and induction. The transformation between the Function model and the Specification model has been discussed elsewhere[19]. The transformation between the Feature model and the Parameter model will be discussed in a future publication. The following sections focus on the transformation from the Specification model to the Feature model.
2.2 Transformation from Behaviours to the Feature of Structure
To transform behaviour into the feature of structure, two processes are needed: gene transcription and commitment. Gene transcription and commitment are complex processes in biology but, fortunately, they are not so complex in the design of artifacts. Commitment can be achieved mainly through decision-making, which will be discussed in another publication. Gene transcription can be achieved partially through duality. Here, “partially” means that only the representation is transformed.
3. Modelling the Transformation
Design information consists of a group of concepts. Graphs provide an ideal tool to represent a group of concepts as well as the relations between them. Matrices are used to represent design concepts. Weighted graphs and dual graphs are used to represent design models. Duality is used to transform the representation of behaviour into that of structure. A weighted vertex-edge graph represents the behaviour, and a weighted vertex-edge-face dual graph represents the feature of structure.
3.1 Basic Concepts of Graph
A graph G = (V, E) is a structure which consists of a set of vertices V = {v1, v2, …} and a set of edges E = {e1, e2, …}; each edge e is incident to the elements of an unordered pair of vertices {u, v}, which are necessarily distinct[21]. The graph G* = (V*, E*) is said to be the dual of a connected graph G = (V, E) if there is a 1-1 correspondence f: E → E* such that a set of edges S forms a simple circuit in G if and only if f(S) (the corresponding set of edges in G*) forms a cutset in G*[21]. A weighted graph is a graph with weights for both vertices and edges.
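The circuit-cutset correspondence above can be made concrete with a small example. The following is a minimal Python sketch (the class and variable names are mine, not the authors' code), using a planar triangle graph whose dual has one vertex per face:

```python
# Sketch (not from the paper): a minimal graph structure and a check of the
# circuit <-> cutset duality for a planar triangle graph. All names
# (Graph, G_star, ...) are illustrative.

class Graph:
    def __init__(self, vertices, edges):
        self.V = set(vertices)   # vertex labels
        self.E = dict(edges)     # edge label -> unordered vertex pair (u, v)

# Triangle C3 embedded in the plane: two faces (inner 'f1', outer 'f0').
G = Graph([1, 2, 3], {'e1': (1, 2), 'e2': (2, 3), 'e3': (3, 1)})

# Its dual G*: one vertex per face; each primal edge crosses exactly one
# dual edge (the 1-1 correspondence f: E -> E*).
G_star = Graph(['f0', 'f1'], {'e1*': ('f0', 'f1'),
                              'e2*': ('f0', 'f1'),
                              'e3*': ('f0', 'f1')})

# The simple circuit {e1, e2, e3} in G corresponds to {e1*, e2*, e3*},
# which is a cutset of G*: removing those edges disconnects f0 from f1.
circuit = {'e1', 'e2', 'e3'}
corresponding = {e + '*' for e in circuit}
remaining = {e: uv for e, uv in G_star.E.items() if e not in corresponding}
print(len(remaining) == 0)   # no dual edges left -> f0 and f1 disconnected
```
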
A vertex-edge weighted graph is extended here to a vertex-edge-face weighted graph, and each weight is denoted by a matrix or cell matrix in order to accommodate all variables and parameters at the different design stages.
3.2 Representation
Behaviours are denoted by properties. Properties and property relations are represented by matrices:

Property = { pi | pi = pi(τ1, τ2, …) }, i = 1, 2, …, n    (1)

R = { Rij | Rij = [ rij(λ1, λ2, …), wij(γ1, γ2, …) ] }, i, j = 1, 2, …, n    (2)

where τi, λi and γi represent factors characterizing the properties; rij represents the physical relation between two properties, while wij represents the signal relation between two properties; n is the number of properties. For mechanical products, one generally has Propertyi = [ki, Ai, …], rij = rij(Type, F, Ve, a, …) and wij = wij(u, I, Vt, B, T, ve, a, X, F, …), where ki is a stiffness, Ai is an area, Type is a connection type; F is a generalized force, including linear force, rotation and bending moment; Ve is a generalized velocity, including linear and angular velocity; a is a generalized acceleration, including linear and angular acceleration; u is a displacement; I is an electrical current; Vt is the voltage; B is the magnetic strength; T is the temperature and X is the position. The feature of structure is represented as

Substructure = { Si | Si = si(Ω, m); Ω ⊂ R³ }, i = 1, …, ns    (3)

where Ω is the topology of a substructure, m is the material of a substructure, and ns is the number of substructures. The parameters of structure are represented as

Subparameter = { Pi | Pi = pi(para, m) }, i = 1, …, nP    (4)

where para are the parameters of a substructure, m is the material of a substructure, and nP is the number of final substructures.
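Representations (1)-(4) can be encoded directly as data structures. Below is a hedged Python sketch with made-up example values; the field names (k, A, Type, F, u, …) follow the text, but the concrete numbers, materials, and dictionary layout are illustrative assumptions, not the paper's implementation:

```python
# Sketch (illustrative, not the authors' code): encoding the Property set (1),
# the relation matrix R (2), and the substructure set (3) as plain Python data.

# Property_i = [k_i, A_i, ...]: stiffness and area per property.
properties = {
    1: {'k': 2.0e5, 'A': 1.5e-3},
    2: {'k': 8.0e4, 'A': 9.0e-4},
}

# R_ij = [r_ij, w_ij]: physical relation (connection type, force, velocity,
# acceleration) and signal relation (displacement, current, voltage, ...).
relations = {
    (1, 2): {'r': {'Type': 'rigid', 'F': 120.0, 'Ve': 0.0, 'a': 0.0},
             'w': {'u': 0.0, 'I': 0.0, 'Vt': 0.0}},
}

# Substructure set (3): topology Omega (here just a bounding box in R^3)
# and material m for each of the ns substructures.
substructures = {
    1: {'Omega': ((0, 0, 0), (0.1, 0.02, 0.02)), 'm': 'steel'},
}

n = len(properties)        # number of properties
ns = len(substructures)    # number of substructures
print(n, ns)               # -> 2 1
```
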
3.3 Duality-based Transformation
To transform behaviour into structure, gene transcription is needed, as discussed in section 2.2. Duality provides a mathematical tool for gene transcription from the Specification model to the Feature model. Consider a graph G = (V, E), and let n, p and q be the numbers of vertices, edges and faces. The Surrogate, Property and Specification models can each be illustrated by a graph G = (V, E):

V = { vi, pi }, where pi represents Propertyi    (5)

Relations between properties are represented as edges:

E = { el, Rl }, l = 1, 2, …, p    (6)

R = { Rl | Rl = [ rij, wij ] }, l = 1, 2, …, p, i, j = 1, 2, …, n    (7)

The number of properties differs between the models. In the Surrogate model it is n. In the Property model, np replaces n, with np > n as a result of induction. In the Specification model some relations may disappear or emerge, so ns replaces np, and ns may or may not equal np. The Feature and Parameter models can each be illustrated by a dual graph G* = (V*, E*, f*). When the Specification model cannot be represented as a single planar graph, it should be separated into several planar graphs to facilitate the duality-based transformation. A vertex-edge-face weighted dual graph is used to represent the feature of structure:

V* = { vk* | vk* ← fk, Xk* }, k = 1, 2, …, q    (8)

E* = { el* | el* ← el, Rl*; Rl* ⊆ Rl, Ll }, l = 1, 2, …, p    (9)

f* = { fi* | fi* ← Subi, Si }, i = 1, 2, …, n    (10)

where Xk* represents the coordinates of the subsystems and is the weight of vk*, and Ll = Subi ∩ Subj. Under the duality-based transformation, properties that are represented as vertices in the graph evolve into physical descriptions of substructures that are represented as faces in the dual graph; connection properties that are represented as edges in the graph evolve into physical descriptions of the adjacency relations of substructures; and the space that is represented as faces in the graph evolves into coordinates of substructures. Figure 1 shows the duality-based transformation.
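The mapping in (8)-(10) can be sketched as a simple relabelling: primal vertices (properties) become dual faces (substructures), primal edges (property relations) become dual edges (adjacency relations), and primal faces become dual vertices (substructure coordinates). A minimal Python sketch, with assumed names and toy data, not the authors' implementation:

```python
# Sketch (assumed implementation) of the relabelling in (8)-(10).

def dualize(vertices, edges, faces):
    """vertices: {id: property}, edges: {id: relation}, faces: {id: coords}."""
    dual_vertices = {f'v*{k}': coords for k, coords in faces.items()}   # (8)
    dual_edges = {f'e*{l}': rel for l, rel in edges.items()}            # (9)
    dual_faces = {f'f*{i}': prop for i, prop in vertices.items()}       # (10)
    return dual_vertices, dual_edges, dual_faces

V = {1: 'k1', 2: 'k2', 3: 'k3'}          # three property vertices
E = {1: ('k1', 'k2'), 2: ('k2', 'k3')}   # two relation edges
F = {1: (0.0, 0.0)}                      # one bounded face of the prime graph

dV, dE, dF = dualize(V, E, F)
print(sorted(dF))                        # -> ['f*1', 'f*2', 'f*3']
```

Each property vertex reappears as a face record carrying the substructure description, which is the essence of the "gene transcription" step: only the representation changes.
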
Duality not only provides an explanation of the transformation from abstract properties to physical structure but also provides a means of programming the transformation.
Figure 1. Duality-based transformation: a. prime graph; b. dual graph
3.4 Programming
The computational model of the dual transformation is implemented using B-splines in two ways: interactive drawing or automatic transformation. The initial vertices and edges of the dual graph may need to be rearranged. The rearrangement involves moving vertices (i.e., the coordinates of substructures) in order to minimize the length of the edges (i.e., the contact length of adjacent substructures). The dual graph is developed in five steps. (1) Generate the feature representation of the dual graph. (2) Modify the positions of the vertices of the prime graph to make them easy to identify and to leave enough space for drawing the dual graph interactively. (3) Choose the vertices X* of the dual graph. This step is optional because the code automatically generates the vertices according to the numbers of vertices and faces of the prime graph if no vertices are input. (4) Draw the edges e* of the dual graph. For interactive drawing, lines are drawn across the edges of the prime graph one by one by pointing to a series of points with the mouse. For automatic drawing, the code draws each edge as short as possible, aiming at an optimal layout of the structure; this strategy also prevents the code from generating a whole group of dual graphs, which would make decision-making complex. (5) Finally, mark the faces. The duality-based transformation can easily be implemented in Matlab: the function ‘getcurve’ is modified to achieve interactive drawing of the dual graph, and the function ‘intersect’ is used to evaluate the adjacent edges of faces. The challenge of programming is obtaining the optimal dual graph. Although a group of graphs is available, only a particular dual graph will be useful for reference. Generally, a small size and simple adjacency relations are preferable. Therefore, the code is programmed to draw an optimal dual graph with the shortest edges.
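The rearrangement step, moving dual-graph vertices so as to shorten the edges, can be sketched with plain Laplacian smoothing (each vertex moves toward the centroid of its neighbours). This is an assumed stand-in for the authors' Matlab code, which is not reproduced in the paper; in practice some vertices would be pinned to stop the layout collapsing to a point:

```python
# Sketch: shorten dual-graph edges by Laplacian smoothing (assumed stand-in
# for the paper's rearrangement step; names and data are illustrative).

def total_edge_length(pos, edges):
    return sum(((pos[a][0] - pos[b][0])**2 +
                (pos[a][1] - pos[b][1])**2) ** 0.5 for a, b in edges)

def rearrange(pos, edges, iterations=50, step=0.5):
    pos = {k: list(v) for k, v in pos.items()}
    nbrs = {k: [] for k in pos}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for _ in range(iterations):
        new = {}
        for k, nb in nbrs.items():
            # Move each vertex a fraction 'step' toward its neighbour centroid.
            cx = sum(pos[m][0] for m in nb) / len(nb)
            cy = sum(pos[m][1] for m in nb) / len(nb)
            new[k] = [pos[k][0] + step * (cx - pos[k][0]),
                      pos[k][1] + step * (cy - pos[k][1])]
        pos = new
    return pos

pos = {1: [0.0, 0.0], 2: [10.0, 0.0], 3: [5.0, 8.0]}
edges = [(1, 2), (2, 3), (3, 1)]
before = total_edge_length(pos, edges)
after = total_edge_length(rearrange(pos, edges), edges)
print(after < before)   # smoothing shrinks the total edge length
```
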
4. Example
A basic requirement is that the strain experienced at the centre of a beam be less than a certain given value, i.e. Δmax < Δlim.
4.1 Behaviour
The Function model can be represented as F = {F1, F2, F3}, where F1, F2 and F3 denote the reference body, supporting and loading respectively. The Surrogate model is established to achieve the functions, as shown in figure 2a. Stiffness k is selected as the property in the Surrogate model to satisfy the basic requirement. The Surrogate model can be represented as

V = { [1 2 3]T, [p1 p2 p3]T }, E = { [e1(1,2) e2(2,3) e3(2,2)]T, R },

p = { k1, k2, k3 },

R = { r12(0, F12, 0, 0), w12(0, 0, 0, 0); r23(0, F23, 0, 0), w23(0, 0, 0, 0); r22(0, 0, 0, 0), w22(u, 0, 0, 0) }.
Figure 2. Graph representation: a. the surrogate model; b. the property model
The Surrogate model is input into the induction model as a cell matrix. According to the induction rules[19], w22 will generate the Sensor structure Sub4, the Actuator structure Sub5, the Processing Unit Sub6, the Driver structure Sub9 and the Energy structure Sub10; r12 and r23 will generate the PartVolume. The outputs are a sketch and the matrices of the Property model. Figure 2b shows the output graph. The output matrices are

V = { i, pi }, i = 0, 1, …, 10, E = { e, R }, e = [e1(1,7) e2(7,2) e3(2,4) …]T,

R = { r17(0, F12, 0), w17(0, 0, 0, 0, 0); r72(0, F23, 0), w72(0, 0, 0, 0, 0); r24(0, 0, 0), w24(u, 0, 0, 0, 0); r46(0, 0, 0), w46(0, 0, Vt, 0, 0); … },

p = Property = { k1, k2, k3, g4, … }.

In the Specification model, the relation properties may be modified after the types of sensors and actuators are determined.
4.2 Duality-based Transformation
Step 1. Generate the feature representation of the dual graph:

V* = { [v1* v2*], [X1* X2*] },

X* = { X11, X12; X21, X22; X31, X23; X41, X24; … },

E* = { e1(1,7), L1 = l1; e2(7,2), L2 = l72; e3(2,8), L3 = l8; e4(2,4), L4 = l4; … },

f* = { 1, S1; 2, S2; 3, S3; 4, S4; … }.

Step 2. Modify the positions of the vertices of the prime graph. Step 3. Choose the vertices X*. Step 4. Draw the edges e* of the dual graph; see the graph in figure 3. Step 5. Mark the faces. Step 6. Draw the layout manually; see the graph in figure 4.
Figure 3. Dual graph drawing: a. interactive drawing; b. the modified dual graph
Figure 4. The layout of the structure
5. Analysis and Discussion
The design process is divided into two levels. The first level encompasses the first three stages, which develop the behaviour at an abstract level, while the second level encompasses the last three stages, which develop the structure at a concrete level. To transform behaviour into the feature of structure, one of the mechanisms is gene transcription. The duality-based transformation acts as gene transcription and couples the abstract behaviour with the concrete structure. In the dual graph, the vertices of the prime graph that represent behaviours disappear and are transformed into dual faces that represent substructures. The advantage of the duality-based transformation lies not only in the available mathematical representation but also in the fact that it reflects the nature of the transformation. The dual graph also displays the basic layout of the components of the structure. Duality theory not only provides an explanation of the transformation from abstract properties to physical structure but also provides a tool for programming the transformation.
6. Conclusion
A bio-inspired approach is used to investigate the mechanisms for mapping behaviour into structure. This leads to a six-stage design framework: specifying functions, interpreting functions as behaviours in terms of natural laws, developing behaviours through induction, specifying behaviours, mapping the behaviours to the features of structure, and optimizing the parameters of structure. These six stages can be represented as six models: the Function, Surrogate, Property, Specification, Feature, and Parameter models. The duality-based transformation provides a mathematical representation of the mapping from behaviour to structure and acts as the gene transcription mechanism. A weighted graph is used to represent the behaviour set, which is denoted by properties. A vertex-edge-face weighted dual graph is proposed to represent the features and parameters of the substructures and their interconnections. The dual graph is also useful for assisting in the layout of the components of the structure. The computational transformation model is established and a design case illustrates the method.
7. Acknowledgement
This research is partially supported by the Aerospace Fund of China and two grants from Beijing Information Science and Technology University.
8. References
[1] Chandrasekaran B, Josephson JR, (2000) Function in device representation. Engineering with Computers 16: 162–177 [2] Gero JS, (1990) Design prototypes: a knowledge representation schema for design. AI Magazine 11(4): 26–36 [3] Pahl G, Beitz W, (1986) Konstruktionslehre, Springer-Verlag, Berlin/Heidelberg [4] Suh NP, (1998) Axiomatic design theory for systems. Research in Engineering Design 10: 189–209 [5] Öberg J, O'Nils M, Jantsch A, (2001) Grammar-based design of embedded systems. Journal of Systems Architecture 47: 225–240 [6] Mubarak K, (2005) Design composition in architecture. PhD Dissertation, Carnegie Mellon University [7] Chen G, Ma YS, Thimm G, Tang SH, (2005) Knowledge-based reasoning in a unified feature modeling scheme. Computer-Aided Design & Applications 2(1–4): 173–182 [8] Horváth I, Vegte WFVD, (2003) Nucleus-based product conceptualization: principles and formalization. Proceedings of ICED ’03, Stockholm 1–10 [9] Bozzo LM, Barbat A, Torres L, (1998) Application of qualitative reasoning in engineering. Applied Artificial Intelligence 12: 29–48 [10] Hernandez G, (2000) Integrating product design and manufacturing: a game theoretic approach. Engineering Optimization 32(6): 749–775 [11] Gero JS, (1996) Creativity, emergence and evolution in design. Knowledge-Based Systems 9: 435–448 [12] Lipson H, Pollack JB, (2000) Automatic design and manufacture of robotic life forms. Nature 406(31): 974–978 [13] Teng DX, Tong BS, (2000) Research on bionics structure design based on morphogenesis. Mechanical Science and Technology, 20(4): 483–484 (in Chinese) [14] Al-Hakim L, Kusiak A, Mathew J, (2000) A graph-theoretic approach to conceptual design with functional perspectives. Computer-Aided Design 32: 867–875 [15] Shai O, (2001) Deriving structure theorems and methods using Tellegen’s theorem and combinatorial representations. Int. Journal of Solids and Structures 38: 8037–8052
[16] Shai O, Reich Y, (2004) Infused design I: theory. Research in Engineering Design 15: 93–107 [17] Shai O, (2002) Utilization of the dualism between determinate trusses and mechanisms. Mechanism and Machine Theory 37: 1307–1323 [18] Shai O, Pennock GR, (2006) Extension of graph theory to the duality between static systems and mechanisms. Journal of Mechanical Design 128: 179–191 [19] Hou YM, Ji LH, (2006) Representation and neural network induction model with growth form design. Advanced Design and Manufacturing for Sustainable Development, Frontiers of Design and Manufacturing, Sydney, Australia 1: 153–158 [20] Slack JMW, (1997) From Egg to Embryo: Regional Specification in Early Development, Second Edition. Cambridge University Press [21] Tucker A, (1995) Applied Combinatorics, Third Edition. John Wiley & Sons, Inc., New York
Automatic Adaptive Triangulation of Surfaces in Parametric Space Baohai Wu, Shan Li, Dinghua Zhang Key Laboratory of Contemporary Design and Integrated Manufacturing Technology, Education Ministry of China, Northwestern Polytechnical University, Xi'an 710072, China
Abstract Based on the advancing front method (AFM), an automatic adaptive mesh generation method is proposed for triangulating three-dimensional parametric surfaces. Both the new node and the new triangle element are generated in two-dimensional parametric space, while the error between the triangle and the original surface in three-dimensional physical space is kept within the defined tolerance. The validity checking of the new triangle element, including intersection checking and range control, is then carried out, and the corresponding correction measures are also discussed. Numerical experiments together with analysis are given to illustrate the efficiency and robustness of the algorithm. Keywords: mesh generation, triangulation, three-dimensional parametric surface, advancing front method
1. Introduction
With the rapid development of numerical techniques for the solution of engineering problems, there has been increasing demand for the automatic mesh generation of surfaces over the past few decades. It is required, for instance, in finite element methods (FEM), stereolithography (SLA), CAD/CAM and parametric surface rendering[1]. For applications involving a complicated domain, the time spent on mesh creation is often longer than the time used in the numerical analysis[2]. Furthermore, the results of surface triangulation are the input data for tetrahedral grids, and the quality of a tetrahedral grid depends directly on the quality of the surface discretization[3]. In recent years, significant efforts have been made towards developing efficient and robust algorithms for the automatic generation of unstructured grids over three-dimensional parametric surfaces. Among the many methods available to date, the following two are the most effective and reliable for generating high-quality meshes[2]: the Delaunay-based method[4,5] and the advancing front method[3,6]. However, only a few efforts have been made at triangulating three-dimensional surfaces within a predefined tolerance. In order to obtain a
triangulation result which is adaptive to the curvature of the parametric surfaces, Sheng[7] and Vigo[5] studied this kind of surface triangulation based on the Delaunay method. In the research of Cuilliere[3], the AFM was used to triangulate three-dimensional parametric surfaces within a predefined tolerance, although the detailed procedure was not given. Compared with the Delaunay method, one advantage of the AFM is that the shape and size of each triangle element can be controlled by adjusting the location of the node[8]. Its main disadvantage is that a great deal of intersection checking between the newly generated triangle element and the existing elements must be computed in order to ensure that the new element is valid. This checking is very difficult in three-dimensional space because the triangle elements are generally not coplanar and a newly generated element may penetrate the existing elements. An improved method is therefore needed that performs this checking in two dimensions, with the results mapped back into three-dimensional space within the predefined tolerance. This paper focuses on the automatic adaptive triangulation of three-dimensional parametric surfaces based on the advancing front method. The error between the triangulation result and the original surface is kept within the predefined tolerance. Some general techniques are not described here because the implementation of the advancing front method has already been widely studied. The rest of the paper is organized as follows: Section 2 describes the approach for new node generation and new triangle element formation. The validity checking and corresponding correction of triangle elements are presented in section 3. In section 4, some examples are given together with an analysis of the triangulation results. Finally, conclusions are discussed in section 5.
2. New Element Generation Approach
For the triangulation of three-dimensional parametric surfaces, the key challenge in new triangle element generation is how to control the shape of the triangle element in two-dimensional parametric space so as to satisfy the real geometric properties in three-dimensional physical space. Based on the effective circle, this paper presents a new element generation approach which proceeds in parametric space, while the real triangle element in physical space satisfies the predefined tolerance. The following paragraphs discuss the approach in detail. When the triangulation tolerance is defined as ε, a triangle element is admissible if the distance between any point interior to the triangle and the original surface is less than the given tolerance ε. According to reference[1], the triangulation result is said to be admissible if all its triangle elements are admissible with respect to the original surface. Sheng[7] gave the maximum length of an admissible triangle edge in parametric space. Let r(u,v) be a C2 parametric surface, let T be a triangle with vertices A, B and C in parametric space, and let l be the triangle with vertices l(A) = r(A), l(B) = r(B) and l(C) = r(C) on the surface; p is an arbitrary point in parametric space. Then the triangle l is admissible with respect to the original parametric surface r(u,v) if
Ω ≤ 3 √( ε / (2M1 + 4M2 + 2M3) )    (1)

where Ω is the maximal edge length of triangle T in the parametric space and

M1 = sup_{p∈T} ‖∂²r(u,v)/∂u²‖, M2 = sup_{p∈T} ‖∂²r(u,v)/∂u∂v‖, M3 = sup_{p∈T} ‖∂²r(u,v)/∂v²‖.
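Bound (1) can be evaluated numerically. Below is a Python sketch (my own code, not the paper's) that approximates M1, M2 and M3 by sampling finite-difference second derivatives over an assumed example surface (a paraboloid patch) with an assumed tolerance ε:

```python
# Sketch: estimate M1, M2, M3 for a parametric surface by sampling numerical
# second derivatives, then bound the admissible edge length via equation (1).
# The surface r(u,v) and the tolerance eps are assumed examples.
import math

def r(u, v):                 # example surface: a paraboloid patch
    return (u, v, u * u + v * v)

def second_derivs(u, v, h=1e-4):
    def add(p, q, s):        # componentwise p + s*q
        return tuple(pi + s * qi for pi, qi in zip(p, q))
    ruu = add(add(r(u + h, v), r(u - h, v), 1), r(u, v), -2)
    ruu = tuple(x / h**2 for x in ruu)
    rvv = add(add(r(u, v + h), r(u, v - h), 1), r(u, v), -2)
    rvv = tuple(x / h**2 for x in rvv)
    ruv = tuple((a - b - c + d) / (4 * h * h) for a, b, c, d in
                zip(r(u + h, v + h), r(u + h, v - h),
                    r(u - h, v + h), r(u - h, v - h)))
    return ruu, ruv, rvv

def norm(p):
    return math.sqrt(sum(x * x for x in p))

# Sample the patch [0,1]x[0,1] to approximate the suprema M1, M2, M3.
M1 = M2 = M3 = 0.0
for i in range(11):
    for j in range(11):
        ruu, ruv, rvv = second_derivs(i / 10, j / 10)
        M1, M2, M3 = max(M1, norm(ruu)), max(M2, norm(ruv)), max(M3, norm(rvv))

eps = 1e-3                   # assumed predefined tolerance
omega = 3 * math.sqrt(eps / (2 * M1 + 4 * M2 + 2 * M3))   # equation (1)
print(M1, M2, M3, omega)
```

For this paraboloid, M1 ≈ M3 ≈ 2 and M2 ≈ 0, so the admissible edge length shrinks as the surface curvature grows, which is exactly the adaptivity the method relies on.
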
In order to obtain a curvature-dependent triangulation of the surface, i.e., an adaptive triangulation, we must deal with non-constant Ω values along the patch. Vigo[5] developed the above theory and defined Ω(p) and R(p) functions for each point in parametric space (see Figure 1):

Figure 1. Illustration for the definition of R(p)

Ω(p) = 3 √( ε / (2Muu(p) + 4Muv(p) + 2Mvv(p)) )    (2)

R(p) = min_{q ∈ CΩ(p)} { max( |pq|, Ω(q) ) }    (3)
CΩ(p) is defined as a circle with radius Ω(p) centred on p. Similarly, CR(p) is the circle of radius R(p) centred on p. The function R and the related circle CR give the bounds on the maximum size of an admissible triangle in parametric space; they are called the effective radius and effective circle respectively in this paper. A triangle is admissible if each of its edges ab is such that |ab| ≤ min{R(a), R(b)}. Given an arbitrary line segment ab of the current front in parametric space (see Figure 2(a)), with endpoint coordinates (ua, va) and (ub, vb), the corresponding points A, B on the surface are shown in Figure 2(b). M is the middle point of the curve segment AB, and m is the corresponding location in parametric space. In some application areas, such as collision detection in CAD/CAM, the quality of the triangle elements is not very important. In order to improve calculation speed, the number of triangle elements in the triangulation result should be as small as possible, so each triangle element should have the maximum area possible within the predefined tolerance.
Figure 2. One of the current front segments: a. the segment in parametric space; b. the segment in three-dimensional space
The determination of the new node c in parametric space is illustrated in Figure 3. R(a), R(b) and C(a), C(b) are the effective radii and effective circles of points a and b respectively, and the condition |ab| ≤ min{R(a), R(b)} is satisfied. The initial new node c in parametric space can be taken as an intersection point of the circles C(a) and C(b). The effective radius and effective circle corresponding to the new node c, i.e. R(c) and C(c), can then be calculated according to equation (3). For the triangle element abc to be admissible, the inequality R(c) ≥ max{|ac|, |bc|} must be satisfied. In most cases the geometric properties of the surface are continuous and this condition can be achieved. If the new node does not satisfy the inequality, node c should be moved towards the mid-point m along the line segment cm until R(c) ≥ max{|ac|, |bc|} is satisfied. As a result, the new triangle element corresponding to nodes a, b and c in parametric space is admissible, i.e. the triangulation error between the triangle element ABC and the original surface r(u,v) is within the predefined tolerance ε.
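Taking the initial node c as an intersection point of C(a) and C(b) is ordinary circle-circle intersection, which can be sketched as follows (a self-contained helper of my own, not the paper's code):

```python
# Sketch: initial candidate for the new node c as an intersection point of
# the effective circles C(a) and C(b). Helper names are mine.
import math

def circle_intersection(a, Ra, b, Rb):
    """Return the (up to two) intersection points of circles (a,Ra), (b,Rb)."""
    d = math.dist(a, b)
    if d > Ra + Rb or d < abs(Ra - Rb) or d == 0:
        return []                                # no proper intersection
    x = (d * d + Ra * Ra - Rb * Rb) / (2 * d)    # distance from a along ab
    h = math.sqrt(max(Ra * Ra - x * x, 0.0))     # offset perpendicular to ab
    ux, uy = (b[0] - a[0]) / d, (b[1] - a[1]) / d
    mx, my = a[0] + x * ux, a[1] + x * uy
    return [(mx - h * uy, my + h * ux), (mx + h * uy, my - h * ux)]

# Unit circles centred at a=(0,0) and b=(1,0): c = (0.5, +-sqrt(3)/2).
pts = circle_intersection((0.0, 0.0), 1.0, (1.0, 0.0), 1.0)
print(pts)
```

Either intersection point may serve as the initial c; the admissibility test R(c) ≥ max{|ac|, |bc|} then decides whether c must be moved toward m.
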
Figure 3. Finding the new node to satisfy the predefined tolerance
3. The Validity Checking of the New Triangle Element
In the AFM, validity checking of the new triangle element is essential to surface triangulation. The new triangle element may intersect with existing elements, and redundant triangle elements will be generated if two adjacent nodes in the new front are too close together or too far apart. Validity checking therefore involves two aspects: intersection checking between triangle elements, and node merging or inserting according to the distance between two adjacent nodes. Both are discussed in detail in the following sections.
3.1 Intersection Checking Between Triangle Elements
When a new triangle element is formed, the intersection between the new element and the existing elements must be checked; in the present study this is done in parametric space. In two-dimensional space, the sufficient and necessary condition for two triangle elements to intersect is that at least one pair of element edges intersect each other, so the essence of element intersection is the intersection of line segments in two-dimensional space. Li[10] gives a fast approach to determine whether two line segments intersect. Given two planar line segments P1P2 and Q1Q2 (see Figure 4), let P1Z and Q1Z′ be perpendicular to the plane determined by P1P2 and Q1Q2. Then the sufficient and necessary conditions for the two line segments to intersect at one point other than the endpoints are:
((P1P2 × P1Q1) · P1Z)((P1P2 × P1Q2) · P1Z) < 0
((Q1Q2 × Q1P1) · Q1Z′)((Q1Q2 × Q1P2) · Q1Z′) < 0    (4)
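In two dimensions, the out-of-plane vectors P1Z and Q1Z′ make condition (4) equivalent to sign tests on scalar cross products: each segment must straddle the other's supporting line. A sketch (my own helper names, not the authors' code):

```python
# Sketch: condition (4) in 2D reduces to sign tests on scalar cross products.

def cross(o, a, b):
    """z-component of (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_cross(p1, p2, q1, q2):
    """True if P1P2 and Q1Q2 intersect at a single interior point."""
    d1, d2 = cross(p1, p2, q1), cross(p1, p2, q2)
    d3, d4 = cross(q1, q2, p1), cross(q1, q2, p2)
    return d1 * d2 < 0 and d3 * d4 < 0

print(segments_cross((0, 0), (2, 2), (0, 2), (2, 0)))   # -> True
print(segments_cross((0, 0), (1, 0), (2, 1), (3, 1)))   # -> False
```

The strict inequalities exclude intersections at endpoints, matching the "other than endpoints" wording of condition (4).
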
Figure 4. Intersection checking between line segments
When a new triangle element is formed, the two edges connected to the new node must be checked for intersection with all existing triangle elements in parametric space. If an intersection occurs (see Figure 5(a)), the two nodes should be merged into one new node, which can be chosen as the mid-point of the two nodes, as illustrated in Figure 5(b).
Figure 5. Intersection occurrence and modification: a. intersection occurrence; b. modification for the intersection case
3.2
Nodes Merging or Inserting
In order to guarantee the triangulation precision, it is necessary to check whether the triangle edges formed by adjacent new nodes are admissible with respect to the predefined tolerance ε according to equation (3). Redundant triangles will be generated if the distance between two adjacent nodes is too small, while the triangulation result will exceed the predefined tolerance if the distance is too large. The node distribution must therefore be rearranged when either case arises. The basic idea of node adjustment is to merge or insert nodes according to the distance between two adjacent nodes, as detailed below. As shown in Figure 6, polygonal line 12345 is part of the current front, 6, 7, 8, 9 are the newly generated nodes, and triangles 126, 237, 348, 459 are the corresponding new triangle elements. R(1), R(2), …, R(9) are the effective radii corresponding to these nodes. As shown in Figure 7, if a common area formed by effective circle 6 and effective circle 9 exists (the shaded area in Figure 7), there is at least one point A that makes triangles 23A and 34A satisfy the predefined tolerance. It is therefore possible to merge nodes 7 and 8 into point A in order to avoid generating redundant triangles. Node A can be chosen as a point on the line segment 78 contained in the common area. Meanwhile, the following conditions must be satisfied for merging nodes 7 and 8 into node A:
\[
\begin{cases}
\lvert 2A\rvert \le \min\{R(2),\,R(A)\} \\
\lvert 3A\rvert \le \min\{R(3),\,R(A)\} \\
\lvert 4A\rvert \le \min\{R(4),\,R(A)\} \\
\lvert 6A\rvert \le \min\{R(6),\,R(A)\} \\
\lvert 9A\rvert \le \min\{R(9),\,R(A)\}
\end{cases}
\tag{5}
\]
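Conditions (5) amount to bounding each edge incident to the merged node by the smaller of the two effective radii it connects. A small Python sketch (illustrative only; the helper names and the use of Euclidean distance in parametric space are our assumptions):

```python
import math

def merge_admissible(A, R_A, neighbors):
    """Check inequalities (5): candidate merge point A (with effective
    radius R_A) is admissible only if every edge from A to a neighboring
    front node stays within the smaller of the two effective radii.

    neighbors: list of (point, effective_radius) pairs, e.g. nodes
    2, 3, 4, 6, 9 of Figure 7."""
    return all(
        math.hypot(A[0] - p[0], A[1] - p[1]) <= min(r, R_A)
        for p, r in neighbors
    )
```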
Figure 6. Part of the current front
Figure 7. Merging two nodes to one point
Inequalities (5) mean that all triangles related to node A must satisfy the predefined tolerance. When nodes 7 and 8 are replaced by point A, the triangle elements connected to nodes 7 and 8 are adjusted accordingly: triangles 267, 237, 348 and 489 in Figure 7 are replaced by triangles 26A, 23A, 34A and 4A9 respectively. Triangle 378 degenerates into the line segment 3A and is discarded. The other case requiring rearrangement of the node distribution is that the distance between adjacent nodes is too large with respect to the predefined tolerance. In most cases a new point (usually the midpoint) is inserted between the two nodes [9]. However, this problem can sometimes be settled by adjusting the node location rather than inserting a node. As shown in Figure 8, node 8 is not contained in the effective circle R(7), i.e. the distance between nodes 7 and 8 is larger than the permitted value. Let m be the midpoint of line segment 38, and let 8′ be the intersection point of the circle R(7) and line segment 38. If point 8′ is located on the line segment 8m and |78′| ≤ min{R(7), R(8′)}, then node 8 is replaced by the new node 8′, as illustrated by the dotted lines in Figure 8. The other possibility is that point 8′ lies on the line segment 3m rather than 8m; replacement in this case has no merit compared with inserting a new node, because it would greatly reduce the advancing distance. A new node should therefore be inserted, and it can be taken as the midpoint of nodes 7 and 8, as point 8″ in Figure 9. It can be seen that triangle 378 is divided into two triangle elements, i.e. triangles 378″ and 388″.
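The fallback of section 3.2 — insert the midpoint when an edge exceeds the effective circle — can be sketched as follows (a hypothetical helper, not the authors' code):

```python
import math

def refine_edge(p7, p8, R7):
    """If node 8 lies outside node 7's effective circle of radius R7,
    return the midpoint 8'' to insert (the fallback case of section 3.2);
    return None when the edge is already admissible."""
    if math.dist(p7, p8) <= R7:
        return None  # edge within the effective radius: nothing to do
    return ((p7[0] + p8[0]) / 2.0, (p7[1] + p8[1]) / 2.0)
```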
Figure 8. Node adjustment
Figure 9. Node inserting

3.3
Convergence Checking of Surface Triangulation
In the advancing front method, convergence checking is especially important for surface triangulation because one needs to know whether the triangulation is finished. In our previous study [9], a simple and effective approach for triangulation convergence checking was developed. That approach is also applicable here, so it is not discussed further in this paper.
4.
Examples and Discussions
As an example, triangulation is carried out for a centrifugal compressor blade, a three-dimensional freeform surface modeled by the B-spline method, as shown in Figure 10. The curve lengths of the blade boundaries 1, 2, 3 and 4 are 47.1051, 329.9315, 119.3394 and 227.6364 mm respectively. The area near boundary 3 is the inlet zone of the blade and typically has high curvature to meet the aerodynamic requirements.
Figure 10. The centrifugal compressor blade
Figures 11(a) and 11(b) show the triangulation results of the blade surface. With the triangulation tolerance defined as 0.2 mm and 0.4 mm, totals of 1470 and 694 triangle elements are generated respectively, as illustrated in Figures 11(a) and 11(b). Triangle quality is not considered in the triangulation procedure. The figures show clearly that the elements in the highly curved area are much denser than those in the area of low curvature: the edge length of a triangle element depends on the curvature of the original surface.
Figure 11. Triangulation results of the blade. a. ε = 0.2; b. ε = 0.4
The triangulation result of this study, which is adaptive to the curvature of the parametric surface, is mainly used to construct the bounding box of the blade for collision analysis between the cutter and the blade in multi-axis machining. For this application the triangulation precision determines the collision detection accuracy, while the quality of the triangle elements is not very significant for the collision detection result. In order to reduce computational complexity, the ideal triangulation has the minimum number of elements under the given tolerance. Compared with our previous work, the triangle count for the same blade using the presented method is reduced by 13.6% when the tolerance is ε = 0.2. It can also be seen that the triangle count decreases markedly as the tolerance increases. One of the key problems when adopting the adaptive mesh generation method is therefore to choose the most appropriate triangulation tolerance.
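Equation (3) itself is not reproduced in this excerpt; one common way to tie edge length to tolerance, shown purely as an illustrative assumption, is the sagitta bound for a circle of radius 1/curvature:

```python
import math

def max_edge_length(curvature, eps):
    """Hedged sketch: longest chord of a circle of radius r = 1/curvature
    whose sagitta (chordal deviation) stays within tolerance eps.
    The paper's equation (3) for the effective radius may differ."""
    if curvature <= 0.0:
        return math.inf          # flat region: no curvature-based limit
    r = 1.0 / curvature
    eps = min(eps, r)            # the sagitta can never exceed the radius
    return 2.0 * math.sqrt(2.0 * r * eps - eps * eps)
```

Under such a bound the admissible edge length scales like √ε, so the element count in curved regions grows roughly like 1/ε, which is consistent with the 1470 elements at 0.2 mm versus 694 at 0.4 mm reported above.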
5.
Conclusions
A method based on the advancing front method (AFM) for automatic adaptive triangulation of parametric surfaces has been discussed in this paper. The whole triangulation procedure is carried out in the two-dimensional parametric space, including new node and triangle element generation, validity checking of the triangle elements with corresponding correction, and convergence checking. The triangulation result is adaptive to the surface curvature, and the error between the triangle elements and the original surface is kept within the predefined tolerance. The practical triangulation of a freeform surface verifies that the algorithm is efficient and robust. Moreover, the number of triangle elements produced by this method is clearly reduced at the same precision, so the stability and speed of subsequent numerical computations based on the triangulation will be improved significantly.
6.
References
[1] Vigo M, Garcia NP, Crosa PB (1999) Directional adaptive surface triangulation. Computer Aided Geometric Design 16: 107–126.
[2] Lee CK (1999) Automatic adaptive mesh generation using metric advancing front approach. Engineering Computations 16: 230–263.
[3] Cuilliere JC (1998) An adaptive method for the automatic triangulation of 3D parametric surfaces. Computer-Aided Design 30: 139–149.
[4] Borouchaki H, George PL, Hecht F (1997) Delaunay mesh generation governed by metric specifications. Part I. Algorithms. Finite Elements in Analysis and Design 25: 61–83.
[5] Vigo M, Pla N (2000) Computing directional constrained Delaunay triangulations. Computers and Graphics 24: 181–190.
[6] Francois V, Cuilliere JC (2000) Automatic mesh pre-optimization based on the geometric discretization error. Advances in Engineering Software 31: 763–774.
[7] Sheng X, Hirsch BE (1992) Triangulation of trimmed surfaces in parametric space. Computer-Aided Design 24: 437–444.
[8] Guan ZQ, Song C, Gu YX (2003) Recent advances of research on finite element mesh generation methods. Journal of Computer Aided Design and Computer Graphics 15: 1–14.
[9] Wu BH, Wang SJ (2005) Automatic triangulation over three-dimensional parametric surfaces based on advancing front method. Finite Elements in Analysis and Design 41: 892–910.
[10] Li YH, Hua YX (2003) A method to quickly assess the intersection of line segments. Mapping Aviso: 30–31.
Research on Modeling Free-form Curved Surface Technology
Gui Chun Ma¹, Fu Jia Wu¹, Shu Sheng Zhang²
¹North University of China, Taiyuan, Shanxi 030051, China
²Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China
Abstract In geometric design systems there are many methods for constructing a free-form curved surface. With the development of shipbuilding and aviation and the progress of science and technology, higher demands are set on free-form curved surface modeling technology. In this paper we review the origin, development, application, present situation and development tendency of free-form curved surface modeling. In addition, we propose a new and easy-to-apply method for modeling a free-form curved surface from its perspective projection and the normal vectors at the controlling points on the surface. The essentials of perspective projections and normal vectors are summarized briefly. The details of our method are as follows. Typically we assume four controlling points P00, P01, P10 and P11 that are not coplanar. The perspective projections of the four controlling points are known, together with the two planes P00P01P10 and P01P10P11. Equations for calculating the three-dimensional coordinates of these four controlling points are derived, and thus a small patch of the free-form curved surface to be constructed is obtained. This process is repeated until the entire free-form curved surface is determined. A numerical example shows preliminarily that our method is easy to apply. Using our modeling method it is convenient to control the shape of the free-form curved surface to be constructed, and the geometric structure of the modeled free-form curved surface is apparent. Keywords: modeling free-form curved surface, perspective projection, curved surface, computer-aided design
1.
Introduction
CAD/CAM technology originates from the aviation industry. Because an airplane contour is not only complex but also contains a mass of free-form curved surfaces, CAD/CAM technology and free-form curved surface modeling technology have been tightly related from the beginning. People have been exploring new methods of modeling free-form curved surfaces for many years. In the fifty years since B-spline functions were first proposed, curved surface modeling has developed from parametric splines, Coons, Bézier and B-spline surfaces into a geometric theory system whose main body is composed of rational B-spline surfaces and implicit algebraic surfaces, with interpolation, fitting and approximation as its framework [1]. As production developed, B-spline surfaces showed an obvious insufficiency. NURBS, by contrast, can express quadric surfaces precisely, and its weighting factors make shape control easy to realize; moreover, it generalizes directly to four-dimensional space. Therefore, in 1991 ISO promulgated STEP, the international standard for industrial product data exchange, which regards NURBS as the only mathematical description method for defining the geometric shape of an industrial product [2]. The modeling methods above all control the shape of the free-form curved surface by changing the positions of data points. A designer with strong spatial reasoning, or ample experience, can manipulate the free-form surface shape as desired with these methods, and can estimate what will happen to the shape when a point is changed. In contrast, a designer whose spatial reasoning is weak, or who lacks experience, cannot judge the point positions and the structure of the free-form curved surface. In addition, the free-form curved surface can only be displayed on a monitor or printed on a blueprint, and all such graphs are two-dimensional; they cannot reflect the three-dimensional free-form curved surface exactly, so misunderstanding easily arises. In order to solve these problems, we propose a new and easy-to-apply method for modeling a free-form curved surface from its perspective projection and the normal vectors at the controlling points on it.
2.
The Research Situation of Surface Modeling Technology [3]
2.1
Deformation or Shape Blending
The traditional NURBS surface model changes a curved surface locally only by adjusting control vertices or weighting factors, operating directly on the surface points layer by layer. Simple curved surface design methods based on parametric curves, such as sweeping, skinning, revolving and stretching, all achieve deformation by adjusting the curves. Computer animation and solid modeling urgently need deformation methods whose expression is independent of the underlying representation. Several deformation methods have therefore been developed, such as FFD (free-form deformation), elastic deformation and thermo-elastic mechanics, transformation by solution constraints, transformation by geometric constraints, and correspondence techniques such as polyhedron correspondence relations and the Minkowski sum operation in image morphology.
2.2
Reconstruction
In curved surface construction for animation, fine car body design, or sculpting a human face, the usual practice is to manufacture a clay model and then sample three-dimensional data points from it. In visual medical imaging, CT slices are usually used to obtain three-dimensional data points of the surfaces of human internal organs. Restoring a geometric model of the primitive curved surface from partial sampling information is called reconstruction. Sampling tools include laser ranging scanners, medical imaging devices, contact measurement digitizers, and radar or seismic survey instruments. According to the form of reconstruction, it can be divided into functional reconstruction and discrete reconstruction. Representative achievements of the former are Eck's reconstruction of B-spline surfaces of free topology in 1990 and Sapidis's method of fitting a set of discrete points in 1995. The active approach in the latter is to establish a piecewise planar approximation model of the discrete point set, as in Hoppe's piecewise linear model of 1992 and piecewise smooth curved surface model of 1994. Recently, reconstruction research has surged.

2.3
Surfaces Simplification
Like reconstruction, this research area is currently an international hot spot. Its basic idea is to remove redundant information from a discrete curved surface obtained by three-dimensional reconstruction or output by modeling software (mainly a triangle mesh), while guaranteeing accuracy, so as to make graphical display timely, data storage efficient and data transmission rapid. This technology is advantageous for establishing hierarchical approximation models and for carrying out layered display, transmission and editing of multi-resolution curved surface models. Specific methods include rejecting mesh vertices, deleting mesh boundaries, optimizing the mesh, approximating polygons by maximal planes, and parametric sampling.
3.
Development Tendency of Surface Modeling
3.1
Modeling Curved Surfaces with the Energy Optimization Method [4]
In 1987 the Canadian scholar Terzopoulos introduced deformable curve and surface technology based on a physical energy model into computer graphics. With the physical energy model established by the Lagrange equation, the partial differential equation is solved by a difference method to obtain the energy-minimizing curved surface, which laid the foundation for energy-optimization modeling. In 1991 Professor Gossard and Dr. Celniker of MIT further developed the energy-optimization idea: they introduced it into the interactive design of free curves and surfaces, and proposed the characteristic-line method to enhance the flexibility of curved surface design. Within the energy-optimization framework, Welch and Witkin further studied constraints such as offset points, parametric curves and normal vectors, and introduced the null-space projection method of constraint processing. This means that although interactive or automatic shape design can be carried out at the physical level of the energy model, existing geometric operations can still be invoked at the geometric level. Research on deforming curves and modeling curved surfaces based on the energy model has already achieved a great deal, but many questions remain to be solved, including: computational efficiency, since the improvement of interactive speed is limited when the finite-element method is used; interactive control, i.e. how to choose the physical parameters interactively; and the choice of energy functional, i.e. how to improve computational efficiency while guaranteeing balanced surface quality.

3.2
Modeling Surface Method Based on Partial Differential Equations (PDE)

The idea of the PDE-based surface modeling method originates from treating the construction of a transition surface as the boundary value problem of a partial differential equation. It was later found that curved surface shapes arising in practical problems, for example the contours of hulls, airplanes and propeller blades, could easily be constructed with this method [5]. The PDE method is a new surface modeling technology; it is only a curved surface design technique, not a curved surface representation method.

3.3
Application of Wavelet Technologies in Modeling Curved Surfaces

Wavelet analysis is an unprecedented advance over Fourier analysis. It is not only a powerful analysis technique but also a fast computational tool, of important theoretical significance and practical value. The wavelet is a powerful tool for portraying the internal correlated structure of data, and excels in data compression and approximation.
In recent years wavelet analysis has found increasingly widespread application in computer graphics, including radiosity computation, curve and surface editing, volume rendering, image editing, volume deformation and progressive transmission. Finkelstein and Salesin studied the multi-resolution wavelet analysis theory on a closed interval: using B-spline wavelets on a closed interval, they studied B-spline curves, the multi-resolution representation of tensor-product B-spline surfaces, and their application in multi-resolution editing. Lounsbery extended multi-resolution analysis and wavelet theory to curved surfaces of free topological type, broadening the application objects of wavelet technology. Eck applied MRA to the LOD (level-of-detail) approximation of free meshes, so that any polyhedron and mesh surface can be expressed with multi-resolutions; certain extended MRA can carry out multi-resolution analysis on colored meshes. Gortler and Cohen studied the wavelet's application in variational geometric modeling [6].
4.
Projections of Controlling Points and Normal Vectors of the Curved Surface

4.1
Projections of Controlling Points

The geometric model of perspective projection is shown in Figure 1, where the point Pc(xc, yc, zc) is the center of the perspective projection, xoy is the projective plane, and the point Pt(xt, yt) in the plane xoy is the perspective projection of a point P(x, y, z) in three-dimensional space. The perspective projection equations are

\[
x_t = \frac{x_c z - x z_c}{z - z_c}, \qquad
y_t = \frac{y_c z - y z_c}{z - z_c}
\tag{1}
\]
Figure 1. Geometric model of perspective projection
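Equation (1) can be checked directly; a small sketch (the function name is ours):

```python
def perspective_project(P, Pc):
    """Project P = (x, y, z) onto the z = 0 plane from the projection
    centre Pc = (xc, yc, zc), per equation (1)."""
    x, y, z = P
    xc, yc, zc = Pc
    denom = z - zc          # assumes z != zc (P not at the centre's depth)
    xt = (xc * z - x * zc) / denom
    yt = (yc * z - y * zc) / denom
    return xt, yt
```

A point already on the projection plane (z = 0) maps to itself, which is a convenient sanity check.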
4.2
Normal Vectors of the Curved Surface

Given the perspective projections of the controlling points, which fix the position and shape of a free-form curved surface, the normal vectors of the curved surface play the main role in controlling its three-dimensional shape. In order to control the three-dimensional shape of a free-form curved surface, the designer needs to provide the normal vectors of the surface at all controlling points. For example, the normal vector P00(a, b, c) at a controlling point P00 is the directed line segment connecting the origin of coordinates with the point (a, b, c). References [7-9] give some methods for constructing normal vectors from an image automatically. In our method, designers only need to provide the directions of the normal vectors at the controlling points in spherical coordinates.
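The paper does not spell out its spherical-coordinate convention; assuming azimuth θ and polar angle φ in degrees (an assumption, the authors' convention may differ), a normal direction can be decoded as:

```python
import math

def normal_from_spherical(theta_deg, phi_deg):
    """Hedged sketch: unit normal from spherical angles (azimuth theta,
    polar angle phi measured from the +z axis), both in degrees."""
    t = math.radians(theta_deg)
    p = math.radians(phi_deg)
    return (math.sin(p) * math.cos(t),
            math.sin(p) * math.sin(t),
            math.cos(p))
```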
4.3
Three-Dimensional Controlling Points

Existing methods for modeling a free-form curved surface are mostly based on three-dimensional controlling points on it. To employ these modeling methods, we need to compute the three-dimensional coordinates of all controlling points from their perspective projections and normal vectors. Most current methods construct a small patch of the free-form curved surface from four controlling points and then join the patches smoothly to model the whole surface. The four controlling points are generally not coplanar. Two planes P00P01P10 and P01P10P11 are constructed from the four controlling points P00, P01, P10, P11, as shown in Figure 2. By averaging the normal vectors of the free-form curved surface at these controlling points, for instance by a weighted arithmetic or geometric mean, the normal vectors of the planes can be estimated.
Figure 2. Perspective projections of controlling points
If a point on a curved surface in three-dimensional space is chosen arbitrarily, the point only reflects the spatial position of the curved surface, not its geometric shape. In this paper the z coordinate z00 of the controlling point P00 on the free-form curved surface is assumed to be known. Let p00(x00, y00, z00), p01(x01, y01, z01), p10(x10, y10, z10), p11(x11, y11, z11) be the three-dimensional coordinates of the controlling points P00, P01, P10 and P11 respectively, let pt00(xt00, yt00), pt01(xt01, yt01), pt10(xt10, yt10), pt11(xt11, yt11) be their corresponding perspective projections, and let n1(nx1, ny1, nz1), n2(nx2, ny2, nz2) be the normal vectors of the two planes P00P01P10 and P01P10P11 respectively. By means of the perspective projection equations (1) we obtain the following:
\[
\begin{aligned}
x_{00} &= \frac{x_c z_{00} - x_{t00}(z_{00} - z_c)}{z_c}, &
y_{00} &= \frac{y_c z_{00} - y_{t00}(z_{00} - z_c)}{z_c}, \\
x_{01} &= \frac{x_c z_{01} - x_{t01}(z_{01} - z_c)}{z_c}, &
y_{01} &= \frac{y_c z_{01} - y_{t01}(z_{01} - z_c)}{z_c}, \\
x_{10} &= \frac{x_c z_{10} - x_{t10}(z_{10} - z_c)}{z_c}, &
y_{10} &= \frac{y_c z_{10} - y_{t10}(z_{10} - z_c)}{z_c}, \\
x_{11} &= \frac{x_c z_{11} - x_{t11}(z_{11} - z_c)}{z_c}, &
y_{11} &= \frac{y_c z_{11} - y_{t11}(z_{11} - z_c)}{z_c}
\end{aligned}
\tag{2}
\]
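Equation (2) simply inverts equation (1) at a known depth z; a small sketch (helper names are ours):

```python
def back_project(pt, z, Pc):
    """Recover the (x, y) coordinates of a point at known depth z from
    its perspective projection pt = (xt, yt), per equation (2)."""
    xt, yt = pt
    xc, yc, zc = Pc
    x = (xc * z - xt * (z - zc)) / zc
    y = (yc * z - yt * (z - zc)) / zc
    return x, y
```

Round-tripping a point through equation (1) and then equation (2) returns the original (x, y), which is an easy consistency check.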
where the unknown parameters are z01, z10 and z11. As z00 is known, z01 and z10 can be determined from the plane P00P01P10 and equations (2) as follows:

\[
\begin{aligned}
z_{01} &= \frac{n_{z1} z_{00} z_c + n_{x1}\big[z_c(x_{t00} - x_{t01}) + z_{00}(x_c - x_{t00})\big] + n_{y1}\big[z_c(y_{t00} - y_{t01}) + z_{00}(y_c - y_{t00})\big]}{n_{x1}(x_c - x_{t01}) + n_{y1}(y_c - y_{t01}) + n_{z1} z_c} \\[4pt]
z_{10} &= \frac{n_{z1} z_{00} z_c + n_{x1}\big[z_c(x_{t00} - x_{t10}) + z_{00}(x_c - x_{t00})\big] + n_{y1}\big[z_c(y_{t00} - y_{t10}) + z_{00}(y_c - y_{t00})\big]}{n_{x1}(x_c - x_{t10}) + n_{y1}(y_c - y_{t10}) + n_{z1} z_c}
\end{aligned}
\tag{3}
\]
Similarly, z11 can be obtained from the plane P01P10P11 and equations (2) as follows:

\[
z_{11} = \frac{n_{z2} z_{10} z_c + n_{x2}\big[z_c(x_{t10} - x_{t11}) + z_{10}(x_c - x_{t10})\big] + n_{y2}\big[z_c(y_{t10} - y_{t11}) + z_{10}(y_c - y_{t10})\big]}{n_{x2}(x_c - x_{t11}) + n_{y2}(y_c - y_{t11}) + n_{z2} z_c}
\tag{4}
\]
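Equations (3) and (4) share one pattern: given one point with known depth and the plane normal, the depth of a second point follows from coplanarity. A hedged Python sketch of that single solver (names are ours):

```python
def solve_depth(z_base, pt_base, pt_new, n, Pc):
    """Depth z of a second controlling point, from coplanarity with a
    known point (depth z_base, projection pt_base) on a plane with
    normal n, per the structure of equations (3)/(4)."""
    xt0, yt0 = pt_base
    xt1, yt1 = pt_new
    nx, ny, nz = n
    xc, yc, zc = Pc
    num = (nz * z_base * zc
           + nx * (zc * (xt0 - xt1) + z_base * (xc - xt0))
           + ny * (zc * (yt0 - yt1) + z_base * (yc - yt0)))
    den = nx * (xc - xt1) + ny * (yc - yt1) + nz * zc
    return num / den
```

Calling it with n1 and z00 yields z01 and z10; calling it again with n2 and z10 yields z11.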
Thus the z coordinates of all the controlling points are determined; substituting them into equations (2) gives the (x, y) coordinates of all the controlling points, so the (x, y, z) coordinates of every controlling point are obtained. From these we can construct the free-form curved surface with bicubic parametric splines.

In order to model a small patch of free-form curved surface with bicubic parametric splines, the tangent vector at every controlling point on its boundaries is needed. The normal vector and the tangent vectors at every point of a curved surface are mutually perpendicular, that is, n_ij · T_uij = 0 and n_ij · T_vij = 0, where T_uij and T_vij are the tangent vectors of the curved surface in the u and v directions at the controlling point Pij. Thus we can determine the tangent vector at every controlling point on the boundaries of a small patch of free-form curved surface. After the three-dimensional coordinates of the controlling points Pij, i = 0, 1, …, m; j = 0, 1, …, n, are determined by the above method, the free-form curved surface is constructed by interpolating the points at u_i = i, i = 0, 1, …, m and v_j = j, j = 0, 1, …, n using bicubic parametric splines.
The bicubic parametric equations of a free-form curved surface are given by

\[
P(u,v) =
\begin{bmatrix} F_0(s) & F_1(s) & \Delta_i G_0(s) & \Delta_i G_1(s) \end{bmatrix}
\begin{bmatrix}
P_{i,j} & P_{i,j+1} & P_{v,i,j} & P_{v,i,j+1} \\
P_{i+1,j} & P_{i+1,j+1} & P_{v,i+1,j} & P_{v,i+1,j+1} \\
P_{u,i,j} & P_{u,i,j+1} & P_{uv,i,j} & P_{uv,i,j+1} \\
P_{u,i+1,j} & P_{u,i+1,j+1} & P_{uv,i+1,j} & P_{uv,i+1,j+1}
\end{bmatrix}
\begin{bmatrix} F_0(t) \\ F_1(t) \\ \Delta_j G_0(t) \\ \Delta_j G_1(t) \end{bmatrix}
\tag{5}
\]

where s = (u − u_i)/Δ_i, i = 0, 1, …, m − 1, and t = (v − v_j)/Δ_j, j = 0, 1, …, n − 1.
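The blending functions F and G are not written out in this excerpt; assuming the standard cubic Hermite basis (F0, F1 blend positions, G0, G1 blend tangents — an assumption on our part), equation (5) can be evaluated per coordinate as:

```python
import numpy as np

def hermite_basis(s):
    """Cubic Hermite blending functions F0, F1, G0, G1 (assumed basis)."""
    return np.array([2*s**3 - 3*s**2 + 1,
                     -2*s**3 + 3*s**2,
                     s**3 - 2*s**2 + s,
                     s**3 - s**2])

def patch_point(M, s, t, di=1.0, dj=1.0):
    """Evaluate equation (5) for one coordinate: M is the 4x4 matrix of
    corner positions, u/v tangents and twist vectors; di, dj are the
    parameter intervals."""
    a = hermite_basis(s) * np.array([1.0, 1.0, di, di])
    b = hermite_basis(t) * np.array([1.0, 1.0, dj, dj])
    return a @ M @ b
```

A quick check of the sketch: with corner data taken from the linear function p(u, v) = u + v (unit tangents, zero twists), the patch reproduces it exactly, as Hermite interpolation should.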
4.4
A Numerical Example

In our numerical example the center of the perspective projection is taken as (xc, yc, zc) = (500, 0, 1500). The perspective projections of the controlling points are shown in Figure 2. The perspective projections of the controlling points, the normal vectors at the controlling points, and the three-dimensional coordinates of the controlling points determined by our modeling method are listed in Table 1. A small patch of the free-form curved surface constructed in terms of the controlling points and the normal vectors of the curved surface is shown in Figure 3.
Figure 3. A patch of a free-form curved surface constructed

Table 1. Projections of the controlling points, normal vectors at them, and three-dimensional coordinates of the controlling points

Point  (xt, yt)    (θ, φ)      x        y        z
P00    (30, 30)    (0, 150)    30       30       0
P01    (30, 10)    (27, 162)   30       10       0
P02    (30, -10)   (64, 162)   30.85    -9.982   -2.714
P03    (30, -30)   (90, 150)   33.129   -29.8    -9.985
P10    (10, 30)    (-27, 162)  12.206   29.865   -6.754
P11    (10, 10)    (0, 165)    12.206   9.955    -6.754
P12    (10, -10)   (90, 165)   13.365   -9.931   -10.297
P13    (10, -30)   (117, 162)  15.797   -29.65   -17.745
P20    (-10, 30)   (-64, 162)  -6.243   29.779   -11.005
P21    (-10, 10)   (-90, 165)  -7.442   9.949    -7.522
P22    (-10, -10)  (180, 165)  -7.442   -9.949   -7.522
P23    (-10, -30)  (154, 162)  -6.392   -29.79   -10.612
P30    (-30, 30)   (-90, 150)  -25.76   29.76    -11.992
P31    (-30, 10)   (116, 162)  -26.99   9.943    -8.506
P32    (-30, -10)  (153, 162)  -25.88   -9.922   -11.66
P33    (-30, -30)  (180, 150)  -25.07   -29.72   -13.94
5.
Conclusions
A new method has been proposed for modeling a free-form curved surface from its perspective projection and the normal vectors of the curved surface at the controlling points on it. We have introduced the method for determining the z coordinates of the controlling points using their perspective projections and the normal vectors of the curved surface, and for constructing a free-form curved surface with our modeling method. Specifying the initial data of a free-form curved surface in our method is feasible, and has the advantage of agreeing with designers' usual design practice. The initial data of the free-form curved surface are geometrically clear and offer good geometric intuition. Our modeling method makes it convenient to control the shape of a free-form curved surface and is easy for designers to use. With our modeling method, designers first give the perspective projections of all the controlling points on the free-form curved surface, according to the principle of perspective projection and the features of the surface they want to construct. The projections of the controlling points can also be obtained from an image or a drawing of the free-form curved surface. The normal vectors of the free-form curved surface are then determined according to the bending magnitudes and directions of the curved surface at the controlling points, and thus the free-form curved surface that the designers want to model can be constructed automatically. If the designers are dissatisfied with the constructed free-form curved surface, they can reconstruct it by modifying the positions of the projections of the controlling points or the normal vectors at the controlling points, until they are satisfied.
6.
References
[1] Guan Fu-qing, Luo Xiao-nan, Li Luo-luo, et al. Computer-aided design of geometric shapes [M]. Beijing: Higher Education Press, 1998.
[2] Zhu Xin-xiong. Free-form curve and surface modeling technology [M]. Beijing: Beijing University of Aeronautics and Astronautics Press, 2000.
[3] Shi Fa-zhong. Computer Aided Geometric Design and Rational B-Spline. Beijing: Higher Education Press, 2001.
[4] Léon JC, Trompette P. A new approach towards free-form surfaces control [J]. Computer Aided Geometric Design, 1995(12): 395-416.
[5] Bloor M, Wilson M. Spectral approximations to PDE surfaces [J]. Computer-Aided Design, 1996, 28(2): 145-152.
[6] Lounsbery M, DeRose T, Warren J. Multiresolution analysis for surfaces of arbitrary topological type [J]. ACM Transactions on Graphics, 1997, 16(1): 34-73.
[7] Ulupinar F, Nevatia R. Perception of 3-D surfaces from 2-D contours. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(1), 1993: 3-18.
[8] Horaud R, Brady M. On the geometric interpretation of image contours. Artificial Intelligence, 1988, 37: 333-353.
[9] Weiss I. 3-D shape representation by contours. Computer Vision, Graphics and Image Processing, 41(1), 1988: 80-100.
Pattern System Design Method in Product Development
Juqun Wang, Geng Liu, Haiwei Wang
School of Mechatronic Engineering, Northwestern Polytechnical University, Xi'an, Shaanxi, China, 710072
Abstract After analyzing existing pattern design methods in product development, this paper presents the concept of pattern system design. Two pattern design methods with system characteristics are classified: the process pattern and the object pattern. A new pattern system design method integrating the process pattern and the object pattern is put forward, the Process-Object Integration (P-OI) pattern system design method. A group of design elements is abstracted that reflects the full set of objects and the whole process, and these elements are then integrated into a system, associated with each other according to certain logical relations. The pattern system design method has many good characteristics: sharing, reuse, system standards, process and object integration, system openness, etc. It provides a new concept for developing product design methods, which can increase design efficiency, optimize design quality, and reduce development cost. Keywords: Pattern system design, Product development, Design element, P-OI pattern system design method
1. Introduction
Pattern design is a design method that extracts commonality from large and complex design fields and establishes reusable patterns; it supports creativity, optimises quality, reduces cost, and offers the advantages of sharing, simplicity and reliability. However, existing pattern design methods suffer from problems such as the lack of a system standard, a scarcity of combined longitudinal and transverse views, and disjointed design phases. With the growing demand for cross-domain and integrated development, the design of complex products will inevitably move towards systematisation. Existing work on pattern design, concurrent design [1], patterns evolving from single elements to multiple elements, DFX and similarity systems [2] establishes the basis for this research. Taking system science as the guide and similarity theory as the academic foundation of pattern system design, this paper presents the concept of pattern system design and develops a new pattern system design method named process-object integration.
2. Development of the Pattern Design
Pattern design for product development is a modern design method that emerged from the pattern theories and methods of the 1970s. Over 30 years of development it has taken on four types: the segment pattern, process pattern, aspect pattern and object pattern. IDEF, founded by Douglas T. Ross in the USA in the 1970s, is representative of the segment pattern [3]. As a structured analysis and design technique, IDEF is a family of patterns used to build models for product analysis, design, production and management. Concurrent engineering, presented in a 1988 research report of the US Institute for Defense Analyses, is a typical process pattern design. With the widening of the DFX research and application area, more and more aspect-oriented design methods have emerged, addressing function, structure, production, assembly, quality, cost, environment and so on; most DFX methods [4] today proceed by decomposing the product structure step by step. The "function-structure-action" compound transfer pattern [5], based on the object pattern and the function-structure pattern, is the most representative object pattern research; it was presented by Gero at the University of Sydney, Australia. At present most studies are based on case reasoning, and pattern design through case matching and reasoning systems is the focus of recent study [6].
3. Pattern System Design
3.1 The Concept of Pattern System Design
The term "pattern system design" has not yet appeared in the literature. In practice, however, existing concurrent design and some function and structure pattern designs oriented to the whole object already are pattern system designs. In order to standardise the terminology of pattern design methods and to study pattern design as a system science in depth, this paper presents the concept of pattern system design: a design method that combines pattern design with system design.

3.2 LARS System Constructed by Three Elements
Through deep observation, analysis and study of the character, function and effect of system elements, a system can be regarded as a composition of three kinds of element:
1. Link element. A link element performs a linking function in the chain of the system. It has three characteristics: 1) it acts only within the system; 2) it has direct relations with its neighbouring elements; 3) it is indispensable: a system disaggregates immediately if a link element is removed.
2. Aspect element. An aspect element carries a certain aspect of the whole system. It has three characteristics: 1) it relates the system to its environment; 2) within the system, it interacts with all link elements and with the other aspect elements; 3) it is not merely an adjacency relation but has a general function inside the system.
3. Relation element. A relation element is a contact mode between system elements, for example subjection, juxtaposition, succession, sequence, aggregation or alternation.
The definition of a system can now be made definite and concrete: a system is composed of link elements, aspect elements and relation elements in organic contact, LARS for short. We give two formal definitions.
1. The symbolic definition of LARS:

LARS := <LE, AE, RE>

where
LE denotes the link elements, LE = {LE1, LE2, ..., LEi, ..., LEn} (i = 1, 2, ..., n), i being the index of a link element;
AE denotes the aspect elements, AE = {AE1, AE2, ..., AEj, ..., AEm} (j = 1, 2, ..., m), j being the index of an aspect element;
RE denotes the relation elements, RE = {RE1, RE2, ..., REk, ..., REp} (k = 1, 2, ..., p), k being the index of a relation element.
2. The graphical definition of LARS uses a distinct graphical symbol for each of the three element kinds (one for link elements, one for aspect elements and one for relation elements).
The formal structure is relatively stable in the main, because of the stable associations among the characteristics of the three elements. A general form of LARS is given in Figure 1. This form suits any type of input-output format; element granularity transformation and flexible management leave the general form unaffected and change only local details.
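The LARS triple above can be sketched as a small data structure. This is only an illustrative encoding of the paper's definition; the names `LARS` and `is_wellformed` are assumptions of this sketch, not part of any published implementation.

```python
from dataclasses import dataclass, field

@dataclass
class LARS:
    """Sketch of the triple LARS := <LE, AE, RE>."""
    link_elements: set = field(default_factory=set)    # LE: hold the system chain together
    aspect_elements: set = field(default_factory=set)  # AE: act across the whole system
    # RE: contact modes between elements, e.g. subjection, juxtaposition,
    # succession, sequence, aggregation, alternation
    relations: set = field(default_factory=set)

    def is_wellformed(self) -> bool:
        # A system disaggregates if any link element is removed, so a
        # non-trivial LARS needs at least one LE and one RE.
        return bool(self.link_elements) and bool(self.relations)

s = LARS({"LE1", "LE2"}, {"AE1"}, {"sequence"})
print(s.is_wellformed())  # True
```

A concrete system would attach real design content to each element; here the elements are bare labels, enough to express the <LE, AE, RE> structure.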
Figure 1. LARS distributed net structure
3.3 Kinds of Pattern System Design
3.3.1 Process Pattern System Design
Process pattern design started early, developed relatively maturely and took the lead in systematisation. Viewed along the time dimension, the whole structure of process pattern system design takes a sequential form, including serial and parallel pattern system design. Research on process pattern system design covers the following:
1. Continuity of the design process. Continuity is necessary to carry out process pattern system design and to use design resources effectively.
2. Optimisation of the design path. One aim of studying process pattern system design is to find a design path that responds quickly, for example through research on managing the logical relations between the steps of the design process and on efficient algorithms.
3. Further development of concurrent design. Concurrent design is the representative process pattern system design method at present, and more research and refinement are needed on standardisation, sharing, parallelism and conflict resolution.
4. Other process pattern system design methods, for instance knowledge flow and net flow; with large changes in content, organisation and structure these evolve into new process pattern system design methods.
3.3.2 Object Pattern System Design
The main idea of object pattern system design is to "break the whole into parts and analyse step by step". Whether the design is oriented to an object or to an aspect pattern system, the general overall structure is a tree. Object pattern design methods appearing at the end of the last century, such as spiral gene evolution, networked design and multi-agent design systems, strengthened the system side. Since the mid-1990s several system design methods have been presented, including those for machine systems, complex systems and extreme systems, the 1+3+X integrated design method, and the integrated design method for complex mechanical product similarity systems. These methods opened new approaches for object pattern system design method and theory.
3.3.3 Integrated Pattern System Design
In order to strengthen the holistic character of pattern system design, this paper presents integration pattern system design as a basic type. It can be divided into three kinds:
1. Colligation-method integration pattern system design. The colligation method is a system method that adapts an existing colligation design method, or integrates different design methods, to turn it into a pattern.
2. Human-machine intelligent integration pattern system design, namely a pattern system design carried out cooperatively by human and machine intelligence. Based on human intelligence and supported by human-machine integration techniques, this design method solves the problems of innovative products. It contains more implicit integrations and can extend the function of pattern system design.
3. Process-object integration pattern system design. The integration of process and object is an integration of two types of system, one organised by time and one by space. It is more integrated and more compact as a system. The components and relations in an integration pattern system are more complicated than in serial and parallel pattern systems, and the overall structure changes with the emphasis of the system integration.
4. P-OI Pattern System Design Method
P-OI (Process-Object Integration) pattern system design is a concrete method of integration design. The start or end of an element design is the start or end of the process and object design; P-OI pattern system design is achieved through element design and system conformity. The guidelines for building the method are as follows.

4.1 Determine the Design Process for the System
The generalised design process of product development includes product planning, conceptual design, detail design, construction design and service design. Product planning and service design are peripheral; conceptual design, detail design and construction design form the principal part of product design, and only this principal part is considered here. Concepts and conceptual design also occur within the detail design and construction design phases, and all the content of detail and construction design is implicit in the conceptual design phase; both classifications are therefore reasonable from different viewpoints. Accordingly, two design processes can be defined: the phase design process and the layer design process.
1. Phase design process: made up of three phases. The scheme phase covers the aim and function of the product, the working principle and the overall layout structure; the concrete phase covers component and assembly sketch design; the materialisation phase covers the techniques and processes for producing and assembling the product.
2. Layer design process: made up of three layers. The conception layer yields the product concept; the detail layer yields assemblies and components; the construction layer covers technique files, NC programming, and the design of working procedures, cutting tools, clamps and measurement. Each design layer contains all the design elements.
The phase design process and the layer design process observe the design process from two different points of view. They have some things in common and some evident differences. For a general design process oriented to the object it is not necessary to differentiate them, but the distinction must be made definite when designing according to a particular process or when the result of a certain process is needed. The crossover and difference between the two design processes are shown in Figure 2.
Figure 2. Crossover and difference between two design processes
4.2 Select the Design Elements of the System
The LEs of the system constructed by this method are the function, principle, layout, shape, colour, structure, human, material, techniques, manufacture and assembly design elements. The AEs are the quality, environment, cost and management design elements. The REs are transform, parallel and interaction. There are therefore 18 design elements in the system.

4.3 Construct the P-OI Pattern Design System

4.3.1 Ensure the Whole Structure, Distribute the Design Elements
According to the type and function of the design elements, it is necessary to separate the LEs from the AEs and to arrange them in both landscape and portrait processes through the REs. The P-OI pattern system so obtained is a layout net structure, as in Figure 3. It is a pattern-driven structure independent of any platform, using different interfaces on different platforms to implement sharing and reuse.
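As a concrete check of the element inventory from Section 4.2, the 11 LEs, 4 AEs and 3 REs can be listed and counted. The grouping into Python sets below is purely an illustrative encoding of the paper's lists.

```python
# The 18 design elements of Section 4.2, grouped by kind.
LINK_ELEMENTS = {"function", "principle", "layout", "shape", "colour",
                 "structure", "human", "material", "techniques",
                 "manufacture", "assembly"}                          # 11 LEs
ASPECT_ELEMENTS = {"quality", "environment", "cost", "management"}   # 4 AEs
RELATION_ELEMENTS = {"transform", "parallel", "interaction"}         # 3 REs

ALL_ELEMENTS = LINK_ELEMENTS | ASPECT_ELEMENTS | RELATION_ELEMENTS
print(len(ALL_ELEMENTS))  # 18
```

Distributing these elements into the layout net then amounts to placing the LEs along the landscape and portrait processes and letting the AEs act across all of them through the REs.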
Figure 3. The whole structure of P-OI pattern design system
The guideline of the P-OI pattern system is that the design elements construct the design system. Together the elements express the maturity of the product and the object pattern of the whole product. The elements form the object pattern through a certain arrangement, and form the process pattern along the landscape and portrait directions simultaneously; the object and process patterns of the product are then composed into the P-OI pattern.
4.3.2 Form the Conformity Mechanism, Integrate the System
The aim of the conformity is a pattern system integrating the design process and object. The contents of the conformity are conformity within the elements, conformity within a design phase or design layer, and system conformity. The conformity process takes place under the reference frame shown in Figure 4. The reference frame is constructed from three coordinate axes and a cube; the three axes represent the phase design process, the layer design process and the object design, and the cube is made up of the 15 design elements. In the element design process, the process design and the object design are carried out at the same time. The process design is composed of the phase process design and the layer process design; the process design and the object design then combine into the P-OI design.
Figure 4. The coordinate system of P-OI pattern system
Figure 5. The conformity mechanism between designs
The conformity mechanism is a procedure used to synthesise the design elements; it requires the cooperation of communication and collaboration techniques. The whole design process proceeds with continuous information interaction and feedback, and all design elements from function design to assembly design constitute a concurrent design process. The conformity mechanism for the design elements is illustrated in Figure 5. The aspect design elements, namely quality, environment, cost and management, have concurrent relationships with each process design element. In order to reflect the effect of the aspect design elements on the process design elements fully, and to reduce the difficulty, the aspect elements are considered within each process design element.
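The rule that each aspect element is evaluated inside every process design element can be sketched as a simple nested iteration. The function `design_element` and the shortened element lists are hypothetical placeholders for the real element design step.

```python
# Sketch of the conformity rule of Section 4.3.2: every aspect element
# (quality, environment, cost, management) is folded into each process
# design element rather than handled as a separate pass.
PROCESS_ELEMENTS = ["function", "principle", "layout", "structure", "assembly"]
ASPECT_ELEMENTS = ["quality", "environment", "cost", "management"]

def design_element(process, aspects):
    # Placeholder for the real element design step; here it only records
    # which aspect checks were folded into this process element.
    return {"process": process, "aspects_checked": list(aspects)}

design_log = [design_element(p, ASPECT_ELEMENTS) for p in PROCESS_ELEMENTS]
print(len(design_log))  # 5 element designs, each carrying 4 aspect checks
```

In the full method the log entries would carry design results and feed the information feedback loop between neighbouring elements.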
5. Application in Product Development
The flow of the P-OI pattern system design method is illustrated in Figure 6. The method is a general-purpose design method, and its applications are detailed below:
1. Process and object integration design: the basic function of the method is the integration of the design process and object. This composite method enhances design efficiency compared with treating process and object design separately.
2. Automatic or semi-automatic programmed design: the method makes the object and the process into a pattern, so the designer does not have to manage the running order of object and process design. The design can run completely according to the developed program, and its performance is better and smarter than plain concurrent design.
3. Conceptual design: the selected process design elements must be arranged to form the whole design process. Combined with the aspect design elements, they form a seamless connection from conceptual through detail to construction design.
4. Combined industrial and engineering design: since the design elements are selected in this method, industrial design and engineering design are combined to form the whole pattern system.
Figure 6. The design flow of the P-OI pattern system design
5. Flexible design: based on the fixed whole structure, the process-object integrated pattern system can reduce and recompose the design elements to complete a design for one layer or phase, or a design oriented to one aspect.
6. Innovation design: combining human-machine intelligence with tool platform technology, a whole innovation mechanism is constructed, because the design elements are considered in the conceptual design and the design runs longitudinally through the detail and construction layers before transferring laterally to the next design element. In this mechanism the innovation space is kept open not only in the conceptual design layer but also in the detail and construction design layers.
7. Reverse engineering design: to achieve reverse engineering design, P-OI pattern system design breaks reverse model reconstruction, reverse analysis and reverse design down into the various design elements, then designs and conforms them.
8. Workflow design: P-OI pattern system design is not only a pattern system design method but also a kind of pattern system design workflow used to manage the design flow.
6. Conclusions
Based on the study of systems composed of elements, the elements are classified into three kinds, which form LARS. This has three important effects: 1) it helps to understand, analyse and handle system issues; 2) it associates system theory with similarity theory to form the similarity system; 3) it helps to handle and construct systems formally, improve system response capability, standardise application and simplify complex problems. As one kind of integration pattern system design, the P-OI pattern system method integrates the design process and the design object, taking the design elements as the organisational content and the phase and layer design processes as the organisational route. Pattern system design for product development is still at an early stage, and broad and deep research has not yet been carried out. The pattern system design methods described above are only preliminary research results based on conclusions drawn from some practical design methods. Standardising pattern system design is a huge and complex systems engineering task that should be developed further.
7. References
[1] Lu SCY. Beyond concurrent engineering: a new foundation for collaborative engineering, the worldwide engineering grid. In: Proceedings of the 11th International Conference on Concurrent Engineering. Beijing: Tsinghua University Press and Springer, 2004: 11-22
[2] Zhou Meili. Integration design principle and methods of diversity for complexity mechanical products similarity system. The Chinese Mechanical Engineering Society, Mechanical Design Society, 2005 (in Chinese)
[3] Chen Yuliu. Modeling Analysis and Design Method for IDEF. Beijing: Tsinghua University Press, 1999 (in Chinese)
[4] Boothroyd G, Dewhurst P, Knight W. Product Design for Manufacture and Assembly. Beijing: China Machine Press, 1999
[5] Gero JS. Creativity, emergence and evolution in design. Knowledge-Based Systems, 1996, (9): 26-36
[6] Xiong Lihua, Wang Yunfeng. Research on rapid cost evaluation based on case-based reasoning. Computer Integrated Manufacturing Systems, 2004, 10(12): 1605-1609 (in Chinese)
Development of a Support System for Customer Requirement Capture Atikah Haji Awang, Xiu-Tian Yan DMEM, University of Strathclyde, James Weir Building, 75 Montrose Street, Glasgow, Scotland, G1 1XJ, UK
Abstract This paper describes work being done to explore the potential of establishing an automated customer requirement capture process at the start of a design process. Traditionally the process involves two stages: market research through marketing and identification of the "voice of the customer", followed by the establishment of product design specifications from the marketing output by the design team. This approach is prone to errors in capturing what customers really need, and it is difficult for design engineers' perceptions to be validated and verified by customers. This can mislead design engineers into wasting their time and effort, and the company's money, in developing the wrong product. This research aims to develop an interactive system for understanding the need identification of a product. For this purpose, users, mainly customers, are required to validate whether the design information available in the database matches their requirements. Keywords: requirement capture, customer needs, support system
1. Introduction
Design is a process of generating solutions that should satisfy all design requirements, including expected performance, customer needs, legislative considerations, material properties and behaviours, and other engineering-related issues. Researchers have made several attempts to describe the design process; some limit the description to the technical level of generating solutions, while others include non-technical activities indirectly related to design, such as market analysis and product selling [1-4]. A definition by Pugh [2] summarises the whole design process as a concept called total design. Total design requires a blend of different skills in order to produce a marketable and functioning product; the process usually starts with collecting information on customer needs and expectations from the market, and ends with selling products in the market.
1.1 Customer Requirement Capture
The process of understanding customer requirements and converting them into product specifications is carried out by design engineers mostly on the basis of their experience in designing a particular product [5]. Though design methodologies exist that formalise requirement capture in theory, what happens in real situations is very different. A series of case studies conducted by Darlington and Culley [6] shows that design engineers take an ad-hoc approach to understanding customer requirements and produce product specifications from the requirements they have. The case studies were conducted on experienced engineers working in established manufacturing companies; if the sample taken is a true representation of industry practice, this is a disadvantage for new design engineers or those who have not designed the same product before. The case studies involved mechanical and electronic engineers who were assigned to develop mechanical and electronic product requirements. They show that, because of the general nature of the products, mechanical designers tend to process information from conceptual design to performance validation mentally, rather than validating the product against specifications. The case studies also show that there are different groups of customers in the market. The first group has no knowledge of or experience with the product; their requirements are expressed in the simplest and most general ways, and designers have to take extra effort to study background information about this group, such as demography and daily activities, in order to understand their needs. This group is the critical measure of requirement capture success, because such customers may have spoken with little or no thought of technical specifications or constraints, hence the lack of information. Of the other two groups, one may have experience with a product and so can give some specifications, such as a physical description.
The last customer group is able to provide a full specification of the product, because these customers work along with engineers to design a customised product; usually no further requirements need to be developed. As the majority of the population comes from the first category, design engineers usually have problems capturing requirements from customers: there are many vague sentences which may leave designers with ambiguities and uncertainties. In this situation designers tend to make decisions based on their cognitive perception of the uncertainties [7], a major disadvantage for novice designers. Furthermore, in any product development process it is critical to understand the customer voice fully, because inaccurate translation of customer requirements can lead to a wrong product being manufactured, with negative implications for quality, cost and lead times [8]. Another major disadvantage is that there is no way for customers to validate that design engineers have captured their requirements correctly; the only validation comes when the product is sold in the market and either succeeds or fails. Chen et al. [9] attempted to develop a multi-layer reference technology for a knowledge management framework to facilitate knowledge sharing. Its retrieval mechanism is based on functional mechanisms, which has helped designers find design history based on the product functions mentioned by customers. Other design researchers have also succeeded in translating customer requirements to
conceptual solutions by matching customer requirements with the functional domains of the product [10-16]. Krishnapillai and Zeid [17] attempt to understand customer requirements in terms of product attributes, using a configuration table when function-based mapping is not attainable. Their capturing process has three stages: direct mapping, function-based mapping and requirement mapping based on a transformation table. However, the work is still based on the performance and function of products, which are easy to translate into technical specifications; it could be extended to requirements such as appearance, ergonomics and cost-related issues. Another approach to capturing genuine customer needs effectively was developed by Wei Yan et al. [18]. The approach integrates picture sorting and fuzzy evaluation to elicit and analyse customer requirements, improving previous sorting techniques by reducing uncertainty, imprecision and customer subjective judgement through customer validation interviews in the design process. The interviews allow customers to choose the most preferable design by sorting pictures of design alternatives produced from the products available in the market. The interview results are then analysed statistically, helping designers choose the most preferred design alternative, close to market orientation and customer demand. However, in this approach designers first produce design alternatives for customers to choose from; customers are given limited choices within those alternatives. The technique is very useful when applied to a group of customers who have no specific expectation of the product, because the interview process merely validates whether or not the design alternatives are accepted by customers.
To get the real voice of the customer (VoC), the interview must be conducted before the generation of design alternatives, with a second interview to validate whether designers have interpreted the VoC correctly. Within the scope of this project, the method proposed previously will be applied, because with initial customer requirements and customer validation it is hoped that the product specifications will be the truest version of what customers have demanded. A program called Knowledge Acquisition and sharing for Requirement Engineering (KARE) has been developed at the University of Nottingham [19]. KARE automates the conversion and understanding of the customer voice by matching customer requirements to product characteristics in a complex system such as manufacturing. KARE is designed to integrate knowledge from suppliers and customers to produce tender documents, using databases of customer requirements, system and product lists, supplier lists, and constraints. If any requirement cannot be met owing to company or other constraints, customer and supplier confer over the requirements during negotiation, and new requirements derived from the negotiation re-enter the requirement analysis cycle. This is an improvement on previous work in requirement management, as most earlier representations of product definitions were built around physical product structures, shape-related or function-related properties, or simple electronic databases of product properties [15, 20-23]. The purpose of studying this area is to develop a support system to automate the customer requirement capture process. At the initial design stage, design engineers try to understand customer needs or requirements that usually come from market surveys. Those statements are qualitative and vague, as customers are expressing what they want from the products or their experience with current products. This may lead to misinterpretation of the requirements, since design engineers usually perceive them through their own experience. With increasing product complexity, globalisation and market competitiveness, designers now need to understand customer requirements clearly, and the transformation of those requirements into product specifications is a critical part of the design process [2, 4, 24, 25].
2. Proposed Support System
2.1 Product Design Specifications
Pugh [2] classifies Product Design Specifications (PDS) into 32 design elements covering engineering and non-engineering aspects of a product. A study was performed to select the most important PDS elements; the selected elements result from comparisons among final-year design projects and an industry survey of the product design specifications of concern to customers and designers [26-29]. Seventeen design specifications are most used for mechanical and electromechanical products. Some specifications that concern the company rather than the customer, such as company constraints, standards and specifications, and patents and other product data, were excluded during the screening. The element of customer needs was also removed, because all product specifications are to be mapped to customer requirements, while the element of disposal was included in response to growing environmental concern. Figure 1 shows the number of student projects in which each PDS element was included; after comparing these elements with the ones used by at least 50% of companies in industry, 13 PDS elements were selected for the support system. Figure 2 shows the selected PDS elements for the artefact of this case study, an automated bicycle learning assistant adapted from a student project [27]. Elements such as performance and maintenance can be translated directly into quantitative measures such as functional requirements, and both are therefore popular among designers; customers usually describe these two elements as product applications. Although aesthetics, appearance and finish is difficult to measure quantitatively, it is usually one of the first qualities that attracts customers to buy a product. Cosmetic quality is generally described by customers as "attractive" or "bright colour", terms that are very vague and ambiguous.
Some elements are related to one another, so that a decision made to fulfil one specification will affect others, for example materials, weight, working environment, product cost, disposal, maintenance and safety.
To enable knowledge sharing, customer requirements and their possible related product specifications, extracted from previous design projects, are stored in a database. These will be used in a knowledge-base analysis performed by the system to map customer requirements to product specifications. A template is to be designed that matches customer words to specifications based on a word query.
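The word-query matching just described can be sketched as a keyword lookup from customer phrases to PDS elements. The keyword table and example phrases below are invented for illustration; they are not data from the actual system or its database.

```python
# Hypothetical keyword library mapping customer vocabulary to PDS elements.
KEYWORD_TO_PDS = {
    "light": "Weight",
    "attractive": "Aesthetics, appearance and finish",
    "bright": "Aesthetics, appearance and finish",
    "safe": "Safety",
    "cheap": "Product cost",
    "easy to clean": "Aesthetics, appearance and finish",
}

def map_requirement(phrase: str) -> list:
    """Return the PDS elements whose keywords occur in a customer phrase."""
    phrase = phrase.lower()
    return sorted({pds for kw, pds in KEYWORD_TO_PDS.items() if kw in phrase})

print(map_requirement("I want a light, safe bike helper"))
# ['Safety', 'Weight']
```

Because the database is dynamic, new keyword-to-specification pairs would be added to the table as further case studies are captured; the designer then compares this retrieved mapping against their own interpretation.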
[Figure 1 is a bar chart showing, for each PDS element, the number of student projects (0 to 4) in which it appears.]
Figure 1. PDS Elements Included in Student Projects

Automated Bicycle Learning Assistant
Materials: lightweight material.
Safety: no small parts; no removable parts; no sharp edges; non-toxic paint.
Size: wheel sizes 10"–18"; maximum bike frame protrusion 500 mm; should not protrude above the height of the tyre.
Weight: light enough to handle; max. weight added to bike frame for control system 1 kg.
Installation: minimum part number.
Life in service: 5 years of continuous use; 20 years of life expectancy.
Performance: stabilising a child; minimal effect on cycling experience when removed; low voltage; self powered; speed range 0–20 mph; runs 2 hours without recharge; no change in operation with speed change; differentiates turning and falling; beginner to advanced; loading 392 N on saddle, 275 N passing through rear wheel.
Ergonomics: comfortable seat; handle length for children.
Working environment: working temperature −5 to 30 °C; storage temperature −3 to 50 °C; all weather conditions; usable on uneven terrain with no need for instant cleaning; no storage cleaning required.
Product cost: manufacturing cost £12; 48% profit; maximum selling price £35.
Aesthetics, appearance and finish: easy to clean.
Disposability: die assembly to be considered for easier recycling and disposal; ISO 14001.
Maintenance: low/no maintenance training; reset/recharge once per 2-hour session; any hydraulics/pneumatics must be easy to maintain, carried out by parents; any electrical supply should be easily accessible and quick to change with prior warning; any specialised tool must be easily attainable.

Figure 2. Product Design Specifications
A. H. Awang and X. T. Yan
The selection of related words is based on previous case studies of the same product. The database is dynamic: new matching specifications can always be added to the library. Therefore, apart from judging customer requirements on the basis of their own design experience, designers can also make a critical analysis by comparing their perceptions with those retrieved from the database.

2.2 Support System Architecture
The support system architecture, illustrated in Figure 3, consists of a user interface and a data processing module, which comprises tasks, domain knowledge and inference knowledge. Tasks are the procedures performed by the support system to process the requirements. Requirements are analysed and mapped to the list of customer needs in the domain knowledge. Domain knowledge consists of pre-determined information about the design process and product specifications; besides the list of customer needs, the question lists for users and the product design specifications are also categorised as domain knowledge. During this process the system produces inference knowledge, examples of which are conflicting requirements, fulfilled or undecided requirements and specifications, and requirements and specifications that need further explanation from the users. Inference knowledge is the result communicated to the user interface for validation. The user's reply is then sent back to the data processing module, and the system decides whether the requirements have been fulfilled or whether a new requirement is needed from the user. The process is iterative, and the components of the system architecture interact with one another to complete the task.
[Figure 3 shows the user interface (initial requirements, user validation, new requirements, new specification registration) exchanging data with the data processing module, whose tasks (new requirement analysis, knowledge base mapping, trade-off analysis, new specification registration) draw on domain knowledge (product design specifications, question lists, customer needs) and produce inference knowledge (fulfilled requirements, conflicting requirements/specifications, still-undecided requirements/specifications, requirements for elaboration).]
Figure 3. Support System Architecture
2.3 Program Approach
Figure 4 shows the proposed approach for the decision-making support system. The requirement capturing process starts with an interactive market survey and finishes with a product design specification report that design engineers use as a guideline in producing a conceptual design.
[Figure 4 is a flowchart of the requirement capturing approach. Requirements elicited from users are analysed against a library of customer requirements from previous projects and experts using random word match, and the list of pre-stored requirements that match the user inputs is communicated back for validation. If users disagree with the results, they enter more words to expand the search; validated requirements pass as inputs to the next stage. The validated requirements are then mapped, again by random word match, against a library of product design specifications from previous projects and experts, and the list of matching pre-stored specifications is returned for validation. Users may expand the search or register new specifications, which are stored temporarily in the database for designers to review, before the validated specifications pass to design engineering. The legend distinguishes input/output, process stages and resources, and the flows of requirement mapping, input/output, and information from the database/knowledge.]
Figure 4. Customer Requirement Capturing Approach
As a user enters a requirement, the system task analyses whether there is any record of the same requirement by matching the requirement's keywords. The search may return one or more records, so the user has to validate the search result before proceeding to the next stage. Validation is required in this
program just to make sure the system is capturing the real problem. If the search produces no result, the user has to enter a few more words and search again. The validated requirement is then sent for knowledge processing against the information in the database. The information kept as domain knowledge in the database was obtained earlier by consulting expert experience and extracting previous design records. Input from the requirement engineering process is analysed, mapped to the domain knowledge, and communicated to the user for validation. The system tries to match the requirement with the available product specifications by keywords; if the database has no matching records, the user must choose either to restart the process or to explain the specific requirement in a few words so that the system can register a new product specification for the design engineer to consider.

2.4 Support System Software
The proposed system architecture and approach will be implemented in a prototype software tool built in the Microsoft Access environment. The software could facilitate the requirement capturing process and the generation of design specifications by design engineers. Because it incorporates previous design information, a novice engineer gains access to the knowledge of experienced design engineers to help him or her make design decisions. Once this is implemented, an existing working prototype system entitled DeCoSolver [30] will also be used to exploit the technical constraints generated by this system. The combined system will enable a designer to capture customer requirements precisely, convert them into technical specifications, and eventually use them in constraint-based design problem solving.
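As an illustration of the keyword-matching step described above, the sketch below mimics the word-query mapping in plain Python (the actual prototype is built in Microsoft Access); the keyword/specification pairs are hypothetical examples, not the system's real database contents.

```python
# Hypothetical mini-database of keyword -> (PDS element, specification)
# pairs; the real prototype stores these in Microsoft Access tables.
SPEC_DB = {
    "lightweight": ("Materials", "Lightweight material"),
    "safe": ("Safety", "No sharp edges, non-toxic paint"),
    "cheap": ("Product Cost", "Maximum selling price 35 GBP"),
}

def match_requirement(requirement):
    """Return pre-stored specifications whose keywords occur in the input."""
    words = requirement.lower().split()
    return [SPEC_DB[w] for w in words if w in SPEC_DB]

hits = match_requirement("safe and lightweight bicycle aid")
if not hits:
    print("No match - please enter a few more words to expand the search.")
else:
    for element, spec in hits:
        print(f"{element}: {spec}")
```

An empty result triggers the "enter more words to expand the search" branch of Figure 4; a non-empty result is presented to the user for validation.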
3. Conclusion
A decision support system can be developed to support design engineers in capturing, processing and understanding the customer requirements of a product, in order to address the customer's real needs. The program also stores design information from experienced design engineers and previous design records for engineers to consult before making a design decision. This helps engineers learn from the past and avoid mistakes that waste money, time and effort. The database may be expanded to include product competitor and design standard data.
4. References
[1] Suh, N.P., The Principles of Design. Oxford Series on Advanced Manufacturing. 1990, Oxford: Oxford University Press.
[2] Pugh, S., Total Design. 1991, Essex: Addison-Wesley.
[3] Roozenburg, N.F.M. and J. Eekels, Product Design: Fundamentals and Methods. 1995, West Sussex: John Wiley and Sons.
[4] Pahl, G. and W. Beitz, Engineering Design: A Systematic Approach. Second ed. 2003, London: Springer-Verlag.
[5] Cooper, R., A.B. Wootton, and M. Bruce, "Requirement capture": theory and practice. Technovation, 1998. 18(8): p. 497-511.
[6] Darlington, M.J. and S.J. Culley, A model of factors influencing the design requirement. Design Studies, 2004. 25(4): p. 329-350.
[7] Globerson, S., Discrepancies between customer expectations and product configuration. International Journal of Project Management, 1997. 15(4): p. 199-203.
[8] Zhu, H. and L. Jin, Scenario analysis in an automated tool for requirement engineering. Requirements Engineering, 2000. 5: p. 2-22.
[9] Chen, Y.-J., et al., Developing a multi-layer reference design retrieval technology for knowledge management in engineering design. Expert Systems with Applications, 2005. 29(4): p. 839-866.
[10] Chen, L.-C. and L. Lin, Optimization of product configuration design using functional requirements and constraints. Research in Engineering Design, 2002. 13: p. 167-182.
[11] Corbridge, C., et al., Laddering: technique and tool use in knowledge acquisition. Knowledge Acquisition, 1994. 6(3): p. 315-341.
[12] Rehman, F. and X.-T. Yan, Product design elements as means to realise functions in mechanical conceptual design, in International Conference on Engineering Design. 2003: Stockholm, Sweden.
[13] Rehman, F. and X.-T. Yan, A prototype system to support conceptual design synthesis for Multi-X, in International Conference on Engineering Design. 2005: Melbourne, Australia.
[14] Jiao, J. and M.M. Tseng, Fuzzy ranking for concept evaluation in configuration design for mass customization. Concurrent Engineering: Research and Application, 1998. 6(3): p. 189-206.
[15] Jiao, J. and Y. Zhang, Product portfolio identification based on association rule mining. Computer-Aided Design, 2005. 37: p. 149-172.
[16] Gonzalez-Zugasti, J.P., K.N. Otto, and J.D. Baker, Assessing value in platformed product family design. Research in Engineering Design, 2001. 13: p. 30-41.
[17] Krishnapillai, R. and A. Zeid, Mapping product design specification for mass customisation. Journal of Intelligent Manufacturing, 2006. 17: p. 29-43.
[18] Yan, W., C.-H. Chen, and L. Pheng Khoo, An integrated approach to the elicitation of customer requirements for engineering design using picture sorts and fuzzy evaluation. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 2002. 16: p. 59-71.
[19] Ratchev, S., et al., Knowledge based requirement engineering for one-of-a-kind complex systems. Knowledge-Based Systems, 2003. 16(1): p. 1-5.
[20] Court, A.W., Issues for integrating knowledge in new product development: reflections from an empirical study. Knowledge-Based Systems, 1998. 11(7-8): p. 391-398.
[21] Harding, J.A., et al., An intelligent information framework relating customer requirements and product characteristics. Computers in Industry, 2001. 44: p. 51-65.
[22] McKay, A., A. de Pennington, and J. Baxter, Requirements management: a representation scheme for product specifications. Computer-Aided Design, 2001. 33(7): p. 511-520.
[23] Court, A.W., S.J. Culley, and C.A. McMahon, Information access diagrams: a technique for analyzing the usage of design information. Journal of Engineering Design, 1996. 7(1): p. 55-75.
[24] Kroll, E., S.S. Condoor, and D.G. Jansson, Innovative Conceptual Design: Theory and Application of Parameter Analysis. 2001, Cambridge: Cambridge University Press.
[25] Daugulis, A., Time aspects in requirements engineering: or 'Every cloud has a silver lining'. Requirements Engineering, 2000. 5(3): p. 137-143.
[26] Finlay, C., Product Design Project Final Report: Domestic Cooker Fire Suppression System. 2001, University of Strathclyde: Glasgow.
[27] McCall, J., Product Design Project 1: Design A Bicycle Learning/Assisting Device. 2005, University of Strathclyde: Glasgow.
[28] Spears, A.J., Electronic Shelf Edge Label, in Appendices. 2001, University of Strathclyde: Glasgow.
[29] Brockett, A., The Use of Product Specifications in Industry. 2007, University of Strathclyde: Glasgow.
[30] Yan, X.-T. and H. Sawada, A framework for supporting multidisciplinary engineering design exploration and life-cycle design using under-constrained problem solving. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 2006. 20(4): p. 329-350.
Comparison About Design Methods of Tonpilz Type Transducer

Duo Teng, Hang Chen, Ning Zhu, Guolei Zhu, Yanni Gou

Marine College of Northwestern Polytechnical University, Xi'an, 710072, China
Abstract A Tonpilz type piezoelectric ceramic ultrasonic transducer is investigated using the methods of equivalent circuit and finite element analysis (FEA). The emphasis of this paper lies in the difference between the two methods. Essentially, the piezoelectric constitutive equation is the common basis of both: from it, Mason's equivalent circuit and the frequency equation of the piezoelectric transducer are derived on one hand, and the finite element governing equation on the other. A Mason's equivalent circuit model and a 1/4 symmetry finite element model of the Tonpilz type piezoelectric ceramic ultrasonic transducer are both constructed. A comparison of the frequency and admittance performance obtained from the two methods and from the corresponding test shows that FEA is faster and more accurate, with its analysis error kept within 5%. Notably, several aspects of the performance and vibration of the transducer are predicted distinctly through FEA, so FEA is well suited to piezoelectric transducer design. A transducer prototype made according to the analysis results performs well and satisfies the application requirements. Keywords: transducer, piezoelectric ceramic, ultrasonic, equivalent circuit, finite element
1. Introduction

The piezoelectric ceramic ultrasonic transducer is an electroacoustic device that uses the piezoelectric and reverse piezoelectric effects to convert energy between ultrasonic and electrical forms. Such devices have broad applications in ultrasonic medicine, non-destructive testing, oil-well prospecting and marine defence [1]. The sensitive element of a piezoelectric transducer is a smart material, such as a piezoelectric ceramic, that converts electrical energy to mechanical energy and vice versa [2]. Piezoelectric ceramics exhibit the piezoelectric and reverse piezoelectric effects only after polarization processing. The piezoelectric effect causes a crystal to produce an electrical potential when it is subjected to mechanical vibration; conversely, the reverse piezoelectric effect causes the crystal to vibrate when it is placed in an electric field. At present there are many theoretical models for designing such piezoelectric devices. Here,
a Tonpilz type piezoelectric ceramic ultrasonic transducer is investigated. The design methods of Mason’s equivalent circuit and finite element analysis (FEA) are described in the following. A comparison of the analysis results shows that the method of FEA is more efficient.
2. Equivalent Circuit Theory of Tonpilz Type Piezoelectric Ceramic Ultrasonic Transducer

Tonpilz is German for "sound mushroom". A so-called Tonpilz type transducer has a mushroom-like structure and is also called a compound bar transducer. Generally, it is made of a piezoceramic ring stack between a head mass and a tail mass, prestressed by a central bolt. Traditionally, the piezoceramic is used as the active material and the head mass is used to transmit or receive acoustic energy. A schematic diagram is shown in Figure 1.
Figure 1. A Schematic Diagram of the Tonpilz Type Piezoelectric Ultrasonic Transducer
The piezoelectric constitutive equation is used to model piezoelectric materials mathematically. It couples the mechanical (elastic), electrical (dielectric) and piezoelectric properties of the material [3]. There are four possible forms of the piezoelectric constitutive equations, corresponding to different boundary conditions [4]. The following is the second form, whose coefficients are measured under the conditions of mechanical clamping (S = 0, T ≠ 0) and short circuit (E = 0, D ≠ 0):
[T] = [c^E][S] − [e]^T[E]
[D] = [e][S] + [ε^S][E]    (1)

where [S] = strain vector, [T] = stress vector,
[E] = electric field vector, [D] = electric displacement vector,
[c^E] = stiffness coefficient matrix, a symmetric matrix; the superscript E means the data are measured at constant electric field, i.e. short circuit (E = 0, D ≠ 0),
[ε^S] = dielectric matrix; the superscript S means the data are measured at constant strain, i.e. mechanically clamped (S = 0, T ≠ 0),
[e] = piezoelectric stress matrix.
Derived from the piezoelectric constitutive equation and the wave equations under certain boundary conditions, the equivalent circuit for the piezoelectric transducer is obtained. The result is shown in Figure 2.
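As a quick sanity check, Eq. (1) can be exercised in its one-dimensional scalar form; the PZT-like constants below are illustrative assumptions, not the paper's material data.

```python
# One-dimensional scalar reduction of Eq. (1): T = cE*S - e*E ; D = e*S + epsS*E.
# All constants are assumed, PZT-like illustrative values.
cE = 6.0e10     # stiffness at constant electric field, N/m^2
e33 = 15.8      # piezoelectric stress constant, C/m^2
epsS = 7.3e-9   # permittivity at constant strain, F/m

def constitutive(S, E):
    """Return stress T (Pa) and electric displacement D (C/m^2)."""
    T = cE * S - e33 * E
    D = e33 * S + epsS * E
    return T, D

T, D = constitutive(S=1e-4, E=2e5)  # 100 microstrain, 200 kV/m
print(T, D)
```

The cross terms show the coupling: the applied field reduces the stress needed for a given strain, and the strain contributes to the electric displacement alongside the dielectric term.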
Figure 2. Mason's Equivalent Circuit of the Tonpilz Type Piezoelectric Ultrasonic Transducer
where C0 = static capacitance, φ = electromechanical conversion factor, ρ = density, c = sound velocity, k = wavenumber, S = cross-sectional area, l = length; the subscripts f, c and b denote the head mass, the piezoceramic ring stack and the tail mass, respectively. When an electric field excitation is applied to drive the transducer into vibration, a so-called non-moving plane (shown in Figure 1) exists somewhere in the piezoceramic stack; the vibration velocity on this plane is zero. This plane separates the transducer into two parts, through which acoustic energy is transmitted in opposite directions. Derived from the equivalent circuit, each part has its own frequency equation:
tan(k_c l_c1) = (ρ_c c_c S_c)/(ρ_f c_f S_f) · cot(k_f l_f)    (2.1)

tan(k_c l_c2) = (ρ_c c_c S_c)/(ρ_b c_b S_b) · cot(k_b l_b)    (2.2)

l_c = l_c1 + l_c2    (2.3)
The above frequency equations determine the operating frequency of the transducer, and other important performance parameters can also be obtained from the equivalent circuit [5].
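As a hedged sketch, the head-side frequency equation (2.1) can be solved numerically by bisection; all material and geometry values below are assumed for illustration and are not the paper's design data.

```python
import math

# Assumed illustrative values (NOT the paper's design data):
# piezoceramic stack (c) and steel head mass (f).
rho_c, c_c, S_c, l_c1 = 7500.0, 3100.0, 36e-6, 6e-3
rho_f, c_f, S_f, l_f = 7800.0, 5200.0, 64e-6, 5e-3

def residual(f):
    """Eq. (2.1) rearranged: tan(kc*lc1) - (Zc/Zf)*cot(kf*lf); zero at resonance."""
    kc = 2 * math.pi * f / c_c
    kf = 2 * math.pi * f / c_f
    Zc = rho_c * c_c * S_c
    Zf = rho_f * c_f * S_f
    return math.tan(kc * l_c1) - (Zc / Zf) / math.tan(kf * l_f)

# On the first branch (both kc*lc1 and kf*lf below pi/2) the residual rises
# monotonically from -inf to +inf, so plain bisection finds the first root.
f_hi = min(c_c / (4 * l_c1), c_f / (4 * l_f)) * 0.999
lo, hi = 1.0, f_hi
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if residual(mid) < 0:
        lo = mid
    else:
        hi = mid
f_res = 0.5 * (lo + hi)
print(f"first resonance ~ {f_res / 1000:.1f} kHz")
```

The point here is the solution procedure, not the numbers: with real material data the same bisection yields the head-side resonance, and Eq. (2.2) is solved identically for the tail side.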
3. Finite Element Analysis of Tonpilz Type Piezoelectric Ceramic Ultrasonic Transducer

The finite element method (FEM) is effective for designing piezoelectric transducers. It is a numerical method that discretizes the whole system into finite elements. The program implements the equations governing the behaviour of these elements, solves them together, and thereby builds a comprehensive picture of how the system behaves as a whole. FEM is typically used for the design and optimization of systems whose geometry, scale or governing equations are too complex to analyse by hand. Compared with other theoretical models of piezoelectric transducers, FEM requires no large-scale assumption: however complex the transducer or its boundary conditions, FEM remains effective in both computation speed and computation accuracy. ANSYS is one of the best-known commercial finite element packages. A piezoelectric analysis in ANSYS must handle the interaction between the structural and electric fields. Coupled-field element types with piezoelectric capabilities, which activate the necessary piezoelectric degrees of freedom (displacements and VOLT), are offered in ANSYS, such as PLANE13, SOLID5 and SOLID98. The coupling is handled by calculating element matrices or element load vectors that contain all necessary terms. Such piezoelectric analysis is only available in the ANSYS Multiphysics or ANSYS Mechanical products. The possible analysis types are static, modal, prestressed modal, harmonic, prestressed harmonic and transient. Static analysis can be used to determine the stresses and strains in a transducer with a prestressed bolt. Modal analysis can be used to determine the operating frequencies and mode shapes of a transducer, and can also be applied to a prestressed structure.
Harmonic response analysis gives the graph of some response quantity versus frequency near the resonance frequency of the transducer: admittance, bandwidth, efficiency and even the acoustic radiated field can be obtained, the admittance curve being the most significant characteristic for assessing the transducer. The governing equation for linear material behaviour in ANSYS is the following:
[M][ü(t)] + [C][u̇(t)] + [K][u(t)] = [F(t)]    (3)
where [u(t)] = nodal displacement vector, [M] = mass matrix, [C] = damping matrix, [K] = stiffness matrix, and [F(t)] = nodal force vector, which determines the analysis type [6]. Equation (3) is used by ANSYS to solve the coupled-field analysis of the structural and electrical fields. After finite element discretization, the coupled finite element matrix equation for a one-element model is:
| [M] [0] | | [ü] |   | [C] [0] | | [u̇] |   | [K]     [K^Z] | | [u] |   | [F] |
| [0] [0] | | [V̈] | + | [0] [0] | | [V̇] | + | [K^Z]^T [K^d] | | [V] | = | [L] |    (4)

where [u] = vector of nodal displacements, [V] = vector of nodal electrical potentials, [K^Z] = piezoelectric coupling matrix, [K^d] = dielectric conductivity matrix, [L] = electrical load vector.
Figure 3. 1/4 symmetry model of the Tonpilz Type piezoelectric ceramic ultrasonic transducer
The 1/4 symmetry finite element model of a Tonpilz type piezoelectric ceramic ultrasonic transducer is shown in Figure 3. The piezoelectric ceramic component of the transducer is modelled with SOLID5 elements, setting KEYOPT(1)=3 to activate the piezoelectric degrees of freedom ux, uy, uz and VOLT. When modelling, the material properties must be entered so that the polarization direction is aligned with the z axis. Notably, the input order must follow the ANSYS standard [7], which differs from the IEEE standard [8]. The other components, such as
the prestressed bolt, radiating head and tail mass, are also modelled with SOLID5 elements, with KEYOPT(1)=2 so that only the displacement degrees of freedom ux, uy and uz are active. The adhesive layer is ignored in this model.
4. Experiment and Comparison
The analysis of the Tonpilz type piezoelectric ceramic ultrasonic transducer has been carried out using both the equivalent circuit method and FEM. The admittance curves in air obtained from the equivalent circuit and from ANSYS are shown in Figures 4 and 5 respectively, while the experimental curve in Figure 6 was obtained with an Agilent 4294A, a precision impedance analyzer that directly measures electrical parameters such as impedance, admittance and capacitance over a chosen frequency range. Figures 4 and 5 have a shape similar to Figure 6, and the magnitudes and resonance frequencies of the three curves are in approximate agreement. When analysing the admittance curve in ANSYS, the damping ratio assigned to the components of the transducer is important. Figure 7 gives a displacement vector plot at the resonance frequency, which shows that the transducer is a longitudinal vibrator (Tonpilz type). A comparison of the frequency performance obtained from the equivalent circuit and ANSYS methods with that obtained from the test is listed in Table 1. The error of the equivalent circuit method is within 10%, while the finite element analysis error is within only 5%. Some transducer prototypes have been made according to the design result. The maximum radial size is within (9 × 9) mm, and the maximum longitudinal size is within 11 mm. A photo is shown in Figure 8.
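Admittance curves of the kind shown in Figures 4–6 can be reproduced qualitatively from a Butterworth–Van Dyke lumped circuit, a standard single-resonance reduction of Mason's equivalent circuit; the element values below are illustrative assumptions, not the prototype's measured parameters.

```python
import math

# Butterworth-Van Dyke circuit: static capacitance C0 in parallel with a
# motional R-L-C branch. All element values are assumed for illustration.
C0 = 4e-9                          # static (clamped) capacitance, F
L1, C1, R1 = 20e-3, 68e-12, 50.0   # motional inductance, capacitance, resistance

def admittance(f):
    """Complex admittance Y(f) of the BVD circuit."""
    w = 2 * math.pi * f
    Ym = 1.0 / (R1 + 1j * w * L1 + 1.0 / (1j * w * C1))  # motional branch
    return 1j * w * C0 + Ym                              # in parallel with C0

f_s = 1.0 / (2 * math.pi * math.sqrt(L1 * C1))  # series (resonance) frequency
print(f"series resonance ~ {f_s / 1000:.1f} kHz, conductance there = {admittance(f_s).real:.4f} S")
```

Sweeping f through resonance and plotting Re(Y) against Im(Y) traces the admittance circle of Figures 4–6; at the series resonance the motional reactance cancels and the conductance peaks at 1/R1.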
Figure 4. Admittance Circle Curve Obtained from equivalent circuit in Air (resonance frequency: 132.4kHz)
Figure 5. Admittance Circle Curve Obtained from ANSYS in Air (resonance frequency: 146.3kHz)
Figure 6. Admittance Circle Curve Obtained from Test in Air (resonance frequency: 141.6kHz)
Figure 7. Displacement Vector Plot at resonance frequency(obtained from ANSYS)
Figure 8. Photo of Piezoelectric Ceramic Ultrasonic Transducer Prototype

Table 1. A comparison of the analysis results with the results obtained from the test

Frequency (kHz)   Test    Equivalent Circuit   Ratio1   ANSYS     Ratio2
resonance         141.6   132.4                0.935    146.334   1.033
anti-resonance    159.0   173.6                1.092    166.652   1.048

Note: Ratio1 = Equivalent Circuit / Test; Ratio2 = ANSYS / Test.
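The ratios in Table 1 can be recomputed directly from the tabulated frequencies, confirming the stated error bounds (within 10% for the equivalent circuit, within 5% for FEA):

```python
# Frequencies in kHz, taken from Table 1 itself.
test = {"resonance": 141.6, "anti-resonance": 159.0}
equiv = {"resonance": 132.4, "anti-resonance": 173.6}
ansys = {"resonance": 146.334, "anti-resonance": 166.652}

for mode in test:
    r1 = equiv[mode] / test[mode]   # Ratio1 = Equivalent Circuit / Test
    r2 = ansys[mode] / test[mode]   # Ratio2 = ANSYS / Test
    print(f"{mode}: Ratio1={r1:.3f}, Ratio2={r2:.3f}")
    assert abs(r1 - 1) < 0.10       # equivalent-circuit error within 10%
    assert abs(r2 - 1) < 0.05       # FEA error within 5%
```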
5. Conclusion
A Tonpilz type piezoelectric ceramic ultrasonic transducer has been designed using the methods of equivalent circuit and finite element analysis, and its performance and vibration have been predicted. A comparison of the frequency and admittance performance obtained from the two methods with the corresponding test shows that FEA is faster and more accurate, with its analysis error kept within 5%. Notably, several aspects of the performance and vibration of the transducer were predicted distinctly through FEA, so FEA is well suited to piezoelectric transducer design. A transducer prototype made according to the analysis results performs well and satisfies the application requirements.
6. Acknowledgement
This project is supported by the Marine Industry Foundation for National Defense of China (No. 05J5.8.2).
7. References
[1] Jia Baoxian and Bian Wenfeng, Application and development of piezoelectric ultrasonic transducers. Piezoelectrics and Acoustooptics, 2005. 27(2): p. 131-135.
[2] Bernard Jaffe, William R. Cook and Hans Jaffe, Piezoelectric Ceramics: Principles and Applications. APC International Ltd, 2000. Chapter 1.
[3] B. Jaffe, W.R. Cook Jr and H. Jaffe, Piezoelectric Ceramics. Science Press, 1979: p. 6-19.
[4] Lin Shuyu, Theory and Design of Ultrasonic Transducers. Science Press, 2004: p. 17-20.
[5] Zhou Fuhong, Underwater Transducers and Arrays. National Defence Industry Press, 1984: p. 72-82.
[6] Peter Kohnke, ANSYS Coupled-Field Analysis Guide, Release 5.5. ANSYS, Inc., September 1998.
[7] Sheldon Imaoka, Conversion of Piezoelectric Material Data. Collaborative Solutions Inc., November 12, 1999.
[8] ANSI/IEEE Std 176-1987, IEEE Standard on Piezoelectricity, 1987.
[9] Gilder Nader, Emilio C.N. Silva and Julio C. Adamowski, Determination of piezoelectric transducer damping by using experimental and finite element simulations. Smart Structures and Materials 2003: Damping and Isolation, Proc. of SPIE Vol. 5052: p. 116-127.
Effect for Functional Design

Guozhong Cao1, Haixia Guo2, Runhua Tan1

1 School of Mechanical Engineering, Hebei University of Technology, Tianjin, 300130, P.R. China
2 Library of Hebei University of Technology, Tianjin, 300130, P.R. China
Abstract Functional design plays the central role in ensuring design quality and product innovation. This paper proposes a functional design approach supported by effects in TRIZ (Theory of Inventive Problem Solving). The relationships among function, flow and behavior are discussed. Based on six effect-chain modes and three reasoning methods, multiple effect chains that produce the same output can be generated from effects, which can help engineers achieve breakthrough innovation by proposing new and unexpected ways of producing a specific output. A design example for the functional design of a Chinese medicine mechanism is presented to demonstrate the proposed functional design methodology. Keywords: TRIZ, Effect, Functional Design
1. Introduction
Functional design is a well-researched and active field of engineering design study. All functional design begins by formulating the overall product function. By breaking the overall function of the device into small, easily solved sub-functions, the form of the device follows from the assembly of all sub-function solutions [1]. Functional design plays the central role in ensuring design quality and product innovation. There are various, often conflicting, definitions of function in the literature, and no universally accepted definition is currently known: the designer's purpose [2, 3], intended behavior [4], an effect on the environment of the product [5], a description of behavior recognised by a human through abstraction in order to utilize it [6], or a relationship between inputs and outputs aiming to achieve the designer's purpose [7]. Each of these definitions has some worth, yet none is comprehensive enough to capture the full definition that is desired. Researchers have recognised the importance of a common vocabulary for the broader issues of design. Pahl and Beitz [7] list five generally valid functions and three types of flows. Collins et al. [8] develop a list of 105 unique descriptions of mechanical function. Hundal [9] formulates six function classes with more specific
functions in each class. Stone et al. [1] list over 130 functions and over 100 flows. TRIZ describes all mechanical design with a set of 30 functional descriptions [10]. One of the best-known functional design frameworks is that of Pahl and Beitz [7], the systematic approach, which models the overall function and decomposes it into sub-functions operating on flows of energy, material and signals. Umeda et al. [6] proposed a Function-Behavior-State (FBS) modeler that reasons about function by means of two approaches: causal decomposition and task decomposition. Deng et al. [11] devised a dual-step function-environment-behaviour-structure (FEBS) model. There are other similar approaches to functional modelling, for example Qian and Gero's [2] FBS path and Prabhakar and Goel's [3] ESBF model. The lack of a precise definition for function, and the different functional models of a product generated by different designers, cast doubt on the effectiveness of prescriptive design methodologies. During functional design, design knowledge from multiple domains may be employed, and complicated development activities may also be involved. At present, no unifying model of functional design has emerged. TRIZ is a problem-solving methodology based on a systematic logic approach, developed from the review of thousands of patents and the analysis of technology evolution. TRIZ can be used as a powerful intellectual instrument to solve simple and difficult technical and technological problems more quickly and with better results. According to Altshuller's patent search, for any given problem there is more than a 90% chance that a similar problem has already been addressed somewhere, at some time [10]. Effect is one of the knowledge-base tools in TRIZ. Through the analysis of hundreds of thousands of patents, effects emerged from the correspondence between the behaviours delivered by a design product described in a patent and the principle used in that product [10].
To make functional design repeatable and computable, this paper proposes an automated functional design tool that uses six effect modes and an existing effect knowledge base to generate functional models. The tool can produce numerous feasible principle solutions in the conceptual design process.
2. Function and Behavior
A function is a statement describing the transformation of input flows into output flows in order to achieve the designer's purpose; it is expressed as a verb. Malmqvist et al. [12] compare TRIZ with the Pahl and Beitz methodology and note that the detailed vocabulary of TRIZ would benefit from a more carefully structured class hierarchy with the Pahl and Beitz functions at the highest level. The 30 TRIZ functions are therefore expanded and reclassified into a standard set of functions, i.e. a list of functions, sub-functions and synonyms, as shown in Table 1. The object moving between functions is called a flow; following Pahl and Beitz [7], flows are divided into matter, energy and information flows. These three flows are considered basic concepts in any design problem. Matter is better represented as material, and information is more concretely expressed as parameters
Effect for Functional Design
because they are the contents of information. A flow is expressed as a noun. The flows, sub-flows and complements are shown in Table 2. The inputs and outputs of a system are associated with other technical systems, persons or natural systems, and each flow is identified by its source and destination systems. In order to distinguish different flows, each flow carries a set of attributes describing its state, each belonging to an attribute domain. The attributes of a flow can be divided into physical, chemical, geometrical and other attributes. On this basis, two flows are said to be different if their attributes are not the same.

Table 1. Short list of the function set

Function   | Sub-functions
Create     | Synthesize, Produce
Change     | Increase, Decrease, Convert, Form, Control
Combine    | Mix, Embed, Assemble, Connect
Separate   | Disassemble, Decompose, Dry, Clean
Accumulate | Absorb, Store, Concentrate
Move       | Move, Transfer, Rotate, Vibrate, Lift, Orient
Measure    | Determine, Detect, Measure
Preserve   | Preserve, Prevent, Stabilize
Eliminate  | Destroy, Remove

Table 2. Short list of the flow set

Flow       | Sub-flows
Material   | Solid, Liquid, Gas, Geometric Objects, Loose Substances, Porous Substances, Particles, Plasma, Chemical Compounds
Energy     | Forces, Motion, Deformation, Thermal Energy, Mechanical and Sound Waves, Electric Field, Magnetic Field, Nuclear Energy, Electromagnetic Waves or Light
Parameters | Solids Parameters, Surfaces Parameters, Geometric Parameters, Deformation Parameters, Fluids Parameters, Concentration Parameters, Chemical Parameters, Forces Parameters, Motion and Vibration Parameters, Process Parameters, Thermal Parameters, Mechanical and Sound Waves Parameters, Electric Field Parameters, Magnetic Field Parameters, Radioactivity Parameters
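The flow-and-attribute scheme above can be sketched as a small data model (a minimal illustration of ours, not part of the paper's tool; the class and attribute names are assumptions):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Flow:
    """A flow is a noun-named object moving between functions."""
    name: str                # e.g. "Liquid" (a sub-flow of Material)
    kind: str                # "Material", "Energy" or "Parameters"
    attributes: frozenset = field(default_factory=frozenset)  # state descriptors

# Two flows are considered different if their attributes are not the same.
water_cold = Flow("Liquid", "Material", frozenset({("temperature", "cold")}))
water_hot  = Flow("Liquid", "Material", frozenset({("temperature", "hot")}))
print(water_cold == water_hot)  # False: same name and kind, different attributes
```

A frozen dataclass gives value-based equality for free, so comparing two flows compares their attribute sets exactly as the text prescribes.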
Behavior is a causal relationship between input and output flows. Input flow, transformation and output flow are the three primary elements of a behavior.
Function is an abstracted, subjective representation of behavior, and behavior is a physical interpretation of function; the difference between function and behavior lies only in the identification of their inputs and outputs. The overall function can be decomposed into sub-functions operating on the flows of energy, material and parameters, and functions are classified by their input and output flows. The function type derived from the relationship between input and output flows identifies the behavior. The behavior that characterizes the implementation of a function is called external behavior. As with most complex systems, it is generally good practice to break a large external behavior down into smaller and more easily realized sub-behaviors. An internal behavior is a sequence of alternating sub-behaviors and sub-behavior transitions, and represents the way in which the external behavior is achieved.
3. Effect and Effect Chain

3.1 Effect
Effects are laws of science, including physics, mathematics, chemistry and geometry, together with their corresponding engineering applications; they help to bridge the gap between science and engineering. An effect can be characterized by its input/output relations [13]. In general, an effect has an input flow and an output flow; such an effect is called a basic effect, and its model has two poles, as shown in Figure 1(a). Most transitions from input to output are controlled by an auxiliary flow, so a controllable effect is denoted with three poles, as shown in Figure 1(b). The control flow specifies the factors that can be manipulated to change the output intensity of an effect. An effect may therefore have multiple input, output or control poles.

Figure 1. Effect model: (a) effect model with two poles (input flow, output flow); (b) effect model with three poles (input flow, output flow, control flow)
3.2 Effect Mode (EM)
Effects fulfill the transition from inputs to outputs; in other words, the occurrence of sub-behaviors depends upon effects. Effects can be connected to one another through their input and output ports and through compatibility relationships among adjacent effects, which establish the causal and structural relations of the sub-behaviors. An internal behavior can be achieved by the following effect modes, in which each directed link represents one or several flows.
• Single effect mode: an internal behavior is achieved by a single effect, as shown in Figure 1(a). One effect can fulfill several behaviors, and one behavior can be fulfilled by several alternative effects.
• Serial effect mode: an internal behavior is achieved by a set of effects occurring in sequence, as shown in Figure 2(a).
• Parallel effect mode: an internal behavior is achieved by a set of effects occurring at the same time, as shown in Figure 2(b).
• Ring effect mode: an internal behavior is achieved by a set of effects in which the output of a later effect is fed back to an earlier effect, as shown in Figure 2(c).
• Control effect mode: the internal characteristic of an effect is controlled by other effects in order to control how the internal behavior is achieved, as shown in Figure 2(d).
• Combined effect mode: an internal behavior is achieved by a combination of the above effect modes.
Figure 2. Effect modes
3.3 Effect Chain
Effects can be linked into an effect chain by using the effect modes. Multiple effect chains that produce the same output can be generated, and the chain that best fits the available resources and interrelated constraints is then selected. To generate an effect chain, only effects whose output flow matches the input flow of the next effect can be linked. The consistency between the output flow of one effect and the input flow of the next is denoted by the degree of consistency, Dc. Suppose the required output flow has m attributes, of which n attributes are satisfied by the produced output flow (n ≤ m). Then Dc is given by:

Dc = (n / m) × 100%                  (1)

The effects chosen for an effect chain must be compatible with each other, that is, the name of the output flow of one effect must be the same as the name of the input flow of the next, and Dc = 100%.
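As a minimal sketch (with helper names of our own, not the paper's tool), Equation (1) can be computed directly from the attribute sets of the required and produced flows:

```python
def degree_of_consistency(required_attrs: set, produced_attrs: set) -> float:
    """Dc = (n / m) * 100%, where m is the number of attributes of the
    required output flow and n is how many of them the produced flow satisfies."""
    m = len(required_attrs)
    n = len(required_attrs & produced_attrs)  # count satisfied attributes only
    return 100.0 * n / m

required = {("state", "solid"), ("shape", "sphere"), ("size", "pill")}
produced = {("state", "solid"), ("shape", "sphere")}
print(degree_of_consistency(required, produced))  # ~66.7%: link would be rejected
```

Only a pair of effects reaching Dc = 100% (and matching flow names) would be accepted as compatible chain links.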
During the transformation from an input flow to an output flow by effect modes, three methods are available: the method of exhaustion, the method of minimal path length, and the method of degree of consistency.
• Method of exhaustion. If the number of reasoning steps from input flow to output flow is unlimited, there are in theory countless effect chains that produce the same output. Figure 3 shows the two reasoning models for effect chains: the forward-direction reasoning model and the backward-direction reasoning model.

Figure 3. Reasoning models for effect chains: (a) the forward-direction search starts from the input flow fi and expands the produced output flows fpo of candidate effects until one matches the required output flow fq (name match and Dc = 100%); (b) the backward-direction search starts from fq and expands the produced input flows fpi until one matches fi. Notation: fi = input flow, fq = required output flow, fpo = produced output flow, fpi = produced input flow, p = path length, tr = number of reasoning steps.
The method of exhaustion can produce multiple effect chains from a large-scale effect base, which offers various potential principle solutions and is helpful for innovative product design, but its computation is complex and inefficient.
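A forward-direction exhaustive search of this kind can be sketched as a bounded breadth-first enumeration over a toy effect base (the effect base and flow names below are illustrative assumptions, not the paper's knowledge base):

```python
# Toy effect base: effect name -> (input flow name, output flow name)
EFFECTS = {
    "Joule heating":     ("electric current", "heat"),
    "Thermal expansion": ("heat", "length change"),
    "Electrostriction":  ("electric field", "length change"),
    "Piezoelectricity":  ("force", "electric field"),
}

def enumerate_chains(input_flow, required_flow, tr):
    """Forward-direction exhaustion: grow chains from input_flow for at most
    tr reasoning steps, keeping those whose final output matches required_flow."""
    chains, frontier = [], [([], input_flow)]
    for _ in range(tr):
        next_frontier = []
        for chain, flow in frontier:
            for name, (fin, fout) in EFFECTS.items():
                if fin == flow:              # link only when flows match (Dc = 100%)
                    new_chain = chain + [name]
                    if fout == required_flow:
                        chains.append(new_chain)
                    next_frontier.append((new_chain, fout))
        frontier = next_frontier
    return chains

print(enumerate_chains("electric current", "length change", tr=3))
# → [['Joule heating', 'Thermal expansion']]
```

With a real effect base of hundreds of effects the frontier grows combinatorially with tr, which is exactly why the text calls this method complex and inefficient.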
• Method of minimal path length. In principle, the number of effects making up an effect chain can be chosen at the user's discretion, but effect chains should be as short as possible in order to obtain a simple system. The paths between two flows can be identified by the method of exhaustion. The minimal path length pm is the minimum path length of a transformation between two flows. A piece of minimal-path-length knowledge kp is represented by the input flow fi, the output flow fo and the minimal path length pm:

kp = {fi, fo, pm}                  (2)

For example, {E, Δl, 1} denotes that the minimal path length from electric field (E) to length change (Δl) is one (via the electrostriction effect). With this knowledge, designers can rapidly achieve the transformation from an input flow to the required output flow. The method performs well, but the resulting effect chain may not satisfy all requirements.
• Method of degree of consistency. In the method of exhaustion, every reasoning step produces various effects. If the degree of consistency Dc between the produced output flow of an effect and the required output flow (or between the produced input flow of an effect and the input flow) is computed, the produced output/input flow with the maximum Dc can be identified and used as the new input/output flow for the next reasoning step. The method of degree of consistency is therefore more effective than the method of exhaustion.
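The minimal-path-length knowledge of Equation (2) stores naturally as a lookup table keyed by the flow pair (a toy illustration with invented entries):

```python
# kp = {fi, fo, pm}: minimal path length between an input and an output flow.
KP = {
    ("electric field", "length change"): 1,    # via the electrostriction effect
    ("electric current", "length change"): 2,  # e.g. Joule heating + thermal expansion
}

def minimal_path_length(fi: str, fo: str):
    """Return pm for the flow pair (fi, fo), or None if no chain is recorded."""
    return KP.get((fi, fo))

print(minimal_path_length("electric field", "length change"))  # → 1
```

A lookup like this is why the method is fast: the exhaustive search is run once offline to populate KP, and each design query is then a constant-time retrieval.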
4. Functional Design Based on Behavior and Effect
By analogy-based design (ABD) [14], an existing but appropriate design can be introduced into the current design, and successful effects and structures from previous examples can be transferred to the new design. The process of functional design can be seen as transforming a functional representation into a design description, or physical representation, through behavior, effect and structure, as shown in Equation (3). Design begins with the analysis of the functional requirements Q, and then determines the product function f within the standardized set of functions F. The behavior is identified from the function type FT: the relations among input flow, transformation and output flow imply the external behavior be, and the causal relationship r among the sub-behaviors bs, based on effects e and effect modes EM, supports the functional realization. Structures S exist corresponding to effects e, since structures are the physical carriers of effects; the mapping from function to structure is thus realized through effects. Based on this functional design process, the computer-aided innovation software InventionTool 3.0 has been developed [15], which helps the designer create new concepts by combining one or more effects to accomplish design objectives and improve the performance of the original effect.

Given Q
f = IdentifyForm(Q), f ∈ F
B = IdentifyForm(FT), be ∈ B, be → f
R = IdentifyForm(EM), R ⊆ B × B, r = (bsi × bsj) ∈ R                  (3)
S = IdentifyForm(e) → bs
bi = <bs, r>, bi → be, such that S → f

5. Case Study
The pill is a good dosage form of traditional Chinese medicine, but it cannot be produced on Western pharmaceutical equipment because of its process and physical characteristics. Current pill production involves a long process, high energy consumption and great labor intensity, so it is important to develop continuous forming and a shortened process to meet modern needs. The granulator system can initially be modeled as a black box, as shown in Figure 4: its inputs are powder (medicinal powder) and liquid (cementing liquid), and its outputs are sphericity and particles (pills).

Figure 4. Black-box model of the granulator system (inputs: powder, liquid; output: spherical particles)
According to the known inputs and outputs, effects are searched for with the forward-direction or backward-direction reasoning model. In order to shorten the manufacturing process of the pill, the number of reasoning steps is set to three (tr = 3). The effects are automatically linked into effect chains by using the effect modes. Figure 5 shows part of the effect chains of the granulator system, which are mainly based on the fluidized bed effect, vibration effect, plastic deformation effect, shear effect, friction effect and Pascal's effect. Figure 6 shows the corresponding principle solutions. The principle solution d2 is selected by forming the pairwise comparison matrix of the Analytic Hierarchy Process (AHP). The solution structure of the granulator system is shown in Figure 7.
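AHP selection of this kind weights the candidate solutions with the principal eigenvector of a pairwise comparison matrix, which can be approximated by power iteration; a minimal sketch (the comparison values below are invented for illustration, not taken from the paper):

```python
# AHP priority weights via power iteration on a pairwise comparison matrix.
# Rows/columns: four candidate principle solutions (values are illustrative).
M = [
    [1.0, 1/2, 1/3, 1/4],
    [2.0, 1.0, 1/2, 1/3],
    [3.0, 2.0, 1.0, 1/2],
    [4.0, 3.0, 2.0, 1.0],
]

def ahp_weights(m, iters=50):
    n = len(m)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]   # normalize into a priority vector
    return w

w = ahp_weights(M)
best = max(range(len(w)), key=w.__getitem__)
print(best)  # index of the highest-priority candidate solution
```

The candidate with the largest priority weight (here the last row, which dominates every pairwise comparison) would play the role that solution d2 plays in the paper's matrix.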
Figure 5. Effect chains of the granulator system: (a) a chain based on the fluidized bed effect; (b) chains combining the vibration, plastic deformation and shear effects; (c) chains combining the shear, Pascal's and friction effects (flows: powder, liquid, force, ointment, medicinal strip, cylinder, particle, sphericity)
Figure 6. Principle solutions (a)–(d) based on the effect chains, including variants d1 and d2
Figure 7. Principle solution of the granulator system
6. Conclusion
Functional design is a process in conceptual design that determines the key features of the design result. How to map from function to structure, and how to describe and reason about design concepts completely and effectively, are crucial issues in the conceptual design phase. This paper proposes an automated functional design approach that supports product functional design as follows:
• Effect chains are generated from effects on the basis of six effect modes;
• Many principle solutions are identified through the method of exhaustion, the method of minimal path length and the method of degree of consistency;
• Based on the effect modes, the Effect Module of the computer-aided innovation design software InventionTool 3.0 has been developed, and an effect knowledge base has been established.
A design example, the functional design of a Chinese medicine mechanism, is presented to demonstrate the proposed functional design methodology and to show that the method is feasible.
7. Acknowledgments
This research is supported in part by the Natural Science Foundation of China under Grant Number 50675059 and the National High Technology Research and Development Program of China under Grant Number 2006AA042109.
8. References
[1] Stone RB, Wood KL (1998) Development of a functional basis for design. Transactions of the ASME, Journal of Mechanical Design 122(4):359-370
[2] Qian L, Gero JS (1996) Function-behavior-structure paths and their role in analogy-based design. AIEDAM 10:289-312
[3] Prabhakar G, Goel A (1998) A functional modeling for adaptive design of devices in new environments. Artificial Intelligence in Engineering (Special Issue) 12(4):417-444
[4] Shimomura Y, Takeda H, et al. (1995) Representation of design object based on the functional evolution process model. DTM'95-ASME
[5] Chandrasekaran B, Kaindl H (1996) Representing functional requirements and user-system interactions. AAAI Workshop on Modeling and Reasoning about Function, pp. 78-84
[6] Umeda Y, Ishii M, Yoshioka M, et al. (1996) Supporting conceptual design based on the function-behavior-state modeler. Artificial Intelligence for Engineering Design, Analysis and Manufacturing (AIEDAM) 10(4):275-288
[7] Pahl G, Beitz W (1996) Engineering Design: A Systematic Approach, 2nd edn. Springer-Verlag, London
[8] Collins J, Hagan B, Bratt H (1976) The failure-experience matrix: a useful design tool. Transactions of the ASME, Series B, Journal of Engineering in Industry 98:1074-1079
[9] Hundal M (1990) A systematic method for developing function structures, solutions and concept variants. Mechanism and Machine Theory 25(3):243-256
[10] Altshuller G (1999) The Innovation Algorithm: TRIZ, Systematic Innovation and Technical Creativity. Technical Innovation Center, Worcester
[11] Deng YM, Tor SB, Britton GA (2000) Abstracting and exploring functional design information for conceptual product design. Engineering with Computers 16:36-52
[12] Malmqvist J, Axelsson R, Johansson M (1996) Comparative analysis of the Theory of Inventive Problem Solving and the systematic approach of Pahl and Beitz. 1996 ASME Design Engineering Technical Conference and Computers in Engineering Conference, Irvine, CA
[13] Tan R (2002) Innovation Design: TRIZ, Theory of Innovative Problem Solving. China Machine Press (in Chinese)
[14] Goel AK (1997) Design, analogy, and creativity. IEEE Expert, 62-70
[15] Tan R, Ma J, Cao G (2006) Computer-aided innovation software system: InventionTool 3.0. Software Registration Number: 2006SR13729
Quality Control of Artistic Scenes in Processes of Design and Development of Digital-Game Products

P.S. Pa, Tzu-Pin Su

Graduate School of Toy and Game Design, National Taipei University of Education, No. 134, Sec. 2, Heping E. Rd., Taipei City 106, Taiwan, ROC
[email protected], [email protected]
Abstract A new force that discovered a huge business opportunity in the Internet world has received much attention: the online game publishing industry. Numerous papers discussing games have therefore been published in recent years, but their scope is mainly the marketing of game titles, the performance of program design and the educational value of games. Papers evaluating the key points of the sophisticated and numerous procedures of digital-game product development are still very limited. With the advancement of technology, the scale of game development projects has grown rapidly in recent years, and the huge increase in the artwork required means increased difficulty in the relevant quality management. In light of this, this research focuses on the production quality and evaluation of artistic objects over the course of game development. Evaluation and validation sheets that have been put to use in actual processes are the subjects of study and are used in our analysis. By introducing the proposed work processes into actual development work, we show that the proposed model offers effective quality management and increases effective production capacity by more than 40%. The proposed process model will help to improve the efficiency of R&D work for the entire video game industry.

Keywords: Quality Control, Design and Development, Online Game, Digital-Game Products, Artistic Object
1. Introduction
Since the year 2000, the entire world has suffered from the burst of the Internet economic bubble, which left a significant impact on Internet ecology. However, a new force on the scene has discovered a huge business opportunity in the world of the Internet and received much attention: the online game publishing industry. In light of this, an increasing number of papers discussing the realm of games have been published in recent years. Nonetheless, most of them focus either on product marketing and development or on the educational value of games
[1-2]. However, the game publishing industry is a highly paradoxical, compound domain [3]; it requires the gathering of diverse talents, knowledge and techniques, together with minute and complex processes and a mixture of rational and irrational elements, to create an outstanding game title [4]. Along the way there are many conflicting minor details [5] that are usually difficult to resolve by following standard software development processes. For example, if a commercial software application is chosen as the main development tool, then software efficiency and performance are given top priority [6]; software with good efficiency is considered good software. But when it comes to developing game software, many extra factors need to be taken into account, such as the aesthetic perspective. For an object in a game, its producer, programmer, artist and even the end user may have similar and yet varying artistic expectations [7]. This is not a strange phenomenon in the video game industry, and it is difficult to decide which view is most accurate because a clearly defined standard for passing good/poor judgment does not exist. When such problems arise, the final decision maker, i.e. the supervisor of the art division or the chief producer of the project, usually makes the call. The decision made usually depends on the resources available, which in this case often means the hardware limitations of the game platforms. In the development of an online game, reaching a compromise with reality is often the answer to such irrational questions [8], because creating the perfect game can only be thought of as a spiritual accomplishment, an ideal that may not meet practical cost-effectiveness requirements. This is why spelling out clearly defined items at the beginning of any development plan, in preparation for the potential conflicts that may arise later, has become one of the essential tasks [9].
The construction and development of information systems have become tremendously difficult owing to factors such as scale, resources, manpower and experience. Even in the United States, only 32% of all IT projects close unhindered, and small and medium-sized companies with dated project construction technologies suffer an even lower success rate owing to the lack of strict, well-structured construction methods [10]. The knowledge and technology involved in game development have long progressed beyond the scope of pure "playing"; the most emblematic example is online games. Having participated in the development of major titles like Meridian 59 and Ultima Online 2, Damion Schubert said: "Today, pretty much all online games have big budgets. But when you review your budget, you have to make sure that your budget is focused on content, be it artwork or construction of the world in the game. To create a realistic world, you need to deal with a massive amount of content in your game. Even if you have a big budget, if you are not focused on the content of the game, then your programmers may be creating high-risk products that are over-complicated and difficult to release for open beta tests" [11]. When a game design project is forced to restrict imaginative creativity with effective management approaches applied to elements that are difficult to normalize [12] (namely, the artistic objects), the Waterfall Model commonly adopted for ordinary software projects can only offer basic planning and management capabilities and is unable to respond to the variables that may spring up at any time during game development [13]. The Waterfall Model
is suited to commercial software, where each unit only has to complete the required tasks on schedule for the whole project to go smoothly, but this model is far from practical in the video game industry. In fact, developers tend to rely more on the Spiral Model [14] in game development to ensure better results. Every new object and new function added may become an extra load on the program in terms of software and hardware, and it takes experience on the developers' part to avoid these negative effects; numerous small and medium game developers work towards this goal through repeated cycles of production, testing and modification. With regard to the software development life cycle (SDLC) model and the descriptions of the required key documentation, IBM provided a structured descriptive framework in the 1980s: the ETVX (Entry/Task/Verification/Exit) model. It describes the life cycle model of the entire software development process and the corresponding procedures and tasks at every stage, so that developers can have a very good grasp of the software development framework for the entire project. The ETVX model can be seen as a framework that takes quality as the basic premise for establishing all later-stage processes, incorporating the "Plan-Do-Check-Act" quality concepts into all work procedures. The "Verification" step within this model deals with the actual application of art management in a game's development processes. It is a delicate matter because defining a clear management standard for artwork is difficult, and that is the content discussed in this study.
2. Research Methods
The ETVX framework includes the following six steps of actual implementation:
• Inputs: all the items that can be input at this phase, including requirement specifications, contracts, the PEP and so forth.
• Entry (Entry Criteria): the necessary conditions that must be met before any procedures can begin at this phase, such as the approval of contracts, an RFP approved by a supervisor, the standards for evaluation, etc.
• Task: the tasks that need to be executed or completed at this phase, for instance planning the SOW (Statement of Work), the requirement specifications, recommendations and so forth.
• Validation: the methods of validation for tasks that have been completed, i.e. the evaluation of documentation, the examination of contracts, the testing of software, etc.
• Exit (Exit Criteria): the necessary conditions that must be met to leave this phase, such as the signing and filing of contracts, the examination and publishing of specifications, the testing and release of software, etc.
• Outputs: the possible output items at this phase, such as contracts, recommendations, specifications, reports, etc.
In the process of game product development for any title, a complete Game Design Document (GDD) usually comes along with an Art Design Document (ADD). If the development is of a smaller scale, the ADD may simply be presented briefly within the GDD itself. For a larger-scale development project, such as
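The six ETVX steps above can be sketched as a simple record whose validation gate must pass before the phase can be exited (a minimal illustration of ours; the field values and phase name are assumptions, not IBM's specification):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EtvxPhase:
    name: str
    entry_criteria: List[str]       # conditions required before work starts
    tasks: List[str]                # work to execute within the phase
    validation: Callable[[], bool]  # how the completed work is verified
    exit_criteria: List[str]        # conditions required to leave the phase

    def can_exit(self) -> bool:
        # Exit is only possible once validation of the tasks has passed.
        return self.validation()

art_review = EtvxPhase(
    name="Artistic object review",
    entry_criteria=["ADD approved", "2D design sketch signed off"],
    tasks=["Build 3D model", "Apply texture mapping"],
    validation=lambda: True,        # stand-in for the platform/overall tests
    exit_criteria=["Final acceptance signature collected"],
)
print(art_review.can_exit())  # → True (the validation stub passes)
```

Chaining such phases, with each phase's Outputs feeding the next phase's Inputs, reproduces the life-cycle structure the ETVX model describes.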
an online game, then an independent and comprehensive ADD is absolutely necessary. The complete ADD contents should include the items illustrated in Table 1. In this research, the authors discuss the contents of the model regulations and scenario regulations. The principle behind these regulations works according to the scheme shown in Figure 1.

Table 1. ADD contents

Art Design Document
(1) Overall style description: the establishment of the main artistic style of the entire game
(2) Character design sketch: the appearance, race and costume of in-game characters
(3) Object design sketch: the appearance of significant in-game objects and items
(4) Scene design sketch: the design sketches of all in-game stages
(5) Color setting document: the actual values or serial numbers of designated skin tones, transparent colors and other colors
(6) Model regulation: the limitations on character and object model design
(7) Scenario regulation: the limitations on the creation of in-game stage models
(8) Outsourcing regulation: the limitations on the outsourcing of work related to artistic content
(9) Special effects regulation: the limitations on the examination of special effects
(10) Action regulation: the limitations on the examination of character or object motion
Figure 1. Flow of basic art work
Every artistic object must go through the entire process, and it is only considered complete after its Maker, the Art Director, the Producer and the Keeper have reached consensus and unanimously approved it. An artistic object is finalized only after it has passed the final acceptance test and the personnel in charge have provided their signatures. No further alterations are made to finalized artistic objects.
Including the final acceptance test, all artistic objects must pass three tests over the course of the process:
Check 1: Formal test on the game platform. The cross-platform development software RenderWare has been chosen for the game design. When objects are constructed for the first time, they must go through the first test in the RenderWare environment to make sure that object details such as appearance and mapping colors are correct. This step prevents and corrects the minute mistakes made by artistic creation personnel due to differences in their work habits.
Check 2: Overall test. In this test, the completed artistic objects are inserted into the actual game stage for a test run to observe their level of completion in the game. This is also an opportunity to spot and correct any post-production mistakes that may be present.
Check 3: Final acceptance test. After all the processes have been completed, all personnel related to the specific object, including the Art Director, the Maker, the 2D Designer of the original script, the Game Designer who participated in the creation of the object and the Programmer, must be present for the final acceptance test to make sure everything is correct before it can be concluded.
By implementing this process, the developers can ensure that every completed artistic object has the same specification and can be used normally, without any problems, on other game platforms with the same settings. The process eliminates the need to alter numerous semi-finished products in the post-production phase to make up for mistakes made in the course of artistic management.
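The three checks above form a strictly ordered gate sequence; a minimal sketch of that pipeline (the check names follow the text, but the dictionary-based logic is our own illustration):

```python
# The three-gate validation of artistic objects as a sequential pipeline.
CHECKS = ["platform test (RenderWare)", "overall in-game test", "final acceptance"]

def validate_object(results):
    """results: dict mapping check name -> bool. An object is finalized only
    when every check passes in order; otherwise report the first failing gate."""
    for check in CHECKS:
        if not results.get(check, False):
            return f"rework needed at: {check}"
    return "finalized (no further alterations allowed)"

print(validate_object({c: True for c in CHECKS}))
# → finalized (no further alterations allowed)
```

Returning at the first failing gate mirrors the workflow: an object that fails the platform test never reaches the in-game or acceptance stages.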
3. Results and Discussion
For game planning, the supported game platform is Microsoft's XBOX, and RenderWare, created by Criterion Software, has been chosen as the development software in this study. The game is a full-3D, third-person-perspective network adventure game for one to four players; it can be played as a single-player game or as a multiplayer game through the XBOX Live service. Owing to network usage and the restrictions of the XBOX console's graphics processing speed and RAM, we have to ensure that every object added to the game is acceptable within the hardware capabilities. The processes presented in this research may therefore not be entirely applicable to game development projects targeted at PCs, because of the discrepancies in the hardware involved. The discussion and analysis of quality management for artistic objects over the course of game development is based purely on the game development case presented in this study.
108
P.S. Pa and T. P. Su
3.1 Creation of Character Models and the Validation Process

3.1.1 Model Setting and Model Specification
Derived from the GDD, the model specification documentation should include descriptions of the characters' appearance, equipment, nature, behavior, personality, race and social structure, together with 2D artistic sketches of the characters from the front, side and back (including detailed facial expressions and close-up shots). In addition, every character must have at least one artistic portrait in full color, with the color settings used clearly indicated on the portrait.

3.1.2 3D Model Creation
The relevant personnel will conduct a meeting to make sure that the designs stated in step 1 conform to what has been stated in the specification, and then create low-polygon 3D character models based on the descriptive draft and the original sketch of the artistic designs. The character models need to be checked for consistency of artistic style against the scenarios.
3.1.3
Normal Map Creation
In order to improve efficiency and lighten the load on the hardware, high-polygon models need to be created for the existing character models by using Normal and Bump mapping. The high-polygon models will then be mapped back to the original low-polygon models so that characters will still look great even at lower frame rates.
3.1.4
Character Motion and Sound Effects Creation
After the completion of the character models, the next step is to begin the post-production process for motion and sound effects. Movement should be based on the descriptive documentation stated in step 1; the sound effects will be handled by the game designers and the sound-effects production crew (or an outsourced crew), with the emphasis on staying true to the original sketch.
3.1.5
Overall Test
The completed character models must pass the final game play test to make sure everything is consistent with design requirements. The coordination and interaction between movement, sound and effects must be integrated seamlessly. Figure 2 shows the flow of character building and verification.
Quality Control of Artistic Scenes in Development of Digital-Game Products
Figure 2. Flow of character building and verification
3.2
Scenario Creation and the Validation Process
3.2.1
Scenario Setting, Scenario Specification
The scene concept description derived from the story setting must include features such as scene visuals, scene topography and situation, history, and cultural background, together with important scene locations, objects, and event listings and descriptions. Furthermore, 2D drafts of scenes, scene setting diagrams, a complete scenario layout and a flow chart of the game's progression should also be present in the documentation. This constitutes the scenario specification document, which is mainly an account of the game's progression and the gaming experience.
3.2.2
Scenario Creation (Art)
The construction of 3D scenario models based on the scenario settings must be checked for artistic style consistency in the appearance of the scenes and the atmosphere, along with coordination with the character and monster models. Important scenario locations and events must also be checked for consistency with the designs. All items and objects that are not under the scope of the scenario editor must be created here and observed to see whether they fit well in the actual game.
3.2.3
Polishing (Scenario Editor)
When the 3D scenarios have been completed, quest and interactive items must be added to the scenarios by using the scenario editor, so that the artistic crew can perform final touch-ups and polishing of the visuals. The tasks at this phase are generally done with the quest editor and the scenario editor.
3.2.4
Sound Effects and Music Post-Production (Scenario Editor)
After the integration of the scenarios is complete, the final adjustments to sound effects must be made. Sound effects such as the whistling of the wind, ambient sounds such as the flowing waters of rivers, and background music are added to the game at this stage. This portion is processed with the scenario editor after all the necessary sound files are ready.
3.2.5
Overall Test
The scenarios must pass various in-game tests at the final stage to ensure that everything is consistent with the design requirements. Each quest and special effect also has to be tested, and any problems found must be corrected. Figure 3 shows the flow of scene/stage building and verification.
3.3
Scenario Objects Creation and the Validation Process
3.3.1
Object Setting, Object Specification
Scenario objects are derived from the scenario setting documentation. This documentation has to include descriptions of object dimensions, appearances, styles, materials used and functions, with additional images for reference whenever possible. For interactive objects, detailed descriptions of the methods of operation must also be included. This sums up the draft for the scenario object description, which can be turned into the original draft for object design after the artistic drawings are made.
3.3.2
3D Model Creation
Based on the draft for scenario object description and the draft of artistic design, the 3D scenario model can be constructed. However, the model needs to be checked against the constructed scenarios to see if the artistic style remains consistent and uniform throughout.
Figure 3. Flow of scene/stage building and verification
3.3.3
Object Post-Production
After the 3D object construction has been completed, the motion commands have to be edited according to the requirements. The motion commands need to be timed with high precision for the post-production of sound effects.
3.3.4
Testing
The completed scenario objects must pass the final test in the game to make sure everything is consistent with the design requirements. The coordination between motion, sound, and effects of interaction must be seamlessly integrated. Figure 4 shows the flow of object building and verification.
Figure 4. Flow of object building and verification
On the whole, as far as the artists are concerned, the incorporation of the proposed process will not make them draw faster, but much of the time lost on the relay of opinions and cross-corrections can definitely be prevented. The introduction of this process will ensure that artists spend less time on communication and corrections, and the process can serve as a guideline and a reference for solutions when things become hectic and out of control.
4.
Conclusions
Though purely theoretical processes have a certain value as references, they are hardly practical or feasible in actual applications. The contents and examples provided in this study can serve as a further reference for construction and validation purposes once the purely theoretical contents have been grasped. However, due to the limitations of the research, the scope of this study covers only fields related to art production processes, without touching on implementation details at other levels. In the realm of game product development, owing to the lack of an effective learning system that can pass down the related knowledge and experience in planning, the fields of game design validation and program validation still leave much room for further research. We recommend that researchers interested in these fields direct their attempts at the two areas we have pointed out here.
5.
Acknowledgement
The current study is supported by the National Science Council under contract 96-2411-H152-003.
Chapter 2 Engineering Knowledge Management and Design for X

Integration of Design for Assembly into a PLM Environment (117)
Samuel Gomes, Frédéric Demoly, Morad Mahdjoub, Jean-Claude Sagot

Design Knowledge for Decision-Making Process in a DFX Product Design Approach (127)
Keqin Wang, Lionel Roucoules, Shurong Tong, Benoît Eynard, Nada Matta

Mobile Knowledge Management for Product Life-Cycle Design (137)
Christopher L. Spiteri, Jonathan C. Borg

Research on Application of Ontological Information Coding in Information Integration (147)
Junbiao Wang, Bailing Wang, Jianjun Jiang, Shichao Zhang

RoHS Compliance Declaration Based on RCP and XML Database (157)
Chuan Hong Zhou, Benoît Eynard, Lionel Roucoules, Guillaume Ducellier

Research on the Optimization Model of Aircraft Structure Design for Cost (167)
Shanshan Yao, Fajie Wei

Research on the Management of Knowledge in Product Development (177)
Qian-Wang Deng, De-Jie Yu

Representing Design Intents for Design Thinking Process Modelling (187)
Jihong Liu, Zhaoyang Sun

Application of Axiomatic Design Method to Manufacturing Issues Solving Process for Auto-body (199)
Jiangqi Zhou, Chaochun Lian, Zuoping Yao, Wenfeng Zhu, Zhongqin Lin

Port-Based Ontology for Scheme Generation of Mechanical System (211)
Dongxing Cao, Jian Xu, Ge Yang, Chunxiang Cui

Specification of an Information Capture System to Support Distributed Engineering Design Teams (221)
A. P. Conway, A. J. Wodehouse, W. J. Ion, A. Lynn

Collaborative Product Design Process Integration Technology Based on Webservice (231)
Shiyun Li, Tiefeng Cai

Information Modelling Framework for Knowledge Emergence in Product Design (241)
Muriel Lombard, Pascal Lhoste

Flexible Workflow Autonomic Object Intelligence Algorithm Based on Extensible Mamdani Fuzzy Reasoning System (251)
Run-Xiao Wang, Xiu-Tian Yan, Dong-Bo Wang, Qian Zhao

DSM based Multi-view Process Modelling Method for Concurrent Product Development (261)
Peisi Zhong, Hongmei Cheng, Mei Liu, Shuhui Ding

Using Blogs to Manage Quality Control Knowledge in the Context of Machining Processes (273)
Yingfeng Zhang, Pingyu Jiang, Limei Sun

Analysis on Engineering Change Management Based on Information Systems (283)
Qi Gao, Zongzhan Du, Yaning Qu

Research and Realization of Standard Part Library for 3D Parametric and Autonomic Modeling (293)
Xufeng Tong, Dongbo Wang, Huicai Wang

Products to Learn or Products to Be Used? (303)
Stéphane Brunel, Marc Zolghadri, Philippe Girard

Archival Initiatives in the Engineering Context (313)
Khaled Bahloul, Laurent Buzon, Abdelaziz Bouras

Design Information Revealed by CAE Simulation for Casting Product Development (323)
M.W. Fu

An Ontology-based Knowledge Management System for Industry Clusters (333)
Pradorn Sureephong, Nopasit Chakpitak, Yacine Ouzrout, Abdelaziz Bouras
Integration of Design for Assembly into a PLM Environment
Samuel Gomes, Frédéric Demoly, Morad Mahdjoub, Jean-Claude Sagot
SeT Laboratory, Belfort-Montbéliard University of Technology, 90010 Belfort cedex, France. Phone: +33 384 583 006, Fax: +33 384 583 013, e-mail: [email protected]
Abstract This paper presents a methodology in the field of Design for Assembly (DFA) related to the generation of assembly sequences and information systems in the PLM area. This method has been designed to develop assembly methods in our own PLM tool, taking assembly constraints into consideration in the early phases of the design process in order to be coherent with concurrent engineering concepts. An experimental case study, a racing car ground-link system, is presented to illustrate the methodology developed. Keywords: PLM, Design for Assembly, Assembly Sequences, Collaborative Engineering, Knowledge Management.
1.
Introduction
In a context of competitiveness with increasing constraints in terms of Quality-Cost-Time, companies must set up a collaborative engineering approach facilitating co-operation and coordination between their various departments and project teams, using, for example, PLM (Product Lifecycle Management) tools. According to the Aberdeen Group report [1], companies that became aware of PLM potential have seen their performance increase considerably, with a rise in sales of 19% and a fall in product development costs of 17%. There are, however, various directions to consider in order to achieve competitiveness and profit within company design processes. It requires, for example, accumulating information and re-using expertise on various product-process design activities. Based on professional processes integrated into technical data management tools such as PLM or PDM (Product Data Management) systems, our research activity is aimed at assembly engineering and focused on the early phases of the product design process. Indeed, the assembly engineering competence, traditionally considered at the end of the product development cycle, can benefit from the use of constraints resulting from upstream design phases to generate optimal assembly process sequences. Moreover, designers who define the product must be able to consider expert rules related to assembly in order to avoid many
118
S. Gomes, F. Demoly, M. Mahdjoub and J.–C. Sagot
iterations, which would result in a reduced level of effectiveness and therefore of productivity in design. Thus the objective of our research activity is to allow better collaboration between product designers and production engineers through a methodology integrating PLM, CAD tools and assembly know-how. This paper first describes our methodology of semi-automatic generation of assembly sequences from product data stored in a PLM system, and particularly kinematic links, geometrical constraints and specific expert rules. The aim here is to enhance our own PLM tool, ACSP (in French: Atelier Coopératif de Suivi de Projet), by integrating product-process design domains into it, using our DFA approach [2]. In a second step, we present and discuss our results, coming from an experimental case study combining collaborative design processes with Product Data Management (PDM) and Product Lifecycle Management (PLM) concepts and tools, in terms of Design for Assembly (DFA) methods, and particularly semi-automatic assembly sequence generation in a CAD environment. Finally, conclusions and perspectives are defined to prepare future work.
2.
Our Methodology of Integrated Product-Process Design
Our global methodology takes into account a concurrent design process represented by Gomes' model [3], and a matrix-based traceability analysis approach, considering our work as a frame of reference for the accumulation and reuse of expert rules resulting from product design and assembly activities. The analysis performed on these matrices uses simple mathematical functions, including summation of rows and columns and sorting. These methods, similar to Axiomatic Design [4] and the Design Matrix System, provide useful insights into product-process integrated design by focusing attention on system requirements, functionality, components and finally assembly sequences. This design process model, linked with the MD-MV model (Multi-Domains and Multi-Viewpoints) [3], constitutes the framework of the ACSP PLM tool and has the objective of designing, identifying, selecting, evaluating, accumulating and reusing information, and thus knowledge. Jared et al. indicate that 72% of DFA criteria and assembly process sequence generation can be solved through the geometrical CAD model [5] and, consequently, by the PDM system. Our approach consists of using PLM data, combined with CAD models and specific filters that consider DFA rules. This method favors assembly process generation through matrix analysis [6] and operations, in order to reduce problem complexity and to define the "Parts-Workplaces" matrix (PW = (pw_ij), 1 ≤ i ≤ k, 1 ≤ j ≤ v). This matrix specifies the workplaces where each part is assembled at the various steps of the assembly process. Thus, this methodology, integrated in a PLM environment, can be broken down in the following way:
• Definition of the product structure and strategic parameters in the PLM, applied to the parts connection square matrix ("Parts-Parts" matrix: PP = (pp_ij), 1 ≤ i ≤ k, 1 ≤ j ≤ k),
• Constraints modelling between product components in a matrix form, based on the CAD model analysis,
• Automatic generation of feasible assembly process sequences by means of specific algorithms (detailed in the next paragraphs); these assembly sequences are then created and stored in the process domain of the PLM system,
• Assembly process sequence representation in CAD tools, generated directly from the PLM using Visual Basic scripts, in order to validate the assembly proposals,
• Workplace design by selecting the relevant process sequence, considering the time required for each assembly operation and the assembly process reference time.
3.
Experimentation
In order to illustrate our proposals, an experimental design case has been chosen. Every year, our mechanical engineering and design department has to develop and prototype an entirely new racing vehicle for the SIA car competition (French Automotive Engineers Society). This racing car design project is used as an experimental case for our methodology. To simplify the demonstration, we limit the case study to a sub-product of the racing car: the ground-link suspension system. This sub-product includes many mechanical parts linking the wheel to the chassis. All these data are stored in the PLM system. After an analysis of the CAD model of our racing car suspension triangle (Figure 1), we extract the strategic constraints (constraints due to direct interferences and precedence constraints between components) for the assembly process.
Figure 1. Suspension triangle concept considered for our design-for-assembly experiment
3.1
Product Structure and Constraints Modelling
Designers and manufacturers must verify that a given designed product can be assembled, without interference between parts, before the product is manufactured. Currently, most PLM tools do not have the capability to directly analyze the
feasibility of a given assembly plan for a product, or to generate an optimal or near-optimal assembly plan. As a result, a great deal of prior research exists on developing external assembly analysis tools for automatic assembly sequence planning and optimization. We focus our approach on the automatic generation of assembly process sequences, starting from product structure data stored in the PLM system. We chose to represent the precedence knowledge of an assembly in a directed graph form (Figure 2), where each node represents an elementary component and each bond between nodes indicates the presence of a connection between two elementary components. The graph identifies two types of connection: contact connections (solid lines) and dummy connections (dashed lines), the latter imposing an assembly order constraint when there is no contact between two components.
Figure 2. Directed graph of a suspension triangle concept
The directed graph describes the precedence properties of an assembly, which can be represented in our own PLM tool, ACSP (Figure 3). This tool makes it possible to assign the type of connection between two elements: component-component, component-sub-assembly, and sub-assembly-sub-assembly. Besides, each connection is assigned an order constraint. The ACSP tool helps experts to exploit the precedence knowledge described previously and to generate automatically the connection matrix corresponding to the directed graph.
Figure 3. Example of connections between Foam A and others elements in PLM tool ACSP
As illustrated in Figure 4, we can map the relationships shown in the directed graph into the previously described non-symmetric square connection matrix PP = R = [r_ij] ("Parts-Parts" matrix), applied to the suspension triangle example described above and detailed in our PLM tool.
[Matrix figure: the 13×13 connection matrix R over the components Tube A, Sleeve A2, Kneecap A2, Foam A, Sleeve A1, Plate, Kneecap A1, Tube B, Sleeve B2, Kneecap B, Foam B, Sleeve B1 and Screw. Entry values are read as follows:]
1: contact connection, component i can be assembled before component j is assembled;
-1: contact connection, component i must be assembled after component j is assembled;
λ: dummy connection, component i must be assembled before component j is assembled;
-λ: dummy connection, component i must be assembled after component j is assembled;
0: no connection between the two components, or self-relationship.
Figure 4. The connection matrix of a suspension triangle concept
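The structure of such a connection matrix is straightforward to reproduce in code. The sketch below is purely illustrative Python (the authors' own tooling is Matlab and the ACSP PLM system); encoding λ as the value 2, and the specific `add_connection` calls, are our own assumptions rather than data from the paper:

```python
# Encoding of connection-matrix entries (our own convention, not the paper's):
#   1 / -1 : contact connection, i before / after j
#   2 / -2 : dummy (lambda) connection, i before / after j
#   0      : no connection, or self-relationship
PARTS = ["Tube A", "Sleeve A2", "Kneecap A2", "Foam A", "Sleeve A1",
         "Plate", "Kneecap A1", "Tube B", "Sleeve B2", "Kneecap B",
         "Foam B", "Sleeve B1", "Screw"]

def empty_connection_matrix(n):
    """Return an n x n matrix initialised with 0 (no connection)."""
    return [[0] * n for _ in range(n)]

def add_connection(R, i, j, kind=1):
    """Record that part i can/must be assembled before part j.
    kind: 1 = contact, 2 = dummy (lambda).  The matrix is kept
    antisymmetric in sign, matching the legend of Figure 4."""
    R[i][j] = kind
    R[j][i] = -kind

R = empty_connection_matrix(len(PARTS))
# Illustrative entries only; the real values come from the ACSP/CAD analysis.
add_connection(R, PARTS.index("Tube A"), PARTS.index("Sleeve A2"))      # contact
add_connection(R, PARTS.index("Foam A"), PARTS.index("Sleeve A1"), 2)   # dummy
```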
3.2
Detection of Sub-Assemblies and Sub-Assembly Layers
Starting from a product of n components, a sub-assembly is a set of components p1, p2, p3, …, pm, with 2 < m ≤ n, whose connection sub-matrix is either serial or parallel:
• For a serial assembly matrix, each component is connected only to its immediate neighbours in the assembly chain (as in Figure 5),
• For a parallel assembly matrix, all other components are connected only to the base component i: |r_i,i| = 0, and if |r_ij| = 1 then r_jk = 0 for k ≠ j and k ≠ i.
Two examples of sub-assemblies formed by four elements, {Foam A, Tube A, Sleeve A2, Kneecap A2} and {Foam B, Tube B, Sleeve B2, Kneecap B} (Figure 6), are detected in the connection matrix and represented by sub-matrices obtained by extracting the rows and columns corresponding to those elements. For each sub-matrix detected, a companion matrix is generated. The detection of candidate sub-assemblies is performed via an algorithm implemented in the Matlab mathematical tool.
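These two patterns can be tested mechanically on a candidate set of components. The following Python sketch stands in for the authors' Matlab algorithm (the function names are ours): `is_serial` checks the chain pattern of Figure 5, and `is_parallel` the parallel rule quoted above:

```python
def submatrix(R, idx):
    """Extract the square sub-matrix of R for the component indices idx."""
    return [[R[i][j] for j in idx] for i in idx]

def is_serial(S):
    """Chain pattern of Figure 5: each component is connected only to
    its immediate neighbours (non-zero exactly where |i - j| == 1)."""
    n = len(S)
    for i in range(n):
        for j in range(n):
            if (S[i][j] != 0) != (abs(i - j) == 1):
                return False
    return True

def is_parallel(S, base=0):
    """Rule from the text: all other components are connected only to
    the base component; no connections among non-base components."""
    n = len(S)
    for j in range(n):
        for k in range(n):
            if j != base and k != base and S[j][k] != 0:
                return False
    # every non-base component must be connected to the base
    return all(S[base][j] != 0 for j in range(n) if j != base)

# Five-component examples, as in Figure 5
serial = [[0,1,0,0,0],[1,0,1,0,0],[0,1,0,1,0],[0,0,1,0,1],[0,0,0,1,0]]
parallel = [[0,1,1,1,1],[1,0,0,0,0],[1,0,0,0,0],[1,0,0,0,0],[1,0,0,0,0]]
```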
For a five-component example, the serial and parallel connection patterns are:

Serial:
0 1 0 0 0
1 0 1 0 0
0 1 0 1 0
0 0 1 0 1
0 0 0 1 0

Parallel:
0 1 1 1 1
1 0 0 0 0
1 0 0 0 0
1 0 0 0 0
1 0 0 0 0

Figure 5. Serial and parallel assembly matrices

[Matrix figure: the serial sub-matrices SA1 = {Foam A, Tube A, Sleeve A2, Kneecap A2} and SA2 = {Foam B, Tube B, Sleeve B2, Kneecap B} extracted from the connection matrix R, shown with their companion matrices SAC1 and SAC2.]

Figure 6. Serial sub-matrices SA1 and SA2 with, respectively, the companion matrices SAC1 and SAC2, to define the possibility of sub-assemblies
For a product composed of many components, the assembly expert must take into account sub-assembly layers. It is necessary to use another matrix: the contracted matrix R*, which considers each sub-assembly detected in the connection matrix R as a single component. Thus, other sub-assemblies can be detected in another layer. This matrix R* depends on the detected sub-assembly matrices (m×m); the size of the matrix R* is then n - m + 1. Hence, to build R*, several rules must be applied (Figure 7). In our case, as illustrated in Figure 7, with the detected sub-assemblies SA1 and SA2, the contracted matrix has a size of 7 when each of SA1 and SA2 is considered as a single component.
[Matrix figure: the 7×7 contracted matrix R*(1) over {SA1, Sleeve A1, Plate, Kneecap A1, SA2, Sleeve B1, Screw}, with SA1 treated as a single component.]
IF (all elements in the kth column of the companion matrix SAC = 0) THEN (r_sa,k = 0);
IF (all non-zero elements in the kth column of the companion matrix SAC are > 0 and = λ) THEN (r_sa,k = λ) ELSE (r_sa,k = 1);
IF (all non-zero elements in the kth column of the companion matrix SAC are < 0 and = -λ) THEN (r_sa,k = -λ) ELSE (r_sa,k = -1).
Figure 7. The contracted matrix considering SA1 as a single component
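The three IF/THEN rules above can be applied column by column to build R*. A minimal Python sketch (again in place of the authors' Matlab code; encoding λ as the value 2 is our own convention):

```python
LAMBDA = 2  # our encoding of the lambda (dummy) connection value

def contract_column(sac_column):
    """Apply the rules of Figure 7 to one column of the companion
    matrix SAC, producing the entry r_sa,k of the contracted matrix R*."""
    nonzero = [v for v in sac_column if v != 0]
    if not nonzero:
        return 0
    if all(v > 0 for v in nonzero):
        # all dummy -> lambda, otherwise at least one contact -> 1
        return LAMBDA if all(v == LAMBDA for v in nonzero) else 1
    if all(v < 0 for v in nonzero):
        return -LAMBDA if all(v == -LAMBDA for v in nonzero) else -1
    # mixed signs: handled by the interference analysis of Section 3.3
    raise ValueError("mixed signs: the set is not a feasible sub-assembly")
```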
3.3
Interference Analysis Between Each Detected Set and Other Parts of the Product
Starting from these types of assembly configurations, two kinds of matrices are generated for each sub-assembly: the Boolean square matrix SA and the companion matrix SAC. The SAC companion matrix is used to detect whether the potential sub-assembly interferes with the selected set and other individual components. Thus, to determine whether a sub-assembly is feasible, several rules must be applied to the associated companion matrix. If the signs of the non-zero elements in the kth column are neither all positive nor all negative, there is interference between the SA matrix set and other individual components, and this set cannot be considered as a sub-assembly. In our example, for each potential sub-assembly detected, the non-zero elements in each column of its companion matrix are either all positive or all negative, so these two sets can be considered as serial sub-assemblies. Using this method, the following assembly layers can be generated:
Assembly layer 1 (from R)
Serial sub-assemblies: SA1 = {Foam A, Tube A, Sleeve A2, Kneecap A2}; SA2 = {Foam B, Tube B, Sleeve B2, Kneecap B}
Individual components: {Sleeve A1, Plate, Kneecap A1, Sleeve B1, Screw}

Assembly layer 2 (from R*(1))
Serial sub-assembly: SA3 = {SA1, Sleeve A1, Plate}
Individual components: {Kneecap A1, Sleeve B1, Screw}

Assembly layer 3 (from R*(2))
Serial sub-assembly: SA4 = {SA2, Sleeve B1, SA3}
Individual components: {Kneecap A1, Screw}

Assembly layer 4 (from R*(3))
Parallel sub-assembly: SA5 = {SA4, Kneecap A1, Screw}

Figure 8. Assembly layers
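The interference rule of this section (a candidate set is rejected if any column of its companion matrix mixes positive and negative non-zero elements) can be sketched as follows; the Python is illustrative and not the authors' implementation:

```python
def is_feasible_subassembly(SAC):
    """Sign rule of Section 3.3: for every column of the companion
    matrix SAC, the non-zero elements must be either all positive or
    all negative.  Otherwise the candidate set interferes with the
    rest of the product and cannot form a sub-assembly."""
    n_cols = len(SAC[0]) if SAC else 0
    for k in range(n_cols):
        col = [row[k] for row in SAC if row[k] != 0]
        if col and not (all(v > 0 for v in col) or all(v < 0 for v in col)):
            return False
    return True
```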
3.4
Sub-Assembly Layers Management
Detected and verified sub-assemblies must have their components organized in a defined way in order to restrict the generation of feasible assembly sequences. To reach this objective, an assembly order is defined for each layer of sub-assemblies by permutations. Concerning the first sub-assembly, SA1, the base component is the component whose row has non-zero elements that are all positive in sign. In this case, all elements of its row and its column are set to zero, and the base component is placed in first position (Figure 9).

[Matrix figure: permutations applied to the sub-assembly matrices SA1 = {Foam A, Tube A, Sleeve A2, Kneecap A2}, SA3 = {SA1, Sleeve A1, Plate} and SA4 = {SA2, Sleeve B1, SA3}.]

Figure 9. Examples of a permutation with the sub-assemblies SA1, SA3, SA4
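The permutation step described above can be sketched as follows (Python; the base-component criterion is read directly from the text, but the function itself is our own illustrative construction and assumes a base component exists):

```python
def reorder_subassembly(S, names):
    """Find the base component, whose row contains only non-negative
    entries (all non-zero signs positive), zero out its row and column,
    and move it to the first position, as described for SA1."""
    n = len(S)
    base = next(i for i in range(n)
                if all(v >= 0 for v in S[i]) and any(v > 0 for v in S[i]))
    order = [base] + [i for i in range(n) if i != base]
    reordered = [[S[i][j] for j in order] for i in order]
    for k in range(n):  # clear the base row/column (now at index 0)
        reordered[0][k] = 0
        reordered[k][0] = 0
    return [names[i] for i in order], reordered
```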
This same method must be performed for the other sub-assemblies (SA2 for instance) until the part assembly order within each sub-assembly is obtained. In our example, as shown in Figure 9, two assembly sequences are detected:
• Assembly sequence 1: [({Foam B, Tube B, Sleeve B2, Kneecap B}; Sleeve B1; ({Foam A, Tube A, Sleeve A2, Kneecap A2}; Sleeve A1; Plate)); Kneecap A1; Screw],
• Assembly sequence 2: [({Foam B, Tube B, Sleeve B2, Kneecap B}; Sleeve B1; ({Foam A, Tube A, Sleeve A2, Kneecap A2}; Sleeve A1; Plate)); Screw; Kneecap A1].
3.5
PLM and CAD Link
Starting from the previously generated assembly sequences, the next objective consists of generating, through our own PLM tool, a Visual Basic script to be executed in a CAD tool in order to visualize assembly simulations (Figure 10).
Figure 10. Simulation of assembly sequences of suspension Triangle concept in CATIA v5
These scripts take into account the positions of components in relation to the assembly layers. For example, for one assembly, two positions can be used: the position where components are assembled and the position where components are fragmented. Thus, starting from a detected assembly sequence, components can move in a predefined order, from an initial position (fragmented view) to the final position (components assembled), for each assembly layer. At the end of this experimentation, it is possible to define the "Parts-Workplaces" matrix (PW = (pw_ij)), combining the previously defined assembly sequences and the time per assembly operation, and considering time values as close as possible to multiples of the reference time of the assembly process.
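The "Parts-Workplaces" step is only outlined in the paper; one simple way to realise it is a greedy grouping of consecutive assembly operations under the reference time. The sketch below is our own heuristic, not the authors' method, and the operation times are invented for illustration:

```python
def assign_workplaces(operations, takt):
    """Greedy sketch (our own heuristic): fill each workplace with
    consecutive assembly operations until adding the next operation
    would exceed the reference time (takt) of the assembly process.
    operations: list of (part_name, operation_time) in sequence order."""
    workplaces, current, total = [], [], 0.0
    for part, t in operations:
        if current and total + t > takt:
            workplaces.append(current)   # close the full workplace
            current, total = [], 0.0
        current.append(part)
        total += t
    if current:
        workplaces.append(current)
    return workplaces

# Invented times (seconds) for part of assembly sequence 1
ops = [("Foam B", 10), ("Tube B", 25), ("Sleeve B2", 15),
       ("Kneecap B", 20), ("Sleeve B1", 30)]
```

Calling `assign_workplaces(ops, takt=50)` groups the operations into workplaces whose total times stay at or under the 50-second reference.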
126
S. Gomes, F. Demoly, M. Mahdjoub and J.–C. Sagot

4. Conclusion and Perspectives
To be competitive, companies must implement a design process and methodologies suited to their needs and constraints. We chose to examine our methodology here through several modules: CAD and PDM tools have enabled us to achieve our aim of incorporating downstream tasks, particularly assembly processes, earlier in the design phase. Our experimentation has been limited to semi-automated generation of assembly process sequences. However, this method is our first research experiment and will be a springboard for our future work.

It will be interesting to open up perspectives for PDM tool development integrating professional knowledge, resulting in a tool which can carry out the definition of the product-process couple throughout its lifecycle and exploit design information for assembly and vice versa. This tool will have to take into account professional aspects (Design For Assembly - DFA, Design For Manufacturing - DFM, etc.), systemic aspects (economic, information, production, etc.), disciplinary aspects (mechanical, electronics, computer sciences, etc.) and technological, epistemological and ontological aspects, while at the same time managing professional processes. We will place importance on drawing up a methodology which can be used for product families, in order to provide a complete solution to a defined context.

This paper represents a first step towards a method of semi-automatic process sequence generation based on information coming from CAD, but also from professional rules (DFA) which will have to be evaluated. This approach will be necessary in order to select automatically, and simulate in a CAD environment, the relevant and optimal assembly sequence.
5. References
[1] Aberdeen Group, Inc., (2006) Report on: The PLM for Small Mid-size Manufacturers.
[2] Gao J.X., Bowland N.W., (2002) A Product Data Management Product Configuration and Assembly Process Planning environment. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, Vol.216, Nb.3, p.407–418.
[3] Gomes S. and Sagot J-C., (2002) A concurrent engineering experience based on a cooperative and object oriented design methodology. Best papers book, 3rd International Conference on Integrated Design and Manufacturing in Mechanical Engineering, Kluwer Academic Publishers, p.11–18.
[4] Suh N.P., (2001) Axiomatic design: advances and applications, Oxford Univ. Press.
[5] Jared G., Limage M., Sherrin I., Swift K., (1994) Geometric Reasoning and Design For Manufacture. Computer Aided Design, vol.7, p.528–536.
[6] Zhang Y.Z., Ni J., Lin Z.Q., Lai X.M., (2002) Automated sequencing and subassembly detection in automobile body assembly planning. Journal of Materials Processing Technology, vol.129, p.490–494.
Design Knowledge for Decision-Making Process in a DFX Product Design Approach
Keqin Wang1, Lionel Roucoules2, Shurong Tong1, Benoît Eynard2, Nada Matta2
1 Northwestern Polytechnical University - China
2 University of Technology of Troyes - France
Abstract Product design is a knowledge-intensive process and involves large quantities of decisions. The efficiency and effectiveness of these decisions depend on the provision of many kinds of related knowledge to designers from different sources throughout the product lifecycle. First, this paper reviews related work on product design knowledge and classifies it into three categories, namely product, process and product support knowledge. Secondly, the knowledge needs of product design decisions are analyzed. The decisions made during the design process have a critical impact both on the design solution obtained and on the design process itself. These decisions are of many different types and are made by different kinds of engineers and managers. This work illustrates decision-making in product design and reviews some related work. Decision-making during the product design process is analyzed and classified into two main categories, namely organizational and technical decisions, which are illustrated in a three-dimensional figure. Finally, some conclusions and future work are discussed.

Keywords: DFX, product design, knowledge management, decision making process

1. Introduction
The activity of engineering design can be defined to encompass a variety of activities (i.e. the DFX concept) and is also a knowledge-intensive activity. The process of engineering design is an analytic step used to improve the quality of the final product and to reduce the time and resources needed for the final production [1]. The role and importance of knowledge within engineering, and in product design in particular, has become a major factor in ensuring that a company gains competitive advantage. Within forward-looking organizations, information transfer and knowledge usage by people in the design team have become crucial to the ultimate success of a product's introduction to the market, and a fundamental paradigm is that of the relationships among the design team members. The supply of information has become a continually growing domain in its own right [2].
128
K. Wang, L. Roucoules, S. Tong, B. Eynard and N. Matta
Thus there is an overwhelming need to provide design decisions with enough knowledge support throughout the design process. The knowledge used comes from a variety of sources, from within the company as well as from outside, from work-related as well as non-work-related events, and may exist in many areas including ergonomics, packaging, management, manufacturing processes and so on [3]. As products become more complex and competition intensifies, it is essential to make maximum use of the available product design knowledge and to deliver that knowledge in the appropriate form to the right point at the right time during the product development process. The efficient delivery of information to design team members is therefore of vital importance to the overall success of a product design [4]. Although there has been some research on product design knowledge, such as empirical studies on knowledge needs (or requests), knowledge search, knowledge flow, etc., it is not yet clear how many kinds of knowledge there are and who uses them. This work tries to understand the different kinds of knowledge and to clarify the different knowledge needs of different people in product design, and the needs of the decision-making process. This paper is organized as follows. Section 2 reviews related work on product design knowledge and classifies it into several categories. Section 3 analyzes the knowledge needs of product design decisions. Section 4 introduces a proposal for acquiring manufacturing knowledge and retrieving the right knowledge during the decision-making process via a specific DFM design process and product modelling. Section 5 concludes with a discussion and further research directions.
2. Product Design Knowledge

2.1 Design Knowledge Identification
Design knowledge can improve the quality of design decisions [5]. Increasing design knowledge and supporting designers in making right and intelligent decisions can improve design efficiency. Many proposed classifications of engineering design knowledge can be found in the literature [7-9]. The classification by Vincenti [7] includes six categories: fundamental design concepts, criteria and specifications, theoretical tools, quantitative data, practical considerations and design instrumentalities. But it does not include the 'design process'. The classification shown in Figure 1 by Zhang [8] reflects this concern by including design process knowledge alongside design activity knowledge (current working knowledge, domain knowledge, general knowledge, and past cases).

Figure 1. Design Knowledge by Zhang [8]
Design Knowledge for Decision-Making Process in a DFX Product Design Approach
129
Ahmed [9] classified product design knowledge along two dimensions: in one dimension, knowledge is divided into process-related and product-related knowledge; in the other, it is divided into information stored externally and knowledge stored internally in human memory (including explicit, implicit, and tacit knowledge).

2.2 Classification of Product Design Knowledge
In this paper we group product design knowledge into three groups: Product Knowledge, Process Knowledge and Product Support Knowledge (see Figure 2). The first two refer only to the knowledge generated and then utilized during the product design process, which runs from the product idea through clarification of the task, conceptual design, embodiment design and detail design to the final product.

Figure 2. Classification of Design Knowledge
Product knowledge is all knowledge related to the product itself. This knowledge is mostly for product design engineers, who focus their efforts on the product itself. As Aziz and Chassapis [10] pointed out, any product-related knowledge can be grouped into four categories: knowledge about components; knowledge about relations between components; constraints on properties of materials involved in part formation; and relations between components and user preferences. Process knowledge is knowledge about the design process. This knowledge is mostly for product design managers, who focus their efforts on the product design process; based on this knowledge, they have to decide what work will be done, when it will be done, and who will do it. Jung et al. [11] defined three types of process knowledge: process template knowledge, process instance knowledge, and process-related knowledge. The last category, Product Support Knowledge, refers to knowledge coming from many different sources located outside the product design process, for example knowledge from marketing, manufacturing, packaging, etc.
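The three-way grouping can be made concrete as a small tagging scheme; the categories follow the paper, while the knowledge items are invented examples:

```python
from enum import Enum

class Category(Enum):
    PRODUCT = "product"            # knowledge about the product itself
    PROCESS = "process"            # knowledge about the design process
    SUPPORT = "product support"    # knowledge from outside the design process

# invented knowledge items tagged with the paper's three categories
knowledge_base = [
    ("tolerance stack-up of housing", Category.PRODUCT),
    ("review gate after embodiment design", Category.PROCESS),
    ("packaging line width limit", Category.SUPPORT),
]

def items_in(category):
    """Return the names of all knowledge items in one category."""
    return [name for name, cat in knowledge_base if cat is category]

print(items_in(Category.SUPPORT))
```

Such tags are what would let a knowledge base route product knowledge to design engineers and process knowledge to design managers, as the paragraph above distinguishes.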
3. Decision-Making in Product Design

3.1 Decision and Design Decision-Making
According to Simon [12], the work of managers, scientists, engineers and lawyers is the work of choosing issues that require attention, setting goals, finding or designing suitable courses of action, and evaluating and choosing among alternative actions. The first three of these activities (fixing agendas, setting goals, and designing actions) are usually called problem solving; the last, evaluating and choosing, is usually called decision making. Design activities involve decision-making [13]; however, we believe that not all design activities are decision activities. When the design task is extremely well-formulated, the design engineer's decision-making process is the solution of an optimization problem; here decision-making is problem-solving. In contrast, when the design task is ill-formulated, design engineers are less able to apply formulaic numerical techniques to "solve the design problem". In these cases, the design engineer's decision-making process is a collection of heuristics that generate and evaluate solutions until a satisfactory one is found [14].

3.2 Classification of Decisions in Design
The decisions made during the design process have a critical impact both on the design solution obtained and on the design process itself [17]. Product development indeed includes many different types of decision-making by engineers and managers. Herrmann and Schmidt [14] divided these decisions into design decisions and development decisions. Design decisions determine the product form and specify the manufacturing processes to be used; they generate information about the product design itself and the requirements that it must satisfy. Development decisions, however, control the progress of the design process. They affect the resources, time, and technologies available to perform development activities, and define which activities should happen, their sequence, and who should perform them. Corresponding to these two types [14], we divide the decisions during the product design process into two types: technical decisions and organizational decisions. Technical decisions focus on the product itself and determine its parameters. Most technical decisions are made by product design engineers. Organizational decisions, however, control the progress of the design process. They define what will be done, when it will be done, and who will do it. Most organizational decisions are made by design managers or people at other company management levels (e.g. the project leader judges the results obtained so far against the resources spent and determines what to do next, how to do it, and who has to do it; the company management, at project milestones, judges the results obtained against the expected business opportunity and determines the future of the design project in a go/no-go decision [16]). Here we analyze design decisions in three dimensions, as shown in Figure 3. Axis X indicates a design process consisting of four phases: clarification of the task,
conceptual design, embodiment design, and detail design [15]. Axis Y indicates the different types of decision-makers: design engineer, design supervisor, design manager and company-level management. Axis Z distinguishes organizational decisions from technical decisions.

Figure 3. A characterization of decisions during the product design process
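The three axes of Figure 3 suggest a simple record for classifying a single decision; the phase and decision-maker lists follow the figure, and the rest is an illustrative sketch:

```python
from dataclasses import dataclass

# axis values taken from Figure 3
PHASES = ["clarification of the task", "conceptual design",
          "embodiment design", "detail design"]
MAKERS = ["design engineer", "design supervisor",
          "design manager", "company management"]

@dataclass
class Decision:
    phase: str        # X axis: design process phase
    maker: str        # Y axis: hierarchical position of the decision-maker
    technical: bool   # Z axis: technical vs organizational

    def kind(self):
        return "technical" if self.technical else "organizational"

d = Decision("conceptual design", "design engineer", technical=True)
print(d.kind(), d.phase)
```

Tagging each recorded decision with these three coordinates is one way a knowledge system could match stored knowledge to the kind of decision being made.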
4. Support for Knowledge Acquisition and Retrieval in a Design Decision-Making Process

The contemporary design process is becoming increasingly knowledge-intensive and collaborative. Knowledge-intensive support becomes more critical in the design process and has been recognized as a key solution towards future competitive advantage in product development. To improve the design process, it is imperative to provide knowledge support and to share design knowledge among distributed designers. Marsh [18] found that information acquisition activities absorb 20-30% of designers' time, and that the majority of information is obtained from personal contacts, who in 78% of cases retrieved it from memory.

Figure 4. Framework of Knowledge Support for Technical Design Decision-making
However, with the increasing turnover of experts in manufacturing companies, knowledge bases have to be established to store more and more knowledge to meet the needs of product designers, especially novice designers. Here we propose a framework which can acquire and extract knowledge to support the technical decisions of product designers; this framework is one part of our research project.
The ultimate objective of our work is the development and implementation of a Design Decision Support System (DDSS), which acts as an aid tool supporting design decisions concerning manufacturing aspects. Figure 4 represents the conceptual framework of the DDSS. In this framework, large quantities of manufacturing data are recorded in a database; through our DM-based knowledge discovery approach (cf. 4.1), manufacturing-related knowledge is discovered and stored in the knowledge base after evaluation and interpretation. Then the right knowledge can be delivered to the right product designers at the right time during the product development process (cf. 4.2).

4.1 Manufacturing Knowledge Acquisition Approach
Acquisition of manufacturing knowledge (specifically, in our work, Manufacturing Quality Information - MQI) is a bottleneck in the proposed system framework. Presently most research on knowledge acquisition concerns the acquisition of ill-structured knowledge such as the experience of experts. Most knowledge acquisition tools and methods depend on experts, knowledge engineers, and man-machine interactive systems. Concerning manufacturing knowledge, first of all its distribution and sources should be made clear. In fact most of it exists as structured information in different information systems all over the enterprise, but this manufacturing knowledge often has a different structure in each information system. So it is important first to acquire the knowledge from the different sources, and secondly to structure it into one unified structure by suitable methods, such as semantic networks, in order to find knowledge to support product design. MQI acquisition in the production process is complicated. Concerning the sources of MQI, some comes from the production line and some from the records of quality inspection (cf. Figure 5). Quality information from the production line should be reorganized according to the different inspection methods and then sent to the different information systems. Acquisition of quality information depends on the method of quality information detection. According to the degree of automation, quality detection can be divided into three categories: automatic detection, semi-automatic detection, and manual detection. An automatic detection system can collect quality information and transmit the analysis result to the control equipment. Semi-automatic detection is done manually, but the transmission and processing are automatic. In manual detection, all the detection equipment is operated by hand, and data processing and the transmission of analysis results are also done manually.
Quality data acquisition can also be divided into two categories, online and offline:

- Online quality data acquisition. This is controlled by computers. Quality data is gathered from the production line and then pre-processed and transferred to the central control computer [19]. The control computer analyzes the data and sends adjustment orders to the production line. All online quality control is operated by computer automatically.
- Offline quality data acquisition. Quality data is not gathered from the production line; it mainly concerns inspection of finished products offline, and orders based on the inspection results are sent to adjust the production line.

In practice, the two kinds of quality data acquisition are used concurrently. Often, in mechanical manufacturing enterprises, some quality data is acquired by computers automatically and some has to be acquired manually. What we have to do is to collect the MQI and integrate it; the collected MQI can then be represented in order to structure knowledge using data mining and other methods.
Figure 5. Framework for MQI acquisition
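Collecting MQI from the two acquisition channels and integrating it into one unified structure can be sketched as a merge of record streams; the field names and records below are invented:

```python
def unify_mqi(online_records, offline_records):
    """Merge online and offline quality records into one unified structure.

    The two channels use different schemas (as different information
    systems do in practice), so each is mapped onto common field names.
    """
    unified = []
    for rec in online_records:        # schema of the line-control system
        unified.append({"part": rec["part"], "value": rec["value"],
                        "source": "online"})
    for rec in offline_records:       # schema of the inspection records
        unified.append({"part": rec["part_id"], "value": rec["measured"],
                        "source": "offline"})
    return unified

online = [{"part": "P1", "value": 9.98}]
offline = [{"part_id": "P2", "measured": 10.02}]
print([r["source"] for r in unify_mqi(online, offline)])
```

A unified record layout like this is what a subsequent data-mining step would consume, regardless of which channel produced each measurement.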
4.2 Knowledge Retrieval in a DFM Product Design Process
Once acquisition has been carried out in the DDSS with respect to the above data-mining process, the knowledge then has to be used in the product design process. As already mentioned, the right knowledge has to be provided at the right time to the right person. The initial work done by some of the authors has therefore been to propose a specific DFM product design process (cf. Figure 6). That knowledge-intensive process, described with IDEF0 and IDEF3 meta-models, is strongly linked to a UML manufacturing knowledge model that supports product modelling. It is then easy to know, with respect to this activity modelling, which "right" knowledge has to be provided to the "right" designer and when ("right time").
Figure 6 depicts the design process (activity A111, "Processes selection according to manufacturable skins", and activity A112, "Processes selection according to manufacturable skeletons") linked to filter selection, databases and a UML model of manufacturing processes.

Figure 6. Information retrieval support
The first implementation of this retrieval system has been based on the following three concepts and related internet-based software technologies (cf. Figure 6):

- Structuring the information according to the IDEF and UML models which represent the DFM activity. An SQL database coupled to DTD (Document Type Definition) mark-ups provided in the XML language is used.
- Editing information through filters. Several filters have currently been identified: filtering the knowledge with respect to the activity ("right time") and filtering the knowledge with respect to the designer ("right person"). PHP requests and XSL (Extensible StyleSheet Language) have been used to filter part of the DTD when editing the XML files (cf. Figure 6). The XSL language can also provide different display format filters (HTML or PDF formats).
- Having the information evolve (modify, add, delete information). This concept is tightly linked to the DM module presented in section 4.1, in order to continuously update the knowledge.

For further information, all details of the DFM process and product modelling are presented in [20]. Those on the knowledge retrieval application are in [21].
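The filter concept (select knowledge by activity, "right time", and by designer, "right person") can be sketched with standard XML tooling; the real system uses XML/DTD with XSL and PHP, the activity names A111 and A112 come from Figure 6, and the element contents and attribute names below are invented:

```python
import xml.etree.ElementTree as ET

# invented knowledge items, tagged with activity and reader role
XML = """<knowledge>
  <item activity="A111" role="designer">process limits for machinable skins</item>
  <item activity="A112" role="designer">skeleton manufacturability rules</item>
  <item activity="A112" role="manager">process cost summary</item>
</knowledge>"""

def filter_items(xml_text, activity, role):
    """Return items matching the current activity ('right time')
    and the reader's role ('right person')."""
    root = ET.fromstring(xml_text)
    return [item.text for item in root.iter("item")
            if item.get("activity") == activity and item.get("role") == role]

print(filter_items(XML, "A112", "designer"))
```

In the actual system this selection is expressed as XSL templates applied to the XML files, but the filtering logic is the same: attribute tests pick out the subset of knowledge relevant to one activity and one person.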
5. Conclusion and Recommendations for Future Works
Product design is a knowledge-intensive process and involves large quantities of decisions. The efficiency and effectiveness of these decisions depend on the provision of many kinds of related knowledge to designers from different sources throughout the lifecycle. This work analyzed and classified design knowledge into three categories, namely product-related, process-related and product support knowledge. The knowledge needs of the different kinds of decisions taken during the different product design stages should be analyzed in depth. A knowledge database needs to be established to store product design knowledge, especially product support knowledge. A design decision support system should then be developed to support design decisions based on that product design knowledge.
6. Acknowledgments
This paper has been supported by National Natural Science Foundation of China (NSFC, Grant 70462066) and the Youth for NPU teachers Scientific and Technological Innovation Foundation.
7. References
[1] Mili F., Shen W., Martinez I., Noel P., Ram M., Zouras E., (2001) Knowledge modeling for design decisions. Artificial Intelligence in Engineering, 15(2), pp. 153-164.
[2] Harrison S.R. and Minneman S.L., (1993) Tools, communication, and the nature of design. In Proceedings of the 9th International Conference on Engineering Design (ICED'93), The Hague, The Netherlands, pp. 351-354.
[3] Blessing L.T.M. and Wallace K.M., (1998) Supporting the knowledge life-cycle. In Knowledge-Intensive CAD, KIC3 Workshop, Tokyo, Japan.
[4] Court A.W., Ullman D.G. and Culley S.J., (1998) A Comparison Between the Provision of Information to Engineering Designers in the UK and the USA. International Journal of Information Management, 18(6), pp. 409-425.
[5] Beheshti R., (1993) Design decisions and uncertainty. Design Studies, 14(1), pp. 85-95.
[6] Frise P.R., Rohrauer G.L., Minaker B.P. and Latenhof W.J., (2003) Identifying the design engineering body of knowledge. In International Conference on Engineering Design (ICED 03), Stockholm, August 19-21.
[7] Vincenti W.G., (1990) What Engineers Know and How They Know It, The Johns Hopkins University Press, Baltimore.
[8] Zhang Y., (1998) Computer-Based Modelling and Management for Current Working Knowledge Evolution Support, PhD Thesis, Strathclyde University, UK.
[9] Ahmed S., Bracewell R. and Kim S., (2005) Engineering knowledge management. In A Symposium in Honour of Ken Wallace, Cambridge, UK, pp. 1-7.
[10] Aziz E.-S.S. and Chassapis C., (2005) A decision-making framework model for design and manufacturing of mechanical transmission system development. Engineering with Computers, 21, pp. 164-176.
[11] Jung J., Choi I. and Song M., (2007) An integration architecture for knowledge management systems and business process management systems. Computers in Industry, 58, pp. 21-34.
[12] Simon H.A. and Associates, (1986) Decision Making and Problem Solving. Report of the Research Briefing Panel on Decision Making and Problem Solving, National Academy Press, Washington, D.C.
[13] Eekels J., (2000) On the fundamentals of engineering design science: the geography of engineering design science - Part 1. Journal of Engineering Design, 11(4), pp. 377-397.
[14] Herrmann J.W. and Schmidt L.C., (2002) Viewing product development as a decision production system. In ASME 2002 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, DETC 2002, Montreal.
[15] Pahl G. and Beitz W., (1996) Engineering Design - A Systematic Approach, Springer-Verlag, London.
[16] Hansen C.T. and Andreasen M.M., (2004) A mapping of design decision-making. In 8th International Design Conference, DESIGN 2004, Vol. 3, pp. 1409-1418.
[17] García-Melón M., Aragonés P., Poveda R. and Zabala J., (2005) Analysis of the decision making processes in NPD projects within innovative companies - an empirical study in the Valencian region (Spain). In International Conference on Engineering Design, ICED 05, Melbourne, August 15-18.
[18] Marsh J.R., (1997) The Capture and Utilisation of Experience in Engineering Design, PhD Thesis, Cambridge University.
[19] Yin C.Y., (1998) Quality Engineering Introduction, pp. 100-109, China Metrology Publishing House, Beijing.
[20] Skander A., Roucoules L., Klein Meyer J.S., (2007) Design and manufacturing interface modelling for manufacturing processes selection and knowledge synthesis in design. International Journal of Advanced Manufacturing Technology, DOI 10.1007/s00170-007-1003-2.
[21] Roucoules L., Skander A., Eynard B., (2004) XML-based knowledge management for DFM. International Journal of Agile Manufacturing, 7(1), pp. 71-76.
Mobile Knowledge Management for Product Life-Cycle Design
Christopher L. Spiteri, Jonathan C. Borg
Department of Industrial and Manufacturing Engineering, University of Malta
Abstract As products become more complex, a significant amount of design knowledge support is required by designers during the design process. However, given that the design process is not confined to a design office, knowledge support must be provided to designers even when they are away from their usual workplace. This provision of knowledge is essential for designers to take sound decisions that span the whole product life-cycle, from the conceptual design stage to the disposal phase. Hence, designers require design tools that support them in considering the consequences of these decisions. With the ever-increasing popularity of mobile devices (such as PDAs, smartphones and pocket PCs), access to such knowledge can be greatly facilitated. The contribution of this paper is the presentation of a mobile Knowledge Management (mKM) system architecture that provides distributed engineering designers situated in mobile work settings with design knowledge support in the form of 'life-cycle consequences' during the design process.

Keywords: mobile knowledge management, distributed environment, product lifecycle
1. Introduction
Engineering design is considered to be a knowledge-intensive activity [1], and managing it is an important concern for the engineering design industry [2]. This can be achieved by adopting a knowledge management (KM) approach to systematically structure expertise and make it more accessible and easily shared. However, the engineering design environment is highly distributed and mobile in nature [3], and as a result engineering designers frequently leave the design office to carry out other tasks and/or activities away from their usual working place [4-6]. This mobility is a detriment to engineering designers, as they occasionally find themselves without the knowledge support required to take the necessary decisions. On the other hand, companies are expected to deliver high-quality products at shorter lead times and lower costs, so designers require adequate support even when away from their usual workplace. With the ever-increasing popularity and computational power of mobile devices (such as PDAs,
smart phones and pocket PCs), sharing and access to knowledge can be greatly facilitated. This paper presents a mobile Knowledge Management (mKM) system architecture that provides product Life-Cycle Consequence (LCC) [7] knowledge support to designers engaged in mobile work. The paper is organized as follows. Section 2 describes the mobility aspect in design, and examines how decision commitments during the design process can lead to unintended consequences. Section 3 presents a life-cycle consequence knowledge approach framework that describes how such life-cycle consequence knowledge is generated during a design solution. Section 4 describes the mKM system architecture and prototype tool developed, whilst in section 5 a small design scenario is given to demonstrate the application of such a mKM tool in design. Section 6 discusses the current benefits and limitations of the prototype, whilst conclusions and future improvements are presented in section 7.
2. Supporting Mobility in Engineering Design
Engineering design is a complex activity, and designers need to be proficient in generating and evaluating various candidate solutions to a design problem [8], even when engaged in mobile work scenarios. This research focuses on the mobility aspect of design, where designers are away from the design office and require knowledge support to take correct design decisions to solve a particular (sub)problem. Depending on the design situation, designers may be:

- Co-located: performing design activities face-to-face;
- Mobile: mainly travelling, wandering or visiting [5], e.g. visiting laboratories, the shopfloor, testing departments or other buildings, or participating in meetings with other designers or with customers;
- Distributed: there are different scales of distribution, from designers separated by different floors of a building to different countries and different time-zones.
In recent years product development has experienced a significant paradigm shift, and the design process is becoming more distributed and collaborative in nature [8]. Moreover, designers find themselves in mobile situations more than ever. From a number of empirical research studies on mobile work, it is evident that most members of a product design team are very often away from their usual workplace [4-6], i.e. they are mobile. Mobility implies that a designer is away from the usual workplace and is constantly moving from place to place. Being distributed therefore does not imply being mobile; however, it is possible to be mobile whilst being part of a design team in a distributed environment. Mobile Knowledge Management (mKM) systems have been developed in various fields [9] to support such mobile workers, regardless of temporal and spatial constraints. However, engineering designers still lack such a tool, and this research work aims at developing a prototype mKM tool that supports designers in taking sound decisions even when engaged in mobile work scenarios.
Mobile Knowledge Management for Product Life-Cycle Design 139
2.1 Life-Cycle Oriented Design Decision Making

A decision maker has to make a choice between a number of alternatives related to a domain, as explained in [7]. These decisions have to be taken irrespective of the design stage (conceptual vs. detail) or of the synthesis viewpoint (e.g. functional or constructional). The selected alternatives, termed decision commitments [7] in this research, are the result of the synthesis decision process. A decision is made to intentionally achieve a desired consequence. However, decision makers do not always know all the consequences of their alternatives [10]. That is, decision commitments can also result in unintended consequences that propagate across various product life-phases (such as manufacturing, use and disposal) [11]. Therefore, knowledge pertaining to the whole life-cycle should be provided to designers from the conceptual design stage onwards, since most of the characteristics of a product depend on decisions taken during the early design stages [12]. This research work aims to provide such knowledge to designers even when they are away from their design office.
3. A 'LCC Knowledge' Approach Framework

This research considers mechanical components, regarded as decomposable systems consisting of a number of reusable elements (i.e. sub-elements), which in turn are composed of simpler elements such as form features, assembly features, material and surface texture. These elements are termed Product Design Elements (PDEs) [7]. Similarly, the life-phases (e.g. design, realisation, use and recycling) forming a mechanical artefact's life involve the re-use of technical systems (e.g. manufacturing or maintenance systems) that realise the relevant transformation effects of each phase. Models of these systems (e.g. a milling machine) can likewise be decomposed into sub-systems (e.g. a workpiece holding device), termed Life-Cycle Phase Elements (LCPEs) [7]. A 'Life-Cycle Consequence Knowledge' approach framework has been adopted from [7] as the basis for the development of the mKM system architecture. The approach framework is based on these fundamental goals:

- Predict consequences that are generated during synthesis, based on the commitments made;
- Provide solution synthesis guidance by providing a means to search for design elements that need to be committed to achieve an intended consequence (e.g. which assembly feature results in a sub-assembly that is easy to disassemble in the disposal phase?);
- Provide pro-active awareness of Life-Cycle Consequences (LCCs) and their source commitments.
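The decomposition into PDEs and LCPEs can be pictured in code. The following is a minimal illustrative sketch only (the class and instance names are our own, not part of the Foresee implementation):

```python
from dataclasses import dataclass, field

@dataclass
class PDE:
    """Product Design Element, e.g. a form feature, assembly feature or material."""
    name: str
    kind: str  # e.g. "form_feature", "assembly_feature", "material"

@dataclass
class LCPE:
    """Life-Cycle Phase Element, e.g. a fabrication or maintenance system."""
    name: str
    phase: str  # e.g. "realisation", "use", "disposal"

@dataclass
class Component:
    """A mechanical component as a decomposable system of reusable elements."""
    name: str
    pdes: list = field(default_factory=list)
    lcpes: list = field(default_factory=list)

# A component is synthesised by committing PDEs and LCPEs to it:
cover = Component("radio_cover")
cover.pdes.append(PDE("screw", "assembly_feature"))
cover.pdes.append(PDE("STYRON", "material"))
cover.lcpes.append(LCPE("injection_moulding_machine", "realisation"))
```

Each commitment added to such a structure is a candidate source of the life-cycle consequences the framework aims to predict.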
The framework is presented in Figure 1, and is based on the following four frames:
140
C. L. Spiteri and J. C. Borg
Figure 1. A LCC Knowledge Approach Framework (the figure shows four frames: 1. Operational Frame; 2. Synthesis Elements Frame, containing the life synthesis element library; 3. Life Modelling Frame; 4. LCC Knowledge Frame, with the FORESEE KICAD system; internet access is required)
Operational frame: The fundamental operating principle of this frame is that a designer is mobile (i.e. away from his design office) and is working on a design sub-problem. This frame assumes that the designer is in possession of a mobile device with wireless access (such as a PDA with wireless access to a Wi-Fi hotspot). Mobile devices considered in this research are mainly Personal Digital Assistants (PDAs), smart phones and pocket PCs, because of their small physical size yet high processing speeds compared to other portable devices such as a tablet PC. In this frame the designer encounters a sub-problem and interacts with a synthesis element library (in the Synthesis Elements Frame) for a set of suitable elements. Synthesis Elements Frame: This frame consists of a synthesis element library from which the designer can search for a set of suitable elements directly from the mobile device. These elements can be Product Design Elements (PDEs), such as form features, assembly features, and material, or Life-Cycle Phase Elements (LCPEs), such as fabrication and assembly systems, encountered during the life of an artefact and that can be reused during synthesis.
Life Modelling Frame: Based on the designer’s intentions, preferences and circumstances, the designer commits such PDEs and LCPEs to evolve the artefact life model. LCC Knowledge Frame: The artefact model, made up of a number of elements, is submitted to a remote KICAD system (an already existing KICAD model named Foresee [7] is used in this research), which analyses the artefact model and LCCs are inferred from consequence inference knowledge. Furthermore, relevant consequence action knowledge infers actions that need to be carried out, such as changes in performance measures of appropriate life-phase metrics to allow designers to reconsider the artefact life behaviour. This knowledge of consequences and their sources is then made available ‘online’ to the designer, and this inferred knowledge is utilised for exploring such LCCs.
4. A Mobile Knowledge Management System Architecture
This section proposes a mobile knowledge management (mKM) system architecture for supporting engineering designers engaged in mobile work. The main purpose of a KM system is to support knowledge management activities [13] and provide the designer with highly valued knowledge [14]. The reasons for adopting KM through collaborative technology include technology advancement, increased professional specialisation, and workforce mobility [15]. The selected domain concerns the conceptual design of thermoplastic components, as these provide a suitable case for artefact life exploration owing to alternative PDEs and LCPEs such as form features, assembly features and fabrication systems.

The first module is the "Current Working Solution" module, where the model is built from reusable synthesis elements. The Model Manipulator allows the designer to add, detail and refine elements of the model, which is concurrently synthesised with PDEs and LCPEs. The designer can also view a compositional hierarchy of the evolving life model on the User Interface through the Model Viewer.

The second module is the "KICAD system" module [7], which provides a means by which captured knowledge can be reused in new design situations. The KICAD system was developed in wxCLIPS® [16], and it consists of a Knowledge Base with LCC knowledge structured using kind_of taxonomies. This structured knowledge base allows relevant knowledge to be revealed and utilised at the right time, together with synthesis decision commitments. This module also contains an Inference Engine, which employs LCC inference knowledge to reveal the LCCs associated with the solution and LCC action knowledge to enable the KICAD tool to take a proactive role in the component design. The detected LCCs are displayed on the Consequence Viewer, where individual LCCs can be viewed independently, revealing details about their meaning, source commitment(s) and guidance for their avoidance.
Performance measures that vary as a result of the inferred consequences can also be seen on the Consequence Viewer. These are estimated through performance mapping knowledge employed by the Multi-X behaviour function. The two sub-modules interacting with the KICAD system are the Library Access, through which the designer can search for synthesis elements
that result in intended consequences, and the Knowledge Manager, which provides utilities to add/modify classes/instances of the synthesis elements stored in the library.

The third module is the "User Interface", through which the designer interacts with the other two modules. The user interface is mainly a web-based interface made up of a number of hypertext pages designed specifically to be viewed on mobile web browsers, using suitable CSS for mobile devices. It incorporates a cross-platform server-side HTML-embedded scripting language (PHP: Hypertext Preprocessor), necessary for producing a dynamic web interface and for interacting with a database that temporarily stores the synthesised model, as well as client-side scripting (JavaScript) that adds functionality and interactivity to the web pages. The user interface is platform-independent, enabling access from different mobile web browsers. The automated process of generating the LCC knowledge from the committed decisions and providing this knowledge back to the designer was realised using Visual Studio .NET® Windows Services.
Figure 2. The mobile knowledge management (mKM) system architecture
5. Design Scenario
To demonstrate the effectiveness and applicability of the mobile Knowledge Management system architecture, this section considers a small design scenario during component synthesis. The scenario considers a designer, away from the design office, engaged in the conceptual design of a component intended to act as a cover for electronic circuitry housed in an enclosure serving as the case of a radio remote control. The requirements known at this early design stage are that 9,000 such covers are required. The cover needs to be non-corrosive,
lightweight, and to allow the case to be opened for servicing if the need arises. This scenario concerns qualitative commitments; as the design progresses, quantitative commitments can also be made, such as the specification of parameter values, which through more specific knowledge can give rise to further consequences. To satisfy the servicing requirement, the designer uses the library access sub-module to search for PDEs that result in a non-permanent bond, giving rise to a decision proposal with a set of feasible options: fastener, screw, snap-fit, pop-rivet, or nuts and bolts. The designer also searches for materials that have a low density and are non-corrosive. This gives rise to a decision proposal with several feasible alternatives: STYRON, ABS, Bakelite or Aluminium. The designer wants to explore committing 'STYRON', and searches for compatible processes related to this material. The system provides a set of technical processes compatible with the selected STYRON material: injection moulding, milling, twist drilling or blow moulding. The designer then starts building the model by making a series of commitments:

- a Styron material with a screw assembly feature;
- an injection moulding process; and
- a quantity of 9,000 components.
Figure 3. mKM Life Synthesis Element Library and proposed set of feasible options
These commitments are submitted to the KICAD system in the form of facts, and the inference engine applies LCC inference knowledge to reveal the consequences associated with these commitments. In this example the LCCs are the following:

- Styron is a non-magnetic material, so there will be difficulties in separating the material during the disposal phase;
- A screw assembly feature results in slow assembly, which in turn influences the 'use phase' and 'realisation phase' performance measures;
- The screw assembly feature also introduces the requirement of an additional hole (with parameters, including hole diameter, depth, angle, x-centre and y-centre, that are still undefined);
- The injection moulding process results in the requirement of a mould, which in turn generates its own LCCs, such as the introduction of parting line defects; a core-pin is also added to the mould tool;
- The minimum economic quantity for injection moulding is 10,000, so the quantity of 9,000 defined earlier is not feasible;
- If the quantity is increased to 10,000, assembly automation will be recommended by the system.
Figure 4. Decision commitments for the material and manufacturing process
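The inference step behind these consequences can be pictured as a toy rule base. The rules below merely paraphrase the consequences listed in this scenario; the actual Foresee knowledge base, written in wxCLIPS, is far richer:

```python
# Toy LCC inference: each rule maps a condition on the commitments
# to a life-cycle consequence, mimicking the KICAD fact/rule cycle.
commitments = {"material": "Styron", "assembly_feature": "screw",
               "process": "injection_moulding", "quantity": 9000}

rules = [
    (lambda c: c["material"] == "Styron",
     "non-magnetic material: separation difficulties in disposal phase"),
    (lambda c: c["assembly_feature"] == "screw",
     "slow assembly: affects use- and realisation-phase performance"),
    (lambda c: c["assembly_feature"] == "screw",
     "additional hole required (diameter, depth, angle, centre undefined)"),
    (lambda c: c["process"] == "injection_moulding",
     "mould required (parting line defects; core-pin added)"),
    (lambda c: c["process"] == "injection_moulding" and c["quantity"] < 10000,
     "quantity below minimum economic quantity for injection moulding"),
]

consequences = [msg for cond, msg in rules if cond(commitments)]
for msg in consequences:
    print("LCC:", msg)
```

Swapping the screw commitment for a snap-fit would fire a different subset of rules, which is exactly the partial-solution comparison described below.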
The designer can select different PDEs, such as a snap-fit instead of a screw as the assembly feature, and such commitments will introduce new LCCs which can be viewed on the consequence viewer. This alternative partial solution avoids certain consequences (such as 'hole required', 'mould requires core', 'vibratory bowl required' and 'weld line defect') but generates others, e.g. 'snap-fit bad for repetitive assembly', 'weak bond' and 'mould requires split cavity'. The designer can then compare the partial solutions and choose the better alternative. The decision of which partial solution to select and proceed with remains entirely under the designer's control.
Figure 5. List of LCCs generated and the Performance Measures matrix
6. Discussion
Engineering design is a knowledge-intensive activity, and designers require knowledge support even when carrying out design activities away from the design workplace. Although various mobile Knowledge Management systems have been developed, none has so far been specifically dedicated to supporting engineering designers. This paper has addressed this issue by presenting an mKM system architecture and a design scenario implemented with a prototype tool. The design scenario has demonstrated how designers can be provided with life-cycle consequence knowledge support for decision making in mobile work settings.
7. Conclusions and Future Work
This paper reported on the development of a mobile knowledge management system architecture. Its application during synthesis at component level, from a constructional viewpoint, is seen as a promising way to proactively assist designers in forecasting intended and unintended consequences spanning the total product life-cycle when away from a knowledge base such as the design office. Design engineers in possession of a mobile device and connected to the Internet can use the developed mKM tool to obtain, in a timely manner, life-cycle consequence knowledge for a solution and be supported with 'life-oriented' component design solutions. Furthermore, the system allows classes/instances of the synthesis elements stored in the library to be added and removed. However, further work needs to be carried out on the prototype tool to enhance the knowledge management aspect of the system, so that dynamic management of classes, instances and parameters becomes possible for the distributed/mobile actors. Dynamic updating and customisation of the LCC knowledge is another important future concern. Future
work also involves the evaluation of such a prototype with practicing engineering designers.
8. References
[1] Blessing L., Wallace K., Supporting the Knowledge Life-Cycle, in Third Workshop on Knowledge Intensive CAD, Japan: Kluwer Academic Publishers, 1998, pp. 21-38.
[2] Ahmed S., Hacker P., Wallace K., The Role of Knowledge and Experience in Engineering Design, in 15th International Conference on Engineering Design (ICED '05), Melbourne, Australia, 2005.
[3] Crowder R., Bracewell R., Hughes G., Kerr M., Knott D., Moss M., Clegg C., Hall W., Wallace K., Waterson P., A future vision for the engineering design environment: a future sociotechnical scenario, in International Conference on Engineering Design (ICED 03), Stockholm, 2003.
[4] Bellotti V., Bly S., Walking away from the desktop computer: distributed collaboration and mobility in a product design team, in Proceedings of the ACM Conference on Computer Supported Cooperative Work, Cambridge, MA: ACM Press, 1996, pp. 209-218.
[5] Kristoffersen S., Ljungberg F., Mobility: From Stationary to Mobile Work, in Planet Internet, K. Braa et al. (eds.), Studentlitteratur: Lund, Sweden, 2000, pp. 137-156.
[6] Spiteri C.L., Borg J.C., Investigating mobile Knowledge Management support in Engineering Design - an empirical study, in NordDesign 2006, Reykjavik, Iceland, 2006, pp. 148-157.
[7] Borg J.C., Design Synthesis for Multi-X - A 'Life-Cycle Consequences Knowledge' Approach, PhD thesis, University of Strathclyde, Glasgow, Scotland, 1999.
[8] Zdrahal Z., Mulholland P., Domingue J., Hatala M., Sharing engineering design knowledge in a distributed environment, Behaviour and Information Technology, 2000, 19(3): pp. 189-200.
[9] Spiteri C.L., Borg J.C., Cachia C., Vella M., Requirements for a mobile knowledge management system in engineering design, in 16th International Conference on Engineering Design (ICED 07), Paris, France, 2007.
[10] March J.G., A Primer on Decision Making: How Decisions Happen, New York: Free Press, 1994.
[11] Borg J.C., Giannini F., Exploiting Integrated 'Product' and 'Life-Phase' Features, in Conference on Feature Modelling and Advanced Design-for-the-Life-Cycle Systems (FEATS 2001), Valenciennes, France: Kluwer Academic Publishers, 2001.
[12] Andreasen M., Duffy A.B., MacCallum K.J., Bowen J., The Design Coordination Framework: key elements for effective product development, in Proceedings of the 1st International Engineering Design Debate, University of Strathclyde, Glasgow, UK: London, Springer-Verlag, 1997, pp. 151-172.
[13] Deng Q., Yu D., An Approach to Integrating Knowledge Management into the Product Development Process, Journal of Knowledge Management Practice, 2006, 7(2).
[14] Fu Q.Y., Chui Y.P., Helander M.G., Knowledge identification and management in product design, Journal of Knowledge Management, 2006, 10(6): pp. 50-63.
[15] Abdullah R., Selamat M., Sahibudin S., Alias R., A Framework for Knowledge Management System Implementation in a Collaborative Environment for Higher Learning Institutions, Journal of Knowledge Management Practice, 2005, 6.
[16] Giarratano J., Riley G., Expert Systems: Principles and Programming, 4th edition, USA: Thomson Course Technology, 2005.
Research on Application of Ontological Information Coding in Information Integration

Junbiao Wang, Bailing Wang, Jianjun Jiang and Shichao Zhang
Northwestern Polytechnical University, 710072, China
Abstract: A technology of ontological information coding was researched and applied to the information integration of manufacturing enterprises. The information resource was organised by classification into "information domains", and the hierarchical structure of the ontological information coding system was constructed by analysing the relationships between sub-domains and information objects. The relationships between key attributes and accessorial attributes were organised by ontology, forming a model of ontological information coding that was validated by the example of "Material". Finally, a framework for information integration based on ontological information coding was designed and its actualising mechanism expounded.

Keywords: Information coding; Ontology; Ontological Information Coding; Information integration
1. Introduction
The main issues in information integration are discrepancies between data information models and semantic discrepancies. As the basis of enterprise data standardisation, information coding has supported information standardisation work effectively, and the coherence and unification of information can be achieved through a unified code system within an enterprise. However, how to use codes in information integration, and how to solve the problems of phraseological models and semantic description, remain valuable research issues. There has been much research on coding and on ontology, both in China and abroad. Abroad, there are the BUOCS parts coding system of the American Boeing Company, the German Opitz coding system, and the KC, KK-1, KK-2 and KK-3 coding systems of Japan. In China, Jiang Jianjun [1] of NWPU devised an object-oriented flexible coding system; there are also the OO-coding system of Pi Changde of Nanjing University of Aeronautics and Astronautics and the coding method based on object characteristics of Qian Xiaoming. All of these are studies of coding methods, and none of them aims at a coding system applied to integration. Most research on ontology applied to integration focuses on using WordNet and HowNet technology for the semantic description of the web, for example the bilingual semantic wordnet of Li Sheng [2] of Harbin Institute of Technology. However,
148
J.B. Wang, B.L. Wang, J.J. Jiang and S.C. Zhang
information integration technology based on ontology in manufacturing enterprises mainly consists of designing a semantic mapping table for the local ontology [3,4]. Indeed, an ontology is the explicit specification of the conceptualisation of a domain [5]. In this paper, therefore, a technology combining coding and ontology is introduced. It uses the standardisation of codes to construct the ontology and uses the ontology to describe the semantics of the codes; in this way, the Ontological Information Coding technology can be applied effectively in the area of information integration.
2. Analysis of the Ontological Coding Object
Information coding, as part of the information standardisation system, is the basis of information integration. The contents of information coding standardisation in a manufacturing enterprise are as follows: first, organising and modelling the enterprise resource system; second, coding the definite information objects. Synthesising both, the definition of the enterprise information code system can be obtained.

Definition 1. An enterprise information classification coding system is a basic standardisation system that covers enterprise information classification and coding identification. It describes the structural model of the enterprise resource and the constitution of the enterprise information coding standardisation. The enterprise resource is organised with the information objects and attributes classified and arranged in this unified enterprise coding standardisation system.

(1) Constructing principles of the system. There are four principles in the construction process: integrality, harmony, openness and expansibility. "Integrality" means the enterprise information classification coding system can fully collect and coordinate the information resource; "Harmony" means the "information domains" are logical and complementary, and the entire structure is harmonious; "Openness" indicates that the resources of the coding system are extensible; and "Expansibility" means that the whole system can be updated without being overthrown.

(2) Constituent elements of the system. The information coding system contains four parts: (i) information sub-domains; (ii) information objects; (iii) information attributes; (iv) information codes. The information sub-domains are classes of information divided by management logic over the whole life-cycle of the operations of manufacturing enterprises. The information objects are organised according to the information in the product line; they are the basis of the information system. The information attributes are qualities or characteristics ascribed to, or inherent in, a specific piece of information; they are used in the production management process. The information codes are determined by the key attributes; they are the ID numbers of the information. The coding principles must be unified, compatible, stable, extensible and concise.
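The four constituent elements can be pictured as a simple nested structure. This is an illustrative sketch only; the object and attribute names anticipate the "Material" example used later in this paper:

```python
# Illustrative organisation of the enterprise information coding system:
# sub-domains -> information objects -> attributes (key vs. accessorial).
# The key attributes determine the information code (its ID number).
coding_system = {
    "Resource": {                      # information sub-domain
        "Material": {                  # information object
            "key_attributes": ["Class", "Sign", "State",
                               "Technic-condition", "Shape"],
            "accessorial_attributes": ["Use", "Supplier", "Stove-No",
                                       "Spec", "Character"],
        },
    },
}

def key_attributes(sub_domain, obj):
    """Return the code-determining attributes of an information object."""
    return coding_system[sub_domain][obj]["key_attributes"]

print(key_attributes("Resource", "Material"))
```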
Application of Ontological Information Coding in Information Integration 149

3. Technology of Ontological Information Coding
Ontological Information Coding uses the property of an ontology as "a formal specification of a shared conceptualization", together with its technique for describing hierarchies of concepts and its support for logical reasoning, to model all parts of the information coding system and to establish a shared understanding of the enterprise information domain.

Definition 2. From the description of Ontological Information Coding, its formal definition is:

OntoCode = {{C}, {N}, {R}, {KA}, {AA}}

where:

(1) C = {c1, c2, c3, ..., cn} is the set of information codes;
(2) N = {n1, n2, n3, ..., nn} is the set of information objects;
(3) R = {{CR}, {AR}} is the set of relationships between information objects and attributes; the information relation is CR and the attribute relation is AR;
(4) KA = {ka1, ka2, ka3, ..., kan} is the set of key attributes, which determine the information codes;
(5) AA = {aa1, aa2, aa3, ..., aan} is the set of accessorial attributes.

Together these make up the Ontological Information Coding.

3.1 Project of Ontological Information Coding
Through the analysis of the ontological coding objects, we identify the three main components as follows: (i) description of the relationships of the resource system structure; (ii) description of the relationships of the information objects; (iii) description of the relationships of the attributes, as shown in Fig. 1.

(1) Description of sub-domain relationships. As part of the resource, the sub-domains are the classification of every kind of information object. Owing to the cooperation in manufacturing and management among different departments, the sub-domains need a specific relationship description. We mark off the concrete sub-domains and build up the organisation model of the information objects, according to the characteristics of the information object organisation and the technique of "information domain" organisation. We then analyse the relationships of the sub-domains to form a relationship sheet.
Figure 1. Overview of Ontological Information Coding

(2) Information object relationships. There are two kinds of relationships between information objects: one is for information objects in the same sub-domain, called the direct relation (DCR); the other is for information objects in different sub-domains, called the indirect relation (ICR).
(3) Attribute relationships. Key attributes (KA) and accessorial attributes (AA) are the characterisation of information. The relationship between the attributes is denoted AR.

In summary, for an aeronautic manufacturing enterprise we mark out eight sub-domains: Product, Manufacturing Technology, Product Quality Guarantee, Resource, Production Management, Manufacture Management, Scientific Research Management, and Environment and Safety. The sub-domain relationships and information object relationships have been studied adequately. In this paper we take the sub-domain "Resource" as an example: we design the "Material" information model, analyse the attribute relationships of material, and finally form the ontological information coding model of "Material".
3.2 Model of Ontological Information Coding
Ontological Information Coding describes the organisation models of the information resource and information objects. In this process, particular attention should be paid to the relationship description, which must be based on professional knowledge. Take "Material" as an example:

Step 1: Since "Material" belongs to the "Resource" sub-domain, we analyse the information object relationships based on professional knowledge and obtain the relationship sheet for "Material", as shown in Table 1.
Table 1. Information relationship sheet of "Material"

Information Object     Relationship
Material/Supplier      Be supplied/Supply
Material/Product       Form/Be formed
Material/Frock         Be processed/Process
Material/Equipment     Use/Be used
Material/Stock         Deposit in/Deposit
Step 2: After analysing the key attributes and accessorial attributes, the information is coded by its key attributes. For example, "Material" is coded by Class, Sign, State, Technic-condition and Shape; its accessorial attributes are Use, Supplier, Stove-No, Spec and Character, as Fig. 2 shows.
Figure 2. Material code structure
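A code of this structure might be assembled as follows. The segment values below are invented for illustration; Fig. 2 defines the actual structure:

```python
# Key attributes determine the code; accessorial attributes are stored
# alongside the record but do not enter the code itself.
KEY_ATTRS = ("Class", "Sign", "State", "Technic-condition", "Shape")

def material_code(attrs):
    """Concatenate the key-attribute segments, in fixed order, into one code."""
    return "-".join(attrs[a] for a in KEY_ATTRS)

# Hypothetical segment values for an aluminium sheet material:
code = material_code({"Class": "AL", "Sign": "2A12", "State": "T4",
                      "Technic-condition": "GB3880", "Shape": "SHT"})
print(code)  # AL-2A12-T4-GB3880-SHT
```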
Step 3: Analyse the relationships between the attributes. For "Material": "Shape" is parallel to "Class", "Spec" lies on "Shape", "Character" lies on "Spec", "Sign" lies on "Class", "Technic-condition" lies on "Sign", "State" lies on "Sign", and "Technic-condition" is parallel to "State", as Table 2 shows.

Table 2. The relationships of the material attributes

Attribute                   Relationship
Sign/Class                  Lie on
State/Sign                  Lie on
State/Technic-condition     Parallel to
Technic-condition/Sign      Lie on
Spec/Shape                  Lie on
Character/Spec              Lie on
Shape/Class                 Parallel to
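Table 2 can be captured as a small relation map, which then allows a tool to trace which attributes a given attribute depends on. This is a sketch only; in the real system this knowledge resides in the ontology:

```python
# "Lie on" = dependency between attributes; "Parallel to" = peer attributes.
LIE_ON = {"Sign": "Class", "State": "Sign", "Technic-condition": "Sign",
          "Spec": "Shape", "Character": "Spec"}
PARALLEL = {("State", "Technic-condition"), ("Shape", "Class")}

def dependency_chain(attr):
    """Follow the 'lie on' links from an attribute up to its root attribute."""
    chain = [attr]
    while chain[-1] in LIE_ON:
        chain.append(LIE_ON[chain[-1]])
    return chain

print(dependency_chain("Character"))  # ['Character', 'Spec', 'Shape']
```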
Step 4: Protégé-2000 is an integrated software tool used by system developers and domain experts to develop knowledge-based systems. Applications developed with Protégé-2000 are used in problem solving and decision making in a particular domain [6]. Furthermore, it offers an API interface, so it is popular with many authoritative institutions for developing ontologies [7,8]. In this paper the ontology model of material was developed with the Protégé tool; an OWL [9,10] segment of it is the following:

...
...
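The authors' OWL listing is elided above. Purely as an illustrative sketch (the namespace, class and property names here are our assumptions, not the authors' actual ontology), a minimal OWL/XML skeleton for the "Material" class and its key attributes could be generated with Python's standard library:

```python
import xml.etree.ElementTree as ET

OWL = "http://www.w3.org/2002/07/owl#"
RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
RDFS = "http://www.w3.org/2000/01/rdf-schema#"
BASE = "http://example.org/material#"  # hypothetical namespace

for prefix, uri in (("owl", OWL), ("rdf", RDF), ("rdfs", RDFS)):
    ET.register_namespace(prefix, uri)

root = ET.Element(f"{{{RDF}}}RDF")

# owl:Class for the "Material" information object
material = ET.SubElement(root, f"{{{OWL}}}Class")
material.set(f"{{{RDF}}}about", BASE + "Material")

# One owl:DatatypeProperty per key attribute of the material code
for attr in ("Class", "Sign", "State", "Technic-condition", "Shape"):
    prop = ET.SubElement(root, f"{{{OWL}}}DatatypeProperty")
    prop.set(f"{{{RDF}}}about", BASE + attr)
    domain = ET.SubElement(prop, f"{{{RDFS}}}domain")
    domain.set(f"{{{RDF}}}resource", BASE + "Material")

owl_xml = ET.tostring(root, encoding="unicode")
print(owl_xml)
```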
4. Application of Ontological Information Coding Technology in Integration

The distributed information resource management model means that a multi-attribute query for information must integrate resources from several databases. An increasing trend in developing (web) information portals is the usage of ontology as a semantic means for describing the information content [11]. The simplest and most basic form of ontology-based information retrieval is concept-based information retrieval [12,13]. For the model of Ontological Information Coding, we design an integration framework and a query mechanism, and then realise the information query and integration application based on middleware.

(1) Model of the integration framework. In the information integration framework based on middleware, the ontology drives the codes circulating in the distributed databases and at the ontology level. The codes index the information so that queries are answered and returned accurately. The entire integration framework has three levels, as Fig. 3 illustrates.
Figure 3. Ontological information coding integration framework
(i) The application system submits a query, and the processor converts it into a format that the ontology can recognise and confirm; (ii) the ontology matches the query against the registered information to obtain the keyword or code, and submits it to the wrapper, which transforms it into the local keyword and then accesses the databases involved; (iii) the databases retrieve the obtained local keywords or codes and return the optimum result. After these three steps, the information needed can be well understood and queried accurately.

(2) Actualising mechanism of the integration framework. For the domain of a specialised manufacturing enterprise, the integrated information differs from that on the Internet. First, the structure of the integrated information in this professional area is simple and does not require profound reasoning. Secondly, the information across the life-cycle of the product can be obtained from nearly every department. Especially once the information codes come into effect, the key attributes of the information are uniform and only the accessorial attributes used differ, which provides a good basis for information integration. On the premise that the information codes are in effect in the enterprise, Ontological Information Coding can work well in the information integration framework, as Fig. 4 shows:
154
J.B. Wang, B.L. Wang, J.J. Jiang and S.C. Zhang
Figure 4. Query mechanism of Ontological Information Coding
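The query flow of Fig. 4 can be illustrated as follows. The registry contents, codes and database names are invented; they merely mimic the keyword → code → local-keyword resolution described in this section:

```python
# Toy version of the ontology-mediated query: a global keyword is matched
# to a code, then each wrapper maps the code to its database-local keyword.
ONTOLOGY_REGISTRY = {"aluminium sheet": "AL-2A12"}   # keyword -> code (invented)
WRAPPERS = {
    "stock_db":    {"AL-2A12": "MAT_AL2A12"},        # code -> local keyword
    "supplier_db": {"AL-2A12": "m-2a12-sheet"},
}

def query(keyword):
    code = ONTOLOGY_REGISTRY.get(keyword)            # step (ii): ontology match
    if code is None:
        return {}
    # step (iii): each wrapper translates the code for its own database
    return {db: local.get(code) for db, local in WRAPPERS.items()}

print(query("aluminium sheet"))
```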
In the Ontological Information Coding integration frame, the pivotal work is the reorganization and disassembly of the query by the ontology. The following illustrations are for the designing of ontology to satisfy the individuation query of uses. x x
x
5.
The query processor matches the local key words and search words through recognizing key words input. According to the information input, the ontology model orients the source of the information, and queries the attributes in the database with analyzing of the relationship of the key attributes and accessorial attributes and the ratiocination of the ontology, and then catches the exact instance data from the databases by wrapper. The processor returns queried results after transforming their format, and arranges them in priority.
5. Conclusions
In this paper, Ontological Information Coding was researched for aeronautic manufacturing enterprises. Through the definition of the Information Coding System and the description of Ontological Information Coding, we took "Material" as an example. After analyzing the attributes and designing the ontological information coding model, the model was implemented in the integration frame. The conclusions of our work are as follows:

- It introduced a new idea for the application of information coding and ontology.
- Taking material as an example, it designed the process of ontological information coding and modeled it with Protégé.
- It designed an integration frame for the ontological information coding model and elaborated on its implementation mechanism.
Application of Ontological Information Coding in Information Integration
155
As practice has shown, this technology realizes the normative organization and representation of information resources very well. It affords an effective means of information integration across the product lifecycle. Compared with other manufacturing enterprises, the manufacturing processes and product units of aeronautic enterprises are much more complicated, which makes their information integration harder. The successful application of this model in aeronautic manufacturing suggests its feasibility in other domains.
6. References
[1] Jiang Jianjun. An Effective Method Based on Granularity-Structure for Organizing and Utilizing Manufacturing Information Resource. Journal of Northwestern Polytechnical University, 2007, 25(2), 245-250.
[2] Li Sheng. Building Bi-lingual Semantic Lexicons Based on WordNet and HowNet. High Technology Letters, 2001.
[3] T. R. Gruber. A translation approach to portable ontologies. Knowledge Acquisition, 1993, 5(2), 199-220.
[4] T. R. Gruber. Toward Principles for the Design of Ontologies Used for Knowledge Sharing. Revision: August 23, 1993.
[5] N. Guarino, P. Giaretta. Ontologies and knowledge bases: towards a terminological clarification. In: N. Mars (Ed.), Towards Very Large Knowledge Bases: Knowledge Building and Knowledge Sharing, IOS Press, Amsterdam, 1995, 25-32.
[6] http://protege.stanford.edu, 2007.08.
[7] R. Neches, T. Finin, T. Gruber, R. Patil, T. Senator, W. R. Swartout. Enabling Technology for Knowledge Sharing. AI Magazine, 1991, 36-56.
[8] http://bbs.w3china.org/index.asp, 2007.08.
[9] http://www.ontoweb.org/, 2007.08.
[10] G. Antoniou, F. van Harmelen. Web ontology language: OWL. In: S. Staab, R. Studer (Eds.), Handbook on Ontologies in Information Systems, Springer-Verlag, 2003, 67-92.
[11] N. Guarino, C. Masolo, G. Vetere. OntoSeek: content-based access to the web. IEEE Intelligent Systems, 14(3), 1999, 70-80.
[12] Y. Tzitzikas. Collaborative Ontology-based Information Indexing and Retrieval. Doctoral Dissertation, Department of Computer Science, University of Crete, Heraklion, 2002.
[13] H. Stuckenschmidt, F. van Harmelen. Information Sharing on the Semantic Web. Springer, 2004.
RoHS Compliance Declaration Based on RCP and XML Database Chuan Hong Zhou1, Benoît Eynard2, Lionel Roucoules3, Guillaume Ducellier3 1
Université de Technologie de Troyes, Laboratory of Mechanical Systems and Concurrent Engineering, BP2060, F.10010 Troyes Cedex, France; and formerly, Associate Professor, CIMS & Robot Center of Shanghai University, Shanghai, China, 200072 2 Université de Technologie de Compiègne, Department of Mechanical Systems Engineering, BP 60319, F.60203 Compiègne Cedex, France 3 Université de Technologie de Troyes, Laboratory of Mechanical Systems and Concurrent Engineering, BP2060, F.10010 Troyes Cedex, France
Abstract At present, more and more companies are adapting themselves to RoHS compliance and enforcement through IT technology. This paper presents an integration framework for RoHS Compliance Declaration based on RCP (Rich Client Platform) and an XML database. It uses SOAP technology for flexibility in complicated network environments and across multiple operating systems, and makes use of XML technology, XUL (XML User Interface Language) and an XML database to achieve reuse and robustness. Keywords: RoHS Compliance Declaration, RCP, XUL, XML, XML database, SOAP
1. Introduction
With rising awareness and understanding of the harmful effects of specific substances, it has become necessary to control and reduce the amounts of waste generated by used and discarded products. Laws have therefore been enacted to limit the hazardous substances contained in products, among which RoHS is the best known for electrical and electronic equipment [1-4]. RoHS is intended to limit the use of substances such as lead and cadmium and is one of several rules forcing manufacturers to take more environmental responsibility. Similar codes have also been created in China and South Korea [5]. For manufacturers, detailed data on the material composition of each component will become increasingly important. Additional changes from new legislation and from RoHS-like directives in other regions will force manufacturers to closely monitor the substances contained in each part. Nevertheless, many manufacturers are still far from complying with RoHS, with collecting and managing the data efficiently remaining the biggest hurdle. On the basis of investigations by well-known
consulting corporations, collecting and managing component and materials data are proving to be the stumbling blocks for large manufacturers [6].
Figure 1. RoHS compliance Process
An overall RoHS compliance process [6-8] is summarized in Figure 1. The RoHS compliance process is centered on materials databases. Firstly, suppliers submit material information, which must be validated before being stored in the materials databases. Secondly, the product BOM information is extracted from the PLM system, and a new BOM, the Compliance BOM, is built by joining materials information from the materials databases. Finally, the RoHS compliance process is generally optimized by Life Cycle Assessment (LCA) and other related technologies. In this paper, we focus on a solution for RoHS Compliance Declaration (dashed rectangle in Figure 1). At present, there are two main problems in RoHS Compliance Declaration [6][9]:
- There is difficulty in finding and verifying data for thousands of parts from suppliers across the world.
- Reporting to various states, countries, and legislative bodies requires multiple languages and formats.

Information technology has evolved at an extremely rapid pace over the last few decades. In this paper, we present an intelligent, scalable framework for RoHS Compliance Declaration based on the development of synergistic technologies such as RCP [10], XML, XUL [11], SOAP [12] and XML databases [13], which can deal with the above problems.
The remainder of the paper is organized as follows. First, we review some of the related work; this is followed by a brief background on several key impacts of RoHS compliance in Section 2. In Section 3, we present an overview of the integration framework, followed by RCP, XUL, SOAP, and the XML database. In Section 4, we describe our prototype implementation and summarize our ongoing evaluation of the framework in the context of a case study. Finally, concluding remarks along with future research directions are given in Section 5.
2. Background
To the best of our knowledge, RoHS Compliance Declaration has not been previously studied in the literature. However, some methods of integration based on XML, and the selection of hazardous substance and recyclable content specifications for components, have been discussed. From the RoHS compliance analysis aspect, Nikhil Joshi and Debasish Dutta have presented a new approach to account for regulatory requirements early in the design phase, with the aim of reducing downstream costs of compliance [7]. IPC 1752 [14] is the standard for the exchange of materials declaration data, developed by a group of OEMs, EMS providers, component manufacturers, circuit board manufacturers, materials suppliers, information technology solution providers, and the National Institute of Standards and Technology. Though IPC 1752 is free and comes in easy-to-use standard forms (PDF format), it is lacking as a total solution for RoHS Compliance Declaration. XML, being structured, platform- and language-independent, highly extensible and Web-enabled, has rapidly become an emerging standard for representing data exchanged between diverse applications [15]. Incidentally, IPC 1752 is based on an XML schema to allow electronic data exchange across the web by XML file [16]. An XML database is a data persistence software system that allows data to be imported, accessed and exported in the XML format. XML databases serve in a complementary role to traditional databases, especially as XML becomes prevalent [17].
3. RoHS Compliance Declaration
3.1 Framework
Figure 2. Information framework architecture
In this section, we present a framework that realizes RoHS Compliance Declaration based on RCP and an XML database. The information framework has a complicated structure, illustrated in Figure 2 (where numbers indicate the sequence of steps). These steps are explained below:

1. Publish the RoHS Compliance Declaration RCP. RCP is a new technology used in software development, which will be discussed in Section 3.2. First, the IT administrator publishes a software package for the RoHS Compliance Declaration RCP on his enterprise's website.
2. As users, compliance engineers install the RoHS Compliance Declaration RCP and start to run it.
3. Using XUL technology, a RoHS Compliance Declaration Form can be associated with a XUL file, which will be explained in Section 3.2.
4. We utilize XML mapping features to connect this XUL file and an XML file, which will be explained in detail in Section 3.2.
5. Using the SOAP protocol, the RoHS Compliance Declaration XML is transferred to the XML database, which will be explained in Section 3.3.
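The mapping and serialization steps (form fields to an XML declaration document ready for transfer) can be illustrated with a minimal sketch. The element names below are invented for illustration and do not follow the actual IPC 1752 schema.

```python
# Sketch: map declaration form fields into an XML document and
# serialize it for transfer. Element names are illustrative only,
# not the real IPC 1752 schema.
import xml.etree.ElementTree as ET

def declaration_to_xml(part_name, substances):
    """Build a hypothetical declaration document from form values."""
    root = ET.Element("RoHSDeclaration")
    ET.SubElement(root, "PartName").text = part_name
    subs = ET.SubElement(root, "Substances")
    for name, ppm in substances.items():
        s = ET.SubElement(subs, "Substance", name=name)
        s.text = str(ppm)
    return ET.tostring(root, encoding="unicode")

xml_doc = declaration_to_xml("ConnectorA", {"Lead": 0, "Cadmium": 0})
print(xml_doc)
```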
3.2 RCP and XUL
3.2.1 RCP
The Java world is dominated by browser-based thin clients. For some applications, such as shopping carts, thin clients are quite adequate. For many applications, though, thin clients suffer from one of two problems:

- lack of user interface richness and responsiveness, or
- complex, unmanageable JavaScript and DHTML.

Java rich clients have traditionally been built using the Swing widget library that comes packaged with the Java Development Kit (JDK). If used with Java Web Start technology, Swing applications can solve both of the shortcomings of thin clients. An alternative to building rich client applications in Swing is to use the open source Eclipse Rich Client Platform (RCP) [18]. Eclipse is an open-source software framework written primarily in Java. In its default form it is a Java IDE, consisting of the Java Development Tools and compiler. Eclipse RCP is the technology with which the industry-standard Eclipse IDE is built. This technology was donated by IBM to the non-profit Eclipse Foundation in 2004. Since then, the use of Eclipse RCP and its corresponding widget toolkit, SWT, has exploded. Powerful commercial, shrink-wrapped software is now being written with Eclipse RCP, as are rich, responsive custom business and scientific applications. Eclipse RCP offers several advantages over Swing for building rich client applications:

- The Eclipse framework is plug-and-play. Eclipse applications consist of an assortment of plug-ins loaded and managed by the OSGi kernel (OSGi, formerly the Open Services Gateway initiative, is an open standards organization founded in March 1999 whose members have specified a Java-based service platform that can be remotely managed). This gives application administrators the power to push new functionality out to users in small deployment units, or even to allow users to select what functionality they want installed. Eclipse is able to load new or updated plug-ins dynamically without even requiring an application restart.
- SWT (Standard Widget Toolkit) widgets are fast. Because most SWT widgets are thin wrappers around the host operating system's native widgets, Eclipse applications have the responsiveness normally found in C/C++ applications.
- Eclipse RCP provides a rich user interface environment. Because the original Eclipse RCP application (the Eclipse IDE) was built to satisfy the needs of a demanding user base (expert Java developers), it evolved a rich set of GUI components and forms.
3.2.2 XUL
The XML User Interface Language (XUL) [11] is a markup language for creating user interfaces. It is part of the Mozilla browser and related applications and is available as part of Gecko (the open source, free software web browser layout engine used in all Mozilla-branded software and its derivatives, including later Netscape releases). It is designed to be portable and is available on all versions of Windows and Macintosh as well as Linux and other Unix flavors. With XUL and other Gecko components, you can create sophisticated applications without special tools. XUL is an XML language, and you can use numerous existing standards, including XSLT, XPath and DOM functions, to manipulate a user interface, all supported directly by Gecko. In fact, XUL is powerful enough that the entire user interface of the Mozilla application is implemented in XUL. WAZAABI [19] is an open source framework which delivers significant benefits for building Eclipse RCP applications, reducing development effort and costs. With Wazaabi, Eclipse RCP UIs are no longer developed using SWT but are described in XML files, using the XUL standard. Thus, it is easy for designers to create UIs and for developers to create rich client components linked to server-side business logic. In this paper, we choose WAZAABI to create the Eclipse RCP application.

3.3 XML Database
In software engineering, an XML database is a data persistence software system that allows data to be imported, accessed and exported in the XML format. Two major classes of XML database exist [20]:

1. XML-enabled database: in an XML-enabled database, the documents are stored in constituent fragments. The XML data is stored in object-relational form, and one can use an XML SQL Utility (XSU) or SQL functions and packages to generate the whole document from its object-relational instances. Representative products include SQL Server 2000 (Microsoft), Oracle (Oracle), DB2 (IBM) and so on.
2. Native XML database: in a native XML database approach [21], XML is not fragmented but rather is stored as a whole in the native XML database. This means that documents are stored, indexed, and retrieved in their original format, with all their content, tags, attributes, entity references, and ordering preserved. Representative products include eXist (Wolfgang Meier), Timber (University of Michigan), Berkeley DB XML (Oracle) and so on.

eXist [22] is an open source database management system entirely built on XML technology, also called a native XML database. Unlike most relational database management systems, eXist uses XQuery, a W3C Recommendation, to manipulate its data. eXist allows software developers to persist XML data without writing extensive middleware, and it follows and extends many W3C XML standards such as XQuery.
In this paper, we choose eXist to store and manage the RoHS Compliance Declaration XML using new XML features in eXist: HTTP SOAP [23] and XQuery.
4. Prototype Implementation
A prototype of the solution for RoHS Compliance Declaration has been implemented. The prototype runs on a PC under MS Windows; its software interface, which is based on RCP, is shown in Figure 3:
Figure 3. Form for RoHS Compliance Declaration
There are four functions in this RoHS compliance RCP prototype system, as shown in Figure 3: "Save to database", "Query by Part/Subpart name", "Export XML to OS" and "Import XML from OS". Given a "Part/Subpart name", the "Query by Part/Subpart name" function gets the XML content from the eXist database and automatically fills in the IPC 1752 Form. Similarly, the export and import functions write an XML file to, or read one from, the local PC.
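The behavior of the four prototype functions can be sketched as follows, with a plain dictionary standing in for the eXist database; the function names, stored strings and file handling are illustrative assumptions, not the prototype's actual code.

```python
# Sketch of the four prototype functions. A dictionary stands in for
# the eXist database; declarations are kept as plain XML strings.
# All names here are illustrative assumptions.

exist_db = {}   # part name -> declaration XML

def save_to_database(part_name, xml_content):
    """'Save to database': store the declaration XML under the part name."""
    exist_db[part_name] = xml_content

def query_by_part_name(part_name):
    """'Query by Part/Subpart name': fetch the XML used to fill the form."""
    return exist_db.get(part_name)

def export_xml_to_os(part_name, path):
    """'Export XML to OS': write the stored declaration to a local file."""
    with open(path, "w") as f:
        f.write(exist_db[part_name])

def import_xml_from_os(part_name, path):
    """'Import XML from OS': read a local file and store it."""
    with open(path) as f:
        save_to_database(part_name, f.read())

save_to_database("ConnectorA", "<RoHSDeclaration>...</RoHSDeclaration>")
print(query_by_part_name("ConnectorA"))
```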
5. Conclusions and Future Work
Over the past several years, manufacturers have faced an increasing array of new environmental mandates from jurisdictions such as the EU, China, Japan, and some U.S. states. The RoHS directive is the best known of these environmental mandates. Many companies are adapting themselves to RoHS compliance by leveraging IT technology. This paper has presented an integration framework for RoHS Compliance Declaration based on RCP and an XML database. After the discussion and study of the above case, the main conclusions are drawn as follows:

- RCP gives this integration framework the combined benefits of a rich client application, a thin client application and Microsoft Office;
- XML technology and XUL validate the content of a RoHS Compliance Declaration XML document as well as its Form, and the XML database stores the RoHS Compliance Declaration XML;
- Flexible application of SOAP and XML gives this integration framework reusability and robustness, making it available for multiple operating systems and complicated network environments.

Future research will focus on a deeper study of RoHS compliance. It includes two aspects: to research integration between PLM systems and RoHS compliance by leveraging PLM technology, and to optimize the RoHS compliance process by Life Cycle Assessment (LCA) and other related technologies.
6. References
[1] Official Journal of the European Communities, 2000. Directive 2000/53/EC of the European Parliament and of the Council of 18 September 2000 on end-of-life vehicles, October.
[2] Official Journal of the European Union, 2002. Directive 2002/96/EC of the European Parliament and of the Council of 27 January 2003 on waste electrical and electronic equipment (WEEE), February.
[3] Official Journal of the European Union, 2003. Directive 2002/95/EC of the European Parliament and of the Council of 27 January 2003 on the restriction of the use of certain hazardous substances in electrical and electronic equipment, February.
[4] U.S. Environmental Protection Agency. RCRA Online. http://www.epa.gov/rcraonline/
[5] National Weights & Measures Laboratory. http://www.rohs.gov.uk/Default.aspx
[6] Eric Karofsky, December 19, 2006. "RoHS: The Data Collection Problem". http://www.amrresearch.com/Content/View.asp?pmillid=19996
[7] Nikhil Joshi and Debasish Dutta, 2006. "Towards Regulatory Compliance through PLM". In Proceedings of the IDETC/CIE 2006 ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Philadelphia, PA, Sep. 10-13, 2006.
[8] Imag Asia Limited, 2005. "Gaining Competitive Advantage through WEEE/RoHS/ELV Regulatory Compliance". MatrixOne Innovation Seminar 2005.
[9] Jim Brown. Environmental Compliance in Electronics: Creating a Successful Strategy. April 2006. Aberdeen Group, Inc.
[10] Paul Cornell, 2007, Microsoft Corporation. Smart Documents Development Overview. http://msdn2.microsoft.com/en-us/library/aa537169(office.11).aspx
[11] Neil Deakin. XUL Tutorial. February 19, 2006. http://docs.huihoo.com/mozilla/xul/xultu/index.html
[12] Newcomer, E. Understanding Web Services: XML, WSDL, SOAP, and UDDI. Addison Wesley Professional, 2002.
[13] S. Dekeyser, J. Hidders. A commit scheduler for XML databases. Proceedings of the Fifth Asia-Pacific Web Conference, 2003, pp. 83-88.
[14] Association Connecting Electronics Industries (IPC). IPC 1752 for Materials Declaration. http://members.ipc.org/committee/drafts/2-18_d_MaterialsDeclarationRequest.asp
[15] W3C. XML Inclusions (XInclude) 1.0, 2002. http://www.w3.org/TR/xinclude/
[16] Association Connecting Electronics Industries (IPC). IPC-1751 Generic Requirements for Declaration Process Management, Version 1.1. February 2007, p. 4.
[17] Masayuki Shoji and Akira Mita. Application of XML database to autonomous configuration control and data transfer for sensor networks in buildings. Proceedings of SPIE Volume 6529, Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2007, Masayoshi Tomizuka, Chung-Bang Yun, Victor Giurgiutiu, Editors, 65291F (Apr. 18, 2007).
[18] The Eclipse Foundation. Rich Client Platform. August 2007. http://wiki.eclipse.org/index.php/Rich_Client_Platform
[19] Olivier Moises, Arnaud Buisine. WAZAABI Overview. 2007. http://wiki.wazaabi.org/index.php/Wazaabi:Overview
[20] Ronald Bourret, 2007. XML Database Products. http://www.rpbourret.com/xml/XMLDatabaseProds.htm#categories
[21] Shalaka Natu and John Mendonca. "Digital Asset Management Using A Native XML Database Implementation". CITC4'03, October 16-18, 2003, Lafayette, Indiana, USA.
[22] eXist Overview. 2007. http://exist.sourceforge.net/index.html
[23] Nayef Abu-Ghazaleh, Michael J. Lewis. Differential Deserialization for Optimized SOAP Performance. Proceedings of the 2005 ACM/IEEE Conference on Supercomputing (SC '05), IEEE Computer Society, November 2005. Conan C. Albrecht. "How clean is the future of SOAP?" Communications of the ACM, Volume 47, Issue 2.
Research on the Optimization Model of Aircraft Structure Design for Cost Shanshan Yao, Fajie Wei School of Economics and Management, Beijing University of Aeronautics and Astronautics Beijing 100083, China [email protected]
Abstract In future aircraft development, cost will be one of the most important factors to be considered. The key technologies of aircraft design for cost include: cost estimation for aircraft, establishment of the cost target and a life cycle cost model, evaluation and trade-off of cost and performance, and multidisciplinary design optimization. A methodology for the development of an optimization model of aircraft structure design for cost is presented. Cost is integrated into the product definition process as an engineering parameter at the early stage of design, which has a major influence on the product life cycle cost. The methodology developed is generic and fundamental in developing causal predictions of manufacturing cost. The design optimization process is achieved by linking manufacturing cost models with structural analysis models through shared design parameters. The optimization process, which consists of single-objective and multi-objective optimization, focuses on direct operation cost (DOC) as a function of acquisition cost and fuel burn. Through comparison and analysis, the trade-off between weight and DOC is considered more reasonable than the traditional optimization for either weight or DOC alone, and is proposed for adoption in airframe structural design. Keywords: DFC (design for cost); key technology; optimization; DOC (direct operation cost)
1. Introduction
The paper presents the key technologies of aircraft Design For Cost (DFC) and a methodology for the development of an optimization model of aircraft structure design for cost. The early stage of design is vital for new aircraft development and cost reduction, as it largely decides the ultimate configuration and layout and influences 70%~80% of the overall cost. The United States took the lead in putting forward CAIV (Cost As an Independent Variable) in weapon development. Zhang Hengxi [1] put forward active cost control in the 1980s. Since the 1990s, weapons and equipment have become technology-intensive, multidisciplinary and complicated, and their costs have increased sharply. For a
long time, a conventional pricing mode ("cost + cost x 5%") was applied to fix the price of Government Issue (G.I.) equipment, which is invalid under a market mechanism. With the reform of the Chinese economic system and the worldwide trend of decreasing military expenditure, cost will become one of the most important factors to be considered in China's future aircraft development.
2. Design For Cost (DFC)
2.1 Main Contents of DFC
Design for Cost (DFC) is a branch of DFX (Design for X) in concurrent engineering. The major idea of DFC is to reduce cost as much as possible through analysis and research on the cost drivers during the life cycle (design, processing, manufacturing, assembly, checking, sale, use, maintenance, recycling and disposal), and to modify the parts that bring excess cost, on the premise that the customer is satisfied with the product. Its essence is the establishment of a satisfactory solution based on the trade-off between performance and cost [2][3]. DFC covers the following:

1) The objective of DFC is minimum cost, subject to the performance requirements, with cost a coequal parameter as important as others such as technology, performance, schedule and reliability.
2) DFC is driven by engineering. DFC estimates and evaluates the Life Cycle Cost (LCC) with the support of computer tools, aiming at a successful product from a single design pass.
3) DFC considers cost over the whole product life cycle, establishing an LCC database and a store of cost estimation methodologies. DFC minimizes LCC by designing a high-quality product to reduce use and maintenance costs.
4) DFC must harmonize with other DFX tools, which requires principles and methodologies to reconcile the different evaluation criteria.

2.2 Key Technology About Aircraft DFC
DFC is an important part of DFX, which supports concurrent engineering. Because an aircraft is a fairly complex system, aircraft DFC should be supported by several particular key technologies beyond those for ordinary machines:

1) Estimation methodology for aircraft cost. Choosing a proper estimation method is an important problem. Normally, in concurrent engineering, different methods can be used in different design phases according to the completeness of the available information. Furthermore, there are other factors to consider when choosing the estimation method, e.g. production batch, aircraft type and life.
2) Establishment of the cost target. In China, there is no very precise method to establish the cost target. In practice, the military office establishes the target, considering historical data, according to the performance target, manufacturing
technique, production batch and affordability. The process is rather rough, devoid of a scientific methodology to control the target.
3) Life cycle cost model. It can be applied in different phases to estimate the corresponding cost.
4) Evaluation and trade-off of cost and performance. Reasonable design advice can be put forward with the support of other DFX tools, integrating not only cost drivers but also other evaluation aspects.
5) Multidisciplinary Design Optimization (MDO). This is one of the most active fields of aircraft design. MDO is a methodology which explores and uses the interactions within an engineering system to design complex systems and subsystems [4]. In effect, it provides a theoretical basis and a practical method for life cycle design based on the optimization principle.
3. Cost Model for Optimization of Aircraft Structure
In aerospace, life cycle analysis tends to be associated with military applications, while the commercial sector focuses on Direct Operation Cost (DOC). Optimization of the DOC of aircraft fuselage panels can be achieved by coupling the cost analysis with a structural analysis. DOC is considered in terms of the impact of weight on fuel burn, in addition to the acquisition cost to be borne by the airline operator. In this particular study, the objective function is chosen to reflect both manufacturing cost (through acquisition cost) and weight penalty (through fuel burn), but other objective functions, including maintenance costs or ones linked to pure profitability, could also be defined. The global model comprises a manufacturing cost model, a structural model and an optimization model.

3.1 Manufacturing Cost Model
The manufacturing cost analysis is based on empirical data. The cost modeling methodology for linking manufacturing and design imposes a breakdown of the cost into a number of elements, including material cost, fabrication cost and assembly cost, so that it can be formulated into semi-empirical equations linked to the same design variables as those considered in the structural analysis. A typical aluminum stringer-skin panel is shown in Fig. 1. The generic product families used on a typical stringer-skin panel are the skin panel, and the stringers and frames that support the skin in the longitudinal and lateral directions respectively. In addition, there are cleats at every stringer-frame junction, and rivets that are used to fasten the structures together.
Figure 1. Sketch of the panel
For these families (skin, stringers, frames and cleats), the overall breakdown in the manufacturing cost analysis is summarized through:

$C_{panel} = \sum_{i=1}^{5} C_i = C_{skin} + C_{stringers} + C_{frames} + C_{cleats} + C_{rivets}$    (1)
where $C_{panel}$ is the total cost of the panel and $C_i$ the total cost for family $i$. For each of the part families defined in Eq. (1), semi-empirical equations are established, which include a material cost $C_i^m$ and a labor cost $C_i^l$; the latter corresponds to either a fabrication cost $C_i^f$ or an assembly cost $C_i^a$. Each factor of Eq. (1) is then computed as follows:

$C_i = C_i^m + C_i^l = C_i^m + C_i^f + C_i^a$    (2)
The costing coefficients appearing in the equations are determined empirically on the basis of the drawings and WBS provided by the industrial partner (AVIC I Shenyang Aircraft Industry Co., Ltd). Three types of coefficients are used in the equations: the material coefficient $c_i^m$ (¥/[unit]), the time factor $c_i^l$ (hr/[unit]) and the wage rate per hour $r_i^l$ (¥/hr).

$C_i^m\,[\text{¥}] = M_i^m(L_i, A_i, \ldots)\,[\text{mm}, \text{mm}^2, \ldots] \times c_i^m\,[\text{¥}/\text{mm}, \text{¥}/\text{mm}^2, \ldots]$    (3)

$C_i^f\,[\text{¥}] = r_i^f\,[\text{¥}/\text{h}] \times u_i^f \times c_i^f\,[\text{h}]$    (4)

$C_i^a\,[\text{¥}] = r_i^a\,[\text{¥}/\text{h}] \times u_i^a \times c_i^a\,[\text{h}]$    (5)
In Eq. (3), the dimension of the material cost coefficient changes according to the dimension of the material cost function $M_i^m$, which depends on various geometric variables such as, for example, the part length $L_i$ or cross-section $A_i$. The most efficient way to measure a labor cost is to use a standard labor time $c$, which can then be multiplied by a utilization factor $u$ and a rate (cost per unit time) $r$ to obtain the final cost, as illustrated by Eqs. (4) and (5).
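A small numerical illustration of the cost structure of Eqs. (2)-(5) can make the coefficient roles concrete; all coefficient values below are invented for illustration and are not the industrial partner's data.

```python
# Worked illustration of Eqs. (2)-(5) for one part family.
# Every numeric value is invented for illustration only.

# Eq. (3): material cost = geometric cost function * material coefficient.
# Here the cost function M_i^m is taken to be simply the part length.
L_i   = 1200.0            # part length, mm
c_i_m = 0.04              # material coefficient, cost per mm
C_i_m = L_i * c_i_m

# Eq. (4): fabrication labor cost = rate * utilization factor * standard time
r_i_f, u_i_f, c_i_f = 80.0, 1.2, 0.5     # cost/hr, factor, hr
C_i_f = r_i_f * u_i_f * c_i_f

# Eq. (5): assembly labor cost, same structure
r_i_a, u_i_a, c_i_a = 60.0, 1.1, 0.3     # cost/hr, factor, hr
C_i_a = r_i_a * u_i_a * c_i_a

# Eq. (2): total cost for the part family
C_i = C_i_m + C_i_f + C_i_a
print(round(C_i_m, 2), round(C_i_f, 2), round(C_i_a, 2), round(C_i, 2))
```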
Figure 2. Cross section of the frame
Fig. 2 shows an example of the frames used to strengthen the panel. The material cost for the frames is computed as a function of the volume. The frames are assumed to be straight and of a "C" cross-section. They are assumed to extend over the width of the panel, so that their length corresponds to the panel width $W$. All frame dimensions are deduced from the drawings. If $t_f$ is the frame thickness, $h_f$ the frame height and $l_f$ the frame flange length, the volume $V_f$ of one "C"-shaped frame is:

$V_f = \left[ (2 l_f + h_f) t_f - 2 t_f^2 \right] W$    (6)

Given $n_{frames}$ the number of frames, $\rho$ the material density and $c_{frames}^m$ (¥/g) the material cost coefficient, the material cost for the frames is computed by:

$C_{frames}^m = n_{frames} \, V_f \, \rho \, c_{frames}^m$    (7)
The frame labor coefficient c_l^frames (hr/hole) is supposed to be directly proportional to the number of lightening holes in the frames, n_holes. If r_frames is the frame labor cost per hour (¥/hr), the total frame labor cost can be calculated as follows:

C_l^frames = n_frames n_holes r_frames c_l^frames   (8)

The costs of the other part families can be calculated in the same way.
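A sketch of Eqs. (6)–(8), reading the “C”-section area as (2l_f + h_f)t_f − 2t_f² with the corner overlap subtracted; the numbers in the usage example are assumptions, not the paper's data:

```python
def frame_volume(l_f, h_f, t_f, W):
    """Eq. (6): volume of one 'C'-section frame spanning the panel width W."""
    return ((2 * l_f + h_f) * t_f - 2 * t_f ** 2) * W

def frames_material_cost(n_frames, V_f, rho, c_m):
    """Eq. (7): frames x volume x density x material cost coefficient (yuan/g)."""
    return n_frames * V_f * rho * c_m

def frames_labor_cost(n_frames, n_holes, rate, c_l):
    """Eq. (8): labor cost driven by the lightening holes in each frame."""
    return n_frames * n_holes * rate * c_l

V_f = frame_volume(20.0, 60.0, 2.0, 1500.0)        # mm^3, assumed dimensions
mat = frames_material_cost(10, V_f, 2.7e-3, 0.06)  # rho in g/mm^3, assumed values
lab = frames_labor_cost(10, 8, 90.0, 0.05)         # 8 holes per frame, assumed
```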
S. Yao and F. Wei

3.2 Life Cycle Cost Model
The cost model can be extended to the life cycle by computing the direct operation cost (DOC), which is associated with the cost of transporting a given weight of aircraft structure over the aircraft’s life span. For commercial transport applications, the DOC is a function of the acquisition cost, fuel burn, maintenance, crew and navigation, and ground services.
DOC = f(acquisition, fuel burn, maintenance, crew and navigation, ground services)   (9)
As this work is concerned with linking and trading off structural efficiency against manufacturing cost, all DOC drivers can be considered fixed apart from the acquisition cost and fuel burn. Acquisition cost is driven by the amortized unit manufacturing cost of the aircraft plus a profit margin, together with the cost of the capital invested. Fuel burn is a function of the specific fuel consumption and the cost of fuel, and can therefore be treated as a function of weight in the current context. The DOC function used for optimization purposes later in this paper is summarized by Eq. (10). Although profit is a more obvious objective function, DOC can be more readily assessed as an objective. It is composed of two terms: the acquisition cost (AC) and the fuel burn (FB), the acquisition cost being the manufacturing cost (MFC) multiplied by a weight factor n.
DOC = FB + AC = FB + n × MFC   (10)

3.3 Structural Model
The structural analysis was idealized as shown in Fig. 3, where b is the stringer pitch, h the stringer height, t the skin thickness and t_s the stringer thickness. The panel could be loaded under uniform compression, with loading intensity p, or under compression combined with a uniform shear flow. The failure modes considered are flexural buckling, local buckling, inter-rivet buckling, and material failure based on the allowable stress of the aluminum alloy material.
Figure 3. Modeling of the panel for structural analysis
No local buckling was permitted, ruling out post-buckled designs.
For flexural buckling the panel is assumed to be simply supported at the frames, and wide enough to ensure that there is no interference between adjacent stringers. Euler’s formula then gives the flexural buckling stress [5]:
σ_F = π² E / (L_F / q_s)²   (11)
where E is the elastic modulus, L_F the frame pitch and q_s the radius of gyration of the stringer with its attached skin.
q_s = √(I_s / A_s)   (12)

where I_s is the section moment of inertia of the stringer with its effective skin (mm⁴) and A_s is the section area of the stringer with its effective skin (mm²). For local buckling, the buckling stress is given by:
σ_L = K_L E (t / b)²   (13)

K_L = π² k_c / [12(1 − ν_e²)]   (14)

For the generic aluminum alloy, the coefficient ν_e lies between 0.3 and 0.33.
K_L can be expressed as a function of the ratios h/b and t_s/t. For the inter-rivet buckling stress the usual empirical formula is used:
σ_R = K_R E (t / r_p)²   (15)
in which K_R = 2.46 for conventional, round-head rivets and r_p is the rivet pitch. In practice flexural buckling rarely occurs; moreover, in this paper the panel is loaded at a relatively small compression, far from the level that would fracture the panel according to engineering experience. Comparing Eqs. (11), (13) and (15), it is easy to estimate that the flexural buckling stress is much greater than the local and inter-rivet buckling stresses, so the structural restriction is based on simultaneous local buckling and inter-rivet buckling. The rivet pitch makes a major contribution to the cost of manufacture, and the formula for the inter-rivet buckling stress σ_R contains the rivet pitch; this arrangement reflects the importance of rivets in this model and is considered more reasonable than that of Curran et al. [6]. Combining Eqs. (13) and (15), a trade-off buckling stress can be given:
σ* = E t² √(K_L K_R) / (b r_p)   (16)
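The relation between Eqs. (13), (15) and (16) — the trade-off stress is the geometric mean of the local and inter-rivet buckling stresses — can be checked numerically (the material and coefficient values below are assumptions for illustration):

```python
import math

E = 71000.0   # MPa, assumed elastic modulus of an aluminum alloy
K_L = 3.6     # assumed local-buckling coefficient (depends on h/b and ts/t)
K_R = 2.46    # coefficient for conventional round-head rivets (Eq. 15)

def sigma_local(t, b):          # Eq. (13)
    return K_L * E * (t / b) ** 2

def sigma_inter_rivet(t, r_p):  # Eq. (15)
    return K_R * E * (t / r_p) ** 2

def sigma_tradeoff(t, b, r_p):  # Eq. (16)
    return E * t ** 2 * math.sqrt(K_L * K_R) / (b * r_p)

t, b, r_p = 1.8, 120.0, 100.0
# sigma*^2 equals the product of the two buckling stresses:
assert abs(sigma_tradeoff(t, b, r_p) ** 2
           - sigma_local(t, b) * sigma_inter_rivet(t, r_p)) < 1e-6
```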
Therefore in the cost-weight optimization procedure, the applied stress is:
σ = p / t̄   (17)

where t̄ is the equivalent (smeared) thickness:

t̄ = A_s / b   (18)

In the optimization the applied stress is not permitted to be greater than σ*.

3.4 Optimization
The goal of the optimization process is to link the structural analysis and the cost analysis together in order to define the design configuration that meets the structural requirements while minimizing the cost to the airline operator. In Eq. (10) the acquisition cost contributes two to four times more than the cost of fuel burn to the DOC. It was shown that a 50% weighting for acquisition cost and a 15% weighting for fuel burn is reasonable for the DOC split for an aircraft [7]. Consequently, the factor n was determined by fitting the cost results for a panel traditionally optimized for weight to the above-mentioned percentages. The fuel burn was taken to be ¥800/kg and a value of n = 2~3.5 was obtained.

The active design variables were chosen to be: stringer pitch b, stringer height h, skin thickness t, stringer thickness t_s and rivet pitch r_p. The frame pitch was not varied and the panel was loaded in pure compression. Two types of optimization for the panel were carried out: single-objective and multi-objective.

Single-objective: the panel was optimized for minimum total weight, minimum material cost, minimum total manufacturing cost and minimum DOC. The total weight and cost of the first optimization (traditionally optimized for weight) were used as the reference for the decrease or increase of the corresponding items in the subsequent optimizations. It was found that the various optimization criteria lead to widely differing panel dimensions (see Table 1). As a result of the minimization of DOC, all dimensions are considerably greater than those of the first (weight) optimum. The key observation is that the optimization tends to drive the design towards fewer, larger stringers with a greater rivet pitch, which is consistent with the impact of riveting on manufacturing cost. Savings according to the different objectives are given in Table 2 (compared with the optimization for weight). Positive values denote reductions relative to the reference panel and negative values indicate an increase. The minimization of material cost and of total manufacturing cost does not show improvements with regard to
DOC. Optimization for DOC shows a substantial improvement in the total DOC. The actual saving of ¥896.70/m² would relate to a rough order-of-magnitude DOC saving of ¥475,000 for the complete barrel section of the fuselage.

Table 1. Panel dimensions after single-objective optimization (W=weight, MC=material cost, MFC=manufacturing cost, DOC=direct operation cost. All dimensions in mm)

Objective   b       h      t      ts     rp
min W       105.6   10.4   1.50   1.61   85.7
min MC      99.8    45.7   0.80   7.23   49.7
min MFC     238.2   16.8   2.05   4.09   166.3
min DOC     150.5   15.2   1.82   2.51   100.0
Table 2. Savings according to the choice of objective (W=weight, MC=material cost, MFC=manufacturing cost, DOC=direct operation cost. All cost savings are in ¥ per m² of panel, weight in kg per m². Negative value indicates an increase)

            Saving in
Objective   MC        MFC       DOC
min MC      12.22     19.22     -2658.96
min MFC     -12.20    512.60    -163.23
min DOC     -50.34    456.12    896.70
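Conceptually, each single-objective search just summarized minimizes one criterion over the design variables. A toy grid-search sketch (the cost and weight expressions below are illustrative stand-ins, not the paper's calibrated models):

```python
import itertools

def weight(b, t):        # kg/m^2: toy smeared-thickness expression, assumed
    return 2.7 * (t + 40.0 * t / b)

def mfc(b, t, r_p):      # yuan/m^2: riveting cost grows with rivet/stringer count, assumed
    return 100.0 * t + 5.0e4 / r_p + 3.0e4 / b

def doc(b, t, r_p, n=3.0, fuel=800.0):
    return n * mfc(b, t, r_p) + fuel * weight(b, t)   # Eq. (10): n*MFC + FB

# minimize DOC over a coarse grid of (b, t, r_p) in mm
best = min(itertools.product([100, 150, 200],
                             [1.0, 1.5, 2.0],
                             [50, 100, 150]),
           key=lambda d: doc(*d))
```

With these stand-in models the search drives b and r_p upward, mirroring the trend towards fewer stringers and a greater rivet pitch reported above.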
Multi-objective: because weight is traditionally the most important indicator for aircraft structure, the optimization was carried out for minimum weight and material cost, minimum weight and manufacturing cost, and minimum weight and DOC. Panel dimensions and savings (compared with the optimization for weight) after multi-objective optimization are shown in Table 3.

Table 3. Panel dimensions and savings after multi-objective optimization (W=weight, MC=material cost, MFC=manufacturing cost, DOC=direct operation cost. All dimensions in mm. All cost savings are in ¥ per m² of panel, weight in kg per m². Negative value indicates an increase)

              Dimensions                              Saving in
Objective     b       h      t      ts     rp       W       DOC
min W & MC    99.8    18.6   0.95   2.11   50.8     -1.88   -150.38
min W & MFC   120.6   14.3   2.02   3.63   152.3    -3.36   -172.54
min W & DOC   136.3   10.8   1.72   2.50   96.3     -0.88   613.66
The dimensions and savings of the combined weight-and-DOC optimization lie between those of the separate weight and DOC optimizations, and the same holds for the other two combinations. The single-objective results suggest that optimization for minimum DOC is an appropriate choice for structural design. However, total weight is a vital influence on all aspects of aircraft performance. The trade-off between weight and DOC is more conservative than either objective alone; nevertheless it keeps the weight relatively low while saving a large amount of direct operation cost, and can be considered the most balanced design.
4. Conclusions

The main finding of this paper is that, motivated by the rapidly increasing cost of weapon systems and the great importance of the early design stage of aircraft, several key technologies are put forward; in particular, an optimization model of aircraft structural design for cost is established by linking manufacturing cost models with structural analysis models through shared design parameters. The optimization process, which comprises single-objective and multi-objective optimization, focuses on Direct Operation Cost (DOC) as a function of acquisition cost and fuel burn. Optimizing simultaneously for minimum weight and direct operation cost gives a result that is more efficient and economical than either single-objective optimization, which may be considered a favorable result.
5. References

[1] H.-X. Zhang, J.-Y. Zhu, J.-L. Guo. “Warplane type development engineering guide”. Beijing: National Defense Industry Press, 2004, p. 49.
[2] X.-C. Chen. “Research on theories and methods of Design for Cost (DFC) in Concurrent Engineering”. Dalian University of Technology, 2000, pp. 67-85.
[3] J. Zhan. “A project cost control model”. Cost Engineering, vol. 40, no. 12, pp. 31-34, 1998.
[4] Z.-Q. Zhu, X.-L. Wang. “Multi-disciplinary optimization and numerical simulation in civil aircraft design”. Acta Aeronautica et Astronautica Sinica, vol. 28, no. 1, pp. 1-13, 2007.
[5] D.-H. Cui. “Handbook of structural stability”. Beijing: Aviation Industry Press, 1996, pp. 13-15.
[6] R. Curran, A. Rothwell and S. Castagne. “A numerical method for cost-weight optimization of stringer-skin panels”. Proc. 45th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics & Materials Conference, AIAA 2004-2018, pp. 13-14, 2004.
[7] E. Labro. “The cost effect of component commonality: a literature review through a management-accounting lens”. Manufacturing & Service Operations Management, vol. 6, no. 4, pp. 358-367, 2004.
Research on the Management of Knowledge in Product Development

Qian-Wang Deng, De-Jie Yu

College of Mechanical & Automotive Engineering, Hunan University, Changsha, China, 410082
Abstract

Knowledge has been recognized as one of the most important resources for the success of product development. An approach to integrating knowledge management into product development processes is proposed. In this approach, the product development process is analyzed through the state-process-resource modeling method. The product development process model provides the context for structuring knowledge and organizing designers. It leads to a user-oriented structure of the knowledge produced and used in product development tasks, and to a process-based informal network of product developers. In addition, the integration between knowledge management processes and product development processes gives not only guidance for managing knowledge activities, but also an optimization of product development process modeling.

Keywords: Knowledge Management; Process Modeling; Product Development
1. Introduction

A product development process covers the whole span from product idea to production preparation, consisting of product planning, product design, and production preparation sub-stages. Generally, the product development process may be regarded as a knowledge-intensive process, and support for knowledge handling is very important for product developers [1].

A popular definition of knowledge derived from computer science rests on the premise that knowledge can be organized into a hierarchy of data, information, and knowledge. Data are facts, images, or sounds only, without contextual meaning. Information is formatted, filtered, and summarized data. When information is contextually applied in a domain, it becomes knowledge. Knowledge is the instincts, ideas, rules, and procedures that guide actions and decisions.

Two types of knowledge, explicit knowledge and tacit knowledge, are generally accepted in the field of knowledge management [2]. Explicit knowledge is the component of knowledge that can be codified. It can be found in the documents of an organization, e.g. reports, articles, manuals, patents, pictures, images, video, sound, software, etc. Tacit knowledge, however, is the internal knowledge that exists in
people’s brains. It is subjective, experience-based knowledge that is highly personal and hard to express in words, sentences, numbers or formulas, making it difficult to communicate to others.

In order to work successfully and to make the right decisions as early as possible, especially when applying computer support with its multiple simulation possibilities, product developers are looking for a knowledge-assistant environment [3]. On the one hand, novice designers need it to help them learn the knowledge about processes, methods, tools, experience, skills, and design strategy, in order to hasten their progress from novice to experienced designer. On the other hand, experienced designers also need it to quickly access information and knowledge from other experts. Knowledge management (KM) is such a multi-disciplinary infrastructure: it provides product developers with knowledge about methods, instruments, and tools, in order to enhance product development performance by providing the right knowledge at the right time to those who need it. Much literature has been contributed to the research of KM [4]. For example, reference [5] introduced the so-called GAMETH framework to locate crucial knowledge for business processes in enterprises. The main objective of this work is to propose an approach to integrating knowledge management into the product development process.
2. Steps of Integrating Knowledge Management into Product Development

Knowledge is embedded in the product development process, while knowledge management fosters the processes of knowledge creation and knowledge sharing within it. An effective and efficient application of knowledge may improve the performance of product development and add value to products. Figure 1 depicts an approach to integrating knowledge management into product development. It consists of the following five steps.

The first step is to analyze product development processes through process modeling. The product development process should be analyzed in order to identify the knowledge existing in the domain of product development, and to examine what knowledge is needed for each activity of a product development process. The goal is to enable context-sensitive storage, more purposeful access to information, and better integration of the process-oriented, day-to-day work of the employee with the knowledge management system. The result of this step is a process model of product development.

The second step is to identify and structure knowledge in product development through the process model. The result of this step is a structure of knowledge in product development. More details are discussed in section 2.2.

The third step is to add people to each process element of the product development process. The result of this step is an informal network of knowledge workers who share similar goals and interests in product development processes. For example, a network of knowledge workers can be organized according to knowledge domains. A knowledge domain is a common area of interest. Since the
individual members of a given knowledge domain can be spread across different departments or locations, it is useful for facilitating knowledge sharing and knowledge creation within an organization. The main objective of building knowledge-worker networks is to cultivate and nurture the organization’s tacit knowledge and to support knowledge sharing between people who have the same interests, not to replicate the existing departments and organizational structures.
Figure 1. Steps of integrating knowledge management into product development (the five steps: product development process modelling; structuring knowledge; organizing people; integrating KM process models into product development process models; analyzing suitable technologies)
The fourth step is to integrate the knowledge management process model into the product development process model. A knowledge management process is a set of activities that make the knowledge life cycle (e.g. knowledge identification, acquisition, access, use, creation, and storage) more effective as well as more efficient. The result of this step is an integrated model of product development processes and knowledge management processes.

The fifth step is to analyze and choose suitable technologies to support the knowledge management activities in the product development process, e.g. groupware, yellow pages, information retrieval tools, content management systems, and knowledge repositories. Two main aspects have to be taken into account in developing knowledge management systems. The first is the provision of assistance for direct, inter-human KM processes, e.g. communication and collaboration. The second is the management of the generation, distribution, access and use of knowledge coded into artifacts (documents, training materials, videos, etc.), e.g. information management.
2.1 Analyzing Product Development Processes through Process Modeling

Knowledge management and product development processes are interlinked. A product development process can therefore be viewed as the nexus around which knowledge is shared, used, and generated. Knowledge-oriented product development process modeling and analysis clarify what knowledge and information sources are required for, or created in, the product development process; what knowledge and information flows occur within and between knowledge-intensive product development processes; and how process-intrinsic parameters influence knowledge and information needs. Knowledge-oriented product development process modeling is a foundational enabler for knowledge management, because it builds the basis for process-oriented knowledge archives and efficient access to such archives. In the context of knowledge management, the purposes of analyzing product development processes through process modeling are:

- To identify, classify and structure knowledge,
- To help understand how product development processes, people, and knowledge are organized in relation to each other, and
- To reveal who needs what knowledge and who possesses what knowledge.
In this section, the state-process-resource modeling method is used to analyze the product development process. The details of state-process-resource models in product development have been discussed in reference [6]. According to the state-process-resource model, there are three kinds of objects: product (state), process (element), and resource. Product development is considered as a system that combines product states, development processes, and resources. The product (state) objects define what a product is in a given state (knowledge of “know-what” and “know-why”), whereas the process (element) and resource objects describe how the product is developed (knowledge of “know-how”).

During a product development process, there are many intermediate-stage results. The results of a product at the intermediate stages are called product states. A product can be modeled as a set of product states, e.g. product requirements, product functions, product principles, product geometries, product structures, and manufacturing process. Each product state is a sub-kind of the product. A product development process is a compilation of process elements that change a product from one state to another, together with the relationships between the process elements. A process element describes an activity in a product development process model. The process elements, consisting of some work-steps, are the basic components in a process model. In this work, approximately 67 generic process elements, which cover all the stages of the product development process, are collected and used as one dimension of the knowledge classification criteria, in order to identify all the tasks or activities in the product development process [6]. These generic process elements are independent of any particular product or corporation.
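A minimal data-model sketch of the state-process-resource view (the class and instance names are illustrative, not taken from [6]):

```python
from dataclasses import dataclass, field

@dataclass
class ProductState:      # "know-what"/"know-why": the product in a given state
    name: str

@dataclass
class Resource:          # part of the "know-how": tools, methods, people
    name: str

@dataclass
class ProcessElement:    # activity changing a product from one state to another
    name: str
    input_state: ProductState
    output_state: ProductState
    resources: list = field(default_factory=list)

requirements = ProductState("product requirements")
functions = ProductState("product functions")
establish = ProcessElement("establish functions", requirements, functions,
                           resources=[Resource("function-modelling tool")])
```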
2.2 Identifying and Structuring Knowledge in Product Development through Process Models

In product development processes, knowledge has to be modeled, appropriately structured, and interlinked to support its flexible integration and its personalized presentation to product developers [7]. The product development process provides the context for knowledge. It leads to a user-oriented structure of the knowledge produced and used in product development tasks [8]. Linking knowledge to the product development process elements helps:

- Analyzing or auditing previous company knowledge corresponding to each element of the product development process model;
- Judging whether the knowledge relating to each element of the product development process is currently scattered around the company;
- Understanding where, when and what knowledge is needed within each element of the product development process;
- Assisting the decision on the suitable format of knowledge representation for each element of the product development process;
- Guiding the indexing of the structure of knowledge in the repository.

Knowledge in a product development process can be divided into product knowledge and the process knowledge of product development. Product knowledge describes the states of a product, such as knowledge about the product market, requirements, functions, behavior principles, structures, specifications, service documents, and instruction manuals. The process knowledge of product development describes how a product is developed and why it is developed that way. Reference [8] gives a process-based structure of knowledge in product development.

2.3 Adding People to Elements of Product Development Processes
Employees are the bearers of knowledge and the users of knowledge management. Since knowledge is intrinsically linked to people, linking employees with each process element of a product development process model can help to identify the people who use or create knowledge in each process element, who possesses what knowledge, and what knowledge is needed by whom in each process element. It shows how people are connected to each process element and to the knowledge content of the product development process. In other words, it helps in auditing tacit knowledge in a product development process, because tacit knowledge exists in people’s brains. In addition, adding people to each process element of the product development process is useful when constructing knowledge communities and expertise-locating systems, the key ways to share and distribute tacit knowledge. The results of the link are informal networks of people who are involved in a common process element of product development (see figure 2(b)). These informal networks of people are process-element oriented. They are different from the traditional topographically focused (or department-oriented) networks of people
(see figure 2(a)). Figures 2(a) and 2(b) depict the same organizational structure in two different forms.

Figure 2. Two types of organization of people: (a) department-oriented (Location/Department #1, #2, #3); (b) process-element-oriented (Process Element #1, #2, #3)
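The re-grouping from the department-oriented view (Figure 2(a)) to the process-element-oriented view (Figure 2(b)) is essentially an inversion of the person-to-element links; a sketch with hypothetical sample data:

```python
from collections import defaultdict

# (employee, department, process elements worked on) -- hypothetical sample data
links = [
    ("Ann",  "Dept #1", ["element #1", "element #2"]),
    ("Bob",  "Dept #2", ["element #2"]),
    ("Chen", "Dept #3", ["element #1", "element #3"]),
]

by_dept = defaultdict(list)       # department-oriented view (Figure 2(a))
for person, dept, _ in links:
    by_dept[dept].append(person)

by_element = defaultdict(list)    # process-element-oriented view (Figure 2(b))
for person, _, elements in links:
    for el in elements:
        by_element[el].append(person)

# "element #2" now groups people drawn from two different departments
```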
2.4 Integrating the Knowledge Management Process into the Product Development Process

In a product development process, there exist two major classes of behaviors. First, there are those behaviors directly associated with the overall product development process. Second, there are those behaviors that support the knowledge management process which enables and improves the product development process. Therefore, the integration of the knowledge management process into the product development process is a dominant recommendation and a pressing research issue [4]. A knowledge management process is a compilation of activities which enable and facilitate the generation, sharing and use of knowledge for the benefit of product development.

Although many models of the knowledge management process exist, these models still have at least two limitations: one is that they do not explicitly model the processors corresponding to the activities of knowledge management; the other is that they lack an explicit integration of a knowledge management process model with a product development process model [6].

Reference [6] distinguishes four roles of knowledge workers: knowledge managers, knowledge engineers, knowledge analysts, and knowledge individuals. A knowledge management process can therefore be modeled from these four perspectives respectively. The activities performed by knowledge managers include developing the KM strategy, planning KM processes/methods, identifying the network of people, and measuring KM. These activities usually run in parallel, and form the center of all other activities (e.g. those performed by other knowledge agents), which are initiated and controlled from here.
Knowledge analysts, a role played by selected, qualified domain experts, are familiar with the tasks in product development. They know what knowledge is needed in the product development process, and they clarify the need for knowledge in product development. They then structure the knowledge about product development. The results of this structuring constitute the fundamentals of knowledge maps and/or knowledge repositories.

The structure of the knowledge in product development also guides knowledge engineers in identifying what knowledge exists in knowledge resources. Knowledge engineers then acquire knowledge from knowledge resources into knowledge management systems. Before being stored in knowledge repositories, the acquired knowledge should be evaluated by knowledge analysts to check whether it is right and/or suitable for product developers dealing with tasks in product development. Knowledge that has been evaluated as “positive” is stored in the knowledge repositories. These activities always run in cycles, and initiate the development of knowledge management systems.

The knowledge stored in knowledge repositories can be re-used by knowledge individuals. During product development processes, knowledge individuals (product developers) identify the knowledge requirements of their tasks. They then search for appropriate knowledge (with the assistance of knowledge management systems) and apply it in the tasks of product development. If new knowledge is created, it can be published (codified). The published knowledge should also be sent to knowledge analysts for evaluation, and the “passed” knowledge is stored in the knowledge repositories for future re-use. These activities form a cycle that supports product development processes.
To realize the integration between the knowledge management process model and the product development process model, a basic rule is followed: the knowledge management process is considered a special kind of sub-process within product development processes. Therefore, similarly to the definition of a product development process, a knowledge management process is a set of KM process elements or sub-processes associated with KM. A sub-process associated with KM, called a KM sub-process, is itself a set of KM process elements or other KM sub-processes. The logical relationships between sub-processes and/or process elements can be grouped into sequence, parallel, alternative, couple, and loop.

Figure 3 illustrates the place of the KM process elements in the product development process model. The knowledge management process is treated as an accompanying sub-process of the other “normal” sub-processes (or process elements) in product development processes. The KM process elements (activities) in the knowledge management processes are likewise considered accompanying activities of other process elements in product development processes; the KM process elements therefore run parallel to the “normal” product development process elements or sub-processes. For example, the generic process element “search for principle” in product development processes is always accompanied by the KM process element “access knowledge” in knowledge management processes; these two activities usually run parallel to each other. The details of the integration model can be found in reference [6].
Figure 3. An example showing the KM process as a sub-process of product development processes (the KM sub-process, comprising sub-processes for the knowledge individual, knowledge analyst, knowledge engineer and knowledge manager, runs alongside the “normal” sub-processes and their process elements)
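The accompanying-activity idea — each “normal” process element running in parallel with a KM activity, e.g. “search for principle” with “access knowledge” — can be sketched as a lookup. Only the first pairing below is taken from the text; the other element names are invented examples:

```python
# Accompanying KM activity for each "normal" product development element.
km_companion = {
    "search for principle": "access knowledge",   # pairing named in the text
    "evaluate concept":     "use knowledge",      # invented example
    "document solution":    "store knowledge",    # invented example
}

def run_element(element):
    """Pair a PD process element with its parallel KM companion activity."""
    km = km_companion.get(element, "identify knowledge need")
    return (element, km)   # in a real workflow engine these would run concurrently
```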
2.5 Choosing Suitable Technologies Supporting the Knowledge Management Activities in the Product Development Process

The two most dominant approaches to deploying KM services are codification and personalization [9]. In a general sense, the codification approach is better suited to situations where work tasks (e.g. tasks of variant design or adaptive design) are similar and existing knowledge assets can be adapted for reuse. The personalization approach is appropriate for situations where work tasks (e.g. tasks of new product development) are fairly unique, and it is difficult to reuse knowledge from task to task without significant modifications.

2.5.1 Codification-oriented Services
Codification-oriented KM services can also be called “people-to-knowledge” services. These services provide facilities for storing knowledge and for accessing (including indexing and searching) the stored knowledge. Technologies that support knowledge storage span from relational database management systems to document management systems. A technology-enabled knowledge store is typically defined by the content and structure of its knowledge. The content refers to the actual knowledge stored. The structure refers to how each “knowledge unit” is specified, the format in which it is represented, the indexing scheme, and how each “knowledge unit” is linked to others. Indexing and classification services are usually facilitated by knowledge maps that define the channels and mechanisms available for knowledge categorization.
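A minimal sketch of such a store — content plus structure (format, index terms, links between knowledge units); the units and index terms are hypothetical:

```python
store = {}

def add_unit(uid, content, fmt, index_terms, links=()):
    """Store one 'knowledge unit' with its format, index terms and links."""
    store[uid] = {"content": content, "format": fmt,
                  "index": set(index_terms), "links": set(links)}

def search(term):
    """Return the ids of all units indexed under a term."""
    return [uid for uid, unit in store.items() if term in unit["index"]]

add_unit("k1", "fixture design checklist", "pdf", ["fixture", "design"])
add_unit("k2", "tolerance stack-up guide", "doc", ["tolerance", "design"],
         links=["k1"])
```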
Research on the Management of Knowledge in Product Development
185
The keyword search provided by most Internet search engines offers a simple and easy way to retrieve documents in which knowledge is embedded. One main problem with keyword search is that not all such documents use the same words to refer to the same concept. Hence metadata, which describe the concepts a document refers to in a controlled vocabulary, are assigned to documents. Metadata thus allow a transition from keyword searching to concept searching, in which search terms are expanded to cover a series of related terms. To improve precision in domain searching by reducing the ambiguity of ordinary words, taxonomy and ontology technologies are used for searching. These technologies organize concepts according to a schema of concepts and the relations between them. They allow knowledge to be retrieved in context, which helps users assess its applicability to the task at hand.

2.5.2 Personalization-oriented Services
Personalization-oriented KM services can also be called "people-to-people" services. These services provide rich, shared, virtual workspaces in which interactions occur between knowledge workers (people) who share a common goal, offering facilities for communication and collaboration between them. Groupware is a basic technology that provides a platform for such communication and collaboration. This is especially the case for distributed and virtual product development project teams, where team members come from multiple organizations and dispersed locations. Common features offered by groupware tools include e-mail, messaging, discussion groups, and information management tools (e.g. calendars, contact lists, and meeting agendas). Common platforms include Intranets, MS Exchange, Novell GroupWise, and Lotus Notes. Videoconferencing and visualization systems and computer-mediated collaboration provide tools for knowledge workers to communicate with each other efficiently; they offer a closer approximation to actual face-to-face interaction, which supports the process of knowledge sharing. Workflow management systems also provide collaborative services for knowledge workers by bringing control to processes that require many people to work on a set of documents.

2.5.3 Some Other Services
Besides the KM services introduced above, there are some other services. General information technologies (IT) provide knowledge workers with tools that improve personal efficiency, e.g. word processors, spreadsheets, and databases. For example, knowledge workers write reports using word-processing, spreadsheet, and presentation software, so that knowledge can be documented and the documents exchanged easily throughout a company.
Specific application systems are optionally customized and designed to solve engineering application tasks. Artificial intelligence (AI) techniques, e.g. expert systems, data mining, and knowledge discovery, can be applied in these systems.
3. Conclusions
The product development process is a knowledge-intensive domain, and knowledge has been recognized as one of the most important resources for the success of product development. In this work, a five-step, process-modeling-based approach for integrating knowledge management into product development was proposed. In this approach, the product development process, the application domain of knowledge management, was analyzed through the state-process-resource process modeling method. The product development process model provides the context for knowledge and can lead to a user-oriented structure for the knowledge produced and used in product development tasks. The integration of knowledge management processes with product development processes provides not only guidance for managing knowledge activities but also a basis for optimizing product development process modeling.
References
[1] Ahmed S, Wallace KM, Blessing LT, (2003) Understanding the differences between how novice and experienced designers approach design tasks. Research in Engineering Design, Volume 14, Issue 1, p.1-11.
[2] Nonaka I, Takeuchi H, (1995) The knowledge-creating company: how Japanese companies create the dynamics of innovation. Oxford University Press, Oxford, UK.
[3] Vajna S, (2006) Integrated Product Development. Lecture notes, Lehrstuhl Maschinenbauinformatik, Institut für Maschinenkonstruktion, Otto-von-Guericke-Universität Magdeburg, Magdeburg.
[4] Mertins K, Heisig P, Vorbeck J, (2003) Knowledge Management: Concepts and Best Practices, 2nd ed. Springer, Berlin Heidelberg New York.
[5] Grundstein M, Rosenthal SC, (2005) Towards a Model for Global Knowledge Management Within the Enterprise (MGKME). In: Proceedings of the IRMA 2005 International Conference, Managing Modern Organizations with Information Technology. San Diego, California, USA.
[6] Deng QW, (2007) A Contribution to the Integration of Knowledge Management into Product Development. PhD thesis, University of Magdeburg, Magdeburg, Germany.
[7] Sure Y, Staab S, Studer R, (2004) On-To-Knowledge Methodology (OTKM). In: Staab S, Studer R (eds.), Handbook on Ontologies. Springer, Berlin, New York.
[8] Deng QW, Yu DJ, (2006) Mapping Knowledge in Product Development through Process Modeling. Journal of Information and Knowledge Management, Vol. 5, No. 3.
[9] Tsui E, (2003) Tracking the Role and Evolution of Commercial Knowledge Management Software. In: Holsapple CW (ed.), Handbook on Knowledge Management, Volume 2. Springer, Berlin Heidelberg New York.
Representing Design Intents for Design Thinking Process Modelling
Jihong Liu, Zhaoyang Sun
School of Mechanical Engineering and Automation, Beihang University, Beijing 100083, China
Abstract Design is the process in which designers use their expertise and experience to acquire design solutions. Most design results can only exhibit what the design is, but cannot reveal how and why an artifact is designed the way it is. Design intent comprises the motivations, rules and reasons behind design activities, and the capture, representation and transmission of design intent are of great significance for the externalization of tacit knowledge. This paper addresses the representation of design intent for design thinking process modeling. The design thinking process model (DTPM) is composed of design intents, process knowledge and operations. The basic elements and categories of design intents are discussed in detail, and eight types of design thinking process segments are expressed by the relationships between design intents and design operations. Examples of an original design process and a routine design process are taken to embody and validate the model.
Keywords: Design thinking process, Design process modeling, Design intent
1. Introduction
Usually, the final outcomes of design processes are technical documents such as drawings, development reports and three-dimensional shape models. These results are not only the embodiments of the design, but also the carriers of technical information and the vehicles for communication among designers or between designers and manufacturing engineers. However, they do not record the design motivations, the intermediate options and solutions, or the reasons behind them. That is, they can only describe what the design is; they do not explicitly reflect how and why the design is done the way it is. Although designers can write down what they think, why they think it, and how they proceed in their engineering notebooks, such descriptions are too casual to be shared and reused. The lack of this information makes it difficult for other users to understand and modify the design. The purpose of this paper is to capture and represent design intent and to construct the design thinking process model (DTPM). Based on design experiments, the elements and classification of design intents are identified, and the formalization of the DTPM is given.
2. Related Work
Related work on the capture and representation of design intent can be divided into three categories: function, feature and design rationale.
Artefact functions are the primary design intents, and design intents are the reflection of functions in a design thinking process. In [1], design intents are defined as the functional requirements provided by customers; by recording the modeling command sequence, the designer's intent can be implicitly captured. Gero [2] constructs the situated function-behaviour-structure framework to represent design in a dynamic and open world. Although exploration of function models helps capture and represent design intent, most results of this research can only be applied in the conceptual design stage.
Features are taken as carriers of design intent, with the expectation that design intent can be captured through feature cognition. Wang [3] develops a scheme for feature-based reference design retrieval to provide designers with easy access to relevant knowledge. Both assembly and joining intents are captured in the form of assembly and joint features in [4]. In the design intent modeling system implemented by Arai et al. [5], the design thinking process is described by intents, operations and design flows. Formally represented features are readable by computers and can easily be shared and communicated. However, because of the limitations of the feature form, features can only represent design intent indirectly and are applicable only in specific domains (such as CAD systems).
Design rationale is the reason behind design thinking or activities, and it can be understood in three facets: argumentation, documentation and communication [6]. An argumentative model, QOC (Questions, Options, and Criteria), built by MacLean et al. [7], stresses the explication of the space of possible designs and the rationale for choosing appropriate designs within that space. Ganeshan proposes a model to represent design intents from the perspective of documentation [8]; the design intents are captured by preserving all the objectives and their evolution tracks. ADD+, developed in [9], assigns intents and beliefs to the user model from heuristic rules applied to the combination of the design dependency graph and the design history. Design rationale models can reflect the nature of design intents; however, they are too abstract to be formally described and understood by computers.
3. Design Thinking Process Modelling
3.1 Design Experiment
The design experiment is an empirical approach to exploring cognitive activities in design [10]. The participants in a design experiment are required to complete a design task using the thinking-aloud (TA) method, i.e. speaking out their thoughts during the design process. Their verbal reports and all design operations are recorded by audio-video equipment, and the verbal data are coded and analyzed by the method of protocol analysis (PA). In this study, design experiments were carried out to investigate and clarify the elements of the design thinking process. The task
is to design a cash deposit mechanism for an Automatic Teller Machine (ATM). The whole experiment is recorded by an audio-video recorder. The detailed experiment procedure includes four steps: explanation of the design task and requirements, progress of the design, visual and verbal recording, and coding and analysis of the data. Based on analysis of the protocols, three categories of cognitive states are recognized: intent, option and solution. Intent reflects what the designer wants to do; e.g., 'I want to design the sorting part first' shows that the designer's intent was to design a sorting mechanism. Option is a candidate design alternative for realizing the intent. Solution is the decision made by analyzing, comparing, evaluating and synthesizing the options. Along with the cognitive states, tacit know-how in the design thinking process, which differs from product knowledge, is extracted. This process knowledge is classified into three categories: rules, criteria and reasons. A rule is the guideline that the participant refers to when generating an option. An example is 'the idler wheel must be designed according to the size of the cash', which indicates that the size of the cash is the rule that the design of an idler wheel must obey. Design constraints are also rules. Criteria are knowledge the designer uses to evaluate the options or solutions, and they may belong to different levels of abstraction; for example, 'aesthetics and economy' and 'the cash can't be mangled' are both criteria. Reasons explain how and why the rules and criteria are applied by the designer in the design thinking process.

3.2 Model of Design Thinking Process
Based on the results of the design experiment, the design thinking process reflects the interaction of design intents, process knowledge and operations. Design intents specify the next objective of the design, propose options for resolving problems and confirm the ultimate solutions. Knowledge in the design thinking process provides the prerequisite conditions for design intents and operations. Operations achieve the solutions through several actions, which make up an operation set. The model of the design thinking process is shown in Figure 1.
Figure 1. Design thinking process model
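A minimal sketch of the interaction shown in Figure 1, using the element codes defined in this section; the attribute names and example texts are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import List

# The six meta-intent categories defined in Section 3.3.
META_INTENT_TYPES = {"Objective", "Generation", "Synthesis",
                     "Evaluation", "Planning", "Communication"}

@dataclass
class Knowledge:            # K: rules (R), criteria (Cr), reasons (Re)
    kind: str               # "Rule" | "Criteria" | "Reason"
    text: str

@dataclass
class Option:               # O: candidate solution; weak until validated
    text: str
    weak: bool = True

@dataclass
class MetaIntent:           # mI: goal at some stage of the design process
    kind: str
    text: str
    knowledge: List[Knowledge] = field(default_factory=list)

rule = Knowledge("Rule", "idler wheel sized according to the cash")
mi = MetaIntent("Generation", "design the cash transfer part", [rule])
opt = Option("use an idler wheel")   # weak: based on experience only
opt.weak = False                     # validated later, so it becomes strong
assert mi.kind in META_INTENT_TYPES
```

The `weak` flag mirrors the weak/strong option distinction introduced below: an option born from assumption starts weak and is promoted only after validation.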
The basic design thinking process model consists of design intents, design process knowledge and design operations. Design intents include meta-intents, options and solutions; design process knowledge is composed of rules, criteria and reasons. The definitions of these concepts are listed as follows (the characters in parentheses are the codes of the corresponding concepts).
Design intent (I) expresses all the mental activities of a designer when he/she is making efforts to solve a design task or problem. Design intent is composed of meta-intents, options and solutions.
Meta-intent (mI) refers to the goal that the designer wants to achieve at some stage of the design process. It represents the motivation of the designer's activities. Meta-intents fall into different categories, which are described in Section 3.3.
Option (O) is a candidate solution generated in the process by which the designer realizes the design intent. An option is usually acquired on the basis of some design rule; sometimes, however, an option is acquired from experience or assumption without explicit, justified evidence, and it is then called a weak option. Once a weak option is evaluated and validated, it becomes a 'strong' one and can be used convincingly.
Solution (S) is the ultimate decision that the designer makes after analyzing and comparing several options against some criteria.
Knowledge (K) is all the knowledge the designer refers to during the design thinking process, comprising rules, criteria and reasons.
Rule (R) is the basis on which the designer proposes options to achieve the design intent. Design rules have different categories and abstraction levels: a rule may be a physical principle, a condition or constraint in the design requirements, or even a formula in a design handbook. In particular, a design rule can be an assumption the designer makes based on expertise and experience; options proposed on the basis of such assumptions are weak options and must subsequently be validated to become strong ones, as mentioned above.
Criteria (Cr) are the standards the designer refers to when choosing a solution from the options. Like design rules, criteria have different categories and abstraction levels; they may be abstract concepts such as aesthetics and economy, or concrete parameter requirements.
Reason (Re) is provided by the designer to describe the causal relationships among different design states, for example why or how options are derived according to the design rules. It can express the mapping between the designer's internal mental factors and the external objective environment. Like a design rule, a reason can be common sense or designer experience.
Operation (Op) is an action whose execution should result in the realization of a design intent, e.g., using CAD software to generate solid models of the products, searching for similar solutions, or communicating with other designers. Sometimes a series of operations is needed to realize a design intent; the collection of all operations the designer performs during a period is called an operation set. Operations express the interaction between the design intent and the external environment. An operation can be absent, which means that the corresponding design intent cannot be, or has not yet been, achieved.

3.3 Categories of Design Intents
The meta-intents of design intents in the design thinking process model are classified into six categories.
Objective Intent (OI) decides what to do next and thereby introduces another intent. As the design process proceeds, the design intent changes from one to another until the design requirements and constraints are satisfied.
Generation Intent (GI) introduces a design operation to generate an option or solution for the current task. The option or solution generated can be at different abstraction levels, e.g., a physical principle or an artifact structure.
Synthesis Intent (SI) introduces a design operation to combine design options or solutions into a complex design option or solution.
Evaluation Intent (EI) introduces a design operation to evaluate the current option or solution itself, or to compare different design options for a solution.
Planning Intent (PI) is the intent to plan a solution in an appropriate order.
Communication Intent (CI) is the intent to exchange design methods or design results with the external environment or other cooperative partners.

3.4 Segments of Design Thinking Process
The design process is driven not only by different design intents but also by different design operations. Operations in the design thinking process model can be divided into three classes. The first is the mental operation, a mental activity of the designer. The second is the actual operation, a real action that the designer executes; selecting a parameter or solving a set of equations, for example, is an actual operation. The third class can be either a mental or an actual operation; optimization, for example, can be an idea of the designer or a specific structural or shape optimization carried out with a CAE tool. Pairs of design intents and design operations constitute the design thinking process, and the pair of a design intent and a design operation is defined as a design thinking process segment. According to the categories of design intents and design operations, eight types of design thinking process segments are considered.
Intent to Intent (Focusing segment): This segment captures the reification of a design intent, i.e. extended design thinking, and induces new objectives or intents. For example, when there are several options to be analyzed, it determines which option should be taken first.
Intent to Option (Analysis segment): This segment generates options for the current intent by analyzing, decomposing and comparing similar intents. For instance, in the design experiment the participant proposed 'The cash can be transferred by a strap, or an idler wheel, or a machine like those used to transfer newspapers in a printing shop', which shows that the intent of 'cash transferring' can be achieved by three options: a strap, an idler wheel, or a transfer machine from the printing shop.
Intent to Solution (Generation segment): This segment generates solutions directly without considering any options, which typically happens in routine design that follows a fixed procedure. It can also be found in the design processes of novices, who have less experience and fewer potential ideas than experts.
Intent to Operation (Communication segment): This segment reflects the communication between the designer and the external design environment. It can
be either the sharing or exchange of design intents with other designers or the documentation of the design thinking process.
Option to Option (Option evaluation segment): This is the evaluation or optimization of options. As the design proceeds, new conditions and constraints are activated or discovered, so the proposed options must be adjusted to satisfy the newly appearing constraints.
Option to Solution (Synthesis segment): The solution of the design task is determined in this segment by selecting one option from the candidates or by synthesizing several or all of the options. This segment focuses on the evaluation and comparison of the options based on a large amount of related information. For example, when the designer decides to use a pair of gears rather than a worm and worm wheel to transfer power, information about both gear design and worm-wheel design is needed.
Solution to Solution (Solution evaluation segment): By comparing solutions against the design criteria, the solutions in this segment are evaluated and optimized, and the final solution is determined.
Solution to Option (Planning segment): That a solution has been decided does not mean the design work is finished. There may be several plans for achieving the solution, so this segment is needed to generate the plan options for the solution.
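These eight pairings can be written down directly as a lookup from pairs of cognitive states to segment names, a minimal sketch:

```python
# Map (from-state, to-state) pairs to the eight segment names from the text.
SEGMENTS = {
    ("Intent", "Intent"):     "Focusing",
    ("Intent", "Option"):     "Analysis",
    ("Intent", "Solution"):   "Generation",
    ("Intent", "Operation"):  "Communication",
    ("Option", "Option"):     "Option evaluation",
    ("Option", "Solution"):   "Synthesis",
    ("Solution", "Solution"): "Solution evaluation",
    ("Solution", "Option"):   "Planning",
}

def classify(src: str, dst: str) -> str:
    """Name the segment type for a transition between cognitive states."""
    return SEGMENTS.get((src, dst), "undefined")

print(classify("Option", "Solution"))   # Synthesis
```

A coded protocol can then be classified segment by segment simply by looking up each consecutive pair of cognitive states.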
Table 1. BNF notation of design thinking process structure and model

Elements of the design thinking process:
  <Design Intent> ::= <Meta-Intent> | <Option> | <Solution>
  <Meta-Intent>   ::= <Intent Type> <Intent Description>
  <Intent Type>   ::= 'Objective' | 'Generation' | 'Synthesis' | 'Evaluation' | 'Planning' | 'Communication'
  <Option>        ::= <Option Description> | <Option Description> <Knowledge>
  <Solution>      ::= <Solution Description> | <Solution Description> <Knowledge>
  <Knowledge>     ::= <Rule> | <Criteria> | <Reason>
  <Operation>     ::= <Operation Description> | <Operation Description> <Operation>

Design thinking process:
  <Process>       ::= <Segment> | <Segment> '→' <Process>
  <Segment>       ::= <Meta-Intent> '→' <Meta-Intent> ['↑' <Knowledge>]
                    | <Meta-Intent> '→' <Option> ['↑' <Knowledge>]
                    | <Meta-Intent> '→' <Solution> ['↑' <Knowledge>]
                    | <Meta-Intent> '→' <Operation> ['↑' <Knowledge>]
                    | <Option> '→' <Option> ['↑' <Knowledge>]
                    | <Option> '→' <Solution> ['↑' <Knowledge>]
                    | <Solution> '→' <Solution> ['↑' <Knowledge>]
                    | <Solution> '→' <Option> ['↑' <Knowledge>]
3.5 Formalization of Design Thinking Process Model
The elements and the model of the design thinking process are formalized in BNF notation as shown in Table 1. The symbol '→' connects different design intent elements in a design thinking process segment, and the symbol '↑' denotes a reference to process knowledge.
4. Illustrative Example
In this chapter, extracts from the original design process of the cash deposit mechanism in the design experiment and from a routine design of a pair of spur gears are taken as examples to embody and validate the DTPM.
Figure 2. The design thinking process model of cash deposit mechanism (portion)
4.1 Design Thinking Process Modeling of Original Design
The extract from the design experiment is modelled in Figure 2, where the designer wants to design a cash deposit mechanism for the ATM. Although three options to achieve this intent are formulated, the one adopted by the designer is to develop a vacuum separation mechanism. This is a generation-evaluation-synthesis DTPM segment, and the rules and criteria for the process are explicitly represented in the model. The intent to design a vacuum separation mechanism is then divided into three sub-intents related by an And relationship: the And link means that all three sub-intents must be satisfied in order to realize the parent intent. After achieving the three sub-intents, the designer synthesizes the solutions into the final design of the vacuum separation mechanism. There are objective-generation and generation-evaluation-synthesis DTPM segments in this process.
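The And-linked decomposition can be sketched as a small intent tree in which a parent intent is realized only when all of its sub-intents are; the structure and texts are illustrative:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Intent:
    text: str
    realized: bool = False
    sub: List["Intent"] = field(default_factory=list)   # And-linked children

    def is_realized(self) -> bool:
        if self.sub:   # And link: every sub-intent must be satisfied
            return all(c.is_realized() for c in self.sub)
        return self.realized

root = Intent("design a vacuum separation mechanism",
              sub=[Intent("sub-intent 1"), Intent("sub-intent 2"),
                   Intent("sub-intent 3")])
root.sub[0].realized = True
assert not root.is_realized()   # not all sub-intents achieved yet
for c in root.sub:
    c.realized = True
assert root.is_realized()       # now the parent intent is realized
```

An Or link, by contrast, would replace `all` with `any`; the paper's example only requires the And case.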
One perceived advantage of the modeling approach is the ease with which the rationale behind a design can be described directly and explicitly. Although this source of knowledge has been noted by other researchers, little of it is represented explicitly together with design intents in design process models. Thus, the design thinking process model can not only tell what the design is (through design intents) and how the design proceeds (through design operations), but also explain why the design is the way it is (through process knowledge).

4.2 Design Thinking Process Modeling of Routine Design
The routine design of a pair of spur gears according to a design specification is analyzed with the DTPM, and portions of the formal expression are shown in Figure 3 and Figure 4. Example I (Figure 3) shows the design thinking process of selecting materials for the pinion and wheel; it includes three types of design thinking process segments. The designer first selects the gear material according to the design conditions and requirements, then specifies the gear sizes, and finally determines the material performance using a table in the handbook. This is a generation-generation-synthesis process. To determine the material performance from the handbook table, the gear diameters must be known, but they are not at that point, so the designer assumes default values for the diameters. These values are weak, however, and must be justified later in the design process. When a later design result contradicts these values, the designer can find the weak points in the design thinking process and modify them conveniently.
Figure 3. Design thinking process model of routine design: example I (portion)
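The assume-then-justify handling of default values in example I can be caricatured as follows; the variable names, values and the 10% tolerance are purely illustrative assumptions:

```python
# A "weak" value is one assumed by the designer and pending justification.
assumptions = {}

def assume(name: str, value: float) -> None:
    assumptions[name] = {"value": value, "weak": True}

def justify(name: str, computed: float) -> bool:
    """A later design result either confirms the assumption or flags it."""
    a = assumptions[name]
    if abs(computed - a["value"]) / computed < 0.10:   # 10% tolerance, assumed
        a["weak"] = False          # assumption justified, value becomes strong
        return True
    return False                   # contradiction: revisit the weak point

assume("pinion_diameter_mm", 60.0)        # default value, not yet computed
ok = justify("pinion_diameter_mm", 62.5)  # later (hypothetical) calculation
print(ok, assumptions["pinion_diameter_mm"]["weak"])   # True False
```

When `justify` returns `False`, the model points the designer straight to the weak assumption that has to be revised, which is exactly the traceability the text claims for the DTPM.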
Example II (Figure 4) represents the design thinking process of deciding the tooth numbers and module of the gears. First, the designer preliminarily selects the tooth number of the pinion from experience and then calculates the tooth number of the wheel with the ratio formula. Based on the assumed tooth numbers and the table of standard modules, the designer obtains a standardized module. Finally, the ultimate tooth numbers are decided using the formula and the standardized module. The process is a
generation-evaluation-generation iteration and shows the way the designer deals with ill-defined problems. Although examples I and II are both DTPM instances of routine design, example I reflects how the DTPM deals with default information, while example II emphasizes its capability for describing assumptions and experience knowledge. These features can help externalize the designer's tacit knowledge, such as expertise and experience, and explain how and why this knowledge is used to reason in the design thinking process.
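The generation-evaluation-generation loop of example II can be caricatured numerically. The pinion tooth number, transmission ratio and abridged module table below are illustrative assumptions, not values from the paper:

```python
# Abridged standard module series (mm) - illustrative values only.
STANDARD_MODULES = [1.5, 2, 2.5, 3, 4, 5]

def standardize(m_calc: float) -> float:
    """Pick the smallest standard module not below the calculated one."""
    return min(m for m in STANDARD_MODULES if m >= m_calc)

z1 = 24                  # pinion teeth, chosen from experience (a weak value)
ratio = 3.2              # required transmission ratio
z2 = round(z1 * ratio)   # wheel teeth from the ratio formula
m = standardize(2.3)     # module from a hypothetical strength calculation

# Evaluation step: re-check the ratio with the integer tooth numbers.
actual_ratio = z2 / z1
print(z2, m, round(actual_ratio, 3))   # 77 2.5 3.208
```

Rounding the tooth number and standardizing the module both perturb the initial assumptions, which is why the evaluation step, and possibly another generation step, must follow.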
Figure 4. Design thinking process model of routine design: example II (portion)
4.3 Discussion
Besides the features of the model discussed above, a notable advantage of this modeling method is its general representation ability across all stages of the design process. Although different design phases have different patterns and abstraction levels, the design intents, justification processes and design operations can be explicitly represented for all of them. Thus, not only can other users understand and modify the original design without hindrance, but novices can also learn to design like experienced designers with the help of the DTPM. With the BNF expression, the model is readable by both people and computers, and it is convenient to develop a computer-aided design support tool based on it; as mentioned above, such a tool could support the whole design process. The categories of intents and segments make it possible to extract process templates for similar design thinking processes. A template describes the generalized design thinking process of one kind of design: it includes the design procedure, solutions and operations, and explains the design intents and the related rules, criteria and justifications. The template can be applied to facilitate design reuse. With its help, users can not only complete new designs quickly but also understand and learn new design approaches; they can likewise modify and optimize the original design, and even innovate, according to current design requirements.
196
J. Liu and Z. Sun
5. Conclusion
The designer's cognitive activities in the design process were investigated through design experiments using the thinking-aloud and protocol analysis methods. Based on the results of the design experiments, the design thinking process model and its constituent elements were clarified: six categories of meta-intents and three types of design process knowledge were distinguished, and eight types of segments in the design thinking process were identified. On this basis, the formalization of the DTPM was developed. It is expected to reveal the essence of design thinking processes distinctly, illuminating in-depth investigation of methods and computer-based tools for supporting creative or innovative design activities. Future work involves the development of a design thinking process description language and a computer-aided design support tool.
6. Acknowledgement
The authors gratefully acknowledge the funding support from the National High-Tech Research and Development Program of China (863 Program), Grant No. 2006AA04Z138. The contribution of Mr. Tiangang Li, who was responsible for arranging and executing the design experiments, is appreciated. Thanks to the anonymous referees for their comments on this paper.
7. References
[1] Mun D, Han S, Kim J, Oh Y, (2003) A set of standard modeling commands for the history-based parametric approach. Computer-Aided Design 35(3): 1171-1179.
[2] Gero J S, Kannengiesser U, (2004) The situated function-behaviour-structure framework. Design Studies 25(4): 373-391.
[3] Wang C B, Chen Y J, Chu H C, (2005) Application of ART neural network to development of technology for functional feature-based reference design retrieval. Computers in Industry 56(5): 428-441.
[4] Kim K-Y, Manley D G, Yang H, (2006) Ontology-based assembly design and information sharing for collaborative product development. Computer-Aided Design 38(12): 1233-1250.
[5] Arai E, Okada K, Iwata K, (1992) Intention Modelling with Product Model and Knowledge in Design Process. Human Aspects in Computer Integrated Manufacturing: 271-281.
[6] Ball L J, Lambell N J, Ormerod T C, Slavin S, Mariani J A, (2001) Representing design rationale to support innovative design reuse: a minimalist approach. Automation in Construction 10(6): 663-674.
[7] McKerlie D, MacLean A, (1994) Reasoning with Design Rationale: practical experience with design space analysis. Design Studies 15(2): 214-226.
[8] Ganeshan R, Garrett J, Finger S, (1994) A framework for representing design intent. Design Studies 15(1): 59-84.
[9] Garcia A C B, Souza C S, (1997) ADD+: Including rhetorical structures in active documents. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 11(2): 109-124.
[10] Takeda H, Hamada S, Tomiyama T, Yoshikawa H, (1990) A Cognitive Approach to the Analysis of Design Processes. Proceedings of the Second ASME Design Theory and Methodology Conference, New York: 153-160.
Application of Axiomatic Design Method to Manufacturing Issues Solving Process for Auto-body
Jiangqi Zhou1, Chaochun Lian1, Zuoping Yao1, Wenfeng Zhu2, Zhongqin Lin3
1 SAIC-GM-Wuling Automobile Co., Ltd., China
2 Tongji University
3 Shanghai Jiaotong University
Abstract Efficiently solving manufacturing issues within a limited and tight schedule is key to the successful development of an auto-body model. The authors propose a decision-making system using the Case-based Reasoning (CBR) method to support the resolution of auto-body manufacturing issues. A reasoning mechanism, which aims to satisfy the system's accuracy requirement during the generation of corresponding issue solutions, has been set up using the axiomatic design method to help analyze issues and diagnose failure modes. A real case of a door closing effort problem is used to illustrate the proposed method. The proposed system increases the efficiency with which engineers deal with auto-body manufacturing issues.
Keywords: Manufacturing problem, Case-based Reasoning, Axiomatic design method, Failure diagnosis, Auto-body
1. Introduction
Dimensional quality is one of the most important criteria reflecting the level of auto-body manufacturing capability. Dimensional quality problems related to the auto-body product, which frequently occur during the manufacturing process, have become one of the major challenges for automotive enterprises in controlling product quality, due to their inherent complexity and system characteristics. Efficiently solving these problems in time will no doubt save huge amounts of time, and hence money, for either a new model development team or a full-volume production line. It is a common observation that most dimensional accuracy issues will happen again, completely or partially, after a period of production or when shifting to a new model. Even for new problems, the corresponding analysis methods, tools and countermeasures are quite similar to those of old ones. Experienced engineers or experts can analyze the problems, propose solving measures efficiently and finally fix them within a tight time limit. Inexperienced engineers, however, have to learn from scratch the skills needed to close these problems, and the learning curve can be flatter than expected.
Undoubtedly, if we could develop a computer-aided knowledge system that accumulates problem-solving knowledge and helps engineers deal with quality issues, engineers would increase the effectiveness of quality improvement during the manufacturing process, shorten lead time and leverage product competitiveness in the market. Case-based Reasoning (CBR), which originates from the psychological theory of human cognition, is one of the fast-developing artificial intelligence technologies. Schank [1] put forward the concept of CBR for the first time when he researched the representation of human memory with computers. A case mainly includes a description of the environment where the problem occurs and the problem-solving plan. After more than 20 years of research and development, CBR has been applied in many production activities. In supporting product design, assembly planning and the selection of part-cutting processes, some successful CBR-based systems are found, such as Hi-MAPP [2], EXCAP [3], CBS-TX [4], GARI [5], XPLAN [6], RTC [7] and CFCBR [8], which show a certain level of intelligence. The axiomatic design method (ADM) is widely applied in system design with the aim of establishing a standard analytical mapping mechanism, in a scientific way, to ensure a sound system structure. With an understanding of a certain local company's product development process, this paper proposes a computer-aided problem-solving system based on CBR that introduces the independence rule of axiomatic design theory into the problem-solving process, which can improve the efficiency of engineers' decision making during problem solving. Due to length limitations, issues related to dimensional control at the design stage will not be touched upon; the main focus is on the description of the system and its reasoning mechanism.
2. Elements of the System
The system based on CBR is composed of the following parts: a case-based knowledge repository, a reasoning mechanism and a revising system for retrieved solutions. Figure 1 shows the general process of the system. Solved issues are recorded in the case database; each case is composed of an issue description and a solving plan, including the analysis method and implementation measures. A newly discovered problem enters the system as key words presenting its problem-description characteristics or constant data, and the system then obtains a list of similar cases through reasoning. From the case list, engineers can assess the plans further and evaluate the new problem against concrete problems together with experts. Engineers determine the resolving measures or plan for the new problem after revising similar cases. As an essentially constantly updated case system, it requires tracing the implementation of plans in order to record the ultimate status of measure implementation and to update the cases in time, which ensures the practicability of the available cases. The reasoning mechanism is based on the key information (product characteristics) model of a certain product; it establishes the mapping relationship between the two parts of a case (problem description and resolving plan) with the objectives of accuracy and efficiency, and it is the key component of the whole system.
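As a minimal sketch of the keyword-based retrieval step described above (the case fields, the Jaccard similarity measure and the sample records are illustrative assumptions, not the authors' implementation):

```python
from dataclasses import dataclass

@dataclass
class Case:
    """One solved manufacturing issue: description keywords plus its plan."""
    issue_id: str
    keywords: set
    analysis: str
    measures: str
    status: str = "closed"   # updated after tracing the implemented plan

def retrieve(case_base, query_keywords, top_n=3):
    """Rank stored cases by Jaccard similarity between keyword sets."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a | b) else 0.0
    ranked = sorted(case_base,
                    key=lambda c: jaccard(c.keywords, query_keywords),
                    reverse=True)
    return ranked[:top_n]

# Hypothetical mini case base
base = [
    Case("C001", {"door", "closing-effort", "gap"}, "gap survey + CMM trace", "adjust PLP"),
    Case("C002", {"hood", "flushness"}, "fixture audit", "re-shim locator"),
]
hits = retrieve(base, {"door", "gap", "noise"})
```

Real similarity functions in a production CBR system would weight attributes rather than treat all keywords equally; the Jaccard measure here only illustrates the retrieval idea.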
Figure 1. Flow-Chart of the System
3. Application of ADM to Reasoning Mechanism Using CBR
As mentioned above, a complete case includes the problem description, information about the resolving process and its countermeasures. Given the complexity of auto-body assembly and the computer-representation requirements of case-related information, an integrated case model has been proposed, which incorporates knowledge of the product, fabrication and assembly processes, the dimension checking plan and logistics. Based on this model, the reasoning mechanism of the system was constructed using axiomatic design theory.
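Section 3.1 below recalls the axiomatic design basics; as an illustrative aside (not the authors' code), the matrix-structure test behind the independence axiom can be sketched as:

```python
def classify_design_matrix(A, tol=1e-9):
    """Classify a square design matrix per the independence axiom:
    diagonal -> 'uncoupled', (lower or upper) triangular -> 'decoupled',
    any other (full) form -> 'coupled'."""
    n = len(A)
    nonzero = [(i, j) for i in range(n) for j in range(n) if abs(A[i][j]) > tol]
    if all(i == j for i, j in nonzero):
        return "uncoupled"
    if all(i >= j for i, j in nonzero) or all(i <= j for i, j in nonzero):
        return "decoupled"
    return "coupled"
```

For example, a diagonal matrix such as [[1, 0], [0, 2]] is classified as uncoupled, while [[1, 0], [3, 2]] is decoupled.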
3.1 Basic Concepts
As one of the systematic decision-making methods in the design area, the axiomatic design method [9] was first proposed by Professor N. P. Suh of MIT (USA) in the 1970s. The method treats a design process as a hierarchical mapping between four domains, of which the function domain is the smallest set of functional requirements (FRs) that the design plan will realize, and the design domain stands for the set of design parameters (DPs) that satisfy the FRs. The relationship between these two domains can be expressed as a mapping matrix:

$$\{FR\} = [A]\{DP\}$$

where [A] is the so-called design matrix. According to axiomatic design theory, a good design with a proper structure may be realized when the independence criterion and the minimum-information criterion are met at the same time. To conform to the independence axiom, the design matrix must be diagonal or triangular: a diagonal matrix leads to an uncoupled design, while a triangular one yields a decoupled design. Any other form of design matrix (a full matrix) means a coupled design.

3.2 Case Modeling and Representation
As the core of a knowledge repository, the case model and its two kinds of necessary attributes, i.e., accuracy and rationality, play important roles in setting up the data structure as well as the reasoning mechanism. In this section, the case model is described according to the actual product development process, including structure design, fabrication, tooling, assembly, measurement, logistics and so on. All models abstracted from each process are finally integrated into a case model specified for the system. Product and process parameters such as key geometry features and dimension chains are used to construct interfaces between these models.

3.2.1 Model of Product Structure
Individual panels are joined and assembled through a variety of processes, such as welding, bonding and riveting, progressively into sub-assemblies (such as a door inner with reinforcement), major assemblies (such as a complete door system), and ultimately a Body-in-White (BIW) with closure panels. A BIW assembly is typically hierarchical and is represented using a tree-like structure. A binary unit C(i, j) is introduced to store assembly-level information, where i represents the assembly level the component or part belongs to and j represents the position of the part within that level. The crossing points in the hierarchy tree, Si, represent the assembly work stations. The numbers of longitudinal layers i and lateral positions j depend on the complexity of the BIW assembly.
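As an illustrative sketch (the component names and the path helper are hypothetical, not the authors' data structure), the C(i, j) units and the assembly path of a component might be represented as:

```python
class Node:
    """Binary unit C(i, j): i = assembly level, j = position within the level."""
    def __init__(self, name, i, j):
        self.name, self.i, self.j = name, i, j
        self.children = []   # parts/sub-assemblies joined at this station

# Hypothetical fragment of a BIW assembly tree
biw = Node("BIW", 0, 0)
door = Node("door assembly", 1, 0)
door.children = [Node("door inner", 2, 0), Node("door reinforcement", 2, 1)]
biw.children = [door, Node("bodyside", 1, 1)]

def assembly_path(root, name, path=()):
    """Return the chain of C(i, j) units from the BIW root down to a component."""
    path = path + (root,)
    if root.name == name:
        return path
    for child in root.children:
        found = assembly_path(child, name, path)
        if found:
            return found
    return None
```

Such a path is exactly the "assembly path for every component from itself to the final BIW assembly" used later for deviation reasoning.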
3.2.2 Locating Model
During the assembly process of body panels, tooling operations such as placing, clamping, welding and release will impact the dimensional quality of the assembly if the tooling is under uncontrolled conditions. Normally a rigid body follows the "3-2-1" rule for locating in the work station, while a flexible assembly such as a panel follows the "N-2-1" rule [10]. The system uses the signs PLBs and PLPs to represent the locating blocks and locating pins of the body assembly clamping, respectively. P_{n1,n2}(a, b) represents the n2-th locating hole on the n1-th assembly layer, which is used mainly to restrict the degrees of freedom in directions a and b, as shown in Figure 2.

3.2.3 Spot-welding Process Information

As the main form of assembly connection of body panels, spot-welding also impacts body assembly dimensions. Similar to the tooling locating expression, all welding locating points in the body assembly are expressed with the sign WLPs. W_{i,j}(a) represents the j-th welding spot on the i-th layer, where a is the connection direction.
Figure 2. Information model of tooling positioning
3.2.4 Measurement Point Modeling

CMMs are widely used in body manufacturing quality control. According to the importance of each key measurement point position on the BIW or its subassemblies for controlling BIW dimensional quality, measurement point information is expressed using the sign MP(i, j, k), which represents the k-th measurement point on the j-th component of the i-th layer of the BIW assembly tree. The attributes of a measurement point include its control direction and geometric features such as hole or surface. Among the three values of a measurement point in the x, y, z directions, only one or two may be helpful to identify dimensional variation.
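The signs P, W and MP introduced in Sections 3.2.2-3.2.4 suggest a simple data model; a hedged sketch (the field choices are assumptions made for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LocatingPin:
    """P_{n1,n2}(a, b): the n2-th locating hole on assembly layer n1,
    restricting the degrees of freedom in directions a and b."""
    n1: int
    n2: int
    directions: tuple        # e.g. ("y", "z")

@dataclass(frozen=True)
class WeldPoint:
    """W_{i,j}(a): the j-th welding spot on layer i, connecting in direction a."""
    i: int
    j: int
    direction: str

@dataclass(frozen=True)
class MeasurePoint:
    """MP(i, j, k): the k-th CMM point on the j-th component of layer i."""
    i: int
    j: int
    k: int
    control_dirs: tuple      # subset of ("x", "y", "z") carrying variation info
    feature: str             # "hole" or "surface"
```

Frozen dataclasses keep the identifiers hashable, so they can serve as keys into measurement series and case records.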
3.2.5 Logistic Information of Parts
In order to successfully close an auto-body assembly problem, logistic information about the involved parts is of the same significance as the technology models mentioned above. Since the validation of a proposed solution is largely executed simultaneously with volume production, this kind of information may sometimes decide the overall completeness of problem solving: the solution should be proposed with little influence on normal production, otherwise it cannot be put into effect as planned. Therefore, in this system, logistic information such as inventory, supplier readiness, cost of sampling parts and shipping time (shown in Figure 3) is added to the case model and represented as a relational data structure.
Figure 3. Logistics information model of parts
3.3 Reasoning Mechanism Using Independent Rules
Generally, a reasoning mechanism works in an if-then structure, in which the problem state is taken as the input precondition and the designed or predicted result as one of the targets the proposed solution must satisfy. The core task of the reasoning mechanism is to set up the mapping relationship between precondition and result. Assembly accuracy problems are taken to illustrate the mechanism. Generally speaking, there are qualitative and quantitative methods in practice to evaluate and deal with these problems. A qualitative method achieves a qualitative judgment of a problem using simple, low-cost tools such as rulers; its advantage is that it is convenient and fast for finding roughly correct resolving concepts, while its disadvantage is that practical resolving measures are hard to develop without quantitative data. Using the statistical measurement information provided by high-accuracy instruments such as CMMs, a quantitative method focuses on analyzing measurement data in terms of the time sequence or spatial relationship of measurement points in the auto-body assembly. It aims to find suspected measurement points and suspected parts, and then the root cause of dimensional deviations, in a scientific manner; it relies heavily on statistical thinking and other quality-control tools. Compared with the qualitative method, the effectiveness and correctness of measures derived from the quantitative one are higher.
3.3.1 Determination of Suspected (Possible) Parts
1) Suspected MPs

Based on the models mentioned above, dimensional quality may be evaluated partly according to the assembly measurement point (MP) distribution in terms of variation and mean shift of key geometry features (mapped to MPs). The first task of fault diagnosis is to find possible faulty or suspected MPs among all measured points. The unbiased estimates of each MP's average value and variance, $\overline{Mn}^{N}_{i,j}(*)$ and $S_{i,j}(*)$, are calculated according to the following equations:

$$\overline{Mn}^{N}_{i,j}(*) = \frac{1}{N}\sum_{k=1}^{N} Mn^{k}_{i,j}(*), \qquad S_{i,j}(*) = \frac{1}{N-1}\sum_{m=1}^{N}\left(Mn^{m}_{i,j}(*) - \overline{Mn}^{N}_{i,j}(*)\right)^{2}$$

Because different MPs contribute differently to the total assembly variation, only MPs with larger variance need attention. According to on-site experience, the variance threshold TS can be set at 70%, so the focus of engineers' effort is on the selected 30% of MPs. The spatial relativity of these MPs is evaluated using correlation analysis. The correlation coefficient between MP $Mn_{i,j}(\cdot)$ and MP $Mn_{k,l}(\circ)$ is calculated as follows:

$$R(\cdot,\circ) = \frac{\sum_{m=1}^{N}\left(Mn^{m}_{i,j}(\cdot) - \overline{Mn}^{N}_{i,j}(\cdot)\right)\left(Mn^{m}_{k,l}(\circ) - \overline{Mn}^{N}_{k,l}(\circ)\right)}{\sqrt{\left[\sum_{m=1}^{N}\left(Mn^{m}_{i,j}(\cdot) - \overline{Mn}^{N}_{i,j}(\cdot)\right)^{2}\right]\left[\sum_{m=1}^{N}\left(Mn^{m}_{k,l}(\circ) - \overline{Mn}^{N}_{k,l}(\circ)\right)^{2}\right]}}$$

Assuming a threshold TR for the correlation factor (70% in this system), suspected sets of MPs can be determined by synthesizing both TR and TS.

2) Suspected assembly parts

By analogy between the auto-body structure and the assembly process, the product assembly layer tree describes, on one side, the route of assembly deviation from bottom to top; on the other side, this information is also utilized for variation fault detection and reasoning from top to bottom. To locate panels with deviations, the product tree layer information and the suspected MP determination above are combined. Obviously there exists an assembly path for every component from itself to the final BIW assembly. Taking $n^{*}_{i,j}$ as the number of suspected MPs on a component C and $n_{i,j}$ as all MPs on the same part, a useful value called the deviation contribution factor (DCF), $\eta(C)$, is calculated with the following equation:

$$\eta(C_{i,j}) = \frac{n^{*}_{i,j}}{n_{i,j}}$$
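The statistics above (unbiased mean and variance, the correlation coefficient R, and the DCF η) can be sketched in a few lines of stdlib Python (an illustration only; the thresholding workflow of the production system is omitted):

```python
from statistics import mean, variance
from math import sqrt

def mp_stats(samples):
    """Unbiased mean and variance of one MP over N measured bodies."""
    return mean(samples), variance(samples)   # variance() divides by N - 1

def correlation(x, y):
    """Correlation coefficient R between two MPs' sample series."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def dcf(n_suspected, n_total):
    """Deviation contribution factor eta(C) = n*_{i,j} / n_{i,j}."""
    return n_suspected / n_total
```

Candidate MPs would then be those whose variance and pairwise correlation both exceed the TS and TR thresholds described above.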
If the DCF of component C is the largest among the assembly it belongs to, it is considered the candidate component, or suspected part. Furthermore, if DCF > 75%, the influence of the component on the whole body is global; otherwise it is called local. Depending on this kind of information, further decisions may be made.

3.3.2 Algorithm of Root Cause Positioning
With the above-mentioned candidate MPs and candidate components, the reasoning rule chart shown in Figure 4 is established based on the independence rule of axiomatic design; it realizes the mapping and decomposition between the function domain of diagnosis positioning and the design domain of reasoning rules. In the first layer there is only one functional requirement and one design parameter, so it is a single-requirement design matrix that satisfies the independence criterion naturally. In the second layer, according to the rules, positioning one of the deviation assembly stations does not influence the positioning of the next work station; the independence rule is satisfied when positioning rules can be applied to different assembly work stations at the same time. The mapping between diagnostic evidence and root cause of variation is one-to-one, which also meets the independence theory. Therefore, the assembly case-based reasoning mechanism established according to these rules possesses a good structure.

Figure 4. Reasoning rules diagnosis with Function-Design mapping (Li: the i-th assembly station; R1-i: the i-th rule to locate the root-cause assembly station; RC-i: root cause i; R2-i: rule to deduce RC-i)

Table 1. Reasoning Rules

R1-1: IF there is only one suspected component in the assembly unit, THEN the assembly station nearest to it on the assembly layer chart is the deviation work station.
R1-2: IF there are two suspected components in the assembly unit, THEN the meeting point of these two suspected components on the assembly layer chart is the deviation work station.
R1-3: IF there are more than two suspected components in the work station, THEN combine any two of them and judge according to R1-2.
R2-1: IF (the MPs of the suspected assembly station show a variation shift in the X or Z direction) AND (variation mode = global), THEN the deviation at the assembly station is caused by loose PLP positioning.
R2-2: …

4. An Example
In this section, an example of a door closing effort problem from an actual manufacturing process is used to illustrate the solution-generating process of the system. According to the Problem Communication Report (a document where details of a problem are given for communication purposes), some key words already labelled in existing cases are input to the system in order to search for similar cases (refer to Figure 1). The search result in the user interface shows two cases available for the engineers' reference (the system GUIs are not shown here due to space limitations). As seen from the results, the exemplary problem has comprehensive features ranging from single-part defects to assembly process problems. The old cases suggest that one of the main reasons is an uneven matching gap between the body-side panel and the door inner panel. Preliminary measurement using an inner-gap measuring tool shows that this symptom of gap difference exists. Furthermore, a CMM is used to trace and measure 15 samples of BIWs, setting the MP threshold of six times the standard deviation, TS, to 5.0 and the correlation coefficient threshold TR to 0.85. After filtering all the MPs according to the method mentioned above, we get the suspected MP set CSS = {MP(*,*,1), MP(*,*,2), MP(*,*,3), …, MP(*,*,7), MP(*,*,15)}, eight MPs in total. Compared with the body assembly layer structure tree, the suspected component C of the variation source is located to be the outer panel of the bodyside. According to the deviation-source reasoning rules, the suspected assembly workstation is found to be P21,(*,*). The variation mode of the suspected MPs may be identified as a Y-direction mode because all eight MPs are in the Y direction. After performing principal component analysis on the eight MPs in the Y direction, the first eigenvalue vector lies between 0.2 and 0.5 on average, which means that these bodyside MPs shift in the Y direction.

As the percentage of the eight MPs among all the bodyside MPs is less than 75%, this mode has a local rather than global impact on the assembly. Comparing with Rule 2-3, it may be diagnosed that the deviation in the assembly station is caused by the positioning deviation of the WLPs. In actual production, positioning welding points are concentrated mainly at the lower and middle parts during the bodyside assembly process. When the clamps are released and the part is transferred to the next assembly station, larger springback occurs and a mean shift forms where there are fewer positioning welding points, which conforms with the result of the data processing. According to other reference plans proposed by the system, the engineers also found lubrication and matching problems by performing structural analysis and benchmarking of the lock assembly. Corresponding temporary and long-term countermeasures were then proposed and
implemented, which finally not only improved the door system but also solved an additional functional problem of locking noise. As a successfully solved case, this exemplary problem and its solving process were analyzed and represented in the way described in this paper. The related information is input into the case database through the maintenance module of the system, which provides increasingly abundant and detailed reference data for resolving new problems.
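The reasoning rules of Table 1, as exercised by this example, can be sketched as plain if-then predicates (the local/Y-direction branch is inferred from the example's outcome, since Rule 2-3 itself is elided in the paper; the function names are hypothetical):

```python
def r1_station(suspected_components, assembly_tree_meet=None):
    """R1-1/R1-2: locate the deviation work station from suspected components.
    `assembly_tree_meet` is a caller-supplied function returning the station
    where two components' assembly paths meet."""
    if len(suspected_components) == 1:
        return "station nearest to " + suspected_components[0]
    a, b = suspected_components[:2]           # R1-3 reduces to pairwise R1-2
    return assembly_tree_meet(a, b)

def r2_diagnose(shift_dirs, variation_mode):
    """R2 rules: map (shift directions, variation mode) to a suspected root cause."""
    if variation_mode == "global" and shift_dirs & {"x", "z"}:
        return "loose PLP positioning"        # R2-1
    if variation_mode == "local" and "y" in shift_dirs:
        return "WLP positioning deviation"    # branch matched in the door example
    return "unresolved - escalate to expert review"
```

Applied to the door case above, eight Y-direction MPs with local impact trigger the WLP-deviation branch, matching the diagnosis reached by the engineers.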
5. Conclusion and Future Work
With the development of information technology, artificial intelligence technologies have developed and been applied rapidly in many fields. This paper proposes utilizing CBR technology to construct a decision-making support system for resolving body manufacturing problems. By introducing axiomatic design theory into the establishment of a reasoning mechanism for manufacturing problem analysis and fault diagnosis, the efficiency and accuracy problems of the plan-forming process are properly addressed. The system provides a computer-aided analysis tool for engineers to solve on-site problems, which promotes the efficiency of resolving real manufacturing problems. As it is a reasoning system based on cases, concrete measures are needed to guide engineers to accumulate enough cases, record experience data, and continuously complete the analysis process. In future work we will further research the acquisition and expression of manufacturing problems, the completion of the case model, and the automation of reasoning analysis, so that the system interface becomes friendlier, the resolving plans more complete, and the measures more applicable.
6. Acknowledgement

The authors wish to acknowledge the financial support of the China Postdoctoral Science Foundation (Grant number: 20060400786).
7. References
[1] Schank R C, (1982) Dynamic Memory: a Theory of Reminding and Learning in Computers and People. Cambridge University Press, New York
[2] Berenji H R, Khoshnevis B, (1986) Use of Artificial Intelligence in Automated Process Planning. Computers in Mechanical Engineering, ASME, September: 47-55
[3] Davis B J, Derbyshire I L, (1984) The Use of Expert Systems in Process Planning. Annals of the CIRP 33(1): 303-306
[4] Tiwari M K, Rama K, et al., (2001) A Case-based Computer-aided Process-Planning System for Machining Prismatic Components. International Journal of Advanced Manufacturing Technology 17: 400-411
[5] Descotte Y, Latombe J C, (1981) GARI: a problem solver that plans how to machine mechanical parts. Proceedings of the 7th International Joint Conference on Artificial Intelligence: 766-772
[6] Chang T C, Anderson D C, Mitchell O R, (1988) QTC - an integrated design/manufacturing/inspection system for prismatic parts. Computers in Engineering 1988 - Proceedings, ASME: 417-426
[7] Schank R C, Riesbeck C K, (1989) Inside Case-Based Reasoning. Lawrence Erlbaum, Hillsdale, NJ
[8] Lei Y G, Peng Y H, (2001) Study of a metal forming process expert system using CBR. China Mechanical Engineering (in Chinese) 12(7): 797-799
[9] Suh N P, (1990) The Principles of Design. Oxford University Press, New York
[10] Cai W, Hu S J, Yuan J X, (1996) Deformable Sheet Metal Fixturing: Principles, Algorithms and Simulations. ASME Journal of Manufacturing Science and Engineering 118(3): 318-324
Port-Based Ontology for Scheme Generation of Mechanical System

Dongxing Cao1, Jian Xu2, Ge Yang1, Chunxiang Cui1

1 Department of Mechanical Engineering, Hebei University of Technology, China
2 Department of Mechanical Engineering, Tianjin University, China
Abstract The port is considered the basis of scheme configuration and plays an important role in product conceptual design. Port-based ontology (PBO) has also attracted attention as a representation for functional modeling of mechanical systems. A port constitutes the interface of a component and defines its boundary in a system configuration. An ontology is a formal, explicit specification of a shared conceptualization, which can guide the conceptual generation of artifacts from the functional point of view. Combining ports with ontology conveniently captures the definition of a component and the corresponding design knowledge, making it easy to synthesize design schemes. This paper proposes a PBO approach for scheme generation of mechanical systems. A port-based knowledge building process is described for functional modeling; previous knowledge acquisition approaches are based on decomposition techniques for functional modeling. This paper gives a method for creating and managing different ports. Our knowledge framework has a systematic structure with three port types and three knowledge layers. The three port types are mechanical ports, electrical ports and configuration ports, and the three knowledge layers are specialized functional knowledge, behavioral knowledge and structural knowledge for different domains. The three knowledge layers represent different abstraction levels of product knowledge conceptualization. Each layer includes several knowledge types for accommodating comprehensive knowledge and is represented with first-order logic (FOL). We provide formal definitions of the framework to manage comprehensive knowledge according to the proposed knowledge framework. Finally, a fast clasping mechanism case is given to demonstrate the effectiveness of the research. Keywords: Port; Ontology; Knowledge; Conceptual Design
1. Introduction
Because of the intense competition in the current global economy, successful enterprises must react quickly to changing market trends. They should conceive, design and manufacture new products inexpensively to respond to market demand quickly. Conceptual design is considered a crucial stage of the design process, and researchers have paid great attention to it in recent years. During
conceptual design, a system is decomposed into subsystems based on their functionality [1]. Each subsystem or subcomponent is represented as a functional block. These functional blocks are connected through their compatible ports; for example, energy flows from one port to another. Singh and Bettig [2] use port-based composition to describe hierarchical configurations of complex engineering design specifications. Campbell et al. [3] developed a functional representation based on qualitative physics, bond graphs and functional block diagrams. In their representation, ports, or points of connectivity with other components, describe the isolated systems. Information about how energy and signals are transformed between ports, and how energy variables within the system relate to others, is also described. Representing systems as configurations of port-based objects is useful at the preliminary design stage, when the geometry and spatial layout are still ill-defined. Partial geometric constraints related to the interaction between functional blocks may have to be specified at this stage. Component architecture can be captured conveniently as a hierarchical configuration of port-based interfaces. An ontology is a formal, explicit specification of a shared conceptualization. This consistent and sharable description can be summarized as fundamental and generic concepts for capturing and describing functional knowledge [4-5]. We briefly explain three port knowledge types and their interrelations in the following sections. The rest of the paper is organized as follows: Section 2 gives the interaction model between two components. Section 3 presents port classification and attributes. A port-based modeling design process is established in Section 4. A case study and conclusions are given in Sections 5 and 6.
2. The Interaction Model Between Two Components
Any mechanical product system is composed of a set of interrelated components, each of which is related directly or indirectly to every other component. A closed engineering system has a system boundary that divides it from its environment; outside the boundary there are several input and output relations that act on components through the system boundary, while inside the boundary there are a number of components. Interactions exist among these components and constitute an interaction network, as shown in Fig. 1.
Figure 1. An engineering system contains the varied components
The interactions between components may be direct or indirect connections. Ports correspond to the separated interaction points where two components exchange energy with each other. The interaction between components is represented by a connection (Pij), called a port, as shown in Fig. 1, which imposes algebraic constraints on the port variables. The interaction between two components is defined by their interfaces and forms a connector; it can be described by port and connecting attributes. The interaction is a reification of the port between two connected components. This reification allows us to describe the interaction in more detail and use this information to support design refinement and synthesis. The interaction relations are shown in Fig. 2.
Figure 2. The interaction interface of two components
The interaction between components can be formally represented in terms of their interfaces and ports. We can define the interaction INT between two connected components CO1 and CO2 as the triple

INT = (IOC1, IOC2, C)    (2-1)

where IOC1 is the interface of component CO1, IOC2 is the interface of component CO2, and C is the connector between IOC1 and IOC2. When there are n components within a system, C, IOC1 and IOC2 can be further expanded in terms of ports. Thus,

IOC1 = {CO1.1, CO1.2, ···, CO1.n}
IOC2 = {CO2.1, CO2.2, ···, CO2.n}    (2-2)
The interaction relation (INT) with n components can be written as a set of triples:

$$\left( \sum_{i=1}^{n} CO_{1,i},\ \sum_{i=1}^{n} CO_{2,i},\ \sum_{i=1}^{k} C_{i} \right)$$

where Ci is the connector between CO1,i and CO2,i. Each CO and C is defined by a set of attributes. A port describes the location of the intended interaction of components. It is described with the aid of attribute-value pairs; each port contains a set of attributes. These attributes determine the characteristics of the ports and the relations among ports. Meanwhile, the connector describes how two interfaces are connected in an interaction. It contains a set of connected ports with the attribute set describing the connecting conditions. Connectors play a very important role in determining valid interactions between two components. Fig. 3 gives the connector types in different energy domains.
Figure 3. The types of connector in different energy domains
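Equation (2-1)'s triple INT = (IOC1, IOC2, C) maps naturally onto simple record types; an illustrative sketch (the shaft/gear example and all field names are hypothetical, not the authors' formalism):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Port:
    name: str
    attributes: frozenset    # attribute-value pairs characterising the port

@dataclass
class Connector:
    """C: a set of connected ports plus the connecting conditions."""
    ports: tuple
    conditions: dict

@dataclass
class Interaction:
    """INT = (IOC1, IOC2, C), Eq. (2-1): two interfaces joined by a connector."""
    ioc1: tuple   # ports forming the interface of component CO1
    ioc2: tuple   # ports forming the interface of component CO2
    connector: Connector

# Hypothetical mechanical-energy interaction between a shaft and a gear
shaft_out = Port("shaft.out", frozenset({("energy", "mechanical"), ("kind", "rotation")}))
gear_in = Port("gear.in", frozenset({("energy", "mechanical"), ("kind", "rotation")}))
link = Connector((shaft_out, gear_in), {"coaxial": True})
intx = Interaction((shaft_out,), (gear_in,), link)
```

Modeling the connector as its own record mirrors the paper's reification of the interaction: connecting conditions live on the connector, not on either component.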
3. Port Classification and Port Attribute
3.1 The Port-Based Concept Ontology
Ports are convenient abstractions for representing the intended exchange of signals, energy or material. They are an interface of component connections, and they impact system configuration. Ontology provides an understanding of the domain knowledge that facilitates knowledge retrieval, storage, sharing and dissemination. A system consists of component objects and component connections. We give an explicit representation of a shared knowledge understanding, i.e., a concept ontology, which helps illustrate the conceptual design problem. Fig. 4 shows a framework of port-based ontology. It contains three realms: the product realm, the component realm and the port realm. On the basis of explicit concept specification and a domain knowledge base, each realm can transform function to form through port behavior operations.

3.2 Port Types
Different components are of different port types. Often, there exists mechanical port, electrical port, and confuration port so on. For example, a mechanical port that is intended to establish a rigid connection with another port can be described by vectors for position and orientation combined with vectors for forces and torques in mechanical domain. There exists point, line and surface contacts shown Fig. 5. When two components are in contact with each other, it implies it exists the contact surfaces. Penetration of one part into another one requires that the relative velocity at contact point between the parts have the same normal vector relations represented below.
$(\vec{v} + \vec{\omega} \times \vec{r}) \cdot \vec{n} \ge 0$    (3-1)
Port-Based Ontology for Scheme Generation of Mechanical System
215
Figure 4. A framework of port-based ontology
A component type specifies its connection possibilities by port definitions. A port definition specifies a port name, a port classification and connection constraints [6]. A component port alone cannot specify whether a connection to a port is obligatory or optional; according to the domain ontology knowledge, an effective port connection depends on a compatible component and its attributes.
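As an illustration of this port-definition scheme, the following sketch encodes a port as a name, a classification and an attribute set. The class and field names are our own illustrative assumptions, not the authors' implementation, and compatibility is reduced here to matching classifications.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PortDefinition:
    # A port definition specifies a port name, a port classification
    # and connection constraints; the field names here are illustrative.
    name: str
    classification: str                       # e.g. "mechanical", "electrical"
    attributes: frozenset = field(default_factory=frozenset)

    def compatible_with(self, other: "PortDefinition") -> bool:
        # An effective connection depends on a compatible component and
        # its attributes; here compatibility requires the same classification.
        return self.classification == other.classification

p1 = PortDefinition("flange_a", "mechanical", frozenset({("contact", "surface")}))
p2 = PortDefinition("flange_b", "mechanical", frozenset({("contact", "surface")}))
print(p1.compatible_with(p2))  # True
```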
Figure 5. The types of point contact and line contact
3.3
Port Compatibility
Assume X represents the set of components in a product; a relation Rport can then be defined that denotes port compatibility:

x Rport y means that x and y have compatible ports    (3-2)
where x and y are components in X. Rport stands for a compatibility relation, which comprises a pardon relation and an equivalent relation when applied to a set of components. These relations are defined as follows.
Definition 1. Pardon relation: a relation Rport on a set X is called a pardon relation if it satisfies: (i) x Rport y and (ii) y Rport z together imply (iii) x Rport z.
Attribute sets can be used to describe ports and connections [7]. For example, a port with a transfer-mechanical-energy attribute can be treated as a mechanical port. Relations between two ports are determined by their attribute sets. One example of such a relation is the parent-child relation: port A is a parent of port B if the attribute set of port B is a subset of that of port A.
Figure 6. Attribute representation of ports
Definition 2. Equivalent relation: if x and y have the same port attributes and port classification, that is, the same function, they are compatible and can form a port. The equivalent relation between ports is thus a compatibility relation. The attribute representation of a port is shown in Fig. 6. For example, two contacting mechanical parts that share the attribute of transferring mechanical energy can form a mechanical port.
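The parent-child and equivalent relations described above can be expressed directly as subset and equality tests on attribute sets; the function names below are illustrative, not taken from the paper.

```python
def is_parent(attrs_a: frozenset, attrs_b: frozenset) -> bool:
    # Port A is a parent of port B if the attribute set of B
    # is a subset of the attribute set of A.
    return attrs_b <= attrs_a

def is_equivalent(attrs_a: frozenset, attrs_b: frozenset) -> bool:
    # Equivalent relation: same port attributes, hence the same function,
    # so the two ports are compatible and can form a port.
    return attrs_a == attrs_b

mech = frozenset({"transfer_mechanical_energy", "surface_contact"})
child = frozenset({"transfer_mechanical_energy"})
print(is_parent(mech, child))      # True
print(is_equivalent(mech, mech))   # True
print(is_equivalent(mech, child))  # False
```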
4.
Port-based Modelling Design Process
In this paper, we propose a port-based ontology framework that mainly focuses on performing the activity of design-process matching. It is not easy to choose an appropriate matching approach if the contents of the ports are not well known in advance, and matching becomes harder still as the number of ports grows. There is therefore a great need for an effective technology that can capture the knowledge involved in port modelling. The proposed port-based framework tries to solve this problem. Our model encompasses two main modules: FOL representation and port-based FBS representation.
4.1
First Order Logic Representation
A function of a component cannot be determined until the component is installed in a specific system with a specific configuration. We define port concepts with intention-rich functional concepts. In the FBS model, a functional symbol in natural language, in the verb + noun style, represents the intention of the designer. We try to identify operational primitives for storing the present intentions. We adopt FOL as the representation method for our framework. The FOL representation has sufficient expressiveness and also provides reasoning algorithms [8]. It can constitute the formalism of semantic networks and frame-slot representations: concepts are represented as nodes and relationships, where concepts are also called classes, attributes or frames, and relationships are also called properties, roles or slots. FOL allows users to define further classification rules, and its structure-based classification provides the foundation for supporting component search, design refinement and iterative design. An algorithm using FOL is formulated directly as follows.

Generate_Taxonomy_Tree (nodes, interface)
    For each interface
        Find all sub-nodes of the same concepts
        Assign the interface all nodes in a hierarchy
    Return

Artifact_Search (types, function, classified repository)
    Find all function phases corresponding to the types
    Create a concept node with the same attributes
    Match all nodes in the classified repository
    Return

4.2
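A minimal executable sketch of the two procedures, under the assumption that interface nodes are (name, concept) pairs and that matching means attribute-set coverage; a fuller implementation would nest sub-concepts and reason over FOL rules.

```python
from collections import defaultdict

def generate_taxonomy_tree(nodes):
    # Group interface nodes under their shared concept, giving a
    # one-level hierarchy (a fuller version would nest sub-concepts).
    tree = defaultdict(list)
    for name, concept in nodes:
        tree[concept].append(name)
    return dict(tree)

def artifact_search(required, repository):
    # Return all repository entries whose attribute set covers the
    # required attributes of the query concept.
    return [name for name, attrs in repository.items() if required <= attrs]

tree = generate_taxonomy_tree([("P56", "clasp"), ("P57", "clasp"), ("P13", "transfer")])
repo = {"screw_nut": {"transfer", "spiral_motion"},
        "piston": {"transfer", "translation"}}
print(tree["clasp"])                           # ['P56', 'P57']
print(artifact_search({"translation"}, repo))  # ['piston']
```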
Port-based FBS Representation
Attributes are lower-level concepts for defining ports. We have divided the attributes into three main categories: function, behaviour and structure, as shown in Fig. 7. When a port is defined by function attributes, its attributes describe the intended use of the port. Artifact functions have been researched extensively, and we focus on the attributes of component concepts [9]. As ports refer to locations of intended interaction, the functions applied to ports are limited to different types of interaction, such as: (1) transfer (energy, material or signals); (2) connect (fasten or attach) an artifact; (3) support (secure and position). In addition to function, the structure attributes describe the structural, geometrical, topological and part-whole information of an artifact; these attributes are often referred to as features. A large number of concepts for defining form already exist [10-11]. However, it is often useful to introduce new form-attribute classes for specific port geometry. Finally, ports are characterized by behavioural attributes. Again, due to the limited range of functions that can be performed by ports, their behavioural attributes are also limited to characterizations of energy flow, material flow or signal flow. For the
definition of behavioural attributes, we can build algebraic equations from the design parameters. Port refinement can be supported by FOL. The process of refinement is divided into two steps. First, when a designer defines a port as having certain functional, geometric or behavioural attributes, the application offers a set of possible ports or interfaces from the repository. Second, the algorithm limits the number of possible attributes that can be assigned to the port, since these attribute constraints are defined in the attribute layer. An optimization algorithm, such as a genetic algorithm or a tableau algorithm, can realize the iterative design of ports.
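The two-step refinement can be sketched as simple set filtering; the port and attribute names below are hypothetical, and in practice an optimization algorithm would replace the exhaustive filter.

```python
def refine_port(desired, repository):
    # Step 1: offer candidate ports from the repository whose attribute
    # sets contain the designer's chosen attributes.
    candidates = {port: attrs for port, attrs in repository.items()
                  if desired <= attrs}
    # Step 2: restrict the attributes that may still be assigned to the
    # port to those occurring in some remaining candidate.
    allowed = set().union(*candidates.values()) if candidates else set()
    return candidates, allowed - desired

repo = {"portA": {"transfer", "surface_contact"},
        "portB": {"transfer", "point_contact"},
        "portC": {"connect", "surface_contact"}}
candidates, remaining = refine_port({"transfer"}, repo)
print(sorted(candidates))  # ['portA', 'portB']
print(sorted(remaining))   # ['point_contact', 'surface_contact']
```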
Figure 7. The relationship of function, behavior and structure
5.
Example Scenario
A fast clasping mechanism is a fixture used in machining centres. The original clasping mechanism, used in machining centres as a subsystem of a fixture, is a screw clasping mechanism operated by hand. The speeds for clasping and releasing the workpiece are slow and not suitable for mass production [12]. Users therefore require a new product designed for fast clasping and releasing operations. Generally speaking, to realize fast
Figure 8. The process of port generation
clamping, verbs such as clasp, support and transfer should be used. The process of port generation is described using FOL. According to the user requirements, the system first generates a clasp port, a configuration port, a transfer port and a driver port, together with the corresponding components A, B, C, D shown in Fig. 8. The system can further generate different components (E, F, G, H) to realize the port functions by reasoning and matching, as shown in Table 1. Port compatibility is searched by attributes to build the system structure, as shown in Table 2. Figure 9 gives the system configuration.

Table 1. Functions of ports

Port No. | Comp_INT | Attributes of ports
A | P56 | Point contact, Newton's law
B | P57 | Surface contact, spiral motion
C | P35 | Increase pressure rate A1/A0
D | P13 | Surface contact, spiral motion t1
E | P12 | Surface contact, spiral motion t0
F | P23 | Surface contact, translation t0-t1
G | P45 | Surface contact, translation
H | P24 | Transport liquid, P, q
Table 2. Clasping mechanism component list

Component No. | Name of component
1 | Screw nut (input)
2 | Big jar
3 | Big piston
4 | Small jar
5 | Small piston (output)
6 | Support (input)
7 | Workpiece (output)

Figure 9. A principle solution of the fast clasping mechanism
6. Conclusion This paper presents a port-based ontology technique for conceptual design. We have paid particular attention to issues that are important with respect to PBO in supporting preliminary design. We are currently investigating a more detailed ontological schema aimed at explicit representations of design knowledge and component knowledge in order to capture product structure. Current research is expanding this port integration towards describing electromechanical systems.
Acknowledgements This research is sponsored by the National Natural Science Foundation of China under grant No. 50775065 and partially supported by the Post-Doctoral Science Foundation of China (grant No. 20060400712).
References
[1] Pahl G, Beitz W. (1996) Engineering Design: A Systematic Approach, 2nd edn. Springer-Verlag, London
[2] Singh P, Bettig B. (2004) Port-compatibility and connectability based assembly design. Journal of Computing and Information Science in Engineering, 4(3): 197-205
[3] Campbell M, Cagan J, Kotovsky K. (2000) Agent-based synthesis of electro-mechanical design configurations. Journal of Mechanical Design, 122: 61-69
[4] Kitamura Y, Sano T, Namba K, et al. (2002) A functional concept ontology and its application to automatic identification of functional structures. Advanced Engineering Informatics, 16(2): 145-163
[5] Lin J, Fox MS, Bilgic T. (1996) A requirement ontology for engineering design. Concurrent Engineering: Research and Applications, 4(3): 279-292
[6] Mizoguchi R, Tijerino Y, Ikeda M. (1995) Task analysis interview based on task ontology. Expert Systems with Applications, 9(1): 15-25
[7] Singh P, Bettig B. (2003) Port-compatibility and connectability based assembly design. In: ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Sept. 2-6, Chicago, Illinois, DETC2003/DAC-48783
[8] Russell S, Norvig P. (1995) Artificial Intelligence, 2nd edn. Prentice Hall
[9] Chakrabarti A, Bligh TP. (1994) An approach to functional synthesis of solutions in mechanical conceptual design. Part I: Introduction and knowledge representation. Research in Engineering Design, 6(3): 127-141
[10] Gorti SR, Sriram RD. (1996) From symbol to form: a framework for conceptual design. Computer-Aided Design, 28(11): 853-870
[11] Roy U, Pramanik N, Sudarsan R, et al. (2001) Function-to-form mapping: model, representation and applications in design synthesis. Computer-Aided Design, 33: 699-719
[12] Kumar AS, Subramaniam V, Teck TB. (2000) Conceptual design of fixtures using machine learning techniques. International Journal of Advanced Manufacturing Technology, 16(3): 176-181
Specification of an Information Capture System to Support Distributed Engineering Design Teams A. P. Conway, A. J. Wodehouse, W. J. Ion and A. Lynn Department of Design Manufacture & Engineering Management, University of Strathclyde, Glasgow, Scotland
Abstract The global distribution of design teams and the support of design activities within the digital domain have seen an increase in the need for computational systems for information capture, storage and use. Although significant work has taken place in managing detailed design information, such as CAD data and BOMs, there is currently little support for teams in the capture and communication of the informal and tacit information exchanged, often intensively, in design meetings and other non-computational activity. The challenge facing organisations is to easily capture this information and knowledge for re-use within the life cycle of the project or for future projects, without inhibiting either the designer or the design process. This paper introduces an information capture system architecture and highlights how the system can be of significant benefit in providing design teams with information and knowledge support within distributed design environments. The overall aim is to provide design teams with pertinent information, past examples and possible solutions to the design problem irrespective of their location, providing greater efficiency and more sustainable approaches to engineering by improving through-life support. Current and future work in this regard is outlined. Keywords: global design, collaboration, information, capture
1.
Introduction
Globalisation has ensured that the design of complex engineering products has become an increasingly collaborative task among design and development teams based in offices around the world. As a result, companies are embracing virtual environments in which design teams can collaborate and exchange information and work during the product development process. As design becomes an increasingly collaborative and knowledge-intensive activity, the need for computer-based design frameworks to support the communication, representation, and use of knowledge and information among distributed designers becomes critical. Companies are increasingly required to provide support throughout the entire lifecycle of a product, including service, which can
222
A. P. Conway, A. J. Wodehouse, W. J. Ion and A. Lynn
encompass 5, 15 or even 30 years into the future. The task many companies face is how to quickly and easily capture this information and knowledge, along with its context, for re-use within the lifecycle of the project or for future projects, without inhibiting either the designer or the design process. Current virtual environments provide significant support for the exchange of formal design information such as geometric data and specifications. However, it is also desirable to communicate informal information and knowledge about the design and design process, including design rules, constraints, rationale, etc. [1], not all of which is currently captured.
2.
Distributed Team Collaboration
Advances made in the computing world, and in particular the expansion of the Internet, have been key factors in the increased prevalence of distributed design teams in recent years. Another key factor has been the growth of the global market, fuelled by demand for technologically advanced products. The knowledge and skills required to develop and manufacture products rarely reside within a single location, leading to the need to establish distributed and possibly global design teams. However, the implementation of distributed design teams ensures that the process of capturing the important information, knowledge and decisions generated throughout the development of products becomes increasingly difficult. Design is a collaborative process which involves communication, negotiation and team learning. Efficient communication is critical to achieving better co-operation and co-ordination among members of a design team. Fruchter [2] has made the following observations on conventional design team communication methods:

- Designers record background information and the results of reasoning and calculations in private notebooks;
- Information in the form of text, calculations, graphics and drawings is captured in paper or computer-based forms. Unfortunately, much of the design intent in a design dialogue is lost because it is only partially documented. The final decision tends to be recorded, but much of the interaction and developmental thinking of a design discussion is not;
- The process of identifying shared interests within a design team is ad hoc and based on participants' imperfect memories and retrieval of available documents. This error-prone and time-consuming process rapidly leads to inconsistencies and conflicts;
- Meetings are usually the forum in which inconsistencies are detected and resolved before a project can progress. Discussion of graphic or numerical information by telephone, fax, etc. is difficult and leads to misunderstandings and eventually increased product cost.

During the undertaking of design and development activities, greater emphasis must be put upon capturing the activities as they occur, allowing a complete record of the activities to be made available to all members of the distributed team. It is
Information Capture System to Support Distributed Engineering Design Teams
223
key that any such documents be located not on local machines or in private notebooks, but in online collaborative environments where all members of the design team can access, download and comment on the design records in real time. Achieving efficient processes for sharing product and process data within collaborative teams is a key factor influencing the successful implementation of distributed design teams.
3. Capture, Storage and Retrieval of Product and Process Information

As highlighted by Fruchter [2], throughout the design process a large quantity of information and data is generated, not only on the object being designed, but also on the decisions, the rationale, the reasoning and the use of experience. The challenge is to make this information explicit so that it can be captured and re-used in future projects and activities, as well as during the entire life cycle, e.g. maintenance. Furthermore, once information has been captured it must be stored in a form that allows rapid retrieval within collaborative environments, whether for synchronous or asynchronous use.
3.1
Continuous Knowledge and Information Use
Continuous improvement in service support for long-life products, such as those in the shipbuilding or aerospace industries, depends greatly upon the implementation of effective Knowledge Management (KM) systems within dynamic learning environments [3]. Large multinational organisations operating within markets such as defence and construction have the opportunity to capture operational knowledge through in-service evaluation and reporting, and to re-use this knowledge in new design projects. However, due to the lack of communication and sharing of this knowledge and information at different stages of the total product life cycle, such KM systems can become ineffective. The issue stems from the need to capture information and knowledge concerning the product as it is generated during the design and manufacture stages and to re-use it during the product's life. Furthermore, knowledge of the performance of the product in service should also be captured, enabling the management, upgrading and improvement of the product, and feeding this valuable data into new designs. These practices can be time consuming and restrictive for the working designer.
3.2
Information Capture and Retrieval
Information can be categorised as being either formal or informal [4]. We define formal information as being explicit and definite and that which takes the form of reports, finalised documents, CAD drawings, and any other information communicated in a predefined form. Informal information therefore is defined as not having a recognised or prescribed form and can take the form of oral communication, images and sketches to name but a few.
[Diagram: formal/explicit information (reports, CAD models) contrasted with informal/implicit/tacit information (experience, assumptions)]
Figure 1. Relationship between formal and informal design information [5]
Informal design information is valuable because it reflects many important aspects of the design process not found in formal documentation [6]. During the design and development of an object, the designer or design teams will rely on experience gained from past projects and similar tasks to aid them when making decisions and progressing through the development activity. If the information, knowledge and rationale behind these decisions can somehow be recorded throughout the design process, then these elements will be of utmost value to organisations who can reuse this knowledge in future projects. It is worth noting that the very nature of informal and formal information is dynamic in that by capturing informal information it is transformed into formal information; the ideal scenario is for its transformation from informal to formal without generating additional work for the designer. Recent research studies into the capture of design information have resulted in the emergence of rationale capture tools. Systems such as the Rationale Construction Framework (RCF) developed by Myers, Zumel, and Garcia [7] propose seamless design rationale capture systems that acquire rationale information for the detailed design process without disrupting a designer’s normal activities. Their underlying approach involves monitoring designer interactions with a commercial CAD tool to produce a rich process history and interpret the intentions through the use of representation schemas. Currently there is little work on developing technologies that deal with the unobtrusive capture of informal design process information in its primary stage. In fact, most of the work performed both in the past and present has focused on developing tools and systems to capture information in the latter stages e.g. detailed design, where the data and information has been manipulated into some form so that it can be processed and re-used (such as rationale capture). 
However, before this information can be manipulated per se, it must be captured, and it is at this stage, where we focus our research efforts. There are systems available such as Informedia [8], Convera and Ferret Browser [9], which capture information using video / audio capture and speech recognition generated during social situations. Spoken Document Retrieval, Video Information Retrieval, Video Segmentation, face recognition, and cross language Information Retrieval are all elements included in the development of these systems. However, these systems are limited in that they capture all information, providing the user with a new problem, structuring and determining what information is useful and
what is not. Rather than storing everything and attempting to subsequently split the information into smaller subsets, it is proposed that it is preferable to be selective in the capture of information during the design activity. Recently, work has been conducted on the development of virtual and automated capture environments, whereby the design activity is supported within a distributed environment, facilitating the use of many traditionally styled resources to capture and share information. The most notable developments are the iRoom and iLoft [10] projects conducted at Stanford University, along with the I-LAND project [11] conducted by the German National Research Centre for Information Technology (GMD) and the Integrated Publication and Information Systems Institute (IPSI). Synchronous modes of communication such as videoconferencing and network-enabled interactions are supported within these environments, and collaborative decisions are made and stored. However, these technologies and developments rely on the design activities taking place in specific locations, removing the designer from their natural working environment. Mobility is essential for the use of shared resources and for communication [4], and due to recent advances in technology, the capture of design information can happen almost anywhere and at any time. These technologies facilitate informal interactions and awareness traditionally unavailable to users at remote sites. Implications for technology design include portable and distributed computing resources, in particular moving beyond individual workstation-centric CSCW applications. The development cycle of mobile computing technology is extremely short and, as a result, devices such as PDAs, Tablet PCs and mobile phones are becoming commonplace within offices and especially within meetings. These devices can be extremely useful tools which aid the capture of information irrespective of the location of the user.
4.
Information Capture and Storage System
It has been established that easy and unobtrusive capture of information as it is being generated is key to the construction of a comprehensive project memory. Our specific interest focuses on the capture and storage of process information and context within a distributed environment. Our aim is to develop an architecture which enables the capture of design process and/or product information without creating additional work for the designer. This will be implemented as a solution within the distributed design environment, allowing the storage and visualisation of captured information for all members of a distributed design team irrespective of their location.
4.1
System Requirements
Following a review of technology and work being done in the area of design information capture, a set of requirements has been drawn up which, if satisfied, would form the basis of an effective distributed information capture system architecture. Four key requirements are proposed for an effective system:
Distributed Working

In order for an information capture system to be most effective within today's globally dispersed design and manufacturing organisations, it must facilitate distributed working. As highlighted previously, there are a number of systems which support distributed working; however, these systems do not possess the necessary methods to quickly and easily capture information regardless of the situation and location. The system should have the ability to generate online collaborative documents and storage facilities that can be accessed by any web-enabled hardware device.

Information and Knowledge Capture

The key difference must be the capturing of data as it is being generated (i.e. in its raw and unaltered format) and the association of metadata with minimal additional effort on the part of the user. In order to allow for more effective data capture solutions, consideration must be given to the physical environment and the use of mobile devices such as PDAs, mobile phones, laptop computers, and digital pens and paper, along with desktop computers and various meeting room technologies. Mobile devices provide the necessary mechanisms to record information and knowledge as it is generated during the many different design activities as they take place; from the corridor meeting or sketching designs on the train, to the group discussions and design review meetings taking place in designated rooms.

Storage of Data Objects

To turn a repository or database into an effective project memory, a higher quantity of information and metadata is required than is normally captured at present. Most systems will automatically generate metadata such as date, time, user id and file type, but to be effective, further elements of metadata such as context, description and status should also be captured.
In order to create more effective project memories which can be used 5, 10 or even 30 years into the future, the system must generate as much metadata as possible at the point of capture. By doing this, the system can create "data objects" constructed from the data file and the associated metadata. These data objects can then be used to construct a comprehensive project memory, i.e. a representation of the activities undertaken throughout the duration of a project.

Creation and Retrieval of Object Views

An essential factor in the creation of project memories is the retrieval and visualisation of the data. The use of object views within the system allows for various methods of viewing the data. Any system developed must have the ability to query the database and retrieve data objects; thus a search/query environment must be incorporated. The system should allow project memories to be interrogated from multiple perspectives. For example, the use of timelines linking together sets of data objects would allow the user to view all activities captured between certain periods in the project. By way of illustration, an object view of concept sketches (Figure 2) generated within a certain period of time during a design project would give a perspective on the range and scope of concept exploration undertaken by the design team at that point in time.
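A minimal sketch of data objects and object views, assuming each object is a dictionary of metadata fields; the fields (context, description, status) follow the requirements above, while the function names are our own illustrative choices.

```python
from datetime import datetime, timezone

def make_data_object(path, user_id, context, description, status):
    # Couple the raw captured file with richer metadata than the
    # usual auto-generated date/time/user-id/file-type fields.
    return {"file": path,
            "date": datetime.now(timezone.utc).isoformat(),
            "user_id": user_id,
            "context": context,
            "description": description,
            "status": status}

def object_view(objects, **criteria):
    # An object view: every data object whose metadata matches all
    # of the given criteria (e.g. context="concept").
    return [o for o in objects if all(o.get(k) == v for k, v in criteria.items())]

memory = [make_data_object("sketch1.png", "u1", "concept", "pump concept", "draft"),
          make_data_object("review.doc", "u2", "review", "design review notes", "final")]
print([o["file"] for o in object_view(memory, context="concept")])  # ['sketch1.png']
```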
Figure 2. Object View Illustration – Concept sketches
4.2
Information Capture and Storage System Architecture
The identification of requirements for an information capture and storage system has provided the basis for a potential solution architecture which, if developed into a full system, would satisfy these requirements. The system requirements can be grouped into two areas: physical and virtual. Only if the system can adequately support both environments will it become effective in use. Utilising already available and prominent technologies, the system architecture proposed in Figure 3 satisfies all the necessary system requirements previously highlighted. As well as proposing a viable solution to the problem, this architecture provides a framework upon which future development can be performed, laying down the foundations of a potentially critical information capture and storage system. The physical environment consists of the design team, web-enabled hardware and the input to the virtual environment. Due to the distributed nature of design, there is a need to cater for many different situations, and therefore the system cannot be hardware specific. As previously stated, the physical environment should possess the functionality to allow designers to access the system through a number of ancillary devices. To do this, an adequate user interface must be incorporated. There are various programming languages, such as Java, PHP (Hypertext Preprocessor) or C++, which could be used to create this interface. By way of example, the web-based LauLima [12] system uses a PHP-based interface as the input to the system. PHP is a widely used general-purpose scripting language that is especially suited to web development, as it can easily be embedded in HTML. Using PHP within LauLima ensures that users need only a web-enabled device to connect.
[Diagram: the physical environment (design team, web-enabled hardware) connects through a user interface to the virtual environment (information capture environment, search/query environment, file repository)]
Figure 3. Information capture and storage system architecture
The virtual environment, by contrast, will be entirely computationally based, consisting of an information capture environment, a search/query environment and a file repository or storage facility. The information capture environment will be dynamic, in that it allows the user to create and capture information and knowledge as and when it occurs in a "live" environment, and supports the editing and updating of the information at a later time. To do this, we envisage the adaptation of current web-based technologies such as wiki pages. Wikis possess desirable properties such as the flexibility required to allow users to quickly create, edit and store information in the pages and, being web-based, provide an ideal platform to support distributed working. Incorporated within the virtual environment will be a file repository linked to a database. This repository enables users to store and access their information irrespective of their location and provides the underlying basis for the system. In order for the user to search and retrieve data objects from the repository, a search and query environment must be included, bridging the gap between the user interface and the repository. The search environment also allows users to return various views of the data objects contained in the repository, generating multiple perspectives on the data, whether by date, user id, title or any other associated metadata.
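The search/query environment over such a repository can be sketched as metadata filtering plus a timeline view; the field names are illustrative assumptions, not part of the described system.

```python
def search(repository, field, value):
    # Query the repository on any single metadata field.
    return [obj for obj in repository if obj.get(field) == value]

def timeline(repository, start, end):
    # A timeline view: data objects captured between two dates, in
    # chronological order (ISO date strings sort correctly as text).
    return sorted((o for o in repository if start <= o["date"] <= end),
                  key=lambda o: o["date"])

repo = [{"title": "concept A", "user_id": "u1", "date": "2008-01-10"},
        {"title": "design review", "user_id": "u2", "date": "2008-02-02"}]
print([o["title"] for o in search(repo, "user_id", "u2")])               # ['design review']
print([o["title"] for o in timeline(repo, "2008-01-01", "2008-01-31")])  # ['concept A']
```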
5.
Future Work
Development of the system architecture is ongoing within the KIM Project [3]; the overall focus is to provide users with a rapid and effective method of capturing design process and product information with minimal effort. Initial experimental scenarios were run in an educational setting in an attempt to determine the critical instances which occur during collaborative design meetings. These experiments helped identify how best to capture and store the critical information and knowledge generated during these instances for re-use [5]. A prototype system is currently being developed (figure 4), based on the system architecture (figure 3), and will be piloted and validated within further experimental scenarios.
Information Capture System to Support Distributed Engineering Design Teams
Figure 4. Prototype Information Capture System
The long-term vision is to develop the information capture system architecture to such a degree that it may be implemented and evaluated in industrial situations such as engineering design review meetings; discussions are ongoing with various industrial partners regarding possible collaboration.
6.
Concluding Remarks
Organisations are increasingly aware that the use of shared workspaces and collaborative tools can be beneficial in the support of distributed design activities. The natural coupling of these workspaces with the capture of information is fast becoming an industry focus as firms become more attuned to the need to support products throughout their entire lifecycle. The implementation of information capture systems in virtual, distributed environments ensures that design teams have the necessary information and knowledge support whenever and wherever the design activities take place. The adaptation of current web-based technologies, such as wiki pages, as capture and storage facilities allows users to quickly and easily capture information and knowledge irrespective of their location. A key factor is the “live” capture of this information, where the information can be stored as it is being generated, making the capture process more efficient and removing the need for users to work retrospectively. Overall, the system architecture proposed in this paper has the potential to collate both product and process information that can be of great benefit to firms wishing to reuse information and experience generated throughout the lifecycles of large made-to-order products and services.
7.
Acknowledgements
This work is part of the “Knowledge and Information Management (KIM) Through-Life Grand Challenge Project” [3], funded by the Engineering and Physical Sciences Research Council and the Economic and Social Research Council.
8.
References
[1] Szykman, S., et al., Design Repositories: Next-Generation Engineering Design Databases, in IEEE Intelligent Systems and Their Applications. 2000, MSID.
[2] Fruchter, R., Interdisciplinary communication medium in support of synchronous and asynchronous collaborative design, in International Conference of Information Technology in Civil and Structural Engineering Design. 1996: University of Strathclyde, Glasgow.
[3] McMahon, C., et al. Knowledge and Information Management (KIM) Grand Challenge Project. 2006 [cited 2007 15th April]; Available from: http://wwwedc.eng.cam.ac.uk/kim/.
[4] Bellotti, V. and S. Bly, Walking Away from the Desktop Computer: Distributed Collaboration and Mobility in a Product Design Team, in Computer Supported Cooperative Work. 1996: Cambridge, MA, USA.
[5] Conway, A.P., et al., A Study of Information and Knowledge Generated During Engineering Design Meetings, in International Conference on Engineering Design (ICED). 2007: Paris, France.
[6] Yang, M.C., W.H. Wood, and M.R. Cutkosky, Design information retrieval: a thesauri-based approach for reuse of informal design information. Engineering with Computers, 2005. 21: p. 177-192.
[7] Myers, K.L., N.B. Zumel, and P. Garcia, Automated Capture of Rationale for the Detailed Design Process, in Innovative Applications of Artificial Intelligence (IAAI-99). 1999, AAAI Press: Menlo Park, CA, USA.
[8] Hauptmann, A., et al. Video Retrieval with the Informedia Digital Video Library System. in Text Retrieval Conference (TREC'01). 2001. Gaithersburg, Maryland.
[9] Lalanne, D., et al., The IM2 Multimodal Meeting Browser Family, in Joint IM2 Technical Report. 2005.
[10] Johanson, B., A. Fox, and T. Winograd, The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms, in Institute of Electrical and Electronics Engineers (IEEE). 2002.
[11] Streitz, N.A., et al., i-LAND: An Interactive Landscape for Creativity and Innovation, in ACM Conference on Human Factors in Computing Systems (CHI'99). 1999, ACM Press, New York: Pittsburgh, PA, USA.
[12] Breslin, C., et al. Digital Libraries for Global Distributed Innovative Design, Education and Teamwork (DIDET). 2003-2007 [cited 2007 10th February]; Available from: http://www.didet.ac.uk/.
Collaborative Product Design Process Integration Technology Based on Webservice
Shiyun Li1,2, Tiefeng Cai3
1 Hudong-Zhonghua Shipbuilding (Group) Co., Ltd., Shanghai 200129, China
2 Shanghai Jiao Tong University, Shanghai 200240, China
3 Zhijiang College of Zhejiang University of Technology, Hangzhou 310024, China
Abstract In order to solve the consistency and integration problems of the process and process data produced in collaborative digital product development, a digital collaborative design process model is presented, based on a detailed analysis of the integration requirements of the collaborative design process and its data. Based on this model, the CAX/DFX tools used in collaborative design are distributed and encapsulated at fine granularity with the Federated Intelligent Product EnviRonment (FIPER); the collaborative design process data produced by the CAX/DFX tools is encapsulated according to the workflow logic of the Product Data Management (PDM) system and published as a Webservice. FIPER is used to realise the collaborative design process flow, which uses the data Webservice to exchange data with the PDM system and uses the encapsulated CAX/DFX tools to realise the design functions. An example realising the integration of the process design flow of a part is presented at the end of this paper; in this example, the collaborative design process flow is used to organise collaborative process planning and the collaborative design data is encapsulated as a Webservice. The realisation of this example shows that the digital collaborative design process model is practicable and that the method of encapsulating CAX/DFX tools and data as Webservices is feasible. Keywords: Collaborative design, Collaborative design process model, Process integration, Data Webservice
1.
Introduction
A product is realised through the progress of the design process, which is the main node of the design procedure and is assembled from design operations with specific timing and logic. The effective management of the design process is the preferred way to improve the validity of design operations and the efficiency of product development; the design process is therefore the key to product development. A design process must be carried out by designers, so design processes [1] will differ in logic, time, data and form. This increases the difficulty of uniform
management and control of the design procedure [2]. It is therefore necessary to establish an effective, uniform management mechanism for the design process. A collaborative design process can be divided into a large-granularity workflow process and a small-granularity design process [3]. The large-granularity workflow process is the process management system of the PDM system, used to analyse the relationships between design nodes based on the project schedule [4]. The small-granularity design process, by contrast, focuses on the design function; it is used to analyse the logic of design operation steps and to manage the production process of design data. To integrate the design process, Brandt et al. [5] designed an approach to reuse design processes based on a Process Data Warehouse [6], and Indrusiak et al. described the integration of in-memory design data representation and design databases [7]. Chen et al. [8] and Gao et al. [9] researched concurrent design process models with Petri nets, the Unified Modeling Language (UML) and polychromatic sets theory. However, these approaches lack capabilities for managing the process operation and its data.
2.
Requirements of Collaborative Design Process Integration
The Collaborative Design Process (CDP) of virtual product development is a set of interacting operations, including operations of management, design, simulation, analysis and manufacture. The kernel of CDP execution is the transfer of product data between design operations and from design operations to management tools, together with the interaction of product data such as changes of design-object state. The CDP is therefore a complex of time, logic and action data, and its integration has two aspects: the integration of process time and logic, and the evolution of process data. The entities in the design process include design resources, design models, design tools and designers. The requirements of CDP integration include the following aspects. 1. Data integration. Process data is the result of design operations, so the first requirement is the integration of the CDP with design data and process data. To realise this integration, data must be obtained promptly and reliably during collaborative design; temporary data must be submitted to the collaborative platform in time; and, at the end of collaborative design, the design result should be submitted and the data access authorisation returned, so that the collaborative platform can manage the design data and process data. 2. Process integration. Design operations range from modifying parameters to carrying out a multidisciplinary optimisation. A design operation can be executed only when its specific conditions are ready, i.e. the specific environment required by the design activity. For example, before modifying the parameters of a part, the part should be checked out from the platform. Moreover, an operation associated with its context and with other operations can form a design step that achieves a special function. The workflow of the PDM system can manage the large-granularity
design process but cannot manage the design operation; to achieve management and control of design operations, integration of the CDP is needed. 3. Tool integration. Tools are the carriers of design operations, so CDP management should be able to associate an operation with design tools according to its type; for example, a 3D modelling operation should be associated with CAD tools. This requires CDP management to provide various interfaces that can connect to various design tools. 4. Operator integration. A design operation cannot be performed by just any operator; to achieve CDP integration, the system must be able to associate operations with operators, including roles and policies.
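The process-integration requirement above — a design operation may execute only when its environment conditions hold (e.g. the part must be checked out before its parameters are modified) — can be sketched as precondition-guarded operations. This is a toy illustration; the names (`Platform`, `checkout`, `modify_parameters`) are hypothetical and not part of any PDM product.

```python
class Platform:
    """Toy collaborative platform tracking which parts are checked out and by whom."""
    def __init__(self):
        self.checked_out = {}  # part id -> user holding the checkout

    def checkout(self, part, user):
        if part in self.checked_out:
            raise RuntimeError(f"{part} is already checked out")
        self.checked_out[part] = user

    def modify_parameters(self, part, user, params):
        # Precondition of the design operation: the part must be
        # checked out by this user before modification is allowed.
        if self.checked_out.get(part) != user:
            raise PermissionError(f"{user} must check out {part} first")
        return {"part": part, "params": params}

p = Platform()
try:
    p.modify_parameters("pipe-01", "alice", {"d": 20})  # blocked: not checked out
except PermissionError:
    blocked = True
p.checkout("pipe-01", "alice")
result = p.modify_parameters("pipe-01", "alice", {"d": 20})  # now permitted
```

In a full CDP integration, such preconditions would be evaluated by the process engine rather than hand-coded in each tool.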
3.
Collaborative Design Process Model
The collaborative design process model focuses on the interaction between process operations and process data, and emphasises the variation of process data and the constraint and control of design resources and designers while data changes. A node of the CDP is a particular design activity in the product development procedure, and is the smallest element of the design process. The design activities in a design process have an unambiguous logical and sequential relationship. The design process model can therefore be described with process nodes, logical conditions, connection lines and process data (see Fig. 1). Process nodes and connection lines are the core of the design process model.
Figure 1. Collaborative design process model
A process node, which carries the entities of a process activity such as activity data, resources, operations and constraints, is the kernel of the process model. Connection lines are the ligaments between process nodes; the process flow and data stream are controlled by the logical conditions on the connection lines. The logical relationship between process nodes is constructed by logic nodes and connection lines.
In the virtual product development process, events such as state changes of product data and design tasks trigger the design process. While the design flow is running, the workflow engine evaluates the switch conditions of logical nodes and connection lines and, if a switch condition allows, transfers the design data and design resources to the next node and activates it. The activated node drives the process engine, which builds a process instance from the design data, design resources and process template and then starts it. This process instance includes a set of design operation sequences.
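The node/connection-line mechanics described above can be sketched as a minimal engine that executes each node's operation and follows the first connection line whose switch condition holds. This is a sketch under our own naming assumptions (`Node`, `ProcessEngine`, `connect`, `run`); it is not the FIPER or PDM workflow API.

```python
class Node:
    def __init__(self, name, operation):
        self.name, self.operation = name, operation

class ProcessEngine:
    """Toy engine: runs a node's design operation, then follows the first
    connection line whose switch condition holds, passing data along."""
    def __init__(self):
        self.lines = {}  # node name -> list of (condition, next node)

    def connect(self, src, condition, dst):
        self.lines.setdefault(src, []).append((condition, dst))

    def run(self, start, data):
        node, trace = start, []
        while node is not None:
            data = node.operation(data)       # execute the design operation
            trace.append(node.name)
            node = next((dst for cond, dst in self.lines.get(node.name, ())
                         if cond(data)), None)  # evaluate switch conditions
        return data, trace

check_out = Node("checkout", lambda d: {**d, "checked_out": True})
plan      = Node("process_planning", lambda d: {**d, "plan": f"plan for {d['part']}"})
check_in  = Node("checkin", lambda d: {**d, "checked_out": False})

eng = ProcessEngine()
eng.connect("checkout", lambda d: d["checked_out"], plan)
eng.connect("process_planning", lambda d: "plan" in d, check_in)
data, trace = eng.run(check_out, {"part": "pipe-01"})
```

The example instance mirrors the process-planning flow used later in the paper: check out the part, do the planning, check it back in.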
4.
Mechanism of Collaborative Design Process Integration
4.1
Running Mode of Collaborative Design Process
Based on the collaborative design process model, an integration framework for the CDP is constructed, which includes six layers: project layer, workflow layer, process layer, service layer, data layer and organisation layer (see Fig. 2).
Figure 2. Collaborative design process integration framework
Generally, when developing a virtual product, the project leader divides the project into phases, assigns the project subtasks, plans the project schedule and organises the work team. Then, according to the project task schedule, the principals of the subtasks establish the workflow and the design process flow. The design process flow is the expansion of a node of the workflow instance. While the project runs, a design process flow instance, which drives the man-machine design activity, is started by the workflow instance. The design process instance dynamically gets reference data from the service layer. The design data is then updated until the man-machine interactive design operation ends, and the result of the interactive design is submitted to the workflow instance.
The instances of the workflow and of the design process flow are both process objects with specific design data, but there are differences between them. The design data of a workflow instance is result data, supervised by the data lifecycle, while that of a design process instance is temporary data, unsupervised. For example, a process planning flow instance includes such nodes as “check out the part” and “do process planning”, yet this process planning flow instance is only one node of a workflow instance. The design data of a workflow node includes the part model and the process planning files, while that of the design process flow includes the sizes, version and iteration of the part model, machine information of the process, etc.
4.2
Integration Mechanism of Process and Process Data
Developing a virtual product is a collaborative team work process whose target is to obtain complete product data; when the product data is complete, the product design is finished. During the collaborative development process, the product data is populated with process data that is used or created by the design process. The process data, which records the state changes during process execution, is either temporary data or product data. The interaction mechanism (see Fig. 3) of task, process, process data and product data can be demonstrated with a simplified design procedure for a condenser pipe exchanger (see Fig. 4).
Figure 3. Integration mechanism of design process and data
Figure 4. Structure of the condenser pipe exchanger
The details of the process integration are as follows: 1. First, the data items of the condenser pipe exchanger are constructed and added to the data lifecycle manager of the collaborative design platform. All data belonging to these items is then automatically managed by the data lifecycle when it is submitted. 2. The task “design condenser pipe exchanger” is assigned to a design team in the project manager, and the workflow engine for this task is started in the background by the task engine. 3. The principal of the design team accepts and subdivides the project task, and builds the workflow template for the task, which is used to set up a workflow instance. The node type and the executor of the workflow instance are specified before starting the instance; the node type indicates a specific design process. 4. According to the three design actions, the workflow engine sets up the associated task items and assigns the work items to designers, where they appear as tasks in personal task lists. 5. A designer accepts and starts a task from the task list. The background process engine instantiates the design process with the design task data and starts the instance. The design process engine then prepares the initial data for the design process instance and starts the man-machine design tools, such as Pro/E, after which the designer can do creative work. 6. When the man-machine design work is finished, the documents and data files are created by the design process instance and the part models are uploaded. After the designer confirms, the design work is finished, and the design process instance notifies the workflow instance that the task is complete. 7. The data lifecycle manager inspects the design data, and at the same time the workflow engine starts the next node task. With this integration mechanism, the CDP is controlled by the workflow, the design process flow and the data lifecycle together.
Design tasks are managed by the workflow; design operations are managed by the process engine; and design data is managed by the data lifecycle manager.
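The third of these managers, the data lifecycle, can be sketched as a simple state machine over result data (temporary process data stays outside it, as noted in Section 4.1). The state names and class name below are our own assumptions, not those of any particular PDM system.

```python
class DataLifecycle:
    """Toy data lifecycle manager: result data moves through fixed states;
    promotion past the final state is refused."""
    STATES = ["working", "submitted", "released"]

    def __init__(self):
        self.items = {}  # item name -> index into STATES

    def add(self, item):
        self.items[item] = 0  # every managed item starts in "working"

    def promote(self, item):
        if self.items[item] + 1 >= len(self.STATES):
            raise ValueError(f"{item} is already released")
        self.items[item] += 1

    def state(self, item):
        return self.STATES[self.items[item]]

lc = DataLifecycle()
lc.add("pipe-01.prt")      # step 1: data item added to the lifecycle manager
lc.promote("pipe-01.prt")  # step 7: designer's submission is inspected/promoted
```

A real lifecycle would also record who promoted each item and when, feeding the workflow engine's decision to start the next node.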
4.3
Design Process Shared by Component
A specific virtual product development team will use a uniform design tool for a typical product. Therefore, using the FIPER component service function, we encapsulate the collaborative design process as components and realise the integration and sharing of design processes. Fig. 5 illustrates the structure of component sharing.
Figure 5. Structure of process integration based on component
Process component services can be registered through the UDDI (Universal Description, Discovery and Integration) interface, which is the standard protocol for FIPER component services. The principle of process component integration is as follows: when the workflow engine needs to access a design process instance, it searches for the process component service through the UDDI interface and obtains the relevant service information; according to this information, the process engine starts and the design process instance is instantiated.
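The register-then-lookup principle can be sketched with a toy in-process registry standing in for UDDI. This is an illustration of the lookup pattern only; `ComponentRegistry` and its methods are hypothetical names, not the UDDI or FIPER API.

```python
class ComponentRegistry:
    """Toy stand-in for a UDDI-style registry of process component services."""
    def __init__(self):
        self._services = {}

    def register(self, name, endpoint, component):
        # Publishing side: record where the service lives and how to invoke it.
        self._services[name] = {"endpoint": endpoint, "component": component}

    def lookup(self, name):
        # Inquiry side: the workflow engine retrieves the service information.
        return self._services[name]

registry = ComponentRegistry()
registry.register("capp-process", "http://fiper.example/capp",
                  lambda part: f"process instance for {part}")

# Workflow engine side: look up the component, then instantiate the process.
info = registry.lookup("capp-process")
instance = info["component"]("pipe-01")
```

In the real system, the registry query would return a service description (e.g. a WSDL location) rather than a callable, but the control flow is the same.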
5.
Design Process Integration Based on Webservice
The design data of the collaborative design process is dynamically created and used, while PDM manages the static data of the product; design process data is therefore difficult to integrate directly into a collaborative design platform based on PDM. We therefore build a dynamic relationship from the design process to the design data with Webservices.
5.1
Description of CAPP Design Process Data Integration
As is well known, process planning relies on the manufacturing resources of the enterprise; for example, the scheme of a part process is influenced by the type and precision of the machine tools. We therefore design a standard CAPP design process, which associates the CAPP system, which has an independent database management system, with our platform and keeps the CAPP data consistent with the platform. Fig. 6
illustrates the interaction of the CAPP system, the CAD system, FIPER and the collaborative design platform.
Figure 6. Part scheme of CAPP design process integration
The data used in the standard CAPP design process belongs to three sets: manufacturing resource data, design process data and process design data. The data interactions between these systems are specified in detail in the process, so that separate management of the part process data is achieved. Data exchange uses three different mechanisms: the CAPP design process exchanges data between the collaborative design platform and FIPER through Webservices, between the CAPP system and FIPER through configuration files, and between FIPER and the CAD system through parameters and the command line.
5.2
Implementation of Webservice
According to the scheme of the standard CAPP design process, we built more than ten Webservices to exchange data between FIPER and the collaborative design platform, implemented with Microsoft .NET C#. Typical Webservices are as follows:
1. Checkout part: used to check out a part from the collaborative design platform.
2. Search document: used to search the document information of a specific part of an appointed type.
3. Search process planning information: used to search the initial information of the part process.
4. Obtain download information of part model: used to obtain download information about the part model, such as location, size and file name.
5. Create new document: used to create a new document object.
6. Create new content file: used to create a new content file.
7. Set content file state: used to set the content file state.
8. Register document: used to register a document object.
9. Check in part: used to check in the part object.
10. Task feedback: used to feed back the result of task execution.
5.3
Design and Encapsulation of CAPP Design Process
Usually, the technologist needs to see the CAD model of the part during process planning. We therefore divide the CAPP design process into several subnodes: obtain the task information of the process design, check out the part, download the model, search the part process, display the part model, start the CAPP system, execute CAPP, save the process planning files, create a new process planning document in the platform, upload the part process content files, check in the part, task feedback, etc. (see Fig. 7). We then encapsulate the whole process as a FIPER component.
Figure 7. Part of CAPP design process flow
5.4
Case Analysis
We carried out many successful tests of the CAPP design process flow in our project. In these tests, the automatic transfer of process data between the CAPP design process flow and the collaborative design platform was realised by Webservices. Using this CAPP design process flow, the designer does not need to search for the part model and documents in the platform, nor to care about the location of part models and files on the local host, so the ancillary work is simplified. With the design process in our tests, the ancillary work nodes before design were reduced from 8 to 3, and nearly 65% of the ancillary work was eliminated overall.
6.
Summary and Future Work
By analysing the requirements of the CDP, we build a theoretical collaborative design process object model and study the mechanism of interaction between the process and the process data; we then put forward an approach for sharing an integrated design process through components that use Webservices to associate the process with the process data. A scheme for process data integration with the CAPP design process is presented, carried out with XML Webservices. The realisation of this example shows that the dynamic integration of the design process and process data with Webservices and encapsulated components is feasible. In developing this integration instance of the CAPP design process, many assumptions were adopted and many unpredictable factors were ignored; the example therefore has much to be improved to fit the real process better. Furthermore, there are many types of design processes and development tools, and the design objects vary in real product development, so it is impossible to epitomise all design process flows with a single uniform one. Future work is to improve and specify the CAPP design process flow and to build more design process flow templates suited to actual design processes.
7.
References
[1] Hamani N., Dangoumau N., & Craye E., (2006) An iterative method for the design process of mode handling models, Computational Engineering in Systems Applications, IMACS Multiconference on, Volume 2:1431-1436.
[2] Marquardt W., Nagl M., (2004) Workflow and information centered support of design processes—The IMPROVE perspective. Computers and Chemical Engineering, 29:65-82
[3] Hollingsworth D., (2006) The Workflow Reference Model, Workflow Management Coalition WFMC-TC-1003 (V1.1), http://www.wfmc.org/standards/docs/tc003v11.pdf (Accessed: 11.2007)
[4] Mesihovic S., Malmqvist J., Pikosz P., (2004) Product data management system-based support for engineering project management. Journal of Engineering Design, 15(4):389-403
[5] Brandt S.C., Morbach J., Michalis M., Manfred T., (2008) An ontology-based approach to knowledge management in design processes, Computers and Chemical Engineering, 32:320-342
[6] Jarke M., List T., & Köller J., (2000) The challenge of process data warehousing. In Proceedings of the 26th International Conference on Very Large Databases—VLDB
[7] Indrusiak S.L., Murgan T., Glesner M., & Reis R., (2005) Consistency Control in Data-driven Design Automation Environments, Signals, Circuits and Systems, ISSCS 2005, International Symposium on, 2:629-632
[8] Chen Y.H., Liu W.J., Peng G.L., (2006) Modeling and Analysis of the Concurrent Design Process. Computer Supported Cooperative Work in Design, 10th International Conference on, 1-4
[9] Gao X.Q., Li Z.B., Li S.C., Wu F., (2006) Modeling and Analyzing Concurrent Design Process for Manufacturing Enterprise Information Systems, Systems, Man and Cybernetics, ICSMC '06, IEEE International Conference on, 6:4999-5003.
Information Modelling Framework for Knowledge Emergence in Product Design
Muriel Lombard1, Pascal Lhoste2
1 CRAN UMR 7039, UHP, Nancy University, Faculty of Science and Technology, BP 239, F-54506 Vandoeuvre-lès-Nancy, [email protected]
2 ERPI EA 3767, INPL, 8 rue Bastien Lepage, F-54000 Nancy, [email protected]
Abstract The integration of CAD and CAM (Computer Aided Design/Computer Aided Manufacturing) still does not include all the tools needed to support the activities related to the product life cycle. Indeed, problems of semantic representation and of data viewpoints remain. This article therefore reports on CAD/CAM integration by highlighting the emergence of the knowledge handled. Keywords: Integration, knowledge, models, meta-model, CAD/CAM, NIAM/ORM
1
Introduction
Since the 1980s, with the advent of CIM (Computer Integrated Manufacturing) and then concurrent engineering, the integration of software supporting design and product manufacturing has been a major objective. Today these integration problems are still a hot topic, because integration has too often been approached from a software point of view without taking the semantic trade aspects into account. It is not enough to raise the question, however delicate, of how to federate the common objects between several software tools, even though these objects are difficult to model because they present characteristics of synonymy, hyponymy and hyperonymy. It suffices to refer to the bibliographical study made by [1] of CAD/CAM integration for manufactured products: this study shows that many research works have dealt with the design and manufacturing trades without ever seeking their integration. These trades are partially covered by software tools that assist the design actors with CAD tools (Computer Aided Design) and the manufacturing actors with CAM tools (Computer Aided Manufacturing). These software tools use various objects, which can be common to several CAX tools of the same trade or to several different trades. The shortfalls in terms of integration come precisely from the difficulties of defining and characterising these common objects,
being located at the interface between two (or more) trades and not always carrying the same meaning from one trade to another. As can be seen, a cognitive modelling approach should be better able to represent the semantics handled by the trades to be connected, and would then help to guarantee the coherence of the handled objects and thus allow the modelling of their integration. To meet these formalisation needs, the modelling method NIAM (Nijssen Information Analysis Method) [2], alias ORM (Object Role Modelling) [3], is used in what follows. It has a graphical formalism associated with a linguistic analysis method allowing validation by non-specialists, and it is based on an extended entity-association model. This approach confers a strong capacity for semantic expression, used to formalise the objects and their relations when considering a given trade or “Universe of Interest”. A first response to the identification, or emergence, of knowledge objects [4] was brought by the interpretation of the substantivation mechanism (mechanism of transformation into a substantive) proposed by the NIAM/ORM method. Nevertheless, it is necessary to formalise the deployment of this mechanism in particular contexts of use. Thus, after illustrating the emergence principle of knowledge objects to support the integration of different Universes of Interest, based on a generic reference model, we characterise the various possible relations, considering the proposals made by [5], and illustrate their deployment with examples.
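The substantivation (objectification) mechanism mentioned above — a fact relating two objects is itself turned into an object that can carry further facts — can be sketched in a few lines. This sketch uses our own names (`Fact`, `KnowledgeObject`) and is only an analogy for the NIAM/ORM construct, not a formal implementation of it.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Fact:
    """A binary fact type verbalised as 'subject <role> object', e.g.
    'CAM Product Model is used to generate Process Planning Model'."""
    subject: str
    role: str
    obj: str

@dataclass
class KnowledgeObject:
    """Substantivation: the fact itself becomes an object, which can
    then carry facts of its own (here, contextual annotations)."""
    fact: Fact
    annotations: dict = field(default_factory=dict)

generation = Fact("CAM Product Model", "is used to generate",
                  "Process Planning Model")
ctx = KnowledgeObject(generation)
ctx.annotations["context"] = ("contextualized knowledge of "
                              "process planning generation")
```

The point of the mechanism is exactly what the second class shows: once the relation is reified, knowledge *about the relation* has a place to live.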
2
Integration Contribution to the Trade Ontology Definition
Let us consider two disjoint Universes of Interest, related respectively to CAD and CAM. One of the problems of the French RNTL USIQUICK project, presented in [1], concerns the installation of a genuinely integrated CAD/CAM chain. The first question is thus how to give to a CAM tool a Product Model defined by CAD, which is needed for the generation of the manufacturing process planning. As illustrated in Figure 1, these two objects belong to two different Universes of Interest.
Figure 1. How to connect two different universes of interest?
Usually, process plans are generated from a CAM Product Model, whose semantics is comprehensible only within the CAM Universe of Interest. The bibliographical study made by [1] presents the state of the art of process planning research and the results associated with this research domain. For example, the Product Model used by PROPEL [6] is based on the concept of manufacturing
entity and not of a design entity; hence, it constitutes a CAM Product Model (Figure 2) and not a CAD Product Model. In binary natural language: each Process Plan is generated from one or several CAM Product Models, and one CAM Product Model is used to generate one or several Process Plans. The “contextualized knowledge of process planning generation” is a substantivation (transformation into a substantive) of the relation between the CAM Product Model and the Process Planning Model.
Figure 2. NIAM/ORM formalization of a process planning generation and its equivalence in binary natural language
Figure 3. Identification of the studied objects to be connected between CAD and CAM universes
244
M. Lombard and P. Lhoste
Thus, it appears that the objects of study identified in Figure 1 are not the right ones, and that the problem of relating CAD and CAM could be solved by connecting the Product Models existing in each Universe of Interest (Figure 3). Both Product Models are based on the concept of entity. However, the handled entities have different meanings depending on the Universe of Interest to which they belong: there is no bijection between an entity of the CAD product model and an entity of the CAM product model. Moreover, as underlined in [1], there is also a semantic gap between these Universes of Interest. Hence, one of the key issues in connecting these two product representations rests on the "knowledge of mapping", which may contribute to the integration of the two Universes of Interest. Taking as a starting point the various levels of integration defined in Software Engineering, several solutions can be envisaged to connect the CAD Universe with the CAM Universe. It is thus possible to consider:
- simple connection of tools, which requires the development of as many interfaces (pre- and post-processors) as there are tools to be connected;
- a low level of integration, where each software tool preserves its own data structure but shares some common objects with the other tools. This pooling is not coded in the tools, as in the case of simple connection, but is materialized "physically" by a "neutral" support (a neutral file or database). Interfaces specific to each tool connect it to this support. Contrary to the case of simple connection, the existence of this support can guarantee the persistence and integrity of the handled common objects, provided that each tool guarantees coherence between its own objects and those it shares;
- a high level of integration, corresponding to a pooling of all the objects handled by the various tools concerned. The problems of object integrity are then solved, since the information handled by the tools is necessarily part of the "core" of common objects.
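The low level of integration can be illustrated by a minimal sketch in which a neutral pivot model carries the shared objects and a mapping rule base crosses the semantic gap. All names and the geometry-to-operation table are assumptions made for illustration, not the project's actual data structures.

```python
# Illustrative sketch (not from the paper): low-level integration through a
# neutral pivot model, in the spirit of the "Product CAD/CAM Model" of
# Figure 4. All names and the geometry-to-operation table are assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DesignEntity:            # entity as seen in the CAD Universe of Interest
    name: str
    geometry: str              # e.g. "planar face", "pocket contour"

@dataclass
class ManufacturingEntity:     # entity as seen in the CAM Universe of Interest
    name: str
    operation: str             # e.g. "face milling", "pocketing"

@dataclass
class NeutralEntity:           # pivot object shared by both universes
    name: str
    geometry: str
    operation: Optional[str] = None

def enrich_from_cad(e: DesignEntity) -> NeutralEntity:
    """The CAD view enriches the neutral model ("enriches" in Figure 4)."""
    return NeutralEntity(name=e.name, geometry=e.geometry)

# "Knowledge of mapping": the only place where the semantic gap between the
# two universes is crossed.
GEOMETRY_TO_OPERATION = {"pocket contour": "pocketing",
                         "planar face": "face milling"}

def map_to_cam(n: NeutralEntity) -> ManufacturingEntity:
    n.operation = GEOMETRY_TO_OPERATION[n.geometry]
    return ManufacturingEntity(name=n.name, operation=n.operation)

pocket = map_to_cam(enrich_from_cad(DesignEntity("p1", "pocket contour")))
print(pocket.operation)  # pocketing
```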
Figure 4. Emergence of the “Product CAD/CAM Model” object and enrichment knowledge
The simple connection considered in Figure 3 is not retained because of the major disadvantages it presents; moreover, it is not sufficient to support the required level of integration of CAD/CAM tools. Let us therefore consider the first level of integration, which aims to connect objects via an intermediate neutral object. The mapping relation between the "Product CAD Model" and the "Product CAM Model" translates and supports the processing needed for information exchange between these two objects. Since this relation cannot be made explicit directly, it is necessary to create an intermediate object, the "Product CAD/CAM Model" (Figure 4), which can be seen as resulting from the emergence principle exposed in [4]. This "Product CAD/CAM Model" is to be brought closer to the "Enriched Product Model" proposed by [6, 7] to support the data handled by the transformer developed in the French RNTL project USIQUICK. To define this new object as a pivot between the two others, we explain the relations between it and each Universe of Interest to which it is connected (Figure 4).
3
Collaborative Work Contribution to a CAD/CAM Trade Ontology Definition
The objects studied previously come from Universes of Interest where there are CAx tools, but also human actors to handle them. These actors evolve within a collaborative process of product design. Figure 5 illustrates this process.
Each Product Model is designed by one or more Actor(s). An Actor designs one or more Product Model(s). A Product Model_A corresponds to one or more Product Model_B. An Actor_A collaborates with one or more Actor_B.
Figure 5. The collaborative process of product design
The reflexive relation (1) on "Product Model" highlights the mapping of the various models handled by actors throughout the product design/realization cycle. This process implements specific design knowledge (2), resulting from the connection between a "Product Model" and an "Actor". This knowledge meets a design need in terms of product definition; it brings into play skills specific to the actor's field of expertise (or Universe of Interest), which enable him to carry out his activity. The process also relies on a group of actors who have to collaborate to ensure the success of their common objective, namely the product definition. This collaboration is represented by the reflexive relation (3) on "Actor".
To propose a synthesis of collaborative work in CAD/CAM, it is necessary, for better clarification, to detail and instantiate the model suggested in Figure 5 by proposing a development of it in Figure 6.
Figure 6. Synthesis of collaborative work in CAD/CAM: emergence of the "CAD/CAM Actor"
The relation (1) of Figure 6 between the CAD and CAM Universes is based on the "CAD/CAM Product Model"; this part of the model reuses the results of Figure 4. Design is carried out by actors: for each Universe of Interest, there is at least one actor carrying out this activity. The relation (2) of Figure 6 highlights the exploitation, by the actors, of their "specific design knowledge". This knowledge brings into play competences specific to a Universe of Interest; thus the actors of the CAD and CAM universes employ specific trade knowledge. To contribute to the common objective, namely the definition of the product throughout its life cycle, the actors of the various trades taking part in this process must collaborate to guarantee the accuracy and coherence of their work. The relation (3) of Figure 6 expresses this need for collaboration.
Applying the emergence principle (already used for the definition of the intermediate object "CAD/CAM Product Model") to this need for collaboration leads to the emergence of a "CAD/CAM Actor", making it possible to guarantee the exchanges between the CAD and CAM universes. These exchanges require the definition and implementation of popularization knowledge [8], a kind of interface between actors. This "CAD/CAM Actor" has the objective of designing the "CAD/CAM Product Model". The relation (2) of Figure 6 highlights the exploitation, by this new actor, of knowledge resulting from the CAD and CAM Universes. The existence of relation (2) corresponds to the implementation of trade knowledge in response to a new design need relative to the "CAD/CAM Product Model".
4
Contribution of Software Engineering to Semantic Typology of Relations Between Knowledge Objects
To generalize the emergence structure used previously, we propose to study possible interpretations of this reference model according to the nature of the relation binding two objects. The work of Favre [9] is used to identify the various types of relation, namely:
- G: Is composed of. A system is very often defined as a complex set of more elementary parts; this relation represents the decomposition of systems into subsystems, and so on.
- P: Describes / Is described by. A model is a representation of a system under study (SUS for short). This relation is the key of modelling. Sometimes a distinction is made between specification models, which represent a system to be built, and descriptive models, which describe an existing system.
- H: Composes. This relation represents the decomposition of a system into subsystems, allowing complex systems to be defined by breaking them up so as to simplify them.
- F: Conforms to / Is conformed to by. This relation defines the notion of metamodel with respect to a model: a model must conform to its metamodel.
Compared to these definitions and to the models previously presented, we propose in Figure 7 to formalize the relations suggested by [9] in NIAM/ORM and to add a new type of relation allowing objects of different universes to be put in correspondence. This relation is not present in the proposal of [9], because that work is placed in a single, homogeneous Universe of Interest. This new type of relation, named O, expresses the need to map knowledge objects belonging to different Universes of Interest, as illustrated in the models previously presented.
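The five meta-relation types can be sketched as a small typed vocabulary in which only the new O relation is allowed to cross Universes of Interest. This is a hypothetical illustration; the class and enum names are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not from the paper): the five meta-relation types as
# a typed vocabulary over knowledge objects.

from enum import Enum
from dataclasses import dataclass

class RelationType(Enum):
    G = "is composed of"     # decomposition of a system into subsystems
    H = "composes"           # inverse direction of G
    P = "describes"          # model <-> system under study
    F = "conforms to"        # model <-> metamodel
    O = "corresponds to"     # NEW: mapping across Universes of Interest

@dataclass(frozen=True)
class KnowledgeObject:
    name: str
    universe: str            # Universe of Interest the object belongs to

@dataclass(frozen=True)
class Relation:
    kind: RelationType
    subject: KnowledgeObject
    obj: KnowledgeObject

def check(rel: Relation) -> None:
    # Only O-relations may cross Universes of Interest; G, H, P and F are
    # defined within a single homogeneous universe in [9].
    crosses = rel.subject.universe != rel.obj.universe
    if crosses and rel.kind is not RelationType.O:
        raise ValueError(f"{rel.kind.name} cannot cross universes")

cad = KnowledgeObject("Product CAD Model", "CAD")
cam = KnowledgeObject("Product CAM Model", "CAM")
check(Relation(RelationType.O, cad, cam))          # fine: mapping relation
try:
    check(Relation(RelationType.F, cad, cam))      # F across universes: rejected
except ValueError as e:
    print(e)  # F cannot cross universes
```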
Figure 7. NIAM/ORM definition of meta-relations in conceptual modeling
5
Application of the Semantic Typology of Relations on an Aircraft Example
The example presented in Figure 8 shows the customization of various reference models, detailing those proposed in [10].
Meta-Modelling Reference Model: the aim is to establish a reference aircraft CAD model from a CAx meta-model corresponding to a trade level, sufficiently generic to be particularized thereafter into a specific aircraft model. All the rules and/or constraints that could define its future design and production are in this aircraft CAD model.
Modelling Reference Model: it is a physical object model. Thus, the "physical" aircraft XYZ can be modelled by combining a cycle of abstraction (from the physical world) and customization (from the concepts of the world, i.e. the "aircraft CAD model").
Decomposition Reference Model: in this example, it aims to propose an organic decomposition view. Other decompositions can be considered depending on the point of view adopted. This level of the reference model should be the basis for the decomposition envisaged in PLM (Product Lifecycle Management) tools for configuration management [11].
Connection Reference Model: it aims to connect different worlds of objects, or different trades. This can be done at the physical level, as in this example, in which case it concerns the study of the materialization of the connection; but it may also connect information objects, considering their integration at different levels. The application of the connection reference model has helped to define the emergence of a new object, following the substantivation between the objects to be linked, as a low-level integration in the sense of Software Engineering.
Figure 8. Illustration of types of relationship
6
Conclusion
Proposing all these models, characterized by the types of relations they implement, is a first step in information modelling towards the definition of a methodological framework for knowledge modelling throughout the design and manufacturing cycle. In order to obtain a complete environment, it is necessary to define all the semantics released by the typology of relations within these reference models. Moreover, these models need to be considered in a trade framework in order to propose a domain ontology. As a perspective, it is necessary to study the rules and knowledge definitions allowing the identification of the relation type and its deployment within a reference model, according to the point of view we have on the system to model:
- the definition of compliance rules to ensure the integrity of the syntax and semantics manipulated during the passage from one Universe of Interest to another. Moving from one Universe to another can be done at different levels of abstraction, but can also be done at the same level, as in the case of the study we presented;
- the definition of knowledge in specific Mechanical Engineering trades to establish the emergence of new knowledge objects. This proposal helps to identify and justify such knowledge by showing the particular objects that are involved.
7
References
[1] Derigent W (2005) Méthodologie de passage d'un modèle CAO vers un modèle FAO pour les pièces aéronautiques : prototypage logiciel dans le cadre du projet USIQUICK. PhD thesis, Henri Poincaré University, Nancy I (in French)
[2] Nijssen GM, Halpin T (1989) Conceptual Schema and Relational Database Design. Prentice Hall, Sydney, Australia
[3] Halpin T, Object Role Modelling resources, http://www.orm.net/
[4] Mayer F (1995) Contribution au génie productique : application à l'ingénierie pédagogique en Atelier Inter-établissements de Productique Lorrain. PhD thesis, Henri Poincaré University, Nancy I (in French)
[5] Favre JM (2004) Towards a basic theory to model driven engineering. Workshop on Software Model Engineering, WiSME@UML 2004, Lisbon, Portugal, October 2004
[6] Harik R, Capponi V, Lombard M, Ris G (2006) Enhanced functions supporting process planning for aircraft structural parts. IMACS Multiconference on Computational Engineering in Systems Applications (CESA'2006), pp 1259-1266, October 4-6, 2006, Beijing, China. IEEE Catalog Number 06XE1583, ISBN 7-302-13922-9
[7] Harik R, Capponi V, Derigent W (2007) Enhanced B-Rep graph-based feature sequences recognition using manufacturing constraints. CIRP Design Seminar, The Future of Product Development, Berlin, Germany, March 26-28, 2007
[8] Lombard M, Gzara-Yesilbas L (2006) Towards a framework for formalized exchanges during collaborative design. Mathematics and Computers in Simulation, Vol 70, Issues 5-6, pp 343-357, ISSN 0378-4754
[9] Favre JM (2005) Megamodelling and etymology: a story of words, from MED to MDE via MODEL in five millenniums. Dagstuhl Seminar 05161 on Transformation Techniques in Software Engineering, Dagstuhl, Germany, ISSN 1862-4405, published by IBFI
[10] Lombard M (2006) Contribution de la modélisation informationnelle aux processus de conception et réalisation de produits manufacturiers : vers une ontologie métier. Habilitation à Diriger des Recherches, Henri Poincaré University, Nancy I
[11] Zina S, Lombard M, Lossent L, Henriot C (2006) Generic modeling and configuration management in Product Lifecycle Management. International Journal of Computers, Communications & Control, Vol I, No 4, pp 126-138
Flexible Workflow Autonomic Object Intelligence Algorithm Based on Extensible Mamdani Fuzzy Reasoning System
Run-Xiao Wang(1), Xiu-Tian Yan(2), Dong-Bo Wang(1), Qian Zhao(1)
(1) Institute of Manufacture Automation Software and Information, Northwestern Polytechnical University, Xi'an, 710072, China; (2) Department of DMEM, University of Strathclyde, Glasgow, G1 1XJ, UK
Abstract In order to improve the intelligence of flexible workflow under uncertainty, especially in the presence of fuzzy information, a flexible workflow based on multiple Autonomic Objects (AOs) is proposed. The architecture of the AO-based intelligence approach, as well as the principles of AO monitoring and execution, are studied. Building on these, an AO intelligence algorithm based on an extended Mamdani fuzzy reasoning system is proposed; the architecture of AO fuzzy reasoning, the expression of AO knowledge, and the AO weighted fuzzy reasoning algorithm are investigated in detail. Finally, the AO intelligence algorithm is demonstrated by a case study, followed by a detailed example. Keywords: flexible workflow; autonomic computing; fuzzy reasoning
1.
Introduction
Flexible workflow is a kind of workflow that can adapt rapidly to changes in the workflow environment, conditions and execution status without redesigning the workflow model. With the continuous change of enterprise environments and targets, uncertainty and variability have become inherent characteristics of enterprise processes, and how to improve the flexibility of workflow has become an imperative research topic in the workflow field [1, 3]. However, most present research pays attention to passive responses of workflow, so the intelligence of flexible workflow urgently needs to be enhanced. Flexible workflow is a dynamic and changeable process that must deal with many uncertainties and much fuzzy information and knowledge; this work aims to improve the intelligent capability of flexible workflow under uncertain conditions. After a brief literature review of flexible workflow, the Autonomic Object (AO) and the multi-AO flexible workflow are defined, and the monitoring and execution of an AO are described. Then a flexible workflow AO intelligence algorithm is proposed based on extended Mamdani fuzzy reasoning: the general architecture of the AO fuzzy reasoning system is given, and the expression of AO knowledge and the AO fuzzy reasoning algorithm are investigated in detail. Finally, the flexible workflow AO intelligence algorithm is demonstrated in a case study.
252
R. X. Wang, X.T. Yan, D.B. Wang and Q. Zhao
2.
Literature Review of Flexible Workflow Intelligence
2.1
Research of the Intelligence of Flexible Workflow
Recently, many experts and scholars have engaged in studies on the intelligence of flexible workflow. For example, Hermosillo et al. [2] described research on decision-making and support systems for workflow. Chunga et al. [3] detail the support of flexible workflow management by applying ontologies, agents and knowledge. Muller et al. [4] realised exception handling in workflow by using agent- and rule-based methods. Shu et al. [5] proposed a systematic knowledge-based agile workflow model, and a workflow management system model based on an expert system is researched in [6]. These methods make a beneficial contribution to the enhancement of the intelligence of flexible workflow, but a detailed analysis shows that the methods based on agents and expert systems address workflow intelligence only in a general way: combining a multi-agent expert system with a workflow system, and actually applying the resulting intelligence to the workflow, remain inadequately treated. Moreover, research on flexible workflow based on knowledge and rules suffers from the limited knowledge captured, as it is impossible to describe the overall architecture of the intelligence of flexible workflow using only limited knowledge and rules. More importantly, these methods appear able to cope only with problems that are accurately and precisely defined; there is no attempt to address problems with uncertainties. On the other hand, fuzzy reasoning has been researched in other fields; for example, Hermosillo et al. [2] introduced the construction of a decision-making system based on fuzzy reasoning, but give little detail on how the intelligence was actually realised and on the associated method. This research aims to bridge this gap.
2.2
Example of Flexible Workflow
In order to better describe the flexible workflow using the AO intelligence algorithm, a case study company, with a subcontracting production process for aviation manufacturing enterprises, is introduced in this paper. In recent years, the variety of part types produced by different enterprises has been growing, and the strategy adopted by the core suppliers of several world aviation giants requires that an enterprise's subcontracting management system not only change flexibly with the requirements, but also possess an intelligence that can ensure the subcontractors' production processes meet different dynamic order requirements. This enterprise's typical subcontracting flow is shown in Fig. 1; the gray parts show the new nodes added after an operation of the workflow multi-AOs.
Flexible Workflow Autonomic Object Intelligence Algorithm
253
Fig. 1. Example of subcontract workflow
3.
The Principle of the Intelligence of Flexible Workflow based on Multi-AO
3.1
The Structure of the Flexible Workflow based on Multi-AO
The concept of autonomic computing comes from the theory of human biology; autonomic computing can improve the autonomy of managed resources through one or several Autonomic Managers (AMs). With autonomic computing as its core technology, together with ideas from agent theory, the concept of flexible workflow based on multiple AOs is proposed in this paper.
Definition 1 [Autonomic Object]: an Autonomic Object (AO) is an intelligent entity based on autonomic computing and embedded in a flexible workflow activity:
AO = ⟨Monitor, Analyzer, Planner, Executive, Knowledge, Touchpoint⟩
where Monitor, Analyzer, Planner and Executive are the monitoring, analyzing, planning and executing rules; Knowledge is the AO knowledge set; and Touchpoint = ⟨Sensor, Effector, Orchestrator, Manual⟩ is the contact manager of the AO: the Sensor realizes state detection of the managed resources and information collection, the Effector performs operations on the managed resources, the Orchestrator is the coordinator, and the Manual is the manual manager.
Definition 2 [flexible workflow based on multiple Autonomic Objects]:
AO_FW = ⟨T, AO, L, D, O, R⟩
where T = {t_i | i = 1, …, n} is the workflow activity set, and AO is the set of autonomic objects embedded in the flexible workflow activities.
Definition 3 [flexible activity]: a flexible activity is an activity of the flexible workflow:
t_i = ⟨ID, Type, D_in, D_out, Extend_A, Extend_F, S, Con_start, Con_end, Router, O_i, R_j⟩
where Extend_A is an extended attribute set describing the activity attributes; Extend_F is an extended method set depicting the operations of the activity; Router is a dynamic router expressed mainly by ECA (Event Condition Action) rules; the other elements are not explained in detail in this paper.
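Definitions 1 to 3 can be sketched as data structures. This is a minimal illustration of the tuples in the text, assuming Python types and field defaults that the paper does not specify, and omitting the elements the paper leaves undetailed.

```python
# Illustrative sketch (not from the paper's code): Definitions 1-3 as Python
# data structures. Field names follow the tuples in the text; types and
# defaults are assumptions, and undetailed elements (L, D, O, R, S, ...) are
# omitted.

from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Touchpoint:                        # contact manager of the AO (Def. 1)
    sensor: Callable[[], dict]           # state detection / info collection
    effector: Callable[[Any], None]      # applies operations to resources

@dataclass
class AutonomicObject:                   # Definition 1
    monitor: list
    analyzer: list
    planner: list
    executive: list
    knowledge: dict
    touchpoint: Touchpoint

@dataclass
class FlexibleActivity:                  # Definition 3: activity t_i (partial)
    id: str
    type: str
    extend_a: dict = field(default_factory=dict)   # extended attribute set
    extend_f: dict = field(default_factory=dict)   # extended method set
    router: list = field(default_factory=list)     # ECA rules

@dataclass
class FlexibleWorkflow:                  # Definition 2: AO_FW (partial)
    activities: list[FlexibleActivity]   # T = {t_i | i = 1..n}
    aos: dict[str, AutonomicObject]      # one AO embedded per activity

wf = FlexibleWorkflow(activities=[FlexibleActivity("t1", "task")], aos={})
print(len(wf.activities))  # 1
```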
3.2
The Intelligent Architecture of AO
The changeable intelligent treatment of flexible workflow is realized by multiple AOs embedded in the nodes. The change of flexible workflow can be defined as:
Definition 4 [flexible workflow change]: a flexible workflow change γ is the difference between a dynamic instance WI_i and the expected model WI_0 during workflow execution; it can be defined as γ = WI_i − WI_0, where "−" expresses the difference between instances. When γ ≠ ∅, the activity of the AO is triggered: the AO first obtains the instance variables of the dynamic instance through its Sensor, then matches and reasons over the input instance variables and its knowledge, and finally executes the reasoning result through its Effector. This realizes the intelligence of the flexible workflow. Because flexible workflow is a highly dynamic system whose knowledge is uncertain and contains a large amount of fuzzy information (the interdependent relationships between events have the ambiguity of "may be or may be not"), solving the AO fuzzy problem is crucial for flexible workflow AO intelligence. Fig. 2 shows the architecture of flexible workflow intelligence.
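Definition 4's trigger condition can be sketched as follows, assuming workflow instances are represented as attribute dictionaries; the difference operator, the reasoning step and the effector are stand-ins for the mechanisms described later in the paper.

```python
# Illustrative sketch (not from the paper): Definition 4's trigger condition.
# gamma is computed as the attributes where the dynamic instance WI_i
# deviates from the expected model WI_0; a non-empty gamma activates the AO.

def workflow_change(wi_i: dict, wi_0: dict) -> dict:
    """gamma = WI_i - WI_0: attributes whose observed value deviates."""
    return {k: v for k, v in wi_i.items() if wi_0.get(k) != v}

def ao_cycle(wi_i: dict, wi_0: dict, reason, effector) -> None:
    gamma = workflow_change(wi_i, wi_0)
    if gamma:                      # gamma is non-empty: the AO is triggered
        result = reason(gamma)     # match gamma against the AO knowledge
        effector(result)           # execute the reasoning result

expected = {"status": "running", "order_count": 100}
observed = {"status": "running", "order_count": 180}
applied = []
ao_cycle(observed, expected,
         reason=lambda g: f"re-plan for {g['order_count']} orders",
         effector=applied.append)
print(applied)  # ['re-plan for 180 orders']
```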
3.3
The Monitoring and Execution of AO
The AO's intelligent reasoning evidence mainly comes from the instance variables of the flexible workflow dynamic instance WI_i, acquired by a Sensor.
Definition 5 [instance variable]: an instance variable is an execution attribute of the current workflow instance WI_i. The instance variable set is
V = {V_k | k = 1, …, m} = V_e ∪ (⋃_{i=1,…,n} V_{t_i})
where V_e is the set of environment variables, and V_{t_i} = V_{t_i}^a ∪ V_{t_i}^f ∪ V_{t_i}^s ∪ V_{t_i}^c ∪ V_{t_i}^d is the set of instance variables corresponding to flexible workflow node t_i, its subsets corresponding respectively to the node's extended attributes, methods, node state, router rules and instance data. The output U of the AO is a series of operations on the flexible elements of the workflow instance. Part of the operation set of flexible workflow elements is shown in Table 1.
Fig. 2. The architecture of flexible workflow intelligence
Table 1. Part of the operation set of flexible workflow elements
Element: Extended attribute. Operation set (U): {Select(t_i, x, y), Insert(t_i, x, y), Delete(t_i, x, y), Update(t_i, x, y)}
*: x is the operation object, y is the value of the operation object, and t_i is a node of the workflow.
4.
AO Intelligence Algorithm based on the Extended Mamdani Fuzzy Reasoning System
4.1
The Autonomic Object Fuzzy Reasoning System
The Mamdani fuzzy reasoning system is a typical fuzzy reasoning system. However, in flexible workflow applications a large number of fuzzy reasoning processes coexist with precise reasoning processes. If the AO reasoning system adopts only a single fuzzy reasoning system, then some reasoning becomes needlessly complicated for simple problems; the AO must therefore realize precise reasoning at the same time as fuzzy reasoning. On the basis of the Mamdani reasoning system, an Extended Mamdani (EM) fuzzy reasoning system realizing hybrid precise and fuzzy reasoning is proposed in this paper; it yields more realistic results in the reasoning process. The architecture of the EM fuzzy reasoning system is shown in Fig. 3.
Fig. 3. The system of EM fuzzy reasoning
In Fig. 3, the instance variable V is the input; according to different rules it can become fuzzy reasoning evidence through fuzzification, or directly become precise reasoning evidence. There are three kinds of rule matching: fuzzy rule matching, precise rule matching and hybrid rule matching. Accordingly, precise reasoning is achieved with precise evidence and precise rules, fuzzy reasoning with fuzzy evidence and fuzzy rules, and hybrid reasoning with the combination of fuzzy evidence, precise evidence and hybrid rules. Precise reasoning produces precise results, while fuzzy and hybrid reasoning produce fuzzy results. After precise reasoning, the system checks whether the result is an operation in U; if it is, it is output as a result. Otherwise, the reasoning conclusion is added to the precise evidence as an intermediate fact; meanwhile the intermediate fact is fuzzified and added to the fuzzy evidence. When results are obtained from the fuzzy reasoning system, if after defuzzification the result is an operation in U, it is output as the final result; if not, it is added to both the precise and the fuzzy evidence as a further intermediate result. EM fuzzy reasoning realises hybrid precise and fuzzy reasoning, makes fuzzy reasoning more flexible, and lets the AO utilize both precise and fuzzy rules effectively.
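The feedback loop of the EM architecture can be sketched for the precise-reasoning path alone; fuzzification and fuzzy matching are elided, so this is only a stand-in for the full hybrid system, with a made-up toy rule base.

```python
# Illustrative sketch (not from the paper): feedback loop of the EM hybrid
# reasoning architecture (Fig. 3), shown for the precise-reasoning path only.
# Rules are (premises, conclusion) pairs; is_operation tests membership in U.

def em_reason(evidence: set, rules, is_operation) -> list:
    """Fire rules to saturation; conclusions that are operations in U are
    output, and every conclusion is fed back as an intermediate fact."""
    outputs, changed = [], True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= evidence and conclusion not in evidence:
                if is_operation(conclusion):
                    outputs.append(conclusion)   # result is an operation in U
                evidence.add(conclusion)         # feed back as intermediate fact
                changed = True
    return outputs

# Toy rule base: a huge order implies low capacity, which implies inserting
# a subcontracting node (an operation of the form listed in Table 1).
rules = [({"order_huge"}, "capacity_low"),
         ({"capacity_low"}, "Insert(t_i, subcontract, extra)")]
ops = em_reason({"order_huge"}, rules,
                is_operation=lambda c: c.startswith("Insert"))
print(ops)  # ['Insert(t_i, subcontract, extra)']
```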
4.2
The Expression of AO Knowledge
In AO knowledge, the reasoning rule library is the core. From the architecture of the AO fuzzy reasoning system, AO knowledge can be divided into precise knowledge and fuzzy knowledge. The precise knowledge of the AO is denoted P(x_1, x_2, …, x_i, …, x_n), where P is a predicate and each x_i is an individual that can be a constant, a variable or a function. Rules are the typical form of knowledge expressing causal relationships. Given the dynamic and changeable process of flexible workflow, knowledge is usually uncertain. AO intelligence computing usually
involves flexible operations after analysing several relevant instance variables that may be mutually dependent. Uncertainty production rules based on weighted factors are adopted to express AO rule knowledge, as shown in formula (1):
IF E_1 (ω_1) AND … AND E_n (ω_n) THEN H (CF(H, E), λ)   (1)
For AO fuzzy basic knowledge, the typical expression uses fuzzy language; the general form is P(x, A), typically "x is A", where P is a predicate expressing the concrete meaning of the knowledge, x is the domain variable representing the attribute of the object discussed, and A = ∫_{u∈U} μ_A(u)/u is a fuzzy concept depicted by the corresponding fuzzy set and membership function. Considering the fuzziness and uncertainty of AO knowledge and the dependency of instance variables, a fuzzy production rule based on credible weighted factors is used to express an AO fuzzy rule, as shown in formula (2):
IF E_1 (ω_1) AND E_2 (ω_2) AND … AND E_n (ω_n) THEN H (CF)   (2)
where E_i: "x_i is A_i" (CF_i) is a simple knowledge premise, x_i is a variable, and A_i is a fuzzy set in domain U_i; H: "y is B" (CF) is the conclusion; CF_i is the credibility of the premise, CF is the credibility of the conclusion, and ω_i is the weight of the premise.
4.3
The AO Fuzzy Reasoning Algorithm
Because EM fuzzy reasoning is a hybrid reasoning process, firstly the problem of the matching precise fact and fuzzy knowledge, fuzzy fact and exact knowledge must be solved, and it is respectively realized by fuzzy algorithm and solving ambiguity algorithm in extended EM fuzzy reasoning model. For present weighed fuzzy reasoning, most studies only investigated absolute matching condition between fuzzy evidence and fuzzy knowledge premise, and didn’t give detailed description about how to get membership function of fuzzy conclusion under the condition that fuzzy evidence and fuzzy knowledge are similar or not exactly equal. In the following section, the weighed fuzzy reasoning algorithm is given first, the exact and fuzzy hybrid reasoning algorithm is also proposed. The algorithm is shown as followers: 1᧥Computing knowledge premise intersection A : because of the introduction of weight, every premise fuzzy set is averaged with weight, the construction style of knowledge premise intersection is Z1 u A1 Z2 u A2 ... Zn u An . This construction style equals to multiplying coefficient
Zi
to every knowledge
premise fuzzy set, considering the intersection of fuzzy set actually is to get the minimum operation from membership function. So from the balanced perspective of formula construction, the intersection of knowledge premise should divide
258
R. X. Wang, X.T. Yan, D.B. Wang and Q. Zhao
the average weight (Σⁿᵢ₌₁ ωi)/n on the basis of the primary construction; the weighted fuzzy reasoning premise is then as shown in formula (3).
2) Construct the fuzzy relationship R(A, B) between A and B.
3) Compute the evidence intersection A′ by formula (3), then compute B′ by the composition of A′ and R: B′ = A′ ∘ R(A, B).
The reasoning of a flexible workflow AO includes not only purely precise reasoning, fuzzy reasoning and their hybrid, but also rules that combine precise and fuzzy knowledge, such as "when the order is huge and the customer is the RR company, then ...". There has been little research on this kind of hybrid knowledge reasoning, and it is one of the contributions of this work. The hybrid knowledge reasoning algorithm is proposed as follows:
1) For fuzzy knowledge, compute the matching degree of the fuzzy evidence; for precise knowledge, check whether the evidence equals the knowledge.
2) Fuzzify the precise knowledge and evidence in the hybrid rule by means of fuzzy membership functions.
3) Compute the intersections of the hybrid knowledge A and the evidence A′ respectively; the result is B′ = A′ ∘ R(A, B). If an exact output is needed, it can then be calculated by defuzzification.
A = (n / Σⁿᵢ₌₁ ωi) · (ω1·A1 ∩ ω2·A2 ∩ ... ∩ ωn·An)    (3)
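As an illustration only, the weighted premise aggregation of formula (3) and the subsequent max-min composition can be sketched in Python. The membership functions, weights and conclusion set below are hypothetical placeholders, not the authors' actual rule base:

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function over a sampled universe x."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def weighted_premise(memberships, weights):
    """Formula (3): weight each premise fuzzy set, intersect by min,
    then divide by the average weight to rebalance the construction."""
    w = np.asarray(weights, dtype=float)
    scaled = [wi * mu for wi, mu in zip(w, memberships)]
    return np.minimum.reduce(scaled) / (w.sum() / len(w))

# Hypothetical premise sets on a universe pre-scaled to [0, 1]
x = np.linspace(0.0, 1.0, 101)
A1 = trimf(x, 0.6, 1.0, 1.4)   # e.g. "order quantity is high"
A2 = trimf(x, 0.4, 0.8, 1.2)   # e.g. a second hypothetical premise

A = weighted_premise([A1, A2], [0.4, 0.6])

# Composition B' = A' o R(A, B) with R(x, y) = min(mu_A(x), mu_B(y)):
# mu_B'(y) = max over x of min(mu_A'(x), mu_A(x), mu_B(y))
B = trimf(x, 0.5, 0.8, 1.1)    # hypothetical conclusion set
A_prime = A                    # evidence assumed to match the premise exactly
B_prime = np.array([np.max(np.minimum(A_prime, np.minimum(A, b))) for b in B])
```

When the evidence equals the premise, this composition reduces to clipping the conclusion set at the height of A, which is the expected Mamdani behaviour.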
5.
An Instance of Flexible Workflow AO Intelligence
MathWorks' MATLAB is mainstream computation software. With its fuzzy logic toolbox, MATLAB can be used to design, build and test a fuzzy reasoning system. For unweighted fuzzy reasoning the fuzzy logic graphical user interface is adopted directly. Since the details of some MATLAB algorithms, such as the fuzzy logic minimization (min), are not open to the public, the weighted fuzzy reasoning is realized by programming with basic MATLAB fuzzy functions. The subcontracting enterprise generally deals with about 100 orders per month, and workers' shifts average 8 hours per day. The material quota AO calculates the required amount of material according to the order amount and the product pass rate; the result serves as a reference for the material procurement plan. This is the weighted fuzzy reasoning process. The production plan AO calculates the average worker production time according to the order amount and the machine failure rate. Because the production plan AO uses unweighted rules, it can be directly
Flexible Workflow Autonomic Object Intelligence Algorithm
259
calculated with the MATLAB fuzzy toolbox. Part of the weighted input, output and reasoning rules is provided in Table 2.

Table 2. Material quota AO weighted rule membership functions and results
Input: order quantity O; three membership functions, all of triangular type. OMF1 (O Membership Function), "high order quantity": OMF1 = trimf(X, [0.6, 1, 1.4]), where X is the input range.
Output: material procurement C; two membership functions of triangular type.
Rules: Rule 1: IF order quantity is high (0.4) AND the rate of product cost is low (0.6) THEN material procurement is high; ...
After running the self-defined MATLAB program with an order part amount of 170 and a product pass rate of 0.3, the weighted membership functions are shown in Fig. 4(a) and the weighted fuzzy reasoning result in Fig. 4(b). The self-defined MATLAB program follows the idea of multi-dimensional weighted fuzzy reasoning. For simplicity, the input variables are pre-processed so that their input scopes all lie between 0 and 1.
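The reasoning that such a program performs can also be approximated outside MATLAB. The following Python sketch, with hypothetical membership parameters and firing strengths rather than the paper's actual rule base, clips each rule's output set at its firing strength (Mamdani min-implication), aggregates by maximum, and defuzzifies by the centroid method:

```python
import numpy as np

def trimf(x, a, b, c):
    # Triangular membership function, equivalent to MATLAB's trimf(x, [a b c])
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

y = np.linspace(0.0, 1.0, 201)                # output universe, scaled to [0, 1]
high_procurement = trimf(y, 0.5, 0.8, 1.1)    # hypothetical output set
low_procurement  = trimf(y, -0.1, 0.2, 0.5)   # hypothetical output set

# Firing strengths of two hypothetical rules (already-weighted premise degrees)
fire_high, fire_low = 0.7, 0.3

# Mamdani implication (min-clip) and aggregation (max)
aggregated = np.maximum(np.minimum(high_procurement, fire_high),
                        np.minimum(low_procurement, fire_low))

# Centroid defuzzification yields the crisp procurement value
crisp = (y * aggregated).sum() / aggregated.sum()
```

The crisp value lands between the peaks of the two output sets, pulled toward the rule that fires more strongly.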
Fig. 4. Material quota AO weighted rule membership functions (a) and reasoning result (b)
After this computation, the production plan AO adds NC workshop 2 to produce parts at the same time as NC workshop 1 produces the parts for the order. Meanwhile the production plan AO modifies the extendable attribute of NC process 2, i.e. the average production time. The average production time obtained by fuzzy reasoning is 12.7 h/day, so according to the precise rule "if average production time > 12 then add another workshop ..." the operation of the production plan AO can be decided. The increased material quota is then 8.7367 tons when the order amount is 170 and the production qualified rate is 0.4; this result is decided by the material quota AO. All the algorithms above have corresponding precise computing methods in the actual production process. But in a flexible workflow many data change dynamically; for example, in order to shorten the production cycle time, an enterprise usually forecasts the product amount according to the order amount, the enterprise's dynamic production status and
statistics of previous orders. In this situation it is hard to obtain precise order and production amounts. By using the flexible workflow AO intelligence algorithm based on the EM reasoning system, the production process can be decided even when orders and production are dynamic.
6.
Conclusion
Having proposed the multi-AO flexible workflow, this paper has investigated the theory and architecture of AO intelligence in detail and introduced the monitoring and execution of AOs. To address the uncertainty of AO intelligence, represented by fuzziness, an AO intelligence algorithm based on extended Mamdani fuzzy reasoning has been proposed. The architecture of AO fuzzy reasoning and AO knowledge expression was introduced first; then the weighted fuzzy reasoning algorithm and the hybrid precise/fuzzy reasoning algorithm were proposed. Finally, a practical subcontractor production process demonstrated that the proposed AO intelligence can determine the flexible production process when orders and production are dynamic.
7.
References
[1] Deng Shuiguang, Yu Zhen, Wu Chaohui. Research and Design of Dynamic Workflow Modeling Method [J]. Computer Integrated Manufacturing Systems, 2004(6): 601-608.
[2] Hermosillo J, Reynoso Castillo G, Geneste L, et al. Adding decision support to workflow systems by reusable standard software components [J]. Computers in Industry, 2002(49): 123-140.
[3] Chung P, Cheung L, Stader J, Jarvis P, et al. Knowledge-based process management - an approach to handling adaptive workflow [J]. Knowledge-Based Systems, 2003(16): 149-160.
[4] Muller R, Greiner U, Rahm R. AgentWork: a workflow system supporting rule-based workflow adaptation [J]. Data & Knowledge Engineering, 2004(51): 223-256.
[5] Shu Bin, Yin Guofu, Ge Peng, et al. Research on Modeling Method for Agile Workflow System Based on Knowledge [J]. Journal of Xi'an Jiaotong University, 2002, 36(7): 731-735.
[6] Li Dongbo, Xu Ping, Han Xianglan, et al. The Study on the Model of Workflow Management System Based on Expert System [J]. Journal of Nanjing University of Science and Technology, 2001, 25(1): 96-99.
[7] IBM. An architectural blueprint for autonomic computing [EB/OL]. http://www03.ibm.com/autonomic/pdfs/AC_Blueprint_White_Paper_4th.pdf, 2005.6.
[8] Zeng Huanglin. Intelligent Computing - Theory and Application on Rough Set, Fuzzy Logic and Neural Workflow [M]. Chongqing: Chongqing University Press, 2004.
DSM based Multi-view Process Modelling Method for Concurrent Product Development

Peisi Zhong 1, Hongmei Cheng 2, Mei Liu 1, Shuhui Ding 1

1 Advanced Manufacturing Technology Center, Shandong University of Science and Technology, Qingdao, P. R. China
2 College of Mechanical and Electronic Engineering, Shandong University of Architecture, Jinan, P. R. China
Abstract: A process management system is one of the key technologies for concurrent product development; its main functional modules include product development process modelling, process analysis, process optimization, process improvement, process reengineering, process execution, process monitoring, etc. The process modelling module is the basis of a process management system for concurrent product development. A review of product development process modelling, including the design structure matrix (DSM), is given. A DSM based method of process modelling for concurrent product development is presented on the basis of multi-view process modelling, and a bidirectional mapping is set up between multi-level process models and DSMs. The basic steps of DSM based process reengineering are presented. A DSM based process modelling system has been developed and a case study is given. Keywords: Concurrent engineering, product development, process modeling, DSM, multi-view
1.
Introduction
With the rapid development of the global economy and markets, competition among enterprises is becoming more and more intense. How to reduce the cost of product development, improve the quality of product design, shorten the time to market for new products and meet customers' requirements has become a key issue for the survival and development of enterprises. In this competitive environment, all kinds of advanced design theories, methods and manufacturing technologies are emerging as the times require. Concurrent Engineering (CE) and its key supporting technologies have been applied one after another in enterprises all over the world, making it possible to develop products quickly with high quality and low cost. The product development process includes all the related technology and management activities during the period from
262
P. Zhong, H. Cheng, M. Liu and S. Ding
product definition to batch production, and represents the behaviour of an organization in developing a product. That is to say, the product development process is a technology and management framework that integrates methods, technologies, tools and designers and puts them to use in practice [1, 2]. Whether a product can be developed efficiently with high quality and performance is in most cases determined by the quality of the product development process and its management. Product development process modelling represents the product development process and builds its process model. The process model is the basis of, and a key issue in, researching and applying concurrent product development process management [3, 4]. Product development, process analysis and optimization, process execution and monitoring, process reengineering, etc. all require the support of concurrent product development process modelling [5]. In the last decade, increasing importance has been attached to product development process improvement and reengineering; that is, improving and reengineering the product development process has become the starting point for improving product quality and shortening the development cycle. Practice has proved that process improvement and reengineering of concurrent product development can effectively reduce product development cost and cycle time and evidently improve product quality [4]. In-depth research is therefore necessary on the process management of concurrent product development, especially on methods of process modelling.
2.
Review
Concurrent product development process management differs from workflow management; the significant difference is that the former adopts the philosophy of CE, with characteristics such as pre-release, short cycles and more feedback, and is therefore more complicated than the latter. Traditional research on the product development process started in the 1960s and reached a peak between the end of the 1980s and the beginning of the 1990s. With in-depth research on CE and various advanced manufacturing modes, concurrent product development process management has become a current research hotspot [4, 6]. According to the differences in requirements and application backgrounds, many kinds of methods and technologies have been presented. Petri-net based modelling is suited to describing the dynamic processes of discrete systems and supports the description of concurrent, asynchronous, distributed and uncertain systems. The IDEF (ICAM DEFinition language) family of methods, a product of the Integrated Computer Aided Manufacturing (ICAM) initiative of the United States Air Force, is widely used in business and process modelling. Workflow modelling methods focus on describing the path of a process or activity, and include modelling based on activity networks, formalized representations, dialogue models, state and activity graphs, transaction models, etc. Agent-based modelling decomposes the product development process into independent agents that resolve conflicts by negotiation and cooperate with each other [4, 7].
Multi-view Process Modelling Method for Concurrent Product Development
263
WFOMM (Work Flow-Oriented Modeling Method) is formed on the basis of the IDEF methods, object-oriented methods, UML, etc. The process model presented by CERC at West Virginia University clarifies who finishes what, when and how; it describes the current process, supports process analysis and improvement, and emphasizes the differences between the improved model and the primary model in order to understand, improve and manage the whole product development process. Smith et al. discussed formalized process modelling from the viewpoint of process analysis and planning and argued that the iteration and overlap of processes deserve particular attention. Curtis et al. analyzed the requirements of process modelling and presented four possible views: the function view, behaviour view, organization view and information view. BPMN (Business Process Modeling Notation), the standard established by BPMI (Business Process Management Initiative), provides a set of general process symbols and eases communication among the analysis, design and execution of business processes and their managers. The concurrent product development process has been studied in depth under the leadership of Professor Xiong in the CE laboratory of the National CIMS Engineering Technology Center at Tsinghua University, where a multi-view process modelling method was presented and a CORBA-based product development management tool developed [1, 4]. Research on DSM has been driven by Eppinger, Whitney and others at MIT; the relationships among activities can be described by a DSM, whose construction principle is similar to the incidence matrix of graph theory. A DSM can abstract a concrete problem and solve it with matrix theory. Progress has been made in the analysis of process iteration and process improvement and in the application of DSM to process analysis and planning, and DSM has been improved and extended by many researchers [8, 9, 10, 11, 12].
Overall, many modelling methods and tools have been studied, but each model can only describe certain aspects and reflects the concurrent product development process from certain sides. These models lack an overall information description and effective management and analysis of product information in the concurrent product development process, and cannot represent the complicated relationships inside and among processes. The advantages of the workflow model include automatic execution, visualization and convenient operation; the model is easy to understand and use and supports the implementation, monitoring and management of the product development process, but it lacks quantitative process analysis and optimization. The DSM model is suitable for quantitative analysis and optimization, but it requires hypotheses about the design tasks, their executing sequence and their dependencies in order to analyze and optimize the product development process, and needs a very exact and complete initial description of that process. In addition, the visualization of the DSM model is poor and its user operation is not friendly. If the DSM model is combined with another, visualized process model, these demands can be met and a satisfactory result obtained. Therefore, a DSM based multi-view process modelling method integrating the workflow model and the DSM model is studied in this paper. The model can provide qualitative and quantitative descriptions of each view of the concurrent product
development process in detail, while attending to visualization and automation, to support the process management of concurrent product development.
3. The Principle of DSM based Multi-view Process Modelling

On the basis of the existing process management system for concurrent product development, and combining the workflow model with the DSM model, a multi-view process modelling method for concurrent product development is presented to make full use of the strong points of both models. An integrated multi-view and multi-level process model of concurrent product development is built for the product development lifecycle based on networks, as shown in Figure 1, including visualized workflow views and a DSM model, which can meet the requirements of qualitative and quantitative analysis and optimization for concurrent product development.
Figure 1. DSM based multi-view process model of concurrent product development
Figure 1 describes the principle of the integrated multi-view process modelling based on DSM. In the model, users can describe and use different parts of the process and different stages of development with different computers in different places. For example, view 1 represents users modelling different parts of the process on several computers and describing the activities, roles and resources. View 2 represents a user analysing and improving the process model; view 3 a user improving and optimizing it; view 4 a user acquiring or reusing domain knowledge. The model built by several users must be checked for consistency by the process modelling tool so that a self-contained executable model can be set up for product development. The process model is executed in the actual project, and the product development process management supporting tool can generate different views, such as the activity network (view 5) and the status transitions of key data (view 6), according to users' demands. Users can also browse, execute and monitor the process in each view from any computer anywhere, for example to distribute resources, use design tools to reengineer the process or analyse the process schedule, to capture the process history of the development process (view 7), or to capture and reuse design intent for the product development process (view 8). Models in the workflow view and the DSM view can be mapped to each other automatically in both directions.

3.1 Mapping from the Workflow based Model to the DSM based Model

Based on the activity network graph, the workflow model has very good visualization and can be executed automatically, but DSM has more advantages for analysing, optimizing and reengineering the product development process. The first step of process optimization is therefore to map the workflow model to the DSM model.
The activity network graph is based on a directed graph model, so it is convenient to operate on it using graph theory. Referring to the matrix representation of a directed graph, the following rules map the workflow model to the DSM model:
1) Each node in the activity network graph corresponds to an activity in the DSM, and each link corresponds to an exchange of information between activities.
2) The activities are arranged in order to form a matrix.
3) According to the output sequence, a 1 is marked in column i, row j when activity i outputs to activity j; otherwise a 0 is marked.
Figure 2 shows the mapping from the process model to the DSM model according to the rules above. A hierarchical DSM can be formed from a product development process with sub-processes; the top level of the matrix is a partitioned matrix in which each element may be a block of the final matrix.

3.2 Mapping from the Multi-layer Process Model to DSM

It is difficult to represent a large-scale model with one process containing all the tasks of a complex product development process: there is no way to display such a process clearly, and the advantage of intuitiveness is lost. The most effective solution is to decompose the process. The top-down
method of process modelling is adopted to set up a hierarchical process model which permits sub-processes to be embedded in the upper process. By splitting the process model, a single-layer complex model is changed into a multi-layer tree model, as shown in Figure 3, and the complexity of the system is reduced. The multi-layer model supports process reuse and is easy to understand and manage.
Figure 2. The mapping from process view to DSM view
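The mapping rules of Section 3.1 can be sketched as follows. This minimal Python illustration uses the activities and flows of Figure 2; the convention (column = source activity, row = destination activity) follows rule 3:

```python
# Activities and information flows from the activity network of Figure 2
activities = ["A", "B", "C", "D", "E"]
flows = [("A", "C"), ("B", "C"), ("B", "D"), ("D", "E"),   # forward flows
         ("C", "A"), ("D", "A"), ("E", "C"), ("E", "B")]   # feedbacks

idx = {a: k for k, a in enumerate(activities)}
n = len(activities)

# Rule 3: mark 1 in column i, row j when activity i outputs to activity j
dsm = [[0] * n for _ in range(n)]
for src, dst in flows:
    dsm[idx[dst]][idx[src]] = 1

for name, row in zip(activities, dsm):
    print(name, row)
```

Feedbacks such as C → A appear above the diagonal of the resulting matrix, which is exactly what the optimization of Section 3.4 later tries to reduce.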
Figure 3. Multi-layer process model
As it is difficult to represent a complex product development process clearly with a single-layer process model, it is also difficult to describe a complex process with one DSM. In a large-scale matrix it becomes very difficult to identify the interrelations between activities by observing the off-diagonal elements, and ordinary operations cannot analyse and plan the matrix at all. Thus the multi-layer process model is mapped into a multi-layer tree of DSMs, as shown in Figure 4; that is, the large-scale DSM is decomposed into smaller DSMs that become the focus of analysis, the difficulties of the large-scale matrix are avoided, and many different DSM analyses can be carried out in detail.
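For illustration, the decomposition described above can be represented as a tree in which each node holds the small DSM of its own sub-process. The node names follow Figure 3; the class and the local DSM contents are hypothetical, not the paper's tool:

```python
class ProcessNode:
    """A node of the multi-layer model: a name, a small local DSM over its
    children activities, and optional child nodes that are sub-processes."""
    def __init__(self, name, local_dsm=None, children=None):
        self.name = name
        self.local_dsm = local_dsm or []
        self.children = children or []

def collect_dsms(node, out=None):
    """Walk the tree and gather every small DSM for separate analysis."""
    if out is None:
        out = []
    if node.local_dsm:
        out.append((node.name, node.local_dsm))
    for child in node.children:
        collect_dsms(child, out)
    return out

# A0 contains A1; A1 contains A12 (cf. Figure 3); local DSMs are hypothetical
a12 = ProcessNode("A12", [[0, 0, 0], [1, 0, 0], [0, 1, 0]])
a1  = ProcessNode("A1",  [[0, 0], [1, 0]], [a12])
a0  = ProcessNode("A0",  [[0]], [a1])
print([name for name, _ in collect_dsms(a0)])
```

Each small matrix can then be analysed or partitioned on its own, which is the point of the multi-layer tree-DSM of Figure 4.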
3.3 Mapping from the Multi-view Process Model to DSM

The workflow based model is a multi-view model whose core is the process model, supported by other models such as the resource model, the organization model, etc. Therefore all these models must be mapped to the DSM model.
Figure 4. The Multi-layer DSM structure
The resource view model can be mapped to a numerical DSM (NDSM). The elements on the diagonal are natural numbers rather than 0 and represent the number of resources needed; for example, if a task needs a computer and an NC tool, the number on its diagonal is 2. As shown in Figure 5, the numbers of resources needed by activities A, B, D and E are 2, 1, 3 and 2 respectively, and task C needs no resources. The numbers on the diagonal can also represent the time, cost, etc. taken by product development, forming NDSMs of time and cost.
    A  B  C  D  E
A   2  0  1  1  0
B   0  1  0  0  1
C   1  1  0  0  1
D   0  1  0  3  0
E   0  0  0  1  2
Figure 5. NDSM for resource view
The elements off the diagonal represent not only the information flow between activities but also, as natural numbers, the numbers of iterations; an NDSM formed with iteration counts is shown in Figure 6. For example, the number of iterations is 2 between activities A and C, 2 between A and D, and only 1 between B and E.
The organization model describes the roles of executors and the corresponding relationships among the members of the integrated development team for each activity. While building the process model, the administrator assigns an operating role to each activity or sub-process, maps the organization model to the DSM model, and uses a clustering algorithm to group operators so that interaction time is minimized without affecting the product development schedule. The detailed mapping steps are:
1) Map the process model to the DSM model.
2) Associate the activities or sub-activities with operators according to the activity sub-table.
3) Replace each activity or sub-process in the DSM model by its operator.

    A  B  C  D  E
A   2  0  2  2  0
B   0  1  0  0  1
C   1  1  0  0  3
D   0  1  0  3  0
E   0  0  0  1  2
Figure 6. NDSM for iteration times
3.4 DSM based Process Optimization

The DSM mapped from the workflow model obviously contains feedback iterations which must be optimized. The goal of process optimization is to eliminate the marks above the diagonal as far as possible, that is, to eliminate iterations and turn the matrix into a lower triangle. For a more complex matrix it is difficult to eliminate the iterations completely; the optimization goal is then that the activity marks move as close to the diagonal as possible, so that fewer activities are involved in iterations, forming a partitioned matrix as shown on the left of Figure 7, to shorten the product development cycle.
Figure 7. The mapping from DSM to process view after optimization
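The re-sequencing of Section 3.4 can be sketched with a simple partitioning pass. This is a minimal illustration under the row/column convention of Section 3.1, not the authors' tool: activities whose inputs are all already scheduled are moved forward, and any activities left over belong to coupled (iterative) blocks:

```python
def partition(dsm, names):
    """Reorder a binary DSM toward lower-triangular form.
    Convention: dsm[j][i] == 1 means activity i feeds activity j."""
    n = len(names)
    remaining = set(range(n))
    order = []
    while remaining:
        # activities whose every input is already scheduled
        ready = [j for j in remaining
                 if all(i == j or dsm[j][i] == 0 or i not in remaining
                        for i in range(n))]
        if not ready:  # a cycle: a coupled block that must iterate
            order.extend(sorted(remaining))
            break
        order.extend(sorted(ready))
        remaining -= set(ready)
    return [names[k] for k in order]

# Hypothetical example: C depends on A and B, D depends on C
print(partition([[0, 0, 0, 0], [0, 0, 0, 0],
                 [1, 1, 0, 0], [0, 0, 1, 0]], list("ABCD")))
```

A production partitioning algorithm would additionally isolate each coupled block as its own sub-matrix rather than simply appending the leftover activities, but the ordering idea is the same.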
3.5 Mapping from DSM to the Process Model

In order to fully exploit the characteristics of the workflow based process model, the optimized DSM is mapped back to the visualized process model, as shown on the
right of Figure 7. The detailed methods and steps of process optimization will be described in later papers.
4.
Implementation
On the basis of the existing process management system for concurrent product development, a visualized, intelligent and networked software prototype for process analysis of concurrent product development has been developed with C#, ADO.NET, ASP.NET, etc. in the Microsoft .NET Framework environment; the visualized workflow model and the DSM model are integrated tightly, and satisfactory results have been obtained. Figure 8 shows the interface of the workflow based visualized hierarchical process modelling tool for concurrent product development, which describes a visualized process model for a certain sub-process of product development. The left part of Figure 8 is the hierarchical process model and the right part is the visualized process model, which is mapped into the DSM model. Figure 9 shows the interface of the DSM based process analysis, with the DSM corresponding to the visualized process model of Figure 8. After the DSM model is analysed or optimized, it is mapped back into the visualized process model, and the concurrent product development process management system runs with the reengineered process model.
Figure 8. The interface of workflow based multi-view process modeling
5.
Summary
A process management system for concurrent product development is one of the key technologies, and its main functional modules include product development process modelling, process analysis, process optimization, process improvement, process
reengineering, process execution, process monitoring, etc. The process modelling module is the basis of a process management system for concurrent product development. On the basis of the existing research on concurrent product development process management systems, a DSM based framework for multi-view process modelling of concurrent product development has been set up to tightly integrate the workflow based visualized multi-layer process model with the DSM model and exploit the advantages of each. A software prototype has been developed to unify qualitative analysis with the process model and quantitative analysis with the DSM model, so that each compensates for the other's weaknesses, providing strong support for concurrent product development process management.
Figure 9. The interface of DSM based process analysis
In further research, the theory and methods of DSM based quantitative process analysis and optimization for concurrent product development will be studied in greater depth, and the corresponding supporting tools will be developed.
6.
Acknowledgement
The research is supported by the National Natural Science Foundation of China - Methodology for domain knowledge management in cooperative product development (No. 50275090), the Natural Science Foundation of Shandong Province, China - Theory and method of process analysis and optimization for concurrent product development (No. Y2005F21), and the Science and Technology
Programme of Shandong Provincial Education Department of China - Design history knowledge management system for concurrent product development process (No. J04A05).
7.
References
[1] Wang Jibin, Xiong Guangleng, Chen Jiadong, (1999) Rule-Based Product Development Process Modeling with Concurrent Design Iterations Supported. Journal of Tsinghua University (Science and Technology) 39:114-117
[2] Maropoulos P G, (1995) Review of research in tooling technology, process modeling and process planning, Part 1: Tooling and process modeling. Computer Integrated Manufacturing Systems 8:5-12
[3] Zhong Peisi, Zeng Qingliang, Liu Mei, Liu Dazhi, (2003) Knowledge-Based Concurrent Product Development Process Management System and Its Implementation. Proceedings of ASME 2003 DETC & CIE, Chicago, Illinois, USA
[4] Zhong Peisi, (2001) Knowledge-Based Process Management for Concurrent Product Development. Postdoctoral Research Report, Tsinghua University
[5] Liu Dazhi, Liu Mei, Zhong Peisi, (2004) Method of Product Development Process Analysis and Reengineering for Concurrent Engineering. Materials Science Forum 471-472:770-774
[6] Smith R P, Morrow J A, (1999) Product development process modeling. Design Studies 20:237-261
[7] Sun Zhaoyang, (2005) Research of the Method for Process Analysis and Optimization of Concurrent Product Development. M.Sc. Dissertation, Shandong University of Science and Technology
[8] Eppinger S D, Whitney D E, Smith R P, et al. (1994) A model-based method for organizing tasks in product development. Research in Engineering Design 6(1):1-13
[9] Smith R P, Eppinger S D, (1997) A predictive model of sequential iteration in engineering design. Management Science 43(8):1104-1120
[10] Browning T R, (2002) Process integration using the design structure matrix. Systems Engineering 5(3):180-193
[11] Chen Chun-Hsien, Ling Shih Fu, Chen Wei, (2003) Project scheduling for collaborative product development using DSM. International Journal of Project Management 21:291-299
[12] Browning T R, (2002) Process integration using the design structure matrix. Systems Engineering 5(3):180-193
Using Blogs to Manage Quality Control Knowledge in the Context of Machining Processes Yingfeng Zhang, Pingyu Jiang and Limei Sun State Key Laboratory for Manufacturing Systems Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi 710049, China
Abstract: Blogs are becoming more popular and have in fact formed a community in which all kinds of users can easily share, search for and reuse useful knowledge and experience with others. A blogs-based knowledge management framework is therefore proposed in this paper to manage quality control knowledge in the context of machining processes. To enable these knowledge blogs, several methods are presented, including context-based quality knowledge classification and representation, an ontology-based quality knowledge model and blog-based quality knowledge management. A running example is given to verify the methodology. Keywords: Blogs, Context, Quality control knowledge, Ontology, Knowledge management
1.
Introduction
The rapid progress of information and network technologies is driving the globalization of manufacturing activities, including knowledge management. To compete in global markets, enterprises must effectively manage and use direct and indirect knowledge resources to improve product quality. A quality knowledge management system provides an enterprise not only with a platform for communicating and sharing quality knowledge, but also with support and diagnosis when quality problems occur. To meet the needs of managing quality control knowledge, many methods and systems have been developed, e.g. knowledge-driven expert systems, knowledge maps and BBS (Bulletin Board System) forums. In recent years, knowledge management has attracted growing academic attention. Ioannis and Prentzas [1] propose an integrated representation combining rules, neural networks and case knowledge, which enhances both the efficiency of creating knowledge and its self-learning capability. Dieng et al. [2] show the importance of context and personalization in knowledge management. Wang et al. [3] design and develop a blog-based dynamic learning map to share learning experiences. Blood [4, 5] proposes
weblog-based technologies to help users gain more knowledge. Scardamalia [6] builds the architecture of Knowledge Forum, an open platform for sharing, searching and using knowledge. Rector [7] proposes an ontology-based strategy for modular implementation which provides a basis for defining more complex knowledge. Giunchiglia and Ghidini [8] propose a more radical approach to distributed representations of knowledge. Gruber [9] defines a distributed version of description logics based on ontology model semantics that has all the advantages of contextual representations. Yu et al. [10] propose a quality knowledge management system based on BBS. The research discussed above focuses mainly on general knowledge management technologies within each author's own application area. To realize the idea and concept presented in this paper, it is necessary to integrate the modelling, representation, publishing, searching and evaluation of quality control knowledge for machining processes into one platform. We therefore put forward a blogs-based quality control knowledge system, whose research objective is precisely this integrated modelling, representation, publishing, searching and evaluation of quality control knowledge. The rest of this paper is organized as follows. Section 2 proposes the architecture of Blogs-based Management of Quality Control Knowledge (BMQCK). Some key enabling technologies are described in Section 3. A running example demonstrating the concepts and methods is shown in Section 4. Conclusions are drawn with brief comments in Section 5.
2. Architecture of Blogs-based Management of Quality Control Knowledge (BMQCK) This section discusses the architecture and corresponding functions of BMQCK. As shown in Figure 1, the architecture of BMQCK is built on information, ontology and blog technologies. It serves several purposes. Firstly, it helps to manage knowledge systematically at the job-shop, enterprise and extended-enterprise levels. Secondly, it highlights the methodologies and technologies for creating, sharing, publishing and finding knowledge, which makes BMQCK easy to implement. Thirdly, it defines the general functionalities and knowledge structure of BMQCK. The architecture is divided into three tiers according to their functions, briefly described as follows: 2.1
User Interface Level
This tier provides users with graphical interfaces for operating and managing knowledge. Users send their requests and receive results through their browsers over the Internet.
2.2
Application Level
This tier plays a central role in BMQCK. It integrates the key enabling technologies (context-based knowledge classification, the ontology-based knowledge model, blog-based methods for creating, finding and sharing knowledge, knowledge evaluation and the dynamic knowledge Kanban) to manage the quality knowledge. Interface technologies such as SOAP and XML are adopted to implement the information communication among the three tiers. 2.3
Data Level
This tier stores and shares all kinds of knowledge, such as quality knowledge, ontology information and source knowledge. Standard data structures, e.g. XML and OWL data, are stored, and corresponding database application services (e.g. Java Database Connectivity, JDBC) are installed to implement the database operations.
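As an illustration of this tier, the sketch below stores each knowledge node as an XML payload in a relational table and retrieves it through standard database calls. The paper's platform uses Java and JDBC; this Python/sqlite3 version, with hypothetical table and element names, only mirrors the idea.

```python
import sqlite3
import xml.etree.ElementTree as ET

def store_knowledge(conn, node_id, category, body):
    """Serialize one quality-knowledge node as XML and store it."""
    node = ET.Element("knowledge", id=node_id, category=category)
    ET.SubElement(node, "body").text = body
    conn.execute("INSERT INTO knowledge VALUES (?, ?, ?)",
                 (node_id, category, ET.tostring(node, encoding="unicode")))

def load_knowledge(conn, node_id):
    """Fetch a node and parse its XML payload back into a dict."""
    row = conn.execute("SELECT xml FROM knowledge WHERE id = ?",
                       (node_id,)).fetchone()
    if row is None:
        return None
    node = ET.fromstring(row[0])
    return {"id": node.get("id"),
            "category": node.get("category"),
            "body": node.findtext("body")}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE knowledge (id TEXT PRIMARY KEY, category TEXT, xml TEXT)")
store_knowledge(conn, "K001", "control-chart", "X-bar chart out-of-control rule")
record = load_knowledge(conn, "K001")
```

Keeping the payload in XML matches the tier's stated use of standard data structures while leaving indexing to the database.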
Figure 1. Architecture of BMQCK (User Interface, Application and Data levels; data-level technologies: Database / OWL / UDDI / SOAP / Web)
3.
Key Enabling Technologies of BMQCK
This section discusses the key enabling technologies of BMQCK, including context-based quality knowledge classification and representation, the ontology-based quality knowledge model and blog-based quality knowledge management. 3.1
Context-based Quality Knowledge Classification and Representation
Generally speaking, context refers not only to the personalized knowledge of the worker but also to the working environment. Context may be understood in a variety of ways in different application fields. In this paper, it is defined as follows: Definition 1. In the quality control field, context is a collection of semantic situational information about one or more machining stages, including the operator and the current environment, together with the information characterizing the internal features or operations and the external relations under the specific setting. Context aids the user's work, mainly searching, reading and creating knowledge documents, by attaching the user's context. To help users describe the corresponding knowledge, a set of elements is defined as a guideline. Figure 2 shows five main categories of elements for quality control knowledge, and each element can be further decomposed. The most important issue in a context-based knowledge system is knowledge representation. To represent natural context knowledge descriptions and to facilitate knowledge reuse at the semantic and context levels, a multi-level context-based knowledge representation and structure model is developed, as shown in Figure 3.

Figure 2. Categories of quality knowledge (five categories: Archives, Stage knowledge, User's Q & A, Application knowledge, Principles & Rules, each decomposed into more specific elements such as quality control plans, control charts, root cause identification, CAPP files and scrap reports)

Figure 3. Representation of quality knowledge
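A minimal sketch of the context-based representation, with field names invented for illustration: each knowledge item carries one of the five categories of Figure 2 plus the situational information from the context definition above.

```python
from dataclasses import dataclass, field

# The five top-level categories shown in Figure 2 (each further decomposable).
CATEGORIES = {"Archives", "Stage knowledge", "User's Q & A",
              "Application knowledge", "Principles & Rules"}

@dataclass
class Context:
    """Semantic situational information of a machining stage (illustrative)."""
    operator: str
    machining_stage: str
    environment: dict = field(default_factory=dict)

@dataclass
class KnowledgeItem:
    title: str
    category: str
    context: Context

    def __post_init__(self):
        # Keep the classification consistent with the defined guideline.
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

item = KnowledgeItem(
    title="Turning chatter root cause",
    category="Stage knowledge",
    context=Context(operator="op-17", machining_stage="finish turning",
                    environment={"machine": "CK6140"}))
```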
3.2
Ontology-based Quality Knowledge Model
Ontology is used in artificial intelligence, knowledge representation, inductive reasoning and a variety of problem-solving techniques, as well as in supporting the semantic web and systems integration. This paper uses OWL (Web Ontology Language) to represent the ontological framework of the quality knowledge model. Figure 4 shows the main stages of the ontology and the corresponding quality knowledge instances. A piece of knowledge is mapped to a node of an OWL file. The content of a knowledge node consists of a series of attributes which change dynamically according to the knowledge context.
Figure 4. Ontology-based quality knowledge model and instances (three stages: (1) creating the ontology, i.e. establishing its objective, range and requirements and the information and field ontology model; (2) maintenance, i.e. knowledge database maintenance, ontology management, modification and update; (3) operating the ontology, i.e. searching, publishing, evaluating and sharing knowledge)
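The mapping of one piece of knowledge to an OWL node can be sketched as follows. The namespace and property names are invented for illustration, and a real implementation would use a full OWL toolkit rather than raw XML.

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
NS = "http://example.org/quality#"   # hypothetical ontology namespace

def knowledge_node(node_id, attributes):
    """Map one piece of knowledge to a named individual in an RDF/XML file.
    Attributes vary with the knowledge context, so they arrive as a dict."""
    ET.register_namespace("rdf", RDF)
    rdf = ET.Element(f"{{{RDF}}}RDF")
    individual = ET.SubElement(rdf, f"{{{NS}}}QualityKnowledge",
                               {f"{{{RDF}}}about": NS + node_id})
    for name, value in attributes.items():
        ET.SubElement(individual, f"{{{NS}}}{name}").text = value
    return rdf

doc = knowledge_node("K001", {"stage": "grinding", "controlChart": "X-bar"})
xml_text = ET.tostring(doc, encoding="unicode")
```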
3.3
Blog-based Quality Knowledge Management
Blogs have emerged as a potential solution to the publication problem. The idea is based on the premise that publication occurs incrementally in discrete units, blog entries, and that users manage their own content (as opposed to newsgroups). Figure 5 shows a framework for blog-based quality knowledge management. Users can take full advantage of the functions provided by blogs to create, publish, search, question, answer and discuss the quality control knowledge of interest.
Figure 5. Blog-based quality knowledge management (a search engine based on examples, context, semantics and similarity retrieves structural, semi-structural and non-structural knowledge from the knowledge database and maps it onto the ontology model)
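A toy version of this idea, with hypothetical class and method names: entries are published incrementally, owned by their authors, and searchable.

```python
import datetime

class KnowledgeBlog:
    """Minimal sketch: entries are published in discrete units and each
    author manages their own content, unlike a shared newsgroup thread."""

    def __init__(self):
        self.entries = []

    def publish(self, author, title, body, tags=()):
        entry = {"author": author, "title": title, "body": body,
                 "tags": set(tags),
                 "posted": datetime.datetime.now(datetime.timezone.utc)}
        self.entries.append(entry)
        return entry

    def search(self, keyword):
        """Case-insensitive keyword lookup over title, body and tags."""
        kw = keyword.lower()
        return [e for e in self.entries
                if kw in e["title"].lower() or kw in e["body"].lower()
                or kw in {t.lower() for t in e["tags"]}]

blog = KnowledgeBlog()
blog.publish("op-17", "Out-of-control points on X-bar chart",
             "Check fixture wear before adjusting the process.",
             tags=("control-chart", "grinding"))
hits = blog.search("control-chart")
```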
4.
Case Study
Following the concepts and methodologies described thus far, we have developed a software prototype for blogs-based management of quality control knowledge. As a simple running example, Figure 6 uses knowledge of a quality control chart to illustrate how the system works. The main functions include:

- Create and publish new knowledge. Users can log in to the blog to create and publish new quality control knowledge using a blog entry, as shown in Figure 6 (a).
- Structure of quality control knowledge. The structure of context-based quality knowledge is shown in Figure 6 (d). In other words, each piece of quality knowledge has its own knowledge tree, and users can add or edit the nodes and their content according to the characteristics of the knowledge context.
- Search engine. Users can use the search engine provided by the blogs to find the quality knowledge they require, as shown in Figure 6 (b).
- Search result. The quality knowledge items are listed on the search result page, ordered by the similarity between the queried knowledge and the knowledge ontology, as illustrated in Figure 6 (e).
- Gain quality knowledge. Users can reach helpful quality knowledge through the blogs-based quality knowledge platform, which provides multiple knowledge types, e.g. text, examples, charts and media. Figure 6 (f) shows a quality control chart under the multi-variety, small-batch production mode.
- Knowledge Kanban. The knowledge Kanban indicates the frequency of reuse and the historical statistics of quality knowledge, as shown in Figure 6 (c).
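The paper does not specify the similarity measure used to order the search results; as one plausible sketch, Jaccard similarity over keyword sets would rank them like this.

```python
def jaccard(a, b):
    """Similarity between two keyword sets, in [0, 1]."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_results(query_terms, items):
    """Order knowledge items by similarity to the query, best first."""
    return sorted(items,
                  key=lambda item: jaccard(query_terms, item["terms"]),
                  reverse=True)

items = [
    {"title": "Scrap report template", "terms": {"scrap", "report"}},
    {"title": "X-bar control chart rules",
     "terms": {"control", "chart", "x-bar", "rules"}},
]
ranked = rank_results({"control", "chart"}, items)
```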
Figure 6. Representation of quality knowledge: (a) create new knowledge; (b) context-based knowledge structure; (c) knowledge representation and knowledge Kanban; (d) structure of knowledge; (e) search result; (f) the quality control chart
5.
Conclusions
In this paper, we have put forward a framework and corresponding methodology for managing quality knowledge through a knowledge platform. The following conclusions can be drawn:

- The architecture of BMQCK and its components provide a clear blueprint for an information platform that collects and utilizes the available quality control knowledge.
- Context-based quality control knowledge classification and representation is useful and effective for standardizing complex knowledge.
- The ontology-based quality knowledge model helps to organize the structure of knowledge, making specific knowledge easier to store and find.
- Blogs-based quality knowledge management provides users with a public platform to create, share, publish and look for the quality knowledge of interest. Multi-type archives are also available on this platform.
The above framework and methods provide a useful mechanism for making quality control knowledge easier to share and reuse. However, other aspects (e.g. the theory of knowledge context and methods of knowledge mining) still need to be studied, and further research is required to improve the methodology proposed in this paper.
6.
Acknowledgements
The research presented in this paper is supported by the National 863 High-Tech R&D Program (Grant No. 2006AA04Z149). The authors thank the sponsor for the financial support.
7.
References
[1] Ioannis H., Prentzas J. (2004) Integrating (rules, neural networks) and cases for knowledge representation and reasoning in expert systems. Expert Systems with Applications 27:63-75
[2] Dieng R., Corby O., Giboin A., Ribiere M. (1999) Methods and tools for corporate knowledge management. International Journal of Human-Computer Studies 51(3):567-598
[3] Wang K.T., Huang Y.-M., Jeng Y.-L., Wang T.-I. (2007) A blog-based dynamic learning map. Computers & Education, in press, corrected proof, available online
[4] Blood R. (2002a) We've got blog: How weblogs are changing our culture. Cambridge, MA: Perseus Publishing
[5] Blood R. (2002b) The weblog handbook: Practical advice on creating and maintaining your blog. Cambridge, MA: Perseus Publishing
[6] Scardamalia M. (2004) Knowledge Forum. Education and Technology, 183-192
[7] Rector A. (2003) Modularisation of domain ontologies implemented in description logics and related formalisms including OWL. Proceedings of the 16th International FLAIRS Conference
[8] Giunchiglia F., Ghidini C. (1998) Local models semantics, or contextual reasoning. Proceedings of the Sixth International Conference on Principles of Knowledge Representation and Reasoning (KR'98), Morgan Kaufmann, 282-289
[9] Gruber T.R. (1995) Towards principles for the design of ontologies used for knowledge sharing. International Journal of Human-Computer Studies 43(5):907-928
[10] Yu Zhonghua, Liu Shouxin, et al. (2005) Research on knowledge acquisition based on BBS and its application in quality management. China Mechanical Engineering 16(4):315-319
Analysis on Engineering Change Management Based on Information Systems Qi Gao1, Zongzhan Du2, Yaning Qu3
1 School of Mechanical Engineering, Shandong University
2 School of Electrical Engineering, Shandong University
3 Shandong Hoteam Software Co., Ltd.
Abstract Engineering Changes (ECs) are inevitable and frequent in manufacturing enterprises. The primary challenge in managing ECs efficiently is that both the sources and the effects of an EC are spread across different phases of the product lifecycle. With the spread of information systems in enterprises, running integrated engineering change management on top of these systems has become an urgent problem. In this paper, we analyze the state of the art in the context of current information management application environments. Problems are defined and solving strategies are presented from the viewpoint of information integration. The overall goal is to enable a streamlined enterprise change management environment which aggregates all required product information. Keywords: engineering changes (ECs), integration, information system
1.
Introduction
Design, whether in a single firm or in a consortium, is iterative and inevitably requires change. An engineering change (EC) is any change or modification to the shape, dimensions, structure, material, manufacturing process, etc. of a part or assembly after the initial design has been released (and often after the part is already in production) [1]. It can be a simple modification to documents, or a complex redesign spanning the whole product design and manufacturing phase. ECs are important and necessary during product development, especially in manufacturing companies. An EC may be necessitated by a number of different reasons. Among the common reasons for an EC are [2]:

- To achieve new functionality or meet new specifications;
- To take advantage of new material or manufacturing technology;
- To improve reliability, serviceability, aesthetics, ergonomics, etc.;
- To compensate for the permanent loss of supply of a component or material, replace a supplier, etc.;
- To eliminate design faults;
- To solve quality problems.
ECs usually induce a series of downstream changes, so multiple disciplines and responsibilities are involved in managing them. Once an EC is approved, all downstream functions must be notified so that they can make the necessary adjustments in time to implement it. No matter where a change request originates and no matter what benefits its incorporation may bring, it disrupts the routine process and the normal flow of production work. Engineering change is therefore a focus of concern and a sensitive area in most companies. The ability to manage changes efficiently and effectively reflects the agility of an enterprise and is vital to maintaining its competitiveness. Most of the efforts reported in the literature have been based on paper-based ECM systems. Although some companies seem to have well-structured, comprehensive ECM documents, paper-based systems generally fail to manage ECs with sufficient effectiveness and efficiency. Moreover, change accuracy is hard to ensure because the data source is not unique and data transfer is untimely. Recent investigations by other researchers have revealed that, in a noticeable number of manufacturing companies, the number of ECs active at any one time reaches a level that is too difficult to manage with a paper-based system and an ad hoc procedure. Information technology has been introduced to overcome these limitations. Standalone computerized EC management (ECM) systems have been developed to support basic EC activities. However, these software packages can only be accessed by a single user at a time, and they can only record and process EC-related forms [3]. Several software vendors have developed information management systems, such as PDM, which play a significant role in EC management. Compared with stand-alone ECM systems, PDM can manage EC data, processes and people.
It supports concurrent work, intensive teamwork and close communication. But most state-of-the-art PDM systems in industry are used only by the product development department and, for lack of integration, do not cover the information required for the whole life cycle of a product. Information integration has therefore received much attention. Yang et al. proposed an environment integrating PDM and MRP applications which supports designers in analysing inventory scrap costs in a part re-design project [4]; the integration between PDM and MRP is realized by a BOM conversion module and an ABC module. They also developed an agent-based PDM/ERP collaboration system to support the designer in making decisions about replacement part requirements [5]. Peng [6] proposed STEP-compatible product data and engineering change models covering six areas: product definition, product structure, shape representation, engineering change, approval and production scheduling. Application systems such as CAD/CAM and MRP can thus interact with the EDM system by accessing the database, although no actual application example is given in that paper. Relatively little research has addressed integrated ECM based on information systems.
With the development and application of information technology, more and more information management systems are deployed in enterprises. The sources as well as the effects of an EC are spread across different phases of the product lifecycle, and the required data are usually stored in different information systems. It is therefore necessary to investigate how to integrate the information stored in the various systems. This paper contributes to ECM research by describing integrated engineering change management based on information management systems from the aspects of workflow, people and data, so that change information consistency, integrity, validity and traceability can be achieved across the product lifecycle. The paper is organized in four sections. Section 2 presents a change management procedure model conforming to the industry-standard CMII closed-loop change model and identifies existing problems. Section 3 proposes problem-solving strategies in an integrated environment. The last section summarizes the key elements of the paper and identifies new perspectives.
2.
EC Model Based on Information Systems
2.1
Engineering Change Standard Procedure
There are several authoritative standards for engineering change management. CMII is the norm widely used in manufacturing companies in the United States and China, so we take it as the basis of our analysis for the sake of convenience; the research challenges and strategies proposed here are nevertheless universal. Fig. 1 shows the CMII process model, which defines roles, boards, their tasks and a closed change process in a complete and very detailed manner. Of course, it is only a top-level model for process control; to use it, we need to refine it into a practicable form.
Fig. 1 Process model CMII
2.2 The State of the Art of the EC Procedure Model Based on Information Systems Currently, there are four main management systems applied in a product development company: Product Data Management (PDM), Material/Enterprise Resource Planning (MRP/ERP), Supply Chain Management (SCM) and Customer Relationship Management (CRM). These solutions focus on specific lifecycle processes and are applied in different departments. PDM helps design engineers manage product data and the product development process, plays a major role in the design departments and has a significant function in EC management. Any EC must involve the design departments, so the execution of ECM relies on the PDM system in modern enterprises. An ECM case from a diesel engine factory is described in Fig. 2; it enables this factory to manage product changes repeatably and systematically in accordance with the requirements of the industry-standard CMII closed-loop change model.
Fig. 2 Change management procedure model based on current information systems
Rectangles and diamonds represent activities, each of which is divided into two parts: the upper part is the action and the lower part is the participant. Activities are organized into a complex workflow for handling change objects. The procedure starts with the identification of the need for an EC, i.e. an engineering change request (ECR). It is usually raised by the manufacturing and design departments, but also by customers, quality inspectors and others. The coordinator needs to collect the information and prepare the ECR form. Once the ECR form has been prepared, it is first presented to the creator for technical review. If the change is felt to be unnecessary or uneconomic, it is rejected. If it is low-risk, it is handled through the fast-track process. If it is high-risk, the coordinator asks the relevant persons to analyse the change effect and submit an analysis report. To analyse the change effect, data from stores, purchasing,
industrial engineering, quality control, finance and marketing are requested. The results are presented to the change review board (CRB) to support the business decision. If the change is rejected, the reason is recorded and the package is returned with that message to the engineering department and the presenter. If it is accepted, the design director prepares the engineering change notice (ECN) form. The implementation plan described in the ECN is reviewed by the change implementation board (CIB). On approval of the ECN by the CIB, all concerned disciplines are notified of the approved change, its effective dates, batch numbers, etc. Meanwhile, the designers begin to redesign the identified product data and documents. After being audited, the new data are released. Engineering change orders (ECOs) are then raised to notify all departments concerned that the change is to be implemented in production according to the planned schedule. Many activities in this flow may be decomposed further, which means they are associated with further workflows. For example, the change-execution activity is associated with different workflows for different kinds of change, such as drawing changes, bill of material (BOM) changes and process changes. Fig. 3 shows a document design audit flow used for drawing changes.
Fig. 3 Document design audit flow
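The review routing described above (reject, fast track, or full analysis depending on feasibility and risk) can be sketched as a simple function; the stage names are shorthand, not identifiers from the actual system.

```python
def route_change_request(risk, feasible=True):
    """Return the ordered review stages for one ECR (illustrative).
    Infeasible or uneconomic requests are rejected; low-risk ones take
    the fast track; high-risk ones go through impact analysis, the
    change review board (CRB), ECN preparation and the CIB."""
    if not feasible:
        return ["reject"]
    if risk == "low":
        return ["fast-track", "implement"]
    return ["impact-analysis", "CRB-review", "ECN", "CIB-review", "implement"]

fast = route_change_request("low")
full = route_change_request("high")
```

The point of the sketch is only that the branching decision happens once, at intake, before any downstream department is involved.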
The role, authority and state of each activity are defined in Table 1.

Table 1. Activity setup

Phase        Number/role                                       Authority    Return to  Number of sign-in
Design       1/designer                                        Review_ACL   -          All
Check        1/checker                                         Review_Read  design     All
Audit        1/auditor                                         Review_Read  design     All
Process      1/technologist                                    Review_Read  design     All
Standardize  1/standardizor                                    Review_Read  design     All
Approve      1/approver                                        Review_Read  design     All
Check-in     1/archive administrator, 1/project administrator  Review_Read  -          All
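The activity setup of Table 1 can equally be held as a data structure that a workflow engine reads. This is a best-effort reading of the table, with field names invented for illustration.

```python
# Table 1 expressed as a data structure (illustrative field names;
# the cell values follow the reconstruction of the flattened table).
ACTIVITY_SETUP = {
    "Design":      {"roles": ["designer"], "authority": "Review_ACL",
                    "return_to": None, "sign_in": "All"},
    "Check":       {"roles": ["checker"], "authority": "Review_Read",
                    "return_to": "Design", "sign_in": "All"},
    "Audit":       {"roles": ["auditor"], "authority": "Review_Read",
                    "return_to": "Design", "sign_in": "All"},
    "Process":     {"roles": ["technologist"], "authority": "Review_Read",
                    "return_to": "Design", "sign_in": "All"},
    "Standardize": {"roles": ["standardizor"], "authority": "Review_Read",
                    "return_to": "Design", "sign_in": "All"},
    "Approve":     {"roles": ["approver"], "authority": "Review_Read",
                    "return_to": "Design", "sign_in": "All"},
    "Check-in":    {"roles": ["archive administrator", "project administrator"],
                    "authority": "Review_Read", "return_to": None,
                    "sign_in": "All"},
}

def reviewers_for(phase):
    """Roles that must sign off a given workflow phase."""
    return ACTIVITY_SETUP[phase]["roles"]
```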
2.3
Problem Analysis
Although we have set up such a change workflow for this factory, it does not run smoothly. Anyone who participates in activities relating to the product can present a change request; requests usually come from the manufacturing and design departments, but also from customers, quality inspectors and others. The EC data are usually stored in the PDM system, while the manufacturing-related data used to analyze change effects usually have to be obtained from the ERP system. The EC functions are performed by the R&D division of a company, but the effects of an EC are spread across different phases of the product lifecycle. For example, the ECN and ECO need to be sent to the affected divisions so that they can respond to the EC. In most businesses, few relations exist between these divisions. Data generated by or needed by these different systems are independent, so data acquisition must be done on paper or by accessing the different systems separately. The ECR and ECN review flows do not run in the PDM system; only the records are stored there. The reason is that the PDM system is used only in the design department, and people in other departments cannot log on to it to do the review work.
3.
Solving Strategies
3.1
Integration Information Requirement Analysis
The integration information requirements can be identified from Fig. 2:

- ECRs from different departments need to be entered into PDM in the first step;
- The information on changed parts stored in ERP, such as stock numbers, purchasing plans, production plans, standard costs, manufacturing costs, material costs, work hours and charge rates, is forwarded to the CRB and CIB committee members for reference;
- The ECN, ECO, difference BOM and changed documents stored in PDM are forwarded to the relevant persons who are users of ERP.
To resolve the above issues, it is necessary to develop an integration environment that bridges the gaps between the different systems. 3.2
Integration Environment Framework
Data sharing can be realized through a sharing database. The shared data are extracted from the databases of PDM, ERP or any other relevant systems and stored in the sharing database for public use. EC data are shared and communicated between all concerned parties immediately after they enter the system. This allows simultaneous data access and processing, whereas paper-based and standalone systems only allow single-user access, so the throughput time is significantly reduced.
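A minimal sketch of the sharing-database idea, using in-memory sqlite databases as stand-ins for the PDM and ERP systems; all table and column names are hypothetical.

```python
import sqlite3

def build_sharing_db(pdm, erp, part_ids):
    """Extract EC-related records from the PDM and ERP databases into a
    sharing database for public use (schema invented for illustration)."""
    share = sqlite3.connect(":memory:")
    share.execute("CREATE TABLE shared_ec (part_id TEXT, ecn TEXT, stock INTEGER)")
    for pid in part_ids:
        ecn = pdm.execute("SELECT ecn FROM changes WHERE part_id=?",
                          (pid,)).fetchone()
        stock = erp.execute("SELECT qty FROM stock WHERE part_id=?",
                            (pid,)).fetchone()
        share.execute("INSERT INTO shared_ec VALUES (?, ?, ?)",
                      (pid, ecn[0] if ecn else None, stock[0] if stock else 0))
    share.commit()
    return share

# Demo with two stand-ins for the source systems.
pdm = sqlite3.connect(":memory:")
pdm.execute("CREATE TABLE changes (part_id TEXT, ecn TEXT)")
pdm.execute("INSERT INTO changes VALUES ('P-100', 'ECN-7')")
erp = sqlite3.connect(":memory:")
erp.execute("CREATE TABLE stock (part_id TEXT, qty INTEGER)")
erp.execute("INSERT INTO stock VALUES ('P-100', 42)")
share = build_sharing_db(pdm, erp, ["P-100"])
row = share.execute("SELECT ecn, stock FROM shared_ec "
                    "WHERE part_id='P-100'").fetchone()
```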
The CRB and CIB committee members can be added as PDM users with the proper authority. When a change workflow reaches the relevant activities, the CRB or CIB committee members receive a window message and an email; they then log on to the PDM system to do their tasks. When the review of the ECN or ECO is completed, all persons concerned with the change are notified through the message mechanism and by email. The framework is shown in Fig. 4.
Fig. 4 Integration environment framework
W-, P- and E- represent Windows user, PDM user and ERP user respectively.
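The message-plus-email notification described above can be sketched with injected transport callbacks (all names hypothetical), which keeps the notification logic independent of any particular messaging or mail service.

```python
def notify_members(activity, members, send_message, send_email):
    """When the workflow reaches a review activity, push a window message
    and an email to each board member via the injected transports."""
    for member in members:
        send_message(member, f"Task waiting in activity '{activity}'")
        send_email(member, f"[ECM] Please review: {activity}")

# Demo: record what would have been sent instead of sending anything.
sent = []
notify_members("CRB review", ["P-alice", "P-bob"],
               send_message=lambda user, text: sent.append(("msg", user)),
               send_email=lambda user, text: sent.append(("mail", user)))
```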
4.
Conclusions and Future Works
The ECM procedure model has been analyzed on the basis of information systems. Currently, the implementation of ECM needs many manual steps to link activities within and between systems, which makes it too hard to use in practice. The major issue is how to integrate the information stored in the various systems; this is the foundation for realizing integrated engineering change management. This research has proposed an information integration framework that allows EC data to be shared and communicated between all parties. Implementing the integration between the different information management systems, in order to demonstrate the use of this method, is the subject of future research. This research has considered information integration; other future work could consider engineering change under conditions of process integration and automation.
5.
Acknowledgements
The authors are most grateful to the China fund council, Shandong University and the PLM Alliance of the University of Michigan for the financial support that made this research possible.
6.
References
[1] Huang GQ, Mak KL (1998) Computer aids for engineering change control. Journal of Materials Processing Technology 76(1-3):187-191
[2] Dale BG (1982) The management of engineering change procedure. Engineering Management International 1(3):201-208
[3] Huang GQ, Yee WY, Mak KL (2001) Development of a web-based system for engineering change management. Robotics and Computer-Integrated Manufacturing 17(3):255-267
[4] Yang CO, Cheng MC (2003) Developing a PDM/MRP integration framework to evaluate the influence of engineering change on inventory scrap cost. International Journal of Advanced Manufacturing Technology 22:161-174
[5] Yang CO, Chang MJ (2006) Developing an agent-based PDM/ERP collaboration system. International Journal of Advanced Manufacturing Technology 30:369-384
[6] Peng TK, Trappey AJC (1998) A step toward STEP-compatible engineering data management: the data models of product structure and engineering changes. Robotics and Computer-Integrated Manufacturing 14(2):89-109
Research and Realization of Standard Part Library for 3D Parametric and Autonomic Modeling Xufeng Tong1, Dongbo Wang2, Huicai Wang1
1 School of Electronic Mechanical Engineering, Xidian University, Xi’an, China
2 Mechatronics Engineering Institute, Northwestern Polytechnical University, Xi’an, China
Abstract The revision and expansion of standard parts has always caused library users considerable trouble, because it is mostly implemented through programming effort that is difficult for users to carry out independently. A novel dynamic autonomic modeling method for a 3D standard part library is presented and an interactive modeling wizard is constructed in this work. A detailed account of the realization steps is given. The idea and characteristics of the autonomic modeling method are then analysed in comparison with traditional methods. Furthermore, based on SolidWorks 2005, the key algorithms and the database design are realized in order to correlate the driven parameters to the 3D Computer Aided Design (CAD) models automatically. An application example illustrates that the dynamic autonomic modeling method allows users to modify the library conveniently and effectively and frees them from heavy programming work. Keywords: Standard part library; Parameterization; Autonomic modeling; Algorithm
1. Introduction
Establishing a common standard part library that complies with national and enterprise standards is necessary for improving product design efficiency. Generally speaking, the modeling of 3D standard parts comprises static modeling and dynamic modeling (parametric modeling):
1) Static modeling uses a 3D CAD modeling tool. A developer establishes a complete 3D model for every standard part in use and inputs these models into a standard part library; parts in the library can then be called according to the design requirements. In practice, every kind of standard part comprises a series of parts with the same 3D appearance and different specifications. Under static modeling, the developer has to construct the same 3D CAD model repeatedly in different sizes for each specification, so inputting and managing these numerous models is laborious.
2) Dynamic modeling is a method that realizes 3D parametric modeling of standard parts through secondary programming. Because standard parts of the same series have the same topological structure and differ only in size, they can share one 3D entity model whose actual sizes are retrieved from a parameter table stored in a database. A standard part can then be modified simply by editing its parameters in that table, which makes the maintenance and management of a standard part library very convenient. This method is a feasible way to establish a standard part library. Several kinds of dynamic modeling methods under different 3D CAD environments are introduced in [1], [2], and [3]. As shown in Fig. 1, they share the following features and procedures.

Figure 1. Traditional Dynamic Modeling Method (a program links the 3D CAD model of the standard part, its feature information, and the driven parameters stored in the database)
1) The 3D CAD model for each standard part is constructed first.
2) According to the CAD model feature parameters, the driven parameter table is created in the database. During the modeling process, the driven parameters must be kept consistent with the CAD modeling method. For example, if a cylinder feature is modeled by stretching a circle, the corresponding driven parameter should be the diameter of that circle. Similarly, the driven parameter of a cylinder feature modeled by rotating a rectangular cross-section should be the length of that rectangle.
3) Finally, the proper driven parameters are called and the parametric modeling is implemented through programming.
Thus, a 3D standard part library can be established by the traditional dynamic modeling method, but this requires a great deal of programming and database handling, so it must be completed by professional programmers. However, the initial standard part library cannot remain unchanged forever: as products are modified, enterprises will inevitably revise existing standard parts or need to add new ones. As users of the standard part library, product designers are more adept at applying CAD software. Therefore, the following difficulties arise if an enterprise does not want to rely on programmers for the revision and expansion of the standard part library:
1) Users must be equipped with basic knowledge of a database management system, such as the creation and modification of tables.
2) Users must also master at least one programming language in order to achieve parametric modeling for the new parameters.
Evidently, it is unrealistic to turn every user into a programmer who is familiar with databases, so enterprises have to resort to programming staff for the revision and expansion of the standard part library. This situation increases design costs and lowers work efficiency. To solve these problems, 3D parametric autonomic modeling for a standard part library is researched and realized in this paper. The autonomic modeling method allows users to revise and expand the standard part library conveniently and frees them from troublesome programming and database operations.
2. Standard Part Library for 3D Parametric Autonomic Modeling

2.1 Modeling Method
The key point of dynamic modeling is to realize parameter driving of the 3D CAD model; in other words, the 3D CAD features must be linked properly to the standard part parameters. The 3D parametric autonomic modeling of the standard part library is realized by an interactive modeling wizard, as explained in Fig. 2. The key steps of this process are as follows:
Step one: 3D entity modeling
The user employs a modeling tool (such as UG or SolidWorks) to build the 3D entity model of the standard part. In this process there is no specific requirement on the modeling method; for example, a cylinder may be modeled either by stretching a circle or by rotating a rectangle.
Step two: extracting model feature parameters
The model feature parameters are extracted using graphical topology technology, and all feature parameters of the 3D model are displayed, in particular those selected by the user to drive the model. For example, if a cylinder feature stretched from a circle is to be driven, the user can select the diameter of that circle. This step is shown in Fig. 2a.
Step three: correlating driven parameters and feature parameters
The feature parameters represent the modeling process of a standard part. The driven parameters represent the alterable sizes of a standard part and come from the standard part manual (such as the GB manual) that specifies the standard parts. Both sets of parameters are shown as lists in the interactive modeling wizard; the user designates them and correlates them one-to-one so that they can be called during 3D modeling, as shown in Fig. 2b. For instance, the parts of the same series of standard flanges differ in the number of connecting holes. The corresponding parameters of the connecting holes are defined in the standard part manual as D2 (the distribution angle of the holes) and N (the total number of holes). During 3D modeling of the flange part, if the holes are modeled by the array method, the corresponding feature is a circular array, and the extracted feature parameters are D1 and D3, which represent the number of holes and the distribution angle of the holes respectively. Consequently,
users can correlate D1 to N and D3 to D2 in a table named the driven parameter relationship table through the interactive modeling wizard. Finally, according to the manual, the actual sizes of each specification of the standard flange part are input into another table named the size parameter table. Both tables are created in the database automatically. Fig. 2c illustrates the size input procedure.

Figure 2. Interactive Modeling Wizard: (a) extracting feature parameters; (b) correlating driven parameters and feature parameters; (c) inputting the actual size parameters
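As a minimal illustration of step three (this is not the authors' implementation; the function and variable names here are hypothetical), the one-to-one correlation for the flange example can be thought of as a simple mapping from extracted feature parameters to driven parameters, held in memory before it is persisted to the driven parameter relationship table:

```python
# Feature parameter -> (driven parameter, data type), as in the flange example:
# D1 of the circular array is correlated to N, D3 to D2.
correlation = {
    "D1@Array (circle) 1": ("N", "number"),       # total number of holes
    "D3@Array (circle) 1": ("D2", "angle size"),  # distribution angle of holes
}

def driven_for(feature_param):
    """Look up the driven parameter correlated with a feature parameter."""
    driven, _data_type = correlation[feature_param]
    return driven

print(driven_for("D1@Array (circle) 1"))  # N
```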
Step four: realizing the parametric modeling
The 3D parametric modeling is realized by calling the driven algorithms, which are introduced in a later section. The interactive modeling wizard provides users with a convenient platform: guided by its prompts, users can realize the parametric modeling of standard parts step by step.

2.2 Autonomy
Compared with other dynamic modeling technologies, this method offers the following autonomy:
• Independence of 3D modeling methods. In the general dynamic modeling procedure, each driven parameter for a series of standard parts is pre-set in the database, so the candidate features of the 3D model must conform to the requirements of the modeling parameter table. In the method proposed here, by contrast, the design of the driven parameter table follows the 3D entity modeling step; in other words, the driven table is adapted to the 3D entity. Users can therefore choose a 3D entity modeling method they are skilled in, which improves efficiency.
• Arbitrary selection of driven parameters. According to the features of the products and standard parts, users can select only the driven parameters they need, rather than all parameters. The algorithms and database structure introduced in Sections 3 to 5 ensure the correct association between driven parameters and 3D CAD models.
• Autonomic definition of the driven parameter table in the database. Generally, database tables are created by specialized programmers, and it is difficult for every user to master database and programming techniques; this dependence on programmers becomes a bottleneck when the standard part library is modified or expanded. The operations performed by users in the modeling wizard shown in Fig. 2 require neither programming nor direct database work: users merely specify the driven parameters and correlate them one-to-one with the entity model parameters, and the remaining difficult work is completed by the programs automatically.
This autonomy ensures independence from programmers, thus simplifying the modeling procedure and improving efficiency when the standard part library is modified or expanded.

2.3 Realization of Key Techniques
The standard part parametric autonomic modeling method indicates that the key techniques lie in:
• Model feature parameter extraction. The correct extraction of the 3D entity model feature parameters is the foundation of parameter driving.
• Design of the database. Since the driven parameter table determined by the user is built dynamically, its correlation to the other tables is the key to the construction of the database.
• Driving of the parameters. The entity model can be driven accurately only if all kinds of driven parameters, such as linear sizes and angle sizes, are processed correctly.
3. The Model Feature Parameter Extraction Algorithm
At present, most 3D CAD packages have their own graphical topology functions that can extract all the feature parameters of a 3D model. The following program shows the application of the topology algorithms in SolidWorks 2005; the function GetNextFeature is used for feature extraction and GetDimension for the feature parameters.
Algorithm:
    Set swPart = swModel
    Set swFeat = swPart.FirstFeature                ' get the first feature
    listAllFeatureDim.Clear                         ' clear the feature list
    While Not swFeat Is Nothing                     ' outer loop: while features remain
        message = swFeat.Name
        Set swDispDim = swFeat.GetFirstDisplayDimension
        If Not swDispDim Is Nothing Then            ' list the feature if it has parameters
            listAllFeatureDim.AddItem message
        End If
        While Not swDispDim Is Nothing              ' inner loop: while parameters remain
            Set swDim = swDispDim.GetDimension      ' extract the current parameter
            sFullDimName = swDim.FullName
            Dim mypos As Variant
            mypos = InStrRev(sFullDimName, "@", -1)
            sDimName = Left(sFullDimName, mypos - 1)
            listAllFeatureDim.AddItem "  " + sDimName
            Set swDispDim = swFeat.GetNextDisplayDimension(swDispDim)  ' next parameter
        Wend
        Set swFeat = swFeat.GetNextFeature          ' next feature
    Wend
As shown above, the algorithm contains two loops written with While ... Wend. In the outer loop, the function GetNextFeature lists all features of the 3D CAD model one by one; in the inner loop, GetDimension extracts every parameter of the current feature. Through the two nested loops, all parameters of the 3D CAD model are displayed as a list, from which users can choose the parameters they need.
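The traversal logic itself does not depend on SolidWorks. As a rough sketch (in Python, over a mock feature tree with hypothetical names), the same two nested loops and the truncation of each full dimension name at its last "@" (what InStrRev and Left do in the VB code) look like this:

```python
# Hypothetical feature tree: (feature name, list of full dimension names).
mock_model = [
    ("Draft 1",   ["D1@Draft 1@Part1", "D2@Draft 1@Part1"]),
    ("Stretch 1", ["D1@Stretch 1@Part1"]),
]

def list_feature_dims(model):
    """Walk features (outer loop) and their dimensions (inner loop),
    keeping everything before the last '@' of each full dimension name."""
    listing = []
    for feat_name, dims in model:        # outer loop: one entry per feature
        if dims:                         # list the feature only if it has parameters
            listing.append(feat_name)
        for full_name in dims:           # inner loop: one entry per dimension
            listing.append("  " + full_name.rsplit("@", 1)[0])
    return listing

for line in list_feature_dims(mock_model):
    print(line)
```

Running the sketch yields the same kind of indented listing the wizard displays: each feature name followed by its dimension names such as "D1@Draft 1".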
4. The Design of the Driven Parameter Table in the Database
In order to ensure the correlation between driven parameters and feature parameters, a special table is created automatically in the database. The relationship between driven parameters and feature parameters is designated in this table, which has a feature parameter column and a corresponding driven parameter column; each record stands for one pair of matched parameters. The data type column represents the modeling method of the feature. Taking a series of flange parts whose GB code is GB1000, Table 1 illustrates the structure of the table.

Table 1. The SU_1000 Table in the Database

No.  Feature parameter      Driven parameter   Data type
1    D2@Draft 1             d                  Linear size
2    D1@Draft 1             D0                 Linear size
3    D2@Draft 2             M                  Linear size
4    D1@Stretch 1           H                  Linear size
5    D3@Array (circle) 1    D2                 Angle size
6    D1@Array (circle) 1    N                  Number
7    D1@Draft 2             D1                 Linear size
Each column in the size parameter table stands for a size parameter of the standard parts, and each record gives the actual sizes of one specification. Table 2 shows the size parameter table of the GB1000 standard parts.

Table 2. The SU_1000_PARA Table in the Database

d    D0    M    H    N    D1    D2
30   80    15   20   4    60    360
40   100   18   30   4    60    360
20   120   20   40   5    60    360

Since both tables are created dynamically, they must be named uniquely and transparently so that other tables can retrieve them conveniently; thus the unique GB code of a standard part forms part of the table names, such as SU_1000 and SU_1000_PARA. With these tables, any size of a standard part can be related to its 3D entity model.
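The dynamic creation of the two tables from a part's GB code can be sketched as follows. This is an illustrative sketch only, using Python's standard sqlite3 module rather than the SQL Server 2000 used in the paper, and the function name is hypothetical; the point is that no table schema needs to be fixed at programming time.

```python
import sqlite3

def create_library_tables(conn, gb_code, driven_params):
    """Create SU_<code> and SU_<code>_PARA for one standard part family."""
    rel = f"SU_{gb_code}"          # driven parameter relationship table
    para = f"SU_{gb_code}_PARA"    # size parameter table
    conn.execute(f'CREATE TABLE {rel} ('
                 '"no" INTEGER, feature_param TEXT, driven_param TEXT, data_type TEXT)')
    # One column per driven parameter chosen by the user, built dynamically.
    cols = ", ".join(f'"{p}" REAL' for p in driven_params)
    conn.execute(f"CREATE TABLE {para} ({cols})")
    return rel, para

conn = sqlite3.connect(":memory:")
rel, para = create_library_tables(conn, "1000", ["d", "D0", "M", "H", "N", "D1", "D2"])
conn.execute(f"INSERT INTO {para} VALUES (30, 80, 15, 20, 4, 60, 360)")
print(conn.execute(f"SELECT d, N FROM {para}").fetchall())  # [(30.0, 4.0)]
```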
5. The Driven Algorithm
The designated parameters can be driven by using functions of the 3D modeling software; for instance, the function Parameter(sDriverPara(t)).SystemValue in SolidWorks 2005 is suited to this job. However, the size units of features and of standard parts differ. For lengths, the unit is the metre for features and the millimetre for standard parts; for angles, it is the radian for features and the degree for standard parts. Moreover, if a feature is designed through an array or a mirror, the corresponding parameter type is a number. It is therefore important to recognize each kind of feature parameter and transform it correctly. The related algorithm is as follows:
Algorithm:
    For t = 0 To nCout
        If sValueType(t) = "linear size" Then        ' transform linear size (mm to m)
            .Parameter(sDriverPara(t)).SystemValue = sDimParaVal(t) / 1000
        ElseIf sValueType(t) = "angle size" Then     ' transform angle size (degree to radian)
            .Parameter(sDriverPara(t)).SystemValue = sDimParaVal(t) * PI / 180
        ElseIf sValueType(t) = "number" Then         ' numbers need no transformation
            .Parameter(sDriverPara(t)).SystemValue = sDimParaVal(t)
        End If
    Next
Through the transformation of the different parameter types, the function Parameter(sDriverPara(t)).SystemValue passes the actual parameters to the 3D CAD model, which can then be formed according to the parameters in Table 2. From the features and procedures of the traditional parametric modeling methods introduced in Section 1, it can be concluded that those methods limit the patterns of 3D CAD modeling, because their database structure is designed before the program development stage and is unchangeable: once the standard library is modified or revised, the corresponding data tables must be changed or added, and the program must be modified as well. On the contrary, with the parametric and autonomic modeling technology, the data tables are created dynamically and automatically, so that modifications of the program are avoided. Furthermore, the correlative algorithms ensure the association of the driven parameters with the 3D CAD models. The dynamic database structure and the algorithms thus simplify the revision and expansion of standard parts to a great extent.
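The unit handling in the loop above can be restated as a small standalone sketch (in Python, with a hypothetical function name, since the paper's code runs only inside SolidWorks):

```python
import math

def to_system_value(value_type, value):
    """Convert a standard-part parameter to the CAD system's internal unit."""
    if value_type == "linear size":
        return value / 1000            # millimetres -> metres
    elif value_type == "angle size":
        return value * math.pi / 180   # degrees -> radians
    elif value_type == "number":
        return value                   # instance count: no conversion
    raise ValueError(f"unknown value type: {value_type}")

print(to_system_value("linear size", 30))   # 0.03
print(to_system_value("angle size", 360))   # 6.283185307179586
```

For the first row of Table 2, a flange diameter d = 30 mm becomes 0.03 m and the distribution angle D2 = 360 degrees becomes 2π radians, while the hole count N = 4 is passed through unchanged.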
6. Conclusion
The parametric and autonomic modeling technology for standard parts provides an autonomic modeling platform for users and eliminates the dependence on programmers, through secondary programming in the 3D modeling software and a reasonable database structure. Although the examples given are based on SolidWorks 2005 and SQL Server 2000, the algorithms are applicable to other 3D modeling software and DBMSs. They are of great value for product design and standard information management. A system based on this technology has been applied in an enterprise.
7. References
[1] Xiao Liwen, He Yuanjun, Qin Pengfei. Development and application of a toolkit for building parametric parts library. Journal of Computer Aided Design & Computer Graphics, 2001, 13(5): 444-448
[2] Zhou Kangqu, Hu Biwen. Study on a distributed three-dimensional standard part library system for SolidWorks. Computer Engineering and Applications, 2005, 15: 221-223
[3] Liu Yonghong, Ren Gongchang, Zhang Youyun. Solid modeling of a CAD standard parts database based on network technology. Computer Engineering and Applications, 2002, 38(16): 198-200
[4] Tyrka K. Part libraries on the Web. Design News, 2002, 57(1): 80-83
[5] Huang Jing, Zhao Zhen, Chen Jun. Development of a 3D standard-part library of stamping die CAD in I-DEAS. Forging & Stamping Technology, 2004(5): 56-59
[6] Zhang Yilan, Mo Rong, Zhang Junbo. Design and implementation of a network standard part library based on heterogeneous CAD platforms. Mechanical Science and Technology, 2005, 24(3): 261-264
[7] Jin Tao, Zhong Ruiming, Chen Min. The technology of library construction of 3D parametric standard parts. Computer Engineering and Applications, 2002, 20: 25-28
[8] Wan Jiutuan, Huang Xiang. The establishment of a 3D parameterized standard part library based on UG. Machine Building and Automation, 2002(6): 82-84
[9] Yuan Bo, Zhou Yun, Hu Shimin, Sun Jiaguang. The assembly model of hierarchical components. Journal of Computer Aided Design & Computer Graphics, 2006, 12(6): 450-454
[10] Tang Tingxiao, Liao Wenhe, Huang Xiang. Research and application of product-level parametric modeling. Jiangsu Machine Building & Automation, 2005, 34(5): 61-64
[11] Wang Feng, Yu Xinlu. Research and development of a product-level three-dimensional parametric design system. Journal of Computer Aided Design & Computer Graphics, 2001, 13(11): 1012-1018
[12] Jiang Hong, Li Zhongxing, Xing Qien. Secondary Development Foundation and Directory on SolidWorks 2003. Beijing: Publishing House of Electronics Industry, 2003
Products to Learn or Products to Be Used?
Stéphane Brunel, Marc Zolghadri, Philippe Girard
IMS-Labs, Bordeaux University, 351 Cours de la Libération, 33405 Talence Cedex, France
Tel/Fax: +33(5) 4000 2405 / 6644, E-mail: [email protected]
Abstract The aim of this paper is to study how a product generates knowledge throughout its lifecycle. We show how this knowledge is generated and how it should be employed at various levels of decision making within the firm. Outside the firm, the learning and training induced by the product throughout its lifecycle contribute to the generation of an additional service that can be provided to customers and final users. Some of these ideas have already been tested through an industrial case; other prospective results are also proposed. These ideas may help to define a more efficient business strategy. A generic tool for strategic training positioning is suggested in order to allow a clear definition of the firm’s needs in terms of learning and training. Keywords: Generation of knowledge, extended product, ingenition, strategic decision.
1. Introduction
Innovation is often considered by firms as a main factor of differentiation. This differentiation is generally based on new technologies. Nevertheless, differentiation cannot be guaranteed by technological innovation alone. This paper shows how a product can generate knowledge by itself, or can foster knowledge generation, throughout its lifecycle. The authors believe that this represents a long-term differentiation parameter. Products are either functional or innovative (see Fisher [1]), and firms will achieve the differentiation goal if their managers set up and follow a coherent strategy, not only in terms of technological innovations but also in terms of knowledge management: "A company can override its rivals if and only if it can establish a difference which it can preserve" (Porter [2]). Consequently, the design of products cannot be an activity directed primarily towards technology. We focus our research on a specific kind of product used mainly to learn, and to teach something to trainees. We call this class of products "used for learning". It is this specific design approach, detailed further below, that we call "ingenition".
These "used for learning" products are extended products. An extended product is a product delivered to customers with all its associated services (see, for example, the work of Thoben [3]). In our case, the additional service consists of training. This additional service can consolidate the firm's differential strategic positioning if it is designed in harmony with the physical product along the product development phases. Porter shows that technological innovations are extremely difficult to implement, for the following reasons:
• difficult to conclude;
• difficult to industrialize;
• difficult to protect effectively from competition;
• difficult to make profitable and to release profits.
Similar products, often imitations, quickly come onto the market. Technological innovation is a hard challenge to carry out; companies should therefore also seek their differentiation parameters in fields other than technological innovation. A company can strengthen its relative business position on the market by providing various business services, offered to customers and final users; training is one of these services. This is not a new idea. What is new is that this service has to be designed and realized by setting up a cross-functional training strategy, focused on the customers' training and learning needs, along the whole physical product lifecycle. The training associated with each phase of the product lifecycle (from design to industrialization, followed by use and recycling) should use and capitalize the knowledge required for the firm's differentiation. The knowledge that can be backed up most easily is that generated and managed by the Internal Knowledge Generation (Int-KG) process within the firm's departments. This knowledge can be used by all employees, who can adapt it to their specific needs. We propose to study the knowledge generation processes. We can observe, extract and store knowledge, and study how it grows. The knowledge generated throughout the product lifecycle forms the main differentiation factor for strategic decisions. In a virtuous loop, the generated and managed knowledge contributes to training, can be re-used internally, and also pushes towards new technical solutions. The next section reviews the research works related to knowledge. We define some concepts that clarify our subsequent use of the initial paradigm, and finally propose a grid of analysis, design and study. The results are exploratory and are discussed at the end of the article, where ongoing research work is outlined.
2. State of the Art
Many works exist in the field of knowledge management, and this brief state of the art obviously cannot be representative of all of them; nevertheless, it contains the most important ideas related to our work. The model proposed by Nonaka [4] describes a knowledge creation framework with three different elements:
• The SECI process: the creation of knowledge by the conversion of tacit and explicit knowledge into reusable and transposable knowledge.
• "Ba": the social, cultural and environmental context of sharing for the creation of knowledge. This concept is not translated because it is firmly attached to Japanese culture and perception.
• The capitalization of knowledge: the inputs, outputs and regulators of the knowledge creation process.
Tollenaere [5] showed that it is necessary to model the data and the knowledge related to the product at the beginning of the design process. Several methodologies are used by Anglo-American and Scandinavian researchers, who study product knowledge representation by solving specific problems in the design phase or in other phases of the product lifecycle. For example, De Martino [6] discusses models under several aspects (geometry and simulation). Holmqvist [7] studies product architecture in the case of products of large variety. The integration between the geometrical definition of the product and its physical behaviour is discussed by Finger [8]. The approach of Grabowsky [9] sets the problem within the product lifecycle, where four levels of modelling appear:
• Level 1 - modelling of the conditions
• Level 2 - modelling of the functions
• Level 3 - modelling of the physical principles
• Level 4 - modelling of the form
The "function - behaviour - state" model of Umeda [10] and the "function development - model process" of Shimomura [11] have similar characteristics. The proposal of Andreasen [12] concentrates on the knowledge structure of a product according to four fields; this knowledge structure corresponds to the four sequential activities of design: physical phenomena, functions, organs and items.
The product multi-model developed by Tichkiewitch [13], Chapa Kasusky [14] and Roucoules [15] considers innovative design by seeking knowledge about a commodity coming from various external companies. This way of thinking makes it possible to preserve and share past experiments. This work is in agreement with Ouazzani [16], who shows how designers come to discover specific solutions; however, in that work the operational aspect is not studied, and the links between this model and the other activities of the process, or with the product itself, are not mentioned. We think that this point of view of creation, re-use and capitalization of knowledge is very important. We also find the Design Structure Matrix (DSM) presented by Browning [17], which keeps track of the possible paths of design. The DSM is often employed in the work of Anglo-American and Scandinavian scientists: Fagerstrom [18] employs it to structure the links between designers and subcontractors in a design process, and Lockledge [19] conceives an information system to facilitate communication between the actors. The European project DEKLARE, studied by Saucier [20], presents a product model based on the integration of three models: physical, functional and geometrical. Finally, the approach of Pourcel [21] is close to our research interests, even if it remains focused on knowledge management rather than on the generation of knowledge.
This short state of the art shows that our work is clearly related to these contributions, and suggests new possibilities and new fields for further research in this area.
3. Products to Learn
A product must be designed or redesigned in order to improve the strategic positioning of the firm. Design activity, when this is taken as a fundamental objective, is an operational means of achieving a clear competitive differentiation of the firm. While designing products, the various departments of the firm generate knowledge; it is possible to learn, and thus to know and manage better, all parts of the design process. Throughout the product lifecycle, innovative or not, it is possible to identify situations of study and training. When speaking about study, we include the knowledge and know-how generated internally in each phase of the product lifecycle. Several types of knowledge can be identified: 1) knowledge produced during the design phase, 2) knowledge produced during the production or manufacturing phase, 3) knowledge produced during use by customers and final users, 4) knowledge produced during maintenance and, finally, 5) knowledge produced by the product itself during the training phase.

3.1 Extended Product
Following Thoben [3], we define extended products as the combination of a physical product and additional services. The main service that we study in extended products is training-learning. This service is provided, for example, for operators (within the firms, who have to work either on the physical product or on the data related to it) and for university students or school and high-school pupils.

3.2 Extended Product Oriented Training
We call a product a "didactic oriented product" if it is designed and realized in order to transmit knowledge (a mini-robot to be built by students, for example). This definition seems to draw a clear border between didactic oriented products and other products. However, we think that any product can be employed as a didactic oriented product, characterized by an indicator called the LRI (Learning Relevancy Indicator). This concept is, in our opinion, fundamental, and we develop it in the next paragraphs. The LRI measures the product's potential in the following way:
• if a didactic oriented product has a low LRI, it is not adequate for transmitting the intended knowledge and will not be a good support for knowledge transmission;
• if an industrial product, and therefore a usage-oriented one, has a high LRI, then it can be used as a support for knowledge transmission.
This indicator of relevance for training (LRI) is a powerful element of decision making for the top management of a firm. The main trends and ideas regarding the use of the LRI are summarized in the following figure. Four main market positions are
identified: Critical, Target perfection, Normal and Opportunistic. The critical situation concerns a product that does not fit the learning purposes assigned to it (a micro-processor used for learning bipolar transistor principles). The target perfection situation is one where the product fits the requirements quite well; the only axiom in this situation is to improve continuously. The other two situations correspond to usage-oriented products (a TV or a car). At the beginning, such products may be designed and realized without any learning purpose in mind; nevertheless, if a usage-oriented product proves to be a useful support for knowledge transmission, this can provide a new differentiation factor for its market conquest. The basic improvement strategies are numbered on the self-explanatory right side of the figure. The work done in our research seeks the strategy that helps firms move from the critical situation towards the target perfection situation.
But let us look more deeply at the knowledge generation process. In the figure below, we show the various sources of knowledge generation related to a product:
- internal knowledge generation, Int-KG;
- knowledge generation during product usage, KG-Using;
- knowledge produced during the maintenance of the product, KG-Maintenance;
- generation of knowledge for knowledge transmission, KG-DOP (learnability dimension).
4. Various Situations in the Generation of Knowledge

4.1. Internal Generation of Knowledge
The product generates knowledge throughout the various phases of its lifecycle: design, manufacture, marketing, etc. These phases somehow represent the power and capability of a firm. Various methods, such as MASK, REX and MSKM, are available to model the knowledge generated within these phases.
308
S. Brunel, M. Zolghadri and P. Girard
Figure 1. Generation of knowledge with an “Extended Product”.
4.2. Generation of Knowledge by Using
Often, final users understand exactly what a product can do only by using it. Consequently, firms provide instruction manuals to help them identify the variety of services the product can offer. In this case, the product is an operational vector of knowledge transmission. This corresponds to a specific set of learning strategies which we call “learning by using”. The experiences of users, if correctly collected, analysed and capitalised, form a significant source of knowledge for all services of the firm, especially for designers. This is what we could call the generation of knowledge from the experimental know-how of the users.

4.3. Generation of Knowledge by Maintaining
Often, manufacturers think about the maintenance of the product from the beginning of design. Two kinds of maintenance are usually distinguished: preventive and curative. The knowledge involved differs in each case, and we know that the knowledge generated by users in these situations is not identical to that generated by experts. This is “learning by maintaining”.

4.4. Generation of Knowledge by Training
The firm puts on the market a product which will be used to support knowledge generation for final users such as learners, students, etc. To discuss this last concept, we refer to our own practice as teachers (in university or in school). Sometimes a product used to support our teaching does not help us at all, or the results are completely different from those expected! In these scenarios the most pessimistic trainees do not understand anything, and the adequacy between the supporting product and the teaching is seriously questioned. This is related to the LRI indicator (i.e. the critical situation). Therefore, a strategy should be set up in order to move towards the target-perfection situation. To help decision makers in this crucial task, we are working on a global framework composed of a reference design model, tools and methods. This framework allows usage-oriented and learning-oriented products, with their specific sets of constraints, to be clearly differentiated. We study various learning situations, their relationship with the product itself, the various levels of interpretation, their accumulation and their aggregation.
5. The Analysis Grid
The idea of this section is to establish an analysis grid which helps decision makers formalise their strategy and support their decisions in this field. Like Merlo [22], we examine how knowledge, know-how and human factors grow, in order to identify methods for the capitalisation of knowledge and know-how in the design process. In fact, decisions should be taken based on the data, models and knowledge that will be employed in the design process. Consequently, the grid must allow the various levels of design decision to be expressed at different granularities. At the strategic level, one of the firm's aims should be to have a comprehensive view of its learning-training objectives. The decision makers have to distinguish how various (internal and external) factors can influence the design process, the production and the organisation. To this end, the analysis grid is built. Its main role is to help formulate decisions regarding learning-training within the firm or for final users. In this section, we build this grid step by step.

5.1. The Social Context and Environmental Interest
Here, one of the most important factors is the manner in which the social environment and society influence the design, the manufacture and the use of the product. Likewise, it is interesting to observe how a product can influence the social environment and society (the cellular phone, for instance). This means that the transmission of knowledge relating to the product will be influenced by the social environment and “the society of the customers” (haute couture, for example) [23]. It is also important to consider sociological studies of those for whom the future product is intended. From this point of view, we can use a cursor which measures the social constraints on a continuous scale going from soft constraints to hard ones (see Figure 2). Soft constraints mean that there is no specific constraint on the product (a pen, for instance). A hard-constrained product means that designers and all internal operators and managers should take care of these constraints in order to offer a product respecting the social and cultural constraints of final users (the clothing industry). Enumerating all these constraints may reveal that creating the product would be a serious strategic error. The tools for positioning the cursor are based on expert audits and are under development.

5.2. The Products and Their Customers
The second criterion relates to the final relation between the customer and the product: do the customers just want to use the product, or do they want to learn/teach with it? By analogy with the classification of Fisher [1], we propose a first classification. It is easy to understand that this criterion offers a continuous scale whose two ends are made up, respectively, of usage-oriented products (a calculator) and learning-oriented products (a ruler). Nevertheless, we postulate that “a product is always usable for both use and learning-training purposes”. This means that even a pure usage-oriented product can support a given knowledge transmission process. For example, a computer can be used not only for precise purposes (use) but also to understand the
way that a human uses it. Naturally, the educational and observational levels permitted by the product are not the same in these various cases (e.g. a pneumatic cylinder for industrial use versus the same pneumatic cylinder made translucent for the study of its internal components). We would like to stress that a usage-oriented product can be employed for study and knowledge transmission, but obviously the results will not be identical to those obtained with learning-oriented products. This simple observation sometimes shows why instructors cannot transmit their knowledge to their students: the product is badly adapted (low LRI, the critical situation).

5.3. Knowledge Generation in the Product Life Cycle
Now we integrate the product life cycle into our model. Its various phases are shown on the new grid (see Figure 2).

5.4. Integrating the Resources
At the level of the product life cycle, three classes of resources are included in the model: 1) generic tools (data-processing software, for example) and specific tools (CAD software); 2) generic knowledge (mechanical laws, etc.) and specific knowledge representing the know-how of the firm (laser cutting, etc.); 3) human resources. The managers must find these resources in-house or externally. Thus, the grid includes three indicators: tools, human resources and knowledge.
6. How to Use This Grid?
This grid contains two distinct parts: 1) the context part, allowing the description of the constraints of the environment, those related to the users and to the product; 2) the operational part, describing the strengths and weaknesses of the firm throughout the product life cycle with respect to the human resources and the knowledge of the actors. Initially, this grid makes it possible to describe the actual position (AS IS) of the firm within the framework of launching a new product project. This analysis enables the decision makers to identify the requirements for tool acquisition and allows a knowledge acquisition strategy to be set up. We propose formulating this strategy in three points:
- formulation of the needs,
- highlighting of the interdependences between these various acquisitions,
- planning in time of the necessary training.
The execution of this strategy should allow the realisation of the objective initially identified (TO BE).
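The AS-IS/TO-BE comparison can be sketched as a gap computation over the grid's indicators. The indicator names and the integer maturity scale below are assumptions for illustration, not part of the grid's definition:

```python
def acquisition_needs(as_is: dict, to_be: dict) -> dict:
    """Return, per indicator, the gap the acquisition strategy must close."""
    return {k: to_be[k] - as_is.get(k, 0)
            for k in to_be if to_be[k] > as_is.get(k, 0)}

# Hypothetical maturity levels (0..5) for the three grid indicators.
as_is = {"tools": 2, "human resources": 3, "knowledge": 1}
to_be = {"tools": 4, "human resources": 3, "knowledge": 4}
print(acquisition_needs(as_is, to_be))  # {'tools': 2, 'knowledge': 3}
```

The resulting gaps would then feed the three-point strategy: each non-zero entry is a need to formulate, order by interdependence and plan in time.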
Figure 2. Final grid.
7. Conclusion
In this paper, we have studied the training-oriented dimension of a product in each phase of its life cycle. The main idea is that the generation of knowledge during these various phases represents an important internal source of innovation: the firm can use the knowledge produced as a tool for its competitive positioning on the market. The main tool presented here, the analysis and positioning grid, makes it possible to model the social and cultural environment. It allows the aim of the product to be stressed (training vocation, usage vocation, or something in between), the knowledge produced in relation to the activity considered to be preserved, and the variations between what society can do itself and what it would externalise to be measured. In a market of ever-increasing complexity, any solution improving effectiveness should be explored in order to provide society with the means of keeping an interesting position on the world market. We believe that the study and model described here can be useful tools. However, further research is necessary to refine them and reach the final objective that we have set ourselves.
8. References
[1] Fisher M.L. What is the right supply chain for your product? Harvard Business Review, March-April 1997.
[2] Porter M.E. What is strategy? Harvard Business Review, November-December 1996.
[3] Thoben K.-D. Extended products: evolving traditional product concepts. BIBA Bremer Institut, EXPIDE Project, 2000.
[4] Nonaka I., Takeuchi H. Toward middle-up-down management: accelerating information creation. Sloan Management Review, 29(3), 1995.
[5] Tollenaere M. Quel modèle de produit pour concevoir? Symposium International La conception en l'an 2000 et au-delà: outils et technologies, Strasbourg, France, 1992.
[6] De Martino T., Falcidieno B., Habinger S. Design and engineering process integration through a multiple view intermediate modeler in a distributed object-oriented system environment. Computer-Aided Design, 1998, 30(6):437-452.
[7] Holmqvist T.K.P. Visualization of product structure and product architecture for a complex product in a mass customization company. 13th International Conference on Engineering Design, Glasgow, UK, 21-23 August 2001.
[8] Finger S., Fox M., Prinz F.B., Rinderle J.R. Concurrent design. Applied Artificial Intelligence, 1992, 6:257-23.
[9] Grabowski H. Towards a universal design theory. In: Kals H. (ed.) Integration of Process Knowledge into Design Support Systems. Kluwer Academic Publishers, 1999, ISBN 0-7923-5655-1, pp. 47-56.
[10] Umeda Y., Takeda H., Tomiyama T., Yoshikawa H. Function, behavior and structure. Applications of Artificial Intelligence in Engineering, Springer-Verlag, Berlin, 1990.
[11] Shimomura Y., Takeda H., Yoshioka M., Umeda Y., Tomiyama T. Representation of design object based on the functional evolution process model. ASME Design Engineering Technical Conferences, Boston, USA, 1995.
[12] Andreasen M.M. Machine Design Methods Based on a Systematic Approach. PhD thesis, Lund Technical University, Lund, Sweden, 1980.
[13] Tichkiewitch S. La communication entre acteurs dans un processus de conception intégrée. Entreprises communicantes: tendances et enjeux, Université Pôle Productique Rhône-Alpes, 5e session, 1997.
[14] Chapa Kasusky E., Tichkiewitch S. Modèle produit multi-vues pour une démarche intégrée de conception. 5e colloque Priméca, La Plagne, 1997.
[15] Roucoules L. Méthodes et connaissances: contribution au développement d'un environnement de conception intégrée. Thèse de l'Institut National Polytechnique de Grenoble, spécialité Génie Mécanique, 1999.
[16] Ouazzani A., Bernard A., Bocquet J.C. Process modeling: a design history capture perspective. 2nd International Conference on Integrated Design and Manufacturing in Mechanical Engineering, Compiègne, France, 1998.
[17] Browning T.R. IEEE Transactions on Engineering Management, August 2001, 48(3):292-306.
[18] Fagerstrom B., Johannesson H. A product and process model supporting main and sub-supplier collaboration. 13th International Conference on Engineering Design, Glasgow, UK, 21-23 August 2001.
[19] Lockledge J.C., Salustri F.A. Design communication using a variation of the design structure matrix. 13th International Conference on Engineering Design, Glasgow, UK, 21-23 August 2001.
[20] Saucier A. Un modèle multi-vues du produit pour le développement et l'utilisation de systèmes d'aide à la conception en ingénierie. Thèse de l'ENS, France, 1997.
[21] Pourcel C., Clémentz C. Modélisation, ingénierie et pilotage des établissements de formation. Actes du 1er Congrès international sur le management de la qualité dans les systèmes d'éducation et de formation, Rabat, 2004.
[22] Merlo C. Modélisation des connaissances en conduite de l'ingénierie: mise en œuvre d'un environnement d'assistance aux acteurs. Thèse de l'Université Bordeaux 1, décembre 2003.
[23] De Souza M., Dejean P.-H. Integration of cultural factors in product design. Techniques de l'ingénieur, L'Entreprise industrielle, ISSN 1282-9072, 2002, vol. 1.
Archival Initiatives in the Engineering Context Khaled Bahloul, Laurent Buzon, Abdelaziz Bouras LIESP Laboratory, University of Lyon, Campus Porte des Alpes, Bron, FR
Abstract Over the last decades, the amount of digital technical documents related to industrial products has increased exponentially. In spite of the application of traditional document engineering methods, the issue of long-term preservation is becoming crucial in the engineering context. Long-term preservation of digital technical materials requires a strong characterization of the structural and semantic properties, or data format, of these materials for purposes of validation, monitoring for obsolescence, transformation, etc. In this paper, we present some of the ongoing work linked to archive management in the engineering context and report on an initial experiment carried out to preserve information related to the product lifecycle context. Keywords: OAIS, Archive Management, Knowledge, Product Lifecycle

1. Introduction
Our digital heritage is highly endangered by the silent obsolescence of data formats, software and hardware, and severe losses of information have already occurred. Obsolescence of media and data formats is the most demanding problem, while the preservation of bit streams can be mastered by using well-known techniques [1]. The "long term" is long enough to be concerned with the impacts of changing technologies, including support for new media and data formats, or with a changing user community. The preservation of digital data for the long term presents a variety of challenges. One of the most important is technical, related to changes in the storage medium, software, devices and data formats. Another is social, related to behavioural aspects in terms of decision making, information selection, intellectual property and so on. Several projects have already targeted these issues, but few of them relate to the product engineering context. This paper presents some of these initiatives and projects. It focuses on the technical aspects of the problem, mainly on the preservation and retention of information. The first part of the paper is dedicated to a discussion of archival needs and the presentation of some projects. Then a brief presentation of a rich standardized conceptual ISO framework called the Open Archival Information System
(OAIS) is given. Finally, in order to assess the feasibility of preservation in the PLM context, a simple experiment is proposed at the end of the paper.
2. The Engineering Archival Projects
The term ‘archive’ has come to be used to refer to a wide variety of storage and preservation functions and systems [2]. Traditional archives are understood as facilities or organizations which preserve records, originally generated by or for a government organization, institution or corporation, for access by public or private communities. The archive accomplishes this task by taking ownership of the records, ensuring that they are understandable to the accessing community, and managing them so as to preserve their information content and authenticity. Few projects deal with archive management in the engineering field; the first industrial sectors concerned were the aircraft, space and nuclear industries, where the lifecycle of the product is very long [3]. The enterprise must be able to adapt itself to a rapidly changing digital environment without disrupting its operations, and should be able to find, authenticate and re-use processes and knowledge at will. Some recent projects have outlined the importance of semantic management and shown that the organization in this context must enable the ‘chain of preservation’, to justify faith that an electronic record retrieved from storage is the same in all essential respects as the record previously placed in storage [4]. These projects insist on the design of a single digital repository, on the importance of metadata management, and on the re-use of engineering design knowledge [5]. They also show the importance of using open standard tools in order to facilitate the re-use of data after a long period of preservation. To avoid confusion with simple ‘bit storage’ functions, a reference model, developed by CCSDS Panel 2 in response to ISO TC20/SC 13 [6], defines an Open Archival Information System (OAIS, ISO 14721) which performs long-term information preservation and access functions.
OAIS is a reference model that facilitates a much wider understanding of what is required to preserve and access information for the Long Term.
3. The Open Archival Information System (OAIS)
An OAIS archive is one that intends to preserve information for access and use by a designated community [7]. It includes archives that have to keep up with steady input streams of information as well as those that experience primarily aperiodic inputs. The OAIS presented in Figure 1 is separated into six functional parts and related interfaces. The lines connecting these parts identify communication paths over which information flows in both directions.
Figure 1. OAIS functional entities
The reference model addresses a full range of archival information preservation functions including ingest, archival storage, data management, access, and dissemination. It also addresses the migration of digital information to new media and forms, the data models used to represent the information, the role of software in information preservation, and the exchange of digital information among archives. It identifies both internal and external interfaces to the archive functions, and it identifies a number of high-level services at these interfaces. It provides various illustrative examples and some “best practice” recommendations. It defines a minimal set of responsibilities for an archive to be called an OAIS, and it also defines a maximal archive to provide a broad set of useful terms and concepts.
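The information flow through these functional entities can be illustrated with a toy pipeline: a submission (SIP) is ingested into archival storage as an archival package (AIP), while access builds dissemination packages (DIPs) on demand. A minimal Python sketch; the class and field names are illustrative, not part of the OAIS vocabulary beyond the package types:

```python
from dataclasses import dataclass, field

@dataclass
class Archive:
    """Toy OAIS-style archive: ingest -> archival storage / data management -> access."""
    storage: dict = field(default_factory=dict)    # archival storage (content)
    catalogue: dict = field(default_factory=dict)  # data management (metadata)

    def ingest(self, sip_id: str, content: bytes, metadata: dict) -> None:
        # Ingest turns a Submission Information Package into an AIP
        # plus descriptive information for data management.
        self.storage[sip_id] = content
        self.catalogue[sip_id] = metadata

    def access(self, query: str) -> list:
        # Access assembles Dissemination Information Packages on demand.
        return [(i, self.storage[i]) for i, m in self.catalogue.items()
                if query in m.get("title", "")]

archive = Archive()
archive.ingest("doc-1", b"<step/>", {"title": "pump casing STEP model"})
print(archive.access("pump"))  # [('doc-1', b'<step/>')]
```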
4. The OAIS-based Engineering Archive Projects
One of the first projects to use the concepts of the OAIS reference model in the engineering field is the LOTAR (Long Term Archiving and Retrieval of Product Data within the Aerospace Industry) project [7]. In this type of industry, archiving and retention of data and documents are needed as proof of legal constraints related to certification, product liability, contracts, re-use of data, manufacturing processes, modifications of products and documents, etc. The project group expected that the OAIS definition of archival storage would also be applicable to 3D data and PDM (Product Data Management) without modifications or extensions. It also used the STEP standard (ISO 10303) as a basis for the logical data models, semantics and formats needed to ensure that the data remain accessible to, and interpretable by, the designated community for the retention period. The project group considered three potential concepts for realizing an archive:
- a partial function within a PDM backbone system;
- a stand-alone archiving system;
- a mixed system environment with a distribution of archival and retrieval functions into both, possibly including a leading system.
The LOTAR project group strongly recommended an implementation of the processes belonging to the subject areas of ingest (and archiving), dissemination and removal according to OAIS. The processes were expected to represent the first-level description for audits aiming at data security and quality assurance. The functional modules were divided into subject areas such as ingest and archiving, archival storage, data management, access and dissemination. This project did not focus on all the data related to the life of the product in a "whole life" context; it concentrated its attention on the "beginning of life" data, whereas some newer initiatives, such as the LTKR (Long Term Knowledge Retention) consortium, deal with engineering data and knowledge beyond the "beginning of life" phases. The LTKR initiative relies on knowledge considered as a critical asset, because knowledge includes abstractions and generalizations [9]. It is interested in the development of application-based semantic technologies that manage metadata creation during archival and facilitate intelligent retrieval. This is used in a proactive way, taking an archived solution (a design, for example) as input for creating a new solution or modifying an archived one, and in a reactive way to manage contractual compliance and incident investigation legal issues. The general hypothesis is that all properties of engineering artifacts could be subject to future query and retrieval. The archive should contain both viewable and processable representations: viewable representations are used for on-screen perusal and manipulation (e.g. U3D, JT Open), while processable representations attempt to capture the full functionality of the original system (e.g. STEP). Some other projects related to this context, such as the MIMER and KIM projects, are also under development.
5. An Approach to Archive Management in a PLM Context
The approach that we are currently testing in the AncarPLM project [10], and briefly present in the following sections, is an attempt to model the data of a product along the different phases of its lifecycle. The first tests considered three phases of the product lifecycle: design, production and maintenance. This requires concepts of traceability and granularity of data to help in better structuring and optimizing the data to be archived. The granularity model is inspired by the KIM (Knowledge and Information Management Through Life) project [11] used to model the design process. For each phase (design, production, maintenance) we consider six levels for the structuring of the data: stages, projects, tasks, activities, operations and actions. For each level we define its models, participants, documents and work timetable. The metamodel of the data is primarily made up of three modules:
- the first module gathers the information on the three phases of the product lifecycle: design, production and maintenance;
- the second module gathers the six structuring levels: stages, projects, tasks, activities, operations and actions;
- the third module gathers the four types of information: models, participants, documents and timetable.

5.1. The Traceability of Product Data

The traceability of product data is carried out according to two modes. The first mode is interested in the history of technical entities; the second in the interactions between these entities. To ensure the traceability of the archived data, we propose to archive the initial data and then the changes in the data between the different phases (also called the "delta of the data"). This delta helps avoid the accumulation of archived data and is represented by the following attributes:
- the added value with respect to the information already archived,
- the difference between two successive versions,
- the knowledge acquired from the difference (why two versions? why the change?),
- the restitution of the genealogy of the products,
- the history needed to save an exact copy.
The main objective is then the optimization of the archiving process, to decrease the quantity of data to be saved with each evolution of the data. The principle of the proposed traceability analysis tools is to execute the needed traceability requests on each given level, along the different phases.
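The "delta of the data" idea can be sketched as follows: archive the initial version in full, then only the field-level differences between successive versions. A Python illustration; the paper does not describe the actual AncarPLM module at this level of detail, so the data shape is an assumption:

```python
def delta(previous: dict, current: dict) -> dict:
    """Field-level difference between two successive versions of product data."""
    changes = {}
    for key in previous.keys() | current.keys():
        if previous.get(key) != current.get(key):
            changes[key] = {"old": previous.get(key), "new": current.get(key)}
    return changes

# Two hypothetical versions of a part's metadata: only the change is archived.
v1 = {"material": "AlSi9Cu3", "thickness": 2.0, "author": "J. Doe"}
v2 = {"material": "AlSi9Cu3", "thickness": 2.5, "author": "J. Doe"}
print(delta(v1, v2))  # {'thickness': {'old': 2.0, 'new': 2.5}}
```

Replaying the archived initial version plus the chain of deltas restitutes the genealogy of the product, while keeping the stored volume far below full per-version copies.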
Figure 2. Principle of traceability analysis
5.2. The Granularity of Data
The granularity analysis must be based on analysers in order to:
- capture information through the stages of the product lifecycle,
- determine the level of detail of this information,
- extract the data by reconstruction,
- refine the quality of the information to the maximum extent.
We use the types of data (model, document, timetable and participants) defined in [10] as information filters. This filtering principle allows us to better capture the data according to the desired vision. Filtering may be done either by applying requests to the result, or by dealing with each element separately in order to study its relevance, as illustrated in Figure 3.
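The filtering by data type can be sketched as below. The item structure is a hypothetical simplification for illustration:

```python
# The four data types used as information filters (from the metamodel).
DATA_TYPES = {"model", "document", "timetable", "participant"}

def filter_items(items: list, wanted_type: str) -> list:
    """Keep only the archive items of the requested type (the desired vision)."""
    if wanted_type not in DATA_TYPES:
        raise ValueError(f"unknown filter: {wanted_type}")
    return [it for it in items if it["type"] == wanted_type]

items = [
    {"type": "model", "name": "pen_assembly.step"},
    {"type": "document", "name": "design_review.pdf"},
    {"type": "participant", "name": "designer A"},
]
print([it["name"] for it in filter_items(items, "document")])  # ['design_review.pdf']
```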
Figure 3. Granularity analysis
6. An Experiment
The following archiving prototype is based on PDM tools as generators of the initial data and on the open-source DSpace platform [12] as the archiving platform. DSpace adopts the OAIS model and vocabulary to articulate its objectives and its design terminology. The DSpace platform is "filled" with data in the form of files in XML format. The generated XML files are analysed and separated into two types of data: traceability data and granularity data. After this classification, the content of each file is analysed and the data to be archived are selected using decisions based on the filtering criteria and traceability requests (Figure 4). This is done with a module developed in the Java language. The resulting file is the output of the analysis system and also the input of the "filling" process which populates the DSpace platform. The PDM systems tested to generate the XML files are Windchill [13] and Audros [14]. They were used in a complementary way in order to validate the genericity of the proposed approach. Moreover, the Audros system organises and manages the set of documents and operations connected to the product, from design to delivery and maintenance. Figure 5 shows the tested example.
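The classification step, parsing the PDM-generated XML and splitting entries into traceability and granularity data before populating DSpace, might look like the sketch below. The element and attribute names (`item`, `kind`, `name`) are assumptions: the paper gives neither the export schemas of Windchill and Audros nor the code of the actual module, which was written in Java.

```python
import xml.etree.ElementTree as ET

def classify(xml_text: str):
    """Split exported PDM items into traceability vs. granularity data."""
    root = ET.fromstring(xml_text)
    traceability, granularity = [], []
    for item in root.iter("item"):
        target = traceability if item.get("kind") == "trace" else granularity
        target.append(item.get("name"))
    return traceability, granularity

# A hypothetical export for the pen example.
export = """<export>
  <item kind="trace" name="pen_v1_to_v2_delta"/>
  <item kind="gran" name="pen_body.model"/>
</export>"""
print(classify(export))  # (['pen_v1_to_v2_delta'], ['pen_body.model'])
```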
Figure 4. Populating ("filling") the DSpace database
Figure 5. The used example (pen) on Windchill
The archiving on DSpace goes through several stages: archiving the metadata, archiving the data, and checking and validating the archived data and the whole archiving process. One of these stages is shown in Figure 6.
Figure 6. One of the DSpace archiving stages
This simple example shows the feasibility of the proposed architecture.
7. Discussion
Archive management is an emerging issue in the PLM context. It constitutes a real challenge to preserve the knowledge related to the product's life cycle. Some of the projects dealing with this problem in the engineering field have been briefly described. They show the importance of managing metadata to facilitate the re-use of data long after its creation. The simple experiment presented in this paper shows that an articulation of some existing tools and neutral standards (such as XML) makes it possible to archive data and metadata in compliance with archiving methodologies. A structured interface could be considered to handle all the services, in order to facilitate the use and appropriation of the system by the end user. Furthermore, to address intelligent filtering of the data to be archived, more accurate query processors should be created. We believe that current investigations into ontology systems could be a good basis: ontology structures and Service-Oriented Architectures (SOA) could also improve the searching and extraction of archived data according to user needs. An extension of this work towards these concepts is under investigation.
8. References
[1] Hamburger S. Preservation and Conservation for Libraries and Archives. American Library Association, Chicago, 2005, ISBN 0838908799, 240 pp. Reviewed in Library Collections, Acquisitions, and Technical Services, 29(4), Dec. 2005, pp. 444-445; edited by Nelly Balloffet and Jenny Hille.
[2] IBM Research Report: Long-Term Archiving of Digital Information, 2000.
[3] Borghoff U., Rödig P., Scheffczyk J., Schmitz L. Langzeitarchivierung. dpunkt.verlag, Heidelberg, 2003.
[4] Keraron Y. Annotation functionalities to enable an improved use of digital technical publications. Proceedings of the International Workshop on Annotation for Collaboration, pp. 113-121.
[5] Zdrahal Z., Mulholland P., Valasek M., Bernardi A. Worlds and transformations: supporting the sharing and reuse of engineering design knowledge. International Journal of Human-Computer Studies, in press, corrected proof, available online 27 July 2007.
[6] ISO 14721:2003, Space data and information transfer systems - Open archival information system - Reference model, 24 February 2003. Previously available as CCSDS 650.0-B-1: Reference Model for an Open Archival Information System (OAIS), Blue Book, Issue 1, January 2002.
[7] International Organisation for Standardisation & International Electrotechnical Commission (IEC). ISO/IEC Guide 2: Standardization and related activities - General vocabulary (8th ed.). Geneva, Switzerland, 2004.
[8] LOTAR, Long Term Archiving and Retrieval of Product Data within the Aerospace Industry (LOTAR): technical aspects of an approach for application, 2003.
[9] Wiederhold G. Knowledge versus data. In: On Knowledge Base Management Systems, Springer-Verlag, 1986.
[10] AncarPLM - Analysis and characterization of PLM solutions - project of the French GOSPI research cluster, http://iutcerral.univ-lyon2.fr:8080/AncarPLM, 2007.
[11] Tang L.C.M., Austin S.A., Zhao Y., Culley S.J., Darlington M.J. Immortal information and through-life knowledge management (KIM): how can valuable information be available in the future? Proceedings of KMAP2006, 3rd Asia-Pacific International Conference on Knowledge Management, 11-13 Dec. 2006.
[12] Wewetzer C., Lamberg K., Otterbach R. Creating test patterns for model-based development of automotive software, 2006.
[13] PTC, The Product Development Company. Available at: http://www.ptc.com, 2007.
[14] Assetium, Gestion de Patrimoine Industriel. Technical document and data management software for SMB (PDM/PLM), 2007.
Design Information Revealed by CAE Simulation for Casting Product Development M.W. Fu Department of Mechanical Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
Abstract In casting product development, the design and development paradigm is shifting from traditional trial-and-error in the workshop to simulation-based virtual realization in the up-front design process. The traditional trial-and-error approach relies more on heuristic know-how than on deep scientific analysis and calculation. The knowledge and know-how acquired through trial-and-error is difficult to apply in similar product development, as a small change of product geometry can lead to significant changes of casting design, tooling design, melt flow pattern, and process route and parameter configuration. CAE simulation technology, which models the entire casting system and imitates the dynamic behaviors of the system in working conditions, provides complete design information for generating, verifying, validating and optimizing design solutions for process and die design via simulation of the entire casting process. In addition, the design information provided helps reveal and predict the final product output in terms of product microstructure, defects, quality and properties in such a way that the optimal design solution can be determined. In this paper, the modeling of casting processes is first articulated and the associativity between the casting process, modeling, simulation and output variables is presented. A simulation-based paradigm for revealing the information in different categories is described, and how the information helps design solution evaluation and verification is articulated. Through a case study, the information in the high pressure die casting filling process is presented and the phenomena in the filling process are further explained. Keywords: Casting process, CAE simulation, Integrated product and process design
1. Introduction
In today’s casting product development, the prevailing paradigm is basically trial-and-error. This paradigm cannot meet industrial needs and competitiveness requirements, as it is time-consuming, error-prone and not cost-effective. Currently, casting products, especially those made by high pressure die casting (HPDC), which is a casting
process under pressure, are in wide demand in many industries due to their near net shape or net shape characteristics, high productivity and the complicated geometries and features achievable in castings. As market demands for shorter design and manufacturing lead-times, good dimensional accuracy, overall product quality and rapid change of product design and process configuration increase significantly, these demands are becoming the bottleneck in casting product and process design and development. The traditional product development paradigm is obviously handicapped in this severely competitive marketplace. To address these issues, efficient enabling technologies are needed. Traditionally, CAD/CAM technologies provide an essential part of the solution, as they offer efficient means to represent design intent and solutions and help realize the design physically. CAD/CAM technologies greatly enhance design quality and shorten design and manufacturing lead-times. However, it is difficult to address some critical issues in the design of casting process, tooling structure, material selection and product property configuration, and finally in quality control and assurance, by CAD/CAM technologies alone. Computer-aided engineering (CAE) simulation technology, on the other hand, fills this gap, as it helps practitioners generate, verify, validate and optimize design solutions before they are practically implemented and physically realized. The technology and its simulation procedure have become a standard design tool and design process in casting product development, and CAE simulation is also increasingly adopted by small and medium enterprises as an emerging technology. In CAE simulation, the simulation is a representation of a physical system by models that imitate the dynamic behaviors of the system in working processes and conditions.
Numerical simulation uses numerical methods such as the Finite Element Method or the Finite Difference Method to quantitatively represent the working behaviours of the physical system. The numerical results correspond to the physical content of the system being simulated. Taking a metal casting process as an instance, the fluid dynamics of the metal melt in the cavity, the thermal phenomena and the solid-state transformation of the melt during the process need to be modeled by physical and mathematical models, and the final simulation results are thus related to the behaviours of the casting process and the properties of the cast products. From a production process perspective, the numerical simulation results are associated with the structure, quality, property and defect issues of the products. This up-front process and casting system simulation is critical, as the 20% of design activities at the up-front design stage commit about 80% of product cost and product quality issues. Furthermore, it has been reported that about 90% of product defects are related to mistakes made in the design stage and only 10% to manufacturing problems. In addition, it has been estimated that the cost of a design change increases roughly tenfold at each subsequent step of the design and manufacturing process [1]. Therefore, any methods and tools that improve design, or better still ensure “right design the first time”, and reduce tryout in the workshop will help cut product development cost and shorten time-to-market. CAE simulation technology is one of those tools.
Presently, the application of CAE simulation technology to support casting product development focuses mainly on casting design, process determination, flow pattern prediction, tooling design, quality control and product stress analysis. From the casting design perspective, CAE simulation helps analyze castability through filling and solidification simulation, and optimize casting geometries and features from the process determination, tooling design and quality control perspectives [2-4]. This type of design activity is critical, as it is the first design step and affects the entire casting system design and casting quality. From the process determination point of view, simulation helps determine the process route and parameter configuration [5-10]. It also helps verify and optimize die design [11-16]. From the product quality control and assurance perspective, simulation reveals melt flow and solidification behaviours and finally provides solutions for product quality improvement and design enhancement [17-25]. In this paper, how CAE simulation reveals design information for casting product development based on filling analysis, solidification simulation and stress analysis is presented. In addition, the modeling process, the simulation procedure and a paradigm of design information generation via simulation are described.
2. Modeling of Casting Process
Modeling of the casting process and system needs to represent the real processes by models. The models are usually formulated as governing equations and boundary conditions. Fig. 1 illustrates the associativity between the real processes, the simulation procedure, the physical phenomena and behaviors to model, the governing equations representing the models, and the output variables.

[Figure 1. Associativity among the process, modeling, simulation and output variables: mold filling (momentum balance / Navier-Stokes equation / velocities; mass balance / continuity equation / pressure; energy balance / energy equation / temperature), solidification and cooling (heat balance / thermal conduction equation / temperature), and stress and strain (equilibrium of forces / balance and state equations / displacements, stresses and strains). Cast and die materials and casting equipment form the input; the realized products are characterized by dimensions, microstructure, quality and properties.]

In the real casting processes, the materials and material properties, equipment and working parameters are the input
information for the modeling of the physical behaviors and phenomena of the casting processes. The simulation results reveal information about the performance of the designed process route and process parameter configuration, the tooling and the entire casting system. In addition, they further represent the microstructures, defects, quality and properties of the cast products. From the modeling perspective, on the other hand, there are three phenomena or behaviors to be modeled: the filling process, solidification and cooling, and stress and strain in the casting and die. Taking the modeling of the filling process as an instance, there are three physical phenomena, viz. melt momentum balance, mass balance and energy balance, to be represented and modeled. These phenomena are modeled by the following governing equations.

Continuity equation (when $T > T_s$):

\[ \frac{\partial \rho}{\partial t} + \frac{\partial (\rho U_j)}{\partial x_j} = 0 \qquad (2.1) \]

Momentum equation (Navier-Stokes equation, when $T > T_s$):

\[ \frac{\partial (\rho U_i)}{\partial t} + \frac{\partial (\rho U_j U_i)}{\partial x_j} = -\frac{\partial p}{\partial x_i} + \frac{\partial}{\partial x_j}\left( \mu \frac{\partial U_i}{\partial x_j} \right) + \rho g_i \qquad (2.2) \]

Energy equation:

\[ \frac{\partial (\rho C_p T)}{\partial t} + \frac{\partial (\rho C_p U_j T)}{\partial x_j} = \frac{\partial}{\partial x_j}\left( \lambda \frac{\partial T}{\partial x_j} \right) + Q \qquad (2.3) \]

where $t$ is time, $x$ space, $\rho$ density, $\mu$ viscosity, $g$ gravity, $C_p$ heat capacity, $\lambda$ conductivity, $U$ velocity, $T$ temperature and $Q$ the heat source. For the open surfaces, a Volume of Fluid (VOF) function, defined as the ratio of metal melt volume to actual volume, is used to track the moving free surface of the metal melt. The VOF function is governed by the following equation:

\[ \frac{\partial F}{\partial t} + U_j \frac{\partial F}{\partial x_j} = 0, \qquad 0 \le F \le 1 \qquad (2.4) \]
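To make the discretization concrete, the sketch below advects a one-dimensional fill fraction F with a first-order upwind scheme, one common way to approximate Eq. (2.4). The velocity, grid spacing, time step and step count are hypothetical values, not data from this paper:

```python
# 1-D upwind discretization of the VOF transport equation
# dF/dt + U * dF/dx = 0, with 0 <= F <= 1 (Eq. 2.4).
# All numerical values are hypothetical.

def advect_fill_fraction(F, U=1.0, dx=0.1, dt=0.05, steps=10):
    """Advance the fill fraction F (list of cell values) in time."""
    c = U * dt / dx  # Courant number; must be <= 1 for stability
    assert c <= 1.0, "upwind scheme unstable for Courant number > 1"
    F = list(F)
    for _ in range(steps):
        new_F = F[:]
        for i in range(1, len(F)):
            # Upwind difference: melt flows in the +x direction (U > 0)
            new_F[i] = F[i] - c * (F[i] - F[i - 1])
        new_F[0] = 1.0  # inlet cell kept full of melt
        # Clamp to the physical bounds 0 <= F <= 1
        F = [min(1.0, max(0.0, f)) for f in new_F]
    return F

# Melt front entering an empty cavity: inlet cell full, rest empty
filled = advect_fill_fraction([1.0] + [0.0] * 9)
```

After a few steps the fill fraction decreases monotonically away from the inlet, mimicking a melt front advancing into the cavity.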
All of the above governing equations are nonlinear in terms of both geometry and material properties. They are linearized and discretized by numerical methods so that a set of simultaneous algebraic equations can be obtained. By solving these linearized equations, the velocity, pressure and temperature of the melt can be obtained. For solidification modelling, the Fourier heat conduction equation is used, and phase transformation enthalpies such as the latent heat of melting need to be considered. Through modelling the heat balance during solidification, the temperature in the casting is determined and its solidification behaviour can be revealed. To model casting stress and strain, the equilibrium equation and Hooke’s law, representing the relationship between displacements, stress and strain, are employed. The displacements, stresses and strains are then identified by solving these governing equations. Regarding residual stress, its formation is very complex due to nonlinear and elastic-plastic behaviours; the exact modelling and calculation of residual stress therefore remains a nontrivial issue in casting process modeling.
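The solidification side can be sketched in the same spirit: a one-dimensional explicit finite-difference (FTCS) form of the Fourier heat conduction equation, with the casting held between die surfaces. The diffusivity, grid spacing and temperatures below are placeholder values, not the material data of the later case study:

```python
# 1-D explicit finite-difference (FTCS) scheme for the Fourier
# heat conduction equation: rho*Cp*dT/dt = lambda * d2T/dx2,
# written with the thermal diffusivity alpha = lambda/(rho*Cp).
# All values are hypothetical placeholders.

def cool_casting(T, alpha=1e-5, dx=0.001, dt=0.02, die_temp=150.0, steps=100):
    """Cool a 1-D casting section towards the die temperature."""
    r = alpha * dt / dx**2  # must be <= 0.5 for FTCS stability
    assert r <= 0.5, "explicit scheme unstable"
    T = list(T)
    for _ in range(steps):
        new_T = T[:]
        for i in range(1, len(T) - 1):
            new_T[i] = T[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1])
        # Boundary cells held at the die temperature (chill surfaces)
        new_T[0] = new_T[-1] = die_temp
        T = new_T
    return T

# Melt poured hot between two die surfaces held at 150 degrees
profile = cool_casting([150.0] + [670.0] * 9 + [150.0])
```

The resulting symmetric temperature profile cools fastest near the die surfaces, the qualitative behaviour the heat-balance modelling described above captures.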
3. Information Revealed for Solution Generation and Verification

In the previous section, a casting process simulation paradigm was presented and the panorama of the simulation relationships articulated. This section focuses on how CAE simulation can help design solution generation and evaluation, and what information it reveals. Fig. 2 presents a solution generation and verification paradigm aided by the design information revealed by simulation in casting product development. In the figure, the whole casting system is configured through the design of the casting (product) geometry and design specifications, process route determination and process parameter configuration, die design, and casting equipment selection and working parameter configuration, taking into account the Voice of the Customer (VoC) and the detailed functional requirements and design specifications. The whole casting system can then be evaluated and verified through CAE simulation. In this process, the casting system is first modeled by establishing the physical, mathematical and numerical models of the system, which are then input into the CAE simulation system. In the CAE modeling process, the physical model idealizes the real engineering problem and abstracts it to comply with a certain physical theory under assumptions. The mathematical model specifies the mathematical equations the physical model should follow, such as the differential equations in FEM analysis; it also details the boundary and initial conditions and constraints. The numerical model describes the element types, mesh density and solution parameters; the solution parameters further provide detailed calculation tolerances, error bounds, iteration specifications and convergence criteria. Usually, most CAE simulation packages have part of these models built in, but users still need to prepare and input most of the model information into the CAE system.
The model information can be classified into three categories. The first is the CAD geometry model information related to CAD modeling of the product and tooling; it needs to go through CAD data exchange to convert it from native CAD models into a data exchange format such as STL. The second is the information related to material properties and working parameters, which is input directly into the simulation system. The third is the information related to controlling the simulation procedure, together with the numerical-model-related information including calculation tolerances, error bounds, iteration specifications and convergence criteria; this category also needs to be input into the CAE simulation system. With all of the needed information input into the simulation system, the CAE simulation can be conducted. Upon simulation, the filling-, solidification-, thermal- and property- and quality-related information is available for evaluation and verification of the system design and for generation of new or modified design solutions. How to use this information to aid solution generation and verification is, however, another nontrivial issue and needs specific methodologies and approaches. With the identified information, the solutions to be evaluated and verified include process-, tooling-, property- and quality-, or casting-design-related solutions. If these solutions are satisfactory, they
can be implemented in the workshop. Otherwise, new or modified solutions need to be generated and the casting system re-constructed accordingly. The search for a better design is carried on until the optimal design of the casting system is obtained.

[Figure 2. The information needed for design solution evaluation and verification. VoC and the requirements and specifications drive product design, process route and parameters determination, and tooling and whole forming system design; the casting system is modeled and represented (physical, mathematical and numerical models) and input into CAE systems; the revealed filling-, solidification-, thermal- and property- and quality-related information supports solution evaluation and verification (process-, die-, quality- and product-related) and, if necessary, a new or modified design of the system.]
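The three categories of model information described in this section could be organized along the following lines when preparing a simulation job. The structure is a hypothetical sketch; the class and field names are illustrative assumptions, not the input schema of any particular CAE package:

```python
from dataclasses import dataclass

# Hypothetical grouping of the three categories of model
# information supplied to a casting CAE simulation system.

@dataclass
class GeometryModel:            # category 1: CAD geometry
    casting_stl: str            # exchanged via STL, not native CAD
    die_stl: str

@dataclass
class ProcessModel:             # category 2: material & working parameters
    cast_alloy: str
    pouring_temp_c: float
    die_temp_c: float

@dataclass
class NumericalModel:           # category 3: simulation control
    mesh_density: float
    error_bound: float = 1e-4
    max_iterations: int = 500

@dataclass
class SimulationInput:
    geometry: GeometryModel
    process: ProcessModel
    numerics: NumericalModel

job = SimulationInput(
    GeometryModel("casting.stl", "die.stl"),
    ProcessModel("AlSi9Cu3", pouring_temp_c=670.0, die_temp_c=150.0),
    NumericalModel(mesh_density=0.5),
)
```

Grouping the inputs this way mirrors the physical/mathematical/numerical split: only the last group concerns the solver itself.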
4. Case Study
To illustrate the design information provided by CAE simulation, Fig. 3 shows the melt flow path and the layout of the casting system. The CAD models shown in the figure were created in Unigraphics, a commercial CAD/CAM system for product design and development, and then converted into STL format through CAD model data exchange. The generated STL models were directly imported into the casting CAE simulation system. In this case, the cast material is AlSi9Cu3 and the CAE simulation system is Magmasoft, a popular commercial casting simulation system in industry and academia. The pouring temperature of the melt is 670°C; the liquidus and solidus temperatures are 578°C and 479°C, respectively. All the die components have an initial temperature of 150°C. Five cycles of simulation were conducted to reach a stable simulation condition so that the simulation outcomes are reliable. The simulation reveals the melt front advancement (MFA) and the filling sequence during the filling process. The MFA position reveals the filling status; in addition, it identifies the filled-up places and the last-filled areas. The last-filled areas are usually the locations where slag and dross exist and air entrapment occurs; thus the overflows or air venting positions should be located at the last-filled areas. The filling sequence further verifies the MFA status in the filling process.
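The STL exchange format mentioned above is simple enough to sketch directly: an ASCII STL file is a list of triangular facets, each with a normal and three vertices. The writer below and its facet data are illustrative, not part of the case study:

```python
# Minimal ASCII STL writer illustrating the exchange format used to
# move CAD geometry into casting CAE systems. Facet data are made up.

def write_ascii_stl(name, facets):
    """facets: list of (normal, (v1, v2, v3)) tuples of 3-D points."""
    lines = [f"solid {name}"]
    for normal, verts in facets:
        lines.append("  facet normal {} {} {}".format(*normal))
        lines.append("    outer loop")
        for v in verts:
            lines.append("      vertex {} {} {}".format(*v))
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# One triangle in the z = 0 plane, normal pointing in +z
stl_text = write_ascii_stl("casting", [
    ((0.0, 0.0, 1.0), ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))),
])
```

Because STL carries only tessellated facets, all parametric CAD intent is lost in the exchange; the CAE system sees geometry alone, which is why material and process data form a separate input category.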
[Figure 3. Filling process simulation, showing the melt flow path and the layout of the casting system: inlet, biscuit, runner, ingates, castings and overflows.]
Fig. 4 presents the MFA status at different filling stages. Fig. 4 (a) shows 40% filling, with air entrapment in the runner. At 60% filling, Fig. 4 (b), the melt fronts at the two ingates of the cavities advance almost at the same pace; however, the air entrapments are still there. In Fig. 4 (c), the filling is 80% and the cavity appears filled from this direction of view; the melt starts flowing into the overflows. However, the melt in the upper two cavities moves faster than that in the lower two cavities, which creates difficulty in controlling the melt speed as required in the filling process. Fig. 4 (d) shows 90% filling. From this direction of view, it can be seen that the filling at this stage is not complete: the boss in the casting is not yet filled, although the melt has started filling the overflow. Therefore, the air in the boss feature will find it difficult to escape and porosity will occur in the boss feature. From this case study, it can be seen that the information revealed via CAE simulation about the filling status is helpful for evaluating the process determination, layout configuration and tooling determination.
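The way filling information guides overflow and vent placement can be sketched as a small post-processing step over per-cell fill times, with the last-filled cells proposed as candidate vent locations. The fill-time values below are hypothetical, not exported from the case study:

```python
# Post-processing sketch: from per-cell fill times, pick the
# last-filled cells as candidate overflow / air-vent locations.
# Fill-time data are hypothetical.

def candidate_vent_cells(fill_times, top_n=2):
    """Return indices of the last-filled cells, latest first."""
    order = sorted(range(len(fill_times)),
                   key=lambda i: fill_times[i], reverse=True)
    return order[:top_n]

# Fill time (ms) per cell; the boss feature (cell 4) fills last
fill_times = [2.0, 5.0, 8.0, 11.0, 30.0, 14.0, 22.0]
vents = candidate_vent_cells(fill_times)
```

In a real CAE workflow this ranking would run over the full mesh, but the principle is the one stated above: the last-filled regions are where entrapped air and dross concentrate, so that is where overflows and vents belong.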
[Figure 4. Simulation results of the filling process: (a) filling at 40%, (b) filling at 60%, (c) filling at 80%, (d) filling at 90%. The annotations mark air entrapment, melt fronts advancing almost at the same pace, filling into the overflows, and the last filling of the boss in the cast.]
5. Conclusions
In casting product development, design information is important in process determination, tooling design, casting system layout planning, and product quality assurance and control. Traditionally, this information could only be revealed through tryout of the design solution in the workshop. CAE simulation, however, is an efficient approach to providing this design information for casting product design. In this paper, the modeling of the casting process, the kinds of information needed for solution generation and verification in casting product development, and the information revealed by simulation have been presented. Through a case study, the information and behaviors revealed via CAE simulation related to the filling process have been presented.
6. Acknowledgments
The author would like to thank the Hong Kong Research Grants Council for supporting this research through Competitive Earmarked Research Grant BQ08V.
7. References
[1] A. Louvo, Casting simulation as a tool in concurrent engineering, International ADI and Simulation Conference, May 28-30, 1997. [2] G. Hartmann and A. Egner-Walter, Optimized development for castings and casting processes, 19th CAD-FEM Users’ Meeting 2001, Berlin-Potsdam. [3] W. Sequeira, R. Kind, R. Roberts and M. Lowe, Optimization of die casting part design, process parameters and process control using a new die casting simulation tool, Proceedings of Die Casting in the 21st Century, Cincinnati, Ohio, 2001. [4] T. McMillin, G. Hartmann and A. Egner-Walter, CAE opens new frontier in casting design, Engineered Casting Solutions, 29-31, Spring 2002. [5] T. Barriere, B. Liu and J.C. Gelin, Determination of the optimal process parameters in metal injection molding from experiments and numerical modeling, J. Mat. Proc. Tech., 143-144 (2003) 636-644. [6] S. Naher, D. Brabazon and L. Looney, Simulation of the stir casting process, J. Mat. Proc. Tech., 143-144 (2003) 567-571. [7] S.M.H. Mirbagheri, H. Esmaeileian, S. Serajzadeh, N. Varahram and P. Davami, Simulation of melt flow in coated mould cavity in the casting process, J. Mat. Proc. Tech., 142 (2002) 493-507. [8] P. Ulysse, Optimal extrusion die design to achieve flow balance, Int. J. of Machine Tools and Manufacture, 39 (1999) 1047-1064. [9] F. Pascon, S. Cescotto and A.M. Habraken, A 2.5D finite element model for bending and straightening in continuous casting of steel slabs, Int. J. Numer. Meth. Engng., 68 (2006) 125-149. [10] A. Krimpenis, P.G. Benardos, G.C. Vosniakos and A. Koukouvitaki, Simulation-based selection of optimum pressure die-casting process parameters using neural nets and genetic algorithms, Int. J. Adv. Manuf. Technol., 27 (2006) 509-517. [11] B.H. Hu, K.K. Tong, X.P. Niu and I. Pinwill, Design and optimization of runner and gating systems for the die casting of thin-walled magnesium telecommunication parts through numerical simulation, J. Mat. Proc. Tech., 105 (2000) 128-133.
[12] J.Y.H. Fuh, Y.F. Zhang, A.Y.C. Nee and M.W. Fu, Computer-aided injection mould design and manufacture, Marcel Dekker, Inc., New York, 2004. [13] M.W. Fu, M.S. Yong, K.K. Tong and T. Muramatsu, A methodology for evaluation of metal forming system design and performance via CAE simulation, Int. J. Prod. Res., 44 (2006) 1075-1092. [14] K.K. Tong, M.S. Yong, M.W. Fu, T. Muramatsu, C.S. Goh and S.X. Zhang, A CAE enabled methodology for die fatigue life analysis and improvement, Int. J. Prod. Res., 43 (2005) 131-146. [15] M.W. Fu, M.S. Yong and T. Muramatsu, Die fatigue life design and assessment via CAE simulation, accepted for publication in Int. J. Adv. Manuf. Technol. [16] X. Dai, X. Yang, J. Campbell and J. Wood, Effects of runner system design on the mechanical strength of Al-7Si-Mg alloy castings, Materials Science and Engineering A, 354 (2003) 315-325. [17] R.W. Lewis and K. Ravindran, Finite element simulation of metal casting, Int. J. Numer. Meth. Engng., 47 (2000) 29-59.
[18] P. Cleary, J. Ha, V. Alguine and T. Nguyen, Flow modeling in casting processes, Applied Mathematical Modeling, 26 (2002) 171-190. [19] S. Kulasegaram, J. Bonet, R.W. Lewis and M. Profit, High pressure die casting simulation using a Lagrangian particle method, Commun. Numer. Meth. Engng., 19 (2003) 679-687. [20] C. Monroe and C. Beckermann, Development of a hot tear indicator for steel castings. [21] Y.L. Hsu and C.C. Yu, Computer simulation of casting process of aluminum wheels - a case study, Proc. IMechE, Part B: J. Eng. Manuf., 220 (2006) 203-211. [22] A. Midea, A. Nariman, B. Yancey and T. Faivre, Using computer modeling to optimize casting process, Modern Casting, 90 (2000) 4-10. [23] Z. Guo, N. Saunders, A.P. Miodownik and J.Ph. Schille, Modeling of materials properties and behaviors critical to casting simulation, Mat. Sci. and Eng. A, 413-414 (2005) 465-469. [24] L. Neumann, R. Kopp, H. Aretz, M. Crumbach, M. Goerdeler and G. Gottstein, Prediction of texture induced anisotropy by through-process modelling, Materials Science Forum, 495-497 (2005) 1657-1662. [25] Y.H. Peng, D.Y. Li, Y.C. Wang, J.L. Yin and X.Q. Zeng, Numerical study on the low pressure die casting of AZ91D wheel hub, Magnesium - Science, Technology and Applications, Materials Science Forum, 488-489 (2005) 393-396.
An Ontology-based Knowledge Management System for Industry Clusters Pradorn Sureephong1, Nopasit Chakpitak1, Yacine Ouzrout2, Abdelaziz Bouras2 1
Department of Knowledge Management, College of Arts, Media and Technology, Chiang Mai University, Chiang Mai, Thailand. {dorn | nopasit}@camt.info 2 LIESP, University Lumiere Lyon 2, Lyon, France, {yacine.ouzrout | abdelaziz.bouras}@univ-lyon2.fr
Abstract The knowledge-based economy forces companies in every country to group together as clusters in order to maintain their competitiveness in the world market. Cluster development relies on two key success factors: knowledge sharing and collaboration between the actors in the cluster. Thus, our study proposes a knowledge management system to support knowledge management activities within the cluster. To achieve the objectives of the study, ontology plays a very important role in the knowledge management process in various ways, such as building reusable and faster knowledge-bases and representing knowledge explicitly in better ways. However, creating and representing ontology causes difficulties for organizations due to the ambiguity and unstructured nature of the source of knowledge. Therefore, the objective of this paper is to propose a methodology to capture, create and represent ontology for organization development by using the knowledge engineering approach. The handicraft cluster in Thailand is used as a case study to illustrate the proposed methodology. Keywords: Ontology, Semantic, Knowledge Management System, Industry Cluster
1. Introduction
In the past, the three production factors (land, labor and capital) were abundant and accessible and were considered the source of economic advantage, so knowledge did not get much attention [1]. Nowadays, in the knowledge-based economy era shaped by the increasing use of information technologies, the previous production factors are no longer enough to sustain a firm’s competitive advantage; knowledge is being called on to play a key role [2]. Most industries try to use available information to gain more competitive advantage than others. The knowledge-based economy is based on the production, distribution and use of knowledge and information [3]. The study of Yoong and Molina [1] assumed that one way for business organizations to survive in today’s turbulent business environment is to form strategic alliances or mergers with other similar or
complementary business companies. The conclusion of Yoong and Molina’s study supports the idea of the industry cluster [3] proposed by Porter in 1990. The objective of grouping firms as a cluster is to maintain collaboration and knowledge sharing among the partners in order to gain competitiveness in their market. Therefore, Knowledge Management (KM) becomes a critical activity in achieving these goals. In order to manage the knowledge, ontology plays an important role in enabling the processing and sharing of knowledge between experts and knowledge users. Besides, it also provides a shared and common understanding of a domain that can be communicated across people and application systems. On the other hand, creating an ontology for an industry cluster can also create difficulties for the Knowledge Engineer (KE), because of the complexity of the structure and the time consumed. In this paper, we propose a methodology for ontology creation using knowledge engineering methodology in the industry cluster context.
2. Literature Review
2.1 Industry Cluster and Knowledge Management
The concept of the industry cluster was popularized by Prof. Michael E. Porter in his book “The Competitive Advantage of Nations” [3] in 1990. Industry clusters have since become a current trend in economic development planning, although there is considerable debate regarding the definition of the industry cluster. Based on Porter’s definition [4], the cluster can be seen as a “geographically proximate group of companies and associated institutions (for example universities, government agencies, and related associations) in a particular field, linked by commonalities and complementarities”. A general view of an industry cluster map is shown in figure 1. Until now, the literature on industry clusters and cluster building has been growing rapidly both in academic and policy-making circles [5]. After the concept of the industry cluster [3] was tangibly applied in many countries, companies in the same industry tended to link to each other to maintain their competitiveness in their market and to gain benefits from being members of the cluster. From the 2005 study of ECOTEC [6] regarding the critical success factors in cluster development, the two most critical success factors are collaboration in networking partnership and knowledge creation for innovative technology in the cluster, mentioned as success criteria by about 78% and 74% of the articles respectively. This knowledge is created through various forms of local interorganizational collaborative interaction [7]. It is collected in the form of tacit and explicit knowledge held by experts and institutions within the cluster. We applied knowledge engineering techniques to the industry cluster in order to capture the tacit knowledge and represent it in explicit form.
[Figure 1. Industry cluster map: the cluster’s core business linked to government agents, supporting industries, associations, academic institutes and the CDA.]
2.2 Knowledge Engineering Techniques
Initially, knowledge engineering was just a field of artificial intelligence used to develop knowledge-based systems. Over the last decade, knowledge engineers have developed their principles to improve the process of knowledge acquisition [8]. These principles are used to apply knowledge engineering to many real-world issues. Firstly, there are different types of knowledge, defined as “know what” and “know how” [9], or “explicit” and “tacit” knowledge in Nonaka’s definition [10]. Secondly, there are different types of experts and expertise. Thirdly, there are many ways to represent and use knowledge. Finally, structured methods are used to relate these differences together to perform knowledge-oriented activity [11].

[Figure 2. CommonKADS model suite: the organization, task and agent models at the context level; the knowledge and communication models at the concept level; and the design model at the artifact level.]
In our study, many knowledge engineering methods were compared [12] in order to select a suitable method to apply to the problem of industry cluster development, i.e. SPEDE, MOKA and CommonKADS. We adopted the CommonKADS methodology because it provides sufficient tools, such as a model suite (figure 2) and templates for different knowledge-intensive tasks.
2.3 Ontology and Knowledge Management
The definition of ontology by Gruber (1993) [13] is “an explicit specification of a shared conceptualization”. A conceptualization is an abstract model of facts in the world built by identifying the relevant concepts of a phenomenon. Explicit means that the types of concepts used and the constraints on their use are explicitly defined. Shared reflects the notion that an ontology captures consensual knowledge; that is, it is not private to an individual but accepted by a group. Basically, the role of ontology in the knowledge management process is to facilitate the construction of a domain model, providing a vocabulary of terms and relations in a specific domain. In building a knowledge management system, we need two types of knowledge [14]:

- Domain knowledge: knowledge about the objective realities in the domain of interest (objects, relations, events, states, causal relations, etc. that are obtained in some domains).
- Problem-solving knowledge: knowledge about how to use the domain knowledge to achieve various goals. This knowledge is often in the form of a problem-solving method (PSM) that can help achieve the goals in a different domain.

In this study, we focus on ontology creation and representation by adopting knowledge engineering methodology to support both dimensions of knowledge. We use the ontology as the main mechanism to represent information and knowledge, and to define the meaning of terms used in the content language and the relations in the knowledge management system.
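The distinction between the two knowledge types can be sketched with plain subject-relation-object triples: domain knowledge as facts, problem-solving knowledge as reusable methods that query them. The concept names below are illustrative inventions, loosely inspired by the handicraft-cluster case study rather than taken from it:

```python
# Toy illustration of the two knowledge types: domain knowledge as
# subject-relation-object triples, problem-solving knowledge as
# reusable methods that query them. Concept names are illustrative.

domain_knowledge = [
    ("HandicraftFirm", "is_a", "Organization"),
    ("PaperProducer", "is_a", "HandicraftFirm"),
    ("PaperProducer", "supplies", "Exporter"),
    ("Exporter", "is_a", "Organization"),
]

def find_partners(triples, firm, relation="supplies"):
    """Problem-solving method: reusable over any domain ontology."""
    return [o for s, r, o in triples if s == firm and r == relation]

def ancestors(triples, concept):
    """Walk 'is_a' links upward to collect all superclasses."""
    parents = [o for s, r, o in triples if s == concept and r == "is_a"]
    result = list(parents)
    for p in parents:
        result.extend(ancestors(triples, p))
    return result

partners = find_partners(domain_knowledge, "PaperProducer")
supers = ancestors(domain_knowledge, "PaperProducer")
```

The query functions know nothing about handicraft; swapping in a different triple set reuses the same problem-solving knowledge in another domain, which is exactly the reusability a PSM aims for.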
3. Methodology
Our proposed methodology divides ontology into three types: generic ontology, domain ontology and task ontology. A generic ontology is reusable across domains, e.g. organization, product specification, contact, etc. A domain ontology conceptualizes a particular domain, e.g. the handicraft business, logistics, import/export, marketing, etc. A task ontology specifies the terminology associated with a type of task and describes the problem-solving structure of all the existing tasks, e.g. paper production, product shipping, product selection, etc. In our approach to implementing ontology-based knowledge management, we integrated existing knowledge engineering methodologies and ontology development processes: we adopted CommonKADS as the knowledge engineering methodology and On-To-Knowledge (OTK) as the ontology development methodology. Figure 3 shows the assimilation of CommonKADS and On-To-Knowledge (OTK) [15].
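The three ontology types can be pictured as layered "is-a" hierarchies: task concepts specialize domain concepts, which specialize generic ones. The following is a minimal sketch in plain Python; all concept names are our own illustrative assumptions, not taken from the actual cluster ontology.

```python
# Minimal sketch of the three ontology types as layered "is-a" hierarchies.
# Concept names are illustrative assumptions only.

GENERIC = {            # reusable across domains
    "Organization": [],
    "Product": [],
    "Contact": [],
}

DOMAIN = {             # conceptualizes the handicraft-business domain
    "HandicraftProduct": ["Product"],
    "Exporter": ["Organization"],
}

TASK = {               # terminology of the product-selection task
    "ExportCandidate": ["HandicraftProduct"],
}

def ancestors(concept):
    """Collect every is-a ancestor of a concept across the three layers."""
    merged = {**GENERIC, **DOMAIN, **TASK}
    seen, stack = set(), list(merged.get(concept, []))
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack.extend(merged.get(parent, []))
    return seen

# A task-level concept inherits from the domain and generic layers:
print(sorted(ancestors("ExportCandidate")))  # ['HandicraftProduct', 'Product']
```

The layering mirrors the reuse argument in the text: the generic layer stays fixed while domain and task layers are filled in from the worksheets.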
An Ontology-based Knowledge Management System for Industry Clusters
Figure 3. Steps of OTK methodology and CommonKADS model suite
3.1 Feasibility Study Phase
The feasibility study serves as decision support, assessing economic, technical and project feasibility in order to select the most promising focus area and target solution. This phase identifies problems, opportunities and potential solutions for the organization and its environment. Most knowledge engineering methodologies provide an analysis method for examining the organization before the knowledge engineering process begins; this helps the knowledge engineer understand the environment of the organization. CommonKADS also provides context levels in its model suite (Figure 2) for analyzing the organizational environment and the corresponding critical success factors for a knowledge system [16]. The organization model provides five worksheets for analyzing feasibility in the organization, as shown in Figure 4.
Figure 4. Organization Model Worksheets
The knowledge engineer (KE) can use the OM-1 to OM-5 worksheets when interviewing the knowledge decision makers of an organization. The outputs of the organization model (OM) are a list of knowledge-intensive tasks and the agents related to each task. The KE can then interview the experts in each task using the TM and AM worksheets in the next step. Finally, the KE validates the result of each model with the knowledge decision makers again, assessing impacts and changes with the OTA worksheet.

3.2 Ontology Kick Off Phase
The objective of this phase is to model the requirements specification for the knowledge management system in the organization. The Ontology Requirement Support Document (ORSD) [17] guides knowledge engineers in deciding about the inclusion and exclusion of concepts/relations and the hierarchical structure of the ontology. It contains useful information such as the domain and goal of the ontology, design guidelines, knowledge sources, users and usage scenarios, competency questions, and the applications supported by the ontology [15]. The Task and Agent Models are separated into the TM-1, TM-2 and AM worksheets, which help the KE complete the ORSD. The TM-1 worksheet identifies the features of the relevant tasks and the knowledge sources available. The TM-2 worksheet focuses in detail on bottlenecks and improvements relating to specific areas of knowledge. The AM worksheet lists all relevant agents who possess knowledge items, such as domain experts or knowledge workers.

3.3 Refinement Phase
The goal of the refinement phase is to produce a mature and application-oriented target ontology according to the specification given in the kick-off phase [18]. The main tasks in this phase are knowledge elicitation and formalization. Knowledge elicitation is performed with the domain expert, based on the initial input from the kick-off phase. CommonKADS provides a set of knowledge templates [11] to support the KE in capturing knowledge for different types of tasks. CommonKADS classifies knowledge-intensive tasks into two categories: analytic tasks, which concern systems that pre-exist, and synthetic tasks, which concern systems that do not yet exist. The KE should therefore be aware of the type of task being dealt with. Figure 5 shows the different knowledge-intensive task types: analytic tasks include classification, diagnosis, assessment, prediction and monitoring; synthetic tasks include design, planning, modeling, assignment, scheduling and configuration.
Figure 5. Knowledge-intensive task types based on the type of problem
Knowledge formalization is the transformation of knowledge into a formal representation language such as the Ontology Inference Layer (OIL) [19], depending on the application. The knowledge engineer therefore has to consider the advantages and limitations of the different languages in order to select the appropriate one.

3.4 Evaluation Phase
The main objectives of this phase are to check whether the target ontology satisfies the ontology requirements, and whether the ontology-based knowledge management system supports or answers the competency questions analyzed in the feasibility and kick-off phases of the project. The ontology should therefore be tested in the target application environment. A prototype should already show the core functionality of the target system. Feedback from users of the prototype is valuable input for further refinement of the ontology [18].

3.5 Maintenance and Evolution Phase
The maintenance and evolution of an ontology-based application is primarily an organizational process [18]. The knowledge engineers have to update and maintain the knowledge and ontology under their responsibility. To support maintenance of the knowledge management system, an ontology editor module is developed to help the knowledge engineers.
4. Case Study
The initial investigations were carried out with 10 firms from the two biggest handicraft associations in Thailand and Northern Thailand. The Northern Handicraft Manufacturer and EXporter (NOHMEX) association is the biggest handicraft association in Thailand, with 161 manufacturers and exporters. The other, the biggest handicraft association in Chiang Mai, is named Chiang Mai Brand and comprises 99 enterprises; it is a group of qualified manufacturers which have the capability to export their products and meet the standard of Thailand's Ministry of Commerce. The objective of this study is to create a Knowledge Management System (KMS) to support this handicraft cluster. One of the critical tasks in implementing this system is creating ontologies of the knowledge tasks, because ontology is recognized as an appropriate methodology for reaching a common consensus on communication, as well as for supporting a diversity of KM activities, such as knowledge repository, retrieval, sharing and dissemination [20]. In this case, the knowledge engineering methodology was applied to ontology creation in the domain of Thailand's handicraft cluster.

Domain ontology: can be created by using the three models at the context level of the model suite, i.e. the organization, task and agent models. At the beginning of domain ontology creation, we adopt the generic ontology, plus the information acquired from the worksheets, as an outline. Then, the more information that can be acquired from the organization and its environment, the more the domain-oriented ontology can be filled in.

Task ontology: specifies the terminology associated with a type of task and describes its problem-solving structure. The objective of knowledge engineering methods is to solve problems in a specific domain, so most knowledge engineering approaches provide collections of predefined model elements for the KE [16]. The CommonKADS methodology likewise provides a set of templates to support the KE in capturing knowledge for different types of tasks. As shown in Figure 5, the various types of knowledge tasks need different ontologies, so the KE has to select the appropriate template in order to capture the right knowledge and ontology. As an illustration, we use the classification template for an analytic task as an example of task ontology creation. Figure 6 shows the inference structure of the classification method (left side) and the task ontology (right side).
Figure 6. CommonKADS classification template and task ontology
In the case study of the handicraft cluster, one of the knowledge-intensive tasks concerns product selection for export. Not all handicraft products are exportable, owing to their specifications, functions, attributes, etc. Moreover, there are many criteria for selecting a product to be exported to a specific country. We therefore defined the task ontology of the product selection task (see the right side of Figure 6).
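A toy sketch in the spirit of the CommonKADS classification template can make the inference structure concrete: candidate classes are generated, their discriminating attributes specified, the product's features obtained, and the features matched against the attributes. The class names and criteria below are invented for illustration, not taken from the actual product-selection ontology.

```python
# Toy classification inference (generate / specify / obtain / match).
# Class names and criteria are invented for illustration.

CLASSES = {
    "ExportProduct":    {"has_standard_cert": True, "durable_packaging": True},
    "NonExportProduct": {"has_standard_cert": False},
}

def classify(product):
    """Return the first candidate class whose attributes the product matches."""
    for name, criteria in CLASSES.items():            # generate candidates
        if all(product.get(attr) == value             # obtain + match features
               for attr, value in criteria.items()):
            return name
    return "Unclassified"

lacquerware = {"has_standard_cert": True, "durable_packaging": True}
print(classify(lacquerware))  # ExportProduct
```

In a real system the candidate classes and attributes would come from the task ontology rather than a hard-coded table.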
5. Conclusion
The most important role of ontology in knowledge management is to enable and enhance knowledge sharing and reuse. Moreover, it provides a common mode of communication among the agents and the knowledge engineer [14]. However, the difficulty of ontology creation is noted in most of the literature. This study therefore focused on creating ontologies by adopting a knowledge engineering methodology that provides tools to support the structuring of knowledge. The ontology was applied to help the Knowledge Management System (KMS) for the industry cluster achieve its goals. The architecture of this system consists of three parts: the knowledge system, the ontology, and knowledge engineering. The proposed methodology was used to create the ontology in the handicraft cluster context. During the manipulation stage, when users access the knowledge base, the ontology can support KM tasks as well as searching; the knowledge base and the ontology are linked to one another via the ontology module. In the maintenance stage, knowledge engineers or domain experts can add, update, revise and delete the knowledge or domain ontology via the knowledge acquisition module [21]. To test and validate our approach and architecture, we used the handicraft cluster in Thailand as a case study. As perspectives of this study, we will finalize the specification of the shareable knowledge/information and the conditions of sharing among the cluster members. We will then capture and maintain the knowledge (for reuse when required) and work on a specific infrastructure to enhance collaboration. At the end of the study, we will develop the knowledge management system for the handicraft cluster according to the requirements specification acquired from the cluster.
6. References
[1] Young P, Molina M, (2003) Knowledge Sharing and Business Clusters. In: 7th Pacific Asia Conference on Information Systems, pp. 1224-1233.
[2] Romer P, (1986) Increasing Returns and Long-run Growth. Journal of Political Economy, vol. 94, no. 5, pp. 1002-1037.
[3] Porter M E, (1990) Competitive Advantage of Nations. New York: Free Press.
[4] Porter M E, (1998) On Competition. Boston: Harvard Business School Press.
[5] Malmberg A, Power D, (2004) (How) do (firms in) clusters create knowledge? In: DRUID Summer Conference 2003 on creating, sharing and transferring knowledge, Copenhagen, June 12-14.
[6] DTI, (2005) A Practical Guide to Cluster Development. Report to the Department of Trade and Industry and the English RDAs by Ecotec Research & Consulting.
[7] Malmberg A, Power D, On the role of global demand in local innovation processes. In: Shapiro P, Fuchs G (eds) Rethinking Regional Innovation and Change, Dordrecht: Kluwer Academic Publishers.
[8] Chua A, (2004) Knowledge management system architecture: a bridge between KM consultants and technologists. International Journal of Information Management, vol. 24, pp. 87-98.
[9] Loebbecke C, Van Fenema P, Powell P, Co-opetition and Knowledge Transfer. The DATA BASE for Advances in Information Systems, vol. 30, no. 2, pp. 14-25.
[10] Nonaka I, Takeuchi H, (1995) The Knowledge-Creating Company. Oxford University Press, New York.
[11] Shadbolt N, Milton N, (1999) From knowledge engineering to knowledge management. British Journal of Management, vol. 10, no. 4, pp. 309-322.
[12] Sureephong P, Chakpitak N, Ouzrout Y, Neubert G, Bouras A, (2006) Economic based Knowledge Management System for SMEs Cluster: case study of handicraft cluster in Thailand. SKIMA Int. Conference, pp. 10-15.
[13] Gruber T R, (1991) The Role of Common Ontology in Achieving Sharable, Reusable Knowledge Bases. In: Allen J A, Fikes R, Sandewall E (eds) Principles of Knowledge Representation and Reasoning: Proceedings of the Second International Conference, Cambridge, MA, pp. 601-602.
[14] Chandrasekaran B, Josephson J R, Benjamins V R, (1998) Ontology of Tasks and Methods. In: Workshop on Knowledge Acquisition, Modeling and Management (KAW'98), Canada.
[15] Sure Y, Studer R, (2001) On-To-Knowledge Methodology, evaluated and employed version. On-To-Knowledge deliverable D-16, Institute AIFB, University of Karlsruhe.
[16] Schreiber A Th, Akkermans H, Anjewierden A, de Hoog R, Shadbolt N, van de Velde W, Wielinga B, (1999) Knowledge Engineering and Management: The CommonKADS Methodology. The MIT Press.
[17] Sure Y, Studer R, (2001) On-To-Knowledge Methodology, final version. On-To-Knowledge deliverable D-18, Institute AIFB, University of Karlsruhe.
[18] Staab S, Schnurr H P, Studer R, Sure Y, (2001) Knowledge processes and ontologies. IEEE Intelligent Systems, 16(1):26-35.
[19] Fensel, Harmelen, Horrocks (OIL).
[20] Gruber T R, (1997) Toward principles for the design of ontologies used for knowledge sharing. Int. J Hum Comput Stud, vol. 43, no. 5-6, pp. 907-928.
[21] Chau K W, (2007) An ontology-based knowledge management system for flow and water quality modeling. Advances in Engineering Software, vol. 38, pp. 172-181.
Chapter 3 Detail Design and Design Analysis
Loaded Tooth Contact Analysis of Modified Helical Face Gears ... 345
Ning Zhao, Hui Guo, Zongde Fang, Yunbo Shen, Bingyang Wei

Simplified Stress Analysis of Large-scale Harbor Machine's Wheel ... 355
Wubin Xu, Peter J Ogrodnik, Bing Li, Jian Li, Shangping Li

Clean-up Tool-path Generation for Multi-patch Solid Model by Searching Approach ... 365
Ming Luo, Dinghua Zhang, Baohai Wu, Shan Li

Fatigue Life Study of Bogie Framework Welding Seam by Finite Element Analysis Method ... 375
Pingqing Fan, Xintian Liu, Bo Zhao

Research on Kinematics Based on Dual Quaternion for Five-axis Milling Machine ... 385
Rui-Feng Guo, Pei-Nan Li

Consideration for Galvanic Coupling of Various Stainless Steels & Titanium, During Application in Water-LiBr Absorption-Type Refrigeration System ... 395
Muhammad Shahid Khan, Saad Jawed Malik

Real Root Isolation Arithmetic to Parallel Mechanism Synthesis ... 405
Youxin Luo, Dazhi Li, Xianfeng Fan, Lingfang Li, Degang Liao

Experimental Measurements for Moisture Permeations and Thermal Resistances of Cyclo Olefin Copolymer Substrates ... 415
Rong-Yuan Jou

Novel Generalized Compatibility Plate Elements Based on Quadrilateral Area Coordinates ... 425
Qiang Liu, Lan Kang, Feng Ruan

Individual Foot Shape Modeling from 2D Dimensions Based on Template and FFD ... 437
Bin Liu, Ning Shangguan, Jun-yi Lin, Kai-yong Jiang

Application of the TRIZ to Circular Saw Blade ... 447
Tao Yao, Guolin Duan, Jin Cai
Loaded Tooth Contact Analysis of Modified Helical Face Gears

Ning Zhao1, Hui Guo1, Zongde Fang1, Yunbo Shen1, Bingyang Wei2

1 School of Mechatronic Engineering, Northwestern Polytechnical University, Xi'an 710072, China, E-mail: [email protected]
2 Henan University of Science and Technology, Luoyang 471039, China
Abstract To improve the meshing performance of helical face gears, the present study adopts a double-crowning design method. Through profile and longitudinal modifications of parabolic form, the drive gains a quasi-conjugate character. The mathematical model of loaded tooth contact analysis (LTCA) for helical face gears is established. Simulations of different designs under different working conditions are performed to obtain the loaded contact patterns, load distributions and loaded transmission errors. The meshing analysis indicates that the proposed method can effectively avoid edge contact, optimize the load distribution and decrease the sensitivity to misalignments. The results are illustrated with numerical examples. Keywords: helical face gear, loaded contact analysis, surface modification
1. Introduction
Investigation of face gear drives has been the subject of research by representatives of the University of Illinois at Chicago, Boeing, NASA Glenn Research Center and Lucas Western [1, 2], and has found an important application in helicopter transmissions. The main advantages of such gear drives are the possibility of splitting the torque and a reduction of weight. The design of the face gear drive presented by Litvin in [1, 2] is based on the application of a conventional involute pinion in contact with the conjugate face gear, which is the most widely applied design solution in the literature. Localization of the contact in such a gear drive is required to prevent the edge contact and separation of the tooth surfaces that may occur in the presence of alignment errors. The most widely applied method in the literature to localize the contact in face gear drives is based on generating the face gear with a shaper having an increased number of teeth with respect to the pinion [1, 2]. Litvin et al. [3] investigated the application of a double-crowned pinion, generated by a grinding disk, in mesh with a face gear. In this case, localization of the contact was achieved by crowning the pinion tooth surface in the longitudinal direction. Profile crowning of the gears, provided by the application of parabolic rack cutters, lets the tooth obtain a longitudinal path of contact. Tooth contact analysis (TCA) showed good results in terms of sensitivity to misalignments. However, none of these works considered the meshing performance under load, and most of them concern face gears with a spur pinion. Finite element analysis [4] can only generate the pressure and stress distributions, and it consumes a great deal of computation time. In this paper, the mathematical model of loaded tooth contact analysis (LTCA) for helical face gears is established. This model accounts for the loads and can solve for the real contact ratio, the loaded contact path, the loaded transmission errors, etc. More importantly, the new model needs much less computation time than the contact method of the FEM. In addition, a longitudinal modification method, different from the one proposed by Litvin [3], is presented for improving the stability of the contact pattern.
2. Surface modification

2.1 Generation of a face gear by a shaper
Figure 1 shows the coordinate systems applied for the generation of the face gear surface. Sa is the global fixed system. Systems Ss and S2 are rigidly connected to the shaper and the face gear, respectively. Sp is an auxiliary coordinate system. γm is the angle between the axes of rotation of the shaper and the face gear, zs and z2 respectively. φs and φ2 are the rotation angles of the shaper and face gear respectively, with φ2 = φs·Ns/N2, where Ns and N2 are the tooth numbers of the shaper and the face gear. L1 and L2 are the limit inner radius and the limit outer radius, as shown in Figure 1-a.
z2 ¦ 2μ
¦ mÃ
¦ sμ
Face gear
o2 oa os
za z s
L1
L2
a
x2
xa, x p ¦ 2μ
y2
o a o p o 2 za
yp
¦Ã m z p,z2 y a
xs
xa ¦ sμ
oa o s za z s ya ys
b
c
Figure 1. a. Generation of a face gear; b, c. Coordinate systems applied for generation of a face gear
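The kinematic relations above can be sketched numerically with homogeneous transformation matrices. This is a hedged illustration only: the exact frame composition M_2s = M_2p · M_pa · M_as depends on the conventions of Figure 1, and the rotation order below (shaper rotation about z_s, tilt by γm, face gear rotation about z_2) is an assumption.

```python
import numpy as np

# Hedged sketch: mapping a shaper-surface point from S_s into the face gear
# system S_2 while both gears rotate through related angles
# (phi_2 = phi_s * Ns / N2). The frame composition is assumed, not the paper's.

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

def M_2s(phi_s, gamma_m, Ns, N2):
    """Homogeneous transform from shaper system S_s to face gear system S_2."""
    phi_2 = phi_s * Ns / N2                 # related rotation angles
    return rot_z(-phi_2) @ rot_x(gamma_m) @ rot_z(phi_s)

# An illustrative shaper-surface point (homogeneous coordinates, mm):
r_s = np.array([44.45, 0.0, 10.0, 1.0])
r_2 = M_2s(np.deg2rad(5.0), np.deg2rad(90.0), Ns=28, N2=160) @ r_s
```

Sweeping φs over a mesh cycle and imposing the equation of meshing would trace out the envelope that defines the face gear tooth surface.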
The face-gear tooth surface is calculated as the envelope to the family of the shaper’s surfaces.
\( \mathbf{r}_2(u_s, l_s, \phi_s) = \mathbf{M}_{2s}(\phi_s)\,\mathbf{r}_s(u_s, l_s), \qquad f_{2s}(u_s, l_s, \phi_s) = \mathbf{n}_s \cdot \mathbf{v}_s^{(s2)} = 0 \)  (1)

Here, r_s(u_s, l_s) is the surface of the shaper (in this paper, a modified involute helical surface), and u_s and l_s are the shaper surface parameters. f_2s(u_s, l_s, φ_s) = 0 is the equation of meshing for the generation of the face gear, v_s^(s2) is the relative velocity between the shaper and the face gear represented in system S_s, and n_s is the unit normal to the shaper tooth surface.
2.2 Profile modification
The profile of the pinion or the shaper in traditional face gear drives is a standard involute, generated by a rack cutter with a straight profile. Profile crowning of the pinion or shaper can provide a predesigned parabolic function of transmission errors. Such a function is able to absorb the almost linear, discontinuous functions of transmission errors (caused by misalignment) that are the source of vibration and noise. For this reason, we use rack cutters with a parabolic profile (see Figure 2-a). u_i (i = s, 1) is the coordinate parameter along the profile of the rack cutter for the shaper and pinion respectively; a_i is the parabolic coefficient; u_0 is the parameter of the parabola apex.
Figure 2. a. Rack cutter definition; b. Longitudinal crowning of pinion surface
2.3 Longitudinal modification for the pinion
Longitudinal crowning is required for localization of the bearing contact. In this paper, a new, simple type of surface modification is proposed. The longitudinal crowning is illustrated by Figure 2-b: keeping the shape of the profile invariable in sections normal to the axis z1, the profile is rotated toward the tooth surface by a small angle θ.

\( \theta = a_{pl}\,(z - z_{mid})^2 / b^2 \)  (2)
Here, a_pl is the parabolic coefficient for longitudinal crowning; z is the axial coordinate of any point on the pinion surface; z_mid is the axial coordinate of the midpoint of the face width; b is the tooth width of the pinion. This crowning can be performed on CNC machine tools.
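Equation (2) can be sketched directly: the rotation angle applied to each transverse profile grows parabolically away from mid-face. In the sketch below, a_pl = 0.0014 is the case 3 value from Table 1, while the face width is an assumed illustrative value.

```python
import numpy as np

# Sketch of the longitudinal crowning of equation (2). a_pl is taken from
# Table 1 (case 3); the face width b is an assumed illustrative value.

def crowning_angle(z, z_mid, b, a_pl):
    """theta = a_pl * (z - z_mid)**2 / b**2 (radians)."""
    return a_pl * (z - z_mid) ** 2 / b ** 2

b = 40.0                            # assumed pinion face width, mm
z = np.linspace(0.0, b, 5)          # sections along the tooth width
theta = crowning_angle(z, z_mid=b / 2, b=b, a_pl=0.0014)

# Zero rotation at mid-face, maximum (symmetric) at the tooth ends.
```

The symmetry about z_mid is what keeps the bearing contact centred on the face width.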
3. Loaded tooth contact analysis (LTCA)

3.1 The model of LTCA
The tooth surfaces are in point contact, but the contact spreads over an elliptical area due to the elastic deformation of the contact surfaces. Because the long axis of the contact ellipse is much longer than the short axis, we can consider that the pressure load is distributed only along the long axis of the ellipse. The orientation of the instantaneous contact ellipse can be determined by applying the relations between the surface principal curvatures and directions at the instantaneous contact point [5]. The aim of LTCA is to determine the load distribution on the instantaneous contact line and the loaded transmission errors under different working conditions.

Figure 3. Simulation for contact of two pairs of teeth
The mathematical model is illustrated by Figure 3. Suppose there are possibly two pairs of teeth in contact (I, II). The contact lines denote the cross-section in the principal direction; i is the instantaneous contact point and j is one discrete point along the principal direction. ε_j is the initial clearance of point j; δ is the elastic deformation; F1 and F2 are the normal forces of the two pairs of teeth. As Figure 3 shows, tooth pair II has an initial clearance ε_jII because of the surface modification, while the surfaces of tooth pair I are exactly tangent. The tooth contact can be expressed by the following equations [6]:

Minimize

\( \sum_{j=1}^{N+1} Z_j \)  (3)

such that

\( -S F + \alpha e + I Y + I Z = \varepsilon \)  (4)

\( e^{T} F + Z_{N+1} = P \)  (5)

subject to the condition that either F_k = 0 or Y_k = 0 (k = 1, 2, ..., N), with

F_k ≥ 0, Y_k ≥ 0, α ≥ 0, Z_j ≥ 0
Here, Z_j (j = 1, 2, ..., N+1) are the artificial variables, required to be non-negative, and N is the number of discrete contact pairs. S (S = Sp + Sg) is the integrated flexibility matrix of the instantaneous contact line on the pinion and the face gear; Sp_ij (i, j = 1, 2, ..., N) is the elastic deformation at point j on the pinion surface when a unit force is applied at point i in the normal direction. In this paper we first compute the flexibility matrices S0p and S0g for the corner nodes of the finite element model, and then obtain Sp and Sg for every instantaneous position by interpolation (as shown in Figure 4). F is the vector of reaction forces at the discrete contact points, and (SF)_j is the total deformation of contact pair j. α is the linear transmission error under load, which can be transformed into an angular error; e is a vector with all elements equal to 1; I is the unit matrix; Y_k is the final clearance of contact pair k; ε is the initial clearance vector; and P is the total force applied on the gears.
Figure 4. Finite element model for flexibility matrix computation
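The contact conditions behind equations (3)-(5) can be illustrated with a far simpler scheme than the authors' improved simplex-type algorithm: a naive active-set loop over the discrete contact points. For points in contact, the deformation compatibility S F - α = -ε holds with zero final clearance, the forces sum to the total load P, and points whose force would turn negative are released. The flexibility matrix, clearances and load below are invented toy values.

```python
import numpy as np

# Naive active-set sketch of the contact conditions (not the paper's
# improved simplex-type algorithm). S, eps and P are toy values.

def solve_contact(S, eps, P):
    n = len(eps)
    active = list(range(n))              # start with every point in contact
    while True:
        m = len(active)
        A = np.zeros((m + 1, m + 1))
        A[:m, :m] = S[np.ix_(active, active)]
        A[:m, m] = -1.0                  # -alpha column: S F - alpha = -eps
        A[m, :m] = 1.0                   # force balance: sum(F) = P
        rhs = np.concatenate([-eps[active], [P]])
        sol = np.linalg.solve(A, rhs)
        F_a, alpha = sol[:m], sol[m]
        if (F_a >= -1e-12).all():        # all contact forces admissible
            F = np.zeros(n)
            F[active] = F_a
            return F, alpha
        active = [p for p, f in zip(active, F_a) if f > 0]  # release points

S = np.eye(3) * 0.01                     # toy flexibility matrix (mm/N)
eps = np.array([0.0, 0.005, 0.05])       # initial clearances (mm)
F, alpha = solve_contact(S, eps, P=10.0)
```

Points with larger initial clearance (e.g. those on a modified flank) receive correspondingly smaller forces, which is exactly the load-distribution effect the LTCA quantifies.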
Problem (3)-(5) is a nonlinear programming problem whose aim is to minimize the strain energy, and it can be solved by an improved simplex-type algorithm. LTCA is needed for all contact positions in order to obtain the meshing behaviour under load. Note that the flexibility matrices S0p and S0g need to be solved only once, regardless of loads, misalignments or modifications.

3.2 Some correlative conceptions
As shown in Figure 5, due to the surface modifications and errors, the transmission has an incompletely conjugate character. The transmission error Δφ2 is defined as the difference between the real rotation angle of the driven gear and the theoretical one:

\( \Delta\varphi_2 = \varphi_2 - \varphi_{20} - \frac{N_1}{N_2}(\varphi_1 - \varphi_{10}) \)  (6)

Here, φ is the rotation angle of a gear and φ0 its initial angle. In Figure 5, φp (φp = 2π/N1) is the mesh period; φr is the actual mesh angle of one tooth under a given load; φd is the maximum mesh angle of one tooth without edge contact (contact on the gear tooth top or pinion tooth top with severe pressure is called edge contact). We consider a transmission with edge contact to be abnormal, so the part of the tooth surface corresponding to segment c-d is invalid. The design contact ratio (DCR) is defined as φd/φp, and the real contact ratio (RCR) as φr/φp.
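Equation (6) and the contact-ratio definitions can be sketched directly. The tooth numbers below are those of Table 1; the angle values are made up for illustration.

```python
import math

# Sketch of equation (6) and the contact-ratio definitions.
# Tooth numbers are from Table 1; angle values are illustrative only.

def transmission_error(phi1, phi2, phi10, phi20, N1, N2):
    """Delta phi_2 = phi_2 - phi_20 - (N1/N2) * (phi_1 - phi_10), radians."""
    return phi2 - phi20 - (N1 / N2) * (phi1 - phi10)

N1, N2 = 25, 160                  # pinion / face gear tooth numbers
phi_p = 2 * math.pi / N1          # mesh period

# A perfectly conjugate drive gives zero transmission error:
te = transmission_error(phi1=0.3, phi2=0.3 * N1 / N2,
                        phi10=0.0, phi20=0.0, N1=N1, N2=N2)

# Contact ratios from the mesh angles of one tooth (case 2 values):
phi_d, phi_r = 2.6 * phi_p, 2.2 * phi_p
DCR, RCR = phi_d / phi_p, phi_r / phi_p
```

Under modification and misalignment the driven gear lags the conjugate position, so te becomes a negative, ideally parabolic, function of φ1.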
Figure 5. Typical transmission errors
4. Analysis for meshing quality
Simulation of meshing and contact has been performed for three cases of design; the parameters of the gear drives are given in Table 1.

Table 1. Design parameters applied in cases 1~3 of design for simulation of meshing

Design parameter                                          Case 1    Case 2    Case 3
Pinion tooth number, N1                                   25        25        25
Shaper tooth number, Ns                                   28        28        28
Gear tooth number, N2                                     160       160       160
Pressure angle, αn (deg.)                                 25.0      25.0      25.0
Helix angle of pinion, β (deg.)                           15.0      15.0      15.0
Module, mn (mm)                                           3.175     3.175     3.175
Shaft angle, γ (deg.)                                     90.0      90.0      90.0
Inner radius, L1 (mm) (Figure 1-a)                        247.0     247.0     247.0
Outer radius, L2 (mm) (Figure 1-a)                        287.0     287.0     287.0
Rack-cutter parabola coefficient, a1 (Figure 2-a)         0.000     0.002     0.002
Rack-cutter parabola coefficient, as (Figure 2-a)         0.000     -0.001    -0.001
Parameter of parabola apex, u0 (mm) (Figure 2-a)          0.000     -0.462    -0.483
Parabola coefficient for plunging, apl (Figure 2-b)       0.000     0.000     0.0014
Case 1. The face gear is generated by an involute shaper and the pinion is a spur involute one, but the difference in numbers of teeth, Ns − N1 = 3, has been provided. Longitudinal crowning has not been applied. This is a traditional design. The results show (Figure 6): (i) the transmission errors (TE) are zero at zero load, but the curves of loaded transmission errors (LTE) fluctuate severely, which will induce impacts between the gears; (ii) both the DCR and RCR are 1.5, with or without misalignment; (iii) edge contact occurs under load (Figure 7), so the contact pressure at the tooth top and root is very high, which is unfavourable for gear life; (iv) the load distributions move toward the outer end of the face gear, so this design is sensitive to a change of shaft angle Δγ (Figure 7, Figure 8).
Figure 6. Case 1, loaded transmission errors. a. no errors of alignment; b. Δγ = 2′
Figure 7. Case 1, load distributions under 200 Nm. a. no errors of alignment; b. Δγ = 2′
Figure 8. Case 1, contact traces under 200 Nm. a. no errors of alignment; b. Δγ = 2′
Case 2. In addition to the conditions of case 1, profile crowning of the involute pinion and shaper has been provided. Figures 9~11-a show that the meshing quality is very good when there are no alignment errors: the curve of TE is a symmetric parabola; the DCR reaches 2.6, which indicates that there may be three pairs of teeth in mesh at the same time; the load distribution varies gently; the contact trace is distributed over the whole surface; and the RCR is 2.2 under a torque of 200 Nm. But for an alignment error Δγ = 2′ (Figures 9~11-b), the curve of TE becomes badly asymmetric, with a DCR of 1.2, which indicates that there is only one tooth carrying the load most of the time; edge contact occurs on the top of the face gear; and the gear drive is again sensitive to a change of shaft angle.
Figure 9. Case 2, loaded transmission errors. a. no errors of alignment; b. Δγ = 2′
Figure 10. Case 2, load distributions under 200 Nm. a. no errors of alignment; b. Δγ = 2′
Figure 11. Case 2, contact traces under 200 Nm. a. no errors of alignment; b. Δγ = 2′
Case 3. In addition to the conditions of case 2, longitudinal crowning of the involute pinion has been provided. The bearing contact is stabilized (Figures 12~14-a, b), and the function of TE is still parabolic and of small magnitude when an alignment error Δγ = 2′ is applied (Figure 12-b). When Δγ = 2′ and a torque of 200 Nm are applied, there is no edge contact, and the RCR is 1.9, which indicates that two teeth share the load most of the time. When a torque of 400 Nm is applied, edge contact occurs on the tooth top of the face gear (in Figure 12-b the LTE is lower than the TE on the left side), because the bearing limit is exceeded.
Figure 12. Case 3, loaded transmission errors. a. no errors of alignment; b. Δγ = 2′
Figure 13. Case 3, load distributions under 200 Nm. a. no errors of alignment; b. Δγ = 2′
Figure 14. Case 3, contact traces under 200 Nm. a. no errors of alignment; b. Δγ = 2′
5. Computation Speed Compared with FEM
The finite element analysis of the same design as Case 1 was carried out using the contact approach of the general commercial software ANSYS [7]. The FEM contact model contains a total of 45120 SOLID45 [7] elements with 57195 nodes, plus 1600 contact and target elements [7]. The LTCA model uses the same number of elements per tooth as the FEM contact model. The FEM contact analysis needs 120 hours: about one hour per contact position, and with 15 positions, two alignment errors and four load levels for Case 1 the total is 15 × 2 × 4 = 120 h. The LTCA needs only 3 hours, because most of the time is spent on S0p and S0g, while equation (3) takes just a few seconds per contact position. The LTCA method is therefore much faster than the FEM contact approach.
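The time budget above is simple arithmetic; as a quick sanity check (a sketch using only the figures quoted in the text):

```python
# FEM contact analysis: roughly one hour per contact position, with
# 15 positions, 2 alignment errors and 4 load levels (figures from the text).
positions, alignment_errors, load_levels = 15, 2, 4
fem_hours = positions * alignment_errors * load_levels * 1.0
ltca_hours = 3.0  # total LTCA run time quoted in the text
speedup = fem_hours / ltca_hours
print(int(fem_hours), int(speedup))  # 120 40
```

That is, 120 separate contact solves for the FEM route against a single 3-hour LTCA run, a factor of about 40.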
6. Conclusions

1. The method of surface modification for helical face gears is investigated, and a longitudinally modified geometry for the pinion surface is proposed.
2. The mathematical model of loaded tooth contact analysis (LTCA) for helical face gears is established. This method is much faster than the contact approach in FEM.

The LTCA results indicate that the face-gear drive with a double-modification pinion has the following advantages: a wider contact range, higher contact ratios, greater tolerance of edge contacts, lower sensitivity to misalignments and better load distributions.
7. Acknowledgements
The authors express their deep gratitude to the National Natural Science Foundation of China for the financial support of the project (Grant No. 50675176).
8. References
[1] F. L. Litvin, A. Egelja, J. Tan, G. Heath, (1998) Computerized design, generation and simulation of meshing of orthogonal offset face-gear drives with a spur involute pinion with localized bearing contact. Mech Mach Theory 33:87-102
[2] F. L. Litvin, Y. Zhang, J. C. Wang, R. B. Bossler, Y. J. D. Chen, (1992) Design and geometry of face-gear drives. ASME J Mech Des 114:642-647
[3] F. L. Litvin, et al, (2001) Design, generation and TCA of new type of asymmetric face-gear drive with modified geometry. Comput Methods Appl Mech Engrg 190:5837-5865
[4] F. L. Litvin, et al, (2002) Face-gear drive with spur involute pinion: geometry, generation by a worm, stress analysis. Comput Methods Appl Mech Engrg 191:2785-2813
[5] F. L. Litvin, (1994) Gear Geometry and Applied Theory. Prentice Hall, Englewood Cliffs, New Jersey
[6] T. F. Conry, A. Seireg, (1971) A mathematical programming method for design of elastic bodies in contact. ASME Journal of Engineering for Industry 95B:387-392
[7] ANSYS Inc, (2000) ANSYS User's Manual. ANSYS Inc, Beijing, P. R. China
Simplified Stress Analysis of Large-scale Harbor Machine's Wheel
Wubin Xu1, Peter J Ogrodnik2, Bing Li1, Jian Li1, Shangping Li1
1 Guangxi University of Technology, 268 Donghuan Road, Liuzhou, Guangxi, 545006, China
2 Beacon Building, Staffordshire University, Beaconside, Stafford, Staffordshire ST18 0AD, UK
Abstract: In order to establish the mechanical model of a large-scale harbor machine's wheel subject to a radial load, an analogous small wheel under a similar load is used as a reference model and investigated by finite element analysis. The analysis yields the stress and displacement distributions, and the results are further verified by stress testing. Several kinds of pressure distribution, such as the uniform, cosine and parabolic distributions, and their spread angles are discussed. The finite element models under these load distributions are calculated and analyzed to ascertain the best model for the design of the large-scale harbor machine's wheel. Finally, a cosine distribution with a spread angle of 60° is used in the finite element modelling of the harbor machine's wheel; the results show that this method is reliable and useful for the direct design of such wheels.

Keywords: Large-scale wheel, Stress analysis, Finite element modeling
1 Introduction
The P2515 model wheel is one of the most critical components of a large-scale harbor machine, as shown in Figure 1. Its strength is a vital factor influencing the safety and stability of the operator and the machine. The wheel is typically characterized by large size, heavy load (4.8×10^5 N on a single wheel), high tyre inflation pressure (1.2 MPa), slow moving speed (6-15 km/h) and a complicated non-linear contact stress between tyre and rim, so it is difficult to construct a finite element analysis model in ANSYS. Furthermore, it is difficult to test its stress and strain under working conditions, so the finite element model of the wheel is not easy to verify by testing. For practical application, a simplified and reliable mechanical model of the wheel is therefore needed to analyze its stress in ANSYS. To the authors' knowledge, several researchers have paid attention to the stress analysis of aluminum disc wheels. J. Stearns presented and discussed the use of finite element techniques to analyze the stress and displacement distribution of an
aluminum alloy automotive rim-tyre combination subjected to the combined load of inflation pressure and radial load [1]. U. Kocabicak [2] presented a phenomenological constitutive model integrated with a notch stress-strain analysis method, in which local loads under general multi-axial fatigue loading were modeled with linear elastic FE analyses; the computed stress-strain response was used to predict the fatigue crack initiation life using effective strain range parameters and two critical plane parameters. Other researchers [3, 4, 5] used ANSYS, MSC.PATRAN and MSC.NASTRAN as the basic tools for strength analysis and optimum design of aluminum alloy automotive wheels.
Figure 1. The construction of P2515 model wheel
However, research on load modeling for such a large-scale wheel is scarce. This paper focuses on comparing several approaches to converting the radial load into a distributed pressure on the surface of an analogous small wheel, which is used to simulate the large-scale harbor machine's wheel. Different pressure distributions, such as the cosine, uniform and parabolic distributions, and their distribution angles are analyzed to ascertain the best way to construct the mechanical model of the harbor machine's wheel under combined loads.
2 Load Distribution Analysis

2.1 Theoretical Consideration
The P2515 model wheel is used in large-scale harbor machines such as rubber-tyred gantry cranes. Because of the particular load features described above, static analysis of the wheel is the most useful approach for its design. In service the wheel bears the tyre inflation pressure (pt) and a radial load (W), as shown in Figure 2. The contact state and the pressure distribution between the tyre and the rim are complicated. Theoretically, the magnitude and spread range of the pressure depend on the magnitude of the radial load, the tyre inflation pressure and the stiffness of
the tyre, and the relationship between them is non-linear. For practical purposes, we can omit the tyre and assume that the pressure is distributed on the surface of the rim according to a certain law over a certain spread angle in the circumferential direction, such as a uniform, cosine or parabolic law. The finite element model in ANSYS can then be simplified for application in the wheel manufacturing company.
Figure 2. The load of P2515 model wheel
[Figure 3 plots the density of strain energy (J/mm³) of the tyre against the spread angle θ (degrees, from -180° to 180°) for radial loads W = 40, 45 and 50 kN.]
Figure 3. Density of strain energy of the tyre (11.00R2)
According to the available research on tyre mechanical response [6], when the tyre bears a radial force as well as an inflation pressure, most of the strain energy is concentrated in a sector, as shown in Figure 3, which demonstrates the relationship between the density of strain energy and the spread angle under a given radial load; the spread angle is around -60° to 60°. Assume that the pressure distribution is described by p(θ), where θ falls in [-θ0, θ0], with pmax the maximum pressure and f(θ) the distribution law. The pressure within the spread area must balance the radial load W of the wheel, so the relationship between them is given by the following equations:
W = 2b \int_{-\theta_0}^{\theta_0} p(\theta)\, r_b\, d\theta = 2b \int_{-\theta_0}^{\theta_0} p_{max} f(\theta)\, r_b\, d\theta = 2 b r_b p_{max} \int_{-\theta_0}^{\theta_0} f(\theta)\, d\theta   (1)

Therefore, pmax can be obtained from the following equation:

p_{max} = W \Big/ \left( 2 b r_b \int_{-\theta_0}^{\theta_0} f(\theta)\, d\theta \right)   (2)

where b is the width of the bead seat (mm) and r_b is the radius of the bead seat (mm).
Figure 4. Analogous wheel
In order to further investigate the pressure distribution, an analogous small wheel used in a micro-car, which bears a tyre inflation pressure of 0.4 MPa and a radial load of 3625 N, is analyzed and tested, as shown in Figure 4.

2.2 Uniform Distribution of Pressure
In the uniform distribution, the pressure is distributed equally on the surface of the bead seat within a certain spread area. It is the simplest way to model the pressure distribution between tyre and rim under a radial load, and the pressure is easy to calculate and apply to the model of the wheel in ANSYS. The pressure within the spread area is given by:

p(\theta) = W / S = W / (2b \cdot 2 r_b \sin\theta_0) = W / (4 b r_b \sin\theta_0)   (3)

where W is the radial load of the wheel (N), p(θ) the pressure at a given point on the surface of the bead seat (MPa), S the projected distribution area (mm²), b the width of the bead seat (mm), r_b the radius of the bead seat (mm) and θ0 the spread angle (radian). In the finite element model of the analogous wheel, if the spread angle θ0 = π/3, while b = 15 mm, W = 3625 N and r_b = 190 mm, then the value of p(θ) is 0.636 MPa.

2.3 Cosine Distribution of Pressure
In the cosine distribution, the pressure is spread on the surface of the bead seat of the wheel according to the cosine law over a certain spread angle. The distributed pressure p(θ) is given by:

p(\theta) = p_{max} \cos\left( \frac{\pi}{2} \frac{\theta}{\theta_0} \right)   (4)

The pressure within the spread area must balance the radial load W of the wheel, therefore:

W = 2b \int_{-\theta_0}^{\theta_0} p(\theta)\, r_b\, d\theta = 2b \int_{-\theta_0}^{\theta_0} p_{max} r_b \cos\left( \frac{\pi}{2} \frac{\theta}{\theta_0} \right) d\theta = 2 b p_{max} r_b \left[ \frac{2\theta_0}{\pi} \sin\left( \frac{\pi}{2} \frac{\theta}{\theta_0} \right) \right]_{-\theta_0}^{\theta_0} = \frac{8 b r_b \theta_0 p_{max}}{\pi}   (5)

Hence, the maximum pressure pmax at the centre of the spread area is:

p_{max} = \frac{W \pi}{8 b r_b \theta_0}   (6)

If the spread angle θ0 = π/3, while b = 15 mm, W = 3625 N and r_b = 190 mm, then pmax can be calculated as:

p_{max} = \frac{W \pi}{8 b r_b \theta_0} = \frac{3625 \times \pi}{8 \times 15 \times 190 \times \pi/3} = 0.48 \text{ MPa}   (7)
2.4 Parabolic Distribution of Pressure
In the parabolic distribution, the pressure is spread on the surface of the bead seat of the wheel according to the parabolic law over a certain spread angle. The distributed pressure p(θ) is given by:

p(\theta) = p_{max} \left( 1 - \left( \frac{\theta}{\theta_0} \right)^2 \right)   (8)

The maximum pressure can again be determined from force equilibrium:

W = 2b \int_{-\theta_0}^{\theta_0} p(\theta)\, r_b\, d\theta = 2b \int_{-\theta_0}^{\theta_0} p_{max} \left( 1 - \left( \frac{\theta}{\theta_0} \right)^2 \right) r_b\, d\theta = 2 p_{max} b r_b \left( 2\theta_0 - \frac{2\theta_0}{3} \right) = \frac{8 p_{max} b r_b \theta_0}{3}   (9)

p_{max} = \frac{3W}{8 b r_b \theta_0}   (10)
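As a cross-check of Eq. (2) and the closed forms of Eqs. (6) and (10), the peak pressure can also be obtained by integrating f(θ) numerically. The sketch below (plain Python with the trapezoidal rule; the helper name `p_max_from_law` is ours, not the paper's) uses the analogous-wheel parameters quoted earlier (W = 3625 N, b = 15 mm, r_b = 190 mm, θ0 = π/3):

```python
import math

def p_max_from_law(W, b, rb, theta0, f, n=2000):
    """Eq. (2): p_max = W / (2*b*rb * integral of f(theta) over [-theta0, theta0]),
    with the integral evaluated by the composite trapezoidal rule."""
    h = 2 * theta0 / n
    integral = sum(0.5 * (f(-theta0 + i * h) + f(-theta0 + (i + 1) * h)) * h
                   for i in range(n))
    return W / (2 * b * rb * integral)

# Analogous small-wheel parameters from the text.
W, b, rb, theta0 = 3625.0, 15.0, 190.0, math.pi / 3

# Cosine law of Eq. (4) and parabolic law of Eq. (8), each with f(0) = 1.
cosine = p_max_from_law(W, b, rb, theta0,
                        lambda t: math.cos(math.pi * t / (2 * theta0)))
parabolic = p_max_from_law(W, b, rb, theta0,
                           lambda t: 1.0 - (t / theta0) ** 2)

print(round(cosine, 3), round(parabolic, 4))  # 0.477 0.4555
```

The numerical values agree with the analytic results W·π/(8 b r_b θ0) and 3W/(8 b r_b θ0) of Eqs. (6) and (10), i.e. the ≈0.48 MPa and ≈0.456 MPa quoted in the worked examples.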
Taking the previous parameters, the maximum pressure at the centre of the spread area of the bead seat is:

p_{max} = \frac{3W}{8 b r_b \theta_0} = \frac{3625 \times 3}{8 \times 15 \times 190 \times \pi/3} = 0.4557 \text{ MPa}   (11)

2.5 Results of Calculation and Stress Test
In order to investigate the influence of different pressure distributions and spread angles on the stress analysis of the wheel, the finite element analysis modelling of the
Figure 5. Stress test of the small wheel
analogous wheel using the various parameters discussed above was carried out. A static stress test of the wheel was then performed, in which a DH-3815 static strain test system was used to measure the stress at the test points of the wheel and to process the test data, as shown in Figure 5. The lines in Figure 6 show the Von Mises stress at the same point near the edge of the rim, where the Von Mises stress reaches its maximum value, for the different pressure distributions and spread angles. The tested stress at the corresponding point is 83.5 MPa.
Figure 6. Relationship between Von Mises stresses and swept angles
The line graph in Figure 7 illustrates the trend of the Von Mises stress for the cosine distribution at different spread angles.
Figure 7. Von Mises stresses and swept angles of cosine distribution
From Figure 6 and Figure 7, several observations can be made:

• With the uniform pressure distribution, the calculated maximum Von Mises stress decreases as the spread angle increases, whereas in fact the Von Mises stress reaches its maximum at a certain spread angle [1]. It is therefore not suitable for detailed stress analysis, although it may be useful for a rough analysis.
• The parabolic distribution behaves similarly to the cosine distribution, but it fluctuates irregularly, and when the spread angle is 90° the location of the maximum stress point changes.
• With the cosine distribution, the curve shows a smooth relationship between stress and spread angle. When the spread angle is 60°, the Von Mises stress reaches its maximum value, which is approximately equal to the test result.
• Consequently, the cosine distribution with a spread angle of 60° is recommended.

3 Stress Analysis of P2515 Model Wheel
The P2515 model wheel is made of steel Q345 and consists of a rim, right and left shield rings, a base ring and a locking ring. When the tyre is fitted onto the rim between the two shield rings and the base ring, the locking ring is used to lock the whole assembly. The maximum stress is therefore likely to occur in the locking ring, which is in contact with both the rim and the base ring; contact elements are used there in the finite element model. Since the contact between the base ring and the shield ring has only a slight influence on the stress distribution of the wheel, they are bonded together as one entity to simplify the model.
Figure 8. Finite element analysis modelling of the harbor machine’s wheel
Figure 8 shows the finite element model of the harbor machine's wheel. The hole of the rim, which is used to assemble the wheel to the harbor machine, is fixed. The tyre inflation pressure (1.2 MPa) is applied to all the surfaces of the wheel covered by the tyre. The radial load of the wheel (4.8×10^5 N) is transformed into a pressure distributed on the surface of the bead seat according to the cosine distribution with a spread angle of 60°, using the following expression:
W = 2b \int_{-\theta_0}^{\theta_0} p(\theta)\, r_b\, d\theta = \frac{8 b r_b \theta_0 p_{max}}{\pi}   (12)

where W is the radial load of the wheel, 480000 N; θ0 the distribution angle, 1.047 rad (60°); b the width of the bead seat, 60 mm; and r_b the radius of the bead seat, 318 mm. Therefore:

p_{max} = \frac{W \pi}{8 b r_b \theta_0} = \frac{480000 \times \pi}{8 \times 60 \times 318 \times 1.047} = 9.434 \text{ MPa}   (13)
The calculated Von Mises stress state of the wheel is shown in Figure 9; the maximum Von Mises stress is 227.2 MPa, which occurs at the top of the locking ring. Although this result has not been verified by a stress test, the experienced designers of the wheel manufacturer confirm that it is reasonable, and the result was accepted by the customer.
Figure 9. Von Mises stress status of the harbor machine’s wheel
4 Conclusions
Based on this study, the following conclusions can be drawn:

• Different strategies for transforming the radial load into a pressure distributed on the surface of the wheel have been modeled, calculated and compared. The cosine distribution with a spread angle of around 60° is recommended.
• There is still a difference between the theoretical calculation and the stress test result, and the relationship between the pressure distribution and the stress state of the whole wheel is not yet fully established; further investigation is needed.
• A method to construct the finite element model of the large-scale harbor machine's wheel in ANSYS is proposed, and the result is reliable.

5 Acknowledgements
This research is supported by the Guangxi Science and Technology Council, China, project No. GKN05112001-7B. The support is gratefully acknowledged.
6 References
[1] J. Stearns, T. S. Srivatsan, Modeling the mechanical response of an alloy automotive rim. Materials Science and Engineering 366:262-268
[2] U. Kocabicak, M. Firat, (2004) A simple approach for multiaxial fatigue damage prediction based on FEM post-processing. Materials and Design 25:73-82
[3] Wang Chenglong, (2004) Application of CAE in enhancing strength of steel rings for automobiles. Journal of Shanghai University (Natural Science) 10:13-16
[4] Fu Sheng, (2004) The static strength finite element analysis for car wheel rims. Mechatronics 4:34-36
[5] Wang Xiaofeng, Wang Bo, (2004) Structure strength analysis of automotive wheels. Journal of Mechanical Strength 4:66-69
[6] Dai Weiwei, Su Jun, Miu Yadong, (2006) FEA simulation for truck radial tire/rim seating process. Journal of Changsha Communications University 22:78-82
Clean-up Tool-path Generation for Multi-patch Solid Model by Searching Approach
Ming Luo, Dinghua Zhang, Baohai Wu, Shan Li
Key Laboratory of Contemporary Design and Integrated Manufacturing Technology, Ministry of Education, P.O. Box 552, Northwestern Polytechnical University, Xi'an, China, 710072
Abstract: Focusing on clean-up tool-path generation for multi-patch solid models, a new efficient and robust searching approach is presented in this paper. The multi-patch solid model is treated as an integrated object, rather than dealing with the surfaces of the solid model individually. Initial points are selected on the part surface in the physical domain and then converted into the parametric domain to determine the search center and the search directions. The search operation is carried out in the parametric domain, while the cutter-center point is calculated in the physical domain. Finally, a cutter-center curve of the clean-up tool path is fitted through all the searched points. Illustrative examples are provided, and the results show that the method is feasible and efficient.

Keywords: Clean-up machining, Tool-path generation, Multi-patch solid model, Searching approach
1. Introduction
Clean-up machining is one of the most challenging problems in freeform surface machining. Its purpose is to remove the uncut volumes left in concave regions after finish machining, by employing a ball-end mill of the same or smaller size. Clean-up machining is critical to achieving a good part surface finish and to shortening the total machining time of a complex part surface. Unfortunately, advances in the theory of clean-up tool-path generation have not kept pace with the increasing use of complex parts with freeform surfaces and the advances in NC (Numerical Control) technology. A number of commercial CAD/CAM systems, including UG and CATIA, are capable of generating clean-up tool paths, but very few methods have been openly published [1-4]. Some research has aimed at solving this critical problem. For example, the equidistant-offset-surface approach is widely used in tool-path generation for freeform surfaces [5-6]. The main idea of this approach is to find the intersection curves of equidistant offset surfaces; the intersection curves are then used to generate the clean-up tool path. It works well when there are few surfaces;
however, it is very complicated and time-consuming to construct the offset surfaces exactly and completely when the number of surfaces is large [6-7]. In order to generate effective clean-up tool paths, the polyhedral model is often employed, including in the work of Ren [8] and Kim [2]. In Ren's research, a contraction tool method was proposed to detect gouging and generate clean-up tool paths for machining complex polyhedral models; it uses a series of intermediate virtual cutters to search for clean-up boundaries and construct the clean-up tool paths. Kim [2] employed a curve-based approach for clean-up machining: the pencil-cut and fillet-cut paths for a polyhedral model in STL form with a ball-end mill are obtained from curve-based scanning tool paths on the xz, yz and xy planes. The premise of their approach is the availability of a polyhedral model in STL form, which is sometimes unnecessary in NC machining and also limits the scope of application.
This paper presents a searching strategy for generating clean-up tool paths for machining complex multi-patch solid models. The multi-patch model is treated as an integrated object, and the searching strategy locates cutter-center points in both the physical domain and the parametric domain. The remainder of this paper is organized as follows. Section 2 presents the characteristics of tool-path generation for the multi-patch solid model and the overall conceptual approach. Section 3 discusses the searching strategy for the multi-patch model in the parametric and physical domains. Computer implementation and practical examples are presented in Section 4, followed by the conclusion in Section 5.
2. Overall Conceptual Approach

2.1 Characteristics of Tool-path Generation for the Multi-patch Solid Model
Many freeform surface parts, such as turbine blades, impellers, molds and dies, are machined on multi-axis NC machines [9-10]. Most of these parts include a great number of small freeform surfaces, and their models are often termed multi-patch solid models. Taking the snubber of a kind of turbine blade as an example, there are in total 64 freeform surfaces in the model; Figure 1 shows the multi-patch solid model of the snubber. The following problems occur if the offset approach is employed to generate clean-up tool paths:

• It is very complicated to construct the offset surfaces exactly and completely, and sometimes no satisfactory equidistant offset surface can be obtained.
• Cross curves and discontinuities often exist among the intersection curves of the offset surfaces, as shown in Figure 2; they require a great deal of manual editing, which significantly hinders automated programming.
• It is time-consuming to construct all the offset surfaces for the model; sometimes we do not even know which surfaces should be offset and which should not.
Figure 1. Solid model of the snubber (multi-patch solid model and part surface)

Figure 2. Problems with offset-surface intersection curves. a. Cross curves; b. Discontinuities
In summary, it is very complicated and time-consuming to generate clean-up tool paths for a multi-patch solid model by the equidistant-offset-surface approach. In our research, the multi-patch solid model is regarded as an integrated object; no single surface of the solid model is singled out for special consideration. With this method, there is no need to calculate equidistant offset surfaces.

2.2 Overall Conceptual Approach
As shown in Figure 3, in pencil-cut the center of the ball-end cutter is O; the distance between O and the solid model is R, and the distance between O and the part surface is also R. This condition is called the distance qualification; a point satisfying the qualification lies on the cutter-center curve. Thus, once all qualifying points have been found, the cutter-center curve can be obtained.

Figure 3. Side-view of a clean-up region
The overall conceptual approach to generating the tool paths is summarized in the flowchart shown in Figure 4 and explained briefly below.

• Determine initial points: select several points around the multi-patch model on the part surface in the physical domain; these are the initial points.
• Convert the initial points from the physical domain into the parametric domain: all initial points are converted into the parametric domain to determine the complete set of initial points and the search center.
• Determine the search direction: in the parametric domain, determine the search direction for every initial point.
• Search for destination points: search for every point satisfying the distance qualification along the search direction, and then convert it from the parametric domain into the physical domain.
• Fit the cutter-center curve: when all destination points have been found, fit the cutter-center curve through them.
• Calculate CL points: discretize the cutter-center curve into a set of points, convert every point to a CL (cutter location) point and store it in a CL data file.
[Figure 4 flowchart: begin; assign initial points; calculate the search center and all initial points; set the current initial point number i = 0; determine the search direction for the i-th point; take the i-th point as the current searching point; calculate the cutter-center point for the current searching point; if it does not satisfy the distance qualification, move the current point along the search direction to a new position in the parametric domain and repeat; otherwise put the cutter-center point into the cutter-center point set and set i = i + 1; when i equals N, fit the cutter-center curve through all points in the point set; end.]

Figure 4. Overall conceptual approach
3. Tool Path Generation for Pencil-cut
In this section, the determination of the initial points and the search direction is discussed first, followed by a detailed discussion of the searching strategy for every point. The model shown in Figure 1 is taken as the example to illustrate the calculation procedure.

3.1 Determination of Initial Points and Search Direction

3.1.1 Determination of Initial Points
Let G represent the multi-patch solid model, and let S be the surface on which G is located. As shown in Figure 5, the contact region between G and S is converted from the physical domain into the parametric domain. As it is a closed region in the parametric domain, it can be enclosed by a quadrangular box.
Figure 5. Converting the contact region from the physical domain to the parametric domain (u, v)

[Figure 6 labels: corner points I1-I4, initial point Pi (B), search center OC (A), search direction towards M.]
Figure 6. a. Initial points in the physical domain; b. Initial points and search direction in the parametric domain
As shown in Figure 6(a), four points can be chosen following the contour around G in the physical domain; Figure 6(b) shows the corresponding points in the parametric domain. The search center O_C in the parametric domain is then defined as:

u_{O_C} = ( u_{I_1} + u_{I_2} + u_{I_3} + u_{I_4} ) / 4,   v_{O_C} = ( v_{I_1} + v_{I_2} + v_{I_3} + v_{I_4} ) / 4   (1)

where (u_{I_1}, v_{I_1}), (u_{I_2}, v_{I_2}), (u_{I_3}, v_{I_3}) and (u_{I_4}, v_{I_4}) are the parametric values of the four chosen points. Four points are not enough for the searching strategy, so more initial points need to be determined. As shown in Figure 6(b), the four points are connected one by one with straight lines in the parametric domain; discretizing these lines yields a set of ordered points Φ = {P1, P2, …, Pi, …, PN}, where 1 ≤ i ≤ N. The ordered point set Φ defines all the initial points around G.

3.1.2 Determination of Search Direction
As shown in Figure 6(b), an effective search direction should guarantee that a destination point can be found along the direction and that no disorder occurs. To achieve this, the method employed in this research is as follows: in the parametric domain, connect the initial point Pi and the search center O_C with a straight line; the search direction is then defined by this line, as shown in Figure 6(b). The advantage of this choice is that, since every point Pi in the point set Φ is ordered, every destination point found along the determined direction is ordered and unique.
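The construction of Section 3.1 can be sketched as follows (a minimal illustration; the helper name `initial_points_and_directions` and the sample corner coordinates are ours, not the paper's): the search center O_C is the mean of the four corner parameters per Eq. (1), the initial points Φ are sampled in order along the quadrangle's edges, and each search direction points from P_i towards O_C in the (u, v) parametric domain.

```python
def initial_points_and_directions(corners, pts_per_edge=20):
    # Search center O_C: mean of the four corner parameters, Eq. (1).
    uc = sum(u for u, _ in corners) / 4.0
    vc = sum(v for _, v in corners) / 4.0
    phi, dirs = [], []
    for k in range(4):  # walk the four edges I1-I2-I3-I4-I1 in order
        (u0, v0), (u1, v1) = corners[k], corners[(k + 1) % 4]
        for j in range(pts_per_edge):  # j = pts_per_edge skipped: avoids duplicate corners
            t = j / pts_per_edge
            u, v = u0 + t * (u1 - u0), v0 + t * (v1 - v0)
            phi.append((u, v))          # ordered initial point P_i
            dirs.append((uc - u, vc - v))  # search direction: towards O_C
    return (uc, vc), phi, dirs

# Hypothetical corner parameters roughly matching the region of Figure 5.
centre, phi, dirs = initial_points_and_directions(
    [(0.1, 0.3), (0.5, 0.3), (0.5, 0.6), (0.1, 0.6)])
print(round(centre[0], 2), round(centre[1], 2), len(phi))  # 0.3 0.45 80
```

With 20 samples per edge this yields 80 ordered initial points, the same count as in the example of Section 4.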
3.2 Searching Strategy for Every Initial Point
The search operation can be carried out once the search center and the search direction have been determined for every initial point. For a point Pi in the initial point set Φ, three points are recorded during the search operation: A(uA, vA), the point close to the search center O_C; B(uB, vB), the point away from the search center O_C; and M(uM, vM), the midpoint between A and B. All three points are defined in the parametric domain, and their initial values are as follows:
A = O_C,   B = P_i,   M = \frac{1}{2}(A + B)   (2)
The detailed search operation can be explained by the following steps:

Step 1: In the parametric domain, assign initial values for A and B, and calculate M.

Figure 7. Calculation of the offset point

Step 2: As shown in Figure 7, calculate the coordinates M(x_M, y_M, z_M) of M from its parametric value on S. Then calculate the offset point M_offset along the surface normal n_S to S at M:

M_offset = M + R n_S   (3)

where R is the radius of the clean-up cutter.

Step 3: Calculate the distance Dist between M_offset and G in the physical domain:

Dist = distance(M_offset, G)   (4)
Step 4: Given the distance tolerance δR: if (Dist - R) > δR, the current M_offset is too far from G; go to Step 5. If (Dist - R) < -δR, the current M_offset is too close to G; go to Step 6. If |Dist - R| < δR, the current M_offset satisfies the distance qualification; go to Step 7.

Step 5: The current M_offset is too far from G: move the current B to the current M along the search direction, calculate the new M, and go to Step 2.

Step 6: The current M_offset is too close to G: move the current B to a new position along the direction AB, the moved distance being half of AB, calculate the new M, and go to Step 2.
Step 7: The current M_offset satisfies the distance qualification: put M_offset into the cutter-center point set Π. If there are still points in Φ that have not been searched, go to Step 1; if not, go to Step 8.

Step 8: End the search, and fit a single spline through all the cutter-center points in Π.
[Figure 8 flowchart: the search loop of Steps 1-8, including re-initialization of A and B when |uA - uB| < δu and |vA - vB| < δv before a qualifying point is found.]
Figure 8. Search operation
During the search operation, for the given tolerances δu and δv, if |uA - uB| < δu and |vA - vB| < δv, the three points A, B and M are considered coincident. If the three points are coincident and M does not satisfy the distance qualification, no destination point will be found for the current initial point. To avoid this situation, the values of |uA - uB| and |vA - vB| are checked at every iteration of the search; when |uA - uB| < δu and |vA - vB| < δv at the same time, initial values are reassigned to A and B. The whole search operation is summarized in Figure 8.
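The search of Steps 1-8 can be sketched in miniature. In the fragment below the geometry is deliberately simplified (our toy setup, not the paper's): the part surface S is the plane z = 0 with unit normal n_S = (0, 0, 1), the model G occupies the half-space x ≤ 0, and the distance queries that the paper delegates to the CAD kernel (UG/Open API) are replaced by closed-form expressions. The bracketing update also differs slightly from Steps 5-6 (a plain bisection is used), but the stopping rule is the same distance qualification |Dist - R| < δR:

```python
R, delta_R = 3.0, 1e-6

def dist_to_G(p):              # Eq. (4) stand-in: distance from a point to G (x <= 0)
    return max(p[0], 0.0)

def offset(m):                 # Eq. (3) stand-in: M_offset = M + R * n_S, n_S = (0, 0, 1)
    return (m[0], m[1], R)

def search(a, b):
    """Bisect between A (towards the search center) and B (the initial point)
    until the offset point satisfies |Dist - R| < delta_R."""
    for _ in range(200):
        m = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
        d = dist_to_G(offset(m))
        if abs(d - R) < delta_R:
            return offset(m)   # cutter-center point found (Step 7)
        if d > R:              # too far from G: move B inwards (Step 5)
            b = m
        else:                  # too close to G: widen the bracket (cf. Step 6)
            a = m
    return None

cc = search(a=(0.0, 0.0), b=(10.0, 0.0))   # A at the search center, B at P_i
print(tuple(round(c, 3) for c in cc))      # (3.0, 0.0, 3.0)
```

In this toy corner the cutter center converges to x = R, z = R, i.e. the ball-end cutter touching both the floor and the wall. In the real implementation A, B and M live in the (u, v) parametric domain and the offset and distance evaluations go through the solid modeller.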
4. Illustrative Examples and Analysis

4.1 Illustrative Examples of Turbine Blade Snubber
The proposed approach has been implemented on a personal computer using the C++ programming language and the UG/Open API. Figure 9 shows the search procedure for a single initial point; the line represents the search direction and the solid black points represent points examined during the search. Figure 10(a) shows the searched points in the parametric domain, and Figure 10(b) shows the cutter-center curve in the physical domain. The search operation for eighty initial points took 60.09 seconds; on average, about seven points are examined before the destination point is found. Figure 11 shows the relationship between the initial points and their search times. We can see that
the deviation of each initial point's search time from the average is small, which means that our search approach is robust.

Figure 9. Search procedure for a single initial point. ●: points searched during the search operation; ○: points satisfying the distance qualification

Figure 10. Cutter-contact points and cutter-center curve. a. Cutter-contact points in the parametric domain. b. Cutter-center curve in the physical domain
Figure 12 shows the results of the comparison between the proposed approach and the equidistant offset surface approach. The results show that the part machined with the proposed approach has a better surface finish.

Figure 11. Relationship between the initial points and their search times
Figure 12. Comparison between the proposed approach and the equidistant offset surface approach. a. Proposed approach; b. Equidistant offset surface approach
4.2 The Influence of the Selection of Initial Points on the Search Operation
In this research, four initial points were chosen randomly on the surface S in the physical domain and converted into the parametric domain for the subsequent search operation. Different initial points result in different search times. Figure 13 shows different sets of initial points chosen on surface S; the corresponding search times are summarized in Table 1. The table shows that initial points closest to the final cutter-contact points result in shorter search times. Summarizing the above, the basic principles for choosing initial points are:
• In the physical domain, the multi-patch solid model should lie within the quadrangle formed by the initial points.
• The chosen initial points should be neither far away from the multi-patch solid model nor too close to it.
Figure 13. Different initial points chosen on surface S

Table 1. Comparison of searching times in Figure 13

Initial points    Searching time (s)    Cutter radius (mm)    δR (mm)
Figure 13(c)      57.45                 5.0                   0.01
Figure 13(d)      18.59                 5.0                   0.01
Figure 13(e)      50.97                 5.0                   0.01
5. Conclusion
A new clean-up tool-path generation approach for multi-patch solid models is presented in this paper. The multi-patch solid model is treated as an integrated object rather than as a collection of individual surfaces. Furthermore, the proposed approach works directly on the theoretical model of the solid, so STL representations and offset surfaces are no longer needed. As a result, the approach avoids complicated and time-consuming calculations and greatly improves the level of automated programming. Computer implementation shows that the search process is fast and robust, while practical machining results demonstrate that the proposed approach yields a better surface finish.
6. References
[1] Flutter A, Todd J, (2001) A machining strategy for toolmaking. Computer-Aided Design 33(13):1009-1022
[2] Kim DS, Jun CS, Park S, (2005) Tool path generation for clean-up machining by a curve-based approach. Computer-Aided Design 37:967-973
[3] Ren YF, Zhu W, Lee YS, (2005) Material side tracing and curve refinement for pencil-cut machining of complex polyhedral models. Computer-Aided Design 37(10):1015-1026
[4] Park SC, (2005) Pencil curve detection from visibility data. Computer-Aided Design 37(14):1492-1498
[5] Liu XW, Zhang DH, (2001) Theory and applications on NC machining. China Machine Press
[6] Seong JK, Elber G, Kim MS, (2006) Trimming local and global self-intersections in offset curves/surfaces using distance maps. Computer-Aided Design 38:183-193
[7] Yu Z, Liu X, Chen L, Wang Y, Peng Q, (2007) New approach to construct offset surface. Journal of Communication and Computer 4(2):5-9
[8] Ren YF, Yau HT, Lee YS, (2004) Clean-up tool path generation by contraction tool method for machining complex polyhedral models. Computers in Industry 54:17-33
[9] Lo CC, (1999) Efficient cutter-path planning for five-axis surface machining with a flat-end cutter. Computer-Aided Design 31(9):557-566
[10] Lee YS, (1998) Non-isoparametric tool path planning by machining strip evaluation for 5-axis sculptured surface machining. Computer-Aided Design 30(7):559-570
Fatigue Life Study of Bogie Framework Welding Seam by Finite Element Analysis Method Pingqing Fan, Xintian Liu, Bo Zhao College of Automation Engineering, Shanghai University of Engineering Science, 201620, Shanghai, [email protected]
Abstract Using an adaptive grid algorithm, the framework and its weld seams are meshed manually; the corner of the weld seam is replaced by a fillet, and the required finite element model of a bogie framework and its weld seams is built. Based on Miner's theory of cumulative fatigue damage and a modified S-N curve, the strength and fatigue-life distribution contours for the welding seams of the bogie framework are given. The results are basically consistent with those of the experiment, so the framework satisfies the demands of strength and fatigue life. The paper predicts the weak locations of the weld seams and provides a basis for practical production. Keywords: FEA, Fatigue life, Welding seam, S-N curve
1. Introduction
A carriage bogie bears the carbody weight and vibration from the railway, and each component endures continuous random stress during running. At present there is no complete standard for fatigue design, so passenger-car bogies are designed for static strength, and the fatigue strength is verified by experiment. The bogie framework is produced by welding steel plates, which promotes the fatigue strength of the components compared with cast steel plates. For welded structures, the welding seam is an area of stress concentration and has a great effect on the fatigue strength of the component. Based on a new bogie frame, this paper analyzes the strength and fatigue life of the weld seams and predicts their weak locations.
2. Welding Seam Fatigue Life Theory
The nominal stress method is the earliest fatigue design method; it takes the S-N curve of the material or component as the main parameter to predict the number of fatigue cycles (total fatigue life). It is an empirical method. Usually, the S-N curve is expressed by the following formula:
S^D · N = C    (1)
where D and C are constants. Taking logarithms of both sides of the equation:

D·lg S + lg N = lg C    (2)

Obviously, the cyclic stress and the cyclic fatigue life have a linear relationship in double-logarithmic coordinates. Knowing the fatigue limit stress S0 and the corresponding cyclic fatigue life N0, the S-N curve can be expressed as follows:

D·lg S + lg N = lg C    (3)

Substituting S0 and N0 into equation (2), equation (4) is obtained:

D·lg S0 + lg N0 = lg C    (4)

From (3) and (4):

lg N − lg N0 = D·(lg S0 − lg S)    (5)

Equation (5) simplifies to equation (6):

N = N0 · (S0 / S)^D    (6)
When the material constant D, the fatigue limit stress S0 and the corresponding cyclic fatigue life N0 are given, the cycle number under a known stress amplitude can be calculated directly.

At present, Miner's theory of linear fatigue cumulative damage is used in engineering practice. For components under random loads, Miner's theory is not inferior to nonlinear fatigue cumulative damage theories; if the fatigue loads among the random loads lie mostly in the high-cycle fatigue (HCF) region, Miner's linear theory is sufficient. When cyclic stresses of several alternating amplitudes act, the damage can be expressed as follows. The damage caused in one cycle is

D = 1 / N    (7)

where N is the fatigue life under the present stress S. The damage caused in n cycles is divided into the constant-amplitude and variable-amplitude cases. Under constant-amplitude loading, the damage is

D = n / N    (8)

Under variable-amplitude loading, the damage is

D = Σi (ni / Ni)    (9)

DCR is defined as the critical damage. Under constant-amplitude loading, fatigue damage occurs if n = N, i.e.

DCR = 1    (10)
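Equations (6)-(10) condense into a few lines of code. The Python sketch below uses the values quoted later in this paper for the modified curve (S0 = 267 MPa at N0 = 10^7 cycles); the exponent D = 3 and the loading blocks are purely illustrative assumptions, not values from the paper.

```python
def fatigue_life(S, S0=267.0, N0=1.0e7, D=3.0):
    """Cycles to failure from the S-N power law, eq. (6): N = N0*(S0/S)**D."""
    return N0 * (S0 / S) ** D

def miner_damage(stress_blocks, **sn_params):
    """Miner's linear damage sum, eq. (9): D = sum(n_i / N_i).

    stress_blocks: iterable of (stress amplitude in MPa, applied cycles).
    """
    return sum(n / fatigue_life(S, **sn_params) for S, n in stress_blocks)

# Failure is predicted once the accumulated damage reaches D_CR = 1, eq. (10).
blocks = [(300.0, 2.0e6), (350.0, 5.0e5)]   # illustrative loading blocks
damage = miner_damage(blocks)
```

A damage value below 1 means the block loading can be sustained; the ratio 1/damage estimates how many times the whole block sequence can be repeated before failure.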
3. FEA of Welding Seam

3.1 FEA Model of Welding Seam

The corner of the framework welding seam needs special treatment because of limitations of the SEAMW module in the fatigue software, so the corner of the welding seam is divided as shown in Figure 1.
Figure 1. The finite element model of the corner framework welding seam
The supporting pedestal of the framework contains 5880 hexahedron elements and 8100 nodes; the welding position consists of 3067 quadrilateral shell elements and 5759 nodes; the framework is divided into 50330 quadrilateral shell elements and 52175 nodes. In practical operation, the framework bears complicated loads: the vertical static load of the framework is 200 kN, the vertical dynamic load is 30 kN, the load of symmetry along the diagonal line is 6 kN, the traction force is 60 kN, and the brake force is 27 kN. The distribution of force and torque in the welding section is shown in Figure 2.
Figure 2. Force distribution for the corner framework weld seam
The material of the framework welding seam is 16MnR, a low-alloy structural steel. According to GB/T1335-1996, the allowable stress of the material is 340 MPa, the elastic modulus is 2.09×10^5 MPa, and the Poisson ratio is 0.28.

3.2 FEA Result of Welding Seam
The FEA result shows that the maximum stress in the welding seam is 156 MPa, less than the allowable stress of 340 MPa; it occurs on the inner side of the weld-seam junction between the bolster and the side beam, close to the junction of the weld seam and the base. At the same time, the maximum principal strain is 5.98×10^-4. Moreover, the inner side of the weld-seam junction between the side beam and the cross beam is also a dangerous area. Under this operating condition, the maximum displacement of the framework is 1.42 mm, occurring in the middle of the cross beam; the deformation displacement in the Y direction is 1.4 mm. The result is shown in Figure 3.
Figure 3. FEA result of weld seam
4. FEA of Fatigue Life
The ultimate purpose of fatigue life analysis is to ascertain the fatigue life of the structure. Calculating fatigue life requires accurate loading spectra, the S-N curve of the material or component, a proper cumulative damage theory, crack growth theory, and so on; possible defect factors must also be considered. At present there is no precise method for calculating fatigue life at home or abroad, so it can only be estimated or predicted.

4.1 S-N Curve of Material
Usually, based on Section 2, the S-N curve of the material can easily be drawn, as shown in Figure 4.
Figure 4. Unmodified S-N curve
According to common p-S-N curve theory, no fatigue damage occurs when the cyclic stress is below the fatigue limit stress. Because of the structural characteristics of a weld seam, however, stress below the fatigue limit can still induce crack initiation. Therefore the p-S-N curve needs to be modified: in the part of the curve below the fatigue limit stress, an oblique line with slope (bp − 2) replaces the original horizontal line.

lg Np = ap + bp·lg σ,  σ ≥ σ−1    (12)

lg Np = ap + bp·lg σ−1 − (bp − 2)·(lg σ−1 − lg σ),  σ < σ−1    (13)

The modified p-S-N curve (p = 95%) is shown in Figure 5.
Figure 5. Modified S-N curve
The horizontal coordinate of the turning point is N0 = 10^7; the fatigue limit for p = 95% is σ−1 = 267 MPa.
4.2 Input of Loading Spectra
The framework is subjected to complicated loads containing both static and dynamic forces, so the cycling characteristic of the whole component is not uniform. To solve this problem, the residual stress method is adopted in setting the time-load history; it is a simple mechanism for changing the mean stress. In the fatigue life analysis, each static load is combined into a working condition independently, and their effects are superposed onto the final fatigue-life result by the residual stress method. To simulate the working condition better, the dynamic load is input as a random loading spectrum, shown in Figure 6.
Figure 6. Random loading spectra
4.3 Setting Parameters of Solution
In the calculation the survival rate is 95%. As introduced before, mean compressive stress and mean tensile stress have different effects on fatigue life. In the material database the values of R are all −1; however, the practical stress ratio is R ∈ [−1, 1]. Therefore it is necessary to convert the fatigue life: the fatigue life under an arbitrary stress ratio should be made equivalent to that at R = −1 (symmetrical cycling). Two methods can be applied to modify the mean stress (Figure 7).
Figure 7. Formulas for modifying the mean stress
It is obvious that the Goodman method is more conservative than the Gerber method. In view of the unpredictable dangerous conditions of the framework in practical running, this paper adopts the Goodman method to modify the mean stress.
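The two mean-stress corrections compared in Figure 7 are the standard Goodman (linear) and Gerber (parabolic) rules. A minimal Python sketch follows; the ultimate strength Su and the sample stresses are illustrative assumptions, not values taken from the paper.

```python
def goodman_equivalent(S_a, S_m, S_u):
    """Goodman rule: fully reversed (R = -1) stress amplitude equivalent to
    amplitude S_a at mean stress S_m, given the ultimate strength S_u."""
    return S_a / (1.0 - S_m / S_u)

def gerber_equivalent(S_a, S_m, S_u):
    """Gerber rule: parabolic in the mean stress, less conservative."""
    return S_a / (1.0 - (S_m / S_u) ** 2)

# For a tensile mean stress the Goodman equivalent amplitude is larger,
# i.e. the Goodman prediction is the more conservative one, as noted above.
g = goodman_equivalent(100.0, 100.0, 500.0)   # 125.0 MPa
b = gerber_equivalent(100.0, 100.0, 500.0)    # ~104.2 MPa
```

The equivalent amplitude is then fed into the R = −1 S-N curve, which is exactly the conversion described in the preceding paragraph.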
4.4 Result of FEA Fatigue Life
Due to unpredictable conditions in reality, the influences of stress concentration, part size and surface processing should be considered in the calculation; they are combined into the fatigue strength reduction coefficient Kf:

Kf = k / (ε · β)

where k is the coefficient of stress concentration, ε is the coefficient of part size, and β is the coefficient of surface processing. The result of fatigue life for the weld seam is given in Figure 8.
Figure 8. Result of fatigue life
Here the fatigue life of the weld seam is unlimited, meaning the lowest cycle life is 1.0×10^20, which meets the fatigue life requirements for the bogie framework. In order to indicate the dangerous parts of the framework clearly, such as the junction between the base and the side beam and the connection of the side beam and the cross beam, the result calculated with an increased load value is shown in the Insight environment of the Patran postprocessor (Figure 9).
Figure 9. Result of fatigue life with increasing the value of load
5. Conclusions
After analyzing the fatigue life of the weld seams, the paper concludes:
1. Using the adaptive grid algorithm to deal with the weld seam, the FEA result is precise and effective.
2. Since the fatigue damage of the bogie framework belongs to high-cycle fatigue, the weak locations of the weld seams can be predicted based on Miner's theory of cumulative fatigue damage and the modified S-N curve.
3. The framework is subjected to complicated loads containing both static and dynamic forces, so the cycling characteristic of the whole component is not uniform. To solve this problem, the residual stress method is adopted in setting the time-load history; with this method the simulation result is closer to the practical working condition.
4. Comparing the simulation result with the test result, the error between them is small.
To sum up, the design of the bogie framework weld seams satisfies the demands of strength and fatigue life.
6. References
[1] Schwab HL, Caffrey J, Lin J, (2003) Fatigue analysis using random vibration. Ford Motor Company
[2] Gunnert R, (1955) Residual welding stresses. Stockholm: Almqvist & Wiksell
[3] Masubuchi K, (1980) Analysis of welded structures. New York: Pergamon Press
[4] Tveiten BW, Moan T, (2000) Determination of structural stress for fatigue assessment of welded aluminum ship details. Marine Structures 13:189-212
[5] Ballard P, Dang Van K, et al., (1995) High cycle fatigue and a finite element analysis. Fatigue and Fracture of Engineering Materials & Structures 18:397-411
[6] Dietz S, et al., (1998) Fatigue life prediction of a railway bogie under dynamic loads through simulation. Vehicle System Dynamics 29(3):385-402
[7] Karaoglu C, Kuralay NS, (2002) Stress analysis of a truck chassis with riveted joints. Finite Elements in Analysis and Design 38:1115-1130
[8] Thomas JJ, (2002) Fatigue modeling for automotive applications. PSA Peugeot Citroën
[9] Mac Donald BJ, Hashmi MSJ, (2001) Three-dimensional finite element simulation of bulge forming using a solid bulging medium. Finite Elements in Analysis and Design 37:107-116
[10] Sadananda K, Vasudevan AK, (2003) Fatigue crack growth mechanisms in steels. International Journal of Fatigue 25:899-914
Research on Kinematics Based on Dual Quaternion for Five-axis Milling Machine Rui-Feng Guo1, Pei-Nan Li1, 2 1
Shenyang Institute of Computing Technology, Chinese Academy of Sciences, National Engineering Research Centre for High-end CNC, Shenyang 110004, China, [email protected] 2 College of Computing & Communication Engineering, Graduate University of the Chinese Academy of Sciences, Beijing, 100049, China, [email protected]
Abstract This paper presents the machine kinematics for a five-axis milling machine using a dual quaternion formulation of the kinematics equations. The goal of the machine kinematics is to evaluate a correct position of the parts in the workpiece. The workspace of a constrained five-axis milling machine is a subset of the group of spatial transformations, which in turn can be represented by a subset of the dual quaternions. The basic approach is to specify the dual quaternion machine kinematics for each transformation of a discrete approximation to the desired workspace. This dual quaternion structure has several advantages over the methods of Euler angles and rotation matrices. Here we present the theory and formulate the machine kinematics equations for a typical configuration of a five-axis milling machine. An application example is also presented. Keywords: CNC, dual quaternion, five-axis, machine kinematics, rotary axis, workpiece
1. Introduction
For tool/workpiece localization studies, we always consider the tool/workpiece as a rigid body. During different stages of manufacturing, a tool/workpiece may be placed in different coordinate frames. For example, when we design a tool/workpiece with CAD software, we work in a CAD reference coordinate frame; when we machine it with a CNC machine, the machining path is related to the machine coordinate frame or other work coordinate frames. We are interested in the relationship between the representations of the tool/workpiece in different coordinate frames; from a mathematical point of view, this relationship is the machine kinematics. Due to the presence of two additional rotational axes, the design of the machine kinematics of a five-axis milling machine is an indispensable and essential task. In order to evaluate a correct location (position and orientation) of the parts in the workpiece, the methodologies to describe machine kinematics are
conventionally derived using homogeneous transformation matrices [1]. However, these hardly preserve matrix orthonormality and often cause considerable inconvenience. In this paper, we create a dual quaternion transformation representation to derive the machine kinematics for a five-axis milling machine, in which the relative location between any two successive coordinate frames can be represented directly by dual quaternions. Euler parameters, as unit dual quaternions, are mainly used to represent the translation and rotation transformations of the workpiece location during machining. Some advantages of using Euler parameters in machine kinematics may be summarized as:
• First, they provide a conceptual framework that allows one to handle translational and rotational components in a unified manner.
• Second, there is no degeneration for any angular orientation.
• Third, the kinematic equations are easy to compute.
• Fourth, the transformation representation is compact.
The organization of this paper is as follows. Section 2 briefly reviews dual quaternions, discusses how to use them to represent translation and rotation, and develops the relationship between Euler parameters and the rotation matrix. Section 3 describes the machine constraints of the five-axis milling machine and analyzes the machine kinematics. Section 4 presents the main body of the paper, which creates the dual quaternion machine kinematics fundamentals for a five-axis milling machine. An application example developed from the forward machine kinematics is then given in Section 5.
2. Dual Quaternion Review
Nomenclature
A, B, X, Y, Z  Five-axis CNC machine tool joint coordinates
Q    Dual quaternion representing the transformation
Q*   Conjugate of a dual quaternion Q
R0   Dual part of a dual quaternion, a quaternion of translation
Q0   Real part of a dual quaternion, a quaternion of rotation
P1   Vector representing the initial location of a point being transformed
P2   Vector representing the final location of the point
R    Coordinates of the origin of the workpiece in the rotary table coordinates
T    Coordinates of the origin of the rotary table coordinates in the tilt table coordinates
C    Origin of the tilt coordinates in the cutter center coordinates
M    Machine coordinate system
W    Workpiece coordinate system
LCS  Location Coordinate System
rM   Dual quaternion transformation representation of the spatial vector in the machine coordinate system
rW   Dual quaternion transformation representation of the spatial vector in the workpiece coordinate system
ε    Dual unit with the property ε² = 0
d    Translation vector
n    Rotation axis
θ    Rotation angle
Mp = (xp, yp, zp), Mp+1 = (xp+1, yp+1, zp+1)  Two successive spatial positions of the cutter path
(ap, bp), (ap+1, bp+1)  Corresponding orientations of the cutter

2.1 Dual Quaternion Definition
Typically, a dual quaternion Q [2,3,4] is used to represent a spatial transformation. It consists of a pair of quaternions Q0 and R0 and is expressed by

Q = Q0 + ε·R0
  = [q01 + q02·i + q03·j + q04·k] + ε·[q1 + q2·i + q3·j + q4·k]
  = [q01, q02, q03, q04] + ε·(d·[q01, q02, q03, q04])/2    (1)

2.2 Dual Quaternion Conjugate
The conjugate Q* of a dual quaternion Q is defined as

Q* = (Q0 + ε·R0)* = [Q0]* + ε·[R0]*
   = [q01 − q02·i − q03·j − q04·k] + ε·[q1 − q2·i − q3·j − q4·k]
   = [q01, q02, q03, q04]* + ε·(d·[q01, q02, q03, q04]*)/2    (2)

2.3 Translation and Rotation Transformation by Dual Quaternion
The advantage of using dual quaternions is that a combined rotation and translation can be represented with one multiplication operation:

P2 = Q · P1 · Q*    (3)

For a five-axis milling machine, we usually use dual quaternions in Euler parameters to represent the axis rotations. Generally, there are four types of combined transformation: pure rotation, pure translation, rotation then translation, and translation then rotation.
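Equations (1)-(3) can be exercised in a few dozen lines of Python. The sketch below adopts one common convention, Q = r + ε·(t⊗r)/2 for a rotation quaternion r followed by a translation t, together with the point-transformation conjugate (quaternion-conjugated real part, negated-conjugated dual part); conventions vary between texts, so treat these details as assumptions rather than the paper's exact formulation.

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def dq_from_rt(r, t):
    """Dual quaternion Q = r + eps*(t*r)/2: rotate by r, then translate by t."""
    dual = qmul((0.0,) + tuple(t), r)
    return (r, tuple(0.5 * c for c in dual))

def dq_mul(A, B):
    """Dual quaternion product; composes transformation B first, then A."""
    real = qmul(A[0], B[0])
    dual = tuple(p + q for p, q in zip(qmul(A[0], B[1]), qmul(A[1], B[0])))
    return (real, dual)

def dq_transform(Q, p):
    """Point transform P2 = Q . P1 . Q*  (cf. equation 3)."""
    real, dual = Q
    Qc = (qconj(real), tuple(-c for c in qconj(dual)))  # point-transform conjugate
    P = ((1.0, 0.0, 0.0, 0.0), (0.0,) + tuple(p))       # point as a dual quaternion
    _, d = dq_mul(dq_mul(Q, P), Qc)
    return d[1:]

# Rotate 90 degrees about z, then translate by (1, 0, 0):
r = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
Q = dq_from_rt(r, (1.0, 0.0, 0.0))
p2 = dq_transform(Q, (1.0, 0.0, 0.0))   # approximately (1, 1, 0)
```

Composing two such dual quaternions with `dq_mul` and transforming once gives the same point as transforming twice in sequence, which is exactly the chaining exploited by the machine kinematics equations later in the paper.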
3. The Machine Constraints

In this paper, we introduce a constraint-based geometric modeling approach within the framework of the kinematics of five-axis milling machines. The inverse
kinematics of the machine, combined with the geometric constraints, forms a basis for simulating the machine motion. A solid model of the milling machine is a superposition of components, each associated with appropriate constraints. To apply the constraint-based approach, we identify the constraints associated with each axis of the machine in terms of kinematic pairs. The constraints are then represented by the constraint graph.
Figure 1. Five-axis milling machine
Consider a typical configuration of a five-axis milling machine [5], displayed in Fig 1. The machine has 5 degrees of freedom: the cutter translates only along the z-axis, the knee along the y-axis, and the saddle along the x-axis; the tilt table rotates about the y-axis, whereas the rotary table rotates about the x-axis. The use of the required kinematic pairs is illustrated by the constraint graph shown in Fig 2. The propagation sequence starts from the node Base; a breadth-first traversal of the constraint graph yields the following sequence: Base - Cutter - Knee - Saddle - Rotary - Workpiece.

Figure 2. The constraint graph of the five-axis milling machine
Research on Kinematics Based on Dual Quaternion for Five-axis Milling Machine
389
Table 1. Allowable motions of a five-axis milling machine

Components (Tar-SM)    Allowable Motion
Base                   Fixed
Cutter                 TR-LN, RT-LN
Knee                   TR-LN
Saddle                 TR-LN
Tilt table             RT-LN
Rotary table           RT-LN
Workpiece              Fixed
Note that the directed graph only represents the constraints; it is not a solid representation scheme. Every arc of the graph corresponds to a constraint between two components, and the direction of the arc denotes the dependency of the constraint: the component that the arc points to depends on the component that the arc departs from. This implies that changes in the geometry (coordinates and size) of the latter are always propagated to the former. Therefore, all the constraints are split into two groups, in and out: the in-group includes constraints imposed on the target component, whereas the out-group involves constraints propagated to the dependent components. For each constraint, the pair variables, number of degrees of freedom, and relative motion represent the allowable motion. The virtual kinematics of the components is defined by [6]: (1) Motion Type, which refers to either allowable translations or rotations; (2) Motion Geometric Element (MotionGE), which defines the origin of the local coordinate system of the solid model that can be translated or rotated. The MotionGE is a point, a line or a plane associated with the following five types of allowable motion:
• Translation on Plane (TR-PL)
• Translation on Line (TR-LN)
• Translation in 3D Space (TR-3Dspace)
• Rotation about Point (RT-PNT)
• Rotation about Line (RT-LN)
A set of the allowable motions for each component is displayed in Table 1.
4. Dual Quaternion Machine Kinematics
Machining of mechanical parts requires the generation of tool paths that define the kinematic transformation of the tool/workpiece with respect to the parts. Three-axis tool paths are represented by a set of Cartesian position vectors; five-axis tool paths additionally specify the orientation of the tool. In this paper, we outline the kinematics fundamentals using dual quaternions to describe the transformations between the different coordinate frames. Consider the typical configuration of the five-axis milling machine mentioned above (Fig 1). The machine is guided by axial commands carrying the 3 spatial coordinates of the cutter in the machine coordinate system M and the two rotation
angles. The supporting CNC software generates a successive set of coordinates in the workpiece coordinate system W. Suppose that in the machine coordinate system M, pose i+1 is located at pi (Fig 3) in the Location Coordinate System (LCS) {i}, after a translation or a rotation about one of the three base coordinate axes, such as the z-axis, by angle θi+1 in the LCS {i+1}. If we want to represent a spatial vector rMi+1 in the LCS {i+1} relative to the LCS {i}, the transformation between the two adjacent coordinate systems can be expressed in dual quaternions as

rMi+1 = [rTMi+1, rRMi+1] = Qi · rMi · Qi*    (4)
Furthermore, if pose i is located at pi−1 in the LCS {i−1} and pose i is rotated about one of its coordinate axes, for instance the y-axis, by angle θi, we can relate rMi to rMi−1 by the dual quaternion expression

rMi = [rTMi, rRMi] = Qi−1 · rMi−1 · Qi−1*    (5)
Substituting (5) into (4), we obtain the dual quaternion transformation representation of the spatial vector in terms of rMi−1 as follows:

rMi+1 = [rTMi+1, rRMi+1] = Qi · rMi · Qi* = Qi · Qi−1 · rMi−1 · Qi−1* · Qi*    (6)
This equation can be extended to a general form. Suppose that a spatial vector given in the LCS {1} is to be represented in the LCS {i+1}. Applying (4) for the transformations taking place in each LCS from the LCS {1} to the LCS {i+1}, we obtain the dual quaternion transformation representation

rMi+1 = [rTMi+1, rRMi+1] = (Qi · Qi−1 ⋯ Q1) · rM1 · (Q1* · Q2* ⋯ Qi*)    (7)

We name (7) the general dual quaternion transformation representation in machine coordinate systems.
Figure 3. The spatial transformation mathematical model
5. Examples
We will enumerate the links of the kinematic chain, represented by dual quaternions, from the machine zero to the workpiece zero (Fig 4). The coordinate system attached to the machine is assigned the index 0, the first link the index 1, and so on; the location pose is assigned the index 5.
Figure 4. Schematic diagram of Five-axis CB kinematics, moved workpiece
A generalized kinematics representation based on dual quaternions (7), for the 6 links of the serial kinematic structure relative to the same pi, yields:

1. The table rotates by θC about the Z axis for link 1; using the dual quaternion, this takes the form

Pi1 = Q(θC/2, 0) · riM · Q*(θC/2, 0)    (8)
2. Vector from the machine zero to the joint for link 2; substituting (8), we have

Pi2 = Q(0, j0) · Pi1 · Q*(0, j0)
    = Q(0, j0) · Q(θC/2, 0) · riM · Q*(θC/2, 0) · Q*(0, j0)    (9)

3. The table rotates by θB about the Y axis for link 3; substituting (9), this is expressed by
Pi3 = Q(θB/2, 0) · Pi2 · Q*(θB/2, 0)
    = Q(θB/2, 0) · Q(0, j0) · Q(θC/2, 0) · riM · Q*(θC/2, 0) · Q*(0, j0) · Q*(θB/2, 0)    (10)
4. Vector from the last joint of the table to the zero point of the table; substituting (10), this takes the form

Pi4 = Q(0, p0) · Pi3 · Q*(0, p0)
    = Q(0, p0) · Q(θB/2, 0) · Q(0, j0) · Q(θC/2, 0) · riM · Q*(θC/2, 0) · Q*(0, j0) · Q*(θB/2, 0) · Q*(0, p0)    (11)
5. Vector of the programmed pose in the LCS, then the tool compensation vector of the machine, and finally the vector from the tool holder to the first rotary axis, measured at the machine initial pose; substituting (11), we have

Pi5 = Q(0, x) · Q(0, t) · Q(0, t0) · Pi4 · Q*(0, t0) · Q*(0, t) · Q*(0, x)    (12)
    = Q(0, x) · Q(0, t) · Q(0, t0) · Q(0, p0) · Q(θB/2, 0) · Q(0, j0) · Q(θC/2, 0) · riM · Q*(θC/2, 0) · Q*(0, j0) · Q*(θB/2, 0) · Q*(0, p0) · Q*(0, t0) · Q*(0, t) · Q*(0, x)
It is necessary to mention that different definitions of the five-axis milling machine zero lead to different kinematics equations for the five-axis machine.
6. Conclusion
Dual quaternions can be employed to represent the kinematics of a five-axis milling machine, and a general dual quaternion transformation representation has been derived. Compared with ordinary methods, we believe that stating the machining task simply in the machine coordinate system is a great advantage when deriving the machine kinematics equations. The relation between the axis translation and rotation transformations gives the dual quaternion kinematics practical usage. Furthermore, dual quaternion kinematics applications for five-axis milling machines, such as inverse kinematics, dynamics and control, can build on the results presented in this paper.
7.
References
[1] Koren, Y., and Lin, R.-S., 1995, "Five-Axis Surface Interpolators," Ann. CIRP, 44(1), pp. 379-382.
[2] Walker, M.W., Shao, L., and Volz, R.A., 1991, "Estimating 3-D Location Parameters Using Dual Number Quaternions," CVGIP: Image Understanding, 54(3), pp. 358-367.
Research on Kinematics Based on Dual Quaternion for Five-axis Milling Machine
[3] McCarthy, J.M., 1990, Introduction to Theoretical Kinematics, The MIT Press, Cambridge, MA.
[4] Bottema, O., and Roth, B., 1979, Theoretical Kinematics, North Holland Press, NY.
[5] Bohez, E.L.J., 2002, "Five-Axis Milling Machine Tool Kinematic Chain Design and Analysis," International Journal of Machine Tools & Manufacture, 42, pp. 505-520.
[6] Fenando, T., Fa, M., Dew, P.M., and Munlin, M., 1995, "Constraint-Based 3D Manipulation Techniques for Virtual Environments," Virtual Reality Applications.
Consideration for Galvanic Coupling of Various Stainless Steels & Titanium During Application in a Water-LiBr Absorption-Type Refrigeration System

Muhammad Shahid Khan1, Saad Jawed Malik2

1 Department of Mechanical Engineering, College of E&ME, National University of Sciences & Technology, Rawalpindi, Pakistan, E-mail: [email protected]
2 NDC, NESCOM, Islamabad, Pakistan.
Abstract LiBr is one of the potential absorbents for absorption refrigeration units, and various passivating metals and alloys have been proposed as structural materials for such units. Galvanic (bimetallic) corrosion is a well-known form of corrosion responsible for accelerated attack under certain favorable conditions, and studying galvanic behavior can help predict the corrosion behavior of a metallic pair. Potentiodynamic "E-log i" curves were obtained for three grades of austenitic stainless steel, a duplex stainless steel and commercially pure titanium in lithium bromide (LiBr) solutions, using electrochemical techniques. A three-electrode system connected to a Gamry® framework was employed, and the materials were scanned to obtain polarization diagrams in various concentrations of LiBr, i.e. commercial LiBr (850 g/l solution containing chromate inhibitor), 400 g/l LiBr and 700 g/l LiBr solutions, at room temperature. The individual "E-log i" curves obtained under similar conditions were superimposed to predict the galvanic behavior of the various materials, making use of mixed potential theory. Duplex stainless steel and titanium were found to be cathodic to the other materials in the commercial solution; however, these materials were predicted to suffer severe galvanic corrosion in inhibitor-free LiBr solutions. AISI 316 revealed cathodic behavior when coupled to the other materials in the dilute solution (400 g/l, without inhibitor), whereas anodic behavior was predicted for couplings in the commercial solution (containing inhibitor). An experimental study is presented in this paper; numerical modeling of the results is recommended for future work. Keywords: Corrosion, Galvanic coupling, Stainless steel, Titanium.
1.
Introduction
Absorption units reduce the use of CFCs as refrigerants and eliminate concerns about lubricants. Therefore, the use of absorption heating and refrigerating systems is expanding widely in air conditioning units for buildings and automobiles [2].
Aqueous solutions containing high concentrations of lithium bromide are employed as the absorbent in most absorption-type heating and refrigerating systems that use natural gas or steam as the energy source [1]. In an absorption-based system, water is used as the refrigerant and LiBr as the absorbent to maintain vacuum in the evaporation chamber; the system runs on a cycle in which concentrated brine is diluted with cold water [3]. Corrosion problems on metallic components in refrigeration systems and heat exchangers in absorption plants using LiBr are not uncommon. Hydrogen generation, a by-product of the electrochemical processes occurring during corrosion, also poses a serious threat to the structure and causes considerable loss of efficiency [4]. Thus, it is necessary to develop new corrosion-resistant alloys and to use protection techniques, such as pH control and the application of inhibitors or inorganic coatings, to minimize general and localized corrosion. The rate of corrosion in refrigeration cycles is determined by a multitude of factors such as temperature, concentration of the working fluid, pH, flow conditions, galvanic coupling and metallurgical factors [5]. For example, bimetallic corrosion due to unfavorable galvanic coupling [6,7] and pitting or localized corrosion due to the presence of halide salts [8] are not uncommon in these refrigeration systems. The use of lithium chromate (Li2CrO4) as an inhibitor in LiBr systems has proved beneficial for copper and mild steel; its action may need to be modified when galvanic couples exist in the system [9]. Various passivating alloys, such as austenitic and duplex stainless steels and titanium, are considered potential candidates for use in the above-mentioned environment.
These materials have been under discussion for their relatively high corrosion resistance, and their corrosion behaviors have been investigated by a number of workers [6-15]. This paper describes a comparison among the corrosion behaviors of five passivating materials, i.e. SS304, SS316, SS316L, duplex stainless steel and titanium, in commercial LiBr containing chromate inhibitor, 400 g/l LiBr and 700 g/l LiBr aqueous solutions at room temperature. The potential danger, or safety, of various galvanic couples under particular environmental conditions has also been estimated using mixed potential theory, which predicts the behavior by superimposing the polarization curves of the alloys concerned obtained under similar conditions.
2.
Experimental Methodology
2.1
Materials
The materials and their chemical compositions are given in Table 1.
Table 1. Materials for investigation

Material   Fe    C     Mn    S       Si    Cr     Ni     Mo    Ti
304        Bal.  0.04  1.17  0.021   0.59  18.5   8.26   ---   ---
316        Bal.  0.08  1.49  0.001   0.49  17.14  9.99   2.34  ---
316L       Bal.  0.03  1.27  0.022   0.43  16.68  11.11  2.32  ---
DSS        Bal.  0.02  0.68  0.0008  0.32  26.19  6.62   3.45  ---
Titanium   0.3   0.1   ---   ---     ---   ---    ---    ---   Bal.

2.2
Environments / Electrolytes
Three electrolytes, listed in Table 2, were used for the electrochemical testing of all the materials. All tests were performed at room temperature using LiBr salt of commercial purity. All electrolytes were used in the deaerated condition, maintained by continuous purging of nitrogen through the solution during experimentation.

Table 2. Chemical properties of the electrolytes

Electrolyte  LiBr conc.  Inhibitor                pH
Solution-1   850 g/l     Li2CrO4 (0.3%; 4.8 g/l)  10
Solution-2   400 g/l     Nil                      5
Solution-3   700 g/l     Nil                      3.2

2.3
Electrochemical Experimentation
The experimental method used was based on the original work of Stern and Geary [24], using the three-electrode DC corrosion technique. A silver-silver chloride electrode (Ag/AgCl) was used as the reference electrode and platinum as the auxiliary electrode. The sample under investigation was polarized first cathodically and then anodically, giving a graph of E (applied potential) vs. log i (current density). The curves generated by this method were superimposed to determine the galvanic behavior between couples of materials. A dedicated corrosion-testing machine, a Gamry® PC3 computer-interfaced CMS-100 framework, was used for this study. The complete experimental techniques are described in [22].
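Under mixed potential theory, the predicted couple potential lies where the anodic branch of the more active metal crosses the cathodic branch of the nobler one. With idealized straight Tafel lines on the E-log i plot, this intersection has a closed form. The sketch below (Python, with entirely hypothetical Ecorr, icorr and Tafel-slope values, not the measured data of this study) illustrates that superposition step:

```python
import math

def mixed_potential(Ea, ia, ba, Ec, ic, bc):
    """Intersection of the anodic Tafel line of the active metal
    (Ecorr = Ea in V, icorr = ia, slope ba V/decade) with the cathodic
    Tafel line of the nobler metal (Ec, ic, bc).
    Assumes equal electrode areas and ideal Tafel behaviour.
    Returns (couple potential E, galvanic current density i_g)."""
    # log10 i_anodic   = log10 ia + (E - Ea) / ba
    # log10 i_cathodic = log10 ic - (E - Ec) / bc
    # Setting them equal is linear in E:
    E = (math.log10(ic / ia) + Ea / ba + Ec / bc) / (1 / ba + 1 / bc)
    i_g = ia * 10 ** ((E - Ea) / ba)
    return E, i_g

# Hypothetical couple: active member at -0.40 V, nobler member at -0.20 V
E, ig = mixed_potential(Ea=-0.40, ia=0.10, ba=0.12, Ec=-0.20, ic=0.05, bc=0.12)
```

The mixed potential falls between the two corrosion potentials, and the ratio of the galvanic current to the uncoupled icorr of the anodic member feeds directly into the severity criterion discussed in Section 4.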
3.
Results and Discussion
The authors investigated the E-log i diagrams for AISI 304, AISI 316, AISI 316L, DSS and Ti in the three environments mentioned in Table 2, and the results have been presented previously [22]. The Ecorr and icorr values derived from the graphs are summarized in Table 3. The corrosion potential of all materials was found to be relatively more negative in commercial LiBr due to the presence of the chromate inhibitor. This was expected, since Li-chromate is normally recommended as an inhibitor in LiBr [9].
Table 3. Summary of Ecorr and icorr

             400 g/L LiBr            700 g/L LiBr            Commercial LiBr
Materials    Ecorr(mV) icorr(μA/cm²) Ecorr(mV) icorr(μA/cm²) Ecorr(mV) icorr(μA/cm²)
AISI 304     -313      0.873         -453      0.852         -630      0.054
AISI 316     -216      0.083         -157      0.076         -818      0.037
AISI 316L    -213      0.146         -285      0.458         -695      0.018
Duplex SS    -350      0.057         -300      0.048         -501      0.015
Titanium     -386      0.041         -386      0.037         -642      0.032
Munoz et al. [15] have demonstrated the results of galvanic coupling of Cu, Ni and their alloys, indicating the predicted behaviors of the materials. Similar results for the materials under study are presented in the text that follows.

3.1
Couplings of AISI 304 with Other Materials
Figure 1 shows the predicted mixed potentials and galvanic currents when the curves for AISI 304 were superimposed with those of the other materials in the same environments, i.e. the three LiBr solutions. AISI 304 is the stainless steel with the more noticeable anodic character, in the sense that its corrosion resistance decreases as a result of the galvanic effect when coupled to the other stainless steels. The pair AISI 304-AISI 316 revealed the highest galvanic current among the various couples. The highest galvanic current, 5 μA, was calculated in the 700 g/L LiBr solution, where 304 behaved as the anode; this indicates possible corrosion of 304 when connected to 316 in the 700 g/l uninhibited solution. Mixed potential values were observed to shift towards more negative values in the commercial LiBr brine; however, the current values were not significant.
Figure 1. Galvanic current and mixed potentials of AISI 304 coupled to the other stainless steels and titanium in LiBr solutions. Blank column for mixed potential, shaded for galvanic current. The symbols A (anodic) and C (cathodic) indicate the behavior of the AISI 304 in relation to each coupled material.
3.2
Couplings of AISI 316 with Other Materials
Figure 2 shows the couplings of AISI 316 with the other materials. AISI 316 behaved as a cathode in the uninhibited solutions, where a relatively high galvanic current was predicted when it was coupled with 304; the currents with the other materials were not very significant. In the commercial solution, however, AISI 316 behaved as an anode, and all other materials accelerated its corrosion through the galvanic effect. In 700 g/L LiBr, AISI 316 aggravated the corrosion of all other materials; in all cases the galvanic current was higher than the icorr of the anodic member. Mixed potentials were very negative in the commercial solution. The galvanic currents of the 316-DSS and 316-Ti couples did not appear to be very significant; however, a tendency for corrosion of DSS and Ti was predicted on coupling.
Figure 2. Galvanic current and mixed potentials of AISI 316 coupled to the other SS and titanium in LiBr solutions. Blank column for mixed potential, shaded for galvanic current. The symbols A (anodic) and C (cathodic) indicate the behavior of the AISI 316 in relation to each coupled material.
3.3
Couplings of Duplex Stainless Steel
Figure 3 shows the mixed potentials and galvanic currents formed when duplex stainless steel was theoretically coupled with the other materials. DSS showed completely cathodic character in the commercial solution, and the possible galvanic currents were very low, indicating relatively safe couples. In 700 g/L LiBr, DSS, being cathodic to 304, showed relatively high current values, indicating preferential corrosion of 304.
Figure 3. Galvanic current and mixed potentials of DSS coupled to the stainless steels and to the titanium in LiBr solutions. Blank column for mixed potential, shaded column for galvanic current. The symbols A (anodic) and C (cathodic) indicate the behavior of the DSS in relation to each coupled material.
3.4
Couplings of Titanium
Figure 4 represents the galvanic currents and mixed potentials calculated between titanium and the other alloys in the three LiBr solutions. The cathodic nature of titanium was mostly reflected in the commercial LiBr brine; only DSS was cathodic to titanium in the commercial solution, and it established a very small galvanic current (less than the corrosion current of titanium), so titanium was predicted to be very safe in the commercial solution. A relatively higher current was calculated when titanium was coupled with 316; however, the amount was not very significant. The highest galvanic currents were calculated in the 400 g/L LiBr solution, where 304, 316 and 316L generated currents of more than 0.6 μA on the titanium electrode. In 700 g/L LiBr, 316 and 316L accelerated titanium corrosion by the galvanic effect.
Figure 4. Galvanic currents and mixed potentials of titanium coupled to the stainless steels in LiBr solutions. Blank column for mixed potential, shaded column for galvanic current. The symbols A (anodic) and C (cathodic) indicate the behavior of the titanium in relation to each coupled material.
4.
Severity of Couples
According to Mansfeld and Kendel [23], the relative increase in the corrosion rate of the anodic member of the pair can be expressed by the ratio iG/iA,corr, where iG is the galvanic current density and iA,corr is the corrosion current density of the uncoupled anodic member. The magnitude of this ratio may be used as a guide to the severity of the galvanic effect in a couple; a value of less than 5 was suggested to imply compatibility of the members in the couple. Since the ratio gives the relative increase in the corrosion of the anodic member, it reflects the severity of the couple.
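Once the ratios are known, the compatibility criterion can be applied mechanically. A small sketch (Python; the sample ratios are those reported in Table 4 for the 400 g/L LiBr solution) classifies couples against the threshold of 5:

```python
def couple_severity(ratio, threshold=5.0):
    """Mansfeld-style compatibility check: an iG/iA,corr ratio below the
    threshold implies the members of the couple are compatible."""
    return "compatible" if ratio < threshold else "incompatible"

# Sample iG/iA,corr ratios from Table 4 (400 g/L LiBr column)
ratios_400 = {"304-316": 2.28, "304-Ti": 15.03, "316-Ti": 30.56, "DSS-Ti": 4.84}
verdicts = {pair: couple_severity(r) for pair, r in ratios_400.items()}
```

With these values, 304-Ti and 316-Ti come out as incompatible pairs in the uninhibited dilute solution, in line with the discussion below.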
Table 4. iG/iA,corr ratios for all the couples in all the three environments

           400 g/L LiBr          700 g/L LiBr          Commercial LiBr
Coupling   Anode      iG/iA,corr Anode      iG/iA,corr Anode      iG/iA,corr
304-316    AISI 304   2.28       AISI 304   5.8        AISI 316   21
304-316L   AISI 304   0.95       AISI 304   1.86       AISI 316L  6.66
304-DSS    DSS        6.89       AISI 304   3.71       AISI 304   2.265
304-Ti     Ti         15.03      AISI 304   0.3        AISI 304   1.465
316-316L   AISI 316L  0.662      AISI 316L  4.356      AISI 316   8.366
316-DSS    DSS        13.74      DSS        26         AISI 316   3.877
316-Ti     Ti         30.56      Ti         26.74      AISI 316   16.7
316L-DSS   DSS        10.92      DSS        6.547      AISI 316L  8.914
316L-Ti    Ti         19.3       Ti         6.716      AISI 316L  6.66
DSS-Ti     Ti         4.84       Ti         5.335      Ti         0.242
The iG/iA,corr ratios for all the possible couples in each environment are given in Table 4. Couples with ratios substantially higher than 5 can be regarded as incompatible pairs, exhibiting a potential danger of increased corrosion. Titanium and duplex stainless steel are safe in the commercial solution, because both are almost always cathodic in couples with the other materials, whereas they suffer severe galvanic corrosion in the solutions without additives.
5.
Conclusions

- As per galvanic coupling using mixed potential theory, the corrosion resistance of passivating materials has been predicted to decrease in certain galvanic couples of passivating alloys used under deaerated conditions. Alloys 304, 316 and titanium have been predicted to show relatively higher currents in coupled conditions.
- Galvanic couples of 304-Ti, 316-Ti and 316-DSS have shown relatively high iG/iA,corr ratios, indicating a higher tendency of corrosion for one of the members in uninhibited LiBr solutions.
- LiBr of low concentration has been found to show higher aggressivity towards the above-mentioned couples, as the current values were relatively higher.
- This study has been performed experimentally under specific environmental conditions; numerical modeling may be carried out to develop a more generalized approach.

6.
References
[1] Stoecker, W.F., and Jones, J.W., Refrigeration and Air Conditioning, 2nd Ed., McGraw-Hill, 1982.
[2] McLaughlin, S.M., "An Alternative Refrigeration System for Automotive Applications," MS Thesis, Mississippi State University, August 2005.
[3] Itzhak, D., and Elias, O., Corrosion, Vol. 50, No. 2, NACE International, 1994.
[4] Munoz, A.I., Anton, J.G., Nuevalos, S.L., Guinon, J.L., and Herranz, V.P., Corrosion Science, 46 (2004), 2955-2974.
[5] Anderko, A., and Young, R.D., Corrosion, Vol. 56, No. 5, NACE International, 2000.
[6] Munoz, A.I., Anton, J.G., Guinon, J.L., and Herranz, V.P., Corrosion, Vol. 57, No. 6, NACE International, 2001.
[7] Munoz, A.I., Anton, J.G., Guinon, J.L., and Herranz, V.P., Corrosion, Vol. 59, No. 1, NACE International, 2003.
[8] Munoz, A.I., Anton, J.G., Guinon, J.L., and Herranz, V.P., Corrosion, Vol. 60, No. 10, NACE International, 2004.
[9] Munoz, A.I., Anton, J.G., Guinon, J.L., and Herranz, V.P., Electrochimica Acta, 50 (2004), 957-966.
[10] Maki, T., "Stainless Steel: Progress in Thermomechanical Treatment," Current Opinion in Solid State and Materials Science, 1997.
[11] Munoz, A.I., Anton, J.G., Guinon, J.L., and Herranz, V.P., Corrosion, Vol. 58, No. 12, NACE International, 2002.
[12] Levine, R., Air Conditioning, Heating and Refrigeration News, January 24, 2000.
[13] Guinon, J.L., Anton, J.G., Herranz, V.P., and Lacoste, G., Corrosion, Vol. 50, No. 3, NACE International, 1994.
[14] Munoz, A.I., Anton, J.G., Guinon, J.L., and Herranz, V.P., Corrosion, Vol. 58, No. 7, NACE International, 2002.
[15] Munoz, A.I., Anton, J.G., Guinon, J.L., and Herranz, V.P., Corrosion, Vol. 59, No. 7, NACE International, 2003.
[16] Kelly, R.G., Scully, J.R., et al., Electrochemical Techniques in Corrosion Science and Engineering, Marcel Dekker, 2003.
[17] Malik, A.U., Siddiqi, N.A., Ahmed, S., and Andijani, I.N., Corr. Sci., 37(10), 1521 (1995).
[18] Vanini, A.S., Audoudard, J.P., and Marcus, P., Corr. Sci., 3(11), 1825 (1994).
[19] Guinon, J.L., Garcia-Anton, J., Perez-Herranz, V., and Lacoste, G., Corrosion, p. 240, March 1994.
[20] Tanno, K., Itoh, M., Sekiya, H., Yashiro, H., and Kumagai, N., Corrosion Science, Vol. 34, No. 9, 1993.
[21] Shelton, R.B., and Courville, G.E., Energy Division Progress Report for 1994-95, Report No. ORNL-6893, Oak Ridge National Laboratory, US Dept. of Energy, 1996.
[22] Shahid, M., and Malik, S.J., "Corrosion and Passivation Behavior of Various Stainless Steels in LiBr Solutions Used in Absorption-Type Refrigeration System," Proc. International Symposium on Advanced Materials, 3-8 Sept. 2007, Islamabad, Pakistan, in press.
[23] Mansfeld, F., and Kendel, J.V., "Laboratory Studies of Galvanic Corrosion of Aluminum Alloys," Galvanic and Pitting Corrosion - Field and Laboratory Studies, ASTM STP 576, ASTM, 1976.
[24] Stern, M., and Geary, A.L., J. of the Electrochemical Society, Vol. 104, No. 1, 56-63, 1957.
Real Root Isolation Arithmetic to Parallel Mechanism Synthesis

Youxin Luo1, Dazhi Li1, Xianfeng Fan2, Lingfang Li1, Degang Liao1

1 Department of Mechanical Engineering, Hunan University of Arts & Science, Changde, Hunan, 415000, P.R. China, e-mail: [email protected]
2 Department of Mechanical Engineering, University of Ottawa, Ottawa, Ontario, K1N 6N5, Canada, e-mail: [email protected]
Abstract Most problems of mechanism synthesis can be transformed into finding all solutions of a system of nonlinear equations. Existing numerical methods have their weaknesses, and the error they carry means that their solutions cannot be relied upon with full confidence. Symbolic calculation methods, in contrast, are exact, a property that no numerical calculation method possesses. Based on symbolic calculation, this paper presents a multi-real-root isolation arithmetic for mechanism synthesis. The program is written with Maple 11.0 software. In the programming process, approximate factorization through rational transformation is used effectively. The method is simple to program, the analysis is convenient and the results are credible. As an example, the 3-RPR parallel mechanism synthesis is studied, and all solutions are found. Keywords: Factorization; parallel mechanism; mechanism synthesis; real-root isolation arithmetic; polynomial system
1.
Introduction
The synthesis problems of mechanisms include rigid-body guidance, function generation and reproduction of the coupler path [1]. Much research is devoted to efficient design methods that satisfy the design requirements. With the development of computer technology, analysis techniques for mechanism synthesis problems are improving and their applications are broadening. The so-called analytical technique translates the mechanism synthesis problem into nonlinear symbolic or numerical equations, which are then analyzed and solved. A numerical method always boils down, in the end, to solving a nonlinear equation or system of equations. From the viewpoint of symbolic calculation, much numerical arithmetic is not well adapted. The reasons are that, on the one hand, every current numerical method has its own defects [1-3] (such as the initial-value problem of iterative methods, the bounding problem of the homotopy continuation method, etc.); on the other hand, numerical methods always carry error. For example, f(x) = (x-1)(x-2)...(x-20) obviously has 20 real roots, but f(x) - 10^(-9) x^19 has only 14 real roots; the other six roots are turned into complex roots. Compared with numerical calculation, the primary advantage is that symbolic calculation is exact.

A planar parallel mechanism with 3 degrees of freedom can be applied to robot operating mechanisms, NC precision embroidery machines, NC wire-cutting machines, NC EDM machine tools, precision jiggle mechanisms, etc. The closed-form forward solution of this mechanism is the basis of mechanism design, workspace calculation and adjustment calculation for the planar parallel mechanism with 3 DOF; as a result, it is significant to solve this problem. In 1983, Hunt proposed that the planar parallel mechanism with 3 DOF could serve as a robot operating mechanism [4]; Han Lin in 1998 and Liu Huilin in 2000 obtained the closed-form solution [5-6]. The purpose of a closed-form solution is to obtain all solutions. The Newton iteration method requires a good initial value, and one initial value yields only one set of solutions rather than all of them. The homotopy continuation method can obtain all solutions, but its computational complexity is high. The literature [7-10] shows that the chaos method can obtain all, or the majority of, the solutions, but this theory is also very complicated. Based on symbolic calculation, this paper presents a multi-real-root isolation arithmetic for mechanism synthesis. The program is implemented with Maple 11.0 software. In the programming process, approximate factorization through rational transformation is used effectively. The method is simple to program, the analysis is convenient and the results are credible. As an example, the 3-RPR parallel mechanism synthesis is presented, and all solutions are found. This provides a new computing method for mechanism synthesis.
2.
Real Root Isolation Arithmetic and Maple Realization
2.1
The Real Root Isolation Arithmetic of Polynomial
The so-called real root isolation arithmetic for a polynomial computes a series of mutually disjoint intervals that together contain all real roots of the polynomial, with exactly one root in each interval. Obviously the length of each interval can be made arbitrarily small. In a sense we can say that in this way we "get" all the real roots of the polynomial, because we can substitute for the algebraic number "the (unique) root in one interval" and carry through every operation. There are two kinds of symbolic algorithms for isolating all real roots of a polynomial: one is based on the Sturm theorem and the other on Descartes' rule of signs. The former counts the sign changes of the Sturm sequence, so the Sturm sequence must be calculated and then evaluated at the endpoints; as a result, this algorithm involves a large amount of calculation. The latter is much easier, because only the number of sign changes of the coefficient sequence must be counted. It is based on Descartes' rule of signs, was presented by J.V. Uspensky, and was improved by G.E. Collins and others. The Maple function realroot implements the Uspensky arithmetic. For example, for

p := (x - 1)(x - 2)...(x - 20) - 10^(-9) x^19,

invoking realroot(p), 14 real roots are obtained.

2.2
The Real Root Isolation Arithmetic of Polynomial in Multi-variables
Extending the real root isolation arithmetic from a polynomial in one variable to a multivariate polynomial system {f1(x1, x2, ..., xn), ..., fn(x1, x2, ..., xn)}, the basic idea is first to triangularize (see Section 2.2.1), then to apply irreducible decomposition to the triangularized polynomials so that they have no multiple roots (see Section 2.2.2). We then obtain a triangularized polynomial system

{g1(u1), g2(u1, u2), ..., gn(u1, u2, ..., un)}

and initial formulas

{I1(u1, ..., un), ..., Is(u1, ..., un)},

where u1, ..., un is a permutation of x1, ..., xn.

Then, using the real root isolation arithmetic for a one-variable polynomial, the real root isolation intervals of g1(u1) with respect to u1 can be calculated, and the real root isolation intervals of gk(u1, ..., uk) (k = 2, ..., n) with respect to uk are calculated using max-min polynomial estimating (see Section 2.2.2). The next step is, according to the conclusion of the max-min polynomial estimating, to judge whether the initial formulas {I1(u1, ..., un), ..., Is(u1, ..., un)} are zero at each group of roots; the roots that make the initial formulas nonzero are the ones needed [13]. The function implementing this algorithm, developed with Maple 11.0, is named mrealroot(). The significance of the real root isolation algorithm is that there is exactly one real root inside each isolation interval and no real root outside the intervals. The calling convention of mrealroot() is:

mrealroot([f1(x1, ..., xn), ..., fn(x1, x2, ..., xn)], [x1, x2, ..., xn], c, [h1(x1, ..., xn), ..., hm(x1, ..., xn)])

where n and m are natural numbers, c is the initial computational accuracy, and h1(x1, ..., xn), ..., hm(x1, ..., xn) are symbolic polynomials that need to be judged for zero. When using mrealroot(), the coefficients of the polynomials {f1(x1, x2, ..., xn), ..., fn(x1, x2, ..., xn)} should be rational numbers; if they are not, use convert() [14] to transform the coefficients into rational form according to the needed accuracy. If the accuracy used in the transformation differs, the approximate factorization differs too.

2.2.1
Triangularization Method of Nonlinear Equations
The typical methods of equations system triangularization are Wu method and Grobner method. Now we introduce the Wu-elimination method. The Wu-elimination method is an analytic solution method to solve a system of polynomial equations. In this section we introduce some concepts and terminology
K be a basic field of characteristic zero. We consider a set of polynomials PS in variables x1 , x 2 , " , x n over K : of it. More details can found in reference [3, 15]. Let
P1 ( x1 , x 2 , " , x n ) 0 ° . PS : ®" °P ( x , x ,", x ) 0 n ¯ m 1 2
0, P2 0, " , P m 0 will be denoted by Zero(PS ) . And for a polynomial J other than Pi , i 1,2, ", m , we denote the totality of the zeros if PS is not the zero of J by Zero( PS / J ) . Therefore, the problem of system of polynomial equations P1 0, P2 0, " , P m 0 is just how to determine the set Zero(PS ) explicitly. The totality of zeros of P1
The notion of ascending set plays an important role in determining the zero set Zero( PS ) . Let AS be a sequence of polynomials
A1 (u1, u 2 , " , u d , y1 ) 0 ° AS : ®" ° A (u u , " , u , y , y , " , y ) d 1 2 r ¯ r 1, 2
0
in which u1, u 2 , " , u d , y1 , y 2 , " , y r is a rearrangement of x1, x 2 ,", x n , and
d r
n . If u1, u 2 , ", u d and y1 , y 2 ,", y r are considered parameters and
variables respectively, the polynomial set AS is a triangular one. Moreover, the polynomial
Ai , i 1,2,", r , can be written as Ai
I i y imi (terms of lower
Real Root Isolation Arithmetic to Parallel Mechanism Synthesis
409
yi ), where mi is the highest degree of Ai with respect to the variable y i , which will be denoted by deg yi Ai . We call I i as the initial of Ai of degree in
and
i as the class of Ai .
Definition 1. For two polynomials if the degree of the initial
A j and Ai with j ! i , A j is reduced to Ai
I j with respect to the variable yi satisfies
deg y i ( I i ) mi
. Definition 2. The triangular polynomial set AS is an ascending set if polynomial A j is reduced to polynomial Ai for any pair A j and Ai with j ! i . Definition 3. An ascending set is a contradictory one if it includes a polynomial which is a non-zero constant. From above definitions, we may consider the zero set Zero( AS ) of an ascending set is well determined. For an arbitrarily given P , the remainder R of P with respect to an ascending set AS (denoted by Remd (P/AS) can be obtained by the following remainder formula n
n
i 1
i 1
R [ I iD i ] P ¦ Qi Ai Qi are polynomials and D i are power indexes of initial I i . Definition 4. For a given polynomial set PS , an ascending set AS is the characteristic set of PS , if it satisfies in which
zero of PS is Zero( PS ) Zero( AS ) ;
[1] any 2.
always
the remainders of polynomial
the
zero
of
AS
,
i.e.
Pi , i 1,2," , m with respect to
AS are all zero. We denote the characteristic set by CS . By this definition and the remainder formula, it is easy to see that the following relation between the zeros of polynomial set PS and its characteristic set CS is valid.
Zero(CS / J )
Zero( PS / J ) Zero( PS ) Zero(CS )
Where, J is a product of all initials of polynomials in CS . The Wu Mathematics-Mechanization Principle says that for any given polynomial set PS , the Wu-elimination method provides an algorithm such that the characteristic set CS of PS or a contradictory set can be obtained after a finite
410
Y. Luo, D. Li, X. Fan, L. Li and L. Liao
number of steps. This means that Zero(PS) has the following constructive decomposition:

Zero(PS) = Zero(CS/J) ∪ ⋃_i Zero(PS, I_i)
2.2.2 Irreducible Decomposition Arithmetic of the Triangularized Polynomial System

A triangularized polynomial system with multiple roots can be transformed into a triangularized system without multiple roots using the irreducible decomposition arithmetic of this section. The approach is to take an arbitrary ascending set

TS: {f_1(x_1), f_2(x_1, x_2), ..., f_{k+1}(x_1, x_2, ..., x_{k+1})}

and carry out the irreducible decomposition by recursion. Suppose

TS: {f_1(x_1), f_2(x_1, x_2), ..., f_k(x_1, x_2, ..., x_k)}

is irreducible and (ξ_1, ..., ξ_k) is a set of its zero points. Consider the extension field Q_k = Q(ξ_1, ξ_2, ..., ξ_k) and the polynomial

f_{k+1}(x_{k+1}) = subs({x_1 = ξ_1, ..., x_k = ξ_k}, f_{k+1}).

On the polynomial ring Q_k[x_{k+1}], f_{k+1}(x_{k+1}) has the following irreducible decomposition:

f_{k+1}(x_{k+1}) = p_1^{t_1}(x_{k+1}) p_2^{t_2}(x_{k+1}) ... p_s^{t_s}(x_{k+1})

where each p_i^{t_i} is irreducible on Q_k[x_{k+1}] and its coefficients are rational functions of ξ_1, ξ_2, ..., ξ_k. After reduction of the fractions to a common denominator and replacement of ξ_1, ξ_2, ..., ξ_k by x_1, x_2, ..., x_k, we get

f_{k+1}(x_{k+1}) = (p(x_1, ..., x_k)/q(x_1, ..., x_k)) p_1^{t_1}(x_1, ..., x_{k+1}) p_2^{t_2}(x_1, ..., x_{k+1}) ... p_s^{t_s}(x_1, ..., x_{k+1})

where p(x_1, ..., x_k)/q(x_1, ..., x_k) is a non-zero constant on Q_k. Supposing each p_i is reduced (if not, reduce it), the ascending set TS can then be decomposed into irreducible ascending sets, namely

IRR_i = {f_1, ..., f_k, p_i},  i = 1, 2, ..., s
If we decompose in this way with k running from 0 to n-1, one variable at a time, then, because the initial of an irreducible ascending set is non-zero, we can conclude:

Theorem 1. For an arbitrary ascending set

TS: {f_1(x_1), f_2(x_1, x_2), ..., f_n(x_1, x_2, ..., x_n)}
there is an algorithm that yields a set of irreducible ascending sets IRR_i, i = 1, 2, ..., s, after finitely many steps, and these ascending sets satisfy

Zero(f_1, f_2, ..., f_n) = ⋃_{i=1}^{s} Zero(IRR_i)
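The factorization over the extension field Q_k that drives this decomposition can be illustrated with a toy case in Python/SymPy (our own sketch, not the paper's Maple code; the polynomials are invented for illustration).

```python
# Toy illustration of the decomposition over an extension field Q_k:
# x2**2 - 2 is irreducible over Q but splits over Q(xi1), where xi1 is a
# root of f1 = x1**2 - 2, so the ascending set {x1**2 - 2, x2**2 - 2}
# decomposes into two irreducible ascending sets.
import sympy as sp

x2 = sp.symbols('x2')
f2 = x2**2 - 2

print(sp.factor(f2))                        # irreducible over Q: x2**2 - 2
print(sp.factor(f2, extension=sp.sqrt(2)))  # (x2 - sqrt(2))*(x2 + sqrt(2))
```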
3. 3-RPR Parallel Mechanism Synthesis
Define point A as the origin of the fixed axis system (FAS) and AB as the x axis; the coordinates of the three fixed hinge points in the FAS are A(0,0), B(b1,0), C(c1,c2). For convenience of symbolic manipulation, define the foot of the perpendicular from one vertex of the moving triangle as the origin of the body axis system (BAS), and the edge on which this foot lies as the x' axis, as shown in Figure 1. Let the coordinates of the BAS origin p in the FAS be p(x3, x4), and let the rotation angle of the BAS relative to the FAS be α.

Figure 1. 3-RPR parallel mechanism synthesis

Setting x1 = cos α and x2 = sin α, the coordinates of the three vertices of the moving triangle in the BAS are p1(-h1, 0), p2(h2, 0), p3(0, h3); their coordinates in the FAS are then

[v_{j1}  v_{j2}  1] = [p_{j1}  p_{j2}  1] M    (j = 1, 2, 3)

where

      |  x1   x2   0 |
M =   | -x2   x1   0 |
      |  x3   x4   1 |
Because

(v11 - a1)² + (v12 - a2)² - l1² = 0,
(v21 - b1)² + (v22 - b2)² - l2² = 0,
(v31 - c1)² + (v32 - c2)² - l3² = 0,
x1² + x2² - 1 = 0,

after expansion and simplification we obtain the following system of nonlinear equations:

f1 = s110 + s111·x1 + 2·c1·h·h3·x2 + (s1130 - 2·h·h3·x2)·x3 + (-2·c2·h + 2·h·h3·x1)·x4 = 0
f2 = x1² + x2² - 1 = 0
f3 = s130 - 2·b1·h1·h2·x1 - 2·b1·h1·x3 + h·x3² + h·x4² = 0
f4 = s140 - 2·b1·h2·x1 + (-2·b1 + 2·h·x1)·x3 + 2·h·x2·x4 = 0

where

s110 = c1²·h + c2²·h - b1²·h1 - h·h1² + h1³ - h1·h2² + h·h3² + h·l1² - h1·l1² + h1·l2² - h·l3²
s111 = 2·b1·h1·h2 - 2·c2·h·h3
s1130 = -2·c1·h + 2·b1·h1
s130 = b1²·h1 + h·h1² - h1³ + h1·h2² - h·l1² + h1·l1² - h1·l2²
s140 = b1² - h1² + h2² + l1² - l2²
h = h1 + h2

Write the foregoing system of f1, f2, f3, f4 as F = (f1, f2, f3, f4)^T. With the structural parameters of the mechanism known, substitute the data into the system F and solve for x1, x2, x3, x4. Because the coefficients of f1, f2, f3, f4 are complicated symbolic expressions, for the convenience of using mrealroot() the coefficients are transformed into rational numbers with the command convert([f1, f2, f3, f4], rational, kk), where kk is the number of significant digits of the rational approximation; different values of kk give different precision, so the factorization is realized with approximate precision. The results, listed in Table 1, agree with References [10-12]. The program is as follows.

restart;
read `PolySolve.m`;
h1:=42: h2:=10: h3:=40: a1:=0: b1:=44: b2:=0: c1:=0: c2:=30:
l1:=46: l2:=48: l3:=40:
h:=h1+h2:
s110:=c1^2*h+c2^2*h-b1^2*h1-h*h1^2+h1^3-h1*h2^2+h*h3^2+h*l1^2-h1*l1^2+h1*l2^2-h*l3^2:
s111:=2*b1*h1*h2-2*c2*h*h3:
s1130:=-2*c1*h+2*b1*h1:
s130:=b1^2*h1+h*h1^2-h1^3+h1*h2^2-h*l1^2+h1*l1^2-h1*l2^2:
s140:=b1^2-h1^2+h2^2+l1^2-l2^2:
f1:=s110+s111*x1+2*c1*h*h3*x2+(s1130-2*h*h3*x2)*x3+(-2*c2*h+2*h*h3*x1)*x4:
f2:=x1^2+x2^2-1:
f3:=s130-2*b1*h1*h2*x1-2*b1*h1*x3+h*x3^2+h*x4^2:
f4:=s140-2*b1*h2*x1+(-2*b1+2*h*x1)*x3+2*h*x2*x4:
F1:=[f1,f2,f3,f4]:
F:=convert(evalf(F1),rational,10);
vars:=[x1,x2,x3,x4]: c:=10^(-10):
X:=Mrealroot(F,vars,c,[]): X2:=evalf(X); X2:=evalf(X[1][1]);

This program can also be used for other mechanism problems; only the corresponding statements need to be changed.

Table 1. Six solutions to the given data
number      x1            x2            x3            x4
1          -0.7698959     0.6381694    13.4768670    22.6540017
2           0.6332748     0.773927     -9.0227313     3.3983008
3           0.1626644    -0.9866814    -5.1295925     2.9769719
4           0.9929708    -0.1183596    16.629520    -43.5357453
5           0.9671724     0.2541211    -1.2870408    29.6387391
6           0.5327276     0.8462868    67.9823088    29.5496070
4. Discussion and Conclusion
A real root isolation algorithm for systems with multiple real roots, based on symbolic computation and implemented in Maple 11.0, has been studied. The algorithm works entirely over the real numbers: it first finds isolation intervals for the real roots and then determines all real roots of the nonlinear equation system. The triangularization used inside the function mrealroot() is Wu's method; because the triangularization step is independent of the multi-real-root isolation, other triangularization methods could also be used. After triangularization, the mrealroot() function is used to find all real roots. The computational complexity of this algorithm is low, and it provides a new method for analyzing mechanism synthesis. The example in this paper is the synthesis of a 3-RPR parallel mechanism. The algorithm is simple to program, the analysis is intuitive, and the results are reliable. Since approximate factorization is used in the transformation of the functions, the method has good application prospects in many fields, such as mechanism synthesis and nonlinear equation solving.
5. Acknowledgement
This research was supported by grants from the 11th Five-Year Plan Key Academic Subject (Mechanical Design and Theory) of Hunan Province (XJT2006180) and from the Hunan Natural Science Foundation and Hunan Province Foundation Research Program (2007FJ3030, 2007GK3058). Thanks to Professor Lu Zhengyi and Luo Yong for their useful guidance.
6. References
[2] Zhang Jiyuan, Shen Shoufan, (1996) Computational theory of mechanism. Beijing: National Defence Industry Press.
[3] Zhang Jiyuan, Shen Shoufan, (1991) Interval analysis method to confirm the result of kinematics of mechanism. College Journal of Mechanical Engineering, 4:75-79.
[4] Wang Dongming, (2003) Some explanation of symbolic calculation. Beijing: Tsinghua University Press.
[5] Hunt KH, (1983) Structural kinematics of in-parallel-actuated robot-arms. ASME J. of Mech., Transmissions and Automation in Design, 105(4):705-712.
[6] Han Lin, Liao Qizheng, (1998) Wu method for forward displacement analysis of the planar parallel manipulator. MM Research Preprints, 16:153-157.
[7] Liu Huilin, Zhang Tongzhuang, Ding Hongsheng, (2000) Forward solutions of the 3-RPR planar parallel mechanism with Wu's method. Beijing Institute of Technology Press, 20(5):565-569.
[8] Xie Jin, Chen Yong, (2000) Application of chaos theory to synthesis of plane rigid guidance. Mechanical Science and Technology, 19(4):524-526.
[9] Xie Jin, Chen Yong, (2000) Chaos in the application of Newton-Raphson iteration to computational kinematics. J. of Mech. Trans., 24(1):4-6.
[10] Xie Jin, Chen Yong, (2002) A chaos-based approach to obtain the global real solutions of Burmester points. China Mechanical Engineering, 13(7):608-710.
[11] Luo Youxin, (2004) Forward solutions of the 3-RPR planar parallel mechanism with chaos solution method. Mech. Sci. and Technology, 19(4):520-524.
[12] He Zheming, Luo Youxin, (2005) Forward solutions of the 3-RPR planar parallel mechanism with a new solution method. Machine Tool & Hydraulics, 8:22-23.
[13] Luo Youxin, (2004) Forward solutions of the 3-RPR planar parallel mechanism with Wu's method. Journal of Hunan University of Arts and Science, 16(2):27-29.
[14] Lu Z., He B., Luo Y., Pan L., (2001) An algorithm of real root isolation for polynomial systems. MM Research Preprints, 20:187-198.
[15] Li Jie, (2004) Maple 9.0 symbolic disposal and application. Beijing: Science Press.
[16] Wu Wen-Tsün, (2000) Mathematics Mechanization. London: Kluwer Academic Publishers.
Experimental Measurements for Moisture Permeations and Thermal Resistances of Cyclo Olefin Copolymer Substrates Rong-Yuan Jou National Formosa University, Department of Mechanical Design Engineering 64 Wenhua Rd., Huwei, Yunlin, 632, Taiwan, R.O.C.
Abstract Plastic substrates for organic light-emitting devices (OLED) are extremely sensitive to moisture and oxygen. On the basis of its higher transparency, lower birefringence, lower dispersion and lower water absorption, a new amorphous engineering thermoplastic known as cyclic olefin copolymer (COC) has been used for this application. However, COC plastic substrates cannot sustain plasma-based processing temperatures of 350°C. This study focuses on two important topics: measurement of the permeation rate and of the thermal resistance of a SiO2 layer deposited on a COC substrate. Silicon dioxide layers of thickness 0.25 μm, 0.5 μm, and 1 μm are fabricated by PECVD. For the permeation rate measurement, the Ca-test method is adopted. For the thermal resistance measurements, both the thermocouple-in-atmosphere and the IR thermographic methods are adopted and the measured results are compared. Different surface temperatures, 323.15 K, 373.15 K, 408.15 K, and 473.15 K, are applied to the silicon dioxide film, and the temperature differences for the various silicon dioxide film thicknesses are measured. Experimental results are presented to investigate the moisture diffusion barrier and thermal barrier characteristics of the COC/SiO2 structure. Keywords: plastic substrate, cyclo olefin copolymer, permeation, thermal resistance, water vapor transmission rate (WVTR), IR thermography
1. Introduction
Organic light-emitting devices (OLED) are extremely sensitive to moisture and oxygen. Recently, OLEDs have been fabricated on plastic substrates to form flexible organic light-emitting diodes (FOLEDs) [1,2,3]. Yu et al. [1] developed a novel flexible OLED substrate by a simple annealing treatment of the cyclic olefin copolymer (COC). The use of flexible substrates will significantly reduce the weight of flat panel displays and provide the ability to conform, bend or roll a display into any shape. However, one major drawback of plastic substrates for flexible OLEDs is their high permeation rate of
water vapor. To achieve an operating lifetime in excess of a few tens of hours, however, isolation of the OLEDs from atmospheric H2O is necessary. Recently, a new amorphous engineering thermoplastic known as cyclic olefin copolymer (COC) has been used in many kinds of optical, electrical and mechanical applications, because this plastic has high transparency, low birefringence, low dispersion and low water absorption. Its chemical structure is shown in Fig. 1. The cyclo olefin copolymer (COC) is amorphous and possesses a high Tg (max. 220°C) [4]. The cyclo olefin monomer can undergo addition polymerization with a suitable metallocene catalyst. In addition, COC also possesses high transparency, high Tg, low moisture absorption, and a low dielectric constant. Properties of a typical COC product of Mitsui Chemicals Inc. (APEL®) can be found in [4]. Regarding the moisture permeation rate, Saitoh et al. [5] investigated the moisture permeation rate of thin Cat-CVD SiNx single layers prepared on polymer substrates, and estimated the coverage ratio on these polymer substrates and the intrinsic moisture barrier ability of this SiNx film with a thickness of 50 nm. Ghosh et al. [6] reported the results of a robust thin-film encapsulation method that utilizes a layer of aluminum oxide deposited by an atomic layer deposition (ALD) process as the primary moisture barrier; more than 1000 h of testing at 85°C and 85% RH was observed without significant device degradation caused by moisture. Carcia et al. [7] used quantitative Ca tests to determine the water vapor transmission rate (WVTR) through 25 nm thick Al2O3 gas diffusion barriers grown on plastic by atomic layer deposition (ALD). Groner et al. [8] investigated thin films of Al2O3 grown by ALD as gas diffusion barriers on flexible polyethylene naphthalate and Kapton® polyimide substrates. Langereis et al. [9] deposited thin Al2O3 films of different thicknesses (10-40 nm) by plasma-assisted atomic layer deposition on substrates of poly(2,6-ethylenenaphthalate) (PEN), and measured the water vapor transmission rate (WVTR) values by means of the calcium test.

The thermal conductivity of a thin film is known to differ from that of the bulk material, owing to the scattering of phonons at the boundary with the substrate; it depends on the film thickness as well as the fabrication method [10]. The thermal conduction issue should be worse in porous materials than in condensed materials. If the distribution of pores in the porous material is uniform, the dielectric constant is known to decrease approximately linearly with porosity [11]. The recently discovered phenomenon of extremely low thermal conductivity of nano-porous silicon (nanoPS) has been used to provide a barrier layer for flexible plastic substrates [12]. When porous silicon is sintered, its sintered state appears to be particularly important for applications, since sintering is unavoidable if porous silicon is processed at temperatures above 350°C. Free-standing single-crystal thin-film samples with porosities between 27% and 66% have been analyzed [13]; the observed thermal conductivities range from 21 to 2.3 W/(m·K) and decrease with increasing porosity. Wuu et al. [14] investigated the characterization of silicon oxide (SiO2) films on polyethersulfone (PES) substrates deposited by plasma-enhanced chemical vapor deposition for transparent barrier applications. Low-pressure
microwave plasma has also been used to incorporate new functionalities onto the surface of cyclic olefin copolymers (COC) [15]. This study focuses on two important measurements: the permeation rate and the thermal resistance of a SiO2 thin-film layer deposited on the COC substrate. For the permeation rate measurement, the Ca-test method is adopted. For the thermal resistance measurements, two methods, the thermocouple in atmosphere and IR thermography, are adopted and the measured results are compared. The target of this study is to investigate the moisture diffusion barrier and thermal barrier characteristics of the COC/SiO2 structure.
2. Experimental Apparatus
In this study, moisture permeation rate tests and thermal resistance experiments are conducted to explore the moisture diffusion barrier and thermal barrier characteristics of a COC substrate with a SiO2 thin film deposited on it. For the permeation rate measurement, the Ca-test method is adopted. For the thermal resistance measurements, both the thermocouple-in-atmosphere and the IR thermographic methods are adopted. Detailed descriptions of the experimental apparatus and procedures follow.

2.1 Sample Preparation
The substrate material used in this study is the cyclo olefin copolymer (COC) APEL APL5014DP, whose glass transition temperature is 135°C. The particles are baked in an oven at 90°C for a period of time, then hot-embossed on a machine into a flat sheet 200 μm in thickness and annealed in a vacuum oven to achieve the high transparency property. After that, the sheet is dipped in an ultrasonic ethanol bath for one minute, rinsed with deionized water, and dried with dry nitrogen to remove the particles and organics remaining on the substrate surface. In this study, the silicon dioxide film is deposited on the COC substrate by PECVD. The process gases are TMS (10 SCCM) and O2 (199.7 SCCM). Processing conditions are a vacuum pressure of 300 mTorr and an RF power of 50 W. As shown in Fig. 1, three different thicknesses of silicon dioxide film, 0.25 μm, 0.5 μm, and 1 μm, are fabricated on top of the COC substrate; the processing times are approximately 5 min, 11 min, and 22 min, respectively. The surface morphologies of the test specimens are measured by atomic force microscopy, and the roughness Ra is approximately 2.2 nm before deposition and 7.8 nm after deposition.
Figure 1. Schematic of the COC substrate with different thicknesses of SiO2 thin films
2.2 Moisture Permeation Measurement

For the moisture permeation experiment, a fixture was designed and manufactured to conduct water vapor transmission rate (WVTR) experiments. The whole testing fixture is placed inside a temperature- and humidity-controlled testing chamber (JIA-605W), and the specimens of the COC/SiO2 structure are sealed with silicone into the cavity of the fixture. Testing conditions are set to three humidity levels, 60%RH, 75%RH, and 90%RH, in a 60°C environment, and the testing time is 24 hrs. Inside the fixture cavity, CaCl2 is used to absorb the moisture that permeates through the specimen into the cavity. The flux is then obtained from the weight increment due to the absorbed vapor. The water vapor transmission rate (WVTR) is easily calculated by

P = (ΔW / A) × (24 hr/day) × (10^4 cm²/m²)    (1)

where P is in g/(day·m²), ΔW (g/hr) is the weight increment of the CaCl2, and A (cm²) is the area of the specimen.
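Eq. (1) can be illustrated with a short numerical sketch; the specimen area and CaCl2 weight gain below are invented example values, not measurements from this study.

```python
# Numerical illustration of Eq. (1): converting a measured CaCl2 weight
# gain into a WVTR in g/(day*m^2). All input numbers are invented.

def wvtr(delta_w_g_per_hr, area_cm2):
    """Eq. (1): P = (dW / A) * (24 hr/day) * (1e4 cm^2/m^2)."""
    return delta_w_g_per_hr / area_cm2 * 24.0 * 1.0e4

# Example: 0.5 mg gained over a 24-hour test on a 25 cm^2 specimen.
dW = 0.0005 / 24.0        # g/hr
print(wvtr(dW, 25.0))     # ~0.2 g/(day*m^2)
```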
2.3 Thermal Resistance Measurement
For the heat conduction measurements, both thermocouples (contact type) and IR thermography (non-contact type) are adopted to measure the COC surface temperature when different surface temperatures, 323.15 K, 373.15 K, 408.15 K, and 473.15 K, are applied to the silicon dioxide surface, and the effects of varied silicon dioxide film thickness are explored. By Fourier's law, the total resistance Rtot of the structure composed of the silicon dioxide film and the COC substrate equals the sum of the internal resistances [16] of the two layers of the structure, which is expressed by

Rtot = ΔT/q = Δx_A/(k_A·A) + Δx_B/(k_B·A)    (2)
where q is the heat transfer rate, ΔT is the temperature difference between the COC substrate and the SiO2 thin film, Δx_A and Δx_B are the thicknesses of the silicon dioxide film and the COC substrate, respectively, k_A is the thermal conductivity of the silicon dioxide film, and k_B is the thermal conductivity of the COC substrate. The set-up and procedures of the temperature experiments are described below.
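Eq. (2) can be sketched numerically as follows. The conductivities and area are nominal assumed values chosen for illustration only (bulk fused-silica and a typical amorphous-polymer value; thin-film conductivities differ from bulk, as noted in the Introduction).

```python
# Sketch of Eq. (2): series conduction resistances of the SiO2/COC stack.
# All numbers below are assumed nominal values, not data from this study.

def layer_resistance(thickness_m, k, area_m2):
    """One term of Eq. (2): dx / (k * A), in K/W."""
    return thickness_m / (k * area_m2)

A = 1.0e-4                       # assumed 1 cm^2 specimen area
dx_a, k_a = 1.0e-6, 1.4          # 1 um SiO2 film, ~bulk fused silica (W/m-K)
dx_b, k_b = 200.0e-6, 0.13       # 200 um COC substrate, assumed polymer value

R_tot = layer_resistance(dx_a, k_a, A) + layer_resistance(dx_b, k_b, A)
share = layer_resistance(dx_a, k_a, A) / R_tot
print(R_tot)   # total resistance, K/W
print(share)   # SiO2 film's (small) share of the total temperature drop
```

With these assumed values the thin SiO2 film contributes well under 0.1% of the total resistance, which is consistent with the small temperature differences reported in Section 3.2.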
2.3.1 Thermocouple in Atmosphere
As shown in Fig. 2, temperature measurements by thermocouples in atmosphere are made on the test specimen assembly, which consists of three major parts: the COC/SiO2 specimen, a copper heat spreader, and a thermal-insulating bakelite block. To measure the temperature response of the COC/SiO2 specimen in atmosphere subjected to constant heating power, and to control this heating power, a DC power supply (GPC-6030D) is used in this experiment. Temperatures are
measured by T-type thermocouples, and the signals are transmitted to a data logger. The measurement procedure is outlined as follows: 1. Adjust the PID parameters of the temperature controller to keep the deviation within ±0.1°C; 2. Set the temperature in the controller and wait for the temperature to stabilize; 3. Record the COC/SiO2 temperature every second over a 10-min measurement duration.
Figure 2. Schematic of the thermocouple measurement in atmosphere
2.3.2 IR Thermography

The measurement of the full-plane temperature distribution in heat transfer applications is based on infrared thermography; this technique has been applied in many different works and offers many advantages. First, the system is not intrusive, and the apparatus set-up time is not expensive, particularly with respect to typical measurement methods such as naphthalene sublimation. The principle of IR thermography is the measurement of the amount of IR radiation absorbed (or emitted) by a sample as a function of the wavelength. The IR measurement can be carried out in transmission or reflectance mode, the first being the most popular. Fig. 3 shows a schematic of the experimental setup. The COC/SiO2 structure is placed in an enclosure, the heating power is supplied, and the camera is installed above. The micro-bolometer IR camera used is a NEC Thermo Tracer TH9100WL, which records 60 frames per second at 320x240 resolution with two operating ranges, -40°C to 120°C and 0°C to 500°C, and a resolution of ±0.2°C. The detection wavelength is in the range 8-14 μm. The focusing range is from 30 cm to infinity. The emissivity of COC is calibrated by a standard procedure. An IEEE 1394 interface connects the camera to a PC for image storage. The measurement procedure is: 1. Adjust the PID parameters of the temperature controller to keep the deviation within ±0.1°C; 2. Calibrate the emissivity of COC; 3. Set the temperature in the controller and wait for the temperature to stabilize; 4. Record the COC thermographic images every five seconds over a 10-min measurement duration;
5. Load the thermographic images into a PC and process them with image-processing software to find the averaged temperature.
Figure 3. Schematic of the temperature measurements by IR thermography
2.4 Measurement Uncertainty

The uncertainty of the JIA-605W chamber is ±0.1°C for temperature control and ±0.1%RH for humidity. The calculated uncertainty of the WVTR is ±2.1%. The uncertainty of the IR camera temperature measurement is ±0.2°C; this includes both bias and precision error. The uncertainty of all thermocouple measurements is ±0.1°C. The maximum and minimum uncertainties are calculated according to the procedure described by Kline and McClintock [17]. The estimated uncertainty in temperature measurement for the isothermal boundary condition case is ±1.5% using the thermocouple in atmosphere and ±3.9% using the IR camera. The experimental uncertainty was established by calculating the standard deviation of five sets of data.
3. Results and Discussion

3.1 Moisture Permeation Measurement
For the moisture permeation experiments, as shown in Table 1, the measured WVTR of the bare COC substrate declines from 2.77 g/day-m² to 0.848 g/day-m² as the relative humidity decreases from 90%RH to 60%RH. After a silicon dioxide layer of thickness 0.25 μm, 0.5 μm, or 1 μm is deposited by the PECVD process, the measured WVTR at 60°C and 90%RH decreases from 0.858 g/day-m² to 0.386 g/day-m². Under the conditions of 60°C and 75%RH, the measured WVTR decreases from 0.549 g/day-m² to 0.318 g/day-m², and under 60°C and 60%RH it decreases from 0.263 g/day-m² to 0.199 g/day-m². At 60°C and 90%RH, the WVTR of a glass substrate is 1.7 g/day-m², and relative to this the reductions in WVTR for SiO2 film thicknesses of 0.25 μm, 0.5 μm, and 1 μm are 49.5%, 67.8%, and 77.3%, respectively.
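The quoted reduction percentages can be reproduced directly from Table 1 and the glass-substrate value; a quick arithmetic check:

```python
# Reproducing the quoted WVTR reductions at 60 C / 90%RH, taken relative
# to the glass-substrate value of 1.7 g/day-m2 stated in the text.
glass = 1.7
for label, wvtr in [("0.25 um", 0.858), ("0.5 um", 0.547), ("1 um", 0.386)]:
    print(label, round((1 - wvtr / glass) * 100, 1), "%")
# prints 49.5, 67.8 and 77.3 %, matching the text
```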
Table 1. Permeability of COC/SiO2 structures under different testing conditions

Permeability (g/day·m²)    60%RH    75%RH    90%RH
COC substrate              0.848    2.186    2.771
COC/SiO2 0.25 μm           0.263    0.549    0.858
COC/SiO2 0.5 μm            0.201    0.354    0.547
COC/SiO2 1 μm              0.199    0.318    0.386

3.2 Thermal Resistance Measurement
For the heat conduction measurements, two temperature measurement methodologies, the thermocouple in atmosphere and IR thermography, are adopted to measure the COC surface temperature when different surface temperatures, 323.15 K, 373.15 K, 408.15 K, and 473.15 K, are applied to the silicon dioxide surface, and the effects of varied silicon dioxide film thickness are explored. The temperatures measured on top of the COC substrate are then fitted to straight lines, with different slopes for the two methodologies.

[Figures 4 and 5 plot the measured COC top-surface temperature T2 against the SiO2 thickness t (0-1 μm). Reported fitted lines: at T1 = 323.15 K, T2 = -1.32·t + 321.83 (TC) and T2 = -1.76·t + 322.63 (IR); at T1 = 373.15 K, T2 = -2.42·t + 371.69 (TC) and T2 = -3.98·t + 372.59 (IR).]

Figure 4. Surface temperature measurements at the constant temperature of 323.15K. Figure 5. Surface temperature measurements at the constant temperature of 373.15K
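The straight-line interpolation used in Figs. 4-7 is an ordinary least-squares fit. A sketch follows (our own, with sample points generated from the reported 323.15 K thermocouple fit rather than raw measured data):

```python
# Ordinary least-squares straight-line fit, as used for Figs. 4-7. The data
# points are synthetic, generated from the reported 323.15 K thermocouple
# fit T2 = -1.32*t + 321.83, not raw measured data.

def linear_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line ys ~ slope*xs + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

thickness = [0.0, 0.25, 0.5, 1.0]              # SiO2 thickness, um
T2 = [321.83 - 1.32 * t for t in thickness]    # K, synthetic points
slope, intercept = linear_fit(thickness, T2)
print(slope, intercept)    # recovers ~(-1.32, 321.83)
```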
Considering the case of an applied constant temperature of 323.15 K, as shown in Fig. 4, square and circular symbols represent the temperatures measured by the two different measurement technologies, and the fitted straight lines are also drawn. The largest temperature differences T1-T2 (for the 1 μm SiO2 film) for the two technologies, thermocouple in atmosphere and IR thermography, are 1.32 K and 1.76 K, respectively, and these values equal the slopes of the fitted straight lines. The temperature difference is proportional to the thickness of the SiO2 in a linear manner for both measurement technologies. Also, the temperatures measured by the thermocouple-in-atmosphere method are lower than those measured by the IR thermographic
method in all cases. As shown in Fig. 5, in the case of a constant applied temperature of 373.15 K, the largest temperature differences T1-T2 (for the 1 μm SiO2 film) for these two technologies are 2.43 K and 3.98 K, respectively, and these values equal the slopes of the fitted straight lines. The temperature difference is proportional to the thickness of the SiO2 in a linear manner, but larger deviations between the two methods become noticeable as the film thickness increases. It can be noted that a transition in slope is noticeable and that the temperature difference measured by IR thermography is higher than that measured by the thermocouple-in-atmosphere technique. This may be attributed to the fact that, owing to the appreciable thickness effect, heat loss through the side walls and optical reflection and refraction increase as the film thickness increases. As shown in Fig. 6, in the case of a constant applied temperature of 408.15 K, the largest temperature differences T1-T2 (for the 1 μm SiO2 film) for these two technologies are 2.66 K and 4.96 K, respectively. A transition in slope is noticeable, as in the 373.15 K test, and this temperature, 408.15 K, is close to the glass transition temperature of COC. As before, the temperature difference measured by IR thermography is higher than that measured by the thermocouple in atmosphere. Finally, as shown in Fig. 7, in the case of a constant applied temperature of 473.15 K, the largest temperature differences T1-T2 (for the 1 μm SiO2 film) for these two technologies are 1.84 K and 3.16 K, respectively. Smaller temperature deviations than in the 408.15 K case are noticeable.
[Figures 6 and 7 plot the measured COC top-surface temperature T2 against the SiO2 thickness t (0-1 μm). Reported fitted lines: at T1 = 408.15 K, T2 = -2.66·t + 407.63 (TC) and T2 = -4.96·t + 408.54 (IR); at T1 = 473.15 K, T2 = -1.84·t + 471.70 (TC) and T2 = -3.16·t + 471.55 (IR).]

Figure 6. Surface temperature measurements at the constant temperature of 408.15K. Figure 7. Surface temperature measurements at the constant temperature of 473.15K
Moreover, it can be noted that the temperatures measured by IR thermography are lower than those measured by the thermocouple-in-atmosphere technique. This may be attributed to the fact that, because the applied temperature loading is higher than the glass transition temperature of 408.15 K, the material properties and molecular structure of the COC may change in unpredictable ways, which makes this measurement unpredictable.
4. Conclusions
This study has focused on two important topics: measurement of the permeation rate and of the thermal resistance of a SiO2 layer deposited on a COC substrate. Silicon dioxide layers of thickness 0.25 μm, 0.5 μm, and 1 μm were fabricated by PECVD. For the permeation rate measurement, the Ca-test method was adopted. For the thermal resistance measurements, two methods, the thermocouple in atmosphere and IR thermography, were adopted and the measured results compared. Different surface temperatures, 323.15 K, 373.15 K, 408.15 K, and 473.15 K, were applied to the silicon dioxide film, and the temperature differences for the various silicon dioxide film thicknesses were measured. The experimental results characterize the moisture diffusion barrier and thermal barrier behavior of the COC/SiO2 structure. For the moisture permeation experiments, the measured permeation of the bare COC substrate is 2.77 g/day-m². After deposition of a silicon dioxide layer of thickness 0.25 μm, 0.5 μm, or 1 μm by the PECVD process, the measured permeation decreases to 0.858~0.386 g/day-m² (at 60°C, 90%RH). Compared with the permeation of a glass substrate, 1.7 g/day-m², the reduction in permeability ranges from 49.5% to 77.3%. For the heat conduction measurements, the SiO2 film lowers the measured surface temperature by up to 1% (at 373.15 K) for the thermocouple-in-atmosphere measurement and up to 1.2% (at 373.15 K) for the IR thermographic measurement when the film thickness is increased to 1 μm. According to these measurements, the thermal insulation and moisture barrier characteristics of a COC substrate are enhanced by deposition of a silicon dioxide film.
5. References
[1] Yu HH, Hwang SJ, Hwang KC, (2005) Preparation and characterization of a novel flexible substrate for OLED. Optics Communications 248:51–57. [2] Burrows PE, and Forrest SR, (1994) Appl. Phys. Lett. 64:2285. [3] Huang WJ, Chang FC, Chu PP, (2000) Polymer 41:6095. [4] Information on http://www.mitsui-chem.co.jp (Mitsui Chemicals, Inc. Polymers Laboratory, “The Characteristics of Cyclo olefin Copolymer and Its Application”) [5] Saitoh K, Kumar RS, Chua S, Masuda A, Matsumur H, (2007) Estimation of moisture barrier ability of thin SiNx single layer on polymer substrates prepared by cat-CVD method. Thin Solid Films (in press). [6] Ghosh AP, Gerenser LJ, Jarman CM, Fornalik JE, (2005) Thin-film encapsulation of organic light-emitting devices. Appl. Phys. Lett. 86: 223503. [7] Carcia PF, McLean RS, Reilly MH, Groner MD, George SM, (2006) Ca test of Al2O3 gas diffusion barriers grown by atomic layer deposition on polymers. Appl. Phys. Lett. 89:031915. [8] Groner MD, George SM, McLean RS, Carcia PF, (2006) Gas diffusion barriers on polymers using Al2O3 atomic layer deposition. Appl. Phys. Lett. 88:051907.
R.Y. Jou
[9] Langereis E, Creatore M, Heil SBS, van de Sanden MCM, Kessels WMM, (2006) Plasma-assisted atomic layer deposition of Al2O3 moisture permeation barriers on polymers. Appl. Phys. Lett. 89:081915.
[10] Hu C, Morgen M, Ho PS, Jain A, Gill WN, Plawsky JL, Wayner Jr. PC, (2000) Appl. Phys. Lett. 77:145.
[11] Moon S, Hatano M, Lee M, Grigoropoulos CP, (2002) Int. J. Heat Mass Transfer 45:2439.
[12] Lysenko V, Roussel PH, Remaki B, Delhomme G, Dittmar A, Barbier D, (2000) J. Porous Mater. 7:177.
[13] Wolf A, Brendel R, (2006) Thin Solid Films 513:385.
[14] Wuu DS, Lo WC, Chang LS, Horng RH, (2004) Thin Solid Films 468(1-2):105.
[15] Nikolova D, Dayss E, Leps G, Wutzlers A, (2004) Surface and Interface Analysis 36:689.
[16] Incropera FP, Dewitt DP, (2002) Fundamentals of Heat and Mass Transfer, 5th Ed. John Wiley & Sons, Inc.
[17] Kline SJ, McClintock FA, (1953) Describing Uncertainties in Single-Sample Experiments. Mechanical Engineering, Jan.:3–8.
Novel Generalized Compatibility Plate Elements Based on Quadrilateral Area Coordinates
Qiang Liu¹, Lan Kang², Feng Ruan¹
¹ School of Mechanical Engineering, South China University of Technology, Guangzhou 510641, P.R. China
² Department of Building Engineering, Tongji University, Shanghai 200092, P.R. China
Abstract In order to overcome the sensitivity to mesh distortion of elements formulated in isoparametric coordinates, some new generalized compatibility plate elements with new trial functions are built based on area coordinates. Not only is a series of new plate elements, AQP, built based on the area coordinates of a quadrilateral, but a general construction method for thin plate elements is also summarized. The examples show that the elements exhibit higher efficiency and accuracy than comparable elements; in addition, the construction method is more universal and easier to program. Keywords: area coordinates; thin plate element; generalized compatibility.
1. Introduction
The present review deals with quadrilateral elements which satisfy the thin-plate assumptions and have 3 DOF at the corner nodes: the two rotations $\theta_x$ and $\theta_y$ and the transverse displacement w. This class of bending elements is based on the discrete Kirchhoff (DK) hypothesis, which entails enforcement of zero transverse shear strain at certain points of the element. In order to satisfy the Kirchhoff assumption, Batoz [1] derived a 4-node quadrilateral element called DKQ, combining a cubic displacement function w with quadratic polynomials for the rotations of the normal. Note that x and y are axes in the plane of the plate, and $\theta_x$ and $\theta_y$ are the derivatives of w with respect to x and y. The shape functions for $\theta$ are those of the 8-node serendipity element termed QUAD8 [2]. It is noted that w varies independently along the element edges and is not defined in the interior of the element; the Kirchhoff assumptions are satisfied on the element boundaries only. The QUAD4 developed by McNeal [3] is an isoparametric 12-DOF quadrilateral element including transverse shear deformations based on the Mindlin/Reissner (M/R) hypothesis, in which w and the normal rotations $\theta_x$ and $\theta_y$ are independent. Chinniah [4] combined discrete Kirchhoff theory and a least-squares technique to obtain an improved discrete Kirchhoff quadrilateral element, QUAD9.

The above elements are expressed in terms of isoparametric coordinates and are sensitive to mesh distortion. In view of these shortcomings, a new quadrilateral form of area coordinates was proposed by Yuqiu Long [5, 6]. The transformation between the area coordinates and the Cartesian coordinates is always linear; thus the order of the displacement field expressed in area coordinates does not vary with mesh distortion. Several new quadrilateral thin-plate bending elements [7, 8, 9] with 12 DOF have been developed using area coordinates, the typical representative being ACGCQ [8]. However, the construction method of these elements lacks generality and the trial functions are often very complicated. In this paper, not only is a new series of 4-node, 12-DOF thin-plate quadrilateral elements, AQP, successfully developed using the quadrilateral area-coordinate method and generalized compatibility theory, but a general construction method for thin-plate elements is also summarized. The examples of a square plate and a circular plate show that the elements demonstrate higher efficiency and accuracy than comparable elements, and that the construction method is more general and easier to program.
2. Derivation of AQP Elements
The node and edge descriptions are as follows. The rotations at a node in the xOy plane and the tangent and normal directions of each edge are shown in Figure 1, with

$$\theta_x = w_{,x} = \frac{\partial w}{\partial x}, \qquad \theta_y = w_{,y} = \frac{\partial w}{\partial y}.$$

As shown in Figure 1, $n_i$ and $s_i$ denote respectively the normal and tangent directions of edge i of the quadrilateral element. The length of each edge can be calculated as

$$d_i = \sqrt{b_i^2 + c_i^2}, \qquad b_i = y_j - y_k, \quad c_i = x_k - x_j, \quad a_i = x_j y_k - y_j x_k,$$

where $(i, j, k, m)$ runs over the cyclic permutations of $(1, 2, 3, 4)$. The area coordinates of any point can be calculated as:

$$L_i = \frac{1}{2A}\,(a_i + b_i x + c_i y) \tag{1}$$

where A is the area of the quadrilateral, and $a_i, b_i, c_i$ can be obtained from the nodal coordinates.
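As a quick numerical check of Eq. (1) and the edge quantities (an illustrative sketch in Python, not part of the paper; the quadrilateral is arbitrary), one can verify that $d_i$ equals the length of edge jk and that the four area coordinates of an interior point sum to one:

```python
import numpy as np

# Nodes of an arbitrary convex quadrilateral, numbered counter-clockwise.
P = np.array([[0.0, 0.0], [2.0, 0.0], [3.0, 2.0], [0.0, 1.0]])

# Quadrilateral area A by the shoelace formula.
x, y = P[:, 0], P[:, 1]
A = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def edge_coeffs(i):
    """a_i, b_i, c_i, d_i for cyclic (i, j, k, m) = (i, i+1, i+2, i+3)."""
    j, k = (i + 1) % 4, (i + 2) % 4
    b = P[j, 1] - P[k, 1]
    c = P[k, 0] - P[j, 0]
    a = P[j, 0] * P[k, 1] - P[j, 1] * P[k, 0]
    return a, b, c, np.hypot(b, c)

def area_coords(pt):
    """Eq. (1): L_i = (a_i + b_i x + c_i y) / (2A)."""
    return np.array([(a + b * pt[0] + c * pt[1]) / (2 * A)
                     for a, b, c, _ in (edge_coeffs(i) for i in range(4))])

L = area_coords(np.array([1.0, 1.0]))
print(L, L.sum())   # the four area coordinates sum to 1 for any point
```

Each $L_i$ is the area of the triangle formed by the point and edge jk divided by A; the four such triangles tile the quadrilateral, which is why the coordinates sum to one.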
[Figure: quadrilateral element with nodes 1-4, edge normals n1-n4, edge tangents s1-s4 and nodal rotations θx1-θx4, θy1-θy4; points 5-8 are the centers of the edges and points 9-16 are the Gauss points of the edges.]
Figure 1. Normal and tangent direction of the quadrilateral element edge
2.1 Determination of the Number of Degrees of Freedom and the Trial Function of the Elements

The displacement of any point in the element can be expressed as:

$$w(L_1, L_2, L_3, L_4) = \mathbf{N}\,\boldsymbol{\delta}^e = \mathbf{F}\,\mathbf{a}\,\boldsymbol{\delta}^e \tag{2}$$

where $\mathbf{N} = [\,\mathbf{N}_1\ \mathbf{N}_2\ \mathbf{N}_3\ \mathbf{N}_4\,]$ is the shape function, with $\mathbf{N}_i = [\,N_i\ N_{xi}\ N_{yi}\,]\ (i = 1, \dots, 4)$; $\mathbf{F}$ is defined as the trial function, which is listed in Table 1; $\mathbf{a}$ is a $12 \times 12$ matrix, defined as the characteristic matrix of nodal displacement, which shows the relation between the trial function and the shape function of the generalized displacement; and $\boldsymbol{\delta}^e = [\,\boldsymbol{\delta}_1^T\ \boldsymbol{\delta}_2^T\ \boldsymbol{\delta}_3^T\ \boldsymbol{\delta}_4^T\,]^T$ is the nodal displacement, with $\boldsymbol{\delta}_i = [\,w_i\ \theta_{xi}\ \theta_{yi}\,]^T\ (i = 1, \dots, 4)$.
2.2 The Relation between Generalized Displacement, Nodal Displacement and the Displacement at Any Point

In this research, generalized displacement compatibility conditions are adopted. In order to transfer the nodal displacements into generalized displacements, two matrices c and G are defined. The generalized displacement compatibility conditions can be expressed as:

$$w(L_1, L_2, L_3, L_4) = \mathbf{F}\,\mathbf{c}\,\mathbf{d} \tag{3}$$

where $\mathbf{c}$ is defined as the characteristic matrix of generalized displacement, showing the relation between the trial function and the shape function of the generalized displacement, and $\mathbf{d} = [\,d_1\ \cdots\ d_{12}\,]^T$ is the generalized displacement.
Then Eq. (3) can be used to express the generalized displacement itself:

$$\mathbf{d} = \mathbf{C}\,\mathbf{c}\,\mathbf{d} \tag{4}$$

so $\mathbf{c} = \mathbf{C}^{-1}$, where $\mathbf{C}$ is a $12 \times 12$ matrix showing the relation between the shape function of the generalized displacement and the trial function. Suppose

$$\mathbf{d} = \mathbf{G}\,\boldsymbol{\delta}^e \tag{5}$$

where $\mathbf{G}$ is a $12 \times 12$ matrix, called the transfer matrix, which shows the relation between the generalized displacement and the nodal displacement. Substituting Eq. (5) into Eq. (4) and comparing with Eq. (2), we obtain:

$$\mathbf{a} = \mathbf{c}\,\mathbf{G} = \mathbf{C}^{-1}\mathbf{G} \tag{6}$$

2.3 Definition of the Element Stiffness Matrix
The following can be obtained from thin plate theory [9]:

$$\mathbf{K}^e = \iint_A \mathbf{B}^T \mathbf{D}\,\mathbf{B}\,dA = \int_{-1}^{1}\!\int_{-1}^{1} \mathbf{B}^T \mathbf{D}\,\mathbf{B}\,|J|\,d\xi\,d\eta \tag{7}$$

where

$$\mathbf{B}^T = \nabla^2\mathbf{F}^T\,\frac{1}{4A^2}\,[\,\mathbf{T}_1\ \mathbf{T}_2\ \mathbf{T}_3\,]$$

with

$$[\,\mathbf{T}_1\ \mathbf{T}_2\ \mathbf{T}_3\,]^T = \begin{bmatrix}
b_1^2 & b_2^2 & b_3^2 & b_4^2 & 2b_1b_2 & 2b_2b_3 & 2b_3b_4 & 2b_4b_1 & 2b_1b_3 & 2b_2b_4 \\
c_1^2 & c_2^2 & c_3^2 & c_4^2 & 2c_1c_2 & 2c_2c_3 & 2c_3c_4 & 2c_4c_1 & 2c_1c_3 & 2c_2c_4 \\
2b_1c_1 & 2b_2c_2 & 2b_3c_3 & 2b_4c_4 & 2(b_1c_2{+}b_2c_1) & 2(b_2c_3{+}b_3c_2) & 2(b_3c_4{+}b_4c_3) & 2(b_4c_1{+}b_1c_4) & 2(b_1c_3{+}b_3c_1) & 2(b_2c_4{+}b_4c_2)
\end{bmatrix}$$

$$\nabla^2\mathbf{F}(L_1, L_2, L_3, L_4) = \left[\frac{\partial^2\mathbf{F}}{\partial L_1^2}\ \frac{\partial^2\mathbf{F}}{\partial L_2^2}\ \frac{\partial^2\mathbf{F}}{\partial L_3^2}\ \frac{\partial^2\mathbf{F}}{\partial L_4^2}\ \frac{\partial^2\mathbf{F}}{\partial L_1\partial L_2}\ \frac{\partial^2\mathbf{F}}{\partial L_2\partial L_3}\ \frac{\partial^2\mathbf{F}}{\partial L_3\partial L_4}\ \frac{\partial^2\mathbf{F}}{\partial L_4\partial L_1}\ \frac{\partial^2\mathbf{F}}{\partial L_1\partial L_3}\ \frac{\partial^2\mathbf{F}}{\partial L_2\partial L_4}\right]^T$$

Eq. (7) is evaluated using 3-point Gauss integration [9].
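The quadrature in Eq. (7) maps the element to the reference square and applies a 3×3 Gauss rule with the Jacobian determinant. The machinery can be sketched as follows (a simple integrand stands in for $\mathbf{B}^T\mathbf{D}\mathbf{B}$, which is omitted; the quadrilateral is illustrative):

```python
import numpy as np

# 3-point Gauss-Legendre rule on [-1, 1].
gp, gw = np.polynomial.legendre.leggauss(3)

# A quadrilateral and the bilinear map from the reference square onto it.
nodes = np.array([[0.0, 0.0], [2.0, 0.0], [3.0, 2.0], [0.0, 1.0]])

def shape(xi, eta):
    return 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                            (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])

def dshape(xi, eta):
    # Rows: d/dxi and d/deta of the four bilinear shape functions.
    return 0.25 * np.array([[-(1 - eta), (1 - eta), (1 + eta), -(1 + eta)],
                            [-(1 - xi), -(1 + xi), (1 + xi), (1 - xi)]])

def integrate(f):
    """3x3 Gauss approximation of the integral of f(x, y), Eq. (7) style."""
    total = 0.0
    for i, xi in enumerate(gp):
        for j, eta in enumerate(gp):
            J = dshape(xi, eta) @ nodes          # 2x2 Jacobian of the map
            x, y = shape(xi, eta) @ nodes
            total += gw[i] * gw[j] * f(x, y) * abs(np.linalg.det(J))
    return total

area = integrate(lambda x, y: 1.0)
print(area)   # equals the shoelace area of the quadrilateral, 3.5
```

Because the Jacobian determinant of a bilinear map is linear in ξ and η, a 3-point rule integrates it exactly, so the constant integrand recovers the element area exactly.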
2.4 Definition of the Generalized Displacement Vector and the G Matrix
The normal derivative on each edge can be calculated from quadrilateral area-coordinate theory, with $(i, j, k, m)$ the cyclic permutations of $(1, 2, 3, 4)$. The normal and tangent derivatives on each edge are:

$$\frac{\partial}{\partial n_i} = \frac{1}{2Ad_i}\,[\,b_i\ \ c_i\,]\begin{bmatrix} b_1 & b_2 & b_3 & b_4 \\ c_1 & c_2 & c_3 & c_4 \end{bmatrix}\left[\frac{\partial}{\partial L_1}\ \frac{\partial}{\partial L_2}\ \frac{\partial}{\partial L_3}\ \frac{\partial}{\partial L_4}\right]^T \tag{8}$$

$$\frac{\partial}{\partial s_i} = \frac{1}{2Ad_i}\,[\,-c_i\ \ b_i\,]\begin{bmatrix} b_1 & b_2 & b_3 & b_4 \\ c_1 & c_2 & c_3 & c_4 \end{bmatrix}\left[\frac{\partial}{\partial L_1}\ \frac{\partial}{\partial L_2}\ \frac{\partial}{\partial L_3}\ \frac{\partial}{\partial L_4}\right]^T$$
[Figure: edge ij with end rotations θxi, θyi, θxj, θyj and the angles between the edge normal n, the tangent s and the coordinate axes.]
Figure 2. Normal and tangent angle relations on edge ij
With the normal and tangent angle directions shown in Figure 2,

$$\theta_n^{ij} = \frac{\partial w}{\partial n_m} = w_{,n_m}, \qquad \theta_s^{ij} = \frac{\partial w}{\partial s_m} = w_{,s_m} \tag{9}$$

Thus

$$\theta_n^{ij} = \frac{1}{d_m}\left(b_m\theta_x^{ij} + c_m\theta_y^{ij}\right), \qquad \theta_s^{ij} = \frac{1}{d_m}\left(-c_m\theta_x^{ij} + b_m\theta_y^{ij}\right) \tag{10}$$
Different G matrices can be obtained by adopting different compatibility conditions. As long as the compatibility conditions adopted are independent, i.e., the matrix is invertible, the corresponding stiffness matrix can be obtained. In this research, in order to obtain higher accuracy, three combinations were chosen after many tests, as shown in Table 1. The shape functions of the edge displacement on edge ij are defined as follows:
$$\begin{aligned}
N_1(\zeta) &= \tfrac{1}{4}\,(2 - 3\zeta + \zeta^3) & N_2(\zeta) &= \tfrac{l_{ij}}{8}\,(1 - \zeta - \zeta^2 + \zeta^3) \\
N_3(\zeta) &= \tfrac{1}{4}\,(2 + 3\zeta - \zeta^3) & N_4(\zeta) &= \tfrac{l_{ij}}{8}\,(-1 - \zeta + \zeta^2 + \zeta^3) \\
N_5(\zeta) &= \tfrac{1}{2}\,(1 - \zeta) & N_6(\zeta) &= \tfrac{1}{2}\,(1 + \zeta) \\
N_7(\zeta) &= \tfrac{3}{2l_{ij}}\,(\zeta^2 - 1) & N_8(\zeta) &= \tfrac{3}{2l_{ij}}\,(1 - \zeta^2) \\
N_9(\zeta) &= \tfrac{1}{4}\,(-1 - 2\zeta + 3\zeta^2) & N_{10}(\zeta) &= \tfrac{1}{4}\,(-1 + 2\zeta + 3\zeta^2)
\end{aligned} \tag{11}$$

where the $N(\zeta)$ are shape functions, and the parameter $\zeta$ of edge ij equals $-1$ at point i and $+1$ at point j, with $-1 \le \zeta \le 1$. The normal angle, tangent angle and transverse displacement along an edge are expressed by the following general construction (used to form the G matrix):

$$\theta_n^{ij} = N_5\theta_n^i + N_6\theta_n^j = \frac{b_m}{d_m}N_5\,\theta_{xi} + \frac{c_m}{d_m}N_5\,\theta_{yi} + \frac{b_m}{d_m}N_6\,\theta_{xj} + \frac{c_m}{d_m}N_6\,\theta_{yj}$$

$$\theta_s^{ij} = N_7 w_i + N_8 w_j + N_9\theta_s^i + N_{10}\theta_s^j = N_7 w_i + N_8 w_j - \frac{c_m}{d_m}N_9\,\theta_{xi} + \frac{b_m}{d_m}N_9\,\theta_{yi} - \frac{c_m}{d_m}N_{10}\,\theta_{xj} + \frac{b_m}{d_m}N_{10}\,\theta_{yj}$$

$$w^{ij} = N_1 w_i + N_3 w_j + N_2\theta_s^i + N_4\theta_s^j = N_1 w_i + N_3 w_j - \frac{c_m}{d_m}N_2\,\theta_{xi} + \frac{b_m}{d_m}N_2\,\theta_{yi} - \frac{c_m}{d_m}N_4\,\theta_{xj} + \frac{b_m}{d_m}N_4\,\theta_{yj} \tag{12}$$

where $(i, j, k, m)$ are the cyclic permutations of $(1, 2, 3, 4)$. The relations of Eq. (12) are universal in forming the displacement G matrix: as long as $b_m$, $c_m$, $d_m$ and the N of each edge are input, the G matrix is easy to obtain.
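The edge interpolation above is cubic Hermite interpolation along the edge together with its s-derivatives, with $ds = (l_{ij}/2)\,d\zeta$. A quick numerical check of these functions, written out explicitly (an illustrative sketch; the choice $l_{ij} = 2$ is arbitrary):

```python
import numpy as np

l = 2.0  # edge length l_ij (arbitrary for this check)

N = {
    1: lambda z: (2 - 3 * z + z ** 3) / 4,
    2: lambda z: l * (1 - z - z ** 2 + z ** 3) / 8,
    3: lambda z: (2 + 3 * z - z ** 3) / 4,
    4: lambda z: l * (-1 - z + z ** 2 + z ** 3) / 8,
    5: lambda z: (1 - z) / 2,
    6: lambda z: (1 + z) / 2,
    7: lambda z: 3 * (z ** 2 - 1) / (2 * l),
    8: lambda z: 3 * (1 - z ** 2) / (2 * l),
    9: lambda z: (-1 - 2 * z + 3 * z ** 2) / 4,
    10: lambda z: (-1 + 2 * z + 3 * z ** 2) / 4,
}

z = np.linspace(-1, 1, 201)
ds_dz = l / 2  # chain-rule factor between d/ds and d/dzeta

# Hermite end conditions: N1, N3 interpolate w at i (z = -1) and j (z = +1).
print(N[1](-1.0), N[1](1.0), N[3](1.0))        # 1.0 0.0 1.0

# Partition of unity for the value interpolants.
print(np.allclose(N[1](z) + N[3](z), 1.0), np.allclose(N[5](z) + N[6](z), 1.0))

# N7..N10 are the s-derivatives of N1..N4 (checked by central differences).
h = 1e-6
for a, b in [(1, 7), (3, 8), (2, 9), (4, 10)]:
    dN = (N[a](z + h) - N[a](z - h)) / (2 * h) / ds_dz
    assert np.allclose(dN, N[b](z), atol=1e-6)
```

These identities are exactly what the construction in Eq. (12) relies on: N1-N4 carry the end values and end slopes of w, while N7-N10 reproduce the corresponding tangent rotations.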
3. List of AQP Series Elements
The following AQP series of elements can be derived using the above method, as shown in Table 1.
Table 1. Series of AQP elements

AQP_1:
  Trial function: F = [1, L3−L1, L4−L2, (L3−L1)(L4−L2), L1L3, L2L4, L1L3(L3−L1), L1L3(L4−L2), L1L3(L3−L1)(L4−L2), L2L4(L3−L1), L2L4(L4−L2), L2L4(L3−L1)(L4−L2)]
  Compatibility conditions: node compatibility: w − w̄ᵢ = 0, θx − θ̄xᵢ = 0, θy − θ̄yᵢ = 0, i = 1~4 (element nodes).

AQP_2:
  Trial function: F = [1, L3−L1, L4−L2, (L3−L1)(L4−L2), L1L3, L2L4, L1L3(L3−L1), L1L3(L4−L2), L2L4(L3−L1), L2L4(L4−L2), L1²L3², L2²L4²]
  Compatibility conditions: point compatibility: w − w̄ᵢ = 0, i = 1~4 (element nodes); w − w̄ⱼ = 0, j = 5~8 (centers of the edges); θn − θ̄n = 0, k = 9~16 (Gauss points of the edges).

AQP_3:
  Trial function: F = [1, L3−L1, L4−L2, ...]
  Compatibility conditions: point and edge compatibility: w − w̄ᵢ = 0, i = 1~4 (element nodes); w − w̄ⱼ = 0, j = 5~8 (centers of the edges); ∫ᵢⱼ (θn − θ̄n) dζ = 0, ij = 12, 23, 34, 41 (edges).

4. Numerical Examples
In the following, simply supported and clamped square plates under uniformly distributed and concentrated loads, and simply supported and clamped circular plates under uniformly distributed load, are examined, and the results are compared with results from Ansys.

4.1 Simply Supported and Clamped Square Plates under Uniformly Distributed and Concentrated Loads

The Ansys models are shown in Figure 3. Due to the symmetry of the square plate, the calculation is done on one quarter of it, with the four mesh patterns shown in Figure 4. The side length of the square plate is L and the Poisson ratio is μ = 0.3; the central displacement results are listed in Table 2. From the displacement results we can see that the elements developed here give good results: even on the sparse meshes, AQP_2 shows good accuracy, which confirms the validity of elements developed using the trial functions stated in this paper.

[Figure: four load cases with the computed displacement field: (a) simply supported plate under Q; (b) clamped plate under Q; (c) simply supported plate under P; (d) clamped plate under P.]
Figure 3. Ansys model of the square plate
[Figure: 1/4 square plate meshes with symmetry boundary conditions θx = 0 and θy = 0 on the symmetry edges: (a) mesh of 1×1; (b) mesh of 2×2; (c) mesh of 4×4; (d) mesh of 8×8.]
Figure 4. Mesh patterns of the 1/4 square plate

Table 2. Central displacement of simply supported and clamped square plates under uniformly distributed load q and concentrated load P

Uniformly distributed load q (central displacement in units of qL⁴/100D):

Mesh     | Simply supported plate         | Clamped plate
         | Ansys   AQP_1   AQP_2   AQP_3  | Ansys   AQP_1   AQP_2   AQP_3
2×2      | 0.3831  0.4338  0.4033  0.4033 | 0.1366  0.1372  0.1245  0.1245
4×4      | 0.4003  0.4124  0.4061  0.4061 | 0.1292  0.1301  0.1263  0.1263
8×8      | 0.4048  0.4077  0.4061  0.4062 | 0.1272  0.1274  0.1265  0.1265
Analytic | 0.4062                         | 0.1265

Concentrated load P (central displacement in units of PL²/100D):

Mesh     | Simply supported plate         | Clamped plate
         | Ansys   AQP_1   AQP_2   AQP_3  | Ansys   AQP_1   AQP_2   AQP_3
2×2      | 1.2047  1.1736  1.1875  1.1875 | 0.5979  0.5668  0.5806  0.5806
4×4      | 1.1743  1.1666  1.1690  1.1690 | 0.5745  0.5685  0.5689  0.5689
8×8      | 1.1644  1.1633  1.1622  1.1623 | 0.5653  0.5640  0.5634  0.5634
Analytic | 1.1600                         | 0.5612
4.2 Simply Supported and Clamped Circular Plates under Uniformly Distributed Load

Due to the symmetry of the circular plate, the calculation is done on one quarter of it; the Ansys model and mesh patterns are shown in Figure 5. The radius of the circular plate is 5 and the Poisson ratio is μ = 0.3. The displacement results are listed in Table 3. From the table we can see that the calculation precision of the elements developed here is better than that of the Ansys results obtained with the Shell63 element.
[Figure: 1/4 circular plate with symmetry boundary conditions θx = 0 and θy = 0 and the computed displacement field: (a) simply supported circular plate; (b) clamped circular plate; (c) mesh of 12 elements; (d) mesh of 48 elements.]
Figure 5. Ansys model and mesh patterns of the 1/4 circular plate

Table 3. Simply supported and clamped circular plates under uniformly distributed load

Central displacement (in units of qR⁴/10D):

Mesh        | Simply supported plate         | Clamped plate
            | Ansys   AQP_1   AQP_2   AQP_3  | Ansys   AQP_1   AQP_2   AQP_3
12 elements | 0.6607  0.6778  0.6219  0.6207 | 0.1666  0.1690  0.1498  0.1486
48 elements | 0.6680  0.6512  0.6347  0.6338 | 0.1650  0.1614  0.1560  0.1551
Analytic    | 0.6370                         | 0.1563

Central moment:

Mesh        | Simply supported plate (qR²)   | Clamped plate (qR²/10)
            | Ansys   AQP_1   AQP_2   AQP_3  | Ansys   AQP_1   AQP_2   AQP_3
12 elements | 0.1842  0.2189  0.2088  0.2117 | 0.6998  0.8876  0.8557  0.8766
48 elements | 0.1905  0.2155  0.2070  0.2099 | 0.7421  0.8331  0.8264  0.8296
Analytic    | 0.2063                         | 0.8165
4.3 Sensitivity Analysis of Element Distortions on the Quadrilateral Element
A sensitivity analysis was performed on the simply supported and clamped square plates under uniformly distributed load. The side length of the plate is L and the Poisson ratio is μ = 0.3. The mesh adopted is shown in Figure 5, where Δ is the distortion parameter, indicating the distance by which node D has deviated from the square mesh node. The calculation results for these elements are drawn in Figure 6 and Figure 7 for the distortion parameter varying between -1.25 and 1.25. As shown in Figure 6, both the elements developed here and the Shell63 element of Ansys are good at resisting distortion. It is also shown that the compatibility condition is the key to the element performance.
[Figure: 2×2 mesh of the 1/4 square plate (L/2 = 5) with the distorted node offset by Δ: (a) symmetrical distortion form of the mesh; (b) antisymmetric distortion form of the mesh.]
Figure 5. Sensitivity of element distortions: 1/4 square plate, 2×2 mesh

[Figure: central displacement (in units of qL⁴/100D) versus distortion parameter Δ for the analytic solution, AQP_1, AQP_2, AQP_3 and Ansys: (a) symmetrical distortion; (b) antisymmetric distortion.]
Figure 6. Central displacement of the simply supported plate under uniformly distributed load q (in units of qL⁴/100D)
5. Conclusion
A new series of thin plate quadrilateral elements, AQP, was successfully developed using the quadrilateral area-coordinate method and generalized compatibility theory. The method used to construct the elements in this paper is all-purpose, simple and convenient to program. Bending and twist tests were performed on the AQP elements, and it was proved that these elements are reliable, passing the patch test and achieving good convergence.
[Figure: central displacement (in units of qL⁴/100D) versus distortion parameter Δ for the analytic solution, AQP_1, AQP_2, AQP_3 and Ansys: (a) symmetrical distortion; (b) antisymmetric distortion.]
Figure 7. Central displacement of the clamped square plate under uniformly distributed load q (in units of qL⁴/100D)
6. References
[1] J-L. Batoz, M.B. Tahar. Evaluation of a new quadrilateral thin plate bending element [J]. International Journal for Numerical Methods in Engineering, 18, 1982: 55–77.
[2] E. Hinton, E.D.L. Pugh. Some quadrilateral isoparametric finite elements based on Mindlin plate theory [C]. Proceedings of the Symposium on Applications of Computer Methods in Engineering, Los Angeles, 1977: 85–96.
[3] R. McNeal. A simple quadrilateral shell element [J]. Computers and Structures, 8, 1978: 75–83.
[4] J. Chinniah. Finite element formulation for thin plate and shallow structures [D]. Ottawa: Carleton University, 1985.
[5] Y.Q. Long, J.X. Li, Z.F. Long, S. Cen. Area-coordinates used in quadrilateral elements [J]. Communications in Numerical Methods in Engineering, 15(8), 1999: 533–545.
[6] Long Yuqiu, Li Juxuan, Long Zhifei, Cen Song. Area-coordinate theory for quadrilateral elements [J]. Engineering Mechanics, 14(3), 1997: 1–11.
[7] A.K. Soh, Y.Q. Long, S. Cen. Development of eight-node quadrilateral membrane elements using the area coordinates method [J]. Computational Mechanics, 25(4), 2000: 376–384.
[8] A.K. Soh, Z.F. Long, S. Cen. Development of a new quadrilateral thin plate element using area coordinates [J]. Computer Methods in Applied Mechanics and Engineering, 190, 2000: 979–987.
[9] Chen Xiao-ming, Cen Song, Long Yu-qiu. Two thin plate elements developed by assuming rotations and using quadrilateral area coordinates [J]. Engineering Mechanics, 2(4), 2005: 1–5.
Individual Foot Shape Modeling from 2D Dimensions Based on Template and FFD Bin Liu, Ning Shangguan, Jun-yi Lin, Kai-yong Jiang College of Mechanical Engineering and Automation, Huaqiao University, 362021
Abstract A novel method to generate an individual 3D foot shape from 2D dimensions is proposed. A standard foot shape (template) is first defined as the average shape over foot data samples from 50 participants. The standard foot is sectioned at each 1% of foot length, giving 99 sections. Images captured by digital cameras are used: the bottom view of the foot is processed to give the foot outline, while a side view gives the foot profile. The points of each section are then scaled based on the known 2D information from the outline and profile. The other points of the standard foot are globally deformed using a method based on direct manipulation of free-form deformation (FFD) with multiple point constraints. The individual foot shape can thus be created by modifying the standard foot using the measured height and width obtained from the foot profile and outline respectively. After shape prediction and alignment, the prediction error is calculated as the dimensional difference between the predicted shape and the actual foot shape. The results show that the method proposed in this paper is robust, efficient and cheaper than using a 3D scanner to determine the 3D foot shape. This research may be used to develop custom lasts for the manufacture of custom footwear without actually scanning a person's feet. Keywords: Foot shape modeling, Free form deformation, Foot outline, Foot profile
1. Introduction
With economic development and the improvement of people's living standards and taste, higher demands are placed on dress. Requests for variety, individuation and body fit have become more and more common in people's daily dress, and individual customization is foreseen as the inevitable result of this development. Nowadays, users choose footwear based only on length and width measurements, but it is widely accepted that the three-dimensional shape of the foot can help in good shoe fitting [1][2]. The aim of this paper is to develop a method to predict the 3D foot shape from anthropometric dimensions, such that a last can be scientifically designed by a one-to-one mapping from 3D foot shape to 3D last shape. This research may be
used to develop custom lasts for the manufacture of custom footwear without actually scanning a person's feet. The rest of the paper is organized as follows. In the following section we describe our individual foot shape modeling from the foot outline and profile, based on a template and a constrained free-form deformation framework. In Section 3 we evaluate the prediction error using the dimensional difference between the predicted shape and the actual foot shape. We conclude the paper in Section 4.
2. Methodology
A set of foot shape sample data is captured using a 3D scanning system, and a 3D mesh model of each foot is created by a slice-based mesh surface reconstruction method. The sample data are averaged to build the standard foot shape, which serves as the template for the individual (customized) foot shape. Taking the input foot outline information as the constraint condition, linear equations are created using a method based on direct manipulation of free-form deformation with multiple point constraints. The linear equations are then solved to obtain the vertex coordinates of the deformed standard model, finally achieving rapid customization of the specific individual foot model.

2.1 Capture the Sample Foot Shape Data
We collected foot shapes from 70 participants, university students and postgraduates aged 19 to 25, most of them southerners. Of these 70 persons, 50 data sets were used to create the standard foot shape; the other 20 were used to verify the accuracy of the method in this paper. Our research center's own 3D foot shape and shoe-last laser scanning system was used as the data collection tool. Referring to the related research [2][3], we set down the measurement procedure and related precautions as follows:

1. First, before measurement, record each participant's basic information, such as age, stature, sex and weight, and then give each one an ID. For special samples that may possibly arise (foot abnormalities), examine the participants first and make a special additional record of any abnormality so that it is convenient for later processing.
2. Take all measurements in the afternoon so that all participants are influenced by the same conditions of temperature and humidity.
3. Clean and sterilize the participants' feet before measurement. Keep the water temperature at about 25 ± 1 °C, avoid smudges or sweat adhering to the scanner and affecting the result, and ensure public sanitation.
4. While measuring, participants stand straight and let both feet bear the body weight evenly. On the one hand, this moderately accounts for the foot dilatation that results from body weight; on the other hand, the sole naturally forms a plane while a person is standing, which is convenient for the subsequent establishment of the coordinate system.
5. While measuring, in order to fix the foot axis direction, the line from the participant's pternion (heel point) to the tip of the 2nd toe, i.e., the foot's central axis, is kept as parallel as possible to the scan direction demarcated by the scanner. This is convenient for the further foot-axis correction and coordinate transformation.
6. Measure the left and right foot separately.

After the foot point-cloud data are obtained, they are pre-processed (denoising, geometrical registration, etc.), laying the foundation for the subsequent 3D mesh model reconstruction. This completes the foot shape sample data collection.

2.2 Standard Foot Shape Construction
After the foot point-cloud data are obtained, the next task is to rebuild the foot surface, and the key to this problem is parameterizing the point cloud. At present, parameterization based on projection onto a base surface is mainly adopted, but parameterizing a large point cloud takes a long time and the model can be rebuilt only once per parameterization, so for a given base surface the error is relatively large. Pottmann derived a method that uses a second-order approximation of the squared distance function as the error measure and uses squared distance minimization (SDM) to rebuild parameterized curves and surfaces [4]. Compared with projective parameterization, SDM updates the model while rebuilding it, so it has higher precision for the same base curve or surface, and it adds control-point constraints so that the quality of the rebuilt surface can easily be controlled. This paper derives a slice-based mesh surface reconstruction method. The foot point cloud is cut into 99 sections along the foot-length direction, and the points of each section are fitted with a B-spline using SDM. Sample points are then re-captured on the curve according to the change of curvature, turning the initially unordered point cloud into an ordered, layered data organization; the final data representation keeps the original shape information while reducing data redundancy. The 3D foot mesh model is created by triangulating the re-sampled points. When re-capturing points, we must ensure that, for every sample foot model, each section takes the same number of points; in this way every 3D foot mesh contains the same number of vertices, which is convenient for foot model creation and for computing the fitting error.
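The slicing and per-section resampling can be sketched as follows (a simplified stand-in that resamples each closed section uniformly by arc length rather than by curvature, and uses a synthetic elliptical "foot"; both are assumptions for illustration):

```python
import numpy as np

def slice_sections(points, n_sections=99):
    """Bin a foot point cloud (N x 3, x = foot-length axis) into sections."""
    x = points[:, 0]
    edges = np.linspace(x.min(), x.max(), n_sections + 1)
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_sections - 1)
    return [points[idx == s] for s in range(n_sections)]

def resample_section(section, n_pts=40):
    """Order a section's points by angle about the centroid and resample the
    closed polyline uniformly by arc length (same n_pts for every section)."""
    yz = section[:, 1:]
    ctr = yz.mean(axis=0)
    order = np.argsort(np.arctan2(*(yz - ctr).T[::-1]))
    loop = yz[order]
    loop = np.vstack([loop, loop[:1]])               # close the curve
    seg = np.linalg.norm(np.diff(loop, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    ti = np.linspace(0.0, t[-1], n_pts, endpoint=False)
    y = np.interp(ti, t, loop[:, 0])
    z = np.interp(ti, t, loop[:, 1])
    return np.column_stack([np.full(n_pts, section[:, 0].mean()), y, z])

# Synthetic "foot": an elliptical tube along x; 99 sections, 40 points each.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 20000)
xs = rng.uniform(0, 250, 20000)
cloud = np.column_stack([xs, 40 * np.cos(theta), 25 * np.sin(theta)])
mesh_pts = [resample_section(s) for s in slice_sections(cloud)]
```

The fixed point count per section is what makes the later vertex-wise averaging and fitting-error computation possible.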
For further processing, a uniform coordinate system is created for each sample model: the sole plane is taken as the XY plane, the projection of the pternion onto the XY plane as the origin of the coordinate system, and the line from the origin to the tip of the 2nd toe as the X axis, as shown in Figure 1.
Figure 1. Coordinate system of foot shape
After rebuilding all the sample models, the standard foot shape model is created by taking the average coordinates of the corresponding vertices of all sample models as the corresponding coordinates of the standard model, which then serves as the template for the customized individual foot shape. Assume that $N_i$ points are taken from every section; then each foot has points with coordinates $p_{ij} = (x_{ij}, y_{ij}, z_{ij})$, where $i = 1, \dots, 100$ and $j = 1, \dots, N_i$, and the standard foot has points $\bar p_{ij} = (\bar x_{ij}, \bar y_{ij}, \bar z_{ij})$.
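Because every sample mesh shares the same section and point indexing, the template is a plain vertex-wise mean. A sketch (the array shapes and names are illustrative):

```python
import numpy as np

# feet: one (n_sections, n_pts, 3) array per participant, identically indexed
# because every section was resampled to the same number of points.
rng = np.random.default_rng(1)
base = np.zeros((100, 40, 3))
base[..., 0] = np.arange(100)[:, None]        # section position along x
feet = [base + rng.normal(scale=2.0, size=base.shape) for _ in range(50)]

template = np.mean(np.stack(feet), axis=0)    # vertex-wise average foot

print(template.shape)                         # (100, 40, 3)
```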
2.3 Image Input and Silhouette Extraction
Input a foot image FB taken from the bottom viewing direction, as shown in Figure 2(a). To extract FB's silhouette, we use the following method. First, the image is processed and the background color is removed. Then the image is converted to black-and-white so that outline extraction is convenient. In the final stage, we employ an edge detector to extract the boundary [5]. During this processing, we sample the outer silhouette at regular intervals and then approximate the samples by a cubic B-spline curve C, as shown in Figure 2(d).
Figure 2. Silhouette extraction: (a) the original foot image from the bottom viewing direction; (b) the result of removing the background color; (c) the black-and-white image; (d) the final B-spline silhouette curve.
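The pipeline can be sketched with a thresholded binary image, a boundary trace and a periodic cubic B-spline fit (SciPy's `splprep` stands in for the paper's edge detector and spline fit, and the elliptical "foot" image is synthetic; both are assumptions):

```python
import numpy as np
from scipy import interpolate, ndimage

# Synthetic bottom-view "photo": bright foot-shaped blob on dark background.
yy, xx = np.mgrid[0:200, 0:120]
img = ((xx - 60) ** 2 / 40 ** 2 + (yy - 100) ** 2 / 90 ** 2 < 1).astype(float)

# 1) Threshold to black-and-white.
bw = img > 0.5

# 2) Boundary pixels: foreground with at least one background 4-neighbour.
eroded = ndimage.binary_erosion(bw)
boundary = np.argwhere(bw & ~eroded)

# 3) Sample the silhouette at regular intervals around the centroid.
ctr = boundary.mean(axis=0)
ang = np.arctan2(boundary[:, 0] - ctr[0], boundary[:, 1] - ctr[1])
pts = boundary[np.argsort(ang)][::10]            # every 10th boundary pixel

# 4) Fit a closed (periodic) cubic B-spline through the samples.
tck, _ = interpolate.splprep([pts[:, 1], pts[:, 0]], s=len(pts), per=True)
xs, ys = interpolate.splev(np.linspace(0, 1, 400), tck)
```

The angular ordering works because a foot's bottom silhouette is star-shaped with respect to its centroid; a real implementation would trace the contour instead.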
2.4 Mesh Manipulation Based on Constrained Free-Form Deformation

2.4.1 Direct Manipulation of FFD with Multiple Point Constraints
Free-form deformations (FFDs) are powerful tools in computer-aided geometric design (CAGD) and computer graphics [6]. An FFD deforms a base volume into a target volume; its main use is 3D model deformation through alteration of the volume in which the object is embedded. FFDs can be used to deform a variety of 3D data types, including polygonal surface meshes, free-form surfaces and volumes, tetrahedral meshes, etc. A disadvantage of these FFD methods is the indirect control of the deformation through adjusting the control points or weights of the embedding volume: it is difficult to make the object pass through desired points precisely. Moreover, the large number of control points in complex models makes it impractical to determine exactly which control points must be changed, and how, to produce a desired deformation. Hu derived a solution for free-form deformation with multiple point constraints using a constrained optimization method; it yields an explicit solution for the deformation reaching the target points and only requires solving a system of linear equations [Hu, 2001]. Assume that the embedding volume is a NURBS parallelepiped, represented as a tensor-product NURBS volume $Q(u, v, w)$ with uniformly distributed control points $P_{i,j,k}$:

$$Q(u, v, w) = \sum_{i,j,k=0}^{l,m,n} P_{i,j,k}\,R_{i,j,k}(u, v, w), \qquad 0 \le u, v, w \le 1 \tag{1}$$
where

$$R_{i,j,k}(u, v, w) = \frac{W_{i,j,k}\,B_{i,p}(u)\,B_{j,q}(v)\,B_{k,r}(w)}{\sum_{i,j,k=0}^{l,m,n} W_{i,j,k}\,B_{i,p}(u)\,B_{j,q}(v)\,B_{k,r}(w)} \tag{2}$$
and where $W_{i,j,k}$ are the corresponding weights of $P_{i,j,k}$, and $B_{i,p}(u)$, $B_{j,q}(v)$ and $B_{k,r}(w)$ are the pth-, qth- and rth-order B-spline basis functions defined over the knot vectors $U = \{u_0, u_1, \dots, u_p, \dots, u_l, \dots, u_{l+p}\}$, $V = \{v_0, v_1, \dots, v_q, \dots, v_m, \dots, v_{m+q}\}$ and $W = \{w_0, w_1, \dots, w_r, \dots, w_n, \dots, w_{n+r}\}$, respectively. Suppose that $S_f$, $f = 1, \dots, h$, are the source points on an object, having the parameter values $(u_f, v_f, w_f)$, and that $T_f$, $f = 1, \dots, h$, are the corresponding target points. In order to move $S_f$ to $T_f$, we must compute the displacement $\delta_{i,j,k}$ of each control point $P_{i,j,k}$ so that the following constraints are satisfied:
$$T_f = \sum_{i,j,k=0}^{l,m,n} (P_{i,j,k} + \delta_{i,j,k})\,R_{i,j,k}(u_f, v_f, w_f) = S_f + \sum_{i,j,k=0}^{l,m,n} \delta_{i,j,k}\,R_{i,j,k}(u_f, v_f, w_f), \qquad f = 1, \dots, h \tag{3}$$
Solving by the constrained optimization method gives

$$\boldsymbol{\delta} = \mathbf{R}\,(\mathbf{R}^T\mathbf{R})^{-1}\,\mathbf{D} = \mathbf{R}\,(\mathbf{R}^T\mathbf{R})^{-1}\,(\mathbf{T} - \mathbf{S}) \tag{4}$$

where

$$\mathbf{D} = [\,D_1, \dots, D_h\,]^T = [\,T_1 - S_1, \dots, T_h - S_h\,]^T \tag{5}$$

$$\mathbf{R} = \begin{bmatrix}
R_{0,0,0}(s_1) & \cdots & R_{0,0,0}(s_h) \\
R_{0,0,1}(s_1) & \cdots & R_{0,0,1}(s_h) \\
\vdots & \ddots & \vdots \\
R_{l,m,n}(s_1) & \cdots & R_{l,m,n}(s_h)
\end{bmatrix}_{[(l+1)(m+1)(n+1)] \times h} \tag{6}$$
2.4.2 Determining the Control Grid

Because of the locality of B-splines, a point on a cubic B-spline curve p(u), u ∈ [u_i, u_{i+1}], is related to at most four control points d_j (j = i−3, i−2, i−1, i). Analogously, a point q(u, v, w) in three-dimensional space is related to at most sixty-four control points, so the precision of the model deformation is directly affected by the distribution of the control grid. Therefore, the number of control grid vertices is confirmed by the following principle: ensure that there is at least one constrained deformation point between any two control-point sections, as shown in Figure 3.
Figure 3. Relationship between the control grid and constrained vertexes
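The locality property can be checked numerically with the standard Cox–de Boor recursion; this is a generic sketch, not the authors' implementation. For a cubic basis (p = 3) on a uniform knot vector, at most four basis functions are nonzero at any parameter value, which is what limits a volume point to 4 × 4 × 4 = 64 control points:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis function B_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

# Cubic basis on the uniform knots 0..10: seven basis functions exist,
# and at u = 4.5 exactly four of them are nonzero.
knots = list(range(11))
values = [bspline_basis(i, 3, 4.5, knots) for i in range(7)]
```

On the interior of the valid domain the nonzero values also sum to one (partition of unity), which is why the weighted sums in Eqs. (1)–(3) reproduce positions rather than scaled copies of them.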
Individual Foot Shape Modeling from 2D Dimensions Based on Template
2.5 Deformation Based on Constraints of 2D Outline and Profile
After the outline and profile are extracted from the input 2D images, the silhouette of the 3D mesh model should be extracted next. However, silhouette extraction from a mesh model is computationally expensive. Therefore, considering the actual geometry of the object and following the slice-based mesh surface reconstruction idea mentioned earlier, the model is intersected by a series of planes perpendicular to the X axis. The foot outline and the foot profile are discretized as the extreme points along the Y and Z axes in each section. Because the ankle protrudes beyond the sole outline in the top view, a point on the ankle may be mistaken for a point on the foot outline by the procedure above, so this case must be handled specially in the implementation. First, the extracted outline curve is processed geometrically and a coordinate system is created according to the 3D foot shape model, as shown in Figure 4. The input 2D outline and profile are positioned to coincide with the coordinate system of the 3D model, and the standard foot shape model is then scaled proportionally along the X axis to match the input 2D foot outline and profile. The standard foot shape model and the input 2D outline and profile curves are intersected simultaneously by a series of planes perpendicular to the X axis. Each plane yields four intersection points with the 2D curves and a section curve of the standard foot shape model, on which four extreme points are found: the highest (largest Z), lowest (smallest Z), leftmost (largest Y) and rightmost (smallest Y) points. These four extreme points on each section are used as constrained points while the template is deformed, and the intersection point between the plane and the 2D outline or profile curve (the one whose Z value equals that of the corresponding constrained point) gives the target position of that constrained point.
Figure 4. Coordinate system in the 2D foot outline
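The per-section extraction of the four constrained points can be sketched as follows. This is an illustrative approximation that takes extremes over a thin slab of mesh vertices around each plane, rather than an exact plane–mesh intersection:

```python
import numpy as np

def section_extremes(vertices, x0, half_width=0.5):
    """Four constrained points of the section near the plane x = x0:
    highest/lowest in Z and leftmost/rightmost in Y, as in the text.

    vertices -- (n, 3) array of model vertices (columns X, Y, Z).
    Returns a (4, 3) array, or None if no vertex lies in the slab.
    """
    band = vertices[np.abs(vertices[:, 0] - x0) < half_width]
    if band.size == 0:
        return None
    return np.array([
        band[band[:, 2].argmax()],   # highest   (largest Z)
        band[band[:, 2].argmin()],   # lowest    (smallest Z)
        band[band[:, 1].argmax()],   # leftmost  (largest Y)
        band[band[:, 1].argmin()],   # rightmost (smallest Y)
    ])
```

The ankle special case described in the text would be layered on top of this, e.g. by excluding vertices above a height threshold when computing the sole outline.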
The model and the 2D foot outline and profile curves are discretized using fifteen sections, so 13 × 4 = 52 constrained points are obtained from the thirteen middle planes intersecting the standard model. Adding the two extreme points at the two ends, where the X values are largest and smallest, gives fifty-four constrained points in total. The volume of the control grid is about 1.1 times that of the smallest bounding box. Sixteen control vertices are taken in the X direction, and ten each in the Y and Z directions, to ensure that each control-vertex section contains a deformation constrained point. The motion vector of each control vertex can be
computed according to formula (4) once the control grid and the constrained points are determined. The distribution of the control grid is then derived, and the position of each vertex of the standard foot shape model is solved after the constrained deformation. The new model is the individual foot shape model we need, and rapid customization of a specific foot shape model is thus achieved.
3. Accuracy of the Model

The model was validated using the foot scans of 20 other participants. The error was defined as the Euclidean distance from points on the actual foot shape to the closest point on the predicted shape (Luximon and Goonetilleke 2004; Luximon et al. 2003). The mean, maximum and minimum errors for the left feet of the 20 participants were calculated and are shown in Table 1; the overall mean error is 1.03 mm.

Table 1. Mean, minimum and maximum error (mm) between the 'actual' feet and the predicted feet using the prediction method for 20 participants
Participant   Mean    Min.    Max.      Participant   Mean    Min.    Max.
1             1.21    -5.78   7.12      11            0.81    -6.01   7.32
2             1.01    -5.81   6.77      12            1.03    -6.22   5.63
3             1.22    -4.98   5.96      13            1.15    -4.47   5.21
4             1.04    -5.23   6.68      14            0.77    -3.85   7.96
5             1.21    -6.21   7.02      15            0.88    -6.62   7.13
6             0.92    -4.47   8.12      16            1.18    -3.64   4.35
7             0.93    -5.53   6.34      17            1.34    -5.31   3.88
8             1.07    -4.42   5.52      18            1.11    -3.67   4.31
9             0.94    -3.39   6.89      19            1.03    -6.11   5.21
10            0.87    -2.56   7.21      20            0.92    -4.02   3.83
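The validation metric can be sketched as a nearest-point distance over point clouds. The version below is unsigned; the negative minima in Table 1 suggest the authors additionally signed the distance according to whether a point lies inside or outside the predicted surface, which is not reproduced here:

```python
import numpy as np

def shape_errors(actual, predicted):
    """For each point of the actual foot, the Euclidean distance to the
    closest point on the predicted shape (Luximon et al. style metric).

    actual, predicted -- (n, 3) and (m, 3) point arrays.
    Returns (mean, min, max) of the per-point closest distances.
    """
    # pairwise distance matrix, shape (n, m)
    d = np.linalg.norm(actual[:, None, :] - predicted[None, :, :], axis=2)
    closest = d.min(axis=1)
    return closest.mean(), closest.min(), closest.max()
```

For dense scans the O(n·m) pairwise matrix becomes large; a k-d tree query would be the usual替substitute at scale.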
4. Conclusions
In this study, a novel method was proposed to predict foot shape. The foot outline (bottom view), foot profile (side view), and a standard foot shape were used to predict the 3D shape of a given foot. Similar to Luximon and Goonetilleke (2004), the standard foot shape was generated as the mean of 50 Chinese male participants. The prediction model was validated using another set of 20 Chinese male subjects, and a mean accuracy of 1.03 mm was obtained. This value is slightly larger than that reported in [1], but the method proposed in this paper is more automatic and requires less user interaction. To further improve the model precision, proper anatomical landmark points
could be added. The method proposed in this paper can be used to generate 3D foot shapes for producing a personalized last, or for selecting a suitable last to develop a customized shoe, from only a limited number of dimensions.
5. Acknowledgements
The work described in this paper was supported by key programs No. 2006H0029 and No. 2005HZ1013, both from the Science & Technology Department of Fujian Province, China.
6. References
[1] Luximon A, Goonetilleke RS, Zhang M (2005) 3D foot shape generation from 2D information. Ergonomics 48(6):625–641
[2] Luximon A, Goonetilleke RS (2004) Foot shape modelling. Human Factors 46(2):304–315
[3] Mochimaru M, Kouchi M (2002) Shoe customization based on 3D deformation of a digital human. The Engineering of Sport 4:595–601
[4] Pottmann H, Hofer M (2002) Geometry of the squared distance function to curves and surfaces. Technical Report 90, Vienna University of Technology, Vienna
[5] Hertzmann A, Zorin D (2000) Illustrating smooth surfaces. In: Proceedings of SIGGRAPH 2000, 517–526
[6] Sederberg TW, Parry SR (1986) Free-form deformation of solid geometric models. Comput Graph 20(4):151–160
[7] Hu SM, Zhang H, Tai CL, Sun JG (2001) Direct manipulation of FFD: efficient explicit solutions and decomposable multiple point constraints. The Visual Computer 17(6):370–379
Application of the TRIZ to Circular Saw Blade

Tao Yao1, Guolin Duan2, Jin Cai2

1 School of Mechanical Engineering, Hebei University of Technology, Tianjin, P.R. China, 300130. E-mail: [email protected]
2 School of Mechanical Engineering, Hebei University of Technology, Tianjin, P.R. China, 300130
Abstract The basic principles and some methods of TRIZ are introduced, with which the design of circular saw blades is analyzed based on the World Patent Database. Considering the characteristics of circular saw blades, the patterns of technology evolution of the circular saw blade are studied and the maturity of the product is predicted. The results show the current state of circular saw blade development and provide effective methods for designing circular saw blades that produce low radiated noise and vibration. In addition, taking the inventive design of circular saws as an engineering example, the functions we want are compared with the properties circular saws exhibit during operation, and the application of the TRIZ inventive principles to the engineering design is presented and discussed. Keywords: TRIZ; Technology evolution; Inventive principle; Circular saws
1. Introduction
The circular saw is an essential tool in a typical sawmill and is used daily by many construction and wood workers. Whenever a circular saw is idling or cutting material at high rotary speed, several characteristics become apparent (e.g., dynamic stability, vibration and noise, heating). The reduction of cutting residual is a long-term concern in sawmilling technology development: to reduce cutting residual, circular saws must be thin, and they must remain relatively flat during cutting. However, a thin circular saw blade can easily buckle out of plane because of the heat flux from the cutting process. Another harmful characteristic of the saw is noise; the high noise emissions of circular saws severely burden users and the environment. During sawing, the saw blade and the workpiece are set into vibration, and this vibration generates sound waves that propagate through the air as noise. Over the last few years, several trends can be observed in circular saw designs for reducing these harmful effects: tensioning and rolling; guiding saw blades without clamps; damping saws; and slotting saws. This way of thinking has brought about changes in the product
448
T. Yao, G. Duan and J. Cai
development process, resulting in better products for companies and customers. In this paper, TRIZ tools are introduced and applied to the design of circular saws, providing a good basis for the development of sawing technology.
2. TRIZ Theory and Common TRIZ Tools
TRIZ is the Russian acronym for the "Theory of Inventive Problem Solving", which is built on over 1500 person-years of research and the systematic study of over 2 million of the world's most successful patents [1]. In the 1940s, Altshuller, the father of TRIZ, observed that basic principles of invention exist in reality and can be summed up into a body of theory that increases the rate of success of inventions and shortens the invention cycle; at the same time, these principles can predict the outcome of a problem. The main objective of a TRIZ implementation project is to resolve the design problems encountered during the product development process. TRIZ forces the product development profession to look into the future and seek successful ways of solving a problem using technology; it enhances the productivity of product development and reduces cycle time.

2.1 The Laws of System and Product Evolution
The evolution of all objects of the material world, including technological objects, is governed by certain laws. The system of laws of technical system evolution has three levels: demands, functions and systems. If we know the regularities of demand evolution, we can predict future demands; such knowledge also makes it possible to discover radically new directions of technical system evolution. The creation of new systems can also deliver some harmful functions. The laws of evolution define a general direction of technical system evolution; the main laws of technical system evolution are [2]:

• Increase in the degree of ideality.
• Irregular evolution of system parts.
• Increase in the degree of system dynamics.
• Coordination.
• Transition of a system to a supersystem.
A general direction of technology evolution is defined by the law of increasing the degree of ideality of technical systems. Although an absolutely ideal system does not exist, designers can strive to decrease negative effects within the given working time and space. Idealization, as an objective of system or product design, can be realized by several means (e.g., reducing the number of parts of a system or process; increasing the number of delivered functions; using advanced equipment, materials and processes). The process of product evolution is the core of technology, which advances from lower to higher levels. Designers make great efforts to promote this evolution, and Altshuller showed that laws and patterns exist in technology evolution.
Application of the TRIZ to Circular Saw Blade
449
The theory tries to find evolution patterns and routes that can be used to predict the future product. Altshuller found that any system evolves in a biological pattern, going through four main stages:

• the infancy stage
• the growth stage
• the maturity stage
• the decline stage
The biological S-curve is depicted in Figure 1.

Figure 1. S-curve of growth (system performance vs. time): a new technical system is invented (infancy), major problems with the system are overcome (growth), system performance approaches its optimum (maturity), and decline follows.
Confirming the position of the product on the S-curve is an important task, known as predicting the maturity of the product, and the results are used to direct product design. For example, a product in the infancy stage should have its structure and parameters optimized to speed its maturation; if the product is in the maturity or decline stage, old technology should be replaced by new technology to develop new products, so that the enterprise can earn greater profits. Four main descriptors are used to assess the life-cycle stage (or technological maturity) of a technological system on its S-curve: data on the critical performance criteria of the technology, the number of inventions in the technological field, the levels of those inventions, and the profitability of the technology's primary product. Correlating these data indicates the system's location on the S-curve. This ability of the theory of inventive problem solving demonstrates its versatility and power in design and development.
450
T. Yao, G. Duan and J. Cai
2.2 Laws of Conflict-solving

The researchers consider solving conflicts to be the core of inventive problems [3]. Any design that contains no conflict, or that resolves one by compromise, is not considered an innovation. Altshuller classified conflicts as follows: management conflicts, physical conflicts and technical conflicts. A physical conflict means that a system should possess a characteristic or function while, at the same time, the contrary characteristic or function is required. TRIZ emphasizes physical and technical conflicts and puts forward separation principles for them: separation in space, separation in time, separation based on condition, and separation between the whole and its parts. A technical conflict arises when an attempt to solve one part of the problem creates another problem. Most engineering systems have conflicts. For example [4,5]:

• The walls of a beverage can have to be thin, yet they must carry a heavy load because of the other cans stacked on top.
• An automobile has to be faster and yet safer.
• Weather stripping for a car door needs to be thick to provide superior sealing, yet thin so that the door closes easily.
The method relies first upon stating the conflict in stylized terms (there are 39 conflict parameters provided, ranging from mass, length, time and their combinations to less well-defined topics such as reliability or ease of use), and then upon classifying the resolutions (currently 40 inventive principles), which encapsulate all currently recognized manipulations, such as changing the temperature (I.P. 35), dividing the object into sub-units (I.P. 1), or using composite materials (I.P. 40).
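The lookup process can be sketched as a sparse table keyed by (improving, worsening) parameter numbers. The single entry below is illustrative only, not quoted from the published 39 × 39 contradiction matrix:

```python
# Hypothetical fragment of the contradiction matrix: each key pairs an
# improving parameter with a worsening one (both numbered 1..39); the
# value lists suggested inventive-principle numbers (1..40).
# The entry shown is illustrative, not an excerpt from Altshuller's table.
CONTRADICTION_MATRIX = {
    (14, 1): [40, 1, 15],  # strength vs. weight of moving object (illustrative)
}

def suggest_principles(improving, worsening):
    """Return suggested inventive-principle numbers, or [] if no entry."""
    return CONTRADICTION_MATRIX.get((improving, worsening), [])
```

A full implementation would simply populate the dictionary with all non-empty cells of the published matrix.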
3. Application of the TRIZ for Circular Saw Blade

3.1 Prediction of Technological Maturity of the Circular Saw Blade
The authors analyzed approximately 100 patents from 1970 to the present. Figure 2 shows the cumulative number of patents over time for circular saw blades. Data points for each five-year interval are plotted as dark blocks, and the trend in the number of patents is shown as a curve. The curve fits Altshuller's descriptive S-curve almost perfectly. Peaks in the number of patents occur around 1976 and 1998; by determining the current state of the technology, its future development can be predicted. As Figure 2 shows, the number of patents has begun to decline in recent years, indicating that new techniques should be developed to replace the old ones.
Figure 2. S-curve of circular saw
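The cumulative patent count in Figure 2 can be modeled by a logistic S-curve; the sketch below uses illustrative parameters, not values fitted to the actual patent data:

```python
import math

def s_curve(t, K, r, t0):
    """Cumulative count K / (1 + exp(-r (t - t0))); inflection at t = t0."""
    return K / (1.0 + math.exp(-r * (t - t0)))

def stage(t, K, r, t0):
    """Rough life-cycle stage from the fraction of the saturation K reached."""
    f = s_curve(t, K, r, t0) / K
    if f < 0.10:
        return "infancy"
    if f < 0.50:
        return "growth"
    if f < 0.90:
        return "maturity"
    return "decline"
```

Fitting K, r and t0 to the five-year patent counts, then evaluating `stage` at the current year, gives the maturity prediction the text describes.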
3.2 Evolution Patterns of Circular Saw Blade
In the 1990s, TRIZ experts in America developed the following eight patterns of product evolution:

• Technology follows a life cycle of birth, growth, maturity, and decline
• Increasing ideality
• Uneven development of subsystems, resulting in contradictions
• Increasing dynamism and controllability
• Increasing complexity, followed by simplicity through integration
• Matching and mismatching of parts
• Transition from macro-systems to micro-systems
• Decreasing human involvement with increasing automation
For example, pattern 4 (increasing dynamism and controllability) can reveal new directions that may replace old techniques. Pattern 4 may be divided into evolution routes: making the parts movable, increasing the number of degrees of freedom, making the system flexible, and moving toward the micro-field level. Figure 3 shows this evolution route.
Figure 3. The route of evolution on increasing dynamism
A circular saw tool, as shown in Figure 4, has a single-ply or multi-ply disc-shaped saw blade with teeth distributed over its circumference, the blade being provided on each side with a bonded covering; the invention increases dynamism and makes the part movable.
Figure 4. A circular saw blade with multi-ply disc
According to pattern 2 (increasing ideality), one of the main pillars of the TRIZ philosophy is the concept of systems evolving in the direction of increasing ideality, defined as the sum of the good things in a system divided by the sum of the bad things [6]. Idealization of the studied object is a basic method, an abstraction of the existing object. Ideality is a limit that plays an important role in scientific research; examples include the ideal gas, the ideal liquid, and the point and line in geometry. Applications of ideality include the ideal system, ideal resource, ideal method, ideal machine and ideal substance, as well as partial and whole idealization. The process of partial idealization has four patterns:

• Reinforce: strengthen the useful function by optimizing parameters and introducing additional control.
• Reduce: compensate for harmful functions, reduce or eliminate waste, adopt cheaper materials, and standardize parts.
• Generalize.
• Customize.
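The degree of ideality mentioned above can be written as a simple ratio; the numbers below are illustrative scores chosen by the analyst, not measured quantities:

```python
def ideality(useful_effects, harmful_effects_and_costs):
    """Degree of ideality = sum of useful functions / sum of harmful
    functions and costs, as defined in the text [6]."""
    return sum(useful_effects) / sum(harmful_effects_and_costs)

# A damped blade keeps the same cutting benefit while reducing the
# noise and vibration harms, so its ideality is higher (illustrative).
plain = ideality([10.0], [4.0, 2.0])    # cutting vs. noise + vibration
damped = ideality([10.0], [2.0, 1.5])
```

Comparing the two scores makes the direction of pattern 2 explicit: any change that shrinks the denominator without shrinking the numerator moves the design toward the ideal.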
For example, water-spray damping can reduce the vibration and noise of large metal circular saws. Measurements and analysis of the noise-reduction effect and frequency spectrum show that water-spray damping is a simple and practical technique [7]. The invention exploits water-molecule damping to absorb the energy the circular saw releases during operation, reducing the harmful function; the saw thus gradually approaches the ideal working state. This example establishes the relevant trend of damping methods for reducing circular saw noise. Circular saws in the past adopted a continuous structure: the whole saw had identical material and a simple structure. The idea of layer damping, in which the useful function is achieved by a sandwich-like structure, followed. Figure 5 shows the trend of reducing noise by means of damping.
Spry damping
The future
Elementary particle Actual state
Layer damping
Molecule
Layer structure
Continues structure
Figure 5. the trend of reducing the noise by means of damping on circular saw
3.3 Conflict Solving on Circular Saw Blade
A core technique in TRIZ solves problems by identifying the conflict at the heart of the problem and then using a table to identify a solution to that conflict. The conflict is defined by a pair of properties that are apparently mutually incompatible; for example, if one parameter (e.g., strength) is improved, it will probably compromise another (e.g., lightness). The method relies first upon stating the conflict in stylized terms (there are 39 conflict parameters, from mass, length, time and their combinations to less well-defined topics such as reliability), and secondly upon classifying the resolutions (currently 40 "inventive principles"), which include all currently recognized manipulations, such as changing the temperature, dividing the object into sub-units, or using composite materials. The ideal resolution requires that there is a solution in which a
material can (in the example above, for instance) be stronger but not heavier; this resolution is suggested by the table to which the contradiction matrix points. Dynamic stability is important for the circular saw as a cutting tool at high rotary speed. Table 1 lists the functions we want alongside the properties of the circular saw; using TRIZ tools, we can identify the conflict and then seek solutions. For instance, the circular saw should maintain its tensioned state and stability during sawing, the aim being to control and reduce vibration; but the inherent characteristics of the sawing process produce harmful factors. A large number of inventions can be explained successfully by TRIZ.

Table 1. Design conflicts in circular saw blade

Function We Want                 Function of Circular Saw     Solution                        IP
Reducing the vibration noise     Resonance noise              Asymmetric structure             4
                                                              Slots and holes                 31
                                                              Curvilinear materials           14
                                                              Composite materials             40
Sawing strength and stability    The whole structure          Partial quality                  3
Good tension and stiffness       Existing membrane tension    Pressing rolls or tensioning    11
Example 1: the invention relates to a circular saw blade of the sandwich type, comprising two steel discs on whose periphery the cutting elements of the blade are attached. To improve the vibration- and noise-damping properties of such blades, an intermediate layer of an energy-absorbing material is introduced. It does this by controlling the partial quality (Inventive Principle 3). Example 2: a circular saw with improved vibration characteristics in which the saw teeth are arranged in an odd number of segments [8]. Each segment includes a rigid zone and a flexible zone obtained by varying the tooth spacing, and the tooth-to-tooth spacing is randomly varied to reduce harmonic vibration. The design adopts an asymmetric structure to good effect, which can be explained with TRIZ (Inventive Principle 4). Example 3: Figure 6a shows a circular saw blade subdivided into two sectors by an interruption cut so that a noise-damping slot forms between the two sectors [9]. A serpentine-shaped cut gives each sector at least one sound-damping slot, improving the sound-damping properties; in addition, frictional action between the contacting sectors is obtained by uniform continuous pressure of the opposing contact surfaces. The serpentine shape makes good use of the idea of curving (Inventive Principle 14).
Figure 6. A series of circular saws: a. serpentine-shaped interruption cut; b. with slots consisting of arcs
Example 4: a number of circumferentially extending laser-cut slots are disposed within the body of the saw blade [10]. Each slot consists of an arc of a circle concentric with the body of the saw blade and parallel to its peripheral edge. Saw blades incorporating laser-cut slots inhibit the generation of natural vibration and, thanks to the friction in the slots, achieve better damping. The invention introduces slots to improve the vibration characteristics of the circular saw (Inventive Principle 31, Porous Materials); it is shown in Figure 6b. Example 5: before a circular saw blade begins to vibrate, the membrane tension resulting from manufacture changes its stiffness and natural frequency. To improve the stiffness of the circular saw during manufacture, tensioning or rolling techniques are introduced to reduce the effect of membrane tension. These techniques apply the "Cushion in Advance" principle of TRIZ conflict solving.

3.4 Some Trends of TRIZ Application on Circular Saw Blade

TRIZ is an effective instrument for product innovation. This paper has studied the design of circular saws based on TRIZ theory, which is useful in engineering design practice. By grasping the current state of circular saw design technology, researchers can predict its future development and find new designs or innovative product ideas. Starting from the functions we want, we identify the conflicting functions among the properties of the circular saw and solve the problems using the inventive principles; this approach provides ideas for new designs and needs further development in subsequent studies. In new product development, TRIZ provides an efficient way of tapping into knowledge: S-curve analysis and technology forecasting inform and challenge product development. As the evolution progresses, people have started trying to apply molecular damping to noise reduction.
To date, however, no application of field damping to noise reduction has been reported. In Figure 5, past inventions followed an evolution route from layer structure to
molecular damping. We can predict that, in the future, other more advanced damping methods will be applied to the design of circular saw systems. For example, to reduce vibration noise we may seek better composite materials; Inventive Principle 40 applies in this situation.
4. References
[1] Salamatov Y (1999) TRIZ: The Right Solution at the Right Time. Insytec, The Netherlands
[2] Petrov V (2002) The law of system evolution. TRIZ Journal, March 2002, http://www.triz-journal.com
[3] Smith EM (2003) From Russia with TRIZ. Mechanical Engineering 125(3):18–20
[4] Kowalick J (1994) Organizational transformation using twelve leading-edge design practices: how highly effective designers achieve world-class engineering systems. Special paper presented at the 12th Annual Taguchi Symposium, Rochester, NY, October 19, 1994
[5] Domb E, Kowalick J (1995) Inventive principles and QFD: revolutionary, rapid, and integrated approach to marketplace capture. 7th Symposium on Quality Function Deployment, Novi, MI, June 11–13, 1995
[6] Mann DL (2003) Better technology forecasting using systematic innovation methods. Technological Forecasting and Social Change 70(8):779–795
[7] Ma J, Lu J, Huang X (2000) Water spray damping applied to reducing cutting noise in metal circular sawing. Vol 19:621–622
[8] Brown EW (1981) Attenuated vibration circular saw. US Patent 4,270,429, International Paper Company, June 2, 1981
[9] Jansen-Herfeld R (1986) Circular saw blade. US Patent 4,584,920, Richard Jansen GmbH, Remscheid, Germany, April 29, 1986
[10] Carter LI Jr (1988) Circular saw blade with circumferentially extending laser-cut slots. US Patent 4,776,251, Pacific Saw and Knife Company, Portland, OR, October 11, 1988
Chapter 4 Simulation and Optimisation in Design
Research on Collaborative Simulation Platform for Mechanical Product Design .... 459
Zhaoxia He, Geng Liu, Haiwei Wang, Xiaohui Yang
Development of a Visualized Modeling and Simulation Environment for Multi-domain Physical Systems .... 469
Y.L. Tian, Y.H. Yan, R.M. Parkin, M.R. Jackson
Selection of a Simulation Approach for Saturation Diving Decompression Chamber Control and Monitoring System .... 479
Diming Yang, Xiu-Tian Yan and Derek Clarke
Optimal Design of Delaminated Composite Plates for Maximum Buckling Load .... 489
Yu Hua Lin
Modeling Tetrapods Robot and Advancement .... 499
Q.J. Duan, J.R. Zhang, Run-Xiao Wang, J. Li
The Analysis of Compression About the Anomalistic Paper Honeycomb Core .... 509
Wen-qin Xu, Yuan-jun Lv, Qiong Chen, Ying-da Sun
C-NSGA-II-MOPSO: An Effective Multi-objective Optimizer for Engineering Design Problems .... 519
Jinhua Wang, Zeyong Yin
Material Selection and Sheet Metal Forming Simulation of Aluminium Alloy Engine Hood Panel .... 529
Jiqing Chen, Fengchong Lan, Jinlun Wang, Yuchao Wang
Studies on Fast Pareto Genetic Algorithm Based on Fast Fitness Identification and External Population Updating Scheme .... 539
Qingsheng Xie, Shaobo Li, Guanci Yang
Vibration Control Simulation of Offshore Platforms Based on Matlab and ANSYS Program .... 549
Dongmei Cai, Dong Zhao, Zhaofu Qu
Study on Dynamics Analysis of Powertrains and Optimization of Coupling Stiffness .... 561
Wenjie Qin, Dandan Dong
Parametric Optimization of Rubber Spring of Construction Vehicle Suspension .... 571
Beibei Sun, Zhihua Xu and Xiaoyang Zhang
The Development of a Computer Simulation System for Mechanical Expanding Process of Cylinders .... 581
Shi-yan Zhao, Bao-feng Guo, Miao Jin
Rectangle Packing Problems Solved by Using Feasible Region Method .... 591
Pengcheng Zhang, Jinmin Wang, Yanhua Zhu
Aircraft's CAD Modeling in Multidisciplinary Design Optimization Framework .... 601
X.L. Ji, Chao Sun
Optimization of Box Type Girder of Overhead Crane .... 609
Muhammad Abid, Muhammad Hammad Akmal, Shahid Parvez
Research on Collaborative Simulation Platform for Mechanical Product Design

Zhaoxia He, Geng Liu, Haiwei Wang, Xiaohui Yang

Northwestern Polytechnical University, Xi'an, P. R. China, 710072
Abstract A collaborative simulation platform that plays an important role in product innovation is presented. Different actors can collaborate in this environment, which supports the process of design and assessment for mechanical products. In this paper, the functional modules and the technical architecture of the system are proposed. The Simulation Requirement Model (SRM), database management based on meta-information, and the definition of simulation flows are discussed in detail as key technologies of the platform. A simulation flow can be constructed from simulation components, which are abstracted from simulation experience and requirements. The system has been designed using many advanced software technologies, including object-oriented design, Java, Java Content Repository, and the relational database MySQL. An engineering application is used to demonstrate the practicability of the presented platform. Keywords: collaborative simulation integrated platform; Simulation Requirement Model (SRM); simulation component

1. Introduction
With the development of high-quality products, the design process is becoming more and more complicated. In order to speed up development, shorten design cycle time and reduce costs, simulation technologies provide unprecedented and versatile means for the design of products. These technologies have evolved from specialized analysis tools into integrated platforms supporting all phases of product development. Yet, as in the antiquated approach, most engineering software tools are still used with little or no communication between them. Within a work group, engineers and designers cannot conveniently exchange data, knowledge and functionality because of inconsistent formats. For example, some CAD files are not compatible with most CAE analysis software, so designers and analysts cannot exchange data directly and must find intermediate formats for exchanging data or files. Furthermore, it is important to find useful information in large amounts of simulation data and to reuse simulation experience. At present, more and more commercial software developers and researchers are focusing their research on collaborative simulation technologies [1,2,3].
460
Z. He, G. Liu, H. Wang and X. Yang
A collaborative simulation platform will play an important role in all phases of product design, including scheme design, initial prototype design, trial model design and production. By making good use of such an environment, designers, analysts, project managers and even chief engineers can complete product development together.
2. The Framework of the Collaborative Simulation Platform
The collaborative simulation platform is a design-oriented platform that provides users with a uniform GUI (Graphical User Interface). It is an enterprise-level simulation environment that can administrate workflows, tools, data, reports and knowledge. The system implements grid computing and collaborative simulation based on SOA (Service Oriented Architecture). Through parametric modeling, simulation flows can be created and executed in this environment. Based on the classic modes of product analysis, users can acquire simulation flow templates and computing task templates.

2.1 The Functional Modules of the Collaborative Simulation Platform
Database management, simulation, flow management and virtual reality are indispensable technologies for constructing a collaborative simulation platform. Furthermore, the platform should possess some important functions: it should encapsulate commercial software packages, provide a visual environment and interact with existing systems such as PDM. Figure 1 shows the functional modules of the collaborative simulation platform. The platform comprises five tiers: Users, Interaction, Management, Distributing and Tools. The top tier in this framework is Users, which includes project managers, designers, analysts, etc. The Interaction tier contains the Web Portal [4,5,6] and the CAE Workbench. The Portal performs user administration, notification, reporting, and data and workflow management. The Workbench includes the flow editor and the simulator, so it can create simulation flows based on simulation
Figure 1. The functional modules of collaborative simulation platform
Research on Collaborative Simulation Platform for Mechanical Product Design
461
components and execute simulation flows in the simulator. In the middle tier, Management, there are two databases and four functional modules that provide management services for users, flows, heterogeneous engineering data [7] and reports. The Distributing tier supports grid computing and the transfer of software packages, so that computing time can be shortened. At the bottom tier, commercial CAE packages have been integrated, including Ansys, Nastran, ADAMS and software packages written by users.

2.2 Technical Architecture of the Collaborative Simulation Platform
The collaborative simulation platform makes use of a number of advanced software technologies. Since Java is used as the implementation language, the system can run on different machines and operating systems without modifying the source code. Java also makes it easy to create GUIs that can be used both in applications and in applets within Web browsers. From the user's viewpoint, two clients can be used with the collaborative simulation platform: a Web browser and the Eclipse RCP Client allow users to work interactively with the system. Figure 2 shows the technical architecture and the relationships between its components.
Figure 2. The technical architecture of collaborative simulation platform
The Web browser provides a centralized way in which registered users can acquire and edit simulation data, control simulation workflows and generate simulation reports. It runs in the B/S (Browser/Server) pattern, making use of Ajax (Asynchronous JavaScript and XML) [8] technology. A J2EE [9] server acts as the bridge between the client and the database. Following current network development practice, a REST (Representational State Transfer) service supplies a standard and simple interface. The J2EE tier uses DAOs (Data Access Objects) and CAOs (Content Access Objects) to control access to simulation data and its versions. In the Eclipse RCP Client, simulation assignments are abstracted into simulation components drawing on extensive analysis experience. Simulation components register on the simulation service bus by means of a simulation Agent. EMF (Eclipse Modeling
Framework) and GEF (Graphical Editing Framework) technologies make it easier to construct simulation components, which can be dragged freely in the flow editor of the CAE Workbench. Users create simulation flows by defining the properties of components, and the whole analysis task is carried out by the simulation flow engine in the simulator. The platform's capabilities can easily be extended by modifying, replacing or adding components. Popular commercial CAE software packages, as well as programs written by users, are integrated based on .NET and COM technologies. Web services and REST services integrate local, remote and network services. The DAO and CAO provide the Eclipse RCP client with access to the database. In the data service tier, the ORM (Object Relational Mapping) of Hibernate implements data persistence. The database system is composed of a MySQL server and a Jackrabbit server. MySQL is a popular open-source database with many versions that run on many operating systems. Apache Jackrabbit is a fully conforming implementation of the Content Repository for Java Technology API (JCR) [10,11]. A content repository is a hierarchical content store with support for structured and unstructured content.
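The DAO layering described above can be sketched in Java. The interface and names below are illustrative assumptions, not the platform's actual API; a real implementation would delegate to Hibernate and MySQL rather than an in-memory map.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the DAO pattern: the data-service tier hides storage
// details behind a narrow interface, so both the Web client and the Eclipse
// RCP client use the same access path regardless of the backing store.
interface SimulationDataDao {
    void save(String id, String payload);
    Optional<String> find(String id);
}

// In-memory stand-in for the Hibernate/MySQL-backed implementation.
class InMemorySimulationDataDao implements SimulationDataDao {
    private final Map<String, String> store = new HashMap<>();

    @Override
    public void save(String id, String payload) {
        store.put(id, payload);
    }

    @Override
    public Optional<String> find(String id) {
        return Optional.ofNullable(store.get(id));
    }
}
```

Swapping the in-memory class for a Hibernate-backed one would leave every caller unchanged, which is exactly the decoupling the DAO/CAO layer provides.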
3. The Key Technologies of the Collaborative Simulation Platform Supporting Mechanical Product Design

3.1 Simulation Requirement Model (SRM)

Figure 3. a. Object-oriented SRM view; b. analysis-oriented SRM view; c. flow-oriented SRM view
The Simulation Requirement Model is a model for organizing and managing tasks; it prescribes the simulation analysis object, the analysis type and the workflow relationships on the platform. All kinds of resources must be considered, including products, flow information, simulation data, etc. The SRM offers three kinds of views: object-oriented, analysis-oriented and flow-oriented (see Figure 3).
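The three SRM views can be thought of as projections of one underlying task table. The following Java sketch is an assumption for illustration only (the paper does not publish its index scheme): tasks tagged with an Object Index and an Analysis Type Index are regrouped to yield the object-oriented and analysis-oriented views.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical task record; field names mirror the index numbers mentioned
// later in this section (Object Index, Analysis Type Index).
class SimTask {
    final int objectIndex;       // which product object the task belongs to
    final int analysisTypeIndex; // which analysis type the task performs
    final String name;

    SimTask(int objectIndex, int analysisTypeIndex, String name) {
        this.objectIndex = objectIndex;
        this.analysisTypeIndex = analysisTypeIndex;
        this.name = name;
    }
}

class SrmViews {
    // Object-oriented view: tasks grouped by product object (like a BOM).
    static Map<Integer, List<SimTask>> byObject(List<SimTask> tasks) {
        return tasks.stream().collect(Collectors.groupingBy(t -> t.objectIndex));
    }

    // Analysis-oriented view: the same tasks grouped by analysis type.
    static Map<Integer, List<SimTask>> byAnalysisType(List<SimTask> tasks) {
        return tasks.stream().collect(Collectors.groupingBy(t -> t.analysisTypeIndex));
    }
}
```

Because both groupings draw from the same task list, converting between the two views loses no information, which matches the reversible object/analysis transformation described below.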
The object-oriented view is based mainly on the analyzed product, and it shows the analysis tasks and sub-tasks corresponding to products. The view is organized from system to assembly and then to part, like the BOM of a mechanical system. Assemblies have both system and structure analyses, while parts have only structure analyses. The analysis-oriented view mainly represents the simulation analysis types; it consists of the simulation objects and their analysis tasks. Simulation analyses can be separated into two types, system and structure: system analysis includes all system-level analysis types, and structure analysis includes all structure-level analysis types. The flow-oriented view shows the properties of tasks, which comprise the simulation object, analysis type, task prefix, transformation condition and executor. For the modeling of the SRM, the technology for transforming between views, based on the analysis object, type, task and transformation condition, is important. From the relationships between the object-oriented, analysis-oriented and flow-oriented views, a mathematical model of these relationships can be established. Transformation between the object-oriented and the analysis-oriented view is implemented via index numbers, which include the Object Index, Analysis Type Index, Sub Job Index, etc.; the mathematical relationships between the different index numbers achieve the transformation of views. The flow-oriented view includes much more information than the other two views, so the transformation from the flow-oriented view to the other two is irreversible.

3.2 Modeling Technology of Simulation Flow Based on Simulation Components

On the collaborative simulation platform, users can customize simulation flows for mechanical products. Within the Eclipse RCP framework, the modeling of a simulation flow is implemented with simulation components, which can be dragged freely.
Components are defined with EMF and GMF technologies and are abstracted from simulation experience and the requirements of national or industry standards. Simulation components allow users to focus on developing the product rather than on using software tools or managing data. Figure 4 shows the general simulation flow for mechanical products, which integrates structure and multi-body system simulation; users can define different sub-flows according to their own requirements. In this study, the simulations obtain the mechanical characteristics of structures and implement kinematics and dynamics analyses of multi-body systems, without involving other disciplines. The system and structure simulations each include basic and advanced analyses. The main steps, namely CAD modeling, model clean-up, basic analysis, advanced analysis and validation, are integrated into the framework of the collaborative simulation platform. Within the main steps there are many sub-steps that can be constructed from simulation components, and the properties of the components make it easier to finish the simulation tasks. Simulation components make analysis software compliant with the platform and enable it to be easily used and managed from within the system. A component is composed of operations and property definitions. The properties of components are designed and implemented in the encapsulated
software, which has undergone secondary development. At present, simulation components are being developed for Solidworks, Ansys, ADAMS and Dytran through standard interfaces. Since the granularity of components is important for reducing cost and saving time, rich experience with the software packages and professional knowledge are decisive.
Figure 4. The basic simulation flow for mechanical products
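The component-and-flow idea of Section 3.2 can be sketched as follows. The interface, class names and chaining behaviour are hypothetical simplifications of the platform's EMF/GEF-based components, not its real API.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical component contract: each component carries a property map
// (as described in the text) and one operation that consumes the upstream
// result and produces its own.
interface SimulationComponent {
    String name();
    Map<String, String> properties();
    String execute(String input);
}

// Trivial component used for demonstration: appends its name to the input.
class EchoComponent implements SimulationComponent {
    private final String name;
    EchoComponent(String name) { this.name = name; }
    public String name() { return name; }
    public Map<String, String> properties() { return new LinkedHashMap<>(); }
    public String execute(String input) { return input + "->" + name; }
}

// Minimal stand-in for the simulation flow engine: run components in order,
// feeding each component's output to the next.
class SimulationFlow {
    private final List<SimulationComponent> steps;

    SimulationFlow(List<SimulationComponent> steps) { this.steps = steps; }

    String run(String input) {
        String result = input;
        for (SimulationComponent step : steps) {
            result = step.execute(result);
        }
        return result;
    }
}
```

A flow such as CAD modeling, model clean-up and analysis then becomes a list of components handed to the engine, which is the shape of the drag-and-drop flows described above.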
3.3 Integration Technology of Data Management Based on Meta-information

Two kinds of database are used for the storage of simulation data: the relational database MySQL and Apache Jackrabbit. Jackrabbit is fully JSR-170 compliant. JSR-170 states that a content repository is composed of a number of workspaces; a repository can have one or more workspaces. Each workspace contains a single rooted tree of items, and each item is either a node or a property. The values of the nodes and properties are exactly the meta-information.
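A minimal sketch of the JCR-style data model just described: a rooted tree whose items are nodes carrying named properties (the meta-information). This mirrors the structure only; the real platform uses the javax.jcr API via Jackrabbit.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of a workspace's rooted item tree: every item below is a node,
// and each node holds its properties (the meta-information) by name.
class ContentNode {
    final String name;
    final Map<String, String> properties = new LinkedHashMap<>();
    final Map<String, ContentNode> children = new LinkedHashMap<>();

    ContentNode(String name) { this.name = name; }

    ContentNode addChild(String childName) {
        ContentNode child = new ContentNode(childName);
        children.put(childName, child);
        return child;
    }

    // Resolve a slash-separated path like "project/run1" relative to this node.
    ContentNode get(String path) {
        ContentNode current = this;
        for (String part : path.split("/")) {
            current = current.children.get(part);
        }
        return current;
    }
}
```

Storing a simulation run then amounts to adding a node under the project node and attaching properties such as the solver used, which is how hierarchical meta-information keeps heterogeneous files searchable.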
Figure 5. Structure of JSR-170 compliant application
Figure 5 describes the structure of an application developed using the JSR-170 API. At run time, this application can work with content repository 1, 2 or 3. Content repository 1 may use an RDBMS as its underlying data store, whereas content
repository 2 may use the file system as its underlying data store, while some other repository could use a mix of these. JSR-170 also defines the features or operations that must be supported by compliant repositories [11] (see Figure 6). Level 1 includes functionality for reading repository content, exporting content to XML and searching. Level 2 is a superset of Level 1: in addition to Level 1's functionality, it defines methods for writing content and importing content from XML. Advanced options: the specification defines five additional functional blocks: versioning, (JTA) transactions, query using SQL, explicit locking and content observation.
Figure 6. Three different compliant repositories
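The compliance levels above can be expressed as nested feature sets. The enum below is only an illustration of the grouping given in the text, not part of the JSR-170 API.

```java
import java.util.EnumSet;
import java.util.Set;

// Feature names follow the textual description of JSR-170 levels.
enum JcrFeature {
    READ, XML_EXPORT, SEARCH,                                   // Level 1
    WRITE, XML_IMPORT,                                          // added by Level 2
    VERSIONING, TRANSACTIONS, SQL_QUERY, LOCKING, OBSERVATION   // advanced options
}

class JcrLevels {
    static final Set<JcrFeature> LEVEL_1 =
        EnumSet.of(JcrFeature.READ, JcrFeature.XML_EXPORT, JcrFeature.SEARCH);

    // Level 2 is a superset of Level 1, as the specification requires.
    static final Set<JcrFeature> LEVEL_2 =
        union(LEVEL_1, EnumSet.of(JcrFeature.WRITE, JcrFeature.XML_IMPORT));

    static Set<JcrFeature> union(Set<JcrFeature> a, Set<JcrFeature> b) {
        EnumSet<JcrFeature> result = EnumSet.copyOf(a);
        result.addAll(b);
        return result;
    }
}
```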
4. Illustrative Application
In this section we illustrate the application of the collaborative simulation platform through the integrated analysis of a launcher for an airborne weapon. The development of this instrument requires designers, simulation engineers and partners to collaborate in a uniform environment. Everyone uses the same tools and processes to work with the same data, so the risk of errors associated with integration and data translation is greatly reduced. Because it is subjected to aerodynamic loads, impact loads and the flight attitude of the combat aircraft, the launcher is composed of many parts that need to be analyzed in order to master their mechanical properties. In this study, many analyses were accomplished, including statics, impact dynamics and multi-body dynamics. During the simulation process, a large amount of data must be managed effectively. On the other hand, it is increasingly important to establish a communication channel; the channel should be unobstructed and should provide managers with useful information. Furthermore, designers should solve more problems in scheme design or initial prototype design to shorten the development cycle time.
Figure 7. Application of Airborne Weapon launcher on collaborative simulation platform
Figure 7 demonstrates an application, performing data and flow management and accomplishing system and structure simulation on the collaborative simulation platform. On the system, the 3-D geometry of the launcher was modeled in Solidworks, ADAMS carried out the multi-body dynamics analysis, and the static and impact dynamics simulations were implemented in Ansys and Dytran, respectively. Additionally, a computing program for aerodynamic force written by users was integrated into the platform. The launcher satisfied the structural strength requirements after only one static test. Presently, the Web Portal has implemented its main functions, while the GUI of the CAE Workbench is still being developed. The flow editor can perform the modeling of simulation flows, but the GUI of the simulator has not been finished. After a simulation flow is executed, the analysis data are saved in the database and the content repository. Users can check and edit simulation data, perform user administration and manage workflows. The MySQL database and the content repository are under development; their prototypes have been finished and are being used in the management of the system.
5. Conclusions
The framework and the technical architecture of a collaborative simulation platform have been presented in this paper. The key technologies have been discussed based on the technical architecture, and the SRM has been proposed. The practicability of the system is illustrated by the launcher application. The platform uses a number of advanced computing and network technologies to make the system easily extendable and to provide an enterprise-level environment
which allows designers and other engineers to collaborate effectively in the process of developing mechanical products. With the transformation of the design mode, collaborative simulation technologies are becoming more and more important for the creative design of products. When commercial analysis tools, simulation flows and users are integrated into the collaborative simulation platform, the result is increased quality together with reduced cost and design cycle times.
6. Acknowledgements
This work is supported by the National 863 High-Tech R&D Key Program under grant No. 2006AA04Z161 and the National 863 High-Tech R&D Program under grant No. 2006AA04Z120.
7. References
[1] Wang Zhenhua, Deng Jiati, Sui Pengfei, Wang Yongbo, (2007) Complex engineering system synthesis design optimization framework. Journal of Beijing University of Aeronautics and Astronautics 33(2):192-196 (in Chinese)
[2] Han Minghong, Deng Jiati, (2004) Study on integrated framework of multidisciplinary design optimization for complex engineering systems. Chinese Journal of Mechanical Engineering 40(9):100-105 (in Chinese)
[3] Guo Bin, Xiong Guang-leng, Chen Xiao-bo, (2002) Research on collaborative simulation platform supporting design of complex product. Machinery & Electronics 4:26-29 (in Chinese)
[4] Hongman Kim, Scott Ragon, James Mullins, (2006) A Web-based collaborative environment for Bi-Level Integrated System Synthesis (BLISS). AIAA-2006-1618
[5] Hongman Kim, Brett Malone, Jaroslaw Sobieszczanski-Sobieski, (2004) A distributed, parallel, and collaborative environment for design of complex systems. AIAA-2004-1848
[6] Zhang He-ming, Xiong Guang-leng, (2003) Web-based multi-discipline collaborative design and simulation platform. Computer Integrated Manufacturing Systems 9(8):704-709 (in Chinese)
[7] Zheng Xiao-feng, Cai Rui-ying, (2003) Heterogeneous system information integrity by data warehouse method. Journal of Nanjing University of Technology 25(1):92-95 (in Chinese)
[8] Dave Crane, Eric Pascarello, Darren James, (2006) Ajax in Action. Beijing: Posts & Telecom Press (in Chinese)
[9] Zhao Yong-yi, Su Hong-yi, Hu Shao-hui, (2007) Design and implementation of new web application based on AJAX and J2EE. Computer Engineering and Design 28(1):189-192
[10] Alfresco, (2005-12-05) JSR-170 Development Plan. http://wiki.alfresco.com/wiki/JSR170_Development_Plan
[11] Sunil Patil, (2006-10-04) What is Java Content Repository. http://www.onjava.com/pub/a/onjava/2006/10/04/what-is-java-content-repository.html
Development of a Visualized Modeling and Simulation Environment for Multi-domain Physical Systems

Y.L. Tian1, Y.H. Yan1, R.M. Parkin2, M.R. Jackson2

1 Institute of Modern Design and Analysis, School of Mechanical Engineering and Automation, Northeastern University, Shenyang, Liaoning, China 110004
2 Mechatronics Research Centre, Wolfson School of Mechanical and Manufacturing Engineering, Loughborough University, Leicestershire, UK
Abstract This article introduces the development of a software environment for both visualized modeling and simulation of mechatronic multi-domain systems. The environment uses the Modelica language to model multi-domain systems and an open-source Modelica compiler, the OpenModelica compiler, to obtain executable files for simulation. Basically the environment resembles a commercial simulation programme called Dymola, which contains a proprietary Modelica compiler. Additional ideas from other commercial products are adopted and integrated into the environment to establish a more general framework. Based on such a framework, it is demonstrated how academic researchers can develop integrated software for the simulation and analysis of complex mechatronic systems without much dependence on commercial products. The major features of the environment include tree browsers of Modelica libraries, a Modelica text editor and 2-D diagram editor, simulation output data plotting windows, flexible interconnections to external 3-D visualization software, and custom scripts for design of experiments.

Keywords: Modelica; visualized modeling and simulation; multi-domain systems
1. Introduction
Mechatronic systems are generally complex systems composed of multi-domain subsystems, such as mechanical, hydraulic, electrical, control and optical subsystems. In general, mechatronic systems design is complicated, and it is becoming more so all the time. Due to continuous demands to reduce the costs and shorten the time of product development, virtual prototyping techniques are becoming more and more popular [1]. A key function in virtual prototyping is simulation-based design [2]. Typical simulation-based design approaches are design studies, design of experiments and design optimisation. To facilitate the simulation-based approach, it is often desirable to combine both modeling and simulation functions within a single environment. In such an environment,
470
Y.L. Tian, Y.H. Yan, R.M. Parkin and M.R. Jackson
designers can get immediate feedback from simulation results and then reconfigure the design. The flexibility and coverage of the environment are crucial but also hard to achieve. There are many commercially available software programmes supporting the simulation of specific engineering domains. For mechatronic systems with mainly multi-body subsystems, typical programmes are ADAMS and Visual Nastran. Most such programmes are optimised for specific subsystems but lack support for others. As a result, one has to combine them with other programmes for the modeling and simulation of a mixed-domain mechatronic product. On the other hand, there are also commercially available software programmes for the simulation of general multi-domain systems. Typical examples are Dymola, MathModelica [3], 20-SIM [4] and Matlab Simulink. However, such programmes also have many drawbacks. For example, the 3-D animation environments for post-processing of simulation results in most of these programmes are not as friendly as other CAD packages, and they lack other useful features such as online simulation and dynamic changing of geometrical parameters. Matlab, for instance, provides the SimMechanics toolbox for the modeling and simulation of multi-body systems within the Matlab-Simulink environment; however, this module has only basic functions for the post-visualization and animation of multi-body systems. Dymola provides a proprietary 3-D visualization window, which uses user-provided DXF drawings to show 3-D geometries, but this leads to a number of limitations: geometrical dimensions cannot be changed dynamically, and drawings have to be regenerated in simulation-based design loops. Much effort was also devoted to the development of computer-aided conceptual design environments in the last decade; a well-known example is Schemebuilder, presented in [5].
Nevertheless, such efforts were mostly discontinued due to the challenges of automatically converting concept schemes into concrete models. It is believed that an integrated environment supporting component reuse and multi-domain systems simulation may solve this dilemma. In view of the need for a powerful and flexible environment for the simulation and design of mechatronic multi-domain systems, this article presents the development of an environment for multi-domain systems simulation called Visual Modelica Laboratory (hereafter abbreviated as Vimola). The article is organized as follows. Section 2 discusses the object-oriented multi-domain modeling and visualization functions of the Modelica language, and briefly introduces the OpenModelica compiler. In Section 3, some simulation environments based on the Modelica language are briefly introduced and compared. In Section 4, the detailed system architecture of the Vimola environment is illustrated and explained. In Section 5, an exemplar mechatronic system is modelled and simulated in the environment. Finally, Section 6 draws conclusions and discusses the future work of the Vimola project.
2. Modelica Modeling Language
Modelica is a non-proprietary modeling language [6]. It allows multi-domain, multi-formalism, general-purpose modeling, and it is a neutral format for model representation. Its predecessor is the Dymola modeling language, which was initially
Modeling and Simulation Environment for Multi-domain Physical Systems
471
designed by Elmqvist in the late 1970s. Later, in the 1990s, the developers of previous object-oriented modeling languages were brought together to create a new multi-domain modeling language, which was then named Modelica. Modelica supports block-diagram modeling, causal modeling (e.g. bond graphs) [7], acausal modeling (e.g. object-oriented diagrams), and other formalisms such as finite state automata and Petri nets. In Modelica, a physical component is often coded as a model with various ports [8]. Models are classified and stored in hierarchical libraries and can be inherited or extended. A model normally consists of the following sections: inheritance declarations, parameters and variables, diagram annotations, equations, and algorithms. Note that the equation section of a model normally describes the intrinsic state equations and the interconnections between the ports of the model and those of other models, and such equations can be symbolically manipulated; the algorithm section, by contrast, normally contains sequential procedures. In a Modelica compiler, the equations and algorithms in Modelica models are first collected and translated into systems of equations, and these equations are then arranged into forms with minimum dependence. Finally, the equations in BLT form are solved by ODE/DAE solvers (e.g. DASSL). Annotations in Modelica models are of great importance for graphical user interfaces and model visualization. These sections mainly contain documentation and graphical information for diagram editing and icon editing; additionally, there are geometrical data for animation. The diagram (or 2-D schematic) of a Modelica model can be defined by drawing a composition diagram: icons representing the components are positioned and their ports are then connected. While the diagram represents the inner graphical information of a Modelica model, the icon is its outer display, i.e. the icon represents the model when it is displayed within other models.
It should be noted that, as a textual description language, Modelica is prone to errors, especially when programming component connections or parameterization. It is therefore very important to visualize Modelica icon and diagram information, and besides a Modelica text editor it is indispensable to include a 2-D graphical model editor. There are also a number of free libraries and packages provided by the Modelica Association, for example the Modelica Standard Library (MSL), Modelica Multi-Body Systems (MBS), and basic libraries for hydraulics, pneumatics and gear train modeling. With these libraries, one can easily build most mechatronic systems from reusable components; however, for complicated systems one may have to purchase commercial packages or develop one's own packages based on the MSL. For example, a Modelica package was built for the design, simulation and analysis of the dynamical properties of various design schemes for Profile Independent Wood-Moulding Machines (PIMM) [9]. An open-source Modelica compiler, the OpenModelica compiler [10], can translate Modelica to C code and then execute the simulation by linking with selected numerical ODE or DAE solvers. Although still under development, it provides a useful tool to test Modelica models free of charge. Because of these properties for multi-domain physical systems, a modeling and simulation environment based on the Modelica language is explored in this article.
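To illustrate what the compiler-generated simulation code ultimately does, namely advancing the model's state equations with a numerical solver, here is a deliberately simple fixed-step explicit Euler integrator applied to x' = -x. Real Modelica tools link against variable-step DAE solvers such as DASSL; this sketch is only a conceptual stand-in.

```java
import java.util.function.DoubleUnaryOperator;

// Fixed-step explicit Euler: x(t+dt) is approximated by x(t) + dt * f(x(t)).
// This is the simplest possible ODE stepping scheme, shown for illustration.
class EulerSolver {
    static double integrate(DoubleUnaryOperator f, double x0, double dt, int steps) {
        double x = x0;
        for (int i = 0; i < steps; i++) {
            x += dt * f.applyAsDouble(x);
        }
        return x;
    }

    public static void main(String[] args) {
        // x' = -x with x(0) = 1 has the exact solution exp(-t).
        double x = EulerSolver.integrate(v -> -v, 1.0, 0.001, 1000);
        System.out.println(x); // close to exp(-1)
    }
}
```

With dt = 0.001 over 1000 steps, the result lands near exp(-1), the exact solution at t = 1; shrinking dt further reduces the discretization error, which is the trade-off variable-step solvers manage automatically.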
3. Existing Simulation Environments Based on the Modelica Language

There is commercially available multi-domain systems simulation software based on the Modelica language: Dymola and MathModelica [3]. Dymola (Dynamic Modeling Laboratory) is from Dynasim AB and MathModelica from MathCore Engineering AB. The difference between them is that Dymola is more mature in simulation, whereas MathModelica puts more emphasis on technical computation. The main requirements of a modeling and simulation environment for mechatronic multi-domain systems were specified in [1], where the author drew the conclusion that Dymola with the Modelica language meets all the requirements of multi-domain simulation, and indeed it does. Dymola is very strong in its multi-domain simulation ability. The modeling environment of Dymola includes a 2-D Graphical User Interface (GUI) for model assembling, editing and browsing, and a text editor with Modelica syntax highlighting and annotation auto-hiding. The simulation environment of Dymola includes a strong Modelica compiler, a simple 3-D multi-body animation module, and a 2-D plot module for post-processing of simulation results. Apart from the relatively weak 3-D multi-body animation module, Dymola is very handy for the dynamic simulation of mechatronic multi-domain systems. MathModelica provides a Modelica simulation environment that is closely integrated with Mathematica and Microsoft Visio. There are three main subsystems in MathModelica: a graphic editor for Modelica model editing and browsing, Mathematica notebooks for online documentation and advanced scripting, and a simulation centre containing a Modelica compiler and simulators.
Unlike Dymola, MathModelica is an integration and extension of professional software: the graphic editor is a customization and extension of the diagram visualization of Microsoft Visio, the Mathematica notebooks come from Wolfram Research, and the simulation centre from Dynasim. It can be seen that the key feature of MathModelica is to provide a rich technical computing environment (integrated with Mathematica) as well as a modeling and simulation environment. There are also an OpenGL-based 3-D visualizer and animation system called MVIS (Modelica VISualizer) for 3-D multi-body animation, and some CAD-to-Modelica translators for mechanical models, which communicate with the MathModelica environment as individual subsystems. However, these subsystems are as elementary as the MathModelica software itself. Based on the study of the above two Modelica-based multi-domain modeling and simulation environments, we conclude that the following modules are essential components of Modelica-based multi-domain systems simulation software:

1. A 2-D graphic model editor. The graphic model editor is a user interface for graphical programming with Modelica.
2. A Modelica text editor. The text editor is the most basic component, supporting Modelica text programming and syntax highlighting, because all models are stored as text files; for example, the graphic model
information is stored in Modelica text format, and simulations are executed from a Modelica text file.
3. A tree browser module for the MSL. A tree view of the MSL is the basic component for modeling with the Modelica language: reusable components can be dragged and dropped from the Modelica Standard Library and connected in the 2-D graphical user interface, so that physical systems can be modelled for simulation.
4. A simulation centre. The software should have a Modelica compiler or simulation engine for translating Modelica models into C code and then executing the simulation.
5. A 2-D plotting module. The 2-D plotting module plots the simulation data into diagrams, which can be used for analysis; it can be included in the simulation centre module.
6. A 3-D visualizer. The 3-D visualizer can be viewed as a translation from Modelica MBS to a CAD model; the only difference is that this CAD model should easily display the mechatronic multi-body assembly dynamically during and after Modelica model simulation.

Some other auxiliary modules, such as a CAD-to-Modelica translator (for mechanical multi-body systems) and a Simulink-to-Modelica translator (for control systems), can help to improve design and simulation integration.
4. Vimola Simulation Environment
Following the analyses of the Modelica language and the Modelica-based simulation environments, we can see that all the prerequisites for developing multi-domain modeling and simulation software for mechatronic systems are at hand: the free Modelica language, which fulfils the multi-domain modeling requirement; the free Modelica Standard Library, which easily realizes module reuse; and the free open-source OpenModelica compiler. Vimola will have all the essential modules analysed above, inherited from Dymola too, but it extends Dymola's 3-D multi-body animation module by communicating with the specialist 3-D manufacturing simulation and visualization software Visual Components.

4.1 System Architecture
The detailed system architecture of the Vimola environment is shown in Figure 1. There are five main sub-modules in the Vimola environment: the MSL tree browser module, the 2-D diagram editor module, the Modelica text editor module, the MBS 3-D visualization module, and the Modelica compiler and simulation module. The 2-D plotting and output log record modules are subsidiary to the Modelica compiler and simulation module. With the MSL tree browser module, users can easily drag and drop components from the MSL, which makes programming in both the textual and the graphical format of the Modelica language more convenient.
474
Y.L. Tian, Y.H. Yan, R.M. Parkin and M.R. Jackson
The 2-D diagram editor module in Vimola applies OGL (Object Graphics Library) and is customized for graphical display and editing of components from the MSL and user-defined packages. The Scintilla text editor was extended to support Modelica syntax highlighting and auto-folding of Modelica models and annotations. The Modelica highlighting function in the customized Scintilla text editor is almost the same as that of the Dymola text editor, and the model and annotation auto-folding substitutes for the model and annotation auto-hiding functions of Dymola.
Figure 1. System Architecture of Vimola Environment
Visual Components creates multi-body components corresponding to the mechanical multi-body systems modelled in the Modelica language. The main purpose of Visual Components in the Vimola environment is to display the multi-body assembly dynamically during and after simulation; this is achieved by applying a shared-memory technique. The OpenModelica compiler compiles Modelica models by translating the Modelica language to C and then generating a runtime simulation executable. Matplotlib plots 2-D graphs of the simulation results either online (through the shared-memory technique) or offline (displaying simulation data from the output file). 4.2
Vimola vs MathModelica
Vimola provides most of the modules of Dymola. Most importantly, Vimola will be an open-source based environment and will be free for educational use, while Dymola can cost £20K-£30K per seat in a highly usable configuration and, being proprietary, lacks support for inter-application links (such as an interface to MathModelica's 3-D visualizer module, though that module itself is not yet mature).
Modeling and Simulation Environment for Multi-domain Physical Systems
475
MathModelica is another Modelica-based commercial software package; it combines a Modelica modeling environment with Mathematica. Compared to Dymola, MathModelica is newly emerging software; it even uses Dymola's kernel for compilation and simulation. A comparison with MathModelica is useful because it shows clearly the future development trend and the techniques used in the main modules of Vimola. The comparison of Vimola and MathModelica is shown in Table 1.

Table 1. A comparison of Vimola and MathModelica

Items          MSL Tree   2-D Diagram Editor   Text Editor            3-D Visualizer      Simulation Center
MathModelica   parsing    Microsoft Visio      Notepad                OpenGL              Dymola's kernel
Vimola         parsing    OGL based            Customized Scintilla   Visual Components   OpenModelica
The Modelica Standard Library tree browser module in Vimola is based on parsing the Modelica files, the same technique used in MathModelica. The 2-D diagram editor module in Vimola is customized OGL, while MathModelica uses a 2-D diagram module based on Microsoft Visio. The text editor module in Vimola is customized Scintilla, which is much better than the Notepad-based editor of MathModelica. The 3-D visualizer module in Vimola applies Visual Components, which is strong on GUI and 3-D visualization. Currently, we use a customized OpenModelica compiler from the free open-source project as the simulation centre module, though it is not as strong as the kernel in Dymola.
5.
PIMM Application Example in Vimola
In this section, the Profile Independent Wood-Moulding Machine, abbreviated as PIMM, will be modelled as an application example in the Vimola 1.0 environment. The PIMM machine consists of four drives. The horizontal (x-axis) and vertical (y-axis) drives make the cutter move and produce the desired geometry on the stationary workpiece. The timber feed drive (z-axis) feeds the workpiece in between two consecutive passes of the cutter along the width of the timber. The cutter drive (rx-axis) rotates the cutter at constant speed. The speed of the process and the geometric accuracy of the finished product rely mostly on the performance of the X and Y drives, so the X and Y axes are the most critical parts of the PIMM machine. Figure 2 shows the main graphical user interface (GUI) of the Vimola text editor in which the PIMM package was loaded and modelled. In the top left of the main GUI, we can see the tree browser of the MSL; part (a) highlights all the components contained in the MultiBodyPIMM model in the PIMM package, while part (b) displays the model of the Prismatic_Tz component. The highlighting and annotation auto-folding of specific models are shown on the right side of the main GUI in both (a) and (b).
Figure 2. PIMM Package in Modelica Text Editor (panels a and b)
Figure 3 gives a clear overview of the PIMM structure in the 2-D diagram editor. We can see from Figure 3 that the four axis drives (Rx axis, X axis, Y axis, Z axis) and the main mechanical structure are separate objects in PIMM. Thus modeling and simulation of the mechatronic multi-body and control systems are combined in the same environment. Information is communicated between the different systems through connector ports in Modelica, i.e. the small circles, small rectangles, and small arrows shown in Figure 3. Connectors must be defined inside a model object in order to exchange information freely between different components; the different connector types will be displayed automatically with different shapes in the Vimola 2.0 2-D graphical editor environment.
Figure 3. 2-D Diagram Model of PIMM
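The connector-port coupling just described can be sketched in Python as a loose analogy (all names and the value-propagation rule are illustrative assumptions; real Modelica connectors impose coupling equations rather than pushing values):

```python
# Hedged analogy of Modelica connector ports: components expose typed
# ports, and connect() couples ports of compatible kinds so information
# can flow between subsystems (e.g. control -> mechanical drive).
class Port:
    def __init__(self, kind):
        self.kind = kind          # e.g. "mechanical" or "control"
        self.value = 0.0
        self.peers = []

def connect(a: Port, b: Port):
    """Couple two ports; mismatched kinds are rejected, mirroring how
    incompatible connector types cannot be joined in a Modelica diagram."""
    if a.kind != b.kind:
        raise TypeError("cannot connect ports of different kinds")
    a.peers.append(b)
    b.peers.append(a)

def write(port: Port, value):
    port.value = value
    for peer in port.peers:       # propagate to every connected port
        peer.value = value

controller_out = Port("control")
drive_in = Port("control")
connect(controller_out, drive_in)
write(controller_out, 0.8)        # controller commands a drive input
```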
The visualized graphical modeling shown in Figure 3 can improve modeling efficiency and decrease the probability of errors compared with the text editor. Through the port connectors shown in Figure 3, the coupling of the mechanical and control ports can be achieved as well. This kind of modeling method enables model reuse and provides an easy route to simulation-based design of mechatronic systems. In cooperation with the 3-D visualization module, a visualized simulation of the virtual prototype can be achieved. Thus both visualized modeling and visualized simulation will be achieved in the Vimola environment, which cannot currently be realized in any single software package.
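The shared-memory link mentioned in Section 4.1, through which the simulator publishes multi-body states for the visualizer and online plotter, can be sketched with Python's standard shared-memory facility (the sample layout and tag names are assumptions for illustration, not Vimola's actual protocol):

```python
# Hedged sketch of a shared-memory link between a simulator and a
# visualizer: the simulator overwrites a shared block with the latest
# state sample; the visualizer attaches by name and decodes it.
import struct
from multiprocessing import shared_memory

SAMPLE_FMT = "ddd"                        # e.g. x, y, rotation of one body
SAMPLE_SIZE = struct.calcsize(SAMPLE_FMT)

def publish_sample(shm, x, y, angle):
    """Simulator side: overwrite the block with the latest state."""
    shm.buf[:SAMPLE_SIZE] = struct.pack(SAMPLE_FMT, x, y, angle)

def read_sample(name):
    """Visualizer side: attach to the block by name and decode one sample."""
    view = shared_memory.SharedMemory(name=name)
    sample = struct.unpack(SAMPLE_FMT, bytes(view.buf[:SAMPLE_SIZE]))
    view.close()
    return sample

shm = shared_memory.SharedMemory(create=True, size=SAMPLE_SIZE)
publish_sample(shm, 1.0, 2.0, 0.5)
state = read_sample(shm.name)
shm.close()
shm.unlink()
```

In a real deployment the writer and reader would be separate processes polling at the animation rate; the single-process round trip here only demonstrates the mechanism.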
6.
Conclusion
The development of Vimola, a Modelica-based multi-domain systems modeling and simulation environment, was introduced in this article. Vimola resembles the commercial modeling and simulation software Dymola. Moreover, Vimola overcomes the lack of 3-D multi-body animation ability in Dymola by integrating with the 3-D visualization specialist software Visual Components. Such software is very important for final manufacturing implementation and product demonstration requirements; it can pre-test the dynamic properties of the designed product and decrease the unnecessary cost of building ill-performing test rigs. The current progress of the Vimola software was also presented through the modeling of the PIMM application example, though some parts are still under development and need further upgrading. The OpenModelica compiler, the kernel of Vimola, is also still being improved and updated.
7.
Acknowledgements
The research was supported by the UK IMCRC project and the National Natural Science Foundation of China (Grant No. 50535010).
8.
References
[1] Ferretti G, Magnani G, Rocco P (2004). Virtual Prototyping of Mechatronic Systems. Annual Reviews in Control 28:193-206
[2] Paredis C J J, Diaz-Calderon A, Sinha R, Khosla P K (2001). Composable Models for Simulation-based Design. Engineering with Computers 17:112-128
[3] Fritzson P, Gunnarsson J, Jirstrand M (2002). MathModelica – An Extensible Modeling and Simulation Environment with Integrated Graphics and Literate Programming. Proceedings of the 2nd International Modelica Conference, 18-19 March 2002, Oberpfaffenhofen, Germany, pp 41-54
[4] Broenink J F (1999). 20-sim Software for Hierarchical Bond-Graph/Block-Diagram Models. Simulation Practice and Theory 7:481-492
[5] Bracewell R H, Sharpe J E E (1996). Functional Descriptions used in Computer Support for Qualitative Scheme Generation – "Schemebuilder". Artificial Intelligence for Engineering Design, Analysis and Manufacturing: AIEDAM 10:333-345
[6] Tiller M (2001). Introduction to Physical Modeling with Modelica. Boston: Kluwer Academic Publishers
[7] Borutzky W (1999). Bond Graph Modeling from an Object Oriented Modeling Point of View. Simulation Practice and Theory 7:439-461
[8] Breedveld P C (2004). Port-based Modeling of Mechatronic Systems. Mathematics and Computers in Simulation 66:99-127
[9] Tascioglu Y (2006). Profile Independent Wood-Moulding Machine. PhD Thesis, Loughborough University
[10] PELAB (2006). OpenModelica System Documentation. 13 June 2006, PELAB, Department of Computer and Information Science, Linköping University, Sweden
Selection of a Simulation Approach for Saturation Diving Decompression Chamber Control and Monitoring System

Diming Yang1,2, Xiu-Tian Yan1 and Derek Clarke2

1 University of Strathclyde, Glasgow G1
2 Divex Ltd, Westhill, Aberdeen AB32 6TQ
Abstract A saturation diving decompression chamber control and monitoring system involves a large number of input/output (I/O) channels. Due to the large number of I/O channels and the bulkiness of saturation diving decompression chamber systems, a simulator capable of mimicking physical I/O and physical processes is highly desirable to support the development and testing of the control and monitoring software. It can also be used for diver training purposes. There are quite a number of options for I/O simulation. This paper first describes five options studied and then explains how a selection is made among them. Keywords: Saturation diving support, Control and monitoring system, I/O simulation
1.
Introduction
It is well known that for divers to go deeper than 50 msw (metres of sea water), compression and decompression of the divers, i.e. making their internal body pressure equal to the working pressure and then bringing it back to atmospheric pressure before they return to the surface, has to be conducted at a very slow rate; otherwise rapid decompression may cause serious illness and even death. A diving technology known as saturation diving is commonly used for diving deeper than 50 msw. Saturation refers to the diver's tissues becoming saturated with diving gas. Saturation diving is based on the fact that once the diver's tissues are saturated, the decompression period is unaffected by how long the diver stays at the depth at which he is saturated [1]. Saturation diving requires a complex support system, usually onboard a ship known as a diving support vessel. A typical saturation diving support system, as illustrated in Figure 1, usually consists of deck decompression chambers (DDC), transfer under pressure chambers (TUP) and submersible decompression chambers (SDC).
480
D. Yang, X.T. Yan and D. Clarke
Figure 1. Saturation diving support system (Courtesy of Divex Ltd)
The initial conditions of the saturation diving system are those of the normal atmospheric environment. After the divers move into the DDCs, the whole system is pressurised gradually to the pressure of the working depth and maintained at that level. When not on shift, the divers stay in the DDCs. When called to duty, the divers make their way into the TUPs, from where they move into the SDCs, which are then detached from the TUPs and winched down through the sea water to the working depth. The divers then move out of the SDCs and do their jobs. After they complete their missions, the divers move back into the SDCs, which are winched back up to the vessel and attached back to the TUPs. When all the diving missions are completed, the decompression process starts; it is conducted at a very slow rate and takes a considerably long time to bring the chamber pressure back to atmospheric. Divers usually need to stay in the chambers for up to a month. As a result, the environmental parameters in the chambers, such as pressure, temperature, humidity, and O2 and CO2 levels, need to be strictly maintained. The control and monitoring system for the chambers is very complex and mission critical, involving more than a hundred sensors and actuators, which are usually managed by a dozen processing units, typically PLCs, chosen for their robustness. Figure 2 shows a typical control and monitoring system for DDCs and TUPs. Software testing of the saturation diving control and monitoring system can be very inconvenient if the whole chambers and the associated sensors and actuators have to be involved. Additionally, software may need to be developed alongside hardware development, so the software may have to be tested without the presence of the hardware. As a result of this development and test demand, a simulator is highly desirable.
The major functionality of such a simulator is to mimic physical sensors and actuators, and also to simulate the physical processes occurring inside the chambers.
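As a minimal sketch of the physical-process side of such a simulator, a chamber decompressing toward surface pressure at a fixed slow rate might look like this (the linear model, rates and units are illustrative assumptions, not the actual simulator design):

```python
# Hedged sketch of a simulated chamber process: linear decompression at a
# fixed slow rate, clamped so pressure never drops below surface pressure.
def decompress(pressure_msw: float, rate_msw_per_hour: float, hours: float) -> float:
    """Return chamber pressure (expressed in msw equivalent) after linear
    decompression for the given number of hours."""
    return max(0.0, pressure_msw - rate_msw_per_hour * hours)

# e.g. a chamber saturated at 100 msw, decompressed at 1 msw per hour
p = decompress(100.0, 1.0, 30.0)   # pressure after 30 hours
```

A real simulator would also model temperature, humidity and gas levels, and feed the computed values to the virtual sensors that the PLCs read.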
Figure 2. Saturation diving control and monitoring system (Courtesy of Divex Ltd)
2.
Simulation Options
There are a number of options for this simulation task, along with numerous variations and combinations of these options. 2.1
Simulation PLC
A PLC and its associated I/O modules could be used to mimic real sensor outputs and actuator inputs (see Figure 3). This method is probably the most robust one for I/O simulation. However, although simulation of one or two chamber PLCs with this method may be acceptable, simulation of a system involving a dozen PLCs and hundreds of sensors and actuators is unrealistic because of the high cost of PLCs. 2.2
SIMIT
SIMIT is a software platform developed by Siemens for testing SIMATIC automation software. It aims to ensure the correct functioning of the SIMATIC automation software and to guarantee this over the lifecycle of the system. SIMIT runs on a PC with specialized PCI-based PROFIBUS cards (Siemens IM-2 cards) (see Figure 4). Each PCI card can simulate two channels of PROFIBUS DP with up to 125 DP slaves [2], and the IM-2 cards support Fail-Safe I/O simulation [2]. SIMIT supports up to 16 two-channel hardware interface cards per system, i.e. up to 32 PROFIBUS DP networks. Another hardware interface option is SIMBApro PCI cards. SIMBApro cards can also support two PROFIBUS DP networks and simulate Fail-Safe DP slaves [3]. In fact, the SIMBApro PCI card is a component of another Siemens simulation system called SIMBApro, which can only simulate simple processes.
However, the SIMBApro PCI card can act as a hardware interface for SIMIT, which is more powerful in terms of simulating automation processes.
Figure 3. Schematic of simulation with PLC
Figure 4. Schematic of SIMIT simulation
Pros:
• SIMIT is from Siemens; therefore it is reasonable to expect that SIMIT works well with Siemens PLCs.
• Simulation of I/O is carried out through PROFIBUS DP; therefore wiring the simulation system is much simpler and easier compared to channel-by-channel options.
• SIMIT supports Fail-Safe I/O simulation.
• SIMIT does not require simulation code in the PLCs at any stage in the development lifecycle.

Cons:
• I/O can only be simulated through PROFIBUS DP, although main rack I/O can be treated as PROFIBUS DP by some tricks provided in the Siemens SIMIT documents. SIMIT does not currently support PROFINET I/O; this may become a problem in the future.
• Only the PLC code and the PROFIBUS DP communications between PLCs and I/O modules can be tested; the I/O modules themselves will not be tested.
Summary: SIMIT is a good choice if the I/Os are not spread across too many PROFIBUS DP networks and no PROFINET I/O is involved in the system. 2.3 LabVIEW with Physical I/O
Figure 5. Schematic of LabVIEW with physical I/O simulation
National Instruments LabVIEW is a rapid development environment intended for laboratory and test applications [4]. Simulation with LabVIEW through I/O cards is similar to simulation with PLCs in that I/O channels are simulated in a parallel manner (see Figure 5).

Pros:
• LabVIEW is likely to provide a more flexible, cheaper and quicker-to-implement approach compared with the use of a simulation PLC.
• There is no I/O network limitation, i.e. it does not matter whether the I/Os are located in PROFIBUS DP or PROFINET.
• This approach does not require simulation code in the PLCs.
• Almost the whole system, including PLC code, communications between PLCs and I/O modules, and the I/O modules themselves, can be tested at the same time.

Cons:
• I/O channels are simulated in a parallel manner; therefore a great deal of wiring is needed.
• The number of I/O channels that can be simulated is limited by the maximum number of PCI slots in a PC and the maximum number of I/O channels available on each of the I/O cards.
Summary: LabVIEW is also a good choice if the total number of I/Os is manageable and the I/O arrangement is sufficiently well defined.

2.4 OPC

OPC stands for OLE for Process Control and is based on Microsoft's COM (Component Object Model). OPC is a data exchange mechanism supported by a number of software platforms, including LabVIEW [4] and WinCC [5]. With this approach, physical I/O signals are replaced with memory bits inside the PLC, whose status can be changed through the OPC interface by the simulation program (see Figure 6). This approach requires some code in the PLC during simulation operations for transferring data between memory locations. This may become a problem, as the client of the diving support system stated clearly that they did not want simulation code in PLCs, for safety reasons.

Pros:
• Works with I/O in any network.
• No hardware interface is needed except for a connection to the control network.

Cons:
• Software switches are needed in the PLC code, with two positions: one position is for simulation, which works only with virtual I/O, while the other is for commissioning, which works with real I/O. If the software switches are in the wrong positions after commissioning, the result can be disastrous. Therefore, extra care must be taken with these software switches.
• The way safety programmes in PLC code access (real and virtual) I/O is much more complicated than the way standard programmes access I/O, due to the many restrictions applied to safety programmes. This complexity makes the problem of the software switches even more troublesome.
Summary: If the above cons can be overcome, the OPC approach would be the most cost effective solution for this simulation job.
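The OPC mechanism described above — the simulation program writing memory bits that the PLC logic then reads in place of physical I/O — can be sketched with a dictionary standing in for the PLC data area (a real setup would go through an OPC client and server; every name and tag here is an illustrative stand-in):

```python
# Hedged sketch of OPC-style virtual I/O: the simulator writes values into
# a tag-addressed memory area, and the PLC-side logic reads the same tags
# instead of physical sensor channels.
class FakePlcMemory:
    """Simulated PLC data area addressed by tag name, as an OPC server
    would expose it to clients."""
    def __init__(self):
        self.bits = {}

    def write(self, tag: str, value) -> None:     # simulation program side
        self.bits[tag] = value

    def read(self, tag: str):                     # PLC logic side
        return self.bits.get(tag, 0)              # unknown tags read as 0

plc = FakePlcMemory()
plc.write("DDC1.pressure_sensor", 85.3)   # simulator injects a virtual reading
reading = plc.read("DDC1.pressure_sensor")
```

The software switch discussed in the cons corresponds to choosing whether `read` consults this virtual area or the real I/O image.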
Figure 6. Schematic of OPC Simulation
2.5 Hybrid Approach

As mentioned in the last section, the OPC approach potentially has a problem with safety programmes. In order to address that problem, a hybrid approach was also considered. The hybrid approach is a combination of simulating standard I/O channels for standard programmes via OPC, and using hardware-generated I/O, such as I/O cards, to simulate safety I/O channels for safety programmes (see Figure 7). Compared to the OPC approach, the hybrid approach may reduce safety programme coding complexity, but it relies on hardware I/O and thus reduces flexibility.
Figure 7. Schematic of hybrid simulation
3.
Simulation Approach Selection
There are three criteria used in the simulation approach selection:
• Effectiveness: how effective an approach is in terms of system testing. This criterion is mainly measured against whether both the PLC code and the PLC I/O modules are tested. The physical parameters monitored and controlled by the PLC system include pressure, O2/CO2 content, temperature and humidity. All of these parameters change slowly by nature in the context of saturation diving support, so I/O simulation of these parameters places no special requirements on speed of response. As a result, speed of response is not part of the effectiveness assessment.
• Complexity: how complex an approach is, and therefore how much effort is needed to implement it. The effort includes software development and hardware development. In general, hardware development is considered to need more effort than software development.
• Cost
All the options described in the last section were assessed against the above criteria, and ratings were given accordingly, with 1 standing for the worst and 5 for the best. The assessment is shown in Table 1.

Table 1. Assessment of the simulation options

                PLC   LabVIEW + I/O cards   SIMIT + I/O cards   OPC   Hybrid approach
Effectiveness   5     5                     3                   4     4.5
Complexity      1     3                     5                   5     4
Cost            1     3                     2                   5     4
In fact, the first three options are all based on hardware-generated I/O; the OPC approach is based on purely software-generated I/O, while the hybrid approach is a combination of software-generated and hardware-generated I/O. Except for the SIMIT option, the options based on hardware-generated I/O are very effective in terms of I/O simulation because both the PLC code and the PLC I/O modules are tested; the PLC-based and LabVIEW-based simulations are therefore given a score of 5 for effectiveness. The SIMIT option is given 3 because it only works with PROFIBUS DP I/O, which restricts the PLC system design. The OPC option essentially bypasses the PLC I/O modules and only tests PLC code, and is therefore given 4. Since the hybrid option is a combination of hardware- and software-generated I/O, it is given 4.5. In terms of complexity, or how much effort is needed, the OPC option is the best with a score of 5 because it only requires software development. The SIMIT option also only requires software development and is likewise given 5. The hybrid option is given 4 because it mainly requires software development with some hardware development. The LabVIEW with physical I/O option requires both hardware and software development and is therefore given 3. The PLC option also needs both hardware and software development, is considered more difficult to develop than the LabVIEW option, and is accordingly given 1. In terms of cost, the OPC option is given 5 and the PLC option 1; the other options are given 2 to 4 according to their respective costs. Based on the above assessment, the OPC option is clearly the best, and the hybrid option the second best. However, it is realized that it may be troublesome for PLC code, especially safety programmes, to switch from accessing virtual I/O to accessing real I/O at commissioning.
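Under the assumption of equal weights (the paper does not state a weighting scheme), the ratings in Table 1 can be totalled as follows; the unweighted sum reproduces the ranking just described:

```python
# The ratings from Table 1, summed with equal weights (an assumption —
# the paper gives no explicit weighting across the three criteria).
scores = {
    "PLC":                 {"effectiveness": 5,   "complexity": 1, "cost": 1},
    "LabVIEW + I/O cards": {"effectiveness": 5,   "complexity": 3, "cost": 3},
    "SIMIT + I/O cards":   {"effectiveness": 3,   "complexity": 5, "cost": 2},
    "OPC":                 {"effectiveness": 4,   "complexity": 5, "cost": 5},
    "Hybrid approach":     {"effectiveness": 4.5, "complexity": 4, "cost": 4},
}
totals = {name: sum(s.values()) for name, s in scores.items()}
best = max(totals, key=totals.get)   # OPC ranks first, hybrid second
```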
In order to deal with this problem, a test PLC programme with both standard and safety functions is to be developed for control and monitoring of a single chamber. In this programme, software switching will be implemented and tested. If there is no problem with switching, then the OPC approach will be applied to simulation of the whole system. Should there be any problem, however, the hybrid approach will be the backup plan.
4.
Acknowledgement
The work reported in this paper is part of a project funded by the British Government through the KTP (Knowledge Transfer Partnership) scheme and conducted at Divex Ltd. We thank our colleagues from Divex for their kind collaboration.
5.
References
[1] Lettnin H K J (1999). International Textbook of Mixed Gas Diving. Best Publishing Company. ISBN 0-941332-50-0
[2] Siemens SIMIT manual
[3] Siemens SIMBApro manual
[4] National Instruments LabVIEW manual
[5] Siemens WinCC manual
Optimal Design of Delaminated Composite Plates for Maximum Buckling Load

Yu Hua Lin
Department of Mechanical Design Engineering, National Formosa University
Abstract Due to their laminated structure, manufacturing defects or external impact, composite materials often contain delaminations. The buckling behavior of laminated plates with an elliptic delamination under uniaxial compressive loading is studied here. An optimal design method is used to find the maximum buckling load; the design variables are the length and the width of the plate. The material is a long-fiber composite. In the analysis, the delaminated plate is modeled as a three-dimensional problem. Four parameters are considered: the size of the delaminated region, the position of the delaminated region in the thickness direction, the width of the plate, and the length of the plate. Uniform mesh refinement around the edge of the delaminated region is enforced to overcome the singularity at the crack tip. Keywords: delamination, maximum buckling load, laminated composite, optimal design
1.
Introduction
Composite materials are composed of two or more constituent materials and often have better performance or material characteristics than the individual constituents. Compared with metallic materials, such composites not only have high specific strength and high specific modulus but also excellent resistance to chemical corrosion. Fiber composite materials offer these benefits. However, when defects or damage are present, the strength is significantly reduced. Damage such as fiber breakage, disbonds, delamination and voids may be introduced during manufacture or installation. Among such defects, delamination is particularly important. Owing to the laminated structure and the special manufacturing process, the connection between laminations in composites tends to be lost easily; this is called delamination. As a consequence, material strength and stiffness deteriorate: the tensile and compressive strengths may be reduced by over 50%, causing early failure of the material. Therefore, understanding the impact of delamination on composites will help the accurate, safe design and use of composites. Delamination means loss of the connection between laminations; it usually affects part of the interface. Depending on its cause, the delamination may be located in the interior or
on the edge of the material, and its actual shape is irregular. To aid analysis, the delamination is often assumed to be rectangular, circular or elliptic. A composite plate with such a delamination behaves almost like a non-delaminated plate under tensile load parallel to the delamination plane. Under compressive load parallel to the delamination plane, however, the influence is very obvious. In general, a defect-free plate under in-plane compression is simply compressed until the force reaches a certain critical value, at which point the plate buckles. If the plate contains a delamination, buckling of the delaminated sublaminate tends to occur before buckling of the entire plate. This lowers the critical buckling load of the plate; the critical load therefore depends on the size, shape and location of the delamination. The design of composite materials is quite complex. It is affected by the properties of the matrix and fiber reinforcement, stress concentration effects, residual stresses from the manufacturing process, the thickness of the fiber laminations, and the fiber orientation angles, among other factors. One has to find a compromise among cost, performance and applicability in the design of structural components, while satisfying limitations such as maximum allowed deformation, stress, response, critical buckling load and dimensions. Optimal design is the method used to meet these requirements. Minimization of structural weight and maximization of the critical buckling load are two major goals in aerospace structure design. By reducing structural weight or increasing buckling strength, designers maintain the safety and comfort of the structure and excellent aerodynamic performance. For the problem of minimizing composite plate weight, Adali, Richter and Verijenko [1] proposed using the fiber orientation angle as the design variable for weight minimization.
The findings show that both single-load and multiple-load conditions admit a minimum-thickness design of the plate. Buckling and post-buckling behavior is a very significant issue in the application of composite structures, especially in aircraft design. For the optimization of the critical buckling load of composite plates, Chao et al. [2] first obtained the maximum buckling strength of a plate under uniaxial compressive load. Hu and Lin [3], Haftka and Walsh [4], and Smerdov [5] found the optimal fiber orientation angles giving the maximum buckling strength for plates and thin shells under uniaxial compressive load. To avoid being trapped at a local optimum, Erdal [6] took the fiber orientation angle as the design variable and obtained the maximum buckling load with a simulated annealing algorithm. Callahan and Weeks [7] and Park et al. [8] applied genetic algorithms to optimize the strength/stiffness of laminated composite materials. Also using a genetic algorithm, Le Riche and Haftka [9] took the stacking sequence of the laminate as the design variable to find the maximum buckling load. Considering temperature effects together with lamination variables, Autio [10] maximized the strain energy, minimized certain displacements or maximized the buckling load. Liu et al. [11] developed a permutation genetic algorithm to obtain the maximum buckling strength. Most studies on the optimization of buckling strength focus on uniaxial compressive load. Therefore, for symmetrically laminated composite plates, C. W. Kim and J. S. Lee [12] considered uniaxial compressive load, shear, biaxial compressive load and the combination of shear and biaxial compressive load to find the maximum buckling load with a genetic algorithm. This paper focuses on finding the maximum buckling load of an elliptically delaminated composite plate under uniaxial compressive load; the length and width of the plate are taken as design variables, and ANSYS Program Design Optimization is used as the optimal design method.
2.
Finite Element Method
The finite element method is applied widely in the engineering field. The vital step of buckling behavior analysis is to determine the buckling load, i.e. the critical load. When buckling occurs, the structure becomes unstable, resulting in early failure. There are two analysis methods: linear eigenvalue buckling analysis and non-linear buckling analysis. The formula of the linear buckling analysis is:

([K] + λ[Kσ]){u} = 0    (1)

where [K] is the conventional stiffness matrix, [Kσ] is the load stiffness matrix (caused by the axial stress), {u} is the displacement vector, and λ is the load factor, or eigenvalue. As λ increases, the axial stress grows linearly and the load stiffness contribution becomes increasingly negative, until the total stiffness of the structure becomes zero, leading to buckling. The value λ = λcr that makes Eq. (1) admit a non-trivial solution is called the critical load factor, and the corresponding axial load Fcr = λcr F0 is called the buckling load (critical load). The associated eigenvector (displacement vector) {u} is the buckling mode. Buckling occurs as a consequence of the weakening of the stiffness caused by the axial stress: the net stiffness of the structure becomes zero and the transverse displacement grows without limit. Therefore, in analyzing the buckling behavior, the structural load stiffness is determined first, and the buckling load is then obtained.
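The eigenvalue formulation can be illustrated numerically: rearranging gives [K]{u} = −λ[Kσ]{u}, so the load factors are the eigenvalues of (−[Kσ])⁻¹[K]. A toy two-degree-of-freedom sketch follows (the matrices are illustrative stand-ins, not from the paper's finite element model):

```python
# Hedged numerical illustration of the linear buckling eigenproblem:
# solve K u = -lambda * K_sigma u for the load factors lambda.
import numpy as np

K = np.array([[4.0, -1.0],
              [-1.0, 3.0]])          # conventional stiffness matrix [K]
K_sigma = np.array([[-1.0, 0.0],
                    [0.0, -1.0]])    # load stiffness matrix (destabilizing)

# eigenvalues of (-K_sigma)^{-1} K give the candidate load factors
lambdas = np.linalg.eigvals(np.linalg.solve(-K_sigma, K))
lambda_cr = min(lambdas.real)        # smallest one is the critical load factor
F0 = 10.0                            # reference axial load (illustrative)
F_cr = lambda_cr * F0                # buckling load: Fcr = lambda_cr * F0
```

An FE package performs the same computation on matrices with thousands of degrees of freedom, using sparse eigenvalue solvers.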
3.
Long Fiber Composite
The laminated composite is anisotropic; therefore, the delaminated composite problem is modeled as a three-dimensional problem. For the buckling mode analysis of the long-fiber composite with an elliptic delamination under uniaxial compressive load, we denote by "L" the length of the plate, "W" the width of the plate, "H" the thickness of the plate, "a" the long axis of the delamination, "b" the short axis of the delamination, "t" the distance from the delamination location to the surface, and "σ" the compressive load, as shown in Figure 1. Owing to symmetry, only half of the model is simulated. The composite material is graphite/epoxy AS4/3501-6, with the following mechanical properties:
492
Y.H. Lin
Ex = 121.17×10³ MPa, Ey = 9.36×10³ MPa, Ez = 9.36×10³ MPa, PRxy = 0.23, PRyz = 0.45, PRxz = 0.23, Gxy = 6.2532×10³ MPa, Gyz = 3.533×10³ MPa, Gxz = 6.2532×10³ MPa
Figure 1. model of elliptically delaminated composite plate
4. Optimal Design
1. Objective Function: the maximum buckling load is the objective function.
2. Design Variables: the length X and the width Y of the plate are the design variables.
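The design-optimization loop that ANSYS runs can be mimicked in a few lines of Python. In the sketch below the FE buckling analysis is replaced by an invented smooth surrogate function, and the bounds roughly match the X and Y ranges reported in the results; everything here is illustrative, not the paper's actual model.

```python
import numpy as np
from scipy.optimize import minimize

def buckling_load(xy):
    """Stand-in for the FE maximum-buckling-load analysis P_cr(X, Y).

    A smooth single-peak surrogate, assumed purely for illustration.
    """
    x, y = xy
    return 3000.0 * np.exp(-((x - 3.5) ** 2 + (y - 7.0) ** 2) / 20.0)

# Maximize P_cr by minimizing its negative over the two design variables,
# with simple bounds on the plate length X and width Y
res = minimize(lambda xy: -buckling_load(xy), x0=[3.0, 6.0],
               bounds=[(2.0, 5.0), (4.0, 10.0)], method="L-BFGS-B")
best_X, best_Y = res.x
```

In the real study each evaluation of the objective is a full FE eigenvalue analysis, so the optimizer's job is the same, only the function call is far more expensive.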
Figure 2. ANSYS design optimization flow chart
Optimal Design of Delaminated Composite Plates for Maximum Buckling Load
5. Analysis Parameters
The ratio of long/short axis of the elliptic delamination: a/b = 2/1.2, 3/1.8, 4/2.4
The ratio of delamination location to thickness: t/H = 1/2, 3/8, 1/4
The ratio of width to length of the plate: W/L = 1/1, 1/2, 1/3
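These ratios define a small full-factorial study; note that all three delamination sizes share the same aspect ratio a/b = 5/3 and differ only in absolute size. A sketch of the enumeration (variable names are mine):

```python
from itertools import product

ab_sizes = [(2, 1.2), (3, 1.8), (4, 2.4)]  # delamination axes (a, b); a/b = 5/3 in all cases
tH_ratios = [1 / 2, 3 / 8, 1 / 4]          # delamination location t/H
WL_ratios = [1 / 1, 1 / 2, 1 / 3]          # plate width/length W/L

# One analysis case per combination, 3 x 3 x 3 = 27 in total
cases = list(product(WL_ratios, ab_sizes, tH_ratios))
```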
6. Results
From the analyses based on initial plate widths W/L = 1/1, 1/2 and 1/3, delamination long/short-axis ratios a/b = 2/1.2, 3/1.8 and 4/2.4, delamination locations t/H = 1/2, 3/8 and 1/4, and plate thicknesses H = 1, 1.5 and 2, the results are as follows. In the first part, W/L = 1/1 and H = 1, with the length X and width Y of the plate as design variables. At the delamination location t/H = 1/2, as the delamination size grows from a/b = 2/1.2 to a/b = 4/2.4, the maximum buckling load falls from 3150.2 to 2027.9; the buckling mode is global buckling, shown in Figure 3. When the delamination is closer to the plate surface, t/H = 3/8, the maximum buckling load decreases from 3154.6 to 1979.1, and the buckling mode is again global buckling. As the delamination gets still closer to the surface, t/H = 1/4, the maximum buckling load drops from 2887.9 to 1244.4. At delamination size a/b = 2/1.2 the buckling mode is local buckling, shown in Figure 4, while at a/b = 3/1.8 and a/b = 4/2.4 the buckling mode is mixed buckling, shown in Figure 5.
Figure 3. global buckling of elliptically delaminated composite plate
Figure 4. local buckling of elliptically delaminated composite plate
Figure 5. mixed mode buckling of elliptically delaminated composite plate
As the plate thickness is changed to H = 1.5 and H = 2, the changes of the maximum buckling load and buckling modes are basically the same as those for H = 1, as in Figure 6. When the plate width is changed to W/L = 1/2 and W/L = 1/3, for thicknesses H = 1 and H = 1.5 the buckling mode changes are basically the same as for W/L = 1/1, but the maximum buckling load is much reduced, as in Figure 6; for thickness H = 2 the maximum buckling load is equal, as in Figure 6. These changes are the same at the different delamination locations t/H = 1/2, 3/8 and 1/4.
Figure 6. maximum buckling load-delamination size
Regarding the influence of the delamination location on the maximum buckling load, the results show that the closer the delamination is to the plate surface, the lower the maximum buckling load, as shown in Figure 7.
Figure 7. maximum buckling load-delamination location
In the second part, we discuss the design variables, the length X and width Y of the plate. First, regarding the length X, the results show that for the same thickness H = 1, at delamination location t/H = 1/2, X increases with increasing delamination size; the same holds at t/H = 3/8 and 1/4, and among the three locations the X value for t/H = 1/4 is the largest. For the other thicknesses, H = 1.5 and H = 2, X changes in the same way as for H = 1, and its value lies between 2 and 5, as shown in Figure 8.
Figure 8. length X of the plate- delamination size
Considering the changes of the plate width Y: for fixed thickness H = 1, at delamination location t/H = 1/4, Y increases with increasing delamination size, yet at t/H = 1/2 and 3/8 the change of Y is minute. For thickness H = 1.5, the change of Y with delamination size is irregular, most obviously at t/H = 3/8 and 1/4; thickness H = 2 shows the same behaviour. The Y values lie between 4 and 10 for H = 1 and 1.5, and between 6 and 9 for H = 2, as shown in Figure 9.
Figure 9. width Y of the plate- delamination size
7. Conclusions
1. The maximum buckling load decreases as the delamination location gets closer to the plate surface.
2. A larger delamination size leads to a smaller maximum buckling load.
3. A thicker plate has a higher maximum buckling load.
4. The length X of the plate increases with increasing delamination size.
5. The change of the plate width Y is irregular with respect to increasing delamination size.
6. Through optimization with the finite element method, the optimal geometric shape (length X and width Y) of the delaminated composite plate is located so as to obtain the maximum buckling load. The resulting optimal sizes will help the design and use of composite plates and support accurate prediction of postbuckling behavior and delamination growth.
Modeling Tetrapods Robot and Advancement
Q. J. Duan 1, J. R. Zhang 2, Run-Xiao Wang 2, J. Li 2
1 Department of Mechano-Electronic Engineering, Xidian University
2 Department of Mechano-Electronic Engineering, Northwest Polytechnical University
Abstract A tetrapod robot was designed based on virtual prototyping technology. First, a 3D solid model was constructed; then it was tested and evaluated in ADAMS. The simulation results show that the motion structure is feasible, but the foot impact force ("wallop") is too big: the maximum wallop is more than 7 times the robot's weight. We adopted a spring structure in the leg mechanism to reduce the wallop, and a worm wheel structure to keep the robot standing when its power is off. The comparison results prove the validity of the improved robot mechanism. Keywords: Tetrapod robot; Virtual prototyping technique; ADAMS
1. Introduction
The tetrapod robot, which has the advantages of good adaptability to its surroundings and high moving agility, has become very important in robot research. Legged robots may one day be used for fast transportation over rugged terrain and for exploring other planets. However, much work remains in designing the robot mechanism. Designers often mimic the structure of dogs, cats, and horses to build robots, but an actuator (such as a motor) differs from a muscle and cannot achieve the same controlled result. In order to improve design efficiency and reliability, in this paper we introduce the virtual prototyping technique [1]-[6] to design the tetrapod robot mechanism.
1.1 The Virtual Prototyping Technique
The virtual prototyping (VP) technique has been studied and implemented in engineering design in recent years. First, a computer simulation of the product is required; at the current stage, a 3D solid model, usually a parametric model, is the widely accepted representation. Second, for a virtual prototype to be presented like a real physical model, a human-product interaction model is desired: ideally, a virtual product can be viewed, listened to, smelled, and touched by an engineer or a customer. This is an area in which virtual reality techniques can play an important role. More importantly, various perspectives of the designed product should be able to be tested and evaluated. In summary, a complete virtual prototype should essentially include three types of models:
• A 3D solid model
• A human-product interaction model
• Perspective test related models
The modeling flow is as follows: model construction → model test → model verification → model refinement → model description → model optimization.
With the goal of replacing physical prototypes, VP has great potential to improve the current product development process.
1.2 The Model
A mammalian leg is composed of five sections and has five joints with the body; each joint has at least one degree of freedom, and this redundancy makes animal movement extremely flexible. In order to reduce the complexity of control, the tetrapod robot mechanism does not mimic the animal's five sections with ultra-redundant degrees of freedom; instead, we simplify the robot body structure reasonably. The robot has four legs, each with three degrees of freedom: one at the knee and the other two at the crotch. Worm wheels are used to keep the robot standing still when its power is off. The foot is a rubber half-ball, to fit the ground. The design volume of the robot is less than 800 mm × 500 mm × 800 mm, and the mass is no more than 50 kg. Referring to the actual geometric parameters and physical
characteristics of the tetrapod robot, the 3D solid model of the robot is constructed in SolidWorks. The model is shown in Figure 1.
Figure 1. The tetrapod robot model
The corresponding parameters of the tetrapod robot model are shown in Table 1.

Table 1. Parameters of tetrapod robot model

Part: leg
  Member 1: length 85 mm, mass 2.5 kg
  Member 2 (thigh): length 193 mm, mass 1.5 kg
  Member 3 (crus): length 245 mm, mass 2 kg
Part: body
  Size (length × width × height): 800 mm × 400 mm × 500 mm; mass 45 kg

2. Dynamics Analysis
In this paper, the Lagrange formulation is adopted to analyze and solve the dynamics of the tetrapod robot. For a mechanical system, the Lagrange function L is defined as the difference between the system kinetic energy E_K and the potential energy E_P:

\[ L = E_K - E_P \qquad (2.1) \]
The kinetic and potential energies are expressed in generalized coordinates. The system dynamic equation (the Lagrange equation of the second kind) is
\[ F_i = \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} \qquad (2.2) \]
In this formula, \(q_i\) are the generalized coordinates in which the kinetic and potential energies are expressed, \(\dot{q}_i\) the corresponding generalized velocities, and \(F_i\) the generalized forces: if \(q_i\) is a linear coordinate, \(F_i\) is a force; if \(q_i\) is an angular coordinate, \(F_i\) is a moment of force. The sketch of the tetrapod robot's leg is shown in Figure 2. Member 1 swings in the YOZ plane; \(\theta_1\), \(\theta_2\) and \(\theta_3\) are the joint angles and the generalized coordinates, \(m_1\), \(m_2\) and \(m_3\) the member masses (lumped at the member end points), \(l_1\), \(l_2\) and \(l_3\) the three member lengths, and \(g\) the gravitational acceleration. The dynamic equations are as follows:

\[
\begin{aligned}
T_1 &= \frac{d}{dt}\frac{\partial L}{\partial \dot\theta_1} - \frac{\partial L}{\partial \theta_1} \\
&= \big[(m_1+m_2+m_3)l_1^2 + (m_2+m_3)l_2^2\cos^2\theta_2 + m_3l_3^2\cos^2(\theta_2+\theta_3) \\
&\quad + 2(m_2+m_3)l_1l_2\cos\theta_2 + 2m_3l_2l_3\cos(\theta_2+\theta_3)\cos\theta_2 + 2m_3l_1l_3\cos(\theta_2+\theta_3)\big]\ddot\theta_1 \\
&\quad - 2\big[(m_2+m_3)l_2^2\cos\theta_2\sin\theta_2 + m_3l_3^2\cos(\theta_2+\theta_3)\sin(\theta_2+\theta_3) + (m_2+m_3)l_1l_2\sin\theta_2 \\
&\qquad + m_3l_2l_3\sin(\theta_2+\theta_3)\cos\theta_2 + m_3l_2l_3\cos(\theta_2+\theta_3)\sin\theta_2 + m_3l_1l_3\sin(\theta_2+\theta_3)\big]\dot\theta_1\dot\theta_2 \\
&\quad - 2\big[m_3l_3^2\cos(\theta_2+\theta_3)\sin(\theta_2+\theta_3) + m_3l_2l_3\sin(\theta_2+\theta_3)\cos\theta_2 + m_3l_1l_3\sin(\theta_2+\theta_3)\big]\dot\theta_1\dot\theta_3 \\
&\quad + (m_1+m_2+m_3)gl_1\sin\theta_1 + (m_2+m_3)gl_2\sin\theta_1\cos\theta_2 + m_3gl_3\sin\theta_1\cos(\theta_2+\theta_3) \qquad (2.3)
\end{aligned}
\]
Figure 2. The sketch of the tetrapod robot’s leg
\[
\begin{aligned}
T_2 &= \frac{d}{dt}\frac{\partial L}{\partial \dot\theta_2} - \frac{\partial L}{\partial \theta_2} \\
&= \big[(m_2+m_3)l_2^2 + m_3l_3^2 + 2m_3l_2l_3\cos\theta_3\big]\ddot\theta_2 + \big(m_3l_3^2 + m_3l_2l_3\cos\theta_3\big)\ddot\theta_3 \\
&\quad - 2m_3l_2l_3\sin\theta_3\,\dot\theta_2\dot\theta_3 - m_3l_2l_3\sin\theta_3\,\dot\theta_3^2 \\
&\quad + \big[(m_2+m_3)l_2^2\cos\theta_2\sin\theta_2 + m_3l_3^2\sin(\theta_2+\theta_3)\cos(\theta_2+\theta_3) + (m_2+m_3)l_1l_2\sin\theta_2 \\
&\qquad + m_3l_2l_3\sin(2\theta_2+\theta_3) + m_3l_1l_3\sin(\theta_2+\theta_3)\big]\dot\theta_1^2 \\
&\quad + (m_2+m_3)gl_2\cos\theta_1\sin\theta_2 + m_3gl_3\cos\theta_1\sin(\theta_2+\theta_3) \qquad (2.4)
\end{aligned}
\]
\[
\begin{aligned}
T_3 &= \frac{d}{dt}\frac{\partial L}{\partial \dot\theta_3} - \frac{\partial L}{\partial \theta_3} \\
&= \big(m_3l_3^2 + m_3l_2l_3\cos\theta_3\big)\ddot\theta_2 + m_3l_3^2\ddot\theta_3 + m_3l_2l_3\sin\theta_3\,\dot\theta_2^2 \\
&\quad + \big[m_3l_3^2\sin(\theta_2+\theta_3)\cos(\theta_2+\theta_3) + m_3l_2l_3\sin(\theta_2+\theta_3)\cos\theta_2 + m_3l_1l_3\sin(\theta_2+\theta_3)\big]\dot\theta_1^2 \\
&\quad + m_3gl_3\sin(\theta_2+\theta_3)\cos\theta_1 \qquad (2.5)
\end{aligned}
\]
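Torque expressions such as (2.4) are easy to get wrong by hand, but they can be checked by evaluating the Lagrange equation (2.2) numerically with finite differences. The sketch below does this for a planar reduction of the leg (θ1 = 0 so that only members 2 and 3 move, with point masses at the member ends); the reduction, the parameter values, and all names are illustrative assumptions, not the paper's full 3D model.

```python
import math

# Planar reduction: theta1 = 0, point masses at the ends of members 2 and 3.
# Lengths follow Table 1 (converted to metres); angles measured from the vertical.
m2, m3 = 1.5, 2.0
l2, l3 = 0.193, 0.245
g = 9.81

def lagrangian(q2, q3, w2, w3):
    """L = E_K - E_P (eq. 2.1) for the planar two-link leg."""
    vx3 = l2 * math.cos(q2) * w2 + l3 * math.cos(q2 + q3) * (w2 + w3)
    vy3 = l2 * math.sin(q2) * w2 + l3 * math.sin(q2 + q3) * (w2 + w3)
    ek = 0.5 * m2 * (l2 * w2) ** 2 + 0.5 * m3 * (vx3 ** 2 + vy3 ** 2)
    ep = -m2 * g * l2 * math.cos(q2) - m3 * g * (l2 * math.cos(q2) + l3 * math.cos(q2 + q3))
    return ek - ep

def torque2_closed_form(q2, q3, w2, w3, a2, a3):
    """T2 from eq. (2.4) with theta1 = 0 (the theta1-dot terms vanish)."""
    return ((m2 + m3) * l2 ** 2 + m3 * l3 ** 2 + 2 * m3 * l2 * l3 * math.cos(q3)) * a2 \
        + (m3 * l3 ** 2 + m3 * l2 * l3 * math.cos(q3)) * a3 \
        - 2 * m3 * l2 * l3 * math.sin(q3) * w2 * w3 \
        - m3 * l2 * l3 * math.sin(q3) * w3 ** 2 \
        + (m2 + m3) * g * l2 * math.sin(q2) + m3 * g * l3 * math.sin(q2 + q3)

# Evaluate T2 = d/dt(dL/d(theta2-dot)) - dL/d(theta2) (eq. 2.2) by central
# differences along an arbitrary smooth test trajectory
q2f = lambda t: 0.3 * math.sin(t)
q3f = lambda t: 0.2 * math.cos(2 * t)
h = 1e-5

def state(t):
    w2 = (q2f(t + h) - q2f(t - h)) / (2 * h)
    w3 = (q3f(t + h) - q3f(t - h)) / (2 * h)
    return q2f(t), q3f(t), w2, w3

def dL_dw2(t):
    q2, q3, w2, w3 = state(t)
    return (lagrangian(q2, q3, w2 + h, w3) - lagrangian(q2, q3, w2 - h, w3)) / (2 * h)

def torque2_numeric(t):
    q2, q3, w2, w3 = state(t)
    ddt = (dL_dw2(t + h) - dL_dw2(t - h)) / (2 * h)
    dq2 = (lagrangian(q2 + h, q3, w2, w3) - lagrangian(q2 - h, q3, w2, w3)) / (2 * h)
    return ddt - dq2
```

Agreement between `torque2_numeric` and `torque2_closed_form` along the test trajectory confirms the inertia, Coriolis, and gravity terms of (2.4) for the planar case.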
3. The Gait Simulation
The model is imported into the ADAMS platform through the ADAMS/Exchange interface module, and the simulation model of the virtual tetrapod robot is established by applying constraints. The tetrapod robot is then simulated in ADAMS with its planned gait. Equations (3.1)-(3.8) [7]-[9] give the planned gait of the crotch and knee joints (angular amplitudes of 13.85 and 10.32 degrees converted to radians, gait period 0.6 s):

RF (right front) gait:
\[ \theta_{crotch} = 13.85 \times \tfrac{2\pi}{360} \sin\!\big(\tfrac{2\pi}{0.6}t + \tfrac{\pi}{2}\big) \qquad (3.1) \]
\[ \theta_{knee} = 10.32 \times \tfrac{2\pi}{360} \sin\!\big(\tfrac{2\pi}{0.6}t + \tfrac{\pi}{2}\big) \qquad (3.2) \]

RB (right back) gait:
\[ \theta_{crotch} = 13.85 \times \tfrac{2\pi}{360} \sin\!\big(\tfrac{2\pi}{0.6}t + \pi\big) \qquad (3.3) \]
\[ \theta_{knee} = 10.32 \times \tfrac{2\pi}{360} \sin\!\big(\tfrac{2\pi}{0.6}t + \tfrac{3\pi}{2}\big) \qquad (3.4) \]

LF (left front) gait:
\[ \theta_{crotch} = 13.85 \times \tfrac{2\pi}{360} \sin\!\big(\tfrac{2\pi}{0.6}t + \tfrac{\pi}{2}\big) \qquad (3.5) \]
\[ \theta_{knee} = 10.32 \times \tfrac{2\pi}{360} \sin\!\big(\tfrac{2\pi}{0.6}t\big) \qquad (3.6) \]

LB (left back) gait:
\[ \theta_{crotch} = 13.85 \times \tfrac{2\pi}{360} \sin\!\big(\tfrac{2\pi}{0.6}t\big) \qquad (3.7) \]
\[ \theta_{knee} = 10.32 \times \tfrac{2\pi}{360} \sin\!\big(\tfrac{2\pi}{0.6}t + \tfrac{\pi}{2}\big) \qquad (3.8) \]
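The planned gait is just a set of phase-shifted sinusoids, so it can be written as one helper function. The sketch below reproduces equations (3.1)-(3.2) for the right-front leg; the function names are mine.

```python
import math

DEG = 2 * math.pi / 360   # degree-to-radian factor appearing in (3.1)-(3.8)
PERIOD = 0.6              # gait period in seconds

def joint_angle(amplitude_deg, t, phase=0.0):
    """Planned joint angle in radians: A*(2*pi/360)*sin(2*pi*t/0.6 + phase)."""
    return amplitude_deg * DEG * math.sin(2 * math.pi * t / PERIOD + phase)

# Right-front leg: crotch (3.1) and knee (3.2), both with phase pi/2
rf_crotch = lambda t: joint_angle(13.85, t, math.pi / 2)
rf_knee = lambda t: joint_angle(10.32, t, math.pi / 2)
```

The other six joint trajectories differ only in the phase argument, so the whole gait table reduces to eight one-line definitions.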
A series of important results is shown in Figure 3 and Figure 4: the moving velocity along the x axis, the moving distance along the x axis, and the wallop (impact force) along the y axis.
Figure 3. The moving distance and velocity
Figure 4. The wallop in y axis
From the simulation results, the robot mechanism can achieve the expected speed, and the motion structure is feasible; the drawback is that the impact force is too big. The maximum wallop is more than 7 times the robot's weight, which could damage the robot structure. Therefore, we added a spring to the leg to improve the structure, as shown in Figure 5.
Figure 5. The improved structure of the leg
The right picture shows the appearance of the improved leg structure; the left picture shows the inside structure of the leg.
4. The Results
An improved tetrapod robot model was simulated under two conditions: one with a spring added to the leg, the other without. Figure 6 demonstrates the results. First, the improved mechanism still moves well, as in Figure 4, under the same control equations (3.1)-(3.8). Secondly, the shock-absorbing structure reduces the wallop dramatically, as seen by comparing the dotted line and the solid line in Figure 6.
Figure 6. The wallop in the y axis of the two mechanisms (with and without spring)
From the contrast results, we find that the improved structure is effective in reducing the wallop. We then designed the actual structure of the improved tetrapod robot. To realize a dynamic gait, the robot must be made as light as possible: we selected an aluminum alloy as the material of the legs and body, and engineering plastic pulleys for the gears. Moreover, we adopted the shock-absorbing structure in the leg mechanism to reduce the wallop. A prototype of the physical robot was then designed.
5. Conclusions and Future Work
In this article, a tetrapod robot model was proposed and simulated with a given gait. An actuation system in which a motor and a worm wheel are connected was adopted, and a spring was introduced into the leg of the model. Simulation experiments showed the effect of the leg-spring system: the model with leg-spring systems could achieve a steady gait. A tetrapod robot based on the model was then designed. Verifying the mechanism on the physical robot remains future work.
6. Acknowledgment
Thanks are due to TAN XQ for valuable discussion and good advice, and to ZHAO Q for assistance with the drawing work.
7. References
[1] Lewis MA, Simo LS. A model of visually triggered gait adaptation. In: Adaptive Motion of Animals and Machines. Montreal, Canada, 2000.
[2] ADAMS/View User's Reference Manual. MSC.Software, 2003.
[3] ADAMS/Solver Reference Manual. MSC.Software, 2003.
[4] ADAMS/Control Reference Manual. MSC.Software, 2003.
[5] Li Zenggang. ADAMS explained in detail basically with examples. Beijing: Defense Industry Publishing House, 2006.
[6] Zhang Chunlin. Higher Mechanism. Beijing: Beijing Institute of Technology Publishing House, 2005.
[7] He Dong-qing, Ma Pei-sun. Simulation of dynamic walking of quadruped robot and analysis of walking stability. Computer Simulation, 2005, 22(2).
[8] Pan Shuangxia, Liu Jing, Feng Pei'en. Study on the techniques of simulation and optimization for a robotic excavator trajectory plan control using virtual prototyping technology. China Mechanical Engineering, 2005, 21.
[9] Zhang Xiuli. Research on biology quadruped robot rhythm movement and adaptability to environment. Qinghua University, 2004.
[10] Wang GG. Definition and review of virtual prototyping. Journal of Computing and Information Science in Engineering, 2002, 2(3): 232-236.
The Analysis of Compression About the Anomalistic Paper Honeycomb Core
Wen-qin Xu, Yuan-jun Lv, Qiong Chen, Ying-da Sun
Mechanical and Electronic Engineering, Zhejiang Industry Polytechnic College, Shaoxing, China
Abstract The anomalistic (irregular) hexagon structure of existing honeycomb paperboard cores has an impact on their compression characteristics. A subsection (piecewise) function modeling the constitutive relationship of honeycomb paperboard cores is introduced, and the compression process is divided into four parts. Models of honeycomb paperboard cores are built in the MARC software and analyzed. Finally, results are obtained that help to further analyze the performance of honeycomb paperboard and to design new honeycomb structures. Keywords: FEM; buckling; honeycomb paperboard; collapse
1. Introduction
Honeycomb paperboard is widely used as a packaging material, and the structure of the honeycomb core has a great impact on its performance. Many drawbacks have been discovered after investigating a number of manufacturers of honeycomb paperboard. Theoretically, the honeycomb core tends to a regular hexagonal shape, but in actual manufacturing the hexagons are irregular: the core tends to a lozenge, shown in Figure 1(a), or a quadrilateral, shown in Figure 1(b). How this affects the function of honeycomb paperboard deserves attention. Most analyses of honeycomb paperboard are based on experiment or simple FEM analysis. Gibson and Ashby already did a systematic study of honeycomb
Figure 1. The irregular shape of honeycomb core: (a) lozenge; (b) quadrilateral
paperboard compressed by longitudinal load: the course comprises linear elastic, elastic buckling, plastic collapse and densification stages [1]. The effect of solid distribution in cell edges on the plastic collapse strength of hexagonal honeycombs was numerically investigated by Simone and Gibson [2] and theoretically verified by Chuang and Huang [3]. The large deformation of cell edges before elastic buckling was taken into account by Zhang and Ashby [4]. The capability of honeycomb paperboards with different structures is studied in this paper, based on the MARC software with experimental verification. The results help to provide a theoretical basis for testing honeycomb paperboard.
2. Forming and Compressing Course of Honeycomb Paperboard
2.1 Forming Course of Honeycomb Paper
The forming of honeycomb paper is simple. First, glue is applied to the surface of the paper core in regularly offset strips, as shown in Figure 2; every layer of paper core is glued together and stacked. The inter-nodal distance is 4a and the width of the glue strip is b·a. Second, the composite layers are stretched perpendicular to the paper core; finally, the honeycomb structure is obtained. The length of the glued edge of a honeycomb cell is b·a while the other edges are (2−b)·a, where b is the forming coefficient and a the characteristic length of the honeycomb paper. The cell is a regular hexagon when b = 1; the glued edges are shorter than the other edges when b < 1 and longer when b > 1, and the honeycomb cell is then an irregular hexagon.
Compression Course of Honeycomb Paperboard
Figure 3 shows even load is clamped down the cover paper of honeycomb paperboard which emplaces on the level. Static state compress means that the speed of compression is very slow. There are four phases during the course of
compression: the linear elastic phase, the elastic-plastic phase, the plastic collapse phase and the densification phase.

Figure 2. The forming of honeycomb paperboard (inter-nodal distance 4a, glue strip width b·a, free edge length (2−b)·a)
Figure 3. Sketch map of honeycomb model (cover paper and honeycomb core)
1. Linear elastic phase: the stress-strain curve follows a linear relation.
2. Elastic-plastic phase: the deformation of the honeycomb core changes from partial elastic collapse to plastic buckling as the load is added; the compression strain stays roughly constant while the compression stress declines.
3. Plastic collapse phase: the load fluctuates as the compression strain grows, until the cover papers come into contact.
4. Densification phase: the compression stress rises very quickly once the strain enters densification; the paper has almost lost its elasticity, the stress increases rapidly with strain, and the relative density reaches 0.5.
Figure 4 shows the theoretical stress-strain curve of honeycomb paperboard.
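The four phases can be captured by the kind of piecewise ("subsection") constitutive function mentioned in the abstract; the breakpoints, moduli and plateau stress below are invented purely to illustrate the shape of Figure 4.

```python
def honeycomb_stress(strain, E=1.2, eps_el=0.05, sigma_pl=0.05, eps_d=0.6, E_d=2.0):
    """Piecewise compressive stress-strain model (illustrative parameter values).

    linear elastic -> collapse plateau (elastic-plastic and plastic collapse)
    -> densification, mirroring the four phases listed above.
    """
    if strain <= eps_el:
        return E * strain                          # linear elastic phase
    if strain <= eps_d:
        return sigma_pl                            # collapse plateau (stress drop, then fluctuation)
    return sigma_pl + E_d * (strain - eps_d) ** 2  # densification: stress rises quickly
```

A real fit would calibrate the breakpoints and levels to measured curves such as Figure 11.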
Figure 4. A theoretical stress-strain curve (compressive stress vs. compressive strain, showing the linear elastic, elastic-plastic, plastic collapse and densification phases)
3. FEA of Honeycomb Paperboard
3.1 Building the Analytical Models
The distortion and failure process of honeycomb paperboard is very complicated, involving both material and structural nonlinearity, so its parameters need to be simplified. A simulation of the compression course of honeycomb paperboard is carried out and its stress distribution and deformation history are obtained. The models of honeycomb paper with anomalistic hexagon structure are built in the 3D software Pro/E. The height of every model is 30 mm; the thickness of the paper used as honeycomb core is 0.3 mm, while that of the cover paper is 0.4 mm. The girth (perimeter) of the honeycomb cell is 36 mm. Every model is meshed with quad elements, and displacement restriction and force control are defined. The elastic modulus of the cover paper is 7600 MPa, and that of the core paper is 3600 MPa; the Poisson ratio is 0.3. Other parameters, such as the load step, are also defined. The density of the paperboard is obtained from the formula:
\[ \rho = \frac{m}{V} \qquad (1) \]
A honeycomb cell has six edges, of which two are glued and four are not. Let the length of a glued edge in a cell be x and the length of each other edge be y; they satisfy:
Figure 5. Honeycomb models with different values of glued edge
\[ 2x + 4y = l \qquad (2) \]

\[ \rho = \frac{m}{V} = \frac{6\sqrt{3}\,y(2y+3x)\rho_1 + 8lh\rho_2 + m_g}{3\sqrt{3}\,y(2y+3x)\,h} = \frac{2\rho_1}{h} + \frac{8l\rho_2 + m_g/h}{3\sqrt{3}\,y(2y+3x)} \qquad (3) \]

where ρ1 is the density of the cover paper (g/mm²), l the perimeter of the hexagonal cell (mm), ρ2 the density of the core paper, h the height of the honeycomb paperboard, and m_g the mass of the glue.

So, with l = 36 mm and y = (18 − x)/2,

\[ \rho = \frac{2\rho_1}{h} + \frac{16l\rho_2 + 2m_g/h}{3\sqrt{3}\,(18-x)(18+2x)} \qquad (4) \]
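Equation (4) is straightforward to evaluate per glued-edge length; a sketch follows (the symbols follow the text, while the density and glue-mass values used in the check are placeholders).

```python
import math

def paperboard_density(x, h, rho1, rho2, m_g, l=36.0):
    """Overall density from eq. (4).

    x: glued-edge length (mm), h: board height (mm), rho1/rho2: cover/core
    paper densities per area, m_g: glue mass. Valid for 0 < x < 18.
    """
    return 2 * rho1 / h + (16 * l * rho2 + 2 * m_g / h) / (
        3 * math.sqrt(3) * (18 - x) * (18 + 2 * x))
```

For the modeled girth l = 36 mm, y = (18 − x)/2, so (4) is just (3) with y eliminated, which is easy to verify numerically.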
Six models are established with different lengths of the glued edge: 2 mm, 4 mm, 6 mm, 8 mm, 10 mm and 12 mm (see Figure 5).
Figure 6. The compression course of honeycomb paperboard
3.2 Analysis of Compression Process
Stability is the ability of a structure to stay steady. Once the load exceeds the critical load, even a small increase destroys the stability of the structure and causes buckling. The purpose of buckling analysis is to acquire the critical load and the state in which the steady balance is destroyed. Figure 6 shows some typical compression phases for a glued-edge length of 10 mm. In the linear elastic phase the honeycomb core undergoes only longitudinal compression. The core then becomes unsteady and sways as the load exceeds the critical load slightly; the critical load is reached at the end of this phase. In the elastic-plastic and plastic collapse phases the honeycomb paperboard is compressed sharply in the longitudinal direction: coves appear in the middle of the honeycomb core, and the part above the middle of the core begins to collapse. The board then enters the densification phase, with the paper layers touching each other, and the load value changes quickly. Figure 7 shows the curve of equivalent stress during the compression course. In the linear elastic phase the response is linear and the displacement changes little; after the load reaches the critical value, the board enters the elastic collapse phase, in which the load changes slowly while the displacement changes rapidly. In the densification phase the displacement hardly changes with increasing load. This accords with the ideal displacement-stress curve, which proves the validity of the FEA; however, the load increments in the FEA are small, and there is no sudden change of the curve after the critical load compared with the theoretical curve.
Figure 7. The curve of equivalent stress in the compression course (linear elastic, elastic-plastic, plastic collapse and densification phases)
3.3 Comparison of Different Structures
The other five models can be analyzed by the same method; Figure 8 shows the compression analysis for different glued-edge lengths of the honeycomb core. The compression courses are very alike, show the same four phases, and have similar load curves. It is found that the middle of the honeycomb core tends to collapse locally, so the paper must be chosen to prevent cracking; the core close to the upper cover paper collapses after flexure. The load-bearing area differs with the structure parameter. The relation between the length of the glued edge and the load-bearing area is:

\[ y = \frac{3\sqrt{3}}{2}(18-x)(18+2x) \qquad (5) \]
Here x is the length of the glued edge and y the load-bearing area. The critical load differs with the length of the glued edge; Figure 9 shows the result. As the length of the glued edge increases, the critical load increases slightly until the length reaches 6 mm, then descends to around 0.01 MPa at 8 mm; both a rapid increase and a rapid decline appear. The results are attributed to the paper density per area and the length-to-width ratio of the cover paper.
Figure 8. Compression analysis of different honeycomb models (glued edge 2 mm, 4 mm, 6 mm)
Figure 9. The relation between critical buckling load and the length of glued edge.
4. Experimental Verification
The analysis results obtained by FEM should be checked experimentally. Figure 10 shows the compression process of honeycomb paperboard with an irregular core: coves appear in the middle of the honeycomb core, and the part above the middle of the core begins to collapse. Figure 11 shows the relation between force and displacement, and Figure 12 the relation between pressure and glued-edge length. They are similar to the FEM results; however, the experimental values are bigger than the FEM ones, because the FEM can hardly simulate some other factors (such as air and glue).
Figure 10. Crushing test of a kind of honeycomb paperboard with irregular core
Figure 11. The relation between force and displacement
Figure 12. The relation between pressure and glued edge

5. Conclusions
Honeycomb paperboards with the typical structures existing on the market are analyzed. The compression course of honeycomb paperboard is simulated by FEM; the results show that the critical load is governed by the paper-core density per area and the length-to-width ratio of the cover paper. Experimental verification has been carried out; the comparison shows that the FEM and experimental results are similar, although the experimental values are bigger than the FEM ones owing to factors such as glue and air. The method and results are beneficial for studying honeycomb structures and for rebuilding the glue applicator to improve quality.
6. References
[1] Gibson LJ, Ashby MF. Cellular Solids: Structures and Properties. 2nd ed. Cambridge, UK: Cambridge University Press, 1997.
[2] Simone AE, Gibson LJ. Effects of solid distribution on the stiffness and strength of metallic foams. Acta Mater 1998; 46: 2139-50.
[3] Chuang CH, Huang JS. Elastic moduli and plastic collapse strength of hexagonal honeycombs with Plateau borders. Int J Mech Sci 2002; 44: 1827-44.
[4] Zhang J, Ashby MF. Buckling of honeycomb under in-plane biaxial stresses. Int J Mech Sci 1992; 34(6): 491-509.
[5] Xu Wen-qin: associate professor, woman, master's degree (1967-); research orientation: mechanical design and hydraulic control.
[6] Lv Yuan-jun: teaching assistant, man, master's degree (1980-); research orientation: CAD/CAM/CAE/CAPP and packing design.
[7] Chen Qiong: teaching assistant, woman, master's degree (1980-); research orientation: CAD/CAM/CAE/CAPP.
C-NSGA-II-MOPSO: An Effective Multi-objective Optimizer for Engineering Design Problems
Jinhua Wang 1, Zeyong Yin 2
1 School of Mechanical and Electrical Engineering, Northwestern Polytechnical University, Xi'an 710072, China. Email: [email protected]
2 China Aviation Powerplant Research Institute, Zhuzhou 412002, China
Abstract This paper extends the NSGA-II-MOPSO algorithm, which combines NSGA-II and a multi-objective particle swarm optimizer (MOPSO) for unconstrained multi-objective optimization problems, to accommodate constraints and mixed variables. In order to utilize the valuable information in the objective function values of infeasible solutions, a method called M+1 nondominated sorting is proposed to determine the nondomination levels of all infeasible solutions. Integer and discrete variables are dealt with using a method called stochastic approximation. Experiments on two structural optimization problems were conducted. The results indicate that the constrained NSGA-II-MOPSO (C-NSGA-II-MOPSO) is an effective multi-objective optimizer for optimization problems in engineering design in comparison with the constrained NSGA-II algorithm. The efficiency demonstrated here suggests its immediate application to other engineering design problems. Keywords: multi-objective optimization; NSGA-II; multi-objective particle swarm optimizer; engineering design; mixed variables
1. Introduction
Multi-objective optimization problems (MOOPs), which have more than one mutually conflicting objective, are common in engineering design. Classical approaches convert them into single-objective optimization problems by combining all the objectives of a problem into a single one. Such an approach is inconvenient and often difficult to implement, because it requires human expertise and the adopted algorithm has to be applied many times in order to obtain sufficiently many Pareto optimal solutions. Over the last decade, population-based evolutionary algorithms (EAs) have received much attention for solving MOOPs, owing to their ability to obtain multiple Pareto optimal solutions in a single run. Many multi-objective evolutionary algorithms (MOEAs) have been proposed, among which the strength Pareto EA (SPEA) [1], the Pareto-archived evolutionary
520
J. Wang and Z. Yin
strategy (PAES) [2], and the nondominated sorting genetic algorithm (NSGA-II) [3] have been successfully applied to MOOPs. Another population-based optimization algorithm, particle swarm optimization (PSO), originally applied to single-objective optimization problems (mainly in continuous search spaces), has also been proposed for solving MOOPs; the survey in [4] summarizes these multi-objective particle swarm optimizers (MOPSOs). One of the latest MOPSOs is the multi-objective comprehensive learning particle swarm optimizer (MOCLPSO) [5]. Reference [5] shows that MOCLPSO converges fast while maintaining good diversity of the obtained solutions on some unconstrained benchmark problems; however, it was not developed to accommodate constraints and mixed variables. Recently, a competitive algorithm (NSGA-II-MOPSO) [6] for unconstrained multi-objective optimization problems was proposed, based on the combination of NSGA-II and a multi-objective particle swarm optimizer (MOPSO). It performs well on unconstrained benchmark problems in comparison with NSGA-II and MOCLPSO. This paper extends this new algorithm to solve multi-objective optimization problems in engineering design. The remainder of the article is organized as follows. Section 2 describes the NSGA-II-MOPSO algorithm in brief. Section 3 extends the NSGA-II-MOPSO algorithm to general constrained MOOPs by accommodating constraints and mixed variables. In Section 4, two engineering design problems are used to investigate the performance of C-NSGA-II-MOPSO, and the results are compared with those obtained by NSGA-II. Finally, some conclusions are given.
2. NSGA-II-MOPSO
The NSGA-II-MOPSO algorithm is a multi-objective optimization algorithm based on the combination of NSGA-II and MOPSO, in which the crossover operation of NSGA-II is replaced with the position-updating mode of MOPSO. In order to combine NSGA-II and the quite different MOPSO smoothly, the concepts particular to MOPSO, such as particle, velocity, Pbest and leader, are handled within the scope of NSGA-II. At the same time, an improved version of the crowding distance of NSGA-II is proposed to utilize the information from the distance of the extreme solutions (in objective space); the new version of the crowding distance is called the sparse-degree. The effectiveness, efficiency and robustness of the NSGA-II-MOPSO algorithm were demonstrated by experiments on several benchmark problems, and the results (including the algorithm itself) are reported in detail in [6].
3. Constrained NSGA-II-MOPSO

3.1 Handling of Constraints
The most popular constraint-handling techniques currently used with EAs have been comprehensively surveyed and classified into five categories by Coello [7]: 1) penalty functions; 2) special representations and operators; 3) repair algorithms; 4) separation of objectives and constraints; 5) hybrid methods. In the constrained NSGA-II [3], an infeasible solution with a larger overall constraint violation is always dominated by any infeasible solution with a smaller overall constraint violation. This approach, belonging to the 4th category, is simple but ignores the valuable information carried by the objective function values of infeasible solutions, which may lead to inferior results. The constraint-handling approach of Ray, Tai et al. [8] also belongs to the 4th category; however, in this approach, all constraints and objective functions are used as criteria to check the nondomination levels of solutions. In engineering design problems, where the number of constraints may be tens, hundreds or even thousands, the computational cost of this approach may be very large. In order to utilize the valuable information from the objective function values of infeasible solutions, and to save time when checking the nondomination levels of all infeasible solutions, we propose a method called M+1 nondominated sorting (M and 1 refer to the number of objectives and the overall constraint violation, respectively). In this approach, the objective function values and the overall constraint violations of all infeasible solutions are used to check their nondomination levels, according to which all infeasible solutions are ranked. Infeasible solutions at the same nondomination level are further ranked according to their overall constraint violations. The overall constraint violation can be calculated as follows:

    Φ(X) = Σ_{i=1..m} max{0, g_i(X)} + Σ_{i=1..n} |h_i(X)|                          (1.1)
where g_i(X) is the ith less-than inequality constraint, h_i(X) is the ith equality constraint, and m and n are the numbers of inequality and equality constraints, respectively. In C-NSGA-II-MOPSO, the method for checking the nondomination levels of feasible solutions and ranking them is the same as that adopted in NSGA-II-MOPSO: their objective function values are used to check their nondomination levels, yielding the first nondomination front F1; within F1, the extreme solutions are ranked first, followed by the other solutions ranked according to their sparse-degrees.
3.2 Updating of Parent Population
In NSGA-II-MOPSO, four rules are adopted to update the parent population: (1) only the solutions in the first nondominated front F1 can enter the parent population; (2) among each group of overlapping solutions (in objective space) in
F1, just one solution is preserved and the others are erased; (3) any extreme solution, which has the smallest or largest value in at least one objective, is always preferred to all non-extreme solutions; (4) a solution with a larger sparse-degree is preferred to any solution with a smaller sparse-degree, except the extreme solutions. Among these rules, rules 2, 3 and 4 help enhance the uniformity and spread of solutions. In order to deal with infeasible solutions in C-NSGA-II-MOPSO, some rules must be added. We propose the following ones: (5) a feasible solution is always preferred to all infeasible solutions; (6) of two infeasible solutions at different nondomination levels, the one with the lower nondomination level is preferred; (7) of two infeasible solutions at the same nondomination level, the one with the smaller overall constraint violation is preferred. Among these rules, rule 6 ensures that the valuable information from the objective function values of infeasible solutions is utilized, which improves the performance of C-NSGA-II-MOPSO, especially on highly constrained problems.
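The preference implied by rules (5)–(7), together with the overall constraint violation of Eq. (1.1), can be sketched as a pairwise test. This is an illustrative sketch only: the names (`overall_violation`, `prefer`, the dictionary fields) are ours rather than the paper's, and the feasible-versus-feasible case is left to the extreme-solution and sparse-degree ranking described above.

```python
# Sketch of the infeasibility-aware preference behind rules (5)-(7).
# All names here are illustrative, not taken from the paper.

def overall_violation(g_vals, h_vals):
    """Overall constraint violation as in Eq. (1.1): sum of positive parts
    of inequality constraints g_i(X) <= 0 plus absolute values of equality
    constraints h_i(X) = 0."""
    return sum(max(0.0, g) for g in g_vals) + sum(abs(h) for h in h_vals)

def prefer(a, b):
    """Return True if solution a is preferred to solution b.
    Each solution is a dict with keys:
      'feasible'  - bool,
      'rank'      - M+1 nondomination level (lower is better),
      'violation' - overall constraint violation."""
    if a['feasible'] != b['feasible']:
        return a['feasible']                      # rule (5): feasible wins
    if not a['feasible']:
        if a['rank'] != b['rank']:
            return a['rank'] < b['rank']          # rule (6): lower level wins
        return a['violation'] < b['violation']    # rule (7): smaller violation wins
    return False  # two feasible solutions: decided by extremes / sparse-degree

# Example: lower M+1 rank beats a smaller violation at a worse rank
a = {'feasible': False, 'rank': 1, 'violation': 3.0}
b = {'feasible': False, 'rank': 2, 'violation': 0.5}
print(prefer(a, b))  # True
```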
3.3 Selection of Leader
For NSGA-II-MOPSO, which aims to solve unconstrained optimization problems, the parent population consists only of feasible solutions. For C-NSGA-II-MOPSO, however, there are three situations in terms of the feasibility of solutions in the parent population: (1) all are feasible; (2) some are feasible and the others infeasible; (3) all are infeasible. In the second situation, it is not easy to decide whether and when an infeasible solution should be selected as the leader of an individual. In our approach, a simple rule is used:

    if NF = 0 or rand > NF/|Pk+1|    % rand is a random number between 0 and 1
        Randomly select an infeasible solution from the parent population
    else
        Select a feasible solution from the parent population

where NF is the number of feasible solutions in the parent population. When a feasible solution is to be selected as the leader of an individual, the selection method is the same as that employed in NSGA-II-MOPSO. This method biases the search of the population towards the region where the nondominated feasible solutions with larger sparse-degrees are located; consequently, the uniformity and spread of feasible solutions are improved. When an infeasible solution is to be selected as the leader of a particle, we simply select one at random from the infeasible solutions in the parent population.
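The leader-selection rule above can be sketched as follows. The function name and the assumption that the parent population stores its NF feasible solutions first are ours, and the sparse-degree-biased choice among feasible leaders is simplified here to a uniform random pick.

```python
import random

# Sketch of the leader-selection rule. `parent` is assumed to be ordered with
# its `nf` feasible solutions first, as produced by the population update.

def select_leader(parent, nf, rng=random):
    """Pick an infeasible leader with probability 1 - nf/|parent| (or when
    there is no feasible solution), otherwise a feasible one."""
    if nf == 0 or rng.random() > nf / len(parent):
        # infeasible leaders are drawn uniformly at random
        return rng.choice(parent[nf:] or parent)
    # feasible leader; the paper's sparse-degree-biased selection is
    # approximated here by a uniform random choice among feasible solutions
    return rng.choice(parent[:nf])

pop = ['f1', 'f2', 'f3', 'i1', 'i2']   # 3 feasible, 2 infeasible (toy labels)
leader = select_leader(pop, nf=3)
print(leader in pop)  # True
```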
3.4 Handling of Mixed Variables
Several methods for handling mixed variables in engineering design problems can be found in the literature. Venter and Sobieszczanski-Sobieski [9] treated discrete design variables as continuous ones and applied round-off to the discrete components of the final results. Parsopoulos and Vrahatis [10] applied PSO to integer programming by simply truncating the real values to integers, and claimed that the method does not significantly affect the search performance. The same method was used to deal with integer variables by He, Prempain et al. [11]. Meanwhile,
they optimized the index of the discrete variable instead of optimizing the discrete value of the variable directly. Kitayama, Arakawa et al. [12] proposed a penalty function approach, in which the discrete design variables are handled as continuous ones by penalizing values lying between the permissible intervals. In C-NSGA-II-MOPSO, we propose a new method called stochastic approximation to handle integer and ordinary discrete variables. The idea behind this method is: an integer variable is treated as a continuous one, and its actual value is then randomly selected from the integers neighboring the value of this continuous variable. As for discrete variables, similarly to the method of He, Prempain et al. [11], the index (an integer variable) is optimized and then replaced with the corresponding discrete permissible value when necessary. Stochastic approximation randomizes individuals within a local region and consequently enhances their local search ability in comparison with deterministic methods such as the ceiling or floor operators. By means of the stochastic approximation operator (INTR), the formula for updating the dth dimension of the ith particle in NSGA-II-MOPSO is transformed into the following one.
    X_{i,d}^{(k+1)} =
        X_{i,d}^{(k)} + C1·rand1·(Pbest_{n,d} − X_{i,d}^{(k)}) + C2·rand2·(leader_d − X_{i,d}^{(k)}),
            if d is the component corresponding to a real-valued variable;
        INTR( X_{i,d}^{(k)} + C1·rand1·(Pbest_{n,d} − X_{i,d}^{(k)}) + C2·rand2·(leader_d − X_{i,d}^{(k)}) ),
            if d is the component corresponding to an integer or discrete variable.     (1.2)

    INTR(r) =
        floor(r), if rand > r − floor(r);
        ceil(r),  otherwise.                                                            (1.3)
where C1 and C2 are the acceleration factors; rand1, rand2 and rand are three random numbers generated from the uniform distribution between 0 and 1; r is the value of a continuous variable representing an integer variable or the index of a discrete variable; and floor() and ceil() are the floor and ceiling operators, respectively. Pbest_{n,d} is the dth dimension of the individual that is nearest to the ith individual on the dth dimension among the set ND; if ND is empty, the second term on the right-hand side of (1.2) is set to zero. Note that, due to the existence of infeasible solutions, ND should be:
    ND =
        the set of individuals in P_{t+1}[1:NF] that are better than the calculated
        individual in at least one objective,
            if the calculated individual is feasible;
        P_{t+1}[1:NF] ∪ the set of individuals in P_{t+1}[NF+1:|P_{t+1}|] that are better
        than the calculated individual in at least one objective or in the overall
        constraint violation,
            if the calculated individual is infeasible.
In addition, any component of Xi that is an integer representing the index of a discrete variable should be replaced with the corresponding discrete permissible value when evaluating the objective functions and the overall constraint violation.
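The stochastic approximation operator INTR of Eqs. (1.2)–(1.3) can be sketched as follows; the clamped index lookup for discrete variables and all names here are illustrative, not from the paper.

```python
import math
import random

# Sketch of the INTR operator of Eq. (1.3): a real value r is mapped to one
# of its two neighbouring integers at random - floor(r) with probability
# 1 - (r - floor(r)), ceil(r) otherwise - so the nearer integer is the more
# likely outcome.

def intr(r, rng=random):
    frac = r - math.floor(r)
    if rng.random() > frac:
        return math.floor(r)
    return math.ceil(r)

# A discrete variable is handled through its integer index, e.g.:
PERMISSIBLE = [0.5, 0.8, 1.0, 1.2]   # hypothetical permissible discrete values
idx = intr(2.3)                       # the index itself is optimized as a real
value = PERMISSIBLE[max(0, min(idx, len(PERMISSIBLE) - 1))]  # clamp, then look up
print(value in PERMISSIBLE)  # True
```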
3.5 Algorithm Framework of C-NSGA-II-MOPSO
We integrate the techniques and methods described above and present the framework of the C-NSGA-II-MOPSO algorithm as follows.

    Initialize P1, Q1 = Ø, k = 1       % initialize parent population P1, offspring
                                       % population Q1, and counter
    While k <= kmax:
        Step 1  Rk = Pk ∪ Qk → RkF + RkI   % combine parent and offspring populations,
                                           % and partition them into RkF (the set of
                                           % feasible solutions) and RkI (the set of
                                           % infeasible solutions)
        Step 2  Fast nondominated sorting on RkF; obtain the first nondominated front F1
        Step 3  Sparse-degree-calc(F1)     % calculate the sparse-degree of each
                                           % individual in F1 (see Appendix)
        Step 4  Erase the individuals in F1 whose sparse-degrees are zero
        Step 5  Rank the extreme individuals in F1 first, followed by the others in
                descending order of sparse-degree
        Step 6  Find Pk+1
            if |F1| > N                    % N is the maximum size of the parent population
                Pk+1 = F1[1:N], NF = N     % NF records the number of feasible
                                           % solutions in Pk+1
            else
                a) M+1 fast nondominated sorting on RkI
                b) Rank the solutions in RkI according to their nondomination levels;
                   solutions at the same nondomination level are further ranked
                   according to their overall constraint violations
                c) Pk+1 = F1 ∪ RkI[1:min(|RkI|, N − |F1|)], NF = |F1|
        Step 7  Produce Qk+1
            For each individual in Qk
                a) Selection of leader
                    if NF = 0 or rand > NF/|Pk+1|
                        Randomly select a solution from Pk+1[NF+1:|Pk+1|] as the leader
                    else
                        If rand
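The fast nondominated sorting invoked in Steps 2 and 6a can be sketched as follows. This is a plain O(M·N²) version for illustration, not the paper's implementation; for the M+1 sorting of infeasible solutions, each point would simply be the M objective values with the overall constraint violation appended as an extra criterion.

```python
# Compact fast nondominated sorting (minimization in every criterion).

def dominates(a, b):
    """a dominates b: no worse in all criteria and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(points):
    """Return a list of fronts, each front a list of indices into points."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # indices each point dominates
    counts = [0] * n                        # number of points dominating each point
    for i in range(n):
        for j in range(n):
            if i != j and dominates(points[i], points[j]):
                dominated_by[i].append(j)
                counts[j] += 1
    fronts = [[i for i in range(n) if counts[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:          # all its dominators are placed
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]                       # drop the trailing empty front

pts = [(1, 5), (2, 2), (5, 1), (4, 4)]
print(nondominated_sort(pts))  # [[0, 1, 2], [3]]
```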
4. Numerical Examples
In this section, two structural optimization problems are used to investigate the performance of C-NSGA-II-MOPSO. Due to space limitations, the two problems are not described in this paper; see the cited literature if necessary. When the C-NSGA-II-MOPSO algorithm was implemented, the following parameters were used: parent population size, 100; offspring population size, 100; acceleration factors C1 and C2, 2.0; Pg, 0.50; mutation probability Pm = 0.1/D (D is the number of variables); distribution index for the mutation operator, 100. For comparison, these problems were also solved by the constrained NSGA-II, with the following parameters: parent population size, 100; offspring population size, 100; crossover probability, 0.9; mutation probability, 1/D (D is the number of variables); distribution indices for the real-coded crossover and mutation operators, 20 and 100, respectively. The integer or discrete variables involved in the problems were treated as continuous real-valued ones when performing the SBX and mutation operators, and were then rounded to the nearest integers or permissible discrete values before evaluating the solution; this method is similar to that used by the author of NSGA-II in [13]. The maximum number of function evaluations was set at 10000 for both algorithms, and 10 independent runs were performed for each example.
4.1 Spring Design
This problem aims to minimize the volume and stress of a helical compression spring [14]. Figure 1 shows the total solutions obtained in 10 runs by both C-NSGA-II-MOPSO and the constrained NSGA-II. The extreme solutions are as follows:
C-NSGA-II-MOPSO: (2.700562, 187485.827) and (27.949890, 56635.264)
Constrained NSGA-II: (2.800214, 183589.024) and (27.942675, 56625.803)
From Figure 1, it can be seen that the distribution of solutions obtained by C-NSGA-II-MOPSO is more uniform than that obtained by the constrained NSGA-II. Moreover, the convergence of the former to the true Pareto front is better, although their spreads are almost the same.
4.2 Speed Reducer Design
The objective of this problem is to design a gearbox with minimum gearbox volume and minimum stress [15].
[Figure: two scatter plots of stress f2 (psi) versus volume f1 (in³), for the constrained NSGA-II (with poor solutions marked) and for C-NSGA-II-MOPSO]
Figure 1. Total solutions obtained in 10 runs on the spring design problem
Figure 2 shows the total solutions obtained in 10 runs by both C-NSGA-II-MOPSO and NSGA-II. The extreme solutions are as follows:
C-NSGA-II-MOPSO: (2772.455, 1299.785) and (5788.500, 694.709)
Constrained NSGA-II: (2795.170, 1282.508) and (5195.457, 695.076)
From Figure 2, it can be seen that the distribution of solutions obtained by C-NSGA-II-MOPSO is more uniform than that obtained by the constrained NSGA-II. Moreover, the convergence of the former to the true Pareto front is much better, and at the same time its spread is wider. Due to the complexity of the search space of this problem (its true Pareto front is thus more difficult to generate than in the previous example), most of the constrained NSGA-II runs could not converge close to the true Pareto front.

[Figure: two scatter plots of stress f2 (MPa) versus volume f1 (cm³), for the constrained NSGA-II (with poor solutions marked) and for C-NSGA-II-MOPSO]
Figure 2. Total solutions obtained in 10 runs on the speed reducer design problem
Summarizing our results, we see that C-NSGA-II-MOPSO is always able to converge close to the true Pareto front within the same number of function evaluations, while maintaining a uniform distribution and wide spread of solutions, compared with the constrained NSGA-II. In particular, in one of the problems, on which our
approach performed perfectly, the constrained NSGA-II did rather poorly. It is worth noting that the constrained NSGA-II was chosen for comparison because it represents the state of the art in constrained multi-objective optimization. We attribute the good performance of C-NSGA-II-MOPSO to the fact that it maintains a good balance between diversity preservation and rapid convergence by combining NSGA-II and MOPSO smoothly. The framework of C-NSGA-II-MOPSO, an improved version of that of the constrained NSGA-II, provides the main mechanism for diversity preservation, while the position-updating mode of MOPSO contributes to the rapid convergence. The proposed M+1 nondominated sorting of infeasible solutions, which ensures that the valuable information from the objective function values of infeasible solutions is utilized, also enhances the performance of C-NSGA-II-MOPSO. In addition, the stochastic approximation method strengthens the local search ability of individuals when dealing with problems involving integer or discrete variables.
5. Conclusions
Based on the NSGA-II-MOPSO algorithm, this paper develops C-NSGA-II-MOPSO, an algorithm for solving multi-objective optimization problems in engineering design. The proposed algorithm has been validated on two structural optimization problems taken from the literature and compared with the constrained NSGA-II. On these problems, the solutions obtained by C-NSGA-II-MOPSO are closer to the true Pareto front within the same number of function evaluations, while having a uniform distribution and wide spread. The efficiency demonstrated here suggests the algorithm's immediate application to other engineering design problems.
6. References
[1] Zitzler E, Thiele L, (1999) Multi-objective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation 3: 257–271.
[2] Knowles JD, Corne DW, (2000) Approximating the nondominated front using the Pareto archived evolution strategy. Evolutionary Computation 8: 149–172.
[3] Deb K, Pratap A, Agarwal S, Meyarivan T, (2002) A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6: 182–197.
[4] Reyes-Sierra M, Coello CAC, (2006) Multi-objective particle swarm optimizers: a survey of the state-of-the-art. International Journal of Computational Intelligence Research 2: 287–308.
[5] Huang VL, Suganthan PN, Liang JJ, (2006) Comprehensive learning particle swarm optimizer for solving multi-objective optimization problems. International Journal of Intelligent Systems 21: 209–226.
[6] Wang JH, Yin ZY, (2007) A multi-objective optimization algorithm based on the combination of NSGA-II and MOPSO. Journal of Computer Applications 27(11): 2817–2830 (in Chinese).
[7] Coello CAC, (2002) Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Computer Methods in Applied Mechanics and Engineering 191: 1245–1287.
[8] Ray T, Tai K, Seow C, (2001) An evolutionary algorithm for multi-objective optimization. Engineering Optimization 33(3): 399–424.
[9] Venter G, Sobieszczanski-Sobieski J, (2004) Multidisciplinary optimization of a transport aircraft wing using particle swarm optimization. Structural and Multidisciplinary Optimization 26: 121–131.
[10] Parsopoulos KE, Vrahatis MN, (2002) Particle swarm optimization method for constrained optimization problems. In Proceedings of the 2nd Euro-International Symposium on Computational Intelligence, Kosice, Slovakia, 214–220.
[11] He S, Prempain E, Wu QH, (2004) An improved particle swarm optimizer for mechanical design optimization problems. Engineering Optimization 36(5): 585–605.
[12] Kitayama S, Arakawa M, Yamazaki K, (2006) Penalty function approach for the mixed discrete nonlinear problems by particle swarm optimization. Structural and Multidisciplinary Optimization 32: 191–202.
[13] Deb K, Kumar A, (2007) Light beam search based multi-objective optimization using evolutionary algorithms. KanGAL Report No. 2007005. http://www.iitk.ac.in/kangal/reports.shtml
[14] Deb K, Pratap A, Moitra S, (2000) Mechanical component design for multiple objectives using elitist non-dominated sorting GA. In Proceedings of the Parallel Problem Solving from Nature VI Conference, Paris, 859–868.
[15] Wu J, (2001) Quality assisted multi-objective and multidisciplinary genetic algorithms. PhD thesis, Department of Mechanical Engineering, University of Maryland at College Park, Maryland, USA.
Appendix: Sparse-degree-calc(F1) [6]

    n = |F1|                  % number of individuals in F1
    F1[1:n].sd = 0            % sparse-degrees of all individuals initialized to zero
    if n = 1
        F1[1].sd = 1
    else
        for each objective m
            F1 = sort(F1, m)              % ascending order of the mth objective (minimization)
            if f_m^max = f_m^min          % all individuals overlap in the mth objective
                F1[1].sd = max(F1[1].sd, 1)
            else
                % the extreme individuals in F1 are calculated first
                F1[1].sd = max(F1[1].sd, (F1[2].m − F1[1].m)/(f_m^max − f_m^min))
                F1[n].sd = max(F1[n].sd, (F1[n].m − F1[n−1].m)/(f_m^max − f_m^min))
                For i = 2 to (n−1)        % other individuals
                    F1[i].sd = max(F1[i].sd, (F1[i+1].m − F1[i−1].m)/(f_m^max − f_m^min))

Here, F1[i].m refers to the mth objective function value of the ith individual in F1, and f_m^max and f_m^min are the maximum and minimum values of the mth objective function.
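The pseudocode above can be transcribed into Python as follows, assuming each individual is represented by its tuple of objective values; the names are ours, not from any released implementation.

```python
# Python transcription of Sparse-degree-calc: the sparse-degree of an
# individual is its largest normalized gap between neighbours over all
# objectives, with extreme individuals using the gap to their only neighbour.

def sparse_degree(front):
    """Return a list sd, where sd[i] is the sparse-degree of front[i]."""
    n = len(front)
    if n == 1:
        return [1.0]
    sd = [0.0] * n
    num_objectives = len(front[0])
    order = list(range(n))
    for m in range(num_objectives):
        order.sort(key=lambda i: front[i][m])      # ascending in objective m
        f_min, f_max = front[order[0]][m], front[order[-1]][m]
        if f_max == f_min:                         # all overlap in objective m
            sd[order[0]] = max(sd[order[0]], 1.0)
            continue
        span = f_max - f_min
        # extreme individuals first
        sd[order[0]] = max(sd[order[0]], (front[order[1]][m] - f_min) / span)
        sd[order[-1]] = max(sd[order[-1]], (f_max - front[order[-2]][m]) / span)
        for k in range(1, n - 1):                  # interior individuals
            gap = front[order[k + 1]][m] - front[order[k - 1]][m]
            sd[order[k]] = max(sd[order[k]], gap / span)
    return sd

front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(sparse_degree(front))  # [0.5, 1.0, 0.5]
```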
Material Selection and Sheet Metal Forming Simulation of Aluminium Alloy Engine Hood Panel

Jiqing Chen¹, Fengchong Lan¹, Jinlun Wang¹ and Yuchao Wang²

¹ School of Automotive Engineering, South China University of Technology, Guangzhou, 510640, P R China
² Institute of Automotive Engineering, Guangzhou Automobile Industry Group Co., Ltd., Guangzhou, 510640, P R China
Abstract In this study, a simulation model of an engine hood inner panel is obtained, and the panel's stamping process parameters are optimized based on orthogonal experiments. The feasibility and key techniques of converting the engine hood inner panel from steel to aluminum alloy are investigated. Based on the optimized stamping parameters of the steel hood inner panel, optimized parameters for its aluminum alloy counterpart are obtained through CAE simulation. The main results are: (1) the research applies the orthogonal method to the optimization of the stamping process, which guarantees the viability and efficiency of this process; (2) the model used in the forming process simulations is validated by actual trials, resulting in a reliable theoretical analysis; (3) the analysis results for the exemplified hood inner panel indicate the feasibility of using aluminum alloy instead of steel in other body panels, and help in understanding the influence of material and technical parameters on aluminum alloy sheet metal forming, so that the design process can be optimized and product quality improved more easily. Keywords: Sheet metal forming; Numerical simulation; Lightweight; Aluminum alloy; Hood inner panel
1. Introduction
In the stamping process of auto-body panels, the tool setting system and process parameters are crucial to formability and product quality. The stamping process parameters can only be adjusted after a tool system has been validated in practice. Since changes in stamping process parameters cause non-linear and uncertain effects on forming quality, choosing optimal parameters is one of the difficulties in forming optimization and quality control. A number of studies have been carried out with numerical simulations in an attempt to optimize these parameters and, in turn, to guide the design and manufacture of tools and the layout of the stamping process. With a good set of parameters, high-quality parts can be produced. Compared to actual experimental trials' high expense
530
J. Chen, F. Lan, J. Wang and Y. Wang
with huge waste and time consumption, the CAE method has been more and more widely applied in sheet metal forming process optimization, and has proved effective and efficient. In CAE simulations, the FE model is validated by essential experiments carried out in simulation cycles rather than actual trials. The validated model can be used for the optimization of technical parameters in simulations, to avoid possible defects such as wrinkling, cracking and spring-back [1-2]. In this study, the FE model of the hood inner panel is obtained first, then the orthogonal method is applied to carry out numerical simulation experiments, and a set of optimal forming parameters is put forward with the help of CAE analysis. However, with advancing modern technology and the demanded reduction of car body weight, the shapes of auto-body panels are becoming more complex and new light alternative materials are being introduced into car body structures and panels, resulting in a more complex forming process. As a lightweight material, aluminum alloy has good mechanical performance compared to traditional steel, but is about 60% lighter. Aluminum alloy applied to the inner and outer panels of auto-body parts can make a 40% reduction in the weight of the whole automobile. In an impact wreck experiment, an aluminum alloy plate can absorb 50% more energy than a steel plate. It is believed that aluminum alloy is a promising lightweight material for automotive body structures and panels [3]. Many scholars have forecast the formability of aluminum alloys through simulation techniques. For example, through stamping process simulations of a door inner and a lift-gate made of 5000 and 6000 series aluminum alloys, Masahiko Jinta obtained the influence of aluminum parameters on wrinkling in the stamping process [4]; likewise, through stamping simulations of the aluminum alloys AA5022, AA5023, A5182 and 6000-A, he obtained the lightweight effects and the optimal stamping technical parameters when applied to auto panels [5].
In this study, stamping process simulations are carried out on hood inner panels made of 3 representative aluminum alloys (5182-O, 6111-T4 and 6009-T4), with the optimal parameters given in Section 2. The lightweighting result is given at the end of this study by comparing the mass of the hood inner panel in the two materials.
2. Analysis on Formability of Hood Inner Panel
The FE model of the hood inner panel used in the sheet metal forming process is illustrated in Figure 1. The binder hold force (BHF) is a main parameter affecting part quality in the forming process: if it is too low or too high, wrinkles or cracks are prone to appear during stretching. The draw bead layout is also important in the tool design process; with a rational distribution, the flow of sheet material can be controlled to prevent wrinkling and cracking. Because of their small dimensions, it is not easy to simulate the draw beads exactly in the forming process: if the draw beads are meshed with mini-sized elements, with consideration of the contact between sheet metal and draw beads, the computational cost increases greatly. As an equivalent draw bead in the FE model, a restraining force exerted along a line (the draw bead contour) on the surface of the tool, instead of a draw bead of a certain height, is simulated to have the same effect on the flow of
material [6]. Herein, the restraint is a uniformly distributed force exerted vertically along the lines on the tool surface, referring to Figure 1.
Figure 1. The FE model and draw beads
In this study, the orthogonal experiment method is used to determine the initial optimal parameters of BHF and draw bead height, which are then used in the numerical simulation of the forming process. Firstly, the BHF is set to 3 levels of 50t, 40t and 30t, and the draw beads are circular single ribs. Through a number of simulations, 3 draw bead heights and the corresponding maximal restraining forces are obtained, i.e. 6.5mm (corresponding to a maximal restraining force of 177N/m), 5.7mm (146N/m) and 5.5mm (130N/m). The orthogonal experiment thus has 4 factors, each at 3 levels, as listed in the orthogonal table L9(3^4) (Table 1). The simulation experiments are carried out and a number of results are obtained accordingly. Herein, only 3 representative results, Nos. 1, 5 and 9, are extracted for further analysis, and the Forming Limit Diagrams (FLD) of these forming results are shown in Figure 2, Figure 3 and Figure 4, respectively.

Table 1. Orthogonal table L9(3^4)

Number  BHF (t)  Top draw bead height (mm)  Bottom draw bead height (mm)  Right/left draw bead height (mm)
1       30       5.5                        5.5                           5.5
2       30       5.7                        5.7                           5.7
3       30       6.5                        6.5                           6.5
4       40       5.5                        5.7                           6.5
5       40       5.7                        6.5                           5.5
6       40       6.5                        5.5                           5.7
7       50       5.5                        6.5                           5.5
8       50       5.7                        5.5                           6.5
9       50       6.5                        5.7                           5.5
The ambiguous profile shown in Figure 3 probably indicates a low binder hold force or a small draw bead restraining force, which makes the flow of material too fast for sufficient stretching. The formed part shown in Figure 4 has better quality, with a deformation ratio of over 90%; although local areas at the top and bottom are insufficiently stretched, the result meets the engineering requirement.
Figure 2. FLD and simulation result of No.1
Figure 3. FLD and simulation result of No.5
There are 2 areas, marked “G” and “E” in Figure 4, that show a cracking trend; the probable causes of the difficult material flow there are: (1) the binder hold force is too high; (2) the draw bead restraining force is too high; (3) the geometrical shape and location of these areas create the difficulty. The simulation result in Figure 3 is nevertheless acceptable; the contour of thickness (T) is illustrated in Figure 5. The minimal thickness is 0.53mm, a thinning ratio of 24%, which approaches the risk of cracking, especially if the binder hold force or draw bead height were increased. The maximal thickness is 0.853mm, a thickening ratio of 22%; the wrinkles appear in the addendum areas, however, without affecting the final quality of the part. Thus, a set of optimal forming parameters can be obtained as follows: the binder hold force is 40t; the addendum has parameters R = 20mm, r = 10mm, V = 20° and D = 41mm, as described in Figure 6; the friction coefficient is 0.1 and the clearance between punch and die is 0.784mm. The distribution and geometrical shape of the draw beads are as described previously.
Figure 4. FLD and simulation result of No.9
Figure 5. Thickness distribution of No.5
Figure 6. Parameters of the addendum
3. Aluminium Alloy Stamping Simulation and Optimization
Aluminum is a typically anisotropic material whose yield function differs from that of steel. It is also useful to find out the effect of sheet thickness on stamping performance after the substitution, since the elastic moduli of the two materials are different. The hood inner panel is geometrically complex; the demand on its surface quality is not very high, but the demand on corrosion resistance is. By and large, the steel can be replaced with 5000-series or 6000-series aluminum alloys, which have been tried in the automotive industry [7-8]. In this study, three materials, 5182-O, 6111-T4 and 6009-T4, are used in the stamping simulation in place of the St16 originally used for the panel.

3.1 Fundamental Attributes of Aluminum Alloy
Aluminum alloy has about one third the density of steel. With good formability and manufacturing performance, aluminum alloy lends itself to forming. Its stretch-draw ratio and through-thickness anisotropy coefficient r are lower than those of steel, but its hardening exponent n is almost the same as its steel counterpart [9-10]. These basic attributes raise two problems for the substitution: (1) local stretch-draw performance must be improved to remove the risk of cracking; (2) the compensation of springback remains a problem to deal with.

3.2 Thickness Evaluation of Aluminum Alloy Sheet
(1) Using bending rigidity as the criterion, the thicknesses of the two materials are related by

$t_{Al} / t_{S} = (E_{S} / E_{Al})^{1/3}$    (1)

where $t_{Al}$ and $t_{S}$ are the thicknesses of the aluminum alloy and steel plates, and $E_{Al}$ and $E_{S}$ are their elastic moduli.

(2) If bending strength is used as the criterion, the thicknesses can be expressed as

$t_{Al} / t_{S} = (\sigma_{S} / \sigma_{Al})^{1/2}$    (2)

where $\sigma_{S}$ and $\sigma_{Al}$ are the yield strengths of the steel and aluminum alloy plates [11].
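As a worked example of Equations (1) and (2): with typical elastic moduli of roughly 210 GPa for steel and 70 GPa for aluminum (assumed values for illustration, not given in the paper), the 0.7 mm steel panel maps to about 1.0 mm of aluminum under the equal-rigidity criterion, consistent with the thicknesses later used in Table 2.

```python
def t_equal_rigidity(t_steel, e_steel, e_al):
    """Equation (1): equal bending rigidity, E * t^3 = const."""
    return t_steel * (e_steel / e_al) ** (1.0 / 3.0)

def t_equal_strength(t_steel, sigma_steel, sigma_al):
    """Equation (2): equal bending strength, sigma * t^2 = const."""
    return t_steel * (sigma_steel / sigma_al) ** 0.5

# 0.7 mm steel with assumed E_S = 210 GPa, E_Al = 70 GPa -> about 1.01 mm.
t_al = t_equal_rigidity(0.7, 210.0, 70.0)
```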
3.3 Yield Function of Aluminum Alloy Plate
In sheet metal forming analysis, in addition to the material hardening caused by plastic deformation, attention must be paid to the anisotropy of the material behavior. Among the yield functions for anisotropic materials, the Barlat 1989 yield function [12] describes the yield behavior of sheet metal more reasonably, simulates the metal flow in the stretch-draw forming process more efficiently, and captures the influence of the anisotropy and of the yield function exponent m on sheet metal flow and the forming limit in the stamping process. For aluminum alloy sheet metal forming, the Barlat 1989 yield function can be written as:
$\Phi = a|K_1 + K_2|^m + a|K_1 - K_2|^m + c|2K_2|^m = 2\sigma_y^m$    (3)
where $\sigma_y$ is the yield stress, $a$ and $c$ are anisotropy coefficients in the thickness direction, $m$ is the Barlat exponent, and the constants $K_1$ and $K_2$ can be obtained from the anisotropy parameters $r_0$, $r_{45}$ and $r_{90}$, which represent the material anisotropy.
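For illustration, Equation (3) can be evaluated numerically. The sketch below uses one common plane-stress parameterization of the Barlat 1989 model, in which a, c and the stress-ratio parameter h are computed from r0 and r90, while p (normally fitted to r45) is taken as an input; this particular parameterization is an assumption for illustration, not necessarily the paper's exact implementation.

```python
from math import sqrt

def barlat89_equiv_stress(sx, sy, txy, r0, r90, m, p=1.0):
    """Equivalent stress from a|K1+K2|^m + a|K1-K2|^m + c|2K2|^m = 2*sig^m."""
    # a, c, h from the r-values (one common plane-stress parameterization).
    c = 2.0 * sqrt((r0 / (1.0 + r0)) * (r90 / (1.0 + r90)))
    a = 2.0 - c
    h = sqrt((r0 / (1.0 + r0)) * ((1.0 + r90) / r90))
    k1 = (sx + h * sy) / 2.0
    k2 = sqrt(((sx - h * sy) / 2.0) ** 2 + (p * txy) ** 2)
    phi = a * abs(k1 + k2) ** m + a * abs(k1 - k2) ** m + c * abs(2.0 * k2) ** m
    return (phi / 2.0) ** (1.0 / m)
```

With isotropy (r0 = r90 = 1) and m = 2 the expression reduces to the von Mises criterion, which gives a convenient sanity check: uniaxial tension returns the applied stress.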
To sum up, aluminum alloy has lower values of the stretch-draw ratio and of the through-thickness anisotropy coefficient r, so its formability is worse than that of steel. Experience indicates, however, that aluminum alloy can be used to make automotive panels with good stamping techniques. Automotive panels made of aluminum alloy are much lighter than steel ones, which benefits fuel economy and dynamic performance.

3.4 Stamping Simulation of the Aluminum Alloy Hood Inner Panel
The FLDs of the 5182-O, 6111-T4 and 6009-T4 hood inner panels, obtained with the optimized stamping parameters described in Section 2, are shown in Figures 7, 8 and 9.
Figure 7. The FLD curve of 5182-O
Figure 8. The FLD curve of 6111-T4
As displayed in Figures 7, 8 and 9, the forming of the hood inner panel shows no failed features; only some insufficiently deformed areas exist at the top and bottom of the panel. With a safety margin of 10%, the thickness range of the risk areas is 0.7-0.77 mm. The 6009-T4 hood inner panel has a minimal thickness of 0.768 mm and a maximal thickness of 1.244 mm; since the minimal thickness lies inside the risk range, the binder holding force cannot be increased any further. The 5182-O hood inner panel has a minimal thickness of 0.794 mm and a maximal thickness of 1.209 mm, and the 6111-T4 panel a minimal thickness of 0.803 mm and a maximal thickness of 1.252 mm; since their minimal thicknesses lie outside the risk range, the binder holding force can be raised to 50 t. The resulting FLDs are shown in Figures 10 and 11 respectively.
Figure 9. The FLD curve of 6009-T4
Insufficiently deformed areas remain at the top and bottom sections of the hood inner panel in both Figure 10 and Figure 11. The 5182-O hood inner panel now has a minimal thickness of 0.774 mm, a change of 22.6%, so increasing the binder holding force further is not reasonable because the forming limit has been reached. The 6111-T4 hood inner panel has a minimal thickness of 0.773 mm, close to the risk range, so continuing to increase the binder holding force is also unreasonable.
Figure 10. The FLD curve of 5182-O with a BHF of 50t
Figure 11. The FLD curve of 6111-T4 with a BHF of 50t
Therefore, aluminum alloys 5182-O, 6111-T4 and 6009-T4 can all be used to produce the hood inner panel instead of St16. For 5182-O and 6111-T4, however, the binder holding force should be increased from the optimized 40 t to 50 t to eliminate insufficient deformation. Consequently, aluminum alloy 6009-T4 is recommended as a good substitute for St16. Table 2 shows the comparative lightweighting results: the aluminum alloy hood inner panel is 3.97 kg lighter, a weight reduction ratio of 51%, which is a remarkable outcome.

Table 2. The lightweighting results of the hood inner panel

Item                        Thickness (mm)   Mass (kg)
Steel                       0.7              7.80
Aluminum alloy              1                3.83
Weight reduction ratio (%)                   51
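The figures in Table 2 follow from simple arithmetic:

```python
# Quick consistency check of the Table 2 masses (kg).
m_steel, m_al = 7.80, 3.83

saving = m_steel - m_al             # 3.97 kg
ratio = 100.0 * saving / m_steel    # about 50.9 %, reported as 51 %

assert abs(saving - 3.97) < 1e-9
assert round(ratio) == 51
```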
4. Conclusion
In this study, the orthogonal method is used together with stamping process simulation of the hood inner panel to obtain optimal process parameters. Based on the mechanical performance of aluminum alloy, the Barlat 1989 yield function is adopted to describe the yield behavior of aluminum alloy plate. By comparing the mechanical parameters of aluminum and steel, the stamping quality problems of the steel-to-aluminum substitution are estimated. Three representative aluminum alloys are evaluated as substitutes for St16 by means of numerical simulation. The simulation results show that aluminum alloys 5182-O, 6111-T4 and 6009-T4 can all replace St16, with 6009-T4 especially recommended. Finally, it can be concluded that the use of aluminum alloy instead of St16 in the practical production of hood inner panels is viable, and the lightweighting effect is remarkable, reaching a weight reduction ratio of 51%.
5. References

[1] Ashley S. Steel cars face a weighty decision. Mechanical Engineering, 2003, 119(2): 56-61.
[2] Lan F, Chen J, Lin J, et al. Springback simulation and analysis in U-typed sheet metal forming processes. Journal of Plasticity Engineering, 2004, 11(5): 78-84.
[3] Schaffer G B. On the development of sintered aluminium alloys for industrial applications. Materials Technology, 2001, 16(4): 245-249.
[4] Jinta M, Sakai Y. Press forming analysis of aluminium auto body panel: wrinkle behaviour in 5000 and 6000 series aluminium alloy sheet forming. Journal of Materials Processing Technology, 2006, 34(7): 35-39.
[5] Jinta M, Sakai Y. Press forming development of aluminium auto body panel for electric vehicle. Journal of Materials Processing Technology, 2000, 5(1): 51-60.
[6] Cole G S, Sherman A M. Lightweight materials for automotive applications. Materials Characterization, 2005, 35(1): 3-9.
[7] Hayashi H, Nakagawa T. Recent trends in sheet metals and their formability in manufacturing automotive panels. Journal of Materials Processing Technology, 2004, 46(3-4): 455-487.
[8] Ni C-M. Stamping and hydro-forming process simulations with 3-D finite element code. SAE Transactions of Materials & Manufacturing, 2004, No. 940753: 512-534.
[9] Lan F, Lin J, Chen J. An integrated numerical technique in determining blank shape for net-shape sheet metal forming. Journal of Materials Processing Technology, 2006, 177(1-3): 72-75.
[10] Lan F, Chen J, Lin J. A method of constructing smooth tool surfaces for FE prediction of springback in sheet metal forming. Journal of Materials Processing Technology, 2006, 177(1-3): 382-385.
[11] Shanghai Science & Technology Communion Party. Orthogonal Experiment Method. Shanghai: Shanghai People's Press, 2005.
[12] Lin Z, Li S. Sheet Metal Forming Simulation of Automotive Body Panels. Beijing: China Machine Press, 2005 (in Chinese).
Studies on Fast Pareto Genetic Algorithm Based on Fast Fitness Identification and External Population Updating Scheme
Qingsheng Xie, Shaobo Li, Guanci Yang
Institute of CAD/CIMS, Guizhou University, Guiyang, China, 550003
Abstract This paper investigates a fast Pareto genetic algorithm based on fast fitness identification and an external population updating scheme (FPGA) for searching the Pareto-optimal set. It rests on a new fast fitness identification algorithm for individuals and on a clustering-based external population updating scheme that maintains population diversity and an even distribution of Pareto solutions. Experiments on a set of multi-objective 0/1 knapsack optimization problems show that FPGA can obtain high-quality, well-distributed non-dominated Pareto solutions with less computational effort than other state-of-the-art algorithms, and that FPGA outperforms the representative SPEA in convergence speed. Keywords: fast genetic algorithm; Pareto optimality; fast fitness identification algorithm; fast update algorithm
1. Introduction
Many engineering problems involve the simultaneous optimization of several incommensurable and often competing objectives. Often there is no single optimal solution, but rather a set of alternative solutions. These solutions are optimal in the wider sense that no other solutions in the search space are superior to them when all objectives are considered. For that very reason, searching for Pareto optima is an extraordinarily complicated task in a high-dimensional decision-making space. Traditionally, multi-objective optimization methods transform the multiple objective functions into a single objective function through an evaluation function. Using single-objective optimization methods to search for optimal solutions, however, has the following shortcomings. Firstly, constructing a single-objective evaluation function requires decision-makers to provide profound preference knowledge or experience, which is impossible for many engineering problems. Secondly, most single-objective optimization methods are based on local search algorithms; although they may find a locally or even globally optimal solution of a single-objective problem, they cannot obtain the Pareto-optimal set or a uniformly distributed set of optimal solutions,
that is, parallel search is impossible, and the flexibility requirements of multi-objective decision-making or of dynamically changing environments cannot be met. Thirdly, whenever the decision-maker needs an alternative Pareto-optimal solution, the evaluation function must be reconstructed and the search algorithm rerun. A genetic algorithm (GA) achieves individual recombination through genetic reproduction and drives population evolution through the selection operator; thanks to its inherent parallelism it can efficiently handle a whole set of solutions in a single population. References [1,2] hold that multi-objective search and optimization is the most suitable application field of GAs. Zitzler [3] put forward the Strength Pareto EA (SPEA) and applied it successfully to the multi-objective 0/1 knapsack problem. SPEA employs two populations: a population P of size N for genetic reproduction, and an external population P′ of size N′ that stores the non-dominated solutions found so far. According to the domination relationship between the individuals of P and the members of P′, SPEA determines each individual's Pareto fitness, and a clustering analysis algorithm is used to maintain population diversity. SPEA is rightly regarded as a significant Pareto-optimal search algorithm; however, its fitness evaluation and clustering analysis require a computational complexity of O((N + N′)³). Reference [4] proposes a vicinity crowding algorithm, different from the clustering algorithm, that deletes superfluous solutions when the current solution set exceeds the fixed population scale so as to keep the Pareto-optimal solutions uniformly distributed, but only its mean and variance are superior to SPEA in a certain sense. Reference [5] brings forward a multi-objective evolutionary algorithm based on orthogonal design to solve multi-objective optimization problems, but it is essentially just a niching evolution and segmented iterative process.
Other algorithms implement niching through the fitness sharing method [6,7] to maintain population diversity, which requires profound preference knowledge or experience to set the sharing radius parameter, and this is very difficult in practice. Based on these considerations, this paper investigates a fast fitness identification and evaluation algorithm with a time complexity of O(m(N + N′)²), and an effective method with a time complexity of O(NN′ log N′) to maintain population diversity.
2. Concept of Pareto Optimality
In essence, multi-objective optimization is vector optimization: a general multi-objective optimization problem can be described by a vector function f that maps a tuple of m parameters (decision variables) to a tuple of n objectives. Formally:

max/min $y = f(x) = (f_1(x), \ldots, f_n(x))$
subject to $x = (x_1, x_2, \ldots, x_m) \in X$, $y = (y_1, y_2, \ldots, y_n) \in Y$
where x is called the decision vector, X the parameter space, y the objective vector, and Y the objective space. The solution set of a multi-objective optimization problem consists of all decision vectors whose corresponding objective vectors cannot be improved in any dimension without degradation in another; these vectors are known as Pareto optimal. Mathematically, the concept of Pareto optimality is as follows. Assume, without loss of generality, a maximization problem and consider two decision vectors $a, b \in X$. Then a is said to dominate b (also written $a \succ b$) iff $\forall i \in \{1, 2, \ldots, n\}: f_i(a) \ge f_i(b)$ and $\exists j \in \{1, 2, \ldots, n\}: f_j(a) > f_j(b)$. Additionally, in this study a is said to cover b ($a \succeq b$) iff $a \succ b$ or $f(a) = f(b)$. All decision vectors that are not dominated by any other decision vector of a given set are called non-dominated with regard to this set; if the set is clear from the context, we simply leave it out. The decision vectors that are non-dominated within the entire search space are denoted Pareto optimal and constitute the so-called Pareto-optimal set or Pareto-optimal front [3].
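The dominance and cover relations translate directly into code; a minimal sketch for the maximization case (function names are illustrative, objective vectors are tuples):

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def covers(a, b):
    """a covers b: a dominates b or their objective vectors are equal."""
    return dominates(a, b) or tuple(a) == tuple(b)

def non_dominated(front):
    """Members of `front` not dominated by any other member."""
    return [a for a in front if not any(dominates(b, a) for b in front)]
```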
3. Pareto Genetic Algorithms

3.1 Structure and Flow of the Algorithm
The fast Pareto genetic algorithm based on fast fitness identification and external population updating (FPGA) employs an evolution population P of size N and an external population P′ of size N′. The former is used for the crossover and mutation operators; the latter stores the non-dominated Pareto solutions identified so far. To implement a focused evolutionary search with an elitist reservation strategy, binary tournament selection with replacement is used over the two populations. This study compares two multi-objective EAs on multi-objective 0/1 knapsack problems with nine different problem settings; each candidate solution is represented by a binary encoding of length m, and each binary bit x_i, i ∈ [1, m], encodes one variable's value. The flow of FPGA is shown in Figure 1 and can be formalized as follows.

1. Generate an initial population P and an empty external non-dominated set P′;
2. Determine the non-dominated members of P;
3. Copy the non-dominated members of P to P′ one by one and remove the solutions within P′ that are covered by any other member of P′, using the fast update algorithm based on clustering crowding (FUC);
4. Calculate the fitness of each individual in P as well as in P′ using the fast fitness identification algorithm (FIA);
5. Select individuals from P + P′ (multi-set union) until the mating pool is filled; in this study, binary tournament selection with replacement is used;
6. Apply problem-specific crossover and mutation operators as usual;
7. If the maximum number of generations is reached, stop; else go to step 2.
Figure 1. The flow of FPGA
3.2 Fast Update Algorithm Based on Clustering Crowding (FUC)
In fact, implementing the fitness sharing method [6,7] requires resolving the sharing radius parameter. Fitness sharing therefore needs experience with the objective problem, which is hard to obtain for many engineering problems; even when such experience is available, its relativity and subjectivity may prevent fitness sharing from being applied successfully. Clustering analysis [3] is complex and computationally heavy. FPGA therefore proposes a fast update algorithm based on clustering crowding (FUC) as the niching method, which makes the non-dominated Pareto-optimal solutions stored in the external population P′ approach the Pareto-optimal curve or surface. The flow of FUC is shown in Figure 2, and an implementation is described by:

for j ∈ {non-dominated individuals of population P}:
    S ← ∅, flag ← 1
    for k ∈ P′ while flag = 1:
        if j ≻ k then S ← S ∪ {k}
        else if j ≺ k or j = k then flag ← 0
        next k
    if flag = 1:
        if S ≠ ∅ then P′ ← (P′ − S) ∪ {j}
        else if |P′| < N′ then P′ ← P′ ∪ {j}
        else:
            calculate the distances between j and each member of P′;
            sort all distances with quicksort;
            find the member most similar to j and replace it with j
    next j
The outer "for" loop of FUC scans population P up to N times, and the inner "for" scans P′ up to N′ times. The statement P′ ← (P′ − S) ∪ {j} loops N′ times in the worst case. In the deepest "else" branch, the similarity replacement has a computational complexity of only O(N′ log N′), so the worst-case complexity of FUC is O(NN′ log N′), while in the best case its magnitude is O(N), lower than the computational complexity of the fitness sharing method and of clustering analysis.
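A runnable sketch of FUC under the definitions above (names are illustrative; distances are Euclidean in objective space, and the set S is collected up front rather than during the scan, which yields the same accept/reject decision):

```python
import math

def dominates(a, b):  # Pareto dominance, maximization
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def fuc_update(external, candidates, capacity):
    """Insert the non-dominated `candidates` of P into the external set P'.

    `external` and `candidates` are lists of objective-vector tuples;
    `capacity` is N'. Returns the updated external population.
    """
    external = list(external)            # do not mutate the caller's list
    for j in candidates:
        # flag = 0 in the pseudocode: j is dominated by, or equal to, a member.
        if any(dominates(k, j) or k == j for k in external):
            continue
        dominated = [k for k in external if dominates(j, k)]   # the set S
        if dominated:                                  # P' = (P' - S) U {j}
            external = [k for k in external if k not in dominated] + [j]
        elif len(external) < capacity:                 # room left: P' = P' U {j}
            external.append(j)
        else:                                          # full: replace most similar
            i = min(range(len(external)),
                    key=lambda t: math.dist(j, external[t]))
            external[i] = j
    return external
```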
3.3 Fast Fitness Identification Algorithm (FIA)
The fitness of an individual is expressed by the index of its non-domination level in P ∪ P′; the fitness of an individual at level k is k. For an arbitrary individual i ∈ P ∪ P′, n_i is the number of individuals that dominate member i, S_i is the subset of members dominated by i, and F_k is the non-dominated subset at level k. Based on these definitions, the implementation of FIA can be described as follows.

k ← 1
for each solution i ∈ P ∪ P′:
    S_i ← ∅, n_i ← 0
    for each solution j ∈ P ∪ P′:
        if j ≺ i then S_i ← S_i ∪ {j}
        else if j ≻ i then n_i ← n_i + 1
        next j
    if n_i = 0 then F_k ← F_k ∪ {i}
    next i
while F_k ≠ ∅:
    H ← ∅
    for each solution i ∈ F_k:
        for each solution j ∈ S_i:
            n_j ← n_j − 1
            if n_j = 0 then H ← H ∪ {j}
            next j
        next i
    k ← k + 1; F_k ← H

The outer "for" determines the N + N′ subsets S_i, the N + N′ values of n_i and the non-dominated solutions at level one, with a computational complexity of O((N + N′)²). The "while" statement determines the levels of the remaining non-dominated solutions; in the worst case there is one non-dominated solution per rank, and the complexity of determining the N + N′ − 1 ranks is then O((N + N′)²). So, in the worst case, the overall complexity of the fitness evaluation is O((N + N′)²) + O((N + N′)²), i.e., O((N + N′)²). With FIA as the fitness evaluation method, many individuals share each rank and are deemed to have the same competitive ability. In that case it is hard for a single individual of very high fitness to suppress the others, which potentially helps to keep the population diverse in a certain sense; we have verified this conclusion in the experiments.
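As reconstructed above, FIA is essentially a fast non-dominated sorting; a minimal runnable sketch (illustrative names, objective vectors as tuples, maximization):

```python
def dominates(a, b):  # Pareto dominance, maximization
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def fia(pop):
    """Return the non-domination level (fitness) of every individual in `pop`.

    Level 1 holds the non-dominated solutions, level 2 the next front, etc.
    """
    n = len(pop)
    S = [[] for _ in range(n)]   # S[i]: indices of individuals dominated by i
    count = [0] * n              # n_i: how many individuals dominate i
    level = [0] * n
    front = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(pop[i], pop[j]):
                S[i].append(j)
            elif dominates(pop[j], pop[i]):
                count[i] += 1
        if count[i] == 0:
            level[i] = 1
            front.append(i)
    k = 1
    while front:                 # peel off one front per iteration
        nxt = []
        for i in front:
            for j in S[i]:
                count[j] -= 1
                if count[j] == 0:
                    level[j] = k + 1
                    nxt.append(j)
        k += 1
        front = nxt
    return level
```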
Figure 2. Flow of the FUC. Since the algorithm concerns the distribution of the non-dominated front, the Euclidean distance is used to measure the distance between individuals.
4. Simulation Optimization Experiment

4.1 Test Problems
Generally, a 0/1 knapsack problem consists of a set of items, a weight and a profit associated with each item, and an upper bound on the capacity of the knapsack. The task is to find a subset of items that maximizes the total profit of the subset while all selected items fit into the knapsack, i.e., the total weight does not exceed the given capacity. This single-objective problem can be extended directly
to the multi-objective case by allowing an arbitrary number of knapsacks. Formally, the multi-objective 0/1 knapsack problem considered here is defined in the following way. Given a set of m items and a set of n knapsacks, with

p_{i,j} = profit of item j according to knapsack i,
w_{i,j} = weight of item j according to knapsack i,
c_i = capacity of knapsack i,

find a vector $x = (x_1, x_2, \ldots, x_m) \in \{0, 1\}^m$ such that

$\forall i \in \{1, 2, \ldots, n\}: \sum_{j=1}^{m} w_{i,j} x_j \le c_i$

and for which $f(x) = (f_1(x), f_2(x), \ldots, f_n(x))$ is maximal, where $f_i(x) = \sum_{j=1}^{m} p_{i,j} x_j$ and $x_j = 1$ iff item j is selected. In order to obtain reliable and sound results, we used nine different test problems in which both the number of knapsacks and the number of items were varied: two, three and four objectives were taken under consideration, in combination with 250, 500 and 750 items. Uncorrelated profits and weights were chosen, where $p_{i,j}$ and $w_{i,j}$ are random integers in the interval [10, 100]; Table 1 shows the detailed settings. The knapsack capacities were set to half the total weight of the corresponding knapsack, $c_i = 0.5 \sum_{j=1}^{m} w_{i,j}$.
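The construction above can be sketched directly; the generator below follows the stated recipe (uncorrelated integer profits and weights in [10, 100], capacities at half the corresponding total weight), with illustrative function names:

```python
import random

def make_instance(n_knapsacks, n_items, seed=0):
    """Random multi-objective 0/1 knapsack instance as described above."""
    rng = random.Random(seed)
    p = [[rng.randint(10, 100) for _ in range(n_items)] for _ in range(n_knapsacks)]
    w = [[rng.randint(10, 100) for _ in range(n_items)] for _ in range(n_knapsacks)]
    c = [0.5 * sum(row) for row in w]           # c_i = 0.5 * sum_j w_ij
    return p, w, c

def objectives(x, p):
    """f_i(x) = sum_j p[i][j] * x[j], one value per knapsack."""
    return tuple(sum(pi[j] for j in range(len(x)) if x[j]) for pi in p)

def feasible(x, w, c):
    """All capacity constraints sum_j w[i][j] * x[j] <= c[i] hold."""
    return all(sum(wi[j] for j in range(len(x)) if x[j]) <= ci
               for wi, ci in zip(w, c))
```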
Table 1. Parameters adjusted to the problem complexity: population P size (N), external population P′ size (N′), number of knapsacks (n), number of items (m), and the coverage values R(FPGA, SPEA) and R(SPEA, FPGA)

n    m    N′   N    R(FPGA, SPEA)   R(SPEA, FPGA)
2    250  30   120  0.81495         0.04856
2    500  40   160  0.93853         0.02427
2    750  50   200  0.95969         0.01643
3    250  40   160  0.99663         0.00203
3    500  50   200  1.00000         0
3    750  75   225  1.00000         0
4    250  50   200  1.00000         0
4    500  60   240  1.00000         0
4    750  70   280  1.00000         0
A binary string s of length m is used to encode the solution $x \in \{0, 1\}^m$. Since many codings lead to infeasible solutions, a simple repair method is applied to the genotype s: x = r(s). The repair algorithm removes items from the solution coded by s step by step until all capacity constraints are fulfilled. The order in which the items are deleted is determined by the maximum profit/weight ratio per item; for item j the maximum profit/weight ratio $q_j$ is given by

$q_j = \max_{i=1}^{n} \{ p_{i,j} / w_{i,j} \}$.

The items are considered in increasing order of $q_j$, i.e., those achieving the lowest profit per weight unit are removed first. This mechanism fulfills the capacity constraints while diminishing the overall profit as little as possible. In our testing, the probabilities of crossover (one-point) and mutation were fixed at 0.8 and 0.01, respectively.
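The repair scheme can be sketched as follows (illustrative names; p, w and c as in Section 4.1: per-knapsack profit and weight matrices and capacities):

```python
def repair(x, p, w, c):
    """Greedy repair r(s): drop selected items in increasing order of
    q_j = max_i p[i][j] / w[i][j] until every capacity constraint holds."""
    x = list(x)
    load = [sum(wi[j] for j in range(len(x)) if x[j]) for wi in w]
    q = {j: max(pi[j] / wi[j] for pi, wi in zip(p, w))
         for j in range(len(x)) if x[j]}
    for j in sorted(q, key=q.get):            # lowest profit/weight first
        if all(l <= ci for l, ci in zip(load, c)):
            break                             # already feasible
        x[j] = 0
        load = [l - wi[j] for l, wi in zip(load, w)]
    return x
```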
4.2 Performance Criteria
In order to compare the advantage of FPGA over SPEA, the coverage R of two sets, the final Pareto-optimal set of decision vectors found by FPGA and the one found by SPEA, is used. Let $X', X'' \subseteq X$ be two sets of decision vectors; the function R maps the ordered pair $(X', X'')$ to the interval [0, 1]:

$R(X', X'') = |\{ x'' \in X'' : \exists x' \in X', x' \succeq x'' \}| \; / \; |X''|$

with $R_{FPGA,SPEA} = R(P'_{FPGA}, P'_{SPEA})$ and $R_{SPEA,FPGA} = R(P'_{SPEA}, P'_{FPGA})$. The value $R(X', X'') = 1$ means that all points in $X''$ are covered by points in $X'$; the opposite, $R(X', X'') = 0$, represents the situation in which none of the points in $X''$ is covered by the set $X'$. Further, let $X' = (x^1, x^2, \ldots, x^k) \subseteq X$ be a set of k decision vectors. The function $D(X')$ gives the size of the region enclosed by the union of the polytopes $p^1, p^2, \ldots, p^k$, where each $p^i$ is formed by the intersections of the hyperplanes arising out of $x^i$ together with the axes: for each axis of the objective space, there exists a hyperplane perpendicular to that axis and passing through the point $(f_1(x^i), f_2(x^i), \ldots, f_n(x^i))$. In the two-dimensional (2-D) case, each $p^i$ is the rectangle defined by the origin and the point $(f_1(x^i), f_2(x^i))$.
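The coverage measure translates directly into code; a minimal sketch using the cover relation of Section 2 (illustrative names, maximization):

```python
def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def covers(a, b):
    return dominates(a, b) or tuple(a) == tuple(b)

def coverage(x1, x2):
    """R(X', X''): fraction of X'' covered by at least one member of X'."""
    return sum(1 for b in x2 if any(covers(a, b) for a in x1)) / len(x2)
```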
4.3 Experimental Results and Analysis
On all tested multi-objective 0/1 knapsack problems, 10000 generations were simulated per optimization run, and FPGA and SPEA were each run 40 times independently from the same initial population. The arithmetic averages of $R_{FPGA,SPEA}$ and $R_{SPEA,FPGA}$ over the 40 runs are shown in Table 1. As can be seen, FPGA outperforms the state-of-the-art SPEA in the quality of the final non-dominated Pareto-optimal solutions on all problems, and the more knapsacks and items involved, the greater the value of $R_{FPGA,SPEA}$ and the more pronounced FPGA's advantage. When the number of knapsacks n > 2, FPGA covers more than 99% of the fronts computed by SPEA; in contrast, SPEA covers less than 5% of the outcomes of FPGA even in its best
case. So, according to the coverage values of the two sets, we can conclude that FPGA provides the better performance compared to SPEA. In order to observe the distribution of the non-dominated Pareto solutions during the evolution process, a two-objective problem was chosen at random from the 40 independent runs of FPGA and SPEA, in combination with 250, 500 and 750 items; the distribution and evolution trend of the non-dominated Pareto-optimal set of the external population P′ is shown in Figure 3, where the tradeoff fronts obtained in two runs are plotted for the 2-D problems. As can be seen clearly in Figure 3, as the number of generations increases, the non-dominated Pareto solutions stored in the population P′ uniformly approximate every part of the Pareto-optimal front, and FPGA shows a more uniform distribution and a more rapid convergence trend than SPEA.
Figure 3. Tradeoff fronts (f1(x) vs. f2(x)) for two knapsacks: the distribution and evolution trend of the non-dominated Pareto-optimal sets of the external populations of FPGA and SPEA after 5000 and 8000 generations.

Figure 4. The increasing trend curves of the arithmetic average values of D(X′) against the number of generations, for FPGA and SPEA with two, three and four knapsacks.
Considering two, three and four objectives in combination with 750 items, we ran FPGA and SPEA 40 times each and then calculated the arithmetic average values of D(X′); their increasing trend curves are shown in Figure 4. The curves corresponding to FPGA have a steeper initial slope, which shows that FPGA has an advantage in convergence speed at the beginning of the evolutionary search. When the search reaches its relatively stable stage, the curves corresponding to FPGA lie above the curves belonging to SPEA, which indicates that FPGA also obtains more accurate non-dominated Pareto solutions in the later stages of the search.
Thereby, we conclude that FPGA is superior to SPEA in both convergence speed and the quality of the non-dominated Pareto solutions.
5. Conclusions
We propose a fast Pareto genetic algorithm based on fast fitness identification and an external population updating scheme for searching the Pareto-optimal set, which supplies alternative Pareto-optimal solution sets for multi-objective decision-making. FPGA is unique in two respects. Firstly, we put forward a fast update algorithm based on clustering crowding for maintaining population diversity and a uniform distribution of Pareto solutions; it is realized through an external population updating scheme that washes out the most similar individuals of the external population. Secondly, we propose a fast fitness identification algorithm with lower computational complexity than other methods of its kind.
6. Acknowledgements
This research is supported by the National Natural Science Foundation of China under Grants 50575047 and 50475185, the 863 Project of China under Grant 2006AA04Z130, the West Light Project of the Chinese Academy of Sciences ([2005]404), and the Foundation of Guizhou Province in China (2006-20).
7. References
[1] Martínez M A, Sanchis J, Blasco X. Genetic algorithms for multiobjective controller design. In: Proc. of the 1st International Work-conference on the Interplay Between Natural and Artificial Computation, 2005: 242.
[2] Li B, Chen L, Huang Z. Product configuration optimization using a multi-objective genetic algorithm. International Journal of Advanced Manufacturing Technology, 2006, 30(1): 20-29.
[3] Zitzler E, Thiele L. An evolutionary algorithm for multiobjective optimization: the Strength Pareto approach. Technical Report TIK-43, 2002: 19-26.
[4] Zhai Y, Cheng Z, Chen G, Li L. Multi-objective optimization immune algorithm based on Pareto. Computer Engineering and Applications, 2006, 42(24): 24-27.
[5] Zeng S-Y, Wei W, Kang L-S, Yao S-Z. A multi-objective evolutionary algorithm based on orthogonal design. Chinese Journal of Computers, 2005, 28(7): 1153-1162.
[6] Wang L, Liu Y, Xu Y. Multi-objective PSO algorithm based on fitness sharing and online elite archiving. Lecture Notes in Computer Science, 2006, 4113: 964-974.
[7] Horn J. Niche distributions on the Pareto optimal front. In: Proc. of the International Conference on Evolutionary Multi-criterion Optimization, 2003: 365-375.
Vibration Control Simulation of Offshore Platforms Based on Matlab and ANSYS Program
Dongmei Cai¹, Dong Zhao¹, Zhaofu Qu²
¹ University of Jinan
² Jinan Intellectual Property Office
Abstract The optimal parameters of the wideband multiple extended tuned mass damper (METMD) system have been studied using the Matlab and ANSYS programs. The theoretical optimal parameters of the METMD system and the platform are obtained from their motion equations. The theoretical analysis using Matlab shows that: (1) the platform has a better vibration control effect when the non-dimensional frequency bandwidth Ω, which is the ratio of the frequency range to the controlled (target) platform's natural frequency, is in [0.35, 0.6]; (2) the damping ratio ξ of the ETMD systems is in [0.05, 0.15]; and (3) the number of ETMDs is 5 when Ω = 0.45 and ξ = 0.1. A mega-frame platform with the METMD vibration control system is chosen as an example to test the theoretical results. The FEM simulation using the ANSYS program shows that the vibration decrease ratios of the whole platform under three different random wave forces are 38.7%, 33.7% and 44.7%, respectively. The METMD thus has a good vibration control effect on the mega-platform.

Keywords: Vibration theory, METMD system, ANSYS and Matlab
1. Introduction
Offshore platforms are usually built in a severe ocean environment. A platform has to suffer all kinds of loads, such as earthquake loads, wave loads, wind loads, ice loads and the loads caused by machines and equipment mounted on the platform, and it can vibrate severely under their combined action (Patil, 2005). Even light vibration of the platform can make operators feel panic. Zhao (2005) reported that the deck of the W12-1 platform vibrated very severely owing to the driving of the natural gas compressor, and such large vibration brings much inconvenience to the platform's operation. Acute exterior loads,
Project supported by the Scientific Research Foundation for Outstanding Young Scientists of Shandong Province (2007BS07003) and the Doctoral Foundation of the University of Jinan (B0607)
550
D. Cai, D. Zhao and Z. Qu
such as earthquake, wave, wind or ice loads, can destroy the whole platform (Duan, 1994). To increase the reliability and security of platforms, many vibration control methods have been applied to them. Among these, passive vibration control is widely used because it needs no additional energy and offers low cost, good control effect and easy realisation (Wu, 1997; Rana, 1998; Chang, 1999; Živanović, 2005). The most widely used passive vibration control method is the TMD (tuned mass damper). The traditional TMD method needs to append a large mass body to the controlled structure (Sun, 2000; Ricciardelli, 2000; Kwon, 2004), and so adds considerable extra load to the structure as well. This disadvantage makes it unsuitable for controlling the vibration of deep-water jacket platforms and flexible platforms. For this reason, researchers have started to seek new methods that add no extra mass to the platform. Zhao (2005) proposed using DTMD and METMD systems, which employ the equipment already on board as the mass units to consume vibration energy, to control platform vibration. This method adds no additional mass to the platform and makes good use of the equipment's inertial force, which is harmful to a traditional platform under huge external loads. To improve the vibration control performance, the optimal parameters of the METMD are studied below with the help of the Matlab and ANSYS programs.
2. Constitution of the METMD System
The METMD vibration control system is an association of several ETMD (extended tuned mass damper) vibration control systems. It uses several pieces of equipment mounted on the platform to control the platform's vibration. Each piece of equipment is connected to the platform by springs and dampers. Vibration energy is consumed by the springs and dampers and is not transferred to the platform under normal
Figure 1. The model of the platform with METMD system
working conditions. In abominable circumstances, such as earthquakes, typhoons, tsunamis and large ice loads, the parameters of the springs and dampers can be changed automatically and the equipment can then act as a TMD system to absorb the vibration energy. In this way the platform is protected from damage.

2.1 Vibration Control Theory of the METMD System
The model of the platform with the METMD system is shown in Figure 1. The platform is simplified as a single-degree-of-freedom system, and the METMD system is composed of m ETMD systems. Each ETMD system has a different frequency, and these frequencies surround the platform's natural frequency. The ETMDs' frequencies are denoted ω₁, ω₂, …, ω_m. For ease of analysis, we define

$\omega_0 = \frac{1}{m}\sum_{k=1}^{m}\omega_k$

as the ETMDs' average frequency, also called the center frequency,

$\Omega = (\omega_m - \omega_1)/\omega_0$

as the frequency bandwidth, and

$\omega_k = \omega_0\left[1 + \left(k - \frac{m+1}{2}\right)\frac{\Omega}{m-1}\right]$

as the kth ETMD's natural frequency. Suppose that the mass m_k and damping ratio ξ_k of every ETMD are the same; that is, the mass ratio μ_k of every ETMD to the platform is the same constant, and the m ETMD frequencies are distributed symmetrically around the center frequency. The motion equation of the platform with the METMD system can be written as

$m_s\ddot{x}_s + k_s x_s - \sum_{k=1}^{m} c_k(\dot{x}_k - \dot{x}_s) - \sum_{k=1}^{m} k_k(x_k - x_s) = f(t)$   (1)
where m_s and k_s are the mass and stiffness of the platform, respectively. The motion equation of the kth ETMD can be written as

$m_k\ddot{x}_k + c_k(\dot{x}_k - \dot{x}_s) + k_k(x_k - x_s) = 0$   (2)
where k = 1, 2, …, m, and m_k, c_k and k_k are the kth ETMD's mass, damping and stiffness, respectively. The motion equation of the platform-METMD system can be written in matrix form as

$M\ddot{X} + C\dot{X} + KX = F$   (3)

where M, C and K are the mass, damping and stiffness matrices, respectively.
$M = \mathrm{diag}(m_s,\ m_1,\ m_2,\ \ldots,\ m_m)$

$C = \begin{bmatrix} \sum_{k=1}^{m} c_k & -c_1 & -c_2 & \cdots & -c_m \\ -c_1 & c_1 & 0 & \cdots & 0 \\ -c_2 & 0 & c_2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -c_m & 0 & 0 & \cdots & c_m \end{bmatrix}$

$K = \begin{bmatrix} k_s + \sum_{k=1}^{m} k_k & -k_1 & -k_2 & \cdots & -k_m \\ -k_1 & k_1 & 0 & \cdots & 0 \\ -k_2 & 0 & k_2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -k_m & 0 & 0 & \cdots & k_m \end{bmatrix}$

The displacement vector is $X = [x_s\ x_1\ x_2\ \cdots\ x_m]^T$ and the external load vector is $F = [f(t)\ 0\ 0\ \cdots\ 0]^T$.

To solve Equation (3), we set f(t) = e^{iωt}. The solution of Equation (3) can then be written as

$X = [X_s\ X_1\ X_2\ \cdots\ X_m]^T e^{i\omega t}$   (4)
Substituting Equation (4) into Equation (3) gives

$\left(k_s - m_s\omega^2 + i\omega\sum_{k=1}^{m} c_k + \sum_{k=1}^{m} k_k\right)X_s - \sum_{k=1}^{m}(k_k + ic_k\omega)X_k = 1$   (5)

$-(k_k + ic_k\omega)X_s + (k_k - m_k\omega^2 + ic_k\omega)X_k = 0$   (6)

where k = 1, 2, …, m. Substituting Equation (6) into Equation (5), the solution can be written as

$X_s = \dfrac{1}{m_s\omega_s^2\left[\mathrm{Re}(z) + \mathrm{Im}(z)\,i\right]}$   (7)
where

$\mathrm{Re}(z) = 1 - \gamma^2 - \sum_{k=1}^{m}\dfrac{\mu_k\gamma^2\left[\lambda_k^2(\lambda_k^2 - \gamma^2) + (2\xi_k\lambda_k\gamma)^2\right]}{(\lambda_k^2 - \gamma^2)^2 + (2\xi_k\lambda_k\gamma)^2}$

$\mathrm{Im}(z) = \sum_{k=1}^{m}\dfrac{2\mu_k\xi_k\lambda_k\gamma^5}{(\lambda_k^2 - \gamma^2)^2 + (2\xi_k\lambda_k\gamma)^2}$

Here γ is the ratio of the load frequency to the platform's natural frequency; μ_k and ξ_k are the kth ETMD's mass ratio and damping ratio; and λ_k is the ratio of the kth ETMD's frequency to the platform's natural frequency. They can be written as γ = ω/ω_s, μ_k = m_k/m_s, λ_k = ω_k/ω_s and ξ_k = c_k/(2m_kω_k).

The dynamic amplification factor (DAF) can be written as

$\mathrm{DAF} = \dfrac{1}{\sqrt{\mathrm{Re}^2(z) + \mathrm{Im}^2(z)}}$   (8)

2.2 Parameter Research of the METMD
There are many kinds of machines and equipment on the platform, each with a different mass. The important question is how to choose a machine or piece of equipment as an ETMD system to control the platform's vibration. A bad choice of equipment may fail to absorb the platform's vibration energy, and the worst choice of mass can cause resonance and damage the platform.

2.2.1 Effect of the Frequency Bandwidth
The number of ETMDs is taken as 5, the mass ratio of all ETMDs to the remaining platform is 14%, the mass is distributed equally among the ETMDs, and the damping ratio of every ETMD is 0.1. The platform's own damping ratio is ignored because it is very small compared with the ETMDs'. The variation of the DAF as Ω changes continuously in [0, 1] and γ in [0.5, 1.5] is shown in Figure 2.

Figure 2. The variation of the DAF with γ and Ω (panels a, b and c)

Figure 2a shows that: (1) as the frequency bandwidth Ω increases, the bandwidth of controlled load frequencies and the vibration control effect are both enlarged; (2) the control effect decreases once the bandwidth increases beyond a certain level; (3) the platform is controlled quite well when Ω is in (0.35, 0.6); and (4) the best control effect emerges when Ω is close to 0.45. Figures 2b and 2c also show that the best control effect appears when Ω is close to 0. But the control
frequency bandwidth is then very narrow, and severe resonance can be excited if the load frequencies deviate even slightly from the controlled frequency band.

2.2.2 Effect of the Damping Ratio
As previously mentioned, the number of ETMDs is 5, the mass ratio of all ETMDs to the remaining platform is 14%, and the ETMD frequencies are assigned equally around the center frequency. The frequency bandwidth Ω is taken as 0.45 and the platform's damping ratio is ignored. Each ETMD's damping ratio changes continuously in (0, 1.5) and γ changes continuously in [0.5, 1.5]. The variation of the DAF is shown in Figure 3.
Figure 3. The variation of the DAF with γ and ξ (panels a, b and c)
Figure 3a shows that: (1) as the damping ratio ξ increases, the controlled load frequency bandwidth and the vibration control effect are enlarged too; (2) the control effect decreases once ξ increases beyond a certain level; (3) the platform is controlled quite well when ξ is in (0.05, 0.15); and (4) the best control effect emerges when ξ is close to 0.08. Figure 3b shows that many resonance peaks appear as the damping ratio approaches 0: then only the loads whose frequencies equal those of the ETMD systems are controlled, and the other, uncontrolled loads can make the platform vibrate acutely. Figure 3c shows that the value of the resonance peak decreases as ξ increases and the curve becomes smoother.

2.2.3 Effect of the Number of ETMDs
The number of ETMDs directly affects the number of controlled load frequencies and hence the vibration control effect of the METMD system. With Ω = 0.45, ξ = 0.1, the mass ratio equal to 14% and γ changing continuously in [0.5, 1.5], the effect of the number of ETMDs on the DAF is shown in Figure 4.
Figure 4. The variation of the DAF with the number of ETMDs
Figure 4 shows that, with the frequency bandwidth, mass ratio and damping ratio fixed, the vibration control effect increases with the number of ETMDs, but enlarging the number has little further effect once it exceeds a certain level. Figure 4 also shows that the platform is controlled well when the number of ETMDs is 5; the DAF curves almost coincide when the number exceeds 5, so further increasing the number of ETMDs has very little influence on the platform's vibration control.
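The closed-form response of Section 2.1 makes this parameter study easy to reproduce. The sketch below evaluates the DAF of Equations (7) and (8) for m equally spaced, equally massive ETMDs; the function name and the assumption that the center frequency ω₀ is tuned to the platform's natural frequency are ours, and the default values mirror the study above.

```python
import numpy as np

def daf(gamma, m=5, mu_total=0.14, bandwidth=0.45, xi=0.1):
    """Dynamic amplification factor per Eqs. (7)-(8): m ETMDs whose frequency
    ratios lambda_k are spread symmetrically over `bandwidth` around the
    platform's natural frequency (assumed equal to the center frequency)."""
    k = np.arange(1, m + 1)
    lam = 1.0 + (k - (m + 1) / 2.0) * bandwidth / (m - 1)   # lambda_k
    mu = mu_total / m                                        # equal mass ratios
    den = (lam**2 - gamma**2)**2 + (2 * xi * lam * gamma)**2
    re = 1 - gamma**2 - np.sum(
        mu * gamma**2 * (lam**2 * (lam**2 - gamma**2)
                         + (2 * xi * lam * gamma)**2) / den)
    im = np.sum(2 * mu * xi * lam * gamma**5 / den)
    return 1.0 / np.hypot(re, im)
```

Sweeping γ over [0.5, 1.5] with the defaults Ω = 0.45 and ξ = 0.1 reproduces the flattened resonance region seen in Figure 2, in contrast to the unbounded resonance of the bare platform at γ = 1.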
3. Vibration Control Simulation of Mega-frame Platforms with the METMD System

A 100-meter-high mega-frame platform (MFP) (Zhao, 2005) is used as an example to simulate the vibration control effect. The FEM model of the MFP with the METMD system is shown in Figure 5, and the platform models without and with the METMD system are shown in Figure 6. Beam elements are used to build the deck and pipe elements the jacket. The mass of the whole system, including the platform and the equipment mounted on it, is 1.3844×10⁶ kg, of which the jacket accounts for 0.8844×10⁶ kg. The mass volumes are connected at the intersection points of every stake and the decks. The mass for the METMD is 0.1938×10⁶ kg, which is 14% of the total mass. Ω is taken as 0.45 and ξ as 0.1.
Figure 5. Model of the mega-platform and the METMD system
Random excitation wave loads, as shown in Figure 7, are applied at nodes 217, 218, 219 and 220 on the middle part of the jacket, as shown in Figure 8. The vibration responses of the platform are analyzed under two conditions, without and with the METMD system; the testing points' responses are shown in Figure 9. In Figure 6a the mass units are connected directly to the platform, while in Figure 6b the mass units for the METMD are connected to the platform by springs and dampers. The results at the test points under the different conditions are shown in Figure 10.
Figure 6. Platform (a) without and (b) with the METMD system
Figure 7. Wave forces of ocean waves of different heights: (a) Hs = 10 m; (b) Hs = 15 m; (c) Hs = 20 m
Figure 8. Nodes at which the random excitation wave loads are input
Figure 9. Testing points on the platform
Figure 10. Displacements of selected nodes under the different wave forces (panel c: displacement of node 97 when Hs = 20 m)
Table 1. Displacement decrease ratios under different wave forces

                  Hs=10m   Hs=15m   Hs=20m
Jacket            34.8%    29.5%    41.2%
Low-deck          40.5%    35.6%    46.3%
Mid-deck          40.1%    35.2%    46.0%
Upper-deck        39.5%    34.3%    45.2%
Whole-platform    38.7%    33.7%    44.7%
Analysis of the data shown in Figure 10 and Table 1 shows that the minimum and maximum vibration reductions of the platform under the different wave forces are 29.5% and 46.3%, respectively, and that the decrease ratios of the whole platform under the three wave forces are 38.7%, 33.7% and 44.7%, respectively. The METMD therefore has a good vibration control effect on the mega-platform under random wave force loads.
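Decrease ratios such as those in Table 1 can be computed directly from the simulated displacement histories at the test points. A minimal sketch, assuming RMS displacement as the vibration measure (the chapter does not state which norm was used):

```python
import numpy as np

def decrease_ratio(resp_without, resp_with):
    """Vibration decrease ratio between displacement histories sampled
    without and with the METMD system, using RMS as the measure."""
    rms = lambda x: np.sqrt(np.mean(np.square(np.asarray(x))))
    return 1.0 - rms(resp_with) / rms(resp_without)

# Synthetic check: a response scaled down to 60% gives a 40% decrease ratio
t = np.linspace(0.0, 100.0, 2001)
x_without = np.sin(2 * np.pi * 0.2 * t) + 0.3 * np.sin(2 * np.pi * 0.5 * t)
x_with = 0.6 * x_without
```

Applying the same function to each node group's history would reproduce a table of the form of Table 1.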
4. Conclusions
The theoretical analysis using Matlab shows that: (1) the platform has a better vibration control effect when the non-dimensional frequency bandwidth Ω, defined as the ratio of the frequency range to the controlled (target) platform's natural frequency, is in [0.35, 0.6]; (2) the damping ratio ξ of the ETMD systems is in [0.05, 0.15]; and (3) the number of ETMDs is 5 when Ω = 0.45 and ξ = 0.1. The FEM simulation using ANSYS shows that the METMD has a good vibration control effect on mega-platforms under the different random ocean wave loads.
5. References
[1] Huang L, 2001. Vibration analysis of compressor deck of Pinghu platform. China Offshore Platform, 16(5):54-57
[2] Duan M, Fang H, Chen R, 1994. The investigation of the Bohai No.2 platform's pushing-over by the ice. Oilfield Equipment, 23(3):1-4
[3] Wu B, Li H, 1997. Theory and application of the passive vibration control on the building structure. Press of Harbin Institute of Technology
[4] Rahul Rana and T. T. Soong, 1998. Parametric study and simplified design of tuned mass dampers. Engineering Structures, 20(3):193-204
[5] C. C. Chang, 1999. Mass dampers and their optimal designs for building vibration control. Engineering Structures, 21(5):454-463
[6] Francesco Ricciardelli, Antonio Occhiuzzi and Paolo Clemente, 2000. Semi-active Tuned Mass Damper control strategy for wind-excited structures. Journal of Wind Engineering and Industrial Aerodynamics, 88(1):57-74
[7] Sun S, 2000. The study of seismic response reduction of single column platform by using tuned mass damper. China Offshore Platform, 15(6):6-9
[8] Soon-Duck Kwon and Kwan-Soon Park, 2004. Suppression of bridge flutter using tuned mass dampers based on robust performance design. Journal of Wind Engineering and Industrial Aerodynamics, 92(11):919-934
[9] Zhao D, 2005. Vibration Control of Offshore Platforms Using the DTMD. Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering, 1B:737-741
[10] Zhao D, 2005. Vibration Control Research of Mega-frame Offshore Platforms. Journal of Jinan University, 19(2):184
[11] Patil K C, Jangid R S, 2005. Passive control of offshore jacket platforms. Ocean Engineering, 32(16):1933-1949
[12] S. Živanović, A. Pavic and P. Reynolds, 2005. Vibration serviceability of footbridges under human-induced excitation: a literature review. Journal of Sound and Vibration, 279(1-2):1-74
Study on Dynamics Analysis of Powertrains and Optimization of Coupling Stiffness Wenjie Qin, Dandan Dong School of Mechanical and Vehicular Engineering, Beijing Institute of Technology, Beijing 100081, China
Abstract A finite element model for the structural dynamics analysis of powertrains is presented in this paper, treating the engine and transmission systems as a whole since only the mechanical gears are concerned. The model is applied to one vehicle powertrain, and the natural frequencies and stress responses of the system in its three mechanical gears are calculated; the effect of the coupling stiffness on them is then discussed. Based on these results, an optimization model of the coupling stiffness with the goal of reducing the system's stress responses is proposed. This provides a method to improve the structural dynamic performance of powertrains.

Keywords: Powertrains; dynamics analysis; optimization; finite element method
1. Introduction
The structural design of the power and transmission systems is often carried out separately, and their matching usually focuses on operational performance, so it is essential to analyze and optimize the structural dynamic characteristics of the two systems as a whole. Most studies on powertrain dynamics have focused on torsional vibration and typically applied a lumped-parameter model. For example, Christopher S. Keeney and Shan Shih modeled the powertrain system as a set of component inertias connected by spring-dampers; an undamped modal analysis is given as an eigenvalue problem, and a frequency response analysis as a complex linear problem [1]. Sheng-Jiaw Hwang, Joseph L. Stout et al. built their model by the same method and studied the free vibration, forced responses and self-excited vibration [2]. Li Heyan, Ma Biao et al. analyzed the influence of three different kinds of elastic coupling on the performance of a large-horsepower powertrain by modeling the system as a multi-DOF mass-spring-damper [3]. The finite element method is a numerical method of high accuracy for solving mathematical-physics equations based on the calculus of variations. It was first applied to the dynamics analysis of crankshafts by Bagci [4], but has seldom been used for powertrains. In this paper, the finite element model for dynamics analysis of powertrains is presented, and that of one vehicle powertrain
562
W. Qin and D. Dong
is built, and the natural frequencies and the stress responses resulting from the engine excitation are calculated using the ANSYS software. Based on the analysis of the influence of the coupling stiffness on these two aspects, the coupling stiffness is optimized.
2. Finite Element Modeling for the Powertrain
The powertrain investigated in this paper is a hydro-mechanical system. In the hydraulic mode, the powertrain can be treated as two vibration systems because of the vibration isolation of the torque converter [3]; therefore only the mechanical gears, in which the converter is locked, are considered, and the dynamics of the whole structure is studied. The powertrain consists of the engine, coupling, torque converter, transmission, etc., in which the shafts are the major elements. In this article, beam elements are used to model the shafts, while components such as connecting rods, pistons, the torque converter and the clutches are modeled as mass elements according to their equivalent masses and inertias. Some components' models are built as follows.

2.1 Crankshaft Model
The crankshaft is composed of the head, end, main journals, crank pins and crank webs. It is modeled with beam and mass elements, simplified as follows. The head, end, main journals and crank pins are modeled as beam elements with the corresponding diameters and lengths. The crank webs and balance blocks are not bodies of revolution; they are modeled as discs with the same inertias. According to the reciprocating mass of the piston and connecting rod, the equivalent inertia is calculated as follows [5]. At any moment, the kinetic energy of the reciprocating mass is

$E = \frac{1}{2}(m_p + m_{c1})\dot{x}^2$   (2.1)

where m_p is the mass of the piston and m_{c1} is the mass of the connecting rod; ẋ is the reciprocating velocity of the piston, which can be expressed as

$\dot{x} \approx R\omega\left(\sin\omega t + \frac{\lambda}{2}\sin 2\omega t\right)$   (2.2)
where R is the radius of the crank pin and λ is the ratio of the radius of the crank pin to the length of the connecting rod. The average kinetic energy of the reciprocating mass during one revolution cycle of the crankshaft is

$\frac{1}{2\pi}\int_0^{2\pi} E\, d(\omega t)$   (2.3)
Let I_e represent the equivalent inertia of the reciprocating mass; its kinetic energy is then $\frac{1}{2}I_e\omega^2$. Setting this equal to the value of expression (2.3), the equivalent inertia is obtained as

$I_e = \frac{1}{2}(m_p + m_{c1})\left(1 + \frac{\lambda^2}{4}\right)R^2$   (2.4)

2.2 Gear Model
The gears are simplified as discs. The mating of the gears produces not only torsional but also transverse vibration in the shaft system. To account for this, contact elements are used to simulate the mating of the gear pairs; each element is defined by two nodes, one on each gear of the mating pair, with their connecting line coinciding with the mating line. The normal stiffness of the contact element is the mating stiffness of the gear pair. Because the time-varying mating stiffness has only a small effect on the natural frequencies and the maximal stress [6][7], its average value is used in this study.

2.3 Bearing Model
The main bearings of the crankshaft are modeled as spring-dampers in the horizontal and vertical directions (as shown in Figure 1) [8][9].
Figure 1. Main bearing model of the crankshaft
The rolling bearings of the transmission shafts are simplified as rigid constraints in view of their great stiffness.

2.4 Shaft Coupling
The shaft coupling is modeled as a torsional spring element. Its stiffness is designed to be 0.0249 MN·m/rad and its damping value is 101.11 N·m·s/rad. Finally, the finite element model of the powertrain investigated in this paper is built as shown in Figure 2. Different gears engage different gear pairs, so a different model is needed for each gear.
Figure 2. Finite element model of the powertrain
3. Natural Frequency Analysis
Applying the Block Lanczos method, the natural frequencies of the system are calculated. The first five frequencies of the three mechanical gears are shown in Table 1. The mode shapes at these frequencies are torsional: for example, the first shape is the coupled torsion of the engine and transmission, and the third shape is mainly the torsion of the transmission; see Figure 3.

Table 1. Natural frequencies of the powertrain

Frequency order   First gear   Second gear   Third gear
1                 10.143       9.8904        10.059
2                 46.208       40.585        44.237
3                 172.91       162.38        176.41
4                 176.42       176.41        179.26
5                 221.44       252.16        251.35
The effects of the coupling stiffness on the natural frequencies are also analyzed. The changes of the first four natural frequencies of the three mechanical gears with the coupling stiffness are shown in Figure 4. The results indicate that the lower frequencies of each gear decrease as the coupling stiffness is reduced, and the reduction is greater in the lower stiffness range. Among the four frequencies, the first and second are the most sensitive to the coupling stiffness, especially in the lower stiffness range, while the changes of the third and fourth frequencies are much slighter. In the lower stiffness range, therefore, the first and second frequencies can be shifted by adjusting the coupling stiffness.
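The trend in Figure 4 can be illustrated on a minimal two-inertia torsional model: an engine inertia and a transmission inertia, each grounded through a mount stiffness and joined by the coupling. All numerical values below are illustrative assumptions, not the chapter's data.

```python
import numpy as np

def torsional_freqs(k_c, I1=2.0, I2=0.5, k1=5.0e3, k2=2.0e4):
    """Natural frequencies (Hz) of a 2-inertia torsional sketch: engine
    inertia I1 and transmission inertia I2 (kg*m^2), each restrained by a
    grounding stiffness k1, k2 (N*m/rad) and joined by the coupling
    stiffness k_c. Solves the generalized eigenproblem K x = w^2 M x."""
    K = np.array([[k1 + k_c, -k_c],
                  [-k_c, k2 + k_c]])
    M = np.diag([I1, I2])
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))   # squared angular freqs
    return np.sort(np.sqrt(np.real(w2))) / (2 * np.pi)
```

Evaluating this over a range of coupling stiffnesses shows both frequencies rising with k_c, i.e. the lower natural frequencies decrease when the coupling stiffness is reduced, as reported above.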
Figure 3. Mode shape graphs: (a) the first mode shape; (b) the third mode shape
Figure 4. Natural frequencies vs. coupling stiffness: (a) first; (b) second; (c) third; (d) fourth frequency
4. Stress Response Analysis

4.1 Engine Excitation
In the steady condition, only the engine excitation is considered in this study, comprising the gas pressure and the inertia forces of the rotating parts. For the V-type six-cylinder engine investigated here, the calculated exciting forces in the horizontal and vertical directions and the torques on each crank pin during one engine excitation cycle are shown in Figure 5.
Figure 5. Exciting forces and torques acting on each crank pin: (a) horizontal forces; (b) vertical forces; (c) torque
4.2 Stress Responses
The transient dynamics analysis is carried out using the models of the three mechanical gears. Over the steady engine speed range (800~2200 r/min), exciting forces corresponding to the various engine speeds are exerted on the system, and the resulting maximum stresses at the different speeds are shown in Figure 6. The results show that the response stresses of the three gears generally decrease as the engine speed increases; the stresses therefore reach their highest values at 800 r/min, where those of the second gear are greater than those of the other two gears.
Figure 6. Maximum stresses at different engine speeds
4.3 Effect of Coupling Stiffness on Dynamic Stresses
At the engine speed of 800 r/min, the maximum stresses of the engine and the transmission in the three mechanical gears vs. the coupling stiffness are shown in Figure 7. For the transmission, the results show that in the lower coupling stiffness range (less than 0.2 MN·m/rad) the stresses decrease greatly as the stiffness is reduced, while above 0.2 MN·m/rad the changes in stress become much smaller. As for the engine, the stresses are much lower and the curves are relatively smooth (in the range 31~33 MPa), indicating that the effect of the coupling stiffness on the stress response of the engine is relatively slight.
Figure 7. Maximum stresses of (a) the transmission and (b) the engine vs. coupling stiffness
5. Matching Optimization of the Coupling Stiffness

5.1 Optimization Model
According to the structural dynamic characteristics of powertrains, the optimization goals can be the natural frequency, the deformation and the stress response. As the above results show that the high stress occurs in the transmission, the maximum stress
in the second mechanical gear is defined as the main goal. From the results of the natural frequency analysis, it can be seen that by adjusting the coupling stiffness the first and second frequencies can be shifted out of the engine's working frequency range, because they are sensitive to it. The matching optimization model of the coupling stiffness is therefore

$\min\ \sigma_{max}(k)$
$\text{s.t.}\quad f_1(k) \le f_{eb},\quad f_2(k) \ge f_{et},\quad k_b \le k \le k_t$   (5.1)

where k is the coupling stiffness, with its initial value being the coupling's designed stiffness of 0.0249 MN·m/rad; σ_max is the maximum stress in the second mechanical gear; f₁ and f₂ are the first and second frequencies of the system; f_eb and f_et are the lower and upper working frequency limits of the engine; and k_b and k_t are the lower and upper limits of the coupling stiffness.

5.2
Optimization Results
The penalty method [10] is applied to solve the problem given by formula (5.1), based on the finite element modal and response analyses; the process is shown in Figure 8. The convergent coupling stiffness is 0.0126 MN·m/rad, and the maximum stress is 202.0 MPa, 25.4% less than that of the initial design. Applying this coupling stiffness in the models of the first and third mechanical gears, the resulting maximum stresses at the lowest engine speed are 182.3 MPa and 218.6 MPa, which are 29.6% and 28.4% less than those of the initial design, respectively, and the first and second natural frequencies are all out of the engine's working frequency range. This coupling stiffness therefore brings a great improvement to the structural dynamic properties of the powertrain.
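The outer loop of the penalty method can be sketched on a toy one-dimensional problem with the same structure as formula (5.1). The goal and constraint functions below are simple surrogates (in the chapter they come from the FE modal and response analyses), and the crude bracketed grid search stands in for whatever inner optimizer was actually used.

```python
import numpy as np

def penalty_minimize(goal, constraints, k0, r0=1.0, growth=10.0, iters=6):
    """Exterior penalty method for min goal(k) s.t. g_i(k) <= 0.
    Each outer iteration minimizes goal + r * sum(max(0, g_i)^2) over a
    bracket around the current point, then tightens the penalty factor r."""
    k, r = k0, r0
    for _ in range(iters):
        penalized = lambda x: goal(x) + r * sum(
            max(0.0, g(x)) ** 2 for g in constraints)
        xs = np.linspace(0.2 * k, 5.0 * k, 2001)   # crude 1-D search bracket
        k = float(xs[np.argmin([penalized(x) for x in xs])])
        r *= growth   # tighten the penalty each outer iteration
    return k

# Toy surrogate: minimize the "stress" goal(k) = k while keeping the
# "second frequency" 10*sqrt(k) above an engine limit of 2, i.e. k >= 0.04
k_star = penalty_minimize(lambda k: k,
                          [lambda k: 2.0 - 10.0 * np.sqrt(k)],
                          k0=1.0)
```

As the penalty factor grows, the iterate is driven to the constraint boundary, which is exactly the behaviour exploited above: the stiffness is reduced until the frequency constraints become active.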
Figure 8. Optimization process (flowchart: define the design variable, the goal function and the constraint functions; perform modal and response analysis by finite element analysis; iterate until convergent)
6. Conclusion
In this article, a finite element model for the structural dynamics analysis of powertrains has been presented. Based upon this model and the analysis results, a model establishment methodology for coupling stiffness optimization is proposed. The finite element model is applied to the dynamics analysis of one vehicle powertrain: the natural frequencies of its three mechanical gears are obtained, and the frequencies sensitive to the coupling stiffness are selected. The stress responses at different engine speeds in the different gears are then calculated by transient dynamics analysis, and the gear and engine speed at which the highest stress response occurs are found. Based on these results, a coupling stiffness optimization model with the goal of reducing the stress response is established. The optimization results show that the maximum stresses of each mechanical gear decrease greatly with the optimized coupling stiffness.
7. References
[1] Christopher S. Keeney, Shan Shih. (1992) 'Prediction and control of heavy duty powertrain torsional vibration', SAE paper No. 922481.
[2] Sheng-Jiaw Hwang, Joseph L. Stout and Ching-Chung Ling. (1998) 'Modeling and analysis of powertrain torsional response', SAE paper No. 980276.
[3] Li Heyan and Ma Biao et al. (2003) 'Study on torsional vibration performance of vehicle powertrain affected by elastic coupling', Journal of Mechanical Strength, Vol. 25, No. 6, pp. 596-603.
[4] Bagci C. A. (1973) 'Computer method for computing torsional natural frequencies of nonuniform shafts, geared systems, and curved assemblies', Proceedings of the 3rd OSU Mechanical Conference, Vol. 40, pp. 1-15.
[5] Li Bo-zhong, Chen Zhiyan and Ying Qi-guang. (1984) Torsional Vibration of the Shaft System in IC Engines, Beijing: National Defense Industry Publishing House.
[6] H. Nevzat Özgüven and D. R. Houser. (1988) 'Dynamic analysis of high speed gears by using loaded static transmission error', Journal of Sound and Vibration, Vol. 125, pp. 71-83.
[7] Li Tao, Qin Wen-jie and Wang Chao. (2005) 'Study on dynamic excitation and response of shaft and gears system in certain vehicle', Journal of Mechanical Transmission, Vol. 5, pp. 16-18.
[8] H. Okamura, A. Shinio and T. Yamanaka et al. (1995) 'Simple modeling and analysis for crankshaft three-dimensional vibrations, part 1: background and application to free vibrations', Transactions of ASME, Vol. 117, pp. 70-79.
[9] Swati M. Athavale and P. R. Sajanpawar. (1999) 'Analytical studies on influence of crankshaft vibrations on engine noise using integrated parametric finite element model: quick assessment tool', SAE Transactions, Section 3, Vol. 108, pp. 1861-1866.
[10] Liu Wei-xing. Optimal Mechanical Design. Beijing: Tsinghua University Press.
Parametric Optimization of Rubber Spring of Construction Vehicle Suspension Beibei Sun, Zhihua Xu and Xiaoyang Zhang School of Mechanical Engineering, Southeast University, Nanjing 211189, China
Abstract A rubber spring has a progressive spring rate: the further it is compressed, the more it resists. The ride comfort of a large-tonnage construction vehicle can therefore be improved by using a rubber suspension with changeable stiffness. A parameterized nonlinear finite element model of the rubber spring used as the main elastic component in a construction vehicle suspension is built in this paper. The nonlinear stiffness curve in the vertical direction is obtained by FEA and accords very well with the experimental results. A sensitivity analysis of the structural parameters of the rubber spring is carried out to find the most sensitive design variable for optimization. The optimum nonlinear stiffness curve of the rubber suspension is obtained through whole-vehicle dynamics optimization, and the structural parameters of the rubber spring are optimized to fulfil this optimum nonlinear stiffness curve.

Keywords: Optimization; Rubber spring; Nonlinear; Stiffness; FEA
1. Introduction
The primary function of a vehicle suspension system is to isolate the road excitations experienced by the tires from being transmitted to the passengers. The springs are among the most important components of the suspension system that provide ride comfort. There are several advantages to choosing rubber springs for vehicle suspension. The most important is that the non-linear force/deflection behavior of rubber springs can in effect provide an adjustment of the transmissibility response to changing operating conditions. In this paper, rubber springs are used as the elastic components in an articulated dump truck (ADT) suspension, as shown in Figure 1. The structure of the rubber spring is shown in Figure 2. The fundamental feature of a rubber spring is its stiffness, so it is important and necessary to have a good understanding of the nonlinear stiffness of rubber springs at the development stage. However, most rubber springs have been designed by experience or experiment because of the complicated characteristics of rubber materials. Many researchers have been trying to apply FEA to rubber spring design [1-4]. By using FEM, influential factors such as key parameters and shape can be analyzed conveniently and efficiently. However, in comparison to metals, modeling the behavior of rubber springs is a difficult task.
Figure 1. Suspension of ADT
Figure 2. Structure of rubber spring
A combination of three different non-linearities has to be considered in the FEA of the rubber spring. In this paper, geometric and material non-linearities as well as structural (contact) non-linearity are considered in the modelling process. The nonlinear contact model of the rubber spring has been built and analyzed by the FEM with the commercial FEA code ANSYS. The non-linear stiffness in the axial direction (working direction) was obtained. In order to verify the FEA results, experimental tests have been done; the experimental results agree very well with the simulation results. The sensitivity analysis of the structural parameters of the rubber spring was explored to find the law of each parameter's contribution to stiffness in the vertical direction, and the most sensitive design variable was found for the optimization. The optimum nonlinear stiffness curve of the rubber suspension was obtained through the whole vehicle dynamics optimization. The structural parameters of the rubber spring have been optimized to fulfil the optimum nonlinear stiffness curve.
2. Finite Element Modeling of Rubber Spring

2.1 Hyperelastic Material Model of Rubber Spring
An accurate constitutive law of the material is critical to the finite element analysis of a rubber spring. Rubber materials fall into the category of hyperelasticity, which can experience large elastic strain that is recoverable. Except as otherwise indicated, the materials are also assumed to be nearly or purely incompressible. For a hyperelastic material, there exists an elastic potential function W (or strain energy density function), a scalar function of one of the strain or deformation tensors, whose derivative with respect to a strain component determines the corresponding stress component. This can be expressed by

S = \frac{\partial W}{\partial E}    (1)

where S is the second Piola-Kirchhoff stress tensor, W is the strain energy function per unit undeformed volume, and E is the Lagrangian strain tensor. The strain energy function is expressed as follows:

W = W(I_1, I_2, I_3)    (2)

where I_1, I_2 and I_3 are the first, second and third strain invariants respectively, which can be expressed as:
I_1 = J^{-2/3} (\lambda_1^2 + \lambda_2^2 + \lambda_3^2)
I_2 = J^{-4/3} (\lambda_1^2 \lambda_2^2 + \lambda_2^2 \lambda_3^2 + \lambda_3^2 \lambda_1^2)    (3)
I_3 = J^{-2} \lambda_1^2 \lambda_2^2 \lambda_3^2
where J is the ratio of the deformed elastic volume over the reference (undeformed) volume of the material, and λ1, λ2 and λ3 are the principal stretch ratios. Here incompressible material behavior is assumed, so that the third strain invariant, I3, is identically one. The hyperelastic material models include several forms of strain energy potential, such as Neo-Hookean, Mooney-Rivlin, Polynomial Form, Ogden Potential, Arruda-Boyce, Gent, and Yeoh. Mooney-Rivlin is used in this paper for the simulation of the rubber material. This option includes 2-, 3-, 5-, and 9-term Mooney-Rivlin models. The form of the strain energy potential for the 3-parameter Mooney-Rivlin model is
W = C_{10}(I_1 - 3) + C_{01}(I_2 - 3) + C_{11}(I_1 - 3)(I_2 - 3)    (4)
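As a quick numerical illustration of Eq. (4), the sketch below evaluates the strain energy at a uniaxial stretch of an incompressible specimen, using the constants fitted in this paper (C10 = 0.38 MPa, C01 = 0.325 MPa, C11 = 0.05 MPa); it is a stand-alone check, not part of the ANSYS workflow itself.

```python
# 3-parameter Mooney-Rivlin strain energy for an incompressible uniaxial
# stretch: lambda1 = s, lambda2 = lambda3 = s**-0.5, so that J = 1.
def mooney_rivlin_W(s, c10=0.38, c01=0.325, c11=0.05):
    """Strain energy density (MPa) at uniaxial stretch ratio s."""
    l1, l2 = s, s ** -0.5
    i1 = l1**2 + 2.0 * l2**2              # first strain invariant
    i2 = 2.0 * l1**2 * l2**2 + l2**4      # second strain invariant
    return c10*(i1 - 3) + c01*(i2 - 3) + c11*(i1 - 3)*(i2 - 3)

print(mooney_rivlin_W(1.0))   # undeformed state: exactly 0.0
print(mooney_rivlin_W(1.5))   # positive stored energy at 50% stretch
```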
where C10, C01 and C11 are material constants. In order to obtain successful results during a hyperelastic analysis, it is necessary to accurately assess the Mooney-Rivlin constants of the material being examined. Mooney-Rivlin constants are generally derived for a material from experimental stress-strain data. In this paper, the values are: C10 = 0.38 MPa, C01 = 0.325 MPa, C11 = 0.05 MPa.

2.2 Finite Element Model of Rubber Spring
As shown in Figure 3, the structure of the rubber spring consists of the rubber component and the upper and lower metal bushes. A block of material has been removed from the upper, middle and lower parts of the rubber component to enhance the geometrical non-linearities; this will be discussed later. Due to geometric and loading (vertical working load) symmetry, the analysis can be performed using one half of the cross section of the rubber spring. In order to allow further parametric optimization, APDL (ANSYS Parametric Design Language) is used to build the model in terms of parameters such as height and radius, so that rubber springs adapted to trucks of different tonnage can easily be designed later. The model is developed as shown in Figure 3.
Figure 3. Finite element model of the rubber spring
A proper element type and a reasonable meshing strategy are important in the modeling and simulation of large-deformation rubber components to avoid poor accuracy. The HYPER56 element, a 2-D 4-node mixed u-P hyperelastic solid element, is selected to mesh the rubber component. The PLANE182 element is adopted to mesh the metal bushes. The upper and lower parts (parts 1 and 2 in Figure 3) of the rubber component come into contact with the upper and lower metal bushes when the vertical load is applied to the rubber spring, and the faces of the middle part (part 3 in Figure 3) of the rubber component also come into contact with each other, so these parts have to adopt contact elements. TARGE169 and CONTA172 elements are used to create the contact pairs. Thus, this becomes a non-linear large-displacement contact analysis. As a whole, the model has 676 HYPER56 elements, 158 PLANE182 elements and 90 contact elements; that is, 924 elements and 1010 nodes.
Specifying the proper loading conditions is a key step in the analysis. The loads in ANSYS include boundary conditions as well as other types of loading. For an axisymmetric model as in Figure 3, displacements of all nodes on the left edge (X=0) are constrained with UX=0. All nodes on the bottom edge (Y=0) are constrained in UX and UY. The analytical model is loaded by a uniformly distributed external pressure at the upper surface. Nine load steps are used to specify the load, each load step being 5 kN.

2.3 FEA of Rubber Spring
The stiffness response of the system was determined with an iterative method using the Newton-Raphson algorithm. All the non-linearities discussed before were included in the analysis. Figure 4 presents the deformation of the FEM model at different load steps. It can be seen that the vertical displacement is 15.225 mm when the load is 5 kN. As the load increases, the contact area becomes bigger. From 25 kN, the center of the inner upper and lower parts of the rubber component makes contact with the upper and lower metal bushes; the contact area is greatly increased, and the stiffness then grows quickly. A clear picture of the deformation process can thus be observed by using FEM, whereas it cannot be seen in the experiment.
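The incremental-iterative Newton-Raphson scheme can be sketched on a scalar model problem. The cubic hardening force law below is purely illustrative (it stands in for the assembled FE equations), but the load stepping mirrors the nine 5 kN steps used in the analysis.

```python
# Newton-Raphson solution of a nonlinear spring equation F(u) = P,
# applied in load steps and warm-started from the previous step's solution.
def solve_displacement(load, u0=0.0, tol=1e-8, max_iter=50):
    F  = lambda u: 2.0 * u + 0.05 * u**3    # internal force (kN), illustrative
    dF = lambda u: 2.0 + 0.15 * u**2        # tangent stiffness (kN/mm)
    u = u0
    for _ in range(max_iter):
        residual = load - F(u)
        if abs(residual) < tol:
            break
        u += residual / dF(u)               # Newton update
    return u

# Nine load steps of 5 kN each, as in the FE analysis.
u = 0.0
for step in range(1, 10):
    u = solve_displacement(5.0 * step, u0=u)
print(round(u, 3))                          # displacement at 45 kN
```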
Figure 4. Vertical deformation scheme: (a) 5 kN; (b) 25 kN; (c) 45 kN
3. Verification of the FE Results
In order to verify the FE results obtained, experimental tests were done. The experiments were carried out on a material testing machine (SINTECH 20/D, made in the USA). The corresponding software was TESTWORKS. The environment temperature was 19 °C. A static preload of 100 N was applied. The rubber spring was loaded vertically at a rate of 10 mm/min and the deformations were registered as shown in Table 1. The static load-deflection curve was obtained as shown in
Figure 5. The experimental results are very close to the previously obtained FEA results. This shows that the non-linear contact model is suitable for rubber spring design and analysis. The errors between the predicted results and the measurements are acceptable in engineering.

Table 1. Comparison of results from measurement and simulation

Load (kN)   5      10      15      20      25      30      35      40      45
Test (mm)   15.47  29.721  42.263  52.899  61.73   68.83   74.25   78.346  81.845
FEM (mm)    15.22  29.04   41.26   52.84   64.432  69.793  73.17   75.977  78.307
Figure 5. Comparison of simulated and measured load-deformation curve
4. Sensitivity Analysis of Structural Parameter of Rubber Spring

The stiffness characteristic of the rubber spring depends upon its structure. The sensitivity analysis of the structural parameters of the hourglass rubber spring was explored to find the law of each parameter's contribution to stiffness in the vertical direction, and the most sensitive parameters were chosen as the design variables for the optimization. Since the shape of the rubber spring is associated with assembly dimensions, the outer dimensions of the rubber spring cannot be changed. There are 7 parameters that can be tuned here, namely hollow depth h1, bush height h2, rubber middle radius r1, hollow peristome radius r2, hollow bottom radius r3, hollow fillet radius R1 and rubber fillet radius R2, see Figure 2. The sensitivity curves of the 7 parameters under different working load conditions from 5 kN to 45 kN are shown in Figure 6. The parameter sensitivity analysis results show that the sensitivities of r1, r2, h2, h1, r3, R2 and R1 are in descending order. The rubber middle radius r1 is the most sensitive of the 7 parameters. The fillet radii R1 and R2 have little effect on the stiffness value, so R1 and R2 are not chosen here as design variables.
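The ranking procedure can be sketched with finite differences. The algebraic stiffness surrogate below (and its coefficients) is an invented stand-in for one FEA run per perturbed parameter; only the ranking mechanics are the point.

```python
# Finite-difference sensitivity of vertical stiffness to each parameter.
def stiffness(p):
    # Hypothetical surrogate chosen so the ranking mirrors the paper's
    # reported order r1 > r2 > h2 > h1 > r3; a real run would call the FEA.
    return 3.0*p["r1"] + 2.0*p["r2"] - 1.5*p["h2"] + 0.8*p["h1"] + 0.5*p["r3"]

def sensitivities(p, rel_step=0.01):
    base = stiffness(p)
    out = {}
    for name in p:
        dp = rel_step * p[name]                       # 1% perturbation
        out[name] = (stiffness(dict(p, **{name: p[name] + dp})) - base) / dp
    return out

params = {"h1": 39.0, "h2": 51.5, "r1": 44.5, "r2": 77.5, "r3": 38.5}
s = sensitivities(params)
ranked = sorted(s, key=lambda k: abs(s[k]), reverse=True)
print(ranked)   # ['r1', 'r2', 'h2', 'h1', 'r3'] under this surrogate
```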
Figure 6. Sensitivity curves of the parameters under different working load conditions
5. Parametric Optimization of the Rubber Spring
The rubber spring is the key component of the rubber suspension system of the construction vehicle, which affects the ability of the suspension to isolate road excitation. An optimum design of the rubber spring should be based on the optimum dynamic design of the whole vehicle. The rubber suspension of the AD250 articulated dump truck was studied here. Modal synthesis and flexible multi-body methods, as well as experimental methods, were applied to build a rigid-flexible coupling multi-body model of the truck, see Figure 7. In order to improve the ride comfort and handling performance of the AD250, the optimization design of the rubber suspension was completed under differing loads by using sequential quadratic programming. The optimum nonlinear stiffness curve of the rubber suspension was obtained through least-squares fitting of the optimum stiffness values at given loads. The details of the modeling and optimization procedure of the vehicle are reported in [6] and [7] and omitted here owing to the length limitation of the paper.
Figure 7. The rigid-flexible coupling multi-body model of the ADT
The optimum nonlinear stiffness curve of the rubber suspension obtained from the whole vehicle dynamic optimization was taken as the ideal stiffness curve. The aim of the parametric optimization of the rubber spring is to achieve this ideal curve. The structural parametric optimization of the rubber spring is a constrained
optimization problem. The general mathematical model of the optimization is as follows:

\min f = f(x)    (5)

x = [x_1, x_2, x_3, \ldots, x_n]    (6)

\underline{x}_i \le x_i \le \overline{x}_i, \quad i = 1, 2, 3, \ldots, n    (7)

g_i(x) \le \overline{g}_i, \quad i = 1, 2, 3, \ldots, m_1    (8)

\underline{h}_i \le h_i(x), \quad i = 1, 2, 3, \ldots, m_2    (9)

\underline{w}_i \le w_i(x) \le \overline{w}_i, \quad i = 1, 2, 3, \ldots, m_3    (10)
On the basis of the modeling and sensitivity analysis of the rubber spring, the design variables are defined as x = [h1, h2, r1, r2, r3]. Table 2 shows the ranges of the variables.

Table 2. Design variables and their boundary values

Design variable      h1       h2        r1          r2            r3
Initial value (mm)   39       51.5      44.5        77.5          38.5
Range (mm)           [0, 60]  [10, 85]  [28.5, 78]  [38.5, 84.5]  [10, 68.5]
According to the ideal nonlinear stiffness curve, the objective function is defined as follows:

f = \sum_{i=1}^{n} (d_i - \delta_i)^2    (11)

where d_i is the rubber spring deformation obtained from the FEM under the ith load step, and \delta_i is the rubber spring deformation obtained from the multi-body vehicle modeling and optimization under the ith load step. The load steps are 10000 N, 13500 N, 17689 N, 19156 N, 22546 N and 25826 N. The constraint condition for the optimization of the rubber spring is that the maximum stress of the elastomer is smaller than its allowable stress, that is, \sigma_{max} < 10 MPa.
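Eq. (11) and its data are simple to express in code; the deformation values below are hypothetical placeholders, since the actual d_i come from FEM runs and the δ_i from the vehicle-level optimization.

```python
# Objective of Eq. (11): sum of squared gaps between FEM deformations d_i
# and target deformations delta_i at the six optimization load steps.
LOADS = [10000, 13500, 17689, 19156, 22546, 25826]   # N, from the paper

def objective(d_fem, d_target):
    assert len(d_fem) == len(d_target) == len(LOADS)
    return sum((d - g) ** 2 for d, g in zip(d_fem, d_target))

# Hypothetical deformations (mm), purely to exercise the function:
d_fem    = [22.0, 30.5, 41.0, 45.2, 55.8, 64.0]
d_target = [21.0, 30.0, 42.5, 46.0, 54.5, 63.0]
print(objective(d_fem, d_target))   # about 6.83
```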
Table 3. Optimization results

Variable                       Initial value   Optimal value   Relative change ratio
Design variables (mm)
  h1                           39              48.2            23.59%
  h2                           51.5            30.1            -41.55%
  r1                           44.5            38.4            -13.7%
  r2                           77.5            66              -14.84%
  r3                           38.5            40.5            5.19%
Objective function             1926.953        12.746          -99.34%
State variable σmax (MPa)      4.76            5.50            15.55%
The first-order method is used to obtain the optimal structural parameters of the rubber spring. Table 3 shows the optimum parameters obtained. By FEM, the nonlinear stiffness curve of the new rubber spring with the optimum structural parameters was acquired, as shown in Figure 8. The figure shows that the new curve coincides with the ideal nonlinear stiffness curve obtained from the whole vehicle dynamic optimization. This means that the aim of the parametric optimization of the rubber spring has been achieved.
Figure 8. Stiffness curve of rubber spring after parametric optimization
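The first-order search can be sketched as a projected finite-difference gradient descent. The quadratic surrogate objective (with its minimum placed at the Table 3 optimum), the hypothetical stress response and the step size are all invented stand-ins for the ANSYS optimizer and the FE evaluations.

```python
# First-order optimization sketch: finite-difference gradient of a penalized
# objective, with the stress limit handled as an exterior penalty and the
# variables projected onto their Table 2 bounds.
TARGET = {"h1": 48.2, "h2": 30.1, "r1": 38.4, "r2": 66.0, "r3": 40.5}
BOUNDS = {"h1": (0, 60), "h2": (10, 85), "r1": (28.5, 78),
          "r2": (38.5, 84.5), "r3": (10, 68.5)}

def raw_objective(x):              # surrogate for Eq. (11), minimum at TARGET
    return sum((x[k] - TARGET[k]) ** 2 for k in x)

def sigma_max(x):                  # hypothetical stress response (MPa)
    return 4.0 + 0.04 * x["h1"]

def penalized(x, mu=1e3):
    violation = max(0.0, sigma_max(x) - 10.0)        # sigma_max < 10 MPa
    return raw_objective(x) + mu * violation ** 2

def first_order_step(x, lr=0.1, h=1e-5):
    base = penalized(x)
    new = {}
    for k in x:
        grad = (penalized(dict(x, **{k: x[k] + h})) - base) / h
        lo, hi = BOUNDS[k]
        new[k] = min(hi, max(lo, x[k] - lr * grad))  # project onto bounds
    return new

x = {"h1": 39.0, "h2": 51.5, "r1": 44.5, "r2": 77.5, "r3": 38.5}
for _ in range(200):
    x = first_order_step(x)
print({k: round(v, 1) for k, v in x.items()})   # approaches the Table 3 optimum
```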
6. Conclusions
The finite element modeling of the rubber spring has been done. The nonlinear static stiffness curve of the rubber spring has been obtained from the nonlinear FEA. The results from the static experiment confirmed the results obtained from the numerical analysis. Since the model is parameterized, it provides a basis for the parameter optimization of the rubber spring. The sensitivity analysis of the structural parameters of the rubber spring was explored to find the law of each parameter's contribution to
stiffness in the vertical direction, and the most sensitive design variables were found for the optimization. The ideal nonlinear stiffness curve of the rubber suspension was obtained through the whole vehicle dynamics optimization. The structural parameters of the rubber spring have been optimized to realize the ideal nonlinear stiffness curve. The results show that the nonlinear stiffness curve of the new rubber spring with the optimum structural parameters coincides with the ideal nonlinear stiffness curve obtained from the whole vehicle dynamic optimization.
7. Acknowledgements
The authors gratefully acknowledge the support of the National Natural Science Foundation of China (No. 50575040) and the Natural Science Foundation of Jiangsu Province (No. BK2007112).
8. References
[1] Morman K. N., Pan T. Y. (1988) Application of finite-element analysis in the design of automotive elastic components. Rubber Chemistry and Technology 61(3):503-533
[2] Arruda E. M., Boyce M. C. (1993) A three-dimensional constitutive model for the large stretch behavior of rubber elastic materials. J. Mech. Phys. Solids 41(2):127-130
[3] Kim Joong Jae, Kim Heon Young (1997) Shape design of an engine mount by a method of parameter optimization. Computers & Structures 65(5):725-731
[4] Zielnica J., Ziolkowski A., Cempel C. (2003) Non-linear vibroisolation pads design, numerical FEM analysis and introductory experimental investigations. Mechanical Systems and Signal Processing 17(2):409-422
[5] Qinghong Sun, Zhihua Xu, Beibei Sun (2006) Dynamics characteristic study of rubber suspension system of AD250 articulated dump truck. Journal of Southeast University 369(3):341-345
[6] Sun Beibei, Sun Qinghong, Xu Zhihua (2006) Optimization of nonlinear rubber suspension based on flexible multibody dynamics. Automobile Technology 2:20-24
[7] Sun Beibei, Xu Zhihua, Sun Qinghong (2006) Computation of dynamic stress of suspension links based on the multibody model of the whole vehicle. Automotive Engineering 28(10):922-925
The Development of a Computer Simulation System for Mechanical Expanding Process of Cylinders

Shi-yan Zhao, Bao-feng Guo, Miao Jin
Yanshan University, Qinhuangdao 066004, China
Abstract Based on the general-purpose finite element program MSC.Marc, a Mechanical Expanding Process Simulation System (ME PSS V1.0) has been developed to simulate the forming process of mechanical expanding of cylinders. Given the necessary technological parameters, it can simulate the mechanical expanding forming process. After the simulation, important data such as the load, spring-back, product size, shape tolerance and residual stress distribution of the product are obtained automatically by ME PSS. The system can also optimize the technological parameters of mechanical expanding of cylinders by multi-objective optimization based on MSC.Marc. Keywords: mechanical expanding; numerical simulation; secondary development; multi-objective optimization
1. Introduction
Mechanical expanding is one of the finishing processes in high-precision cylindrical part forming. Taking high-precision steel pipeline as an example, considering field construction and medium transmission efficiency, both straight submerged-arc welded pipe and spiral submerged-arc welded pipe need uniform cross-sectional dimensions and shape [1]. So, on the production lines of advanced pipeline steel, mechanical expanding is always the last plastic forming process in the technological flow [2-4]. The mechanical expanding of cylinders is a plastic deformation process which involves the tolerance of the outer diameter and the geometrical tolerance of the cross-section of the cylindrical billet, material properties, friction conditions, degree of deformation, die diameters, and so on. Research indicates that the precision of dimension and geometrical shape of the final products is related to these factors, but to different degrees. If the precision of dimension and of geometrical shape of the final product are defined, respectively, as the difference between the nominal outer diameter of the final product and the average outer diameter of the mechanically expanded product, and the difference between the minimum and the maximum outer diameter, then the former is sensitive not only to the changes of the cross-sectional dimension, cross-sectional shape and degree of
deformation, but also to the changes of the diameters and corner radii of the dies, because it is related to the average radius of the final product; the latter is sensitive to changes in the cross-sectional shape, the degree of deformation, and the diameters and corner radii of the dies [5-8]. In shop floor production, how to define the billet gauge, degree of deformation, die diameters and die corner radii and other major parameters, according to the users' requirements on product dimension and shape precision, is an important issue in the mechanical expanding of tubular parts. It is clear that this case can be modeled as a multi-objective optimization, which seeks the best combination of billet gauge, degree of deformation, and die diameters and corner radii under the condition of acceptable product dimension and shape precision.
Figure 1. System framework
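The two precision measures just defined (dimensional error as nominal minus average outer diameter, and shape error as maximum minus minimum outer diameter) can be sketched directly. The elliptical sample profile and its semi-axes are illustrative numbers, not measured data.

```python
import math

def precision_errors(radii, nominal_diameter):
    """Dimensional error and out-of-roundness of a sampled cross-section."""
    diameters = [2.0 * r for r in radii]
    dim_err = nominal_diameter - sum(diameters) / len(diameters)
    shape_err = max(diameters) - min(diameters)
    return dim_err, shape_err

# Sample a slightly elliptical profile (semi-axes in mm) once per degree:
a, b = 513.1, 502.9
radii = [math.sqrt((a * math.cos(math.radians(d))) ** 2 +
                   (b * math.sin(math.radians(d))) ** 2) for d in range(360)]

dim_err, shape_err = precision_errors(radii, nominal_diameter=1016.0)
print(round(shape_err, 1))   # out-of-roundness: 2*(a - b) = 20.4 mm
```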
This paper is based on MSC.Marc, and uses the PYTHON and FORTRAN languages for the secondary development of parametric modeling, automatic generation of
quality evaluating indicators, multi-objective optimization and other modules in the mechanical expanding process simulation. A special analysis system for mechanical expanding process simulation and technological parameter optimization has been constructed. This system can automatically establish the finite element model through the man-machine interaction interface of menus and dialogue windows. Users can utilize the communication functions of the man-machine interaction interface to simulate the mechanical expanding process and optimize the technological parameters quickly, accurately and efficiently.
2. System Structure
The system is constructed on the platform of the commercial finite element analysis software MSC.Marc, integrating a mechanical expanding forming process simulation system and a multi-objective optimization design system, with PYTHON, FORTRAN and other languages used as the secondary development tools for MSC.Marc. The main functions of the system are visual input, automatic model generation, automatic generation of quality evaluating indicators, and multi-objective optimization. The general structure of the system is shown in Fig. 1. The visualization menu module is the main control module of the system, through which users input all parameters for the mechanical expanding simulation and the optimization design; the other modules exchange data with it. The visualization menu module provides a functional interface for inputting parameters, selecting and establishing the model, and analysis during pre-processing, as well as visualization of quality evaluating indicators and optimal design during post-processing. Users conducting the simulation of the mechanical expanding forming process or the multi-objective optimization can access the full functionality by operating this menu alone, without the need to enter the original menu system. The simple menu makes the operation of the mechanical expanding forming process simulation more efficient and convenient.

2.1 Simulation System of Mechanical Expanding Forming Process
The simulation system of the mechanical expanding forming process includes two modules: automatic model generation and automatic generation of quality evaluating indicators. The structure of the module of automatic generation of the FEM model is shown in Fig. 2. The module is written in the PYTHON language as a secondary development component. Based on the user-input parameters, it uses the PyMentat module to create a PYTHON script that sends model-creation commands to Mentat, achieving external control of the running Mentat session and generating the model. The PYTHON script first reads all the user-input parameters, and then establishes the simulation model of the mechanical expanding forming process following the steps of creating a finite element model. The module offers users interfaces for the plane model, shell model and solid model,
and users can choose the finite element model type according to their own needs. Users only need to select the simulation model from the menu and input the required parameters and options; the system can then automatically generate the finite element analysis model of the mechanical expanding forming process simulation. Fig. 3 shows the FEM model of the mechanical expanding tube forming process, in which the tube billet is defined as a deformable object and the modular dies are rigid. Considering the symmetric nature of the problem, a quarter of the billet is taken to build the FEM model, with displacement constraints enforced on its symmetry planes.
Figure 2. Sketch map of the structure of the module of the automatic generation of FEM model
Figure 3. The FEM models: (a) the plane model; (b) the three-dimensional model
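The parameter-to-model step can be sketched as command-string generation in the spirit of the system's PyMentat interface, where each string would be issued to the Mentat session via py_send. The command names below are illustrative placeholders only, not verified Mentat commands.

```python
# Turn user parameters into a list of Mentat-style command strings.
# Command names are hypothetical; a real driver would pass each string
# to the Mentat session rather than printing it.
def expanding_model_commands(outer_d, thickness, n_dies, stroke):
    return [
        f"*set_model_name expand_d{outer_d:.0f}",
        f"*add_points 0 0 0  {outer_d / 2:.3f} 0 0",   # quarter cross-section
        f"*expand_shell thickness {thickness:.3f}",
        f"*define_rigid_dies count {n_dies}",
        f"*apply_die_stroke {stroke:.3f}",
    ]

for cmd in expanding_model_commands(outer_d=1016.0, thickness=25.4,
                                    n_dies=12, stroke=26.5):
    print(cmd)
```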
The module of automatic generation of quality evaluating indicators is made up of Marc subroutines written in FORTRAN. According to the different simulation models, the modules are categorized into three groups: plane model, shell model and solid model. Each program group contains a number of user subroutines. According to the information of nodes and elements automatically output in the process of simulation, the user can obtain the quality evaluating indicators. The current system folder will automatically generate a text file named
by a combination of the model name and job name to store the results of the quality evaluating indicators. The system offers users two ways of reading the quality evaluating indicators. One is reading them from the visual menu results list in a four-step process; this method is more intuitive. The other is the text file of quality evaluating indicators automatically generated by the module in the current directory; the latter is a more convenient method for data processing. The structure of the quality-control parameter generating module is shown in Fig. 4.
Figure 4. Sketch map of the structure of the module of the generation of quality-controlling parameters
2.2 System of Multi-Objective Optimization
In the literature [1], based on MSC.Marc and a genetic algorithm, a program of single-objective optimization for the forming parameters of the mechanical expanding process was developed in PYTHON by means of the secondary development of MSC.Marc. MSC.Marc and the optimization program are united, by which the forming parameters of the mechanical expanding process for cylinders can be optimized. An optimum combination of the forming parameters of expanding rate, sectorial die diameter and die rim corner radii was preliminarily concluded from a functional relation between the roundness errors of the cross-section and the forming parameters for the mechanical expanding process of large diameter line pipe. In the literature [2], the multi-objective optimization of the mechanical expanding process is performed: the dimensional error of the cross-section is added to the objective function, and the dimensions of the cylindrical billet are taken as design variables. However, the degree of deformation of the final product is evaluated by the plastic expanding rate; in the mechanical expanding forming process this rate is an output, while the remaining three design variables are inputs.
Thus, the module of multi-objective optimization in the system is a connection of the genetic optimization approach with the Marc commercial finite element analysis software. It takes the dimensional and shape accuracies of the final product as the goals of the optimization, and takes the forming parameters, such as tube diameter, expanding stroke, and die diameter and corner radii, as the design variables. Since the optimization process concerns the errors in dimension and geometrical form of the cross-section of the product, the planar FEM model is preferred in the optimization analysis. Fig. 5 shows the flow scheme of the multi-objective optimization for the mechanical expanding process.
Figure 5. Scheme of genetic algorithm of multi-objective optimization
Common methods of multi-objective optimization include the weighted array method, the efficiency coefficient method, the multiplication and division method, the main objectives method, and the coordination curve method [9, 10]. The weighted array method has the advantages of simplicity and effectiveness, and the corresponding source code is easy to prepare. The method is applicable to relatively simple problems; its deficiency is that it does not handle non-convex regions very well, and some experience is needed to determine the values of the weights. The system uses the weighted array method; the mathematical model is as follows:

\min f(x) = \omega_1 f_1(x_1, x_2, x_3, x_4) + \omega_2 f_2(x_1, x_2, x_3, x_4)

x_{1\min} \le x_1 \le x_{1\max}
x_{2\min} \le x_2 \le x_{2\max}
x_{3\min} \le x_3 \le x_{3\max}
x_{4\min} \le x_4 \le x_{4\max}
The design variables x1, x2, x3 and x4 represent the billet diameter, the expanding stroke, the die radius and the die corner radius, respectively, and their values depend on the diameter of the product. The objective functions f1 and f2 are the errors in
dimension and geometrical form of the cross-section of products respectively. f is the objective function being obtained through weighted array method. Z1 and Z 2 are weighted factor, two numbers greater than zero, and their values depend on the magnitude and importance of f 1 , f 2 . In the system, weighted factor is divided into two parts Z1 , Z 2 :
Zi
Zi1 Zi 2 (i 1,2)
Zi1 is the intrinsic weighted factor which reflects the important degree of the objective function of the i th item. As the correction weight factor of the objective function of the i th item. Zi 2 is used to gradually correct the effect of objective function in magnitude difference, during the iterative process. Zi1 is acquired from experience. Taking into account of the precision of dimension of the final products is more important than making geometrical form. Zi1 slightly larger than Zi 2 . Zi 2 can be evaluated by gradient f i of the objective function. Correction weights factor is desired: Zi 2
2
1 f i ( x1 , x2 , x3 , x4 ) (i 1,2)
Since no explicit functions exist between the objective function and design variables, the partial derivatives of the objective function to the design variables is substituted approximately by the ratio of the difference of the objective function to the difference of the design variables in the calculation of the gradient function. In the calculation, the genetic algorithm program has to judge whether better seeds appear or not in each evolutionary generation. If better seeds appear, Z 21 and Z22 are calculated, and the old values are replaced by the new values.
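The weighted-sum genetic search can be sketched as below. The quadratic surrogates f1 and f2 and the fixed weights are invented stand-ins for the Marc-evaluated errors, and the adaptive weight correction described above is omitted for brevity; the population size, crossover probability and mutation probability match the values used in Section 3.

```python
import random

# Weighted-sum GA over the four bounded design variables (Table 1 bounds).
BOUNDS = [(985, 1016), (24, 29), (432, 538), (3, 15)]   # x1..x4
W1, W2 = 0.6, 0.4                                       # assumed fixed weights

def f1(x):   # surrogate dimensional error (stand-in for a Marc run)
    return (x[0] - 1005) ** 2 / 1e3 + (x[1] - 26.5) ** 2

def f2(x):   # surrogate shape error (stand-in for a Marc run)
    return (x[2] - 530) ** 2 / 1e3 + (x[3] - 10.4) ** 2

def fitness(x):
    return W1 * f1(x) + W2 * f2(x)

def evolve(pop_size=10, generations=60, pc=0.8, pm=0.15, seed=0):
    rnd = random.Random(seed)
    pop = [[rnd.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        children = pop[:2]                       # elitism: keep two best
        while len(children) < pop_size:
            a, b = rnd.sample(pop[:5], 2)        # select among the best
            child = ([ai if rnd.random() < 0.5 else bi for ai, bi in zip(a, b)]
                     if rnd.random() < pc else a[:])
            if rnd.random() < pm:                # mutate one gene within bounds
                i = rnd.randrange(len(BOUNDS))
                child[i] = rnd.uniform(*BOUNDS[i])
            children.append(child)
        pop = children
    return min(pop, key=fitness)

best = evolve()
print([round(v, 1) for v in best])
```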
3. Example and Analysis
A steel pipe of 1016 mm diameter was chosen for the simulation analysis. The cross-sectional shape of the blank pipe is a normal ellipse, and the roundness error is 2%. The thickness of the initial blank pipe is 25.4 mm. The die is a split sectorial structure with 12 splits, and the sectorial angle is 30°. The material of the pipeline is X60. In view of the mechanical properties of X60 and the purpose of the simulation analysis, a bilinear hardening material model is used in the finite element calculation. The yield strength is 475 MPa and the tensile strength is 600 MPa. The elastic modulus is 2.1×10^5 MPa and the plastic modulus is 378 MPa. Poisson's ratio is 0.3. Table 1 gives the allowable ranges of the design variables x1, x2, x3 and x4. The circumcircle diameter of the dies is 915 mm before expanding. In the process of
588
S. Zhao, B. Guo and M. Jin
genetic optimization, the operating parameters are defined as: initial population M = 10, crossover probability Pc = 0.8, mutation probability Pm = 0.15.

Table 1. Allowable range of design variables

x1 (mm): 985 ≤ x1 ≤ 1016
x2 (mm): 24 ≤ x2 ≤ 29
x3 (mm): 432 ≤ x3 ≤ 538
x4 (mm): 3 ≤ x4 ≤ 15
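The genetic loop with these operating parameters can be sketched as follows. The FEM-based objective is replaced by a placeholder, and the crossover and mutation operators are illustrative assumptions, since the paper does not specify them.

```python
import random

# Variable bounds from Table 1 and the reported GA operating parameters:
# population M = 10, crossover probability Pc = 0.8, mutation probability Pm = 0.15.
BOUNDS = [(985, 1016), (24, 29), (432, 538), (3, 15)]
M, PC, PM, GENERATIONS = 10, 0.8, 0.15, 50

def objective(x):
    # Placeholder (assumption): the real objective is the weighted FEM-based
    # quality evaluation; here we just penalize distance from the mid-range.
    return sum((xi - (lo + hi) / 2) ** 2 for xi, (lo, hi) in zip(x, BOUNDS))

def clip(x):
    # Keep every design variable inside its allowable range.
    return [min(max(xi, lo), hi) for xi, (lo, hi) in zip(x, BOUNDS)]

def evolve(seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(M)]
    best = min(pop, key=objective)
    for _ in range(GENERATIONS):
        nxt = [best]                              # elitism: keep the best seed
        while len(nxt) < M:
            a, b = rng.sample(pop, 2)
            child = list(a)
            if rng.random() < PC:                 # arithmetic crossover
                w = rng.random()
                child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]
            if rng.random() < PM:                 # mutate one variable
                i = rng.randrange(len(child))
                lo, hi = BOUNDS[i]
                child[i] += rng.gauss(0, 0.1 * (hi - lo))
            nxt.append(clip(child))
        pop = nxt
        best = min(pop, key=objective)
    return best
```

Each generation the program would, as described above, check whether a better seed has appeared and, if so, recompute the correction weights before the next evaluation round.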
The results of the optimization are shown in Table 2. It can be seen that, compared with the 2% roundness error of the billet, the 0.145% roundness error of the product is remarkably lower, and the outer diameter of the product is very close to the expected value.

Table 2. Results of optimization

x1 (mm)    x2 (mm)   x3 (mm)   x4 (mm)   Diameter (mm)   Roundness (%)
1004.798   26.501    530.190   10.375    1016.127        0.145
By means of ME PSS V1.0, and adopting the combination of forming parameters from the optimization, an FEM numerical simulation of the mechanical expanding process of the steel pipeline was conducted. The automatically output quality evaluation is: outer diameter of the product 1016.127 mm, roundness error 0.145%, average wall thickness 25.106 mm, plastic expanding ratio 1.174%, radial spring-back 2.309 mm, and radial forming load 50.496 MN. The outer radius and the wall thickness distribution of the product after the expanding process are shown in Fig. 6 and Fig. 7. In the figures, the vertical coordinate shows the result for the finished product, and the horizontal coordinate shows the central angle over 1/4 of the product; the positions 0° and 90° correspond to the major and minor axes of the billet, respectively. It can be seen that, between two adjacent sectorial dies, the outer diameter of the product is smaller than that of the working arc of the die due to the effect of the expanding die. The wall thickness of the product is basically uniform within the working arc of the die, where it takes its maximum. However, the "thick-thin-thick-thin-thick" variation that occurs at the suspended part between two adjacent dies is an intrinsic feature of partial deformation. After the expanding, the areas of larger equivalent plastic strain and equivalent residual stress both occur at the ends of two adjacent dies, as shown in Fig. 8.
Development of Simulation System for Mechanical Expanding Process of Cylinders
589
Figure 6. Distribution of outside radius of finished pipe

Figure 7. Distribution of wall thickness of finished pipe
Figure 8. Distribution of equivalent strain and residual equivalent stress of finished pipe
4. Conclusion
Based on a secondary development of MSC.Marc, the simulation and analysis system ME PSS V1.0 was established for the simulation of the mechanical expanding process and the optimization of its technological parameters. The system realizes parametric modeling, automatic generation of quality evaluation indicators, and multi-objective optimization of the design of the mechanical expanding process. The special-purpose system is easy and effective to use. Through simulation, the quality evaluation indicators and the optimized technological parameters can be obtained.
5. Acknowledgement
The authors acknowledge the financial support of the National Natural Science Foundation of China (50475080) and the Hebei Province Natural Science Foundation (2006000246).
6. References
[1] Chen Xiao-yan, Guo Bao-feng, Jin Miao. Optimisation of forming parameters of cylinders in mechanical expanding process. Journal of Plasticity Engineering, 2006, 13(6):24-28
[2] Chen Bao-lin. Prospect of domestic construction of submerged arc straight welding pipe mill. Steel Pipe, 2000, 29(2):5-9
[3] Wang San-yun. The production technology development of large diameter LSAW pipe abroad. Welded Pipe and Tube, 2000, 23(6):50-58
[4] Peng Zai-mei. Discussion on some technical problems of UOE pipe production line to be built by Bao Steel. Welded Pipe and Tube, 2004, 27(5):46-51
[5] Guo Bao-feng, et al. The influence of sectorial angle on mechanical expanding process. Journal of Plasticity Engineering, 2002, 9(1):59-61
[6] Guo Bao-feng. Influence of expanding-die diameter on the quality of finished products. Journal of Plasticity Engineering, 2003, 10(4):52-57
[7] Guo Bao-feng. Influence of geometrical parameters of expanding-die on product quality. Journal of Plasticity Engineering, 2004, 11(1):46-51
[8] Guo Bao-feng. Influences of expanding-ratio and overlap-length on the quality of finished products. China Mechanical Engineering, 2004, 15(12):1111-1114
[9] Guo Bao-feng, Zhao Shi-yan, Wang Dong-cheng. Multi-objective optimisation of forming parameters of cylinders in mechanical expanding process. Journal of Plasticity Engineering (accepted)
[10] Liu Wei-xin. Optimal Design for Mechanism. Beijing: Tsinghua University Press, 1994
[11] Song Li-min. Genetic algorithm applied to multi-objective optimisation: a comparative study. Computers and Applied Chemistry, 2005, 22(11):1079-1082
Rectangle Packing Problems Solved by Using Feasible Region Method Pengcheng Zhang, Jinmin Wang, Yanhua Zhu Department of Mechanical Engineering, Tianjin University of Technology and Education, Tianjin 300222, China, Email:[email protected]
Abstract The rectangle packing problem is investigated and the feasible region of the rectangles to be packed is comprehensively analyzed. The available information is fully exploited and a packing algorithm is proposed which realizes the choice and location of the packing rectangles based on their feasible regions. Examples are shown to indicate that the algorithm can produce optimal packing results efficiently and that it has wide applicability in engineering. Keywords: Packing problem, Feasible region, Packing space, Positioning function, Attractive factor
1. Introduction
As an NP-hard combinatorial optimization problem, the rectangle packing problem [1-2] arises widely in mechanical design and manufacture, transportation, LSI circuit design and aerospace. Since a globally optimal solution is difficult to find in limited time, only heuristic algorithms have been developed for these problems, the most widely used being the structured method. The structured method divides the packing process into two parts, namely ordering and locating: the former decides which rectangle will be selected for packing next, and the latter decides where the selected rectangle will be located. By completing these two tasks, a feasible solution of the packing problem is obtained once the ordering and locating rules are determined. The quality of the feasible solution depends on the selection of the ordering and locating rules; suitable rules can improve the feasible solution to a large extent. Both the ordering and locating rules come in two types, static and dynamic. Combining the dynamic attractive factor method of reference [3], this paper first puts forward an algorithm to calculate any rectangle's feasible region. Then, taking the feasible regions of the objects to be packed into the packing space, one can choose the packing objects and determine their locations. The whole algorithm aims to produce a maneuverable and flexible packing process and optimal packing results.
2. Feasible Region of Packing Objects
The feasible region is the collection of all feasible locations of a packing rectangle in the packing space. It can be expressed as the area surrounded by the closed path traced by a chosen point of the packing rectangle as the rectangle moves along the packing space boundary [4], as shown in Figure 1. The size of this area reflects how difficult the packing object is to pack.
Figure 1. The rectangle and its feasible region
The size of the feasible region is associated with the size of the packing space and with the size and shape of the packing objects. Generally speaking, the more complicated the packing space is, and the bigger the area and the longer the sides of the packing rectangle are, the smaller the feasible region is. The shape of the feasible region may be points, line segments (including broken lines) or planar polygonal regions. Since the packing space varies during the packing process, the same object may be packed in a different order and its feasible region will then also differ. Taking the orthogonal decomposition of the packing space as a premise, references [5] and [7] discussed the ordering problem of the waiting-to-be-packed rectangles using feasible regions, but the feasible regions they considered were all restricted to one certain sub-rectangle produced by the orthogonal decomposition of the packing space. Therefore, the feasible region obtained there is not the genuine one for the waiting-to-be-packed rectangles in the whole packing space.
3. Packing Problems Solved Using Feasible Region Method
Which rectangle is packed next is generally determined by ordering rules; the commonly seen ones order by the area of the rectangles, the longer side, the shorter side and so on. All these ordering methods are static, meaning that the order of the packing rectangles is determined before the packing process. Static methods are easy to implement but have obvious disadvantages: one is that, as the packing space changes in real time, it is impossible to select the most suitable object, and as a result the quality of the packing result is low. Ordering by feasible region means determining the rectangles to be packed according to the size and shape of their feasible regions in the current packing space. As the packing space changes, the object selected each time is the most suitable one, so the final quality of the packing result can be ensured. The specific procedures are as follows:

3.1 Describe the Packing Space
The packing space is the container in which packing objects can be packed. It often changes as packing proceeds. In this paper, we regard the packing space as a polygon and describe it by the vertex sequence of the polygon. The advantages of this are as follows. First, it simplifies the calculation of the feasible region of the remaining packing rectangles through the inward offset of the vertices of the packing space. Second, the feasible region vertices obtained through the offset of the vertices of the packing space polygon are usually the locations of the next rectangle to be packed: if packing rectangles are put on these points they cling to the sides and corners of the packing space, as in the strategy of "taking up corners" and "clinging to sides" of reference [6], and this also satisfies the traditional locating rule of "gold corner, silver side and grass belly". When a rectangle is packed into the packing space, the vertex sequence of the space polygon is updated as follows: if one or more of the current rectangle's vertices coincide with vertices of the space polygon, those vertices are removed from the sequence; if not, the rectangle's vertices are inserted into the vertex sequence as new vertices. Thus the new packing space is obtained.

3.2 Calculate the Feasible Region for the Current Packing Rectangle
Set the center of the current rectangle as the base point. The algorithm is as follows:
1. Get the packing region and describe it by its vertex sequence.
2. Offset all the packing space vertices towards the polygon interior by the size of the current rectangle to get the offset polygon.
3. Find the intersections at which two nonadjacent edges of the offset polygon intersect and insert them into the vertex sequence of the offset polygon.
4. Move the current rectangle along the offset polygon boundary with the center point of the rectangle coinciding with each vertex of the offset polygon in turn. If the rectangle and the packing space polygon intersect, the vertex cannot belong to the feasible region, so remove it from the vertex sequence of the offset polygon.
5. Finally, the remaining vertices of the offset polygon sequence are the useful vertices, which encircle the feasible region.
Using this algorithm we can easily get the feasible region of each waiting-to-be-packed rectangle. Figure 2 shows the packing space polygon, the offset polygon and the feasible region polygon.
Figure 2. a The packing space polygon and the offset polygon; b the feasible region polygon (shaded parts)
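As a minimal illustration of the inward-offset idea (not the general polygon algorithm above), consider the special case of an axis-aligned w×h rectangle, with its center as base point, in a plain W×H rectangular packing space: the offset polygon, and hence the feasible region, is itself a rectangle of area (W − w)(H − h).

```python
def feasible_region_area(W, H, w, h):
    """Area swept by the rectangle centre inside a W x H rectangular container."""
    dx, dy = W - w, H - h
    if dx < 0 or dy < 0:
        return 0.0      # the rectangle does not fit at all
    return dx * dy      # 0 covers the degenerate point/segment cases
```

In the 20×20 space of example 1 below, object B1 (4×1) would have a feasible region of area (20 − 4) × (20 − 1) = 304, while a 20×20 object has a single feasible point (area 0).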
3.3 Select the Only Rectangle for Packing
The size of the feasible region is a measure of how difficult the packing rectangle is to pack. The bigger the feasible region, the more locations at which the object can be put, that is, the more chances the object has of being packed into the packing space. An intuitive strategy is therefore to choose the rectangles that have relatively small feasible regions, so as to make it possible to pack as many rectangles as possible; otherwise there will be less chance for these objects to be packed later. Generally speaking, the bigger the area and the longer the sides of a packing rectangle, the smaller its feasible region, so it is quite appropriate to choose the ones with smaller feasible regions. Examples indicate that selecting rectangles with smaller feasible regions not only synthesizes the packing strategies of descending area or descending longer side, but also takes the specific condition of the current packing space into account, which makes the selection of packing objects more pertinent. The specific procedures are as follows:
1. Sort the objects in descending order with the longer side as the first keyword and the shorter side as the second keyword.
2. Scan the sequence and select those objects whose shorter side is longer than that of every object selected before.
3. Determine the feasible regions of all the selected rectangles and calculate their areas.
4. Choose the one rectangle whose feasible region area is the smallest among the selected rectangles.
By step 2 we choose only the necessary rectangles from all the waiting-to-be-packed rectangles for the feasible region calculation, so many redundant calculations are cut down. For example, consider five rectangles: R1: 15×4, R2: 13×3, R3: 10×6, R4: 9×4, R5: 7×2.
Since both sides of R1 are longer than those of R2 and R5, and one side of R1 equals that of R4 while the other is longer, it is obvious that, whatever the shape of the current packing space, the feasible region of R1 must be smaller than those of R2, R4 and R5. Though the longer side of R3 is shorter than that of R1, its shorter side is longer, so it cannot be determined which of the two should be selected by comparing their sides alone. In this way, it is only required to calculate and compare the feasible regions of R1 and R3. Through this processing, many redundant calculations are cut down and the required packing rectangle is found. Besides, there may be two special cases of the feasible region: one is that the feasible region is just a point, the other is that part or all of the feasible region consists of line segments. In the former case, the packing rectangle is exactly the same size as the packing space, so there is no need to judge by the size of the feasible region; the only thing to do is to pack it at the only position, the feasible region point. In the latter case, the object is packed at an end point of the line segment. As seen in Figure 3, the dashed line is the feasible region of the rectangle (taking the center of the rectangle as base point) and point A is the packing position.
Figure 3. Part of the feasible regions is line segment
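The side-based pre-filter and the final choice by smallest feasible region (steps 1-4 above) can be sketched as follows; `region_area` is a caller-supplied function, since the true feasible-region computation depends on the current packing space, and the function names are illustrative.

```python
def candidates(rects):
    """rects: list of (name, a, b). Return the rectangles that survive the
    side-based pre-filter: scanning in descending (longer side, shorter side)
    order, keep only those whose shorter side beats every one kept so far."""
    norm = [(name, max(a, b), min(a, b)) for name, a, b in rects]
    norm.sort(key=lambda r: (r[1], r[2]), reverse=True)
    best_short, keep = -1, []
    for name, long_side, short_side in norm:
        if short_side > best_short:      # not dominated by a previous rectangle
            keep.append(name)
            best_short = short_side
    return keep

def select(rects, region_area):
    """Among the surviving candidates, pick the one whose feasible region
    (as computed for the current packing space) has the smallest area."""
    return min(candidates(rects), key=region_area)
```

For the five rectangles of the example, the pre-filter leaves exactly R1 and R3, matching the discussion above; only their two feasible regions then need to be computed and compared.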
3.4 Using the Feasible Region to Locate
The next problem is to determine the most suitable position for the object to be packed, which was selected through the calculation and comparison of feasible regions. In view of possible constraints in the practical packing process, this paper adopts the positioning function put forward in reference [2] to calculate the practical packing position. Since the packing object can be put anywhere in the feasible region, it is impossible to calculate and compare all the points; which feasible position is chosen is determined by the positioning function values. This paper calculates and compares the positioning function values only at the vertices of the feasible region polygon. The positioning function of packing is as follows:

f(x_i, y_i, z_i) = Σ_{t=1}^{m} ω_t · f_t(x_i, y_i, z_i)

where

f_t(x_i, y_i, z_i) = α_t·|x_i − x_0t| + β_t·|y_i − y_0t| + γ_t·|z_i − z_0t|    (t = 1…m, i = 1…n)

f(x_i, y_i, z_i) is the general positioning function; f_t(x_i, y_i, z_i) is the positioning function associated with each attractive factor; m is the number of attractive factors and n is the number of objects waiting to be packed.
(x_i, y_i, z_i) is the base point of the waiting-to-be-packed object, usually taken as the center of the object.
(x_0t, y_0t, z_0t) is the coordinate of the t-th attractive factor. α_t, β_t and γ_t are weight factors, chosen according to the importance of the constraints in the different directions; generally α_t + β_t + γ_t = 1. ω_t is a weight factor with Σ_{t=1}^{m} ω_t = 1, chosen according to the parts the different attractive factors play in the packing process. The positioning evaluation function is min f(x_i, y_i, z_i). When an attractive factor is set in the positioning function, all the packing rectangles come close to its position during the packing process. The weight factors can be determined in two ways. If the practical packing problem has constraint conditions in different directions, each playing a different role and exerting a different influence on the packing process, different weight factors can be used to express these constraint conditions. If the only goal is to pursue high space utilization, the weight factors may be regarded as part of the positioning function and set either to fixed values or to values that vary with the current packing process. In this paper we employ fixed values as weight factors, so that it is convenient to compare the results with those of other references.
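The positioning rule above can be sketched as follows; `factors` holds one tuple (ω_t, α_t, β_t, γ_t, (x0, y0, z0)) per attractive factor, and the candidate positions are the vertices of the feasible region polygon. The names are illustrative, not the paper's code.

```python
def positioning(p, factors):
    """General positioning function f: weighted sum of attraction terms."""
    x, y, z = p
    return sum(w * (a * abs(x - x0) + b * abs(y - y0) + g * abs(z - z0))
               for w, a, b, g, (x0, y0, z0) in factors)

def best_position(vertices, factors):
    """Positioning evaluation: the feasible vertex minimizing f."""
    return min(vertices, key=lambda p: positioning(p, factors))
```

With the parameters of example 1 below (t = 1, ω1 = 1, α1 = 0.75, β1 = 0.25, γ1 = 0 and the factor at the lower left corner), every rectangle is drawn towards that corner.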
4. Algorithm for Rectangle Packing Problems Solved by Feasible Region Method

A new algorithm has been developed, and its flow chart for rectangle packing problems solved by the feasible region method is shown in Figure 4. Using this algorithm, the optimal packing solution can be derived.
[Flow chart: input the sizes of the packing objects → determine the current packing space → select the objects not yet packed → calculate their feasible regions → select the packing object and determine its packing position by the positioning function → pack the object into the current layout region → repeat until no remaining object can be packed.]

Figure 4. The flow chart of the proposed algorithm for packing problem solving
5. Application Examples
The examples employ two-dimensional packing instances from the open rectangular packing data set, solved by the method this paper puts forward. Example 1: the packing space is 20×20 and there are 17 objects, the sizes of which are shown in Table 1, where Bi is the number of the rectangle, a and b are the sides of the rectangle, and s is the area of the rectangle. Packing schemes are requested that make the most use of the space; the rectangles cannot be inclined.
Table 1. The sizes of the objects in example one

Bi   a    b    s      Bi   a    b    s      Bi   a    b    s
1    4    1    4      7    5    3    15     13   2    8    16
2    4    5    20     8    4    1    4      14   15   4    60
3    9    4    36     9    5    5    25     15   5    4    20
4    3    5    15     10   7    2    14     16   10   6    60
5    3    9    27     11   9    3    27     17   7    2    14
6    1    4    4      12   3    13   39

1. In the case of t = 1, ω1 = 1, α1 = 0.75, β1 = 0.25, γ1 = 0 and the attractive factor at the lower left corner, the packing result can be seen in Figure 5 and the packing sequence is 14->12->16->9->3->5->11->2->15->4->7->13->10->17->1->6. The area taken up is 396 and the space utilization rate is 99%.
Figure 5. Packing result in the case of one attractive factor and α1 = 0.75, β1 = 0.25
2. In the case of t = 4, ω1 = ω2 = ω3 = ω4 = 0.25, α1 = α3 = 1, α2 = α4 = 0, β1 = β3 = 0, β2 = β4 = 1, γ1 = γ2 = γ3 = γ4 = 0 and the attractive factors at the four corners, the packing result can be seen in Figure 6 and the packing sequence is 14->12->16->9->3->5->11->2->15->4->13->10->17->7->1->6. The area taken up is 396 and the space utilization rate is 99%.
Figure 6. Packing result in the case of four attractive factors
Example 2: the packing space is 40×15 and there are 25 objects, the sizes of which are shown in Table 2, where Bi is the number of the rectangle, a and b are the sides of the rectangle, and s is the area of the rectangle. Packing schemes are requested that make the most use of the space; the rectangles cannot be inclined.

Table 2. The sizes of the objects in example 2

Bi   a    b    s      Bi   a    b    s      Bi   a    b    s
1    11   3    33     10   13   4    52     19   1    2    2
2    13   3    39     11   3    5    15     20   3    5    15
3    9    2    18     12   11   2    22     21   13   5    65
4    7    2    14     13   2    2    4      22   12   4    48
5    9    3    27     14   11   3    33     23   1    4    4
6    7    3    21     15   2    3    6      24   5    2    10
7    11   2    22     16   5    4    20     25   6    2    12
8    13   2    26     17   6    4    24
9    11   4    44     18   12   2    24
1. In the case of t = 1, ω1 = 1, α1 = 0.5, β1 = 0.5, γ1 = 0 and the attractive factor at the lower left corner, the packing result can be seen in Figure 7 and the packing sequence is 21->10->22->2->8->18->9->1->14->7->12->5->17->6->3->16->11->20->4->25->24->23->15->13->19. The area taken up is 600 and the space utilization rate is 100%.
Figure 7. Packing result in the case of one attractive factor and α1 = 0.5, β1 = 0.5
2. In the case of t = 1, ω1 = 1, α1 = -0.5, β1 = -0.5, γ1 = 0 and the attractive factor (now a repulsive factor) at the center of the packing space, the packing result can be seen in Figure 8 and the packing sequence is 21->10->22->2->8->18->9->1->14->7->12->5->17->6->3->16->11->20->4->25->24->23->15->13->19. The area taken up is 600 and the space utilization rate is 100%.
Figure 8. Packing result in the case of one attractive factor and α1 = -0.5, β1 = -0.5
It can be seen from the packing results that the feasible-region packing algorithm, combined with different positioning function parameters, produces high-quality results. They are much better than the solutions produced for the same cases in references [3] and [8].
6. Conclusions
This paper has given a detailed description of a layout algorithm that uses the feasible region to solve rectangle packing problems. The open rectangle packing data set is used to test the algorithm, and different parameters are set in the positioning function to simulate different constraint conditions. All the results are satisfactory. This indicates that the algorithm is efficient and effective and that it can be applied to various practical engineering problems.
7. References
[1] Dowsland K A, Dowsland W B (1992) Packing problems. European Journal of Operational Research, 56:2-4
[2] Zha Jianzhong, Tang Xiaojun, Lu Yiping (2002) Survey on packing problems. Journal of Computer-Aided Design & Computer Graphics, 14:705-712
[3] Wang Jinmin, Yang Weijia (2005) Dynamic attractive factors applied in packing problems. Journal of Computer-Aided Design & Computer Graphics, 17:1725-1730
[4] Yang Weijia (2005) The research on solving algorithm and strategy for packing problem. Tianjin: Tianjin University
[5] Wang Jinmin, Jian Qihe (2003) An algorithm based on orthogonal space decomposition for packing problem. Modern Manufacturing Engineering, 7:35-37
[6] Huang Wenqi, Chen Duanbing (2005) An efficient quasi-physical and quasi-human block-packing algorithm. Computer Science, 32:182-186
[7] Liu Tianliang, Yuan Li (2003) A heuristic algorithm for solving rectangle packing problem. Journal of Qingdao University, 16:88-92
[8] Jian Qihe (2003) Research on the heuristic algorithm based on objective and orthogonal space decomposition for packing problem. Tianjin: Tianjin University
Aircraft’s CAD Modeling in Multidisciplinary Design Optimization Framework X.L. Ji, Chao Sun School of Aerospace Science and Engineering, Beijing Institute of Technology, Beijing 100081, China
Abstract An aircraft CAD modeling approach within a multidisciplinary design optimization framework is presented to support automatic manufacturing technology. After the dominant model is defined, each discipline constructs its view models simultaneously, in accordance with the multidisciplinary decomposition and interdisciplinary view consistency. It is demonstrated that the different representations of the functional behaviors of design objects are the foundation of the multidisciplinary model format. The modeling method is then applied to an Artillery Tube Launched Aircraft shape design project. Design information is shared easily and altered to design the shape dynamically. Keywords: Multidisciplinary Design Optimization (MDO), Aircraft design, CAD modeling, Artillery Tube Launched Aircraft (ATLA)
1. MDO Method of Aircraft

1.1 MDO Method
Aircraft design is multidisciplinary; that is, it requires the coordination of information from a number of highly specialized disciplines, which may include aerodynamics, structures, propulsion, controls, and manufacturing. The point of view, design emphasis, and design approach of each discipline specialist can be quite different. As design problems become more complex, the role of disciplinary specialists increases and it becomes more difficult for a central group to manage the process. As the analysis and design task becomes more decentralized, communications requirements become more severe. These difficulties with multidisciplinary design are particularly evident in the design of aircraft. The product design philosophy is to ensure the product's overall performance while shortening the cycle and reducing the cost of design, development and manufacturing as far as possible, by the synthetic use of modern design optimization methods. However, traditional design practice artificially severs the couplings between disciplines, weakens the collaborative effect, and makes it difficult to achieve a holistic optimum of the system. Moreover, the traditional design method has a long period and high cost, and has therefore become more and more inappropriate. By contrast, multidisciplinary design optimization (MDO) is a tool that has been used successfully throughout the design process to enable improvements in aircraft performance [1, 2]. By simultaneously considering the disciplines of interest, one can coherently exploit the synergism of mutually interacting phenomena. Furthermore, by casting the design problem as a formal optimization statement, computational algorithms can be used to search the design space in a rational and efficient manner. Faced with the complexity and synthetic nature of product design, commercial computer-aided design (CAD) technology is still the basic design tool at present. With the expansion and improvement of CAD parametric modeling [3, 4], designers can integrate the different design information, functions and structures demanded by every discipline, and solve multidisciplinary design problems effectively using intelligent interactive design methods.

1.2 MDO Characteristics during Different Design Phases
There are different MDO characteristics during the different design phases, as shown in Figure 1. Conceptual design defines the large-scale features of the aircraft: the major components and subsystems are named, and rough estimates are given for their size and shape. Preliminary design defines the intermediate-scale features of the aircraft. This includes the actual size, shape, and location of the aerodynamic lifting and control surfaces, the size and shape of the payload area consistent with design constraints, design details of the propulsion system, and intermediate-level details of the structural subassemblies. Detailed design defines all information necessary to manufacture the aircraft: based on the preliminary design information, the final design determines all fastening and joining details and produces mechanical drawings for all parts and subassemblies. Here, an Artillery Tube Launched Aircraft (ATLA) project is presented. The ATLA is a small autonomous flyer which is launched contained in an artillery shell and then deployed over the battlefield to capture images. The conceptual design shows that it is possible to meet the minimum requirements: 1.5 hour duration and 100 mph maximum airspeed. Preliminary sizing was conducted to determine how large the vehicle needed to be. For the basic minimum performance requirements, baseline vehicle weights can be derived using a combination of in-house developed procedures and historical data. The biggest weight driver for a vehicle meeting the requirements is the propulsion selection. Usually a design decision must be made between internal combustion and electric systems. For a component capable of surviving the high-g launch load, explosive fuels should be avoided; electric power becomes a cheap and reliable working solution. The requirement that the vehicle has to be launched from an artillery tube is the strongest driver for the final configuration.
From the configuration studies conducted, it was determined that the cheapest and simplest way to perform the mission was to have deployable wings that swing out from the fuselage, such as the wings on the JASSM (Joint Air to Surface Standoff Missile). However, in order to provide
sufficient lift, an increased wing area is needed, so a three-folded wing is adopted as shown in Figure 5. The wings need to be attached as far forward as possible to allow for the largest wing size to fold into the allowed space.
Figure 1. Aircraft’s MDO roadmap
2. Multidisciplinary Framework of Product Design
The goal, function and behavior of product design should be decomposed in detail so as to express the specific relationships among different design concepts. The design goal is to form the product's main function, which reflects the designers' multidisciplinary intentions. The product function is realized through working behavior, and the behavior is the physical principle by which the functions are realized. Structure reveals the product's working status and depicts the physical realization of product behavior. The design purpose is thus projected onto component structures [5] through functional language description and definitive regulation. For an ATLA application, the MDO framework is formed as shown in Figure 2.
Figure 2. Framework for interdisciplinary design optimization for an ATLA application
3. CAD Modeling of MDO Views
3.1 CAD Modeling Technology
The prevalent modeling technology in CAD software is parametric modeling driven by feature dimensions. Unigraphics, as a representative system, can enhance purpose-driven design capability through agile secondary development, as shown in Figure 3. For an ATLA configuration, the basic constraint topology does not change much, so automatic modification between correlative components can be realized using parametric modeling technology. Within the continuous region, variables can be altered by changing the linkage between dimensions, the assembly connections between accessories, and the model linkage.
Figure 3. CAD modeling technology
3.2 Technology Roadmap of Modeling of MDO Views
For an ATLA application, the geometry process is shown in Figure 4. At least six different geometry models are needed: linear aerodynamics, nonlinear aerodynamics, finite-element structural analysis, fuel, weights and performance.
Figure 4. ATLA geometry processes
The technology roadmap of MDO modeling can be described as follows. A curve model is constructed using CAD parametric modeling technology; a solid model of the configuration is then realized using NURBS, mold, and grid-blended faces obtained from the geometry process; and the structure-dominant model is constructed ultimately. Configuration design is carried out using integrated optimization. First, an optimum low-drag shape consistent with the aerodynamics is derived from the solid dominant model. Second, the shape is optimized using virtual wind-tunnel experiments or numerical simulation, and every part is amended by the optimization. Then, loads and deformations are applied to the structural geometry model of the ATLA. Finally, the optimization software (iSIGHT) is integrated with the Unigraphics grid-deformation software, a direct coupled analysis of configuration and structure is performed on the basis of the NURBS control curves, and the existing finite-element structure converges to the target shape.

3.3 ATLA Configuration Modeling of MDO Views
ATLA configuration modeling of the MDO views is shown in Figure 5. The conceptual design goal is expressed by a simple geometry model aided by the corresponding language description and design regulations. In the preliminary design phase, a highly realistic complex model is adopted as the dominant model in order to simulate interdisciplinary interactions. Subsystem analysis models are derived from the dominant model through abstracting, quoting, and extension. In the detailed design phase, sensitivity analysis technologies are used to carry out coupled design so as to improve the integrated performance of the ATLA. In other words, the roadmap of the modeling of MDO views is led by the system's solid model, and multi-parallel modeling proceeds simultaneously.
Figure 5. Modeling of MDO views of ATLA
4. Consistency of MDO Modeling
The consistency of MDO modeling [6] can be ensured by the optimization models of the subsystems derived from the dominant model, together with a consistency mechanism, as shown in Figure 6. The derivation rules are abstracting, quoting, and extension. The consistency mechanism comprises a constraint mechanism and an automatic feedback mechanism: the feedback of each discipline's optimization data modifies the affected parameters of the dominant model to guarantee consistency.
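The feedback mechanism can be sketched as a simple fixed-point loop; this is an illustrative toy, with a plain averaging merge and hypothetical parameter names, not the paper's actual consistency system.

```python
# Sketch of the consistency feedback mechanism (illustrative only): each
# discipline proposes new values for the shared parameters of the dominant
# model, and the dominant model merges the feedback and rebroadcasts it
# until the shared values stop changing.

def reconcile(dominant, disciplines, tol=1e-6, max_iter=100):
    """dominant: dict of shared parameters; disciplines: callables mapping
    the current dominant dict to their preferred parameter values."""
    for _ in range(max_iter):
        proposals = [d(dominant) for d in disciplines]
        merged = {k: sum(p[k] for p in proposals) / len(proposals)
                  for k in dominant}
        if all(abs(merged[k] - dominant[k]) < tol for k in dominant):
            return merged
        dominant = merged
    return dominant

# Two toy disciplines pulling a shared thickness toward different optima.
aero = lambda d: {"skin_thickness": 0.5 * (d["skin_thickness"] + 2.0)}
struct = lambda d: {"skin_thickness": 0.5 * (d["skin_thickness"] + 3.0)}
result = reconcile({"skin_thickness": 5.0}, [aero, struct])
```

The loop settles on a compromise value both disciplines can accept, which is the role the constraint-plus-feedback mechanism plays for the dominant model.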
Figure 6. Consistency system of MDO modeling
5. Conclusions
A traditional optimization model accounts for the influence of the other discipline subsystems by appending design parameters or consistency constraints. This weakens the interdisciplinary collaborative effect and easily creates conflicts. Dominant-model-oriented multidisciplinary parallel modeling, by contrast, considers the coupled effects of the different disciplines directly; it therefore eliminates redundant relationships, ensures model consistency and shared design information, and greatly improves the level of automation.
6. References
[1] Eisler, C.A. (2003) Multidisciplinary optimization of conceptual aircraft design. Carleton University
[2] Zink, P.S., DeLaurentis, D.A., Hale, M.A., et al. (2000) New approaches to high speed civil transport multidisciplinary design and optimization. IEEE Aerospace Conference Proceedings 1: 355-369
[3] Wu, B., Huang, H., Tao, Y., et al. (2006) Modeling of multidisciplinary views for complex products in MDO environments. Journal of Tsinghua University 46: 1816-1819
[4] Ma, T., Lan, F. (2002) Behavioral modeling—the fifth generation of CAD model technology. Computer Engineering and Applications: 98-100 (in Chinese)
[5] Zhao, B., Fan, Y. (2003) Views consistency in multi-views enterprise modeling. Computer Integrated Manufacturing Systems 7: 522-526 (in Chinese)
[6] Rosenman, M.A., Gero, J.S. (1996) Modeling multiple views of design objects in a collaborative CAD environment. CAD 28: 207-216
Optimization of Box Type Girder of Overhead Crane
Muhammad Abid, Muhammad Hammad Akmal, Shahid Parvez
GIK Institute of Engineering Sciences and Technology, Topi, Pakistan
Abstract: Double-girder box-type overhead cranes are used for heavy-duty applications in industry. In this paper a detailed parametric design optimization of a box-type main girder is performed for a crane of 150-ton capacity and 32 m span, after its basic design using the available design rules. The design optimization is performed with detailed 3D finite element analysis by changing the number, shape, and location of the horizontal stiffeners along the length of the girder, and the number and location of the stiffeners in the vertical direction, to control possible buckling and to achieve low weight with safe stress and deflection. During the optimization, the initially calculated thicknesses of the box-girder plates are not changed.
Keywords: box girder, optimization, overhead crane, FEA
1. Introduction
Overhead cranes are used for the handling and transfer of heavy loads from one position to another, and are therefore found in many areas of industry, such as automobile plants and shipyards [1,2]. Their design features vary widely according to their major operational specifications, such as the type of motion of the crane structure, the weight and type of the load, the location of the crane, geometric features, and environmental conditions. Since the crane design procedure is highly standardized, most effort and time is spent on the interpretation and implementation of the available design standards [3]. Many published studies address structural and component stresses, safety under static loading, and dynamic behavior [5-16]. Solid modeling of bridge structures and finite element analysis (FEA) to find displacements and stresses was investigated by Demirsoy [17]; solid modeling techniques applied to road-bridge structures analyzed with the finite element method are given in [18-20]. The DIN-Taschenbuch and F.E.M (Fédération Européenne de la Manutention) rules offer design methods, empirical approaches, and equations based on previous design experience and widely accepted design procedures. DIN-Taschenbuch 44 and 185 are collections of standards related to crane design; DIN norms generally state standard values of design parameters. The F.E.M rules are mainly an accepted collection of rules to guide crane designers, including criteria for deciding the external loads and selecting crane components. In this paper a detailed parametric design optimization of a box-type main girder
is performed for a crane of 150-ton capacity and 32 m span, after its basic design using the available DIN and F.E.M design rules. The design optimization is performed with detailed 3D FEA by changing the number, shape, and location of the horizontal stiffeners along the length of the girder, and the number and location of the stiffeners in the vertical direction, to control possible buckling and to achieve low weight with safe stress and deflection. During the optimization, the initially calculated thicknesses of the box-girder plates are not changed. Three case studies are carried out for the optimization, using:
- horizontal stiffeners only (study-1)
- vertical stiffeners only (study-2)
- both horizontal and vertical stiffeners (study-3)
Figure 1. Initial geometry of the overhead crane girder
2. Modeling, Material Properties and Meshing
A complete box girder is modeled in the ANSYS software and is shown in Figure 1 with all its dimensions: thickness of the side plates 16 mm, top and bottom plates 22 mm, vertical stiffeners 10 mm, width of the top and bottom plates 960 mm, and height of the side plates at the center 2600 mm. During the FEA, owing to symmetry, only half of the model is used, and it is optimized with different geometries under the applied loading conditions. Initially the box with the rail at the top is analyzed without any stiffener; different horizontal and vertical stiffeners are then modeled in stages and glued to the outer box, keeping in view the manufacturing process and the symmetry. A linear elastic material model is used for steel RSt 37-2, with Young's modulus 207 GPa, Poisson's ratio 0.3, allowable stress 157 MPa, and density 7.86×10⁻⁶ kg/mm³. 3-D, 10-node higher-order tetrahedral SOLID187 elements with three degrees of freedom at each node are used. The free-mesh option is used to mesh the entire geometry, as shown in Figure 2.
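As a rough hand check on these plate sizes (not part of the paper's FEA), the gross section properties of the mid-span box can be computed directly; the assumption that the 2600 mm side plates sit between the two 22 mm flanges, giving an overall depth of 2644 mm, is ours.

```python
# Gross cross-section properties of the mid-span box from the stated plate
# sizes. Assumption (ours): the 2600 mm webs sit between the two 22 mm
# flanges, so the overall depth is 2600 + 2*22 = 2644 mm.
b, tw = 0.960, 0.016          # flange width, web thickness [m]
hw, tf = 2.600, 0.022         # web height, flange thickness [m]
H = hw + 2 * tf               # overall depth [m]
bi, hi = b - 2 * tw, hw       # inner void dimensions [m]

A = b * H - bi * hi                   # cross-section area [m^2]
I = (b * H**3 - bi * hi**3) / 12.0    # second moment of area [m^4]
mass_per_m = A * 7860.0               # steel density 7860 kg/m^3

print(f"A = {A:.5f} m^2, I = {I:.5f} m^4, {mass_per_m:.0f} kg/m")
```

This gives roughly 0.125 m², 0.12 m⁴, and just under 1 t/m of bare box, which is of the same order as the girder masses reported later once the tapered ends, stiffeners, and the half-symmetry model are taken into account.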
2.1 Boundary Conditions
The crane is considered to stand at one position while lifting the load, as is generally recommended for crane operation; hence no horizontal force is considered to act on the main girder in the design calculations and the finite element analysis. The main girder is fully fixed at the ends where it joins the end carriages. A three-point bending loading strategy is applied, the distance between the two trolley wheels being taken as very small. The load is applied along the rail width, equally distributed over six nodes. For the different case studies the applied load includes the self-weight of the main girder, as discussed in the related sections below. Owing to the symmetry of the geometry, symmetry boundary conditions are applied on the plates, as shown in Figure 2(a).
Figure 2(a). FE model with applied boundary conditions
Figure 2(b). FE model using ANSYS Workbench
3. Results and Discussion
The maximum bending stress with and without the stress concentration points is shown in Figures 3 and 4.
Figure 3. Bending stress in girder with maximum at rail due to stress concentration where load is applied
Figure 4. Bending stress in girder by removing the volumes to avoid stress concentration, hence redistributing the stresses.
3.1 Study-1: Optimization Using Horizontal Stiffeners
In this case the optimization is performed by changing the number, position, and shape of the horizontal stiffeners only. The details of all the cases are summarized in Table 1. There is no considerable decrease in the maximum deflection with the L-shape stiffeners; better results are achieved with the C-shape horizontal stiffeners. With two C-shape horizontal stiffeners at 400 and 1700 mm from the top plate, the best optimized results (maximum deflection 37.32 mm, maximum bending stress 176 MPa, girder mass 16999 kg) are achieved. The analysis is also performed by modeling the girder in ANSYS Workbench; using the built-in solid elements with free meshing and removing the stress concentration points, a maximum deflection of 36.24 mm and a bending stress of 165 MPa are observed. Although the maximum bending stress exceeds the allowable value, this can be neglected because it arises at stress concentrations in all the cases.

3.2 Study-2: Optimization Using Vertical Stiffeners
In this case the optimization is performed by changing the number and position of plate stiffeners along the length of the girder; the results are summarized in Table 2. Increasing the number of vertical stiffeners from one to two and so on decreases the maximum deflection from 37.74 mm to 34.79 mm, but each further stiffener brings only a small improvement. With seven vertical stiffeners at 2000 mm spacing the deflection is reduced to 34.79 mm, at the cost of a 1042 kg increase in girder mass as the stiffeners increase from 1 to 7. A maximum bending stress of 160 MPa is observed, which is very close to the allowable stress of the flange material. Using Workbench and neglecting the stress concentration, the maximum deflection and maximum bending stress are reduced to 29.52 mm and 135 MPa respectively, within the allowable limits.

3.3 Study-3: Optimization Using Both the Horizontal and Vertical Stiffeners

In this case, the analyses are performed by changing the number and location of the vertical stiffeners along the length of the girder, in addition to two C-shape horizontal stiffeners positioned equally along the height of the girder; the results are summarized in Table 3. Two C-shape horizontal stiffeners are used because study-1 concluded they gave the most optimized results. As the number of vertical stiffeners is increased, the maximum deflection decreases from 34.23 to 34.06 mm and the maximum bending stress decreases from 166 to 160 MPa. Interestingly, with three to seven vertical stiffeners the maximum deflection and bending stress remain essentially the same, while the girder mass increases. The vertical plates are used here to avoid lateral buckling.
With the Workbench model and the stress concentrations neglected, the maximum deflection and stress are reduced to 29.32 mm and 131 MPa, within the allowable limits. The box girder is then modeled with two C-shape horizontal stiffeners placed at 625 mm and 1250 mm from the top plate and twenty-one vertical stiffeners in the half model of the girder: the first four vertical stiffeners are located along the support, where the cross-section varies, and the remaining 17 are located along the length of the girder where its height is uniform. For the optimization, models with 21 and 31 vertical stiffeners are also analyzed. In addition, the position and orientation of the horizontal stiffeners are changed, for example using inverted C-shape stiffeners; the results are summarized in Table 4. With 17 vertical stiffeners in addition to the two C-shape stiffeners, a maximum deflection of 32.45 mm and a maximum bending stress of 218 MPa are observed; with the Workbench model and the stress concentrations removed, these reduce to 28.62 mm and 132 MPa. With L-shape horizontal stiffeners in addition to the vertical stiffeners, the results agree well with those for the two C-shape stiffeners, but with a slight increase in girder weight. With inverted C-shape stiffeners no difference in the results is observed, but this arrangement is not favored from a manufacturing point of view.
Figure 5. Different orientations of horizontal stiffeners: (a), (b), (c)
Table 1. Results comparison by changing the shape, number and location of horizontal stiffeners

| # and type of stiffeners | Location | Max deflection (mm) | Max bending stress (MPa) | Mass of girder (kg) |
| No stiffener | ---- | 47.49 | 210 | 15865 |
| 1 C-shape horizontal stiffener 180x70x8 | Touching top plate | 41.91 | 206 | 16488 |
| | @400 mm from top plate | 42.98 | 205 | 16457 |
| | @650 mm from top plate | 42.97 | 325 | 16466 |
| | @890 mm from top plate (aligned with lower plate) | 41.94 | 208 | 16438 |
| 2 C-shape horizontal stiffeners 180x70x8 | Equally divided throughout the height | 39.77 | 187 | 16972 |
| | 1st @710 mm, 2nd @1655 mm from top plate | 40.63 | 353 | 16999 |
| | 1st @400 mm, 2nd @1700 mm from top plate | 37.32 | 176 | 16999 |
| | 1st @400 mm, 2nd @1700 mm from top plate (WORKBENCH) | 36.24 | 165 | 16999 |
| 3 C-shape horizontal stiffeners 180x70x8 | Equally divided throughout the height | 40.97 | 318 | 17539 |
| | 1st @710 mm, 2nd @1340 mm, 3rd @1970 mm from top plate | 39.36 | 350 | 17537 |
| 1 L-shape horizontal stiffener 156x156x8 | Touching upper plate | 46.54 | 208 | 16466 |
| | @400 mm from top plate | 44.90 | 213 | 16466 |
| | @878 mm from top plate | 42.75 | 206 | 16438 |
| 2 L-shape horizontal stiffeners 156x156x8 | Equally divided throughout the height | 42.00 | 200 | 16971 |
| | 1st @722 mm, 2nd @1661 mm from top plate | 43.71 | 201 | 17001 |
| 3 L-shape horizontal stiffeners 156x156x8 | Equally divided throughout the height | 45.00 | 204 | 17540 |
| | 1st @722 mm, 2nd @1348 mm, 3rd @1974 mm from top plate | 43.71 | 203 | 17538 |
Table 2. Results comparison by changing the number and location of vertical stiffeners

| Location and number of vertical stiffeners | Max deflection (mm) | Max bending stress (MPa) | Mass of girder (kg) |
| 1 @6500 mm from center | 37.74 | 179 | 16039 |
| 2 @12000 mm from each other | 35.39 | 167 | 16213 |
| 3 @6000 mm from each other | 35.03 | 166 | 16386 |
| 4 @4000 mm from each other | 34.86 | 165 | 16560 |
| 5 @3000 mm from each other | 34.79 | 165 | 16734 |
| 6 @2400 mm from each other | 34.78 | 165 | 16907 |
| 7 @2000 mm from each other | 34.79 | 160 | 17081 |
| 7 @2000 mm from each other (WORKBENCH) | 29.52 | 135 | 17081 |
Table 3. Results comparison by changing the location of vertical stiffeners in addition to the two C-shape stiffeners of study-1

| Location and # of vertical stiffeners | Max deflection (mm) | Max bending stress (MPa) | Mass of girder (kg) |
| 3 @6000 mm | 34.24 | 166 | 17472 |
| 5 @3000 mm | 34.07 | 164 | 17806 |
| 7 @2000 mm | 34.06 | 165 | 18140 |
| 7 @2000 mm (WORKBENCH) | 29.32 | 131 | 18140 |
Table 4. Results comparison by changing the number and location of vertical stiffeners in addition to two different types of horizontal stiffeners

| Number and type of stiffeners | Max deflection (mm) | Max bending stress (MPa) | Mass of girder (kg) |
Orientation of horizontal stiffener as per Figure 5(a):
| 17 @750 mm along uniform height | 32.45 | 218 | 20104 |
| 17 @750 mm along uniform height (WORKBENCH) | 28.62 | 132 | 20104 |
| 21 @600 mm along uniform height | 32.45 | 221 | 20779 |
| 31 @400 mm along uniform height | 32.70 | 224 | 22451 |
Orientation of horizontal stiffener as per Figure 5(b):
| 2 C-shape horizontal stiffeners | 32.35 | 220 | 20114 |
Orientation of horizontal stiffener as per Figure 5(c):
| 2 C-shape horizontal stiffeners | 32.40 | 220 | 20123 |
Changing position of horizontal stiffeners:
| 2 C-shape stiffeners @866 and 1733 mm from top plate | 32.05 | 218 | 20123 |
| 2 C-shape stiffeners @866 and 1733 mm from top plate (WORKBENCH) | 28.18 | 129 | 20123 |
Changing the shape of horizontal stiffeners:
| 2 L-shape horizontal stiffeners | 32.31 | 220 | 20169 |
4. Conclusions
From the detailed optimization studies the following conclusions are drawn:

1. The most optimized case is two C-shape horizontal stiffeners equally distributed along the height, with 17 vertical stiffeners along the uniform height and 4 along the support region of varying section. Here the maximum deflection and stress are reduced to 28.18 mm and 129 MPa according to the Workbench model.
2. The results from the ANSYS Workbench model are about 10% more accurate than those of the ANSYS model, which is attributed to the smaller discretisation error.
3. The orientation of the horizontal stiffeners makes no visible difference to the results.
4. The minimum deflection is achieved by dividing the horizontal stiffeners equally along the height.
5. To control longitudinal and lateral buckling, the use of horizontal and vertical stiffeners is strongly recommended; in addition, the stiffeners increase the strength of the girder.
6. To further reduce the weight of the girder in future, varying the plate thicknesses and using other sections is recommended.

5. References
[1] Oguamanam, D.C.D., Hansen, J.S., Heppler, G.R. (1998) Dynamic Response of an Overhead Crane System. Journal of Sound and Vibration, 213 (5), 889-906.
[2] Otani, A., Nagashima, K., Suzuki, J. (1996) Vertical Seismic Response of Overhead Crane. Nuclear Engineering and Design, 212, 211-220.
[3] Erden, A. (2002) Computer Automated Access to the "F.E.M. Rules" for Crane Design. Anadolu University Journal of Science and Technology, 3 (1), 115-130.
[4] Anon, A. (1998) New Thinking in Mobile Crane Design. Cargo Systems, 5 (6), 81.
[5] Baker, J. (1971) Cranes in Need of Change. Engineering, 211 (3), 298.
[6] Buffington, K.E. (1985) Application and Maintenance of Radio Controlled Overhead Travelling Cranes. Iron and Steel Engineer, 62 (12), 36.
[7] Demokritov, V.N. (1974) Selection of Optimal System Criteria for Crane Girders. Russian Engineering Journal, 54 (4), 7.
[8] Erofeev, M.J. (1987) Expert Systems Applied to Mechanical Engineering Design: Experience with a Bearing Selection and Application Program. Computer Aided Design, 55 (6), 31.
[9] Lemeur, M., Ritcher, C., Hesser, L. (1977) Newest Methods Applied to Crane Wheel Calculations in Europe. Iron and Steel Engineer, 51 (9), 66.
[10] McCaffery, F.P. (1985) Designing Overhead Cranes for Nonflat Runways. Iron and Steel Engineer, 62 (12), 32.
[11] Reemsyder, H.S., Demo, D.A. (1978) Fatigue Cracking in Welded Crane Runway Girders: Causes and Repair Procedures. Iron and Steel Engineer, 55 (4), 52.
[12] Rowswell, J.C., Packer, J.A. (1989) Crane Girder Tie-Back Connections. Iron and Steel Engineer, 66 (1), 58.
[13] Moustafa, K.A., Abou-El-Yazid, T.G. (1996) Load Sway Control of Overhead Cranes with Load Hoisting via Stability Analysis. JSME International Journal, Series C, 39 (1), 34-40.
[14] Oguamanam, D.C.D., Hansen, J.S., Heppler, G.R. (2001) Dynamics of a Three-dimensional Overhead Crane System. Journal of Sound and Vibration, 242 (3), 411-426.
[15] Auering, J.W., Troger, H. (1987) Time Optimal Control of Overhead Cranes with Hoisting of the Load. Automatica, 23 (4), 437-447.
[16] Huilgol, R.R., Christie, J.R., Panizza, M.P. (1995) The Motion of a Mass Hanging from an Overhead Crane. Chaos, Solitons & Fractals, 5 (9), 1619-1631.
[17] Demirsoy, M. (1994) Examination of the Motion Resistance of Bridge Cranes. PhD Thesis, Dokuz Eylul University, Izmir, Turkey.
[18] Ketill, P., Willberg, N.E. Application of 3D Solid Modeling and Simulation Programs to a Bridge Structure. PhD Thesis, Chalmers University of Technology, Sweden.
[19] Celiktas, M. (1998) Calculation of Rotation Angles at the Wheels Produced by Deflection Using the Finite Element Method and the Determination of Motion Resistance in Bridge Cranes. Journal of Mechanical Design, 120.
[20] Alkin, C. (2004) Solid Modeling of Overhead Crane Bridges and Analysis with the Finite Element Method. MSc Thesis, Istanbul Technical University, Turkey.
[21] Scheffer, M., Feyrer, K., Matthias, K. (1998) Fördermaschinen: Hebezeuge, Aufzüge, Flurförderzeuge. Vieweg & Sohn, Wiesbaden.
[22] Kogan, J. (1976) Crane Design: Theory and Calculations of Reliability. John Wiley & Sons, New York.
[23] Errichello, R. (1983) Gear Bending Stress Analysis. ASME Journal of Mechanical Design, 105, 283-284.
[24] Moaveni, S. (1999) Finite Element Analysis: Theory and Application with ANSYS. Prentice-Hall, New Jersey.
[25] Verschoof, J. (2000) Cranes: Design, Practice and Maintenance. Professional Engineering Publishing, London.
Chapter 5 New Mechanism and Device Design and Analysis
Symmetric Toggle-lever-toggle 3-stage Force Amplifying Mechanism and Its Applications ... 621
Dongning Su, Kangmin Zhong, Guoping Li
Kinematics and Statics Analysis for Power Flow Planet Gear Trains ... 631
Zhonghong Bu, Geng Liu, Liyan Wu, Zengmin Liu
Green Clamping Devices Based on Pneumatic-mechanical Compound Transmission Systems Instead of Hydraulic Transmission Systems ... 641
Guang-ju Si, Ming-di Wang, Kang-min Zhong, Dong-ning Su
Rapid Registration for 3D Data with Overlapping Range Based on Human Computer Interaction ... 651
Jun-yi Lin, Kai-yong Jiang, Bin Liu, Chang-biao Huang
A New FE Modelling Approach to Spot Welding Joints of Automotive Panels and Modal Characteristics ... 661
Jiqing Chen, Yunjiao Zhou and Fengchong Lan
Precision Measurement and Reverse Motion Design of the Follower for Spatial Cams ... 671
Zhenghao Ge, Jingyang Li, Feng Xu, Xiaowei Han
Static Analysis of Translational 3-UPU Parallel Mechanism Based on Principle of Virtual Work ... 681
Xiangzhou Zheng, Zhiyong Deng, Yougao Luo, Hongzan Bin
A Natural Frequency Variable Magnetic Dynamic Absorber ... 691
Chengjun Bai, Fangzhen Song
Symmetric Toggle-lever-toggle 3-stage Force Amplifying Mechanism and Its Applications
Dongning Su 1, Kangmin Zhong 2, Guoping Li 1
1 University of Jinan
2 University of Suzhou
Abstract: This paper describes a symmetric toggle-lever-toggle 3-stage force amplifier and its applications. Its performance characteristics are analyzed, the formulae for the theoretical and actual force amplification coefficients are given, and the motion plots of the input and output during operation are derived. The effects of various source forces and equipment layouts are also briefly studied. The mechanism provides force input and output in the same direction; its force amplification is considerable, its transmission efficiency is high, and its structure is compact.
Keywords: toggle-lever-toggle; 3-stage force amplifier; force amplification coefficient; movement plot; layout
1. Introduction
Force amplifiers are widely used in mechanical engineering. With a force amplifier the output force can be amplified while the volume of the system is reduced, for example in fixtures, small hand tools, and large punching equipment. A compact force amplifier with a high amplification ratio is therefore very useful. Cascading various force-amplifying elements yields equipment with multi-stage force amplification. There has been considerable research on 1-stage and 2-stage force amplification through toggles and levers [1],[2],[3],[4], but little on 3-stage amplification. In this paper an innovative symmetric 3-stage force amplifier, a toggle-lever-toggle mechanism, is presented. It takes advantage of the toggle's low friction and high transmission efficiency, and its structure is compact. In the remainder of this paper we introduce the principles of this force-amplifying mechanism, calculate its theoretical and actual force amplification coefficients, and give the motion plots of the input and output during its operation.
2. Principles
This equipment is a 3-stage symmetric force amplifier consisting of toggles and levers; see Figure 1 for the structure. In Figure 1, component 1 is the input; components 2, 3, and 4 are the cascaded intermediate toggles and lever; component 5 is the output. The force transmission and amplification process is as follows: Fi (the input force) is imposed on component 1 and amplified three times, through components 2, 3, and 4 in turn; the amplified force Fo is output through component 5. Structurally, the components are distributed symmetrically about the vertical central axis. For this reason there are no horizontal forces on the input or output components, so theoretically there is no friction between the input/output members and their corresponding holes or guides. Compared with asymmetric multi-stage mechanical transmission mechanisms [5],[6], the friction in this equipment is therefore reduced and the force transmission efficiency is enhanced.

Figure 1. Symmetric toggle-lever-toggle 3-stage force amplifying mechanism (Fi: input force; Fo: output force)
3. Mechanics Calculation
3.1 The Theoretical Force Amplification Coefficient and Theoretical Output Force

The force amplification coefficient is the ratio of the output force to the input force. If friction is not taken into account, the coefficient and the output force take their theoretical values. Establishing the mechanical model for Figure 1 and calculating, we obtain the theoretical amplification coefficient

\[ i_t = \frac{l_{31}}{l_{32}} \cdot \frac{1}{\tan\beta\,\tan\alpha} \tag{1} \]

and the theoretical output force

\[ F_{ot} = \frac{l_{31}}{l_{32}} \cdot \frac{F_i}{\tan\beta\,\tan\alpha} \tag{2} \]

where α is the final angle between component 2 and the horizontal direction when the output is at the end of its travel; β is the final angle between component 4 and the vertical direction at the end of travel; F_i is the input force; and l_31, l_32 are the lengths of the upper and lower parts of component 3 (see Figure 1). Formula 1 shows that the theoretical force amplification coefficient depends on l_31/l_32 and on the angles α and β: the larger l_31/l_32 and the smaller α and β, the better the force amplification. By appropriate adjustment of these parameters an ideal amplification can be obtained. Note that in actual engineering, owing to manufacturing precision, α and β cannot be made very small; generally we take α_min = 3°~5°.
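Plugging representative numbers into Formula 1 shows how sharply the coefficient grows as the final angles close. The values below (equal lever arms, α = β) are illustrative choices, not from the paper:

```python
import math

def i_t(l31, l32, alpha_deg, beta_deg):
    """Theoretical force amplification coefficient (Formula 1)."""
    a, b = math.radians(alpha_deg), math.radians(beta_deg)
    return (l31 / l32) / (math.tan(a) * math.tan(b))

# Equal lever arms; sweep the final toggle angles.
for ang in (10, 8, 6, 4):
    print(f"alpha = beta = {ang} deg -> i_t = {i_t(200, 200, ang, ang):.1f}")
```

At α = β = 6°, the travel-end angles used later in Section 4, the theoretical amplification is already around 90.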
3.2 The Actual Force Amplification Coefficient i_p and the Actual Output Force F_op

In real-world applications the friction of the toggles and levers must be taken into account to calculate the actual force amplification coefficient and the actual output force, by either of the following two methods.

Method 1. By modeling and analysis. Modeling Figure 1 and calculating gives the actual force amplification coefficient:

\[ i_p = \frac{R_{43}\cos(\beta+\varphi_2)}{R_{23}\sin(\beta+\varphi_1)} \tag{3} \]
where φ1, φ2 are the equivalent friction angles of the hinge joints of components 2 and 4, given as follows [7]:
\[ \varphi_1 = \arcsin\!\Bigl(\frac{2r}{l_2}\,f\Bigr), \qquad \varphi_2 = \arcsin\!\Bigl(\frac{2r}{l_4}\,f\Bigr) \]

where r is the radius of the axle neck of the toggle; l_2, l_4 are the distances between the two holes on toggle 2 and toggle 4 respectively; f is the friction coefficient of the hinge joint; R_23 is the full counter-force of component 2 on component 3; and R_43 is the full counter-force of component 4 on component 3. When α = β and l_2 = l_4, the ratio R_43/R_23 is determined by the following formula:

\[ R_{23}\,[\,l_{31}\cos(\alpha+\varphi_1) - \rho\,] = R_{43}\,[\,l_{32}\sin(\beta+\varphi_2) - \rho\,] \tag{4} \]

where ρ is the radius of the friction circle, ρ = f_0 · r, with f_0 = f/√(1+f²). When the parameters have been selected, Formula 4 gives R_43/R_23; Formula 3 then gives the actual force amplification coefficient i_p, and the corresponding real output force is F_op = i_p · F_i.
Formula 3 shows that the mechanism's losses result mainly from friction at the hinge joints, and the loss depends on the materials, the axle-neck radius, and the toggle length: the smaller the axle-neck radius and the longer the toggle, the smaller the friction angle φ and hence the smaller the loss. By appropriate selection of these parameters the force amplification coefficient can be adjusted to some extent.

Method 2. By empirical constant. Taking friction into account, the first-stage (component 2) and third-stage (component 4) force amplification coefficients are [7]:

\[ i_{p1} = \frac{1}{\tan(\alpha+\varphi_1)}, \qquad i_{p3} = \frac{1}{\tan(\beta+\varphi_2)} \]

and the second-stage coefficient through component 3 is

\[ i_{p2} = K\,\frac{l_{31}}{l_{32}} \]

where K = 0.97 [8] is the force transmission efficiency of lever 3. Thus, considering the friction loss, the actual force amplification coefficient is
\[ i_p = i_{p1}\, i_{p2}\, i_{p3} = 0.97\,\frac{l_{31}}{l_{32}} \cdot \frac{1}{\tan(\alpha+\varphi_1)\,\tan(\beta+\varphi_2)} \tag{5} \]

where the parameters are as defined above.
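A quick numerical reading of Formula 5 (Method 2) shows the size of the friction loss; the pin radius, link lengths, and friction coefficient below are assumed values chosen only for illustration.

```python
import math

def i_p(l31, l32, alpha_deg, beta_deg, r, l2, l4, f, K=0.97):
    """Actual amplification coefficient per Formula 5 (Method 2)."""
    phi1 = math.asin(2 * r * f / l2)   # equivalent friction angle, toggle 2
    phi2 = math.asin(2 * r * f / l4)   # equivalent friction angle, toggle 4
    a = math.radians(alpha_deg) + phi1
    b = math.radians(beta_deg) + phi2
    return K * (l31 / l32) / (math.tan(a) * math.tan(b))

# Assumed values: 10 mm pin radius, 120 mm toggles, f = 0.1 at the hinges.
ideal = (200 / 200) / math.tan(math.radians(6)) ** 2
real = i_p(200, 200, 6, 6, r=10, l2=120, l4=120, f=0.1)
print(f"i_t = {ideal:.1f}, i_p = {real:.1f}")
```

Even a modest hinge friction coefficient cuts the amplification substantially at these small toggle angles, which is why the text stresses small pin radii and long toggles.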
4. Moving Relationship
Let the input component 1 move at a constant speed; its displacement is h and the displacement of the output component 5 is s. During the travel, the angle α moves from 45° to the tightening angle 6° (travel end), at which point component 3 is exactly vertical and β is 6°. The relationship between the displacements h and s is determined by the following system of equations:
\[
\left\{
\begin{aligned}
h &= l_2\sin\alpha_0 - l_2\sin\alpha + 2\,l_{31}\sin\frac{\theta}{2}\,\sin\Bigl(\frac{\theta_{\max}}{2}-\frac{\theta}{2}\Bigr)\\
s &= l_4\cos\beta - l_4\cos\beta_0 + 2\,l_{32}\sin\frac{\theta}{2}\,\cos\Bigl(\frac{\theta_{\max}}{2}-\frac{\theta}{2}\Bigr)\\
l_2\cos\alpha &= l_2\cos\alpha_0 - 2\,l_{31}\sin\frac{\theta}{2}\,\cos\Bigl(\frac{\theta_{\max}}{2}-\frac{\theta}{2}\Bigr)\\
l_4\sin\beta &= l_4\sin\beta_0 - 2\,l_{32}\sin\frac{\theta}{2}\,\cos\Bigl(\frac{\theta_{\max}}{2}-\frac{\theta}{2}\Bigr)\\
l_2\cos\alpha_e &= l_2\cos\alpha_0 - 2\,l_{31}\sin\frac{\theta_{\max}}{2}\,\cos\frac{\theta_{\max}}{2}\\
l_4\sin\beta_e &= l_4\sin\beta_0 - 2\,l_{32}\sin\frac{\theta_{\max}}{2}\,\cos\frac{\theta_{\max}}{2}
\end{aligned}
\right.
\tag{6}
\]
where α_0, α, α_e are the angles between component 2 and the horizontal at the beginning of travel, at an arbitrary position, and at the end of travel respectively (we take α_0 = 45°, α_e = 6°); β_0, β, β_e are the angles between component 4 and the vertical at the corresponding positions (we take β_e = 6°).

Figure 2. The relation of s-h
D. Su, K. Zhong and G. Li
$\theta$ is the angle between an arbitrary position of lever 3 and its beginning position; $\theta_{\max}$ is the angle between the beginning and end positions of lever 3. The meanings of the other parameters are as discussed above. We take $l_{31} = l_{32} = 200\,\mathrm{mm}$ and $l_2 = l_4 = 120\,\mathrm{mm}$. Through numerical computation we obtain the s-h plot, see Figure 2; Figure 3 is the s-t plot. The plots show that, as angle $\alpha$ moves within $45° \sim 6°$, the displacement of input component 1 is 69.3 mm while the displacement of output component 5 is only 4.6 mm.
Figure 3. The relation of s-t
Figure 4. The relation of v-t
If the source force is hydraulic, we take the input speed as 6 m/min, i.e. 100 mm/s; it then takes only 0.69 s to complete the output travel. Differentiating the s-t curve gives the velocity-time (v-t) plot, see Figure 4; the acceleration-time (a-t) plot is shown in Figure 5. From the plots we can see that the speed of output component 5 is highest at the beginning, at 25.11 mm/s, and decreases as time goes by. When t is
Figure 5. The Relation of a-t
around 0.6 s, the speed of the output component reaches its lowest value, 0.85 mm/s. Then, before the end of the travel, the speed rises a little, reaching 0.98 mm/s. In the earlier part of the travel the speed falls more sharply than in the later part. Turning to the acceleration curve: the acceleration is negative, meaning that the speed decreases during the operation. When the travel begins the acceleration is -139.8 mm/s², and during the travel of the force amplifier it changes from -139.8 mm/s² to -6.6 mm/s². Near the end of the travel the acceleration of the output component becomes positive, which is why the speed of component 5 rises slightly before the travel ends. From the analysis above, the movement trend is: at the beginning the output speed drops sharply, then gently, and then rises a little. To sum up, the speed and acceleration of this mechanism are both small; its movement is stable, and it is suitable for applications requiring low speed, short travel and large output force.
5. Source Force and Equipment Layout

5.1 Source Force
For the force amplification equipment shown in Figure 6, the source force can be mechanical, electrical, hydraulic or pneumatic. Because hydraulic drives operate smoothly and are compact for a given power, hydraulic power is widely used in real-world engineering applications. From the above discussion we can see the device's considerable force amplification effect: for the same output requirement, the equipment can be much smaller than other equipment and the input force can be small. Since only a small input force is needed, in some situations the source force can be pneumatic, an easy-to-use and environmentally friendly power source, which fits the development trend of modern technologies. Besides fluid power, mechanical or electrical power is also available as the source of the input force.
Figure 6. Different equipment layouts
5.2 Equipment Layout
There are different layout methods for the equipment. Figure 6 shows a couple of layouts when the power source is hydraulic. In Figure 6a the input component is on the top, which is obviously simple. However, the cascade arrangement of the input, force-amplifying and output components makes the equipment too long in the vertical direction, so the space occupied is large. To avoid this problem we can apply the layout of Figure 6b, in which the cylinder is located to the internal left of the
force amplifier; space is used efficiently and the dimensions are reduced, making the equipment compact while the performance is retained. In short, the power source of the equipment can be varied, and the layout can be chosen to suit the application. The many layout styles broaden the application of this equipment.
6. A Calculation Example and the Application in Advanced Manufacturing

6.1 A Calculation Example
For this equipment we take $\alpha = \beta = 6°$, $l_{31} = 2 l_{32} = 200\,\mathrm{mm}$, $l_2 = l_4 = 120\,\mathrm{mm}$, $r = 5\,\mathrm{mm}$, $f = 0.1$. According to Formula 1 the theoretical force amplification coefficient is $i_0 = 181.046$; according to Formula 5 the actual coefficient is $i_p = 150.49$. The force transmission efficiency is then

$$\eta = \frac{i_p}{i_0} = 83.12\%$$

This means that when an input force $F_i$ acts on component 1, almost $151\,F_i$ is output vertically on component 5. For the layout in Figure 6, if we use hydraulic power as the source and assume an input oil pressure of $p = 7\,\mathrm{MPa}$, a cylinder diameter of $d = 100\,\mathrm{mm}$ and a cylinder transmission efficiency of 0.85, the output force is 7029 kN. If we use compressed air as the source, with $p = 0.6\,\mathrm{MPa}$ and a cylinder transmission efficiency of 0.7, the output force is 496.17 kN.

6.2 The Application in Advanced Manufacturing
There is no doubt that the equipment has a considerable force amplification effect and a high force transmission efficiency. In real-world engineering it is of special significance for low-input, high-output applications. In the modern manufacturing industry, the device shown in Figure 6 will have numerous applications in the following two fields:
1. In machine fixtures, substituting pneumatic clamping for hydraulic fixtures will greatly reduce environmental pollution, since the former's transmission medium is clean compressed air while the latter's is mineral oil.
2. In press machines, a pneumatic device can advantageously take the place of hydraulic transmission equipment, which easily causes environmental pollution, and of mechanically driven press machines, which easily cause serious noise pollution.
In addition, the mechanism shown in Figure 1 can be useful in many applications that need force amplification, such as riveters, hydraulic punches, pneumatic or hydraulic superchargers, shrink-fitting machines and so on.
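The output-force figures in the calculation example of Section 6.1 can be reproduced with a short script. This is only a sketch: the function name and unit handling are ours, and the amplification coefficient $i_p = 150.49$ is taken from the text rather than re-derived from the friction parameters.

```python
import math

i_p = 150.49     # actual force amplification coefficient, Formula 5 (given)
i_0 = 181.046    # theoretical coefficient, Formula 1 (given)
eta = i_p / i_0  # force transmission efficiency, about 83.12%

def output_force_kN(p_Pa, bore_m, cylinder_eff, amplification):
    """Piston force p*pi*d^2/4, reduced by the cylinder transmission
    efficiency, then multiplied by the mechanism's amplification."""
    piston = p_Pa * math.pi * bore_m ** 2 / 4.0
    return piston * cylinder_eff * amplification / 1000.0

hydraulic_kN = output_force_kN(7.0e6, 0.100, 0.85, i_p)   # paper: 7029 kN
pneumatic_kN = output_force_kN(0.6e6, 0.100, 0.70, i_p)   # paper: 496.17 kN
```

The small differences from the printed 7029 kN and 496.17 kN come only from rounding of $i_p$.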
7. Conclusion
“Innovation is the combination of previous inventions”, said the father of the transistor. In this sense, either a new product or a change to an existing product is an invention, and a new innovation is to some extent a combination of previous inventions. The toggle-lever-toggle three-stage force amplification equipment is undoubtedly an innovative design: it combines previous ideas to produce a high-performance force amplifier with a large amplification coefficient and high efficiency, as well as a compact, simple and symmetric structure. We believe it will be very useful in real-world engineering applications.
8. References
[1] Lulin, Qian Zhiliang (2006) Two-step force amplifier using toggle mechanism driven by pneumatic muscle. Chinese Hydraulics & Pneumatics (2): 51-52
[2] Wang Mingti, Zhong Kangmin, Zuo Dunwen (2005) Unit equipment of pneumatic muscle and lever-toggle force amplifier. New Technology & New Process (6): 26-27
[3] Zhang Yang, Zhong Kangmin, Chen Zailiang (2005) A new kind of hydraulic-mechanical combination driving device based on lever-toggle force-amplifier. Modern Machinery (6): 27-28
[4] Chen Zhong, Su Guisheng (2005) Hydraulic clamping devices composed of rod-less piston cylinder and toggle-lever. Mechanical Engineering & Automation (6): 76-77, 80
[5] Zhong Kangmin, Guo Peiquan (1999) Orthogonal reinforcement mechanism and hydraulic drive. In: Proceedings of the tenth world congress on the theory of machines and mechanisms, Oulu, Finland, Oulu University Press, 5: 2037-2042
[6] Zhong Kangmin, Song Qiang, Guo Peiquan, Hu Bingcheng (2003) The principle and design of a toggle force amplification centrifugal clutch. Manufacturing Technology & Machine Tool (3): 13-15
[7] Lin Wenhuan, Chen Bentong (1987) Fixture Design in Machine Tools. Beijing: Defense Industry Press
[8] Zhong Kangmin, Guo Peiquan, Hu Bingcheng (2000) Orthogonal force amplification centrifugal clutch. Chinese Journal of Mechanical Engineering 36(4): 38-40, 44
Kinematics and Statics Analysis for Power Flow Planet Gear Trains Zhonghong Bu, Geng Liu, Liyan Wu, Zengmin Liu School of Mechatronic Engineering, Northwestern Polytechnical University, Xi’an, 710072, PR China
Abstract: In order to analyze kinematics and static load problems effectively, a new method for solving the kinematics and static load parameters is proposed. It starts from the basic kinematics equations of planet gear transmissions and is based on dividing the system into fundamental kinematics units. The angular velocity and torque equations, considering efficiency, for power flow planet gear trains are developed and expressed in matrix form. The tangential mesh forces of the gear pairs, the bearing force applied by the planets on the carrier, and the system efficiency are solved simultaneously. A typical power flow planet gear system is demonstrated to highlight the capabilities of the proposed formulation, and the influence of efficiency on the power flow is analyzed.

Keywords: planet gear trains, power flow, kinematics, statics, matrix formulation
1. Introduction
Planetary gear trains are widely used in machinery transmissions due to their small volume and weight, high torque-to-weight ratio, high power density, high efficiency and compactness. Design and analysis of planetary gears always start with the calculation of speed ratios, efficiency and torque values, so a convenient approach that determines the kinematics and statics parameters quickly and correctly is very desirable. A number of published studies on planetary gears have focused on kinematics and statics analysis [1-7]. In [1-3] the results were obtained step by step from the basic equations with the conventional method. Graph theory was used by Lin [4] to obtain the speed ratios and efficiency and to discuss the relationship between efficiency and self-locking. New methods, such as matrix formulations [5, 6] and genetic algorithms [7], appeared along with the development of computer technology: to obtain all the results at once, the whole calculation is cast as a matrix equation and solved. A matrix formulation was also used to search for all possible kinematic configurations in Kahraman's research [6]. However, in these studies the process of forming the matrix was not made clear, only simple planetary gears were discussed, and research on encased planetary gears with power flow has rarely been published.
In this paper a modified matrix formulation is successfully applied to the kinematics and statics analysis of power flow planetary gears. An example is given to confirm the advantages of this method.
2. Fundamental Structure of the System and Its Division
The typical transmission structure of power flow planetary gears is shown in Fig. 1. The gear set is formed by a differential planetary stage (sun gear Z1, planet gear Z2, ring gear Z3 and carrier H1) and an encased star stage (sun gear Z4, planet gear Z5, ring gear Z6 and carrier H2). The input power Pin is transmitted to the output Pout through the paths P1 and P2.
Figure 1. The power flow planetary gears (encased and differential stages)
The gear set shown in Fig. 1 can be divided into two branches, and each branch into two fundamental kinematics units: an external mesh and an internal mesh, shown in Fig. 2.
Figure 2. Fundamental kinematics units: (a) an external mesh and (b) an internal mesh
In Fig. 2, the symbols p, s, r and H denote the planet gear, the sun gear, the ring gear and the carrier respectively.
3. Calculation of the Kinematics and Static Load Parameters

3.1 Angular Velocity Equation
The speed ratio of the ordinary (fixed-carrier) gear train of each fundamental kinematics unit is taken as the basic parameter. For each fundamental unit it is

$$i_{ad}^{H} = \frac{\omega_a - \omega_H}{\omega_d - \omega_H} = \pm\frac{z_d}{z_a}$$

The angular velocity equation for each unit is then

$$\omega_a - i_{ad}^{H}\,\omega_d - (1 - i_{ad}^{H})\,\omega_H = 0 \qquad (1)$$
where $\omega$ is the absolute angular velocity of a component and $z$ is the number of teeth of a gear. Subscripts a and d denote the driving and driven gear respectively. The sign "+" applies to an internal mesh and "-" to an external mesh. The angular velocity equations for each branch can be established from these unit equations and the motion restriction relationships. Since the sun gear is the input component, the expression for the differential branch is:
$$\begin{cases} \omega_1 - i_{12}^{H1}\,\omega_2 - (1 - i_{12}^{H1})\,\omega_{H1} = 0 \\ \omega_2 - i_{23}^{H1}\,\omega_3 - (1 - i_{23}^{H1})\,\omega_{H1} = 0 \\ \omega_1 = \omega_{in} \end{cases} \qquad (2)$$
The carrier is fixed in the encased branch, so the equations are:

$$\begin{cases} \omega_4 - i_{45}^{H2}\,\omega_5 - (1 - i_{45}^{H2})\,\omega_{H2} = 0 \\ \omega_5 - i_{56}^{H2}\,\omega_6 - (1 - i_{56}^{H2})\,\omega_{H2} = 0 \\ \omega_{H2} = 0 \end{cases} \qquad (3)$$

where
$i_{mn}^{Hj}$ represents the fixed-carrier speed ratio of the fundamental kinematics unit consisting of gears m, n and carrier j in the jth branch; $\omega_j$ is the absolute angular velocity of each component and $\omega_{in}$ is the input rotation speed of the system. The serial numbers of the gears follow the symbols in Fig. 1. Because the input rotation of the encased branch originates from the ring of the differential branch, and the carrier of the differential branch and the ring of the encased branch are both connected to the output shaft, the following expressions hold:
$$\omega_4 = \omega_3, \qquad \omega_{H1} = \omega_6 \qquad (4)$$
Equations (2), (3) and (4) are combined into the matrix formulation

$$[M_\omega]\{\omega\} = \{\omega_0\} \qquad (5)$$
where $\{\omega\} = [\omega_1, \omega_2, \omega_3, \omega_{H1}, \omega_4, \omega_5, \omega_6, \omega_{H2}]^T$, $\{\omega_0\} = [0, 0, \omega_{in}, 0, 0, 0, 0, 0]^T$ and

$$[M_\omega] = \begin{bmatrix} 1 & -i_{12}^{H1} & 0 & -(1-i_{12}^{H1}) & 0 & 0 & 0 & 0 \\ 0 & 1 & -i_{23}^{H1} & -(1-i_{23}^{H1}) & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & -i_{45}^{H2} & 0 & -(1-i_{45}^{H2}) \\ 0 & 0 & 0 & 0 & 0 & 1 & -i_{56}^{H2} & -(1-i_{56}^{H2}) \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & -1 & 0 \end{bmatrix}$$
The angular velocities of all components of the power flow planetary gears can be obtained simultaneously by solving equation (5).

3.2 Torque Equation
Two sets of parameters must be determined to describe the static force state: the torque values, and the forces acting on the gears and planet bearings. First a formulation for calculating the torque values is proposed in this section; using the torque values to obtain the gear mesh and planet bearing forces is discussed in the next section. Assuming no efficiency loss, the sums of the torques and of the powers of the components of every unit must be zero when the system is in static equilibrium:

$$T_a + T_d + T_H = 0 \qquad (6)$$

$$T_a\,\omega_a + T_d\,\omega_d + T_H\,\omega_H = 0 \qquad (7)$$
Inserting (1) and (6) into (7) yields

$$\begin{cases} T_d + i_{ad}^{H}\,T_a = 0 \\ T_H + (1 - i_{ad}^{H})\,T_a = 0 \end{cases} \qquad (8)$$
The subscripts in (6), (7) and (8) denote the same things as in equation (1). The torque equations for each branch can be established from equation (8) and the torque restriction relationships. The expression for the differential branch is

$$\begin{cases} T_1 = T_{in} \\ i_{12}^{H1}\,T_1 + T_2 = 0 \\ i_{23}^{H1}\,T_2 + T_3 = 0 \\ (1 - i_{12}^{H1})\,T_1 + (1 - i_{23}^{H1})\,T_2 + T_{H1} = 0 \end{cases} \qquad (9)$$
The torque equations for the encased branch are

$$\begin{cases} T_4 = T_3 \\ i_{45}^{H2}\,T_4 + T_5 = 0 \\ i_{56}^{H2}\,T_5 + T_6 = 0 \\ (1 - i_{45}^{H2})\,T_4 + (1 - i_{56}^{H2})\,T_5 + T_{H2} = 0 \end{cases} \qquad (10)$$
where $T_j$ ($j = 1, 3, 4, 6$) is the torque acting on each central gear and $T_j$ ($j = 2, 5$) is the torque acting on the planet gear of the external mesh; $T_{H1}$ and $T_{H2}$ represent the torques applied on the carriers of the differential and encased branches respectively. In fact, the external torques on the planet gears are zero. The speed ratios $i_{mn}^{Hj}$ and the serial numbers of the gears denote the same things as in (2) and (3). The matrix formulation for (9) and (10) is:
$$[M_T]\{T\} = \{T_0\} \qquad (11)$$

where $\{T\} = [T_1, T_2, T_3, T_{H1}, T_4, T_5, T_6, T_{H2}]^T$, $\{T_0\} = [T_{in}, 0, 0, 0, 0, 0, 0, 0]^T$ and

$$[M_T] = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ i_{12}^{H1} & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & i_{23}^{H1} & 1 & 0 & 0 & 0 & 0 & 0 \\ 1-i_{12}^{H1} & 1-i_{23}^{H1} & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & i_{45}^{H2} & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & i_{56}^{H2} & 1 & 0 \\ 0 & 0 & 0 & 0 & 1-i_{45}^{H2} & 1-i_{56}^{H2} & 0 & 1 \end{bmatrix}$$

The torque applied on each component can be obtained simultaneously by solving equation (11).

3.3 The System Efficiency Calculation
When gear mesh losses are considered, the angular velocity of each component is unchanged but the torque values vary. The equivalent speed ratio

$$i'_{ab} = \eta_{ab}\, i_{ab}$$

is defined, and to obtain the torque values with efficiency taken into account, the equivalent speed ratios are substituted into equation (11).
The efficiency of the system is obtained from

$$\eta_s = \frac{T'_{out1}\,\omega_{H1} + T'_{out2}\,\omega_6}{T_{out1}\,\omega_{H1} + T_{out2}\,\omega_6} \qquad (12)$$

where the output component of the differential branch is the carrier, so $T_{out1} = T_{H1}$ and $T'_{out1} = T'_{H1}$; the output component of the encased branch is the ring, so $T_{out2} = T_6$ and $T'_{out2} = T'_6$.

3.4 Forces Acting on Gears and Planet Bearings
After obtaining the external torques applied on each component, the gear mesh and bearing forces of each stage can be calculated from static equilibrium. According to the relationship between torque and force, using the free body diagram in Fig. 3,

$$F_{sp} = \frac{T_s}{n_p r_s}, \qquad F_{pr} = \frac{T_r}{n_p r_r}, \qquad F_{pH} = \frac{T_H}{n_p r_H}$$

where $F_{sp}$ is the tangential mesh force between the sun and a planet gear, $F_{pr}$ is the force between the ring and a planet gear, and $F_{pH}$ is the planet bearing force applied on the carrier by a planet; $n_p$ is the number of planets. Here $r_s$ and $r_r$ are the pitch radii of s and r, defined from the gear center to the pitch point as shown in Fig. 3, and $r_H$ is defined from the rotational center to the center of a planet.
Figure 3. Free body diagram of the gears forming a planet gear set
Applying these relations to the power flow planet gears, the gear tangential mesh and planet bearing forces are obtained from

$$F_{spi} = \frac{T_j}{n_{pi}\,r_j}\ (j = 1, 4), \qquad F_{pri} = \frac{T_j}{n_{pi}\,r_j}\ (j = 3, 6), \qquad F_{pHi} = \frac{T_{Hi}}{n_{pi}\,r_{Hi}}\ (i = 1, 2) \qquad (13)$$
where $F_{spi}$, $F_{pri}$ and $F_{pHi}$ represent the tangential mesh force between sun and planet gear, the force between planet and ring, and the planet bearing force in the ith branch respectively; $r_j$ is
the pitch radius of each gear and rHi is defined from the rotational center to the center of a planet in the ith branch.
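As a rough check of equation (13) against the example data, the sketch below uses the Table 1 module and tooth numbers with simple pitch radii $r = mz/2$ (the helix angle is ignored here, so the results should only agree with Table 3 to within a couple of percent; the variable names are our own).

```python
m = 0.012                    # module, metres (12 mm, Table 1)
z1, z2, z3 = 42, 68, 177     # tooth numbers of the differential branch
n_p1 = 3                     # number of planets in the differential branch
T1, T3, TH1 = 110.31, 464.89, 575.21  # torque magnitudes, kN*m (Table 2)

r1 = m * z1 / 2              # sun pitch radius
r3 = m * z3 / 2              # ring pitch radius
rH1 = m * (z1 + z2) / 2      # carrier radius: sun + planet pitch radii

Fsp1 = T1 / (n_p1 * r1)      # sun-planet tangential force per planet, kN
Fpr1 = T3 / (n_p1 * r3)      # ring-planet tangential force per planet, kN
FpH1 = TH1 / (n_p1 * rH1)    # planet bearing force on the carrier, kN
```

These come out at roughly 146 kN, 146 kN and 290 kN, within about 1% of the 147.04, 147.04 and 292.75 listed in Table 3; the sun-planet and ring-planet forces are equal, as the equilibrium of the idler planet requires.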
4. Example Analysis

4.1 The Result of Kinematics Parameters and Load
All the parameters of the example system are shown in Table 1. Inserting the speed ratios and the input rotating speed into equation (5), the rotating speed of each component is obtained at once by solving it. All the torque values (including the results with efficiency considered) are obtained by inserting the speed ratios or equivalent speed ratios into equation (11). Equations (5) and (11) are solved by Gaussian elimination. With the rotating speeds and torques known, the system efficiency follows from equation (12). All these results are shown in Table 2. The pitch radius of each gear follows easily from the gear parameters in Table 1, and the gear tangential mesh and planet bearing forces of the system, shown in Table 3, are obtained from equation (13); the symbols in Table 3 denote the same things as in equation (13).

4.2 Influence of Efficiency on Power Flow
The power percentages shown in Table 4 for each branch are obtained from the foregoing results; P1 and P2 are the power flow paths shown in Fig. 1. The results show that the mesh efficiency influences the power flow: as the meshing efficiency decreases, P1 increases and P2 decreases.

Table 1. Parameters of the example system

Gear number:       1       2       3       4       5       6
Teeth number:      42      68      177     76      58      189
Speed ratio:       i12^H1 = -68/42,  i23^H1 = 177/68,  i45^H2 = -58/76,  i56^H2 = 189/58
Mesh efficiency:   0.99 for every mesh
Modulus: 12 mm     Helical angle: 20°
Number of planets: n1 = 3 (differential), n2 = 5 (encased)
Input power: 36775 kW     Input rotation speed: 3150 r/min
Input torque: T1 = 9449 P1/n1 = 110.31 kN·m
Table 2. Angular velocity and torque of each component, and efficiency

Component   Rotating speed (r/min)   Torque (kN·m)   Torque considering efficiency (kN·m)
1           3150                     110.31          110.31
2           -1620.9                  0               0
3           -499.13                  464.89          455.64
H1          200.71                   -575.21         -565.95
4           -499.13                  464.89          455.64
5           654.03                   0               0
6           200.71                   1156.1          1110.6
H2          0                        -1621           -1566.2

Equivalent speed ratios: i'12 = η12 i12, i'23 = η23 i23, i'45 = η45 i45, i'56 = η56 i56.
Efficiency: differential branch 0.9814; encased branch 0.9801; whole system 0.97083.

Table 3. Load distribution of the system (tangential mesh loads and planet bearing forces, kN)

Fsp1 = 147.04   Fpr1 = 147.04   FpH1 = 292.75   Fsp2 = 205.47   Fpr2 = 205.47   FpH2 = 406.35
Table 4. The power flow with different efficiencies

                          P1       P2
Without efficiency        33.2%    66.8%
Efficiency 0.98           33.4%    66.6%
Efficiency 0.95           33.7%    66.3%
Using the conventional method without considering mesh efficiency, the power percentages of the two branches are 66.8% and 33.2%, the same as the results of the proposed method, which confirms that the method is accurate and effective.
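The torque column of Table 2 and the no-efficiency split of Table 4 follow from the tooth numbers and the input torque alone, which gives a quick independent confirmation of the check described above. In this sketch we identify P1 with the differential-carrier share simply because that matches the 33.2%/66.8% figures; the labelling of the paths in Fig. 1 is an assumption on our part.

```python
z1, z3, z4, z6 = 42, 177, 76, 189   # tooth numbers (Table 1)
T1 = 110.31                         # input torque, kN*m (Table 1)

T3 = (z3 / z1) * T1                 # ring torque of the differential branch
TH1 = -(T1 + T3)                    # carrier torque closes the balance, Eq. (6)
T4 = T3                             # ring 3 feeds sun 4 of the encased branch
T6 = (z6 / z4) * T4                 # encased ring torque
TH2 = -(T4 + T6)                    # reaction on the fixed carrier H2

# Both outputs rotate together (wH1 = w6), so the power split follows the
# output torque magnitudes directly.
P1 = abs(TH1) / (abs(TH1) + T6)     # share through the differential carrier
P2 = 1.0 - P1                       # share through the encased ring
```

This reproduces T3 = 464.9, T6 = 1156.1 and TH2 = -1621 kN·m from Table 2, and the 33.2%/66.8% split of Table 4.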
5. Conclusions
A modified matrix formulation for kinematics and statics analysis was proposed in this study. Compared with the conventional matrix formulation [5, 6], the proposed method combines the advantages of graph theory [4] and the matrix formulation, making the processes of forming the matrix and of programming clear and easy. The rotating speed and torque of each component, the tangential mesh forces, the bearing forces, and the efficiency of every branch and of the whole system can all be evaluated simultaneously with this method.
The typical example shows that determining the kinematics and static force parameters with this method is faster and more convenient than with the conventional method. The results of this paper are very useful for system design and strength analysis.
6. Acknowledgements
The research was supported by NPU Foundation for Fundamental Research (NPUFFR-20060500W018101).
7. References
[1] Kudryavtsev V.N. (1985) Planetary gear trains handbook. Chen Qisong, Zhang Zhan, translators. Beijing: Metallurgical Industry Press, 1986
[2] Pennestri E, Freudenstein F (1993) The mechanical efficiency of epicyclic gear trains. ASME Trans. Journal of Mechanical Design 115(3): 645-651
[3] Jose M. del Castillo (2002) The analytical expression of the efficiency of planetary gear trains. Mechanism and Machine Theory 37(2): 197-214
[4] Lin Jiande, Chen Xiaoan (2004) Simplified approach for the determination of the mechanical efficiency in gear trains. Chinese Journal of Mechanical Engineering 40(9): 33-37
[5] Kahraman A, Ligata H, Kienzle K, et al. (2004) A kinematics and power flow analysis methodology for automatic transmission planetary gear trains. ASME Trans. Journal of Mechanical Design 126(11): 1071-1081
[6] Hu Qingchun, Duan Fuhai, Mo Haijun (2006) Kinematics analysis and efficiency calculation for complex planetary gear trains. China Mechanical Engineering 17(20): 2125-2129
[7] Rao A C (2003) A genetic algorithm for epicyclic gear trains. Mechanism and Machine Theory 38(2): 135-147
Green Clamping Devices Based on Pneumatic-mechanical Compound Transmission Systems Instead of Hydraulic Transmission Systems

Guang-ju Si¹, Ming-di Wang¹, Kang-min Zhong¹, Dong-ning Su²
¹ School of Mechanical and Electronic Engineering, Soochow University, Suzhou 215021
² School of Mechanical Engineering, Jinan University, Jinan 250022
Abstract: In this paper several different kinds of pneumatic-mechanical compound transmission systems, formed by two-step orthogonal toggle force-amplifying mechanisms and rod-less pneumatic cylinders, are introduced, and the working principles and mechanical calculation formulas of the actual systems are given. Two-step orthogonal toggle force-amplifying mechanisms offer a fine force-amplifying effect and a high efficiency of force conduction; the rod-less pneumatic cylinder offers structural compactness and greater rigidity. Pneumatic-mechanical compound transmission systems, the combination of the two, can remedy the shortcomings of purely pneumatic transmission, in which the system pressure is very low and the output force is limited. On some occasions, pneumatic-mechanical compound transmission systems can be used instead of hydraulic transmission systems, which cause environmental pollution.

Keywords: pneumatic transmission, pneumatic-mechanical compound transmission, hydraulic transmission, two-step orthogonal toggle mechanism, force-amplifying mechanism
1. Introduction
Compared with hydraulic transmission, pneumatic transmission has these distinct advantages:

1. The working medium of a pneumatic transmission is clean compressed air, while mineral oil is usually used as the working medium in hydraulic transmission; mineral oil is easily volatilised and often leaks, leading to environmental pollution. Pneumatic transmission is thus an environmentally friendly form of transmission.
2. The pressure loss in pneumatic transmission is much less than the loss in hydraulic transmission. In a large factory a centralized compressed-air pumping station needs to be built so that compressed air is
supplied to each piece of equipment through a network of pipelines, whereas at least one hydraulic pump is required in each hydraulic transmission system for every machine. Generally speaking, pneumatic transmission using centralized compressed air is more economical than hydraulic transmission.
3. The velocity of the piston in a pneumatic cylinder is higher than that in a hydraulic cylinder, so the working efficiency of a pneumatic transmission system is clearly higher than that of a hydraulic transmission.

However, pneumatic transmission has the unavoidable disadvantage of low air pressure, caused by the continual leaking of the compressed air. In engineering, the air pressure in a pneumatic transmission system is about 0.4~0.7 MPa. The pressure is so low that a large pneumatic cylinder and a huge system structure are required if a powerful output force is needed, which is generally unacceptable. Therefore hydraulic transmission, which is prone to cause environmental pollution, has to be used because its system pressure is much higher and can reach over 100 MPa. By combining force-amplifying mechanisms with pneumatic transmission we obtain pneumatic-mechanical compound transmission systems, in which the advantages of pneumatic and mechanical transmission are combined to the greatest effect. The most significant characteristic of these systems is that, by using the force-amplifying mechanism to increase the output force from the piston, a much greater force is realized on the force output component even when the system pressure and the diameter of the cylinder are limited. This not only enlarges the application field of pneumatic transmission, but also allows a relatively environmentally friendly pneumatic system to be used instead of the pollution-prone hydraulic transmission system.
2. The Toggle Force-amplifying Mechanism
At present, toggle force-amplifying mechanisms, which are bionic mechanisms, are widely used in mechanical engineering [1-7] because of their significant force-amplifying effect, but most systems are based on one-step mechanisms, meaning that the input force is amplified in only one step [1-5]. Applications of two-step force-amplifying mechanisms have seldom been seen [6,7]; however, two-step mechanisms usually have about ten times the force-amplifying ratio of one-step mechanisms. Two-step toggle force-amplifying mechanisms form a series of combinations with many possibilities, taking into consideration the desired mode of output motion. In some combinations the directions of input and output force are parallel, while in others they are not; in some the output movement is linear, while in others it is flexural; in some there is only one path of output force, while in others there are two; in some only one force output component exists, while in others two exist. Moreover, toggle mechanisms come in different types, such as single-bar toggle mechanisms, double-equilateral-bar toggle mechanisms and double-inequilateral-bar toggle mechanisms.
3. Two-step Orthogonal Toggle Force-amplifying Mechanisms and the Rod-less Pneumatic Cylinder

A large number of combinations with different structures can be formed by connecting the different kinds of two-step toggle force-amplifying mechanisms and pneumatic cylinders in series. Several representative and practical pneumatic-mechanical compound transmission systems are illustrated below. These combinations are all based on two-step orthogonal toggle force-amplifying mechanisms and rod-less cylinders; the output motions are one-way and linear. Rod-less cylinders are employed because of their rigidity and structural compactness.

Based on this working principle, an orthogonal mechanism is defined as a mechanism in which the force direction is changed orthogonally from force input to output. An orthogonal mechanism usually has two forms, one-step and two-step: in one-step orthogonal mechanisms the directions of output and input force are perpendicular [2,3], while in two-step ones they are parallel [6,7].

3.1 Single-bar Toggle—Double-equilateral-bar Toggle Series Mechanism and Rod-less Cylinder

The pneumatic-mechanical compound transmission system formed by a single-bar toggle—double-equilateral-bar toggle series mechanism and a rod-less cylinder has two forms: the unsymmetrical style shown in Fig. 1 and the symmetrical style shown in Fig. 2.
Figure 1. Single-bar toggle—double-equilateral-bar toggle series mechanism and rod-less cylinder (unsymmetrical structure)
The working principle of the system illustrated in Fig. 1 is as follows. On the working stroke, the pneumatic direction-control valve is in the left position, as shown in Fig. 1. Compressed air enters the left cavity of the cylinder, forcing the rod-less piston to move rightwards. The force exerted on the rod-less piston by the compressed air is amplified by the single-bar toggle—double-equilateral-bar toggle series mechanism (a kind of two-step orthogonal series mechanism), and the amplified force acts on the force output component, giving the output force Fo. When the valve is in the right working position, compressed air enters the right cavity of the cylinder, forcing the rod-less piston to move leftwards; the force output component also moves leftwards and the system is on the return stroke.
Figure 2. Single-bar toggle—double-equilateral-bar toggle series mechanism and rod-less cylinder (symmetrical structure)
The working principle of the system illustrated in Fig. 2 is similar to that of the system in Fig. 1, with the following difference: when the force output component is on the working or return stroke, compressed air enters the cavities of the two cylinders simultaneously. This means that the amount of compressed air consumed by the system in Fig. 2 is twice that consumed by the system in Fig. 1, and also that the theoretical output force of the system in Fig. 2 is twice that of the system in Fig. 1.
For the systems illustrated in Fig. 1 and Fig. 2, the formulas for calculating the theoretical output forces (neglecting friction loss during force transmission) are as follows:

F_ot1 = (1/2) · (1/(tan α · tan β) + 1) · π d² p_A / 4    (1)

F_ot2 = (1/(tan α · tan β) + 1) · π d² p_A / 4    (2)

where d is the diameter of the piston, α and β are the theoretical pressure angles of the one-step and two-step toggles, and p_A is the pressure of the compressed air. When taking friction loss into account, the formulas for the actual output forces F_op1 and F_op2 of the systems in Fig. 1 and Fig. 2 are:

F_op1 = (1/2) · ((1 − tan(β + φ₂) · tan γ) / ((tan(α + φ₁) + tan φ) · tan(β + φ₂)) + 1) · π d² p_A / 4    (3)

F_op2 = (1 / ((tan(α + φ₁) + tan φ) · tan(β + φ₂)) + 1) · π d² p_A / 4    (4)

where φ₁ is the equivalent friction angle of the single-bar toggle, φ is the equivalent friction angle between the piston and the internal wall of the cylinder, φ₂ is the equivalent friction angle of the double-bar toggle, and γ is the equivalent friction angle between the force output component and its guiding hole. Calculations of these equivalent friction angles are given in references [1,5].

Comparing formula (1) with (2) and (3) with (4), we find that the theoretical output force of the system in Fig. 2 is doubled, while the actual output force is more than twice that in Fig. 1, because the factor 1 − tan(β + φ₂) · tan γ, whose value is less than 1, is absent from the numerator of formula (4). The reason for this is that there is no friction loss between the force output component and its guiding hole, since the radial forces exerted on the force output component are symmetrical and balanced. It also indicates that the force conduction efficiency of the symmetrical style is higher than that of the unsymmetrical one.

3.2 Double-equilateral-bar—Double-equilateral-bar Toggle Series Mechanism and Rod-less Cylinder

The pneumatic-mechanical compound transmission system formed by a double-equilateral-bar—double-equilateral-bar toggle series mechanism and a rod-less cylinder likewise has an unsymmetrical style and a symmetrical style, as illustrated in Fig. 3 and Fig. 4 respectively. The working principle of the systems illustrated in Fig. 3 and Fig. 4 is as follows: when compressed air forces the rod-less piston to the right, the intermediate sliding block, which is fitted in the radial hole in the piston and hinged to the two toggles of a one-step force-amplifying mechanism, moves upwards; the force exerted on the rod-less piston by the compressed air is amplified by the double-equilateral-bar—double-equilateral-bar toggle series mechanism (a kind of two-step orthogonal series
Figure 3. Double-equilateral-bar—double-equilateral-bar toggle series mechanism and rod-less cylinder (unsymmetrical structure)
Because a double-equilateral-bar toggle mechanism is used instead of a single-bar toggle mechanism, the force conduction efficiency of the system in Fig. 3 is higher than that in Fig. 1, and the force conduction efficiency of the system in Fig. 4 is higher than that in Fig. 2. The reason is that there are large radial forces exerted on the rod-less cylinders in the systems shown in Fig. 1 and Fig. 2, causing large friction losses, while the radial forces exerted on the rod-less cylinders in the systems shown in Fig. 3 and Fig. 4, caused by the friction between the sliding block and the radial hole in the piston, are so small that they can be ignored in engineering. The formulas for the theoretical output forces F_ot3 and F_ot4 of the systems in Fig. 3 and Fig. 4 are:

F_ot3 = (1/4) · (1/(tan α · tan β) + 1) · π d² p_A / 4    (5)

F_ot4 = (1/2) · (1/(tan α · tan β) + 1) · π d² p_A / 4    (6)

Taking friction loss into account, the formulas for the actual output forces F_op3 and F_op4 of the systems in Fig. 3 and Fig. 4 are:

F_op3 = (1/4) · ((1 − tan(α + φ₁) · tan θ) · (1 − tan(β + φ₂) · tan γ) / (tan(α + φ₁) · tan(β + φ₂)) + 1) · π d² p_A / 4    (7)

F_op4 = (1/2) · ((1 − tan(α + φ₁) · tan θ) / (tan(α + φ₁) · tan(β + φ₂)) + 1) · π d² p_A / 4    (8)
where θ is the equivalent friction angle between the sliding block and the radial hole in the rod-less piston. Comparing formula (5) with (6) and (7) with (8), we find that the theoretical output force of the system in Fig. 4 is twice, and the actual output force more than twice, that of the system in Fig. 3; the situation is very similar to the one described above. This further illustrates that the force conduction efficiency of a system with a symmetrical structure is higher than that of one with an unsymmetrical structure.
Figure 4. Double-equilateral-bar—double-equilateral-bar toggle series mechanism and rod-less cylinder (symmetrical structure)
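As a numerical illustration of formulas (5)–(8), the following sketch evaluates both styles (the variable names transliterate the Greek symbols; the sample values α = β = 5° and 1° for all friction angles echo the example of section 4, but their use here is an assumption, not taken from the paper):

```python
import math

def cylinder_force(d, p_A):
    """Force of the compressed air on the rod-less piston: pi*d^2*p_A/4."""
    return math.pi * d ** 2 * p_A / 4

def F_ot3(d, p_A, alpha, beta):
    # Formula (5): theoretical output force, unsymmetrical system (Fig. 3).
    return 0.25 * (1 / (math.tan(alpha) * math.tan(beta)) + 1) * cylinder_force(d, p_A)

def F_ot4(d, p_A, alpha, beta):
    # Formula (6): theoretical output force, symmetrical system (Fig. 4).
    return 0.5 * (1 / (math.tan(alpha) * math.tan(beta)) + 1) * cylinder_force(d, p_A)

def F_op3(d, p_A, alpha, beta, phi1, phi2, theta, gamma):
    # Formula (7): actual output force with friction, unsymmetrical system.
    k = ((1 - math.tan(alpha + phi1) * math.tan(theta))
         * (1 - math.tan(beta + phi2) * math.tan(gamma))
         / (math.tan(alpha + phi1) * math.tan(beta + phi2)) + 1)
    return 0.25 * k * cylinder_force(d, p_A)

def F_op4(d, p_A, alpha, beta, phi1, phi2, theta):
    # Formula (8): actual output force, symmetrical system; the factor
    # 1 - tan(beta+phi2)*tan(gamma) is absent from the numerator here.
    k = ((1 - math.tan(alpha + phi1) * math.tan(theta))
         / (math.tan(alpha + phi1) * math.tan(beta + phi2)) + 1)
    return 0.5 * k * cylinder_force(d, p_A)

deg = math.radians
args = dict(d=0.1, p_A=0.6e6, alpha=deg(5), beta=deg(5),
            phi1=deg(1), phi2=deg(1), theta=deg(1))
# The symmetrical system delivers more than twice the unsymmetrical force.
ratio = F_op4(**args) / F_op3(**args, gamma=deg(1))
```

Running this confirms the comparison in the text: the theoretical forces differ by exactly a factor of two, while the actual-force ratio comes out slightly above two.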
3.3 Double-equilateral-bar—Double-equilateral-bar Toggle Series Mechanism with Internal One-step Toggle Mechanism and Rod-less Cylinder

Putting the one-step toggle mechanism shown in Fig. 1 inside a two-step toggle mechanism, a new kind of pneumatic-mechanical compound transmission system is formed, as shown in Fig. 5. The working principle of the system follows directly from those described above. It is worth mentioning that the symmetry of the system in Fig. 5 is the highest among the five systems mentioned above, so the system in Fig. 5 has the best technical properties. The formula for the theoretical output force F_ot5 of the system in Fig. 5 is the same as that for Fig. 4:

F_ot5 = (1/2) · (1/(tan α · tan β) + 1) · π d² p_A / 4    (9)

The formula for its actual output force F_op5 is:
F_op5 = (1/2) · (1/(tan(α + φ₁) · tan(β + φ₂)) + 1) · π d² p_A / 4    (10)
Figure 5. Double-equilateral-bar—double-equilateral-bar toggle series mechanism with internal one-step toggle mechanism and rod-less cylinder
Although the theoretical output force of the system in Fig. 5 is the same as that in Fig. 4, comparison of formula (8) with (10) shows that the actual output force of the system in Fig. 4 is appreciably less than that in Fig. 5, due to its lower force conduction efficiency. Furthermore, the structural compactness of the system in Fig. 5 is better than that in Fig. 2 or Fig. 4.
4. Example
Consider the system in Fig. 5. If α = β = 5°, φ₁ = φ₂ = 1° (the values of φ₁ and φ₂ are statistical averages), the diameter of the piston d = 100 mm and the pressure of the compressed air p_A = 0.6 MPa, then F_op5 = 215646 N from formula (10). If a hydraulic transmission system with the same cylinder diameter were applied, then in order to get the same output force from the piston rod, the working pressure of the hydraulic system would have to be as high as p_L = 215646 × 4 / (π × 100²) = 27.46 MPa. This pressure is in the high range for a hydraulic system. The result shows that, on some occasions, using a pneumatic-mechanical compound transmission system based on force-amplifying mechanisms instead of a pollution-prone hydraulic system is absolutely practical in engineering.
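The arithmetic of this example can be checked with a few lines (a sketch; formula (10) as reconstructed above):

```python
import math

alpha = beta = math.radians(5)   # pressure angles of the toggles
phi1 = phi2 = math.radians(1)    # equivalent friction angles
d = 0.100                        # piston diameter, m
p_A = 0.6e6                      # compressed-air pressure, Pa

# Formula (10): actual output force of the system in Fig. 5.
F_op5 = 0.5 * (1 / (math.tan(alpha + phi1) * math.tan(beta + phi2)) + 1) \
        * math.pi * d ** 2 * p_A / 4

# Hydraulic pressure needed to produce the same force with the same diameter.
p_L = F_op5 * 4 / (math.pi * d ** 2)

print(round(F_op5))   # ≈ 215646 N
print(p_L / 1e6)      # ≈ 27.46 MPa
```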
5. Conclusion
A pneumatic-mechanical compound transmission system based on force-amplifying mechanisms suits the trend towards environmentally friendly transmission techniques and has good application prospects in engineering. These types of systems are still on paper up to now, and we hope that this paper can attract more attention and interest from engineers in this field, so as to promote the technique of pneumatic-mechanical compound transmission in practical applications and make it more popular in engineering. This will in turn accelerate progress towards green mechanism design and manufacturing techniques.
6. References
[1] Lin Wen-huan, Chen Ben-tong. Clamp Design for Machine Tools. Beijing: National Defense Industry Press, 1987.
[2] Edward G Hoffman. Jig and Fixture Design. Albany, USA: Delmar Publishers, 1996.
[3] Zhong Kangmin, Guo Peiquan, Hu Bing-chen. Centrifugal clutch with orthogonal force amplifier. Chinese Journal of Mechanical Engineering, 2000(4):38–40, 44.
[4] Lu Wen, Zhong Kangmin. Parallel and synchronal double-acting-path toggle force amplifier and its application to hydraulic drive. Construction Machinery and Equipment, 2005(1):45–46.
[5] Zhong Kangmin, Guo Peiquan. Orthogonal reinforcement mechanism and hydraulic drive. In: Proceedings of the Tenth World Congress on the Theory of Machines and Mechanisms (Vol. 5). Oulu, Finland: Oulu University Press, 1999, 2037–2042.
[6] Lu Wen, Wang Bing, Zhong Kangmin. Three kinds of composition system for pneumatic muscle and force amplification mechanism of hinge rod and their comparison. Journal of Machine Design, 2005, 22(2):52–54.
[7] Robert L. Norton. Design of Machinery: An Introduction to the Synthesis and Analysis of Mechanisms and Machines. USA: McGraw-Hill, 1999.
Rapid Registration for 3D Data with Overlapping Range Based on Human Computer Interaction

Jun-yi Lin, Kai-yong Jiang, Bin Liu, Chang-biao Huang
Mold & Die Technology and Research Center, HuaQiao University, QuanZhou, FuJian, 362021, China
Abstract Non-contact optical 3D measurement methods have an advantage in the measurement of complex free-form surfaces, but the registration of multi-view point data is still a challenging task in this field. The ICP (iterative closest point) algorithm is one of the most classical methods for carrying out the registration. A rapid registration method for two sets of overlapping point data based on human-computer interaction is presented in this paper. The method includes two registration steps: rough registration and accurate registration. Three pairs of points are chosen quickly through human-computer interaction in the overlapping point region, and the rigid transformation between the two views of point data is calculated from these pairs of points. The rough registration is then accomplished by applying the rigid transformation. In the accurate registration step, the ICP algorithm is used to obtain a more accurate registration result. Finally, a shoe last, which has a complex free-form surface, is measured and registered with this algorithm, and the result shows that the registration algorithm is fast and efficient.

Keywords: multi-view point data; rapid registration; iterative closest point; human-computer interaction

1. Introduction
Optical 3D measurement methods are widely applied in domains such as rapid prototyping, computer vision, biology and medicine [1]. Non-contact, non-destructive, fast and wide-range measurement are the main advantages of these methods, and they can acquire the 3D points of an object in a few seconds. Owing to the optical principle, only a certain angle of view of the object can be measured at a time, so many measurements are needed to acquire the whole data of the object. The coordinate system of each measurement is different; therefore, registration of the multi-view data is applied to obtain the whole 3D data of the object. At present, the main registration methods for multi-view 3D data [2][3] are: 1) realization depending on hardware; this method obtains the transformation between different views of data with high-precision apparatus, and the registration can be carried out by
652
J. Lin, K. Jiang, B. Liu and C. Huang
the known transformation; and 2) realization depending on algorithms; in this case, registration is obtained by processing transformation information in the data or assistant information introduced into the data. The iterative closest point (ICP) algorithm is one of the most widely used methods for registration [4]. Although the ICP algorithm has become the dominant method of 3D data registration, it also has some limits [5]: 1) it requires a good initial estimate to avoid the problem of local minima, and 2) there is no guarantee of getting the correct solution even in the noiseless case. In view of these limits, many methods based on the standard ICP algorithm have been put forward to overcome the problem, and detailed results are shown in [6]. In recent years, many two-step registration methods, which include rough registration and accurate registration, have been offered to improve the efficiency and reliability of the ICP algorithm. A global rough registration method is introduced in [7]; in this method, the global search ability of a genetic algorithm is adopted and three parameters are applied to the unit quaternion method as the optimization space to obtain the global rough registration. In [8], the rough registration is accomplished by matching the geometric features of neighbouring points. The rough registration presented in [9] is carried out by calculating corresponding points: curvature and normal vectors are calculated from each point and its neighbouring points, and then used to find the corresponding points. Some fast rough registration methods rely on introduced assistant information: in [10][11][12], marks are made in the overlapping range of the two data sets, the positions of the marks are obtained by image processing, and the transformation between the different coordinate systems is calculated from the corresponding mark points.
To improve efficiency and reliability, this paper offers a rapid registration method for multi-view data with an overlapping region. Three pairs of points are selected in the overlapping region by human-computer interaction; from these three pairs of points, the rigid transformation between the two views of data can be calculated easily, and the rough registration is finished by applying the transformation. Based on the rough registration, an improved ICP algorithm is used for the accurate registration. In the next section, we introduce how to select the pairs of points in the 3D point sets. Then, based on the three pairs of points, the method for calculating the transformation between the two views of data is presented. In section four, the iterative closest point (ICP) algorithm, started from the rough registration, is stated. Finally, experimental results for shoe-last measurement are presented to demonstrate the capabilities of the algorithm.
2. Selection of Pairs of Points

2.1 The OpenGL Display of the 3D Data

First of all, the 3D data must be displayed in the software interface, and OpenGL technology is known as the processing standard for high-performance graphics.
Rapid Registration for 3D Data with Overlapping Range
653
It provides about 120 different commands to define 3D objects and perform 3D interactive operations. When constructing a 3D entity, the use of triangles is specially recommended because of their coplanarity [13]. To prevent distortion of the object, a lighting model needs to be built in OpenGL, and the result of the lighting model depends on the normal vectors of the structural units, so the STL file format is adopted to describe the 3D data in this paper. The STL file has become one of the industrial standards for CAD/CAM system interface files. It records the normal vector and the three vertices of each triangle; the normal vector and the recorded vertex order accord with the right-hand rule, a property that is very important when building the lighting model in OpenGL. It is easy to accomplish the 3D data display task using STL files with OpenGL technology.

The 3D data of each view is relative to its current coordinate system, so it is difficult to guarantee that the overlapping region is visible without any operation. OpenGL offers functions such as glTranslatef(), glRotatef() and glScalef() to carry out translation, rotation and zoom operations. Any part of the object can be shown to the operator using these operations.

2.2 Selection of the Pairs of Points by Human-Computer Interaction
By the time the object is in the wanted position, it has passed through many transformations in the practical process, such as rotation, translation, projection and so on. Generally, it is difficult to determine which object is selected in this situation. Fortunately, OpenGL provides a selection mechanism, and the operator can select the wanted object conveniently with it. The basic idea of the OpenGL selection functions is as follows: first, the scene is drawn to the buffer, and then the program enters the selection mode to redraw the scene. On leaving this mode, OpenGL returns a series of graphics primitives intersecting the view volume, and each primitive produces a selection record. From the selection records, the selected object can be determined. The main steps of the OpenGL selection mechanism are outlined as follows:
1. Define the array to which the selection records are returned. The OpenGL function is void glSelectBuffer(GLsizei size, GLuint* buffer).
2. Operate the name stack. The function used to initialize the name stack is void glInitNames(void), and the function to push a name onto the name stack is void glPushName(GLuint name).
3. Perform the selection operation. The selection operation is carried out when the operator clicks the mouse on the data shown in the software interface. The function is void gluPickMatrix(GLdouble x, GLdouble y, GLdouble width, GLdouble height, GLint viewport[4]). The parameters x and y define the mouse position, and the parameters width and height define the selection region.
4. Leave the selection mode. Only after this step is done are the selection results returned to the buffer, together with the number of selected graphics primitives. The function used to finish this step is GLint glRenderMode(GLenum mode); here, the mode must be set to GL_RENDER.
5. Process the selection results. The selection results are recorded in the array; every record includes the number of names on the name stack, the minimal and maximal depths of the intersection between the selected object and the view volume, and the actual names pushed onto the name stack.

It is necessary to set a region near the mouse position in order to select a point in the 3D data. In this case, one or more points may be selected in a single operation. For a point, the minimal depth is equal to the maximal depth, and the point nearest to the operator is the one wanted, so it is easy to decide which point is selected by comparing the minimal depths. In order to guarantee the precision of the rough registration, the requirements for the selection of the pairs of points are stated as follows:

- The three pairs of points must be in the overlapping region of the two views of data.
- The three selected points should form as large and as regular a triangle as possible.
- Do not select points near the edge of the data.
- Choose the pairs of points in corresponding order.
An example of the selected points is shown in Figure 1.

Figure 1. An example of selected points
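The depth comparison of step 5 can be sketched with plain Python on hypothetical hit records (the tuple layout mirrors the glSelectBuffer record of name count, minimal depth, maximal depth and name; no GL context is needed for the comparison itself):

```python
# Each hit record from the selection buffer: (name_count, z_min, z_max, name).
# Smaller z_min means the primitive lies nearer to the viewer.
def nearest_hit(hit_records):
    """Return the name of the point whose minimal depth is smallest,
    i.e. the point nearest to the operator, as described in step 5."""
    best = min(hit_records, key=lambda rec: rec[1])
    return best[3]

# Hypothetical records for three candidate points inside the pick region;
# for a point primitive z_min == z_max, as noted in the text.
hits = [(1, 0.82, 0.82, 7), (1, 0.35, 0.35, 12), (1, 0.60, 0.60, 3)]
print(nearest_hit(hits))   # 12
```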
3. Rough Registration

All the above methods of rough registration based on three points stress the accurate extraction of marks introduced into the data, and they spend a lot of time on extracting the mark points. Our goal is to finish the rough registration quickly, so the points need not be extracted accurately. In our method, three pairs of
points are picked up by manual work; they are not required to match each other strictly in this case. The rigid transformation is calculated from the three selected pairs of points, and the rough registration is carried out by applying the rigid transformation. Suppose that the three points in one data set are denoted P1, P2, P3, composing the triangle P, and the three corresponding points in the other data set are denoted Q1, Q2, Q3, composing the triangle Q. The main steps for obtaining the rigid transformation are outlined as follows:

1. Translate the first points (P1, Q1) to the origin of the coordinate system respectively, and calculate the translation matrices (T_P, T_Q).

2. Calculate the normal vectors of the two triangles at their first points. The normal vector of triangle P is denoted F_P(F_Px, F_Py, F_Pz), F_P = P1P2 × P1P3, and the normal vector of triangle Q is denoted F_Q(F_Qx, F_Qy, F_Qz), F_Q = Q1Q2 × Q1Q3; the modules of the two normal vectors, denoted M_P and M_Q, are calculated too.

3. Rotate the normals of the two triangles to coincide with the z axis. Suppose α1 and γ1 are the angles between the normal of triangle P and the positive directions of the x axis and z axis. The normal vector F_P is first rotated α1 degrees about the z axis, and then γ1 degrees about the y axis to coincide with the z axis; the rotation matrices are:

R_PZ = | cos α1   −sin α1   0   0 |
       | sin α1    cos α1   0   0 |    (1.1)
       |   0         0      1   0 |
       |   0         0      0   1 |

R_PY = |  cos γ1   0   sin γ1   0 |
       |    0      1     0      0 |    (1.2)
       | −sin γ1   0   cos γ1   0 |
       |    0      0     0      1 |

The normal vector of triangle Q can be rotated to coincide with the z axis by the same operations.

4. Through the three steps above, the two triangles have been transformed into the xy plane, and their first vertices coincide with the origin of the coordinate system. In order to obtain the rigid transformation from the three corresponding points, it only remains to superpose one pair of corresponding edges. Because
the points P1 and Q1 coincide with the origin, the edges P1P2 and Q1Q2 are chosen. First, the angle θ between the two edges is calculated, and then the edge Q1Q2 is rotated to coincide with the edge P1P2. The rotation matrix is:

R_Z±θ = |  cos θ   ±sin θ   0   0 |
        | ∓sin θ    cos θ   0   0 |    (1.3)
        |    0        0     1   0 |
        |    0        0     0   1 |
Based on the four steps above, the whole transformation matrix can be deduced as:

T = T_P⁻¹ · R_PZ⁻¹ · R_PY⁻¹ · R_Z±θ · R_QY · R_QZ · T_Q    (1.4)
Figure 2. The result of rough registration
The rough registration result for two views of shoe-last measurement data is shown in Figure 2. It can be concluded that, although a large error still exists in the result, the two views of data are already in relatively good positions.
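The step-by-step construction above can be condensed into a short sketch (using NumPy; this is not the authors' code — building an orthonormal frame on each triangle and composing R and t directly gives the same transform as the matrix chain (1.4), assuming the three points of each set are not collinear):

```python
import numpy as np

def rigid_from_three_pairs(P, Q):
    """4x4 homogeneous rigid transform T mapping triangle Q1Q2Q3 onto
    triangle P1P2P3 (rows of Q and P), equivalent in effect to (1.4)."""
    def frame(X):
        # Orthonormal frame anchored at the first vertex: the edge direction,
        # the in-plane perpendicular, and the triangle normal.
        e1 = X[1] - X[0]
        e1 = e1 / np.linalg.norm(e1)
        n = np.cross(X[1] - X[0], X[2] - X[0])
        n = n / np.linalg.norm(n)
        return np.column_stack([e1, np.cross(n, e1), n])
    R = frame(P) @ frame(Q).T      # rotate Q's frame onto P's frame
    t = P[0] - R @ Q[0]            # then carry Q1 onto P1
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```

Because the manually picked pairs only match approximately, the resulting transform is only a rough registration; the residual error is removed by the ICP stage of section 4.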
4. Accurate Registration Using the ICP Algorithm

The angle between the two data sets is less than 10 degrees after rough registration, which offers a good initial pose for the application of the ICP algorithm. The two data sets are denoted A0 and B0, and data set A0 is set as the standard data; after rough
registration, data set B0 is transformed to give data set B1. The main steps of the ICP algorithm are outlined as follows:

1. Set the initial data. A0 and B1 are taken as the initial data of the accurate registration.

2. Find the closest pairs of points. According to the three selected points, the overlapping region can be identified easily. In the overlapping region, every point in data set A0 can find its closest corresponding point in data set B1 quickly using a k-d tree. The search efficiency is highly improved because the search area is limited to the overlapping region.

3. Calculate the transformation matrix. The parameters R1 and T1 of the transformation are calculated using the unit quaternion method [13], based on the pairs of points.

4. Apply the transformation. The new data set B2 is obtained by applying the transformation parameters R1 and T1 to data set B1, and the RMS error between data sets A0 and B2 is calculated.

5. Determine the iteration stop condition. The iteration is ended when the RMS error is smaller than the allowable error; otherwise, data set B1 is replaced by data set B2 and the algorithm returns to step 2.
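The five steps can be sketched as follows (NumPy only, not the authors' code; a brute-force nearest-neighbour search stands in for the k-d tree, and an SVD-based least-squares fit stands in for the unit-quaternion solution of [13] — both give the same optimal R1 and T1):

```python
import numpy as np

def best_rigid(A, B):
    # Least-squares R, T mapping point set B onto A (Kabsch/SVD form;
    # Horn's unit-quaternion method reaches the same optimum).
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (B - cb).T @ (A - ca)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, ca - R @ cb

def icp(A, B1, tol=1e-9, max_iter=50):
    """Accurate registration: iteratively move B1 onto the standard data A."""
    Bk, prev_rms = B1.copy(), np.inf
    for _ in range(max_iter):
        # Step 2: closest point in A for each point of Bk (brute force here;
        # the paper restricts the search to the overlap and uses a k-d tree).
        d2 = ((Bk[:, None, :] - A[None, :, :]) ** 2).sum(axis=2)
        idx = d2.argmin(axis=1)
        rms = np.sqrt(d2[np.arange(len(Bk)), idx].mean())
        if prev_rms - rms < tol:      # step 5: stop once the error settles
            break
        prev_rms = rms
        R1, T1 = best_rigid(A[idx], Bk)   # step 3
        Bk = Bk @ R1.T + T1               # step 4: B2 = R1·B1 + T1
    return Bk, rms
```

With the rough registration keeping the initial misalignment below 10 degrees, the nearest-neighbour correspondences are mostly correct from the first iteration, which is exactly why the two-step scheme converges quickly.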
5. Experiments and Conclusions

The multi-view 3D data of the shoe last was registered using the algorithm, and the results are shown as follows: Figure 3 shows two views of the shoe-last data, which have an overlapping region. Figure 4a shows the rough registration result; from the picture, we find that a small angle still exists between the two data sets. The accurate registration result is shown in Figure 4b. One of the data sets has 6294 points and 12157 triangles, and the other contains 6456 points and 12459 triangles. The algorithm was run on a computer with a Pentium(R) D CPU at 2.80 GHz and 2.00 GB RAM; it finished the accurate registration within a second, and the RMS error is about 0.6 mm.

In this paper, a rapid ICP algorithm for the registration of 3D data has been presented, which includes two steps: rough registration and accurate registration. The results show that the efficiency of the registration algorithm is improved, and that the algorithm overcomes limits of the ICP algorithm such as its requirement for a good initial relative pose of the two data sets. We have focused largely on the speed of the registration algorithm; the efficiency is highly improved by human-computer interaction, but the result depends on the operator's technique, so we anticipate that future work will focus on the stability and automation of the registration algorithm. In addition, a remaining registration
error exists in every pairwise registration step; in order to diffuse the pairwise registration errors evenly, a better global registration of the data sets will be researched further.
Figure 3. The two angles of view data of the shoe last with overlapping region
Figure 4. a. The result of rough registration; b. The result of accurate registration
6. Acknowledgements

The work described in this paper was supported by key programs No. 2006H0029 and No. 2005HZ1013, both from the Science & Technology Department of Fujian, China, and by school fund item No. 06HZR12.
7. References
[1] Chen XR, Cai P, Shi WK, (2002) The latest development of optical non-contact 3D profile measurement. Optics and Precision Engineering, 10(5):528-532
[2] Williams JA, Bennamoun M, Latham S, (1999) Multiple view 3D registration: a review and a new technique. In: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics. Tokyo, Japan: IEEE Press, 3:497-502
[3] Simon DA, (1996) Fast and Accurate Shape-Based Registration. PhD thesis, Pittsburgh, Pennsylvania: Carnegie Mellon University
[4] Besl PJ, McKay ND, (1992) A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239-256
[5] Chen CS, Hung YP, Cheng JB, (1999) RANSAC-based DARCES: a new approach to fast automatic registration of partially overlapping range images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(11):1229-1234
[6] Rusinkiewicz S, Levoy M, (2001) Efficient variants of the ICP algorithm. In: Proceedings of the 3rd International Conference on 3-D Digital Imaging and Modeling, Quebec City, Canada
[7] Yan QG, Li MZ, Li DC, (2003) Research on registration of 3D data in inspection of multi-point forming part. China Mechanical Engineering, 14(19):1648-1651
[8] Liu Y, Wei B, (2004) Developing structural constraints for accurate registration of overlapping range images. Robotics and Autonomous Systems, 47:11-30
[9] Zhu YJ, Zhou LS, Zhang LY, (2006) Registration of scattered cloud data. Journal of Computer-Aided Design & Computer Graphics, 18(4):475-481
[10] Luo XB, Zhong YX, Li RJ, (2004) Data registration in 3-D scanning systems. Journal of Tsinghua University: Science & Technology, 44(8):1104-1106
[11] Chen JT, Zhao C, Wang CG, Mo JH, (2006) Research of point clouds reorientation based on reference point and ICP algorithm. Computer Measurement & Control, 14(9):1222-1224
[12] Zhang WZ, Zhang LY, Wang XY, (2006) Robust algorithm for image feature matching based on reference points. China Mechanical Engineering, 17(22):2415-2418
[13] Horn BKP, (1987) Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America A, 4(4):629-642
A New FE Modelling Approach to Spot Welding Joints of Automotive Panels and Modal Characteristics

Jiqing Chen, Yunjiao Zhou and Fengchong Lan

School of Automotive Engineering, South China University of Technology, Guangzhou, 510640, P R China
Abstract This paper presents a new finite element modelling technique for spot-weld connections. Unlike the conventional technique, which depends on a node-to-node model, with this method it is not necessary during meshing to consider the correspondence between node positions and the actual welding spots. It offers a shortcut and shows high efficiency where a large number of spot-weld models must be established easily and rapidly, especially for automotive cabs, body frames and panels, etc. It is significant for reducing the workload in the pre-processing of car-body finite element models, enhancing the modelling precision and ensuring reliable CAE analysis. In addition, based on orthogonal experiments, the effect of the parameters of the spot-welded structure, such as overlap proportion, spot pitch, etc., on the modes of the connected panels has been investigated, and its law obtained. The law is significant in engineering applications, providing an approach to reduce the NVH level of products in which a large number of spot-welded connection structures are used, such as vehicles, airplanes, etc.

Keywords: Automotive panel; Spot weld; FE modelling; Modal analysis
1 Introduction

As an economical and rapid connection method, spot welding has been widely used in numerous manufacturing industries. With the development of FE techniques, virtual design has become an effective means of research and development. In this process, the establishment of an accurate FE model involving a large number of spot welds is the precondition for effective design [1]. Nowadays, how to model a spot weld realistically is one of the hot spots in FE research and application [2-7], and much important progress has been achieved. In the early 1970s, the Ford automobile corporation first carried out an investigation into the characteristics of the spot-weld interface [2]; Ni and Mahadevan [3] presented a reliability-based methodology for the evaluation of the stiffness degradation of automotive spot-welded joints under high mileage; Wen He and his partners [6] presented a new FE modelling technique for spot welds by introducing a new short beam element, subject to tensile and shear forces, between node and element. As mentioned above, a great quantity of research achievements
662
J. Chen, Y. Zhou and F. Lan
have been obtained with different element selections, such as short beam elements, block elements and spring elements, etc., used to simulate spot-welded connection structures. However, two problems remain unsolved: rapid FE modelling of a large number of spot-weld joints, and reliable simulation precision for the actual spot-welding phenomena. According to the traditional method, a spot-weld element consists of FE nodal pairs, and only nodal pairs close to the actual spot-weld position can be used as the welding position. In fact, owing to the large number and distributive discontinuity of the spots, elaborate meshing within a small local area is needed to ensure positional correspondence between the actual spot welds and the nodes. This inevitably increases the workload in FE modelling and causes much difficulty in keeping the position precision. In this study, a new FE modelling approach to spot welding is presented and validated. Its distinct characteristic is that the problem of spot position correspondence between nodes and actual spot welds is solved, and the position precision is accurately guaranteed. In addition, the effect of parameters of the spot-welded structure, such as the overlap proportion, the spacing between spot welds, etc., on the modes of automotive panels is emphasized, and the law obtained.
2 Spot Weld Modelling

In the process of spot-weld modelling, attention should be paid to the efficiency and precision of the FE simulation. The low efficiency of the traditional node-to-node model needs to be overcome in order to reduce the pre-processing workload, increase the precision and reduce the time cost.

2.1 Establishment of the Welding Element
In the traditional node-to-node method, referring to Fig 1, the weld element can consist of a pair of nodes denoted (K1, K2) only when the position of the nodal pair is close to the actual spot-weld position. Its disadvantages are the difficulty of keeping the position precision of the actual weld spots, and the large workload in the modelling procedure.
Fig 1. Traditional node-to-node method
Since a complete FE model of the whole automotive body is very complicated, a new method, hereafter named the “virtual node” method, is proposed in this study to simulate welding spots and reduce the workload. As shown in Fig 2, a reference point R at the real location of the welding spot is projected onto
A New FE Modelling Approach to Spot Welding Joints of Automotive Panels
663
two welding planes along their normal directions to obtain two virtual nodes Km and Kn. A welding element is thus composed of such a pair of virtual nodes (Km, Kn), and a short beam element connecting them is introduced to simulate the welded connection. At the same time, each virtual node is associated with a group of nodes on the surface in its immediate neighbourhood, called its region of influence, as illustrated in Fig 3. The motion of the virtual node is then coupled to the motion of the nodes in this region by distributing coupling constraints with different weights. In this way, the locations of the virtual nodes are independent of the meshed nodes when building welding elements, regardless of whether any meshed node coincides with a real welding spot; local mesh refinement is therefore not required, and the efficiency of spot-weld modelling is greatly improved.
Fig 2. Scheme of virtual node method
Fig 3. Influence region of each virtual node
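The projection step that creates the virtual nodes can be sketched numerically. The following is a minimal sketch, assuming planar mid-surfaces; the plane positions, normals and coordinates are illustrative assumptions, not values from the paper:

```python
import numpy as np

def virtual_node(reference_point, plane_point, plane_normal):
    """Project the spot-weld reference point onto a welding plane
    along the plane normal, giving the virtual node position."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    r = np.asarray(reference_point, dtype=float)
    p = np.asarray(plane_point, dtype=float)
    return r - np.dot(r - p, n) * n

# Reference point R between two parallel sheet mid-surfaces 1 mm apart
R = np.array([10.0, 5.0, 0.6])
Km = virtual_node(R, plane_point=[0, 0, 1.0], plane_normal=[0, 0, 1])  # upper mid-surface
Kn = virtual_node(R, plane_point=[0, 0, 0.0], plane_normal=[0, 0, 1])  # lower mid-surface
```

The two projections (Km, Kn) then form the welding element, independently of where the mesh nodes happen to fall.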
The spot weld consists of the welding nugget and the connected plates, so thin shell elements are usually used to model the mid-surfaces of the sheet metals, while the spot-weld connections are modelled with beam elements connecting, and perpendicular to, the mid-surfaces of the two connected plates, as shown in Fig 3. Assume a displacement constraint equation exists between the virtual nodes Km and Kn:

C(K_m, K_n) = u_m − u_n = 0    (1)

where u_m and u_n represent the displacement components of virtual nodes Km and Kn; the multipoint constraints of the beam element between the two virtual nodes are thus established. Within the region of influence, each node influences the motion of the virtual node with a different weight: nodes close to the virtual node have a greater influence than nodes far from it. Weight factors are used to represent this difference:
664
J. Chen, Y. Zhou and F. Lan
λ_i = 1 − l_i / l_0    (2)

where λ_i is the weight factor at coupling node i, l_i is the radial distance of coupling node i from the reference node, and l_0 is the distance to the furthest coupling node. Distributing coupling constraints are used to couple the translation and rotation of the reference node to the average translation of the coupling nodes. The constraint distributes the forces and moments at the virtual node as a force distribution over the coupling nodes only; no moments are distributed at the coupling nodes. The force distribution is equivalent to that of the classic bolt-pattern analysis when the weight factors are interpreted as bolt cross-section areas. The constraint enforces a rigid-beam connection between the attachment point and a point located at the weighted centre of position of the coupling nodes. A virtual node has displacement (u_v) and rotation (φ_v) degrees of freedom; the coupling nodes have only displacement (u_i) degrees of freedom active. Each coupling node is assigned a weight factor λ_i, which determines the proportion of the load carried by the region that is transmitted through that node. Weight factors are dimensionless and only their relative magnitude is significant. Hereafter, normalised weights are used:

λ̂_i = λ_i / Σ_i λ_i    (3)
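Equations (2) and (3) can be sketched directly; the node coordinates below are illustrative:

```python
import numpy as np

def normalized_weights(coupling_xyz, reference_xyz):
    """Weight factors for a spot-weld region of influence:
    lambda_i = 1 - l_i/l_0 (Eq. 2), then normalised per Eq. 3."""
    d = np.linalg.norm(np.asarray(coupling_xyz, float) - np.asarray(reference_xyz, float), axis=1)
    lam = 1.0 - d / d.max()   # the furthest node receives weight 0
    return lam / lam.sum()

# three coupling nodes at radial distances 1, 2 and 4 from the reference point
w = normalized_weights([[1, 0, 0], [2, 0, 0], [4, 0, 0]], [0, 0, 0])
```

Note that with Eq. (2) the furthest node of the region carries no load, and the weights sum to one after normalisation.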
Let F_v and M_v be the load and moment applied to the virtual node. A statically admissible force distribution F_i among the coupling nodes satisfies:

Σ_i F_i = F_v
Σ_i X_i × F_i = M_v + X_v × F_v    (4)

where X_v and X_i are the positions of the virtual and coupling nodes, respectively. For an arbitrary number of coupling nodes there is no unique solution to Equation (4). Suppose the force distribution has the property that the linearised motion of the reference node is compatible with the motion of the coupling nodes in an average sense. This compatibility can be described by considering the momentum of the moving coupling-node group when the weight factors are interpreted as masses. In that case the virtual node moves like a point on a rigid body occupying the position of the virtual node, where the centre of mass of the rigid body is the centre of mass of the coupling nodes and the rigid body moves with the same linear and angular momentum as the coupling-node group. So,
F_i = λ̂_i ( F_v + (J⁻¹ M̂_v) × r_i )    (5)

where M̂_v = M_v + r_v × F_v, r_i = X_i − X_c, r_v = X_v − X_c, X_c = Σ_i λ_i X_i / Σ_i λ_i = Σ_i λ̂_i X_i, and the inertia tensor of the coupling-node arrangement is J = Σ_i λ̂_i [ (r_i · r_i) E − (r_i ⊗ r_i) ], where E is the second-order identity tensor.
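The distribution of Eqs. (4)-(5) can be sketched numerically. In this minimal sketch the node positions, weights and loads are illustrative; the resulting forces can be checked against the balance conditions of Eq. (4):

```python
import numpy as np

def distribute_forces(X, w, Xv, Fv, Mv):
    """Distributing-coupling force distribution (Eqs. 4-5): split the
    load (Fv, Mv) at the virtual node Xv among coupling nodes X with
    normalised weights w."""
    X, w = np.asarray(X, float), np.asarray(w, float)
    Xc = w @ X                                   # weighted centre of position
    r = X - Xc                                   # node offsets r_i
    rv = np.asarray(Xv, float) - Xc
    Mhat = np.asarray(Mv, float) + np.cross(rv, Fv)
    # inertia tensor J = sum_i w_i [(r_i . r_i) E - r_i r_i^T]
    J = sum(wi * ((ri @ ri) * np.eye(3) - np.outer(ri, ri)) for wi, ri in zip(w, r))
    t = np.linalg.solve(J, Mhat)                 # J^-1 Mhat
    return np.array([wi * (np.asarray(Fv, float) + np.cross(t, ri))
                     for wi, ri in zip(w, r)])

# three coupling nodes on a sheet, a transverse load and a small moment
X = [[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
w = [0.25, 0.25, 0.5]
Xv, Fv, Mv = [0.0, 0.5, 0.2], [0.0, 0.0, 10.0], [1.0, 0.0, 0.0]
F = distribute_forces(X, w, Xv, Fv, Mv)
```

By construction, summing the nodal forces recovers F_v, and summing the nodal moments recovers M_v + X_v × F_v, i.e. both lines of Eq. (4) hold.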
2.2 Numerical Simulation and Test
To validate the feasibility of the virtual node method, the whole process was simulated with the specimen of [5] and the simulation results were compared with the experiment. The structure and dimensional parameters of the specimen, abstracted from the spot-welded thin-shell structures widely used in automotive panels, are shown in Fig 4; the spot welds are distributed uniformly in a single row. The dimensions are: A = 500 mm, B = 96 mm, L1 = 250 mm, L2 = 15 mm, welding nugget diameter D = 5 mm, and number of spot welds N = 6. The plate material is low-carbon steel with mass density ρ = 7.85 × 10³ kg/m³, elastic modulus E = 2.08 × 10¹¹ Pa, and Poisson ratio ν = 0.28.
Fig 4. Structural parameters of the specimen

Using the “virtual node” method, the free modes of the sample in Fig 4 are simulated. The first three modal shapes are shown in Fig 5.
Fig 5. The first three modal shapes (a, b, c)
Fig 5a, 5b and 5c show the first, second and third modal shapes respectively, which are compared with the test modal shapes in [5]. The comparison between the simulations and the experiment of [5] is shown in Table 1, where I, II and III denote the first three modes.
Table 1. Comparison between simulation and test

Mode   Quantity          Simulation          Experiment          Frequency error
I      mode shape        torsion             torsion
       frequency (Hz)    16.53               15.54               6.37%
II     mode shape        longitudinal bend   longitudinal bend
       frequency (Hz)    24.37               25.31               3.71%
III    mode shape        lateral bending     lateral bending
       frequency (Hz)    30.15               32.8                8.08%
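The frequency errors of Table 1 are relative deviations of simulation from experiment, which can be checked directly:

```python
# simulated vs experimental natural frequencies (Hz) from Table 1
pairs = {"I": (16.53, 15.54), "II": (24.37, 25.31), "III": (30.15, 32.8)}
errors = {m: abs(sim - exp) / exp * 100 for m, (sim, exp) in pairs.items()}
for mode, err in errors.items():
    print(f"mode {mode}: {err:.2f}%")  # reproduces 6.37%, 3.71%, 8.08%
```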
As shown in Table 1, the first three modal shapes are consistent with the test, and the maximum frequency error is 8.08%. Considering the various sources of error in both the tests and the theoretical computations, the simulation results can be regarded as meeting engineering precision requirements, which validates the rationality and precision of the new method. The simulation processes of the two methods are compared in Fig 6. In the traditional method, refining local meshes and finding the positions of the spots and the serial numbers of the nodes are very time-consuming, taking about 85% of the whole workload. In the virtual node method these problems do not exist: it not only reduces the pre-processing workload but also ensures the accuracy of the welding positions, cutting the workload by about 80% compared with the traditional method. This is very important for reducing the lead time in modelling the whole automotive body structure and panels.
Fig 6. Comparison of the two different methods (a. virtual node method; b. traditional method)
The advantages of the virtual node method for simulating spot welds are thus apparent: the reference point used to locate the spot weld is mesh-independent and is neither limited nor influenced by the nodes. This solves the
difficulty that the nodes have to be consistent with the actual position of the corresponding spot weld, reduces the workload caused by local mesh refinement, and is very suitable for the rapid creation of large numbers of spot-weld models.
3. Effect of the Structural Parameters of Spot Welding on the Modal Characteristics

As mentioned above, the rationality and precision of the new “virtual node” method have been validated. On this basis, this study also carries out a modal analysis of the effect of the structural parameters of spot welding on the modes of spot-welded metal sheets, discussed here on the basis of orthogonal experiments.

3.1 Orthogonal Experiment Scheme
In this study, an orthogonal experiment is used to investigate the effect of the spot-weld structural parameters on the modes of automotive panels. The structural parameters of a spot-welded connection mainly comprise the overlap proportion, spot pitch, welding nugget diameter, weld-penetration ratio and indentation depth. The effects of the penetration ratio and the indentation depth can generally be ignored in FE simulation. This paper therefore discusses the influence of three factors (welding nugget diameter, overlap proportion and spot pitch) on the modes of spot-welded metal sheets, using the average of the first three natural frequencies as the test target. The spot-welded sample structure and its dimensions are shown in Fig 4.

Table 2. Orthogonal table and simulation result

No.   Overlap proportion   Spot pitch (mm)   Nugget diameter (mm)   f̄ (Hz)
1     0.06                 96                5.4                    23.68
2     0.06                 60                6.2                    23.97
3     0.06                 40                7                      24.73
4     0.1                  96                6.2                    27.81
5     0.1                  60                7                      28.08
6     0.1                  40                5.4                    28.32
7     0.14                 96                7                      28.65
8     0.14                 60                5.4                    28.89
9     0.14                 40                6.2                    29.42
For low-carbon steel plate of thickness 1.2 mm, the minimum joint overlap is 11 mm and the minimum spot pitch is 14 mm [9]. With reference to the minimum overlap, values of 15 mm, 25 mm and 35 mm are chosen, corresponding to overlap proportions of 0.06, 0.1 and 0.14. The spot pitch commonly ranges from 40 mm to 80 mm, with about 50 mm widely used, so 96 mm, 60 mm and 40 mm are selected as the levels. The welding nugget diameter, normally between 4 mm and 7 mm [9], is set to three levels of 5.4 mm, 6.2 mm and 7 mm. The orthogonal experiment therefore has 3 factors at 3 levels each, arranged in the orthogonal table L9(3^4) as detailed in Table 2, where f̄ is the average of the first three natural frequencies in each run.

3.2 Analyses of the Simulation Results
Based on Table 2, range analysis and variance analysis are carried out, as shown in Table 3 and Table 4 respectively, and Fig 7 is obtained from Table 3. In Table 3, 1, 2 and 3 are the level numbers of each influence factor; k̄1, k̄2 and k̄3 are the mean values of f̄ over the experiments at levels 1, 2 and 3 of each factor; and Rj (j = 1, 2, 3) is the range of these mean values for each factor. In Table 4, the F value is the ratio of the mean square of each variance source (A, B, C) to the mean square of the error source D.

Table 3. Range analysis

Factor                     k̄1 (Hz)   k̄2 (Hz)   k̄3 (Hz)   Rj (Hz)
A (overlap proportion)     24.13     28.07     28.99     4.86
B (spot pitch)             26.71     26.98     27.49     0.78
C (nugget diameter)        26.96     27.07     27.15     0.19
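The range analysis of Table 3 can be reproduced from the Table 2 runs: average the response over the three runs at each level of a factor, then take the spread of those averages.

```python
import numpy as np

# Table 2: (overlap proportion, spot pitch mm, nugget diameter mm, mean frequency Hz)
runs = [
    (0.06, 96, 5.4, 23.68), (0.06, 60, 6.2, 23.97), (0.06, 40, 7.0, 24.73),
    (0.10, 96, 6.2, 27.81), (0.10, 60, 7.0, 28.08), (0.10, 40, 5.4, 28.32),
    (0.14, 96, 7.0, 28.65), (0.14, 60, 5.4, 28.89), (0.14, 40, 6.2, 29.42),
]

def range_analysis(factor):
    """Mean response at each level of one factor, and its range R."""
    levels = sorted({run[factor] for run in runs})
    k = [np.mean([run[3] for run in runs if run[factor] == lvl]) for lvl in levels]
    return k, max(k) - min(k)

ranges = {}
for name, col in [("A", 0), ("B", 1), ("C", 2)]:
    k, R = range_analysis(col)
    ranges[name] = R
    print(name, [round(v, 2) for v in k], "R =", round(R, 2))
```

This recovers the level means and ranges of Table 3 (R_A = 4.86, R_B = 0.78, R_C = 0.19), confirming the ranking A > B > C.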
Fig 7. Relationship between the average values of the first three frequencies and the influence factors (a. average value vs overlap proportion; b. average value vs spot pitch; c. average value vs nugget diameter)
From the variance analysis table [7], F0.05(2,2) = 19 and F0.01(2,2) = 99. Table 4 shows that the F value of variance source A is 400, which is greater than F0.01(2,2) = 99, so the influence of factor A (overlap proportion) on the natural frequency is highly significant (marked “**”). The F values of variance sources B and C are 9.4 and 0.6 respectively, both less than F0.05(2,2) = 19, so neither the spot pitch nor the nugget diameter has a significant influence on the natural frequency of the spot-welded sheet metal.

Table 4. Variance analysis

Variance source            SS      DF   MS     F value   Significance
A (overlap proportion)     40.01   2    20     400       **
B (spot pitch)             0.94    2    0.47   9.4
C (nugget diameter)        0.06    2    0.03   0.6
D (error)                  0.1     2    0.05
As mentioned above, Fig 7 shows that the mean of the first three frequencies increases with increasing overlap proportion and nugget diameter, but decreases with increasing spot pitch. As shown in Table 3, the three factors can be ranked by their influence on the first three frequencies as: overlap proportion > spot pitch > nugget diameter, which agrees with the variance analysis in Table 4. Table 4 also shows that, of the three factors, only the overlap proportion has a marked effect; the other two have little.
4. Conclusion

1. The “virtual node” method is put forward and used to simulate spot-welded joints; the problem of requiring positional correspondence between nodes and actual welding spots is thereby solved. In addition, the locations of the actual spot welds are modelled more precisely, and about 80% of the workload of the traditional node-to-node method can be saved. This is significant for reducing the pre-processing workload, shortening the car-body design cycle and enhancing body quality, and the method can also be adapted to FE analysis in other manufacturing industries.
2. With the orthogonal experiment, the parameters of spot-welded connection structures are examined and their effect on the modal characteristics is obtained. The degree of influence of the parameters on the natural frequency is: overlap proportion > spot pitch > nugget diameter. Among these factors, only the overlap proportion has a marked impact on the natural frequency of the spot-welded structure.
3. This modal analysis study provides an approach to improving CAE analysis precision and the NVH performance of automobiles, and can serve as a useful reference in engineering analysis and product development.

5. Acknowledgments

The authors would like to thank the technology department of Guangdong province, P.R. China, for its support through technology-planning project 2007B010-40052.
6. References

[1] Changchun Huang, Zhilin Wei, Guanglie Shen, Huijun Yin. Comparative analysis of the models of spot welds in finite element analysis. The Technology of Furnishment and Manufacture, 2006, 5:17–19
[2] Lubkin, James L. The flexibility of a tubular welded joint in a vehicle frame. SAE Transactions 740340
[3] Kan Ni, Sankaran Mahadevan. Reliability models for structural stiffness degradation due to fatigue and fracture of joints. Structures, 2004:1–9
[4] Kim, Yoon Young, et al. Stress analysis and design improvement for commercial vehicle frames with bolted joints. International Mechanical Engineering Congress and Exposition of ASME, 1996, 11:17–22
[5] C.T. Ji, Y. Zhou. Dynamic electrode force and displacement in resistance spot welding of aluminium. Journal of Manufacturing Science and Engineering, 2004, 126:605–610
[6] Wen He, Weigang Zhang, Zhihua Zhong. New finite element modelling technique for spot-welds of autobody in vehicle dynamic simulation. Automotive Engineering, 2006, 28(1):81–84
[7] Lingyu Sun, Zhuangrui Zhu, Nan Chen, Qinghong Sun. A study on the characteristics and modelling method of the spot welding interface for car body panel. Automotive Engineering, 2000, 22(1):69–72
[8] Shi Li. Applied Statistics. Tsinghua University Press, Beijing, 2005:187–204
[9] The Jointing Academy of the Chinese Mechanical Engineering Society. The Manual of Jointing (I), 1992:223–227
Precision Measurement and Reverse Motion Design of the Follower for Spatial Cams

Zhenghao Ge, Jingyang Li, Feng Xu, Xiaowei Han
Shaanxi University of Science & Technology, Xi'an, Shaanxi, 710021, China
Abstract A precision measurement method for cylindrical cams and a probe-radius compensation method are presented in this paper according to the characteristics of cylindrical cams, and a measurement procedure performing real-time probe-radius compensation is programmed. The cam contour curves are reconstructed using NURBS, and a rapid reverse design method for the cylindrical cam follower motion specification via motion simulation is also studied. Experiments prove that these methods can achieve precision measurement of cylindrical cams and reverse-design the motion specification of a cylindrical cam follower rapidly and accurately.

Keywords: Cylindrical cam mechanism, Precision measurement, Reverse design, Motion specification
1. Introduction
Spatial cams and their combined mechanisms can realise all kinds of required motion specifications and are widely used in automatic machines, for example in automatic or semi-automatic machines, internal combustion engines, forging machinery, cold-forming machinery, automatic packaging machines, printing machinery and agricultural machinery. Therefore, when digesting and absorbing equipment imported from overseas, it is necessary to recover the original design motion specification of the cam mechanism from the mechanism itself, so that many kinds of cam mechanism can be analysed and redesigned. At present there are two kinds of reverse design method for the follower motion specification of spatial cams. One is to build measurement devices with the same structure as the cam mechanism, so that the parameters of the follower motion specification can be obtained directly. The other is to measure the coordinates of the cam contour with a CMM and then build a mathematical model for the reverse design of the motion specification. But the cost of the former is too high, it has little general utility, and it is also difficult to measure the follower motion specification; the latter requires programming, is hard to popularise, and has lower efficiency. For cam mechanisms, as key parts of imported equipment, when the original design motion specification must be obtained promptly, a kind
of new, efficient and accurate reverse design method is needed. This paper investigates a precision measurement method for cylindrical cams with probe-radius compensation, and realises the reverse design of cylindrical cam follower motion specifications via motion simulation.
2. Precision Measurement of Cylindrical Cams
2.1 Building the Measurement Coordinate System
The coordinate system O_c-X_cY_cZ_c for cam measurement is built as in Figure 1. The origin O_c is located on the top surface of the cam. The X_c axis is parallel to the X axis of the machine coordinate system but points in the opposite direction. The Z_c axis coincides with the rotation axis of the cam and points upward, identical with the Z axis of the machine coordinate system.
Figure 1. Coordinate system for cylindrical cam measurement
2.2 Programming the Measurement Procedure
When programming, the valid radius R_c of the cam is taken as the measurement polar radius, and the contour curve is measured at a suitable angular step Δθ. Generally, four probe spatial postures suffice to measure the cam contour curves. In special cases more probes can be defined, but if there are too many, larger errors arise in the coordinate transformation of the data. When four probe postures are used, the angles A and B that determine each posture are:

When (θ_ic ≥ 315°) OR (θ_ic < 45°), then A = 60°, B = 180°;
When (θ_ic ≥ 45°) AND (θ_ic < 135°), then A = 60°, B = 90°;
When (θ_ic ≥ 135°) AND (θ_ic < 225°), then A = 60°, B = 0°;
When (θ_ic ≥ 225°) AND (θ_ic < 315°), then A = 60°, B = −90°.

Here A is not necessarily 60°; it is decided by the relevant parameters of the cam. The measurement procedure has been programmed in the WinMeil language from the COORD3 Corporation of Italy; part of the program is as follows:

……
STEPA=INPUT ("Input measurement step :")
STEPNUM=TRU (360/STEPA)
STEPNUM=ABS (STEPNUM)
STR1=INPUT ("Input the file name of measurement data :")
OPEN (1,"C:\WINMEIL\DATA\"+STR1+".TXT", WRITE)
PRB1=INPUT ("Input the probe number of beginning point :")
PRB (PRB1)
PRB2=0
WHILE (PRB1<>PRB2)
PRB2=PRB1
PRB1=INPUT ("Input the probe number of beginning point again :")
PRB (PRB1)
ENDWHILE
……

2.3 Probe-Radius Compensation
In the polar coordinate system, the coordinates of any point on the cam contour are (R_c, θ_ic, Z_ic), i = 0, 1, 2, …, n. R_c, the valid radius of the cam, is fixed, and θ_ic changes in equal increments; Z_ic is the only coordinate that cannot be determined directly, so probe-radius compensation needs to be applied only to Z_ic.
Figure 2. The principle drawing of probe-radius compensation
Point A in Figure 2 is to be measured. As the probe approaches along the negative Z_c direction, what it actually touches is point B near A, and what the CMM records is the probe-centre coordinate (R_c, θ_ic, Z_ic), i = 0, 1, 2, …, n, at the moment the probe sphere touches point B. After compensating the radius in this way, the coordinate of point C, (R_c, θ_ic, Z_ic − R), i = 0, 1, 2, …, n, is obtained, and AC is the measurement error at point A:

AC = OA − OC = R / cos φ_i − R = R (1 / cos φ_i − 1)    (1)

Because OB is the normal to the cam contour through point B, the angle φ_i between OA and OB is the cam pressure angle at point B. For a translating-follower cylindrical cam, the pressure angle of any point located on the valid radius satisfies [2]:

tan φ_i = k_p V = (h / (θ_h R_c)) V    (2)

where
k_p — aspect ratio of the actual-dimension cam contour curve;
V — dimensionless velocity;
h — stroke of the follower;
θ_h — cam angle corresponding to the stroke;
R_c — valid radius of the cam.
For a swing-follower cylindrical cam, the pressure angle of points located on the valid radius satisfies:

tan φ_i = k_p V = (τ_h R_r / (θ_h R_c)) V    (3)

where
τ_h — swing angle of the follower (rad);
R_r — length of the swing link.

Combining Equations (1) and (2) gives:

AC = R ( √(1 + (h V / (θ_h R_c))²) − 1 )    (4)
Similarly, combining Equations (1) and (3) gives:

AC = R ( √(1 + (τ_h R_r V / (θ_h R_c))²) − 1 )    (5)

According to the equations above, the smaller the probe radius R and the pressure angle at a point on the cam contour, the smaller the measurement error.
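The residual error of Eq. (1) can be tabulated directly. A minimal sketch, using the 0.5 mm probe-sphere radius of the example in Section 5 (the reported Table 1 values are reproduced to within rounding):

```python
import math

def compensation_error(R, pressure_angle_deg):
    """Residual error AC = R(1/cos(phi) - 1) after Z-only
    probe-radius compensation (Eq. 1)."""
    phi = math.radians(pressure_angle_deg)
    return R * (1.0 / math.cos(phi) - 1.0)

# probe-sphere radius 0.5 mm (1 mm diameter)
for ang in (5, 10, 15, 20, 26.5):
    print(f"phi = {ang:5.1f} deg  AC = {compensation_error(0.5, ang):.5f} mm")
```

The error grows rapidly with pressure angle, which is why a small probe sphere is preferred.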
3. Reconstructing the Cam Contour Curve
Nowadays NURBS curves are widely used to fit many kinds of curves; the basis function B_{i,k}(u) is local, so it is convenient to adjust the curve slightly. The degree-k curve equation is:

p(u) = Σ_{i=0}^{n} B_{i,k}(u) W_i V_i / Σ_{i=0}^{n} B_{i,k}(u) W_i = Σ_{i=0}^{n} R_{i,k}(u) V_i,  u ∈ [0, 1]    (6)

with R_{i,k}(u) = B_{i,k}(u) W_i / Σ_{j=0}^{n} B_{j,k}(u) W_j

where
V_i — control points;
W_i — weight factors;
B_{i,k}(u) — B-spline basis functions of degree k.

The cam contour curve shown in Figure 3 is obtained by NURBS fitting and smoothed by adjusting the control points.
Figure 3. Curve Fitting
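Equation (6) can be evaluated with the standard Cox-de Boor recursion. A minimal sketch (the cubic control polygon and clamped knot vector below are illustrative, not data from the paper):

```python
import numpy as np

def bspline_basis(i, k, u, knots):
    """Cox-de Boor recursion for the B-spline basis B_{i,k}(u) of degree k."""
    if k == 0:
        # half-open spans; the right end of the curve belongs to the last span
        if knots[i] <= u < knots[i + 1]:
            return 1.0
        if u == knots[-1] and knots[i] < knots[i + 1] == knots[-1]:
            return 1.0
        return 0.0
    left = right = 0.0
    if knots[i + k] > knots[i]:
        left = (u - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, u, knots)
    if knots[i + k + 1] > knots[i + 1]:
        right = (knots[i + k + 1] - u) / (knots[i + k + 1] - knots[i + 1]) * bspline_basis(i + 1, k - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, weights, knots, k=3):
    """Evaluate p(u) of Eq. 6: rational weighted combination of control points."""
    B = np.array([bspline_basis(i, k, u, knots) for i in range(len(ctrl))])
    w = np.asarray(weights, float)
    return ((B * w) @ np.asarray(ctrl, float)) / (B * w).sum()

# illustrative cubic segment with unit weights (reduces to a Bezier arc)
ctrl = [[0, 0], [1, 2], [3, 2], [4, 0]]
weights = [1, 1, 1, 1]
knots = [0, 0, 0, 0, 1, 1, 1, 1]
```

With unit weights and a clamped knot vector the curve interpolates the end control points, which gives a quick sanity check of the implementation; increasing one W_i pulls the curve toward V_i, which is how the fitted cam contour is "regulated".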
4. Reverse Design of the Motion Specification
Build the motion model of the cylindrical cam mechanism and run a motion simulation in CAE software, so that the actual motion specification curves of the follower can be acquired. The actual curves are then converted to dimensionless motion specification curves. Time t, displacement s, velocity v and acceleration a are made dimensionless by:

T = t / t_h,  S = s / h,  V = v t_h / h,  A = a t_h² / h    (7)

where
T, S, V, A — dimensionless time, displacement, velocity and acceleration;
t_h — total interval of the lift or return;
h — displacement corresponding to t_h.
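The normalisation of Eq. (7) is straightforward to apply to sampled curves. The sketch below checks it against an assumed cycloidal lift law (the stroke and lift time are illustrative, not the paper's data):

```python
import numpy as np

def dimensionless(t, s, v, a, th, h):
    """Normalise sampled motion curves per Eq. 7."""
    t, s, v, a = map(np.asarray, (t, s, v, a))
    return t / th, s / h, v * th / h, a * th**2 / h

# illustrative cycloidal lift: s = h*(t/th - sin(2*pi*t/th)/(2*pi))
th, h = 0.2, 30.0                       # assumed lift time (s) and stroke (mm)
t = np.linspace(0.0, th, 201)
s = h * (t / th - np.sin(2 * np.pi * t / th) / (2 * np.pi))
v = (h / th) * (1 - np.cos(2 * np.pi * t / th))
a = (2 * np.pi * h / th**2) * np.sin(2 * np.pi * t / th)
T, S, V, A = dimensionless(t, s, v, a, th, h)
```

For any cycloidal lift the dimensionless curves are identical (peak V = 2 at T = 0.5, peak A = 2π), which is exactly why the dimensionless form lets the motion law be identified by shape comparison.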
5. Example of Measurement and Reverse Design
To verify the spatial cam measurement and motion specification reverse design proposed in this article, an indexing cylindrical cam was taken as an example, and an actual measurement and reverse design experiment was carried out, as shown in Figure 4. The index number of this cylindrical indexing cam mechanism is 12, the motion angle is 290°, the radius of the follower pitch circle is 75 mm, the centre distance is 72.44 mm, the roller diameter is 25 mm, the cam is left-turning, and its motion specification is a modified sine.
Figure 4. Measuring the cylindrical indexing cam with a CMM
Choosing 1 mm as the probe-sphere diameter, 0.5° as the measurement step and 57 mm as the measurement polar radius, and using the probe-radius compensation method proposed in this article, real-time compensation is realised and the measurement data are converted to 2D form. The measurement data are as follows:
0.0017 0.0000 2.2394
0.4967 0.0000 2.2392
0.9943 0.0000 2.2390
1.4918 0.0000 2.2389
……
The fitted cam contour curve is shown in Figure 5; a motion model is built and a motion simulation run, so that the actual motion specification curves of the cam mechanism follower are obtained, as shown in Figure 6.
Figure 5. Cam contour curve
Figure 6. a. Displacement curve; b. Velocity curve; c. Acceleration curve
After making the motion specification dimensionless, the dimensionless motion specification curves shown in Figure 7 are obtained.
Figure 7. a. Dimensionless displacement curve; b. Dimensionless velocity curve; c. Dimensionless acceleration curve
From the dimensionless motion specification curves, the conclusion is drawn that this motion specification is a modified sine. For this example, the variation of measurement error with pressure angle is shown in Figure 8 and analysed in detail in the table below.
Figure 8. Measurement errors versus pressure angle

Table 1. Analysis of measurement error data

Pressure angle (deg)   0   2.5       5         7.5       10        15       20      26.5
Error (mm)             0   0.00045   0.00195   0.00425   0.00765   0.0176   0.032   0.0584
According to the table above, the measurement and probe-radius compensation methods proposed in this article fully satisfy the needs of motion specification reverse design, and the motion specification obtained by reverse design is identical with the real one.
6. Conclusions
This paper goes beyond pure reverse design of the cam 3D model: by reverse-designing the originally designed motion specification of the cam mechanism, the forward design of the cam mechanism can be repeated and a proper cam mechanism obtained. This reverse design method combines rapid measurement with rapid reverse design, and can be used for both spatial and planar cam mechanisms.
7. References

[1] Peng Guoxun, Xiao Zhengyang. Cam mechanism design of automatic machines [M]. Beijing: China Machine Press, 1990.
[2] Liu Changqi, M. Yang, Cao Xijing. Design of cam mechanism [M]. Beijing: China Machine Press, 2006.
[3] Guo Weizhong, Wang Shigang, Zhou Huijun. Spatial cam with oscillating follower CAD based on reverse design [J]. Computer Aided Design and Diagram Learn Journal, 1999, 11(2): 159–162.
[4] Zhu Xinxiong. The model technology of free curve line and free curve surface [M]. Beijing: Science Press, 2
Static Analysis of Translational 3-UPU Parallel Mechanism Based on Principle of Virtual Work

Xiangzhou Zheng 1, Zhiyong Deng 2, Yougao Luo 2, Hongzan Bin 3

1 School of Engineering Technology, Huazhong Agricultural University, Wuhan, P.R. China, 430070
2 The Second Ship Design Institute, China Shipbuilding Industry Corporation, Wuhan, P.R. China, 430064
3 School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, P.R. China, 430074
Abstract Static analysis of a translational 3-UPU parallel mechanism (PM) based on the principle of virtual work is presented in this paper. The translational 3-UPU PM consists of a fixed base and a movable platform connected by three identical UPU limbs, where U and P stand for universal and prismatic joints respectively. Given the configuration of the PM and the force applied to the platform, the actuating forces in the prismatic joints of each limb that keep the PM in static equilibrium can be obtained with the formulae put forward in this paper. In deciding the actuating forces, the gravity of the platform and of the three limbs is taken into account, and each limb is dissected into three parts: piston, cylinder and the oil between them. The actuating forces decided with the method presented here differ from those calculated using the Jacobian matrix, and the present method is considered more accurate. Based on the formulae in this paper, a static simulation of a 3-UPU PM used in a compensating platform for deep-ocean mineral mining is finally worked out.

Keywords: Statics, Translational 3-UPU, Parallel Kinematic Mechanism, Principle of Virtual Work
1. Introduction
A parallel kinematic mechanism (PM) consists of a moving platform and a fixed base connected by several identical extensible limbs with one actuator in each. The typical Stewart mechanism with six limbs was first presented for robotic application by Hunt [1]. Due to their parallel structure and the low inertia of the moving parts, these PMs offer the advantages of higher overall stiffness and higher operating speeds, and are used in a wide variety of applications, ranging from machine tools to robot manipulators. However, these advantages are
gained at the expense of a reduced workspace, more difficult mechanical design, and more complex kinematics and control algorithms [2]. Many alternative PM architectures with fewer limbs have been investigated to overcome the shortcomings associated with six-limb platforms. These fewer-limb PMs can enlarge the workspace and simplify kinematics and control, although at the cost of reduced stiffness and speed. An important kind of fewer-limb parallel mechanism is the one with three limbs, especially the translational 3-UPU PM presented by Raffaele Di Gregorio [3,4], as shown in Fig. 1.
Figure 1. The scheme of translational 3-UPU PM
The 3-UPU mechanism features a platform and base interconnected by three serial kinematic chains of type UPU, where U stands for the universal joint and P for the prismatic pair, which is independently actuated. When certain assembly or geometric conditions are satisfied, the platform can only translate relative to the fixed base of the 3-UPU PM [3,4]. This kind of PM can be used in different situations. A hybrid serial-parallel mechanism [5] made up of translational and spherical 3-UPU PMs has been put forward as a compensation platform in deep-ocean mining, to keep the umbilical pipe used to transfer minerals in a stable position with respect to the earth; in it, the translational 3-UPU PM compensates heave. Forward position analysis for this kind of mechanism has been researched, for example in [5]. Statics is an important issue in the design of a PM. In general, the driving forces in each translationally actuated limb can be decided by the Jacobian matrix of the PM [6] or by solving the static equilibrium equations [7]. However, the static equilibrium equations are complex by virtue of the parallel architecture, and it is difficult to extract the driving forces from them; in the Jacobian method, the generalised force to be supported by the limbs is defined ambiguously. In this paper, the statics of the translational 3-UPU PM is analysed based on the principle of virtual work. In section 2, the kinematics of the limbs is represented with kinematical
Static Analysis of Translational 3-UPU Parallel Mechanism
parameters of the platform, with each limb disassembled into three parts: cylinder, piston and oil. Taking account of the three parts of the limb produces a more accurate result. In Section 3 the virtual displacements of each limb are described in terms of those of the platform. In Section 4 the actuating force in each limb is solved by applying the principle of virtual work. An example taken from the compensating platform of deep-ocean mining is simulated in Section 5, and the forces obtained by the Jacobian method are compared with those given by the method presented in this paper. Finally, a concise conclusion is given in Section 6.
2. Kinematics of Limbs
Two parallel coordinate frames, {M:xyz} and {B:xyz}, are fixed to the platform and base respectively, as shown in Fig.1. The z-axes of the two frames are perpendicular to the planes of the platform and base respectively. The center points of the universal joints connected to the platform and base are denoted by M_i and B_i respectively, where i = 1, 2, 3; the list "i = 1, 2, 3" is omitted in what follows for simplicity. Let M = [x\ y\ z]^T be the position vector of the origin M of frame {M} in {B}, M_{mi} = [x_{mi}\ y_{mi}\ z_{mi}]^T be the position vectors of the points M_i in frame {M}, and B_i = [x_{bi}\ y_{bi}\ z_{bi}]^T be the position vectors of the points B_i in frame {B}. The position vectors of the points M_i in frame {B} can then be expressed as:

M_i = M_{mi} + M    (1)

From the geometry of the translational 3-UPU, the vector along each limb is:

L_i = M_i - B_i    (2)

The length of each limb follows from Eq. 2:

L_i = |L_i|    (3)

and the unit vector along each limb is:

l_i = L_i / L_i = [l_{i1}\ l_{i2}\ l_{i3}]^T    (4)

In Cartesian component form, the length of a limb can be written as:

L_i^2 = (x + x_{mi} - x_{bi})^2 + (y + y_{mi} - y_{bi})^2 + (z + z_{mi} - z_{bi})^2    (5)

Differentiating Eq. 5 with respect to time gives:

L_i \dot{L}_i = (x + x_{mi} - x_{bi})\dot{x} + (y + y_{mi} - y_{bi})\dot{y} + (z + z_{mi} - z_{bi})\dot{z}    (6)
X. Zheng, Z. Deng, Y. Luo and H. Bin
where \dot{M} = [\dot{x}\ \dot{y}\ \dot{z}]^T is the velocity vector of the platform, and \dot{L}_i is the sliding speed of the upper part of each limb with respect to the lower part. By virtue of the pure translation of the platform and Eqs. 1 and 2, the following relation holds:

\dot{L}_i = \dot{M}_i = \dot{M} = [\dot{x}\ \dot{y}\ \dot{z}]^T    (7)
Assembling Eq. 6 for the three limbs, Eq. 6 can be written in matrix form:

B\dot{L} = A\dot{M}    (8)

where \dot{L} = [\dot{L}_1\ \dot{L}_2\ \dot{L}_3]^T, B = \mathrm{diag}(L_1, L_2, L_3) and

A = \begin{bmatrix} L_1^T \\ L_2^T \\ L_3^T \end{bmatrix} = \begin{bmatrix} x + x_{m1} - x_{b1} & y + y_{m1} - y_{b1} & z + z_{m1} - z_{b1} \\ x + x_{m2} - x_{b2} & y + y_{m2} - y_{b2} & z + z_{m2} - z_{b2} \\ x + x_{m3} - x_{b3} & y + y_{m3} - y_{b3} & z + z_{m3} - z_{b3} \end{bmatrix}

If the inverse of matrix A exists, the kinematic relation between the limbs and the platform can be obtained:

\dot{M} = A^{-1}B\dot{L}    (9)
It is obvious that the matrix A^{-1}B is the Jacobian matrix of the parallel mechanism. Viewed from the limb, the motion of point M_i has two components: rotation of the limb around point B_i and sliding of the upper part along the lower part of the limb. If the velocity \dot{M} of M_i is known, the sliding speed of the upper part can be found by taking the scalar product of the unit vector l_i with \dot{M}:

\dot{L}_i = l_i \cdot \dot{M}    (10)

and the angular velocity of the limb is:

\omega_i = (l_i \times \dot{M}) / L_i    (11)
Assume the three limbs are driven by hydraulic cylinders to support the platform. Each limb is dissected into three parts: piston, cylinder and oil. The subscripts "u", "b" and "o" denote parameters of the piston, cylinder and oil respectively. Let m_u, m_b and m_{oi} be their masses, l_u, l_b and l_{oi} the lengths of each part, and r_u, r_b and r_{oi} the positions of their centers of mass, as shown in Fig.2.
Figure 2. The dissection of limbs of translational 3-UPU
In accordance with the dissection of the limbs and Eqs. 10 and 11, the velocity of the center of mass of the piston can be written as:

V_{ui} = \dot{L}_i\, l_i + (L_i - r_u)\,\omega_i \times l_i    (12)

Substituting Eqs. 10 and 11, Eq. 12 can be rearranged into the matrix form:

V_{ui} = A_{ui}\dot{M}    (13)

where

A_{ui} = \begin{bmatrix} c_{ui1} + c_{ui2} l_{i1}^2 & c_{ui2} l_{i1} l_{i2} & c_{ui2} l_{i1} l_{i3} \\ c_{ui2} l_{i1} l_{i2} & c_{ui1} + c_{ui2} l_{i2}^2 & c_{ui2} l_{i2} l_{i3} \\ c_{ui2} l_{i1} l_{i3} & c_{ui2} l_{i2} l_{i3} & c_{ui1} + c_{ui2} l_{i3}^2 \end{bmatrix}, \quad c_{ui1} = \frac{L_i - r_u}{L_i}, \quad c_{ui2} = 1 - c_{ui1}.

By the same method, the velocities of the centers of mass of the oil and cylinder are respectively:

V_{oi} = c_{oi} A_i \dot{M}    (14)

V_{bi} = c_{bi} A_i \dot{M}    (15)

where c_{oi} = r_{oi}/L_i, c_{bi} = r_b/L_i and

A_i = \begin{bmatrix} 1 - l_{i1}^2 & -l_{i1} l_{i2} & -l_{i1} l_{i3} \\ -l_{i1} l_{i2} & 1 - l_{i2}^2 & -l_{i2} l_{i3} \\ -l_{i1} l_{i3} & -l_{i2} l_{i3} & 1 - l_{i3}^2 \end{bmatrix}.
When the sliding speed of the piston is zero, i.e. \dot{L}_i = 0, Eqs. 13 to 15 reduce to the simpler forms:

V_{ui} = \frac{L_i - r_u}{L_i}\dot{M}    (16)

V_{oi} = \frac{r_{oi}}{L_i}\dot{M}    (17)

V_{bi} = \frac{r_b}{L_i}\dot{M}    (18)
3. Virtual Displacements in the PM
Let \delta x_j, \delta y_j and \delta z_j be the virtual displacements of the platform associated with the j-th actuating prismatic joint, and let \delta l be the virtual displacement of the j-th actuating prismatic joint. When the configuration of the 3-UPU is not singular, the virtual displacement of the platform can be computed from Eq. 9:

\delta M_j = A^{-1}B\,\delta L_j, \quad j = 1, 2, 3    (19)

where \delta M_j = [\delta x_j\ \delta y_j\ \delta z_j]^T, \delta L_1 = E_1\delta l, \delta L_2 = E_2\delta l, \delta L_3 = E_3\delta l and \delta l \neq 0, in which E_1^T = [1\ 0\ 0], E_2^T = [0\ 1\ 0] and E_3^T = [0\ 0\ 1]. It should be noted that when the j-th prismatic joint actuates to produce a displacement of the platform, the limbs other than the j-th limb only rotate. Applying Eqs. 13 to 18, the virtual displacements of each part of the limbs can be calculated from those of the platform:
\delta l_{uij} = \bar{A}_{u}^{i}\,\delta M_j    (20)

where \bar{A}_{u}^{i} = A_{ui} for i = j and \bar{A}_{u}^{i} = \frac{L_i - r_u}{L_i}E for i \neq j, in which E is the identity matrix.

\delta l_{oij} = \bar{A}_{o}^{i}\,\delta M_j    (21)

where \bar{A}_{o}^{i} = c_{oi}A_i for i = j and \bar{A}_{o}^{i} = \frac{r_{oi}}{L_i}E for i \neq j.

\delta l_{bij} = \bar{A}_{b}^{i}\,\delta M_j    (22)

where \bar{A}_{b}^{i} = c_{bi}A_i for i = j and \bar{A}_{b}^{i} = \frac{r_b}{L_i}E for i \neq j.

4. The Actuating Forces in Limbs
Let F_p be the force applied to the platform, and F_j (j = 1, 2, 3) the actuating forces in each limb required to balance the translational 3-UPU PM. The gravity forces of the three parts of each limb and of the platform are:

W_u = [0\ 0\ -m_u g]^T, \quad W_{oi} = [0\ 0\ -m_{oi} g]^T, \quad W_b = [0\ 0\ -m_b g]^T, \quad W_{mp} = [0\ 0\ -m_p g]^T,

where m_p is the mass of the platform and g the gravitational acceleration. Based on the principle of virtual work [8], it holds that:
F_j\,\delta l + (F_p + W_{mp})\cdot\delta M_j + \sum_{i=1}^{3}\left(W_u\cdot\delta l_{uij} + W_{oi}\cdot\delta l_{oij} + W_b\cdot\delta l_{bij}\right) = 0    (23)
Because the platform translates relative to the base, any torque applied to the platform does no work and is omitted in Eq. 23. Substituting Eqs. 20 to 22, Eq. 23 simplifies to:

F_j\,\delta l + (F_p^T + W_{mp}^T)A^{-1}BE_j\,\delta l + \sum_{i=1}^{3}\left(W_u^T\bar{A}_{u}^{i} + W_{oi}^T\bar{A}_{o}^{i} + W_b^T\bar{A}_{b}^{i}\right)A^{-1}BE_j\,\delta l = 0
Because the virtual displacement \delta l is arbitrary and \delta l \neq 0, the actuating forces can be obtained from Eq. 23 as:

F_j = -\left[F_p^T + W_{mp}^T + \sum_{i=1}^{3}\left(W_u^T\bar{A}_{u}^{i} + W_{oi}^T\bar{A}_{o}^{i} + W_b^T\bar{A}_{b}^{i}\right)\right]A^{-1}BE_j    (24)
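The closed-form force formula of Eq. 24 can be sketched numerically. The following is an assumption-laden illustration, not the paper's code: the geometry, masses and center-of-mass offsets are invented, gravity acts along -Z of the base frame, and the per-limb matrices follow Eqs. 13-18 as reconstructed above.

```python
import numpy as np

def actuating_forces(L, Li, li, Fp, masses, offsets, g=9.81):
    """Closed-form actuating forces F_j of Eq. 24 (principle of virtual work).

    masses  = (m_u, m_oi[3], m_b, m_p); offsets = (r_u, r_oi[3], r_b).
    """
    m_u, m_oi, m_b, m_p = masses
    r_u, r_oi, r_b = offsets
    AinvB = np.linalg.inv(L) @ np.diag(Li)   # Jacobian matrix of Eq. 9
    Wu = np.array([0.0, 0.0, -m_u * g])
    Wb = np.array([0.0, 0.0, -m_b * g])
    Wmp = np.array([0.0, 0.0, -m_p * g])
    E3 = np.eye(3)
    F = np.zeros(3)
    for j in range(3):
        dMj = AinvB @ E3[:, j]               # Eq. 19 with delta_l = 1
        total = (Fp + Wmp) @ dMj
        for i in range(3):
            Woi = np.array([0.0, 0.0, -m_oi[i] * g])
            if i == j:                       # the actuated limb slides and rotates
                Ai = E3 - np.outer(li[i], li[i])
                cu1 = (Li[i] - r_u) / Li[i]
                Au = cu1 * E3 + (1 - cu1) * np.outer(li[i], li[i])  # Eq. 13
                Ao = (r_oi[i] / Li[i]) * Ai                          # Eq. 14
                Ab = (r_b / Li[i]) * Ai                              # Eq. 15
            else:                            # a non-actuated limb only rotates
                Au = ((Li[i] - r_u) / Li[i]) * E3                    # Eq. 16
                Ao = (r_oi[i] / Li[i]) * E3                          # Eq. 17
                Ab = (r_b / Li[i]) * E3                              # Eq. 18
            total += Wu @ (Au @ dMj) + Woi @ (Ao @ dMj) + Wb @ (Ab @ dMj)
        F[j] = -total                        # Eq. 24
    return F

# Assumed symmetric example (not the paper's Section 5 data)
Mm = np.array([[1.0, 0.0, 0.0], [-0.5, 0.866, 0.0], [-0.5, -0.866, 0.0]])
Bb = np.array([[2.0, 0.0, 0.0], [-1.0, 1.732, 0.0], [-1.0, -1.732, 0.0]])
M = np.array([0.0, 0.0, 2.0])
L = Mm + M - Bb
Li = np.linalg.norm(L, axis=1)
li = L / Li[:, None]
Fp = np.array([0.0, 0.0, -1e4])
F = actuating_forces(L, Li, li, Fp,
                     (280.0, np.full(3, 30.0), 450.0, 6000.0),
                     (0.5, np.full(3, 1.2), 0.4))
print(F)  # three equal forces, by symmetry of the assumed geometry
```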
5. Simulation of an Example
In the following simulation example, the translational 3-UPU PM is used to compensate heave in a deep-ocean mining platform; SI units are used throughout.
In this example, the Y-axis of frame {B} is parallel to B_1B_2, and the Y-axis of frame {M} is parallel to M_1M_2. The geometric parameters used in the simulation are: B_1 = [-4.9\cos 30°\ \ -4.9/2\ \ 0]^T m, B_2 = [4.9\cos 30°\ \ -4.9/2\ \ 0]^T m, B_3 = [0\ \ 4.9\ \ 0]^T m, M_1 = [-3\cos 30°\ \ -3/2\ \ 0]^T m, M_2 = [3\cos 30°\ \ -3/2\ \ 0]^T m, M_3 = [0\ \ 3\ \ 0]^T m, r_b = 1.8 m, m_b = 4500 kg, r_u = 2.42 m, m_u = 2800 kg, m_p = 6.2\times10^4 kg. In different configurations, the mass and center of mass of the oil are calculated as:

m_{oi} = 279.6 + 176.7(L_i - 7.33)\ \mathrm{kg}    (25)

r_{oi} = 1.62 + (L_i - 7.33)/2\ \mathrm{m}    (26)
Assume the external force applied to the platform is F_p = -2\times10^6\,\mathbf{k}\ \mathrm{N}, where \mathbf{k} is the unit vector along the Z-axis of frame {B}. The actuating joint forces needed to balance the translational 3-UPU parallel mechanism can also be determined by the Jacobian method, using the formula:

f = J^T F = (A^{-1}B)^T F    (27)

where f and F are the actuating forces of the limbs and the generalized force applied at the center of the platform respectively, and the matrices A and B are defined by Eq. 8. When the generalized force F is known, the actuating forces f are obtained from Eq. 27. The forces applied to the platform include the external force, the gravity force of the platform and the forces exerted by the limbs. The question is which forces should be included in the generalized force F: different choices lead to different results for f. When the platform is at different locations along the straight line X = Y = 0, the actuating forces of the three limbs are shown in Fig.3(a,b); the three forces are identical because of the symmetric geometry of the PM and coincide in a single curve. In Fig.3(a), the actuating forces that take account of the oil are shown with a solid line, and the forces that neglect the oil with a dashed curve; the difference between them increases slightly with Z. In Fig.3(b), the curve labelled "Jacobian" is the actuating force determined with Eq. 27, and the one labelled "F" is solved with the method presented in this paper. In the Jacobian-method calculation, all gravity forces of the platform and limbs are transformed into a generalized force applied to the platform. There is a fairly large difference between the two sets of forces, although they follow the same trend with the location of the platform. In Fig.4, the platform locations are varied along the straight line Y = 0, Z = 6; the actuating forces of the three limbs are shown in Fig.4(a), (b) and (c) respectively, together with the forces calculated by the Jacobian method. As Fig.4 shows, the difference between the two sets of forces varies with the configuration of the PM.
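The Jacobian-method calculation of Eq. 27 can be sketched as follows. The geometry and load values here are an assumed symmetric configuration, not the paper's Section 5 data; gravity loads are lumped into the generalized platform force as the paper describes for this method.

```python
import numpy as np

# Assumed symmetric geometry (illustrative only)
Mm = np.array([[1.0, 0.0, 0.0], [-0.5, 0.866, 0.0], [-0.5, -0.866, 0.0]])
Bb = np.array([[2.0, 0.0, 0.0], [-1.0, 1.732, 0.0], [-1.0, -1.732, 0.0]])
M = np.array([0.0, 0.0, 2.0])
L = Mm + M - Bb
Li = np.linalg.norm(L, axis=1)
AinvB = np.linalg.inv(L) @ np.diag(Li)   # Jacobian matrix of Eq. 9

# Generalized force: external load plus all gravity loads lumped at the platform
# (assumed masses: platform 6000 kg, per-limb piston/oil/cylinder 280/30/450 kg)
Fgen = np.array([0.0, 0.0, -1e4 - 9.81 * (6000.0 + 3 * (280.0 + 30.0 + 450.0))])
f = AinvB.T @ Fgen                        # Eq. 27: f = (A^-1 B)^T F
print(f)  # three equal components, by symmetry
```

The ambiguity the paper points out lives in the line that builds `Fgen`: a different choice of which gravity terms to lump at the platform yields a different `f`.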
Figure 3. Actuating forces when the platform is at different locations along X = Y = 0: (a) with and without oil; (b) comparison with the Jacobian method

Figure 4. Actuating forces according to the two methods when the platform is at locations along the line Y = 0, Z = 6: (a) F1; (b) F2; (c) F3
6. Summary
In this paper, a method based on the principle of virtual work is put forward to determine the actuating joint forces of the translational 3-UPU parallel mechanism in static equilibrium, and a closed-form formula is derived. The authors consider that the composition of the generalized force applied to the platform in the Jacobian method is ambiguous, and indeed the two sets of actuating forces obtained with the present method and with the Jacobian method differ. The method presented in this paper is more accurate than the Jacobian method.
7. References
[1] K.H. Hunt, Kinematic Geometry of Mechanisms, Clarendon Press, Oxford, 1978.
[2] B. Dasgupta, T.S. Mruthyunjaya, The Stewart platform manipulator: a review, Mechanism and Machine Theory, Vol.35, No.1, pp.15-40, 2000.
[3] R. Di Gregorio, V. Parenti-Castelli, Mobility analysis of the 3-UPU parallel mechanism assembled for a pure translational motion, 1999 IEEE/ASME Int. Conf. on Advanced Intelligent Mechatronics, pp.520-525.
[4] R. Di Gregorio, V. Parenti-Castelli, A Translational 3-DOF Parallel Manipulator, Advances in Robot Kinematics: Analysis and Control, Kluwer Academic Publishers, 1998, pp.49-58.
[5] X. Zheng, H. Bin, Y. Luo, Kinematic Analysis of a Hybrid Serial-Parallel Manipulator, International Journal of Advanced Manufacturing Technology, Vol.23, pp.925-930, 2004.
[6] K.-M. Lee, R. Johnson, Static characteristics of an in-parallel actuated manipulator for clamping and bracing applications, IEEE International Conference on Robotics and Automation, Vol.3, pp.1408-1413, 1989.
[7] R. Di Gregorio, Statics and Singularity Loci of the 3-UPU Wrist, 2001 IEEE/ASME Int. Conf. on Advanced Intelligent Mechatronics Proc., pp.470-475.
[8] H. Goldstein, Classical Mechanics, Higher Education Press, Beijing, 2004.
A Natural Frequency Variable Magnetic Dynamic Absorber
Chengjun Bai, Fangzhen Song
School of Mechanical Engineering, Jinan University, Jinan 250022, China, [email protected]
Abstract: A new magnetic dynamic absorber, and a method for tuning its natural frequency, is presented in this paper. The absorber's natural frequency is controlled by changing the clearance between the rotor's surface and the electrical magnet's inner surface. The structure and dynamic characteristics of this kind of absorber are analyzed when its natural frequency is tuned in this way, and a formula for regulating the natural frequency by changing the clearance is also derived. Keywords: Magnetic Dynamic Absorber; Rotor; Vibration Control; Natural Frequency
1. Introduction
With the development of technology, the need for higher rotor speeds is greater than ever before. In many machines the rotor speed exceeds 100,000 rev/min, so the rotor usually works above its critical speed. When such a machine starts or stops, the rotor speed passes through the critical speed and the rotor may vibrate. Furthermore, the imbalance of the rotor itself, varying external loads, the mounting precision of the rotor and the support capacity of the bearings may also cause the rotor to vibrate. These factors produce noise, may damage the machine and degrade the rotating precision, so vibration control has become a hot topic in the study of mechanics [1, 3]. The most popular means of reducing vibration is a dynamic absorber. A magnetic absorber has the advantages of no contact and no lubrication, and its stiffness and damping can be changed according to the rotor's vibration [4]; when a magnetic bearing is used in a dynamic absorber, the absorber's working range is larger than that of a traditional mechanical absorber [5, 6]. In this paper, a new magnetic absorber is presented to reduce the vibration caused by structural imbalances, in which the natural frequency is controlled by changing the clearance between the rotor and the magnetic absorber. A calculation formula is also presented.
2. The Principle of the Magnetic Dynamic Absorber
The structure of the magnetic dynamic absorber whose natural frequency is controlled by changing the clearance is shown in Figure 1. It consists of an electrical magnet, which has a larger diameter at one end than at the other, together with springs, a screw, a motor, a guide way and some other components that support and drive it. The electrical magnet can move freely in the plane perpendicular to the rotor's axis, while its movement in the direction parallel to the rotor's axis is controlled accurately. A current runs in the magnet's coil; when the current remains unchanged, the absorber's natural frequency is determined by the clearance between the rotor and the magnetic absorber. The absorber can therefore be moved in the direction parallel to the rotor's axis to regulate its natural frequency. The principle of the magnetic dynamic absorber is illustrated in Figure 2. First, the sensor detects the vibration signal of the rotor, from which the vibration frequency of the rotor is obtained. Second, the vibration frequency is compared with the absorber's natural frequency. Finally, the controller drives the motor to move the absorber back and forth according to the difference between the two frequencies. When the vibration frequency equals the absorber's natural frequency, the rotor's vibration is absorbed by the magnetic dynamic absorber.
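The sense-compare-move loop just described can be sketched in code. This is an invented illustration of the principle only, not the authors' controller: the function names, the simple proportional update and the toy frequency model are all assumptions.

```python
def tune_absorber(measure_rotor_freq, natural_freq, move_axially,
                  tol=1.0, gain=1e-5, max_steps=1000):
    """Move the absorber along the rotor axis until its natural frequency
    matches the measured rotor vibration frequency (illustrative sketch)."""
    y = 0.0                                 # axial position of the absorber
    for _ in range(max_steps):
        f_rotor = measure_rotor_freq()      # step 1: sensor reads rotor vibration
        f_abs = natural_freq(y)             # step 2: compare with absorber frequency
        err = f_rotor - f_abs
        if abs(err) < tol:                  # tuned: vibration can be absorbed
            return y
        y += gain * err                     # step 3: motor moves the absorber
        move_axially(y)
    return y

# Dry run with toy models: natural frequency rises linearly with position y
pos_log = []
y_final = tune_absorber(lambda: 500.0,
                        lambda y: 200.0 + 1e4 * y,
                        pos_log.append)
print(abs(500.0 - (200.0 + 1e4 * y_final)) < 1.0)  # prints True
```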
Figure 1. The Structure of the Magnetic Dynamic Absorber with Natural Frequency Controlled by Changing Clearance (labelled parts: damper, spring, bearing, rotor, screw, electrical magnet, motor, guide way)
The absorber's model is:

m\ddot{x} + c\dot{x} + kx = F    (1)

F = \frac{\mu_0 N^2 A}{4}\left[\frac{I_0^2}{(\delta_0 - x)^2} - \frac{I_0^2}{(\delta_0 + x)^2}\right]\cos\alpha    (2)

The absorber, of mass m, is mounted on the base by springs of stiffness k and a damper of damping c; F is the electromagnetic force acting on the mass m. \mu_0 is the magnetic permeability of vacuum, \delta_0 is the clearance between the surface of the rotor and the inner surface of the electrical magnet, I_0 is the current in the coil, A is the effective pole area, N is the number of turns of the coil and \alpha is the cone angle of the rotor's surface. When the displacement x is small enough, the force can be linearized as:

F = K_x x = \frac{\mu_0 N^2 A I_0^2\cos\alpha}{\delta_0^3}\,x    (3)

m\ddot{x} + c\dot{x} + \left(k - \frac{\mu_0 N^2 A I_0^2\cos\alpha}{\delta_0^3}\right)x = 0    (4)
From the above, the absorber's natural frequency is:

\omega_0 = \sqrt{\frac{k - K_x}{m}} = \sqrt{\frac{k}{m} - \frac{\mu_0 N^2 A I_0^2\cos\alpha}{\delta_0^3\, m}}    (5)

Here

\delta_0 = (y - y_0)\sin\alpha    (6)

so that

\omega_0 = \sqrt{\frac{k}{m} - \frac{\mu_0 N^2 A I_0^2\cos\alpha}{(y - y_0)^3\sin^3\alpha\, m}}    (7)
In Eq. 7, y_0 is the absorber's initial displacement and y is the absorber's displacement in the direction of the rotor's axis. It can be seen that the absorber's natural frequency \omega_0 changes as the absorber moves in the direction of the rotor's axis.
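A quick numerical check of the linearization leading from Eq. 2 to Eq. 3 can be sketched as below. Parameter values follow Table 1 of the paper; the operating clearance and current are illustrative assumptions.

```python
import math

mu0 = 4 * math.pi * 1e-7          # vacuum permeability, H/m
N, A, alpha = 200, 140e-6, math.pi / 8   # Table 1 values (A in m^2)
I0, delta0 = 0.5, 2e-4            # A, m (assumed operating point)

def F(x):
    """Differential magnet force of Eq. 2 about the clearance delta0."""
    return (mu0 * N**2 * A * I0**2 * math.cos(alpha) / 4
            * (1 / (delta0 - x)**2 - 1 / (delta0 + x)**2))

# Eq. 3 stiffness from the linearization
Kx = mu0 * N**2 * A * I0**2 * math.cos(alpha) / delta0**3

x = 1e-6                          # small displacement compared with delta0
print(F(x) / x / Kx)              # ratio is close to 1 for small x
```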
Figure 2. The Principle of the Magnetic Dynamic Absorber (labelled parts: electrical magnet, rotor, driver, sensor, motor, controller)
3. The Dynamic Characteristics Analysis of the Absorber

The magnetic dynamic absorber's parameters are listed in Table 1.

Table 1. Parameters of the Magnetic Dynamic Absorber
Parameter    Value
m            0.92 kg
N            200 turns
A            140 mm^2
\alpha       \pi/8
y_0          0.4 mm
When the current in the coil remains unchanged, the relationship between the magnetic dynamic absorber's natural frequency and the absorber's displacement along the rotor's axis is shown in Figures 3 and 4, for stiffnesses of 7,000,000 N/m and 1,000,000 N/m respectively. From Figures 3 and 4 we can see that it is very easy to tune the natural frequency: for example, to change the natural frequency from 0 Hz to 1000 Hz, the required change in displacement is less than 0.03 cm. When instead the displacement of the absorber is held constant and the current in the coil is varied, the natural frequency changes as shown in Figures 5 and 6, which indicate that a large current variation is needed to change the natural frequency.
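The frequency-versus-displacement behaviour discussed above can be reproduced from Eq. 7. This sketch uses the Table 1 parameters; the sign convention of Eq. 6 and the chosen k and I_0 are assumptions consistent with the text, and \omega_0 here is in rad/s.

```python
import math

mu0 = 4 * math.pi * 1e-7
N, A, alpha, m, y0 = 200, 140e-6, math.pi / 8, 0.92, 0.4e-3
k, I0 = 7e6, 0.5      # N/m and A, as in the k = 7,000,000 N/m figure

def omega0(y):
    """Natural frequency of Eq. 7 for axial absorber position y (m)."""
    delta0 = (y - y0) * math.sin(alpha)       # Eq. 6 (assumed sign convention)
    Kx = mu0 * N**2 * A * I0**2 * math.cos(alpha) / delta0**3
    return math.sqrt((k - Kx) / m)            # Eq. 5 with Eq. 6 substituted

# Moving the absorber closer to the rotor (smaller clearance) lowers omega0
print(omega0(1.0e-3) > omega0(0.7e-3))        # prints True
```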
Figure 3. The natural frequency (\omega_0) versus displacement y when k = 7,000,000 N/m (curves for I_0 = 0.2 A, 0.5 A and 1 A)

Figure 4. The natural frequency (\omega_0) versus displacement y when k = 1,000,000 N/m (curves for I_0 = 0.2 A, 0.5 A and 1 A)

Figure 5. The natural frequency (\omega_0) versus current I_0 when k = 7,000,000 N/m (curves for y_0 = 0.005 cm, 0.04 cm and 0.1 cm)

Figure 6. The natural frequency (\omega_0) versus current I_0 when k = 1,000,000 N/m (curves for y_0 = 0.005 cm, 0.04 cm and 0.1 cm)
4. Conclusion
From the analysis and the figures shown above, we can see that the natural frequency of the magnetic absorber can be tuned by changing the absorber's displacement in the direction of the rotor's axis. When the displacement is small (less than 0.005 cm), the tuning range of the natural frequency is wide, as much as 2,700 Hz, and the current needed is also small; similarly, when the stiffness of the spring between the rotor and the absorber is changed, the range of the absorber's natural frequency changes accordingly. When the displacement exceeds 0.04 cm, a large current is needed to change the natural frequency of the absorber. In the other case, when the current in the coil is changed while the displacement stays the same, the natural frequency of the absorber also changes. Here the linearity is obvious, but the tuning range of the natural frequency is narrower than with the method of changing the clearance between the rotor's surface and the inner surface of the electrical magnet. Moreover, when the clearance is larger than 0.04 cm and the vibration frequency is low, changing the natural frequency by varying the electrical current requires a large coil current, which results in high power consumption over long periods of time.
5. References
[1] Zhu Meiling, Wang Fengquan, "Theoretical and Experimental Research on Active Vibration Control System Using Electromagnetic Actuator", Journal of Vibration Engineering, vol.8, no.1, pp.80-84, March 1995.
[2] Song Fangzhen, Song Bo, Shao Haiyan, Chang Sufang, "Research on the on-line monitoring technique of magnetic dynamic absorber type", Journal of Jinan University (Sci. & Tech.), vol.20, no.3, pp.281, July 2006.
[3] Song Fangzhen, Feng Dezhen, Song Bo, Sun Xuan, "A method for controlling multifrequency unbalance response of a rotor with magnetically levitated dynamic absorber", Mechanical Science and Technology, vol.23, no.2, pp.170-173, May 2004.
[4] Huang Dezhong, Ge Suzhen, Chao Mengsheng, "The Study on Taper Mix Magnetic Bearings", Machine Tool & Hydraulics, vol.3, no.2, pp.132-134, March 2004.
[5] Li Kuinian, Cheng Yuelin, "Electro-Magnetic Natural Frequency Controllable Dynamic Vibration Absorber", Journal of Guizhou University of Technology, vol.26, pp.123-127, October 1997.
[6] Zhu Meiling, Wang Fengquan, "A Study on Electromagnetic Actuator and Its Active Vibration Control System", Journal of Vibration Measurement & Diagnosis, vol.15, no.64, pp.52-56, March 1995.
Chapter 6 Manufacturing Systems Design
Next Generation Manufacturing Systems ... 701
R.H. Weston and Z. Cui
Tooling Design and Fatigue Life Evaluation via CAE Simulation for Metal Forming ... 711
W.L. Chan, M.W. Fu, J. Lu
Modelling of Processing Velocity in Computer-controlled Sub-aperture Pad Manufacturing ... 721
H. Cheng, Y. Yeung, H. Tong, Y. Wang
Load Balancing Task Allocation of Collaborative Workshops Based on Immune Algorithm ... 729
XiaoYi Yu, ShuDong Sun
Study on Reconfigurable CNC System ... 743
Jing Bai, Xiansheng Qin, Wendan Wang, Zhanxi Wang
Development of a NC Tape Winding Machine ... 753
Yao-Yao Shi, Hong Tang, Qiang Yu
TRIZ-based Evolution Study for Modular Fixture ... 763
Jin Cai, Hongxun Liu, Guolin Duan, Tao Yao, Xuebin Chen
Study on the Application of ABC System in the Refinery Industry ... 773
Chunhe Wang, Linhai Shan, Ling Zhou, Guoliang Zhang
The Application of Activity-Based Cost Restore in the Refinery Industry ... 783
Xingdong Liu, Ling Zhou, Linhai Shan, Fenghua Zhang, Qiao Lin
Research on the Cost Distribution Proportionality of Refinery Units ... 793
Fen Zhang, Yanbo Sun, Chunhe Wang, Xinglin Han, Qiusheng Wei
Next Generation Manufacturing Systems R.H. Weston and Z. Cui MSI Research Institute and UK Centre of Excellence for Customised Assembly (CECA), Loughborough University, Leicestershire, UK
Abstract: The average economic life of production systems will reduce as product lifetimes reduce, unless the next generation of these systems is sufficiently flexible to realise the customised assembly of multiple product types. Hence, a new opportunity exists for MEs to realise a step change in production systems delivery and change, leading to world class, high value, small to medium volume production. This paper reports on developments that promise such a step change, initially in the automotive, aerospace and construction equipment industries, with roll out to other sectors. It describes how integrated people, product, process and plant (ip4) virtual environments and innovative forms of reusable assembly system components are being developed and used to full life cycle engineer large scale assembly systems. Keywords: reconfigurable manufacturing systems; virtual environments; component based modelling; large scale assembly systems; product dynamics
1. Introduction
With increasing complexity and change in business environments, manufacturing enterprises (MEs) need to respond competitively to growing uncertainty with respect to the types and quantities of products they must make over short and long time frames [1, 2]. This paper considers the impacts of 'product dynamics' in respect of an emergent need to create and deploy 'change capable manufacturing systems', and illustrates how business benefits and competitive advantage can be gained. It is observed that industry requires new forms of manufacturing system which are:

- 'flexibly integrated' into the 'specific dynamic business contexts' in which they will be used
- 'recomposed, reconfigured and reprogrammed', such that economies of 'scope' and 'scale' can be realised.
This paper also explains how synergistic use of state of the art modelling technologies can lead to a 'step change' in the full life cycle engineering of large and small scale manufacturing systems. The approach described is capable of representing and computer-executing mixed reality manufacturing systems, and by so doing supports team-based engineering of:
1. manufacturing systems that suit the specific business environment in which they will be used
2. strategic development of manufacturing systems, enabling quantitative reasoning about manufacturing policy change, systems recomposition, quantitative capability analysis and investment planning, and
3. ongoing reconfiguration and reprogramming of manufacturing systems components, in response to specific cases of product dynamics.

2. Full Life Cycle Engineering Requirements Defined
Here automotive industry best practice in engineering large scale assembly systems is outlined [3, 4]. The purpose of doing so is to indicate how virtual environments have the potential to realise a step change in best practice industry wide, such that the full life cycle engineering of next generation manufacturing systems can lead to business benefits. Figure 1 illustrates elements of a typical automotive engine assembly line.
Figure 1. Conceptualisation of current best practice assembly systems engineering (elements shown include local and zone controllers, manual and hand tools, OEM supplied robotic systems and dedicated machines, test stations, rework pull-off spurs, fixed stops, RF tags and engine/work flow through many work stations at a Takt time of typically 40 s)
Often such a line comprises some 40 to 70 workstations that range from largely automated to primarily manual, depending on the specific assembly operations required at each stage of value generation. Common practice is to 'pull' engines through the work stations at a specific Takt time, which determines the production rate at which engine products are output from the line. This requires complex component feeding operations and effective synchronisation of operations and work flows. Distributed controls are also used to achieve information support and process synchronisation.
Figure 2 Conceptualisation of Current Best Practice Assembly Systems Engineering
Current auto industry assembly systems engineering practice is world class and is conceptualised by Figure 2. But significant constraints remain with respect to making custom products. Inherent complexity levels necessitate formation of multidisciplinary teams, some affiliated to end user manufacturers, others to OEM equipment and technology vendors. Those teams can be distributed globally and typically comprise 10 to 100 persons. The perspectives of team members on ‘what is required’, ‘possible conceptual and detailed solutions’, and on ‘runtime operations’ and ‘support services’ are different but complementary. But they are all concerned with people, product, process and plant (p4) issues, which themselves have complex interdependencies. What team members do and when, is structured by proven methods and enabled by many kinds of computer tool and information support system. Methods used are generally Manufacturing Enterprise specific but build upon widely known approaches to systems engineering, software and database engineering, control systems engineering, waste reduction, process synchronisation, etc. One major constraint of current best practice is that necessary requirements to reprogram and reconfigure (collective and individual) operations of workstations (that typically comprise a large scale auto assembly system or ‘line’) need largely to be determined during first off systems engineering. This requires significant foresight about needed processing routes and operations, for all product variants, that must be realised by the assembly system during its intended lifetime. To some
extent the remaining uncertainties can be mitigated by embedding redundant capabilities into assembly systems, but in general such an approach cannot lead to economic and timely assembly of multiple product types (with their different ramp up and down profiles) over extended time periods. It follows that current best practice first off engineering is very costly. Also, as current practice facilitates only limited externalisation and integrated reuse of p4 knowledge and data, subsequent projects (e.g. to create a new large scale assembly system or to make a major change to an existing one) may be equally costly. Even relatively minor unforeseen changes may not be catered for without very significant re-engineering implications. Consequently, the useful lifetime of conventionally engineered large scale assembly systems will in general decrease as product lifetimes decrease. Other major constraints arising from current best practice include: lack of multi-perspective project quantification and decision support; lack of well designed and explicitly specified 'interfaces' between system elements (e.g. modules or 'assembly system components'); ad hoc use of systems integration technologies and services; and locking into a specific technology or OEM that constrains later change. Current work of the author and his research colleagues seeks to deliver a step change in current best practice leading to 'full life-cycle engineering' of large scale assembly systems; this paper and conference presentation explains in outline how new forms of 'virtual engineering environment' and 'reusable assembly system components' are being developed that are suitable for cross industry sector deployment. Figure 3 illustrates the role of an integrated set of computer models that are captured and reused so as to structure access to supporting information and decision making services.
This allows the overlaying of public domain lifecycle and general engineering methods onto the deployment of virtual engineering computer tools that are being used as part of an integrated p4 (ip4) environment to rapidly and effectively build and change configurations of modelled and real system elements. The elements modelled include: people; machines; workstations; positioning, transportation and feeding mechanisms; semi-automated fixtures and tools; sensory systems; material, piece part and product flows; and control logic and information about plant states, state transitions and plant animations. Interoperation of modelled and real system elements, and an inherent ability to change the configuration of these elements, is being enabled via suitable information structures and distributed data sources. Thereby collective design and change decision making amongst engineering teams is being achieved, along with the overlaying of enterprise specific engineering methods, documentation and version control procedures. Figure 4 illustrates how the ip4 virtual environment illustrated by Figure 3 is being used to support team-based large scale production systems engineering, which in this example is achieved by unifying the use of multi-perspective models of assembly workstations and complete assembly lines.
Figure 3 ip4 Enhanced Best Practice Assembly Systems Engineering
Figure 4 Illustrative Use of Multi-Perspective Assembly System Models by Engineering Teams
R.H. Weston and Z. Cui
Figure 5 shows the main set of modelling concepts and technologies that are being used, in an integrated proof-of-concept fashion, to structure and implement the reuse of the multi-perspective assembly system models. Ongoing proof-of-concept projects are being carried out for companies that include Ford, JCB, Volvo, Airbus, BAE and Goodrich. The conference presentation will describe how at least one of these assembly systems is being engineered. The proof-of-concept activities involve CECA project engineers working alongside company engineers so as to (1) capture their best-practice assembly systems life-cycle engineering within ip4 models and (2) systemise the reuse of ip4 models to inform and quantify decisions made by the company engineers, thereby advancing best-practice assembly systems engineering.
3. Underlying Modelling Concepts
Key to full life-cycle engineering of complex assembly systems is effective system decomposition and change type classification [5]. A prime requirement of ip4 is to life-cycle engineer large scale assembly systems that can respond effectively to impacts arising from product dynamics. Four main types of product dynamic known to impact significantly on assembly systems used by the automotive, aerospace and construction equipment MEs (manufacturing enterprises) currently collaborating with the present author and his research colleagues at Loughborough are: (1) 'product variance' (amongst product classes (families), product types and product feature characteristics); (2) 'production volume variation'; (3) 'production mix variation'; and (4) reflected impacts on (1) through (3) following 'new product introduction'. The ip4 virtual engineering environment enables quantitative reasoning and prediction about change impacts (of types (1) through (4)) on alternatively configured assembly workstations and production lines [6, 7], while reuse of ip4 assembly components enables rapid and effective assembly system change. Our collaborators believe that ip4 will facilitate a step change in their best practice. For our automotive partners, catering for product-related dynamics of types (1), (2) and (4) is of strategic importance; the same MEs also have growing concerns about coping with (3). For our construction equipment manufacturer collaborator, (1), (3) and (4) are key at both workstation and production line levels, with growing concern for (2), where they anticipate a need to support variable assembly system Takt times. For our aerospace partners (2) is key, with a general need to ramp up and ramp down with changing product demand, but they also have a requirement to cater for (1), (3) and (4). Figure 5 illustrates the ip4 full life-cycle engineering toolbox, which is being developed by the author and his research colleagues based on the use of leading-edge modelling.
One dimension of this toolbox is concerned with the provision of coherent modelling concepts. A second dimension concerns implementation technologies for model capture, model execution, and model repository and version control. The implementation technologies being used include: CIMOSA and GRAI-GIM enterprise modelling concepts and newly developed process network capture tools; causal loop modelling in support of simulation model design; Simul8, PlantSim,
ithink and JAK simulation modelling tools; 'Teamcenter' and 'Delmia' integration technologies to provide underpinning distributed information services; and various Unigraphics CAE tools. The third dimension concerns the provision of road-maps to define effective ways of deploying the toolbox.
Figure 5 ip4 Full Systems Engineering Toolbox
Fundamental to ip4 toolbox design and development has been the conception, innovative implementation and case study application of a new component-based modelling concept, which is referred to as the DPU (Dynamic Producer Unit) concept [8]. A DPU is defined as 'an organisational unit comprising people, machines and/or computer systems that form a configurable, re-usable and interoperable component of a more complex production system'. Figure 6 illustrates the key properties of DPUs that are modelled. DPUs need to function (a) individually, as the holder of one or more assigned roles, and (b) collectively, by interoperating with other DPUs to realise higher-level roles (i.e. some configuration of roles to which the interoperating DPUs are assigned). Dynamic character sets are used to describe and quantify inherited and acquired behaviour traits of DPUs. The generic attributes defined for this purpose belong to three classes: (1) productivity characters, (2) change capability characters and (3) self characters [9, 10]. In general it is assumed that all DPUs behave in ways related to these traits, but when a given configuration of DPUs is assigned to a specific role set it is understood that not all character sets are of equal importance to different users of manufacturing system models (e.g. to product, process, automation, IT systems and ergonomic engineers, or to business and manufacturing managers). The conference presentation will illustrate a case study use of DPU
concepts to: (i) conceptually model alternative manufacturing system configurations, (ii) match these alternative DPU configurations to work-loaded roles and (iii) predict individual and collective DPU behaviours when subject to different forms of product dynamic.

Figure 6 Dynamic Producer Unit Concept: a DPU has changeability characters (configurability, programmability, mobility, longevity), self characters (pro-activity, reactivity, culture, motivation, timeliness, personality, stressors & stresses, inter-personal ability) and productivity characters (generated values, output rate, utilisation, cost, efficiency)
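The DPU concept and its three character classes might be sketched in code as follows. All names and values here are illustrative assumptions for the reader's benefit, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch of the DPU (Dynamic Producer Unit) concept: a DPU
# carries three character sets (productivity, changeability, self) and can
# hold roles individually or collectively with other interoperating DPUs.

@dataclass
class DPU:
    name: str
    productivity: Dict[str, float] = field(default_factory=dict)   # output rate, cost, ...
    changeability: Dict[str, float] = field(default_factory=dict)  # configurability, ...
    self_chars: Dict[str, float] = field(default_factory=dict)     # pro-activity, ...
    roles: List[str] = field(default_factory=list)

    def assign(self, role: str) -> None:
        """A DPU functions individually as the holder of assigned roles."""
        self.roles.append(role)

def collective_roles(dpus):
    """Roles realised by a configuration of interoperating DPUs."""
    return sorted({r for d in dpus for r in d.roles})

op = DPU("operator-1", productivity={"output_rate": 40.0})
st = DPU("station-2", changeability={"configurability": 0.8})
op.assign("load-parts")
st.assign("weld-seam")
print(collective_roles([op, st]))  # ['load-parts', 'weld-seam']
```

The point of the sketch is the two-level behaviour: each DPU holds its own roles, while a configuration of DPUs realises the union of those roles as a higher-level role set.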
4. Ongoing Industrial Application and Case Testing of the ip4 Toolbox

The ip4 toolbox is being used to facilitate prototype system builds for automotive, aerospace and construction equipment manufacturers. Physical elements of these prototypes include: (a) laser metrology, offering benefits such as continuous calibration of fixtures or assemblies, faster and optimised assembly, automated measuring solutions and improved quality; and (b) advanced vision systems that can be integrated into assembly systems to flexibly verify task completion, tolerances and quality. The use of robots as reconfigurable assembly manipulators will allow piece parts of products to be brought together and operated on. We are creating a library of assembly system building blocks from which assembly workstations and assembly lines can be composed, modelled and proven, so as to ensure fast and predictable results. Via prime systems engineering and system component vendors, use of the toolbox and of reconfigurable, programmable assembly system components will facilitate a wide base of exploitation and roll-out to other industries. The various physical elements of these prototypes (which are referred to here as assembly
system components) will need to possess flexible characteristics, both individually and when configured into (sub)systems. Hence common classes of component configured by the ip4 toolbox into prototype systems will include the latest technologies, such as laser metrology, vision systems, robotics and flexible assembly fixtures and tools. The ability to model and build complex assembly systems from flexible components is of great importance, and will allow companies to automate production processes previously thought to involve insufficient product volumes, or too many product variants, to justify automation. Key flexible components in all current assembly systems used by our collaborators are people. Key aspects of people modelling supported by the ip4 toolbox include functional competencies and ergonomic factors (workplace physical ergonomics and some aspects of cognitive ergonomics). An important aspect of this is the determination of explicitly defined sets of task elements, such that human capabilities are deployed and managed in a way that is compatible with required product flows through business process elements. Human simulation in the assembly workplace is driven by task-oriented descriptions of required roles and of the capabilities of humans as candidate role holders. This places physical and cognitive demands on people in relation to specific workplaces and work-flows. The approach is expected to significantly enhance best industry practice through the provision of a task-driven evaluation method aimed at improving the design and utilisation of people in assembly workplaces. Prototype system builds for our collaborators are taking a variety of forms, selected to maximise business impact, with varying scope and focus. Examples include use of the ip4 toolbox to prototype a much enhanced ability for 'product', 'process', 'production', 'ergonomic' and 'IT systems' engineers to collectively build, access and manipulate assembly system models.
Here a staged development programme will lead to the progressive release of new virtual engineering technologies to our collaborators' enterprises world-wide via key technology vendors. New workstations with much enhanced flexibility are also being prototyped to enable mixed-architecture, economy-of-scope assembly. Further new flexible workstations are being developed and tested, centred on the use of robot welding of fabrications, to accommodate increased volumes and variants and also to save space and improve quality. Models of complete fabrication and assembly lines will enable new strategies and control policies to be developed so that variable-Takt-time Lean product realisation can become a reality. Similarly, flexible workstations and complete assembly lines are being prototyped to cope with product variance and production ramp up and down.
5. Key Innovations and Reflections
The approach to virtual engineering reported in this paper is particularly innovative in respect of its model-driven support for the integrated people, product, process and plant (p4) aspects of assembly systems engineering. Also highly innovative is the support provided for modelling human-machine interactions, including elements of autonomous decision making by (AI-based) assembly system components suited for use across the automotive, aerospace and construction equipment industries, where virtual
engineering is already commonly used. Another prime innovation concerns the systematic and quantitative support provided for the full life-cycle engineering of mixed-reality (part modelled, part real) assembly system components with respect to business-context-dependent scenarios of use. The modelling concepts, and the related decomposition and integration structures, so devised advance understanding of complex systems design and interoperation.
6. References
[1] Loe N (1998) Postponement for mass customisation. In: Gattorna J (ed) Strategic Supply Chain Alignment, Chapter 5. Gower.
[2] Christian I et al. Agile manufacturing transitional strategies, manufacturing information systems. In: Proceedings of the Fourth SME International Conference.
[3] Monden Y (1998) Toyota Production System: An Integrated Approach to Just-In-Time, 3rd edition. Engineering & Management Press, Norcross, GA.
[4] Levinson WA (2002) Henry Ford's Lean Vision: Enduring Principles from the First Ford Motor Plant. Productivity Press; and Hirano H and Furuya M, JIT Is Flow: Practice and Principles of Lean Manufacturing. PCS Press.
[5] Vernadat FB (1996) Enterprise Modeling and Integration: Principles and Applications. Chapman & Hall, London, UK.
[6] Weston RH, Chatha KA and Ajaefobi JO (2004) Process thinking in support of systems specification and selection. Advanced Engineering Informatics, Elsevier, 18(4), 217-229.
[7] Weston RH, Zhen M, Ajaefobi JO, Rahimifard A, Guerrero A, Masood T, Wahid B and Ding C (2007) Simulating dynamic behaviours in manufacturing organizations. IESM 2007 Int. Conf. on Industrial Engineering & Systems Management, Beijing, China, May 30-June 2, 2007.
[8] Weston RH, Rahimifard A and Ajaefobi JO (2007) Next generation, change capable, component based manufacturing systems, Part 1: Dynamic producer unit concepts defined. Submitted to Proc. I.Mech.E. Part B.
[9] Ajaefobi JO, Weston RH and Wahid B (2007) Modelling complex systems in support of human systems design and change, Part 2: SME bearing manufacture case study. Submitted to Int. J. of Computer Integrated Manufacture.
[10] Weston RH, Ajaefobi JO and Rahimifard A (2007) Modelling complex systems in support of human systems design and change, Part 1: Methodology defined. Submitted to Int. J. of Computer Integrated Manufacture.
Tooling Design and Fatigue Life Evaluation via CAE Simulation for Metal Forming
W.L. Chan, M.W. Fu, J. Lu
Department of Mechanical Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
Abstract
Cold forming tools are subjected to extremely high pressure, and fatigue is one of the major failure modes of metal forming tools, especially in cold forging processes. Many factors influence fatigue life, such as material properties, interfacial friction, loading and product geometry, and it is therefore possible to greatly enhance tooling life through subtle design changes to the product, tooling or process parameters. This paper aims to address key design parameters for tool fatigue life, which should be considered in as much depth as possible at the up-front design stage to eliminate late design changes and unexpected tooling failures. To realise this goal, an efficient framework for tooling design and fatigue life evaluation is proposed, which considers different design factors and integrates product, tooling and process design via simulation. It also provides a platform to systematically integrate different design and engineering tools to support tooling design and design solution evaluation through geometry representation, formability analysis, identification of fatigue failure areas and die life prediction. A case study of the tooling design for production of a vehicle wheel disc is presented to demonstrate the implementation of the proposed design and analysis framework.
Keywords: CAE, Fatigue life, Tooling design methodology, Metal forming simulation
1. Introduction
Tooling is made of engineering materials and is subjected to repeated loadings. As all materials contain flaws or initial micro-cracks, these defects in the original tooling material can propagate gradually and eventually lead to tooling failure. In the traditional tooling development paradigm, it is difficult to predict and assess design solutions at the early design stage, as there are many affecting factors and their interaction and interplay is very complicated. Improper design is a common issue that can lead to early fatigue fracture, so design evaluation for fatigue life is necessary and has become a must [1-3]. An accurate evaluation and analysis paradigm for tooling design in the early design stage is thus an important way to eliminate uncertainties in tooling development.
2. Literature Review
In the literature, many researchers have tried to further advance fatigue analysis tools for more practical applications. To name a few: Wagner and Engel used FEA to identify and localise the critical tool regions and to select a qualified surface treatment approach to improve tooling performance and increase its life; in their investigation, three forging tools were studied using hard roller burnishing, surface heat treatment by laser, and surface texturing, respectively [4]. Taylor studied the theory of critical distances, with the aid of FEA, to explain why some component failures do not occur at the region of highest surface stress [5]. Witek used the nonlinear finite element method to determine the stress state of the turbine disc of an aero engine and studied the mechanisms of fatigue failure [6]. Raju et al studied the fatigue life of aluminium alloy wheels under radial loads; to obtain the actual fatigue properties of the wheel material after the manufacturing process, fatigue tests were conducted on 43 specimens machined from the spokes of alloy wheels, and the measured material fatigue properties were input to an FEA system for analysis [7]. Saroosh et al proposed a method to estimate the fatigue life of cold forging tools based on industrial tool life data, workpiece material properties and FEA simulation; their estimation methodology was mainly developed from Morrow's and Basquin's equations. Tong and his colleagues utilised CAE technology to study the effects of die and workpiece geometry, die and workpiece material properties, and process parameters on die fatigue life and its improvement; the tool life was predicted based on the FEA result and a Haigh diagram [8]. In summary, many researchers have studied material fatigue properties using FEM systems; however, there is no research on how to integrate different FEM tools for tooling design and analysis in a systematic way.
Although FEM technology is an advanced tool for predicting mechanical failure, applying it to design and analysis without an efficient and systematic approach may incur high computation cost and incorrect predictions. Furthermore, fatigue analysis involves many parameters, including material properties, loading and tooling geometry, and a small change in a single parameter may produce a markedly different result. Therefore, the aim of this paper is to propose an efficient tooling design and analysis framework based on CAD and CAE technologies. Through an industrial case study, the proposed design and analysis framework is verified and validated.
3. Classic Fatigue Theories
In classical fatigue theory, there are two main approaches to predicting fatigue life: the stress-life (S-N) approach and the strain-life (ε-N) approach [9]. In general, the S-N approach can be used to estimate long cycle life (over 1000 cycles) [8]. It is more accurate when cyclic elastic straining is dominant, as it ignores the damage from plastic strain. On the other hand, the S-N approach estimates the total life without distinguishing crack initiation from crack propagation [10]. The S-N curve is
obtained from uni-axial fatigue tests with smooth specimens. The relationship between the applied stress and the fatigue life is represented by Basquin's equation
σ_a = σ'_f (2N_f)^b    (1)

where σ_a, σ'_f, b and N_f are the stress amplitude, fatigue strength coefficient, fatigue strength exponent and number of life cycles, respectively. σ'_f is often approximately equal to the true fracture strength [11], while b is given by the slope of the S-N curve.

3.1 Mean Stress Effect
Most S-N curve data are obtained at zero mean stress, with an R-ratio (σ_min/σ_max) of -1. However, fatigue behaviour varies with mean stress. In general, a tensile mean stress accelerates crack propagation, resulting in a shorter life cycle, while a compressive mean stress can be beneficial to long cycle life. The mean stress effect can be represented in a Haigh diagram, which shows the different combinations of stress amplitude and mean stress for a constant life cycle. Two common equations account for the mean stress effect: the modified Goodman equation and the Gerber parabola equation. The modified Goodman equation can be used in the case of compressive mean stress. The Gerber equation, however, incorrectly predicts the fatigue behaviour with compressive mean stress [11]; therefore, it is not suggested for forging tool analysis.

3.2 Stress Concentration Factor
Geometric discontinuities, such as holes, fillets, grooves and keyways, always cause stress concentration, which magnifies the local stress and shortens the fatigue life [12]. Such features are generically termed notches. The stress concentration factor, Kt, is usually used to measure the degree of magnification. In FEA, however, there is no need to use Kt to capture the stress concentration effect of geometry, since the mesh resolves the local stress directly; Kt is instead used to account for the effects of manufacturing and environmental factors such as casting, fretting, corrosion, etc. [10]. The local stress concentration cannot correctly model the effects of surface finishing and treatment; both are modelled by adjusting the slope (the fatigue strength exponent in Equation 1) of the S-N curve.
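Section 3 can be summarised in a short worked sketch: Basquin's equation (1) is inverted to estimate life from a given stress amplitude, and the modified Goodman line folds a tensile mean stress into an equivalent fully reversed amplitude. The material constants below are illustrative assumptions, not measured values.

```python
def basquin_life(sigma_a, sigma_f_prime, b):
    """Invert Equation 1, sigma_a = sigma_f' * (2*Nf)**b, for Nf.
    b is negative for metals, so a higher amplitude gives a shorter life."""
    return 0.5 * (sigma_a / sigma_f_prime) ** (1.0 / b)

def goodman_equiv_amplitude(sigma_a, sigma_m, sigma_u):
    """Modified Goodman line: sigma_ar = sigma_a / (1 - sigma_m/sigma_u).
    A tensile mean stress raises the equivalent amplitude (shorter life)."""
    return sigma_a / (1.0 - sigma_m / sigma_u)

# Illustrative constants: sigma_f' = 2000 MPa, b = -0.1, sigma_u = 2000 MPa
life_reversed = basquin_life(1000.0, 2000.0, -0.1)   # fully reversed: 512 cycles
life_with_mean = basquin_life(
    goodman_equiv_amplitude(1000.0, 200.0, 2000.0),  # tensile mean of 200 MPa
    2000.0, -0.1)
print(life_reversed, life_with_mean)  # the tensile mean stress shortens life
```

The comparison makes the Haigh-diagram intuition concrete: the same amplitude with a tensile mean stress yields a substantially shorter predicted life.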
4. Tool Design and Fatigue Life Analysis Framework
A systemic and integrated tooling life design and analysis paradigm is presented in Figure 1. At the beginning of the framework, the product features have to be clearly defined. Product features mean the product's uniqueness and the functions the product has. Clearly defining the product features is important, as it drives material selection, product geometry constraints, process determination, and development time and cost. Configuration of these also constitutes the
conceptualisation of the product. After the design conceptualisation stage, the product geometry with detailed dimensions is represented in a CAD system. With the product geometry defined, the subsequent step is to represent the billet and tooling geometry, with detailed dimensions, in the CAD system. Since the billet may undergo multiple forming processes, the geometry of each intermediate product has to be designed. The finished models of the tooling and billet are converted to a format compatible with CAE systems, such as STL, IGES or STEP. The models are then imported into the CAE systems for formability analysis. The simulation of material flow is a non-linear dynamic process, which is very time-consuming; appropriate settings can greatly reduce the computation cost while maintaining a satisfactory accuracy level. Since the punch and die are analysed at a later stage, they are set as rigid bodies, while the billet is a plastic body. The billet is meshed with an appropriate element size. Element size plays an important role in computation time and accuracy; usually, the element size is at most 1/2 of the smallest feature size of the tooling [13]. A material definition is assigned to the billet. With the settings of tooling and billet temperature, punch speed, convergence criteria, loading steps and re-meshing criteria, the simulation can be conducted. The results of the simulation reveal the material flow that forms the detailed geometry, making it easy to check whether defects exist. Furthermore, the maximum forming load, which usually occurs at the last forming step, becomes known; it is the key factor in determining the forming process and tool design [14]. If the formability analysis result is not satisfactory, changes to the operation steps, process parameters, workpiece material and preform geometry have to be considered. Once the result is accepted, the tooling fatigue life analysis can be conducted. To conduct fatigue life analysis, a static analysis has to be carried out first.
The tooling model has to be meshed. The maximum forming pressure (at the last forming step in the formability analysis), the boundary conditions and the monotonic properties of the tooling material are applied to the tooling model. With all of the above settings, the static analysis can be run and the result file imported into the fatigue analysis engine. In the fatigue analysis engine, the material fatigue properties, the mean stress correction method (Goodman, Gerber), the surface finishing and treatment process, and the profile of a single stress level cycle have to be defined. If the simulated fatigue load cycle count is not acceptable, it has to be checked whether the stress-concentrating feature can be modified without changing the product geometry design; otherwise, a strengthening treatment is needed for the tooling material, or an even stronger material must be used. If no other suitable tooling material is available, the forming load and the stress concentration factor have to be reduced through consideration of the process parameters, product material and product geometry. In the proposed design paradigm, shown in Figure 1, the sequence of consideration of each design factor is intended to provide an efficient design and analysis approach.
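The decision flow just described might be summarised in code. Every CAE call below is a hypothetical stub returning canned numbers; only the order of decisions mirrors the framework: formability is fixed first, then static analysis and fatigue life drive revisions to the stress-concentrating features.

```python
# Illustrative control flow of the proposed framework (Figure 1). All names
# and numbers are stand-ins, not a real CAE API or the case-study values.

def formability_ok(design):            # stand-in for the forming simulation
    return design["preform_rev"] >= 1  # pretend the first preform has defects

def revise_forming(design):            # change steps/parameters/preform
    return dict(design, preform_rev=design["preform_rev"] + 1)

def static_analysis(design):           # stand-in for maximum-load static FEA
    return 1200.0 / design["fillet_rev"]   # hot-spot stress in MPa

def fatigue_life(stress):              # stand-in for the S-N life estimate
    return 0.5 * (stress / 2000.0) ** (1.0 / -0.1)

def revise_for_fatigue(design):        # ease the stress-concentrating feature
    return dict(design, fillet_rev=design["fillet_rev"] + 1)

def design_loop(design, required_life, max_iterations=10):
    for _ in range(max_iterations):
        if not formability_ok(design):
            design = revise_forming(design)   # fix formability first
            continue
        stress = static_analysis(design)
        if fatigue_life(stress) >= required_life:
            return design                     # accepted design
        design = revise_for_fatigue(design)   # then attack fatigue life
    raise RuntimeError("no acceptable design within iteration budget")

result = design_loop({"preform_rev": 0, "fillet_rev": 1}, required_life=1e4)
print(result)
```

The loop structure reflects the paper's sequencing: a design that cannot even be formed is never submitted to fatigue analysis, which avoids wasted static-analysis runs.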
Figure 1. Integrated tooling life design & analysis paradigm
5. Case Study
In order to demonstrate the implementation of the proposed tooling life design and analysis paradigm, a case study of wheel disc tooling is presented. The whole structure of the wheel is formed by welding two components together: the rim and the disc, as shown in Figure 2. The rim is produced by a rolling process, while the disc is formed by a cold forging process. From the production point of view, the cold forging process is the most critical process to be evaluated, since it involves non-linear material flow and the cold forging tools are under extremely high pressure during the forming process. Without careful and accurate analysis, product defects and tool fracture may occur. Therefore, the forming process and the tool fatigue life for wheel disc production are chosen as a case study to demonstrate the design and analysis process based on CAD and CAE technologies.
Figure 2. (a) Wheel assembly, (b) Disc design
5.1 Forming Process Analysis
The geometry of the disc could not be formed by a single forging process; the unwanted region has to be removed by subsequent processes. The metal-formed part was therefore an intermediate product, whose geometry had to be studied in order to avoid the formation of defects on the product and high stress concentrations on the tooling during the forming process. The forming process simulation was done in the Deform™ system. Since the wheel geometry is symmetrical about two planes, only a quarter of the model was used in order to reduce the computation time. The tooling was set as a rigid body, while the billet was a plastic body. The billet material was AL6062, and about 40,000 elements were generated inside it. The punch speed was 1 mm/s. Different geometries of the intermediate product were tried, and two common defects were found, as shown in Figure 3. The first defect was material folding, caused by irrational flow of the metal on the top surface during its flow into the die cavity. The second defect was underfill, a portion of a forging where the metal fails to fill the true shape of the impression; it was due to the discontinuous geometry, which caused uneven material flow inside the cavity. In this case, neither defect could be eliminated by changing the forming parameters. Based on the metal flow pattern and modification of the geometry of the intermediate preforms, satisfactory forming performance was obtained. The forming stages and the stroke-load curve are illustrated in Figure 4.
Figure 3. Formation of defect, (a) material folding, (b) underfill
Figure 4. (a) Forming stages, (b) stroke-load curve
5.2 Tooling Fatigue Life Analysis
During the forging process, the forming load keeps increasing while the punch presses downwards until the end of the stroke. Therefore, the highest and most critical load appears at the end of the stroke. As shown in Figure 5, the forming pressure was evenly distributed on the upper surface of the punch. When the billet was deforming, the reaction pressure on the contact surface varied from the center to the edge due to the friction between the punch and billet. The stress distribution can be described by the following equation [15, 16].
σ_z = (2τ/h)(d/2 − x) + (2/√3) σ̄    (2)

where τ is the friction shear stress, equal to mσ̄/√3; m and σ̄ represent the friction factor and flow stress, respectively (typically m ≈ 0.15 for cold forging [17]); and h, d and x represent the height, the diameter and the distance from the center of the metal-formed part, respectively. In this case study, h was the average thickness, obtained from four representative sections as shown in Figure 6.
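Equation 2 can be evaluated directly to obtain the centre-to-edge pressure ratio used to shape the die boundary condition. The numbers below are illustrative only, not the case-study values.

```python
import math

# Direct evaluation of Equation 2 (illustrative numbers): the friction term
# raises the contact stress linearly from the edge towards the centre.

def contact_stress(x, h, d, m, flow_stress):
    """sigma_z = (2*tau/h)*(d/2 - x) + (2/sqrt(3))*flow_stress,
    with friction shear stress tau = m*flow_stress/sqrt(3)."""
    tau = m * flow_stress / math.sqrt(3.0)
    return (2.0 * tau / h) * (d / 2.0 - x) + (2.0 / math.sqrt(3.0)) * flow_stress

# centre (x = 0) vs edge (x = d/2) for m = 0.15, flow stress 300 MPa,
# h = 10 mm, d = 100 mm; analytically the ratio reduces to 1 + m*d/(2*h)
centre = contact_stress(0.0, 10.0, 100.0, 0.15, 300.0)
edge = contact_stress(50.0, 10.0, 100.0, 0.15, 300.0)
print(round(centre / edge, 2))  # 1.75
```

Note that the ratio depends only on m, d and h, which is why a single ratio suffices to approximate the whole stress distribution applied to the die.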
Figure 5. Pressure distribution on the tooling surface
Figure 6. Critical cross-section area
The cross-section area of the pressure distribution on the upper die surface is equal to the cross-section area of the pressure distribution on the contact surface, as shown in Figure 5. The ratio of the center pressure to the edge pressure can be found using Equation 2, and the stress distribution can then be approximated. After the stress boundary condition of the die is determined via the plastic deformation simulation of the billet, the boundary condition is employed for die life analysis with the MSC Nastran and MSC Fatigue codes. Firstly, the model was meshed in MSC Patran. Since only a quarter of the model was analysed, symmetry boundary conditions were set on the symmetry planes. The bottom surface was fixed in all degrees of freedom. The tooling material was Calmax, with a Young's modulus of 200 GPa and a Poisson's ratio of 0.31 [18]. With all the above data, the static analysis could be conducted. The results are shown in Figure 7a; the highest stress in the punch was 1.2 GPa, as circled. The result file of the static analysis was imported into MSC Fatigue. For the fatigue analysis, four key data items had to be input to the system, namely the material fatigue properties, the surface finishing, the mean stress correction method and the stress variation profile. In this study, it was assumed that the plastic strain had only a small effect and
could be neglected; therefore, the S-N approach, which only considers the fatigue properties in the elastic part, could be utilised. The fatigue property parameters (σ'_f and b in Equation 1) were obtained from the literature [18]. Good surface finishing and no treatment process were set; therefore, the fatigue strength exponent, b, would not be adjusted. The mean stress correction was the Goodman method. The stress level variation profile started from zero, rose to the maximum stress of each element and then returned to zero. The fatigue life analysis results are shown in Figure 7b. The punch has a service life of up to 10,000 cycles.
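The reported analysis can be mimicked in outline: the zero-to-maximum load profile gives equal stress amplitude and mean stress of σ_max/2, the Goodman correction is applied, and Basquin's equation is inverted. The material constants below are placeholders, NOT the measured Calmax data of [18], so the result only illustrates the order of magnitude.

```python
def punch_life(sigma_max, sigma_u, sigma_f_prime, b):
    """Zero-to-max stress cycle: amplitude and mean are both sigma_max/2.
    Goodman-correct the amplitude, then invert Basquin's equation for Nf."""
    sigma_a = sigma_m = sigma_max / 2.0
    sigma_ar = sigma_a / (1.0 - sigma_m / sigma_u)   # modified Goodman
    return 0.5 * (sigma_ar / sigma_f_prime) ** (1.0 / b)

# Placeholder (hypothetical) constants: sigma_u = 2300 MPa,
# sigma_f' = 2200 MPa, b = -0.1; hot-spot stress 1.2 GPa from the static FEA
life = punch_life(1200.0, 2300.0, 2200.0, -0.1)
print(round(life))  # on the order of 10**4 cycles for these placeholder values
```

With these assumed constants the estimate lands near 10^4 cycles, consistent with the order of magnitude reported for the punch, though the actual prediction depends entirely on the measured fatigue constants.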
Figure 7. (a) Static analysis result of the punch, (b) Fatigue life analysis result of the punch
6. Summary
Design and fabrication of tooling usually involve a large investment, and the design needs to take tooling performance and fatigue life into account. In tooling fatigue life analysis, a subtle change in a single parameter may have a significant effect on tooling performance and service life; without a clear understanding of the different fatigue-affecting factors, it is impossible to accurately assess the performance of tooling. Sophisticated CAD and CAE technologies provide an auxiliary tool for solving this problem. This paper proposes a fatigue life evaluation approach and investigates its key factors and parameters for tool design, such as the effects of material properties, mean stress and stress concentration. Furthermore, a paradigm for tooling design and fatigue life evaluation via CAD representation and CAE simulation has been presented. The proposed methodology aims to provide efficient and systemic approaches for designing high quality products and prolonging tooling service life.
7. References
[1] J. F. Darlington, J. D. Booker, Development of a design technique for the identification of fatigue initiating features, Engineering Failure Analysis 13, 1134-1152, 2006.
[2] J. F. Darlington, J. D. Booker, Designing for fatigue resistance: survey of UK industry and future research agenda. In: IDMME conference, Bath, 2003.
[3] J. F. Faupel, F. E. Fisher, Engineering Design: A Synthesis of Stress Analysis and Materials. Wiley, New York, 1981.
[4] K. Wagner, A. Putz, U. Engel, Improvement of tool life in cold forging by locally optimized surfaces, Journal of Materials Processing Technology 177, 206-209, 2006.
[5] David Taylor, Analysis of fatigue failures in components using the theory of critical distances, Engineering Failure Analysis 12, 906-914, 2005.
[6] Lucjan Witek, Failure analysis of turbine disc of an aero engine, Engineering Failure Analysis 13, 9-17, 2006.
[7] P. Ramamurty Raju, B. Satyanarayana, K. Ramji, K. Suresh Babu, Evaluation of fatigue life of aluminium alloy wheels under radial loads, Engineering Failure Analysis 14, 791-800, 2007.
[8] K. K. Tong, M. S. Yong, M. W. Fu, T. Muramatsu, C. S. Goh, S. X. Zhang, CAE enabled methodology for die fatigue life analysis and improvement, International Journal of Production Research, Vol. 43, No. 1, 131-146, 2005.
[9] J. A. Bannantine, J. J. Comer, J. L. Handrock, Fundamentals of Metal Fatigue Analysis. Prentice Hall, New Jersey, 1990.
[10] MSC Fatigue User's Guide.
[11] Ralph I. Stephens, Ali Fatemi, Robert R. Stephens, Henry O. Fuchs, Metal Fatigue in Engineering, 2nd edition. Wiley-Interscience, 2001.
[12] Norman E. Dowling, Mechanical Behaviour of Materials: Engineering Methods for Deformation, Fracture, and Fatigue, 3rd edition. Pearson Prentice Hall, 2007.
[13] Deform-3D Tutorial.
[14] M. W. Fu, M. S. Yong, K. K. Tong, T. Muramatsu, A methodology for evaluation of metal forming system design and performance via CAE simulation, International Journal of Production Research, Vol. 44, No. 6, 1075-1092, 2006.
[15] T. Altan, G. D. Lahoti, Limitations, applicability and usefulness of different methods in analyzing forming problems, Annals of CIRP, Vol. 28, No. 2, 473, 1979.
[16] E. G. Thomsen, C. T. Yang, S. Kobayashi, Mechanics of Plastic Deformation in Metal Processing. Macmillan, New York, 1965.
[17] Abbas Ghaei, Mohammad R. Movahhedy, Die design for the radial forging process using 3D FEM, Journal of Materials Processing Technology 182, 534-539, 2007.
[18] Povl Brøndsted, Peder Skov-Hansen, Fatigue properties of high-strength materials used in cold-forging tools, Int. J. Fatigue, Vol. 20, No. 5, 373-381, 1998.
Modelling of Processing Velocity in Computer-controlled Sub-aperture Pad Manufacturing H. Cheng1, Y. Yeung2, H. Tong3, Y. Wang4
1 Dept. of Optical Engineering, Beijing Institute of Technology, Beijing 100081, China
2 Dept. of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
3 Dept. of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong
4 Dept. of Optical Engineering, Beijing Institute of Technology, Beijing 100081, China
Abstract This paper discusses the role of processing velocity in computer-controlled optical sub-aperture pad polishing/grinding. By introducing a removal function describing the movement of the pad, an equation relating material removal to process parameters such as the relative velocity is established and simulated, and experiments are then carried out to confirm the theoretical model. The theoretical and experimental results indicate that, besides the well-known dwell-time, velocity is a key parameter affecting the manufacturing process, which helps to optimize the process and realize more efficient optical fabrication. Keywords: Computer-controlled manufacturing, Sub-aperture pad, Process optimization 1.
Introduction
In modern optical systems, components such as precision mirrors and lenses are demanded in large quantities [1]. Many manufacturing processes have been developed to machine optical surfaces precisely and efficiently. Among them, computer-controlled sub-aperture pad manufacturing, in which the grinding/polishing pad rotates at a fixed speed, the work-piece moves in a polar or orthogonal coordinate system, and loose abrasives are fed between the pad and the work-piece surface, has been widely studied, especially for the fabrication of brittle materials such as ceramics and glasses. Its benefits, brought by the computer-controlled technique, include significantly improved work-piece shape accuracy, increased processing efficiency, and smoothing of the whole surface [2,3].
For the practical application of computer-controlled optical surfacing, several researchers have introduced small-sized tools into grinding or polishing processes [4,5] and loose-abrasive finishing [6], realizing material removal through the relative movement between tool and work-piece. Several other works dealing with large optical aspheric mirrors or lenses have also been reported [7,8]. In these processes, however, material removal is controlled by changing the pad dwell-time on the work-piece surface, while the applied contact force is taken as constant in both magnitude and direction, i.e., constant-force machining. In practice, the working force and velocity change with the tool/work-piece contact position, especially for complex shapes such as free-form surfaces. The authors therefore introduce processing-velocity modelling into the material removal function in order to establish a high-precision, high-efficiency computer-controlled sub-aperture pad surfacing technique. This paper focuses on clarifying the role of processing velocity theoretically and experimentally.
2.
Principle of Computer-controlled Sub-aperture Fabrication
Based on the Preston hypothesis, which is commonly accepted in optical manufacturing, a mathematical model of computer-controlled sub-aperture pad surfacing is established. According to the Preston equation, the material removal rate is given by the following expression.
L(x, y) = K · P(x, y) · V(x, y)   (1)
where L(x, y) is the material removal function at the manufacturing point (x, y) during unit time, P(x, y) is the relative pressure between tool and work-piece, V(x, y) is the relative motion speed between tool and work-piece, and K is a coefficient related to the process conditions, i.e., material, abrasive, temperature and humidity. During fabrication, if the pressure between tool and work-piece is kept constant (constant-pressure machining) and the dwell-time is equal for every point (x, y) in the working area, the material removal at a point (x, y) depends mainly on the change of the relative speed during the manufacturing cycle. By analysis, we conclude that the material removal function L(x, y) at a point (x, y) in the working field obeys the following convolution equation.
L(x, y) = ∫∫_path R(ξ, η) V(x − ξ, y − η) dξ dη   (2)
where R(ξ, η) is the removal distribution characteristic function in the working area between tool and work-piece under unit working speed, and V(x − ξ, y − η) is the relative-speed variation function between tool and work-piece.
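To make Equations (1) and (2) concrete, the sketch below evaluates the pointwise Preston rate and a one-dimensional discrete analogue of the convolution of the tool's removal characteristic with the relative-speed distribution along the path. All numerical values (the Preston coefficient, pressure and the sampled fields) are invented for illustration, not the paper's calibration.

```python
import numpy as np

K = 0.01   # assumed Preston coefficient (process-dependent)
P = 2.0    # constant relative pressure, i.e. constant-pressure machining

def preston_rate(V):
    """Eq. (1): removal per unit time, L = K * P * V."""
    return K * P * V

# Eq. (2), discretized in 1-D: the removal profile is the tool's unit-speed
# removal characteristic R convolved with the relative-speed distribution
# sampled along the scanning path.
R_tool = np.array([0.25, 0.5, 0.25])           # removal characteristic
V_path = np.array([1.0, 1.0, 2.0, 2.0, 1.0])   # relative speed along path
L_profile = np.convolve(R_tool, V_path)
```

Because the tool characteristic integrates to one here, the total removal equals the total of the speed samples, mirroring how dwell-time and speed trade off in the continuous model.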
Figure 1. Schematic illustration of sub-aperture pad grinding/polishing: (a) machine configuration (motor, pad, work-piece); (b) motion geometry between pad and work-piece
Figure 1 schematically describes the operating principle of sub-aperture pad surfacing. A small pad, driven by a linear motor, scans the work-piece surface along a designed path, while the work-piece rotates at a given velocity. In order to follow the work-piece surface shape and keep a good fit as the pad moves along the surface, a ball hinge screwed onto the end of the motor's principal axis is connected to the upper end of the pad. Figure 1(b) presents the detailed motion relationship between pad and work-piece, where R is the pad radius, e is an adjustable set-over, the working area during one cycle is centred at point O, and its radius r equals R + e. 2.1
Modelling of Processing Velocity
Since the relative speed between the small tool and the work-piece is the key factor affecting material removal efficiency, the removal function is constructed around it. Let the work-piece rotate at ω1 and the driving motor rotate at ω2 in the same direction. During one working cycle t, for a working point A at radius r1, the relative velocity is

V = V2 − V1,  with V1 = ω1 × r1 and V2 = ω2 × r   (3)

From Equation (3), we get
V = [(ω1 r1)² + (ω2 r)² − ω1ω2(2r² − 2rd cos α)]^(1/2)   (4)

Let

ω = ω2 / ω1   (5)
In view of the geometric relation in triangle AO1O, the following formula is deduced.

r1 = [r² + d² − 2rd cos α]^(1/2) = [r² + d² − 2rd cos(ωω1t)]^(1/2)   (6)
Thus, the working velocity model is obtained from Equations (4), (5) and (6) as follows.

V = ω1[(ωr)² − ω(2r² − 2rd cos(ωω1t)) + r² + d² − 2rd cos(ωω1t)]^(1/2)   (7)
Therefore, the material removal at point A during an arbitrary working cycle T can be expressed as

R = K P ∫0^T ω1[(ωr)² − ω(2r² − 2rd cos(ωω1t)) + r² + d² − 2rd cos(ωω1t)]^(1/2) dt   (8)
2.2 Simulation on Removal Function
When the work-piece rotating speed is held constant, the material removal depends on r. Figure 2 shows the simulated removal function curves for different parameters, i.e., relative velocities ω and set-over ratios e/R. The pronounced centre peak conforms well to the theory that the removal function should be a Gaussian function [9].
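The velocity model of Equation (7) and the removal integral of Equation (8) can be evaluated numerically. The sketch below uses a simple trapezoidal rule; all parameter values (ω1, ω, r, d, K, P, T) are invented placeholders, not the authors' experimental settings.

```python
import math

def velocity(t, w1, w, r, d):
    """Relative speed at working point A at time t, per Eq. (7)."""
    c = math.cos(w * w1 * t)
    inner = (w * r) ** 2 - w * (2 * r * r - 2 * r * d * c) \
            + r * r + d * d - 2 * r * d * c
    return w1 * math.sqrt(max(inner, 0.0))

def removal(K, P, T, w1, w, r, d, steps=1000):
    """Material removal at A over one cycle: R = K*P*integral of V dt, Eq. (8)."""
    h = T / steps
    s = 0.5 * (velocity(0.0, w1, w, r, d) + velocity(T, w1, w, r, d))
    s += sum(velocity(k * h, w1, w, r, d) for k in range(1, steps))
    return K * P * s * h

# Illustrative run: speed ratio w = 3, pad geometry in metres.
R_A = removal(K=1e-6, P=2.0, T=10.0, w1=1.0, w=3.0, r=0.02, d=0.005)
```

Sweeping w or the set-over-dependent geometry here reproduces the kind of parameter study shown in Figure 2.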
Figure 2. Removal function curves referring to different parameters
3.
Experiments
In order to verify the theoretical predictions on processing velocity above, grinding experiments were carried out on a home-made experimental apparatus located at The Chinese University of Hong Kong, as shown in Figure 3. At the heart of this apparatus, a sub-aperture pad measuring 50 mm in diameter is screwed onto the end face of a principal axis and is rotationally driven by a motor via a coupling. A tool feed mechanism consisting of two crossed linear guides, two ball screws, and two servo motors provides backward-forward and left-right feed of the tool table. A work-piece holder is mounted on a Z-axis rotary stage.
Figure 3. Photo of the experimental equipment (motor, guide, pad and work-piece)
Figure 4. Material removal feature
Cerium oxide particles (0.5~1 μm in size) were adopted as loose abrasives in polishing experiments to illustrate the capability of the proposed sub-aperture polishing method. Since the material removal quantity is nearly linear in polishing time, material removal rates during one polishing cycle of 10 minutes were calculated under different relative rotation speeds and abrasive sizes. The material removal histogram is shown in Figure 4. The maximum material removal rate increases quickly as the polishing speed and the abrasive size increase, which is consistent with the trend reflected in Figure 2. In fact, increasing the speed reduces the sharpness of the abrasive grains, which is why abrasives must be renewed in time. To confirm this, cerium oxide particles were collected after polishing at various speeds and observed under a SEM. Figures 5(a), (b), (c) and (d) show the SEM images for ω1 = ω2 = 0 and for relative speeds ω of 3, 6 and 9 respectively, with an abrasive size of 1 μm and a polishing time of 30 minutes. As shown in the figures, abrasive grains with originally sharp edges and larger blocks become truncated and finally break up. This indicates that a proper relative speed range should be chosen in order to realize efficient, high-quality removal.
Figure 5. SEM images of CeO2 grains obtained at various working speeds: (a) ω1 = ω2 = 0; (b) ω = 3; (c) ω = 6; (d) ω = 9
4.
Conclusions
In order to realize optimal optical surfacing for high-accuracy mirrors or lenses, the mechanism of the computer-controlled sub-aperture pad manufacturing process has been discussed. By introducing a removal function describing the movement of the pad, equations relating material removal to the relative velocity have been established and simulated, and experiments have confirmed the theoretical model. The theoretical and experimental results indicate that, besides the well-known dwell-time, velocity is a key parameter affecting the manufacturing process, and that the maximum material removal rate increases quickly as the polishing speed and the abrasive size increase.
5.
Acknowledgements
This work was supported in part by the Innovation and Technology Support Program of Hong Kong Special Administrative Region Innovation and Technology Fund under Grant ITS/106/06, the National Natural Science Foundation of China under Grant 60644003 and Beijing Nova Program of China under Grant 2006B24 and Excellent Young Scholars Research Fund of BIT under Grant 2006Y0101.
6.
References
[1] Paula G, (1997) Automating lens manufacturing. Mechanical Engineering 119(3):88–91
[2] Negishi M, (1995) Studies of super-smooth polishing on aspherical surfaces. Int. J. Japan Soc. Prec. Eng. 29:1–4
[3] Doughty G and Smith J, (1987) Microcomputer-controlled polishing machine for very smooth and deep aspherical surfaces. Applied Optics 26:2421–2426
[4] Jones R A and Rupp W J, (1991) Rapid optical fabrication with computer-controlled optical surfacing. Optical Engineering 30:1962–1969
[5] Suzuki H, Hara S and Matsunaga H, (1993) Study on aspherical surface polishing using a small rotating tool - development of polishing system. Int. J. Japan Soc. Prec. Eng. 59(10):1713–1717
[6] Rupp W J, (1972) Loose abrasive grinding of optical surfaces. Applied Optics 11(12):2797–2810
[7] Pollicove H M, (2000) Next generation optics manufacturing technologies. Proceedings of the SPIE 4231:8–15
[8] Juranek H J, Sand R, Schweizer J, Harnisch B, Kunkel B, Schmidt E, Litzelmann A, Schillke F and Dempewolf G, (1998) Off-axis telescopes - the future generation of earth observation telescopes. Proceedings of the SPIE 3439:104–115
[9] Wagner R E, Shannon R R, (1974) Fabrication of aspherics using a mathematical model for material removal. Applied Optics 13(7):1683–1689
Load Balancing Task Allocation of Collaborative Workshops Based on Immune Algorithm XiaoYi Yu1, ShuDong Sun2 1
Department of Industrial Engineering, Northwestern Polytechnical University, Xi’an 710072, China. E-mail: [email protected] 2 Department of Industrial Engineering, Northwestern Polytechnical University, Xi’an 710072, China. E-mail: [email protected]
Abstract The load-balancing task allocation problem of collaborative workshops under flexible process constraints is described. A load-balancing-oriented task combinatorial optimization allocation model is established that takes task collaboration into account, and a task allocation algorithm based on the immune algorithm is proposed to solve the problem. The concept of a dynamic task-resource matching matrix is introduced. The vaccine obtaining and updating operation, new-antibody production operation and mutation operation are designed around this matrix to speed up finding the optimal solution. The expectation reproductive rate is adopted as the evaluation criterion of an antibody in order to prevent non-optimal antibodies from occupying a large share of the population, so that premature convergence is avoided when these immune operations take effect. In addition, the experimental results indicate that the algorithm solves the task allocation problem of real enterprises in terms of collaboration cost and load imbalance, and possesses great validity and good prospects of application. Keywords: task combinatorial optimization allocation; immune algorithm; production management and control; collaborative manufacturing
1.
Introduction
Task optimization allocation is the foundation and key of many systems and has been studied extensively in mechanical engineering, computer science and operations research. It is both a typical combinatorial optimization problem and a common type of NP-complete problem. In recent years, heuristic algorithms such as simulated annealing, genetic algorithms and ant algorithms have provided new ways to solve such NP-complete problems [1,2,3]. Currently, research on the problem focuses mainly on two fields: one is distributed computing in network environments [4-6], the other is task allocation
in manufacturing systems. Owing to length limits, the first issue is not discussed further here. Because manufacturing systems are uncertain and dynamic, the second issue is harder to model, more complicated to compute and more heavily constrained than the first. In this paper, the task allocation problem in manufacturing systems is divided into three categories according to level. The first category is task allocation between enterprises. The second is task allocation between collaborative workshops inside an enterprise. The third is task allocation between machines inside a workshop. Since each category has different targets and objects, different allocation methods apply. For the first kind of problem, game theory, consultation/negotiation mechanisms and agent technology [7-9] are usually adopted. The third kind is the well-known job-shop scheduling problem, for which intelligent algorithms [10-13] are mainly adopted. This study addresses the second kind of problem, a new class of allocation problem that has emerged with flexible processes. Nowadays industry relies on experience-based methods, which leave the load unbalanced. In this problem the collaborating workshops are the objects, and the goal is to resolve the load imbalance while taking collaboration costs into account. There has been much less research on this problem than on the other two. From the level division above, the second kind of task allocation plays an important role: it accepts the allocation solution from the upper level as input, outputs the optimal task allocation between collaborative workshops, and its output serves as the essential data foundation and decision-making basis for the lower-level job-shop scheduling problem.
The collaborative workshop task allocation therefore plays an important role in optimizing manufacturing system performance, and studying it has both theoretical significance and practical application value. In this paper, the immune algorithm (IA) is introduced to solve the load-balancing task allocation problem between workshops while taking collaboration costs into account. This random-search-based algorithm can effectively overcome the disadvantages of other intelligent algorithms, namely prematurity, poor diversity and low search speed [14]. Immune mechanisms such as immune selection, immune regulation and vaccination improve search efficiency, accelerate global convergence, and find the optimal collaborative task allocation within a reasonable time.
2. Load Balancing Task Allocation Problem Between Collaborative Workshops

Task allocation between collaborative workshops is a typical multi-workshop mapping problem. The mapping problem can be represented with two undirected graphs, called the Task Interaction Graph (TIG) and the Plant Collaboration Graph (PCG). The TIG is denoted GT = (V, E). Its |V| = N vertices are labeled as
(1,2,…,i,j,…,N). Vertices of GT represent the atomic tasks of the parallel program, and the vertex weight ωi denotes the processing/assembly cycle of task i for 1 ≤ i ≤ N. Edges E represent interactions between tasks; the edge weight eij denotes the collaboration time between tasks i and j connected by edge (i, j) ∈ E. The PCG is denoted GP = (P, D). GP is a complete graph with |P| = K vertices and |D| = K(K − 1)/2 edges. Vertices of GP are labeled (1,2,…,p,q,…,K), representing the workshops. The edge weight d_pq, for 1 ≤ p, q ≤ K and p ≠ q, denotes the unit collaboration cost between workshops p and q. The problem of allocating tasks to proper workshops is to find a many-to-one mapping function M: V → P, that is, to assign each vertex of GT to a unique node of GP so that each workshop's load (Load_p) is balanced while the total collaboration cost (Coll) between workshops is minimized.

Load_p = Σ_{i∈V, M(i)=p} ωi,  1 ≤ p ≤ K   (1)

Coll = Σ_{(i,j)∈E, M(i)≠M(j)} eij · d_{M(i)M(j)},  1 ≤ i, j ≤ N   (2)
M(i) denotes the workshop to which task i is mapped, i.e., M(i) = p means that task i is mapped to workshop p. Load_p in Equation (1) is the sum of the consumed resources ωi of the tasks allocated to workshop p, i.e., those with M(i) = p. In Equation (2), if tasks i and j of GT are allocated to different workshops, i.e., M(i) ≠ M(j) in GP, a collaboration cost occurs. Its contribution to Coll is the product of the collaboration time eij of tasks i and j and the unit collaboration cost d_pq of the two workshops, where M(i) = p and M(j) = q. Figure 1 shows an example of the task allocation problem between collaborative workshops. Figure 1(a) represents a TIG of N = 20 tasks, and Figure 1(b) a PCG with a 2-dimensional mesh topology consisting of K = 5 workshops. The numbers in circles are the identifiers of tasks and workshops in Figures 1(a) and 1(b) respectively. In Figure 1(a), the vertex and edge weights give the consumed time and collaboration time respectively; in Figure 1(b), the edge weight is the unit collaboration cost between two workshops. Figure 2 shows an example allocation of tasks to workshops for the mapping problem of Figure 1. In this paper, a spin matrix is used to represent the mapping state of tasks to workshops. A spin matrix consists of K workshop rows and N task columns representing the allocation state. The value of a spin element s_pi is the probability of mapping task i to workshop p. Therefore 0 ≤ s_pi ≤ 1 and each column sums to 1. The initial value of s_pi is 1/Ki' (where Ki' denotes the number of workshops that can process task i), and s_pi eventually converges to 0 or 1 as the solution state is reached; s_pi = 1 means that task i is mapped to workshop p.
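As a small worked example of Equations (1) and (2), the sketch below computes the per-workshop load and the total collaboration cost for a mapping; the three-task instance is invented for illustration, not the Figure 1 data.

```python
def load(p, M, w):
    """Load_p of Eq. (1): sum of weights w[i] of tasks mapped to workshop p."""
    return sum(w[i] for i in M if M[i] == p)

def coll(M, e, d):
    """Coll of Eq. (2): sum of e_ij * d_{M(i)M(j)} over edges whose ends
    land in different workshops."""
    return sum(eij * d[(M[i], M[j])]
               for (i, j), eij in e.items() if M[i] != M[j])

w = {1: 3, 2: 5, 3: 2}        # task processing/assembly times (TIG vertices)
e = {(1, 2): 4, (2, 3): 1}    # collaboration times (TIG edges)
d = {(1, 2): 2, (2, 1): 2}    # unit collaboration costs (PCG edges)
M = {1: 1, 2: 2, 3: 2}        # mapping: task -> workshop
```

With this mapping, tasks 2 and 3 share a workshop, so only edge (1, 2) contributes to the collaboration cost.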
Figure 1. The Example of Task Allocation Problem: a. Task Interaction Graph GT; b. Plant Collaboration Graph GP
Figure 2. A Solution of Figure 1
Figures 3 and 4 display the initial and final solution spin matrices for the example of Figure 1, respectively. Without loss of generality, we set Ki' = K.

Figure 3. The Initial State (a 5 × 20 spin matrix in which every element s_pi = 1/5)
Task:       1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
Workshop 1: 1  0  0  0  0  0  1  0  0  0  1  0  0  0  0  0  1  0  0  0
Workshop 2: 0  1  0  0  0  1  0  0  0  0  0  1  0  0  1  0  0  0  1  0
Workshop 3: 0  0  0  1  0  0  0  1  0  0  0  0  1  1  0  0  0  0  0  0
Workshop 4: 0  0  0  0  1  0  0  0  0  1  0  0  0  0  0  0  0  1  0  0
Workshop 5: 0  0  1  0  0  0  0  0  1  0  0  0  0  0  0  1  0  0  0  1

Figure 4. The Solution State
The objective function F(s) is set to minimize the total collaboration cost of Equation (2) and to balance the production load among workshops of Equation (1).
F(s) = [ Σ_{i=1}^{N} Σ_{j≠i}^{N} Σ_{p=1}^{K} Σ_{q≠p}^{K} eij s_pi s_jq d_pq ] × Σ_{p=1}^{K} ( Σ_{i=1}^{N} s_pi ωi / C_p − 1 )²   (3)

eij: the collaboration time of tasks i and j in the TIG
ωi: the processing/assembly time of task i in the TIG
d_pq: the unit collaboration cost of workshops p and q in the PCG
s_pi: the probability of task i mapping to workshop p
C_p: the capacity of workshop p

The first term of the objective function, Equation (3), represents the inter-workshop collaboration cost between tasks i and j when they are mapped to different workshops p and q. This term is therefore minimized when tasks with large collaboration times are mapped to the same workshop. The second term is the sum of squared errors of the load relative to the capacity of each workshop p, and it is minimized when the loads of all workshops are nearly equal. In the objective function, the two terms are multiplied so that each part better reflects its impact on the objective.
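Equation (3) can be transcribed directly for a spin matrix s of K rows and N columns. The tiny two-task, two-workshop instance below is invented for illustration; note that a perfectly balanced load would make the second factor, and hence F(s), zero.

```python
def objective(s, e, d, w, C):
    """F(s) of Eq. (3): (collaboration cost) * (sum of squared load errors)."""
    K, N = len(s), len(s[0])
    cost = sum(e[i][j] * s[p][i] * s[q][j] * d[p][q]
               for i in range(N) for j in range(N) if j != i
               for p in range(K) for q in range(K) if q != p)
    balance = sum((sum(s[p][i] * w[i] for i in range(N)) / C[p] - 1) ** 2
                  for p in range(K))
    return cost * balance

# Invented instance: 2 tasks, 2 workshops, each task fixed to one workshop.
s = [[1, 0],        # workshop 1 holds task 1
     [0, 1]]        # workshop 2 holds task 2
e = [[0, 3], [3, 0]]        # collaboration times
d = [[0, 2], [2, 0]]        # unit collaboration costs
w = [4, 4]                  # task weights
C = [8, 8]                  # workshop capacities
F = objective(s, e, d, w, C)
```

Here the separated tasks incur collaboration cost 12 and each workshop is at half capacity, so the balance factor is 0.5.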
3.
Task Allocation Algorithm Based on IA
The immune algorithm is a type of global optimization search algorithm, artificially constructed on the basis of the genetic evolution mechanism and the processing mechanisms of the biological immune system [15]. Although the immune system has many excellent computational properties, existing immune algorithm models still have problems, mainly in the evaluation of antibodies, the restraint and promotion of antibodies, and the use of memory information. In particular, when the affinity of antibody and antigen is the only criterion for evaluating an antibody, high-affinity antibodies are promoted and low-affinity antibodies are restrained, which usually drives the algorithm into local optima and premature solutions. Moreover, the memory information is used only in the initial population; it is merely updated, not exploited, during evolution, so it does not accelerate convergence. We propose an immune algorithm based on a dynamic task-resource matching matrix to remedy these shortcomings. Figure 5 illustrates the main flow of the task allocation algorithm based on IA. During initialization, we identify the antigen, analyse the characteristics of the problem and encode the antibody. We adopt a decimal encoding in which the workshops participating in collaborative production are coded 1, 2, …, K. The length of an antibody equals the task number N, and the ith gene p means that task i is mapped to workshop p. This coding is intuitive and easy to operate, and requires no decoding. Figure 2 illustrates an encoding sample of the antibody.
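The decimal encoding can be sketched as follows, with a hypothetical five-task, three-workshop instance (not the paper's example): an antibody is simply a list of workshop indices, and reading off the task-to-workshop mapping requires no decoding step.

```python
# Hypothetical instance: N = 5 tasks, K = 3 workshops. The i-th gene holds
# the workshop (1..K) to which task i is allocated.
antibody = [2, 1, 3, 3, 1]

# "Decoding" is just enumeration: task number -> workshop number.
mapping = {task: shop for task, shop in enumerate(antibody, start=1)}
print(mapping)   # -> {1: 2, 2: 1, 3: 3, 4: 3, 5: 1}
```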
Figure 5. Flowchart of IA for Task Allocation
In the first round of immune evolution, an initial antibody population antiby(t) of size popsize is generated randomly from the task-resource matching matrix. Antibodies are selected to participate in immune evolution according to their expectation reproductive rate: antibodies with a high rate are promoted and those with a low rate are restrained. Through this promotion/restraint mechanism the idea of survival of the fittest is embodied, the absolute dominance of a few individuals is prevented, and the dynamic self-regulating function of the immune system is achieved. At the end of each generation of immune operations we obtain a vaccination antibody population antiby_v, a crossover cloning antibody population antiby_c, a mutation antibody population antiby_m, and a newly collected antibody population antiby_n. These populations are merged with the immune-memory optimal antibody population antiby_o to create the evolutionary population antiby(t+1) of the next generation. The above evolution process is repeated until a satisfactory solution is obtained. 3.1
Evaluation and Selection of the Antibody
It is necessary to evaluate every antibody in the population during the immune evolution process. If the fitness of the antibody is the only criterion, premature convergence easily occurs when some non-optimal antibodies occupy a considerable share of the population. The antibody concentration is therefore adopted to restrain large-scale but non-optimal individuals, and entropy is introduced as the indicator for measuring the similarity of antibodies. We treat the expectation reproductive rate of the antibody as the evaluation criterion, calculated by the following equations:

e_v = fit(v) / c_v
fit(v) = 1 / F(v)
c_v = (1/popsize) Σ_w ac_{v,w}
ac_{v,w} = 1 if Dx_{v,w} ≥ λ_ac, 0 otherwise   (4)
Dx_{v,w} = 1 / (1 + H(2))
H(2) = (1/N) Σ_{i=1}^{N} H_i(2)
H_i(2) = −Σ_{p=1}^{Ki'} P'_pi log P'_pi

fit(v): the fitness of antibody v; F(v): the value of the objective function, Equation (3), when antibody v is treated as the task allocation solution; c_v: the concentration of antibody v; λ_ac: the affinity threshold; Dx_{v,w}: the affinity between antibodies v and w; H(2): the average entropy between antibodies v and w, with H(2) = 0 when the two antibodies are identical;
N: the length of the antibody genes; H_i(2): the entropy of the ith gene between antibodies v and w; Ki': the number of optional genes at the ith locus. In Equation (4), the expectation reproductive rate captures the relationship of fitness, affinity and concentration: it considers both the relationship of antibody with antigen and the relationships among antibodies. In this paper, immune selection means selecting antibodies according to the expectation reproductive rate. From the viewpoint of the immune mechanism, immune selection reflects uncertainty and the promotion/restraint mechanism. In this study, the Roulette strategy, expressed by the following equation, is adopted as the selection procedure.

P(s_i) = e_{s_i} / Σ_{s_j∈G} e_{s_j}   (5)
where P(s_i) is the selection probability of antibody s_i. With the Roulette strategy, the antibody best suited to the environment at each generation is selected in proportion to its expectation reproductive rate. Moreover, since the Roulette strategy samples with replacement, the selection pressure is relatively high. 3.2
Vaccine Obtained and Vaccination
An effective vaccine has an important positive effect on the convergence of the algorithm. We obtain vaccines from the dynamic task-resource matching matrix, which is dynamically updated during the immune evolution process; in a sense, the vaccine is itself constantly evolving. The task-resource matching matrix of each generation is calculated by Equation (6), where P_pi is the probability that the ith gene is gene p and g_v(i) is the gene code of the ith gene in antibody v. If the probability of an allele P_pi in the dynamic task-resource matching matrix is larger than a preset threshold, that gene code is the vaccine of the allele. The vaccine format is given by Equation (7).

P_pi = (1/popsize) Σ_{v=1}^{popsize} a_{v,i},  a_{v,i} = 1 if g_v(i) = p, 0 otherwise   (6)

Y = (y1, y2, …, yN),  y_i = p if P_pi = max(P_pi) ≥ T (T a given threshold), 0 otherwise   (7)

Vaccination generates specific antibodies in a directed way, and effectively accelerates the convergence of the algorithm by using prior knowledge of the problem. A selected antibody g1 is vaccinated: the gene codes of vaccine Y are inoculated into g1 in order, and a new vaccinated antibody g2 is generated by exchanging the corresponding genes between antibody g1 and vaccine Y. Repeating this process yields the vaccination antibody population antiby_v. Figure 6 shows an example of the vaccination.
Figure 6. Example of the Vaccination
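The vaccine-extraction step of Equations (6) and (7) can be sketched as follows; the four-antibody population and the threshold value are invented for illustration, and 0 marks a locus with no vaccine gene.

```python
def extract_vaccine(population, K, T):
    """Build the matching matrix column by column (Eq. 6) and keep, per
    locus, the most frequent gene if its frequency reaches T (Eq. 7)."""
    popsize, N = len(population), len(population[0])
    vaccine = []
    for i in range(N):
        # P_pi: fraction of antibodies whose i-th gene equals p
        counts = {p: 0 for p in range(1, K + 1)}
        for antibody in population:
            counts[antibody[i]] += 1
        p_best = max(counts, key=counts.get)
        vaccine.append(p_best if counts[p_best] / popsize >= T else 0)
    return vaccine

pop = [[1, 2, 3], [1, 2, 1], [1, 3, 2], [1, 2, 2]]
print(extract_vaccine(pop, K=3, T=0.75))   # -> [1, 2, 0]
```

Locus 1 is unanimous and locus 2 reaches the threshold exactly, while locus 3 has no sufficiently dominant gene, so no vaccine is injected there.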
3.3
Crossover Cloning and Mutation of Antibody
In the crossover procedure, new antibodies are created by gene recombination; the offspring inherit the excellent genes of the parent generation, so outstanding gene patterns multiply rapidly and spread in the population, pushing the evolution in the optimal direction. To avoid local optima caused by local similarity among crossover antibodies that can no longer generate new antibodies, we improve the diversity of the population by a mutation operation that guides the evolution to explore new search space. A two-point crossover/mutation operation is adopted in this algorithm. First, the parents g1 and g2 are selected by the Roulette strategy, and a sub-interval [x1, x2] is selected randomly from the interval [1, N] as the crossover interval; each antibody is thus divided into three gene segments, called the head segment, the crossover/mutation segment and the tail segment. Second, antibody g1 exchanges its crossover segment with g2, creating two new offspring antibodies g3 and g4. Third, we recombine the genes of the mutation segment according to the probabilities of the task-resource matching matrix, generating a new antibody g5. Repeating the above processes, we obtain the crossover antibody population antiby_c and the mutation antibody population antiby_m, respectively. 3.4
Analysis of Algorithms
In this algorithm, the immune selection operation ensures the better candidate to be chosen to participate in the evolution procedure and provides the opportunity to explore new optimal space. Immune memory both improves the efficient of solving the problem and provides the necessary preparations for the local search. Immune memory operation, crossover cloning operation and mutation operation all together enhance the ability of local search of the algorithm, and provide more chances for algorithm to find optimal solution. The concentration restraint ensures that the same or similar antibodies will not be overmuch reproduced in the population. Its role is not only to preserve good, middling, and bad antibodies, but also to reduce the selection pressure of immune selection operation. The immune selection
738
X.Y. Yu and S.D. Sun
provides more survival chances both for antibodies with high fitness and for antibodies with low fitness but low concentration, thereby maintaining the diversity of the population. This reflects the promotion/restraint mechanism and the random character of antibody selection. We propose an antibody generation approach based on the dynamic task-resource matching matrix, used both when initializing the antibody population and when collecting new antibodies. This approach fine-tunes the diversity of the antibody population and enhances the global search ability. Because it takes the probabilities in the dynamic task-resource matching matrix into account, the approach accelerates the search for the optimal solution, while ensuring that self-antibodies can be introduced at any time, which gives the algorithm an open character. Through the interaction of these immune operations the algorithm has the following characteristics: (1) the selection of an antibody is constrained by both its fitness and its concentration, a unification of determinism and randomness; (2) the crossover/mutation operations embody both neighborhood search and parallel search; (3) the way the antibody learns from the antigen is reflected in the coordination and cooperation of mining, exploration, selection and self-regulation during the search procedure.
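The two-point crossover/mutation operation described above can be sketched as follows. This is only an illustration: the function name, the encoding of an antibody as a list of workshop ids, and the match_probs structure are assumptions, not the authors' implementation.

```python
import random

def two_point_crossover_mutation(g1, g2, match_probs, rng=random.Random(0)):
    """Two-point crossover of parent antibodies g1 and g2, plus a mutation
    that resamples the middle segment from task-resource matching
    probabilities.

    g1, g2      : antibodies, lists of workshop ids (one gene per task)
    match_probs : per task index, a dict {workshop_id: matching probability}
    Returns the offspring (g3, g4, g5).
    """
    n = len(g1)
    x1, x2 = sorted(rng.sample(range(n), 2))          # crossover interval
    # Crossover: swap the middle gene segment between the two parents,
    # keeping each parent's head and tail segments.
    g3 = g1[:x1] + g2[x1:x2 + 1] + g1[x2 + 1:]
    g4 = g2[:x1] + g1[x1:x2 + 1] + g2[x2 + 1:]
    # Mutation: resample the middle segment of g1 according to the
    # probabilities in the task-resource matching matrix.
    g5 = list(g1)
    for i in range(x1, x2 + 1):
        workshops = list(match_probs[i])
        weights = [match_probs[i][w] for w in workshops]
        g5[i] = rng.choices(workshops, weights=weights, k=1)[0]
    return g3, g4, g5
```

Repeating this over the selected parents would build the crossover population antiby_c and the mutation population antiby_m.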
4.
Simulation Results
In this simulation, the number of evolution generations is 100 and the population size is 50; the proportions of the crossover, mutation, vaccination, newly collected antibody and immune memory populations are 0.4, 0.2, 0.2, 0.1 and 0.1 respectively; the threshold for distilling optimal genes is 0.85, and the affinity threshold is 0.85. In the early stage of the run there is no vaccine available for the vaccination operation, so a newly collected antibody population, generated from the dynamic task-resource matching matrix, takes the place of the vaccination population. The problem in Section 2 is treated as the test problem, with the initial load of each workshop set to 0. Figure 7 shows the immune evolution process: it plots the objective function (blue), the collaboration cost function (red) and the sum-of-squares error function (green) of the best antibody in each generation; the objective function and the collaboration cost function share the left y-axis, while the sum-of-squares error function uses the right y-axis. The optimal solution is obtained after the algorithm runs for 44 generations. Figure 8 shows the optimal allocation solution of the problem. Figure 9 shows the load, utilization, and the highest, average and lowest utilization of each workshop under the optimal allocation. Taken together, Figures 7 and 9 show that the proposed algorithm balances the workshop loads while minimizing the collaboration cost of the enterprise.
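For concreteness, the stated proportions partition the population of 50 as sketched below; the function name and sub-population labels are illustrative, not taken from the paper.

```python
def subpopulation_sizes(pop_size=50, proportions=(0.4, 0.2, 0.2, 0.1, 0.1)):
    """Split the population among the crossover, mutation, vaccination,
    newly collected and immune-memory sub-populations used in the run."""
    labels = ("crossover", "mutation", "vaccination", "new_collected", "memory")
    sizes = {lab: int(round(p * pop_size)) for lab, p in zip(labels, proportions)}
    # The five proportions sum to 1.0, so the sizes partition the population.
    assert sum(sizes.values()) == pop_size
    return sizes
```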
Load Balancing Task Allocation of Collaborative Workshops
Figure 7. The Immune Evolution Process
Figure 8. The Optimization Solution
Figure 9. Workshop Load of Optimization Solution
There are new orders in actual production, so the problem of allocating newly added tasks must be faced. There are usually two ways to solve it. One is to treat the original tasks and the new tasks as a whole and re-allocate them all; the other is to allocate the new tasks while keeping the original allocation solution. Assume now that there are ten new tasks to be allocated: Figure 10 shows the task interaction graph of the newly added tasks, and Figure 11 shows their task-resource matching matrix derived from the flexible process constraints.
Figure 10. Task Interaction Graph of New Added Tasks
Task        21    22    23    24    25    26    27    28    29    30
1           1/3   1/4   0     1/5   1/4   1/3   0     1/4   1/5   0
2           1/3   1/4   1/3   1/5   1/4   1/3   1/3   1/4   1/5   1/2
3           1/3   0     1/3   1/5   1/4   0     1/3   0     1/5   0
4           0     1/4   0     1/5   1/4   0     0     1/4   1/5   0
5           0     1/4   1/3   1/5   0     1/3   1/3   1/4   1/5   1/2
Figure 11. Task-Resource Matching Matrix of New Added Tasks
The first way usually needs to consider the re-planning cost, the re-scheduling cost, the re-computing cost and so on; although it may find better solutions than the second way, it is therefore less used. In this study the second way is adopted. In this problem the initial load of each workshop is no longer 0. Figure 12 shows the allocation solution of the newly added tasks, and Figure 13 shows the updated workshop loads; from the results in Figure 13 we can conclude that the algorithm meets the task allocation requirements of actual production.
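For illustration only, the second way (keeping the original allocation and placing only the new tasks against the existing workshop loads) could be approximated by a greedy load-balancing sketch such as the one below. The paper itself re-runs the immune algorithm with non-zero initial loads; every name here is hypothetical.

```python
def allocate_new_tasks(initial_load, new_tasks, candidates, work):
    """Greedy stand-in for re-allocation with fixed original assignments.

    initial_load : {workshop: load} inherited from the original solution
    new_tasks    : iterable of new task ids
    candidates   : {task: feasible workshops} (non-zero matching-matrix rows)
    work         : {task: workload added to the chosen workshop}
    """
    load = dict(initial_load)            # original allocation stays untouched
    assignment = {}
    for t in new_tasks:
        # Place each new task on the feasible workshop with the lowest load.
        w = min(candidates[t], key=lambda ws: load[ws])
        assignment[t] = w
        load[w] += work[t]
    return assignment, load
```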
Figure 12. Optimization Allocation Solution of New Added Tasks
Figure 13. Updated Workshop Load of Optimization Solution
5.
Conclusions
We have studied the load balancing task allocation problem from the viewpoint of cooperative production. An immune algorithm supporting allocation/re-allocation is proposed for solving this problem, and the concept of a dynamic task-resource matching matrix is introduced to enhance the search efficiency of the algorithm. The simulation results indicate that the algorithm solves the task allocation problem of realistic enterprises in terms of collaboration cost and load imbalance, and that it possesses great validity and good prospects of application.
6.
Acknowledgment
This research is supported by the National High Technology Research and Development Program of China (863 Program) under Grant 2007AA04Z187, and by a grant from the Ph.D. Programs Foundation of the Ministry of Education of China (No. 2004699025).
Study on Reconfigurable CNC System Jing Bai, Xiansheng Qin, Wendan Wang, Zhanxi Wang Department of Mechatronics, Northwestern Polytechnical University, Xi’an, China
Abstract The aim of the study on the RCNC (Reconfigurable Computer Numerically Controlled) system is to provide a platform for CNC development. The RCNC system is an open and reconfigurable system in which a set of pre-defined components is given and an assembly of selected components is composed to satisfy specific customer requirements subject to the constraints. In this paper, the architecture of the RCNC is proposed and the software design of the system is discussed. Keywords: Reconfigurable, Open-architecture, CNC
1.
Introduction
CNC technology is widely regarded as a significant symbol of a country's manufacturing power [1, 2]. However, most existing CNC systems are provided to customers with a closed architecture, i.e. the hardware modules and the software structure are proprietary and incompatible. To better meet increasingly frequent and unpredictable market changes, the CNC system should be open and reconfigurable; such a system can reduce the waste of resources by eliminating needless constituents during the development of a new CNC system [3,4,5]. Currently, three active industrial consortiums are addressing the definition and application of open-architecture CNC systems: the OMAC (Open Modular Architecture Controllers) of the USA, the OSACA (Open System Architecture for Control within Automation systems) of Europe, and the OSEC (Open System Environment for Controller) of Japan. All of their efforts aim to replace closed CNC systems with open-architecture controllers. Based on the definition and development of APIs, various standard components are first delivered to machine tool suppliers and integrated into different control systems; the integrated control systems and machines are then delivered to the final users to meet their specific needs. Although plenty of research on CNC systems has been carried out and a wide variety of design strategies and solutions have been proposed, there are still new demands and opportunities to empower current CNC machines with more expected
features such as interoperability, adaptability, agility and reconfigurability. Building on the research of these three industrial consortiums, an RCNC system with an open and reconfigurable architecture is proposed to ease CNC system development according to customer requirements.
2.
The RCNC System
To understand the concept of the RCNC, component technology and the methods of configurable software used in process control systems are considered together. The RCNC system is an extension of the open-architecture CNC, and includes reconfigurable software and reconfigurable hardware. Reconfigurable software means that the software can be designed, changed or amended freely when designing a new system, repairing the software or adding new functions; the designers only need to obey the predefined constraints, without further consideration of the hardware [6,7]. Reconfigurable hardware, on the other hand, means that the hardware selection has no influence on the software. As a result, the software in an RCNC system should be flexible and convenient enough to support different applications, and its hardware should provide plug-and-play support for existing hardware resources. The method of partitioning components is important for the RCNC, since it decides the flexibility of the system. It is widely known that more flexibility can be achieved with smaller components; however, the system becomes more complicated if the components are too small, and small components consume more system resources, affecting the feasibility and realization of the system. Consequently, the balance between component size and the degree of flexibility of the system must be considered during system design. The RCNC exhibits such essentials as modularization, standardization and interoperation. Modularization has two meanings: modularization of function and modularization of structure. A function unit is configurable and can be composed of smaller pre-defined components; the computations that realize tasks inside the system are detachable and replaceable.
The modularization of function is the basis of the modularization of structure, and the establishment of standards is based on a reasonable partition of modules. Interoperation means that the controller is independent of specific hardware and of the OS (Operating System); moreover, using pre-compiled APIs, the controller can be transferred from one OS to another. The structure of the RCNC is shown in Figure 1. It can be divided into five layers: the application software, the CNC system software, the system software, the hardware interface and the hardware platform. The application software includes the software that meets special needs and the components produced by secondary development. The CNC software has no dependence on the OS; it is comprised of the management software and the function software. The system software not only controls and coordinates the hardware, but also maintains and manages the software. The hardware interface provides the drivers and interfaces for the hardware to access the system. The hardware
platform can be thought of as the main board of a computer, which has standard slots for the different controllers and function boards.
Figure 1. The RCNC system
3.
System Architecture
The RCNC is constructed with a PC (Personal Computer), an NC (Numerical Controller) and a current OS. The architecture is shown in Figure 2.
Figure 2. The architecture of the RCNC system
Application software (such as AutoCAD, SolidWorks and MasterCAM) can be used on the system directly. The hardware of the RCNC is constructed with an IPC (Industrial Personal Computer) and an NC, and a current computer is adopted as the system platform. The abundant interfaces of current
computer ensure that the RCNC system is an open-architecture system. The system uses the ISA, PCI or PC104 bus as the system bus, which allows existing industrial computers to become part of the system. A multi-axis motion control card, such as a PMAC (Programmable Multi-Axis Controller) or Galil card, is in charge of motion control and switching control. The software running on the system mainly includes the following parts:
1. The RCNC software: comprised of the application program management, the GUI (Graphic User Interface) operating program, the script editor program, the database reconfiguration program, the database operating program, the communication reconfiguration program and the I/O communication program, etc.
2. The maintenance software: used to configure the hardware of the RCNC system and maintain the special software.
3. Application software under the Windows OS: AutoCAD, SolidWorks and MasterCAM, etc.
Figure 3. The overall structure of the software
4.
Structure of the Software
The software of the RCNC system is the core of the whole system. Its overall structure can be divided into the reconfiguration development environment (for building the HMI, the Human-Machine Interface) and the reconfiguration operating environment (for operating the HMI), as shown in Figure 3. 4.1
Reconfiguration Development Environment
4.1.1
GUI Development Program
The HMI of the traditional CNC system is comparatively simple, and its graphic elements are limited. Considering the demands of real-time behavior and the limits of the hardware, the Windows operating system is selected as the software platform for the RCNC system. As a result, the varied HMIs desired by users can be developed using the rich GUI resources of Windows. The GUI development program provides the following mapping resources:
Vector Drawing Tools: draw the simple graphics in the HMI.
CNC Graphic Library: collects the familiar graphic elements of the HMI, which are extensively used in Siemens, Fanuc and other companies' CNCs. Using these graphic elements, the user can build up a familiar HMI, mainly used to describe static graphs or simple action graphs.
ActiveX Controls: include the ActiveX controls from Windows and other special ActiveX controls for CNC.
Common Components: common components found in various software, such as text box, check box, combo box, track bar, etc.
CNC Special Components: components widely used in CNC systems, such as button, emergency-stop button, check button, fine-tuning button, etc.
4.1.2
Script Editor Program
The script language improves the flexibility of the application program. For example, to respond better to the state of NC machining, the appearance of graphic elements can be changed by the scripting language as the sampled data changes. 4.1.3
Database Reconfiguration Program
The database is used for communication exchange. It not only reads out the NC data from the registers and stores these data in memory for the HMI according to the data structure, but also receives data from the HMI and transfers them to the NC.
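A minimal, illustrative sketch of such an exchange database follows; the class and method names are assumptions, and the real system is a compiled real-time database rather than a Python object.

```python
import threading

class ExchangeDatabase:
    """Thread-safe in-memory tag store shared by the I/O communication
    program (writer of NC samples) and the GUI operating program (reader)."""

    def __init__(self):
        self._tags = {}
        self._lock = threading.Lock()

    def write_from_nc(self, tag, value):
        # Called by the I/O communication program with sampled NC data.
        with self._lock:
            self._tags[tag] = value

    def read_for_hmi(self, tag, default=None):
        # Called by the GUI operating program to refresh graphic elements.
        with self._lock:
            return self._tags.get(tag, default)
```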
4.1.4
Communication Reconfiguration Program
The RCNC can support NC systems from different manufacturers, whose APIs may differ. All corresponding access functions and methods are encapsulated into communication DLLs (Dynamic Link Libraries), compiled for the different types of NC module, so that the DLLs provide a uniform interface. Communication reconfiguration is thus the process of selecting the proper communication DLL for the NC modules adopted in the system. 4.1.5
Steps of Development
1. In the communication reconfiguration program, select the communication components which provide the communication program for the I/O module, according to the type of the motion control card (the NC module).
2. In the database reconfiguration program, establish the data dictionary according to the sampled NC data required by the HMI.
3. In the GUI development program, construct the HMI using the vector drawing tools, controls, CNC graphic library, common components and CNC special components provided by the RCNC system; the animation attributes of graphic elements can be set as well.
4. Establish connections between the graphic elements of the HMI and the corresponding data of the database through the animation links.
5. Set the action scripts of the graphic elements in the HMI via the script editor program.
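The uniform interface provided by the communication DLLs can be sketched as an abstract driver plus a selection step; the names are illustrative, and the real system loads compiled DLLs rather than Python classes.

```python
from abc import ABC, abstractmethod

class NCDriver(ABC):
    """Uniform interface each communication DLL is assumed to expose."""
    @abstractmethod
    def read_sample(self, address): ...
    @abstractmethod
    def write_command(self, address, value): ...

class PMACDriver(NCDriver):
    def read_sample(self, address):
        return 0.0          # placeholder for the vendor-specific call
    def write_command(self, address, value):
        pass                # placeholder

class GalilDriver(NCDriver):
    def read_sample(self, address):
        return 0.0          # placeholder
    def write_command(self, address, value):
        pass                # placeholder

DRIVERS = {"PMAC": PMACDriver, "Galil": GalilDriver}

def select_driver(card_type):
    """Communication reconfiguration: pick the driver matching the card."""
    return DRIVERS[card_type]()
```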
4.2
Reconfiguration Operating Environment
The HMI built in the development environment needs to be run in the operating environment. The operating environment includes the following programs: 4.2.1
GUI Operating Program
It reads out the configured HMI and other configuration information, and writes data into the real-time database according to the correspondences established by the animation links. In addition, displaying changes of the sampled NC parameters according to the graphic element properties, answering the user's input and executing the action scripts are also tasks of the GUI operating program. 4.2.2
Real-time Database Operating Program
The real-time database is the data processing center and the core of the RCNC software. It is in charge of data operation, data storage, data processing, register
and alarm processing, etc. The data exchange between each component and the real-time database is independent; different components exchange data through the real-time database while the system is operating. 4.2.3
I/O Communication Program
The I/O communication program calls the communication and interface functions (in the DLL) in a fixed manner according to the reconfiguration information, and communicates with the NC regardless of which NC is used. After the I/O communication program starts and establishes communication with the NC through the communication DLL, it reads the information data from the NC and writes them into the memory database. The GUI operating program then reads the information data from the memory database and displays them in the corresponding graphic elements. Meanwhile, the GUI operating program sends the user's orders, received from the graphic elements, to the corresponding memory database; the orders are read by the I/O communication program and sent to the NC.
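One sampling cycle of this data flow can be sketched as follows; all names are illustrative, and read_nc stands for the communication function obtained from the DLL.

```python
def poll_once(read_nc, memory_db, tags):
    """One cycle of the I/O communication program: sample every NC tag
    through the communication function and publish it to the memory
    database, from which the GUI operating program reads."""
    for tag, address in tags.items():
        memory_db[tag] = read_nc(address)
    return memory_db
```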
Figure 4. I/O communication function and communication component
5.
Realization of the RCNC System
The RCNC system discussed in this paper has already been launched in industry. As shown in Figure 2, the system takes the PC+NC structure, using an IPC with a PC104 bus and a PMAC 2A-PC104 controller respectively, and the system's OS is Windows 2000. The I/O extension board is the ACC-34A, and the PMAC connects with the servo drives through the ACC-8P. The servo motors are Panasonic permanent magnet AC (Alternating Current) synchronous servo motors. The hardware inside the control cabinet is shown in Figure 5, one of the monitoring interfaces of the system is shown in Figure 6, and the overall equipment is shown in Figure 7.
Figure 5. The control cabinet inside
Figure 6. One of the monitoring interfaces
Figure 7. The overall equipment
6.
Conclusions
Though current CNC systems provide plenty of functions, they are short of adaptability, flexibility and reconfigurability. This paper presents a method to construct an RCNC, which is more open and reconfigurable because it provides
plug-and-play function for both hardware and software. The reconfigurable software can be reconfigured independently from pre-defined components, and different NCs can be accommodated by simply loading the corresponding communication DLL. The sales performance of the RCNC illustrates its strong market potential. Nevertheless, many technical problems remain to be solved, such as enhancing the flexibility of the application program, expanding the communication modes, and perfecting the functions.
7.
Acknowledgement
This work is supported by the Shaanxi Province Technology Development Subject under grant 2007K05-01; the authors gratefully acknowledge this support. Thanks are also due to the industrial partners for their invaluable feedback. Finally, we thank our colleagues at Northwestern Polytechnical University for their quiet but endless contributions.
8.
References
[1] Xu XW, Newman ST (2006) Making CNC machine tools more open, interoperable and intelligent: a review of the technologies. Computers in Industry 57(2): 141–152
[2] Yang H, Li B, Zhao YJ (2003) Research on development platform of open NC machine training system. Machine Tool & Hydraulics 16(2): 180–183
[3] Zhang CR, Guo LN, Lan HB (2006) Open CNC system components implementation based on CCM. Manufacturing Technology & Machine Tool 38(2): 25–28
[4] Wang YH, Hu J, Li Y (2003) Study on a reconfigurable model of an open CNC kernel. Journal of Materials Processing Technology 138: 472–474
[5] Oldknow KD, Yellowley I (2001) Design, implementation, and validation of a system for the dynamic reconfiguration of open architecture machine tool controls. International Journal of Machine Tools & Manufacture 41: 795–808
[6] Wright P (1998) Everybody's open: the view from academia: no compromise on plug-and-play. Manufacturing Engineering 12(7): 84–85
[7] Zhou JD, Chen YP, Zhou ZD (2004) The hardware interface design and realization of configuration software. Machinery & Electronic 6: 44–47
Development of a NC Tape Winding Machine Yao-Yao Shi, Hong Tang, Qiang Yu The Key Laboratory of Contemporary Design and Integrated Manufacturing Technology, Ministry of Education, Northwestern Polytechnical University, Xi’an 710072, China Abstract The NC tape winding machine for multiple purposes is a complicated device combining and integrating such scientific specializations as mechanics, electronics, pneumatics, automation and numerical control. Based on analyzing the machine’s structure, components and creation methods, the authors of this paper designed a typical NC tape winding machine, and improved the accuracy for controlling the parameters during the winding process. The key technologies of the host machine structure, the tension control system, the temperature control system, pressure control system and the NC system are each discussed in detail. Keywords: NC tape winding machine, wound composite structures, control of processing parameters, industrial control
1.
Introduction
The winding industry is well established for producing such structures as pressure vessels, helicopter blades and numerous other aerospace products. Winding was first commercially employed for producing high-performance aerospace structures in 1960; the material used in that case was fiberglass coated with an epoxy binder, and the process is commonly referred to as filament winding. Tape winding is an alternative, formally introduced in 1990 as a proven manufacturing process. Tape winding provides a relatively economical means of producing composite structures, and its mechanical properties have been widely accepted. Nowadays, tape winding is typically selected for products requiring high quality and the most weight-efficient design. However, domestically there is still no mature technology for automated tape winding equipment, and manual operations lead to low efficiency and unstable properties of the composite structures. An NC tape winding machine is a special mechatronic device for such products as pressure vessels for solid rocket engines, burn-resistant and heatproof parts, tapered noses for missiles, and launching vessels and heatproof parts for aircraft. With the increasing application of composite structures, especially in the development of aerospace products in the near future, more rigid requirements will
be proposed for the tape winding process, such as the control of the processing parameters and the structure of the winding machine, which are crucial factors for the winding quality.
2.
Analysis of the Processing Parameters
The machine is designed to automate the winding process, in which tape coated with carbon / phenol aldehyde or high-silica glass / phenol aldehyde is systematically placed onto the mandrel in a special pattern. During the winding process, a changeable tension is imposed on the composite tape while it is heated near the winding point; the tape is then pressed onto the mandrel by a roller driven by a gas engine and immediately cooled by cold air. The whole process is shown in Fig. 1.
Figure 1. The winding process
The processing parameters include tension, pressure and temperature. Tension refers to the pull on the tape; it acts from the moment the tape is unwound from the scroll until the tape is placed onto the mandrel. Tension control is an important but difficult key technology, closely related to the strength and fatigue properties of the composite structures: insufficient tension leads to loose structures, and thus severe deformation when the inner layers are pressurized; excessive tension lowers the resistibility of the tape and thus the strength; and if the tension fluctuates greatly, the tape sections in different layers have different initial stresses and cannot carry load at the same time, eventually lowering the strength of the whole structure. Much research has shown that a strength loss of up to 20%~30% will occur if unstable or improper tension is used. Pressure refers to the vertical pressure imposed on the tape in the radial direction when the tape is wound onto the mandrel. Its principal functions include enhancing the adhesion between layers and wiping off bubbles, increasing the density of the tape, and avoiding wrinkling and sliding between tape layers. The resin on the tape should be heated to a fluid and viscous state to facilitate its infiltration and increase the adhesion between tape layers; meanwhile, premature curing caused by excessive heat must be avoided, otherwise the resin will be disabled. With its changing temperature, pressure and tension, this tape molding system is a complicated multivariable time-varying system; the winding process is difficult to describe mathematically and carry out
precise control. Furthermore, the level of matching accuracy for the processing parameters is also a key factor determining the quality of the composite structures.
3.
Components of the NC Tape Winding Machine
The NC tape winding machine for multiple purposes generally consists of four main parts, namely the host machine, the NC system, the industrial personal computer (IPC) system and the heating equipment. The host machine consists of a headstock, a tailstock, the mandrel and a feeding vehicle, which together realize NC motion in two directions. The heating equipment offers continuous dry air ranging from 0 to 200°C. The NC system drives the host machine along the desired winding trajectory. The IPC is mainly responsible for control, presentation, storage and printing.
4.
The Key Technologies and Creation Methods
4.1
The Host Machine
The mechanical structure is shown in Fig. 2. Three main factors restrict its overall layout and structural design. First, the three main functions (winding, cutting and measuring) should be integrated into one device. Second, the dimensions of the composite structures vary greatly (diameters range from 50 to 1700 mm, and the length may reach 4000 mm). Finally, the thermal rollers should turn at an angle for the gradient overlapping winding. The three main parts (the headstock, the tailstock and the vehicle body) are designed separately, so as to facilitate tooling, machine assembly and program debugging. The vehicle body is designed as a three-layer structure, with the upper layer turning at an angle of 60° towards the headstock and 45° towards the tailstock. The tension, pressure and temperature sensors are fixed upon the upper layer, on which the frame supporting the thermal rollers (in the winding mode) and the cutting tools (in the cutting and measuring mode) can be fixed, so that a compact structure and easy access are realized. Meanwhile, the gas engine, with a stroke of 100 mm, is fixed within the upper layer, pressing the roller firmly onto the mandrel. 4.2
Tension Control
The tension control system of the winding machine is an outward-pulling tensioner composed of the tape delivery system and the measuring and control units. Its mechanical structure is a key factor in the tension accuracy.
756
Y.Y. Shi, H. Tang and Q. Yu
Figure 2. Sketch of the host machine
Figure 3. Structure 1 of the tape delivery system
The advantages of Structure 1 are its simplicity, low cost and inherent damping. However, as the tape reel radius varies, the system is time-varying, with changing damping moments and tape speeds, and this stochastic behavior is especially severe for small-tension control. Furthermore, the tape unwinds at non-constant speed because of adhesion between layers.
Figure 4. Structure 2 of the tape delivery system
In Structure 2, tension is obtained with a damping wheel of constant diameter mounted on a magnetic particle clutch that generates the damping moment, so the time-varying behavior is avoided. The tape is in a free state before it reaches the wheel, so changes in the reel radius do not affect tension control, and the damping wheel is the powered element, which avoids disturbances from the tape unwinding process. However, the pressure sensor must be mounted on the symmetry axis of the structure, and the sensor axis must be coplanar with the tape center, which imposes strict requirements on the tape assembly process as well as on tape deviation control.
Figure 5. Structure 3 of the tape delivery system
Based on Structure 2, and integrating the characteristics of the tape winding process and the machine structure, the angle θ is changed to 0 and a double-sensor mechanism is used in Structure 3. The improved structure realizes stable tape unwinding and precise tension control, avoids the accuracy loss caused by tape assembly, and strongly resists disturbances.

4.3 Pressure Control
In order to keep the adhesion layers smooth, especially to avoid wrinkling on the inner side (the tape side closer to the mandrel) caused by deformation, and to avoid wrinkling or slippage between layers in the gradient overlapping winding mode, strict requirements are imposed on the pressure actuator. Candidate methods are hydraulic, electric or pneumatic driving. Hydraulic drives have low acceleration and risk contamination where strict environmental requirements apply; electric drives have complicated structures and high manufacturing and maintenance costs; a pneumatic structure is simple, compact and economical, achieves fast actuation and signal response, and readily supports long-distance operation. The pneumatic structure, with its good overall performance, is therefore selected, and an improved closed-loop PID algorithm is adopted.

4.4 Temperature Control
The heating equipment heats the tape to a melted, flexible state to ensure tight adhesion after it is wound onto the mandrel; otherwise, the structural layers cannot integrate into a whole. The main design difficulty is that the coated tape must be heated to a semi-melted, flexible state in an extremely short time while avoiding burnt tape and liquefied resin caused by excessive heat. Therefore, two components are designed separately: the hot-air blower and the thermal rollers. The hot-air blower is positioned in front of the winding point and keeps the tape in a semi-melted, flexible state by raising the surrounding temperature. In the thermal rollers, based on the effects of temperature and pressure as well as the tape winding characteristics, electro-thermal pipes are fixed inside so that the tape placed onto the surface of the mandrel is kept semi-melted and flexible while under pressure. The rollers and blower offer adjustable temperatures from 50 to 250°C. Using the temperature controller and a non-overshooting PID algorithm, the melted, flexible tape is obtained by regulating the heat generated by the electro-thermal pipes in the rollers and blower without burning it. The IPC samples the current temperature values and sends user-defined values through the RS485 communication port of the temperature controller while displaying, recording and saving the temperature data.
4.5 Tension and Pressure Control Strategies
The PID control strategy is the most widely used in industrial control for its low cost, simple structure and robustness. However, the tension and pressure are time-varying and strongly nonlinear owing to stochastic factors such as external impacts, so satisfactory control cannot be achieved by the conventional PID strategy alone, and improved PID algorithms must be adopted.

4.5.1 Tension Control by the Fuzzy Auto-tuning PID Algorithm
The time-varying tension control system is difficult to model, so the fuzzy auto-tuning PID control strategy is adopted. According to the requirements for tension control, a fuzzy controller with two inputs and three outputs is designed: the input signal error (e) and error change (ec) are the input variables, while ΔKp, ΔKi and ΔKd are the output variables. The input/output values are classified into seven categories, namely NB, NM, NS, ZO, PS, PM and PB, and the quantifying and proportional factors are designed accordingly. Triangular membership functions are selected for the variables for simple computation (Fig. 6). The key step in designing a fuzzy controller is to establish a rule base of if-then structures. Based on the different states of the input signal error (e) and error change (ec), the auto-tuning principles for ΔKp, ΔKi and ΔKd are designed as follows:

- When |e| is relatively big, the controller outputs should be set to their maximum/minimum to diminish the absolute error rapidly, regardless of the error change. This amounts to an open-loop control strategy, and Kp, Ki and Kd do not act in this stage.
- When e·ec > 0 and |e| is relatively big, the absolute error will continue to grow. Intensive control, by increasing the Kp, Ki and Kd coefficients, can be adopted to reverse it into the descent stage and diminish it quickly.
- When e·ec > 0 and |e| is relatively small, the absolute error is not big even though it is in the ascent stage. Kp, Ki and Kd can remain constant so as to reverse the changing error into the descent stage.
- When e·ec < 0 and |e| is relatively big, the error should be decreased by increasing Kp. If |e| is relatively small, less intensive control may be introduced by decreasing Kp, while Ki and Kd do not act in this stage.
- When |e| is very small, the integral coefficient Ki is introduced to eliminate the remaining error.
The rule base for regulating Kp, Ki and Kd is created from these principles and the experience of expert engineers. The fuzzy subsets of the output variables and the defuzzification of the output signals are then worked out.
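The auto-tuning principles above can be sketched in code. The following minimal Python sketch reduces the paper's seven linguistic terms (NB…PB) to three (N, Z, P) and uses a zero-order Sugeno rule table whose consequent values are illustrative assumptions chosen to follow the stated principles; they are not the authors' actual expert rule base, and the quantifying/proportional scaling factors are omitted.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Normalized universe; three terms instead of the paper's seven (assumption).
TERMS = {"N": (-3.0, -2.0, 0.0), "Z": (-2.0, 0.0, 2.0), "P": (0.0, 2.0, 3.0)}

# Zero-order Sugeno consequents (dKp, dKi, dKd) per (e-term, ec-term),
# loosely following the tuning principles above: large error -> larger Kp,
# small error -> rely on Ki. The numeric values are illustrative only.
RULES = {
    ("N", "N"): (1.0, 0.0, 0.5), ("N", "Z"): (0.8, 0.1, 0.3),
    ("N", "P"): (0.3, 0.2, 0.0), ("Z", "N"): (0.3, 0.4, 0.2),
    ("Z", "Z"): (0.0, 0.6, 0.0), ("Z", "P"): (0.3, 0.4, 0.2),
    ("P", "N"): (0.3, 0.2, 0.0), ("P", "Z"): (0.8, 0.1, 0.3),
    ("P", "P"): (1.0, 0.0, 0.5),
}

def fuzzy_tune(e, ec):
    """Defuzzified (weighted-average) gain increments for error e and change ec."""
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for (te, tec), out in RULES.items():
        w = tri(e, *TERMS[te]) * tri(ec, *TERMS[tec])  # rule firing strength
        den += w
        for i in range(3):
            num[i] += w * out[i]
    return tuple(n / den for n in num) if den else (0.0, 0.0, 0.0)
```

Each sampling period the controller would then apply Kp + ΔKp, Ki + ΔKi and Kd + ΔKd in the PID law, with e and ec first scaled into the normalized universe.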
Figure 6. Membership function
Figure 7. Signal response based on the fuzzy strategy
The signal response is shown in Fig. 7. The tension is 20 kgf and the sampling period is 100 ms, with a disturbance signal applied. The maximum deviation under disturbance is kept within 4% and the settling time is relatively short, which stabilizes the winding tension.

4.5.2 Pressure Control by the Dynamic Integral-Separation PID Algorithm

With the dynamic integral-separation algorithm, the PID parameters can be fully utilized, which avoids the integral accumulation caused by machine start-up, pauses, the addition or removal of tape joints, and changes of the user-defined values. The smooth, continuous transformation between the PD and PID structures simplifies parameter tuning, reducing overshoot and settling time in the signal response. The basic principle is to remove the integral term when the input error is relatively big, to avoid degrading system stability, and to add the term when the error is relatively small, to eliminate static error and enhance control accuracy. The control law can be expressed as follows:

ΔU(k) = KP[e(k) − e(k−1)] + αKI·e(k) + KD[e(k) − 2e(k−1) + e(k−2)]   (1)

where ΔU(k) is the control increment; KP is the proportional gain; KI = KP·T/TI is the integral gain (T is the sampling period and TI the integral time constant); KD = KP·TD/T is the differential gain (TD is the differential time constant); and α is a logical variable expressed as follows:
α = e^(−θ·e(k)/r),  r > 0   (2)

α = (e(k) + β) / (r + β),  β > 0, r ≤ 0   (3)
α is a continuous function of e(k) and realizes a smooth, continuous structural transformation while avoiding disturbances, so the integral-separation process is well described. Here r is a given value and β is an established threshold value, generally with r > 0 and β > 0. θ is an integral-separation factor with a large adjustment margin and a simple tuning method, so the coupling between α and β is avoided during tuning. The term α
suppresses the integral action, and if it is applied while the integral contribution is insufficient, the rise time increases. An integral judgment term is therefore applied to realize dynamic integral separation by monitoring the relative change of the input signal error, |e(k) − e(k−1)|/e(k) > δ, where 0 < δ < 1. The software flow of the pressure control system is shown in Fig. 8.
Figure 8. Flow of the pressure control system
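Equations (1)-(3) can be sketched as follows. This is one illustrative Python reading of the algorithm: the numeric parameter values, the use of |e(k)| in Eqs. (2)-(3), and the way the judgment term |e(k) − e(k−1)|/e(k) > δ gates the separation weight are all assumptions, since the paper does not fix them.

```python
import math

class IntegralSeparationPID:
    """Incremental PID with the smooth integral-separation weight of Eqs. (1)-(3)."""

    def __init__(self, kp, ti, td, T, theta=1.0, r=1.0, beta=0.1, delta=0.5):
        self.kp = kp
        self.ki = kp * T / ti   # KI = KP*T/TI
        self.kd = kp * td / T   # KD = KP*TD/T
        self.theta, self.r, self.beta, self.delta = theta, r, beta, delta
        self.e1 = 0.0  # e(k-1)
        self.e2 = 0.0  # e(k-2)

    def alpha(self, e):
        """Separation weight: Eq. (2) for r > 0, Eq. (3) for r <= 0.
        |e| is used so that alpha stays in (0, 1] for r > 0 (an assumption)."""
        if self.r > 0:
            return math.exp(-self.theta * abs(e) / self.r)
        return (abs(e) + self.beta) / (self.r + self.beta)

    def step(self, e):
        """Return the control increment ΔU(k) of Eq. (1) for the new error e(k)."""
        rel_change = abs(e - self.e1) / max(abs(e), 1e-12)
        # One plausible reading of the dynamic judgment term: suppress the
        # integral (alpha < 1) only while the error is still changing quickly.
        a = self.alpha(e) if rel_change > self.delta else 1.0
        du = (self.kp * (e - self.e1)
              + a * self.ki * e
              + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        return du
```

A step change of the error thus enters with a damped integral term, and the full PID structure is restored smoothly once the error settles, which is the PD-to-PID transition the text describes.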
Simulations were carried out with the pressure set to 25 kg. The signal responses of conventional PID control and the improved PID control are shown in Fig. 9: the rise time and settling time are shortened dramatically, and signal overshoot is suppressed.
Figure 9. Signal response by different PID control strategies: a. conventional PID control; b. improved PID control
4.6 Creation Methods for the NC and Control System
An open CNC architecture, with the NC system embedded in a PC system (PC plus motion controller), is adopted for this tape winding machine (Fig. 10). The motion and logic control functions are handled by independent controllers. The motion controllers are the core of the system, forming the NC system together with the embedded PC hardware. The controllers offer strong information-processing ability, a high degree of openness, accurate trajectory control and a wide performance range, which simplifies technology development and updating as well as system assembly and maintenance.
Figure 10. Structure of the NC system
A PMAC (Programmable Multi-Axis Controller) with high speed and resolution is adopted. The CPUs of the IPC and the PMAC work independently, forming a master-slave dual-microprocessor structure. Meanwhile, most PMAC structural addresses and communication ports are accessible, which makes it convenient to combine a DLL (dynamic link library) with the PC. The PMAC is responsible for interpolation computation, position control, tool compensation, speed processing and PLC manipulation, while the IPC realizes the NC system's functions and controls the processing parameters through commands. Based on the winding motion trajectory and processing parameter control, the special open NC system integrating the IPC and motion controllers is shown in Fig. 11. Based on PMAC-PC motion control, closed-loop trajectory control with three-axis simultaneous coordination is realized by connecting three sets of AC servo systems to the three PMAC-PC ports. PLC control is realized through the PMAC I/O interface, covering stroke limit control, machine tool reset, operating panel control, operating mode selection, software protection, logic and motion control, and the manual adjustment of the machine tool required by the winding process. Processing parameter control requires much computation and the NC system software is complicated, so the computationally powerful IPC and the virtual instrument programming language LabWindows/CVI from NI Corporation are selected. With multi-thread technology, the logically designed system management module, function control module, parameter control module, communication module and failure detection module operate under different threads, which realizes concurrent control and real-time behavior of the
software. Program input and modification, setup, real-time presentation and recording, and simulation are realized by the peripheral equipment of the IPC.
Figure 11. Flow of the NC system
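The one-thread-per-module layout described above (management, function control, parameter control, communication and failure detection modules on separate threads) can be illustrated schematically. The real system is written in LabWindows/CVI; the following Python sketch only shows the message-passing concurrency pattern, and all module and message names are hypothetical.

```python
import threading
import queue

def worker(name, inbox, log):
    """One module per thread: consume messages until the shutdown sentinel."""
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown sentinel
            return
        log.append((name, msg))  # stand-in for the module's real work

def main():
    # Hypothetical module names mirroring the five modules in the text.
    modules = ["management", "function", "parameter",
               "communication", "failure_detection"]
    log = []                     # list.append is thread-safe in CPython
    inboxes = {m: queue.Queue() for m in modules}
    threads = [threading.Thread(target=worker, args=(m, inboxes[m], log))
               for m in modules]
    for t in threads:
        t.start()
    # Each module handles its own messages concurrently with the others.
    inboxes["parameter"].put("set tension 20kgf")
    inboxes["failure_detection"].put("poll sensors")
    for m in modules:
        inboxes[m].put(None)
    for t in threads:
        t.join()
    return log
```

The design point is that a slow module (e.g. communication) never blocks the others, which is what gives the concurrent, near-real-time behavior the text claims.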
5. Conclusions
The new NC tape winding machine has met the tape winding requirements. By adopting a qualified NC system and advanced mechanical structures and algorithms, the machine automates the tape winding process, ensures product quality and solves the key manufacturing problems in the molding process for composite structures. All parameters are automatically presented, recorded and printed with simple operations, which reduces labor intensity and improves working efficiency. The applications are not limited to the aeronautical industry, but extend to such areas as paper making, thread winding and the chemical industry.
6. References
[1] Campbell J H, Kittelson J L, (1991) The tape winding process and applications. 36th International SAMPE Symposium, April 15-18
[2] Ren Shengle, Lu Hua, (2007) Development of a PLC-based tension control system. Chinese Journal of Aeronautics, 20(3):266-271
[3] Wang Chunxiang, Wang Yongzhang, (1999) A tension control system using a magnetic particle clutch as the actuator. Journal of Mechanism and Electron, 5-7
[4] Sauter D, Jamouli H, Keller J-Y, Ponsart J C, (2005) Actuator fault compensation for a winding machine. Control Engineering Practice, 13(10):1307-1314
[5] Mantell S C, Springer G S, (1992) Manufacturing process models for thermoplastic composites. Journal of Composite Materials, 26:2348-2377
[6] Gutowski T G, (2004) Advanced composite manufacturing. Publishing House of Chemical Industry, 293-306
TRIZ-based Evolution Study for Modular Fixture

Jin Cai 1, Hongxun Liu 2, Guolin Duan 1, Tao Yao 1, Xuebin Chen 1

1 School of Mechanical Engineering, Hebei University of Technology, Tianjin, P.R. China, 300130
2 School of Electrical Engineering, Hebei University of Technology, Tianjin, P.R. China, 300130

Abstract. TRIZ evolution technology is an important branch of technology-system forecasting. The evolution patterns and lines of direct evolution (DE) theory are first introduced. As modular fixtures (MFs) are important to both traditional and modern flexible manufacturing systems, applying the evolution patterns and lines to MFs to forecast their future development orientation and possible trends is helpful and significant. The development of fixture technology is then reviewed, and MF patents in China, the US and Europe are classified according to the problems they solve. On that basis, the possible evolution patterns and corresponding lines for MFs are analyzed in depth. The evolutionary potential radar plot of the MF is then depicted, from which the most promising patterns, lines and schemes are analyzed in detail and the future development is forecast. This work shows that TRIZ-based technological evolution theory is an effective method for analyzing development orientation and providing guidance for MF industries.

Keywords: Evolution Technology, TRIZ, Direct Evolution, Evolutionary Potential, Modular Fixture
1. Introduction
TRIZ (the Theory of Inventive Problem Solving) is a methodology for problem solving and idea generation, founded in 1946 by the former Soviet scholar G.S. Altshuller on the basis of 2.5 million high-quality patents from around the world [1-2]. Savransky, an expert on TRIZ, defines it as follows: TRIZ is a knowledge-based, human-oriented systematic methodology of inventive problem solving. Four evolution stages are distinguished in TRIZ theory: the infancy, growth, maturity and decline stages. TRIZ includes tools for problem identification, analysis and solution, which can be applied to accelerate product development, and it also offers systematic guidelines for technology forecasting [3, 4]. TRIZ provides tools to determine the current state
764
J. Cai, H. Liu, G. Duan, T. Yao and X. Chen
and the future development of a specific product technology; the specific tools used in this case study are maturity mapping and the system approach [4, 5]. By analyzing a large number of patents, G.S. Altshuller found that product evolution follows a biological S-curve, depicted in Figure 1. One core of TRIZ theory is the set of laws of technical evolution, which can forecast future trends and help enterprises develop more competitive products [5-7]. TRIZ evolution technology, proposed by Altshuller, includes evolution technology (ET), guided technology evolution (GTE), directed evolution (DE) and so on. DE is a branch of TRIZ and has been developing since the 1980s; it comprises ten patterns, each containing many evolution lines [3, 7, 8]. This paper focuses on DE and its ten evolution patterns and multiple lines, which can bring designers many innovative initial design ideas.
Figure 1. Biological S-curve of TRIZ (performance versus time: infancy, growth, maturity and decline stages)
2. Direct Evolution (DE) Theory and Evolutionary Potential
Direct Evolution (DE) Theory
Patterns (or laws) and evolutionary lines are the true descriptors of technical evolution: first, they can be verified by the large numbers of patents and technical information from different historical stages; secondly, they help designers forecast the future development of a technology; finally, they should form an open system allowing new patterns and lines to be appended [6]. DE has ten evolutionary patterns [3]: Pattern 1: Stages of Evolution; Pattern 2: Increasing Ideality; Pattern 3: Increasing Resource Including; Pattern 4: Non-Uniform Development of System Elements; Pattern 5: To Increase Dynamism and Controllability; Pattern 6: From Complexity to Simplicity; Pattern 7: Matching and Mismatching between Elements; Pattern 8: Evolution towards Micro-Level; Pattern 9: To Increase the Use of Fields; Pattern 10: To Increase Automation and Decrease Human Involvement.
Evolutionary Potential
When a product is evolving along an evolutionary line but has not reached the maximal state, TRIZ says the product has evolutionary potential. Figure 2 shows the ideal final result and the 'evolutionary limit' concept proposed by Darrell Mann [9], who also gives the evolutionary potential radar plot of a system, as shown in Figure 3. The product (system) in Figure 3 has ten evolutionary lines; the light-gray area shows the evolutionary potential and the dark area the current evolutionary status. Lines 9 and 10 have reached the evolutionary limit, so designers can readily see and decide the product's (system's) evolutionary orientation.
Figure 2. Ideal Final Result and ‘Evolutionary Limit’ Concepts
Figure 3. Evolutionary Potential Radar Plot
3. Review of Fixtures [1, 2, 10]
Fixtures are widely used in manufacturing and are important in both traditional and modern flexible manufacturing systems (FMS), as they directly affect the machining quality, productivity and cost of products. Great attention has therefore been paid to fixture study in manufacturing. The fixturing methodology is usually determined by lot size. In mass production, dedicated fixtures perfectly designed for a specific operation are usually applied, but when the product design changes, the dedicated fixtures are no longer useful and are scrapped; in this sense, dedicated fixtures are one-time fixtures. Hence flexible fixturing methodologies appeared, including adjustable fixtures, modular fixtures, programmable clamps, fixtures with phase-change materials and bionic fixtures. Virtual assembly has become reality with the development of computer-aided fixture design
(CAFD) and computer-aided MF configuration techniques. Only recently did CAFD become an important part of CAD/CAM technology [11]. After 1986, China began research on CAFD and did much work on applying artificial intelligence (AI) techniques to the CAD of MFs. There are now some achievements after more than ten years of effort by scholars around the world, but the field is still far from practical application. New methods and results on CAFD are appearing with the rapid development of CAFD techniques; see references [12-17]. Figure 4 is a simple example of the computer-aided modular fixture configuration system developed by Hebei University of Technology.
Figure 4. Computer-aided MF Configuration System
According to the degree of automation, fixture design can be classified into three stages: interactive computer-aided fixture design (I-CAFD), semi-automated fixture design (Semi-AFD) and automated fixture configuration design (AFCD). The problems with current research on CAFD include: (1) the functions of automated fixture-design systems are limited, and many complex fixture designs still need human interaction; and (2) current CAFD systems using commercial CAD packages are time-consuming because they manipulate geometric entities. Therefore, further research on interactive CAFD (I-CAFD) systems is valuable for industrial applications [2]. Although I-CAFD is better than CAFD at solving some problems, such as the selection of proper points and fixture elements between designer and computer, it is still time-consuming. AFCD is therefore proposed: as more and more CNC machines and machining centers are employed, many operations can be carried out within a single setup, which must be ensured by a well-designed fixture configuration. In the area of AFCD, however, relatively little literature can be found. The major problems in AFCD include the selection and identification of fixture elements, the determination of connections between modules, interference checking between fixture units, and so on. The MF is a tooling system based on highly standardized fixture parts and components. MFs save fixture design time and material, shorten production preparation cycles and improve product quality. With the development of science and technology and of modern production, about 75% of manufacturing enterprises, especially in mechanical manufacturing, engage in one-off or batch production, so high costs and long lead times arise if dedicated fixtures are designed for small lots of products. The MF emerged against this background.
The earliest MF system appeared in the UK during World War II, when John Wharton invented a fixture system made up of a set of standard components that could be used to build different fixture configurations for military production. Later, a T-slot-based MF system was developed by the former Soviet Union. Although dowel-pin-based MF systems were proposed in the mid-1950s, they were rapidly developed and applied in production only in the late 1970s, as NC systems were developed and widely used in production. MFs have become promising flexible fixtures in both FMS and CIMS. Assembling hole-based fixtures requires rich knowledge, practice and skill, which is exactly what factories worldwide want but lack. With the rapid development of flexible tooling and machining systems, the MF, with its high flexibility and standardized components, has become an auxiliary part of the fabrication system. China began to study MFs in the mid-1980s, but intelligent MFs are difficult to realize because of the complexity of the machined workpieces. By analyzing 142 patents from the US, Europe and China, the evolutionary patterns and lines of the MF are discussed in depth below, and by TRIZ evolution technology theory its development orientation and trends can be forecast.
4. Patent Analysis of MFs
Altshuller classified patents or inventions into five levels; products develop from the low levels to the high. MF patents were searched worldwide and 142 were obtained, of which 41 are in the field of mechanical manufacturing. The following analysis is based on the main problem each patent solves:

- Locating and adjusting the fixture quickly and accurately, i.e., fixture locating accuracy and reliability (CN03158227.3, CN93105534.2, US4630811, CN200320110161.3, US4512694, US5107577, US7036810, US5546314, US5856924);
- Improving the supporting and clamping apparatus and the clamping force of the MF (US4711437, US4828240, US4901991, CN92231077.7, US6554265, CN200420060280.7);
- Assembling and disassembling the MF quickly, i.e., the convenience and flexibility of assembly, including workers' assembly techniques (CN88100600.9, CN88106253.7, CN90209378.9, CN91232987.4, CN93112353.4, CN200510013529.8, US6439561, WO0206002, CN200420030469.1, US4419827);
- Reconfiguration of the MF (CN88212949U, US6644637, US6877729, US5887733, US6279888, US6094793, US6712348, US7000966);
- Rigidity, production cost and product quality of the MF (CN90217285.9, CN88106253.7, CN8810600.9, CN97210124.1, US5771553, CN94201509.6, US6273635, US5362036, CN200320109806.1).
5. Evolutionary Patterns and Lines of MF
The Development Process of MF
By analyzing the 41 patents on mechanical manufacturing, MF technology can be seen to have reached the maturity stage (see reference [18]). When developing a product, an enterprise needs to forecast the technology level of the current product and the possible evolution direction of the next-generation product; this forecasting process is called technology forecasting [19]. Technology forecasting may also identify new potential markets and opportunities, such as finding ways to exploit current technology beyond its originally intended purposes [20]. Roughly speaking, there are two types of MF system, T-slot-based and dowel-based; since the late 1970s, dowel-based systems have been used more than T-slot-based ones.

Technical Evolutionary Patterns and Lines
After analyzing the 41 patents, the relevant evolutionary patterns and lines are as follows.

Increased Dynamization and Controllability within Systems
- Increased dynamization (the line can be seen in Figure 5)

Figure 5. Stiff body → one joint → multi-joints → elastic body → molecular (liquid, gas) → field
Both rigidity and flexibility are essential to the MF system, so a stiff, immovable sub-system needs to be exchanged for a movable, flexible one. According to this line, the current MF technology is at the elastic-body stage; there is still evolutionary potential, and the molecular and field stages can be expected in MF systems in the future.

Figure 6. Direct control → semi-automation → automation
Increased Controllability
- Line 1 (see Figure 6): MF design can be classified into three phases, CAFD, I-CAFD and AFCD, but from the point of view of assembly and application in practical production it depends mainly on direct control, so there is future evolutionary potential.
- Line 2 (see Figure 7):
Figure 7. Direct control action → action through intermediary → addition of feedback → intelligent feedback
In this line, the MF is at the stage of direct control action.

From Increasing Complexity to Reducing Complexity
- Line 1: Structure combining (see Figure 8)

Figure 8. Single system → multi-system intercrossing → integration into a super system
There are many typical structures in the MF element library. Two or more elements are selected to form a combined element in order to decrease fixturing time (easy assembly/disassembly) and improve efficiency and intelligence, but typical structures are still being studied, and intercrossing is the current status.

Micro-Level and Increased Use of Fields
- Line 1: Object segmentation (see Figure 9)

Figure 9. Monolith → two parts → multi-parts → powder → suspended particles → field
The MF now has many elements in order to satisfy fixturing demands, so it is at the multi-parts stage; fields can be considered for future MF techniques, so there is still evolutionary potential along this orientation.

- Line 2: Space segmentation (see Figure 10)

Figure 10. Monolithic solid structure → hollow structure → structure with multiple hollows → porous/capillary structure → porous structure with active elements
The dowel-pin-based MF has many holes, so, compared with the current stage of a structure with multiple hollows, there is still development space.
Decreased Human Involvement
- Line 1 (see Figure 11)

Figure 11. By hand → mechanization → intelligence
From this line, the MF system is at the mechanization stage, the second evolutionary status: simple mechanical tools can finish the fixturing tasks. For MF technology, intelligence is wanted but still far away.

Increased Super-System Trend
- Line 1 (see Figure 12)

Figure 12. Single system → double systems → multi-systems → integrated multi-systems
Now that MF elements have been standardized, many different elements can be integrated into a super system according to assembly and machining needs, so the MF is at the last stage of this line. Based on the above evolutionary patterns and lines, the evolutionary potential radar plot can be depicted as shown in Figure 13.

Figure 13. Evolutionary potential radar plot of MF (axes: dynamization, structure combining, super-system trend, segmentation of materials or objects, decreasing human interference, space segmentation, controllability of automation degree, controllability of feedback)
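The bookkeeping behind such a radar plot, comparing each line's current status with its evolutionary limit, can be sketched numerically. In the following Python sketch the stage positions and stage counts are illustrative readings of the lines discussed above, not measurements taken from the paper's Figure 13.

```python
# (status, limit): current stage index over total stages for each line,
# as read illustratively from the evolution lines in the text.
LINES = {
    "dynamization":                 (4, 6),  # elastic body, of 6 stages
    "structure combining":          (2, 3),  # intercrossing
    "super-system trend":           (4, 4),  # integrated multi-systems (at limit)
    "segmentation of objects":      (3, 6),  # multi-parts
    "decreasing human involvement": (2, 3),  # mechanization
    "space segmentation":           (3, 5),  # structure with multiple hollows
    "automation controllability":   (1, 3),  # direct control
    "feedback controllability":     (1, 4),  # direct control action
}

def potential(status, limit):
    """Remaining evolutionary potential as a fraction of the line's limit."""
    return 1.0 - status / limit

def ranked(lines):
    """Lines sorted from most to least remaining potential (0.0 = at the limit)."""
    return sorted(((name, round(potential(s, l), 2))
                   for name, (s, l) in lines.items()),
                  key=lambda item: -item[1])
```

With these illustrative stage readings, ranked(LINES) places feedback and automation controllability first and the super-system trend last (zero remaining potential), matching the conclusion drawn from Figure 13.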
In Figure 13 there are eight evolutionary lines; the perimeter of the radar plot is the evolutionary limit. The super-system trend line has reached the maximal state and has no further evolutionary potential, whereas the other seven lines still do, among which controllability of automation degree, controllability of feedback, space segmentation and segmentation of materials or objects have the most potential. The lines with more potential can thus be identified. According to the practical circumstances of the MF, controllability of automation degree and controllability of feedback are selected in order to propose new MF design schemes.

Line 1: controllability of automation degree. The line is direct control → semi-automation → automation, and the corresponding schemes are as follows:
- Scheme 1: Because the current state is direct control, the future state of semi-automation can be realized, for example by computer and human together. Although CAD-based MF design and assembly have appeared, the software is still far from commercial and pervasive enough to help more MF industries and enterprises.
- Scheme 2: Automation is the ultimate state, in which fixture design is executed entirely automatically, without human intervention.

Line 2: controllability of feedback. According to the line direct control action → action through intermediary → addition of feedback → intelligent feedback, the schemes are as follows:
- Scheme 1: The current state is direct control action by a human. The future state of action through an intermediary can be realized with intermediary devices such as a manipulator.
- Scheme 2: The future state of addition of feedback can be realized with simple feedback systems that tell the operator whether the locating points, clamping points or forces between the fixture elements are properly selected.
- Scheme 3: The intelligent-feedback state may come true through an automatic, intelligent system in which all fixture elements are selected automatically according to the shape and scale of the workpieces and assembly is also automatic. All the problems, including whether the selected points and forces are proper and whether interference exists, would be resolved by the intelligent feedback system.

6. Conclusions
Technical evolutionary patterns and lines, together with evolutionary potential analysis, are important for determining future product development. The evolutionary lines of MF were selected in terms of TRIZ evolution theory, and the result is instructive for product (system) development. Finally, the MF evolutionary potential radar plot was depicted and schemes for the focused lines were given, from which the future development orientation and methods of MF are predicted.
7. Acknowledgements
This research was financially supported by Hebei Province (No. F2006000111), the Natural Science Foundation of Tianjin (No. 07JCYBJC13900) and a Scientific Research Project of the Hebei Provincial Department of Education.
Study on the Application of ABC System in the Refinery Industry

Chunhe Wang1, Linhai Shan2, Ling Zhou2, Guoliang Zhang2

1 The Machinery Department, Research Institute of Petroleum Development & Exploration, Beijing 100083, P. R. China
2 China Boomlink Information Technology Co. Ltd., Beijing 100107, P. R. China
Abstract The costing methods presently used in the refinery industry have major deficiencies in costing products accurately and cannot produce correct cost information for semi-finished products; an Activity-Based Costing (ABC) system is therefore proposed. This paper discusses the design approach and the associated system for refineries. A costing model based on activity chains is introduced, and key techniques, such as the algorithm for retroactive costs and the strategy for cost attribution of multiple products, are proposed. The system has been used successfully in more than 20 refineries over the past three years. The application of ABCM is also compared in detail with the traditional costing method. Keywords: Refinery, Activity-Based Costing, Activity chain, Retroactive Costs
1. Introduction
Activity-Based Costing (ABC) and Activity-Based Costing Management (ABCM) are considered a revolutionary innovation after Taylor's "scientific management" [1]. They have made a great contribution to costing and cost management, which has benefited the development of many corporations. ABC is now applied worldwide in mechanical industries, and related research has been reported: Ou Peiyu and Wang Pingxin studied the application of ABC in the Chinese manufacturing industry in 2000 [2], and research on the application of activity-based cost control in manufacturing industries was published in 2002 [3]. However, the application of ABC in continuous process industries, and especially in the refinery industry, has rarely been reported; in China, the application of ABC is still in its primary phase [4]. In this paper, the details of applying ABC in the refinery industry are analyzed according to the present costing situation in refineries and the characteristics of ABC. Furthermore, the differences between ABC and traditional costing methods are summarized and the future development of ABC is proposed.
2. Status Quo of Refinery Costing
In the refinery industry, the objects of costing are refinery units, final products and semi-finished products. The cost elements are raw materials, auxiliary materials, fuels, power, overheads, etc. Cost collection and cost distribution are the main tasks in refinery costing. To analyze the characteristics of refinery costing thoroughly, we must first understand the characteristics of the refinery process.

2.1 Characteristics of the Refinery Process
Continuous processing and a technology-centered character are the main features of the refinery industry, whose process differs greatly from that of mechanical industries. Concretely, its characteristics can be listed as follows:
1. Continuity and complexity. In the refinery process, raw materials can be processed into different products through the same procedure. Meanwhile, there are hundreds of kinds of semi-finished products, which vary with operating conditions and blending proportions. For example, products from the crude distillation unit include the petrol fraction, diesel fraction, kerosene fraction and residue fraction. After further processing, the petrol fraction can be made into different products, such as 90# gasoline, 93# gasoline, 97# gasoline, etc.
2. Variety of products. Final products of the refinery process include gasoline, diesel and kerosene, as well as products used as raw materials for chemical processes. As for semi-finished products, products from different procedures have different characteristics and can be used for different further processes.
3. Uncertainty of the flow of semi-finished products. Since refinery techniques are complex, products produced in different programs differ, and they flow in various directions. There are at least three common ways: a semi-finished product can be used as input for the next unit or as raw material for other factories; it can be brought to market as a final product for consumption; or it can be used as a component for blending.
4. Variety of product blending. In product blending, a change of proportions or of components yields different target products.
Considering the characteristics listed above, it is difficult to cost the refinery process accurately.

2.2 Analysis of the Status Quo
The costing methods used by most refineries are inaccurate. For example, fixed unit costs are used to cost semi-finished products, and fixed proportions are used to
distribute the cost of final products, etc. This presents wrong cost information to production managers. The main shortcomings of the traditional costing method can be listed as follows. Traditional costing is final-product-oriented and does not focus on the activities of production; the process and the relationships between activities are left out of consideration. This disengagement between cost and process necessarily results in deviations of costs from actual consumption. Since the main objects of costing are just the final products, and cost distribution is disengaged from activities, the results of costing based on this method cannot provide effective information for cost analysis; nor can such analysis reflect how costs change with differences in the process. In cost distribution, fixed proportions are used to distribute the simply collected unit expenses over all products. This ignores the procedure by which costs are transferred and results in cost distortion. As for the cost distribution between semi-finished and finished products, the use of fixed prices in costing semi-finished products necessarily makes the costs of finished products inaccurate. Finally, it is very difficult to trace costs in the traditional general-ledger accounting system, and equally difficult to find out the detailed components of costs; as a result, it is difficult for managers to control the expenses of the whole process with any precision.
3. Activity-Based Costing Strategy in the Refinery

The core idea of ABC is to carry process expenses forward step by step. Following the principle that the beneficiary should bear the corresponding expenses, the ABCM system distributes expenses step by step to the corresponding beneficiaries, that is, the different products, based on proper cost drivers.

3.1 Initialization Data
Costing based on ABC needs two kinds of initialization data: process data and financial data. The former provides information on the activity chains and on the output and input of every unit; the latter gives the collected process expenses over a given period. The final aim of costing is to distribute the collected costs over the different products, and the distribution procedure should be closely related to the production process.

3.2 The Algorithm for Activity Costing and Product Costing in Refinery ABCM

Refinery processes are composed of a series of units arranged in a certain order. From the raw materials to the final products, every unit produces one or more kinds of semi-finished products, which can be used as inputs to the next unit, and so on to the final units.
In terms of costing, all finished-product costs are calculated from the costs of semi-finished products, since the finished products first share the cost of the semi-finished products. In terms of cost control, semi-finished products should be costed accurately so that the expenses of every procedure can be controlled during production. In terms of business, some semi-finished products can be sold as merchandise, and their accurate costs are necessary to evaluate the profitability of programs. Considering these points, the main approach of refinery activity costing is to construct the activity chain, in which all units are linked following the manufacturing process, and to carry expenses forward step by step: expenses consumed by every process unit are collected step by step, and distributed to products based on proper cost drivers step by step.
1. Settlement of activities. The activities, settled based on the refinery process, are the main objects of costing. They may be settled according to the production process and financial standards to satisfy the costing requirements; the definition of the activities used in costing may differ from that of the units in the process. Activities can be settled on two principles: (a) a unit that will be checked for its economy of production and for performance evaluation is settled as an activity; (b) if a unit produces multiple products that are sent to other units or sold as final products, it can be settled as an activity. The products of activities can then be defined clearly and used for further processing or for business.
2. Stepwise costing. Expenses borne by a certain cost object are taken directly as the cost of that object; otherwise, expenses are first collected and then distributed to the proper beneficiaries at the end of the accounting period. Figure 1 shows the procedure of stepwise costing.
Figure 1. Stepwise costing
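As a concrete illustration, the stepwise carry-forward can be sketched in a few lines of code. The unit names, expenses, product volumes and flow links below are hypothetical, and output volume is assumed as the only cost driver; a real refinery system would of course use several drivers per unit.

```python
# Stepwise activity-based costing sketch (illustrative data, not from the paper).
# Each unit collects its own process expenses plus the cost of semi-finished
# products received from upstream units, then distributes the total to its
# output products in proportion to a cost driver (here: output volume).

def stepwise_costing(chain, process_expense, drivers, flows):
    """chain: units in process order; drivers[unit]: product -> driver quantity;
    flows[(unit, product)]: downstream unit, or absent if the product leaves
    the process. Returns cost per (unit, product)."""
    received = {u: 0.0 for u in chain}      # semi-product cost carried into each unit
    product_cost = {}
    for unit in chain:                      # walk the chain in process order
        total = process_expense[unit] + received[unit]
        volume = sum(drivers[unit].values())
        for product, qty in drivers[unit].items():
            cost = total * qty / volume     # distribute by the cost driver
            product_cost[(unit, product)] = cost
            nxt = flows.get((unit, product))
            if nxt is not None:             # carry forward to the next unit
                received[nxt] += cost
    return product_cost

costs = stepwise_costing(
    chain=["distillation", "reforming"],
    process_expense={"distillation": 100.0, "reforming": 40.0},
    drivers={"distillation": {"naphtha": 60.0, "diesel": 40.0},
             "reforming": {"gasoline": 50.0}},
    flows={("distillation", "naphtha"): "reforming"},
)
```

Note how the naphtha cost (60.0) is carried into the reforming unit, so the gasoline cost (100.0) already contains its upstream share, exactly the carry-forward behaviour described above.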
Figure 1 shows that unit costs are transferred along the activity chain and distributed to products according to various drivers; meanwhile, expenses are transferred along the product flow.

3.3 Cost Tracing
Since product costs are carried forward step by step, the cost of the products of a certain activity includes the cost of the former activities, collected under the item of materials. In the same way, the main part of the cost of a finished product is the expense of the products used as materials in the last procedure, the remainder being the process expense of that procedure. Cost tracing means tracing the cost of the semi-finished products of former procedures back to elementary cost elements such as raw materials, auxiliary materials and overheads. Figure 2 shows the process of cost tracing. The cost of the input of a certain unit includes the cost of the products coming from former units, which is itself composed of expenses for elementary cost elements such as crude oil, fuel and power, plus other elements of process expense. We can therefore divide the cost of the input of a unit into crude-oil expenses and process-expense elements, combine them with the elementary cost components the unit itself consumes, and so divide the cost of the unit into elementary components, that is, its traced costs. In the same way, we can compute the retroactive costs of all units and products.
Figure 2. Cost Tracing
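The tracing step can likewise be sketched as a recursive expansion: a unit's traced cost is its own elementary expenses plus the traced costs of the upstream units that feed it. The unit names and figures below are illustrative assumptions, not data from the paper.

```python
# Cost tracing sketch: decompose a unit's cost into elementary cost elements
# (crude oil, fuel, power, ...) by recursively expanding the traced costs of
# the upstream semi-products it consumes. All figures are illustrative.

def trace_cost(unit, own_elements, inputs):
    """own_elements[unit]: elementary expenses the unit itself consumes.
    inputs[unit]: upstream units whose products feed this unit.
    Returns element -> traced amount."""
    traced = dict(own_elements.get(unit, {}))
    for upstream in inputs.get(unit, []):
        for element, amount in trace_cost(upstream, own_elements, inputs).items():
            traced[element] = traced.get(element, 0.0) + amount
    return traced

own = {
    "distillation": {"crude oil": 80.0, "fuel": 10.0},
    "catalytic": {"fuel": 5.0, "power": 15.0},
}
feeds = {"catalytic": ["distillation"]}   # the catalytic unit consumes distillation output
traced = trace_cost("catalytic", own, feeds)
```

The catalytic unit's traced cost thus shows 80.0 of crude oil and 15.0 of fuel, even though the unit itself bought no crude oil and only 5.0 of fuel, which is the point of retroactive costing.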
3.4 Activity-Based Cost Planning of the Refinery Process
Cost planning of the refinery process is another application of ABC in refinery cost management. From the process plan we can derive different planning programs, distinguished by factors such as the type of crude oil, the proportions of raw materials and the ratios of components. According to the input and output of the units and the flow of products, a planned activity chain can be established. Moreover, a database of activity costs and product costs can be built from historical cost data; the expense characteristics can be analyzed against the actual input of every unit, and a cost ration confirmed for every unit that reflects its consumption capability. Different rations can be set for different planning periods. Combining the planning program data with the ration data, we can compute the planned profits, as well as the planned costs of activities and products. If there are several programs, we can compute the activity and product costs of each and compare them; such comparison provides information for program optimization and decision-making. We can also identify the controllable components of the costs by analyzing the characteristics of the planned costs, and take the total controllable cost, controllable unit cost and uncontrollable cost as standards for performance evaluation.
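Under the simplifying assumption that a unit's planned cost is its planned input multiplied by a single historical cost ration, the program comparison described above can be sketched as follows; the rations, inputs and program names are hypothetical.

```python
# Activity-based cost planning sketch: planned activity cost = planned input
# times the unit's cost ration (cost per tonne of input, taken from historical
# data). Two hypothetical planning programs are then compared.

def plan_costs(program, rations):
    """program: planned input (tonnes) per unit; rations: cost per tonne."""
    return {unit: qty * rations[unit] for unit, qty in program.items()}

rations = {"distillation": 12.0, "reforming": 30.0}     # cost per tonne of input
program_a = {"distillation": 1000.0, "reforming": 300.0}
program_b = {"distillation": 1000.0, "reforming": 400.0}

cost_a = plan_costs(program_a, rations)
cost_b = plan_costs(program_b, rations)
total_a = sum(cost_a.values())
total_b = sum(cost_b.values())
```

Comparing `total_a` and `total_b` is the unit-by-unit cost comparison between programs that, together with predicted prices, supports program optimization.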
4. Analysis of Impacts
After analyzing the application of ABC in the refinery in the aspects mentioned above, we find clear differences between the application of ABC and that of traditional costing methods. Concretely, they can be summarized as follows.

4.1 Activity-Based Costing Benefits Refined Management
According to ABC theory, the cost of activities and the cost of the products that come from those activities are confirmed step by step; that is, there is a close relationship between costing and the refinery process. This relationship avoids the information isolation that used to exist between costing and cost management, and provides helpful information for managers. The establishment of activity chains develops cost management from the level of the "point" to an effective combination of "point" and "surface", which clearly reflects both the material flow and the activity chain and contributes greatly to cost control. Moreover, combining information from different departments improves communication between them, which greatly advances management efficiency.

4.2 Rational Distribution of Indirect Expenses
In traditional costing methods, the distribution of indirect expenses is mainly based on a single cost driver, which makes product costs deviate from the product process. ABC, however, collects the expenses related to a certain activity and distributes the cost of that activity to its products according to various cost drivers. This reflects the actual effect of the process on product costs and keeps to the principle that the beneficiary should bear the corresponding expenses.

4.3 Definite Costs Benefit Economic Analysis
ABC brings the method of stepwise carry-forward, which makes retroactive costs possible, into the costing process. We can therefore obtain the material expense that every procedure consumes during production. The result can be used to analyze the cost components of different products, to compare the expenses consumed by different units, and to compare the output-to-input ratios of various materials. Based on the retroactive costs, managers can compare components and cost differences between planned and actual costs, and evaluate the performance of every activity. Furthermore, they can find the factors that caused costs to rise or fall, and exercise control over the costs of activities and products. Finally, this information helps to optimize production programming in combination with other economic indexes such as cost objectives and benefit objectives [5].
4.4 Cost Control Benefits Objective Cost Management
Defining the cost of every procedure, even every unit, and shifting the focus of cost management from "products" to "activities", that is, from "controlling results" to "controlling the process", are explorations that advance financial management in the refinery industry. In terms of cost control, an objective cost confirmed on the basis of an optimized process benefits cost control of every unit as well as of the whole process. Meanwhile, analysis of the characteristics of activity costs, and of the critical activities and critical factors that affect process cost, greatly enhances the efficiency of cost control. Furthermore, objective cost management can be put into practice with the help of ABCM, which provides information for cost evaluation. For example, the information can be used to compare output-to-input ratios and performance efficiency among units such as the distillation unit, RFCC and continuous catalytic reforming. For a certain kind of unit, we can analyze its performance and cost expenses across different enterprises, so as to improve the process capability of the industry.

4.5 Cost Planning Benefits Profitability Prediction and Performance Evaluation

Based on activity-based cost planning and predicted prices, cost planning can help to predict the profitability of programs and provide information for decision-making [6]. Managers can analyze the costs of different planning programs and compare their profitability, so that program optimization based on this comparison realizes profit maximization. After analyzing the structure of the planned costs, managers can confirm the controllable planned costs and set criteria for performance evaluation.

4.6 Cost Analysis Benefits Decision-Making
Cost analysis in ABCM has two aims. First, the data provided by ABCM can be used to analyze the components of product costs and activity costs, giving information about cost structures. Second, the data are effective for analyzing differences in the product costs of different programs; we can further subdivide differences in product and activity costs into differences in input, differences in price and differences in consumption capability [7]. For cost analysis in ABCM, the "three-factor analysis" method is used, and its application is important for optimizing processing programs. In this method, the three factors are cost drivers such as output, price and wastage. Managers can take proper measures to reduce cost according to the analysis of the effect of these factors on product costs.
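The paper does not spell out the exact formula behind its three-factor analysis, but one common realization is chain substitution over material cost = output x consumption-per-unit (wastage) x price. The sketch below, with illustrative planned and actual figures, shows that reading; it is an assumption, not the authors' stated method.

```python
# Three-factor variance sketch via chain substitution: material cost is modelled
# as output * consumption-per-unit-of-output (wastage) * unit price, and the
# planned-vs-actual cost difference is split into one effect per factor.

def three_factor_variance(plan, actual):
    q0, w0, p0 = plan        # planned output, wastage, price
    q1, w1, p1 = actual      # actual output, wastage, price
    output_effect = (q1 - q0) * w0 * p0    # substitute output first
    wastage_effect = q1 * (w1 - w0) * p0   # then wastage
    price_effect = q1 * w1 * (p1 - p0)     # then price
    return output_effect, wastage_effect, price_effect

plan = (100.0, 1.2, 5.0)     # 100 t output, 1.2 t material per t, price 5 per t
actual = (110.0, 1.1, 6.0)
effects = three_factor_variance(plan, actual)
total_diff = 110.0 * 1.1 * 6.0 - 100.0 * 1.2 * 5.0   # actual cost - planned cost
```

By construction the three effects sum exactly to the total cost difference, so a manager can see, for instance, that a price rise outweighed a wastage improvement.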
5. Summary
Unlike costing under traditional general-ledger accounting, the ABCM system helps people focus on process activities, such as process units, semi-finished products and human activities, rather than only on the final products. With our system the calculated cost is closer to the real cost, which gives managers more support for decision-making in manufacturing process control and process planning. More correct cost information can be obtained from the ABCM system, making cost analysis and program optimization more powerful; finally, management can achieve the objectives of optimizing programs and reducing process costs. As an advanced costing theory, ABC has been used successfully in Chinese refineries, yet there is still room for improvement. First, how to combine ABC better with Chinese traditional general-ledger accounting is a major topic for coming study. Second, not only the financial staff should understand the benefits ABC can bring; all employees of the company should accept it. Greater cooperation between the different management departments will then bring a great improvement in controlling and reducing costs. Research on the application of ABC in the refinery industry will continue.
6. References
[1] Turney P B B, (1996) ABC: The Performance Breakthrough: 85-86
[2] Ou Peiyu, Wang Pingxin, (2000) The application of ABC in the Chinese manufacture industry. Accounting Research
[3] Ou Peiyu, Wang Pingxin, (2002) Study of activity-based cost control. Chinese Soft Science
[4] Cokins G, (1999) Using ABC to become ABM. Journal of Cost Management: 29-35
[5] Wang Xinping, (2000) Research on Theory and Application of Activity-Based Costing. Dongbei University of Finance & Economics Press
[6] Han Qinglan, Xiao Boyong, (2004) A fuzzy evaluation method of activity performance based on value-chain analysis. Wuhan University of Technology (Social Science Edition): 439-442
[7] Ding Rijia, (2003) Activity-Based Costing Management System: No. 010308-09
The Application of Activity-Based Cost Restore in the Refinery Industry

Xingdong Liu1, Ling Zhou2, Linhai Shan2, Fenghua Zhang2, Qiao Lin2

1 Financial Department, Petrochina Company Ltd., Beijing 100724, P. R. China
2 Beijing Boomlink Information Technology Co. Ltd., Beijing 100012, P. R. China
Abstract With the development of more detailed cost management in the refinery industry, managers pay more attention to cost control. The combination of activity-based costing and cost restore will play a very important role in cost control in the refinery industry: applying ABC together with cost restore makes the cost structure of each procedure and product in the refinery more clear-cut. This paper introduces the implementation of activity-based cost restore in refineries, proposes a new cost-restore algorithm, and describes the further application of cost analysis based on cost-restore information. The traditional theory of cost restore, the application of activity-based cost restore in the refinery industry and the method of directly traced costs are also discussed. Keywords: Activity-Based Costing (ABC), Cost Restore, activity-based cost restore, continuous flow, joint products
1. Introduction
Activity-Based Costing (ABC) is a costing method that aims to close a gap in traditional cost management and to provide immediate and accurate information. ABC became popular in developed countries in the 1980s and started spreading in China in the early 1990s. At present, research on the application of ABC in the Chinese refinery industry is rare, and its application in many other sectors, such as services, the circulation industry and dispersed trades, has not taken very good effect either. This paper focuses on cost restore and the related analysis, based on a combination of traditional cost-restore methods and ABC theory.
2. Difficulties of Cost Restore in the Refinery Industry
The main operation of a refinery is the procedure that makes raw materials more valuable; it includes both separation and synthesis. This kind of procedure is continuous, multi-product and high-volume. Products from one unit can be processed further in the next procedures. In addition, products can be
sold as merchandise on the market. Since values are transferred with the flow of products, it is important to calculate the cost of products and processes step by step according to the processing techniques. In stepwise costing, the material expense of every procedure is represented by the cost of "semi-products", one element of cost. Product cost is then the sum of the costs of the semi-products coming from the former procedures plus the process expense of the last procedure, rather than the costs of the elementary items that the procedures consume. In most procedures, the expense of "semi-products" is huge compared with the process expense, and this cost structure is inefficient for management and performance evaluation. It is necessary to trace the cost of all kinds of "semi-products", so as to analyze the effect of changes in the expenses of elementary items on product costs, to find the differences in cost consumption between actual and planned programs, and to reduce process cost and enhance profitability. Considering the complexity of the refinery process and the differences between the refinery industry and other industries, product cost restore in the refinery industry takes several forms.

2.1 Cost Restore for Joint Products
In the refinery, crude oil and raw materials are processed into several main products of comparable economic value; we call these joint products. For example, products from the crude-oil distillation units include gasoline, kerosene, light diesel, heavy diesel, VGO, residue and gas. Meanwhile, the products of different processing programs differ, and every product carries different material expenses as well as different processing expenses. Accordingly, we should consider the types of products and the technical programs in cost restore [1].

2.2 Complex Processes Make Cost Restore of Mixed Products Necessary
Most petroleum products are mixtures obtained by blending, in which catalysts and additives are added to improve the quality of the blend to meet certain requirements. This complicates the cost structure of the final products, and restored costs are needed for cost analysis. Moreover, the cost of stocks affects the cost of final products if products from stock are used in blends. Since even the same semi-finished product can flow in different ways, the uncertainty of the flow of semi-finished products also makes cost restore necessary.

2.3 Detailed Management Would Benefit from Cost Restore
With the development of technology, environmental protection promotes improvements in product quality. Correspondingly, the activity chains of refinery processes become longer and more complex, and it is necessary for the main point of cost management to change from "result controlling" to "process
controlling" [2]. Cost restore can help to provide detailed information on the expense of resources and promote the development of refined management. For the reasons mentioned above, cost restore in the refinery industry is consistent with the requirements of cost management, and its application in this industry should develop further.
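The blended-product case of Sect. 2.2 can be made concrete with a small sketch: the cost of a blend is the proportion-weighted cost of its components, which is why the component costs themselves must be restorable to elementary elements. The fractions and component costs below are purely hypothetical.

```python
# Blended-product cost sketch (illustrative figures): a blend's cost is the
# proportion-weighted sum of its component costs, so any inaccuracy in a
# component's (semi-finished) cost propagates directly into the final product.

def blend_cost(components):
    """components: list of (fraction, cost_per_tonne); fractions must sum to 1."""
    assert abs(sum(f for f, _ in components) - 1.0) < 1e-9
    return sum(f * c for f, c in components)

# A hypothetical three-component gasoline blend: two fractions plus an additive.
cost = blend_cost([(0.70, 400.0), (0.25, 520.0), (0.05, 900.0)])
```

Changing either a proportion or a component (the cases named in Sect. 2, "variety of product blending") changes `cost`, which is exactly the variability that makes cost restore necessary for blended products.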
3. Theory of Product Cost Restore Based on Activities
The basic principle of ABC is that products consume activities and activities consume resources; product costs are thus caused by activity consumption and have no direct relationship with resource consumption, so cost distribution is closely related to the activity chain. Activity-based costing management pays more attention to the activities that produce the products, not just to the product costs themselves [3]. The application of ABC overcomes the shortcoming of traditional costing of taking the financial requirement as the only factor, and combines accounting with management; ABCM brings great advances in cost accuracy, decision-making and cost control. Cost-restore accounting is usually used to record the detailed primitive cost elements of the semi-finished products that are the raw materials of the next processes [4]. The restored cost is expressed in terms of the primitive cost elements, such as raw material, direct labour and the detailed items of manufacturing overhead. Cost-restore methods can be classified in different ways: by calculation method into proportional restore and structural restore, and by restore direction into direct restore and converse restore. In any case, the final objective is to obtain detailed information on the cost elements, which is consistent with the thinking behind ABC. On this consistency, cost restore and ABC can be combined: since ABC calculates the expenses of activities and products, and cost restore presents the structure of the activity chain through cost ascent, we can find the coupling point of the two methods and use ABC theory in cost restore.
X. Liu, L. Zhou, L. Shan, F. Zhang and Q. Lin
Figure 1. Relationship between cost restore and ABC
4. Direct Cost Restore Based on Activities
The process of crude oil refining is a complicated, continuous flow from which many joint products are derived. Detailed cost information on procedures, semi-finished products and finished products is necessary for cost control, process planning and decision making. Direct cost restore, which is easy to understand, is adopted in cost management in the refinery industry. We now discuss the concrete procedures of this method.
The Application of Activity-Based Cost Restore in the Refinery Industry
4.1 Establishment of Activities Chain
Figure 2. Model of activities chain
Based on the refinery process, we define the order of all units in the whole process. Every unit is taken as an activity, and the activities chain on which cost will be restored is built from them. The first unit of the whole process, i.e. the first step of production, is placed in stage 1; the units that follow it in the process are placed in stage 2. In the same way, we find the former and latter activities of every activity in the process. Thus the relationships among all activities are established through the activities chain and the order of stages: different stage numbers represent different locations of activities in the process, and the largest stage number reflects the length of the activities chain. Fig. 2 shows the typical model of an activities chain.
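The staging procedure described here amounts to assigning each unit a depth in the unit precedence graph. A minimal sketch, not from the paper (`build_stages` and the unit names are illustrative assumptions):

```python
def build_stages(predecessors):
    """Assign each unit a stage number: units with no predecessor are
    stage 1; every other unit sits one stage after its latest predecessor.

    predecessors: dict mapping unit name -> list of upstream unit names.
    Returns dict unit -> stage number (longest-path depth in the chain).
    """
    stages = {}

    def stage_of(unit):
        if unit not in stages:
            preds = predecessors.get(unit, [])
            stages[unit] = 1 if not preds else 1 + max(stage_of(p) for p in preds)
        return stages[unit]

    for unit in predecessors:
        stage_of(unit)
    return stages


# Hypothetical four-unit process: the blender sits after two stage-2 units.
chain = {"CDU": [], "FCC": ["CDU"], "Reformer": ["CDU"], "Blender": ["FCC", "Reformer"]}
```
The stage numbers then give the order in which restore costs must be computed, since a latter activity's restore cost depends on its predecessors'.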
4.2 Setup of the Flow of Materials
According to the activities chain, we can trace the raw materials that every unit consumes during the process, define the relationships between semi-finished products, final products and units according to the flow of all materials, and then set the order of cost restore for the semi-finished products.

4.3 Product Cost Restore Model
To simplify the procedure, we assume that there is only one kind of final product in the whole process, processed through n (n>2) activities A_n; that every activity yields only one kind of semi-finished product, all of which flow into the next units for further processing; and that the raw materials are invested at the very beginning. We denote the cost items by C_i, D_i and F_i, representing crude oil, salaries and manufacturing expenses respectively.
Figure 3. Model of typical cost restore [5]
In Fig. 3 above:

When n=1, there is a single activity. What the activity in stage 1 consumes are elementary cost items (crude oil, salaries and manufacturing expenses), so its restore cost equals its activity cost and product cost, namely C_1 + D_1 + F_1.

When n=2, the activities are ordered in two stages. The raw material of the latter activity is the semi-finished product produced by the former activity, so the comprehensive cost of the semi-finished product A_1 must first be restored to the elementary cost items; then the restore cost of the latter activity is computed. The following expression gives the restore cost of the activity and its product in stage 2:

\[
\frac{A_1 C_1}{C_1 + D_1 + F_1}
+ \left( \frac{A_1 D_1}{C_1 + D_1 + F_1} + D_2 \right)
+ \left( \frac{A_1 F_1}{C_1 + D_1 + F_1} + F_2 \right)
\tag{1}
\]

Here \(\frac{A_1 C_1}{C_1 + D_1 + F_1}\) represents the restore expenses of crude oil, \(\frac{A_1 D_1}{C_1 + D_1 + F_1} + D_2\) the restore expenses of salaries, and \(\frac{A_1 F_1}{C_1 + D_1 + F_1} + F_2\) the restore expenses of manufacturing expenses.
When n=3, the restore costs of all the semi-finished products that are the raw materials of the next activities must be calculated first. The restore cost of the activity in stage 3 and the restore cost of its product are given by:

\[
A_2 \cdot \frac{A_1 C_1}{(A_1 + D_2 + F_2)(C_1 + D_1 + F_1)}
+ \left[ A_2 \cdot \frac{A_1 D_1 + C_1 D_2 + D_1 D_2 + F_1 D_2}{(A_1 + D_2 + F_2)(C_1 + D_1 + F_1)} + D_3 \right]
+ \left[ A_2 \cdot \frac{A_1 F_1 + C_1 F_2 + D_1 F_2 + F_1 F_2}{(A_1 + D_2 + F_2)(C_1 + D_1 + F_1)} + F_3 \right]
\tag{2}
\]

In expression (2), the first term represents the restore expenses of crude oil, the bracketed term containing D_3 the restore expenses of salaries, and the bracketed term containing F_3 the restore expenses of manufacturing expenses. Comparing expression (2) with (1) shows how to calculate the proportions of the restore cost of crude oil, salaries and manufacturing expenses respectively. Supposing the proportion of the restore expenses of crude oil is \(\mu_2\), it can be calculated as:

\[
\mu_2 = \frac{A_1 C_1}{C_1 + D_1 + F_1} \div \left[ \frac{A_1 C_1}{C_1 + D_1 + F_1} + \left( \frac{A_1 D_1}{C_1 + D_1 + F_1} + D_2 \right) + \left( \frac{A_1 F_1}{C_1 + D_1 + F_1} + F_2 \right) \right]
= \frac{A_1 C_1}{(A_1 + D_2 + F_2)(C_1 + D_1 + F_1)}
\tag{3}
\]
In the same way, the proportion of the restore expenses of salaries, \(\alpha_2\), is:

\[
\alpha_2 = \left( \frac{A_1 D_1}{C_1 + D_1 + F_1} + D_2 \right) \div \left[ \frac{A_1 C_1}{C_1 + D_1 + F_1} + \left( \frac{A_1 D_1}{C_1 + D_1 + F_1} + D_2 \right) + \left( \frac{A_1 F_1}{C_1 + D_1 + F_1} + F_2 \right) \right]
= \frac{A_1 D_1 + C_1 D_2 + D_1 D_2 + F_1 D_2}{(A_1 + D_2 + F_2)(C_1 + D_1 + F_1)}
\tag{4}
\]
\(\beta_2\) represents the proportion of the restore expenses of manufacturing expenses:

\[
\beta_2 = \left( \frac{A_1 F_1}{C_1 + D_1 + F_1} + F_2 \right) \div \left[ \frac{A_1 C_1}{C_1 + D_1 + F_1} + \left( \frac{A_1 D_1}{C_1 + D_1 + F_1} + D_2 \right) + \left( \frac{A_1 F_1}{C_1 + D_1 + F_1} + F_2 \right) \right]
= \frac{A_1 F_1 + C_1 F_2 + D_1 F_2 + F_1 F_2}{(A_1 + D_2 + F_2)(C_1 + D_1 + F_1)}
\tag{5}
\]
From the expressions above, the restore cost of the activity in stage 3 equals the cost of the semi-finished product A_2 multiplied by the restore cost proportions of the product of the former activity, plus the expenses of the items (other than raw materials) consumed by the activity itself. In the same way, for stage n, the cost of the semi-finished product A_{n-1} is restored to the elementary cost items, and the restore cost of the activity of stage n is the cost of A_{n-1} multiplied by the restore cost proportions of the semi-finished product coming from the activity in stage n-1, plus the expenses of the items other than A_{n-1} consumed by the activity itself. Thus, once the cost structure of the restore cost of the former activities is known, we can express the cost of the semi-finished products, provided by former activities and consumed by latter ones, in elementary cost items and trace costs quickly. The traditional textbook method traces the sources of the semi-finished products consumed by every activity; when the process is complex, the number of products is huge, or there is much blending, the traditional method cannot be used, and the advantages of this new method are obvious.

4.4 Procedures of Cost Restore
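The stage-by-stage decomposition above can be sketched in code. This is a minimal illustration, not from the paper: `restore_costs` is a hypothetical helper, and it assumes each stage consumes a single semi-finished product whose transferred-in cost `a_prev` is given.

```python
def restore_costs(c1, d1, f1, later_stages):
    """Restore-cost decomposition along an n-stage activity chain.

    c1, d1, f1: crude oil, salary and manufacturing expenses of stage 1.
    later_stages: list of (a_prev, d, f) per subsequent stage, where a_prev
    is the cost of the semi-finished product transferred in from the
    previous stage and d, f are the stage's own salary/manufacturing items.
    Returns (crude, salaries, manufacturing) restore costs of the last stage.
    """
    crude, sal, mfg = c1, d1, f1
    for a_prev, d, f in later_stages:
        total = crude + sal + mfg            # restore cost of previous stage
        crude = a_prev * crude / total       # scale by a_prev's proportions...
        sal = a_prev * sal / total + d       # ...and add this stage's own
        mfg = a_prev * mfg / total + f       # non-raw-material items
    return crude, sal, mfg


# If each semi-product is fully transferred (a_prev = previous total):
restore_costs(100.0, 50.0, 50.0, [(200.0, 30.0, 20.0), (250.0, 10.0, 15.0)])
# -> (100.0, 90.0, 85.0)
```
With the stage-2 step, this reproduces the three terms of expression (1); a further step reproduces the three terms of expression (2).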
The equations above show how the restore costs of activities and of joint products, as well as of semi-finished and final products, are obtained on the basis of the activities chain and the material flow. Suppose there are m series (stages) of activities, each containing n activities, and every activity yields S joint products. The parameters are as follows: A_ij represents activity j in stage i, with i = 1…m and j = 1…n; P_ijk represents product k of activity j in stage i, with k = 1…s; DP_f represents semi-finished product f, with f = 1…t; LP_l represents final product l, with l = 1…r. Cost items include crude oil, semi-finished products, power, salaries, manufacturing expenses, etc. The cost proportions of these items are α_ij, λ_ij, β_ij, δ_ij, …, μ_ij respectively, and their restore cost proportions are α'_ij, β'_ij, δ'_ij, …, μ'_ij respectively.
First of all, we calculate the restore costs of the activities in stage 1. Their restore cost obviously equals their activity cost, since the resources consumed by activities in stage 1 are already expressed in elementary cost items. Letting C_{A_{1j}} represent the activity cost of activity j in stage 1, C'_{A_{1j}} = C_{A_{1j}}, and α_{1j} = α'_{1j}, β_{1j} = β'_{1j}, δ_{1j} = δ'_{1j}, …, μ_{1j} = μ'_{1j}. Meanwhile, based on the theory that products consume activities, the cost of the activity is allocated to the joint products and their restore costs are obtained with the equation below:

\[
C'_{P_{1jk}} = C_{P_{1jk}} \cdot (\alpha'_{1jk} + \beta'_{1jk} + \delta'_{1jk} + \cdots + \mu'_{1jk})
\tag{6}
\]
After calculating the restore costs of activities in stage 1 and the corresponding restore costs of their products, we calculate the restore costs of activities in stage 2, since the semi-finished products they consume come from the activities in stage 1. Because programmes differ, different raw materials are processed into different joint products and consume different resources, so the source of the raw materials must be distinguished when calculating the restore costs of activities in stage 2. If the semi-finished products consumed by an activity in stage 2 come directly from the activities in stage 1, i.e. no semi-finished products inventory is consumed, the restore cost is the product of the cost of the semi-finished products and the proportions of the cost items consumed by the activities in stage 1, plus the other expenses of the activity:

\[
C'_{A_{2j}} = C_{A_{2j}} \lambda_{2j} \left[ C_{P_{1jk}} (\alpha'_{1jk} + \beta'_{1jk} + \delta'_{1jk} + \cdots + \mu'_{1jk}) \right] + C_{A_{2j}} (\alpha_{2j} + \beta_{2j} + \delta_{2j} + \cdots + \mu_{2j})
\tag{7}
\]
If the raw materials of an activity in stage 2 come from semi-finished products inventory, the restore cost of that inventory must be calculated. First, we find the source of the semi-finished products inventory DP_f and calculate the cost of the semi-finished products transferred out for every cost item:

Unit restore cost of the semi-finished products transferred out = (restore cost of the beginning inventory + restore cost transferred in) / (amount of the beginning inventory + amount transferred in), where the restore cost transferred in is the sum of the restore costs of its components.

Meanwhile, we obtain the restore cost of the products inventory remaining at the end of the term and carry it forward to the next period. The restore cost of the activity is then the cost of the products inventory transferred out multiplied by the proportions of the cost items in its restore cost, plus the other expenses the activity consumes. To obtain the restore costs of the joint products, we multiply the cost of the semi-finished products consumed by the joint products by the proportions of the cost items in the restore cost of the semi-finished products, and add the other expenses consumed by the joint products:

\[
C'_{P_{2j}} = C_{P_{2j}} \lambda_{2j} \left[ C_{P_{1jk}} (\alpha'_{1jk} + \beta'_{1jk} + \delta'_{1jk} + \cdots + \mu'_{1jk}) \right] + C_{P_{2j}} (\alpha_{2jk} + \beta_{2jk} + \delta_{2jk} + \cdots + \mu_{2jk})
\tag{8}
\]
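The inventory valuation here is a weighted (moving) average per cost item. A minimal sketch under that reading (`transferred_out_restore_cost` is a hypothetical helper; it treats one cost item at a time):

```python
def transferred_out_restore_cost(begin_cost, begin_amount, in_cost, in_amount, out_amount):
    """Weighted-average restore cost of semi-finished products drawn from inventory.

    Unit restore cost = (restore cost of beginning inventory + restore cost
    transferred in) / (beginning amount + amount transferred in).
    Returns (restore cost transferred out, restore cost left in inventory,
    which is carried forward to the next period).
    """
    unit_cost = (begin_cost + in_cost) / (begin_amount + in_amount)
    out_cost = unit_cost * out_amount
    remaining = begin_cost + in_cost - out_cost
    return out_cost, remaining


# Example with assumed figures: 10 t at restore cost 100 on hand,
# 10 t at restore cost 200 transferred in, 15 t drawn for stage 2.
transferred_out_restore_cost(100.0, 10.0, 200.0, 10.0, 15.0)
# -> (225.0, 75.0)
```
Running this for each cost item (crude oil, salaries, manufacturing expenses, …) gives the per-item restore cost of the inventory consumed, as required before applying equation (8).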
Following this procedure, we obtain the restore costs of the other activities and joint products. Clearly, the cost restore of an activity is closely related to its position in the activities chain: the restore costs of former activities must be calculated before those of latter activities, and likewise for semi-finished products. After all these procedures are finished, we obtain the restore costs of the remaining semi-finished products inventories. For activities with more than one former activity, all former activities must be taken into consideration; semi-finished products with more than one component are handled in the same way.
5. Conclusions
Supported by computer technology, the method of cost restore based on the activities chain is used efficiently in the cost management of the refinery industry. Firstly, analysis of restore costs provides detailed information about the cost structure of activities, joint products, semi-finished products and final products; this information helps to evaluate the actual expenses of units and to find ways to reduce costs. Secondly, combining restore costs with the expenses of variable and fixed costs is useful for process planning, process analysis and objective management. Thirdly, comparing activity costs and product costs among different enterprises provides information for analysing differences in costs and benefits.
6. References
[1] REN Yang, LIU Huan-jun, CHEN Liang-you, A Programming Model for Multi-products Producing Decision Based on ABC, Chinese Journal of Management Science, Vol. 9, No. 2, Apr. 2001, 36-42.
[2] Cooper, R. and Kaplan, R.S., Measure Costs Right: Make the Right Decisions, Harvard Business Review, September/October 1988, 96-103.
[3] OU Pei-yu, WANG Ying-luo, WANG Pin-xin, ZHU Li-xin, The Application of Composite DEA in Activity Analysis and Evaluation, System Engineering, Vol. 24, No. 6, Jun. 2006, 52-57.
[4] Callen, J., Data Envelopment Analysis: Partial Survey and Applications for Management Accounting, Journal of Management Accounting Research, 1991, 3 (Fall), 35-56.
[5] HAN Qing-lan, XIAO Bo-yong, A Fuzzy Evaluation Method of Activity Performance Based on Value-Chain Analysis, Wuhan University of Technology (Social Science Edition), Vol. 17, No. 4, August 2004.
Research on the Cost Distribution Proportionality of Refinery Units

Fen Zhang1, Yanbo Sun2, Chunhe Wang3, Xinglin Han2, Qiusheng Wei2

1 China Boomlink Information Technology Co. Ltd., Beijing 100, P. R. China
2 Daqing Petrochemical Company, Daqing, Heilongjiang Province, China
3 The Machinery Department, Research Institute of Petroleum Development & Exploration, Beijing 100083, P. R. China
Abstract: Cost distribution proportionality, one of the critical factors influencing the accuracy of product cost calculation in the refinery industry, has become a focus of research on product costing and cost control systems. In this paper, cost distribution proportionality is subdivided into material consumption proportionality, energy consumption proportionality and synthetic proportionality. The factors affecting each kind of proportionality are studied and quantified to distinguish their impact on cost proportionalities, and the three kinds of proportionality are used to distribute the costs of different elements respectively. A theoretical model for calculating the three cost distribution proportionalities and for distributing the costs of the different elements is provided, and an instance is presented to compare product costs calculated under the different proportionality systems.

Keywords: cost distribution proportionality, material consumption proportionality, energy consumption proportionality, enthalpy difference, value of equivalents
1. Introduction
With the application of Activity-Based Costing Management (ABCM) in the petroleum refinery industry, managers pay more attention to controlling processing costs, and cost collection and cost distribution are closely tied to the refinery process. The multiple products produced from one unit at the same time are an obvious characteristic of the refinery process, which makes the application of cost distribution proportionality necessary: in detail, cost distribution proportionality is used to carry forward the costs of multi-products and to distribute costs as a main cost driver. In the 1980s, researchers proposed a set of fixed cost distribution proportionalities based on the techniques and the processing level of that time, covering factors such as manufacturing difficulty, product quality standard and product distilling rate. With the development of management theory,
F. Zhang, Y. Sun, C. Wang, X. Han and Q. Wei
traditional cost distribution proportionality is no longer accurate enough to reflect the resource consumption of the refinery process, and it distorts the results of economic analysis of processing. Under the traditional proportionality, the semi-finished product costs distributed from unit expenses are not consistent with actual resource consumption; moreover, performance evaluation based on it is not effective enough to optimise the processing programme. In this paper, we subdivide the factors that affect product costs and, based on the characteristics of the continuous process of petroleum refining, provide models for calculating cost distribution proportionality and for distributing the unit expenses of a refinery to its multi-products.
2. Theory Model
As a typical continuous process industry, oil refining exhibits many characteristics, such as continuity, complexity, product variance, multi-products, and difficulty in choosing cost drivers. To distribute the cost of units to the corresponding multi-products accurately, cost distribution proportionality must be effective enough to reflect the impact of the various factors on cost consumption; product cost data based on such a procedure gives useful information to managers in different departments of an enterprise. From the viewpoint of process planning, cost distribution proportionality should reflect the energy consumption of every product from the same unit, since energy consumption represents how much processing expense each product should bear during the process. From the viewpoint of economic analysis, cost distribution proportionality should reflect the value of the materials separated from the raw oil in every output product; valuing the products from the value of the final products then benefits decision making. According to the characteristics of the refinery process and the structure of refinery product costs, the cost of refinery products can be subdivided into three parts: the expense of raw materials, the direct processing charge and the indirect expense. Generally, the expense of raw materials accounts for more than 90% of product cost; the direct processing charge is mainly caused by energy consumption, i.e. the consumption of energy providers such as fuel and power; and the indirect expense is composed of salaries, depreciation, overhead, etc. To cost different products accurately, we must analyse how the expenses of the cost elements are affected by different factors, in order to choose corresponding cost drivers for the different elements.
The model in Fig. 1 indicates how material consumption proportionality, energy consumption proportionality and synthetic proportionality are used to distribute the cost of materials, direct processing expenses and indirect expenses respectively.
Figure 1. Model of costing products
Energy Consumption Proportionality

In the refinery process, products may be the outputs of different subunits even if they are produced by the same unit, so we define the Energy Consume Unit (ECU) to detail the energy consumption process of each product. That is to say, different producing processes form different processing chains, which cause the different kinds of energy consumption of every product. Energy consumption can be used to compute the expenses of energy providers such as water, electricity, steam and compressed air. For this reason, we propose energy consumption proportionality to reflect the relative energy consumption of every output from one unit, and to calculate the energy expense consumed by the multi-products. First, we subdivide a unit into several subunits according to the definition of the ECU, and establish ECU chains based on the energy transfer among the subunits. For example, the model in Fig. 2 is the typical ECU chain of the crude distillation unit; we can see that energy is transferred by the products.
Figure 2. ECU Chain of the Crude Distillation Unit [1]
The enthalpy difference (ΔH) [2][3], which represents the unit energy consumption, is proposed to quantify the energy consumed by the different products. Since energy consumption proportionality represents the proportion of a product's energy consumption in the unit's energy consumption, we transform the product's energy consumption into the unit's through the proportion M of the subunit's energy consumption in the unit's energy consumption. Assume that the proportion of the energy consumption of the primary tower to that of the crude distillation unit is M_1, and the enthalpy difference of the first fraction is ΔH_1. Equation (1) gives the energy consumption per unit of the top output, the first fraction:

\[ N_1 = \Delta H_1 \times M_1 \tag{1} \]

Before calculating the energy consumption of As 1, which comes from the atmospheric distillation tower, we must confirm the proportion of the energy consumption of the atmospheric distillation tower to that of the unit. The energy consumption of the atmospheric distillation tower is composed of two parts: the known energy the tower itself consumes, and the energy transferred by the residual oil. Supposing that the distilling rate of the first fraction is P_1, that of the residual oil is P_2, and the enthalpy difference of the residual oil is ΔH_2, the rate of energy consumption that the residual oil transfers to the atmospheric distillation tower is:

\[ T_1 = \frac{\Delta H_2 \times P_2}{\Delta H_1 \times P_1 + \Delta H_2 \times P_2} \times M_1 \tag{2} \]

In this way, with the hypothesis that the energy consumption proportion of the atmospheric distillation tower itself is M_2, its total energy consumption proportion is M_2 + T_1. If ΔH_3 represents the enthalpy difference of As 1, the following equation gives the energy consumption per unit of As 1:

\[ N_3 = \Delta H_3 \times (M_2 + T_1) \tag{3} \]

After calculating the energy consumption of every semi-finished product, we take the relative energy consumptions as the energy consumption proportionality using equation (4):

\[ F_{ni} = \frac{N_i}{\sum_j N_j} \tag{4} \]

where F_ni is the energy consumption proportionality of product i and N_i is the energy consumption per unit of product i.
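Equations (1) to (4) can be sketched as follows; the function names and all numeric figures in the example are illustrative assumptions, not data from the paper:

```python
# Illustrative figures (assumed): energy shares of the primary and
# atmospheric towers, enthalpy differences, and distilling rates.
M1, M2 = 0.3, 0.5
DH1, DH2, DH3 = 120.0, 80.0, 100.0
P1, P2 = 0.15, 0.85

def transferred_share(dh1, p1, dh2, p2, m1):
    """Eq. (2): share of the primary tower's energy carried into the
    atmospheric tower by the residual oil."""
    return dh2 * p2 / (dh1 * p1 + dh2 * p2) * m1

def proportionality(n_per_unit):
    """Eq. (4): F_ni = N_i / sum_j N_j over a unit's outputs."""
    total = sum(n_per_unit.values())
    return {k: v / total for k, v in n_per_unit.items()}

n1 = DH1 * M1                              # eq. (1): first fraction
t1 = transferred_share(DH1, P1, DH2, P2, M1)
n3 = DH3 * (M2 + t1)                       # eq. (3): As 1
props = proportionality({"First fraction": n1, "As 1": n3})
```
The same pattern extends down the ECU chain: each tower's effective share is its own consumption plus whatever its feed stream carries in.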
Material Consumption Proportionality

The cost structure of petroleum products differs from that of mechanical products in that the expense of raw materials takes the major part, so the distribution of material expenses is critical to product cost. The characteristics of the refinery process indicate that the multi-products should bear the expenses of the raw materials the process consumes, and different products take on different values since they meet different needs in the market. Following the hypothesis that the material expenses products bear should be consistent with the values of the products, we take material consumption proportionality as the cost driver to distribute raw material costs from the viewpoint of economic analysis. Material consumption means how much raw material is consumed by the products during the process. According to the model of refinery activity chains, most products coming from units have no market price; they are further processed into final products, and only the final products are traded as commodities in the market. Semi-products' values must therefore be reckoned from the prices of the final products. According to the flow direction of products, we can establish the quantified relationship between final products and the corresponding multi-products, and define equivalents of the semi-products. When the values of the final products are known, we can calculate the prices of the equivalents corresponding to the final products; that is, we can take the relative prices of the equivalents of the semi-products as the semi-products' material consumption proportionality. Fig. 3 is a simplified model of a typical refinery activity chain.
Figure 3. Typical Model of Refinery Activity Chain
tr_1 — amount of a certain product from unit B.
tr_2 — amount of another kind of raw material of unit A.
cc_i — amount of a certain product from unit A.
ccl_i — amount of product cc_i that constitutes C.
According to the expression in Fig. 3, the amounts of tr_1 and tr_2 that constitute the product C are calculated as follows:

\[ trl_1 = ccl_1 \times \frac{tr_1}{tr_1 + tr_2} + ccl_2 \times \frac{tr_1}{tr_1 + tr_2} + ccl_3 \times \frac{tr_1}{tr_1 + tr_2} \tag{5} \]

\[ trl_2 = ccl_1 \times \frac{tr_2}{tr_1 + tr_2} + ccl_2 \times \frac{tr_2}{tr_1 + tr_2} + ccl_3 \times \frac{tr_2}{tr_1 + tr_2} \tag{6} \]
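Equations (5) and (6) allocate the output amounts that end up in C back to unit A's raw-material inputs, each input contributing in proportion to its share of the total feed. A small illustrative sketch (function name and figures assumed):

```python
def raw_material_amounts(ccl, tr):
    """Eqs. (5)-(6): amounts of each raw material of a unit embodied in a
    final product, given the output amounts ccl_i that constitute it and
    the unit's raw-material inputs tr_j.

    Each input tr_j receives the share tr_j / sum(tr) of every ccl_i,
    so the result is simply sum(ccl) split pro rata over the inputs.
    """
    total_in = sum(tr)
    used = sum(ccl)
    return [used * t / total_in for t in tr]


# 2 + 3 + 5 units of unit A's outputs end up in C; A's feed is 4 + 6 units.
raw_material_amounts([2.0, 3.0, 5.0], [4.0, 6.0])
# -> [4.0, 6.0]
```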
In this way, we can find the amounts of the semi-products that constitute the different final products, and calculate the prices of the equivalents of all semi-products. For the semi-products cc_1 and cc_2, suppose the amounts of them that constitute the final products C_1, C_2, C_3 are ccl_1, ccl_1', ccl_1'' and ccl_2, ccl_2', ccl_2'' respectively, and the prices of the final products are P_1, P_2, P_3. The prices of the equivalents of cc_1 and cc_2 are then:

\[ P_{cc_1} = \frac{P_1 \times ccl_1 + P_2 \times ccl_1' + P_3 \times ccl_1''}{ccl_1 + ccl_1' + ccl_1''} \tag{7} \]

\[ P_{cc_2} = \frac{P_1 \times ccl_2 + P_2 \times ccl_2' + P_3 \times ccl_2''}{ccl_2 + ccl_2' + ccl_2''} \tag{8} \]
We take the relative values of cc_1 and cc_2, namely P_{cc_1} and P_{cc_2}, as the material consumption proportionality; it represents the market value of the semi-products and is calculated using equation (9):

\[ F_{wi} = \frac{P_i}{\sum_j P_j} \tag{9} \]

where F_wi is the material consumption proportionality of product i and P_i is the relative value of product i.
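The equivalent-price step (eqs. (7)-(8)) and the normalisation step (eq. (9)) can be sketched together; function names and the figures in the example are illustrative assumptions:

```python
def equivalent_price(final_prices, amounts):
    """Eqs. (7)-(8): amount-weighted average of the prices of the final
    products a semi-product flows into."""
    return sum(p * a for p, a in zip(final_prices, amounts)) / sum(amounts)

def material_proportionality(equiv_prices):
    """Eq. (9): F_wi = P_i / sum_j P_j over the unit's outputs."""
    total = sum(equiv_prices.values())
    return {k: v / total for k, v in equiv_prices.items()}


# A semi-product split 1:1:2 among final products priced 10, 20, 30:
equivalent_price([10.0, 20.0, 30.0], [1.0, 1.0, 2.0])
# -> 22.5
```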
F wi ----- The material consumption proportionality of the product i . Pi ----- The relative value of the product i . Synthetic Proportionality Cost distribution should base on different cost drivers according to the cost driver theory in the ABC. The material consumption proportionality and the energy consumption proportionality mentioned above can be used to distribute expenses of material and direct process expense respectively. Since the influence of indirect expenses, such as salary and overhead, on product cost is relatively stable and very little, we can neither distribute these expenses just base on process nor on the market value of products only.
In distributing indirect expenses, both processing and product values should be taken into consideration; their effects on product costs are quantified by the energy consumption proportionality and the material consumption proportionality respectively. We therefore propose the synthetic proportionality, which synthesises the energy proportionality and the material proportionality, as the cost driver for distributing the indirect expenses of products. It is calculated using equation (10):

\[ F_i = W_w \times F_w + W_n \times F_n \tag{10} \]

where F_i is the synthetic proportionality, W_w the weight of the material consumption proportionality, and W_n the weight of the energy consumption proportionality.
Cost Distribution Model

In cost distribution, the expenses of the different cost items should be distributed on the basis of different cost drivers. In addition, the effect of production should be taken into consideration, since cost distribution proportionality indicates the consumption capability of each unit quantity of product. For example:

Material cost of P_i = material cost of the unit × (output of P_i × material proportionality of P_i) / Σ_j (output of P_j × material proportionality of P_j)

In the same way, we can distribute the expenses of the other items based on production and the other kinds of proportionality, and sum the cost of every item to get the total cost of the products:

Cost of P_i = material cost of P_i + variable processing cost of P_i + maintenance processing cost of P_i
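The distribution rule and equation (10) can be sketched together; the function names and the figures in the usage line are illustrative assumptions:

```python
def synthetic_proportionality(mcp, ecp, w_material=0.9, w_energy=0.1):
    """Eq. (10): F_i = W_w * F_w + W_n * F_n for each product."""
    return {p: w_material * mcp[p] + w_energy * ecp[p] for p in mcp}

def distribute_cost(total, production, prop):
    """Cost of P_i = total * (output_i * prop_i) / sum_j(output_j * prop_j),
    i.e. a pro-rata split weighted by production times proportionality."""
    weights = {p: production[p] * prop[p] for p in production}
    wsum = sum(weights.values())
    return {p: total * w / wsum for p, w in weights.items()}


# Two products with outputs 1 and 3 and proportionalities 2 and 1
# split a cost of 100 in the ratio 2:3.
distribute_cost(100.0, {"a": 1.0, "b": 3.0}, {"a": 2.0, "b": 1.0})
# -> {'a': 40.0, 'b': 60.0}
```
Running `distribute_cost` once per cost element, with the matching proportionality as `prop`, and summing per product reproduces the total-cost formula above.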
3. An Instance
We now give an instance of calculating cost distribution proportionality and cost distribution for oil refinery units, putting the theory above into practice.

Proportionality Calculation

According to the processing craft, the crude distillation unit can be subdivided into three subunits: the primary tower, the atmospheric distillation tower and the vacuum distillation tower. We then set the subunit chain according to the products' flow. Based on the parameters of the products, the proportions of energy consumption of the subunits, and the values of the products coming from the crude distillation unit, the energy consumption proportionality and material consumption proportionality can be calculated as follows [4][5]. In Table 1, supposing that the weights of the material consumption proportionality and the energy consumption proportionality are 0.9 and 0.1 respectively, the synthetic proportionality can also be calculated. In Table 1, ECP is energy consumption proportionality, MCP is material consumption proportionality, and SP is synthetic proportionality.

Table 1. Proportionalities

Products              Production   Old proportionality   ECP    MCP    SP
First fraction        3825         1                     1      1      1
Atmospheric gas oil   5176         1.2                   1.31   0.96   0.99
As 1                  17841        1.1                   0.7    1.2    1.15
As 2                  26934        1                     0.52   1.2    1.13
As 3                  18551        0.8                   0.39   1.03   0.97
As 4                  5543         0.65                  0.23   1.02   0.94
Vs 1                  6161         0.7                   0.86   1.03   1.02
Vs 2                  14704        0.75                  0.66   1.07   1.03
Vs 3                  13169        0.85                  0.36   1.03   0.97
Vs 4                  8566         0.9                   0.22   1.01   0.93
Vs 5                  4466         0.7                   0.26   1.01   0.93
VRs                   103316       0.7                   0.79   0.89   0.88
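The SP column is the weighted combination SP = 0.9 × MCP + 0.1 × ECP of the two preceding columns; a short cross-check sketch using three rows' ECP and MCP values from Table 1:

```python
# (ECP, MCP) values for three Table 1 rows; weights 0.9 material, 0.1 energy.
rows = {
    "As 1": (0.70, 1.20),
    "As 2": (0.52, 1.20),
    "VRs":  (0.79, 0.89),
}
sp = {p: round(0.9 * mcp + 0.1 * ecp, 2) for p, (ecp, mcp) in rows.items()}
```
The recomputed values (1.15, 1.13 and 0.88 respectively) agree, to rounding, with the table's SP column.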
Cost Distribution

Data indicating the cost consumption and the production of the crude distillation unit are listed below.

Table 2. Cost Consumption of the Crude Distillation Unit

Cost elements         Cost drivers      Amounts       Price     Expenses
Total                                   2647040.56    309.64    819622954.22
Raw materials         Production×MCP    229030.00     3554.53   814094624.28
Auxiliary materials   Production×ECP    170.56        337.32    57532.00
Fuels                 Production×ECP    2842.00       923.10    2623450.00
Powers                Production×ECP    2414998.00    0.56      1359323.40
Salaries              Production×SP                             286355.52
Overhead              Production×SP                             1201669.02
Then we can get the cost of As 1 and contrast it with the cost calculated on the basis of the old proportionality, as follows.

Table 3. Costs of the product As 1

                      Old proportionalities       New proportionalities       Difference
Cost elements         Amounts      Expenses       Amounts      Expenses       in expense
Total                 281212.47    86915836.22    224203.20    76688004.64    10227831.59
Raw materials         24331.36     86486608.10    21440.48     76210888.86    10275719.23
Auxiliary materials   18.12        6112.00        14.30        4824.36        -1287.65
Fuels                 301.92       278706.29      238.32       219989.88      -58716.41
Powers                256561.07    144409.83      202510.10    113986.31      -30423.52
Salaries                           30421.42                    26617.39       -3804.03
Overhead                           127661.18                   111697.84      -15963.34
The data in the table show that the product's cost differs between the results calculated on the basis of the different proportionalities; likewise, the costs distributed to the various products differ between the two methods.
4. Summary
In terms of model establishment, this method compensates for the inability of the old proportionality system to reflect the process craft in detail, and establishes the relationship between cost distribution proportionality and economic analysis. Moreover, applying multiple proportionalities in costing greatly improves the attribution of cost elements: different proportionalities are used as cost drivers to distribute the cost components consumed by the multi-products, and the improved cost accuracy greatly helps product positioning and makes product pricing decisions more effective. Based on the multiple proportionalities and the accurate cost distribution, performance evaluation, which presents the profitability of units, is more effective, and its results are more helpful for guiding the optimisation of the processing programme. Furthermore, the cost of finished products carried forward stepwise reflects the actual cost consumption of the process, and performance evaluated on this cost system benefits managers' decision making as well as economic analysis.
Chapter 7 Collaborative and Creative Product Development and Manufacture

From a 3D Point Cloud to a Real CAD Model of Mechanical Parts, a Product Knowledge Based Approach .......... 805
A. Durupt, S. Remy, W. Derigent

Research on Collaborative Design Support System for Ship Product Modelling .......... 815
Yiting Zhan, Zhuoshang Ji, Ming Chen

Research on New Product Development Planning and Strategy Based on TRIZ Evolution Theory .......... 825
Fuying Zhang, Xiaobin Shen, Qingping He

ASP-based Collaborative Networked Manufacturing Service Platform for SMEs .......... 835
Y. Su, B.S. Lv, W.H. Liao, Y. Guo, X.S. Chen, H.B. Shi

Virtual Part Design and Modelling for Product Design .......... 843
Bo Yang, Xiangbo Ze, Luning Liu

Integrated Paper-based Sketching and Collaborative Parametric 3D Modelling .......... 855
Franklin Balzan, Philip J. Farrugia, Jonathan C. Borg

Mechanical System Collaborative Simulation Environment for Product Design .......... 865
Haiwei Wang, Geng Liu, Xiaohui Yang, Zhaoxia He

Evolution of Cooperation in an Incentive Based Business Game Environment .......... 875
Sanat Kumar Bista, Keshav P. Dahal, Peter I. Cowling
From a 3D Point Cloud to a Real CAD Model of Mechanical Parts, a Product Knowledge Based Approach

A. Durupt¹, S. Remy¹, W. Derigent²

¹ Université de technologie de Troyes, 12 rue Marie Curie, 10010 Troyes (France)
² Université Henri Poincaré de Nancy I, 24-30 Rue de Lionnois, 54003 Nancy (France)
Abstract Reverse engineering is not a new domain but, according to users, the results obtained with the current approaches are not good enough. Starting from the 3D point cloud of a mechanical part, surface/solid based approaches offer software solutions that rebuild surfaces automatically. Based on basic segmentation and free-form surface fitting methodologies, geometric features can be extracted from a point cloud obtained by the 3D digitisation of a real model. These features are then rebuilt and connected with each other using expert knowledge to add design features, in a very long process whose outcome is a kind of CAD model only slightly more useful than a meshed model. As far as we can see, there is no industrial approach for the automatic conversion of a 3D point cloud into a CAD model with parameters or formulas. In this article, we describe a new research theme that will lead to the reconstruction of a CAD model from a 3D point cloud with a knowledge-based approach. Keywords: Re-engineering, knowledge management, leading parameters.
1.
Introduction
Reverse engineering is a domain of current interest. Nowadays, companies, organisations and suppliers need to manufacture old parts or products that they use every day but that have reached their end of life. Reverse engineering is, for example, used massively by the forging industry to manufacture new tools for old parts, and by suppliers to produce parts from a customer's prototype. Reverse engineering is used every day; it is not a new domain but, according to users, the results obtained with the current approaches are not good enough. These approaches can be classified into two categories: mesh-based approaches and surface/solid based approaches. With the first, a 3D point cloud representing an existing object is changed into a meshed surface that samples the real surface of this object. Thanks to recent improvements in meshing algorithms, the rebuilt surfaces are accurate and quickly computed. The noise of the point cloud, inherent to the digitising process, is usually filtered out, and the result is good enough to copy the original object using rapid prototyping technology or basic CAM approaches. It also enables mesh and remesh work to prepare stress analysis calculations, or to create digital models for marketing or virtual reality purposes. Here, however, re-engineering or re-design is not possible. In a meshed model, a hole, for example, has no diameter and no axis; it is just a set of triangles, i.e. tiny plane surfaces. In such a model, changing a diameter, or adding a parallelism constraint or a fillet between two faces, is impossible. With the second type of approach, the surface/solid based approaches, the 3D point cloud of the original object is changed into a surface model or a solid model. Software solutions like Raindrop Geomagic (http://www.geomagic.com) propose an automatic way of rebuilding surfaces, based on basic segmentation and free-form surface fitting methodologies.
Figure 1. An example of rebuilding a 3D model with a surface/solid based approach: the 3D point cloud from digitising and the 3D solid model after manual segmentation and surface rebuilding.
The resulting model is, however, as useless as a meshed model as far as re-design possibilities are concerned. Surface models or solid models can also be obtained from point clouds using a CAD solution. In this case, it is possible to obtain a model that enables a re-design approach, but it requires a very long series of geometric operations. The point cloud from the 3D digitisation of a real object is manually segmented into N sub point clouds representing the N geometric features that compose the object; the segmentation is performed from a purely geometric point of view. All these features are then rebuilt and connected with each other using expert knowledge to add design features, in a very long process whose outcome is a kind of CAD model only slightly more useful than a meshed model. Into the structure of a real CAD model, designers put data about expert knowledge (with parameters and relationships), about the manufacturing process, about the function of the product, and so on. To obtain such a product model, a geometric approach is not enough: the knowledge about the product, its life and its environment has to be taken into account as well as the initial geometric appearance. This project therefore proposes to formalise this knowledge and to automate the rebuilding methodology in order to obtain a real CAD model. It is an original approach, which takes into account, at an early stage of the reverse engineering process, the environment of the product as well as the knowledge people have about it.
2.
The State of the Art
Reverse engineering (RE) refers to creating a CAD model from an existing physical object; this model can be used as a design tool for producing a duplicate of the object, for extracting the design features of an existing model, or for re-engineering an existing part. In other words, RE takes information from the real world, such as a point cloud of an object's surface captured with a 3D digitising technology, as input and creates a geometric model, which should comply with the requirements of a rapid prototyping system or of CAM. Since cloud data are generally dense and unorganised, reconstructing a geometric model for efficient and accurate prototype manufacturing is a major research issue. In general, approaches for modelling an object from cloud data can be classified into two categories: (a) surface reconstruction based on an implicit function (e.g., a parametric function) (Sakar et al. 1991) [1], or (b) surface modelling employing a polyhedral mesh (e.g., a triangular mesh) (Urk et al. 1994) [2]. The segment-and-fit approach described by Hoffman and Jain (1987) [3] is widely used in the former. Typically, the cloud data is segmented into several patches bounded by clearly defined curves, each representing a discrete surface region of the physical object. Modelling methods, such as those employing parametric (Varady et al. 1997) [4] or quadric (Chivate et al. 1993; Weir et al. 1996) [5] [6] functions, are then applied to fit surfaces to the patches. Among the parametric representations for curves and surfaces, the Non-Uniform Rational B-spline (NURBS) is the most popular thanks to its ability to accurately approximate most surface entities encountered in design and manufacturing applications (Piegl et al. 1995) [7]. A model made of such mathematically described surface patches can be used directly for machining. However, the segmentation of very large sets of cloud data (a manual operation) can be a difficult and tedious task.
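As a minimal illustration of the segment-and-fit idea, the sketch below fits an explicit plane z = ax + by + c to one segmented patch of points by least squares, solving the 3x3 normal equations with plain Gaussian elimination. Real systems fit NURBS or quadric patches; the plane is chosen only to keep the example short, and the sample points are invented.

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to a patch of (x, y, z) points."""
    # Accumulate the 3x3 normal equations A^T A m = A^T z for m = (a, b, c).
    sxx = sxy = sx = syy = sy = n = sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y; n += 1.0
        sxz += x * z; syz += y * z; sz += z
    M = [[sxx, sxy, sx, sxz],
         [sxy, syy, sy, syz],
         [sx,  sy,  n,  sz]]
    # Gaussian elimination with partial pivoting on the augmented matrix.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for col in range(i, 4):
                M[r][col] -= f * M[i][col]
    m = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        m[i] = (M[i][3] - sum(M[i][j] * m[j] for j in range(i + 1, 3))) / M[i][i]
    return tuple(m)  # (a, b, c)

# Points sampled exactly from the plane z = 2x - y + 3:
patch = [(0, 0, 3), (1, 0, 5), (0, 1, 2), (1, 1, 4), (2, 1, 6)]
a, b, c = fit_plane(patch)
```

The same normal-equation pattern generalises to quadric fitting; only the basis functions change.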
It should be noticed that some commercial reverse engineering packages combine polyhedral meshes and parametric surface reconstruction. A typical example is Paraform (Paraform website), in which the point cloud is first triangulated, and a curvature-based mapping method then extracts feature curves for segmentation. Parametric surfaces are created from the feature curves; the polyhedral mesh thus serves as an intermediate model for the final surface creation. For the approaches employing polyhedral meshes, the data structure produced by the vision system plays a critical role in the meshing technique. The structure of the data can range from highly organised, such as an array of points, to almost unstructured, such as raw cloud data. For a highly structured data set, such as a range image composed of a regular grid of data points, a polygonal model can be created in a straightforward manner by linking neighbouring data points to form the mesh. If an object is digitised through the acquisition of multiple range images, then an appropriate registration and alignment technique must be implemented to merge the set of adjoining polygonal domains (Urk et al. 1994; Soucy et al. 1995) [2] [8]. Generally, algorithms that depend on an existing data structure perform more efficiently than unconstrained algorithms employing unstructured data. A major disadvantage of these algorithms, however, is their inherent dependence on specific sensor types, or even manufacturers. Algorithms developed for modelling less structured three-dimensional data sets assume that no a priori information on the connectivity of the points is available; the only assumption is a data sampling resolution high enough to permit an unambiguous reconstruction of the model. For example, Fang and Piegl (1995) [9] extended the 2D Delaunay triangulation algorithm to three-dimensional data. Cignoni et al. (1998) [10] described another Delaunay triangulation technique based on a divide-and-conquer paradigm. Lawson (1977) [11] used geometric reasoning to construct a triangular facet mesh and, subsequently, Choi et al. (1988) [12] extended the same method using a vector angle order, instead of the Euclidean distance, to determine the linkage of data points. Hoppe et al. (1992) [13] developed a signed distance function by estimating the local tangential plane and used a marching cube method to extract a triangular polyhedral mesh.
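For the highly structured case mentioned above, a range image on a regular grid, the "straightforward" triangulation amounts to splitting each grid cell into two triangles. A minimal sketch (the row-major indexing convention is our assumption):

```python
def grid_triangles(rows, cols):
    """Triangulate a rows x cols grid of range-image points.

    Points are indexed row-major (i * cols + j); each grid cell is split
    into two triangles, giving 2*(rows-1)*(cols-1) triangles in total."""
    tris = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = i * cols + j          # top-left corner of the cell
            b = a + 1                 # top-right
            c = a + cols              # bottom-left
            d = c + 1                 # bottom-right
            tris.append((a, b, c))
            tris.append((b, d, c))
    return tris

mesh = grid_triangles(3, 3)
len(mesh)  # 8 triangles for a 3x3 grid
```

Unstructured cloud data offers no such neighbourhood information, which is why the Delaunay-based methods cited above are needed there.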
3.
Research Paths
The final goal of this project is to obtain a new CAD model controlled by the user. Consequently, this project combines a geometrical recognition step with a knowledge management step.

Figure 2. Aim of the project: the 3D point cloud and the part/product knowledge (function, process) feed, respectively, a geometrical recognition step (surfaces, edges) and a knowledge management step; a bijection between their results links the functional and structural skeleton to a CAD model controlled by the users.
In the literature, geometrical recognition techniques are well known, but the resulting CAD model cannot be controlled by the CAD user. We believe that knowledge about the product could be a research path for extracting the functional and structural skeleton of the design and, consequently, the leading design parameters. Thus, a first research path will be based on the interpretation of product knowledge. A second research path will establish a link between the results of current geometrical recognition systems and the functional and structural skeleton deduced by the knowledge management of a product.

3.1
Rendering of the Knowledge
Usually, a CAD model contains expert knowledge given by the designer (parameters, relations, attributes). This information is domain-related and may concern, for instance, the product's mechanical functions or its manufacturing process. In a reverse engineering context, numerous product details are known; we will qualify this information as "product knowledge". Taking this knowledge and its interpretation into account will help geometric feature detection during the reconstruction phase: it will confirm (or not) the detection of certain surface types and ease the determination of the leading and led parameters. The purpose of this research is easy to understand. Reconstructing a real CAD model from its 3D point cloud alone seems almost impossible, because some information is missing: a CAD model is not only a geometrical feature but a technological feature too. To ease the reconstruction process, and to determine the parameters (which are pieces of knowledge), we need to know accurately what the environment of the product is and what its functions are. We truly believe this product-related knowledge will dramatically help the reconstruction process. In the next sections, we develop the proposed process. The first step consists in listing the knowledge related to the part, taking into account its life cycle and its environment. From our point of view, this knowledge is of two types: on the one hand, manufacturing knowledge (foundry, machining…); on the other hand, the purpose of the part, studied through the mechanical functions it ensures. As a first step, we restrict our study to two contexts, the functional context and the process context, and we assume that both are known.

3.1.1 The Manufacturing Knowledge

In this part, we show that manufacturing knowledge imposes geometrical shapes. The habits and patterns of different processes lead to different shapes and particular geometrical characteristics that are important for the manufacture of the part. Habits, in other terms "trade rules", can be extracted from procedure manuals or trade experience. A listing, obtained by audit for example, makes it possible to integrate manufacturing rules extracted from usual manufacturing processes.
Manufacturing      | Characteristics
Foundry            | Uniform thickness t; drafted surfaces;
                   | for thickness t <= 10 mm, fillet radius R = t; for t > 10, R = 0.3t
Prismatic milling  | Simple shapes; plane surfaces for fixturing; fillets, chamfers

Figure 3. List of process rules
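The process rules of Figure 3 translate directly into code; a minimal sketch, in which the function and dictionary names are ours:

```python
def casting_fillet_radius(t):
    """Foundry rule from Figure 3: R = t for thickness t <= 10 mm, else R = 0.3 t."""
    return t if t <= 10 else 0.3 * t

# Manufacturing knowledge as a lookup of expected shape characteristics,
# usable to confirm detected surface types during reconstruction.
PROCESS_RULES = {
    "foundry": ["uniform thickness", "drafted surfaces", "fillets"],
    "prismatic milling": ["simple shapes", "plane fixture surfaces",
                          "fillets", "chamfers"],
}

casting_fillet_radius(8)    # thin section: R = t
casting_fillet_radius(20)   # thick section: R = 0.3 t
```

Encoding the rules this way is what lets an audit-derived rule listing be queried automatically during reconstruction.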
In Figure 3, a prismatic milled part may have a large plane surface corresponding to a fixture; its width, length, perimeter and area are possible leading parameters that can be used to change the shape of the CAD model. As another example, a cast part has drafted surfaces, and the draft angles can also be leading parameters. The interpretation of knowledge can thus reveal certain leading parameters that could be changed by CAD users. Moreover, the mechanical functions of the part are also determining data for parameter extraction.

3.1.2 The Interpretation of Knowledge Issued from the Product Mechanical Functions

As a first step, we suppose in this context that each part ensures one or more known mechanical functions; consequently, the environment is also known. The terms "environment" and "function" highlight the concept of functional analysis, which can help reveal the geometrical and mechanical information of the part. For example, one of the mechanical functions of a piston is ensured by a pivot linkage with a bore. This linkage reveals the presence of cylindrical surfaces, whose parameters can be the radius, the diameter or the cylinder height.

3.2
A Practical Case for the Knowledge Management
In this practical case (Figure 4), we imagine that a manufacturer wants to re-design a belt idler in order to change its shape. The belt idler is in pivot linkage with a stud, and the clamping handle maintains the belt idler in position, which maintains the belt tension. We note that the belt idler is cast; casting rules indicate that the part has the following parameters: drafted surfaces, uniform thickness and fillets. Moreover, the functional analysis reveals further parameters. For example, the function F4 between the belt idler and the roller is to ensure a pivot linkage with the axle, which leads to a list of parameters: journal, axle cylindricity and axle diameter.
Figure 4. A practical case of a belt idler. A functional analysis (APTE) identifies the actors and goals of the system (nomenclature: 1 stud, 2 belt idler, 3 clamping handle, 4 roller, 5 roller axle). Manufacturing knowledge (casting) yields the parameters drafted surfaces, uniform thickness and fillets; function F4, to ensure rotation with the roller, yields the parameters cylindrical axle, axle diameter and constraints.
Consequently, this set of information needs to be managed in order to extract the parameters actually used. During this step, knowledge management reveals the presence of determined surface types: for example, a pivot linkage reveals the presence of a cylinder, and a prismatic part reveals the presence of a plane. Moreover, this future knowledge management will guide the construction of the functional and structural skeleton.
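This mapping from functional knowledge to expected surface types and candidate leading parameters can be sketched as a lookup table; the vocabulary below is illustrative, not a standard taxonomy.

```python
# Illustrative mapping from mechanical functions to the surface types they
# imply and the leading parameters they suggest. Terms are ours, invented
# for the sketch, not the project's actual vocabulary.
FUNCTION_KNOWLEDGE = {
    "pivot linkage": {
        "surfaces": ["cylinder"],
        "parameters": ["diameter", "cylinder height", "cylindricity"],
    },
    "plane support": {
        "surfaces": ["plane"],
        "parameters": ["width", "length", "area"],
    },
}

def expected_surfaces(functions):
    """Surface types the geometric recognition step should look for."""
    found = []
    for f in functions:
        for s in FUNCTION_KNOWLEDGE.get(f, {}).get("surfaces", []):
            if s not in found:
                found.append(s)
    return found

expected_surfaces(["pivot linkage", "plane support"])  # ['cylinder', 'plane']
```

Feeding the recognition step such an expectation list is one concrete way the knowledge can confirm, or rule out, detected surface types.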
Figure 5. A research path: correlation with interface model
The research work on product knowledge management could build on the work realised by the LASMIS laboratory in a product modelling context (Roucoules et al. 2003) [14].
This research axis will be based on the notion of interface models. Interface models are concepts which intrinsically define the geometrical translation and describe the knowledge of the CAD expert. They represent trade information and support the emergence of a design solution corresponding to a design problem, especially geometrical solutions (geometrical shape, dimensional tolerances and roughness). In this context, an interface model makes it possible to model the mechanical designer's intent and to specify the leading parameters. At present, interface models are based on the concepts of functional and structural skeleton and skin. This work will specify whether a relation can exist between an interface model and knowledge management. The final goal is to define, from the part information, a class of leading parameters.
4.
Conclusions
In a classical design approach, people define the product by designing and classifying the parameters that manage its different functions. Our approach is different: the aim is not to define the product from an idea, but to return from an existing part to a complete and fully parameterised CAD model including design intents. From the product's functions and leading parameters, one has to deduce the set of parameters and the geometrical definition.
Figure 6. The three milestones of this project.
As a first milestone, we will propose a prototype software application that answers the needs of the user. In a second step, we will propose a methodology to interpret and manage the knowledge about the product in order to deduce the set of leading parameters. We will then search for solutions and geometrical approaches in order to implement a feature recognition system; merging knowledge management and geometrical recognition will enable the construction of a complete and fully parameterised CAD model. Finally, we will propose a software solution as a tool for a knowledge based reverse engineering approach. During this project, the results obtained will be confronted with an industrial case from the forging industry, as this industry uses reverse engineering to rebuild tools for old parts.
5.
References
[1] Sakar B, Menq CH (1991) Smooth surface approximation and reverse engineering. Computer-Aided Design 23(9)
[2] Urk G, Levoy M (1994) Zippered polygon meshes from range images. Proceedings of SIGGRAPH'94
[3] Hoffman R, Jain K (1987) Segmentation and classification of range images. IEEE Pattern Analysis and Machine Intelligence 9(5)
[4] Varady T, Martin R, Cox J (1997) Reverse engineering of geometric models - an introduction. Computer-Aided Design 29(4)
[5] Chivate PN, Jablokow AG (1993) Solid-model generation from measured point data. Computer-Aided Design 25(9)
[6] Weir DJ, Milroy MJ, Bradley C, Vickers GW (1996) Reverse engineering physical models employing wrap-around B-spline surfaces and quadrics. Proceedings of the Institution of Mechanical Engineers - Part B, vol. 210
[7] Piegl L, Tiller W (1995) The NURBS Book. Berlin: Springer
[8] Soucy M, Laurendeau D (1995) A general surface approach to the integration of a set of range views. IEEE Pattern Analysis and Machine Intelligence 17(4)
[9] Fang TP, Piegl L (1995) Delaunay triangulation in three dimensions. IEEE Computer Graphics and Applications 15(5)
[10] Cignoni P, Montani C, Scopigno R (1998) A fast divide and conquer Delaunay triangulation algorithm in E^d. Computer-Aided Design 30(5)
[11] Lawson CL (1977) Software for C1 surface interpolation. Mathematical Software III, Academic Press
[12] Choi BK, Shin HY, Yoon YI, Lee JW (1988) Triangulation of scattered data in 3D space. Computer-Aided Design 25(9)
[13] Hoppe H, DeRose T, Duchamp T, McDonald J, Stuetzle W (1992) Surface reconstruction from unorganized points. Computer Graphics (Proceedings of SIGGRAPH)
[14] Roucoules L, Skander A (2003) Manufacturing process selection and integration in product design. Analysis and synthesis approaches. CIRP Design Seminar, Grenoble (FR)
Research on Collaborative Design Support System for Ship Product Modelling

Yiting Zhan, Zhuoshang Ji, Ming Chen

Ship CAD Engineering Centre, Dalian University of Technology, Dalian, 116024, China

Abstract Following research on parametric product modelling, a digital modelling method based on the modular ship, and a transmission framework for collaborative design, are put forward. Web Service technology and the .NET platform are used to develop a collaborative design system that covers ship structure modelling, modification and assembly. Collaborative design and management of the information in the ship digital product model are thereby realised. Keywords: Collaborative design, Product modelling, Web Service, Ship structure
1.
Introduction
The design of a ship structure is a typical example of complicated large product development, since the construction process requires vast space and a long period of time. As the core of the ship production cycle, design comprises several stages, such as preliminary design, submission for approval, detailed design and production design, which are carried out by design institutes and the shipyard. Information should be shared completely among these institutes. The duty of the design institutes is to supply models and design parameters, which the shipyards use to make a production plan; ship-owners need models and production information to calculate the costs. The design of a ship usually undergoes a few major changes. Under the existing system, however, information shared by the different institutes cannot be updated in real time. Several teams work on the design of a ship, and a change to one team's design parameter or model usually requires other teams to re-edit or re-design. If information is not updated promptly, the entire design may be subject to change, affecting the cost and duration of design. The efficiency of the design would be greatly enhanced and the production cycle shortened if sub-offices in different locations could apply different operations to the same ship structure model, while the design units and production units could browse and test the 3D structural models dynamically and in real time. Based on the hull structure, this paper provides a unified design resource platform focusing on the establishment, browsing and modification of the 3D model, which ensures the unimpeded exchange of technical information and the non-ambiguity of the structural design process. In order to shorten the design period of a product, the developed design system is applied to a team of nine people. Each member of the team is in charge of establishing the structural models of the hull, engine room, bow, stern and the five cargo holds respectively. During the model design and establishment process, information on the ship's hull plates, welding and steel products is generated; other users are then able to check this essential information and apply it directly in production.
2.
Construction of Design System
Based on the searching needs of Internet users such as shipyards, ship-owners and registers of shipping, this article adopts the Browser/Server mode. This mode allows users to browse models and collect information on production, and it reflects their needs. For intranet users within the institutes, we use the Client/Server mode to give each team access to build different parts of the model from the client. The structural parameters and models are managed by the server, which then contributes to the Internet assembly design. Since the Browser/Server mode is relatively simple, this article focuses on the establishment of the product model under the Client/Server mode, which comprises two parts: the network transmission framework and the background 3D modelling system. The framework is shown in Figure 1.

Figure 1. System framework: the client (user interaction module, display module) exchanges data with the server-side modelling system (dialog management program, modelling applications) through the 5-layer transmission framework; on the server side, the type-library, bar-library and parts-library feed the 3D model, and the database manages the data information.
As illustrated in Figure 1, the modelling CAD system is composed of modelling applications and a dialog management program. The C/S mode is applied: the modelling CAD system is placed on the server to concentrate the computing-intensive modelling operations, while the model is displayed and the operation interface provided at the client end. The core idea is to realise distributed sharing by opening the application connections of the CAD modelling system to network users. As a service provider, the modelling system is thus able to respond to service requests from multiple clients. The parameters entered by designers at the client end are transmitted to the server through the network, and the updated operation results are returned. Due to the complexity of ship structure modelling and the huge amount of data, a 5-layer transmission mode is used to realise data transfer and modelling operations between client and server: a Data Processing Layer, a Safety Layer and a Navigating Layer are added to the traditional 3-layer structure. Many kinds of correspondence are involved in the modelling system, which requires the connection and continuous modification of information and models. Modelling information is expressed as a characterised organic combination, using solid modelling technology to store the geometric and topologic information of the structure models. According to the characteristics of ship data, a complete data structure is defined in XML data files, including not only the components' basic geometric and topologic information, but also non-shape information such as characteristic constraints and manufacturing information. Meanwhile, based on object-oriented thinking, the major part of the design process is divided into structural modules. With object types and structural development as the basic design module, more complex structural model categories are obtained by inheritance, and polymorphism is used to handle modules of the same type with similar structure.
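An XML component file of the kind described, basic geometric and topologic information plus non-shape information such as constraints and manufacturing data, might look like the following sketch, built with Python's standard xml.etree.ElementTree. The element names are our assumption, not the system's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical component record: geometry plus non-shape information
# (constraints, manufacturing data). All element names are illustrative.
part = ET.Element("part", {"id": "BHD-012", "type": "bulkhead"})
geom = ET.SubElement(part, "geometry")
ET.SubElement(geom, "thickness").text = "12.5"
ET.SubElement(geom, "frame").text = "FR45"
nonshape = ET.SubElement(part, "non_shape")
ET.SubElement(nonshape, "constraint").text = "perpendicular to deck 3"
ET.SubElement(nonshape, "welding").text = "double fillet"

xml_text = ET.tostring(part, encoding="unicode")
# Round trip: another client can recover the same data from the XML.
loaded = ET.fromstring(xml_text)
loaded.find("geometry/thickness").text  # '12.5'
```

Keeping non-shape information next to the geometry in one file is what lets the shipyard read welding and constraint data from the same record the designers edit.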
3.
The Design of Product Modelling
3.1
The Design of Modeling System
The modelling system is a background modelling program that provides dynamic modification of the 3D product. Problems can be detected and solved, project data are managed, and search results for all kinds of model analysis tools are provided, which enables designers to build an accurate product model by entering or modifying design parameters without having to learn the model building operations. Every window program and modelling application is encapsulated into a module, and the dialog management program connects the keywords that trigger the corresponding component applications through a socket. In fact, the modelling operation is carried out by the local modelling system on the server, set up through the dialog management program triggered by the client run files through the web page. After each operation is completed, the applications are automatically deleted so as not to occupy server resources. Product design involves simultaneous invocation or visits by many users with different permissions, which requires the modelling system to be stable, of high quality, and well structured internally. It is necessary to partition its functions and establish shared public library files.
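The keyword-triggered dispatch described above, in which an application is started for one operation and released afterwards, can be sketched in-process as follows; the socket transport is omitted and all names are illustrative.

```python
# In-process sketch of the dialog management program's dispatch: a keyword
# from the client selects a modelling application, which is instantiated,
# run once, and released so it does not occupy server resources. The real
# system does this over a socket; the transport is omitted here.
class CreatePlate:
    def run(self, params):
        return {"op": "create_plate", "thickness": params["thickness"]}

class ModifyPlate:
    def run(self, params):
        return {"op": "modify_plate", "thickness": params["thickness"]}

APPLICATIONS = {"create_plate": CreatePlate, "modify_plate": ModifyPlate}

def dispatch(keyword, params):
    app = APPLICATIONS[keyword]()   # instantiate on demand
    try:
        return app.run(params)
    finally:
        del app                     # release right after the operation

result = dispatch("create_plate", {"thickness": 14.0})
```

The registry pattern keeps the dialog manager independent of any single modelling application, which is what allows applications to be added as new modules.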
3.2
The Public Library of the Modeling System
The design library can invoke the same module for different user operations, with different parameter values, to complete the user's design. A library is thus the logical unit that partitions processes according to function. It can be used in multiple processes and contains many compiled binary classes, which are encapsulations of regularly used design functions with strict, standardised interfaces that prevent processes from using illegal data. Public libraries can be used directly once a DLL reference is added; they are highly independent, which maximises the reuse of functions. Although different ships have various design processes, they share similar design methods. This enables us to simplify the computer-aided ship design process into a system made up of four library divisions; the design process of various ships can then be defined as different assembly processes of the system, which avoids writing repetitive design code for the same structure (see Figure 1). In Figure 1, the type-library stores the main ship design type data, the parts-library stores the design processes of the main structural parts, the bar-library stores the standard material information, and the database manages all data information. Basic structures are defined by the designers through the type-library to create the design parameters, which are sent to the parts-library. Different design parameters can form all kinds of part models, while the material parameters of components can only be established by choosing standard materials in the bar-library. Moreover, the geometric and topologic information of the parts-library is saved in the database.

3.3
3.3 The Design of the Parts-Library
The parts-library is the core of the modeling design. It decomposes complex structural part models into basic component units, such as plates, framing, floors and longitudinals, each carrying relationship properties and data properties, so that their interconnections can be separated out. More complex part models can then be derived from these basic part types by adding characteristics and by inheritance. The data structure of the system is thus based on parts, and every basic part becomes a record in the database. These records can express different properties, design parameters and product structure (part-to-part relationships). The same parts-library file is shared by different ship types, which are assembled according to the different types in the type-library. The next step is to build the bar-library, containing bulb plates, angle bars and T-bars, and to store the standard material types and sizes in the database. Once this information has been related, users can choose a type and material and build the model accordingly. Structural design can be divided into deck structure, shell structure, characteristic structure and cabin structure design according to the ship structure. Platforms, floors and bulkheads are usually defined as main parts, whose design priority can be decided by the users. Stiffeners and holes are usually defined as sub-characteristics, which must be designed after the main parts.
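The decomposition described above, basic parts carrying design parameters and part-to-part relationships, with sub-characteristics attached to main parts, can be sketched as a simple data model. This is an illustrative sketch only; the class and field names (`Part`, `sub_parts`, the sample parameters) are our own assumptions, not the described system's actual schema.

```python
# Illustrative sketch of the parts-library data model: every basic part is a
# record of design parameters plus its part-to-part relationships
# (main part -> sub-characteristics such as stiffeners and holes).
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str                  # e.g. "bulkhead", "stiffener" (hypothetical names)
    part_type: str             # "main" or "sub" (sub-characteristic)
    parameters: dict = field(default_factory=dict)  # design parameters
    sub_parts: list = field(default_factory=list)   # dependent sub-characteristics

    def add_sub(self, part: "Part") -> "Part":
        """Sub-characteristics must be attached to (designed after) a main part."""
        assert self.part_type == "main", "only main parts carry sub-characteristics"
        self.sub_parts.append(part)
        return part

bulkhead = Part("bulkhead", "main", {"thickness_mm": 12.0, "height_m": 8.5})
stiffener = bulkhead.add_sub(Part("stiffener", "sub", {"spacing_mm": 600}))
```

In such a scheme each `Part` record would be stored as a row in the database, with the `sub_parts` links realizing the part-to-part relationships mentioned above.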
Research on Collaborative Design Support System for Ship Product Modelling
3.4 The Modification and Management of Models
A number of parts of the same type can be modified together, or related parts can be chosen for related modification, e.g. a bulkhead and the longitudinals on that bulkhead. A modification is made by the user changing design parameters, which changes the part model. In a related modification, the user only has to change a design parameter of the main part, and the related sub-parts are modified automatically; this keeps the models and the related parameters in the database synchronized and so simplifies the management of changes in the project. Through the part-information enquiry page, designers can export production information together with the main design parameters of each component, and obtain the BOM. For topologically related parts, the relationship is built through a shared property region and a sub-spreadsheet, and the properties of the related areas are designed to be unambiguous. Since ship design involves simultaneous work by multiple designers, data and information must be transferred and shared among databases so that multiple client users can operate on the same database. Data can be accessed and manipulated over the network by embedding SQL statements in the ASP pages and connecting to the database to create a DataSet; the files in the database can then be accessed efficiently using the data-organizing ability of XML. To improve browsing and data-operation speed, the DataSet is loaded when the web page opens, the database connection is dropped, and it is re-established only after parameters change.
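The related-modification rule above can be sketched as follows. This is a minimal illustration; the plain-dictionary parts and the derivation rule linking stiffener spacing to bulkhead height are hypothetical, chosen only to show the propagation pattern.

```python
# Sketch of related modification: changing a design parameter on a main part
# re-derives the parameters of its related sub-parts, so the model and the
# parameters stored in the database stay consistent.

# a main part and its related sub-parts as plain dicts (illustrative only)
bulkhead = {"params": {"height_m": 8.0}, "subs": []}
stiffener = {"params": {"spacing_mm": 560}}
bulkhead["subs"].append(stiffener)

def modify_main(part, name, value):
    """Change a main-part design parameter and re-derive related sub-parts."""
    part["params"][name] = value
    for sub in part["subs"]:
        # hypothetical derivation rule: stiffener spacing tracks bulkhead height
        sub["params"]["spacing_mm"] = part["params"]["height_m"] * 70

modify_main(bulkhead, "height_m", 10.0)
# the stiffener's spacing_mm has been updated automatically (10.0 * 70 = 700.0)
```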
3.5 The Design of Collaborative Assembly
After the individual ship parts have been built, the next step is to assemble the basic parts built by different designers through the assembly page, to check the connections between pieces and to simulate the assembly process. Through assembly simulation, problems caused by inappropriate design, e.g. excessive or insufficient clearance, or clashes between parts, can be detected in real time. Using the assembly module, shipyards can assemble the ship structure models into plane, complex and gross models according to different pre-determined orders, until the whole ship model is assembled. Assembly files are controlled by the management process, and the assembly metrics are optimized to form an assembly tree containing the different assembly series, from which a reasonable yet economical assembly plan is created. The server stores the location of each part and its saving path. When a part model is modified, the relevant assembly model is updated to reflect the change. Once the members in charge of the engine room, bow, stern and cargo holds finish their parts of the design, the data are synchronized. The ship is divided into several large structural blocks for assembly, such as section #12~#41 and section #75~#111 in the hold. As shown in Figure 2, section #75~#111 is an assembly model including transverse bulkheads, deck, openings, stiffeners, etc. This information is saved in the assembly table, and the ship model is finally assembled.
Figure 2. Hull structure of the bulk cargo ship
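The assembly tree mentioned above can be sketched as a simple nested structure, with part models as leaves, blocks as inner nodes, and the whole ship as the root; a depth-first walk then yields one feasible assembly series. The node names and the traversal are illustrative assumptions, not the system's actual assembly-table format.

```python
# Sketch of an assembly tree: leaves are part models, inner nodes are blocks
# assembled in a pre-determined order, and the root is the whole ship.

def assemble(name, children):
    return {"name": name, "children": children}

section_75_111 = assemble("section#75~#111",
                          [assemble("transverse bulkhead", []),
                           assemble("deck", []),
                           assemble("stiffener", [])])
ship = assemble("ship", [assemble("section#12~#41", []), section_75_111])

def assembly_order(node, out=None):
    """Depth-first traversal gives one feasible assembly series:
    children (parts) are assembled before the block that contains them."""
    if out is None:
        out = []
    for child in node["children"]:
        assembly_order(child, out)
    out.append(node["name"])
    return out
```

Here `assembly_order(ship)` lists every part before its enclosing block, ending with the whole ship, which is the constraint an assembly series must satisfy.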
4.
The Design of Network Structure Framework
4.1
5-Layer Transmission Structure
A Web Service is a network architecture in which operation contracts are used to expose module functionality over the network. Based on Web Services, a 5-layer transmission structure is designed in this paper to support modeling operations and data transmission over the Internet. The traditional 3-layer structure comprises the user interface, the business layer and the database, with Windows DNA used for program development. In that arrangement, business standard modules cannot be used in the same way by an ASP web application and by other client programs; many applications must reimplement the business rules because each client requires a different connection, which makes the system difficult to maintain and update. It is therefore necessary to add new layers organized by system function (see Figure 3).
Figure 3. 5-layer transmission framework: the user layer (UI, browse), the safety layer (safety mark, users' cookies), the navigation layer (access navigation, appearance objects), the business layer (parts-library, modeling design, modeling system) and the data management layer (SQL and OLEDB data-processing classes) above the database
As illustrated in Figure 3, after logging in the user passes through the safety layer for identification and is then connected to the relevant user interface through the navigation layer, while user operations are handled by the modeling layer. When the methods in the modeling layer are accessed, the system checks the access level to confirm the user's identity and sets the relevant status information. Unlike the traditional 3-layer structure, which repeats data-access and data-processing code in the business components, the new data management layer processes data operations more efficiently; it is the most important layer for reducing the amount of work, since it affects all operations and accesses. The appearance objects in the navigation layer are developed as ASP client script; they are abstract representations of business objects and are used to invoke the various changes at the client end. The logic and presentation of these layers are separated by code-behind files, so that all services can be executed over the network.
4.2 The Design of the Data Management Layer
Since the whole 3-D model adopts parametric design, both designing and editing are done through design parameters set by designers at the client. The modeling program on the server then operates on the components and edits the design parameters in the database. The server must also serve searching and browsing requests from outside the design institute, for example from shipyards and shipowners. Simply embedding SQL statements in the program generates errors when the same parameters are accessed and edited concurrently, and keeping many connections open makes database access slow and unstable. Fast and safe access to large amounts of data must therefore be ensured. An independent data management layer is developed that can be used by all applications. It reduces the repeated code created by data access and consequently improves the speed and safety of data transmission. The data-access module is built on ADO.NET; IP addresses are set in the program configuration files to connect to databases on different machines, realizing a distributed database. The data management layer also handles business and data co-generation, and OLE DB-compatible code is used to access other databases. A DataSet is used as the data container: when the server first obtains data from the database, the data are saved in variables, and when the server receives the same request again the saved data can be transmitted directly to the client. A saving period is set so that expired data are deleted and fresh data obtained when the time is up. ADO.NET objects are defined for data access, including SqlConnection, SqlCommand, SqlDataReader, XmlReader, SqlDataAdapter and DataSet. Connection strings are defined to manage the connections; their parameters include the server name, the database in use, the username and the password. Finally, module names, processing flags and general exception information are defined, so that they can be passed with other information to the handling code when an exception occurs.
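The cached-DataSet-with-saving-period idea can be sketched as a small timed cache. This is a language-neutral illustration of the pattern, not ADO.NET code; the `TimedCache` name and the stand-in fetch function are our own assumptions.

```python
# Sketch of the timed data cache described above: the first request fetches
# from the database and saves the result; identical requests within the
# saving period are served from memory; expired entries are refetched.
import time

class TimedCache:
    def __init__(self, ttl_seconds, fetch):
        self.ttl = ttl_seconds
        self.fetch = fetch          # e.g. a function that runs a SQL query
        self.store = {}             # key -> (timestamp, value)

    def get(self, key):
        now = time.monotonic()
        hit = self.store.get(key)
        if hit and now - hit[0] < self.ttl:
            return hit[1]           # served without touching the database
        value = self.fetch(key)     # saving period elapsed (or first request)
        self.store[key] = (now, value)
        return value

calls = []
cache = TimedCache(60.0, lambda k: calls.append(k) or f"rows for {k}")
cache.get("SELECT * FROM parts")    # goes to the "database"
cache.get("SELECT * FROM parts")    # served from the cache; no second fetch
```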
4.3 Safety Settings for the System
Since users at different levels in various institutes are involved in the system, defining each type of user is very important. Factors such as whether a user is entitled to see production plans and costs, and whether all information in the model should be visible, must be considered. Within a team, one member should not be able to change the model parameters of another. The system must confirm each user's identity to find out whether the user has the right to access the applications and which operations the user may execute. User log-in is therefore implemented as a role-authorization-based ASP.NET authentication system using Windows identification. A user-defined safety layer is developed which sets the access level in IIS and generates the Windows identification. When a request is first sent to IIS, ASP.NET receives it and checks the user's credentials by asking for the authentication cookie. If the request fails authentication, the system uses HTTP client redirection to send the request to the log-in page. Once the application (login.aspx) authenticates the request, it issues a cookie containing the ticket used to re-establish the identity. Subsequent resource requests include this cookie in their headers so that the ASP.NET engine can validate and authorize them. The user's identity is checked within the application process, and access to resources is granted according to that identity. Authorization is supported by generating principal information, which allows the principal to be obtained from the relevant identity and attached to the current thread. The safety layer is realized by the 'UserSafty' module, which is the only component that reads credentials from the database.
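The cookie-based identification and role-authorization flow can be sketched generically as a signed ticket that the server issues at log-in and verifies on later requests. This is an illustration of the pattern only, not ASP.NET's actual ticket format; the key, user names and role names are hypothetical.

```python
# Sketch of the authentication-cookie flow: after login the server issues a
# signed ticket; later requests present it, and the server verifies the
# signature and checks the user's role before granting access.
import hmac, hashlib

SERVER_KEY = b"demo-key"                     # hypothetical server secret

def issue_ticket(user, role):
    payload = f"{user}|{role}".encode()
    sig = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "|" + sig      # stored in the client's cookie

def check_ticket(ticket, required_role):
    user, role, sig = ticket.rsplit("|", 2)
    payload = f"{user}|{role}".encode()
    good = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return False                         # tampered cookie: force re-login
    return role == required_role             # role-based authorization

cookie = issue_ticket("designer01", "team_member")
```

Because the signature covers the user and role, a client cannot promote itself to another role by editing the cookie, which mirrors why the ticket (rather than a plain cookie value) is used to regain identity.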
5. The Management of Multi-User Visiting
In collaborative design, the most common problem among members of different teams is design conflict. For example, if user A deletes a component and user B then tries to edit it, an error occurs because the system can no longer provide the model. In addition, to avoid a breakdown of the whole system, the number of visitors and the length of each visit must be limited. Effective solutions are therefore needed for the common problems of collaborative design: access conflicts, access time and user conflict management. A dedicated Windows server program restarts automatically when the server is switched on, without user intervention. Console applications are added in which simple diagnostics and log information are displayed on screen, and a server channel is established to detect client requests and generate remote objects automatically. The .NET remoting framework uses a thread pool to detect client requests, and multi-threading is applied to the design of the client sharing module. For the user's modeling operations, a non-lock mechanism is used to improve on the traditional token-transmission strategy. Locking means monopolizing the access right to an object used by another thread, so that its value can be modified safely without being changed by the other thread. The non-lock mechanism in this paper instead allows multiple users to operate on a model or data file at the same time. A remote object cannot actually be invoked synchronously; only its transparent proxy can be called asynchronously. Each user operation is recorded, which ensures that only one operation is applied to a model at a time, while the proxy waits in another thread for the remote object to respond. When the current operation completes, the next operation starts; this is the token-transmission strategy.
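The non-lock token strategy above amounts to a per-model operation queue: submissions are accepted concurrently, but they execute one at a time in arrival order. The sketch below illustrates that idea; the `ModelChannel` class and its method names are our own, not the system's API.

```python
# Sketch of the non-lock token strategy: operations on one model are queued
# and executed one at a time in arrival order, so several users can submit
# edits concurrently without locking the model.
from collections import deque

class ModelChannel:
    def __init__(self):
        self.queue = deque()        # pending (user, operation) requests
        self.log = []               # operations actually applied, in order

    def submit(self, user, operation):
        """Record the request; the caller returns immediately (asynchronous)."""
        self.queue.append((user, operation))

    def run_next(self):
        """The 'token' passes to the next queued operation
        when the current one completes."""
        if self.queue:
            user, op = self.queue.popleft()
            self.log.append(f"{user}:{op}")

channel = ModelChannel()
channel.submit("A", "edit bulkhead")
channel.submit("B", "edit bulkhead")   # accepted immediately, but waits its turn
channel.run_next()                     # A's operation holds the token
channel.run_next()                     # then B's
```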
To improve system efficiency, the traditional single-thread method is abandoned in favour of multi-threading, so that users can run several tasks at the same time; asynchronous operations then cost roughly the same time as if they were completed synchronously. Delegates are treated as the type-safe construct that carries support for asynchronous operations; they can be divided into basic delegates and multicast delegates. When many users operate on the same model, callbacks are used as the preferred form of asynchronous invocation, eliminating the extra cost of continuous polling. Alternatively, the asynchronous invocation is made in another thread, avoiding any need to read information from, or communicate with, another object, since a user's asynchronous operations are otherwise independent. Because of this independence, the user need not worry about synchronization or co-generation problems. If a client forgets to release an object, or the client's network connection is lost, the server object would remain in a useless state and cause blockage. The system therefore allows an object to survive only for a limited time: a single-call object is destroyed when each method call finishes, and a remote object is destroyed automatically when it has existed for 5 minutes and then been inactive for 2 minutes. The default lifetime can be changed and is automatically applied to all remote objects through the ILease interface in System.Runtime.Remoting.Lifetime. By restricting object lifetimes, allocating asynchronous multi-thread calls and applying remote proxies, the system solves the user-management problem in collaborative design effectively. As illustrated in Figure 4, users browse and assemble the model via the Internet, with VRML used to display the 3-D model.
Figure 4. Assembly page for 3-D model
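The lifetime rule above follows the lease model of .NET remoting (the ILease interface): an object starts with an initial lease, each call renews the lease, and the object is collected once the lease runs out. A minimal sketch of the renewal rule, in Python with the paper's 5-minute and 2-minute values expressed in seconds:

```python
# Sketch of the remoting lease rule: a remote object starts with an initial
# lease (5 minutes); each call extends the lease to at least the
# renew-on-call time (2 minutes) from now; when the lease expires
# the object is destroyed.
INITIAL_LEASE = 5 * 60      # seconds
RENEW_ON_CALL = 2 * 60      # seconds

class Lease:
    def __init__(self, now):
        self.expires = now + INITIAL_LEASE

    def on_call(self, now):
        # a call never shortens the lease; it extends it when needed
        self.expires = max(self.expires, now + RENEW_ON_CALL)

    def expired(self, now):
        return now >= self.expires

lease = Lease(now=0)        # would expire at t = 300
lease.on_call(now=250)      # call near the end extends the lease to t = 370
```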
6.
Conclusion
The technology described above enables a few teams to design the model of a structural product collaboratively. It has been applied by a design team of nine to a bulk cargo ship. Collaborative design is very useful for sharing resources and synchronizing information within a small group. However, if there are too many members in the design team, the system is liable to break down because of the huge resources required for large-scale models. At this stage, therefore, the system can only meet the needs of small-scale collaborative modeling.
7.
References
[1] Zhan Y-T, et al. (2007) Research and development of a digital design system for hull structures. Journal of Marine Science and Application
[2] Roh M-I, et al. (2006) Improvement of ship design practice using a 3D CAD model of a hull structure. Robotics and Computer-Integrated Manufacturing
[3] Balena F. (2002) Programming Microsoft Visual Basic .NET (Core Reference). Microsoft Press
Research on New Product Development Planning and Strategy Based on TRIZ Evolution Theory

Fuying Zhang, Xiaobin Shen, Qingping He

College of Mechanical Engineering, Tianjin University of Science & Technology, Tianjin, China 300222

Abstract: New product development planning and strategy identifies the portfolio of products to be developed. To assist technology managers in identifying core technologies, product development objectives and the right technical strategy, this paper proposes a core technology decision frame, an objective decision-making method and a technical strategy analysis method that incorporate tools such as Porter's competitive force model, the system operator and TRIZ technology evolution theory. The methods make the decisions on core technologies and the objective decision-making process operable, help enterprises focus on the right technology strategy and adopt the corresponding innovation strategy updates, and consequently speed up the maturation of core technologies. A case study illustrates the validity of the methods in new product development.

Keywords: TRIZ evolution theory, Product planning, Strategy analysis, Objects decision-making, Core technologies
1.
Introduction
As product competition becomes fiercer, the ability to innovate new products rapidly has become the primary way for a company to gain sustainable advantage. Selecting competing technologies, function parameters and structures, and detecting changes in the technological environment early, are important success factors for every technology-oriented company. Evolution theory is a fundamental branch of TRIZ [1] (the Theory of Inventive Problem Solving), which today includes a broad range of tools and rules. The laws of technological system evolution are the theoretical foundation of TRIZ; they form the core of TRIZ evolution theory, which specializes in forecasting technological system evolution and provides the critical tools for technology strategy analysis and objects decision-making in new product development. There are two kinds of technological forecasting methods in TRIZ evolution theory. Classical TRIZ includes the technical system evolution S-curve, the technology maturity determination tool and the system operator, a natural outgrowth of TRIZ research into the patterns of evolution of technological systems [2]. Directed evolution, incorporating several hundred lines of evolution, constitutes a process for identifying comprehensive sets of potential evolutionary scenarios [3]. TRIZ evolution theory does not just predict the future of a technology; it also forces the system toward its most probable future development by inventing it before it would occur naturally, which is very helpful for rapid inventive product development. In this paper, the objects decision-making procedure for new product development based on TRIZ evolution theory is first described; the technology planning and innovation strategy analysis model is then proposed; and a case study applying the proposed methods to a hydrodynamic reciprocating sealing set is illustrated.
2. The Objects Decision-Making Procedure for New Product Development Based on the TRIZ Theory of Evolution

2.1 Decision Framework of Product Core Technology
Severine [4] first referred to core technology, meaning the technology that realizes a product function or the realization of scientific principles, whether physical, chemical or geometrical. The ability to identify the right core technology proactively is vital to the long-term success of an enterprise. To help corporations identify the right core technology and speed up new product development, we suggest a decision framework for core technology based on the TRIZ system operator, Porter's five forces and TRIZ technology evolution theory, as shown in Figure 1a. In this framework, the simplified TRIZ system operator describes a space-time plane, and a third dimension is introduced to carry out the core technology decision process. From top to bottom, the first plane represents the technology competitive force; successive planes then represent a hierarchy of product function, technology and technology evolution potential. At the centre of the competitive force plane is a product technology. Note that after the focus on product technologies, aspects such as production, material and information technologies, market development and company competences also have to be considered. In more common parlance, the product function plane is the territory and the technology plane is a map, and the map is not exclusive: a function can be implemented with several technologies. The main question to be answered by the map is therefore how the potential of the different product technologies that accomplish the same primary function can be evaluated from the perspective of a technology owner. The technology evolution plane evaluates the evolution potential of all the technologies.

2.1.1 The Analysis of Technology Competitive Force
This plane integrates Porter's [5] five forces (Figure 1b) with the TRIZ system operator. The choice of product technology can be guided by Porter's five forces: analyzing the industry structure and competitive forces from aspects such as suppliers' technologies, buyers' technologies, potential rivals' technologies and alternative technologies, at the level of the system, sub-systems and super-systems.

2.1.2 The Analysis of Product Functions
A principally important feature to deal with here is a traditional coupling in TRIZ, an object plus its function, although a 9-screen diagram contains only the objects. In the second plane, we examine all the useful functions of the present and future systems; of particular importance is the forecasting of new functions. Evidently, the functions are to be realized by material elements, the sub-systems, so one can find the set of sub-systems that will compose a new system in the future. The same function analysis applies to the super-system.
Figure 1. (a) Framework of core technologies decision; (b) Porter's five forces model
2.1.3
The Analysis of Technologies
We examine all the technologies that implement the functions of the present and future systems. It is important to forecast the evolutionary trends of the different technologies. Obviously, the technologies are effected by the principles of the sub-systems; we can then find the set of sub-systems that will construct a new core technology in the future. The same examination applies to the super-systems. This analysis makes it possible to outline the competitive technology under consideration.

2.1.4 Forecast and Evaluate Selected Technology
In the last plane, we examine the evolutionary trends and evaluate the evolution potential of all the selected technologies using Altshuller's laws of technological system evolution, at the level of system, sub-system and super-system, and at each level over three stages (past, present and future).

2.1.5 Define Product Core Technology
The technologies with high competitive potential are selected as the core technologies of the enterprise, based on the analysis and evaluation of the technologies in the planes of Figure 1.

2.2 The Objects Decision-Making Model for New Product Development
The objects decision-making process model for new product development is presented in Figure 2. The model comprises four steps: collecting data on product technology evolution, analyzing the paths of technology evolution, analyzing the objects, and determining the objects of product development. The first two steps determine the technology level, and the latter two accomplish object analysis and decision-making. The model can be used to narrow the solution-search range of the product development goals, shorten the time to market, and make the objects-determination process of new product planning operable.

2.2.1 The Objects Decision-Making of Function Parameter
To find the function parameters in new product planning, a function ideality evolutionary potential radar plot, based on the concept of ideality, is presented in Figure 2(c). Each arrowed spoke in the plot represents one of the product functions. The centre circle represents the ideal product (defined in TRIZ as the virtual product that fulfils its specific functions [1]; in practice there is no absolutely ideal product, but a product can be regarded as approaching ideality when its functional performance improves while its costs diminish). The outer circle of the plot represents the newly generated product, and the shaded area represents how far the current functions have evolved toward the ideal product. The area difference between the annulus and the shaded area is thus a measure of the function ideality evolutionary potential. The radar plot is used to discover the evolution trend toward the ideal product along multiple dimensions, enabling the designer to identify the function parameters with high competitive power.
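The area comparison described above can be made concrete: score each function in [0, 1] for how far it has evolved toward the ideal, compute the polygon area of the radar chart, and take the unshaded share of the full (ideal) polygon as the remaining potential. The scoring values below are hypothetical; the area formula is the standard one for a radar chart with equally spaced spokes (at least three).

```python
# Sketch of the ideality evolutionary potential from the radar plot: the
# un-shaded share of the ideal polygon's area is the remaining potential.
import math

def radar_area(scores):
    """Polygon area of a radar chart with n >= 3 equally spaced spokes."""
    n = len(scores)
    step = 2 * math.pi / n
    # sum of triangle areas between adjacent spokes
    return 0.5 * math.sin(step) * sum(scores[i] * scores[(i + 1) % n]
                                      for i in range(n))

def remaining_potential(scores):
    full = radar_area([1.0] * len(scores))   # the outer (ideal) polygon
    return 1.0 - radar_area(scores) / full

# hypothetical evolution scores for six seal functions (illustrative only)
scores = {"friction": 0.8, "wear": 0.7, "self-adjusting": 0.2,
          "adhesion": 0.6, "restoration": 0.3, "distortion": 0.5}
potential = remaining_potential(list(scores.values()))
```

A product whose every function has reached the ideal gives a potential of 0; a wholly unevolved one gives 1, matching the annulus-versus-shaded-area reading of the plot.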
Figure 2. The objects decision-making model for new product development
2.2.2
Finding Out the Right Structure Development Direction of Product
The evolutionary potential radar plot is the foundation for determining product structure development opportunities; it is shown in Figure 2(d). From it we can see how far along each pattern the current system has evolved, and analyzing all the evolution steps in the deficient areas may reveal development opportunities.
3. Technology Planning and Innovation Strategy Analysis for New Product Development

The technology planning process begins with the selected core technology. To focus on the right technology strategy, and to adopt the corresponding innovation strategy update for new product development, we propose a model that helps managers decide when to adopt a new basic technology or a different innovation strategy, according to the evolution S-curves of technological systems. The model is shown in Figure 3. It helps enterprise managers forecast the development potential of the core technology and focus on the right technology and innovation strategy updates to withstand rapidly changing market requirements.
Figure 3. The technology planning and innovation strategy model
Four technology and innovation strategies are used according to the evolution position of the core technology: the most ideal strategy, the strategy of focusing on updates of different technologies, the strategy of resolving conflicts, and the transformation of the innovation strategy [6].
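Positioning a technology on its evolution S-curve is the prerequisite for choosing among these strategies. The S-curve is commonly modelled as a logistic function of time; the sketch below shows such a curve and a simple stage classifier. The logistic form, the parameter values and the stage thresholds are our own illustrative assumptions, not the paper's model.

```python
# Sketch of an evolution S-curve (logistic) and a stage classifier:
# performance grows slowly in infancy, quickly during growth, then
# saturates toward maturity and decline.
import math

def s_curve(t, limit=1.0, midpoint=5.0, rate=1.0):
    """Logistic evolution curve: system performance at time t."""
    return limit / (1.0 + math.exp(-rate * (t - midpoint)))

def stage(t, midpoint=5.0, rate=1.0):
    """Map the S-curve position to a TRIZ life-cycle stage
    (thresholds are illustrative)."""
    p = s_curve(t, 1.0, midpoint, rate)
    if p < 0.1:
        return "infancy"
    if p < 0.5:
        return "growth"
    if p < 0.9:
        return "maturity"
    return "decline"
```

In the case study that follows, the hydrodynamic reciprocating seal would sit in the "growth" region of such a curve.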
4. Case Study: The Hydrodynamic Reciprocating Sealing Set

The hydrodynamic reciprocating seal is the core technology of the hydrodynamic cylinder, whose low-speed performance and dynamic and static rigidity depend largely on the capability of the hydrodynamic reciprocating seal [7]. In this paper, the hydrodynamic reciprocating seal is used as a case study to illustrate the validity of the core technology decision method and of the technology planning and innovation strategy analysis methods.
4.1 Core Technology Decision Process of the Hydrodynamic Reciprocating Seal
4.1.1
The Competitive Force Analysis of Hydrodynamic Reciprocating Seal
According to the decision framework of core technologies built in Figure 1, the competitive force plane of the hydrodynamic reciprocating seal can be constructed. The present product (a combined seal) is set as the starting point. The development paths of its super-system (the hydrodynamic cylinder) and its sub-systems (the seal pair and the seal part) are analyzed, and their future evolutionary trends are forecast from the perspective of the system. From this analysis we conclude that sealing performance is the key technology of the hydrodynamic reciprocating seal.

4.1.2 The Identification of Product Core Technology
Blocking the leakage gate is the fundamental function of the hydrodynamic reciprocating seal. This fundamental function can be decomposed into two sub-functions: blocking the leakage gate and decreasing wear. Correspondingly, these sub-functions are delivered by the technologies of the seal ring and the seal pair; the hydrodynamic cylinder is their super-system. Zero leakage and low wear are therefore the core technologies of the hydrodynamic reciprocating seal.
Figure 4. Function evolutionary potential plot of the hydrodynamic reciprocating seal (spokes: friction, wear, self-adjusting, adhesion properties, restoration, distortion; the shaded area shows how far each function has evolved toward the ideal product, and the remaining ring is the untapped function evolutionary potential)
4.2
The Development Objects of Hydrodynamic Reciprocating Seal
4.2.1
The Functional Parameter Selection of Hydrodynamic Reciprocating Seal
Figure 4 shows the function evolutionary potential plot drawn from the relevant technical parameters. Clearly, the self-adjusting and self-restoration functions of the seal are deficient in evolution, so it is necessary to develop the self-restoring seal as soon as possible.
4.2.2 Finding Out the Right Structure Development Direction of Product
Figure 5 illustrates the result of comparing the most relevant TRIZ trends with hydrodynamic reciprocating seal technology. The study concludes that it is necessary to improve the adjustability of the system, to increase the segmentation of its configuration and shape, and to highlight the use of system resources. Studies emphasizing these conclusions will accelerate the development of this technology.
Figure 5. Evolutionary potential plot of hydrodynamic reciprocating seal
4.3
Product Technology and Innovation Strategy Analysis
4.3.1
Technology Evolution Curve of Reciprocating Seal
According to the quantitative analysis of the material, configuration, controllability and friction characteristics of the reciprocating seal, its evolution S-curve is shown in Figure 6. Clearly, hydrodynamic reciprocating sealing technology is still positioned at the growth stage of its evolution curve, and its future development opportunities lie in improving the sealing, friction and wear characteristics through better configuration, material and shape.
Figure 6. Evolution S-curve of reciprocating seal
4.3.2 Product Technology and Innovation Strategy Analysis
From the evolution curve of hydrodynamic reciprocating sealing technology, we can conclude the following:
• Increasing product ideality by improving product function is still the competitive strategy for present seal product development.
• The technology strategy adopted should focus on optimizing the sealing, wear and friction characteristics.
• At present, the primary conflict that an enterprise has to overcome is the physical conflict of how to improve seal performance by increasing elasticity while reducing friction and wear by decreasing elasticity.
• Improving the process technique for new seal structures is still the main restriction on seal technology development. Therefore, the innovation strategy for the rubber-plastic combined seal and the special-section seal remains the innovation of process techniques, by improving the machining process and equipment; imitative innovation is adopted for the adjustable seal, emphasizing increased human and material resources to perfect the self-adjusting sealing technology and improve the seal's performance.
5. Conclusions
Product core technology changes an enterprise's competitive situation in manifold ways. Because of reduced product life cycles and the increasing speed at which newer products substitute for older ones, it has proven important to identify the correct core technologies and to focus on the right technology strategy. The new product development planning and strategy methods proposed in this paper can help technology managers identify technologies that possess competitive power and decide when to adopt a new basic technology or update the innovation strategy. The methods also make technology decision-making operable and improve the effectiveness of new product development. Their application in hydrodynamic reciprocating sealing development demonstrates their validity.
6. References
[1] Altschuller GS (1988) Creativity as an Exact Science (translated by Anthony Williams). Gordon & Breach, New York
[2] Stephen RL (2002) A Conceptual Design Tool for Engineers: An Amalgamation of Theory of Constraints, Theory of Inventive Problem Solving and Logic. Old Dominion University, Virginia
[3] Mann DL (2003) Better technology forecasting using systematic innovation methods. Technological Forecasting and Social Change 70:779–795
F. Zhang, X. Shen and Q. He
[4] Severine G (1999) Application of TRIZ to technology forecasting. Case study: yarn spinning technology. TRIZ Journal. Available from http://www.TRIZJournal.com/archives/2000/07/d/index.htm
[5] Porter ME (1998) Competitive Strategy: Techniques for Analyzing Industries and Competitors. The Free Press, New York
[6] Zhang FY (2004) Research on Innovative Design Information Engineering Modeling, Solving, and Key Technologies of Mechanical Products. Tianjin University, Tianjin
[7] Zhang FY, Xu YS, Liu H (2005) Seal technology study of hydrodynamic piston shaft based on TRIZ Su-field models and standard solutions. Run Hua Yu Mi Feng/Lubrication Engineering 171:57–60
ASP-based Collaborative Networked Manufacturing Service Platform for SMEs

Y. Su1, B.S. Lv2, W.H. Liao1, Y. Guo1, X.S. Chen1, H.B. Shi1
1 Nanjing University of Aeronautics and Astronautics, Nanjing, China
2 Northwestern Polytechnical University, Xi'an, China
Abstract: In order to enhance the core competitiveness of small to medium sized enterprises, an ASP-based Collaborative Networked Manufacturing Service Platform (CNMSP) is proposed to promote resource sharing and raise the level of collaboration between enterprises. In this paper, the structure of CNMSP is introduced briefly. A collaborative workflow based on mixed B/S and C/S modes, which aims at implementing the collaborative process smoothly and serving the product lifecycle successfully, is described in detail. Resource is highlighted as one important feature supporting CNMSP, and a resource estimation model and relevant award measures to encourage distributed resource sharing are established. Finally, a Construction Machinery Networked Manufacturing Platform built upon the proposed structure is presented.

Keywords: collaborative networked manufacturing, application service provider (ASP), small to medium sized enterprise (SME), resource sharing
1. Introduction
Small to medium sized enterprises (SMEs) play a major role in China's economy [1, 2]. Reports show that SMEs in China are not keeping up with new information technology in their manufacturing operations. Most of them have neither sufficient funds to buy advanced software nor the technical capability to utilize it. Considering these characteristics of SMEs, a platform is required that integrates idle or distributed resources and provides services (including resource services, technology services, software services, and design and manufacturing ability services) at low cost and high quality. Networked manufacturing carries on the enterprises' activities covering the whole product lifecycle by means of advanced network, production and management technologies, and through cooperation and resource sharing between enterprises, thereby improving the enterprises' core competencies [3, 4]. In order to enhance competitiveness and promote collaboration between enterprises, an easier and cheaper operation mode must be found to implement networked manufacturing. Currently, networked manufacturing systems based on ASP platforms have become a new research trend [5, 6, 7].
An application service provider (ASP) is a third-party service organization whose main business is providing software-based services to customers over a wide area network in return for payment [8]. ASPs' core value propositions are to lower total cost of ownership, make monthly fees predictable, reduce time-to-market, provide access to market-leading applications, and allow businesses to focus on their core competencies [9]. ASP stresses the roles of collaboration and interaction between a provider and a consumer as key features, and mainly targets SMEs by providing applications that these firms could normally not afford [10]. Therefore, focusing on the ASP role of engaging SMEs, an ASP-based collaborative networked manufacturing service platform is proposed. The paper is organized as follows. In Section 2, the structure of the ASP-based CNMSP is presented. In Section 3, the platform workflow is analysed. In Section 4, the role-based dynamic resource collection method is studied in detail. In Section 5, a Construction Machinery Networked Manufacturing Platform built upon the above research is presented as an illustration. Finally, we present our conclusions and suggest avenues for future research.
2. Structure of ASP-based CNMSP
The structure of the ASP-based CNMSP includes four service layers (the resource support layer, management layer, service layer, and user layer), which cover the whole business cycle of networked manufacturing, from design and manufacture to management and marketing (see Figure 1). It mainly serves as a professional software service centre, an information service centre, a manufacturing resource sharing centre, and a collaborative environment between enterprises.

The resource support layer, as the foundation of CNMSP, provides data and resources for aiding networked manufacturing and system running. It includes the basic database, the sharing database and the private database. The basic database keeps the whole CNMSP running and plays an indispensable role in it. The sharing database is available to all enterprises in the industry. Nevertheless, product data, patent knowledge, special technology, and product planning data (which are owned by some enterprises and accessible only to authorized enterprises or customers) are deposited in the private database under safeguard.

The management layer is composed of the resource management module, authorization management module, collaboration management module, system maintenance module, search module, SMEs management module, data transformation module, data security module, etc. It is responsible for managing the integrated service platform and ensuring that the whole CNMSP runs successfully.

The content of the service layer, the core of CNMSP, is designed for three purposes. The first is to supply information, tool and technology services, including not only software such as CAD, CAM, SCM and CRM but also product design/manufacture information. The second is to provide all distributed design groups and manufactories with a public collaborative product development platform. The third is to offer customers personalized and diversified products quickly and thus strengthen enterprise competitiveness in the global market.
For those three purposes, the service layer is divided into three main subsystems, i.e. the resource sharing system, the resource publishing system, and the collaborative networked manufacturing system.
Figure 1. Structure of ASP-based CNMSP
The value of the user layer is to enable SMEs and personal customers to access all the tools, resources and function services provided by CNMSP. The main user interfaces are as follows: information upload, tools download, search, collaborative design, and so on. Through the portal and pre/post processor, the dataflow and workflow transfer among allied SMEs. In addition, the ASP-based CNMSP gets technology support from scientific institutions, colleges and universities, and is also managed by the state controlling company, banks, and other administrative organs. Thus the whole ASP-based CNMSP can act as a high-tech software and technology service centre and supply customers and SMEs with stable service.
3. Collaborative Workflow
B/S (Browser/Server) and C/S (Client/Server) are two typical computing modes applied in networked application systems. As far as ease of use is concerned, the B/S mode is superior to the C/S mode. However, modern products are usually complex and their 3D model files are large, so collaborative design activities based on the B/S mode alone are not practical under current bandwidth and data-flow technologies. Thus the platform adopts mixed B/S and C/S modes: the collaborative design module is based on the C/S mode and the other modules on the B/S mode.

Figure 2 shows the whole collaborative process of the ASP-based CNMSP. Customers log on to CNMSP through a Web browser and submit requirement information. The analyser on CNMSP parses it into XML documents. The search engine then makes a first search for a counterpart case in the resource support layer. If certain parts cannot satisfy customers' requirements well or do not exist, manufactories publish information for improved or innovative design in the browser. Distributed design groups who accept design tasks apply for authorization to use the collaborative design module and carry out collaborative design on the client. After the collaborative design is accomplished, the authorization is shut off. Parts from the distributed design groups are fed back to CNMSP for collaborative assembly. If no counterpart is retrieved, the analyser starts up and analyses the exported XML model documents again. For assembly products, each child node in the XML model documents is captured for the next search. If there is no counterpart or similar counterpart and the child node can be divided further, the same search process continues until a counterpart or similar part is obtained, or the node cannot be divided any more. When a similar product is found, the approach of partial improving design is adopted. If neither a counterpart nor a similar part is found, innovative design is appointed. When the customer is content with the design results, the task-distribution module sends a product order to remote distributed manufacturing enterprises for production.
Figure 2. Requirement-driven Collaboration Workflow
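As an illustration, the recursive search over the product-structure tree described above can be sketched as follows. The helper names and the dictionary case base are hypothetical stand-ins for the platform's search engine and resource support layer (the real implementation works on XML documents):

```python
# Sketch of the requirement-driven search; names are illustrative only.
def find_counterpart(node, case_base):
    """Return a counterpart case for this node, or None if absent."""
    return case_base.get(node["name"])

def search(node, case_base):
    """Counterpart found -> reuse it; none found and the node has child
    nodes -> search each child in turn; an indivisible node with no
    match is appointed for innovative design."""
    case = find_counterpart(node, case_base)
    if case is not None:
        return {"part": node["name"], "action": "reuse", "case": case}
    children = node.get("children", [])
    if children:
        return [search(child, case_base) for child in children]
    return {"part": node["name"], "action": "innovative design"}

# Toy requirement: a pump assembly whose seal already has a stored case.
case_base = {"seal": "case-017"}
pump = {"name": "pump",
        "children": [{"name": "seal"}, {"name": "housing"}]}
result = search(pump, case_base)
```

In this toy run the seal is reused from the case base while the housing, having no counterpart and no further subdivision, is routed to innovative design.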
4. Role-based Resource Dynamic Collection
Regardless of which category is focused on and which methods or logical structures are adopted, the foundation and core element determining the failure or success of networked manufacturing is the design and manufacturing resource [11]. In this paper, a role-based dynamic resource collecting approach is proposed.

According to their roles in operating the ASP-based CNMSP, participants can be divided into three roles: member, non-member, and administrator. All roles conform to a uniform priority rule and operation mechanism to publish resources. Anyone who is neither an administrator nor a member must register to attain authorization to enter the ASP-based CNMSP. Each kind of resource is updated dynamically by the different roles. The platform administrator has the highest right to operate the whole resource base, including constructing the initial resource database, integrating distributed resources, grouping and reconfiguring resources, etc. Typical enterprises that partner with the ASP in constructing and completing the resource base are advanced members, whose rights are higher than those of general members. Any non-member must register and pay a certain fee for special resources, or to apply for resource space to deposit private resources in the database. Any freely uploaded resource that the estimation model judges useful will be taken in. If a useful resource is estimated to be a value-added resource, a reward is given to the resource provider. Useless resources are rejected.

4.1 Estimation Model

Suppose the estimation index set is E = (e1, e2, ..., ek) and w = (ω1, ω2, ..., ωk) is the relevant weight aggregate of E. Let M be the mark matrix of ei (i = 1, 2, ..., k), P be the matrix of the numbers of estimation experts, and A be the estimation matrix; then

A = w · (M × P^T)    (1)

Log files record the frequency τ with which each role submits resources and the quantity of resource provided by each role at a time. The content of the resources and other information are stored in a temporary information area. Domain experts and the task-assigner mark each item. Generally, the total amount of resource provided by each role grows incrementally with time t. Let g(t, τ) be the quantity of resource per second at a time, and Gi (i = 1, 2, ..., k) be the total quantity of the ith kind of resource; then

Gi = ∫∫ g(t, τ) dt dτ    (2)

Suppose ui (i = 1, 2, ..., k) is the value of a unit mark and N is the number of experts who attend the estimation; ui is appointed by the task-assigner. Let Val be the final estimation result; then

Val = (u1G1, u2G2, ..., ukGk) · A / N    (3)
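Equations (1)-(3) can be sketched numerically as below. The paper leaves the exact shapes of M and P implicit, so the shapes, weights and sample numbers here are illustrative assumptions only, with the double integral of (2) taken as a discrete double sum:

```python
import numpy as np

# Illustrative sketch of Eqs. (1)-(3) with small made-up matrices.
k, N = 3, 5                       # k estimation indices, N experts
w = np.array([0.5, 0.3, 0.2])     # weight aggregate of E
M = np.array([[8., 7., 9.],       # mark matrix: M[i, j] is the mark
              [6., 8., 7.],       # for index e_i from expert group j
              [9., 6., 8.]])
P = np.array([[2., 1., 2.],       # P[i, j]: number of experts behind
              [1., 2., 1.],       # each mark
              [2., 2., 1.]])

A = w @ (M @ P.T)                 # estimation matrix, Eq. (1)

# Eq. (2) as a discrete double sum: g(t, tau) sampled on a grid.
g = np.ones((4, 4))               # resource-quantity-per-second samples
G = np.full(k, g.sum())           # total quantity of each resource kind

u = np.full(k, 1.0)               # value of a unit mark (task-assigner)
Val = (u * G) @ A / N             # final estimation result, Eq. (3)
```

With these shapes, A comes out as a k-vector and Val as a single scalar used by the reward measures of Section 4.2.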
4.2 Reward Measures

In order to encourage resource-holders to publish resources and attract more customers to provide helpful information, three reward measures are set up according to the value of Val calculated by the resource estimation model: payment, awarded marks, and widened authorization.
1. Payment. Cash is paid or goods are mailed to the resource supplier directly, according to the exchange rate for Val assigned by the task distributor.
2. Awarded marks. Every published resource is evaluated by the estimation module. The calculated value is added to Val and deposited as awarded marks. The resource supplier can exchange the awarded marks for services provided by the ASP at any time, until the awarded marks fall to zero.
3. Widened authorization. When Val is large enough and exceeds a certain value appointed by the administrator, the resource supplier has the following choices:
- to become a member, if his/her present status is non-member;
- to become an advanced member, if he/she has been a general member;
- to prolong the period of validity for operating the ASP-based CNMSP.
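The three measures keyed to Val can be sketched as one dispatch function. The exchange rate and authorization threshold below are illustrative assumptions, not values from the platform:

```python
# Sketch of the three reward measures driven by Val; the constants are
# illustrative assumptions set here for the example.
CASH_RATE = 0.5           # cash per unit of Val (task distributor's rate)
AUTH_THRESHOLD = 100.0    # Val above which authorization is widened

def reward(val, status, marks=0.0):
    """Apply payment, awarded marks and widened authorization for one
    evaluated resource submission."""
    payment = val * CASH_RATE           # 1. payment
    marks += val                        # 2. awarded marks (spendable later)
    new_status = status                 # 3. widened authorization
    if val > AUTH_THRESHOLD:
        new_status = {"non-member": "member",
                      "member": "advanced member"}.get(status, status)
    return payment, marks, new_status

payment, marks, status = reward(120.0, "non-member")
```

Here a non-member whose submission scores Val = 120 receives cash, banks 120 award marks, and is upgraded to member because Val exceeds the threshold.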
5. Implementation and Case Study
Based on the aforementioned studies and supported by the China National 863 Projects, the Construction Machinery Networked Manufacturing Platform was developed using object-oriented technology, the modular design method, Internet technology, the HOOPS platform and the SQL Server 2000 system. All modules are implemented on the Visual Studio .NET framework, making use of ASP.NET technologies with Visual Basic .NET, JavaScript, and the C language. Moreover, the VC programming language and the ACIS kernel are used in the implementation of 3D geometric modelling. B/S-based modules adopt HTTP and XML as network transport protocols, while the C/S-based three-dimensional (3D) geometric modelling module uses TCP/IP. The collaborative design module is encapsulated into the system platform through COM+ technology. Collaborative design of 3D models is driven by the ACIS geometric kernel. The interactive virtual collaborative environment is developed upon the HOOPS platform and the Microsoft NetMeeting platform using C and JavaScript. To resolve the problems of data confidentiality, data safety, and intellectual and industrial property management, three data encryption algorithms are adopted during system development, i.e. message-digest algorithm 5 (MD5), the Data Encryption Standard (DES), and RSA.

Figure 3 shows several typical interfaces of the Construction Machinery Networked Manufacturing Platform developed on the basis of the above studies. Figure 3a is a typical interface of the resource estimation system for collecting customisation resources for road-surface machinery. It is based on the proposed role-based dynamic resource collecting method with a certain incentive mechanism. Resources published freely by users are filtered by the resource estimation system and only useful resources are accepted. Through dynamic resource collection, the newest and most abundant resources become more accessible to SMEs and customers. Figure 3b shows the collaborative networked manufacturing platform. In Figure 3b, the left side offers portals to the different collaborative modules, the centre is a public collaborative environment, and the right side provides collaborative communication functions. At the centre of Figure 3b, the upper side is the public interactive virtual scene developed on the HOOPS platform, and the lower side displays information about parts, including the product structure tree, supplier information, performance parameters of parts, and so on. Information about parts can be modified by authorized users. The product structure tree can be displayed on the right side in the form of an extension bar. The collaborative communication functions consist of a video meeting function, an intercommunication function, an attitude bulletin function, and a negotiation function. All users must apply for authorization before participating in collaborative activities. Users of different grades enjoy different services according to their authorization and rental.
a. Resource estimation system
b. Collaborative Platform
Figure 3. Construction Machinery Networked Manufacturing Platform
The Construction Machinery Networked Manufacturing Service Platform has been applied in several construction machinery SMEs in Jiangsu province, P.R. China, such as Jiangsu Zhenjiang Huachen Huatong Road Machinery Co., Ltd. and Xuzhou Construction Machinery Group Inc. Feedback from these enterprises shows that the platform can greatly shorten the time of product design/manufacture and lower product costs. For example, after adopting the service platform, Huachen Huatong Road Machinery Co., Ltd. spent only sixteen days designing and manufacturing the mainframe of a road paver, about half a month less than before.
6. Conclusions and Future Research
This paper has proposed an ASP-based CNMSP for a distributed environment, with a specific study focused on construction machinery. The collaborative workflow of the platform has been described. In order to acquire the newest and most useful distributed resources in time, distributed resource collecting has been studied, and a resource estimation model and certain award measures have been built. A Construction Machinery Oriented Networked Manufacturing Service Platform has been developed and validates the feasibility of the studies. The above research will help the ASP business mode diffuse effectively in networked manufacturing and help SMEs improve their core competence. Following this research, conflict resolution in the collaborative process, heterogeneous data integration, and the rental mechanism of services need further research.
7. References
[1] Wang ZQ, Li XN, Jiang CY (2003) Networked manufacturing for the high technology industry region centred on Shaanxi Province. Computer Integrated Manufacturing Systems 9(8):710–715
[2] Zhao HJ, Ju WJ, Wang SY, Yin CF (2003) Software resource sharing and its application in networked manufacturing systems. Computer Integrated Manufacturing Systems 9(7):608–612
[3] Gu XJ, Qi GN, Chen ZC (2001) The Strategy and Methodologies for Networked Manufacturing. Higher Education Publishing House, Beijing
[4] Fan YS (2002) Networked Manufacturing and Manufacturing Network. Zhejiang University, Hangzhou (in Chinese)
[5] Pan XH, Jia ZY (2005) Research and implementation of a networked manufacturing platform based on ASP. Manage Technique 9:99–101
[6] Xie QS (2004) Network manufacturing based on the ASP model. Machine and Electron 1:1–5
[7] Xu LY, Li AP, Zhang WM (2004) Networked manufacturing based on ASP and related technologies. China Mechanical Engineering 15(19):1755–1759
[8] Kern T, Lacity M, Willcocks L (2002) Netsourcing: Renting Business Applications and Services Over a Network. Prentice-Hall, New York
[9] Jaruzelski B, Ribeiro F, Lake R (2000) ASP 101: Understanding the Application Service Provider Model. Booz Allen & Hamilton
[10] Lockett NJ, Brown DH (2005) An SME perspective of vertical application service providers. International Journal of Enterprise Information Systems 1(2):37–55
[11] Breslin J, McGann J. The Business Knowledge Repository. Quorum Books
Virtual Part Design and Modelling for Product Design

Bo Yang, Xiangbo Ze, Luning Liu
College of Mechanical Engineering, Jinan University, Jinan, P.R. China

Abstract: This paper describes our initial efforts to deploy a digital library to support computer-aided, web-based product growth design. Firstly, aiming at providing an efficient method for sorting and retrieving parts with complicated structures, the product growth design model as well as the part gene model and body gene model are developed, based on the similarity between product design and genetic engineering. The concepts of part gene and body gene from genetic engineering are used to model mechanical part information. Secondly, methods to calculate the degree of similarity of different conceptual part structures are discussed, and the encoding method of the body gene and the method to design the parameters of each genetic-unit bounding box are given. Then, aiming at making good use of the rich previous design knowledge available on the Internet, a two-stage searching mechanism is given to obtain suitable part resources on the Internet, and a method to establish a virtual library for product design is proposed, in which a fuzzy searching approach based on the coding information of mechanical parts is used and the analytic hierarchy process is applied to avoid subjective factors in deciding the weights of part structure features. Through theoretical study and hard development work, interesting and useful results have been obtained.

Keywords: product growth design, design reuse, part gene library
1. Introduction
In engineering, it is conservatively estimated that more than 75% of design activity comprises case-based design, i.e. reuse of previous design knowledge to address a new design problem. At the same time, concurrent engineering, virtual enterprises, collaborative design and networked manufacturing are the predominant schemas of design and manufacturing in the 21st century. All of these place higher demands on techniques for sharing design information. Web-based design libraries have become a bridge for sharing design information within and among enterprises; they are also vital to engineers, who search through vast amounts of corporate legacy data and navigate on-line catalogues to retrieve precisely the right components for assembly into new products. Many countries have embarked on research into web-based design libraries, such as the CIREP [1] project of Europe, the ECCI [2] project of America and the JEMIMA [3] project of Japan. Although many plans and engineering projects have focused on the problem of design resource sharing, they concentrate particularly on electronic parts catalogues [4]. Due to the complexity of design work, research on practice modes, construction methods, information representation, information interfaces, etc. is still at an initial stage. The searching mechanism, together with a reasonable information description mechanism to standardize the description of parts in the library, is vital for retrieving precisely the right components with high efficiency; these are the most important key techniques for building a web-based design library. Based on our previous study of product growth design [5, 6], a modelling process for Internet-based virtual design libraries is put forward. Modelling that simplifies the feature units of parts is also presented, giving an abstract description using the body gene and gene unit; these genes are also used to build up a two-stage indexing model. Based on the parts retrieved, the virtual assembly process and product manufacturability evaluation can be realized at the product concept design stage, which lays the foundation for top-down, assembly-oriented and lifecycle design.
2. Parts Structure Model
To share parts information, the first step is to establish qualitative description models of parts and their characteristic data.

2.1 Product Gene Model
Through research on the design process of a product, some important similarities between product design and biological growth with regard to structure, process and evolutionary stage have emerged. In addition, the nature of the structure design process is such that incomplete design information in the initial stage is explored, enriched and extended as the design proceeds. The way the original information is transferred and increased is a good representation and intuitive simulation of the inheritance and evolution of biological growth. Thus, genetic engineering methodology and a biological growth mechanism can be integrated into the modelling of products. On the other hand, just as there are similarities among different living creatures, there are similarities among the structures of different parts; these similarities can be described by some typical parts through standardization, modularization and serialization, and then, with corresponding modifications, these typical parts can be reused in new products. So a biologically based four-level product genetic model is proposed, which uses the concepts of body gene, gene unit and gene atom (Figure 1).
Figure 1. Product genetic model (product gene, body genes of parts, gene units, gene atoms)
2.2 Body Gene and Gene Unit in Part Gene Model
Although there are many kinds of parts for different mechanical products, regarding functionality, machining processes and machining tools there are quite a few local similarities in the structure of different parts; thus, the common set of basic shapes which can best represent the structure of a mechanical product is in most cases fixed. There are usually some functional regions on a part structure; these regions are constituted of specific shape features, which correspond to sub-functionalities. Considering a part structure in this way, the concept of part gene can be given, which denotes the elementary units of a part. In order to search and retrieve similar parts effectively in a library resource, and through the analogy with the biological gene, a part gene can be divided hierarchically into two levels, body gene and gene unit.

Definition 1. Gene unit G: a gene unit is the most basic unit for representing the structure of a part, providing information on the design, process and machining of the part. Suppose Gi is used to represent a gene unit and Pai is a sub-set of part A. If Gi ∈ Pai, then Gi is one of the gene units of part A. Because of the generality of a gene unit, Gi can also be used to construct other parts.

Definition 2. Body gene: the overall set of functionality features of a part is called its body gene. It describes the general classification of a part and is built up from the gene units G1, G2, …, Gn ∈ G according to some topological relationship. Thus, the body gene is an integrated set of information which specifies a product's structures, functions and mechanisms to "grow" automatically; under suitable conditions the body gene can generate specific structures of a part. In general, the body gene describes the main and overall features of the part, while the gene units describe its constituent features.

2.3 Gene Atoms Based Part Coding Method
A part is constituted of gene units, and all information on specific gene units and the relationships among them together form the body gene. Based on a specific coding method, each design scheme can be encoded into a gene string; furthermore, the body gene encodings of part structures with similar functionality form gene populations, which can be used as design samples and enable effective part classification and part searching. Through deep analysis of different kinds of part features, we found that there is something common in part body genes, and we therefore bring forward the concept of the gene atom.
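Searching a gene population of encoded parts can be sketched as a weighted, position-by-position comparison of code strings. In the paper the position weights come from the analytic hierarchy process, so the weights, codes and threshold below are illustrative assumptions only:

```python
# Sketch of fuzzy retrieval over a gene population of 6-digit body-gene
# codes; all numbers here are illustrative stand-ins.
WEIGHTS = [0.30, 0.25, 0.15, 0.15, 0.10, 0.05]   # bit 1 ... bit 6

def similarity(code_a, code_b):
    """Weighted share of matching gene atoms between two body genes."""
    return sum(w for w, a, b in zip(WEIGHTS, code_a, code_b) if a == b)

def retrieve(query, population, threshold=0.5):
    """Return (score, code) pairs above the threshold, best first."""
    hits = [(similarity(query, c), c) for c in population]
    return sorted([h for h in hits if h[0] >= threshold], reverse=True)

results = retrieve("112000", ["112034", "119999", "555555"])
```

Weighting the leading bits most heavily reflects that the rough and detailed part classes dominate similarity, while the ancillary atoms only refine the ranking.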
Definition 3. Gene atom: a bit or position in the encoded body gene is defined as a gene atom, which is related to some functionality or shape feature of the part.

As shown in Figure 2 and Figure 3, by extracting the basic part shapes, and referring to the Opitz chain-type classification coding system proposed by H. Opitz at the end of the 1960s at Aachen Technical University, Germany, a coding method for the body gene based on group technology is put forward. Each body gene is composed of 6 gene atoms, which can be further classified into main gene atoms and ancillary gene atoms. The main gene atoms are represented by code positions 1 and 2, which constitute the main shape structure and main functional surface. The ancillary gene atoms represent such elements of the inner and outer shape as the presence of screw threads, functional cones, functional slots, multiple keys in the mating part of the flat surface, gears or cone gears in the ancillary holes, and so on. Moreover, additional codes can be added to the gene atoms, mainly to make local modifications to the main features such as chamfers, key slots, relief grooves, and central holes [7]. In Figure 3, the numbering of Figure 2 is used as the value of the characteristic character at code position 2, while the values of the characteristic characters at code positions 1-5 in the Opitz system are used for code positions 1 and 3-5 in Figure 3; each code position is described by the 10 digits from 0 to 9. Evidently, the main structural features of a conceptual part can be roughly represented by the part body gene coding.

Figure 2. Main gene atoms and their coding (axially symmetrical parts: axis and shaft parts, wheel-type parts; rectangular parts: plate and shell type parts, profile parts; thread and its connection; combined parts; parts with specific function)

Figure 3. Auxiliary gene atoms and their coding (bit 1: rough classification of the part; bit 2: detailed classification of the part; bit 3: outside shape and its related elements; bit 4: inside shape and its related elements; bit 5: plane machining; bit 6: auxiliary holes and gear teeth)
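The 6-atom code described above can be sketched as a small encoder. The category tables are illustrative stand-ins for the full Figure 2/Figure 3 tables, not the paper's actual code values:

```python
# Sketch of composing a body-gene code from six gene atoms, in the
# spirit of the Opitz-style scheme; tables here are illustrative only.
ROUGH_CLASS = {"axially symmetrical": 0, "rectangular": 1, "profile": 2}
DETAIL_CLASS = {"axis/shaft": 1, "wheel": 2, "plate/shell": 3, "combined": 4}

def encode_body_gene(rough, detail, outside, inside, plane, auxiliary):
    """Compose the 6-atom code string: bit 1 rough class, bit 2 detailed
    class, then outside shape, inside shape, plane machining, and
    auxiliary holes/gear teeth (each atom a digit 0-9)."""
    atoms = (ROUGH_CLASS[rough], DETAIL_CLASS[detail],
             outside, inside, plane, auxiliary)
    assert all(0 <= a <= 9 for a in atoms)
    return "".join(str(a) for a in atoms)

code = encode_body_gene("axially symmetrical", "axis/shaft", 3, 0, 2, 0)
```

Because every design scheme reduces to such a fixed-length string, whole gene populations can be stored and compared cheaply during retrieval.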
2.4 Part Gene Model
A gene unit is obtained through the analysis of typical elements and typical structures of the part. Conceptual design is actually a process in which gene units are first combined according to some functionality requirement, and the part conceptual model is then generated by editing and modifying the gene units. After the conceptual design, a skeleton for the product assembly structure represented by gene units should be produced. A mechanical product is composed of different parts, different parts are composed of different gene units, and gene units are combined, crossed over and evolved, which finally gives a series of products and the product evolution design. Based on the product gene unit model described above, the design process can be represented by the chain in Figure 4.

Figure 4. Design process based on the product gene unit model (functional design, functional structure, feature gene unit set, functional part, part entity)
Referring to Figure 4, the gene unit is the intermediate level in the mapping from design functionality to the physical part entity. In the modelling of the virtual design library, groups of part entities with the same functional information can be created through matching between gene units. Then, in the actual product design process, evolution and optimisation can be applied to these groups to finally realise the mapping from product concept design to product structure design. Through analysis of the shape of the gene unit, we found that once the sort of gene unit and the position relationships among gene units are determined, the gene unit bounding box can be used to represent the general part shape and the assembly relationships of the gene units; thus the complicated gene unit is simplified to a virtual box representation. Once the sort, quantity and assembly pattern of the gene units are determined, the gene template can be created, and through the permutation of specific gene units the part entity can be built up. Based on the above, a gene unit is represented as a vector: the start point $p_i(x, y, z)$ of the vector represents the position of the ith gene unit, the direction vector $\vec{l}_i(x, y, z)$ represents the assembly or mating direction, and the modulus $\lambda_i$ represents the length of the diagonal of the ith gene unit bounding box. Thus the initial product structure can be described using these vectors and graphically represented as a "draft drawing" of the structural relationships. A quadruple model for the mechanical part is created as follows:

$$P = G\Big(n, \sum_{i=1}^{n}\big(p_i(x,y,z),\ \vec{l}_i(x,y,z),\ \lambda_i\big)\Big), \quad i = 1, 2, \ldots, n \qquad (1)$$

where n is the quantity of gene units in the part. Moreover, the position relationship of the gene units can be described using the coordinate differences of the reference start points of the gene units.
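The quadruple model of Eq. (1) can be sketched as a small data structure; the class and function names below are illustrative, not from the paper:

```python
# Minimal sketch of the quadruple model P = G(n, {(p_i, l_i, lambda_i)}):
# each gene unit carries a start point p_i, a mating direction l_i and the
# bounding-box diagonal length lambda_i.
from dataclasses import dataclass

@dataclass
class GeneUnit:
    p: tuple     # start point (x, y, z) of the unit's bounding box
    l: tuple     # assembly / mating direction vector
    lam: float   # diagonal length of the bounding box

def relative_position(u, v):
    """Position relationship: coordinate difference of the reference start points."""
    return tuple(b - a for a, b in zip(u.p, v.p))

@dataclass
class PartModel:
    units: list  # the n gene units composing the part

    @property
    def n(self):
        return len(self.units)
```

A `PartModel` of this form is the representation reused by the second-level (gene unit) matching described in Section 3.2.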
B. Yang, X. Ze and L. Liu

3. Modelling of Virtual Design Library
It is obvious that parts with similar body genes and gene units have similar structures as well. So, when searching for similar parts over the network, the part functionality is first obtained through the decomposition of the general functionality of the product; secondly, the part functionality is decomposed and refined into many logical functionalities which can be realised by gene units; thirdly, through the effective combination of gene unit information, functional structure information and information on the relationships between units, the body gene is created. The body gene and the gene units are then used as the indexing and searching conditions: the body gene is mapped onto a class of functional structures, the gene units are mapped onto specific functional structures, and finally the part prototype is generated by the transformation from the logical model to the physical model. Since the resources on the network are huge, there is usually some conflict between making the knowledge filtration efficient and at the same time effective. In order to build an effective knowledge indexing and searching model, a two-level indexing and searching method is put forward. Network data mining tools are used as the first-level searching tool, in which the similarity of the body gene is used as the searching principle. The knowledge obtained, which is roughly correlated with the objective part information, is structured or semi-structured; from it, designers can start the second-level search based on similar gene units, as shown in Figure 5.
Figure 5. Framework of the searching model based on the two-level searching technique
3.1 Searching Based on Part Body Gene
The main structural character of a part can be roughly expressed by the encoding model based on its body gene, so a body-gene-code-based searching technique is proposed first, in which the coding method shown in Figure 2 and Figure 3 is used for this rough correlation matching stage. Here code similarity is used as the constraint condition. The searching result, the set of retrieved parts, has a group character. The searching steps are as follows.

Identify the evaluation index $\omega_i$: $\omega_i$ is the relative importance (i.e. weight) of the ith bit in the six-bit body gene code. The evaluation index $\omega_i$ describes the importance of the ith feature among all the features of the part. Different degrees of fuzziness are involved in identifying it, and traditionally it is set manually. Subjective influence is therefore often involved in evaluating it, which makes the process not only time-consuming but also low in matching precision. Therefore, a fuzzy synthetic evaluation method based on the Analytic Hierarchy Process (AHP) is adopted in this article. Four phases are involved: setting up the pairwise comparison judgment matrix, calculating the maximum eigenvalue and the eigenvector of the matrix, normalising the weights, and checking consistency. The detailed process follows.

Step 1: Assessing the degrees of importance of the factors influencing each bit. According to the AHP method given by Saaty, the pairwise comparison values $f_{u_j}(u_i)$ and $f_{u_i}(u_j)$ are first obtained using Table 1, where $f_{u_j}(u_i)$ is the importance gradation level of bit $u_i$ relative to bit $u_j$. Three principles are given for the assessment process: (1) code position 1 is the most important; (2) code position 2 is the second most important; (3) the importance weights of the other bits are determined by the size tolerance and shape-position tolerance of the feature each bit represents; the higher the tolerance, the more important the bit.
Table 1. The AHP pairwise comparison scale

| Definition | $f_{u_j}(u_i)$ | $f_{u_i}(u_j)$ |
| $u_i$ is equally important to $u_j$ | 1 | 1 |
| Weak importance of $u_i$ over $u_j$ | 3 | 1/3 |
| Strong importance of $u_i$ over $u_j$ | 5 | 1/5 |
| Very strong importance of $u_i$ over $u_j$ | 7 | 1/7 |
| Absolute importance of $u_i$ over $u_j$ | 9 | 1/9 |
| Intermediate values between the two adjacent judgments | 2, 4, 6, 8 | reciprocals |
Step 2: Constructing the judgment matrix. The judgment matrix is established as follows:

$$C = \begin{pmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ c_{n1} & c_{n2} & \cdots & c_{nn} \end{pmatrix}$$

The elements of the judgment matrix are calculated by the following equation:

$$c_{ij} = \frac{f_{u_j}(u_i)}{f_{u_i}(u_j)}, \qquad i, j = 1, 2, \ldots, n \qquad (2)$$
Obviously, the value of $c_{ij}$ must be determined according to the actual status of the different parts.

Step 3: Calculating the maximum eigenvalue and the eigenvector of the judgment matrix. The characteristic equation of the judgment matrix $C$ is $|C - \lambda E| = 0$; after calculation, the eigenvector $\xi$ corresponding to the maximum eigenvalue $\lambda_{\max}$ can be written as

$$\xi = (x_1, x_2, \ldots, x_m) \qquad (3)$$
Step 4: Normalising the weighting coefficient subset shown in Eq. (3). Because the eigenvector is not unique, normalisation is necessary, that is:

$$\left( \frac{x_1}{\sum_{i=1}^{m} x_i},\ \frac{x_2}{\sum_{i=1}^{m} x_i},\ \ldots,\ \frac{x_m}{\sum_{i=1}^{m} x_i} \right) \qquad (4)$$
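Steps 2-4 can be sketched in code: the principal eigenvector of the judgment matrix is obtained by the power method and normalised to give the weights, and a standard Saaty consistency ratio (an assumption here, since the paper does not detail its consistency check) supports Step 5. All names are illustrative:

```python
# Sketch of AHP weight extraction from a pairwise-comparison judgment
# matrix C built from the Saaty 1-9 scale (Table 1).

def ahp_weights(C, iters=100):
    """Principal eigenvector of judgment matrix C, normalised to sum to 1."""
    n = len(C)
    x = [1.0] * n
    for _ in range(iters):
        # one power-method step: y = C x, then renormalise
        y = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
        s = sum(y)
        x = [v / s for v in y]
    return x

def consistency_ratio(C, w):
    """CR = CI / RI; the judgment matrix is usually accepted when CR < 0.1."""
    n = len(C)
    # lambda_max estimated from C w = lambda_max w
    lam = sum(sum(C[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1) if n > 1 else 0.0
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random indices
    return ci / ri if ri else 0.0
```

For a perfectly consistent 2x2 matrix such as [[1, 3], [1/3, 1]], the weights converge to (0.75, 0.25) and the consistency ratio is zero.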
Then we obtain the weighting coefficient subset, in which every element is quantified.

Step 5: Consistency check.

Calculating the similarity coefficients between the objective part and the parts to be retrieved: to avoid a combinatorial explosion in the searching process, a new concept, the part similarity coefficient, is introduced.

Definition: Let A be the objective part and B a part to be retrieved. The quantified index based on the degree of code similarity between A and B is called the similarity coefficient between A and B; it describes the degree of similarity of the corresponding features of the two parts. It is denoted $C_{AB}$ and calculated by the following equation:

$$C_{AB} = \frac{\sum_{i=1}^{6}\omega_i S_{A_iB_i}}{\sum_{i=1}^{6}\omega_i S_{A_i} + \sum_{i=1}^{6}\omega_i S_{B_i} - \sum_{i=1}^{6}\omega_i S_{A_iB_i}} \qquad (5)$$

where $i = 1, \ldots, 6$ and 6 is the total number of bits in the part body gene code, and

$$S_{A_iB_i} = \begin{cases} 1, & A_i \neq 0,\ B_i \neq 0,\ A_i = B_i \\ 0, & \text{otherwise} \end{cases} \qquad
S_{A_i} = \begin{cases} 1, & A_i \neq 0 \\ 0, & A_i = 0 \end{cases} \qquad
S_{B_i} = \begin{cases} 1, & B_i \neq 0 \\ 0, & B_i = 0 \end{cases}$$
Here $A_i$ and $B_i$ denote the ith bit of the body gene code of parts A and B respectively. In our classification and coding system, a bit value of 0 means that the part has no corresponding feature. $\omega_i$ denotes the evaluation index of the relative importance of the ith bit among the six bits, which can be obtained from Eq. (4).

(3) Set the limit index $\delta$ according to the capacity of the data warehouse. If $C_{AB} \geq \delta$, then part B is one of the alternative solutions of the design. After this body-gene-based searching process, a limited number of part samples can be retrieved. These part samples have the same classification features, which ensures their higher degree of correlation with the objective part. The characters of the retrieved parts are checked and modified where necessary. Finally, the modified part models are put into the objective database, and the part data warehouse based on the rough classification is established.
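One plausible reading of Eq. (5) is a weighted overlap (Tanimoto-style) coefficient over the six bits; under that reading (an assumption, as are all names below), the computation can be sketched as:

```python
# Sketch of the part similarity coefficient C_AB between two 6-bit body
# gene codes, using AHP-derived bit weights w. A bit value of 0 means the
# corresponding feature is absent.

def code_similarity(A, B, w):
    """A, B: 6-digit body gene codes as sequences of ints; w: 6 bit weights."""
    s_ab = sum(wi for ai, bi, wi in zip(A, B, w) if ai != 0 and bi != 0 and ai == bi)
    s_a = sum(wi for ai, wi in zip(A, w) if ai != 0)
    s_b = sum(wi for bi, wi in zip(B, w) if bi != 0)
    denom = s_a + s_b - s_ab
    # two feature-free codes are trivially identical
    return s_ab / denom if denom else 1.0
```

A candidate part B would then be kept as an alternative solution whenever `code_similarity(A, B, w)` reaches the threshold set for the data warehouse.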
3.2 Searching Based on Gene Unit
After the part retrieval process described above, the retrieved parts share only construction similarity with the objective part; further information, such as part dimensions, must be considered in the more precise correlative matching. On the other hand, the index of the gene unit is the main functional feature during the product assembly process, which determines the assemblability of a part. The second retrieval process is therefore based on the gene unit matching technique. Here parts are again described by the model of Eq. (1). The main stages of the second matching process are as follows.

Step 1: If the number and type of gene units of a retrieved part and the objective part can be matched successfully, go to Step 2; otherwise, the part is ineligible.

Step 2: Calculate $S_j$, the similarity of gene unit j of the retrieved part $P_k$ with the objective part $P_0$. In this process, the corresponding gene unit j of the retrieved part and the objective part are compared, and the similarities of their positions (that is, the coordinates of the vectors' start points), of their mating directions, and of their bounding boxes are calculated. These similarities can be represented as:
$$\mathrm{Sim}\big(x_i^{P_k}, x_i^{P_0}\big) = 1 - \frac{\big|x_i^{P_k} - x_i^{P_0}\big|}{R_i} \qquad (6)$$

where $x_i^{P_k}$ and $x_i^{P_0}$ denote the values of the ith feature of the retrieved part and the objective part respectively, and $R_i$ represents the admissible value range of the ith feature, $i = 1, \ldots, 6$. Then

$$S_j = \sum_{i=1}^{6} \mathrm{Sim}\big(x_i^{P_k}, x_i^{P_0}\big) \qquad (7)$$
Step 3: Calculate the relative importance (i.e. weight) of each gene unit with respect to the global features of the part, $\omega_1, \ldots, \omega_n$. The AHP method is also used in this process.
Step 4: Calculate the composite similarity of the retrieved part and the objective part over all the composing gene units:

$$S(P_k, P_0) = \frac{\sum_{j=1}^{n} \omega_j S_j}{\sum_{j=1}^{n} \omega_j} \qquad (8)$$

Step 5: If $S(P_k, P_0) = \max_i S(P_i, P_0)$, part $P_k$ is the most suitable matching part.
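Eqs. (6)-(8) can be sketched as follows, assuming each gene unit is flattened into a six-value feature vector (start-point coordinates, mating direction, bounding-box diagonal); this flat representation and the function names are assumptions, not the paper's data structures:

```python
# Sketch of the gene-unit matching of Section 3.2.

def feature_similarity(xk, x0, R):
    """Eq. (6): 1 - |difference| / admissible range, per feature."""
    return [1 - abs(a - b) / r for a, b, r in zip(xk, x0, R)]

def unit_similarity(xk, x0, R):
    """Eq. (7): sum over the unit's six features."""
    return sum(feature_similarity(xk, x0, R))

def part_similarity(units_k, units_0, R, w):
    """Eq. (8): weight-averaged similarity over all n gene units.

    units_k, units_0: matched lists of flattened gene-unit feature vectors;
    w: AHP-derived per-unit weights.
    """
    s = [unit_similarity(a, b, R) for a, b in zip(units_k, units_0)]
    return sum(wj * sj for wj, sj in zip(w, s)) / sum(w)
```

Step 5 then amounts to picking the retrieved part with the largest `part_similarity` against the objective part.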
4. Conclusions
In order to quickly obtain and efficiently use the part design resources on the Internet for follow-up product design and for building up a concurrent design environment, we have proposed a method that integrates the genetic engineering methodology and web-based searching techniques into the establishment of a virtual design library. The method has the following advantages:

• Among the variety of design methods, the biological design concept throws new light on the design domain; the biologically inspired character of the approach fits well in a dynamically changing and distributed design environment. Establishing a searching model based on the product gene model is an important exploration in the design reuse area.

• A retrieved part's sales and manufacturing information is often available on the Internet, so information needed in the follow-up manufacturing process, such as manufacturability and economic information, can be obtained in advance, which lays the foundation for top-down, lifecycle design.

• The approach provides a feasible way to support automatic product design, which will lead, step by step and automatically, to the evolution of product mechanisms as well as the enrichment of the geometrical information of components.

Much more needs to be done before the method can be widely used in practice. Due to the complexity of design, the description of the model is still rough and involves very complicated calculations. With a further understanding of design, the model needs to be improved with more details and made more operable.
5. Acknowledgement
The study reported was supported by the Shandong Natural Science Foundation (Y2005F26) and the Scientific Research Foundation for Doctors of Jinan University (B0538).
6. References
[1] Pierra, G., Sardet, E., Potier, J.C., et al. (1998) Exchange of component data: the PLIB (ISO 13584) model, standard and tools. In: Proceedings of the CALS Europe'98 Conference, Paris, pp. 160–176.
[2] Pierra, G. and Parks, C.H. (1999) Electronic commerce of component information workshop. Journal of Research of the National Institute of Standards and Technology, 104(3): 291–297.
[3] Sardet, E. and Pierra, G. (2001) Simplified representation of parts library: model, practice and implementation. In: Proceedings of the 10th Product Data Technology Europe, QMS, Berkshire, UK, pp. 163–174.
[4] Pierra, G., Potier, J.C. and Sardet, E. (2003) From digital libraries to electronic catalogues for engineering and manufacturing. International Journal of Computer Applications in Technology, 18(1): 27–42.
[5] Yang, B., Yang, T. and Ze, X. (2007) Functional tolerance theory in incremental growth design. Frontiers of Mechanical Engineering in China, 2(3): 336–343.
[6] Huang, K., et al. (1998) Generic structural design by assemblability for mechanical product. In: Proceedings of the 14th International Conference on CAPE, Tokyo, Sept. 8–10.
[7] Ulrich, R.B., Christian, B. and Ruediger, D. (1991) Computer Integrated Manufacturing Technology and System. Weaponry Industry Press.
Integrated Paper-based Sketching and Collaborative Parametric 3D Modelling

Franklin Balzan, Philip J. Farrugia, Jonathan C. Borg
Concurrent Engineering Research Unit, Department of Industrial and Manufacturing Engineering, University of Malta, Tal-Qroqq, Msida, MSD06, Malta. E-mail: [email protected]
Abstract Although paper-based freehand sketching is still widely used during the conceptual design phase, few tools are available which allow designers to exploit the sketches resulting from this activity at a later design phase. This paper reports ongoing research on a prototype tool nicknamed mX-Sketch, which addresses this lack of support by linking freehand paper-based sketches with Computer-Aided Design (CAD) technology. Given that paper-based sketches are also used by mobile designers to express their ideas, the rapid transfer and automatic generation of 3D virtual models from such sketches provide real-time design collaboration. Since a paper-based freehand sketch is inherently vague, the 3D form idea is clarified by means of symbols representing 2D geometric constraints (e.g. perpendicularity). As a result, mX-Sketch produces a parametric 3D virtual model which can potentially be used downstream in the design process and exchanged in real time in collaborative design scenarios.

Keywords: geometric constraints, collaborative design, form design, mobility
1. Introduction
Various researchers [1,2] have shown that, despite the advent and availability of CAD, designers prefer freehand paper-based sketches during the early design phase. This is attributed to the fact that designers at these early stages want to express their ideas quickly and naturally, and paper-based freehand sketches provide an ideal early visual representation of their design intent [3]. Furthermore, designers find that the rigid user interfaces (UI) of CAD systems hinder freedom [4], intuitiveness [5] and creative idea generation, and are thus not suitable for the generation of early form concepts. Additionally, the availability and portability of these systems outside the design office is fairly limited, requiring the designer to make use of readily available media (e.g. a paper napkin) to store spontaneous conceptual design solutions when outside the office [6]. Thus designers need appropriate tools which integrate paper-based sketching with CAD. Such a tool would make it possible for the designer to quickly create a form concept which can then be edited to explore form variation. Furthermore, it would also contribute to
collaborative conceptual design, since any generated 3D virtual model can be distributed to designers at geographically dispersed locations for evaluation, edited and forwarded back to the designer. This design collaboration can only be effectively established if the generated 3D model is parametric and contains the geometric design intent of the designer. This enables model variation while leaving the geometric design intent unvaried. Based upon the design problem introduced above, the rest of this paper is structured as follows. Section 2 critically reviews work on computer-aided sketching. The framework architecture for a computational tool aimed at addressing the above problem follows in Section 3. Sections 4 and 5 describe, respectively, the sketching approach adopted and the processing of sketches in this framework. Section 6 discloses the results of a preliminary evaluation of a proof-of-concept tool. This is followed by a discussion in Section 7, where future research directions are also suggested. Key conclusions are finally drawn in Section 8.
2. Related Work on Computer-aided Sketching
Various research Computer-Aided Sketching (CAS) tools have been developed to integrate sketching with 3D modelling technology. Tools based on gestural modelling utilise gestures, i.e. symbols entered with a stylus or a mouse, to trigger either a modelling command, such as 'sweep', or a 3D primitive, such as 'cube'. An example of a CAS tool based on gestural modelling is SKETCH [7]. Previous research work carried out at the Concurrent Engineering Research Unit of the University of Malta [8] enabled the automatic interpretation and processing of paper-based sketches through the use of symbols. Although this work supported collaborative design and the use of the paper medium, the 3D models generated were not parametric. Reconstructional modelling systems are CAS tools that use reconstruction techniques to build the object geometry. CIGRO [9] is an example of a CAS tool in which sketched polyhedral wireframe models are reconstructed. A common limitation of all the reviewed CAS tools based on reconstructional modelling is that the input is limited to line strokes only. The third modelling approach is the hybrid technique, which combines the gestural and reconstructional approaches. GEGROSS [10] extends the capabilities of CIGRO, allowing users to dynamically modify the sketch geometry and to impose geometric constraints (e.g. perpendicularity) via a gesture alphabet. GEGROSS utilises a Tablet PC as the sketching medium. CIGRO exploits different stylus pressure levels to distinguish between auxiliary and main sketching lines. As in many other systems, the Window, Icon, Menu, Pointing device (WIMP) user-interface is eliminated so as to emulate as closely as possible the traditional use of pen and paper. Since GEGROSS is parametric, it caters for constraints to be applied to the input sketch. Although GEGROSS addresses ambiguity in form sketches via such an alphabet, it replaces the traditional pencil and paper with a digital sketching device. Results in [11] suggest that the majority of subjects prefer the former media for mobile design work.
Although commercial systems, e.g. AliasStudio® [12], have attempted to integrate sketching with CAD, such systems do not support the automatic conversion of paper-based sketches into 3D virtual models. Furthermore, such systems demand access to a computer, which is not always available, as also argued in [6]. Therefore, although CAS support has been developed at both the commercial and the research level, the current state of the art does not yet fulfil the designer's need to remotely obtain a parametric 3D model from a paper-based freehand sketch. In view of the above, the overall aim of this research is to develop, implement and evaluate a computational framework capable of automatically and remotely generating parametric 3D models from freehand paper-based sketches.
3. Framework Architecture
Figure 1.1 depicts the computational framework used as the basis to implement the prototype tool mX-Sketch. An overview of the main role of each of the seven frames follows:

• Freehand Sketching (FS) frame: the candidate form concept is semiformally represented on a paper-based sketching medium with a Prescribed Sketching Language (PSL). The underlying principles of this sketching language are described later in Section 4.

• Sketch Image Capture (SIC) frame: the semiformal sketch representation is digitised with an optical device. Depending on the situation in which the designer is sketching, this frame allows the use of alternative image acquisition devices, including flatbed scanners and cameraphones. If the image of the sketch is captured by a cameraphone, it is transmitted to an e-mail address as an attachment via the Multimedia Messaging Service (MMS).

• Sketch Image Processing and Validation (SIPV) frame: image preprocessing algorithms are applied to the sketch image to prepare it for subsequent processing. Provided that the sketch's visual syntax is correct, the 3D shape information is extracted and modelled in the subsequent frame.

• Shape Information Modelling (SIM) frame: the extracted 3D shape information is modelled in a specific format, depending on which CAD package is utilised to obtain the 3D geometric model. For example, if AutoDesk Mechanical Desktop® [13] is used, the shape information is modelled in a command script file, from which a sequence of commands is automatically executed.

• Virtual 3D Model Construction (V3D) frame: the role of this frame is to obtain a 3D virtual model in a commercial CAD package from the format input from the SIM frame.

• 3D model Transmission (3DT) frame: the generated parametric 3D model is transferred to the designer's mobile device. The 3D model is forwarded in .dwg format, which is used to generate the 3D model in a CAD system, and in a dynamically rendered format, which is used for visualisation purposes.

• 3D model Editing (3DE) frame: allows the designers to collaboratively share and edit the generated 3D model.

Figure 1.1. Framework for mobile parametric sketch-based modelling
4. Sketching Approach
The core of the FS frame is the prescribed sketching language (PSL), which the designer utilises to represent his design intent on the paper medium. In view of the idiosyncrasy in form sketching, PSL is required to robustly communicate the designer's form intent to the computer. It makes use of plane lines to define the planes of the form's salient cross-section profiles [14]. The term 'salient cross-sections' refers to those critical cross-sections that will produce the intended 3D form when 3D operations are performed on them. Figure 1.2 illustrates a simple sketch in PSL utilising a salient cross-section extruded between two planes.
Figure 1.2. A simple PSL sketch and the corresponding 3D geometric model
Freehand sketches are vague, ambiguous and inaccurate in nature, since they represent a spontaneous form of graphical communication [1]. To this end, the inclusion of geometric constraints in the sketch is intended to reduce this ambiguity, thus improving the geometry of the generated virtual 3D model, as shown in Figure 1.3.
Figure 1.3. Constraint-based representation in PSL
In order to arrive at the set of geometric constraints, different constraint notations were considered and evaluated with respect to the ease with which they can be memorised, their intuitiveness, the speed with which they can be applied, the ease with which they can be implemented, and the robustness of the eventual implementation. The geometric constraint symbols to be used were also studied by carefully examining the geometric constraint symbols utilised in parametric modellers. A survey was conducted with fifty-one engineering students to identify the most intuitive constraint symbol from a proposed set of three. Based on the results obtained, Figure 1.4 illustrates the set of six constraint symbols employed in mX-Sketch.
Meanings of the six symbols: is parallel to; is perpendicular to; is equal to; is collinear to; is horizontal to; is vertical to.
Figure 1.4. Set of constraint symbols employed
5. Processing of the Sketches
After the form concept has been represented with PSL, the sketch is digitised with an optical device and sent to a computer for processing. The key processing steps are illustrated in Table 1.1.

Table 1.1. The key steps of the Sketch Image Processing and Validation frame

1. The designer's intent is appended to the sketch by means of geometric constraints on the profile. The notation used in the bottom right-hand side indicates that side A is intended to be parallel to side E, while side F is to be perpendicular to side E. The sketch is then transferred to mX-Sketch for processing and interpretation.

2. After removing unwanted information such as crossover lines, the various entities are separated according to their class. PSL includes a number of classes, such as symbols, profiles, plane lines and identifiers. The salient points of the profile are then stored, as they will be used later to generate the profile.

3. The centre point of each entity in each class is located and stored. Constraint symbols are correlated to identifiers, while line entities in the profile are correlated to nearby identifiers by means of binding boxes (shown dashed), which define the region of proximity. These correlations are required for a correct sketch interpretation.

4. mX-Sketch processes the sketch until it generates a *.scr file which contains all the commands required to generate the respective 3D parametric model in AutoDesk Mechanical Desktop®, with all the intended geometric constraints.
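The binding-box correlation step can be illustrated with a minimal nearest-centre sketch (not the mX-Sketch implementation; a real tool would also test containment within the binding box, and all names here are illustrative):

```python
# Sketch of correlating constraint symbols to profile identifiers by
# proximity of bounding-box centres.

def centre(box):
    """Centre point of a bounding box ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def bind_symbols(symbols, identifiers):
    """symbols / identifiers: dicts mapping a name to its bounding box.

    Each constraint symbol is bound to the identifier whose box centre
    lies nearest to the symbol's own centre.
    """
    bindings = {}
    for name, sbox in symbols.items():
        sx, sy = centre(sbox)
        bindings[name] = min(
            identifiers,
            key=lambda i: (centre(identifiers[i])[0] - sx) ** 2
                        + (centre(identifiers[i])[1] - sy) ** 2,
        )
    return bindings
```

With a perpendicularity symbol drawn next to the identifier of side E, for example, the symbol would be bound to E rather than to a distant side A.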
6. Evaluation

6.1 Evaluation Objectives
Primarily, the evaluation objectives consisted of investigating:

• the willingness of designers to utilise symbols representing geometric constraints in paper-based sketches;
• the utility, later in the design process, of a 3D parametric model generated during the conceptual design phase.

A preliminary evaluation of mX-Sketch was carried out with six mechanical engineering designers. In order to put the subjects in the context of this research, the designers were first given a flyer. A questionnaire form was then delivered. Prior to the actual evaluation, a pilot study was conducted with two engineering students to detect any weaknesses in the approach. This pilot study showed that the questions asked were easy to comprehend. The survey consisted mainly of questions requiring a response on a seven-point scale. The designers were also required to provide a justification for their responses, in order to better comprehend and assess the quantitative feedback gathered.
6.2 Evaluation Results
The evaluation of the concepts underlying mX-Sketch indicated that, while the designers regarded geometric relations as "the base of a good 3D model", two evaluators were sceptical about the possibility of utilising the model later on during the design process. In particular, the evaluators commented that geometric constraints were essential "to be able to convey the appropriate message and ensure proper translation by the computer", as "it is useless drawing a sketch and generating a shape which I did not have in mind". Despite this, a strongly positive attitude (a mean rating of 2.3 on a seven-point scale, with 1 implying a positive attitude) was obtained from the evaluators when asked whether they would utilise this model to explore form variation using different parameters. A very positive attitude was expressed by the evaluators when queried on the intuitiveness of the geometric constraint representation employed in PSL (a mean rating of 1.5 on a seven-point scale, with 1 as most intuitive). The evaluation also revealed that the two most important parameters for mX-Sketch were the editability of the model and the ease with which it can be obtained.
7. Discussion and Future Work
Geometric constraints form part of the essential knowledge a designer requires to participate in globally collaborative part design, improvement and evaluation. Without such constraints in a distributed virtual 3D model, geographically remote designers are unaware of the original design intent and the required geometric specifications. This lack of geometric knowledge between collaborating designers would lead to unacceptable modifications to the 3D model. Despite the promising evaluation results achieved and the potential of the tool in collaborative design, future work is required to extend its practical utility. Key research directions include:

• extending the component forms which can be supported by PSL;
• implementing robust symbol recognition algorithms to classify constraint symbols;
• incorporating more geometric constraint symbols in mX-Sketch;
• evaluating the tool, based on hands-on experience, especially in a typical collaborative design scenario.
8. Conclusions
The research disclosed in this paper addresses the lack of support for designers to automatically obtain 3D parametric models from paper-based sketches. The novelty of this work lies in exploiting a set of re-usable 2D geometric constraint symbols in paper-based sketches which contribute to the automatic generation of 3D parametric models. As a result of this feature and the use of mobile devices, mX-Sketch allows designers at geographically distributed locations to share in
real time their design concepts and to edit them while respecting the geometric constraints. Such models can potentially be used further in the design process.
9. Acknowledgements
The authors are grateful for the contribution offered by all participating evaluators. Special thanks go to the engineering designers at Methode Electronics Malta Ltd. The input of Mr. Christopher Spiteri, Mr. Anthony Caruana and Ms. Alexandra Bartolo in implementing parts of the prototype tool is also greatly acknowledged. Last but not least, thanks are also due to the University of Malta, Malta, which funded this research project through research grant ‘Emotional Support Tool for Early Product Design’ (ESTRED) and various visits to industry abroad.
10. References
[1] Borg, J.C., Farrugia, P.J., Camilleri, K.P., Franca, G., Yan, X.T. and Scicluna, D. (2003) Why CAD tools benefit from a sketching language. In: Proceedings of the 14th International Conference on Engineering Design (ICED03), Stockholm, Sweden, pp. 141–142.
[2] Lim, S., Qin, S.F., Prieto, P., Wright, D. and Shackleton, J. (2004) A study of sketching behaviour to support free-form surface modelling from on-line sketching. Design Studies, 25(4): 393–413.
[3] Alvarado, C.J. (2000) A Natural Sketching Environment: Bringing the Computer into Early Stages of Mechanical Design. Master's thesis, Massachusetts Institute of Technology, MA, USA.
[4] Naya, F., Jorge, J.A., Conesa, J., Contero, M. and Gomis, J.M. (2002) Direct modeling: from sketches to 3D models. In: Proceedings of the 1st Ibero-American Symposium on Computer Graphics (SIACG02), Guimarães, Portugal, pp. 109–117.
[5] Roemer, A., Pache, M., Weisshahn, G., Lindemann, U. and Hacker, W. (2001) Effort-saving product representations in design: results of a questionnaire survey. Design Studies, 22(6): 473–491.
[6] Stappers, P.J. and Hennessey, J.M. (1999) Towards electronic napkins and beermats: computer support for visual ideation skills. In: Visual Representations and Interpretations (VRI'98), Liverpool, UK, pp. 220–225.
[7] Zeleznik, R., Herndon, K.P. and Hughes, J.F. (1996) SKETCH: an interface for sketching 3D scenes. In: SIGGRAPH'96 Conference, New Orleans, Louisiana, USA, pp. 163–170.
[8] Caruana, A. (2005) Mobile Paper Sketch-Based Technology for Collaborative Early Form Design. Department of Manufacturing Engineering, University of Malta, Msida.
[9] Naya, F., Contero, M., Jorge, J. and Conesa, J. (2003) Conceptual modelling tool at the early design phase. In: Proceedings of the 14th International Conference on Engineering Design (ICED03), Stockholm, Sweden, pp. 137–138.
[10] Naya, F., Contero, M., Aleixos, N. and Jorge, J. (2004) Parametric freehand sketches. In: Proceedings of the Second Technical Session on Computer Graphics and Geometric Modeling (TSCG2004), Assisi, Italy, pp. 613–621.
864
F. Balzan, P.J. Farrugia and J.C. Borg
[11] Farrugia, P. J., Borg, J. C., Camilleri, K. P. and Spiteri, C., (2005) Experiments with a Cameraphone-Aided Design (CpAD) System, Proceedings of the 15th International Conference on Engineering Design (ICED05), Melbourne, Australia, pp. 130-131. [12] Alias Learning Tools, (2006) Learning Design with Alias StudioTools: A Hands-on Guide to Modeling and Visualization in 3D (Official Alias Training Guide), Sybex; Pap/Dvdr edition. [13] Shih, R. Z. J., (2001) Parametric Modeling with Mechanical Desktop 6, Schroff Development Corp. [14] Farrugia P.J., (2006) Evaluation of a Paper-based Prescribed Sketching Language, University of Malta, Concurrent Research Unit, Internal Technical Report CERU/ITR/01/2006.
Mechanical System Collaborative Simulation Environment for Product Design

Haiwei Wang, Geng Liu, Xiaohui Yang, Zhaoxia He
School of Mechatronic Engineering, Northwestern Polytechnical University, Xi’an 710072, China

Abstract. The widespread use of product performance analysis in the mechanical product design stage demands efficient management of simulation flows and engineering data. To address this, a Collaborative Simulation Environment (CSE) solution is presented, and its function framework and system architecture are analysed. Two key technologies are investigated: multi-hierarchy engineering data management and simulation flow control. Engineering data are classified and managed hierarchically using an XML-based multi-hierarchy data management technique, while simulation flows are managed effectively using workflow management technology. Based on the CSE architecture and these key technologies, a flow modelling platform and Web portal for a suspension system collaborative simulation environment were developed; with them, the product simulation cycle can be greatly shortened and design efficiency increased. Application results demonstrate the validity of the method.

Keywords: Collaborative Simulation Environment, Simulation flow control, Multi-hierarchy engineering data management, Workflow management technology
1. Introduction
In the innovative development of mechanical systems, integrating the available software and hardware resources to carry out product performance simulation can speed up development and save test expense. However, performance simulation faces several problems: the many kinds of performance simulation required demand simulation flow management; the large volumes of data produced by the various specialist engineering tools demand engineering data management; and the multidisciplinary complexity of mechanical system development demands that many people work collaboratively. In other words, simulation suffers from information islands, application islands and flow islands. Mature PDM (Product Data Management) systems cannot fully solve these problems [1]. Chai Xudong and Li Bohu proposed building simulation environments and platforms based on HLA/RTI technology [2] and researched multidisciplinary tool integration technology [3]. Agent technology was introduced into distributed simulation platform design by Ma Baohai [4]. Chen Xi presented an implementation of a virtual prototyping (VP) simulation support platform for multidisciplinary complex products [5]. Several product design and simulation companies offer frameworks and technical routes for simulation platform development, for example FIPER released by Engineous and the SimManager engineering data management solution from MSC. This paper presents a vertical mechanical product simulation solution, the Collaborative Simulation Environment (CSE), which addresses simulation flow and engineering data management in the mechanical system design and development process and provides a simple, operable integration platform for simulation analysts and managers.
2. CSE Function Framework and Architecture
2.1 CSE Function Framework

Constructing a CSE system that integrates the various tools requires many techniques, including database technology, collaborative simulation technology and workflow management technology. The tools to be integrated include CAD (Computer Aided Design) tools, CAE (Computer Aided Engineering) tools, a flow editor, user management, knowledge management and other self-developed special-purpose software. In addition, a comfortable visual environment is expected [6]. The system function framework of CSE is displayed in Figure 1.
Figure 1. CSE function framework
Figure 2. CSE system architecture
The database, model database and knowledge database form the bottom support of the system, providing data and model retrieval and storage for the CSE. The collaborative simulation environment is the core of the system's function and comprises the simulation flow management system, the collaborative-running support platform, the project management system, the knowledge management system and the report management system. The simulation flow management system manages and controls the execution of design and simulation tasks. The collaborative-running support platform integrates the tools by means of COM/DCOM and API (Application Programming Interface) messaging technologies. The various CAD/CAE tools, self-developed special-purpose software, flow editor, user management and knowledge management face the users at the front end; through them, concrete engineering design and simulation tasks are carried out, for example process definition, construction of three-dimensional models, assembly of VP models, and prototype simulation and result analysis, all serving the goal of optimising product quality and product design.

2.2 CSE Architecture

Based on analysis of the CSE function framework, the CSE architecture is built as shown in Figure 2. The architecture is composed of five layers: the user layer, interaction layer, tool layer, distribution layer and management layer. The function and content of each layer are described as follows.
1. User layer: all CSE users, including general engineers, project principals and simulation analysts.
2. Interaction layer: the simulation Web portal and the CAE workbench. The simulation Web portal provides a unified Web-based portal for users, through which flow control and simulation result reports can be accessed. The CAE workbench is the platform on which all simulation tasks are carried out.
3. Tool layer: the various CAD/CAE and self-developed special-purpose tools, such as SolidWorks, ANSYS, MSC.Nastran, Adams, and so on.
4. Distribution layer: composed of an Enterprise Service Bus and a grid computing platform, supporting distributed calls and grid computing for the CSE. Distributed calling realises adapters and agents for the analysis tools based on Web Services (WS) and implements event-driven service publication and subscription based on the WS-Notification standard. The grid computing platform is developed on the Globus Toolkit.
5. Management layer: user management, simulation flow management, engineering data management (EDM) and knowledge management. User management administers users and their permissions in the CSE. Simulation flow management covers simulation flow building, running, control and rebuilding. EDM handles all kinds of data produced in the simulation process, such as CAD models, CAE models, CAE post-processing data, parameter data, flow data and user data. Knowledge management supplies experience and guidance for the simulation analyst.
3. Multi-hierarchy Engineering Data Management
From the import of CAD models and input of related parameters at the start of a simulation, to the output of analysis reports and optimised models at its end, a large volume and many types of data are generated. The main data file types and their contents are listed in Table 1. The table lists only the broad types of data file; different simulation tools produce data in different formats, so the number of data types and files generated in a complex simulation analysis is very large. Moreover, many of the data files (for example FEM analysis models, VP analysis models and computation result files) occupy considerable storage space, typically several hundred megabytes or more than a gigabyte.

Table 1. The main data file types and contents

File type             | File content
Text file             | Related parameter data, analysis results
Graphics file         | Analysis results
Animation file        | Analysis results
3D model file         | Object shape, dimensions and related information
Analysis model file   | Geometry and structural analysis model information
Solver file           | All solving information
Mode file             | Used when importing an FEM model into the system analysis model
Report file           | Report submitted after the analysis is complete
Given these characteristics, a multi-hierarchy engineering data management method can be used, in which a file server and a database server complement each other. Files that are used infrequently or occupy large storage space, such as report files and animation files, are stored on the file server. Data that are used frequently and occupy little space, such as parameter information, flow information and user information, are stored directly in the database. The engineering data files on the file server are mapped, via XML Schema, into XML files corresponding to the various simulation types, and the database can then be accessed through these XML files. The XML-based engineering data mapping mechanism is shown in Figure 3.
Figure 3. Engineering data mapping mechanism based on XML
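As an illustration of this mapping idea, the sketch below builds a small XML metadata record for a large simulation result file kept on the file server, while the lightweight fields could be indexed in the database. The element names (`simFile`, `simulationType`, etc.) and the file path are illustrative assumptions, not the schema used by the authors.

```python
import xml.etree.ElementTree as ET

def map_to_xml(file_path, simulation_type, fmt, size_mb):
    """Build an XML metadata record for an engineering data file.

    The large file itself stays on the file server; only this
    lightweight record is stored and indexed in the database.
    """
    root = ET.Element("simFile")
    ET.SubElement(root, "simulationType").text = simulation_type
    ET.SubElement(root, "format").text = fmt
    ET.SubElement(root, "sizeMB").text = str(size_mb)
    ET.SubElement(root, "serverPath").text = file_path
    return ET.tostring(root, encoding="unicode")

# Hypothetical 850 MB ANSYS result file from a suspension analysis run
record = map_to_xml("/fileserver/suspension/run042.rst",
                    "structural-FEM", "ANSYS result", 850)
print(record)
```

In practice one XML Schema per simulation type would validate such records before they are used to access the database.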
Figure 4. Common simulation flow of mechanical system
4. Simulation Flow and Flow Control

4.1 Simulation Type and Flow
The problem of simulation flow control arises in the product design and development process because of the number of simulation analysis tasks involved. The first step is to catalogue the types and content of simulation. Based on the simulation requirements, the common simulation flow of a mechanical system is defined as in Figure 4. Throughout the simulation process, every simulation module exchanges data with the EDM. The flow begins with two steps, "obtain CAD model" and "clean CAD model", and then splits into two kinds of analysis: system analysis and structural analysis. System analysis includes predefinition of the analysis type (kinematics, dynamics, etc.), multi-rigid-body dynamics analysis, result validation, multi-flexible-body dynamics analysis, and so on. Structural analysis includes predefinition (analysis type, model, element type, etc.), structural analysis, result validation and structural optimisation. The CAD model is acquired from a PDM system or from the parametric modelling module designed into this system. CAD model cleaning addresses the problem that, when a complicated CAD model is used directly in simulation analysis, sharp corners and pinholes cause distorted results and long computation times. Part features therefore need to be modified, for example by removing pinholes and sharp corners, in a way that does not affect the correctness of the analysis results; such modification is called CAD model cleaning. After cleaning, the CAD model is saved in the EDM and serves as the source input for the simulation analysis modules. After system and structural analysis, further simulation modules perform multidisciplinary optimisation, reliability analysis, Six Sigma design, and so on. Finally, an improved CAD model is obtained.

4.2 Simulation Flow Control
The objective of application integration is to separate the process logic from the applications that carry out the process, to manage the association between the process and its resources, and to supervise and control process execution so that design and simulation tasks can be accomplished dynamically. The application process characteristic of workflow management technology is displayed in Figure 5. The figure shows the two stages of workflow management. The first is the build stage: for a given task, the process is designed and built in the design and simulation process definition tool, which is based on the XML Process Definition Language (XPDL) and conforms to the workflow model definition standard established by the Workflow Management Coalition (WfMC) [7]. The second is the run stage: the workflow engine of the integration platform imports the process built in the first stage and executes the management operations of starting, suspending, resuming and ending the workflow process. At the same time, the workflow engine, the human participants and the applications communicate with one another to ensure the process runs successfully. Furthermore, when the task changes, the workflow engine can return messages to the process definition tool so that the process can be redefined.
Figure 5. Application process characteristic
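The run-stage management operations named above (start, suspend, resume, end) can be sketched as a minimal state machine. This is an illustrative simplification, not the actual CSE workflow engine, and the activity names are invented.

```python
class WorkflowProcess:
    """Minimal sketch of a workflow instance supporting the four
    management operations named in the text."""

    # operation -> (states it is allowed from, resulting state)
    TRANSITIONS = {
        "start":   ({"defined"},              "running"),
        "suspend": ({"running"},              "suspended"),
        "resume":  ({"suspended"},            "running"),
        "end":     ({"running", "suspended"}, "ended"),
    }

    def __init__(self, activities):
        self.activities = activities  # imported from the process definition
        self.state = "defined"

    def apply(self, operation):
        allowed, target = self.TRANSITIONS[operation]
        if self.state not in allowed:
            raise ValueError(f"cannot {operation} from state {self.state}")
        self.state = target
        return self.state

# Illustrative use: a simulation flow built in the build stage, then run
proc = WorkflowProcess(["clean CAD model", "structural analysis", "report"])
proc.apply("start")
proc.apply("suspend")
proc.apply("resume")
proc.apply("end")
print(proc.state)  # → ended
```

A real engine would additionally dispatch each activity to the integrated tools and participants while in the running state.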
5. Experiment and Discussion
Based on the foregoing research, a flow modelling integration environment platform (Figure 6) and a CSE Web portal (Figure 7) were developed in the Eclipse 3.2 environment, taking a suspension system as the test object. Together these two parts constitute the running environment of the CSE, in which users can build simulation flows, execute simulation tasks, control simulation flows, generate reports, and so on.
Figure 6. Flow modelling integration environment platform
Figure 7. CSE Web portal
6. Conclusions
In this paper, the CSE function framework and system architecture have been presented, and two key technologies — engineering data management and simulation flow management — have been investigated, adopting XML-based multi-hierarchy engineering data management and workflow-based simulation flow control. The suspension system CSE experiment supports simulation flow definition and management, and the results show that the system offers good expandability, stability and usability.
7. Acknowledgements
This work is supported by the National 863 High-Tech R&D Key Program under grant No. 2006AA04Z161 and the National 863 High-Tech R&D Program under grant No. 2006AA04Z120.
8. References
[1] Joshi A.A., (2004) CAE Data Management Using Traditional PDM Systems, Proceedings of ASME DETC/CIE, Engineering Information Management Track.
[2] Chai Xudong, Li Bohu, Xiong Guangleng, et al., (2002) Research and Implementation of a Collaborative Simulation Platform for Complex Products, Computer Integrated Manufacturing Systems, 8(7):580-584 (in Chinese).
[3] Hou Baocun, Li Bohu, Chai Xudong, et al., (2004) Research on Multidisciplinary Tool Integration in the Virtual Prototyping Design and Simulation Environment, Journal of System Simulation, 16(2):234-241 (in Chinese).
[4] Ma Baohai, Qiu Lihua, Wang Zhanlin, (2003) Design of a Distributed Simulation Platform for the Aircraft Onboard Utility Integration Management System, China Mechanical Engineering, 14(23):2033-2037 (in Chinese).
[5] Chen Xi, Wang Zhiquan, Wu Huizhong, (2005) Research on Collaborative Virtual Prototyping Technology for Complex Products, Journal of Computer Simulation, 22(12):132-135 (in Chinese).
[6] Kleb W.L., Nielsen E.J., Gnoffo P.A., et al., (2003) Collaborative Software Development in Support of Fast Adaptive AeroSpace Tools, AIAA 2003-3978.
[7] Workflow Management Coalition, (2001) Workflow Process Definition Interface – XML Process Definition Language, Document Number WfMC-TC-1025.
Evolution of Cooperation in an Incentive Based Business Game Environment

Sanat Kumar Bista, Keshav P. Dahal, Peter I. Cowling
MOSAIC Research Centre, University of Bradford, Bradford, West Yorkshire, BD7 1DP, UK

Abstract. This paper discusses our investigation into the evolution of cooperative players in an online business environment. We describe the design of an incentive-based system, founded on a binary reputation system, in which the size of the reward or punishment is a function of the transaction value and the player's past history of cooperation. We compare the evolution of cooperation in our setting with a non-incentive environment, and our findings show that the incentive-based method is more conducive to the evolution of trustworthy players.

Keywords: Evolution of Cooperation, Online Markets, Reputation Systems, Trust
1. Introduction
Trust is a crucial component of society. As a foundation of human civilisation, trust continues to be important in all aspects of life. Whether or not we rely on something is guided by how trustworthy we believe it to be. In a social context, trustworthiness is assessed in several ways, for example by referring to past interaction history, word of mouth, reliable third-party certification, social reputation, and so on [1-3]. Modelling trust computationally in online societies is not straightforward. One common way of assessing trustworthiness online is a reputation mechanism, which has emerged as an important component of electronic markets for eliciting cooperation among loosely coupled and geographically dispersed economic agents [4]. Online auction and trading sites such as eBay, Yahoo! Auctions and Amazon.com use simple yet effective reputation management frameworks to provide their users with reputation information. The success of these trading environments demonstrates that reputation mechanisms are an effective way of inferring the trustworthiness of transacting parties. However, the presence of strategic players has made it increasingly difficult to identify a trustworthy partner for interaction, and the fact that these systems largely preserve user anonymity brings additional challenges. It is therefore necessary to identify the parameters that would contribute positively to the evolution of a cooperative society. The system we discuss and propose is a possible step in this direction.
2. Background
The trust and reputation management framework we consider builds on the requirements of online marketplaces. Reputation systems such as eBay's represent a simple and successful binary-reputation-based quality-of-service monitoring utility in existing online business environments. The eBay reputation system [5] computes a trustworthiness value from the total number of positive feedbacks (Vp) and the total number of negative feedbacks (Vn) received by a player (we use the term player to describe both buyers and sellers). If a player ABC has a total of 4746 unique positive feedbacks (Vp) and 9 unique negative feedbacks (Vn), the positive feedback value, expressed as a percentage, is the ratio of positive votes to the sum of positive and negative votes, which in this case is 99.8%. The rating process, however, considers neither the value of the goods being sold or purchased nor the reputation of the player acting as seller or buyer, even though this information is quite significant when assessing the quality of a feedback and, consequently, of a reputation. A player might build a good reputation score by transacting small-value goods at first and later 'cheat' in a high-value transaction [6]. Similarly, feedback provided by a player with high reputation should carry more weight than feedback from players with low reputation; current online recommendation systems appear to neglect these points. In our investigation we include these parameters and present a comparative analysis of the evolution results obtained with and without them. We show that including them in the business process contributes to the evolution of a society with a larger number of cooperative players.
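The eBay-style positive feedback score described above is simply Vp / (Vp + Vn); a quick check with the figures quoted for the hypothetical player ABC:

```python
def positive_feedback_percent(vp, vn):
    """eBay-style binary reputation: share of positive votes."""
    return 100.0 * vp / (vp + vn)

# Figures quoted in the text for player ABC
score = positive_feedback_percent(4746, 9)
print(f"{score:.1f}%")  # → 99.8%
```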
The experiments for this investigation were carried out in an Iterated Prisoner's Dilemma (IPD) [7] style setting over a spatial distribution of players. The IPD environment represents a social dilemma situation [7-9]. A typical online business setting contains such a dilemma: a buyer does not know the seller, and neither party can be sure of the other's cooperation. Defection in a one-shot business interaction seems attractive, but in repeated interaction cooperation may still be attractive because it increases the player's reputation, which can help future business — hence the dilemma. The payoffs for Temptation (T), Reward (R), Punishment (P) and Sucker (S) in an Iterated Prisoner's Dilemma strictly satisfy two inequalities: (i) T > R > P > S, and (ii) 2R > T + S [7]. The payoffs in a typical business game have a different relationship (as expressed in Table 2, Section 4). The real difference this makes is that defection becomes even more attractive relative to cooperation, because the reward value becomes equal to the punishment value. This undermines the dilemma, and a suitable incentive for cooperation must be added for the dilemma to persist. In the simulation, the players play out cooperation and defection over generations in a genetic-algorithm-based environment. The payoff values obtained serve as the fitness function for the GA simulation, in which player strategies are represented by chromosomes; each chromosome is a fixed-length representation of a player's strategy in terms of cooperation (C) and defection (D). Essentially, the system searches for an optimal strategy.
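The contrast between the two payoff structures can be checked directly. The numeric values below are the classic IPD payoffs (T=5, R=3, P=1, S=0) and a business-game analogue in money units where reward equals punishment; both sets are chosen here purely for illustration.

```python
def is_prisoners_dilemma(T, R, P, S):
    """Both IPD conditions from the text: T > R > P > S and 2R > T + S."""
    return (T > R > P > S) and (2 * R > T + S)

# Classic IPD payoffs satisfy the dilemma conditions
print(is_prisoners_dilemma(T=5, R=3, P=1, S=0))   # → True

# Business game of Table 2 (goods worth 1 money unit):
# T = 2 (goods + money), R = 1, P = 1, S = -1 — since R equals P,
# T > R > P > S fails and the dilemma collapses.
print(is_prisoners_dilemma(T=2, R=1, P=1, S=-1))  # → False
```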
A memory-3 game, in which each round has four possible joint moves (CC, CD, DC, DD), is played between the players, making the chromosome size 64 (4³). The original IPD tournaments were described by Axelrod [7] and first programmed by Forrest [10]. Axelrod used 6 additional bits to determine the first three moves. A variation of this was used by Errity [11], and we follow the same additional-bit encoding scheme, in which 7 extra bits encode the actions for the first three relative moves (relative to the opponent's moves); with this approach no assumption about the pre-game history needs to be encoded [11]. For reproduction, the system can perform crossover as well as mutation; during crossover, both parent chromosomes are broken at the same random point. A mutation probability of 0.001 and a crossover probability of 0.5 are used throughout the game. The players are categorised into six types according to the percentage of cooperative actions in the strategy represented by their chromosome, as shown in Table 1.

Table 1. Player classification

Player type       | Cooperation (%)
Very Cooperative  | > 65
Cooperative       | 55 to 65
Good              | 50 to 54
Okay              | 45 to 49
Dishonest         | 35 to 44
Very Dishonest    | < 35
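A minimal sketch of the 64-gene strategy chromosome and the Table 1 classification, under our own illustrative assumption that genes are the characters 'C' and 'D' and that the cooperation percentage is the share of 'C' genes in the strategy body:

```python
import random

HISTORY_DEPTH = 3
OUTCOMES = 4  # joint moves CC, CD, DC, DD per remembered round
STRATEGY_GENES = OUTCOMES ** HISTORY_DEPTH  # 64 genes, as in the text

def classify(chromosome):
    """Map a strategy chromosome to a Table 1 player type."""
    coop = 100.0 * chromosome.count("C") / len(chromosome)
    if coop > 65:
        return "Very Cooperative"
    elif coop >= 55:
        return "Cooperative"
    elif coop >= 50:
        return "Good"
    elif coop >= 45:
        return "Okay"
    elif coop >= 35:
        return "Dishonest"
    return "Very Dishonest"

# A random initial strategy, as a GA might generate it
random.seed(1)
chromosome = "".join(random.choice("CD") for _ in range(STRATEGY_GENES))
print(len(chromosome), classify(chromosome))
```

The 7 extra first-move bits of the Errity-style encoding are omitted here for brevity.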
3. Related Works
Trust and reputation in e-commerce environments and peer-to-peer systems have attracted significant interest in recent years. Aberer [8] outlines the complexity of trust and reputation and discusses different approaches to computing them, treating the evolutionary approach as one of several popular approaches used by game theorists. In [12] the authors present a social mechanism for reputation management in electronic communities and, in their discussion, describe the Prisoner's Dilemma situation that arises in them; our choice of a Prisoner's Dilemma-like environment to represent online business interaction is justified by their discussion. In [6] the authors describe a reputation management system for peer-to-peer electronic communities; among the problems of such communities they identify the lack of incentives for rating as a major one, and they also highlight existing systems' inability to handle strategic players. Our research is motivated by these two observations. In related work, Janssen [9] studied the role of reputation scores in the evolution of cooperation on e-commerce sites, asking whether reputation alone can drive the evolution of a cooperative society; working in a one-shot Prisoner's Dilemma-like environment, the author concludes that a high level of cooperation is not achievable through reputation scores alone. Building on these findings, our work in this paper concentrates on the possible role of incentives in the evolution of cooperation.
4. Problem Definition and Incentive Based Model
The problem we consider is a typical business game between a buyer and a seller. The corresponding actions and payoffs are given by the matrix in Table 2.

Table 2. Payoff matrix for the business game

                      | Buyer: Cooperate (C)         | Buyer: Defect (D)
Seller: Cooperate (C) | R_seller = Money             | S_seller = -(Money)
                      | R_buyer  = Good(s)           | T_buyer  = Money + Good(s)
Seller: Defect (D)    | T_seller = Good(s) + Money   | P_seller = Money
                      | S_buyer  = -(Good(s))        | P_buyer  = Good(s)
Here R represents the reward payoff, S the sucker's payoff, T the temptation payoff and P the punishment payoff. The table makes clear that a player is always on the safe side playing defection: if the other player cooperates, the defector receives a temptation payoff worth twice the reward (holding both the goods and the money), and even if the other player also defects, the defector receives a punishment payoff equal in value to the reward (still holding either the goods or the money). A player who cooperates while his opponent defects loses both money and goods. Any online business environment that preserved total anonymity of players would closely resemble this situation; for example, holding the 'physical' means of user identification and loss-compensation schemes constant, it reflects an eBay-like business scenario. Such a situation should not be allowed to persist, as it can produce a high number of selfish players in the society — a fact demonstrated in our experimental results. To avoid it, we focus our investigation on the impact that including player reputation and price-related data has on the evolution of cooperation. In our model we use a 'bonus reward' as an incentive for cooperative behaviour. Mutual cooperation in a game representing a single transaction yields a payoff equivalent to the reward in Table 2, plus a bonus reward computed as a function of the player's reputation and the value of the goods. Conversely, when both parties defect, the bonus reward is subtracted from their payoffs, making the punishment more severe. In this simple approach, the reward and punishment payoffs depend on two parameters:
- the price value of the transaction (equal to the reward for cooperation);
- the player's existing cooperation probability (reputation), as given by its history of cooperation and defection.
The corresponding actions and payoffs for the incentive-based setting are given in Table 3.

Table 3. Incentive-compatible payoff matrix for the business game
                      | Buyer: Cooperate (C)                 | Buyer: Defect (D)
Seller: Cooperate (C) | R_seller = ValG + (θ_seller × ValG)  | S_seller = -ValG
                      | R_buyer  = ValG + (θ_buyer × ValG)   | T_buyer  = 2 × ValG
Seller: Defect (D)    | T_seller = 2 × ValG                  | P_seller = ValG - (θ_seller × ValG)
                      | S_buyer  = -ValG                     | P_buyer  = ValG - (θ_buyer × ValG)
Here ValG represents the value of the goods being transacted and θ represents the reputation of the player. The reputation information is maintained by the system as a vector of the total numbers of cooperations (C) and defections (D):

    θ_H = [C, D]                                  (1)

The expected probability of cooperation is given by:

    E(P_n) = C / (C + D)                          (2)

where the values of C and D are derived from the transaction history in (1).
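A sketch of the Table 3 payoffs together with the reputation of equations (1)-(2). The function names and the exact bonus form (bonus = θ × ValG) are our reading of the reconstructed table, so treat this as an assumption rather than the authors' exact implementation.

```python
def incentive_payoffs(move_seller, move_buyer, val_g, theta_seller, theta_buyer):
    """Return (seller_payoff, buyer_payoff) for one transaction.

    Moves are 'C' or 'D'; val_g is the goods value ValG; theta_* are
    the players' reputations (expected cooperation probabilities, eq. 2).
    """
    if move_seller == "C" and move_buyer == "C":
        # Reward plus a bonus proportional to each player's own reputation
        return val_g + theta_seller * val_g, val_g + theta_buyer * val_g
    if move_seller == "D" and move_buyer == "D":
        # Punishment: the bonus is subtracted instead of added
        return val_g - theta_seller * val_g, val_g - theta_buyer * val_g
    if move_seller == "D":  # seller defects, buyer cooperates
        return 2 * val_g, -val_g
    return -val_g, 2 * val_g  # buyer defects, seller cooperates

def reputation(history):
    """E(P_n) = C / (C + D) from a history vector [C, D] (eqs. 1-2)."""
    c, d = history
    return c / (c + d)

# Players start with truth-telling probability 1 (theta = 1); goods value 10
print(incentive_payoffs("C", "C", 10, 1.0, 1.0))  # → (20.0, 20.0)
print(incentive_payoffs("D", "D", 10, 1.0, 1.0))  # → (0.0, 0.0)
```

With θ = 1 mutual cooperation doubles the reward while mutual defection yields nothing, restoring the incentive to cooperate that Table 2 lacked.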
5. Experimental Setup and Results
The experiments were carried out in two phases. In the first phase, 2500 players played a non-incentive business game for 5000 generations with 100 iterations per generation, using the payoff values of Table 2. In the second phase, with all other parameters unchanged, a pro-incentive business model with the payoff values of Table 3 was simulated. The system recorded the player evolution and the cooperative and defective moves in each interaction. The results reported are an average over 10 rounds of simulation. We assume that each player initially has a truth-telling probability of 1, and that the players transact goods of the same value throughout.
The stacked bar diagrams in Fig. 1 show the percentage share of each of the six player types as evolution proceeds in the two settings. The diagrams show clearly that the population of cooperative players (very cooperative, cooperative, good and okay players, as classified in Table 1) rises as evolution continues in the pro-incentive setting, while the population of non-cooperative players (very dishonest and dishonest players) is high and continues to grow in the non-incentive setting.
Fig. 1. Evolution of the different player types over 5000 generations (sampled at generations 1, 100, 500, 1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500 and 5000) in the non-incentive and pro-incentive business game environments
The evolution trends of each player type were compared across the two game settings. The graphs in Fig. 2 present this comparison for the six player types (very cooperative, cooperative, good, okay, dishonest and very dishonest players). In general, the trends show that the pro-incentive setting favours the evolution of cooperative players, while the non-incentive setting favours the evolution of non-cooperative players.
Fig. 2. Comparative evolution of six different player types in a non-incentive and pro-incentive business game setting
Another interesting aspect is the reputation of the players. In each play of the game, the cooperative and defective moves of the players were recorded in order to calculate the total reputation score. Reputation was calculated as in expression (2) above. An average reputation score of 0.98 was recorded for the pro-incentive business game, whereas a very low average score of 0.003 was recorded for the non-incentive setting. This result is consistent with the population of non-cooperative players, who defect in most plays, as opposed to the cooperative moves of good players in the pro-incentive environment.
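Expression (2), which defines the reputation score, appears earlier in the paper and is not reproduced in this excerpt. One plausible reading, sketched below under the assumption that reputation is simply the fraction of cooperative moves among all recorded moves, is consistent with the averages reported above (near 1 for an almost always cooperating population, near 0 for an almost always defecting one):

```python
def reputation(cooperative_moves: int, defective_moves: int) -> float:
    """Assumed reading of expression (2): the fraction of a player's
    recorded moves that were cooperative (1.0 = always cooperated,
    0.0 = always defected)."""
    total = cooperative_moves + defective_moves
    if total == 0:
        return 0.0  # no recorded history yet
    return cooperative_moves / total

def average_reputation(records):
    """Mean reputation over a population of (cooperated, defected)
    move counts, as reported for each game setting."""
    scores = [reputation(c, d) for c, d in records]
    return sum(scores) / len(scores)
```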
6. Discussion and Conclusion
“Hard security mechanisms” such as authentication, access control and encryption have been used in different online business environments to reduce the chances of fraudulent acts [13, 14]. Such mechanisms might also include registration requirements, such as the provision of personal details including bank details, physical address and telephone numbers. While these mechanisms certainly contribute to keeping potentially fraudulent players out of the market, they also reduce the level of participation in terms of numbers. Furthermore, dishonest behaviors can still be demonstrated by players who pass these checks. The notion of trustworthiness in online societies is genuinely complex [8], and such hard security mechanisms might not be enough to curb the temptation of defectors. If an eBay-like online business environment were completely open, meaning that there were no registration requirements for players and the system preserved total anonymity, the situation would in the worst case resemble the one depicted in our non-incentive business game. We proposed a pro-incentive model which was shown to be favorable for the evolution of cooperative players in the society, thus leading to cooperation and trustworthiness, with highly reputed players in it. Our investigation shows that the pro-incentive model, which is an interrelated representation of cooperative behavior and reputation, would be even more suitable for an open business environment. Our future investigation in this line could involve the formalization of the incentive model, specifying the reputation to reflect the incentive for cooperation.
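The formalization of the incentive model is left to future work, but the interrelation of cooperative behavior and reputation admits a simple sketch. Assume, hypothetically, that each player's raw game payoff is augmented by a bonus proportional to its reputation, so that cooperation, which builds reputation, becomes the fitter strategy under evolution:

```python
def incentive_payoff(raw_payoff: float, reputation: float,
                     incentive_weight: float = 1.0) -> float:
    """Hypothetical pro-incentive payoff: the raw payoff from one play
    of the business game plus a bonus proportional to the player's
    reputation score in [0, 1]. The linear coupling is an assumption,
    not the model formalized in the paper."""
    return raw_payoff + incentive_weight * reputation
```

Under such a coupling, a defector's short-term gain is eroded over repeated plays as its reputation, and hence its bonus, falls.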
7. References
[1] Dellarocas, C. The Digitization of Word-of-Mouth: Promise and Challenges of the Online Feedback Mechanisms. MIT Sloan School of Management Working Paper, MIT, Cambridge, MA, USA, 2003, p. 36.
[2] Aberer, K., et al. P-Grid: a self-organizing structured P2P system. In: ACM SIGMOD, 2003, pp. 29-33.
[3] Josang, A., Ismail, R. and Boyd, C. A survey of trust and reputation for online service provision. Decision Support Systems, 2007, 43(2):618-644.
[4] Dellarocas, C. Reputation mechanism design in online trading environments with pure moral hazard. Information Systems Research, 2005, 16(2):209-230.
[5] eBay: the world's online marketplace. 2007 [cited 2 April 2007]. Available from: http://www.ebay.com.
[6] Xiong, L. and Liu, L. PeerTrust: supporting reputation-based trust for peer-to-peer electronic communities. IEEE Transactions on Knowledge and Data Engineering, 2004, 16(7):843-857.
[7] Axelrod, R. The Evolution of Cooperation. Basic Books, New York, 1984.
[8] Aberer, K., et al. The complex facets of reputation and trust. In: 9th Fuzzy Days, International Conference on Computational Intelligence, Fuzzy Logic, Neural Networks and Evolutionary Algorithms, Dortmund, Germany, 2006.
[9] Janssen, M. Evolution of cooperation when feedback to reputation scores is voluntary. Journal of Artificial Societies and Social Simulation, 2006, 9(1):17.
[10] Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning. Pearson Education, 1989.
[11] Errity, A. Evolving Strategies for Prisoner's Dilemma. Dublin City University, 2003.
[12] Yu, B. and Singh, M.P. A social mechanism of reputation management in electronic communities. In: 4th International Workshop on Cooperative Information Agents (CIA), Springer-Verlag, Berlin, 2000.
[13] Rasmusson, L. and Janssen, S. Simulated social control for secure Internet commerce. In: Proceedings of the New Security Paradigms Workshop, 1996.
[14] Josang, A. and Haller, J. Dirichlet reputation systems. In: 2nd International Conference on Availability, Reliability and Security (ARES 2007), Austria, 2007.
Author Index A Abid Muhammad............................ 609 Akmal Muhammad............................ 609 Awang Atikah Haji .............................. 71
B Bahloul Khaled.................................... 313 Bai Chengjun................................ 691 Jing ........................................ 743 Balzan Franklin.................................. 855 Bin Hongzan................................. 681 Bista Sanat Kumar .......................... 875 Borg Jonathan ......................... 137, 855 Bouras Abdelaziz ....................... 313, 333 Brunel Stéphane................................. 303 Bu Zhonghong............................. 631 Buzon Laurent................................... 313
C Cai Dongmei ................................ 549 Jin .................................. 447, 763 Tiefeng................................... 231
Cao Dongxing................................211 Guozhong .................................91 Chakpitak Nopasit ...................................333 Chan W. L. ......................................711 Chau Hau Hing ....................................3 Chen Hang .........................................81 Jiqing..............................529, 661 Ming .......................................815 Qiong......................................509 X.S. ........................................835 Xuebin....................................763 Cheng H.............................................721 Hongmei.................................261 Clarke Derek......................................479 Conway A. P ........................................221 Cowling Peter .......................................875 Cui Chunxiang ..............................211 Z. ............................................701
D Dahal Keshav....................................875 de Pennington Alan............................................3 Demoly Frédéric ..................................117 Deng Qian-Wang .............................177 Zhiyong ..................................681
Derigent W. .......................................... 805 Ding Shuhui.................................... 261 Dong Dandan................................... 561 Du Zongzhan ............................... 283 Duan Guolin ............................ 447, 763 Q. J......................................... 499 Ducellier Guillaume .............................. 157 Durupt A. ........................................... 805
E Eynard Benoît ............................ 127, 157
F Fan Pingqing................................. 375 Xianfeng ................................ 405 Zhun......................................... 13 Fang Zongde ................................... 345 Farrugia Philip...................................... 855 Fu M.W............................... 323, 711
G Gao Qi ........................................... 283 Ge Zhenghao ............................... 671 Girard Philippe.................................. 303 Gomes Samuel ................................... 117
Gou Yanni........................................81 Guo Bao-feng.................................581 Haixia .......................................91 Hui..........................................345 Rui-Feng.................................385 Y.............................................835
H Han Xiaowei ..................................671 Xinglin ...................................793 He Qingping.................................825 Zhaoxia...........................459, 865 Hein Lars ..........................................13 Hogg David..........................................3 Hou Yuemin.....................................31 Huang Chang-biao .............................651
I Ion W. J ........................................221
J Jackson M. R .......................................469 Ji Linhong ....................................31 X.L .........................................601 Zhuoshang..............................815 Jiang Jianjun ....................................147 Kai-yong.........................437, 651 Pingyu ....................................273
Jin Miao....................................... 581 Jou Rong-Yuan............................. 415 Jowers Iestyn ......................................... 3
K Kang Lan......................................... 425 Khan Muhammad Shahid................ 395
L Lan Fengchong ..................... 529, 661 Lhoste Pascal..................................... 241 Li Bing ....................................... 355 Dazhi...................................... 405 Guoping ................................. 621 J.............................................. 499 Jian......................................... 355 Jingyang................................. 671 Lingfang................................. 405 Pei-Nan .................................. 385 Shan ................................. 41, 365 Shangping .............................. 355 Shaobo ................................... 539 Shiyun.................................... 231 Lian Chaochun ............................... 199 Liao Degang................................... 405 W.H. ...................................... 835 Lin Jun-yi ............................. 437, 651 Qiao ....................................... 783 Yu Hua................................... 489 Zhongqin................................ 199
Liu Bin..................................437, 651 Geng ................. 61, 459, 631, 865 Hongxun.................................763 Jihong .....................................187 Luning ....................................843 Mei .........................................261 Qiang......................................425 Xingdong................................783 Xintian....................................375 Zengmin .................................631 Lombard Muriel.....................................241 Lu J. .............................................711 Luo Ming .......................................365 Yougao ...................................681 Youxin....................................405 Lv B.S..........................................835 Yuan-jun.................................509 Lynn A.............................................221
M Ma Gui Chun ..................................51 Mahdjoub Morad .....................................117 Malik Saad Jawed.............................395 Matta Nada .......................................127 McKay Alison .........................................3
O Ogrodnik Peter .......................................355 Ouzrout Yacine ....................................333
P Pa P.S.......................................... 103 Parkin R. M....................................... 469 Parvez Shahid .................................... 609
Q Qin Wenjie.................................... 561 Xiansheng .............................. 743 Qu Yaning ................................... 283 Zhaofu.................................... 549
R Remy S............................................. 805 Roucoules Lionel............................. 127, 157 Ruan Feng ....................................... 425
S Sagot Jean-Claude ........................... 117 Shan Linhai............................. 773, 783 Shangguan Ning ....................................... 437 Shen Xiaobin .................................. 825 Yunbo .................................... 345 Shi H.B......................................... 835 Yao-Yao................................. 753 Si Guang-ju ................................ 641
Song Fangzhen ................................691 Spiteri Christopher.............................137 Stewart Barry.........................................21 Su Dongning................................621 Dong-ning ..............................641 Tzu-Pin...................................103 Y.............................................835 Sun Beibei .....................................571 Chao .......................................601 Limei ......................................273 ShuDong.................................729 Yanbo .....................................793 Ying-da...................................509 Zhaoyang................................187 Sureephong Pradorn ...................................333
T Tan Runhua .....................................91 Tang Hong.......................................753 Teng Duo...........................................81 Tian Y.L .........................................469 Tong H.............................................721 Shurong ..................................127 Xufeng....................................293
W Wang Bailing....................................147 Chunhe ...........................773, 793 Dongbo...................................293 Dong-Bo.................................251 Haiwei ...................... 61, 459, 865
Huicai .................................... 293 Jinhua..................................... 519 Jinlun ..................................... 529 Jinmin .................................... 591 Junbiao................................... 147 Juqun........................................ 61 Keqin ..................................... 127 Ming-di .................................. 641 Run-Xiao ....................... 251, 499 Wendan.................................. 743 Y. ........................................... 721 Yuchao................................... 529 Zhanxi.................................... 743 Wei Bingyang................................ 345 Fajie ....................................... 167 Qiusheng................................ 793 Weston........................................ 701 Wodehouse A. J......................................... 221 Wu Baohai.............................. 41, 365 Fu Jia ....................................... 51 Liyan...................................... 631
X Xie Qingsheng.............................. 539 Xu Feng ....................................... 671 Jian......................................... 211 Wen-qin ................................. 509 Wubin .................................... 355 Zhihua.................................... 571
Y Yan Xiu-Tian............. 21, 71, 251, 479 Y.H ........................................ 469 Yang Bo .......................................... 843 Diming ................................... 479 Ge .......................................... 211
Guanci ....................................539 Xiaohui...........................459, 865 Yao Shanshan ................................167 Tao .................................447, 763 Zuoping ..................................199 Yeung Y.............................................721 Yin Zeyong ...................................519 Yu De-Jie .....................................177 Qiang......................................753 XiaoYi....................................729
Z Ze Xiangbo..................................843 Zhan Yiting .....................................815 Zhang Dinghua............................41, 365 Fen..........................................793 Fenghua..................................783 Fuying ....................................825 Guoliang.................................773 J. R. ........................................499 Pengcheng ..............................591 Shichao...................................147 Shu Sheng ................................51 Xiaoyang ................................571 Yingfeng.................................273 Zhao Bo...........................................375 Dong.......................................549 Ning........................................345 Qian........................................251 Shi-yan ...................................581 Zheng Xiangzhou ..............................681 Zhong Kangmin.................................621 Kang-min ...............................641 Peisi........................................261
Zhou Chuan Hong........................... 157 Jiangqi.................................... 199 Ling................................ 773, 783 Yunjiao .................................. 661
Zhu Guolei.......................................81 Ning..........................................81 Wenfeng .................................199 Yanhua ...................................591 Zolghadri Marc .......................................303