Advances in Industrial Engineering and Operations Research
Lecture Notes in Electrical Engineering, Volume 5

Advances in Industrial Engineering and Operations Research, Alan H. S. Chan and Sio-Iong Ao, ISBN 978-0-387-74903-7, 2008
Advances in Communication Systems and Electrical Engineering, Xu Huang, Yuh-Shyan Chen, and Sio-Iong Ao, ISBN 978-0-387-74937-2, 2008
Time-Domain Beamforming and Blind Source Separation, Julien Bourgeois and Wolfgang Minker, ISBN 978-0-387-68835-0, 2007
Digital Noise Monitoring of Defect Origin, Telman Aliev, ISBN 978-0-387-71753-1, 2007
Multi-Carrier Spread Spectrum 2007, Simon Plass, Armin Dammann, Stefan Kaiser, and K. Fazel, ISBN 978-1-4020-6128-8, 2007
Alan H. S. Chan • Sio-Iong Ao Editors
Advances in Industrial Engineering and Operations Research
Edited by: Alan H.S. Chan City University of Hong Kong Department of Manufacturing Engineering and Engineering Management 83 Tat Chee Avenue KOWLOON HONG KONG/PEOPLES REP. OF CHINA
Sio-Iong Ao IAENG Secretariat 37-39 Hung To Road Unit 1, 1/F HONG KONG PEOPLE’S REPUBLIC OF CHINA
Library of Congress Control Number: 2007935316 ISBN 978-0-387-74903-7
e-ISBN 978-0-387-74905-1
Printed on acid-free paper.

© 2008 Springer Science+Business Media, LLC

All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.

The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

9 8 7 6 5 4 3 2 1

springer.com
Preface
A large international conference on industrial engineering and operations research was held in Hong Kong, March 21–23, 2007, as part of the International MultiConference of Engineers and Computer Scientists (IMECS) 2007. IMECS 2007 was organized by the International Association of Engineers (IAENG), a nonprofit international association for engineers and computer scientists. The IMECS conferences serve as platforms for the engineering community to meet and exchange ideas, and they strike a balance between theory and application development. The conference committees consist of over 200 members, mainly research center heads, faculty deans, department heads, professors, and research scientists from over 30 countries, so the conferences are truly international meetings with a high level of participation from many countries. The response to the multi-conference has been excellent: more than 1100 manuscripts were submitted for IMECS 2007. All submitted papers went through the peer review process, and the overall acceptance rate was 58.46%.

This volume contains revised and extended research articles on industrial engineering and operations research written by prominent researchers participating in IMECS 2007. Topics covered include quality management systems, reliability and quality control, engineering experimental design, computer-supported collaborative engineering, human factors and ergonomics, computer-aided manufacturing, manufacturing processes and methods, engineering management and leadership, optimization, transportation network design, stochastic modeling, queuing theory, and industrial applications. The papers are representative of these subjects and reflect the state of the art in their fields. This book presents recent advances in industrial engineering and operations research and serves as a reference work for researchers and graduate students working in these areas.
Contents
Preface
Contributors
1 A Comprehensive Movement Compatibility Study for Hong Kong Chinese . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . W.H. Chan and Alan H.S. Chan 1.1 Introduction 1.2 Methods 1.2.1 Experimental Design 1.2.2 Subjects 1.3 Results and Discussion 1.3.1 Response Preference and Mean Stereotype Strength 1.3.2 Reversibility 1.3.3 Response Time 1.4 Conclusion and Recommendations References 2 A Study of Comparative Design Satisfaction Between Culture and Modern Bamboo Chair. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Vanchai Laemlaksaku and Sittichai Kaewkuekool 2.1 Introduction 2.2 Methodology 2.2.1 Design and Fabrication 2.2.2 Sample Selection 2.2.3 Questionnaire 2.3 Results 2.3.1 Dimensional Appropriateness of Chair 2.3.2 Comfort Level of Chair
2.4 Conclusions 2.4.1 Dimensional Appropriateness of the Chair 2.4.2 Comfort Level 2.4.3 Aesthetic Appeal of the Modern Bamboo Chair 2.5 Recommendations References 3 Factors Influencing Symbol-Training Effectiveness . . . . . . . . . . . . . . . . . . Annie W.Y. Ng and Alan H.S. Chan 3.1 Introduction 3.2 Factors Influencing Symbol-Training Effectiveness 3.2.1 Training Method 3.2.2 Other Training Factors 3.3 Experimental Design and Analysis for Symbol-Training Effectiveness Research 3.4 Conclusion References 4 Multiple-Colony Ant Algorithm with Forward–Backward Scheduling Approach for Job-Shop Scheduling Problem . . . . . . . . . . . . . Apinanthana Udomsakdigool and Voratas Kachitvichyanukul 4.1 Introduction 4.2 Problem Definition and Graph-Based Representation 4.2.1 Problem Definition 4.2.2 Graph-Based Representation 4.2.3 The General Concept of ACO Algorithm 4.2.4 Memory Requirement for Ant and Colony 4.2.5 Hierarchical Cooperation in Multiple Colonies 4.2.6 Backward Scheduling Approach 4.3 Description of Algorithm 4.3.1 Initialize Pheromone and Parameter Setting 4.3.2 Local Improvement 4.3.3 Pheromone Updating 4.3.4 Restart Process 4.4 Experimental Results 4.5 Conclusion and Recommendation 4.5.1 Conclusion 4.5.2 Recommendation References 5 Proposal of New Paradigm for Hand and Foot Controls in the Context of Spatial Compatibility Effect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Alan H.S. Chan and Ken W.L. Chan 5.1 Introduction 5.1.1 Spatial Stimulus–Response (SR) Compatibility
5.2 Research Plan and Methodology 5.3 Experiment 1: Spatial SR Compatibility Effect of Foot Controls 5.3.1 Design 5.4 Experiment 2: Spatial SR Compatibility Effect of Hand and Foot Controls 5.4.1 Design 5.5 Experiment 3: Spatial SR Compatibility Effect of Hand and Foot Controls for Stimulus and Response Arrays on Orthogonal Planes 5.5.1 Design 5.6 Experiment 4: Spatial SR Compatibility Effect of Hand and Foot Controls for Stimulus and Response Arrays on Parallel and Orthogonal Planes 5.6.1 Design 5.7 Analysis References 6 Development of a Mathematical Model for Process with S-Type Quality Characteristics to a Quality Selection Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . K. Tahera, R.N. Ibrahim, and P.B. Lochert 6.1 Introduction 6.2 Model Development 6.3 Genetic Algorithm 6.3.1 Genetic Representation 6.3.2 Population Size 6.3.3 Generating Initial Population 6.3.4 Fitness Function 6.3.5 Selection 6.3.6 Mating or Crossover 6.3.7 Mutation 6.3.8 Termination Criteria 6.3.9 Generic Algorithm Parameters 6.4 Numerical Example 6.5 Conclusions References 7 Temporal Aggregation and the Production Smoothing Model: Evidence from Electronic Parts and Components Manufacturing in Taiwan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chien-wen Shen 7.1 Introduction 7.2 Literature Review 7.3 Model Specifications
7.4 Empirical Results 7.4.1 Tests of Production-Smoothing Hypotheses 7.4.2 Model Analyses 7.5 Conclusions References 8 Simulations of Gear Shaving and the Tooth Contact Analysis . . . . . . . Shinn-Liang Chang, Hung-Jeng Lin, Jia-Hung Liu, and Ching-Hua Hung 8.1 Introduction 8.2 Mathematical Model of the Shaving Machine 8.3 Tooth Contact Analysis of the Shaved Gear 8.4 Longitudinal Tooth Crowning Introduced by Litvin 8.5 Conclusion References 9 On Aggregative Methods of Supplier Assessment . . . . . . . . . . . . . . . . . . . Vladim´ır Modr´ak 9.1 Introduction 9.2 Research Background and Motivation 9.3 The Importance of Suppliers Assessment and Selection 9.4 Alternative Techniques of Supplier Assessment 9.4.1 Assessment of the Quality of Supplied Products 9.4.2 Assessment by Adherence to Time Schedules 9.4.3 Supplier Assessment with Respect to the Schedule of Quantity 9.4.4 Aggregative Supplier Assessment 9.5 Discussion and Closing Remarks References 10 Human Factors and Ergonomics for Nondestructive Testing . . . . . . . . B.L. Luk and Alan H.S. Chan 10.1 Introduction 10.2 Principles and Procedures 10.2.1 Dye Penetrant Inspection 10.2.2 Magnetic Particles Inspection 10.2.3 Ultrasonic Inspection 10.2.4 Eddy Current 10.3 Human Abilities and Skills Required 10.3.1 Perceptual and Cognitive Abilities 10.3.2 Physical Strength 10.3.3 Surface Preparation Technique 10.4 Ergonomics, Safety, and Health Problems 10.4.1 Illumination
10.4.2 Working Posture 10.4.3 Potential Chemical Hazards 10.5 Conclusions and Recommendations References 11 A Novel Matrix Approach to Determine Makespan for Zero-Wait Batch Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Amir Shafeeq, M.I. Abdul Mutalib, K.A. Amminudin, and Ayyaz Muhammad 11.1 Introduction 11.2 Batch Process 11.2.1 Makespan for Single-Product Batch Processing 11.2.2 Makespan for Multiproduct Batch Process 11.3 The Matrix Method 11.4 Application of the Matrix Method 11.5 Conclusion References 12 Interactive Meta-Goal Programming: A Decision Analysis Approach for Collaborative Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . Hao W. Lin, Sev V. Nagalingam, and Grier C.I. Lin 12.1 Introduction 12.2 Decision Making in Collaborative Manufacturing 12.3 Interactive Meta-Goal Programming-Based Decision Analysis Framework 12.3.1 Meta-Goals 12.3.2 Interactive Process 12.3.3 Interactive Meta-Goal Programming-Based Decision Analysis Workflow 12.4 Example 12.5 Conclusion References 13 Nonlinear Programming Based on Particle Swarm Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Takeshi Matsui, Kosuke Kato, Masatoshi Sakawa, Takeshi Uno, and Kenji Morihara 13.1 Introduction 13.2 Nonlinear Programming Problem 13.3 Particle Swarm Optimization 13.4 Improvement of Particle Swarm Optimization 13.4.1 Generation of Initial Search Positions of Particles 13.4.2 Modified Move Schemes of a Particle 13.4.3 Division of the Swarm into Two Subswarms 13.4.4 Secession
13.4.5 Multiple Stretching Technique 13.4.6 The Procedure of Revised PSO 13.5 Numerical Example 13.6 Conclusions References 14 A Heuristic for the Capacitated Single Allocation Hub Location Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jeng-Fung Chen 14.1 Introduction 14.2 Previous Related Studies 14.3 A Model 14.4 Heuristic 14.4.1 Determining the Number of Hubs 14.4.2 Selecting Hub Locations 14.4.3 Allocating Nonhubs To Hubs 14.4.4 Heuristic SATLCHLP 14.5 Computational Results 14.5.1 Australia Post Data Set 14.5.2 Results 14.6 Conclusions and Suggestions for Future Research References 15 Multimodal Transport: A Framework for Analysis . . . . . . . . . . . . . . . . . Mark K.H. Goh, Robert DeSouza, Miti Garg, Sumeet Gupta, and Luo Lei 15.1 Introduction 15.2 Literature Review 15.3 Theoretical Framework 15.4 Research Methodology 15.5 Case Study 15.5.1 Railways 15.5.2 Road 15.5.3 Maritime 15.5.4 Multimodal 15.6 Future Directions for Research References 16 Fractional Matchings of Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jiguo Yu and Baoxiang Cao 16.1 Terminology and Notation 16.2 Basic Results on Fractional Matching 16.3 Fractional Factor-Critical Graph 16.4 Fractional Deleted Graphs 16.5 Fractional Covered Graphs 16.6 Fractional Extendable Graph
16.7 Conclusion References 17 Correlation Functions for Dynamic Load Balancing of Cycle Shops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Claudia Fiedler and Wolfgang Meyer 17.1 Problem Statement 17.2 Load-Balancing Systems: State of the Art 17.3 Process Plan and Resource Model 17.4 Theory of Correlation Scheduling 17.4.1 Two Processes Being Sent to the Plant 17.4.2 Three Processes Being Sent to the Plant 17.4.3 Generalization to n Processes 17.5 Dynamic Scheduling 17.5.1 Collision Functions 17.5.2 Scheduling Procedure 17.6 Load Balancing 17.6.1 Load Balancing at System Level 17.6.2 Load Balancing at Subsystem Level 17.7 Conclusion References 18 Neural Network-Based Integral Sliding Mode Control for Nonlinear Uncertain Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . S.W. Wang and D.L. Yu 18.1 Introduction 18.1.1 Sliding Mode Control 18.1.2 Integral Sliding Mode Control 18.1.3 Radial Basis Function Neural Network Approximation 18.2 Problem Statement 18.3 New Integral Sliding Surface 18.4 Sliding Mode Control Law 18.5 Numerical Example 18.6 Conclusions References 19 Decentralized Neuro-Fuzzy Control of a Class of Nonlinear Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Miguel A. Hern´andez and Yu Tang 19.1 Introduction 19.2 Problem Statement 19.3 Recurrent Neuro-fuzzy Networks 19.4 Design of the Decentralized Control 19.4.1 Control Law 19.4.2 Stability Analysis 19.5 Output Feedback
19.6 Experimental Results 19.7 Conclusions References 20 A New Training Algorithm of Adaptive Fuzzy Control for Chaotic Dynamic Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chun-Fei Hsu, Bore-Kuen Lee, and Tsu-Tian Lee 20.1 Introduction 20.2 Problem Formulation 20.3 Design of AFC with PID-Type Learning Algorithm 20.3.1 Approximation of Fuzzy System 20.3.2 Design of PID-AFC 20.3.3 Design of PID-AFC with Bound Estimation 20.4 Simulation Results 20.5 Conclusions References 21 General-Purpose Simulation Management for Satellite Navigation Signal Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ge Li, Xinyu Yao, and Kedi Huang 21.1 Introduction 21.2 The Real-Time Application Requirements 21.2.1 Requirements for the Simulation Architecture 21.2.2 Requirements of the Real-Time Calculation for the High-Fidelity Model 21.2.3 Requirements of the Data Communication for Different Layers 21.2.4 Requirements of the Real-Time Simulation Engine 21.3 A General-Purpose Architecture for Satellite Navigation Signal Simulation 21.4 General-Purpose Real-Time Distributed Simulation Managements 21.4.1 Experiment Design and Management Techniques 21.4.2 Simulation Database Techniques 21.4.3 Simulation Management Techniques 21.4.4 System Scalability Realization 21.5 Conclusions References 22 Multilayered Quality-of-Service Architecture with Cross-layer Coordination for Teleoperation System . . . . . . . . . . . . . . . . . . . . . . . . . . . . X.U. Lei and L.I. Guo-dong 22.1 Introduction 22.2 Network Performance Parameters Analysis 22.3 Architectural Framework
22.4 Communication Network QoS Enhancement 22.4.1 Network Layer QoS Optimization 22.4.2 Data Link Layer QoS Optimization 22.5 Resource Network QoS Enhancement 22.5.1 Transport Layer QoS Optimization 22.5.2 Presentation Layer QoS Enhancing 22.5.3 Session Layer QoS Supervision 22.5.4 Application Layer QoS 22.6 Cross-layer Coordination and Adaptation 22.7 Application Scenarios 22.8 Conclusion References 23 Improvement of State Estimation for Systems with Chaotic Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pitikhate Sooraksa and Prakob Jandaeng 23.1 Introduction 23.2 Improvement of Adaptive Kalman Filtering 23.3 Results 23.3.1 Model 23.3.2 Computer Simulation 23.4 Conclusion References 24 Combined Sensitivity–Complementary Sensitivity Min–Max Approach for Load Disturbance–Setpoint Tradeoff Design . . . . . . . . . . Ramon Vilanova and Orlando Arrieta 24.1 Introduction 24.2 Problem Formulation 24.2.1 PID Controller 24.2.2 Process Model 24.2.3 Design Problem Formulation 24.3 Solution to the Optimal Approximation Problem 24.4 Step Response Tuning 24.5 Disturbance Attenuation Tuning 24.6 Example 24.7 Trade-off Tuning 24.8 Conclusions References 25 Nonlinear Adaptive Sliding Mode Control for a Rotary Inverted Pendulum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yanliang Zhang, Wei Tech Ang, Jiong Jin, Shudong Zhang, and Zhihong Man 25.1 Introduction
25.2 Background 25.2.1 Mathematical Model of System 25.2.2 Sliding Mode Control 25.2.3 Adaptive Control 25.3 Sliding Mode Control: Design and Simulation 25.3.1 Linear Sliding Mode Control 25.3.2 Nonlinear Sliding Mode Control 25.4 Nonlinear Adaptive Sliding Mode Control Design and Simulation 25.4.1 System Parameters 25.4.2 Parameter Selection 25.5 Experimental Results 25.6 Conclusion References 26 Robust Load Frequency Sliding Mode Control Based on Uncertainty and Disturbance Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . P.D. Shendge, B.M. Patre, and S.B. Phadke 26.1 Introduction 26.2 Dynamic Model for Load Frequency Control 26.3 Model Following and UDE-Based Control Law 26.4 Design of Control 26.4.1 Uncertainty and Disturbance Estimation with First-Order Filter 26.4.2 Uncertainty and Disturbance Estimation with Second-Order Filter 26.4.3 Uncertainty and Disturbance Estimation with nth-Order Filter 26.5 Model Following and UDE Based LFC 26.6 Results 26.7 Conclusion References 27 Robust Intelligent Motion Control for Linear Piezoelectric Ceramic Motor System Using Self-constructing Neural Network . . . . Chun-Fei Hsu, Bore-Kuen Lee, and Tsu-Tian Lee 27.1 Introduction 27.2 Problem Formulation 27.3 Robust Intelligent Motion Controller Design 27.3.1 Description of SCNN 27.3.2 Approximation of SCNN 27.3.3 Design of RIMC 27.4 Experimental Results 27.5 Conclusions References
28 Development of Hybrid Magnetic Bearings System for Axial-Flow Blood Pump . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lim Tau Meng and Cheng Shanbao 28.1 Introduction 28.2 Design of Axial-flow Blood Pump 28.3 Principles of Magnetic Bearings 28.4 Principles of Lorentz-type Motor 28.5 Control of the HMBs System 28.6 Performance of the HMBs System 28.7 Conclusions and Future Work References 29 Critical Angle for Optimal Correlation Assignment to Control Memory and Computational Load Requirements in a Densely Populated Target Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D.M. Akbar Hussain and Zaki Ahmed 29.1 Introduction 29.2 Critical Angle Representation 29.3 Motion Model Consideration 29.4 Implementation 29.5 Performance Parameter 29.6 Simulation Results 29.7 Conclusion References 30 High-Precision Finite Difference Method Calculations of Electrostatic Potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . David Edwards, Jr. 30.1 Introduction 30.1.1 Historical Development 1970–2007 30.1.2 Brief Description of the Process 30.2 Construction of Order-10 Algorithm for General Mesh Points and the Definition of the grad6 Function 30.3 Properties of the grad6 Function and the Definition of the Maximum-Error Function 30.4 Comparison of Different Algorithms for the Two-Tube Zero-Gap Lens 30.5 Application to Region Construction 30.6 Dependence of Algorithm Precision upon the Set of Surrounding Points 30.7 Notes of Caution 30.8 Summary and Conclusion Appendix A References
31 Newton–Tau Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Karim Ivaz and Bahram Sadigh Mostahkam 31.1 Introduction 31.2 Solving Nonlinear Fredholm Integral Equation 31.2.1 Formulation of the Problem 31.2.2 Application of the Newton Method 31.2.3 Application of the Tau Method 31.2.4 Numerical Examples 31.3 Solving a System of Nonlinear Integral Equations 31.3.1 Formulation of the Problem 31.3.2 Application of the Newton Method to SNFIE 31.3.3 The Tau Method Applied to (8) 31.3.4 Numerical Examples 31.4 Solving Nonlinear Integro-Differential Equation 31.4.1 Formulation of the Problem 31.4.2 Application of the Newton Method 31.4.3 Application of the Tau Method 31.4.4 Numerical Examples References 32 Reconfigurable Hardware Implementation of the Successive Overrelaxation Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Safaa J. Kasbah, Ramzi A. Haraty, and Issam W. Damaj 32.1 Introduction 32.2 Description of the Algorithm 32.3 Reconfigurable Computing 32.3.1 Hardware Compilation 32.3.2 Handel-C Language 32.4 Hardware Implementation of SOR 32.5 Experimental Results 32.6 Conclusion References 33 Tabu Search Algorithm Based on Strategic Oscillation for Nonlinear Minimum Spanning Tree Problems . . . . . . . . . . . . . . . . . . . . . . Hideki Katagiri, Masatoshi Sakawa, Kosuke Kato, Ichiro Nishizaki, Takeshi Uno, and Tomohiro Hayashida 33.1 Introduction 33.2 Problem Formulation 33.3 Summary of Tabu Search 33.4 Tabu Search Algorithm Based on Strategic Oscillation for Nonlinear MST Problems 33.4.1 Initial Solution 33.4.2 Neighborhood Structure and Local Search 33.4.3 Tabu List and Aspiration Criterion
33.4.4 Strategic Oscillation 33.4.5 Diversification 33.5 Numerical Experiment 33.6 Conclusion References 34 Customization of Visual Lobe Measurement System for Testing the Effects of Foveal Load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Cathy H.Y. Chiu and Alan H.S. Chan 34.1 Introduction 34.2 Design 34.2.1 Additional Features 34.2.2 Stimuli 34.2.3 Software 34.2.4 Apparatus 34.2.5 Output 34.3 Conclusion References Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Contributors
Dr. Zaki Ahmed School of Electronic, Communication and Electrical Engineering, University of Hertfordshire, College Lane, Hatfield, Herts AL10 9AB UK, +44 1707286279,
[email protected] K.A. Amminudin Chemical Engineering Program, Universiti Teknologi Petronas, 31750, Tronoh, Perak, Malaysia Wei Tech Ang School of Mechanical and Aerospace Engineering, Nanyang Technological University, Blk N3, B4a-02A, Singapore 639798 Orlando Arrieta Telecommunication and System Engineering Department, ETSE, Universitat Aut‘onoma de Barcelona, 08193 Bellaterra, Barcelona, Spain, Orlando.Arrieta @uab.cat Baoxiang Cao School of Computer Science, Qufu Normal University, Ri-zhao, Shandong, 276826, P. R. China,
[email protected] Alan H.S. Chan Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Kowloon Tong, Kowloon, Hong Kong, alan.chan @cityu.edu.hk Ken W.L. Chan Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Kowloon Tong, Kowloon, Hong Kong, wl.chan @student.cityu.edu.hk
W.H. Chan Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong Shinn-Liang Chang Department of Power Mechanical Engineering, National Formosa University, Huwei, Yunlin, Taiwan 632, ROC. Jeng-Fung Chen Department of Industrial Engineering and Systems Management, Feng Chia University, P.O. Box 25-097, Taichung, Taiwan, R.O.C. 40724 Cathy H.Y. Chiu Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong Issam W. Damaj Department of Electrical and Computer Engineer, Dhofar University, Salalah, Sultanate of Oman, i
[email protected] David Edwards, Jr. IJL Research Center, Newark, Vt. 05871,
[email protected] Claudia Fiedler University of Technology, 21071 Hamburg, Germany,
[email protected] Miti Garg Research Engineer, The Logistics Institute—Asia Pacific, Block E3A, Level 3, 7 Engineering Drive 1, NUS, Singapore 117574,
[email protected] Mark K.H. Goh Director of Industry Research, The Logistics Institute—Asia Pacific, Block E3A, Level 3, 7 Engineering Drive 1, NUS, Singapore 117574,
[email protected] L.I. Guo-dong Computer Science and Technology Department North China Electric Power University, Beijing, 102206 P. R. China, lgdlgl
[email protected] Sumeet Gupta, Ph.D. Research Fellow, The Logistics Institute—Asia Pacific, Block E3A, Level 3, 7 Engineering Drive 1, NUS, Singapore 117574,
[email protected] Ramzi A. Haraty Division of Computer Science and Mathematics, Lebanese American University, Beirut, Lebanon,
[email protected] Tomohiro Hayashida Graduate School of Engineering, Hiroshima University, Kagami-yama 1-4-1, Higashi-hiroshima, Hiroshima, 739-8527 Japan,
[email protected]
Miguel A. Hern´andez Faculty of Engineering, National University of Mexico, FI-UNAM, P.O. Box 70273, 04510 Mexico D.F., Mexico Chun-Fei Hsu Department of Electrical Engineering, Chung Hua University, Hsinchu 300, Taiwan, Republic of China,
[email protected] Kedi Huang Institute for Automation, National University of Defense Technology, Changsha, Hunan, China, 410073 Ching-Hua Hung Mechanical Engineering Department, National Chiao Tung University, Hsinchu, Taiwan 300, ROC. D.M. Akbar Hussain Information and Security Analysis Research Centre, Department of Computer Science and Engineering, Aalborg University, Niels Bohs Vej 8, 6700, Esbjerg, Denmark,
[email protected], http://www.cs.aue.auc.dk/∼akbar/ R. N. Ibrahim Mechanical Engineering, Monash University, Wellington Road, Clayton 3800, Australia,
[email protected] Karim Ivaz Department of Mathematical Sciences, University of Tabriz, Tabriz, Iran, ivaz @tabrizu.ac.ir Prakob Jandaeng Department of Information Engineering, Faculty of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Chalongkrung Rd., Ladkrabang, Bangkok, Thailand 10520 Jiong Jin School of Mechanical and Aerospace Engineering, Nanyang Technological University, Blk N3, B4a-02A, Singapore 639798 Voratas Kachitvichyanukul The Department of Industrial System Engineering, School of Engineering and Technology, Asian Institute of Technology, P.O. Box 4, Klong Luang, Pathumthani 12120, Thailand Sittichai Kaewkuekool Department of Production Technology Education, King Mongkut’s University of Technology Thonburi, Thailand Safaa J. Kasbah Division of Computer Science and Mathematics, Lebanese American University, Beirut, Lebanon,
[email protected]
Hideki Katagiri Graduate School of Engineering, Hiroshima University, Kagami-yama 1-4-1, Higashi-hiroshima, Hiroshima, 739-8527 Japan,
[email protected] Kosuke Kato Graduate School of Engineering, Hiroshima University, Kagami-yama 1-4-1, Higashi-hiroshima, Hiroshima, 739-8527 Japan,
[email protected] Vanchai Laemlaksaku Department of Industrial Engineering Technology, King Mongkut’s Institute of Technology North Bangkok, Thailand Bore-Kuen Lee Department of Electrical Engineering, Chung Hua University, Hsinchu 300, Taiwan, Republic of China,
[email protected] Tsu-Tian Lee Department of Electrical Engineering, National Taipei University of Technology, Taipei 106, Taiwan, Republic of China,
[email protected] Luo Lei, Ph.D. Research Fellow, The Logistics Institute—Asia Pacific, Block E3A, Level 3, 7 Engineering Drive 1, NUS, Singapore 117574,
[email protected] X.U. Lei Professor, Computer Science and Technology Department, North China Electric Power University, Beijing, P. R. China,
[email protected] Ge Li Institute for Automation, National University of Defense Technology, Changsha, Hunan, China, 410073,
[email protected] Grier C.I. Lin Centre for Advanced Manufacturing Research, University of South Australia, SA 5095 Australia (e-mail:
[email protected]) Hao W. Lin Centre for Advanced Manufacturing Research, University of South Australia, SA 5095 Australia (61-08-8302-3112; fax: 61-08-8302-5292; e-mail:
[email protected]) Hung-Jeng Lin Department of Power Mechanical Engineering, National Formosa University, Huwei, Yunlin, Taiwan 632, ROC. Jia-Hung Liu Mechanical Engineering Department, National Chiao Tung University, Hsinchu, Taiwan 300, ROC. P.B. Lochert Mechanical Engineering, Monash University, Wellington Road, Clayton 3800, Australia,
[email protected]
B.L. Luk City University of Hong Kong, Kowloon Tong, Hong Kong Zhihong Man School of Mechanical and Aerospace Engineering, Nanyang Technological University, Blk N3, B4a-02A, Singapore 639798 Lim Tau Meng Associate Professor, Nanyang Technological University, Singapore, mtmlim @ntu.edu.sg Wolfgang Meyer University of Technology, 21071 Hamburg, Germany,
[email protected]
Takeshi Matsui Graduate School of Engineering, Hiroshima University, fmatsui@msl.sys.hiroshima-u.ac.jp
Vladimír Modrák Technical University of Košice, Faculty of Manufacturing Technologies, Bayerova 1, 080 01 Prešov, Slovakia
Kenji Morihara Graduate School of Engineering, Hiroshima University, moriharag@msl.sys.hiroshima-u.ac.jp
Bahram Sadigh Mostahkam Department of Mathematical Sciences, University of Tabriz, Tabriz, Iran,
[email protected] Ayyaz Muhammad Chemical Engineering Program, Universiti Teknologi Petronas, 31750, Tronoh, Perak, Malaysia M.I. Abdul Mutalib Chemical Engineering Program, Universiti Teknologi Petronas, 31750, Tronoh, Perak, Malaysia Sev V. Nagalingam Centre for Advanced Manufacturing Research, University of South Australia, SA 5095 Australia (e-mail:
[email protected]) Annie W.Y. Ng Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong Ichiro Nishizaki Graduate School of Engineering, Hiroshima University, Kagami-yama 1-4-1, Higashi-hiroshima, Hiroshima, 739-8527 Japan,
[email protected] B.M. Patre SGGS Institute of Engineering and Technology, Vishnupuri, Nanded - 431 606, India,
[email protected]
S.B. Phadke Defence Institute of Advanced Technology, Pune- 411 025, India, sbphadke @hotmail.com Masatoshi Sakawa Graduate School of Engineering, Hiroshima University, Kagami-yama 1-4-1, Higashi-hiroshima, Hiroshima, 739-8527 Japan,
[email protected] Amir Shafeeq Chemical Engineering Program, Universiti Teknologi Petronas, 31750, Tronoh, Perak, Malaysia Cheng Shanbao Ph.D. research student, Nanyang Technological Univer-sity, Singapore, chen0181 @ntu.edu.sg Chien-wen Shen Department of Logistics Management, National Kaohsiung First University of Science and Technology, Taiwan P.D. Shendge College of Engineering, Pune-411 005, India,
[email protected] Pitikhate Sooraksa Department of Information Engineering, Faculty of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Chalongkrung Rd., Ladkrabang, Bangkok, Thailand 10520,
[email protected] Robert De Souza, Ph.D. Executive Director, The Logistics Institute—Asia Pacific, Block E3A, Level 3, 7 Engineering Drive 1, NUS, Singapore 117574,
[email protected] K. Tahera Mechanical Engineering, Monash University, Wellington Road, Clayton 3800, Australia,
[email protected] Yu Tang Faculty of Engineering, National University of Mexico, FI-UNAM, P.O. Box 70-273, 04510 Mexico D.F., Mexico Takeshi Uno Graduate School of Engineering, Hiroshima University, Kagami-yama 1-4-1, Higashi-hiroshima, Hiroshima, 739-8527 Japan,
[email protected] Apinanthana Udomsakdigool The department of Industrial Engineering Technology, College of Industrial Technology, King Mongkut’s Institute of Technology North Bangkok, 1518 Pibulsongkram Road, Bang Sue, Bangkok, 10800, Thailand
Ramon Vilanova Telecommunication and System Engineering Department, ETSE, Universitat Aut‘onoma de Barcelona, 08193 Bellaterra, Barcelona, Spain, Ramon.Vilanova @uab.cat S.W. Wang Weihai Yuanhang Technology Development Co., Ltd. 19 Tangshan Road, Hi-tech District, Weihai, Shandong, 264209, P.R. China,
[email protected] Xinyu Yao Institute for Automation, National University of Defense Technology, Changsha, Hunan, China, 410073 D.L. Yu Control Systems Research Group, School of Engineering, Liverpool John Moores University, Byrom Street, Liverpool, L3 3AF, UK,
[email protected] Jiguo Yu School of Computer Science, Qufu Normal University, Ri-zhao, Shandong, 276826, P. R. China,
[email protected] Shudong Zhang School of Mechanical and Aerospace Engineering, Nanyang Technological University, Blk N3, B4a-02A, Singapore 639798 Yanliang Zhang School of Mechanical and Aerospace Engineering, Nanyang Technological University, Blk N3, B4a-02A, Singapore 639798
Chapter 1
A Comprehensive Movement Compatibility Study for Hong Kong Chinese

W.H. Chan and Alan H.S. Chan
Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong
Abstract This paper reviews a study, using real mechanical controls, of the strength and reversibility of direction-of-motion stereotypes and of response times for the most common control–display configurations in the human–machine interface for Hong Kong Chinese. The effect of instructions for the change of display value and of control plane on movement compatibility for various control–display configurations was analyzed with precise quantitative measures of stereotype strength and a reversibility index. The results showed that the best control–display configuration was the rotary control–circular display combination. The rotary control–digital counter, rotary control–horizontal scale, and four-way lever–circular display configurations performed comparably. The poorest configurations found in this study were the four-way lever–digital counter and the rotary control–vertical scale combinations. In general, subjects' response times were found to be longer when there were no clear movement stereotypes. The results of this study provide significant implications for the industrial design of control panels used in human–machine interfaces for improved human performance.

Keywords: Movement compatibility · circular display · digital counter · linear scale · rotary control · lever control · stereotype reversibility
1.1 Introduction

Displays and controls provide the means of communication between people and machines in human–machine systems.
Displays provide information about operational status, and control devices enable operators to take necessary actions and change the states of a human–machine system [1]. When people operate a control they have expectations about what it will do and what effect it will have on a display. The relationship between a control movement and the effect most expected by a population is known as a population stereotype or direction-of-motion stereotype, and such a relationship is said to be compatible. The importance of designing a compatible relationship or population stereotype is to conform to the expectations of the relevant operator population [2, 3]. Although it is possible to train people to operate systems that do not follow the stereotypes, this takes a much longer training time and their performance may deteriorate in an emergency situation. It has been suggested that the trained behaviors do not replace the old behaviors that were learned through past experience and expectation; they merely overlay them. Situations may then arise in which the old behaviors are stronger, and habit regression [2] will occur when the operators' motivation is decreased or when they are fatigued and subjected to changes in the working situation. In a field study on the machine controls and the resultant movements of the cabins and hooks of more than fifty electric overhead traveling cranes in a heavy engineering factory [4], it was shown that control-movement compatibility was absent in most of the cranes. This was particularly dangerous when operators were shifted from one crane to another: when moving the controls, operators had a high chance of making mistakes, especially during periods of high workload or time stress, which could lead to severe accidents [4].

Research on control and motion relationships for Western nations has been going on for a long time [5–16]. Although cultural differences in stereotypes may be expected for different ethnic groups, few studies on cultural differences and Asian populations have been reported [17–23]. Apart from the studies by Courtney [24–27], Courtney and Chan [28], Chan et al. [29, 30], and Yu and Chan [31], the number of studies of direction-of-motion stereotypes for a Chinese population is small. It is known that the Chinese culture has a peculiarly enduring and durable nature, which differs in many fundamental and important ways from Western culture [25]. One of the most obvious differences is that the Chinese language is nonagglutinative and is not written in alphabetic form. The invariable left-to-right convention of English-speaking countries cannot be assumed to prevail in China. Therefore, it is essential to have knowledge of Chinese response preferences for displays and controls in order to predict the kind of problems that may be encountered when Chinese operators use Western-designed equipment.

Courtney [25, 26] investigated direction-of-motion stereotypes for Chinese subjects with paper-and-pencil tests and concluded that while there were some areas of similarity between Chinese and Western subjects, the Chinese subjects adopted a more consistent approach (with stronger stereotype strength and reversibility) than Western subjects did and exhibited some stereotypes not generally found in Western subjects. In general, the results supported the cautionary comment [24] that some care needs to be exercised when equipment and systems designed in the West are to
be used by Chinese people, even when they live in a community under the influence of the West. Using a simple mockup, stimulus–response stereotypes for Chinese subjects using linear displays with various types of controls were investigated [24–28], and the stereotypes were compared with those found for other populations. Surprisingly, only a few studies of control–display stereotypes using real mechanical controls and computer displays for either the Chinese or Western populations had been reported. Furthermore, little comprehensive research in comparing the movement compatibility among the common control–display configurations using real hardware tests has been analyzed and published. To determine if any response preference or population stereotype exists, chi-square tests are usually used to demonstrate statistical significance between proportions of different responses [29]. The majority proportion of responses (≥50%) for a testing condition is a measure of the strength of stereotype. A value of 50% indicates no choice preference, while a value of 100% indicates a perfect stereotype. Other than the strength of stereotype, reversibility of stereotypes is another important factor for consideration in industrial design for improved human performance. In the context of movement compatibility, reversibility is a term for describing the situation where, for example, a population that lifts a lever up to move a device up will also push it down to move the device down. Previous research on movement compatibility has shown that a person’s expectations are not always reversible. In a study of the operation of water taps, Hoffmann et al. [11] used a quantitative measure, index of reversibility (IR) “for measuring the likelihood that the response for closure of a tap is opposite to that used for opening the tap, independent of the expected direction of rotation of the tap for opening.” In this water tap example, the IR was evaluated from the sum of two products. One product was derived from the proportion of anticlockwise responses for increasing the flow and the proportion of clockwise responses for decreasing the flow. The other product was derived from the proportions of the opposite pair of responses. The index ranges from a value of zero indicating absolute nonreversibility to a value of unity for perfect reversibility, which occurs when the response to “increasing the flow” is the opposite of the response to “decreasing the flow.” Stereotypes are not always reversible, and this is an important factor when considering movement compatibility. Designers of human–machine interfaces should use stereotypes with a reasonable degree of reversibility to reduce confusion and enhance efficiency and safety. The present study aimed at examining the similarities and differences in response preferences among different combinations of some common controls and displays, viz., rotary control–circular display [32, 33], rotary control-digital counter [34], four-way lever–circular display [35, 36], four-way lever–digital counter, rotary control–vertical scale, and rotary control–horizontal scale. Detailed comparisons of results in strength and reversibility of stereotype of all these configurations were critically examined and analyzed.
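As a concrete illustration of the preference analysis described above, the following sketch tests one control–display condition for a response preference with a chi-square goodness-of-fit test against the no-preference (50/50) hypothesis and computes the corresponding strength of stereotype. This is not the authors' analysis code, and the response counts used are hypothetical.

```python
# Illustrative sketch (not the authors' analysis code): testing for a response
# preference and computing the strength of stereotype for one hypothetical
# control-display condition with two possible responses.
from scipy.stats import chisquare

# Hypothetical counts: 33 of 38 subjects respond clockwise, 5 respond anticlockwise
observed = [33, 5]

# Chi-square goodness-of-fit test against equal (50/50) expected frequencies
chi2, p_value = chisquare(observed)

# Strength of stereotype: majority proportion (0.5 = no preference, 1.0 = perfect stereotype)
strength = max(observed) / sum(observed)

print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}, stereotype strength = {strength:.3f}")
```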
1.2 Methods

1.2.1 Experimental Design

For better presentation of stimulus materials and immediate capturing of the dynamic performance of subjects, a personal computer with a Visual Basic application program was used for testing. Two types of controls (rotary control, four-way lever) and four types of displays (circular display, digital counter, horizontal scale, vertical scale) were tested in a total of six experiments. In each experiment, one type of control was combined with a specific type of display for testing. The display was always shown directly in front of subjects in the frontal plane, and the control might appear in one of four planes (Figs. 1.1 and 1.2). Subjects were requested to select their choices of manipulating the control movements to achieve the target setting immediately after the display was shown. The time between the showing of the instruction of change and the subject's moving of the control was recorded as the response time. There were four control planes and two instructions for change of setting (clockwise or anticlockwise for circular displays, left or right for horizontal scales, up or down for vertical scales, and increase or decrease in number for digital counters) for each experiment, which were randomly tested for all subjects, who paced and initiated presentations themselves. The display always changed to the target setting independent of subjects' choice of lever movement.
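The experimental program itself was written in Visual Basic; the Python sketch below is only a hypothetical illustration of the trial logic just described (randomized combinations of control plane and instruction, self-paced presentation, and response time measured from instruction onset to the first control movement). The function names and data structures are assumptions, not part of the original software.

```python
# Hypothetical sketch of the trial logic described above (the actual test program
# was a Visual Basic application); present_stimulus and wait_for_control_movement
# stand in for the display and control-interface routines.
import itertools
import random
import time

PLANES = [1, 2, 3, 4]
INSTRUCTIONS = ["increase", "decrease"]   # e.g., for the digital counter condition

def run_block(present_stimulus, wait_for_control_movement):
    trials = list(itertools.product(PLANES, INSTRUCTIONS))
    random.shuffle(trials)                # conditions presented in random order
    results = []
    for plane, instruction in trials:
        present_stimulus(plane, instruction)       # show display and instruction of change
        t0 = time.perf_counter()
        response = wait_for_control_movement()     # blocks until the subject moves the control
        rt_ms = (time.perf_counter() - t0) * 1000.0
        results.append({"plane": plane, "instruction": instruction,
                        "response": response, "response_time_ms": rt_ms})
    return results
```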
1.2.2 Subjects

Two groups of 38 undergraduates aged between 23 and 47 from the City University of Hong Kong took part in the study. The first group took the tests on the rotary control–circular display, rotary control–digital counter, four-way lever–circular display, and four-way lever–digital counter combinations. The second group took the tests on the rotary control–horizontal scale and rotary control–vertical scale combinations. The subjects were all right-handed Hong Kong Chinese and manipulated the controls with their right hands.
1.3 Results and Discussion

1.3.1 Response Preference and Mean Stereotype Strength

Table 1.1 shows a comparison of the major direction-of-motion stereotypes obtained at different planes for the control–display configurations tested in this study. For the rotary control–circular display experiment [32], strong clockwise–clockwise (CC) and anticlockwise–anticlockwise (AA) relationships were found significant in all planes.
Fig. 1.1 Schematic diagrams showing all the control–display configurations tested in the study (each control shown mounted on planes 1–4)
It is worth noting that the lowest stereotype strengths were found in the sagittal plane (plane 2). In the case of the rotary control–digital counter test [34], strong clockwise-to-increase (CI) and anticlockwise-to-decrease (AD) stereotypes were found in all planes. The results suggested that the virtual movement directions in increasing and decreasing number magnitude coincide with the clockwise and anticlockwise movements of the rotary control, respectively.
Fig. 1.2 Examples of physical setup for testing: (a) rotary control mounted on plane 1 with digital display, (b) four-way lever mounted on plane 2 with circular display
Looking at the four-way lever–circular display test [36], strong right-clockwise (RC) and left-anticlockwise (LA) stereotypes were found in all planes. With the four-way lever, the subjects seemed to ignore the rotary motion of the pointer at the 12 o'clock position and associated their linear lever responses with the translatory pointer movements in the left and right directions. Nevertheless, it is worth noting that the lowest stereotype strengths were also found in plane 2. For the four-way lever–digital counter test, moderately strong forward-to-increase (FI) and backward-to-decrease (BD) stereotypes were found in the horizontal planes (planes 1 and 3). In the two vertical planes (planes 2 and 4), relatively weaker up-to-increase (UI) and down-to-decrease (DD) stereotypes were dominant. The FI-BD stereotypes in the horizontal planes were generally stronger than the UI-DD stereotypes in the vertical planes, indicating that the horizontal planes are the desirable planes for positioning a forward–backward lever in working with the digital counter. The rotary control–horizontal scale test showed strong clockwise-to-right (CR) and anticlockwise-to-left (AL) stereotypes in all planes except the sagittal plane (plane 2). The results matched quite well with the findings obtained by Hotta and Yoshioka [21].
Table 1.1 Comparison of the major direction-of-movement stereotypes on different planes for all the control–display configurations tested in this study

Control–display configuration        Plane 1   Plane 2   Plane 3   Plane 4
Rotary control–circular display      CC, AA    CC, AA    CC, AA    CC, AA
Rotary control–digital counter       CI, AD    CI, AD    CI, AD    CI, AD
Four-way lever–circular display      RC, LA    RC, LA    RC, LA    RC, LA
Four-way lever–digital counter       FI, BD    UI, DD    FI, BD    UI, DD
Rotary control–horizontal scale      CR, AL    CL, AR    —         CR, AL
Rotary control–vertical scale        No        No        —         No
Table 1.2 Comparison of the mean SSs on different planes for all the control–display configurations tested in this study

Control–display configuration        Plane 1   Plane 2   Plane 3   Plane 4   Average
Rotary control–circular display      0.921     0.908     0.934     0.974     0.934
Rotary control–digital counter       0.856     0.868     0.862     0.856     0.861
Rotary control–horizontal scale      0.888     0.783     —         0.901     0.857
Four-way lever–circular display      0.895     0.645     0.908     0.882     0.833
Four-way lever–digital counter       0.780     0.556     0.720     0.629     0.671
Rotary control–vertical scale        0.612     0.507     —         0.566     0.562
Nevertheless, different from the result obtained in [21], where no stereotype existed in plane 2, opposite clockwise-to-left (CL) and anticlockwise-to-right (AR) stereotypes were found in plane 2 instead. For the rotary control–vertical scale test, no stereotypes were found in any plane. Again, this result matched well with the findings obtained by Hotta and Yoshioka [21], where no significant stereotypes were reported in any plane.

Table 1.2 shows a comparison of the mean stereotype strengths (SSs) obtained with all the control–display configurations tested in this study. Using the rotary control–digital counter [34] as an illustration, the mean SS for that configuration was calculated as the arithmetic mean of the CI and AD stereotype strengths. The results (Table 1.2) showed that the strongest mean SS was found for the rotary control–circular display combination. The mean SSs of the rotary control–digital counter, rotary control–horizontal scale, and four-way lever–circular display configurations were of comparable magnitude. The poorest configurations found in this study were the four-way lever–digital counter and the rotary control–vertical scale combinations. It is interesting that, except for the rotary control–digital counter combination, the poorest stereotype strengths were obtained in the sagittal plane (plane 2). The weaker strength for controls positioned in the sagittal plane (plane 2) with the circular display can be explained as follows: since the controls were 90° offset from subjects' line of sight and from the frontal plane of the display, the associated mechanical pointer movement (left or right) of the circular display was also 90° offset from the control. This inevitably led to the degradation of subject performance.
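To make the averaging step concrete, the short sketch below takes the per-plane mean SS values from Table 1.2 and averages them across planes (planes with no data are excluded); the results agree with the table's Average column to within rounding. This is only a numerical check, not code from the study.

```python
# Numerical check of Table 1.2: average the per-plane mean SS values across planes
# (planes with no data, shown as a dash in the table, are simply excluded).
ss_by_plane = {
    "Rotary control-circular display": [0.921, 0.908, 0.934, 0.974],
    "Rotary control-digital counter":  [0.856, 0.868, 0.862, 0.856],
    "Rotary control-horizontal scale": [0.888, 0.783, 0.901],          # no plane-3 data
    "Four-way lever-circular display": [0.895, 0.645, 0.908, 0.882],
    "Four-way lever-digital counter":  [0.780, 0.556, 0.720, 0.629],
    "Rotary control-vertical scale":   [0.612, 0.507, 0.566],          # no plane-3 data
}

for config, values in ss_by_plane.items():
    print(f"{config}: average SS = {sum(values) / len(values):.3f}")
```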
1.3.2 Reversibility

For the experiment using the rotary control–digital counter [34], the term reversible stereotype describes the situation in which a subject who turns the rotary control clockwise to increase the display value will also turn the control anticlockwise to decrease the display value to the target value. The IR was evaluated from the sum of two products.
Table 1.3 Comparison of the IRs on different planes for all the control–display configurations tested in this study

Control–display configuration        Plane 1   Plane 2   Plane 3   Plane 4   Average
Rotary control–circular display      0.855     0.832     0.877     0.949     0.878
Rotary control–digital counter       0.752     0.771     0.762     0.752     0.759
Rotary control–horizontal scale      0.802     0.635     —         0.812     0.750
Four-way lever–circular display      0.800     0.411     0.824     0.777     0.703
Four-way lever–digital counter       0.622     0.316     0.543     0.400     0.470
Rotary control–vertical scale        0.524     0.483     —         0.492     0.500
One product was derived from the proportions of CI and AD responses, and the other from the proportions of the opposite pair of anticlockwise-to-increase (AI) and clockwise-to-decrease (CD) responses. Mathematically, the index is expressed as

IR = p(CI) × p(AD) + p(CD) × p(AI).

Significant CI and AD stereotypes were found in all planes, and the strongest ones were found in plane 2 for CI and in planes 2 and 3 for AD. The mean IRs were at high levels of 0.752, 0.771, 0.762, and 0.752 in planes 1, 2, 3, and 4, respectively. The overall average IR for the rotary control–digital counter configuration was at a high level of 0.759.

Using a similar methodology, the IRs for all the other configurations were calculated. Table 1.3 shows a comparison of the IRs obtained with all the control–display configurations tested in this study. The results showed that the strongest IR was found for the rotary control–circular display combination. The IRs of the rotary control–digital counter, rotary control–horizontal scale, and four-way lever–circular display configurations were of comparable magnitudes. Again, the poorest configurations found in this study were the four-way lever–digital counter and the rotary control–vertical scale combinations.
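A minimal sketch of the IR calculation defined above is given below; the response proportions used in the example are hypothetical and serve only to illustrate the formula.

```python
# Minimal sketch of the index of reversibility (IR) defined above:
#   IR = p(CI) * p(AD) + p(CD) * p(AI)
# The proportions below are hypothetical, not data from the study.
def index_of_reversibility(p_CI, p_AD, p_CD, p_AI):
    """Returns 0 for absolute nonreversibility and 1 for perfect reversibility."""
    return p_CI * p_AD + p_CD * p_AI

# Example: 87% turn clockwise to increase, 87% turn anticlockwise to decrease,
# and the remaining 13% give the opposite pair of responses.
ir = index_of_reversibility(p_CI=0.87, p_AD=0.87, p_CD=0.13, p_AI=0.13)
print(f"IR = {ir:.3f}")   # 0.87 * 0.87 + 0.13 * 0.13 = 0.774
```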
1.3.3 Response Time

For the experiment performed on the rotary control–digital counter configuration, the average response times captured by the software program ranged from 560 to 686 ms, with a mean of 615 ms and a standard deviation of 32 ms. Student's t-test showed that the average response times for the increase and decrease instructions were statistically the same (p > 0.05). The regression analysis for the preferred response percentage (pr) for instructions of change of number showed that the higher the preferred response percentage, the shorter the mean response time (Fig. 1.3). The expression relating response time and preferred response performance is

Response time (ms) = 1306 − 8.04 pr (r2 = 0.537, n = 32, p < 0.001).
Fig. 1.3 Average response time vs. % response preference (fitted regression line y = 1306 − 8.04x)
As predicted from the equation, the mean response time ranges from 502 ms (pr = 100%) to 904 ms (pr = 50%). The regression equation clearly shows that a substantial reduction of response time could be achieved if there is a high level of compatibility built between the rotary control and digital counter. Table 1.4 shows a summary of the mean response times (RTs), the slopes (b1 ), and the y intercepts (b0 ) of the regression equations obtained in all the experiments conducted in this study. Due to the different electromechanical design features for the various controls employed in each experiment, no direct comparison among the magnitudes of the response times obtained in all the experiments can be made. Nevertheless, the result showed that with negative slope (b1 ) values found in the regression equations for all experiments, faster response time could be achieved if there is a high level of compatibility built between the control and display.
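To make the fitted relation concrete, the sketch below evaluates the reported regression equation for the rotary control–digital counter configuration at several preference levels; it simply plugs the published coefficients into RT = 1306 − 8.04 pr and reproduces the 502 ms and 904 ms values quoted above.

```python
# Evaluating the reported regression equation for the rotary control-digital counter
# configuration: RT(ms) = 1306 - 8.04 * pr, with pr the preferred response percentage.
b0, b1 = 1306.0, -8.04

def predicted_rt_ms(preference_percent):
    return b0 + b1 * preference_percent

for pr in (50, 75, 100):
    print(f"pr = {pr:3d}%  ->  predicted RT = {predicted_rt_ms(pr):.0f} ms")
# pr = 100% gives 502 ms and pr = 50% gives 904 ms, matching the text above.
```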
1.4 Conclusion and Recommendations

In consideration of the mean stereotype strengths and indexes of reversibility, the rotary control–circular display combination showed the best compatibility of all the configurations tested. The performance of the rotary control–digital counter, rotary control–horizontal scale, and four-way lever–circular display configurations was of comparable magnitude.
Table 1.4 Summary of the response times for all the control–display configurations tested in this study

Control–display configuration        y Intercept (b0)   Slope (b1)   r2      Mean RT (ms)
Rotary control–vertical scale        962                −4.24        0.515   740
Rotary control–horizontal scale      953                −3.90        0.540   616
Rotary control–digital counter       1306               −8.04        0.521   615
Rotary control–circular display      996                −4.18        0.313   653
Four-way lever–digital counter       1087               −5.85        0.642   694
Four-way lever–circular display      1108               −5.42        0.491   658
The poorest configurations found in this study were the four-way lever–digital counter and the rotary control–vertical scale combinations. The negative correlation coefficients obtained for the average response time and the average proportion of majority responses showed that subjects in general needed to do less mental work in compatible settings where dominant preferences of movement direction were evident. The results of this study led to the following recommendations, which are useful for designing control panel interfaces and for predicting the effects of design compatibility on human response times and response preferences.

(a) The rotary control–circular display combination is the best among all the configurations discussed in this study. In cases where rotary controls or lever controls are to be adopted, circular displays rather than digital counters should be chosen for use.
(b) The sagittal plane is the least advantageous one for all the control–display configurations reported in this study except for the rotary control–digital counter combination.
(c) Translatory levers are not as good as rotational controls for working with the digital counter. If digital counters are to be adopted, rotary controls instead of lever controls should be chosen for use.
(d) If a lever needs to be used with a digital counter, the forward–backward type can be positioned on the horizontal planes and the up–down type can be positioned on the frontal vertical planes.
(e) If a linear scale needs to be used with a rotary control, the horizontal scale should be preferred over the vertical scale.
(f) In general, response times are longer when there are no clear movement stereotypes.
Chapter 2
A Study of Comparative Design Satisfaction Between Culture and Modern Bamboo Chair Vanchai Laemlaksaku and Sittichai Kaewkuekool
Abstract The objective of this paper was to examine the cognitive domain involved in using a modern bamboo chair made from Pai Tong (Dendrocalamus asper Backer) with respect to its size, physical construction, and shape. The modern bamboo chair was compared with the culture bamboo chair in terms of design and comfort level. A questionnaire with rating scales was used to collect data from respondents. Sixty people were randomly selected from King Mongkut's Institute of Technology North Bangkok, Thailand, to participate in satisfaction tests; they were asked to rate their responses after sitting on both chairs. The results showed that the modern bamboo chair is appropriate to use and better than the old one, and the correlations were significant at the 0.01 level. The width, height, and depth of the modern bamboo chair were rated more appropriate than those of the culture bamboo chair by 40.00, 26.67, and 26.67 percentage points, respectively. Therefore, as shown in the results, the modern bamboo chair has an appropriate shape and could be used to replace the old one.

Keywords: Anthropometric · laminated bamboo · chair
Vanchai Laemlaksaku, Department of Industrial Engineering Technology, King Mongkut's Institute of Technology North Bangkok, Thailand
Sittichai Kaewkuekool, Department of Production Technology Education, King Mongkut's University of Technology Thonburi, Thailand

2.1 Introduction

Anthropometric measurements are an important factor that should be taken into account in all designs. Most designs are based on information about customer needs and on the designers who seek to serve those needs. Therefore, designs based on the 5th and 95th percentiles and on average values for males and females might be used to serve those needs.
During the past decade, ergonomic research has focused especially on the design of work furniture based on the biomechanics of the human body. Many researchers have dealt with principles for the design of chairs and desks in the workplace, particularly for computer users [1–3]. This indicates that a chair should be designed to fit the human rather than fitting the human to the workplace. However, for a long period little interest was shown in the design of bamboo chair furniture for use in restaurants; the chair of that period is referred to here as the culture bamboo chair. Potential design variables are numerous and have included variations in seat cushioning, seat fabrics, backrest designs, lumbar support, and seat height. Some of these design variables have been shown to have quantifiable impacts on seat–pan interface pressure. Specifically, many studies have indicated significant differences across degrees of cushion thickness, density, and composition, and across chair contouring [4, 5]. Based on these variables, many body dimensions were considered for the modern bamboo furniture design. However, most designs use static information to design the seat, whereas working humans have to move their bodies. Accordingly, a movement allowance should be taken into account when designing the modern bamboo chair. Moreover, owing to the reduction of wood resources and increasing restrictions on wood harvesting, the development of wood substitutes has become essential for resolving the shortage of wood resources in many countries. Bamboo has recently been rediscovered as a potential source of wood substitutes owing to its excellent strength, easy processing, and growth that is more rapid than that of common trees. Over the past decade, bamboo has been used in modern factories for the production of paper, bamboo blinds, and barbecue skewers. Bamboo is not commonly used in modern furniture production because of its round shape. However, laminated bamboo, a wood-substitute product made from bamboo, has become available in Europe and the USA, primarily as a flooring material, and can also be used in many other applications, including furniture manufacturing. The objective of this research was to evaluate the cognitive domain toward modern furniture made from laminated bamboo. In this study, mature Dendrocalamus asper Backer bamboo was selected.
2.2 Methodology

2.2.1 Design and Fabrication

• Determine the body dimensions that are important for chair design, such as seat height, depth, and width and the width of the cushion.
• Collect and study information on the customers who decide to buy the product. Subjects were randomly selected from the groups of interest and their anthropometric data were recorded. This information is shown in Tables 2.1 and 2.2 and follows Pheasant's standard body dimensions, as shown in Fig. 2.1.
Table 2.1 Anthropometric estimates of male students of KMITNB (all dimensions in centimeters) [7]

No.   Body dimension              Mean    5th percentile   95th percentile   SD
14    Buttock–popliteal length    48.14   43.37            52.81             2.84
16    Popliteal height            39.15   34.35            43.95             2.92
19    Hip breadth                 33.68   30.49            36.87             1.94

Table 2.2 Anthropometric estimates of female students of KMITNB (all dimensions in centimeters) [8]

No.   Body dimension              Mean    5th percentile   95th percentile   SD
14    Buttock–popliteal length    44.25   37.23            51.27             4.27
16    Popliteal height            36.96   30.97            42.94             3.64
19    Hip breadth                 34.97   31.28            38.65             2.24

No. = body dimension number shown in Fig. 2.1
• Determine the body dimensions to design for in the modern chair by selecting data for the 5th and 95th percentiles, which should cover all types of users, as follows:
– Seat height: selected dimension number 16, popliteal height, 95th percentile of males = 43.95 cm, plus a shoe-height allowance of 1.00 cm, which equals 44.95 cm (our design is 45.00 cm).
Fig. 2.1 Body dimensions [6]
Fig. 2.2 Culture bamboo chair
– Seat width: selected dimension number 19, hip breadth, 95th percentile of females = 38.65 cm, plus a width allowance of 6.00 cm, which equals 44.65 cm (our design is 45.00 cm).
– Seat depth: selected dimension number 14, buttock–popliteal length, average of females and males = 46.20 cm (our design is 47.00 cm). A short worked sketch of these calculations is given below.
• The design concept was developed as a redesign of the culture bamboo armchair shown in Fig. 2.2. The research model is shown in Figs. 2.3 and 2.4. Details of the modern bamboo chair's fabrication can be found in [9].
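The percentile-based selection above amounts to simple arithmetic on the values in Tables 2.1 and 2.2. The following minimal sketch reproduces it; the constant names are illustrative, and the allowances are those stated in the text.

# Hedged sketch of the percentile-based dimension selection described above.
# Values come from Tables 2.1 and 2.2; allowances are as stated in the text.

MALE_POPLITEAL_HEIGHT_95TH = 43.95     # cm, dimension 16, Table 2.1
FEMALE_HIP_BREADTH_95TH = 38.65        # cm, dimension 19, Table 2.2
MALE_BUTTOCK_POPLITEAL_MEAN = 48.14    # cm, dimension 14, Table 2.1
FEMALE_BUTTOCK_POPLITEAL_MEAN = 44.25  # cm, dimension 14, Table 2.2

SHOE_ALLOWANCE = 1.00   # cm added to popliteal height
WIDTH_ALLOWANCE = 6.00  # cm added to hip breadth

seat_height = MALE_POPLITEAL_HEIGHT_95TH + SHOE_ALLOWANCE           # 44.95 cm
seat_width = FEMALE_HIP_BREADTH_95TH + WIDTH_ALLOWANCE              # 44.65 cm
seat_depth = (MALE_BUTTOCK_POPLITEAL_MEAN
              + FEMALE_BUTTOCK_POPLITEAL_MEAN) / 2                  # about 46.20 cm

print(f"seat height: {seat_height:.2f} cm (design value 45.00 cm)")
print(f"seat width:  {seat_width:.2f} cm (design value 45.00 cm)")
print(f"seat depth:  {seat_depth:.2f} cm (design value 47.00 cm)")

The computed values round up to the 45.00 cm and 47.00 cm design dimensions quoted above.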
Fig. 2.3 Dimensions of modern bamboo chair
Fig. 2.4 Modern bamboo chair
2.2.2 Sample Selection

The participants in this research consisted of staff and students from King Mongkut's Institute of Technology North Bangkok. Their ages ranged from 18 to 60 years. A random sample of 60 people participated by submitting their subjective feedback on the modern bamboo chair versus the culture bamboo chair.
2.2.3 Questionnaire

The questionnaire consisted of three parts. The first part contained general questions used to record demographic information such as height, weight, gender, and occupation. The second part reflected the participants' subjective views of the culture bamboo chair and consisted of two sections. In the first section the participant was asked to rate the culture bamboo chair's dimensional appropriateness (for example, the height, width, and depth of the chair) on an ordinal scale. In the second section the participant was asked to rate the culture bamboo chair's comfort level on an ordinal scale according to eight ergonomic points of the body: neck, shoulders, back, waist, tailbone, thighs, knees, and feet. The scale ran from 1 to 4, corresponding to comfortable, slightly uncomfortable, uncomfortable but tolerable, and uncomfortable and intolerable, respectively. Once the subjects rated their responses, the data were used to calculate an average, which was interpreted as follows:

• A score between 1.00 and 1.25 corresponded to comfortable.
• A score between 1.26 and 2.50 corresponded to slightly uncomfortable.
• A score between 2.51 and 3.25 corresponded to uncomfortable but tolerable.
• A score between 3.26 and 4.00 corresponded to uncomfortable and intolerable.
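A minimal sketch of this score-to-category mapping is shown below; the function name and the example value are illustrative, and the upper band edges follow the interpretation listed above (values falling between printed bands are assigned to the higher band).

# Hedged sketch: map a mean comfort rating (1-4 scale) to its verbal category
# using the interpretation bands listed in the text.

def comfort_category(mean_score: float) -> str:
    if 1.00 <= mean_score <= 1.25:
        return "comfortable"
    elif mean_score <= 2.50:
        return "slightly uncomfortable"
    elif mean_score <= 3.25:
        return "uncomfortable but tolerable"
    elif mean_score <= 4.00:
        return "uncomfortable and intolerable"
    raise ValueError("mean score must lie between 1.00 and 4.00")

# Example: the culture chair's back rating in Table 2.6 (mean 2.13)
print(comfort_category(2.13))   # -> slightly uncomfortable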
Table 2.3 The culture and modern bamboo chairs rated by dimensional appropriateness (each cell shows the tally of votes with the percentage of total votes in parentheses)

            Culture bamboo chair                         Modern bamboo chair
Dimension   A            IB           IA                 A            IB           IA
SH          40 (66.67%)  20 (33.33%)  0 (0%)             56 (93.33%)  4 (6.67%)    0 (0%)
SW          21 (35.00%)  38 (63.33%)  1 (1.67%)          45 (75.00%)  15 (25.00%)  0 (0%)
SD          36 (60.00%)  22 (36.67%)  2 (3.33%)          52 (86.67%)  8 (13.33%)   0 (0%)
SB          30 (50.00%)  30 (50.00%)  0 (0%)             42 (70.00%)  18 (30.00%)  0 (0%)
HA          –            –            –                  47 (78.33%)  13 (21.67%)  0 (0%)

A = appropriate, IB = inappropriate but tolerable, IA = inappropriate and intolerable, SH = seat height, SW = seat width, SD = seat depth, SB = seat back slope, HA = height of chair armrest
The third part reflected the participants' subjective views of the modern bamboo chair and consisted of three sections. In the first and second sections, the participant was asked to rate the modern bamboo chair's dimensional appropriateness and comfort level. The third section consisted of open- and closed-ended questions on the modern bamboo chair's aesthetic appeal as it applied to the design, cushion, arms, and legs.
2.3 Results

2.3.1 Dimensional Appropriateness of the Chair

From Table 2.3, the culture bamboo chair's height was rated appropriate by 66.67% of participants compared with 93.33% for the laminated bamboo chair, i.e., 26.67 percentage points more people rated the modern bamboo chair's height as appropriate. The culture bamboo chair's width was rated appropriate by 35.00% compared with 75.00% for the modern bamboo chair, i.e., 40.00 percentage points more people rated the modern bamboo chair's width as appropriate. The culture bamboo chair's depth was rated appropriate by 60.00% compared with 86.67% for the modern bamboo chair, i.e., 26.67 percentage points more people rated the modern bamboo chair's depth as appropriate. The culture bamboo chair's back slope was rated appropriate by 50.00% compared with 70.00% for the modern bamboo chair, i.e., 20.00 percentage points more people rated the modern bamboo chair's back slope as appropriate. The results on correlation are shown in Tables 2.4 and 2.5. Table 2.4 shows the correlations for the culture bamboo chair and indicates that all the designs for SH, SW, SD, and SB were significant at the 0.01 level.
Table 2.4 Correlation for all designs of the culture bamboo chair (Pearson correlations; N = 60 for every pair; Sig. 2-tailed = .000 for all computed coefficients)

      SH       SW       SD       SB       HA
SH    1        .536**   .844**   .707**   a
SW    .536**   1        .629**   .725**   a
SD    .844**   .629**   1        .776**   a
SB    .707**   .725**   .776**   1        a
HA    a        a        a        a        1

** Correlation is significant at the 0.01 level (2-tailed)
a Cannot be computed because at least one of the variables is constant
SH = seat height, SW = seat width, SD = seat depth, SB = seat back slope, HA = height of chair armrest

Table 2.5 Correlation for all designs of the modern bamboo chair (Pearson correlations; N = 60 for every pair)

      SH       SW       SD       SB       HA
SH    1        .463**   .681**   .408**   .508**
SW    .463**   1        .679**   .882**   .911**
SD    .681**   .679**   1        .599**   .746**
SB    .408**   .882**   .599**   1        .803**
HA    .508**   .911**   .746**   .803**   1

** Correlation is significant at the 0.01 level (2-tailed)
SH = seat height, SW = seat width, SD = seat depth, SB = seat back slope, HA = height of chair armrest
Table 2.6 Eight ergonomic dimensions of the body for the culture bamboo chair (tallies with percentages of the 60 respondents in parentheses)

Dimension   C            S            U            UI           Mean   Variance   Results
Neck        32 (53.33%)  18 (30.00%)  10 (16.67%)  0 (0%)       1.62   0.756      C
Shoulders   23 (38.33%)  32 (53.33%)  5 (8.33%)    0 (0%)       1.69   0.620      C
Back        11 (18.33%)  32 (53.33%)  14 (23.33%)  3 (5.00%)    2.13   0.785      S
Waist       18 (30.00%)  31 (51.67%)  9 (15.00%)   2 (3.33%)    1.90   0.768      S
Tail bone   17 (28.33%)  28 (46.67%)  10 (16.67%)  5 (8.33%)    2.03   0.894      S
Thighs      29 (48.33%)  21 (35.00%)  10 (16.67%)  0 (0%)       1.67   0.747      C
Knees       36 (60.00%)  12 (20.00%)  10 (16.67%)  2 (3.33%)    1.62   0.879      C
Feet        36 (60.00%)  14 (23.33%)  8 (13.33%)   2 (3.33%)    1.59   0.844      C

C = comfortable, S = slightly uncomfortable, U = uncomfortable but tolerable, UI = uncomfortable and intolerable
Table 2.7 Correlation between eight ergonomic dimensions and the comfort level of sitting in a culture bamboo chair (Pearson correlations; N = 60 and Sig. 2-tailed = .000 for every pair)

            Neck     Shoulders  Back     Waist    Tailbone   Thighs   Knees    Feet
Neck        1        .773**     .785**   .822**   .830**     .958**   .910**   .928**
Shoulders   .773**   1          .764**   .841**   .857**     .817**   .726**   .737**
Back        .785**   .764**     1        .847**   .894**     .783**   .823**   .813**
Waist       .822**   .841**     .847**   1        .925**     .841**   .832**   .836**
Tailbone    .830**   .857**     .894**   .925**   1          .838**   .843**   .835**
Thighs      .958**   .817**     .783**   .841**   .838**     1        .874**   .893**
Knees       .910**   .726**     .823**   .832**   .843**     .874**   1        .979**
Feet        .928**   .737**     .813**   .836**   .835**     .893**   .979**   1

** Correlation is significant at the 0.01 level (2-tailed)
Table 2.8 Eight ergonomic dimensions of the body for the modern bamboo chair (tallies with percentages of the 60 respondents in parentheses)

Dimension   C            S            U           UI       Mean   Variance   Results
Neck        48 (80.00%)  10 (16.67%)  2 (3.33%)   0 (0%)   1.28   0.609      C
Shoulders   48 (80.00%)  12 (20.00%)  0 (0%)      0 (0%)   1.21   0.413      C
Back        47 (78.33%)  12 (20.00%)  1 (1.67%)   0 (0%)   1.26   0.513      C
Waist       49 (81.67%)  11 (18.33%)  0 (0%)      0 (0%)   1.20   0.401      C
Tail bone   46 (76.67%)  14 (23.33%)  0 (0%)      0 (0%)   1.25   0.434      C
Thighs      47 (78.33%)  13 (21.67%)  0 (0%)      0 (0%)   1.23   0.424      C
Knees       53 (88.33%)  7 (11.67%)   0 (0%)      0 (0%)   1.13   0.340      C
Feet        52 (86.67%)  8 (13.33%)   0 (0%)      0 (0%)   1.16   0.416      C

C = comfortable, S = slightly uncomfortable, U = uncomfortable but tolerable, UI = uncomfortable and intolerable
Although all results were significant, some design correlations, such as that between SH and SW, still indicated room for improvement. The corresponding correlations for the modern bamboo chair are shown in Table 2.5; there, all the designs for SH, SW, SD, and SB of the modern chair are significant at the 0.01 level.
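The significance tests in Tables 2.4, 2.5, 2.7, and 2.9 are two-tailed Pearson correlations. A minimal sketch of such a test on made-up ratings (not the study's raw data) is shown below.

# Hedged sketch of the kind of correlation test reported in Tables 2.4-2.5:
# a Pearson correlation between two ratings with a two-tailed p-value.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
seat_height_rating = rng.integers(1, 4, size=60)   # 60 illustrative respondents
seat_width_rating = np.clip(seat_height_rating + rng.integers(-1, 2, size=60), 1, 3)

r, p = pearsonr(seat_height_rating, seat_width_rating)
print(f"Pearson r = {r:.3f}, two-tailed p = {p:.4f}, N = {len(seat_height_rating)}")
# A coefficient with p < 0.01 would be flagged ** as in the tables.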
2.3.2 Comfort Level of the Chair

From Table 2.6, the culture bamboo chair was rated comfortable in the areas of the neck, shoulders, thighs, knees, and feet. The correlations between body dimensions and sitting comfort for the culture bamboo chair are shown in Table 2.7; the correlations were significant at the 0.01 level for all dimensions. As seen in Table 2.8, the participants' ratings indicate that the modern bamboo chair was comfortable at all eight ergonomic body points. The corresponding correlations between body dimensions and sitting comfort for the modern bamboo chair are shown in Table 2.9; these correlations were also significant at the 0.01 level for all dimensions. Subjects who rated the culture and modern bamboo chairs as dimensionally appropriate were divided into three weight classes: 40 to 55 kg, 56 to 71 kg, and 72 to 85 kg. As shown in Table 2.10, the weight of the participants is a factor in the chair's dimensional-appropriateness ratings; the culture bamboo chair's width was mostly rated inappropriate because it is too narrow. The data in Table 2.11 show that the modern bamboo chair is dimensionally more appropriate for all participant weight classes.
Table 2.9 Correlation between eight ergonomic dimensions and the comfort level of sitting in a modern bamboo chair (Pearson correlations; N = 60 and Sig. 2-tailed = .000 for every pair)

            Neck     Shoulders  Back     Waist    Tailbone   Thighs   Knees    Feet
Neck        1        .942**     .930**   .907**   .854**     .895**   .772**   .805**
Shoulders   .942**   1          .923**   .948**   .906**     .915**   .727**   .784**
Back        .930**   .923**     1        .882**   .918**     .963**   .718**   .759**
Waist       .907**   .948**     .882**   1        .859**     .901**   .767**   .828**
Tailbone    .854**   .906**     .918**   .859**   1          .953**   .659**   .711**
Thighs      .895**   .915**     .963**   .901**   .953**     1        .691**   .746**
Knees       .772**   .727**     .718**   .767**   .659**     .691**   1        .927**
Feet        .805**   .784**     .759**   .828**   .711**     .746**   .927**   1

** Correlation is significant at the 0.01 level (2-tailed)
Table 2.10 Analysis of the weight of participants who rated the culture bamboo chair on dimensional appropriateness (number of votes, with percentages of the 60 respondents in parentheses)

       Appropriate                                    Inappropriate
       40–55 kg      56–71 kg      72–85 kg          40–55 kg      56–71 kg      72–85 kg
SH     21 (35.00%)   14 (23.33%)   5 (8.33%)         11 (18.33%)   8 (13.33%)    1 (1.67%)
SW     12 (20.00%)   4 (6.67%)     5 (8.33%)         20 (33.33%)   18 (30.00%)   1 (1.67%)
SD     19 (31.67%)   13 (21.67%)   4 (6.67%)         13 (21.67%)   9 (15.00%)    2 (3.33%)
SB     18 (30.00%)   7 (11.67%)    5 (8.33%)         14 (23.33%)   15 (25.00%)   1 (1.67%)

SH = seat height, SW = seat width, SD = seat depth, SB = seat back slope
Table 2.11 Analysis of the weight of participants who rated the modern bamboo chair on dimensional appropriateness (number of votes, with percentages of the 60 respondents in parentheses)

       Appropriate                                    Inappropriate
       40–55 kg      56–71 kg      72–85 kg          40–55 kg      56–71 kg      72–85 kg
SH     30 (50.00%)   20 (33.33%)   6 (10.00%)        3 (5.00%)     1 (1.67%)     0 (0%)
SW     24 (40.00%)   16 (26.67%)   5 (8.33%)         8 (13.33%)    6 (10.00%)    1 (1.67%)
SD     29 (48.33%)   17 (28.33%)   6 (10.00%)        4 (6.67%)     4 (6.67%)     0 (0%)
SB     22 (36.67%)   14 (23.33%)   6 (10.00%)        11 (18.33%)   7 (11.67%)    0 (0%)
HA     26 (43.33%)   15 (25.00%)   6 (10.00%)        7 (11.67%)    6 (10.00%)    0 (0%)

SH = seat height, SW = seat width, SD = seat depth, SB = seat back slope, HA = height of chair armrest

Table 2.12 Analysis of the height of participants who rated the culture bamboo chair on dimensional appropriateness (number of votes, with percentages of the 60 respondents in parentheses)

       Appropriate                                      Inappropriate
       150–160 cm    161–170 cm    171–180 cm          150–160 cm    161–170 cm    171–180 cm
SH     14 (23.33%)   16 (26.67%)   10 (16.67%)         8 (13.33%)    7 (11.67%)    5 (8.33%)
SW     7 (11.67%)    9 (15.00%)    5 (8.33%)           13 (21.67%)   20 (33.33%)   6 (10.00%)
SD     12 (20.00%)   15 (25.00%)   9 (15.00%)          8 (13.33%)    10 (16.67%)   6 (10.00%)
SB     10 (16.67%)   11 (18.33%)   9 (15.00%)          11 (18.33%)   12 (20.00%)   7 (11.67%)

SH = seat height, SW = seat width, SD = seat depth, SB = seat back slope

Table 2.13 Analysis of the height of participants who rated the modern bamboo chair on dimensional appropriateness (number of votes, with percentages of the 60 respondents in parentheses)

       Appropriate                                      Inappropriate
       150–160 cm    161–170 cm    171–180 cm          150–160 cm    161–170 cm    171–180 cm
SH     20 (33.33%)   23 (38.33%)   13 (21.67%)         1 (1.67%)     2 (3.33%)     1 (1.67%)
SW     15 (25.00%)   19 (31.67%)   11 (18.33%)         5 (8.33%)     6 (10.00%)    4 (6.67%)
SD     18 (30.00%)   20 (33.33%)   14 (23.33%)         3 (5.00%)     4 (6.67%)     1 (1.67%)
SB     14 (23.33%)   15 (25.00%)   13 (21.67%)         7 (11.67%)    9 (15.00%)    2 (3.33%)
HA     18 (30.00%)   16 (26.67%)   13 (21.67%)         3 (5.00%)     8 (13.33%)    2 (3.33%)

SH = seat height, SW = seat width, SD = seat depth, SB = seat back slope, HA = height of chair armrest
Tables 2.12 and 2.13 show the dimensional-appropriateness ratings of the culture and modern bamboo chairs broken down by participant height, divided into three classes: 150 to 160 cm, 161 to 170 cm, and 171 to 180 cm. The results for the culture bamboo chair's width and back slope are shown in Table 2.12; the appropriateness ratings were relatively more favorable among the tallest participants. On the other hand, the results in Table 2.13 show that the modern bamboo chair is dimensionally appropriate for all participant height classes.
2.4 Conclusions

2.4.1 Dimensional Appropriateness of the Chair

The participants rated the culture bamboo chair's height appropriateness at 66.67% and the modern bamboo chair's at 93.33%. They rated the culture bamboo chair's width appropriateness at 35.00% and the modern bamboo chair's at 75.00%, and the culture bamboo chair's depth appropriateness at 60.00% and the modern bamboo chair's at 86.67%. Finally, the culture bamboo chair's back slope appropriateness was rated at 50.00% and the modern bamboo chair's at 70.00%. Moreover, the correlations among the design dimensions of both the culture and the modern chair were significant at the 0.01 level. This indicates that both bamboo chair designs could still be improved.
2.4.2 Comfort Level

The culture bamboo chair was found to be comfortable in the areas of the neck, shoulders, thighs, knees, and feet, while participants rated it slightly uncomfortable for the back, waist, and tailbone. The modern bamboo chair was found to be comfortable at all eight ergonomic body points: neck, shoulders, back, waist, tailbone, thighs, knees, and feet. The correlations on sitting comfort were significant at the 0.01 level. This indicates that both the culture and the modern bamboo chair designs were acceptable and appropriate for use. However, respondents who were asked to rate their sitting comfort liked the modern bamboo chair better than the old one.
2.4.3 Aesthetic Appeal of the Modern Bamboo Chair

The color of the modern bamboo chair was found aesthetically appealing by 68% of the participants. The style of the chair appealed to 70% of the participants and the armrest to 56%, while 74% of participants found the softness of the cushion comfortable.
2.5 Recommendations

Although the modern bamboo chair's design is strong enough to hold a person weighing at least 85 kg, the legs of the modern bamboo chair should be widened so that the chair looks stronger. Participants also suggested that a wider armrest would be more comfortable. Finally, participants preferred a darker color, closer to the natural bamboo color.

Acknowledgment This work was supported by Thailand's Commission on Higher Education.
References

1. A. Aaras, K.I. Fostervold, O. Ro, and M. Thoresen (1997) Postural load during VDU work: A comparison between various work postures. Ergonomics, 40(11): 1255–1268.
2. C.J. Cook and K. Kothiyal (1998) Influence of mouse position on muscular activity in the neck, shoulder and arm in computer users. Applied Ergonomics, 29(6): 439–443.
3. R. Burgess-Limerick, A. Plooy, and D. Ankrum (1999) The influence of computer display height on head and neck posture. International Journal of Industrial Ergonomics, 23: 171–179.
4. J.J. Congleton, M.M. Ayoub, and J.L. Smits (1988) The determination of pressures and patterns for the male human buttocks and thigh in sitting utilizing conductive foam. International Journal of Industrial Ergonomics, 2(3): 193–202.
5. D.E. Gyi and J.M. Porter (1999) Interface pressure and the prediction of car seat discomfort. Applied Ergonomics, 30: 99–107.
6. S. Pheasant and C.M. Haslegrave (2006) Bodyspace: Anthropometry, ergonomics and the design of work, 3rd ed. Taylor & Francis, New York.
7. Y. Buntengchit and S. Nowvarutpanommat (2003) Anthropometric study of female students of King Mongkut's Institute of Technology North Bangkok. Proceedings of the 2003 IE Network National Conference.
8. Y. Buntengchit and C. Pisantanakul (2003) Anthropometric study of female students of King Mongkut's Institute of Technology North Bangkok. Proceedings of the 2003 IE Network National Conference.
9. V. Laemlaksakul and S. Kaewkuekool (2006) Laminated bamboo materials for furniture: A systematic approach to innovative product design. WSEAS Transactions on Advances in Engineering Education, 5(3): 435–450.
Chapter 3
Factors Influencing Symbol-Training Effectiveness Annie W.Y. Ng and Alan H.S. Chan
Abstract The aim of this paper is to present a comprehensive review of symbol training over the past 40 years. Three symbol-training methods (paired-associate learning, recognition training, and recall training) that have been commonly used by ergonomists and industrial designers are identified. Factors affecting symbol-training effectiveness are discussed. Experimental design and analysis for symbol-training effectiveness research are also described. This review should be helpful in formulating research plans and methodology for further symbol-training studies.

Keywords: Graphical symbol · experimental design · statistical analysis · symbol-training effectiveness · training method
Annie W.Y. Ng and Alan H.S. Chan, Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong

3.1 Introduction

The terms signs, icons, symbols, pictograms, pictographs, and glyphs often appear in the literature and seem to be used interchangeably when referring to physical objects, concepts, or functions. Even though users may guess what an icon represents the very first time they see it, training can improve the process of understanding the meaning of an icon [1]. Blum and Naylor [2] defined training as "a process that develops and improves skills related to performance." Bailey [3] identified training as "the systematic acquisition of skills, knowledge, and attitudes that will lead to an acceptable level of performance on a specific task in a given context." The extent to which training brings desired or appropriate outcomes is called training effectiveness [4]. Prior research revealed that training significantly improved the comprehension of the meaning of symbolic traffic signs [5], occupational safety symbols [6],
industrial-safety and pharmaceutical symbols [7], service symbols [8], warning symbols on products [9], and hazardous material symbols [10]. In this paper, a comprehensive review of symbol training over the past 40 years is given. Some current symbol-training methods are identified, factors influencing symbol-training effectiveness are discussed, and the experimental designs and statistical analyses used in symbol-training effectiveness research are summarized. This review should provide useful background information for formulating research plans and methodology for further symbol-training studies.
3.2 Factors Influencing Symbol-Training Effectiveness

3.2.1 Training Method

Table 3.1 presents a summary of studies related to symbol training in the past 40 years. Three symbol-training methods have been commonly used by ergonomists and industrial designers. The first method is called paired-associate learning. With this method, learning is done in pairs consisting of a symbol and its meaning, so that one member of the pair evokes recall of the other [5, 11–13]. Other than the option of providing a short phrase, other means such as a mnemonic cue [14], an explanatory statement explaining the nature of the concept or hazard [7], or a short paragraph describing an accident that results from failure to comply with the symbol [9] have also been used.

The second method is called recognition training. Recognition is the ability "to identify something or someone that has been seen, heard, etc., before" [15]. With this method of training, subjects were first informed of the meanings of all symbols. They were then given the meaning of a symbol and asked to choose the most appropriate symbol from the given selections in a trial. Feedback on response accuracy was provided immediately [8, 10, 16].

The third method is called recall training. Recall means to "bring something or someone back into the mind or recollect" [15]. An earlier study by Brown [17] stated that the examination of distracters in recognition may produce some sort of interference, and Sternberg [18] indicated that recall tasks elicit a deeper level of learning than recognition tasks. In the recall training method, subjects were first notified of the meanings of the test symbols and then orally recalled the meaning of every randomly selected symbol. Feedback on the accuracy of responses was given [8, 10, 16].

There does not appear to have been any research comparing the effectiveness of different symbol-training methods. However, there are a few studies on the relationship between paired-associate learning and symbol comprehension [5, 7, 9, 14], and the results indicated that symbol-training effectiveness depends on the training method. A summary of these studies is given in the following paragraphs.

Griffith and Actkinson [14] assessed the influence of training on United States Army armor drivers' comprehension of 128 international road signs.
Table 3.1 Summary of studies related to symbol training

Walker et al. [11]
Purpose: Investigated the hypothesis that symbolic road signs could be more accurately recognized than verbal road signs.
Subjects: 81 students. Symbols: 3 symbolic and 3 verbal traffic signs.
Training method: Subjects studied the signs and their meanings for 5 minutes.

Griffith and Actkinson [14]
Purpose: Studied the effects of training on the interpretability of road signs (training factor: training program).
Subjects: Drivers in the US Army armor. Symbols: 128 international road signs.
Training method: Three conditions were used: (i) sign only: the signs were presented individually for 10 seconds each while the instructor read aloud the name of the sign twice; (ii) sign elaboration: the signs were presented individually for 10 seconds each while the instructor provided the name of the sign and a mnemonic cue orally; (iii) standard lecture: a series of lessons supplemented with training aids.

Allen et al. [5]
Purpose: Assessed the effects of age and training on traffic sign recognition (training factors: age and training program).
Subjects: Drivers. Symbols: 72 symbolic traffic signs used in the USA.
Training method: Three conditions were used: (i) received an educational pamphlet explaining the meaning and nature of the signs; (ii) received a review of each sign with an educational plaque below it in the driving simulator; (iii) both (i) and (ii).

Cairney and Sless [6]
Purpose: Evaluated the performance on symbol identification.
Subjects: 144 students from adult education. Symbols: 19 occupational safety symbols.
Training method: Subjects were required to identify the meaning of a randomly selected symbol within 30 seconds; the experimenter provided feedback on the accuracy of their responses.

Wogalter et al. [7]
Purpose: Assessed the effect of training on the comprehension and retention of symbols over time (training factor: training program).
Subjects: 60 university students. Symbols: 20 industrial-safety and 20 pharmaceutical symbols.
Training method: Two conditions were used: (i) while subjects viewed each symbol along with a verbal label, the experimenter read the label aloud; (ii) while subjects viewed each symbol along with a verbal label and an explanatory statement, the experimenter read the label and statement aloud.

Ramakrishnan et al. [8]
Purpose: Evaluated the effectiveness of symbol coding techniques.
Subjects: 28 airway facilities field personnel. Symbols: 21 Federal Aviation Administration facilities and service symbols.
Training method: The method contained three steps: (i) subjects were shown the correct symbol for each examined facility; (ii) recognition training: for each trial, subjects saw the name of a facility and were asked to choose the corresponding symbol from a set of symbols until the correct symbol was identified; the name and the symbol remained on the screen for 3 seconds to help subjects learn the symbol; (iii) recall training: for each trial, subjects were required to orally recall the meaning of a symbol; the correct meaning was provided immediately after each response.

Wang and Chen [12]
Purpose: Investigated the effects of symbol, gender, and training on symbol comprehension (training factors: symbol, gender, and training program).
Subjects: 60 university students. Symbols: 12 hazard symbols.
Training method: The experimenter told the subjects the meaning of the symbols.

Wang et al. [16]
Purpose: Studied the effects of prohibitive traffic sign design on users' subjective preference and visual performance.
Subjects: 40 university students. Symbols: 9 symbolic and 1 verbal Taiwan traffic signs.
Training method: The method contained three steps: (1) the experimenter told the subjects the meaning of the signs through the use of a traffic sign booklet; (2) recognition training: for every trial, subjects saw the meaning of a randomly selected sign and were asked to choose the corresponding sign from the booklet until the correct sign was identified; the experimenter repeated the answer once to help subjects learn the sign; (3) recall training: for each trial, subjects were required to orally recall the meaning of a randomly selected sign until the correct meaning was recalled; the experimenter repeated the answer once to help subjects learn the sign.

Chen and Wang [13]
Purpose: Explored the effects of symbol, education level, and gender on conceptual compatibility.
Subjects: 48 participants. Symbols: 12 hazard symbols.
Training method: The experimenter told the subjects the meaning of the symbols.

Lesch [9]
Purpose: Studied age-related differences and the impact of training on comprehension and memory for warning symbols (training factors: age and training program).
Subjects: 92 participants. Symbols: 41 warning symbols used for labeling hazards associated with products.
Training method: Three conditions were used: (1) verbal label: subjects viewed each symbol along with a verbal label describing the meaning of the symbol; (2) explanatory statement: subjects viewed each symbol with a verbal label and a brief statement explaining the nature of the hazard; (3) accident scenario: subjects viewed each symbol with a verbal label and a short paragraph describing an accident that resulted from failure to comply with the symbol.

Wang and Chi [10]
Purpose: Investigated the effects of symbol, educational specialization, and training on symbol comprehension (training factors: symbol, educational specialization, and training program).
Subjects: 60 university graduates. Symbols: 12 hazardous material symbols.
Training method: The method contained three steps: (1) the experimenter told the subjects the meaning of the symbols through the use of a hazard symbol label booklet; (2) recognition training: for every trial, subjects saw the meaning of a randomly selected symbol and were asked to choose the corresponding symbol from the booklet; the experimenter provided feedback on the accuracy of their responses; (3) recall training: for each trial, subjects were required to orally recall the meaning of a randomly selected symbol; the correct meaning was provided for any incorrect responses made.
Three training conditions, viz., sign only, sign elaboration, and standard lecture, were employed in their study. In the sign-only condition, the signs were presented individually for 10 seconds each while the instructor read aloud the name of the sign twice. In the sign-elaboration condition, the signs were presented individually for 10 seconds each while the instructor provided the name of the sign and a mnemonic cue orally. The standard lecture was a series of lessons supplemented with training aids. Performance on sign comprehension was shown to improve after training; however, there were no statistically significant differences among the training conditions.

Allen et al. [5] investigated the effect of training on drivers' understanding and retention of 72 symbolic traffic signs contained in the United States Manual on Uniform Traffic Control Devices. One group of drivers received an educational pamphlet explaining the meaning and the nature of the signs, one group received a review of each sign with an educational plaque shown below it in the driving simulator, and one group received a combination of both. The results revealed that all three training conditions increased comprehension and memory for the meaning of the traffic signs; however, the differences among training conditions were not significant.

Wogalter et al. [7] examined the influence of training on the understanding and memory of 40 industrial-safety and pharmaceutical symbols. Two paired-associate learning conditions were tested: a verbal-label condition and a verbal-label-with-explanatory-statement condition. Thirty participants viewed each symbol along with a verbal label (used to describe the meaning of the symbol) while the experimenter read the label aloud. Another 30 participants viewed each symbol along with a verbal label and an explanatory statement (used to describe the nature of the concept or hazard) while the experimenter read the label and statement aloud. The results demonstrated that both conditions improved comprehension and memory for the meaning of the test symbols. Surprisingly, comprehension and memory were no better when an additional explanatory statement was provided than when a verbal label alone was presented. This may be attributed to the following:

1. Subjects were not able to encode the explanatory statements satisfactorily.
2. The retention measure was not sensitive enough to evaluate the effect of the explanatory statements.
3. The verbal label alone evoked retention to a near-ceiling level.
4. The explanatory statements failed to provide additional memory codes other than those provided by the verbal labels.

Lesch [9] evaluated the effect of training on the comprehension and memory of 41 warning symbols used for labeling hazards associated with products. In addition to the verbal-label and explanatory-statement conditions, an accident-scenario condition was also included. In the verbal-label condition, subjects viewed each symbol along with a verbal label describing the meaning of the symbol. In the explanatory-statement condition, subjects viewed each symbol with a verbal label and a brief statement explaining the nature of the hazard. In the accident-scenario condition,
subjects viewed each symbol with a verbal label and a short paragraph describing an accident that resulted from failure to comply with the symbol. It was found that the three training conditions significantly enhanced the understanding and retention of the meaning of the warning symbols. The verbal label produced the best performance, followed by the explanatory statement and then the accident scenario. Two reasons were offered to explain the failure of the accident-scenario condition to provide additional benefit relative to the explanatory-statement and verbal-label conditions:

1. The length of the scenarios and the large number of symbols trained might have overloaded memory.
2. Participants might not have processed the accident scenarios sufficiently.
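To make the distinction between the three training methods in Sect. 3.2.1 concrete, the following hedged sketch simulates one trial loop for each method; the symbol set, the simulated trainee responses, and all function names are hypothetical illustrations, not materials from any of the studies reviewed.

# Hedged schematic of paired-associate learning, recognition training, and
# recall training.  Symbols are stood in for by text placeholders, and the
# trainee's answers are random stand-ins for real responses.
import random

symbols = {"corrosive": "SYMBOL-A", "flammable": "SYMBOL-B", "irritant": "SYMBOL-C"}

def paired_associate_learning(pairs):
    # Each symbol is presented together with its meaning.
    for meaning, symbol in pairs.items():
        print(f"study: {symbol} = '{meaning}'")

def recognition_training(pairs, trials=3):
    # A meaning is shown; the (simulated) trainee picks a symbol from the
    # full set and gets immediate feedback.
    for meaning in random.sample(list(pairs), trials):
        choice = random.choice(list(pairs.values()))        # simulated answer
        feedback = "correct" if choice == pairs[meaning] else f"no, it is {pairs[meaning]}"
        print(f"'{meaning}' -> {choice}: {feedback}")

def recall_training(pairs, trials=3):
    # A symbol is shown; the (simulated) trainee states a meaning and gets
    # immediate feedback.
    for meaning, symbol in random.sample(list(pairs.items()), trials):
        stated = random.choice(list(pairs))                 # simulated answer
        feedback = "correct" if stated == meaning else f"no, it means '{meaning}'"
        print(f"{symbol} -> '{stated}': {feedback}")

paired_associate_learning(symbols)
recognition_training(symbols)
recall_training(symbols)

In a real experiment the simulated responses would be replaced by the trainee's actual choices, and accuracy would be recorded for analysis.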
3.2.2 Other Training Factors

Symbol-training effectiveness depends not only on the training method but also on the symbol design and the characteristics of the trainee population (e.g., age). Regarding symbol design, training was still effective for well-designed symbols, but poorly designed symbols may not be successfully understood in a 1-month posttraining test [10]. For trainee characteristics, symbol knowledge decreased as age increased, both before and after training [5]. Lesch [9] found that older participants (50 to 67 years) performed much more poorly than younger participants (18 to 35 years) in the comprehension of warning symbols, both before and after training. Fisk et al. [19] summarized the issues that need to be considered in developing training programs for older adults.

The influences of educational specialization [10] and gender [12] on symbol-training effectiveness have also been examined; however, neither was a significant training factor. Simon [20] revealed that trainees whose learning style matched the training methodology achieved better training outcomes, but the learning-style effect has not been studied in symbol-training research. Future experiments should investigate whether each current symbol-training method has unique merits for meeting training objectives designed for learners with particular learning styles.
3.3 Experimental Design and Analysis for Symbol-Training Effectiveness Research

The previous section introduced the factors influencing symbol-training effectiveness; the experimental designs and analyses used in symbol-training effectiveness research are described here. Pretest–posttest designs are widely used in symbol-training research, primarily for the purpose of measuring the effectiveness of different training conditions [5, 7, 9, 10, 12]. Subjects were measured before and immediately after training in the pretest and
posttest, respectively. To determine whether training effects are maintained over time, another posttest was held 1 week [5] or 1 month [10] after the immediate posttest. To prevent subjects from retaining the meaning of the test symbols in short-term memory through subvocal rehearsal, an intervening task, such as a letter-search task and demographic questionnaire [7], playing poker [10], or a puzzle game [12], was performed immediately after training.

Posttest score, difference score, and percentage change are three common indicators of training effectiveness. The difference score is the change in score between posttest and pretest, and the percentage change is the ratio of the difference score to the pretest score. Percentage change is not recommended for use in the analysis of pretest–posttest designs [21, 22]. First, the distribution of percentage change is usually not normal and thus violates the assumptions of most parametric statistical tests. Second, percentage change creates a bias and overemphasizes the performance improvement of the group with poorer baseline scores. Nevertheless, some recent studies have used this statistic in pretest–posttest design analysis [23, 24]. Newby [25] recommended the gain ratio, which compares the actual improvement with the theoretical maximum possible improvement, for measuring the amount of learning achieved by a trainee during a training activity. The gain ratio was used earlier by Hovland et al. [26] and in some recent education studies [27–31]; however, it has not been used in symbol-training research.

Three methods are recommended for analyzing data collected from a pretest–posttest comparison group design: (1) the posttest score approach, (2) the analysis of covariance approach, and (3) the difference score approach [32, 33]. Assuming that there are three treatment groups and one control group in a pretest–posttest design, subjects are randomly assigned to the groups prior to the pretest, and each group is measured before and after training. With the first approach, an analysis of variance is performed using the posttest score as the dependent variable and the treatment condition as the independent variable. With the second approach, an analysis of covariance is conducted using the pretest score as the covariate, the posttest score as the dependent variable, and the treatment condition as the independent variable. In the third approach, an analysis of variance is performed using the difference score as the dependent variable and the treatment condition as the independent variable. When the data to be analyzed in a pretest–posttest design are not normally distributed, nonparametric analyses should be undertaken.

The posttest score approach is less powerful than the other two approaches because pretest scores are ignored during data analysis [33]. Girden [34] specified that when the regression coefficient of the posttest score on the pretest score equals 1, the difference score approach and the analysis of covariance (ANCOVA) approach produce the same F ratio, with the difference score analysis being slightly more powerful because of the degrees of freedom lost with ANCOVA. When the regression coefficient is less than 1, the error term is smaller in ANCOVA, resulting in a more powerful test.

It has been noted that performance may change from pretest to posttest without treatment through maturation, history, regression revisited, mortality, instrumentation, and testing [35]. Maturation denotes the natural physiological changes, such as fatigue, hunger, and growth, that occur from pretest to posttest within the trainee.
History specifies events other than the treatment that have occurred between pretest and posttest.
For example, there may be an increase in room temperature. Regression revisited refers to the situation where participants are chosen on the basis of their extreme pretest scores; regardless of whether there is a treatment, subjects whose scores are high (low) on the first assessment will probably show a decrease (an increase) in score when they are measured a second time. Mortality indicates the phenomenon in which fewer participants are measured at posttest than at pretest. Instrumentation denotes that the measuring instrument used for the posttest differs from the one used for the pretest. Testing refers to the situation where taking the pretest itself changes posttest performance. Because performance may change from pretest to posttest without treatment, the above six factors should be considered in the design of pretest–posttest experiments for symbol-training studies.
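As an illustration of the indicators and analysis approaches summarized above, the sketch below computes difference scores, percentage change, and Newby's gain ratio on made-up pretest/posttest scores (not data from any study cited here), and runs the posttest-score, difference-score, and ANCOVA analyses with statsmodels; the maximum attainable score of 40 is an assumed value.

# Hedged sketch of the training-effectiveness indicators and the three
# recommended analyses, on simulated data for one trained and one control group.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 30
df = pd.DataFrame({
    "group": ["trained"] * n + ["control"] * n,
    "pre": rng.normal(20, 4, 2 * n).round(),
})
gain = np.where(df["group"] == "trained", 8, 1)          # simulated training effect
df["post"] = df["pre"] + gain + rng.normal(0, 2, 2 * n).round()

max_score = 40  # assumed maximum attainable score
df["difference"] = df["post"] - df["pre"]
df["pct_change"] = 100 * df["difference"] / df["pre"]
df["gain_ratio"] = df["difference"] / (max_score - df["pre"])  # Newby's gain ratio

# (1) posttest-score approach and (3) difference-score approach: one-way ANOVA
print(sm.stats.anova_lm(smf.ols("post ~ C(group)", df).fit()))
print(sm.stats.anova_lm(smf.ols("difference ~ C(group)", df).fit()))

# (2) ANCOVA approach: pretest score as covariate
print(sm.stats.anova_lm(smf.ols("post ~ pre + C(group)", df).fit(), typ=2))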
3.4 Conclusion

In the paragraphs above, a comprehensive review of symbol training over the past 40 years was presented. Three commonly used symbol-training methods, viz., paired-associate learning, recognition training, and recall training, were identified. Factors influencing symbol-training effectiveness were discussed, and the experimental designs and statistical analyses for symbol-training effectiveness research were summarized. This review should be helpful for designing symbol-training programs that are more user-friendly for use in industry.
References 1. R.S. Goonetilleke, H.M. Shih, H.K. On, and J. Fritsch (2001) Effects of training and representational characteristics in icon design. International Journal of Human-Computer Studies, 55: 741–760. 2. M.L. Blum and J.C. Naylor (1968) Training and learning. In H. Wayne and G. Murphy (eds.) Industrial psychology: Its theoretical and social foundations. Harper & Row, New York, pp. 237–275. 3. R.W. Bailey (1996) Human performance engineering: Designing high quality professional user interfaces for computer products, applications and systems. Prentice Hall PTR, Upper Saddle River, NJ. 4. E. Salas, K.A. Burgess, and J.A. Canon-Bowers (1995) Training effectiveness techniques. In J. Weimer (ed.), Research techniques in human engineering. Prentice Hall, Englewood Cliffs, NJ, pp. 439–475. 5. R.W. Allen, Z. Parseghian, and P.G. Van Valkenburgh (1980) Simulator evaluation of age effects on symbol sign recognition. Proceedings of the Human Factors and Ergonomics Society 24th Annual Meeting. Santa Monica, CA: Human Factors and Ergonomics Society, pp. 471– 475. 6. P. Cairney and D. Sless (1982) Communication effectiveness of symbolic safety signs with different user groups. Applied Ergonomics, 13: 91–97. 7. M.S. Wogalter, R.J. Sojourner, and J.W. Brelsford (1997) Comprehension and retention of safety pictorials. Ergonomics, 40: 531–542. 8. A.S. Ramakrishnan, R.L. Cranston, A. Rosiles, D. Wagner, and A. Mital (1999) Study of symbols coding in airway facilities. International Journal of Industrial Ergonomics, 25: 39–50.
9. M.F. Lesch (2003) Comprehension and memory for warning symbols: Age-related differences and impact of training. Journal of Safety Research, 34: 495–505. 10. A.H. Wang and C.C. Chi (2003) Effects of hazardous material symbol labeling and training on comprehension according to three types of educational specialization. International Journal of Industrial Ergonomics, 31: 343–355. 11. R.E. Walker, R.C. Nicolay, and C.R. Steams (1965) Comparative accuracy of recognizing American and international road signs. Journal of Applied Psychology, 49: 322–325. 12. A.H. Wang and M.T. Chen (2000) Effects of symbolic pictorial, gender, and training on users’ comprehension for hazardous labels. Journal of Ergonomic Study, 2: 81–88 (in Chinese). 13. M.T. Chen and A.H. Wang (2003) Effects of dangerous materials symbols, educational background, and gender on conceptual compatibility. Institute of Occupational Safety and Health Journal, 11: 188–196 (in Chinese). 14. D. Griffith and T.R. Actkinson (1977) International road signs: interpretability and training techniques. Proceedings of the Human Factors Society 21st Annual Meeting. Santa Monica, CA, Human Factors and Ergonomics Society, pp. 392–396. 15. B. Li (1999) Oxford advanced learner’s English–Chinese dictionary (Revised extended 4th ed.). Oxford University Press, Hong Kong. 16. A.H. Wang, H.S. Lin, and M.T. Chen (2002) Effects of prohibitive traffic signs design on drivers’ subjective preference and visual performance under different driving velocities. Journal of the Chinese Institute of Industrial Engineers, 19: 105–115 (in Chinese). 17. J. Brown (1976) An analysis of recognition and recall and of problems in their comparison. In J. Brown (ed.), Recall and recognition. Wiley, London, pp. 1–35. 18. R.J. Sternberg (2003) Cognitive psychology. Thomson/Wadsworth, Belmont, CA; Australia. 19. D. Fisk, W.A. Rogers, N. Charness, S.J. Czaja, and J. Sharit (2004) Designing for older adults: Principles and creative human factors approaches.: CRC Press, Boca Raton, FL; London. 20. S.J. Simon (2000) The relationship of learning style and training method to end-user computer use: a structural equation model. Information Technology, Learning, and Performance Journal, 18: 41–59. 21. P.L. Bonate (2000) Analysis of pretest-posttest designs. Chapman & Hall, Boca Raton, FL. 22. A.J. Vickers The use of percentage change from baseline as an outcome in a controlled trial is statistically inefficient: a simulation study. BMC Medical Research Methodology, 1. 23. C.L. Loprinzi, J.C. Michalak, S.K. Quella, J.R. O’Fallon, A.K. Hatfield, R.A. Nelimark, A.M. Dose, T. Fischer, C. Johnson, N.E. Klatt, W.W. Bate, R.M. Rospond, and J.E. Oesterling (1994) Megestrol acetate for the prevention of hot flashes. The New England Journal of Medicine, 331: 347–352. 24. H. Anderson, P. Hopwood, R.J. Stephens, N. Thatcher, B. Cottier, M. Nicholson, R. Milroy, T.S. Maughan, S.J. Falk, M.G. Bond, P.A. Burt, C.K. Connolly, M.B. Mclllmurray, and J. Carmichael (2000) Gemcitabine plus best supportive care (BSC) vs BSC in inoperable non-small cell lung cancer—a randomized trial with quality of life as the primary outcome. British Journal of Cancer, 83: 447–453. 25. T. Newby (1992) Training evaluation handbook. Gower, Aldershot, Hants, England. 26. C.I. Hovland, A.A. Lumsdaine, and F.D. Sheffield (1949) Experiments on mass communication. Princeton University Press, Princeton, NJ. 27. R.R. 
Hake (1998) Interactive-engagement versus traditional methods: a six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66: 64–74. 28. K. Cummings, J. Marx, R. Thomton, and D. Kuhl (1999) Evaluating innovation in studio physics. Physics education research: A Supplement to the American Journal of Physics, 67: S38–S44. 29. E.F. Redish and R.N. Steinberg (1999) Teaching physics: Figuring out what works. Physics Today, 24–30. 30. D.E. Meltzer (2002) The relationship between mathematics preparation and conceptual learning gains in physics: A possible “hidden variable” in diagnostic pretest scores. American Journal of Physics, 70: 1259–1268.
38
Annie W.Y. Ng and Alan H.S. Chan
31. T.L.N. Emerson and B.A. Taylor (2004) Comparing student achievement across experimental and lecture-oriented sections of a principles of microeconomics course. Southern Economic Journal, 70: 672–693. 32. D.M. Dimitrov and P.D. Rumrill (2003) Pretest-posttest designs and measurement of change. Work, 20: 159–165. 33. J.A. Gliner, G.A. Morgan, and R.J. Harmon (2003) Pretest-posttest comparison group designs: Analysis and interpretation. Journal of the American Academy of Child & Adolescent Psychiatry, 42: 500–503. 34. E.R. Girden (1992) ANOVA: Repeated measures. Sage, Newbury Park, CA. 35. M.L. Mitchell and J.M. Jolley (2004) Research design explained, 5th edition. Wadsworth, Belmont, CA; London.
Chapter 4
Multiple-Colony Ant Algorithm with Forward–Backward Scheduling Approach for Job-Shop Scheduling Problem

Apinanthana Udomsakdigool and Voratas Kachitvichyanukul

Apinanthana Udomsakdigool: Department of Industrial Engineering Technology, College of Industrial Technology, King Mongkut's Institute of Technology North Bangkok, 1518 Pibulsongkram Road, Bang Sue, Bangkok 10800, Thailand
Voratas Kachitvichyanukul: Department of Industrial System Engineering, School of Engineering and Technology, Asian Institute of Technology, P.O. Box 4, Klong Luang, Pathumthani 12120, Thailand
4.1 Introduction

The job-shop scheduling problem (JSP) is one of the strongly nondeterministic polynomial-time hard (NP-hard) combinatorial optimization problems and is difficult to solve optimally for large-size problems. For practical purposes, several approximation algorithms that can find good solutions in an acceptable time have been developed. Most conventional ones in practice are based on priority dispatching rules (PDRs). In recent years ant colony optimization (ACO) has been receiving attention for solving scheduling problems, including both static and dynamic scheduling problems. Successful applications of the ant algorithm to static problems have been reported for the single-machine weighted tardiness problem (Gagné et al. [1], Gravel et al. [2]), the flow-shop scheduling problem (Shyu et al. [3], Ying and Liao [4]), the open-shop scheduling problem (Blum [5]), and the resource-constrained project scheduling problem (Merkle et al. [6]). The application to the JSP has proven to be quite difficult. The first group of researchers (Colorni et al. [7]) that applied the ACO algorithm to the JSP was far from reaching state-of-the-art performance. Later Blum and Sampels [8] developed an ant algorithm for shop scheduling, and their results indicate that the algorithm works well when applied to the open-shop problem. In Udomsakdigool and Kachitvichyanukul [9] the forward–backward scheduling approach was applied in a single-colony ant algorithm, and it was found that this approach improves the solutions of many problems. Recently, Udomsakdigool and
Kachitvichyanukul [10] introduced the multiple-colony ant algorithm to find JSP solutions. This approach uses more than one colony of ants working cooperatively on the solution, and the results clearly show that the solutions of most scheduling problems can be improved. This paper proposes a new ACO-based method for solving the JSP that combines forward–backward scheduling with a multiple-colony approach. In the proposed ant algorithm, each colony contains two types of ants that construct solutions in the order of precedence and in the reverse order of precedence of the processing sequences. These two types of ants exchange information by modifying the pheromone trails in the same pheromone matrix. Each colony is characterized by the information it uses to guide its search; that is, each colony is forced to search in a different region of the search space, and the colonies cooperate to find good solutions by exchanging information. The proposed algorithm is investigated for its potential in solving the benchmark instances available in the OR-Library (Beasley [11]). This paper is organized as follows. In Section 4.2, a definition of the JSP, a graph-based representation, the general concept of ACO, the memory requirements for ant and colony, the hierarchical cooperation in multiple colonies, and the backward scheduling approach are given. The descriptions and features of the proposed ant algorithm are presented in Section 4.3. The computational results on benchmark problems are provided in Section 4.4. Finally, the conclusion and recommendations for further study are presented in Section 4.5.
4.2 Problem Definition and Graph-Based Representation

4.2.1 Problem Definition

The n × m JSP can be defined as a set J of n jobs {J_i, i = 1, ..., n} to be processed on a set M of m machines {M_j, j = 1, ..., m}. Each job J_i is composed of a set of operations {O_ij, j = 1, ..., m} to be performed on the machines, each for an uninterrupted processing time p_ij. The processing order of the operations of each job represents the predetermined route of the job through the machines (precedence constraint). If the relation O_ip → O_iq is in a chain and both operations belong to job J_i, then there is no O_ik with O_ip → O_ik or O_ik → O_iq, and O_ip has to be finished before O_iq can start. Each machine can process at most one job, and each job can be processed by only one machine at a time (machine constraint). The objective is to determine the starting times S_ij of all operations so as to minimize the makespan C_max while satisfying all precedence and machine constraints, as stated in equation (4.1) and the inequalities (4.2) and (4.3), respectively:

min C_max = min{ max_(O_ij ∈ O) (S_ij + p_ij) : ∀J_i ∈ J, ∀M_j ∈ M },   (4.1)

subject to

S_ij ≥ S_ik + p_ik   when O_ik → O_ij,   (4.2)

(S_ij ≥ S_kj + p_kj) ∨ (S_kj ≥ S_ij + p_ij).   (4.3)
4.2.2 Graph-Based Representation

Every instance of the JSP can be formulated using a graph-based representation called a disjunctive graph G = (O, C, D), where O is the set of all nodes (processing operations), C is the set of conjunctive directed arcs, and D is the set of disjunctive undirected arcs representing the machine constraints between operations belonging to different jobs. C corresponds to the precedence relationships between the operations of a single job; thus, operations belonging to the same job are connected in sequence. The operations of jobs that are processed on the same machine are connected pairwise in both directions. Two additional fictitious nodes, the source (the predecessor of the first operation of every job) and the sink (the successor of the last operation of every job), both with zero processing time, are added to the set. A path P is defined as an acyclic sequence of all operations that represents a possible solution of the instance. The makespan of a schedule is equal to the longest path from source to sink in P. This path is called a critical path, and the operations that it passes through are called critical operations. An example job-shop problem is introduced here to explain these ideas. The instance consists of nine operations that are partitioned into three jobs and have to be processed on three machines. The details of the example problem are shown in Table 4.1. The disjunctive graph of the example is presented in Fig. 4.1, where the dotted lines represent the disjunctive undirected arcs of machines and the bold lines represent the conjunctive directed arcs of jobs. When the processing order is determined, the directions of the arcs are selected in such a way that P is acyclic. For example, suppose the processing order of operations is selected based on the most-work-remaining criterion, i.e., choose the operation with the highest work remaining. The sequence of operations of P is then {source → O1 → O7 → O4 → O2 → O8 → O5 → O3 → O6 → O9 → sink}. The Gantt chart of P is presented in Fig. 4.2; the critical path passes through {O1 → O8 → O5 → O6}, and the makespan is equal to 13 units of time.
Table 4.1 The details of the example problem

         Operation         Machine           Process time
Job 1    O1   O2   O3      M3   M2   M1      3   4   3
Job 2    O4   O5   O6      M2   M3   M1      3   3   2
Job 3    O7   O8   O9      M1   M3   M2      3   5   1
Fig. 4.1 Disjunctive graph of the example problem
Fig. 4.2 Gantt chart of the solution of the example problem

4.2.3 The General Concept of the ACO Algorithm

The ACO algorithm is a metaheuristic inspired by the foraging behavior of real ants. This behavior is the basis for the local interaction of each ant, which leads to the emergence of the shortest path. In the ACO algorithm a finite-size colony of artificial ants collectively searches for good-quality solutions to the problem under consideration. There are two main components in an ant algorithm: solution construction and pheromone trail update. In the construction step, each ant constructs a feasible solution using an incremental constructive approach. Each ant builds a solution, starting from an initial state and moving through a sequence of neighboring states, by applying a stochastic local search policy directed (a) by the private information of the ant and (b) by the publicly available pheromone trails accumulated by all the ants from the beginning of the search process, together with a priori problem-specific local information. After the ants complete their solutions, the pheromone values are updated depending on the quality of the solutions; the better the solution, the stronger the pheromone value. This process serves as a positive feedback mechanism for sharing information about the quality of the solutions found, and the evaporation process allows the
ants to forget regions that do not contain high-quality solutions. After the ants repeat this procedure for a certain number of iterations, the path with the strongest pheromone value becomes the dominant solution. This solution represents the shortest path through the states of the problem and emerges as a result of the global cooperation among all ants of the colony (for more detail, see Dorigo and Stützle [12]).
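The overall loop can be summarized in a minimal sketch (written here in Python purely for illustration; the authors' implementation is in C, and build_solution and makespan are placeholder callables, not names from the chapter):

# Generic ACO skeleton: ants build solutions, the best one reinforces the
# pheromone trail, and evaporation weakens trails that stop being reinforced.
def aco(build_solution, makespan, n_ants, n_iterations, rho=0.1):
    pheromone = {}                              # edge (i, j) -> trail strength, filled lazily
    best, best_cost = None, float("inf")
    for _ in range(n_iterations):
        # 1. Solution construction by every ant in the colony.
        solutions = [build_solution(pheromone) for _ in range(n_ants)]
        # 2. Evaluation: keep the best solution found so far.
        for s in solutions:
            c = makespan(s)
            if c < best_cost:
                best, best_cost = s, c
        # 3. Pheromone update: evaporate all trails, then deposit on the
        #    edges of the best solution (positive feedback).
        for edge in pheromone:
            pheromone[edge] *= (1.0 - rho)
        for edge in zip(best, best[1:]):
            pheromone[edge] = pheromone.get(edge, 0.0) + rho
    return best, best_cost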
4.2.4 Memory Requirement for Ant and Colony

In an ant algorithm, a finite number of ants in a colony search for solutions independently. The ants share their experience by updating the pheromone matrix, and through this cooperative behavior the best solution emerges after a number of iterations. To implement the ant algorithm, basic data structures have to be defined. These structures must allow data storage for the problem instance and the pheromone trail, as well as represent each individual ant. To manage the solution construction, each ant has its own local memory to store the past history of its movements along with useful information for computing the goodness of a move. Moreover, this memory plays a fundamental role in maintaining the feasibility of the solutions. Once each ant has accomplished its task, information sharing among the individuals in the colony is achieved through the update of a common memory called the pheromone matrix. The required data structures are shown in Fig. 4.3. The pheromone trail matrix collects the pheromone trails, which represent a long-term memory of the search experience of the colony. The pheromone trails change over time depending on the pheromone updating rule. The heuristic information matrix collects heuristic information for possible exploitation of problem-specific knowledge. The heuristic information used by the ants may be static or dynamic. In the static case the values of the heuristic information are computed once at initialization time and remain unchanged throughout the run; an example is the use of processing time as heuristic information. In contrast, dynamic heuristic information depends on the partial solution constructed so far and therefore has to be recomputed at each step of an ant's journey; an example is heuristic information based on the remaining work time, whose value varies with the partial solution constructed. Accordingly, each ant has its own memory to manage the solution construction. This memory is equipped with four lists: the unvisit list contains the unscheduled operations; the allowable list contains the operations that do not violate the technological constraints; to reduce the search space in the allowable list and to guide an ant to search in a high-quality region, some constraints are imposed on the ant's walk, and the operations satisfying those constraints are kept in the candidate list; finally, the visit list keeps the selected operations, i.e., the sequence of moves made by the ant.
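A minimal sketch of these data structures is given below (illustrative Python with hypothetical names; it is not taken from the authors' C implementation):

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Edge = Tuple[str, str]                      # (from_operation, to_operation)

@dataclass
class ColonyMemory:
    # Shared, long-term memory of the colony.
    pheromone: Dict[Edge, float] = field(default_factory=dict)   # pheromone trail matrix
    heuristic: Dict[Edge, float] = field(default_factory=dict)   # static, or recomputed each step

@dataclass
class AntMemory:
    # Private memory of a single ant during solution construction.
    unvisit: List[str] = field(default_factory=list)     # operations not yet scheduled
    allowable: List[str] = field(default_factory=list)   # operations whose predecessors are done
    candidate: List[str] = field(default_factory=list)   # allowable operations kept after pruning
    visit: List[str] = field(default_factory=list)       # sequence of moves made so far

    def schedule(self, op: str) -> None:
        # Record a selected operation and remove it from the unvisit list.
        self.visit.append(op)
        self.unvisit.remove(op)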
Fig. 4.3 Memory requirements for ant and colony: the colony memory consists of the pheromone trail matrix and the heuristic information matrix (entries indexed from the source node through operations O1–O9 to the sink node), and the ant memory consists of the unvisit, allowable, candidate, and visit lists; (S) start or source node, (F) finish or sink node
4.2.5 Hierarchical Cooperation in Multiple Colonies

The basic idea of multiple colonies is to coordinate the activity of different ant colonies, each of which optimizes the makespan of the problem instance. Ants in multiple colonies cooperate at two levels: high-level cooperation and low-level cooperation. In low-level cooperation, individuals within a colony cooperate to find the best solution. In high-level cooperation, each colony uses the useful information collected from the other colonies to find the best solution. The hierarchical levels of cooperation in multiple colonies are shown in Fig. 4.4. In the proposed technique, the colonies work by using both an individual, or local, pheromone matrix and an overall, or global, pheromone matrix. As shown in Fig. 4.5, the local pheromone matrix stores the information of each colony. In contrast, the global pheromone matrix serves the role of a global memory, collecting information from all colonies. Ants in each colony perform the same task, that is, to find a solution in the search space. The exploration of the search space in each colony may be guided by different candidate list strategies and heuristic information. When the ants construct a solution, they use a combination of information from both the global pheromone trail and the local pheromone trail. The local pheromone trails are updated separately, but the global pheromone trail is updated only by the best solution of all colonies.
Fig. 4.4 Hierarchical level of cooperation in multiple colonies
Fig. 4.5 The pheromone trail matrix and the cooperation in multiple colonies

In summary, the multiple-colony ant algorithm is a technique in which ants cooperate to find good solutions by using both the local experience within a colony and the information shared among colonies.
4.2.6 Backward Scheduling Approach

In general, a given instance of the job-shop scheduling problem can be converted into an equivalent problem, called the reversed problem. The reversed problem assumes that the operations must be processed in the reverse order of the original problem; that is, the predecessors of each operation are considered as its successors. If the reversed problem is used, however, the resulting schedule should
be reversed back to the regular time frame after it is solved. The start time and the completion time of a job in the original problem are related to the completion time and the start time of the job in the reversed problem, respectively. It sometimes happens that a reversed problem is easier to solve than the original one. For example, suppose the criterion used to select the processing order of operations of the example problem is the same, but the operations are selected in the reverse order of the precedence constraints. The sequence of operations of P is then {source → O3 → O9 → O6 → O8 → O2 → O5 → O1 → O7 → O4 → sink}. The reverse of this sequence can be used to form a schedule. If a non-delay schedule is required as a solution, a left shift of operations (moving operations to the earliest possible time without violating the precedence constraints and the given order of operations) on each machine must be done at the final step of the backward approach. In this example, the makespan is reduced to 12 units of time. The Gantt chart of the reversed sequence of schedule P is illustrated in Fig. 4.6.

Fig. 4.6 Gantt chart of the reverse of P with left shift procedure
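The bookkeeping for converting a reversed schedule back to the regular time frame can be sketched as follows (an illustrative Python fragment under the stated relation between start and completion times; the left-shift step is not shown):

def reverse_to_forward(reversed_schedule):
    # reversed_schedule maps operation -> (start, finish) in the reversed problem.
    makespan = max(finish for _, finish in reversed_schedule.values())
    forward = {}
    for op, (start, finish) in reversed_schedule.items():
        # Completion time in the reversed problem becomes the start time in the
        # original problem, and vice versa, mirrored about the makespan.
        forward[op] = (makespan - finish, makespan - start)
    return forward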
4.3 Description of Algorithm

In the proposed ant algorithm, hereinafter called MFBAnt, there are at least two heterogeneous colonies characterized by the heuristic information they use to guide their search. In each colony two kinds of ants, forward ants and backward ants, work toward the best solutions. The main steps in MFBAnt are solution construction and pheromone updating. In the construction step, the forward ants and backward ants select an operation from the set of allowable operations by applying a two-step random proportional transition rule guided by the pheromone trail and the heuristic information (Dorigo and Gambardella [13]). After the ants complete their solutions, a local improvement is performed. Then the two-step pheromone updating (local and global updating rules) is performed. A restart process is also applied when the search is trapped in a local optimum. The procedure of MFBAnt is shown in Fig. 4.7. The details of each step are described in the following sections.
4.3.1 Initialize Pheromone and Parameter Setting

The pheromones are initialized with values drawn as random numbers in the interval (0.1, 0.5); the reason is to enforce diversification at the start of the algorithm. The parameters that control the search are set as follows: the importance weight of the pheromone trail, α = 1; the importance weight of the heuristic information, β = 5; the importance weight of the global pheromone trail, w = 0.7; the pheromone evaporation weights of the local and global pheromone matrices, ρ_l = ρ_g = 0.1; and the exploitation–exploration weight, q0 = 0.5. The number of ants in each colony is n, the number of total operations is O, the number of forward ants is a_f = 0.8O, and the number of backward ants is a_b = 0.2O. The algorithm terminates when the total number of iterations reaches 1000. The ACO parameter values used in this study are those reported by Udomsakdigool and Kachitvichyanukul [9, 10]. In MFBAnt, three colonies using most work remaining, earliest start time, and earliest finish time as heuristic information are introduced [10]. In the construction step, the forward and backward ants construct the solution in the forward and the reverse order of the precedence constraints, respectively. For backward ants the schedule is reversed back to the normal time frame after it is solved. The details of the solution construction are as follows. For all colonies, at each step of solution construction, an ant k selects one operation from a set of candidate operations C^k, a subset of the allowable operations A^k that can be scheduled at that construction step, by applying a probability transition rule. The selection probability for each operation in the set C^k follows the two-step random proportional transition rule below. While building the solution, the selection of an operation by an ant is guided by the pheromone information τ and the heuristic information η. At the construction step of operation i in iteration t, the kth ant selects an operation by taking a random number q. If q ≤ q0, the operation is chosen according to (4.4); otherwise, an operation is selected according to (4.5):

j_i^k(t) = argmax_(o ∈ C_i^k) [τ_io^k(t)]^α [η_io^k]^β   if q ≤ q0,
j_i^k(t) = J                                              otherwise,   (4.4)

where j is the operation selected at the present step of operation i and J is an operation selected by the random proportional transition rule defined in (4.5):

p_io^k(t) = [τ_io^k(t)]^α [η_io^k]^β / Σ_(o ∈ C_i^k) [τ_io^k(t)]^α [η_io^k]^β   if o ∈ C_i^k,
p_io^k(t) = 0                                                                    otherwise,   (4.5)

where i is the operation at the current step, o is an operation in the candidate list, p_io is the probability of selecting operation o for the next step, τ_io is the pheromone trail between operations i and o, η_io is the heuristic information between operations i and o, and
Input: A problem instance of the JSP
/* Step 1: Initialization */
/* Set parameter values */
Set parameters (Nt = tmax, α = αc, β = βc, ρl = ρg = ρc, q0 = q0c, Rn = nmax, Nc = c, Nc1 = c1, Nc2 = c2, . . ., Ncc = cc, w = wc)
/* Initialize global pheromone value */
For each edge (i, j) do
    Set an initial global pheromone value τij,g(0) = τ0
End for
/* Initialize local pheromone value */
For Nc = 1 to c do
    For each edge (i, j) do
        Set an initial local pheromone value τij,l(0) = τ0
    End for
End for
/* Main loop */
/* Step 2: Solution construction */
Set best solution since start of algorithm, Ssg(t0) = φ
For t = 1 to tmax do
    For Nc = 1 to c do
        Set best solution of colony c since start of algorithm, Ssc(t0) = φ
        Set best solution of colony c in iteration, Sic(t0) = φ
        If forward ant then
            For k = 1 to af do
                /* Set all lists † */
                Set Unvisit list, Uk = all operations O
                Set Visit list, Vk = φ
                Set Allowable list, Ak = φ
                Set Candidate list, Ck = φ
            End for
            For k = 1 to af do
                /* Starting node */
                Place ant k on the starting node, store this information in Vk, and delete this operation from Uk
                /* Build the solution for the forward ant */
                The ant builds a tour step by step until Uk = φ by applying the following steps:
                    The ant draws a random number q = rand(0, 1)
                    Choose the next operation j from Ck according to equation (4.4) if q ≤ q0; otherwise select an operation according to equation (4.5)
                    Keep operation j in Vk and delete operation j from Uk
                Compute the makespan Cmax of the sequence Sk in Vk
            End for
            /* Step 3: Local improvement, optional */
            For ant k = 1 to af do
                Apply local improvement
                If an improved Cmax is found then update Cmax and Vk
            End for

Fig. 4.7 Procedure of MFBAnt. (a) Perform the same steps as the forward ants, but construct the solutions in the reverse order of the precedence constraints. (b) Perform the same steps as the forward ants
            For ant k = 1 to af do
                Select best solution of forward ants of iteration t, Sif(t)
            End for
            Update the Sif(t) of the forward ants
            Update the Ssf(t) of the forward ants
        End if
        /* Backward ant */
        If backward ant then
            For k = 1 to ab do
                /* Set all lists † */
            End for
            For k = 1 to ab do
                /* Starting node */
                /* Build the solution for each ant (a) */
                /* Step 3: Local improvement, optional (b) */
            End for
            For ant k = 1 to ab do
                Select best solution of backward ants of iteration t, Sib(t)
            End for
            Reverse Sib(t) back to the forward time frame
        End if
        Select Si(t) ← best(Sif(t), Sib(t))
        Update the Sic(t)
        Update the Ssc(t)
        /* Step 4: Update local pheromone matrix */
        For each edge (i, j) in Vk of Sic(t) do
            Update pheromone trails according to equation (4.8)
        End for
        /* Step 5: Restart process for each colony (local pheromone matrix), optional */
        If the restart criterion is reached
            Apply restart process
        End if
    End for
    Update the Ssg(t)
    /* Step 6: Update global pheromone matrix */
    For each edge (i, j) in Vk of Ssg(t) do
        Update pheromone trails according to equation (4.9)
    End for
    /* Step 7: Restart process for colonies (global pheromone matrix), optional */
    If the restart criterion is reached
        Apply restart process
    End if
End for
Output: Best solution

Fig. 4.7 (Continued)
Ci is the set of operations in the candidate list at the step of operation i. In addition, α and β are the parameters that determine the relative importance of the pheromone trail and the heuristic information (α > 0, and β > 0), q is the random number uniformly distributed in [0,1], and q0 ∈ (0,1) is the parameter that determines the relative importance between exploitation and exploration. The pheromone trail between
operations i and o is calculated as in (4.6):

τ_io^k(t) = (1 − w) τ_io,l^k(t) + w τ_io,g^k(t),   (4.6)
where w is the importance weight of the global pheromone trail, τ_io,l is the pheromone trail between operations i and o from the local pheromone matrix, and τ_io,g is the pheromone trail between operations i and o from the global pheromone matrix. To translate the dispatching rules (most work remaining, earliest start time, and earliest finish time) into heuristic information scores, every operation in the candidate list C is normalized. An example of translating most work remaining into heuristic information scores is shown in (4.7), where η_o is the heuristic information score of operation o and wr(o) is the work remaining time of the job to which operation o belongs:

η_o ← wr(o) / Σ_(o ∈ C) wr(o).   (4.7)
The selected operation is kept in the visit list V^k. The ant repeats the construction step until the set U^k, which contains the unscheduled operations, is empty. A sequence S^k of all the operations O in V^k (except for the source and the sink) represents a solution of the job-shop problem. The makespan of the solution can be calculated from the critical path C_p in S^k.
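The two-step selection, together with the combined pheromone (4.6) and the normalized heuristic (4.7), can be sketched as follows (illustrative Python; parameter defaults follow Section 4.3.1, and all names are hypothetical rather than the authors' own):

import random

def combined_tau(edge, tau_local, tau_global, w=0.7):
    # Equation (4.6): weighted mix of the local and the global pheromone trails.
    return (1.0 - w) * tau_local.get(edge, 0.001) + w * tau_global.get(edge, 0.001)

def heuristic_scores(candidates, work_remaining):
    # Equation (4.7): normalize most work remaining over the candidate list.
    total = sum(work_remaining[o] for o in candidates)
    return {o: work_remaining[o] / total for o in candidates}

def select_operation(i, candidates, tau_local, tau_global, eta,
                     alpha=1.0, beta=5.0, q0=0.5, w=0.7):
    # Score every candidate o with [tau_io]^alpha * [eta_o]^beta.
    scores = {o: combined_tau((i, o), tau_local, tau_global, w) ** alpha * eta[o] ** beta
              for o in candidates}
    if random.random() <= q0:
        # Exploitation, equation (4.4): take the best-scoring candidate.
        return max(scores, key=scores.get)
    # Exploration, equation (4.5): random proportional (roulette-wheel) choice.
    total = sum(scores.values())
    threshold, acc = random.uniform(0.0, total), 0.0
    for o, s in scores.items():
        acc += s
        if acc >= threshold:
            return o
    return o   # numerical safety fallback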
4.3.2 Local Improvement

Local improvement plays an important role in improving the solution, especially in metaheuristic methods. The local improvement procedure explores the best solution in a certain neighborhood of a given schedule and keeps it as the solution. In the ant algorithm, the local improvement is performed after the ants complete their solutions. The local improvement method used in this paper is adapted from Nowicki and Smutnicki [14], which defines a neighborhood solution by moving operations near the borders of the blocks on a single critical path, in the sequence of the processing order. In the first block the last two operations are swapped, and in the last block the first two operations are swapped. For the other blocks between the first and the last, both the first two and the last two operations are swapped, provided the block contains at least two operations. In Fig. 4.2 there is a single critical path C_p = (O1, O8, O5, O6) that decomposes into two blocks, B1 = (O1, O8, O5) and B2 = (O6). There is one neighborhood solution, which swaps operations O8 and O5. Sometimes, in performing the moves, it is possible to end up with a schedule that is not feasible. To cope with this problem, a rearrangement technique is introduced: the operation that violates the precedence constraint is moved after its predecessor. After the neighborhood search is performed, the sequence with the best makespan is kept as the solution. If the search does not improve the objective, the original solution is kept.
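The border-swap neighborhood can be sketched as follows (illustrative Python; feasibility repair and makespan evaluation are omitted, and duplicate moves from two-operation interior blocks are not filtered):

def neighborhood_moves(blocks):
    # blocks: the critical path decomposed into machine blocks, e.g.
    # [["O1", "O8", "O5"], ["O6"]] for the schedule of Fig. 4.2.
    moves = []
    for idx, block in enumerate(blocks):
        if len(block) < 2:
            continue                                  # nothing to swap in this block
        first, last = idx == 0, idx == len(blocks) - 1
        if not first:
            moves.append((block[0], block[1]))        # swap the first two operations
        if not last:
            moves.append((block[-2], block[-1]))      # swap the last two operations
    return moves

# For the example of Fig. 4.2 this yields the single move ('O8', 'O5').
print(neighborhood_moves([["O1", "O8", "O5"], ["O6"]]))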
4.3.3 Pheromone Updating

In MFBAnt, a two-step pheromone updating rule is performed: after each colony updates its local pheromone matrix, the global pheromone matrix is updated. In local updating, the solutions of all ants in the colony, both forward and backward ants, are compared, and the best one is kept as the best solution of the iteration. This solution is then compared with the best solution found since the start of the algorithm for that colony, and the better of the two is used to update the local pheromone matrix. For global updating, the best solutions found since the start of the algorithm by each colony are compared with the best solution found so far over all colonies, and the best solution among them is used to update the global pheromone matrix.
4.3.3.1 Local Pheromone Matrix Updating

In colony c, after all ants complete their solutions, the best solution found in colony c since the start of the algorithm, S_sc, is used to update its local pheromone matrix. The pheromone updating rule is defined in (4.8).
τ_ij,l(t + 1) = (1 − ρ_l) τ_ij,l(t) + ρ_l Δτ_ij,l(t),   (4.8)

where Δτ_ij,l(t) = 1 if (i, j) ∈ tour of S_sc, and Δτ_ij,l(t) = 0 otherwise.
In equation (4.8), ρl ∈ [0, 1) is the pheromone evaporating parameter for the local pheromone matrix. The minimum pheromone value is set to 0.001. When applying the pheromone updating, the pheromone value that is less than this number is set back to 0.001.
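A compact sketch of this rule is given below (illustrative Python; the same routine serves the global update (4.9) when the edges of S_sg and the rate ρ_g are passed instead):

def update_pheromone(pheromone, best_tour_edges, rho=0.1, floor=0.001):
    # pheromone: dict mapping edge (i, j) -> trail value.
    for edge, value in pheromone.items():
        delta = 1.0 if edge in best_tour_edges else 0.0
        new_value = (1.0 - rho) * value + rho * delta        # equation (4.8)
        pheromone[edge] = max(new_value, floor)              # clamp at the 0.001 minimum
    return pheromone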
4.3.3.2 Global Pheromone Matrix Updating The information exchange among colonies is done after all colonies finish their solutions. The best solution since the start of all colonies, Ssg , is used to update the global pheromone matrix following the formula given in (4.9).
τ_ij,g(t + 1) = (1 − ρ_g) τ_ij,g(t) + ρ_g Δτ_ij,g(t),   (4.9)

where Δτ_ij,g(t) = 1 if (i, j) ∈ tour of S_sg, and Δτ_ij,g(t) = 0 otherwise.
In equation (4.9), ρ_g ∈ [0, 1) is the pheromone evaporation parameter for the global pheromone matrix updating. The minimum pheromone value is set to 0.001. When applying the pheromone updating, any pheromone value that is less than this number is set back to 0.001.
Table 4.2 Results of MFBAnt on benchmark problems

                                            Proposed algorithm               Udomsakdigool and Kachitvichyanukul [10]
Problem instance   Size (n × m)   Optimal   Best     %D     Time (s)         Best    %D     Time (s)
FT06               6 × 6          55        55       0.00   34               55      0.00   21
FT10               10 × 10        930       930*     0.00   162              944     1.51   70
FT20               20 × 5         1165      1165*    0.00   235              1178    1.12   82
LA01               10 × 5         666       666      0.00   36               666     0.00   13
LA02               10 × 5         655       655*     0.00   36               658     0.46   14
LA03               10 × 5         597       597*     0.00   37               603     1.01   13
LA04               10 × 5         590       590      0.00   38               590     0.00   12
LA05               10 × 5         593       593      0.00   36               593     0.00   14
LA06               15 × 5         926       926      0.00   101              926     0.00   35
LA07               15 × 5         890       890      0.00   102              890     0.00   36
LA08               15 × 5         863       863      0.00   101              863     0.00   36
LA09               15 × 5         951       951      0.00   103              951     0.00   35
LA10               15 × 5         958       958      0.00   102              958     0.00   37
LA11               20 × 5         1222      1222     0.00   236              1222    0.00   80
LA12               20 × 5         1039      1039     0.00   235              1039    0.00   82
LA13               20 × 5         1150      1150     0.00   235              1150    0.00   82
LA14               20 × 5         1292      1292     0.00   238              1292    0.00   88
LA15               20 × 5         1207      1207*    0.00   236              1240    2.73   85
LA16               10 × 10        945       947*     0.21   161              977     3.39   52
LA17               10 × 10        784       784*     0.00   160              793     1.15   54
LA18               10 × 10        848       848      0.00   162              848     0.00   54
LA19               10 × 10        842       848*     0.71   162              860     2.14   56
LA20               10 × 10        902       907*     0.55   161              925     2.55   55
LA21               15 × 10        1046      1063     1.63   902              1063    1.63   390
LA22               15 × 10        927       944*     1.83   900              954     2.91   409
LA23               15 × 10        1032      1032*    0.00   898              1055    2.23   384
LA24               15 × 10        935       940*     0.53   905              954     2.03   385
LA25               15 × 10        977       989*     1.23   904              1003    2.66   382
LA26               20 × 10        1218      1220*    0.16   8563             1308    7.39   3155
LA27               20 × 10        1235      1240*    0.40   8558             1269    2.75   3125
LA28               20 × 10        1216      1247*    2.55   8559             1328    9.21   3150
LA29               20 × 10        1152      1162     0.87   8560             1162    0.87   3200
LA30               20 × 10        1355      1365*    0.74   8560             1411    4.13   3204
LA36               15 × 15        1268      1300*    2.52   12618            1334    5.21   3096
LA37               15 × 15        1397      1439*    3.01   12614            1457    4.29   2928
LA38               15 × 15        1196      1224     2.34   12620            1224    2.34   3062
LA39               15 × 15        1233      1262*    2.35   12619            1298    5.27   3048
LA40               15 × 15        1222      1250*    2.29   12624            1269    3.85   2998
ABZ5               10 × 10        1234      –        –      –                1239    0.41   72
ABZ6               10 × 10        943       –        –      –                948     0.53   70
ORB1               10 × 10        1059      –        –      –                1070    1.04   74
ORB2               10 × 10        888       –        –      –                893     0.56   70
ORB5               10 × 10        887       –        –      –                897     1.13   72
ORB6               10 × 10        1010      –        –      –                1022    1.19   70
Average % deviation^a                                0.63                            1.92

^a Average % deviation is calculated from the same 38 problems, FT06 to LA40.
4.3.4 Restart Process

When the search is trapped in a region of the search space, a restart process is performed. There are two types of restart process: the local restart process and the global restart process. Each colony performs the local restart process if its ants are trapped in a local optimum: all pheromone values on every path in the local pheromone matrix are reinitialized, and the algorithm is started again. The global restart process is performed if the best solution of all colonies since the start of the algorithm is not improved: the pheromone values on every path in the global pheromone matrix are reinitialized, and the algorithm is started again. The restart processes for the global pheromone matrix and the local pheromone matrices are performed separately.
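In implementation terms, a restart simply re-seeds the affected pheromone matrix with random values in (0.1, 0.5), as in the initialization step (illustrative Python sketch):

import random

def restart(pheromone, low=0.1, high=0.5):
    for edge in pheromone:
        pheromone[edge] = random.uniform(low, high)
    return pheromone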
4.4 Experimental Results

MFBAnt is tested on benchmark problems available from the OR-Library. The algorithm was coded in C and run on a Pentium 4, 2.4-GHz PC with 1 GB of RAM running on a Windows platform. To evaluate the algorithm, each problem instance was repeated for 10 trials, and the best solution over the 10 trials is reported. The percent deviation from the optimal solution is calculated as [(best solution − optimal solution)/optimal solution] × 100. The CPU time is in seconds. The final results are listed in Table 4.2. The results in Table 4.2 show that there are 21 instances, FT06 to FT20, LA01 to LA15, LA17, LA18, and LA23, for which the algorithm yielded the optimal solution without using the local improvement and restart processes. The deviation is less than 1% for the small-size problems and less than 4% for the large-size problems. The average percent deviation is 0.63%. Compared with the solutions of the 24 problems obtained by Udomsakdigool and Kachitvichyanukul [10], the proposed algorithm yields better solutions for 21 problems, and for 3 problems the solutions are the same. This comparison excludes the 14 problems for which both algorithms yield optimal solutions.
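As a worked illustration of this measure, the LA16 entry of Table 4.2 follows directly from the reported values:

%D = (947 − 945) / 945 × 100 ≈ 0.21%.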
4.5 Conclusion and Recommendation

4.5.1 Conclusion

This paper presents a new ant algorithm for solving the JSP. The proposed algorithm combines forward–backward scheduling and a
multiple-colony approach. The performance of the algorithm is tested on benchmark problems. The experimental results indicate that the method performs very well: the average percent deviation is 0.63%, compared with 1.92% for [10]. It can be concluded that the performance of MFBAnt is achieved by allowing the ants to diversify the search by constructing solutions in both the forward and the backward direction and to exploit different regions via different heuristic information. In addition, the information exchange among colonies allows each colony to access the lessons learned by the other colonies.
4.5.2 Recommendation

There are many possibilities for further improving the algorithm. First, to achieve an efficient multiple-colony ant algorithm, the strategy for information exchange should be examined: Which kind of information should be exchanged? How frequently should exchanges take place among the colonies? In addition, the execution time may be reduced by a parallel implementation. Second, to improve the efficiency of the proposed algorithm, different local improvement techniques, such as left-shift local search and e-shift, may be combined with the ant algorithm, and their performance should be investigated. Finally, to increase the effectiveness and efficiency of the exploration of the search space, strategic use of randomness during an ant's solution construction may be considered. However, the appropriate balance between diversification and intensification remains a topic for further investigation.
References

1. C. Gagné, W.L. Price, and M. Gravel (2002) Comparing an ACO algorithm with other heuristics for the single machine scheduling problem with sequence-dependent setup times. Journal of the Operational Research Society, 53: 895–906.
2. M. Gravel, W.L. Price, and C. Gagné (2002) Scheduling continuous casting of aluminium using a multiple objective ant colony optimization metaheuristic. European Journal of Operational Research, 143: 218–229.
3. S.J. Shyu, B.M.T. Lin, and P.Y. Yin (2004) Application of ant colony optimization for no-wait flow shop scheduling problem to minimize the total completion time. Computers and Industrial Engineering, 47: 181–193.
4. K.C. Ying and C.J. Liao (2004) An ant colony system for permutation flow-shop sequencing. Computers and Operations Research, 31(5): 791–801.
5. C. Blum (2004) Beam-ACO hybridizing ant colony optimization with beam search: An application to open shop scheduling. Computers and Operations Research, 32(6): 1565–1591.
6. D. Merkle, M. Middendorf, and H. Schmeck (2002) Ant colony optimization for resource-constrained project scheduling. IEEE Transactions on Evolutionary Computation, 6(4): 53–66.
7. A. Colorni, M. Dorigo, V. Maniezzo, and M. Trubian (1996) Ant system for job-shop scheduling. Belgian Journal of Operations Research, Statistics, and Computer Science, 34(1): 39–54.
8. C. Blum and M. Sampels (2004) An ant colony optimization algorithm for shop scheduling problems. Journal of Mathematical Modelling and Algorithms, 3: 285–308.
9. A. Udomsakdigool and V. Kachitvichyanukul (2006) Two-way scheduling approach in ant algorithm for solving job shop problems. Industrial Engineering and Management Systems, 5(2): 68–75.
10. A. Udomsakdigool and V. Kachitvichyanukul (2007) Multiple colony ant algorithm for job-shop scheduling problem. International Journal of Production Research (online), 1–21. http://dx.doi.org/10.1080/00207540600990432
11. J.E. Beasley (1996) Obtaining test problems via Internet. Journal of Global Optimization, 8(4): 429–433.
12. M. Dorigo and T. Stützle (2004) Ant colony optimization. The MIT Press, Cambridge, MA.
13. M. Dorigo and L.M. Gambardella (1997) Ant colonies for the traveling salesman problem. Biosystems, 43(2): 73–81.
14. E. Nowicki and C. Smutnicki (1996) A fast taboo search algorithm for the job-shop problem. Management Science, 42(6): 797–813.
Chapter 5
Proposal of New Paradigm for Hand and Foot Controls in the Context of Spatial Compatibility Effect

Alan H.S. Chan and Ken W.L. Chan

Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, Kowloon Tong, Kowloon, Hong Kong
Abstract Many workstations make heavy use of the hands for primary control of a process. If some control tasks can be assigned to the feet, there would be an obvious benefit in having the hands free for other tasks that require a higher level of precision and dexterity. Spatial compatibility between displays and controls is an important determinant of performance. This paper proposes a research framework that aims to
1. Design and conduct a series of spatial compatibility experiments for measuring human subjects' response preferences and choice reaction times at different configurations of displays and hand and foot controls.
2. Investigate the effect of interaction between hand and foot controls in such configurations.
3. Determine the optimum positional mappings of hand and foot controls with visual signals presented at different planes of displays.
The objective is progress toward an optimal human–machine interface design for improving overall system performance.

Keywords: Ergonomic design · human–machine interface · spatial compatibility
5.1 Introduction

Generally, foot controls are not employed as widely as hand controls in industrial applications. For applications equipped with foot controls, such as automobiles and airplanes, almost all the primary controls are given over entirely to the hands. If some of the controls can be assigned to the feet, there would be an obvious advantage in leaving the hands free for other tasks that demand a higher precision and
dexterity level. Computer workstations require the use of hands for both text input and cursor positioning. The design and physical arrangement of the keyboard and mouse do not allow these two tasks to be processed simultaneously. However, the dilemma can be solved by employing a foot-operated input device, which not only saves time on repetitive keyboard-to-mouse hand movements but also eliminates stress at the wrists, elbows, and shoulders [1]. Although some industrial designers suggest that the feet are slower and less accurate than the hands, this belief is not supported by experimental results [2, 3]. Foot controls are usually reserved for large-force inputs and gross-control movements [4]. However, a common counterexample can be found in the pedal arrangements of automobiles. Critical and finely controlled operations, requiring fairly little force, are performed with the feet on the accelerator and brake pedals. It has also been shown that the small forces required for the operation of switches can be generated in nearly all directions with the feet [4]. Despite the disadvantages of restricted body posture, foot controls will become more important in the control tasks of seated operators, relieving the workload of the hands and allowing simultaneous functional operations [5]. In the form of a footmouse, automobile brake, or accelerator pedal, foot control mechanisms are usually used to control one or two functions. Kroemer [2] has measured the speed and accuracy of discrete foot motions for 12 different foot positions and found that subjects could perform these tasks with considerable accuracy after a short learning period. This result suggests that the feet can be used for more varied control functions. Springer and Siebes [1] tested the practical suitability of the footmouse for both handicapped and able-bodied subjects. Their results revealed that handicapped subjects using the footmouse produced the same performance accuracy as able-bodied subjects using a hand-operated mouse, suggesting that a foot-operated input device is an effective positional control device. Past research on foot controls focused mainly on the evaluation of alternative design criteria of travel time, speed of operation, precision, force produced, and subjective preference [2, 6, 7]. In human–machine interface design, the question of how a display can be associated with hand controls has been a topic in human factors studies since effective human–machine interfaces will obviously be advantageous in improving human performance. However, it is rather surprising that the use of foot controls in the context of control–display compatibility has rarely been investigated.
5.1.1 Spatial Stimulus–Response (SR) Compatibility

Spatial SR compatibility is an important type of control–display compatibility in which the selection of a response is directly related to the position of the stimulus [3]. Spatial compatibility between stimulus and response is an important determinant of performance. It is common to find that compatible pairings lead to faster reaction
times (RTs) and lower error rates than do incompatible pairings. At present, there are no specific research studies examining the spatial compatibility relationship between the display and foot controls. Furthermore, there are no useful ergonomic design guidelines for simultaneous manipulation of hand and foot controls when responding to signals from displays positioned at different planes relative to the operators. Given the potential importance and usefulness of employing foot controls in industrial applications and the significance of spatial compatibility relationships in control tasks, various control–display configurations involving the installation of hand and foot controls were carefully designed for the detailed investigation of the objective performance measures of accuracy and speed. The results of this study will provide useful ergonomics guidelines for using foot controls and will be beneficial to today’s high-growth industrial environments.
5.2 Research Plan and Methodology

Forty right-handed subjects will participate voluntarily in each of the following four experiments. All will have normal or corrected-to-normal vision, screened with an orthorator. The experiments will be carried out on a personal computer, using programs written in Visual Basic 6 for stimulus preparation and display as well as data collection. Custom-made control boxes and foot pedals will be fabricated and interfaced with the computer software and hardware.
5.3 Experiment 1: Spatial SR Compatibility Effect of Foot Controls This experiment aims to investigate the spatial SR compatibility effect of foot controls responding to stimuli presented in transverse and longitudinal orientations.
5.3.1 Design One of the four 15-mm-diameter red circles at the front-left, front-right, rear-left, and rear-right positions of a horizontal screen will be presented to subjects. They will be positioned at the four corners of an imaginary 100-mm-per-side square. Two foot pedals with two keys each will be used for inputting the subjects’ responses. They are placed on the floor and adjusted freely to fit a subject’s normal foot posture. Four spatial SR mapping conditions, viz., both transverse and longitudinal compatible (BC), transverse compatible and longitudinal incompatible (TC), longitudinal
compatible and transverse incompatible (LC), and both transverse and longitudinal incompatible (BI), will be tested on all subjects. In the BC mapping condition, the visual stimuli and the corresponding response keys are arranged congruously in both the longitudinal and the transverse orientation. Subjects will respond by pressing the front-right key (FR) for the front-right visual signal (fr), the front-left key (FL) for the front-left signal (fl), the rear-right key (RR) for the rear-right signal (rr), and the rear-left key (RL) for the rear-left signal (rl). In the TC mapping condition, congruous SR mapping occurs only in the transverse orientation. Subjects will respond by pressing FR for rr, FL for rl, RR for fr, and RL for fl. In the LC mapping condition, congruous SR mapping occurs only in the longitudinal orientation. Subjects will respond by pressing FR for fl, FL for fr, RR for rl, and RL for rr. In the BI mapping condition, the SR mappings are opposite to those in the BC condition: subjects will respond by pressing FR for rl, FL for rr, RR for fl, and RL for fr (the four assignments are summarized in the sketch at the end of this section). The subjects will be tested with the four blocks of SR mapping conditions in a counterbalanced order. Each block contains 8 practice trials and 20 testing trials. During the testing, subjects will sit at a distance of 500 mm directly in front of a 17-in LCD monitor. Each trial starts with the display of a green 10-mm-diameter circle at the center of the screen serving as a warning signal and fixation point. After 1 to 4 seconds, one of the four visual circles will randomly light up. Subjects will then tread on the appropriate key upon detecting the signal according to the compatibility condition being tested. In all trials, subjects will be asked to react as fast and as accurately as they can. No feedback on accuracy will be given. Subjects' response times and errors will be recorded for analysis by the application program. This experiment will test the following major hypotheses:
(a) Similar to the results obtained with hand controls, there will be significant spatial compatibility effects, suggesting that human performance will be better in the compatible mapping condition/orientation than in the incompatible one.
(b) The right/left compatibility effect will be stronger than the front/rear one.
(c) The responses made with the right foot will be faster and more accurate than those made with the left foot.
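For reference, the four assignments listed above can be summarized as signal-to-key mappings (an editorial sketch in Python; the labels follow the text, and the dictionaries are not part of the experimental materials):

SR_MAPPINGS = {
    "BC": {"fr": "FR", "fl": "FL", "rr": "RR", "rl": "RL"},   # both orientations compatible
    "TC": {"rr": "FR", "rl": "FL", "fr": "RR", "fl": "RL"},   # transverse compatible only
    "LC": {"fl": "FR", "fr": "FL", "rl": "RR", "rr": "RL"},   # longitudinal compatible only
    "BI": {"rl": "FR", "rr": "FL", "fl": "RR", "fr": "RL"},   # both orientations incompatible
}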
5.4 Experiment 2: Spatial SR Compatibility Effect of Hand and Foot Controls In order that the feet be responsible for manipulating controls, it is important to know how well they interact and cooperate with the hands in response execution. This experiment aims to investigate the spatial SR compatibility effect of the
combined use of both hand and foot controls in responding to the stimulus in horizontal and vertical mapping conditions.
5.4.1 Design

The stimulus display and the spatial SR mapping conditions will be similar to those of Experiment 1. However, with the combined use of hand and foot controls, the stimulus circles corresponding to the spatial positions of the hands and feet will be presented on a vertical display during testing. In a trial, one of four 15-mm-diameter red circles at the top-left, top-right, bottom-left, and bottom-right positions will be presented. Two push buttons and two foot pedals directly underneath will be used for inputting subject responses. Four spatial SR mapping conditions, viz., both horizontal and vertical compatible (AC), horizontal compatible and vertical incompatible (HC), vertical compatible and horizontal incompatible (VC), and both horizontal and vertical incompatible (AI), will be tested for all subjects in a counterbalanced order. Each block contains 8 practice trials and 20 testing trials. The stimulus presentation, testing procedure, and instructions will be similar to those of Experiment 1. Subjects will press or tread on the appropriate device after signal detection according to the compatibility condition (AC, HC, VC, or AI) being tested. This experiment will test the following major hypotheses:
(a) There will be significant spatial compatibility effects found with the combined use of hand and foot controls.
(b) The right/left compatibility effect will be stronger than the top/bottom one.
(c) Responses made with the feet will be as accurate as those made with the hands in the context of spatial compatibility.
(d) Responses made with the hands will be faster than those made with the feet.
5.5 Experiment 3: Spatial SR Compatibility Effect of Hand and Foot Controls for Stimulus and Response Arrays on Orthogonal Planes

Spatial compatibility effects of hand controls exist in situations where the stimulus and response arrays are located orthogonally to each other [8, 9]. When a vertical visual stimulus set corresponded with a horizontal response set, subjects performed better with the up-right/down-left SR mapping than with the up-left/down-right mapping. The study of orthogonal SR compatibility effects is of considerable importance because there are no clear intrinsic and explicit cues relating the spatial positions of the stimulus and the response.
There are many situations involving industrial equipment and control cockpits where the positions of displays and controls are orthogonally oriented. Due to spatial limitations and other engineering constraints, a perfect match between the display and control arrays in a control console may not always be possible. Furthermore, there are cases in which the planes containing the signal and response arrays may also be orthogonal to each other [3]. However, no compatibility study on the mapping of hand and foot controls lying in the vertical plane with an orthogonal display lying horizontally has been conducted. With such a configuration, confusion in the associations of top/bottom control arrays and front/rear display arrays would arise, and the formation of compatible SR mappings is not direct or obvious. In Experiment 2, the control and display planes are parallel to each other. This experiment, however, aims to investigate the spatial SR compatibility effect of displays and hand and foot controls lying in orthogonal planes. Other than the use of a horizontally lying monitor, the stimuli and controls used here will be similar to those in Experiment 2.
5.5.1 Design For this orthogonal setting, four spatial SR mapping conditions, viz., BC, TC, LC, and BI will be tested for all subjects in a counterbalanced order. Each block contains 8 practice trials and 20 testing trials. The procedure for testing and responding is similar to that of Experiments 1 and 2. This experiment will test the following major hypotheses: (a) There will be significant spatial compatibility effects found with the orthogonal SR planes. (b) Hand and foot codes will be mapped for front and rear stimulus, respectively, in compatible mapping conditions. (c) The right/left compatibility effect will be stronger than the front/rear one.
5.6 Experiment 4: Spatial SR Compatibility Effect of Hand and Foot Controls for Stimulus and Response Arrays on Parallel and Orthogonal Planes It was hypothesized in Experiment 3 that spatial SR compatibility effect will occur for orthogonal SR planes with combined use of hand and foot controls, and there will be a mapping advantage for hand-front/foot-rear arrays. In a complex or realistic control console, it may not be possible to position foot controls directly underneath the hand controls; consequently, the spatial cue of control devices may be interpreted in more than one dimension. The understanding and interpretation of spatial cues associating displays and control devices for such a condition is a major issue to be resolved. It is also important from a practical as well as theoretical
Fig. 5.1 Experimental setup for testing of hand and foot controls for parallel and orthogonal SR planes
perspective because a slow or incorrect decision under such an arrangement can have very serious consequences. In this experiment, the foot pedals will be placed directly underneath the front row of signals (Fig. 5.1). This results in a front-foot-pedal and rear-hand-control arrangement in the horizontal plane, as well as the original top-hand-control and bottom-foot-pedal arrangement in the vertical plane; i.e., an additional pair of parallel SR planes is formed while the original orthogonal SR planes still exist. Hence, this experiment aims to investigate the subjects' performance with the combined use of hand and foot controls under the influence of redundant spatial cues derived from the parallel and orthogonal SR planes. This will help determine the relative strength of the visual cues on these two pairs of planes. It will also provide practical results for formulating useful ergonomics recommendations in the design of realistic control consoles.
5.6.1 Design

The design of this experiment is similar to that of Experiment 3. The four spatial mapping conditions are described according to the spatial compatibility of displays and controls projected from the top view (Fig. 5.2). To avoid an ambiguous and
Fig. 5.2 The four spatial SR mapping conditions (Conditions 1–4)
complicated description of the spatial SR relations, the four spatial SR mapping conditions to be tested will simply be labeled 1 to 4. This experiment will test the major hypothesis that the orthogonal SR arrays prevail over the parallel SR arrays.
5.7 Analysis

The analysis of results will focus on performance and response preference in terms of RTs and response errors. Reaction times beyond the ±3 standard deviation limits will be discarded from further analysis. For each of the four experiments, mean RTs and errors will be computed for the different experimental conditions corresponding to stimulus positions, response types and positions, warning period, and compatibility conditions. Further analysis of mean RTs will be performed using analysis of variance (ANOVA). The interaction effects of stimulus position and response position will be examined for possible spatial SR compatibility effects. Similar analyses will also be performed for the error percentages. Specific analyses will be made to answer the following questions: Does the spatial SR compatibility effect of using foot controls exist to the same degree of magnitude as that of using hand controls? What are the response speed and accuracy differences between using hand and foot controls? Will spatial SR compatibility effects still be found in the presence of both hand and foot controls? If so, what sort of interactions occur between displays and the different control types? How well do the feet interact and cooperate with the hands in response execution? What are the positional mapping preferences of hand and foot controls to different planes of displays? The above items are examples of some of the analyses possible. After collecting the data, a more in-depth and detailed analysis will be conducted with the objective of providing a significant set of recommendations and guidelines for ergonomic interface design that can improve the competitiveness of machinery and equipment.

Acknowledgment The work described in this paper was fully supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CityU 110306).
Acknowledgment The work described in this paper was fully supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CityU 110306).

References
1. J. Springer and C. Siebes (1996) Position controlled input device for handicapped: Experimental studies with a footmouse. International Journal of Industrial Ergonomics, 17: 135–152.
2. K.H.E. Kroemer (1971) Foot operation of controls. Ergonomics, 14(3): 333–361.
3. M.S. Sanders and E.J. McCormick (eds.) (1993) Human Factors in Engineering and Design (7th ed.). McGraw-Hill, Singapore.
4. K.H. Kroemer, H.B. Kroemer, and K.E. Kroemer-Elbert (2001) Ergonomics: How to design for ease and efficiency. Prentice Hall, NJ.
5. W. Woodson, B. Tillman, and P. Tillman (1992) Human factors design handbook. McGraw-Hill, New York.
6. H.J. Bullinger, J.E. Bandera, and W.F. Muntzinger (1991) Design, selection and location of foot controls. International Journal of Industrial Ergonomics, 8(4): 303–311.
7. M.A. Van Veelen, C.J. Snijders, E. Van Leeuwen, R.H. Goossens, and G. Kazemier (2003) Improvement of foot pedals used during surgery based on new ergonomic guidelines. Surgical Endoscopy, 17(7): 1086–1091.
8. Y.S. Cho and R.W. Proctor (2003) Stimulus and response representations underlying orthogonal stimulus-response compatibility effects. Psychonomic Bulletin & Review, 10(1): 45–73.
9. A.H.S. Chan and K.W.L. Chan (2004) Design implications from spatial compatibility on parallel and orthogonal stimulus-response arrays. Asian Journal of Ergonomics, 5(2): 111–129.
Chapter 6
Development of a Mathematical Model for Process with S-Type Quality Characteristics to a Quality Selection Problem
K. Tahera, R.N. Ibrahim, and P.B. Lochert
Abstract A quality selection problem generally identifies the process parameters, such as the initial process means of a production process. The joint determination of the initial process means and the production run has been reported in the literature. Most of these studies considered a process with multiple independent nominal-the-better types of quality characteristics. In a real industrial situation, the quality characteristics may depend on each other. In addition, the process could have other types of quality characteristics. In this chapter, a mathematical model is developed that jointly determines the optimum initial process means and production run of a process with multiple dependent smaller-the-better types of quality characteristics. A genetic algorithm is used as the optimization algorithm in this study, whereas local optimization algorithms have commonly been used in preceding studies. Keywords: Genetic algorithm · multiple dependent S-type quality characteristics · process mean · production run
6.1 Introduction Quality selection is a classical problem that deals with selecting the setting of the optimal initial process means for a given product specification. The improper selection of this setting affects the expected total profit (or cost), the number of defective items, the inspection cost, and the reprocessing cost. Generally, a loss function is used to quantify the quality loss when the quality characteristics deviate from the target values. Thus, it assists managers in evaluating unobservable costs such as warranty cost, loss of market share, and sales returns resulting from customer dissatisfaction.
K. Tahera, R.N. Ibrahim, and P.B. Lochert Mechanical Engineering, Monash University, Wellington Road, Clayton 3800, Australia
The traditional step loss function does not incorporate these unobservable costs into the quality loss computation. This approach assumes that the quality cost does not depend on the actual value of the quality characteristic as long as it is within the specification limits. In other words, the step loss function treats an item that falls exactly on the target value the same as an item that falls just inside the specification limit. Therefore, this approach favors manufacturers by allowing them to produce items with a large variance. Taguchi [1] includes society's loss, in terms of customer satisfaction and environmental cost, in the quality evaluation. He defines quality loss as “what the product costs society from the time the product is released from shipment.” Thereafter, the Taguchi loss function came to be treated as a standard quality measurement system. Other quality loss functions [2] have been proposed that have an objective similar to that of the Taguchi loss function.

In reality, the process mean setting deviates from its initial value as a result of the presence of assignable causes, which eventually moves the process to an out-of-control state. In this state, the increase in the number of defective items leads to an increase in quality loss. This situation can be tolerated for a certain time, beyond which it is necessary to take a restoration action. This action can include readjusting the process parameters to their optimum initial values or upgrading the tool. Such action is generally expensive and should be undertaken with economic feasibility in mind. Therefore, determining the production run is another critical factor in producing quality items. A joint determination of the optimum process mean and production run helps the manufacturer to produce items at a minimum production cost.

A few researchers have investigated the joint effect of process mean and production run under different assumptions (for a review, see Tahera et al. [3]). These models can be summarized as follows. First, most models worked with nominal-the-better types of quality characteristics. Second, only a single quality characteristic was assumed in most of the studies, and they used a step loss function or a Taguchi loss function as the quality measurement system. Third, a few researchers [4, 5] considered multiple quality characteristics, but they assumed that the quality characteristics were not dependent on each other. Fourth, most studies in the literature used traditional direct and gradient-based optimization techniques. Some of the popular algorithms used in the literature were direct search methods, the bisection method [6], Hooke and Jeeves' algorithm [7, 8], the Fibonacci search [9], golden section methods [9], and generalized reduced gradient techniques [5, 10, 11].

However, the determination of the initial process mean settings and production run of a process with multiple smaller-the-better and larger-the-better types of quality characteristics has not been investigated in previous research. There are many situations where quality characteristics may depend on each other, and these dependencies have not been explored in developing process target models. Besides, previous researchers used traditional optimization algorithms extensively. However, the limitations of most of these algorithms are as follows:
1. They tend to get trapped in a local minimum rather than finding the global minimum.
2. They are not efficient in handling problems having discrete variables.
3. Some of these algorithms cannot be applied to problems where the mathematical derivatives are complex or unknown.
This chapter addresses these issues and proposes a mathematical model to determine the optimum initial setting of the process means and the optimum production run of a process with multiple smaller-the-better types of quality characteristics, so as to incur a minimum total production cost. The dependency of the quality characteristics is considered in this model. The parameters of the proposed model are solved by using a genetic algorithm, which does not suffer from the limitations of the traditional solution algorithms. A numerical example is provided to illustrate the application of the proposed model.
6.2 Model Development In this section, a mathematical model for determining the optimal process means and the optimal production run is developed for a deteriorating process. The expected total production cost in this study consists of (a) the off-target cost, which is modeled by the multivariate quality loss function, (b) the process adjustment cost, and (c) the maintenance cost during the production run. The following assumptions and nomenclature are used in the model:

Assumptions:
1. The production process has a known and constant variance for each of the dependent quality characteristics.
2. The quality characteristics are exponentially distributed with a mean of 1/λ.
3. At a random point in time, τ, the process changes from an in-control state to an out-of-control state as a result of an assignable cause.
4. The maintenance action could be carried out without disrupting the production.

Nomenclature:
yi(t, τ)   the random variable denoting the quality measurement for the ith quality characteristic at time t
kii        loss coefficient associated with quality characteristic i
kij        loss coefficient associated with quality characteristics i and j
µi         initial mean for the ith quality characteristic
µi*        optimal initial mean for the ith quality characteristic
σi²        process variance for the ith quality characteristic
σij        process covariance between the ith and jth quality characteristics
ξi         target value for the ith quality characteristic
T          production run
T*         optimal production run
µi(t, τ)   the process mean for the ith quality characteristic at time t
µi(t, τ) = µi for τ ≥ t;   µi(t, τ) = µi + Wi(t, τ) for τ < t
Wi(t, τ)   deteriorating function, with Wi(t, τ) = 0 for τ ≥ t and Wi(t, τ) ≠ 0 for τ < t
CA         adjustment cost
ρ          production rate per unit time
τ          the elapsed time until the occurrence of the assignable cause, a random variable that is assumed to be exponentially distributed with a mean of 1/λ
f(τ)       the density function of the occurrence time of the assignable cause (= λe^(−λτ))
M(t)       maintenance cost at time t
We consider a process with multiple smaller-the-better types of quality characteristics. For such a process the aim is to keep the quality characteristics (for instance, the surface roughness, the deviation from a design value, or the emission of carbon monoxide gas) as small as possible. A traditional step loss function or a simple Taguchi loss function cannot capture the quality loss in the case of multiple quality characteristics. Therefore, a multivariate quality loss function should be applied to quantify the quality loss. For a process where the quality characteristics are statistically independent, the multivariate quality loss function [12] can be used:

L(y) = k11·y1² + k22·y2² + k12·y1·y2    (6.1)
Here k11, k22, and k12 are the quality loss coefficients, which can be determined by using the regression method or by solving a system of simultaneous linear equations. While the univariate quality loss function only judges the individual quality characteristics, the multivariate approach considers the customer’s perception of the quality characteristics. In this case, the quality characteristics are interdependent, leading to a cost term (k12) which accounts for the customer’s reactions. The quality loss at time t can be expressed as

L(t, τ) = k11[y1(t, τ)]² + k22[y2(t, τ)]² + k12[y1(t, τ)]·[y2(t, τ)]    (6.2)
When the quality characteristics depend on each other, a covariance term is included in the multivariate quality loss function. Since E[X²] = V[X] + {E[X]}², where V[·] denotes variance, the expected quality loss can be given by

E[L(t)] = k11[σ1²(t) + (µ1(t))²] + k22[σ2²(t) + (µ2(t))²] + k12[σ12(t) + µ1(t)·µ2(t)]    (6.3)
Consider that at a random time τ, the occurrence of an assignable cause moves the process to deteriorating states. To represent this nature of the process mathematically, we assume:

At t < τ:  σ1²(t) = σ1²,  σ2²(t) = σ2²,  σ12(t) = σ12,  µ1(t) = µ1,  µ2(t) = µ2

At t > τ:  σ1²(t) = σ1²,  σ2²(t) = σ2²,  σ12(t) = σ12,  µ1(t) = µ1 + W1(t, τ),  µ2(t) = µ2 + W2(t, τ)
Thus, the expected loss at time t is

E[L(t)] = ∫_t^∞ {k11[σ1² + µ1²] + k22[σ2² + µ2²] + k12[σ12 + µ1µ2]} f(τ) dτ
        + ∫_0^t {k11[σ1² + (µ1 + W1)²] + k22[σ2² + (µ2 + W2)²] + k12[σ12 + (µ1 + W1)(µ2 + W2)]} f(τ) dτ    (6.4)
We assume different deteriorating process states, as shown in Fig. 6.1. The following deterioration functions W(t, τ) are assumed for each of the process states.

Fig. 6.1 Different process states: (a) stable state (process mean µ), (b) shift state (process mean µ + δ), (c) drift state (process mean µ + θ(t − τ)), (d) shift and drift state (process mean µ + δ + θ(t − τ))

(a) Stable state: A process in a stable state is an ideal process that does not face any deterioration throughout the production period. The initial process mean setting
does not change during the period, and therefore this process remains stable for the entire production time. Thus the deteriorating functions are W1(t, τ) = 0 and W2(t, τ) = 0.
The expected loss at time t is

E[L(t)] = ∫_t^∞ {k11[σ1² + µ1²] + k22[σ2² + µ2²] + k12[σ12 + µ1µ2]} f(τ) dτ
        + ∫_0^t {k11[σ1² + µ1²] + k22[σ2² + µ2²] + k12[σ12 + µ1µ2]} f(τ) dτ    (6.5)
(b) Shift state: A process can experience a shift as a result of assignable causes. For example, a negative shift occurs when the voltage suddenly drops due to a power failure, while a constant shift occurs when phenomena such as leakage, chipping, or a malfunctioning mounting are observed. In this state, we assume W1(t, τ) = δ1 and W2(t, τ) = δ2.
Thus, the expected loss at time t is

E[L(t)] = ∫_t^∞ {k11[σ1² + µ1²] + k22[σ2² + µ2²] + k12[σ12 + µ1µ2]} f(τ) dτ
        + ∫_0^t {k11[σ1² + (µ1 + δ1)²] + k22[σ2² + (µ2 + δ2)²] + k12[σ12 + (µ1 + δ1)(µ2 + δ2)]} f(τ) dτ    (6.6)
(c) Drift state: A drift can happen in a positive or a negative direction. For example, a positive drift occurs when a tool wears out due to aging, while a negative drift occurs when the diameter of a spray nozzle decreases due to clogging. We assume a positive drift in this section; however, the effect of a negative drift can be found by changing the sign of the drift factor. In this state, we assume W1(t, τ) = θ1(t − τ) and W2(t, τ) = θ2(t − τ).
Thus, the expected loss at time t is

E[L(t)] = ∫_t^∞ {k11[σ1² + µ1²] + k22[σ2² + µ2²] + k12[σ12 + µ1µ2]} f(τ) dτ
        + ∫_0^t {k11[σ1² + (µ1 + θ1(t − τ))²] + k22[σ2² + (µ2 + θ2(t − τ))²] + k12[σ12 + (µ1 + θ1(t − τ))(µ2 + θ2(t − τ))]} f(τ) dτ    (6.7)
(d) Shift and drift state: Both shift and drift effects could occur in a production process as a result of assignable causes. For example, a machine can experience wear-out and leakage at the same time, so this joint deterioration effect is important for some processes. In such a state, we assume W1(t, τ) = δ1 + θ1(t − τ) and W2(t, τ) = δ2 + θ2(t − τ).
Thus, the expected loss at time t is

E[L(t)] = ∫_t^∞ {k11[σ1² + µ1²] + k22[σ2² + µ2²] + k12[σ12 + µ1µ2]} f(τ) dτ
        + ∫_0^t {k11[σ1² + (µ1 + δ1 + θ1(t − τ))²] + k22[σ2² + (µ2 + δ2 + θ2(t − τ))²] + k12[σ12 + (µ1 + δ1 + θ1(t − τ))(µ2 + δ2 + θ2(t − τ))]} f(τ) dτ    (6.8)
We assume the deterioration follows an exponential distribution, i.e., f(τ) = λe^(−λτ). After performing integration by parts and rearranging the terms in equations (6.5) to (6.8), we find that the total expected quality loss over the production run T is

L(µ, T) = ∫_0^T E[L(t)] dt    (6.9)
Table 6.1 shows a summary of the total expected quality loss at the different states of the process during random deterioration. During resetting of the process parameters, an adjustment cost, CA, will be incurred. Other than the adjustment procedures, the maintenance actions (i.e., oiling and cleaning) are carried out during the production run, and the cost of maintenance accumulates up to time T. We assume that these maintenance actions do not stop the production process. Thus, the total expected cost is as follows:

E[TC] = L(µ, T) + CA + ∫_0^T M(t) dt    (6.10)

The optimum process mean and production run can be found by minimizing the expected cost per unit time:

C[µ, T] = (1/T) [ L(µ, T) + CA + ∫_0^T M(t) dt ]    (6.11)
The total cost can be computed by determining the optimum initial mean settings of the two quality characteristics and the optimum production run.
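The objective (6.11) can also be evaluated numerically, without the closed forms of Table 6.1. The sketch below is a minimal illustration assuming the shift-and-drift loss (6.8), an exponential change point, and scipy quadrature; all function names, and the optional rho factor mirroring the production rate that appears in Table 6.1, are our own choices rather than the authors' implementation.

import numpy as np
from scipy.integrate import quad

def expected_loss_t(t, mu, var, cov12, k, delta, theta, lam):
    """E[L(t)] of eq. (6.8), integrating over the exponential change point tau."""
    k11, k22, k12 = k
    mu1, mu2 = mu
    v1, v2 = var

    def in_control(tau):        # tau >= t: means still at their initial settings
        return (k11 * (v1 + mu1 ** 2) + k22 * (v2 + mu2 ** 2)
                + k12 * (cov12 + mu1 * mu2)) * lam * np.exp(-lam * tau)

    def out_of_control(tau):    # tau < t: means shifted by delta and drifting at theta
        m1 = mu1 + delta[0] + theta[0] * (t - tau)
        m2 = mu2 + delta[1] + theta[1] * (t - tau)
        return (k11 * (v1 + m1 ** 2) + k22 * (v2 + m2 ** 2)
                + k12 * (cov12 + m1 * m2)) * lam * np.exp(-lam * tau)

    return quad(in_control, t, np.inf)[0] + quad(out_of_control, 0, t)[0]

def unit_cost(mu, T, var, cov12, k, delta, theta, lam, C_A, maint, rho=1.0):
    """C(mu, T) of eq. (6.11); rho scales the quality loss as in Table 6.1."""
    loss = quad(lambda t: expected_loss_t(t, mu, var, cov12, k, delta, theta, lam),
                0, T)[0]
    m_cost = quad(maint, 0, T)[0]
    return (rho * loss + C_A + m_cost) / T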
Table 6.1 Total expected quality loss at different states

Process state     Total expected quality loss
Stable            L(µ, T) = ρ·AT
Shift             L(µ, T) = ρ[ AT + (B + B′)(T − (1 − e^(−λT))/λ) ]
Drift             L(µ, T) = ρ[ AT + C(T²/2 − T/λ + (1 − e^(−λT))/λ²) + C′(T³/3 − T²/λ + 2T/λ² − 2(1 − e^(−λT))/λ³) ]
Shift and drift   L(µ, T) = ρ[ AT + (B + B′)(T − (1 − e^(−λT))/λ) + (C + E)(T²/2 − T/λ + (1 − e^(−λT))/λ²) + C′(T³/3 − T²/λ + 2T/λ² − 2(1 − e^(−λT))/λ³) ]

where
A  = Σ_{i=1}^{n} kii[σi² + µi²] + Σ_{i=1}^{n} Σ_{j=1}^{i−1} kij[σij + µiµj]
B  = Σ_{i=1}^{n} kii[2µiδi] + Σ_{i=1}^{n} Σ_{j=1}^{i−1} kij[µiδj + µjδi]
B′ = Σ_{i=1}^{n} kii δi² + Σ_{i=1}^{n} Σ_{j=1}^{i−1} kij δiδj
C  = Σ_{i=1}^{n} kii[2µiθi] + Σ_{i=1}^{n} Σ_{j=1}^{i−1} kij[µiθj + µjθi]
C′ = Σ_{i=1}^{n} kii θi² + Σ_{i=1}^{n} Σ_{j=1}^{i−1} kij θiθj
E  = Σ_{i=1}^{n} kii[2δiθi] + Σ_{i=1}^{n} Σ_{j=1}^{i−1} kij[δiθj + δjθi]
Theoretically, the optimal process parameters can be found by setting the derivatives with respect to each variable to zero [i.e., ∂C(µ, T)/∂µ = 0 and ∂C(µ, T)/∂T = 0] and by solving the resulting equations. However, these derivatives are complicated and mathematically cumbersome. Therefore, the pattern search techniques of Hooke and Jeeves [13] or generalized reduced gradient (GRG) algorithms can be employed to determine the optimal process parameters. In recent years, population-based evolutionary algorithms such as genetic algorithms are also showing promising results in solving complex models.
6.3 Genetic Algorithm Holland [14] proposed the genetic algorithm (GA), a population-based search algorithm that tries to imitate the evolution of living beings. The gene plays an important role in the evolutionary process. From an optimization point of view, a gene is a variable of the problem to be optimized. The collection of genes forms a “chromosome” or “individual.” According to Darwin’s survival-of-the-fittest theory, natural selection discards inferior individuals who fail to adapt to the environment.
Fig. 6.2 The steps of a GA
Genes are inherited by the new children through the use of genetic crossover. The other genetic operator, mutation, is responsible for creating new species over the ages. The chromosomes resulting from these genetic operations take part in forming the population of the next generation. The process is repeated for a desired number of generations, for a specific period of time, or until no improvement is noticed over a certain number of generations. The general structure of the GA is shown in Fig. 6.2. The following sections discuss the main components of GAs.
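A compact real-coded GA following this structure might look as follows. It is only a sketch built from the components discussed in the subsections below (tournament selection, a blending crossover, random mutation, and elitism); every name and default parameter value is illustrative rather than the authors' implementation.

import random

def run_ga(cost, bounds, pop_size=64, generations=100,
           crossover_rate=0.8, mutation_rate=0.001, tournament_k=3):
    """Minimise `cost` over the box defined by bounds = [(lo, hi), ...]."""
    def random_chromosome():
        return [random.uniform(lo, hi) for lo, hi in bounds]

    def tournament(pop, costs):
        # Pick k chromosomes at random; the fittest (lowest cost) becomes a parent.
        contenders = random.sample(range(len(pop)), tournament_k)
        return pop[min(contenders, key=lambda i: costs[i])]

    def blend_crossover(p1, p2):
        # Blend each gene between the two parent values (a simple blending method).
        return [w * g1 + (1 - w) * g2
                for g1, g2, w in ((a, b, random.random()) for a, b in zip(p1, p2))]

    def mutate(chrom):
        # Random mutation: occasionally replace a gene with a fresh random value.
        return [random.uniform(lo, hi) if random.random() < mutation_rate else g
                for g, (lo, hi) in zip(chrom, bounds)]

    population = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        costs = [cost(ind) for ind in population]
        elite = population[costs.index(min(costs))]      # elitism: keep the best
        children = [elite]
        while len(children) < pop_size:
            p1 = tournament(population, costs)
            p2 = tournament(population, costs)
            child = blend_crossover(p1, p2) if random.random() < crossover_rate else list(p1)
            children.append(mutate(child))
        population = children
    costs = [cost(ind) for ind in population]
    best = population[costs.index(min(costs))]
    return best, min(costs)

A call such as run_ga(lambda x: sum(v * v for v in x), [(-5, 5)] * 3) would minimise a toy quadratic; the numerical example of Section 6.4 indicates the kind of cost model such a loop could drive.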
6.3.1 Genetic Representation The implementation of a GA starts with the representation of its chromosome. A chromosome is the basic structure that stores the building information of a solution. Each gene in a chromosome represents a decision parameter of the solution. Figure 6.3 shows the basic structure of a chromosome. The gene values in the chromosome can be expressed by using binary digits, integers, floating-point numbers, or strings. In a binary-coded GA, an encoder–decoder phase encodes the chromosome to its binary form and decodes it back to an integer form. This phase is not required when using a real-coded GA; thus, this type of representation provides the following two benefits: 1. Speed. The execution speed of the GA process increases due to the absence of the encoding–decoding process.
Fig. 6.3 Chromosome in a population (genes form a chromosome; chromosomes form the population)
2. Search space. In a binary-coded GA, the search region depends on the bit size of a particular gene or the parameter value. For example, a gene with bit size = 8 can handle a maximum gene value of 255. On the other hand, using integer values per gene widens the search region.
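As a small illustration of this point (not taken from the chapter), decoding an 8-bit gene onto a real interval might look as follows; the 255-level resolution is exactly what a real-coded representation avoids.

def decode_gene(bits, lo, hi):
    """Map a bit string, e.g. '11111111' (= 255), linearly onto [lo, hi]."""
    levels = 2 ** len(bits) - 1            # 255 distinct levels for an 8-bit gene
    return lo + (int(bits, 2) / levels) * (hi - lo)

print(decode_gene("11111111", 0.0, 10.0))  # upper bound of the range
print(decode_gene("10000000", 0.0, 10.0))  # roughly the midpoint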
6.3.2 Population Size One of the other major differences between traditional algorithms and GAs is that, while conventional algorithms start with a single point, GAs work with multiple points (i.e., a population). The population size is a critical parameter for the performance of a GA. There is no clear indication in the literature about how large the population should be. A large population covers the entire search space and produces better results, at the expense of difficulties in storing the data and a high execution time. On the other hand, a smaller population executes faster, but the crossover operation may not work as well, and it might leave some parts of the search space unexplored; therefore, the result may or may not be the optimum one. Besides, the size of the population can be kept constant throughout the algorithm, or it can vary from generation to generation depending on the number of chromosome deaths.
6.3.3 Generating Initial Population Individual genes are generated randomly by using different approaches, such as uniform sampling, random sampling, and complementary sampling. The uniform sampling method generates chromosomes uniformly throughout the search space. The random sampling method may produce some large gaps in the cost function. The complementary sampling randomly generates half the chromosomes, and then the second half is the complement of the first half. A well-sampled initial
population reduces the convergence time and prevents premature convergence to a local minimum. In addition, the initial generation can be seeded on the basis of some predefined knowledge rather than being produced purely at random; in this case, GAs converge to the optimal solution more rapidly.
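One possible reading of complementary sampling for a real-coded population is sketched below: half the chromosomes are drawn at random and the other half mirror them about the midpoint of each gene's range. This interpretation is an assumption on our part, not a detail given in the chapter.

import random

def complementary_population(pop_size, bounds):
    """Generate half the chromosomes randomly; the rest are their complements
    with respect to the gene ranges (assumes an even pop_size)."""
    half = [[random.uniform(lo, hi) for lo, hi in bounds]
            for _ in range(pop_size // 2)]
    complements = [[lo + hi - g for g, (lo, hi) in zip(chrom, bounds)]
                   for chrom in half]
    return half + complements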
6.3.4 Fitness Function A fitness function is used to measure the “goodness” of a chromosome. It is used to compare the fitness of chromosomes during genetic operations. The chromosome undergoes different genetic operations based on the fitness value.
6.3.5 Selection A GA performs a selection process in which the “most-fit” members or chromosomes of the population survive and are allowed to reproduce. The children replace the “weak” members of the population. There are many selection approaches that can be incorporated in a GA. Two of the most popular selection schemes are “tournament” and “roulette wheel.” In a tournament selection, a small subset of chromosomes is picked at random, and the chromosome with the highest fitness becomes a parent. The tournament is repeated for every parent needed. The roulette wheel selection, or stochastic sampling with replacement, is based on probabilities. The probabilities assigned to the chromosomes in the mating pool are inversely proportional to their cost (i.e., proportional to their fitness). A chromosome with the highest fitness has the greatest probability of mating, while the chromosome with the lowest fitness has the lowest probability of mating. A random number determines which chromosome is selected. An elitism strategy in the selection process ensures that the best individual propagates to the next generation.
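For a minimisation problem, a roulette-wheel pick weighted by inverse cost could be sketched as below; the inverse-cost weighting is one common convention and is assumed here, not prescribed by the chapter (tournament selection is already shown in the GA loop sketch above).

import random

def roulette_select(population, costs):
    """Pick one chromosome; lower cost -> larger slice of the wheel.
    Assumes non-negative costs."""
    weights = [1.0 / (c + 1e-12) for c in costs]
    spin = random.uniform(0.0, sum(weights))
    cumulative = 0.0
    for chrom, w in zip(population, weights):
        cumulative += w
        if cumulative >= spin:
            return chrom
    return population[-1]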
6.3.6 Mating or Crossover Inspired by the role of sexual reproduction in the evolution of living things, a GA tries to mimic this process by using a crossover operation. The parents are chosen in the selection process. The reproductive ability of these parents is decided on the basis of a random number: if this random number is lower than the crossover rate (0.6 to 1), then the parents are permitted to reproduce. A crossover operation can generate a single offspring or multiple offspring. During the reproduction phase, the elements of existing solutions are combined in order to create new offspring, with some of the features of each parent chosen by natural selection. In a single-point crossover, a random crossover point is chosen, and the elements of the parents beyond that point are swapped to produce the offspring, as shown in Fig. 6.4. There are also a number of alternative crossover approaches, such as two-point crossover and uniform crossover. The problem with these point
Fig. 6.4 Single-point crossover: with the crossover point after the first gene, Parent 1 = (7.78, 18.67, 5.89) and Parent 2 = (5.89, 19.45, 10.6) produce Offspring 1 = (7.78, 19.45, 10.6) and Offspring 2 = (5.89, 18.67, 5.89)
crossover methods is that no new information is introduced. The blending methods remedy this problem by finding ways to combine variable values from the two parents into new variable values in the offspring. See Haupt and Haupt [15] for other types of crossover.
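The single-point swap of Fig. 6.4 is straightforward to write down; the illustrative sketch below reproduces the parent and offspring values shown in the figure when the cut is made after the first gene.

import random

def single_point_crossover(p1, p2, point=None):
    """Swap the genes of two parents beyond a chosen cut point."""
    if point is None:
        point = random.randint(1, len(p1) - 1)   # cut somewhere inside the chromosome
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

# Values from Fig. 6.4, cutting after the first gene:
o1, o2 = single_point_crossover([7.78, 18.67, 5.89], [5.89, 19.45, 10.6], point=1)
print(o1)  # [7.78, 19.45, 10.6]
print(o2)  # [5.89, 18.67, 5.89]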
6.3.7 Mutation Mutation is used in a GA to add diversity so that the algorithm can start searching in a new domain. In a standard GA, the mutation process, as shown in Fig. 6.5, periodically makes random changes in one or more members of the current population, either by flipping a bit to 0 or 1 or by substituting a random number. The result of a mutation may be an infeasible solution, in which case the algorithm attempts to “repair” such a solution to make it feasible. The type of mutation and the rate of mutation have slightly different effects on genes, depending on the bit representation of the genes. Fogarty [16] studied five different variable mutation rate schemes for an industrial burner and concluded that a variable mutation rate worked better than a constant mutation rate when the initial population was a matrix of all zeros. When the initial population was a matrix of random bits, no significant difference between the various mutation rates was noticed. Since these results were obtained on one specific problem, no general conclusions about a variable mutation rate are possible.
Fig. 6.5 Random mutation: the chromosome (7.78, 19.45, 10.6) becomes (7.78, 19.45, 7.8) after its third gene is mutated
6.3.8 Termination Criteria A common approach is to terminate the GA when a fixed number of generations has been executed. The GA process may also be run for a fixed amount of time, or it can be terminated when the diversity reduces to a specific level or when the average or best member of the population has not improved in the last n generations.
6.3.9 Genetic Algorithm Parameters Genetic algorithms include a number of parameters, such as the population size, the number of generations, the crossover rate, and the mutation rate. The setting of these parameters is application dependent. Recently, a parameterless GA [17] was proposed. Pongcharoen et al. [18] used a systematic approach for selecting appropriate GA parameters based upon a design-of-experiments approach. However, Aytug et al. [19] pointed out that most of the research involving GAs has failed to systematically explore the setting of GA parameters.
6.4 Numerical Example The GA program was applied to the quality selection model developed in Section 6.2. The program was written in Visual Basic 6 and executed on a 2.4-GHz Intel PC. The parameter settings of the GA are given in Table 6.2. To illustrate the potential industrial application of the proposed model, let us consider a numerical example. Assume a process with two dependent smaller-the-better types of quality characteristics (x and y) that follow a bivariate normal distribution with given variances and covariance (i.e., σ1² = 0.49, σ2² = 0.25, and σ12 = 0.25). The process adjustment cost is $1000 and the production rate is 1000 units/month.
Table 6.2 Genetic algorithm parameters

GA parameter                     Setting
Gene representation              Real
Population size                  64
Generating initial population    Random sampling method
Fitness function                 Minimise total operating cost
Selection type                   Tournament selection
Crossover rate                   0.8
Crossover type                   Modified blending
Mutation rate                    0.001
Mutation type                    Random
Termination criterion            Number of total generations
Total generations                100
Further, the loss coefficients k11, k22, and k12 are 3, 2, and 1, respectively. Let us also assume that the process moves to an out-of-control state as a result of an assignable cause. The deterioration is exponentially distributed with failure rate λ = 0.5. The shift rates are taken to be δ1 = 1 and δ2 = 2, and the drift rates are θ1 = 0.4 and θ2 = 1. The maintenance cost is M(t) = 10 + 3t. We solve for the parameters of the cost model in equation (6.11) using a GA. Table 6.3 summarises the results.

Table 6.3 Results of the numerical example

                           Symbol   Unit    Stable   Shift   Drift   Shift and drift
Initial mean of 1st QC     µ1       inch    0.00     0.00    0.00    0.00
Initial mean of 2nd QC     µ2       inch    0.00     0.00    0.00    0.00
Production run             T        month   8.16     8.16    3.51    3.25
Unit cost                  C        $       2.36     4.69    3.02    4.98

A sensitivity analysis could also be carried out to show the effect of the different input parameters on the output.
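Putting the pieces together, the example parameters above could be fed to the illustrative GA loop of Section 6.3 and the numerical cost sketch given after (6.11). The fragment below only shows this wiring under those assumed helper functions (run_ga, unit_cost); it is not the authors' Visual Basic program, the search bounds are arbitrary, and no results are implied.

# Wiring the example's inputs into the earlier illustrative sketches.
var = (0.49, 0.25)                # process variances sigma1^2, sigma2^2
cov12 = 0.25                      # covariance sigma12
k = (3.0, 2.0, 1.0)               # loss coefficients k11, k22, k12
delta = (1.0, 2.0)                # shift rates delta1, delta2
theta = (0.4, 1.0)                # drift rates theta1, theta2
lam = 0.5                         # failure rate of the assignable cause
C_A = 1000.0                      # adjustment cost
rho = 1000.0                      # production rate (units/month)
maint = lambda t: 10.0 + 3.0 * t  # maintenance cost M(t)

def objective(x):                 # chromosome = (mu1, mu2, T)
    mu1, mu2, T = x
    return unit_cost((mu1, mu2), T, var, cov12, k, delta, theta,
                     lam, C_A, maint, rho)

# Illustrative bounds; T is kept strictly positive.
best, best_cost = run_ga(objective, bounds=[(0.0, 5.0), (0.0, 5.0), (0.5, 24.0)])
print(best, best_cost)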
6.5 Conclusions The quality selection problem is one of the most studied problems in the field of combinatorial optimization. However, most of these studies have used traditional optimization algorithms to solve the parameters of their process adjustment models. In this chapter we have presented a new process adjustment model that uses a GA as a solution approach. In particular, this chapter has made three contributions. First, the joint determination of the optimum initial setting of process means and the optimum production run of a deteriorating process with multiple smaller-the-better types of quality characteristics was developed. Second, the dependency of the multiple quality characteristics was considered. Third, a GA was applied to solve the parameters of the proposed model, and to the best of the authors’ knowledge this is the first initiative to use a GA to model quality selection problems. Acknowledgment The authors would like to acknowledge the support received from CRCIEAM and the postgraduate scholarship to carry out the research work.
References
1. G. Taguchi (1985) Introduction to off-line quality control. Central Japan Quality Control Association, pp. 1–25.
2. F.A. Spiring and A.S. Yeung (1998) A general class of loss functions with industrial applications. Journal of Quality Technology, 30: 152–162.
3. K. Tahera, W.M. Chan, and R.N. Ibrahim (2007) Joint determination of process mean and run length: A review. International Journal of Advanced Manufacturing Technology, 2nd review.
4. J. Teeravaraprug and B.R. Cho (2002) Designing the optimal process target levels for multiple quality characteristics. International Journal of Production Research, 40: 37–54.
5. W.M. Chan and R.N. Ibrahim (2005) Designing the optimal process means and the optimal production run for a deteriorating process. International Journal of Advanced Manufacturing Technology, online first, DOI: 10.1007/s00170-005-0209-4.
6. D.S. Bai and M.K. Lee (1993) Optimal target values for a filling process when inspection is based on a correlated variable. International Journal of Production Economics, 32: 327–334.
7. S.L. Chen and K.J. Chung (1996) Determining the optimal production run and the most profitable process mean for a production process. International Journal of Production Research, 34: 2051–2058.
8. K.S. Al-Sultan and M.A. Al-Fawzan (1998) Determination of the optimal process means and production cycles for multistage production systems subject to process deterioration. International Journal of Production Planning and Control, 9(1): 66–73.
9. J. Roan, L. Gong, and K. Tang (2000) Joint determination of process mean, production run size and material order quantity for a container filling process. International Journal of Production Economics, 63: 303–331.
10. M.A. Rahim and F. Tuffaha (2004) Integrated model for determining the optimal initial settings of the process mean and the optimal production run assuming quadratic loss functions. International Journal of Production Research, 42: 3281–3300.
11. M.A. Hariga and M.A. Al-Fawzan (2005) Joint determination of target value and production run for a process with multiple markets. International Journal of Production Economics, 96(2): 201–212.
12. K.C. Kapur and B.R. Cho (1996) Economic design of the specification region for multiple quality characteristics. IIE Transactions, 28: 237–248.
13. R. Hooke and T.A. Jeeves (1961) Direct search solution for numerical and statistical problems. Journal of the Association for Computing Machinery, 8: 212–229.
14. J. Holland (1975) Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor, MI.
15. R.L. Haupt and S.E. Haupt (2004) Practical genetic algorithms, 2nd edition. Wiley Interscience, New York.
16. T.C. Fogarty (1989) Varying the probability of mutation in the genetic algorithm. Proceedings of the 3rd International Conference on Genetic Algorithms, Morgan Kaufmann, Los Altos, CA, pp. 104–109.
17. G. Harik and F. Lobo (1999) A parameterless genetic algorithm. Proceedings of the Genetic and Evolutionary Computation Conference, 1: 258–265.
18. P. Pongcharoen, C. Hicks, P.M. Braiden, and D. Stewardson (2002) Determining optimum genetic algorithm parameters for scheduling the manufacturing and assembly of complex products. International Journal of Production Economics, 78(3): 311–378.
19. H. Aytug, M. Knouja, and F.E. Vergara (2003) Use of genetic algorithms to solve production and operation management problems: A review. International Journal of Production Research, 41(17): 3955–4009.
Chapter 7
Temporal Aggregation and the Production Smoothing Model: Evidence from Electronic Parts and Components Manufacturing in Taiwan
Chien-wen Shen
Chien-wen Shen, Department of Logistics Management, National Kaohsiung First University of Science and Technology, Taiwan
7.1 Introduction To understand the inventory behaviors of manufacturers, production smoothing is one of the most discussed theoretical models, from the perspective of either macroeconomics or an individual firm. The basic motive of production smoothing is that companies can increase or decrease their finished goods inventories to allow production that is smoother than sales [1]. Hence, the production-smoothing model (PSM) of inventories depends on a convex short-run cost function and adjustment costs that induce firms to maintain inventories for dampening the effects of demand fluctuations [2]. In other words, production has to be less volatile than sales in the PSM. The above hypothesis is reasonable because it is a common scenario in manufacturing. Meanwhile, inventories will most usually serve as a production smoother if adjusting production is costly in comparison with the costs of keeping inventories [3]. Based on the above framework, researchers have developed various formulations of the PSM, which have been empirically applied to different manufacturing sectors. However, the applicability of the PSM remains debatable despite its intuitive appeal. Previous empirical studies have shown mixed results regarding the validation of the PSM, and Ghali [4] has shown that we should expect to see production smoothing for only a subset of manufacturing industries. He also claims that unless one confines the analysis solely to data on industries for which the PSM should a priori be applicable, the percentage of cases where smoothing is observed is irrelevant. Hence, it is important to verify different formulations of the PSM and to extend its empirical study to other key industries. Additionally, different studies have argued that other factors might influence the results of the PSM. For example, Christiano and Eichenbaum [5] studied the effect of temporal aggregation on the estimated speed of adjustment in a stock-adjustment model.
They found that aggregation over time leads to gross underestimation of the speed of adjustment. Similarly, Ghali [6] found that temporal aggregation leads to a downward bias in the estimated speed of adjustment. Because many macroeconomic studies are based on aggregated, seasonally adjusted data, further investigation of the effect of temporal aggregation is therefore critical for PSM applications. Motivated by the above, this paper extends the empirical study of the PSM to the electronic parts and components industry in Taiwan, where the manufacturing of semiconductors, electronic parts and components, optoelectronics, and computer peripherals plays an important role in the global supply of the information and computer technology industry. According to the 2005 survey of the Industry and Technology Intelligence Service, the electronic parts and components industry accounts for 37.4% of the total manufacturing product in Taiwan. Therefore, it is worth analyzing whether this crucial manufacturing sector follows the inventory and production behaviors of the PSM. The effect of temporal aggregation on the PSM will also be evaluated in this empirical test; here, temporal aggregation means that monthly data are aggregated to quarterly observations. Such an analysis can give us additional practical insights into the inventory behaviors of the PSM. This article proceeds as follows. The literature on empirical studies of the PSM is reviewed in Section 7.2. The basic formulations of the PSM used for this study are then introduced in Section 7.3. The empirical test of the PSM on the electronic parts and components industry in Taiwan is discussed in Section 7.4, where the hypothesis tests and model analyses on seasonally aggregated data and on disaggregated, non-seasonally adjusted (monthly) data are compared and examined. Finally, Section 7.5 summarizes the findings and provides suggestions for future research.
7.2 Literature Review Previous literature regarding the empirical tests of the PSM has focused on which firms or industries may follow the behavior claimed by the PSM, namely the use of inventories to smooth production levels in the face of fluctuating demand. Whether firms smooth or bunch production over time is not merely an academic question; it has broad implications for the business cycle, inventory demand, and even seasonal and frictional unemployment [7]. An early study by Miller [1] inspected data from the agriculture sector and found support for the production-smoothing hypothesis in the case of condensed and evaporated milk. However, there is no such PSM behavior in the flour milling industry, which is more likely to have the characteristics of production-to-order. Krane and Braun [8] examined disaggregated physical-product data and revealed that production is smoother than shipments in two-thirds of the 38 industry or product groupings considered. However, they found that estimates of the model’s cost parameters are usually imprecise and often do not have the signs postulated by PSMs.
Lovell [9] studied 84 interacting firms in 21 industries and discovered that the ratio of the variance of sales to the variance of output is an unreliable indicator of how seriously firms attempt to smooth production in the aggregate. He also stated that the slow estimates of the speed of adjustment reported in many econometric studies of US data may result in part from aggregation bias rather than from limitations of the model itself. Guariglia and Schiantarelli [10] used a panel of 467 UK manufacturing firms for the period 1980–1991 to analyze the performance of two versions of the linear quadratic model of inventories on micro data. They suggested that incentives to production smoothing are not prevalent in UK manufacturing firms. This conclusion also holds when firms are partitioned according to whether they are more or less likely to face financial constraints. Allen [11] uncovered that the variance of production relative to sales is less than 1 for 23 out of 35 manufacturing and retail industries when detrended and seasonally unadjusted data are used. This finding is consistent with production-smoothing behavior. However, no such inequality exists when seasonally adjusted data are used. Recent research from Gorman and Brannon [7] found that firms do not smooth production due to seasonal adjustment of the census data for US manufacturing. Ghali [4] noted that the shape of the production cost function, whether convex or nonconvex, is an important determinant, but not the only determinant, of whether production smoothing is an optimal behavior. Mollick [2] showed that 17 out of 18 cases for seasonally adjusted data in the Japanese vehicle industry satisfy the PSM by full information maximum likelihood employed jointly on the inventories and sales equations. Positive costs of being away from targeted inventory are present in six out of these nine goods; this makes the PSM a good description of this industry. Based on the above review of the empirical studies of the PSM, we can see that the PSM is not universally accepted in all industry sectors. Thus, we might wonder whether the PSM is still applicable to a modern technology industry like electronic parts and components. Moreover, does temporal aggregation of the data affect the results of the hypothesis tests and parameter estimations for the PSM? These practical questions are the main subjects that this study investigates.
7.3 Model Specifications Behind the intuition of PSM, companies will try to maximize intertemporal cash flow in choosing an optimal stock of inventories so that variables exogenous to optimizing agents, such as real factor input prices and real interest rates are taken to be the driving forces behind inventory accumulation [12]. If firms faced with convex cost functions chose to smooth the rate of production output in order to minimize costs, one would expect to observe the production smoothing hypothesis that the rate of production output would vary less than the rate of sales, with variations in inventory stocks absorbing some of the fluctuations in sales [13]. To describe the above hypothesis in mathematical expressions, let V (·) and C(·) denote the variance
and covariance of the variables in parentheses, respectively. If inventories are held for production-smoothing motives, the variance of production should be less than that of sales [1], or

V(Q)/V(S) < 1,    (7.1)

where Q and S denote production and actual sales, respectively. The above inequality holds when the covariance between H and S is negative enough, where H denotes the inventory difference between period t and t − 1. One needs to test this production-smoothing hypothesis (7.1) to ensure the buffer stock behavior before further model analysis. Under the production-smoothing hypothesis, researchers have developed several patterns of the PSM to analyze inventory behaviors. According to the research of Lovell [9], equations (7.2) and (7.3) are the underlying foundation equations of the PSM. In equation (7.2), Lovell assumes that there is a linear relationship between the normal inventory stock of finished goods Ht^n at the end of period t and the normal sales St^n in period t. Equation (7.3) describes the situation in which the desired change in inventories Ht^D − Ht−1 may only be partially adjusted toward the normal level Ht^n − Ht−1, where Ht^D is the desired inventory stock of finished goods at the end of period t and Ht−1 is the inventory at the end of period t − 1.

Ht^n = β0 + β1·St^n    (7.2)

Ht^D − Ht−1 = δ(Ht^n − Ht−1),  0 ≤ δ ≤ 1    (7.3)
Because normal sales and anticipated sales are generally unobservable in economic statistics, Ghali [6] assumes

St^n = St^a = St + α(St − St−1) + εt,    (7.4)

where the disturbance εt is assumed to follow a first-order autocorrelation structure. Under the model construction of the PSM, Lovell claims that the production output Qt in period t lies between the previous production Qt−1 and the desired change in stocks plus anticipated sales St^a in period t. Equation (7.5) expresses Lovell’s basic formula of the PSM, where τ is the smoothing coefficient and 0 ≤ τ < 1:

Qt = (1 − τ)(Ht^D − Ht−1 + St^a) + τ·Qt−1    (7.5)
Suppose we substitute equation (7.2) into equation (7.3) and replace St^a with equation (7.4); then equation (7.5) can be written as the variant flexible accelerator model (FAM) introduced by Ghali [6]:

Qt = (1 − τ)δβ0 + (1 − τ)(1 + δβ1)St + α(1 − τ)(1 + δβ1)(St − St−1) − (1 − τ)δ·Ht−1 + τ·Qt−1 + νt−1,    (7.6)

where νt−1 = α(1 − τ)(1 + δβ1)εt−1. Although there are other variant PSMs, this paper applies only the commonly used flexible accelerator model for practical discussion.
To check the existence of heterogeneity, runs test of the residuals was applied instead of the Durbin–Watson test because (7.6) has a lagged value of the dependent variable Qt−1 as an explanatory variable. If there is an autocorrelation problem according to the runs test, this study applied Hildreth–Lu’s approach instead of ordinary least squares for parameter estimation. Furthermore, one needs to transform the estimation results from (7.6) to obtain the original structural parameters like α , β0 , β1 , δ , and τ . The major concerns in PSM are the degree of production smoothing τ and the speed of inventory adjustment δ , where τ equals the coefficient of Qt−1 and δ equals 1 over the coefficient of Ht−1 . Under the above model specification, the estimates of both τ and δ should be between 0 and 1. In the following discussion of empirical study, the flexible accelerator model is used for PSM analysis.
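As an illustration of this procedure, the sketch below fits (7.6) by ordinary least squares, applies a Wald–Wolfowitz runs test to the residuals, and reads off the smoothing coefficient tau from the Qt−1 coefficient. The back-out of delta shown in the comment follows from the reduced-form coefficients of (7.6) as reconstructed above and is our assumption about the transformation; array names are hypothetical, and Hildreth–Lu would be substituted for OLS whenever the runs test flags autocorrelation.

import numpy as np

def estimate_fam(Q, S, H):
    """Q, S, H: 1-D arrays of production, sales, end-of-period inventory stock."""
    Q, S, H = (np.asarray(a, dtype=float) for a in (Q, S, H))
    y = Q[1:]                                   # Q_t
    X = np.column_stack([np.ones(len(y)),
                         S[1:],                 # S_t
                         S[1:] - S[:-1],        # S_t - S_{t-1}
                         H[:-1],                # H_{t-1}
                         Q[:-1]])               # Q_{t-1}
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    tau = beta[4]                               # coefficient of Q_{t-1}
    # Implied by (7.6): coef(H_{t-1}) = -(1 - tau) * delta.
    delta = -beta[3] / (1.0 - tau)
    return beta, tau, delta, residuals

def runs_test(residuals):
    """Wald-Wolfowitz runs test on the signs of the residuals (returns z)."""
    signs = residuals > 0
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))
    n1 = float(signs.sum())
    n2 = float((~signs).sum())
    expected = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    variance = (2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2)
                / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    return runs, expected, (runs - expected) / np.sqrt(variance)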
7.4 Empirical Results The main goal of this research is to examine the effect of temporal aggregation on the PSM and the applicability of the PSM to the manufacture of electronic parts and components. According to the yearbook of industrial production statistics published by the Department of Statistics, Ministry of Economic Affairs in Taiwan, there are 16 key product subgroups under the category of electronic parts and components. Thus, the following 16 products are selected as the investigation targets of this research: IC package (2710-003), integrated circuit (2710-010), foundry wafer (2710-100), diode (2710-210), transistor (2710-220), electronic capacitor (2720-010), electronic transformer (2720-100), resistor (2720-200), electronic connector (2720-940), printed circuit board (2730-000), light-emitting diode chip (2792-010), light-emitting diode (2792-015), passive matrix liquid-crystal device (LCD) or module (2792-055), active matrix liquid-crystal device or module (2792-060), power supplies (2799-010), and quartz oscillator (2799-910). The seven-digit number following each product name is the (4 + 3)-digit code assigned by the standard industrial classification system in Taiwan. Monthly observations of production, sales, and inventory for each product during the period from July 1995 to June 2005 are collected from the yearbook of industrial production statistics. The active matrix liquid-crystal device or module was later excluded from this study because its official records only start from the year 2000. The base unit of the production, sales, and inventory data is “1000 pieces” for all subgroups except the printed circuit board and power supply subgroups, where the base units are “square feet” and “sets,” respectively. The remainder of this section is divided into two subsections. The first subsection compares the test results of the PSM hypothesis between seasonally aggregated data and (nontemporally aggregated) monthly data. For those products from the electronic parts and components industry satisfying the PSM hypothesis, their parameter analyses of the PSM are further discussed in the second subsection, where the effect of temporal aggregation is also examined.
These findings will help us to better understand the impact of aggregated data in the PSM.
7.4.1 Tests of Production-Smoothing Hypotheses The ratios of the variances V(Q) to V(S) for the 15 electronic parts and components products are summarized in Table 7.1, where the ratios are compared between seasonally aggregated data and monthly data for each product. As a preliminary, the PSM has to satisfy the hypothesis in (7.1); the results in Table 7.1 show that 12 of the 15 products support the production-smoothing hypothesis V(Q)/V(S) < 1, whether computed from disaggregated data or from aggregated data. On the other hand, the ratios of production variance to sales variance for IC package, electronic transformer, and passive matrix liquid-crystal device or module are 1.253, 1.286, and 1.164, respectively, for the raw data and 1.268, 1.411, and 1.175, respectively, for the aggregated data. The above findings indicate that temporal aggregation of the data does not affect the test results of the production-smoothing hypothesis for the target industry. To understand why the variances of production are larger than those of sales for the IC package, electronic transformer, and passive matrix liquid-crystal device or module products, we look into their production and sales data. For example, according to the production and sales (aggregated or disaggregated) data for IC package, one can observe that sales are always higher than production before May 2002. However, this pattern reverses after that time point. If the data are further divided into two subperiods, one finds that V(Q)/V(S) = 0.846 during the period from July 1995 to April 2002 and V(Q)/V(S) = 1.178 during the period from May 2002 to June 2005. This shows that the production-smoothing hypothesis does not hold for the recent inventory behavior of the IC package product. Inspection of the data for the electronic transformer and passive matrix liquid-crystal device or module products displays patterns similar to that of the IC package. Other reasons for the hypothesis violations can be investigated in future research to better understand the inventory behavior in this industry sector.
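The variance-ratio screening of Table 7.1 is easy to reproduce for any series. The sketch below computes V(Q)/V(S) from monthly arrays and from the same arrays aggregated to quarters by summing blocks of three months, which is the kind of temporal aggregation examined here; the function names are illustrative only.

import numpy as np

def to_quarterly(monthly):
    """Aggregate a monthly series to quarters by summing blocks of three months."""
    monthly = np.asarray(monthly, dtype=float)
    usable = len(monthly) - len(monthly) % 3      # drop an incomplete final quarter
    return monthly[:usable].reshape(-1, 3).sum(axis=1)

def smoothing_ratio(production, sales):
    """V(Q)/V(S); a value below 1 supports the production-smoothing hypothesis (7.1)."""
    return np.var(production, ddof=1) / np.var(sales, ddof=1)

# ratio_monthly   = smoothing_ratio(q_monthly, s_monthly)
# ratio_quarterly = smoothing_ratio(to_quarterly(q_monthly), to_quarterly(s_monthly))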
7.4.2 Model Analyses Parameter estimations of the PSM in (7.6) are analyzed in this subsection for those products satisfying the production-smoothing hypothesis. To decide which estimation method should be adopted for each product subgroup, a runs test was applied to check for the existence of the heterogeneity problem in (7.6).
Table 7.1 Variances of production (Q) and sales (S), and ratios of variances for aggregated and disaggregated data

Product subgroup                            Temporal aggregation?   V(Q)            V(S)            V(Q)/V(S)
2710-003 Integrated circuit (IC) package    No                      1.486 × 10^11   1.185 × 10^11   1.253
                                            Yes                     1.316 × 10^12   1.038 × 10^12   1.268
2710-010 Integrated circuit                 No                      1.392 × 10^10   1.453 × 10^10   0.958
                                            Yes                     1.201 × 10^11   1.280 × 10^11   0.936
2710-100 Foundry wafer                      No                      4.931 × 10^2    5.635 × 10^2    0.875
                                            Yes                     4.404 × 10^5    5.042 × 10^5    0.873
2710-210 Diode                              No                      2.928 × 10^10   8.805 × 10^10   0.333
                                            Yes                     2.064 × 10^11   7.304 × 10^11   0.283
2710-220 Transistor                         No                      5.010 × 10^9    6.047 × 10^9    0.828
                                            Yes                     3.786 × 10^10   4.732 × 10^10   0.800
2720-010 Electronic capacitor               No                      3.799 × 10^13   4.517 × 10^13   0.841
                                            Yes                     3.374 × 10^14   3.997 × 10^14   0.844
2720-100 Electronic transformer             No                      1.503 × 10^7    1.169 × 10^7    1.286
                                            Yes                     1.284 × 10^8    9.099 × 10^7    1.411
2720-200 Resistor                           No                      4.574 × 10^13   5.645 × 10^13   0.810
                                            Yes                     4.012 × 10^14   4.929 × 10^14   0.814
2720-940 Electronic connector               No                      1.633 × 10^9    2.266 × 10^9    0.721
                                            Yes                     1.197 × 10^10   1.775 × 10^10   0.674
2730-000 Printed circuit board              No                      6.977 × 10^14   7.814 × 10^14   0.894
                                            Yes                     6.065 × 10^15   6.776 × 10^15   0.895
2792-010 Light emitting diode chip          No                      2.206 × 10^12   2.274 × 10^12   0.970
                                            Yes                     1.960 × 10^13   2.034 × 10^13   0.964
2792-015 Light emitting diode               No                      1.920 × 10^10   2.935 × 10^10   0.654
                                            Yes                     1.680 × 10^11   2.563 × 10^11   0.655
2792-055 Passive matrix LCD or module       No                      1.116 × 10^8    9.583 × 10^7    1.164
                                            Yes                     9.481 × 10^8    8.070 × 10^8    1.175
2799-010 Power supplies                     No                      1.002 × 10^12   2.398 × 10^12   0.418
                                            Yes                     8.643 × 10^12   2.021 × 10^13   0.428
2799-910 Quartz oscillator                  No                      1.271 × 10^8    1.378 × 10^9    0.092
                                            Yes                     1.134 × 10^9    1.245 × 10^10   0.091
According to the results of the runs tests summarized in Table 7.2, only the nontemporally aggregated data for the product of electronic connector (2720-940) has an autocorrelation problem, where the observed runs (O. Runs) and the expected runs (E. Runs) of the residuals for (7.6) are 49 and 60.40, respectively. Thus, the Hildreth–Lu estimation method was applied to these data, whereas ordinary least squares was used to perform the parameter estimations for the other product subgroups on both aggregated and nonaggregated data. Next, the results of the parameter estimations are shown in Table 7.3. Almost all of the coefficients of determination R² are above 85% and thus indicate excellent fits for all product subgroups in the PSM of (7.6). The only exception is the nontemporally aggregated data from electronic connector.
Table 7.2 Runs tests of PSM

Product subgroup                      Temporal aggregation?   O. Runs   E. Runs   p-value
2710-010 Integrated circuit           No                      60        60.50     0.927
                                      Yes                     20        20.38     0.900
2710-100 Foundry wafer                No                      53        60.40     0.173
                                      Yes                     24        20.38     0.238
2710-210 Diode                        No                      66        60.40     0.301
                                      Yes                     21        20.49     0.868
2710-220 Transistor                   No                      57        60.29     0.543
                                      Yes                     18        20.49     0.419
2720-010 Electronic capacitor         No                      63        60.16     0.599
                                      Yes                     17        20.49     0.257
2720-200 Resistor                     No                      61        60.40     0.911
                                      Yes                     17        20.18     0.294
2720-940 Electronic connector         No                      49        60.40     0.036*
                                      Yes                     22        20.18     0.548
2730-000 Printed circuit board        No                      60        59.55     0.934
                                      Yes                     21        20.49     0.868
2792-010 Light emitting diode chip    No                      55        60.29     0.328
                                      Yes                     20        20.49     0.874
2792-015 Light emitting diode         No                      65        59.79     0.332
                                      Yes                     20        20.38     0.900
2799-010 Power supplies               No                      65        60.46     0.403
                                      Yes                     19        20.18     0.697
2799-910 Quartz oscillator            No                      53        59.55     0.220
                                      Yes                     17        18.95     0.491

O. Runs = observed runs, E. Runs = expected runs; * p-value < 0.05
There the coefficient of determination R² is only 68.7%, which is still acceptable for practical implementation; this relatively low fit is possibly due to the Hildreth–Lu estimation used for the heterogeneity correction. Besides, model fitness seems not to be affected by temporal aggregation, even though the aggregated data generally yield a slightly better fit than the disaggregated data, except for light-emitting diode and power supplies. Furthermore, the coefficient estimates of St, St − St−1, Ht−1, and Qt−1 are used to compute the estimates of the structural parameters α, β0, β1, δ, and τ from equations (7.2) to (7.6), where the degree of production smoothing τ and the speed of inventory adjustment δ are the main interests of the PSM. Unlike the results for the coefficients of determination, temporal aggregation shows its influence on the parameter estimations. For example, some of the parameter estimates display different signs on aggregated and disaggregated data for the products of electronic capacitor, electronic connector, and quartz oscillator. The significance of the parameters also reveals some degree of inconsistency between the different data sources for most of the products, excluding transistor, resistor, electronic connector, light-emitting diode chip, power supplies, and quartz oscillator.
Table 7.3 Parameter estimations of PSM

Product subgroup                      Temporal aggregation?   St        St − St−1   Ht−1      Qt−1      R²
2710-010 Integrated circuit           No                      0.336*    0.552*      0.023     0.615*    95.5%
                                      Yes                     0.400     0.521       0.043     0.505     96.2%
2710-100 Foundry wafer                No                      0.711*    0.052       −0.086*   0.242*    98.8%
                                      Yes                     0.733*    0.161       −0.103*   0.218     99.5%
2710-210 Diode                        No                      0.359*    0.479*      −0.144*   0.547*    88.1%
                                      Yes                     0.121     0.562*      −0.020    0.756*    93.0%
2710-220 Transistor                   No                      0.504*    0.256*      −0.066*   0.432*    85.2%
                                      Yes                     0.376*    0.483*      −0.057*   0.588*    93.9%
2720-010 Electronic capacitor         No                      0.285*    0.438*      0.007     0.680*    97.5%
                                      Yes                     0.096     0.837*      −0.017    0.931*    99.1%
2720-200 Resistor                     No                      0.494*    0.238*      −0.038*   0.472*    97.5%
                                      Yes                     0.419*    0.447*      −0.050*   0.560*    98.7%
2720-940 Electronic connector         No                      1.036*    −0.266*     −0.078    −0.412*   68.7%
                                      Yes                     0.384*    0.374*      −0.040    0.493*    86.3%
2730-000 Printed circuit board        No                      0.795*    0.023       0.063     0.127     97.1%
                                      Yes                     0.719     0.191       0.026     0.208     98.6%
2792-010 Light emitting diode chip    No                      0.683*    0.142       −0.111*   0.386*    98.2%
                                      Yes                     0.733*    0.175       −0.132*   0.359*    99.0%
2792-015 Light emitting diode         No                      0.079†    0.202*      −0.015    0.904*    92.8%
                                      Yes                     0.106     0.173†      −0.002    0.888*    91.0%
2799-010 Power supplies               No                      0.159*    0.279*      −0.028    0.745*    93.0%
                                      Yes                     0.207*    0.276*      −0.007    0.661*    92.3%
2799-910 Quartz oscillator            No                      0.077*    0.118*      0.015     0.670*    92.4%
                                      Yes                     0.081*    0.138†      −0.058    0.921*    95.0%

* p-value < 0.05; † p-value < 0.10
Will these inconsistencies affect the estimation results for the structural parameters? The later discussion provides the answers. Table 7.4 summarizes the estimation results for δ and τ. First, let us consider the results satisfying the criteria in equations (7.3) and (7.5). As we can see, foundry wafer, diode, transistor, resistor, light-emitting diode chip, light-emitting diode, and power supplies have their estimates of the structural parameters within the feasible range between 0 and 1 for both aggregated and disaggregated data. However, the phenomenon of gross underestimation of the speed of adjustment found by Ghali [6] and Christiano and Eichenbaum [5] does not appear in this study. Hence, temporal aggregation of the data may not cause the downward bias in the estimated speed of inventory adjustment δ and degree of production smoothing τ. High estimates of τ for the products of diode, light-emitting diode, and power supplies imply that their production depends more on the production outputs of the preceding period. On the other hand, the production of foundry wafer and light-emitting diode chip relies more on the desired changes in inventory stocks plus anticipated sales than on the previous production outputs.
Table 7.4 Estimations of structural parameters for PSM

Product subgroup                     Temporal aggregation?   δ         τ
2710-010 Integrated circuit          No                      −0.059    0.615∗
                                     Yes                     −0.087    0.505
2710-100 Foundry wafer               No                      0.114†    0.242∗
                                     Yes                     0.132     0.218
2710-210 Diode                       No                      0.318†    0.547∗
                                     Yes                     0.082     0.756∗
2710-220 Transistor                  No                      0.116†    0.432∗
                                     Yes                     0.138†    0.588∗
2720-010 Electronic capacitor        No                      −0.020    0.680∗
                                     Yes                     0.246     0.931∗
2720-200 Resistor                    No                      0.072†    0.472∗
                                     Yes                     0.113†    0.560∗
2720-940 Electronic connector        No                      0.055     −0.412∗
                                     Yes                     0.078     0.493∗
2730-000 Printed circuit board       No                      −0.027    0.127
                                     Yes                     −0.033    0.208
2792-010 Light emitting diode chip   No                      0.180†    0.386∗
                                     Yes                     0.206†    0.359∗
2792-015 Light emitting diode        No                      0.155     0.904∗
                                     Yes                     0.018     0.888∗
2799-010 Power supplies              No                      0.112     0.745∗
                                     Yes                     0.021     0.661∗
2799-910 Quartz oscillator           No                      −0.046    0.680∗
                                     Yes                     0.733     0.921∗

∗ p-value for the significance test on the coefficient of Qt−1 < 0.05
† p-values for the significance tests on both the coefficients of Qt−1 and Ht−1 < 0.05
The low estimated speed of inventory adjustment δ suggests minor inventory adjustments for integrated circuit, foundry wafer, diode, transistor, resistor, light-emitting diode chip, light-emitting diode, and power supplies. In addition, the inconsistencies of parameter significance shown in Table 7.3 seem not to play an important role in the estimation of the structural parameters of the PSM. According to the results of Table 7.4, only 3 (electronic capacitor, electronic connector, and quartz oscillator) out of 12 products show contradictions between the estimates of structural parameters from aggregated data and from disaggregated data. Examining the reasons for these inconsistencies, we find that the estimates of δ from the disaggregated data of electronic capacitor and quartz oscillator are negative and very close to zero. This suggests that the behavior of insignificant monthly inventory adjustment could be distorted in the aggregated data and, therefore, show opposite outcomes. Meanwhile, the heterogeneity problem in the monthly data of electronic connector is likely the cause of the inconsistency in its τ estimation. If we investigate the disaggregated data further, merely seven electronic parts and components products (foundry wafer, diode, transistor, resistor, light-emitting diode
chip, light-emitting diode, and power supplies) satisfy not only the PSM hypothesis regarding the variances of production and sales but also the criteria for the speed of inventory adjustment and the degree of production smoothing. As the monthly data reflect the behavior of manufacturers more accurately than the temporally aggregated data, the above findings suggest that the PSM is not universally acceptable in the practical implementation of electronic parts and components manufacturing in Taiwan. This outcome is consistent with previous studies such as [1, 2, 8, 10, 11], where the PSM only partially holds for the respective industry sectors.
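As a purely illustrative aside, not the study's actual code or data, the following minimal Python sketch shows one way to perform the kind of Hildreth–Lu grid-search estimation mentioned above for the regression of Qt on St, St − St−1, Ht−1, and Qt−1; the synthetic series, coefficient values, and variable names are placeholders.

```python
import numpy as np

def hildreth_lu(y, X, rhos=np.linspace(-0.9, 0.9, 181)):
    # Grid search over the AR(1) parameter rho: quasi-difference y and X,
    # fit OLS with an intercept, and keep the rho with the smallest SSE.
    best = None
    for rho in rhos:
        y_s = y[1:] - rho * y[:-1]
        X_s = np.column_stack([np.ones(len(y_s)), X[1:] - rho * X[:-1]])
        beta, *_ = np.linalg.lstsq(X_s, y_s, rcond=None)
        sse = float(np.sum((y_s - X_s @ beta) ** 2))
        if best is None or sse < best[0]:
            best = (sse, rho, beta)
    return best  # (SSE, rho, coefficients)

# Hypothetical monthly series: sales S, inventory H, production Q (placeholders only)
rng = np.random.default_rng(0)
n = 72
S = 100 + rng.normal(0, 10, n)
H = 50 + rng.normal(0, 5, n)
Q = np.zeros(n)
Q[0] = 100
for t in range(1, n):
    Q[t] = 20 + 0.4 * S[t] + 0.2 * (S[t] - S[t - 1]) - 0.05 * H[t - 1] + 0.5 * Q[t - 1] + rng.normal(0, 2)

# Regression of Q_t on S_t, S_t - S_{t-1}, H_{t-1}, Q_{t-1}
y = Q[2:]
X = np.column_stack([S[2:], S[2:] - S[1:-1], H[1:-1], Q[1:-1]])
sse, rho, beta = hildreth_lu(y, X)
print(f"rho = {rho:.2f}; coefficients (const, S, dS, H_lag, Q_lag) = {np.round(beta, 3)}")
```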
7.5 Conclusions

This research analyzed the applicability of the PSM on seasonally aggregated data and disaggregated nonseasonally adjusted data for the key electronic parts and components products in Taiwan. We started with the examination of the PSM hypothesis to find suitable candidate products. There are 12 of 15 goods supporting the production-smoothing hypothesis regardless of the aggregation type of the data. On the other hand, only IC package, passive matrix liquid-crystal device or module, and electronic transformer violate the PSM hypothesis regarding the variances of production and sales. Although possible causes of this violation were discussed in Section 7.4.1, future research may look into this problem for a better understanding of the inventory behavior of the electronic parts and components manufacturers. Additionally, temporal aggregation of the data does not affect the test results of the PSM hypothesis in this empirical study. For those subgroups satisfying the production-smoothing hypothesis, their parameter estimations on the PSM are further analyzed. According to the findings of this study, the PSM demonstrates excellent fits for almost all product subgroups. Even though the fitness of the PSM seems not to be affected by temporal aggregation for most goods, there are some inconsistencies in the results of parameter estimations, such as sign or significance, between aggregated data and disaggregated data. However, temporal aggregation will not result in such inconsistencies for structural parameters when constant variances of residuals in the PSM are satisfied and estimates of inventory adjustment from the disaggregated data are positive. Moreover, temporal aggregation of the data appears not to cause a downward bias in the estimated speed of inventory adjustment and degree of production smoothing. Finally, only foundry wafer, diode, transistor, resistor, light-emitting diode chip, light-emitting diode, and power supplies satisfy both the PSM hypothesis and the criteria of structural parameters among the 15 electronic parts and components products. Hence, the PSM is not universally applicable in this manufacturing sector despite its intuitive appeal. Future research could further explore the reasons behind these mixed empirical results and yield a better understanding of PSM applications.
References
1. S.E. Miller (1990) Some empirical evidence for production smoothing in the agribusiness sector. Agribusiness, 6(1): 41–52.
2. A.V. Mollick (2004) Production smoothing in the Japanese vehicle industry. International Journal of Production Economics, 91: 63–74.
3. H.V.M. Peeters (1997) The (mis-)specification of production costs in production smoothing models. Economics Letters, 57: 69–77.
4. M.A. Ghali (2003) Production-planning horizon, production smoothing, and convexity of the cost functions. International Journal of Production Economics, 81–82: 67–74.
5. L. Christiano and M. Eichenbaum (1987) Temporal aggregation and structural inference in macroeconomics. Carnegie-Rochester Conference Series on Public Policy, Vol. 26. Elsevier, Amsterdam, pp 63–130.
6. M.A. Ghali (1996) Temporal aggregation and estimation of inventory functions. International Journal of Production Economics, 45: 21–27.
7. M.F. Gorman and J. Brannon (2000) Seasonality and the production-smoothing model. International Journal of Production Economics, 65: 173–178.
8. S.D. Krane and S.N. Braun (1991) Production smoothing evidence from physical-product data. The Journal of Political Economy, 99(3): 558–581.
9. M.C. Lovell (1993) Simulating the inventory cycle. Journal of Economic Behavior and Organization, 21: 147–179.
10. A. Guariglia and F. Schiantarelli (1998) Production smoothing, firms’ heterogeneity, and financial constraints: Evidence from a panel of UK firms. Oxford Economic Papers, 50(1): 63–78.
11. D. Allen (1999) Seasonal production smoothing. Federal Reserve Bank of Saint Louis Review, 81(5): 21–39.
12. R.J. Rossana (1993) The long-run implications of the production smoothing model of inventories: An empirical test. Journal of Applied Econometrics, 8: 295–306.
13. M.A. Ghali (1987) Seasonality, aggregation and the testing of the production smoothing hypothesis. American Economic Review, 77(3): 464–469.
Chapter 8
Simulations of Gear Shaving and the Tooth Contact Analysis Shinn-Liang Chang, Hung-Jeng Lin, Jia-Hung Liu and Ching-Hua Hung
Abstract Gear shaving is commonly used as a finishing process for gear manufacturing. The parameters of the shaving machine influence the precision of the shaved gear tooth profile significantly. This research develops the mathematical model of the shaved gear with longitudinal crowning considering the setting parameters of the gear-shaving machine. The additional rotating angle is also set up when the shaving cutter moves along the axis of work gear. The shaved gear’s tooth contact analysis is also studied. The results shown in this research can be beneficial to engineers in the industry of gear manufacturing and control of a gear-shaving machine.
8.1 Introduction

Gear shaving is one of the most efficient and economical processes for gear finishing after the rough cutting of hobbing or shaping. Through this process, the highest gear precision that can be achieved is DIN 6 or DIN 7. Longitudinal crowning of a gear tooth can also be accomplished by gear shaving. This type of tooth modification eliminates edge contact of meshing gear teeth and, hence, significantly increases operation cycles. There are four basic shaving methods: axial shaving, tangential shaving, diagonal shaving, and plunge shaving. To induce longitudinal crowning in tangential or plunge shaving, the cutter has to be modified several times by a grinding wheel to meet the requirements, which is time consuming and inevitably raises cost. In axial (or diagonal) shaving, however, it only requires rocking of the machine worktable by using the built-in crowning mechanism.

Shinn-Liang Chang and Hung-Jeng Lin
Department of Power Mechanical Engineering, National Formosa University, Huwei, Yunlin, Taiwan 632, ROC
Jia-Hung Liu and Ching-Hua Hung
Mechanical Engineering Department, National Chiao Tung University, Hsinchu, Taiwan 300, ROC
This research focuses on the simulation of the gear-shaving machine. The quality of the shaved gears is also examined. The results can be applied to gear shaving so that gears of high quality can be produced efficiently by simply adjusting the machine settings. The meshing between cutter and work gear in shaving can be viewed as a meshing pair of gears in a three-dimensional (3D) crossed-axis arrangement. The rotary shaving of an involute pinion was studied by Dugas [1]. Litvin [2] proposed the basic meshing conditions of a 3D crossed-axis helical gear pair. Tsay [3] studied tooth contact and stress analysis of helical gears. Miao and others [4, 5] calculated the value of the crossed-axis angle at the operating pitch circle and constructed a mathematical model of the plunge shaving cutter. In addition to this research on the fundamental theories of gear shaving, other topics have been investigated. Software for designing the shaving cutter has been provided by Kim and Kim [6], where the meshing between cutter and work gear is approximated by an equivalent 2D spur gear model. Moriwaki and coworkers [7, 8] proposed a stochastic model to predict the effect of shaving cutter performance on the finished tooth form. Hsu and Fong [9] proposed a mathematical model of the serration cutting edge of the plunge shaving cutter and its effects on the tooth profile. Chang et al. [10] proposed a novel shaving cutter with the relief portion manufactured by the hob cutter. The majority of the literature focuses on studying the shaving cutter itself and its effects on the shaved gears. However, a higher quality of shaved gear can also be achieved through research on the gear-shaving machine itself. In this research, the mathematical model of a gear generated by the gear-shaving machine is constructed, and the additional rotating angle in shaving is derived. The tooth contact analysis of the shaved gears is conducted for quality evaluation. The results can be extended into the field of controller program design in power shaving for better performance and efficiency. This research therefore can enhance the quality of shaved gears.
8.2 Mathematical Model of the Shaving Machine

A complete mathematical model of the shaved gear includes the coordinate transformations and the meshing equations. To construct the coordinate system, the representation of the shaving machine must be simplified. The crowning mechanism of the gear-shaving machine shown in Fig. 8.1 can induce longitudinal crowning on a shaved gear by rocking the worktable. In this motion, the pivot can be fed horizontally only, and the pin moves along the guide way. Once the angle θ between the guide way and the horizontal is specified (θ ≠ 0) in the shaving process, the rocking motion of the worktable can be achieved. When θ = 0, the worktable will move horizontally without rocking and produce no crowning. The crowning mechanism can be further parameterized as shown in Fig. 8.2, in which dv and dh are the vertical and horizontal distances between pin and pivot at the initial position. While the pivot (worktable) moves zt horizontally in shaving from position I to position II, the pin will move dp along the guide way.
Fig. 8.1 Crowning mechanism of gear-shaving machine
The rotating angle of the worktable ψt can be derived as shown in (8.1) [11]:

ψt = sin⁻¹[(dh sin θ − dv cos θ + zt sin θ) / √(dh² + dv²)] + sin⁻¹[dv / √(dh² + dv²)] − θ.    (8.1)
The coordinate systems of the shaving process can be simplified and illustrated as shown in Fig. 8.3. The coordinate systems Ss(Xs, Ys, Zs) and S2(X2, Y2, Z2) are connected to the shaving cutter and the work gear, respectively, while Sd(Xd, Yd, Zd) is the fixed coordinate system; Sa(Xa, Ya, Za), Sb(Xb, Yb, Zb), Sc(Xc, Yc, Zc), Se(Xe, Ye, Ze), Sf(Xf, Yf, Zf), and Si(Xi, Yi, Zi) are auxiliary coordinate systems.
Fig. 8.2 Parametric representation of crowning mechanism
Fig. 8.3 Coordinate system of shaving machine
Other parameters in Fig. 8.3 are described as follows: zt denotes the traveling distance of the shaving cutter along the axial direction of the work gear; C denotes the distance between pivot and center of work gear; γ denotes the angle between the two crossed axes; E0 represents the center distance; φs and φ2 represent the angles of rotation of cutter and gear, respectively, which are related to each other in the shaving operation. When the shaving cutter is assumed to be a helical involute gear, the surface equation rs and unit normal ns in coordinate system Ss can be represented by (8.2) and (8.3) [2], where us and νs are surface parameters:

rs = [xs(us, νs)  ys(us, νs)  zs(us, νs)]ᵀ,    (8.2)
ns = [nsx(us, νs)  nsy(us, νs)  nsz(us, νs)]ᵀ,    (8.3)
where

xs = rbs cos(νs + µs) + us cos λbs sin(νs + µs),
ys = rbs sin(νs + µs) − us cos λbs cos(νs + µs),
zs = ∓us sin λbs ± ps νs,

and

ns = (∂rs/∂us × ∂rs/∂νs) / |∂rs/∂us × ∂rs/∂νs|.

The upper sign “±” is for the right-hand helical, and the lower sign “∓” is for the left-hand helical; rbs and λbs denote the radius and lead angle of the shaving cutter base circle; ps represents the helical parameter, and µs is the profile constant. The locus equation of the shaving cutter represented in work gear coordinate system S2 is shown in (8.4):

r2(us, νs, φs, zt) = M2s(φs, zt) rs(us, νs),    (8.4)
where

M2s = ⎡ d11  d12  d13  d14 ⎤
      ⎢ d21  d22  d23  d24 ⎥
      ⎢ d31  d32  d33  d34 ⎥
      ⎣  0    0    0    1  ⎦
and

d11 = − cos φs cos φ2 cos ψt + sin φs sin φ2 cos γ − cos φ2 sin ψt sin φs sin γ,
d12 = − sin φs cos φ2 cos ψt − sin φ2 cos γ cos φs + cos φ2 sin ψt sin γ cos φs,
d13 = sin φ2 sin γ + cos φ2 sin ψt cos γ,
d14 = (C + E0) cos φ2 cos ψt − cos φ2 (zt sin ψt + C),
d21 = cos φs cos ψt sin φ2 + cos φ2 sin φs cos γ + sin φ2 sin ψt sin φs sin γ,
d22 = sin φs cos ψt sin φ2 − cos φ2 cos φs cos γ − sin φ2 sin ψt cos φs sin γ,
d23 = cos φ2 sin γ − sin φ2 sin ψt cos γ,
d24 = −(C + E0) sin φ2 cos ψt + sin φ2 (zt sin ψt + C),
d31 = sin ψt cos φs − cos ψt sin φs sin γ,
d32 = sin ψt sin φs + cos ψt sin γ cos φs,
d33 = cos ψt cos γ,
d34 = −(C + E0) sin ψt − zt cos ψt.

The gear ratio between work gear and shaving cutter remains constant as shown in (8.5):

m2s = φ2/φs = Ts/T2,    (8.5)
where Ts and T2 denote the tooth numbers of shaving cutter and work gear, respectively. When there exists a feeding motion of the shaving cutter along the axis of work gear, the relation between φ2 and φs must be modified as
φ2 = (Ts/T2) φs ± φ2s.    (8.6)
In equation (8.6), φ2s denotes the additional rotating angle, and it varies with the feeding along the axis of the work gear. The “±” sign depends on the directions of the helices and of rotation. The additional rotating angle in the crowning process is further illustrated in Fig. 8.4.
Fig. 8.4 Relationship between the work-gear movement and the additional rotating angle
In Fig. 8.4(a), as the worktable moves zt and makes an angle ψt, the solid line indicates position I, while the dashed line indicates position II. C denotes the distance between the pivot and the center of the work gear, and rp2 represents the radius of the pitch circle. Suppose a is the point of cutting action when the worktable is located at position I; as the worktable moves from position I to position II, the moving distance of a is aa′. The representations can be derived and shown as (8.7) and (8.8):

r′p2 = (C + rp2) cos ψt + zt sin ψt − C,    (8.7)
aa′ = (C + rp2) sin ψt + zt cos ψt.    (8.8)
The additional rotating angle, illustrated in Fig. 8.4(b), can then be obtained as (8.9):

φ2s = [(C + rp2) sin ψt + zt cos ψt] tan βp2 / [(C + rp2) cos ψt + zt sin ψt − C].    (8.9)
Table 8.1 Parameters of work gear and shaving cutter for illustration of tooth crowning with crowning mechanism

Work gear
  Number of teeth (T2): 36
  Normal module on pitch circle (mpn2): 2.65
  Normal circular tooth thickness on pitch circle (spn2): 4.858 mm
  Normal pressure angle on pitch circle (αpn2): 20°
  Helical angle on pitch circle (βp2): 10° LH
  Outer diameter: 105.9 mm
  Face width: 28.4 mm

Shaving cutter
  Number of teeth (Ts): 73
  Helical angle on pitch circle (βps): 22° RH
  Face width: 25.4 mm
  Normal circular tooth thickness on pitch circle (spns): 3.348 mm
In the axial shaving, there are two major kinematical parameters, φs and zt , so that two meshing equations are needed to calculate the enveloping surface of the work gear [2], which are listed as equations (8.10) and (8.11).
f1(us, νs, φs, zt) = n2 · ∂r2/∂φs = 0  (zt constant),    (8.10)
f2(us, νs, φs, zt) = n2 · ∂r2/∂zt = 0  (φs constant),    (8.11)

where

n2 = L2s ns = [n2x  n2y  n2z]ᵀ,
in which L2s is the 3 × 3 matrix obtained by deleting the last row and column of matrix M2s. The tooth surface of a shaved gear with longitudinal crowning can be obtained by considering equations (8.2) to (8.4), (8.6), (8.10), and (8.11) simultaneously. The following example is provided to illustrate the derived equations shown above. The basic data of the shaving cutter and work gear are shown in Table 8.1, and the important setting parameters of the shaving machine are shown in Table 8.2.
Table 8.2 Machine setting parameters of gear-shaving machine for illustration of tooth crowning with crowning mechanism
  Angle between guide way and horizontal (θ): 2°50′
  Vertical distance between pivot and pin (dv): 188 mm
  Horizontal distance between pivot and pin (dh): 545 mm
  Distance between pivot and center of work (C): 385 mm
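As a purely numerical illustration, the sketch below evaluates the worktable rotation for the settings of Table 8.2 using the form of equation (8.1) reconstructed above; the sign conventions and the sample values of zt are assumptions, so the numbers are indicative only.

```python
import math

def worktable_angle(z_t, theta, d_v, d_h):
    # Reconstructed form of (8.1): rotation of the worktable when the pivot is fed
    # z_t horizontally and the pin follows a guide way inclined at angle theta.
    r = math.hypot(d_h, d_v)
    return (math.asin((d_h * math.sin(theta) - d_v * math.cos(theta) + z_t * math.sin(theta)) / r)
            + math.asin(d_v / r) - theta)

theta = math.radians(2 + 50 / 60)            # 2 deg 50 min, Table 8.2
for z_t in (0.0, 5.0, 10.0):                 # illustrative travel values in mm
    psi_t = worktable_angle(z_t, theta, d_v=188.0, d_h=545.0)
    print(f"z_t = {z_t:4.1f} mm  ->  psi_t = {math.degrees(psi_t):.4f} deg")
```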
Fig. 8.5 Tooth surface of work gear (no crowning) from Z2 = 0 mm to Z2 = 10 mm
Substituting the parameters into the derived equations and solving by numerical methods, the tooth surface from Z2 = 0 mm to Z2 = 10 mm is obtained as shown in Fig. 8.5 when θ = 0 (no crowning). To induce longitudinal crowning of a gear tooth, a value of θ other than zero must be given; the larger the value of θ, the more obvious the crowning effect. By setting θ = 2°50′, the result is shown in Fig. 8.6. The theoretical tooth is in the middle, and the crowning effect is enlarged on the right-hand and left-hand sides, respectively. The values of longitudinal crowning are also indicated.
Fig. 8.6 Longitudinal crowning of the shaved gear (scale of crowning 600:1)
Further confirmation by tooth contact analysis, which is presented in the next section, is needed to ensure the quality of the shaved gears.
8.3 Tooth Contact Analysis of the Shaved Gear

To evaluate the transmission quality of the shaved gear, tooth contact analysis (TCA) is adopted to investigate the kinematic error caused by assembly errors between the mating gear pair. The coordinate systems are constructed as shown in Fig. 8.7. Tooth contact of the gears is simulated by expressing both tooth surfaces in the coordinate system Sq(Xq, Yq, Zq), as shown in Fig. 8.7. Ideal meshing of two mating gears (gear 2 and gear 4) satisfies (8.13) and (8.14) [2]:

rq^(2) = rq^(4),    (8.13)
nq^(2) = nq^(4).    (8.14)
Fig. 8.7 Coordinate system of the meshing gear pair for TCA
Table 8.3 Parameters of the meshing gear pair in Example 1

Items             Gear 2 (with crowning)   Gear 4 (no crowning)
Normal module     2.65                     2.65
Number of teeth   36                       36
Helical angle     10° LH                   10° RH
Pressure angle    20°                      20°
Equation (8.13) expresses that tooth surfaces ∑2 and ∑4 have the common contact point (or line) determined by position vectors rq^(2) and rq^(4). Equation (8.14) indicates that the mating gear surfaces ∑2 and ∑4 have the common unit normal at their contact point (or line). There are only five independent equations that can be derived from (8.13) and (8.14) since |nq^(2)| = |nq^(4)| = 1. Coordinate systems Sh(Xh, Yh, Zh) and Sv(Xv, Yv, Zv) in Fig. 8.7 are constructed to simulate the center distance variation ∆E and the horizontal and vertical axis misalignments ∆γh and ∆γv that may appear in assembling. Sp(Xp, Yp, Zp) and Sq(Xq, Yq, Zq) are the fixed coordinate systems. φ2 and φ4 are the rotating angles of gears 2 and 4, respectively. Under the ideal meshing condition without assembly errors, φ4 = (T2/T4) φ2, where T2 and T4 denote the tooth numbers of gears 2 and 4. However, when errors are considered, it becomes necessary to solve (8.13) and (8.14) for the nonlinear function φ4(φ2) rather than directly substituting φ2 to obtain φ4. The kinematic error between gears 2 and 4 is defined by equation (8.15):

∆φ4(φ2) = φ4(φ2) − (T2/T4) φ2.    (8.15)
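As a small illustration of equation (8.15) only, and not of the TCA solver itself, the sketch below converts a motion curve φ4(φ2), assumed to come from numerically solving (8.13) and (8.14), into kinematic error in arc-seconds; the deviation used for φ4 here is a made-up stand-in.

```python
import numpy as np

def kinematic_error_arcsec(phi2_deg, phi4_deg, T2, T4):
    # Equation (8.15): deviation of gear 4 from the ideal ratio T2/T4, in arc-seconds
    err_deg = np.asarray(phi4_deg) - (T2 / T4) * np.asarray(phi2_deg)
    return err_deg * 3600.0

phi2 = np.linspace(-10.0, 10.0, 21)                  # input angle of gear 2, degrees
phi4 = phi2 + 4e-4 * np.abs(phi2)                    # stand-in for a solved phi4(phi2) curve
print(np.round(kinematic_error_arcsec(phi2, phi4, T2=36, T4=36), 2))
```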
An example is provided below to illustrate the calculation of the kinematic error.

Example 1. Parameters of a meshing gear pair are selected as shown in Table 8.3 to study the kinematic error. Gear 2 is shaved by the shaving machine operated under the conditions shown in Table 8.2, while gear 4 is without crowning. The kinematic error is investigated under the following five conditions:

Condition 1. The ideal condition, i.e., ∆E = 0 mm and ∆γh = ∆γv = 0°.
Condition 2. With center distance error, ∆E = 1 mm.
Condition 3. With horizontal axis misalignment only, i.e., ∆E = 0 mm, ∆γh = 0.02°, and ∆γv = 0°.
Condition 4. With vertical axis misalignment only, i.e., ∆E = 0 mm, ∆γh = 0°, and ∆γv = 0.02°.
Condition 5. With both vertical and horizontal axis misalignments, i.e., ∆E = 0 mm and ∆γh = ∆γv = 0.02°.

The result of the TCA is shown in Fig. 8.8. It is observed that center distance variation produces no kinematic error, which is the advantage of an involute gear pair.
Fig. 8.8 Distribution of kinematic error from Example 1 (kinematic error in arc-seconds versus input angle in degrees, Conditions 1–5)
However, axis misalignment causes significant kinematic error, and it becomes more severe when only the horizontal misalignment is present.
8.4 Longitudinal Tooth Crowning Introduced by Litvin

Another way to produce gears with longitudinal crowning by shaving is to feed the work gear radially and axially at the same time [12]. As shown in Fig. 8.9, ψ1 and ψs denote the rotating angles of the work gear and shaving cutter, respectively. Es1 is the center distance between cutter and gear. In shaving, the work gear is fed ℓp axially and ∆Sp1 radially. With the axial feeding ℓp, the additional rotating angle ψp is represented as shown in equation (8.16):
ψp = ℓp / p1,    (8.16)

where the helical parameter p1 is defined as p1 = rp1 tan λp1; rp1 denotes the pitch radius, and λp1 denotes the lead angle of the work gear. ℓp and Es1 are related by

Es1 = Es1^(0) − ∆Sp1 = (ro1 + ros) − ap1 ℓp²,    (8.17)
where Es1^(0) denotes the initial center distance; ro1 and ros denote the operating pitch radii of the work gear and shaving cutter, respectively; and ap1 denotes the parabolic coefficient that controls the amount of tooth crowning.
Fig. 8.9 Shaving operation with longitudinal tooth crowning proposed by Litvin [12]
In this way, no crowning mechanism is needed; however, coordination between ℓp and Es1 is necessary instead, and this must be implemented on a CNC shaving machine. If ap1 = 0, the center distance Es1 will remain constant in the shaving operation, and hence no tooth crowning is produced. To obtain the tooth surface of the shaved gear, following a process similar to that described above, the coordinate systems are constructed as shown in Fig. 8.10. The locus equation of the shaving cutter represented in work gear coordinate system S1(X1, Y1, Z1) is shown as (8.18):

r1 = M1s · rs,    (8.18)

where M1s = M1w Mwo Mou Muj Mjs. The relation between the cutter and gear rotating angles is represented by
ψ1 = (Ts/T1) ψs ± ψp.    (8.19)
The meshing equations are shown as equations (8.20) and (8.21):
f1(us, νs, ψs, ℓp) = N1 · ∂r1/∂ψs = 0  (ℓp constant),    (8.20)
f2(us, νs, ψs, ℓp) = N1 · ∂r1/∂ℓp = 0  (ψs constant).    (8.21)
By considering (8.16) to (8.21) based on the parameters listed in Table 8.4, the crowned tooth surface of a work gear can be obtained and is shown in Fig. 8.11, which indicates the amounts of crowning at specified positions. With or without crowning mechanism, longitudinal tooth crowning can be induced in both ways, and the selection depends on which type of machine is used for implementation.
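To make the required coordination between ℓp and Es1 concrete, the following minimal sketch (not the authors' implementation) tabulates the radial correction ∆Sp1 = ap1·ℓp² from (8.17) and the additional rotation ψp = ℓp/p1 from (8.16); the pitch radius of the work gear and the relation λp1 = 90° − βp1 used to obtain p1 are assumptions based on Table 8.1.

```python
import math

def crowning_feed_table(a_p1, r_p1, beta_p1_deg, l_values):
    # p1 = r_p1 * tan(lambda_p1), with the lead angle taken as 90 deg minus the helix angle
    lam_p1 = math.radians(90.0 - beta_p1_deg)
    p1 = r_p1 * math.tan(lam_p1)
    rows = []
    for lp in l_values:
        dS_p1 = a_p1 * lp ** 2          # radial correction of the center distance, (8.17)
        psi_p = lp / p1                 # additional rotating angle of the work gear, (8.16)
        rows.append((lp, dS_p1, math.degrees(psi_p)))
    return rows

# a_p1 from Table 8.4; r_p1 ~ 48.43 mm and beta_p1 = 10 deg assumed from the Table 8.1 work gear
for lp, dS, psi in crowning_feed_table(a_p1=6.5e-5, r_p1=48.43, beta_p1_deg=10.0,
                                       l_values=[-10, -5, 0, 5, 10]):
    print(f"l_p = {lp:6.1f} mm   dS_p1 = {dS:8.5f} mm   psi_p = {psi:7.4f} deg")
```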
Fig. 8.10 Simplified coordinate systems of Fig. 8.9
An example is provided below to show the results of TCA for the gear pair derived in this section.

Example 2. The basic data of the gear pair are listed in Table 8.3, with the machine settings listed in Table 8.4. The same five conditions as those in Example 1 are analyzed, and the results are shown in Fig. 8.12. Under Condition 3, namely, when only horizontal axis misalignment is present, the most severe kinematic error is caused, which is the same as in Example 1.
Table 8.4 Parameters of gear, shaving cutter, and machine for illustration of longitudinal tooth crowning proposed by Litvin [12]

Work gear
  Tooth number (T1): 36
  Normal module on pitch circle (mpn1): 2.65
  Normal circular tooth thickness on pitch circle (spn1): 4.858 mm
  Normal pressure angle on pitch circle (αpn1): 20°
  Helical angle on pitch circle (βp1): 10° L.H.
  Outer diameter: 105.9 mm
  Face width: 28.4 mm

Shaving cutter
  Tooth number (Ts): 73
  Helical angle on pitch circle (βps): 22° R.H.
  Face width: 25.4 mm
  Normal circular tooth thickness on pitch circle (spns): 3.348 mm

Machine settings
  Parabolic coefficient (ap1): 6.5 × 10⁻⁵
  Initial center distance (Es1^(0)): 152.593 mm
  Cross-axis angle (γs): 11.98°
However, the kinematic error varies with the different distribution of longitudinal tooth crowning. Comparison with Fig. 8.8 shows that linear kinematic error functions are produced in both cases. Therefore, it is verified that a parabolic kinematic error function, which provides a better quality of transmission, can be achieved only by profile crowning, not by longitudinal crowning. Longitudinal crowning can be used to eliminate edge contact, and by carefully selecting values for the machine setting parameters, the kinematic error can be locally improved.
Fig. 8.11 Longitudinal tooth crowning introduced by Litvin [12] (scale of crowning 600:1)
Fig. 8.12 Distribution of kinematic error from Example 2 (kinematic error in arc-seconds versus input angle in degrees, Conditions 1–5)
8.5 Conclusion

In this research, the axial shaving process of the gear-shaving machine is simulated, and the tooth surface of the shaved gear is constructed. The longitudinal crowning induced by the crowning mechanism and that induced by CNC control of the shaving machine are illustrated and compared. TCA has also been investigated in this research. The results of this research can be used as guidelines for the simulation and design of a gear-shaving machine.
References
1. J.P. Dugas (1996) Gear shaving basics—Part I. Gear Technology, May/June: 26–30.
2. F.L. Litvin (1994) Gear geometry and applied theory. PTR Prentice Hall, NJ.
3. C.B. Tsay (1988) Helical gears with involute shaped teeth: Geometry, computer simulation, tooth contact analysis, stress analysis. Journal of Mechanisms, Transmission, and Automation in Design, 110: 482–491.
4. H.C. Miao and H. Koga (1996) Design and analysis of plunge shaving for finishing gears with tooth profile modifications. ASME Power Transmission and Gearing Conference, DE, 88: 275–281.
5. H. Koga, K. Umezawa, and H.C. Miao (1996) Analysis of shaving processing for helical gears with tooth modification. ASME Power Transmission and Gearing Conference, DE, 88: 265–273.
6. J.D. Kim and D.S. Kim (1996) The development of software for shaving cutter design. Journal of Materials Processing Technology, 59: 359–366.
7. I. Moriwaki and M. Fujita (1994) Effect of cutter performance on finished tooth form in gear shaving. Journal of Mechanical Design, Transactions of the ASME, 116(3): 701–705.
8. I. Moriwaki, T. Okamoto, M. Fujita, and T. Yanagimoto (1990) Numerical analysis of tooth forms of shaved gears. JSME, International Journal Series III, 33(4): 608–613.
9. R.H. Hsu and Z.H. Fong (2006) Theoretical and practical investigations regarding the influence of the Serration's geometry and position on the tooth surface roughness by shaving with plunge gear cutter. Journal of Mechanical Engineering Science, Proceedings of the Institution of Mechanical Engineers, Part C, 220(2): 223–242.
10. S.L. Chang, U.D. Wu, J.K. Hsieh, C.H. Tseng, S.D. Cheng, and K.R. Chang (2006) The influence of Serration on a shaving cutter with a pre-designed relief portion. Materials Science Forum, 505–507: 961–966.
11. H.J. Lin (2006) Simulation of gear shaving machines and gear tooth contact analysis. Master thesis, National Formosa University (in Chinese).
12. F.L. Litvin, Q. Fan, D. Vecchiato, A. Dememego, R.F. Handschuh, and T.M. Sep (2001) Computerized generation and simulation of meshing of modified spur and helical gears manufactured by shaving. Computer Methods in Applied Mechanics and Engineering, 190: 5037–5055.
Chapter 9
On Aggregative Methods of Supplier Assessment Vladimír Modrák
Abstract This chapter focuses on the field of decision-making processes in connection to the supplier assessment in a logistics chain. This chapter also describes the meaning of supplier assessment, through which companies can minimize their production costs that arise while waiting for missing parts in a manufacturing process. The substance of the work is the description of a method developed for the combined supplier assessment based on the evaluation of the quality of supplied products, adherence to time schedules, and delivery of the agreed quantity. Finally, in this work the benefits of the assessment method used are mentioned. Keywords: Supplier assessment · evaluation criteria · supplier selection · logistics · schedule of quantity · time schedules
9.1 Introduction

The dynamic character of the current business environment forces managers to examine and control their relationships with suppliers in the context of complex supply chains. A supply chain can be characterized as a network of facilities and distribution options that performs the functions of procurement of materials, transformation of these materials into intermediate and finished products, and the distribution of these finished products to customers. Its importance as a research theme arose with the appearance of so-called demand-driven manufacturing and distribution strategies. In this context the main role of supply-chain management (SCM) is to effectively and efficiently manage the material flow in the supply chain. Obviously, supply-chain management can be defined from a number of angles.

Vladimír Modrák
Technical University of Košice, Faculty of Manufacturing Technologies, Bayerova 1, 080 01 Prešov, Slovakia
Considering the objectives of this article, the relevant one is the definition according to which supply-chain management emphasizes the overall and long-term benefit to all parties in the chain through co-operation and information sharing (Gunasekaran and Ngai [1]). One of the crucial functions of SCM is supplier relationship management (SRM), the aim of which is to reduce direct and indirect costs, improve quality, and speed responsiveness to customers. Supplier relationship management focuses generally on the coordination of all interactions with a supply base, such as contract negotiation, supplier assessment, and sourcing. Supplier assessment has become increasingly important for manufacturing companies due to the need to obtain real competitive advantages, whilst failure in supplier selection can lead to significant operational and financial consequences (Yang and Xu [2]). The performance of suppliers can be assessed from different viewpoints. This work will present the following methods of supplier assessment:

1. Approaches to supplier assessment according to the adherence to time schedules will be investigated. These assessments, along with an assessment of product quality, are ordinarily required for meeting the quality management system standards in the car industry.
2. The main emphasis is put on the proposed supplier assessment procedure in terms of the adherence to quantity stipulations, which extends the options of supplier assessment and thereby allows a more comprehensive assessment of suppliers' performance and reliability.

The chapter is organized as follows. Section 9.2 describes the wider literature context on supply-chain collaboration and the meaning of supplier management. Section 9.3 discusses issues related to specific aspects of supplier assessment and selection criteria. Section 9.4 constitutes the focal part of the chapter and presents alternative techniques of supplier assessment. In the final section of this work the economic effects are summarized and conclusions about the assessment method used in this study are stated.
9.2 Research Background and Motivation

In the framework of the process of semi-finished product and subcomponent procurement, probably the most important decision is the selection from a range of potential suppliers that are competent to provide the required material and/or service. According to Elram [3], the procurement process in which the management of relations with suppliers takes place can be divided into: the preparatory phase (usually the creation of a decision-making unit), identification of potential suppliers, examination and selection of suppliers, establishment of relationships, and the evaluation of relationships. The third of these phases is the focus of this study. When making selection decisions, the team in charge considers a wide range of criteria. The criteria include the overall time of supply, timing of supplies,
ability to speed up supplies, competitive price, and after-sale support (Dobler and Burt [4]). As a result of an increasing interest in enhancing productivity and decreasing costs, company management has become more interested in the SRM functions and has started to seek closer relations with suppliers (Spekman [5]). Related to this statement, another definition states that a supply chain (SC) is a network of mutually interconnected companies that jointly participate in fulfilling promises to the final customers (Mentzer et al. [6]). In this connection it is symptomatic that companies with a lower performance are often kept out of the integrated supply chain (Bask and Juga [7]). Perry et al. [8] emphasize that there is a need for new organizational paradigms to cater to the increasingly complex SC. In relation to that, Christopher and Towill [9] describe the role of “lean” and “agile” paradigms in the modern SC; i.e., the two approaches can complement each other, and in many cases there is a requirement for a hybrid lean–agile strategy to be adopted. Helper [10] stresses that a sustainable supply-chain strategy should pay equal attention to downstream as well as upstream activities. As a result, companies are turning their attention to inbound logistics, since they have observed that extracting additional benefit from the modernization of outbound operations becomes more and more difficult. According to Olin et al. [11], the lagging behind of inbound logistics is caused by horizontal fusions of manufacturers into large portfolio-oriented companies focusing on assembly and marketing while reducing their in-house development and manufacturing activities in favor of a multi-tier supplier base. A natural precondition for successful SCM is trust and satisfaction among the parties involved. Spekman et al. [12] point to the fact that, in recent history, business partners maintained their distance from each other due to a lack of trust, showing little willingness to share internal information about their own companies. Such partner relationships are analyzed in more detail, for example, by Saunders [13], who considers them to be the decisive factor in the success of SCM. According to Rinehart et al. [14], the number of prior transactions with a supplier has previously been used as a proxy for trust. Related to that issue, Gulati [15] stated the common attitude that if a buyer's experience with a supplier were adverse, he would typically not do business with that supplier in the future, provided that the supplier is not the only option for the buyer. The so-called Japanese-oriented supplier management, which encouraged firms to form relational contracts with a limited number of suppliers to foster innovation as well as improved quality, played an important role in the development of buyer–supplier relations. The idea of the approach presented in this chapter has also been motivated by field studies of buyer–supplier relations, which suggest that in many cases the shift to fewer suppliers is advantageous for product quality and cost reasons. Descriptions of Japanese-oriented supplier management practices in Western contexts have focused on reducing the costs and risks of relational contracts (Hollingsworth and Boyer [16]). Each material flow can work only if the information arrives at the right place, at the right time, and is reliable, precise, and correct. On the other hand, there are
many unexpected and unpredictable disruptions that cause information chaos in a supply chain, the existence of which hinders making optimal decisions at each stage in the supply chain (Christopher and Lee [17]).
9.3 The Importance of Supplier Assessment and Selection

It is known that in terms of minimizing working capital and the related direct costs of keeping stock, it is important that the level of stock in the firm should be as low as possible. For that reason it is essential to manage inventory so that the supplies with the minimal storage period form most of the stock and, vice versa, the supplies that are in stock for a longer period have the lowest share. Material planners know this theoretical side of stock keeping. But they also have their own practical experience, which shows that not all suppliers are able to fulfill their obligations thoroughly and that there is a risk of delays or losses during transport, etc. All supplier and transport risks can threaten the operation of the firm and disrupt its production. The potential impacts of such risks can be partially decreased by

• The creation of a safety stock quantity specified for each critical type of material and used for bridging such faults in supplying that are caused by a supplier's lack of discipline or by delays or losses during transport.
• Ordering more material than the schedule of needs, which may cause an increased amount of stock and thereby contribute to excessive working capital.

Due to the real threat of such risks, a firm should evaluate suppliers and lead them toward fulfilling their supplies at the specified time, in the specified quantity, and to the specified quality. The underestimation of this problem usually has a negative impact on the economic situation of the firm, either by tying up excessive capital in stock or by downtime in production, which endangers the competitiveness of the firm. A precondition for decision making in the selection of suppliers is to choose from a number of alternatives. During the assessment of a supplier, the most important aspect is his performance, not just in supplying but also the performance of the whole firm in a comprehensive sense. For that reason, it is beneficial to have general information as well as specific data about the firm's performance. The quality of the assessment of suppliers in general depends on the quantity of relevant information and the criteria used for the selection. What criteria can one specify in the supplier assessment? For example, they can be

• Delivery terms and conditions (payment period and other)
• Reliability of supplies (quantities, assortment, and delivery limits)
• Available capacity
• Types of packaging
• Packaging units
• Geographical distances
• Degree of importance of the customer's requirements
• Degree of flexibility of the supplier to changes in supply conditions
• Degree of viability of the customer's requirements
• Price and quality of products

Table 9.1 Supplier assessment oriented on their products and delivery reliability

Quality of products
  5 points, very good: First class quality
  4 points, good: Better than the specified requirements
  3 points, neutral: Complies with the specified requirements
  2 points, provisory: Partly below the specified requirements
  1 point, insufficient: Does not comply with the specified requirements

Price competitiveness
  5 points, very good: More than 5% under the average price
  4 points, good: Less than 5% under the average price
  3 points, neutral: Equivalent to the average price
  2 points, provisory: Less than 5% over the average price
  1 point, insufficient: More than 5% over the average price

Adherence to time schedules
  5 points, very good: More than 10% under the average time schedules
  4 points, good: Less than 10% under the average time schedules
  3 points, neutral: Complies with the average time schedules
  2 points, provisory: Less than 10% over the average time schedules
  1 point, insufficient: More than 10% over the average time schedules
Since criteria are often characterized by different expressions (one is positive; another negative), in the aggregated method of assessment they should relate to a common comparable base. When that condition is met, appraised points are usually allocated to the individual criteria according to possible states of fulfillment (see the example in Tables 9.1 or 9.2).

Table 9.2 Supplier assessment focused on their overall reliability

Securing the quality of deliveries
  5 points, very good: Better than specified requirements
  4 points, good: Partly better than specified requirements
  3 points, neutral: Deliveries comply with the specified requirements
  2 points, provisory: Deliveries are partly under the specified requirements
  1 point, insufficient: Failure of quality

Adherence to time schedules
  5 points, very good: Time schedules were met
  4 points, good: Deliveries are ahead of time less than 1 week
  3 points, neutral: Deliveries are either delayed more than 2 days or are ahead of time more than 1 week
  2 points, provisory: Deliveries are delayed more than 5 days
  1 point, insufficient: Deliveries are delayed more than 10 days

Respecting schedule of quantity
  5 points, very good: Schedules of quantity were met
  4 points, good: Schedules of quantity are exceeded by less than 5%
  3 points, neutral: Either the quantity unfulfilled is less than 5% or the quantity is exceeded by more than 5%
  2 points, provisory: Quantity unfulfilled is more than 5%
  1 point, insufficient: Quantity unfulfilled is more than 10%
Table 9.3 Supplier categories by assessment indicator
Evaluation number   Supply class
97–100              A
91–96               B
0–90                C
Then the overall aggregated assessment may be obtained, for example, in the following way: the assessment of an individual supply is calculated by multiplying the point value of each criterion by an individual weight. Following Tables 9.1 or 9.2, let the total value of the weights of the three criteria be 20. Then the sum of the products of the maximum points and the individual weights equals 100. The summary supplier assessment indicator can be calculated as the average of the values of the individual supplies, on the basis of which we allocate suppliers into performance groups, for instance, according to Table 9.3 (a small computational sketch of this scoring is given after the list below). Understandably, in the complex supplier assessment process it is necessary to start from the company's supply-chain strategy by applying supply management tools and by taking into account the economic effectiveness of the whole project. The supply management tools include all available techniques for the optimization of supplies in terms of the firm's own economic interests and the credibility of its customers. They include the following components:

• Deciding on prices and delivery terms and the purchased quantity for the scheduled period
• Quality policy, implemented through a quality management system
• Selection of suppliers in terms of size, number, and location
• Specification of secondary performance and additional services provided by third-party logistics (3PL) providers and/or 4PL providers
• Advertising and promotion policy—use of promotion and advertising in public relations
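The sketch below illustrates the scoring just described; the split of the total weight of 20 among the three criteria is an assumption (only the total is given above), and the point values are made-up examples.

```python
def supply_score(points, weights):
    # Points per criterion (1-5, as in Tables 9.1 or 9.2) times weights; weights sum to 20,
    # so the maximum attainable score is 100.
    return sum(points[c] * weights[c] for c in weights)

def supplier_class(indicator):
    # Table 9.3: A for 97-100, B for 91-96, C for 0-90
    return "A" if indicator >= 97 else "B" if indicator >= 91 else "C"

weights = {"quality": 10, "time": 6, "quantity": 4}      # assumed split of the total weight 20
supplies = [
    {"quality": 5, "time": 5, "quantity": 4},
    {"quality": 5, "time": 4, "quantity": 5},
    {"quality": 4, "time": 5, "quantity": 5},
]
scores = [supply_score(s, weights) for s in supplies]
indicator = sum(scores) / len(scores)
print(scores, round(indicator, 1), supplier_class(indicator))   # e.g. [96, 94, 90] 93.3 B
```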
9.4 Alternative Techniques of Supplier Assessment

A proposed approach to the alternative aggregative assessment of suppliers is based on the following main criteria:

• Quality of supplied products
• Adherence to time schedules
• Adherence to quantity stipulations
Table 9.4 Allocation of evaluation factor according to ppm value
Class   Value (ppm)          Factor
1       0–50                 1
2       51–100               5
3       101–500              10
4       501–1000             15
5       1001–5000            20
6       5001–10,000          30
7       10,001–50,000        40
8       50,001–100,000       50
9       100,001–500,000      70
10      500,001–1,000,000    100
9.4.1 Assessment of the Quality of Supplied Products

Naturally, there are several common approaches to the assessment of the quality of supplied products. In the proposed aggregative method, for the first criterion, quality of supplied products, the principle of the number of defective parts per million (ppm) is used. The procedure of this method is described, for example, in the work of Hahn et al. [18]. The procedure consists of categorizing individual supplies, based on their calculated ppm value, into the classes for which evaluation factors have been determined (see Table 9.4). The so-called quality number (QN) is given by the formula QN = 101 − ΦF, where ΦF is the average value of the evaluation factors of the individual supplies over a monitored period. Then, based on QN, suppliers are assigned to one of the three performance groups A, B, or C. An example of category ranges for specification of the performance groups is shown in Table 9.5.
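A minimal sketch of the quality-number calculation follows; it assumes the factor mapping of Table 9.4 and the category ranges of Table 9.5, and the ppm values are invented for illustration.

```python
PPM_FACTORS = [  # Table 9.4: (upper ppm bound of the class, evaluation factor)
    (50, 1), (100, 5), (500, 10), (1000, 15), (5000, 20),
    (10_000, 30), (50_000, 40), (100_000, 50), (500_000, 70), (1_000_000, 100),
]

def ppm_factor(ppm):
    for upper, factor in PPM_FACTORS:
        if ppm <= upper:
            return factor
    return 100

def quality_number(ppm_values):
    # QN = 101 - mean evaluation factor over the monitored period
    return 101 - sum(ppm_factor(p) for p in ppm_values) / len(ppm_values)

qn = quality_number([30, 120, 40, 0, 700])                      # hypothetical supplies
grade = "A" if qn >= 97 else "B" if qn >= 91 else "C"           # Table 9.5 ranges
print(round(qn, 1), grade)                                      # 95.4 B
```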
Table 9.5 Supplier categories by assessment of the quality of supplied products
Supply class   Quality number
A              97–100
B              91–96
C              0–90

9.4.2 Assessment by Adherence to Time Schedules

One of the conditions for obtaining internationally recognized quality management system (QMS) certificates is the supplier assessment in terms of adherence to time schedules. That fact, as well as the general interest of companies' management in monitoring the reliability of suppliers, has led to the specification of many effective methods for supplier assessment by this common criterion. Three methods are described in the following sections.

Table 9.6 Allocation of evaluation factor according to a delivery nature
Supply class   Delivery nature   Deviation from the delivery dates (d)   Evaluation factor fi
1              Earlier           > −3                                    20
2              Earlier           −2, −3                                  10
3              Earlier           −1                                      5
4              On schedule       0                                       1
5              Later             1                                       10
6              Later             2                                       20
7              Later             3, 4                                    30
8              Later             5, 7                                    50
9              Later             8                                       70
10             Later             > 8                                     100
9.4.2.1 Assessment by Evaluation Number

The evaluation number (EN) is utilized for the classification of suppliers into category A, B, or C based on their assessment in terms of adherence to the time schedule. The evaluation number can be calculated by the formula

EN = 101 − Σ(ti · fi)/n,    (9.1)

where
ti = unit values of supply classes, with i ∈ ⟨1, 10⟩,
fi = supply class evaluation factors,
n = total number of supplies.

The method of allocation of the evaluation factor is contained in Table 9.6. Then, based on EN, suppliers are classified into one of the three performance categories A, B, or C in compliance with Table 9.7.
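The sketch below illustrates equation (9.1) with the factors of Table 9.6; the exact interpretation of the class boundaries (e.g., "> −3" as more than three days early, "5, 7" as five to seven days late) is an assumption, and the deviations are invented.

```python
def delivery_factor(deviation_days):
    # Table 9.6: deviation from the delivery date (negative = early) -> evaluation factor
    if deviation_days < -3:   return 20
    if deviation_days <= -2:  return 10
    if deviation_days == -1:  return 5
    if deviation_days == 0:   return 1
    if deviation_days == 1:   return 10
    if deviation_days == 2:   return 20
    if deviation_days <= 4:   return 30
    if deviation_days <= 7:   return 50
    if deviation_days == 8:   return 70
    return 100

def evaluation_number(deviations):
    # Equation (9.1): EN = 101 - sum(t_i * f_i) / n, i.e. 101 minus the mean factor
    return 101 - sum(delivery_factor(d) for d in deviations) / len(deviations)

print(round(evaluation_number([0, 0, 1, -1, 3, 0]), 1))   # hypothetical deviations in days -> 93.0
```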
9.4.2.2 Assessment by Determinative Number

In this section, we describe an alternative method of assessing the reliability of adherence to time schedules that is more sensitive in distinguishing between systemic factors and random ones.
Table 9.7 Aggregative supplier assessment form

1. Assessment of the quality of supplied products
   Evaluation scope: C (0–90 points), B (91–96), A (97–100)
   According to ppm, it is in the scope:
2. Assessment by adherence to time schedules
   Evaluation scope: C (0–90 points), B (91–96), A (97–100)
   According to adherence to time schedules, it is in the scope:
3. Assessment by adherence to quantity stipulations
   Evaluation scope: C (2.1–5 points), B (1.41–2), A (1–1.4)
   According to adherence to quantity stipulations, it is in the scope:
4. Total assessment:
Systemic factors, which are of a more-or-less chronic character and are characterized by repeated plus or minus values, allow the identification of the reasons for unreliability as well as the estimation of deviations from the delivery dates. The determinative number (DN), used as an indicator for classifying suppliers into categories, is calculated in the following way. The value ENe for early and on-schedule supplies and the value ENb for belated and on-schedule supplies are calculated separately according to formulas (9.2) and (9.3) and Table 9.8:

ENe = 101 − ΦENe,    (9.2)
ENb = 100 − ΦENb,    (9.3)

where ΦENe is the arithmetical mean value of the evaluation factors of the early and on-schedule supplies during a monitored period, and ΦENb is the arithmetical mean value of the evaluation factors of the belated supplies during the same period. Then the DN can be calculated by the formula

DN = min{ENe, ENb}.    (9.4)
Subsequently, based on the determinative number, the suppliers are evaluated according to the categorization used in Table 9.7. In the case where a supplier is evaluated as B or C, he is informed accordingly so that he can react and learn the reason for being included in group B or C. In the interest of improving his own evaluation, after learning the shortcomings, the supplier should take measures to remove them. If the supplier does not react to the information and continues to show no interest in the removal of his shortcomings, the information about such an unreliable supplier is passed on for decision making at a higher level of management.
Table 9.8 Allocation of evaluation factor according to the deviation from the delivery date

Earlier and on-schedule supplies              Belated supplies
Difference of adherence (d)   Factor fi       Difference of adherence (d)   Factor fi
0                             1               1                             10
1                             5               2                             20
2–3                           10              3                             30
4–5                           20              4–5                           50
6–7                           30              6–7                           60
8–9                           50              8–9                           80
>9                            70              >9                            100
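A corresponding sketch for the determinative number follows, using the factors of Table 9.8; how on-schedule deliveries are split between the two groups, and the reading of the class boundaries, are simplifying assumptions.

```python
def factor_early(d):
    # Table 9.8, earlier and on-schedule supplies (d = days ahead of schedule)
    for limit, f in [(0, 1), (1, 5), (3, 10), (5, 20), (7, 30), (9, 50)]:
        if d <= limit:
            return f
    return 70

def factor_late(d):
    # Table 9.8, belated supplies (d = days of delay)
    for limit, f in [(1, 10), (2, 20), (3, 30), (5, 50), (7, 60), (9, 80)]:
        if d <= limit:
            return f
    return 100

def determinative_number(early_days, late_days):
    # (9.2)-(9.4): EN_e and EN_b from the mean factors, DN = min(EN_e, EN_b)
    en_e = 101 - sum(factor_early(d) for d in early_days) / len(early_days)
    en_b = 100 - sum(factor_late(d) for d in late_days) / len(late_days)
    return min(en_e, en_b)

print(round(determinative_number(early_days=[0, 0, 1, 2], late_days=[1, 3]), 1))  # 80.0 -> class C
```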
9.4.2.3 Assessment by Estimation Number

Although the preceding two methods of supplier assessment in terms of adherence to time schedules appear theoretically convincing, the reality may be different. The reliability of adherence to time schedules may also be influenced by other factors that are not related to the supplier's capability to dispatch a delivery but rather to aspects of logistics management, e.g.,

• Postponement of goods by transport agents (e.g., problems with dispatching from the warehouse, loss of export documents)
• Problems during transport (e.g., long waiting periods at a border, natural calamities, technical problems)

In cases where the assessment of logistical services provided by a 3PL is relevant, it is possible to use the assessment criteria contained in Table 9.9. Since the sum of the products of the maximum points and the individual weights is equal to 100, it is feasible to use the obtained estimation number to categorize a supplier into one of the three performance groups A, B, or C according to the previous techniques.
Table 9.9 Assessment criteria adapted from the Supplier Manual of the Bosch Group (assessment scale: 85, 90, 95, 100)

Adherence to schedule (weight 0.5)
  - Are schedules delivered in time and in the correct quantity?
  - Are schedules delivered in the correct quality?
  (anchors: Sometimes / Average / Normally / Always)

Flexibility (weight 0.2)
  - How flexible is the supplier's reaction to a short-notice increase in quantity?
  - How flexible is the supplier's reaction to changes in delivery dates?
  (anchors: Very little / Average / High / Very high)

Logistics (weight 0.2)
  - Is the supplier ready to use electronic data interchange (EDI)?
  - Are shipping documents correctly filled out?
  - Is the supplier prepared to use a 3PL supplier warehouse?
  - Are the packaging rules correctly executed?
  (anchors: Bad / Average / Good / Very good)

Communication and cooperation (weight 0.1)
  - Are the supplier's collaborators customer oriented and reachable?
  - Is the supplier interested in the joint completion of new projects?
  (anchors: Rarely / Average / Often / Always)

9.4.3 Supplier Assessment with Respect to the Schedule of Quantity

In order to secure smooth-running production as well as to satisfy the operational requirements of the customer, one must inevitably have a well-functioning system of inventory control and inbound logistics management. The assessment described here, in terms of satisfying the quantity schedule, widened the preceding method of supplier assessment in the company because of the growing operational changes affecting the production process. When assessing a supplier in terms of satisfying the schedule of quantity, it is useful to examine the relevance of the following questions:
1. What problems occur in the firm if the supplier supplies a lower quantity than has been ordered?
2. What problems occur in the firm if the supplier supplies a higher quantity than has been ordered?

1. The risk of supplying a lower quantity is relevant for the firm particularly in the case of distant suppliers: on failure to supply exactly the ordered quantity by ship, it is necessary to transport the missing material by plane.
2. In the case of exceeding the ordered quantity to such an extent that the firm cannot include the surplus material in its safety stock, the material is returned to the supplier. This causes additional costs to both parties involved. If the extent of the excess is low and the firm decides to accept the quantity exceeding the order, it is obliged to issue an additional order, which will increase the administrative costs of the operation (to a relatively low extent) and will unnecessarily increase the amount of money bound in the stock.

Besides the above two unfavorable cases of nonadherence to the specified schedule of quantity, supplying a type of material different from that ordered is not unusual either.
Table 9.10 Supplier assessment by adherence to quantity stipulations
Name of supplier:                    Evaluated period:

No. of supply   Ordered quantity   Delivered quantity   Difference (units)   Difference (%)   Coefficient
1               15.000             15.000               -                    0%               1
2               24.000             24.000               -                    0%               1
...             ...                ...                  ...                  ...              ...
12              8.000              8.000                -                    0%               1
13              10.000             10.400               400                  4%               2
14              10.000             9.600                −400                 −4%              3
15              20.000             19.000               −1000                −5%              3
16              10.000             10.500               500                  5%               2
17              10.000             11.500               1.500                15%              3
Average coefficient                                                                           1.4
back immediately to the supplier. If the failure to adhere to the schedule of quantity is not a subject of supplier assessment and the error is more or less tolerated, the likelihood of its repetition is high. For supplier assessment in terms of the supplied quantity, a method of recording the ordered and the delivered quantities has been proposed; a simplified example is shown in Table 9.10. The difference between the quantities is expressed as a percentage, and a coefficient is allocated to each supply according to Table 9.11. At the end of the measured period, the arithmetic mean of the coefficients is calculated and the supplier is classified into category A, B, or C according to Table 9.12 (an illustrative calculation sketch is given after the lists of advantages and disadvantages below). The advantages of this assessment in terms of respecting the quantity schedule are
• Precise information about the stock of each part
• Possibility of operational changes in orders
Table 9.11 Rules for allocation of the coefficient for a given supply

Coefficient   Difference between ordered and delivered quantity (%)
1             Difference is 0%
2             Difference is 0% to +5%
3             Difference is 0% to −5%, or greater than +5%
4             Difference is −5% to −10%
5             Difference is below −10%
Table 9.12 Supplier categories by adherence to quantity stipulations

Supply class   Average coefficient value
A              1–1.4
B              1.41–2
C              2.1–5
• Minimization of the costs of left-over material in the case of smooth changes in the production requirements
• Elimination of downtime in production
• Cost savings due to the eliminated downtime
The disadvantages of this assessment in these terms are
• In the case of an immediate change, it is not possible to reduce the costs of the left-over material.
• Increased costs are required to monitor the quantity schedule.
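As a concrete illustration of the mechanics behind Tables 9.10 to 9.12, the following Python sketch allocates coefficients to deliveries and classifies a supplier. It is an illustrative reading of the rules rather than production code: the function names are arbitrary, the coefficient-5 band is interpreted here as a shortfall of more than 10%, and only the five non-exact deliveries visible in Table 9.10 are used.

# Illustrative sketch of the quantity-schedule assessment (Tables 9.10-9.12).
def coefficient(ordered, delivered):
    """Allocate a coefficient to one supply from the percentage difference (Table 9.11)."""
    diff_pct = 100.0 * (delivered - ordered) / ordered
    if diff_pct == 0:
        return 1
    if 0 < diff_pct <= 5:
        return 2
    if -5 <= diff_pct < 0 or diff_pct > 5:
        return 3
    if -10 <= diff_pct < -5:
        return 4
    return 5   # shortfall of more than 10 percent (assumed reading of the last band)

def supply_class(avg_coefficient):
    """Classify the supplier by the average coefficient (Table 9.12)."""
    if avg_coefficient <= 1.4:
        return "A"
    if avg_coefficient <= 2.0:
        return "B"
    return "C"

# The five non-exact deliveries shown in Table 9.10 ("10.000" read as 10,000 units).
shown = [(10000, 10400), (10000, 9600), (20000, 19000), (10000, 10500), (10000, 11500)]
print([coefficient(o, d) for o, d in shown])   # [2, 3, 3, 2, 3], as in the table
print(supply_class(1.4))                       # 'A', the class for the reported average of 1.4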
9.4.4 Aggregative Supplier Assessment The supplier assessment according to the presented assessment methods may not always be unambiguous. The result may be influenced not just by the third parties involved but also by the mutually agreed delivery terms and conditions. To obtain an impartial evaluation of the quality of supplies and of the supplier, it is essential to have enough pertinent and detailed information to assess the qualities of individual suppliers as precisely as necessary. By combining all three assessments we can obtain a more objective view of the quality of individual suppliers. The aggregative assessment of the three main criteria can be combinatorially arranged as shown in Table 9.13.
Table 9.13 Combinatorial rules for determining the aggregative evaluation

Quality   Time schedules/delivery dates   Schedules of quantity   Total
A         A                               A                       A
A         A                               B                       A
A         A                               C                       B
A         B                               A                       B
A         B                               B                       B
A         B                               C                       B
A         C                               A                       B
A         C                               B                       C
A         C                               C                       C
B         A                               A                       B
B         A                               B                       B
B         A                               C                       B
B         B                               A                       B
B         B                               B                       B
B         B                               C                       B
B         C                               A                       C
B         C                               B                       C
B         C                               C                       C
C         A                               A                       C
C         A                               B                       C
C         A                               C                       C
C         B                               A                       C
C         B                               B                       C
C         B                               C                       C
C         C                               A                       C
C         C                               B                       C
C         C                               C                       C
Finally, we can arrange partial assessment results and the aggregated result in a compact form, for instance, by Table 9.7.
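The combinatorial rule of Table 9.13 can be applied mechanically. The short Python sketch below simply encodes the table as reconstructed above (quality grade first, then the time-schedule grade, then the quantity-schedule grade); the dictionary and function names are illustrative, and the lookup values should be checked against the original table before any real use.

# Aggregative evaluation lookup, transcribed from Table 9.13 as reconstructed above.
# Keys are (quality, time_schedules, quantity_schedules); values are the total grade.
AGGREGATE = {
    ("A", "A", "A"): "A", ("A", "A", "B"): "A", ("A", "A", "C"): "B",
    ("A", "B", "A"): "B", ("A", "B", "B"): "B", ("A", "B", "C"): "B",
    ("A", "C", "A"): "B", ("A", "C", "B"): "C", ("A", "C", "C"): "C",
    ("B", "A", "A"): "B", ("B", "A", "B"): "B", ("B", "A", "C"): "B",
    ("B", "B", "A"): "B", ("B", "B", "B"): "B", ("B", "B", "C"): "B",
    ("B", "C", "A"): "C", ("B", "C", "B"): "C", ("B", "C", "C"): "C",
}

def aggregative_grade(quality, time_schedules, quantity_schedules):
    """Combine the three partial grades into the total grade of Table 9.13."""
    if quality == "C":
        return "C"   # per the table, a quality grade of C gives total C for every combination
    return AGGREGATE[(quality, time_schedules, quantity_schedules)]

print(aggregative_grade("A", "B", "A"))   # 'B'
print(aggregative_grade("B", "C", "A"))   # 'C'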
9.5 Discussion and Closing Remarks The supplier assessment using the proposed procedure creates higher pressure on the quality of supplies, which cannot be attained immediately after the procedure is put into practice. It requires close collaboration with suppliers, based on mutual trust and partnership. Only in this way is it possible to examine the specific problems of suppliers and to look together for solutions for their removal. Successful solutions often require changes in the organization of work, training of the staff, installation of new computing equipment, etc. This approach was experimentally tested in a company that produces cable harnesses for cars. By a gradual increase of the suppliers' discipline, downtimes in production were eliminated. The overall sum of downtime costs during the year prior to the introduction of the widened method of supplier assessment was about US$110,000, against a total yearly turnover of US$170,000,000. In the first year after the introduction of the modified method of supplier assessment, a decrease of these costs by about 14% compared to the average of the past 5 years was recorded. The effects of the increase in the suppliers' discipline, besides the above-mentioned ones, show in the higher capability to react flexibly to the customer's requirements, the lower number of claims, the decreased number of input control staff, the decreased overall costs of quality, as well as savings in transport costs. There are also known examples pointing to the fact that the results of the systematic management of relations with suppliers may appear in a relatively short time. For example, by the introduction of systematic supplier performance assessment in a firm involved in the assembly of healthcare instruments, a decrease in the stock of parts by 34% was achieved within several months (Harrington et al. [19]). Precise and effective supplier assessment may contribute to the removal of numerous problems related to the unreliability of suppliers. Every firm should try to avoid such potential problems. There is not just one correct method for their prevention. Therefore, when selecting supplier evaluation techniques, company specifics are an important consideration. An alternative solution is the development of a company's own supplier assessment method. The aim of this chapter was to point out other possible methods of supplier assessment and to provide inspiration for companies trying to increase the effectiveness of their inbound logistics management.
References 1. A. Gunasekaran and E.W.T. Ngai (2004) Information systems in supply chain integration and management. European Journal of Operational Research, 159(2): 269–295.
2. J.B. Yang and D.L. Xu (2004) Intelligent decision system for supplier assessment, DSS2004. The 2004 IFIP Conference on Decision Support Systems, Prato, Tuscany, Italy. 3. L.M. Ellram (1995) A managerial guideline for the development and implementation of purchasing partnerships. International Journal of Purchasing and Material Management, 31(2): 9–16. 4. D.W. Dobler and D.N Burt (1996) Purchasing and supply chain management, 6th ed. McGraw-Hill, New York, pp. 238–260. 5. R.E. Spekman (1988) Strategic supplier selection: understanding long-term buyer relationships. Business Horizons, 31(4): 75–81. 6. J.T. Mentzer, W. DeWitt, J.S. Keebler, S. Min, N.W. Nix, C.D. Smith, and Z.G. Zacharia (2001) What is supply chain management? In J.T. Mentzer (ed.), Supply Chain Management. Sage, Thousand Oaks, CA, pp. 1–25. 7. A.H. Bask and J. Juga (2001) Semi-integrated supply chains. Towards the new area of supply chain management. International Journal of Logistics, 4(2): 137–152. 8. M. Perry, A.S. Sohal, and P. Rumpf (1999) Quick response supply chain alliances in the Australian textiles, colothing, and footwear industry. International Journal of Production Economics, 62(1–2): 119–132. 9. M. Christopher and D. Towill (2002) Developing market specific supply chain strategies. International Journal of Logistics Management, 13(1). 10. S. Helper (1991) How much has really changed between U.S. auto markers and their suppliers? Sloan Management Review, 15(3): 15–28. 11. J.G. Olin, N.P. Greis, and J.D. Kasarda (1999) Knowledge management across multi-tier enterprises. European Management Journal, 17(4): 335–347. 12. R.E. Spekman, J.W. Kamauff, and N. Myhr (1988) An empirical investigation into supply chain management: A perspective on partnerships. Supply Chain Management, 3(2): 53–67. 13. A.G. Saunders (1994) Suppliers audits as part of a supplier partnership. The TQM Magazine, 6(2): 41–42. 14. L.M. Rinehart, J.M. Eckert, T.J. Page, and T. Atkin (2004) Assessment of supplier–customer relationships. Journal of Business Logistics, 25(1): 25–62. 15. R. Gulati (1995) Does familiarity breed trust? The implications of repeated ties for contractual choice in alliances. Academy of Management Journal, 38(1): 85–113. 16. J.R. Hollingsworth and R. Boyer (1997) Coordination of economic actors and social systems of production. In J.R. Hollingsworth and R. Boyer (eds.), Contemporary capitalism: The embeddedness of institutions. Cambridge University Press, Cambridge, UK, pp. 1–47. 17. M. Christopher and H. Lee (2004) Mitigating supply chain risk through improved confidence. International Journal of Physical Distribution & Logistics Management. 18. G.J. Hahn, N. Doganaksoy, and Ch. Stanard (2001) Statistical tools for six sigma. Quality Progress, 34 (9): 78–82. 19. T.C. Harrington, D.M. Lambert, and M. Christopher (1991) A methodology for measuring vendor performance. Journal of Business Logistics, 12(1): 97–98.
Chapter 10
Human Factors and Ergonomics for Nondestructive Testing B.L. Luk and Alan H.S. Chan
Abstract Nondestructive testing (NDT) relies heavily on human judgment and visual capability to identify any faults or defects on the specimen at the end of the process. Despite the fact that the human operator plays an important role in the reliability of NDT test results, very little research work has been carried out to study the ergonomics and human factors involved in using the various NDT methods. Among the wide variety of NDT methods, dye penetrant inspection, magnetic particles inspection, ultrasonic inspection, and eddy current inspection are four of the most commonly used techniques in industry. In this chapter, several human factors that could affect the reliability of the tests are discussed, and some recommendations are provided to improve the tests. Keywords: Magnetic particles inspection · dye penetrant inspection · ultrasonic inspection · eddy current inspection · nondestructive testing · ergonomics
10.1 Introduction Nondestructive testing is an examination or evaluation performed on a test object without changing or altering it in any way while determining the absence or presence of conditions or discontinuities that may have an effect on its usefulness or serviceability [1]. The number of NDT methods that can be used to inspect components and make measurements is large and continues to grow. Researchers keep on finding new ways of applying knowledge of physics, materials engineering, and other scientific disciplines to develop better NDT methods. However, very little research work has been carried out to study the ergonomics and human factors in using these NDT methods and equipment; yet the human factor is one of the important factors that could significantly affect the reliability of the test results. In order to investigate B.L. Luk and Alan H.S. Chan City University of Hong Kong, Kowloon Tong, Hong Kong
the ergonomic, safety, and health problems in conducting these NDT methods, three NDT companies were visited in this study. During the site visits, it was found that some of the NDT operators were not aware of their own personal safety and health. In this chapter, four of the popular nondestructive inspection methods, viz., dye penetrant inspection (DPI), magnetic particles inspection (MPI), eddy current inspection, and ultrasonic inspection, were selected for the study, and the related issues of ergonomics, safety, and health are examined and discussed. Both the DPI and MPI have some similarities in operational and procedural aspects although their basic working principles are different. Unlike most of the other popular NDT techniques, no electronic instruments are used to detect faults and defects with these two methods. Instead, these two methods rely heavily on the use of human visual abilities and cognitive decision process. Also it was noted that both methods involve spraying hazardous chemicals on the surface of the specimen prior to the human inspection task [2, 3]. Unlike DPI and MPI, both eddy current and ultrasonic inspection techniques, on the other hand, involve scanning the part surface with a transducer, and the returned signals are recorded electronically and then displayed visually on a visual display terminal (VDT), from which operators are required to interpret the signals. Furthermore, no hazardous chemicals need to be sprayed on the part surface prior to the human inspection task. The couplant used in ultrasonic inspection is generally not harmful to humans. Most importantly, cracks or defects are mainly detected by the electronic instruments rather than purely relying on the capability of human eyesight.
10.2 Principles and Procedures 10.2.1 Dye Penetrant Inspection Dye penetrant inspection is a method for revealing surface cracks by using color dye. The technique is based on the ability of a liquid to be drawn into a “clean” surface-breaking flaw by capillary action [4, 5]. Similar to magnetic particles inspection, the procedures of performing dye penetrant inspection begin with pre-cleaning of the surface of specimen with a specialized cleaner. A layer of penetrant is then applied to the surface of the cleaned specimen. The penetrant is allowed to remain on the surface for a sufficient time to allow the penetrant to seep through the defects by capillary action. Generally, a 10-minute dwell time is sufficient on clean castings, welds, and most defects. The surface is then wiped clean with a clean towel or cloth premoistened with cleaner or dye remover. After that, the surface is sprayed with developer to draw penetrant trapped in flaws back to the surface where it will be visible. The developer is allowed to stand on the part surface for an average of 20 minutes to permit the extraction of the trapped penetrant out of any surface flaws.
Fig. 10.1 A crack (indicated by the arrow) was visualized with the DPI method
Finally, the part is ready for inspection under appropriate lighting to detect the presence, location, and size of defect (Fig. 10.1). After the inspection is completed, the part surface is cleaned thoroughly with cleaner or remover.
10.2.2 Magnetic Particles Inspection Magnetic particles inspection is a relatively simple technique, and it can be considered as a combination of two nondestructive testing methods: magnetic flux leakage testing and visual inspection. Encountering a small air gap created by cracks on or near the surface (0 to 4 mm) of a magnetized component, the magnetic field spreads out, since air cannot support as much magnetic flux per unit volume as the magnet can. When the flux spreads out, it appears to leak out of the material and is thus called a flux leakage field [6, 7]. If iron particles are sprinkled on it, the particles will be attracted and cluster at the flux leakage fields, thus forming a visible indication that the inspector can detect (Fig. 10.2). The procedures of performing MPI involve pre-cleaning the surface of the specimen. The surface is then brushed with a wire brush (if necessary) and wiped off with purified wiping cloths [Fig. 10.3 (a) and (b)]. After cleaning, the surface is free of grease, oil, and other moisture that could prevent the suspension1 from wetting the surface and prevent the magnetic particles from moving freely. A layer of white contrast paint is then sprayed on the part surface to provide a better contrast for visualizing the position of faults [Fig. 10.3(c)]. After that, the part is magnetized
1 The MPI suspension is a two-phase system comprising finely divided magnetic particles dispersed in a vehicle, often a liquid petroleum distillate.
Fig. 10.2 Working principle of MPI
with a pre-magnetized yoke [Fig. 10.3(d)]. While locating the yoke on the specimen, a suspension of magnetic particles is gently sprayed or flowed over the surface, which is then blown to ensure the magnetic particles are evenly spread on the part surface [Fig. 10.3 (e) to (g)]. The part surface is then carefully inspected to look for areas where the magnetic particles are clustered. Surface discontinuities will produce a sharp indication [Fig. 10.3(h)]. The indications from subsurface flaws will be less defined and lose definition as depth increases. After inspection, the surface should be demagnetized and then cleaned again.
10.2.3 Ultrasonic Inspection Ultrasonic examination is a nondestructive method in which beams of high-frequency sound waves are introduced into materials for detecting subsurface flaws in the material. The sound waves travel through the material with some attendant loss of energy (attenuation) and are reflected back from internal imperfections or geometric surfaces of the part. The reflected waves are transformed into an electrical signal by the transducer and displayed on a visual display unit. From the signal, information about the location, size, orientation, and other features of the reflector can then be obtained [8]. The procedures of performing ultrasonic inspection involve cleaning the part surface with a cloth and detergent and calibrating the ultrasonic instrument with a calibration block made of material similar to the specimen being tested. A Sonatest Masterscan 340 Ultrasonic Flaw Detector is shown in Fig. 10.4 [9]. The settings of the ultrasonic instrument (pulser and receiver settings) should be optimized during calibration in order to maximize the system's resolution capabilities. The part surface is then coated with a layer of couplant to facilitate the transmission of ultrasonic energy from the transducer into the test specimen. After that, scanning is performed by moving the transducer over the part surface with an appropriate scan pattern, direction, and speed. During scanning, the signal displayed on the display unit should be carefully scrutinized for any reflections that indicate internal cracks or defects. All
Fig. 10.3 Procedures of MPI
Fig. 10.4 A Sonatest Masterscan 340 Ultrasonic Flaw Detector
suspected flaw indications, regardless of their amplitude, should be further investigated to the extent necessary to provide accurate characterization, identity, and location. A schematic drawing of output signal indicating a defect in a specimen is shown in Fig. 10.5. The main application areas of ultrasonic inspection include the detection of subsurface flaws and thickness for both ferrous and nonferrous metals.
10.2.4 Eddy Current Eddy current inspection uses the principle of electromagnetism as the basis for conducting examinations. Eddy currents are created through a process called electromagnetic induction. When alternating current is applied to a conductive material under examination, a magnetic field develops in and around the conductor. This magnetic field then induces electrical eddy currents in the material, and the presence of flaws or material variations on the surface would distort the paths of the
Fig. 10.5 Schematic drawing of ultrasonic inspection
Fig. 10.6 Scanning of the part surface with a probe in eddy current inspection
eddy currents in the material. A sensing coil designed to pick up these eddy currents can detect the presence of such flaws and characterize their properties to some extent [4]. The operating procedure of eddy current inspection is similar to that of ultrasonic test, except that no couplant is needed. An eddy current instrument is calibrated before inspection. Scanning is then performed by the human inspectors with different types of probe over the cleaned surface, and the signals shown on the display unit are simultaneously observed (Fig. 10.6).
10.3 Human Abilities and Skills Required 10.3.1 Perceptual and Cognitive Abilities The visual ability of inspectors is critical to identifying any defect in both magnetic particles and dye penetrant inspections. Their visual acuity should meet certain standards in order to perform the inspection tasks competently and properly. The European Standard EN473:1993 [10], Qualification and Certification of NDT Personnel General Principles, establishes a system for the qualification and certification of personnel who perform industrial NDT. According to EN473:1993 [10], the inspectors shall provide evidence of satisfactory vision as determined by an oculist, optometrist, or other medically recognized person in accordance with two requirements. First, their near vision acuity shall permit reading a minimum of Jaeger number 1 (Snellen acuity of 20/15) or Times Roman N 4.5 or equivalent letters at not less than 30 cm with one or both eyes, either corrected or uncorrected. In addition, the verification of the visual acuity shall be done annually. In the NAS 410 [11], the Jaeger (J) number 1 criterion is also recommended for use for certification of operators for near-distance visual acuity, which is believed to
be an important attribute for looking for a crack in NDT. Similarly, a recommended practice of Jaeger number 2, or an equivalent type and size of letter, at the distance designated on a standard Jaeger test chart but not less than 12 in. (30.5 cm), is proposed in SNT-TC-1A [12]. According to Hellier [1], the normal eye can distinguish a sharp image when the object being viewed subtends an arc of 1/12 of a degree (5 minutes), which corresponds to a 20/20 Snellen acuity. Under this condition, the thickness of the lines and the spaces between the lines of the object subtend 1 minute of arc. Other researchers have also suggested that details of 1 minute of arc can be perceived by the human eye [13, 14]. However, the authors believed that this measure of acuity, more precisely known as minimum separable acuity, may not be directly related to or useful for determining the suitability of operators for NDT tasks. Instead, minimum perceptible acuity, which is the ability to detect the presence of a spot or object from its background, should be considered. It was reported by Hecht et al. [15] that the minimum perceptible acuity can be as fine as 0.008 minute of arc. Unfortunately, no other research on the minimum perceptible acuity was reported. The second requirement of human vision is that the color vision of inspectors shall be sufficient that they can distinguish and differentiate contrast between the colors used in the NDT method concerned as specified by the employer. It is obvious that the screening of operators based on the color vision requirement cannot be easily implemented, as there is no specification of any color-deficiency test recommended, and the employers may not have sufficient knowledge and expertise in the selection of the appropriate type of color vision test for recruitment of NDT operators. The PseudoIsochromatic Plate Ishihara Compatible (PIPIC) Color Vision Test 24 Plate edited by Dr. Waggoner [16] is a commonly used test of color vision. According to Wickens [17], approximately 7% of the male population is color deficient and unable to discriminate certain hues from each other. Most prevalent is red-green color blindness, in which the wavelengths of these two hues create identical sensations if they are of the same luminance intensity. However, it is difficult to precisely determine the type and degree of color deficiency of a person, as there is wide variability within a class of color deficiency [14].
For a large product surface area, peripheral vision is as important as foveal vision. Hence, selection of NDT operators based on the area and shape of subjects' visual fields should also be seriously considered. A comprehensive visual field size and shape measurement package, the Visual Lobe Measurement Software (VILOMS), developed by one of the authors [18], has proved useful in predicting the visual search speed of operators and may be helpful for screening NDT operators. For the ultrasonic and eddy current inspection methods, the shape, curvature, and position of the part to be inspected are factors that will affect the inspectors' ability to scan and search for flaws. There is no doubt that if the part has an irregular shape, it will be difficult to move the transducer smoothly along a scanning path over the surface. Also, some deep indentations of the part may be difficult to access with a transducer, not to mention following any predefined search paths to locate possible defects.
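To put the acuity figures quoted earlier in this subsection into physical terms (an illustrative calculation, not a requirement taken from the standards cited above), the detail size s subtended by a visual angle of θ at a viewing distance d is s = d tan θ, so at the 30-cm reading distance used for the Jaeger test:

\[
s_{1'} = 0.30\,\text{m} \times \tan\!\left(\tfrac{1}{60}^{\circ}\right) \approx 8.7 \times 10^{-5}\,\text{m} \approx 0.09\,\text{mm},
\qquad
s_{0.008'} = 0.30\,\text{m} \times \tan\!\left(\tfrac{0.008}{60}^{\circ}\right) \approx 7 \times 10^{-7}\,\text{m} \approx 0.7\,\mu\text{m}.
\]

In other words, minimum separable acuity of about 1 minute of arc corresponds to detail on the order of a tenth of a millimetre at near-inspection distances, whereas the minimum perceptible acuity reported by Hecht et al. [15] corresponds to far finer indications, which is why the latter measure is argued above to be more relevant for crack detection.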
10.3.2 Physical Strength In magnetic particles inspection, alternating current yokes have a lifting force of at least 4.5 kg with 50- to 100-mm spacing between legs. Direct current yokes and permanent magnets shall have a lifting force of at least 13.5 kg with a 50- to 100-mm spacing between legs or 22.5 kg with a 100- to 150-mm spacing between legs [4]. Therefore, during the magnetizing process, the arms of inspectors should have adequate lifting capability to separate the yoke from the part under inspection. Snook and Ciriello [19] provide information regarding what can be presumed to be suitable or unacceptable conditions for manual materials handling activities in the United States. For lifting tasks, they suggested that the maximum acceptable lift weight, achievable by 90% of male industrial workers, is 14 kg for one lift every 12 seconds and 17 kg for one lift every 60 seconds, for a 51-cm vertical lifting distance between knuckle and shoulder height. The data do not represent the capacity limits of individuals. Instead, they represent the opinions of more than 100 experienced material handlers as to what they would do willingly and without overexertion.
10.3.3 Surface Preparation Technique With the MPI, for a clear indication of any crack on a surface, a very important parameter, which must be closely controlled, is the concentration of magnetic particles in the suspension. If there are insufficient particles in the suspension, no indications can be formed. However, too many particles may then give a high background that can mask small indications. The concentration is usually measured using the settling-test technique. The precipitate volume should be 0.15 to 0.3 mL per 100 mL for black oxide particles [4]. In DPI, removal of excess penetrant is
the most delicate part of the inspection procedure. This task requires removing all the excess penetrant on the surface without removing too much dye from the defects. Therefore, the inspector should be skillful in removing penetrant in order to visualize the defect successfully.
10.4 Ergonomics, Safety, and Health Problems 10.4.1 Illumination Lighting can make details easier to see and colors easier to discriminate without producing discomfort or distraction [20]. Since both the MPI and DPI methods rely upon the inspector's ability to see the test indication, the lighting condition provided for these inspection tasks is extremely important. If insufficient or too high a level of illumination is provided at the workplace, it not only affects the sensitivity of the method but may also cause inspector fatigue. Since lighting parameters can only affect the visual aspects of a task, the greater the contribution of vision to the performance of a task, the greater will be the effect of lighting on that task [14]. Obviously, NDT inspection tasks depend highly on vision for judging the presence of a defect. If glare or reflections occur on the surface of the specimens, fatigue will quickly set in, leading to discomfort and decreased inspector performance, since glare prevents clear inspection of the specimens. Glare is produced by brightness within the field of vision that is sufficiently greater than the luminance to which the eyes are adapted so as to cause annoyance, discomfort, or loss in visual performance and visibility [14]. For ultrasonic and eddy current inspection, bad lighting can cause excessive glare and reflection on the screen of the signal display unit. This will inevitably increase the fatigue level of the operator doing the task.
10.4.2 Working Posture The working posture of inspectors is usually determined by the actual location, size, and mobility of the specimen. If the part being inspected is immovable, inspectors may need to adopt an uncomfortable working posture for performing the inspection tasks. As shown in Figs. 10.7 to 10.9, inspectors performing ultrasonic inspection, MPI, and DPI have to squat down beside a specimen that is located in a low position. Working with such kinds of improper postures may lead to different forms of musculoskeletal pains. The rapid upper limb assessment (RULA) method, which was developed by McAtamney and Corlett [21, 22], is a postural targeting method for estimating the risks of work-related upper limb disorders. A RULA assessment gives a quick and
Fig. 10.7 Working posture of an inspector during performing DPI (Source: ETS-Testconsult Ltd.)
systematic assessment of the postural risks to a worker. For the working posture shown in Fig. 10.9, the RULA score is 7, which means that the person is working in the worst posture with an immediate risk of injury from their work posture, and the reasons for this need to be investigated and changed immediately.
Fig. 10.8 Inspector squatted down next to the specimen located in a low position performing an ultrasonic test
Fig. 10.9 Working posture of an inspector performing MPI (Source: ETS-Testconsult Ltd.)
10.4.3 Potential Chemical Hazards 10.4.3.1 Dye Penetrant Inspection The chemicals involved in DPI include cleaner, dye penetrant, and developer. The cleaner used in DPI is similar to that used in MPI. Dye penetrant may contain mineral oil, phthalic esters, and liquefied petroleum gasses. The bland, oily liquid may irritate the skin and eyes. Although the bulk material is difficult to ignite, it will burn vigorously if engulfed in fire. The aerosol is extremely flammable. The ingredients of developer may include 2-propanol, 2-propanone, isobutane, and talc. It is an extremely flammable white liquid and aerosol. Its fast evaporating vapors can reach hazardous levels quickly in unventilated spaces. It can irritate skin by removing natural skin oils on long or repeated exposures. It can also irritate eyes, but it does not damage eye tissue. Inhalation of this chemical may cause dizziness and nausea. A summary of hazardous ingredients and chemicals used in DPI and MPI and their related health risks is shown in Table 10.1.
10.4.3.2 Magnetic Particles Inspection The chemicals involved in MPI include solvent-based cleaner, contrast paint, and magnetic particle suspension. In the course of operation, these chemical materials can have direct and unsafe effects on human operators (typically, exposure to chemical solvents), or they can affect the environment in ways that are potentially hazardous [2, 3]. A typical solvent-based cleaner or remover in aerosol form may contain a light aliphatic solvent naphtha and carbon dioxide propellant, which may
Table 10.1 Summary of hazardous ingredients and potential health risks of chemicals used in MPI and DPI

Solvent-based cleaner or remover
  Hazardous ingredients: light aliphatic solvent naphtha; carbon dioxide propellant
  Potential health risks:
  • Flammability
  • Flash fire
  • Mist or vapor may irritate the respiratory tract
  • Eye and skin irritation in case of liquid contact
  • Central nervous system (CNS) depression and target organ effects if overexposed
  • Slipping hazard if spilled

White contrast paint
  Hazardous ingredients: 2-propanone; titanium oxide; carbon dioxide propellant
  Potential health risks:
  • Highly flammable
  • Causes sore throat, cough, confusion, headache, dizziness, drowsiness, and unconsciousness in case of inhalation
  • Irritates skin by removing natural skin oils on long or repeated exposures
  • Eye redness, eye pain, and blurred vision can be caused if eyes are exposed to it
  • Fast evaporating vapors can reach hazardous levels quickly in unventilated spaces

Magnetic particle suspension
  Hazardous ingredients: iron oxide; white mineral oil (petroleum); liquefied petroleum gasses; isobutane
  Potential health risks:
  • Extremely flammable aerosol
  • Bland oily liquid may irritate the skin
  • Isobutane vapors may cause dizziness and nausea in case of inhalation

Dye penetrant
  Hazardous ingredients: mineral oil; phthalic esters; liquefied petroleum gasses
  Potential health risks:
  • Bland, oily liquid may irritate the skin and eyes
  • Burns vigorously if engulfed in fire
  • Extremely flammable aerosol

Developer
  Hazardous ingredients: 2-propanol; 2-propanone; isobutane; talc
  Potential health risks:
  • Flammable
  • Fast evaporating vapors can reach hazardous levels quickly in unventilated spaces
  • Irritates skin by removing natural skin oils on long or repeated exposures
  • Irritates eyes, but does not damage eye tissue
  • Dizziness and nausea if inhaled
be flammable. Its vapor may cause a flash fire. It would be harmful or fatal if swallowed. Mist or vapor may irritate the respiratory tract. Also, liquid contact may cause eye and skin irritation. Overexposure may cause central nervous system (CNS) depression and target organ effects. Chemical spills may create a slipping hazard. White contrast paint may contain hazardous ingredients such as 2-propanone, titanium oxide, and carbon dioxide propellant. 2-Propanone is highly flammable and,
if inhaled, may cause sore throat, cough, confusion, headache, dizziness, drowsiness, and unconsciousness. It can irritate skin by removing natural skin oils on long or repeated exposures. Furthermore, eye redness, eye pain, and blurred vision can be caused if eyes are exposed to it. The fast evaporating vapors of white contrast paint can reach hazardous levels quickly in unventilated spaces. Magnetic particle suspension, in general, contains iron oxide, white mineral oil (petroleum), liquefied petroleum gasses, and isobutane. It is an extremely flammable aerosol. The bland oily liquid may irritate the skin. In case of inhalation, isobutane vapors may cause dizziness and nausea [2, 3].
10.5 Conclusions and Recommendations During the observations, it seemed that the NDT operators were not aware of their personal safety and health. Although they had read the warning messages on the packages, they did not heed them. Employers should be responsible for providing a safe working environment, a safety guide, protective equipment, etc. Since the position of the specimen being inspected is different for different cases, it is impossible to propose a standard working posture for the inspectors to follow. Instead, some guidelines can be recommended. Pheasant [23] suggested that inspectors should avoid forward inclination of the head, neck, and trunk. Tasks that require the upper limbs to be used in a raised position should also be avoided. If it is possible, joints should be kept within the middle third of their range of motion. Also, twisted or asymmetrical postures should be avoided. As MPI involves a lifting task, job rotation is recommended in order to avoid muscular injury caused by extensive lifting of a yoke weighing as much as 22.5 kg. For dye penetrant inspections, an illuminance level of 300 to 550 lx at the surface of the part is generally sufficient for gross defects where the indication is large [6]. For extremely critical inspection, higher illuminance in the 1000 lx range is normally considered necessary. With visible magnetic particles, testing should not be attempted with less than 100 lx. Levels between 300 and 1000 lx are best for most visible testing applications. Critical tests of small discontinuities may require 2000 to 5000 lx. However, it should be noted that extended testing at levels over 2000 lx may produce eyestrain [4, 6]. Recommendations for the use of chemicals should also be given. When applying chemicals from aerosol cans, all inspectors must take very great care to protect not only their own well-being, but also that of their colleagues around them who may be involved in the inspection task. When aerosols are used in an indoor workshop, care must be taken to ensure adequate local ventilation. There are occasions, however, when the tasks must be carried out in locations where the ventilation is poor, such as inside a vessel or pipe. In such circumstances, it would be advisable to supply the inspectors with an independent air supply to avoid inhaling the local atmosphere. The best form of this arrangement is the enclosed helmet variety, which protects the head completely [5, 7].
Apart from the issue of ventilation of the working environment, care must also be taken to avoid contact with eyes and skin when using the chemicals in NDT. Inspectors may wear protective goggles or glasses if necessary. It is also essential to avoid taking these chemicals into the mouth. If such chemicals get into the mouth accidentally, it should be washed out immediately with a large quantity of water. Then medical treatment needs to be obtained directly afterward. In both MPI and DPI, most of the chemicals are applied by spray, frequently from aerosol cans. It is important to ensure that the spray nozzle is pointing away from the user or anyone else. The inspector should wear safety glasses to protect their eyes and wear rubber gloves if hand exposure is unavoidable. When the chemicals are sprayed on site, it is recommended that inspectors check the direction of the wind and avoid the possibility of sprayed chemicals being blown back over them [4, 6]. Furthermore, great care must be taken in handling and storage of the chemicals to avoid fire and explosive hazards, because most of the chemicals used in nondestructive test are flammable. Inspectors should be warned that an aerosol can should never be heated above 55◦ C (130◦ F). Stock of aerosol must be stored away from any heat source. In addition, chemicals must not be sprayed around arcs or flame in order to avoid ignition. If the chemical is accidentally released, sources of ignition should be turned off or removed first. The released chemicals should then be mopped up or swept up with absorbent [5]. Acknowledgment The authors would like to thank ETS-Testconsult Ltd., A.E.S. Destructive and Non-Destructive Testing Ltd. and FT Laboratories Ltd. for their participation in this study. The authors would also like to thank Miss Elaine Y. L. Chong for her help in carrying out some of the study. The work described in this paper was supported by a grant from City University of Hong Kong (Project No. ITRG 008-06).
References 1. C.J. Hellier (2001) Handbook of nondestructive evaluation. McGraw-Hill, New York. 2. International Occupational Safety and Health Information Centre (2006) International Chemical Safety Cards. Retrieved October 10, 2006 from the World Wide Web: http://www.ilo.org/public/english/protection/safework/cis/products/icsc/dtasht/index.htm 3. Magnaflux (2006) Overview of Products. Retrieved October 10, 2006 from the World Wide Web: http://www.magnaflux.com/products/overview.stm 4. R.C. McMaster (1982) Liquid penetrant tests. American Society for Nondestructive Testing, Columbus, OH. 5. D. Lovejoy (1991) Penetrant testing: A practical guide. Chapman & Hall, London, New York. 6. J.T. Schmidt, K. Skeie, and P. McIntire (1989) Magnetic particle testing. American Society for Nondestructive Testing, Columbus, OH. 7. D. Lovejoy, 1993, Magnetic Particle Inspection: A Practical Guide. London, NY: Chapman & Hall. 8. A.S. Birks, R.E. Green, and P. McIntire (1991) Ultrasonic testing. American Society for Nondestructive Testing, Columbus, OH. 9. Manual of Sonatest Masterscan 340 Ultrasonic Flaw Detector (2006) Retrieved September 18, 2006 from the World Wide Web: http://www.russelltech.com/PDF/ Sonatest/ms340%20Russellsmaller.pdf
10. EN473 (1993) Qualification and certification of NDT personnel—General principles. 11. NAS 410 (1996) MIL-STD-410E NAS certification & qualification of nondestructive test personnel. Aerospace Industries Association of America, Inc., Washington, DC. 12. ASNT (2001) Recommended practice No. SNT-TC-1A. The American Society for Nondestructive Testing. 13. K.H.E. Kroemer, H.B. Kroemer, and K.E. Kroemer-Elbert (1994) Ergonomics: How to design for ease and efficiency. Prentice Hall, Englewood Cliffs, NJ. 14. M.S. Sanders and E.J. McCormick (1992) Human factors in engineering and design. McGrawHill International Editions. New York, USA. 15. S. Hecht, S. Ross, and C.G. Mueller (1947) The visibility of lines and squares at high brightness. Journal of Optical Society of America, 37: 500–507. 16. T.L. Waggoner (2004) PseudoIsochromatic Plate Ishihara Compatible (PIPIC) Color Vision Test 24 Plate. 17. W.D. Wickens (2004) An introduction to human factors engineering. Pearson/Prentice Hall, Upper Saddle River, NJ. 18. A.H.S Chan and D.K.T So (2006) Measurement and quantification of visual lobe shape characteristics. International Journal of Industrial Ergonomics, 36: 541–552 19. S.H. Snook and V.M. Ciriello (1991) The design of manual handling tasks: revised tables of maximum acceptable weight and forces. Ergonomics, 34: 1197–1213. 20. P. Boyce, (1981) Human factors in lighting. Macmillan, New York. 21. L. McAtamney and E.N. Corlett (1993) RULA: A survey method for the investigation of work-related upper limb disorders. Applied Ergonomics, 24: 91–99. 22. L. McAtamney and E.N. Corlett (2004) rapid upper limb assessment (RULA). In N. Stanton et al. (eds.), Handbook of Human Factors and Ergonomics Methods, Chapter 7, Boca Raton, FL, pp. 7:1–7:11. 23. S. Pheasant (1986) Bodyspace: Anthropometry, ergonomics and design. Taylor & Francis, London.
Chapter 11
A Novel Matrix Approach to Determine Makespan for Zero-Wait Batch Processes Amir Shafeeq, M.I. Abdul Mutalib, K.A. Amminudin, and Ayyaz Muhammad
Abstract Scheduling zero-wait batch processes involves various parameters of which makespan is normally used as the deciding parameter for selecting the optimum production sequence. Most of the currently available methods for determining makespan are based on complex mathematical programming techniques. The current research focuses on developing a novel method to determine makespan for zero-wait batch processes using simple mathematics coupled with some rule-based guidelines to be performed on a matrix containing the batch process recipes. The method is reasonably efficient and gives promising results with different batch process recipes. Keywords: Makespan · multiproduct · scheduling · zero wait
11.1 Introduction It is often the case in many branches of the chemical industry that batch processes are preferred over continuous processes. This is due to their ease of operation and flexibility in adjusting to frequent changes in product specifications as demanded by the current market, especially for pharmaceuticals and specialty chemicals. Sometimes the intermediate product produced in a batch process is not stable and, therefore, must be transferred immediately to the next stage. This type of batch process is called zero wait (Ryu et al. [1]). The field of process and production engineering has given significant importance to the scheduling design of zero-wait batch processes due to the economic drive to meet market demands for specialty and high-value-added products (Balasubramanian and Grossmann [2], Ryu et al. [1]).
Amir Shafeeq, M.I. Abdul Mutalib, K.A. Amminudin, and Ayyaz Muhammad Chemical Engineering Program, Universiti Teknologi Petronas, 31750, Tronoh, Perak, Malaysia
The determination of the minimum makespan of a batch process is recognized as one of the important design parameters, as it helps to decide on the best scheduling design. Considering makespan as an independent decision parameter alone would permit a designer to select the best production sequence from the various possible sequences in scheduling mixed batch process plants (Caraffa et al. [3], Dupont and Dhaenens-Flipo [4]). The literature mostly reveals the use of mathematical methods such as mixed integer linear programming (MILP) and mixed integer nonlinear programming (MINLP) in the formulations for determining the minimum makespan for zero-wait batch processes (Balasubramanian and Grossmann [2], Jung et al. [5]). Although such an approach is capable of providing the optimal solution, it does not provide the other near-optimal solutions from which a designer should have the flexibility to choose, especially when there are subjective constraints that need to be considered. A novel matrix-based method is proposed in this work, which quickly determines the makespan for all possible batch production sequences, thus allowing selection to be made from several identified optimal and near-optimal solutions. The method developed here is capable of handling large-size problems and could play a significant role toward developing an interactive design method for batch scheduling. The method was developed through observations made on several batch process recipes in which the makespan was determined using a Gantt chart method. The outcome of this analysis is a set of rule-based guidelines to be performed on the matrix containing the specified batch process recipes.
11.2 Batch Process Batch processes are generally classified as single product or multiproduct. Single-product batch processes offer processing of a single product only. This is because the plant layout and available resources are specific in nature and only suitable for processing one type of product. However, in a multiproduct process, more than one product can be processed using the same facility by sharing the same available resources. The number of campaigns of each product in single and multiproduct batch processes depends upon the market demand. No matter the number of campaigns of each product required, the completion time of the batch processes is an important parameter that must be determined to schedule the production. The makespan, i.e., completion time, of such batch processes can be determined using various techniques available in the literature. One way is to apply the Gantt chart method as illustrated below for single-product and multiproduct batch processes. For the purpose of understanding this method, the recipe of three different products A, B, and C is taken from Ryu et al. [1], as shown in Table 11.1. The batch process has three stages; the cleanup and transfer times are assumed negligible.
Table 11.1 Processing time for three products A, B, and C

           Processing time (h)
Product    Stage 1 (S1)   Stage 2 (S2)   Stage 3 (S3)
A          10             20             5
B          15             8              12
C          20             7              9
11.2.1 Makespan for Single-Product Batch Processing Consider the production of three campaigns of product A only, using a single-product batch process. The batch process can be represented with the Gantt chart method as illustrated in Fig. 11.1(a). It can be seen from Fig. 11.1(a) that the makespan can be calculated by adding the processing times of stage 1 and stage 3 to three times the processing time of stage 2, i.e., 75 hours. Similarly, for producing three campaigns of product B only, the makespan is calculated by adding the processing times in stages 2 and 3 to three times the processing time of stage 1 [Fig. 11.1(b)], i.e., 65 hours. Again, for product C, the same procedure as applied for product B can be used to calculate the makespan, i.e., 76 hours [Fig. 11.1(c)].
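The three calculations above follow one pattern. Under the zero-wait assumption, a compact way to state it (a summary of Fig. 11.1, not a formula given explicitly in the chapter) is that, for n campaigns of one product with stage times S_1, ..., S_M, the longest stage sets the pace:

\[
\text{Makespan} = \sum_{k=1}^{M} S_k + (n-1)\max_{k} S_k,
\]

which gives 35 + 2(20) = 75 hours for product A, 35 + 2(15) = 65 hours for product B, and 36 + 2(20) = 76 hours for product C, in agreement with the Gantt charts.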
Fig. 11.1 Batch processes represented using the Gantt chart method: (a) three campaigns of product A, (b) three campaigns of product B, (c) three campaigns of product C (Gantt chart bars show the stage processing times of Table 11.1)
It can be observed from the above illustrations that the path used to calculate the makespan is different for different product recipes. This is because the path continuity is different for different recipes, as illustrated by the dotted lines in Fig. 11.1(a) to (c). There are other paths that could be used to calculate the makespan, but they would involve discontinuities in the path as a result of idle time. This can be seen in the first stage and third stage for the production campaigns of product A [Fig. 11.1(a)] and in the second stage and third stage for the production campaigns of products B and C [Fig. 11.1 (b) and (c)]. If any of these paths are selected instead of the continuous path, which does not require any idle time calculation, one must determine these idle times before the final calculation of the makespan is possible. Therefore, using the continuous path for the makespan calculation is preferred. The location of the idle times would be different for different product recipes, as shown in Fig. 11.1. Thus, to generalize the makespan calculation procedure, there must be one path that can be chosen irrespective of the presence of idle time. Hence, there is a need for an approach that could calculate the correct makespan for any product recipe using one common path irrespective of the idle time that exists in that common path.
11.2.2 Makespan for Multiproduct Batch Process As observed earlier from the makespan calculation of a single-product batch process, the makespan calculation procedure is different for different product recipes. This procedure becomes even more tedious for the multiproduct batch process, where more than one product is produced using the same facility, as illustrated in Fig. 11.2(a) for three products A, B, and C. It is observed from Fig. 11.2(a) that the continuous path to determine makespan varies from the one observed earlier
Fig. 11.2 Multiproduct batch process, which produces more than one product (A, B, and C) using the same facility, represented by Gantt charts: (a) continuous path used to determine the makespan, (b) alternative path requiring idle times to be calculated at the first stage
for single product (dotted line). The makespan for this multiproduct batch process producing one campaign of each product A, B, and C is 66 hours. It is also observed that, for the same path to be selected for the makespan calculation as was done in the case of a single product, there would be a need to calculate the idle times between production campaigns at the first stage [dotted line in Fig. 11.2(b)]. It can be judged from these observations of the makespan calculation of single- and multiproduct batch processes that the situation might become even more complex for large-size problems and with different product recipes. Another technique available in the literature is to calculate the makespan by using the MILP formulations developed by Jung et al. [5]. This technique has been applied by Ryu et al. [1] on the multiproduct batch process recipe (Table 11.1) that follows a zero-wait transfer policy. The following equations are used:

\min \; C_{NM}   (11.1)

\sum_{l=1}^{N} y_{l,i} = 1, \quad \forall i   (11.2)

\sum_{i=1}^{N} y_{l,i} = 1, \quad \forall l   (11.3)

C_{ij} \ge C_{i-1,M} - \sum_{l=1}^{N} \sum_{k=j+1}^{M} P_{lk}\, y_{l,i-1} + \sum_{l=1}^{N} \sum_{k=j}^{M} P_{lk}\, y_{l,i}, \quad j = 1, \ldots, M-1, \; \forall i   (11.4)

C_{iM} \ge C_{i-1,j} + \sum_{l=1}^{N} \sum_{k=j}^{M} P_{lk}\, y_{l,i}, \quad i = 2, \ldots, N, \; j = M   (11.5)
148
Amir Shafeeq et al.
involve complex mathematics and also allows for near optimal solutions. This paper introduces the matrix method.
11.3 The Matrix Method The matrix method uses simple formulations that are executed systematically in a stepwise manner. The formulations are conducted on a set of matrices that can be arranged to represent the individual product recipes that make up the batch scheduling problems, with the purpose of determining its makespan. For ease of understanding, the detail of the matrix method is described using an example for a batch production of three products, each requiring three process stages. A zero-wait transfer policy with negligible cleanup and transfer times is assumed. Let us supposed the three products are A, B, and C and the three process stages are S1 , S2 , and S3 , which normally have different values for each product subject to the production recipe. Step 1. Arrange the product recipes that make up the batch scheduling design problem according to the arrangement below. In this respect the scheduling will be based on a sequence where product A is produced first, followed by product B, and lastly product C. 0 1 2 0 AS1 AS2 AS3 1 BS1 BS2 BS3 2 CS1 CS2 CS3 Step 2 Select the elements from the entire first column and the bottom most row of the matrix, i.e., the elements are AS1 , BS1 , CS1 , CS2 , and CS3 . The selection of the above is important in order to begin the execution of the matrix method. Introduce slack variables in between the first column elements of the matrix. In the case of the matrix shown, there will be two slack variables introduced, i.e., VAB located between AS1 and BS1 and VBC located between BS1 and CS1 .
0
1
2
0
AS1 AS2 AS3 VAB 1 BS1 BS2 BS3 VBC 2 CS1 CS2 CS3 Step 3. The value of each slack variable is determined by performing the following procedures to the elements of the two rows located above and below it, respectively. First, comparisons are made on the value of the matrix elements between the two rows diagonally, i.e., BS1 and AS2 and BS2 and AS3 . For each comparison made, note the element that has the larger value. Suppose the comparison outcome shows that AS2 has a larger value than BS1 and BS2 has a larger value than AS3 , as indicated by the direction of the two arrows below. Another comparison is then made between the sum of BS1 and BS2 and the sum ofAS2 and AS3 . Suppose this
comparison outcome shows that the sum of BS1 and BS2 is greater than the sum of AS2 and AS3 .
        0     1     2
0      AS1   AS2   AS3
VAB
1      BS1   BS2   BS3
VBC
2      CS1   CS2   CS3
(The diagonal comparison arrows drawn on this matrix in the original layout are omitted here.)
For each comparison outcome, a specific formula has been developed for determining the value of the slack variable, as illustrated in Table 11.2. For the matrix described above, the formula for the calculation of the slack variable VAB is VAB = AS2 − BS1, where VAB = V0,0, AS2 = M0,1, BS1 = M1,0, AS3 = M0,2, BS2 = M1,1 for i = 1 in Table 11.2. The same procedure is then repeated for the second and third rows of the matrix in order to calculate the second slack variable, VBC. Let us suppose a similar situation occurred in terms of the larger values of the matrix elements as encountered earlier; therefore the formula used for calculating VBC is VBC = BS2 − CS1, where VBC = V1,0, BS2 = M1,1, CS1 = M2,0, BS3 = M1,2, and CS2 = M2,1 for i = 2 in Table 11.2. Step 4. From the comparison outcome and the calculation made for the slack variables, the makespan for the multiproduct batch process is calculated using the formula Makespan = AS1 + VAB + BS1 + VBC + CS1 + CS2 + CS3. The observation made above could be generalized for any n number of products to determine the values of the slack variables and finally the makespan of the batch process. The standard matrix notation Mi,j (where i stands for products in rows and j stands for stages in columns, with each starting from zero) could be used to represent the product recipes, and Vi,j (introduced in between the rows of matrix Mi,j) could be used to represent the slack variables. The calculation procedure begins by making a comparison between elements of the first two rows of the matrix and is then repeated for n number of products as performed earlier in Steps 1 to 4. For each comparison outcome, the values of the slack variables are calculated according to the specific formula developed subject to the comparison outcome. The comparison outcome between each set of the two rows (Mi,0 + Mi,1) and (Mi−1,1 + Mi−1,2), where (i = 1, 2, . . . , n − 1), is categorized as Case A, Case B, and Case C. Each case presents two choices for slack variable calculation. Out of these
Table 11.2 Calculation of slack variables

Case A: (Mi,0 + Mi,1) > (Mi−1,1 + Mi−1,2)
  1. Vi−1,0 = Mi−1,1 − Mi,0
     Constraints: Mi−1,1 > Mi,0 and Mi−1,2 < Mi,1
  2. Vi−1,0 = 0
     Constraints: Mi−1,1 = Mi,0 and Mi−1,2 < Mi,1; or Mi−1,1 < Mi,0 and Mi−1,2 < Mi,1; or Mi−1,1 < Mi,0 and Mi−1,2 = Mi,1; or Mi−1,1 < Mi,0 and Mi−1,2 > Mi,1

Case B: (Mi,0 + Mi,1) < (Mi−1,1 + Mi−1,2)
  1. Vi−1,0 = Mi−1,1 − Mi,0
     Constraints: Mi−1,1 > Mi,0 and Mi−1,2 < Mi,1; or Mi−1,1 > Mi,0 and Mi−1,2 = Mi,1
  2. Vi−1,0 = (Mi−1,1 + Mi−1,2) − (Mi,0 + Mi,1)
     Constraints: Mi−1,1 < Mi,0 and Mi−1,2 > Mi,1; or Mi−1,1 = Mi,0 and Mi−1,2 > Mi,1; or Mi−1,1 > Mi,0 and Mi−1,2 > Mi,1

Case C: (Mi,0 + Mi,1) = (Mi−1,1 + Mi−1,2)
  1. Vi−1,0 = 0
     Constraints: Mi−1,1 = Mi,0 and Mi−1,2 = Mi,1; or Mi−1,1 < Mi,0 and Mi−1,2 > Mi,1
  2. Vi−1,0 = Mi−1,1 − Mi,0
     Constraints: Mi−1,1 > Mi,0 and Mi−1,2 < Mi,1
two choices, one is selected on the basis of certain constraints, as illustrated in Table 11.2. On the basis of the slack variable calculations, the makespan can be calculated using the following generalized expression for any number n of products with three batch process stages:

Makespan = ∑_{i=0}^{n−1} Mi,0 + ∑_{j=1}^{2} Mn−1,j + ∑_{i=0}^{n−2} Vi,0
11.4 Application of the Matrix Method

The matrix method developed is verified using the same example stated earlier. Suppose we start by determining the makespan for a batch sequence producing in the
order of A, B, C. The matrix representing the recipes according to this production sequence is shown below.
        0    1    2
0      10   20    5
V0,0
1      15    8   12
V1,0
2      20    7    9
The comparison outcome between the first two rows of the matrix according to the procedure discussed above would illustrate the following observations: (M1,0 + M1,1 ) < (M0,1 + M0,2 ) and
M0,1 > M1,0 , M0,2 < M1,1
i.e., (15 + 8) < (20 + 5) and 20 > 15, 5 < 8. The comparison outcome above is then referred to Table 11.2 for calculation of slack variables. It is observed from Table 11.2 that Case B (choice 1) satisfies the above comparison outcome. Hence, the slack variable calculated is as follows: V0,0 = M0,1 − M1,0 = (20 − 15) = 5. The same procedure is then repeated for the last two rows of the above matrix. The comparison outcome shows the following observations: (M2,0 + M2,1 ) > (M1,1 + M1,2 ) and
M1,1 < M2,0, M1,2 > M2,1
i.e., (20 + 7) > (8 + 12) and 8 < 20, 12 > 7. Again, the comparison outcome above is referred to Table 11.2, and it is found that Case A (choice 2) satisfies the above comparison outcome. Hence, the slack variable calculated is as follows: V1,0 = 0. Finally, the makespan would be calculated using the formula stated earlier as follows:

Makespan = ∑_{i=0}^{2} Mi,0 + ∑_{j=1}^{2} M2,j + ∑_{i=0}^{1} Vi,0
= (10 + 15 + 20) + (7 + 9) + (5 + 0) = 66
Table 11.3 Makespan of all possible sequences of products A, B, and C

Production sequence    Makespan (h)
ABC                    66
ACB                    65
BAC                    61
BCA                    70
CAB                    70
CBA                    70
It must be noted that the makespan value obtained above is based on a sequence where product A is produced first, followed by product B, and lastly product C. All possible production sequences for the above example of three products can be generated using the simple permutation rule P(n) = n!, where P(n) is the number of possible batch process sequences and n is the number of products. For example, the number of all possible sequences for producing the three products A, B, and C is P(3) = 3! = 6, as shown in Table 11.3. The rows of the matrix in the above example are then rearranged according to the specified production sequence before the slack variables are calculated. The makespan determined for each production sequence is shown in Table 11.3. The solutions from which the design engineer could choose would then be ranked according to the shortest makespan, as observed in Table 11.3, i.e., 61 hours for the production sequence B, A, C. The results obtained from the matrix method are found to be the same as those obtained from the earlier example using MILP. This shows that the developed matrix method is able to determine the makespan for each possible production sequence arising from a specified multiproduct batch process problem.
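For readers who wish to experiment with the procedure, the following Python sketch enumerates all production sequences and computes their makespans. It is not the authors' implementation; it uses a compact max-form for the slack variables that is equivalent to the case analysis of Table 11.2 under the zero-wait assumption, and it reproduces the values listed in Table 11.3.

from itertools import permutations

# Process times (hours) for stages S1, S2, S3 of each product (from the example).
recipes = {"A": (10, 20, 5), "B": (15, 8, 12), "C": (20, 7, 9)}

def slack(prev, curr):
    """Slack variable V between two consecutive products for a three-stage,
    zero-wait process; a compact max-form equivalent to the cases of Table 11.2."""
    return max(0,
               prev[1] - curr[0],
               (prev[1] + prev[2]) - (curr[0] + curr[1]))

def makespan(sequence):
    """Makespan = sum of first-stage times + stages 2 and 3 of the last product
    + sum of the slack variables between consecutive products."""
    rows = [recipes[p] for p in sequence]
    first_stages = sum(r[0] for r in rows)
    tail = rows[-1][1] + rows[-1][2]
    slacks = sum(slack(rows[i], rows[i + 1]) for i in range(len(rows) - 1))
    return first_stages + tail + slacks

for seq in permutations("ABC"):
    print("".join(seq), makespan(seq))   # ABC 66, ACB 65, BAC 61, BCA 70, CAB 70, CBA 70

Running the script ranks sequence BAC first with a makespan of 61 hours, in agreement with Table 11.3.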
11.5 Conclusion

This chapter proposes a novel method based on matrix formulations to determine the makespan as well as the optimal production sequence with minimum makespan. Apart from producing the optimal production sequence, the method is also capable of ranking all possible solutions for the batch scheduling design, which enables the designer to further screen the solutions using other, possibly subjective, criteria. The designer is then in a better position to ensure that the scheduling design is acceptable under practical constraints. The matrix method is capable of working with any number n of products processed in three serial stages. Future work will apply the matrix method to other transfer policies, such as those with and without intermediate storage tanks, and will extend it to handle more than three process stages.
Chapter 12
Interactive Meta-Goal Programming: A Decision Analysis Approach for Collaborative Manufacturing

Hao W. Lin, Sev V. Nagalingam, and Grier C.I. Lin
Abstract The benefits of collaborative manufacturing are widely recognized by both industry and academia. However, engaging in collaborative manufacturing has proven to be a challenging task for small and medium manufacturing enterprises (SMMEs). One critical obstacle is that decision makers repeatedly face complex decisions aimed at attaining the strategic objectives of collaborative manufacturing. The focus of this study is thus to develop an interactive meta-goal programming (IMGP)-based decision analysis framework to support the decision-making processes for collaborative manufacturing. The framework will have a critical positive impact on the operation of SMMEs engaging in collaborative manufacturing, as the accuracy and efficiency of their collaborative decision-making processes will be significantly improved.
12.1 Introduction

Collaborative manufacturing is a strategic action that requires manufacturing enterprises to establish close relationships for the benefit of all participants. Forming a collaborative manufacturing network (CMN) enables participants to exploit each other's core competencies [1]. Further, collaborative manufacturing involves allocating specific manufacturing operations to the right enterprises, thus improving the
overall business performance. Overall, collaborative manufacturing allows participating manufacturing enterprises to stay lean in their core competencies and establish their niche in the overall process of providing global manufacturing solutions. Therefore, this strategy is rapidly gaining momentum in the manufacturing industry worldwide. The trend is especially apparent in the SMME sector, as collaborative manufacturing is a promising gateway for SMMEs to compete against larger enterprises [2]. Collaborative manufacturing, however, presents critical management challenges. Decision makers of an enterprise engaged in collaborative manufacturing must continue to orchestrate the functional units within the enterprise. Furthermore, cross-organizational relationships and collaboration strategies between the business partners must be managed with similar effectiveness and efficiency. The essence of this challenge is that participants must propose their unique, individually desired decision goals, and these goals must then be analyzed holistically to achieve the best possible outcome for the enterprise and its collaborative business partners. To assist SMMEs in overcoming this challenge, this study developed a decision-support framework based on interactive meta-goal programming (IMGP). Adopting this framework would enable globally optimal and efficient collaborative decision making for a small or medium manufacturing enterprise and its business partners. In Section 12.2, background information on decision making in collaborative manufacturing is discussed. In Section 12.3, the development of an IMGP-based decision analysis framework is described. In Section 12.4, a hypothetical example is given to demonstrate the framework.
12.2 Decision Making in Collaborative Manufacturing

In collaborative manufacturing, the essence of decision making is to analyze the unique objectives, capabilities, constraints, and commitments of all entities in the CMN to ensure the best fulfilment of business objectives. In such an environment, decision making is highly complex because of the large number of distributed but interrelated manufacturing variables, the conflicting objectives, and the large number of alternatives for achieving the objectives. A systematic decision-making process is required to address this overwhelming complexity. In 1977 Simon [3] proposed a four-phase decision-making model, which Turban et al. [4] regard as the most concise and yet complete characterization of a rational decision-making approach. The four decision-making phases are intelligence, design, choice, and implementation. The intelligence phase is used to simplify and make knowledgeable assumptions about the real-world problem, so that decision makers can comprehend the situation and correctly define the potential problems and/or opportunities. The design phase involves the selection of an appropriate model to analyze the decision. The choice phase focuses on using algorithms to find the best possible solution. Finally, the implementation phase continuously monitors manufacturing activities against their anticipated performance achievements. If a variation exists, a new decision-making process is activated to rectify the underlying issues. The IMGP
model on which this study focuses corresponds mainly to the second and third phases of Simon's decision-making model.
12.3 Interactive Meta-Goal Programming-Based Decision Analysis Framework

The IMGP-based decision analysis framework is introduced by first recapping the fundamental concepts of goal programming (GP). Goal programming is a mathematical multicriteria optimization technique that has been actively studied in the disciplines of multicriteria decision making (MCDM) and multi-objective programming (MOP). It shows a promising future in theoretical developments and practical applications on an industrial scale [5]. Proposed in 1961 by Charnes and Cooper [6], GP was initially introduced as an extension of linear programming. One of the most fundamental differences between linear programming and GP is that in GP each decision goal is represented by a separate, unique mathematical function. The utility function of GP thus focuses on minimizing the undesired deviations from the goal targets. The undesired deviation can be a surplus offset (for a goal that must be equal to or less than its target), a slack offset (for a goal that must be equal to or larger than its target), or no offset (for a goal that must be attained exactly). Every goal's importance toward the decision outcome is represented by a relative weighting factor. Finally, normalization is often applied to ensure that all goal functions are analyzed on the same scale, so that trade-offs between different decision goals can be more accurately justified. Schniederjans' work [7] gives a detailed description of the conceptual development and practical application of GP. Because of the simplicity of its model formulation, its effectiveness in solving MCDM problems, and the fact that its minimal goal-target-deviation character conveys Simon's philosophy of just satisfying, or accepting suboptimal, solutions [3], GP has received overwhelming endorsement from researchers. The IMGP methodology is a recent development of the already proven, successful GP technique. Building on the fundamental theory of GP, the IMGP model introduces two new concepts, the meta-goal [8] and the interactive process [9], to further enrich the performance of conventional GP models. In this chapter we demonstrate that IMGP can be successfully applied to overcome the optimal decision-making challenges faced by SMMEs engaged in collaborative manufacturing.
12.3.1 Meta-Goals

A meta-goal is a simultaneous, cognitive-level evaluation of the degree of attainment of the original decision goals considered in a GP model. Each meta-goal variant evaluates the undesired deviation of every existing goal function in a unique way, in order to communicate concisely to decision makers the overall status of the decision outcome.
The conventional GP model merely attempts to find a solution where the sum of all weighted undesired deviations is minimized. Such a model is unable to effectively and accurately portray the decision maker's preferences for decision problems that involve a relatively large number of goals and complex relationships spanning different operational contexts of a CMN. For this type of complex decision problem, decision makers must be presented with a model through which they can easily gain cognitive insight into the decision situation and input appropriate search parameters without extensive knowledge of the original decision model, so that the analysis process can quickly converge to the most satisfying solution within the vast solution space. Our study therefore suggests an analysis approach that incorporates multiple meta-goal variants to build a higher-level GP model on top of the GP model of the original decision problem. This is known as the meta-goal programming multi-objective optimization approach [8]. In this approach, each meta-goal variant type offers a unique way of expressing and manipulating the overall achievement of all decision goals concerned. Through the use of collective meta-goal variants, there is no need to directly manipulate the underlying original GP model when searching for the optimal solution. This is particularly important because in a CMN the operational goals are usually proposed by managers of different functional units to represent the desirable performance targets for their corresponding manufacturing contexts. Other decision makers are either not well informed or lack the knowledge to make sound changes to the operational goal functions without the possibility of introducing errors into the model. Furthermore, the meta-goal deviations allow the CMN to identify more swiftly an overall picture of the strengths and weaknesses of the manufacturing process under consideration. Meta-goal targets can then be adjusted accordingly to achieve a solution that provides a better balance in performance across all manufacturing activities, so that overall business performance can be improved. The different meta-goal variants are defined below, together with a discussion of each variant's strength and its applicability in translating decision makers' preferences toward the attainment of different collaborative manufacturing decision goals.
12.3.1.1 Meta-Goal Variant 1

The first meta-goal variant is the minimization of the aggregated undesired goal deviations, modeled by equation (12.1). Every undesired deviation considered, di, is normalized by dividing its value by the target value of the corresponding decision goal, Ti. This ratio can be verbally described as the percentage of un-achievement of the decision goal concerned. Weighting factors, wi, are used to represent the relative importance of the undesired deviations. The meta-goal target, Q1, requires that the sum of the normalized undesired goal deviations be equal to or smaller than Q1. In terms of manufacturing decision making, this meta-goal is particularly effective when analyzing multiple conflicting goals that consume the same resources. Decision makers can rank the original decision goals and let the model anticipate the best achievable manufacturing outcome with the available resources.
Fig. 12.1 Meta-goal variants: (a) aggregated undesired goal deviations equal to or less than Q1; (b) maximum undesired goal deviation equal to or less than Q2; (c) ratio of unattained goals to the total number of goals equal to or less than Q3; (d) range of undesired goal deviations equal to or less than Q4 (axes: decision goals versus weighted and normalized goal deviations)
Furthermore, decision makers can continuously modify trade-offs and establish new ranking schemes until the desired outcome is achieved. This particular meta-goal analysis is depicted in Fig. 12.1(a), in which the height of each bar represents the corresponding weighted undesirable deviation of a particular goal. If each bar has a width of one unit, the total area under the graph is equal to the result of meta-goal variant 1.

∑_{i=1}^{r} wi (di / Ti) ≤ Q1.    (12.1)
12.3.1.2 Meta-Goal Variant 2

The second meta-goal variant is the minimization of the maximal undesired goal deviation, mathematically modeled by equation (12.2). Meta-goal variant 2 identifies the largest normalized and weighted deviation amongst all the goals considered and ensures that this value is kept at a minimum. The meta-goal target value, Q2, conveys the decision maker's preference on the degree of fulfilment that every considered goal must achieve exactly or better. In terms of manufacturing decision making, this meta-goal is highly effective for analyzing a set of manufacturing operations whose particular performance parameters must be maintained at a critical level. For example, consider a manufacturer that has five different market regions for one of its products and requires a successful order-fulfilment target of 99% (0.99). In this example, if all market sectors are of equal priority, the wi are omitted from equation (12.2), and the target, Q2, takes the value of 0.01. A typical
output is depicted in Fig. 12.1(b), which shows all undesirable deviations falling below the meta-goal target.

max_{i=1,...,r} wi (di / Ti) ≤ Q2.    (12.2)

12.3.1.3 Meta-Goal Variant 3

The third meta-goal variant is to minimize the number of unattained goals, mathematically modeled by equations (12.3) and (12.4). Two new variables are introduced in the equations: yi is a binary variable that equals 0 when a goal is satisfied and 1 otherwise; Mi is an arbitrary number that is considerably larger than Ti, the target of the corresponding original goal, and acts as an operand that tests whether the goal is satisfied. To keep consistency between all meta-goal variants, the number of unattained goals is normalized by dividing it by the total number of decision goals in the original model. Inherently, the meta-goal target Q3 is simply the allowed percentage of unattained goals with respect to all the goals considered. With meta-goal variant 3, the actual degree of undesired deviation of any decision goal is irrelevant, since the goal is simply classified as attained or not. In manufacturing decision making, this meta-goal analysis is useful when all goals are of the same priority and a decision maker prefers to optimize the overall decision outcome by maximizing the number of fully attained goals. For example, when several customer orders are received within a particular time window and every customer is equally valuable, the manufacturer achieves optimal customer satisfaction if the maximum number of customer orders can be fulfilled over the next production cycle. Figure 12.1(c) indicates that a goal can be either attained (yi = 0) or unattained (yi = 1). Compared with the model proposed by Caballero et al. [8], equation (12.3) is slightly modified by the introduction of a lower bound parameter, −Mi. During our experiments, we found that this lower bound parameter is necessary to guarantee that yi takes the value 0 when a goal is achieved (when di is 0).

−Mi < di − Mi yi ≤ 0, i = 1, ..., r, yi ∈ {0, 1}    (12.3)

(∑_{i=1}^{r} yi) / r ≤ Q3    (12.4)
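As a quick illustration of how these three variants summarize a candidate solution, the short Python sketch below evaluates them for a given vector of undesired deviations, targets, and weights. It only evaluates the meta-goal values of a known solution (the function name and sample numbers are illustrative); in the optimization model itself the variants appear as constraints, with the binary yi variables and the large Mi parameters described above.

def meta_goal_values(deviations, targets, weights=None):
    """Evaluate meta-goal variants 1-3 for a candidate solution.

    deviations[i] : undesired deviation d_i of goal i
    targets[i]    : goal target T_i (used for normalization)
    weights[i]    : relative weight w_i (defaults to 1.0)
    """
    r = len(deviations)
    weights = weights or [1.0] * r
    normalized = [w * d / t for w, d, t in zip(weights, deviations, targets)]
    mgv1 = sum(normalized)                      # aggregated undesired deviation, Eq. (12.1)
    mgv2 = max(normalized)                      # maximal undesired deviation, Eq. (12.2)
    mgv3 = sum(d > 0 for d in deviations) / r   # fraction of unattained goals, Eq. (12.4)
    return mgv1, mgv2, mgv3

# Example: three goals with targets 100, 200, 50 and deviations 10, 0, 5.
print(meta_goal_values([10, 0, 5], [100, 200, 50]))  # (0.2, 0.1, 0.666...)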
12.3.1.4 Other Meta-Goal Variants

The three meta-goal variants discussed in the previous subsections provide the fundamental tools allowing decision makers to analyze decision alternatives at an abstract level that is easier to understand. These tools effectively improve the accuracy of the decision outcome. Our study has convinced us that other meta-goal variants should be developed to further enrich the analytical features of the approach. One
such example is a meta-goal that aims to minimize the differences between all the undesired goal deviations under consideration. This analysis is especially important in our work, as collaborative manufacturing often requires operational commitments, profits, risks, and other performance attributes to be shared as equally as possible among all the business partners of the CMN. The meta-goal variant must capture the sacrifice in performance of some goals for the improvement of others while maintaining a balance in performance across all functional units. Figure 12.1(d) depicts an expected outcome for this type of meta-goal.
12.3.2 Interactive Process

In the study of computer-supported decision making, we define the interactive process as a communication algorithm between the decision makers and the problem model. The core objectives of the process are to accurately construct and refine the decision model, efficiently explore the solution space, and eventually discover the best possible decision outcome as desired by the decision makers. Three types of interactive processes are discussed in the following subsections.
12.3.2.1 Decision Goal Function Proposal

To ensure that an accurate meta-goal model is built for the decision problem under consideration, all entities affected by the decision are given the opportunity to analyze the decision problem definition using their local expertise and thus to nominate their desired local goal(s). Considering all aspects of the manufacturing solution, the nominated local goals would be large in number and would vary considerably. Nevertheless, the varying goals can be largely divided into two contradictory classes: minimization of local resource utilization and maximization of local output and service performance. Participating entities usually compete for scarce resources while attaining rewarding benefits at the same time. Thus, all goals must be concisely represented in a uniform fashion and then collectively analyzed by the meta-goal variants in order to determine the optimal balance preferred by the decision makers. In the IMGP model, every decision goal is represented by a unique linear goal function. Every technology coefficient of a goal function represents the degree of contribution that a decision variable makes toward the corresponding decision goal. The coefficient may reflect the utilization of a certain resource, the production output, the service quality, or another manufacturing performance measurement associated with the corresponding decision variable. Correspondingly, the right-hand side of the equation represents the manufacturing performance target desired by the decision makers. Finally, maximum allowable deviations for goal functions and other resource capacities are usually represented using hard constraint equations. To mathematically model IMGP-based decision problems from the proposed decision objectives, the following notation is introduced to represent the parameters
and variables involved in the model. It is assumed that the decision objectives are classified into prioritized clusters, which implies that goals in a higher priority level must be satisfied before the goals in a lower level.

a_{i,j}^l and a_{i,j}^ρ: Technology coefficient of the jth decision variable in the ith goal function of the lth and ρth priority level, respectively
b_{k,j}: Technology coefficient of the jth decision variable in the kth hard constraint
B^l and B^ρ: Constraining variable for meta-goal variant 4 in the lth and ρth priority level, respectively; they represent the minimum goal deviation
C_k: Constraint value of the kth hard constraint
d_i^l and d_i^ρ: Undesired goal deviations of the ith goal function in the lth and ρth priority level, respectively
D^l and D^ρ: Constraining variable for meta-goal variant 2 in the lth and ρth priority level, respectively; they represent the maximum goal deviation
i: Goal function index for every existing priority level, i = 1, ..., r^l and i = 1, ..., r^ρ
j: Decision variable index, j = 1, ..., s
k: Hard constraint index, k = 1, ..., t
l: lth highest priority level, the index of the current sub-problem of the overall meta-goal programming decision problem
M_i^l and M_i^ρ: Constraining variable for meta-goal variant 3 in the lth and ρth priority level, respectively; their purpose in the model is discussed in Section 12.3.1
n_i^l and n_i^ρ: Slack goal deviations of the ith goal function in the lth and ρth priority level, respectively
p_i^l and p_i^ρ: Surplus goal deviations of the ith goal function in the lth and ρth priority level, respectively
Q_{1,...,4}^l and Q_{1,...,4}^ρ: Target value of meta-goal variant 1, 2, 3, or 4 in the lth and ρth priority level, respectively
r^l and r^ρ: Total number of goal functions in the lth and ρth priority level, respectively
R_{1,...,4}^ρ: Undesirable deviations for meta-goal variant 1, 2, 3, or 4 accepted by decision makers in a higher level ρ
s: Total number of decision variables
t: Total number of hard constraints
T_i^l and T_i^ρ: Target value of the ith goal function in the lth and ρth priority level, respectively
w_i^l and w_i^ρ: Relative weighting factors assigned to the undesired goal deviations of the ith goal function in the lth and ρth priority level, respectively
x_j: jth decision variable
y_i^l and y_i^ρ: Constraining variable for meta-goal variant 3 in the lth and ρth priority level, respectively; their purpose in the model is discussed in Section 12.3.1
α_{1,...,4}^l and α_{1,...,4}^ρ: Slack deviation of meta-goal variant 1, 2, 3, or 4 in the lth and ρth priority level, respectively
β_{1,...,4}^l and β_{1,...,4}^ρ: Surplus deviation of meta-goal variant 1, 2, 3, or 4 in the lth and ρth priority level, respectively
δ_{1,...,4}^l and δ_{1,...,4}^ρ: Undesirable deviation of meta-goal variant 1, 2, 3, or 4 in the lth and ρth priority level, respectively
µ_{1,...,4}^l and µ_{1,...,4}^ρ: Relative weighting factors assigned to the undesired deviations of meta-goal variant 1, 2, 3, or 4 in the lth and ρth priority level, respectively; within a priority level, the weights of all meta-goal variants must add up to 1, and a weighting value of 0 indicates that the corresponding meta-goal is not considered
ρ: Priority level index for all levels of higher priority than the lth level, ρ = 1, ..., l − 1
Equations (12.5) to (12.22) represent the IMGP decision problem model for priority level l. To solve the entire model, priority levels are analyzed one at a time, from the highest priority to the lowest. Meta-goal solutions obtained from all levels higher than l are converted into hard constraints during the analysis of level l. This ensures that higher priority goals are achieved before the lower ones.

Minimize: Z = µ_1^l δ_1^l + µ_2^l δ_2^l + µ_3^l δ_3^l    (12.5)

subject to

Hard constraints:
∑_{j=1}^{s} b_{k,j} x_j ≤ C_k; for k = 1, ..., t    (12.6)

Original goals in level l:
∑_{j=1}^{s} a_{i,j}^l x_j + n_i^l − p_i^l = T_i^l; for i = 1, ..., r^l    (12.7)

Meta-goal variant 1 used in level l:
∑_{i=1}^{r^l} w_i^l (d_i^l / T_i^l) + α_1^l − β_1^l = Q_1^l    (12.8)

Meta-goal variant 2 used in level l:
w_i^l (d_i^l / T_i^l) − D^l ≤ 0; for i = 1, ..., r^l    (12.9)
D^l + α_2^l − β_2^l = Q_2^l    (12.10)

Meta-goal variant 3 used in level l:
−M_i^l < d_i^l − M_i^l y_i^l ≤ 0; for i = 1, ..., r^l    (12.11)
(∑_{i=1}^{r^l} y_i^l) / r^l + α_3^l − β_3^l = Q_3^l    (12.12)

Original goals in all higher priority levels ρ = 1, ..., l − 1:
∑_{j=1}^{s} a_{i,j}^ρ x_j + n_i^ρ − p_i^ρ = T_i^ρ; for i = 1, ..., r^ρ; ρ = 1, ..., l − 1    (12.13)

Meta-goal variant 1 achievements in higher priority levels ρ = 1, ..., l − 1:
∑_{i=1}^{r^ρ} w_i^ρ (d_i^ρ / T_i^ρ) + α_1^ρ − β_1^ρ = Q_1^ρ    (12.14)
δ_1^ρ = R_1^ρ; for ρ = 1, ..., l − 1, and if µ_1^ρ > 0    (12.15)

Meta-goal variant 2 achievements in higher priority levels ρ = 1, ..., l − 1:
w_i^ρ (d_i^ρ / T_i^ρ) − D^ρ ≤ 0; for i = 1, ..., r^ρ    (12.16)
D^ρ + α_2^ρ − β_2^ρ = Q_2^ρ    (12.17)
δ_2^ρ = R_2^ρ; for ρ = 1, ..., l − 1, and if µ_2^ρ > 0    (12.18)

Meta-goal variant 3 achievements in higher priority levels ρ = 1, ..., l − 1:
−M_i^ρ < d_i^ρ − M_i^ρ y_i^ρ ≤ 0; for i = 1, ..., r^ρ    (12.19)
(∑_{i=1}^{r^ρ} y_i^ρ) / r^ρ + α_3^ρ − β_3^ρ = Q_3^ρ    (12.20)
δ_3^ρ = R_3^ρ; for ρ = 1, ..., l − 1, and if µ_3^ρ > 0    (12.21)

Variable constraints:
x_j, n_i^l, p_i^l, α_{1,2,3}^l, α_{1,2,3}^ρ, β_{1,2,3}^l, β_{1,2,3}^ρ, D^l, D^ρ, A^l, A^ρ, B^l, B^ρ ≥ 0;
0 ≤ µ_{1,2,3}^l, µ_{1,2,3}^ρ, w_i^l, w_i^ρ ≤ 1;  y_i^l, y_i^ρ ∈ {0, 1}    (12.22)
12.3.2.2 Iterative Decision-Maker Preference Entry

This subsection discusses the interactive process involved in the evaluation of the IMGP decision model introduced in the previous subsection. The interactive process is not entirely formal or scientific; rather, the intuition that a decision maker has about a decision problem and his/her ability to make the correct trade-offs and judgments when interacting with the model are of greater importance. In Gardiner and Steuer's work [9], a unified interactive process algorithm was proposed for GP. This algorithm is modified here for the purpose of solving the meta-goal decision problems that our study considers. The modified algorithm is explained below:

1. Establish initial weighting factors: From our literature review, we have identified the analytic hierarchy process (AHP) as an effective method for deriving numerical representations of the relative weights of all original decision goals considered [10]. The step-by-step AHP algorithm is not described here, as it has been very well established by Saaty [10], the pioneer of AHP.

2. Construct the initial criterion matrix: The initial criterion matrix provides decision makers with achievement-boundary information for all the existing meta-goals. Based on this information, decision makers can more accurately convey their trade-off preferences in order to make a coherent and knowledgeable selection of the target values and weighting factors for the meta-goals used. The model used to build the initial criterion matrix is modified from the IMGP model given by (12.5) to (12.22). The criterion matrix model considers all existing goals to be clustered in a single priority level. Inherently, equations (12.13) to (12.21) are omitted from the model, as they represent decision goals and meta-goal achievements belonging to higher levels. Further, equations (12.8), (12.10), and (12.12) are also omitted, since the target values of the meta-goals currently used are not yet determined. An alternative utility function, represented by equation (12.23), is used for the criterion matrix model. Finally, the model is solved repeatedly, each time optimizing the achievement of one meta-goal variant separately. This implies solving the model with µ1 in equation (12.23) equal to 1, then with µ2 equal to 1, and finally with µ3 equal to 1. The solutions of all scenarios are summarized to form the initial trade-off criterion matrix.

Z = µ_1^l ∑_{i=1}^{r^l} w_i^l (d_i^l / T_i^l) + µ_2^l D^l + µ_3^l ((∑_{i=1}^{r^l} y_i^l) / r^l) + µ_4^l (D^l − B^l).    (12.23)

3. Verify the initial solution: Present the initial criterion matrix to the decision maker for verification. The interactive algorithm terminates if the decision maker is satisfied with any of the solutions presented in the matrix. Otherwise, the algorithm moves to the next step for further analysis.

4. Establish priority levels: Depending on their contribution toward the overall performance of the decision, some objectives must be satisfied before others.
Fig. 12.2 Multilevel IMGP decision model evaluation algorithm: starting from the highest priority level l = 1, construct the lth-level sub-decision model (weighting factors, meta-goal functions, hard constraints, and, for l > 1, the goal and meta-goal constraint functions of all higher levels); solve it with the optimization engine; save the lth-level meta-goal solutions for use as constraints in lower levels; if a lower priority level exists, set l = l + 1 and repeat; otherwise present the meta-goal, original-goal, and decision-variable results for the entire decision model
In this step, the decision maker classifies all existing objectives into subclusters, and each cluster is then assigned to a unique priority level. A cluster with a higher priority level must be satisfied first. If the algorithm reiterates from this step because a satisfactory result could not be found in the previous iteration, the current goal classification scheme and the corresponding priority scores can be refined to exploit better decision outcomes.

5. Evaluate the IMGP decision model: The algorithm for evaluating an IMGP decision model with multiple prioritized levels of decision goals is depicted in Fig. 12.2 (a simplified sketch of this priority-level loop is given after this list). If desired by the decision maker, the relative importance of all objectives within their respective priority levels is analyzed again using AHP. Every instance of the decision model is solved using a Web-services-based meta-goal programming solver that the authors have developed. The underlying optimization engine used to build the meta-goal programming solver is Lindo API, a product developed by Lindo Systems Inc. [11].

6. Verify the current solution: If the current solution is successfully verified and accepted by the decision makers, the solution is confirmed, and the decision analysis process ends. Otherwise, the current solution is added to the criterion matrix, and the process advances to the next iteration from step 4.
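The sketch below outlines the priority-level loop of Fig. 12.2 in Python. It is schematic rather than the authors' implementation: solve_level stands in for whatever routine builds and solves the single-level meta-goal model (the chapter uses a Web-services solver built on Lindo API), and the field names are illustrative.

def evaluate_multilevel(levels, solve_level):
    """Evaluate a multilevel IMGP model one priority level at a time,
    from the highest priority level to the lowest (cf. Fig. 12.2)."""
    fixed = []       # meta-goal achievements of higher levels, reused as constraints
    solution = None
    for level in levels:
        # Solve the current level subject to the achievements already fixed above it
        # (these play the role of the R values in Eqs. (12.15), (12.18), and (12.21)).
        solution = solve_level(level, higher_level_constraints=list(fixed))
        fixed.append((level, solution["meta_goal_deviations"]))
    return solution, fixed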
Fig. 12.3 Delphi process for collaborative manufacturing: a Delphi instance is created and a coordinator nominated; the group decision-making problem and its objectives are defined; the decision-making individuals concerned within the CMN nominate desirable or feasible local goals with supporting comments, conduct arguments, and provide resolutions; the nominated goals are evaluated and their importance and validity cross-verified; the cycle repeats until the coordinator is satisfied with the outcome, after which the group view on the goals is obtained and the Delphi process ends
12.3.2.3 Networked Delphi Process

For simplicity, our discussion of the interactive process has so far illustrated the decision problem with only a single decision maker's interaction. However, collaborative manufacturing requires input from multiple decision makers across different entities of the CMN, whose perspectives usually differ because of their ambition to pursue maximum performance for the manufacturing processes that they manage. These differences must be efficiently identified, examined, and discussed in order to establish an overall decision model that is most widely accepted by the participants in the collaborative decision-making process. From our literature review, we have identified the Delphi method [12] as a suitable approach to address this need. Figure 12.3 depicts a Delphi-process-based group decision-making approach suitable for our study.
12.3.3 Interactive Meta-Goal Programming-Based Decision Analysis Workflow

In Sections 12.3.1 and 12.3.2, we discussed the components that are required to set up an IMGP-based decision problem model. In this subsection, we show how these components are combined to form a complete IMGP-based decision analysis framework. The work process and information flow of this framework are described in Fig. 12.4.
Fig. 12.4 IMGP-based collaborative decision analysis workflow: (1) scan the collaborative manufacturing environment; (2) construct formal decision statements; (3) select decision alternatives and set the scenario; (4.1-4.n) each entity proposes its goal functions; (5) establish initial weighting factors; (6) evaluate the initial criterion matrix; (7) refine the aspiration criteria; (8) activate the networked Delphi process when required; (9) evaluate and summarize the decision model solutions; (10) implement the decision once the outcome is satisfactory; (11) collect performance feedback
Stage 1. Decision makers select important operational and performance information as environment and manufacturing parameters and assign a desired target to each parameter. A rule is then imposed on each parameter whereby a deviation from its target by a certain value triggers a decision-making process. It is important that the entire set of monitored parameters aggregates to convey the behaviour of the real manufacturing environment and, furthermore, that any real potential problem and/or opportunity can be detected from the observation of these parameters.

Stage 2. Upon the detection in Stage 1 that a decision is required, this stage generates a formal statement of the corresponding decision problem. The statement includes all the associated data and information required for analyzing the decision problem, the objectives of the CMN, the potential impact of the decision problem, the knowledge domains the decision problem belongs to, and who is responsible for the final outcome of the decision.

Stage 3. Based on the decision problem statement, the most expected scenario is defined, and a set of decision alternatives (continuous or discrete) is selected for consideration. This information and the formal statement of the decision problem are then forwarded to all the entities that are affected by the decision.

Stage 4. It is expected that every entity has the capability to analyze its local decision variables, parameters, and the information received from Stage 3 in order to propose unique local goals for the decision problem concerned. Each goal is expressed as a unique linear goal function that will be used to build the overall IMGP model. This process is elaborated in Section 12.3.2.1.

Stages 5 to 7. The decision maker in charge of the current decision problem collates all local objective functions and builds an initial meta-goal model for the
decision problem. Personal perspectives on the objective priorities, variable weightings, and meta-goal targets are input into the model in order to explore potential solutions. The algorithm utilized to fulfill these steps is explained in Section 12.3.2.2.

Stage 8. If the decision maker fails to attain a satisfying solution, or if he/she requires verification of the entered preferences by others, a group decision-making process is activated. The Delphi method is introduced to facilitate such a process. The algorithm of this process is discussed in Section 12.3.2.3.

Stage 9. The best possible solution found so far for the decision problem model is confirmed and forwarded to the decision maker in charge of the decision for confirmation. If the decision maker is not satisfied, the algorithm reiterates from Stage 7 to further refine the existing model parameters and variables. Otherwise, the most satisfying solution is passed to Stage 10.

Stage 10. The decision outcome is confirmed and forwarded to all entities concerned so that all necessary resources are appropriately planned. Subsequently, manufacturing actions are performed in strict accordance with the decision outcome.

Stage 11. As the decision is being implemented, its performance is constantly monitored against the desired targets anticipated by the decision outcome. Any significant deviation from the anticipated performance triggers a responding decision-making process to address the underlying issues.
12.4 Example

In this example, which demonstrates the strength of the IMGP-based method, we use a hypothetical decision-making problem frequently encountered by a suture manufacturer. We consider the four internal functional units that are involved in the decision-making process: the Sales unit, which proposes production quota goals; the Finance unit, which proposes goals on business profits and production costs; the Operational Planning unit, which proposes goals on the production capacity; and the Scheduling unit, which proposes goals on the usage of the current machine group formation. Furthermore, both the needle and thread suppliers also participate in this process. The problem is to decide on the production quantities of four different types of suture for the next production cycle. The suture production steps consist of swaging (joining the thread to the needle), packaging, and quality checking. The production lead time for different types of suture varies, as finer needles are more difficult to handle than coarse ones. The available swaging machines are divided equally into two groups to accommodate the two different needle sizes. Although any machine can be set up to produce the other needle size, the production workers prefer not to do this, since setting up is a time-consuming process and the slightest mistake would destroy the costly die in the swaging mechanism. Suppose the following goals are proposed by the decision-making entities.
Production quota goals proposed by the Sales unit, measured in dozens:
x1 + n1 − p1 = 300    (12.24)
x2 + n2 − p2 = 350    (12.25)
x3 + n3 − p3 = 420    (12.26)
x4 + n4 − p4 = 325    (12.27)

Profit and production-cost goals proposed by the Finance unit:
58x1 + 51x2 + 47x3 + 41x4 + n5 − p5 = 70000    (12.28)
30x1 + 26x2 + 25x3 + 21x4 + n6 − p6 = 50000    (12.29)

Production capacity goals proposed by the Operational Planning unit:
0.2x1 + 0.18x2 + 0.16x3 + 0.15x4 + n7 − p7 = 350    (12.30)
0.05(x1 + x2) + 0.025(x3 + x4) + n8 − p8 = 70    (12.31)
0.025(x1 + x2 + x3 + x4) + n9 − p9 = 35    (12.32)

Machine group capacity goals proposed by the Scheduling unit:
0.2x1 + 0.18x2 + n10 − p10 = 175    (12.33)
0.16x3 + 0.15x4 + n11 − p11 = 175    (12.34)

Needle availability goals proposed by the Needle supplier:
12(x1 + x2) + n12 − p12 = 8000    (12.35)
12(x3 + x4) + n13 − p13 = 8000    (12.36)

Thread availability goals proposed by the Thread supplier:
12 × 0.4(x1 + x3) + n14 − p14 = 3000    (12.37)
12 × 0.4(x2 + x4) + n15 − p15 = 3000    (12.38)

Hard constraint: produce at least 1000 dozen sutures in total:
x1 + x2 + x3 + x4 ≥ 1000    (12.39)
Suppose that the decision makers have determined that the following deviation factors are undesirable: n1, n2, n3, n4, n5, p6, p7, p8, p9, p10, p11, p12, p13, p14, p15. The initial criterion trade-off matrix is evaluated first, and the result is summarized in Table 12.1; the best achievement for each meta-goal variant is obtained in the run that optimizes it.
Table 12.1 Initial trade-off criterion matrix

          x1     x2     x3     x4     MGV1   MGV2   MGV3
µ1 = 1    300    350    342    325    0.37   0.19   0.27
µ2 = 1    279    356    390    302    0.45   0.07   0.47
µ3 = 1    300    350    342    325    0.37   0.19   0.27

µ1 = 1 optimizes meta-goal variant 1, µ2 = 1 optimizes meta-goal variant 2, µ3 = 1 optimizes meta-goal variant 3; MGV1, MGV2, and MGV3 are the meta-goal variant 1, 2, and 3 results.
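As a rough illustration of how one row of such a criterion matrix can be produced with off-the-shelf tools, the sketch below formulates the single-level weighted GP (meta-goal variant 1) for the suture example as a linear program and solves it with scipy.optimize.linprog. It is not the authors' solver: equal weights are assumed because the AHP-derived weights are not reported in the chapter, so the resulting numbers need not match Table 12.1 exactly.

import numpy as np
from scipy.optimize import linprog

# Goal rows: (coefficients on x1..x4, target, undesired deviation: "n" = slack, "p" = surplus)
goals = [
    ([1, 0, 0, 0],                  300, "n"),   # (12.24) sales quota, suture 1
    ([0, 1, 0, 0],                  350, "n"),   # (12.25)
    ([0, 0, 1, 0],                  420, "n"),   # (12.26)
    ([0, 0, 0, 1],                  325, "n"),   # (12.27)
    ([58, 51, 47, 41],            70000, "n"),   # (12.28) profit
    ([30, 26, 25, 21],            50000, "p"),   # (12.29) production cost
    ([0.2, 0.18, 0.16, 0.15],       350, "p"),   # (12.30) capacity
    ([0.05, 0.05, 0.025, 0.025],     70, "p"),   # (12.31)
    ([0.025, 0.025, 0.025, 0.025],   35, "p"),   # (12.32)
    ([0.2, 0.18, 0, 0],             175, "p"),   # (12.33) machine group 1
    ([0, 0, 0.16, 0.15],            175, "p"),   # (12.34) machine group 2
    ([12, 12, 0, 0],               8000, "p"),   # (12.35) needles
    ([0, 0, 12, 12],               8000, "p"),   # (12.36)
    ([4.8, 0, 4.8, 0],             3000, "p"),   # (12.37) thread (12 x 0.4)
    ([0, 4.8, 0, 4.8],             3000, "p"),   # (12.38)
]

r = len(goals)                      # 15 goals
nvar = 4 + 2 * r                    # x1..x4, n1..n15, p1..p15
c = np.zeros(nvar)
A_eq = np.zeros((r, nvar)); b_eq = np.zeros(r)
for i, (coef, target, bad) in enumerate(goals):
    A_eq[i, :4] = coef
    A_eq[i, 4 + i] = 1.0            # slack n_i
    A_eq[i, 4 + r + i] = -1.0       # surplus p_i
    b_eq[i] = target
    # Meta-goal variant 1 objective: equal weights, deviations normalized by targets.
    c[(4 + i) if bad == "n" else (4 + r + i)] = 1.0 / target

A_ub = np.zeros((1, nvar)); A_ub[0, :4] = -1.0; b_ub = [-1000]   # x1+...+x4 >= 1000, (12.39)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * nvar)
print(res.x[:4], res.fun)           # production quantities and aggregated deviation (MGV1)

The other rows of the matrix would be obtained analogously, e.g., by minimizing the largest normalized deviation (variant 2) or the number of unattained goals (variant 3, which requires binary variables and a mixed-integer solver).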
Decision makers can continue their analysis by categorizing all the decision goals into prioritized subclusters to form a new multilevel IMGP decision model, summarized in the following points. The result of this model is given in Table 12.2. Compared with the trade-off matrix, it is expected that the solution can potentially be improved by making appropriate adjustments to the meta-goal targets or the priority cluster scheme.

• Priority level 1: Goals 1, 2, 3, 4, 5, and 6, analyzed by meta-goal variant 1, with a meta-goal target of 0.4.
• Priority level 2: Goals 7, 8, 9, 10, and 11, analyzed by meta-goal variant 2, with a meta-goal target of 0.1.
• Priority level 3: Goals 12, 13, 14, and 15, analyzed by meta-goal variant 2, with a meta-goal target of 0.15.
12.5 Conclusion

This chapter introduced an IMGP-based decision analysis method. This method allows individual decision makers first to propose goals for the problem for their own benefit and then to cross-verify the goals using a Delphi process before they are
Table 12.2 Multilevel prioritized solution

Decision variables: x1 = 297.77, x2 = 314.43, x3 = 341.67, x4 = 325

Priority level 1 (MGV 1, AA 0.4, target ≤0.4):
  OG   AA      T
  1    300     ≥300
  2    314     ≥350
  3    339     ≥420
  4    325     ≥325
  5    62678   ≥70k
  6    32467   ≤50k

Priority level 2 (MGV 2, AA 0, target 0.1):
  OG   AA      T
  7    220     ≤350
  8    47      ≤70
  9    32      ≤35
  10   117     ≤175
  11   103     ≤175

Priority level 3 (MGV 2, AA 0.02, target 0.15):
  OG   AA      T
  12   7346    ≤8k
  13   8000    ≤8k
  14   3069    ≤3k
  15   3069    ≤3k

MGV meta-goal variant, OG original goal, AA actual achievement, T target
analyzed by the IMGP model to obtain the final solution. Our work has determined that this method is suitable for handling distributed decision making such as that presented in collaborative manufacturing, and this suitability was verified using an example. Future work is to develop a Web-based software system that allows decision makers to utilize the method irrespective of their physical location and to engage in the collaborative decision-making process through automated procedures.
References

1. M. Danilovic and M. Winroth (2005) A tentative framework for analysing integration in collaborative manufacturing network settings: A case study. Journal of Engineering and Technology Management, 22(1–2): 141–158.
2. N. Bilbao, D. del Pozo, D.J.M. Lopez, and I. Etxaniz (2004) The collaborative manufacturing approach. In IEEE International Conference on Industrial Informatics. IEEE.
3. H.A. Simon (1977) The new science of management decision. Prentice Hall, Upper Saddle River, NJ.
4. E. Turban, J.E. Aronson, and T.P. Liang (2005) Decision support systems and intelligent systems, 7th edition. Pearson Education, Upper Saddle River, NJ.
5. B. Aouni and O. Kettani (2001) Goal programming model: A glorious history and a promising future. European Journal of Operational Research, 133(2): 225–231.
6. A. Charnes and W.W. Cooper (1961) Management models and industrial applications of linear programming. Wiley, New York.
7. M.J. Schniederjans (1995) Goal programming: Methodology and applications. Kluwer Academic Publishers Group, The Netherlands.
8. R. Caballero, F. Ruiz, M.V.R. Uria, and C. Romero (2006) Interactive meta-goal programming. European Journal of Operational Research, 175(1): 135–154.
9. L.R. Gardiner and R.E. Steuer (1994) Unified interactive multiple objective programming. European Journal of Operational Research, 74(3): 391–406.
10. T.L. Saaty (1980) The analytic hierarchy process. McGraw-Hill, New York.
11. Lindo Systems Inc. (2006) Lindo API. http://www.lindo.com/, accessed April 28, 2006.
12. M. Turoff and S.R. Hiltz (1995) Computer based Delphi processes. In M. Adler and E. Ziglio (eds.), Gazing into the oracle: The Delphi method and its application to social policy and public health. Jessica Kingsley Publishers, London.
Chapter 13
Nonlinear Programming Based on Particle Swarm Optimization

Takeshi Matsui, Kosuke Kato, Masatoshi Sakawa, Takeshi Uno, and Kenji Morihara
13.1 Introduction

Practical optimization problems, in which a solution optimizing a certain objective function under given constraints is sought, are often formulated as nonlinear programming problems. If both the objective function and the constrained region are convex, the problem is called a convex programming problem. For convex programming problems, efficient optimization methods such as the sequential quadratic programming method, the generalized reduced gradient method, and so forth have been proposed. On the other hand, no efficient optimization method has been established for nonconvex nonlinear programming problems. In recent years, metaheuristics such as simulated annealing and genetic algorithms have drawn considerable attention. For example, RGENOCOP V, a floating-point genetic algorithm that introduces updating of the base-point solution of the homomorphous mapping used to generate the initial population, is proposed in [1], where its effectiveness is shown. However, since optimization problems in the real world are becoming larger and more complicated, a high-speed and accurate optimization method is desired. The particle swarm optimization (PSO) method was proposed by Kennedy et al. [2] and has attracted considerable attention as one of the promising optimization methods with higher speed and higher accuracy than existing solution methods. The original PSO method, however, has the drawbacks that it is not directly applicable to constrained problems and that it is liable to stop around local optimal solutions. In this research, to deal with these drawbacks of the original PSO method, we incorporate the bisection method and a homomorphous mapping to carry out the search while taking the constraints into account. In addition, we propose the multiple stretching technique and modified move schemes of particles to restrain the stopping around local optimal solutions.

Takeshi Matsui, Kosuke Kato, Masatoshi Sakawa, Takeshi Uno, and Kenji Morihara
Graduate School of Engineering, Hiroshima University
13.2 Nonlinear Programming Problem

In this research, we consider general nonlinear programming problems with constraints formulated as

Minimize f(x)
subject to gi(x) ≤ 0, i = 1, ..., m,
           lj ≤ xj ≤ uj, j = 1, ..., n,
           x = (x1, ..., xn)^T ∈ R^n,    (13.1)

where f(·) and gi(·) are convex or nonconvex real-valued functions, and lj and uj are the lower and upper bounds of each decision variable xj, respectively. The feasible region of (13.1) is denoted by X.
13.3 Particle Swarm Optimization

Particle swarm optimization [2] is based on the social behavior that a population of individuals adapts to its environment by returning to promising regions that were previously discovered [3]. This adaptation to the environment is a stochastic process that depends on both the memory of each individual, called a particle, and the knowledge gained by the population, called the swarm. In the numerical implementation of this simplified social model, each particle has four attributes: the position vector in the search space, the velocity vector, the best position in its track, and the best position of the swarm. The process can be outlined as follows:

Step 1. Generate the initial swarm of N particles at random.
Step 2. Calculate the new velocity vector of each particle, based on its attributes.
Step 3. Calculate the new position of each particle from its current position and its new velocity vector.
Step 4. If the termination condition is satisfied, stop. Otherwise, go to Step 2.
To be more specific, the new velocity vector of the ith particle at time t + 1, v_i^{t+1}, is calculated by the following scheme introduced by Shi and Eberhart [4]:

v_i^{t+1} := ω^t v_i^t + c1 R_1^t (p_i^t − x_i^t) + c2 R_2^t (p_g^t − x_i^t).    (13.2)

In (13.2), R_1^t and R_2^t are random numbers between 0 and 1, p_i^t is the best position of the ith particle in its track, and p_g^t is the best position of the swarm. There are three problem-dependent parameters: the inertia of the particle ω^t and two trust parameters c1, c2. Then, the new position of the ith particle at time t + 1, x_i^{t+1}, is calculated from (13.3):

x_i^{t+1} := x_i^t + v_i^{t+1},    (13.3)

where x_i^t is the current position of the ith particle at time t. The ith particle calculates the next search direction vector v_i^{t+1} by (13.2) in consideration of the current search direction vector v_i^t, the direction vector going from the current search position x_i^t to the best position in its track p_i^t, and the direction vector going from the current search position x_i^t to the best position of the swarm p_g^t, and it moves from the current position x_i^t to the next search position x_i^{t+1} calculated by (13.3). The parameter ω^t controls the amount of the move by searching globally in the early stage and searching locally by decreasing ω^t gradually. It is defined by

ω^t := ω^0 − t(ω^0 − ω^{Tmax}) / (0.75 Tmax),    (13.4)

where Tmax is the maximum number of iterations, ω^0 is the initial value, and ω^{Tmax} is the final value. The searching procedure of PSO is shown in Fig. 13.1.

Fig. 13.1 Movement of a particle in PSO

Comparing the evaluation value of a particle after the move, f(x_i^{t+1}), with that of the best position in its track, f(p_i^t): if f(x_i^{t+1}) is better than f(p_i^t), then the best position in its track is updated as p_i^{t+1} := x_i^{t+1}. Furthermore, if f(p_i^{t+1}) is better than f(p_g^t), then the best position in the swarm is updated as p_g^{t+1} := p_i^{t+1}. Such simple PSO methods [2, 4] include two problems. One is that particles concentrate on the best search position of the swarm and cannot easily escape from a local optimal solution, since the search direction vector v_i^{t+1} calculated by (13.2) always includes the direction vector to the best search position of the swarm. The other is that a particle after a move is not always feasible for problems with constraints.
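To make the update rules concrete, the following is a minimal sketch of the simple (unconstrained, box-bounded) PSO described by Eqs. (13.2) to (13.4). It is not the authors' code; the parameter values and the clipping of positions to the box are illustrative choices.

import numpy as np

def pso(f, lower, upper, n_particles=30, t_max=200, c1=2.0, c2=2.0, w0=0.9, w_end=0.4):
    """Minimal PSO for box-constrained minimization, following Eqs. (13.2)-(13.4)."""
    rng = np.random.default_rng(0)
    dim = len(lower)
    x = rng.uniform(lower, upper, size=(n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                         # velocities
    p = x.copy()                                             # personal best positions p_i
    p_val = np.array([f(xi) for xi in x])
    g = p[np.argmin(p_val)].copy()                           # swarm best position p_g

    for t in range(t_max):
        w = w0 - t * (w0 - w_end) / (0.75 * t_max)           # inertia schedule, Eq. (13.4)
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)    # Eq. (13.2)
        x = np.clip(x + v, lower, upper)                     # Eq. (13.3), kept inside the box
        vals = np.array([f(xi) for xi in x])
        better = vals < p_val                                # update personal bests
        p[better], p_val[better] = x[better], vals[better]
        g = p[np.argmin(p_val)].copy()                       # update swarm best
    return g, f(g)

# Example: minimize the sphere function on [-5, 5]^2.
best, best_val = pso(lambda z: float(np.sum(z ** 2)), np.array([-5.0, -5.0]), np.array([5.0, 5.0]))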
13.4 Improvement of Particle Swarm Optimization

In this study, to prevent the concentration and stopping of particles around local optimal solutions in the simple PSO, we introduce modified move schemes for particles, secession, and the multiple stretching technique. In addition, in order to treat constraints, we divide the swarm into two subswarms. In one subswarm, the move of a particle into the infeasible region is not accepted: if a particle becomes infeasible after a move, it is repaired to be feasible. In the other subswarm, the move of a particle into the infeasible region is accepted.
Fig. 13.2 A homomorphous mapping T between the feasible region X, with base point r, and the n-dimensional hypercube [−1, 1]^n (T maps x ∈ X to y; T^{−1} is its inverse)
13.4.1 Generation of Initial Search Positions of Particles

Since it is assumed that the simple PSO method mentioned above is applied to nonlinear programming problems without constraints other than the upper and lower bound constraints on each decision variable, feasible initial search positions can easily be obtained by generating them in U, the region where all of the upper and lower bound constraints are satisfied. In this paper, however, our aim is to propose a new solution method for constrained nonlinear programming problems with general constraints as well as the upper and lower bound constraints. For such constrained problems, the generation of initial search positions of particles in U may be inefficient because it does not always give feasible initial search positions in X. Therefore, the homomorphous mapping proposed by Koziel and Michalewicz [5] is adopted so that all initial search positions are feasible. Koziel and Michalewicz [5] proposed a mapping T from the feasible region X ⊂ R^n to the n-dimensional hypercube [−1, 1]^n with the following properties (Fig. 13.2):

1. It maps some point r, called the base point, in X to the origin 0 of the n-dimensional hypercube [−1, 1]^n:

   T : r → 0.    (13.5)

2. It maps any point x ∈ X to a point y in the n-dimensional hypercube [−1, 1]^n as

   T : x → y := (x − r) / (t_max · max_{j=1,...,n}(x_j − r_j)),    (13.6)
where t_max is a positive real number such that r + (x − r) · t_max lies on the boundary of X. In this paper, we map N points generated randomly in the n-dimensional hypercube [−1, 1]^n to the feasible region X by the inverse T^{-1} of the homomorphous mapping T to obtain feasible initial search positions.
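The sketch below shows one hedged way to realize this idea: random points in [−1, 1]^n are decoded into feasible points by a bisection line search from the base point r. The names r, is_feasible, lo, and hi are assumptions of this sketch, and the decoding is only an approximation of the exact Koziel–Michalewicz inverse T^{-1} (it is exact when the segment from r to the decoded point stays feasible, as for convex X).

```python
import numpy as np

def decode_from_cube(y, r, is_feasible, lo, hi, n_bisect=40):
    """Approximately map a cube point y in [-1, 1]^n to the feasible region X.

    r           : feasible base point (the point mapped to the cube origin)
    is_feasible : callable returning True iff a point satisfies all constraints
    lo, hi      : arrays with the lower/upper bounds of the decision variables
    """
    y = np.asarray(y, dtype=float)
    if not np.any(y):                       # the cube origin decodes back to r
        return r.copy()
    d = y * (hi - lo) / 2.0                 # direction from r implied by y
    # Bisection for the largest step t such that r + t*d is still feasible.
    t_lo, t_hi = 0.0, 1.0
    for _ in range(n_bisect):
        t_mid = 0.5 * (t_lo + t_hi)
        if is_feasible(np.clip(r + t_mid * d, lo, hi)):
            t_lo = t_mid
        else:
            t_hi = t_mid
    # Points near the cube boundary (max|y_j| close to 1) land near the boundary of X.
    return np.clip(r + np.max(np.abs(y)) * t_lo * d, lo, hi)

def feasible_initial_positions(N, r, is_feasible, lo, hi, seed=0):
    """Generate N feasible starting positions from random points in [-1, 1]^n."""
    rng = np.random.default_rng(seed)
    cube = rng.uniform(-1.0, 1.0, size=(N, len(lo)))
    return np.array([decode_from_cube(y, r, is_feasible, lo, hi) for y in cube])
```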
13.4.2 Modified Move Schemes of a Particle

Let us consider the move from the current search position x_i^t of the ith particle. First, if the previous search position x_i^{t−1} is the best position of the particle in its track p_i^t, the next search position x_i^{t+1} is, with high probability, situated near the best position in the swarm p_g^t. Thereby, as shown in Fig. 13.3, we change (13.2) to determine the next search direction v_i^{t+1} as follows:

v_i^{t+1} := c_1 R_1^t (p_i^t − x_i^t) + c_2 R_2^t (p_k^t − x_i^t).    (13.7)

Fig. 13.3 The new search direction when the best search point of a particle is renewed at the previous search point
By this change of the search direction determination scheme, we can relax the concentration of particles toward p_g^t. Next, in the case that the current search position x_i^t is the best position of the particle in its track p_i^t, the direction toward the current search position seems good. Thus, as in Fig. 13.4, we change (13.2) to determine the next search direction v_i^{t+1} as follows:

v_i^{t+1} := ω^t v_i^t.    (13.8)

Otherwise, we use (13.2) to determine the next search direction v_i^{t+1}.
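A minimal sketch of this direction selection is given below. How the particle k of (13.7) is chosen is not specified in the excerpt above, so p_k is passed in as an argument; c1 and c2 keep their standard PSO roles, and the values are assumptions.

```python
import numpy as np

def modified_direction(x_prev, x_cur, v_cur, p_best, p_k, g_best, w, c1=2.0, c2=2.0):
    """Choose the next direction vector according to the modified move schemes.

    p_k is the best position in the track of another particle k (cf. Fig. 13.3);
    the rule for picking k is left to the caller.
    """
    r1 = np.random.rand(*x_cur.shape)
    r2 = np.random.rand(*x_cur.shape)
    if np.array_equal(x_prev, p_best):
        # (13.7): the previous position became the personal best, so drop the
        # pull toward the swarm best and head toward another particle's best.
        return c1 * r1 * (p_best - x_cur) + c2 * r2 * (p_k - x_cur)
    if np.array_equal(x_cur, p_best):
        # (13.8): the current position is the personal best; keep the old direction.
        return w * v_cur
    # Otherwise fall back to the ordinary update (13.2).
    return w * v_cur + c1 * r1 * (p_best - x_cur) + c2 * r2 * (g_best - x_cur)
```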
Fig. 13.4 The new search direction when the best search point of a particle is renewed at the current search point

13.4.3 Division of the Swarm into Two Subswarms

In the application of PSO to optimization problems with constraints, a particle is not always feasible after a move even if we use the move schemes of a particle mentioned above. To deal with such a situation, we divide the swarm into two subswarms. In one subswarm, the move of a particle into the infeasible region is not accepted: if a particle becomes infeasible after a move, it is repaired to be feasible. To be more specific, for an infeasible particle that violates constraints after a move, its search position is repaired to be feasible by the bisection method applied along the direction from the search position before the move, x_i^t, to that after the move, x_i^{t+1}. In the other subswarm, the move of a particle into the infeasible region is accepted.

Fig. 13.5 The concentration of individuals around the best search point of the population
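A hedged sketch of this bisection repair follows; the is_feasible callable and the number of halvings are assumptions of the sketch.

```python
import numpy as np

def repair_by_bisection(x_old, x_new, is_feasible, n_iter=40):
    """Pull an infeasible point x_new back toward the feasible point x_old.

    Bisection is carried out on the segment between the position before the
    move (feasible) and the position after the move (infeasible), returning
    the feasible point closest to x_new found within n_iter halvings.
    """
    lo, hi = 0.0, 1.0          # fractions along the segment x_old -> x_new
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if is_feasible(x_old + mid * (x_new - x_old)):
            lo = mid            # can move further toward x_new
        else:
            hi = mid            # overshot: move back toward x_old
    return x_old + lo * (x_new - x_old)
```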
13.4.4 Secession

In PSO, since particles tend to concentrate on the best position of the swarm as the search proceeds, as shown in Fig. 13.5, the global search becomes difficult. Thus, we introduce the following secession of a particle (Fig. 13.6); a sketch of these moves is given after the figure.

1. Secession I: A particle moves at random to a point in the feasible region.
2. Secession II: A particle moves at random to a point on the boundary of the feasible region.
3. Secession III: A particle moves at random to a point in a direction of some coordinate axis.
Fig. 13.6 The secession
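The following minimal sketch illustrates the three secession moves within box bounds lo and hi. The is_feasible callable, the retry limit, and the bisection depth are assumptions, and Secession II is only approximate (it returns an interior point when the randomly sampled endpoint happens to be feasible).

```python
import numpy as np

def secede(x, lo, hi, is_feasible, rng, kind="I", n_tries=1000):
    """One secession move (Section 13.4.4) for a particle at the feasible point x."""
    if kind == "II":
        # Approximate a random boundary point: walk from x toward a random box
        # point and bisect for the largest feasible step.
        d = rng.uniform(lo, hi) - x
        t_lo, t_hi = 0.0, 1.0
        for _ in range(40):
            mid = 0.5 * (t_lo + t_hi)
            if is_feasible(x + mid * d):
                t_lo = mid
            else:
                t_hi = mid
        return x + t_lo * d
    for _ in range(n_tries):
        if kind == "I":                     # random point in the feasible region
            cand = rng.uniform(lo, hi)
        else:                               # "III": random point along one coordinate axis
            cand = x.copy()
            j = rng.integers(len(x))
            cand[j] = rng.uniform(lo[j], hi[j])
        if is_feasible(cand):
            return cand
    return x                                # keep the old position if sampling fails
```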
13.4.5 Multiple Stretching Technique

A stretching technique to prevent particles from stopping around a local optimal solution in PSO was suggested by Parsopoulos et al. [6]. It enables particles to escape from the current local optimal solution and not to approach the same local optimal solution again by changing the original evaluation function f(x) to an extended evaluation function H(x) defined as

G(x) = f(x) + γ_1 ‖x − x̄‖ (sign(f(x) − f(x̄)) + 1),    (13.9)

H(x) = G(x) + γ_2 [sign(f(x) − f(x̄)) + 1] / tanh(µ(G(x) − G(x̄))),    (13.10)

where x̄ is the current local optimal solution and γ_1, γ_2, µ are parameters. In addition, the function sign(·) is defined as

sign(x) = 1 if x > 0;  0 if x = 0;  −1 if x < 0.    (13.11)

In the function G(x), the second term is a penalty which depends on the distance between a search position x and the current local optimal solution x̄. The term is equal to 0 for x whose objective function value f(x) is better than f(x̄), while it takes a value depending on the distance between x and x̄ for x whose f(x) is worse than f(x̄). Next, the function H(x) is defined using G(x) expressed in (13.9). The value of H(x) for a search position x is equal to the objective function value f(x) of x if f(x) is better than that of x̄, while it takes a very large value if f(x) is worse than that of x̄. Using H(x) as the new evaluation function, particles can escape from the current local optimal solution and search a new region that may include better solutions than the current local optimal solution. Although the stretching technique [6] enables particles to escape from the current local optimal solution, they may stop at the same local optimal solution again when we apply the stretching technique to the next local optimal solution. Thus we propose a multiple stretching technique corresponding to plural local optimal solutions. To be concrete, we consider the following function S(x) for m local optimal solutions x̄_k, k = 1, 2, . . . , m:

G_k(x) = f(x) + γ_1 ‖x − x̄_k‖ [sign(f(x) − f(x̄_min)) + 1],    (13.12)

H_k(x) = G_k(x) + γ_2 [sign(f(x) − f(x̄_min)) + 1] / tanh(µ(G_k(x) − G_k(x̄_k))),    (13.13)

S(x) = (1/m) Σ_{k=1}^{m} H_k(x).    (13.14)
Here, x̄_min is the best among the m local optimal solutions. The value of S(x) for a search position x is equal to the objective function value f(x) of x if f(x) is better than that of x̄_min, while it takes a very large value if the distance between x and the nearest local optimal solution is less than a certain value. Otherwise, it takes a value depending on the distance. For example, consider the problem with many local optimal solutions shown in Fig. 13.7. S(x) in the multiple stretching technique is shown in Fig. 13.8; in this case, two local optimal solutions, (−0.7, 1.4) and (−0.4, −0.8), were provided by search.

Fig. 13.7 Problem with many local optimal solutions (Levy no. 5)
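A hedged sketch of the composite evaluation function S(x) of (13.12)–(13.14) follows; the parameter values γ_1, γ_2, and µ are illustrative assumptions, not the settings used in the experiments.

```python
import numpy as np

def multiple_stretching(f, local_optima, gamma1=1e4, gamma2=1.0, mu=1e-10):
    """Build the evaluation function S(x) of (13.12)-(13.14).

    local_optima : list of local optimal solutions x_bar_k found so far.
    """
    x_min = min(local_optima, key=f)               # best local optimum found so far

    def S(x):
        x = np.asarray(x, dtype=float)
        step = np.sign(f(x) - f(x_min)) + 1.0      # 0 if x is better than x_bar_min, else 2
        H_sum = 0.0
        for xk in local_optima:
            Gk = f(x) + gamma1 * np.linalg.norm(x - xk) * step          # (13.12)
            if step == 0.0:
                Hk = Gk                                                  # H_k = G_k = f(x)
            else:
                # Note G_k(x_bar_k) = f(x_bar_k); guard against a zero denominator.
                denom = np.tanh(mu * (Gk - f(xk)))
                Hk = Gk + gamma2 * step / (denom if denom != 0.0 else 1e-300)   # (13.13)
            H_sum += Hk
        return H_sum / len(local_optima)                                 # (13.14)

    return S
```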
13.4.6 The Procedure of Revised PSO

The procedure of the revised PSO proposed in this paper is summarized as follows:

Step 1. Find a feasible solution by PSO in consideration of the degree of violation of constraints, and use it as the base point r of the homomorphous mapping. Let t := 0, and go to Step 2.
Step 2. Generate feasible initial search positions based on the homomorphous mapping. To be more specific, map N points generated randomly in the n-dimensional hypercube [−1, 1]^n to the feasible region X using the homomorphous mapping, and let these points in X be the initial search positions x_i^0, i = 1, . . . , N. In addition, let the initial search position of each particle, x_i^0, be the initial best position of the particle in its track, p_i^0, and let the best position among x_i^0, i = 1, . . . , N, be the initial best position of the swarm, p_g^0. Go to Step 3.
Fig. 13.8 S(x) in the multiple stretching technique
Step 3. Calculate the value of ω^t by (13.4). For each particle, using the information of p_i^t and p_g^t, determine the direction vector v_i^{t+1} to the next search position x_i^{t+1} by the modified move schemes explained in Section 13.4.2. Next, move the particle to the next search position by (13.3) and go to Step 4.
Step 4. Check whether the current search position x_i^{t+1} of each particle in the subswarm with repair based on the bisection method is feasible. If not, repair it to be feasible using the bisection method, and go to Step 5.
Step 5. Determine whether the multiple stretching technique is applied. If it is applied, go to Step 6. Otherwise, go to Step 7.
Step 6. Apply the multiple stretching technique, i.e., evaluate each particle by the value of S(·) for x_i^{t+1}, i = 1, . . . , N.
Step 7. Evaluate each particle by the value of the objective function f(·) for x_i^{t+1}, i = 1, . . . , N. Go to Step 8.
Step 8. If the evaluation function value S(x_i^{t+1}) or f(x_i^{t+1}) is better than the evaluation function value for the best search position of the particle in its track, p_i^t, update the best search position of the particle in its track as p_i^{t+1} := x_i^{t+1}. If not, let p_i^{t+1} := p_i^t. Go to Step 9.
Step 9. If the minimum of S(x_i^{t+1}), i = 1, . . . , N, or the minimum of f(x_i^{t+1}), i = 1, . . . , N, is better than the evaluation function value for the current best search position of the swarm, p_g^t, update the best search position of the swarm as p_g^{t+1} := x_{i_min}^{t+1}. Otherwise, let p_g^{t+1} := p_g^t, and go to Step 10.
Step 10. If the condition for the secession is satisfied, apply the secession to every particle according to a given probability, and go to Step 11.
Step 11. Finish if t = T_max (the maximum number of iterations). Otherwise, let t := t + 1 and return to Step 3.
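Tying the pieces together, the skeleton below sketches the main loop (Steps 2–11) under heavy simplification: Step 1 (finding the base point), the trigger condition for the stretching technique, and the secession condition are abstracted into callables supplied by the caller, and minimization is assumed. It is not the authors' implementation.

```python
import numpy as np

def revised_pso(f, initial_positions, move, repair, is_feasible, T_max=5000,
                w0=0.9, wT=0.4, apply_stretching=None, secession=None):
    """Skeleton of Steps 2-11 of the revised PSO.

    initial_positions : (N, n) array of feasible starting points (Step 2)
    move(x, v, p, g, w)        -> new direction vector (Step 3, modified schemes)
    repair(x_old, x_new)       -> feasible point (Step 4, bisection subswarm only)
    apply_stretching(t, swarm) -> evaluation function S or None (Steps 5-6)
    secession(t, x)            -> possibly relocated particle (Step 10)
    """
    x = np.array(initial_positions, dtype=float)
    N, _ = x.shape
    v = np.zeros_like(x)
    p = x.copy()                                  # personal bests
    g = min(p, key=f).copy()                      # swarm best
    n_repair = N // 2                             # first half: subswarm with repair

    for t in range(T_max):                                        # Step 11 loop
        w = w0 - t * (w0 - wT) / (0.75 * T_max)                   # Step 3, (13.4)
        for i in range(N):
            v[i] = move(x[i], v[i], p[i], g, w)
            x_new = x[i] + v[i]                                   # (13.3)
            if i < n_repair and not is_feasible(x_new):           # Step 4
                x_new = repair(x[i], x_new)
            x[i] = x_new
        evaluate = apply_stretching(t, x) if apply_stretching else None   # Steps 5-6
        score = evaluate if evaluate is not None else f                   # Step 7
        for i in range(N):                                        # Step 8
            if score(x[i]) < score(p[i]):
                p[i] = x[i].copy()
        best_i = min(range(N), key=lambda i: score(p[i]))         # Step 9
        if score(p[best_i]) < score(g):
            g = p[best_i].copy()
        if secession:                                             # Step 10 (caller decides when/how)
            for i in range(N):
                x[i] = secession(t, x[i])
    return g
```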
13.5 Numerical Example

We apply the proposed PSO (rPSO) and RGENOCOP V [1], one of the existing efficient methods, to five nonlinear programming problems of different scales. The number of trials is 40 for both rPSO and RGENOCOP V. Tables 13.1 to 13.5 show the results obtained by both methods: the best objective function value over the 40 trials, the average, the worst, and the average computation time. In these experiments, the parameter of RGENOCOP V is set as population size = 70. On the other hand, the parameters of rPSO are set as follows: the size of the subswarm with repair based on the bisection method = 35, the size of the subswarm with infeasible particles = 35, and the maximum number of iterations T_max = 5000 for all problems. For the first problem, as shown in Table 13.1, the proposed rPSO always obtains the strict optimal value (−15.000), while RGENOCOP V does not, and the average computation time of rPSO is shorter than that of RGENOCOP V. For the
Table 13.1 Results for a problem with n = 13 and m = 6
Method         Best        Average     Worst       Time (s)
rPSO           −15.000     −15.000     −15.000     2.153
RGENOCOP V     −15.000     −11.070     −4.666      7.275

Table 13.2 Results for a problem with n = 10 and m = 8
Method         Best        Average     Worst       Time (s)
rPSO           24.333      24.570      25.095      1.919
RGENOCOP V     24.512      28.821      41.674      9.575

Table 13.3 Results for a problem with n = 10 and m = 8
Method         Best        Average     Worst       Time (s)
rPSO           −216.650    −207.970    4.386       1.779
RGENOCOP V     −214.912    −79.953     205.184     8.909

Table 13.4 Results for a problem with n = 40 and m = 22
Method         Best         Average      Worst        Time (s)
rPSO           −4403.699    −3907.939    −3263.133    10.375
RGENOCOP V     −4129.062    −3169.530    −2038.250    84.538

Table 13.5 Results for a problem with n = 10 and m = 8
Method         Best        Average     Worst       Time (s)
rPSO           −216.654    −178.461    4.232       2.321
RGENOCOP V     −216.402    −121.491    7.569       8.909
second problem, as shown in Table 13.2, neither method obtains the strict optimal value (24.306). However, the difference between the results of the proposed rPSO and the strict optimal value is considerably smaller than the difference between the results of RGENOCOP V and the strict optimal value. In addition, the average computation time of rPSO is shorter than that of RGENOCOP V. For the third problem, which is larger-scale (the strict optimal value is unknown), as shown in Table 13.4,
the proposed rPSO is better than RGENOCOP V with respect to the best objective function value, the average, the worst, and the average computation time. From these results, it is indicated that the proposed PSO (rPSO) is superior to RGENOCOP V and that rPSO is promising as an optimization method for constrained nonlinear programming problems.
13.6 Conclusions

In this paper, focusing on a particle swarm optimization (PSO) method, we considered its application to constrained nonlinear programming problems. In order to deal with the drawbacks of PSO methods, we incorporated the bisection method and a homomorphous mapping to carry out the search considering constraints, as well as the multiple stretching technique and modified move schemes of particles to restrain the stopping around local optimal solutions. We showed the efficiency of the proposed revised PSO method (rPSO) by comparing it with an existing method, RGENOCOP V, through their application to some numerical examples. As future work, the proposed PSO will be extended for application to multiobjective programming problems, multilevel programming problems, integer programming problems, and so forth.

Acknowledgment This research was partially supported by The 21st Century COE Program on "Hyper Human Technology toward the 21st Century Industrial Revolution." This research was also partially supported by the Ministry of Education, Science, Sports, and Culture, Grant-in-Aid for Scientific Research (C), 18510127, 2006.
References

1. J. Kennedy and R.C. Eberhart (1995) Particle swarm optimization. Proceedings of IEEE International Conference on Neural Networks, pp. 1942–1948.
2. J. Kennedy and W.M. Spears (1998) Matching algorithms to problems: An experimental test of the particle swarm and some genetic algorithms on the multimodal problem generator. Proceedings of IEEE International Conference on Evolutionary Computation, pp. 74–77.
3. S. Koziel and Z. Michalewicz (1999) Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization. Evolutionary Computation, 7(1):19–44.
4. K.E. Parsopoulos and M.N. Vrahatis (2002) Recent approaches to global optimization problems through Particle Swarm Optimization. Natural Computing, 1:235–306.
5. M. Sakawa, K. Kato, and T. Suzuki (2002) An interactive fuzzy satisficing method for multiobjective non-convex programming problems through genetic algorithms. Proceedings of 8th Japan Society for Fuzzy Theory and Systems, Chugoku/Shikoku Branch Office Meeting, pp. 33–36.
6. Y.H. Shi and R.C. Eberhart (1998) A modified particle swarm optimizer. Proceedings of IEEE International Conference on Evolutionary Computation, pp. 69–73.
Chapter 14
A Heuristic for the Capacitated Single Allocation Hub Location Problem

Jeng-Fung Chen
Abstract The capacitated single allocation hub location problem (CSAHLP) is a decision problem regarding the number of hubs, the hub locations, and the allocation of nonhubs to hubs, considering that each hub has a capacity constraint. The crucial factors for designing an economical hub network are to determine the optimal number of hubs, to properly locate the hubs, and to allocate nonhubs to hubs. In this research an effective heuristic is presented to resolve the CSAHLP. Computational characteristics of the presented heuristic are evaluated through extensive computational experiments using the Australia Post data set. Computational results indicate that the presented heuristic outperforms a simulated annealing method from the literature.

Keywords: Capacitated hub location problem · hub network · p-hub median problem · uncapacitated hub location problem
14.1 Introduction

Hub networks are implemented to consolidate flows from different origins and ship them via hubs to different destinations in order to reduce the total transportation cost. In hub networks, all hubs are interconnected, and nonhub nodes are not directly connected to one another. Each of the nonhub nodes is allocated to multiple hubs (multiple allocation) or to a single hub (single allocation). Many studies have shown that the implementation of hub networks can reduce the total transportation cost, and successful applications of hub networks have arisen in many areas [1–7].
Jeng-Fung Chen Department of Industrial Engineering and Systems Management, Feng Chia University, P.O. Box 25-097, Taichung, TAIWAN, R.O.C. 40724
There are several classes of hub network problems [8]. In the p-hub median problem (p-HMP), the number of hubs p is determined a priori. The objective is to locate the hubs and to allocate nonhubs to hubs so that the total transportation cost is minimized. The total transportation cost includes (1) the collection cost incurred during the transportation from the origin to its allocated hub; (2) the transfer cost incurred during the transportation between hubs; and (3) the distribution cost incurred during the transportation from the allocated hub to the destination. The uncapacitated hub location problem (UHLP) differs from the p-HMP in that the number of hubs is not predetermined in the UHLP. The number of hubs is a decision variable, and a fixed cost for establishing a hub is included in the objective function. The capacitated hub location problem (CHLP) is another class of hub network problems in which each hub has its capacity constraint. Although the CHLP is more commonly encountered in practice, it attracts less attention than the p-HMP and UHLP. In this research we extend the work of Chen [9] and develop an effective heuristic to resolve the CHLP with single allocation (CSAHLP). Computational characteristics of the presented heuristic are evaluated through computational experiments using the Australia Post (AP) data set. Computational results indicate that the presented heuristic is capable of obtaining optimal solutions for almost all small-scaled problems and that it outperforms the simulated annealing (SA) method by Ernst and Krishnamoorthy [10] in solving the large-scaled CSAHLP. The rest of this research is divided into five sections. The previous related studies are reviewed in Section 14.2. A mathematical model for the CSAHLP is described in Section 14.3. The proposed heuristic is detailed in Section 14.4. Computational results are reported in Section 14.5. Conclusions and suggestions for future research are discussed in the last section.
14.2 Previous Related Studies

Hub networks have attracted a lot of research attention since the 1980s. A mixed 0/1 integer linear formulation for the p-HMP with multiple allocation (UMApHMP) was developed by Campbell [11]. Ernst and Krishnamoorthy [12] proposed a shortest-path-based heuristic and used enumeration and branch-and-bound methods to obtain exact solutions. O'Kelly [13] showed that the single allocation p-HMP (USApHMP) is NP-hard and developed two enumeration-based heuristics. Klincewicz [14] proposed a tabu search (TS) heuristic and a greedy randomized adaptive search heuristic for the USApHMP. Campbell [15] pointed out that the solution for the UMApHMP provides a lower bound for the USApHMP and developed two greedy-exchange heuristics for the USApHMP. For the uncapacitated multiple allocation hub location problem (UMAHLP), Klincewicz [16] proposed an algorithm to obtain the exact solution. An improved
version of Klincewicz’s dual-ascent approach was developed by Mayer and Wagner [17]. Boland et al. [18] developed preprocessing procedures and tightening constraints for the mixed 0/1 linear programming formulation. O’Kelly [19] formulated the uncapacitated single allocation hub location problem (USAHLP) as a quadratic 0/1 optimization problem and proposed a heuristic to deal with it. A mixed 0/1 linear programming (LP) formulation for the USAHLP was presented by Campbell [11]. Abdinnour-Helm and Venkataramanan [20] proposed a branch-and-bound and a genetic algorithm (GA) to resolve the USAHLP. Aykin [2] proposed a branchand-bound algorithm and an SA-based greedy-interchange heuristic to resolve the USAHLP. A hybrid heuristic based on the GA and TS was presented by AbdinnourHelm [21]. Topcuoglu et al. [22] proposed a GA-based procedure and an SA heuristic to solve the USAHLP. A hybrid heuristic based on the SA and TS was presented by Chen [9]. The results obtained by Chen match the best solutions found in the literature. In the CHLP, the hubs are capacitated. A mixed integer LP model for the capacitated multiple allocation hub location problem (CMAHLP) was presented by Campbell [11]. Ebery et al. [23] presented formulations and solution approaches for the CMAHLP. Boland et al. [18] developed preprocessing procedures and tightening constraints for mixed integer LP formulations. Ernst and Krishnamoorthy [10] proposed two heuristics and an LP-based branch-and-bound solution method for solving the CSAHLP. Campbell [11] considered capacity restriction at a hub on both collection and transfer. In the work of Ebery et al. [23], Boland et al. [18], and Ernst and Krishnamoorthy [10], the capacity restrictions were considered only on collection. In this research we consider the capacity restrictions also only on the collection. For different solution approaches to the different types of hub network problems, readers may refer to O’Kelly and Miller [24], Bryan [25], Bryan and O’Kelly [26], and Campbell et al. [8]. In this research we extend the work of Chen [9] and develop an effective heuristic to resolve the CSAHLP. Computational characteristics of the presented heuristic are evaluated through computational experiments using the AP data set.
14.3 A Model

In this section a mathematical model for the CSAHLP is given to describe the problem structure. Let W_ij be the flow from origin i to destination j; O_i = Σ_j W_ij; C_ijkm be the transportation cost per unit flow from i to j routed via hubs k and m in that order (i.e., C_ijkm = χ C_ik + α C_km + δ C_mj, in which C_ik is the distance between i and k, χ is the unit collection cost, α is the unit transfer cost on all interhub links, and δ is the unit distribution cost); X_ijkm be the fraction of flow from origin i to destination j routed via hubs k and m; F_k be the fixed cost of establishing hub k; Γ_k be the capacity of hub k; and Z_ik = 1 if node i is allocated to a hub located at k and 0 otherwise. The
CSAHLP may be formulated as [10]

Min  Σ_i Σ_j Σ_k Σ_m W_ij C_ijkm X_ijkm + Σ_k F_k Z_kk

s.t.  Σ_k Σ_m X_ijkm = 1,  ∀ i, j    (14.1)

      Z_ik ≤ Z_kk,  ∀ i, k    (14.2)

      Σ_m X_ijkm = Z_ik,  ∀ i, j, k    (14.3)

      Σ_k X_ijkm = Z_jm,  ∀ i, j, m    (14.4)

      Σ_i O_i Z_ik ≤ Γ_k Z_kk,  ∀ k    (14.5)

      Z_ik ∈ {0, 1},  ∀ i, k
      X_ijkm ≥ 0,  ∀ i, j, k, m
The objective function is to minimize the costs of collection, transfer, distribution, and establishing hubs. Constraint (14.1) requires that the flow between every origin–destination pair is routed via some hub pair. Constraint (14.2) assures that nodes can only be allocated to open hubs. Constraints (14.3) and (14.4) enforce that every node can only be allocated to one hub. The capacity constraint of each hub is ensured by (14.5).
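For illustration, the sketch below evaluates the objective and the capacity constraint (14.5) for a given single-allocation solution; the default unit costs χ = 3, α = 0.75, and δ = 2 follow the AP data set described later, and the function and argument names are assumptions of this sketch.

```python
import numpy as np

def csahlp_cost(W, C, alloc, hubs, F, chi=3.0, alpha=0.75, delta=2.0):
    """Total cost of a single-allocation solution of the CSAHLP.

    W     : (n, n) flow matrix, W[i, j] = flow from origin i to destination j
    C     : (n, n) distance matrix
    alloc : alloc[i] = hub to which node i is allocated (hubs allocated to themselves)
    hubs  : iterable of opened hub nodes
    F     : F[k] = fixed cost of establishing hub k
    """
    n = W.shape[0]
    transport = 0.0
    for i in range(n):
        k = alloc[i]
        for j in range(n):
            m = alloc[j]
            # Cost per unit flow routed i -> k -> m -> j, cf. C_ijkm in Section 14.3.
            transport += W[i, j] * (chi * C[i, k] + alpha * C[k, m] + delta * C[m, j])
    fixed = sum(F[k] for k in hubs)
    return transport + fixed

def capacity_ok(W, alloc, hubs, Gamma):
    """Check constraint (14.5): total collected flow at each hub within its capacity."""
    O = W.sum(axis=1)                               # O_i = total outbound flow of node i
    return all(O[[i for i in range(len(alloc)) if alloc[i] == k]].sum() <= Gamma[k]
               for k in hubs)
```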
14.4 Heuristic

The presented heuristic, SATLCHLP, is described in this section. According to Chen [9], heuristic SATLCHLP can be divided into three levels: the first level is to determine the number of hubs; the second level is to choose the hub locations for a given number of hubs; and the third level is to allocate the nonhubs to the chosen hubs.
14.4.1 Determining the Number of Hubs

In order to obtain the optimal or near-optimal solution effectively, it is impractical to try every possible number of hubs; instead, bounds on the number of hubs can be employed. According to Chen [9], the number of hubs is set to p − 1 if the marginal reduction of the transportation cost is greater than the increase in fixed cost when p − 1 hubs are used, and the marginal reduction of the transportation cost is no greater than the increase in fixed cost when p hubs are used.
14.4.2 Selecting Hub Locations

In this research the p nodes with larger capacity are selected as the initial hubs. The hub locations are then improved by the restricted single location exchange procedure [9].

Improvement Procedure for Hub Locations

The restricted single location exchange procedure generates the neighborhood solutions for hub locations by searching only a portion of the nonhubs and results in exactly one nonhub replacing one of the current hubs. When replacing hub m with one of the nonhubs, regard hub m as the origin. All the nonhubs are then divided into four groups: group 1 consists of all the nonhubs within the region formed by the 45° and 135° lines; group 2 consists of all the nonhubs within the region formed by the 135° and 225° lines; group 3 consists of all the nonhubs within the region formed by the 225° and 315° lines; and group 4 consists of all the nonhubs within the region formed by the 315° and 45° lines. q nonhubs with potential to be good hub locations are then searched for in each group. If the number of nonhubs in group 1 is less than q, the deficiencies are complemented by the nonhubs in group 3, and vice versa. If the nonhubs in group 3 are not enough to complement the deficiencies, they are complemented by the nonhubs in groups 2 and 4 equally. Naturally, each of the 4q selected nonhubs should have potential to be a good hub. In this research an index, I_i = (W_i / Σ W_i) − (D_i / Σ D_i) + (Ψ_i / Σ Ψ_i), considering the flow, the distance, and the marginal cost of hub capacity, is used to select the good hub locations, in which W_i = Σ_j (W_ij + W_ji) is the total flow to and from node i, D_i = Σ_j (C_ij + C_ji) is the total distance between node i and the other nodes (considering both directions), and Ψ_i is the capacity of node i divided by the fixed cost for establishing hub i (i.e., Ψ_i = Γ_i / F_i).
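A small sketch of this candidate-ranking index is given below; normalizing over all nodes is an assumption of the sketch, as the original text does not state the normalization set explicitly.

```python
import numpy as np

def location_index(W, C, Gamma, F, candidates):
    """Rank candidate nonhubs by the index I_i of Section 14.4.2.

    Larger total flow, smaller total distance, and larger capacity per unit
    fixed cost all increase the index.
    """
    W_tot = W.sum(axis=1) + W.sum(axis=0)          # W_i: flow to and from node i
    D_tot = C.sum(axis=1) + C.sum(axis=0)          # D_i: distance to all other nodes, both directions
    Psi = Gamma / F                                # Psi_i: capacity per unit fixed cost
    I = W_tot / W_tot.sum() - D_tot / D_tot.sum() + Psi / Psi.sum()
    # Return the candidate nonhubs, best first.
    return sorted(candidates, key=lambda i: I[i], reverse=True)
```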
14.4.3 Allocating Nonhubs to Hubs

When each nonhub node can only be allocated to one hub, it may be better to allocate each nonhub to the nearest or second-nearest hub. Hence we first allocate each nonhub (in nonincreasing order of total flow) to a hub as near as possible (without violating capacity constraints) and then apply the following procedure to improve the solution.

Improvement Procedure for Reallocating Nonhubs

Step 0. Set Inner_max and the size of the tabu list. Note that the tabu list is used to store characteristics that classify certain nonhub moves as tabu in the later search.
Step 1. Starting from the nonhub with the least total flow, reallocate it to the nearest or second-nearest hub. If it cannot be reallocated due to the capacity constraint, apply a swap move. The paired nonhub(s) is chosen from the nonhubs that are allocated to the tried hub. If the solution is improved, accept the reallocation. Otherwise, use Metropolis's criterion [27] to determine whether the reallocation is accepted. If all the nonhubs have been tried, proceed to Step 2. Otherwise, select the next nonhub to be reallocated.
Step 2. If Inner_max moves are performed without improvement over the best-known solution, stop the improvement procedure. Otherwise, return to Step 1.
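The Metropolis criterion referenced in Step 1 can be sketched as follows (a standard simulated-annealing acceptance rule; the sign convention for the cost difference is an assumption of the sketch).

```python
import math
import random

def accept_move(delta_cost, temperature, rng=random):
    """Metropolis acceptance rule used when a reallocation worsens the solution.

    Improving moves (delta_cost <= 0) are always accepted; worsening moves are
    accepted with probability exp(-delta_cost / temperature).
    """
    if delta_cost <= 0:
        return True
    return rng.random() < math.exp(-delta_cost / temperature)
```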
14.4.4 Heuristic SATLCHLP

Heuristic SATLCHLP is now outlined as follows:

Step 0. (Initialization). Set the initial temperature, Markov chain length, size of the tabu list, and stopping criterion.
Step 1. Calculate the lower bound for the number of hubs (LB) based on the total collection. Set p = LB and PILOT = 1 (i.e., indicating that it is a pilot run in order to determine the number of hubs).
Step 2. Generate the initial hub locations and apply the allocation procedure.
Step 3. Apply the restricted single location exchange procedure and the allocation procedures (i.e., performing hub and nonhub moves).
Step 4. If PILOT = 0 (i.e., the number of hubs is determined), proceed to Step 5. If the resulting marginal reduction of the transportation cost is greater than the increase in fixed cost, set p = p + 1 and return to Step 2 (i.e., increasing the number of hubs for another pilot run). Otherwise, set PILOT = 0 and p = p − 1.
Step 5. Update the temperature. If the stopping criterion is reached, stop the whole procedure. Otherwise, return to Step 3.
14.5 Computational Results

The AP data set from the literature is used to evaluate the computational characteristics of heuristic SATLCHLP. To the best of our knowledge, the best results obtained by the SA of Ernst and Krishnamoorthy [10] match the best solutions found in the literature. Hence, the performance of SATLCHLP is compared with the SA of Ernst and Krishnamoorthy. Both heuristics were coded in C++, and all of the experiments were performed on a Pentium IV 2.4 GHz PC. (We recoded the SA of Ernst
and Krishnamoorthy in C ++ so that both heuristics were coded using the same data structure and performed on the same computer. The recoded SA is abbreviated as SA’ hereafter.)
14.5.1 Australia Post Data Set

The AP data set was introduced by Ernst and Krishnamoorthy [28]. It was derived from the mail flows in an Australian city and contains 200 nodes. The flow is not symmetric (i.e., W_ij ≠ W_ji), W_ii = 0, χ = 3, α = 0.75, and δ = 2. For small-scaled problems, five problem sizes are considered: 10, 20, 25, 40, and 50 nodes. For large-scaled problems, two problem sizes are considered: 100 and 200 nodes. For each type of problem, two versions of fixed costs are considered: loose and tight.
14.5.2 Results

Computational results are reported in this subsection. First, parameter settings are described.
14.5.2.1 Parameter Settings

According to Ernst and Krishnamoorthy [10], the initial solution of SA' was generated by randomly selecting 10% to 90% of the nodes to be hubs. The nonhubs were then allocated randomly to these hubs. They applied six types of transitions to generate neighborhood solutions: RelocateHub, ReallocateNode, NewHub, MergeClusters, SplitCluster, and SwapNodes. The probabilities of selecting the above transitions are set at 0.25, 0.53, 0.07, 0.01, 0.01, and 0.13, respectively. They used the approach proposed by White [29] to set the initial temperature and employed a temperature reheating scheme [30] to escape from local minima. The cooling rate was set at 0.8, and the Markov chain length was set at 2n. In our research, if the best solution was not improved in two consecutive iterations, the temperature was updated. The initial temperature was set at 600; ?, ?, and ? were set to be 1, 3, and 1, respectively; the number of nonhubs to be searched in each group (q) was set to be 1.4√n; the sizes of the tabu lists for hub and nonhub moves were set at 7 and 3, respectively; Inner_max was set at 5; the cooling rate was set at 0.9; and the heuristic was terminated when the best solution was not improved in two consecutive temperatures. The parameters suggested were determined after extensive testing considering both solution quality and computational efficiency.
224250.05 250992.26 263399.94 263399.94 234690.96 253517.40 271128.18 296035.40 238977.95 276372.50 310317.64 348369.15 241955.71 272218.32 298919.01 354874.10 238520.59 272897.49 319015.77 419319.47
Optimal cost
224250.05 250992.26 263399.94 263399.94 234690.96 253517.40 271128.18 296035.40 238977.95 276372.50 310317.64 348369.15 241955.71 272218.32 298919.01 354874.10 238520.59 272897.49 319015.77 417440.991
Problem Name
10LL 10LT 10TL 10TT 20LL 20LT 20TL 20TT 25LL 25LT 25TL 25TT 40LL 40LT 40TL 40TT 50LL 50LT 50TL 50TT
2
1
0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 1.41% 0.00% 0.46% 0.00% 0.17% 0.00% 1.28%
Min Gap
This solution may not be an optimal solution. Nsr: Number of runs that obtained the best solution.
Average
Best cost reported [10] 3.20% 2.01% 2.09% 0.43% 2.03% 2.50% 4.83% 3.83% 1.67% 1.92% 0.00% 1.82% 0.07% 5.60% 0.45% 6.11% 1.78% 4.53% 2.00% 8.12%
Max Gap
Table 14.1 Computational results of small-sized AP test problems
1.12% 0.88% 0.86% 0.35% 0.20% 1.16% 0.86% 2.76% 1.10% 1.18% 0.00% 1.34% 0.01% 3.80% 0.18% 4.34% 0.36% 2.73% 1.22% 5.63%
Avg gap
SA’
0.29
0.01 0.01 0.01 0.01 0.05 0.03 0.05 0.05 0.08 0.09 0.05 0.08 0.29 0.43 0.23 0.49 1.11 0.98 0.62 1.18
CPU time 1 1 1 2 9 4 8 1 2 2 10 1 9 0 3 0 8 0 2 0
Nsr2 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.001%
Mingap 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 1.14% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.15%
Max gap 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.23% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.09%
Avg gap
SATLCHLP
1.35
0.07 0.06 0.07 0.04 0.20 0.10 0.19 0.26 0.43 0.34 0.34 0.41 1.27 2.14 1.46 2.98 2.30 7.39 2.34 4.60
CPU time
10 10 10 10 10 10 10 10 10 10 3 10 10 10 10 10 10 10 10 0
Nsr
246713.97 256183.42 362950.09 474680.32 231102.40 260178.36 237760.48 278961.01
100LL 100LT 100TL 100TT 200LL 200LT 200TL 200TT
Average
Overall Best cost
Problem name
246713.97 256638.38 362950.09 474680.32 241992.97 268894.41 273443.81 292754.97
Best cost reported [10]
0.00% 1.72% 0.34% 3.51% 1.72% 0.47% 0.01% 0.19%
Min gap 1.54% 4.85% 0.69% 9.95% 4.20% 1.83% 6.11% 8.20%
Max gap
Table 14.2 Computational results of large-sized AP test problems
0.92% 3.20% 0.46% 7.44% 3.66% 1.27% 4.73% 5.24%
Avg gap
SA’
48.37
8.88 9.48 6.00 10.12 84.40 96.55 65.67 105.90
CPU time 3 0 0 0 0 0 0 0
Nsr 0.00% 0.00% 0.00% 0.01% 0.00% 0.00% 0.00% 0.00%
Min gap Max gap 0.01% 0.07% 0.00% 0.12% 0.07% 0.59% 0.08% 0.27%
0.002% 0.04% 0.00% 0.06% 0.05% 0.30% 0.05% 0.15%
Avg gap
SATLCHLP CPU time
103.35
30.74 13.50 13.46 54.54 196.35 318.05 174.94 129.19
3 1 10 0 1 1 1 1
Nsr
14.5.2.2 Results for Small-Sized Problems

The computational results for small-sized AP problems are given in Table 14.1. The CPU times reported were averaged over ten runs. According to Table 14.1, SA' did not obtain the optimal solutions for four problems (problems 40LT, 40TT, 50LT, and 50TT). Except for problem 50TT, SATLCHLP obtained the optimal solutions for the other test problems in almost every run. Although SATLCHLP did not obtain the best solution for problem 50TT, its worst solution (within a 0.15% gap) is better than the best solution obtained by SA' (within a 1.24% gap). On an overall average, for small-sized AP problems the average solution of SATLCHLP reduces the best (average) solution of SA' by 0.11% (1.55%).
14.5.2.3 Results for Large-Sized Problems

Table 14.2 shows the computational results for large-sized AP problems. Except for problem 100TT, SATLCHLP obtained the best solutions for the other large-sized AP problems. Although SATLCHLP did not obtain the best solution for problem 100TT, its worst solution (within a 0.12% gap) is still better than the best solution obtained by SA' (within a 3.51% gap). The average solution obtained by SATLCHLP is better than the best solution obtained by SA' for almost every test problem (except problems 100LL and 200TL). This again indicates that SATLCHLP is capable of consistently obtaining good solutions and outperforms SA'. On an overall average, for large-sized AP problems the average solution of SATLCHLP reduces the best (average) solution of SA' by 1.10% (3.42%). Furthermore, unlike SA', whose solutions are greatly affected by random numbers, the random numbers hardly affect the solution quality of SATLCHLP. As to the run times consumed, the run times required by SATLCHLP ranged from 13.46 to 318.05 seconds. This indicates that SATLCHLP is able to obtain a good solution in a reasonable time.
14.6 Conclusions and Suggestions for Future Research

In this research we extend the work of Chen [9] and develop an effective heuristic to resolve the CSAHLP. Computational characteristics of the proposed heuristic have been evaluated through computational experiments using the AP data set from the literature. Computational results have demonstrated that the proposed heuristic is capable of obtaining optimal solutions for almost all small-scaled problems, and it outperformed the SA method of Ernst and Krishnamoorthy [10] in solving the large-scaled CSAHLP. As for future research, it may be desirable to create new data sets of larger sizes. Considering that each nonhub node can only be allocated to a specific number of hubs and that the cost for establishing a hub is a discrete
function of the capacity are two other important issues for future research to pursue.

Acknowledgement This material is based on work supported by the National Science Council under Grant number NSC 94-2213-E-035-019.
References 1. S. Abdinnour-Helm (2001) Using simulated annealing to solve the p-hub median problem. International Journal of Physical Distribution and Logistics Management, 31: 203–220. 2. T. Aykin (1995) Networking policies for hub-and-spoke systems with application to the air transportation system. Transportation Science, 29: 201–221. 3. N. Bania, P. Bauer, and T. Zlatoper (1998) U.S. air passenger service: A taxonomy of route networks, hub locations, and competition. Logistics and Transportation Review, 34: 53–74. 4. T. Don, S. Harit, J.R. English, and G. Whicker (1995) Hub and spoke networks in truckload trucking: Configuration, testing, and operational concerns. Logistics and Transportation, 31: 209–237. 5. J.G. Klincewicz (1998) Hub location in backbone tributary network design: A review. Location Science, 6: 307–335. 6. M.J. Kuby and R.G. Gray (1993) Hub network design problem with stopovers and feeders: Case of Federal Express. Transportation Research, 27: 1–12. 7. K. Lumsden, F. Dallari, and R. Ruggeri (1999) Improving the efficiency of the hub and spoke system for the SKF European distribution network. International Journal of Physical Distribution and Logistics Management, 29: 50–64. 8. J. Campbell, A. Ernst, and M. Krishnamoorthy (2002) Hub location problems. In Z Drezner and H Hammacher (eds.) Facility location: applications and theory. Springer, Berlin. 9. J.F. Campbell (1994) Integer programming formulations of discrete hub location problems. European Journal of Operational Research, 72: 387–405. 10. A. Ernst and M. Krishnamoorthy (1999) Solution algorithms for the capacitated single allocation hub location problem. Annals of Operations Research, 86: 141–159. 11. J.-F. Chen (2007) A hybrid heuristic for the uncapacitated single allocation hub location problem. Omega–International Journal of Management Science, 35: 211–220. 12. A. Ernst and M. Krishnamoorthy (1998) Exact and heuristic algorithms for the uncapacitated multiple allocation p-hub median problem. European Journal of Operational Research, 104: 100–112. 13. M.E. O’Kelly (1987) A quadratic integer problem for the location of interacting hub facilities. European Journal of Operational Research, 32: 393–404. 14. J.G. Klincewicz (1992) Avoiding local optima in the p-hub location problem using tabu search and GRASP. Annals of Operations Research, 40: 283–302. 15. J.F. Campbell (1996) Hub location and the p-hub median problem. Operations Research, 44: 923–935. 16. J.G. Klincewicz (1996) A dual algorithm for the uncapacitated hub location problem. Location Science, 4: 173–184. 17. G. Mayer and B. Wagner (2002) HubLocator: An exact solution method for the multiple allocation hub location problem. Computers and Operations Research, 29: 715–739. 18. N. Boland, M. Krishnamoorthy, A. Ernst, and J. Ebery (2004) Preprocessing and cutting for multiple allocation hub location problems. European Journal of Operational Research, 155: 638–653. 19. M.E. O’Kelly (1992) Hub facility with fixed costs. The Journal of RSAI, 71: 293–306. 20. S. Abdinnour-Helm and M.A. Venkataramanan (1998) Solution approaches to hub location problems. Annals of Operational Research, 78: 31–50.
21. S. Abdinnour-Helm (1998) A hybrid heuristic for the uncapacitated hub location problem. European Journal of Operational Research, 106: 489–499. 22. H. Topcuoglu, F. Corut, M. Ermis, and G. Yilmaz (2005) Solving the uncapacitated hub location using genetic algorithms. Computers and Operations Research, 32: 967–984. 23. J. Ebery, M. Krishnamoorthy, A. Ernst, and N. Boland (2000) The capacitated multiple allocation hub location problem: Formulations and algorithms. European Journal of Operational Research, 120: 614–631. 24. M.E. O’Kelly and H. Miller (1994) The hub network design problem: A review and synthesis. Journal of Transport Geography, 2: 31–40. 25. D.L. Bryan (1998) Extensions to the hub location problems: formulations and numerical examples. Geographical Analysis, 30: 315–330. 26. D.L. Bryan and M.E. O’Kelly (1999) Hub-and-spoke networks in air transportation: An analytical review. Journal of Regional Science, 39: 275–295. 27. N. Metropolis, A.N. Rosenbluth, M.N. Rosenbluth, and A.H. Teller (1953) Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21: 1087–1092. 28. A. Ernst and M. Krishnamoorthy (1996) Efficient algorithms for the uncapacitated single allocation p-hub median problem. Location Science, 4: 139–154. 29. S. White (1984) Concepts of scale in simulated annealing. Proceeding of the IEEE International Conference on Computer Design. Port Chester, NY, pp. 646–651. 30. I. Osman (1993) Metastrategy simulated annealing and tabu search algorithms for the vehicle routing problem. Annals of Operational Research, 41: 421–451. 31. Y. Lee, B. Lim, and J. Park (1996) A hub location problem in designing digital data service networks: Lagrangian relaxation approach. Location Science, 4: 185–194.
Chapter 15
Multimodal Transport: A Framework for Analysis

Mark K.H. Goh, Robert DeSouza, Miti Garg, Sumeet Gupta, and Luo Lei
Abstract Disparities in economic development, transport policies, and infrastructure across nations and modes of transport make the integration of multimodal corridors a challenging task for regional organizations like the Asia-Pacific Economic Cooperation (APEC), European Union (EU), and Association of Southeast Asian Nations (ASEAN). In this chapter, a framework based on nontariff barriers arising from the lack of interconnectivity between modes, lack of infrastructure, and tariff barriers arising from cabotage and customs regulation at the interface is proposed to analyze the issues faced by a regional organization in the creation of multimodal transport corridors. Multimodal transport networks are used by third-party logistics (3PL) service providers who offer end-to-end cargo delivery services. Better multimodal transport networks are expected to improve intraregional trade and attract multinational enterprises (MNEs) to the region, thereby increasing foreign direct investment (FDI). We introduce multimodal transport and discuss the economic and regional characteristics of the ASEAN region. This is followed by an overview of transport-related infrastructure development in the region. An analysis of the issues faced by regional organizations in the integration of multimodal corridors using the proposed framework follows.
Mark K.H. Goh, Robert DeSouza, Miti Garg, Sumeet Gupta, and Luo Lei
The Logistics Institute-Asia Pacific, Block E3A, Level 3, 7 Engineering Drive 1, Singapore 117574

15.1 Introduction

The global freight industry has witnessed several changes in the past few decades. The advent of advanced technology, increased competition with greater demand for cost- and time-efficient logistics services, and consolidation in the logistics services environment have transformed the industry. Third-party logistics service providers who provide door-to-door cargo delivery services using integrated multimodal
transport networks are becoming prominent players in the industry. The competitive advantage of 3PL service providers stems from their ability to provide integrated end-to-end delivery of goods ranging from raw materials to finished products. The use of multimodal transport is an attractive opportunity for 3PL services providers and helps them to conserve the cost benefits obtained from effective supply chain management and obtain economies of scale and scope. In the global context, providing integrated logistics solutions to customers is a challenging task since international freight movement is affected by numerous regulatory and nonregulatory barriers. This chapter presents a conceptual framework to analyze the issues faced in the creation of multimodal transport networks. The benefits of multimodal transport are well researched. Some key benefits that make multimodal transport an attractive opportunity for public and private stakeholders [1] include 1. Savings in cost and time from the optimum use of each mode of transport for each phase of the journey 2. Greater returns on private and public infrastructure investments 3. Better capacity utilization resulting from optimum usage of each mode 4. Reduced energy usage 5. Decreased environmental hazards In addition, the formation of regional trading blocs, the increasing importance of time-based competition, fuel costs, global supply chains, e-commerce, and increasing competition to attract FDI has provided an impetus to the process [2]. Several regional organizations and government agencies are actively involved in developing state-of-the-art multimodal transport corridors comprising rail, road, air, and maritime transport to facilitate the seamless movement of goods within a region. Local governments and regional organizations have an interest in creating safe and sustainable multimodal transport systems that not only serve as a catalyst for socioeconomic development but also enhance international competitiveness. An efficient method of transporting freight at low cost and greater efficiency is a strong reason for multinational firms to invest in a particular region leading to an increase in FDI. The presence of strong multimodal production and transport networks increases the trade in goods, the services, and the capital flows within a region and hence boosts intraregional trade. While the transportation of freight using more than one mode of transport is a common practice, the distinctive feature of modern-day multimodal transport is the movement of large volumes of freight under a single transport liability document covering all phases of the journey issued by a multimodal transport operator (MTO). This feature developed largely due to the introduction of containerization of freight in the 1960s [3]. Multimodal transport, also known as combined transport, connotes the use of more than one mode of transport to move freight without the actual handling of goods. The term intermodal transport is often used interchangeably with multimodal transport. In multimodal transport the liability for damage during different phases of the journey lies with either the same or different freight forwarders. Confining the
definition to cases where the goods are not disturbed or handled during transit leads to the exclusion of multimodal noncontainerized freight, or transloading, from the purview of the existing studies. Lack of multimodal transport networks excludes opportunities for shippers and freight forwarders to enhance the scope of their services and to reduce costs and duration of transport [4]. In this chapter a framework to better understand the issues involved in integrating multimodal transport networks is developed. The objectives of this study are to develop a framework for understanding (a) the regulatory and nonregulatory issues in the integration of multimodal networks and (b) the perceptions of intermediate users regarding multimodal networks.
15.2 Literature Review Extant literature on multimodal transportation explores the need for the development of multimodal transport networks, benchmarking of multimodal freight transport, cost and time benefits of using multimodal transport [5], and studies of specific projects in the EU and APEC regions. A few studies focus on the behavioral aspects, i.e., the perceptions of the end user and the shipper [2] and the modal choices [6–8]. Several methods have been adopted to study the issues related to the integration of multimodal transport networks. For instance, case study research focusing on cost and time conducted by [5] shows the cost and efficiency advantages of using different combinations of routes and modes for the transportation of freight within the ASEAN. The model used in the study is based on quotations obtained from freight forwarders and is used to calculate the cost of 1 twenty-foot equivalent units (TEU) of Freight All Kinds (FAK) for four different routes through different countries and ports using a combination of modes. This model was proposed by [9]. The results of the case study indicate that the all-road option from Vientiane to Singapore via Bangkok results in the fastest transit time, the road–sea combination via Bangkok port gives the cheapest transit freight, and the road–rail solution has the highest confidence-index. The case study clearly shows the cost benefits of using multimodal transport. However, it does not consider airfreight and focuses on containerized movement of freight. Studies on regional issues related to multimodal transport in specific areas have been conducted in the EU region, the APEC region, and other regions. The focus of these studies range from benchmarking of costs and analysis of the issues related to multimodal transport. Theoretical studies that examine the integration of multimodal transport networks from the perspective of the end user have also been conducted [2]. Another stream of multimodal network-related studies focuses on the choice of the mode of transport using probability models with several parameters like travel time and freight costs and other considerations [6–9]. In the following section, the institutional environment in which logistics service providers (LSPs) operate is studied.
15.3 Theoretical Framework

Logistics service providers may be considered similar to organizations embedded within an industry network environment, with customers on one hand and institutions on the other. The lower the transaction costs of the organization, the greater will be the specialization and the greater the productivity of the system. However, the cost of exchange depends on the institutions of the country: its legal, political, social, and educational systems; culture; and so on. In effect, institutions govern the performance of the economy [10]. The customers of the LSPs are shippers and manufacturers who outsource their logistics activities to LSPs. The institutions are regulatory organizations, governments, and regional organizations. Therefore, LSPs are embedded in an environment of customers who outsource their logistics requirements on one hand and institutions that consist of regional organizations, governments, ministries, and regulatory bodies on the other. Figure 15.1 presents the theoretical framework for this study.

Fig. 15.1 Theoretical framework showing propositions

The cost of exchange may arise from several sources, namely, tariff-related regulatory barriers and nontariff, nonregulatory barriers. Propositions 1a and 1b describe the nontariff-related barriers. Nontariff-related barriers are barriers that arise due to the lack of infrastructure or an absence of interconnectivity between the mutually competitive yet interactive modes of transport.

Proposition 1a. Nontariff or nonregulatory barriers arise due to lack of infrastructure.

Proposition 1b. Nontariff-related barriers arise from lack of interconnectivity between modes of transport.

Propositions 2a and 2b describe the tariff-related barriers. Tariff-related barriers are deemed barriers that arise due to varying levels of inefficient procedures in customs
clearance, which delay shipments and increase shipment costs. Other than customs clearance, tariff-related barriers arise due to cabotage regulations that prevent the domestic transportation of goods by different modes of transport.

Proposition 2a. Regulatory barriers arise from the cost of exchange at the interface, i.e., customs-related barriers.

Proposition 2b. Regulatory barriers arise from cabotage, i.e., restrictions on the domestic transport of freight within a foreign country.

These propositions are tested using data collected from desk research and interviews, the methodology for which is described in the following section.
15.4 Research Methodology The case study method developed by [11] was adopted as the research methodology. In order to obtain an understanding of the user’s viewpoint, detailed interviews were conducted with industry players who offer wide-ranging logistics services in the region. Interviewees were selected based on the services they provide and their experience in the region. The interviewees were asked a series of questions, and their responses were recorded and transcribed verbatim. Interviews were conducted by telephone or by visiting the workplace of the interviewee. The interviews lasted from about 30 minutes to 2 hours. The firms interviewed have offices in at least one or more ASEAN country. Data were collected from August to December 2006. Information regarding the firm’s countries of operation, field of operation, number of employees, and size was collected in the general part of the questionnaire. The first part of the interview questionnaire pertained to the perception of the interviewee about the geographic and economic development of the ASEAN nations. The second part of the interview questionnaire contained items pertaining to the regulatory barriers impeding the integration of logistics services in the region by country and by mode. Last but not the least, the questionnaire contained items testing the perceptions of the users about the infrastructure facilities for each mode in each ASEAN country. Other sources of data include resources in the public domain like ASEAN Secretariat reports and news articles, journal articles, and similar research conducted to study the issues and initiatives of multimodal transport in the region. Information collected during the interviews pertained to the perspective of the LSPs and port authorities on the regulatory and nonregulatory logistics-related barriers faced in the region. The data collected during the interviews and desk research has been analyzed and presented in the form of a case study to obtain support for the theoretical propositions of this research. The interview questions were found to be exhaustive and
covered all the relevant issues related to the impediments to the integration of multimodal transport networks in the region. Since semistructured interviews were conducted with several members of the logistics supply chain ranging from shippers and LSPs to port and air cargo authorities, a comprehensive understanding of the issues pertaining to the region was obtained.
15.5 Case Study

The ASEAN, founded in 1967, comprises 10 Southeast Asian nations. Strategically located at the crossroads between the East and the West, ASEAN has historically been an important trading hub. However, the region exhibits economic disparity as reflected by the gross domestic product (GDP) per capita, which varies from US$13,879 for Brunei to US$554 for Vietnam (see Table 15.1).

Table 15.1 Per capita GDP for ASEAN countries (2005)
ASEAN country         Per capita GDP (US$)
Brunei Darussalam     $13,879
Cambodia              $358
Indonesia             $1,193
Lao PDR               $423
Malaysia              $4,625
Myanmar               $166
The Philippines       $1,042
Singapore             $25,207
Thailand              $2,537
Vietnam               $554
Data for Table 15.1 obtained from ASEAN statistics–selected ASEAN macroeconomic indicators [http://www.aseansec.org/stat/Table1.xls].

Some member countries, Cambodia, Laos, Vietnam, and Malaysia, are considered to be developing nations, while a few nations such as Singapore have joined the world's top 20 most competitive economies. The ASEAN is home to approximately 530 million people, or about 10% of the world's population. The population ranges from merely 340,000 people in Brunei to 230 million people in Indonesia. The region has three time zones and is geographically widespread. The ASEAN countries differ greatly in size (e.g., the land area of Singapore is 68,000 ha while that of Vietnam is 330,000 km²). Indonesia and the Philippines are archipelagoes with over 10,000 islands. Most of the countries have long coastlines except Laos, which is the only landlocked country. The ASEAN was formed with the objective of enhancing the global competitiveness and socioeconomic development of the region. The ASEAN Free Trade Area (AFTA) was established in 1992 with a view to integrate the ASEAN economies into
a single production base and to create a regional market by eliminating tariff and nontariff trade barriers amongst ASEAN nations. Efforts are underway to consolidate the region and form an ASEAN economic community by 2020 to cater to this single market and form a single production base. The Framework Agreement for the Integration of the Priority Sectors was signed at the 10th ASEAN Summit in Bali, 2003 with the goal of integrating 11 high-priority sectors. Logistics was recognized as an important sector to bring about the integration of these high-priority sectors. Several initiatives are currently underway to improve the physical infrastructure sector, especially multimodal transport networks in the region.
15.5.1 Railways Proposals to integrate the rail network spanning seven ASEAN nations include projects to complete the Singapore–Kunming rail link (SKRL) under the ASEAN–Mekong Basin Development Cooperation by constructing the missing links in Cambodia (48 km) and from Cambodia to Vietnam (210 km) [12]. The ASEAN is also committed to building the spur lines that link Myanmar and Thailand and run between Laos and Vietnam, at an estimated cost of US$1.8 billion.
15.5.2 Road Vehicle motorization has shown a marked increase in the ASEAN. The idea of a highway network predates the formation of the ASEAN. Efforts to integrate the road network in the ASEAN were revived in 1996 at the transport ministers meeting at Bali, Indonesia. The main objectives of this agreement are to designate national routes and form a road network. The proposed ASEAN highway network totals a length of 37,069 km with the longest highway length in Indonesia (9,239 km) followed by Thailand (6,692.5 km) and Myanmar (4,534 km) [13].
15.5.3 Maritime Maritime transport is a primary method for moving cargo in the ASEAN. Other than the cost and volume benefits of maritime transport over road and air transport for moving high-volume, low-value freight, the long coastlines of the sea-bound countries and the presence of several ports support maritime transport in the region. The strategic geographic location of ASEAN between the Asian land mass and the Americas has made it an important maritime trade hub. Multimodalism as a shipping operation strategy is vital for efficient and cost-effective transport, particularly in international trade. Other than ports, multimodal transport encompasses related activities and facilities along the logistics chain, such as freight centers, hubs, inland clearance depots, and other interfaces. Studies have
revealed that the structure and performance of the logistic chain elements passing through ports are critical factors that contribute to the competitiveness of the ports [14]. Hence, the significance of multimodalism in enhancing the efficiency of the logistics chain cannot be overemphasized [15].
15.5.4 Multimodal The resolution to adopt multimodal transport was accepted by the ASEAN in October 2000. The ASEAN Multimodal Transport Agreement is expected to have a significant impact on the movement of goods in the ASEAN. However, several impediments prevent the successful integration of multimodal networks within a region. First, the economic disparity amongst the member nations leads to differences in the quality of transport infrastructure and in the availability of funds for investment in large, capital-intensive transport projects, which hampers improvement in infrastructure. Second, the differences in economic development amongst the countries also hinder the implementation of customs-related regulations like electronic data interchange (EDI) and regulations against corruption and theft. Third, since nations within a region are independently governed, hindrances to the integration of multimodal networks may arise due to differences in transport policies and in the attitudes of the respective domestic governments concerning the role of each mode, costs, regulations on foreign investment and cabotage, etc. Fourth, competition amongst the nations to capture a larger share of foreign direct investment (FDI) and competition amongst the individual modes may deter efforts toward the creation of multimodal networks. The main propositions of the framework are discussed here. Proposition 1a. Nontariff barriers arise due to the lack of infrastructure. This class of barriers arises from the lack of road, rail, maritime, and air transport infrastructure. Road transport forms the backbone of multimodal transport, especially in cases where the region cannot be served adequately by maritime or rail networks. The absence of a well-established international standard road network (with roads of sufficient width and quality), restrictions on fleet size, and differences in days, areas, and hours of operation hinder the movement of freight and reduce the use of roads for multimodal transport by LSPs. For example, in a few countries, such as Vietnam (especially in cities such as Hanoi), there is an inner-city ban on trucks in excess of 2 t during peak hours, thereby increasing waiting and turnaround times and hence cost. In Indonesia, poor road conditions are a significant barrier to the use of road networks for the transport of freight [16]. Rail transport is an effective method of transporting low-cost bulk freight over long distances. To encourage the use of rail networks in the ASEAN, efforts are underway to create a trans-ASEAN rail network, and the missing links are being completed under the ASEAN SKRL project. Other infrastructure barriers that may prevent the use of rail transport arise from high costs, nonharmonized rail gauges, the lack of capacity, and the priority of passenger transport over
domestic cargo. However, no rail transshipment is needed at the international rail links (Malaysia–Thailand and Malaysia–Singapore), since no problems arising from differences in track and loading gauges are experienced in these areas. The airport infrastructure in the ASEAN was found to be fairly adequate. Malaysia, Singapore, and Thailand have the largest fleets within the ASEAN. Nonregulatory barriers in airport infrastructure can arise from the allocation of landing slots, access to warehouse and cargo handling services, and the control of ground handling services by one or at most two companies. Other challenges may arise due to the absence of airports in key industrial areas in the region or the infrequency of flights. Port infrastructure plays an important role in encouraging intraregional maritime transport. Malaysia, Singapore, Indonesia, Thailand, and the Philippines are among the 35 most important maritime nations of the world. Malaysia has a well-developed coastal shipping system, with Port Klang and Port Tanjong Pelepas already serving as important load centers. Indonesia and the Philippines are archipelagic nations with large domestic shipping fleets and at best a limited international fleet. The Indonesian archipelago has over 17,000 islands and several ports, which are classified into 107 primary ports, 544 government ports, and 1,233 private ports serving remote and underdeveloped areas. In Indonesia, the main ports are Belawan, Tanjong Priok (Jakarta), Tanjong Emas (Semarang), and Makassar (Ujung Pandang). In the Philippines, 19 base ports and 89 national ports are supplemented by several hundred municipal and private ports supporting local economic activity. Myanmar, Thailand, and Vietnam have long coastlines but do not have well-developed coastal shipping. Singapore is an important transshipment hub for the region, ranking amongst the busiest ports in the world and operating the world’s seventh-largest merchant fleet. Laos, a landlocked country, has two ships under the Vietnamese fleet. Infrastructure-related issues at the ports arise from the lack of facilities such as deep seaports and modern container berths and are observed in Vietnam, Myanmar, and Cambodia [17]. Proposition 1b. Nontariff barriers arise due to the lack of interconnectivity. Interconnectivity between modes is an important requirement for the effective, seamless transport of freight within a region. Modern-day ports must provide facilities for efficient, low-cost intermodal and intramodal transportation of cargo. Ports must accommodate the other modes of transport interfacing at and with them, forming an integral part of the transport network that moves goods to the end users and provides a strategic connection. Studies show that the structure and performance of the logistics chain passing through ports are critical factors that contribute to the competitiveness of the ports [14]. Unlike most other ports, the port in Singapore offers consolidation facilities within the free-trade zone. In a study of the 45 most important ports in the ASEAN, it was found that all 45 ports have interconnectivity with highways. About 13 ports have connectivity with railways, and 9 have connections with inland waterways [18]. Proposition 2a. Regulatory barriers arise due to the cost of exchange at the interface, i.e., customs-related barriers.
Barriers in customs clearance arise due to time-consuming documentation processing because of the lack of EDI, burdensome inspection procedures, restrictions on brokerage services by foreign firms, and restrictions on the weight and value of shipments. The interviews with LSPs show that they face the greatest hindrance in border clearance procedures due to security-related delays, short hours of operation, and numerous public holidays. These problems are faced at the Malaysia–Singapore border, resulting in delays, traffic jams, and long queues. The implementation of EDI reduces the customs clearance time. For instance, the customs clearance time in Singapore, which has full EDI, is short compared to Malaysia and Brunei, which have only partial implementation of EDI, and to Indonesia, which has no EDI. Logistics service providers require additional documentation or original documents to clear customs in Indonesia and Vietnam. Delays may also occur due to the lack of harmonization of regulations between domestic ports or borders within the same country. This problem is faced in archipelagic countries like Indonesia and Malaysia, where variations in rules often exist at ports or borders in different regions. Regulatory barriers may also arise due to the lack of clarity on the harmonized system (HS) codes used to classify goods. These problems are compounded by language barriers and the lack of border coordination. Logistics service providers also face difficulties in obtaining customs brokerage licenses in some ASEAN nations like Vietnam, where foreign LSPs are allowed to provide such services only in industrial parks; the Philippines, where such licenses are issued only to citizens; and Indonesia and Malaysia, which require that firms set up a separate corporate entity to obtain a customs brokerage license. In addition, LSPs who provide end-to-end cargo delivery services using multimodal networks face restrictions on investment in transport infrastructure in foreign countries. Restrictions on the foreign ownership of transport infrastructure, requirements for local participation, and nontransparent and duplicate licensing policies pose several hindrances to foreign investors. This problem is faced in Malaysia and Indonesia, where local partnership with a Bumiputra member is required to set up a trucking venture; this results in a loss of control and additional cost in the form of salaries and ex gratia payments. Similar regulatory barriers are faced in Vietnam, where foreigners are not allowed to own or operate ground transport fleets and equipment. Such requirements may dissuade foreign companies from owning and employing certain modes of transport and result in greater expense in the transportation of freight. In a few cases, foreign ownership may not be allowed at all (e.g., in Thailand) or may be allowed only under certain conditions, such as capacity tests and economic needs tests, as in the Philippines. The lack of transparency and the prevalence of corruption or the exchange of under-the-table money are also potential hindrances that arise due to the lack of regulatory control. Proposition 2b. Regulatory barriers arise due to cabotage regulations. Cabotage restrictions refer to restrictions on the movement of freight within a domestic territory by foreign LSPs. These restrictions hamper the ability of foreign LSPs to provide seamless freight transportation within a country.
Road cabotage arises when foreign vehicles are not allowed to ply within the domestic territory of a country or region. The restriction on the entry of Singapore-registered trucks into Malaysia is a good example of such restrictions. Here, Malaysian-registered trucks are allowed to enter and deliver cargo in Singapore; however, they are not allowed to collect freight on the return journey. Cabotage restrictions in shipping limit the provision of domestic point-to-point transport services to ships registered under the respective national flag. Other than cabotage restrictions, which prevent foreign vessels from carrying loads from one port to the next in the same country, there are restrictions on providing other port-related services. Air cabotage restricts the domestic transportation of freight to national carriers. Restrictions on the fifth freedom of foreign carriers prevent Singapore’s national carriers from transporting cargo from Penang, an industrial region in Malaysia, directly to the USA without making a stop at the Malaysian capital. Similar cabotage restrictions prevent low-cost carriers from Malaysia from carrying cargo from Singapore to Malaysia. The presence of regulatory restrictions like customs and cabotage leads to the use of alternate modes of transport, resulting in additional time and cost for LSPs.
15.6 Future Directions for Research The advantages derived from the reduction in cost and the savings in time point toward the need for an integrated multimodal transport network, which results in efficient and cost-effective logistics services. This study has provided a framework to analyze the key issues related to the integration of multimodal networks. This framework can be used for the preliminary analysis of the existing infrastructure, interconnectivity, and rules and regulations within any region, country, or city. Further areas of research include the study of trade flows to identify the critical paths and to determine the optimal multimodal transport paths for freight. Other areas of research include the study of specific ports or regions within the ASEAN and their multimodal connectivity. The case study method can also be adopted to examine the cost–benefit of removing regulatory barriers and to compare the different modes. To conclude, while several impediments are faced in the creation of a multimodal transport network, efforts to establish integrated travel corridors in the region require the harmonization of tariffs and of customs and cabotage regulations. In addition, issues related to the lack of infrastructure and impediments to proper coordination at border clearance points and checkpoints have to be reduced. The framework in this study helps to understand the issues involved in a particular region and, though not comprehensive, touches upon the main ones. It is beneficial for understanding, in depth, the overall issues involved in the integration of multimodal transport.
References 1. The Commission (1994) Toward a national intermodal transportation system. National Commission on Intermodal Transportation, Washington, DC, p. 60. 2. T.K. Stank and A.S. Roath (1998) Some propositions on intermodal transportation and logistics facility development: shippers’ perspectives. Transportation Journal, Spring: 13–24. 3. Y.C. Wan, S. Lim, and T. Sim (2006) Multimodal transport: The practitioner’s definitive guide. Singapore Logistics Association, Singapore. 4. B. Jennings and M.C. Holcomb (1996) Beyond containerization: The broader concept of intermodalism. Transportation Journal 35(3), Spring: 5–14. 5. R. Banomyong (2004) Assessing import channels for a land-locked country. Asia Pacific Journal of Marketing and Logistics 16(2): 62–81. 6. M.J.J. Gaudry (1980) Dogit and Logit models of travel mode choice in Montreal. The Canadian Journal of Economics/Revue Canadienne d’Economique 13(2), May: 268–279. 7. T.H. Oum (1979) A cross-sectional study of freight transport demand and rail–truck competition in Canada. The Bell Journal of Economics 10(2), Autumn: 463–482. 8. R. Barff, D. Mackay, and R.W. Olshavsky (1982) A selective review of travel-mode choice models. The Journal of Consumer Research 8(4), March: 370–380. 9. A.K.C. Beresford and R.C. Dubey (1990) Handbook on the management and operations of dry ports. UNCTAD, RDP/LDC.7, Geneva, Switzerland. 10. R. Coase (1998) The new institutional economics. The American Economic Review 88(2), Papers and Proceedings of the 110th Annual Meeting of the American Economic Association, May: 72–74. 11. R.K. Yin (1985) Case study research: Applied social research methods, Series 5. Sage Publications, Thousand Oaks, CA. 12. Almec Corporation (2002) ASEAN maritime transport development study. pp. 1–38. 13. T.S. Lee, S. Han, J.H. Kim, and D.K. Lee (2005) Linking South East Asia. Civil Engineering, September: 60–65. 14. Teurelincx (1997) Functional analysis of port performance as a strategic tool for strengthening a port’s competitive and economic potential. International Journal of Maritime Economics II(2): 119–140. 15. N. Khalid (2004) The emergence of multimodalism in the Straits of Malacca region. Maritime Institute of Malaysia. 16. USITC No. 3770 (2005) Logistic services: An overview of the global market and potential effects of removing trade impediments. pp. 1–154. 17. H.R. Vitasa and N. Soeprapto (1999) Maritime sector developments in ASEAN. Paper presented at the Maritime Policy Seminar organized by UNCTAD and the Ministry of Communications of Indonesia, Jakarta, October 11–13. 18. P.D.P. Australia Pty Ltd./Meyrick and Associates (2005) Promoting efficient and competitive intra-ASEAN shipping services. pp. 1–150.
Chapter 16
Fractional Matchings of Graphs Jiguo Yu and Baoxiang Cao
16.1 Terminology and Notation Fractional factor theory has extensive applications in areas such as network design, combinatorial topology, and decision lists. For example, in communication networks, if a large data package can be partitioned into parts to be sent to different destinations through different channels, then the running efficiency of the network is greatly improved. A feasible and efficient assignment of data packages can be viewed as the problem of finding a fractional factor satisfying certain special conditions. In this chapter, we consider fractional matchings of graphs. Our terminology and notation are standard; readers are referred to [1], [2] for undefined terms. All graphs considered in this chapter are finite simple graphs. Let G be a graph with vertex set V(G) and edge set E(G). The degree of x in G is denoted by dG(x); λ(G) and κ(G) denote the edge connectivity and connectivity of G, respectively, and δ(G) denotes the minimum degree of G. If S is a subset of V(G), G[S] denotes the subgraph induced by S. The set of isolated vertices of G\S is denoted by I(G\S), and |I(G\S)| = i(G\S). For two disjoint subsets S, T of V(G), EG(S, T) denotes the set of edges with one vertex in S and the other in T, and |EG(S, T)| = eG(S, T). Let g and f be two integer-valued functions such that 0 ≤ g(x) ≤ f(x) for all x ∈ V(G). A (g, f)-factor F of G is a spanning subgraph of G satisfying g(x) ≤ dF(x) ≤ f(x) for all x ∈ V(G). A fractional (g, f)-indicator function is a function h that assigns to each edge of the graph G a number in [0, 1] such that for each vertex x ∈ V(G) we have g(x) ≤ dGh(x) ≤ f(x), where dGh(x) = ∑e∈Ex h(e) is the fractional degree of x ∈ V(G), with Ex = {e : e = xy ∈ E(G)}. Let h be a fractional (g, f)-indicator function of a graph G. Set Eh = {e : e ∈ E(G) and h(e) ≠ 0}. If Gh is the spanning subgraph of G with E(Gh) = Eh, then Gh is called a fractional (g, f)-factor of G. If g(x) = f(x) = k (k a nonnegative integer) for all x ∈ V(G), the fractional (g, f)-factor is called a fractional k-factor. In particular, a fractional 1-factor is also called a fractional perfect matching.
Jiguo Yu and Baoxiang Cao, School of Computer Science, Qufu Normal University, Ri-zhao, Shandong, 276826, P. R. China
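To make the indicator-function definition concrete, here is a minimal sketch of our own (not taken from the chapter; all helper names and the sample graph are illustrative): it stores h as an edge-weight map, computes the fractional degree of every vertex, and checks the defining inequalities of a fractional (g, f)-indicator function.

```python
# Illustrative sketch: checking whether an edge-weight function h is a
# fractional (g, f)-indicator function of a graph.

def fractional_degrees(vertices, h):
    """Fractional degree dGh(x) = sum of h(e) over edges e incident to x."""
    d = {x: 0.0 for x in vertices}
    for (u, v), value in h.items():
        d[u] += value
        d[v] += value
    return d

def is_fractional_gf_indicator(vertices, h, g, f):
    """True if 0 <= h(e) <= 1 for every edge and g(x) <= dGh(x) <= f(x) for every vertex."""
    if not all(0.0 <= value <= 1.0 for value in h.values()):
        return False
    d = fractional_degrees(vertices, h)
    return all(g[x] <= d[x] <= f[x] for x in vertices)

# Example: on a 5-cycle, h(e) = 1/2 on every edge gives fractional degree 1
# at each vertex, i.e. a fractional perfect matching (g = f = 1).
vertices = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
h = {e: 0.5 for e in edges}
g = {x: 1 for x in vertices}
f = {x: 1 for x in vertices}
print(is_fractional_gf_indicator(vertices, h, g, f))  # True
```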
We now give three parameters of graphs. Chvátal [3] introduced the toughness t(G) of a graph G: when G is not a complete graph,
t(G) = min{ |S| / ω(G\S) : S ⊆ V(G), ω(G\S) ≥ 2 }.
The isolated toughness I(G) of a graph G was introduced in [4]: when G is not complete,
I(G) = min{ |S| / i(G\S) : S ⊆ V(G), i(G\S) ≥ 2 },
and when G is complete, I(G) = +∞. The binding number bind(G) of a graph G was introduced by Woodall in [5]:
bind(G) := min{ |NG(X)| / |X| : ∅ ≠ X ⊂ V(G) and N(X) ≠ V(G) }.
There are four important aspects for us to study with regard to fractional matchings of graphs, namely fractional factor-critical, fractional deleted, fractional covered, and fractional extendable graphs. We discuss them in the following sections.
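For very small graphs the three parameters can be computed by brute force directly from the definitions. The sketch below is ours, not the authors'; the function names are illustrative, the enumeration is exponential in |V(G)|, and it is meant only for tiny examples.

```python
from itertools import combinations

def neighbors(adj, X):
    """N_G(X): union of the neighborhoods of the vertices in X."""
    return set().union(*(adj[x] for x in X)) if X else set()

def components(adj, removed):
    """Connected components of G \\ removed, as a list of vertex sets."""
    left, comps = set(adj) - set(removed), []
    while left:
        stack, comp = [next(iter(left))], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - set(removed) - comp)
        comps.append(comp)
        left -= comp
    return comps

def subsets(vs):
    for r in range(len(vs) + 1):
        yield from combinations(vs, r)

def toughness(adj):
    vals = [len(S) / len(components(adj, S))
            for S in subsets(list(adj)) if len(components(adj, S)) >= 2]
    return min(vals) if vals else float('inf')      # complete graph

def isolated_toughness(adj):
    vals = [len(S) / sum(len(c) == 1 for c in components(adj, S))
            for S in subsets(list(adj))
            if sum(len(c) == 1 for c in components(adj, S)) >= 2]
    return min(vals) if vals else float('inf')

def binding_number(adj):
    V = set(adj)
    vals = [len(neighbors(adj, X)) / len(X)
            for X in subsets(list(adj)) if X and neighbors(adj, X) != V]
    return min(vals)

# Example: for the 5-cycle C5, t = 1, I = 3/2, and bind = 4/3.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(toughness(c5), isolated_toughness(c5), binding_number(c5))
```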
16.2 Basic Results on Fractional Matching First, we have the following basic fact, which follows from the definitions of fractional k-factor and fractional perfect matching given in Section 16.1.
Theorem 1 Let G be a graph and k ≥ 2 an integer. If G has a fractional k-factor, then G has a fractional perfect matching.
A Tutte-type condition for a graph to have a fractional perfect matching is given in [2].
Theorem 2 Let G be a graph. Then G has a fractional perfect matching if and only if i(G\S) ≤ |S| for every subset S of V(G).
By the definition of isolated toughness I(G), Theorem 2 can be restated as follows [4]:
Theorem 3 Let G be a graph. Then G has a fractional perfect matching if and only if I(G) ≥ 1.
Yu and Liu [6] gave another necessary and sufficient condition, in terms of the binding number.
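Before stating that result, note that the condition of Theorem 2 can be checked mechanically on small graphs by enumerating every subset S of V(G) and comparing the number of isolated vertices of G\S with |S|. The following brute-force sketch is ours (illustrative names, exponential in the number of vertices).

```python
from itertools import combinations

def isolated_count(adj, S):
    """i(G\\S): vertices outside S with no neighbor outside S."""
    S = set(S)
    return sum(1 for v in adj if v not in S and not (adj[v] - S))

def has_fractional_perfect_matching(adj):
    """Tutte-type test of Theorem 2: i(G\\S) <= |S| for every S."""
    V = list(adj)
    return all(isolated_count(adj, S) <= len(S)
               for r in range(len(V) + 1)
               for S in combinations(V, r))

# Example: an odd cycle has a fractional perfect matching (weight 1/2 on
# every edge), whereas the star K_{1,3} does not (S = {center} gives i = 3 > 1).
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(has_fractional_perfect_matching(c5))    # True
print(has_fractional_perfect_matching(star))  # False
```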
Theorem 4 Let G be a graph. Then G has a fractional perfect matching if and only if bind(G) ≥ 1.
Proof Suppose that bind(G) ≥ 1. We prove that G has a fractional 1-factor. By Theorem 2, it suffices to prove that i(G\S) ≤ |S| for any S ⊆ V(G). Otherwise, there exists a subset S of V(G) such that i(G\S) > |S|. Obviously, S ≠ V(G), and i(G) = 0 since bind(G) ≥ 1. We have S ≠ ∅ and N(I(G\S)) ⊆ S. Thus |N(I(G\S))| ≤ |S| and N(I(G\S)) ≠ V(G). Furthermore,
bind(G) ≤ |N(I(G\S))| / |I(G\S)| ≤ |S| / i(G\S) < 1.
This contradicts bind(G) ≥ 1. On the other hand, we show that if G has a fractional 1-factor, then bind(G) ≥ 1. Otherwise, there exists a proper subset X of V(G) with X ≠ ∅ such that |N(X)|/|X| < 1, that is, |N(X)| < |X|. Then G[X] has at least one isolated vertex; otherwise, we would have X ⊆ N(X) and |N(X)| ≥ |X|. Set S = N(X)\(N(X) ∩ X). Then
i(G\S) ≥ |X \ (N(X) ∩ X)| = |X| − |N(X) ∩ X| > |N(X)| − |N(X) ∩ X| = |N(X)\(N(X) ∩ X)| = |S|.
By Theorem 2, G has no fractional 1-factor, a contradiction. The proof is complete.
16.3 Fractional Factor-Critical Graph The notion of n-factor-criticality was first introduced and discussed by Yu [7]. Favaron et al. obtained some interesting results in [8], [9]. Nishimura [10] discussed perfect matchings and matching extension of a graph. Plummer and Saito studied closure and factor-critical graphs [11]. Ananchuen and Saito considered factor criticality and complete closure [12]. A graph G is said to be fractional n-factor critical if, after deleting any n vertices, the remaining subgraph still has a fractional perfect matching. For fractional n-factor-criticality, Ma obtained the following results [13].
Theorem 5 Every fractional n-factor critical graph is also fractional n′-factor critical for n > n′ ≥ 0.
Theorem 6 A graph G is fractional n-factor critical if and only if i(G − S) ≤ |S| − n for any S ⊆ V(G) with |S| ≥ n.
The following result was given by Nishimura [10].
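As an aside before Nishimura's theorem, the characterization in Theorem 6 is just as easy to test by brute force on small graphs. The sketch below is ours (illustrative names, exponential enumeration); it checks i(G − S) ≤ |S| − n over all subsets S with |S| ≥ n.

```python
from itertools import combinations

def isolated_count(adj, S):
    """i(G - S): vertices outside S with no neighbor outside S."""
    S = set(S)
    return sum(1 for v in adj if v not in S and not (adj[v] - S))

def is_fractional_n_factor_critical(adj, n):
    """Test of Theorem 6: i(G - S) <= |S| - n for every S with |S| >= n."""
    V = list(adj)
    return all(isolated_count(adj, S) <= len(S) - n
               for r in range(n, len(V) + 1)
               for S in combinations(V, r))

# Example: K5 is fractional 2-factor critical (deleting any 2 vertices leaves
# K3, which has a fractional perfect matching), while C5 is not (deleting two
# adjacent vertices leaves a path P3, which has none).
k5 = {i: set(range(5)) - {i} for i in range(5)}
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(is_fractional_n_factor_critical(k5, 2))  # True
print(is_fractional_n_factor_critical(c5, 2))  # False
```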
Theorem 7 Let G be a connected graph and M be an arbitrary (fixed) maximal matching of G. If G − V (e) is n-factor critical for every e ∈ M, then G is n-factor critical. For fractional n-factor-critical graph and its maximal matching, we have the following [14] Theorem 8 Let G be a connected graph with |V (G)| > n + 2, and M be an arbitrary (fixed) maximal matching of G. If G − V (e) is fractional n-factor critical for every e ∈ M, then G is fractional n-factor critical. Proof Suppose that G − V (e) is fractional n-factor critical for every e ∈ M but G is not fractional n-factor critical. Then there exist a subset R ⊆ V (G) and |R| = n such that G − R has no fractional 1-factor. Therefore, for some S ⊆ V (G) − R, i((G − R) − S) ≥ |S| + 1 holds by Theorem 2. It is sufficient to show that G −V (e) is not fractional n-factor-critical for some e ∈ M and induce a contradiction. Set W = G − (R ∪ S). Claim 1. M ⊂ E(R, G − R). We consider four cases. Case 1. There exists an edge e ∈ M ∩ E(G[R]). In this case, we have i({G −V (e) − [R −V (e)] − S}) = i((G − R) − S) ≥ |S| + 1, which means G − V (e) is not fractional (n − 2)-factor critical; clearly G − V (e) is not fractional n-factor critical by Theorem 5, a contradiction. Case 2. There exists an edge e ∈ E(G[S]) and e ∈ M ∩ E(G − R). In this case, we have i(([G −V (e) − R] − [S −V (e)]) = i((G − R) − S) ≥ |S| + 1. It is a contradiction. Case 3. There exists an edge e ∈ E(G[W ]). In this case, we have i(G −V (e) − R − S) ≥ i(G − R − S) ≥ |S| + 1. It is a contradiction. Case 4. There exists an edge e ∈ E(S,W ) with e = xy and x ∈ S, y ∈ W . We have i([G −V (e) − R] − (S − {x})) = i([G − R] − (S ∪ {y}) ≥ i([G − R] − S) − 1 ≥ |S| = |S| − 1 + 1 = |S − {x}| + 1. It is a contradiction. All contradictions indicate that the claim holds.
Claim 2. S = ∅. Suppose that S ≠ ∅. M ≠ ∅ by the assumption. There exists an edge e = ab ∈ M satisfying e ∈ E(S, R) or e ∈ E(W, R). Let a ∈ S ∪ W and b ∈ R. If e ∈ E(S, R) and W ≠ ∅, then for a vertex c ∈ W,
i([G − V(e)] − (R ∪ {c} − {b}) − (S − {a})) = i((G − R) − (S ∪ {c})) ≥ i((G − R) − S) − 1 ≥ |S| ≥ |S − {a}| + 1.
If W = ∅, then there exists a vertex c ∈ S since |M| ≥ 2, and
i([G − V(e)] − (R ∪ {c} − {b}) − (S − {a} − {c})) = i(G − R − S) ≥ |S| + 1 ≥ |S − {a} − {c}| + 1.
Therefore, G − V(e) is not fractional n-factor critical, a contradiction. If e ∈ E(W, R), then for a vertex d ∈ S, we have
i([G − V(e)] − (R ∪ {d} − {b}) − (S − {d})) = i(G − R − S) ≥ |S| + 1 ≥ |S − {d}| + 1.
Again we obtain that G − V(e) is not fractional n-factor critical, a contradiction. Hence S = ∅.
Claim 3. 1 ≤ i(G − R) ≤ 2. By i(G − R − S) ≥ |S| + 1 and |S| = 0, we have i(G − R) ≥ 1. If i(G − R) = m ≥ 3, let b1, b2, . . . , bm be the isolated vertices of G − R. Without loss of generality, suppose that e = ab1 ∈ M, where a ∈ R. Then we have i(G − V(e) − [R ∪ {b2} − {a}]) = m − 2 ≥ 1, which implies that G − V(e) is not fractional n-factor critical, a contradiction.
Claim 4. Every component of G − R is an isolated vertex. Suppose that C is a component of G − R with |V(C)| > 1. By the connectedness of G and the maximality of M, there exists an edge e = ab ∈ M ∩ E(C, R) with a ∈ C and b ∈ R. Since |V(C)| > 1, for d ∈ C − {a}, all isolated vertices of G − R are still isolated vertices in G − V(e) − (R ∪ {d} − {b}). Thus i(G − V(e) − [R ∪ {d} − {b}]) ≥ i(G − R) ≥ 1, which implies that G − V(e) is not fractional n-factor critical, a contradiction.
Thus, we obtain that G − R has at most two components, each of which is an isolated vertex. Then |V(G)| ≤ n + 2, which contradicts the assumption. This completes the proof.
For a vertex x of a graph G, local completion of G at x is the operation of joining every pair of nonadjacent vertices in NG(x), assuming NG(x) is not complete.
For a property P of graphs, a vertex x in a graph G is said to be P-eligible if the subgraph of G induced by NG(x) satisfies P but is not complete. For a graph G, a graph H is said to be a P-closure of G if there exists a sequence of graphs G = G0, G1, . . . , Gr = H such that Gi is obtained from Gi−1 by local completion at some P-eligible vertex of Gi−1 and H = Gr has no P-eligible vertex. The notion of P-closure sheds new light on perfect matchings and fractional perfect matchings of graphs. We will give some results on fractional n-factor-criticality and some P-closures. A set A ⊂ V(G) is independent if A ∩ N(A) = ∅. The size of a maximum independent set in G is called the independence number of G and is denoted by α(G). A set B ⊂ V(G) is a dominating set if B ∪ N(B) = V(G). The size of a minimum dominating set in G is called the domination number of G and is denoted by γ(G). Let Ind(k) be the class of graphs whose independence number is bounded by k, i.e., Ind(k) = {G : α(G) ≤ k}, and let Dom(k) be the class of graphs whose domination number is bounded by k, i.e., Dom(k) = {G : γ(G) ≤ k}. In the following we study the relations between fractional factor criticality and Ind(k), and between fractional factor criticality and Dom(k), respectively. The results can be found in [14].
Theorem 9 Let G be a graph and H be a spanning subgraph of G. If H is fractional n-factor critical, then G is fractional n-factor critical.
For the relation between fractional n-factor criticality and local completion, we have the following.
Theorem 10 Let G be a graph of order p with vertex set V(G) = {x1, . . . , xp}. Then G is fractional n-factor critical if and only if Gxi is fractional n-factor critical for 1 ≤ i ≤ 1/2(p + n), where Gxi is the graph obtained from G by local completion at xi and n is a positive integer with n < p.
The above theorem is sharp in the sense that if local completion at a vertex xi yields a fractional n-factor critical graph for 1 ≤ i ≤ 1/2(p + n), the original graph may not be fractional n-factor critical. Let G be the join of the complete graph Ks and the empty graph on s − n + 1 vertices, where s is an integer with s > n. Then |V(G)| = 2s − n + 1 and G is not fractional n-factor critical. However, for each vertex x ∈ Ks, Gx is a complete graph of order 2s − n + 1, which is fractional n-factor critical when p ≡ n − 1 (mod 2).
Next we consider the Ind(k)-closure and study how it affects the fractional factor criticality of graphs.
Theorem 11 Let G be an (n + k)-connected graph with n ≥ 1 and k ≥ 1. Let x ∈ V(G) with α(G[NG(x)]) ≤ k, and let G′ be the graph obtained from G by local completion at x. If G′ is fractional (n + k − 1)-factor critical, then G is fractional n-factor critical.
The following theorem shows the effect of the Dom(k)-closure on the fractional criticality of graphs.
Theorem 12 Let s and k be integers with s ≥ 3, k ≥ 2. Let G be an [n + (s − 1)k]-connected graph such that the centers of the induced K1,s’s in G are independent, let x be a vertex of G with γ(G[NG(x)]) ≤ k, and let G′ be the graph obtained from G by local completion at x. If G′ is fractional [n + (s − 2)k − 1]-factor critical, then G is fractional n-factor critical.
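The local completion operation used throughout this section is easy to express in code. The sketch below is our own illustration of the definition (names and the example graph are ours): it joins every pair of nonadjacent vertices in NG(x) and leaves the rest of the graph untouched.

```python
from itertools import combinations

def local_completion(adj, x):
    """Return the graph obtained from G by local completion at x:
    every pair of nonadjacent vertices in N_G(x) is joined by an edge."""
    new = {v: set(nbrs) for v, nbrs in adj.items()}   # copy the graph
    for u, w in combinations(adj[x], 2):
        if w not in new[u]:
            new[u].add(w)
            new[w].add(u)
    return new

# Example: in the 5-cycle C5, local completion at vertex 0 joins its two
# neighbours 1 and 4, turning C5 into a 5-cycle with one chord.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(sorted(local_completion(c5, 0)[1]))  # [0, 2, 4]
```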
16.4 Fractional Deleted Graphs Let G be a graph with vertex set V(G) and edge set E(G). For two disjoint subsets S, T of V(G), EG(S, T) denotes the set of edges with one vertex in S and the other in T, and |EG(S, T)| = eG(S, T). A graph G is called fractional (g, f)-deleted if for each edge e of G there exists a fractional (g, f)-factor Gh such that h(e) = 0. Similarly we can define fractional k-deleted graphs. A graph G is called fractional k-edge-deleted if, after deleting any E′ ⊆ E(G) with |E′| = k, the remaining graph has a fractional perfect matching. Yang and Kang [15] studied fractional (g, f)-deleted graphs and obtained the following.
Theorem 13 Let G be a graph, and let g and f be two integer-valued functions defined on V(G) such that g(x) ≤ f(x) for all x ∈ V(G). Then G is fractional (g, f)-deleted if and only if δG(S, T) ≥ ε(S, T) for all S ⊆ V(G) and T = {x : x ∈ V(G)\S and dG\S(x) ≤ g(x)}, where δG(S, T) = dG\S(T) − g(T) + f(S) and ε(S, T) is defined as follows: ε(S, T) = 2 if T is not independent; ε(S, T) = 1 if T is independent and |E(T, V(G)\(S ∪ T))| ≥ 1; and ε(S, T) = 0 if neither of the preceding two conditions holds.
By Theorem 13, it is easy to obtain the following result.
Theorem 14 Let G be a graph and k > 0 be an integer. Then G is fractional k-deleted if and only if
∑_{j=0}^{k−1} (k − j) Pj(G\S) ≤ k|S| − ε(S, T)
for all S ⊆ V(G) and T = {x : x ∈ V(G)\S, dG\S(x) ≤ k}, where Pj(G\S) denotes the number of vertices in G\S with degree j and ε(S, T) is defined as above.
For complete graphs, Ma [16] obtained the following result.
Theorem 15 Let Kp be a complete graph of order p (p ≥ 4). Then Kp is fractional (p − 2)-edge-deleted.
Yu et al. [17] proved the following.
Theorem 16 Let G be a graph. If t(G) > 1, then G is fractional 1-deleted.
Proof By Theorem 14, we only need to prove that
i(G\S) ≤ |S| − ε(S, T)   (16.1)
holds for all S ⊆ V(G) and T = {x : x ∈ V(G)\S, dG\S(x) ≤ 1}.
Case 1. |S| ≤ 1. Since δ(G) ≥ κ(G) ≥ 2t(G) > 2, we have T = {x : x ∈ V(G)\S, dG\S(x) ≤ 1} = ∅; so ε(S, T) = 0 and i(G\S) = 0 ≤ |S| − ε(S, T). Thus (16.1) holds.
Case 2. |S| = 2. Since δ(G) ≥ 3, we have i(G\S) = 0. As ε(S, T) ≤ 2, i(G\S) = 0 ≤ |S| − ε(S, T), and (16.1) holds.
Case 3. |S| ≥ 3.
Subcase 3.1. ω(G\S) ≤ 1. In this case i(G\S) ≤ ω(G\S) ≤ 1 = 3 − 2 ≤ |S| − ε(S, T). Thus (16.1) holds.
Subcase 3.2. ω(G\S) ≥ 2. Since 1 < t(G) ≤ |S|/ω(G\S), we have |S| > ω(G\S). Suppose that G is not fractional 1-deleted. By Theorem 14, there exists ∅ ≠ S0 ⊆ V(G) with |S0| ≥ 3 such that i(G\S0) > |S0| − ε(S0, T). Let T1 = {x : x ∈ V(G)\S0, dG\S0(x) = 1}, NG\S0(T1) = {y : x ∈ T1, xy ∈ E(G\S0)}, and E(T1) = {e = uv : u ≠ v, u, v ∈ T1}.
Subcase 3.2.1. E(T1) = ∅. In this case ε(S0, T) ≤ 1, and thus i(G\S0) ≥ |S0| by i(G\S0) > |S0| − ε(S0, T). Let V1 = NG\S0(T1); then |V1| ≤ |T1|. Note that
ω(G\(S0 ∪ V1)) ≥ i(G\(S0 ∪ V1)) = i((G\S0)\V1) ≥ i(G\S0) + |T1| ≥ |S0| + |T1|;
hence we have
1 < t(G) ≤ |S0 ∪ V1| / ω(G\(S0 ∪ V1)) ≤ |S0 ∪ V1| / i(G\(S0 ∪ V1)) ≤ (|S0| + |V1|) / (i(G\S0) + |T1|) ≤ (|S0| + |V1|) / (|S0| + |T1|) ≤ 1,
a contradiction.
Subcase 3.2.2. E(T1) ≠ ∅. In this case ε(S0, T) = 2. By the assumption we have i(G\S0) > |S0| − ε(S0, T) = |S0| − 2, so i(G\S0) ≥ |S0| − 1. Since |S0| > ω(G\S0), we have ω(G\S0) ≤ i(G\S0). Thus ω(G\S0) = i(G\S0), since i(G\S0) ≤ ω(G\S0) always holds. This contradicts E(T1) ≠ ∅.
From all the cases above we obtain the desired contradictions. The theorem is proved.
The condition t(G) > 1 in the theorem is best possible. Let G1 be a 5-cycle. Then t(G1) = 1, but G1 is not fractional 1-deleted.
Theorem 17 Let G be a graph. If t(G) ≥ 3/2 and |V(G)| > 3, then G is fractional 2-deleted.
Theorem 18 Let G be a graph with |V(G)| > 3. If κ(G) ≥ k and I(G) ≥ k ≥ 1, then G is a fractional (k − 1)-edge-deleted graph.
Theorem 19 The condition κ(G) ≥ k cannot be reduced to λ(G) ≥ k.
Let k = 2. Consider the graph G1 that consists of the cycles C3 = x1x2x3 and C5 = x1x4x5x6x7 with chords x4x6, x4x7, and x5x7. Clearly κ(G1) = 1, λ(G1) = 2, and I(G1) = 5/2 ≥ 2. Let e = x2x3. Then G1\{e} has no fractional 1-factor, and thus G1 is not fractional 1-edge-deleted.
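The deletion criterion of Theorem 14 can also be verified by enumeration on small graphs. The following sketch is ours, not the authors' (illustrative names, exponential brute force): it evaluates ε(S, T) and the degree-count condition for a given k, and it confirms the remark above that the 5-cycle is not fractional 1-deleted.

```python
from itertools import combinations

def check_fractional_k_deleted(adj, k):
    """Brute-force test of the condition of Theorem 14."""
    V = list(adj)
    for r in range(len(V) + 1):
        for S in combinations(V, r):
            S = set(S)
            rest = {v: adj[v] - S for v in adj if v not in S}     # G \ S
            T = {v for v, nbrs in rest.items() if len(nbrs) <= k}
            # epsilon(S, T) as defined for Theorems 13 and 14
            if any(rest[u] & T for u in T):
                eps = 2                       # T is not independent
            elif any(rest[u] - T for u in T):
                eps = 1                       # an edge joins T and G\(S+T)
            else:
                eps = 0
            lhs = sum((k - j) * sum(1 for v in rest if len(rest[v]) == j)
                      for j in range(k))      # sum over degrees 0..k-1
            if lhs > k * len(S) - eps:
                return False
    return True

c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(check_fractional_k_deleted(c5, 1))   # False: C5 is not fractional 1-deleted
k4 = {i: set(range(4)) - {i} for i in range(4)}
print(check_fractional_k_deleted(k4, 1))   # True
```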
16.5 Fractional Covered Graphs The (g, f)-covered graph was introduced in [18]. A graph is called (g, f)-covered if for each edge of G there is a (g, f)-factor containing it. Li et al. introduced the fractional (g, f)-covered graph in [19] and obtained some basic results. A graph is fractional (g, f)-covered if for each edge e of G there is a fractional (g, f)-factor with indicator function h such that h(e) = 1. For any S ⊆ V(G), let T = {x : x ∈ V(G)\S and dG\S(x) ≤ g(x)}. We define ε(S) as follows: ε(S) = 2 if S is not independent; ε(S) = 1 if S is independent and either there is an edge joining S and V(G)\(S ∪ T), or there is an edge e = uv joining S and T such that v ∈ T and dG\S(v) = g(v); and ε(S) = 0 if neither of the preceding two conditions holds. The following theorem was given in [19].
Theorem 20 Let G be a graph, and let g and f be two integer-valued functions defined on V(G) such that g(x) ≤ f(x) for all x ∈ V(G). Then G is fractional (g, f)-covered if and only if δ(S, T) ≥ ε(S) for all S ⊆ V(G) and T = {x : x ∈ V(G)\S and dG\S(x) ≤ g(x)}, where ε(S) is as defined earlier.
By Theorem 20, it is easy to obtain Theorem 21.
Theorem 21 Let G be a graph and k > 0 be an integer. Then G is fractional k-covered if and only if
∑_{j=0}^{k−1} (k − j) Pj(G\S) ≤ k|S| − ε(S)
for all S ⊆ V(G) and T = {x : x ∈ V(G)\S and dG\S(x) ≤ k}, where Pj(G\S) denotes the number of vertices in G\S with degree j.
For the relations between toughness and fractional 1-covered and 2-covered graphs, we have the following theorem [20].
Theorem 22 Let G be a graph. If G is an even cycle or t(G) > 3/2, then G is fractional 1-covered.
Proof Let k = 1 in Theorem 21. It suffices to prove that
i(G\S) ≤ |S| − ε(S)   (16.2)
holds for all S ⊆ V(G). If G is a complete graph, the theorem obviously holds. In the following we suppose that G is not a complete graph. We also have δ(G) ≥ κ(G) ≥ 2t(G) > 3.
Case 1. |S| ≤ 3. Since δ(G) > 3, we have i(G\S) = 0. As ε(S) ≤ 2, i(G\S) < |S| − ε(S), and (16.2) holds.
Case 2. |S| ≥ 4. Since 3/2 < t(G) ≤ |S|/ω(G\S), we have |S| > (3/2)ω(G\S). Suppose that G is not fractional 1-covered. By Theorem 21, there exists ∅ ≠ S0 ⊆ V(G) with |S0| ≥ 4 satisfying i(G\S0) > |S0| − ε(S0). Since ε(S0) ≤ 2, we have
i(G\S0) ≥ |S0| − 1.   (16.3)
Thus ω(G\S0) ≥ i(G\S0) ≥ |S0| − 1 ≥ 3 by (16.3). Since 3/2 < t(G) ≤ |S0|/ω(G\S0), we have (3/2)ω(G\S0) < |S0|. Thus,
ω(G\S0) + 3/2 ≤ (3/2)ω(G\S0) ≤ |S0| − 1 ≤ i(G\S0),
a contradiction. The theorem is proved.
The condition t(G) > 3/2 in the theorem is best possible. Let G1 be the join of two independent vertices with the path P3. We have t(G1) = 3/2. Let S = V(P3); then i(G1\S) = 2 and ε(S) = 2, so i(G1\S) > |S| − ε(S), and G1 is not fractional 1-covered.
Theorem 23 Let G be a graph. If t(G) ≥ 3/2 and |V(G)| ≥ 3, then G is fractional 2-covered.
The following result was also obtained in [20].
Theorem 24 Let G be a graph with δ(G) ≥ 3. If I(G) > 3/2, then G is fractional 1-covered.
Proof Let k = 1 in Theorem 21. It suffices to prove that
i(G\S) ≤ |S| − ε(S)   (16.4)
holds for all S ⊆ V(G). We consider two cases.
Case 1. |S| < 2. Since δ(G) ≥ 3, T = {x : x ∈ V(G)\S and dG\S(x) ≤ 1} = ∅, and i(G\S) = 0. If S = ∅, then ε(S) = 0 and i(G\S) = 0 ≤ |S| − ε(S); thus (16.4) holds. Otherwise |S| = 1, in which case ε(S) ≤ 1 and i(G\S) = 0 ≤ |S| − ε(S); thus (16.4) holds.
Case 2. |S| ≥ 2.
Subcase 2.1. i(G\S) ≤ 1. Since ε(S) ≤ 2, i(G\S) ≤ 1 = 3 − 2 ≤ |S| − 2 ≤ |S| − ε(S); hence (16.4) holds.
Subcase 2.2. i(G\S) ≥ 2. Since I(G) > 3/2, we have |S|/i(G\S) > 3/2, that is, 2|S| > 3i(G\S), and thus 2|S| − 1 ≥ 3i(G\S). Since i(G\S) ≥ 2, we have 2|S| − 1 ≥ 2i(G\S) + 2. Hence i(G\S) ≤ |S| − 3/2. Moreover, i(G\S) ≤ |S| − 2 since i(G\S) is an integer. Thus i(G\S) ≤ |S| − ε(S), since ε(S) ≤ 2. The theorem is proved.
The condition δ(G) ≥ 3 in the theorem cannot be reduced to δ(G) ≥ 2. Let G1 be the graph that consists of two cycles C1 = v1v2v3 and C2 = v1v4v5v6 with one common vertex v1 and with the chord v4v6 of C2. Clearly δ(G1) = 2 and I(G1) = 2 ≥ 3/2. Let S = {v1, v2}; then ε(S) = 2 and i(G1\S) = 1 > |S| − ε(S). Therefore G1 is not fractional 1-covered. The condition I(G) > 3/2 is best possible. Let G2 be the cycle C5 = v1v2v3v4v5 with chords v1v4, v1v5, and v2v3. Clearly δ(G2) = 3 and I(G2) = 3/2. Let S = {v1, v3, v4}; then ε(S) = 2 and i(G2\S) = 2 > |S| − ε(S). Therefore G2 is not fractional 1-covered.
Let S be a subset of V(G) and M be a matching of G. Let T = {x : x ∈ V(G)\S and dG\S(x) ≤ g(x)}, D = V(G)\(S ∪ T), EG(S) = {e : e = xy ∈ E(G), x, y ∈ S}, E′ = M ∩ EG(S), E″ = M ∩ EG(S, D), H = G[E′ ∪ E″], and
βG(S, M) = 2|E′| + |E″| = ∑_{x∈S} dH(x).
A necessary and sufficient condition for a graph to have a fractional (g, f)-factor covering a given k-matching is given in the following.
Theorem 25 Let G be a graph, let g and f be two integer-valued functions defined on V(G) such that 0 ≤ g(x) ≤ f(x) for all x ∈ V(G), and let M be a matching of G.
Then G has a fractional (g, f )-factor Gh such that h(e) = 1 for every e ∈ M, where h is the indicator function of Gh , if and only if
δG(S, T) = dG\S(T) − g(T) + f(S) ≥ βG(S, M) for any subset S of V(G) and T = {x : x ∈ V(G)\S and dG\S(x) ≤ g(x)}. By Theorem 25, it is easy to obtain the following theorem [13, 20].
Theorem 26 Let G be a graph with a k-matching, k ≠ 0. Then G is fractional k-extendable if and only if i(G\S) ≤ |S| − 2k holds for any S ⊆ V(G) such that G[S] contains a k-matching.
16.6 Fractional Extendable Graph A graph G is called k-extendable if every k-matching can be extended to a perfect matching. Some results on k-extendability can be found in [21]. A graph G is said to be fractional k-extendable if every k-matching M can be extended to a fractional perfect matching Gh with h(e) = 1 for every e ∈ M. In [16], Ma gave the following result.
Theorem 27 Let G be fractional k-extendable, where k ≥ 1 is an integer. Then G is also fractional (k − i)-extendable for every 1 ≤ i ≤ k.
In this section, we give Theorem 28 [22].
Theorem 28 Let G be a connected graph with a fractional perfect matching and let M be an arbitrary (fixed) maximal matching of G. If G\V(e) is fractional k-extendable for every e ∈ M, then G is fractional k-extendable, where k is an integer with k ≥ 2.
Proof Suppose that G\V(e) is fractional k-extendable for every e ∈ M but G is not fractional k-extendable. Then there exists a k-matching M′ of G such that G\V(M′) has no fractional perfect matching. Let R = V(M′). Thus, by Theorem 2, there exists S ⊆ V(G)\R such that
i((G\R)\S) > |S|   (16.5)
holds. It is sufficient to prove that G\V(e) is not fractional k-extendable for some e ∈ M and deduce a contradiction. Let W = (V(G)\R)\S.
Claim 1. M ⊆ EG(R, V(G)\R) ∪ E(G[R]). We consider three cases.
Case 1. There exists an edge e ∈ M ∩ E(G[S]).
In this case, note that M ⊆ G(R ∪ (S\V (e))), and |R| = 2k. We have i((G\V (e))\R\(S\V (e))) = i(G\R\S) > |S| = 2k + |S| − 2k = |R| + |S| − 2k = |R ∪ (S\V (e))| − 2k + 2, which means that G\V (e)\R has no fractional perfect matching. Therefore, G\V (e) is not fractional k-extendable by Theorem 27, a contradiction. Case 2. There exists an edge e ∈ M ∩ E(G[W ]). In this case, we have i([G\V (e)]\R\S) ≥ i(G\R\S) > S. By the same reason used in Case 1, we obtain the desired contradiction that G\V (e) is not fractional k-extendable. Case 3. There exists e = xy ∈ M ∩ EG (S,W ). Let x ∈ S, y ∈ W , then S \ {x} ⊆ V (G) \V (e). Since G \V (e) is fractional k-extendable, we have i([G\V (e)]\R\(S\{x})) ≤ |S\{x}| = |S| − 1. On the other hand, if y is an isolated vertex of G\R\S, then i([G\V (e)]\R\(S\{x})) ≤ |S\{x}| = i(G\R\S) − 1. If y is not an isolated vertex of G\R\S, then i([G\V (e)]\R\(S\{x})) ≥ i(G\R\S). Hence, i([G\V (e)]\R\(S\{x})) ≥ i(G\R\S) − 1. Moreover, by (3), we have i(G\R\S) − 1 > |S| − 1, which is a contradiction. / Claim 2. M ∩ M = 0. If there exists an edge e = xy ∈ M ∩ M , then M \{e} is the (k − 1)-matching of G\V (e). By (2), i((G\V (e)) − (R\V (e))\S) = i(G\R\S) > |S| = |(R\V (G)) ∪ S| − 2(k − 1).
Thus, G\V (e) is not fractional (k − 1)-extendable. Hence, G\V (e) is not fractional k-extendable by Theorem 26, a contradiction. Claim 3. All components of Re (M ∪ M) are alternating paths, where Re (M ∪ M) denotes the set of edges in M ∪ M whose ends are all in R. By Claim 1 and Claim 2, obviously, Re (M ∪ M) induces only odd cycles or alternating paths. Note that the end edges of such an alternating path are in M . Suppose that Re (M ∪ M) contains an even cycle D = a1 a2 . . . a2m−1 a2m a1 . Let M1 = {a2 j a2 j+1 : j = 1, 2, . . . , m} ⊆ M , where a2m+1 = a1 , and M2 = {a2 j−1 a2 j | j = 1, 2, . . . , m} ⊆ M. Note that if G has no fractional perfect matching containing M , then since G[R] = G[Ve (M )] = G(Ve ((M \M1 ) ∪ M2 )) with |M | = |(M \M1 ) ∪ M2 |, G also has no fractional perfect matching containing (M \M1 ) ∪ M2 . By hypothesis and Theorem 26, G\{a1 , a2 } is also fractional (k − 1)-extendable. However, since i(G\{a1 , a2 }\(Ve ((M \M1 ) ∪ (M2 \{a1 , a2 })))\S) = i(G\R\Si(G\R\S) > |S|, Then G\{a1 , a2 } is not fractional (k − 1)-extendable, a contradiction. Thus, claim 3 holds. In the rest of proof, a1 Pa2 denotes alternating path in Re [M M ] with the end vertices a1 and a2 . Claim 4. S = 0. / Suppose that S = 0. / Since M = 0, / there exists some edge e = ab ∈ M, bb ∈ M , that satisfies e ∈ EG (S, R) or e ∈ EG (W, R). Let a ∈ S ∪ W and b ∈ R. Three cases need to be considered. Case 1. E(G[S]) = 0. / If there exists e ∈ M ∩ EG (S, R), then i((G\V (e))\(R\{b, b }) \ ((S\{a}) ∪ {b })) = i(G\R\S) > |S|; hence, G\V (e) is not fractional (k − 1)-extendable. By Theorem 26, G\V (e) is not fractional k-extendable, a contradiction. Thus we have M ∩ EG (S, R) = 0, / and by Claim 1 and the maximality of M, we have E(G[S]) = 0. / / Case 2. EG (S,W ) = 0. Let e1 = ca ∈ EG (S,W ) with a ∈ W and c ∈ S, by the maximality and Claim 1 there exists an edge e2 = ab ∈ M ∩ EG (W, R). Let M0 = (M \ {ab}) ∪ {ca}, then |M| = |M0 |; M0 is also a maximal matching. Hence, by the hypothesis for every e ∈ M0 , we have G\V (e) is fractional k-extendable. Since e1 ∈ M0 , we have i((G\V (e1 ))\R\(S\{c})) = i(G\R\(S ∪ {a})) ≥ i(G\R\S) − 1 > |S| − 1;
thus, G\V (e1 ) is not fractional k-extendable, a contradiction. Case 3. E(S, R) = 0. / Let e = a b ∈ E(S, R), where a ∈ S. Then by the maximality of M, there exists e = ab ∈ M ∩ EG (W, R) or e = ab ∈ M ∩ E(G[R]). Let M0 = (M \ {e}) ∪ {e }, then |M0 | = |M|, M0 is a maximal matching of G, Thus by the hypothesis, we have for e ∈ M0 , G\V (e ) is fractional k-extendable. i((G\V (e ))\(R\{b, b })\((S\{a } ∪ {b })) = i(G\R\S) ≥ i(G\R\S) − 1 > |S|; thus, i(G\V (e )) is not fractional (k − 1)-extendable, a contradiction. From all the cases above, we have S = 0. / The claim holds. Claim 5. 1 ≤ i(G\R) ≤ 2. By the hypothesis G is not fractional k-extendable, we have i(G\R) > |R| − 2k = 0 by Theorem 26. Thus, i(G\R) ≥ 1. If i(G\R) ≥ 3, by the connectedness of G, the maximality of M, and Claim 1, there must exist e = ab ∈ [M ∩ EG (W, R)] with a ∈ W, b ∈ R, and bb ∈ M . By the hypothesis, for every e ∈ M, G\V (e) is fractional k-extendable, and G\V (e) is also fractional (k − 1)-extendable by Theorem 26. Thus, i(G\V (e)\(R\{b, b })) ≥ i(G\R) − 2 ≥ 1, a contradiction. Therefore, i(G\R) ≤ 2. Claim 6. Every component of G\R is an isolated vertex. Suppose that C is a component of G\R and |V (C)| > 1. By the connectedness of G, the maximality of M, and Claim 1, there exists an edge e = ab ∈ M E(C, R) with a ∈ C, and b ∈ R, and there exists e = a a ∈ E(C). By the maximality of M there must exist e0 = a d ∈ M ∩ EG (C, R) with d ∈ R. By Claim 3, there exists an alternating path dPb. Let M0 = M\(M ∩ dPb)\{e}\{e0 } ∪ (M ∩ dPb) ∪ {e }. We have |M0 | = |M|, thus M0 is also a maximal matching of G. By the hypothesis for every e ∈ M0 , G\V (e ) is fractional k-extendable, therefore, i(G\V (e )\R) ≥ i(G\R) ≥ 1, a contradiction. Thus |V (C)| = 1, the claim holds. By Claim 5 and Claim 6 we know that G\R has at most two components, each of which is an isolated vertex. Thus by the assumption for arbitrary e ∈ M, G\V (e) is fractional k-extendable, we have |V (G)| ≥ 2k + 2.
Let a, b be the two isolated vertices of G\R, M = {e1 , e2 , . . . , ek }, e1 = cc , and e2 = ff , and ek = dd . By the claims above and the maximality of M, we have Re [M ∪ M ] ∪ M induces an alternating path whose end edges are a and b. Let e = ca ∈ M; since c is in R, for the alternating path cPd in Re [M ∪ M ], there exists d b ∈ M. Case 1. e0 = c b ∈ / EG (R,W ). In this case we have i(G\V (e)\(R\{c c})) = i(G\R) = 2; thus, G\V (e) is not fractional k-extendable, a contradiction. Case 2. e0 = c b ∈ E(R,W ). Let M0 = M\(M cPd )\{d b} ∪ (M \{e1 )} ∪ {e0 }. Thus |M0 | = |M|, and M0 is the maximal matching of G. By the hypothesis, for every e ∈ M0 , we have G\V (e) is fractional k-extendable. However, i(G\{d , d}\(R\{d , d})) = i(G\R) = 2; hence, G\{ek } with ek ∈ M0 is not fractional (k − 1)-extendable, and G\{ek } is not fractional k-extendable by Theorem 27, a contradiction. All the cases show that i(G\R) < 2, then |V (G)| < 2k + 2, which contradicts to the assumption. We complete our proof.
16.7 Conclusion Fractional graph theory is a relatively new branch of graph theory. Several interesting results in this area, such as results on fractional Hamiltonian graphs and fractional colorings, can be found in [2]. More results and problems on fractional factors can be found in [23, 24].
References 1. J. A. Bondy and U. S. R. Murty (1976) Graph theory with applications. Macmillan Press Ltd, New York. 2. E. R. Scheinerman and D. H. Ullman (1997) Fractional graph theory. John Wiley and Sons, Inc., New York. 3. V. Chvátal (1973) Tough graphs and hamiltonian circuits. Discrete Mathematics 5: 215–228. 4. J. B. Yang, Y. H. Ma, and G. Z. Liu (2001) Fractional (g, f)-factors of graphs. Appl. Math. J. Chinese Univ. 26(4): 385–390. 5. D. R. Woodall (1990) k-factor and neighborhood of independent sets in graphs. J. London Math. Soc. 41: 385–392.
6. J. G. Yu and G. Z. Liu (2004) Binding number and minimum degree conditions for graphs to have fractional factors. J. Shandong University 39(3): 1–5. 7. Q. L. Yu (1993) Characterizations of various matchings in graphs. Australas. J. Combin. 7: 55–64. 8. O. Favaron (1996) On n-factor-critical graphs. Discussiones Mathematicae Graph Theory 16: 41–51. 9. O. Favaron and M. Shi (1998) Minimally k-factor-critical graphs. Australasian Journal of Combinatorics 17: 89–97. 10. T. Nishimura (2000) On 1-factors and matching extension. Discrete Math. 222: 285–290. 11. M. D. Plummer and A. Saito (2000) Closure and factor-critical graphs. Discrete Math. 215: 171–179. 12. N. Ananchuen and A. Saito (2003) Factor criticality and complete closure of graphs. Discrete Math. 265: 13–21. 13. Y. H. Ma and G. Z. Liu (2004) Some results on fractional k-extendable graphs. J. Engin. Math. 21(4): 567–573. 14. J. G. Yu, Q. J. Bian, G. Z. Liu, and N. Wang (2007) Some results on fractional n-factor-critical graphs. J. Appl. Math. & Computing (to appear). 15. J. B. Yang and W. M. Kang (2000) Fractional (g, f)-covered graphs and fractional (g, f)-deleted graphs. Proceedings of the 6th National Conference of the Operations Research Society of China, Global Link Publishing Company, pp. 450–454. 16. Y. H. Ma (2002) Some results on fractional factors of graphs. Ph.D. Thesis, Shandong University, Shandong, China. 17. J. G. Yu, N. Wang, Q. J. Bian, and G. Z. Liu (2007) Some results on fractional deleted graphs. OR Transactions 11(2): 65–72. 18. G. Z. Liu (1988) On (g, f)-covered graphs. Acta Math. Scientia 8(2): 181–184. 19. Z. P. Li, G. Y. Yan, and X. S. Zhang (2003) On fractional (g, f)-deleted graphs. Math. Appl. 16(1): 148–154. 20. J. G. Yu, N. Wang, and B. X. Cao (2006) Some results on fractional covered graphs. Lecture Notes in OR 6: 334–341. 21. M. D. Plummer (1994) Extending matchings in graphs: A survey. Discrete Math. 31: 277–292. 22. J. G. Yu, N. Wang, X. J. Feng, and G. Z. Liu (2007) A note on fractional extendable graphs. IMECS 2007, Vol. 2: 2270–2273. 23. G. Z. Liu and X. Zhang (2006) Fractional factors and fractional Hamiltonian graphs. Advances in Mathematics 35(3): 257–264. 24. J. G. Yu, G. Z. Liu, B. X. Cao, and M. J. Ma (2006) A degree condition for graphs to have fractional k-factors. Advances in Mathematics 35(5): 621–628.
Chapter 17
Correlation Functions for Dynamic Load Balancing of Cycle Shops Claudia Fiedler and Wolfgang Meyer
17.1 Problem Statement A cycle shop is a special job shop where all jobs (or processes P) obey different sequences of operations on the machines. In contrast to a flow shop, some operations can be repeated on some machines a number of times. The sequence of operations on the machines in a cycle shop can be nicely depicted by a cyclogram [1]. Figure 17.1(a) shows an example for a five-machine work cell, where the machines M1, . . . , M5 are linearly arranged to form a flow line. We now insert an additional operation on an additional common machine (T) between each two consecutive operations in each process. In general, this common server T is called an input–output resource, as contrasted to the processing resources M [2]. In manufacturing, T is the transport system or a flexible robot that performs transport and loading operations among the different machines M1, . . . , M5. Figure 17.1(b) shows the cyclogram for a cycle shop of rank 1 (number of cycle machines) and multiplicity 6 (maximum number of cycles per machine) as it evolves from Fig. 17.1(a). The cyclograms above display the logical structure of a production site but are stripped of all time information that is needed to model the control problem and to optimize the shop performance. Shop performance is measured in terms of order flow time f, work-in-process WIP, and productivity P = WIP/f. These parameters are conveniently represented in the work-cell throughput diagram of Fig. 17.2(a). The slope of the output curve equals the productivity P, which is limited by the maximum capacity of the workstations contained in the cell. If the input curve (start of production) and the output curve (end of production per process) are parallel, we speak of steady-state or stationary behavior in the long run. This does not necessarily mean balanced operation for each workstation inside the cell, however.
Claudia Fiedler and Wolfgang Meyer, University of Technology, 21071 Hamburg, Germany
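As a small illustration of these performance measures, the throughput-diagram quantities can be computed directly from the release and completion times of the orders. The sketch below is ours, not the authors' (the order data are invented): it derives the average flow time f, the time-averaged WIP, and the productivity P = WIP/f.

```python
# Illustrative sketch: throughput-diagram quantities from order
# release/completion times (all data invented for the example).

def performance(orders):
    """orders: list of (release_time, completion_time) pairs."""
    flow_times = [end - start for start, end in orders]
    f = sum(flow_times) / len(flow_times)              # average flow time
    horizon = max(end for _, end in orders) - min(start for start, _ in orders)
    # time-averaged WIP = total work content / observation horizon
    wip = sum(flow_times) / horizon
    p = wip / f                                        # productivity P = WIP / f
    return f, wip, p

orders = [(0, 4), (1, 5), (2, 7), (4, 8), (5, 10)]     # five processes P1..P5
f, wip, p = performance(orders)
print(f"f = {f:.2f}, WIP = {wip:.2f}, P = {p:.2f} orders per time unit")
```

Note that, with these definitions, P also equals the number of completed orders divided by the observation horizon, which is the slope of the output curve in the throughput diagram.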
Fig. 17.1 Cyclograms for (a) flow shop and (b) cycle shop
In optimizing a work cell, the flow time should be minimized, the productivity should be maximized, and WIP should be balanced. In a balanced shop, the work load is equally distributed over all resources, preferably at a high level of utilization. This balance is very sensitive to the sequence of incoming orders, of course. The dynamic load-balancing problem is then formulated as a two-fold scheduling and design problem:
1. Depending on the incoming orders P1, P2, P3, . . . , schedule the transport operations in such a way as to maximize shop productivity or, equivalently, to minimize the sum of order release intervals Σ vi (Fig. 17.2).
2. If the transport system is the bottleneck resource, increase the transport capacity by providing additional transporting units until the bottleneck has moved to some other resource.
Fig. 17.2 (a) Work-cell throughput diagram. (b) Work content diagram: work in process (WIP), productivity (P), average flow time ( f ), process release intervals (vi )
There are a few fundamental rules that facilitate the solution of this problem. The most important one refers to the variance of orders or processes to be sent to the system: The processes should be homogeneous with respect to their routing and their processing times in order to enhance flowlike behavior of the cell. In Fig. 17.2, for instance, the flow times of P1. . .P6 do not differ too much. In this case, by proper choice of v2 , . . . , v6 , a symmetric WIP curve is obtained over time [Fig. 17.2(b)]. The throughput diagram of Fig. 17.2(a) and the work content diagram of Fig. 17.2(b) contain the same information but focus at different aspects. First of all, we may learn from Fig. 17.2(b) that WIP(t) is by far not constant over time. On the contrary, the diagram adjusts for reality, as work starts at some time in the morning and finishes later the day. It may be daring to talk about stationary or balanced operation under such conditions. However, load balance is a dynamic parameter. It describes the relation of input and output curves for a production system, which are time derivatives themselves. Therefore, the symmetry of Fig. 17.2(b) is a sufficient condition for balanced behavior, and the diagram can be used as a tool for estimating rough-guess schedules at work-cell level. Figure 17.2 does not support load balancing at workstation level, however. Here, the real problem, i.e., resource sharing, occurs. Each job concurrently being processed inside the work cell calls for the same set or at least subset of resources, especially of cycle resources, e.g. the transport system. In fact, the processes P1, . . . , P6 in Fig. 17.2 are only coupled by the mutually exclusive constraints posed by the common cycle resource. In our approach, the coupling is modeled by collision functions (which should be termed noncollision functions). These multi-dimensional functions are linear superpositions of cross-correlations for each combination of different processes underway in the system. The system design problem is then solved by proper design of correlation functions, and the scheduling problem is solved by pattern matching of these collision functions. In the following, we extend the collision model to a systematic approach for deriving deterministic schedules for coupled discrete processes from first principles, without search. We present a theory on how to transform deterministic process plans expressed as Gantt charts into correlation functions and how to construct nonperiodic schedules from the resulting collision functions. We elaborate on a representation for collision avoidance recently put forward by us in [2] and apply it to dynamic scheduling and load balancing of cyclic work cells. In our no-wait model, no buffers may exist at machines, and the transporting units are not allowed to wait when being in the loaded state.
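To illustrate the cross-correlation idea, the sketch below is our own minimal version, not the authors' implementation (the occupancy data and names are invented): the transporter occupancy of each process is sampled as a 0/1 vector over time slots, and a discrete cross-correlation then counts, for every candidate release offset v, how many slots would require the shared transporter twice. Offsets with correlation zero are collision-free.

```python
# Our illustrative sketch of a collision function: cross-correlate the
# transporter-occupancy patterns of two processes over the release offset v.
# 1 means "this process needs the transporter in that time slot".

def collision_function(occ_a, occ_b, max_offset):
    """For each offset v, count time slots in which both processes would
    occupy the shared transporter if B is released v slots after A.
    (The tail of B beyond A's horizon is ignored in this toy version.)"""
    coll = {}
    for v in range(max_offset + 1):
        coll[v] = sum(a * b for a, b in zip(occ_a[v:], occ_b))
    return coll

occ_p1 = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]   # transport slots of process P1
occ_p2 = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]   # P2 has the same pattern here

coll = collision_function(occ_p1, occ_p2, 5)
feasible = [v for v, c in coll.items() if c == 0]
print(coll)      # {0: 4, 1: 0, 2: 0, 3: 3, 4: 0, 5: 0}
print(feasible)  # release offsets v with no transporter conflict
```

For several concurrent processes, one such function per process pair would be evaluated, and their superposition plays the role of the multi-dimensional collision function described above.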
17.2 Load-Balancing Systems: State of the Art

Load balancing copes with the constraints of the real world by sequencing operations and allocating resources to operations in such a way as to optimize time performance measures at an equal work load for the resources, so as to avoid bottlenecks. Dynamic load balancing has long been a topic in computer science [3]. During the
last decade, many investigations into logical (untimed) resource allocation systems were conducted under the label of RAS [4]. For (timed) cycle shops, related work has been done in the context of robotic scheduling [5] and of hoist scheduling for automated electroplating lines [6]. The load-balancing problem is already nondeterministic polynomial-time hard (NP-hard) for robotic flow shops with more than two machines and two or more different processes, not to speak of cycle shops. See the most recent handbook of scheduling for a wealth of information [7]. Most load-balancing algorithms rely on the integer programming (IP) or constraint programming (CP) paradigm or a combination of both, often based on some kind of prohibited-interval rule. Simulation, heuristics, and sometimes exhaustive search have been used as well. However, no constructive analytical tool exists for real-size NP-hard problems, of course. In support of short-term planning and lookahead algorithms, therefore, deterministic correlation theory has been adopted for periodic flow-shop scheduling [2]. We extend this theory to dynamic load balancing of cycle shops in the following.
17.3 Process Plan and Resource Model

Load balancing at the workstation level is performed with the help of the resource Gantt chart (see Fig. 17.3). The Gantt chart allocates process operations to process resources along the time axis. In Fig. 17.3, five machine operations per process are performed by five different machines M1, . . . , M5, whereas six transport operations are devoted to one single transporting unit T. If more processes P2, P3 enter the system in addition to P1, the transport system gets busier, i.e., the upper row of Fig. 17.3 gets crowded, eventually leading to collisions.
Fig. 17.3 Process resource Gantt chart for three processes (P1, P2, P3): M1, . . . , M5 machines, T transport system
Fig. 17.4 Transport resources and operations Gantt chart (extracted from Fig. 17.3) for (loaded) transport operations only
Additionally, empty moves among different processes become a necessity. In the combined process–resource Gantt chart of Fig. 17.4, the transport operations have been extracted from Fig. 17.3. If we add the empty moves to the chart (dashed bars in Fig. 17.5), then the prescribed process plan from Fig. 17.4 cannot be realized by one transporter only, and we must provide a second one, T2. Another possibility to relax the situation for the crowded transport system is rescheduling, that is, increasing the release intervals v2 and v3 for processes P2 and P3 in Fig. 17.3 at the cost of productivity. This is in fact the design and scheduling task: to optimize the trade-off between productivity and flexibility under resource constraints, in other words, to construct robust schedules for flexible plants. This is a tedious task in the time domain t. Therefore, we transform the problem into the schedule domain v with the help of correlation functions in the next section.
Fig. 17.5 Transport resources and operations Gantt chart (extracted from Fig. 17.3) for move operations (dashed bars), T23 transport operation from machine M2 to M3, and T transport system
17.4 Theory of Correlation Scheduling

17.4.1 Two Processes Being Sent to the Plant

The release interval v2 in Fig. 17.3 was determined by shifting process P2 along the time axis to the right until no more overlaps occurred among the operations of P1 and P2, at each resource or each line of the Gantt chart, respectively. This shift mechanism is described mathematically by the collision function CO12(v) in (17.1). Here, P1(t) and P2(t) are two binary time functions representing the operations (bars) of the respective Gantt charts:
CO12(v) = ∫ P1(t) · P2(t − v) dt    (17.1)
Figure 17.6 visualizes (17.1) for two operations P1(t) and P2(t) according to Fig. 17.6(a). In Fig. 17.6(b), P2 is shifted to the right for three different values of v. In Fig. 17.6(c), P1 (unshifted) and P2 (shifted) are multiplied. In Fig. 17.6(d), the overlap among P1(t) and P2(t − v) is quantified by integration. In Fig. 17.6(e), CO12 (v) is drawn pointwise along the v axis for three shift values of v including v = 0 and v = vmin . Thereby, the collision function measures the overlap of P1 and P2. Mathematically, the collision function is an integral transformation from time space t into the space of release intervals v. It resembles the well-known correlation
Fig. 17.6 Collision function CO12 (v) for three values of the process release interval v
function CC12 (v) (17.2) and the convolution integral CON12 (v) (17.3), but it is not the same:
CC12(v) = ∫ P1(t) · P2(t + v) dt    (17.2)

CON12(v) = ∫ P1(t) · P2(v − t) dt    (17.3)
For CO12(v) = 0, both processes P1 and P2 do not interfere. In other words, for collision factors other than zero, collisions take place. The set of release intervals v2 for the second process P2, which guarantees collision-free behavior, is

V2 = {0, v2 | CO12(v2) = 0}    (17.4)
17.4.2 Three Processes Being Sent to the Plant

In Fig. 17.3, three processes have been sent to the plant consecutively with a schedule of v3 = (0, v2, v3). Now, collisions among three processes must be considered, which leads to the two-dimensional collision function CO3(v2, v3):
CO3(v2, v3) = ∫ [P1(t) + P2(t − v2)] · P3(t − v2 − v3) dt
            = ∫ P1(t) · P3(t − v2 − v3) dt + ∫ P2(t − v2) · P3(t − v2 − v3) dt
            = CO13(v2 + v3) + CO23(v3)    (17.5)
The respective set of feasible schedules is

V3 = V2 ∩ {v3 | CO3(v2, v3) = 0} = {0, v2, v3 | CO12(v2) = 0 ∧ CO23(v3) = 0 ∧ CO13(v2 + v3) = 0}    (17.6)
17.4.3 Generalization to n Processes

For n different processes Pn being handled by the system in parallel, the n-dimensional collision function reads as
COn(v2, v3, . . . , vn) = ∫ [P1(t) + P2(t − v2) + P3(t − v2 − v3) + · · · + Pn−1(t − v2 − · · · − vn−1)] · Pn(t − v2 − v3 − · · · − vn) dt    (17.7)

The respective set of feasible schedules is

Vn = Vn−1 ∩ {vn | COn(v2, v3, . . . , vn) = 0}
   = {0, v2, v3, . . . , vn | CO12(v2) = 0 ∧ CO23(v3) = 0 ∧ · · · ∧ CO(n−1)n(vn) = 0 ∧
      CO13(v2 + v3) = 0 ∧ CO24(v3 + v4) = 0 ∧ · · · ∧ CO(n−2)n(vn−1 + vn) = 0 ∧
      CO14(v2 + v3 + v4) = 0 ∧ · · · ∧ CO(n−3)n(vn−2 + vn−1 + vn) = 0 ∧
      . . .
      CO1n(v2 + v3 + · · · + vn) = 0}    (17.8)
Equation (17.8) is the key to the solution of the load-balancing problem and to the job-shop scheduling problem in general. The n-dimensional solution space for the n-dimensional schedules (vectors) vn = (0, v2 , v3 , . . . , vn ) is completely expressed by binary collision functions according to (17.1). We present a special solution algorithm in the next section and apply it to load balancing.
17.5 Dynamic Scheduling

17.5.1 Collision Functions

For demonstration, we restrict ourselves to the case where five processes are present in the system at a time. For this case, equation (17.8) reads as

V5 = {0, v2, v3, v4, v5 | CO12(v2) = 0 ∧ CO23(v3) = 0 ∧ CO34(v4) = 0 ∧ CO45(v5) = 0 ∧
     CO13(v2 + v3) = 0 ∧ CO24(v3 + v4) = 0 ∧ CO35(v4 + v5) = 0 ∧
     CO14(v2 + v3 + v4) = 0 ∧ CO25(v3 + v4 + v5) = 0 ∧
     CO15(v2 + v3 + v4 + v5) = 0}    (17.9)
For the five processes P1, . . . , P5 of Table 17.1, a small selection of the collision functions (CO12, CO13, CO23) is shown in Fig. 17.7. For the remaining combinations (CO14, CO15, CO24, . . .), the functions and the sets of possible release intervals V2 (17.4) look similar. The processes are of the rank 1, multiplicity 6 type with constant travel and move times. This is not a necessary condition, however: multiple-rank process plans with different travel and move times can be treated with the
Table 17.1 Process plans for the running example

Resources   Operations   Processing times
                         P1     P2     P3     P4     P5
M1          m1           200    320    390    120    360
M2          m2           160    390    120    360    280
M3          m3           500    110    390    120    50
M4          m4           120    220    120    140    50
M5          m5           260    40     120    120    120
T           tr           20     20     20     20     20
            mo           10     10     10     10     10
Flow time f              1370   1210   1270   990    990
Fig. 17.7 Collision functions COi j (v) and set of feasible schedules V2 according to equations (17.1) and (17.4) among (a) P1, P2; (b) P1, P3; (c) P2, P3
collision method in the same way. In fact, every type of Gantt chart can be represented and analyzed by means of collision functions as presented here.
17.5.2 Scheduling Procedure

Once the collision functions COij(v) have been calculated (or the inverse cross-correlations CCji(v), which are the same; compare (17.1) and (17.2)), equation (17.9) is completely determined and ready for solution. In most scheduling applications, we optimize schedules with respect to some manufacturing parameter, for instance, the smallest order throughput time. In this case, equation (17.8) is the set of (nonlinear) conditions or functions gi that constrains the nonlinear optimization problem of (17.10) and (17.11):
Fig. 17.8 Dynamic scheduling by constraining future release intervals. Process examples from Table 17.1
Minimize
    F(v) = Σ_{i=2}^{n} vi    (17.10)
subject to
    gi(v) = 0,   i = 2, 3, . . . , n,   v = (0, v2, . . . , vn), v ∈ V    (17.11)
Here, the decision vector v is the schedule to be optimized. It is composed of the sequence of release intervals vi: v = (0, v2, v3, . . . , vn). F(v) is a linear goal function, though in general, F(v) may be nonlinear. The constraint set of gi (17.11) or Vn (17.8) acts as follows: The second process P2 can enter the work cell only if CO12(v2) = 0 is fulfilled. Then, the third process can enter only if no collisions occur with the second process, meaning CO23(v3) = 0, and with the first one, meaning CO13(v2 + v3) = 0. Then, the fourth process can enter only if no collisions take place with the third one, CO34(v4) = 0, with the second one, CO24(v3 + v4) = 0, and with the first one, CO14(v2 + v3 + v4) = 0. By this repetitive planning procedure, the space of admissible release intervals is constrained stepwise by taking choices about the values of vi, as in any planning. The shifting of the collision functions is illustrated in Fig. 17.8 for the running example of Table 17.1. The schedule is determined by a greedy algorithm, which selects the
Fig. 17.9 Optimum schedule for the five processes P1, . . . , P5 from Table 17.1; Gantt chart for one transport system T
smallest possible value for vi at each step. The respective Gantt chart is shown in Fig. 17.9. The optimum schedule is v = (0, 250, 370, 470, 170). Average flow time is f = 1166, and average productivity is P = WIP/ f = 5/1166. The overall throughput time for the whole set of processes P1, . . . , P5 amounts to 2240.
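To make the shift-and-test mechanism concrete, the following Python sketch computes a discrete collision function from binary resource-occupation profiles and selects release times greedily, in the spirit of (17.1) and (17.8); the function names, the unit time grid, and the search step are illustrative assumptions rather than part of the original formulation.

```python
import numpy as np

def collision(p1, p2, v):
    """CO_12(v): overlap of the binary occupation profiles p1(t) and p2(t - v)
    of a shared resource, sampled on a unit time grid (cf. Eq. (17.1))."""
    n = max(len(p1), len(p2) + v)
    a = np.zeros(n)
    a[:len(p1)] = p1
    b = np.zeros(n)
    b[v:v + len(p2)] = p2
    return float(np.sum(a * b))

def greedy_release_times(profiles, step=10, horizon=10000):
    """Release each new process at the smallest time for which it collides with
    none of the processes already in the system (the pairwise zero conditions
    of Eq. (17.8), accumulated step by step)."""
    starts = [0]                                   # absolute release time of P1
    for k in range(1, len(profiles)):
        t = starts[-1]
        while t <= horizon:
            if all(collision(profiles[i], profiles[k], t - starts[i]) == 0
                   for i in range(k)):
                starts.append(t)
                break
            t += step
        else:
            raise RuntimeError("no feasible release time within the horizon")
    return starts
```

Applied to occupation profiles of the single transporter T built from Table 17.1, such a greedy search reproduces the kind of schedule shown in Fig. 17.9; the release intervals are simply the differences of consecutive start times.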
17.6 Load Balancing

17.6.1 Load Balancing at System Level

The system considered is a production system at work-cell level. A work cell is the combination of resources that are devoted to special operations and aggregated as a cell. A work cell consists of two types of resources that form two subsystems: processing units (workstations or machines M) and transporting units (T). Figure 17.10(a) shows the aggregation hierarchy of resources, and Fig. 17.10(b) shows the class hierarchy of processes, also called the functional hierarchy. Both hierarchies differ but are matched: in a properly designed system, the functional structure of processes and operations is realized by an appropriate aggregational structure of resources. At system level, the work-cell throughput diagram of Fig. 17.2(a) applies. The best balanced version of Fig. 17.2(a) is shown in Fig. 17.11. It is based on the assumption that in the long run, the work cell's flow time f is the average of the process flow times fi:

f = (1/n) Σi fi ,   i = 2, 3, . . . , n    (17.12)
Fig. 17.10 System structure: (a) Aggregation hierarchy, (b) function hierarchy, machine operations m(t), transport operations tr(t), move operations mo(t)
Fig. 17.11 Balanced work-cell throughput diagram
The slope of the output curve is the productivity P = WIP/f; in the case exemplified by Fig. 17.2, P = n/f = 6/f. Therewith, the balanced release intervals vi amount to

vi = f/n + (fi−1 − fi)/2 ,   i = 2, 3, . . . , n    (17.13)

Unfortunately, equation (17.12) is a rather crude assumption. It does not consider the coupling among the processes through the mutual exclusion of common limited resources, as discussed before. In particular, it does not address the question of whether the schedule according to (17.13) is free of collisions. This question can only be answered by a model at workstation level, of course.
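As a worked illustration of (17.12) and (17.13) (our own computation, not taken from the chapter), the balanced release intervals for the flow times of Table 17.1 can be obtained as follows:

```python
# Flow times f_i of P1..P5 from Table 17.1
f = [1370, 1210, 1270, 990, 990]
n = len(f)
f_bar = sum(f) / n                       # average flow time, Eq. (17.12)

# Balanced release intervals v_i = f_bar/n + (f_{i-1} - f_i)/2, Eq. (17.13)
v = [0.0] + [f_bar / n + (f[i - 1] - f[i]) / 2 for i in range(1, n)]
print(f_bar, v)   # 1166.0, [0.0, 313.2, 203.2, 373.2, 233.2]
```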
17.6.2 Load Balancing at Subsystem Level

The bottleneck resource in our application is supposed to be the transport system. For a heavily loaded transport system, the respective collision functions are COij(v) ≠ 0 nearly everywhere, especially if the move operations are considered. By deleting some transport operations from T1 and allocating them to a second transporter T2, as done in Fig. 17.5, we ease the situation. The idea of correlation balancing, then, is to separate one crowded collision function into two or more less occupied ones, which leave more space for flexible scheduling and plan optimization. Figure 17.12 illustrates the result. Figure 17.12(a), on the left side, shows the set V3 of feasible schedules for the third process P3 to be released to the work cell with one transporting unit (Fig. 17.9). Figure 17.12(b) is the set V3 for the same work cell, now equipped with three transporting units T1, T2, T3. As expected, the possibility space for future planning decisions is much wider in the second case (right-hand side of Fig. 17.12), as is reflected in the large black regions of V3, compared with the first case (left-hand side of Fig. 17.12). The collision function CO3(v3) in Fig. 17.12(b) belongs to the load-balanced Gantt chart of Fig. 17.13, which is the counterpart to Fig. 17.9. The Gantt chart was derived as explained, with an additional condition relaxing the transporter work load.
Fig. 17.12 Collision functions and set of feasible schedules for third process P3 to be released to the work cell for (a) one transporting unit T and (b) three transporting units T1, T2, T3
The schedule obtained in Fig. 17.13 is v = (0, 220, 340, 440, 140). The overall throughput time is only slightly shorter than in Fig. 17.9 and amounts to 2120. The improvement for each release time is equivalent to the length of one transport operation and is only possible if two transporters are allowed to load and unload a workstation at the same time. Here, the transporter is not the bottleneck. If we triple the move and transport times to 30 and 60 seconds, respectively, the transporter definitely becomes the bottleneck.
Fig. 17.13 Optimum schedule for the five processes P1, . . . , P5 from Table 17.1. Gantt chart for three transporting units T1, T2, T3
Fig. 17.14 Collision functions for the unbalanced work cell with one transporting unit T
Figure 17.14 shows the respective collision functions for each new process if we require the collision function to be zero at the smallest possible release time. The arrows mark the chosen release times. The set of release times is then v = (0, 930, 1090, 750, 870). The overall throughput time (or makespan) amounts to 4890 seconds. Figure 17.15 displays the related Gantt chart.
Fig. 17.15 Gantt chart for the unbalanced work cell with one transporting unit
Fig. 17.16 Collision functions for local minima as release times with one transporting unit T
There is additional information represented by the collision functions that can be used for load balancing: the value of the function is a measure of the degree of overlap among operations. In other words, local minima of CO(v) refer to less critical conditions, i.e., a few collisions, maybe only one. A local minimum of CO(v) then implies that the remaining collisions can be resolved by as few additional transporters as possible. The idea is to choose a local minimum as the release time and to allocate the colliding transport operations to a second transporter. The value of the chosen minimum is bounded by the maximum of the workstation collision function plus one transporting time if we do not allow loading and unloading a workstation at the same time. Figure 17.16 shows the collision functions for that case. As a possible schedule, we use v = (0, 350, 470, 570, 310). The related Gantt chart is shown in Fig. 17.17. The throughput time is now 2950 seconds with one more transporter. A comparison of Figs. 17.15 and 17.17 shows the great improvement obtained by adding only one more transporter. With the second transporter, the chosen local minima in the collision functions are forced to zero and become global minima (Fig. 17.18).
Fig. 17.17 Balanced Gantt chart with two transporting units T1 and T2
Fig. 17.18 Collision functions for the balanced work cell with two transporting units T1 and T2
17.7 Conclusion

The main characteristics of cycle shops are loops in the transport routing and the resulting complex transport schedules, especially if the transport system forms the bottleneck of the work cell or manufacturing site. With the collision functions presented in this paper, the impact of each loop on future planning and design decisions can
be conveniently analyzed. The underlying deterministic correlation theory was applied to the dynamic load-balancing problem of job shops with batch-one no-wait production, i.e., for sequences of different jobs and no buffers at the machines. The extension to probabilistic processes is possible as well.
References

1. V.G. Timkovsky (2004) Cycle shop scheduling. In Handbook of scheduling. Chapman & Hall, Boca Raton, FL, pp. 7-1–7-22.
2. W. Meyer and C. Fiedler (2006) Auto correlation and collision avoidance in robotic flow shops. Proceedings of the 45th IEEE Conference on Decision and Control CDC 2006, San Diego, December.
3. C.A. Kohring (1995) Dynamic load balancing for parallelized particle simulations on MIMD computers. Parallel Computing, 21: 683–693.
4. S.A. Reveliotis (2002) Liveness enforcing supervision for sequential resource allocation systems. In Synthesis and control of discrete event systems. Kluwer Academic Publishers, Boston, Dordrecht, London, pp. 203–212.
5. J. Blazewicz, N. Brauner, and G. Finke (2004) Scheduling with discrete resource constraints. In Handbook of scheduling. Chapman & Hall, Boca Raton, FL, pp. 23-1–23-18.
6. M.-A. Manier and C. Bloch (2003) A classification for hoist scheduling problems. International Journal of Flexible Manufacturing Systems, 15(1): 37–55.
7. J.Y.-T. Leung (2004) Handbook of scheduling. Chapman & Hall, Boca Raton, FL.
Chapter 18
Neural Network-Based Integral Sliding Mode Control for Nonlinear Uncertain Systems S.W. Wang and D.L. Yu
18.1 Introduction

This chapter presents a new integral sliding surface with an adaptive radial basis function (RBF) neural network. In addition to the advantages of no reaching phase and of nullifying matched uncertainties, it, more importantly, partially compensates for the effects of unmatched uncertainties in the system closed-loop dynamics. Only a part of the unmatched uncertainty appears in the resultant system closed-loop dynamics, and thus the system robustness is enhanced. The adaptation law of the RBF network is derived using a defined Lyapunov function. Also based on Lyapunov theory, a switching gain condition is obtained to ensure that the system states remain on the designed sliding surface. Numerical simulations show the effectiveness of the proposed method and its improvement over existing methods.
18.1.1 Sliding Mode Control

A common problem in controlling nonlinear dynamic systems is how to deal with time-varying system uncertainties. Among various control methods, sliding mode control (SMC) is a simple and robust choice for control system researchers and engineers. In the past 30 years, SMC has attracted significant research interest in the worldwide control community. Its basic idea is to drive the trajectory of the controlled system onto a predesigned sliding surface such that the expected system specification is satisfied. Its design usually consists of two steps:

S.W. Wang, Weihai Yuanhang Technology Development Co., Ltd., 19 Tangshan Road, Hi-tech District, Weihai, Shandong, 264209, People's Republic of China
D.L. Yu, Control Systems Research Group, School of Engineering, Liverpool John Moores University, Byrom Street, Liverpool, L3 3AF, UK
1. Define a sliding surface, along which the system's closed-loop dynamics satisfies the desired specifications.
2. Design a control law to drive the system states onto the sliding surface and keep them there afterwards.

The key feature of SMC is its robustness against matched uncertainties and perturbations. Due to this advantage, SMC has found numerous industrial and laboratory developments and applications. However, previous research and implementations of SMC have also revealed some drawbacks of this method:
• It can produce the so-called chattering problem, i.e., the system state trajectories oscillate about the sliding mode with a high frequency.
• Its rapidly changing discontinuous control actions can reduce the actuators' operating life.
• The conventional SMC has a reaching phase, during which the system has not yet arrived at the sliding surface and is not robust to uncertainties or perturbations, even those satisfying the matching condition.
• It is still sensitive to unmatched uncertainties, which can affect the system dynamics negatively.
18.1.2 Integral Sliding Mode Control

In order to overcome the above problems of SMC, integral sliding mode control (ISMC) has been developed and has attracted wide interest in the nonlinear control community [1, 2]. It introduces an integral term into the sliding surface, which makes the system initial states start from the sliding mode and eliminates the reaching phase. Thus, ISMC enhances the robustness against matched uncertainty of the conventional SMC [3–7]. However, it is still sensitive to unmatched uncertainties, which exist in many practical systems. An ISMC-controlled system was developed that completely nullifies matched uncertainties, but with unmatched uncertainty the system stability depends on the controlled nominal system and on the features of the equivalent unmatched uncertainties [8]. Castanos and Fridman [9] discussed mainly how to select the optimal design matrix to ensure that the unmatched uncertainty is not amplified by the discontinuous control, but their result is suitable only for a certain type of nonlinear systems with constant input matrices. The research on ISMC has thus focused on how to reduce the influence of unmatched uncertainty.
18.1.3 Radial Basis Function Neural Network Approximation

The main contribution of this research is proposing a new integral sliding surface that includes an additional design matrix with an adaptive RBF neural network. According to previous research [10–12], neural networks have shown their strong
ability to uniformly approximate continuous functions to a specified degree of accuracy in theory. With this property, neural networks help to save a lot of effort on system modeling and unknown-nonlinearity approximation. Among the various proposed neural networks, RBF networks have the same universal approximation ability despite their simple structure of a linear superposition of nonlinear nodes. Once the neural network structure is chosen, the main problem of neural network design is how to train the network parameters to achieve the best approximation. In early neural network designs, optimization techniques, so-called supervised learning, are used to derive fixed or adaptive parameter laws. In this kind of design method, neural networks are first trained by a set of training data; in other words, the network parameters are determined by optimizing an objective function of the errors between the network outputs and the training targets. For a well-designed neural network, it is necessary to achieve a high level of approximation accuracy not only for a set of discrete points but also for a continuous function over a large subset of the state space. Thus, the trained neural network needs to be tested using other sets of data to demonstrate its generalization. If the approximation results satisfy the desired requirements, then the neural network design is finished. The above designs are realized via numerous empirical experiments, but with little analytical study of stability and performance. In order to solve this problem, some Lyapunov theory-based methods have been developed to realize the adaptation of the network parameters [12–14]. The advantages of these approaches are that unknown control parameters can be approximated with neural networks and that the stability and performance of the closed-loop control systems can be guaranteed. Therefore, an RBF network is capable of approximating and counteracting, wholly or partially, the term of the unmatched uncertainty in the ISMC system dynamics.
18.2 Problem Statement

Consider the following nonlinear uncertain system:

ẋ = f(x) + B(x) {[I + ∆Bm(x)] u + ∆fm(x)} + ∆fu(x),    (18.1)

where x(t) ∈ Ω ⊂ Rn is the measurable state vector within a compact set Ω ⊂ Rn, u(t) ∈ Rm is the control vector, f(x) ∈ Rn and B(x) ∈ Rn×m are known nonlinear functions, and rank{B(x)} = m. ∆fm(x) and ∆Bm(x) are the matched uncertainties. The unknown continuous function ∆fu(x) is the unmatched uncertainty. It is assumed that all system uncertainties are bounded, i.e., ‖∆fm(x)‖ ≤ ρm(x), ‖∆Bm(x)‖ ≤ 1 − εb, and ‖∆fu(x)‖ ≤ ρu(x), where ρm(x) and ρu(x) are known nonnegative nonlinear functions and εb is a positive constant with εb < 1.

Assumption The known nominal nonlinear plant of the system in equation (18.1) is

ẋ = f(x) + B(x)un(x),    (18.2)
248
S.W. Wang and D.L. Yu
which is globally asymptotically stabilizable via a nominal control un(x), i.e., there is a Lyapunov function V(x) such that

γ1(‖x‖) ≤ V(x) ≤ γ2(‖x‖),    (18.3)

V̇(x) = (∂V/∂x)ᵀ [f(x) + B(x)un(x)] ≤ −γ(‖x‖).    (18.4)

Here, γ1, γ2 : R+ → R+ are class K∞ functions, and γ : R+ → R+ is defined as γ(‖x‖) = β · ‖x‖, where β > 0. The nominal plant under un(x) satisfies some prescribed specifications.
18.3 New Integral Sliding Surface

A new integral-type sliding surface is proposed as

S(x) = Dx − Dx0 − ∫_{t0}^{t} [D f(x) + DB(x)un(x) − DB(x) fˆNN(x)] dτ = 0,    (18.5)

where x0 is the state vector at time t0 and D ∈ Rm×n is such that DB(x) is uniformly invertible. fˆNN(x) ∈ ℜm is an RBF network of the following form:

fˆNN(x) = Σ_{i=1}^{nh} wˆi exp(−‖x − ci‖² / σi²),    (18.6)

where nh is the number of network centers, wˆi(t) ∈ ℜm is the network weight, ci = [c1i, c2i, · · · , cni]ᵀ ∈ Rn are the network center vectors, and σi is the network width. The last term of the integral part of the sliding surface (18.5) can be treated as a design vector g(x) ∈ Rm, i.e.,

g(x) = −DB(x) fˆNN(x) = −DB(x) Σ_{i=1}^{nh} wˆi exp(−‖x − ci‖² / σi²).    (18.7)
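As a concrete reading of (18.6) and (18.7), the sketch below evaluates an RBF network of this form in Python; the array shapes and helper names are our own assumptions.

```python
import numpy as np

def f_nn(x, centers, widths, weights):
    """f_NN(x) = sum_i w_i * exp(-||x - c_i||^2 / sigma_i^2), Eq. (18.6).
    centers: (nh, n), widths: (nh,), weights: (nh, m)."""
    d2 = np.sum((centers - x) ** 2, axis=1)      # squared distances to the centers
    phi = np.exp(-d2 / widths ** 2)              # Gaussian activations
    return weights.T @ phi                       # linear superposition, in R^m

def g_vec(x, D, B, centers, widths, weights):
    """Design vector g(x) = -D B(x) f_NN(x), Eq. (18.7); B is a callable returning B(x)."""
    return -D @ B(x) @ f_nn(x, centers, widths, weights)
```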
Taking the first derivative of the sliding surface S(x),

Ṡ(x) = DB(x) {[I + ∆Bm(x)] u + ∆fm(x) − un(x)} + D∆fu(x) − g(x)    (18.8)

In the sliding mode, S(x) = Ṡ(x) = 0 and x(t) = xd(t). The subscript d denotes the state vector in the sliding mode. The equivalent control law is

ueq(xd) = [I + ∆Bm(xd)]−1 {−∆fm(xd) + un(xd) − [DB(xd)]−1 [D∆fu(xd) − g(xd)]}    (18.9)
Substituting the above equation into (18.1), one obtains the closed-loop dynamics on the sliding surface:

ẋd = f(xd) + B(xd)un(xd) + {I − B(xd)[DB(xd)]−1 D}∆fu(xd) + B(xd)[DB(xd)]−1 g(xd)    (18.10)

Then, define a vector φu(x) ∈ Rn as follows:

φu(x) = [φu1ᵀ(x)  φu2ᵀ(x)]ᵀ = Q−1(x){I − B(x)[DB(x)]−1 D}∆fu(x)    (18.11)

with φu1(x) ∈ ℜm and φu2(x) ∈ ℜn−m, where Q(x) ∈ ℜn×n is an orthogonal matrix from the QR decomposition of B(x),

Q(x) [Rᵀ(x)  0]ᵀ = B(x)    (18.12)

By dividing Q(x) into two submatrices Q1(x) ∈ ℜn×m and Q2(x) ∈ ℜn×(n−m), we obtain

Q(x) [φu1ᵀ(x)  φu2ᵀ(x)]ᵀ = [Q1(x)  Q2(x)] [φu1ᵀ(x)  φu2ᵀ(x)]ᵀ = {I − B(x)[DB(x)]−1 D}∆fu(x)    (18.13)

Considering equations (18.7) and (18.13), the closed-loop dynamics become

ẋd = f(xd) + B(xd)un(xd) + Q1(xd)φu1(xd) + Q2(xd)φu2(xd) − B(xd) fˆNN(xd)    (18.14)
Then the problem becomes how to design the RBF network fˆNN(x) to reduce the influence of the unmatched uncertainty ∆fu(x) on the closed-loop dynamics.

Theorem 1 If the weights of the RBF neural network fˆNN(x) are adapted as

wˆi(t) = wˆi(t0) + ∫_{t0}^{t} η · xdᵀ B(xd) exp(−‖xd − ci‖²/σi²) dτ,   i = 1, . . . , nh,    (18.15)

then the closed-loop dynamics of the nonlinear system on the integral sliding surface,

ẋd = f(xd) + B(xd)un(xd) + Q2(xd)φu2(xd),    (18.16)

is globally asymptotically stable under the condition

β > ‖φu2(xd)‖ + ‖B(xd)‖ ε   ∀xd ∈ Rn, xd ≠ 0,    (18.17)
where the adaptation parameter η is a positive constant chosen by the user and ε is a very small positive constant.

Proof Considering that the unmatched uncertainty ∆fu(x) is a bounded continuous function, the unknown function R−1(x)φu1(x) : Ω → Rm, containing one part of the unmatched uncertainty, is continuous over a compact set Ω ⊂ ℜn. According to the universal approximation property of RBF networks [10–12], this function can be approximated by an RBF network fNN(x) to arbitrary accuracy using a sufficient number of centers nh, i.e.,

R−1(x)φu1(x) = fNN(x) + e(x) = Σ_{i=1}^{nh} wi exp(−‖x − ci‖²/σi²) + e(x)   ∀x ∈ Ω    (18.18)

where R(x) ∈ ℜm×m is an upper triangular matrix from the QR decomposition of B(x), wi is the ideal constant weight, and the approximation error e(x) satisfies |e(x)| < ε. One can design the RBF network fˆNN(x) as an estimator of fNN(x) by adapting its weight wˆi(t) to converge to the ideal constant value wi with the weight estimation error w˜i(t), i.e., wi = wˆi(t) + w˜i(t). Thus, equation (18.18) becomes

R−1(x)φu1(x) = Σ_{i=1}^{nh} [wˆi + w˜i] exp(−‖x − ci‖²/σi²) + e(x)    (18.19)
Define a Lyapunov function as follows:

V1(xd) = (1/2) xdᵀ xd + (1/(2η)) Σ_{i=1}^{nh} w˜i²    (18.20)

Its first derivative is

V̇1(xd) = xdᵀ ẋd − (1/η) Σ_{i=1}^{nh} w˜i w˙ˆi
        = xdᵀ [f(xd) + B(xd)un(xd) + Q(xd)φu(xd) − B(xd) fˆNN(x)] − (1/η) Σ_{i=1}^{nh} w˜i w˙ˆi    (18.21)
According to equations (18.12), (18.13), and (18.19), the term Q(xd)φu(xd) can be represented as follows:

Q(xd)φu(xd) = Q1(xd)φu1(xd) + Q2(xd)φu2(xd)
  = [Q1(xd) Q2(xd)] · [Rᵀ(xd) 0]ᵀ R−1(xd)φu1(xd) + Q2(xd)φu2(xd)
  = [Q1(xd) Q2(xd)] · [Rᵀ(xd) 0]ᵀ { Σ_{i=1}^{nh} [wˆi + w˜i] exp(−‖xd − ci‖²/σi²) + e(xd) } + Q2(xd)φu2(xd)
  = B(xd) { fˆNN(xd) + Σ_{i=1}^{nh} w˜i exp(−‖xd − ci‖²/σi²) + e(xd) } + Q2(xd)φu2(xd)    (18.22)

Then one obtains

V̇1(xd) = xdᵀ[f(xd) + B(xd)un(xd)] + xdᵀQ2(xd)φu2(xd)
  + xdᵀB(xd) { Σ_{i=1}^{nh} w˜i exp(−‖xd − ci‖²/σi²) + e(xd) } − (1/η) Σ_{i=1}^{nh} w˜i w˙ˆi
  = xdᵀ[f(xd) + B(xd)un(xd)] + xdᵀQ2(xd)φu2(xd) + xdᵀB(xd)e(xd)
  + Σ_{i=1}^{nh} w˜i { xdᵀB(xd) exp(−‖xd − ci‖²/σi²) − (1/η) w˙ˆi }    (18.23)

Choose

w˙ˆi = η · xdᵀ B(xd) exp(−‖xd − ci‖²/σi²).    (18.24)
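For intuition, an Euler-discretized form of the adaptation law (18.24) might look as follows; the step size is an assumption, and xdᵀB(xd) is read as the column vector B(xd)ᵀxd so that each weight wˆi ∈ Rm is updated.

```python
import numpy as np

def adapt_weights(w_hat, x_d, B, centers, widths, eta, dt):
    """One Euler step of Eq. (18.24):
    w_hat_i <- w_hat_i + dt * eta * B(x_d)^T x_d * exp(-||x_d - c_i||^2 / sigma_i^2)."""
    act = np.exp(-np.sum((centers - x_d) ** 2, axis=1) / widths ** 2)   # (nh,)
    drive = B(x_d).T @ x_d                                              # (m,)
    return w_hat + dt * eta * np.outer(act, drive)                      # (nh, m)
```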
Then the first derivative of the defined Lyapunov function becomes

V̇1(xd) = xdᵀ[f(xd) + B(xd)un(xd)] + xdᵀQ2(xd)φu2(xd) + xdᵀB(xd)e(xd)    (18.25)

Considering the Assumption and that the norm of a matrix with orthonormal columns is 1, we obtain from (18.25) that

V̇1(xd) < −γ(‖xd‖) + ‖xd‖ · ‖φu2(xd)‖ + ‖xd‖ · ‖B(xd)‖ε
        = −β · ‖xd‖ + ‖xd‖ · ‖φu2(xd)‖ + ‖xd‖ · ‖B(xd)‖ε
        = ‖xd‖ · [−β + ‖φu2(xd)‖ + ‖B(xd)‖ε]    (18.26)
Notice that ‖φu2(xd)‖ ≤ ‖φu(xd)‖ = ‖{I − B(xd)[DB(xd)]−1 D}∆fu(xd)‖ ≤ ‖I − B(xd)[DB(xd)]−1 D‖ ρu(xd). Considering that B(xd) is bounded and D is a constant matrix, ‖I − B(xd)[DB(xd)]−1 D‖ ≤ ρx(xd) is also bounded. Besides, ε is a very small positive constant; thus ε̄ = ‖B(xd)‖ε is also very small. Thus, one can choose a large enough β such that β > ρx(xd)ρu(xd) + ε̄, i.e., the condition (18.17) is satisfied. Therefore, V̇1(xd) < 0 is achieved, and the approximation error converges to zero. The closed-loop dynamics are of the form shown in (18.16) and are globally asymptotically stable. Taking the integral of (18.24), one obtains the adaptation law (18.15) for the weights of the RBF network fˆNN(x), which ends the proof.

Remark 1 The obtained closed-loop sliding mode dynamics do not contain any matched uncertainty and reduce the influence of the unmatched uncertainty. Thus, the proposed integral sliding surface with RBF networks improves the control performance of ISMC against system uncertainty, especially unmatched uncertainty. In [8], the following closed-loop dynamics were obtained by using a basic integral sliding surface:

ẋd = f(xd) + B(xd)un(xd) + {I − B(xd)[DB(xd)]−1 D}∆fu(xd)    (18.27)
The Euclidean norm of the uncertainty term is

‖{I − B(xd)[DB(xd)]−1 D}∆fu(xd)‖ = ‖Q(xd)φu(xd)‖ = ‖Q1(xd)φu1(xd) + Q2(xd)φu2(xd)‖ ≥ ‖Q2(xd)φu2(xd)‖    (18.28)

Note that the last term in (18.28) is the norm of the uncertainty term in (18.16) of the developed method. Therefore, the developed method has dynamics that are more robust to the unmatched uncertainty than the method in [8].

Remark 2 Additionally, Castanos and Fridman [9] pointed out that ‖I − B[DB]−1 D‖ = 1 when the input matrix B is constant and the design matrix is selected as D = B+ (B+ is the left inverse of B, i.e., B+ = [BᵀB]−1 Bᵀ). Then the closed-loop dynamics on the sliding surface in the method of [9] are

ẋd = f(xd) + Bun(xd) + ∆fu(xd).    (18.29)
For the closed-loop dynamics obtained by using the proposed integral sliding surface, with the same B and D, the norm of the unmatched uncertainty is

‖∆fu(xd)‖ = ‖Q(xd)φu(xd)‖ = ‖Q1(xd)φu1(xd) + Q2(xd)φu2(xd)‖ ≥ ‖Q2(xd)φu2(xd)‖    (18.30)
Therefore, the norm of the unmatched uncertainty in the closed-loop dynamics is further reduced by the developed method compared with the method in [9].
18.4 Sliding Mode Control Law

In integral-type sliding mode control, a control law is usually designed in the following form:

u(t) = un(x) − ρ(x) [DB(x)]ᵀS(x) / ‖[DB(x)]ᵀS(x)‖   if S(x) ≠ 0,
u(t) = un(x)                                          if S(x) = 0    (18.31)

Theorem 2 If the sliding mode control law is designed as in (18.31) and the switching gain satisfies the condition

ρ(x) > (1/εb) [(1 − εb)‖un(x)‖ + ρm(x) + ‖[DB(x)]−1 D‖ρu(x) + ‖[DB(x)]−1‖ · ‖g(x)‖]    (18.32)

with the vector g(x) described in (18.7), then the proposed integral sliding mode can be maintained.
Proof Define a Lyapunov function as

V2(x) = (1/2) Sᵀ(x)S(x).    (18.33)

Substituting the control law under the condition S(x) ≠ 0 and considering the uncertainty bounds, its first derivative with respect to time t becomes

V̇2(x) = Sᵀ(x)Ṡ(x)
      = −ρ(x)‖[DB(x)]ᵀS(x)‖ − ∆Bm(x)ρ(x)‖[DB(x)]ᵀS(x)‖ + Sᵀ(x)DB(x)∆Bm(x)un(x) + Sᵀ(x)DB(x)∆fm(x)
        + Sᵀ(x)[DB(x)] · [DB(x)]−1 D∆fu(x) − Sᵀ(x)[DB(x)] · [DB(x)]−1 g(x)
      ≤ [−ρ(x) + ‖∆Bm(x)‖ρ(x) + ‖∆Bm(x)‖‖un(x)‖ + ‖∆fm(x)‖ + ‖[DB(x)]−1 D‖ · ‖∆fu(x)‖ + ‖[DB(x)]−1‖ · ‖g(x)‖] · ‖[DB(x)]ᵀS(x)‖
      ≤ −[εb ρ(x) − (1 − εb)‖un(x)‖ − ρm(x) − ‖[DB(x)]−1 D‖ρu(x) − ‖[DB(x)]−1‖ · ‖g(x)‖] · ‖[DB(x)]ᵀS(x)‖ < 0    (18.34)

In order to ensure the above inequality, the switching gain ρ(x) must satisfy the switching gain condition (18.32), since DB(x) has full column rank and
S(x) ≠ 0. Thus, this condition guarantees that the control law (18.31) can maintain the proposed sliding mode.

Remark 3 The control law (18.31) ensures that the system remains on the sliding surface even when the unmatched uncertainty is not completely compensated for by the proposed sliding surface. As long as the switching gain is high enough to satisfy condition (18.32), the system stability can be guaranteed.
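A direct implementation of the switching law (18.31) is sketched below; the small tolerance used to detect S(x) = 0 is our own choice, and in practice the discontinuity is often smoothed to reduce chattering.

```python
import numpy as np

def smc_control(x, S, u_n, rho, D, B, tol=1e-9):
    """u = u_n(x) - rho(x) * (DB)^T S / ||(DB)^T S|| if S != 0, else u_n(x), Eq. (18.31).
    S, u_n, rho, B are callables of the state x; D is a constant matrix."""
    dbt_s = (D @ B(x)).T @ S(x)
    norm = np.linalg.norm(dbt_s)
    if norm < tol:                      # treat ||(DB)^T S|| ~ 0 as the S(x) = 0 branch
        return u_n(x)
    return u_n(x) - rho(x) * dbt_s / norm
```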
18.5 Numerical Example

Consider the parameters of a nonlinear uncertain system (18.1) as follows:

f(x) = ⎡  0   0  1 ⎤ ⎡ x1 ⎤
       ⎢  0  −2  0 ⎥ ⎢ x2 ⎥    (18.35)
       ⎣ −1   0  2 ⎦ ⎣ x3 ⎦

B(x) = ⎡ 1                  0.1 sin²(x1) − 2  ⎤
       ⎢ 0.1 cos³(x2) − 3   4                 ⎥    (18.36)
       ⎣ 5                  6 + 0.1 cos⁴(x3)  ⎦

The matched and unmatched uncertainties are of the following form:

∆fm(x) = ⎡ 0.1x1² + 0.1                     ⎤    (18.37)
         ⎣ 0.1x1² + 0.2x2³ + 0.1x3⁴ + 0.1   ⎦

∆Bm(x) = ⎡ 0   0.1 sin(x2)   ⎤    (18.38)
         ⎣ 0   0.1 cos²(x1)  ⎦

∆fu(x) = [0.5x1 cos(x2)   0.2x2 sin(x1²)   0.4x3 sin²(x2)]ᵀ.    (18.39)

The bounds of the matched uncertainty are ρm(x) = 0.2x1² + 0.2|x2|³ + 0.1x3⁴ + 0.2 and εb = 0.9. The bound of the unmatched uncertainty is ρu(x) = 0.5‖x‖. The eigenvalues are placed at s = [−1 −2 −3]ᵀ. Choose the design matrix

D = ⎡ 0    0.1   0.2 ⎤
    ⎣ 0.4  −0.1  0.2 ⎦

We then obtain

‖φu2(xd)‖ ≤ ‖φu(xd)‖ ≤ ‖I − B(xd)[DB(xd)]−1 D‖ ρu(xd) < 1.6 × 0.5‖xd‖ = 0.8‖xd‖    (18.40)

Considering that B(xd) is bounded and ε is a very small positive constant, ‖B(xd)‖ε is also very small. Choose β = ‖xd‖; thus γ(‖xd‖) = xdᵀxd, so that the condition (18.17) is satisfied as follows:

‖φu2(xd)‖ + ‖B(xd)‖ε < 0.8‖xd‖ + ‖B(xd)‖ε < ‖xd‖   when xd ≠ 0.    (18.41)
As for the RBF structure, the network inputs are selected as x(t) = [x1 (t) x2 (t) x3 (t)]T and are scaled to the range of (0, 1) before they are fed into
Fig. 18.1 The response of the state x(t): dotted line (- - -) is the ideal system (without any uncertainty) response; solid line (—) is the response under the proposed control; dash-dotted line (- · - · -) is the response using the existing ISMC in Cao and Xu [8]
the networks. The network centers and widths are chosen to be constant using the K-means clustering method and P-nearest center rule. Different orders and numbers of hidden nodes have been tried in the experiments, and a first-order structure with 12 hidden nodes is selected. The adaptation parameter is selected as η = 1. The weights are initialized with small random values. The simulation is run from an initial value of the state x(t0 ) = [0 −2 1]T with a fixed step of 1 ms. Figure 18.1 shows three system responses: the dotted line denotes the ideal system without any uncertainties; the solid line denotes the system under the proposed control; the dash-dotted line is by the system under conventional ISMC. It is evident that the proposed method has a response much closer to the ideal response than the ISMC in Cao and Xu [8]. To show the network convergence, the three estimation errors (e1 , e2 , e3 ) are displayed in Fig. 18.2. It is seen in Fig. 18.2 that all the three errors converge to zero, which implies that the networks are adapted to represent the transforms of partial unmatched uncertainties.
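One possible reading of the center and width selection described above is sketched below, using scikit-learn's K-means; treating the width of each center as the mean distance to its P nearest neighboring centers is our interpretation of the P-nearest center rule, and the parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_centers_and_widths(samples, nh=12, p=2, seed=0):
    """Fixed RBF centers by K-means on (scaled) state samples; widths by the
    P-nearest-center rule. samples: (N, n) array scaled to (0, 1)."""
    km = KMeans(n_clusters=nh, n_init=10, random_state=seed).fit(samples)
    centers = km.cluster_centers_                                   # (nh, n)
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)                                 # ignore self-distance
    widths = np.sort(dists, axis=1)[:, :p].mean(axis=1)            # (nh,)
    return centers, widths
```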
Fig. 18.2 The approximation errors: solid line (—) is e1 ; dash-dotted line (- · - · -) is e2 ; dotted line (· · · ·) is e3
18.6 Conclusions

A new integral sliding mode control scheme with an adaptive RBF network is proposed, which eliminates completely the matched uncertainties and partially the unmatched uncertainty in the resultant system closed-loop dynamics. The enhanced robustness to the unmatched uncertainties is proved by the reduced norm of these uncertainties appearing in the closed-loop dynamics compared with existing methods. The method is realized using the approximation feature of RBF neural networks and Lyapunov theory. A new selection condition for the switching gain is derived to ensure that the system states are maintained on the proposed sliding surface. Numerical simulations showed that the network approximation of the proposed method is superior to the existing methods.
References

1. V. Utkin and J. Shi (1996) Integral sliding mode in systems operating under uncertainty conditions. Proceedings of the Conference on Decision and Control, Kobe, Japan, pp. 4591–4596.
2. A. Poznyak, L. Fridman, and F.J. Bejarano (2004) Mini-max integral sliding-mode control for multimodel linear uncertain systems. IEEE Transactions on Automatic Control, 49: 97–102.
3. V.I. Utkin (1977) Variable structure systems with sliding modes. IEEE Transactions on Automatic Control, 22: 212–222.
4. G.P. Matthews and R.A. DeCarlo (1988) Decentralized tracking for a class of interconnected nonlinear systems using variable structure control. Automatica, 24: 187–193.
5. K.D. Young, V.I. Utkin, and U. Ozguner (1999) A control engineer's guide to sliding mode control. IEEE Transactions on Control Systems Technology, 7: 328–342.
6. L. Fridman, A. Poznyak, and F. Bejarano (2005) Decomposition of the min-max multi-model problem via integral sliding mode. International Journal of Robust Nonlinear Control, 15: 559–574.
7. Y. Niu, D.W.C. Ho, and J. Lam (2005) Robust integral sliding mode control for uncertain stochastic systems with time-varying delay. Automatica, 41: 873–880.
8. W.-J. Cao and J.-X. Xu (2004) Nonlinear integral-type sliding surface for both matched and unmatched uncertain systems. IEEE Transactions on Automatic Control, 49: 1335–1360.
9. F. Castanos and L. Fridman (2006) Analysis and design of integral sliding manifolds for systems with unmatched perturbations. IEEE Transactions on Automatic Control, 51: 853–858.
10. K. Funahashi (1989) On the approximate realization of continuous mappings by neural networks. Neural Networks, 2: 183–192.
11. R.M. Sanner and J.-J.E. Slotine (1992) Gaussian networks for direct adaptive control. IEEE Transactions on Neural Networks, 3: 837–863.
12. C. Wang and D.J. Hill (2006) Learning from neural control. IEEE Transactions on Neural Networks, 17: 130–146.
13. Y.X. Diao and K.M. Passino (2004) Stable adaptive control of feedback linearizable time-varying non-linear systems with application to fault-tolerant engine control. International Journal of Control, 77(17): 1463–1480.
14. D. Wang and J. Huang (2005) Neural network-based adaptive dynamic surface control for a class of uncertain nonlinear systems in strict-feedback form. IEEE Transactions on Neural Networks, 16(1): 195–202.
Chapter 19
Decentralized Neuro-fuzzy Control of a Class of Nonlinear Systems Miguel A. Hern´andez and Yu Tang
Abstract A decentralized control based on recurrent neuro-fuzzy networks is proposed for a class of nonlinear systems. It consists of an adaptive component and an uncertainty compensation component. First the control law is designed using state feedback, and semiglobal stability is established. Then, by means of a high-gain observer, the control law uses only output feedback. The main features of the proposed scheme are its robustness against uncertainties and its simplicity of implementation. To illustrate the proposed scheme, experiments on a 2-degree-of-freedom robot are included.
19.1 Introduction

In the past decades, research interest in designing controllers for large-scale systems has increased. Decentralized control features design simplicity, robustness against failures, and computational efficiency. If properly designed, decentralized control may give the same level of performance as achieved by a centralized control. The adaptive decentralized control approaches for linear and linear-dominant systems have been well developed; see, for example, [1–4] and the references cited therein. For a set of linear-dominant subsystems whose interconnections are nonlinear but linearly bounded by the norms of the overall system states, the approaches proposed by [1, 2, 4] guarantee the exponential convergence of the tracking errors and the parameter estimation error to a bounded residual set. References [3, 5] consider subsystems with interconnections that are bounded by high-order polynomials. Under certain assumptions, the output feedback controllers can guarantee the global stability of the closed-loop systems based on state observers [6, 7]. The concept of the model-based observer in robot manipulators was used in [8]. That

Miguel A. Hernández and Yu Tang, Faculty of Engineering, National University of Mexico, FI-UNAM, P.O. Box 70-273, 04510 Mexico DF, MEXICO
nonlinear observer was inserted in the feedback loop, ensuring local asymptotic stability. Reference [9] presents control systems that use state feedback and, by means of the separation principle, use the states estimated by a high-gain observer. It has been demonstrated that neural networks (NNs) and fuzzy logic systems (FLSs) are powerful tools for control designs [10–12]. In [12–14] NNs are used for the design of control laws. The control of systems with NNs or FLSs with observers is reported in [13–15]. Decentralized control with NNs or FLSs is proposed in [16–18]. Recurrent neuro-fuzzy networks (RNFNs) are considered well suited for identification and control purposes due to their ability to approximate a dynamic system and their simpler structure in the implementation. The recurrence is obtained by feeding back the output at a certain layer, particularly at the membership-function layer [19, 20], the premise layer [21], or the output layer [22]. In this paper we present a decentralized control scheme for the class of nonlinear systems that are affine in the control input. The proposed control consists of an adaptive control component based on an RNFN and a robust control component to compensate for the uncertainties in the system. Based on Lyapunov analysis, the semiglobal exponential stability of the closed-loop system is established with state feedback. Next, by means of a high-gain observer (HGO), only the output is fed back for control implementation. To illustrate the proposed control, experiments were carried out on a 2-degree-of-freedom robot.
19.2 Problem Statement

Consider the class of large-scale systems given by
Σ : ẋ = F(x) + G(x)u,   y = H(x),    (19.1)
I n with where u, y ∈ R I N are the input and output of the system, respectively, and x ∈ R n ≥ N represents the states. F(.), G(.), and H(.) are unknown smooth functions. Assume that the system (19.1) may be decomposed into N interconnected subsystems according to some criteria (see, e.g., [23]),
Σi : ẋi = Fi(x) + Gi(xi)ui,   yi = Hi(x),    (19.2)
where [x1ᵀ x2ᵀ · · · xNᵀ]ᵀ = x ∈ Rn, n = Σ_{i=1}^{N} ni, and xi = [xi,1 xi,2 · · · xi,ni]ᵀ ∈ Rni are the states of the ith subsystem; ui, yi ∈ R are the control input and the output, respectively; and Fi : Rn → Rni, Gi : Rni → Rni, and Hi : Rn → R are unknown smooth functions. Assuming the relative degree of Σi to be ni, the dynamics of (19.2) can be expressed as [16]

yi^(ni) = fi(yi) + gi(yi)ui + Zi(y)    (19.3)

with yi = [yi  yi^(1)  · · ·  yi^(ni−1)]ᵀ and y = [y1ᵀ y2ᵀ · · · yNᵀ]ᵀ.
Given a reference signal yr,i for the ith subsystem, we assume that yr,i and its derivatives up to order ni − 1 are bounded and that yr,i^(ni) is piecewise continuous. Define the tracking error in the ith subsystem as

ei^(j) = yi^(j) − yr,i^(j),   j = 0, 1, ..., ni − 1;    (19.4)

the control objective is to make the tracking error ei^(j) ultimately bounded. The dynamics of ei is obtained from (19.3) and (19.4):

Ėi = Ai Ei + Bi [fi(yi) + gi(yi)ui + Zi(y) − yr,i^(ni)]    (19.5)

where Ei = (ei, ei^(1), ..., ei^(ni−1))ᵀ and

Ai = ⎡ 0 1 0 · · · 0 ⎤         Bi = ⎡ 0 ⎤
     ⎢ 0 0 1 · · · 0 ⎥              ⎢ 0 ⎥
     ⎢ . . .     .   ⎥              ⎢ . ⎥
     ⎢ 0 0 0 · · · 1 ⎥              ⎢ 0 ⎥
     ⎣ 0 0 0 · · · 0 ⎦ ,            ⎣ 1 ⎦ .
Assumption 1. The interconnection is bounded by qi(‖Ei‖) in the following way:

|Zi(y)| ≤ Σ_{j=1}^{N} cij qj(‖Ej‖)    (19.6)

for some cij ≥ 0, where ‖·‖ denotes the Euclidean norm, qi(‖Ei‖) = Σ_{k=0}^{mi} pk,i |p̄iᵀ Ei|^k with mi > 0 a known integer, 0 = p0,i ≤ p1,i ≤ p2,i ≤ · · · ≤ pmi,i some constants, and p̄i denotes the last column of Pi [see (19.26)].
Assumption 2. The control gain satisfies gi ≤ gi(yi) ≤ g¯i qi(‖Ei‖), ∀yi ∈ Rni, where gi and g¯i are unknown positive constants.

The problem that we consider in this work is to design a control law for (19.3) to ensure that the tracking error ei and its derivatives of order up to ni − 1 are ultimately bounded while maintaining all the closed-loop signals bounded. Notice that in the absence of the interconnection [Zi(y) = 0], if the system model were known, the control law would be chosen as

ui = −(1/gi(yi)) [fi(yi) + Ki Ei − yr,i^(ni)],    (19.7)

where Ki is such that the matrix Acl,i = Ai − Bi Ki is Hurwitz, giving

Ėi = (Ai − Bi Ki)Ei,    (19.8)

which guarantees that the tracking error vector Ei(t) → 0 exponentially. We will refer to the control given in (19.7), denoted by ui∗, as the ideal control. This control law can be expressed as ui∗ = ui∗(ei, zi), where zi represents the controller dynamics.
The ideal control cannot be implemented because the dynamics of the plant are unknown. However, (19.7) can be approximated to any degree of accuracy in a compact set by a universal approximator. Recurrent neuro-fuzzy networks will be used to approximate this ideal control law in a compact set (this compact set will be given in the stability analysis). In order to ensure the stability and performance of the closed-loop system, an additional signal will be designed to compensate for the interconnections and the errors arising from the approximation of the ideal control by an RNFN.
19.3 Recurrent Neuro-fuzzy Networks

A recurrent network is characterized by feedback of the output of some of its layers. Different structures for recurrent networks have been proposed, with the feedback taken at the membership functions [19, 22], the fuzzy rules [20], or the output itself [24]. The structure of the RNFN proposed here is inspired by the recurrent neuro-fuzzy FLSs in [21] and is given by

Rr : If ei is Ar(ei) and zi is Br(zi), then ζi^r = θ^r and ξi^r = φ^r,    (19.9)

where Rr denotes the rth rule of the RNFN, 1 ≤ r ≤ nr. ei and zi ∈ R are the inputs of the RNFN. Recall that ei is the tracking error of the ith subsystem. zi is an internal state of the RNFN that represents the proposed controller dynamics, and ζi^r ∈ R and ξi^r ∈ R are the outputs of the rth rule. θi^r, φi^r are singletons, and Ar(ei), Br(zi) are fuzzy sets characterized by local and global membership functions, respectively:

µAr(ei) = exp(−((ei − ci^r)/σi^r)²),   µBr(zi) = 1 / (1 + exp(ςi^r (ai^r − zi))),    (19.10)

where ci^r, ai^r are the centers and σi^r, ςi^r are the widths of the Gaussian and sigmoid membership functions, respectively. The output of the RNFN is

ζi = Σ_{k=1}^{nr} φk,i wk,i   and   ξi = Σ_{k=1}^{nr} θk,i wk,i    (19.11)

with

wk,i(ei, zi) = µAk(ei) µBk(zi).    (19.12)

ζi and ξi can be expressed as

ζi = φiᵀ Wi(ei, zi)   and   ξi = θiᵀ Wi(ei, zi),    (19.13)
where θiᵀ = [θ^1 θ^2 · · · θ^nr], φiᵀ = [φ^1 φ^2 · · · φ^nr], and Wi(ei, zi) = [w1,i w2,i · · · wnr,i]ᵀ. Let

uf,i(ei, zi) = zi + ζi = zi + φiᵀ Wi(ei, zi),    (19.14)
żi = −γi zi + ξi = −γi zi + θiᵀ Wi(ei, zi).    (19.15)
The RNFN defined in (19.14) and (19.15) cannot be implemented because the parameter vectors θi and φi are unknown. Therefore, it is necessary to use their estimated values. Define the estimate of the RNFN as

ûf,i(ei, ẑi) = ẑi + φ̂iᵀ Wi(ei, ẑi),    (19.16)
ẑ˙i = −γi ẑi + θ̂iᵀ Wi(ei, ẑi) + N1,i,    (19.17)

where θ̂iᵀ = [θ̂^1 θ̂^2 · · · θ̂^nr] and φ̂iᵀ = [φ̂^1 φ̂^2 · · · φ̂^nr] are the estimated values of θi and φi, respectively, and Ŵi(ei, ẑi) = [ŵ1,i ŵ2,i · · · ŵnr,i]ᵀ. N1,i is to be defined such that the internal state (19.17) is bounded,

|ẑi| ≤ c̄z,i,    (19.18)
where c¯z,i is a positive constant. It has been demonstrated that RNFNs are universal approximators (see, e.g., [25, 26]) in the sense that given any real continuous function, say u∗ (e, z), in a compact set IE × ZZ, and any ku > 0, there exists an RNFN given by u f such that sup(e,z) ∈ IE×ZZ |u f (e, z) − u∗ (e, z)| < ku,i .
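A minimal Python sketch of the recurrent network (19.10)–(19.17), with the internal state ẑi integrated by a forward-Euler step, is given below; the parameter shapes and the step size dt are assumptions made for illustration.

```python
import numpy as np

def rnfn_step(e, z_hat, c, sigma, a, varsigma, theta_hat, phi_hat, n1, gamma, dt):
    """One update of the estimated RNFN (19.16)-(19.17).
    c, sigma: Gaussian centers/widths; a, varsigma: sigmoid centers/slopes;
    theta_hat, phi_hat: consequent parameters (all length-nr arrays)."""
    mu_a = np.exp(-((e - c) / sigma) ** 2)               # Gaussian memberships, Eq. (19.10)
    mu_b = 1.0 / (1.0 + np.exp(varsigma * (a - z_hat)))  # sigmoid memberships
    w = mu_a * mu_b                                      # rule activations, Eq. (19.12)
    u_f = z_hat + phi_hat @ w                            # fuzzy control, Eq. (19.16)
    z_dot = -gamma * z_hat + theta_hat @ w + n1          # internal dynamics, Eq. (19.17)
    return u_f, z_hat + dt * z_dot
```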
19.4 Design of the Decentralized Control

In this section, we design a controller for the plant (19.3) approximating the ideal control (19.7) by means of an RNFN. We use the technique of Lyapunov redesign [27] to design a compensation signal to compensate for the parametric uncertainty due to the unknown optimal parameters, the approximation error, and the interconnection.
19.4.1 Control Law

We propose the following control law:

ui = uc,i + ûf,i,    (19.19)
where ûf,i is the fuzzy control of the ith subsystem given by (19.16) and (19.17). The fuzzy sets and the nr rules give the fuzzy basis functions Wi and the parameter vector θi = [θ^1 θ^2 · · · θ^nr,i]ᵀ. Therefore,

ûf,i(ei, ẑi) = ẑi + θ̂iᵀ Wi(ei, ẑi)    (19.20)

with N1,i in (19.17) given by

N1,i(Ei) = − σi qi(‖Ei‖) |piᵀEi| / (qi(‖Ei‖) |piᵀEi| + σi).    (19.21)

It is easy to see that N1,i is bounded by σi. The component uc,i is defined as

uc,i = − δ̂i² (piᵀEi) qi³(‖Ei‖) / (δ̂i |piᵀEi| qi²(‖Ei‖) + εi),    (19.22)

δ̂˙i = −ρi δ̂i + αi |piᵀEi| qi(‖Ei‖)    (19.23)

with δ̂i(0) > 0, where δ̂i is the estimate of δi (defined below). The parameters θ̂i and φ̂i are updated by the following adaptation laws:

θ̂˙i = −ϑi θ̂i + ρi ẑi Ŵi / (‖Ŵi‖ |ẑi| + ρi),    (19.24)

φ̂˙i = −ψi φ̂i − ςi ϕi (piᵀEi) qi(‖Ei‖) Ŵi / (|piᵀEi| qi(‖Ei‖) + ϕi),    (19.25)

where αi, βi, γi, εi, and σi are design parameters, qi(‖Ei‖) is defined in Assumption 1, and pi, the last column of the matrix Pi, results from the Lyapunov equation

[Ai − Bi Ki]ᵀ Pi + Pi [Ai − Bi Ki] = −I.    (19.26)
It can be demonstrated from definitions (19.17), (19.24), and (19.25) that z˜i , θˆi , and φ˜i are bounded by c¯z,i , c¯θ ,i , and c¯φ ,i , respectively, with c¯z,i , c¯θ ,i , and c¯φ ,i some unknown constants.
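To make the controller components concrete, the sketch below evaluates N1,i, uc,i, and the right-hand sides of the adaptation laws (19.21)–(19.25) for one subsystem; the polynomial bound qi, the parameter dictionary, and all design constants are illustrative assumptions.

```python
import numpy as np

def q_poly(pbar_E, p_coeffs):
    """q_i(||E_i||) as a polynomial in |pbar_i^T E_i| (Assumption 1)."""
    s = abs(pbar_E)
    return sum(p_k * s ** k for k, p_k in enumerate(p_coeffs))

def control_terms(E, p_i, W_hat, z_hat, theta_hat, phi_hat, delta_hat, par):
    """N1 (19.21), u_c (19.22), and the right-hand sides of (19.23)-(19.25)."""
    pE = float(p_i @ E)
    q = q_poly(pE, par["p_coeffs"])
    n1 = -par["sigma"] * q * abs(pE) / (q * abs(pE) + par["sigma"])
    u_c = -(delta_hat ** 2) * pE * q ** 3 / (delta_hat * abs(pE) * q ** 2 + par["eps"])
    delta_dot = -par["rho"] * delta_hat + par["alpha"] * abs(pE) * q
    theta_dot = -par["vartheta"] * theta_hat + par["rho"] * z_hat * W_hat / (
        np.linalg.norm(W_hat) * abs(z_hat) + par["rho"])
    phi_dot = -par["psi"] * phi_hat - par["varsigma"] * par["phi0"] * pE * q * W_hat / (
        abs(pE) * q + par["phi0"])
    return n1, u_c, delta_dot, theta_dot, phi_dot
```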
19.4.2 Stability Analysis

It follows from the universal approximation property of RNFNs that the ideal control law (19.7) can be approximated by an RNFN with the parameters θi and the fuzzy basis functions Wi(ei, zi), giving the approximation error ∆ui(ei, zi). Let the parameter errors be
θ̃i = θ̂i − θi   and   φ̃i = φ̂i − φi.    (19.27)
From (19.19) we have ui = uc,i + uˆ f ,i = uc,i + uˆ f ,i ± u∗i ± u f ,i = u∗i + uc,i + zˆi + φˆiT Wˆ i − zi − θiT Wi + [u f ,i − u∗i ] = u∗i + uc,i + z˜i + φˆiT Wˆ i − φiT Wi + [u f ,i − u∗i ].
(19.28)
For simplicity we will group the terms: u¯i = uc,i + z˜i + φˆiT Wˆ i − φiT Wi + [u f ,i − u∗i ].
(19.29)
The closed-loop system with the control law ui (19.28) and (19.29) is E˙i = [Ai − Bi Ki ]Ei + Bi [gi (yi )u¯i + Zi (Y )].
(19.30)
The following Chebyshev inequality will be used later for stability analysis:

Σ_{i=1}^{N} ai Σ_{j=1}^{N} bj ≤ N Σ_{i=1}^{N} ai bi,    (19.31)

which holds for

0 ≤ a1 ≤ a2 ≤ · · · ≤ aN   and   0 ≤ b1 ≤ b2 ≤ · · · ≤ bN.    (19.32)
gi 2 g¯i 2 g¯i T 1 g¯i ˜ T ˜ T ˜ ˜ ˜ V =∑ Ei Pi Ei + δi + z˜i + θi θi + φi φi αi γi ηi ςi i=1 2 N
with the estimation error defined as δ˜i = δˆi − δi . Its time derivative is N 1 ˙T T ˙ ˙ V = ∑ (Ei Pi Ei + Ei Pi Ei ) + Mi , i=1 2
(19.33)
where
Mi =
gi
g¯i g¯i g¯i δ˜i δ˜˙i + z˜i z˙˜i + θ˜iT θ˙˜i + φ˜iT φ˙˜i . αi γi ηi ςi
It follows from (19.28) and (19.26) that EiT Pi {(Ai − Bi Ki )Ei + Bi [gi (yi )(u¯i ) + Zi (Y )]} + {(Ai − Bi Ki )Ei + Bi [gi (yi )(u¯i ) + Zi (Y )]}T Pi Ei = EiT Pi [Ai − Bi Ki ]Ei + EiT [Ai − Bi Ki ]T Pi Ei + 2EiT Pi {Bi [gi (yi )(u¯i ) + Zi (Y )]} = EiT Pi [Ai − Bi Ki ] + [Ai − Bi Ki ]T Pi Ei + 2EiT Pi {Bi [gi (yi )(u¯i ) + Zi (Y )]} = {−EiT Ei + 2EiT Pi Bi [gi (yi )(u¯i ) + Zi (Y )]}. (19.34)
Substituting (19.34) and (19.28) into (19.33),
$$\dot V \le \sum_{i=1}^N \Big\{-\frac{1}{2}E_i^T E_i + p_i^T E_i\,g_i(y_i)u_{c,i} + M_i + p_i^T E_i\big[g_i(y_i)\big(\hat\phi_i^T\hat W_i + \tilde z_i - \phi_i^T W_i + [u_{f,i}-u_i^*]\big) + Z_i(Y)\big]\Big\}. \tag{19.35}$$
Analyzing the last term of (19.35), we have
$$\begin{aligned}
&\sum_{i=1}^N p_i^T E_i\big[g_i(y_i)\big(\hat\phi_i^T\hat W_i + \tilde z_i - \phi_i^T W_i + u_{f,i}-u_i^*\big) + Z_i(Y)\big]\\
&\quad\le \sum_{i=1}^N |p_i^T E_i|\,\big|\bar g_i q_i(\|E_i\|)\big(\hat\phi_i^T\hat W_i + \tilde z_i - \phi_i^T W_i + u_{f,i}-u_i^*\big) + Z_i(Y)\big|\\
&\quad\le \sum_{i=1}^N |p_i^T E_i|\,\Big[\bar g_i q_i(\|E_i\|)\big(|\hat\phi_i^T\hat W_i| + c_{z,i} + \bar c_{z,i} + |\phi_i^T W_i| + |u_{f,i}-u_i^*|\big) + |Z_i(Y)|\Big]\\
&\quad\le \sum_{i=1}^N |p_i^T E_i|\,\Big\{\bar g_i q_i(\|E_i\|)\big(\|\hat\phi_i^T\hat W_i\| + c_{z,i} + \bar c_{z,i} + \|\phi_i^T W_i\| + k_{u,i}\big) + \sum_{j=1}^N C_{i,j}\,q_j(\|E_j\|)\Big\}\\
&\quad\le \sum_{i=1}^N |p_i^T E_i|\,\Big[\bar g_i c_i\,q_i(\|E_i\|) + \sum_{j=1}^N C_{i,j}\,q_j(\|E_j\|)\Big]\\
&\quad\le \sum_{i=1}^N |p_i^T E_i|\sum_{j=1}^N \delta_{i,j}\,q_j(\|E_j\|)\\
&\quad\le \sum_{i=1}^N N\max_j(\delta_{i,j})\,|p_i^T E_i|\,q_i(\|E_i\|)\\
&\quad= \sum_{i=1}^N g_i\delta_i\,|p_i^T E_i|\,q_i(\|E_i\|)
\end{aligned} \tag{19.36}$$
with $c_i = \bar c_{z,i} + c_{z,i} + \bar c_{\phi,i}r_i + c_{\phi,i}r_i + k_{u,i}$ and
$$\delta_{i,j} = \begin{cases}\bar g_i c_i + C_{i,j}, & i = j,\\ C_{i,j}, & i \ne j,\end{cases} \qquad g_i\delta_i = N\max_j(\delta_{i,j}). \tag{19.37}$$
The fifth inequality is the consequence of applying Chebyshev inequality (19.31).
From (19.36) and (19.35) it follows that
$$\dot V \le \sum_{i=1}^N \Big[-\frac{1}{2}E_i^T E_i + p_i^T E_i\,g_i(y_i)u_{c,i} + g_i\delta_i\,|p_i^T E_i|\,q_i(\|E_i\|) + \frac{g_i}{\alpha_i}\tilde\delta_i\dot{\tilde\delta}_i + \frac{\bar g_i}{\gamma_i}\tilde z_i\dot{\tilde z}_i + \frac{\bar g_i}{\eta_i}\tilde\theta_i^T\dot{\tilde\theta}_i + \frac{\bar g_i}{\varsigma_i}\tilde\phi_i^T\dot{\tilde\phi}_i\Big]; \tag{19.38}$$
replacing (19.15) and (19.21) in $(\bar g_i/\gamma_i)\tilde z_i\dot{\tilde z}_i$, we have
$$\begin{aligned}
\frac{\bar g_i}{\gamma_i}\tilde z_i\dot{\tilde z}_i &= \frac{\bar g_i}{\gamma_i}\tilde z_i\big[-\gamma_i\tilde z_i + \hat\theta_i^T\hat W_i - \theta_i^T W_i + N_{1,i}\big]\\
&\le -\bar g_i\tilde z_i^2 + \frac{\bar g_i}{\gamma_i}\tilde z_i\big[\hat\theta_i^T\hat W_i - \theta_i^T W_i + N_{1,i}\big]\\
&\le -\bar g_i\tilde z_i^2 + \frac{\bar g_i}{\gamma_i}|\tilde z_i|\big[|\hat\theta_i^T\hat W_i| + |\theta_i^T W_i| + |N_{1,i}|\big]\\
&\le -\bar g_i\tilde z_i^2 + \frac{\bar g_i}{\gamma_i}|\bar c_{z,i} + c_{z,i}|\,[\bar c_{\theta,i}r_i + c_{\theta,i}r_i + \sigma_i]\\
&\le -\bar g_i\tilde z_i^2 + C_{1,i}
\end{aligned} \tag{19.39}$$
with $C_{1,i} = \frac{\bar g_i}{\gamma_i}|\bar c_{z,i} + c_{z,i}|\,[\bar c_{\theta,i}r_i + c_{\theta,i}r_i + \sigma_i]$. Substituting (19.24) in $(\bar g_i/\eta_i)\tilde\theta_i^T\dot{\tilde\theta}_i$,
$$\begin{aligned}
\frac{\bar g_i}{\eta_i}\tilde\theta_i^T\dot{\tilde\theta}_i &= \frac{\bar g_i}{\eta_i}\tilde\theta_i^T\Big[-\vartheta_i\hat\theta_i + \frac{\rho_i\hat z_i\hat W_i}{\|\hat W_i\|\,|\hat z_i| + \rho_i}\Big] = -\frac{\bar g_i\vartheta_i}{\eta_i}\tilde\theta_i^T\hat\theta_i + \frac{\bar g_i}{\eta_i}\tilde\theta_i^T\frac{\rho_i\hat z_i\hat W_i}{\|\hat W_i\|\,|\hat z_i| + \rho_i}\\
&\le -\frac{\bar g_i\vartheta_i}{\eta_i}\tilde\theta_i^T\hat\theta_i + \frac{\bar g_i}{\eta_i}\|\tilde\theta_i\|\rho_i\\
&\le -\frac{\bar g_i\vartheta_i}{\eta_i}\tilde\theta_i^T\hat\theta_i + \frac{\bar g_i\rho_i}{\eta_i}\|\tilde\theta_i\|\\
&\le -\frac{\bar g_i\vartheta_i}{\eta_i}\tilde\theta_i^T\hat\theta_i + \frac{\bar g_i\rho_i}{\eta_i}\big(\|\hat\theta_i\| + \|\theta_i\|\big)\\
&\le -\frac{\bar g_i\vartheta_i}{\eta_i}\tilde\theta_i^T\hat\theta_i + \frac{\bar g_i\rho_i}{\eta_i}\big(\bar C_{\theta,i} + C_{\theta,i}\big).
\end{aligned} \tag{19.40}$$
Analyzing $(\bar g_i/\varsigma_i)\tilde\phi_i^T\dot{\tilde\phi}_i$ and replacing (19.25), we obtain
$$\begin{aligned}
\frac{\bar g_i}{\varsigma_i}\tilde\phi_i^T\dot{\tilde\phi}_i &= \frac{\bar g_i}{\varsigma_i}\tilde\phi_i^T\Big[-\psi_i\hat\phi_i - \varsigma_i\frac{\varphi_i\,p_i^T E_i\,q_i(\|E_i\|)}{|p_i^T E_i|\,q_i(\|E_i\|) + \varphi_i}\hat W_i\Big]\\
&\le -\frac{\bar g_i\psi_i}{\varsigma_i}\tilde\phi_i^T\hat\phi_i + [\bar c_{\phi,i} + c_{\phi,i}]\varphi_i r_i.
\end{aligned} \tag{19.41}$$
It follows from (19.39), (19.40), (19.41), and (19.38) that
$$\begin{aligned}
\dot V \le \sum_{i=1}^N \Big[&-\frac{1}{2}E_i^T E_i - \bar g_i\tilde z_i^2 - \frac{\bar g_i\vartheta_i}{\eta_i}\tilde\theta_i^T\hat\theta_i - \frac{\bar g_i\psi_i}{\varsigma_i}\tilde\phi_i^T\hat\phi_i + \bar g_i\,p_i^T E_i\,q_i(\|E_i\|)\,u_{c,i}\\
&+ g_i\delta_i\,|p_i^T E_i|\,q_i(\|E_i\|) + \frac{g_i}{\alpha_i}\tilde\delta_i\dot{\tilde\delta}_i + \frac{\bar g_i\rho_i}{\eta_i}(\bar C_{\theta,i} + C_{\theta,i}) + C_{1,i} + (\bar c_{\phi,i} + c_{\phi,i})\varphi_i\Big].
\end{aligned}$$
After adding and subtracting $\hat\delta_i$ in $g_i\delta_i\,|p_i^T E_i|\,q_i(\|E_i\|)$, we have
$$\begin{aligned}
g_i\delta_i\,|p_i^T E_i|\,q_i(\|E_i\|) &= g_i(\delta_i \pm \hat\delta_i)\,|p_i^T E_i|\,q_i(\|E_i\|) = g_i(\hat\delta_i - \tilde\delta_i)\,|p_i^T E_i|\,q_i(\|E_i\|)\\
&\le \bar g_i\hat\delta_i\,|p_i^T E_i|\,q_i^2(\|E_i\|) - g_i\tilde\delta_i\,|p_i^T E_i|\,q_i(\|E_i\|).
\end{aligned} \tag{19.42}$$
Replacing (19.23) and (19.42) and applying the control law $u_i$, we get
$$\begin{aligned}
\dot V &\le \sum_{i=1}^N \Big[-\frac{1}{2}E_i^T E_i - \bar g_i\tilde z_i^2 - \frac{\bar g_i\vartheta_i}{\eta_i}\tilde\theta_i^T\hat\theta_i - \frac{\bar g_i\psi_i}{\varsigma_i}\tilde\phi_i^T\hat\phi_i + \frac{\hat\delta_i\,|p_i^T E_i|\,q_i^2(\|E_i\|)\,\varepsilon_i}{|\hat\delta_i(p_i^T E_i)q_i^2(\|E_i\|)| + \varepsilon_i}\\
&\qquad\quad - \frac{g_i\rho_i}{\alpha_i}\tilde\delta_i\hat\delta_i + C_{1,i} + \frac{\bar g_i\rho_i}{\eta_i}(\bar C_{\theta,i} + C_{\theta,i}) + (\bar c_{\phi,i} + c_{\phi,i})\varphi_i\Big]\\
&\le \sum_{i=1}^N \Big[-\frac{1}{2}E_i^T E_i - \bar g_i\tilde z_i^2 - \frac{\bar g_i\vartheta_i}{\eta_i}\tilde\theta_i^T\hat\theta_i - \frac{\bar g_i\psi_i}{\varsigma_i}\tilde\phi_i^T\hat\phi_i - \frac{g_i\rho_i}{\alpha_i}\tilde\delta_i\hat\delta_i\\
&\qquad\quad + \varepsilon_i + C_{1,i} + \frac{\bar g_i\rho_i}{\eta_i}(\bar C_{\theta,i} + C_{\theta,i}) + (\bar c_{\phi,i} + c_{\phi,i})\varphi_i\Big]\\
&\le \sum_{i=1}^N \Big[-\frac{1}{2}E_i^T E_i - \bar g_i\tilde z_i^2 - \frac{\bar g_i\vartheta_i}{\eta_i}\tilde\theta_i^T\tilde\theta_i - \frac{\bar g_i\psi_i}{\varsigma_i}\tilde\phi_i^T\tilde\phi_i - \frac{g_i\rho_i}{\alpha_i}\tilde\delta_i^2 + \varepsilon_i + C_{1,i}\\
&\qquad\quad + \frac{\bar g_i\rho_i}{\eta_i}(\bar C_{\theta,i} + C_{\theta,i}) + (\bar c_{\phi,i} + c_{\phi,i})\varphi_i + \frac{3\bar g_i\vartheta_i}{4\eta_i}\theta_i^T\theta_i + \frac{3\bar g_i\psi_i}{4\varsigma_i}\phi_i^T\phi_i + \frac{3g_i\rho_i}{4\alpha_i}\delta_i^2\Big].
\end{aligned}$$
Let
$$\lambda = \sum_{i=1}^N \Big[\varepsilon_i + C_{1,i} + (\bar c_{\phi,i} + c_{\phi,i})\varphi_i + \frac{\bar g_i\rho_i}{\eta_i}(\bar C_{\theta,i} + C_{\theta,i}) + \frac{3\bar g_i\vartheta_i}{4\eta_i}\theta_i^T\theta_i + \frac{3\bar g_i\psi_i}{4\varsigma_i}\phi_i^T\phi_i + \frac{3g_i\rho_i}{4\alpha_i}\delta_i^2\Big] \tag{19.43}$$
and
$$\tau = \min_i\left\{\frac{1}{2},\ \frac{g_i\rho_i}{\alpha_i},\ \bar g_i,\ \frac{\bar g_i\vartheta_i}{\eta_i},\ \frac{\bar g_i\psi_i}{\varsigma_i}\right\}; \tag{19.44}$$
then we have
$$\dot V \le -2\tau V + \lambda. \tag{19.45}$$
An ultimate error bound is given by $\lambda/(2\tau)$, which can be made arbitrarily small by properly choosing the design parameters. The compact set to which the closed-loop variables belong is characterized by $V(t) \le \max\{V(0),\ \lambda/(2\tau)\}$.

19.5 Output Feedback

We now use a high-gain observer (HGO) [27] to estimate the state $y_i$. It is shown in [27] (p. 622) that the design of such an HGO satisfies the separation principle, provided that the state feedback control guarantees semiglobal boundedness and the observer gains are high enough. The HGO is given by
$$\begin{aligned}
e_{1,i} &= e_i,\\
\dot{\hat e}_{j,i} &= \hat e_{j+1,i} + \frac{\alpha_{j,i}}{\varepsilon_{ob,i}^{\,j}}\,(e_{1,i} - \hat e_{1,i}), \qquad 1 \le j \le n_i - 1,\\
\dot{\hat e}_{n_i,i} &= \frac{\alpha_{n_i,i}}{\varepsilon_{ob,i}^{\,n_i}}\,(e_{1,i} - \hat e_{1,i}),
\end{aligned} \tag{19.46}$$
where $0 < \varepsilon_{ob,i} \ll 1$ is a design parameter and the $\alpha_{j,i} > 0$ are chosen such that the roots of $p^{n_i} + \alpha_{1,i}p^{n_i-1} + \cdots + \alpha_{n_i-1,i}p + \alpha_{n_i,i}$ have negative real parts. The estimate of $y_i$ is therefore
$$\hat y_i^{(j-1)} = \hat e_{j,i} + y_{r,i}^{(j-1)}, \qquad 1 \le j \le n_i. \tag{19.47}$$
The controller is implemented by substituting yi in (19.19) to (19.25) by their estimates. The control is saturated outside a compact region of interest to prevent the peaking introduced by the HGO [27], i.e.,
$$u_i = u_{\max,i}\,\operatorname{sat}\!\left(\frac{u_{o,i} - u_{c,i}}{u_{\max,i}}\right), \tag{19.48}$$
where $\operatorname{sat}(\cdot)$ is the saturation function and $u_{\max,i}$ is the saturation limit, chosen to cover the region of interest. The overall stability is ensured by the semiglobal stability provided by the state feedback and the separation principle [27].
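As an illustration only, the sketch below shows one way the observer (19.46) and the saturation (19.48) might be discretized for implementation, using a forward-Euler step. The step size, gains, and function names are assumptions made for this sketch, not part of the authors' implementation.

```python
import numpy as np

def hgo_step(e_hat, e_meas, alphas, eps_ob, dt):
    """One forward-Euler step of the high-gain observer (19.46) for one subsystem.

    e_hat  : current estimate of [e_i, e_i', ..., e_i^(n_i-1)]
    e_meas : measured tracking error e_i (= e_{1,i})
    alphas : observer gains alpha_{1,i}..alpha_{n_i,i} (Hurwitz choice)
    eps_ob : small positive design parameter eps_{ob,i}
    """
    n = len(e_hat)
    innov = e_meas - e_hat[0]
    de = np.empty(n)
    for j in range(n - 1):
        de[j] = e_hat[j + 1] + alphas[j] / eps_ob ** (j + 1) * innov
    de[n - 1] = alphas[n - 1] / eps_ob ** n * innov
    return e_hat + dt * de

def saturated_control(u_raw, u_max):
    """Saturation as in (19.48): limits the control to [-u_max, u_max] to curb HGO peaking."""
    return u_max * np.clip(u_raw / u_max, -1.0, 1.0)
```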
19.6 Experimental Results

In this section, the theoretical results developed in the previous sections are applied to a robot for test and evaluation. In the experimental tests, a Rhino robot with 5 degrees of freedom was employed, of which only two links (the shoulder and the elbow) were used.
The dynamic model of an $n$-link robot is
$$M(q)\ddot q + C(q,\dot q)\dot q + G(q) + F(\dot q) = u, \tag{19.49}$$
where $q$ is the $n$-dimensional vector of joint positions (degrees); $M(q) = [m_{ij}(q)]$ is the $n\times n$ inertia matrix; $C(q,\dot q)\dot q$, with $C(q,\dot q) = [c_{ij}(q,\dot q)]$, collects the $n\times 1$ Coriolis and centrifugal torques; $G(q) = [g_i(q)]$ is the $n\times 1$ vector of gravitational torques; $F(\dot q) = [f_i(\dot q_i)]$ is the $n\times 1$ vector of friction torques; and $u = [u_i]$, with $u_i$ the $i$th control input voltage (volts). For the development of the decentralized control, each joint is considered as a subsystem of the entire manipulator system, interconnected by coupling torques representing the inertial coupling, Coriolis, centrifugal, friction, load, and gravity terms. By separating the terms that depend only on the local variables $(q_i,\dot q_i,\ddot q_i)$ from those involving the other joint variables, the dynamical equation of the $i$th subsystem is
$$m_i(q_i)\ddot q_i + c_i(q_i,\dot q_i)\dot q_i + g_i(q_i) + z_i(q_i,\dot q_i,\ddot q_i) = u_i,$$
$$z_i = \sum_{j=1,\,j\ne i}^{n} m_{ij}(q)\ddot q_j + [m_{ii}(q) - m_i(q_i)]\ddot q_i + \sum_{j=1,\,j\ne i}^{n} c_{ij}(q,\dot q)\dot q_j + [c_{ii}(q,\dot q) - c_i(q_i,\dot q_i)]\dot q_i + [\bar g_i(q) - g_i(q_i)]. \tag{19.50}$$
Fig. 19.1 The relevant signals in the first link by state feedback (panels: tracking error e; reference yr and output y; control u; ‖θ‖; δ; ‖φ‖; and z versus time)
Fig. 19.2 The relevant signals in the second link by state feedback (same panel layout as Fig. 19.1)
Fig. 19.3 The relevant signals in the first link in the Rhino Robot by output feedback (same panel layout as Fig. 19.1)
Fig. 19.4 The relevant signals in the second link in the Rhino Robot by output feedback (same panel layout as Fig. 19.1)
Let $x_i = [q_i,\ \dot q_i]^T$ and $y_i = q_i$; it is easy to see that (19.50) satisfies Assumption 2. Intuitive, experience-based design was used for the antecedent part of the RNFN in (19.9), where $A_r(e_i)$ are Gaussian fuzzy sets taking the linguistic values negative, zero, and positive for $e_i$, with (center, width) = (±0.5, 15) and (center, width) = (0, 5), and $B(z_i)$ are sigmoid membership functions defined by (center, width) = (0, −6) for $z_i$. This gives a fuzzy control $u_{f,i}$ with $n_r = 3$ rules for each link. In the first experiment, state feedback control was implemented; Figures 19.1 and 19.2 show the results obtained for link 1 and link 2, respectively. Next, the HGO was used to obtain an output feedback control; Figures 19.3 and 19.4 depict the results.
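To make the antecedent design above concrete, the following sketch evaluates Gaussian and sigmoid membership functions with the (center, width) pairs quoted in the text and combines them by product inference. The exact way "width" enters each formula is an assumption for illustration, not the authors' parameterization.

```python
import numpy as np

def gaussian_mf(x, center, width):
    # Gaussian membership; "width" is assumed to scale the exponent denominator
    return np.exp(-((x - center) ** 2) / width)

def sigmoid_mf(x, center, width):
    # Sigmoid membership; a negative "width" (slope) flips its direction
    return 1.0 / (1.0 + np.exp(-width * (x - center)))

# Antecedent sets quoted in the text: negative/zero/positive for e_i, one sigmoid set for z_i
e_sets = [(-0.5, 15.0), (0.0, 5.0), (0.5, 15.0)]
z_set = (0.0, -6.0)

def firing_strengths(e, z):
    """Product inference: one firing strength per rule (n_r = 3)."""
    mu_z = sigmoid_mf(z, *z_set)
    return np.array([gaussian_mf(e, c, w) * mu_z for c, w in e_sets])
```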
19.7 Conclusions

This chapter has proposed a decentralized control for a class of nonlinear systems based on RNFNs. Semiglobal boundedness of the closed-loop system has been established based on Lyapunov stability. Experiments on a 2-degree-of-freedom robot were carried out to illustrate the proposed scheme.

Acknowledgments This work is supported in part by PAPIIT-UNAM IN106206.
References

1. D.T. Gavel and D.D. Siljak (1989) Decentralized adaptive control: structural conditions for stability. IEEE Transactions on Automatic Control, 34: 413–426.
2. P.A. Ioannou (1986) Decentralized adaptive control of interconnected systems. IEEE Transactions on Automatic Control, 34: 291–298.
3. L. Shi and S.K. Singh (1992) Decentralized adaptive controller design for large-scale systems with higher order interconnections. IEEE Transactions on Automatic Control, 37: 1106–1118.
4. K.S. Narendra and N.O. Oleng (2002) Exact output tracking in decentralized adaptive control systems. IEEE Transactions on Automatic Control, 47: 390–394.
5. Y. Tang, M. Tomizuka, G. Guerrero-Ramírez, and G. Montemayo (2000) Decentralized robust control of mechanical systems. IEEE Transactions on Automatic Control, 45: 771–776.
6. A. Teel and L. Praly (1994) Global stabilizability and observability imply semi-global stabilizability by output feedback. Systems & Control Letters, 22: 313–325.
7. Z.P. Jiang (2000) Decentralized and adaptive nonlinear tracking of large-scale systems via output feedback. IEEE Transactions on Automatic Control, 45: 2122–2128.
8. S. Nicosia and P. Tomei (1990) Robot control by using only position measurements. IEEE Transactions on Automatic Control, 35: 1058–1061.
9. H.K. Khalil (1996) Adaptive output feedback control of nonlinear systems represented by input-output models. IEEE Transactions on Automatic Control, 41: 177–188.
10. T. Takagi and M. Sugeno (1984) Fuzzy identification of systems and its applications to modeling and control. IEEE Transactions on Systems, Man, and Cybernetics, SMC-15: 116–132.
11. L. Wang (1994) Adaptive fuzzy systems and control: design and stability analysis. Prentice Hall, Upper Saddle River, NJ.
12. R.M. Sanner and J.J.E. Slotine (1992) Gaussian networks for direct adaptive control. IEEE Transactions on Neural Networks, 3: 837–863.
13. A. Levin and K.S. Narendra (1996) Control of nonlinear dynamical systems using neural networks, Part II: Observability, identification, and control. IEEE Transactions on Neural Networks, 7: 30–42.
14. S. Seshagiri and H.K. Khalil (2000) Output feedback control of nonlinear systems using RBF neural networks. IEEE Transactions on Neural Networks, 11: 69–79.
15. S.S. Ge, C.C. Hang, and T. Zhang (1999) Adaptive neural network control of nonlinear systems by state and output feedback. IEEE Transactions on Systems, Man, and Cybernetics, 29: 818–827.
16. J.T. Spooner and K.M. Passino (1999) Decentralized adaptive control of nonlinear systems using radial basis neural networks. IEEE Transactions on Automatic Control, 44: 2050–2057.
17. S.N. Huang, K.K. Tan, and T.H. Lee (2003) Decentralized control design for large-scale systems with strong interconnections using neural networks. IEEE Transactions on Automatic Control, 48: 805–810.
18. S. Tong, H.X. Li, and G. Chen (2003) Adaptive fuzzy decentralized control for a class of large-scale nonlinear systems. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 33: 1–6.
19. C.H. Lee and C.C. Teng (2000) Identification and control of dynamic systems using recurrent fuzzy neural networks. IEEE Transactions on Fuzzy Systems, 8: 349–366.
20. C.C. Ku and K.Y. Lee (1995) Diagonal recurrent neural networks for dynamic systems control. IEEE Transactions on Neural Networks, 6: 144–156.
21. C.F. Juang (2002) A TSK-type recurrent fuzzy network for dynamic systems processing by neural network and genetic algorithms. IEEE Transactions on Fuzzy Systems, 10: 155–170.
22. Y.C. Wang, C.J. Chien, and C.C. Teng (2004) Direct adaptive iterative learning control of nonlinear systems using an output-recurrent fuzzy neural network. IEEE Transactions on Systems, Man, and Cybernetics, 34: 1348–1359.
23. P. Grosdidier and M. Morari (1986) Interaction measures for systems under decentralized control. Automatica, 22: 309–319.
24. Wen Yu and Xiaoou Li (2001) Some new results on system identification with dynamic neural networks. IEEE Transactions on Neural Networks, 12: 412–417.
25. K.I. Funahashi and Y. Nakamura (1993) Approximation of dynamical systems by continuous time recurrent neural networks. Neural Networks, 801–806.
26. G.A. Rovithakis (2000) Adaptive control with recurrent high-order neural networks: theory and industrial applications. Springer.
27. H.K. Khalil (2002) Nonlinear Systems, 3rd ed. Prentice Hall, Upper Saddle River, NJ.
Chapter 20
A New Training Algorithm of Adaptive Fuzzy Control for Chaotic Dynamic Systems Chun-Fei Hsu, Bore-Kuen Lee, and Tsu-Tian Lee
Abstract This chapter proposes a PID-learning-type adaptive fuzzy controller (PIDAFC) system for chaotic dynamic systems. The proposed PID-AFC system is comprised of a fuzzy controller and a robust controller. The fuzzy controller is designed to mimic an ideal controller, and the robust controller is designed to dispel the effect of the approximation error between the fuzzy controller and the ideal controller. All the control parameters are on-line tuned in the sense of Lyapunov theorem; thus the stability of the system can be guaranteed. Moreover, to relax the requirement for the bound value in the robust control, a bound estimation is investigated to on-line estimate the approximation error introduced by fuzzy controller. The chattering phenomena in the control efforts can be reduced. Finally, a comparison between a conventional adaptive fuzzy controller (AFC) and the proposed PID-AFC is presented. Simulation results verify that the proposed PID-AFC can achieve better tracking performance and faster tracking error convergence than the conventional AFC for chaotic dynamic systems. Keywords: Adaptive control · fuzzy control · Lyapunov stability theorem
Chun-Fei Hsu and Bore-Kuen Lee: Department of Electrical Engineering, Chung Hua University, Hsinchu 300, Taiwan, Republic of China. Tsu-Tian Lee: Department of Electrical Engineering, National Taipei University of Technology, Taipei 106, Taiwan, Republic of China.

20.1 Introduction

If the exact model of the controlled system is well known, there exists an ideal controller that achieves favorable control performance by canceling all the system uncertainties [1]. Since the system parameters and the external load disturbance may
be unknown or perturbed, the ideal controller cannot be implemented. Moreover, if all uncertainties existing in the controlled system are bounded, a robust design technique termed sliding mode control has been proposed to confront these uncertainties [2, 3]. Once the states of the controlled system enter the sliding mode, the dynamics of the system are determined by the choice of sliding hyperplanes and are independent of uncertainties and external disturbances. However, to satisfy the existence condition of the sliding mode, a conservative control law usually results in large and chattering control efforts. The chattering phenomena in control efforts will wear the bearing mechanism and excite unmodeled dynamics. Fuzzy systems have supplanted conventional technology in some scientific applications and engineering systems, especially in control systems. A fuzzy system consists of a set of fuzzy if–then rules. Fuzzy control using linguistic information possesses several advantages such as robustness; model-free, universal approximation; and rule-based algorithms [4, 5]. Though it is one of the most effective methods using expert knowledge, it has not been viewed as a rigorous approach due to the lack of formal synthesis techniques that can guarantee global stability of the fuzzy control. To tackle this problem, some researchers have focused on the use of the Lyapunov synthesis approach to construct an adaptive fuzzy controller (AFC) [6–10]. The key element of the AFC is the merger of adaptive control with fuzzy approximation theory, where the fuzzy system can approximate the unknown control system dynamics or the ideal controller. The control parameters of the AFCs proposed in [6–10] are always on-line tuned by an I-learning-type algorithm in the Lyapunov sense; thus, the stability of the system can be guaranteed. This kind of AFC can be found in the literature and is referred to as an I-AFC in this chapter. Chaotic systems have been studied and are known to exhibit complex dynamical behavior. Chaotic dynamic systems can be observed in many nonlinear circuits and mechanical systems. The interest in chaotic systems lies mostly in their complex, unpredictable behavior and extreme sensitivity to initial conditions as well as parameter variations [11]. Recently, control of chaotic dynamic systems has become a significant research topic in the physics, mathematics, and engineering communities [11–15]. However, there are still drawbacks. Among these proposed control schemes, some cannot achieve favorable performance, and others require overly complex design procedures. To tackle these drawbacks, the motivation of this chapter is to propose a PID-AFC for chaotic dynamic systems. The proposed PID-AFC system comprises a fuzzy controller and a robust controller. The fuzzy controller is designed to mimic an ideal controller, and the robust controller is designed to eliminate the effect of the approximation error between the fuzzy controller and the ideal controller. In this chapter, the PID-type learning algorithm is derived to on-line tune the controller parameters based on the Lyapunov stability theorem; thus, not only can the stability of the system be guaranteed, but also the convergence of the tracking error can be speeded up. Finally, simulation results for the example in [6] indicate that the PID-AFC can achieve better tracking performance than the I-AFC, which only uses the I-learning-type algorithm.
It should be pointed out that the proposed PID-AFC greatly improves the tracking performance at the expense of only a little more computation.
Fig. 20.1 Typical chaotic orbits of the uncontrolled chaotic dynamic system
20.2 Problem Formulation

Chaotic systems have been studied and are known to exhibit complex dynamical behavior. The interest in chaotic systems lies mostly in their complex, unpredictable behavior and extreme sensitivity to initial conditions as well as parameter variations. Consider a second-order chaotic system such as the well-known Duffing equation describing a special nonlinear circuit or a pendulum moving in a viscous medium under control [11]:
$$\ddot x = -p\dot x - p_1 x - p_2 x^3 + q\cos(wt) + u = f(x,\dot x) + u, \tag{20.1}$$
where $t$ is the time variable, $w$ is the frequency, $f(x,\dot x) = -p\dot x - p_1 x - p_2 x^3 + q\cos(wt)$ is the system dynamic function, $u$ is the control effort, and $p$, $p_1$, $p_2$, and $q$ are real constants. Depending on the choice of these constants, it is known that the solutions of system (20.1) may exhibit periodic, almost periodic, or chaotic behavior. In order to observe the chaotic, unpredictable behavior, the open-loop system with $u = 0$ is simulated with $p = 0.4$, $p_1 = -1.1$, $p_2 = 1.0$, and $w = 1.8$. The phase plane plots from the initial condition point $(0, 0)$ are shown in Fig. 20.1 (a) and (b) for $q = 2.10$ and $q = 7.00$, respectively. They show that the uncontrolled chaotic dynamic system has different chaotic trajectories for different $q$ values. The control objective is to find a control law so that the chaotic trajectory $x$ tracks the desired periodic orbit. Define the tracking error as
$$e = x_c - x, \tag{20.2}$$
where $x_c$ is the command trajectory. Assume that the parameters of the system are well known. An ideal control law can be obtained [1]:
$$u^* = -f(x,\dot x) + \ddot x_c + k_1\dot e + k_2 e. \tag{20.3}$$
Substituting (20.3) into (20.1) gives
$$\ddot e + k_1\dot e + k_2 e = 0. \tag{20.4}$$
If $k_1$ and $k_2$ are chosen to correspond to the coefficients of a Hurwitz polynomial, whose roots lie strictly in the open left half of the complex plane, then $\lim_{t\to\infty} e = 0$ follows. Since the system parameters may be unknown or perturbed, the ideal control law $u^*$ in (20.3) cannot be implemented.
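As a minimal sketch of the open-loop simulation described above (u = 0, with the quoted constants), the following integrates (20.1) numerically with SciPy; the integrator settings and time span are assumptions made for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

p, p1, p2, w = 0.4, -1.1, 1.0, 1.8

def duffing(t, s, q):
    x, xdot = s
    xddot = -p * xdot - p1 * x - p2 * x ** 3 + q * np.cos(w * t)  # u = 0 (uncontrolled)
    return [xdot, xddot]

# Two uncontrolled cases from the chapter: q = 2.10 and q = 7.00, starting at (0, 0)
for q in (2.10, 7.00):
    sol = solve_ivp(duffing, (0.0, 100.0), [0.0, 0.0], args=(q,), max_step=0.01)
    # Plotting sol.y[0] against sol.y[1] gives the phase-plane orbit (cf. Fig. 20.1)
```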
20.3 Design of AFC with PID-Type Learning Algorithm

To solve the problems of the impractical ideal controller and the time-consuming trial-and-error tuning procedure needed to construct the fuzzy rule base, a new AFC, comprised of a fuzzy controller and a robust controller, is proposed with a PID-type learning algorithm.
20.3.1 Approximation of Fuzzy System

Assume that there are $m$ rules in the fuzzy rule base with the following form [4, 5]:
$$\text{Rule } k:\ \text{If } e \text{ is } F_e^k \text{ and } \dot e \text{ is } F_{\dot e}^k, \text{ then } u \text{ is } \alpha_k, \tag{20.5}$$
where $F_e^k$ and $F_{\dot e}^k$, $k = 1,2,\ldots,m$, are the linguistic terms characterized by their corresponding fuzzy membership functions and $\alpha_k$ are the singleton control actions. The fuzzy control is constructed with a singleton fuzzification, a product inference, and a sum-of-weighting defuzzification. The fuzzy control performs the mapping [4]
$$u = \sum_{k=1}^{m} \alpha_k\,\xi_k, \tag{20.6}$$
where $\xi_k$ represents the firing strength of the membership functions of the $k$th fuzzy rule. The fuzzy rules are constructed in the sense that $e$ and $\dot e$ will approach zero with fast rise time and without large overshoot. The output of the fuzzy controller can be represented in vector form as
$$u = \boldsymbol{\alpha}^T\boldsymbol{\xi}, \tag{20.7}$$
where $\boldsymbol{\alpha} = [\alpha_1,\alpha_2,\ldots,\alpha_m]^T$ and $\boldsymbol{\xi} = [\xi_1,\xi_2,\ldots,\xi_m]^T$. It has been proven that there exists a fuzzy approximator of the form (20.7) that can uniformly approximate the ideal controller $u^*$ in (20.3). By the universal approximation theorem, there exists an optimal fuzzy approximator $u^*_{fz}$ such that [6]
$$u^* = u^*_{fz} + \Delta = \boldsymbol{\alpha}^{*T}\boldsymbol{\xi} + \Delta, \tag{20.8}$$
where $\boldsymbol{\alpha}^*$ is the optimal vector of $\boldsymbol{\alpha}$ and $\Delta$ denotes the approximation error. In fact, the optimal vector needed to best approximate the ideal controller is difficult to determine. Thus, an estimated fuzzy controller is defined as
$$\hat u = \hat{\boldsymbol{\alpha}}^T\boldsymbol{\xi}, \tag{20.9}$$
where $\hat{\boldsymbol{\alpha}}$ is the estimation vector of $\boldsymbol{\alpha}^*$. In this chapter, the estimation vector $\hat{\boldsymbol{\alpha}}$ is divided into three parts as
$$\hat{\boldsymbol{\alpha}} = \beta_1\hat{\Omega}_1 + \beta_2\hat{\Omega}_2 + \beta_3\hat{\Omega}_3, \tag{20.10}$$
where $\beta_i$, $i = 1,2,3$, are positive constants and $\hat{\Omega}_1$, $\hat{\Omega}_2$, and $\hat{\Omega}_3$ are the proportional, integral, and derivative terms of $\hat{\boldsymbol{\alpha}}$, respectively. Furthermore, (20.8) can be rewritten as
$$u^* = (\beta_1\Omega^*_1 + \beta_2\Omega^*_2 + \beta_3\Omega^*_3)^T\boldsymbol{\xi} + \Delta, \tag{20.11}$$
where $\Omega^*_1$, $\Omega^*_2$, and $\Omega^*_3$ are the proportional, integral, and derivative terms of the optimal vector, respectively. Then, the estimation error of $u^*$ is defined as
$$\tilde u = u^* - \hat u = \boldsymbol{\alpha}^{*T}\boldsymbol{\xi} - \hat{\boldsymbol{\alpha}}^T\boldsymbol{\xi} + \Delta = (-\beta_1\hat{\Omega}_1 + \beta_2\tilde{\Omega}_2 - \beta_3\hat{\Omega}_3)^T\boldsymbol{\xi} + \varepsilon, \tag{20.12}$$
where $\tilde{\Omega}_i = \Omega^*_i - \hat{\Omega}_i$ ($i = 1,2$) and $\varepsilon = \beta_1\Omega^{*T}_1\boldsymbol{\xi} + \beta_3\Omega^{*T}_3\boldsymbol{\xi} + \Delta$. Note that $\varepsilon$ is assumed to be bounded by $|\varepsilon| \le E$.
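The mapping (20.5)–(20.7) reduces to a dot product between the singleton actions and the rule firing strengths. A hedged sketch follows; the way each firing strength is formed from the two antecedent memberships is stated in a comment and is an assumption of the sketch.

```python
import numpy as np

def fuzzy_control(alpha, xi):
    """Sum-of-weighting defuzzification (20.7): u = alpha^T xi.

    alpha : singleton control actions alpha_1..alpha_m
    xi    : firing strengths xi_1..xi_m, where xi_k is the product (product
            inference) of the memberships of e and e_dot in rule k
    """
    return float(np.dot(alpha, xi))
```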
20.3.2 Design of PID-AFC

The block diagram of the PID-AFC for a chaotic dynamic system is shown in Fig. 20.2. The control law comprises a fuzzy controller $u_{fz}$ and a robust controller $u_{rb}$:
$$u = u_{fz} + u_{rb} = \hat{\boldsymbol{\alpha}}^T\boldsymbol{\xi} + u_{rb}. \tag{20.13}$$
Substituting (20.13) into (20.1) and using (20.3) and (20.12), the error dynamics can be obtained as
$$\begin{aligned}
\dot{\mathbf e} &= A\mathbf e + \mathbf b\,(u^* - u_{fz} - u_{rb})\\
&= A\mathbf e + \mathbf b\big[(-\beta_1\hat{\Omega}_1 + \beta_2\tilde{\Omega}_2 - \beta_3\hat{\Omega}_3)^T\boldsymbol{\xi} + \varepsilon - u_{rb}\big],
\end{aligned} \tag{20.14}$$
where $\mathbf e = [e\ \ \dot e]^T$, $A = \begin{bmatrix} 0 & 1\\ -k_2 & -k_1 \end{bmatrix}$, and $\mathbf b = [0\ \ 1]^T$. Note that $k_1$ and $k_2$ are chosen such that $A$ is a Hurwitz matrix. Then, the following theorem can be stated and proved.
Fig. 20.2 Block diagram of the PID-AFC system
Theorem 1. Consider a chaotic dynamic system (20.1) with a control law designed as (20.13), in which the estimation vector $\hat{\boldsymbol{\alpha}}$ is on-line tuned by the PID-learning algorithm given as
$$\hat{\Omega}_1 = \mathbf e^T P\mathbf b\,\boldsymbol{\xi}, \tag{20.15}$$
$$\hat{\Omega}_2 = \int_0^t \mathbf e^T P\mathbf b\,\boldsymbol{\xi}\,dt, \tag{20.16}$$
$$\hat{\Omega}_3 = \frac{d}{dt}\big(\mathbf e^T P\mathbf b\,\boldsymbol{\xi}\big), \tag{20.17}$$
where the symmetric positive definite matrix $P$ satisfies the following Lyapunov equation:
$$A^T P + PA = -Q, \tag{20.18}$$
where $Q$ is a symmetric positive matrix. The robust controller is designed as
$$u_{rb} = E\,\operatorname{sgn}(\mathbf e^T P\mathbf b), \tag{20.19}$$
where $\operatorname{sgn}(\cdot)$ is the sign function. Then the stability of the PID-learning-type adaptive fuzzy control system can be guaranteed.

Proof. Define a Lyapunov function in the following form:
$$V_1 = \frac{1}{2}\mathbf e^T P\mathbf e + \frac{\beta_3}{2}\hat{\Omega}_1^T\hat{\Omega}_1 + \frac{\beta_2}{2}\tilde{\Omega}_2^T\tilde{\Omega}_2. \tag{20.20}$$
Differentiating (20.20) with respect to time and using (20.14) gives
$$\begin{aligned}
\dot V_1 &= \frac{1}{2}\mathbf e^T P\dot{\mathbf e} + \frac{1}{2}\dot{\mathbf e}^T P\mathbf e + \beta_3\hat{\Omega}_1^T\dot{\hat{\Omega}}_1 + \beta_2\tilde{\Omega}_2^T\dot{\tilde{\Omega}}_2\\
&= \frac{1}{2}\mathbf e^T(A^T P + PA)\mathbf e + \mathbf e^T P\mathbf b\big(-\beta_1\hat{\Omega}_1^T\boldsymbol{\xi} + \beta_2\tilde{\Omega}_2^T\boldsymbol{\xi} - \beta_3\hat{\Omega}_3^T\boldsymbol{\xi} + \varepsilon - u_{rb}\big) + \beta_3\hat{\Omega}_1^T\dot{\hat{\Omega}}_1 + \beta_2\tilde{\Omega}_2^T\dot{\tilde{\Omega}}_2\\
&= \frac{1}{2}\mathbf e^T(A^T P + PA)\mathbf e - \beta_1\hat{\Omega}_1^T\,\mathbf e^T P\mathbf b\,\boldsymbol{\xi} + \beta_2\tilde{\Omega}_2^T\,\mathbf e^T P\mathbf b\,\boldsymbol{\xi} - \beta_3\,\mathbf e^T P\mathbf b\,\boldsymbol{\xi}^T\hat{\Omega}_3 + \beta_3\hat{\Omega}_1^T\dot{\hat{\Omega}}_1 + \beta_2\tilde{\Omega}_2^T\dot{\tilde{\Omega}}_2 + \mathbf e^T P\mathbf b\,(\varepsilon - u_{rb}).
\end{aligned} \tag{20.21}$$
From the definition $\tilde{\Omega}_2 = \Omega^*_2 - \hat{\Omega}_2$, the fact $\dot{\tilde{\Omega}}_2 = -\dot{\hat{\Omega}}_2$ can be easily derived. Then, observing equations (20.15) to (20.17), we obtain
$$\hat{\Omega}_3 = \dot{\hat{\Omega}}_1, \tag{20.22}$$
$$\hat{\Omega}_1 = -\dot{\tilde{\Omega}}_2. \tag{20.23}$$
Substituting (20.15) to (20.17), (20.22), and (20.23) into (20.21) gives
$$\begin{aligned}
\dot V_1 &= -\frac{1}{2}\mathbf e^T Q\mathbf e - \beta_1\hat{\Omega}_1^T\hat{\Omega}_1 + \beta_2\tilde{\Omega}_2^T\hat{\Omega}_1 - \beta_3\hat{\Omega}_1^T\hat{\Omega}_3 + \beta_3\hat{\Omega}_1^T\dot{\hat{\Omega}}_1 + \beta_2\tilde{\Omega}_2^T\dot{\tilde{\Omega}}_2 + \mathbf e^T P\mathbf b\,(\varepsilon - u_{rb})\\
&= -\frac{1}{2}\mathbf e^T Q\mathbf e - \beta_1\hat{\Omega}_1^T\hat{\Omega}_1 - \beta_2\tilde{\Omega}_2^T\dot{\tilde{\Omega}}_2 - \beta_3\hat{\Omega}_1^T\dot{\hat{\Omega}}_1 + \beta_3\hat{\Omega}_1^T\dot{\hat{\Omega}}_1 + \beta_2\tilde{\Omega}_2^T\dot{\tilde{\Omega}}_2 + \mathbf e^T P\mathbf b\,(\varepsilon - u_{rb})\\
&= -\frac{1}{2}\mathbf e^T Q\mathbf e - \beta_1\hat{\Omega}_1^T\hat{\Omega}_1 + \varepsilon\,\mathbf e^T P\mathbf b - E\,\big|\mathbf e^T P\mathbf b\big|\\
&\le -\frac{1}{2}\mathbf e^T Q\mathbf e - \beta_1\hat{\Omega}_1^T\hat{\Omega}_1 + |\varepsilon|\,\big|\mathbf e^T P\mathbf b\big| - E\,\big|\mathbf e^T P\mathbf b\big|\\
&= -\frac{1}{2}\mathbf e^T Q\mathbf e - \beta_1\hat{\Omega}_1^T\hat{\Omega}_1 - (E - |\varepsilon|)\,\big|\mathbf e^T P\mathbf b\big| \le 0.
\end{aligned} \tag{20.24}$$
As a result, the PID-AFC is stable. Thus, the proof of Theorem 1 is complete.
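A discrete-time reading of the learning laws (20.15)–(20.17) is sketched below: the proportional term is the current value of e^T P b · ξ, the integral term is its running sum, and the derivative term is approximated here by a backward difference. The class layout, step size, and the backward-difference approximation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class PIDLearner:
    """On-line tuning of alpha_hat = b1*Omega1 + b2*Omega2 + b3*Omega3 (cf. 20.10, 20.15-20.17)."""

    def __init__(self, m, betas, dt):
        self.b1, self.b2, self.b3 = betas
        self.dt = dt
        self.omega2 = np.zeros(m)   # integral term, accumulated over time
        self.prev = np.zeros(m)     # previous proportional term, for the derivative estimate

    def update(self, e, P, b, xi):
        s = float(e @ P @ b)                 # scalar e^T P b
        omega1 = s * xi                      # (20.15) proportional term
        self.omega2 += omega1 * self.dt      # (20.16) integral term
        omega3 = (omega1 - self.prev) / self.dt  # (20.17) derivative term (backward difference)
        self.prev = omega1
        return self.b1 * omega1 + self.b2 * self.omega2 + self.b3 * omega3
```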
20.3.3 Design of PID-AFC with Bound Estimation In Section 20.3.2, the application of the PID-AFC system requires the bound of approximation error. However, the bound of approximation error E is difficult to measure for practical applications in industry. If E is chosen too large, the control effort results in large chattering. The chattering phenomenon in the control effort will wear the bearing mechanism and excite unstable dynamics. If E is chosen too small, the control system may be unstable. For an application in practical design, the bound of approximation error is chosen large enough to avoid being unstable.
Fig. 20.3 Block diagram of the PID-AFC system with bound estimation law
To relax the requirement for the bound of the approximation error, the PID-AFC system with bound estimation is shown in Fig. 20.3.

Theorem 2. Consider a chaotic dynamic system (20.1) with a control law designed as (20.13), in which the estimation vector $\hat{\boldsymbol{\alpha}}$ is on-line tuned by the PID-learning algorithm given as (20.15) to (20.17) and the robust controller is designed as
$$u_{rb} = \hat E\,\operatorname{sgn}(\mathbf e^T P\mathbf b), \tag{20.25}$$
where $\hat E$ is the estimated bound of the approximation error. Then the stability of the PID-AFC system with bound estimation law can be guaranteed.

Proof. Define a Lyapunov function in the following form:
$$V_2 = \frac{1}{2}\mathbf e^T P\mathbf e + \frac{\beta_3}{2}\hat{\Omega}_1^T\hat{\Omega}_1 + \frac{\beta_2}{2}\tilde{\Omega}_2^T\tilde{\Omega}_2 + \frac{1}{2\eta_1}\tilde E^2, \tag{20.26}$$
where $\tilde E = \hat E - E$ and $\eta_1$ is a positive constant. Differentiating (20.26) with respect to time and using (20.15) to (20.17) and (20.25), we can obtain
$$\begin{aligned}
\dot V_2 &= \frac{1}{2}\mathbf e^T P\dot{\mathbf e} + \frac{1}{2}\dot{\mathbf e}^T P\mathbf e + \beta_3\hat{\Omega}_1^T\dot{\hat{\Omega}}_1 + \beta_2\tilde{\Omega}_2^T\dot{\tilde{\Omega}}_2 + \frac{1}{\eta_1}\tilde E\dot{\tilde E}\\
&= -\frac{1}{2}\mathbf e^T Q\mathbf e - \beta_1\hat{\Omega}_1^T\hat{\Omega}_1 - \beta_2\tilde{\Omega}_2^T\dot{\tilde{\Omega}}_2 - \beta_3\hat{\Omega}_1^T\dot{\hat{\Omega}}_1 + \beta_3\hat{\Omega}_1^T\dot{\hat{\Omega}}_1 + \beta_2\tilde{\Omega}_2^T\dot{\tilde{\Omega}}_2 + \mathbf e^T P\mathbf b\,(\varepsilon - u_{rb}) + \frac{1}{\eta_1}\tilde E\dot{\tilde E}\\
&= -\frac{1}{2}\mathbf e^T Q\mathbf e - \beta_1\hat{\Omega}_1^T\hat{\Omega}_1 + \mathbf e^T P\mathbf b\,(\varepsilon - u_{rb}) + \frac{1}{\eta_1}\tilde E\dot{\tilde E}.
\end{aligned} \tag{20.27}$$
For achieving $\dot V_2 \le 0$, the estimation law is chosen as
$$\dot{\hat E} = \eta_1\,\big|\mathbf e^T P\mathbf b\big|; \tag{20.28}$$
then (20.27) can be rewritten as
$$\begin{aligned}
\dot V_2 &= -\frac{1}{2}\mathbf e^T Q\mathbf e - \beta_1\hat{\Omega}_1^T\hat{\Omega}_1 + \varepsilon\,\mathbf e^T P\mathbf b - \hat E\,\big|\mathbf e^T P\mathbf b\big| + (\hat E - E)\,\big|\mathbf e^T P\mathbf b\big|\\
&= -\frac{1}{2}\mathbf e^T Q\mathbf e - \beta_1\hat{\Omega}_1^T\hat{\Omega}_1 + \varepsilon\,\mathbf e^T P\mathbf b - E\,\big|\mathbf e^T P\mathbf b\big|\\
&\le -\frac{1}{2}\mathbf e^T Q\mathbf e - \beta_1\hat{\Omega}_1^T\hat{\Omega}_1 + |\varepsilon|\,\big|\mathbf e^T P\mathbf b\big| - E\,\big|\mathbf e^T P\mathbf b\big|\\
&= -\frac{1}{2}\mathbf e^T Q\mathbf e - \beta_1\hat{\Omega}_1^T\hat{\Omega}_1 - (E - |\varepsilon|)\,\big|\mathbf e^T P\mathbf b\big| \le 0.
\end{aligned} \tag{20.29}$$
Fig. 20.4 Simulations of I-AFC for Case 1
Fig. 20.5 Simulations of I-AFC for Case 2
As a result, the PID-AFC system with bound estimation law can be guaranteed to be stable in the Lyapunov sense. Thus, the proof of Theorem 2 is complete.
20.4 Simulation Results

In this section, we will reveal the control performance of the proposed PID-AFC for the chaotic dynamic system. To investigate the efficiency of the proposed controller regarding parameter variations in the chaotic Duffing dynamic system, two cases are considered: $q = 2.10$ for Case 1 and $q = 7.00$ for Case 2. For a choice of $Q = I$, $k_1 = 2$, and $k_2 = 1$, solving the Lyapunov equation (20.18) gives
$$P = \begin{bmatrix} 1.5 & 0.5\\ 0.5 & 0.5 \end{bmatrix}. \tag{20.30}$$
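The matrix in (20.30) can be checked numerically. The short sketch below, an assumption-free verification only in the sense that it uses SciPy's standard continuous Lyapunov solver rather than the authors' code, reproduces the same P for the stated k1, k2, and Q = I.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

k1, k2 = 2.0, 1.0
A = np.array([[0.0, 1.0],
              [-k2, -k1]])
Q = np.eye(2)

# (20.18) is A^T P + P A = -Q, i.e. (A^T) P + P (A^T)^T = -Q
P = solve_continuous_lyapunov(A.T, -Q)
print(P)   # [[1.5 0.5]
           #  [0.5 0.5]]  -- matches (20.30)
```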
Fig. 20.6 Simulations of PID-AFC for Case 1
These parameters are chosen to achieve the best transient control performance considering the requirement of asymptotic stability. To show the advantages of the PID-AFC, a conventional AFC using the I-type learning algorithm is adopted for comparison. Simulation results of the I-AFC for Case 1 and Case 2 are shown in Figs. 20.4 and 20.5, respectively. The tracking responses $x$ are shown in Figs. 20.4(a) and 20.5(a), the tracking responses $\dot x$ are shown in Figs. 20.4(b) and 20.5(b), and the associated control efforts are shown in Figs. 20.4(c) and 20.5(c), respectively. From Figs. 20.4(a) and (b) and 20.5(a) and (b), the tracking errors can be seen clearly in the first cycle, especially in Fig. 20.5(b). This is because the controller parameters are still in the learning process. After the controller parameters have been learned, perfect tracking responses can be obtained. However, the convergence speed of the I-AFC is not satisfactory.
Fig. 20.7 Simulations of PID-AFC for Case 2
The control parameters of the PID-AFC are selected as β1 = 5, β2 = 50, β3 = 0.5, and E = 1. The initial settings are chosen through some trials to achieve favorable transient control performance. Simulation results of the PID-AFC for Case 1 and Case 2 are shown in Figs. 20.6 and 20.7, respectively. The tracking responses $x$ are shown in Figs. 20.6(b) and 20.7(b), the tracking responses $\dot x$ are shown in Figs. 20.6(a) and 20.7(a), and the associated control efforts are shown in Figs. 20.6(c) and 20.7(c), respectively. From the simulation results, we can conclude that the proposed PID-AFC yields better tracking performance. Although favorable tracking responses can be obtained, the chattering phenomena of the control efforts shown in Figs. 20.6(c) and 20.7(c) are undesirable. Now, a PID-AFC system with bound estimation law is applied to the control again. The control parameters of the PID-AFC with bound estimation law are selected as β1 = 5, β2 = 50, β3 = 0.5, and η1 = 0.1. The initial settings are chosen through some trials to achieve favorable transient control performance. Simulation results
Fig. 20.8 Simulations of PID-AFC with bound estimation law for Case 1
of the PID-AFC with bound estimation law for Case 1 and Case 2 are shown in Figs. 20.8 and 20.9, respectively. The tracking responses $x$ are shown in Figs. 20.8(a) and 20.9(a), the tracking responses $\dot x$ are shown in Figs. 20.8(b) and 20.9(b), and the associated control efforts are shown in Figs. 20.8(c) and 20.9(c), respectively. From the simulation results, robust control performance can also be obtained; moreover, the chattering phenomena in the control efforts are much reduced, owing to the on-line adjustment of the bound value in the robust controller.
20.5 Conclusions To solve the problem that the dynamic characteristics of the chaotic equation are nonlinear and the precise model is difficult to obtain, a PID-AFC is proposed. The controller parameters of the PID-AFC system are on-line tuned based on the
Fig. 20.9 Simulations of PID-AFC with bound estimation law for Case 2
Lyapunov stability theorem; thus, the stability of the system can be guaranteed. To relax the requirement for the bound value in the robust control, a PID-AFC system with bound estimation law is investigated. Simulations verify that the proposed PID-type learning algorithm can speed up the convergence of the tracking error. From the simulation results, three main contributions of this research can be identified:

1. The PID-learning-type adaptive fuzzy controller can achieve favorable tracking performance in controlling complex nonlinear systems.
2. Convergence of the tracking error and control parameters is accelerated by the PID-type learning algorithm.
3. Bound estimation of the approximation error is investigated to reduce the chattering phenomena in the control efforts.
References 1. J.-J.E. Slotine and W.P. Li (1998) Applied nonlinear control. Prentice Hall, Englewood Cliffs, NJ. 2. C. Unsal and P. Kachroo (1999) Sliding mode measurement feedback control for antilock braking systems. IEEE Transactions on Control Systems Technology, 7(2): 271–280. 3. M. Lopez, L.G. Vicuna, M. Castilla, P. Gaya, and O. Lopez (2004) Current distribution control design for paralleled DC/DC converters using sliding-mode control. IEEE Transactions on Industrial Electronics, 51(2): 419-428. 4. C.C. Lee (1990) Fuzzy logic in control systems: fuzzy logic controller-part I/II. IEEE Transactions on Systems, Man, and Cybernetics, 20(2): 404–435. 5. S.W. Kim and M. Park (1996) A multirule-base controller using the robust property of a fuzzy controller and its design method. IEEE Transactions on Fuzzy Systems, 4(1): 315–327. 6. L.X. Wang (1994) Adaptive fuzzy systems and control-design and stability analysis, Prentice Hall, Englewood Cliffs, NJ. 7. Y.G. Leu, T.T. Lee, and W.Y. Wang (1999) Observer-based adaptive fuzzy-neural control for unknown nonlinear dynamic systems. IEEE Transactions on Systems, Man, and Cybernetics, 29(5): 583–591. 8. S.D. Wang and C.K. Lin (2000) Adaptive tuning of the fuzzy controller for robots. Fuzzy Sets and Systems, 110(3): 351–363. 9. C.M. Lin and C.F. Hsu (2003) Self-learning fuzzy sliding-mode control for antilock braking systems. IEEE Transactions on Control Systems Technology, 11(2): 273–278. 10. C.M. Lin and C.F. Hsu (2004) Adaptive fuzzy sliding-mode control for induction servomotor systems. IEEE Transactions on Energy Conversion, 19(2): 362–368. 11. G. Chen and X. Dong (1993) On feedback control of chaotic continuous-time systems. IEEE Transactions on Circuits and Systems, I, 40(9): 591–601. 12. A. Loria, E. Panteley, and H. Nijmeijer (1998) Control of the chaotic Duffing equation with uncertainty in all parameter. IEEE Transactions on Circuits and Systems, I, 45(12): 1252– 1255. 13. S.S. Ge and C. Wang (2000) Adaptive control of uncertain Chua’s circuits. IEEE Transactions on Circuits and Systems, I, 47(9): 1397–1402. 14. Z.P. Jiang (2002) Advanced feedback control of the chaotic Duffing equation. IEEE Transactions on Circuits and Systems, I, 49(1): 244–249. 15. C.F. Hsu, C.M. Lin, and T.T. Lee (2006) Wavelet adaptive backstepping control for a class of nonlinear systems. IEEE Transactions on Neural Networks, 17(5): 1175–1183.
Chapter 21
General-Purpose Simulation Management for Satellite Navigation Signal Simulation Ge Li, Xinyu Yao, and Kedi Huang
Abstract We first analyze the requirements of a satellite navigation signal simulation system from the points of view of simulation architecture, real-time calculation, distributed data communication, and the real-time simulation engine. Then we propose a general-purpose architecture for the satellite navigation signal simulation system. It consists of the non-real-time layer, the weak real-time layer, the strong real-time layer, and the rigid real-time layer. Finally, we study general-purpose simulation management and control issues such as experiment design and management, the simulation database, simulation management techniques, and system scalability. Keywords: Simulation management · satellite navigation signal simulation · simulation architecture · real-time simulation · distributed system
21.1 Introduction Simulation management usually includes process management and mode management. Simulation process management includes initialization, data logging, on-line entity parameters and states display, and the analysis of simulation results. Loading simulation scenario, configuring the simulation entities according to the scenario, initializing the simulation environment, synchronizing time are all finished at the initialization phase. Monitoring and controlling every simulation entity’s logging states are the critical tasks at the execution phase; Analyzing the results, generating analysis reports, and simulation exercise playback are implemented after simulation. Simulation mode management indicates the starting up, pausing, resuming, and stopping of the simulation exercise. These simulation management techniques
Ge Li, Xinyu Yao, and Kedi Huang Institute for Automation, National University of Defense Technology, Changsha, Hunan, China, 410073
are general purpose and independent of the simulation architecture and system implementation. From the simulation practitioner’s point of view, a navigation signal testing and evaluating system usually consists of test task preparation subsystem, simulation control and management subsystem, radio-frequency (RF) signal generation subsystem, and test environment subsystem. These subsystems vary in time constraints, computation overhead, and communication overhead. So the navigation signal testing and evaluating system is a real-time distributed system with different time requirements. The tightly coupled hardware-in-the-loop simulation can reach microsecond simulation frame. Simulation management is complicated and difficult for real-time distributed simulation systems [1]. To satisfy the real-time requirements of the satellite navigation signal simulation system, first we analyze the real-time requirements from the points of view of the simulation architecture, real-time calculation, distributed data communication, and real-time simulation engine. Then we propose a four-layered simulation management architecture, which consists of the non-real-time layer, the weak real-time layer, the strong real-time layer, and the rigid real-time layer. The general-purpose simulation management and control issues such as experiment design and management, simulation database, simulation management technique, and system scalability are also studied in detail.
21.2 The Real-Time Application Requirements 21.2.1 Requirements for the Simulation Architecture Satellite navigation signal simulation system is a real-time system [2–4]. It includes both weak real-time simulation control functions, such as man-computer interaction and simulation visualization, and strong real-time model computation functions in millisecond level. The simulation time can reach microsecond and even nanosecond levels. The architecture should also synchronize distributed computers and devices under strict time constraints.
21.2.2 Requirements of the Real-Time Calculation for the High-Fidelity Model Satellite navigation signal simulation requires very complicated high-fidelity models. The calculation is time-consuming. On the other hand, the precise pseudo-range control requires fast control rate. The architecture should satisfy the real-time requirement.
21.2.3 Requirements of the Data Communication for Different Layers Satellite navigation signal simulation features small simulation step and tightly coupled simulation models. The calculation results should output both to the signal process chips through high-speed bus and to the other simulation nodes through real-time communication network or LAN, if the expansibility is taken into consideration. Different layers have different bandwidth and communication mechanism. The architecture should integrate data communications for different layers.
21.2.4 Requirements of the Real-Time Simulation Engine The real-time simulation engine should support models with different type and granularity to run cooperatively. It should also support the global control of the time synchronization module and multisteps simulation.
21.3 A General-Purpose Architecture for Satellite Navigation Signal Simulation Architecture is one of the most important problems for designing simulation system [5, 6]. The standards and specifications of architecture affect the performance of simulation system directly. The architecture of distributed simulation system describes the system components and the relationship, communication protocol, standards among these components, as well as the rules and principles for the components designing. A better and rational architecture must take the requirements of actual system into account and reduce the system expenses and risk. Simulation system can use different simulation architectures according to the real-time requirement of the system. Generally speaking, simulation architecture for the distributed interactive simulation can reach the millisecond-level simulation time frame. The simulation architecture for the tightly coupled hardware-in-the-loop simulation can reach microsecond simulation frame. Satellite navigation signal simulation features high-fidelity models and strictly real-time requirements. In navigation signal testing and evaluating simulation applications, the test task preparation subsystem can be processed off-line and therefore is in non-real-time layer. The simulation control and management subsystem should cooperate with the other subsystems. The RF signal generation subsystem requires nanosecond control. So the conventional simulation architecture can not meet the high requirements for functions and performance. According to different real-time requirements of subsystem of the satellite navigation signal simulation system, we propose a
four-layered architecture (Fig. 21.1): the non-real-time layer, the weak real-time layer, the strong real-time layer, and the rigid real-time layer.

Fig. 21.1 A general-purpose simulation architecture (non-real-time layer: test task preparation, test evaluation, and visualization on a LAN; weak real-time layer: data relay, simulation control and management; strong real-time layer: real-time simulation, signal evaluation, and time synchronization over a high-speed real-time communication network; rigid real-time layer: RF signal generation, signal acquisition, atomic clock)

The top layer is the non-real-time layer, which has no time constraint. It performs off-line operations such as modeling and model encapsulation, database operations, test task configuration, and post-simulation data processing. This layer operates on PCs connected by a LAN so that it is compatible with simulation modeling platforms such as MATLAB. The weak real-time layer provides the user with a feeling of real-time operation. It includes modules such as simulation management and control, on-line man–machine interaction, data logging, two-dimensional (2D) and three-dimensional (3D) visualization, diagram display, and system evaluation. This layer is also
based on networked PCs, providing the user with a powerful user interface using Windows/C++ tools, database, and visualization software. The strong real-time layer has a strict real-time constraint. It includes real-time simulation modules such as satellite orbit calculation, user receiver movement, signal transmission link, space environment, navigation message generation, the data relay module, the equipment control module, and the signal test module. It operates on a high-speed reflective memory network and the VxWorks real-time operating system. It outputs model calculation data and simulation results very frequently. The bottom layer is the rigid real-time layer. It refers to the modules related to signal generation, including signal generation and delay control as well as Doppler frequency shift control. The signal is output to RF signal modulation. Digital signal processors (DSP), field-programmable gate arrays (FPGA), and parallel signal processing algorithms are used in this layer.
21.4 General-Purpose Real-Time Distributed Simulation Managements

21.4.1 Experiment Design and Management Techniques

Experiment design and management technique supports experimentation data at three levels:

• Project level, the top level, distinguishes between different tests by the concept of project, to facilitate the generality of simulation systems, such as receiver test and ground station test.
• Design plan level, the second level, is where, according to actual projects, we can design different system realizations and choose different project objects and parameter compositions to study a system's compound characteristics.
• Simulation experiment level, the bottom level, is where experiments are carried out to validate the designs (the same object can have model realizations of different levels and granularities).

The core here is the management of the design plan level, the experiment level, and the model repository.
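One way to read the project / design plan / experiment hierarchy just described is as nested records keyed by identifiers. The field names in the sketch below are illustrative assumptions, not the system's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Experiment:              # bottom level: one simulation run configuration
    name: str
    model_ids: List[str] = field(default_factory=list)        # models of chosen granularity
    signal_addresses: Dict[str, int] = field(default_factory=dict)

@dataclass
class DesignPlan:              # middle level: one system realization under a project
    name: str
    experiments: List[Experiment] = field(default_factory=list)

@dataclass
class Project:                 # top level: e.g. receiver test, ground station test
    name: str
    plans: List[DesignPlan] = field(default_factory=list)
```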
21.4.1.1 Realization Techniques for Design-Plan-Level Management Design plan management is based on a database system, and it provides plan design, plan composition, plan duplication, signal automatic coding, parameter automatic coding, etc. It is the foundation and information source of all simulation experiments.
21.4.1.2 Realization Techniques for Simulation-Experiment-Level Management Simulation experiment management is based on design plan management, and it provides experiment creation, model selection, experiment duplication, signal association, signal and parameter address automatic allocation, etc. Simulation experiment is the basic unit of simulation management.
21.4.1.3 Model Management Techniques Models are managed according to their types, and their parameters, input, and output information are stored in a database to realize maximum reusability through standard management method. The actual operations include model class creation, modification, parameter set, and classifications. Model classes are used in design plan level; models are used in experiment level, which is the basic information source as well as the key factor of system reusability.
21.4.2 Simulation Database Techniques

The database system is built on a commercial relational database, making full use of its mature functionality. It realizes the results of the experiment design and management work, and its main functions can be classified into five areas:
21.4.2.1 Simulation Model Management Satellite navigation signal simulation contains many simulation models. Simulation model management is mainly responsible for simulation model input, modification, editing, and providing various query and statistic outputs according to users’ requirements. The information it manages includes basic information of the models (such as name, creation date, purpose description) and their structural information (such as model parameters and the submodels they contain).
21.4.2.2 Simulation Data Management Simulation data management is mainly responsible for recording, modifying, and editing simulation data generated during simulation runs and for outputting it to other format files. Users can also download certain data according to their queries. All input and output operations can be carried out in a network environment.
21.4.2.3 Simulation Experiment Management Simulation experiment management is mainly responsible for inputting, modifying, and editing simulation experiment information. It also provides various query and statistic functions according to users’ requirements. The information it manages includes basic information of simulation experiments (such as name, experiment date, experiment goals), structural information (selected simulation model and equipment, simulation data format, etc.), and data information (e.g., configuration data of simulations).
21.4.2.4 Simulation Equipment Management Simulation equipment management is mainly responsible for inputting, modifying, and editing simulation equipment information and providing various query and statistic functions according to users’ requirements. The information it manages includes basic information of simulation equipments (name, type, manufacturer, etc.), structural information (simulation equipment parameters, etc.), data information (configuration data and status data of simulation equipment, etc.).
21.4.2.5 Simulation User Management

Simulation user management is mainly responsible for user management of the simulation platform, to ensure only authorized and secure use of the simulation system.
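As a minimal sketch of how the five management areas above might map onto relational tables, the following uses Python's built-in sqlite3 in place of the commercial database mentioned in the text; all table and column names are assumptions made for illustration.

```python
import sqlite3

conn = sqlite3.connect("sim_platform.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS model       (id INTEGER PRIMARY KEY, name TEXT, created TEXT, purpose TEXT);
CREATE TABLE IF NOT EXISTS model_param (model_id INTEGER REFERENCES model(id), name TEXT, value TEXT);
CREATE TABLE IF NOT EXISTS experiment  (id INTEGER PRIMARY KEY, name TEXT, run_date TEXT, goal TEXT);
CREATE TABLE IF NOT EXISTS experiment_model (experiment_id INTEGER REFERENCES experiment(id),
                                             model_id INTEGER REFERENCES model(id));
CREATE TABLE IF NOT EXISTS sim_data    (experiment_id INTEGER REFERENCES experiment(id),
                                        t REAL, signal TEXT, value REAL);
CREATE TABLE IF NOT EXISTS equipment   (id INTEGER PRIMARY KEY, name TEXT, type TEXT, manufacturer TEXT);
CREATE TABLE IF NOT EXISTS sim_user    (id INTEGER PRIMARY KEY, name TEXT, role TEXT);
""")
conn.commit()
```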
21.4.3 Simulation Management Techniques

21.4.3.1 Simulation Experiment Selection and Configuration Loading

The experiment plan is constructed by the experiment design module based on database selection and is loaded across multiple compute nodes. It is data-driven under the idea of centralized management and distributed operation. Based on the loaded data, each computation node automatically computes its respective models.
21.4.3.2 Simulation Process Management This part includes general commands such as simulation initialization, processing, pausing, resuming, and stopping. It also includes online parameter modification, online monitoring of run-time status of distributed compute nodes, including such status as processing mode, fault model, and processing frame time.
21.4.3.3 Distributed Synchronization Techniques The satellite navigation signal simulation system is itself a distributed multicomputer system. How to synchronize multicomputers under real-time processing modes and how to synchronize with external systems such as receiver and ground operation station when they are joined in the closed loop are questions that have to be addressed.
21.4.3.4 Real-Time Simulation Engine

Based on different simulation experiment purposes, different models are loaded for each run and on different sets of nodes. The information flow among models will change dynamically too. All this requires a flexible processing mechanism provided by a simulation engine that can support both the real-time constraints of model computation and data-driven dynamic load processing requirements, as well as providing convenient model information flow control capability.
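The sketch below shows, under stated assumptions, what the core loop of such an engine might look like: models are registered with their own step multiples of a base frame, and the loop sleeps to hold the frame time. Real deployments on VxWorks with reflective memory I/O are outside the scope of this sketch.

```python
import time

class RealTimeEngine:
    """Fixed-frame scheduler supporting models that run at different step multiples."""

    def __init__(self, frame_time):
        self.frame_time = frame_time      # base frame length in seconds, e.g. 0.001
        self.models = []                  # list of (every_n_frames, callable(t))

    def register(self, model, every_n_frames=1):
        self.models.append((every_n_frames, model))

    def run(self, n_frames):
        t0 = time.perf_counter()
        for k in range(n_frames):
            t = k * self.frame_time
            for n, model in self.models:
                if k % n == 0:
                    model(t)              # advance this model by its own step
            # hold the frame: sleep until the next frame boundary
            sleep_for = t0 + (k + 1) * self.frame_time - time.perf_counter()
            if sleep_for > 0:
                time.sleep(sleep_for)
```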
21.4.4 System Scalability Realization Satellite navigation signal simulation itself is not a complete simulation system; it still needs to be combined with a relevant signal testing system, weapon compound navigation system, even larger-scale combat application systems. The scalability of a system can be realized from three areas: 1. Signal level can feed digital signals to RF signal simulation module to generate final RF signals. 2. Closely coupled parts, through extended simulation management, include kinematics simulation model of missiles and satellites in its direct management. 3. The top level can adopt high-level architecture (HLA) federate interface to realize larger-scale simulation interconnections.
21.5 Conclusions To satisfy the real-time requirements of the satellite navigation signal simulation, we propose a four-layered simulation management architecture, which consists of the non-real-time layer, the weak real-time layer, the strong real-time layer, and the rigid real-time layer. This architecture can also be tailored to the other real-time simulation system. To facilitate maintenance, the architecture is modularized and is easy to modify and upgrade. As for scalability, it adopts an open software architecture, so it is easy to manage and can integrate different model systems. Other development, management, and run-time tools can be easily integrated into the system. Extending
the existent simulation system to a larger scale simulation system is made easy and gives the system excellent scalability. It has plug-and-play capability and can realize component-level model reuse. Although the real-time distributed simulation management techniques studied in this chapter are in the context of satellite navigation signal simulation, these techniques can realize multiple experiment plans and model compositions with flexibility; thus, they can be used widely in the other real-time distributed simulation systems.
References 1. M. Baker, B. Dingman, and W. Gregg (1997) High fidelity GPS satellite simulation. In AIAA Modeling and Simulation Technologies Conference, AIAA, New Orleans, pp. 213–223. 2. A. Brown and N. Gerein (2001) Advanced GPS hybrid simulator architecture. In Proceedings of ION 57th Annual Meeting, Albuquerque, NM, pp.1–8. 3. L. Ge and K.D. Huang (1997) The applications of distributed computing technology in distributed interactive simulation. Modeling, Measurement and Control, D, 16(1): AMSE. 4. L. Ge, K. Lin, and K.D. Huang (2005) An architecture to facilitate the development of parallel and distributed simulation system. Advances in Modeling and Analysis, D, 10(1): AMSE, 1–16. 5. C.M. Krishna and K.G. Shin (1997) Real-time system. McGraw-Hill, New York. 6. R. Strunce, F. Maher, and D. Lang (2001) An object oriented dynamic simulation architecture for rapid tethered-spacecraft prototyping. In 39th Aerospace Sciences Meeting and Exhibit, AIAA, Reno, NV.
Chapter 22
Multilayered Quality-of-Service Architecture with Cross-layer Coordination for Teleoperation System X.U. Lei and L.I. Guo-dong
Abstract The quality of the best effort service provided by Internet protocol (IP) networks has proven to be inadequate for real-time control applications. A variety of methods have been proposed to address this issue, and most of these efforts are in the realm of the control system. Based on the Open System Interconnection reference model (OSI/RM), we present a multilayered network quality-of-service (QoS) enhancement architecture for network-based teleoperation systems. The QoS architecture is composed of resource network enhancement and communication network enhancement. We accordingly classify the QoS enhancement methods as the end-to-end approach and the intermediate-nodes-dependent approach. The structure covers the principal layers of the network architecture. We give solutions for network optimization on these layers and emphasize the optimization methods at the network layer and transport layer. Simulation and experiment results verify the mechanisms deployed in these key layers. Cross-layer coordination and adaptation issues are discussed, and three schemes are proposed. For different application scenarios, we propose a lightweight structure as a minimum configuration and an integrated structure for Internet-based applications. The modeling process is very general and may serve as the basis for a wide range of network control systems in their performance-improving activities. Keywords: QoS · network control · teleoperation · DiffServ
22.1 Introduction The high demand for network-based control systems has driven the development of real-time control technologies. However, the network part of the control model has X.U. Lei and L.I. Guo-dong Computer Science and Technology Department, North China Electric Power University, Beijing, 102206 Peoples Republic of China
long been treated as an unchangeable or uncontrolled black box; this leads to an even tougher adaptation task for control systems to face [1]. The widely adopted packet switch technologies have contributed much to the openness, scalability, and robustness of computer networks. However, the quality of the best effort service has proven to be inadequate for the provision of real-time control applications over the networks. Transmission latency and its random nature remain dilemmas for network-based control systems [2]. The network performance issue has received much attention in recent years. A variety of methods have been proposed to address this issue [3–7]. However, maximizing the performance of an IP network is more complex. It requires mechanisms operating at multiple layers of the protocol stack in accordance with the application scenario. The objective of our work is to present a network-control-system-oriented QoS enhancement framework with proposed optimization methods for different layers. In order to generalize the problem, we consider a modern teleoperation system that generates various sorts of data traffic, including control data, field sensor data, and field video and/or audio monitor data. This data traffic is transmitted through the network bidirectionally. The background traffic is also not negligible. Depending on different network scenarios, traffic carried by the networks can be file transfer data, Web surfing data, database access traffic, and much more. The background traffic has bursty and unsteady characteristics, which directly result in network delay and jitter (variation of time delay). The remainder of this article is organized as follows. We first analyze the network performance parameters, especially the network delay. Based on the OSI reference model, we then outline two QoS enhancement groups and present the QoS enhancement framework. Next, we give solutions for each layer with emphasis on the network layer and transport layer, together with simulation and experiment results, respectively. For cross-layer coordination and adaptation design, we suggest a direct mapping scheme for the network layer and data link layer; we also discuss measurement-based cross-layer coordination and session layer signal adaptation schemes for higher layer QoS decision making. For applications in different scenarios, we then discuss two extreme situations and propose a lightweight structure for a minimum local area network (LAN) configuration and a maximum integrated structure for Internet-based applications.
22.2 Network Performance Parameters Analysis

Most of the data traffic generated by teleoperation systems consists of real-time interactive data flows. These belong to QoS class 0 as defined by ITU-T Y.1541. The upper bound of transmission delay for QoS class 0 is 100 ms; the upper bound of jitter for class 0 is 50 ms [8]. This is the general situation; for some industrial applications such as
electric power station automation systems, the most crucial protection messages require a transmission time of only 3 ms [9].
There are four important network performance parameters: available bandwidth or throughput, latency or delay, jitter, and packet loss rate. These parameters are widely adopted to define QoS as the quality of packet transmission [10]. It must be noted that these parameters are usually related to each other; when the network load suddenly becomes heavy, all four parameters are affected. Since precise timing control is of the utmost priority in network-based control systems, the network delay deserves the closest analysis. In typical computer networks, the network delay consists of four parts:
• Packet delivering delay. The time needed to send the data block onto the communication channel. It is proportional to the length of the data block and inversely proportional to the bandwidth of the communication channel (network port or interface card).
• Propagation delay. The time needed for the electromagnetic signal to propagate over a given distance of the physical medium. It varies with the length of the channel and inversely with the propagation speed, which is a constant for a given physical medium.
• Queuing delay. In a packet-switched network, depending on the traffic load, packets may queue and wait for service at each node along their path. This constitutes an important part of the latency; enhancement at the network layer and data link layer deals with this problem.
• Processing delay. In order to forward the data packets, every node along the path needs to process protocol headers, known as protocol overhead; it depends on the mechanism and complexity of the protocol. For end nodes (source and destination), the processing involves more protocol layers than in the intermediate nodes.
The multilayered QoS enhancement methods presented here deal with the packet delivering delay, the queuing delay, and the processing delay, but not the propagation delay.
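As a minimal sketch of how these components add up (the link parameters below are hypothetical values chosen for illustration, not measurements from this chapter), the one-way delay over a path can be approximated as:

```python
# Hypothetical illustration: one-way delay as the sum of the four components
# discussed above, for a path of identical links. All parameter values are
# assumptions for the sake of the example.

def one_way_delay(packet_bits, bandwidth_bps, distance_m, prop_speed_mps,
                  queuing_s, processing_s, hops):
    delivering = packet_bits / bandwidth_bps          # packet delivering delay
    propagation = distance_m / prop_speed_mps         # propagation delay
    return hops * (delivering + propagation + queuing_s + processing_s)

# 1000-byte packet over 3 hops of 100 Mbit/s links, 2e8 m/s propagation speed
d = one_way_delay(8000, 100e6, 2000, 2e8, queuing_s=0.5e-3,
                  processing_s=0.1e-3, hops=3)
print(f"estimated one-way delay: {d * 1e3:.3f} ms")
```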
22.3 Architectural Framework

Although the OSI protocol stacks are rarely implemented as such, the model itself is still quite general and very important, and it provides a precise feature description for each layer. From the viewpoint of network optimization, we divide the network architecture into two parts: the resource network and the communication network. The resource network includes the upper four layers of the OSI/RM; the protocol suites in this part are implemented in the end hosts. The communication network covers the lower three layers. The QoS methods related to the communication network
[Fig. 22.1 Multilayered QoS enhancement framework. OSI/RM layers mapped to QoS enhancements: Application, QoS identification and measurement; Presentation, QoS-aware encoding; Session, QoS signaling / RTCP / RTP; Transport, UDP / RUDP; Network, DiffServ / Integrated Services; Data Link, IEEE 802.1q / MPLS. The upper four layers form the resource network (end-to-end QoS); the lower three layers, down to the physical layer, form the communication network (intermediate-node-dependent QoS).]
depend heavily on the implementation of the intermediate nodes of the packet-switched network.
The QoS enhancement framework consists of two parts: resource network enhancement and communication network enhancement. The difference between them is that resource network enhancement is an end-to-end approach that requires no participation of intermediate nodes and is independent of the network devices, whereas in communication network enhancement the methods must be compatible with the techniques implemented in the intermediate nodes. The QoS enhancement framework is shown in Fig. 22.1. We define the layered QoS enhancement as follows:
• Application layer. Everything at this layer is application specific; at this layer the quality of service for teleoperation should be identified and measured.
• Presentation layer. This layer formats the data to be sent across the network. Since different coding and compression methods influence the processing delay and the packet delivering delay, a QoS-aware dynamic self-adaptation encoding mechanism should be considered for telepresence messages.
• Session layer. This layer sets up, coordinates, and terminates information exchanges between the applications at each end. QoS-aware session control signals can be used.
• Transport layer. This layer provides reliable and efficient delivery to reduce processing delay and packet loss.
• Network layer and data link layer. These layers mainly reduce the queuing delay, which is arguably the crucial part of the overall network delay and also the main cause of jitter.
22.4 Communication Network QoS Enhancement

22.4.1 Network Layer QoS Optimization

IP networks are heading toward new resource-sharing paradigms capable of providing differentiated QoS to end users. The network layer QoS mechanism relies heavily on the strategies of the network backbone carriers. The enforcement functions of network routers include traffic admission control, traffic shaping, and queue scheduling. The mainstream QoS models at the network layer are Integrated Services and Differentiated Services (DiffServ); because of the scalability problem of Integrated Services, the most widely adopted QoS model is DiffServ [11]. In the DiffServ model, services are classified as expedited forwarding (EF), assured forwarding (AF), and best effort (BE). The expedited forwarding class is the highest QoS level: by putting all the EF packets in a dedicated queue, EF packets are served at a rate higher than the packet arrival rate, thus providing a low loss rate, low latency, low jitter, and assured bandwidth [12]. In order to emphasize the real-time nature of teleoperation data and its QoS priority, we propose to identify the real-time teleoperation data packets as EF class by marking their Differentiated Services Code Point (DSCP) fields. Since every router along the forwarding path gives the highest priority to EF traffic, the queuing delay and packet loss for the teleoperation system will be greatly reduced even when background traffic exists; when the network load grows, the BE packets will be delayed or dropped instead.
We set up a simulation environment to assess the DiffServ approach. Three traffic types were generated: real-time traffic, http traffic, and ftp traffic; the http traffic consists of short, random connections. The topology was designed such that, when all the traffic was added, a bottleneck existed on one of the links. Figure 22.2 illustrates the network delay for real-time traffic without DiffServ. The observed jitter reveals that the real-time traffic is influenced by the other bursty traffic, since all flows fairly share the bandwidth and get the same BE service.
Fig. 22.2 Delay of real-time traffic without DiffServ
Fig. 22.3 Delay of real-time traffic with DiffServ
Figure 22.3 illustrates the network delay for real-time traffic with DiffServ. The result reveals that by putting the real-time traffic into the highest priority class (EF of DiffServ), its time delay remains low and steady.
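As a hedged sketch of how an end host could request EF treatment for its outgoing control packets (whether routers honor the marking depends on the network operator, which the chapter does not specify; the destination address and port below are placeholders), the DSCP field can be set through the IP TOS byte of a UDP socket:

```python
import socket

# EF PHB corresponds to DSCP 46; the DSCP occupies the upper 6 bits of the
# former TOS byte, so the byte value is 46 << 2 = 0xB8.
EF_TOS = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Hypothetical destination for teleoperation control data.
sock.sendto(b"control-command", ("192.0.2.10", 5000))
```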
22.4.2 Data Link Layer QoS Optimization

Although there is no specific data link layer definition in the TCP/IP suite, Ethernet (IEEE 802.3) has become the dominant standard in LAN situations, and in many industrial applications Ethernet takes the place of the field bus. In LAN-based control applications, a priority-level-based mechanism known as IEEE 802.1p can be used. IEEE 802.1p is an extension of IEEE 802.1q (Ethernet VLAN): a 3-bit priority field is added before the VLAN ID field, as shown in Fig. 22.4. In order to get the best service, frames generated by the teleoperation application can be assigned the highest priority level, 7 [13].
[Fig. 22.4 Extension of 802.1p to 802.3 and 802.1q: frame layout DA | SA | TPID | TCI (3-bit Priority, 0, VLAN ID) | L | DATA]
For control applications that span a wide area network (WAN), label switching techniques can be used to expedite the transmission. The most promising technology is multiprotocol label switching (MPLS). MPLS can also support DiffServ by mapping the DSCP into the MPLS shim header [14], which will be discussed later in detail.
22.5 Resource Network QoS Enhancement

22.5.1 Transport Layer QoS Optimization

At the transport layer, most real-time applications use UDP instead of TCP. However, teleoperation systems require both timely and reliable transport, and UDP is not a reliable protocol. As an enhanced transport layer protocol, RUDP (Reliable UDP) combines the advantages of TCP and UDP in efficiency and reliability [15, 16]. In order to compare UDP with RUDP and TCP with RUDP, we used these protocols to transport real-time video data in a robot teleoperation monitoring system.
For the comparison between RUDP and UDP, in each test 100 robot teleoperation monitoring video frames were sent; the test was repeated 100 times. Figure 22.5 illustrates the result. The vertical axis represents the number of video frames that were correctly received. The value for RUDP is around 64 to 80 frames, and the value for UDP is 44 to 58 frames. The average values are 71.27 frames and 51.46 frames for RUDP and UDP, respectively.
Fig. 22.5 Comparisons between RUDP and UDP
Fig. 22.6 Comparisons between RUDP and TCP
As for RUDP and TCP, the duration of each telerobotic video transmission test was 1 second, and the test was repeated 100 times. Figure 22.6 illustrates the result. The vertical axis represents the number of video frames that were correctly received. The value for RUDP is around 8 to 13 frames, and for TCP, 2 to 6 frames. The average values are 10.35 frames and 3.9 frames for RUDP and TCP, respectively. It must be noted that incorrect frame decoding is caused by time delay and packet loss. The experiments were carried out on a campus network. The results indicate that, for the video monitoring transmission of robot teleoperation systems, RUDP is more reliable and efficient than UDP, and far more efficient, with lower overhead, than TCP.
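The chapter does not give the RUDP implementation details; as a hedged illustration of the general idea only (sequence numbers and retransmission on top of UDP, loosely in the spirit of the RDP references [15, 16]), a minimal stop-and-wait sender could look like this:

```python
import socket

def reliable_udp_send(sock, dest, payload, seq, timeout=0.05, retries=5):
    """Toy stop-and-wait reliability layer on top of UDP.

    Illustrative sketch only; a real RUDP implementation uses windowing,
    cumulative ACKs, and connection management.
    """
    packet = seq.to_bytes(4, "big") + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, dest)
        try:
            ack, _ = sock.recvfrom(16)
            if int.from_bytes(ack[:4], "big") == seq:
                return True          # acknowledged
        except socket.timeout:
            continue                 # retransmit on timeout
    return False                     # give up after `retries` attempts

# Usage (hypothetical receiver address):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# ok = reliable_udp_send(sock, ("192.0.2.20", 6000), b"video-frame-0", seq=0)
```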
22.5.2 Presentation Layer QoS Enhancing

Most modern teleoperation systems use interactive multimedia information to create a telepresence environment. As delay-sensitive messages, real-time video and audio traffic require low delay, low jitter, low packet loss, and a given amount of bandwidth. At the presentation layer, self-adaptation involves changing the video/audio encoding scheme, the video frame rate, and the video frame size at the transmitter side. These changes alter the length of the data block [17], reduce the delivery delay, simplify the compression and decompression algorithms, and reduce the processing time. The development of real-time video/audio encoding schemes is mature, and the details are outside the scope of this article. We emphasize efficient QoS-aware dynamic self-adaptation approaches that cooperate with session layer or application layer strategies.
22.5.3 Session Layer QoS Supervision

This layer sets up, coordinates, and terminates exchanges between the applications at each end. QoS-aware session control signals can be used to enhance the QoS adaptation capability. The most widely adopted protocols for multimedia transmission are the real-time transport protocol (RTP) and its control protocol (RTCP). The main functions of RTCP are quality-of-service supervision and feedback, and audio/video stream synchronization. During a session, RTCP messages periodically report statistics about byte counts, packet loss, and the variation of packet arrival times. With RTCP, the sender knows how well the other side is receiving the audio/video data via receiver report (RR) messages [18]. The sender application can then adapt to the measured network performance. We propose that this mechanism be used in coordination with the presentation layer adaptation methods.
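As a hedged sketch of the coordination we have in mind (the thresholds and bitrate steps below are arbitrary assumptions, not values from the chapter), a sender could lower or raise its video bitrate based on the fraction-lost and jitter fields reported in RTCP receiver reports:

```python
def adapt_bitrate(current_kbps, fraction_lost, jitter_ms,
                  min_kbps=128, max_kbps=2048):
    """Adjust video bitrate from RTCP RR feedback (illustrative policy only).

    fraction_lost: packet loss fraction (0.0-1.0) from the last RR.
    jitter_ms: interarrival jitter reported in the RR, in milliseconds.
    """
    if fraction_lost > 0.05 or jitter_ms > 50:
        new_kbps = current_kbps * 0.75      # back off under congestion
    elif fraction_lost < 0.01 and jitter_ms < 20:
        new_kbps = current_kbps * 1.10      # probe for more bandwidth
    else:
        new_kbps = current_kbps             # hold steady
    return max(min_kbps, min(max_kbps, int(new_kbps)))

print(adapt_bitrate(1000, fraction_lost=0.08, jitter_ms=60))  # -> 750
```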
22.5.4 Application Layer QoS

As mentioned above, everything at this layer is application specific. At this layer, the quality-of-service requirements and the related network performance parameter sets for teleoperation should be identified. Since there are different applications, such as telerobotics, substation automation systems, and industrial control systems, the requirement analysis and the QoS identification must be application specific.
22.6 Cross-layer Coordination and Adaptation

In this section, possible methods of cross-layer coordination are discussed. Direct mapping can be used among adjacent layers in the communication network, for instance between the network layer and the data link layer. As mentioned earlier, in WAN situations MPLS can seamlessly support higher layer QoS by mapping the DSCP into the MPLS shim header. MPLS and DiffServ interoperate coherently in that they both identify and mark packets at the edges. In MPLS, data traffic is assigned to forwarding equivalence classes (FECs), and FECs are marked with labels; in DiffServ, traffic is divided into a set of service classes identified by DSCP values. Both models are scalable since they put the classification and marking operations at the edge nodes, thus significantly simplifying the implementation of the core nodes. Originally, MPLS labels and DSCP marks were used for different purposes: the former identifies the packet-forwarding path (LSP), and the latter is used for packet classification. The MPLS model uses the MPLS shim to encapsulate
the IP packets, with the labels carried in the MPLS shim header. The problem is that the MPLS nodes (LSRs) in the MPLS domain cannot see the inner DSCP value, since it is carried in the encapsulated IP header. In order for the core nodes to provide DiffServ, the DSCP has to be mapped into the outer MPLS shim header. There are two mapping methods for supporting DiffServ in MPLS networks: one is E-LSP and the other is L-LSP [14]. We compare these two methods and give our suggestion.
L-LSP: The DSCP is mapped into the outer MPLS label field at the label edge router (LER). The label and the EXP field jointly identify the service class of the packet, with EXP representing the dropping priority. Label switch routers (LSRs) schedule packets according to the label and make dropping decisions according to the EXP value.
E-LSP: The DSCP is mapped into the 3-bit EXP field within the MPLS shim by the edge router (LER). In this way, the core nodes (LSRs) can schedule the packet according to the service class carried in the EXP field.
The DSCP has 6 bits, which can identify 64 service classes, whereas the 3-bit EXP field can only provide eight service classes. Therefore, E-LSP is suitable for networks that provide no more than eight service classes. For a network-based teleoperation application, eight service classes are adequate to differentiate the data traffic; therefore, we suggest that the E-LSP method be used. The advantages of provisioning E-LSP are
• The EXP-to-DSCP mapping can be predefined; this makes its implementation easier than L-LSP.
• There is a direct mapping between EXP and IP precedence.
• 802.1p can be seamlessly integrated with the E-LSP approach.
The direct mapping scheme of DiffServ classes to the MPLS shim is shown in Fig. 22.7, and a small mapping sketch is given after the figure.
[Fig. 22.7 DiffServ and MPLS mapping in the E-LSP approach: DiffServ classes (EF, AF1, AF2, BE) map to the EXP field of the outer-layer label (which identifies LSPs); the frame header is followed by the outer label (Label, EXP, TTL), an inner-layer label (identifying VPNs), the IP header, and the data.]
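The concrete EXP code points are operator-defined; the chapter only fixes the class names, so the following table is a hypothetical illustration of a predefined DSCP-to-EXP mapping for E-LSP:

```python
# Hypothetical, predefined DSCP -> EXP mapping for E-LSP provisioning.
# The DSCP values are standard code points; the EXP assignments (0-7)
# are assumptions for illustration, not taken from the chapter.
DSCP_TO_EXP = {
    46: 5,   # EF  (real-time teleoperation control traffic)
    10: 4,   # AF11
    18: 3,   # AF21
    0:  0,   # BE  (background traffic)
}

def exp_for_dscp(dscp):
    """Return the EXP value to write into the outer MPLS shim header."""
    return DSCP_TO_EXP.get(dscp, 0)   # unknown classes fall back to best effort

assert exp_for_dscp(46) == 5
```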
[Fig. 22.8 Measurement-based cross-layer coordination: network performance measurements taken at the network layer feed adaptation modules in the application server, which act on the application, presentation, session, and transport layers above the data link and physical layers.]
The independence of the higher layers in the resource network makes direct mapping difficult, since the higher layers are mostly application related. In this design, a measurement module measures the QoS parameters at the network layer in real time, and the result is used as input by the higher layers to make proper adaptations (Fig. 22.8). Based on session layer QoS signals such as RTCP, the network performance conditions can be fed back to the higher layers, and adaptive encoding schemes can then be provisioned.
22.7 Application Scenarios

Network control systems can be WAN based or LAN based, and the QoS provision should change in accordance with the situation. For instance, in substation automation systems and some industrial control fields, applications are deployed over dedicated LANs; Ethernet switches are used instead of IP routers. Network layer mechanisms such as DiffServ are unnecessary in this situation, and MPLS WAN techniques are also not available. In Internet-based telerobotics, on the other hand, the network environment is far more complex, with unexpected traffic loads and intermediate node behavior, and multilayered enhancement is then necessary. In this section, we discuss two extreme scenarios: one is the minimum network configuration, for which we propose a lightweight structure; the other is an Internet configuration, the most complex network environment, which needs a maximum integrated protocol structure. Applications in other scenarios can be adjusted between these two extremes.
[Fig. 22.9 Maximum integrated protocol stack structure: QoS identification and measurement / QoS-aware encoding / QoS signaling (RTCP/RTP) / UDP or RUDP / DiffServ / MPLS / physical layer.]
For the maximum configuration, as in an Internet-based teleoperation application, the QoS schemes of several layers should be integrated to provide powerful QoS enhancement, as shown in Fig. 22.9.
For the minimum network configuration, consider the situation in industrial control systems where Ethernet connections are used to replace the field bus. In this configuration, messages are exchanged locally with no internetwork addressing issues, and the traffic carried by the dedicated network is light and steady. Our suggestion is to pack the application data (mostly control data and sensor data) directly into the Ethernet frame (Fig. 22.10); a sketch of this packing follows the figure. The reasons are
1. Field-bus-like applications require a speedy response; for instance, some state and control messages allow only a few milliseconds of transmission time in a substation automation system. Since the functions of the network layer and transport layer are unnecessary here, there is no need to incur their additional overhead.
2. Hardware addressing is adequate to locate the intelligent electronic devices (IEDs).
[Fig. 22.10 Lightweight structure for the minimum configuration: control and sensor data are carried directly over Ethernet / IEEE 802.1q on the physical layer.]
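As a hedged sketch of this minimum configuration (Linux raw sockets assumed; the EtherType, interface name, and MAC address are placeholders, not values from the chapter), control data can be written directly into an Ethernet frame:

```python
import socket

# 0x88B5 is an EtherType reserved for local experimental use; a real
# deployment would use the protocol's assigned EtherType.
ETHERTYPE = 0x88B5
IFACE = "eth0"                                   # hypothetical interface name
DST_MAC = bytes.fromhex("01005e000001")          # hypothetical destination MAC

def send_control_frame(payload: bytes) -> None:
    """Pack control/sensor data directly into an Ethernet frame (Linux only)."""
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    sock.bind((IFACE, 0))
    src_mac = sock.getsockname()[4]
    frame = DST_MAC + src_mac + ETHERTYPE.to_bytes(2, "big") + payload
    sock.send(frame)
    sock.close()

# send_control_frame(b"\x01\x02 setpoint=42")   # requires root privileges
```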
22.8 Conclusion

Network-based teleoperation systems require stronger QoS guarantees than ordinary Web, email, and file transfer applications. To address this problem, we presented a multilayered QoS enhancement framework. We classify the QoS enhancements into an end-to-end approach and an intermediate-node-dependent approach. In the end-to-end approach, users have the freedom to design and deploy the mechanisms themselves. Working at the network layer, DiffServ can reduce the queuing delay and packet loss for real-time control traffic; the simulation results show that it can greatly reduce jitter. Reliable UDP can provide efficient and reliable data transport; the experiments indicate that, in terms of telerobotic video traffic transmission performance, RUDP is better than both UDP and TCP. We also define QoS functions for the other layers, providing a complete picture of the QoS enhancement architecture. Our modeling process is very general and may serve as the basis for a wide range of network control system research aimed at improving system performance.
Cross-layer coordination and adaptation designs are discussed. Direct mapping between DiffServ and MPLS provides a seamless cross-layer coordination scheme between the network layer and the data link layer. Through measurement-based cross-layer coordination and session layer signaling adaptation, higher layer decisions and adaptive encoding can be made in accordance with the network layer conditions.
The QoS enhancement architecture presented here is flexible because of its layered nature and can be tailored to different uses. Under this framework, users can make further modifications or extensions to the enhancement architecture in a specific layer according to their control tasks. Our future work will include dynamic QoS optimization and adaptation for some key layers.
Acknowledgment We would like to thank Zhang Cheng and Zhang Linlin, who assisted with the simulation and measurement work, for their support.
References 1. Chen Jun-jie, Xue Xiao-hong, and Huang Wei-yi (2004) Research and developing strategy of overcoming time-delay infection for telepresence telerobot system. Chinese Journal of Sensors and Actuators, (2): June, 232–237. 2. Jing Xing-Jian, Wang Yue-chao, and Tan Da-Long (2004) Control of time-delayed tele-robotic system: Review and analysis. Acta Automatica Sinica, 30(2): March, 214–221. 3. Imad Elhajj, Ning Xi, Wai Keung Fung, Yun hui Liu, Y. Hasegawa, and T. Fukuda (2003) Supermedia-enhanced Internet-based telerobotics. In Proceedings of the IEEE, 91(3): 396–421, March.
4. Imad Elhajj, Amit Goradia, Ning Xi, Wang Tai Lo, Yun Hui Liu, and Toshio Fukuda (2003) Internet-based tele-manufacturing. Seventh International IEEE Conference on Automation Technology, e-automation, Chia-yi, Taiwan, October. 5. Imad Elhajj, Ning Xi, BooHeon Song, Meng-Meng Yu, Wang Tai Lo, and Yunhui Liu (2003) Transmission and rendering of supermedia via the Internet. The Proceedings of IEEE International Conference on Electro/Information Technology, Indianapolis, June. 6. Xiuhui Fu, Jianning Hua, Wei Zheng, Ning Xi, Dalong Tan, Yuchao Wang, and Qiang Huang (2004) Interactive telecooperation via Internet. Robio2004, Shenyang, China. 7. Wei Zheng, Xiuhui Fu, Jianning Hua, Dalong Tan, Ning Xi, and Yuechao Wang (2004) Real-time supermedia transmission in Internet. Proceedings of the 2004 International Conference on Intelligent Mechatronics and Automation, UESTC, Chengdu, China, August 26–31. 8. ITU-T Recommendation Y.1541 (2002) Network performance objectives for IP-based services, May. 9. IEC 61850-5 (2003) Communication networks and systems in substations, Part 5: Communication requirements for functions and device models. 10. S. Shenker and J. Wroclawski (1997) IETF RFC 2216, Network element service specification template, September. 11. Xipeng Xiao and Lionel M. Ni (1999) Internet QoS: A big picture. IEEE Network, March, 8–18. 12. S. Blake (1998) Architecture for differentiated services. IETF RFC 2475, December. 13. IEEE Computer Society (1998) LAN Layer 2 QoS/CoS protocol for traffic prioritization. IEEE 802.1p. 14. Francois Le Faucheur and W. Lai (2003) Requirements for support of Diff-Serv-aware MPLS traffic engineering, IETF RFC 3567, July. 15. David Velten, Robert Hinden, and Jack Sax (1984) Reliable data protocol. IETF RFC 908, July. 16. C. Partridge and R. Hinden (1990) Version 2 of the reliable data protocol (RDP). IETF RFC 1151, April. 17. Meng Qing-xin, Shi Sheng-li, Ruan Shuang-chen, and Guo Xiao-qin (2005) Optimization of intra-frame coding algorithm for H.264 in Internet-based teleoperation system. Journal of Harbin Engineering University, 26(3): June. 18. H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson (2003) RTP: A transport protocol for real-time applications. RFC 3550, July.
Chapter 23
Improvement of State Estimation for Systems with Chaotic Noise Pitikhate Sooraksa and Prakob Jandaeng
Abstract To estimate the state of a system, one needs the covariance matrices as inputs, and the accuracy of each new prediction depends recursively on the previous ones. To approach the optimal solution, researchers try to obtain the best possible approximation of the covariance inputs using the Kalman filtering technique, and many variations of the technique have been proposed over the years. In this chapter, we present a new improvement of state estimation for systems with various chaotic noises. Introducing an updated scaling factor for the covariance matrices is a simple yet highly effective modification for estimating the state of the system in the presence of chaotic noises. A performance comparison among the original Kalman filter, an adaptive version, and our enhanced one is carried out. Computer simulation shows a remarkable improvement of the proposed method for estimating the state of systems with chaotic noises. Keywords: Kalman filter · estimator · noise · chaos
23.1 Introduction

To estimate the state of a system in the presence of Gaussian noise, the Kalman filter (KF) is the best-known method [1–8]. In this technique, the covariance matrices associated with the system noise Q and the measurement noise R are required as inputs to the estimation procedure. For highly dynamic systems such as chaotic ones, the change is quite unpredictable if we do not know the type and initial condition of the seed chaos. This eventually leads to difficulty in obtaining accurate covariance matrices. In this case, an adaptive KF is required. The objective is to improve such adaptive filters for better performance in the presence of chaotic noise.

Pitikhate Sooraksa and Prakob Jandaeng
Department of Information Engineering, Faculty of Engineering, King Mongkut's Institute of Technology Ladkrabang, Chalongkrung Rd., Ladkrabang, Bangkok, Thailand 10520
The chapter is organized as follows. Section 23.2 provides the improvement concept and derivation of the proposed method. For the standard treatment of classical KF, the reader is referred to [1–8]. Section 23.3 presents computer simulation results, which show that the proposed method outperforms the normal KF and the old adaptive version [7]. Section 23.4 states concluding remarks.
23.2 Improvement of Adaptive Kalman Filtering

The system dynamic and measurement models in state-space form for the KF derivation are well known and are described as

$$X_{(k+1)} = \Phi_{(k)} X_{(k)} + \Gamma_{(k)} W_{(k)}, \qquad Z_{(k+1)} = H_{(k+1)} X_{(k+1)} + V_{(k+1)} \tag{23.1}$$

where

$$E[W_{(k)}] = 0, \quad \mathrm{cov}[W_{(k)}] = Q_{(k)}, \quad E[V_{(k)}] = 0, \quad \mathrm{cov}[V_{(k)}] = R_{(k)}, \quad \mathrm{cov}[W_{(k)}, V_{(k)}] = 0 \tag{23.2}$$

Note that Q(k) and R(k) are the input covariance matrices associated with the white system noise W(k) and the measurement noise V(k), respectively. The meaning of the above equations is easily discerned: let X(k) and Z(k) be the state vector and the measurement vector, respectively, of a stochastic system at discrete time k; the other notations are defined in a similar manner, where Φ is the transition matrix, Γ is the input noise matrix, and H is the measurement sensitivity matrix.

Using equations (23.1) and (23.2) and following the standard procedure [1–8], one obtains the state estimate extrapolation $\hat{X}$ and the error covariance extrapolation P as follows:

$$\hat{X}_{(k+1/k)} = \Phi_{(k)} \hat{X}_{(k/k)}, \qquad P_{(k+1/k)} = \Phi_{(k)} P_{(k/k)} \Phi_{(k)}^{T} + \Gamma_{(k)} Q_{(k)} \Gamma_{(k)}^{T} \tag{23.3}$$

The updated successive values of the parameters in (23.3) and the updated Kalman gain K can then be obtained as shown in (23.4):

$$K_{(k+1)} = P_{(k+1/k)} H_{(k+1)}^{T} \bigl[ H_{(k+1)} P_{(k+1/k)} H_{(k+1)}^{T} + R_{(k+1)} \bigr]^{-1}$$
$$\hat{X}_{(k+1/k+1)} = \hat{X}_{(k+1/k)} + K_{(k+1)} \bigl[ Z_{(k+1)} - H_{(k+1)} \hat{X}_{(k+1/k)} \bigr]$$
$$P_{(k+1/k+1)} = \bigl[ I - K_{(k+1)} H_{(k+1)} \bigr] P_{(k+1/k)} \tag{23.4}$$
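As a minimal numerical sketch of the recursion in (23.3) and (23.4) (using NumPy; the matrices are generic placeholders, not the test model of Section 23.3):

```python
import numpy as np

def kf_step(x, P, z, Phi, Gamma, H, Q, R):
    """One Kalman filter cycle: extrapolation (23.3) followed by update (23.4)."""
    # State estimate and error covariance extrapolation, Eq. (23.3)
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Gamma @ Q @ Gamma.T
    # Kalman gain and measurement update, Eq. (23.4)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```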
Note that the covariance matrices Q and R in the above successive calculation are constant matrices. To update both covariances recursively, one may adopt an adaptive algorithm from [7]. From this point on, for the sake of notation compatibility and comparison, let us adopt the notation defined in [7]: $\hat{V}_h$ stands for the estimated measurement noise, $\hat{W}_h$ for the estimated system noise, and $\hat{C}$ is the estimated error covariance. Repeating the same procedure as in the normal KF (for a detailed derivation the reader is referred to [7]), the derivation of the adaptive algorithm yields the following results:

$$\hat{V}_{h(k+1)} = Z_{(k+1)} - H_{(k+1)} \hat{X}_{(k+1/k)} \tag{23.5}$$

and

$$\bar{V}_{h(k+1)} = \frac{1}{N}\sum_{i=1}^{N} \hat{V}_{h(k+1)}, \qquad
C_{\hat{V}h(k+1)} = \frac{1}{N}\sum_{i=1}^{N} \bigl(\hat{V}_{h(k+1)} - \bar{V}_{h(k+1)}\bigr)\bigl(\hat{V}_{h(k+1)} - \bar{V}_{h(k+1)}\bigr)^{T},$$
$$E\bigl[C_{\hat{V}h(k+1)}\bigr] = \frac{1}{N}\sum_{i=1}^{N} H_{(k+1)} P_{(k+1/k)} H_{(k+1)}^{T} + R_{(k+1)} \tag{23.6}$$

The updated value of the successive covariance matrix $R_{(k+1)}$ can be calculated by

$$R_{(k+1)} = \frac{1}{N}\sum_{i=1}^{N} \bigl(\hat{V}_{h(k+1)} - \bar{V}_{h(k+1)}\bigr)\bigl(\hat{V}_{h(k+1)} - \bar{V}_{h(k+1)}\bigr)^{T} - \frac{1}{N}\sum_{i=1}^{N} H_{(k+1)} P_{(k+1/k)} H_{(k+1)}^{T} \tag{23.7}$$

Similarly, the successive estimated system noise and the error covariance matrix can be described as

$$\hat{W}_{h(k+1)} = \hat{X}_{(k+1/k)} - \Phi_{(k)} \hat{X}_{(k/k)} \tag{23.8}$$

and

$$\bar{W}_{h(k+1)} = \frac{1}{N}\sum_{i=1}^{N} \hat{W}_{h(k+1)}, \qquad
C_{\hat{W}h(k+1)} = \frac{1}{N}\sum_{i=1}^{N} \bigl(\hat{W}_{h(k+1)} - \bar{W}_{h(k+1)}\bigr)\bigl(\hat{W}_{h(k+1)} - \bar{W}_{h(k+1)}\bigr)^{T},$$
$$E\bigl[C_{\hat{W}h(k+1)}\bigr] = \frac{1}{N}\sum_{i=1}^{N} \Phi_{(k)} P_{(k/k)} \Phi_{(k)}^{T} + Q_{(k)} - P_{(k+1/k)} \tag{23.9}$$
Finally, the updated value of the covariance matrix $Q_{(k)}$ can be calculated by

$$Q_{(k)} = \frac{1}{N}\sum_{i=1}^{N} \bigl(\hat{W}_{h(k+1)} - \bar{W}_{h(k+1)}\bigr)\bigl(\hat{W}_{h(k+1)} - \bar{W}_{h(k+1)}\bigr)^{T} - \frac{1}{N}\sum_{i=1}^{N} \bigl[ P_{(k+1/k)} - \Phi_{(k)} P_{(k/k)} \Phi_{(k)}^{T} \bigr] \tag{23.10}$$

In our modified version, the key idea is to introduce updating covariance scaling factors $\sigma^{2}$ that multiply the covariance matrices P, Q, and R. Unlike in [7], instead of iterating and updating the covariance matrices by using (23.9) and (23.10), the proposed method simply multiplies the constant covariance matrices by the scaling factors. The major equations for calculating the scaling factors are given below:

$$\hat{V}_{h(k+1)} = Z_{(k+1)} - H_{(k+1)} \hat{X}_{(k+1/k)}, \qquad
\mathrm{cov}[V_{(k+1)}] = \sigma_{R0}^{2} R_{(k+1)}, \qquad
\mathrm{cov}[X_{(k+1)}] = \sigma_{X0}^{2} P_{(k+1/k)},$$
$$\mathrm{cov}[\hat{V}_{h(k+1)}] = \sigma_{X0}^{2} H_{(k+1)} P_{(k+1/k)} H_{(k+1)}^{T} + \sigma_{R0}^{2} R_{(k+1)} \tag{23.11}$$

and $E\bigl[E[X_k \mid Z^{k}]\bigr] = E[X_k]$, where

$$E\bigl[\hat{V}_{h(k+1)}^{T} R_{(k+1)}^{-1} \hat{V}_{h(k+1)}\bigr]
= \mathrm{tr}\bigl\{ R_{(k+1)}^{-1} \bigl( \sigma_{X0}^{2} H_{(k+1)} P_{(k+1/k)} H_{(k+1)}^{T} + \sigma_{R0}^{2} R_{(k+1)} \bigr) \bigr\}
= \sigma_{X0}^{2}\, \mathrm{tr}\bigl\{ R_{(k+1)}^{-1} H_{(k+1)} P_{(k+1/k)} H_{(k+1)}^{T} \bigr\} + \sigma_{R0}^{2} \tag{23.12}$$

$$E[X] = \frac{1}{N}\sum_{i=1}^{N} X_i, \qquad
\sigma_{X0}^{2} = \frac{E\bigl[\hat{V}_{h(k+1)}^{T} R_{(k+1)}^{-1} \hat{V}_{h(k+1)}\bigr] - \sigma_{R0}^{2}}
{\dfrac{1}{k}\sum_{i=1}^{k} \mathrm{tr}\bigl\{ R_{(i+1)}^{-1} H_{(i+1)} P_{(i+1/i)} H_{(i+1)}^{T} \bigr\}} \tag{23.13}$$

and

$$\sigma_{Q0}^{2} = \frac{E\bigl[\hat{V}_{h(k+1)}^{T} R_{(k+1)}^{-1} \hat{V}_{h(k+1)}\bigr] - \sigma_{R0}^{2}
- \dfrac{1}{k}\sum_{i=1}^{k} \mathrm{tr}\bigl\{ R_{(i+1)}^{-1} H_{(i+1)} \Phi_{(i)} P_{(i/i)} \Phi_{(i)}^{T} H_{(i+1)}^{T} \bigr\}}
{\dfrac{1}{k}\sum_{i=1}^{k} \mathrm{tr}\bigl\{ R_{(i+1)}^{-1} H_{(i+1)} \Gamma_{(i)} Q_{(i)} \Gamma_{(i)}^{T} H_{(i+1)}^{T} \bigr\}} \tag{23.14}$$
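As a hedged sketch of how such a scaling factor could be applied in practice (a simplified reading of Eq. (23.13): the averaging windows of [7] are omitted and $\sigma_{R0}^{2}$ is taken as 1 here, which are assumptions made only for illustration):

```python
import numpy as np

def scaling_factor(residuals, H, P_pred_list, R):
    """Estimate sigma_X0^2 from innovation residuals (simplified reading of (23.13)).

    residuals: list of innovation vectors V_h; P_pred_list: matching P_(k+1/k).
    """
    Rinv = np.linalg.inv(R)
    num = np.mean([v.T @ Rinv @ v for v in residuals]) - 1.0   # sigma_R0^2 assumed 1
    den = np.mean([np.trace(Rinv @ H @ P @ H.T) for P in P_pred_list])
    return max(num / den, 1e-6)        # keep the factor positive

# The factor then simply scales the constant prediction covariance:
# P_pred_scaled = scaling_factor(...) * P_pred
```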
Note that the derivation is carried out by combining the adaptive KF method of [7] with prediction of the residual. To validate the effectiveness of the proposed method, computer simulation results are presented in the next section.
23.3 Results

23.3.1 Model

The nominal model used for testing the effectiveness of the algorithms is the system described in [9], given by

$$\Phi_{(k)} = \begin{bmatrix} 1.1269 & -0.4940 & 0.1129 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \qquad
\Gamma_{(k)} = \begin{bmatrix} -0.3832 & 0 & 0 \\ 0 & 0.5919 & 0 \\ 0 & 0 & 0.5191 \end{bmatrix}, \qquad
H_{(k)} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}.$$

It should be remarked that, for testing the KF technique, this system plays the role of a standard test bed, much as the Lena picture does in signal processing.
23.3.2 Computer Simulation

Three sets of chaotic noises have been employed to test the effectiveness of the proposed algorithm against the existing methods: the Lorenz chaotic noise, the logistic map, and the Henon map [10]. The tests are shown in the following simulations.

23.3.2.1 The Lorenz Chaotic Noise System

The Lorenz chaotic noise injected into the system can be described mathematically by

$$\frac{dx}{dt} = \sigma (y - x), \qquad \frac{dy}{dt} = \gamma x - y - xz, \qquad \frac{dz}{dt} = xy - \beta z$$

The noise parameters are σ = 10, γ = 28, and β = 8/3.
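As a minimal sketch of generating this noise sequence (simple Euler integration; the step size and initial condition are assumptions, not stated in the chapter):

```python
import numpy as np

def lorenz_noise(n, dt=0.01, sigma=10.0, gamma=28.0, beta=8.0 / 3.0,
                 x0=(1.0, 1.0, 1.0)):
    """Generate n samples of the Lorenz system by Euler integration."""
    x, y, z = x0
    out = np.empty((n, 3))
    for i in range(n):
        dx = sigma * (y - x)
        dy = gamma * x - y - x * z
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = (x, y, z)
    return out

noise = lorenz_noise(1000)   # chaotic noise samples to inject into the plant
```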
Table 23.1 A result showing the root-mean-square (RMS) error of the estimation obtained using the Lorenz model

Parameter   Original KF   Adaptive KF   Enhancement of adaptive KF
X(1,1)      1.2944        0.8970        0.0449
X(2,1)      1.7862        0.8551        0.0670
X(3,1)      1.8243        0.7778        0.0775
Average     1.6349        0.8433        0.0631
The results are compared with those of the existing methods described in Section 23.2. Note that all methods can filter out the chaotic noise, as shown in Table 23.1. The error signals from the estimation by the three methods are shown in Figs. 23.1 to 23.3 for the states X(1,1), X(2,1), and X(3,1), respectively, where the solid line is the error obtained from our method (the enhancement of the adaptive KF), the dashed line is the old adaptive version, and the line with circle markers stands for the original (normal) KF. As can be seen from the figures, our proposed method yields the smallest error, while the original KF performs poorly in the presence of the Lorenz noise.

23.3.2.2 The Logistic Map Chaotic Noise System

The model for the logistic map noise used here can be written as

$$x[n] = k\, x[n-1] \bigl( 1.25 - 5\, x[n-1]^{2} + 4\, x[n-1]^{4} \bigr)$$

where k = 3.5 and −1 < x[n − 1] < 1.
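A minimal sketch of generating this noise sequence (the initial value is an assumption, not given in the chapter):

```python
def logistic_map_noise(n, k=3.5, x0=0.1):
    """Iterate the quartic logistic-type map used as chaotic noise above."""
    x, out = x0, []
    for _ in range(n):
        x = k * x * (1.25 - 5 * x**2 + 4 * x**4)
        out.append(x)
    return out

samples = logistic_map_noise(1000)
```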
Fig. 23.1 Error of parameter X(1,1) for the system with the Lorenz noise
Table 23.2 A result showing RMS error of the estimation obtained using the logistic map model

Parameter   Original KF   Adaptive KF   Enhancement of adaptive KF
X(1,1)      0.3616        0.3559        0.0430
X(2,1)      0.4982        0.3649        0.0554
X(3,1)      0.5031        0.3656        0.0582
Average     0.4543        0.3621        0.0522
Fig. 23.2 Error of parameter X(2,1) for the system with the Lorenz noise
Fig. 23.3 Error of parameter X(3,1) for the system with the Lorenz noise
Table 23.2 shows the RMS error of the estimation obtained from the simulation. Figures 23.4 to 23.6 show the error of parameter X(j,1) for j = 1, 2, 3, respectively. Again, our method outperforms the others in comparison.
23.3.2.3 The Henon Map Chaotic Noise System

The Henon map noise system can be illustrated by

$$x[n] = 1 - \alpha\, x[n-1]^{2} + y[n-1], \qquad y[n] = \beta\, x[n-1]$$
Fig. 23.4 Error of parameter X(1,1) for the system with the logistic map noise
Fig. 23.5 Error of parameter X(2,1) for the system with the logistic map noise
Table 23.3 A result showing RMS error of the estimation obtained using the Henon map model

Parameter   Original KF   Adaptive KF   Enhancement of adaptive KF
X(1,1)      0.3919        0.2912        0.0656
X(2,1)      0.5078        0.3932        0.0929
X(3,1)      0.4881        0.4372        0.0966
Average     0.4626        0.3738        0.0850
Fig. 23.6 Error of parameter X(3,1) for the system with the logistic map noise
Fig. 23.7 Error of parameter X(1,1) for the system with the Henon map noise
The parameters used here are α = 1.4 and β = 0.3. Table 23.3 and Figs. 23.7 to 23.9 are presented in the same fashion as in the previous subsections. Note that our proposed method again yields the best estimation results, whereas the original KF performs poorly.
Fig. 23.8 Error of parameter X(2,1) for the system with the Henon map noise
Fig. 23.9 Error of parameter X(3,1) for the system with the Henon map noise
23.4 Conclusion

This chapter presented a method for improving the state estimation of the Kalman filtering technique in the presence of chaotic noises. To enhance the original KF, covariance scaling factors were introduced to multiply the covariance matrices of the system and the measurement. According to the simulations, the proposed method provides excellent performance with fast convergence compared with the adaptive KF and the original KF, while the original KF has difficulty in eliminating the effects of chaotic noises. Readers are encouraged to investigate further the use of the proposed method to cope with similar problems involving other highly complex chaotic noises.
Acknowledgment This work is supported in part by the Thailand Research Fund under Grant RSA 4680007. The authors would like to thank K. Klomkarn for useful discussions about systems with chaotic noises.
References 1. R.K. Mehra (1972) Approaches to adaptive filtering. IEEE Transactions on Automatic Control, 2: 693–698. 2. A. Gelb (1974) Applied optimal estimation. MIT Press, New York. 3. P.Z. Jia and Z.T. Zhu (1984) Optimal estimate and its application. Science Publishing Company, China. 4. Y. Bar-Shalom and X.R. Li (1993) Estimation and tracking: Principles, techniques and software. Artech House, Boston, London. 5. G. Chen (1993) Approximate Kalman filtering. World Scientific, New York. 6. B. Anderson and J. Moore (1979) Optimal filtering. Prentice Hall, Englewood Cliffs, NJ. 7. B.P. Dahshayani, M.R. Ananthasayanam, and N.V. Vighnesam (2001) Adaptive Kalman filter technique for relative orbit estimation for collocated geostationary satellites. AIAA Atmospheric Flight Mechanics Conference, 1: 481–484. 8. F. Gustafsson (2000) Adaptive filtering and change detection. John Wiley & Sons, New York, p. 97. 9. http://www.mathworks.com/products/control/demos.html?file=/products/demos/shipping/control/kalmdemo.html 10. G. Chen and X. Dong (1998) From chaos to order: Methodologies, perspectives and applications. World Scientific, Singapore.
Chapter 24
Combined Sensitivity–Complementary Sensitivity Min–Max Approach for Load Disturbance–Setpoint Trade-off Design Ramon Vilanova and Orlando Arrieta
Abstract An approach to proportional-integrative-derivative controller tuning based on a simple plant model description, first order plus time delay, is presented. The approach is based on the formulation of an optimal approximation problem in the frequency domain for the sensitivity transfer function of the closed loop; the inclusion of the sensitivity function allows for a disturbance attenuation specification. The solution to the approximation problem provides a set of tuning rules that constitute a parameterized set, formulated in the same terms as in [1], and that includes a third parameter determining the operating mode of the controller. This factor allows one to choose a tuning either for step response or for disturbance attenuation. The approach can be seen as an implicit 2-degree-of-freedom controller, because a single parameter determines both the operating mode (servo/regulation) of the control system and the appropriate tuning of the controller.
24.1 Introduction

Proportional-integrative-derivative (PID) controllers are without doubt the most widely used option in industrial control applications. Their success is mainly due to their simple structure and the clear meaning of the corresponding three parameters, which makes PID control easier to understand for the control engineer than other more advanced control techniques. Because of the widespread use of PID controllers, it is interesting to have simple but efficient methods for tuning them. In fact, since Ziegler and Nichols proposed their first tuning rules [2], intensive research has been done, ranging from modifications of the original tuning rules [3–5] to a variety
Ramon Vilanova and Orlando Arrieta Telecommunication and System Engineering Department, ETSE, Universitat Aut`onoma de Barcelona, 08193 Bellaterra, Barcelona, Spain
of new techniques, including analytical tuning [6, 7], optimization methods [8, 9], and gain and phase margin optimization [8, 10]. Recently, tuning methods based on optimization approaches with the aim of ensuring good stability robustness have received attention in the literature [11, 12]. These methods, although effective, rely on somewhat complex optimization procedures and do not provide tuning rules; instead, the tuning of the controller is defined as the numerical solution of an optimization problem. From an end-user point of view, however, it is preferable that a precise meaning be attached to the tuning parameters rather than simply taking the output of a numerical algorithm as the tuning values.
In [1, 13], an approach to PID tuning is presented that is based on a combination of a simple model description, first order plus time delay (FOPTD), and closed-loop specifications with robustness considerations. The tuning rules are given in parameterized form in terms of the desired time constant and robustness level, together with a completely automatic tuning determined by the process parameters [1]. The problem with this approach is that the design problem is stated completely in terms of a step-response specification; therefore the resulting tuning provides low disturbance attenuation performance. The purpose here is to extend this approach in order to include disturbance attenuation specifications. The design problem is stated in similar terms but considers the closed-loop sensitivity function instead of the reference-to-output relation. The design problem is formulated as an optimal approximation problem in such a way that the resulting PID tuning rules include, as a special case, the tuning guidelines provided in [13]. The new tuning rules constitute a parameterized set that is formulated in the same terms as in [1] and includes a third parameter that determines the operating mode of the controller. This factor allows us to determine a tuning either for step response or for disturbance attenuation.
The contribution is organized as follows. Section 2 presents the problem formulation: process model, PID structure, and the optimization problem based on a min–max optimal approximation problem. Section 3 reviews the solution to the min–max optimization problem and provides the structure of the optimal controller. Starting from the optimal controller structure, Section 4 presents the tuning rules that originate from a reference-to-output step-response specification. Along the same lines, Section 5 extends the results to the case of a sensitivity-based approximation problem in order to include a disturbance attenuation specification. An example is presented in Section 6. The results are extended in Section 7, looking for an intermediate tuning defined in terms of a trade-off between both modes of tuning the PID controller. Finally, conclusions are drawn in Section 8.
24.2 Problem Formulation In this section the controller equations are presented as well as the assumed process model structure and the optimization problem that is posed in order to tune the PID controller.
24.2.1 PID Controller

There exist different ways to express the Proportional-Integrative-Derivative (PID) control law [14]. In this paper we concentrate on the ISA-PID control law [8]:

$$u(s) = K_p \left[ b\,r(s) - y(s) + \frac{1}{sT_i}\bigl[r(s) - y(s)\bigr] + \frac{sT_d}{1 + sT_d/N}\bigl[c\,r(s) - y(s)\bigr] \right] \tag{24.1}$$
where r(s), y(s), and u(s) are the Laplace transforms of the reference, process output, and control signal, respectively. Kp is the PID gain, Ti and Td are the integral and derivative time constants, and N is the ratio between Td and the time constant of an additional pole introduced to assure the properness of the controller. Parameters b and c are called setpoint weights and constitute a simple way to obtain a 2-DOF controller. As their choice does not affect the feedback properties of the resulting controlled system, with no loss of generality here we will assume b = c = 1. This way the PID transfer function we will work with can be written as

$$K(s) = K_p\, \frac{1 + s(T_i + T_d/N) + s^{2}\, T_i (T_d/N)(N+1)}{s T_i\, (1 + s T_d/N)} \tag{24.2}$$
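As a minimal sketch (using the python-control package; the numeric gains below are placeholder values, not a tuning from this chapter), the ISA-PID transfer function (24.2) can be built directly from its parameters:

```python
import control

def isa_pid(Kp, Ti, Td, N):
    """ISA-PID transfer function of Eq. (24.2) with b = c = 1."""
    num = [Ti * (Td / N) * (N + 1), Ti + Td / N, 1.0]
    den = [Ti * Td / N, Ti, 0.0]
    return Kp * control.tf(num, den)

K = isa_pid(Kp=2.0, Ti=1.0, Td=0.25, N=10)   # placeholder parameter values
print(K)
```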
24.2.2 Process Model

An important category of industrial processes can be represented by an FOPTD model,

$$G_n(s) = \frac{K e^{-Ls}}{1 + Ts} \tag{24.3}$$

where K is the process gain, T the time constant, and L the time delay. This class of models is easy to determine by means of a simple step-response experiment that yields the process reaction curve. In order to deal with the delay term, it is usual to use a rational approximation; the following simple first-order Taylor expansion of the $e^{-Ls}$ term will be used:

$$e^{-Ls} \approx 1 - Ls \tag{24.4}$$
24.2.3 Design Problem Formulation The approach presented in this paper is based on sensitivity function optimization. Roughly speaking the goal is to tune the PID controller to match a desired target sensitivity function. This problem can be formulated as a weighted model matching problem between a specified desired sensitivity, Sd (s), and the achieved
[Fig. 24.1 Feedback control system: (a) conventional configuration with controller K(s), plant G(s), reference r, disturbance d, control signal u, and output y; (b) internal model control configuration with IMC parameter C(s), plant G(s), and plant model Gn(s).]
(24.5)
The weighting function W (s) allows one to formulate the model matching problem as a frequency-dependent approximation problem. In a previous work [1] a similar design approach was presented where the model matching problem is stated in terms of the complementary sensitivity transfer function T (s) = Gn (s)K(s)[1 + Gn (s)K(s)]−1 . To optimize for T (s) constitutes a stepresponse design problem. On the other hand, S(s) determines the disturbance attenuation properties of the feedback control system (Fig. 24.1). Here we will show that problem (24.5) can be stated in such a way that the complementary sensitivity optimization results to be a special case of the frequency-dependent approximation problem. In order to formulate problem (24.5) in a more suitable way the controller design is recast on the internal model control framework. This will allow the design problem to be expressed in terms of the internal model control (IMC) parameter. The IMC [15, 16] is based on the introduction of a model of the plant running in parallel with the actual plant. Comparison with the usual feedback configuration leads to the following relation between the IMC and classical feedback controller: C(s) =
K(s) , 1 + K(s)Gn (s)
(24.6)
K(s) =
C(s) . 1 −C(s)Gn (s)
(24.7)
On the basis of the introduced IMC parameter C(s), the closed-loop transfer function relations sensitivity S(s) and complementary sensitivity T (s)
24 Combined Sensitivity–Complementary Sensitivity PID Min–Max Design
331
read as follows: T (s) = C(s)Gn (s)
S(s) = 1 −C(s)Gn (s).
(24.8)
Therefore, the following min–max problems can be formulated: CSo (s) = arg min W (s){Sd (s) − [1 − Gn (s)C(s)]} ∞ , C(s)
CTo (s) = arg min W (s)[T d (s) − Gn (s)C(s)] ∞ , C(s)
(24.9) (24.10)
where CSo (s) is the IMC solution to the sensitivity optimization problem, whereas CTo (s) is the solution to the complementary sensitivity optimization problem. This second controller is introduced just for comparison purposes. Next section will solve problem (24.10) and derive the corresponding tuning relation for the ISA-PID controller (24.2) by using (24.7).
24.3 Solution to the Optimal Approximation Problem The design problem has been formulated in (24.10) and (24.10) as an approximation problem in the frequency domain. Both problems are special cases of min E(s)∞ = min W (s)[M(s) − N(s)C(s)]∞ . C(s)
C(s)
(24.11)
Effectively, (24.10) results from M(s) = T d (s) and N(s) = Gn (s) and (24.10), from M(s) = 1 − Sd (s) and N(s) = Gn (s). Several approaches exists to solve this H∞ problem. See [17, 18] among others. Here we will follow a particularization of the solution presented in [19] and also used in [1]; for the continuous time domain case; where a polynomial approach was taken. This has the advantage of providing the structure of the optimal controller. Therefore, as we will do here, the problem statement can be constrained in order to provide a solution that leads to a PID controller. First of all N(s), M(s), and W (s) are factored as N(s) =
nM (s) nW (s) nN (s) M(s) = W (s) = . dN (s) dM (s) dW (s)
The solution to the minimization of the cost function (24.11) lies in optimal interpolation theory. First, factorize the numerator nN (s) as − nN (s) = n+ N (s)nN (s), − where the polynomial n+ N (s) only has stable roots and nN (s) is the remaining part. In order to obtain a unique factorization, the polynomial n+ N (s) is assumed to be
332
Ramon Vilanova and Orlando Arrieta
− monic. Let ν =deg(n− N (s)) and {z1 , z2 , . . . , zν } be the distinct zeros of nN (s). From equation (24.11) the error function E(s) is subjected to the following interpolation constraints: i = 1 . . .ν. (24.12) E(zi ) = W (zi )M(zi ),
If zi is a zero with multiplicity νi , then additional differential interpolation constraints should be imposed. A well-established theory [17, 20, 21], that solves this problem exists, and a closed form solution can be obtained from the following lemma [17]. Lemma The optimal E o (s), which minimizes E(s)∞ , is of an all-pass form: o
E (s) =
∗
ρ q(s) q(s) 0
if ν ≥ 1, if ν = 0,
(24.13)
where q(s) = 1 + q1 s + q2 s2 + · · · + qν −1 sν −1 is a strictly Hurwitz polynomial and q∗ (s) = q(−s). −1 are real and are uniquely determined Furthermore, the constants ρ and {qi }νi=1 by the interpolation constraints (24.12). Now we will proceed with the application of this lemma in order to compute the optimal C(s) = Co (s). Note first that in our case ν = 1 and z1 = 1/L. Therefore, the interpolation constraints give the following value for the optimal cost ρ :
ρ =W
1 1 M . L L
(24.14)
Application of the above lemma gives the following equation for the optimal parameter Co (s): W (s)M(s) −W (s)N(s)Co (s) = ρ
q∗ (s) . q(s)
Then q∗ (s) Co (s) = [W (s)N(s)]−1 W (s)M(s) − ρ q(s) nW (s)nM (s)q(s) − ρ q∗ (s)dW (s)dM (s) dW (s) dN (s) = − dW (s)dM (s)q(s) nW (s)n+ N (s)nN (s)
(24.15)
In order for Co (s) to be a stable transfer function, n− N (s) must be a factor of the numerator. That is to say, there must exist a polynomial χ (s) such that ∗ n− N (s) χ (s) = nW (s)nM (s)q(s) − ρ q (s)dW (s)dM (s).
(24.16)
24 Combined Sensitivity–Complementary Sensitivity PID Min–Max Design
333
It follows that to determine the optimal controller Co (s), the χ (s) polynomial must be known. In any case, the optimal Co (s) will obey the following structure: dN (s)χ (s) nW (s)n+ N (s)dM (s)q(s) 1 (1 + T s)χ (s) . = K (1 + zs)(1 + TM s)
Co (s) =
(24.17) (24.18)
Expression (24.17) provides the structure of the IMC parameter C(s) that solves the optimal approximation problem (24.11). In the next two sections, this structure will be applied to the case where the approximation problem arises from the sensitivity and complementary sensitivity matching problems. For each one of these subproblems the particular form for χ (s) and value of ρ will be computed.
24.4 Step Response Tuning This section reviews the main result of [1,13], and providing the tuning relations that arise from the application of the solution to the optimal approximation problem to solve the design problem (24.10). The specification of a target T d (s) corresponds to a step-response specification: The controller is chosen in order to achieve a desired reference to output behavior. The approximation problem is formulated, according to (24.10), as CTo (s) = arg min W (s)(T d (s) − Gn (s)C(s)) ∞ . C(s)
(24.19)
We will use T d (s) to specify the desired closed-loop time constant TM . Therefore, it will take the form 1 nM (s) = . (24.20) M(s) = dM (s) 1 + TM s With respect to the weighting function W (s), in order to automatically include integral action and keep it as simple as possible, we will assume the following form: W (s) =
nW (s) 1 + zs = . dW (s) s
By using this settings, the minimum cost ρT is given by 1 (L + z) 1 =L , min E(s)∞ = ρT = W M L L (L + TM )
(24.21)
(24.22)
and the solution for the optimal cost CTo (s) is CTo (s) =
1 (1 + T s)χT (s) . K (1 + TM s)(1 + zs)
(24.23)
334
Ramon Vilanova and Orlando Arrieta
. With respect to χT (s), it must obey (24.16); so if χT (s) = χT0 + χT1 s, then (1 − Ls)(χT0 + χT1 s) = (1 + zs) − ρT s(1 + TM s).
(24.24)
It is easily seen that
χT0 = 1, with
χT1 = z + L − ρT
χT1 = z + L − ρT .
(24.25)
(24.26)
The resulting feedback controller is KTo (s) =
(1 + T s)(1 + χT1 s) 1 K(ρT + TM ) s(1 + TM [(ρT + z)/(ρT + TM )s]
(24.27)
and can be identified with (24.2) using the following expressions for the controller parameters: KpT =
TiT , K(ρT + TM )
TiT = T + χT1 − TM
(ρT + z) , (ρT + TM )
TdT (ρT + z) , = TM T N (ρT + TM ) T ρT (ρT + TM ) NT + 1 = T . Ti L (ρT + z)
(24.28)
These tuning relations provide the four ISA-PID parameters in terms of the desired TM and z and determine the frequency range where the solution to (24.11) provides a better match. It is√worth noting √ that a choice for TM and z is provided in [1]. If we choose TM = 2L and z = 2TM = 2L, (24.28) provides the following simple tuning rule: TiT , KL2.65 TiT = T + 0.03L, TdT = 1.72L, NT T NT + 1 = T . Ti KpT =
(24.29)
The properties of this tuning selection with respect to robustness and the role played by the parameters z and TM are discussed in [1, 13].
24 Combined Sensitivity–Complementary Sensitivity PID Min–Max Design
335
24.5 Disturbance Attenuation Tuning In this section the approximation problem is posed in terms of the sensitivity function, therefore specifying a desired disturbance to output target function, Sd (s). Following similar steps as in the previous case, we will get tuning relations that are to be considered for a disturbance attenuation problem. The approximation problem is formulated, according to (24.10), as CSo (s) = arg min W (s){Sd (s) − [1 − Gn (s)C(s)]} ∞ . C(s)
(24.30)
The target sensitivity function Sd (s) is given the following form: Sd (s) =
γs , TM s + 1
(24.31)
where γ is the new free parameter introduced and whose concrete meaning will become clear later on when comparing both optimization problems. Therefore, the resulting reference model to be considered in the approximation problem (24.11) results to be M(s) = 1 − Sd (s) =
nM (s) (TM − γ )s + 1 = . dM (s) 1 + TM s
(24.32)
Note that, for the special case γ = TM , the target function does coincide with (24.20). With respect to the weighting function W (s), in order to automatically include integral action and keep it as simple as possible, we will assume the following form: nW (s) 1 + zs = . (24.33) W (s) = dW (s) s By using this settings, the minimum cost ρS is given by min E(s)∞ = |ρS | =
(L + z) (TM + L − γ ), (L + TM )
(24.34)
and the solution for the optimal cost CSo (s) is CSo (s) =
1 (1 + T s)χS (s) . K (1 + TM s)(1 + zs)
(24.35)
. With respect to χS (s), it must obey (24.16); so if χS (s) = χS0 + χS1 s, then (1 − Ls)(χS0 + χS1 s) = (1 + zs)(1 + (TM − γ )s) − ρS s(1 + TM s).
(24.36)
It is easily seen that and
χS0 = 1
(24.37)
χS1 = z + L − ρS + TM − γ .
(24.38)
336
Ramon Vilanova and Orlando Arrieta
The resulting feedback controller is KSo (s) =
(1 + T s)(1 + χS1 s) 1 K(ρS + γ ) s(1 + [ρS TM + γ z)/(ρS + γ )s]
(24.39)
can be identified with (24.2) using the following expressions for the controller parameters: TiS , K(ρS + γ ) ρS TM + γ z , TiS = T + χS1 − ρS + γ TdS ρS TM + γ z = S N ρS + γ T 1 ρS + γ S . N + 1 = S χS ρS TM + γ z Ti KpS =
(24.40)
This new set of tuning rules also provide the four ISA-PID. However, this time they are parameterized in terms of the desired TM and z and a new parameter γ . It is straightforward to verify that with γ = TM , we get ρS = ρT , χS1 = χT1 . Therefore both problems provide the same tuning. The tuning rules (24.40) can be seen as a parameterized set in terms of γ . If the tuning we are using is for step response, γ = TM ; for if it is disturbance attenuation, γ = TM . This way, the values of TM and z are first selected in order to determine the desired closed-loop time constant. Secondly, the value of γ can be determined in terms of the operation mode of the control system. When a reference change is to be applied, the controller is to be set to γ = TM , and when turning to regulation mode, a previously selected γ = TM is fixed. The advantage of this parametrization is that of having tuning for both operating modes under the same tuning rule. One common possibility is the use of a 2-degree-of-freedom version of the PID controller and handle both situations separately. However, this implies an increase in the number of the tuning parameters.
24.6 Example The purpose of this section is to provide an example of the performance of the parameterized tuning rule and to demonstrate how the performance changes from step response to disturbance attenuation as γ varies. Let us consider the following plant and FOPTD approximation: G(s) = ≈
1 (1 + s)(1 + 0.1s)(1 + 0.01s)(1 + 0.001s) e−0.073s . 1.073s + 1
(24.41)
24 Combined Sensitivity–Complementary Sensitivity PID Min–Max Design
337
1
Output
0.8
0.6
0.4
0.2
0 0
1
2
3
4
5
Time Fig. 24.2 Output signal generated by application of the step-response-based tuning
0.3
Output
0.2
Increasing γ = TM ... 0.9
0.1 0 −0.1
0
1
2
3
4
5
4
5
Time 0.05
Control
0 −0.05 Increasing γ = TM ... 0.9
−0.1 −0.15 −0.2
0
1
2
3 Time
Fig. 24.3 Output and control signal to an input load disturbance generated by using the Disturbance attenuation tuning and different values of γ
From the FOPTD approximation we identify K_n = 1, L_n = 0.073, and T_n = 1.073. These plant parameters give us, by application of the simple tuning rule (24.29), the PID controller that generates the output shown in Fig. 24.2. As can be seen, the step response is quite acceptable, but the load disturbance attenuation is sluggish. Application of the disturbance-attenuation-based tuning provides an alternative to improve this behaviour. Using different values for γ and the same values of z and T_M, Fig. 24.3 clearly shows that the performance can be readily improved. The values of γ start at γ = T_M, which provides the same tuning as the step-response-based rule, and increase up to γ = 0.9.
24.7 Trade-off Tuning

From the preceding sections it is clear that the γ parameter offers a simple way of balancing both modes of tuning the PID controller. However, in order to properly choose γ, we need some performance measures related to the defined problems. Problems (24.5) and (24.10) define, respectively, a disturbance attenuation problem and a step-response problem with respect to a certain reference model. The performance level achieved is measured, in both cases, by the infinity norm of the corresponding weighted error. In order to balance both kinds of specifications, we need to know how the solution to each of the previous problems performs with respect to the other. However, direct evaluation of the infinity norm may not be an informative measure, because the optimal solution provides an allpass error while any other controller shows a frequency-dependent-shaped error. Therefore, these errors are difficult to compare even if we take the maximum error (in fact the infinity norm). This is the reason we introduce, for each problem, a different but related measure. Problem (24.5) defines the disturbance attenuation features of the resulting design by a suitable definition of the target sensitivity function S_d(s). Previous developments have used a γ-dependent S_d(s) function as in (24.31). However, by changing the problem definition only, we do not have direct information about the achieved performance improvement. The measure we would like to introduce comes from a direct interpretation of the ∞-norm as the (system) norm induced by the (signal) 2-norm. Effectively, it is well known that (assuming zero reference signal)

\|y\|_2 = \|S(s)d\|_2 \le \|S(s)\|_\infty \|d\|_2.    (24.42)
Since for each value of γ we will have a different C_S^o(s), we can accordingly write C_S^o(s; γ). Each one of these optimal controllers will generate the corresponding sensitivity function and exhibit a given performance level for the disturbance attenuation, measured as the corresponding 2-norm of the output signal. If we concentrate on step disturbance signals, it is possible to compute the associated integral squared error (ISE) value as a function of γ and obtain the minimum of such a function.
This will suggest an automated procedure for selecting γ. Therefore

ISE_d(\gamma) = \int_0^\infty (y(t))^2\,dt = \|y\|_2^2    (24.43)
can be computed, after Parseval, as

ISE_d(\gamma) = \frac{1}{2\pi}\int_{-\infty}^{\infty} Y(j\omega)Y(-j\omega)\,d\omega    (24.44)
            = \frac{1}{2\pi}\oint Y(s)Y(-s)\,ds.    (24.45)
This last integral is a contour integral up the imaginary axis and then around an infinite semicircle in the left half-plane. The contribution from this semicircle is zero because Y(s) is strictly proper. By the residue theorem this integral equals the sum of the residues of Y(s)Y(-s) at its poles in the left half-plane. Straightforward computation leads to

ISE_d(\gamma) = \frac{zT_M(\rho_S + \gamma)^2 + (zT_M + \chi_S^1 L)^2}{2zT_M(T_M + z)},    (24.46)
bearing in mind that ρ_S = ρ_S(γ) and χ_S^1 = χ_S^1(γ). By taking the derivative with respect to γ, we can obtain the optimal γ_d^o that minimizes the ISE_d value (24.46) as

\frac{\partial}{\partial\gamma}\,ISE_d(\gamma) = 0,    (24.47)

which implies

\gamma_d^o = \frac{L + T_M}{z - T_M}\left(L + z - \frac{zT_M L + L^2(L + z + T_M)}{zT_M + L^2}\right).    (24.48)
If we use the simple rule suggested above, where T_M = \sqrt{2}\,L and z = \sqrt{2}\,T_M, it turns out that γ_d^o = γ_d^o(L). Therefore, once the time delay is known, the value of γ, as well as the rest of the PID parameters, can be selected automatically. However, changing γ away from γ = T_M generates a performance degradation with respect to the step-response specification (24.10). This is why we compute the error with respect to the reference model M(s) = 1/(T_M s + 1) and obtain its variation with respect to γ. The error E_M(s) with respect to the reference model can be computed as follows.
E_M(s) = \left[\frac{1}{T_M s + 1} - C_S^o(s;\gamma)G_n(s)\right]\frac{1}{s}    (24.49)
       = \frac{(z + L - \chi_S^1) + \chi_S^1 L s}{(T_M s + 1)(1 + zs)}.    (24.50)
Along the same lines as in the disturbance attenuation case, and in order to get a comparable performance measure, we compute the associated

ISE_{T_M}(\gamma) = \int_0^\infty (e_M(t))^2\,dt,    (24.51)
that can be computed, after Parseval, as

ISE_{T_M}(\gamma) = \frac{1}{2\pi}\int_{-\infty}^{\infty} E_M(j\omega)E_M(-j\omega)\,d\omega    (24.52)
               = \frac{1}{2\pi}\oint E_M(s)E_M(-s)\,ds.    (24.53)
By the residue theorem this integral equals the sum of the residues of E_M(s)E_M(-s) at its poles in the left half-plane. Straightforward computation leads to

ISE_{T_M}(\gamma) = \frac{zT_M(z + L - \chi_S^1)^2 + (\chi_S^1 L)^2}{2zT_M(T_M + z)},    (24.54)

which can also be minimized by taking the derivative with respect to γ. The optimal γ_{T_M}^o that minimizes the ISE_{T_M} value (24.54) is obtained as
\frac{\partial}{\partial\gamma}\,ISE_{T_M}(\gamma) = 0,    (24.55)

which implies

\gamma_{T_M}^o = L + T_M - \frac{zT_M(z + T_M)}{zT_M + L^2}.    (24.56)
The previous functionals allow us to define a global performance index that includes both specifications:

J(\gamma) = \alpha\, ISE_{T_M}(\gamma) + \beta\, ISE_d(\gamma),    (24.57)
where α and β are suitable weights that allow us to adjust the desired trade-off. For the extremal cases α = 0 and β = 0 we recover the optimal values γ_d^o and γ_{T_M}^o, respectively; other values provide a trade-off selection for γ. As an example, Fig. 24.4 shows the ISE performance corresponding to the system of the previous example (24.41). The index is plotted against γ for the expressions (24.46), (24.54), and (24.57), and the optimal value for each expression is marked on the graph. The process output of the system is shown in Fig. 24.5 for three values of γ. The top plot shows the outputs for the tuning (24.29) (equivalent to γ = T_M) and for the trade-off tuning with the optimal value for (24.57), which is γ = 0.26. The latter is also plotted in the bottom graph together with the tuning corresponding to γ = 0.9, which, according to Fig. 24.3, provides more disturbance attenuation. An important point raised by Fig. 24.4 concerns the selection of γ: a completely automated and guided selection of all the PID parameters can be done once the delay L of the system model is known. A minimal numerical sketch of this selection is given below.
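As an illustration of this automated selection, the short sketch below evaluates the ISE expressions (24.46) and (24.54) and minimizes the combined index (24.57) numerically; the weights α = 1 and β = 0.2 follow the index plotted in Fig. 24.4, and everything else (function names, the SciPy-based search) is an assumption rather than the authors' procedure.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ise_indices(gamma, L, TM, z):
    """ISE_d and ISE_TM of (24.46) and (24.54) for a given gamma (sketch)."""
    rho = (L + z) * (TM + L - gamma) / (L + TM)          # eq. (24.34)
    chi1 = z + L - rho + TM - gamma                      # eq. (24.38)
    den = 2.0 * z * TM * (TM + z)
    ise_d = (z * TM * (rho + gamma) ** 2 + (z * TM + chi1 * L) ** 2) / den
    ise_tm = (z * TM * (z + L - chi1) ** 2 + (chi1 * L) ** 2) / den
    return ise_d, ise_tm

L = 0.073                               # delay of the FOPTD model (24.41)
TM, z = np.sqrt(2) * L, 2 * L           # simple rule: TM = sqrt(2) L, z = sqrt(2) TM
alpha, beta = 1.0, 0.2                  # weights of J(gamma) as in Fig. 24.4

J = lambda g: alpha * ise_indices(g, L, TM, z)[1] + beta * ise_indices(g, L, TM, z)[0]
res = minimize_scalar(J, bounds=(TM, 1.0), method="bounded")
print("trade-off gamma ~", round(res.x, 3))   # should be close to the value quoted in the text
```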
Fig. 24.4 ISE index vs. γ for the expressions (24.46), (24.54), and (24.57), with J(γ) = ISE_TM(γ) + 0.2 ISE_d(γ); the optimal values γ_d^o, γ_TM^o, and γ_op for J_min are marked
Fig. 24.5 Output signal: process output for γ = T_M and γ = γ_op (top), and for γ = 0.9 and γ = γ_op (bottom)
A consideration has to be made concerning the time-domain results of this selection. Although the selected tuning corresponds to the controller C_S^o(s; γ^o), it may turn out that the time-domain response does not seem to be the best one. Regarding Fig. 24.3, for example, it is seen that better time-domain responses are obtained for values γ ≈ 0.9. Therefore, even for small variations of the performance index (see the y-axis scale in Fig. 24.4), there can be large variations in the corresponding time-domain response. This raises the question of the selection of the performance index and its correlation with the shape of the time-domain response it generates; this is a subject of current research.
24.8 Conclusions

An approach to PID tuning based on an optimal approximation problem has been presented. The approximation problem is stated in terms of the sensitivity function of the closed-loop system. An appropriate formulation of the target sensitivity function generates the tuning of the controller as a parameterized set of tuning rules. This set provides tuning rules for each operating mode of the controller. The overall tuning needs three parameters: two parameters along the lines of previously presented tuning rules and a new third parameter that determines the level of regulation mode of the controller. Further research is being conducted on systematic methods for automatically determining this third parameter.

Acknowledgments This work has received financial support from the Spanish CICYT program under grant DPI2004-06393. The financial support from the University of Costa Rica and from the MICIT and CONICIT of the Government of the Republic of Costa Rica for the author O. Arrieta's Ph.D. studies is greatly appreciated.
References

1. R. Vilanova (2006) PID controller tuning rules for robust step response of first-order-plus-dead-time models. ACC06, American Control Conference, Minneapolis, MN.
2. J. Ziegler and N. Nichols (1942) Optimum settings for automatic controllers. Transactions ASME 759–768.
3. I. Chien, J. Hrones, and J. Reswick (1952) On the automatic control of generalized passive systems. Transactions ASME 175–185.
4. C. Hang, K. Astrom, and W. Ho (1991) Refinement of the Ziegler–Nichols formula. IEE Proceedings. Part D. 138: 111–118.
5. K. Astrom and T. Hägglund (2004) Revisiting the Ziegler–Nichols step response method for PID control. Journal of Process Control, 14: 635–650.
6. S. Hwang and H. Chang (1987) Theoretical examination of closed-loop properties and tuning methods of single loop PI controllers. Chemical Engineering Science, 42: 2395–2415.
7. S. Skogestad (2003) Simple analytic rules for model reduction and PID controller tuning. Journal of Process Control, 13: 291–309.
8. K. Astrom and T. Hägglund (1995) PID Controllers. Instrument Society of America.
9. K. Astrom, H. Panagopoulos, and T. Hägglund (1998) Design of PI controllers based on non-convex optimization. Automatica, 34: 585–601.
10. H. Fung, Q. Wang, and T. Lee (1998) PI tuning in terms of gain and phase margins. Automatica, 34: 1145–1149.
11. M. Ge, M. Chiu, and Q. Wang (2002) Robust PID controller design via LMI approach. Journal of Process Control, 12: 3–13.
12. R. Toscano (2005) A simple PI/PID controller design method via numerical optimization approach. Journal of Process Control, 15: 81–88.
13. R. Vilanova and P. Balaguer (2006) Robust PID tuning relations based on min–max optimisation. ROCOND06, IFAC Symposium on Robust Control.
14. P. Cominos and N. Munro (2002) PID controllers: recent tuning methods and design to specification. IEE Proceedings. Part D. 149: 46–53.
15. D.E. Rivera, M. Morari, and S. Skogestad (1986) Internal model control. 4. PID controller design. Industrial & Engineering Chemistry Research, 25: 252–265.
16. M. Morari and E. Zafiriou (1984) Robust Process Control. Prentice Hall, Englewood Cliffs, NJ.
17. B. Chen (1984) Controller synthesis of optimal sensitivity: multivariable case. IEE Proceedings. Part D. 131: 547–551.
18. B. Francis (1987) A Course in H∞ Control Theory. Springer-Verlag.
19. R. Vilanova and I. Serra (1999) Model reference control in two degree of freedom control systems: adaptive min–max approach. IEE Proceedings. Part D. 146: 273–281.
20. D. Sarason (1967) Generalized interpolation in H∞. Transactions AMS 127: 179–203.
21. G. Zames and B. Francis (1983) Feedback, minimax sensitivity and optimal robustness. IEEE Transactions on Automatic Control 28: 585–601.
Chapter 25
Nonlinear Adaptive Sliding Mode Control for a Rotary Inverted Pendulum Yanliang Zhang, Wei Tech Ang, Jiong Jin, Shudong Zhang, and Zhihong Man
Abstract Due to the nature of the Proportional-Integral-Derivative (PID) controller, the inverted pendulum will seldom remain in the steady state in a noisy, uncertain environment, which degrades the usefulness of the PID controller in a system that requires high precision. Lyapunov-based sliding mode and adaptive controllers are proposed for a rotary inverted pendulum in this research. They are applied to stabilize the pendulum around the balancing state in the Lyapunov sense. Both the simulation and the experimental results show that not only can strong robustness with respect to system uncertainties and nonlinearities be obtained, but also the pendulum position can dynamically converge to the desired balancing state by using nonlinear Lyapunov-based controllers.
25.1 Introduction

The inverted pendulum is one of the most commonly studied systems in the control area. It is quite popular because the system is an excellent test bed for learning and testing various control techniques, and its variations represent different kinds of robotic links. In this research, we use a KRi rotary inverted pendulum model PP-300 [1], shown in Fig. 25.1. The control objective of the inverted pendulum is to swing up the pendulum by rotating the base arm from a stable position (vertically downward state) to the balancing state (vertically upward state) and to keep the vertically upward balance of the pendulum in spite of the disturbance. Accordingly, there are two types of controllers, i.e., a swing-up controller and a balancing controller, involved in the whole process, with a corresponding switching algorithm needed. Many swing-up control methods have been reported [2–4], and we choose the energy control strategy [4] due to its simplicity and accuracy. This chapter focuses mainly on the design of the balancing controller.

Yanliang Zhang, Wei Tech Ang, Jiong Jin, Shudong Zhang, and Zhihong Man
School of Mechanical and Aerospace Engineering, Nanyang Technological University, Blk N3, B4a-02A, Singapore 639798
Fig. 25.1 A rotary inverted pendulum PP-300
In the past few years, many researchers have worked on this problem and come up with many controllers, such as state feedback, generalized predictive control (GPC), and neural networks [5–7]. However, we notice that the majority of these controllers were developed on the linear model of the system and had their own limitations. In this research, we propose a new, robust Lyapunov-based nonlinear adaptive sliding mode controller to improve the performance of the system by making the pendulum less sensitive to internal noise and external disturbance. The design and implementation start from the simple linear system model and extend to the complex nonlinear system model. It is shown that, unlike previously existing controllers, by properly choosing the sliding variable, the single control input has a strong impact on both the pendulum position and the arm velocity. As a result, the effect of system uncertainties can be eliminated, and the evident output (pendulum position) can dynamically converge to the desired position. The organization of this chapter is as follows. In Section 25.2, the mathematical model of the rotary inverted pendulum control system is outlined and the concepts of sliding mode control and adaptive control are summarized. In Section 25.3, both linear and nonlinear sliding mode controllers are presented, and MATLAB simulations for them are performed. In Section 25.4, a nonlinear adaptive sliding mode controller is presented, and a MATLAB simulation for the controller is also conducted. In Section 25.5, experimental results and analysis are given. In Section 25.6, a conclusion is drawn.
25.2 Background

25.2.1 Mathematical Model of System

The model presented is based on the standard right-handed Cartesian co-ordinate system. The angular position of the arm α is assigned to be increasing when the arm is rotating clockwise.
Fig. 25.2 Coordinate system of PP-300
The angular position of the pendulum β is assigned to be increasing when the pendulum is rotating about an axis passing through the arm section from the origin to the pivot point of the pendulum. The reference of β is taken from the upward vertical, as shown in Fig. 25.2. The mathematical model using the Lagrange equation is derived in [8].
25.2.2 Sliding Mode Control

Sliding mode control is a viable high-speed switching feedback control. It is an important approach widely used to design robust controllers for both linear and nonlinear systems with system uncertainties and bounded input disturbances. The basic idea of sliding mode control is as follows:
1. The desired system dynamics is first defined on a sliding mode surface in the state space.
2. A controller is then designed, using the output measurements and system uncertainty bounds, to drive the closed-loop system to reach the sliding mode surface.
3. The desired dynamics of the closed-loop system is then obtained on the sliding mode surface [9], as shown in Fig. 25.3.

Fig. 25.3 The convergence property of a sliding mode control system: s is the sliding variable, x is the state vector, and t is time
Fig. 25.4 An adaptive control system: r(t) is the desired input, y(t) is the output, and u(t) is the control input
The characteristics of the sliding mode control make it a desired choice of control scheme in applications where the plant is nonlinear and displays parametric uncertainty.
25.2.3 Adaptive Control

In the robotic control area, adaptive control is often an improved version of sliding mode control, which deals with time-varying parameters present in robotic systems. There are three components in a typical adaptive control system, as shown in Fig. 25.4: (1) the plant to be controlled, (2) an on-line parameter estimation algorithm or a set of parameter adaptive laws, and (3) an adaptive controller. There are two ways to write the adaptation laws: least-squares based and Lyapunov stability based. If least squares is used, the designs of the adaptation laws and the adaptive controller are separate and independent. However, the adaptive controller and the adaptive laws are designed together in Lyapunov stability-based adaptive control. In this research, Lyapunov stability-based control is specially considered due to its simplicity in the analysis of system stability and error convergence.
25.3 Sliding Mode Control: Design and Simulation 25.3.1 Linear Sliding Mode Control In this section, our goal is to design a sliding mode controller to stabilize the pendulum around the balancing state. For such a complex system, it is common to deal with its linearized model first. At the target position (β = 0), under the following assumptions α˙ = 0, β = 0, β˙ = 0, we can obtain the following linearized
model [10]:

\begin{bmatrix} J_0 + m_1 L_0^2 & m_1 L_0 l_1 \\ m_1 L_0 l_1 & J_1 + m_1 l_1^2 \end{bmatrix}\begin{bmatrix} \ddot{\alpha} \\ \ddot{\beta} \end{bmatrix} + \begin{bmatrix} C_0 + \frac{K_t K_b}{R_a} & 0 \\ 0 & C_1 \end{bmatrix}\begin{bmatrix} \dot{\alpha} \\ \dot{\beta} \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & -m_1 g l_1 \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} \frac{K_t K_u}{R_a} \\ 0 \end{bmatrix} u    (25.1)
After that, if we define the state vector X = [α β α˙ β˙ ]T , the above linear model can be written in the general state-space form [5]: X˙ = AX + Bu.
(25.2)
In order to achieve our control objective for the pendulum, the sliding surface is chosen as s = CT X, (25.3) where C = [0 λ ψ 1]T with arbitrary positive constants λ and ψ . Equivalently, equation (25.3) can be written as s = β˙ + λ β + ψ α˙ .
(25.4)
Define the candidate Lyapunov function as follows: 1 V = s2 . 2
(25.5)
Differentiating V with respect to time, we have V˙ = ss˙ = sCT X˙ = sCT (AX + Bu) = sCT AX + sCT Bu
(25.6)
Instead of using traditional linear controllers, a linear sliding mode controller u is chosen:

u = \frac{-1}{C^T B}\,C^T AX - \frac{\tau}{C^T B}\,\mathrm{sign}(s), \qquad \tau > 0,    (25.7)

where

\mathrm{sign}(s) = \begin{cases} 1 & \text{if } s > 0 \\ 0 & \text{if } s = 0 \\ -1 & \text{if } s < 0 \end{cases}    (25.8)
(25.9)
350
Yanliang Zhang et al. 1.2 1
Pendulum position
0.8 0.6 0.4 0.2 0 −0.2 −0.4 −0.6
0
2
4
6
8
10
12
14
16
18
20
Time t (sec) Fig. 25.5 Pendulum position vs. time in linear sliding mode control
According to the Lyapunov stability theory [11], the switching plane variable s reaches the sliding mode surface s = CT X = 0 in a finite time. After s = 0, the desired closed-loop system dynamics, given by s = β˙ + λ β + ψ α˙ = 0, is obtained. Considering this equation and assuming α and β are two independent parameters, it is easy to understand the following expression: ˙ β = −ψλ α +C1 exp(−λ t) (25.10) ˙ α˙ = −ψλ β − ψβ where C1 is a constant. Therefore, as long as we properly choose the values of λ and ψ , the variables α˙ and β will converge to zero on the sliding mode surface. This successfully fulfills our control objective. In order to simulate the derived sliding mode controller and perform the experiment on the system to verify the result, we first carry out system identification and parameter estimation of the rotary inverted pendulum to obtain all the parameters necessary in the controller. As we know, λ is used to control the pendulum convergence rate, and ψ is used to control the arm velocity. Based on MATLAB optimization, we optimally choose λ = 1 and ψ = 0.03. Figures 25.5 and 25.6 show the good system performance associated with control input and pendulum position by using the boundary layer technique to remove the chattering caused by the sign(s) function.
25.3.2 Nonlinear Sliding Mode Control Based on the previous discussion, now it is straightforward to work on the nonlinear model of the system, which provides more accuracy. For simplicity, replace the
25 Nonlinear Adaptive Sliding Mode Control for a Rotary Inverted Pendulum
351
Fig. 25.6 Control input vs. time in linear sliding mode control
original system model with a k α¨ c d α˙ 0 p + + = u, k b β¨ e f β˙ h 0
(25.11)
where a = J0 + m1 L02 + m1 l12 sin2 β b = J1 + m1 l12 h = −m1 gl1 sin β kt kb 1 kt ku k = m1 L0 l1 cos β c = C0 + + m1 l12 β˙ sin(2β ) p = Ra 2 Ra 1 1 d = −m1 L0 l1 β˙ sin β + m1 l12 α˙ sin(2β ) f = C1 e = − m1 l12 α˙ sin(2β ) 2 2 If we define the state vector X = [ α β α˙ β˙ ]T = [ x1 x2 x3 x4 ]T = [ X1 X2 ]T ,
(25.12)
then we have ⎤ ⎡ ⎤ α˙ 0 ⎢ β˙ ⎥ ⎥ = f1 (x3 , x4 ) ⎣ 0 ⎦u + =⎢ ⎣ α¨ ⎦ f2 (x1 , x2 , x3 , x4 ) B β¨ x x˙ = 1 = 3 = f1 (x3 , x4 ) x˙2 x4 x˙ = 3 = f2 (x1 , x2 , x3 , x4 ) + Bu x˙4 ⎡
X˙
X˙1 X˙2
(25.13)
352
Yanliang Zhang et al.
Fig. 25.7 Pendulum position vs. time in nonlinear sliding mode control
Now, choose the sliding variable s such that X1 = C1 X1 +C2 X2 . s = CT X = C1 C2 X2
(25.14)
Take the derivative of sliding variable s, we obtain s˙ = C1 X˙1 +C2 X˙2 = C1 f1 +C2 f2 +C2 Bu.
(25.15)
Similarly, in order to make β and α˙ converge to zero, we define C1 = [0 λ ] and C2 = [ψ 1]. λ and ψ are defined the same as in the linear sliding mode control. Differentiating the candidate Lyapunov function V = 12 s2 leads to V˙ = ss˙ = sC1 f1 + sC2 f2 + sC2 Bu,
(25.16)
which is appropriate if the controller is chosen as u=
−1 τ (C1 f1 +C2 f2 ) − sign(s) τ > 0. C2 B C2 B
(25.17)
Substituting V˙ into (25.17), the result is V˙ = −τ |s| < 0
for s = 0 and τ > 0.
(25.18)
After this step, all the discussions are exactly same with the linear model. Figures 25.7 and 25.8 show the perfect system performance regarding to control input (boundary layer function used) and pendulum position. Comparing these figures with the previous ones, the following facts are noted: 1. The control input signal needed is much smaller than that for the linear model; in other words, it is energy efficient.
25 Nonlinear Adaptive Sliding Mode Control for a Rotary Inverted Pendulum
353
Fig. 25.8 Control input vs. time in nonlinear sliding mode control
2. The pendulum output accuracy scale is nearly 10 times more than that for the linear model, which is quite desirable. Simplicity and robustness are two advantages of sliding mode controllers. Simplicity can be noticed by readers through the design process. All previous simulation results were obtained on the “ideal” case. However, in practice, both disturbance and noise are present. The performance of the system will be presented in the experimental part. Based on the analysis of sliding mode control, we realize one important process that affects the system performance significantly: the identification of the system. All the parameters in (25.1) and (25.11) have to be identified before implementing the controllers. System mechanical constraints, electronic noise, and disturbances affect the accuracy of the identification. Although sliding mode control can successfully deal with bounded inputs, it would be advantageous if some controller parameters can be adaptively identified. In this chapter, we demonstrate the design of a nonlinear adaptive sliding mode control that is suitable for systems with timevarying parameters.
25.4 Nonlinear Adaptive Sliding Mode Control Design and Simulation 25.4.1 System Parameters For the inverted pendulum, there are four uncertain system parameters J0 , J1 , C0 , and C1 , which are assumed to be constant. In this design [12], assume J0 and J1 are
354
Yanliang Zhang et al.
known already; only C0 and C1 are estimated using adaptive laws. The estimated values of J0 and J1 are determined from system identification process. Thus, we have to rewrite (25.11) in the form a k α¨ c +C0 d α˙ 0 p + + = u, k b β¨ e C1 β˙ h 0
(25.19)
where a = J0 + m1 L02 + m1 l12 sin2 β b = J1 + m1 l12 kt ku p= k = m1 L0 l1 cos β h = −m1 gl1 sin β Ra kt kb 1 1 c= + m1 l12 β˙ sin(2β ) e = − m1 l12 α˙ sin(2β ) Ra 2 2 1 d = −m1 L0 l1 β˙ sin β + m1 l12 α˙ sin(2β ) 2 The system dynamics can also be written in the following ways:
aα¨ + kβ¨ + (c +C0 )α˙ + d β˙ = pu kα¨ + bβ¨ + eα˙ +C1 β˙ + h = 0
(25.20)
Then we can obtain 1 [(ek − bc −C0 b)α˙ + (C1 k − bd)β˙ + kh + bpu], ab − k2 1 β¨ = [(kc +C0 k − ae) α˙ + (kd −C1 a) β˙ − ah − kpu]. ab − k2
α¨ =
(25.21) (25.22)
Define the sliding variable s = β˙ + λ β + ψ α˙ with arbitrary positive constants ψ and λ . Take the time derivative of the sliding variable s and substitute (25.21) and (25.22) into it. We have s˙ = β¨ + λ β˙ + ψ α¨ 1 = [(kc +C0 k − ae)α˙ + (kd −C1 a)β˙ − ah − kpu] + λ β˙ ab − k2 ψ + [(ek − bc −C0 b)α˙ + (C1 k − bd)β˙ + kh + bpu] ab − k2 1 ([kc − ae + ψ (ek − bc)]α˙ + [kd + λ (ab − k2 ) − ψ bd]β˙ = ab − k2 + ψ kh − ah + (k − ψ b)α˙ C0 + (ψ k − a)β˙ C1 + (ψ b − k)pu) 1 [Z + XC0 +YC1 + (ψ b − k)pu] (25.23) = ab − k2
25 Nonlinear Adaptive Sliding Mode Control for a Rotary Inverted Pendulum
355
where Z = [kc − ae + ψ (ek − bc)]α˙ + [kd + λ (ab − k) − ψ bd]β˙ + ψ kh − ah X = (k − ψ b)α˙ Y = (ψ k − a)β˙ We choose the candidate of Lyapunov function V as 1 1 1 V = s2 + η1 (C0 − Cˆ0 )2 + η2 (C1 − Cˆ1 )2 , 2 2 2
(25.24)
where η1 and η2 are arbitrary positive constants. Differentiating V with respect to time t leads to V˙ = ss˙ − η1 (C0 − Cˆ0 )C˙ˆ0 − η2 (C1 − Cˆ1 )C˙ˆ1 .
(25.25)
We may choose the adaptive controller to be u=
1 [Z + X Cˆ0 +Y Cˆ1 + (ab − k2 )sign(s)κ ] p(k − ψ b)
(25.26)
with κ > 0 and the adaptive laws to be X , ab − k2 Y C˙ˆ1 = η2−1 s . ab − k2
C˙ˆ0 = η1−1 s
(25.27) (25.28)
By substituting equations (25.23) and (25.26) to (25.28) into (25.25), we have s [Z + XC0 +YC1 + (ψ b − k)pu] − η1 (C0 − Cˆ0 )C˙ˆ0 ab − k2 −η2 (C1 − Cˆ1 )C˙ˆ1 s = [Z + XC0 +YC1 − [Z + X Cˆ0 +Y Cˆ1 + (ab − k2 )sign(s)κ ]] ab − k2 −η1 (C0 − Cˆ0 )C˙ˆ0 − η2 (C1 − Cˆ1 )C˙ˆ1 s = [X(C0 − Cˆ0 ) +Y (C1 − Cˆ1 )] − |s| κ − η1 (C0 − Cˆ0 )C˙ˆ0 ab − k2 −η2 (C1 − Cˆ1 )C˙ˆ1
V˙ =
s X [X(C0 − Cˆ0 ) +Y (C1 − Cˆ1 )] − |s| κ − (C0 − Cˆ0 )s 2 ab − k ab − k2 Y −(C1 − Cˆ1 )s ab − k2 = − |s| κ < 0 (25.29) =
According to the second method of Lyapunov stability, the sliding variable s asymptotically converges to zero.
356
Yanliang Zhang et al.
25.4.2 Parameter Selection For the nonlinear adaptive controller, the five parameters λ , ψ , η1 , η2 , and κ need to be decided. At the sliding mode surface s = 0, the desired closed-loop system dynamics is obtained as follows: (25.30) s = β˙ + λ β + ψ α˙ = 0. In addition, the derivative of sliding variable s˙ can be further inferred to satisfy this equation: s˙ = β¨ + λ β˙ + ψ α¨ = 0. (25.31) From the nonlinear model, we can derive the expression m1 l0 l1 cos β α¨ + (m1 l12 + J1 )β¨
1 2 + − m1 l1 α˙ sin 2β α˙ 2
+C1 β˙ − m1 gl1 sin β = 0
(25.32)
Assume sin β = β , cos β = 1, and sin(2β ) = 2β ; when β ∈ [−15◦
15◦ ], we have
m1 l0 l1 α¨ + (m1 l12 + J1 )β¨ − m1 l12 α˙ 2 β +C1 β˙ − m1 gl1 β = 0.
(25.33)
From equations (25.31) and (25.32), we have
α˙ = −
β˙ + λ β , ψ
(25.34)
α¨ = −
β¨ + λ β˙ . ψ
(25.35)
Substituting (25.34) and (25.35) into (25.33), we have m1 l12 + J1 −
m1 l0 l1 ψ
λ m1 l0 l1 ˙ (β˙ + λ β )2 ¨ β + C1 − β − m1 l1 β g + l1 = 0. ψ ψ2 (25.36)
Optimization methods of nonlinear equations in MATLAB are adopted to find optimal values of λ , ψ , η1 , η2 , and κ . The simulation result shown in Fig. 25.9 proves the validity of the nonlinear adaptive sliding mode controller. Two adapted parameters, C0 and C1 , can successfully converge to constant values.
25 Nonlinear Adaptive Sliding Mode Control for a Rotary Inverted Pendulum
357
Fig. 25.9 Simulation result of adaptive sliding mode control
25.5 Experimental Results In this section, three set of results are shown in Figs. 25.10 to 25.12. The first set is collected using a linear sliding mode controller, which is based on the linearized system. The second set is from a nonlinear sliding mode controller, and the last set is from a nonlinear adaptive sliding mode controller. It is quite obvious that the performance is improving from linear sliding mode control to nonlinear sliding mode and, finally, to adaptive control. The performance differences in nonlinear and linear control schemes can be easily explained because different mathematical models are used. The nonlinear model is expected to be more reliable than the linear model. The performance of nonlinear adaptive sliding mode control is better than the that of nonlinear sliding mode control because some uncertain parameters are adaptively chosen instead of using identification results. The parameters updated by adaptive laws are more close to the current system status. Quantitatively speaking, the power
358
Yanliang Zhang et al.
Fig. 25.10 Experimental results of linear sliding mode control
consumption reduces by 50% in nonlinear sliding mode control compared with that in linear sliding mode case and by 70% in nonlinear adaptive sliding mode control.
Fig. 25.11 Experimental results of nonlinear sliding mode control
25 Nonlinear Adaptive Sliding Mode Control for a Rotary Inverted Pendulum
359
Fig. 25.12 Experimental results of nonlinear adaptive sliding mode control
25.6 Conclusion This research project extensively covers the entire design and development cycle for a typical engineering system: complex nonlinear inverted pendulum system. All the objectives are fully met. For the given system, a precise nonlinear system mathematical model was derived first. To ease the process, a linear approximation of the system was obtained. These models are essential for the adaptive sliding controller design. For the completeness of the research, a linear sliding mode controller was introduced first, followed by a nonlinear sliding mode control, which provided a dominant foundation for adaptive control. The adaptive control was then designed based on the nonlinear sliding mode controller by choosing two parameters to be adaptively adjusted. Experimental results demonstrated that nonlinear adaptive sliding mode control is suitable and desirable for systems with parameter uncertainties and disturbances.
References 1. KRi inverted pendulum PP-300: User and service manual (2002) KentRidge Instruments Ptd Ltd, Rev. 2.5, July. 2. K. Furuta, M. Yamakita, and S. Kobayashi (1992) Swing-up control of inverted pendulum using pseudo-state feedback. Journal of Systems and Control Engineering, 206: 263–269.
360
Yanliang Zhang et al.
3. M. Wiklund, A. Kristenson, and K.J. Astrom (1993) A new strategy for swing up and control of an inverted pendulum. IFAC World Congress 1993, IX. 4. K.J. Astrom and K. Furata (1996) Swinging up a pendulum by energy control. IFAC 13th Triennial World Congress, San Francisco. 5. Y.T. Leong (1998) State space predictive control of inverted pendulum. MS thesis, Nanyang Technological University. 6. K.V. Ling, P. Falugi, J.M. Maciejowski, and L. Chisci (2002) Robust predictive control of the Furuta pendulum. IFAC’02 Conference Proceedings. 7. J.N. Hu and Y.H. Wang (2003) Control of nonlinear systems via neural network modelling and state dependent Riccati equation. Technical report, Nanyang Technological University, December. 8. K.V. Ling, Y. Lai and K. Chew (2001) An online Internet laboratory for control experiments, in: Advances in Control Education 2000 (L. Vlacic and M. Brisk, eds.), Pergamon, Great Britain, pp. 173–176. 9. Z.H. Man (2005) Robot control systems. Robotics, Prentice Hall, Englewood Cliffs, NJ. 10. K. Ogata (1995) Linearization of nolinear mathematical models. Modern Control Engineering, Prentice Hall, Englewood Cliffs, NJ, pp. 143–149. 11. J.–J. E. Slotine and W. Li (1991) Applied nonlinear control. Prentice Hall, Englewood Cliffs, NJ. 12. J.–J.E. Slotine (1984) Sliding controller design for nonlinear systems. International Journal of Control, 40: 421–434.
Chapter 26
Robust Load Frequency Sliding Mode Control Based on Uncertainty and Disturbance Estimator P.D. Shendge, B.M. Patre, and S.B. Phadke
26.1 Introduction Load frequency (LF) is one of the important problem in electric power system design and operation. Electric power systems consist of a number of control areas, which generate power to meet the power demand. However, poor balancing between generated power and demand can cause the system frequency to deviate away from the nominal value and creates inadvertent power exchanges between control areas. To avoid such situations, LF controllers are designed and implemented to automatically balance generated power and demand in each control area [1, 2]. The classical integral controller is successful in achieving zero steady-state frequency deviation but exhibits poor dynamic performance. In power systems, one of the most important issue is the load frequency control (LFC), which deals with the problem of how to deliver the demanded power of the desired frequency with minimum transient oscillations [3]. Whenever any suddenly small load perturbations resulted from the demands of customers in any areas of the power system, changes of high-line power exchanges and the frequency deviations will occur. Thus, to improve the stability and performance of the power system, it is necessary that generator frequency be setup under different loading conditions. For this reason, many control approaches have been developed for the load frequency
P.D. Shendge College of Engineering, Pune-411 005, India B.M. Patre SGGS Institute of Engineering and Technology, Vishnupuri, Nanded-431 606, India S.B. Phadke Defence Institute of Advanced Technology, Pune-411 025, India
361
362
P.D. Shendge et al.
control. Among them, PID controllers [4], optimal controllers [5], nonlinear [6] and robust control [7], and neural and/or fuzzy strategies [8–10] approaches has been developed. In an industrial plant, e.g., a power system, one of the problems always encountered is the parametric uncertainties. The usual design approach for LFC employs the linear control theory to develop control law on the basis of the linearized model with fixed system parameters. Since the operating point of a power system and its parameter change continuously, a fixed controller may no longer be suitable in all operating conditions. In order to take this parametric uncertainties into account, several papers have been published using concept of variable structure system [11], adaptive control techniques [7, 12] robust variable structure model following control [13] to the design LFC in the presence of parametric uncertainty. The performance of these techniques was not satisfactory for 89% parameter variation from their nominal values. In this chapter the design of a robust model following sliding mode LFC for a single area power system based on uncertainty and disturbance estimator (UDE) [14–16] along with a second- and higher order filter for estimation of the error is presented. Ackermann’s formula [17] is used for reaching phase elimination, while uncertainty and disturbance estimation method [14] to used for estimation of uncertainty and disturbance. The control proposed does not require the knowledge of bounds of uncertainty and disturbance and is continuous. The simulation results of the control strategy for LFC is presented by changing all parameters by up to 98% from their nominal values where some other existing method fails [7, 11–13]. The results shows that the system performance is robust to parameter variations and disturbance. Initially we used first-order filter for the estimation in which the error is of the order of O(τ ), where τ is filter time constant, which is very small. This result has been extended using a second-order filter, for estimation error, which is O(τ 2 ), without causing any disturbance. As τ is very small, error is also very small. This result is generalized for nth-order filter to show that the error is of the order of O(τ n ) The attention is focused on how robust sliding mode control can be designed for LFC with the help of uncertainty and disturbance estimator and also to show how the error can be improved by using a higher order estimation filter.
26.2 Dynamic Model for Load Frequency Control Electrical power systems are complex, nonlinear, and dynamic. The usual practice is to linearize the model around the operating point and then develop the control laws. Since the system is exposed to small changes in loads during its normal operation, the linearized model will be sufficient to represent the power system dynamics. The dynamic model in state variable form can be obtained from the transfer function
26 Robust Load Frequency SMC Based on UDE
363
model. The state equations can be written as [7, 18, 19] Kp Kp −1 ∆ f˙ = ∆ f + ∆ Pg − ∆ Pd , Tp Tp Tp −1 1 ∆ P˙g = ∆ Pg + ∆ Xg , Tt Tt −1 1 1 1 ∆ X˙g = ∆ f − ∆ Xg − ∆ Pc + ∆ f dt. RTg Tg Tg Tg
(26.1) (26.2) (26.3)
Introducing an integral control of ∆ f ,
∆E = K
∆ f dt,
(26.4)
to ensure the regulation property of ∆ f , i.e.,
∆ E˙ = K ∆ f where K is the integral control gain. The different symbols used in (26.1) to (26.4) are x1 = ∆ f , x2 = ∆ Pg , x3 = ∆ Xg , x4 = ∆ E, ∆ Pd , ∆ Pc , Tg , Tt , Tp , Kp , R,
incremental frequency deviation in Hz incremental change in generator output power in p.u. MW incremental change in governor valve position in p.u. MW incremental change in phase angle of voltage in radians load disturbance in p.u. MW incremental change in speed changer position in p.u. MW governor time constant in seconds turbine time constant in seconds plant time constant in seconds plant gain speed regulation ratio in Hz p.u. MW−1
The dynamic model in state variable form can be written as x˙ = Ax + bu + F ∆ Pd , where
⎡
−1 ⎢ Tp ⎢ ⎢ ⎢ ⎢ 0 A=⎢ ⎢ ⎢ −1 ⎢ ⎢ RT ⎣ g
K
and the input u = ∆ f dt.
Kp Tp −1 Tt 0 0
(26.5)
⎤ 0 1 Tt −1 Tg 0
0
⎥ ⎥ ⎥ ⎥ 0 ⎥ ⎥, ⎥ −1 ⎥ ⎥ Tg ⎥ ⎦ 0
⎡
0 ⎢ 0 ⎢ b=⎢ ⎢ 1 ⎣ Tg 0
⎤ ⎥ ⎥ ⎥, ⎥ ⎦
(26.6)
364
P.D. Shendge et al.
26.3 Model Following and UDE-Based Control Law Problem Statement Consider a LTI single input single output (SISO) system [14, 15] defined by x˙ = Ax + bu + ∆ Ax + ∆ bu + d(x,t),
(26.7)
where x is the state vector, u is the control input, A and b are the known constant matrices, ∆ A and ∆ b are uncertainties in the system, and d(x,t) is the unknown disturbance. Assumption 1 The uncertainties ∆ A and ∆ b and disturbance d(x,t) satisfy the matching conditions given by
∆ A = bD,
∆ b = bE,
d(x,t) = bv(x,t),
(26.8)
where D and E are unknown matrices of appropriate dimensions and v(x,t) is an unknown function. The system of equation (26.7) can be written as x˙ = Ax + bu + be(x,t),
(26.9)
where e(x,t) = Dx + Eu + v(x,t). The term e(x,t), although it contains uncertainty and disturbance, will be referred as lumped uncertainty. Let (26.10) x˙m = Am xm + bm um be a stable model. Assumption 2 The choice of model is such that A − Am = bL,
bm = bM,
(26.11)
where L and M are suitable known matrices. The objective is to design a control u so as to force the plant (26.9) to follow the model (26.10) in spite of the parameter variations. The equations (26.8) and (26.11) are well-known matching conditions required to guarantee invariance and are explicit statements of the structural constraints stated in [14].
26.4 Design of Control In this section a model following control is designed with help of the method suggested in [15, 17]. Define a sliding surface
σ = bT x + z,
(26.12)
where z˙ = −bT Am x − bT bm um
z(0) = −bT x(0).
(26.13)
Equation (26.13) for the auxiliary variable z defined here is different from that given in [17]. By virtue of the choice of the initial condition on z, σ = 0 at t = 0. If a control u can be designed ensuring sliding, then σ˙ = 0 implies x˙ = Am x + bm um
(26.14)
and hence fulfills the objective of the model following. Differentiating equation (26.12) and using (26.9) and (26.13) give
σ˙ = bT Ax + bT bu + bT be(x,t) − bT Am x − bT bm um = bT bLx − bT bMum + bT bu + bT be(x,t).
(26.15)
Let the required control be expressed as u = un + ueq ,
(26.16)
where ueq takes care of known terms and un caters to the uncertainty. Selecting ueq = −Lx + Mum − (bT b)−1 kσ ,
(26.17)
where k is a positive constant. From (26.15) and (26.17) we get
σ˙ = bT bun + bT be(x,t) − kσ .
(26.18)
Now we will design the component un . The lumped uncertainty e(x,t) can be estimated as given in [14]. Rewriting the above equation, e(x,t) = (bT b)−1 (σ˙ + kσ ) − un .
(26.19)
It can be seen that the lumped uncertainty e(x,t) in (26.19) cannot be computed directly. Let the estimate of the uncertainty be defined as

\hat{e}(x,t) = [(b^T b)^{-1}(\dot{\sigma} + k\sigma) - u_n]\,G_f(s),
(26.20)
where G_f(s) is a strictly proper low-pass filter with unity steady-state gain and enough bandwidth. With such a filter,

\hat{e}(x,t) \cong e(x,t).    (26.21)

The error in the estimation is

\tilde{e}(x,t) = e(x,t) - \hat{e}(x,t).    (26.22)
26.4.1 Uncertainty and Disturbance Estimation with First-Order Filter

Let G_f(s) be a proper first-order low-pass filter with unity gain, defined as

G_f(s) = \frac{1}{\tau s + 1},    (26.23)

where τ is a small positive constant. With the above G_f(s), and in view of (26.19), (26.20), and (26.22),

\tilde{e}(x,t) = [1 - G_f(s)][(b^T b)^{-1}(\dot{\sigma} + k\sigma) - u_n] = \tau\,\dot{e}(x,t)\,G_f(s).    (26.24)

The error in estimation varies with τ, enabling design of u_n as

u_n = -\hat{e}(x,t) = -[(b^T b)^{-1}(\dot{\sigma} + k\sigma) - u_n]\,G_f(s).    (26.25)

Solving for u_n gives

u_n = -\frac{(b^T b)^{-1}}{\tau}\left(\sigma + \frac{k\sigma}{s}\right).    (26.26)
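The discrete-time fragment below sketches how the control u = u_eq + u_n of (26.16), with u_eq from (26.17) and the first-order-filter u_n from (26.26), could be implemented; the gains, sampling bookkeeping, and placeholder values of L and M are assumptions.

```python
import numpy as np

# Sketch of u = u_eq + u_n with u_eq from (26.17) and u_n from (26.26).
k, tau = 5.0, 0.002          # assumed gain and filter time constant

def ude_control(x, um, sigma, sigma_int, b, L, M):
    btb = (b.T @ b).item()
    u_eq = -(L @ x) + M * um - k * sigma / btb            # eq. (26.17)
    u_n = -(sigma + k * sigma_int) / (btb * tau)          # eq. (26.26)
    return u_eq + u_n

# Example call with placeholder data; sigma = b^T x + z and its running
# integral sigma_int are propagated by the caller, with z from (26.13).
b = np.array([0.0, 0.0, 0.0, 1.0]).reshape(-1, 1)
L_row = np.array([1.0, 2.0, 3.0, 4.0])                    # placeholder for L in A - Am = bL
print(ude_control(np.zeros(4), 1.0, 0.1, 0.0, b, L_row, 24.0))
```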
26.4.2 Uncertainty and Disturbance Estimation with Second-Order Filter The accuracy of estimation can be improved as much as desired by an appropriate choice of filter G f (s). The second-order filter used here is with transfer function G f (s) =
1 . τ 2 s2 + 2τ s + 1
(26.27)
The lumped uncertainties and disturbances can be written as e(x,t) = e(x,t)G f (s) + e(x,t)(1 − G f (s)) = e(x,t)G f (s) + e(x,t)(2τ s + τ 2 s2 )G f (s) = e(x,t)(1 + 2τ s)G f (s) + τ 2 G f (s)e(x,t). ¨
(26.28)
Now the estimation is e(x,t) ˆ = e(x,t)(1 + 2τ s)G f (s) = (1 + 2τ s)G f (s)((bT b)−1 (σ˙ + kσ ) − un ) = (1 + 2τ s)G f (s)((bT b)−1 (σ˙ + kσ ) − un ).
(26.29)
From (26.28) and (26.29),
¨ e = τ 2 G f (s)e(x,t),
(26.30)
which proves error of the estimation is proportional to τ 2 . The control un is given by (26.31) un = −(1 + 2τ s)G f (s)[(bT b)−1 (σ˙ + kσ ) − un ]; after simplifying we get un [1 − (1 + 2τ s)G f (s)] = −(1 + 2τ s)G f (s)[(BT B)−1 (σ˙ + kσ )].
(26.32)
Putting the value of G f (s) in preceding equation from (26.27) and simplifying, we get control un as
2 (2τ k + 1) k + un = (bT b)−1 + σ. (26.33) τ τ 2s τ 2 s2
26.4.3 Uncertainty and Disturbance Estimation with nth-Order Filter In the previous section the accuracy of estimation improvement by an appropriate choice of filter G f (s) for first- and second-order filter was discussed. Now this discussion is extended further for nth-order filter. Suppose the nth-order low-pass filter is defined by 1 . (26.34) G f (s) = n n τ s + τ n−1 sn−1 + · · · + τ s + 1 The lumped uncertainties and disturbances for nth-order filter can be written as e(x,t) = e(x,t)G f (s) + e(x,t)(1 − G f (s)) = e(x,t)G f (s) + e(x,t)(τ n sn + τ n−1 sn−1 + · · · + τ s + 1)G f (s) = e(x,t)G f (s) + e(x,t)(τ n−1 sn−1 + · · · + τ s + 1)G f (s) + τ n en (x,t)G f (s) = G f (s)e(x,t)(τ n−1 sn−1 + · · · + τ s + 1) + τ n en (x,t)G f (s).
(26.35)
Now defining estimation as e(x,t) ˆ = G f (s)e(x,t)(τ n−1 sn−1 + · · · + τ s + 1)
= (τ n−1 sn−1 + · · · + τ s + 1)G f (s) (bT b)−1 [(σ˙ + kσ ) − un )]
= (τ n−1 sn−1 + · · · + τ s + 1)G f (s) (bT b)−1 [((s + k)σ − un )] . (26.36) ˆ is implementable since (τ n−1 sn−1 + · · · + τ s + The control un based on the e(x,t) 1)(s + k)G f (s) is proper for the choice of G f (s) in (26.34). With (26.35), (26.36), and (26.22) we have (26.37) e = τ n G f (s)en (x,t), which proves the error of the estimation is O(τ n ).
26.5 Model Following and UDE-Based LFC

The plant considered is as in equation (26.6), with the parameter values [11, 19]: Tp = 20 s, Tt = 0.3 s, Tg = 0.08 s, Kp = 120 Hz p.u. MW−1, K = 0.6 p.u. rad−1, R = 2.4 Hz p.u. MW−1. The state-space model is given by

A = [ -0.05    6        0       0
       0      -3.333    3.333   0
      -5.208   0       -12.5   -12.5
       0.6     0        0       0 ],      b = [ 0;  0;  12.5;  0 ].

The disturbance is F∆Pd = 10 sin(10t), and the reference input um is a square wave of unity amplitude. In order to satisfy the model-following conditions, the system is converted into phase-variable form using the transformation Z = Tx. The plant in equation (26.6) then becomes

\dot{Z} = TAT^{-1}Z + Tbu + TF\Delta P_d,    (26.38)

where

TAT^{-1} = [ 0          1          0         0
             0          0          1         0
             0          0          0         1
          -149.985  -106.2327  -42.4545  -15.833 ],      Tb = [ 0;  0;  0;  1 ].

The model selected was a critically damped model such that

\dot{x}_m = A_m x_m + b_m u_m,    (26.39)

where

A_m = [ 0    1    0    0
        0    0    1    0
        0    0    0    1
      -24  -50  -35  -10 ],      b_m = [ 0;  0;  0;  24 ].
Integrator1
Fig. 26.1 Simulation model
The initial conditions for the plant and model states are given by x(0) = [1 0 0 0]^T and x_m(0) = [0 1 1 1]^T. The uncertainties in A and b are

\Delta A = [ 0  0  0  0;   0  0  0  0;   0  0  0  0;   -142.985  -104.2327  -41.4545  -15.0 ],      \Delta b = [ 0;  0;  0;  -0.4 ].

Fig. 26.2 Load frequency response by using UDE with first-order filter for 98% parameter uncertainty: (a) plant and model state x1, (b) plant and model state x2, (c) plant and model state x3, (d) plant and model state x4, (e) control, (f) error
Fig. 26.3 Load frequency response by using UDE with second-order filter for 98% parameter uncertainty: (a) plant and model state x1, (b) plant and model state x2, (c) plant and model state x3, (d) plant and model state x4, (e) control, (f) error
26.6 Results

The actual LFC Simulink model is shown in Fig. 26.1. The simulation results are shown in Figs. 26.2 to 26.8. The system responses obtained by using the UDE with first- and second-order filters are shown in Figs. 26.2 and 26.3. Figure 26.2 (a) to (d) shows the plant
Fig. 26.4 Error when τ = 1 ms for (a) first-order filter and (b) second-order filter
Fig. 26.5 Error when τ = 2 ms for (a) first-order filter and (b) second-order filter
states, while Fig. 26.2(e) shows the required control input and Fig. 26.2(f) shows the error plot, when the first-order filter is used. Similar results using the second-order filter are shown in Fig. 26.3 (a) to (f). Both results are for 98% parameter uncertainty in A and 40% in b. These figures reveal the ability of the controller to drive the system to follow the reference model. It is observed that the system remains invariant to the
Fig. 26.6 Error when τ = 4 ms for (a) first-order filter and (b) second-order filter
Fig. 26.7 Error when τ = 8 ms for (a) first-order filter and (b) second-order filter
imposed parameter variation. This reveals the controller's ability to force the plant to follow the model in spite of parameter variations. Figures 26.4 to 26.8 show a comparison of the error plots obtained with first- and second-order filter estimation for different values of τ. Table 26.1 shows the estimation error for the first- and second-order filters for various values of τ. It is observed that the error in estimation
Fig. 26.8 Error when τ = 16 ms for (a) first-order filter and (b) second-order filter
Table 26.1 First- and second-order filter error for different τ

τ        First-order filter    Second-order filter
0.001    0.017755              0.0001812
0.002    0.03518               0.0007104
0.004    0.0686                0.002817
0.008    0.12676               0.012017
0.016    0.206                 0.04908
is of order O(τ) for the first-order filter and O(τ²) for the second-order filter, and that it grows with increasing values of τ.
26.7 Conclusion

A systematic procedure for the design of a robust model-following sliding mode LFC for a single-area power system based on the UDE has been presented. The control strategy uses Ackermann's formula for elimination of the reaching phase, and the UDE for the estimation of uncertainties and disturbances, so that knowledge of the uncertainty bounds is not required. The simulation results of the control strategy for LF control are presented with the A and b parameters changed by 98% and 40%, respectively, from their nominal values. These results show that the system performance is robust to parameter variations and disturbances. With the second-order filter, the lumped uncertainty can be estimated more accurately than with the first-order filter: the error in the estimation improves from O(τ) to O(τ²).
References 1. P.S. Kundur (1994) Power system stability and control. McGraw-Hill. 2. H. Saadat (1999) Power system analysis. McGraw-Hill. 3. T. Sasaki, T. Kadoya, and K. Enomoto (2003) Study on load frequency control using redox flow batteries. IEEE Transactions on Power Systems, 3: 1714–1718. 4. Y. Moon, H. Ryu, J. Lee, and S. Kim (2001) Power system load frequency control using noise-tolerable PID feedback. IEEE International Symposium on Industrial Electronics, 3: 1714–1718. 5. M.Y.D. Azzam (2002) An optimal approach to robust controller design for load-frequency control. Transmission and Distribution Conference and Exhibition, 1: 180–183. 6. J. Talaq and F. Al-Basri (1999) Adaptive fuzzy gain scheduling for load frequency control. IEEE Transactions on Power Systems, 14: 145–150. 7. Y. Wang, R. Zhou, and C. Wen (1994) New robust adaptive load-frequency control with system parametric uncertainties. IEE Proceedings C, 141: 184–190. 8. T. Hiyama, S. Koga, and Y. Yoshimuta (2000) Fuzzy logic based multi-functional load frequency control. Power Engineering Society Winter Meeting, IEEE, 2: 921–926. 9. Q.P. Ha (1998) A fuzzy sliding mode controller for power system load-frequency control. Second International Conference on Knowledge-Based Intelligent Electronic Systems, 1: 149–154.
10. M. Harandi and S. Bathee (1997) Decentralized variable-structure and fuzzy logic load frequency control of multi-area power systems. Fuzzy Logic Symposium, Malaysia. 11. N.N. Bengiamin, and W.C. Chan (1982) Variable structure control of electric power generation. IEEE Transactions, PAS-101: 376–380. 12. C.T. Pan, C.M. Liaw (1989) An adaptive controller for power system load frequency control. IEEE Transactions, PWRS-4: 122–128. 13. B.M. Patre, and B. Bandyopadhyay (2002) Robust variable structure model following controller for load frequency controller. Systems Science, 28(4): 43–59. 14. Q.C. Zhong and D. Rees (2004) Control of uncertain LTI systems based on an uncertainty and disturbance estimator’. Journal of Dynamic Systems, Measurement and Control, 126: 905– 910. 15. S.E. Talole and S.B. Phadke (2006) Model following sliding mode control based on uncertainty and disturbance estimator. Journal of Dynamic Systems, Measurement and Control (Under review). 16. P.D. Shendge and B.M. Patre (2006) Robust model following load frequency sliding mode controller based on uncertainty and disturbance estimator. International Conference on Control, Scotland. 17. J. Ackermann and V. Utkin (1998) Sliding mode control design based on Ackermann’s formula. IEEE Transactions on Automatic Control, 43(2): 234–237. 18. R.N. Dhar (1982) Computer aided power system operation and analysis. McGraw-Hill, New Delhi. 19. M.V. Hariharan, A.Y. Sivramkrishnan, and M.C. Srisailam (1984) Design of variable-structure load-frequency controller using pole assignment techniques. Int. J. Control, 40: 487–498.
Chapter 27
Robust Intelligent Motion Control for Linear Piezoelectric Ceramic Motor System Using Self-constructing Neural Network Chun-Fei Hsu, Bore-Kuen Lee, and Tsu-Tian Lee
Abstract The linear piezoelectric ceramic motor (LPCM) has much merit, such as high precision, fast control dynamics, large driving force, smaller dimension, high holding force, silence, and a minimum step size that is less than that of the class of electromagnetic motors. In this chapter, a robust intelligent motion control (RIMC) system is developed for the LPCM. The RIMC system comprises a neural controller and a robust controller. The neural controller utilizes a self-constructing neural network (SCNN) to mimic an ideal feedback controller, and the robust controller is designed to achieve L2 tracking performance with desired attenuation level. If the hidden neuron of the SCNN is insignificant, it should be removed to reduce the computation load; otherwise, it should be retained. Finally, the experimental results show that a perfect tracking response of LPCM can be achieved by using the proposed RIMC method. Keywords: Adaptive control · neural network control · self-structuring · linear piezoelectric ceramic motor
27.1 Introduction Neural networks (NNs) are a promising new generation of information processing systems that demonstrate the ability to learn from training data. Recently, the NNbased adaptive control technique has represented an alternative design method for
Chun-Fei Hsu and Bore-Kuen Lee Department of Electrical Engineering, Chung Hua University, Hsinchu 300, Taiwan, Republic of China Tsu-Tian Lee Department of Electrical Engineering, National Taipei University of Technology, Taipei 106, Taiwan, Republic of China
various control systems [1–5]. One must distinguish between two classes of control applications: open-loop identification and closed-loop feedback control. Identification applications are similar to signal processing or classification, so that the same open-loop algorithms may often be used. On the other hand, in closed-loop feedback applications the NN is inside the control loop, so that special steps must be taken to ensure that the tracking error and the NN weights remain bounded in the closed-loop system. Based on the approximation ability property of NNs, the NN-based adaptive controllers have been developed to compensate for the effects of nonlinearities and system uncertainties, so that the stability, convergence, and robustness of the control system can be improved. The LPCM has much merit, such as high precision, fast control dynamics, large driving force, smaller dimension, high holding force, silence, and a minimum step size that is less than that of the class of electromagnetic motors, so that it can be used in many different applications. However, the driving principle of the LPCM is based on the ultrasonic vibration force of piezoelectric ceramic elements and mechanical frictional force. Therefore, its mathematical model is complex, and the motor parameters are time varying because of increasing temperature and changes in motor drive operating conditions [6]. Several NN-based intelligent control approaches have been addressed for the LPCM control without the control system dynamics [7–11]. Although the control performances of the LPCM are acceptable in [7–11], the learning algorithm only includes the parameter-learning phase, and they have not considered the structure-learning phase. If the number of hidden neurons is chosen too large, the computation loading is so heavy that they are not suitable for practical applications. If the number of hidden neurons is chosen too small, the learning performance may not be good enough to achieve desired control performance. To tackle this problem, several self-constructing NNs, consisting of structure and parameter learning phases, have been proposed [12–14]. These two-phase learning algorithms not only decide the structure but also adjust the parameters of the NN. Recently, some self-constructing NNs have been applied to solve the control problems [15–17]. The motivation of this chapter is to design a robust intelligent motion control (RIMC) system for the LPCM system without any knowledge of the control system dynamics. The developed RIMC system is comprised of a neural controller and a robust controller. The neural controller utilizes an SCNN to mimic an ideal feedback controller, and the robust controller is designed to achieve L2 tracking performance with desired attenuation level. The learning phase includes the structurelearning and the parameter-learning phases. In the structure-learning phase, the SCNN can on-line create the new hidden neurons if the approximation performance is not good and can on-line prune the insignificant hidden neurons if the neuron is inappropriate. In the parameter-learning phase, the controller parameters are on-line tuned based on the Lyapunov stability theorem; thus, the stability of the closed-loop system can be guaranteed. Finally, the computer control experimental system for the LPCM is setup. The experimental results show that the RIMC
Fig. 27.1 (a) Structure of LPCM and (b) friction drive system
system can achieve accurate tracking responses after the SCNN is sufficiently trained.
27.2 Problem Formulation The driving principle of the LPCM is based on the ultrasonic vibration force of the piezoelectric ceramic element and mechanical frictional force. Figure 27.1(a) shows the principal structure of the LPCM [6]. Four electrodes (A, A′, B, and B′) are bonded to the front face to form a checkerboard pattern of rectangles, each substantially covering one-quarter of this face. In order to transmit the motion of the spacer to the moving table, a preload spring is designed to supply pressure between the spacer and the moving table. For the friction drive system shown in Fig. 27.1(b), the main forces are a normal force $F_N$, a driving force $F_D$, and a lumped friction force $F_F$. By Newton's law, the LPCM can be described by the second-order nonlinear dynamic equation [9]
$(M + m)\ddot{x} = F(x) + G(x)u$,
(27.1)
where M is the mass of the moving table; m is the mass of the payload; $x = [x\ \dot{x}]^T$ represents the position and velocity of the moving table; F(x) is the nonlinear dynamic function including friction, ripple force, and external disturbance; G(x) is the gain of the LPCM resonant inverter; and u is the input force to the LPCM. Rewriting (27.1), the dynamic equation of the LPCM can be obtained as
$\ddot{x} = \frac{F(x)}{M+m} + \frac{G(x)}{M+m}u = f(x) + g(x)u$,  (27.2)
where $f(x) = \frac{F(x)}{M+m}$ and $g(x) = \frac{G(x)}{M+m}$. These nonlinear dynamic functions cannot be obtained exactly. Assuming that all the parameters of the LPCM system are well known, the nominal model can be represented as
$\ddot{x} = f_n(x) + g_n(x)u$,  (27.3)
where fn (x) is the nominal value of f (x) and gn (x) > 0 is a nominal constant of g(x). If the uncertainties occur, then the system model (27.2) can be described as x¨ = [ fn (x) + ∆ f (x)] + [gn (x) + ∆g(x)]u = fn (x) + gn (x)u + d(x),
(27.4)
where ∆ f (x) and ∆g(x) denote the uncertainties, and d(x) is referred to as the lumped uncertainty, defined as d(x) ≡ ∆ f (x) + ∆g(x)u.
27.3 Robust Intelligent Motion Controller Design The tracking control problem of the LPCM system is to find a control law so that the position of the moving table x can track a reference command $x_c$. The tracking error is defined as
$e = x_c - x$,  (27.5)
and a sliding surface is defined as
$s = \dot{e} + k_1 e + k_2\int_0^t e(\tau)\,d\tau$,  (27.6)
where $k_1$ and $k_2$ are nonzero positive constants. If the system dynamics of the LPCM are known exactly, an ideal feedback controller can be designed as [18]
$u^* = \frac{1}{g_n}\left(-f_n + \ddot{x}_c + k_1\dot{e} + k_2 e - d\right)$.  (27.7)
Fig. 27.2 The structure of self-constructing NN
Substituting (27.7) into (27.4) yields
$\ddot{e} + k_1\dot{e} + k_2 e = 0$.  (27.8)
If $k_1$ and $k_2$ are chosen to correspond to the coefficients of a Hurwitz polynomial, that is, a polynomial whose roots lie strictly in the open left half of the complex plane, then $\lim_{t\to\infty} e = 0$. However, since the system parameters may be unknown or perturbed, the ideal feedback controller $u^*$ in (27.7) cannot be implemented.
27.3.1 Description of SCNN A single-hidden-layer SCNN with m hidden neurons is shown in Fig. 27.2. The output of this SCNN takes the form [2]
$y = \sum_{i=1}^{m} w_i\,\sigma(v_i s)$,  (27.9)
where s and y are the input and output of the SCNN, respectively, σ represents the hidden-layer activation function, $v_i$ is the interconnection weight between the input and hidden layers, and $w_i$ is the interconnection weight between the hidden and output layers. These weights will be adjusted on-line in the following derivation. The activation function in this chapter is taken to be the sigmoid function
$\sigma(z) = \frac{1}{1 + e^{-z}}$.  (27.10)
By collecting all the weights of the SCNN, equation (27.9) can be expressed in a vector form as y = wT σ (vs),
(27.11)
where $v = [v_1, v_2, \ldots, v_m] \in R^m$ and $w = [w_1, w_2, \ldots, w_m] \in R^m$. A main property of NNs with respect to feedback control is the universal function approximation property. In general, the approximation error decreases as the net size m increases. The proposed growing algorithm splits up the rth hidden neuron at the Nth sampling time if the following condition is satisfied:
$\frac{|\dot{v}_r| + |\dot{w}_r|}{\sum_{i=1}^{m}\left(|\dot{v}_i| + |\dot{w}_i|\right)} \ge G_{th}, \quad r = 1, 2, \ldots, m,$  (27.12)
where $G_{th}$ denotes the disjunction threshold value. When the approximated nonlinear functions are too complex, the disjunction threshold value should be chosen small so that neurons can be created easily. The tuning laws $\dot{v}_i$ and $\dot{w}_i$ will be derived in the next subsection. If condition (27.12) is satisfied, a new neuron is created to spread the relatively large variation of the weights. The rth neuron satisfying (27.12) is divided into two neurons, and the newly created neuron is denoted by r′. The new weights connected to the r′th neuron are decided as follows:
$v_{r'}(N + 1) = v_r(N)$,  (27.13)
$w_{r'}(N + 1) = \alpha\, w_r(N)$,  (27.14)
where α is a positive constant. The weights connected to the rth neuron are determined as follows: vr (N + 1) = vr (N),
(27.15)
wr (N + 1) = (1 − α )wr (N).
(27.16)
This method is based on the fact that the weights connected to the newly created neuron share the large variation of the weights. This chapter also proposes a pruning algorithm to determine whether to eliminate existing hidden neurons that have become inappropriate. A significance index for the importance of the tth neuron is determined as
$P_t(N+1) = \begin{cases} P_t(N)\exp(-\tau), & \text{if } \sigma(v_t s) < \rho\\ P_t(N), & \text{if } \sigma(v_t s) \ge \rho\end{cases}, \quad t = 1, 2, \ldots, m,$  (27.17)
where $P_t$ is the significance index of the tth neuron (its initial value is 1), ρ is the elimination threshold value, and τ is the elimination speed constant. The proposed pruning algorithm is designed such that the tth neuron in the hidden layer is pruned at the Nth sampling time if the following condition is satisfied:
$P_t \le P_{th}, \quad t = 1, 2, \ldots, m,$
(27.18)
where $P_{th}$ is a significance threshold value. The tth neuron satisfying (27.18) is removed to reduce the computational load.
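As an illustration of the structure-learning rules above, the following sketch shows how the SCNN output (27.9), the growing condition (27.12)–(27.16), and the pruning rules (27.17)–(27.18) could be realized. It is not the authors' implementation; the threshold values (G_th, ρ, τ, P_th) and the split factor α used below are placeholders chosen only for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SCNN:
    """Sketch of a self-constructing NN with scalar input s and scalar output y."""

    def __init__(self, v, w, alpha=0.5, G_th=0.5, rho=0.1, tau=0.05, P_th=0.3):
        self.v = np.array(v, dtype=float)    # input-to-hidden weights v_i
        self.w = np.array(w, dtype=float)    # hidden-to-output weights w_i
        self.P = np.ones_like(self.v)        # significance indices P_t, initialised to 1
        self.alpha, self.G_th = alpha, G_th  # placeholder thresholds (assumptions)
        self.rho, self.tau, self.P_th = rho, tau, P_th

    def output(self, s):
        # y = sum_i w_i * sigma(v_i * s)                      (27.9)
        return float(self.w @ sigmoid(self.v * s))

    def grow(self, v_dot, w_dot):
        # split neuron r when its weight variation dominates  (27.12)
        var = np.abs(v_dot) + np.abs(w_dot)
        total = var.sum()
        r = int(np.argmax(var))
        if total > 0.0 and var[r] / total >= self.G_th:
            self.v = np.append(self.v, self.v[r])             # (27.13), (27.15)
            self.w = np.append(self.w, self.alpha * self.w[r])  # (27.14)
            self.w[r] *= (1.0 - self.alpha)                   # (27.16)
            self.P = np.append(self.P, 1.0)

    def prune(self, s):
        # decay the significance of weakly activated neurons  (27.17)
        act = sigmoid(self.v * s)
        self.P = np.where(act < self.rho, self.P * np.exp(-self.tau), self.P)
        keep = self.P > self.P_th                             # (27.18)
        self.v, self.w, self.P = self.v[keep], self.w[keep], self.P[keep]
```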
27.3.2 Approximation of SCNN By the universal approximation theorem, an optimal NN approximation can be designed as [2] u∗ = u∗nn + ∆ = w∗T σ (v∗ s) + ∆ = w∗T σ ∗ + ∆,
(27.19)
where $v^*$ and $w^*$ are the optimal weight vectors of the SCNN and ∆ is the approximation error. Let the number of optimal neurons be $m^*$, and let the neurons be divided into two parts. The first part contains m neurons, which form the activated part; the second part contains $m^* - m$ neurons, which do not exist yet. Thus, the optimal weights $v^*$ and $w^*$ are partitioned as
$v^* = [v_a^*\;\; v_i^*]$ and $w^* = \begin{bmatrix} w_a^* \\ w_i^* \end{bmatrix}$,  (27.20)
where $v_a^* \in R^m$ and $w_a^* \in R^m$ are the activated parts and $v_i^* \in R^{(m^*-m)}$ and $w_i^* \in R^{(m^*-m)}$ are the inactivated parts, respectively. An estimate of $u_{nn}$ is given by
$u_{nn} = \hat{w}_a^T\sigma(\hat{v}_a s) = \hat{w}_a^T\hat{\sigma}_a$,  (27.21)
where $\hat{v}_a$ and $\hat{w}_a$ are the estimated values of the optimal weights $v_a^*$ and $w_a^*$, respectively. Define the estimation error $\tilde{u}$ as
$\tilde{u} = u^* - u_{nn} = w_a^{*T}\sigma_a^* + w_i^{*T}\sigma_i^* - \hat{w}_a^T\hat{\sigma}_a + \Delta = \tilde{w}_a^T\hat{\sigma}_a + \hat{w}_a^T\tilde{\sigma}_a + \tilde{w}_a^T\tilde{\sigma}_a + w_i^{*T}\sigma_i^* + \Delta$,  (27.23)
where $\sigma_a^* = \sigma(v_a^* s)$, $\sigma_i^* = \sigma(v_i^* s)$, $\tilde{\sigma}_a = \sigma_a^* - \hat{\sigma}_a$, $\tilde{w}_a = w_a^* - \hat{w}_a$, and $\tilde{v}_a = v_a^* - \hat{v}_a$. The Taylor series expansion of $\sigma_a^*$ with respect to $\hat{v}_a s$ can be derived as [5]
$\sigma_a^* = \hat{\sigma}_a + \sigma_a'\tilde{v}_a s + h$,  (27.24)
where $\sigma_a'$ is the Jacobian and h is a vector of higher-order terms. Therefore,
$\tilde{\sigma}_a = \sigma_a'\tilde{v}_a s + h$.  (27.25)
Fig. 27.3 RIMC for LPCM system
Substituting (27.24) into (27.23) yields
$\tilde{u} = \tilde{w}_a^T\hat{\sigma}_a + \hat{w}_a^T\sigma_a'\tilde{v}_a s + \varepsilon$,  (27.26)
where $\varepsilon = \tilde{w}_a^T\tilde{\sigma}_a + w_i^{*T}\sigma_i^* + \hat{w}_a^T h + \Delta$.
27.3.3 Design of RIMC The proposed RIMC system is developed for the LPCM as shown in Fig. 27.3, i.e., u = unn + urb ,
(27.27)
where the neural controller $u_{nn}$ uses an SCNN to mimic the ideal feedback controller in (27.7), and the robust controller $u_{rb}$ is designed to achieve L2 tracking performance with attenuation of disturbances, including approximation errors and external uncertainties. Substituting (27.27) into (27.4) yields
$\ddot{x} = f_n + g_n(u_{nn} + u_{rb}) + d$.  (27.28)
Using (27.7) and (27.27), the error dynamic equation is obtained:
$\dot{s} = g_n(u^* - u_{nn} - u_{rb})$.  (27.29)
By the approximation error equation (27.26), equation (27.29) can be rewritten as
$\dot{s} = g_n\left(\tilde{w}_a^T\hat{\sigma}_a + s\,\tilde{v}_a^T\sigma_a'^T\hat{w}_a + \varepsilon - u_{rb}\right)$.  (27.30)
In this derivation, $\tilde{v}_a^T\sigma_a'^T\hat{w}_a = [\hat{w}_a^T\sigma_a'\tilde{v}_a]^T$ is used, since it is a scalar. Therefore, the following theorem can be stated and proved. In the case where ε exists, consider the specified L2 tracking performance [19, 20]
$\int_0^T s^2(t)\,dt \le \frac{1}{g_n}\Big[s^2(0) + \frac{1}{\eta_v}\tilde{v}_a^T(0)\tilde{v}_a(0) + \frac{1}{\eta_w}\tilde{w}_a^T(0)\tilde{w}_a(0) + \delta^2\int_0^T\varepsilon^2(t)\,dt\Big]$,  (27.31)
where $\eta_v$ and $\eta_w$ are positive constants. If the system starts with initial conditions $s(0) = 0$, $\tilde{v}_a(0) = 0$, and $\tilde{w}_a(0) = 0$, the L2 tracking performance in (27.31) can be rewritten as
$\sup_{\varepsilon\in L_2[0,T]}\frac{\|s\|}{\|\varepsilon\|} \le \frac{\delta}{\sqrt{g_n}}$,  (27.32)
where $\|s\|^2 = \int_0^T s^2(t)\,dt$ and $\|\varepsilon\|^2 = \int_0^T\varepsilon^2(t)\,dt$.
Theorem 1. Consider the linear piezoelectric ceramic motor system expressed by (27.1), with the robust intelligent motion control system designed as in (27.27). The self-constructing NN splits a hidden neuron if condition (27.12) is satisfied and prunes a hidden neuron if condition (27.18) is satisfied. The adaptation laws of the activated parts $\hat{v}_a$ and $\hat{w}_a$ are designed as
$\dot{\hat{v}}_a = -\dot{\tilde{v}}_a = \eta_v s^2\sigma_a'^T\hat{w}_a$,  (27.33)
$\dot{\hat{w}}_a = -\dot{\tilde{w}}_a = \eta_w s\,\hat{\sigma}_a$,  (27.34)
where $\eta_v$ and $\eta_w$ are the positive learning rates, and the robust controller is designed as
$u_{rb} = \frac{g_n(\delta^2 + 1)s}{2\delta^2}$,  (27.35)
where δ is a prescribed attenuation constant; then the L2 tracking performance in (27.31) is achieved.
Proof. Define a Lyapunov function candidate in the following form:
$V(s,\tilde{v}_a,\tilde{w}_a) = \frac{1}{2}s^2 + \frac{g_n}{2\eta_v}\tilde{v}_a^T\tilde{v}_a + \frac{g_n}{2\eta_w}\tilde{w}_a^T\tilde{w}_a$.  (27.36)
Differentiating (27.36) with respect to time and using (27.30) and (27.33)–(27.34) gives
$\dot{V}(s,\tilde{v}_a,\tilde{w}_a) = s\dot{s} + \frac{g_n}{\eta_v}\tilde{v}_a^T\dot{\tilde{v}}_a + \frac{g_n}{\eta_w}\tilde{w}_a^T\dot{\tilde{w}}_a$
$= s g_n\left(\tilde{w}_a^T\hat{\sigma}_a + s\tilde{v}_a^T\sigma_a'^T\hat{w}_a + \varepsilon - u_{rb}\right) + \frac{g_n}{\eta_v}\tilde{v}_a^T\dot{\tilde{v}}_a + \frac{g_n}{\eta_w}\tilde{w}_a^T\dot{\tilde{w}}_a$
$= g_n\tilde{v}_a^T\Big(s^2\sigma_a'^T\hat{w}_a + \frac{\dot{\tilde{v}}_a}{\eta_v}\Big) + g_n\tilde{w}_a^T\Big(s\hat{\sigma}_a + \frac{\dot{\tilde{w}}_a}{\eta_w}\Big) + s g_n(\varepsilon - u_{rb})$
$= s g_n\varepsilon - \frac{s^2 g_n^2(\delta^2+1)}{2\delta^2}$
$= -\frac{1}{2}s^2 g_n^2 - \frac{1}{2}\Big(\frac{s g_n}{\delta} - \delta\varepsilon\Big)^2 + \frac{1}{2}\delta^2\varepsilon^2$
$\le -\frac{1}{2}s^2 g_n^2 + \frac{1}{2}\delta^2\varepsilon^2$.  (27.37)
Assume $\varepsilon\in L_2[0,T]$, $\forall T\in[0,\infty)$. Integrating the above equation from t = 0 to t = T yields
$V(T) - V(0) \le -\frac{1}{2}g_n^2\int_0^T s^2(t)\,dt + \frac{1}{2}\delta^2\int_0^T\varepsilon^2(t)\,dt$.  (27.38)
Since $V(T) \ge 0$, the above inequality implies the following inequality:
$\frac{1}{2}g_n^2\int_0^T s^2(t)\,dt \le V(0) + \frac{1}{2}\delta^2\int_0^T\varepsilon^2(t)\,dt$.  (27.39)
Using (27.36), the above inequality is equivalent to (27.31). Thus, the proof is completed.
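To make the control law concrete, the following sketch shows one sampling step of $u = u_{nn} + u_{rb}$ with the adaptation laws (27.33)–(27.34) discretised by a simple Euler step and the robust term (27.35). This is only an illustrative reading of the equations; the gains, sampling time, and all other numerical values are assumed placeholders, not the values used in the experiments.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rimc_step(e, e_dot, e_int, v_hat, w_hat,
              k1=2.0, k2=1.0, eta_v=0.1, eta_w=0.5,
              g_n=1.0, delta=0.2, dt=1e-3):
    """One sampling step of u = u_nn + u_rb with Euler-discretised adaptation laws.
    All gain values in the signature are illustrative placeholders."""
    s = e_dot + k1 * e + k2 * e_int                 # sliding surface (27.6)
    z = v_hat * s
    sig = sigmoid(z)                                # hidden-layer outputs
    sig_prime = sig * (1.0 - sig)                   # derivative of the sigmoid
    u_nn = float(w_hat @ sig)                       # neural controller (27.21)
    u_rb = g_n * (delta**2 + 1.0) * s / (2.0 * delta**2)   # robust controller (27.35)
    v_hat = v_hat + dt * eta_v * s**2 * sig_prime * w_hat  # adaptation law (27.33)
    w_hat = w_hat + dt * eta_w * s * sig                   # adaptation law (27.34)
    return u_nn + u_rb, v_hat, w_hat
```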
27.4 Experimental Results The computer-controlled experimental system for the LPCM is shown in Fig. 27.4. In the experimental test condition, the parameter variation case is the addition of one iron disk weighing 2.3 kg to the mass of the moving table. The control objective is to control the moving table to follow a 3 cm periodic step command. Moreover, a second-order transfer function
$\frac{64}{s^2 + 16s + 64}$  (27.40)
is chosen as the reference model for the step command. To illustrate the effectiveness of the proposed design method, a comparison between a fix-structuring NN control [2] and the proposed RIMC is made. The
Fig. 27.4 Computer-controlled LPCM system
experimental results of the fix-structuring NN control with five hidden neurons for the nominal and parameter variation cases are shown in Fig. 27.5. They show that the tracking response degrades when parameter variations occur. The fix-structuring NN control with 21 hidden neurons is then applied to control the LPCM; the experimental results for the nominal and parameter variation cases are shown in Fig. 27.6. In this case favorable tracking performance is achieved; however, the computational load is heavy. Finally, the proposed RIMC system is applied to control the LPCM. The experimental results of the proposed RIMC are shown in Fig. 27.7, which shows that favorable tracking performance is achieved after the structure- and parameter-learning phases. The RIMC can on-line create new hidden neurons if the approximation performance of the NN is not good enough and can on-line prune insignificant hidden neurons to reduce the computational load.
Fig. 27.5 Experimental results of the fix-structuring NN control with five hidden neurons
Fig. 27.6 Experimental results of the fix-structuring NN control with 21 hidden neurons
Fig. 27.7 Experimental results of the proposed RIMC
27.5 Conclusions This chapter developed a RIMC system, which comprises a neural controller and a robust controller, for the linear piezoelectric ceramic motor. The neural controller utilizes an SCNN to mimic an ideal computation controller, and the robust controller is designed to achieve L2 tracking performance with a desired attenuation level. The SCNN is used to estimate the ideal computation controller on-line, with the structure- and parameter-learning phases of the NN running simultaneously. The structure-learning phase possesses the ability of on-line generation and pruning of hidden neurons to achieve
Fig. 27.7 (continued)
optimal neural structure, and the parameter-learning phase adjusts the interconnection weights of NN to achieve favorable approximation performance. Finally, the RIMC method is applied to control an LPCM. The experimental results show that a perfect tracking response of LPCM can be achieved by using the proposed RIMC method. In summary, the major contributions of this chapter can be summarized as follows: 1. The SCNN has been created with growing and pruning algorithms of the hidden neurons to achieve favorable learning performance. 2. The application of the RIMC to control the LPCM was successful.
References 1. O. Omidvar and D.L. Elliott (1997) Neural systems for control. Academic Press, New York. 2. R.R. Selmic and F.L. Lewis (2002) Neural-network approximation of piecewise continuous functions: application to friction compensation. IEEE Transactions on Neural Networks, 13(3): 745–751. 3. C.M. Lin and C.F. Hsu (2004) Supervisory recurrent fuzzy neural network control of wing rock for slender delta wings. IEEE Transactions on Fuzzy Systems, 12(5): 733–742. 4. C.M. Lin and C.F. Hsu (2003) Neural network hybrid control for antilock braking systems. Transactions on Neural Network, 14(2): 351–359,. 5. Z.P. Wang, S.S. Ge, and T.H. Lee (2004) Robust adaptive neural network control of uncertain nonholonomic systems with strong nonlinear drifts. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 34(5): 2048–2059. 6. T.Sashida and T. Kenjo (1993) An introduction to ultrasonic motors. Clarendon Press, Oxford. 7. F.J. Lin, R.Y. Duan, R.J. Wai, and C.M. Hong (1999) LLCC resonant inverter for piezo-electric ultrasonic motor drive. IEE Proceedings—Electric Power Applications, 146(5): 479–487. 8. F.J. Lin, R.J. Wai, K.K. Shyu, and T.M. Liu (2001) Recurrent fuzzy neural network control for piezoelectric ceramic linear ultrasonic motor drive. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 48(4): 900–913. 9. Y.F. Peng, R.J. Wai, and C.M. Lin (2004) Implementation of LLCC-resonant driving circuit and adaptive CMAC neural network control for linear piezoelectric ceramic motor. IEEE Transactions on Industrial Electronics. 51(1): 35–48. 10. R.J. Wai, F.J. Lin, R.Y. Duan, K.Y. Hsieh, and J.D. Lee (2002) Robust fuzzy neural network control for linear ceramic motor via backstepping design technique. IEEE Transactions on Fuzzy Systems, 10(1): 102–112. 11. R.J. Wai, C.M. Lin, and Y.F. Peng (2004) Adaptive hybrid control for linear piezoelectric ceramic motor drive using diagonal recurrent CMAC network. IEEE Transactions on Neural Networks, 15(6): 1491–1506. 12. S. Wu and M.J. Er (2000) Dynamic fuzzy neural networks – a novel approach to function approximation. IEEE Transactions on Systems, Man, Cybernetics, Part B, 30(2): 358–364. 13. S.J. Lee and C.S. Ouyang (2003) A neuro-fuzzy system modeling with self-constructing rule generation and hybrid SVD-based learning. IEEE Transactions on Fuzzy Systems, 11(3): 341– 353. 14. C.T. Lin, W.C. Cheng and S.F. Liang (2005) An on-line ICA-mixture-model-based selfconstructing fuzzy neural network. IEEE Transactions on Circuits Systems I, 52(1): 207–221. 15. F.J. Lin, C.H. Lin and P.H. Shen (2001) Self-constructing fuzzy neural network speed controller for permanent-magnet synchronous motor drive. IEEE Transactions on Fuzzy Systems, 9(5): 751–759. 16. S. Wu, M.J. Er, and Y. Gao (2001) A fast approach for automatic generation of fuzzy rules by generalized dynamic fuzzy neural networks. IEEE Transactions on Fuzzy Systems, 9(4): 578–594. 17. J.H. Park, S.H. Huh, S.H. Kim, S.J. Seo and G.T. Park (2005) Direct adaptive controller for nonaffine nonlinear systems using self-structuring neural networks. IEEE Transactions on Neural Networks, 16(2): 414–422. 18. J.J.E. Slotine and W.P. Li (1998) Applied Nonlinear Control. Prentice-Hall, Englewood Cliffs, NJ. 19. W.Y. Wang, M.L. Chan, C.C.J. Hsu and T.T. Lee (2002) H ∞ tracking-based sliding mode controller for uncertain nonlinear systems via an adaptive fuzzy-neural approach. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 32(4): 483–492. 20. C.F. Hsu, C.M. Lin, and T.T. 
Lee (2006) Wavelet adaptive backstepping control for a class of nonlinear systems. IEEE Transactions on Neural Networks, 17(5): 1175–1183.
Chapter 28
Development of Hybrid Magnetic Bearings System for Axial-Flow Blood Pump Lim Tau Meng and Cheng Shanbao
28.1 Introduction Studies have shown that failure of the cardiovascular system is one of the most common health disorders causing premature deaths in our society. Heart transplantation is the last option for those patients who suffer from heart diseases. Artificial heart pumps are good candidates for implantable ventricular assist systems that can solve the problem of deficiency in the number of available organ donors. One of the key technologies of our rotary heart pump is the design of a rotor that is magnetically suspended by two hybrid magnetic bearings (HMBs) and driven by a Lorentz-type motor. The HMBs will help to eliminate mechanical contact, which leads to material wear, red blood cell damage, heat generation, platelet aggregation, thrombus growth, and even pump failure. Therefore, artificial heart pumps supported by magnetic bearings have been widely employed [1–9]. Yamane et al. [5] and Yuhki et al. [6] developed a centrifugal blood pump with pivot bearings to suspend an impeller. Akamatsu et al. [7] and Nojiri et al. [8] reported a blood pump whose impeller was levitated by an axial magnetic bearing; however, their magnetically coupled motor-shaft to drive the impeller was supported by ball bearings. There are mainly two types of blood pumps, namely, centrifugal and axial-flow types. Due to their better anatomical fit, axial-flow blood pumps are preferred [4]. In this chapter, a fully magnetically levitated axial-flow blood pump is developed using two 5-degree-of-freedom (DOF) controlled HMBs. Proportional-integral-derivative controllers (PID controllers) are used to control the proposed HMBs. The proposed Lorentz-type motor drives the rotor in sensorless mode using STMicroelectronics’s ST7FMC microcontroller. In this system, the rotor can rotate with speeds of up to 14,000 revolutions per minute (rpm) in stable suspension. This chapter describes the development of the HMBs system of the axial-flow blood pump, including its design, principles, control, and performance. Lim Tau Meng and Cheng Shanbao Nanyang Technological University, Singapore.
Fig. 28.1 Cross-sectional view of axial-flow blood pump
28.2 Design of Axial-flow Blood Pump The cross-sectional view of the axial-flow blood pump is shown in Fig. 28.1. The impeller is enclosed in the rotor that is driven by the motor and supported by two HMBs without mechanical contact. The blood flow is in the axial direction with respect to the inlet and the outlet cannulae, that is, passing through the straightener that guides the blood flow, the impeller that pumps the blood when it rotates and the diffuser that aligns the rotating blood flow to the outlet. The straightener and the diffuser are enclosed in the rotor without contact, and the impeller is shrunk-fit into the bore of the rotor. The left and right HMBs support the two ends of the rotor, and the motor drives the rotor in the middle. The core of the magnetic bearing is made from 0.23-mmthick silicon steel laminations that help to decrease the eddy current and hysteresis losses. Copper wires are wound around the core, and the generated flux loop circulates through the corresponding two rings on the rotor, which are also made from silicon steel laminations. Between the two rings on the rotor, there is a permanent magnet ring made of neodymium, iron, and boron (Nd-Fe-B) material to produce the bias magnetic flux for the electromagnets, and that is the reason why this kind of magnetic bearings is called hybrid magnetic bearings. Two eddy current probes are placed in the orthogonal directions on each side to measure the radial positions of the rotor. Also made from silicon steel lamination, the motor stator is located at the middle of the rotor where the rotor is secured with 8 poles permanent magnet (PM). The motoring coils are wound around the stator, which are driven by three phase
Fig. 28.2 Dimensions of axial-flow blood pump
currents. The resulting torque is controlled by the current magnitude and phase, and it is independent of rotor position and time. The scheme of the axial blood pump with its main dimensions is shown in Fig. 28.2; the impeller, diffuser, and straightener are not shown for the sake of clarity. The photograph of the impeller, diffuser, and straightener of the axial blood pump [10] is shown in Fig. 28.3.
Fig. 28.3 The photograph of the impeller, diffuser, and straightener
Fig. 28.4 Scheme of magnetic force in one direction
28.3 Principles of Magnetic Bearings The electromagnetic force on the rotor in one direction as shown in Fig. 28.4 can be expressed by equation (28.1).
$f = \frac{\mu_0 N^2 A_g}{2}\left[\frac{I_1^2}{(g_0 + y)^2} - \frac{I_2^2}{(g_0 - y)^2}\right]$,  (28.1)
where µ0 is the permeability of free air 4π × 10−7 (T m/A), N is the number of coil turns, Ag is the cross-sectional area of the flux, I is the coil current, go is the nominal air gap, and y is the displacement of the rotor in the direction shown in Fig. 28.4. By controlling the coil currents, the electromagnetic forces can be adjusted appropriately to counteract the gravity of the rotor and make the rotor remain stable in the magnetic field. The axial movement of the rotor is constrained by the radial electromagnetic forces and, therefore, passively controlled by the magnetic bearings.
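For example, equation (28.1) can be evaluated directly, as in the sketch below. The sketch is illustrative only; the turn count, pole-face area, and currents in the example call are placeholder values, not the actual design parameters of the HMB.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space (T*m/A)

def radial_force(i1, i2, y, n_turns, area, g0):
    """Net electromagnetic force of an opposing coil pair, equation (28.1)."""
    return 0.5 * MU0 * n_turns**2 * area * (
        i1**2 / (g0 + y)**2 - i2**2 / (g0 - y)**2)

# Example (placeholder values): with the rotor centred (y = 0) and equal
# currents, the two coils balance and the net force is zero.
print(radial_force(0.2, 0.2, 0.0, n_turns=100, area=1e-4, g0=0.35e-3))
```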
28.4 Principles of Lorentz-type Motor The scheme of the motor is shown in Fig. 28.5. The arrangement of the motor coils is Um , Wm , and Vm , respectively. The PM on the rotor is assumed to produce the following sinusoidal waveform of flux density around the air gap for the motor stator: Bg = −B sin(ω t − 4θ)
(28.2)
Fig. 28.5 Schematic of the motor wiring connections
The motoring coils are driven by the following three-phase currents:
$I_{Um} = A\cos(\omega t + \varphi)$, $I_{Wm} = A\cos\!\big(\omega t + \tfrac{2}{3}\pi + \varphi\big)$, $I_{Vm} = A\cos\!\big(\omega t + \tfrac{4}{3}\pi + \varphi\big)$.  (28.3)
Based on equation (28.3), the current distribution in the circumferential direction can be expressed with the Dirac delta function as
$i_m = I_{Um}\big[\delta(\theta + \tfrac{\pi}{8}) - \delta(\theta - \tfrac{\pi}{8}) + \delta(\theta - \tfrac{7\pi}{8}) - \delta(\theta - \tfrac{9\pi}{8})\big] + I_{Wm}\big[\delta(\theta - \tfrac{5\pi}{24}) - \delta(\theta - \tfrac{11\pi}{24}) + \delta(\theta - \tfrac{29\pi}{24}) - \delta(\theta - \tfrac{35\pi}{24})\big] + I_{Vm}\big[\delta(\theta - \tfrac{13\pi}{24}) - \delta(\theta - \tfrac{19\pi}{24}) + \delta(\theta - \tfrac{37\pi}{24}) - \delta(\theta - \tfrac{43\pi}{24})\big]$.  (28.4)
According to the Lorentz force principle, the torque produced by the currents is
$T = 2Nrl\int_{-\pi/8}^{7\pi/8} B_g\, i_m\, d\theta = 6NrlAB\cos\varphi$.  (28.5)
Equation (28.5) shows that the torque can be controlled by the current magnitude A and phase ϕ, and it is independent of rotor position and time. The corresponding variables above are defined in Table 28.1.
Table 28.1 Definition of variables

A    Magnitude of motoring current
B    Magnitude of flux density
θ    Mechanical angle
ω    Electric frequency
ϕ    Phase of motoring current
Bg   Flux density in the air gap
l    Length of stator
r    Radius of rotor
N    Number of turns
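As a quick numerical illustration of equations (28.3) and (28.5), the following sketch evaluates the three-phase currents and the resulting Lorentz torque. The parameter values in the example call are placeholders, not the actual motor dimensions.

```python
import numpy as np

def phase_currents(t, A, omega, phi):
    """Three-phase motoring currents of equation (28.3)."""
    return (A * np.cos(omega * t + phi),
            A * np.cos(omega * t + 2 * np.pi / 3 + phi),
            A * np.cos(omega * t + 4 * np.pi / 3 + phi))

def lorentz_torque(N, r, l, A, B, phi):
    """Torque of equation (28.5): independent of rotor position and time."""
    return 6 * N * r * l * A * B * np.cos(phi)

# Example (placeholder values): torque is maximised when the current phase is zero.
print(lorentz_torque(N=50, r=0.01, l=0.02, A=1.0, B=0.5, phi=0.0))
```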
28.5 Control of the HMBs System The four radial directions of the rotor are actively controlled by the two HMBs with PID controllers. The system control diagram in one radial direction is shown in Fig. 28.6. The transfer function of the PID controller is as follows:
$G_c(s) = k_p + \frac{k_d s}{1 + sT_d} + \frac{k_i}{T_i s + 1}$,  (28.6)
where k p is the proportional coefficient, kd is the differential coefficient, ki is the integration coefficient, and Td and Ti are time constants of differentiator and integrator, respectively. By trial and error, the controller parameters can be decided in the experiment, and their values are listed in Table 28.2. The dSPACE ds1103 and MATLAB 6.1 with Simulink were used as the rapid prototyping tool for the development of the digital PID controller of the HMBs. The model of the digital controller was designed and simulated in Simulink. After building the Simulink model into the digital signal processor (DSP) of dSPACE ds1103, the control algorithms were implemented digitally in DSP, and the digital controller can operate real-time in the HMBs system via hardware interface of dSPACE ds1103. The voltage range of the interface of dSPACE ds1103 is from −10 V to +10 V; 16-bit analog-to-digital (A/D) converters and 14-bit digital-to-analog (D/A) converters are chosen as interfaces of this experiment. The sampling frequency of the controller was 20 kHz. The rotor displacements were measured by eddy current
Fig. 28.6 Control diagram of HMBs
Table 28.2 Control parameters of magnetic bearings

kp   1.00        Td   0.0001
kd   0.00065     Ti   0.0318
ki   0.25        I0   0.20 A
probes (Type: Applied Electronics Corporation, AEC-5503A, which has a resolution of 0.5 µm and gain of 8 V/mm). Advanced Motion Controls (AMC) Pulse Width Modulated (25 A series) servo amplifiers have been used to amplify the control signal to drive the magnetic bearings. The STMicroelectronics’ ST7FMC microcontroller was used to control the Lorentz-type direct-current (DC) motor. The STMicroelectronics’ ST7FMC microcontroller is an integrated system designed to provide the users with a complete, ready-to-use motor application [11]. Sensorless control mode was chosen to drive the motor using a back electromotive force zero-crossing detector as a part of the ST7 motor controller peripheral, and therefore, the requirement of a flux sensor is eliminated, which can make the HMBs system more compact.
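For illustration, a possible discrete-time realisation of the controller (28.6) is sketched below, using the gains of Table 28.2 and the 20-kHz sampling rate with a backward-Euler approximation. The actual controller was generated from the Simulink model and run on the dSPACE hardware, so this is only an equivalent reading of the transfer function, not the authors' code.

```python
class DiscretePID:
    """Backward-Euler discretisation of Gc(s) in (28.6); illustrative sketch only."""

    def __init__(self, kp=1.0, kd=0.00065, ki=0.25,
                 Td=0.0001, Ti=0.0318, Ts=1.0 / 20000.0):
        self.kp, self.kd, self.ki = kp, kd, ki
        self.Td, self.Ti, self.Ts = Td, Ti, Ts
        self.e_prev = 0.0   # previous position error
        self.d_prev = 0.0   # previous filtered-derivative output
        self.q_prev = 0.0   # previous lagged-integral output

    def update(self, e):
        # filtered derivative term  kd*s / (1 + s*Td)
        d = (self.Td * self.d_prev + self.kd * (e - self.e_prev)) / (self.Td + self.Ts)
        # lagged integral term      ki / (Ti*s + 1)
        q = (self.Ti * self.q_prev + self.Ts * self.ki * e) / (self.Ti + self.Ts)
        self.e_prev, self.d_prev, self.q_prev = e, d, q
        return self.kp * e + d + q
```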
28.6 Performance of the HMBs System The mass of the rotor is 80.12 g, and its nominal air gap is 0.35 mm. The radial directions (x and y) are actively controlled, and the axial direction (z) is passively controlled. The rotor can rotate in stable suspension at speeds of up to 14,000 rpm. Figures 28.7 and 28.8 show the displacements of the rotor in the x, y, and z directions when the rotor is not rotating and at a rotational speed of 14,000 rpm. The displacement signals in the x, y, and z directions are contaminated by noise and harmonics, so it is difficult to read the actual displacement in the three directions directly from the time-domain signals. The fast Fourier transform (FFT) is therefore used to transform the time-domain signal into the frequency domain, and the FFT amplitude at the rotor rotation frequency is taken as the displacement of the rotor at the corresponding speed. To obtain the frequency response of the HMBs system, the rotor displacement is recorded in the x, y, and z directions as the rotation speed is increased from 0 to 14,000 rpm in intervals of 500 rpm. Using the FFT, the corresponding frequency response can be obtained as shown in Fig. 28.9, from which it can be seen that the rotor rotation in suspension in air is very stable (i.e., no collision with the stator and pump housing was observed). When the rotor is not rotating, the displacement in the x, y, and z directions is only about 1 µm. The rotor displacement is largest at around 10,000 rpm, where it is still only about 27 µm in the radial directions and about 60 µm in the axial direction. At the highest speed of 14,000 rpm, the displacement is about 10 µm in the x and y directions and about 35 µm in the z direction.
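The displacement read-out described above can be sketched as follows. The windowing choice and the sampling-rate argument are assumptions made only for the example; they are not details reported in the chapter.

```python
import numpy as np

def displacement_at_speed(signal, fs, rpm):
    """Single-sided FFT amplitude of a displacement signal at the rotation frequency.

    signal : sampled displacement (same units as the probe output)
    fs     : sampling rate in Hz (assumed by this sketch)
    rpm    : rotor speed in revolutions per minute
    """
    n = len(signal)
    window = np.hanning(n)                       # window choice is an assumption
    spectrum = np.fft.rfft(signal * window)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amplitude = 2.0 * np.abs(spectrum) / window.sum()
    k = np.argmin(np.abs(freqs - rpm / 60.0))    # bin nearest the rotation frequency
    return amplitude[k]
```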
Fig. 28.7 Rotor displacements in x, y, and z directions at 0 rpm
Fig. 28.8 Rotor displacements in x, y, and z directions at 14,000 rpm
Fig. 28.9 Levitated response of the HMBs system of axial-flow blood pump in x, y, and z directions (dp = displacement in graph)
28.7 Conclusions and Future Work In this chapter, a compact HMBs system is introduced. The rotor of the axial-flow blood pump can be suspended stably by the system both at rest and in rotation at speeds of up to 14,000 rpm. The compact HMBs system is implantable because of its small dimensions. Five degrees-of-freedom of the system are controlled by HMBs, of which four radial directions are actively controlled and one axial direction is passively controlled. In the future, parameter estimation on the magnetic bearing system will be employed to accurately obtain its stiffness and damping properties in frequency domain [12, 13]. Self-sensing techniques will be applied to the system to eliminate the need of eddy current probes, and the performance of the axial blood pump will be tested.
References 1. J. Asama, T. Shinshi, S. Takatani, L. Li, and A. Shimokohbe (2003) A compact magnetic bearing system for centrifugal ventricular assist devices. Proceedings of the 7th International Symposium on Magnetic Suspension Technology, pp. 117–122.
2. J. Asama, T. Shinshi, H. Hoshi, S. Takatani, and A. Shimokohbe (2004) A new design for a compact centrifugal blood pump with a magnetically levitated rotor. ASAIO Journal 2004, 50: 550–556. 3. H. Hoshi, J. Asama, T. Shinshi, S. Takatani, K. Ohuchi, M. Nakamura, T. Mizuno, H. Arai, and A. Shimokohbe (2005) Disposable magnetically levitated centrifugal blood pump: design and in vitro performance. Artificial Organs, 29(7): 520–526. 4. T.M. Lim and D.S. Zhang (2005) Numerical analysis of blood trauma in an enclosed-impeller axial flow pump. 13th Congress of the International Society for Rotary Blood Pumps, Tokyo, Japan. 5. T. Yamane, M. Nishida, T. Kijima, and J. Maekawa (1997) New mechanism to reduce the size of the monopivot magnetic suspension blood pump. Artificial Organs, 21: 620–624. 6. A. Yuhki, M. Nogawa, and S. Takatani (2000) Development of a compact, sealless, tripod supported, magnetically driven centrifugal blood pump. Artificial Organs, 24: 501–505. 7. T. Akamatsu, T. Tsukiya, K. Nishimura, P. Chang-Hee, and T. Nakazeki (1995) Recent studies of the centrifugal blood pumps with a magnetically suspended impeller. Artificial Organs, 19: 631–634. 8. C. Nojiri, T. Kijima, J. Maekawa, K. Horiuchi, T. Kido, T. Sugiyama, T. Mori, N. Sugiura, T. Asada, W. Umemura, T. Ozaki, M. Suzuki, T. Akamatsu, S. Westaby, T. Katsumata, and S. Saito (2001) Development status of Terumo implantable left ventricular assist system. Artificial Organs, 25: 411–413. 9. T.M. Lim and D. Zhang (2006) Development of Lorentz force type self-bearing motor for an alternative axial flow blood pump design. Artificial Organs, 30: 347–353. 10. L.P. Chua, B.Y. Su, T.M. Lim, and T.M. Zhou (2007) Numerical simulation of an axial blood pump. Accepted by Artificial Organs. 11. SoftTec Microsystems (2000) AK-ST7FMC Starter kit for STMicroelectronics ST7FMC motor control device user’s manual, Revision 1.0. 12. T.M. Lim and Shanbao Cheng (2005) Parameter estimation of one-axis magnetically suspended system with a Digital PID controller. 1st International Conference on Sensing Technology, Nov. 21–23, Palmerston North, New Zealand, pp. 419–424. 13. T.M. Lim and S. Cheng (2007) Parameter estimation and statistical analysis on frequencydependent active control forces. Mechanical Systems and Signal Processing, 21(5): 2112– 2124.
Chapter 29
Critical Angle for Optimal Correlation Assignment to Control Memory and Computational Load Requirements in a Densely Populated Target Environment D.M. Akbar Hussain and Zaki Ahmed
Abstract The research presents a simulation study on the performance of a target tracker using a critical angle selection technique for optimal correlation assignment of a target track with the incoming observation(s) for the track splitting filter (TSF) algorithm. In a typical TSF all the observations falling inside a likelihood ellipse are used for update. However, our proposed optimal correlation procedure TSF algorithm uses only those observations for track update that fall within the critical angle sector made inside the prediction ellipse. This kind of approach is particularly important if the computational and memory requirements are limited relative to the amount of input data (number of objects) that can potentially saturate the system. Previous performance work [1] has been done on specific (deterministic) scenarios. One of the reasons for considering the specific scenarios, which were normally crossing targets, was to test the efficiency of the track splitting algorithm. This approach gives a measure of performance for a specific, possibly unrealistic scenario. However, such investigation procedures help in designing tracking systems that can select high-value targets based on particular attributes. In order to develop procedures that would enable a more general performance assessment compared with deterministic scenarios, our study adopted a random target motion scenario. Its implementation for testing the proposed technique using a track splitting Kalman filter algorithm is investigated. A number of performance parameters that give the activity profile of the tracking scenario are also investigated. This kind of performance evaluation can provide in-depth knowledge of tracking activity for developing possibly better and more appropriate target tracking systems. The complete prototype system is implemented using a TMS320C6416 digital signal processor (DSP). D.M. Akbar Hussain Information and Security Analysis Research Centre, Department of Computer Science and Engineering, Aalborg University, Niels Bohs Vej 8, 6700, Esbjerg, Denmark Zaki Ahmed School of Electronic, Communication and Electrical Engineering, University of Hertfordshire, College Lane, Hatfield, Herts AL10 9AB UK
29.1 Introduction Tracking of a single target, in the ideal situation where one noisy observation obtained at each radar scan is possible using standard Kalman filter technique. In the multitarget case, an unknown number of observations are received at each radar scan, and assuming no false observations, each observation has to be associated with an existing or new target tracking filter. When the targets are well apart from each other, then forming an observation prediction ellipse around a track to associate the correct observation with that track is a standard technique [2]. When targets are near to each other, then more than one observation may fall within the prediction ellipse of a filter, and prediction ellipses of different filters may interact. The number of observations accepted by a filter will therefore be quite sensitive, in this situation, to the accuracy of the prediction ellipse. Several approaches may be used for when prediction ellipses of different filters interact [3–5], one of which is called the track splitting algorithm. In this algorithm, if n observations occur inside a prediction ellipse, then the filter branches or splits into n tracking filters. This situation, which results in an increased number of filters, makes the algorithm computationally expensive. Some mechanism for restricting the excess tracks that originated from track splitting is required since eventually this process may result in more than one filter tracking the same target. The first such mechanism is the support function, which uses the likelihood function of a track as the pruning criterion. The second mechanism is the similarity criterion, which uses a distance threshold to prune similar filters tracking the same target [1].
29.2 Critical Angle Representation As stated earlier, in a standard TSF all those observations falling inside the likelihood ellipse are equally probable for update as shown in Fig. 29.1. In this case a track splits into four filters as four observations are present inside the prediction ellipse. However, in our proposed procedures the TSF can only use those observations that fall inside the critical angle sector present in the prediction ellipse. We have used two separate procedures for defining critical angle, as shown in Figs. 29.2 and 29.3, respectively. It should be noted that in fact if n observations are inside the most likelihood ellipse, then potentially (n − 1) wrong observation-track pairing updates are possible. Therefore, most observations are false—meaning does not belong to that particular target—so making updates with false observations not only affects accuracy but also memory requirement and computational load increases exponentially if this track update persists for a while. However, in our proposed procedures the TSF algorithm uses selective observations through critical angle sector. Figure 29.2 shows the first procedure; the dotted lines show the critical angle. In this particular case only observations 2 and 3 will be selected for update. These observations are within the critical angle, which is made with the target position
Fig. 29.1 Track update with TSF
estimate. The critical angle is determined for each track observation pair based on the heading of individual target, which is calculated at each scan. However, even if the model used to track the target is correct, the statistical nature of the problem can lead to a wrong association of the track–observation pairing.
Fig. 29.2 Critical angle procedure 1
Fig. 29.3 Critical angle procedure 2
The second procedure is shown in Fig. 29.3; in this case critical angle is calculated with the predicted position instead of the target estimate. In this particular case, observations 1 and 3 are used to update. To test the system performance, statistics about system behavior are obtained by considering a couple of parameters, explained later in the text.
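A minimal sketch of the observation-selection step common to both procedures is given below. The sector geometry (its half-angle) and the way the target heading is obtained are treated as inputs, since the chapter determines the critical angle per track at each scan; the function and parameter names are hypothetical.

```python
import numpy as np

def select_observations(obs, centre, heading, half_angle):
    """Keep only observations whose bearing from `centre` lies inside the
    critical-angle sector around the predicted target heading.

    Procedure 1 uses the position estimate as `centre`;
    procedure 2 uses the predicted position instead.
    """
    selected = []
    for z in obs:
        bearing = np.arctan2(z[1] - centre[1], z[0] - centre[0])
        # wrap the angular difference into [-pi, pi]
        diff = np.arctan2(np.sin(bearing - heading), np.cos(bearing - heading))
        if abs(diff) <= half_angle:
            selected.append(z)
    return selected
```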
29.3 Motion Model Consideration The motion of a target being tracked is assumed to be approximately linear and modeled by the following equations: xn+1 = Φxn + Γwn ,
(29.1)
zn+1 = Hxn+1 + vn+1 ,
(29.2)
where the state vector $x_{n+1}^T = (x\;\; \dot{x}\;\; y\;\; \dot{y})_{n+1}$  (29.3)
is a four-dimensional vector, wn the two-dimensional disturbance vector, zn+1 the two-dimensional observation vector, and vn+1 is the two-dimensional observation error vector.
Also, Φ is the assumed (4 × 4) state transition matrix, Γ is the (4 × 2) excitation matrix, and H is the (2 × 4) observation matrix; they are defined, respectively, as
$\Phi = \begin{bmatrix} 1 & \Delta t & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & \Delta t\\ 0 & 0 & 0 & 1 \end{bmatrix}$,  (29.4)
$\Gamma = \begin{bmatrix} \Delta t^2/2 & 0\\ \Delta t & 0\\ 0 & \Delta t^2/2\\ 0 & \Delta t \end{bmatrix}$,  (29.5)
$H = \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 \end{bmatrix}$.  (29.6)
Here ∆t is the sampling interval and corresponds to the (constant) time interval, or scan interval, at which radar observation data are received. The system noise sequence $w_n$ is a two-dimensional Gaussian white sequence for which $E(w_n) = 0$,
(29.7)
where E is the expectation operator. The covariance of wn is
E wn wTm = Qn δnm
(29.8)
where $Q_n$ is a positive semidefinite (2 × 2) diagonal matrix and $\delta_{nm}$ is the Kronecker delta, defined as $\delta_{nm} = 0$ for $n \ne m$ and $\delta_{nm} = 1$ for $n = m$. The observation noise sequence $v_n$ is a two-dimensional zero-mean Gaussian white sequence with a covariance of
(29.9) E vn vTm = Rn δnm , where Rn is a positive semidefinite symmetric (2 × 2) matrix given by Rn =
σx2 σxy
σxy σy2
(29.10)
σx2 and σy2 are the variances in the errors of the x, y position observations, and σxy is the covariance between the x and y observation errors. It is assumed that the observation noise sequence and the system noise sequence are independent of each other, that is,
(29.11) E vn wTm = 0
The initial state x0 is also assumed independent of the wn and vn sequences, that is,
$E[x_0 w_n^T] = 0$,  (29.12)
$E[x_0 v_n^T] = 0$.  (29.13)
$x_0$ is a four-dimensional random vector with mean $E(x_0) = x_{0/0}$ and a (4 × 4) positive semidefinite covariance matrix defined by
$P_0 = E\big[(x_0 - \bar{x}_0)(x_0 - \bar{x}_0)^T\big]$,  (29.14)
where $\bar{x}_0$ is the mean of the initial state $x_0$. The Kalman filter is an optimal filter, as it minimizes the mean squared error between the estimated state and the true (actual) state, provided the target dynamics are correctly modeled. The standard Kalman filter equations for estimating the position and velocity of the target motion described by equations (29.1) and (29.2) are
$\hat{x}_{n+1/n} = \Phi\hat{x}_n$,  (29.15)
$\hat{x}_{n+1} = \hat{x}_{n+1/n} + K_{n+1}v_{n+1}$,  (29.16)
$K_{n+1} = P_{n+1/n}H^T B_{n+1}^{-1}$,  (29.17)
$P_{n+1/n} = \Phi P_n\Phi^T + \Gamma Q_n^F\Gamma^T$,  (29.18)
$B_{n+1} = R_{n+1} + HP_{n+1/n}H^T$,  (29.19)
$P_{n+1} = (I - K_{n+1}H)P_{n+1/n}$,  (29.20)
$v_{n+1} = z_{n+1} - H\hat{x}_{n+1/n}$,  (29.21)
where $\hat{x}_{n+1/n}$, $\hat{x}_{n+1}$, $K_{n+1}$, $P_{n+1/n}$, $B_{n+1}$, and $P_{n+1}$ are the predicted state, estimated state, Kalman gain matrix, prediction covariance matrix, innovation covariance matrix, and estimation covariance matrix, respectively. $Q_n^F$ is the covariance of the system noise assumed by the filter, which is normally taken equal to $Q_n$. In a practical situation, however, the value of $Q_n$ is not known, so the choice of $Q_n^F$ should be such that the filter can adequately track any possible motion of the target. To start the computation, an initial value is chosen for $P_0$. Even if this is a diagonal matrix, it is clear from the above equations that the covariance matrices $B_{n+1}$, $P_{n+1}$, and $P_{n+1/n}$ for a given n do not remain diagonal when $R_n$ is not diagonal.
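A direct transcription of equations (29.15)–(29.21) into code is straightforward, as the sketch below shows; it is illustrative only and leaves the construction of Φ, Γ, H, $Q_n^F$, and $R_n$ to the caller.

```python
import numpy as np

def kalman_step(x_hat, P, z, Phi, Gamma, H, Qf, R):
    """One scan of the filter, equations (29.15)-(29.21)."""
    x_pred = Phi @ x_hat                                  # predicted state      (29.15)
    P_pred = Phi @ P @ Phi.T + Gamma @ Qf @ Gamma.T       # prediction covariance (29.18)
    B = R + H @ P_pred @ H.T                              # innovation covariance (29.19)
    K = P_pred @ H.T @ np.linalg.inv(B)                   # Kalman gain           (29.17)
    nu = z - H @ x_pred                                   # innovation            (29.21)
    x_new = x_pred + K @ nu                               # estimated state       (29.16)
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred         # estimation covariance (29.20)
    return x_new, P_new, nu, B
```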
29.4 Implementation To simulate the output of the radar, a data generator routine was written in C to run on a TMS320C6416 DSK board. Parameters describing the simulation are target density, mean and variance for the initial velocity, target window size, probability of detection, radar resolution, acceleration noise in the target model, and observation
noise. These parameters can be either entered interactively by the user or defined as default prior to compiling. The trajectories for the targets are generated using the kinematics described in equation (29.1), namely, a constant velocity motion with acceleration noise. Targets move in the XY plane, and the positions of the targets are considered with respect to the sensor located at the fixed origin of coordinates. The initial target positions are randomly selected in a predefined tracking window such that they are uniformly distributed inside that space. The directions of the targets are also randomly selected between 0 and 2π. The initial velocity of the targets is taken from a random distribution by specifying a mean and standard deviation for the velocity. The density of the targets for a complete run remains constant by replacing those targets that leave the tracking window by other targets whose initial positions, velocity, and heading are again selected randomly as described earlier.
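The data generator described above can be sketched as follows. The function and parameter names are hypothetical, and the noise model simply follows equation (29.1) with zero-mean Gaussian acceleration noise; the original routine was written in C for the DSP board.

```python
import numpy as np

def generate_targets(n_targets, window, mean_speed, std_speed, rng=None):
    """Random initial states (x, x_dot, y, y_dot): positions uniform in the window,
    headings uniform in [0, 2*pi), speeds drawn from a normal distribution."""
    rng = np.random.default_rng() if rng is None else rng
    pos = rng.uniform(0.0, window, size=(n_targets, 2))
    heading = rng.uniform(0.0, 2.0 * np.pi, size=n_targets)
    speed = rng.normal(mean_speed, std_speed, size=n_targets)
    return np.column_stack((pos[:, 0], speed * np.cos(heading),
                            pos[:, 1], speed * np.sin(heading)))

def step_targets(states, Phi, Gamma, q_std, rng=None):
    """Propagate all targets one scan with acceleration noise, equation (29.1)."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.normal(0.0, q_std, size=(states.shape[0], 2))
    return states @ Phi.T + w @ Gamma.T
```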
29.5 Performance Parameter A single parameter for the performance evaluation of a multiple-target tracking algorithm is difficult to obtain. The target tracking problem is statistical in nature, and many factors enter performance assessment. For example, one tracking algorithm may be computationally efficient but lose true tracks for a significant time. On the other hand, another algorithm may perform better in tracking accuracy and rarely lose the true tracks but requires more computation time. A practical approach for the assessment of a multiple-target tracking algorithm is to use simulation studies; typically analytical methods are complicated. We are investigating three parameters that seem logical for the described situation: • Terror is the average tracking error, which is the difference between true target position and estimated position. • Nb is the possible number of branches. • Co is the correct observation, i.e., the average number of times a correct observation is selected for update. As mentioned earlier, we are using Kalman filters, although a less expensive α-β algorithm in terms of space and computation is more attractive, but simulations have shown that the trade-off in using Kalman filter is better observation prediction ellipse and support function assessment, which are important factors when multiple targets exist [6–9]. The average tracking error for the x and y coordinate is given by
$T_{x(\nabla)} = \frac{1}{b}\sum_{j=1}^{b}\big(X_{t(\nabla)} - \hat{X}_{e(j)}\big)^2$,  (29.22)
$T_{y(\nabla)} = \frac{1}{b}\sum_{j=1}^{b}\big(Y_{t(\nabla)} - \hat{Y}_{e(j)}\big)^2$,  (29.23)
where b is the number of branches belonging to the tree of track ∇, $X_{t(\nabla)}$ and $Y_{t(\nabla)}$ are the true x and y positions of track ∇ (noise-free observations known from the observation generation program) at a specific scan, and $\hat{X}_{e(j)}$, $\hat{Y}_{e(j)}$ are the x and y track estimates of branch j at that scan. The global average tracking error is then defined as
$T_{error} = \frac{1}{t}\sum_{i=1}^{t} 0.5\big(T_{x(\nabla)i} + T_{y(\nabla)i}\big)$,  (29.24)
where t is the total scan time. Even when all observations are correctly taken by a filter, an average tracking error $T_{error}$ will exist due to the statistical nature of the problem. In a multiple-target tracking environment, since incorrect observations may also be taken by a specific filter, tracking accuracy is affected. $N_b$ is another important parameter that tells what kind of activity is present in the tracking area. A value of zero for $N_b$ indicates that there is no track splitting. In a multiple-target tracking environment with crossing targets, maneuvering targets, and false observations, a zero value for $N_b$ will not be possible at various stages of tracking. Another parameter, $C_o$, provides information about system performance: it gives statistics on how often the system was able to select a correct observation. In a multiple-target tracking scenario a correct target motion model is not enough to ensure successful track maintenance due to the existence of multiple observations, since the acceptance of an observation from a neighboring target may result in termination of the true track through similarity. Because of the track-splitting process, the lost target may be absorbed by the neighboring targets.
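For clarity, the performance measures (29.22)–(29.24) can be computed as in the following sketch; the array layout (one (x, y) estimate per branch) is an assumption made only for the example.

```python
import numpy as np

def branch_tracking_error(true_xy, est_xy):
    """Average squared tracking error of one track tree at a single scan,
    equations (29.22)-(29.23); est_xy holds one (x, y) estimate per branch."""
    est = np.asarray(est_xy, dtype=float)
    tx = np.mean((true_xy[0] - est[:, 0]) ** 2)   # (29.22)
    ty = np.mean((true_xy[1] - est[:, 1]) ** 2)   # (29.23)
    return tx, ty

def global_tracking_error(tx_per_scan, ty_per_scan):
    """Global average tracking error over all scans, equation (29.24)."""
    return float(np.mean(0.5 * (np.asarray(tx_per_scan) + np.asarray(ty_per_scan))))
```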
29.6 Simulation Results A number of target scenarios were executed to observe the performance of our proposed procedures. However, only results from four scenarios are presented here. In each case the tracking window size is changed and the target density is kept constant, which basically dictates the interaction among the neighboring targets. Figures 29.4 to 29.11 show the target paths with various window sizes. It can be seen that when the window size is reduced with the same number of targets, the interaction among targets obviously increased. We have chosen four sets of target densities 15, 20, 25, and 30, respectively. Figures 29.4 and 29.5 show 15 targets with a window size of 10 × 10 and 25 × 25, respectively. It can be seen that when the window is 10 × 10, more targets went out of the window and reappeared (same color dots) at random locations to keep the density same. Figures 29.6 and 29.7 has 20 targets in the two windows 10 × 10 and 25 × 25, and it can be observed that more interaction and reappearance of targets take place in the smaller window size. It is evident that when the target density increases (Figs. 29.8 to 29.11), more interaction and target reappearance is taking place for smaller window size. Tables 29.1 and 29.2 provides the normalized performance evaluation parameters for these scenarios with our two proposed procedures,
Fig. 29.4 Fifteen targets with window size (10 × 10), 7 targets reappeared
Fig. 29.5 Fifteen targets with window size (25 × 25), 4 targets reappeared
Fig. 29.6 Twenty targets with window size (10 × 10), 13 targets reappeared
Fig. 29.7 Twenty targets with window size (25 × 25), 7 targets reappeared
Fig. 29.8 Twenty-five targets with window size (10 × 10), 26 targets reappeared
Fig. 29.9 Twenty-five targets with window size (25 × 25), 12 targets reappeared
Fig. 29.10 Thirty targets with window size (10 × 10), 30 targets reappeared
respectively. In general, it can be observed that with more targets in the window, tracking accuracy deteriorates. The number of branches is calculated over the whole tracking period of 100 seconds; for example, a value of 0.1 means that during tracking the track has split at least 10 times. A value of 0.90 for Co means that the track update has been made with true (correct) observations 90 times out of 100, and a value of 1 indicates 100% correct association. In Table 29.1, it can be seen that there is not much difference between the two procedures, because there was only a limited amount of interaction among the targets, i.e., a less dense target environment. This confirms that when there is little target interaction, the choice of procedure does not influence the performance parameters. However, the results in Table 29.2 differ from each other, as they are obtained for densely populated target scenarios. Notably, the results for procedure 2 are slightly better than those for procedure 1, the reason being that most
Fig. 29.11 Thirty targets with window size (25 × 25), 18 targets reappeared
Table 29.1 Tracking window (25 × 25)

                                        15 targets   20 targets   25 targets   30 targets
Performance parameters (Procedure 1)
  Terror                                0.001        0.001        0.0012       0.0012
  Nb                                    0            0            0.145        0.156
  Co                                    1            1            0.95         0.90
Performance parameters (Procedure 2)
  Terror                                0.001        0.001        0.0012       0.0010
  Nb                                    0            0            0.131        0.144
  Co                                    1            1            0.95         0.92

Table 29.2 Tracking window (10 × 10)

                                        15 targets   20 targets   25 targets   30 targets
Performance parameters (Procedure 1)
  Terror                                0.007        0.008        0.018        0.0159
  Nb                                    0            0.16         0.386        0.458
  Co                                    1            0.90         0.83         0.80
Performance parameters (Procedure 2)
  Terror                                0.007        0.008        0.010        0.0121
  Nb                                    0            0.10         0.301        0.345
  Co                                    1            0.92         0.87         0.89
observations typically fall along the major axis of the prediction ellipse; consequently, the second procedure gave better results. It can be seen that the amount of track splitting is smaller for procedure 2, which results in a smaller tracking error and also in more accurate assignment between target tracks and observations, giving a higher Co value.
29.7 Conclusion In this study the performance of a tracking system to select an appropriate track– observation pair using a critical angle selection has been investigated. Two different procedures are defined for the critical angle in selecting the most probable observation–track paring. The track splitting approach requires a large number of tracking filters; so not only is less accurate tracking observed but also if multiple targets exist in the same vicinity for a longer period of time, memory and computational requirements grow exponentially. As expected, the study here has found that when the tracking window becomes denser, all the performance parameters are affected. However, the performance of the system with our proposed procedures shows improvement compared with a standard TSF algorithm, and in particular, procedure 2 performs better. Although results
from the standard TSF algorithm are not documented here, it is evident that more splitting results in less accurate tracking. The reason being a standard TSF algorithm takes all the observations present in the prediction ellipse to update. Although our simulated study has considered more targets in a confined limited space, which indicates a chaotic or unrealistic situation, it has tested the algorithm in the worst-case scenario. The obtained parameters values can help in the design and development of a better and efficient tracking system. Furthermore, obtaining empirical values for various performance parameters provides a more in-depth vision to understand the situation. This study has used a simple simulated approach instead of more complicated analytical method to design a particular target tracking system. These results can also be used to design and implement a tracking system on a given machine that has a limited amount of computational and storage capability.
References 1. D.P. Atherton, E. Gul, A. Kountzeris, and M. Kharbouch (1990) Tracking multiple targets using parallel processing. Proceedings of IEE, Part D, No. 4, July, pp. 225–234. 2. P.L. Smith and G. Buechler (1975) A branching algorithm for discriminating and tracking multiple objects. IEEE Transactions on Automatic Control, AC-20(February): 101–104. 3. D.P. Atherton and C. Deacon (1985) Tracking studied of two crossing targets. IFAC Identification and System Parameter Estimation Symposium, University of York, July, pp. 637–642. 4. Y. Bar-Shalom and T.E. Fortmann (1988) Tracking and data association, Academic Press, Boston. 5. S.S. Blackman (1986) Multiple-target tracking with radar applications. Artech House, Dedham, MA. 6. M. Kharbouch (1991) Some investigations on target tracking. PhD thesis, Sussex University. 7. D.M. Akbar Hussain (2003) Tracking multiple objects using modified track-measurement assignment weight approach for data association, INMIC-2003, International Multi-topic Conference, December 08–09, Islamabad, Pakistan. 8. D.M. Akbar Hussain, Michael Durrant, and Jeff Dionne (2001) Exploiting the computational resources of a programmable DSP micro-processor (Micro Signal Architecture MSA) in the field of multiple target tracking. SHARC International DSP Conference 2001, September 10–11, Northeastern University, Boston. 9. D.P. Atherton, D.M.A. Hussain, and E. Gul (1991) Target tracking using transputers as parallel processors. 9th IFAC Symposium on Identification and system Parameter Estimation, Budapest, Hungary, July.
Chapter 30
High-Precision Finite Difference Method Calculations of Electrostatic Potential David Edwards, Jr.
30.1 Introduction 30.1.1 Historical Development 1970–2007 The finite difference method (FDM) is one of the standard methods [1–7] for electrostatic potential calculations in nonanalytic geometries. In this technique, typically, a single, square mesh is overlaid upon the geometry and subsequently relaxed. During the relaxation, the potentials at successive points within the mesh are evaluated using an appropriate algorithm, itself being a function of the potentials at the surrounding mesh points. The process is continued until there is no further change in the potential at any of the points within the mesh upon subsequent iterations through the mesh. The above process while being relatively simply is remarkably inaccurate. To achieve reasonably high precisions, a very closely spaced mesh must be used that can take a very long time to relax. Early efforts to improve the precision for a given mesh were directed to the area of algorithm development and were to a large extent disappointing [8,9]. The resultant precisions seemed unfortunately to be reasonably independent (within a factor or two) of the algorithmic precision (see Fig. 30.3 of [9].) This was the situation until 1980–1983 when it was noticed [9] that the largest error in a very simple geometry occurred at a mesh point one unit from a corner of an element. The reason for this was, in retrospect, obvious since there is no algorithm that can accurately predict the potential at these points using its surrounding mesh points. So one makes a guess, and guesses aren’t very precise. Since the precision of a single region mesh was known to improve (∼1/N) with higher mesh densities, it was conjectured at that time that by locally increasing the mesh density around the singular points, a precision would be obtained that would David Edwards, Jr. IJL Research Center, Newark, Vt., 05871
be equivalent to the precision obtained where the entire mesh was constructed using the enhanced local density. This conjecture was validated by establishing regions focusing onto the problem point in the above example and observing a significant reduction in the net error. The above work was presented in 1983 [9] and, as far as I am aware, was the first instance of the use of multiple regions to improve the precision in the electrostatic FDM problem. (It is noted that the 1972 paper of Natali et al. [8] in fact had two regions but they had the same mesh spacing so that there was no difference between their error values and the error values of a net encompassing the entire geometry using this mesh spacing.) The only work done in improving the precisions of FDM between 1983 and 2005 was a 1999 paper of Heddle [10]. In his paper, Heddle showed that the nine-point algorithms of Durand [11] (fourth-order algorithms) were somewhat more accurate than the standard five-point or star algorithm. (This effect had also been observed and documented in Figs. 30.3 and 30.7 of [9].) Thus without significant exaggeration it can probably be stated that between 1983 and 2007 no work was done in furthering the application of multiple regions to the FDM problem. The 1983 work of the author [9] did leave many important issues unresolved, the main one being that it was not clear how the multiregion structure was to be established other than by trial and error, which is not only impractical but also, to a certain extent, lacks elegance. To further the development of the multiregion process that leads to a solution of the above autoconstruction process, a series of papers have recently been written and presented [12]. The present chapter will provide both a review of the basic elements of the method and a description of the multiregion process by which the necessary regions can in fact be auto-established at a required accuracy for a given geometry.
30.1.2 Brief Description of the Process

In FDM, typically, a square mesh is overlaid upon the geometry and all points within the mesh are iteratively relaxed. During the iteration, the potential at each point in the mesh is determined from the neighboring potentials by means of an appropriate algorithm. The algorithm development process includes not only the creation of algorithms but also their testing and comparison. At present, testing is done by evaluating algorithms at a selection of points in a typically analytic geometry (concentric spheres) and drawing conclusions from the results. This suffers from the facts that the number of points at which the algorithm is evaluated is typically quite small and that on a small number of points the results may depend upon the location of the points chosen. (This latter effect will be clearly seen in many of our figures.) Thus a method is needed that establishes the algorithmic error over the entire geometry and allows for readily interpreting the results.
As indicated in [9, 12], in order to achieve precisions much greater than 1 × 10−3 or 1 × 10−4, the geometry must be overlaid not with a single region but with a
multiregion structure, where the additional regions encompass the areas containing the principal sources of error of the geometry, i.e., areas either of high gradients of the geometry or near the protruding corners or edges of elements (singular points). Although multiregions must be developed for accurate calculations, as remarked above, no method for establishing this structure over a particular geometry exists at present. This severely limits both the usefulness and the applicability of the technique. The region structure has been established in the past [9, 12] essentially by trial and error, i.e., by incorporating a certain region structure and finding the post-relax precision. As this is both ill-defined and time-consuming, a method is required to auto-establish the entire region structure on the basis of the desired relaxation precision, such that the post-relax precision requirement would be met by a final relax of the created region structure.
In order to aid in the solution of the above problems, a function called grad6(r', z') [12] is defined for each point (r', z') of the geometry and shown to have the following property: for two points (r1, z1) and (r2, z2) within the geometry, if grad6(r1, z1) < grad6(r2, z2), then the maximum that the algorithmic error can be at the first point is less than the maximum that the algorithmic error can be at the second point. Although this does not order the errors at the points themselves (i.e., in the above example, the actual algorithmic error at the first point may in fact be larger than the error at the second point, as illustrated in many of the accompanying figures), the above property will be found quite useful both in evaluating algorithms and, in particular, in auto-establishing a multiregion structure.
The remainder of the chapter will be organized in the following manner:
1. Construction of the order-10 algorithm for general mesh points and definition of the grad6 function
2. Properties of the grad6 function and definition of the maximum error function
3. Comparison of different algorithms for the two-tube zero-gap lens
4. Application to region construction
5. Notes of caution
6. Summary and conclusion
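As a baseline illustration of the single-region relaxation loop referred to in Sections 30.1.1 and 30.1.2, the following minimal sketch relaxes a 2D Cartesian net with the five-point star formula until the potentials stop changing. It is an assumption-laden toy (Cartesian rather than cylindrical, Jacobi sweeps, a simple boolean mask `fixed` for the electrode points) and is not the author's multiregion solver.

```python
import numpy as np

def relax(v, fixed, tol=1e-12, max_iter=100_000):
    """v: 2D array of potentials; fixed: boolean mask of electrode/boundary
    points whose potentials are held constant. Sweeps the five-point star
    formula until the largest change per sweep falls below tol."""
    for _ in range(max_iter):
        new = v.copy()
        new[1:-1, 1:-1] = 0.25 * (v[:-2, 1:-1] + v[2:, 1:-1]
                                  + v[1:-1, :-2] + v[1:-1, 2:])
        new[fixed] = v[fixed]               # re-impose the fixed potentials
        if np.max(np.abs(new - v)) < tol:   # no further change: net is relaxed
            return new
        v = new
    return v
```

As the chapter explains, no matter how the interior update formula is refined, such a single-region net is limited to roughly 10−3–10−4 precision by the singular points; that limitation is what motivates the multiregion construction developed below.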
30.2 Construction of Order-10 Algorithm for General Mesh Points and the Definition of the grad6 Function

A general mesh point is defined as any point at least two units from either a bounding surface or the axis. The mesh points of the geometry discussed here will be restricted to this set, since these points both are sufficient to illustrate the principal ideas of this chapter and provide an ample supply of points on which to test algorithms in various geometries. It is noted that although there are over 20 different boundary point types, the general mesh points typically comprise over 95% of the points in the geometry, and hence they are found in essentially all of the differing fields of the system.
The most accurate general mesh point algorithm is probably the order-10 general mesh point algorithm, and its construction follows from [9]. The method is briefly summarized below as it provides insight into the motivation for and subsequent definition of the grad6 function.
About any mesh point with absolute coordinates (r', z') there is assumed to be a power series expansion of the potential v(r, z) as a function of the relative coordinates r, z about this mesh point. [In this notation the potential at the position (r', z') of the lens is v(r = 0, z = 0).] For an order-10 algorithm the power series may be written as

v(r, z) = c_0 + c_1 z + c_2 r + c_3 z^2 + \cdots + c_{64} z r^9 + c_{65} r^{10} + O(11),   (30.1)
where O(11) means that terms of order r^k z^l with k + l ≥ 11 are neglected. In the expression (30.1) there are 66 coefficients that determine v(r, z). Requiring that v(r, z) satisfy Laplace's equation produces one equation involving the coefficients c_j and powers of r and z. Further requiring that Laplace's equation be true in an arbitrary neighborhood of the central mesh point produces 45 equations from this single equation, in which only the coefficients c_j appear. Thus, 21 additional equations are required for a solution, the additional equations being generated by evaluating the potential at each of a selection of 21 mesh points surrounding the central point using (30.1). These points are taken from the two rings of mesh points surrounding the central point (the two surrounding rings contain 24 mesh points). This set of linear equations (66 equations in 66 unknowns) may be solved for the set of c_j (c_0 included), and the potential c_0 at the central point will depend upon the values of the 21 selected neighboring mesh points. It is noted that the off-axis Laplace's equation involves the radial distance of the mesh point from the axis, and hence c_0 will also be a function of this parameter. As the algorithm itself is a sum of about 600 terms, a presentation of the solution is not instructive and is not given here.
To accomplish the objectives described in the introduction, a function is needed that provides a reasonable estimate of the algorithmic errors using only the data from the relaxed net itself. Later in this section a definition of this function will be given. The following observations of a general nature provide insight into how such a function can be created.
In the construction of the 10th-order algorithm, the coefficient c_0 is found using the truncated power series, the assumption being that the high-order terms are negligible. Thus, it is reasonable to expect that the precision of c_0 is related to the magnitude of the neglected terms. Further, having solved for the complete set of 10th-order coefficients and examined the values of groups of coefficients of similar order at different points in the net, it was observed that
1. Coefficients of a given order were comparable.
2. Different groups in general tracked, i.e., if one of the groups was significantly smaller at a particular mesh point than at another mesh point, the other groups would exhibit similar behavior.
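As a concrete, much lower-order illustration of this construction (and not the author's order-10 cylindrical algorithm), the sketch below carries out the same steps at order 2 in 2D Cartesian coordinates: expand the potential in a truncated series, impose Laplace's equation in a neighborhood of the central point, supply the remaining equations from surrounding mesh points, and solve for c0. At this order the procedure reproduces the familiar five-point star formula. The use of sympy is an assumption for illustration only.

```python
import sympy as sp

x, y = sp.symbols('x y')
c = sp.symbols('c0:6')                      # coefficients of the truncated series
v = c[0] + c[1]*x + c[2]*y + c[3]*x**2 + c[4]*x*y + c[5]*y**2

# Laplace's equation must hold in a neighbourhood of the central point:
# every monomial coefficient of the Laplacian must vanish.
lap = sp.expand(sp.diff(v, x, 2) + sp.diff(v, y, 2))
laplace_eqs = [sp.Eq(lap.coeff(x, i).coeff(y, j), 0)
               for i in range(2) for j in range(2)
               if lap.coeff(x, i).coeff(y, j) != 0]

# The remaining equations come from evaluating the series at surrounding points.
neighbours = [(1, 0), (-1, 0), (0, 1), (0, -1)]
v_n = sp.symbols('v0:4')                    # known potentials at the neighbours
mesh_eqs = [sp.Eq(v.subs({x: dx, y: dy}), vn)
            for (dx, dy), vn in zip(neighbours, v_n)]

sol = sp.solve(laplace_eqs + mesh_eqs, c, dict=True)[0]
print(sp.simplify(sol[c[0]]))               # c0 = (v0 + v1 + v2 + v3)/4
```

The order-10 cylindrical case differs in that the off-axis Laplace operator brings in the radial distance and the neighbour set comes from the two surrounding rings, but the bookkeeping is the same, only with 66 unknowns.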
Thus, it appeared that there was a relationship among the groups of coefficients, and it seemed possible that there might in fact be a relation between a coefficient group and the neglected terms of the series and, thus, that the precision of c_0 might be related to a group value. From this insight it was decided to try the group of 6th-order coefficients {c_{21}, ..., c_{27}} in forming an estimate of the neglected terms of the series. To this end, grad6 is defined as the root-sum-square (essentially the RMS, up to a constant factor) of the 6th-order coefficients:

grad6(r', z') = \sqrt{ \sum_{j=21}^{27} c_j(r', z')^2 },   (30.2)

where the dependence of c_j on the mesh point (r', z') has been explicitly displayed. For any point in the net the set of 6th-order coefficients can be determined, and grad6 can be evaluated using (30.2), which results in the two-dimensional coordinate space of the geometry being mapped into a linear space by the grad6 function. The question is, does it provide a reasonable estimate of the algorithmic error at a given point in the net? The answer will turn out to be that it unfortunately provides a rather poor estimate of the actual error but has properties that enable it to still be quite useful, as will be explored and explained in the next section.
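In code, (30.2) is a one-line reduction over the local coefficient vector. The sketch below assumes the 66-element coefficient vector (c0 ... c65) of the order-10 expansion has already been obtained at a mesh point; it is illustrative only.

```python
import numpy as np

def grad6(coeffs):
    """grad6 per Eq. (30.2): square root of the summed squares of the
    6th-order coefficients c21..c27 of the local order-10 expansion.
    `coeffs` is assumed to be the length-66 coefficient vector at (r', z')."""
    return np.sqrt(np.sum(np.asarray(coeffs)[21:28] ** 2))
```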
30.3 Properties of the grad6 Function and the Definition of the Maximum-Error Function

The geometries used in this and later sections are described in Appendix A, and except for the concentric sphere geometry, the potentials have been obtained using the multiregion relaxation method described herein and have an estimated accuracy of less than 10−14 for points not in the immediate vicinity of an edge or corner point of an element. Test nets are derived from the reference net by scaling all points in the reference net by an integral factor and then including in the test net only those points having integer coordinates. This produces a test net with much coarser mesh spacing than the reference net, having the potential precision of the reference net at any mesh point within the test net. The algorithmic error at a point in the test net is determined by evaluating the algorithm at that point using the surrounding points of the test net and then comparing the obtained value with the more precise value at the point itself, which is the value of the corresponding point in the reference net.
As an example, using the net of tube 11 (see Appendix A), the grad6 function and the order-10 algorithmic error can be determined for all points within the net and are plotted with respect to one another in Fig. 30.1. It should be noted that all general mesh points of the tube-11 geometry are represented on the grad6 axis. Seen from Fig. 30.1 is that for a given value of grad6, there is a considerable spread in the algorithmic error. Hence, as mentioned above, it does a rather poor job of
Fig. 30.1 The algorithmic errors are plotted vs. grad6 for the zero-gap two-tube lens. The reference net is 161 × 1921 and the reduction factor is 4 to produce the test net of 81 × 961
estimating the algorithmic precision for those mesh points with a given value of grad6. However, it is also apparent from Fig. 30.1 that the plot, though having an ill-defined lower bound, has a rather sharp cutoff in the error for a fixed value of grad6. Thus, the algorithmic error curve is observed to have a reasonably well-defined upper bound, and the maximum error function is taken to be this upper bound. The maximum error function is constructed by forming a partition of the log grad6 axis into a set of contiguous intervals of width 0.2 and then determining the maximum of the algorithmic error values in each interval. (It is noted that for values of log grad6 < −8, the algorithmic error has reached the limit of the calculational precision of ∼10−16, and hence in Fig. 30.1, the independence of the algorithmic error on grad6 for values less than 10−8 is an artifact of the calculation.) For any point in the physical geometry with a value of log grad6 = −7 one cannot say what the algorithmic precision will be, since it ranges from ∼10−14 to ∼10−16; one can say, however, that it must be less than ∼3 × 10−14. This is in fact a strong statement, since one could then ensure that whenever the algorithm is applied to a point in the geometry whose value of grad6 is less than 10−7, the algorithmic error would necessarily be less than ∼3 × 10−14. Thus, one has the means of controlling the algorithmic errors at any point in the mesh during a relaxation. This paragraph embodies the essence of our method.
The dependence of the algorithmic errors for test geometry g5 for four different meshes is shown in Fig. 30.2. We see that the maximum error function is reasonably independent of the density of the mesh overlaying the geometry, which is particularly significant considering that a change in mesh density completely relabels the points surrounding any value of grad6. Thus, the algorithmic error is bounded by a
Fig. 30.2 Comparison of the algorithmic error vs. grad6 for geometry g5 using four different size meshes. Base net is 321 × 321. Little dependence on mesh size or density of the maximum is seen at any point on the log grad6 axis
maximum error function independent of the mesh density of the net overlaying the geometry. (The scatter of a small set of points for the 161 × 161 mesh for grad6 < 10−8 is felt to be a reflection of the inaccuracy in the potential calculation at these points in the reference net.) Figure 30.3 shows the results from the tube-11 geometry, and the same behavior as for geometry g5 is observed. A plot of the maximum error function itself for the three mesh densities for this geometry (tube 11) is given in Fig. 30.4, again
Fig. 30.3 A similar data set as in Fig. 30.2 is used with similar results; namely, little dependence on mesh size is found on the maximum at any point on log grad6 axis
Fig. 30.4 A plot of the maximum error function for the three meshes for tube 11
Fig. 30.5 Comparisons of the maximum error function for the three geometries tube 11, tube 20, and g5. Apart from some small amount of scatter, the curves are remarkably similar
Fig. 30.6 A comparison between the average of the geometries g5, tube 11, and tube 20 with g6 (a considerably stronger field situation) and concentric sphere (a considerably weaker field situation). The maximum error function does show a weak but measurable dependence on geometry
demonstrating the considerable independence of the maximum error function on mesh density. The dependence of the maximum error function on geometries (tube 11, tube 20, and g5) is shown in Fig. 30.5. For each geometry represented in the figure, the algorithmic error function is found by combining the algorithmic error for the three mesh densities associated with the geometry. We see that the maximum error function is similar for all geometries in this class. To see the effect in a geometry with a considerably stronger field than those used in Fig. 30.5, the maximum error function was determined for geometry g6 and plotted in Fig. 30.6 along with the averaged results of Fig. 30.5 as well as those from the concentric sphere. We see that the maximum error function does depend measurably, albeit weakly, on the class of geometries from which it is derived. (The implication is weak in the following sense: Consider that a maximum algorithmic precision of ∼10−13 is required. Determining the required value of grad6 using Fig. 30.6 for the class containing tube 11, a value of log grad6 of −7.0 is indicated, whereas for the geometries in a class represented by g6, a value of −7.2 to −7.4 is required, the difference between the two log grad6 values being quite small.)
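The binning construction of the maximum error function described in the discussion of Fig. 30.1 (contiguous log grad6 intervals of width 0.2, taking the maximum observed algorithmic error in each) is simple to express in code. The following is a minimal sketch under the assumption that arrays of grad6 values and measured algorithmic errors for the test net are already available.

```python
import numpy as np

def max_error_function(grad6_vals, alg_errors, bin_width=0.2):
    """Partition the log grad6 axis into intervals of width `bin_width` and
    return, for each occupied interval, its centre and the maximum
    algorithmic error observed within it."""
    logg = np.log10(np.asarray(grad6_vals))
    errs = np.asarray(alg_errors)
    bins = np.floor(logg / bin_width).astype(int)
    centers, max_err = [], []
    for b in np.unique(bins):
        centers.append((b + 0.5) * bin_width)
        max_err.append(errs[bins == b].max())
    return np.array(centers), np.array(max_err)
```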
30.4 Comparison of Different Algorithms for the Two-Tube Zero-Gap Lens

In Fig. 30.7 various algorithms previously described in the literature are compared along with the present order-10 algorithm. It is noted that the order-2 algorithm is the familiar five-point or star algorithm and the order-4 algorithm is also known as the
Fig. 30.7 A comparison of the numerous algorithms in use along with the order-10 algorithm presented here for a general mesh point in the geometry. (See text for a discussion)
nine-point or Durand algorithm [11]. Several features are apparent from Fig. 30.7. The first is that there is a clear separation among the order-2, order-4, and order-10 algorithms. The second is that the slope of the order-10 algorithm is considerably steeper than that of the order-4 algorithm; hence, at a grad6 value of 10−8, the order-10 algorithm is six orders of magnitude more precise than the others in current use. The third feature is that for grad6 values of ∼10−3, all algorithms have essentially the same error.
It is believed that Fig. 30.7 represents the first time that the Kuyatt and Durand algorithms have been compared in a two-tube lens over all of the points in the lens. The Durand algorithm gives a very slightly improved precision over the Kuyatt algorithm, which is perhaps a little surprising in that the Kuyatt algorithm was created ∼15 years after the Durand algorithm. (Since both are nine-point algorithms, it is not unexpected that they are approximately equivalent.)
The last remark of this section is that the observed equivalence of algorithmic errors for grad6 values >10−3, as mentioned above, is consistent with the following well-known observation: when different algorithms are used in the relaxation process of single-region nets [10], there is a similarity in the resultant errors independent of the algorithm and its single-point precision characteristics as determined from evaluations in precise geometries. This point was also made in the introduction. This effect is likely due to the fact that in any geometry with edge or corner points, grad6 values of 10−3 and higher will be realized in a single-region net, and hence, as described above, the algorithmic errors will be large
for these points. These errors can in fact dominate the error spectra throughout the net [9, 13].
30.5 Application to Region Construction

The goal of the multiregion construction is to determine a region structure so that during the relaxation process, no mesh point that is being relaxed has an algorithmic error greater than the precision requirement for the relaxed net. The process of region construction that will accomplish this goal follows from the above discussion and is briefly described here.
Begin with a base net overlaying the geometry and a desired precision requirement for the relaxed net. Determine a value of grad6 (call it v6) such that the maximum error function at v6 is less than the desired precision. Using this value, the algorithmic error at any mesh point that has a grad6 value less than or equal to v6 will be less than the desired precision. Relax the entire base net, which is to be considered the first parent region. After relaxing the parent net, find the grad6 values for all mesh points. Draw a rectangle around all points with grad6 values larger than v6. The points within this rectangle will be the child region of the parent. Relax the parent along with its child (note that in relaxing the parent, the points within the child are excluded from the relaxation process; hence all points being relaxed in the parent will have grad6 values smaller than v6). In the child region, again draw a rectangle around all points with grad6 values larger than v6. Continue this process until the appropriate number of regions has been constructed. (It should be noted that the actual number of child regions required for a given precision must be independently determined. For a precision of ∼10−15, approximately 36 regions have been found to be more than sufficient for the geometries described in this chapter.) Then do a final relax of the region structure. (A schematic sketch of this construction loop is given at the end of this section.)
In the process of region construction, the criterion for ending a relax can be considerably less precise than that required during the final relax, since one needs only to find roughly where the regions are placed and not highly precise values of the potentials within the net. Thus, one creates the region structure using a less precise ending criterion and then does a final high-precision relax of this structure. Although many details of the process are beyond the scope of the present discussion and have been omitted from the above description, it should be clear that in the final relaxation of the structure no point is relaxed that has a grad6 value greater than v6 and, hence, the algorithmic errors at all points have been bounded by the precision requirement. Thus, the goal of our region construction process and a major goal of this chapter have been attained.
An example of the above region structure for tube 20 (161 × 1921 base net) is shown in Fig. 30.8, where the focus is on the central area of the lens and only the first six regions are drawn. The value of v6 used was 10−8. We see that the first two regions encompass both edge points and serve to reduce the gradient regions so
Fig. 30.8 The edges of the two cylinders are at r = 80 and z = 632 and 648, respectively. Only the first 6 regions of the 40 regions surrounding each edge point are shown. Seen is that the first two regions encompass both edges while further region development telescopes the regions into the respective edge points. The v6 parameter was 10−8
that subsequent regions telescope onto the respective edges. As mentioned above, a total of 36 regions were used that resulted in an estimated maximum axis error of ∼10−14 .
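The construction loop described above can be summarized schematically as follows. This is a sketch only: the relaxation routine, the grad6 evaluation, and the region representation (here a simple index slab of the base net) are assumed to be supplied by the surrounding FDM code and are hypothetical names, not the author's implementation.

```python
import numpy as np

def build_region_structure(base_shape, v6, relax_structure, grad6_on, n_regions=36):
    """Auto-construction loop of Section 30.5 (schematic).
    relax_structure(regions): relaxes the current structure; points of a parent
                              lying inside its child are excluded from the relax.
    grad6_on(region):         array of grad6 values on the points of `region`.
    A region is represented as an index slab (r0, r1, z0, z1) of the base net."""
    regions = [(0, base_shape[0], 0, base_shape[1])]     # base net = first parent
    for _ in range(n_regions):
        relax_structure(regions)             # a coarse ending criterion suffices here
        g6 = grad6_on(regions[-1])           # examine the innermost region
        bad_r, bad_z = np.nonzero(g6 > v6)
        if bad_r.size == 0:                  # every relaxed point already meets v6
            break
        r0, _, z0, _ = regions[-1]
        regions.append((r0 + bad_r.min(), r0 + bad_r.max() + 1,   # rectangle around
                        z0 + bad_z.min(), z0 + bad_z.max() + 1))  # the offending points
    relax_structure(regions)                 # final high-precision relax
    return regions
```

The telescoping of the rectangles onto the singular points seen in Fig. 30.8 is exactly the behaviour of this loop: each new child encloses only the points whose grad6 still exceeds v6.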
30.6 Dependence of Algorithm Precision upon the Set of Surrounding Points

As mentioned above, when solving for an order-10 algorithm, 66 equations are required, 45 coming from Laplace's equation and 21 coming from the surrounding mesh points. This set of 21 surrounding mesh points is denoted as {cj21}. The question to be answered in this section is simply: does it make any difference to the resulting precision of the derived algorithm which 21 points are selected?
The minimal set of boundary point types (33 types) occurs for points restricted to the two rings surrounding the central point [11]. There are 24 mesh points in these two rings, and 21 are selected (3 are absent) from the entire set. It was suspected that those points somehow "closest" to the central point would give the most precise algorithm. The sum of the squared distances of the absent points from the central point is used as a measure of the compactness of the set; this measure is called sumd3. (The set with the highest value of sumd3 is the one whose retained points lie closest to the central point.) The squared distance of an individual point from the central point is labeled dj. Figure 30.9 gives the value of dj for the 24 mesh points in the two rings surrounding the central point, labeled o.
Fig. 30.9 The value of dj for the 24 mesh points; o is the central point:

    8  5  4  5  8
    5  2  1  2  5
    4  1  o  1  4
    5  2  1  2  5
    8  5  4  5  8
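Using the dj values of Fig. 30.9, the compactness measure sumd3 of a candidate 21-point set (the sum of dj over its three absent points) can be enumerated directly; the short sketch below, offered only as an illustration, also reproduces the counts quoted in the text.

```python
from itertools import combinations

# The 24 neighbours in the two rings around a central point, with their
# squared distances d_j (these reproduce the values shown in Fig. 30.9).
ring = [(i, j) for i in range(-2, 3) for j in range(-2, 3) if (i, j) != (0, 0)]
d = {p: p[0] ** 2 + p[1] ** 2 for p in ring}

# sumd3 of a candidate 21-point set = sum of d_j over its three absent points.
triples = list(combinations(ring, 3))
sumd3 = sorted({sum(d[p] for p in absent) for absent in triples}, reverse=True)

print(len(triples))   # 2024 candidate sets, the "~2000" of the text
print(sumd3[:2])      # [24, 21]: the largest and next-largest sumd3 values
```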
The complete set of possible 21 mesh-point combinations contains ∼2000 individual sets, each combination containing 21 elements. As it takes ∼1000 seconds to compute any particular solution of the 66 linear equations, considering all entries would be both impractical and probably without value. To form a subset of the complete set that will be analyzed and used below, the complete set of 24 points was considered as a basis. From this basis the upper left corner was selected as the first absent mesh point, and the remaining two absent points were selected from the outer ring. This set had ∼100 elements. A small selection of combinations was also obtained using only one absent point from the outer ring and two absent points from the inner ring, and the complete set having all three absent points in the inner ring was obtained, which reflects the least compact sets.
Figure 30.10 shows the results of algorithm precision vs. sumd3 for the above mesh-point sets. It is noted that for sumd3 = 18 or 21, two of the points within the set were from the corners of the outer ring, the remaining point having a dj of either 4 or 5. Also, for the smaller values of sumd3 (≤ 8), one or two of the absent points were taken from the inner ring. Figure 30.10 demonstrates a clear trend: the precision becomes worse for decreasing sumd3. Furthermore, no value of sumd3 seems to give a higher precision than the highest value of sumd3 with a solution. Thus, as perhaps expected, the most precise algorithm is seen to occur for the largest value of sumd3 having a valid algorithmic solution, and the error increases markedly for smaller values of sumd3, being worst when two or three of the missing points are taken from the inner ring.
It is noted that sumd3 = 24 is in fact the largest possible value of sumd3, representing all missing points at the corners of the outer ring. However, no element in this set had a valid solution: each combination with the three absent points coming from corners of the outer ring resulted in either redundant or inconsistent equations. The next largest value of sumd3 was 21 and consisted of a set of 48 distinct elements, 16 of which were redundant or inconsistent, leaving a set of 32 elements with valid solutions. Each element in this set, denoted by {cj21}, had two corner points, the remaining point having a dj value of 5, which for a given corner point combination could be selected in eight ways. For each of the eight ways of
Fig. 30.10 The log of algorithmic error for mesh points having grad6 = 10−7 vs. sumd3
selecting the remaining point, one of four sets of c0 points was obtained; these are denoted by {c018}. The four sets of {c018} are shown in Fig. 30.11, together with a representative from the set {cj21}. The labels of the four distinct sets of {c018}, A to D, are defined in Fig. 30.11. The value of the log of the algorithmic error for a value of grad6 = 10−7 is given in Table 30.1 for the three geometries, which are listed in order of decreasing precision. We see that for any of the three geometries the precisions of all elements of {c018} are essentially degenerate. In addition, there does not seem to be any element of the four sets that has a consistently higher precision for all of the studied geometries. Said in a somewhat simpler manner, it does not seem to make any difference which of the four sets is used as the "closest set" for the algorithm. This is perhaps the first time that such a discussion of algorithmic precision vs. point selection has been presented. The above conclusion also reflects the author's experience over the past 2 years in determining the set of mesh points that would produce the most precise algorithms; namely, the closest set is either equivalent to or does somewhat better than the many other choices.
30.7 Notes of Caution

The first note of caution has already been alluded to in the above: to achieve the benefits of the high-order algorithms, the net must be overlaid with a multiregion structure. Not using a multiregion structure limits the precision one can attain to 10−3 to 10−4, independent of the precision of the algorithm. Put slightly differently, high-order algorithms won't significantly affect the precision of single-region calculations.
Fig. 30.11 In the four panels shown, the left pane gives a representative of the set of 21 mesh points for which the 66 equations were solved for c0 , and the right pane gives the set of 18 mesh points actually used by the solution c0
Table 30.1 The value of the log of the algorithmic error for mesh points having a grad6 value of 10−7 is given for each of the mesh-point sets for c0. A to D are defined in Fig. 30.11 for the three distinct geometries. Entries are listed in order of most precise to least precise.

Tube 20          g5               g6
A = −14.09016    D = −13.98598    C = −13.89000
B = −14.05536    A = −13.97919    A = −13.88949
C = −14.03746    C = −13.97874    D = −13.88825
D = −14.03656    B = −13.97628    B = −13.88014
The second note of caution is that in this chapter the only class of mesh points considered were general mesh points, since, as mentioned above, these were sufficient to define, demonstrate, and test the ideas and concepts presented here. At present, most of the 10th-order algorithms developed by the author for other types of mesh points (i.e., one unit from a metal surface, on axis, one unit from the axis, etc.) have single-point precisions similar to the general mesh point, with one very notable exception: the algorithm for a point one unit from a boundary line, which is required when relaxing a child region. Since mesh points on the "other side of the line" are not in the child region, they cannot be used in an algorithm determination. The result is that a non-closest set of neighboring points must be used, which degrades the precision by a factor of ∼100. This degradation is mitigated by the fact that these mesh points are in a child region and have a considerably smaller value of grad6 than the parent points on the adjacent line; hence the resulting loss of precision is somewhat reduced. However, these points still affect the computational precision, particularly in the vicinity of a region boundary. Algorithm determination for this class of mesh points is an area of ongoing study, the goal of which is to obtain an algorithm with precisions similar to the general mesh-point algorithm.
30.8 Summary and Conclusion

In this work a grad6 function has been defined that allows simultaneous testing and comparison of algorithms at all mesh points in the geometry (see Fig. 30.7). This is useful not only for comparing different algorithms but also for the development of a single algorithm, since in that development, as mentioned above, one has many different neighboring points available to select, and each selection may result in differing algorithm precision characteristics. The second and considerably more significant use of the grad6 function is that it enables the definition of a maximum error function, which is used in the auto-establishing of the multiregion structure essential for high-precision calculations.
In the framework described in Section 30.5, the only user choices in determining a region structure are the value of grad6 to employ, which may be found from curves such as those in Fig. 30.6, and the total number of regions required. The subsequent placement of the entire network of regions is done in a relatively straightforward procedure as outlined above, an example of which has been shown in Fig. 30.8. The conclusion and major result is that with the use of the grad6 and maximum error functions, the required multiregion structure is readily developed for problems requiring high-precision potential calculations in cylindrically symmetric geometries.
Appendix A

The geometries used in this study are two versions of the two-tube lens and two test lenses. The two-tube lenses are essentially two thick-wall cylinders, each closed at one end and joined together with a possible separating gap at their open ends. Tube 11 has zero gap, and tube 20 has a gap-to-diameter ratio of 0.1. The nomenclature used in this chapter for the base net of all geometries is nr × nz, where nr is the number of radial points to the outer radius of the cylinders and nz is the number of points along the z axis. For the two-tube lenses the inner radius is (nr − 1)/2 and is 1/2 of the lens diameter. In both two-tube lenses the center of the gap is at the lens center. The test geometries used are specially constructed lenses that consist of an outer cylinder at 0 V (closed at both ends) and a centrally located inner disk at a potential of 10 V. These are shown in Fig. 30.12. Geometry g6 is essentially that of g5 with the addition of rectangular elements in the upper left and right corners to give a field situation in which the corner of one element faces the corner of another element, which is clearly considerably different from the fields of g5.
Fig. 30.12 Test geometries g5 and g6 used (321 × 321 nets; outer cylinder at v = 0, inner disk at v = 10)
References

1. F.H. Read (1970) Journal of Computational Physics, 6: 527.
2. K. Halbach and R.F. Holsinger (1976) Particle Acceleration, 7: 213.
3. F.H. Read, A. Adams, and J.R. Soto-Montiel (1971) Journal of Physics E, 4: 625.
4. H.D. Ferguson, J.E. Spencer, and K. Halbach (1976) Nuclear Instruments and Methods, 134: 409.
5. P.H. Rose, A. Galejs, and L. Pecie (1964) Nuclear Instruments and Methods, 31: 262.
6. P.T. Kirstein and J.S. Hornsby (1964) IEEE Transactions on Electron Devices, 11: 196.
7. J.E. Boers (1965) IEEE Transactions on Electron Devices, 12: 425.
8. S. Natali, D. DiChiro, and C.E. Kuyatt (1972) Journal of Research of the National Bureau of Standards, Section A, 76: 27.
9. D. Edwards, Jr. (1983) Review of Scientific Instruments, 54: 1729.
10. D.W.O. Heddle (1999) Journal of Physics D: Applied Physics, 32: 1447–1454.
11. E. Durand (1957) Comptes rendus de l'Académie des sciences Paris, 244: 2355.
12. D. Edwards, Jr. (2007) Review of Scientific Instruments, 78: 1–10; IMECS 2007, Hong Kong, March 21–24, published in conference proceedings; EUROCON 2007, Warsaw, September 9–12, to be published in conference proceedings.
13. A. Skollermo and G. Skollermo (1978) Journal of Comparative Physiology, 28: 103.
Chapter 31
Newton–Tau Method
Karim Ivaz and Bahram Sadigh Mostahkam
Department of Mathematical Sciences, University of Tabriz, Tabriz, Iran
31.1 Introduction

Many applied problems have their natural mathematical setting as integral and integro-differential equations, and such formulations often admit simpler methods of solution. In addition, a large class of initial and boundary value problems associated with differential equations can be reduced to integral equations. Problems in human population, mortality of equipment and its rate of replacement, biological species living together, torsion of a wire, automatic control of a rotating shaft, radiation transport and determining the energy spectrum of neutrons, and electromagnetic fields [1] are some of the fields that lead to integral and integro-differential equations. Many numerical and analytic methods for solving integral and integro-differential equations exist, but few of them address nonlinear equations. The Tau method is used for solving integral and differential equations [2–5]. Here we combine Newton's method and the Tau method for solving nonlinear Fredholm integral and integro-differential equations.
31.2 Solving Nonlinear Fredholm Integral Equation

31.2.1 Formulation of the Problem

The one-dimensional nonlinear Fredholm integral equation of the second kind in u(t) is

u(t) = f(t) + \int_a^b K(t, s, u(s)) ds,  t ∈ [a, b],   (31.1)

where f(t) and the nonlinear kernel K(t, s, u(s)) are given.
For numerical solution of equations of the type (31.1) with the Newton–Tau method, equation (31.1) is first converted to a linear integral equation by using Newton's method; then, by applying the Tau method to this linear equation, a numerical solution is obtained. These steps are described in the following sections.
31.2.2 Application of the Newton Method

Now we apply the Newton method to linearize (31.1). For this purpose assume that:
1. f ∈ C[a, b].
2. K ∈ C([a, b] × [a, b] × R) and is continuously differentiable with respect to its third argument.

We introduce an operator T : V → V, V = C[a, b], through the formula

T(u)(t) = u(t) − f(t) − \int_a^b K(t, s, u(s)) ds,  t ∈ [a, b].

So the integral equation (31.1) can be written in the form

T(u) = 0.   (31.2)

Newton's method for this problem is

u_{n+1} = u_n − [T'(u_n)]^{−1} T(u_n),

or equivalently,

T'(u_n)(u_{n+1} − u_n) = −T(u_n),

where T'(u_n) is the Frechet derivative of T at u_n. Let us compute the derivative of T:

T'(u)(v)(t) = \lim_{h→0} \frac{1}{h} [T(u + hv)(t) − T(u)(t)]
            = \lim_{h→0} \frac{1}{h} [h v(t) − \int_a^b (K(t, s, u(s) + h v(s)) − K(t, s, u(s))) ds]
            = v(t) − \int_a^b \frac{\partial K(t, s, u(s))}{\partial u} v(s) ds.   (31.3)

Therefore, the corresponding Newton iteration formula is

δ_{n+1}(t) − \int_a^b \frac{\partial K(t, s, u_n(s))}{\partial u} δ_{n+1}(s) ds = −u_n(t) + f(t) + \int_a^b K(t, s, u_n(s)) ds,
u_{n+1}(t) = u_n(t) + δ_{n+1}(t).   (31.4)

At each step, we solve a linear integral equation. We also assume that u* is a root of equation (31.2) such that [T'(u*)]^{−1} exists and is a continuous map from V to V.
Assume further that T'(u) is locally Lipschitz continuous at u*:

\| T'(u) − T'(v) \| ≤ L \| u − v \|,  ∀ u, v ∈ N(u*),

where N(u*) is a neighborhood of u* and L > 0 is a constant. Then, by application of the local convergence theorem [6, 7], there exists a β > 0 such that if \| u_0 − u* \| ≤ β, the Newton sequence {u_n} is well-defined and converges to u*. Furthermore, for some constant M we have the error bounds

\| u_{n+1} − u* \| ≤ M \| u_n − u* \|^2,   \| u_n − u* \| ≤ \frac{(Mβ)^{2^n}}{M}.
31.2.3 Application of the Tau Method

Consider equation (31.4),

δ_{n+1}(t) − \int_a^b \frac{\partial K(t, s, u_n(s))}{\partial u} δ_{n+1}(s) ds = −u_n(t) + f(t) + \int_a^b K(t, s, u_n(s)) ds,
u_{n+1}(t) = u_n(t) + δ_{n+1}(t),  n = 0, 1, 2, ...

Let B^T = (1, t, t^2, t^3, ...) be the standard polynomial basis. Now we convert equation (31.4) to the corresponding linear algebraic equations. Let us assume that

\frac{\partial K(t, s, u_n(s))}{\partial u} = \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} K_{ij} t^i s^j,

δ_{n+1}(t) = \sum_{i=0}^{\infty} a_i t^i = aB,

−u_n(t) + f(t) + \int_a^b K(t, s, u_n(s)) ds = \sum_{i=0}^{\infty} f_i t^i = fB,

with a = (a_0, a_1, a_2, ...), f = (f_0, f_1, f_2, ...). Then one can write

\int_a^b \frac{\partial K(t, s, u_n(s))}{\partial u} δ_{n+1}(s) ds = \sum_{l=0}^{\infty} \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} K_{ij} a_l t^i \int_a^b s^j s^l ds = aKB,

where

K = \begin{pmatrix} \sum_{j=0}^{\infty} K_{0j} α_{j0} & \cdots & \sum_{j=0}^{\infty} K_{nj} α_{j0} & \cdots \\ \vdots & & \vdots & \\ \sum_{j=0}^{\infty} K_{0j} α_{jn} & \cdots & \sum_{j=0}^{\infty} K_{nj} α_{jn} & \cdots \\ \vdots & & \vdots & \end{pmatrix}

with

α_{jl} = \int_a^b s^j s^l ds = \frac{1}{j+l+1} (b^{j+l+1} − a^{j+l+1}),  j, l = 0, 1, 2, ...,

and the coefficients of the exact solution δ_{n+1}(t) of problem (31.4) satisfy the following infinite algebraic system:

a(I − K) = f.

Definition. The polynomial δ̂_{n+1} = a_n B will be called an approximate solution of (31.4) if the vector a_n = (a_0, a_1, ..., a_n) is the solution of the system of linear algebraic equations a_n(I − K_n) = f_n, where I − K_n is the matrix defined by the restriction of I − K to its first (n + 1) rows and columns.
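Assembling and solving the truncated system a_n(I − K_n) = f_n is a small linear-algebra exercise once the expansion coefficients are available. The sketch below assumes that the coefficients K_{ij} of the kernel derivative and the right-hand-side coefficients f_i have already been computed by the linearization step; the function name and interface are illustrative, not part of the chapter.

```python
import numpy as np

def tau_solve(Kcoef, fcoef, a_lim, b_lim):
    """Truncated Tau step (sketch). Kcoef[i, j] is the coefficient of t^i s^j in
    the expansion of dK/du(t, s, u_n(s)); fcoef holds the polynomial
    coefficients of the right-hand side of (31.4). Builds
    alpha[j, l] = (b^{j+l+1} - a^{j+l+1}) / (j+l+1), forms
    K_n[l, i] = sum_j Kcoef[i, j] * alpha[j, l], and solves a_n (I - K_n) = f_n."""
    Kcoef, fcoef = np.asarray(Kcoef, float), np.asarray(fcoef, float)
    n1 = len(fcoef)
    j = np.arange(n1)
    p = j[:, None] + j[None, :] + 1
    alpha = (b_lim ** p - a_lim ** p) / p
    Kn = alpha.T @ Kcoef.T                 # Kn[l, i] = sum_j alpha[j, l] * Kcoef[i, j]
    # a_n (I - K_n) = f_n  <=>  (I - K_n)^T a_n^T = f_n^T
    return np.linalg.solve((np.eye(n1) - Kn).T, fcoef)
```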
31.2.4 Numerical Examples

In this section we apply the Newton–Tau method to some examples in order to compare numerical solutions with exact solutions.

Example 1. Consider the following nonlinear integral equation:

u(x) = x − e^x + \int_0^1 e^{x + t − u(t)} dt,  x ∈ [0, 1],
with exact solution u(x) = x. The numerical results with initial guess u0 (x) = 0 are given in the Table 31.1 and Fig. 31.1 (m, number of iterations in Newton’s method; n, degree of polynomial in Tau’s method).
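To make the Newton outer iteration (31.4) concrete, the following minimal sketch applies it to Example 1. For simplicity the linearized Fredholm equation at each step is solved here by a Nyström (trapezoidal quadrature) discretization rather than by the Tau method of Section 31.2.3, so the final accuracy is limited by the quadrature, not by m or n; the grid size and iteration count are arbitrary choices for illustration.

```python
import numpy as np

# Example 1: kernel, its u-derivative, and f(x); exact solution is u(x) = x.
K   = lambda x, t, u: np.exp(x + t - u)
K_u = lambda x, t, u: -np.exp(x + t - u)
f   = lambda x: x - np.exp(x)

# Trapezoidal nodes/weights on [0, 1].
N = 101
x = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1)); w[0] = w[-1] = 0.5 / (N - 1)

u = np.zeros(N)                                   # initial guess u0(x) = 0
for _ in range(6):                                # Newton iterations, Eq. (31.4)
    A   = K_u(x[:, None], x[None, :], u[None, :]) * w      # discretized Frechet derivative
    rhs = -u + f(x) + (K(x[:, None], x[None, :], u[None, :]) * w).sum(axis=1)
    delta = np.linalg.solve(np.eye(N) - A, rhs)   # linear Fredholm equation for delta
    u = u + delta

print(np.max(np.abs(u - x)))   # small; limited by the trapezoidal quadrature error
```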
Table 31.1 Numerical results for Example 1 (m = 3, n = 1)

x     Computed u(x)   Relative error
0.0   −0.00000010     —
0.2   0.19999978      1.1000 e-006
0.4   0.39999967      8.2500 e-007
0.6   0.59999956      7.3333 e-007
0.8   0.79999944      7.0000 e-007
1.0   0.99999933      6.7000 e-007
Table 31.2 Numerical results for Example 2 (m = 3, n = 3)

x     Computed u(x)   Relative error
0.0   0.00012987      —
0.2   0.00812987      1.6234 e-002
0.4   0.06412987      2.0292 e-003
0.6   0.21612987      6.0125 e-004
0.8   0.51212987      2.5365 e-004
1.0   1.00012987      1.2987 e-004
Example 2. Consider the following nonlinear integral equation:

u(x) = x^3 + \frac{\cos(1) − 1}{3} + \int_0^1 t^2 \sin(u(t)) dt,  x ∈ [0, 1],
with exact solution u(x) = x^3. The numerical results with initial guess u_0(x) = 0 are given in Tables 31.2 and 31.3 and Fig. 31.2 (m, number of iterations in Newton's method; n, degree of polynomial in the Tau method).

Example 3. Consider the following nonlinear integral equation:

u(x) = \sin(x) − x − 1 + \frac{1}{2}\cos(1)\sin(1) + \int_0^1 (x + t + u^2(t)) dt,
Fig. 31.1 Exact solution (−), approximated points (+)
Fig. 31.2 Exact solution (−), approximated points (+)
with exact solution u(x) = sin(x). The numerical results with initial guess u_0(x) = 0 are given in Tables 31.4 and 31.5 and Fig. 31.3 (m, number of iterations in Newton's method; n, degree of polynomial in the Tau method).
31.3 Solving a System of Nonlinear Integral Equations

31.3.1 Formulation of the Problem

In the last section we derived the Newton–Tau method for solving one-dimensional nonlinear Fredholm integral equations of the second kind. Here we generalize this

Table 31.3 Numerical results for Example 2 (m = 6, n = 3)

x     Computed u(x)   Relative error
0.0   −0.00000006     —
0.2   0.00799994      7.5000 e-006
0.4   0.06399994      9.3750 e-007
0.6   0.21599994      2.7778 e-007
0.8   0.51199994      1.1719 e-007
1.0   0.99999994      6.0000 e-008
Table 31.4 Numerical results for Example 3 (m = 4, n = 5)

x     Computed u(x)   Relative error
0.0   −0.01134198     —
0.2   0.18732735      5.7090 e-002
0.4   0.37807669      2.9125 e-002
0.6   0.55330602      2.0077 e-002
0.8   0.70605535      1.5753 e-002
1.0   0.83032469      1.3246 e-002
Table 31.5 Numerical results for Example 3 (m = 6, n = 5)

x     Computed u(x)   Relative error
0.0   0.00045106      —
0.2   0.19912039      2.2704 e-003
0.4   0.38986973      1.1591 e-003
0.6   0.56509906      8.0864 e-004
0.8   0.71784839      6.8627 e-004
1.0   0.84211773      7.6859 e-004
method for solving a system of nonlinear Fredholm integral equations of the second kind (SNFIE). We consider the following model for a system of nonlinear Fredholm integral equations of the second kind:

U(x) = F(x) + λ . \int_a^b K(x, t, u_1(t), u_2(t), ..., u_d(t)) dt,  x ∈ [a, b],   (31.5)

where

U(x) = [u_1(x), u_2(x), ..., u_d(x)]^T,
F(x) = [f_1(x), f_2(x), ..., f_d(x)]^T,
K(x, t, u_1(t), ..., u_d(t)) = [K_1(x, t, u_1(t), ..., u_d(t)), ..., K_d(x, t, u_1(t), ..., u_d(t))]^T,
λ = [λ_1, λ_2, ..., λ_d]^T.

The functions F and K and the vector λ are given, and U is the solution vector to be determined. For R = [R_1, ..., R_d]^T we define the componentwise product

λ . R = [λ_1 R_1, λ_2 R_2, ..., λ_d R_d]^T.
Fig. 31.3 Exact solution (−), approximated points (+)
31.3.2 Application of the Newton Method to SNFIE Now we apply the Newton method to linearization of the model (5). For this purpose, we assume that 1. fi ∈ C[a, b], i = 1, . . . , d. 2. Ki ∈ C([a, b] × [a, b] × Rd ), i = 1, . . . , d, are continuously differentiable with respect to ui for i = 1, . . . , d. We introduce an operator T :W d → W d , W = C[a, b], through the formula T (U)(x) = U(x) − F(x) − λ .
b a
K(x,t, u1 (t), . . . , ud (t)) dt, x ∈ [a, b].
So the integral equations in (5) can be written in the form T (U) = 0.
(31.6)
Newton’s method for this problem is
Um+1 = Um − [T (Um )]−1 T (Um ), m = 0, 1, 2, . . . , or equivalently,
T (Um )(Um+1 −Um ) = −T (Um ),
where T (Um ) is the Frechet derivative of T at Um . Let us compute the derivative of T , 1 T (U)(V )(x) = lim [T (U + hV )(x) − T (U)(x)] h→0 h b 1 = lim [hV (x) − λ . (K(x,t, u1 (t) + hv1 (t), . . . , ud (t) + hvd (t)) h→0 h a − K(x,t, u1 (t), . . . , ud (t)))dt]
= V (x) − λ .
b a
κ (U(t))V (t)dt,
(31.7)
where ⎛ ∂K ⎜ κ (U(t)) = ⎜ ⎝
1 (x,t,u1 (t),...,ud (t)) ∂ u1
.. .
∂ Kd (x,t,u1 (t),...,ud (t)) ∂ u1
... .. . ...
∂ K1 (x,t,u1 (t),...,ud (t)) ∂ ud
.. .
∂ Kd (x,t,u1 (t),...,ud (t)) ∂ ud
and V (x) = [v1 (x), . . . , vd (x)]T .
⎞ ⎟ ⎟ ⎠
Therefore, the corresponding Newton’s iteration formula is b
δm+1 (x) − λ
a
κ (Um (t))δm+1 (t)dt = −Um (x) + F(x) + λ ×
b
K(x,t, um,1 (t), . . . , um,d (t)) dt,
a
Um+1 (x) = Um (x) + δm+1 (x),
(31.8)
where um,i is the ith element of the approximate vector Um . So at each step, we solve a system of linear integral equations. We also assume that U ∗ is a root of equation (31.6) such that [T (U ∗ )]−1 exists and is a continuous map from W d to W d . Assume further that T (U) is locally Lipschitz continuous at U ∗ ,
T (U) − T (V ) ≤ LU −V , ∀ U,V ∈ N(U ∗ ), where N(U ∗ ) is a neighborhood of U ∗ and L > 0 is a constant. Then by the application of local convergence theorem [6, 9], there exists a δ > 0 such that if U0 −U ∗ ≤ δ , then the Newton sequence {Um } is well-defined and converges to U ∗ . Furthermore, for some constant M we have error bounds m
Um+1 −U ≤ MUm −U ∗ 2
Um −U ∗ ≤
and
(M δ )2 . M
31.3.3 The Tau Method Applied to (8) Consider equation (31.8)
δm+1 (x) − λ .
b a
κ (Um (t))δm+1 (t) dt = −Um (x) + F(x) + λ . ×
b a
K(x,t, um,1 (t), . . . , um,d (t)) dt,
Um+1 (x) = Um (x) + δm+1 (x), m = 0, 1, 2, . . . Let X = {1, x, x2 , x3 , ...} be standard polynomial basis. Now we convert equation (31.8) to the corresponding linear algebraic equations. Let us assume that ∞ ∞ ∂ Ki (x,t, u1 (t), . . . , ud (t)) ij k s |U=Um = ∑ ∑ Kks xt , ∂uj k=0 s=0
δm+1 (x) = [ δm+1,1 (x), . . . , δm+1,d (x) ] = t
∞
i, j = 1, . . . , d ∞
∑ a1 j x , . . . , ∑ ad j x
j=0
j
j=0
T j
= AX,
where A = [ai j ], i = 1, . . . , d,
j = 0, 1, . . .
and b
−Um (x) + F(x) + λ .
a ∞
K(x,t, um,1 (t), . . . , um,d (t)) dt T
∑
=
j=0
∞
f1 j x j , . . . , ∑ fd j x j
= FX
j=0
with F = [ fi j ],
i = 1, . . . , d,
j = 0, 1, . . .
Then one can write b ∂ Ki (x,t, u1 (t), . . . , ud (t))
∂uj
a ∞
=
∞
∑∑
∞
∑ Kksi j a jq xk
q=0 k=0 s=0
|U=Um δm+1, j (t) dt
b a
t st q dt = a j K i j X,
where a j = [a j1 , a j2 , . . .], ⎛
⎞ ∑∞j=0 K0 j α j0 . . . ∑∞j=0 Kn j α j0 . . . ⎜ ⎟ .. .. ⎜ ⎟ . . ⎟ Ki j = ⎜ ⎜ ∑∞ K0 j α jn . . . ∑∞ Kn j α jn . . . ⎟ j=0 ⎝ j=0 ⎠ .. .. . . with
α jl =
b
t j t l dt =
a
1 b j+l+1 − a j+l+1 f or j, l = 0, 1, . . . ; j+l +1
so we have the following form for the integral part of the system: ⎛ d ⎞ ∑ j=1 a j K 1 j X b ⎜ ⎟ .. κ (um (t))δ m+1 (t) dt = ⎝ ⎠. . a
∑dj=1 a j K d j X
and the coefficients of exact solution δm+1 (x) of problem (8) satisfies the following infinite algebraic system aG = g, where a = [ a1 a2 . . . ad ],
and G is a block matrix, ⎛
I − λ1 K 11 −λ2 K 21 ⎜ −λ1 K 12 I − λ2 K 22 ⎜ G =⎜ .. .. ⎝ . . −λ1 K 1d −λ2 K 2d
and
⎞ . . . −λd K d1 . . . −λd K d2 ⎟ ⎟ ⎟ .. .. ⎠ . . . . . I − λd K d1
⎛
⎞ F1t ⎜ Ft ⎟ ⎜ 2⎟ g =⎜ . ⎟ ⎝ .. ⎠ Fdt
where Fi denotes the ith row of matrix F. Definition The polynomial
(δm+1 )n = an X
will be called an approximate solution of (31.8) if an = (a1,n , a2,n , . . . , ad,n ), ai,n = [ai0 , ai1 , . . . , ain ], is the solution of the system of linear algebraic equations an Gn = gn where the elements of Gn are the restriction of the elements of G to first (n + 1) rows and (n + 1) columns, similarly for gn .
31.3.4 Numerical Examples In this section we apply the above methods to some examples in order to compare numerical solutions with analytic solutions. Example 4. Consider the following system of nonlinear Fredholm integral equations of the second kind:
3 u1 (x) − 01 [xtu21 (t) + t 2 u32 (t)] dt = −1 9 + 4 x, 1 3 1 2 2 u2 (x) − 0 [tu1 (t) − xu2 (t)] dt = 9 + 7 x + 45 x2
with exact solution u1 (x) = x and u2 (x) = x2 . The numerical results with initial guess u0,1 (x) = 0 and u0,2 (x) = 0 are given in Tables 31.6 and 31.7, Figs. 31.4 and 31.5 (m, number of iterations in Newton’s method; n, degree of polynomial in Tau’s method). Example 5. Consider the following system of nonlinear Fredholm integral equations of the second kind:
u1 (x) − 01 e−t [u21 (t) − xu1 (t)]u2 (t) dt = 23 − 12 x u2 (x) − 01 (ex−2t u22 (t) + xu31 (t)) dt = −1 4 x
Table 31.6 Numerical results for Example 4 (m = 10, n = 2) x
Computed u1 (x)
Relative error
Computed u2 (x)
Relative error
0.0 0.2 0.4 0.6 0.8 1.0
−0.00000358 0.20004537 0.40009432 0.60014326 0.80019221 1.00024116
2.2685 e-004 2.3580 e-004 2.3877 e-004 2.4026 e-004 2.4116 e-004
0.00032421 0.04024127 0.16016220 0.36008701 0.64001570 0.99994828
6.0318 e-003 1.0138 e-003 2.4169 e-004 2.4531 e-005 5.1720 e-005
Table 31.7 Numerical results for Example 4 (m = 15, n = 3) x
Computed u1 (x)
Relative error
Computed u2 (x)
Relative error
0.0 0.2 0.4 0.6 0.8 1.0
−0.00000016 0.19999980 0.39999976 0.59999971 0.79999967 0.99999963
1.0000 e-006 6.0000 e-007 4.8333 e-007 4.1250 e-007 3.7000 e-007
0.00000105 0.04000080 0.16000053 0.36000025 0.63999995 0.99999965
2.0000 e-005 3.3125 e-006 6.9444 e-007 7.8125 e-008 3.5000 e-007
with exact solution u1 (x) = 1 − x and u2 (x) = ex . The numerical results with initial guess u0,1 (x) = 1 and u0,2 (x) = 1 are given in Tables 31.8 and 31.9, Figs. 31.6 and 31.7 (m, number of iterations in Newton’s method n degree of polynomial in Tau’s method).
0.3
0.25
0.2
0.15
0.1
0.05
0
−0.05
Fig. 31.4 u1 (x) in Table 31.6
0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5
Fig. 31.5 u2 (x) in Table 31.6
1.8
1.7
1.6
1.5
1.4
1.3
1.2
1.1
1 0
0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5
31.4 Solving Nonlinear Integro-Differential Equation 31.4.1 Formulation of the Problem Now we extend the Newton–Tau method for solving nonlinear integro-differential equations. Let us consider the following model for a one-dimensional nonlinear Table 31.8 Numerical results for Example 5 (m = 6, n = 6) x
Computed u1 (x)
Relative error
Computed u2 (x)
Relative error
0.0 0.2 0.4 0.6 0.8 1.0
0.99999061 0.79998426 0.59997791 0.39997156 0.19996521 −0.00004114
9.3900 e-006 1.9675 e-005 3.6817 e-005 7.1100 e-005 1.7395 e-004
1.00006759 1.22147643 1.49190697 1.82220821 2.22560760 2.71819091
6.7590 e-005 6.0316 e-005 5.5147 e-005 4.9069 e-005 2.9957 e-005 3.3448 e-005
Table 31.9 Numerical results for Example 5 (m = 10, n = 10) x
Computed u1 (x)
Relative error
Computed u2 (x)
Relative error
0.0 0.2 0.4 0.6 0.8 1.0
1.00000000 0.80000000 0.60000000 0.40000000 0.20000000 0.00000000
0 0 0 0 0
1.00000000 1.22140275 1.49182469 1.82211880 2.22554092 2.71828180
0 0 0 0 0 7.3576 e-009
Fig. 31.6 u1 (x) in Table 31.8
0.95
0.9
0.85
0.8
0.75
0.7
0.65
0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5
Fredholm integro-differential equation with exact solution u(x),
F(x, u, u , . . . , u(ν ) ) = f (x) + λ
b a
K(x,t, u(t)) dt, x ∈ [a, b]
(31.9)
with supplementary conditions
(1) (k−1) (2) (k−1) u (a) + c u (b) = d j , j = 1, 2, . . . , ν , c ∑ jk jk ν
(31.10)
k=1
where F(x, u, u , . . . , u(ν ) ), f (x), K(x,t, u), a, b, c jk , c jk , and d j are given. In general, F and K are nonlinear functions of u. (1)
(2)
0.45
0.4
0.35
0.3
0.25
0.2
0.15
0.1
0.05
Fig. 31.7 u2 (x) in Table 31.8
0 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5
31.4.2 Application of the Newton Method Now we apply the Newton method to linearize the problem (31.9) and (31.10). For this purpose assume that
1. F is Frechet differentiable and F (x, u, u , . . . , u(ν ) ) = Du(x). 2. f ∈ C[a, b]. 3. K ∈ C([a, b] × [a, b] × R) and is continuously differentiable with respect to its third argument. Here D is a linear differential operator of order ν with polynomial coefficients defined by ν di D := ∑ pi (x) i , dx i=0 αi
pi (x) :=
∑ pi j x j ,
j=0
where αi is the degree of pi (x) and pi = (pi0 , pi1 , ..., piαi , 0, 0, . . .). We introduce an operator T : V → V , V = C[a, b], through the formula
T (u)(x) = F(x, u, u , . . . , u(ν ) ) − f (x) − λ
b a
K(x,t, u(t)) dt, x ∈ [a, b].
So the integro-differential equations (31.9) and (31.10) can be written in the form T (u) = 0, ν
∑
(31.11)
(1) (2) c jk u(k−1) (a) + c jk u(k−1) (b) = d j , j = 1, 2, . . . , ν .
k=1
Newton’s method for this problem is
um+1 = um − [T (um )]−1 T (um ), or equivalently,
T (um )(um+1 − um ) = −T (um ),
where T (um ) is the Frechet derivative of T at um . Let us compute the derivative of T, 1 T (u)(v)(x) = lim [T (u + hv)(x) − T (u)(x)] h→0 h b ∂ K(x,t, u(t)) = Dv(x) − λ v(t) dt. ∂u a
(31.12)
Therefore, the corresponding Newton’s iteration formula is Dδm+1 (x) − λ
b ∂ K(x,t, um (t)) a
+ f (x) + λ with
ν
∑
∂u
(ν )
δm+1 (t) dt = −F(x, um (x), um (x), . . . , um (x))
b a
K(x,t, um (t)) dt
(31.13)
(1) (k−1) (2) (k−1) c jk δm+1 (a) + c jk δm+1 (b) = 0, j = 1, 2, . . . , ν ,
k=1
um+1 (x) = um (x) + δm+1 (x). At each step, we solve a linear integro-differential equation. With assumptions stated in Section 31.2.2, the produced sequence is well-defined and converges to the root of (31.11).
31.4.3 Application of the Tau Method Consider equation (31.13) Dδm+1 (x) − λ
b ∂ K(x,t, um (t)) a
∂u
+ f (x) + λ with
ν
∑
(ν )
δm+1 (t) dt = −F(x, um (x), um (x), . . . , um (x)) b a
K(x,t, um (t)) dt,
(1) (k−1) (2) (k−1) c jk δm+1 (a) + c jk δm+1 (b) = 0, j = 1, 2, . . . , ν ,
k=1
um+1 (x) = um (x) + δm+1 (x),
m = 0, 1, 2, . . .
Let X = \{1, x, x^2, x^3, \ldots\} be the standard polynomial basis. Now we convert equation (31.13) to the corresponding linear algebraic equations. Let us assume that

\frac{\partial K(x, t, u_m(t))}{\partial u} = \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} K_{ij}\, x^i t^j, \qquad \delta_{m+1}(x) = \sum_{i=0}^{\infty} a_i x^i = aX,

where a = (a_0, a_1, a_2, \ldots), and

-F(x, u_m(x), u_m'(x), \ldots, u_m^{(\nu)}(x)) + f(x) + \lambda \int_a^b K(x, t, u_m(t))\, dt = \sum_{i=0}^{\infty} f_i x^i = fX,
with f = (f_0, f_1, f_2, \ldots). Then one can write

\int_a^b \frac{\partial K(x, t, u_m(t))}{\partial u}\, \delta_{m+1}(t)\, dt = \sum_{l=0}^{\infty} \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} K_{ij}\, a_l\, x^i \int_a^b t^j t^l\, dt = aKX,

where

K = \begin{pmatrix}
\sum_{j=0}^{\infty} K_{0j}\alpha_{j0} & \cdots & \sum_{j=0}^{\infty} K_{nj}\alpha_{j0} & \cdots \\
\vdots & & \vdots & \\
\sum_{j=0}^{\infty} K_{0j}\alpha_{jn} & \cdots & \sum_{j=0}^{\infty} K_{nj}\alpha_{jn} & \cdots \\
\vdots & & \vdots &
\end{pmatrix}

with

\alpha_{jl} = \int_a^b t^j t^l\, dt = \frac{1}{j+l+1}\left(b^{j+l+1} - a^{j+l+1}\right), \quad j, l = 0, 1, 2, \ldots
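For illustration only, the truncated matrix K can be assembled directly from the closed form of the moments α_{jl}. The coefficients K_{ij} of the kernel-derivative power series are assumed to be available up to degree n; the names alpha and assemble_K are placeholders, not part of the original text.

```cpp
#include <cmath>
#include <vector>

// Moment integral alpha_{jl} = \int_a^b t^{j+l} dt in closed form.
double alpha(int j, int l, double a, double b) {
    int p = j + l + 1;
    return (std::pow(b, p) - std::pow(a, p)) / p;
}

// Truncated K matrix: K[l][i] = sum_j Kij[i][j] * alpha(j, l), 0 <= i, j, l <= n.
std::vector<std::vector<double>> assemble_K(
    const std::vector<std::vector<double>>& Kij,   // Kij[i][j], truncated kernel coefficients
    double a, double b) {
    const std::size_t n1 = Kij.size();             // n + 1 coefficients per variable
    std::vector<std::vector<double>> K(n1, std::vector<double>(n1, 0.0));
    for (std::size_t l = 0; l < n1; ++l)           // row index l (power of t in delta)
        for (std::size_t i = 0; i < n1; ++i)       // column index i (power of x)
            for (std::size_t j = 0; j < n1; ++j)   // kernel series index in t
                K[l][i] += Kij[i][j] * alpha((int)j, (int)l, a, b);
    return K;
}
```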
We also can write [12]

D\delta_{m+1}(x) = a\Pi X,

where

\Pi = \sum_{i=0}^{\nu} \eta^i p_i(\mu),

\eta = \begin{pmatrix}
0 & 0 & 0 & 0 & \cdots \\
1 & 0 & 0 & 0 & \cdots \\
0 & 2 & 0 & 0 & \cdots \\
0 & 0 & 3 & 0 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}, \qquad
\mu = \begin{pmatrix}
0 & 1 & 0 & 0 & \cdots \\
0 & 0 & 1 & 0 & \cdots \\
0 & 0 & 0 & 1 & \cdots \\
0 & 0 & 0 & 0 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix},
and the matrix representation of the supplementary conditions is as follows:

\sum_{k=1}^{\nu} \left[ c_{jk}^{(1)} \delta_{m+1}^{(k-1)}(a) + c_{jk}^{(2)} \delta_{m+1}^{(k-1)}(b) \right] = \sum_{k=1}^{\nu} \sum_{i=k-1}^{\infty} \frac{i!}{(i-(k-1))!}\, a_i \left[ c_{jk}^{(1)} a^{i-k+1} + c_{jk}^{(2)} b^{i-k+1} \right] = aB_j   (31.14)

for j = 1, 2, \ldots, \nu, and
B_j = \begin{pmatrix}
\frac{0!}{0!}\left[c_{j1}^{(1)} + c_{j1}^{(2)}\right] \\
\frac{1!}{1!}\left[c_{j1}^{(1)} a + c_{j1}^{(2)} b\right] + \frac{1!}{0!}\left[c_{j2}^{(1)} + c_{j2}^{(2)}\right] \\
\vdots \\
\frac{(\nu-1)!}{(\nu-1)!}\left[c_{j1}^{(1)} a^{\nu-1} + c_{j1}^{(2)} b^{\nu-1}\right] + \cdots + \frac{(\nu-1)!}{0!}\left[c_{j\nu}^{(1)} + c_{j\nu}^{(2)}\right] \\
\frac{\nu!}{\nu!}\left[c_{j1}^{(1)} a^{\nu} + c_{j1}^{(2)} b^{\nu}\right] + \cdots + \frac{\nu!}{1!}\left[c_{j\nu}^{(1)} a + c_{j\nu}^{(2)} b\right] \\
\vdots
\end{pmatrix}.

We refer to B as the matrix representation of the supplementary conditions and B_j as its jth column. The following relations for computing the elements of the matrix B can be deduced from (31.14):

b_{ij} = \sum_{k=1}^{i} \frac{(i-1)!}{(i-k)!} \left[ c_{jk}^{(1)} a^{i-k} + c_{jk}^{(2)} b^{i-k} \right], \quad i, j = 1, \ldots, \nu,

b_{ij} = \sum_{k=1}^{\nu} \frac{(i-1)!}{(i-k)!} \left[ c_{jk}^{(1)} a^{i-k} + c_{jk}^{(2)} b^{i-k} \right], \quad i = \nu+1, \nu+2, \ldots, \quad j = 1, \ldots, \nu.

Then the supplementary conditions take the form aB = 0, and the coefficients of the exact solution \delta_{m+1}(x) of problem (31.13) satisfy the following infinite algebraic system:

aG = g, \qquad G = (B_1, \ldots, B_\nu, \hat{\Pi}_1, \hat{\Pi}_2, \ldots), \qquad \hat{\Pi} = \Pi - \lambda K, \qquad g = (0, \ldots, 0, f_0, f_1, \ldots),

where \hat{\Pi}_j is the jth column of the matrix \hat{\Pi}.
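The assembly of a truncated G can be illustrated as follows. This sketch is not the authors' code: it assumes that Π̂ = Π − λK has already been truncated to (n+1) × (n+1), stores the 1-based formulas for b_ij in 0-based containers, and uses an ordinary factorial helper.

```cpp
#include <cmath>
#include <vector>

static double fact(int k) { double r = 1.0; for (int i = 2; i <= k; ++i) r *= i; return r; }

// G_n = (B_1, ..., B_nu, Pi_hat_1, Pi_hat_2, ...): the first nu columns come
// from the supplementary-condition entries b_ij, the rest from Pi_hat.
std::vector<std::vector<double>> assemble_G(
    int nu, double a, double b,
    const std::vector<std::vector<double>>& c1,      // c1[j][k-1] = c_{jk}^{(1)}
    const std::vector<std::vector<double>>& c2,      // c2[j][k-1] = c_{jk}^{(2)}
    const std::vector<std::vector<double>>& Pi_hat)  // (n+1) x (n+1), Pi - lambda*K
{
    const int n1 = (int)Pi_hat.size();               // n + 1
    std::vector<std::vector<double>> G(n1, std::vector<double>(n1, 0.0));
    for (int j = 0; j < nu; ++j)                     // columns 0 .. nu-1: B_j
        for (int i = 1; i <= n1; ++i) {
            double bij = 0.0;
            int kmax = (i <= nu) ? i : nu;
            for (int k = 1; k <= kmax; ++k)
                bij += fact(i - 1) / fact(i - k)
                       * (c1[j][k - 1] * std::pow(a, i - k) + c2[j][k - 1] * std::pow(b, i - k));
            G[i - 1][j] = bij;                       // row i-1 multiplies coefficient a_{i-1}
        }
    for (int col = nu; col < n1; ++col)              // remaining columns: columns of Pi_hat
        for (int row = 0; row < n1; ++row)
            G[row][col] = Pi_hat[row][col - nu];
    return G;
}
```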
Table 31.10 Numerical results for Example 6 (m = 5, n = 5)

x     Computed u(x)   Relative error
0.0   1.00000000      0
0.2   1.22140132      1.1790e-006
0.4   1.49182120      2.3461e-006
0.6   1.82207754      2.2644e-005
0.8   2.22524050      1.3499e-004
1.0   2.71695760      4.8716e-004
Table 31.11 Numerical results for Example 6 (m = 10, n = 10)

x     Computed u(x)   Relative error
0.0   1.00000000      0
0.2   1.22140275      8.1873e-009
0.4   1.49182468      1.3406e-008
0.6   1.82211875      2.7441e-008
0.8   2.22554081      5.3919e-008
1.0   2.71828170      4.7824e-008
Table 31.12 Numerical results for Example 7 (m = 1, n = 10)

x     Computed u(x)   Relative error
0.0   0.00000000      —
0.4   0.38922710      4.9109e-004
0.8   0.71586557      2.0778e-003
1.2   0.92720646      5.1850e-003
π/2   0.98970901      1.0291e-002

Table 31.13 Numerical results for Example 7 (m = 5, n = 10)

x     Computed u(x)   Relative error
0.0   0.00000000      —
0.4   0.38941834      0
0.8   0.71735610      1.3940e-008
1.2   0.93203930      2.2531e-007
π/2   1.00000361      3.6100e-006

Definition. The polynomial

(\delta_{m+1})_n = a_n X

will be called an approximate solution of (31.13) if the vector a_n = (a_0, a_1, \ldots, a_n) is the solution of the system of linear algebraic equations a_n G_n = g_n, where G_n is the matrix defined by the restriction of G to its first (n + 1) rows and columns.
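Since a_n multiplies G_n from the left, the truncated system is equivalent to the transposed system G_n^t a_n^t = g_n^t. As a plain illustration (not the authors' implementation), a straightforward Gaussian elimination is sufficient for the small values of n used in the examples (n = 5 or 10):

```cpp
#include <cmath>
#include <stdexcept>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Solve a_n * G_n = g_n by eliminating the transposed augmented system G^t | g.
std::vector<double> solve_tau_system(Matrix G, std::vector<double> g) {
    const std::size_t n = g.size();
    Matrix A(n, std::vector<double>(n + 1));
    for (std::size_t r = 0; r < n; ++r) {
        for (std::size_t c = 0; c < n; ++c) A[r][c] = G[c][r];   // transpose
        A[r][n] = g[r];
    }
    for (std::size_t k = 0; k < n; ++k) {                        // forward elimination
        std::size_t piv = k;
        for (std::size_t r = k + 1; r < n; ++r)
            if (std::fabs(A[r][k]) > std::fabs(A[piv][k])) piv = r;
        std::swap(A[k], A[piv]);
        if (std::fabs(A[k][k]) < 1e-15) throw std::runtime_error("singular G_n");
        for (std::size_t r = k + 1; r < n; ++r) {
            double fct = A[r][k] / A[k][k];
            for (std::size_t c = k; c <= n; ++c) A[r][c] -= fct * A[k][c];
        }
    }
    std::vector<double> a(n);
    for (std::size_t r = n; r-- > 0; ) {                         // back substitution
        double s = A[r][n];
        for (std::size_t c = r + 1; c < n; ++c) s -= A[r][c] * a[c];
        a[r] = s / A[r][r];
    }
    return a;                                                    // coefficients of (delta_{m+1})_n
}
```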
31.4.4 Numerical Examples

Now we apply the above methods to some examples in order to compare the numerical solutions with the exact solutions.
Example 6. Consider the nonlinear integro-differential equation

u''(x) + (u'(x))^2 - \int_0^1 e^{x-2t}\, u^2(t)\, dt = e^{2x}, \quad x \in [0, 1],

u(0) = 1, \quad u'(0) = 1,

with exact solution u(x) = e^x. The numerical results with initial guess u_0(x) = 1 + x are given in Tables 31.10 and 31.11 (m, number of iterations in Newton's method; n, degree of polynomial in the Tau method).
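As a quick check, added here only for illustration, one can verify directly that u(x) = e^x satisfies Example 6: since \int_0^1 e^{x-2t}(e^t)^2\, dt = e^x \int_0^1 dt = e^x, we get

u''(x) + (u'(x))^2 - \int_0^1 e^{x-2t} u^2(t)\, dt = e^x + e^{2x} - e^x = e^{2x},

and the conditions u(0) = 1 and u'(0) = 1 clearly hold.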
Example 7. Consider the nonlinear integro-differential equation

u''(x) + u(x) - \int_0^{\pi/2} x \cos(t)\, u^2(t)\, dt = -\frac{x}{3},

u(0) = 0, \quad u'(0) = 1,

with exact solution u(x) = \sin(x). The numerical results with initial guess u_0 = x are given in Tables 31.12 and 31.13 (m, number of iterations in Newton's method; n, degree of polynomial in the Tau method).
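Again as an illustrative check, for u(x) = \sin x we have \int_0^{\pi/2} \cos t\, \sin^2 t\, dt = 1/3, so

u''(x) + u(x) - \int_0^{\pi/2} x \cos(t)\, u^2(t)\, dt = -\sin x + \sin x - \frac{x}{3} = -\frac{x}{3},

with u(0) = 0 and u'(0) = 1, as required.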
References

1. A. Jerri (1999) Introduction to integral equations with applications. John Wiley & Sons, Hoboken, NJ.
2. E. L. Ortiz and L. Samara (1997) An operational approach to the Tau method for numerical solution of nonlinear differential equations. Computing, 27: 15–25.
3. K. M. Liu and C. K. Pan (1999) The automatic solution system of ordinary differential equations by the Tau method. Computers & Mathematics with Applications, 38: 197–210.
4. M. Hosseini (2000) The application of the operational Tau method on some stiff system of ODEs. International Journal of Applied Mathematics, 2: 9.
5. M. Hosseini and S. Shahmorad (2003) Tau numerical solution of Fredholm integro-differential equations with arbitrary polynomial basis. 27: 145–154.
6. M. Berger (1977) Nonlinearity and functional analysis. Academic Press, New York.
7. E. Zeidler (1986) Nonlinear functional analysis and its applications. Springer-Verlag, New York.
Chapter 32
Reconfigurable Hardware Implementation of the Successive Overrelaxation Method Safaa J. Kasbah, Ramzi A. Haraty, and Issam W. Damaj
32.1 Introduction

“Surely the first and oldest problems in every branch of mathematics spring from experience and are suggested by the world of external phenomena” [1]. At the age of 2 or 3, we study addition by assembling collections of objects and counting them. At the age of 4 or 5, we start using an abstract mathematical construction, a model, known as the positive integers [2]. Later, in elementary school, we start using more complicated constructions known as operators. Apparently, these early-taught mathematical constructions form the basis of our understanding of a variety of problems. As mathematical problems become more complex, it might still be possible to find their solutions by means of available computing devices. However, there are a number of mathematical problems whose solutions are difficult to realize using available computing power [3–5]. Examples of such problems are factoring very large numbers (RSA depends on this problem’s computational difficulty) [5], finding the solution to partial differential equations [6], and deciding whether a knot in three-dimensional Euclidean space is unknotted (a topological problem) [7]. More such problems can be found in [4, 5, 8–10]. Complex problems in science and engineering, including the aforementioned ones, are computationally intensive in nature [11]. Factoring very large numbers can be achieved by computation only, since the underlying algorithmic procedures are well known. The same is true for solving partial differential equations or studying the topological unknottedness problem, as well as for thousands of scientific and engineering endeavors. A number of attempts to accelerate the computation of such complex mathematical problems have been motivated by the enormous advances in computing systems.

Safaa J. Kasbah and Ramzi A. Haraty Division of Computer Science and Mathematics, Lebanese American University, Beirut, Lebanon Issam W. Damaj Department of Electrical and Computer Engineering, Dhofar University, Salalah, Sultanate of Oman
A large number of physical phenomena can be expressed as systems of linear equations. Numerical solutions for these equations allow us to glean valuable information about the system at hand. There are two basic approaches for solving linear systems: direct methods and iterative methods. In the first approach, a finite number of operations are performed to find the exact solution. In the second approach, an initial approximation of the solution is generated; then this initial guess is used to generate another approximate solution, which is more accurate than the previous one [12]. The advantage of applying iterative methods over direct methods has been shown in different areas, including circuit analysis and design, weather forecasting, and analyzing financial market trends. The well-known iterative methods are Gauss–Seidel, multigrid, Jacobi, and successive overrelaxation (SOR), which is of particular interest in this chapter. Successive overrelaxation has been devised to accelerate the convergence of Gauss–Seidel and Jacobi [13] by introducing a new parameter, ω, referred to as the relaxation factor. The SOR rate of convergence is highly dependent on the relaxation factor, and the main difficulty of using SOR is finding a good estimate of it [12]. A number of techniques have been proposed for determining the exact value of ω that accelerates the rate of convergence of the method [12, 13].

All available iterative-method packages, including SOR, are implemented in software. Examples are the ITPACK 3A, ITPACK 3B, ITPACK 2C, ITPACK 2D, and ELLPACK packages [14, 15]. Several sequential and parallel techniques were used in these packages to accelerate the method [16]. The emergence of the new computing paradigm of reconfigurable computing (RC) introduces novel techniques for accelerating certain classes of applications, including signal processing (e.g., weather forecasting, seismic data processing, magnetic resonance imaging (MRI), and adaptive filters), cryptography, and DNA matching [17]. Reconfigurable computing systems combine the flexibility offered by software and the performance offered by hardware [18]. RC requires reconfigurable hardware, such as a field-programmable gate array (FPGA), and a software design environment that aids in the creation of configurations for the reconfigurable hardware [17]. In [19], the first hardware implementation of an iterative method, the multigrid method, is presented. The speedup achieved demonstrates that hardware design can be suited to such computationally intensive applications. Toward proving the hypothesis that accelerated versions of the iterative methods can be realized in hardware, we undertook the first hardware implementation of the SOR method, using the same FPGAs that were used in [19–21].

In this chapter, we study the feasibility of implementing SOR in reconfigurable hardware. We use Handel-C, a higher level design tool, to code our design, which is analyzed, synthesized, and placed and routed using the FPGAs’ proprietary software (DK Design Suite, Xilinx ISE 8.1i, and Quartus II 5.1). We target Virtex II Pro, Altera Stratix, and Spartan3L, which is embedded in the RC10 FPGA-based system from Celoxica. We report our timing results when targeting Virtex II Pro and compare them to the results of a software version written in C++ and running on a general purpose processor (GPP).
32.2 Description of the Algorithm

The successive overrelaxation method is an iterative method used for finding the solution of elliptic differential equations. Successive overrelaxation has been devised to accelerate the convergence of Gauss–Seidel and Jacobi [13] by introducing a new parameter, ω, referred to as the relaxation factor. Given the linear system of equations

Aφ = b,   (32.1)

the matrix A can be written as

A = D + L + U,   (32.2)

where D, U, and L denote the diagonal, strictly upper triangular, and strictly lower triangular parts of matrix A, respectively [12]. Using the successive overrelaxation technique, the solution of the partial differential equation (PDE) is obtained using

x^(k) = (D − ωL)^{−1} [ωU + (1 − ω)D] x^(k−1) + ω (D − ωL)^{−1} b,   (32.3)

where x^(k) represents the kth iterate. The SOR rate of convergence strongly depends on the choice of the relaxation factor ω [14]. Extensive work has been done on finding a good estimate of this factor in the [0, 2] interval [14, 15]. Recent studies have shown that for the case where

• ω = 1, SOR simplifies to the Gauss–Seidel method [22].
• ω ≤ 0 or ω ≥ 2, SOR fails to converge [22].
• ω > 1, SOR is used to speed up the convergence of a slowly converging process [12].
• ω < 1, SOR helps to establish convergence of a diverging iterative process [15].
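For reference, the matrix form (32.3) is equivalent to a component-wise sweep over the unknowns. The following C++ fragment is a minimal sketch of that sweep, not the authors' Handel-C design; the stopping test, the dense-matrix representation, and the default ω = 1.5 (the value used in the experiments later in the chapter) are choices made only for this illustration.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// One component-wise SOR solver for A*phi = b, equivalent to iterating (32.3).
std::vector<double> sor(const Matrix& A, const std::vector<double>& b,
                        double omega = 1.5, int max_iter = 10000, double tol = 1e-10) {
    const std::size_t n = b.size();
    std::vector<double> x(n, 0.0);                     // initial guess x^(0) = 0
    for (int k = 0; k < max_iter; ++k) {
        double max_change = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            double sigma = 0.0;
            for (std::size_t j = 0; j < n; ++j)
                if (j != i) sigma += A[i][j] * x[j];   // uses newest values (Gauss-Seidel style)
            double x_new = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / A[i][i];
            max_change = std::max(max_change, std::fabs(x_new - x[i]));
            x[i] = x_new;
        }
        if (max_change < tol) break;                   // converged
    }
    return x;
}
```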
32.3 Reconfigurable Computing

Today, it is possible to benefit from the advantages of both software and hardware with the presence of the RC paradigm [18]. Actually, the first idea to fill the gap between the two computing approaches, hardware and software, goes back to the 1960s, when Gerald Estrin proposed the concept of RC [23]. The basic idea of RC is the “ability to perform certain computations in hardware to increase the performance, while retaining much of the flexibility of a software solution” [18]. Reconfigurable computing systems can be either of fine-grained or of coarse-grained architecture. An FPGA is a fine-grained reconfigurable unit, while a reconfigurable array processor is a coarse-grained reconfigurable unit. In the fine-grained architecture, each bit can be configured; while in the coarse-grained architecture, the
operations and the interconnection of each processor can be configured. An example of a coarse-grained system is MorphoSys, which is intended for accelerating data-path applications by combining a GPP and an array of coarse-grained reconfigurable cells [24]. The realization of the RC paradigm is made possible by the presence of programmable hardware such as large-scale complex programmable logic devices (CPLDs) and FPGA chips [25]. RC involves the modification of the logic within the programmable device to suit the application at hand.
32.3.1 Hardware Compilation

There are certain procedures to be followed before implementing a design on an FPGA. First, the user should prepare his/her design by using either a schematic editor or one of the hardware description languages (HDLs), such as VHDL (very high speed integrated circuit HDL) and Verilog. With schematic editors, the designer draws her/his design by choosing from the variety of available components (multiplexers, adders, registers, etc.) and connects them by drawing wires between them. A number of companies supply schematic editors where the designer can drag and drop symbols into a design and clearly annotate each component [26]. Schematic design is considered simple and easy for relatively small designs. However, the emergence of big and complex designs has substantially decreased the popularity of schematic design while increasing the popularity of HDL design.

Using an HDL, the designer has the choice of designing either the structure or the behavior of the design. Both VHDL and Verilog support structural and behavioral descriptions of the design at different levels of abstraction. In structural design, a detailed description of the system’s components, subcomponents, and their interconnects is specified. The system will appear as a collection of gates and interconnects [26]. Though it has the great advantage of yielding an optimized design, structural representation becomes hard as the complexity of the system increases. In behavioral design, the system is considered as a black box with inputs and outputs only, without paying attention to its internal structure. In other words, the system is described in terms of how it behaves rather than in terms of its components and the interconnections between them. Though it requires more effort, structural representation is more advantageous than the behavioral representation in the sense that the designer can specify the information at the gate level, allowing optimal use of the chip area [27]. It is possible to have more than one structural representation for the same behavioral program. Noting that modern chips are too complex to be designed using the schematic approach, we will choose an HDL instead of the schematic approach to describe our designs.

Whether the designer uses a schematic editor or an HDL, the design is fed to an electronic design automation (EDA) tool to be translated to a netlist. The netlist can then be fitted on the FPGA using a process called place and route, usually completed
by the FPGA vendors’ tools. Then the user has to validate the place-and-route results by timing analysis, simulation, and other verification methodologies. Once the validation process is complete, the binary file generated is used to (re)configure the FPGA device. More about this process is found in the coming sections. Implementing a logic design on an FPGA is depicted in Fig. 32.1.

Fig. 32.1 Field-programmable gate array (FPGA) design flow

The above process consumes a remarkable amount of time; this is largely due to the design entry that the user must provide in an HDL, most probably VHDL or Verilog. The complexity of designing in HDL, which has been compared to the equivalent of assembly language, is overcome by raising the abstraction level of the design; this move has been made by a number of companies such as Celoxica, Cadence, and Synopsys. These companies offer higher level languages with concurrency models to allow faster design cycles for FPGAs than using traditional HDLs. Examples of higher level languages are Handel-C, SystemC, and Superlog [26].
32.3.2 Handel-C Language

Handel-C is a high level language for the implementation of algorithms in hardware. It compiles programs written in a C-like syntax with additional constructs for exploiting parallelism [26]. The Handel-C compiler comes packaged with the Celoxica DK Design Suite, which also includes functions and a memory controller for accessing the external memory on the FPGA. A big advantage, compared to other C-to-FPGA tools, is that Handel-C targets hardware directly and provides a few hardware optimizing features [28]. In contrast to other HDLs, such as VHDL, Handel-C does not support gate-level optimization. As a result, a Handel-C design
uses more resources on an FPGA than a VHDL design and usually takes more time to execute. In the following subsections, we describe Handel-C features that we have used in our design [28, 29].
32.3.2.1 Types and Type Operator

Almost all ANSI-C types are supported in Handel-C, with the exception of float and double. Yet floating-point arithmetic can still be performed using the floating-point library provided by Celoxica. In addition, Handel-C supports all ANSI-C storage class specifiers and type qualifiers except volatile and register, which have no meaning in hardware. Handel-C offers additional types for creating hardware components such as memory, ports, buses, and wires. Handel-C variables can only be initialized if they are global or if declared as static or constant. Handel-C types are not tied to fixed widths, since when targeting hardware there is no need to be tied to a certain width. Variables can be of different widths, thus minimizing the hardware usage.
32.3.2.2 Par Statement The notion of time in Handel-C is fundamental. Each assignment happens in exactly one clock cycle; everything else is free [28]. An essential feature in Handel-C is the “par” construct, which executes instructions in parallel.
32.3.2.3 Handel-C Targets Handel-C supports two targets. The first is a simulator that allows development and testing of code without the need to use hardware (P1 in Fig. 32.2). The second is the synthesis of a netlist for input to place-and-route tools that are provided by the FPGA’s vendors (P2 in Fig. 32.2). The remainder of this section describes the phases involved in P2, as it is clear from P1 that we can test and debug our design when compiled for simulation. The flow of the second target involves the following steps: • Compile to netlist. The input to this phase is the source code. A synthesis engine, usually provided by the FPGA vendor, translates the original behavioral design into gates and flip-flops. The resultant file is called the netlist. Generally, the netlist is in the electronic design interchange format (EDIF). An estimate of the logic utilization can be obtained from this phase. • Place and route (PAR). The input to this phase is the EDIF file generated from the previous phase, i.e., after synthesis. All the gates and flip-flops in the netlist are physically placed and mapped to the FPGA resources. The FPGA vendor tool should be used to PAR the design. All design information regarding timing, chip
area, and resources utilization are generated and controlled for optimization at this phase.
• Programming and configuring the FPGA. After synthesis and PAR, a binary file will be ready to be downloaded into the FPGA chip [30, 31].

Fig. 32.2 Handel-C targets
32.4 Hardware Implementation of SOR The SOR method was designed using Handel-C, a higher level hardware design tool. Handel-C comes packaged with DK Design Suite from Celoxica. It allows the designer to focus more on the specification of the algorithm rather than adopting a structural approach to coding [14]. Handel-C syntax is similar to the ANSI-C with additional extensions for expressing parallelism [14]. One of the most important features in Handel-C, which is used in our implementation, is the par construct that allows statements in a block to be executed in parallel and in the same clock cycle. Our design has been tested using the Handel-C simulator; afterwards, we have targeted a Xilinx Virtex II Pro FPGA, an Altera Stratix FPGA, and Spartan3L, which is embedded in an RC10 FPGA-based system from Celoxica. We have used the proprietary software provided by the devices’ vendors to synthesize, PAR, and analyze the design [28, 32, 33]. In Figs. 32.3 and 32.4, we present a parallel and a sequential version of SOR. In the first version, we used the par construct whenever it was possible to execute more than one instruction in parallel and in the same clock cycle without affecting the logic of the source code. The dots in the combined flowchart–concurrent process
Fig. 32.3 Successive-overrelaxation parallel version showing the combined flowchart–concurrent process model. The dots represent replicated instances
model, which is shown in Fig. 32.3, represent replicated instances. Figure 32.4 shows a traditional way of sequentially executing instructions on a general-purpose processor. Executing instructions in parallel has shown a substantial improvement in the execution of the algorithm. To handle floating-point arithmetic operations, which are essential in finding the solution to PDEs using iterative methods, we used the pipelined floating-point library provided by Celoxica [28]. However, an unresolved bug in the current version of the DK simulator limited the usage of floating-point operations to four in the design. The only possible way to avoid this failure was to convert or unpack the floating-point numbers to integers and perform integer arithmetic on the obtained unpacked numbers. Though it costs more logic to be generated, the integer
Fig. 32.4 Successive overrelaxation flowchart, sequential version
Table 32.1 Virtex II Pro synthesis results

Mesh size      Occupied slices   Total equivalent gate count
8 × 8          128               2918
16 × 16        136               3033
32 × 32        219               4807
64 × 64        265               5978
128 × 128      315               7125
256 × 256      610               14538
512 × 512      1098              23012
1024 × 1024    1601              31848
2048 × 2048    2289              53476
operations on the unpacked floating-point numbers have a minor effect on the total number of the design’s clock cycles.
32.5 Experimental Results

As mentioned before, the main objectives of this chapter are (1) studying the feasibility of implementing the SOR method in hardware and (2) realizing an accelerated version of the method. The first objective is met by targeting high-performance FPGAs: Virtex II Pro (2vp7ff672-7), Altera Stratix (ep1s10f484c5), and Spartan3L (3s1500lfg320-4), which is embedded on the RC10 board from Celoxica. The second objective is met by comparing the timing results obtained with a software version written in C++ and compiled using Microsoft Visual Studio .Net. All the test cases were carried out on a Pentium (M) processor, 2.0 GHz, with 1.99 GB of RAM. The relaxation factor ω was chosen to be 1.5 [22]. The obtained results are based on the following criteria:

• Speed of convergence. The time it takes the SOR method to find the solution to the PDE at hand. In other words, it is the time needed to execute the SOR algorithm. In hardware implementation, the speed of convergence is measured as the number of clock cycles of the design divided by the frequency at which the design operates. The first parameter is found using the simulator, while the second is found using the timing analysis report that is generated using the FPGA vendor’s tool.

Table 32.2 RC10 Spartan3L synthesis results

Mesh size    Occupied slices   Total equivalent gate count
8 × 8        302               279010
16 × 16      499               281001
32 × 32      589               282997
64 × 64      745               284000
128 × 128    877               285872
256 × 256    1201              297134
512 × 512    2010              299858
Table 32.3 Altera Stratix synthesis results

Mesh size      Total logic elements   Number of LUTs for logic element usage   Total registers
8 × 8          519                    250                                      120
16 × 16        601                    310                                      155
32 × 32        810                    501                                      199
64 × 64        999                    637                                      280
128 × 128      1274                   720                                      347
256 × 256      1510                   890                                      948
512 × 512      2286                   1087                                     501
1024 × 1024    2901                   1450                                     569
2048 × 2048    3286                   1798                                     640
• Chip area. This performance criterion measures the number of occupied slices on the FPGA on which the design is implemented. The number of occupied slices is generated using the FPGA vendor’s PAR tool.
We use the FPGA vendor’s tools to analyze and report the performance results of each FPGA. The synthesis results obtained, for different problem sizes, when targeting Virtex II Pro, Altera Stratix, and Spartan3L are reported in Tables 32.1, 32.2, and 32.3, respectively. Figure 32.5 shows SOR execution time when targeting Virtex II Pro FPGA versus the execution time of SOR in C++. We started with a problem size of 8 × 8 and reached 2048×2048. Obviously, one can notice the acceleration of the method when moving from software implementation to hardware implementation. The speedup of
Fig. 32.5 Successive overrelaxation execution time results (execution time in seconds versus mesh size) in both versions, C++ and Handel-C
Table 32.4 The speedup of the design for different problem sizes

Mesh size      Speedup
8 × 8          1.76
16 × 16        1.88
32 × 32        6.71
64 × 64        5.70
128 × 128      1.51
256 × 256      1.49
512 × 512      3.03
1024 × 1024    2.58
2048 × 2048    3.38
the design, for different problem sizes, is shown in Table 32.4 and calculated as the ratio of execution time (C++)/execution time (Handel-C).
32.6 Conclusion

In this chapter, we have studied the feasibility of implementing the SOR method on reconfigurable hardware. We used a hardware compiler, Handel-C, to code and implement our design, which we map onto high-performance FPGAs: Virtex II Pro, Altera Stratix, and Spartan3L, which is embedded in the RC10 board from Celoxica. We used the FPGA vendors’ tools to analyze the performance of our hardware implementation. For testing purposes, we designed a software version of the algorithm and compiled it using Microsoft Visual Studio .Net. The software implementation results were compared to the hardware implementation results. The synthesis results prove that SOR is suitable for FPGA implementation; the timing results prove that SOR on hardware outperforms SOR on a GPP. In the near future, we plan to improve (a) the speedup of the algorithm by designing a pipelined version of SOR and (b) the efficiency of the algorithm by moving from Handel-C to a lower level HDL such as VHDL. Besides, we will consider mapping the algorithm onto a coarse-grained reconfigurable system (e.g., MorphoSys) [34] and benefiting from the advantages of formal modeling [35]. We can extend the benefit of SOR by implementing other versions of the algorithm such as MSOR, SSOR, and USOR.
References 1. D. Hilbert (1900). Mathematical problems. Lecture delivered before the International Congress of Mathematicians at Paris in 1900. Available at http://aleph0.clarku. edu/∼djoyce/hilbert/problems.html#note1.
2. W. Gowers (2000) The importance of mathematics. Available at http://www.dpmms.cam. ac.uk/∼wtg10/importance.pdf. 3. D. Bailey and J.M. Borwein (2005) Future prospects for computer-assisted mathematics. Canadian Mathematical Society Notes, 37:8 2–6. 4. D. Bailey, P. Borwein, and S. Pluoffe (1997) On the rapid computation of various polygarithmic constants. Mathematics of Computation, 66(218): 903–914. 5. D. Bailey, J. Borwen, V. Kapoor, and E.Weisstein (2006) Ten problems in experimental mathematics. American Mathematical Monthly, 113: 481–509. 6. K.W. Morton and D.F. Mayers (1994) Numerical solution of partial differential equations. Cambridge University Press, Cambridge. 7. G. Burde and H. Zieschang (1985) Knots. Walter de Gruyter studies in mathematics, Berlin. 8. J. Hass, C. Jeffrey, J. Lagarias, and P. Nicholas (1999) The computational complexity of knot and link problems. Journal of ACM. 42(2): 185–211. 9. G. Havas (2003) On the complexity of the extended Euclidean algorithm: Extended abstract. Centre for Discrete Mathematics and Computing, School of Infromation Technology and Electrical Engineering, The University of Queensland, Australia: http://www.itee.uq.edu.au/∼havas/cats03.pdf. 10. G. Havas and J.P. Seifert (1999) The complexity of the extended GCD problem. Mathematical Foundations Of Computer Science. Lecture Notes in Computer Science, pp. 103–113. 11. R.A. DeMillo and R.J. Lipton (1979) Some connections between mathematical logic and complexity theory. In Proceedings of the 11th ACM Symposium on Theory of Computing, pp. 153–159. 12. D. Young (1950) Iterative methods for solving partial difference equations of elliptic type. Ph.D. thesis, Harvard University. 13. G. Evans, J. Blackledge, and P. Yardley (2000) numerical methods for partial differential equations. Springer-Verlag, London. 14. W. Bailey (2003) The successive over relaxation algorithm and its application to numerical solutions of elliptic partial differential equations. B.S. project, Dublin Institute of Technology. 15. D. Kincaid (2004) Celebrating fifty years of David M. Young’s successive overrelaxation iterative method. In M. Feistauer, V. Dolejsi, P. Knobloch, and K. Najzar (eds.) Numerical mathematics and advanced applications. Springer-Verlag, Berlin Heidelberg, pp. 549– 558. 16. C. Zarka, G. Edward, and C. Freedman (1990) Efficient decomposition and performance of parallel PDE, FFT, Monte Carlo simulations, simplex, and sparse solvers. Proceedings of the 1990 ACM/IEEE Conference on Super Computing, pp. 455–464. 17. Y. Li, T. Callahan, E. Darnell, R. Harr, U. Kurkure, and J. Stockwood (2000) Hardwaresoftware co-design of embedded reconfigurable architectures. In 37th Design Automation Conference, Los Angeles, CA, pp. 507–512. 18. K. Compton and S. Hauck (2002) Reconfigurable computing: A survey of systems and software. In ACM Computing Surveys, 34(2): 171–210. 19. S. Kasbah (2006) Multigrid solvers in reconfigurable hardware. Master thesis, Lebanese American University, 2006. 20. S. Kasbah and I. Damaj (2006) A hardware implementation of multigrid algorithms. Poster session: 17th International Conference on Domain Decomposition Methods, Austria. 21. S. Kasbah, I. Damaj, and R. Haraty (2006) Multigrid solvers in reconfigurable hardware. Journal of Computational and Applied Mathematics, doi:10.1016/ j.cam.2006.12.031. 22. H.E. Kulsrud (1961) A practical technique for the determination of the optimum relaxation factor of the successive over-relaxation method. Communications of the ACM, 4(4): 184–187 23. F. 
Vahid and T. Givargis (2002) Embedded systems design: A unified hardware/software introduction. Wiley, New York. 24. N. Bagherzadeh, F. Kurdahi, H. Singh, G. Lu, M. Lee, and E. Filho (2000) MorphoSys: Design and implementation of the MorphoSys reconfigurable computing processor. Journal of VLSI and Signal Processing-Systems for Signal, Image and Video Technology.
25. T.J. Todman, G.A. Constantinides, S.J.E. Wilton, O. Mencer, W. Luk, and P.Y.K. Cheung (2005) Reconfigurable computing: architectures and design methods. IEE Proceedings: Computers and Digital Techniques, 152(2): 193–197 26. J. Turely (2003) How chips are designed. Prentice Hall, Professional Technical Reference. 27. S.K. Valentina (2004) Designing a digital system with VHDL. Academic Open Internet Journal, 11. 28. Celoxica (2007) www.celoxica.com. 29. C. Peter (2000) Overview: Hardware compilation and the Handel-C language. Oxford University Computing Laboratory: http://web.comlab.ox.uk/oucl/work/christian.peter/ overview handelc.html. 30. J. Cong (1997) FPGAs synthesis and reconfigurable computing. University of California, Los Angeles: http://www.ucop.edu/research/micro/96 97/96 176.pdf. 31. Shewel J (1998) A Hardware / Software Co-Design System using Configurable Computing Technology. http://ipdps.cc.gatech.edu/1998/it/schewel.pdf. 32. Altera Inc. (2007) www.altera.com. 33. Xilinx (2007) www.xilinx.com. 34. I. Damaj and H. Diab (2003) Performance Evaluation of Linear Algebraic Functions Using Reconfigurable Computing. The International Journal of Super Computing, Kluwer. 24(1): 91–107. 35. I. Damaj, J. Hawkins, and A. Abdallah (2003) Mapping high-level algorithms onto massively parallel reconfigurable hardware. IEEE International Conference of Computer Systems and Applications, pp. 14–22.
Chapter 33
Tabu Search Algorithm Based on Strategic Oscillation for Nonlinear Minimum Spanning Tree Problems Hideki Katagiri, Masatoshi Sakawa, Kosuke Kato, Ichiro Nishizaki, Takeshi Uno, and Tomohiro Hayashida
33.1 Introduction The minimum spanning tree (MST) problem is to find the least cost spanning tree in an edge-weighted graph. In the real world, MST problems are usually seen in network optimization. For instance, when designing a layout for telecommunication systems, if a decision maker prefers to minimize the total cost for connection between cities or sections, it is formulated as an MST problem. In other examples, the objective is to minimize the total time for construction or to maximize the network reliability. In classical MST problems, weights attached to edges are constant, and all the weights are independent of each other. In other words, the objective function is linear. Polynomial–time algorithms for solving a usual MST problem were first constructed by Kruskal [1] and Prim [2]. Gabow et al. [3] and Chazelle [4] developed more efficient algorithms. However, there is a case where the objective function is nonlinear in real-world problems. For instance, when each edge weight is represented by a random variable or a fuzzy set, MST problems under randomness are often equivalently transformed into a deterministic nonlinear MST problem. Katagiri and others [5–7] considered some specific types of minimum spanning tree problems where each edge constant is a fuzzy random variable. They also constructed polynomial–time algorithms for solving nonlinear MST problems that are deterministic equivalent problems for the original ones. Since a nonlinear MST problem is generally one of the NP-hard combinatorial optimization problems, it is important to construct approximate solution algorithms for solving large-scale nonlinear MST problems. As for previous studies on
Hideki Katagiri, Masatoshi Sakawa, Kosuke Kato, Ichiro Nishizaki, Takeshi Uno, and Tomohiro Hayashida Graduate School of Engineering, Hiroshima University, Kagami-yama 1-4-1, Higashi-hiroshima, Hiroshima, 739-8527 Japan
nonlinear MST problems, Zhou and Gen [8] considered quadratic MST problems and proposed a solution algorithm through genetic algorithms (GAs). For combinatorial optimization problems, there are some metaheuristics such as evolutionary computation (EC), simulated annealing (SA), ant colony optimization (ACO), and tabu search (TS). Blum and Blesa [9] investigated several metaheuristic approaches for edge-weighted k-cardinality tree problems and compared the performances of EC, SA, ACO, and TS. They demonstrated that TS [10] has advantages for high cardinality. Since MST problems correspond to the highest-cardinality case of k-cardinality tree problems, it is expected that the performance of a TS algorithm is good for nonlinear MST problems. Therefore, we are motivated to develop a TS algorithm for solving nonlinear MST problems. In particular, we construct a TS algorithm based on strategic oscillation, which has shown good performance for multidimensional 0–1 knapsack problems [11]. This paper is organized as follows: Section 2 formulates a nonlinear MST problem. In Section 3, we propose a solution algorithm using TS. Section 4 provides numerical experiments and shows the advantage of our algorithm over the algorithm using a GA.
33.2 Problem Formulation

In this paper, we consider a connected undirected graph G = (V, E), where V = \{v_1, v_2, \ldots, v_n\} is a finite set of vertices and E = \{e_1, e_2, \ldots, e_m\} is a finite set of edges. Let x = (x_1, x_2, \ldots, x_m)^t be an m-dimensional column vector. We identify a spanning tree T with an x if

x_i = \begin{cases} 1 & \text{if edge } e_i \in T \text{ is selected,} \\ 0 & \text{otherwise.} \end{cases}

Then, an MST problem with a nonlinear objective function is formulated as follows:

\text{Minimize } f(x) \quad \text{subject to } a_j x \le b_j, \; j = 1, \ldots, l, \quad x \in X,   (33.1)
where f is a real-valued nonlinear function and a_j is an m-dimensional row vector. X stands for the collection of x which corresponds to a spanning tree in the given graph G. When there is no constraint and f is linear in the above problem, problem (33.1) becomes a usual minimum spanning tree problem. In this case, the problem is solved by polynomial-time algorithms such as the Prim method or the Kruskal method. In general, however, problem (33.1) is an NP-hard problem, and the existing exact solution algorithms such as the branch and bound method cannot solve a
large-scale nonlinear MST problem in a practically feasible computational time. Therefore, we develop a TS algorithm for obtaining a good approximate optimal solution of the large-scale nonlinear MST problems.
33.3 Summary of Tabu Search

Local search generally improves the current solution because it moves from the current solution x^c to a solution x' ∈ N(x^c) which is better than the current solution, where N(·) is a given neighborhood structure. For simplicity, suppose that x^c is a local minimum solution and that the next solution x' is selected as the best solution among N(x^c). If local search is applied for x', then the next solution moved from x' is back to x^c, because x^c is the best solution among the neighborhood N(x'). In this way, cycling among solutions often occurs around local minima. In order to avoid such cycling, TS algorithms use a short-term memory. The short-term memory is implemented as a set of tabu lists that store solution attributes. Attributes usually refer to components of solutions, moves, or differences between two solutions. Tabu lists prevent the algorithm from returning to recently visited solutions. Aspiration criteria permit a part of the moves in the tabu list to cancel any tabu status. The typical aspiration criterion is to accept a tabu move if it leads to a new solution better than the current best solution. The outline of TS is as follows:

Step 1. Generate an initial solution x and initialize a tabu list TL.
Step 2. Find the best solution x' ∈ N(x) such that x' ∉ TL, and set x := x'.
Step 3. Stop if a termination condition is satisfied. If not, then update TL and return to Step 2.

In Step 2, a tabu list memorizes solution attributes. A tabu tenure, i.e., the length of the tabu list, determines the behavior of the algorithm. A larger tabu tenure forces the search process to explore larger regions, because it forbids revisiting a higher number of solutions. In Step 3, we check whether the algorithm satisfies a termination condition. The termination condition is usually related to the iteration number of the algorithm and/or the number of iterations without updating the current best solution.
33.4 Tabu Search Algorithm Based on Strategic Oscillation for Nonlinear MST Problems Hanafi and Freville [11] considered a TS algorithm for the 0–1 multidimensional knapsack problems. Their algorithm is based on strategic oscillation. Strategic oscillation is useful to efficiently explore the region where there may be good
solutions, called the promising zone. Since the promising zone for 0–1 multidimensional knapsack problems is the boundary between the feasible region and the infeasible region, strategic oscillation explores the region around the boundary, crossing over it. We extend Hanafi's algorithm to deal with nonlinear MST problems. The essential features of our TS algorithm for solving a nonlinear minimum spanning tree problem are characterized by several procedures, i.e., generating an initial solution, local search, strategic oscillation, and a diversification procedure. The outline of our TS algorithm is as follows:

Step 0 (Initial Solution). Generate an initial solution x0. If x0 is feasible, then set xc := x0 and xb := x0. Otherwise, construct a feasible solution x0f from x0 using MCVQ, and set xc := x0f and xb := x0f.
Step 1 (Local Search). If the termination condition is satisfied, then the algorithm is terminated. Otherwise, improve the current solution xc by local search, and set xb := xc.
Step 2 (Strategic Oscillation). Explore around the boundary between the feasible and infeasible regions, alternating MOFV with MCVQ.
Step 3 (Diversification). Remove some of the edges that have high residence frequency from Tc. Assign a long tabu tenure to the removed edges. Continue to add the edge that has low residence frequency so as not to make a cycle until a spanning tree is formed, using the MOFV criterion. If the current solution is infeasible, then move it to a feasible solution using the MCVQ criterion. Return to Step 1.
33.4.1 Initial Solution

Let SCC(i) denote a set of connected components that consists of exactly i edges. To construct a spanning tree, first, an edge e ∈ E is chosen uniformly at random. This procedure constructs a subtree SCC(1) that consists of only one edge. Then, a connected component SCC(k + 1) is constructed by adding an edge to SCC(k), following the edge addition rule:

Edge addition rule. SCC(k + 1) is constructed by adding an edge

e' \in \arg\min_{e' \in E_{NC}(SCC(k))} \{ f(SCC(k) + e') - f(SCC(k)) \}

to the current SCC(k) under construction, where E_{NC}(SCC(k)) := \{ e' \in E \mid SCC(k) \text{ together with } e' \text{ has no cycle} \}.
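A possible realization of this construction is sketched below. It is only an illustration of the edge addition rule, not the authors' code: the Edge and UnionFind types, the set-valued objective f (taking the indices of the selected edges), and the assumption that the graph is connected are all choices made for the example.

```cpp
#include <functional>
#include <limits>
#include <numeric>
#include <vector>

struct Edge { int u, v; };

struct UnionFind {                              // cycle detection over vertex components
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
    int find(int a) { return parent[a] == a ? a : parent[a] = find(parent[a]); }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};

// Grow SCC(k) into a spanning tree, always adding the non-cycle-forming edge
// with the smallest objective increment f(SCC(k)+e') - f(SCC(k)).
std::vector<int> greedy_tree(int n, const std::vector<Edge>& edges, int first_edge,
                             const std::function<double(const std::vector<int>&)>& f) {
    std::vector<int> tree;                      // indices of selected edges
    UnionFind uf(n);
    tree.push_back(first_edge);                 // SCC(1): one randomly chosen edge
    uf.unite(edges[first_edge].u, edges[first_edge].v);
    while ((int)tree.size() < n - 1) {          // assumes the graph is connected
        int best = -1;
        double best_inc = std::numeric_limits<double>::max();
        double base = f(tree);
        for (int i = 0; i < (int)edges.size(); ++i) {
            if (uf.find(edges[i].u) == uf.find(edges[i].v)) continue;  // would close a cycle
            std::vector<int> trial = tree;
            trial.push_back(i);
            double inc = f(trial) - base;       // f(SCC(k)+e') - f(SCC(k))
            if (inc < best_inc) { best_inc = inc; best = i; }
        }
        tree.push_back(best);
        uf.unite(edges[best].u, edges[best].v);
    }
    return tree;
}
```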
33.4.2 Neighborhood Structure and Local Search

Let T be a set of edges which forms a spanning tree, and let \mathcal{T} be the class of all possible spanning trees in a given graph. The neighborhood N(T) consists of all spanning trees that can be generated by removing an edge e ∈ T and adding an edge from the set E_{NH}(T - e) \setminus \{e\}, where E_{NH}(T - e) is defined as follows:

E_{NH}(T - e) := \{ e' \in E \mid T - e + e' \in \mathcal{T} \}.

In order to improve a current solution x^c ∈ X corresponding to the current spanning tree T^c, the algorithm scans the neighborhood N(x^c) and chooses a spanning tree x' such that

x' \in \arg\min_{x \in N(x^c) \cap X} f(x).

This procedure is local search, or the exploration of the local area of the current solution.
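One best-improvement step of this local search can be written generically, as sketched below. The enumeration of the edge-exchange neighborhood N(x^c) ∩ X (removing one tree edge and adding a reconnecting edge) is problem-specific and is assumed to be supplied by the caller; Solution, neighbors, and best_improvement_step are illustrative names only.

```cpp
#include <functional>
#include <vector>

using Solution = std::vector<int>;   // x_i = 1 if edge e_i is in the spanning tree

// Scan the feasible neighbors of the current tree and return the best one
// (or the current tree itself if no neighbor improves the objective f).
Solution best_improvement_step(
    const Solution& current,
    const std::function<std::vector<Solution>(const Solution&)>& neighbors,
    const std::function<double(const Solution&)>& f) {
    Solution best = current;
    double best_val = f(current);
    for (const Solution& cand : neighbors(current)) {   // candidates in N(x^c) ∩ X
        double v = f(cand);
        if (v < best_val) { best_val = v; best = cand; }
    }
    return best;
}
```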
33.4.3 Tabu List and Aspiration Criterion

In general, local search updates a current best solution. However, such updating stops when cycling occurs around a local optimum solution. Our TS algorithm uses only one tabu list, denoted by TabuList, which stores the indices of the edges that were recently added or removed. As described before, every move consists of two steps; the first step is to remove one edge e ∈ T^c from the current spanning tree T^c, and the second step is to add an edge in E_{NH}(T^c - e) \setminus \{e\} to T^c - e. The status of the forbidden moves is explained as follows: If an edge e_j is in TabuList and x_j = 0, then our algorithm forbids the addition of the edge e_j, namely, x_j ← 1. In addition, if an edge e_i is in TabuList and x_i = 1, then our algorithm forbids the deletion of the edge e_i, namely, x_i ← 0. An aspiration criterion is activated to overcome the tabu status of a move whenever the solution then produced is better than the best historical solution achieved. This criterion will be effective only after a local optimum is reached.
33.4.4 Strategic Oscillation

The characteristic of strategic oscillation is that several move evaluation criteria are used for selecting moves. Our algorithm involves two move evaluation criteria. One is minimizing the objective function value (MOFV), and the other is minimizing the constraint violation quantity (MCVQ). While MOFV is used, our algorithm continues to select moves for a specified depth beyond the boundary, without considering the constraints. To be more specific,
the algorithm alternates the edge addition rule described before and the following edge deletion rule:

Edge deletion rule. SCC(k − 1) is constructed by removing an edge

e' \in \arg\max_{e' \in CY(k)} \{ f(SCC(k) - e') - f(SCC(k)) \}

from a cycle in the current SCC(k) under construction, where CY(k) denotes the set of edges in SCC(k) which form cycles.

The evaluation criterion is switched from MOFV to MCVQ at some turning point. While MCVQ is used to evaluate possible moves, the algorithm continues to select x' such that

x' \in \arg\min_{x \in N(x^c)} \delta(x),

where \delta is the degree of violation of the constraints defined by

\delta(x) := \sum_{j=1}^{l} \max\{0, a_j x - b_j\}.
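The violation degree δ(x) is straightforward to compute; the following small fragment is a sketch only, with the constraint data assumed to be stored as a dense matrix A (rows a_j) and a vector b.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// delta(x) = sum_j max(0, a_j x - b_j): only violated constraints contribute.
double violation(const std::vector<std::vector<double>>& A,
                 const std::vector<double>& b, const std::vector<int>& x) {
    double delta = 0.0;
    for (std::size_t j = 0; j < A.size(); ++j) {
        double ax = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) ax += A[j][i] * x[i];
        delta += std::max(0.0, ax - b[j]);
    }
    return delta;
}
```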
33.4.5 Diversification Frequency-based memory is one of the long-term memories and consists of gathering pertinent information about the search process so far. In our algorithm, we use residence frequency memory, which keeps track of the number of iterations where edges have been selected as a part of the solution. The diversification procedure begins at the situation that some spanning tree is formed.
33.5 Numerical Experiment In this section we apply our TS algorithm to solve examples of problem (33.1). The experiments are executed for complete undirected graphs where the number m of edges are 45, 105, 190, and 435. We use C as the programming language and compile all software with Microsoft Visual C++ 6.0. All the examples were tested on a PC with Celeron 2.4 GHz CPU under Microsoft Windows 2000. The compared GA is a modified version of the algorithm proposed by Zhou and Gen [8], in which the solution structure and genetic operations are based on the EC algorithm proposed by Blum and Blesa [9]. We have run each experiment 30 times. In each table, Best, Mean and Worst denote the best, mean and worst objective function values, respectively. Time in this table represents the average CPU running time in seconds.
Table 33.1 Comparison result for Example 1

                     TS              GA
m = 45    Best       2.65 × 10^−4    2.65 × 10^−4
          Mean       2.65 × 10^−4    2.65 × 10^−4
          Worst      2.65 × 10^−4    2.65 × 10^−4
          Time(s)    0.094           3.964
m = 105   Best       5.98 × 10^−7    5.98 × 10^−7
          Mean       5.98 × 10^−7    2.03 × 10^−6
          Worst      5.98 × 10^−7    4.98 × 10^−6
          Time(s)    0.162           19.003
m = 190   Best       5.47 × 10^−12   3.05 × 10^−8
          Mean       1.15 × 10^−10   2.80 × 10^−9
          Worst      3.50 × 10^−10   9.25 × 10^−9
          Time(s)    0.631           101.479
m = 435   Best       2.98 × 10^−17   1.89 × 10^−11
          Mean       7.01 × 10^−12   5.11 × 10^−10
          Worst      4.44 × 10^−11   1.96 × 10^−9
          Time(s)    4.11            716.58
Example 1. The first example is a nonlinear fractional MST problem, which is derived from a deterministic equivalent problem of an MST problem under fuzzy stochastic environments.

\text{Minimize } \frac{x^t V x}{(\beta x + \gamma)^2} \quad \text{subject to } ax \le b, \; x \in X,

where V is an m × m positive definite matrix. The coefficient vector β is defined as β = (β_1, β_2, \ldots, β_m), where β_j, j = 1, \ldots, m, are constant and γ is also constant. Table 33.1 shows the comparison result for Example 1. In Table 33.1, better values are indicated by boldface. Our TS algorithm is better than GA from the viewpoints of both accuracy and computational time.

Example 2. The second example is derived from the reliability optimization problem.

\text{Maximize } \prod_{i=1}^{m} (r_i + c_i x)^{x_i} \quad \text{subject to } a_j x \le b_j, \; j = 1, \ldots, l, \; x \in X,
where ri and ci = (ci1 , ci2 , . . . , cim ) are a constant and a constant value vector, respectively. Table 33.2 shows the comparison result for Example 2. Our TS algorithm is better than GA from the viewpoints of both accuracy and computational time.
Table 33.2 Comparison result for Example 2

                     TS              GA
m = 45    Best       6.36 × 10^−1    6.33 × 10^−1
          Mean       6.33 × 10^−1    6.33 × 10^−1
          Worst      6.33 × 10^−1    6.33 × 10^−1
          Time(s)    0.152           2.802
m = 105   Best       6.21 × 10^−1    6.15 × 10^−1
          Mean       6.17 × 10^−1    5.95 × 10^−1
          Worst      6.15 × 10^−1    5.81 × 10^−1
          Time(s)    0.364           8.3438
m = 190   Best       6.09 × 10^−1    5.72 × 10^−1
          Mean       6.07 × 10^−1    5.34 × 10^−1
          Worst      6.07 × 10^−1    4.83 × 10^−1
          Time(s)    0.948           24.592
m = 435   Best       5.61 × 10^−1    4.55 × 10^−1
          Mean       5.59 × 10^−1    3.95 × 10^−1
          Worst      5.59 × 10^−1    3.20 × 10^−1
          Time(s)    6.31            96.93
Example 3. The third example is an original problem in which the objective function includes a trigonometric function.

\text{Minimize } \sum_{i=1}^{m} \left[ y_i^2 - 10\cos(2\pi y_i) \right] \quad \text{subject to } y_i = 5.14 \times \frac{\sum_{j=1}^{i} (-1)^j x_j}{n^2 - 1}, \; i = 1, \ldots, m, \quad a_j x \le b_j, \; j = 1, \ldots, l, \quad x \in X

Table 33.3 Comparison result for Example 3

                     TS        GA
m = 45    Best       97.01     97.01
          Mean       97.01     98.95
          Worst      97.01     116.41
          Time(s)    0.174     2.802
m = 105   Best       117.46    117.46
          Mean       117.46    171.16
          Worst      117.46    251.71
          Time(s)    2.787     11.8323
m = 190   Best       112.93    225.86
          Mean       129.07    306.04
          Worst      393.40    361.38
          Time(s)    12.377    32.688
m = 435   Best       83.68     412.80
          Mean       97.43     528.14
          Worst      145.04    601.89
          Time(s)    50.62     110.53
Table 33.3 shows the comparison result for Example 3. Our TS algorithm is better than the GA except for the worst objective function value for m = 190. As a whole, Tables 33.1 to 33.3 show the advantage of TS over the GA in terms of both computational time and accuracy.
33.6 Conclusion

In this paper, we have considered a TS algorithm for solving nonlinear minimum spanning tree problems. The proposed algorithm has been developed based on strategic oscillation and diversification by residence frequency. The results of numerical experiments show that our algorithm has the advantages of high speed and high accuracy. In the future, we will investigate the effects on the performance of the TS algorithm when the parameters involved in the algorithm are changed. Furthermore, we will examine the performance of our TS algorithm for various types of graphs, such as grid graphs, regular graphs, and scale-free networks.
References

1. J.B. Kruskal (1956) On the shortest spanning subtree and the traveling salesman problem. Proceedings of the American Mathematical Society, 7: 48–50.
2. R.C. Prim (1957) Shortest connection networks and some generalisations. Bell System Technical Journal, 36: 1389–1401.
3. H.N. Gabow, Z. Galil, T. Spencer, and R.E. Tarjan (1986) Efficient algorithms for finding minimum spanning trees in undirected and directed graphs. Combinatorica, 6: 109–122.
4. B. Chazelle (2000) A minimum spanning tree algorithm with inverse-Ackermann type complexity. Journal of the ACM, 47: 1028–1047.
5. H. Katagiri and H. Ishii (2000) Chance constrained bottleneck spanning tree problem with fuzzy random edge costs. Journal of the Operations Research Society of Japan, 43: 128–137.
6. H. Katagiri, M. Sakawa, and H. Ishii (2004) Fuzzy random bottleneck spanning tree problems. European Journal of Operational Research, 152: 88–95.
7. H. Katagiri, E.B. Mermri, M. Sakawa, K. Kato, and I. Nishizaki (2005) A possibilistic and stochastic programming approach to fuzzy random MST problems. IEICE Transaction on Information and Systems, E88-D: 1912–1919.
8. G. Zhou and M. Gen (1998) An efficient genetic algorithm approach to the quadratic minimum spanning tree problem. Computers & Operations Research, 25: 229–237.
9. C. Blum and M.J. Blesa (2005) New metaheuristic approaches for the edge-weighted k-cardinality tree problem. Computers & Operations Research, 32: 1355–1377.
10. F. Glover and M. Laguna (1997) Tabu search. Kluwer Academic Publishers, Norwell, MA.
11. S. Hanafi and A. Freville (1998) An efficient tabu search approach for the 0–1 multidimensional knapsack problem. European Journal of Operational Research, 106: 659–675.
Chapter 34
Customization of Visual Lobe Measurement System for Testing the Effects of Foveal Load Cathy H.Y. Chiu and Alan H.S. Chan
Abstract Visual lobe is commonly defined as the area visible at a single glimpse or the area within which a point source can be perceived without movement of the eyes or the head. Lobe size is a function of the characteristics of the target and background; targets of different conspicuity give different visual lobe areas. Performance of a peripheral task was also affected when a central task was performed concurrently. In a dual-task study, deterioration of a peripheral task was usually noted when foveal load was induced. The task performance deterioration was due to the different experimental settings of the foveal load variables such as complexity of foveal load and priority assignment of attentional resources. However, various foveal load features were used in different experiments. Finally, the various features of foveal load for dual-task tests were summarized. The visual lobe measurement system (VILOMS) software was then enhanced in order to investigate the effects of foveal load on visual lobe shape [1]. Keywords: Foveal loading · lobe shape index · visual lobe measurement
34.1 Introduction Visual search is an important process in many human activities. Despite the advances in technology, vision is a topic of interest in industrial inspection where target items to be detected are embedded in a background of nontargets [2]. Performance characteristics of the central visual field on the human visual system have long been an interesting topic, especially since many visual functions are optimal in the foveal region. When a human fixates a point, visual acuity is at the maximum in the fovea, and the visual sensitivity along the line of sight decreases Cathy H.Y. Chiu and Alan H.S. Chan Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong
approximately linearly into the periphery, while in the far periphery it falls off more rapidly [3]. Only two of the many visual functions are superior in the periphery relative to the central field, absolute light sensitivity and self-motion or vection. Leibowitz [4], therefore, postulated a dichotomy of functions between central field and recognition and peripheral field and spatial orientation in the visual system. Peripheral vision comprises most of the visual field, and the cooperation of foveal and peripheral vision plays an important role in the total performance of human vision in man–machine systems. For example, in supervisory tasks in a control room, aircraft control, and car driving, the human operator needs to observe several displays at the center of his visual field continuously and accurately. The operator also has to simultaneously detect and respond to unpredictable alarm signals or unexpected events presented in the periphery of the visual field that provide vital information for the successful control of machines and for survival. Several previous investigation studies on divided attention showed a poorer peripheral task performance when both central and peripheral tasks were performed simultaneously. Peripheral visual performance was degraded by task-irrelevant information [5], alcohol [6], and increased concentration on a demanding visual task presented in the central visual field [7]. The accuracy of detection of peripheral signals decreased as performance on a central task was improved by incentives. The accuracy on the report of the presence of stimuli in the extreme periphery was found to decrease when subjects were required to count the flashes of a foveal fixation light [8]. Some other studies indicated a deterioration of peripheral task performance when the complexity of a foveal task increased [9, 10]. Different dual-task studies were conducted with different experimental settings on response sequence, priority assignment of attentional resources, levels of foveal loading, and order of testing. Most studies asked subjects to respond to the foveal task prior to the peripheral one [11,12]. Most dual-task studies made the foveal task primary and the peripheral task secondary. Other studies set the peripheral task as the primary task and the foveal task as the secondary task [13]. A study conducted by Leibowitz and Appelle did not have any specific instructions on the priorities of the tasks [14]. As there were different experimental settings for dual-task studies, a study was conducted to examine the effects of different response sequences to the dual-test task on tunnel vision and task performance [13]. Moreover, priority assignment of attentional resources and order of testing were also investigated. Results showed that except for the response sequences, all other factors demonstrated significant effects on visual lobe dimensions. Features of foveal load varied across different dual-task studies. Most dualtask studies were conducted by holding the foveal visual factors as a constant while increasing the levels of foveal load difficulty by raising the cognitive demand [11–13, 15, 16]. Williams [11] required subjects to respond “same” or “different” based on the physical or category conditions (i.e., whether a letter was a vowel or a consonant) as the low and high foveal load tasks. Besides, Williams [12, 16, 17] conducted dual-task tests by asking subjects to memorize a set of letters and
numbers containing two or six characters for a low and high foveal load level, respectively. Chan and Courtney [13, 15] used a two-digit number as the foveal load and induced four levels of foveal load difficulty. At the lowest level, the foveal load was absent. At the next level, the two-digit number was presented, but subjects were not required to respond to it. At the two highest levels, subjects were asked to report the number directly and to report the sum of its digits, respectively. Ikeda and Takeuchi [10] demonstrated shrinkage of the visual field in a foveal recognition and peripheral detection task, also with four levels of foveal load. At the lowest level there was no foveal load. A set of two English lowercase letters was used for the second level. Level 3 was a combination of three lowercase letters, three numbers, a single handwritten Japanese character, or a simple traffic signal. Level 4 consisted of combinations of three lowercase letters and numbers, two uppercase letters, three numbers arranged in two lines, two Japanese characters representing place names such as Tokyo, or complicated nonsense figures, all handwritten. Instead of a character set, Rogé et al. [18] used spots as the foveal load. Their foveal task consisted of eight spots forming a circle at 1° eccentricity from the center, displayed for 100 ms on each presentation, with one spot appearing more luminous than the others; subjects were required to detect and report the presence of this more luminous spot as quickly and accurately as possible. Unlike most dual-task tests, which use a static foveal load, the study by Rogé et al. [19] used a dynamic foveal load: the signals for the central task were a 1.8-cm moving pointer, with critical number signals appearing randomly.

The traditional practice of predicting search performance relied mainly on visual lobe area, size, or dimensions [20, 21]. However, Chan and Courtney [22] suggested that irregularities of visual lobes have important implications for visual search and related mathematical models. In mapping visual lobes on 16 meridians for more accurate estimates of lobe area, the binocular visual fields were found to be very irregular in shape, with differences among subjects [23]. A more detailed full-field mapping later confirmed the irregularities of the lobe boundary and even indicated the presence of regions of apparent insensitivity within it [24]. This research revealed that, in addition to visual lobe size, area, or dimension, lobe shape plays an important role in the mathematical modeling of search. Foveal load and other related factors clearly change visual lobe dimensions; however, previous results showed that visual lobe shape is very irregular, that size alone does not guarantee a successful prediction of search performance, and that lobe shape should also be taken into consideration. Thus, in addition to visual lobe size, changes of visual lobe shape due to the presence of foveal load are worth studying. Although different features of foveal load have been used in different dual-task studies, the foveal load features from Chan and Courtney's study [15] were adopted for the enhancement of VILOMS. After the enhancement of the application software, the changes of visual lobe shape under the influences of
different task parameters can then be quickly examined in terms of the 16 shape indexes, which were categorized into roundness, boundary smoothness, symmetry, elongation, and regularity.

Fig. 34.1 The factors selection page in the software
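The chapter relies on the 16 shape indexes defined by Chan and So [1] but does not restate their formulas. As a rough illustration of how such measures can be derived from the meridian lengths that the software records, the Python sketch below (the function names and the 24-meridian example are ours, not part of VILOMS) approximates the lobe as a polygon, estimates its area and perimeter, and computes a common roundness (compactness) measure, 4πA/P²; the actual indexes used by the software follow [1].

```python
import math

def lobe_area_and_perimeter(radii):
    """Approximate a visual lobe mapped on n equally spaced meridians as a polygon
    whose i-th vertex lies at distance radii[i] along meridian i (e.g., in degrees
    of visual angle)."""
    n = len(radii)
    dtheta = 2 * math.pi / n
    area = perimeter = 0.0
    for i in range(n):
        r1, r2 = radii[i], radii[(i + 1) % n]
        # Triangle formed by the lobe centre and two adjacent boundary points.
        area += 0.5 * r1 * r2 * math.sin(dtheta)
        # Side length between the two boundary points (law of cosines).
        perimeter += math.sqrt(r1 * r1 + r2 * r2 - 2 * r1 * r2 * math.cos(dtheta))
    return area, perimeter

def roundness(area, perimeter):
    """Classical compactness index: 1.0 for a circle, smaller for irregular shapes."""
    return 4 * math.pi * area / (perimeter ** 2)

# Hypothetical 24-meridian lobe that is wider horizontally than vertically.
radii = [3.0 + abs(math.cos(2 * math.pi * i / 24)) for i in range(24)]
a, p = lobe_area_and_perimeter(radii)
print(round(a, 2), round(p, 2), round(roundness(a, p), 3))
```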
34.2 Design

34.2.1 Additional Features

The new features added to VILOMS support greater flexibility in changing the foveal loading parameters in order to accommodate various experimental design requirements (Fig. 34.1).

1. Levels of foveal load. In order to study the effects of different levels of foveal load difficulty on visual lobe shape characteristics, four options of foveal loading level, i.e., Off, Level 1, Level 2, and Level 3, are provided [Fig. 34.2(a)]. When the foveal loading function is set to Off, no foveal load is present in the stimulus and subjects respond to the peripheral task only; this situation is the same as performing a simple visual lobe measurement. For the Level 1, 2, and 3 foveal tasks, a foveal load is added by placing a two-digit number in the center of the stimulus. At the lowest level, Level 1, the two-digit number is presented in the center of the stimulus, but subjects are only required to respond to the peripheral task; no response to the two-digit number is required despite its presence. At Level 2, in addition to the peripheral task, subjects also need to identify and report the foveal two-digit number. At the highest level, Level 3, subjects are required, in addition to the peripheral task, to respond to the foveal load by reporting the sum of the two digits of the number.
Fig. 34.2 The selection page for (a) levels of foveal load and (b) order of testing
2. Order of testing. In a dual-task test, subjects can respond first to either the foveal load or the peripheral target. Therefore, two input priority options, i.e., foveal load or peripheral target, are offered [Fig. 34.2(b)]. Setting the input priority to "Target" requires subjects to locate the peripheral target before keying in the numbers for the foveal task at Levels 2 and 3. Conversely, with the "Loading" option, subjects are required to key in the numbers for the Level 2 and Level 3 foveal tasks before locating the peripheral target (the response logic implied by these options is sketched at the end of this subsection).

In this dual-task test, the total number of presentations is independent of the number of correct responses made on the foveal loading task; the measurement ends as soon as the subject's visual lobe has been delineated. For a comprehensive understanding of lobe shape characteristics, a sufficient number of meridians should be mapped. Typically, a complete 24-meridian mapping requires 350 to 600 exposures per subject.
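The level and priority options amount to a small set of response rules. The actual application is written in Visual Basic 6 (Sect. 34.2.3); the Python sketch below only restates the rules described above, with hypothetical function and option names, and is not the program's code.

```python
FOVEAL_LEVELS = ("Off", "Level 1", "Level 2", "Level 3")

def required_foveal_response(level, two_digit):
    """Correct foveal answer for one presentation, or None when no keyed
    response is required (Off and Level 1: peripheral task only)."""
    if level not in FOVEAL_LEVELS:
        raise ValueError(f"unknown level: {level}")
    if level in ("Off", "Level 1"):
        return None
    if level == "Level 2":
        return two_digit                   # report the two-digit number itself
    return sum(divmod(two_digit, 10))      # Level 3: report the sum of the two digits

def response_order(level, priority):
    """Sequence of responses in one trial under the 'Target' or 'Loading'
    input priority option."""
    steps = ["locate peripheral target"]
    if level in ("Level 2", "Level 3"):
        foveal = "key in foveal response"
        steps = [foveal] + steps if priority == "Loading" else steps + [foveal]
    return steps

print(required_foveal_response("Level 2", 47))   # 47
print(required_foveal_response("Level 3", 47))   # 11
print(response_order("Level 3", "Loading"))      # foveal response first, then target
```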
34.2.2 Stimuli

The stimuli are generated using VILOMS. In a dual-task test, each presentation displays a two-digit number and a target amongst a background of regularly spaced nontargets [Fig. 34.3(a)]. The two-digit number is presented in the center of the screen and is randomly selected from the range 12 to 98, excluding numbers with repeated digits (e.g., 22, 33). The target appears only on the meridians that are to be mapped. All the other positions are filled with 418 nontargets, forming a uniform two-dimensional test field. No targets are placed on the outside
edges, so that every target is completely surrounded by nontargets. The targets, nontargets, and numbers are of the same size. A postexposure masking stimulus, in which all background objects, targets, and the two-digit number position are filled with +'s of the same size as in the test stimuli, is presented immediately after the stimulus. Subjects are then required to indicate the estimated target location by clicking the mouse on a + position. When subjects perform a foveal task at Level 2 or Level 3, a message is displayed under the two-dimensional test field reminding them to input the two digits [Fig. 34.3(b)] or the sum of the two digits [Fig. 34.3(c)], respectively. Subjects are forced to respond to both the peripheral and foveal tasks even when they cannot identify the target position, the two-digit number, or both [Fig. 34.3(d)].
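The text fixes the number range and the rule that targets never fall on the outside edge, and Fig. 34.3(a) shows a target O among nontarget Xs, but it does not give the grid dimensions. The sketch below is therefore only a schematic reconstruction of the stimulus generation: the 20 × 21 grid size and the function names are assumptions for illustration, not the software's actual parameters.

```python
import random

ROWS, COLS = 20, 21   # illustrative grid: the text specifies only a regular
                      # two-dimensional field of 418 nontargets around one target

def draw_two_digit():
    """Random two-digit number from 12 to 98, excluding repeated digits (22, 33, ...)."""
    return random.choice([n for n in range(12, 99) if n // 10 != n % 10])

def make_stimulus(target_cell):
    """Fill a regular grid with nontarget 'X's, put the foveal two-digit number in the
    centre cell, and place the target 'O' at target_cell (never on an outside edge)."""
    r, c = target_cell
    centre = (ROWS // 2, COLS // 2)
    assert 0 < r < ROWS - 1 and 0 < c < COLS - 1, "no targets on the outside edges"
    assert (r, c) != centre, "the centre cell is reserved for the foveal load"
    grid = [["X"] * COLS for _ in range(ROWS)]
    grid[centre[0]][centre[1]] = str(draw_two_digit())
    grid[r][c] = "O"
    return grid

stim = make_stimulus((3, 15))   # target placed on the meridian position under test
print("\n".join(" ".join(cell.rjust(2) for cell in row) for row in stim))
```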
34.2.3 Software

The program for the test was developed with Microsoft Visual Basic Professional Edition 6.0, using an OCX component, HITME, developed by Mabry Software Inc. The visual lobe measurement system can be used to generate the stimuli, capture subjects' responses, and export data to statistical software for further analysis.
34.2.4 Apparatus

A personal computer with an AMD Athlon 2500-MHz or faster microprocessor should be used to present the stimuli. A quality display adapter is highly desirable for smooth image presentation. A mechanical mouse is used to control stimulus presentation and to input estimated target positions. A keyboard is used by subjects for inputting and confirming foveal responses. An adjustable chair is used for subjects' comfort and to ensure that the line of regard is roughly perpendicular to and centered on the screen.
34.2.5 Output

Once the visual lobe mapping is completed, the message "Test completed" is shown, and a result presentation screen is displayed immediately.
Fig. 34.3 (a) An example of a dual-task test. One target O appears amongst a background of Xs, and a two-digit Arabic numeral appears in the center of the stimulus (not to scale). (b) An example of a dual-task test for a foveal task at Level 2. The message "Input the 2-digit:" is shown under the postexposure masking stimulus. (c) An example of a dual-task test for a Level 3 foveal task. "Input the sum of the 2 digits:" is displayed below the postexposure masking stimulus. (d) A window displaying "Please input loading first" pops up if the subject presses Enter without keying in any numbers
Fig. 34.4 The result presentation screen for percentage of correct foveal loading responses
In addition to the existing information, e.g., the subject's personal data, measurement environment, visual lobe shape, lengths of meridians, and shape indexes, the total percentage of correct responses for the foveal task is also displayed (Fig. 34.4). All captured results can be exported to statistical software for further analysis. For the foveal task, both the overall percentage of correct responses and the percentage of correct responses at each target presentation location are included in the export.
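The exported foveal-task results amount to simple percentages over the trial log. The sketch below shows one plausible way such an aggregation could be computed; the (meridian, eccentricity) location keys and the function name are illustrative, since the chapter does not specify the export format.

```python
from collections import defaultdict

def foveal_accuracy(trials):
    """trials: iterable of (target_location, foveal_correct) pairs, one per presentation,
    where target_location identifies where the peripheral target appeared and
    foveal_correct records whether the keyed foveal response was right."""
    per_location = defaultdict(lambda: [0, 0])   # location -> [correct, total]
    for location, correct in trials:
        per_location[location][0] += int(bool(correct))
        per_location[location][1] += 1
    total_correct = sum(c for c, _ in per_location.values())
    total = sum(t for _, t in per_location.values())
    overall = 100.0 * total_correct / total if total else 0.0
    by_location = {loc: 100.0 * c / t for loc, (c, t) in per_location.items()}
    return overall, by_location

# Example: presentations on two meridians at three locations.
log = [(("0 deg", 4), True), (("0 deg", 4), False),
       (("90 deg", 6), True), (("90 deg", 8), True)]
overall, by_location = foveal_accuracy(log)
print(round(overall, 1), by_location)
```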
34.3 Conclusion

In conclusion, the user-friendly VILOMS software was successfully enhanced with a new foveal load testing feature. The enhanced software enabled the authors to study, in a more effective and efficient way, the effects of factors such as level of foveal load, order of testing, response sequence, and priority assignment of attentional resources on the visual lobe shape along various meridians. In addition, the influence of peripheral task factors such as target difficulty, target size, and density of the nontarget background on foveal task performance could also be investigated. A complete set of lobe area, perimeter, and 16 shape indexes, together with the percentage of correct foveal responses, can be obtained after each completed dual-task test, allowing statistical analysis to be carried out easily.

Acknowledgment The work described in this paper was fully supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China [Project No. CityU 110407].
References

1. A.H.S. Chan and D.K.T. So (2006) Measurement and quantification of visual lobe shape characteristics. International Journal of Industrial Ergonomics, 36: 541–552.
2. S.G. Drury and P.V. Prabhu (1994) Human factors in test and inspection. In G. Salvendy and W. Karwowski (eds.), Design of Work and Development of Personnel in Advanced Manufacturing. Wiley, New York, pp. 355–401.
3. J.R. Bloomfield (1995) Theoretical approaches to visual search. In J.G. Fix and S.G. Drury (eds.), Reliability and Quality Control. Taylor & Francis, London, pp. 19–29.
4. H.W. Leibowitz (1986) Recent advances in our understanding of peripheral vision and some implications. Proceedings of the Human Factors Society 30th Annual Meeting, Human Factors Society, Santa Monica, CA, pp. 605–607.
5. N.H. Mackworth (1965) Visual noise causes tunnel vision. Psychonomic Science, 3: 67–68.
6. H. Moskowitz and S. Sharma (1974) Effects of alcohol on peripheral vision as a function of attention. Human Factors, 16: 174–180.
7. A.P. Gasson and G.S. Peters (1965) The effect of concentration upon the apparent size of the visual field in binocular vision. The Optician, Part I, 148: 660–665; Part II, 149: 5–12.
8. R.G. Webster and G.M. Haslerud (1964) Influence on extreme peripheral vision of attention to a visual or auditory task. Journal of Experimental Psychology, 68: 269–272.
9. D.J. Holmes, K.M. Cohen, M.N. Haith, and F.J. Morrison (1977) Peripheral visual processing. Perception and Psychophysics, 22: 571–577.
10. M. Ikeda and T. Takeuchi (1975) Influence of foveal load on the functional visual field. Perception and Psychophysics, 18: 255–260.
11. L.J. Williams (1982) Cognitive load and the functional field of view. Human Factors, 24: 683–692.
12. L.J. Williams (1985) Tunnel vision induced by a foveal load manipulation. Human Factors, 27: 221–227.
13. H.S. Chan and A.J. Courtney (1994) The effects of priority assignment of attentional resources, order of testing and response sequence on tunnel vision. Perceptual and Motor Skills, 78: 899–914.
14. H.W. Leibowitz and S. Appelle (1969) The effect of a central task on luminance thresholds for peripherally presented stimuli. Human Factors, 11: 387–392.
15. H.S. Chan and A.J. Courtney (1993) Effects of cognitive foveal load on a peripheral single-target detection task. Perceptual and Motor Skills, 77: 515–533.
16. L.J. Williams (1989) Foveal load affects the functional field of view. Human Performance, 2: 1–28.
17. L.J. Williams (1995) Visual field tunneling in aviators induced by memory demands. The Journal of General Psychology, 122: 225–235.
18. J. Rogé, L. Kielbasa, and A. Muzet (2002) Deformation of the useful visual field with state of vigilance, task priority, and central task complexity. Perceptual and Motor Skills, 95: 118–130.
19. J. Rogé, L. Kielbasa, and A. Muzet (2002) Deformation of the useful visual field with state of vigilance, task priority, and central task complexity. Perceptual and Motor Skills, 95: 118–130.
20. A.J. Courtney and H.S. Chan (1986) Visual lobe dimensions and search performance for targets on a competing homogeneous background. Perception and Psychophysics, 40: 39–44.
21. A.K. Gramopadhye and R. Sreenivasan (1994) Visual lobe and visual search performance. Proceedings of the Human Factors and Ergonomics Society 38th Annual Meeting, pp. 1261–1265.
22. H.S. Chan and A.J. Courtney (1996) Foveal acuity, peripheral acuity and search performance: A review. International Journal of Industrial Ergonomics, 18: 113–119.
23. A.J. Courtney and H.S. Chan (1985) Mapping the binocular visual field for a target embedded in a regular background. Perceptual and Motor Skills, 61: 1067–1073.
24. A.J. Courtney and H.S. Chan (1985) Visual lobe area for single targets on a competing homogeneous background. Human Factors, 27: 643–652.
Index
A Ackermann’s formula, 362, 373 Adaptive control system, components of, 348 Adaptive fuzzy controller (AFC), 276 design of, 278 Adaptive Kalman filtering chaotic noises for Henon map, 322–324 logistic map, 320–322 Lorenz, 319–320 system dynamic and measurement models for, 316 Aggregation hierarchy, of resources, 237 Aggregative supplier assessment, 123–124 Air cabotage, 207 Algorithmic error vs. grad6 for geometry g5, 421 Altera Stratix synthesis results, 463 Analysis of variance (ANOVA), 64 ANSI-C storage class specifiers, 458 Ant colony optimization (ACO) algorithm, 39, 468 general concept of, 42–43 information exchange among, 51 memory requirement for, 43–44 APEC. See Asia-Pacific Economic Cooperation Application layer. See also Layered QoS enhancement functionality of, 304 QoS of, 309 Approximation problem, 327 for disturbance attenuation tuning feedback controller, 336
target sensitivity function, 335 frequency-dependent, 330 solution of optimal, 331–333 ASEAN. See Association of Southeast Asian Nations ASEAN Free Trade Area (AFTA), 202 ASEAN highway network, 203 Asia-Pacific Economic Cooperation, 199 Association of Southeast Asian Nations, 199, 202 Australia Post (AP) data set computational results of, 192–194 flow of, 191 for heuristic SATLCHLP evaluation, 190 Average tracking error, 407–408 Axial-flow blood pump blood flow, 392 impeller, diffuser, and straightener, 393 Axial shaving method, 95 kinematical parameters in, 101
B Backward scheduling approach, 45–46 Bamboo chair anthropometric measurements for designing, 13 culture appropriateness level of, 18 comfort level of, 21, 24 designs correlation of, 19 dimensional appropriateness of, 24 ergonomic dimensions of, 20 participants weight analysis, 22 design and fabrication of, 14
488 Bamboo chair (cont.) evaluation of cognitive domain, 14 modern aesthetic appeal of, 25 comfort level of, 21, 22, 24 correlation for all designs of, 19 and culture bamboo chair, 17 dimensional appropriateness of, 24 ergonomic dimensions of, 21 participants’ subjective views regarding, 18 sample selection, 17 satisfaction tests for, 13 Batch processes. See also Zero-wait batch processes classification of, 144 comparison with continuous processes, 143 makespan estimation of by Gantt chart method, 145–146 multiproduct, 146–147 single product, 145–146 Binding number bind of graph, 210 Binocular visual fields, 479
C Cabotage restrictions, on foreign LSPs, 206–207 Capacitated single allocation hub location problem decision problem, 185 Heuristic SATLCHLP for and AP problem solutions, 189 description of, 190 hub location selection, 189 hubs number estimation, 188 nonhubs allocation, 189–190 hub networks, 187 mathematical model for objective function, 188 problem structure analysis, 187 Cardiovascular system axial-flow blood pump of, 392 failure of, 391 Central nervous system, 139 Chaotic system, 276, 277 Chebyshev inequality, for stability analysis, 265, 266 Chinese population direction-of-motion stereotypes for, 2 stimulus-response stereotypes for, 3 Chi-square tests, 3 Closed-loop system, dynamics of, 347
Index CNS. See Central nervous system Collaborative manufacturing, decision making in, 156 Collision functions, 234–235 for balanced work cell, 242 in heavily loaded transport system, 238 for local minima, 241 for unbalanced work cell, 240 Combined transport. See Multimodal transport Communication network cross-layer coordination in, 309 QoS enhancement of data link layer optimization, 306–307 network layer optimization, 305, 306 Complex programmable logic device, 456 Control law, 263, 264 approximated by RNFN, 264 closed-loop system, 265 fuzzy controller and robust controller, 279 PID-learning algorithm, 280 Control-movement compatibility, in cranes, 2 Convex programming problem, 173. See also Optimization problems Coordinate systems of gear-shaving process, 97, 98 Correlation scheduling theory, 232–234 Cost of exchange nontariff-related barriers, 200 tariff-related barriers, 200–201 Covariance matrices and covariance scaling factor, 318 measurement and system noise associated with, 315 CPLD. See Complex programmable logic device Critical angle in prediction ellipse of filter, 402 selection technique, 402–404 Crowning mechanism of gear-shaving machine mathematical model of, 96, 97 rotating angle, 99–100 CSAHLP. See Capacitated single allocation hub location problem Culture bamboo chair. See also Bamboo chair appropriateness level of, 18 comfort level of, 21, 24
Index designs correlation of, 19 dimensional appropriateness of, 24 ergonomic dimensions of, 20 participants weight analysis, 22 Customs-related barriers, 205–206 Cycle shop, 227, 228 Cyclogram for cycle shop, 227, 228
D Data link layer. See also Layered QoS enhancement functionality of, 304 QoS optimization of, 307 Decentralized control scheme design of control law, 263–264 stability analysis, 264–268 development of, 270 for non linear system, 260 Decision making in collaborative manufacturing, 156–157 internal functional units and, 169 Delphi process for collaborative manufacturing, 167 Determinative number (DN), assessment, 118–120 Diagonal shaving method, 95 Differentiated services (DiffServ) classification of, 305 multiprotocol label switch (MPLS) and, 307 real-time traffic delay with, 306 Direction-of-motion stereotypes for Chinese population, 2 comparison of, 4 strength and reversibility of, 1 Disjunction graph, 41 Disturbance attenuation tuning approximation problem and target sensitivity function, 335 and feedback controller parameters, 336 input load disturbance, 337–338 integrated squared error of, 338–340 target sensitivity function, 338 Down-to-decrease (DD) stereotypes, 6 Drift state, in deterioration function, 71–72 Duffing’s equation, for nonlinear circuit, 277 Durand algorithm, 424 Dye penetrant inspection (DPI) method, 128–129 health hazards in, 138–139
importance of lighting in, 136
E Eddy current inspection, 132 Eddy current inspection techniques, 128, 132–133 Edge deletion rule, 472 Electric power system load frequency problem of, 361 parametric uncertainties in, 362 Electromechanical design, 9 Electronic data interchange (EDI), 121, 204, 206 Electronic design automation (EDA) tool, 456 Elliptic differential equations, solution of, 455 E-LSP approach, DiffServ and MPLS mapping in, 310 Estimation error, for stability analysis, 265–267 European Union (EU), 199 Evaluation number (EN), assessment, 118 Evolutionary computation (EC), 468 Expedite forwarding (EF) services, 305. See also Differentiated services (DiffServ)
F Fractional matching, of graphs fractional covered graph, 217–220 fractional deleted graph, 215–217 fractional extendable graph, 220–224 fractional factor-critical graph, 211–214 terminology and notations for, 209–210 FAM. See Flexible accelerator model Fast Fourier transform (FFT), 397 FEC. See Forwarding equivalent class Field-programmable gate array (FPGA) design flow, 457 fine-grained reconfigurable unit, 455 logic design implementation on, 457
490 Filter lumped uncertainties and disturbances of first-order, 365 nth-order, 367 second-order, 366 Finance unit profit and production-cost goals of, 170 role in decision making, 169 Finite difference method (FDM) algorithm development process, 416 for electrostatic potential calculations, 415 principal sources of error, 417 Firm risks, supplier assessment, 114–116 Flexible accelerator model, 86 Floating-point arithmetic operations, 460 Flow shop cyclograms for, 228 scheduling problem, 41 FOPTD approximation, plant parameters, 336–338 Foreign LPS, restrictions on, 206 Forward–backward scheduling, 53 Forwarding equivalent class, 309 Four-way lever-circular display configurations, 7 Four-way lever-digital counter test, 6 Fractional covered graphs, 217 Fractional deleted graphs, 215 Fractional extendable graphs, 220 Fractional factor criticality, 211 Fractional factor theory, 209 Fractional indicator function, 209 Fractional perfect matching, 210 Frequency-based memory, 472 Functional hierarchy, of processes, 237 Fuzzy control, 264 Fuzzy controller, 279 Fuzzy logic systems (FLS), tools for control designs, 260 Fuzzy system approximation of, 278 control system dynamics, 276
G GA. See Genetic algorithm Gantt chart, 42, 230–231, 235 GA parameters, 79 Gauss–Seidel method, 455 Gear shaving machine, simulation longitudinal tooth crowning in, 105–109 mathematical models in, 96–103
methods of, 95 tooth contact analysis of, 103–105
H Hanafi’s algorithm, 470 Handel-C compiler, 457 Handel-C language (HDL) for implementation of algorithms on hardware, 457 types and type operator, 458 Hardware description languages (HDLs), 456 Hazardous material symbols, 28
Index Henon map noise system error of parameter with, 322–324 mathematical description of, 322 Heuristic information matrix, 43, 44 Heuristic SATLCHLP and AP problem solutions, 189 description of, 190 hub location selection, 189 hubs number estimation, 188 nonhubs allocation, 189–190 High-fidelity model, real-time calculation for, 292–293 High-gain observer (HGO), 260, 269 Hub networks algorithm for problems of, 186, 187 implementation of, 185 problems of, 186 Human–machine interface control–display configurations in, 1 degree of reversibility to reduce confusion, 3 for Hong Kong Chinese, 1 Human visual system, 477 Hybrid magnetic bearings system control diagram, 396 motor drives and rotor of, 392 performance of, 397–399 PID controller, 396 prototyping tool for development of, 396
I I-AFC learning algorithm, 285 simulations of, 283, 284 IEEE802.1p, 306 Input-output resource, in cycle shop, 227 Index of reversibility (IR), 3 Initial trade-off criterion matrix, 170, 171 Integral sliding mode control, 246 Integral-type sliding surface, new, 248–253 Interactive Meta-Goal Programming (IMGP)-based decision analysis framework decision problem model for priority level, 163, 164 example (suture manufacturer), 169–171 interactive process decision goal function proposal, 161–165
IMGP decision model evaluation algorithm, 165–167 Internal model control (IMC) parameter comparison with feedback configuration, 330 structure of, 333 Intraregional maritime transport and infrastructure, 205 ISA-PID control law, 329, 334 ISMC. See Integral sliding mode control ISMC-controlled system, 246 Isolated toughness of graph G, 210
J Japanese-oriented supplier management practices, 113 Job-shop scheduling problem (JSP), 41
K Kalman filtering adaptive, 316 gain matrix, 406 measurement and system noise, 317 Kinematic error in shaved gear, 104–105 Kuyatt algorithms, 424
L Label Switch Routers (LSRs), 310 Laminated bamboo chair, 18 Laplace’s equation, mesh point selection by, 418
492 Large-scale systems, class of, 260 Layered QoS enhancement, 304 Lever–circular display test, 6 Linear piezoelectric ceramic motor computer-controlled, 385 dynamic equation of, 378 neural network (NN) control of, 384–387 principal structure of, 377 RIMC method, 389 robust intelligent motion control, 375 second-order nonlinear dynamic equation, 377 tracking control and error of, 378 ultrasonic vibration force, 376 Linear sliding mode controller, 349 control input vs. time in, 351 pendulum position vs. time in, 350 performance of, 357 Load balancing at subsystem level, 238–242 at system level, 237–238 Load balancing systems, 229–230 Load frequency (LF) control dynamic model for, 362, 363 state equations for, 363 in electric power system, 361 response using UDE, 369 Local pheromone matrix updating, 51 Logistic map noise system mathematical description of, 320 RMS error of estimation, 321–322 Logistics service providers, 199 customers of, 200 interview questionnaire for, 202 Longitudinal tooth crowning, in gear production, 105–109 Lorentz-type motor, principles of, 394 Lorenz chaotic noise system mathematical description of, 319 noise parameters of, 319, 320 LPCM. See Linear piezoelectric ceramic motor LSPs. See Logistics service providers Lumped uncertainty estimation, 364, 365 Lyapunov-based nonlinear adaptive sliding mode controller, 346 Lyapunov equation, 264, 280 Lyapunov function, 248, 250, 251, 253, 265, 280 Lyapunov redesign technique, 263 Lyapunov stability theorem, 276
Index M Magnetic bearings, basic principle of, 394 Magnetic particles inspection technique, 128–131 health hazards in, 138–140 importance of lighting in, 136 working posture of, 136–138 Magnetic resonance imaging (MRI), 454 Makespan estimation of multiproduct batch process, 146 using MILP formulations, 147 of single product batch process path continuity and idle time, 146 using Gantt chart method, 145 Man-machine systems, 478 Maritime transport, 203 Matrix method, makespan estimation applications of, 150 production sequence, 151–152 batch scheduling, 148 expression for, 150 slack variable determination, 148–149 Maximum-error function construction by partition of log grad6 axis, 420 definition of, 419 for tube, 11, 422 Maximum integrated protocol stack structure, 312 Meshing gear pair, coordinate system and parameters of, 103–104 Meta-goal programming multi-objective optimization approach, 158 MFBAnt, 46 benchmark problems for, 52 pheromone updating rule, 51 procedure of, 48 Microsoft Visual Studio .Net, applications of, 464 Minimizing constraint violation quantity (MCVQ), 471 Minimizing objective function value (MOFV), 471 Minimum spanning tree (MST), 467 Min-max problem formulation, 331 Modern bamboo chair. See also Bamboo chair aesthetic appeal of, 25 comfort level of, 21, 22, 24 correlation for all designs of, 19 and culture bamboo chair, 17 dimensional appropriateness of, 24 ergonomic dimensions of, 21
Index participants’ subjective views regarding, 18 Motoring coils, 395 MPI. See Magnetic particles inspection technique Multilevel IMGP decision model evaluation algorithm, 165–167 Multimedia transmission, protocol for, 309 Multimodal networks integration, impediments in, 204 Multimodal transport areas of research, 207 benefits of, 198 cost and efficiency advantages of, 199 Multimodal transport operator (MTO), 198 Multiple-colony ant algorithm, 40, 54 forward–backward scheduling in, 45 hierarchical cooperation in, 44–45 Multiple-target tracking algorithm, performance evaluation of, 407–408 Multiprotocol label switch (MPLS), 307 data traffic in, 309 mapping methods for DiffServ in, 310 Multiregion relaxation method, 419 Multivariate quality loss function, 70 Mutation, in genetic algorithm, 78
N NDT. See Nondestructive testing Network-based control systems applications of, 311–312 demand for, 301 Network delay in computer networks, 303 Network layer. See also Layered QoS enhancement functionality of, 304 QoS optimization of, 305–306 Network performance parameters analysis, 302–303 Network routers, enforce function of, 305 Neural controller, 376, 382 Neural networks (NNs) based adaptive control technique, 375 closed-loop feedback applications of, 376 tools for control designs, 260 Neuro-fuzzy networks, 259 Newton-Tau methods, numerical methods for, 436 n-factor criticality, 211 Nondestructive testing
493 chemical hazards of, 138–140 ergonomics, safety, and health problems of, 136–138 human factors in, 133–136 methods of, 128–133 Nondeterministic polynomial-time hard (NP-hard) combinatorial optimization problems, 39 Nonlinear adaptive sliding mode control and adaptive laws, 355 parameter selection, 356–357 performance of, 357 power consumption of, 357–358 system dynamics, 354 uncertain system parameters for, 353–354 validity of, 356, 357 Nonlinear dynamic systems, control of, 245, 246 Nonlinear Fredholm integral equation, 439 problem formulation, 433–434 problem solving by Newton method, 434–435 Tau method, 435–436 Nonlinear integro-differential equation problem formulation, 445–446 problem solving by Newton method, 447–448 Tau method, 448 Nonlinear programming problems, 174 Nonlinear sliding mode control, 350 control input vs. time in, 352–353 pendulum position vs. time in, 352 power consumption of, 358 Nontariff barriers and infrastructure, 204–205 and interconnectivity, 205 NP-hard combinatorial optimization problems, 467 NP-hard problem, 468 Numerical simulations, 254–256
O Open-shop scheduling problem, 39 Operational Planning unit production capacity goals of, 170 role in decision making, 169 Optimization problems, 173 Order-2 algorithm, 423 Order-10 algorithm construction for general mesh point, 417 power series for, 418
OR-Library, 53 Orthogonal and parallel SR planes, testing of hand and foot controls, 63
P Packet delivering delay, 303 Packet switch technologies, 301 Paired-associate learning, 28, 36 Partial differential equation (PDE), 455 Particle swarm optimization method homomorphous mapping, initial search positions, 176 multiple stretching technique for, 179 new search direction, 176, 177 numerical implementation of, 174 potentials of, 173 problems in, 175 procedure of revised, 180, 181 rPSO and RGENOCOP V, 181 searching procedure of, 175 secession stretching technique for, 178 swarm divison, 177, 178 Pheromone trail matrix, 43, 44 p-hub median problem (p-HMP), 186 PID-AFC system block diagram of, 280 bound of approximation error, 281 for chaotic dynamic system, 275, 276 control parameters of, 286 design of, 279 simulations of, 285, 286 stability of, 280 PID controllers. See Proportional-integrative-derivative controllers PID-learning algorithm, 280 PIPIC. See PseudoIsochromatic Plate Ishihara Compatible Color Vision Test 24 Plate 3PL. See Third-party logistics Plunge shaving method, 95 Polynomial–time algorithms, 467 Population stereotype, 2 Postexposure masking stimulus, 483 PP-300. See Rotary inverted pendulum control system Prediction covariance matrix, 406 Presentation layer. See also Layered QoS enhancement functionality of, 304 QoS enhancement of, 308 Priority dispatching rules (PDRs), 39 Processing delay, 303
Index Process resource Gantt chart, 230 Process states, quality loss, 74 Production-smoothing model (PSM), 83–84 hypothesis, 87–88 model analysis of, 88–93 model specification, 85–87 parameter estimations of, 91 runs tests, 90 structural parameters estimation, 92 temporal aggregation on, 87–93 Propagating delay, 303 Proportional-integrative-derivative controllers control law, 329 design problem formulation, 328 internal model control (IMC) parameter, 330–331 min–max problems, 331 weighted model matching problem, 329 frequency-dependent-shaped error, 338, 339 integral squared error (ISE) value of, 338, 339 inverted pendulum KRi rotary, 345, 346 problems associated with, 345 process model, industrial processes, 329 tuning of, 327–328 disturbance attenuation, 335–337 step response, 333–334, 337 PseudoIsochromatic Plate Ishihara Compatible Color Vision Test 24 Plate, 134 PSO method. See Particle swarm optimization method
Q Quality loss definition, 68 multivariate quality loss function, 70 Quality management system (QMS), 117 Quality number (QN), 117 Quality of service (QoS) class 0, 302 differentiated services (DiffServ) and integrated services, 305 enhancement framework of, layered, 304 network layer, 305 Quality selection problems, 67–68 deterioration functions, 70, 71 genetic algorithm, 74–79
mathematical model development, 69–74 numerical example, 79–80 Queuing delay, 303
R Radial basis function, 245 Rapid upper limb assessment method, 136–138 RBF. See Radial basis function RBF neural network, 245–247 Real-time control protocol applications of, 311 functions of, 309 Real-time distributed simulation managements design plan management, 295 simulation experiment management, 296 simulation data management technique, 296–297 standard management method, 296 Real-time interactive data flows, 302 Real-time simulation engine, 293, 298 Real-time traffic delay, 305–306 Recall training, 28 Recognition training, 28 Reconfigurable computing (RC), 454, 455 Recurrent neuro-fuzzy networks (RNFNs), 260, 262 approximation method of, 264–268 estimate of, 263 universal approximators, 263 Resource network QoS enhancement presentation layer, 308 session layer supervision, 309 transport layer optimization, 307–308 Rhino Robot, 5 degrees of freedom, 269 dynamic model of, 270 signals in, 270–272 Road cabotage, 207 Robust design technique, 276 Robust intelligent motion control (RIMC) system, 375 computational loading, 376, 385 design of feedback controller, 382 dynamic equation, 383 neural controller and robust controller, 376 Robust model, design of, 362 Root-mean-square (RMS), for 6th -order coefficients, 419
495 Rotary control–digital counter, 7 Rotary control–horizontal scale test, 6 Rotary inverted pendulum control system mathematical model of, 346–347 system identification and parameter estimation, 350 Rotating angle and work-gear movement, relations between, 100 Rotor, 392 air gap in, 394 electromagnetic force and flux density, 394 impeller, 392 permanent magnet, 394 rotation speed of, 397 RTCP. See Real-time control protocol RULA. See Rapid upper limb assessment method
S Sales unit production quota goals of, 170 role in decision making, 169 Satellite navigation signal simulation system, 291 distributed synchronization techniques, 298 general-purpose simulation architecture, 293–294 non-real-time layer, 294 real-time communication network, 293 real-time simulation architecture, 292–293 real-time system, 292 simulation architecture, 293 Scheduling unit machine group capacity goals of, 170 role in decision making, 169 SCM. See Supply-chain management Self-constructing neural network (SCNN), 379 learning phases of, 376, 388, 389 threshold value, 380 Session layer. See also Layered QoS enhancement functionality of, 304 QoS supervision of, 309 Shift and drift state, in deterioration function, 71–73 Shop performance, 227 Silicon steel laminations, 392 Simulated annealing (SA), 468
Simulation architecture, requirements for, 292 Simulation management techniques distributed synchronization, 298 process management, 297 Singapore Kunming rail link (SKRL), 203, 204 Single-colony ant algorithm, 39 Single input single output (SISO) system, 364 Single-point crossover, in genetic algorithm, 78 Sliding mode control law, 253–254 Sliding mode controller, design of linearized model, 348–349 simplicity and robustness of, 353 system identification and parameter estimation, 350 Sliding mode control (SMC), 245–246, 276, 361 applications of, 348 basic principle of, 347 design and simulation of linear, 348–350 nonlinear, 350–353 nonlinear adaptive, 353–357 Sliding mode control system, convergence property of, 347 Sonatest Masterscan 340 Ultrasonic Flaw Detector, 130, 132 SRM. See Supplier relationship management State estimation by Kalman filter, 315 system and measurement noise, 316, 317 Henon map noise, 322–324 logistic map noise, 320–322 Lorenz chaotic noise, 319–320 State space model, 368 Step response tuning, controller output signal generated by, 337, 338 problem, 339 rules, 334 Stimulus-response (SR) compatibility, spatial, 58–59 effect of combined hand and foot controls, 60–61 foot controls, 59–60 stimulus and response arrays, 61–64 Stimulus–response stereotypes, 3 Stochastic local search policy, 42 Strategic oscillation, characteristics of, 471
Index Successive overrelaxation (SOR), 454 hardware implementation of, 459–462 Supervised learning, in neural network designs, 247 Supplier assessment advantages and disadvantages, 122–123 criteria, 114–116 importance in manufacturing companies, 112 Supplier relationship management, 112 Supply-chain management adherence to quantity stipulations, 120–123 adherence to time schedules assessment, 117–120 aggregative supplier assessment, 123–124 functions of, 112 importance of, 114–116 quality of supplied products assessment, 117 role of, 111 Suture production steps, 169 Symbol training, 27 experimental design and analysis for, 27, 34–36 factors affecting effectiveness of, 27 training factors, 34 training method, 28–34 methods for, 27 pretest–posttest experiments for, 36 System of nonlinear Fredholm integral equations (SNFIE) application of the Newton method to, 440–441 method for solving, 439
T Tabu search (TS), 468 algorithm based on strategic oscillation, 469–470 Taguchi loss function, 68, 70 Tangential shaving method, 95 Target motion model of, 404–406 simulation parameters, 406–407 tracking approaches, 402 trajectories for, 407 Target tracker. See Track splitting filter Tau method application, 441–443 Third-party logistics, 116 Tooth contact analysis (TCA) of shaved gear, 103–105
Index Tooth crowning, machine setting parameters, 101 Toughness of graph, 210 Tracking error, 277, 285 chaotic trajectory, 277 convergence of, 276, 288 Track splitting algorithm. See Track splitting filter Track splitting filter, 401 critical angle sector, selective observations of, 402–404 performance parameter of, 407–408 prediction ellipse of, 402, 412 and target density, 408–411 Transport layer. See also Layered QoS enhancement functionality of, 304 QoS optimization of, 307–308 Transport layer protocol RUDP and TCP, comparison between, 308 RUDP and UDP, comparison between, 307 Transport resources and operations Gantt chart, 231 Tuning rules, 328, 336–337
U Ultrasonic inspection techniques, 128, 130, 132 Unbalanced work cell, with one transporting unit T, 240 Uncapacitated hub location problem (UHLP) comparison with p-HMP, 186 as quadratic optimization problem, 187 Uncertainty and disturbance estimator (UDE) error of estimation, 367 first-order filter and second-order filter, 372
estimation accuracy of first-order filter, 366 nth-order filter, 367 second-order filter, 366 load frequency response, 369 Uncontrolled chaotic dynamic system, chaotic orbits of, 277
V Very high-speed integrated circuit HDL (VHDL), 456, 484 VILOMS. See Visual Lobe Measurement Software Visual Basic 6, role, 79 Visual display terminal (VDT), 128 Visual Lobe Measurement Software, 135 Visual lobe measurement system (VILOMS) software, 477 Visual search process, 477
W Work cell, 237 Work-cell throughput diagram, balanced, 238 Work gear and shaving cutter gear ratio of, 99 parameters of tooth crowning, 101 Work-gear movement and additional rotating angle, relationship, 100
Z Zero-wait batch processes matrix method for batch scheduling, 148 makespan estimation, 148–150 production sequence, 151–152 scheduling design of, importance of, 143