Communications in Computer and Information Science
105
Rongbo Zhu Yanchun Zhang Baoxiang Liu Chunfeng Liu (Eds.)
Information Computing and Applications International Conference, ICICA 2010 Tangshan, China, October 15-18, 2010 Proceedings, Part I
Volume Editors

Rongbo Zhu
South-Central University for Nationalities, Wuhan, China
E-mail: [email protected]

Yanchun Zhang
Melbourne, VIC, Australia
E-mail: [email protected]

Baoxiang Liu
Hebei Polytechnic University, Tangshan, Hebei, China
E-mail: [email protected]

Chunfeng Liu
Hebei Polytechnic University, Tangshan, Hebei, China
E-mail: [email protected]
Library of Congress Control Number: 2010936074
CR Subject Classification (1998): C.2, D.2, C.2.4, I.2.11, C.1.4, D.4.7
ISSN: 1865-0929
ISBN-10: 3-642-16335-1 Springer Berlin Heidelberg New York
ISBN-13: 978-3-642-16335-7 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

springer.com

© Springer-Verlag Berlin Heidelberg 2010
Printed in Germany

Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper 06/3180
Preface
This volume contains the proceedings of the International Conference on Information Computing and Applications (ICICA 2010), which was held in Tangshan, China, October 15-18, 2010. As future-generation information technology, information computing and applications are becoming increasingly specialized: hardware, software, communications and networks are growing in scale and heterogeneity and becoming ever more complex, and this complexity grows more critical as applications multiply. To cope with this growing complexity, information computing and applications focus on intelligent, self-manageable, scalable computing systems and applications that operate, to the greatest extent possible, without human intervention or guidance. With the rapid development of information science and technology, information computing has become the third approach of scientific research. Information computing and applications is the field of study concerned with constructing intelligent computing and mathematical models, devising numerical solution techniques, and using computers to analyze and solve problems in the natural sciences, social sciences and engineering. In practice, it typically encompasses computer simulation, intelligent computing, Internet computing, pervasive computing, scalable computing, trusted computing, autonomy-oriented computing, evolutionary computing, mobile computing, computational statistics, engineering computing, multimedia networking and computing, and other forms of computation arising in various scientific disciplines and engineering. Information computing and applications is an important underpinning for techniques used in information and computational science, and it contains many unresolved problems that are worth studying.
The ICICA 2010 conference provided a forum for engineers and scientists in academia, industry, and government to address the most innovative research and development, including technical challenges as well as social, legal, political, and economic issues, and to present and discuss their ideas, results, work in progress and experience on all aspects of information computing and applications. There was a very large number of paper submissions (782), representing 21 countries and regions, not only from Asia and the Pacific but also from Europe and North and South America. All submissions were reviewed by at least three Program or Technical Committee members or external reviewers. It was extremely difficult to select the presentations for the conference because there were so many excellent and interesting submissions. In order to accommodate as many papers as possible while keeping the high quality of the conference, we finally decided to accept 214 papers for presentation, reflecting a 27.4% acceptance rate; 69 of these papers are included in this volume. We believe that all of these papers and topics not only provided novel ideas, new results, work in progress and
state-of-the-art techniques in this field, but will also stimulate future research activities in the area of information computing and applications. The exciting program of this conference was the result of the hard and excellent work of many people, including the Program and Technical Committee members, the external reviewers, and the publication chairs, all working under a very tight schedule. We are also grateful to the members of the Local Organizing Committee for supporting us in handling so many organizational tasks, and to the keynote speakers for agreeing to come to the conference with enthusiasm. Last but not least, we hope the participants enjoyed the conference program and the beautiful attractions of Tangshan, China.
September 2010
Rongbo Zhu Yanchun Zhang Baoxiang Liu Chunfeng Liu
Organization
ICICA 2010 was organized by Hebei Polytechnic University and the Hebei Scene Statistical Society, and sponsored by the National Science Foundation of China and the Hunan Institute of Engineering. It was held in cooperation with Springer's Lecture Notes in Computer Science (LNCS) and Communications in Computer and Information Science (CCIS) series.
Executive Committee

Honorary Chair
Jun Li, Hebei Polytechnic University, China

General Chairs
Yanchun Zhang, Victoria University, Australia
Baoxiang Liu, Hebei Polytechnic University, China

Program Chairs
Rongbo Zhu, South-Central University for Nationalities, China
Chunfeng Liu, Hebei Polytechnic University, China
Shaobo Zhong, Chongqing Normal University, China

Local Arrangement Chairs
Jincai Chang, Hebei Polytechnic University, China
Aimin Yang, Hebei Polytechnic University, China

Steering Committee
Qun Lin, Chinese Academy of Sciences, China
Maode Ma, Nanyang Technological University, Singapore
Nadia Nedjah, State University of Rio de Janeiro, Brazil
Lorna Uden, Staffordshire University, UK
Yiming Chen, Yanshan University, China
Changcun Li, Hebei Polytechnic University, China
Zhijiang Wang, Hebei Polytechnic University, China
Guohuan Lou, Hebei Polytechnic University, China
Jixian Xiao, Hebei Polytechnic University, China
Xinghuo Wan, Hebei Polytechnic University, China
Chunying Zhang, Hebei Polytechnic University, China
Dianchuan Jin, Hebei Polytechnic University, China

Publicity Chairs
Aimin Yang, Hebei Polytechnic University, China
Xilong Qu, Hunan Institute of Engineering, China

Publication Chairs
Yuhang Yang, Shanghai Jiao Tong University, China

Financial Chair
Jincai Chang, Hebei Polytechnic University, China

Local Arrangement Committee
Lihong Li, Hebei Polytechnic University, China
Shaohong Yan, Hebei Polytechnic University, China
Yamian Peng, Hebei Polytechnic University, China
Lichao Feng, Hebei Polytechnic University, China
Yuhuan Cui, Hebei Polytechnic University, China

Secretaries
Kaili Wang, Hebei Polytechnic University, China
Jingguo Qu, Hebei Polytechnic University, China
Yafeng Yang, Hebei Polytechnic University, China
Program/Technical Committee

Yuan Lin, Norwegian University of Science and Technology, Norway
Yajun Li, Shanghai Jiao Tong University, China
Yanliang Jin, Shanghai University, China
Mingyi Gao, National Institute of AIST, Japan
Yajun Guo, Huazhong Normal University, China
Haibing Yin, Peking University, China
Jianxin Chen, University of Vigo, Spain
Miche Rossi, University of Padova, Italy
Ven Prasad, Delft University of Technology, Netherlands
Mina Gui, Texas State University, USA
Nils Asc, University of Bonn, Germany
Ragip Kur, Nokia Research, USA
On Altintas, Toyota InfoTechnology Center, Japan
Suresh Subra, George Washington University, USA
Xiyin Wang, Hebei Polytechnic University, China
Dianxuan Gong, Hebei Polytechnic University, China
Chunxiao Yu, Yanshan University, China
Yanbin Sun, Beijing University of Posts and Telecommunications, China
Guofu Gui, CMC Corporation, China
Haiyong Bao, NTT Co., Ltd., Japan
Xiwen Hu, Wuhan University of Technology, China
Mengze Liao, Cisco China R&D Center, China
Yangwen Zou, Apple China Co., Ltd., China
Liang Zhou, ENSTA-ParisTech, France
Zhanguo Wei, Beijing Forestry University, China
Hao Chen, Hunan University, China
Lilei Wang, Beijing University of Posts and Telecommunications, China
Xilong Qu, Hunan Institute of Engineering, China
Duolin Liu, ShenYang Ligong University, China
Xiaozhu Liu, Wuhan University, China
Yanbing Sun, Beijing University of Posts and Telecommunications, China
Yiming Chen, Yanshan University, China
Hui Wang, University of Evry, France
Shuang Cong, University of Science and Technology of China, China
Haining Wang, College of William and Mary, USA
Zengqiang Chen, Nankai University, China
Dumisa Wellington Ngwenya, Illinois State University, USA
Hu Changhua, Xi'an Research Institute of Hi-Tech, China
Juntao Fei, Hohai University, China
Zhao-Hui Jiang, Hiroshima Institute of Technology, Japan
Michael Watts, Lincoln University, New Zealand
Tai-hon Kim, Defense Security Command, Korea
Muhammad Khan, Southwest Jiaotong University, China
Seong Kong, The University of Tennessee, USA
Worap Kreesuradej, King Mongkut's Institute of Technology Ladkrabang, Thailand
Uwe Kuger, Queen's University Belfast, UK
Xiao Li, Cinvestav-IPN, Mexico
Stefa Lindstaedt, Division Manager Knowledge Management, Austria
Paolo Li, Polytechnic of Bari, Italy
Tashi Kuremoto, Yamaguchi University, Japan
Chun Lee, Howon University, Korea
Zheng Liu, Nagasaki Institute of Applied Science, Japan
Michiharu Kurume, National College of Technology, Japan
Sean McLoo, National University of Ireland, Ireland
R. McMenemy, Queen's University Belfast, UK
Xiang Mei, The University of Leeds, UK
Cheol Moon, Gwangju University, Korea
Veli Mumcu, Technical University of Yildiz, Turkey
Nin Pang, Auckland University of Technology, New Zealand
Jian-Xin Peng, Queen's University Belfast, UK
Lui Piroddi, Technical University of Milan, Italy
Girij Prasad, University of Ulster, UK
Cent Leung, Victoria University of Technology, Australia
Jams Li, University of Birmingham, UK
Liang Li, University of Sheffield, UK
Hai Qi, University of Tennessee, USA
Wi Richert, University of Paderborn, Germany
Meh shafiei, Dalhousie University, Canada
Sa Sharma, University of Plymouth, UK
Dong Yue, Huazhong University of Science and Technology, China
YongSheng Ding, Donghua University, China
Yuezhi Zhou, Tsinghua University, China
Yongning Tang, Illinois State University, USA
Jun Cai, University of Manitoba, Canada
Sunil Maharaj Sentech, University of Pretoria, South Africa
Mei Yu, Simula Research Laboratory, Norway
Gui-Rong Xue, Shanghai Jiao Tong University, China
Zhichun Li, Northwestern University, China
Lisong Xu, University of Nebraska-Lincoln, USA
Wang Bin, Chinese Academy of Sciences, China
Yan Zhang, Simula Research Laboratory and University of Oslo, Norway
Ruichun Tang, Ocean University of China, China
Wenbin Jiang, Huazhong University of Science and Technology, China
Xingang Zhang, Nanyang Normal University, China
Qishi Wu, University of Memphis, USA
Jalel Ben-Othman, University of Versailles, France
Table of Contents – Part I
Trusted and Pervasive Computing

A Novel Encryption Protocol for Mobile Data Synchronization Based on SyncML . . . 1
Chao Jiang, Meina Song, Ke Liu, and Ke Xu

Identity-Based Sanitizable Signature Scheme in the Standard Model . . . 9
Yang Ming, Xiaoqin Shen, and Yamian Peng

Service-Based Public Interaction Framework for Pervasive Computing . . . 17
Tao Wang, Yunxiang Ling, Guohua Zhang, and Huxiong Liao

Analysis on Farmers' Willingness to Participate in Skill Training for Off-farm Employment and Its Factors – The Case of Ya'an City of Sichuan Province, China . . . 25
Xinhong Fu, Xiang Li, Wenru Zang, and Hong Chi

Bayesian Decision Model Based on Probabilistic Rough Set with Variable Precision . . . 32
Lihong Li, Jinpeng Wang, and Junna Jiang

The Optimization Model of Hospital Sick Beds' Rational Arrangements . . . 40
Yajun Guo, Jinran Wang, Xiaoyun Yue, Shangqin He, and Xiaohua Zhang
Scientific and Engineering Computing

Inverse Eigenvalue Problem for Real Five-Diagonal Matrices with Proportional Relation . . . 48
Mingxing Tian and Zhibin Li
On the Ruin Problem in an Erlang(2) Risk Model with Delayed Claims . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wei Zou and Jie-hua Xie
54
Stability of Euler-Maclaurin Methods in the Numerical Solution of Equation u′(t) = au(t) + a0u([t]) + a1u([t − 1]) . . . 62
Chunyan He and Wanjin Lv
Algorithm for Solving the Complex Matrix Equation AX − XB = C . . . 70
Sen Yu, Wei Cheng, and Lianggui Feng
Research and Application of Fuzzy Comprehensive Evaluation of the Optimal Weight Inverse Problem . . . 78
Lihong Li, Junna Jiang, Zhendong Li, and Xufang Mu

q-Extensions of Gauss' Fifteen Contiguous Relations for 2F1-Series . . . 85
Chuanan Wei and Dianxuan Gong
A New Boussinesq-Based Constructive Method and Application to (2+1) Dimensional KP Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Li Yin and Zhen Wang
93
Some Properties of a Right Twisted Smash Product A*H over Weak Hopf Algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yan Yan, Nan Ji, Lihui Zhou, and Qiuna Zhang
101
Application of New Finite Volume Method (FVM) on Transient Heat Transferring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yuehong Wang, Yueping Qin, and Jiuling Zhang
109
Applications of Schouten Tensor on Conformally Symmetric Riemannian Manifold . . . 117
Nan Ji, Yuanyuan Luo, and Yan Yan
Area of a Special Spherical Triangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xiaohui Hao, Manfu Yan, and Xiaona Lu
123
Parallel and Distributed Computing

A Parallel Algorithm for SVM Based on Extended Saddle Point Condition . . . 129
Xiaorui Li, Congying Han, and Guoping He

CPN Tools' Application in Verification of Parallel Programs . . . 137
Lulu Zhu, Weiqin Tong, and Bin Cheng
The Study on Digital Service System of Community Educational Resources Based on Distributed Technology . . . . . . . . . . . . . . . . . . . . . . . . . Jiejing Cheng, Jingjing Huang, and Xiaoxiao Liu
144
Research into ILRIP for Logistics Distribution Network of Deteriorating Item Based on JITD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xiang Yang, Hanwu Ma, and Dengfan Zhang
152
Overview on Microgrid Research and Development . . . 161
Jimin Lu and Ming Niu

Research on Cluster and Load Balance Based on Linux Virtual Server . . . 169
Qun Wei, Guangli Xu, and Yuling Li
Acceleration of Algorithm for the Reduced Sum of Two Divisors of a Hyperelliptic Curve . . . 177
Xiuhuan Ding
Multimedia Networking and Computing

A Nonlinear Camera Calibration Method Based on Area . . . 185
Wei Li, Xiao-Jun Tong, and Hai-Tao Gan

Cost Aggregation Strategy for Stereo Matching Based on a Generalized Bilateral Filter Model . . . 193
Li Li, Cai-Ming Zhang, and Hua Yan

Stocks Network of Coal and Power Sectors in China Stock Markets . . . 201
Wangsen Lan and Guohao Zhao

A Cross-Layer Algorithm Based on Power Control for Wireless Sensor Networks . . . 209
Yong Ding, Zhou Xu, and Lingyun Tao

The Research of Mixed Programming Auto-Focus Based on Image Processing . . . 217
Shuang Zhang, Jin-hua Liu, Shu Li, Gang Jin, Yu-ping Qin, Jing Xiao, and Tao An

The Optimization of Route Design for Grouping Search . . . 226
Xiujun Wu
AOV Network-Based Experiment Design System for Oil Pipeline-Transportation Craftwork Evaluation . . . . . . . . . . . . . . . . . . . . . . . Guofeng Xu, Zhongxin Liu, and Zengqiang Chen
234
Model and Simulation of Slow Frequency Hopping System Using Signal Progressing Worksystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yuling Li
242
Internet and Web Computing

Insight to Cloud Computing and Growing Impacts . . . 250
Chen-shin Chien and Jason Chien

Using Semantic Web Techniques to Implement Access Control for Web Service . . . 258
Zhengqiu He, Kangyu Huang, Lifa Wu, Huabo Li, and Haiguang Lai

A Quadtree Coding in E-chart . . . 267
Zhong-jie Zhang, Xian Wu, De-peng Zhao, and De-qiang Wang
Study on Applying Vector Representation Based on LabVIEW to the Computing between Direct Lattice and Reciprocal Lattice . . . 274
Yingshan Cui, Xiaoli Huang, Lichuan Song, and Jundong Zhu

Test and Implement of a Parallel Shortest Path Calculation System for Traffic Network . . . 282
Lin Zhang, Zhaosheng Yang, Hongmei Jia, Bin Wang, and Guang Chen

Controlling Web Services and 802.11 Mesh Networks . . . 289
Chen-shin Chien and Jason Chien
Intelligent Computing and Applications

Numeric Simulation for the Seabed Deformation in the Process of Gas Hydrate Dissociated by Depressurization . . . 296
Zhenwei Zhao and Xinchun Shang

Control for Mechatronic Systems . . . 304
Yanjuan Zhang, Chenxia Zhao, Jinying Zhang, and Huijuan Zhao
Optimization of Acylation of Quercetin Using Response Surface Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wei Li, Qianqian Jin, Duanji Wan, Yuzhen Chen, and Ye Li
311
An Empirical Analysis on the Diffusion of Local Telephone Diffusion in China . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhigao Liao, Jiuping Xu, and Guiyun Xiang
318
The Fee-Based Agricultural Information Service: An Analysis of Farmers’ Willingness to Pay and Its Influencing Factors . . . . . . . . . . . . . . . Yong Jiang, Fang Wang, Wenxiu Zhang, and Gang Fu
326
Research on Belt Conveyor Monitoring and Control System . . . 334
Shasha Wang, Weina Guo, Wu Wen, Ruihan Chen, Ting Li, and Fang Fang

Distribution of the Stress of Displacement Field during Residual Slope in Residual Ore Mining Based on the Computer Simulation System . . . 340
Zhiqiang Kang, Yanhu Xu, Fuping Li, Yanbo Zhang, and Ruilong Zhou

Numerical Simulation on Inert Gas Injection Applied to Sealed Fire Area . . . 347
Jiuling Zhang, Xinquan Zhou, Wu Gong, and Yuehong Wang

AUTO CAD Assisted Mapping in Building Design . . . 354
Wenshan Lian and Li Zhu
The OR Data Complement Method for Incomplete Decision Tables . . . 361
Jun Xu, Yafeng Yang, and Baoxiang Liu

Comprehensive Evaluation of Banking Sustainable Development Based on Entropy Weight Method . . . 368
Donghua Wang and Baofeng Li

Fitting with Interpolation to Resolve the Construction of Roads in Mountains . . . 376
Jinran Wang, Xiaoyun Yue, Yajun Guo, Xiaojing Yang, and Yacai Guo

Response Spectrum Analysis of Surface Shallow Hole Blasting Vibration . . . 384
Chao Chen, Yabin Zhang, and Guobin Yan
Evolutionary Computing and Applications

Iterative Method for a Class of Linear Complementarity Problems . . . 390
Longquan Yong
A Hybrid Immune Algorithm for Sequencing the Mixed-Model Assembly Line with Variable Launching Intervals . . . . . . . . . . . . . . . . . . . . Ran Liu, Peihuang Lou, Dunbing Tang, and Lei Yang
399
A Cooperative Coevolution UMDA for the Machine-Part Cell Formation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Qingbin Zhang, Bo Liu, Boyuan Ma, Song Wu, and Yuanyuan He
407
Hardware Implementation of RBF Neural Network on FPGA Coprocessor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhi-gang Yang and Jun-lei Qian
415
Prediction on Development Status of Recycle Agriculture in West China Based on Artificial Neural Network Model . . . . . . . . . . . . . . . . . . . . . Fang Wang and Hongan Xiao
423
An Improved Particle Swarm Optimization Algorithm for Vehicle Routing Problem with Simultaneous Pickup and Delivery . . . . . . . . . . . . . Rong Wei, Tongliang Zhang, and Hui Tang
430
Optimizing Particle Swarm Optimization to Solve Knapsack Problem . . . Yanbing Liang, Linlin Liu, Dayong Wang, and Ruijuan Wu
437
BP Neural Network Sensitivity Analysis and Application . . . . . . . . . . . . . . Jianhui Wu, Gouli Wang, Sufeng Yin, and Liqun Yu
444
Data Distribution Strategy Research Based on Genetic Algorithm . . . . . . Mingjun Wei and Chaochun Xu
450
Water Pollution Forecasting Model of the Back-Propagation Neural Network Based on One Step Secant Algorithm . . . . . . . . . . . . . . . . . . . . . . . Xiaoyun Yue, Yajun Guo, Jinran Wang, Xuezhi Mao, and Xiaoqing Lei
458
Computational Statistics and Applications

Passive Analysis and Control for Descriptor Systems . . . 465
Chunyan Ding, Qin Li, and Yanjuan Zhang
Study of Bird’s Nest Habit Based on Variance Analysis . . . . . . . . . . . . . . . Yong-quan Dong and Cui-lan Mi
473
Finite p-groups Which Have Many Normal Subgroups . . . . . . . . . . . . . . . . Xiaoqiang Guo, Qiumei Liu, Shiqiu Zheng, and Lichao Feng
480
Cubic NURBS Interpolation Curves and Its Convexity . . . . . . . . . . . . . . . . Lijuan Chen, Xiaoxiang Zhang, and Mingzhu Li
488
Optimal Dividend Problem for the Compound Binomial Model with Capital Injections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yali He and Xiuping Zhao
496
The Research of Logical Operators Based on Rough Connection Degree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yafeng Yang, Jun Xu, and Baoxiang Liu
504
A Result Related to Double Obstacle Problems . . . . . . . . . . . . . . . . . . . . . . Xiujuan Xu, Xiaona Lu, and Yuxia Tong
512
Properties of Planar Triangulation and Its Application . . . . . . . . . . . . . . . . Ling Wang, Dianxuan Gong, Kaili Wang, Yuhuan Cui, and Shiqiu Zheng
519
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
527
Table of Contents – Part II
Trusted and Pervasive Computing

Foot Shape Analysis of Adult Male in the China . . . 1
Taisheng Gong, Rui Fei, Jun Lai, and Gaoyong Liang
Intelligent Recognition of Fabric Weave Patterns Using Texture Orientation Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jianqiang Shen, Xuan Zou, Fang Xu, and Zhicong Xian
8
Evaluating of on Demand Bandwidth Allocation Mechanism for Point-to-MultiPoint Mode in WiMAX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ZhenTao Sun and Abdullah Gani
16
A Novel TPEG Application for Location Based Service Using China Multimedia Mobile Broadcasting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lu Lou, Xin Xu, DaRong Huang, and Jun Song
24
Double Reduction Factors Approach to the Stability of Side Slope . . . . . . Yaohong Suo
31
An Integrated and Grid Based Solution of Chemical Applications . . . . . . Qizhi Duan, Zhong Jin, Qian Liu, and Xuebin Chi
40
On the Nullity Algorithm of Tree and Unicyclic Graph . . . . . . . . . . . . . . . Tingzeng Wu and Defu Ma
48
Scientific and Engineering Computing

Fault-Tolerant Service Composition Based on Low Cost Mechanism . . . 56
Yu Dai, Lei Yang, Zhiliang Zhu, and Bin Zhang
Research on Fuzzy Extension Synthesis Evaluation Method for Software Quality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jianli Dong and Ningguo Shi
64
Security Scheme for Managing a Large Quantity of Individual Information in RFID Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Namje Park
72
Empirical Study on Knowledge Management’s Effect on Organizational Effectiveness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yan Ma and Lu Sun
80
Calculation Method of Stability Coefficient of Perilous Rock Based on the Limit Equilibrium Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hongkai Chen and Hongmei Tang
88
The Research of Application on Intelligent Algorithms in Plate Recognition System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Qiang Song and Guofu Ma
96
Ratio Method to the Mean Estimation Using Coefficient of Skewness of Auxiliary Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zaizai Yan and Bing Tian
103
Exact Traveling Wave Solutions of Time-Dependent Ginzburg-Landau Theory for Atomic Fermi Gases Near the BCS-BEC Crossover . . . . . . . . . Changhong Guo, Shaomei Fang, and Xia Wang
111
Improved Support Vector Machine Multi-classification Algorithm . . . 119
Yanwei Zhu, Yongli Zhang, Shufei Lin, Xiujuan Sun, Qiuna Zhang, and Xiaohong Liu

VoD System: Providing Effective Peer-to-Peer Environment for an Improved VCR Operative Solutions . . . 127
R. Arockia Xavier Annie and P. Yogesh
Parallel and Distributed Computing

Application of the Location and Tracking System Based on Cricket . . . 135
Wei Qiu
Application of Orthogonal Experiments and Variance Analysis in Optimization of Crash Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhonghao Bai, Qianbin Zhang, Zheng Xu, and Libo Cao
142
An Improved Text Retrieval Algorithm Based on Suffix Tree Similarity Measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Cheng-hui Huang, Jian Yin, and Dong Han
150
Land-Use Change and Socio-economic Driving Forces Based on Nanchong City . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Youhan Wang
158
Human Motion Classification Using Transform . . . 165
Qing Wei, Hao Zhang, Haiyong Zhao, and Zhijing Liu

Research on the Optimal Transit Route Selection Model and Automatic Inquiry System . . . 172
Jianli Cao
Optimal Control Algorithm of Nonlinear State-Delay System . . . 180
Ji Sun and Huai Liu
Multimedia Networking and Computing

Linguistic Variable Ontology and Its Application to Fuzzy Semantic Retrieval . . . 188
Jun Zhai, Meng Li, and Kaitao Zhou

Application of Cognitive Psychology in Web-Based Instruction . . . 196
Caiyun Gao and Feifei Wang

"Trucks Trailer Plus" Fuel Consumption Model and Energy-Saving Measures . . . 204
Zhi-zhong Li, Min-ye Chen, and Hong-guang Yao

New Magneto-Elastic Sensor Signal Test and Application . . . 212
Lei Chen, Xiangyu Li, and Tangsheng Yang

Strategies Prediction and Combination of Multi-strategy Ontology Mapping . . . 220
Rujuan Wang, Jingyi Wu, and Lei Liu
Solving Numerical Integration by Particle Swarm Optimization . . . . . . . . Liangdong Qu and Dengxu He
228
Study on Method of Web Content Mining for Non-XML Documents . . . . Jianguo Chen, Hao Chen, and Jie Guo
236
An Integrated Parallel System for Rock Failure Process Analysis Using PARDISO Solver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Y.B. Zhang, Z.Z. Liang, T.H. Ma, and L.C. Li
244
Internet and Web Computing

Heuristics Backtracking and a Typical Genetic Algorithm for the Container Loading Problem with Weight Distribution . . . 252
Luiz Jonatã Pires de Araújo and Plácido Rogério Pinheiro
Energy Based Coefficient Selection for Digital Watermarking in Wavelet Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Fouzia Jabeen, Zahoor Jan, Arfan Jaffar, and Anwar M. Mirza
260
The Maximum Likelihood Method of Calculation of Reliability Index of Engineering Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Haibin Chen, Xiaojun Tong, and Yonghui Zhang
268
Inverse Eigenvalue Problem for Real Symmetric Five-Diagonal Matrix . . Lichao Feng, Ping Li, Dianxuan Gong, Linfan Li, Aimin Yang, and Jingguo Qu
275
Stress and Deflection Analysis of a Complicated Frame Structure with Special-Shaped Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yingli Liu, Teliang Yan, and Chunmiao Li
282
Theoretical Studies on the Proton Transfer through Water Bridges in Hydrated Glycine Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xiangjun Meng, Hongli Zhao, and Xingsong Ju
289
Study on Deformation Failure and Control Strategy for Deep Large Span Soft Rock Roadway . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhanjin Li and Xiaolei Wang
297
Intelligent Computing and Applications

Arithmetic Processing of Image of Weld Seam Based on Morphological Filtering . . . Ping Huo, Xiang-yang Li, and Wei-chi Pei
305
The Comparative Study and Improvement of Several Important Attribute Significance Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Baoxiang Liu, Qiangyan Liu, and Chenxia Zhao
312
An Approximate Reduction Algorithm Based on Conditional Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Baoxiang Liu, Ying Li, Lihong Li, and Yaping Yu
319
B-Spline Method for Solving Boundary Value Problems of Linear Ordinary Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jincai Chang, Qianli Yang, and Chunfeng Liu
326
Configuration Issues of Cashier Staff in Supermarket Based on Queuing Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Baofeng Li and Donghua Wang
334
Superconvergence Analysis of Anisotropic Finite Element Method for a Kind of Nonlinear Degenerate Wave Equation . . . . . . . . . . . . . . . . . . . . . . . Zhiyan Li, Linghui Liu, Jingguo Qu, and Yuhuan Cui
341
GL Index Calculation and Application in Intra-industry Trade . . . Ning Zheng, Wenxue Huang, and Xiaoguang Xue
348
Kinetic Study on Hydrogenation of Propiophenone Catalyzed by Chitosan-Palladium . . . Hong-Lei Wang, Dan-dan Jia, Lu Liu, Yue-hui Wang, and Hong-yan Tian
354
Improvement of PAML Algorithm and Application . . . . . . . . . . . . . . . . . . . Dianchuan Jin and Zengwei Niu
360
On the Optimal Control Problem for New Business . . . . . . . . . . . . . . . . . . . Zhendong Li, Qingbin Meng, Yang Liu, and Yanru Zhang
367
Research of Tikhonov Regularization Method for Solving the First Type Fredholm Integral Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yamian Peng, Lichao Feng, Ying Yan, and Huancheng Zhang
375
The Research of Tree Topology Model for Growth of Natural Selection and Application in Geographical Profile for Criminal . . . . . . . . . . . . . . . . . Aimin Yang, Ruijuan Wu, Haiming Wu, and Xiaoli Liu
383
Research on a Class of Ordinary Differential Equations and Application in Metallurgy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chunfeng Liu, Haiming Wu, and Jincai Chang
391
Evolutionary Computing and Applications

The Study and Application of China’s Population Growth . . . Jingguo Qu, Yuhuan Cui, Yilong Lei, and Huancheng Zhang
398
Centroid-Based Algorithm for Extracting Feature Points of Digital Cameras’ Position . . . Guangli Xu, Zhijiang Wang, and Guanchen Zhou
406
Self-study Control of Blast Furnace Material Flux Valve . . . Kaili Wang and Xuebing Han
413
Experimental Study of Utilizing Width of Barefoot Print to Infer the Body Weight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yi Gao
420
Strong Convergence of Composite Iterative Schemes for Common Zeros of a Finite Family of Accretive Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . Huancheng Zhang, Yongfu Su, and Jinlong Kang
428
Application of Mathematical Model in Road Traffic Control at Circular Intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhijiang Wang, Kaili Wang, and Huancheng Zhang
436
Research and Application of Expected Utility Function Model in the Teachers’ Financial Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yunhua Qu
444
Research on Web Articles Retrieval Based on N-VSM . . . . . . . . . . . . . . . . . Hongcan Yan, Xiaobin Liu, and Jian Wang
452
Information Search Model Based on Ontology and Context Aware Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jianxin Gao and Hongmei Yang
460
Computational Statistics and Applications

Dynamics of Multibody Systems with Friction-Affected Sliding Joints . . . Li Fu, Xinghua Ma, Yunchuan Liu, Zhihua Li, Yu Zheng, and Yanhu Xu
468
Automatic Building Extraction Based on Region Growing, Mutual Information Match and Snake Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Gang Li and Chunhua Chen
476
Research and Exploiture of the Automatic Control System in Sinter Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xuebing Han and Kaili Wang
484
Nonconforming Finite Element Method for Nonlinear Parabolic Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hongwu Yin, Buying Zhang, and Qiumei Liu
491
Conservation of the Fishery through Optimal Taxation: A Predator-Prey Model with Beddington-De Angelis Functional Response . . . Cui-fang Wang and Ying Yu
499
Spatial Shift-Share Method: A New Method in the Study of Regional Industrial Structures . . . Shibing You, Yanyan Chen, Tao Yang, and Bingnan Huang
507
The Comparative Study of the Circumstances of Plantar Pressure at Different Speed of Walking by Utilizing the Plantar Pressure Measurement System . . . Yi Gao
515
A New Attribute Reduction Algorithm Based on Classification Closeness Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Cuilan Mi, Yafeng Yang, and Jun Xu
523
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
531
A Novel Encryption Protocol for Mobile Data Synchronization Based on SyncML

Chao Jiang1, Meina Song2, Ke Liu2, and Ke Xu2

1 Department of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing, P.R. China
2 Department of Computer Science, Beijing University of Posts and Telecommunications, Beijing, P.R. China

[email protected], [email protected], [email protected], [email protected]
Abstract. SyncML provides an identity authentication service for the security of the data synchronization protocol at the server layer and the database layer. However, after authentication the data is transmitted as cleartext XML; it is not encrypted and can easily be tapped. In order to enhance its security, this paper proposes a novel encrypted data synchronization protocol that uses hierarchical encryption algorithms. Finally, an example is presented to illustrate the protocol and architecture. Keywords: SyncML, public-key algorithm, RSA, symmetric algorithm, AES.
1 Introduction

The modernization of telecommunications has significantly upgraded communication between people in breadth, frequency and convenience. People store their own data on many distributed devices, for example computers, mobile phones and PDAs. This data needs to be kept consistent across the different devices, which requires data synchronization. Currently the most widely used data synchronization protocol is SyncML. The SyncML data synchronization protocol is an open international industry standard developed by OMA to achieve multi-platform, multi-device data synchronization and information exchange across the network. It is used by applications on PDAs, cell phones, PCs and notebooks to store and exchange data with applications on the network server. Different applications and devices are capable of data synchronization and information exchange by using the SyncML standard. SyncML is important for the development of mobile value-added services and 3G networks, especially in the field of mobile office. China Mobile's Personal Information Manager service uses this protocol. With the widespread use of the SyncML protocol, its security requirements become more important.
2 SyncML Protocol and Architecture

Figure 1 describes the architecture of the SyncML protocol [1].

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 1–8, 2010. © Springer-Verlag Berlin Heidelberg 2010
2
C. Jiang et al.
Fig. 1. The SyncML protocol architecture

Application A in Figure 1 represents a network service which can provide data synchronization services for applications on other network devices, such as application B. Such services and devices are connected using a common network transport protocol, for example HTTP. In the figure, the features of the synchronization engine are placed entirely in the data synchronization server, although some clients can actually provide part of the synchronization engine's features. The synchronization server agent and the client agent communicate with each other through the SyncML Synchronization Protocol and the SyncML Representation Protocol interfaces.
For the security of the synchronization protocol, SyncML provides an identity authentication service at the server layer and the database layer, in which the server-level authentication is required and the database-layer authentication is optional [1]. However, after authentication the data is transmitted as XML without encryption, so it can easily be tapped and tampered with by an attacker. To enhance security, ciphertext transmission is required on both sides, and an encryption algorithm is needed to encrypt the data before it is transmitted. In this case, even if the data is tapped, the eavesdropper cannot decrypt it in a reasonable period of time. However, the SyncML standard does not have a corresponding encryption design; therefore we need to extend SyncML to ensure its safety.
3 Encryption System

The basic task of an encryption system is to provide users with a communication channel that ensures confidentiality and authenticity; ensuring these two properties is a central objective of cryptographic protocols [2]. Classically, encryption turns plaintext into one of two kinds of ciphertext: substitution ciphertext and transposition ciphertext. In a substitution cipher the original characters are replaced by other characters, hiding each character behind another symbol. A transposition cipher keeps the original characters but changes their order. An encryption algorithm is the process that converts plaintext into ciphertext.
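The two classical families — replacing characters (substitution) and reordering them (transposition) — can be illustrated with toy ciphers (the helper names below are ours, and neither cipher is secure):

```python
# Toy ciphers illustrating the two classical families (illustration only).

def substitution_encrypt(plaintext, shift=3):
    """Substitution: replace each lowercase letter with another (Caesar shift)."""
    return "".join(
        chr((ord(c) - ord("a") + shift) % 26 + ord("a")) if c.islower() else c
        for c in plaintext
    )

def transposition_encrypt(plaintext, cols=4):
    """Transposition: keep the characters but reorder them column by column."""
    padded = plaintext + "x" * (-len(plaintext) % cols)
    return "".join(padded[i::cols] for i in range(cols))

print(substitution_encrypt("attack"))         # -> "dwwdfn" (letters replaced)
print(transposition_encrypt("attackatdawn"))  # -> "acdtkatawatn" (letters reordered)
```

Note that the transposition output contains exactly the same characters as the input, only rearranged, while the substitution output shares none of them.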
The basic design principle is that all algorithms must be open and only the key is kept secret [3]. There are two kinds of encryption algorithms: symmetric encryption and public-key encryption [4]. A symmetric encryption algorithm uses the same key to encrypt and decrypt. The key must be agreed in advance by both parties through other means, as with classic passwords. There is a high possibility that the key is leaked during the distribution process, which greatly inconveniences the application of symmetric encryption. Widely used symmetric encryption algorithms include 3DES, AES, RC4 and RC5, which appear in different products. In 1976, Diffie and Hellman of Stanford University proposed the concept of public-key encryption. Its main feature is that the algorithm has two keys, E and D. E is the encryption key, which is also the public key; D is the decryption key, which is also the private key. The public key is open to the whole world, including the attacker, but the private key is kept confidential. A public-key scheme satisfies D(E(P)) = P, and deriving D from E is computationally infeasible. Public-key encryption allows two people who have never met before to communicate secretly, as long as each knows the other's public key. The most successful public-key encryption algorithm is RSA, put forward by Rivest, Shamir and Adleman of MIT in 1978 based on number theory [5]. RSA has been widely used for distributing keys, digital signatures and public-key certificates. In the RSA algorithm both sides have their own encryption and decryption keys, and there is no key distribution process. RSA is in essence a substitution on large blocks: to ensure the security of encryption, it is necessary to select two primes of at least 512 bits each, so the encryption block length is 1024 bits. This is much longer than the 64-bit blocks of DES or the 128-bit blocks of AES, so RSA encryption is very slow compared to symmetric encryption and unsuitable for encrypting large amounts of data.
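As an illustration of D(E(P)) = P, here is RSA with textbook toy parameters (far too small to be secure — real keys use primes of at least 512 bits):

```python
# Textbook RSA with tiny primes: demonstrates D(E(P)) = P.
p, q = 61, 53
n = p * q                  # public modulus, 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public (encryption) key E
d = pow(e, -1, phi)        # private (decryption) key D = E^-1 mod phi -> 2753

P = 65                     # a plaintext block, must be smaller than n
C = pow(P, e, n)           # encryption E(P) -> 2790
assert pow(C, d, n) == P   # decryption D(E(P)) recovers P
```

The modular inverse via `pow(e, -1, phi)` requires Python 3.8 or later.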
4 Encryption Protocol

This paper designs a protocol that first negotiates a secret session key using the public-key encryption system, and then uses this key for symmetric encryption. The design combines the advantages of both encryption systems. The following notation is used:

Table 1. Notation used in the protocol

    Symbol    Description
    A         Client
    B         Server
    EA        A's public key
    EA(P)     P encrypted with A's public key EA
    DA        A's private key
    DA(P)     P decrypted (signed) with A's private key DA
    H(M)      Hash value of M, computed with MD5
    RA        A 64-bit random sequence chosen by A
    Cert      Public key certificate based on PKI
    KS        The agreed symmetric session key
4.1 Hierarchical Encryption Protocol

The specific protocol flow is shown in Figure 2.
Fig. 2. Encryption protocol flow chart
1) A sends its identity, its public key and a digital signature encrypted with its private key to B.

2) After B receives the message from A, it obtains A's public key EA and DA(H(A, EA)). B recovers H(A, EA) by decrypting DA(H(A, EA)) with EA, then applies the hash algorithm to the received plaintext A and EA and compares the result with the received H(A, EA). If they are the same, the message was not tampered with. B then returns its own public key to A; if the server side has a PKI certificate, the corresponding certificate is returned as well.

3) A uses EB to encrypt the selected key KS, a self-chosen 64-bit random sequence RA and its private-key signature DA(H(KS, RA)), then sends the ciphertext to B.

4) After B receives the message sent by A in step 3, it uses the method described in step 2 to verify the validity of KS and RA. If the verification succeeds, B sends to A the value RA encrypted with EA, a self-chosen 64-bit random sequence RB and its own digital signature.

5) After A receives the message sent by B in step 4, it decrypts it with its private key DA, obtains RA, RB and DB(H(RA, RB)), and verifies validity using the same method as in step 2. Once verified, A compares RA with the sequence sent in step 3; if they are consistent, this proves that B actually received the message of step 3. A then encrypts RB with the selected key KS and sends it to B.

After B receives the message of step 5, it decrypts it with KS to obtain RB and compares RB with the value it saved. If they are consistent, this proves that A actually received the message of step 4. At this point both sides share a common key KS, and the subsequent transmission data can be encrypted with a symmetric encryption algorithm, ensuring the confidentiality of the transmission.
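As a sanity check of the message flow, the five steps can be simulated with toy RSA keys and MD5 digests (an illustrative sketch with our own helper names; real deployments use full-size RSA moduli, certificates and AES):

```python
# Toy simulation of the five-step key agreement (illustration only).
import hashlib
import random

def H(*parts):
    """MD5 digest of the concatenated parts, as an integer."""
    data = "|".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.md5(data).digest(), "big")

def toy_rsa_keypair(p, q, e=17):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    return (e, n), (d, n)          # (public key, private key)

def apply_key(key, x):             # one modular exponentiation: enc/dec/sign/verify
    k, n = key
    return pow(x % n, k, n)

EA, DA = toy_rsa_keypair(61, 53)   # client A
EB, DB = toy_rsa_keypair(67, 71)   # server B

# Step 1: A -> B: identity, public key E_A, and signature D_A(H(A, E_A))
sig1 = apply_key(DA, H("A", EA))
assert apply_key(EA, sig1) == H("A", EA) % EA[1]   # B verifies with E_A

# Step 2: B -> A: B's public key E_B (plus a PKI certificate, omitted here)

# Step 3: A -> B: E_B-encrypted (K_S, R_A, D_A(H(K_S, R_A)))
KS, RA = random.randrange(2, 1000), random.randrange(2, 1000)
sig3 = apply_key(DA, H(KS, RA))
assert apply_key(EA, sig3) == H(KS, RA) % EA[1]    # B verifies the signature

# Step 4: B -> A: E_A-encrypted (R_A, R_B, D_B(H(R_A, R_B)))
RB = random.randrange(2, 1000)
sig4 = apply_key(DB, H(RA, RB))
assert apply_key(EB, sig4) == H(RA, RB) % EB[1]    # A verifies, then checks R_A

# Step 5: A -> B: R_B encrypted under K_S; B compares it with its stored R_B.
# Both sides now share K_S and switch to symmetric (AES) encryption.
```

The signatures are reduced modulo the tiny modulus before signing, which is why the verification compares against `H(...) % n`; with realistic key sizes the digest fits in one block.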
4.2 Protocol Analysis

This protocol uses the RSA algorithm as its public-key encryption algorithm. RSA is the most mature and widely used public-key encryption algorithm, and its patent protection expired in September 2000, so its users do not have to worry about patent fees. This paper uses the MD5 digest algorithm for signature verification [6]. The 128-bit digest generated by MD5 can effectively prevent the birthday attack [7]. The MD5 digest algorithm is also used by SyncML authentication. The symmetric encryption algorithm is AES (Advanced Encryption Standard), also known as the Rijndael algorithm, standardized by the U.S. National Institute of Standards and Technology. AES supports key lengths of 128, 192 and 256 bits and is completely open, with high encryption strength and speed [8]. When the connection between the client and server is established at the beginning, authentication has already been carried out, so A and B know they are indeed communicating with each other. A and B exchange public keys through the messages of steps 1 and 2, and the signatures ensure the transmission is not tampered with by attackers. Through the messages of steps 3, 4 and 5, both sides obtain a shared symmetric key; the plaintext is then encrypted with the agreed key before transmission. Since the public key is only used to encrypt the few messages that determine the symmetric key, the amount of public-key-encrypted data is very limited and this phase is fast [9]. For the later encryption of the large volume of synchronization data, symmetric encryption is used to speed up the encryption. Therefore, this paper integrates the advantages of both the public-key and the secret-key encryption systems, eliminates the key distribution process, and provides faster encryption than pure public-key encryption.
5 An Example of the Encryption Protocol

Here is an example of data that is easy to tap. XML encryption, regardless of how the encryption is performed, can store the encrypted data in one of two ways:

- After encryption, the whole element is replaced with an element named <EncryptedData>.
- After encryption, only the data in the element is replaced and its name remains readable in the document.

The difference is subtle but rather important. For example, suppose your XML document contains a root element called <employee> with a child element called <WrittenWarning> in which details of disciplinary action are stored. If you were sending this XML and wanted the <WrittenWarning> details protected, with approach 1 the <WrittenWarning> element is replaced with an element called <EncryptedData>, and no information can be gathered from the document. With approach 2, however, the <WrittenWarning> element stays and only its data is encrypted. Anyone who intercepted this document might not know the specific details of the disciplinary action, but they will still know that something has happened with that employee. Any attributes on the <WrittenWarning> element are also left unencrypted.
So the approach you take depends on what the data is and how much information you are willing to give away. In .NET v2.0, which approach to take is specified using a Boolean value and can be easily modified [10].

Example of XML Encryption. Below is an example of XML encryption using the asymmetric approach, where the author element in the XML document is replaced with an <EncryptedData> element. Example of an XML document before encryption:
<article>
  <articleinfo>
    <title>XPath Queries on XmlDocument objects in .NET 1.1</title>
    <para>This article covers the basics.</para>
    <author>
      <honorific>Mr.</honorific>
      <firstname>George</firstname>
      <surname>James</surname>
      <email>[email protected]</email>
    </author>
  </articleinfo>
</article>

Example of the XML document after encryption:

<article>
  <articleinfo>
    <title>XPath Queries on XmlDocument objects in .NET 1.1</title>
    <para>This article covers the basics.</para>
    <para>This article does not cover.</para>
    <EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element"
        xmlns="http://www.w3.org/2001/04/xmlenc#">
      <EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#aes256-cbc" />
      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
        <EncryptedKey xmlns="http://www.w3.org/2001/04/xmlenc#">
          <EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-1_5" />
          <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
            <KeyName>session</KeyName>
          </KeyInfo>
          <CipherData>
            <CipherValue>r4f7SI1aZKSvibb…</CipherValue>
          </CipherData>
        </EncryptedKey>
      </KeyInfo>
      <CipherData>
        <CipherValue>sGNhKqcSovipJdOFCFKYEEMRFd…</CipherValue>
      </CipherData>
    </EncryptedData>
  </articleinfo>
</article>

The author element and its children have been replaced with the <EncryptedData> element, which contains a number of other elements that describe the encrypted data, i.e. the encryption algorithms used, the session key used, etc.
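The element-replacement approach (approach 1) can be sketched in a few lines with Python's `xml.etree`; base64 stands in for the real AES/RSA ciphers here — it is an encoding, NOT encryption, and the element names are our own toy example:

```python
# Sketch of approach 1: the sensitive element is removed and replaced by an
# <EncryptedData> element (base64 is a stand-in for a real cipher).
import base64
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<employee><name>George</name>"
    "<WrittenWarning>late twice in May</WrittenWarning></employee>"
)

target = doc.find("WrittenWarning")
payload = ET.tostring(target)          # serialize the whole element, tags included
enc = ET.Element("EncryptedData",
                 {"Type": "http://www.w3.org/2001/04/xmlenc#Element"})
enc.text = base64.b64encode(payload).decode()   # real cipher output goes here

doc.remove(target)                     # the element name disappears as well
doc.append(enc)
print(ET.tostring(doc).decode())
```

After this transformation the serialized document no longer reveals that a <WrittenWarning> element ever existed, which is exactly the property that distinguishes approach 1 from approach 2.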
6 Conclusion and Future Work

There is no definition of data encryption in transmission in the OMA data synchronization standard, so the data synchronization process can easily be tapped, causing the synchronized information to be disclosed. This paper has designed an encryption protocol that can be used in SyncML data synchronization. It combines public-key encryption and symmetric encryption to prevent a passive attacker from eavesdropping on the data. Cipher block chaining (CBC) mode or cipher feedback (CFB) mode can also be used in this transmission protocol, which would further enhance its security.
Acknowledgments This work is supported by the National Key project of Scientific and Technical Supporting Programs of China under Grant Nos. 2008BAH24B04, 2006BAH02A03; the Program for New Century Excellent Talents in University No. NCET-08-0738; and the Innovation Technology Star Program of Beijing under Grant No. 2007A045.
References

1. OMA: DS Protocol, Approved Version 1.2.1 (August 2007)
2. Coron, J.-S.: IEEE Security & Privacy 4(1), 70–73 (2006). DOI 10.1109/MSP.2006.29
3. Kerckhoffs, A.: La Cryptographie Militaire. Journal des Sciences Militaires 9, 5–38 (1883)
4. Tanenbaum, A.S.: Computer Networks. Tsinghua University Press, Beijing (2004)
5. Rivest, R.L., Shamir, A., Adleman, L.: A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM 21(2), 120–126 (1978)
6. Rivest, R.L.: The MD5 Message-Digest Algorithm. RFC 1321 (April 1992)
7. Yuval, G.: How to Swindle Rabin. Cryptologia 3, 187–190 (1979)
8. Daemen, J., Rijmen, V.: The Design of Rijndael. Springer, Berlin (2002)
9. RFC 1321: The MD5 message-digest algorithm (1992)
10. SyncML Synchronization Protocol, version 1.0 (2001)
11. Design of Data Synchronization Service System Based on SyncML/J2EE (2004)
Identity-Based Sanitizable Signature Scheme in the Standard Model

Yang Ming1,2, Xiaoqin Shen3, and Yamian Peng4

1 School of Information Engineering, Chang'an University, Xi'an, Shaanxi 710064, China
2 Shaanxi Road Traffic Detection and Equipment Engineering Research Center, Xi'an, Shaanxi 710064, China
3 School of Sciences, Xi'an University of Technology, Xi'an, Shaanxi 710054, China
4 College of Science, Hebei Polytechnic University, Tangshan, Hebei 063009, China

[email protected], [email protected], [email protected]
Abstract. The sanitizable signature is a powerful and practical tool, quite useful in governmental or military offices, where there is a dilemma between the disclosure requirements of documents and the protection of private secrets. In this paper, an identity-based sanitizable signature scheme in the standard model (without random oracles) is proposed by combining identity-based cryptography and sanitizable signatures. We also formally give the model of identity-based sanitizable signatures in the standard model. Finally, a full security proof for the proposed scheme is provided according to our security model. Keywords: Sanitizable signature, ID-based signature, Standard model, Bilinear pairing.
1 Introduction
In 1984, Shamir [1] first proposed the idea of identity-based (simply ID-based) public key cryptography (ID-PKC) to simplify the key management procedure of traditional certificate-based public key cryptography. The main idea of ID-PKC is that the user's public key can be calculated directly from his/her identity, such as an email address, rather than being extracted from a certificate issued by a certificate authority (CA). Private keys are generated for the users by a trusted third party, called the Private Key Generator (PKG), using some master key related to the global parameters of the system. Since Boneh and Franklin [2] proposed a practical ID-based encryption scheme, many papers have been published in this area, such as [3-6]. The digital signature has been an essential tool in the E-society, designed to prevent alteration of a signed digital document. However, applications like E-Government, E-Education and E-Health systems need appropriate alteration of some signed documents in order to hide sensitive information while still protecting the integrity of the document. For example, in the disclosure of official

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 9–16, 2010. © Springer-Verlag Berlin Heidelberg 2010
10
Y. Ming, X. Shen, and Y. Peng
information, national secret information is masked when an official document is sanitized so that its nonsensitive information can be disclosed when it is demanded by a citizen. If this disclosure is done digitally with traditional digital signature schemes, the citizen cannot verify the disclosed information correctly, because the information has been changed to prevent the leakage of sensitive information. That is, with current digital signature schemes, the confidentiality of official information is incompatible with the integrity of that information. This is called the digital document sanitizing problem in [7]. Similar solutions for this problem have been proposed in [8] as content extraction signatures and in [9] as redactable signatures. In 2005, a sanitizable signature scheme was introduced by Ateniese et al. in [10], which can alter the signed document instead of hiding it. The main goal of sanitizable signatures is to protect the confidentiality of a specified part of the document while ensuring the integrity of the document. A sanitizable signature scheme is a new kind of digital signature which allows a designated party, called the sanitizer, to hide certain parts of the original document after the document is signed, without interacting with the signer. The verifier confirms the integrity of the disclosed parts of the sanitized document from the signature and the sanitized document. In other words, a sanitizable signature scheme allows a semi-trusted sanitizer to modify designated parts of the document and produce a valid signature on the legitimately modified document without any interaction with the signer. These designated portions of the document are blocks or segments explicitly indicated as mutable; the sanitizer can derive a valid signature from a prior valid signature only if it modifies these portions and no other parts of the message. Following these works, several authors [11-20] proposed various sanitizable signature schemes with different properties.
To the best of our knowledge, all existing sanitizable signature schemes are based on the Public Key Infrastructure setting; there is no construction of an identity-based sanitizable signature (IDSS) scheme in the literature. However, it would be of great practical interest to design an IDSS scheme. In this paper, motivated by Waters' signature [6, 21] and sanitizable signatures [17, 20], we first formally define the model of IDSS and propose the first ID-based sanitizable signature scheme in the standard model. Security analysis shows that our scheme satisfies the security requirements. The rest of the paper is organized as follows: Section 2 briefly describes the necessary preliminaries. Section 3 presents the syntax and security model of IDSS. Our IDSS scheme is proposed in Section 4. The security proof of the proposed scheme is given in Section 5. Finally, we conclude the paper.
2 Preliminaries

2.1 Bilinear Pairings

Let G1 and G2 be two multiplicative cyclic groups of prime order q and let g be a generator of G1. A map e : G1 × G1 → G2 is said to be an admissible bilinear pairing if it has the following properties:
– Bilinearity: For all u, v ∈ G1 and a, b ∈ Zq, e(u^a, v^b) = e(u, v)^{ab}.
– Non-degeneracy: e(g, g) ≠ 1.
– Computability: There exists an efficient algorithm to compute e(u, v) for all u, v ∈ G1.

2.2 Complexity Assumptions

Computational Diffie-Hellman (CDH) Problem. Given g, g^a, g^b ∈ G1 for unknown a, b ∈ Z*q, compute g^{ab}. The success probability of a polynomial-time algorithm A in solving the CDH problem is denoted by

    Succ_A^CDH = Pr[A(g, g^a, g^b) = g^{ab}]

Definition 1. The computational (t, ε) CDH assumption holds if no t-time adversary has success probability at least ε in solving the CDH problem.
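The pairing properties can be sanity-checked in a toy model in which a G1 element g^x is represented by its exponent x mod q, so that e(g^x, g^y) becomes x·y mod q in G2. This mirrors only the algebra — real admissible pairings are built on elliptic-curve groups, and the toy values below are ours:

```python
# Toy model of a symmetric bilinear pairing on exponents (illustration only).
q = 1009                      # a small prime group order (toy value)

def pairing(x, y):            # e(g^x, g^y), returned as a G2 exponent
    return (x * y) % q

u, v = 123, 456               # two G1 elements g^u and g^v
a, b = 7, 31

# Bilinearity: e(u^a, v^b) = e(u, v)^(ab)
assert pairing(a * u % q, b * v % q) == pairing(u, v) * a * b % q

# Non-degeneracy: e(g, g) is not the identity of G2 (exponent 0)
assert pairing(1, 1) != 0
```

In this model the CDH problem is trivial, which is precisely why the model is only useful for checking algebraic identities, never security.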
3 Formal Model of Identity-Based Sanitizable Signature Scheme

3.1 Syntax

An identity-based sanitizable signature scheme enables the authenticity of a disclosed document to be verified in a four-party model consisting of a private key generator (PKG), a signer, a sanitizer and a verifier. An IDSS scheme consists of the algorithms (Setup, Extract, Sign, Sanitize, Verify).
3.2 Security Model
According to [11], there are several security requirements that sanitizable signatures need to satisfy.

Unforgeability. No one should be able to forge the signer's or the sanitizer's signature (this can be thought of as an outsider attack).

Indistinguishability. No one should be able to distinguish whether a message/signature pair has been sanitized or not.

Immutability. The sanitizer should not be able to produce a valid signature for a message in which it has altered parts other than those it is allowed to sanitize (this can be thought of as an insider attack).
4 The Proposed Scheme

In this section, we give our proposed IDSS scheme in the standard model. Our scheme is inspired by the schemes in [6, 17, 20, 21] and is described as follows.
Setup. Given a security parameter k, PKG chooses two cyclic groups G1 and G2 of prime order q, a generator g of G1 and an admissible bilinear pairing e : G1 × G1 → G2. It then chooses a random value α ∈ Z*q, computes g1 = g^α and selects g2 ∈ G1. Furthermore, PKG picks u′, v′ ∈ G1 and two vectors u = (ui), v = (vi) of length n, whose entries are random elements from G1. The system parameters are params = (G1, G2, e, q, g, g1, g2, u′, v′, u, v) and the master key is g2^α.

Extract. Let ID be the n-bit identity (ID1, ···, IDn). For a user with identity ID, the private key dID is generated as follows. PKG randomly picks rID ∈ Z*q and computes

    dID = (d1, d2) = (g2^α (u′ ∏_{i=1}^n ui^{IDi})^{rID}, g^{rID})
Sign. Let M be the n-bit message (m1, ···, mn) and let Ks be the set of indices that the sanitizer is permitted to modify. The signer randomly chooses r ∈ Z*q and computes

    σ1 = d1 (v′ ∏_{i=1}^n vi^{mi})^r = g2^α (u′ ∏_{i=1}^n ui^{IDi})^{rID} (v′ ∏_{i=1}^n vi^{mi})^r,
    σ2 = d2 = g^{rID},    σ3 = g^r
The resulting signature is σ = (σ1, σ2, σ3). The signer then sends the secret information vi^r (i ∈ Ks) to the sanitizer via a secure channel.

Sanitize. Let M̄ = (m̄1, ···, m̄n) be the message that differs from M = (m1, ···, mn) at the indices K ⊆ Ks. Let K′ = {i ∈ K : mi = 0, m̄i = 1} and K″ = {i ∈ K : mi = 1, m̄i = 0}, so that K′ ∪ K″ = K and K′ ∩ K″ = ∅. When receiving σ = (σ1, σ2, σ3) and vi^r (i ∈ Ks) from the signer, the sanitizer does as follows:

– check the validity of the signature σ;
– choose r̄ID, r̄ ∈ Z*q and compute

    σ̄1 = σ1 (u′ ∏_{i=1}^n ui^{IDi})^{r̄ID} (∏_{i∈K′} vi^r) (∏_{i∈K″} vi^r)^{-1} (v′ ∏_{i=1}^n vi^{m̄i})^{r̄},
    σ̄2 = σ2 g^{r̄ID},    σ̄3 = σ3 g^{r̄}
The resulting sanitized signature is σ̄ = (σ̄1, σ̄2, σ̄3).

Verify. The verifier accepts the signature if and only if the following equality holds:

    e(σ1, g) = e(g1, g2) e(u′ ∏_{i=1}^n ui^{IDi}, σ2) e(v′ ∏_{i=1}^n vi^{mi}, σ3)

Note that the Verify algorithm is the same for a sanitized signature and a non-sanitized signature.
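The algebra of Sign, Sanitize and Verify can be checked in a toy model in which each G1 element g^x is stored as its exponent x mod q, so group multiplication becomes addition, exponentiation becomes multiplication, and the pairing e(g^x, g^y) becomes x·y mod q. The parameters and variable names below are ours; this is a correctness sketch, not a secure implementation:

```python
# Exponent-space sanity check of the scheme (toy values, illustration only).
import random

q, n = 10007, 8
rnd = lambda: random.randrange(1, q)

# Setup: g1 = g^alpha, g2, the constants u', v', and the vectors u, v
alpha, g2 = rnd(), rnd()
g1 = alpha
u_p, v_p = rnd(), rnd()
u = [rnd() for _ in range(n)]
v = [rnd() for _ in range(n)]

def U(ID):  # exponent of u' * prod_i u_i^{ID_i}
    return (u_p + sum(ui for ui, bit in zip(u, ID) if bit)) % q

def V(M):   # exponent of v' * prod_i v_i^{m_i}
    return (v_p + sum(vi for vi, bit in zip(v, M) if bit)) % q

# Extract: d_ID = (g2^alpha (u' prod u_i^{ID_i})^{r_ID}, g^{r_ID})
ID = [random.randrange(2) for _ in range(n)]
r_id = rnd()
d1 = (alpha * g2 + r_id * U(ID)) % q

# Sign message M with fresh randomness r
M = [0, 1, 1, 0, 1, 0, 0, 1]
r = rnd()
s1, s2, s3 = (d1 + r * V(M)) % q, r_id, r

def verify(ID, M, s1, s2, s3):
    # e(s1, g) == e(g1, g2) e(u' prod u_i^{ID_i}, s2) e(v' prod v_i^{m_i}, s3)
    return s1 % q == (g1 * g2 + U(ID) * s2 + V(M) * s3) % q

assert verify(ID, M, s1, s2, s3)

# Sanitize: flip the bits at the permitted indices Ks = {1, 2}
Mb = M[:]
Mb[1], Mb[2] = 1 - M[1], 1 - M[2]
rb_id, rb = rnd(), rnd()
delta = sum(r * v[i] * (Mb[i] - M[i]) for i in (1, 2))  # multiply/divide by v_i^r
sb1 = (s1 + rb_id * U(ID) + delta + rb * V(Mb)) % q
sb2, sb3 = (s2 + rb_id) % q, (s3 + rb) % q

assert verify(ID, Mb, sb1, sb2, sb3)  # sanitized signature still verifies
```

The check makes the re-randomization visible: the v_i^r corrections convert the message part of σ1 from (v′∏vi^{mi})^r to (v′∏vi^{m̄i})^r, and the extra r̄ID, r̄ factors match the updates to σ2 and σ3.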
5 Security Proof of the Proposed Scheme
Theorem 1. (Unforgeability) Assume there is an adversary A that is able to break the unforgeability of our scheme with an advantage ε when running in a time t and making at most q_e private key extraction queries and q_s signature queries. Then there exists an algorithm B that can solve an instance of the CDH problem in a time t′ = t + (5q_e + (2n + 4)q_s)t_e with an advantage ε′ = ε / (16(q_e + q_s)q_s(n + 1)²), where t_e denotes the time of an exponentiation in G1 and n denotes the bit length of a message.

Proof. Assume that there is a polynomially bounded adversary A that is able to break the unforgeability of our scheme; then there exists an algorithm B that can compute g^{ab} with a non-negligible advantage when receiving a random CDH problem instance (g, g^a, g^b). B runs A as a subroutine, acts as the challenger in the unforgeability game, and interacts with A as described below. Our proof is based on Waters' idea in [22].

Setup. The algorithm B randomly chooses the following elements:

– two integers 0 ≤ l_u ≤ q and 0 ≤ l_m ≤ q, with l_u(n + 1) < q and l_m(n + 1) < q.
– two integers 0 ≤ k_u ≤ n and 0 ≤ k_m ≤ n.
– an integer x′ ∈ Z_{l_u} and an n-dimensional vector (x_1, ···, x_n) ∈ Z_{l_u}^n.
– an integer y′ ∈ Z_{l_m} and an n-dimensional vector (y_1, ···, y_n) ∈ Z_{l_m}^n.
– an integer z′ ∈ Z_q and an n-dimensional vector (z_1, ···, z_n) ∈ Z_q^n.
– an integer ω′ ∈ Z_q and an n-dimensional vector (ω_1, ···, ω_n) ∈ Z_q^n.
For ease of analysis, we define four functions for the identity ID = (ID_1, ···, ID_n) and the message M = (m_1, ···, m_n), where ID_i, m_i ∈ {0, 1} (1 ≤ i ≤ n):

F(ID) = x′ + ∑_{i=1}^n x_i ID_i − l_u k_u  and  J(ID) = z′ + ∑_{i=1}^n z_i ID_i

K(M) = y′ + ∑_{i=1}^n y_i m_i − l_m k_m  and  L(M) = ω′ + ∑_{i=1}^n ω_i m_i
Then B assigns the system parameters as follows:

– g_1 = g^a and g_2 = g^b.
– u′ = g_2^{−l_u k_u + x′} g^{z′} and u_i = g_2^{x_i} g^{z_i} (1 ≤ i ≤ n), which means that, for any identity ID, we have u′ ∏_{i=1}^n u_i^{ID_i} = g_2^{F(ID)} g^{J(ID)}.
– v′ = g_2^{−l_m k_m + y′} g^{ω′} and v_i = g_2^{y_i} g^{ω_i} (1 ≤ i ≤ n), which means that, for any message M, we have v′ ∏_{i=1}^n v_i^{m_i} = g_2^{K(M)} g^{L(M)}.
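The decomposition u′ ∏ u_i^{ID_i} = g_2^{F(ID)} g^{J(ID)} is what lets B answer queries without the master key. As a sanity check, the identity can be exercised in a toy multiplicative group Z_p^* (all constants below are illustrative; a real instantiation uses pairing-friendly groups, and negative-exponent pow requires Python ≥ 3.8):

```python
import random

p = 1_000_003          # small prime; toy group Z_p^*, illustration only
n = 8                  # identity length in bits
random.seed(1)

g  = 5                              # toy "generator"
b  = random.randrange(2, p - 1)
g2 = pow(g, b, p)                   # g2 = g^b, as in the simulation

l_u, k_u = 17, 3                    # l_u, k_u as chosen by B (arbitrary here)
xp = random.randrange(l_u)          # x' in the paper's notation
xs = [random.randrange(l_u) for _ in range(n)]
zp = random.randrange(p - 1)        # z'
zs = [random.randrange(p - 1) for _ in range(n)]

# u' = g2^{-l_u k_u + x'} g^{z'},  u_i = g2^{x_i} g^{z_i}
up = pow(g2, -l_u * k_u + xp, p) * pow(g, zp, p) % p
us = [pow(g2, xi, p) * pow(g, zi, p) % p for xi, zi in zip(xs, zs)]

ID = [random.randrange(2) for _ in range(n)]    # random n-bit identity

F = xp + sum(xi * IDi for xi, IDi in zip(xs, ID)) - l_u * k_u
J = zp + sum(zi * IDi for zi, IDi in zip(zs, ID))

lhs = up
for ui, IDi in zip(us, ID):
    lhs = lhs * pow(ui, IDi, p) % p             # u' * prod u_i^{ID_i}
rhs = pow(g2, F, p) * pow(g, J, p) % p          # g2^{F(ID)} g^{J(ID)}

assert lhs == rhs
print("decomposition holds:", lhs == rhs)       # → decomposition holds: True
```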
Finally, B returns all parameters to A. We can see that all distributions are identical to those in the real world.

Query. B answers the private key extraction queries and signature queries as follows.
Y. Ming, X. Shen, and Y. Peng
– Private key extraction query: When A issues a private key extraction query on an identity ID, B acts as follows: (1) If F(ID) = 0 mod l_u, B aborts and reports failure. (2) If F(ID) ≠ 0 mod l_u, B can construct a private key by picking a random r_ID ∈ Z*_q and computing:

d_ID = (d_1, d_2) = (g_1^{−J(ID)/F(ID)} (g_2^{F(ID)} g^{J(ID)})^{r_ID}, g_1^{−1/F(ID)} g^{r_ID})

– Signature query: When A issues a signature query for the message M = (m_1, ···, m_n) on the identity ID = (ID_1, ···, ID_n), B acts as follows: (1) If F(ID) ≠ 0 mod l_u, B can construct a private key for ID as in a private key extraction query, and then use the Sign algorithm to create a signature on M. (2) If F(ID) = 0 mod l_u and K(M) ≠ 0 mod l_m, B picks r′, r ∈ Z*_q and computes

σ = (σ_1, σ_2, σ_3) = ((u′ ∏_{i=1}^n u_i^{ID_i})^{r′} g_1^{−L(M)/K(M)} (v′ ∏_{i=1}^n v_i^{m_i})^r, g^{r′}, g_1^{−1/K(M)} g^r)

where r̄ = r − a/K(M). This equation shows that B's replies to A's signature queries are distributed as they would be in an interaction with a real challenger. (3) If F(ID) = 0 mod l_u and K(M) = 0 mod l_m, B aborts and reports failure.
Forgery. If B did not abort, A will output a valid signature σ* = (σ*_1, σ*_2, σ*_3) on the message M* under the identity ID*. If F(ID*) ≠ 0 mod q or K(M*) ≠ 0 mod q, then B aborts. Otherwise, F(ID*) = 0 mod q and K(M*) = 0 mod q, and we can compute

σ*_1 / ((σ*_2)^{J(ID*)} (σ*_3)^{L(M*)}) = g^{ab},

which is the solution to the given CDH problem.

It remains only to analyze the success probability and running time of B. Analogously to [22], we obtain that the success probability of B is ε′ = ε / (16(q_e + q_s)q_s(n + 1)²). Algorithm B's running time is the same as A's running time plus the time it takes to respond to q_e private key extraction queries and q_s signature queries. Each private key extraction query requires B to perform 5 exponentiations in G1, and each signature query requires 2n + 4 exponentiations in G1. We assume that an exponentiation in G1 takes time t_e. Hence, the total running time is at most t′ = t + (5q_e + (2n + 4)q_s)t_e.

Theorem 2. (Indistinguishability) The proposed scheme is unconditionally indistinguishable against a (ε, t, q_e, q_s) adaptive chosen-message distinguisher D. The proof of Theorem 2 can be found in the full version of the paper due to the space limitation.

Theorem 3. (Immutability) Assume there is an adversary A that is able to break the immutability of our scheme with an advantage ε when running in a time t and making at most q_e private key extraction queries and q_s signature queries. Then there exists an algorithm B that can produce a valid signature in a time t′ = t + (lq_e + 2lq_s)t_e with an advantage ε′ = ε, where t_e denotes the time of an exponentiation in G1 and l denotes the number of bit positions that the sanitizer is allowed to alter. The proof of Theorem 3 can be found in the full version of the paper due to the space limitation.
6 Conclusions
As a special kind of signature, ID-based sanitizable signatures are widely applicable; they are very suitable for cases in which the sanitizer must alter the signed document in order to hide personal sensitive information. In this paper, we studied IDSS based on Waters' signature scheme by combining ID-based cryptography and sanitizable signatures. We first proposed the model and a concrete scheme of IDSS in the standard model, and we showed that the proposed scheme is secure in our model.

Acknowledgments. This work is supported by the Natural Science Foundation of Shaanxi Province (No. 2010JQ8017) and the Special Fund for Basic Scientific Research of Central Colleges, Chang'an University (No. CHD2009JC099).
References

1. Shamir, A.: Identity-based cryptosystems and signature schemes. In: Blakely, G.R., Chaum, D. (eds.) CRYPTO 1984. LNCS, vol. 196, pp. 47–53. Springer, Heidelberg (1985)
2. Boneh, D., Franklin, M.: Identity-based encryption from the Weil pairing. In: Kilian, J. (ed.) CRYPTO 2001. LNCS, vol. 2139, pp. 213–229. Springer, Heidelberg (2001)
3. Paterson, K.G.: ID-based signatures from pairings on elliptic curves. Electronics Letters 38(18), 1025–1026 (2002)
4. Cha, J.C., Cheon, J.H.: An identity-based signature from gap Diffie-Hellman groups. In: Desmedt, Y.G. (ed.) PKC 2003. LNCS, vol. 2567, pp. 18–30. Springer, Heidelberg (2002)
5. Hess, F.: Efficient identity based signature schemes based on pairings. In: Nyberg, K., Heys, H.M. (eds.) SAC 2002. LNCS, vol. 2595, pp. 310–324. Springer, Heidelberg (2003)
6. Paterson, K.G., Schuldt, J.C.N.: Efficient identity based signatures secure in the standard model. In: Batten, L.M., Safavi-Naini, R. (eds.) ACISP 2006. LNCS, vol. 4058, pp. 207–222. Springer, Heidelberg (2006)
7. Miyazaki, K., Susaki, S., Iwamura, M., Matsumoto, T., Sasaki, R., Yoshiura, H.: Digital documents sanitizing problem. IEICE 103(195), 61–67 (2003)
8. Steinfeld, R., Bull, L., Zheng, Y.: Content extraction signatures. In: Kim, K. (ed.) ICISC 2001. LNCS, vol. 2288, pp. 285–304. Springer, Heidelberg (2002)
9. Johnson, R., Molnar, D., Song, D.X., Wagner, D.: Homomorphic signature schemes. In: Preneel, B. (ed.) CT-RSA 2002. LNCS, vol. 2271, pp. 244–262. Springer, Heidelberg (2002)
10. Ateniese, G., Chou, D.H., de Medeiros, B., Tsudik, G.: Sanitizable signatures. In: di Vimercati, S.d.C., Syverson, P.F., Gollmann, D. (eds.) ESORICS 2005. LNCS, vol. 3679, pp. 159–177. Springer, Heidelberg (2005)
11. Miyazaki, M., Iwamura, M., Matsumoto, T., Sasaki, R., Yoshiura, H., Tezuka, S., Imai, H.: Digitally signed document sanitizing scheme with disclosure condition control. IEICE Transactions 88-A(1), 239–246 (2005)
12. Suzuki, M., Isshiki, T., Tanaka, K.: Sanitizable signature with secret information. In: SCIS 2006, vol. 4A1-2, p. 273 (2006)
13. Klonowski, M., Lauks, A.: Extended sanitizable signatures. In: Rhee, M.S., Lee, B. (eds.) ICISC 2006. LNCS, vol. 4296, pp. 343–355. Springer, Heidelberg (2006)
14. Miyazaki, K., Hanaoka, G., Imai, H.: Digitally signed document sanitizing scheme based on bilinear maps. In: Proceedings of the First ACM Symposium on Information, Computer and Communications Security, pp. 343–354. ACM Press, New York (2006)
15. Izu, T., Kunihiro, N., Ohta, K., Takenaka, M., Yoshioka, T.: A sanitizable signature scheme with aggregation. In: Dawson, E., Wong, D.S. (eds.) ISPEC 2007. LNCS, vol. 4464, pp. 51–64. Springer, Heidelberg (2007)
16. Yuen, T.H., Susilo, W., Liu, J.K., Mu, Y.: Sanitizable signatures revisited. In: Franklin, M.K., Hui, L.C.K., Wong, D.S. (eds.) CANS 2008. LNCS, vol. 5339, pp. 80–97. Springer, Heidelberg (2008)
17. Haber, S., Hatano, Y., Honda, Y., Horne, W., et al.: Efficient signature schemes supporting redaction, pseudonymization, and data deidentification. In: Proceedings of the Third ACM Symposium on Information, Computer and Communications Security, pp. 353–362. ACM Press, New York (2008)
18. Brzuska, C., Fischlin, M., Freudenreich, T., et al.: Security of sanitizable signatures revisited. In: Jarecki, S., Tsudik, G. (eds.) PKC 2009. LNCS, vol. 5443, pp. 317–336. Springer, Heidelberg (2009)
19. Canard, S., Jambert, A.: On Extended Sanitizable Signature Schemes. In: Pieprzyk, J. (ed.) CT-RSA 2010.
LNCS, vol. 5985, pp. 179–194. Springer, Heidelberg (2010)
20. Agrawal, S., Kumar, S., Shareef, A., Pandu Rangan, C.: Sanitizable signatures with strong transparency in the standard model, http://eprint.iacr.org/2010/175
21. Waters, B.: Efficient identity based encryption without random oracles. In: Cramer, R. (ed.) EUROCRYPT 2005. LNCS, vol. 3494, pp. 114–127. Springer, Heidelberg (2005)
Service-Based Public Interaction Framework for Pervasive Computing Tao Wang*, Yunxiang Ling, Guohua Zhang, and Huxiong Liao C4ISR Technology National Defense Science and Technology Key Lab, NUDT, Changsha, Hunan, China
[email protected],
[email protected],
[email protected],
[email protected]
Abstract. The pervasive computing that people have long yearned for is no longer a distant vision, and a unified interactive interface is a realistic requirement of pervasive computing. To meet this requirement, a service-based framework called the Public Interaction Framework (PIF) is presented in this paper. It focuses on allowing applications that require complex interactions to run simultaneously in a public environment. PIF uses a device-based and task-oriented universal service called the Public Interaction Service (PIS) to maintain, with multi-agent systems, general support for the devices and applications registered to it. The importance of this work is in providing an open, flexible framework that supports the interactive equipment net in the pervasive computing environment, thus facilitating application development.

Keywords: service-based, interaction framework, pervasive computing.
1 Introduction

This paper describes a public human-computer interaction framework called the Public Interaction Framework (PIF) for the complex interactions in the pervasive computing environment. We focus on the overall design and implementation of support for complex interactions in interaction systems. The PIF divides the whole interaction system into four spaces: user space, device space, service space and task space. Its main feature is the Public Interaction Service (PIS), the kernel component of the PIF, which meets the requirements arising during the interaction process between human and computer. In this paper we present the PIS as the typical service space for a public pervasive computing environment. We also discuss how it contributes towards the goal of maintaining, with multi-agent systems, general support for the devices and applications registered to it.
2 Background

Mark Weiser coined the phrase “Ubiquitous Computing” around 1988, largely defined it and sketched its major concerns [1]. All models of Pervasive Computing
This work is supported by NSFC 60875048.
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 17–24, 2010. © Springer-Verlag Berlin Heidelberg 2010
share a vision of small, inexpensive, robust networked processing devices, distributed at all scales throughout everyday life and generally turned to distinctly commonplace ends. Much work has been done toward the goal of pervasive computing, such as the MIT Oxygen Project [2], the Microsoft Easy Living Project [3], the CMU Aura Project [4], the GIT Aware Home [5], etc. Recently, numerous frameworks for pervasive computing have been proposed. Tianyin Xu et al. proposed the service discovery framework USDM-PerComp for pervasive computing environments, using a Web Service Server/Directory Server (WSS/DS) based two-level hierarchical topology [6]. Jean-Yves Tigli et al. presented an extended SOA model for pervasive computing called the Service Lightweight Component Architecture (SLCA), which adds various principles to fully meet pervasive software constraints [7]. With respect to applications in the pervasive computing environment, Devdatta Kulkarni et al. presented a context-aware RBAC (CARBAC) model that focuses on the context-based access control requirements of such applications [8]. Dedicated to the development of pervasive computing applications, Wilfried Jouve et al. presented a domain-specific Interface Definition Language (IDL) as well as its compiler [9]. Trevor Pering et al. focused on pervasive collaboration with platform composition, and presented an associated Composition Framework prototype [10].
3 Approach

As computers become cheaper, smaller and more powerful, they begin to appear in places where until recently we did not expect to find them. The idea of pervasive computing and smart environments is no longer a dream; it has long since become a serious area of research, and soon this technology will start entering our everyday lives [11]. One of the major obstacles preventing this technology from spreading is the lack of interaction consistency. Developers have presented frameworks such as the W3C multimodal interaction framework to resolve this problem, but many problems and challenges remain for those who decide to actually realize them. As mentioned above, recent research in the pervasive computing field, including frameworks, models and platform composition, has partly turned to services and components. Although it is impractical to support every interaction device directly, the trend toward modularization and services in interaction techniques, together with recent progress in cross-platform techniques, makes a public interaction service feasible for supporting complex interactions over the interactive equipment net in the pervasive computing environment. Meanwhile, from the viewpoint of systems engineering, simply merging the information from modalities and devices cannot reach an optimum. Thus, to enhance the efficiency of interactions, we should integrate the information from modalities and devices and divide the tasks between the devices and the software, making full use of rules of human cognition such as the relativity between information and actions. It has come to our notice that humans and computers have different characteristics, and not only in the area of interaction. So, following these differences, we let humans and computers each do what they should do, what they can easily do, and what they are adapted to do.
4 Public Interaction Framework

During the research and development of the intelligent room/space as a simple example of a pervasive computing environment, Richard Hasha found that a common distributed object platform was needed. After identifying the problem domain, he also presented a prototypical object-network OS to support large numbers of objects and inter-object referencing [12]. Thus, the interactive equipment in the pervasive computing environment can be seen as making up a huge net with many subnets, as shown in Fig. 1. Each subnet includes many applications that require support for complex interactions. Our topic is to provide a unified interface/platform for the applications in the Interactive Equipment Nets (IENs).
Fig. 1. The IENs of pervasive computing environment
In this paper, we describe the interaction system with the Public Interaction Framework (PIF), which divides the interaction system into four spaces: user space, device space, service space and task space, as shown in Fig. 2.

User Space. The users are both the start points and the end points of the whole interaction cycle. Corresponding to the intelligence level of human cognition, the user space emphasizes the interaction capabilities of the users, especially the initiative that computers and devices lack.

Device Space. Various kinds of devices are the essential media for human-computer interaction. The device space emphasizes the interaction capabilities of the devices, as the devices produce the original interaction data, corresponding to the data level of human cognition.

Service Space. Running on the hosts, services should be aware of devices and should be designed to explicitly interact with them. This is the kernel part of the framework. Thus the service space needs to provide a unified interface to all the devices that can be identified, and do its best to support the applications based on it. The functions of the service space correspond to the information and knowledge levels of human cognition.
Task Space. To describe the requirements, a task space is called for. The task space is designed to provide the applications with resource limits and guidelines.
Fig. 2. Using four spaces to describe the interaction system
Given four spaces to describe interaction systems, we present the Public Interaction Framework (PIF) as shown in Fig. 3.
Fig. 3. The public interaction framework
To develop applications within the Public Interaction Framework, it is important to compartmentalize the functions properly, i.e. the developers should give each space its explicit requirements. Obviously, it is not difficult for developers to know what the devices their applications need to support can do, but it is necessary to tell the services what the devices can produce. So the developers need to register the supported devices with the service providers in the service space. Similar questions also exist in the task space: the developers should let the service space know what the given applications can afford to the users and devices. So the developers also need to register the supported functions of the applications with the services.

The services work as the kernel of the framework, with their main working flow as shown in Fig. 4. The service configuration and configuration files are used for the initial/basic registrations mentioned above. After the registrations, the Central Control Unit (CCU) controls the service state and resource scheduling. As the services run, there are three main modules: device management, communication control, and task management. The device management module controls the device agents, the task management module controls the task agents, and the communication module controls both local and network communications. With the help of the communication control module, the CCU schedules the output device resources to satisfy the requests of applications.
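The registration and CCU-scheduling flow described above might be sketched as follows; the class and method names (PublicInteractionService, register_device, register_application, schedule_output) are our own illustration, since the paper does not define a concrete API:

```python
from dataclasses import dataclass

@dataclass
class Device:
    dev_id: str
    kind: str            # "static" (keyboard, mouse, ...) or "smart" (phone, laptop, ...)
    capabilities: list   # what the device can produce, declared at registration

@dataclass
class Application:
    app_id: str
    functions: list      # what the application affords to users and devices

class PublicInteractionService:
    """Kernel of the PIF: keeps device/application registries and lets a
    CCU-style scheduler match output devices against application requests."""
    def __init__(self):
        self.devices = {}
        self.applications = {}

    def register_device(self, dev: Device):
        # developers tell the service what each supported device can produce
        self.devices[dev.dev_id] = dev

    def register_application(self, app: Application):
        # developers tell the service what each application can afford
        self.applications[app.app_id] = app

    def schedule_output(self, app_id: str, needed: str):
        # CCU-style scheduling: find an output device satisfying the request
        for dev in self.devices.values():
            if needed in dev.capabilities:
                return dev.dev_id
        return None

pis = PublicInteractionService()
pis.register_device(Device("kbd-0", "static", ["keystrokes"]))
pis.register_device(Device("spk-0", "static", ["audio-out"]))
pis.register_application(Application("meeting-app", ["speech-output"]))
print(pis.schedule_output("meeting-app", "audio-out"))   # → spk-0
```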
Fig. 4. The public interaction service
Device Agent

There are different device detection methods to identify different kinds of devices. To detect static devices (i.e. non-smart devices, such as keyboards, mice, audio/video equipment, etc.), the service's device detection process needs to query the registered devices table. To detect smart devices (i.e. smart cell phones, laptops, etc.), the service's device detection process keeps listening; when the smart devices call to connect to the server, the process detects them. After the PIS device detection process, the PIS device management module creates device agents for the identified devices. After this, the interaction data from the identified input devices are captured by the input-device agents, whose architecture is shown in Fig. 5. Each input device is bound to the proper device agent. The input-device agents work through the following processing steps: decode the device data with sensor libraries, analyze modal data with engines, and detect the tasks, referring to the local state and communicating with the PIS. For instance, Meng Xiangliang et al. presented a new development model for multimodal interactive applications with a unified access interface and a collection of layered input primitives suitable for multimodal input interaction [13]. Meanwhile, the output devices are controlled by the output-device agents, as shown in Fig. 6. The work the output-device agents do is simpler than that of the input-device agents; they just receive the directives from the PIS and link to the device drivers for the application outputs. The following work is transferred to the devices.
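The two detection paths (querying the registered-devices table for static devices, listening for smart devices that call in) and the per-device creation of agents could look like this sketch; registered_table, DeviceAgent and detect are hypothetical names:

```python
# Hypothetical sketch of the PIS device-detection step: static (non-smart)
# devices are found by querying a registered-devices table, smart devices
# announce themselves, and the detector creates one device agent per hit.

registered_table = {"kbd-0": "keyboard", "cam-0": "camera"}   # static devices

class DeviceAgent:
    def __init__(self, dev_id, kind):
        self.dev_id, self.kind = dev_id, kind

def detect(connected_static, smart_announcements):
    agents = []
    # static path: query the registered devices table
    for dev_id in connected_static:
        if dev_id in registered_table:
            agents.append(DeviceAgent(dev_id, registered_table[dev_id]))
    # smart path: the detector "keeps listening"; here, announcements stand
    # in for incoming connection requests from phones, laptops, etc.
    for dev_id in smart_announcements:
        agents.append(DeviceAgent(dev_id, "smart"))
    return agents

agents = detect(["kbd-0", "unknown-9"], ["phone-7"])
print([a.dev_id for a in agents])   # → ['kbd-0', 'phone-7']
```

Unregistered static devices (here "unknown-9") are ignored, which mirrors the table-lookup behaviour described above.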
Fig. 5. The input-device agent architecture
Fig. 6. The output-device agent architecture
Task Agent

The device agents trigger the executions of the task agents under the appropriate conditions, and the task agents work with the given applications. Fig. 7 shows the task agent architecture. Task agents get original instructions from the device agents through the PIS, perform context-sensitive instruction assembly under the restrictions of a conflict control strategy, and then collate the assembled command with the local semantic library for linking the command with the local application interface.
Fig. 7. The task agent architecture
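The task-agent steps (conflict control over assembled instructions, collation against a local semantic library, linking to the application interface) can be illustrated as below; the keep-the-latest conflict rule, the library contents and the command names are invented for illustration:

```python
# Sketch of the task-agent pipeline of Fig. 7; semantic library, conflict
# rules and command names are illustrative, not from the paper.
SEMANTIC_LIBRARY = {("open", "document"): "app.open_document",
                    ("close", "document"): "app.close_document"}

def conflict_control(instructions):
    # toy conflict-control strategy: keep only the latest instruction
    # issued for each target
    latest = {}
    for verb, target in instructions:
        latest[target] = (verb, target)
    return list(latest.values())

def assemble_and_link(instructions):
    commands = []
    for instr in conflict_control(instructions):
        api_call = SEMANTIC_LIBRARY.get(instr)   # collate with the library
        if api_call:
            commands.append(api_call)            # link to the app interface
    return commands

# "open document" then "close document" conflict on the same target;
# conflict control keeps the latest, the library maps it to an API call.
print(assemble_and_link([("open", "document"), ("close", "document")]))
# → ['app.close_document']
```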
One interaction cycle is completed after the task agent's processing, and interaction cycles make up the whole process of the given applications. Thus, the framework carries out the whole interaction system in the subnets of the IENs for pervasive computing.
5 Innovations and Related Works

Pervasive computing is no longer only a dream; many papers propose various approaches to take specific information (for example, contextual information) into account in applications in the pervasive computing environment. Although this is a classical way to address the topic, in this paper we use the experience of other research for reference and focus on existing approaches and techniques to support the interactions with the PIF. Extending the topology of USDM-PerComp [6], we bring in the concept of IENs as our pervasive computing environment, and in this paper we mainly work on the subnets of IENs. Greatly influenced by HLA, we try to deal with a dynamic device environment and a dynamic application/software environment by using the PIS as the RTI of the PIF. To reduce the computing load of the PIS CCU, agents are used as ambassadors. The agents need to complete data decoding, data analysis, and task detection with the help of related techniques, for instance context awareness, which is very important for improving the intelligence of human-computer interaction [14]. Though there are still many problems to be solved in the development of agent-based systems, Nicholas Hanssens et al. have presented concepts, technologies, and applications that address these needs, ranging from low-level communication infrastructure to distributed applications with multi-modal interfaces, by taking an agent-based approach [15]. Stephen Peters has taken the existing Metaglue intelligent environment system and extended its capabilities to handle multiple users and spaces [16]. Taking the principles shared in SLCA into consideration [7], we notice that the software-as-a-service (SaaS) technique is greatly called for; thus applications linked by task agents should provide their functions as services. Standardization is one of the critical problems for the universality and extendibility of the applications, so it is necessary for the PIF to work well. To reach this goal, a number of protocols need to be developed.
6 Conclusion

This paper has described our approach to allowing applications that require complex interactions to run simultaneously under a public framework in the pervasive computing environment. We have used the PIF to describe the whole interaction system and tried to provide unified interfaces for the complex interactions with the help of the PIS. Future work is to realize and refine the PIF in future system development; we will first focus on improving the PIS with lightweight ambassador components.
References

1. Weiser, M.: Computer of the 21st Century. J. Scientific American 265(3), 94–104 (1991)
2. MIT Project Oxygen, http://oxygen.lcs.mit.edu
3. Microsoft Research, EasyLiving Project, http://www.research.microsoft.com/easyliving/
4. Garlan, D., Siewiorek, D.P., Smailagic, A., Steenkiste, P.: Project Aura: Toward distraction-free pervasive computing. J. IEEE Pervasive Computing 1(2), 22–31 (2002)
5. Georgia Tech, Everyday Computing Project, http://www.cc.gatech.edu/fce/ecl/
6. Xu, T., Ye, B., Kubo, M., Shinozaki, A., Lu, S.: A Gnutella Inspired Ubiquitous Service Discovery Framework for Pervasive Computing Environment. In: Proceedings of 8th IEEE International Conference on Computer and Information Technology, pp. 712–717. IEEE Press, New York (2008)
7. Tigli, J.-Y., Lavirotte, S., Rey, G., Hourdin, V., Riveill, M.: Lightweight Service Oriented Architecture for Pervasive Computing. J. International Journal of Computer Science Issues 4(1), 1–9 (2009)
8. Kulkarni, D., Tripathi, A.: Context-Aware Role-based Access Control in Pervasive Computing Systems. In: Proceedings of the 13th ACM Symposium on Access Control Models and Technologies, pp. 113–122. ACM Press, New York (2008)
9. Jouve, W., Lancia, J., Palix, N., Consel, C., Lawall, J.: 6th IEEE Conference on Pervasive Computing and Communications, pp. 252–255. IEEE Press, New York (2008)
10. Pering, T., Want, R., Rosario, B., Sud, S., Lyons, K.: Intel Research Santa Clara: Enabling Pervasive Collaboration with Platform Composition. In: Tokuda, H., Beigl, M., Friday, A., Brush, A.J.B., Tobe, Y. (eds.) Pervasive 2009. LNCS, vol. 5538, pp. 184–201. Springer, Heidelberg (2009)
11. Gajos, K.: A Knowledge-Based Resource Management System for the Intelligent Room. Thesis, Massachusetts Institute of Technology (2000)
12. Hasha, R.: Needed: A common distributed object platform. J. IEEE Intelligent Systems, 14–16 (1999)
13. Xiangliang, M., Yuanchun, S., Xin, Y.: Inputware: A Unified Access Interface for Multimodal Input Based on Layered Interaction Primitives. J. Harmonious Man-Machine Environment 2008, 242–247 (2008)
14. Abowd, G.D., Mynatt, E.D.: Charting Past, Present, and Future Research on Ubiquitous Computing. ACM Transactions on Computer-Human Interaction 7(1), 29–58 (2000)
15. Hanssens, N., Kulkarni, A., Tuchida, R., Horton, T.: Building Agent-Based Intelligent Workspaces. Technical report, MIT Artificial Intelligence Laboratory (2002)
16. Peters, S.: Infrastructure for Multi-User, Multi-Spatial Collaborative Environments. Technical report, MIT Artificial Intelligence Laboratory (2001)
Analysis on Farmers’ Willingness to Participate in Skill Training for Off-farm Employment and Its Factors —— The Case of Ya’an City of Sichuan Province, China Xinhong Fu, Xiang Li, Wenru Zang, and Hong Chi College of Economics and Management, Sichuan Agricultural University, 46 Xinkang Road, Ya’an City, Sichuan Province, 625014, P.R. China
[email protected]
Abstract. This paper takes farmers with a tendency to transfer to off-farm work as its study subjects. Based on descriptive statistical analysis of data collected by the authors in 2007, we take farmers' willingness to participate in skill training for off-farm employment (STOE) as the dependent variable. Eight independent variables were selected from three groups, and the factors in farmers' willingness to join STOE were analyzed with a binary logistic model. The results show that the training group's demand preferences, including training time, demand for training information, training place and shared proportion of training cost, as well as age, have a remarkable influence on STOE, while gender, education level and the number of household laborers have little influence on STOE. This reveals that the running model of STOE has a strong impact on farmers' decisions to join it. Finally, some suggestions are proposed for improving the efficiency of STOE.

Keywords: Skill Training for Off-farm Employment (STOE), willingness, factors.
1 Introduction

1.1 Background

There are 150 million surplus rural laborers in China, and the number of migrant workers from rural areas rises each year. Recent research revealed that the wages of short-term-trained migrant workers were 7.9% higher than those of non-trained farmers, but the proportion of skill-trained labor among migrant workers was only 35.2% in 2006. In order to promote the training of migrant laborers from rural areas, the General Office of the State Council transmitted "The National Planning of Training for Migrant Workers from 2003 to 2010" (NPTMW), formulated by the State Ministry of Agriculture and 5 other State ministries, in September 2003. At present, the large number of rural surplus laborers with a tendency to transfer are the main training objective in rural labor output areas, and are also the research objective of this paper. A large number of STOE programs for migrant farmers have been carried out with substantial government funding in recent years. However, some problems appeared

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 25–31, 2010. © Springer-Verlag Berlin Heidelberg 2010
within the training process: for example, farmers knew very little about the training and lacked enthusiasm for it, and the training content did not match the wishes of farmers. As a result, the trainings allocated training resources inefficiently, which greatly reduced their effectiveness. In order to improve the actual effect of STOE, it is necessary to understand the willingness of farmers to participate in STOE as well as its factors.

1.2 Literature Review

There was a phenomenon of false or weak demand caused by training that did not match the demands of employment. The training organizations did not consider farmers' demand for skill training, and the labor training system could not keep up with rapidly changing training requirements [1, 2, 3]. The related research has provided useful references for this paper. The majority of existing research on skill training for farmers with potential training demand is mainly qualitative; only a small amount is descriptive quantitative research [4]. Some other quantitative research focused on the skill training of migrant workers in urban areas [5, 6, 7]. We failed to find any empirical research on farmers' willingness to participate in STOE and its factors, so we tried to concentrate on this new issue.
2 Methodology and Data Resource

2.1 Methodology

The data were collected through a semi-structured questionnaire from farmers who have a tendency to transfer. We adopted a special form of the log-linear model, the binary logistic regression model, which is usually used to analyze attribute variables. Suppose the probability that y = 1 is P; then

p(y_i = 0 | x_i, β) = F(−x_i′β)    (1)

We can estimate the parameters of the model by the maximum likelihood criterion. The logistic model obeys the logistic distribution, which means:

Pr(y_i = 1 | x_i) = e^{x_i β} / (1 + e^{x_i β})    (2)
The commonly form of logistic model is as follow: m
Pi = G (α + ∑ β j X ij ) = j =1
1 m ⎡ ⎛ ⎞⎤ 1 = exp ⎢ − ⎜ α + ∑ β j X i j ⎟ ⎥ j =1 ⎠ ⎦⎥ ⎣⎢ ⎝
(3) .
In the model above: Pi is the distribution probability of dependent variable, i is serial number of sample, j is the serial number of influence factors; βj is the regression
Analysis on Farmers’ Willingness to Participate in STOE and Its Factors
27
coefficient of influence factors; m is the number of influence factors; Xij is independent variable, shows influence factor j of sample i,α is intercept. 2.2 Data Resource The data were collected by authors in rural areas in Ya’an city of Sichuan province in China on August, 2007. The interviewees were farmers who have not participated in STOE with 20-50 years old or the graduated students from junior or high school. We selected four towns with large number of rural surplus labors in Ya’an cities for investigation. That is Yao Qiao, Nanjiao, Duo ying and Da Xing. There are a total of 200 questionnaires by using semi-structure form; all of them were collected completely. The amount of valid questionnaires is 172, the valid rate is 85%.
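As a quick illustration of Eq. (3), the choice probability can be evaluated directly. The coefficients below are made up for the sketch and are not the fitted model reported later:

```python
import math

def logistic_prob(alpha, betas, x):
    """Eq. (3): P = 1 / (1 + exp(-(alpha + sum_j beta_j * x_j)))."""
    z = alpha + sum(b * xj for b, xj in zip(betas, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical sample with m = 3 influence factors
p = logistic_prob(alpha=-0.5, betas=[0.8, -0.3, 1.2], x=[1, 2, 0])
```

The log-odds ln(p / (1 − p)) recovers the linear predictor α + Σ β_j X_ij, which is the quantity the maximum likelihood criterion estimates.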
3 Description of Sample Characteristics and Farmers' Willingness to Participate in STOE

3.1 Description of Sample Characteristics

From the 172 valid questionnaires, the statistical characteristics of the interviewees are as follows. (1) Male-dominated: the male proportion (57.4%) is slightly higher than the female, with a male-to-female ratio of 1.35:1. (2) Mainly middle-aged: respondents aged 31-50 accounted for 78.6% of the sample. (3) Educational level is not high: education was mainly elementary school and junior school, accounting for 20.3% and 61% respectively.

3.2 Analysis of Farmers' Willingness to Participate in STOE

Of the farmers surveyed, 63.2% are willing to participate in STOE. Their reasons are that training can improve their employment skills, enhance their career competitiveness, and strengthen their overall quality to some extent. The remaining 36.8% expressed that they were not willing to participate in STOE: they thought the cost of training was quite high and that short-term training would have little impact on upgrading their working skills or improving their working environment and conditions. The results also show that 77.6% of farmers clearly desired or were interested in training information, while only 22.4% did not need or care about such information; most of the latter were women in their 50s with elementary education or lower. Table 1 shows that 25% of the farmers surveyed are willing to participate in training for mechanical and electrical maintenance, 20.3% preferred clothing tailoring, 16.3% chose cooking and cosmetology (mostly women), and 12.8% chose housing construction (primarily men). The training programs farmers favored are mostly consistent with the training projects currently carried out in Ya'an.
We can conclude that the training projects basically satisfied demand, but the structure of training needs to be optimized.
Table 1. Demand of training skills for farmers

Skill category                              Farmer number    (%)
Mechanical and electrical maintenance            43          25
Clothing tailoring                               35          20.3
Cooking and cosmetology                          28          16.3
Housing construction                             22          12.8
Household management                             14           8.1
Carve-out service                                13           7.6
Other                                            17           9.9
Total                                           172         100
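The (%) column of Table 1 is simply each category count divided by the 172 valid questionnaires; a quick arithmetic check:

```python
# Counts from Table 1
counts = {
    "mechanical and electrical maintenance": 43,
    "clothing tailoring": 35,
    "cooking and cosmetology": 28,
    "housing construction": 22,
    "household management": 14,
    "carve-out service": 13,
    "other": 17,
}
total = sum(counts.values())  # 172 valid questionnaires
shares = {k: round(100 * v / total, 1) for k, v in counts.items()}
# e.g. 43/172 -> 25.0%, 35/172 -> 20.3%
```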
Farmers obtain training information through three main channels: the village committee, neighbors, relatives and friends, and the town government. Together these three channels account for 69.1% of the total, and nearly two thirds of the training information comes from the village committee and town government. This shows that village committees and town governments did a great deal of work to spread training information. Among the interviewees, nearly 20% refused to answer or answered "I don't know"; the concrete reason for this is not yet clear.
4 Analysis of Influence Factors for Farmers to Participate in STOE

4.1 Variable Selection

The dependent variable is farmers' willingness to participate in STOE, divided into two categories: willingness (Y = 1) and non-willingness (Y = 0). Because the dependent variable is binary, the binary logistic model is adopted, and the regression coefficients are estimated by the maximum likelihood criterion. The research hypothesis is that the dependent variable is related to three kinds of independent variables: farmers' individual characteristics, family characteristics, and farmers' preference of training demand. The model can be written as: Farmers' willingness to participate in STOE = f(farmer individual characteristics, family characteristics, farmer preference of training demand). Introducing this into the binary logistic model gives

    Log[P(Y1) / P(Y2)] = B0 + B1 X1 + B2 X2 + B3 X3 + ⋯ + Bn Xn ,      (4)

where Y1 refers to the farmers' willingness to participate in STOE and Y2 to unwillingness. X1 is age, X2 is gender, X3 is education level, X4 is the number of family laborers, X5 is the demand for training information, X6 is the training place, X7 is the training time, and X8 is the sharing proportion of the training cost. B0 is the constant and Bn is the regression coefficient of Xn.
Analysis on Farmers’ Willingness to Participate in STOE and Its Factors
Table 2. The variables of the model and their definitions

Variables of model                              Value    Variable definition
Explained variable
  Willingness to participate in STOE (Y)        0-1      Willingness = 1, unwillingness = 0
Explanatory variables
1. Farmer individual characteristics
  Age (X1)                                      0-4      Below 20 years old = 0, 21-30 = 1, 31-40 = 2, 41-50 = 3, 51-60 = 4
  Gender (X2)                                   0-1      Male = 0, female = 1
  Education level (X3)                          0-3      At most elementary school = 0, junior middle school = 1, high school = 2, at least college = 3
2. Family characteristics
  Number of family laborers (X4)                0-3      One = 0, two = 1, three = 2, more than three = 3
3. Farmer preference of training demand
  Demand for training information (X5)          0-4      No demand at all (0) — complete demand (4)
  Training place (X6)                           0-4      Other = 0, enterprise = 1, training school = 2, local town = 3, local village = 4
  Training time (X7)                            0-4      Over 60 days = 0, 31-60 days = 1, 16-30 days = 2, 11-15 days = 3, within 10 days = 4
  Sharing proportion of training cost (X8)      0-4      100% = 0, 80% = 1, 50% = 2, 30% = 3, 0 = 4
4.2 Model Results

The binary logistic model was estimated with SPSS. Table 3 gives the estimated factors of farmers' willingness to participate in STOE.

4.3 Result Analysis and Discussion

From Table 3 we see that the statistical test value of the binary logistic model, R2 = 0.713, demonstrates that the model fits the data within an acceptable scope. In descending order of effect, the factors are training time, demand for training information, training place, age, and the sharing proportion of the training cost. We interpret the results as follows. (1) The training time variable is significant at the 0.01 level with a positive sign, indicating that the shorter the training time, the more farmers are willing to participate in STOE. Farmers apparently assume that shorter training is better, but the facts do not bear this out: training that leads to credentials, such as for electricians, lasts a long time, at least 5 days or more, and farmers who hold such credentials are very popular in the employment market and earn higher wages.
Table 3. Binary logistic model of factors for farmers' willingness to participate in STOE

Variables of model                                 B          SE       Wald     DF    Sig.    Exp(B)
1. Farmer individual characteristics
  Age (X1)                                       -0.857**    0.380     4.848    1     0.028    0.363
  Gender (X2)                                    20.620    493.28      0.000    1     0.998    0.000
  Education level (X3)                           -0.765      0.571     1.772    1     0.182    0.467
2. Family characteristics
  Number of family laborers (X4)                 -0.031      0.374     0.012    1     0.913    0.960
3. Farmer preference of training demand
  Demand for training information (X5)            1.728***   0.636     7.909    1     0.005    5.970
  Training place (X6)                             1.261***   0.345    15.999    1     0.000    3.887
  Training time (X7)                              2.338***   0.481    22.245    1     0.000    9.352
  Sharing proportion of training cost (X8)        0.563***   0.233     7.394    1     0.007    1.884
Constant                                        -13.688      3.073    19.843    1     0.000    0.000
Note: -2LL = 61.256, R2 = 0.713. *** significant at the 0.01 level, ** at the 0.05 level, * at the 0.1 level. Exp(B) is the odds ratio; it measures the change in the odds caused by a one-unit increase in the explanatory variable.

(2) The demand for training information is significant at the 0.01 level with a positive sign, indicating that this demand is one of the important factors in whether a farmer is willing to participate in STOE: the higher the demand for training information, the more willing farmers are to participate. (3) The training place variable is significant at the 0.01 level with a positive sign: the closer the STOE venue is to where farmers live, the more willing they are to participate. Farmers do not need to pay tuition for most STOE currently offered, but if the training venue is far from their homes they must bear bus fares, accommodation and even food, which increases their financial burden. (4) The coefficient of the sharing proportion of training cost is also significant at the 0.01 level with a positive sign, indicating that the less of the training cost farmers must bear, the more willing they are to participate. In Ya'an's actual situation, skill training projects fully funded by the government, such as the Sunshine Project, are very popular with farmers, while projects for which farmers had to bear part of the expense tended to be less attractive. (5) Age has a significant effect: its coefficient is significant at the 0.05 level with a negative sign, showing that age is one of the factors affecting farmers' willingness; in particular, older farmers are more reluctant to participate in STOE. We
consider that this is closely related to both internal and external factors: older farmers have a poorer ability to absorb training skills, are less willing to transfer to cities, and find it more difficult to get jobs there.
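As an illustration, the fitted coefficients in Table 3 can be plugged back into Eq. (4) to score a coded profile. The profile below is hypothetical (the coding follows Table 2), not a sample farmer:

```python
import math

# Coefficients B from Table 3 (X1..X8), plus the constant
B = {"X1": -0.857, "X2": 20.620, "X3": -0.765, "X4": -0.031,
     "X5": 1.728, "X6": 1.261, "X7": 2.338, "X8": 0.563}
B0 = -13.688

def willingness_prob(profile):
    """P(Y=1) = 1 / (1 + exp(-(B0 + sum_j B_j X_j))) for a coded profile."""
    z = B0 + sum(B[k] * v for k, v in profile.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical profile: 21-30-year-old male junior-school graduate, two family
# laborers, strong information demand, village venue, short course, no cost share
profile = {"X1": 1, "X2": 0, "X3": 1, "X4": 1,
           "X5": 4, "X6": 4, "X7": 4, "X8": 4}
```

Lowering the training-time code X7 (i.e., a longer course) lowers the predicted willingness probability, matching the sign reported in point (1) above.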
5 Policies and Suggestions

(1) Determine a reasonable training time. STOE should set a reasonable training duration according to the nature of the skill being taught, letting farmers take full advantage of employment skills while also taking care of their production and daily life. (2) Focus on young farmers and on farmers who pay close attention to training information. Help them master one or two professional skills through training and move to cities stably. (3) Choose training venues close to where farmers live; where possible, hold STOE in the village itself in order to ease farmers' financial burden and obtain higher training benefits. (4) Training expenses ought to be borne by the government or by enterprises, so as to relieve farmers of the financial burden of training. (5) Keep the township and village as the main bodies for organizing and publicizing STOE, so that more information on farmers' training demand can be collected and fed back to the government and training organizations.
References 1. Huang, P., Zhan, S.: Capacity analysis of skill training systems for migrant labor in China. In: The Report for Canadian International Development Agency, CIDA (2004) 2. Yang, Y.: Research on Training of Migrant Workers. Theory Construction (4), 26–29 (2006) 3. Huang, P., Ma, C.: Chinese rural migrant workers-related training, education. Peking University, Comments (3), 77–86 (2007) 4. Jiang, W.: The Problem and Countermeasure of employment skill training for migrant. Financial Research (10), 69–71 (2007) 5. Han, Q., et al.: Investigation on Training of Migrant Worker. China Vocational-Technical Education (1), 16–17 (2007) 6. Zhang, Q., Zhang, Y.: Research on influence factors of Migrant Workers investment for Training initiative at two-stage in the demand view. Nanjing Agricultural College Journal (Social Sciences Version) (2), 1–20 (2008) 7. Zhang, Z.: The present situation and advancement measures about the transfer training of rural labor. China Vocational-Technical Education (26), 23–25 (2004)
Bayesian Decision Model Based on Probabilistic Rough Set with Variable Precision* Lihong Li, Jinpeng Wang, and Junna Jiang College of Science, Hebei Polytechnic University, Tangshan Hebei 063009, China
[email protected]
Abstract. It is important to find a suitable loss function, and thus produce realistic decision rules, in the minimum-risk Bayesian decision-making process. Generally there are no stringent conditions on this process, and adding constraints on the risk of loss reduces the risk of error. We discuss the basic process of minimum-risk Bayesian decision making and set up the probabilistic rough set model with variable precision on Bayesian decisions. The model can decrease the error risk of the Bayesian decision. Keywords: Variable precision, probabilistic rough set, Bayesian decision.
1 Background Since rough set theory [1] was proposed, it has seen rapid development and successful application in many fields [2, 3, 4, 5]. In practical applications, however, a limitation of the Pawlak rough set model [3] was found: its treatment of classification must be fully correct or certain [6]. The variable precision rough set model extends the Pawlak model. One idea is the introduction of a parameter β (0 ≤ β ≤ 0.5) into the basic variable precision rough set model, which allows a certain degree of misclassification; within the threshold range of the parameter, approximate reduction standards can be established [7]. The probabilistic rough set model uses incomplete information systems and the statistical information they may contain to study the large number of random phenomena in nature; by dealing with knowledge bases generated from random data, it provides a realistic reflection of their laws. Because the two parameters α and β of the probabilistic rough set are not under strict conditions, some concepts of the field may be characterized with distortion. The probabilistic rough set model with variable precision [8] improves on this problem. Bayesian decision theory and methods [10], as the basic methods of statistical pattern recognition, perform classification subject to the smallest classification error probability. By discussing the Bayesian decision-making process, a probabilistic Bayesian rough set model is established, implemented using probabilistic rough sets *
Supported by Scientific Research Guiding Plan Project of Tangshan in Hebei Province (No. 09130205a).
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 32–39, 2010. © Springer-Verlag Berlin Heidelberg 2010
of Bayesian decision making with minimal risk. In practical applications, however, finding an appropriate loss function is extremely difficult [9, 11], so it is very hard to obtain reasonable parameters from the loss function. This article analyzes the Bayesian decision-making process and, on the basis of a reasonable loss function, discusses the relationship between Bayesian decision problems and the probabilistic rough set model with variable precision, establishing a probabilistic Bayesian-decision variable precision rough set model.
2 Probabilistic Rough Set Model with Variable Precision

Definition 1 [1]. Let U be a finite universe. A set function P : 2^U → [0, 1] is a probability measure if

(1) P(U) = 1;
(2) whenever A ∩ B = ∅, P(A ∪ B) = P(A) + P(B).

Definition 2 [5]. Let

    c(X, Y) = 1 − |X ∩ Y| / |X|  if |X| > 0 ,   c(X, Y) = 0  if |X| = 0 ,      (1)

where |X| denotes the cardinality of the set X; c(X, Y) is called the relative error of classifying X with respect to Y.

Definition 3 [1]. Let U be a finite universe and R an equivalence relation on U with equivalence classes U/R = {X1, X2, …, Xn}; [x] denotes the equivalence class of x. Let P be a probability measure defined on a σ-algebra over U. The triple AP = (U, R, P) is called a probability approximation space.

Definition 4 [8]. Let U be a finite universe and fix 0.5 < α < 1. For any X ⊆ U, the lower and upper approximations of X with respect to α are

    apr_α(X) = {x ∈ U : P(X | [x]) ≥ α} ,    Apr_α(X) = {x ∈ U : P(X | [x]) > 1 − α} ,      (2)

where [x] is the set of objects with the same description as x. The positive region, boundary and negative region are, respectively,

    pos_α(X) = apr_α(X) = {x ∈ U : P(X | [x]) ≥ α} ,
    bn_α(X) = {x ∈ U : 1 − α < P(X | [x]) < α} ,
    neg_α(X) = U \ Apr_α(X) = {x ∈ U : P(X | [x]) ≤ 1 − α} .      (3)
3 Minimum-Risk Bayesian Decision

Bayesian decision theory is the basic method of statistical pattern recognition. Its basic principle is that similar samples lie close to each other in the sample space and form groups: "like attracts like." In minimum-risk Bayesian decision making we introduce the concept of risk and use prior probabilities of occurrence to take the decision of minimum risk on a target x.

Let Ω = {X1, X2, …, Xs} be a finite set of feature states, where every Xi is a subset of U, often referred to as a concept, and let A = {r1, r2, …, rm} be the set of m possible decision actions. P(Xj | [x]) denotes the probability of state Xj given the description [x] and is assumed known. Let λ(ri | Xj) denote the loss incurred by taking action ri when the state is Xj; it is generally determined from the specific problem under study, by analyzing the extent of losses from wrong decisions and consulting the relevant experts. Suppose [x] is the description of an object and decision ri is taken for it. The expected loss is obtained from the total probability formula:

    R(ri | [x]) = Σ_{j=1}^{s} λ(ri | Xj) P(Xj | [x]) .      (4)

For each observed [x] the conditional risk of taking decision ri differs, so which decision to take is decided by [x]. If we regard the decision for object x as a function r(x), then the overall risk is

    R = Σ_{[x]} R(r(x) | [x]) P([x]) .      (5)

Considering the losses caused by wrong decisions, we hope the loss is smallest; if every decision or action minimizes the conditional risk, i.e. each observation guarantees minimum risk, then the overall expected risk is minimized. Minimum-risk Bayesian decision making follows these steps:

(1) For each x ∈ U, use P(Xj | [x]) and the loss function values λ(ri | Xj) to calculate, via Eq. (4), the conditional risk R(ri | [x]) of taking each action ri (i = 1, 2, …, m).
(2) Compare the m conditional risk values R(ri | [x]) (i = 1, 2, …, m) obtained in step (1) and find the decision rk that minimizes the conditional risk, i.e. R(rk | [x]) = min_{i=1,2,…,m} R(ri | [x]); rk is then the minimum-risk Bayes decision.
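The two-step procedure above amounts to computing Eq. (4) for each action and taking the argmin. A minimal sketch, with illustrative losses and posteriors that are not from the paper:

```python
def min_risk_action(post, loss):
    """post[j] = P(X_j | [x]); loss[i][j] = lambda(r_i | X_j).

    Returns (index of the minimum-risk action, its conditional risk),
    where R(r_i | [x]) = sum_j loss[i][j] * post[j], as in Eq. (4).
    """
    risks = [sum(l * p for l, p in zip(row, post)) for row in loss]
    k = min(range(len(risks)), key=risks.__getitem__)
    return k, risks[k]

# Two states, three actions (accept / reject / defer), made-up losses
post = [0.7, 0.3]
loss = [[0.0, 1.0],   # r0: cheap if state 0, costly if state 1
        [1.0, 0.0],   # r1: the reverse
        [0.4, 0.4]]   # r2: moderate either way
k, r = min_risk_action(post, loss)
```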
4 Probabilistic Rough Set Model with Variable Precision for Bayesian Decisions

Lemma 1 [10]. Let U be a finite universe and X a subset of U; then the feature state set is Ω = {X, −X}.

Lemma 2 [10]. The concept X divides U into three parts: pos(X), bn(X) and neg(X).

Theorem 1. For X on U, if X has a minimum-risk Bayes decision, then there exists an equivalent probabilistic rough set model with variable precision.

Proof. (1) According to Lemma 1, the feature state set is Ω = {X, −X}.
(2) According to Lemma 2, the concept X divides U into three parts: pos(X), bn(X) and neg(X). For every object x in U, its description [x] faces three possible decisions: the positive decision (Y), the negative decision (N), and the pending decision (D):

(Y) decision r1: x ∈ pos(X), that is, r1: [x] → pos(X);
(N) decision r2: x ∈ neg(X), that is, r2: [x] → neg(X);
(D) decision r3: x ∈ bn(X), that is, r3: [x] → bn(X);

so the decision set is A = {r1, r2, r3}. Let λ(ri | X) be the loss of taking decision ri when x belongs to X, and λ(ri | −X) the loss of taking ri when x belongs to −X; P(X | [x]) is the probability of X given [x], and P(−X | [x]) that of −X. The conditional risk of taking ri for x with description [x] is

    R(ri | [x]) = λi1 P(X | [x]) + λi2 P(−X | [x]) ,      (6)
where λi1 = λ(ri | X) and λi2 = λ(ri | −X) (i = 1, 2, 3). According to Bayes decision theory we obtain the minimum-risk decision rules:

(Y) r1: [x] → pos(X), if R(r1 | [x]) ≤ R(r2 | [x]) and R(r1 | [x]) ≤ R(r3 | [x]);      (7)
(N) r2: [x] → neg(X), if R(r2 | [x]) ≤ R(r1 | [x]) and R(r2 | [x]) ≤ R(r3 | [x]);      (8)
(D) r3: [x] → bn(X), if R(r3 | [x]) ≤ R(r1 | [x]) and R(r3 | [x]) ≤ R(r2 | [x]).      (9)

By the total probability formula P(X | [x]) + P(−X | [x]) = 1, we get P(−X | [x]) = 1 − P(X | [x]). Substituting this together with Eq. (6) into the rules, for example into (7), we have

    R(r1 | [x]) = λ11 P(X | [x]) + λ12 P(−X | [x]) ,      (10)
    R(r2 | [x]) = λ21 P(X | [x]) + λ22 P(−X | [x]) .      (11)
Solving R(r1 | [x]) ≤ R(r2 | [x]) gives

    P(X | [x]) ≥ (λ12 − λ22) / ((λ21 − λ11) + (λ12 − λ22)) ,      (12)

and solving R(r1 | [x]) ≤ R(r3 | [x]) gives

    P(X | [x]) ≥ (λ12 − λ32) / ((λ31 − λ11) + (λ12 − λ32)) .      (13)

From the above derivation, for any x ∈ U, which decision is finally selected depends on the probability of the feature state X given the description [x], that is, on the size of P(X | [x]). In practice we must select appropriate loss values λij (i = 1, 2, 3; j = 1, 2) to obtain rational decision rules; different choices of decision carry different risks of loss. For X, the positive decision carries no more risk than deferral, and deferral less risk than rejection, that is, λ11 ≤ λ31 ≤ λ21; likewise for −X, λ22 ≤ λ32 ≤ λ12. To reduce the risk of wrong decisions, we strengthen the constraints on the loss function:

    λ21 = λ12 ,  λ31 = λ32 ,  λ11 = λ22 ,  λ31 − λ11 ≤ λ21 − λ31 .      (14)
Calculating the minimum-risk decisions, the rules can be expressed as:

(Y) r1: [x] → pos(X), if P(X | [x]) ≥ α and P(X | [x]) ≥ γ;
(N) r2: [x] → neg(X), if P(X | [x]) ≤ γ and P(X | [x]) ≤ β;
(D) r3: [x] → bn(X), if β ≤ P(X | [x]) ≤ α,

where

    α = (λ12 − λ32) / ((λ31 − λ11) + (λ12 − λ32)) ,
    γ = (λ12 − λ22) / ((λ21 − λ11) + (λ12 − λ22)) = 0.5 ,
    β = (λ32 − λ22) / ((λ21 − λ31) + (λ32 − λ22)) = 1 − α .      (15)

From the constrained losses we know α ∈ [0.5, 1], and the parameters satisfy β ≤ α. We discuss the following two cases:

(1) If β < α, then β < γ < α and β = 1 − α. The decision rules become:

(Y) r1: [x] → pos(X), if P(X | [x]) ≥ α;
(N) r2: [x] → neg(X), if P(X | [x]) ≤ 1 − α;
(D) r3: [x] → bn(X), if 1 − α < P(X | [x]) < α.

If we take r1 when P(X | [x]) = α and r2 when P(X | [x]) = 1 − α, the rules become:

(Y1) r1: [x] → pos(X), if P(X | [x]) ≥ α;
(N1) r2: [x] → neg(X), if P(X | [x]) ≤ 1 − α;
(D1) r3: [x] → bn(X), if 1 − α < P(X | [x]) < α.

Then:
    pos(X) = ∪ {[x] : P(X | [x]) ≥ α} ,
    neg(X) = ∪ {[x] : P(X | [x]) ≤ 1 − α} ,
    bn(X) = ∪ {[x] : 1 − α < P(X | [x]) < α} .      (16)

So the lower and upper approximations of X are

    apr_α(X) = {x ∈ U : P(X | [x]) ≥ α} ,
    Apr_α(X) = {x ∈ U : P(X | [x]) > 1 − α} .      (17)
This is the probabilistic rough set model with variable precision.

(2) If β = α, then β = α = γ = 0.5, and the corresponding Bayesian decision rules become:

(Y) r1: [x] → pos(X), if P(X | [x]) ≥ 0.5;
(N) r2: [x] → neg(X), if P(X | [x]) ≤ 0.5;
(D) r3: [x] → bn(X), if P(X | [x]) = 0.5.

If we take r3 in the tie case P(X | [x]) = 0.5, the rules become:

(Y1) r1: [x] → pos(X), if P(X | [x]) > 0.5;
(N1) r2: [x] → neg(X), if P(X | [x]) < 0.5;
(D1) r3: [x] → bn(X), if P(X | [x]) = 0.5.

The positive region, negative region and boundary of X are:

    pos(X) = ∪ {[x] : P(X | [x]) > 0.5} ,
    neg(X) = ∪ {[x] : P(X | [x]) < 0.5} ,
    bn(X) = ∪ {[x] : P(X | [x]) = 0.5} .      (18)

The lower and upper approximations of X are:

    apr_0.5(X) = {x ∈ U : P(X | [x]) > 0.5} ,
    Apr_0.5(X) = {x ∈ U : P(X | [x]) ≥ 0.5} .      (19)
At this point the boundary is called the absolute boundary of probabilistic rough set model (II):

    BN_{1/2}(X) = {x ∈ U : P(X | [x]) = 1/2} .      (20)

To sum up: for X on U, if X has a minimum-risk Bayesian decision, then there must be an equivalent probabilistic rough set model with variable precision.
5 Example

Let U = {x1, x2, …, x6} be a group of patients to be diagnosed. The feature state set is Ω: {diseased, disease-free}, with X = {sick persons} and −X = {disease-free persons}; the decision set is A = {r1, r2, r3}, where r1 = {treatment}, r2 = {no treatment}, r3 = {further observation}. The losses are:

    λ11 = 0.01 , λ31 = 0.05 , λ21 = 0.15 , λ22 = 0.01 , λ32 = 0.05 , λ12 = 0.15 .

Objects with the same attribute description as xi are recorded as [xi], with

    P(X | [x1]) = 0.23 , P(X | [x2]) = 0.35 , P(X | [x3]) = 0.60 ,
    P(X | [x4]) = 0.74 , P(X | [x5]) = 0.88 , P(X | [x6]) = 0.90 .

Find the minimum-risk Bayes decisions.

Solution. According to Theorem 1,

    α = (0.15 − 0.05) / ((0.05 − 0.01) + (0.15 − 0.05)) = 0.71 ,
    γ = 0.5 ,  β = 1 − 0.71 = 0.29 .

So

    pos(X) = {x ∈ U : P(X | [xi]) ≥ 0.71} = {x4, x5, x6} ,
    neg(X) = {x ∈ U : P(X | [xi]) ≤ 0.29} = {x1} ,
    bn(X) = {x : 0.29 < P(X | [xi]) < 0.71} = {x2, x3} ,
    apr_α(X) = pos(X) = {x4, x5, x6} ,
    Apr_α(X) = pos(X) ∪ bn(X) = {x2, x3, x4, x5, x6} .

Therefore the minimum-risk Bayes decisions are: x4, x5, x6 need treatment, x1 needs no treatment, and x2, x3 need further observation.
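The worked example can be verified numerically. The snippet below recomputes α, β and the three regions from the stated losses (γ = 0.5 follows because λ11 = λ22 and λ21 = λ12):

```python
# Losses from the example: (l_i1, l_i2) = losses under state X and -X
l11, l31, l21 = 0.01, 0.05, 0.15   # losses under state X
l22, l32, l12 = 0.01, 0.05, 0.15   # losses under state -X

# Thresholds from Eq. (15)
alpha = (l12 - l32) / ((l31 - l11) + (l12 - l32))   # ~0.71
beta = (l32 - l22) / ((l21 - l31) + (l32 - l22))    # ~0.29 = 1 - alpha

p = {"x1": 0.23, "x2": 0.35, "x3": 0.60,
     "x4": 0.74, "x5": 0.88, "x6": 0.90}
pos = {x for x, v in p.items() if v >= alpha}   # treatment
neg = {x for x, v in p.items() if v <= beta}    # no treatment
bn = set(p) - pos - neg                         # further observation
```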
6 Conclusion

In the Bayesian decision-making process it is difficult to determine the losses. Without changing the real Bayesian decisions, this paper strengthens the constraints on the losses and thereby reduces the risk of making mistakes. It also discusses the relation between the Bayesian decision-making process and the probabilistic rough set model with variable precision, and proves that minimum-risk Bayes decision making can be treated as equivalent to a probabilistic rough set model with variable precision.
References 1. Wenxiu, Z.: Rough Set Theory and Methods. Science Press, Beijing (2001) 2. Pawlak, Z.: Rough set theory and its applications to data analysis. Cybernetics and Systems (1998) 3. Meiping, X.: Laplace distribution parameter estimation of loss function and the risk that the Bayes function. Statistics and Decision-Making (2010) 4. Li, H.X., Yao, Y.Y., Zhou, X.Z., Huang, B.: Two-Phase Rule Induction from Incomplete Data. In: Wang, G., et al. (eds.) RSKT 2008. LNCS (LNAI), vol. 5009, pp. 47–54. Springer, Heidelberg (2008) 5. Ziarko, W.: Variable precision rough set model. Journal of Computer and System Sciences (2007) 6. Wang, G.Y., Zhang, Q.H.: Uncertainty of rough sets in different knowledge granularities. Chinese Journal of Computers (2008) 7. Yueling, Z.: Based on variable precision rough set threshold. Control and Decision (2007) 8. Bingzhen, S.: Probabilistic rough set model with variable precision. Northwest Normal University (2005) 9. Ning, S., Hongchao, M.: Pattern Recognition Theory and Method. Wuhan University Press, Wuhan (2004) 10. Huaizhong, Z.: Bayesian decision probabilistic rough set model. Mini-Micro Systems (2004) 11. Meiping, X.: Laplace distribution parameter estimation of loss function and the risk that the Bayes function. Statistics and Decision-Making (2010)
The Optimization Model of Hospital Sick Beds’ Rational Arrangements Yajun Guo, Jinran Wang, Xiaoyun Yue, Shangqin He, and Xiaohua Zhang Institute of Mathematics and Information Technology, Hebei Normal University of Science and Technology Qinhuangdao 066004, Hebei Province, China
[email protected]
Abstract. Queueing theory has practical application in service systems. Aimed at the inpatient queueing problem of an ophthalmology hospital, this paper analyzes the hospital's statistical data and builds a sick-bed proportion allotment model based on a multi-server queueing system and evaluation indices. It solves the bed-allotment problem for arriving patients, maximizing bed utilization and the resource utilization rate. Examining and evaluating the model, the paper obtains suitable models for hospital bed arrangement, admission waiting time, and the hospital beds' distribution ratio. It also serves as a reference for similar projects. Keywords: Queueing system with multi-server; evaluation index; 0-1 programming.
1 Introduction Queueing to see a doctor is a familiar phenomenon with many causes, the main one being the hospital's limited resources. Based on a hospital's data, this paper builds a reasonable sick-bed arrangement model, taking the inpatient department of an ophthalmology hospital as an example. We analyze the evaluation indices that determine a reasonable system and, according to the statistics of current inpatients and patients waiting for admission, establish a model for the patients' average sojourn time in the system and the proportional allotment of sick beds.
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 40–47, 2010. © Springer-Verlag Berlin Heidelberg 2010

2 Symbols and Basic Hypotheses

2.1 Symbols

λ: average arrival rate; μ: average service rate; Wq: average waiting time; Ws: average sojourn time; Ls: queue length (number in the system); Lq: waiting queue length; ρ: service intensity; C: total number of service counters in the current system; Si: the number of service counters needed by type-i patients; μi: average service rate for type-i patients; P0: system idle coefficient; Pi: probability of system state i.

2.2 Basic Hypotheses

(1) The system is in a dynamic balance state: the patient arrival rate, occupancy rate and discharge rate remain stable over a given period, taken to be seven days. (2) Patient arrivals form a Poisson flow. (3) The service times and the patients' discharge times are independent, identically distributed random variables obeying exponential distributions [1, 2]. (4) In considering bed arrangement, limited operative conditions are not taken into account.

2.3 Evaluation Indices

Given the relative stability of ophthalmologic disease types and the cyclical stability of visits and admissions, the evaluation indices are as follows [3]. (1) Queue length Ls and waiting queue length Lq describe the ability and efficiency of service, a quantitative index of concern to both customers and the service organization. (2) The customers' average waiting time Wq and average sojourn time Ws are the indices patients care about most, influencing patient treatment and the distribution of hospital beds [4, 5]. (3) Service intensity ρ is the measure of service efficiency in the system [6]. These indices rest on reasonable assumptions, effectively reflect the main factors of the service system, and are measurable. Assuming the hospital is in the dynamic balance state and using the expected values of the data, we get λ = 8.67, μ = 1/8.56, s = 79, Wq = 10.44, Ws = 19.09, Ls = 165.51, Lq = 90.51, ρ = 0.94. Analyzing these values we obtain: (1) ρ = 0.94 < 1, showing that the service intensity is not yet high enough, since the closer ρ is to 1, the better the system is utilized; (2) Lq > Ls/2, showing that the number of patients waiting is larger than the number receiving service; over time this will lengthen the waiting queue and lower service efficiency. For these two reasons, customers' waiting and sojourn times will be prolonged and service costs increased; accordingly, the bed arrangement of the system is clearly not reasonable and has to be improved.
3 Modeling and Solving

3.1 Modeling and Solving of Hospital Beds Arrangement

Suppose the preparation time of every surgical operation is 2 days. Because of the special request of the bilateral cataract operation, bilateral cataract admissions are arranged every Saturday, and the sickbed arrangement is determined according to the priority principle. Apart from certain beds reserved for acute sufferers, the number of remaining beds can be taken as a constant and is divided between cataract sufferers and other sufferers.
(1) If the next day is Monday or Saturday, all or part of the waiting cataract sufferers are notified to hospitalize. Counting the remaining sufferers produces two kinds of situations: no sufferers are left over (beds are divided into surplus and no surplus), or some sufferers are left over and are arranged one by one in the next period.
(2) If the next day is neither Monday nor Saturday, then, reserving one emergency bed every day, we arrange beds according to the FCFS principle [7, 8].

Table 1. Daily statistics of the cataract outpatient department

days                        7    14    21    28    35    42    49    56
monocular cataract number  15    32    37    51    58    69    81    89
bilateral cataract number  20    31    43    59    78    94   107   125
total                      35    63    80   110   136   163   188   214
Table 2. Daily statistics of the outpatient department

days      7    14    21    28    35    42    49    56
number   76   132   194   254   311   369   430   492
By analyzing the statistics of the admission records, we find that the majority of patients are hospitalized to prepare for their first surgical operation 1-3 days later; only a minority of cataract sufferers prepare for surgical operation for more than 7 days. Therefore, we let patients be operated on 2 days after admission on average. Suppose the number discharged every day is a constant; whether beds can be fully used is then completely decided by the patients' types. Analyzing the hospital records over 7.19-9.11, 2008, we get: the discharged number is 6.5 per day on average, and only a minority of days have more or fewer. So the hospital system can be regarded as balanced, and the average admission number can replace the discharged number. From the statistics of the visiting information we obtain the daily statistics of the cataract sufferers (Table 1), the statistics of all sufferers (Table 2), and the daily statistics of the traumatic outpatient department (Table 3). The data in Tables 1 and 2 show: the expectation value for cataract sufferers is 3.82, which means about 4 cataract sufferers visit every day, about 27 people weekly; the expectation value for all patients is 8.79, which means about 9 patients every day, namely 62 people coming for medical care weekly. The comparison shows that the number of cataract patients is about half the total number of patients, so it is necessary to give them priority in the arrangement.

Table 3. Daily statistics of the traumatic outpatient department

days        7    14    21    28    35    42    49    56
traumatic   8    15    24    31    36    43    51    60
The data indicate that the traumatic visiting number is about 1 person per day, and one traumatic patient is arranged a bed every day on average, about 7 people weekly. So we adopt the priority principle, take one week as a period, and consider a two-layer bed arrangement in the hospital:
(1) If the next day is Saturday or Monday, cataract sufferers have priority to hospitalize, and they are admitted on these two days.
(2) If the next day is neither Saturday nor Monday, then, apart from cataract and traumatic sufferers, admission is arranged according to the FCFS principle.
Let N = 7 be the average number discharged from hospital every day. If the next day is Monday or Saturday (β = 1), we arrange all or part of the cataract sufferers for hospitalization; after analysis and calculation, the cataract sufferers number 2N in the week. If the next day is neither Monday nor Saturday (β = 0), we arrange all or part of the two other types of sufferers for hospitalization, numbering 5N in the week. In summary, the sickbed assignment model is as follows [9, 10]: on a given day of the week, the number of cataract sufferers admitted is E1 = βN, and the number of non-cataract sufferers admitted is E2 = (1 − β)N, namely

  E1 = βN,  E2 = (1 − β)N,  where β = 1 if the next day is Monday or Saturday, and β = 0 otherwise.
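The assignment rule above can be sketched as a small function. The weekday encoding (0 = Monday … 6 = Sunday) is our own illustrative convention, not from the paper:

```python
# Sketch of the assignment rule E1 = beta*N, E2 = (1 - beta)*N.
N = 7  # average number of patients discharged (hence admitted) per day

def beds_for_next_day(next_weekday, n=N):
    """Return (E1, E2): beds given to cataract and to non-cataract sufferers."""
    beta = 1 if next_weekday in (0, 5) else 0   # next day is Monday or Saturday
    return beta * n, (1 - beta) * n

# Over one week, cataract sufferers receive 2N beds and the others 5N:
week = [beds_for_next_day(day) for day in range(7)]
assert sum(e1 for e1, _ in week) == 2 * N
assert sum(e2 for _, e2 in week) == 5 * N
```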
Statistics show that the beds can receive at most 48 sufferers in a week; from model one we find that cataract patients take 14 places and other patients 34, with the allotment project as in Table 4.

Table 4. Model one - the bed assignment project for a week

week          1                    2             3             4             5             6                              7
patient type  cataract (priority)  other (FCFS)  other (FCFS)  other (FCFS)  other (FCFS)  bilateral cataract (priority)  other (FCFS)

Table 5. The outpatient sufferer situation before 7.15, 2008

Type     monocular cataract   bilateral cataract   glaucoma   retinal diseases
Number   3                    2                    2          3
If β = 1, the number of cataract sufferers arranged on the next day is E1 = βN = 7 and the number of other sufferers is 0; but at that time there are only 5 cataract sufferers (Table 5), so the two remaining places are given to the cataract sufferers left from the previous week. If β = 0, the number of other sufferers arranged on the next day is E2 = (1 − β)N = 7, but only 5 other sufferers are waiting, so the two remaining places are given to previous sufferers according to the FCFS principle.

3.2 Modeling and Solving of Admission Waiting Time

Wq denotes the patient's waiting time. Because the hospital beds have a certain occupancy ratio at any moment, and this occupancy ratio can be regarded as a constant over a certain time [11, 12], whether waiting patients can receive treatment depends on the number of beds C of the hospital; that is, the hospital serves sufferers as a queuing system with C completely open service windows. For such a multi-server queuing system with constant customer arrival rate λ and average service rate μ, the service intensity is ρ = λ/(Cμ), the waiting queue length is

  Lq = (λ/μ)^C ρ P0 / (C! (1 − ρ)²),

the average waiting time is Wq = Lq/λ, and the system idle coefficient is

  P0 = [ Σ_{n=0}^{C−1} (1/n!)(λ/μ)^n + (1/C!)(λ/μ)^C · 1/(1 − ρ) ]^{−1}.

Using Matlab software, we get the relationship between the average queue time and the number of service windows.
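The formulas above can be evaluated directly. The following Python sketch (our stand-in for the authors' Matlab program) computes ρ, P0, Lq and Wq of an M/M/C system, applied to the paper's data λ = 8.67, μ = 1/8.56, C = 79; outputs may differ slightly from the paper's figures because of rounding in the inputs.

```python
from math import factorial

def mmc_metrics(lam, mu, C):
    """Service intensity, idle coefficient, Lq and Wq of an M/M/C queue."""
    rho = lam / (C * mu)                 # service intensity rho = lam/(C*mu)
    if rho >= 1:
        raise ValueError("unstable system: rho >= 1")
    a = lam / mu
    # P0 = [ sum_{n=0}^{C-1} a^n/n! + (1/C!) a^C / (1 - rho) ]^(-1)
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(C))
                + a**C / factorial(C) / (1 - rho))
    lq = a**C * rho * p0 / (factorial(C) * (1 - rho) ** 2)   # waiting queue length
    return rho, p0, lq, lq / lam                             # Wq = Lq / lam

rho, p0, lq, wq = mmc_metrics(8.67, 1 / 8.56, 79)
print(round(rho, 2))   # 0.94, the service intensity reported in Sect. 2.3
```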
Fig. 1. Relationship between the average queue time and the number of service windows
Graphic analysis: the waiting queue length reaches its maximum at C = 75 and shows a declining trend afterwards, reaching its minimum at C = 79. Model results: C = 75, Lq,max = 82; C = 79, Lq,min = 15; thus Wq,max = 9.46, Wq,min = 1.73. Only when C ≥ 75 does the model tend to stability, and it has this limitation. Under the stable state, a patient is admitted at least 2 and at most 10 days after visiting.

3.3 Modeling and Solving of Hospital Beds' Distribution Ratio

Analyzing the statistics data, the average service rates μi (i = 1, 2, 3, 4) of the four kinds of sufferers are respectively μ1 = 1/5.11, μ2 = 1/8.04, μ3 = 1/11.51, μ4 = 1/6.76. Let si/Σsi be the proportion of sickbeds for the ith kind; the ith kind has si windows (beds) providing service.
Considering the si service windows alone as a multi-server queuing system, the ith kind of sufferers' average sojourn time is Ws_i = L_i/λ, where λ denotes the average arrival rate,

  L_i = (λ/μi)^{si} ρi P_i0 / (si! (1 − ρi)²)

denotes the queue length, ρi = λ/(si μi) denotes the service intensity, and

  P_i0 = [ Σ_{n=0}^{si−1} (λ/μi)^n / n! + (1/si!)(λ/μi)^{si} · 1/(1 − ρi) ]^{−1}

is the idle coefficient. Considering the optimal value of L_i, write L_i = L_i(si); that is, the average waiting time is a function of the number of service windows. Setting L_i'(si) = 0 yields si (i = 1, 2, 3, 4). Providing the si service windows for the ith kind of sufferers, we obtain the optimal value of Ws_i, that is, the shortest sojourn time, when the hospital bed allotment proportion in the system is s1 : s2 : s3 : s4:

  Ws_i = L_i/λ,  L_i = (λ/μi)^{si} ρi P_i0 / (si! (1 − ρi)²).

Using Matlab software, programming yields Fig. 2.
Fig. 2. Waiting queue length of each kind of patient versus the number of service windows
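The scan behind Fig. 2 can be reproduced in a few lines (a Python sketch standing in for the authors' Matlab program). The common arrival rate λ = 8.67 is our assumption, since the per-type arrival rates are not restated here; the stopping threshold 0.01 is likewise illustrative.

```python
def queue_len(lam, mu, s):
    """Waiting-queue length L_i(s_i) of an M/M/s queue (inf if unstable).

    The terms a^n/n! are accumulated iteratively to avoid overflow for large s.
    """
    rho = lam / (s * mu)
    if rho >= 1:
        return float("inf")
    a = lam / mu
    term, total = 1.0, 1.0              # running a^n/n! for n = 0..s-1
    for n in range(1, s):
        term *= a / n
        total += term
    last = term * a / s                 # a^s / s!
    p0 = 1.0 / (total + last / (1 - rho))
    return last * rho * p0 / (1 - rho) ** 2

lam = 8.67                                        # assumed common arrival rate
mus = [1 / 5.11, 1 / 8.04, 1 / 11.51, 1 / 6.76]   # mu_1..mu_4 from the text
for i, mu in enumerate(mus, start=1):
    s = int(lam / mu) + 1                         # smallest stable window count
    while queue_len(lam, mu, s) - queue_len(lam, mu, s + 1) > 0.01:
        s += 1
    print(f"type {i}: L_i levels off near s_i = {s}")
```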
Analyzing Fig. 2, the evaluation indexes of the queuing service systems must be compared within a reasonable interval, so we first determine a public, reasonable interval. Two principles guarantee it: 1) the si queuing service systems of all four kinds of sufferers lie in the reasonable interval at the same time; 2) persistently increasing beds causes serious waste, so the reasonable zone is a bounded zone. The reasonable intervals of the four kinds of sufferers and the related data of the vibration intervals are as follows:

Table 6. Vibration zones and reasonable zones by patient type

types of sufferers   the vibration interval   the reasonable interval
the first kind       [88, 96]                 [88, ∞)
the second kind      [102, 113]               [102, ∞)
the third kind       [113, 130]               [113, ∞)
the fourth kind      [143, 153]               [143, ∞)
The left endpoint of the public interval is 143. In the steady interval, increasing the number of service windows makes Lq and its rate of change smallest, so the public reasonable interval need not take the steady interval into account; the right endpoint is determined by some zone of the vibration intervals. Noticing that the vibration intervals of the first, second and fourth kinds of sufferers do not have their left ends in the public interval, we need only analyze the characteristics of the third kind's vibration interval to determine the public interval's right end. The characteristic of the interval is: it first oscillates, then descends to a peak value, descending fast and then slowly, and finally enters the steady interval; therefore an inflection point A appears in the descending process, and the level of A, 147, is taken as the right endpoint. Namely, the reasonable public interval is [143, 147]. After the public interval is determined, from Fig. 2 we find the values of si at which each kind of sufferer's average waiting time attains its minimum: respectively 147, 147, 143, 147; so the values of si are taken as 147 : 147 : 143 : 147.
4 Model Evaluation

The model is an effective combination of the priority principle and the FCFS principle; without changing hardware equipment, the hospital improves only by raising its own service level. A long operation preparation time lengthens bed occupancy and thus increases a sufferer's total treatment time in hospital. Except for the cataract sufferers' operations, which need special arrangement, the other patients' operation preparation times tend to be steady, so the other patients are not considered further here. Cataract sufferers have a higher proportion, and the rigid rule of admitting cataract sufferers on only two days per week is also unfair to outpatient cataract sufferers. It is therefore necessary to refine the model further.
Acknowledgements

This paper is supported by the Scientific Technology Research and Development Program Fund Project of Qinhuangdao City (No. 200901A288) and by the Teaching and Research Project of Hebei Normal University of Science and Technology (No. 0807). The authors are grateful to the anonymous reviewers, who made constructive comments.
References

1. Wei, X.: Operations Research. Mechanical Industry Press, Beijing (2005)
2. Yang, W., He, X., Yang, X.: New Operational Research Tutorial: Model, Method and Computer Implementation 4, 243–247 (2005)
3. Xie, I., Xue, Y.: Optimization Modeling and LINDO/LINGO Software 7, 369–374 (2008)
4. Tang, Y., Tang, X.: Queuing Theory: Basis and Analysis. Science Press, Beijing (2006)
5. Xu, J., Hu, Z.: Operations Research: Data, Models, Decision-Making. Science Press, Beijing (2006)
6. Zhao, J., Dan, Q.: Mathematical Modeling and Mathematical Experiment. Higher Education Press, Beijing (2003)
7. Ke, J.-C., Wang, K.-H.: The reliability analysis of balking and reneging in a repairable system with warm standbys. Quality and Reliability Engineering International 18, 211–218 (2007)
8. Sun, Y., Yue, D.: Performance analysis of an M/M/s/N queuing system with balking, reneging and multiple synchronous vacations. Systems Engineering Theory and Practice 27, 152–158 (2007)
9. Zhang, Y., Yue, D., Yue, W.: Analysis of an M/M/1/N queue with balking, reneging and server vacations. In: Proc. of the Fifth International Symposium, pp. 37–47 (2005)
10. Wang, K.-H., Chang, Y.-C.: Cost analysis of a M/M/R queuing system with balking, reneging and server breakdowns. Mathematical Methods of Operations Research 56, 169–180 (2002)
11. Jiang, S., Fu, X.: Application of checking service model of supermarket based on queuing theory. Logistics Science-Technology 10, 141–142 (2008)
12. Liu, W., Liu, Z.: Applying queuing theory to quantitative analysis on outpatient billing service in hospital. Professional Forum 30(10), 87–89 (2009)
Inverse Eigenvalue Problem for Real Five-Diagonal Matrices with Proportional Relation

Mingxing Tian and Zhibin Li

College of Mathematics and Physics, Dalian Jiaotong University, 116028 Dalian, China
[email protected],
[email protected]
Abstract. This paper studies the inverse eigenvalue problem for real five-diagonal matrices. It shows how to use two unequal real numbers and two nonzero vectors to construct an n-order real five-diagonal matrix with a proportional relation between the minor diagonal and the second minor diagonal. The paper discusses the relationship between the generalized Jacobi matrix and the five-diagonal matrix, and obtains some important results. A numerical example is also provided.

Keywords: proportional relation; five-diagonal matrix; characteristic value; inverse problem.
1 Introduction

1.1 The Background of the Inverse Eigenvalue Problem

Inverse eigenvalue problems for matrices have a strong physics background and practical significance. They come from solid mechanics, particle physics, quantum mechanics, structural design, system parameter identification, automatic control and many other fields. For example, seeking the original vibration system from the frequencies and modes of a discrete vibration system can be solved by establishing a matrix inverse eigenvalue problem model [1]. For three-diagonal matrices, including Jacobi matrices and generalized Jacobi matrices, research on the inverse problem has achieved promising results [2-3]. The inverse eigenvalue problem for five-diagonal matrices arises not only from the discretization of inverse eigenvalue problems for differential equations, but also from areas such as structural mechanics, and its study is constantly improving [4-5].

1.2 Related Definitions

Firstly, a generalized Jacobi matrix is a matrix of the form

  J =
    ⎛ l1   b1                      ⎞
    ⎜ c1   l2   b2                 ⎟
    ⎜      ⋱    ⋱    ⋱             ⎟      (1)
    ⎜           cn−2  ln−1  bn−1   ⎟
    ⎝                 cn−1  ln     ⎠

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 48–53, 2010.
© Springer-Verlag Berlin Heidelberg 2010
Here bi ci > 0 (i = 1, 2, …, n − 1) and li ∈ R (i = 1, 2, …, n). The generalized Jacobi matrix referred to in reference [3] is the one whose subdiagonal elements satisfy ci = k bi, k > 0.

In addition, a real five-diagonal matrix is a matrix of the shape

  A =
    ⎛ a1   e1   g1                          ⎞
    ⎜ f1   a2   e2   g2                     ⎟
    ⎜ h1   ⋱    ⋱    ⋱    ⋱                 ⎟      (2)
    ⎜      ⋱    ⋱    ⋱    ⋱     gn−2        ⎟
    ⎜           hn−3 fn−2 an−1  en−1        ⎟
    ⎝                hn−2 fn−1  an          ⎠

where ai, ei, fi, gi, hi ∈ R.
• If ei = fi and hi = gi, then A is a real symmetric five-diagonal matrix. Some important results on this case appear in references [4, 5]. In this paper we widen the scope of the matrix and extend those partial results.
• If fi = k ei and hi = k² gi with k > 0, matrix A turns into the matrix B of the specific form

  B =
    ⎛ a1     e1     g1                           ⎞
    ⎜ ke1    a2     e2     g2                    ⎟
    ⎜ k²g1   ⋱      ⋱      ⋱     ⋱               ⎟      (3)
    ⎜        ⋱      ⋱      ⋱     gn−2            ⎟
    ⎜        k²gn−3 ken−2  an−1  en−1            ⎟
    ⎝               k²gn−2 ken−1 an              ⎠

A matrix of the form B is called a real five-diagonal matrix with proportional relation. Let λ1, λ2, …, λn be the characteristic values of an n × n real five-diagonal matrix A and q1, q2, …, qn the corresponding eigenvectors; then (λi, qi) is called a characteristic pair of the matrix A.

1.3 Preliminary Lemmas and Properties

Lemma 1. The product of two generalized Jacobi matrices is a real five-diagonal matrix which has the ratio 1 : k between the minor diagonals and the ratio 1 : k² between the second minor diagonals (the elements of the second minor diagonals being not all zero).

Proof. If n ≥ 3, the square B = J² of the generalized Jacobi matrix J in (1) with ci = k bi is five-diagonal with entries (taking b0 = bn = 0):

  (B)ii = k b²_{i−1} + l²_i + k b²_i,
  (B)_{i,i+1} = li bi + l_{i+1} bi,   (B)_{i+1,i} = k(li bi + l_{i+1} bi),
  (B)_{i,i+2} = bi b_{i+1},           (B)_{i+2,i} = k² bi b_{i+1}.

For all bi ≠ 0 (i = 1, 2, …, n − 1) we know bi b_{i+1} ≠ 0 and k² bi b_{i+1} ≠ 0 (i = 1, 2, …, n − 2), so we get a five-diagonal matrix, because at least one element of the second minor diagonals is nonzero.
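Lemma 1 is easy to check numerically; the sketch below squares an arbitrary generalized Jacobi matrix (illustrative values of our choosing) and verifies the five-diagonal shape and the 1 : k and 1 : k² ratios.

```python
import numpy as np

# An arbitrary generalized Jacobi matrix with c_i = k * b_i.
k = 0.5
l = np.array([1.0, 2.0, -1.0, 3.0, 0.5])
b = np.array([4.0, 6.0, 4.0, 2.0])
J = np.diag(l) + np.diag(b, 1) + np.diag(k * b, -1)

B = J @ J
n = len(l)
# B is five-diagonal: entries beyond the second minor diagonals vanish ...
for i in range(n):
    for j in range(n):
        if abs(i - j) > 2:
            assert B[i, j] == 0
# ... and the minor diagonals keep the ratios 1:k and 1:k^2.
assert np.allclose(np.diag(B, -1), k * np.diag(B, 1))
assert np.allclose(np.diag(B, -2), k**2 * np.diag(B, 2))
```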
Lemma 2. If (λ, x) is any characteristic pair of an n × n generalized Jacobi matrix, then (1) x1 xn ≠ 0; (2) x²_i + x²_{i+1} ≠ 0 (i = 1, 2, …, n) [6].

Lemma 3. Given real numbers λ, μ and nonzero vectors x = (x1, x2, …, xn), y = (y1, y2, …, yn) which meet the following conditions [3]: (1) dn = 0; (2) Di bi ≠ 0 (i = 1, 2, …, n − 1); (3) ci = k bi (k > 0, i = 1, 2, …, n − 1), there exists a unique generalized Jacobi matrix J. Its off-diagonal elements are

  bi = (μ − λ) di / Di,  ci = k bi  (i = 1, 2, …, n − 1),

and the elements of the main diagonal are

  li = λ − (c_{i−1} x_{i−1} + bi x_{i+1}) / xi = μ − (c_{i−1} y_{i−1} + bi y_{i+1}) / yi  (i = 1, 2, …, n).

Nature 1. Let (λ, x) be a characteristic pair of an irreducible generalized Jacobi matrix J; then (λ², x) is a characteristic pair of the real five-diagonal matrix B = J². In particular, if all the eigenvalues of J are non-negative and λ is the ith eigenvalue of J, then λ² is the ith eigenvalue of B.

Proof. From Jx = λx we get Bx = J²x = Jλx = λ²x. If all the eigenvalues of J are non-negative, order them as 0 ≤ λ1 < λ2 < … < λn. When λi is the ith eigenvalue of J, then from Jx = λx and Bx = λ²x it follows that λ²_i is the ith eigenvalue of B.

Nature 2. If (λ, x) is a characteristic pair of the matrix B with nonzero characteristic vector x, then (1) at least one of x1 and x2 is nonzero; (2) at least one of x_{n−1} and xn is nonzero; (3) x1 xn ≠ 0; (4) x²_i + x²_{i+1} ≠ 0 (i = 1, 2, …, n).

Proof. (1) If x1 and x2 were both zero, then from the first equation of Bx = λx we would get x3 = 0 (the elements of the second minor diagonal being nonzero), and so xi = 0 in turn, which conflicts with the characteristic vector being nonzero. (2) Similarly, x_{n−1} and xn are not both zero. By Lemma 1, Lemma 2 and Nature 1, (3) and (4) are established.
2 Solving Question A

In this section we discuss a kind of inverse eigenvalue problem for the five-diagonal matrix associated with the generalized Jacobi matrix.
2.1 Posing the Question

Question A. Given real numbers λ, μ (λ > 0, μ > 0, λ ≠ μ) and nonzero vectors x, y ∈ R^n, find an n × n real five-diagonal matrix with proportional relation between the minor diagonal and the second minor diagonal, i.e., with fi = k ei and hi = k² gi, such that (λ, x) and (μ, y) are characteristic pairs of the matrix.

First agree that x0 = x_{n+1} = y0 = y_{n+1} = c0 = cn = b0 = bn = 0, and define

  di = Σ_{s=0}^{i−1} k^s x_{i−s} y_{i−s}  (i = 1, 2, …, n),      (4)

  Di = xi y_{i+1} − x_{i+1} yi  (the determinant of the 2 × 2 block [xi, x_{i+1}; yi, y_{i+1}]).      (5)
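As a concrete check of definition (4), the data of the numerical example in Section 3 (k = 1/2, x = (4, √2, −1, −√2)^T, y = (2√2, 4, 2√2, 1)^T) satisfy the condition dn = 0 required by the theorems below:

```python
import math

k = 0.5
x = [4.0, math.sqrt(2), -1.0, -math.sqrt(2)]
y = [2 * math.sqrt(2), 4.0, 2 * math.sqrt(2), 1.0]

def d(i):
    """d_i = sum_{s=0}^{i-1} k^s x_{i-s} y_{i-s}  (1-based index i)."""
    return sum(k**s * x[i - 1 - s] * y[i - 1 - s] for s in range(i))

print([round(d(i), 6) for i in range(1, 5)])   # [11.313708, 11.313708, 2.828427, 0.0]
assert abs(d(4)) < 1e-12                        # d_n = 0 holds
```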
2.2 Inversing and Reasoning

Let (λ, x) and (μ, y) be two characteristic pairs of a generalized Jacobi matrix Jn of the form (1) in which ci = k bi (k > 0, i = 1, 2, …, n − 1) and μ > λ > 0.      (6)

The matrix Jn and the characteristic vectors meet the conventions in (4). Because Jn x = λx and Jn y = μy,

  c_{i−1} x_{i−1} + li xi + bi x_{i+1} = λ xi  (i = 1, 2, …, n),
  c_{i−1} y_{i−1} + li yi + bi y_{i+1} = μ yi  (i = 1, 2, …, n).

From these two formulas, together with (5), we get

  bi Di = (μ − λ) di  (i = 1, 2, …, n).      (7)

So, under the conditions of Lemma 2 and Lemma 3, we obtain the unique solution of the generalized Jacobi matrix Jn. There,

  bi = (μ − λ) di / Di  (i = 1, 2, …, n − 1),
  l1 = λ − b1 x2 / x1,  ln = μ − k b_{n−1} y_{n−1} / yn,
  li = λ − (k b_{i−1} x_{i−1} + bi x_{i+1}) / xi = μ − (k b_{i−1} y_{i−1} + bi y_{i+1}) / yi  (k > 0, i = 2, …, n − 1).
For the real five-diagonal matrix Bn of the shape (2),      (8)

suppose correspondingly that (λ, x) and (μ, y) are two characteristic pairs of the n-order matrix Bn, where

  fi = k ei,  hi = k² gi,  k > 0.      (9)

According to Nature 2, the matrix Bn satisfies Bn x = λx and Bn y = μy, and from Lemma 1 we can take Bn = Jn², the five-diagonal matrix computed in the proof of Lemma 1. So the elements of the matrix Bn are obtained at once: when k is given, li and bi are determined uniquely, and thus ai, ei, gi and hi have unique values. We then set up the following conclusions.

2.3 The Main Results
Theorem 1. Given real numbers λ, μ (μ > λ > 0) and nonzero vectors x = (x1, x2, …, xn), y = (y1, y2, …, yn), if (λ, x) and (μ, y) meet the following conditions: 1. dn = 0; 2. Di di ≠ 0 (i = 1, 2, …, n − 1); 3. fi = k ei, hi = k² gi, k > 0, then there is a unique five-diagonal matrix Bn of the shape (2) such that (λ, x) and (μ, y) are two characteristic pairs of the matrix Bn.
Theorem 2. Let λ, μ be two real numbers with μ > λ > 0, and let x and y be nonzero vectors meeting the conditions of Theorem 1. Then we get the solution of Question A (that is, the value of every element of Bn):

1. elements of the main diagonal: ai = k b²_{i−1} + l²_i + k b²_i (i = 1, 2, …, n);
2. elements of the minor diagonal: ei = li bi + l_{i+1} bi, fi = k ei (i = 1, 2, …, n − 1);
3. elements of the second minor diagonal: gi = bi b_{i+1}, hi = k² gi (i = 1, 2, …, n − 2).
The remaining elements are zero.
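Theorem 2's element formulas can be checked against a direct computation of Jn² (an illustrative sketch with randomly chosen li, bi):

```python
import numpy as np

k, n = 0.5, 5
rng = np.random.default_rng(0)
l = rng.normal(size=n)             # main diagonal of J
b = rng.uniform(1, 2, size=n - 1)  # superdiagonal of J (c_i = k*b_i)
J = np.diag(l) + np.diag(b, 1) + np.diag(k * b, -1)

bp = np.concatenate(([0.0], b, [0.0]))        # convention b_0 = b_n = 0
a = k * bp[:-1]**2 + l**2 + k * bp[1:]**2     # a_i = k b_{i-1}^2 + l_i^2 + k b_i^2
e = (l[:-1] + l[1:]) * b                      # e_i = l_i b_i + l_{i+1} b_i
g = b[:-1] * b[1:]                            # g_i = b_i b_{i+1}
B = (np.diag(a) + np.diag(e, 1) + np.diag(k * e, -1)
     + np.diag(g, 2) + np.diag(k**2 * g, -2))

assert np.allclose(B, J @ J)   # Theorem 2 reproduces B_n = J_n^2
```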
3 Numerical Example

Example. Let λ = 1 + √2, μ = 1 + 4√2, x = (4, √2, −1, −√2)^T, y = (2√2, 4, 2√2, 1)^T and k = 1/2; find the matrix B4 such that (λ, x) and (μ, y) are two characteristic pairs of the matrix B4.

Solution. The real numbers λ, μ and the given vectors meet the conditions of Lemma 3, so we get

  J4 =
    ⎛ 1  4  0  0 ⎞
    ⎜ 2  1  6  0 ⎟
    ⎜ 0  3  1  4 ⎟
    ⎝ 0  0  2  1 ⎠

It is then easy to calculate the real five-diagonal matrix:

  B4 =
    ⎛ 9   8  24   0 ⎞
    ⎜ 4  27  12  24 ⎟
    ⎜ 6   6  27   8 ⎟
    ⎝ 0   6   4   9 ⎠
References

1. Wu, X.-q.: Inverse Eigenvalue Problems for Jacobi Matrices and Other Inverse Problems. Shanghai University, Shanghai (2007)
2. Dai, H.: The inverse eigenvalue problem for Jacobi matrix. Computational Physics 4, 451–456 (1994)
3. Li, Z.-b., Zhao, X.-x., Li, W.: The inverse eigenvalue problem for generalized Jacobi matrices. Journal of Dalian Jiaotong University 4, 5–8 (2008)
4. Zhou, X.-z., Hu, X.-y.: Inverse problem for real symmetric five-diagonal matrix and its eigenvalue. Journal of Hunan University 23(1), 9–14 (1996)
5. Wang, Z.-s.: Inverse eigenvalue problem for real symmetric five-diagonal matrix. Higher University Journal of Computational Mathematics 24(4), 366–376 (2002)
6. Li, Z.-b., Li, Y.-m.: Eigenvalue problem for generalized Jacobi matrices. Journal of Dalian Railway Institute 25(4), 18–20 (2004)
On the Ruin Problem in an Erlang(2) Risk Model with Delayed Claims

Wei Zou and Jie-hua Xie

Department of Science, NanChang Institute of Technology, 330099 NanChang, P.R. China
{zouwei,jhxie}@nit.edu.cn
Abstract. In this paper, a continuous time risk model with delayed claims is considered, in which the claim number process is an Erlang(2) process. Two types of individual claims, main claims and by-claims, are defined, where each by-claim is induced by a main claim and the occurrence of the by-claim may be delayed with a certain probability. A system of integro-differential equations with certain boundary conditions for the non-ruin probability is derived and solved. Explicit expressions for non-ruin probabilities are obtained when the claim amounts from both classes are exponentially distributed. Numerical results are also provided to illustrate the applicability of the main result. Keywords: Erlang(2) risk model; non-ruin probability; delayed claims.
1 Introduction
In reality, insurance claims may be delayed due to various reasons. Since the work by Waters and Papatriandafylou [1], risk models with this special feature have been discussed by many authors in the literature. For example, Yuen and Guo [2] studied a compound binomial model with delayed claims and obtained recursive formulas for the finite time survival probabilities. Xiao and Guo [3] obtained the recursive formula of the joint distribution of the surplus immediately prior to ruin and deficit at ruin in this model. Xie and Zou [4] studied a risk model with delayed by-claims. Exact analytical expressions for the Laplace transforms of the ruin functions were obtained. Yuen et al. [5] studied a risk model with delayed claims, in which the time of delay for the occurrence of a by-claim is assumed to be exponentially distributed. Macci [6] presented a sample path large deviation principle for the delayed claims risk model presented in [5]. Xie and Zou [7] derived the expected present value of total dividends in a delayed claims risk model under stochastic interest rates. All delayed claims risk models described above relied on the assumption that the claim number process is the Poisson process. Erlang distribution is one of the most commonly used distributions in queueing theory and risk theory. Recently, various results of risk processes have been
Corresponding author.
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 54–61, 2010. c Springer-Verlag Berlin Heidelberg 2010
obtained when the claims occur as an Erlang process, e.g., Dickson and Li [8] derived expressions for the joint density of the time to ruin in an Erlang(2) risk model. Li and Garrido [9] studied the expected discounted penalty function for the Erlang(n) risk process. Other risk models involving the Erlang claim number processes were studied by Sun and Yang [10], Dickson and Hipp [11], Xie and Zou [12], Borovkov and Dickson [13], Li and Lu [14], Wu and Li [15], and the references therein. Motivated by these works, we explore the possibility of further applications of Erlang processes to actuarial problems. Specifically, we study a continuous time risk model with delayed claims, in which the claim number process is the Erlang(2) process. Two types of individual claims, main claims and by-claims, are defined, and the two types of claim have different distributions of severity. In this risk model, each main claim induces a by-claim. Moreover, the by-claim and its corresponding main claim may occur simultaneously or the occurrence of the by-claim may be delayed to the next time epoch. We assume that the occurrence of the delayed by-claim is independent of the occurrence of the next main claim. The model proposed in this paper is a generalization of delayed claims risk model and Erlang(2) risk model. The work of this paper can be seen as a complement to the work of Yuen and Guo [2] and Dickson and Hipp [11]. Hence our results include the corresponding results of the Erlang(2) risk model. This paper is devoted to deriving the explicit expressions for non-ruin probabilities in the delayed claims risk model. The rest of the paper is organized as follows: in Section 2, we describe the risk model in detail and define the surplus process of this model. An integro-differential equation system of non-ruin probabilities is derived in Section 3. This system is fully solved in Section 4 when the claim amounts from both classes are exponentially distributed. 
Several numerical examples are also given in Section 4.
2 The Model
Here, we consider a continuous time model which involves two types of insurance claims; namely the main claims and the by-claims. Let the aggregate main claim number process {N (t); t ≥ 0} be the Erlang(2) claim number process, with intensity β. Its jump times are denoted by {Ti }i≥1 with T0 = 0. The main claim amounts {Yi }i≥1 are assumed to be independent and identically distributed (i.i.d.) positive random variables with common distribution F . Let {Xi }i≥1 be the by-claim amounts, assumed to be i.i.d. positive random variables with common distribution G. The main claim amounts and by-claim amounts are independent and their means are denoted by μF and μG . In this risk model, we assume the claim occurrence process to be of the following type: there will be a main claim Yi at every epoch Ti of the Erlang(2) process and the main claim Yi will induce a by-claim Xi . Moreover, the main claim Yi and its corresponding by-claim Xi may occur simultaneously at Ti with probability θ, or the occurrence of the by-claim Xi may be delayed to the next
epoch T_{i+1} with probability 1 − θ. We assume that the occurrence of the delayed by-claim Xi is independent of the occurrence of the next main claim Y_{i+1}. In this setup, the surplus process U(t) of this risk model is defined as

  U(t) = u + ct − Σ_{i=1}^{N(t)} Yi − R(t),      (1)

where u is the initial capital, c is the constant rate of premium, and R(t) is the sum of all by-claims Xi that occurred before time t. The non-ruin probability, or survival probability, is defined to be Φ(u) = Pr(U(t) ≥ 0 for all t ≥ 0). In order to guarantee that ruin does not occur almost surely, we assume that the following positive safety loading condition holds: 2^{−1}β(μF + μG) < c. With all else being the same, we consider a slight change in the risk model: instead of having one main claim and a by-claim with probability θ at the first epoch T1, another by-claim is added at the first epoch T1. We denote the corresponding non-ruin probability for this auxiliary model by Φ1(u), which is very useful in the derivation of Φ(u).
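The surplus process (1) can be explored by Monte Carlo simulation. The sketch below is our own illustration (not from the paper): it draws Erlang(2) inter-claim times, applies the θ-delay rule for by-claims, and estimates a finite-horizon ruin probability; all parameter values are assumptions made for the illustration.

```python
import numpy as np

def ruin_probability(u, c, beta, muF, muG, theta,
                     horizon=200.0, n_paths=2000, seed=1):
    """Monte Carlo estimate of P(ruin before `horizon`) for model (1).

    Claim epochs follow an Erlang(2) process with intensity beta; each main
    claim ~ Exp(mean muF) induces a by-claim ~ Exp(mean muG) paid at the same
    epoch with probability theta, otherwise delayed to the next epoch.
    """
    rng = np.random.default_rng(seed)
    ruined = 0
    for _ in range(n_paths):
        t, surplus, pending = 0.0, float(u), 0.0
        while True:
            w = rng.gamma(2.0, 1.0 / beta)      # Erlang(2) inter-claim time
            if t + w > horizon:
                break
            t += w
            surplus += c * w                    # premium income
            claim = rng.exponential(muF) + pending
            pending = 0.0
            by = rng.exponential(muG)
            if rng.random() < theta:
                claim += by                     # by-claim paid now ...
            else:
                pending = by                    # ... or delayed to next epoch
            surplus -= claim
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_paths

# Safety loading holds: beta*(muF + muG)/2 = 1.0 < c = 1.5
p = ruin_probability(u=20.0, c=1.5, beta=1.0, muF=1.0, muG=1.0, theta=0.5)
assert 0.0 <= p <= 1.0
```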
3 System of Integro-Differential Equations

We are interested in the non-ruin probability Φ(u). Consider what will happen at the first epoch T1. Obviously there will be a main claim Y1, which will induce a by-claim X1. If the by-claim X1 also occurs at the first epoch T1, the surplus process U(t) will renew itself with a different initial reserve; the probability of this event is θ. If the by-claim is delayed to the second epoch T2, U(t) will not renew itself in this case but transfers to the auxiliary model described in the paragraph above; the probability of this event is 1 − θ. Taking what happens at T1 into account, we can set up the following equation for Φ(u) and Φ1(u):

  Φ(u) = θ ∫_0^∞ β² t e^{−βt} ∫_0^{u+ct} Φ(u + ct − y) dF∗G(y) dt
         + (1 − θ) ∫_0^∞ β² t e^{−βt} ∫_0^{u+ct} Φ1(u + ct − y) dF(y) dt.      (2)
With the auxiliary model, a similar analysis gives

  Φ1(u) = θ ∫_0^∞ β² t e^{−βt} ∫_0^{u+ct} Φ(u + ct − y) dF∗G∗G(y) dt
          + (1 − θ) ∫_0^∞ β² t e^{−βt} ∫_0^{u+ct} Φ1(u + ct − y) dF∗G(y) dt,      (3)
where ∗ denotes the operation of convolution. Putting s = u + ct and differentiating with respect to u, we get the following integro-differential equations: β θβ 2 ∞ − β(s−u) s c Φ(1) (u) = Φ(u) − 2 e Φ(s − y)dF ∗ G(y)ds c c u 0
On the Ruin Problem in an Erlang(2) Risk Model with Delayed Claims
− (1) Φ1 (u)
(1 − θ)β 2 c2
∞
u
e−
β(s−u) c
s
Φ1 (s − y)dF (y)ds,
(4)
0
β θβ 2 ∞ − β(s−u) s c = Φ1 (u) − 2 e Φ(s − y)dF ∗ G ∗ G(y)ds c c u 0 (1 − θ)β 2 ∞ − β(s−u) s c − e Φ1 (s − y)dF ∗ G(y)ds. c2 u 0
Differentiating with respect to u once again yields 2β (1) β2 β2θ u (2) Φ (u) = Φ (u) − 2 Φ(u) + 2 Φ(u − y)dF ∗ G(y) c c c 0 β 2 (1 − θ) u + Φ1 (u − y)dF (y), c2 0 2β (1) β2 β2θ u (2) Φ1 (u) = Φ1 (u) − 2 Φ1 (u) + 2 Φ(u − y)dF ∗ G ∗ G(y) c c c 0 β 2 (1 − θ) u + Φ1 (u − y)dF ∗ G(y). c2 0
4
57
(5)
(6)
(7)
Explicit Results for Exponential Claims
In the case of exponential claims, explicit expressions for Φ(u) and Φ_1(u) can be obtained. Assume that the main claim amounts {Y_i}_{i≥1} and by-claim amounts {X_i}_{i≥1} are exponentially distributed with common mean μ. Then F∗G(y) follows a Gamma distribution with density μ^{−2} y e^{−y/μ}, and (F∗G∗G)′(y) = 2^{−1} μ^{−3} y^2 e^{−y/μ}. In this case, (6) and (7) become

Φ^{(2)}(u) = (2β/c)Φ^{(1)}(u) − (β^2/c^2)Φ(u) + (β^2 θ/(c^2 μ^2)) ∫_0^u Φ(y)(u−y) e^{−(u−y)/μ} dy + (β^2(1−θ)/(c^2 μ)) ∫_0^u Φ_1(y) e^{−(u−y)/μ} dy,  (8)

Φ_1^{(2)}(u) = (2β/c)Φ_1^{(1)}(u) − (β^2/c^2)Φ_1(u) + (β^2 θ/(2c^2 μ^3)) ∫_0^u Φ(y)(u−y)^2 e^{−(u−y)/μ} dy + (β^2(1−θ)/(c^2 μ^2)) ∫_0^u Φ_1(y)(u−y) e^{−(u−y)/μ} dy.  (9)

Differentiating once again yields

Φ^{(3)}(u) = (2β/c − 1/μ)Φ^{(2)}(u) + (2β/(cμ) − β^2/c^2)Φ^{(1)}(u) − (β^2/(c^2 μ))Φ(u) + (β^2(1−θ)/(c^2 μ))Φ_1(u) + (β^2 θ/(c^2 μ^2)) ∫_0^u Φ(y) e^{−(u−y)/μ} dy.  (10)

Furthermore,

Φ^{(4)}(u) = (2β/c − 2/μ)Φ^{(3)}(u) + (4β/(cμ) − 1/μ^2 − β^2/c^2)Φ^{(2)}(u) + (2β/(cμ^2) − 2β^2/(c^2 μ))Φ^{(1)}(u) + (β^2 θ/(c^2 μ^2) − β^2/(c^2 μ^2))Φ(u) + (β^2(1−θ)/(c^2 μ))Φ_1^{(1)}(u) + (β^2(1−θ)/(c^2 μ^2))Φ_1(u),  (11)

Φ^{(5)}(u) = (2β/c − 2/μ)Φ^{(4)}(u) + (4β/(cμ) − 1/μ^2 − β^2/c^2)Φ^{(3)}(u) + (2β/(cμ^2) − 2β^2/(c^2 μ))Φ^{(2)}(u) + (β^2 θ/(c^2 μ^2) − β^2/(c^2 μ^2))Φ^{(1)}(u) + (β^2(1−θ)/(c^2 μ))Φ_1^{(2)}(u) + (β^2(1−θ)/(c^2 μ^2))Φ_1^{(1)}(u).  (12)

Letting u = 0 in (9), (11) and (12), we get the initial value

Φ_1(0) = [β^4(1−θ)μ^2 Φ(0) − 2cβ^3 μ(Φ(0)(1−θ) + μ(2Φ^{(1)}(0) + μΦ^{(2)}(0))) + c^2 β^2 a_1 + 2c^3 β a_2 + c^4 a_3] / [cβ^2(θ−1)(c + 2βμ)],  (13)

where a_1 = (θ−1)Φ(0) + μ(3−θ)Φ^{(1)}(0) + μ^2(9Φ^{(2)}(0) + 5μΦ^{(3)}(0)), a_2 = Φ^{(1)}(0) − μ^2(3Φ^{(3)}(0) + 2μΦ^{(4)}(0)), a_3 = μ{μ(Φ^{(4)}(0) + μΦ^{(5)}(0)) − Φ^{(3)}(0)} − Φ^{(2)}(0). Hence (8), (10), (11) and (12) form a linear differential system with boundary condition (13) and

Φ^{(3)}(0) = (2β/c − 1/μ)Φ^{(2)}(0) + (2β/(cμ) − β^2/c^2)Φ^{(1)}(0) − (β^2/(c^2 μ))Φ(0) + (β^2(1−θ)/(c^2 μ))Φ_1(0),
Φ^{(2)}(0) = (2β/c)Φ^{(1)}(0) − (β^2/c^2)Φ(0), Φ(∞) = 1, Φ_1(∞) = 1.  (14)

Using (11) and (12), we obtain

Φ^{(6)}(u) = (4β/c − 2/μ)Φ^{(5)}(u) + (8β/(cμ) − 1/μ^2 − 6β^2/c^2)Φ^{(4)}(u) + (4β^3/c^3 + 4β/(cμ^2) − 12β^2/(c^2 μ))Φ^{(3)}(u) + (8β^3/(c^3 μ) − 5β^2/(c^2 μ^2) − β^4/c^4)Φ^{(2)}(u) − (2β^4/(c^4 μ) − 2β^3/(c^3 μ^2))Φ^{(1)}(u).  (15)
Its characteristic equation

z^6 − (4β/c − 2/μ)z^5 − (8β/(cμ) − 1/μ^2 − 6β^2/c^2)z^4 − (4β^3/c^3 + 4β/(cμ^2) − 12β^2/(c^2 μ))z^3 − (8β^3/(c^3 μ) − 5β^2/(c^2 μ^2) − β^4/c^4)z^2 + (2β^4/(c^4 μ) − 2β^3/(c^3 μ^2))z = 0

has six roots, namely

z_1 = 0, z_2 = (βμ − c)/(cμ), z_3 = (βμ − c − √(c^2 + 6cβμ + β^2 μ^2))/(2cμ), z_4 = (βμ − c + √(c^2 + 6cβμ + β^2 μ^2))/(2cμ), z_5 = z_6 = β/c.

The positive relative security loading condition c > βμ implies that z_4, z_5 and z_6 are positive, while z_2 and z_3 are negative. Therefore, the bounded general solution for Φ(u) is

Φ(u) = C_1 + C_2 exp(z_2 u) + C_3 exp(z_3 u).  (16)

From the boundary conditions (13) and (14), we immediately get C_1 = 1, and

C_2 = [β^2 μ^2 c^4 (θ−1) + β^3 (1+θ)μ^3 A_1 + cβ^2 μ^2 A_2 + c^2 A_3 + c^2 βμ A_4] / D,  (17)

C_3 = −2β^2 μ^2 (c − βμ)(c^3 θ + c^2 β(3θ−1)μ + β^3 (1−θ)μ^3) / D,  (18)

where D = β^5 (θ−1)μ^5 Λ_1 + c^5 Λ_2 + c^2 β^3 μ^3 Λ_3 − c^3 β^2 μ^2 Λ_4 + Λ_5 and, writing Δ = c^2 + 6cβμ + β^2 μ^2:
Λ_1 = βμ + √Δ, Λ_2 = 3βμ + √Δ, Λ_3 = β(3−7θ)μ + 3(1−θ)√Δ, Λ_4 = (3+θ)√Δ + 6β(2+θ)μ, Λ_5 = c^6 + c^4 β^2 (θ−7)μ^2 + 3cβ^5 (θ−1)μ^5,
A_1 = βμ − √Δ, A_2 = β(16−5θ)μ + 3√Δ, A_3 = 2β(3θ−2)μ + (θ−1)√Δ, A_4 = β(5θ+4)μ + (3θ−1)√Δ.
Example 1. Let β = 1.5, c = 2.5, μ = 1.4. Figure 1 shows the non-ruin probabilities Φ(u) for u ∈ [0, 15] and θ = 0, 0.25, 0.5, 0.75, 1. From this graph we can see that, as expected, the non-ruin probabilities increase as the initial surplus u increases. Moreover, Φ(u) increases as the probability 1 − θ that the by-claim is delayed increases, i.e., as θ decreases.
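As a quick numerical check of Example 1 (our own sketch, not part of the paper), the roots z_2, z_3, z_4 can be evaluated for β = 1.5, c = 2.5, μ = 1.4; z_2 and z_3 are negative while z_4 is positive, so only the decaying exponentials survive in (16) and Φ(u) → C_1 = 1 as u grows.

```python
import math

# Parameters of Example 1
beta, c, mu = 1.5, 2.5, 1.4

disc = math.sqrt(c**2 + 6*c*beta*mu + beta**2 * mu**2)
z2 = (beta*mu - c) / (c*mu)            # negative root
z3 = (beta*mu - c - disc) / (2*c*mu)   # negative root
z4 = (beta*mu - c + disc) / (2*c*mu)   # positive root, excluded by boundedness

print(z2, z3, z4)
```

The positive loading condition c > βμ (here 2.5 > 2.1) holds, matching the sign pattern stated in the text.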
Fig. 1. Non-ruin probabilities Φ(u) for different θ in Example 1 (curves for θ = 0, 0.25, 0.5, 0.75, 1 plotted against the initial surplus u ∈ [0, 15])
For exponential claims with unequal means, i.e., μ_F ≠ μ_G, we have (F∗G)′(y) = (e^{−y/μ_F} − e^{−y/μ_G})/(μ_F − μ_G). We discuss the special case θ = 0. In this case, (6) and (7) become

Φ^{(2)}(u) = (2β/c)Φ^{(1)}(u) − (β^2/c^2)Φ(u) + (β^2/(c^2 μ_F)) ∫_0^u Φ_1(y) e^{−(u−y)/μ_F} dy,

Φ_1^{(2)}(u) = (2β/c)Φ_1^{(1)}(u) − (β^2/c^2)Φ_1(u) + (β^2/(c^2 (μ_F − μ_G))) ∫_0^u Φ_1(y)(e^{−(u−y)/μ_F} − e^{−(u−y)/μ_G}) dy.

Using arguments similar to those used for exponential claims with equal means, we obtain the resulting differential equation

Φ^{(6)}(u) = (4β/c − 1/μ_F − 1/μ_G)Φ^{(5)}(u) + (B_1/(c^4 μ_F μ_G))Φ^{(4)}(u) + (B_2/(c^4 μ_F μ_G))Φ^{(3)}(u) + (B_3/(c^4 μ_F μ_G))Φ^{(2)}(u) + (B_4/(c^4 μ_F μ_G))Φ^{(1)}(u),  (19)

where B_1 = −(c^4 − 4c^3 βμ_F − 4c^3 βμ_G + 6c^2 μ_F μ_G β^2), B_2 = −(6c^2 β^2 μ_F + 6c^2 β^2 μ_G − 4c^3 β − 4cμ_F μ_G β^3), B_3 = −(5c^2 β^2 − 4cβ^3 μ_F − 4cβ^3 μ_G + μ_F μ_G β^4), B_4 = −(β^4 μ_F + β^4 μ_G − 2cβ^3), and the corresponding boundary conditions are

Φ^{(3)}(0) = (2β/c − 1/μ_F)Φ^{(2)}(0) + (2β/(cμ_F) − β^2/c^2)Φ^{(1)}(0) − (β^2/(c^2 μ_F))Φ(0) + (β^2/(c^2 μ_F))Φ_1(0),
Φ^{(2)}(0) = (2β/c)Φ^{(1)}(0) − (β^2/c^2)Φ(0), Φ_1^{(2)}(0) = (2β/c)Φ_1^{(1)}(0) − (β^2/c^2)Φ_1(0), Φ(∞) = 1.

Given the parameter values, one can solve for Φ using Mathematica or other computer software.
Example 2. Let β = 1.5, c = 2.5, θ = 0, μ_F = 1.4, μ_G = 1.6. The resulting non-ruin probability is given by Φ(u) = 1 − 0.703423e^{−0.0665025u} + 0.0134625e^{−0.934656u}.
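A quick evaluation of this closed form (our sketch, not from the paper) confirms that Φ(0) ≈ 0.3100 and that Φ(u) increases toward 1:

```python
import math

def phi(u):
    # Explicit non-ruin probability of Example 2 (beta=1.5, c=2.5, theta=0, muF=1.4, muG=1.6)
    return 1 - 0.703423 * math.exp(-0.0665025 * u) + 0.0134625 * math.exp(-0.934656 * u)

print(phi(0.0), phi(5.0), phi(15.0))
```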
5  Concluding Remarks
In this paper, we studied an Erlang(2) risk model with delayed claims. We proved that the non-ruin probability satisfies a system of integro-differential equations, and derived exact representations of the solutions of these equations when the claim sizes are exponentially distributed. For other claim-amount distributions, for example mixtures of exponential or Gamma distributions, the solution can be obtained by a similar method. The model studied here can also be extended to an Erlang(n) risk model with delayed claims.
On the Ruin Problem in an Erlang(2) Risk Model with Delayed Claims
Acknowledgments. The authors are grateful to the referees for their valuable comments and suggestions. The research was fully supported by the Science and Technology Foundation of Jiangxi Province (Project No. GJJ10267).
References
1. Waters, H.R., Papatriandafylou, A.: Ruin Probabilities Allowing for Delay in Claims Settlement. Insurance: Mathematics and Economics 4, 113–122 (1985)
2. Yuen, K.C., Guo, J.Y.: Ruin Probabilities for Time-Correlated Claims in the Compound Binomial Model. Insurance: Mathematics and Economics 29, 47–57 (2001)
3. Xiao, Y.T., Guo, J.Y.: The Compound Binomial Risk Model with Time-Correlated Claims. Insurance: Mathematics and Economics 41, 124–133 (2007)
4. Xie, J.H., Zou, W.: Ruin Probabilities of a Risk Model with Time-Correlated Claims. Journal of the Graduate School of the Chinese Academy of Sciences 25, 319–326 (2008)
5. Yuen, K.C., Guo, J.Y., Kai, W.N.: On Ultimate Ruin in a Delayed-Claims Risk Model. Journal of Applied Probability 42, 163–174 (2005)
6. Macci, C.: Large Deviations for Risk Models in which Each Main Claim Induces a Delayed Claim. Stochastics: An International Journal of Probability and Stochastic Processes 78, 77–89 (2006)
7. Xie, J.H., Zou, W.: Expected Present Value of Total Dividends in a Delayed Claims Risk Model under Stochastic Interest Rates. Insurance: Mathematics and Economics 46, 415–422 (2010)
8. Dickson, D.C.M., Li, S.M.: Finite Time Ruin Problems for the Erlang(2) Risk Model. Insurance: Mathematics and Economics 46, 12–18 (2010)
9. Li, S.M., Garrido, J.: On Ruin for the Erlang(n) Risk Process. Insurance: Mathematics and Economics 34, 391–408 (2004)
10. Sun, L.J., Yang, H.L.: On the Joint Distributions of Surplus Immediately Before Ruin and the Deficit at Ruin for Erlang(2) Risk Processes. Insurance: Mathematics and Economics 34, 121–125 (2004)
11. Dickson, D.C.M., Hipp, C.: On the Time to Ruin for Erlang(2) Risk Process. Insurance: Mathematics and Economics 29, 333–344 (2001)
12. Xie, J.H., Zou, W.: Ruin Probabilities for a Risk Model with Dependent Classes of Insurance Business. Acta Mathematicae Applicatae Sinica (Chinese Series) 32, 545–546 (2009)
13. Borovkov, K.A., Dickson, D.C.M.: On the Ruin Time Distribution for a Sparre Andersen Process with Exponential Claim Sizes. Insurance: Mathematics and Economics 42, 1104–1108 (2008)
14. Li, S.M., Lu, Y.: The Distribution of the Total Dividends Payments in a Sparre Andersen Model with a Constant Dividend Barrier. Probability and Statistics Letters 79, 1246–1251 (2009)
15. Wu, X., Li, S.M.: On the Discounted Penalty Function in a Discrete Time Renewal Risk Model with General Interclaim Times. Scandinavian Actuarial Journal 109, 281–294 (2009)
Stability of Euler-Maclaurin Methods in the Numerical Solution of Equation u′(t) = au(t) + a_0 u([t]) + a_1 u([t − 1])
Chunyan He and Wanjin Lv
College of Mathematics, Heilongjiang University, Harbin 150080, China
[email protected]
Abstract. We study the numerical solution of delay differential equations with piecewise continuous arguments (EPCA), which arise in dynamic models and controls of biological systems, signal systems and so forth. A highly accurate numerical solution and a stability analysis for the equation u′(t) = au(t) + a_0 u([t]) + a_1 u([t − 1]) are given. The adaptation of the Euler-Maclaurin method is considered, and the stability region of the Euler-Maclaurin methods for the equation is determined. Conditions under which the analytic stability region is contained in the numerical stability region are obtained, and a numerical experiment is given. The numerical solution preserves the stability of the analytic solution. Keywords: delay differential equation; Euler-Maclaurin methods; piecewise continuous arguments; asymptotic stability.
1  Introduction
This paper deals with the numerical solution of delay differential equations with piecewise continuous arguments (EPCA)

u′(t) = f(t, u(t), u(α_1(t)), u(α_2(t))),  (1)

where the arguments α_i(t), i = 1, 2, have intervals of constancy. The study of this kind of equation was initiated by Wiener [1,2], Cooke and Wiener [3], and Shah and Wiener [4]. The general theory and basic results for EPCA have by now been thoroughly investigated in the book of Wiener [5]. In recent years, research on numerical methods for this kind of equation has obtained some results [6,7,8,9]. It is known that differential equations with piecewise constant argument are closely related to delay differential equations [10,11]. They also have applications
Foundation item: supported by the Natural Science Foundation of Heilongjiang Education Committee (11541269).
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 62–69, 2010. c Springer-Verlag Berlin Heidelberg 2010
in certain biomedical models [12]. In addition, [5] contains results on the relationship between equations with piecewise constant arguments and impulsive equations. In this paper we consider the following equation (2) with Euler-Maclaurin methods, where a, a_0, a_1, u_0, u_{−1} are real constants and [·] denotes the greatest-integer function:

u′(t) = au(t) + a_0 u([t]) + a_1 u([t − 1]), t ≥ 0,  u(0) = u_0, u(−1) = u_{−1}.  (2)

In what follows we present some results for the solution of (2), using the notation of [5]:

m_0(t) = e^{at} + (e^{at} − 1)a^{−1}a_0,  m_1(t) = (e^{at} − 1)a^{−1}a_1,  b_0 = m_0(1),  b_1 = m_1(1).

Definition 1. [5] A solution of (2) on [0, ∞) is a function u(t) that satisfies the following conditions:
1. u(t) is continuous on [0, ∞).
2. The derivative u′(t) exists at each point t ∈ [0, ∞), with the possible exception of the points [t] ∈ [0, ∞), where one-sided derivatives exist.
3. (2) is satisfied on each interval [k, k + 1) ⊂ [0, ∞) with integral endpoints.

Theorem 1. [5] The solution of Eq. (2) is asymptotically stable if and only if the roots of Eq. (3) have modulus less than one:

λ^2 − b_0 λ − b_1 = 0.  (3)

Lemma 1. [13] The roots of Eq. (3) have modulus less than one if and only if |b_1| < 1 and |b_0| < 1 − b_1.

Theorem 2. The solution u = 0 of Eq. (2) is asymptotically stable as t → +∞ if and only if

|a_1| < a/(e^a − 1),  a_1 − a(e^a + 1)/(e^a − 1) < a_0 < −a − a_1.  (4)
2  Bernoulli Numbers and Bernoulli Polynomials
It is known that

z/(e^z − 1) = Σ_{j=0}^∞ (B_j/j!) z^j,  |z| < 2π,  (5)

z e^{xz}/(e^z − 1) = Σ_{j=0}^∞ (B_j(x)/j!) z^j,  |z| < 2π,  (6)

where B_j and B_j(x), j = 0, 1, …, are called the Bernoulli numbers and the jth-order Bernoulli polynomials, respectively. B_j and B_j(x) have the following properties:
Proposition 1. [9]

B_0 = 1, B_1 = −1/2, B_{2j} = 2(−1)^{j+1}(2j)! Σ_{k=1}^∞ (2kπ)^{−2j}, B_{2j+1} = 0, j ≥ 1.  (7)

B_0(x) = 1, B_1(x) = x − 1/2, B_2(x) = x^2 − x + 1/6, B_k(x) = Σ_{j=0}^k C(k, j) B_j x^{k−j}.  (8)
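In computations, the Bernoulli numbers are more conveniently generated from the recurrence Σ_{j=0}^{k} C(k+1, j) B_j = 0 (k ≥ 1) with B_0 = 1, which follows from (5); a minimal sketch (ours, in exact rational arithmetic):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    # B_0..B_n with the B_1 = -1/2 convention, via
    # sum_{j=0}^{k} C(k+1, j) B_j = 0 for k >= 1
    B = [Fraction(1)]
    for k in range(1, n + 1):
        B.append(-sum(comb(k + 1, j) * B[j] for j in range(k)) / (k + 1))
    return B

B = bernoulli(8)
print(B)
```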
3  Euler-Maclaurin Method
Let h = 1/m be a given stepsize with integer m ≥ 1, and let the gridpoints t_i be defined by t_i = ih, i = 0, 1, 2, …. Write i = km + l, l = 0, 1, …, m − 1. Then for (2) the Euler-Maclaurin formula leads to a numerical process of the following type:

u_{i+1} = u_i + (ha/2)(u_{i+1} + u_i) − Σ_{j=1}^{n} (B_{2j}(ha)^{2j}/(2j)!)(u_{i+1} − u_i) + ha_0 u_{km} + ha_1 u_{(k−1)m}.  (9)

Lemma 2. [14,15] Assume that f has a continuous derivative of order 2n + 3 on the interval [t_i, t_{i+1}]. Then

∫_{t_i}^{t_{i+1}} f(t) dt − (h/2)[f(t_{i+1}) + f(t_i)] + Σ_{j=1}^{n} (B_{2j}h^{2j}/(2j)!)[f^{(2j−1)}(t_{i+1}) − f^{(2j−1)}(t_i)] = O(h^{2n+3}).  (10)

Since on each interval [k, k + 1) equation (2) can be viewed as an ordinary differential equation, the derivative u^{(j)}(t) exists on [k, k + 1) for j = 0, 1, 2, …. If we denote f(t) = u′(t) = au(t) + a_0 u([t]) + a_1 u([t − 1]), then

f′(t) = u″(t) = au′(t) = a(au(t) + a_0 u([t]) + a_1 u([t − 1])),
f^{(j)}(t) = u^{(j+1)}(t) = a^{j+1}u(t) + a^{j}(a_0 u([t]) + a_1 u([t − 1])).  (11)

Theorem 3. For any given n ∈ N, the Euler-Maclaurin method is of order 2n + 2.

Proof. Let km ≤ i < (k + 1)m − 1. Then from Lemma 2 with f(t) = u′(t) we have

u(t_{i+1}) − u(t_i) = ∫_{t_i}^{t_{i+1}} u′(t) dt = (ha/2)[u(t_{i+1}) + u(t_i)] + h(a_0 u_{km} + a_1 u_{(k−1)m}) − Σ_{j=1}^{n} (B_{2j}(ha)^{2j}/(2j)!)[u(t_{i+1}) − u(t_i)] + O(h^{2n+3}).  (12)

Let i = (k + 1)m − 1. Then for any given ε with 0 < ε < h we have

u(t_{i+1} − ε) − u(t_i) = ∫_{t_i}^{t_{i+1}−ε} u′(t) dt.  (13)

Letting ε → 0⁺ in (13), we see that (12) also holds for i = (k + 1)m − 1. Suppose u_i = u(t_i), u_{km} = u(k) and u_{(k−1)m} = u(k − 1). Then from (9) and (12) we have

(u(t_{i+1}) − u_{i+1})(1 − ha/2 + Σ_{j=1}^{n} B_{2j}(ha)^{2j}/(2j)!) = O(h^{2n+3}),  (14)

which implies that the theorem is true.
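Scheme (9) is implicit but linear in u_{i+1}: solving for it gives u_{i+1} = ((φ(x) + x)u_i + h(a_0 u_{km} + a_1 u_{(k−1)m}))/φ(x) with x = ha and φ as in (16) below. The following sketch (our illustration with assumed test parameters, not the authors' code) advances one unit interval and checks the result against the exact node map u(k+1) = b_0 u(k) + b_1 u(k−1):

```python
import math
from fractions import Fraction

def bernoulli(n):
    # B_0..B_n via sum_{j=0}^{k} C(k+1, j) B_j = 0, k >= 1
    B = [Fraction(1)]
    for k in range(1, n + 1):
        B.append(-sum(math.comb(k + 1, j) * B[j] for j in range(k)) / (k + 1))
    return B

def em_unit_interval(a, a0, a1, u_k, u_km1, m, n):
    """Euler-Maclaurin scheme (9) over one interval [k, k+1), stepsize h = 1/m."""
    h = 1.0 / m
    x = h * a
    B = bernoulli(2 * n)
    phi = 1 - x / 2 + sum(float(B[2 * j]) * x**(2 * j) / math.factorial(2 * j)
                          for j in range(1, n + 1))
    u = u_k
    for _ in range(m):
        # (9) solved for u_{i+1}
        u = ((phi + x) * u + h * (a0 * u_k + a1 * u_km1)) / phi
    return u

# compare with the exact node values u(k+1) = b0 u(k) + b1 u(k-1)
a, a0, a1 = -1.0, 0.5, 0.25
b0 = math.exp(a) + (math.exp(a) - 1) * a0 / a
b1 = (math.exp(a) - 1) * a1 / a
num = em_unit_interval(a, a0, a1, 1.0, 1.0, m=10, n=3)
print(num, b0 + b1)
```

With n = 3 the scheme is of order 8, so even with m = 10 the numerical and exact values agree to near machine precision.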
4  Numerical Stability
Let i = km + l, l = 0, 1, …, m − 1. Then (9) can be rewritten as

u_{km+l+1} = α(x)u_{km+l} + (a_0/a)(α(x) − 1)u_{km} + (a_1/a)(α(x) − 1)u_{(k−1)m},  (15)

where x = ha and

φ(x) = 1 − x/2 + Σ_{j=1}^{n} B_{2j}x^{2j}/(2j)!,  α(x) = 1 + x/φ(x).  (16)

Let us consider solutions of (15) of the type λ^k ϕ_l, where k is an integer and 0 ≤ l ≤ m:

u_{km+l} = λ^k ϕ_l for 0 ≤ l ≤ m, k = −1, 0, 1, …, with u_{km} = λ^k and ϕ_0 = 1.  (17)

If λ = 0, then (17) reduces to the zero solution of (15). Let λ ≠ 0 and i = km + l, 0 ≤ l ≤ m − 1; then (15) reduces to

ϕ_{l+1} = α(x)ϕ_l + (a_0/a)(α(x) − 1)ϕ_0 + (a_1/(aλ))(α(x) − 1)ϕ_0.  (18)

Therefore, together with ϕ_0 = 1, we have

ϕ_l = m̃_0(l) + m̃_1(l)λ^{−1},  (19)

where m̃_0(l) = α(x)^l + (a_0/a)(α(x)^l − 1) and m̃_1(l) = (a_1/a)(α(x)^l − 1). From (19) and ϕ_m = λ we obtain

λ^2 − b̃_0 λ − b̃_1 = 0,  (20)

where b̃_0 = m̃_0(m) and b̃_1 = m̃_1(m).
Clearly, (20) is the characteristic equation of (15). It is easy to see that the following theorem holds.

Theorem 4. For any given u_{−m} and u_0, the solution u_n of (15) tends to zero as n → ∞ if and only if the characteristic equation (20) has no roots with |λ| ≥ 1.

Therefore, from Theorem 4 we have

Corollary 1. For any given u_{−m} and u_0, the solution u_n of (15) tends to zero as n → ∞ if and only if

|b̃_1| < 1,  b̃_0 < 1 − b̃_1.  (21)

Lemma 3. [9] If |x| ≤ 1, then φ(x) ≥ 1/2 for x > 0 and φ(x) ≥ 1 for x ≤ 0.

Lemma 4. [9] If |x| ≤ 1, then φ(x) ≤ x/(e^x − 1) when n is even, and φ(x) ≥ x/(e^x − 1) when n is odd.

In the rest of this section we assume M > |a|, which implies |x| < 1 for the stepsize h = 1/m with m ≥ M. We now discuss the stability of the Euler-Maclaurin methods. We introduce the set H consisting of all points (a, a_0, a_1) ∈ R^3 that satisfy condition (4), i.e.,

H = {(a, a_0, a_1) : |a_1| < a/(e^a − 1), a_1 − a(e^a + 1)/(e^a − 1) < a_0 < −a − a_1},

and divide the region H into three parts:

H_0 = {(a, a_0, a_1) ∈ H : a = 0}, H_1 = {(a, a_0, a_1) ∈ H : a < 0}, H_2 = {(a, a_0, a_1) ∈ H : a > 0}.

Definition 2. The set S of all points (a, a_0, a_1) at which the process (9) for Eq. (2) is asymptotically stable is called the asymptotic stability region. Similarly we denote

S_0 = {(a, a_0, a_1) ∈ S : a = 0}, S_1 = {(a, a_0, a_1) ∈ S : a < 0}, S_2 = {(a, a_0, a_1) ∈ S : a > 0}.

In the following we investigate which conditions lead to H ⊆ S.
Theorem 5. For the Euler-Maclaurin method, we have H ⊆ S if and only if n is odd.

Proof. (i) a = 0. In this case (20) reduces to λ^2 − (1 + a_0)λ − a_1 = 0, which is the same as (3) with a = 0; hence H_0 = S_0.
(ii) a ≠ 0. The set H is contained in S if and only if a/(e^a − 1) ≤ a/(α^m(x) − 1), which is equivalent to

α(x) ≥ e^x, a < 0,  (22)
α(x) ≤ e^x, a > 0.  (23)

Since α(x) = 1 + x/φ(x), both (22) and (23) reduce to φ(x) ≥ x/(e^x − 1), which holds if and only if n is odd, according to Lemma 4.
5  Numerical Experiments
In this section we give a numerical experiment to illustrate the conclusions of the paper. We consider the system

u′(t) = −9u(t) + 2u([t]) − 5u([t − 1]), t ≥ 0,  u(0) = 1, u(−1) = 1.

From condition (4) it is easy to see that (−9, 2, −5) ∈ H, and the analytic solution satisfies u(10) ≈ 0.0882. In Table 1 we list the absolute errors (AE) and relative errors (RE) at t = 2 of the Euler-Maclaurin method with n = 3, together with the ratio of the errors for m = 20 over those for m = 40; the ratio is close to 2^8, showing that the Euler-Maclaurin method with n = 3 is of order 8. In Fig. 1 we draw the numerical solution of the Euler-Maclaurin method, in agreement with Theorem 5.

Table 1. Euler-Maclaurin method with n = 3
m      AE          RE
3      7.7253E-6   8.7622E-5
6      2.0387E-8   2.3124E-7
12     7.2321E-11  8.2029E-10
20     1.1897E-12  1.3494E-11
40     4.6213E-15  5.2416E-14
Ratio  257.4384
Fig. 1. The Euler-Maclaurin method with h = 0.025 for (15)
References
1. Wiener, J.: Differential equations with piecewise constant delays. In: Lakshmikantham, V. (ed.) Trends in the Theory and Practice of Nonlinear Differential Equations, pp. 547–552. Marcel Dekker, New York (1983)
2. Wiener, J.: Pointwise initial-value problems for functional differential equations. In: Knowles, I.W., Lewis, R.T. (eds.) Differential Equations, pp. 571–580. North-Holland, New York (1984)
3. Cooke, K.L., Wiener, J.: Retarded differential equations with piecewise constant delays. J. Math. Anal. Appl. 99, 265–297 (1984)
4. Shah, S.M., Wiener, J.: Advanced differential equations with piecewise constant argument deviations. Int. J. Math. Math. Sci. 6, 671–703 (1983)
5. Wiener, J.: Generalized Solutions of Functional Differential Equations. World Scientific, Singapore (1993)
6. Liu, M.Z., Song, M.H., Yang, Z.W.: Stability of Runge-Kutta methods in the numerical solution of equation u′(t) = au(t) + a_0 u([t]). Journal of Computational and Applied Mathematics 166, 361–370 (2004)
7. Yang, Z.W., Liu, M.Z., Song, M.H.: Stability of Runge-Kutta methods in the numerical solution of equation u′(t) = au(t) + a_0 u([t]) + a_1 u([t−1]). Applied Mathematics and Computation 162, 37–50 (2005)
8. Song, M.H., Yang, Z.W., Liu, M.Z.: Stability of θ-methods for advanced differential equations with piecewise continuous arguments. Computers and Mathematics with Applications 49, 1295–1301 (2005)
9. Lv, W.J., Yang, Z.W., Liu, M.Z.: Stability of the Euler-Maclaurin methods for neutral differential equations with piecewise continuous arguments. Applied Mathematics and Computation 106, 1480–1487 (2007)
10. Győri, I., Ladas, G.: Oscillation Theory of Delay Differential Equations with Applications. Oxford University Press, Oxford (1991)
11. Gopalsamy, K.: Stability and Oscillation in Delay Differential Equations of Population Dynamics. Kluwer Academic Publishers, Dordrecht (1992)
12. Busenberg, S., Cooke, K.L.: Models of vertically transmitted diseases with sequential-continuous dynamics. In: Lakshmikantham, V. (ed.) Nonlinear Phenomena in Mathematical Sciences, pp. 179–187. Academic Press, New York (1982)
13. Miller, J.J.H.: On the location of zeros of certain classes of polynomials with applications to numerical analysis. J. Inst. Math. Appl. 8, 671–703 (1983)
14. Dahlquist, G., Björck, Å.: Numerical Methods. Prentice-Hall Series in Automatic Computation. Prentice-Hall, Englewood Cliffs (1974) (translated from the Swedish by Ned Anderson)
15. Stoer, J., Bulirsch, R.: Introduction to Numerical Analysis, 2nd edn. Texts in Applied Mathematics. Springer, New York (1993) (translated from the German by R. Bartels, W. Gautschi, C. Witzgall)
Algorithm for Solving the Complex Matrix Equation AX̄ − XB = C
Sen Yu, Wei Cheng, and Lianggui Feng
Department of Mathematics, National University of Defense Technology, Changsha 410073, Hunan, P.R. China
{nudtyusen,ch2tong}@hotmail.com,
[email protected]
Abstract. The complex matrix equation AX̄ − XB = C is of interest in simplifying matrix representations of semilinear transformations arising from quantum mechanics. However, no complete algorithm for solving it has been available so far. In this paper, we provide a complete algorithm to compute solutions of the complex matrix equation AX̄ − XB = C, and give an example. The results of this paper manifest the advantage of quaternion matrix theory in solving some complex matrix problems. Keywords: complex matrix equation, j-conjugate, quaternion matrices.
1  Introduction
In this paper, we mainly study the complex matrix equation

AX̄ − XB = C,  (1)

which is of interest in simplifying matrix representations of semilinear transformations arising from quantum mechanics (see, e.g., the introduction of [1]). The Sylvester matrix equation AX − XB = C over the real or complex field has numerous applications in control theory, signal processing, filtering, model reduction, image restoration and so on (see, e.g., [2–4]). As a generalization, Eq. (1) has also been studied widely by both pure and applied mathematicians (see, e.g., [1, 5–8]). Bevis et al. [5] gave a necessary and sufficient condition for the existence of a solution of Eq. (1) by means of consimilarity, and [1] characterized the consistency and solutions of Eq. (1) and its special cases. Recently, Wu et al. [6] studied Eq. (1) by means of the real representation and gave an expression of the solution when it is unique. In [7, 8], Wu et al. studied similar forms of Eq. (1) by the same method. However, when the solution of Eq. (1) is not unique, the existing solution expressions are not independent, and one can hardly compute the solution from them directly. To the best of the authors' knowledge, no complete algorithm for solving this matrix equation has been available so far.
The authors are supported in part by NCET(NCET06-09-23) and NUDT(JC0802-03).
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 70–77, 2010. c Springer-Verlag Berlin Heidelberg 2010
In this paper, by considering the complex field as a maximal subfield of the quaternion skew-field, and by considering Eq. (1) as the quaternion matrix equation AX̃ − XB = C, where X̃ = −jXj denotes the j-conjugate of X, we give a necessary and sufficient condition for the existence of a solution of Eq. (1), and provide a complete algorithm for computing a solution of this equation. The results of this paper show that treating complex matrices as quaternion matrices provides a powerful tool for solving some complex matrix problems, which exhibits the superiority of quaternion matrix theory in a sense.

Throughout the paper, let R denote the real number field, C the complex field, and Q = C + Cj = R + Ri + Rj + Rk the real quaternion skew-field, where ij = −ji = k, i^2 = j^2 = k^2 = −1. We denote the set of all m × n matrices over a ring Ω with identity by Ω^{m×n}. For A ∈ Q^{m×n}, Ā denotes the conjugate of A, and A^d denotes the derived matrix of A, i.e.,

A^d = [ A_1  A_2 ; −Ā_2  Ā_1 ],

where A = A_1 + A_2 j is the symplectic decomposition of A. As is known, the derived matrix of a quaternion matrix has the following properties: 1. (A + B)^d = A^d + B^d; 2. (AB)^d = A^d B^d for arbitrary A, B with compatible dimensions.

Let Ã denote the j-conjugate of the quaternion matrix A. Clearly, if A, B ∈ Q^{m×n}, the j-conjugate of quaternion matrices has the following properties:

1. (Ã)~ = A;
2. (A + B)~ = Ã + B̃;
3. (AC)~ = ÃC̃, if C ∈ Q^{n×p};
4. (A^{−1})~ = (Ã)^{−1}, if A is invertible;
5. Ã = Ā, if A ∈ C^{m×n}.

The following lemma about the Jordan canonical forms of quaternion matrices is known; it was studied in [10–12].

Lemma 1. [10–12] Let A ∈ Q^{n×n}. Then there exists an invertible matrix P ∈ Q^{n×n} such that

P^{−1}AP = diag(J_{n_1}(λ_1), J_{n_2}(λ_2), …, J_{n_s}(λ_s)) = J,  (2)

where λ_k = a_k + b_k i ∈ C, with a_k, b_k ∈ R and b_k chosen so that b_k ≥ 0, k = 1, 2, …, s, and Σ_{k=1}^s n_k = n. The λ_k (k = 1, 2, …, s) are all right eigenvalues of A, which are not necessarily distinct. J is uniquely determined by A up to the order of the blocks J_{n_k}(λ_k) in Eq. (2), and is called the Jordan canonical form of A corresponding to the maximal subfield C of Q.
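The multiplicativity of the derived matrix, (AB)^d = A^d B^d, can be checked numerically. The sketch below (our illustration; helper names are ours) represents a quaternion matrix by its symplectic pair (A_1, A_2) and uses the rule jz = z̄j for complex z:

```python
import numpy as np

rng = np.random.default_rng(0)

def qmul(A, B):
    # (A1 + A2 j)(B1 + B2 j) = (A1 B1 - A2 conj(B2)) + (A1 B2 + A2 conj(B1)) j
    A1, A2 = A
    B1, B2 = B
    return (A1 @ B1 - A2 @ B2.conj(), A1 @ B2 + A2 @ B1.conj())

def derived(A):
    # derived matrix of A = A1 + A2 j
    A1, A2 = A
    return np.block([[A1, A2], [-A2.conj(), A1.conj()]])

def rand_q(n):
    c = lambda: rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (c(), c())

A, B = rand_q(3), rand_q(3)
lhs = derived(qmul(A, B))
rhs = derived(A) @ derived(B)
print(np.allclose(lhs, rhs))
```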
2  Algorithm
Lemma 2 (Roth's Theorem). Let A ∈ C^{m×m}, B ∈ C^{n×n} and C ∈ C^{m×n} be given. There is some X ∈ C^{m×n} such that AX − XB = C if and only if

[ A  C ]   [ A  0 ]
[ 0  B ] ∼ [ 0  B ].

Lemma 3. [13] The quaternion matrix equation AX − XB = C has a solution if and only if the equation A^d X − X B^d = C^d has a complex matrix solution, where A, B and C are known quaternion matrices of suitable dimensions and X is unknown.

Theorem 1. Suppose A ∈ C^{m×m}, B ∈ C^{n×n} and C ∈ C^{m×n} are given. Then the following are equivalent:

1. Eq. (1) is consistent;
2. the similarity

[ 0   A  0   C ]   [ 0   A  0   0 ]
[ −Ā  0  −C̄  0 ] ∼ [ −Ā  0  0   0 ]
[ 0   0  0   B ]   [ 0   0  0   B ]
[ 0   0  −B̄  0 ]   [ 0   0  −B̄  0 ]

holds.

Proof. The complex matrix equation AX̄ − XB = C is also a quaternion matrix equation. Since X is an unknown complex matrix, X̄ = X̃ = −jXj. Then

AX̄ − XB = C  (3)
⟺ AX̃ − XB = C  (4)
⟺ AjX − XBj = Cj.  (5)

By Lemma 3, Eq. (5) has solutions if and only if the complex matrix equation (Aj)^d X − X(Bj)^d = (Cj)^d is consistent over the complex field. Since A, B and C are complex matrices,

(Aj)^d = [ 0  A ; −Ā  0 ], (Bj)^d = [ 0  B ; −B̄  0 ], (Cj)^d = [ 0  C ; −C̄  0 ].

By Lemma 2, (Aj)^d X − X(Bj)^d = (Cj)^d is consistent if and only if

[ (Aj)^d  (Cj)^d ]   [ (Aj)^d  0       ]
[ 0       (Bj)^d ] ∼ [ 0       (Bj)^d ],

which is exactly the similarity stated above. The proof is completed.
Theorem 2. Suppose A ∈ C^{m×m}, B ∈ C^{n×n} and C ∈ C^{m×n} are given. Then the following are equivalent:

1. Eq. (1) has a unique solution;
2. [ 0  A ; −Ā  0 ] and [ 0  B ; −B̄  0 ] have no common complex eigenvalue.

Proof. Since the complex matrix equation AX̄ − XB = C is also the quaternion matrix equation AX̃ − XB = C, it is equivalent to the equation AjX − XBj = Cj. Let σ_r(Aj) and σ_r(Bj) denote the sets of right eigenvalues of Aj and Bj, respectively. By Corollary 4 in Huang [13], the equation AjX − XBj = Cj has a unique solution if and only if σ_r(Aj) ∩ σ_r(Bj) = ∅. Let σ((Aj)^d) and σ((Bj)^d) denote the sets of complex eigenvalues of (Aj)^d and (Bj)^d, respectively. Then σ_r(Aj) ∩ σ_r(Bj) = ∅ is equivalent to σ((Aj)^d) ∩ σ((Bj)^d) = ∅. The proof is completed.

Now we give an algorithm to compute solutions of Eq. (1).

Algorithm
Step 1: Input A, B and C.
Step 2: Calculate Aj, Bj and Cj, and

M_1 = [ 0  A  0  C ; −Ā  0  −C̄  0 ; 0  0  0  B ; 0  0  −B̄  0 ],  M_2 = [ 0  A  0  0 ; −Ā  0  0  0 ; 0  0  0  B ; 0  0  −B̄  0 ].

Step 3: Compute the Jordan canonical forms of M_1 and M_2. If they are equal, go to Step 4; otherwise stop and output: "DO NOT HAVE SOLUTION".
Step 4: Compute the Jordan decompositions of Aj and Bj using the algorithm given in [14], obtaining complex matrices J_{Aj} and J_{Bj} and quaternion matrices Q and P such that Aj = Q J_{Aj} Q^{−1} and Bj = P J_{Bj} P^{−1}.
Step 5: Calculate E = Q^{−1}(Cj)P and the complex matrices E_1, E_2 such that E = E_1 + E_2 j.
Step 6: Using the Kronecker product and the stretching function vec, or using the method of [15], solve the complex matrix equations in U and V:

J_{Aj} U − U J_{Bj} = E_1,  (6)
J_{Aj} V − V J̄_{Bj} = E_2,  (7)

to obtain U and V.
Step 7: Let W = Q(U + Vj)P^{−1}, and rewrite W as W = X + Yj (X, Y ∈ C^{n×n}). Output X.

Example 1. Compute the complex solutions of the matrix equation

[ i  0 ; 1  1−i ] X̄ − X [ 2  0 ; 0  2i ] = [ −2+i  3 ; 1  2−5i ].  (8)
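Step 6 reduces to ordinary Sylvester equations over C, which the vec/Kronecker identity vec(AU − UB) = (I ⊗ A − Bᵀ ⊗ I)vec(U) turns into a linear system (column-major vec). A minimal sketch (our function name and test data; the data mimics J_{Aj}, J_{Bj} and E_1 from Example 1):

```python
import numpy as np

def solve_sylvester(A, B, C):
    """Solve AU - UB = C via (I kron A - B^T kron I) vec(U) = vec(C)."""
    m, n = C.shape
    K = np.kron(np.eye(n), A) - np.kron(B.T, np.eye(m))
    u = np.linalg.solve(K, C.flatten(order="F"))
    return u.reshape((m, n), order="F")

# spectra of A and B are disjoint, so the solution is unique
A = np.diag([1j, 1.4142j])
B = np.diag([2j, 2j])
C = np.array([[1.0, 0.5 + 0.5j], [1.0251 - 0.11092j, 0.48961 + 0.64644j]])
U = solve_sylvester(A, B, C)
print(np.allclose(A @ U - U @ B, C))
```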
Step 1: Let A = [ i  0 ; 1  1−i ], B = [ 2  0 ; 0  2i ], and C = [ −2+i  3 ; 1  2−5i ].
Step 2: Aj = [ k  0 ; j  j−k ], Bj = [ 2j  0 ; 0  2k ], Cj = [ −2j+k  3j ; j  2j−5k ], and

M_1 =
[ 0   0     i  0    0    0      −2+i  3    ]
[ 0   0     1  1−i  0    0      1     2−5i ]
[ i   0     0  0    2+i  −3     0     0    ]
[ −1  −1−i  0  0    −1   −2−5i  0     0    ]
[ 0   0     0  0    0    0      2     0    ]
[ 0   0     0  0    0    0      0     2i   ]
[ 0   0     0  0    −2   0      0     0    ]
[ 0   0     0  0    0    2i     0     0    ],

while M_2 is the same matrix with the C-blocks replaced by zeros.
Step 3: Compute J_{M_1} and J_{M_2}:

J_{M_1} = J_{M_2} = diag(i, −i, 2i, −2i, −1.4142i, 1.4142i, 2i, −2i).

Thus J_{M_1} = J_{M_2}.
Step 4:

(Aj)^d = [ 0  0  i  0 ; 0  0  1  1−i ; i  0  0  0 ; −1  −1−i  0  0 ],  (Bj)^d = [ 0  0  2  0 ; 0  0  0  2i ; −2  0  0  0 ; 0  2i  0  0 ].

Then J_{(Aj)^d} = diag(i, −i, 1.4142i, −1.4142i) and J_{(Bj)^d} = diag(2i, −2i, 2i, −2i). By Lemma 1, the Jordan canonical forms of Aj and Bj are respectively

J_{Aj} = [ i  0 ; 0  1.4142i ]  and  J_{Bj} = [ 2i  0 ; 0  2i ].

Using the algorithm given in [14], the Jordan decomposition of Aj is Aj = Q J_{Aj} Q^{−1}, where

Q = [ 1−j  0 ; −1−2i+j−2k  1−i+1.4142k ],
Q^{−1} = [ 0.5+0.5j  0 ; 0.75−0.25i+0.70711j−0.35355k  0.25+0.25i−0.35355k ];

the Jordan decomposition of Bj is Bj = P J_{Bj} P^{−1}, where

P = [ 1+i+j+k  −1−j ; 1−j  1−i+j+k ],  P^{−1} = (1/2)[ 1−i−j−k  −1−j ; i+j  1−i+j−k ].
Step 5: Since E = Q^{−1}(Cj)P = E_1 + E_2 j, with

E = [ 1+3j+3k  0.5+0.5i+4.5j−1.5k ; e_{21}  e_{22} ],

where e_{21} = 1.0251 − 0.11092i + 9.8033j + 1.3536k and e_{22} = 0.48961 + 0.64644i + 7.8891j − 9.8033k, we obtain

E_1 = [ 1  0.5+0.5i ; 1.0251−0.11092i  0.48961+0.64644i ],
E_2 = [ 3+3i  4.5−1.5i ; 9.8033+1.3536i  7.8891−9.8033i ].
V =
1−i −0.5 − 1.5i 0.3965 − 2.8713i −2.8713 − 2.3107i
.
Step 7: W = Q(U + Vj)P^{-1} = \begin{pmatrix} 1 & i\\ w_3 & w_4\end{pmatrix}, where w_3 = 10^{-6}(−4.795 − 4.5521i + 6.60904j + 0.17178k) and w_4 = 1 + 0.99999i + 10^{-6}(6.953j − 6.6094k). Writing W = X + Yj (X, Y ∈ C^{2×2}), we obtain
$$X=\begin{pmatrix} 1 & i\\ 10^{-6}(-4.795-4.5521i) & 1+0.99999i\end{pmatrix}.$$
The element 10^{-6}(−4.795 − 4.5521i) approximates 0 and 1 + 0.99999i approximates 1 + i, so we get
$$X=\begin{pmatrix} 1 & i\\ 0 & 1+i\end{pmatrix}.$$
One can verify that this X is the solution of Eq. (8). Furthermore, from Step 4 we know that (Aj)_d and (Bj)_d have no common complex eigenvalue; thus X is the unique solution of Eq. (8).
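A quick numerical sanity check of the worked example: the overline on the unknown is lost in the typesetting above, but the computed data are consistent with the consimilarity-type equation A X̄ − X B = C studied in [1, 5]. Assuming that reading and the matrices reconstructed in Step 1, the solution X found in Step 7 checks out exactly:

```python
def mat_mul(P, Q):
    # plain 2x2 complex matrix product
    return [[sum(P[r][t] * Q[t][c] for t in range(2)) for c in range(2)]
            for r in range(2)]

A = [[1j, 0], [1, 1 - 1j]]
B = [[2, 0], [0, 2j]]
C = [[-2 + 1j, 3], [1, 2 - 5j]]
X = [[1, 1j], [0, 1 + 1j]]                      # the solution found in Step 7
Xbar = [[v.conjugate() for v in row] for row in X]

AXbar = mat_mul(A, Xbar)
XB = mat_mul(X, B)
# residual of A*conj(X) - X*B - C; all entries come out exactly zero
residual = [[AXbar[r][c] - XB[r][c] - C[r][c] for c in range(2)] for r in range(2)]
print(max(abs(v) for row in residual for v in row))   # 0.0
```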
3 Conclusion
In mathematics, the quaternions are a number system that extends the complex numbers. Many results over the complex field can be generalized to the quaternion field and, conversely, any result over the quaternion field can be restricted to the complex field as a special case. In this work, we treated the complex matrix equation Eq. (1) as a quaternion matrix equation and solved it completely. With the help of the quaternion toolbox designed by S. Sangwine, which can be downloaded freely from the Internet [16], the authors have implemented the above algorithm in Matlab 7.10.
References 1. Bevis, J.H., Hall, F.J., Hartwig, R.E.: The matrix equation AX̄ − XB = C and its special cases. SIAM J. Matrix Anal. Appl. 19, 348–359 (1988) 2. Truhar, N., Tomljanović, Z., Li, R.C.: Analysis of the solution of the Sylvester equation using low-rank ADI with exact shifts. Syst. Control Lett. 59, 248–257 (2010) 3. Chen, C., Schonfeld, D.: Pose estimation from multiple cameras based on Sylvester’s equation. Comput. Vis. Image Und. 114, 652–666 (2010) 4. Halabi, S., Souley Ali, H., Rafaralahy, H., Zasadzinski, M.: H∞ functional filtering for stochastic bilinear systems with multiplicative noises. Automatica 45, 1038–1045 (2010) 5. Bevis, J.H., Hall, F.J., Hartwig, R.E.: Consimilarity and the matrix equation AX̄ − XB = C. In: Uhlig, F., Grone, R. (eds.) Current Trends in Matrix Theory, pp. 51–64. North-Holland, Amsterdam (1987) 6. Wu, A.G., Duan, G.R., Yu, H.H.: On solutions of the matrix equations XF − AX = C and XF − AX̄ = C. Appl. Math. Comput. 183, 932–941 (2006) 7. Wu, A.G., Fu, Y.M., Duan, G.R.: On solutions of matrix equations V − AVF = BW and V − AV̄F = BW. Math. Comput. Model. 47, 1181–1197 (2008) 8. Wu, A.G., Wang, H.Q., Duan, G.R.: On matrix equations X − AXF = C and X − AX̄F = C. Appl. Math. Comput. 230, 690–698 (2009)
9. Huang, L.P.: Consimilarity of quaternion matrices and complex matrices. Linear Algebra Appl. 331, 21–30 (2001) 10. Wiegmann, N.A.: Some theorems on matrices with real quaternion elements. Canad. J. Math. 7, 191–201 (1955) 11. Zhang, F.Z., Wei, Y.: Jordan canonical form of a partitioned complex matrix and its application to real quaternion matrices. Comm. Algebra 29, 2363–2375 (2001) 12. Huang, L.P.: Jordan canonical form of a matrix over the quaternion field. Northeast. Math. J. 10, 18–24 (1994) 13. Huang, L.P.: The matrix equation AXB − GXD = E over the quaternion field. Linear Algebra Appl. 234, 197–208 (1996) 14. Feng, L.G., Cheng, W.: Algorithm for solving the transition matrix of Jordan canonical form over the quaternion skew-field (submitted for publication) 15. Horn, R., Johnson, C.: Topics in Matrix Analysis. Cambridge University Press, Cambridge (1994) 16. Quaternion Toolbox for Matlab, [online], software library, written by Stephen Sangwine and Nicolas Le Bihan, http://qtfm.sourceforge.net/
Research and Application of Fuzzy Comprehensive Evaluation of the Optimal Weight Inverse Problem* Lihong Li1, Junna Jiang1, Zhendong Li2, and Xufang Mu1 1
College of Science, Hebei Polytechnic University, Tangshan, Hebei 063009, China 2 Basic Department, Tangshan College, Tangshan 063000, China
[email protected]
Abstract. For the fuzzy comprehensive evaluation problem, we use Tsukamoto's method to find the weight solution sets of the fuzzy evaluation inverse problem, normalize the multiple weight solutions given by the actual conditions, and then optimize them by the method of lattice close-degree to obtain the optimal weight solution; this method is then used for decision making in an enterprise management system. An example in which the weight-optimizing fuzzy evaluation method is applied to business management decision making shows that the method is reasonable and effective. Keywords: Fuzzy comprehensive evaluation, optimal solution, fuzzy weights, close relationship equation.
1 Introduction For the fuzzy comprehensive evaluation problem [1], the selection of the weight set is important, since it directly affects the accuracy of the evaluation results. Until now there has been no unified mathematical method for determining the weight set in fuzzy comprehensive evaluation; usually people determine it from actual needs by subjective experience. There are many ways to determine the set of weights [2, 3, 4]: the directly-given method [5, 6], the importance sorting method [7, 8], AHP [9, 10], fuzzy intervals [11, 12, 13], and so on, but all of these methods have one thing in common: they are strongly subjective and require a large amount of calculation. Therefore, this paper presents a new method to determine and optimize the set of weights for which the "ideal rating" is the closest target, and applies it to management.
2 Tsukamoto Solution of Fuzzy Relation Equations
Definition 1 [14]. Let U, V, W be given universes of discourse, and let R̃ and S̃ be fuzzy relations from U to V and from U to W, respectively. If an unknown fuzzy relation X̃ ∈ F(V × W) satisfies the equation
$$\tilde{R}\circ\tilde{X}=\tilde{S}, \quad (1)$$
Supported by Scientific Research Guiding Plan Project of Tangshan in Hebei Province (No. 09130205a).
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 78–84, 2010. © Springer-Verlag Berlin Heidelberg 2010
then the equation is called a fuzzy relational equation. The fuzzy equation corresponds to the system of fuzzy linear equations:
$$\begin{cases}(r_{11}\wedge x_1)\vee(r_{12}\wedge x_2)\vee\cdots\vee(r_{1n}\wedge x_n)=s_1\\ (r_{21}\wedge x_1)\vee(r_{22}\wedge x_2)\vee\cdots\vee(r_{2n}\wedge x_n)=s_2\\ \qquad\cdots\\ (r_{m1}\wedge x_1)\vee(r_{m2}\wedge x_2)\vee\cdots\vee(r_{mn}\wedge x_n)=s_m\end{cases} \quad (2)$$
Theorem 1 [14]. Let R = [r_{ij}] ∈ [0,1]^{m×n} and S = [s_i] ∈ [0,1]^{m×1}. Then the necessary and sufficient condition for X = (x_1, x_2, …, x_n)′ to be a solution of R̃ ∘ X̃ = S̃ is: ∀i, j, r_{ij} ∧ x_j ≤ s_i, and ∀i, ∃j_0 s.t. r_{ij_0} ∧ x_{j_0} = s_i.
By the theorem, solving the fuzzy relation equation reduces to the simple problems of solving r ∧ x = s and r ∧ x ≤ s. The solution of r ∧ x = s is easy to obtain:
$$x\in\begin{cases}\{s\}, & r>s\\ [s,1], & r=s\\ \varnothing, & r<s\end{cases} \quad (3)$$
And
$$r\wedge x\le s:\quad x\in\begin{cases}[0,s], & r>s\\ [0,1], & r\le s\end{cases} \quad (4)$$
To express the solutions of r ∧ x = s and r ∧ x ≤ s simply, introduce the following operators for all a, b ∈ [0,1]:
$$b\,\varepsilon\,a=\begin{cases}\{b\}, & a>b\\ [b,1], & a=b\\ \varnothing, & a<b\end{cases},\qquad b\,\check{\varepsilon}\,a=\begin{cases}[0,b], & a>b\\ [0,1], & a\le b\end{cases}. \quad (5)$$
Then the solution of r ∧ x = s is x ∈ sεr, and that of r ∧ x ≤ s is x ∈ s ε̌ r. For simplicity, the singleton {b} is written as b. For r_1, r_2, …, r_n, s ∈ [0,1], introduce the interval vectors
$$\tilde{Y}=(s\varepsilon r_1,\ s\varepsilon r_2,\ \ldots,\ s\varepsilon r_n),\qquad \check{Y}=(s\check{\varepsilon} r_1,\ s\check{\varepsilon} r_2,\ \ldots,\ s\check{\varepsilon} r_n). \quad (6)$$
Theorem 2 [14]. The necessary and sufficient condition for the fuzzy relation equation
$$(r_1\wedge x_1)\vee\cdots\vee(r_n\wedge x_n)=s \quad (7)$$
to have a solution is that there exists i_0 ∈ {1, 2, …, n} s.t. sεr_{i_0} ≠ ∅; at this point the solution set is W_1 ∪ W_2 ∪ … ∪ W_N.
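The case analysis in (3)–(5) is easy to mechanize. In the sketch below an interval [lo, hi] is represented by a (lo, hi) pair and the empty set by None — a representation of our choosing, not from the paper:

```python
def eps(b, a):
    """b epsilon a: the solution set of a ∧ x = b, per (3)/(5)."""
    if a > b:
        return (b, b)          # the singleton {b}
    if a == b:
        return (b, 1.0)        # the interval [b, 1]
    return None                # empty: no solution

def eps_check(b, a):
    """b epsilon-check a: the solution set of a ∧ x <= b, per (4)/(5)."""
    return (0.0, b) if a > b else (0.0, 1.0)

# a few entries of the worked example in Section 4:
print(eps(0.5, 0.7), eps(0.4, 0.4), eps(0.4, 0.1))
```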
For the general case, the equation is
$$\begin{pmatrix} r_{11} & \cdots & r_{1n}\\ \vdots & & \vdots\\ r_{m1} & \cdots & r_{mn}\end{pmatrix}\begin{pmatrix} x_1\\ x_2\\ \vdots\\ x_n\end{pmatrix}=\begin{pmatrix} s_1\\ s_2\\ \vdots\\ s_m\end{pmatrix}. \quad (8)$$
It is easy to see that its solution set is the intersection of the solution sets of the following m equations:
$$(r_{i1}\wedge x_1)\vee(r_{i2}\wedge x_2)\vee\cdots\vee(r_{in}\wedge x_n)=s_i,\quad i=1,2,\ldots,m. \quad (9)$$
Let
⎡ s1εr11 ⎢ s εr 2 21 Y =⎢ ⎢ ⎢ ⎢⎣ s mεrm1
s1εr11 s2εr21
s1εr11 ⎤ ⎡ s1ε r11 ⎢ s εr s 2εr21 ⎥⎥ 2 21 ,Y = ⎢ ⎥ ⎢ ⎥ ⎢ smεrm1 ⎥⎦ ⎢⎣ s mε rm1
smεrm1 ٛ
s1ε r11 s 2ε r21 s m ε rm1
s1ε r11 ⎤ s 2ε r21 ⎥⎥ ⎥ ⎥ s m ε rm1 ⎥⎦
(10)
Then Y set-valued matrix Y is interval-valued fuzzy matrix, for if si ε riq (i ) ≠ ∅, then ٛ
let si εٛriq ( i ) ∈ Y change to si ε riq ( i ) ∈ Y , and get the following interval fuzzy matrix
Wq (1),q ( 2 ),
q(m)
(Wi j (q(1), q (2),
q(m)) m×n ,
(11)
Where
⎧⎪si εri q ( i ) , j = 1,2, m q (m)) = ⎨ , (12) ⎪⎩si ε ri q ( i ) , j ≠ 1,2, n Theorem 3 [14]. The necessary and sufficient condition for existing solutions equation (7) is U X q (1),q ( 2 ), q ( m ) , q (1),q ( 2), q ( m )∈{1, 2, , n} (13) ∀i ∈ {k = 1,2, , m}, ∃qi ∈ {1,2, , n} s.t si εriq (i ) ≠ φ , wi j ((q (1), q (2),
where X q (1),q ( 2 ), m
q(m)
(∩ Wi 1 (q (1), q ( 2), i =1
is m
q (m)), ∩ Wi 2 (q (1), q( 2), i =1
m
, ∩ Wi n ( q(1), q ( 2),
q( m)),
i =1
q ( m)), )′
(14)
3 Optimization of the Weight Lattice Close Degree Step1 Let the target set for some target is E = {e1 , e2 ,
en }
(15)
.
The comprehensive category is P = { p1 , p2 ,
pn }
(16)
,
and comprehensive determination of all the weight distribution of class is W = {w p1 , w p 2 ,
wp n }
.
(17)
Research and Application of Fuzzy Comprehensive Evaluation
81
Step 2 Calculate the membership of indicators status and integrated decision class status Bej , Rij (i = 1,2, m; j = 1,2, n) , then get the matrix membership: B = {Be1 , Be 2 ,
Be n }; R = ( Ri j ) m×n
,
(18)
R~ = B~ find the solution set of the
Step 3 By the fuzzy relation equations W ~
equation (k )
(k )
{w p1 , w p12 ,
(k)
w pm }, (k = 1,2,
, q ).
(19) , Step 4 Let the weight solutions selected by experts in the weight solution set be normalized, then get the weight J = { A1 , A2 ,
Am }
,
(20)
Step 5 We need to select an optimal weight distribution Ak in, and make the integrated decision Bk = Ak R determined by Ak and the "ideal rating" B is the closest. Step 6 According to the principle of selecting and proximity. If ( Ak R, B) = max ( Aj R, B ) , then we think Ak is the best weight program in J, that is 1≤ j ≤ s
the optimal weight solution.
4 Case Studies Let garment enterprise management be an example [12, 13]. Consumers’ satisfaction for clothing is affected by many factors, such as clothing styles, fabrics, workmanship, color, these satisfaction directly affect the buying behavior of implementing or not. Therefore, the garment enterprises should first conduct a market survey before adjust the production structure of garment products. When making sales forecasts, we need to know degree of customer welcome for some clothing. Factor set: U={ color pattern, wear level, the price cost }. Reviews set: V={ welcome, more welcome, not welcome, not welcome }. To get the fuzzy relation matrix from U to V, random sampling method can be used to form a representative from the 100 who attended a variety of rating group, let 100 people separate the various elements of the goods according to proceed the evaluation of all reviews. For example, consider a single pattern on the suit, if 20% of the people are very welcome, there is 70% higher than welcome, 10% of people do not welcome, can get flower pattern→ (0.2,0.7,0.1,0) . Fuzzy mapping of U to V f : U → f (V ) . Similar to other factors for the single factor evaluation. To get U to V a fuzzy mapping f : U → f (V ) Color pattern (0.2,0.7,0.1,0) , wear level (0,0.7,0.1,0) → (0,0.4,0.5,0.1) , price cost → (0.2,0.3,0.4,0.1) .
82
L. Li et al.
Consider above single factor decision, induced fuzzy relations R = R f
⎛ 0.2 0.7 0.1 0 ⎞ ⎜ ⎟ R = ⎜ 0 0.4 0.5 0.1⎟ ⎜ 0.2 0.3 0.4 0.1⎟ ⎝ ⎠
.
After careful investigation and business studies, First determine the "ideal rating" is B = (0.2,0.5,0.4,0.1) . Try to find the weight solution set from R to B for relationship equation A R = B , and choose the best solution set from the weight distribution program. Solution: First solve the relationship equation; find all the weight solution set. Let the weight is X = ( x1 , x2 xn ) , then we get R~
X =S,
(21)
X =S,
(22)
~
~
Change the equation into general form R~
~
~
That is ⎡0.2 0 0.2⎤ ⎡0.2⎤ ⎢0.7 0.4 0.3⎥ ⎡ x1 ⎤ ⎢0.5⎥ ⎢ ⎥ ⎢x ⎥ = ⎢ ⎥ ⎢ 0.1 0.5 0.4⎥ ⎢ 2 ⎥ ⎢0.4⎥ ⎢ ⎥ ⎢⎣ x3 ⎥⎦ ⎢ ⎥ ⎣ 0 0.1 0.1⎦ ⎣ 0.1⎦ , The original equation is equivalent to equations
(23)
⎧ ( 0.2 ∧ x1 ) ∨ ( 0 ∧ x 2 ) ∨ ( 0.2 ∧ x 3 ) = 0.2 ⎪ ⎪( 0.7 ∧ x1 ) ∨ ( 0.4 ∧ x 2 ) ∨ ( 0.3 ∧ x 3 ) = 0.5 ⎨ ⎪ ( 0.1 ∧ x1 ) ∨ ( 0.5 ∧ x 2 ) ∨ ( 0.4 ∧ x 3 ) = 0.4 ⎪ ( 0 ∧ x1 ) ∨ ( 0.1 ∧ x 2 ) ∨ ( 0.1 ∧ x 3 ) = 0.1 ⎩ ,
(24)
We do
⎛ 0.2 ε 0.2 ⎜ 0.5ε 0.7 [ H ] = ⎜⎜ 0.4 ε 0.1 ⎜ ⎝ 0.1ε 0 0.2 εٛ0.2
⎛ ⎜ 0.5εٛ0.7 ٛ ⎡⎣ H ⎤⎦ = ⎜ ⎜ 0.4 εٛ0.1 ⎜ ٛ ⎝ 0.1ε 0 where I = [0.1] .
0.2ε 0 0.5ε 0.4 0.4ε 0.5 0.1ε 0 0.2εٛ0 0.5εٛ0.4 0.4εٛ0.5 0.1εٛ0
0.2ε 0.2 ⎞ ⎛ I ∅ I ⎞ ⎟ ⎜ ⎟ 0.5ε 0.3 ⎟ ⎜ 0.5 ∅ ∅ ⎟ = 0.4ε 0.4 ⎟ ⎜ ∅ 0.4 I ⎟ ⎟ ⎜ ⎟ 0.1ε 0 ⎠ ⎝ ∅ I I ⎠ 0.2εٛ0.2 ⎞ ⎛ I I ⎟ ⎜ ٛ 0.5ε 0.3 ⎟ ⎜ [ 0, 0.5 ] I = 0.4εٛ0.4 ⎟ ⎜ I 0, [ 0.4 ] ⎟ ⎜ ٛ 0.1ε 0 ⎠ ⎝ I I
I⎞ ⎟ I⎟ I⎟ ⎟ I⎠ ,
(25)
Research and Application of Fuzzy Comprehensive Evaluation
83
From [H ] we can know that, there are two non-zero elements in the first column, second column has two non-zero elements, there are three non-zero elements of third column, so we get 2 × 2 × 3 = 12 Gijk , but for last column of [H ] and [H ] is same, so
[ ]
except the same one, there are 4 solutions, calculate
[G ]
131 ∩
I ⎡ I ⎢0.5 I =⎢ ⎢ I 0 .4 ⎢ I ⎣ I
I⎤ ⎡[0,0.5]⎤ I ⎥⎥ = ⎢⎢ 0.4 ⎥⎥ I⎥ ⎢ I ⎥⎦ ⎥ I ⎦∩ ⎣
(26)
We get
[ X 111 ] = ⎡⎣[ 0, 0.5] , [ 0, 0.4] , [ 0,1]⎤⎦
T
(27)
.
Finally, find the union for all part of the solution set, and get the weight solution set for the original equation.
[X ] = [X ] ∪ [X ] ∪ [X ]∪ [X ] = [[0,0.5], [0,0.4], [0,1]]
T
131
141
231
241
.
(28)
The above one (26) is the weight solution set. In the case of satisfy the weight, according to the experts’ long experience at the garment industry, given the following three options for the weight distribution: A1 = ( 0.2, 0.5, 0.3) A2 = ( 0.5, 0.3,0.2 ) A3 = ( 0.2, 0.3, 0.5 )
.
(30)
First by the fuzzy evaluation model calculated B1 = A1 R = (0.2,0.4,0.5,0.1)
B2 = A2 R = (0.2,0.5,0.3,0.1)
(31)
B3 = A3 R = (0.2,0.3,0.4,0.1) .
Then use the lattice method of closeness to discuss, calculate Inner product and outer product for B j and B: B1 B = 0.4 B2 B = 0.5B3 B = 0.4
.
(32)
So cell close degree is
( B1 , B ) = 0.4 ∧ (1 − 0.1) = 0.4 ( B2 , B ) = 0.5 ∧ (1 − 0.1) = 0.5 ( B3 , B ) = 0.4 ∧ (1 − 0.1) = 0.4 .
(33)
According to the principle of selecting the near, ( B2 , B ) = 0.5 is Maximum, that to say A2 = (0.5,0.3,0.2) is the best weight distribution program, It shows that when customers select clothing in the clothing store, it will be placed in the choice of color patterns of the most important position, accounting for 50% of the overall importance, and then focus on the wear extent of its importance it is 30%, the final garment the cost price can not be ignored, and the overall importance of 20%.
84
L. Li et al.
5 Conclusion The article proposed the method to determine and optimize the set of weights that "ideal rating" is the closest target, firstly through evaluation of the inverse problem find out the fuzzy weight set, greatly reduced the weight of the range solution set, in this basis, experts selected several groups of the appropriate weight solution, and finally using method of lattice closeness to optimize the final solution of the weight, and then select the optimal weight solution. This fuzzy evaluation method of weight optimization solution can solve the business management decision making. Through applications for the production structure adjustment of garment enterprise, evaluation of results is more realistic, the accuracy of results clearly proves that the methods are reasonable and effective.
References 1. Angui, L.: Fuzzy Mathematics and Its Applications. Metallurgy Industry Press, Beijing (2005) 2. Li, T., Gang, S., Guangxi, Z., Yuanhui, N.: Multiple Weighting Matrices Selection Strategy for Orthogonal Random Beam forming Scheme Mini-micro Systems (2010) 3. Shangguan, Rongyao, F., Hongchuan., L.: An Attribute Weighting K-means Algorithm Based on Synthetic Weight. Computer and Modernization (2010) 4. Hongna, S., Liwen, Y., Xiangjun, L.: New Approaches of Keyword Feature Item Weighting Based on Synonymy Replace and Adjacent Merge. Computer and Modernization (2010) 5. Gang, D.: Management Mathematics Theory and Applications. Tianjin University Press, Tianjin (2002) 6. Hongji, L.: Basic Fuzzy Math and Using the Algorithm. Science Press, Beijing (2005) 7. Jie, S.: Based on PLC fuzzy control variable frequency speed regulation system. Journal of qufu normal university (2006) 8. Min, T.: Adopting field bus control system simulation technology. Kunming University (2006) 9. Andong, S.: Lyophilization lyophilizer system optimization control. Journal of shandong university (2007) 10. An, S.J.: Based on the network question-answering system answer extrac-tion method. Shenyang: Journal of Institute of Aviation Industry (2007) 11. Zhao, L.Z.: Analysis of fuzzy language communication function. Chang Chun: Science and Technology (2007) 12. Ye, F.: Applications and research in the hierarchical fuzzy control based on granular computing of fuzzy sets. Guangdong University Master’s degree thesis (2008) 13. Pawlak, Z.: Rough sets. Intl. Journal of Computer and Information Science (1982) 14. Huae, W.: Market research and analysis. China Statistics Press, Beijing (2000)
q-Extensions of Gauss’ Fifteen Contiguous Relations for 2 F1 -Series Chuanan Wei1 and Dianxuan Gong2, 1
Department of Information Technology, Hainan Medical College, Haikou 571101, China 2 College of Sciences, Hebei Polytechnic University, Tangshan 063009, China
Abstract. Contiguous relation is a fundamental concept within the theories of hypergeometric series and orthogonal polynomials. Gauss’ fifteen contiguous relations imply that any three 2 F1 -series whose corresponding parameters differ by integers are linearly related. As q-extensions of hypergeometric series, basic hypergeometric series are used widely in Statistics and Physics. By the method of comparing coefficients, we establish fifteen interesting three-term relations for 2 φ1 -series. Their limiting cases recover Gauss’ fifteen contiguous relations for 2 F1 -series. Keywords: Basic hypergeometric series; The method of comparing coefficients; Three-term relation for 2 φ1 -series; Gauss’ fifteen contiguous relations for 2 F1 -series.
1
Introduction
For two complex numbers x and q, define q-shifted factorial by (x; q)0 = 1
and (x; q)n =
n−1
(1 − xq k ) for
n = 1, 2, · · · .
k=0
Its fractional form reads as (α; q)n (β; q)n · · · (γ; q)n α, β, · · · , γ . q = A, B, · · · , C (A; q)n (B; q)n · · · (C; q)n n Following Slater [9], the basic hypergeometric series can be defined by r φs
∞ (a1 ; q)k (a2 ; q)k · · · (ar ; q)k z k a1 , a2 , · · · , ar , q; z = b1 , b2 , · · · , bs (b1 ; q)k (b2 ; q)k · · · (bs ; q)k (q; q)k k=0
Project supported by National Nature Science Foundation of China (No.60533060), Educational Commission of Hebei Province of China (No.2009448) and Natural Science Foundation of Hebei Province of China (No.A2010000908). Corresponding author.
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 85–92, 2010. c Springer-Verlag Berlin Heidelberg 2010
86
C. Wei and D. Gong
where {ai } and {bj } are complex parameters such that no zero factors appear in the denominators of the summand on the right hand side. The purpose of this paper is to establish fifteen interesting three-term relations for 2 φ1 -series by the method of comparing coefficients. Their limiting cases recover Gauss’ fifteen contiguous relations for 2 F1 -series.
2
Fifteen Three-Term Relations for 2 φ1 -Series
Fifteen three-term relations for 2 φ1 -series will be established in this section. Their limiting cases recover Gauss’ fifteen contiguous relations for 2 F1 -series. For convergence, we assume |z/q| < 1 for all 2 φ1 -series and |z| < 1 for all 2 F1 -series. Theorem 1 (Three-term relation for 2 φ1 -series) (a − 1)(zab − c)(zab − qc) a, b z qa, b φ q; = φ q; z 2 1 2 1 c c q a(z − q)(za2 − zab + c + qc − ac − qa) q(c − a)(za2 + c − ac − qa) a/q, b + 2 φ1 . (1) q; z c a(z − q)(za2 − zab + c + qc − ac − qa) Proof. In order to show the correctness of (1), it is sufficient to verify the following equation: a, b z φ q; a(z − q)(za2 − zab + c + qc − ac − qa) 2 1 c q qa, b = 2 φ1 q; z (a − 1)(zab − c)(zab − qc) c a/q, b + 2 φ1 (2) q; z q(c − a)(za2 + c − ac − qa). c Firstly, we obtain the relation by comparing the coefficients of z 0 on both sides of (2): −qa(c + qc − ac − qa) = qc2 (a − 1) + q(c − a)(c − ac − qa), whose correctness could be checked easily. Secondly, we have the relation by comparing the coefficients of z on both sides of (2): (1−a)(1−b) 2 a(c + qc − ac − qa − qa + qab) − a(c + qc − ac − qa) (1−q)(1−c) (1−qa)(1−b) = abc(1 − a)(1 + q) + qc2 (a − 1) (1−q)(1−c) (1−a/q)(1−b) 2 + qa (c − a) + q(c − a)(c − ac − qa) , (1−q)(1−c)
whose correctness could also be verified without difficulty.
q-Extensions of Gauss’ Fifteen Contiguous Relations for 2 F1 -Series
87
Thirdly, for n ≥ 2, we get the relation by comparing the coefficients of z n on both sides of (2): a, b a, b a a (qab−qa2 −qa−ac+qc+c) q qn−1 (qa+ac−qc−c) + q qn−1 q, c q, c n n−1 a, b a2 + (a−b) q qn−2 q, c n−2 ⎡ ⎤ qa, b qa, b qa, b ⎦ 2 2 2 ⎣ = abc(1−a)(1+q)+ a b (a−1) q qc (a−1) + q q q, c q, c q, c n n−1 n−2 a/q, b a/q, b + qa2 (c−a) . q q(c−a)(c−ac−qa) + q q, c q, c n n−1 a, b Dividing both sides by q , the last relation reduces to the equation: q, c n
a qn−1
+
(qa+ac−qc−c)+
(1−qn )(1−cqn−1 ) a (1−aqn−1 )(1−bqn−1 ) qn−1
(1−qn−1 )(1−qn )(1−cqn−2 )(1−cqn−1 ) a2 (1−aqn−2 )(1−aqn−1 )(1−bqn−2 )(1−bqn−1 ) qn−2
(qab−qa2 −qa−ac+qc+c)
(a−b)
= qc2 (aq n −1)+
(1−qn )(1−cqn−1 ) (1−qn−1 )(1−qn )(1−cqn−2 )(1−cqn−1 ) 2 2 abc(1+q)+ a b (1−bqn−1 ) (aqn−1 −1)(1−bqn−2 )(1−bqn−1 )
q−a 1−aqn−1
(q−a)(1−qn )(1−cqn−1 ) a2 (c−a) (1−aqn−2 )(1−aqn−1 )(1−bqn−1 )
+
(c−a)(c−ac−qa)+
,
whose correctness could be checked directly. Thus we complete the proof of this theorem. Performing the replacements a → q a , b → q b , c → q c for the equation that appears in Theorem 1 and then letting q → 1, we recover the following relation due to Gauss. Corollary 2 (Contiguous relation for 2 F1 -series: [9, Eq 1.4.1]) a(z−1) c−a a, b a + 1, b a − 1, b + 2 F1 , z = 2 F1 z z 2 F1 c c c c+az−bz−2a c+az−bz−2a where the hypergeometric series (cf. Bailey [2]) has been defined by ∞ (a1 )k (a2 )k · · · (ar )k z k a1 , a2 , · · · , ar z = r Fs b1 , b 2 , · · · , b s (b1 )k (b2 )k · · · (bs )k k! k=0
and the shifted factorial is (x)0 = 1
and
(x)n =
n−1
(x + k)
k=0
for
n = 1, 2, · · · .
88
C. Wei and D. Gong
Remark: Other fourteen three-term relations for 2 φ1 -series could be established in the same method. The limiting process for deriving Corollary 2 could be called “ the limiting case q → 1 of Theorem 1” and this salutation will be used frequently. Theorem 3 (Three-term relation for 2 φ1 -series) (zab − c)(a − 1) a, b qa, b φ q; z = φ q; z 2 1 2 1 c c za2 − zab + qab + c − ac − qa a(q − z)(b − 1) a, qb z + 2 φ1 q; . c q za2 − zab + qab + c − ac − qa The limiting case q → 1 of Theorem 3 recover the relation due to Gauss. Corollary 4 (Contiguous relation for 2 F1 -series: [9, Eq 1.4.2]) a b a, b a + 1, b a, b + 1 + 2 F1 . z = 2 F1 z z 2 F1 c c c a−b b−a Theorem 5 (Three-term relation for 2 φ1 -series) b−c a, b qa, b z b(z − q)(1 − a) a, b/q φ q; z = φ q; + φ q; z . 2 1 2 1 2 1 c c c q q(ab − c) ab − c The limiting case q → 1 of Theorem 5 recover the relation due to Gauss. Corollary 6 (Contiguous relation for 2 F1 -series: [9, Eq 1.4.3]) a(1 − z) b−c a, b a + 1, b a, b − 1 F z = F z + F . z 2 1 2 1 2 1 c c c a+b−c a+b−c Theorem 7 (Three-term relation for 2 φ1 -series) c(q − z)(a − 1) a, b qa, b z q; z = 2 φ1 q; 2 φ1 c c q (zab − zac + qac − qc) z(c − a)(c − b) a, b + 2 φ1 . q; z qc (1 − c)(zab − zac + qac − qc) The limiting case q → 1 of Theorem 7 recover the relation due to Gauss. Corollary 8 (Contiguous relation for 2 F1 -series: [9, Eq 1.4.4]) a(z − 1) z(c − a)(c − b) a, b a + 1, b a, b + 2 F1 . z = 2 F1 z z 2 F1 c c c+1 (cz − bz − a) c(cz − bz − a) Theorem 9 (Three-term relation for 2 φ1 -series) (zab−c)(q−c) a, b qa, b z c(z−q)(a−1) a, b + 2 φ1 . q; z = 2 φ1 q; q; z 2 φ1 c c c/q q (zb−c)(qa−c) (zb−c)(qa−c)
q-Extensions of Gauss’ Fifteen Contiguous Relations for 2 F1 -Series
The limiting case q → 1 of Theorem 9 recover the relation due to Gauss. Corollary 10 (Contiguous relation for 2 F1 -series: [9, Eq 1.4.5]) 1−c a a, b a + 1, b a, b F z = F z + F . z 2 1 2 1 2 1 c c c−1 1+a−c 1+a−c Theorem 11 (Three-term relation for 2 φ1 -series) a−c a, b a, qb z a(z − q)(1 − b) a/q, b φ q; z = φ q; + φ q; z . 2 1 2 1 2 1 c c c q q(ab − c) ab − c The limiting case q → 1 of Theorem 11 recover the relation due to Gauss. Corollary 12 (Contiguous relation for 2 F1 -series: [9, Eq 1.4.6]) b(1 − z) a−c a, b a, b + 1 a − 1, b + 2 F1 . z = 2 F1 z z 2 F1 c c c a+b−c a+b−c Theorem 13 (Three-term relation for 2 φ1 -series) q(c−a) q(b−c) a, b z a/q, b a, b/q = 2 φ1 + 2 φ1 . q; q; z q; z 2 φ1 c c c q (z−q)(a−b) (z−q)(a−b) The limiting case q → 1 of Theorem 13 recover the relation due to Gauss. Corollary 14 (Contiguous relation for 2 F1 -series: [9, Eq 1.4.7]) c−a b−c a, b a − 1, b a, b − 1 F z = F z + F . z 2 1 2 1 2 1 c c c (z−1)(a−b) (z−1)(a−b) Theorem 15 (Three-term relation for 2 φ1 -series) q z(b − c) a, b z a/q, b a, b φ q; = φ q; z + φ q; z . 2 1 2 1 2 1 c c qc q q−z (q − z)(c − 1) The limiting case q → 1 of Theorem 15 recover the relation due to Gauss. Corollary 16 (Contiguous relation for 2 F1 -series: [9, Eq 1.4.8]) 1 z(b − c) a, b a − 1, b a, b + 2 F1 . z = 2 F1 z z 2 F1 c c c+1 1−z (1 − z)c Theorem 17 (Three-term relation for 2 φ1 -series) q(a − c)(qzab + c2 − zac − zbc) a, b z a/q, b = 2 φ1 q; q; z 2 φ1 c c q (q − z)(qza2 b + ac2 − za2 c − qc2 ) (zab − c)(zab − qc)(c − q) a, b + 2 φ1 . q; z c/q (q − z)(qza2 b + ac2 − za2 c − qc2 )
89
90
C. Wei and D. Gong
The limiting case q → 1 of Theorem 17 recover the relation due to Gauss. Corollary 18 (Contiguous relation for 2 F1 -series: [9, Eq 1.4.9]) (z−1)(1−c) a−c a, b a − 1, b a, b F z = F z + F . z 2 1 2 1 2 1 c c c−1 a−1+(1+b−c)z a−1+(1+b−c)z Theorem 19 (Three-term relation for 2 φ1 -series) (b − 1)(zab − c)(zab − qc) a, b z q, qb φ q; = φ q; z 2 1 2 1 c c q b(z − q)(zb2 − zab + c + qc − bc − qb) q(c − b)(zb2 + c − bc − qb) a, b/q + 2 φ1 q; z . c b(z − q)(zb2 − zab + c + qc − bc − qb) The limiting case q → 1 of Theorem 19 recover the relation due to Gauss. Corollary 20 (Contiguous relation for 2 F1 -series: [9, Eq 1.4.10]) b(z−1) c−b a, b a, b + 1 a, b − 1 F z = F z + F . z 2 1 2 1 2 1 c c c c+bz−az−2b c+bz−az−2b Theorem 21 (Three-term relation for 2 φ1 -series) c(q − z)(b − 1) a, b a, qb z q; z = 2 φ1 q; 2 φ1 c c q (zab − zbc + qbc − qc) z(c − a)(c − b) a, b + 2 φ1 . q; z qc (1 − c)(zab − zbc + qbc − qc) The limiting case q → 1 of Theorem 21 recover the relation due to Gauss. Corollary 22 (Contiguous relation for 2 F1 -series: [9, Eq 1.4.11]) b(z − 1) z(c − a)(c − b) a, b a, b + 1 a, b F z = F z + F . z 2 1 2 1 2 1 c c c+1 (cz − az − b) c(cz − az − b) Theorem 23 (Three-term relation for 2 φ1 -series) (zab−c)(q−c) a, b a, qb z c(z−q)(b−1) a, b + 2 φ1 . q; z = 2 φ1 q; q; z 2 φ1 c c c/q q (za−c)(qb−c) (za−c)(qb−c) The limiting case q → 1 of Theorem 23 recover the relation due to Gauss. Corollary 24 (Contiguous relation for 2 F1 -series: [9, Eq 1.4.12]) 1−c b a, b a, b + 1 a, b + 2 F1 . z = 2 F1 z z 2 F1 c c c−1 1+b−c 1+b−c Theorem 25 (Three-term relation for 2 φ1 -series) q z(a − c) a, b z a, b/q a, b = 2 φ1 + 2 φ1 . q; q; z q; z 2 φ1 c c qc q q−z (q − z)(c − 1)
q-Extensions of Gauss’ Fifteen Contiguous Relations for 2 F1 -Series
91
The limiting case q → 1 of Theorem 25 recover the relation due to Gauss. Corollary 26 (Contiguous relation for 2 F1 -series: [9, Eq 1.4.13]) 1 z(a − c) a, b a, b − 1 a, b + 2 F1 . z = 2 F1 z z 2 F1 c c c+1 1−z (1 − z)c Theorem 27 (Three-term relation for 2 φ1 -series) q(b − c)(qzab + c2 − zac − zbc) a, b z a, b/q = 2 φ1 q; q; z 2 φ1 c c q (q − z)(qzab2 + bc2 − zb2 c − qc2 ) (zab − c)(zab − qc)(c − q) a, b + 2 φ1 . q; z c/q (q − z)(qzab2 + bc2 − zb2 c − qc2 ) The limiting case q → 1 of Theorem 27 recover the relation due to Gauss. Corollary 28 (Contiguous relation for 2 F1 -series: [9, Eq 1.4.14]) (z−1)(1−c) b−c a, b a, b − 1 a, b + 2 F1 . z = 2 F1 z z 2 F1 c c c−1 b−1+(1+a−c)z b−1+(1+a−c)z Theorem 29 (Three-term relation for 2 φ1 -series) z(c−a)(c−b)(qzab+c2 −zac−zbc) a, b z a, b = 2 φ1 q; q; z 2 φ1 c qc q c(c−1)(z−q)(qzab+zab+c2 −qc−zac−zbc) (zab−c)(zab−qc)(q−c) a, b + 2 φ1 . q; z c/q c(z−q)(qzab+zab+c2 −qc−zac−zbc) The limiting case q → 1 of Theorem 29 recover the relation due to Gauss. Corollary 30 (Contiguous relation for 2 F1 -series: [9, Eq 1.4.15]) z(a−c)(c−b) (c−1)(1−z) a, b a, b a, b + 2 F1 . z = 2 F1 z z 2 F1 c c+1 c−1 c{c−1+(1+a+b−2c)z} c−1+(1+a+b−2c)z Of course, there exist numerous other three-term relation for 2 φ1 -series. The interested reader may keep on the work.
References 1. Andrews, G.E., Askey, R., Roy, R.: Special Functions. Cambridge University Press, Cambridge (2000) 2. Bailey, W.N.: Generalized Hypergeometric Series. Cambridge University Press, Cambridge (1935) 3. Chu, W., Wei, C.: Legedre inversions and balanced hypergeometric series identities. Discrete Math. 308, 541–549 (2008) 4. Chu, W., Wei, C.: Set partitions with restrictions. Discrete Math. 308, 3163–3168 (2008)
92
C. Wei and D. Gong
5. Gupta, D.P., Masson, D.R.: Use of the Gauss Contiguous Relations in Computing the Hypergeometric Functions F (n + 1/2, n + 1/2; m; z). Interdisciplinary Information Sciences 2, 63–74 (1996) 6. Ibrahim, A.K., Rakha, M.A.: Contiguous relations and their computations for 2 F1 hypergeometric series. Computers & Mathematics with Applications 56, 1918–1926 (2008) 7. Rakha, M.A., Ibrahim, A.K.: On the contiguous relations of hypergeometric series. J. Comput. Appl. Math. 192, 396–410 (2006) 8. Rakha, M.A., Ibrahim, A.K., Rathie, A.K.: On the computations of contiguous relations for 2 F1 hypergeometric series. Commun. Korean Math. Soc. 24, 291–302 (2009) 9. Slater, L.J.: Generalized Hypergeometric Functions. Cambridge University Press, Cambridge (1966) 10. Takayama, N.: Gr¨ obner Basis and the Problem of Contiguous Relations, Japan. J. Appl. Math. 6, 147–160 (1989) 11. Vidunas, R.: Contiguous relations of hypergeometric series. J. Comput. Appl. Math. 153, 507–519 (2003) 12. Wei, C., Gu, Q.: q-Generalizations of a faimily of harmonic number identities Adv. Appl. Math. 45, 24–27 (2010)
A New Boussinesq-Based Constructive Method and Application to (2+1) Dimensional KP Equation Li Yin1, and Zhen Wang2 1
2
School of Science, Dalian Ocean University, Dalian, 116023, China
[email protected] Department of Applied Mathematics, Dalian University of Technology, Dalian, 116085, China
Abstract. In this paper, we present a constructive algorithm to obtain link of nonlinear evolution equation(s) (NLEEs) and (1+1) dimensional Boussinesq equation. We could generate the solutions to nonlinear evolution equations from the solutions to (1+1) dimensional Boussinesq equation by the obtained link, including N-soliton solutions, double periodic solutions and so on . As an example, we applied this new method to (2+1) dimensional KP equation. Some well results are obtained. This method can also be applied to other nonlinear evolution equations in mathematical physics. Keywords: Nonlinear evolution equation(s); (2+1) dimensional KP equation; soliton solutions; double periodic solutions.
1
Introduction
The investigation of the exact solutions to the nonlinear evolution equations (NLEEs) plays an important role in the study of nonlinear physics. For example, the wave phenomena observed in fluid dynamics is often modeled by the bell shaped sech solutions and the kink shaped tanh solutions. Recently, the computer algebra has been developed quickly. In the field of nonlinear science and engineering, to find as many and general as possible exact solutions to a nonlinear system is one of the most important and fundamental task for the scholars. The exact solutions, if available, to the nonlinear evolution equations can be used as the verification of numerical results. With the development of computerized symbolic computation, much work has been focused on the extensions and applications of the known algebraic methods to get the solutions to nonlinear evolution equations. There has been much progression in the development of
Partially supported by the National Natural Science Foundation of China under grant 50579004. Corresponding author.
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 93–100, 2010. c Springer-Verlag Berlin Heidelberg 2010
those methods, such as the inverse scattering method [1], Darboux transformation [2, 3], Hirota bilinear method [4, 5], and tanh method [6, 7]. In 2006, Lu [8] presented a Burgers equation-based constructive method for solving nonlinear evolution equations. The direct method of symmetry reduction [9, 10] involves no Lie group theoretical techniques. The basic idea is to seek a reduction of the nonlinear evolution equations of the form

u(X) = α(X) + β(X)W[ζ(X)]    (1)

or

u(X) = α(X) + β(X)W[ξ(X), τ(X)]    (2)

where u is a dependent variable and X = (t, x₁, x₂, ..., xₙ) (n ∈ N) collects the independent variables. Substituting (1) or (2) into the given nonlinear evolution equations enables one to obtain a partial differential equation for W. This method can produce reductions that cannot be obtained by the classical Lie group method, as shown in [9] and [10]. The method proposed in the following sections produces explicit expressions for solutions of the (2+1) dimensional KP equation from solutions of the (1+1) dimensional Boussinesq equation, and the results obtained differ from the results of Lou in [10]. The method is simpler and easier to comprehend. The relation between the KP equation and the Boussinesq equation is an explicit expression: if one has a solution to the (1+1) dimensional Boussinesq equation, then one obtains a class of solutions to the KP equation, with some freedom in selecting the variables and the parameters. Comparisons are made between Kruskal's (1989) method, Lou's (1991) method and the present algorithm; in the present algorithm one has freedom in determining the reduction parameters and variables. The solution derived is not the general solution to the KP equation. For illustration, we apply the new algorithm to the (2+1) dimensional KP equation and successfully construct its N-soliton solution.
2 The Basic Idea of the Boussinesq-Based Constructive Method
For a given system of polynomial NLEEs with physical fields uᵢ(x, y, t) in the three variables x, y and t,

N(uᵢ, uᵢₜ, uᵢₓ, uᵢy, uᵢₜₜ, uᵢₜₓ, uᵢₜy, uᵢₓₓ, uᵢₓy, ...) = 0,    (3)

we introduce a new ansatz in terms of a finite expansion of the form

uᵢ(x, y, t) = aᵢ₀ + ∑_{j=1}^{mᵢ} aᵢⱼ wʲ(ξ, τ)    (4)
where ξ = ξ(x, y, t), τ = τ(x, y, t), aᵢ₀ = aᵢ₀(x, y, t) and aᵢⱼ = aᵢⱼ(x, y, t) are all differentiable functions to be determined later, and the new variable w(ξ, τ) satisfies the Boussinesq equation

wττ + βwξξ + γ(w²)ξξ + θwξξξξ = 0,    (5)

which describes the motion of long waves in one dimension and arises in nonlinear lattices and in shallow water under gravity. To determine the uᵢ explicitly, we take the following three steps.

Step 1. Substitute (4) into the given NLEEs and reduce the result with

wττ = −βwξξ − γ(w²)ξξ − θwξξξξ    (6)
to obtain a system of polynomials in w and its derivatives. Collecting the coefficients of these polynomials yields a set of partial differential equations for a₀, a₁, ξ and τ, which we call the determining equations.

Step 2. Solve the determining equations obtained in Step 1 for a₀, a₁, ξ and τ.

Step 3. Substitute a₀, a₁, ξ, τ and any known explicit exact solution of the Boussinesq equation (5) into (4) to obtain an exact solution of the NLEEs in question.

Note that there is no restriction on the form of the solution ansatz (4) of the direct method, provided one can solve the resulting system. We can see that (4) is similar to (2), but unlike in the direct method of symmetry reduction, here w is required to satisfy the auxiliary equation (5). The introduction of an auxiliary equation is inspired by the extended tanh method [6]. The Boussinesq equation is one of the most fundamental evolution equations and has been investigated extensively; one may find numerous papers concerning it. In [11], an N-soliton solution to the (1+1) dimensional Boussinesq equation was obtained by the Hirota bilinear transformation in the following form. Namely,

f = f_N = ∑_{μⱼ=0,1} exp{ ∑_{j=1}^{N} μⱼ [ ηⱼ + ∑_{m=1, m≠j}^{N} (1 − μₘ)Aⱼₘ ] + ∑_{1≤j<m}^{N} μⱼμₘ(Bⱼₘ + πi) }    (7)

where

ηⱼ = (hⱼ + kⱼ)ξ − a(hⱼ² − kⱼ²)τ + ηⱼ⁽⁰⁾,  e^{Aⱼₘ} = (hⱼ − hₘ)/(kⱼ − kₘ),  e^{Bⱼₘ} = (hⱼ + kₘ)/(kₘ − kⱼ),    (8)

and the first sum is taken over all possible combinations of μⱼ = 0, 1. Then

w = 2(ln f)ξξ    (9)

is an N-soliton solution to the Boussinesq equation.
We consider 1-soliton motion governed by the special solution to the Boussinesq equation

w(ξ, τ) = 2abk² e^{kξ+√(k²+1)kτ} / (a + b e^{kξ+√(k²+1)kτ})²    (10)

and a periodic solution to the Boussinesq equation

w(ξ, τ) = [−8k⁴n² + ω² − k² + 4k⁴ + 12n²k⁴ cn²(kξ + ωτ, n)] / (6k²)    (11)

where cn(kξ + ωτ, n) is a doubly periodic Jacobi elliptic function and n is its modulus. A 2-soliton solution to the (1+1) dimensional Boussinesq equation reads

w(ξ, τ) = 2(ln f)ξξ    (12)
where

f = 1 + a₁e^{η₁} + a₂e^{η₂} + a₁₂e^{η₁+η₂},
ηⱼ = (kⱼ − 1/(4kⱼ))ξ + (1/(16kⱼ²) − kⱼ²)τ  (j = 1, 2),
a₁₂ = a₁a₂(16k₁²k₂² + 4k₁k₂ + 1)(k₁ − k₂)² / [(k₁² + k₁k₂ + k₂²)(4k₁k₂ − 1)²].    (13)
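As a quick numerical sanity check (ours, not part of the original paper), the following script verifies by finite differences that the 1-soliton (10) satisfies the Boussinesq equation (5) for the parameter values β = −1, γ = −3, θ = −1, a = 1, b = 2, k = 1 used later for the figures, and that the phases in (13) satisfy the dispersion relation ωⱼ² = pⱼ² + pⱼ⁴ with pⱼ = kⱼ − 1/(4kⱼ), while a₁₂ matches the standard Hirota interaction coefficient −N(η₁ − η₂)/N(η₁ + η₂), N(P, W) = W² − P² − P⁴.

```python
import math

# Boussinesq parameters as used for the figures.
beta, gamma, theta = -1.0, -3.0, -1.0
a, b, k = 1.0, 2.0, 1.0
omega = k * math.sqrt(k * k + 1.0)          # so that omega^2 = k^2 + k^4

def w(xi, tau):
    # 1-soliton solution (10)
    e = math.exp(k * xi + omega * tau)
    return 2.0 * a * b * k * k * e / (a + b * e) ** 2

def d2(f, x, h=1e-2):
    # central second difference
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / h ** 2

def d4(f, x, h=1e-2):
    # central fourth difference (5-point stencil)
    return (f(x - 2 * h) - 4 * f(x - h) + 6 * f(x)
            - 4 * f(x + h) + f(x + 2 * h)) / h ** 4

def residual(xi, tau):
    # left-hand side of (5), evaluated by finite differences
    return (d2(lambda t: w(xi, t), tau)
            + beta * d2(lambda s: w(s, tau), xi)
            + gamma * d2(lambda s: w(s, tau) ** 2, xi)
            + theta * d4(lambda s: w(s, tau), xi))

for xi, tau in [(0.0, 0.0), (0.5, -0.3), (-1.0, 0.7)]:
    assert abs(residual(xi, tau)) < 1e-3    # only truncation error remains

# Checks on the 2-soliton data in (13).
def phase(kj):
    return kj - 1.0 / (4.0 * kj), 1.0 / (16.0 * kj ** 2) - kj ** 2

k1, k2 = 1.0, 2.0
(p1, w1), (p2, w2) = phase(k1), phase(k2)
assert abs(w1 ** 2 - (p1 ** 2 + p1 ** 4)) < 1e-12    # dispersion relation
assert abs(w2 ** 2 - (p2 ** 2 + p2 ** 4)) < 1e-12

N = lambda P, W: W * W - P * P - P ** 4
a12_bilinear = -N(p1 - p2, w1 - w2) / N(p1 + p2, w1 + w2)
a12_closed = ((16 * k1**2 * k2**2 + 4 * k1 * k2 + 1) * (k1 - k2) ** 2
              / ((k1**2 + k1 * k2 + k2**2) * (4 * k1 * k2 - 1) ** 2))
assert abs(a12_bilinear - a12_closed) < 1e-12
```

The finite-difference residual is bounded only by the truncation error of the stencils, and the interaction coefficient from the bilinear form agrees with the closed form in (13).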
3 Application to the (2+1) Dimensional KP Equation
The (2+1) dimensional KP equation

(uₜ + uuₓ + uₓₓₓ)ₓ + α²u_yy = 0    (14)

is called KP-I when α² = 1 and KP-II when α² = −1 (subscripts x, y and t denote partial differentiation). It is well established that the Kadomtsev-Petviashvili (KP) equation is a key ingredient in a number of remarkable nonlinear problems, both in physics and mathematics [1, 12]. Finding special solutions is an important aspect of understanding these problems. The solutions to the KP equation have been studied extensively since they were first found; various methods have been tried and many special solutions were given in [13] and the references therein. Recently, in [14], the authors found a novel class of solutions of the KP-I equation and discussed their properties, suggesting that the class of reflectionless potentials is far richer than was previously known.

We suppose that (14) has a formal solution of the form

u(x, y, t) = a₀ + a₁w(ξ, τ)    (15)

where ξ = ξ(x, y, t), τ = τ(x, y, t), a₀ = a₀(x, y, t) and a₁ = a₁(x, y, t) are all functions to be determined later, and the new variable w(ξ, τ) must satisfy Eq. (5). With the aid of the symbolic computation system Maple, substituting Eq. (15) along with Eq. (6) into Eq. (14) and setting all coefficients of w(ξ, τ) and its derivatives
to be zero yields a set of over-determined algebraic equations with respect to a₀, a₁, ξ and τ. This complicated system can be reduced to

τₓ = 0,  a₁ₓ = 0,  ξₓₓ = 0,
a₁ξₓ⁴ − α²θa₁τy² = 0,
2α²a₁ξyτy + a₁τₜξₓ = 0,
−2α²γa₁τy² + a₁²ξₓ² = 0,
α²a₁τyy + 2α²a₁yτy = 0,
α²a₁yy + a₁a₀ₓₓ + a₁ₓₜ = 0,
α²a₀yy + a₀ₓₜ + a₀ₓ² + a₀a₀ₓₓ + a₀ₓₓₓₓ = 0,
−α²βa₁τy² + α²a₁ξy² + a₁ξₜξₓ + a₀a₁ξₓ² = 0,
a₁ₜξₓ + a₁ξₓₜ + α²a₁ξyy + 2α²a₁yξy + 2a₀ₓa₁ξₓ = 0.    (16)

From the above equations we find that ξ and τ must be of the form

ξ = G₁(t)x + G₂(y, t),  τ = G₃(t)y + G₄(t)    (17)
where G₁(t), G₂(y, t), G₃(t), G₄(t) are undetermined functions, a₀ is a function of y and t, and a₁ is a function of t. Then Eqs. (16) reduce to

a₀yy = 0,
−G₁⁴ + α²θG₃² = 0,
−a₁G₁² + 2α²γG₃² = 0,
α²a₁G₂yy + a₁ₜG₁ + a₁G₁ₜ = 0,
2α²G₃G₂y + G₁G₃ₜ y + G₁G₄ₜ = 0,
−α²βG₃² + G₁G₁ₜ x + G₁G₂ₜ + α²G₂y² + a₀G₁² = 0.    (18)

Solving the above system for a₀, a₁, G₁, G₂, G₃ and G₄, we obtain a first solution:

a₀(y, t) = F₁(t)y + F₂(t),  a₁(t) = 2γC₁²/θ,  G₁(t) = C₁,
G₂(y, t) = (C₅ − ∫C₁F₁(t)dt) y + (1/(C₁θ)) ∫ [βC₁⁴ − C₁²θF₂(t) − α²θ(∫C₁F₁(t)dt)² + 2α²θC₅∫C₁F₁(t)dt − α²θC₅²] dt + C₄,
G₃(t) = −C₁²/(√θ α),  G₄(t) = −(2α/√θ) ∫ C₁(C₁∫F₁(t)dt − C₅) dt + C₆.    (19)
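As a quick consistency check (ours, not part of the paper), the algebraic relations among the constants in (19) and the corresponding equations of system (18) can be evaluated numerically at the parameter values used for the figures below (β = −1, γ = −3, θ = −1, α = i, C₁ = 1); Python's complex arithmetic handles α = i and √θ directly:

```python
import cmath

# Figure parameters: beta = -1, gamma = -3, theta = -1, alpha = i, C1 = 1.
beta, gamma, theta, alpha, C1 = -1, -3, -1, 1j, 1

sqrt_theta = cmath.sqrt(theta)              # sqrt(-1) = i
G1 = C1
G3 = -C1 ** 2 / (sqrt_theta * alpha)        # from (19)
a1 = 2 * gamma * C1 ** 2 / theta            # from (19): a1 = 2*gamma*C1^2/theta

# second and third equations of system (18):
assert abs(-G1 ** 4 + alpha ** 2 * theta * G3 ** 2) < 1e-12
assert abs(-a1 * G1 ** 2 + 2 * alpha ** 2 * gamma * G3 ** 2) < 1e-12
assert abs(G3 - 1) < 1e-12                  # with these values G3 = 1
assert a1 == 6
```

With these values G₃ = 1 and a₁ = 6, so τ reduces to y plus the constant G₄, and the solution (15) is real despite α and √θ being imaginary.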
Fig. 1. t = 0 wave structure.  Fig. 2. t = 2 wave structure.
Fig. 3. t = 4 wave structure.  Fig. 4. t = 6 wave structure.
Fig. 5. t = 0 wave structure.  Fig. 6. t = 10 wave structure.
where F₁(t) and F₂(t) are arbitrary functions of t and C₁, C₂, C₃, C₄, C₅, C₆ are arbitrary constants. So ξ and τ can be written as

ξ = C₁x + (C₅ − ∫C₁F₁(t)dt) y + (1/(C₁θ)) ∫ [βC₁⁴ − C₁²θF₂(t) − α²θ(∫C₁F₁(t)dt)² + 2α²θC₅∫C₁F₁(t)dt − α²θC₅²] dt + C₄,
τ = −(C₁²/(√θ α)) y − (2α/√θ) ∫ C₁(C₁∫F₁(t)dt − C₅) dt + C₆.    (20)

The other solution to Eq. (18) can be written as

a₀(y, t) = F₁(t)y + F₂(t),  a₁(t) = 2γC₁²/θ,  G₁(t) = C₁,
G₂(y, t) = (C₃ − ∫C₁F₁(t)dt) y + (1/(C₁θ)) ∫ [βC₁⁴ − C₁²θF₂(t) − α²θ(∫C₁F₁(t)dt)² + 2α²θC₃∫C₁F₁(t)dt − α²θC₃²] dt + C₂,
G₃(t) = C₁²/(√θ α),  G₄(t) = (2α/√θ) ∫ C₁(C₁∫F₁(t)dt − C₃) dt + C₆,    (21)

where again F₁(t) and F₂(t) are arbitrary functions of t and C₁, C₂, C₃, C₄, C₅, C₆ are arbitrary constants. For example, if we choose w(ξ, τ) as in (9), we obtain an explicit N-soliton solution to the (2+1) dimensional KP equation. Exact solutions of NLEEs often model natural phenomena. Figs. 1–4 show the motion of the 1-soliton solution to the KP equation generated by the 1-soliton solution (10) of the (1+1) dimensional Boussinesq equation with β = −1, γ = −3, θ = −1, α = i, C₁ = 1, C₂ = 1, C₃ = 1, C₄ = 1, C₅ = 0, C₆ = 0, F₁(t) = 0, F₂(t) = 1, a = 1, b = 2, k = 1. Figs. 5 and 6 present the wave of the periodic solution to the KP equation generated by the periodic solution (11) of the (1+1) dimensional Boussinesq equation with β = −1, γ = −3, θ = −1, α = i, C₁ = 1, C₂ = 1, C₃ = 1, C₄ = 1, C₅ = 0, C₆ = 0, F₂(t) = 1, F₁(t) = 0, n = 0.999, ω = 0.5, k = 1.
4 Conclusion and Discussion
In the field of nonlinear science and engineering, searching for exact solutions of NLEEs is a challenging task. Although there may not exist a general method that can be applied to solve all NLEEs, much effort has been devoted to finding better algorithms. In this letter, we present a new Boussinesq-based constructive method implemented with the help of the symbolic computation system Maple. As the Boussinesq equation can be solved by various methods, one may obtain abundant types of solutions for NLEEs using our algorithm. As an illustrative example, the (2+1) dimensional KP equation is considered. The combination of the symbolic computation system Maple and manual calculation is a very powerful
tool for investigating nonlinear science. We believe that this combined tool can be used to explore the interesting dynamical properties found in nonlinear systems.

Acknowledgements. This work is partially supported by the National Natural Science Foundation of China under grant 50579004.
References
[1] Ablowitz, M.J., Clarkson, P.A.: Solitons, Nonlinear Evolution Equations and Inverse Scattering. Cambridge University Press, Cambridge (1992)
[2] Lü, X., Li, J., Zhang, H.Q., Tao, X., Tian, B.: Integrability aspects with optical solitons of a generalized variable-coefficient N-coupled higher order nonlinear Schrödinger system from inhomogeneous optical fibers. J. Math. Phys. 51, 043511 (2010)
[3] Zhang, H.Q., Tian, B., Li, L.L., Xue, Y.S.: Darboux transformation and soliton solutions for the (2+1)-dimensional nonlinear Schrödinger hierarchy with symbolic computation. Physica A 388, 9–20 (2009)
[4] Lü, X., Tian, B., Xu, T., Cai, K.J., Liu, W.J.: Analytical study of the nonlinear Schrödinger equation with an arbitrary linear time-dependent potential in quasi-one-dimensional Bose-Einstein condensates. Ann. Phys. (N.Y.) 323, 2554 (2008)
[5] Conte, R., Musette, M.: Link between solitary waves and projective Riccati equations. J. Phys. A 25, 5609–5623 (1992)
[6] Ma, W.X., He, J.S., Li, C.X.: A second Wronskian formulation of the Boussinesq equation. Nonlinear Anal. 70, 4245–4258 (2009)
[7] Ma, W.X.: An application of the Casoratian technique to the 2D Toda lattice equation. Mod. Phys. Lett. B 22, 1815–1825 (2008)
[8] Lu, Z.S.: A Burgers equation-based constructive method for solving nonlinear evolution equations. Phys. Lett. A 353, 158–160 (2006)
[9] Clarkson, P.A., Kruskal, M.D.: New similarity solutions of the Boussinesq equation. J. Math. Phys. 30, 2201–2213 (1989)
[10] Lou, S.Y., Ruan, H.Y., et al.: Similarity reductions of the KP equation by a direct method. J. Phys. A 24, 1455 (1991)
[11] Zhang, Y., Chen, D.Y.: A new representation of the N-soliton solution for the Boussinesq equation. Chaos Solitons Fractals 23, 175–181 (2005)
[12] Konopelchenko, B.G.: Solitons in Multidimensions: Inverse Spectral Transform Method. World Scientific, Singapore (1993)
[13] Han, W.T., Li, Y.S.: Remarks on the solutions of the KP equation. Phys. Lett. A 283, 185–194 (2001)
[14] Ablowitz, M.J., Chakravarty, S., Trubatch, A.D., Villarroel, J.: A novel class of solutions of the non-stationary Schrödinger and the Kadomtsev-Petviashvili I equations. Phys. Lett. A 267, 132–146 (2000)
Some Properties of a Right Twisted Smash Product A*H over Weak Hopf Algebras*

Yan Yan, Nan Ji, Lihui Zhou, and Qiuna Zhang

College of Sciences, Hebei Polytechnic University, Tangshan, Hebei, China
[email protected]

Abstract. We study the concept of the right twisted smash product algebra over weak Hopf algebras and investigate its properties. Let H be a weak Hopf algebra and A an H-module algebra. Using integral theory, we describe some properties of a right twisted smash product A*H over weak Hopf algebras.

Keywords: Weak Hopf algebra, right twisted smash product, ideal, left integral.
1 Introduction

Weak Hopf algebras were proposed by G. Bohm and F. Nill [1] as a generalization of ordinary Hopf algebras in the following sense: the defining axioms are the same, but the multiplicativity of the counit and the comultiplicativity of the unit are replaced by weaker axioms. Perhaps the easiest example of a weak Hopf algebra is a groupoid algebra; other examples are face algebras, quantum groupoids and generalized Kac algebras. The initial motivation for studying weak Hopf algebras was their connection with the theory of algebra extensions; another important application is that they provide a natural framework for the study of dynamical twists in Hopf algebras. It turns out that many important properties of ordinary Hopf algebras have "weak" analogues. For example, using the theory of integrals for weak Hopf algebras developed in [1], which is essentially parallel to that of ordinary Hopf algebras, one can prove an analogue of Maschke's theorem for weak Hopf algebras [8] and show that semisimple weak Hopf algebras are finite dimensional [9, 10]. But the structure of weak Hopf algebras is much more complicated than that of ordinary Hopf algebras, even in the semisimple case [3]. For example, the antipode of a semisimple weak Hopf algebra over the field of complex numbers may have infinite order [7]. In this paper, we mainly study the concept of the right twisted smash product over weak Hopf algebras and investigate its properties.
This work was supported by the Scientific Fund Project of Hebei Polytechnic University (z200919).
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 101–108, 2010. © Springer-Verlag Berlin Heidelberg 2010
2 Preliminaries

For the foundations of weak Hopf algebra theory we refer the reader to [1]. Throughout this paper, k denotes a field. We use Sweedler's notation [2] for the comultiplication: Δ(x) = x₁ ⊗ x₂.

Definition 1. Let K be a field and H = <μ, η, Δ, ε> be both an associative algebra over K and a coalgebra over K. If H satisfies conditions 1)–3) below, it is called a weak bialgebra; if it satisfies conditions 1)–4), it is called a weak Hopf algebra with weak antipode S.

1) Δ(xy) = Δ(x)Δ(y),
2) 1₁ ⊗ 1₂ ⊗ 1₃ = 1₁ ⊗ 1′₁1₂ ⊗ 1′₂ = 1₁ ⊗ 1₂1′₁ ⊗ 1′₂,
3) ε(xyz) = ε(xy₁)ε(y₂z) = ε(xy₂)ε(y₁z),
4) x₁S(x₂) = ε(1₁x)1₂,  S(x₁)x₂ = 1₁ε(x1₂),  S(x₁)x₂S(x₃) = S(x).

For any weak Hopf algebra H, the following conditions are equivalent:
1) H is a Hopf algebra;
2) Δ(1) = 1 ⊗ 1;
3) ε(xy) = ε(x)ε(y) for all x, y ∈ H;
4) ε(1₁x)1₂ = 1₁ε(x1₂) = ε(x)1 for all x ∈ H.

Let H be a weak bialgebra. The linear maps Π^L, Π^R : H → H are defined by the formulas Π^L(x) = ε(1₁x)1₂ and Π^R(x) = 1₁ε(x1₂), and we denote their images by H^L = Π^L(H) and H^R = Π^R(H). Clearly, they are subalgebras of H. G. Bohm and F. Nill [1] proved that in a weak Hopf algebra H, for all x ∈ H,

Π^L(x) = ε(1₁x)1₂ = x₁S(x₂) = ε(S(x)1₁)1₂ = S(1₁)ε(1₂x),
Π^R(x) = 1₁ε(x1₂) = S(x₁)x₂ = 1₁ε(1₂S(x)) = ε(x1₁)S(1₂).

And for all x ∈ H, we have the following relations:
S(Π^L(x)) = Π^R(S(x)),  S(Π^R(x)) = Π^L(S(x)),
x₁ ⊗ x₂S(x₃) = 1₁x ⊗ 1₂,  S(x₁)x₂ ⊗ x₃ = 1₁ ⊗ x1₂,
x₁ ⊗ S(x₂)x₃ = x1₁ ⊗ S(1₂),  x₁S(x₂) ⊗ x₃ = S(1₁) ⊗ 1₂x.

Moreover, the relation x₁S(x₂)x₃ = x also holds. In fact, x₁S(x₂)x₃ = Π^L(x₁)x₂ = ε(1₁x₁)1₂x₂ = x.
If H is a weak Hopf algebra, its weak antipode S is both anti-multiplicative and anti-comultiplicative; that is, for all x, y ∈ H: S(xy) = S(y)S(x), S(1) = 1, S(x)₁ ⊗ S(x)₂ = S(x₂) ⊗ S(x₁), ε(S(x)) = ε(x).

Definition 2. Let H be a weak Hopf algebra with antipode S, and let A be an algebra. A is called an H-bimodule algebra if the following conditions hold: (1) A is an H-bimodule with left H-module structure map "→" and right H-module structure map "←"; (2) A is both a left H-module algebra under the left action "→" and a right H-module algebra under the right action "←".

Let H be a weak Hopf algebra. An algebra A is a (left) H-module algebra if A is a left H-module via x ⊗ a ↦ x → a and
1) x → ab = (x₁ → a)(x₂ → b),
2) x → 1 = Π^L(x) → 1.

An algebra A is a (left) H-comodule algebra if A is a left H-comodule via Δ_A : A → H ⊗ A, Δ_A(a) = a₋₁ ⊗ a₀ and
1) Δ_A(ab) = a₋₁b₋₁ ⊗ a₀b₀,
2) Δ_A(1) = 1 ⊗ 1.

A coalgebra A is a (left) H-module coalgebra if A is a left H-module via x ⊗ a ↦ x → a and
1) Δ_A(x → a) = (x₁ → a₁) ⊗ (x₂ → a₂),
2) ε_A(x → a) = ε_H(x)ε_A(a).
It is clear that a left H-module algebra is also a right H*-comodule algebra if H is finite dimensional. Let A be a left H-module algebra and set A^H = {a ∈ A : x → a = Π^L(x) → a, ∀x ∈ H}; then A^H is a subalgebra of A, called the invariant subalgebra. In fact, for all x ∈ H and s, t ∈ A^H, we have

x → st = (x₁ → s)(x₂ → t)
= (x₁ → s)(Π^L(x₂) → t)
= (x₁ → s)(ε(1₁x₂)1₂ → t)
= (1′₁x₁ → s)(ε(1′₂1₁x₂)1₂ → t)
= (1₁x₁ → s)(ε(1₂x₂)1₃ → t)
= (1₁x₁ → s)(1₂ → t)
= 1 → ((x → s)t) = (x → s)t = (Π^L(x) → s)t.
By the same method, we have Π^L(x) → (st) = (Π^L(x) → s)t. So st ∈ A^H and A^H is a subalgebra of A.

Now we introduce Sweedler's arrow notation. For any finite dimensional weak Hopf algebra H, the dual vector space H* = Hom_k(H, k) also carries a weak Hopf algebra structure. For x ∈ H and Φ ∈ H*, set

x → Φ = Φ₁<Φ₂, x>,  Φ ← x = <Φ₁, x>Φ₂,
Φ → x = x₁<Φ, x₂>,  x ← Φ = <Φ, x₁>x₂.
Then for all y ∈ H, we have <x → Φ, y> = <Φ, yx> and <Φ ← x, y> = <Φ, xy>.

Definition 3. [6] Let A be an H-bimodule algebra. The right twisted weak smash product A*H is defined on the vector space A ⊗ H with multiplication

(a ⊗ h)(b ⊗ g) = a(h₁ → b ← S(h₃)) ⊗ h₂g

for all a, b ∈ A and h, g ∈ H. Let a*h denote the class of a ⊗ h in A ⊗ H; the multiplication in A*H then satisfies the familiar identities

1₁ → a ⊗ 1₂h = a ⊗ h,  a ← S(1₂) ⊗ 1₁h = a ⊗ h.
3 Ideals in A*H

Definition 4. [4] A left (right) integral in a weak Hopf algebra H is an element l ∈ H (r ∈ H) such that

xl = Π^L(x)l  (rx = rΠ^R(x))

for all x ∈ H. A left or right integral in a weak Hopf algebra H is called non-degenerate if it defines a non-degenerate functional on H*. A left integral l is called normalized if Π^L(l) = 1; similarly, a right integral r is normalized if Π^R(r) = 1.

Lemma 5. Let H be a finite dimensional weak Hopf algebra. Then H* ≅ I_L(H*) ⊗_{H^L} H as right H-Hopf modules.
Theorem 6. Let H be a finite dimensional weak Hopf algebra. Then dim(I_L(H)) = 1.

Proof: Since H* is a right H-Hopf module and H* ≅ I_L(H*) ⊗ H, while dim H* = dim H, we get dim(I_L(H*)) = 1. Similarly, dim(I_L(H)) = 1.

Theorem 7. Let l be a non-zero left integral of H and suppose A*H is a weak Hopf algebra. Then for all a ∈ A and x ∈ H, the following relations hold in A*H:
(1) ax = x₂(S⁻¹(x₁) → a ← S²(x₃)); in particular A*H ⊆ HA.
(2) xal = (x → a)l and lax = l(S⁻¹(x₁^α) → a ← S²(x₂)), where α ∈ G(H*) and x₁^α = α → x₁.
(3) (l) = AlA is an ideal of A*H.

Proof: (1)

x₂(S⁻¹(x₁) → a ← S²(x₃))
= (1*x₂)((S⁻¹(x₁) → a ← S²(x₃))*1)
= (x₂ → (S⁻¹(x₁) → a ← S²(x₅)) ← S(x₄))*x₃
= (S⁻¹(x₁S(x₂)) → a ← S(x₄S(x₅)))*x₃
= (S⁻¹(Π^L(x₁)) → a ← S(Π^L(x₃)))*x₂
= (S⁻¹(S(1₁)) → a ← S(Π^L(x₂)))*1₂x₁
= (a ← S(Π^L(x₂)))*x₁
= (a ← S(1₂))*1₁x₁ = a*x = ax.

(2)

xal = (1*x)(a*l)
= (x₁ → a ← S(x₃))*x₂l
= (x₁ → a ← S(x₃))*Π^L(x₂)l
= (1₁x₁ → a ← S(x₂))*1₂l
= (x₁ → a ← S(x₂))*l
= (x₁ → a)*Π^L(S(x₂))l
= (x₁ → a)*Π^L(Π^R(x₂))l
= (x1₁ → a)*Π^L(1₂)l
= (x → a)*l = (x → a)l,

lax = lx₂(S⁻¹(x₁) → a ← S²(x₃))
= α(x₂)l(S⁻¹(x₁) → a ← S²(x₃))
= l(S⁻¹(α(x₂)x₁) → a ← S²(x₃))
= l(S⁻¹(x₁^α) → a ← S²(x₂)).

(3) This follows from (2).

Lemma 8. [5] Let H be a finite dimensional weak Hopf algebra and l a non-zero left
integral of H, and let A be a left H-module algebra. Then the map l̂ : A → A^H given by l̂(a) = l → a is an A^H-bimodule map.
Proof: Suppose a ∈ A and s ∈ A^H. For all x ∈ H we have

x → (l → a) = (xl) → a = (Π^L(x)l) → a = Π^L(x) → (l → a),

so l̂(a) = l → a ∈ A^H. Moreover,

l → (as) = (l₁ → a)(l₂ → s) = (l₁ → a)(Π^L(l₂) → s) = (1₁l → a)(1₂ → s) = 1 → ((l → a)s) = (l → a)s,

and

l → (sa) = (l₁ → s)(l₂ → a) = (Π^L(l₁) → s)(l₂ → a)
= (S(1₁) → s)(1₂l → a) = s(l → a).

Therefore l̂(as) = l̂(a)s and l̂(sa) = sl̂(a), which completes the proof.

Definition 9. The map l̂ : A → A^H of Lemma 8 is called a (left) trace function for H
Definition 10. A ring R is called semiprime if it has no non-zero nilpotent ideals. Theorem 11. Assume that A*H is semiprime and that l is a non-zero left integral of
H. If I is any non-zero left or right H-stable ideal of A, then l(I) ≠ 0 . Proof: If l(I) = 0 ,then lIl = 0 by Theorem 7(2). Thus if I is a left ideal, then J = Il is
a left ideal of A*H such that J 2 = 0 .Since A*H is semiprime, J = 0 and thus I = 0 a contradiction. If I is a right ideal ,the same argument works using J = lI .
4 Semisimplicity of A*H Lemma 12. The following conditions on a weak Hopf algebra H over K are equivalent: H is semisimple; There exists a normalized left integral e ∈ H , that is ∀h ∈ H , he = ∏LH ( h)e,∏LH (e) = 1 .
Some Properties of a Right Twisted Smash Product A*H over Weak Hopf Algebras
107
,
Theorem 13. Let A*H be a weak Hopf algebra, and let e and q be left integrals of A and H respectively. Then e*q is a left integral of A*H if and only if the following condition holds: for all a ∈ A and x ∈ H,

a(x₁ → e ← S(x₂))*q = Π^L_A(a)ε_H(x)e*q.

In this situation, if e and q are left integrals of A and H satisfying the property

ε*(1₁(e ← S(1′₁))*q)(1₂*1′₂) = ε_A(1₁e)1₂ * ε_H(1′₁q)1′₂,

then e*q is a left integral of A*H.

Proof:

Π^L_{A*H}(a*x)(e*q)
= ε*((1₁*1)(a*x))(1₂*1)(e*q)
= ε*(1₁(1₁ → a ← S(1₃))*1₂x)(1₂(1₁ → e ← S(1₃))*1₂q)
= ε*(1₁(1₁ → a ← S(1′₂))*1₂1′₁x)(1₂(1₁ → e ← S(1′₂))*1₂1′₁q)
= ε*(1₁a*x)(1₂e*q)
= ε_A(1₁a)1₂ε_H(x)e*q
= Π^L_A(a)ε_H(x)e*q,

and

(a*x)(e*q)
= a(x₁ → e ← S(x₃))*x₂q
= a(x₁ → e ← S(x₃))*Π^L_H(x₂)q
= a(x₁ → e ← S(1₂x₂))*S(1₁)q
= a(x₁ → e ← S(x₂))*q.

Therefore Π^L_{A*H}(a*x)(e*q) = (a*x)(e*q) if and only if a(x₁ → e ← S(x₂))*q = Π^L_A(a)ε_H(x)e*q. Furthermore,

Π^L(e*q)
= ε*((1₁*1₁)(e*q)(1₂*1₂))
= ε*(1₁(1₁ → e ← S(1₃))*1₂q)(1₂*1₄)
= ε*(1₁(1″₁ → e ← S(1₂))*1₁1″₂q)(1₂*1₃)
= ε*(1₁(e ← S(1₂1′₁))*1₁q)(1₂*1′₂)
= ε*(1₁(e ← S(1′₁))*q)(1₂*1′₂)
= ε_A(1₁e)1₂ * ε_H(1′₁q)1′₂
= Π^L_A(e) * Π^L_H(q) = 1*1.

Therefore e*q is a left integral of A*H.

Theorem 14. Let A*H be a weak Hopf algebra. If A and H are semisimple and satisfy the conditions of Theorem 13, then A*H is semisimple.
References
1. Bohm, G., Nill, F., Szlachanyi, K.: Weak Hopf algebras I: Integral theory and C*-structure. J. Algebra 221, 385–438 (1999)
2. Sweedler, M.E.: Hopf Algebras. Benjamin, New York (1969)
3. Hirata, K., Sugano, K.: On semisimple extensions and separable extensions over non commutative rings. J. Math. Soc. Japan 18(4), 360–373 (1966)
4. Cohen, M., Fishman, D.: Hopf algebra actions. J. Algebra 100, 363–379 (1986)
5. Wang, S., Li, J.Q.: On twisted smash products for bimodule algebras and the Drinfel'd double. Comm. Algebra 26, 2435–2444 (1998)
6. Zheng, N.: Smash biproduct over weak Hopf algebras. Advances in Mathematics (05) (2009)
7. Shi, M.: The complexity of smash product. Advances in Mathematics (05) (2008)
8. Ling, J.: Maschke-type theorems for weak smash coproducts. Journal of Mathematical Research and Exposition (04) (2009)
9. Yin, Y., Zhang, M.: The structure theorem for weak Hopf algebras. Advances in Mathematics (06) (2009)
10. Ju, T.: Cocyclic module constructed by the right adjoint action of Hopf algebras. Journal of Mathematics (02) (2010)
Application of New Finite Volume Method (FVM) on Transient Heat Transferring*

Yuehong Wang¹, Yueping Qin², and Jiuling Zhang¹,³

¹ School of Resource & Environment, Hebei Polytechnic University, Tangshan 063009, Hebei, China
² China University of Mining and Technology (Beijing), Beijing 100083, China
³ KaiLuan (Group) Limited Liability Company, Tangshan 063018, Hebei, China
[email protected], [email protected]

Abstract. A new finite volume method (FVM) scheme is constructed from the conservation principle on every control volume, whose choice has a great effect on the accuracy of the discrete equations. To validate the method, the temperature field of transient heat transfer is taken as an example and the discrete equations are deduced; the result coincides, to high precision, with the finite element method (FEM) equations derived from the variational principle, except for the coefficient of the time-derivative term. The new FVM scheme thus explains the actual physical meaning of the triangular element in the variational formula, and it simplifies the modeling process while maintaining accuracy. In addition, a practical three-dimensional temperature field problem is solved by both the new FVM and the FEM; comparing both results with the theoretical solution shows that the FVM is closer to the theory than the FEM. In a word, the new FVM scheme not only simplifies the process of establishing the equations but also ensures calculation accuracy; it can widely extend the applicability of finite-volume techniques to temperature field problems and has great application value in engineering.

Keywords: Finite volume method; temperature field; finite element method; variational principle.
1 Introduction

The FVM has been used to solve discretized conservation problems for a long time, since it was introduced in 1982 [1]. Up to the present, the FVM is an important numerical method for solving differential equations and is used widely in various industries because it shares many of the advantages of the finite element method (FEM) [2]. For example, the FVM can generate meshes flexibly and handles complex regions and boundary elements; in particular, its discretization format is as simple as that of the finite difference method, while it conserves mass, momentum, energy

* Sponsored by the National Natural Science Foundation of China (50674091, 50874111, 50844036, 50974050).
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 109–116, 2010. © Springer-Verlag Berlin Heidelberg 2010
and so on [3, 4]. The FVM is therefore highly regarded in both practical computation and theoretical research.
2 Energy Conservation Equation

A particle M is chosen freely in the temperature field T(x, y, z, t). Let Γ be a closed curve around M and F the area it encloses. Analyzing the temperature field according to the conservation of energy [5], the energy equations are as follows:

∫_Γ (qx dy + qy dx) + ∫∫_F qv dxdy = ∫∫_F ρCP (∂T/∂t) dF,    (1)

qx = k ∂T/∂x;  qy = k ∂T/∂y;  qz = k ∂T/∂z.    (2)

Here T stands for the temperature, K; t for the time, s; k for the thermal conductivity, W·m⁻¹·K⁻¹; ρ for the density, kg·m⁻³; CP for the specific heat capacity, J·kg⁻¹·K⁻¹; qv for the intensity of the heat source (positive for exothermic processes, negative for endothermic processes), W·m⁻³; and Γ for the boundary of F, m.
3 Discretization Analysis of Energy Equations

3.1 New Arithmetic of FVM

Fig. 1. Control volumes (a)–(d)
Although the overall balance expressed is the same whichever control volumes are selected (Fig. 1), the discrete condition at a node is not: different control volumes lead to different discrete equations and hence to different calculation precision [6, 7]. In a word, the way the control volumes are selected is a key factor in the calculation precision of the FVM. In this paper, the new control volume is selected as shown in Fig. 2: its boundary is formed by broken lines parallel to the opposite side (subtense), each broken line passing through the centroid of the triangular element.
Fig. 2. New control volumes: (a) inside nodes, (b) boundary nodes
3.2 Discretization Analysis of Inside Nodes

The FVM proceeds much as the FEM does: the whole computational domain of the temperature field is divided automatically by triangular grid generation, as shown in Fig. 2(a). A node is selected freely; supposing the node is connected with n elements, the heat balance analysis transforms equation (1) into

∑_{k=1}^{n} [ qx Δy(ik) + qy Δx(ik) + qv S(ik) − ρCP (∂T/∂t) S(ik) ] = 0.    (3)
This equation consists of several parts contributed by the triangular elements; in other words, every triangular element connected with the node contributes to the conservation equation (Fig. 2). So the contribution of each triangular element to the energy equation is computed separately. The following heat balance analysis gives the contribution of triangular element (k) to node m:

qx Δy(ik) + qy Δx(ik) = qx (2/3)(yi − yj) + qy (2/3)(xj − xi)
= −(k/(3Δ)) [ (bi bm + ci cm)Ti + (bj bm + cj cm)Tj + (bm² + cm²)Tm ],    (4)

qv S(ik) = qv (4/9)Δ.    (5)

Because the temperature varies linearly along each side of the triangle, the temperatures at A and B can be worked out:

TA = (2/3)Ti + (1/3)Tm;  TB = (2/3)Tj + (1/3)Tm,    (6)

∂T/∂t = (1/3)(∂TA/∂t + ∂TB/∂t + ∂Tm/∂t) = (2/9)∂Ti/∂t + (2/9)∂Tj/∂t + (5/9)∂Tm/∂t.    (7)
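The interpolation weights in (6) and (7) follow from the geometry of the new control volume: the broken line through the centroid parallel to side i-j meets edge i-m at A = (2i + m)/3 and edge j-m at B = (2j + m)/3. The following short script (ours, not stated explicitly in the paper) checks this for an arbitrary triangle and an arbitrary linear temperature field:

```python
# Arbitrary triangle vertices and an arbitrary linear field T = 2x - y + 5.
Pi, Pj, Pm = (0.0, 0.0), (4.0, 1.0), (1.0, 3.0)

def T(p):
    return 2.0 * p[0] - 1.0 * p[1] + 5.0

A = tuple((2.0 * Pi[c] + Pm[c]) / 3.0 for c in range(2))
B = tuple((2.0 * Pj[c] + Pm[c]) / 3.0 for c in range(2))

# A and B lie on the line through the centroid G parallel to side Pi-Pj:
G = tuple((Pi[c] + Pj[c] + Pm[c]) / 3.0 for c in range(2))
dir_ij = (Pj[0] - Pi[0], Pj[1] - Pi[1])
cross = lambda u, v: u[0] * v[1] - u[1] * v[0]
assert abs(cross((A[0] - G[0], A[1] - G[1]), dir_ij)) < 1e-12
assert abs(cross((B[0] - G[0], B[1] - G[1]), dir_ij)) < 1e-12

# Linear interpolation reproduces (6):
assert abs(T(A) - (2.0 * T(Pi) + T(Pm)) / 3.0) < 1e-12
assert abs(T(B) - (2.0 * T(Pj) + T(Pm)) / 3.0) < 1e-12

# Averaging TA, TB, Tm reproduces the 2/9, 2/9, 5/9 weights of (7):
avg = (T(A) + T(B) + T(Pm)) / 3.0
assert abs(avg - (2 * T(Pi) + 2 * T(Pj) + 5 * T(Pm)) / 9.0) < 1e-12
```

The triangle m-A-B has similarity ratio 2/3 about m, which also confirms the control-area value S(ik) = (4/9)Δ used in (5).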
Same to the above formula deduction, the energy contribution of (k) triangle element to its nodes of i and j can be found, too. The energy conservation equation can be expressed by the matrix like equation (8).
Y. Wang, Y. Qin, and J. Zhang
$$\begin{Bmatrix} Q_{ik} \\ Q_{jk} \\ Q_{mk} \end{Bmatrix} = \begin{bmatrix} k_{ii} & k_{ij} & k_{im} \\ k_{ji} & k_{jj} & k_{jm} \\ k_{mi} & k_{mj} & k_{mm} \end{bmatrix} \begin{Bmatrix} T_i \\ T_j \\ T_m \end{Bmatrix} + \begin{bmatrix} n_{ii} & n_{ij} & n_{im} \\ n_{ji} & n_{jj} & n_{jm} \\ n_{mi} & n_{mj} & n_{mm} \end{bmatrix} \begin{Bmatrix} \partial T_i/\partial t \\ \partial T_j/\partial t \\ \partial T_m/\partial t \end{Bmatrix} - \begin{Bmatrix} p_i \\ p_j \\ p_m \end{Bmatrix}, \qquad (8)$$

or, in compact element form,

$$Q^e = [K]^e\{T\}^e + [N]^e\left\{\frac{\partial T}{\partial t}\right\}^e - \{p^e\},$$

where

$$k_{ll} = \frac{4}{3}\Phi\left(b_l^2 + c_l^2\right); \quad k_{ln} = k_{nl} = \frac{4}{3}\Phi(b_l b_n + c_l c_n); \quad p_i = p_j = p_m = \frac{4\Delta}{9}q_v;$$
$$n_{ll} = \frac{20\Delta}{81}\rho C_P; \quad n_{ln} = n_{nl} = \frac{8\Delta}{81}\rho C_P; \quad \Phi = \frac{k}{3\Delta} \quad (l, n = i, j, m,\ l \neq n).$$
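As a concrete sketch of how these element coefficients can be assembled in code (the function name and the use of exact rational arithmetic are our own choices; here $b_l$, $c_l$ are the usual linear-triangle shape-function coefficients and $\Delta$ the element area, as in the FEM):

```python
from fractions import Fraction as F

def fvm_element_matrices(b, c, delta, k, rho, Cp, qv):
    """Element matrices [K]e, [N]e and load vector {p}e of equation (8)
    for one triangle, from the new FVM coefficients.  All inputs integer
    (or Fraction) so the assembly is exact."""
    Phi = F(k, 1) / (3 * delta)                      # Phi = k / (3*Delta)
    K = [[F(4, 3) * Phi * (b[l] * b[n] + c[l] * c[n])
          for n in range(3)] for l in range(3)]      # k_ln = (4/3)Phi(b_l b_n + c_l c_n)
    N = [[(F(20, 81) if l == n else F(8, 81)) * delta * rho * Cp
          for n in range(3)] for l in range(3)]      # 20Delta/81 diagonal, 8Delta/81 off
    p = [F(4, 9) * delta * qv] * 3                   # p = (4Delta/9) q_v at each node
    return K, N, p
```

Note that each row of $[N]^e$ sums to $(20 + 8 + 8)/81 = 4/9$ times $\rho C_P \Delta$, which is the FVM coefficient sum listed later in Table 1.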
3.3 Discretization Analysis of Boundary Nodes

3.3.1 Boundary Conditions
There are three kinds of temperature boundary condition. Under the first boundary condition the nodal temperatures are known, so no energy equation needs to be established; under the second boundary condition the heat flux $q_2$ is known; under the third boundary condition a linear relationship between the boundary heat flux and the temperature is known. The following analysis of the energy equation applies when a node lies on the second or third boundary, or at the intersection of these two types of boundary. As shown in Fig. 2(b), boundary 3-5 is a second boundary, where the boundary heat flux $q_2$ is known; boundary 4-5 is a third boundary, where the temperature of the fluid medium contacting the object is $T_f$ and the heat-transfer coefficient is $\alpha$, so the linear relation is

$$q_{\Gamma_3} = \alpha(T - T_f). \qquad (9)$$
3.3.2 Discretization Equation
For example, the heat-balance analysis of a freely selected boundary node 5 is carried out in Fig. 2(b). The pentagon 5DABC is the control volume on the boundary. When establishing the energy equation of boundary nodes, the difference is that the heat transferred through the boundary (like 5-C and 5-D in Fig. 2(b)) must be calculated in addition to the internal heat exchange and heat sources. When the boundary $j$-$m$ is a second boundary, the energy contribution of triangle element $(k)$ to node $m$, written $Q^2_{mk}$, is

$$Q^2_{mk} = q_{mk}L_{mk} + q_v S_{mk} + Q_{jm} = -\frac{k}{3\Delta}\left[(b_i b_m + c_i c_m)T_i + (b_j b_m + c_j c_m)T_j + (b_m^2 + c_m^2)T_m\right] + \frac{4}{9}q_v\Delta_k + \frac{2}{3}q_2 s_i. \qquad (10)$$
By the same deduction, the energy contributions of triangle element $(k)$ to its nodes $i$ and $j$ can be found. The energy conservation equation can be expressed in the matrix form of equation (8), but with different coefficients, as shown in equation (11):

$$k_{ll} = \frac{4}{3}\Phi\left(b_l^2 + c_l^2\right); \quad k_{ln} = k_{nl} = \frac{4}{3}\Phi(b_l b_n + c_l c_n); \quad n_{ll} = \frac{20\Delta}{81}\rho C_P; \quad n_{ln} = n_{nl} = \frac{8\Delta}{81}\rho C_P;$$
$$p_i = \frac{4\Delta}{9}q_v; \quad p_j = p_m = \frac{4\Delta}{9}q_v + \frac{2}{3}q_2 s_i \quad (l, n = i, j, m,\ l \neq n). \qquad (11)$$
When the boundary $j$-$m$ is a third boundary, the energy contribution of triangle element $(k)$ to node $m$, written $Q^3_{mk}$, is

$$Q^3_{mk} = q_{mk}L_{mk} + q_v S_{mk} + Q_{jm} = -\frac{k}{3\Delta}\left[(b_i b_m + c_i c_m)T_i + (b_j b_m + c_j c_m)T_j + (b_m^2 + c_m^2)T_m\right] + \frac{4}{9}q_v\Delta_k + \frac{4\alpha s_i}{9}T_m + \frac{2\alpha s_i}{9}T_j - \frac{2\alpha s_i}{3}T_f. \qquad (12)$$
By the same deduction, the energy contributions of triangle element $(k)$ to its nodes $i$ and $j$ can be found. The energy conservation equation can be expressed in the matrix form of equation (8), but with different coefficients, as shown in equation (13):

$$k_{ii} = \frac{4}{3}\Phi\left(b_i^2 + c_i^2\right); \quad k_{jj} = \frac{4}{3}\Phi\left(b_j^2 + c_j^2\right) + \frac{4\alpha s_i}{9}; \quad k_{mm} = \frac{4}{3}\Phi\left(b_m^2 + c_m^2\right) + \frac{4\alpha s_i}{9};$$
$$k_{ij} = k_{ji} = \frac{4}{3}\Phi(b_i b_j + c_i c_j); \quad k_{im} = k_{mi} = \frac{4}{3}\Phi(b_i b_m + c_i c_m); \quad k_{jm} = k_{mj} = \frac{4}{3}\Phi(b_j b_m + c_j c_m) + \frac{2\alpha s_i}{9};$$
$$n_{ll} = \frac{20\Delta}{81}\rho C_P; \quad n_{ln} = n_{nl} = \frac{8\Delta}{81}\rho C_P; \quad p_i = \frac{4\Delta}{9}q_v; \quad p_j = p_m = \frac{4\Delta}{9}q_v - \frac{2\alpha s_i}{3}T_f; \quad \Phi = \frac{k}{3\Delta}. \qquad (13)$$
4 Comparing the New Arithmetic of FVM with the FEM

In this paper, the energy equation of 2D heat transfer is established by the FVM according to the energy conservation law and Fourier's law. Equations (8), (11) and (13) show that, after multiplying by 3/4, the coefficients are the same as those of the FEM created by the variational principle, except for the coefficients of $\partial T/\partial t$ (shown in Table 1).

Table 1. Comparison of the two kinds of matrix coefficients

            ∂Ti/∂t      ∂Tj/∂t      ∂Tm/∂t      coefficient
            coefficient coefficient coefficient summation
FEM         1/6         1/12        1/12        1/3
FVM         20/81       8/81        8/81        4/9
FVM×3/4     5/27        2/27        2/27        1/3
The reasons for the difference between them are: (1) Different physical concepts: the variational principle expresses the extreme condition of the "heat potential" (functional) at the node, while the energy conservation principle expresses the equilibrium condition of heat in the control volume of the node. (2) Different integration regions: the computational domain of the variational principle is the triangle element area S, while that of the energy conservation principle is the control volume, whose area is 4/9 S. The overall sense is equivalent, but the conditions at each node are not congruent. (3) In the variational principle, ∂T/∂t is treated as a fixed value in order to find the expression of the functional, so this term is approximate and not strict enough in theory, causing some error in the FEM formulas.
Fig. 3. 2D temperature field of thermal calculations (a rectangular plate with sides a and b, boundary temperature Ts = 0 on all edges, and initial temperature T0 = f(x, y))

Table 2. Temperature of rectangular plate
T(Day)   T(t)/K        T(t)/K   R/% (FVM      T(t)/K   R/% (FEM
         theoretical   FVM      calculation   FEM      calculation
         solution               accuracy)              accuracy)
0        10            10        0            10        0
0.1      9.9816        9.931    -0.53398      9.931    -0.50693
0.2      9.4711        9.5141    0.314641     9.5141    0.454013
0.3      8.3698        8.4761    1.077684     8.4761    1.270042
0.4      7.0878        7.0705   -0.20458      7.0705   -0.24408
0.5      5.8912        5.824    -1.12032      5.824    -1.14068
0.6      4.8553        4.693    -1.27078      4.693    -3.34274
0.7      3.988         3.912    -1.81043      3.912    -1.90572
0.8      3.2698        3.201    -0.48933      3.201    -2.1041
0.9      2.6781        2.6163   -0.69452      2.6163   -2.30761
1.0      2.1934        2.1251   -2.49385      2.1251   -3.11389

Note: this problem is without a heat boundary.
5 Example Assessment

The two-dimensional temperature field of the thermal calculation is shown in Fig. 3. For the theoretical solution: the planar grid is 10×10 (1/4 region), the time points are as in Table 2, the initial temperature is 283 K, and the boundary temperature is 273 K. The center temperature of the rectangular plate, T(t), is shown in Table 2. The example shows that, comparing calculation accuracy, the FVM result is closer to the theoretical solution than the FEM result, so the new FVM arithmetic proposed in this paper not only simplifies the FEM modeling process but also enhances the calculation accuracy.

6 Conclusions

In this paper, a new arithmetic of the FVM is created and applied to the transient heat transfer problem by the law of energy conservation and Fourier's law. The following conclusions are obtained: (1) The new arithmetic of the FVM is created according to the conservation principle; the control area is enclosed by broken lines parallel to the subtenses, each passing through the center of gravity of the triangular element (as shown in Fig. 2). The new arithmetic successfully explains the actual physical meaning of the triangular element in the variational formula and simplifies the modeling process while improving the accuracy of the finite volume method. Moreover, it can be applied to 3D heat transfer because of the simplicity of selecting control volumes. (2) The mathematical model of the transient heat transfer problem is established by the new FVM arithmetic according to the law of energy conservation and Fourier's law. In the process, the functional of the differential equations need not be established as in the FEM, which simplifies the modeling of complex problems. (3) The element-searching method is improved on the basis of the FEM: every element is visited to compute its contribution to its three nodes, and the stiffness matrix of the equation system is assembled from the contribution of every element to the node equations. The new method can handle different kinds of boundary elements uniformly, and the number of boundary elements is no longer limited as in the FEM, so the programming workload and the complexity of the process are reduced.
Acknowledgment The authors wish to thank the He-gang coal mine rescue group and all other partners in this project for their helpful support. The anonymous reviewers are acknowledged for their helpful and careful comments, which improved the quality of this paper.
References
1. Haitao, C., Xingye, Y.: The discrete finite volume method on quadrilateral mesh. Journal of Suzhou University 4, 6–10 (2005)
2. Jian, W.: Finite bulk-finite element means on solving convection-diffusion problem. Journal of Xinxiang Teachers College 5, 1–2 (2004)
3. Ewing, R., Lazarov, R., Lin, Y.: Finite volume approximations of nonlocal reactive flows in porous media. Numerical Methods for PDEs 5, 285–311 (2000)
4. Yufei, C.: Studies of Mortar Finite Volume Method and Fractional Flow Formulation for Two-Phase Flow in Porous Media, pp. 45–46. SDU (2007)
5. Min, Y.: Numerical Analysis of Some Finite Volume Element and Finite Volume Schemes, pp. 66–67. SDU, Shandong (2005)
6. Zhe, Y., Hongxing, R.: Symmetric modified finite element methods for nonlinear parabolic problems. Chinese Journal of Engineering Mathematics 3, 530–536 (2006)
7. Li, Z.: Finite volume method on unstructured triangular meshes and its applications on convective phenomena. Mathematics in Practice and Theory 4, 90–96 (2003)
Applications of Schouten Tensor on Conformally Symmetric Riemannian Manifolds

Nan Ji1, Yuanyuan Luo2, and Yan Yan1

1
College of Science, Hebei Polytechnic University, Tangshan 063009, China 2 Tangshan Radio and TV University, Tangshan 063009, China
[email protected]
Abstract. The Schouten tensor, which is expressed by the Ricci curvature and scalar curvature, is a Codazzi tensor on a Riemannian manifold M (dim M > 3) with harmonic Weyl conformal curvature tensor. From this tensor an operator ϒ can be induced, which is self-adjoint relative to the L²-inner product. Using this operator, some equalities and inequalities are obtained. Then, by equalities between certain functions on a compact locally conformally symmetric Riemannian manifold, Einstein manifolds and constant sectional curvature spaces are characterized, and some new theorems are established. Keywords: Self-adjoint differential operator; conformally symmetric space; Schouten tensor.
1 Introduction

In this paper we first define the Schouten tensor, which is expressed by the Ricci curvature and scalar curvature; then we induce an operator ϒ, which is self-adjoint relative to the $L^2$-inner product, and obtain some results.

1.1 Orthonormal Frame Field and Riemannian Curvature

Let M be an n-dimensional Riemannian manifold, $e_1, e_2, \ldots, e_n$ a local orthonormal frame field on M, and $\omega_1, \omega_2, \ldots, \omega_n$ its dual frame field. Then the structure equations of M are given by

$$d\omega_i = \sum_j \omega_j \wedge \omega_{ji}, \qquad \omega_{ij} = -\omega_{ji}, \qquad d\omega_{ij} = \sum_l \omega_{il} \wedge \omega_{lj} - \frac{1}{2}\sum_{k,l} R_{ijkl}\,\omega_k \wedge \omega_l, \qquad (1)$$

where $\omega_{ij}$ is the Levi-Civita connection and $R_{ijkl}$ the Riemannian curvature tensor of M. The Ricci tensor $R_{ij}$ and scalar curvature are defined respectively by

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 117–122, 2010. © Springer-Verlag Berlin Heidelberg 2010
N. Ji, Y. Luo, and Y. Yan
$$R_{ij} := \sum_k R_{kikj}, \qquad r := \sum_k R_{kk}. \qquad (2)$$

We define a tensor as follows:

$$C_{ijkl} := R_{ijkl} - \frac{1}{n-2}\{\delta_{ik}R_{jl} + \delta_{jl}R_{ik} - \delta_{il}R_{jk} - \delta_{jk}R_{il}\} + \frac{r}{(n-1)(n-2)}\{\delta_{ik}\delta_{jl} - \delta_{il}\delta_{jk}\}. \qquad (3)$$

We call it the Weyl conformal curvature tensor; it does not change under conformal transformations of the metric. If $\sum_l C_{ijkl,l} = 0$, the Riemannian manifold is called a locally conformally symmetric space [6].

1.2 Schouten Tensor and Operator ϒ
Now, we define the Schouten tensor as follows [1]:

$$S_{ij} := R_{ij} - \frac{1}{2(n-1)}\,r\,\delta_{ij}. \qquad (4)$$

Then $S_{ij} = S_{ji}$, and we know from [1] that S is a Codazzi tensor if and only if C is harmonic. In particular, if the manifold M is locally conformally symmetric, then S must be a Codazzi tensor [3]. Through straightforward calculation, we can get
$$\operatorname{tr}S = \frac{n-2}{2(n-1)}\,r, \qquad |S|^2 = \sum_{i,j} R_{ij}^2 - \frac{3n-4}{4(n-1)^2}\,r^2, \qquad (5)$$

where tr denotes the trace of the Schouten tensor and $|\cdot|^2$ denotes the square of the
modulus. Then we induce an operator ϒ, similar to the operator introduced by Cheng-Yau in [4]:

$$\Upsilon f = \sum_{i,j}\Bigl(\delta_{ij}\sum_k S_{kk} - S_{ij}\Bigr) f_{ij}, \qquad (6)$$

for any $f \in C^2(M)$. Since M is a locally conformally symmetric manifold, the operator ϒ is self-adjoint relative to the $L^2$ inner product of M, i.e.

$$\int_M f\,\Upsilon g = \int_M g\,\Upsilon f. \qquad (7)$$
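As a quick check on (5), both identities follow directly from definition (4) (our own verification, not part of the original derivation):

```latex
\operatorname{tr}S=\sum_i S_{ii}=\sum_i R_{ii}-\frac{nr}{2(n-1)}=r-\frac{nr}{2(n-1)}=\frac{n-2}{2(n-1)}\,r,
\qquad
|S|^2=\sum_{i,j}\Bigl(R_{ij}-\frac{r}{2(n-1)}\delta_{ij}\Bigr)^{2}
=\sum_{i,j}R_{ij}^{2}-\frac{r^{2}}{n-1}+\frac{n\,r^{2}}{4(n-1)^{2}}
=\sum_{i,j}R_{ij}^{2}-\frac{3n-4}{4(n-1)^{2}}\,r^{2}.
```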
2 Some Equalities Related to the Schouten Tensor and Operator ϒ

When M is compact, we denote the Laplacian and gradient operators by Δ and ∇, respectively, and calculate $\Delta|S|^2$ as follows:

$$\frac{1}{2}\Delta|S|^2 = \sum_{i,j,k} S_{ij,k}^2 + \sum_{i,j,k} S_{ij}S_{ij,kk}$$
$$= |\nabla S|^2 + \sum_{i,j,k} S_{ij}\bigl((S_{ij,kk} - S_{ik,jk}) + (S_{ik,jk} - S_{ik,kj}) + (S_{ik,kj} - S_{kk,ij}) + S_{kk,ij}\bigr)$$
$$= |\nabla S|^2 + \sum_{i,j} S_{ij}(\operatorname{tr}S)_{,ij} + \sum_{i,j,k,l} S_{ij}(S_{lk}R_{lijk} + S_{il}R_{lkjk}). \qquad (8)$$

Then we get the following equality:

$$\frac{1}{2}\Delta|S|^2 = |\nabla S|^2 + \sum_i \lambda_i(\operatorname{tr}S)_{,ii} + \frac{1}{2}\sum_{i,k} R_{ikik}(\lambda_i - \lambda_k)^2. \qquad (9)$$
The last equality holds because S is a Codazzi tensor. Then we calculate $\Upsilon\operatorname{tr}(S)$ as follows:

$$\Upsilon\operatorname{tr}(S) = \sum_{i,j}(\delta_{ij}\operatorname{tr}S - S_{ij})(\operatorname{tr}S)_{,ij} = (\operatorname{tr}S)\,\Delta(\operatorname{tr}S) - \sum_{i,j} S_{ij}(\operatorname{tr}S)_{,ij}$$
$$= \frac{1}{2}\Delta(\operatorname{tr}S)^2 - |\nabla\operatorname{tr}S|^2 - \frac{1}{2}\Delta|S|^2 + |\nabla S|^2 + \sum_{i,j,k,l} S_{ij}(S_{lk}R_{lijk} + S_{il}R_{lkjk}). \qquad (10)$$

Near a point $p \in M$ we choose an orthonormal frame field $e_1, e_2, \ldots, e_n$ such that $S_{ij} = \lambda_i\delta_{ij}$ at p; then (10) is simplified to

$$\Upsilon(\operatorname{tr}S) = \frac{1}{2}\Delta(\operatorname{tr}S)^2 - |\nabla\operatorname{tr}S|^2 - \frac{1}{2}\Delta|S|^2 + |\nabla S|^2 + \frac{1}{2}\sum_{i,k} R_{ikik}(\lambda_i - \lambda_k)^2. \qquad (11)$$
Since M is compact and Δ and ϒ are self-adjoint, by integration of (9) and (11) we have

$$\int_M \Bigl[\,|\nabla S|^2 + \sum_i \lambda_i(\operatorname{tr}S)_{,ii} + \frac{1}{2}\sum_{i,k} R_{ikik}(\lambda_i - \lambda_k)^2\Bigr] = 0, \qquad (12)$$

$$\int_M \Bigl[\,|\nabla S|^2 - |\nabla\operatorname{tr}S|^2 + \frac{1}{2}\sum_{i,k} R_{ikik}(\lambda_i - \lambda_k)^2\Bigr] = 0. \qquad (13)$$
3 Some Results Related to the Schouten Tensor and Operator ϒ

From (13) we immediately get the following theorem:

Theorem A. Let M be a compact conformally symmetric space, dim(M) > 3. If M has constant scalar curvature and positive sectional curvature, then M is an Einstein manifold.

Noting that a conformally flat manifold which is also Einstein is a constant sectional curvature space, we have:

Corollary A. Let M be a compact conformally flat space, dim(M) > 3. If M has constant scalar curvature and positive sectional curvature, then M is a constant curvature space.

When we weaken the condition of positive sectional curvature in Theorem A to nonnegative sectional curvature, we obtain the following theorem and corollary:

Theorem B. Let M be a compact conformally symmetric space, dim(M) > 3. If M has constant scalar curvature and nonnegative sectional curvature, then either M is Einstein, or M is the product of Riemannian manifolds, i.e. $M = M^{n_1} \times \cdots \times M^{n_r}$ ($n_1 + \cdots + n_r = \dim M$), where every $M^{n_i}$ ($1 \leq i \leq r$) is an Einstein manifold.

Proof: Since r = const., $\operatorname{tr}S$ = const. and $\nabla\operatorname{tr}S = 0$, so (13) can be rewritten as
$$\int_M \Bigl[\,|\nabla S|^2 + \frac{1}{2}\sum_{i,k} R_{ikik}(\lambda_i - \lambda_k)^2\Bigr] = 0. \qquad (14)$$

Since the sectional curvature is nonnegative, it is obvious that: (i) $\nabla S = 0$ (15); (ii) if $\lambda_i \neq \lambda_k$, then $R_{ikik} = 0$. If $\lambda_1 = \lambda_2 = \cdots = \lambda_n$, then M is an Einstein space.
If r of the $\lambda_i$ are not equal, after a suitable renumbering of the basis elements $\{e_i\}$ we have

$$\lambda_1 = \cdots = \lambda_{n_1} \neq \lambda_{n_1+1} = \cdots = \lambda_{n_2} \neq \cdots \neq \lambda_{n_{r-1}+1} = \cdots = \lambda_{n_r}, \qquad n_{\alpha-1} \leq i_\alpha \leq n_\alpha,\ 1 \leq \alpha \leq r. \qquad (16)$$

$$0 = \sum_j S_{i_\alpha i_\beta, j}\,\omega_j = dS_{i_\alpha i_\beta} + \sum_j S_{i_\alpha j}\,\omega_{j i_\beta} + \sum_j S_{i_\beta j}\,\omega_{j i_\alpha} = (\lambda_{i_\alpha} - \lambda_{i_\beta})\,\omega_{i_\alpha i_\beta}. \qquad (17)$$
Then $\omega_{i_\alpha i_\beta} = 0$, $d\omega_{i_\alpha} = \sum_j \omega_j \wedge \omega_{j i_\alpha} = \sum_{j_\alpha} \omega_{j_\alpha} \wedge \omega_{j_\alpha i_\alpha}$, and the distributions $\omega_1 = \cdots = \omega_{n_1} = 0$, $\omega_{n_1+1} = \cdots = \omega_{n_2} = 0$, ..., $\omega_{n_{r-1}+1} = \cdots = \omega_{n_r} = 0$ are integrable. We thus obtain the local decomposition of M: $M = M^{n_1} \times \cdots \times M^{n_r}$ ($n_1 + \cdots + n_r = \dim M$), and every $M^{n_i}$ ($1 \leq i \leq r$) is an Einstein manifold.

Corollary B. Let M be a compact conformally flat space, dim(M) > 3. If M has constant scalar curvature and nonnegative sectional curvature, then either M is a manifold with constant sectional curvature, or M is the product of two Riemannian manifolds with constant sectional curvature, i.e. $M = M^{n-1}(c) \times M^1(-c)$.

Proof: Because M is a conformally flat manifold, we have
$$0 = R_{a\alpha a\alpha} = \frac{1}{n-2}(\lambda_\alpha + \lambda_a). \qquad (18)$$

Fixing $\lambda_a$, we have $\lambda_\alpha = -\lambda_a$ for all $\alpha$.

If $\lambda_1 = \cdots = \lambda_n$, from Corollary A we know that M is Einstein, and because M is conformally flat we conclude that M has constant sectional curvature. If $\lambda_1 = \cdots = \lambda_p$, $\lambda_{p+1} = \cdots = \lambda_n$ ($1 \leq a \leq p$, $p+1 \leq \alpha \leq n$), let $\lambda_1 = \cdots = \lambda_p = \lambda$, $\lambda_{p+1} = \cdots = \lambda_n = \mu$. For $a \neq b$, $\alpha \neq \beta$ ($1 \leq a, b \leq p$, $p+1 \leq \alpha, \beta \leq n$), the sectional curvature of $M^p$ is

$$R_{abab} = \frac{1}{n-2}\,\lambda = \text{const.}, \qquad (19)$$

and the sectional curvature of $M^{n-p}$ is

$$R_{\alpha\beta\alpha\beta} = \frac{1}{n-2}\,\lambda_\alpha = \frac{1}{n-2}\,\mu = -\frac{1}{n-2}\,\lambda = \text{const.} \qquad (20)$$
If the sectional curvature of $M^p$ is c, then the sectional curvature of $M^{n-p}$ is −c, and it is obvious that p is n−1, i.e. $M = M^{n-1}(c) \times M^1(-c)$.

Lemma A. The equality

$$(\operatorname{tr}S)^2 - |S|^2 = \text{const.} \geq 0 \qquad (21)$$

implies the inequality

$$|\nabla S|^2 - |\nabla\operatorname{tr}S|^2 \geq 0. \qquad (22)$$

Proof: Taking the covariant derivative of (21), we have

$$\sum_{i,j} S_{ij}S_{ij,k} = (\operatorname{tr}S)(\operatorname{tr}S)_k, \qquad 1 \leq k \leq n, \qquad (23)$$
and so

$$|S|^2\,|\nabla S|^2 \geq \sum_k \Bigl(\sum_{i,j} S_{ij}S_{ij,k}\Bigr)^2 = (\operatorname{tr}S)^2\,|\nabla\operatorname{tr}S|^2 \geq |S|^2\,|\nabla\operatorname{tr}S|^2. \qquad (24)$$
Lemma B. On a Riemannian manifold, if

$$\sum_{i,j,k,l} C_{ijkl}^2 - \sum_{i,j,k,l} R_{ijkl}^2 + \frac{4(n-1)}{(n-2)^2}(\operatorname{tr}S)^2 = \text{const.} \geq 0, \qquad (25)$$

then we have

$$|\nabla S|^2 - |\nabla\operatorname{tr}S|^2 \geq 0. \qquad (26)$$
Acknowledgments This paper is supported by Scientific Fund Project of Hebei Polytechnic University (z201004).
References
1. Hertrich-Jeromin, U.: Models in Möbius Differential Geometry, pp. 16–22 (2001)
2. Okumura, M.: Hypersurfaces and a pinching problem on the second fundamental tensor. Amer. J. Math., 207–213 (1974)
3. Cheng, S.Y., Yau, S.T.: Hypersurfaces with constant scalar curvature. Math. Ann., 195–204 (1977)
4. Chern, S.S., do Carmo, M., Kobayashi, S.: Minimal submanifolds of a sphere with second fundamental form of constant length. In: Browder, F.E. (ed.) Functional Analysis and Related Fields, pp. 59–75. Springer, New York (1970)
5. Li, H.: Hypersurfaces with constant scalar curvature in space forms. Math. Ann., 665–672 (1996)
6. Yau, S.T.: Lectures on Differential Geometry, pp. 231–250. Higher Education Press (2004)
7. Baek, J.O., Suh, Y.J.: Conformally recurrent Riemannian manifolds with harmonic conformal curvature tensor. KYUNGPOOK Math. J. 44, 47–61 (2004)
8. Malek, F., Samavaki, M.: On weakly symmetric Riemannian manifolds. Differential Geometry - Dynamical Systems 10, 215–220 (2008)
9. Ewert-Krzemieniewski, S.: Conformally flat totally umbilical submanifolds in some semi-Riemannian manifolds. KYUNGPOOK Math. J. 48, 183–194 (2008)
10. Murathan, C., Özgür, C.: Riemannian manifolds with a semi-symmetric metric connection satisfying some semisymmetry conditions. In: Proceedings of the Estonian Academy of Sciences, pp. 210–216 (2008)
11. Nan, J., Xinghua, M., Yunwei, X., Yamian, P.: Conformal flat manifold and a pinching problem on the Schouten tensor. In: DCABES 2009 Proceedings, pp. 99–100 (2009)
Area of a Special Spherical Triangle* Xiaohui Hao1, Manfu Yan1, and Xiaona Lu2 1
Mathematics and Information Science Department, Tangshan Teachers College, Hebei Tangshan 063000, China 2 College of Science, Hebei Polytechnic University, Tang Shan 063009, China
[email protected]
Abstract. We calculate the area of a special spherical triangle, formed by the intersection of a sphere and three mutually perpendicular planes whose common intersection point lies inside the sphere. The result of the calculation is not an approximate solution but an analytic one. The analytical expression shows that, once the radius of the sphere is known, the area of the spherical triangle is determined only by the contact angles. The conclusion can be used in research on the nucleation theory of crystallography. Keywords: Spherical geometry; spherical triangle; area.
1 Introduction Spherical geometry is a non-Euclidean geometry based on negating the fifth postulate of Euclidean geometry. Spherical geometry is closely related to human life: the earth we live on can be approximately considered a sphere, and its surface can be treated as a spherical surface. Spherical geometry is widely applied in meteorological science, astrophysics, GPS, navigation, aerospace, etc. [1, 2, 3, 4]. Besides these, it can also be used in materials science, mirror imaging, industrial design and other fields [5, 6, 7]. The spherical triangle is a basic figure on a sphere and plays a key role in spherical geometry; a great many conclusions on the nature of the spherical triangle have been obtained [8, 9, 10]. There are three possible positions of a plane relative to a sphere: separation, tangency and intersection, among which intersection is the most important. A circle is formed when a sphere is cut by a plane; if the centre of the sphere lies in the plane, the circle is called a great circle, otherwise a small circle. A spherical triangle is formed by the intersection of a sphere and three mutually intersecting planes that pass through the same inner point of the sphere. A side of a spherical triangle is part of a great circle or a small circle. The point of intersection of the three planes need not be the centre of the sphere; moreover, the three intersecting planes are in general arbitrary, and it is rather difficult to exactly calculate the area of a general spherical triangle. In this paper, we discuss the area of a spherical triangle under the condition that the three planes are mutually perpendicular. *
This paper was supported by the development fund of TangShan teacher’s college (No: 09D01).
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 123–128, 2010. © Springer-Verlag Berlin Heidelberg 2010
X. Hao, M. Yan, and X. Lu
2 Area of a Special Spherical Triangle

Let the radius of the sphere be R, and let $\alpha, \beta, \gamma$ ($0 < \alpha, \beta, \gamma < \pi$) be the contact angles of the three planes. We first consider the case $0 < \alpha, \beta, \gamma \leq \pi/2$, shown in Fig. 1.

Fig. 1. Spherical triangle and contact angles

With contact angles $\alpha, \beta, \gamma$ as in Fig. 1, the area S of the spherical triangle is calculated by the following integral:

$$S = \int_{R\cos\alpha}^{\sqrt{R^2\sin^2\gamma - R^2\cos^2\beta}} dx \int_{R\cos\beta}^{\sqrt{R^2\sin^2\gamma - x^2}} \frac{R}{\sqrt{R^2 - x^2 - y^2}}\,dy$$
$$= \frac{\pi}{2}R^2\left(\sqrt{\sin^2\gamma - \cos^2\beta} - \cos\alpha\right) - R\int_{R\cos\alpha}^{\sqrt{R^2\sin^2\gamma - R^2\cos^2\beta}} \arcsin\frac{R\cos\gamma}{\sqrt{R^2 - x^2}}\,dx - R\int_{R\cos\alpha}^{\sqrt{R^2\sin^2\gamma - R^2\cos^2\beta}} \arcsin\frac{R\cos\beta}{\sqrt{R^2 - x^2}}\,dx.$$
Let

$$A = R\int_{R\cos\alpha}^{\sqrt{R^2\sin^2\gamma - R^2\cos^2\beta}} \arcsin\frac{R\cos\gamma}{\sqrt{R^2 - x^2}}\,dx, \qquad B = R\int_{R\cos\alpha}^{\sqrt{R^2\sin^2\gamma - R^2\cos^2\beta}} \arcsin\frac{R\cos\beta}{\sqrt{R^2 - x^2}}\,dx.$$
Firstly, we consider A. Integrating by parts,

$$A = R^2\sqrt{\sin^2\gamma - \cos^2\beta}\,\arcsin\frac{\cos\gamma}{\sqrt{\cos^2\gamma + \cos^2\beta}} - R^2\cos\alpha\,\arcsin\frac{\cos\gamma}{\sin\alpha} - R^2\cos\gamma\int_{R\cos\alpha}^{\sqrt{R^2\sin^2\gamma - R^2\cos^2\beta}} \frac{x^2}{\sqrt{R^2\sin^2\gamma - x^2}\,(R^2 - x^2)}\,dx.$$
Let

$$C = R^2\cos\gamma\int_{R\cos\alpha}^{\sqrt{R^2\sin^2\gamma - R^2\cos^2\beta}} \frac{x^2}{\sqrt{R^2\sin^2\gamma - x^2}\,(R^2 - x^2)}\,dx.$$
Consider C. Let $x = R\sin\gamma\sin t$; then we have

$$C = R^2\cos\gamma\int_{\arcsin\frac{\cos\alpha}{\sin\gamma}}^{\arcsin\frac{\sqrt{\sin^2\gamma - \cos^2\beta}}{\sin\gamma}} \left(-1 + \frac{1}{1 - \sin^2\gamma\sin^2 t}\right) dt$$
$$= -R^2\cos\gamma\,\arcsin\frac{\sqrt{\sin^2\gamma - \cos^2\beta}}{\sin\gamma} + R^2\cos\gamma\,\arcsin\frac{\cos\alpha}{\sin\gamma} + R^2\arctan\left(\frac{\cos\gamma}{\cos\beta}\sqrt{\sin^2\gamma - \cos^2\beta}\right) - R^2\arctan\frac{\cos\alpha\cos\gamma}{\sqrt{\sin^2\gamma - \cos^2\alpha}}.$$
Next we consider B. Integrating by parts in the same way,

$$B = R^2\sqrt{\sin^2\gamma - \cos^2\beta}\,\arcsin\frac{\cos\beta}{\sqrt{\cos^2\gamma + \cos^2\beta}} - R^2\cos\alpha\,\arcsin\frac{\cos\beta}{\sin\alpha} - R^2\cos\beta\int_{R\cos\alpha}^{\sqrt{R^2\sin^2\gamma - R^2\cos^2\beta}} \frac{x^2}{\sqrt{R^2\sin^2\beta - x^2}\,(R^2 - x^2)}\,dx.$$
Let

$$D = R^2\cos\beta\int_{R\cos\alpha}^{\sqrt{R^2\sin^2\gamma - R^2\cos^2\beta}} \frac{x^2}{\sqrt{R^2\sin^2\beta - x^2}\,(R^2 - x^2)}\,dx.$$
Consider D. Let $x = R\sin\beta\sin t$; then we have

$$D = R^2\cos\beta\int_{\arcsin\frac{\cos\alpha}{\sin\beta}}^{\arcsin\frac{\sqrt{\sin^2\gamma - \cos^2\beta}}{\sin\beta}} \left(-1 + \frac{1}{1 - \sin^2\beta\sin^2 t}\right) dt$$
$$= -R^2\cos\beta\,\arcsin\frac{\sqrt{\sin^2\gamma - \cos^2\beta}}{\sin\beta} + R^2\cos\beta\,\arcsin\frac{\cos\alpha}{\sin\beta} + R^2\arctan\left(\frac{\cos\beta}{\cos\gamma}\sqrt{\sin^2\gamma - \cos^2\beta}\right) - R^2\arctan\frac{\cos\alpha\cos\beta}{\sqrt{\sin^2\beta - \cos^2\alpha}}.$$
Then we can get the area S:

$$S = \frac{\pi}{2}R^2\left(\sqrt{\sin^2\gamma - \cos^2\beta} - \cos\alpha\right) - A - B.$$

Substituting A and B (and using $\arcsin\frac{\cos\gamma}{\sqrt{\cos^2\gamma+\cos^2\beta}} + \arcsin\frac{\cos\beta}{\sqrt{\cos^2\gamma+\cos^2\beta}} = \frac{\pi}{2}$ together with the identity $\sin^2\gamma - \cos^2\beta = \sin^2\beta - \cos^2\gamma$), we get an analytical expression for the area of the spherical triangle under the condition $0 < \alpha, \beta, \gamma \leq \pi/2$. That is:

$$\begin{aligned}
S = {} & -\frac{\pi}{2}R^2\cos\alpha - \frac{\pi}{2}R^2\cos\beta - \frac{\pi}{2}R^2\cos\gamma \\
& + R^2\cos\alpha\arcsin\frac{\cos\gamma}{\sin\alpha} + R^2\cos\alpha\arcsin\frac{\cos\beta}{\sin\alpha} + R^2\cos\beta\arcsin\frac{\cos\gamma}{\sin\beta} \\
& + R^2\cos\beta\arcsin\frac{\cos\alpha}{\sin\beta} + R^2\cos\gamma\arcsin\frac{\cos\alpha}{\sin\gamma} + R^2\cos\gamma\arcsin\frac{\cos\beta}{\sin\gamma} \\
& - R^2\arctan\frac{\cos\alpha\cos\gamma}{\sqrt{\sin^2\gamma - \cos^2\alpha}} - R^2\arctan\frac{\cos\alpha\cos\beta}{\sqrt{\sin^2\beta - \cos^2\alpha}} - R^2\arctan\frac{\cos\beta\cos\gamma}{\sqrt{\sin^2\gamma - \cos^2\beta}} + \frac{\pi}{2}R^2.
\end{aligned}$$
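As a cross-check of this closed form (a sketch: the function names are ours, and the midpoint-rule integrator below exists only for verification), it can be compared against direct evaluation of the defining double integral:

```python
import math

def spherical_triangle_area(alpha, beta, gamma, R=1.0):
    """Closed-form area from the final formula of this section
    (valid whenever the arguments under the square roots are positive)."""
    ca, cb, cg = math.cos(alpha), math.cos(beta), math.cos(gamma)
    S = -math.pi / 2 * (ca + cb + cg - 1.0)
    S += ca * (math.asin(cg / math.sin(alpha)) + math.asin(cb / math.sin(alpha)))
    S += cb * (math.asin(cg / math.sin(beta)) + math.asin(ca / math.sin(beta)))
    S += cg * (math.asin(ca / math.sin(gamma)) + math.asin(cb / math.sin(gamma)))
    # sin^2(y) - cos^2(x) = 1 - cos^2(x) - cos^2(y), symmetric in (x, y)
    S -= math.atan(ca * cg / math.sqrt(1.0 - ca * ca - cg * cg))
    S -= math.atan(ca * cb / math.sqrt(1.0 - ca * ca - cb * cb))
    S -= math.atan(cb * cg / math.sqrt(1.0 - cb * cb - cg * cg))
    return R * R * S

def spherical_triangle_area_numeric(alpha, beta, gamma, R=1.0, n=400):
    """Midpoint-rule evaluation of the defining double integral
    (for the case 0 < alpha, beta, gamma <= pi/2 only)."""
    x0 = R * math.cos(alpha)
    x1 = math.sqrt(R**2 * math.sin(gamma)**2 - R**2 * math.cos(beta)**2)
    hx, total = (x1 - x0) / n, 0.0
    for i in range(n):
        x = x0 + (i + 0.5) * hx
        y0 = R * math.cos(beta)
        y1 = math.sqrt(R**2 * math.sin(gamma)**2 - x * x)
        hy = (y1 - y0) / n
        for j in range(n):
            y = y0 + (j + 0.5) * hy
            total += R / math.sqrt(R * R - x * x - y * y) * hx * hy
    return total
```

For three planes through the centre, $\alpha = \beta = \gamma = \pi/2$, every cosine vanishes and the formula collapses to $\pi R^2/2$, one octant of the sphere, as expected; the expression is also fully symmetric in the three contact angles.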
Similar calculations apply for the other ranges of the contact angles and yield the same conclusion. The way of calculation is briefly shown as follows. If one of the three contact angles is greater than π/2, we may let $\pi/2 \leq \alpha < \pi$; then $\cos\alpha \leq 0$, and the area of the spherical triangle is calculated by the integral

$$S = \int_{R\cos\alpha}^{\sqrt{R^2\sin^2\gamma - R^2\cos^2\beta}} dx \int_{R\cos\beta}^{\sqrt{R^2\sin^2\gamma - x^2}} \frac{R}{\sqrt{R^2 - x^2 - y^2}}\,dy.$$
If two of the three contact angles are greater than π/2, we may let $\pi/2 \leq \alpha, \beta < \pi$; then $\cos\alpha \leq 0$, $\cos\beta \leq 0$, and the calculation of the area can be expressed as

$$S = \int_{R\cos\alpha}^{\sqrt{R^2\sin^2\gamma - R^2\cos^2\beta}} dx \int_{R\cos\beta}^{\sqrt{R^2\sin^2\gamma - x^2}} \frac{R}{\sqrt{R^2 - x^2 - y^2}}\,dy + \int_{\sqrt{R^2\sin^2\gamma - R^2\cos^2\beta}}^{R\sin\gamma} dx \int_{-\sqrt{R^2\sin^2\gamma - x^2}}^{\sqrt{R^2\sin^2\gamma - x^2}} \frac{R}{\sqrt{R^2 - x^2 - y^2}}\,dy.$$
If all three contact angles are greater than π/2, so that $\cos\alpha \leq 0$, $\cos\beta \leq 0$, $\cos\gamma \leq 0$, the analytical expression for the area can be obtained indirectly:

$$S = 2\pi R(R - R\cos\beta) - \int_{R\cos\beta}^{R\sin\gamma} dy \int_{-\sqrt{R^2\sin^2\gamma - y^2}}^{\sqrt{R^2\sin^2\gamma - y^2}} \frac{R}{\sqrt{R^2 - x^2 - y^2}}\,dx - \int_{R\cos\beta}^{\sqrt{R^2\sin^2\alpha - R^2\cos^2\gamma}} dy \int_{R\cos\gamma}^{\sqrt{R^2\sin^2\alpha - y^2}} \frac{R}{\sqrt{R^2 - z^2 - y^2}}\,dz - \int_{\sqrt{R^2\sin^2\alpha - R^2\cos^2\gamma}}^{R\sin\alpha} dy \int_{-\sqrt{R^2\sin^2\alpha - y^2}}^{\sqrt{R^2\sin^2\alpha - y^2}} \frac{R}{\sqrt{R^2 - z^2 - y^2}}\,dz.$$

Under each of the three conditions, the result is exactly consistent with the case $0 < \alpha, \beta, \gamma \leq \pi/2$, that is,
π cos γ R 2 cos β − R 2 cos γ + R 2 cos α arcsin 2 2 sin α cos β cos γ + R 2 cos α arcsin + R 2 cos β arcsin sin α sin β cosα cos β cosα +R2 cos β arcsin + R2 cos γ arcsin + R2 cos γ arcsin sin β sin γ sin γ cos α cos γ cos α cos β − R 2 arctan − R 2 arctan 2 2 sin γ − cos α sin 2 β − cos 2 α
S=−
π
2
2
R 2 cos α −
− R 2 arctan
π
cos β cos γ sin γ − cos β 2
2
+
π 2
R2 .
3 Conclusion and Remarks

A detailed calculation of the area of a spherical triangle formed by three mutually perpendicular planes has been carried out by means of the integral method. It is concluded that, for a given radius, the area of the spherical triangle is determined by the contact angles α, β, γ. Considering that heterogeneous nucleation on a substrate of three mutually perpendicular planes is common during metal solidification, the conclusion of this paper can be used in calculating the nucleus-liquid area, which is necessary for studying the surface energy of heterogeneous nucleation. Further research can be done on the analytical calculation of the area of a general spherical triangle as well as its applications.
References 1. Yang, X.S., Hu, J.L., Chen, D.H., et al.: Verification of GRAPES unified global and regional numerical weather prediction model dynamic core. Chinese Science Bulletin 53(22), 3458–3464 (2008) 2. Zang, S.X., Zhou, H.L., Wei, R.Q., et al.: Structure and physical properties of the Earth’s interior. Acta Seismologica Sinica 16(5), 522–533 (2003) 3. She, C.L., Wan, W.X., Xu, G.R.: Climatological analysis and modeling of the ionospheric global electron content. Chinese Science Bulletin 53(2), 282–288 (2008) 4. Lu, X.R.: Geometry of broad line regions of active galactic nuclei. Chinese Journal of Astronomy and Astrophysics 8(1), 50–62 (2008) 5. Jiang, C.G., Jiang, Z.B., Liu, H., et al.: Accurate method to calculate curvature correction of the Earth in survey. Surveying and Mapping of Geology and Mineral Resources 20(3), 1–3 (2004) 6. Huang, X.J., Shen, L.: On the convergence of circle packings to the quasiconformal map. Acta Mathematica Scientia 29B(5), 1173–1181 (2009) 7. Dong, P.B., Liu, L.: Spherical triangle graphic methods based on the relation of spherical triangle and triangular pyramid. Journal of Engineering Graphics (2), 124–128 (2001) 8. Li, X.L., Chen, D.H., Peng, X.D., et al.: Implementation of the semi-lagrangian advection scheme on a quasi-uniform overset grid on a sphere. Advances in Atmospheric Sciences 23(5), 792–801 (2006) 9. Yang, D.H.: Some basic inequalities in higher dimensional non-Euclid space. Science in China Series A: Mathematics 50(3), 423–438 (2007) 10. Gong, X., Wang, B.X., Chen, L.P.: Solution to 3D constraint for solid model deformation. Journal of Huazhong University of Science and Technology (Natural Science Edition) 36(9), 75–78 (2008)
A Parallel Algorithm for SVM Based on Extended Saddle Point Condition Xiaorui Li, Congying Han, and Guoping He College of Information Science and Engineering, Shandong University of Science and Technology, Qingdao, Shandong, 266510, China
Abstract. The constraint-partitioning approach achieves a significant reduction in solution time when resolving some large-scale mixed-integer optimization problems. Its theoretical foundation, the extended saddle-point theory, which implies that the original problem can be decomposed into several subproblems of relatively smaller scale by virtue of the separability of the extended saddle-point conditions, still needs to be examined carefully. Enlightened by this theory, we have developed a novel parallel algorithm for convex programming. Our approach not only works well theoretically, but also appears promising in numerical experiments. As the theoretical essence of the Support Vector Machine (SVM) is a quadratic program, we are inspired to apply this new method to large-scale SVMs to achieve numerical improvements. Keywords: support vector machine, large-scale quadratic programming, extended saddle points, parallel variable distribution.
1 Introduction
In the field of machine learning, SVM has become a hot issue for its outstanding learning performance, successfully applied in many fields such as facial recognition, handwritten digit recognition, automatic text categorization, etc.

Given a training set $T = \{x_1, \ldots, x_l\}$ with $x_i \in R^m$, $l \in N^+$, and a label vector $y \in R^l$ with $y_i \in \{-1, 1\}$, $i = 1, \ldots, l$, a function $\phi: R^m \to R^n$ is presumed to map each training vector $x_i$ into a higher-dimensional space. Simultaneously, we assume T can be separated into two subsets by a hyperplane in the mapped space, and that the hyperplane maximizes the margin between them; writing it as $\omega^T\phi(x) + b = 0$, C-support vector classification [1] determines the hyperplane by

$$\min_{\omega, b, \xi}\ \frac{1}{2}\omega^T\omega + Ce^T\xi \quad \text{s.t.}\quad y_i(\omega^T\phi(x_i) + b) \geq 1 - \xi_i,\ \ \xi_i \geq 0,\ \ i = 1, \ldots, l \qquad (1)$$
This work was supported by National Natural Science Foundation of China (NO.10971122), Key Scientific and Technological Project of Shandong Province (2009GG10001012), Research Fund for the Doctoral Program of Higher Education (20093718110005) and Shandong Natural Science Foundation of China (Y2008A01).
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 129–136, 2010. c Springer-Verlag Berlin Heidelberg 2010
X. Li, C. Han, and G. He
where ξ_i, i = 1, ..., l are slack variables allowing the margin constraints to be violated, C is a real penalty parameter for such violations, and e is the vector of all ones. The dual [2] of problem (1) can be formulated as:

min_α  (1/2) α^T Q α − e^T α
s.t.  y^T α = 0
(2)
0 ≤ α_i ≤ C,  i = 1, ..., l

where the entries of Q are defined by Q_ij = y_i y_j K_ij, i, j = 1, ..., l, and K_ij = K(x_i, x_j) = φ(x_i)^T φ(x_j) is the so-called kernel function. As the SVM is a quadratic program, we can turn to classical algorithms such as the Newton method, the gradient projection method, interior-point algorithms, active-set approaches, etc. However, the growth in the number of training samples, which leads to huge memory requirements for the matrix Q, together with the large size of the training set T (l ≥ 10^4), renders those classical optimization methods impractical. To speed up large-scale SVM training, numerous studies have been carried out, most of which are based on different formulations of the original SVM algorithm or rely on approximation techniques, sample-set shrinking, etc. Early approaches, such as the chunking algorithm (Boser, 1992), decomposition methods (Osuna, 1997), and the SMO (Sequential Minimal Optimization) method, still have deficiencies to various extents on large training sets. In 2005, Tsang put forward the Core Vector Machine (CVM) algorithm, which can solve SVM and regression problems efficiently on large datasets by adopting an efficient approximate minimum enclosing ball (MEB) algorithm. As most such methods trade training time against RAM cost, many parallel algorithms have emerged, reducing memory requirements and runtime effectively. Cascade SVM (Graf et al., 2005) splits the data into subsets that are optimized separately with multiple SVMs; the partial results are combined and filtered in a cascade until the global optimum is reached. Some variations of Cascade SVM also show outstanding numerical performance [3].
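As a concrete illustration of how the matrix Q in (2) arises, and why it dominates memory for large l, the following sketch builds Q from an RBF kernel; the kernel choice, parameter values and toy data are our own illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # K_ij = exp(-gamma * ||x_i - x_j||^2), a positive semidefinite kernel
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def dual_matrix(X, y, gamma=1.0):
    # Q_ij = y_i y_j K(x_i, x_j), as defined after problem (2)
    K = rbf_kernel(X, gamma)
    return (y[:, None] * y[None, :]) * K

def dual_objective(alpha, Q):
    # f(alpha) = (1/2) alpha^T Q alpha - e^T alpha
    return 0.5 * alpha @ Q @ alpha - alpha.sum()

X = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
Q = dual_matrix(X, y)
# Q inherits symmetry and positive semidefiniteness from K,
# so the dual objective is convex; Q is dense and l x l, which is
# exactly the memory bottleneck discussed above.
print(np.allclose(Q, Q.T), bool(np.all(np.linalg.eigvalsh(Q) > -1e-9)))
```

Since K_ii = 1 for the RBF kernel and y_i^2 = 1, the diagonal of Q is all ones; storing Q densely costs O(l^2) memory, which motivates the decomposition pursued in this paper.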
In the algorithm [4] developed by Woodsend et al., data is preprocessed in parallel to generate an approximate low-rank Cholesky decomposition; the solver then exploits the problem's structure to perform many linear algebra operations in parallel, with relatively little data transfer between processors [5], resulting in excellent parallel efficiency for very-large-scale problems. The P-packSVM algorithm [6] can solve an SVM with an arbitrary kernel; it embraces a stochastic gradient descent method to optimize the primal objective. This algorithm can be highly parallelized with a special packing strategy, and achieves sub-linear speed-up with hundreds of processors. Yixin Chen has proposed a procedure [7] to solve large-scale mixed-integer programming. According to his dissertation, the constraints in many existing large-scale problems are highly structured, so the problems can be reformulated as:

min_z  f(z)
s.t.  h^t(z(t)) = 0,  g^t(z(t)) ≤ 0,  t = 0, ..., N   (local constraints)
      H(z) = 0,  G(z) ≤ 0   (global constraints)
(3)
A Parallel Algorithm for SVM Based on Extended Saddle Point Condition
We can prove that problem (2) can be decomposed into N+1 subproblems under some assumptions, once the constraints of the original problem are partitioned into N+1 parts as in (3). Moreover, among existing parallel approaches, only a few train a standard SVM on the whole data set. In this paper, we devise a parallel algorithm based on extended saddle-point theory, without shrinkage of the training samples. The paper is organized as follows: Section 2 gives the theoretical foundation on extended saddle points. In Section 3, we prove that the theory of extended saddle points applies to SVM and analyze whether it can serve as an effective solver. We then propose a new parallel algorithm based on the extended saddle-point condition for SVM. Finally, we draw the main conclusions and point out future directions to extend this research.
2 Theory of Extended Saddle Points
In this section, we describe the theory of extended saddle points under constraint partitioning and the properties of the extended saddle-point condition (ESPC). The remarkable feature of ESPC is that it holds over an extended region of penalty values, rather than at several unique values (the Lagrange multipliers) or only in the limit of infinite penalty parameters, as general penalty methods require. Since ESPC can be reformulated into several groups of necessary conditions following the partition of the constraints, the theory facilitates decomposing a large-scale original problem into several subproblems linked by minor global constraints. Simultaneously, we have proven the equivalence between ESPC and a constrained local minimum under some assumptions. Thus ESPC under constraint partitioning reduces the search complexity: a parallel algorithm can solve each smaller subproblem independently and then resolve the global constraints.

2.1 Basic Theory of ESPC
Consider the nonlinear constrained optimization problem P0:

(P0):  min_x  f(x)
s.t.  h(x) = 0
      g(x) ≤ 0
(4)
where x ∈ R^n is a continuous variable and f is lower bounded; g = (g_1, ..., g_r)^T, h = (h_1, ..., h_m)^T, g, h ∈ C^1, and the feasible domain is denoted X. For an arbitrary vector y = (y_1, y_2, ..., y_d), in this paper we write |y| = (|y_1|, |y_2|, ..., |y_d|) and max{y, 0} = (max{y_1, 0}, max{y_2, 0}, ..., max{y_d, 0}); then the l1-penalty function is L(x, α, β) = f(x) + α^T |h(x)| + β^T max{g(x), 0}.

Definition 1. (x*, α**, β**) is defined as an extended saddle point if there exist α* ≥ 0, β* ≥ 0 such that for ∀x ∈ N_δ(x*), ∀α ∈ R^m, ∀β ∈ R^r, ∀α** > α* and ∀β** > β*,

L(x*, α, β) ≤ L(x*, α**, β**) ≤ L(x, α**, β**).

This condition is called the ESPC. We contrast it with the Lagrange saddle point:
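Definition 1 can be illustrated on a one-dimensional toy problem of our own (f(x) = x², g(x) = 1 − x ≤ 0, so the CLM is x* = 1 with multiplier μ = 2): once the penalty β exceeds the threshold β* = 2, every larger β already recovers x*, showing that the inequality holds over an extended region of penalty values rather than at a single multiplier.

```python
import numpy as np

def f(x): return x**2              # objective (illustrative choice)
def g(x): return 1.0 - x           # inequality constraint g(x) <= 0

def L(x, beta):
    # l1-penalty function L(x, beta) = f(x) + beta * max{g(x), 0}
    return f(x) + beta * np.maximum(g(x), 0.0)

xs = np.linspace(0.0, 3.0, 3001)
# The CLM is x* = 1 with Lagrange multiplier mu = 2, so beta* = 2:
# any beta** > 2 already recovers x* exactly; the ESPC holds over
# the whole extended region (2, inf), not only as beta tends to infinity.
for beta in (2.5, 5.0, 50.0):
    xmin = xs[np.argmin(L(xs, beta))]
    print(beta, round(float(xmin), 3))
```

Each printed minimizer sits at x* = 1, regardless of how far beta exceeds the threshold; this is the "extended region" that distinguishes ESPC from the classical exact-penalty argument.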
Definition 2. For the Lagrange function l(x, λ, μ) = f(x) + λ^T h(x) + μ^T g(x), (x̄, λ̄, μ̄) is a saddle point if there exist x̄ ∈ R^n, λ̄ ∈ R^m, μ̄ ∈ R^r with μ̄ ≥ 0 satisfying l(x̄, λ, μ) ≤ l(x̄, λ̄, μ̄) ≤ l(x, λ̄, μ̄) for ∀x ∈ R^n, ∀λ ∈ R^m, ∀μ ∈ R^r, μ ≥ 0.

Intuitively, the Lagrange saddle-point condition holds only for specific Lagrange multipliers. Once we find an extended saddle point (x*, α*, β*), the extended saddle-point inequality holds over an extended penalty region (α**, β**), α** > α*, β** > β*.

Definition 3. x* is a constrained local minimum (CLM) if x* ∈ X and f(x*) ≤ f(x) for ∀x ∈ N_δ(x*) ∩ X.

In mixed-integer programming, for a CLM satisfying a certain constraint-qualification condition (there exists no direction along which the sub-differentials of the continuous equality and inequality constraints at the CLM are all zero), there is an equivalence between an extended saddle point and the CLM [8].

2.2 A Modified ESPC for Convex Programming
In this paper, we succeed in proving that ESPC holds if the functions involved in the original problem have certain properties such as differentiability and convexity. Moreover, no constraint-qualification condition is required.

Theorem 1. Suppose f: R^n → R and g: R^n → R^r are differentiable convex functions, h: R^n → R^m is an affine function, h = Ax − b, and A has full column rank. If the Slater condition holds (there exists a point x̃ with h(x̃) = 0, g(x̃) < 0), then x* is a CLM of P0 if and only if there exist finite α* ≥ 0, β* ≥ 0 such that for ∀x ∈ N_δ(x*), ∀α ∈ R^m, ∀β ∈ R^r, ∀α** > α* and ∀β** > β*:

L(x*, α, β) ≤ L(x*, α**, β**) ≤ L(x, α**, β**).

Proof. (⇐) Given an extended saddle point x*, there exist finite α* ≥ 0, β* ≥ 0 satisfying ESPC. From the first inequality, for ∀α ∈ R^m, ∀β ∈ R^r, (α − α**)^T |h(x*)| + (β − β**)^T max{g(x*), 0} ≤ 0. Hence h(x*) = 0 and g(x*) ≤ 0, so x* is a feasible point of P0. The second inequality warrants the optimality of x*: for ∀x ∈ N_δ(x*) ∩ X, f(x) = L(x, α**, β**) ≥ L(x*, α**, β**) = f(x*).

(⇒) Given a CLM x* of P0. Since x* is feasible, L(x*, α, β) = L(x*, α**, β**).
(1) If all the constraints are inactive inequality constraints at x*, that is, m = 0 and g(x*) < 0, then since g ∈ C^1 we have g(x) ≤ 0 for ∀x ∈ N_δ(x*) with δ small enough. Hence L(x, α**, β**) = f(x) ≥ f(x*) = L(x*, α**, β**).
(2) Otherwise, under the stated assumptions there exists (λ̄, μ̄) with μ̄ ≥ 0 such that (x*, λ̄, μ̄) is a Lagrange saddle point: l(x*, λ, μ) ≤ l(x*, λ̄, μ̄) ≤ l(x, λ̄, μ̄) for ∀x ∈ R^n, ∀λ ∈ R^m, ∀μ ∈ R^r, μ ≥ 0. Setting α* = |λ̄| and β* = μ̄, for ∀x ∈ N_δ(x*), α** > α*, β** > β*,
L(x, α**, β**) = f(x) + α**^T |h(x)| + β**^T max{g(x), 0} ≥ f(x) + α*^T |h(x)| + β*^T max{g(x), 0}.
Now α*^T |h(x)| = |λ̄|^T |h(x)| ≥ λ̄^T h(x) and β*^T max{g(x), 0} ≥ μ̄^T g(x). Then L(x, α**, β**) ≥ f(x) + λ̄^T h(x) + μ̄^T g(x) = l(x, λ̄, μ̄) ≥ l(x*, λ̄, μ̄) = f(x*) + λ̄^T h(x*) + μ̄^T g(x*) = f(x*) + μ̄^T g(x*).
Moreover, since f and g are differentiable convex and h is affine, a Lagrange saddle point must be a KKT point, i.e., μ̄^T g(x*) = 0. Then L(x, α**, β**) ≥ f(x*) + λ̄^T h(x*) + μ̄^T g(x*) = f(x*) = L(x*, α**, β**). Therefore, ESPC is satisfied.

2.3 ESPC under Constraint Partitioning
From previous work, an important observation is that the constraints involved in constrained optimization problems are structured to some extent, enabling us to partition them into several parts [9]. Without loss of generality, we assume the constraints of P0 can be grouped into N+1 parts as in problem (3), where h^t(·), g^t(·) involve only some elements of z, the so-called stage vector. In stage t we write the corresponding variables as z(t) and the constraints as h^t(z(t)) = 0, g^t(z(t)) ≤ 0; H(z) = 0, G(z) ≤ 0 are global constraint functions of z. Here z(t) = (z_1(t), z_2(t), ..., z_{u_t}(t)), h^t = (h^t_1, h^t_2, ..., h^t_{m_t}), g^t = (g^t_1, g^t_2, ..., g^t_{r_t}), H = (h_1, h_2, ..., h_m), G = (g_1, g_2, ..., g_r); f is lower bounded and f, g, h ∈ C^1. Since the partition follows the strictly block-separable structure of the original constraints, the N+1 stage vectors z(0), z(1), ..., z(N) never overlap with one another. We construct the subproblems as follows (t = 0, ..., N):

(P^t):  min_{z(t)}  f(z(t)) + γ^T |H(z(t))| + η^T max{G(z(t)), 0}
s.t.  h^t(z(t)) = 0,  g^t(z(t)) ≤ 0

In practice, not all problems have such a block-separable structure, but we may exploit the part that is block-separable, leaving the remainder as global constraints. The discussion above is therefore of great significance in practice. First, let us define the mixed neighborhood:

N_b(z) = ⋃_{t=0}^{N} N_p^t(z),   N_p^t(z) = {z′ | z′(t) ∈ N_δ(z(t)); z′_i = z_i for all z_i ∉ z(t)}.

Each N_p^t(z), t = 0, ..., N, perturbs z in only one stage, keeping the remaining elements unchanged. We now show the ESPC of the partitioned problem.

Definition 4. Let φ(z, γ, η) = γ^T |H(z)| + η^T max{G(z), 0}. The l1-penalty functions of the original problem P^T and of subproblem P^t are defined as:

L(z, α, β, γ, η) = f(z) + Σ_{t=0}^{N} [α(t)^T |h^t(z(t))| + β(t)^T max{g^t(z(t)), 0}] + φ(z, γ, η)

Γ_d(z, α(t), β(t), γ, η) = f(z) + α(t)^T |h^t(z(t))| + β(t)^T max{g^t(z(t)), 0} + φ(z, γ, η).

Chen has proved a plausible equivalence between a CLM of P^T with respect to N_b(z*) and an extended saddle point, as well as the fact that ESPC under constraint partitioning can be separated into multiple conditions [10]; the deduction, however, has some theoretical deficiencies. Here we further develop these theoretical results to make them more pragmatic.

Theorem 2. ESPC can be rewritten as N+2 necessary conditions: for ∀α, β, γ, η and ∀α** > α*, ∀β** > β*, ∀γ** > γ*, ∀η** > η*, ∀z ∈ N_p^t(z*), t = 0, ..., N:

Γ_d(z*, α(t), β(t), γ**, η**) ≤ Γ_d(z*, α(t)**, β(t)**, γ**, η**) ≤ Γ_d(z, α(t)**, β(t)**, γ**, η**)

L(z*, α**, β**, γ, η) ≤ L(z*, α**, β**, γ**, η**)
Proof. (⇒) If a point z* satisfies ESPC, it must be feasible. So Γ_d(z*, α(t), β(t), γ**, η**) = f(z*) = Γ_d(z*, α(t)**, β(t)**, γ**, η**) and L(z*, α**, β**, γ, η) = f(z*) = L(z*, α**, β**, γ**, η**). Given an arbitrary integer t0 between 0 and N, take α(t0)*, β(t0)*, γ*, η* from the corresponding elements of the penalty parameters; then for ∀α**(t0) > α*(t0), ∀β**(t0) > β*(t0), ∀γ** > γ*, ∀η** > η*,
Γ_d(z, α(t0)**, β(t0)**, γ**, η**) = f(z) + α**(t0)^T |h^{t0}(z(t0))| + β**(t0)^T max{g^{t0}(z(t0)), 0} + φ(z, γ**, η**).
Since z ∈ N_p^{t0}(z*), Γ_d(z, α(t0)**, β(t0)**, γ**, η**) = L(z, α**, β**, γ**, η**) ≥ L(z*, α**, β**, γ**, η**) = f(z*) = Γ_d(z*, α(t)**, β(t)**, γ**, η**).

(⇐) Since Γ_d(z*, α(t), β(t), γ**, η**) ≤ Γ_d(z*, α(t)**, β(t)**, γ**, η**) for t = 0, ..., N, z* satisfies all the local constraints; and z* satisfies all the global constraints because L(z*, α**, β**, γ, η) ≤ L(z*, α**, β**, γ**, η**). Hence z* is feasible and L(z*, α, β, γ, η) = f(z*) = L(z*, α**, β**, γ**, η**). We construct α*, β* from the α(t)*, β(t)* and take γ*, η*; then for ∀z ∈ N_b(z*), ∀α** > α*, ∀β** > β*, ∀γ** > γ*, ∀η** > η*, ∀α, β, γ, η, L(z*, α**, β**, γ**, η**) ≤ L(z, α**, β**, γ**, η**). This last inequality can be verified by contradiction. Therefore, ESPC holds.

Consequently, by Theorem 1 we see that a CLM of P^T is equivalent to an extended saddle point in the mixed neighborhood. Simultaneously, Theorem 2 suggests that we can obtain an extended saddle point by finding the individual extended saddle points in the partitioned search subspaces and resolving the global constraints.
3 ESPC for Solving SVM in Parallel
Training an SVM means solving a quadratic program like (2), where the objective function f(α) = (1/2) α^T Q α − e^T α is a convex quadratic function; the inequality constraints g_1(α) = −α and g_2(α) = α − Ce are linear functions; and the equality constraint h(α) = y^T α is also linear. Obviously, all the assumptions of Theorem 1 are satisfied by the SVM model. Thus we obtain the equivalence between solving the SVM and finding an extended saddle point. In fact, this equivalence applies to any convex program once the assumptions of Theorem 1 are satisfied.

3.1 A Basic Algorithm Based on ESPC
As discussed above, a CLM of P0 is equivalent to an extended saddle point, so the problem becomes how to find an extended saddle point. The second inequality implies that x* is a local minimum of L(x, α**, β**) when α**, β** are two specific constants larger than some threshold values α*, β* ≥ 0, while the first inequality says that x* satisfies all the constraints. Moreover, when a point x* satisfies both conditions, it is sufficient to say x* is a CLM. So we put forward a basic algorithm:
Algorithm 1
Step 1 (initialization). Given an initial point x, set α = 0, β = 0; δ is the growth step size of α, β; ᾱ, β̄ are given thresholds so that α, β are not enlarged indefinitely.
Step 2. For each i = 1, ..., m, if h_i(x) ≠ 0 then let α_i = α_i + δ.
Step 3. For each j = 1, ..., r, if g_j(x) > 0 then let β_j = β_j + δ.
Step 4. Apply some nonlinear unconstrained optimization algorithm to find a minimal point x* of L(x, α, β) with respect to x.
Step 5. If h(x*) = 0 and g(x*) ≤ 0, stop; if ∃i with h_i(x*) ≠ 0 and α_i > ᾱ_i, or ∃j with g_j(x*) > 0 and β_j > β̄_j, stop; else let x = x* and go to Step 2.

The advantage of Algorithm 1 is that the ESPC used as the stopping criterion remains true over an extended region of penalty values. When the penalty parameters are larger than some threshold, we obtain the minimum of the SVM by the equivalence between a CLM and an extended saddle point, and we avoid the drawback of penalty parameters tending to infinity.
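A minimal one-dimensional sketch of Algorithm 1 follows; the test problem (min (x − 2)² s.t. x − 1 = 0, whose CLM is x* = 1 with threshold α* = 2), the grid search standing in for the unconstrained solver of Step 4, and the step size are illustrative choices of ours, not from the paper.

```python
import numpy as np

def f(x): return (x - 2.0)**2
def h(x): return x - 1.0           # equality constraint h(x) = 0

def minimize_L(alpha, xs):
    # Step 4: unconstrained minimization of L(x, alpha) = f(x) + alpha*|h(x)|
    # (a grid search stands in for a real unconstrained solver)
    L = f(xs) + alpha * np.abs(h(xs))
    return xs[np.argmin(L)]

xs = np.linspace(0.0, 3.0, 3001)
x, alpha, delta, alpha_bar = 2.0, 0.0, 0.5, 10.0   # Step 1
while True:
    if abs(h(x)) > 1e-6:           # Step 2: raise the penalty on violation
        alpha += delta
    x = minimize_L(alpha, xs)      # Step 4
    if abs(h(x)) <= 1e-6:          # Step 5: feasible, extended saddle point
        break
    if alpha > alpha_bar:          # Step 5: threshold exceeded, give up
        break
print(round(float(x), 3), alpha)
```

The penalty grows 0.5 at a time and stops as soon as it crosses the finite threshold α* = 2, at which point the unconstrained minimizer of L lands exactly on the CLM x* = 1; no infinite penalty growth is needed.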
3.2 A Parallel Algorithm Based on ESPC
However, matrix Q is so large that most previous methods prove invalid. As Theorem 1 suggests, a CLM of P^T with respect to the mixed neighborhood N_b(z*) is equivalent to an extended saddle point in the mixed neighborhood. Simultaneously, Theorem 2 suggests that we can obtain such an extended saddle point by finding the individual extended saddle points in the partitioned search subspaces and resolving the global constraints. Combining parallel variable distribution (PVD) and parallel variable transformation synchronization techniques, we propose a parallel algorithm based on ESPC as follows:

Algorithm 2
Step 1 (initialization). Give an initial point z, set γ = 0, η = 0; δ is the growth step size and ε the machine precision; γ̄, η̄ are given threshold values.
Step 2. Partition the problem into several subproblems based on the constraint structure: P^t, t = 0, ..., N, obtaining N+1 lower-dimensional variables. Then distribute these variables onto N+1 parallel processors.
Step 3. For each i = 1, ..., m, if H_i(z) ≠ 0 then let γ_i = γ_i + δ.
Step 4. For each j = 1, ..., r, if G_j(z) > 0 then let η_j = η_j + δ.
Step 5 (parallel step). On each processor, solve P^t, t = 0, ..., N independently, and denote the corresponding solutions z^0, ..., z^N.
Step 6 (synchronization step). Update z using z^0, ..., z^N and denote the result z*.
Step 7. If ‖z* − z‖ ≤ ε, H(z*) = 0 and G(z*) ≤ 0, stop; if ∃i with H_i(z*) ≠ 0 and γ_i > γ̄_i, or ∃j with G_j(z*) > 0 and η_j > η̄_j, stop; else let z = z* and go to Step 3.

The subproblem on each processor can be solved with Algorithm 1 or any other existing optimization method. In short, the effort of finding extended saddle points is reduced considerably, since the search spaces of the subproblems are much smaller than that of the original problem.
4 Conclusions
The theory of extended saddle points establishes an equivalence between a CLM and extended saddle points, and ESPC under constraint partitioning provides the theoretical foundation that enables us to transform the original problem into several much smaller subproblems. In this paper we propose a new parallel algorithm for SVM based on ESPC, which eliminates a deficiency of general penalty methods, namely that the penalty parameters may tend towards infinity. We anticipate that our constraint-partitioning parallel algorithm can significantly reduce the search complexity compared with existing solvers, which should be backed by numerical experiments, and we will work on an implementation on large data sets. Though our algorithm is mainly aimed at solving SVM, it is possible to extend our research to other, more general formulations under proper assumptions.
References

1. de Leone, R.: Parallel Algorithm for Support Vector Machines Training and Quadratic Optimization Problems. Optimization Methods and Software 20, 379–388 (2005)
2. Bertsekas, D.P.: Nonlinear Programming. Athena Scientific, Belmont (1999)
3. Song, J., Wu, T., An, P.: Cascade Linear SVM for Object Detection. In: Proceedings of the 9th International Conference for Young Computer Scientists, pp. 1755–1759. IEEE Computer Society, Washington (2008)
4. Woodsend, K., Gondzio, J.: Hybrid MPI/OpenMP Parallel Linear Support Vector Machine Training. The Journal of Machine Learning Research 10, 1937–1953 (2009)
5. Woodsend, K., Gondzio, J.: High-Performance Parallel Support Vector Machine Training. Parallel Scientific Computing and Optimization 27, 83–92 (2009)
6. Zhu, Z.A., Chen, W., Wang, G., Zhu, C., Chen, Z.: P-packSVM: Parallel Primal grAdient desCent Kernel SVM. In: Proceedings of the 2009 Ninth IEEE International Conference on Data Mining, pp. 677–686. IEEE Computer Society, Washington (2009)
7. Xu, Y., Chen, Y.: A Framework for Parallel Nonlinear Optimization by Partitioning Localized Constraints. In: Proc. International Symposium on Parallel Architectures, Algorithms and Programming (2008)
8. Wah, B.W., Chen, Y.: Constraint Partitioning in Penalty Formulations for Solving Temporal Planning Problems. Artificial Intelligence 170, 187–231 (2006)
9. Wah, B.W., Chen, Y.: Solving Large-Scale Nonlinear Programming Problems by Constraint Partitioning. In: van Beek, P. (ed.) CP 2005. LNCS, vol. 3709, pp. 697–711. Springer, Heidelberg (2005)
10. Chen, Y.: Solving Nonlinear Constrained Optimization Problems Through Constraint Partitioning. Ph.D. Thesis, Department of Computer Science, University of Illinois, Urbana (2005)
CPN Tools’ Application in Verification of Parallel Programs Lulu Zhu, Weiqin Tong, and Bin Cheng School of Computer Engineering and Science, Shanghai University, 200072 Shanghai, China
[email protected],
[email protected], cb@ shu.edu.cn.com
Abstract. Verifying parallel programs to assure their correctness is very important, but parallel programs are more complicated than sequential ones because of their nondeterminism, so it is necessary to model the program. Petri nets are a formal method and give a good description of the problems parallel programs encounter. CPN Tools can establish the Petri net model of a parallel program and analyze it with the state space tool. To illustrate the usability of CPN Tools, we give a simple example of modeling-based correctness verification by analyzing an MPI parallel program. Keywords: Petri net, CPN Tools, state space tool, parallel program.
1 Introduction
Correctness is the principal requirement of a program, but parallel programs are more complicated than sequential ones because of their nondeterminism [1], and a particular execution of a parallel program is unlikely to occur again. So it is necessary to model the program and analyze the model in order to avoid unnecessary losses. Petri nets are a formal method and give a good description of the problems parallel programs encounter, for example concurrency, nondeterminism, synchronization, communication, deadlock and so on [2,11]. Meanwhile, the graphical notation and strict mathematical definition make them more intuitive than other tools. Many scholars have done a lot of work on using Petri nets to verify parallel programs. What is more, there is a mature tool called CPN Tools [4], which makes modeling, simulation and verification simple and practical. CPN Tools is not a simple modeling tool for plain Petri nets, but rather a tool for Coloured Petri nets, an extension of Petri nets. It is one of the most sophisticated modeling and simulation tools [10], developed at the University of Aarhus. It supports the powerful ML language and has strong scalability. Using CPN Tools, it is possible to investigate the behavior of the modeled program using simulation, to verify correctness by means of state space methods and model checking, and to conduct simulation-based performance analysis [3]. In other words, the same model can be used both to verify the logical correctness of the system and to analyze whether its function meets the demand. After establishing a model with CPN Tools, a semantic check is performed; if the semantic check finds no problem, the simulation stage begins. Using the state
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 137–143, 2010. © Springer-Verlag Berlin Heidelberg 2010
space tool of CPN Tools, one can calculate a state space and save a state space report, and thereby analyze whether there is deadlock, unreachability or synchronization loss. With this tool, correctness verification becomes very convenient, without carrying out the complex mathematical proofs hidden behind the simulation results. The remainder of this paper is organized as follows. Section 2 describes the Petri net model of MPI parallelism and models a Petri net for a simple example. Section 3 analyzes four common mistakes in MPI parallel programs and then introduces the state space tool of CPN Tools. An example of using the state space tool to analyze a model is given in Section 4. Finally, Section 5 summarizes the main results and gives an outlook.
2 Petri Net Model of the Parallel Program
The definitions and concepts of Petri nets are given in reference [7] and are not repeated here. Among all the standards for programming message-passing concurrent applications, MPI is one of the most popular because it incorporates most of the advantages of existing standards [6]. MPI is at present widely used in parallel programs, so it is chosen to be modeled by Petri nets in this paper. In the MPI programming model, point-to-point communication functions implement message passing between two concurrent processes. According to the restrictions between the sending and receiving processes of a message, communication can be divided into four modes: standard, buffered, synchronous and ready. On the other hand, according to the relationship between the message-passing operation and other operations, it can be divided into blocking and non-blocking operations. Combining these two classifications, there are eight kinds of point-to-point communication functions [8]. Reference [5] gives the Petri net models of the eight kinds of communication. These basic Petri net models of MPI functions make it easy to model any MPI program. The modeling and verification of an MPI program with Petri nets can be achieved through the following steps:
Step 1: Identify the communication parts and serial parts of each parallel program when analyzing the MPI program; determine the pattern that the program's communication functions use.
Step 2: Model the serial parts with established models [5] of the programming language.
Step 3: Model the communication parts with the known Petri net models in reference [5].
Step 4: Run the model in CPN Tools, simulate it, and then enter the state space tool. According to the state space report, the correctness of the program can be analyzed.
Step 5: If the result shows that there are errors in the program, analyze the reason and give a solution to correct them.
Step 4 is the key process in this work. In this step, CPN Tools constructs a model and simulates it. What is more, it saves a state space report through the state space tool.
2.1 Model Description
The following is a simple MPI program in which two processes exchange data: process 0 sends a message to process 1 and receives one from process 1; meanwhile, process 1 sends a message to process 0 and receives one from process 0. This example illustrates how to model a parallel program with Petri nets and how to use CPN Tools for correctness verification. The example is adapted from Dou Zhihui (2001), The Parallel Programming Technology of High Performance Computing: Design of MPI Parallel Programs, Tsinghua University Press, Beijing; it is lightly corrected here to use the standard MPI API, and both branches deliberately call MPI_Recv before MPI_Send:

#include <mpi.h>

int main(int argc, char *argv[]) {
    int myrank, count = 1, tag = 0, sendbuf = 0, recvbuf = 0;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0) {
        /* process 0 receives from process 1, then sends to it */
        MPI_Recv(&recvbuf, count, MPI_INT, 1, tag, MPI_COMM_WORLD, &status);
        MPI_Send(&sendbuf, count, MPI_INT, 1, tag, MPI_COMM_WORLD);
    } else {
        /* process 1 receives from process 0, then sends to it */
        MPI_Recv(&recvbuf, count, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
        MPI_Send(&sendbuf, count, MPI_INT, 0, tag, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}

Assume the two processes greet each other; that is, process 0 sends the message "hello, process 1!" to process 1, and process 1 sends the message "hello, process 0!" to process 0. The communication the algorithm uses is the standard blocking mode, so it can be modeled as in [5]. The complete model is shown in Figure 1:
Fig. 1. Petri net model of the program
Above, the upper part shows the execution of process 0 (myrank = 0) and the lower part shows the execution of process 1. In this model, the place "start" represents the initial state of the program; the place "end" represents the termination state; the transitions "init" and "final" respectively model the calls of MPI_Init and MPI_Finalize; the place "rank" marks the end of initialization; the transition "get" calls the function MPI_Comm_rank, that is to say,
it obtains the identifier of the current process; buf1 and buf2 are the message buffers; rec1 and rec2 denote the operations during message receiving; received1 and received2 mark receiving completion; send1 and send2 model sending messages; ack1 and ack2 confirm that the operation is done; P1 and P5 are the starting points of processes 0 and 1; P2 and P6 represent the preparation before processes 0 and 1 send messages; P3 and P7 represent the end of processes 0 and 1 sending messages; P4 and P8 represent the termination of processes 0 and 1.

In Figure 1, some places contain a number of marks, called "tokens" [7]. Each token has a data value attached to it, called the token colour. Compared with low-level Petri nets, the tokens and their colours together can represent complex information easily. The initial place "start" has a colour set that contains the process number and the message to be sent. When the trigger condition of the transition "init" is met, a token is added to "rank", and the process begins. Finally, when a token emerges in the terminal place, the model has run to the end.

2.2 Model Variable Declaration
The colour set "INT" is used to model the serial number of the process; the colour set "DATA" is used to model the payload of the data packets that the process should send; and the colour set "INT×DATA" is used to model data packets containing a sequence number and some data.
Fig. 2. Data definition of the model
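The colour sets in Fig. 2 can be mimicked in ordinary code; the sketch below is our own Python analogy (not CPN ML), in which tokens of colour set INT×DATA are (process number, message) pairs and the transition "init" moves a token from "start" to "rank" while preserving its colour.

```python
# Tokens of colour set INT x DATA: (process number, message to send).
# Places hold multisets of tokens, modeled here as lists; this is an
# illustrative Python analogy to the CPN ML declarations, not CPN ML.
places = {
    "start": [(0, "hello, process 1!"), (1, "hello, process 0!")],
    "rank":  [],
}

def fire_init(places):
    # The 'init' transition consumes one token from 'start' and
    # produces it in 'rank', preserving its colour (the data value).
    token = places["start"].pop(0)
    places["rank"].append(token)
    return token

fire_init(places)
fire_init(places)
print(places["rank"])
```

After two firings, both coloured tokens have moved from "start" to "rank", which is the behaviour the enabled "init" transition exhibits in the simulator.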
3 The Application of CPN Tools in Verification
Generally speaking, there are four types of abnormal phenomena in the design of MPI parallel programs: lack-of-sending messages, orphan messages, deadlock, and livelock [8]. The Petri net models of these four abnormal phenomena differ, and they are characterized as follows:
(1) Lack-of-sending message: a lack-of-sending message exists in the communication if and only if there is a place whose preset is empty. In other words, a process in the parallel program needs to receive a message, but no process sends it.
(2) Orphan message: an orphan message exists in the communication if and only if there is a place whose postset is empty. Contrary to the lack-of-sending message, a process of the parallel program sends a message, but no process receives it.
(3) Deadlock [12]: in the deadlock state, no statement can be executed.
(4) Livelock: in the livelock state, some statements are executed repeatedly, but the program cannot terminate.
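The first two structural checks can be sketched directly: given the arcs of a net, a place with an empty preset signals a lack-of-sending message and a place with an empty postset signals an orphan message. The tiny net and its names below are illustrative, not taken from the paper's model.

```python
# Arcs of a tiny illustrative net: (source, target) pairs between
# transitions (send/recv actions) and places (message channels).
arcs = [
    ("send1", "chan_0_to_1"),   # process 0 sends into the channel place
    ("chan_0_to_1", "rec2"),    # process 1 receives from it
    ("chan_1_to_0", "rec1"),    # process 0 expects a message here,
                                # but no transition feeds chan_1_to_0
]
places = {"chan_0_to_1", "chan_1_to_0"}

def preset(p):   # transitions with an arc into place p
    return {s for s, t in arcs if t == p}

def postset(p):  # transitions with an arc out of place p
    return {t for s, t in arcs if s == p}

lack_of_sending = [p for p in places if not preset(p)]
orphan          = [p for p in places if not postset(p)]
print(lack_of_sending, orphan)
```

Here "chan_1_to_0" has an empty preset, so the check reports a lack-of-sending message; neither channel place has an empty postset, so no orphan message is reported.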
CPN Tools can model Petri nets and find these potential errors. In particular, it can generate the state space, which captures all the standard properties of a Petri net, such as reachability, boundedness, liveness and fairness. These properties allow us to analyze the Petri net and then verify the correctness of the program.

3.1 Introduction to State Space Tools and Their Functions
The state space is also called the occurrence graph, reachability graph or reachability tree [9]. The state space tool is integrated with CPN Tools, which means that you can easily switch between the editor, the simulator, and the state space tool. When a state space node has been found, it can be inspected in the simulator: you can see the enabled transition instances, investigate their bindings and run simulations. When a marking has been found in the simulator, it can be added to the state space or used as the initial marking for a new state space. The state space tool can calculate a state space and save a state space report. To successfully enter the state space tool, the net must meet the following conditions: (1) there is no syntax error in the net; (2) all transitions, places and pages in the net have names; (3) the names are unique and are alphanumeric ML identifiers. Entering the state space tool takes some time; a green marked box appears on the left of the screen when the tool has been entered successfully. Then you can apply the Calculate State Space tool, and after that the Calculate Strongly Connected Components (Scc) Graph tool. You can use the Save Report tool to generate a text file containing a standard report providing information about: statistics (size of the state space and Scc graph), boundedness properties (integer and multi-set bounds for place instances), home properties (home markings), liveness properties (dead markings, dead/live transition instances), and fairness properties (impartial/fair/just transition instances).
3.2 Result

Following the approach above, part of the report generated automatically by the state space tool is shown below. The report lists dead markings, which means that a deadlock exists in the model. From the dead transition instances, we can see that the deadlock occurs near the transitions "ack1", "ack2", "rec1", "rec2", "send1" and "send2". Through observation of the dead transition instances and analysis of the model and the program, we find that the deadlock arises because the two processes each wait for the other to finish sending its message. Simulating the model with the simulation tool, we also find that it stops sending messages after the messages are delivered to places "P1" and "P5", which shows that "rec1" and "rec2" are unreachable. Therefore, we exchange the positions of the send and receive functions, i.e., swap the sending and receiving locations in the model, and re-design the model of the program. The revised model is shown in Figure 4. Simulating and analyzing the revised model yields the state space report shown in Figure 5.
142
L. Zhu, W. Tong, and B. Cheng
Fig. 3. State space report
Fig. 4. Revised model
Fig. 5. State space report of the revised model
The report shows that the model has no deadlock and that all transitions are live; the revised model has eliminated the deadlock that occurred in the original model, so the modification of exchanging the positions of the send and receive functions is correct. Simulating the model again, we find that it sends messages correctly and that every transition and place is reachable.
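The fix found above — both processes sending first causes a mutual wait, and swapping send and receive in one process removes it — can be reproduced with a toy scheduler for synchronous (rendezvous) message passing. This Python sketch is an illustrative reconstruction, not the paper's MPI program; the process descriptions are invented for the example.

```python
def run(procs):
    """Synchronous (rendezvous) execution: a 'send' completes only when the
    peer process is simultaneously at the matching 'recv', and vice versa.
    Returns 'completed' or 'deadlock'."""
    pc = [0] * len(procs)                      # program counter per process
    while True:
        if all(pc[i] == len(p) for i, p in enumerate(procs)):
            return "completed"
        progressed = False
        for i, p in enumerate(procs):
            if pc[i] == len(p):
                continue
            op, peer = p[pc[i]]                # current operation, its target
            if pc[peer] == len(procs[peer]):
                continue
            peer_op, peer_target = procs[peer][pc[peer]]
            # Rendezvous: the two current operations must match each other.
            if peer_target == i and {op, peer_op} == {"send", "recv"}:
                pc[i] += 1
                pc[peer] += 1
                progressed = True
        if not progressed:                     # no pair can proceed
            return "deadlock"

# Both processes send first, then receive: mutual wait, as in the original model.
bad = [[("send", 1), ("recv", 1)],
       [("send", 0), ("recv", 0)]]
# The revised model: exchange send and receive in one process.
good = [[("send", 1), ("recv", 1)],
        [("recv", 0), ("send", 0)]]

print(run(bad))    # -> deadlock
print(run(good))   # -> completed
```

The same state-space argument applies: in the bad ordering no transition is enabled once both processes reach their sends, which is exactly the dead marking reported by the tool.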
4 Conclusions and Outlook

This paper has described the application of CPN Tools to the verification of parallel programs through a simple example. The state space tool avoids the manual construction of a reachable-marking graph and instead provides a state space report, which greatly eases correctness verification from a quantitative perspective. Although many scholars have done a great deal of research on the correctness verification of parallel programs, applications of CPN Tools to correctness verification remain rare. We therefore hope that this article will be of benefit to related researchers.
Acknowledgments. This work is supported in part by the Aviation Industry Development Research Center of China and the Shanghai Leading Academic Discipline Project (Project Number J50103).
References
1. Barkaoui, K., Pradat-Peyre, J.-F.: Verification in concurrent programming with Petri nets structural techniques. In: Proceedings of the 3rd International IEEE High-Assurance Systems Engineering Symposium, pp. 124–133. IEEE Computer Society Press, Los Alamitos (1998)
2. Sui, D., Wang, L., Ye, Q.: Petri model and validation for MPI program. Chinese Journal of Computer Applications and Software 24(10), 205–209 (2007)
3. Jensen, K., Kristensen, L.M., Wells, L.: Coloured Petri Nets and CPN Tools for Modeling and Validation of Concurrent Systems. University of Aarhus (2007)
4. CPN Tools, http://www.daimi.au.dk/CPNTools/
5. Cui, H.: The Model and Verification of MPI Parallel Programs Based on Petri Net. Shandong University of Science and Technology (2004)
6. Siegel, S.F., Avrunin, G.S.: Modeling MPI Programs for Verification. Technical Report UM-CS-2004-75, Department of Computer Science, University of Massachusetts (2004)
7. Yuan, C.: The Principles of Petri Net. Publishing House of Electronics Industry, Beijing (2005)
8. Du, Z.: The Parallel Programming Technology of High Performance Computing – Design of MPI Parallel Programs. Tsinghua University Press, Beijing (2001)
9. Jensen, K., Christensen, S., Kristensen, L.M.: CPN Tools State Space Manual. University of Aarhus (2006)
10. Zhu, L., Sui, R., Kong, Y.: Simulation based performance analysis in CPN Tools. Chinese Journal of Microcomputer Applications 29(4), 78–81 (2008)
11. Zhai, D., Li, L., Zhang, S.: Modeling and simulation of multi-agent scheduling systems based on HTCP-net. Chinese Journal of Systems Engineering and Electronics 31(1), 100–107 (2009)
12. Cui, H., Liu, Q.: Detection and Prevention of Communication Deadlock for Parallel Programs Based on Petri Net. Chinese Journal of Computer Engineering 34(23), 50–52 (2008)
The Study on Digital Service System of Community Educational Resources Based on Distributed Technology Jiejing Cheng, Jingjing Huang, and Xiaoxiao Liu School of Education, Nanchang University, 330031 Nanchang, China
[email protected],
[email protected],
[email protected]
Abstract. From the perspective of the application of distributed technology, this paper probes into its impact on community education and the structure of its service system, and proposes how to construct a distributed community educational resource service, covering the theoretical frame of the study, resource construction, and the path, mechanism and safeguards of the service. A systematic elaboration is then made of the strategies and solutions for a digital service system of distributed community educational resources.

Keywords: digital resources, community education, service, distributed technology, educational resources.
1 Introduction

With the rapid development of digital technology and the Internet, the amount of data on the Internet is growing at high speed, which leads to a relative shortage of data-processing capacity. How to realize the distributed sharing of resources and computing capacity, and how to cope with the current high-speed growth of data on the Internet, are problems that need to be tackled as soon as possible. In this developmental context, it is imperative to explore the application of distributed technology in the digital service of community educational resources. Distributed computing technology is an applied technology involving many areas and practical problems. It is the development, or the realization, of the computer-science concepts of Parallel Computing, Distributed Computing and Grid Computing, and it is the result of the mixed evolution and rise of concepts such as Virtualization, Utility Computing, IaaS, PaaS and SaaS. The basic principle of distributed technology is that, by spreading computation across a large number of distributed computers rather than a local computer or remote servers, the operation of distributed data centers becomes more similar to the Internet. The concept of distributed technology concentrates computing and storage on the network and simplifies local applications, reducing the fat client to a browser with a single supporting script, thereby minimizing the demands placed on personal computers while maximizing their usable capability [1]. The structure of distributed technology is shown in Fig. 1. In the blueprint of the applications of distributed technology, users need only a terminal such as a monitor or a terminal platform to realize all the functions and operations via Web services. For users, distributed technology pools all the available power and resources and provides them for each user to utilize.

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 144–151, 2010. © Springer-Verlag Berlin Heidelberg 2010
Fig. 1. Structure of distributed technology
2 Impact

The distributed computing model can greatly reduce the cost of constructing an educational information system. For community education, building a computing center is expensive, and it is hard to keep pace with the fast growth of the educational information system and the varied requirements of the service. The distributed computing model offers community education a suitable reference plan: the respective missions of the educational department's data center and Internet center can be fulfilled by distributed computing services, saving the cost of purchasing expensive hardware and of frequent maintenance and upgrades. The main influences on the community educational area are as follows:

2.1 The Transformation in the Sharing of Educational Resources

Although both the traditional Internet teaching system and the Internet-based community education system emphasize sharing, this kind of sharing still remains at a low level [2]. Learners need not know who provides the learning service or how it is provided, while the developers of community educational resources can achieve faster development through a unified interface. The distributed community education system takes colleges and scientific research institutions as its key nodes, and companies and groups specializing in Internet education can join the distributed community education system.

2.2 The Transformation in the Organization of Educational Resources

The distributed educational system is essentially an infrastructure that permits resources and services to be accessed independently of their location; they are provided by machines and networks distributed geographically, and the basic operation supporting such location-independent computing is resource discovery.
To accomplish the missions that customers submit and the requirements they propose, we should match all the usable resources of distributed education and work out the best and most reasonable resource-deployment scheme and resource-scheduling strategy [3].

2.3 The Transformation in Educational Ideology

Distributed community education transfers the design and development of teaching-centered resources, such as teaching materials and presentation slides, into the design of structured resources such as Internet curricula and research-study columns with a certain degree of organization, and turns the construction of resource content toward supporting learning; this caters to the recently proposed educational ideology.

2.4 Offer of Economical Application Software Customization

Software as a Service (SaaS) is a type of service provided by distributed computing technology, which delivers software as an online service; the Google Enterprise Application Suite (EAS) is an application of this type. An education system that accesses such services through remote terminals no longer needs to spend large sums on purchasing commercial software licenses, for example for office software or e-mail systems, because all the above distributed services are provided at low cost, and some are even free [4]. Distributed technology places low requirements on the client and thus earns much popularity, so purchasing and maintenance costs can be saved.

2.5 Provision of a Reliable and Safe Data Storage Center

In the age of viruses and hackers, the most important issue is how to store data safely. This poses a challenge to schools, and it is more serious in schools short of professionals. Distributed computing can provide a safe and reliable data storage center.

2.6 Convenience of Community Educational Resource Building and Sharing

At present, educational administration organizations at all levels, schools and education enterprises in our country have already constructed massive community educational information resources, and are constructing more. Once educational information resources are stored on a distributed platform, their sharing becomes more convenient and quick [5].
3 Content Analysis

3.1 Construction of the Service System

The digitalization of community education aims to construct the educational platform, lifelong education and learning system of a digital community through normative procedures for building learning organizations [6]. The lifelong learning system of distributed community education in fact helps community members decide what and how to learn. In the construction of the distributed community educational system, regional government branches at all levels, with the efforts of parent schools, pool the project and integrate children's palaces, gyms, cultural relics, historic sites and community schools. Besides, experimental studies on digital community education should be conducted to probe into the cooperative development mode among school, family and community education. The architecture of the distributed community education service is illustrated in Figure 2.

3.2 Mechanism and Scale of Interactive and Cooperative Development among School, Family and Community Education

According to the successful experiences of developed countries and the features of different communities and departments, a linked digital community education developmental pattern, centered on basic-level functional departments, will be constructed [5]. Through the power of government, community sectors are mobilized to participate in community education, which makes the most of the advantages of the resources. Besides, the active-type developmental pattern is based on middle and primary schools, where coordinating boards for digital community education set up by nearby schools are responsible for the study of extracurricular education. The synthesizing-type developmental pattern is e-school based, where a regional community networking academy organizes educational activities through curriculum development and project development, and conducts cultural, vocational and professional community education through academic and non-academic means. Finally, with the social forces organized by such influential businesses as the information industry, a digital community education of the community-participatory and self-government type is created, and meanwhile the logical starting point and thinking orientation of digital community education development are analyzed. According to the above analysis, the network model of digital community education is given in Figure 3.
Fig. 2. The architecture of distributed community education service
Fig. 3. The network model of digital community education
3.3 Service Platform and Resource Pool

Based on the Internet and communication technology, educational management, resource development, educational study and distance education in e-education are explored in full range. Further, school, family and community education are integrated through various software technologies [7]. The community digital education platform can build many user-oriented databases according to different levels and needs.

3.4 Environment and Facility System of the Platform

In a digital service system of community educational resources based on distributed technology, the educational environment must ensure that the digitalization of educational resources and the construction of the information service platform are systematic and scientific. Thus, governments at all levels should be clear about their role in the entire construction. What's more, the community and government ought to formulate scientific developmental strategies; unify leadership, coordination and management; achieve overall planning and building; strengthen guidance, law enforcement, law observance and related supervision; and increase investment.

3.5 Evaluation and Specific Steps of the Service

This contains the evaluation of the distributed community educational resources service, of community service, of community members, and of educational benefits [8, 9]. After referring to experiences at home and abroad and combining the practical situation of the community, the following initial evaluation indicators for distributed community education are proposed: 1) entire plans and annual plans, which should contain clear targets and feasible measures; 2) organization, management, advocacy and mobilization; 3) network and resources, including the construction of an educational MAN and an educational resource bank, and the development of applied platforms and learning tools; 4) construction of bases, including the integration and application of education and training organizations open to the community; 5) implementation and effectiveness, mainly concerning the construction of education for pre-school children, youth, adults, the aging and migrants, as well as learning organizations; 6) safeguard mechanism, adequate financial resources, theoretical research and counseling, checking and evaluation; 7) significant characteristics in the application of the educational platform, and innovation in theory, mode and method.

3.6 Theory Diffusion and Application Mechanism

In sum, with reference to theoretical and practical experience in developed regions, and the relevant rules and regulations laid down by the community, this paper suggests choosing suitably conditioned communities to implement the construction and practical studies, in order to perfect the theory and method of the distributed community educational resources service and the basic contents, mechanism of action, policy-making, organization, coordination, evaluation, management and service of educational resources development.
Besides, countermeasures and suggestions are raised for the effective development of distributed community education resources [10].
4 Strategies and Methods

Digital community education should abide by On Deepening Educational Reform and Promoting Quality Education, so that education is surveyed in the light of the local economy, society, technology and culture [11].

4.1 Significance

Experimental work in community education, the establishment and improvement of a lifelong education system, and the quality promotion of the whole nation are proposed [12]. The development of the distributed community education service should proceed as follows: fully realize the roles and effects of the construction of a digital learning community, a harmonious society and community culture, and make clear the gaps, enhancing the sense of urgency and mission in the development of distributed community education.

4.2 Main Ideas

The digital community education service plays an important role from the perspective of the relation between the service and the school [13]. Based on the relationship between the distributed community education service, the school and the family, the service connects all kinds of education. The distributed community education service also has a pushing effect on urban and rural economic development [14].

4.3 Management System

Therefore, the education service is brought into the objectives of government work and the overall plan of regional economic, social and educational development. What's more, a service committee, made up of responsible officers from the sectors of education, publicity, finance, etc., is established to take charge of making overall plans and coordinating the education service [15]. Administrative organizations of the distributed community education service are set up at both the region and street levels. The inter-organizational networks of the education service, in which the subdistrict office centrally plans the whole project, optimize the operating mode.

4.4 Operating Mechanism

Urban and rural governments at all levels should pool and coordinate all forces; support and participate in the distributed community education service; create a favorable atmosphere of public opinion and policy; clarify the responsibilities of all sectors; and bring the education service into annual appraisal criteria and conduct management by objective responsibility, forming an education-society interactive operating mechanism of vertical and horizontal integration [16].

4.5 To Increase the Input in Environment and Facilities

To accelerate community infrastructure, team foundations and the construction of grassroots organizations should first be strengthened in the distributed community education service.
Second, policies need to be studied and made to accelerate the distributed community education service, to carry out its safeguard mechanism, to clarify its welfare-oriented quality, and to guarantee the sources of funds. Third, the expenditure of the distributed community education service should be listed in the local financial budget, so that separate funding and supervision at all levels can keep the investment in the education service on a legal track [17].

4.6 Laws and Regulations on Management

A management measure for the distributed community education service should be drafted as soon as possible, which makes clear the guiding principles of the education service; proposes detailed targets and main tasks for the coming five years; establishes and completes the mechanism of management and operation; and strengthens the leadership and safeguarding of community education [18]. Furthermore, related policies, such as implementation guidelines for distributed community education resource sharing, should be formulated at the same time to encourage all schools in the community to open their resources to society, and to encourage the more developed schools to provide various educational services through modern IT and distance education. In addition, an evaluation mechanism ought to be established: supervisory organs in governments at all levels should bring community education into their working targets and draw up feasible supervision and evaluation programs to ensure the healthy development of community education.
5 Conclusion

Distributed community education is a service system that benefits the people and promotes e-education and urban-rural education integration. This paper aims to integrate all kinds of educational resources in distributed communities, implement lifelong learning, establish a theoretical mode and evaluation system, and construct lifelong-learning communities. Besides, it further expands and perfects the ways and means of cooperative development among school, family and community education, and offers a construction program for distributed community development and for the management and service of distributed community education. It is of great value to the planning, design, construction and management of distributed community development.
References
1. Huaxiang, X.: The Study of Pattern of Cloud Computing on Education. Computer Knowledge and Technology 10, 2690–2691 (2009)
2. Xueli, H., Miaomiao, Z.: The Study and Design of Distance Education System Based on CSW. Journal of Anhui TV University 1, 56–59 (2009)
3. Yang, Y.: Grid and Its Application in Distance Education. Computer Development and Application 8, 57–59 (2007)
4. Mingli, L.: Application of Grid in Distance Education. Journal of Software 8, 38–39 (2008)
5. Zan, M., Shan, Z., Li, L., Aijun, L.: The Application and Study of Modern Distance Education Architecture Based on Grid. E-Education Research 9, 23–28 (2006)
6. Weimin, R.: Learning Society, Digital Learning Port and Public Service System. Open Education Research 1, 12–14 (2007)
7. Lu, Z., Sujuan, Y.: The Thoughts of Community Education Mode Research in Developed Countries. Journal of Guangzhou Radio & TV University 1, 10–13 (2008)
8. Yaoxue, Z.: Digital Learning Port and Lifelong Learning. China Distance Education 1, 47 (2007)
9. Jianping, Y.: To Construct Digital Community and Learning City, http://www.google.com
10. Su, C.: On the Construction of Digital Community Learning Center. Education and Vocation 26, 68–70 (2009)
11. Jian, H.: On the Construction of Digital Community. Shanxi Architecture 34(32) (2008)
12. Hongwei, C., Hui, Y.: On Digital Community Construction Architecture. Journal of Harbin Financial College 1, 40–43 (2006)
13. Jiejing, C., Dongping, X., Xiaoxiao, L., Jingjing, H., Lin, Y.: The Theoretical Research on Digital Educational Information Resources Service System. In: 2010 Second International Workshop on Education Technology and Computer Science (ETCS 2010), pp. 266–269. IEEE Computer Society Conference Publishing Services, Los Alamitos (2010)
14. Jiejing, C., Xiaoxiao, L.: The Integration and Application of Resources for Distance Education Teaching Based on Distributed Technology. In: DCABES 2009, pp. 137–111. Electronic Industry Press (October 2009)
15. Gengsheng, W., Haixia, L.: Cost Analysis and Comparison of e-Colleges: A Case Study of Tsinghua University in Beijing, China. Distance Education in China 9, 74–77 (2009)
16. Ping, W., Jiping, Z.: Cloud Computing and Network Learning. Modern Educational Technology 11, 34–36 (2008)
17. Yan, Z., Hongke, L.: Cloud Computing and Application in Education. Software Guide 8, 71–72 (2009)
18. Xianyong, L., Xulun, L.: On the Application of Cloud Computing Technology in Library. The Journal of the Library Science in Jiangxi 1, 105–106 (2009)
Research into ILRIP for Logistics Distribution Network of Deteriorating Item Based on JITD Xiang Yang, Hanwu Ma, and Dengfan Zhang School of Business Administration, Jiangsu University, 212013, Jiangsu Province, P.R. China
[email protected]
Abstract. Just-in-time distribution (JITD) is a mode of distribution based on demand-pull replenishment. Considering the characteristics of deteriorating items and introducing the JIT philosophy, a model of the Integrated Location Routing and Inventory Problem (ILRIP) is established for a two-stage logistics distribution network with a single factory, multiple depots and multiple customers. The ILRIP is used to select distribution centers (DCs) from several potential locations, determine the optimal order quantities and reorder points of the DCs, and schedule the vehicle routing. An improved Particle Swarm Optimization (PSO) algorithm is designed to solve it. At the end of this paper, a numerical example is given. The results show that the established distribution network can improve the customer service level and the stability of quick response, shorten the commodity circulation cycle, and enhance enterprise competitiveness.

Keywords: Logistics management; distribution network design; deteriorating item; JITD; PSO.
1 Introduction

Distribution Network Design (DND) has always been one of the important strategic issues of an enterprise. DND must be market-oriented and provide the right products and services at the right time and place, with low cost, fast speed and high stability. In general, DND includes three decision levels: strategic facility location, tactical inventory policy and operational transportation routing [1]. Daskin (1985) [2] first realized that facility location, inventory control policy and vehicle routing are interrelated, and argued that it is necessary to consider them together. Nozick et al. (1998) [3] argued that the inventory cost could be included in the facility's fixed cost. Liu et al. (2003) [4] proposed a mathematical model for the single-product multi-depot location routing problem that takes inventory control decisions into consideration; assuming that customers' demands are stochastic and that stocks are held only by the customers, it aims at deciding the replenishment quantity and order level for each route. To solve the model, a two-phase heuristic method was proposed: in phase 1, an initial solution was determined using a route-first, location-allocation-second approach based on minimal system cost; in phase 2, an improvement heuristic that searches for a better solution based on the initial solution of phase 1 was developed. Zuo-Jun et al. (2007) [5] established a facility location model that takes both inventory decisions and routing into account: customers face random demand, and the objective is to minimize the total cost when each distribution center maintains a certain amount of safety stock in order to achieve a given service level for the customers it serves. A Lagrangian-relaxation-based solution algorithm was proposed; because it is a strategic location problem, the authors simplified the model by using continuous approximation for the optimal routing cost. Considering the stochastic demand for spare parts, and taking a bi-level spare-parts logistics system as the research object, Lv et al. (2010) [6] established an LRIP model with soft time windows and designed a two-phase hybrid heuristic algorithm based on tabu search and an improved C-W algorithm. The research above does not consider deteriorating items; however, in daily life there are a large number of deteriorating items, such as vegetables, fruit and milk. Moon et al. (2005) [7] established an inventory model for deteriorating items under inflation. Huang et al. (2009) [8] considered the DND of a class of deteriorating items with short selling seasons, integrating location and inventory policy. When designing a distribution network, few scholars take all three of the above decision factors for deteriorating items into account; most focus on reducing cost only. However, because of the diversification of customers' demands, shorter product lifecycles and increasingly remarkable economic globalization, enterprises face great pressure to reduce costs, increase income and improve the stability of delivery. The basic idea of JIT is to deliver the product to the right place at the right time, in the right quantity and with the right quality, meeting customers' needs as quickly as possible [9].

JITD can optimize customer service and eliminate the unnecessary stock of deteriorating items; only in this way can the operating costs and risks caused by high stock levels be reduced effectively. In analyses of JITD, scholars have mainly examined its effect on improving customer satisfaction and eliminating unnecessary stock; few have analyzed its effect on establishing the logistics network and on logistics operating costs, or evaluated JITD quantitatively [10-11]. As a result, when designing a logistics network for deteriorating items, stock should be transferred to the distribution centers, and the Integrated Location Routing and Inventory Problem (ILRIP) should be considered on the basis of JITD, from the viewpoint of integrated logistics. The remainder of this paper is organized as follows. The problem formulation is presented in Section 2. The solution method, a Particle Swarm Optimization (PSO) algorithm, is designed in Section 3. Section 4 presents the numerical example, and the main conclusions are described in Section 5.
2 Model

The two-stage distribution network studied here includes a single factory, multiple potential DCs and the customers. The purpose is to determine the optimal number and locations of DCs, their inventory control policies, the allocation of customers to each DC, and the routing from each DC.
2.1 Basic Assumptions

In order to highlight the main factors and simplify the model, the following assumptions are made:
(1) The number and locations of customers are fixed. The distribution cycle is T_R; customer i's daily demand is d_i.
(2) The customers' demands must be met; shortages are forbidden. Each customer is served by a single vehicle, and there is only one vehicle on each route.
(3) The locations of potential DCs are fixed, and only the DCs hold inventory. The DCs adopt the (R, Q) ordering policy.
(4) All vehicles are identical and constrained in load capacity, and items do not deteriorate during transport. Each vehicle starts at a depot, visits a set of customers on a route and returns to the same depot.
(5) The lifetime of the items obeys an exponential distribution with deterioration rate θ.
In order to describe the model clearly, the following parameters are introduced.
K = {1, 2, ..., k} is the set of vehicles;
I = {1, ..., N} is the set of clients;
J = {N+1, ..., N+M} is the set of distribution centers;
S_j is the set of clients served by DC j, ∀j ∈ J;
d_i is the daily demand of customer i, ∀i ∈ I;
D_j is the daily demand of DC j, ∀j ∈ J;
L is the lead time of the DCs;
OC_j is the order cost of DC j, ∀j ∈ J;
C is the load capacity of the vehicles;
F_j is the fixed cost of establishing DC j, ∀j ∈ J;
C_V is the fixed cost for the usage of a vehicle;
h_j is the inventory cost per unit time and unit of goods at DC j, ∀j ∈ J;
C_k is the loading charge of vehicle k, ∀k ∈ K;
T_j is the optimal order cycle of DC j, ∀j ∈ J;
T_R is the distribution cycle of the clients;
C_M is the shipping cost per unit time and unit of goods for transferring goods from node i to node j, ∀i, j ∈ (I ∪ J);
t_ij is the shipping time from node i to node j; for simplicity, the unloading time at the nodes is ignored and t_ij is assumed fixed, i.e., not subject to traffic conditions, ∀i, j ∈ (I ∪ J);
t_i is the arrival time required by client i, ∀i ∈ I;
T_i is the moment at which the goods arrive at client i, ∀i ∈ I;
l is the penalty cost per unit time when goods do not arrive on time;
a_j is the unit shipping cost from the factory to DC j;
b is the unit cost of deteriorated items;
I_j(t) is the inventory level of DC j at time t, ∀j ∈ J;
Q_j is the optimal order quantity of DC j, ∀j ∈ J;
X_kgh = 1 if g precedes h on the route of vehicle k, ∀g, h ∈ (I ∪ J), g ≠ h, ∀k ∈ K; otherwise X_kgh = 0;
y_j = 1 if facility j is established, ∀j ∈ J; otherwise y_j = 0;
Y_ij = 1 if client i is served by facility j, ∀i ∈ I, ∀j ∈ J; otherwise Y_ij = 0.
2.3 Establishing the Model

For normal profit-pursuing enterprises, it is necessary to take costs into account. For the inventory policy, we adopt the Economic Order Quantity (EOQ)
Research into ILRIP for Logistics DND Item Based on JITD
model. Under the above analysis, D_j = Σ_{i∈I} d_i Y_ij , and the inventory system of DC j can be described by the following equations:

dI_j(t)/dt + θ·I_j(t) = −D_j ,   I_j(T_j) = 0 .   (1)

The solution of the above differential equation is:

I_j(t) = (D_j/θ)·[e^{θ(T_j − t)} − 1] .   (2)
Therefore, the order quantity of DC j is:

Q_j = I_j(0) = (D_j/θ)·[e^{θ T_j} − 1] .   (3)
Then the unit-time cost of ordering inventory from the supplier at DC j is:

TC_j = (h_j/T_j)·∫_0^{T_j} I_j(t) dt + OC_j/T_j + b(Q_j − D_j T_j)/T_j + a_j Q_j/T_j .   (4)
Adopting the Newton-Raphson approach, the optimal order cycle is:

T_j = √( 2·OC_j / (D_j (h_j + θb + θa_j)) ) ,   (5)

and the optimal unit-time working inventory cost is

TC_j* = √( 2 D_j · OC_j (h_j + θb + θa_j) ) + a_j D_j .   (6)
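Equations (2)-(6) can be checked numerically. The sketch below is illustrative, not part of the paper: it uses the Section 4 example values for DC J1 (so D_j = 1122, a value derived from Tables 1 and 4), verifies that Eq. (2) satisfies Eq. (1), and confirms that Eq. (5) minimizes the second-order approximation of Eq. (4).

```python
import math

# Parameters from the Section 4 example, for DC J1 serving C9, C10, C3, C2
# (Table 4): daily demand D, order cost OC, unit inventory cost h,
# deterioration rate theta, deterioration cost b, factory-to-DC cost a.
D, OC, h, theta, b, a = 1122.0, 5000.0, 0.2, 0.25, 2.0, 1.0

def inventory(t, T):
    """Eq. (2): inventory level at time t within an order cycle of length T."""
    return D / theta * (math.exp(theta * (T - t)) - 1.0)

# Eq. (5): optimal order cycle; Eq. (3): order quantity; Eq. (6): optimal cost.
T_opt = math.sqrt(2.0 * OC / (D * (h + theta * b + theta * a)))
Q = D / theta * (math.exp(theta * T_opt) - 1.0)
TC_opt = math.sqrt(2.0 * D * OC * (h + theta * b + theta * a)) + a * D

# Check that Eq. (2) solves Eq. (1): dI/dt + theta*I = -D with I(T) = 0.
eps = 1e-6
dIdt = (inventory(1.0 + eps, T_opt) - inventory(1.0 - eps, T_opt)) / (2 * eps)
assert abs(dIdt + theta * inventory(1.0, T_opt) + D) < 1e-3
assert abs(inventory(T_opt, T_opt)) < 1e-9

# Expanding e^{theta*T} to second order in Eq. (4) gives
# TC(T) ~= OC/T + (h + theta*b + theta*a)*D*T/2 + a*D,
# whose exact minimizer is Eq. (5) with optimal value Eq. (6).
def tc_approx(T):
    return OC / T + (h + theta * b + theta * a) * D * T / 2.0 + a * D

assert tc_approx(T_opt) <= tc_approx(0.9 * T_opt)
assert tc_approx(T_opt) <= tc_approx(1.1 * T_opt)

print(round(T_opt, 3), round(Q, 1), round(TC_opt, 1))
```

For these parameters the optimal cycle is about 3.06 days; note that Eqs. (5)-(6) are exact only for the second-order approximation of Eq. (4).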
Integrating the costs of location and distribution, the mathematical model is as follows:

Min (1/N) Σ_{i=1}^{N} (T_i − t_i) ,   (7)

Min (1/N) Σ_{i=1}^{N} (T_i − t_i)² − [(1/N) Σ_{i=1}^{N} (T_i − t_i)]² ,   (8)

Min Σ_{i∈I} {Σ_{j∈J} C_V · X_kij + C_k d_i · T_R + C_M T_i d_i T_R + |T_i − t_i| · l}/T_R + Σ_{j∈J} F_j y_j + Σ_{j∈J} [√(2 D_j · OC_j (h_j + θb + θa_j)) + a_j D_j] ,   (9)

Subject to:

Σ_{g∈(I∪J)} Σ_{i∈I} X_kgi d_i T_R ≤ C ,   ∀k ∈ K ,   (10)
156
X. Yang, H. Ma, and D. Zhang
Yij − y j ≤ 0, ∀i ∈ I , ∀j ∈ J ,
∑∑
X khi = 1,
∀i ∈ I ,
(11)
(12)
k ∈K h∈( I ∪ J )
∑
∑
X kgh −
g ∈( I ∪ J )
X khg = 0,
∀k ∈ K , h ∈ ( I ∪ J ) ,
g ∈( I ∪ J )
∑∑ X i∈ I
∑∑ X i∈ I
kij
, ∀k ∈ K ,
≤1
j∈ J
kij
− y j ≥ 0,
∀j ∈ J ,
(13)
(14)
(15)
k ∈K
∑X
kij
− y j ≤ 0,
∀j ∈ J , k ∈ K ,
(16)
i ∈I
∑X
kj1 j2
+ y j + y j ≤ 2, 1
∀j1 , j2 ∈ J ,
(17)
∀i1 , i2 ∈ I , k ∈ K ,
(18)
2
k ∈K
X ki i (Ti − Ti ) ≥ 0, 1 2
2
X kgh = 0,1
1
∀k ∈ K , y j = 0,1
Yij = 0,1
g, h ∈ (I ∪ J ) ,
(19)
∀j ∈ J ,
(20)
∀i ∈ I , j ∈ J .
(21)
The first and second objectives guarantee the punctuality and stability of JITD. The third objective minimizes the total cost of the network. Constraint (10) guarantees that the total quantity of goods transported on the route of vehicle k does not exceed the vehicle's capacity. Constraint (11) ensures that clients are served only by established distribution centers. Constraint (12) ensures that every client is served by one and only one vehicle. Constraint (13) guarantees that each vehicle departs from every node that it visits (flow conservation). Constraint (14) ensures that each vehicle is used by at most one distribution center. Constraints (15) and (16) guarantee that every established facility uses at least one vehicle and that vehicles cannot be assigned to unestablished facilities. Constraint (17) ensures that no route contains two distribution centers. Constraint (18) guarantees that the goods arrive at client i1 earlier than at client i2 if i1 precedes i2 on the route of vehicle k. Constraints (19), (20) and (21) ensure that X_kgh, y_j and Y_ij are binary variables.
3 Model Solution

Firstly, the model is transformed into a single objective:

Min f(x) = α · [(1/N) Σ_{i=1}^{N} (T_i − t_i)]
  + β · {(1/N) Σ_{i=1}^{N} (T_i − t_i)² − [(1/N) Σ_{i=1}^{N} (T_i − t_i)]²}
  + γ · {Σ_{i∈I} {Σ_{j∈J} C_V X_kij + C_k d_i T_R + C_M T_i d_i T_R + |T_i − t_i| · l}/T_R
  + Σ_{j∈J} F_j y_j + Σ_{j∈J} [√(2 D_j · OC_j (h_j + θb + θa_j)) + a_j D_j]} .   (22)
A PSO algorithm was designed in [12] to solve the location routing problem with fixed demand and capacitated vehicles, but there each DC has only one route and the inventory control policy of the DCs is neglected. Here an improved PSO algorithm is designed to solve ILRIP. Following [12], a 2N-dimensional space is constructed for ILRIP, where N is the number of clients. Each particle X can be regarded as two N-dimensional vectors: X_v represents which DC serves each customer, and X_r represents the routing of the DC. Differing from the representation of Yang Peng (2006), it allows several routes for each DC and takes the inventory control policy into account. Though this representation has a larger dimension, it does not increase the computational complexity, because the PSO algorithm performs well on multi-dimensional optimization problems. The fitness function is

F(x) = 1 / { f(x) + R · max{ Σ_{g∈(I∪J)} Σ_{i∈I} X_kgi d_i T_R − C, 0 } } ,   (23)
in which R is a large number serving as the penalty weight for overloading a vehicle. The framework of the PSO algorithm can be stated as follows:

Step 1: Input the original data, such as the inventory parameters.
Step 2: Initialize the particle swarm.
1) Initialize the position x_i of each particle: each element of X_v is randomly generated from 1~M, and each element of X_r from 1~N.
2) Initialize the velocity vector v_i of each particle: each element of V_v is randomly generated from −M~M, and each element of V_r from −N~N.
3) Initialize the personal best fitness p_best,i and the personal best position p_i: compute the fitness of each particle, assign it to p_best,i, and let p_i = x_i.
4) Initialize the global best fitness g_best and the global best position g_i: assign the best of the p_best,i to g_best.
Step 3: Repeat until the number of generations reaches the maximum generation number, or until the maximum number of iterations with a steady global best is reached.
1) For each particle, update X and V, then modify them to fit the problem space; the boundaries are x_i,max = v_i,max = M, x_i,min = 1, v_i,min = −M.
2) Turn the 2N-dimensional particle x_i into an appropriate representation.
3) Evaluate the fitness of all particles in the new population.
4) Update p_best,i and g_best.
Step 4: Stop the PSO algorithm and obtain the approximately optimal solution.
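The framework above can be sketched in code. This is a hedged toy implementation: the acceleration factors and inertia schedule follow the Section 4 settings, but the fitness function is a simple stand-in for Eq. (23), since evaluating the full ILRIP costs is beyond a short sketch.

```python
import random

random.seed(0)

N, M = 10, 4          # clients and candidate DCs, as in the Section 4 example
POP, ITER = 30, 100   # swarm size and iteration limit (illustrative)
C1 = C2 = 1.49445     # accelerating factors from the paper
W_MAX, W_MIN = 0.9, 0.4

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def fitness(x):
    # Toy stand-in for Eq. (23): rewards low-index DC assignments and
    # mid-range route codes; a real fitness would evaluate the ILRIP costs.
    xv, xr = x[:N], x[N:]
    return 1.0 / (1.0 + sum(xv) + sum(abs(r - (N + 1) / 2) for r in xr))

# Particle = (X_v in 1..M) + (X_r in 1..N); velocities in -M..M / -N..N.
pos = [[random.randint(1, M) for _ in range(N)] +
       [random.randint(1, N) for _ in range(N)] for _ in range(POP)]
vel = [[random.uniform(-M, M) for _ in range(N)] +
       [random.uniform(-N, N) for _ in range(N)] for _ in range(POP)]
pbest = [p[:] for p in pos]
pbest_fit = [fitness(p) for p in pos]
g = pbest_fit.index(max(pbest_fit))
gbest, gbest_fit = pbest[g][:], pbest_fit[g]

for it in range(ITER):
    w = W_MAX - (W_MAX - W_MIN) * it / ITER  # linearly decreasing inertia
    for i in range(POP):
        for d in range(2 * N):
            hi = M if d < N else N
            vel[i][d] = clamp(w * vel[i][d]
                              + C1 * random.random() * (pbest[i][d] - pos[i][d])
                              + C2 * random.random() * (gbest[d] - pos[i][d]),
                              -hi, hi)
            # Round and clamp back into the discrete problem space.
            pos[i][d] = clamp(int(round(pos[i][d] + vel[i][d])), 1, hi)
        f = fitness(pos[i])
        if f > pbest_fit[i]:
            pbest[i], pbest_fit[i] = pos[i][:], f
            if f > gbest_fit:
                gbest, gbest_fit = pos[i][:], f

print(gbest_fit)
```

The rounding-and-clamping step corresponds to "modify it to meet the problem space" in Step 3; how the continuous PSO update is mapped back to a discrete encoding is a design choice not fully specified in the paper.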
4 Computational Example

Numerical experiments were conducted to examine the computational effectiveness and efficiency of the proposed method. There are 4 potential DCs and 10 customers in a two-stage logistics distribution network with a single factory, multiple DCs and multiple customers. The coordinates are laid out in a coordinate system. The speed of the vehicles is v = 80, the coordinate of the factory is (290,35), a_j = 1, OC_j = 5000, C_V = 230, C = 1600, C_k = 1, C_M = 1, T_R = 1, l = 600, α = β = 0.3, γ = 0.4, θ = 0.25, b = 2, L = 2. Table 1 gives the information of the customers and Table 2 the information of the distribution centers. The above algorithm is used to optimize the distribution network; the parameter settings of PSO are as follows: population size 50, accelerating factors c1 = c2 = 1.49445, inertia factor w decreasing linearly with the number of iterations, w_max = 0.9, w_min = 0.4, maximum number of iterations iter_max = 200. Programming with MATLAB 6.5, the results are shown in Table 3 and Table 4.

The simulation results show that, to establish a convenient distribution network, distribution centers should be established at J1 and J3. The route of J1 is J1-C9-C10-C3-C2, its reorder point is 2244, and its order quantity is 4752. There are two routes for distribution center J3: one is J3-C1-C5-C6, the other J3-C8-C7-C4; its reorder point is 4586 and its order quantity is 6038. From Table 3 and Table 4 we can see that the mean deviation is −0.02556, which meets the needs of JITD effectively, and the mean square deviation is 0.1456, which shows that the delivery of this distribution network is relatively reliable, improving the stability of delivery and guaranteeing JITD. As can be seen from the example, the location of the distribution centers, the inventory control policy and the routes can all be determined by solving the model. The established distribution network takes the punctuality and stability of JITD as well as cost into account, which coincides with the requirements of profit-making enterprises and improves the customer service level.

Table 1. The information of customers

Customer   Coordinates   Demand   Required time
C1         (245,190)     624      2:00
C2         (250,436)     153      7:00
C3         (297,413)     175      6:00
C4         (417,380)     237      6:30
C5         (289,227)     342      2:30
C6         (319,213)     283      3:00
C7         (492,176)     378      3:30
C8         (348,247)     429      1:30
C9         (85,470)      362      2:00
C10        (307,529)     432      5:00
Table 2. The information of DCs

DC   Coordinates   Fixed cost   Inventory cost
J1   (236,473)     1500         0.2
J2   (314,425)     2350         0.3
J3   (268,342)     1850         0.25
J4   (346,229)     1560         0.4
Table 3. Analysis of the time factor

Customer   Arrival time   Required time   Deviation (minutes)
C1         1:55           2:00            -5
C2         6:52           7:00            -8
C3         6:13           6:00            +13
C4         6:17           6:30            -13
C5         2:38           2:30            +8
C6         3:03           3:00            +3
C7         3:34           3:30            +4
C8         1:33           1:30            +3
C9         1:53           2:00            -7
C10        4:46           5:00            -14
Table 4. Results

Selected DC   Order quantity Q   Reorder point   Routing
J1            4752               2244            J1-C9-C10-C3-C2
J3            6038               4586            J3-C1-C5-C6; J3-C8-C7-C4

Weight cost   Total cost   Mean of deviation   Mean square deviation
12420         31051        -0.02556            0.1456
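The reorder points in Table 4 can be reproduced from the customer demands in Table 1, assuming the reorder point of the (R, Q) policy is the lead-time demand D_j · L (an interpretation, not stated explicitly in the paper), and the route loads can be checked against constraint (10):

```python
# Daily demands of the customers, from Table 1.
demand = {"C1": 624, "C2": 153, "C3": 175, "C4": 237, "C5": 342,
          "C6": 283, "C7": 378, "C8": 429, "C9": 362, "C10": 432}

# Routes of the selected DCs, from Table 4, and constants from Section 4.
routes = {"J1": [["C9", "C10", "C3", "C2"]],
          "J3": [["C1", "C5", "C6"], ["C8", "C7", "C4"]]}
L, T_R, C = 2, 1, 1600  # lead time, distribution cycle, vehicle capacity

results = {}
for dc, rts in routes.items():
    # D_j: the daily demand of a DC is the sum over the clients it serves.
    D = sum(demand[c] for r in rts for c in r)
    results[dc] = (D, D * L)  # (D_j, lead-time demand = reorder point)
    for r in rts:
        # Constraint (10): the load of each route may not exceed capacity C.
        assert sum(demand[c] for c in r) * T_R <= C

print(results)  # reorder points match Table 4: 2244 for J1, 4586 for J3
```

Both reorder points agree exactly with Table 4 (1122 · 2 = 2244 and 2293 · 2 = 4586), and every route load stays within the vehicle capacity of 1600.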
5 Conclusion

By analyzing the characteristics of deteriorating items in the context of distribution network design, we discussed the features of ILRIP for deteriorating items based on JITD. A Multi-objective Mixed Integer Nonlinear Programming (MMINLP) model was established, which takes into account the requirements of common enterprises in the 21st century; a PSO algorithm was designed, and a numerical example was presented to illustrate it. The location of the distribution centers and their inventory policies and routes can be determined by solving the model. The established distribution network can not only reduce cost, but also adapt to the changing environment and improve the customer service level. To analyze the problem, this paper designs the distribution network from the perspective of time and cost. If the objective function is profit oriented, i.e., maximizing revenue, revenue management can be introduced into DND.
References 1. Owen, S.H., Daskin, M.S.: Strategic facility location: a review. European Journal of Operational Research 111, 423–447 (1998) 2. Daskin, M.S.: Logistics: An overview of the state of the art and perspectives on future research. Transportation Research Part A: General 19, 383–398 (1985) 3. Nozick, L.K., Turnquist, M.A.: Integrating inventory impacts into a fixed charge model for locating distribution centers. Transportation Research Part E 43, 173–186 (1998) 4. Liu, S.C., Lin, C.C.: A two-phase heuristic method for the multi-depot location routing problem taking inventory control decisions into consideration. International Journal of Advanced Manufacturing Technology 22, 941–950 (2003) 5. Shen, Z.-J.M., Qi, L.: Incorporating inventory and routing costs in strategic location models. European Journal of Operational Research 179, 372–389 (2006)
6. Fei, L., Yan-hui, L.: Model and Algorithm for Location-Inventory-Routing Problem of Spare Parts Logistics System in Time-based Competition. Industrial Engineering and Management 15, 82–86 (2010) (in Chinese) 7. Moon, I., Giri, B.C., Ko, B.: Economic order quantity models for ameliorating/ deteriorating items under inflation and time discounting. European Journal of Operational Research 162, 773–785 (2005) 8. Song, H., Chao, Y., Jun, Y.: Distribution Network Design Model for Deteriorating Items Based on Stackelberg Game. Chinese Journal of Management Science 17, 122–129 (2009) (in Chinese) 9. Du, P., Wang, J.X., Ding, B.L.: Study of Supply and Demand Purchasing Model for JIT Environment. Operations Research and Management Science 17, 93–97 (2008) (in Chinese) 10. McDaniel, S., Ormsby, J.G., Gresham, A.B.: The effect of JIT on distributors. Industrial Marketing Management 21, 145–149 (1992) 11. Fullerton, R.R., McWatters, C.S.: The production performance benefits from JIT implementation. Journal of Operations Management 19, 81–96 (2001) 12. Peng, Y.: An Efficient Strategy for Multi-objective Optimization Problem. In: The Fifth International Symposium on Distributed Computing and Applications to Business, Engineering and Science, pp. 459–462. IEEE Press, Hang Zhou (2006)
Overview on Microgrid Research and Development

Jimin Lu1 and Ming Niu2

1 Hebei Polytechnic University, Tangshan 063000, China
2 School of Electrical and Electronic Engineering, North China Electric Power University, Beijing 102206, China

[email protected], [email protected]
Abstract. A microgrid is a small power system which integrates multiple distributed generators and local loads. It takes advantage of clean energy sources such as wind and solar, and it is also an effective way to solve the grid-connection problems brought by large numbers of DG. This paper introduces the concept of the microgrid and the characteristics of its various power sources in detail; the key technologies of the microgrid and their solutions are discussed at length, especially control techniques and protection methods. The prospects of the microgrid are also investigated based on the above. Keywords: Microgrid; renewable energy; control strategy; smart grid.
1 Introduction

With the technology of new energy becoming more mature, environment-friendly distributed generation (DG) units (using gas, hydrogen, solar energy, wind energy, etc.) are used more and more widely [2]. They have great advantages in terms of economy, environmental protection and diversity of energy use [1, 4]. However, problems arise when DG connects to the distribution grid, for example power quality and the impact on relay protection during operation. The microgrid solves the coordination problem between DG and the public grid; it integrates generators, storage devices, loads and control devices into one controllable unit. There is a point of common coupling (PCC) between the microgrid and the public grid, so that it can avoid the connection problems and satisfy users' high requirements for power quality. The basic microgrid architecture is shown in Figure 1. It consists of a group of radial feeders, which could be part of a distribution system or a building's electrical system. There is a single point of connection to the utility, called the point of common coupling [8]. Some feeders (Feeders 1-3) have sensitive loads, which require local generation; the non-critical load feeders do not have any local generation. Feeders 1-3 can island from the grid using the static switch, which can separate in less than a cycle [9]. In this example there are three DG units in the microgrid, which control their operation using only local voltage and current measurements. When there is a problem with the utility supply, the static switch opens, isolating the sensitive loads from the power grid; non-sensitive loads ride through the event. It is assumed that there is sufficient generation to meet the loads' demand. R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 161–168, 2010. © Springer-Verlag Berlin Heidelberg 2010
Fig. 1. Microgrid architecture diagram
When the microgrid is grid-connected, power from the local generation can be directed to the non-sensitive loads.
2 Distributed Generation and Energy Storage Components in the Microgrid

Numerous DG units and energy storage components are included in a microgrid, and they supply power to the local loads. The microgrid is connected to the utility grid through an isolating switch, so the utility grid regards it as a single entity. DG units can exploit many forms of energy (PV, wind power, micro gas turbines, fuel cells, etc.), and can also take the form of CHP (combined heat and power) or CCHP (combined cooling, heat and power).

2.1 Wind Power Technology

Wind power is an important way of using renewable energy. A lot of research has been done on control strategies for wind generators; however, grid-connected control is considered most often in this work. Steady-state and dynamic control strategies of the wind generator in grid-connected mode have been studied [6]. More attention is paid to the transient control of the wind generator under fault conditions, and the key technology is vector control, which realizes the decoupling of active and reactive power. The main task of the grid-side converter is to ensure that the current waveform and power factor meet the requirements and to keep the DC voltage stable; the machine-side converter regulates active power to capture the maximum wind energy, and at the same time supplies excitation current to the rotor circuit [3]. Some research on islanded operation has also been done, such as studies of the control strategies of the wind generator when this operation mode is caused by a utility grid fault. Storage components are often adopted to support the voltage at the PCC at the moment the DG system is isolated, and to provide extra power for the island; if necessary, some load is shed. The wind generator and rotor-side converter can be considered one DG, and the storage component and load-side converter another DG; this method gives great inspiration for the control strategy of the microgrid.
Fig. 2. Wind generator system
2.2 Micro Gas Turbine

Micro gas turbine (MGT) generation systems are technologically mature and have wide commercial prospects. The rated power of an MGT is between tens and hundreds of kW, and its characteristics, such as environmental friendliness, high efficiency, high flexibility and high reliability, make it an excellent choice for the microgrid. Modeling the MGT generation system is the foundation of research on microgrid control strategies. Simple mathematical models of the MGT and its control system have been proposed based on heavy-duty gas turbines. Simulations have been performed in Matlab/Simulink according to these models, covering items such as frequency regulation and load shedding. A model of the converter connected to the permanent magnet synchronous generator is established together with the MGT model; the rectifier is composed of uncontrolled diodes, and a PWM control strategy is applied to the inverter [10].
Fig. 3. Micro turbine generation system
2.3 Fuel Cell

The working process of a fuel cell is the same as that of a common cell: fuel and oxidant are supplied to the electrodes of the fuel cell and power is output continuously. The fuel cell is a clean form of power generation. Its advantages are high energy conversion efficiency, no pollution, low water consumption, small footprint, short construction period, a variety of usable fuels and a strong ability to sustain load changes; its disadvantage is high cost.

2.4 Solar PV Generation

Solar energy is obviously an ideal green energy, and experts firmly believe that solar energy will become one of the most important energy sources.
Fig. 4. Fuel cell generation system
PV generation has two operation modes (grid-connected and islanded). PV grid-connection technology is the trend of the world's PV generation, and also the key technology for connecting large-scale PV to the grid. A grid-connected PV generation system is composed of a photovoltaic array module, an inverter and a controller. The inverter converts the DC power generated by the photovoltaic cells into an AC current fed into the grid; the controller performs maximum power point tracking and maintains the current waveform, so that the power supplied to the grid balances the maximum power generated by the photovoltaic array module.
Fig. 5. Solar PV generation system
2.5 Energy Storage Element

Considering the stability and economy of the system, an amount of electricity should be stored in the microgrid in case of unexpected events. With the development of power electronics and materials, modern energy storage technology has developed to a large extent and already plays an important role in the microgrid. Energy storage has three main effects in the microgrid: 1) It improves power quality and keeps the microgrid system stable. 2) It supplies short-term electric power at the moment the microgrid converts to islanded operation. 3) It enhances the economic benefits of microgrid operation. The various energy storage technologies and their characteristics are listed in Table 1.
Fig. 6. Solar PV generation system

Table 1. Energy storage technologies and their characteristics

Category                         Characteristics
Lead Acid Batteries              Low cost, short lifetime, environmental pollution, needing recovery
Vanadium Redox Battery           Large capacity, independent design of power and capacity, low energy density
Sodium-sulphur Battery           High density of energy and power, high cost, poor security
Metal-Air Battery                Very high energy density, poor charge performance
Super-Capacitor                  Long lifetime, high efficiency, low energy density, short discharge time
Rechargeable Battery             High density of energy and power, high cost, existing security problems
Pumping Energy Storage           Large capacity, mature technology, low cost, limited by location
Compressed Air Energy Storage    Large capacity, low cost, limited by location, needing gaseous fuel
Flywheel Energy Storage          High power, low energy density, high cost, not so mature technology
Superconducting Energy Storage   High power, low energy density, high cost, needing regular maintenance
3 Key Technologies of Microgrid Control

3.1 Microgrid Control

The CERTS microgrid has two critical components, the static switch and the DG, as shown in Fig. 1. The static switch has the ability to autonomously island the microgrid from disturbances. After islanding, the reconnection of the microgrid is achieved autonomously once the tripping event is no longer present. This synchronization is achieved by using the frequency difference between the islanded microgrid and the utility grid, ensuring transient-free operation without having to match frequency and phase angle at the connection point. Each DG can seamlessly balance the power in the islanded microgrid using a power vs. frequency droop controller. This frequency droop also ensures that the microgrid frequency differs from the grid frequency, which facilitates reconnection to the utility [7].
A stability control system is necessary for flexible operation modes and high-quality electricity service. The microgrid control system should ensure that: 1) the connection of any DG does not affect the system; 2) the microgrid can choose its operating point itself; 3) the microgrid can connect to (or isolate from) the grid smoothly; 4) each DG can control active and reactive power separately; 5) the microgrid can regulate its voltage and system imbalance. There are mainly two microgrid control strategies: master/slave control and peer-to-peer (p2p) control. Master/slave control is composed of a master controller at the top and slave controllers at the bottom; the top controller sends messages to the controller units at the bottom. Peer-to-peer control is a method based on plug-and-play DG: all devices are controlled autonomously and equally, and plugging in or removing any one of them does not affect the other DG. Reliable communication lines are needed in master/slave control to transmit collected data and control information; any fault in the communication or control software may cause system instability, and microgrid expansion is restricted by communication cost and bandwidth. No communication loop is needed for peer-to-peer control, and this method has more advantages since it realizes plug and play of DG. Droop control is a common control strategy in p2p: the droop controller regulates the active and reactive power of the inverter separately, based on the frequency error and voltage error. When the microgrid is connected to the grid, loads receive power both from the grid and from local DG, depending on the customer's situation. If the grid power is lost because of events, voltage droops, faults, blackouts, etc., the microgrid can autonomously transfer to islanded operation. When regulating the output power, each source has a constant negative slope droop on the (P, ω) plane. Fig. 7 shows that the slope is chosen by allowing the frequency to drop by a given amount as the power spans from zero to Pmax (dashed line) [5]. The voltage droop control method is similar to the frequency droop control.
Fig. 7. Droop control method (frequency deviation Δf versus power deviation ΔP)
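The P-ω droop rule described above can be sketched as follows. This is a minimal illustration: the nominal frequency, allowed frequency drop and power rating are assumed values, not taken from the paper.

```python
F_NOM = 50.0      # nominal frequency in Hz (assumed)
DELTA_F = 0.5     # allowed frequency drop in Hz as power spans 0..P_MAX (assumed)
P_MAX = 100.0     # rated active power of the source in kW (assumed)

def droop_frequency(p_out):
    """P-omega droop: a constant negative slope on the (P, omega) plane,
    chosen so frequency falls by DELTA_F as output spans 0..P_MAX."""
    m = DELTA_F / P_MAX          # droop slope
    return F_NOM - m * p_out     # frequency falls as output power rises

# Sources sharing the same droop split a load change without any
# communication: all settle at the one frequency where total output
# equals total load, which is the basis of p2p (plug-and-play) control.
assert droop_frequency(0) == F_NOM
assert abs(droop_frequency(P_MAX) - (F_NOM - DELTA_F)) < 1e-9
assert abs(droop_frequency(50.0) - 49.75) < 1e-9
```

The slight steady-state frequency offset this rule produces in island mode is exactly what the reconnection logic above exploits to detect that the microgrid is islanded.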
Domestic research on comprehensive microgrid control remains at the simulation stage. The dynamic characteristics of microgrid mode conversion under different distributed power configurations have been studied, and the changing law of power, voltage
and frequency is obtained. According to the current research situation, the future research work on microgrid control systems includes: 1) testing and verifying frequency and voltage control methods on a hardware platform in both modes (grid-connected and islanded); 2) normal operation and control of different kinds of DG; 3) more advanced and intelligent control strategies.

3.2 Protection of the Microgrid

The protection of the microgrid differs greatly from traditional protection; the typical differences are: 1) uncertain current flow directions; 2) short-circuit currents that differ greatly between the two operation modes, grid-connected and islanded. The key point, and also the difficulty, of microgrid protection is how to respond quickly to faults inside the microgrid and to detect utility grid faults rapidly in grid-connected mode, thus ensuring selectivity, speed, sensitivity and reliability. Different solutions to these problems have been proposed by foreign scholars. Protection strategies applicable to specific protected regions of the microgrid have been established, and a fast detection method for microgrids containing power electronic interfaces has been proposed: first determine whether there is a fault by checking the voltage at the DG terminals, and then identify the type of the fault. The fault features of a microgrid containing inverters have been studied in simulation software, and the dynamic characteristics of the microgrid under fault conditions analyzed qualitatively; this work provides a method for designing suitable fault detection strategies. It is pointed out in [4] that microgrid protection cannot be based on the detection of fault current, and a new protection method for generators and the grid is proposed based on detecting disturbances of the generator terminal voltage. It is also pointed out that the fault current during islanded operation is too small to apply traditional over-current protection [5]. A protection strategy whose setting values are based on the amplitudes of the zero-sequence and negative-sequence currents has been proposed as well.
4 Conclusion

As a new grid form integrating renewable energy generation, the microgrid has important significance for adjusting the energy structure, protecting the environment, developing western regions, solving the power supply problem in remote areas, etc. The microgrid will play an important role in improving power supply reliability and power quality, and it has attracted wide attention in many countries. There are still many problems to solve in the microgrid at present; however, its development potential is enormous.
References 1. Zongxiang, L., Caixia, W., et al.: Overview on Microgrid Research. Automation of Electric Power Systems 31(10/19), 100–107 (2007) 2. Application Guide for Distributed Generation Interconnection, Update: The NRECA Guide to IEEE 1547 (2006)
3. Hatziargyriou, N., Asano, H., Iravani, R., Marnay, C.: Microgrids. IEEE Power & Energy Magazine, 78–94 (2007) 4. Zhanghua, Z., Qian, A.: Present Situation of Research on Microgrid and Its Application Prospects in China. Power System Technology 32(16), 27–31 (2008) 5. Peng, L., Ling, Z., Yinbo, S.: Effective way for large scale renewable energy power generation connected to the Grid-Microgrid. Journal of North China Electric Power University 36(1), 10–14 (2009) 6. Chengshan, W., Shouxiang, W.: Study on Some Key Problems Related to Distributed Generation Systems. Automation of Electric Power Systems 32(29), 1–4 (2008) 7. Piagi, P., Lasseter, R.H.: Autonomous Control of Microgrids. In: IEEE PES Meeting, Montreal, pp. 1–8 (June 2006) 8. Lasseter, R.: Microgrids. In: IEEE PES Winter Meeting (January 2002) 9. Zang, H., Chandorkar, M., Venkataramanan, G.: Development of Static Switchgear for Utility Interconnection in a Microgrid. In: Power and Energy Systems (PES), Palm Springs, CA, pp. 24–26 (February 2003) 10. Williams, C.: CHP Systems. Distributed Energy, 57–59 (March/April 2004)
Research on Cluster and Load Balance Based on Linux Virtual Server

Qun Wei, Guangli Xu, and Yuling Li

School of Mathematics and Physics, Hebei Polytechnic University, Tangshan 063009, China
[email protected]
Abstract. With the development of network technology, network bandwidth has grown faster than processor speed and memory access speed, so more and more bottlenecks appear on the server. The proliferation of Internet traffic and the specialization of network applications make load balance an urgent demand. An effective way to solve this problem is network clustering and load balancing based on Linux Virtual Server. This paper discusses the working principle and server/client model of Linux Virtual Server, which provides a transparent way to extend network throughput, strengthen data processing ability and improve network agility and usability. Linux Virtual Server realizes load balance conveniently and effectively by distributing operations to the real servers of the cluster according to transport-level connections. This paper also describes the load balancing and balance strategies of LVS that make the cluster technology accord with application demands in detail. Keywords: Linux; cluster; Linux virtual server; load balance; load scheduler.
1 Introduction

The explosive increase in Internet operations has meant that a single network server can no longer bear the load. Upgrading hardware, whose cost is high and whose effect is small, has worn administrators out and wasted existing resources. On the other hand, several special networks need load balance. For example, a large number of local IDSs exchange data with a processing center frequently, and the data must be processed in real time. This places very high demands on the processing ability of the host computer, so it is hard to complete this key task with a single piece of equipment. An effective way to solve this problem is network clustering and load balancing based on Linux Virtual Server (LVS below). This paper discusses the working principle and server/client model of LVS, which provides a transparent way to extend network throughput, strengthen data processing ability and improve network agility and usability. This paper also describes the load balancing and balance strategies of LVS that make the cluster technology accord with application demands in detail. A cluster is a loosely coupled multiprocessor system constructed from a group of separate computers whose inter-process communication is realized over the network. Application programs can transfer messages through the network's shared facilities. Its outward behavior is that of a single, unified computing resource, which has high usefulness and high capability R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 169–176, 2010. © Springer-Verlag Berlin Heidelberg 2010
and easy management [1]. The server cluster connected by a high-performance network becomes an effective structure with flexibility and high usefulness, so as to meet the computing service demands of more and more network clients. The main advantages of a cluster system are: (1) Flexibility, namely scalability of size: other systems can join the cluster when the total workload exceeds the ability of the cluster, so as to enhance overall performance. (2) High validity: all computing nodes can run separately, so if one node stops working the result is only a reduction of system performance; the other nodes can still work, so as to maintain the running of key applications. (3) High performance-to-price ratio: cheap hardware can be adopted to construct a high-performance system. For equal performance, a computer cluster has a higher performance-to-price ratio than a mainframe of equal computing ability. (4) Load balance: in order to make the fullest use of all system resources, the cluster has a dynamic load balance function and can control the dynamic load condition of all nodes by monitoring them [2]. (5) Usability: when one system of the cluster goes wrong, the cluster software reacts rapidly and distributes its tasks to other working systems, i.e., non-stop service.
2 Load Balancing Algorithms of LVS Controlled by a Linux Gateway

2.1 Working Principle of LVS

Linux Virtual Server (LVS) performs load balancing at the IP level and dispatches according to content requests, providing highly scalable and highly available network services. It makes the structure of the server cluster transparent to clients by seamlessly dispatching network requests to Real Servers through a front-end Load Balancer. Client programs need no changes: a client accesses the network service of the cluster as if it were a single high-performance, highly available server. Nodes can be added or removed transparently, which gives the system its scalability. High availability is achieved in the Linux kernel by detecting faults in nodes or service processes and reconfiguring the system accordingly. Thus a group of servers can be assembled into a virtual server that is both scalable and highly available. The system structure is shown in Figure 1.

LVS consists of a Load Balancer and Real Servers. The Real Servers may be connected by a high-speed LAN or physically dispersed across a WAN. In front of the Real Servers stands the Load Balancer, which dispatches client requests to the individual Real Servers. Each Real Server has its own address, but clients see only one IP address, that of the Load Balancer [3]. In this way several Real Servers supply the same service in parallel behind a single IP address. The Linux front-end gateway is the load balancer: it runs a dispatching algorithm that distributes external service requests to the content nodes connection by connection (TCP or UDP). Content nodes are usually connected by a high-speed LAN, but a WAN also works. The system can add or remove content nodes transparently; for example, one host can be cut off temporarily and rejoin the cluster after upgrading
Research on Cluster and Load Balance Based on Linux Virtual Server
171
(software or hardware), processing the next host in turn. This online upgrade procedure amounts to rolling offline maintenance that cannot be observed from outside. Administrators improve system performance by adding new nodes without wasting existing resources. The content nodes cooperate with one another, and the LVS gateway can be reconfigured and optimized on demand at any moment. Adding or removing a Real Server to extend the system and share the load is easy. A monitor program running on the load balancer watches the Real Servers and their processes to ensure highly available service. LVS can be used for Web, FTP, and e-mail sites as well as for network security [4]. The number of content servers may range from a few to many. A specially modified Linux kernel (the 2.0, 2.2, and 2.4 series are supported) runs on the LVS gateway.

2.2 Load Balancing in LVS

The load balancer fronts a set of servers arranged symmetrically: every server can serve external requests on its own, without assistance from the other servers. The balancer distributes incoming requests across the servers of this symmetric structure, and each server answers its clients' requests independently. The basic idea of load balancing is to assign a Virtual IP (VIP) to the load-balancing equipment and place it in front of the server cluster; this VIP is the address that DNS resolves to. When a request arrives, the load balancer rewrites its headers and directs it to one machine in the server cluster. If that machine is later removed from the cluster, no address needs to change, because all machines appear under the same address; requests are simply no longer sent to the removed server. The client sees only the single response returned through the load balancer: the object the client interacts with is the load balancer, and the back-end operation is transparent to the client [5].
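The VIP principle described above can be illustrated with a toy sketch (not LVS source code): the balancer owns the virtual address, picks a real server for each new connection, and keeps a connection table so that later packets of the same flow reach the same server. The class name, addresses, and packet-tuple format below are illustrative assumptions.

```python
# Toy illustration of VIP-based dispatch with a connection table.
# Addresses and the packet tuple format are made up for the demo.
import itertools

REAL_SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # assumed addresses
VIP = "192.168.1.100"                                 # the virtual address

class NatBalancer:
    def __init__(self, servers):
        self.rr = itertools.cycle(servers)  # simple round-robin choice
        self.conn_table = {}                # (client_ip, client_port) -> server

    def rewrite(self, packet):
        """packet = (client_ip, client_port, dst_ip); return the rewritten
        destination address for this packet."""
        client_ip, client_port, dst_ip = packet
        if dst_ip != VIP:
            return dst_ip                   # not addressed to our service
        key = (client_ip, client_port)
        if key not in self.conn_table:      # new flow: pick a real server
            self.conn_table[key] = next(self.rr)
        return self.conn_table[key]         # same flow -> same server
```

Because the table is keyed by the client endpoint, every packet of an established flow is rewritten to the same real server, which is exactly the transparency property described above.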
The goal of the load balancer is to distribute tasks in proportion to each processor's performance so as to minimize execution time; it also evaluates each server's workload when assigning request tasks. LVS realizes network load balancing with three ways of forwarding IP packets: VS-NAT, VS-TUN, and VS-DR.

(1) Virtual Server via Network Address Translation (VS-NAT)
Here the virtual server is realized through address translation. The address translator owns a legal IP address reachable from outside and rewrites the addresses of packets flowing out of the private network, so that they appear to originate from the translator; when an outside packet arrives at the translator, it decides which inside node the packet should be forwarded to. With this scheme the content servers can run any operating system that supports TCP/IP and need only reserved internal Internet addresses; only the network card of the Linux gateway facing outside needs a distributed public IP. The gateway performs address translation, for example IP masquerading, on all packets going to and coming from the content servers [6]. This limits the number of content servers, yet it is the approach adopted in most situations. The workflow is as follows. A user sends a request to the cluster. When a request packet addressed to the external virtual IP arrives, the balancer checks its destination address and port number. If they match a service provided by LVS, a Real Server is chosen by the scheduling algorithm and the connection is added to the connection table. The packet's destination address and port are then rewritten for this connection and the packet is forwarded to that Real Server. Subsequent packets belonging to the connection are looked up in the connection table, rewritten, and forwarded the same way. For packets returning to the client, the load balancer rewrites the source address and port to those of the LVS [7]. When the connection terminates or times out, its record is deleted from the connection table.

Advantages: the Real Servers can run any operating system supporting TCP/IP and use private IP addresses; only one legal address, assigned to the load balancer, is needed, which saves IP addresses.

Shortcomings: scalability is limited. With about twenty-five or more Real Servers, the load balancer becomes the bottleneck of the whole system, because both request and reply packets must be rewritten; efficiency is relatively low, since all reply traffic must pass back through the translator.

(2) Virtual Server via IP Tunneling (VS-TUN)
IP tunneling (IP encapsulation) wraps one IP packet inside another, so that a packet addressed to one IP address can be forwarded to another address after encapsulation. In this virtual server, the load balancer encapsulates request packets and forwards them to the different Real Servers; each Real Server processes the request and sends the result directly to the client. The whole arrangement therefore appears as a single IP address providing the service. This transmission mechanism lets the cluster nodes sit in different network segments [8]; for security, a tunnel-based VPN or a leased line can be used.
The cluster's services are TCP/IP-based: Web, mail, news, and proxy services.

Advantages: many Internet services, Web service among them, have short request packets but large replies. With IP tunneling the load balancer only forwards requests to the different Real Servers, which respond to users directly, so the balancer can process a very large number of requests and manage on the order of one hundred servers without becoming the system bottleneck, with a maximum throughput around 1 Gbps [9]. This can be used to construct high-performance Real Servers, especially proxy-style virtual servers: a proxy server can fetch the needed data and send it back to the client after receiving a request.

Shortcomings: all Real Servers must support the tunneling protocol, although this limitation will fade as tunneling becomes a standard feature of operating systems.

(3) Virtual Server via Direct Routing (VS-DR)
The Real Servers and the load balancer share one virtual address. Every Real Server must configure a loopback alias interface with the virtual IP address, while the load balancer configures a regular interface with the virtual IP address to receive request packets. A hub or switch connects all Real Servers and the load balancer: all content nodes and the LVS gateway join the same network segment (the same hub, or one VLAN of a layer-2 switch). The gateway does not rewrite the client's service request packets; it only replaces the frame's MAC address with that of a content node and forwards the frame to that node. This scheme is usually used in high-speed environments. The gateway dispatches the operations addressed to the VIP across several
service nodes, with traffic dispersal realized at layer 2; content nodes are connected to the gateway's network card through a 100-Mbit switch, and the packets returned to clients are sent out along each content server's direct route.

Advantages: the Real Servers can run any operating system and process a large number of requests without tunnel devices. Speed is high and cost is low, since the traffic sent back to clients does not pass through the controlling host [10].

Shortcomings: the load balancer only changes the MAC address of the data frame to that of the chosen Real Server, so this scheme can only be used when the cluster machines and the controlling machine are in one network segment.

Table 1 compares the three working modes of LVS. In practice the three can be combined in a two-level balance: the first level is VS-TUN, VS-DR, or round-robin DNS, and the second level is VS-NAT.

Table 1. Three working modes of LVS

Item                            VS-NAT                      VS-TUN                      VS-DR
Server network                  LAN (reserved addresses)    WAN (or LAN)                LAN, same network segment
Number of content servers       10-20                       <=100                       <=100
Function of Linux gateway       Network address translation IP tunnel                   High-speed special router
Gateway operation on IP packets Rewrites both directions    Unilateral tunnel           Forwards only, no rewriting
                                                            encapsulation
2.3 Load Scheduling Algorithms of LVS

There are several algorithms for selecting a Real Server inside the virtual server. LVS currently supports eight scheduling algorithms, which pair naturally into four balance strategies:

(1) Round Robin
Requests are dispatched to the different servers in turn: on each request the scheduler executes i = (i + 1) mod n and selects the server numbered i. The virtue of this algorithm is its simplicity: it keeps no record of the current connection state, so it is a stateless scheduler. Round Robin assumes that all servers process equally fast, regardless of their current connection counts and response speeds. The algorithm is relatively simple but does not suit server groups of unequal processing capacity [11]; moreover, when request service times vary widely, Round Robin can leave the servers' loads unbalanced.

(2) Weighted Round Robin
This is also cyclic scheduling, but each content server is assigned a specified weight of connections, taking into account the differences in processing capacity: Real Servers of different capacities are treated differently. Each Real Server is designated an integer weight marking its processing capacity, with a default value of one. For example, if Real Servers A, B, and C have weights 4, 3, and 2, the dispatch order is ABCABCABA. This method needs no count of each Real Server's network connections and has the lowest overhead, so more Real Servers can be managed; but it can become dynamically unbalanced when the request load varies a lot.

(3) Least Connections
A new connection request is dynamically distributed to the content server with the fewest active connections. This is a dynamic algorithm: it decides which server receives the connection by counting each server's active connections and choosing the server with the least as the forwarding target. It is a good algorithm when the request load varies widely, because the longer-lived requests are not all forwarded to the same server.

(4) Weighted Least Connections
This is an extension of Least Connections. Because the Real Servers differ in capability, a different weight can be designated to each (default value one). The Real Server with the largest weight receives more connections, each server's active connection count is kept in direct proportion to its weight, and new connection requests are distributed dynamically according to the ratio of the content servers' current connection counts [12]. WLC is the default scheduling algorithm of the LVS system and is widely used to realize balance in applications.

(5) Locality-Based Least Connections
This algorithm balances load according to the target IP address of the request message and is mainly used in Cache cluster systems.
The reason is that in a Cache cluster the target IP address of a client request message may vary, while it is supposed that any back-end server can process any request. The goal of the algorithm is to dispatch requests for the same IP address to the same server, provided the server loads stay balanced. This improves access locality and the Cache hit rate, and thereby the processing capacity of the whole cluster.

(6) Locality-Based Least Connections with Replication
A set of cache servers for each particular target address is maintained among the cluster nodes. When a matching connection request arrives, it is distributed to the cache server in the set with the fewest active connections; the node whose load is maximal is deleted from the set after a period of time.

(7) Destination Hashing
This, too, balances load by target address, but it is a static mapping algorithm that maps a target IP address to a server through a hash function: the scheduler finds the corresponding server in a static hash table using the request's destination IP address as the hash key, and the request is sent to that server if the server is available and not overloaded.

(8) Source Hashing
This algorithm is the reverse of destination hashing: the corresponding server is found in a static hash table using the request's source IP address as the hash key. The request is sent to that server if the server is available and not overloaded; otherwise the system returns NULL. Its hash function is the same as that of destination hashing.
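As a concrete illustration, the weighted round-robin interleaving described in algorithm (2) above can be sketched in a few lines (a simplified stand-in, not the LVS kernel scheduler): servers A, B, C with weights 4, 3, 2 yield exactly the order ABCABCABA quoted above.

```python
# Simplified Weighted Round Robin: walk the server list repeatedly,
# skipping a server once it has used up its weight in the current cycle.
# This is a didactic stand-in, not the LVS kernel implementation.

def weighted_round_robin(servers, weights, n_requests):
    """Return the server chosen for each of n_requests."""
    order = []
    remaining = dict(zip(servers, weights))
    i = 0
    while len(order) < n_requests:
        if all(r == 0 for r in remaining.values()):
            remaining = dict(zip(servers, weights))  # start a new cycle
        s = servers[i % len(servers)]
        if remaining[s] > 0:
            remaining[s] -= 1
            order.append(s)
        i += 1
    return order

print(weighted_round_robin(["A", "B", "C"], [4, 3, 2], 9))
# prints ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'A']
```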
The above four groups of algorithms are designed for different conditions, but they share a shortcoming: the cluster nodes cannot feed their load back to the gateway (the balancer). Even the WLC algorithm relies on fixed weights, set in advance by the administrator at the gateway, to express the service capacity of the nodes [13]. This paper therefore puts forward a group of new strategies consisting of two scheduling algorithms based on real-time feedback:
(1) Shortest Expected Delay: the gateway sends ping messages to all nodes and distributes a new connection to the node with the fastest ICMP echo reply.
(2) Never Queue: this method is aimed especially at TCP-oriented network services. The gateway sends TCP connection requests (SYN), used to probe network status, to the listening ports of all nodes; it then distributes new connections to the fastest-responding node and sends RST packets to tear down the half-open connections used for probing.
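A minimal sketch of the Shortest Expected Delay idea: probe every node and hand the new connection to the fastest responder. To keep the sketch self-contained it times a TCP connect instead of an ICMP echo (an assumption; raw ICMP needs privileges), and all helper names are ours.

```python
# Sketch of the feedback strategy: probe each node, pick the fastest.
# Probing is done with a timed TCP connect (assumption: a listening port
# is known per node), not the ICMP echo of the original proposal.
import socket
import time

def probe_rtt(host, port, timeout=1.0):
    """Time a TCP connect to (host, port); return None on failure."""
    t0 = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - t0
    except OSError:
        return None

def shortest_expected_delay(nodes):
    """nodes is a list of (host, port); return the fastest live node."""
    rtts = {n: probe_rtt(*n) for n in nodes}
    alive = {n: r for n, r in rtts.items() if r is not None}
    return min(alive, key=alive.get) if alive else None
```

A real scheduler would probe asynchronously and cache the measurements rather than block on every new connection; the sketch only shows the selection rule.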
3 Conclusion

Among all approaches to high availability, cluster techniques are widely adopted for their good effect, good flexibility, moderate cost, and broad applicability. This paper has discussed the working principle of LVS, analyzed in depth its three load-balancing modes together with their advantages and shortcomings, and finally surveyed the eight load scheduling algorithms of LVS as a reference for readers. Practice has shown that network clustering and load balancing based on LVS is an effective way to solve the server bottleneck and load-balancing problems.
References
1. Hong, C., Leqiu, Q.: Research and Construction of Web Services Based on a Linux Cluster. Computer Engineering and Applications 40, 158–161 (2008)
2. Wensong, Z.: Linux Virtual Server for Scalable Network Services. In: Ottawa Linux Symposium (2008)
3. Northcutt, S. (trans. Qingni, Y.): Network Intrusion Detection: An Analyst's Handbook. People's Posts and Telecommunications Press, Beijing (2008)
4. Ruiyong, J.: The Research and Realization of a File System for Heterogeneous Clusters Based on Network Storage. Journal of Northwestern Polytechnical University 32, 49–54 (2007)
5. Zhonglin, S.: The Construction of a Linux Virtual Server with Cluster Techniques. Application Research of Computers (2006)
6. Vrenios, A. (trans. Wei, Z.): Programming Teaching Material of Linux Networks. Hope Electronic Publishing Company, Beijing (2007)
7. Zhaohui, M. (trans.): The System Structure of Linux Clusters. Mechanical Industry Press, Beijing (2006)
8. Matthew, N., Stones, R. (trans. Yang, X.): Linux Programming. Mechanical Industry Press, Beijing (2006)
9. Wentao, Z.: Load Balance Based on Linux Virtual Server. Computer Engineering (2006)
10. Jiake, C., Xiuli, S.: The Load Balance Strategy and Precontract Protocol of Extendable Resources. Computer Engineering and Applications 41, 157–159 (2005)
11. Kim, C., Kameda, H.: Optimal Static Load Balancing of Multi-class Tasks in a Distributed Computer System. In: Proc. of the 10th Int'l Conference on Distributed Computing Systems, pp. 562–569 (2006)
12. Li, J., Kameda, H.: Load Balancing Problems for Multi-class Jobs in Distributed/Parallel Computer Systems. IEEE Transactions on Computers, 322–332 (2005)
13. Mirchandaney, R., Towsley, D., Stankovic, J.A.: Adaptive Load Sharing in Heterogeneous Distributed Systems. Journal of Parallel and Distributed Computing, 331–346 (2005)
Acceleration of Algorithm for the Reduced Sum of Two Divisors of a Hyperelliptic Curve Xiuhuan Ding School of Mathematics, Physics and Information Science, Zhejiang Ocean University, Zhejiang, 316000, China
[email protected]
Abstract. The reduced sum of two divisors is one of the fundamental operations in many problems and applications related to hyperelliptic curves. This paper investigates the reduced-sum operation as implemented by M.J. Jacobson et al. Their algorithm relies on two pivotal algorithms, in terms of continued fraction expansions on the three possible models of a hyperelliptic curve (imaginary, real, and unusual), and requires quadratic cost. By applying the Half-GCD algorithm, the time cost of the pivotal algorithms is decreased. Consequently, the algorithm for computing the reduced sum of two divisors of an arbitrary hyperelliptic curve is accelerated from quadratic to nearly linear time.

Keywords: Hyperelliptic Curve, Reduced Divisor, Continued Fraction Expansion, Euclidean Remainder Sequence, Half-GCD Algorithm.
1 Introduction
The reduced sum of two divisors is one of the fundamental operations in many problems and applications related to hyperelliptic curves. The group law of the Jacobian [1] can be realized by this operation, and applications ranging from computing the structure of the divisor class group to cryptographic protocols [2-4] all depend on it. Furthermore, the speed of algorithms for solving discrete logarithm problems on hyperelliptic curves [5], particularly those of medium and large genus, depends on a fast computation of the group law. There has been a great deal of work on finding efficient algorithms for this operation (see for instance [4]). M.J. Jacobson et al. presented a new algorithm (Algorithm 9.2 of [6]) for computing the reduced sum of two divisors in terms of continued fraction expansions on the three possible models of a hyperelliptic curve: imaginary, real, and unusual. This algorithm relies on two pivotal algorithms: the algorithm for selected output of the extended Euclidean sequence for polynomials (SOEES) and the extended Euclidean algorithm for polynomials (EEA). The notation and concepts used in this section are explained in Section 2.
Supported by the Foundation of Zhejiang Ocean University (No. 21065013009).
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 177–184, 2010. c Springer-Verlag Berlin Heidelberg 2010
178
X. Ding
Algorithm 1. SOEES(a0, a1, N): the inputs are a0, a1 ∈ K[x] and an integer N with deg(a0) > deg(a1) ≥ 0 and 0 ≤ N < deg(a0); the output is (aj, aj+1, gj, gj+1), where deg(aj) > N ≥ deg(aj+1).
[1] g0 ← 0, g1 ← 1, i ← 1;
[2] while deg(ai) > N do
    (a) qi ← ai−1 div ai, ai+1 ← ai−1 mod ai;
    (b) gi+1 ← gi−1 − qi gi;
    (c) i ← i + 1;
[3] return (ai−1, ai, gi−1, gi).

Algorithm 1 requires O(n²) field operations, where n = deg(a0).

Algorithm 2. EEA(a0, a1): the inputs are a0, a1 ∈ K[x] with deg(a0) > deg(a1) ≥ 0; the output is (ak, uk, gk), where ak = gcd(a0, a1) = uk a0 + gk a1.

Reference [6] gives no details of Algorithm 2. The fastest known algorithm (see for instance [7]) computes the desired triple (ak, uk, gk) of the EEA in O(n log² n) field operations.

In this paper, we accelerate the SOEES algorithm by applying the Half-GCD algorithm [8-10] and decrease the above quadratic cost bound to the level O(n log² n). Furthermore, we present a new EEA algorithm, in accordance with the accelerated SOEES algorithm, to compact the programming of Algorithm 9.2 of [6]. The new EEA algorithm matches the known time complexity bound. Consequently, the algorithm for computing the reduced sum of two divisors of an arbitrary hyperelliptic curve is accelerated from quadratic to O(g log² g), which is nearly linear time, where g denotes the genus of the curve.
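Algorithm 1 can be sketched directly, e.g. for polynomials over a small prime field GF(P) represented as coefficient lists (lowest degree first). All helper names below (trim, poly_divmod, soees, ...) are ours, not the paper's.

```python
# A minimal sketch of Algorithm 1 (SOEES) over GF(P).  Polynomials are
# coefficient lists, low degree first; [] is the zero polynomial.

P = 13  # demo modulus (an assumption)

def deg(a):
    return len(a) - 1            # zero polynomial [] gets degree -1

def trim(a):
    while a and a[-1] % P == 0:
        a.pop()
    return a

def poly_divmod(a, b):
    """Quotient and remainder of a by b over GF(P)."""
    q, r = [0] * max(deg(a) - deg(b) + 1, 1), a[:]
    inv = pow(b[-1], -1, P)      # inverse of b's leading coefficient
    while deg(r) >= deg(b):
        shift = deg(r) - deg(b)
        c = (r[-1] * inv) % P
        q[shift] = c
        for i, bc in enumerate(b):
            r[i + shift] = (r[i + shift] - c * bc) % P
        trim(r)
    return trim(q), r

def poly_sub(a, b):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) -
                  (b[i] if i < len(b) else 0)) % P for i in range(n)])

def poly_mul(a, b):
    if not a or not b:
        return []
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return trim(out)

def soees(a0, a1, N):
    """Algorithm 1: walk the remainder sequence until the degree drops to
    N or below, tracking only g_j via g_{j+1} = g_{j-1} - q_j * g_j."""
    g_prev, g = [], [1]          # g0 = 0, g1 = 1
    a_prev, a = a0, a1
    while deg(a) > N:
        q, r = poly_divmod(a_prev, a)
        g_prev, g = g, poly_sub(g_prev, poly_mul(q, g))
        a_prev, a = a, r
    return a_prev, a, g_prev, g
```

For a0 = x³ + 1, a1 = x and N = 0 over GF(13), soees returns ([0, 1], [1], [1], [0, 0, 12]), i.e. (x, 1, 1, −x²), and indeed 1 = 1·a0 + (−x²)·a1, as the cofactor recurrence guarantees.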
2 The Extended Euclidean Sequence
Let K be any field and K[x] the ring of polynomials in the indeterminate x with coefficients in K. We use the infix binary operators "div" and "mod" to denote the quotient and remainder functions on K[x], respectively. The Euclidean remainder sequence for a, b ∈ K[x] (b ≠ 0) is the sequence (a0, a1, ..., ak) of non-zero elements where a0 = a, a1 = b, ai+1 = ai−1 mod ai (i = 1, ..., k − 1) and 0 = ak−1 mod ak. In this paper, "matrices" and "vectors" are understood to be 2-by-2 matrices and column 2-vectors, respectively; we write a matrix row by row with the rows separated by semicolons, and a column vector as (a; b). A matrix of the form Q = (q 1; 1 0), where deg(q) > 0, is said to be elementary, and q is called the partial quotient of Q; we also write Q = ⟨q⟩ in this case. A regular matrix Q = Q1 ··· Qj is a product of some j ≥ 0 elementary matrices. If Qi = ⟨qi⟩ for all i, then Q is also denoted ⟨q1, ..., qj⟩. If j = 0, Q is the identity matrix, denoted by E.

For vectors U, V and a matrix Q, we write U → V via Q and say V is a reduction of U (via Q) if U = QV, where Q is a regular matrix. If, in addition, U = (a; b) and V = (a′; b′) with deg(a) > deg(b) and deg(a′) > deg(b′), then we say
this reduction is a Euclidean reduction. The Euclidean algorithm for a pair a, b ∈ K[x] with deg(a) > deg(b) can be regarded as a sequence of Euclidean reductions. In fact, in the Euclidean remainder sequence for a, b, the regular matrix Q of Fact 1(ii) in [10] is exactly E, or ⟨q1, ..., qj⟩ for some 1 ≤ j ≤ k, where qi = ai−1 div ai for all i.

Let P0 = (c0 d0; e0 f0) = E and Pj = ⟨q1, ..., qj⟩ = (cj dj; ej fj) for j = 1, 2, ..., k (compare the notation in [11], where Qj = Pj). Let Oj = Pj⁻¹ = (uj gj; vj hj), j = 0, 1, ..., k. From the properties of Pj (cf. [11]) we easily deduce that hj = gj+1 with g0 = 0, g1 = 1, gj+1 = gj−1 − qj gj, and vj = uj+1 with u0 = 1, u1 = 0, uj+1 = uj−1 − qj uj. We call the sequence of triples (aj, aj+1, Oj) the extended Euclidean sequence, and the triple (aj, aj+1, Oj) a term of the extended Euclidean sequence, where (aj, aj+1) is the (j+1)th consecutive pair in the remainder sequence of a, b.

In the remaining part of this section, we propose an algorithm to compute Pj−1 from Pj, which will be used later in Section 4.

Lemma 1 (modification of Theorem 2.6 (ii) and (iii) in [11]).
(1) cj = cj−1 qj + cj−2 and ej = ej−1 qj + ej−2 for j = 1, 2, ..., k; c1 ≥ c0 = 1 and cj > cj−1 for j = 2, 3, ..., k; e0 = 0, e2 ≥ e1 = 1 and ej > ej−1 for j = 3, 4, ..., k.
(2) cj−2 = cj mod cj−1 = cj mod dj for j = 3, 4, ..., k; ej−2 = ej mod ej−1 = ej mod fj for j = 4, 5, ..., k.

Proof. The proof of (1) follows from a straightforward analysis of the definitions of cj and ej, and the proof of (2) follows immediately from (1).

According to Lemma 1, we can easily give the following algorithm.

Algorithm 3. Input: Pj, j > 0; output: Pj−1.
[1] if j ≥ 4, return Pj−1 = (dj  cj mod dj; fj  ej mod fj);
[2] if j = 3: if fj = 1, then return Pj−1 = P2 = (dj  cj mod dj; 1  1); else return Pj−1 = P2 = (dj  cj mod dj; fj  ej mod fj);
[3] if j = 2: if dj = 1, then return Pj−1 = P1 = (1  1; 1  0); else return Pj−1 = P1 = (dj  cj mod dj; 1  0);
[4] if j = 1, return Pj−1 = P0 = E.
3 Half-GCD Algorithm
Now we describe a fast polynomial "Half-GCD" (HGCD) algorithm: for inputs a, b ∈ K[x] with deg(a) > deg(b) ≥ 0, the output is a regular matrix Q such that (a; b) → (c′; d′) via Q with deg(c′) ≥ ⌈deg(a)/2⌉ > deg(d′).

Algorithm 4. Polynomial HGCD(a, b) [10].
[1] m ← ⌈deg(a)/2⌉; if deg(b) < m then return E;
[2] a0 ← a div x^m; b0 ← b div x^m; R ← HGCD(a0, b0); (a′; b′) ← R⁻¹ (a; b);
[3] if deg(b′) < m then return R;
[4] q ← a′ div b′; (c; d) ← (b′; a′ mod b′);
[5] l ← deg(c); k ← 2m − l;
[6] c0 ← c div x^k; d0 ← d div x^k; S ← HGCD(c0, d0);
[7] Q ← R · ⟨q⟩ · S; return Q.

The complexity of the HGCD algorithm is O(n log² n) [10], where n = deg(a).
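The input/output contract of HGCD can be exercised with a naive O(n²) reference that simply runs plain Euclidean steps, accumulating the elementary matrices ⟨q⟩, until the remainder degree drops below ⌈deg(a)/2⌉. This is a slow stand-in for the divide-and-conquer algorithm above, useful for testing; polynomials over GF(P) are coefficient lists (lowest degree first), and all helper names are ours.

```python
# Naive reference for the HGCD contract: (a; b) = Q * (c; d) with
# deg(c) >= ceil(deg(a)/2) > deg(d), Q a product of matrices (q 1; 1 0).

P = 13  # small prime field for the demo (an assumption)

def deg(a):
    return len(a) - 1            # zero polynomial [] gets degree -1

def trim(a):
    while a and a[-1] % P == 0:
        a.pop()
    return a

def poly_add(a, b):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) +
                  (b[i] if i < len(b) else 0)) % P for i in range(n)])

def poly_mul(a, b):
    if not a or not b:
        return []
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return trim(out)

def poly_divmod(a, b):
    q, r = [0] * max(deg(a) - deg(b) + 1, 1), a[:]
    inv = pow(b[-1], -1, P)      # inverse of b's leading coefficient
    while deg(r) >= deg(b):
        shift = deg(r) - deg(b)
        c = (r[-1] * inv) % P
        q[shift] = c
        for i, bc in enumerate(b):
            r[i + shift] = (r[i + shift] - c * bc) % P
        trim(r)
    return trim(q), r

E = [[[1], []], [[], [1]]]       # 2x2 identity over K[x]

def mat_mul(A, B):
    return [[poly_add(poly_mul(A[i][0], B[0][j]),
                      poly_mul(A[i][1], B[1][j])) for j in range(2)]
            for i in range(2)]

def naive_hgcd(a, b):
    """Return (Q, c, d) with (a; b) = Q*(c; d), deg(c) >= ceil(deg(a)/2) > deg(d)."""
    m = (deg(a) + 1) // 2        # the threshold ceil(deg(a)/2)
    Q, c, d = E, a[:], b[:]
    while deg(d) >= m:
        q, r = poly_divmod(c, d)
        Q = mat_mul(Q, [[q, [1]], [[1], []]])  # append elementary matrix <q>
        c, d = d, r
    return Q, c, d
```

A fast HGCD must return the same matrix as this reference on inputs where all partial quotients have positive degree, which makes the sketch a convenient test oracle for an implementation of Algorithm 4.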
4 The STEES Algorithm
Our main task is to present the algorithm for a selected term of the extended Euclidean sequence for polynomials (STEES), which adopts the Half-GCD algorithm and is used in both the fast SOEES and the fast EEA algorithm. In this section we propose the STEES algorithm, which is the kernel of our paper, prove its correctness, and analyze its complexity.

Algorithm 5. STEES(a, b, N): the inputs are a, b ∈ K[x] and an integer N with deg(a) > deg(b) ≥ 0 and 0 ≤ N < deg(a); write n = deg(a). The output is a selected term of the extended Euclidean sequence, (a′, b′, O), where (a; b) → (a′; b′) via O⁻¹ and deg(a′) > N ≥ deg(b′).
[1] if deg(b) ≤ N then return (a, b, E);
[2] if N ≥ n/2 then
  (a) w ← ⌈log₂(n/(n − N))⌉, m ← ⌈(1 − 1/2^w)n⌉; a0 ← a div x^m; b0 ← b div x^m;
  (b) Q ← HGCD(a0, b0); Q′ ← Q⁻¹;
  (c) (a′; b′) ← Q′ (a; b);
  (d) if deg(b′) ≤ N then return (a′, b′, Q′);
  (e) else q ← a′ div b′; (c; d) ← (b′; a′ mod b′);
  (f) l ← deg(c), t ← 2N − l; c0 ← c div x^t; d0 ← d div x^t;
  (g) R ← HGCD(c0, d0); S ← Q · ⟨q⟩ · R; S′ ← S⁻¹;
  (h) (c′; d′) ← S′ (a; b);
  (i) if deg(c′) ≠ N then return (c′, d′, S′);
  (j) else run Algorithm 3 with the input S and get the matrix T;
  (k) T′ ← T⁻¹; return (T′11 a + T′12 b, c′, T′).
[3] if N < n/2 then
  (a) U ← E; (u0; u1) ← (a; b);
  (b) do {
    (1) V ← HGCD(u0, u1); U ← U · V; U′ ← U⁻¹;
    (2) (u0; u1) ← U′ (a; b);
    (3) if deg(u1) ≤ N then return (u0, u1, U′);
    (4) else p ← u0 div u1; (u0; u1) ← (u1; u0 mod u1);
    (5) U ← U · ⟨p⟩; s ← deg(u0);
    } while (N < s/2);
  (c) v ← 2N − s; v0 ← u0 div x^v; v1 ← u1 div x^v;
  (d) F ← HGCD(v0, v1); G ← U · F; G′ ← G⁻¹;
  (e) (r0; r1) ← G′ (a; b);
  (f) if deg(r0) ≠ N then return (r0, r1, G′);
  (g) else run Algorithm 3 with the input G and get the matrix H;
  (h) H′ ← H⁻¹; return (H′11 a + H′12 b, r0, H′).

Let a, b ∈ K[x] with deg(a) > deg(b) ≥ 0 and m ≥ 1 be given. As in the STEES algorithm above, let a0 = a div x^m and b0 = b div x^m. This determines a1, b1 via the equation (a; b) = (a0 a1; b0 b1) (x^m; 1), i.e. a = a0 x^m + a1 and b = b0 x^m + b1. Now let Q be any given regular matrix. This determines a0′, b0′, a1′, b1′ via (a0′ a1′; b0′ b1′) = Q⁻¹ (a0 a1; b0 b1). Finally, define a′, b′ via
(a′; b′) = (a0′ a1′; b0′ b1′) (x^m; 1).
Note that (a0; b0) → (a0′; b0′) and (a; b) → (a′; b′), both via Q.

The following correctness criterion is central to our analysis.

Lemma 2 (correctness criteria [10]). Let a, b, m, Q be given as above, and define the remaining notations ai, bi, ai′, bi′, a′, b′ (i = 0, 1) as indicated. If
The following correctness criteria is central to our analysis. Lemma 2 (Correctness criteria [10]). Let a, b, m, Q be given as above, and define the remaining notations ai , bi , ai , bi , a , b (i = 0, 1) as indicated. If deg(a0 ) > deg(b0 ),
(1)
deg(a0 ) ≤ 2deg(a0 )
(2)
then deg(a ) = m + deg(a0 ), deg(b0 ) ≤ m + max{deg(b0 ), deg(a0 ) − deg(a0 ) − 1}. In particular, deg(a ) > deg(b ). We are ready to prove the correctness of the STEES algorithm. Theorem 1 (STEES correctness). Algorithm STEES is correct. Proof. The algorithm returns a selected term of the extended Euclidean sequence in steps [1], [2](d), [2](i), [2](k), [3](b)(3), [3](f) or [3](h). In steps [1] and [3](b)(3), the results are clearly correct. Consider the selected term of the extended Euclidean sequence (a , b , Q ) in step [2](d). The notations m, Q, a0 , b0 , a , b in the algorithm conforms to those in Lemma 2. By induction hypothesis, the matrix Q computed in step [2](b) satisfies deg(a0 ) ≥ deg(a0 )/2 = n/2w+1 > deg(b0 ) a0 Q a0 where − → . Then Lemma 2 implies deg(a ) = m + deg(a0 ) ≥ (1 − b0 w+1 b0 1 1 1 )n + n/2 ≥ (1 − 2w+1 )n ≥ (1 − 2w+1 )n > N . Since deg(b ) ≤ N is the w 2 condition for exit at step [2](d), it follows that deg(a ) > N ≥ deg(b ) is satisfied on exit at step [2](d). Therefore, (a , b , Q ) is the right result. Now we prove deg(c ) ≥ N > deg(d ) after executing steps[2](e)-[2](h). Since we did not exit in step [2](d), we have deg(b ) > N . Also we renamed b to c. Hence N < l where l = deg(c). By induction, the matrix R computed in step [2](g) satisfies deg(c0 ) ≥ deg(c0 )/2 > deg(d0 ) c0 R c0 where − → . But deg(c0 ) = l − t = 2(l − N ) so d0 d0 deg(c0 ) ≥ l − N > deg(d0 ).
Now let (c; d) → (c′; d′) via R. Another application of Lemma 2 (substituting t for m, R for Q, c for a, d for b, etc.) shows that deg(c′) = t + deg(c0′) ≥ t + l − N = N and deg(d′) ≤ t + max{deg(d0′), deg(c0) − deg(c0′) − 1} ≤ t + max{l − N − 1, l − N − 1} = t + l − N − 1 = N − 1. This shows deg(c′) ≥ N > deg(d′).

Next consider the selected terms of the extended Euclidean sequence returned in steps [2](i) and [2](k). Since deg(c′) ≠ N is the condition for exit at step [2](i), together with the above analysis it follows that deg(c′) > N > deg(d′); hence (c′, d′, S′) is the correct result on exit at step [2](i). At step [2](k), T′11 a + T′12 b is the term immediately before c′ in the Euclidean remainder sequence, so deg(T′11 a + T′12 b) > N = deg(c′); it follows immediately that (T′11 a + T′12 b, c′, T′) is the right result on exit at step [2](k).

Finally, consider the selected terms returned in steps [3](f) and [3](h). The do-while loop at step [3](b) outputs u0, u1, which are consecutive elements in the remainder sequence of a, b with deg(u0) = s > N ≥ s/2. The remaining analysis of steps [3](f) and [3](h) is exactly the same as that of steps [2](i) and [2](k). Hence the selected terms of the extended Euclidean sequence returned there are correct as well.

The time complexity of the STEES algorithm is O(n log² n), following the usual analysis [9].
5 Pivotal Algorithms
According to the STEES algorithm, we can easily obtain the fast SOEES and EEA algorithms.

Algorithm 6. Fast SOEES(a0, a1, N), where a0, a1 ∈ K[x] and integer N with deg(a0) > deg(a1) ≥ 0, 0 ≤ N < deg(a0).
[1] (a, b, O) ← STEES(a0, a1, N);
[2] return (a, b, O12, O22).

Algorithm 7. Fast EEA(a0, a1), where a0, a1 ∈ K[x] with deg(a0) > deg(a1) ≥ 0.
[1] (a, b, O) ← STEES(a0, a1, 0);
[2] if b = 0 then return (a, O11, O12); else return (b, O21, O22).

The correctness of Algorithms 6 and 7 is obvious, and their complexity is exactly that of the STEES algorithm: based on STEES, the fast SOEES algorithm requires O(n log² n) field operations, and so does the fast EEA algorithm.
184 X. Ding

6 Conclusion
By applying the Half-GCD algorithm, we reduce the quadratic time cost of the original SOEES algorithm to the nearly linear time of the fast SOEES algorithm. Furthermore, we present a new EEA algorithm in accordance with the accelerated SOEES algorithm to compact the programming of Algorithm 9.2 of [6]. The new EEA algorithm matches the known time complexity bound. Consequently, the algorithm for computing the reduced sum of two divisors of an arbitrary hyperelliptic curve is accelerated from quadratic to nearly linear time.
References

1. Galbraith, S., Harrison, M., Mireles Morales, D.J.: Efficient Hyperelliptic Arithmetic Using Balanced Representation for Divisors. In: van der Poorten, A.J., Stein, A. (eds.) ANTS-VIII 2008. LNCS, vol. 5011, pp. 342–356. Springer, Heidelberg (2008)
2. Kitamura, I., Katagi, M., Takagi, T.: A Complete Divisor Class Halving Algorithm for Hyperelliptic Curve Cryptosystems of Genus Two. In: Boyd, C., González Nieto, J.M. (eds.) ACISP 2005. LNCS, vol. 3574, pp. 146–157. Springer, Heidelberg (2005)
3. You, L., Sang, Y.: Effective Generalized Equations of Secure Hyperelliptic Curve Digital Signature Algorithms. The Journal of China Universities of Posts and Telecommunications 17, 100–115 (2010)
4. Jacobson, M.J., Menezes, A.J., Stein, A.: Hyperelliptic Curves and Cryptography. In: High Primes and Misdemeanors: Lectures in Honor of the 60th Birthday of Hugh Cowie Williams, Fields Inst. Comm., vol. 41, pp. 255–282. American Mathematical Society, Providence (2004)
5. Smith, B.: Isogenies and the Discrete Logarithm Problem in Jacobians of Genus 3 Hyperelliptic Curves. Journal of Cryptology 22, 505–529 (2009)
6. Jacobson, M.J., Scheidler, R., Stein, A.: Fast Arithmetic on Hyperelliptic Curves via Continued Fraction Expansions. In: Advances in Coding Theory and Cryptology. Series on Coding Theory and Cryptology, vol. 2, pp. 201–244. World Scientific Publishing Co. Pte. Ltd., Singapore (2007)
7. von zur Gathen, J., Gerhard, J.: Modern Computer Algebra. Cambridge University Press, Cambridge (1999)
8. Moenck, R.T.: Fast Computation of GCDs. In: STOC 1973: Proceedings of the Fifth Annual ACM Symposium on Theory of Computing, pp. 142–151. ACM Press, New York (1973)
9. Aho, A.V., Hopcroft, J.E., Ullman, J.D.: The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading (1974)
10. Thull, K., Yap, C.: A Unified Approach to HGCD Algorithms for Polynomials and Integers (1998), http://citeseer.ist.psu.edu/235845.html
11. Wang, X., Pan, V.Y.: Acceleration of Euclidean Algorithm and Rational Number Reconstruction. SIAM J. Comput. 32, 548–556 (2003)
A Nonlinear Camera Calibration Method Based on Area

Wei Li, Xiao-Jun Tong, and Hai-Tao Gan

DMAP, Wuhan Polytechnic University, No.1 West Zhonghuan Road, Changqing Garden, Wuhan 430023, Hubei, China
[email protected],
[email protected]
Abstract. This paper presents a novel nonlinear camera calibration algorithm using coplanar circles. The calibration procedure consists of two steps. In the first step, the distortion-free camera parameters are estimated based on the oval area of maximum overlap between the projected circle and the modeled ellipse. In the second step, a nonlinear optimization process takes camera distortions into account. Compared with classical point-to-point methods, our approach achieves higher accuracy for the ideal camera parameters in the first step, so more accurate distortion parameters can be obtained.

Keywords: Camera calibration, maximum overlap, higher accuracy.
1 Introduction

Camera calibration is the process of modeling the mapping between 3D space and two-dimensional images. Calibration of cameras has important applications in scene reconstruction, motion estimation, dimensional measurement, etc. Existing camera calibration methods can be classified into two main categories: self-calibration [1, 2] and object-based calibration [3, 4]. There are three types of object-based patterns according to their dimensionality, namely 1D patterns [5, 6], planar patterns [7, 8], and 3D patterns [9, 10]. In calibration approaches using a calibration target, the accuracy of calibration depends on the accuracy of the image measurements of the calibration pattern. Recently, plane-based calibration has become a hot research topic for its flexibility, and circular features are widely used. Ellipses have been actively used for pose estimation as well as for camera calibration [11, 12]. In particular, a projected circle appears as an ellipse in the image plane, and the position of the circle can be extracted from a single image using conic fitting [13]. As we know, camera models can be classified according to the calibration method used to estimate the parameters. 1) The distortion-free model, which computes the transformation directly: these techniques use the least-squares method to obtain a transformation matrix that relates 3D points to their 2D projections; this is called the DLT method. The flexibility of DLT-based calibration often depends on how easy it is to handle the calibration frame. 2) The two-step method, the most widely used in camera calibration: the first step uses a linear method to compute a closed-form solution for all the external and intrinsic parameters based on a distortion-free camera; these parameters are then taken as the initial guess, camera distortion is also taken into account, and a nonlinear parameter optimization method solves for the full model.

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 185–192, 2010. © Springer-Verlag Berlin Heidelberg 2010
In general, nonlinear optimization methods involve a direct solution for most of the calibration parameters, and the final parameters are obtained by nonlinear minimization. Indeed, the solution of the DLT method is usually the one that minimizes the mean residuals of a set of equations. However, a nonlinear optimization algorithm may converge to a local extremum that is not globally optimal; if an approximate solution is given as an initial guess, the number of iterations can be significantly reduced and the globally optimal solution can be reliably reached. In the proposed method, parameters are estimated based on the oval area of maximum overlap between the projected circle and the modeled ellipse, so the obtained initial parameters are more accurate. In the optimization process, more accurate distortion parameters can then be obtained.
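The DLT idea mentioned above (a linear least-squares estimate that can seed the nonlinear refinement) can be sketched for the planar case as follows. The synthetic homography and point grid are illustrative assumptions, not values from the paper:

```python
import numpy as np

def dlt_homography(world_xy, image_uv):
    """Estimate the 3x3 homography H mapping planar world points to
    pixels from N >= 4 correspondences by linear least squares (DLT)."""
    rows = []
    for (X, Y), (u, v) in zip(world_xy, image_uv):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    A = np.asarray(rows, dtype=float)
    # The homography is the null vector of A: the right singular
    # vector belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic check: project plane points with a known H, then recover it.
H_true = np.array([[800.0, 2.0, 320.0],
                   [0.0, 810.0, 240.0],
                   [0.1, 0.05, 1.0]])
world = [(x, y) for x in (0.0, 1.0, 2.0) for y in (0.0, 1.0, 2.0)]
img = []
for X, Y in world:
    p = H_true @ np.array([X, Y, 1.0])
    img.append((p[0] / p[2], p[1] / p[2]))
H_est = dlt_homography(world, img)
```

With noise-free correspondences the homography is recovered exactly up to scale; the division by H[2, 2] fixes that scale.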
2 Camera Model

First of all, the pinhole camera model is an ideal model: in this model an object point and its image point are related directly by perspective projection; the camera's distortion parameters are considered afterwards. Let P be the image projection of the circular target F. The image point m = [u, v, 1]^T corresponds to the projection point M = [X, Y, Z, 1]^T. Using the camera pinhole model, we have
$$sm = K[R \,|\, t]M, \qquad (1)$$
where s is a scale factor, [R | t] represents the relative rotation and translation between the world reference frame and the camera coordinate frame, and K represents the camera intrinsic matrix, denoted as
$$K = \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad (2)$$
where α and β are the horizontal and vertical focal lengths expressed in pixel units, γ is the skew of the image plane axes, and (u0, v0) are the coordinates of the principal point. Without loss of generality, we will assume that the world space is restricted to one plane, with equation Z = 0 with respect to the world coordinate system. The projection equation (1) can then be simplified to

$$sm = K[r_1 \;\; r_2 \;\; t]M = HM = \begin{bmatrix} m_1 & m_2 & m_3 \\ m_4 & m_5 & m_6 \\ m_7 & m_8 & m_9 \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}. \qquad (3)$$

Eliminating the scale factor s in formula (3) gives the following two equations:
$$u = \frac{m_1 X + m_2 Y + m_3}{m_7 X + m_8 Y + m_9}, \qquad (4)$$

$$v = \frac{m_4 X + m_5 Y + m_6}{m_7 X + m_8 Y + m_9}. \qquad (5)$$
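Equations (4) and (5) can be checked numerically. The following sketch (with illustrative intrinsics and pose, not values from the paper) projects a point on the plane Z = 0 through H = K[r1 r2 t]:

```python
import numpy as np

# Illustrative intrinsics (eq. (2)) and a simple pose for the plane Z = 0.
K = np.array([[1000.0, 0.5, 320.0],
              [0.0, 1005.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # no rotation, for simplicity
t = np.array([0.1, -0.2, 5.0])     # plane placed 5 units in front
H = K @ np.column_stack((R[:, 0], R[:, 1], t))   # H = K [r1 r2 t]

def project(X, Y):
    """Map a world point (X, Y) on the plane Z = 0 to pixel (u, v)
    using equations (4) and (5)."""
    m = H.flatten()
    den = m[6] * X + m[7] * Y + m[8]
    u = (m[0] * X + m[1] * Y + m[2]) / den
    v = (m[3] * X + m[4] * Y + m[5]) / den
    return u, v

u, v = project(0.3, -0.4)
```

The result agrees with the homogeneous computation p = H[X, Y, 1]^T followed by division by the third component.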
Taking into account the geometric distortion of the camera, we can describe the distortion model. In this paper, we use the distortion model of [14]:

$$\delta_u(u, v) = (p_1 + p_3)u^2 + p_4 uv + p_1 v^2 + k_1 u(u^2 + v^2), \qquad (6)$$

$$\delta_v(u, v) = p_2 u^2 + p_3 uv + (p_1 + p_4)v^2 + k_1 v(u^2 + v^2). \qquad (7)$$
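The distortion terms (6) and (7) can be applied directly to ideal image coordinates; a minimal sketch with illustrative parameter values (not from the paper):

```python
def distortion(u, v, k1, p1, p2, p3, p4):
    """Distortion corrections of equations (6) and (7)."""
    r2 = u * u + v * v
    du = (p1 + p3) * u * u + p4 * u * v + p1 * v * v + k1 * u * r2
    dv = p2 * u * u + p3 * u * v + (p1 + p4) * v * v + k1 * v * r2
    return du, dv

# Illustrative parameter values.
du, dv = distortion(0.2, -0.1, k1=0.05, p1=1e-3, p2=-2e-3, p3=5e-4, p4=1e-4)
```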
3 Camera Calibration

This article uses a two-dimensional planar calibration board whose calibration target is a set of circles. Because the projected circles are ellipses, we use the method in [14] to extract the centers of the elliptical images. A circle on the calibration board can be expressed as

$$(X - X_0)^2 + (Y - Y_0)^2 = r^2, \qquad (8)$$

where (X_0, Y_0) is the center of the circle, (X, Y) is an object point, and r is the radius. Using (4) and (5) we get

$$X = \frac{-v m_8 m_3 + v m_9 m_2 + m_3 m_5 + m_6 u m_8 - m_2 m_6 - m_5 u m_9}{-u m_8 m_4 - m_2 v m_7 + m_2 m_4 + u m_7 m_5 + m_1 v m_8 - m_1 m_5}, \qquad (9)$$

$$Y = \frac{-m_3 v m_7 - u m_9 m_4 + u m_7 m_6 + m_3 m_4 - m_1 m_6 + m_1 v m_9}{-u m_8 m_4 - m_2 v m_7 + m_2 m_4 + u m_7 m_5 + m_1 v m_8 - m_1 m_5}. \qquad (10)$$

Substituting (9) and (10) for X and Y in (8), we get the following expression:

$$(a_1 u + b_1 v + c_1)^2 + (a_2 u + b_2 v + c_2)^2 = r^2 (a_3 u + b_3 v + c_3)^2, \qquad (11)$$
where a_1 = m_6 m_8 − m_5 m_9 − X_0(m_5 m_7 − m_4 m_8), b_1 = m_2 m_9 − m_3 m_8 − X_0(m_1 m_8 − m_2 m_7), c_1 = m_3 m_5 − m_2 m_6 − X_0(m_2 m_4 − m_1 m_5), a_2 = m_6 m_7 − m_4 m_9 − Y_0(m_5 m_7 − m_4 m_8), b_2 = m_1 m_9 − m_3 m_7 − Y_0(m_1 m_8 − m_2 m_7), c_2 = m_3 m_4 − m_1 m_6 − Y_0(m_2 m_4 − m_1 m_5), a_3 = m_5 m_7 − m_4 m_8, b_3 = m_1 m_8 − m_2 m_7, c_3 = m_2 m_4 − m_1 m_5. Equation (11) can be expressed as

$$a_{11} u^2 + 2a_{12} uv + a_{22} v^2 + 2a_{13} u + 2a_{23} v + a_{33} = 0, \qquad (12)$$
where

$$a_{11} = a_1^2 + a_2^2 - r^2 a_3^2, \quad a_{12} = a_1 b_1 + a_2 b_2 - r^2 a_3 b_3, \quad a_{22} = b_1^2 + b_2^2 - r^2 b_3^2,$$
$$a_{13} = a_1 c_1 + a_2 c_2 - r^2 a_3 c_3, \quad a_{23} = b_1 c_1 + b_2 c_2 - r^2 b_3 c_3, \quad a_{33} = c_1^2 + c_2^2 - r^2 c_3^2.$$

We notice from Equation (12) that the projection is a quadratic curve whose geometrical interpretation can be a circle, parabola, hyperbola, or ellipse. In practice, due to the limited field of view, the projection will be a circle or an ellipse; in general, Equation (12) is an ellipse, so we can calculate its long axis, short axis, center, and rotation angle. Let

$$u = u' \cos\theta - v' \sin\theta, \qquad v = u' \sin\theta + v' \cos\theta,$$

and Equation (12) can be expressed as

$$a_{11}' u'^2 + 2a_{12}' u' v' + a_{22}' v'^2 + 2a_{13}' u' + 2a_{23}' v' + a_{33}' = 0, \qquad (13)$$
where

$$a_{11}' = a_{11}\cos^2\theta + 2a_{12}\sin\theta\cos\theta + a_{22}\sin^2\theta,$$
$$a_{12}' = (a_{22} - a_{11})\sin\theta\cos\theta + a_{12}(\cos^2\theta - \sin^2\theta),$$
$$a_{22}' = a_{11}\sin^2\theta - 2a_{12}\sin\theta\cos\theta + a_{22}\cos^2\theta,$$
$$a_{13}' = a_{13}\cos\theta + a_{23}\sin\theta, \qquad a_{23}' = -a_{13}\sin\theta + a_{23}\cos\theta, \qquad a_{33}' = a_{33}.$$

If $\cot 2\theta = \frac{a_{11} - a_{22}}{2a_{12}}$, then $a_{12}' = 0$, so we get the rotation angle $\theta = \frac{1}{2}\operatorname{arccot}\left(\frac{a_{11} - a_{22}}{2a_{12}}\right)$, the long axis

$$a^* = \sqrt{\frac{a_{13}'^2}{a_{11}'^2} + \frac{a_{23}'^2}{a_{11}' a_{22}'} - \frac{a_{33}'}{a_{11}'}},$$

the short axis

$$b^* = \sqrt{\frac{a_{23}'^2}{a_{22}'^2} + \frac{a_{13}'^2}{a_{11}' a_{22}'} - \frac{a_{33}'}{a_{22}'}},$$

and the center coordinates

$$x^* = -\frac{a_{13}'}{a_{11}'}\cos\theta + \frac{a_{23}'}{a_{22}'}\sin\theta, \qquad y^* = -\frac{a_{13}'}{a_{11}'}\sin\theta - \frac{a_{23}'}{a_{22}'}\cos\theta.$$

In practice, the area of the ellipse in the image plane is easy to obtain. Solving for the linear camera parameters amounts to making the model ellipse and the actual ellipse overlap to the maximum degree. The common part of the two oval areas can be described as a function of the differences of the two ellipses' long axes, short axes, centers, and rotation angles; the method in [7] did not take all these factors into account. We denote the maximum overlap function as f(s), where

$$s = (|a_1 - a_2|, |b_1 - b_2|, |x_1 - x_2|, |y_1 - y_2|, |\theta_1 - \theta_2|),$$

and $a_1, a_2, b_1, b_2, x_1, x_2, y_1, y_2, \theta_1, \theta_2$ are the long axes, short axes, center coordinates, and rotation angles of the two ellipses. When there is little difference between the ellipses,

$$f(s) = s_1 |a_1 - a_2| + s_2 |b_1 - b_2| + s_3 |x_1 - x_2| + s_4 |y_1 - y_2| + s_5 |\theta_1 - \theta_2| + \cdots.$$

When $|a_1 - a_2|, |b_1 - b_2|, |x_1 - x_2|, |y_1 - y_2|, |\theta_1 - \theta_2| \to 0$, we use the first-order part of the expression above as our objective function, because the remaining terms are of higher order. Now we calculate the specific $s_1, s_2, s_3, s_4, s_5$. Any ellipse can be obtained from another ellipse by changing the long and short axes, translating the center, and rotating.
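The ellipse parameters can be extracted from the conic coefficients numerically. The following sketch follows the rotate-then-complete-the-square route above; the test ellipse values are illustrative:

```python
import numpy as np

def ellipse_params(a11, a12, a22, a13, a23, a33):
    """Recover rotation angle, semi-axes and center of the ellipse
    a11 u^2 + 2 a12 uv + a22 v^2 + 2 a13 u + 2 a23 v + a33 = 0."""
    theta = 0.5 * np.arctan2(2 * a12, a11 - a22)   # makes a12' vanish
    c, s = np.cos(theta), np.sin(theta)
    b11 = a11 * c * c + 2 * a12 * s * c + a22 * s * s
    b22 = a11 * s * s - 2 * a12 * s * c + a22 * c * c
    b13 = a13 * c + a23 * s
    b23 = -a13 * s + a23 * c
    # Center in the rotated frame, then back to image coordinates.
    u0, v0 = -b13 / b11, -b23 / b22
    x0, y0 = u0 * c - v0 * s, u0 * s + v0 * c
    k = b13 * b13 / b11 + b23 * b23 / b22 - a33
    return theta, np.sqrt(k / b11), np.sqrt(k / b22), x0, y0

# Illustrative check: conic of an ellipse with known parameters.
theta_t, at, bt, x0_t, y0_t = 0.3, 3.0, 1.5, 2.0, 1.0
c, s = np.cos(theta_t), np.sin(theta_t)
A11 = c * c / at**2 + s * s / bt**2
A22 = s * s / at**2 + c * c / bt**2
A12 = c * s * (1 / at**2 - 1 / bt**2)
A13 = -(A11 * x0_t + A12 * y0_t)
A23 = -(A12 * x0_t + A22 * y0_t)
A33 = A11 * x0_t**2 + 2 * A12 * x0_t * y0_t + A22 * y0_t**2 - 1
params = ellipse_params(A11, A12, A22, A13, A23, A33)
```

The recovered angle may differ from the input by a multiple of π/2 with the axes swapped accordingly, which is the usual conic ambiguity; the center is recovered unambiguously.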
We consider only one factor at a time for its effect on the non-overlapping area of the two ellipses, without taking into account the influence of the other factors. This gives the following five kinds of variation (Fig. 1), from which we obtain the weight of each factor.
Fig. 1. Five variations
Now we calculate f(s) under the different conditions, so that we can get the weight of each factor. The general parametric equation of the ellipse is

$$x = a\cos\alpha, \qquad y = b\sin\alpha,$$

and its polar equation satisfies

$$r^2(\varphi) = \frac{a^2 b^2}{b^2\cos^2\varphi + a^2\sin^2\varphi}.$$
Case 1: the long axis changes by Δa: f(Δs₁) = πbΔa, so s₁ = πb.

Case 2: the short axis changes by Δb: f(Δs₂) = πaΔb, so s₂ = πa.

Case 3: the center x-coordinate changes by Δx:

$$f(\Delta s_3) = 4\,\frac{b}{a}\left(\frac{x}{2}\sqrt{a^2 - x^2} + \frac{a^2}{2}\arcsin\frac{x}{a}\right)\bigg|_{-a}^{\,\Delta x - a};$$

when Δx → 0, f(Δs₃) = 4bΔx, so s₃ = 4b.

Case 4: the center y-coordinate changes by Δy:

$$f(\Delta s_4) = 4\,\frac{a}{b}\left(\frac{y}{2}\sqrt{b^2 - y^2} + \frac{b^2}{2}\arcsin\frac{y}{b}\right)\bigg|_{-b}^{\,\Delta y - b};$$

when Δy → 0, f(Δs₄) = 4aΔy, so s₄ = 4a.

Case 5: the rotation angle changes by θ. A quarter of the area f(Δs₅) is

$$\int_{-\theta/2}^{\theta/2} \frac{a^2 b^2}{b^2\cos^2\varphi + a^2\sin^2\varphi}\, d\varphi - \int_{(\pi-\theta)/2}^{(\pi+\theta)/2} \frac{a^2 b^2}{b^2\cos^2\varphi + a^2\sin^2\varphi}\, d\varphi = l(\theta);$$

when θ → 0, l(θ) ≈ (a − b)θ, so f(Δs₅) = 4(a − b)θ.

So we now get the target function

$$f(s) = \pi b |a_1 - a_2| + \pi a |b_1 - b_2| + 4b |x_1 - x_2| + 4a |y_1 - y_2| + 4(a - b)|\theta_1 - \theta_2|.$$
Therefore, the objective requires minimizing f(s), i.e., making the common part of the two ovals largest. Because the inequality 2r₁r₂ ≤ r₁² + r₂² (r₁, r₂ ∈ R) holds, let

$$f(S) = (\pi b)^2 (a_1 - a_2)^2 + (\pi a)^2 (b_1 - b_2)^2 + 16 b^2 (x_1 - x_2)^2 + 16 a^2 (y_1 - y_2)^2 + 16 (a - b)^2 (\theta_1 - \theta_2)^2.$$

When f(S) → 0, f(s) → 0, so we use f(S) in place of f(s). After obtaining the objective function we use it to calibrate our camera.
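The squared objective can be written down directly. A minimal sketch; the parameter-tuple layout and the choice of which ellipse's axes supply the weights are illustrative assumptions:

```python
import math

def overlap_objective(e1, e2):
    """Squared-form objective replacing f(s); each ellipse is given as
    (long axis a, short axis b, center x, center y, rotation angle)."""
    a1, b1, x1, y1, t1 = e1
    a2, b2, x2, y2, t2 = e2
    a, b = a1, b1   # assumption: weights use the first ellipse's axes
    return ((math.pi * b) ** 2 * (a1 - a2) ** 2
            + (math.pi * a) ** 2 * (b1 - b2) ** 2
            + (4 * b) ** 2 * (x1 - x2) ** 2
            + (4 * a) ** 2 * (y1 - y2) ** 2
            + (4 * (a - b)) ** 2 * (t1 - t2) ** 2)

# Identical ellipses give a zero objective.
e = (3.0, 1.5, 2.0, 1.0, 0.3)
zero = overlap_objective(e, e)
```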
4 Calibration Results
We use a group of 640 × 480 images for the calibration experiments (Fig. 2). Edges were extracted using Canny's edge detector and the ellipses were obtained using a least-squares ellipse fitting algorithm. For comparison, we also used a pattern of squares and Zhang's calibration method.
Fig. 2. Calibration target
The results in rows 1, 2, 3, 4 of Table 1 correspond to Zhang's initial solution, our initial solution, Zhang's nonlinear solution, and our nonlinear solution; e is the average reprojection error in pixels.

Table 1. Calibration results

           α          β          u0         v0        p1        p2        p3        p4       k1        e
Row 1   995.64427  997.63243  302.46406  235.35432   0         0         0         0        0         *
Row 2   998.37657  998.43244  305.12423  237.34324   0         0         0         0        0         *
Row 3   997.65434  998.35454  303.23423  236.65723   0.13242  −0.00334  −0.03424   0.00436  0.13243   0.45101
Row 4   998.75426  998.24532  304.11523  237.30254   0.11234  −0.00023   0.03357   0.00314  0.23347   0.20214
We have presented a geometric camera calibration technique based on the oval area of maximum overlap between the projected circle and the modeled ellipse. The proposed approach has a number of advantages relative to existing methods.
An ellipse is determined by five parameters: long axis, short axis, center (two coordinates), and rotation angle. In the case of small distortion, we solve for the initial camera parameters with the objective function of largest ellipse overlap, which is determined by these ellipse parameters, and our approach can quickly find the initial guess. Under certain conditions, such as camera shake, the area-based approach yields smaller errors in the calculated camera parameters than the corresponding point-based methods. Real experimental results show the algorithm's accuracy and robustness. Our camera model uses a nonlinear optimization method in both the first and the second step, so the amount of computation is increased; with the rapid development of computers, however, this is no longer a major concern. The accuracy and the robustness of the proposed calibration strategy were confirmed by a series of experiments. Further research is currently being carried out to improve our technique based on the oval area of maximum overlap between the projected circle and the modeled ellipse.
Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grants 79970025, 60403002 and 30370356; the Plan of the Science and Technological Innovation Team of the Outstanding Young and Middle-aged Scholars of the Hubei Provincial Department of Education; the Hubei Provincial Department of Education under Grant D20081802; the Hubei Provincial Natural Science Foundation under Grants 2004ABA031, 2005ABA233 and 2007ABB030; the National Postdoctoral Science Foundation of China (Grant 2004036016); the Foundation of the Hubei Provincial Department of Education under Grant 2003X130; Scientific Research of Wuhan Polytechnic University under Grant 06Q15; and Graduate Study Innovation of Wuhan Polytechnic University under Grant 08cx014.
References

1. Faugeras, O.D., Luong, Q.-T., Maybank, S.J.: Camera Self-Calibration: Theory and Experiments. In: Sandini, G. (ed.) ECCV 1992. LNCS, vol. 588. Springer, Heidelberg (1992)
2. Grossmann, E.: Discrete Camera Calibration from Pixel Streams. Computer Vision and Image Understanding (2009), doi:10.1016/j.cviu.2009.03.009
3. Zhang, Z.: A Flexible New Technique for Camera Calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1330–1334 (2000)
4. Heikkila, J.: Geometric Camera Calibration Using Circular Control Points. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(10) (October 2000)
5. Zhang, Z.: Camera Calibration with One-Dimensional Objects. IEEE Transactions on Pattern Analysis and Machine Intelligence 26(7) (July 2004)
6. Zhao, Z., Liu, Y., Zhang, Z.: Camera Calibration with Three Noncollinear Points under Special Motions. IEEE Transactions on Image Processing 17(12), 2393–2402 (2008)
7. Meng, X., Hu, Z.: A New Easy Camera Calibration Technique Based on Circular Points. Pattern Recognition 36, 1155–1164 (2003)
8. Liu, Q., Su, H.: Correction of the Asymmetrical Circular Projection in DLT Camera Calibration. In: 2008 Congress on Image and Signal Processing (2008)
9. Lu, Y., Payandeh, S.: On the Sensitivity Analysis of Camera Calibration from Images of Spheres. Computer Vision and Image Understanding (2009), doi:10.1016/j.cviu.2009.09.001
10. Zhang, H., Wong, K.-Y.K., Zhang, G.: Camera Calibration from Images of Spheres. IEEE Transactions on Pattern Analysis and Machine Intelligence 29(3), 499–503 (2007)
11. Li, Y., Hung, Y.S., Lee, S.: A Stratified Self-Calibration Method for Circular Motion in Spite of Varying Intrinsic Parameters. Image and Vision Computing 26, 731–739 (2008)
12. Wang, L., Yao, H.: Effective and Automatic Calibration Using Concentric Circles. International Journal of Pattern Recognition and Artificial Intelligence 22(7), 1379–1401 (2008)
13. Zhang, Z.: Parameter Estimation Techniques: A Tutorial with Application to Conic Fitting. Image and Vision Computing 15, 59–76 (1997)
14. Weng, J., Cohen, P., Herniou, M.: Camera Calibration with Distortion Models and Accuracy Evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence 14(10) (October 1992)
Cost Aggregation Strategy for Stereo Matching Based on a Generalized Bilateral Filter Model

Li Li 1,2, Cai-Ming Zhang 1,2, and Hua Yan 2

1 Department of Computer Science and Technology, Shandong University, Jinan, China
2 Department of Computer Science and Technology, Shandong Economic University, Jinan, China
lily [email protected], [email protected], [email protected]
Abstract. Stereo matching is a kernel problem in the field of stereo vision. An adaptive cost aggregation strategy based on a generalized bilateral filter model is proposed. The strategy extends the range weights of the original bilateral filter through inner and outer weighted-average processes. A pixel is assigned a high range weight with respect to the central pixel not only if the patches of the two pixels are similar, but also if the neighboring patches around the two pixels are similar. The final range weights can thus more accurately reflect the similarity of the two pixels involved. Different cost aggregation methods can be derived from the model by modifying its parameters. Experimental comparisons with other state-of-the-art cost aggregation methods demonstrate the effectiveness of the proposed strategy.
1 Introduction
Stereo matching algorithms aim at finding disparity maps based on two or more images captured of the same scene. A complete survey of stereo matching algorithms can be found in [1]; stereo algorithms can be classified into local and global methods. Most stereo matching algorithms consist of four steps: matching cost computation, cost aggregation, disparity map computation and disparity map refinement. For local methods the cost aggregation step is mandatory to increase the signal-to-noise ratio, and it is often adopted by global methods as well. The cost aggregation step aggregates raw matching costs within a support window. An ideal support window should be adjusted according to the image content to include only the pixels with the same disparity. Many cost aggregation methods have been presented, but this behavior is far from ideal. This paper proposes an adaptive cost aggregation strategy based on a generalized bilateral filter model. The bilateral filter is a non-iterative feature-preserving image smoothing technique widely used in image denoising, computer vision and computer graphics. The bilateral filter assigns a geometric (spatial filter) and a color proximity (range filter) constraint independently, so it can smooth image noise while preserving edge features. A higher weight is assigned to pixels with both smaller spatial and smaller range distances to the central pixel. The Adaptive Weight
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 193–200, 2010. © Springer-Verlag Berlin Heidelberg 2010
method (AW) [2] first used the bilateral filter to aggregate matching costs and achieved excellent results in terms of accuracy. In the AW method, the weight of a pixel within the support window is obtained by applying two independent bilateral filters in the neighborhoods of the potential correspondence. Given a pixel p_r in the reference image I_r and a potential corresponding pixel p_l in the matching image I_l with disparity d, the aggregated cost Ĉ(p_r, p_l, d) is computed as follows:

$$\hat{C}(p_r, p_l, d) = \frac{\displaystyle\sum_{\substack{q_r \in S(p_r)\\ q_l \in S(p_l)}} W_C(I_r(p_r), I_r(q_r))\, W_S(p_r, q_r)\, W_C(I_l(p_l), I_l(q_l))\, W_S(p_l, q_l)\, C(q_r, q_l, d)}{\displaystyle\sum_{\substack{q_r \in S(p_r)\\ q_l \in S(p_l)}} W_C(I_r(p_r), I_r(q_r))\, W_S(p_r, q_r)\, W_C(I_l(p_l), I_l(q_l))\, W_S(p_l, q_l)}, \qquad (1)$$
where the initial matching cost C(q_r, q_l, d) is the single-pixel TAD (Truncated Absolute Differences) score between the corresponding pixels q_r and q_l assuming disparity d, the weighting functions W_S and W_C are both Gaussian, and the spatial and range distances are Euclidean distances. The AW method provides excellent results in a WTA (Winner Take All) framework, comparable to some global methods, without any complex reasoning. However, the bilateral filter has some limits, such as poor smoothing in high-gradient regions, smoothing and blunting of cliffs, and high time demands. Therefore many modifications of the AW approach have been proposed. The Segment-based Support method (SS) uses segment information [3]. By using segment information and removing the spatial weight, the SS method can further improve the accuracy of disparity maps, but its computational time is almost double that of the AW method. To decrease the execution time, a simplified asymmetrical strategy was proposed in [4]: the bilateral filter is enforced on the reference image only and weights are computed by means of a two-pass approach. These simplifications yield a real-time implementation with worse but reasonable results compared with the AW method. The Fast Bilateral Stereo method (FBS) combines the traditional local approach with a symmetric adaptive weight strategy based on two independent bilateral filters applied on a regular block basis [5]. Disparity maps yielded by the FBS method are, in general, less noisy than those of the AW method, and one can trade accuracy for speed and vice versa by modifying the block size. Danny Barash [6] pointed out that the nature of the bilateral filter resembles that of anisotropic diffusion, and recently many stereo matching methods based on PDE models have been presented [7,8,9,10]. An adaptive cost aggregation strategy based on a generalized bilateral filter model is proposed in this paper.
In the traditional bilateral filter, the range weights are computed from the photometric difference between two pixels, without considering the similarities of the neighboring pixels around them. Based on this observation, we first present a generalized bilateral filter model and apply it to cost aggregation. By adjusting parameters, different cost aggregation methods can be derived from the proposed model. After cost aggregation, the final disparity map is obtained by the WTA strategy without any post-processing steps.
We give a detailed explanation of each part in Sections 2 and 3, show experimental results comparing different cost aggregation methods in Section 4, and give conclusions in Section 5.
2 Initial Matching Cost
In the AW method, the TAD score between two potential corresponding pixels is computed as the initial matching cost. The TAD matching cost is given by:

$$u(p_r, p_l) = |I_r(p_r) - I_l(p_l)|, \qquad (2)$$

$$C(p_r, p_l, d) = \min(u(p_r, p_l), T), \qquad (3)$$
where T is a predetermined threshold parameter. For color images the absolute difference is the sum over the three channels in RGB color space (different from the original AW method, which uses the CIELab color space). In order to better handle outliers, the truncated L1 norm is employed in this paper:

$$C(p_r, p_l, d) = -\log[\delta_M + (1 - \delta_M)\exp(-u/\sigma_M)], \qquad (4)$$

where δ_M and σ_M are predetermined parameters. In Section 4, comparative experiments on the performance of the two cost functions (3) and (4) are presented.
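The two cost functions (3) and (4) can be sketched as follows; the default parameter values are the ones used later in the experiments section:

```python
import math

def tad_cost(i_r, i_l, T=40):
    """Truncated absolute difference, eq. (3)."""
    return min(abs(i_r - i_l), T)

def l1_cost(i_r, i_l, delta_m=1e-7, sigma_m=2.0):
    """Truncated L1-norm cost, eq. (4)."""
    u = abs(i_r - i_l)
    return -math.log(delta_m + (1 - delta_m) * math.exp(-u / sigma_m))

# Both costs saturate for large intensity differences: eq. (3) at T,
# eq. (4) smoothly near -log(delta_m).
```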
3 Proposed Adaptive Cost Aggregation
In the original bilateral filter, the range weight W_C in (1) is a function of the single difference between a pair of connected pixels. The single difference has a limited ability to express the similarity of the two related pixels, and the information of their neighborhoods should be included in the range weight too. So we modify the bilateral filter formulation: the range weight W_C is computed based on the patch similarity around two connected pixels. In this section we explain the generalized bilateral filter model and apply it in the cost aggregation step. First, the range distance D_C is defined as

$$D_C(p_i, p_j) = \left(\sum_m G_\sigma(m)\, u(p_{i+m}, p_{j+m})\right) \bigg/ \sum_m G_\sigma(m), \qquad (5)$$

where the denominator is the normalization coefficient (the sum of all applied weights), u is given by (2), p_{i+m} and p_{j+m} are the neighborhoods of p_i and p_j respectively, |m| < N where N is the patch size, the size and shape of the patches of p_i and p_j are the same, and G_σ(m) = exp(−|m|/2δ_σ) is the Gaussian of radius δ_σ. So the distance is not the single difference between the two related pixels but a weighted average over the patches around them, which decreases the influence of noise. Then the range weight is computed by

$$W_C^1(p_i, p_j) = \exp(-D_C(p_i, p_j)/2\sigma_C). \qquad (6)$$
The function W_C^1 above is the Gaussian that converts the range distance D_C into a similarity weight. We can further sum up the patch similarity weights of the corresponding neighbors, so the range weight W_C^1 is modified as follows:

$$W_C^3(p_i, p_j) = \left(\sum_k G_\rho(k)\, W_C^1(p_{i+k}, p_{j+k})\right) \bigg/ \sum_k G_\rho(k), \qquad (7)$$

where G_ρ(k) = exp(−|k|/2δ_ρ) is a function similar to G_σ in (5). Substituting equations (5) and (6) into (7), equation (7) can be rewritten as

$$W_C^3(p_i, p_j) = \left(\sum_k G_\rho(k)\, \exp\!\left[-\left(\sum_m G_\sigma(m)\, u(p_{i+k+m}, p_{j+k+m}) \Big/ \sum_m G_\sigma(m)\right) \Big/ 2\sigma_C\right]\right) \bigg/ \sum_k G_\rho(k). \qquad (8)$$
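The two nested weighted averages in (8) can be sketched on 1-D signals as follows; the window sizes, parameter values and boundary handling are illustrative:

```python
import numpy as np

def range_weight(img_r, img_l, i, j, N=1, K=1,
                 d_sig=1.5, d_rho=1.5, sig_c=15.0):
    """Generalized range weight W_C^3 of eq. (8) between pixel i of the
    reference row and pixel j of the matching row (1-D for clarity)."""
    offs_m = np.arange(-N, N + 1)
    offs_k = np.arange(-K, K + 1)
    g_sig = np.exp(-np.abs(offs_m) / (2 * d_sig))
    g_rho = np.exp(-np.abs(offs_k) / (2 * d_rho))

    def patch_dist(ci, cj):
        """Inner weighted average: distance between patches at ci, cj."""
        d = np.abs(img_r[ci + offs_m] - img_l[cj + offs_m])
        return np.sum(g_sig * d) / np.sum(g_sig)

    # Outer weighted average over patch-similarity weights.
    w = np.array([np.exp(-patch_dist(i + k, j + k) / (2 * sig_c))
                  for k in offs_k])
    return np.sum(g_rho * w) / np.sum(g_rho)

# Identical neighborhoods give the maximal weight 1.
row = np.array([10.0, 12, 11, 13, 12, 10, 11, 12])
w_same = range_weight(row, row, 3, 3)
```

A large photometric perturbation in either neighborhood lowers the weight, as the text above argues.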
From the above equation we observe that there are two weighted average processes. For pixels p_i and p_j, we compute the patch distances of all patches at positions p_{i+k} and p_{j+k}, taken with offset k around p_i and p_j respectively. Each patch distance is computed as a weighted average over the patch, controlled by G_σ; this is called the inner weighted average process. The patch distance is then transformed into a patch similarity weight by function (6). We then compute the weighted average of these patch similarity weights as the final range weight between p_i and p_j; this is called the outer weighted average process, controlled by G_ρ. Thus the pixel p_j contributes to the pixel p_i with a high weight not only if the patches around p_i and p_j are similar, but also if the neighboring patches p_{i+k} and p_{j+k} resemble each other. Equation (8) is viewed as a generalized bilateral filter model. The final cost aggregation equation is then written as

$$\hat{C}(p_r, p_l, d) = \sum_{\substack{q_r \in S(p_r)\\ q_l \in S(p_l)}} W_C^3(p_r, q_r)\, W_S(p_r, q_r)\, C(q_r, q_l, d) \bigg/ \sum_{\substack{q_r \in S(p_r)\\ q_l \in S(p_l)}} W_C^3(p_r, q_r)\, W_S(p_r, q_r), \qquad (9)$$
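An end-to-end sketch of this aggregation-plus-WTA pipeline, in the simplest special case where the range weight degenerates to a single-pixel comparison (the ρ, σ → 0 limit discussed below); the 1-D images and parameter values are illustrative:

```python
import numpy as np

def wta_disparity(left, right, max_d, win=2, sig_c=15.0, sig_s=10.5):
    """Aggregate TAD costs with single-pixel bilateral weights computed
    on the reference (left) image only, then pick disparities by WTA."""
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for p in range(win, n - win):
        best_d, best_cost = 0, np.inf
        for d in range(min(max_d, p - win) + 1):
            num = den = 0.0
            for q in range(p - win, p + win + 1):
                wc = np.exp(-abs(left[p] - left[q]) / (2 * sig_c))
                ws = np.exp(-abs(p - q) / (2 * sig_s))
                cost = min(abs(left[q] - right[q - d]), 40)   # TAD cost
                num += wc * ws * cost
                den += wc * ws
            if num / den < best_cost:
                best_cost, best_d = num / den, d
        disp[p] = best_d
    return disp

# Right image: the left image shifted by a true disparity of 2 pixels.
left = np.array([10.0, 10, 10, 80, 80, 80, 10, 10,
                 10, 10, 40, 40, 40, 10, 10, 10])
right = np.empty_like(left)
right[:-2], right[-2:] = left[2:], left[-2:]
disp = wta_disparity(left, right, max_d=4)
```

On this textured synthetic pair the interior pixels all receive the correct disparity of 2; in flat, textureless regions WTA would be ambiguous, which is exactly why larger, adaptively weighted support windows are needed.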
where W_S = exp(−|p_r − q_r|/2σ_S) is computed by a Gaussian function based on the spatial distance, the same as in (1), and W_C^3 is computed by (8). In the above equation we simply apply the filter on the reference image only; the method can easily be extended to a symmetric filter. We call equation (9) a cost aggregation based on a generalized bilateral filter (CA GBF). Based on the generalized model (9), different cost aggregation methods can be obtained. First, let ρ → 0, leading to the following outer weighting function:

$$G_\rho(k) = \begin{cases} 1 & \text{if } k = 0 \\ 0 & \text{if } k \neq 0 \end{cases}. \qquad (10)$$

Equation (9) then simplifies to

$$\hat{C}(p_r, p_l, d) = \frac{1}{M_{p_r, p_l}} \sum_{\substack{q_r \in S(p_r)\\ q_l \in S(p_l)}} W_C^1(p_r, q_r)\, W_S(p_r, q_r)\, C(q_r, q_l, d), \qquad (11)$$
where the range weight W_C^3 is replaced by W_C^1 as expressed by (6), considering only the inner weighted average, and the normalization denominator is symbolized as M_{p_r, p_l}. Equation (11) is called a cost aggregation based on the inner weighted filter (CA IWF).
Second, let σ → 0; the range weight W_C^3 in function (9) is replaced by

$$W_C^2(p_i, p_j) = \left(\sum_k G_\rho(k)\, W_C(p_{i+k}, p_{j+k})\right) \bigg/ \sum_k G_\rho(k), \qquad (12)$$
where W_C is the weighting function based on the single-pixel difference, similar to the function in (1); the range weight is computed considering only the outer weighted average. Equation (9) then simplifies to

$$\hat{C}(p_r, p_l, d) = \frac{1}{M_{p_r, p_l}} \sum_{\substack{q_r \in S(p_r)\\ q_l \in S(p_l)}} W_C^2(p_r, q_r)\, W_S(p_r, q_r)\, C(q_r, q_l, d). \qquad (13)$$
We call the above equation a cost aggregation based on the outer weighted filter (CA OWF). As a third example, let both ρ → 0 and σ → 0; then the generalized model simplifies to the classical bilateral filter and leads to

$$\hat{C}(p_r, p_l, d) = \frac{1}{M_{p_r, p_l}} \sum_{\substack{q_r \in S(p_r)\\ q_l \in S(p_l)}} W_C(p_r, q_r)\, W_S(p_r, q_r)\, C(q_r, q_l, d), \qquad (14)$$
where W_C(p_r, q_r) = exp(−u(p_r, q_r)/2σ_C). Similarly, we call equation (14) a cost aggregation based on the asymmetric bilateral filter (CA ABF). Finally, the disparity map is obtained in a WTA framework:

$$D(p_r) = \arg\min_d \hat{C}(p_r, p_l, d). \qquad (15)$$

4 Experimental Results
In this section we assess the performance of our proposed cost aggregation strategy based on a generalized bilateral filter model. We have used the Middlebury stereo benchmark [1] to evaluate the performance of the different cost functions (described by (3) and (4)) and of the different cost aggregation methods based on our model (that is, CA GBF, CA IWF, CA OWF and CA ABF). For the cost function comparison experiments, the parameters are set as TAD threshold 40, δ_M = 10⁻⁷ and σ_M = 2, and the cost functions are used in the CA ABF method (14). The cost aggregation parameters are σ_C = 15, σ_S = 10.5 and support window size 21 × 21. The corresponding disparity maps for the Middlebury images are plotted in Figure 1. Using the Middlebury stereo benchmark, we compute the percentage of bad pixels (i.e., pixels whose absolute disparity error is greater than 1) for pixels in non-occluded areas, pixels in occluded areas, and pixels near depth discontinuities, as described in [1]. For pixels in occluded areas, the results on all four images are better using the L1 norm function than the TAD function, which shows that the L1 norm function is more robust to outliers. In the cost aggregation comparison experiments, we adopt a constant parameter setting across the four test images: support window size 21 × 21, δ_σ = 1.5, δ_ρ = 1.5, σ_C = 15, σ_S = 10.5. The L1 norm is taken as the cost function with the
L. Li, C.-M. Zhang, and H. Yan
Fig. 1. Disparity maps for the “Tsukuba”, “Venus”, “Teddy” and “Cones” images. The top row shows the reference images. The second row shows the ground-truth disparity maps. The third row shows the results using the L1 norm cost function. The fourth row shows the results using the TAD cost function. The last row shows our optimal results using CA OWF.
same parameters described above. These parameters were found empirically. Quantitative comparison results for our cost aggregation methods are given in Table 1. The focus here is the evaluation of the raw cost aggregation methods, which do not deal explicitly with occlusions, so we only report the percentage of bad pixels for pixels in non-occluded areas (Vis.) and near depth discontinuities (Dis.). From the table, we find that for pixels in non-occluded areas the CA OWF method gives almost the best results of the four methods. Compared with CA ABF, the CA OWF method decreases the Vis. and Dis. errors more for the Teddy and Cones images than for the Tsukuba and Venus images, mainly because the weighted average process increases the Dis. errors for the latter two images. The CA GBF method produces similar results to CA OWF when compared with the CA ABF method. However, the results of the CA IWF method can even become worse than those of CA ABF: the inner and outer weighted average processes adopt the Gaussian function, which smooths the data and blurs edges simultaneously. To evaluate our proposed cost aggregation method, we compare its results with state-of-the-art cost aggregation strategies. We adopt the CA OWF method with the optimal parameters minimizing the Vis. + Dis. error on the whole dataset: window size 31×31, δρ = 1.5, σC = 10, σS = 15.5. Table 2 reports the results obtained by our method and by the other five top-performing cost aggregation strategies [2,3,5,11,12] according to [5]. It
Cost Aggregation Strategy for Stereo Matching
Table 1. Quantitative comparison results of our four cost aggregation methods for four test images

Methods   Tsukuba        Venus          Teddy          Cones
          Vis.   Dis.    Vis.   Dis.    Vis.   Dis.    Vis.   Dis.
CA ABF    3.77   11.73   5.21   15.61   13.69  25.43   10.32  20.60
CA OWF    3.51   12.15   4.84   15.85   13.09  24.91    9.03  18.83
CA IWF    3.93   11.51   5.52   14.66   13.36  24.44   10.50  20.58
CA GBF    3.72   11.68   4.97   14.11   13.07  24.17    9.90  19.61
Table 2. Quantitative comparison results of our method with the five top performing cost aggregation methods for four test images

Methods      Tsukuba        Venus          Teddy          Cones
             Vis.   Dis.    Vis.   Dis.    Vis.   Dis.    Vis.   Dis.
SS [3]       2.19   7.22    1.38   6.27    10.50  21.20   5.83   11.80
AW [2]       3.33   8.87    2.02   9.32    10.52  20.84   3.72   9.37
CFBS [5]     2.95   8.69    1.29   7.62    10.71  20.82   5.23   11.34
SB [11]      2.25   8.87    1.37   9.40    12.70  24.8    11.10  20.10
VW [12]      3.12   12.40   2.42   13.30   17.70  25.5    21.20  27.30
Our Method   2.34   10.00   3.40   13.68   12.44  25.0    7.61   16.86
is worth noting that the results reported by [5] were obtained using the original cost function proposed by the authors of each paper, and that the results for AW and SS available on the Middlebury evaluation site, which include post-processing steps, are not used. From Table 2, we can see that our proposed method has accuracy comparable to the best-performing cost aggregation strategies. The AW method outperforms our method mainly because it uses a symmetric strategy whereas ours is asymmetric. However, our method runs much faster than the AW method: on Teddy, AW takes 3226 seconds according to [5], while our method takes 525 seconds without any accelerating techniques. Furthermore, our method can decrease the influence of noise thanks to our weighted average process. The results of our proposed method are plotted in Figure 1.
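To make the aggregation concrete, the asymmetric bilateral filter of (14) followed by the WTA rule of (15) can be sketched as below. This is a minimal 1-D grayscale sketch with a made-up toy image pair; it assumes u(·,·) is the absolute intensity difference, uses the L1 matching cost, and collapses the double sum over the left-image support to the single corresponding left pixel. It is an illustration, not the authors' 2-D implementation.

```python
import math

def ca_abf_disparity(right, left, max_d, win=3, sigma_c=15.0, sigma_s=10.5):
    """1-D sketch of CA ABF (eq. 14) plus the WTA choice (eq. 15).
    `right`/`left` are grayscale rows; u(.,.) is taken to be the absolute
    intensity difference and the matching cost is the L1 norm."""
    n = len(right)
    disp = []
    for p in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(max_d + 1):
            num = norm = 0.0
            for q in range(max(0, p - win), min(n, p + win + 1)):
                ql = q + d                        # corresponding left pixel
                if ql >= n:
                    continue
                wc = math.exp(-abs(right[q] - right[p]) / (2 * sigma_c))  # W_C
                ws = math.exp(-abs(q - p) / (2 * sigma_s))                # W_S
                num += wc * ws * abs(right[q] - left[ql])                 # cost C
                norm += wc * ws                                           # M term
            cost = num / norm if norm else float("inf")
            if cost < best_cost:                   # WTA of eq. (15)
                best_cost, best_d = cost, d
        disp.append(best_d)
    return disp

# toy pair: the left row is the right row shifted by a true disparity of 2
right_img = [10, 10, 50, 50, 10, 10, 80, 80, 10, 10]
left_img = [10, 10] + right_img[:-2]
print(ca_abf_disparity(right_img, left_img, 4))   # recovers disparity 2 everywhere
```

On this noise-free pair the aggregated cost is exactly zero at the true disparity, so WTA recovers it at every pixel.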
5
Conclusions
We have proposed a cost aggregation strategy based on a generalized bilateral filter model. The model extends the range weight computation of the bilateral filter in two steps: the inner weighted average process of the pixels’ range distances in the patch and the outer weighted average process of the patch similarity weights. The range weights in our model can more accurately reflect two pixels’ similarity.
By modifying the inner and outer parameters, different cost aggregation strategies are easily derived from the model. Within the evaluation framework for cost aggregation strategies, experimental results confirm the effectiveness of our proposed method. In the future we plan to adopt efficient accelerated calculations to decrease the computational time of our method. We are also interested in analyzing the resemblance between the bilateral filter and PDE models to further improve the accuracy of the bilateral filter method.

Acknowledgment. The authors would like to acknowledge financial support from the National Natural Science Foundation of China under Grant No. 60970048, the Natural Science Foundation of Shandong Province under Grant No. 2009ZRB019SF, and the Project of Shandong Province Higher Educational Science and Technology Program under Grant No. J07YJ10.
References

1. Scharstein, D., Szeliski, R.: A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 7–42 (2002)
2. Yoon, K.J., Kweon, I.S.: Adaptive support-weight approach for correspondence search. PAMI 28(4), 650–656 (2005)
3. Tombari, F., Mattoccia, S., Di Stefano, L.: Segmentation-based adaptive support for accurate stereo correspondence. In: Mery, D., Rueda, L. (eds.) PSIVT 2007. LNCS, vol. 4872, pp. 427–438. Springer, Heidelberg (2007)
4. Gong, M., Yang, R.G., Liang, W., Gong, M.W.: A performance study on different cost aggregation approaches used in real-time stereo matching. Int. Journal Computer Vision 75(2), 283–296 (2007)
5. Mattoccia, S., Giardino, S., Gambini, A.: Accurate and efficient cost aggregation strategy for stereo correspondence based on approximated joint bilateral filtering. In: Proc. of ACCV 2009 (2009)
6. Barash, D.: Bilateral Filtering and Anisotropic Diffusion: Towards a Unified Viewpoint. Hewlett-Packard Laboratories Technical Report, HPL-2000-18(R.1) (2000)
7. Mattoccia, S.: A locally global approach to stereo correspondence. In: Proc. of ICCV Workshop 2009, pp. 1763–1770 (2009)
8. Scharstein, D., Szeliski, R.: Stereo matching with nonlinear diffusion. Int. J. of Computer Vision 28(2), 155–174 (1998)
9. Ari, R.B., Sochen, N.A.: Variational stereo vision with sharp discontinuities and occlusion handling. In: Proc. of ICCV 2007, Rio de Janeiro, Brazil, pp. 1–7. IEEE Computer Society Press, Los Alamitos (2007)
10. Zimmer, H., Bruhn, A., Valgaerts, L., Breuß, M., Weickert, J., Rosenhahn, B., Seidel, H.P.: PDE-Based Anisotropic Disparity-Driven Stereo Vision. In: Vision, Modeling, and Visualization 2008: Proceedings, October 8-10 (2008)
11. Gerrits, M., Bekaert, P.: Local stereo matching with segmentation-based outlier rejection. In: Proc. of CRV 2006, pp. 66–66 (2006)
12. Veksler, O.: Fast variable window for stereo correspondence using integral images. In: Proc. of CVPR 2003, pp. 556–561 (2003)
Stocks Network of Coal and Power Sectors in China Stock Markets

Wangsen Lan1 and Guohao Zhao2

1 Department of Mathematics, Xinzhou Teachers University, 10 Peace Street, Xinzhou, Shanxi, China
[email protected]
2 School of Management Science and Engineering, Shanxi Finance and Economics University, 696 Wucheng Road, Taiyuan, China
[email protected]
Abstract. To explore the interaction among stocks and improve the ability to build portfolios, a Stocks Network of Coal and Power Sectors (SNCPS) was modeled in the China stock markets, in which nodes are stocks and edges are the correlation coefficients of the stocks' logarithmic returns from 1991 to 2009. The study calculated the node degrees and cluster coefficients of SNCPS, analyzed its centrality with social network methods and its communities with k-plexes, and in particular advanced the conception of the Backbone Network (BN) and developed an Algorithm of the Largest Eigenvalue of the Weight Matrix to detect the BN. Results show that SNCPS is scale-free with the negative exponent of the node-degree distribution less than 1; the average cluster coefficient is 0.68 for the unweighted network and 0.41 for the weighted network; nodes 000723, 601898, and 601918 have high betweenness; SNCPS has two partitions; and the BN has 10 nodes which greatly influence the entire network.

Keywords: Complex network, stock, correlation coefficient, topology, backbone network.
1
Introduction
Price changes of stocks are complex system behaviors that depend on particular economic environments. One of the most important ways of understanding price interactions among stocks is to use the correlation matrix consisting of correlation coefficients [1]-[2]. As is well known, some stock return time series are highly correlated, sometimes reaching 0.7 between two stocks in the same sector. Research on the correlations among stocks can help investors improve their ability to construct portfolios [2]. In recent years, interactions among assets have been researched by modeling complex networks in many studies [2]-[13]. A large number of complex systems exist in the form of network topologies, which ignore small-scale details but pay more attention to the nature of the system, providing us with a way to deal with complex systems. Stock markets are complex systems, and their complexity is

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 201–208, 2010.
© Springer-Verlag Berlin Heidelberg 2010
W. Lan and G. Zhao
embodied by stock indices. Bonanno et al. [2] argued that complex network analysis is a noise-filtering method, and that network topology characteristics are helpful for understanding the relationships of assets in financial markets. Kim et al. [3]-[4] built a weighted network on the S&P 500, in which edge weight represented the relation strength between nodes, and the sum of the weights on all edges of a node measured the node's ability to influence the network. Onnela et al. [5] compared cluster coefficients between company networks and random networks, and found that about 10% of the edges contained most of the information in the network. Literature [8] utilized the coefficient of determination from the multi-factor model on stock networks to explain stock returns, and found that stocks with a large number of links in a stock network could be better explained by factors in the multi-factor model than stocks with a small number of links. Literature [9] studied the structural variation of the network formed by connecting S&P 500 stocks, and found it highly correlated with closing prices (or price returns). The key finding was that the scale-freeness of the degree distribution was disrupted when the market experienced fluctuations. Thus, the mean error of the power-law approximation became an effective indicator of the volatility of the stock market. Literature [10] used a threshold method to construct China's stock correlation network and then studied the network's structural properties and topological stability. In article [11], the authors researched the stocks network of the real estate sector of the China stock market, and found the network to be scale-free. Literature [12] investigated the topological properties of the Brazilian stock market networks and built the minimum spanning tree, whose results suggested that stocks tend to cluster by sector. Literature [13] modeled the investment behaviors of traders whose decisions were influenced by their trusted peers' behaviors. The results demonstrated that real-life trust networks could significantly delay the stabilization of a market.
2
Model
In our research, the data set consists of the daily closing prices of 98 stocks in the coal and power sectors of the China stock market. All data were collected from the Hexun website (www.hexun.com), covering the longest available span, from the beginning of 1991 to the end of 2009, comprising 4552 trading days. Some stocks have less data because they were listed for a shorter period. We start our analysis by considering n stocks, with price Pi(t) for stock i at time t. The logarithmic return of stock i is ri(t) = ln Pi(t) − ln Pi(t − 1), which, for a certain consecutive sequence of trading days, forms the return vector si. In order to characterize the synchronous time evolution of stocks, we use the equal-time correlation coefficient between stocks i and j, defined as

$\gamma_{ij} = \frac{\langle s_i s_j \rangle - \langle s_i \rangle \langle s_j \rangle}{\sqrt{(\langle s_i^2 \rangle - \langle s_i \rangle^2)(\langle s_j^2 \rangle - \langle s_j \rangle^2)}}$   (1)
where ⟨· · ·⟩ indicates a time average over the trading days included in the return vectors. All correlation coefficients, forming an n × n matrix with −1 ≤ γij ≤ 1, are then transformed into an n × n influence matrix with elements |γij|, such that 0 ≤ |γij| ≤ 1. Where no confusion arises, we still use γij to denote |γij|, and call this matrix the Absolute Correlation Matrix (ACM), denoted by R, that is, R = (γij)n×n. We construct an undirected, weighted Stocks Network of Coal and Power Sectors (SNCPS) in the China stock markets, taking all stocks as nodes and the correlation coefficients between stocks as weighted edges. In order to clearly observe the mutual influence behavior among stocks, we preset different thresholds γ0 of the correlation coefficient and study the network properties for γij > γ0. We pay closer attention to strong influences among stocks, so all work uses γ0 ≥ 0.55, and we ignore the self-connection of every node, that is, γii = 0 for 1 ≤ i ≤ n. All network parameters are calculated after removing isolated nodes. Sometimes we also consider the unweighted network, in which two nodes are linked when γij > γ0 for 1 ≤ i, j ≤ n, i ≠ j.
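The model above (log returns, the equal-time correlation of (1), and the ACM R) can be sketched in Python as follows; the function names and toy price series are illustrative, not from the paper.

```python
import math

def log_returns(prices):
    """r(t) = ln P(t) - ln P(t-1), forming the return vector s_i."""
    return [math.log(b) - math.log(a) for a, b in zip(prices, prices[1:])]

def corr(si, sj):
    """Equal-time correlation coefficient gamma_ij of eq. (1)."""
    n = len(si)
    mi, mj = sum(si) / n, sum(sj) / n
    mij = sum(a * b for a, b in zip(si, sj)) / n
    vi = sum(a * a for a in si) / n - mi * mi   # <s_i^2> - <s_i>^2
    vj = sum(b * b for b in sj) / n - mj * mj
    return (mij - mi * mj) / math.sqrt(vi * vj)

def acm(return_vectors):
    """Absolute Correlation Matrix R = (|gamma_ij|) with zero diagonal."""
    n = len(return_vectors)
    R = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            R[i][j] = R[j][i] = abs(corr(return_vectors[i], return_vectors[j]))
    return R

# two proportional price series move in lockstep, so |gamma| = 1
rv1 = log_returns([1.0, 2.0, 3.0])
rv2 = log_returns([2.0, 4.0, 6.0])
print(acm([rv1, rv2]))
```

Note that proportional prices give identical return vectors, hence an off-diagonal ACM entry of 1.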
3 Essential Parameters of SNCPS

3.1 Node Degree and Network Scale
The degree of a node is the number of edges connected to it for an unweighted network, or the sum of the weights on all its edges for a weighted network. A large degree indicates the node's importance in the network. We have calculated the degrees for the weighted network of SNCPS, and Table 1 lists some nodes with the largest degrees.

Table 1. Degree of some nodes in SNCPS

Node        002060  600021  600027  000723  601666  601898  601918
γ0 = 0.55   42.487  28.178  25.627  38.839  20.852  41.360  54.439
γ0 = 0.60   31.552  12.072  14.903  28.444  15.131  25.968  40.644
γ0 = 0.65    7.346   3.991   4.132  16.646  13.902  16.521  23.156
γ0 = 0.70    5.028    –       –       –     11.227  12.505  13.767
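The weighted degree used in Table 1 (the sum of above-threshold edge weights) can be sketched as follows, assuming an ACM R as defined in Section 2; the 3-node matrix is a made-up example, not data from the paper.

```python
def weighted_degrees(R, gamma0):
    """Weighted node degree: the sum of edge weights gamma_ij that exceed
    the threshold gamma0 (edges at or below the threshold are dropped)."""
    n = len(R)
    return [sum(R[i][j] for j in range(n) if j != i and R[i][j] > gamma0)
            for i in range(n)]

# made-up 3-stock ACM: with gamma0 = 0.55, node 0 keeps both of its
# strong edges (0.8 and 0.6) while the weak 0.3 edge is dropped
R = [[0.0, 0.8, 0.6],
     [0.8, 0.0, 0.3],
     [0.6, 0.3, 0.0]]
print(weighted_degrees(R, 0.55))
```

Node 0's weighted degree is 0.8 + 0.6 = 1.4; nodes 1 and 2 keep only their single strong edge.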
Some research has shown that many asset networks are scale-free, with node degree distributions following a power law P(s) ∼ |s|−δ [3-8,10-12], such as the S&P 500 stocks network with δ ≈ 1.8 [3-4]. For the stocks network of the real estate sector of the China stock market, the node degree distribution approximates a power law whose δ grows linearly with the threshold γ0, with 0.8 < δ < 1.6 and an average of 1.25 [11]. For the SNCPS studied here, the node degree distribution also approximates a power law: δ = 0.41 for γ0 = 0.55, δ = 0.60 for γ0 = 0.60, δ = 0.87 for γ0 = 0.65 (see Fig. 1), and δ = 0.94 for γ0 = 0.70, averaging 0.7, which indicates a weak scale-free nature.
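The exponent δ can be estimated, for instance, by a least-squares line fit on the log-log degree histogram; this sketch assumes that simple fitting procedure (the paper does not specify how the Fig. 1 curve was fitted), and the toy degree list is made up.

```python
import math
from collections import Counter

def powerlaw_exponent(degrees):
    """Estimate delta in P(s) ~ s^(-delta) by a least-squares line through
    the log-log degree histogram; -slope of the fitted line is delta."""
    counts = Counter(d for d in degrees if d > 0)
    pts = [(math.log(k), math.log(c)) for k, c in counts.items()]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return -slope

# histogram {1: 4, 2: 2, 4: 1} lies exactly on a log-log line of slope -1
print(powerlaw_exponent([1, 1, 1, 1, 2, 2, 4]))   # delta close to 1.0
```

More robust estimators exist (e.g. maximum likelihood), but the line fit matches the kind of curve fitting shown in Fig. 1.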
Fig. 1. Power-law exponent δ of SNCPS for correlation coefficient threshold γ0 = 0.65. The power-law curve was fitted using the data of 60 nodes.
3.2
Cluster Coefficient
The cluster coefficient is a measure of network connectivity; the cluster coefficient of a network is the average of the cluster coefficients of all its nodes. There are many definitions of the node cluster coefficient; following Watts–Strogatz, we define:

$C_i = \frac{\sum_{j,k} \gamma_{ij}\gamma_{jk}\gamma_{ki}}{\sum_{j,k} \gamma_{ij}\gamma_{ki}}$   (2)

The cluster coefficient of SNCPS, as shown in Table 2, averages 0.68 for the unweighted networks and 0.41 for the weighted networks.

Table 2. Cluster coefficients of SNCPS

Correlation coefficient threshold γ0        0.55   0.60   0.65   0.70   0.75
Cluster coefficient (unweighted network)    0.763  0.687  0.594  0.770  0.586
Cluster coefficient (weighted network)      0.440  0.357  0.428  0.463  0.343

3.3
Betweenness
Node betweenness measures the ability of a node to act as an intermediary between other nodes in the network, i.e., the extent to which it occupies important positions linking pairs of other nodes. The more often a node occupies such a position, the higher its betweenness is, and the more nodes are linked through it. If a node links two separate components of the network,
Table 3. Betweennesses of some nodes in SNCPS

Node        γ0 = 0.55   γ0 = 0.60   γ0 = 0.65   γ0 = 0.70   γ0 = 0.75
000723      617         690         648         48          22
002060      590         988         266         49          37
601898      493         213         128         104         –
601918      1060        1334        639         130         –
the node is a cut point, also called a bridge. For undirected networks, node betweenness is calculated by the formula:

$B_i = \frac{2}{(n-1)(n-2)} \sum_{j<k} \frac{g_{jk}(i)}{g_{jk}}$   (3)

where gjk is the number of shortest paths from node j to node k, gjk(i) is the number of those paths that pass through node i, and n is the number of nodes in the network. In general, bridges have larger node betweenness than other nodes in the network. Table 3 shows several nodes with the largest node betweenness in SNCPS.
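Formula (3) can be computed efficiently with Brandes-style shortest-path counting; below is a sketch for an unweighted, undirected graph (the adjacency-list format and function name are illustrative choices, not part of the paper).

```python
from collections import deque

def betweenness(adj):
    """Normalized node betweenness of eq. (3) for an unweighted, undirected
    graph given as {node: set_of_neighbors} (Brandes-style accumulation)."""
    nodes = list(adj)
    n = len(nodes)
    B = {v: 0.0 for v in nodes}
    for s in nodes:
        dist = {s: 0}
        sigma = {v: 0.0 for v in nodes}   # number of shortest paths from s
        sigma[s] = 1.0
        preds = {v: [] for v in nodes}
        order = []
        q = deque([s])
        while q:                           # BFS, counting shortest paths
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in nodes}    # back-propagate pair dependencies
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                B[w] += delta[w]
    # each unordered pair is counted twice, so the accumulated total equals
    # 2 * sum_{j<k} g_jk(i)/g_jk; eq. (3) then divides by (n-1)(n-2)/2 * 2
    return {v: B[v] / ((n - 1) * (n - 2)) for v in nodes}

# path graph 1-2-3: node 2 carries the only shortest path between 1 and 3
adj = {1: {2}, 2: {1, 3}, 3: {2}}
print(betweenness(adj))   # node 2 gets betweenness 1, the endpoints 0
```

For the path graph, eq. (3) gives B = 1 for the middle node (it lies on the single 1-3 shortest path) and 0 for the endpoints.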
4 Partition and Backbone of SNCPS

4.1 Communities and Partitions
A cohesion subgroup is a set of nodes that are more closely connected within the network. Seeking cohesion subgroups in a network is of great significance for understanding network characteristics. There are many ways to seek subgroups; one of the most common is the k-plex. A k-plex is a cohesion subgroup in which each node connects to all but at most k of the other nodes. In other words, a cohesion subgroup with n nodes is a k-plex if and only if the degree of each node within the subgroup is not less than n − k. According to different correlation coefficient thresholds, choosing appropriate n and k, we can find the following subgroups in SNCPS. For γ0 = 0.65, letting k = 2 and n = 11, we find 3 subgroups with a majority of common nodes, and merge them into one community: 000723, 000937, 000968, 600123, 600508, 600971, 600997, 601001, 601088, 601666, 601699, 601898, and 601918. For γ0 = 0.70, letting k = 2 and n = 7, we find 10 subgroups with a majority of common nodes, and merge them into one community: 000552, 000723, 000937, 600123, 600508, 601001, 601088, 601666, 601898, and 601918. We can also divide a network into several partitions. Partitions differ from subgroups in that different communities may share some nodes, whereas no node belongs to two different partitions. Fig. 2 shows the SNCPS topology.

4.2
Backbone Network
So far, investigations of subgroups of networks have focused on communities. We should also pay attention to another
Fig. 2. Topology of SNCPS for correlation coefficient threshold γ0 = 0.65, where larger nodes have larger degree and thicker edges have larger weight. The network is divided into 2 partitions, denoted by nodes with different colors and different shapes. Nodes 000723 and 002060 are obviously the bridges between the 2 partitions.
fact: real networks almost always contain some important nodes, which constitute the core of the entire network and play a backbone role. In this section, we present a new conception, the Backbone Network (BN), and cut SNCPS in order to separate a BN from it. What is a BN? In other words, what characteristics should a BN have? A BN is different from a community or a partition, and should have the following characteristics:
– the BN provides paths for the exchange of information between different subnetworks;
– nodes in the BN have higher degree;
– no node in the BN is a “leaf” of the “mother network”.
Obviously, it is of great significance to separate a BN from a complex network. We give an algorithm named the Algorithm of the Largest Eigenvalue of the Weight Matrix (ALEWM). Suppose that the eigenvalues of the ACM R are |λ1| ≥ |λ2| ≥ . . . ≥ |λn|, with corresponding eigenvectors v1, v2, . . . , vn satisfying Rvi = λivi, where 1 ≤ i ≤ n. From the viewpoint of geometry, vi represents an n-dimensional vector, and λi represents the magnification scale of vi under the transformation R. We are only interested in |λi| > 1, and may suppose |λi| > 1 for 1 ≤ i ≤ k. Further
Stocks Network of Coal and Power Sectors in China Stock Markets
207
suppose vi = (ai1, ai2, . . . , ain)′; then λiaij (1 ≤ j ≤ n) represents the projection of vi on dimension j after the R transformation. We introduce a new variable:

$w_j = \sum_{i=1}^{k} |\lambda_i a_{ij}|$   (4)
where 1 ≤ j ≤ n. We name wj the Dominating Projection (DP) and use it to detect all nodes of the BN. The steps of ALEWM are:
– Calculate the eigenvalues and corresponding eigenvectors of the ACM R;
– Find the eigenvalues whose absolute values are bigger than 1 and their corresponding eigenvectors;
– Calculate the DP value of all nodes;
– Sort the DP values in descending order, set a threshold θ0, and regard all nodes with DP bigger than θ0 as BN nodes.
Using the above algorithm and letting θ0 = 2, we can separate a backbone network with 10 nodes from SNCPS; see Fig. 3.
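The steps above can be sketched as follows, under the assumption that the ACM is symmetric (so a cyclic Jacobi eigensolver applies); the function names, the choice of eigensolver, and the tiny 2×2 test matrix are illustrative, not part of the paper.

```python
import math

def jacobi_eig(A, sweeps=100, tol=1e-12):
    """Eigen-decomposition of a symmetric matrix by cyclic Jacobi rotations.
    Returns (eigenvalues, V) with the eigenvectors as the columns of V."""
    n = len(A)
    a = [row[:] for row in A]
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        off = sum(a[i][j] ** 2 for i in range(n) for j in range(n) if i != j)
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(a[p][q]) < tol:
                    continue
                th = 0.5 * math.atan2(2.0 * a[p][q], a[q][q] - a[p][p])
                c, s = math.cos(th), math.sin(th)
                for k in range(n):          # rotate rows p and q
                    apk, aqk = a[p][k], a[q][k]
                    a[p][k] = c * apk - s * aqk
                    a[q][k] = s * apk + c * aqk
                for k in range(n):          # rotate columns p and q
                    akp, akq = a[k][p], a[k][q]
                    a[k][p] = c * akp - s * akq
                    a[k][q] = s * akp + c * akq
                for k in range(n):          # accumulate eigenvectors
                    vkp, vkq = V[k][p], V[k][q]
                    V[k][p] = c * vkp - s * vkq
                    V[k][q] = s * vkp + c * vkq
    return [a[i][i] for i in range(n)], V

def alewm_backbone(R, theta0=2.0):
    """ALEWM sketch: keep eigenvalues with |lambda| > 1, form the dominating
    projection w_j of eq. (4), and return the nodes with w_j > theta0."""
    lams, V = jacobi_eig(R)
    n = len(R)
    # a_ij of the paper is component j of eigenvector v_i, i.e. V[j][i]
    w = [sum(abs(lams[i] * V[j][i]) for i in range(n) if abs(lams[i]) > 1.0)
         for j in range(n)]
    return [j for j in range(n) if w[j] > theta0], w

# tiny symmetric example: eigenvalues +2 and -2, both with |lambda| > 1,
# so with theta0 = 2 both nodes fall into the backbone
R = [[0.0, 2.0], [2.0, 0.0]]
print(alewm_backbone(R, theta0=2.0))
```

A real ACM is symmetric with entries in [0, 1], so this eigensolver applies directly; for large n a library routine for symmetric matrices would normally replace the hand-written Jacobi loop.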
Fig. 3. BN with 10 nodes of SNCPS for γ0 = 0.65, where larger nodes have larger DP and thicker edges have larger weight. Bridge nodes 000723 and 002060 are retained.
5
Conclusions
This article has modeled the undirected and weighted Stocks Network of Coal and Power Sectors in the China stock markets and given some statistical properties of the network. The research shows that the network is scale-free, with node degree distribution P(s) ∼ |s|−δ with small δ, averaging 0.7. The average cluster coefficients are 0.68 for the unweighted network and 0.41 for the weighted network. The network can be divided into 2 partitions. In particular, we advance the conception of the Backbone Network, develop an Algorithm of the Largest Eigenvalue of the Weight Matrix for detecting the Backbone Network, and separate a Backbone Network with 10 nodes, 601898, 000723, 601918, 601666, 601001, 601088, 002060, 601699, 600121 and 600508, which greatly influence the entire network.
Stock prices are influenced by many factors, such as economic policies and social events. The stock market is a complex system, so stock prices are also influenced by internal factors of the market. The complex network method can better explore internal information, investigate various emergent behaviors of the network, and provide a scientific basis for improving the ability to build portfolios.

Acknowledgments. Project supported by the Science & Technology Research & Development Projects in Higher Education Institutions of Shanxi Province of China, No. 20091148.
References

1. Noh, J.D.: Model for correlations in stock markets. Phy. Rev. E 61, 5981–5982 (2000)
2. Bonanno, G., Caldarelli, G., Lillo, F., Miccich, S., Vandewalle, N., Mantegna, R.N.: Networks of equities in financial markets. Eur. Phy. J. B 38, 363–371 (2004)
3. Kim, H.-J., Kim, I.-M.: Scale-Free Network in Stock Markets. J. Kor. Phy. Soc. 40, 1105–1108 (2002)
4. Kim, H.-J., Lee, Y., Hahng, B., Kim, I.-M.: Weighted Scale-Free Network in Financial Correlation. J. Phy. Soc. Jap. 71, 2133–2136 (2002)
5. Onnela, J.-P., Kaski, K., Kertsz, J.: Clustering and Information in Correlation Based Financial Networks. Eur. Phy. J. B 38, 353–362 (2004)
6. Kim, K., Kim, S.Y., Ha, D.-H.: Characteristics of networks in financial markets. In: Conference on Computational Physics 2006. Com. Phy. Comm., vol. 177, pp. 184–185 (2007)
7. Lee, K.E., Lee, J.W., Hong, B.H.: Complex networks in a stock market. In: Conference on Computational Physics 2006. Com. Phy. Comm., vol. 177, p. 186 (2007)
8. Eom, C., Gabjin, O.H., Kim, S.: Statistical Investigation of Connected Structures of Stock Networks in a Financial Time Series. J. Kor. Phy. Soc. 53, 3837–3841 (2008)
9. Tse, C.K., Liu, J., Lau, F.C.M., He, K.: Observing Stock Market Fluctuation in Networks of Stocks. Complex Sciences, LNCS, Soc. Info. Tele. Eng. 5, 2099 (2009)
10. Huang, W.-Q., Zhuang, X.-T., Yao, S.: A network analysis of the Chinese stock market. Phy. A 388, 2956–2964 (2009)
11. Lan, W.S., Zhang, S.D.: An Application of Complex Networks: Researches on Stocks Strong Correlation. In: Conference Proceedings of 2009 International Institute of Applied Statistics Studies, vol. II, pp. 2235–2239. AAPH Press, Sydney (2009)
12. Tabak, B.M., Serra, T.R., Cajueiro, D.O.: Topological properties of stock market networks: The case of Brazil. Phy. A 389, 3240–3249 (2010)
13. Bakker, L., Hare, W., Khosravi, H., Ramadanovic, B.: A social network model of investment behaviors in the stock market. Phy. A 389, 1223–1229 (2010)
A Cross Layer Algorithm Based on Power Control for Wireless Sensor Networks

Yong Ding1, Zhou Xu2, and Lingyun Tao1

1 Zhejiang Economic & Trade Polytechnic, Hangzhou, China
2 Department of Mathematics and Science, Zhejiang Sci-Tech University, Hangzhou, China
{Yong Ding,ricky.ding}@163.com
Abstract. Wireless sensor networks (WSNs) have become one of the important embedded research fields in the world, integrating the technologies of sensors, computation, modern networks, distributed processing, etc. WSNs are composed of low-cost sensor nodes that communicate with each other in a wireless manner, have limited computing capability and memory, and operate on limited battery power. Therefore, an energy-efficient mechanism for wireless communication on each sensor node is crucial for wireless sensor networks. This paper first summarizes power control and the existing routing metrics, then identifies several key characteristics of wireless sensor network communication links, showing that wireless links in real sensor networks can be extremely unreliable. Second, the CLA (Cross Layer Algorithm), based on power control, is presented. Finally, CLA is applied to the directed diffusion routing protocol, and its performance is evaluated by comparison with existing routing protocols. Simulation results show a significant reduction in network energy consumption and a prolonged network life span. Therefore, our research can give significant guidance on the optimization of wireless sensor networks.

Keywords: Wireless sensor networks; cross layer; network life span; directed diffusion.
1 Introduction

Wireless sensor networks (WSNs) are a new kind of technology developed over the last few years. A WSN is a multi-hop, self-organizing network composed of a large number of low-cost micro sensor nodes deployed in a monitored region. It is used to collaboratively perceive, collect, and process information about objects in the region covered by the network, and then send the information to observers. WSNs are composed of low-cost sensor nodes that communicate with each other in a wireless manner, have limited computing capability and memory, and operate on limited battery power while monitoring targets (events) over long-term usage such as environment monitoring [1, 2]. A WSN is far different from a general wireless network in terms of network characteristics, communication mode, and data transmission needs; it is a resource-limited distributed system. Energy is one of the most important constraints in wireless

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 209–216, 2010.
© Springer-Verlag Berlin Heidelberg 2010
Y. Ding, Z. Xu, and L. Tao
sensor networks. It is necessary to design a reasonable cross-layer algorithm based on the specific application in order to enhance energy saving, which has become a criterion for evaluating routing schemes for WSNs [3, 4]. A WSN is an infrastructure-free network consisting of sensor nodes that form a self-organizing system; its purpose is to perceive, acquire, and process information about objects in the geographic region covered by the network, obtain detailed and accurate information, and eventually send this information to the users who need it. Users observe the network's operating conditions through terminal management and analysis software. Unlike traditional wired networks, the deployment of WSNs is relatively simple and inexpensive. This can be an efficient, low-overhead method of data delivery if it is reasonable to assume: 1) sufficient network density; 2) accurate localization; 3) high link reliability independent of distance within the physical radio range. However, recent experimental studies have shown that wireless links in real sensor networks can be extremely unreliable, deviating to a large extent from the idealized perfect-reception-within-range models used in common network simulation tools. Wireless sensor networks are composed of energy-limited nodes. They use wireless communication to gather and process useful data from a specific region; therefore, how to gather and process data in an energy-effective way while guaranteeing the longest network lifetime is a key research issue of WSNs, namely the energy question. The goal of power control mechanisms is to dynamically adjust the nodes' transmission range to maintain some property of the communication network and to save network energy so as to maximize network lifetime. Firstly, this paper analyzes communication link quality.
Then the CLA (Cross Layer Algorithm), which uses a power control algorithm, is proposed; lastly, this algorithm is applied to the directed diffusion routing protocol, yielding the CLA-DD protocol. Simulation results show that the CLA-DD protocol is superior to routing protocols based on the original measurement method in terms of end-to-end packet loss rate and network life span indicators.
2 Related Work

In the process of calculating the lifetime of a sensor network, the lifetime is often obtained by dividing the overall network energy dissipation by the number of nodes. However, we argue that this kind of lifetime calculation deviates from the definition of the lifetime of a sensor network, as some sensor nodes may be depleted before others, making the actual lifetime of a sensor network shorter than the lifetime calculated in [5]. Many research efforts have focused on minimizing the energy expenditure for broadcasting, either by reducing the number of redundant transmissions due to lack of coordination or by minimizing the total transmission energy required to maintain full connectivity in the network. Many studies on energy-efficient routing for WSNs have been proposed [6]. Douglas S. J. [7] proposed the ETX routing metric, in which a node sends broadcast packets and, together with its neighbor nodes, obtains the packet reception rates of the forward and reverse links; the routing algorithm selects the neighbor node with the smaller ETX value as the next hop. But ETX has disadvantages: 1) because the broadcast probe packets are small and sent at a low rate, the calculated value may differ from the actual packet loss of the link when the sending node transmits large data packets at a high rate; 2) it does not take into account the transmission rate, network load, neighbor-node interference, and so on. The studies in [8] only considered the energy efficiency of routing and did not consider the need to ensure real-time, reliable packet delivery. Only a couple of studies considered a deadline or the reliability of a packet in wireless communication. They evaluate link estimator, neighborhood table management, and reliable routing protocol techniques. A frequency-based neighbor management algorithm is used to retain a large fraction of the best neighbors in a small-size table. They show that cost-based routing using a minimum expected transmission metric shows good performance. In this work, our goal is to study the energy and reliability trade-offs pertaining to geographic forwarding in depth, both analytically and through extensive simulations, under a realistic packet loss model.
3 Communication Link Test

In wireless sensor networks, communication occurs in a many-to-one fashion; intermediate nodes not only transmit their own sensed data to the sink but also relay other sensors' data. Thus the traffic is not uniformly distributed, and the energy dissipation speed of sensor nodes also differs. Here, in order to test actual communication links and analyze their quality, and because underlying links have characteristics such as a high loss rate and asymmetry, this paper made the following tests.

3.1 Hardware and Software Platform

The experimental hardware platform is the MICAz node developed by the Crossbow company; its working band is 2.4 GHz and it is IEEE 802.15.4 compliant. The module has a DSSS radio with a maximum data transfer rate of 250 kbps; it provides wireless communication with every node having router capability; its expansion connector supports light, temperature, acceleration/seismic, acoustic, magnetic, and other Crossbow sensor boards; its processor is the ATmega128, a low-power, high-speed processor; its communication module is the high-performance wireless chip CC2430 [9], compatible with the IEEE 802.15.4 specification. Its advantages are: 1) excellent receiver sensitivity and robustness to interferers; 2) low current consumption (RX: 27 mA, TX: 27 mA, microcontroller running at 32 MHz); 3) very fast transition times from low-power modes to active mode, enabling ultra-low average power consumption in low-duty-cycle systems. The base station model is the MIB510. The experimental software platform is TinyOS [10]. TinyOS is an open-source operating system designed for wireless embedded sensor networks. It features a component-based architecture which enables rapid innovation and implementation while minimizing code size, as required by the severe memory constraints inherent in sensor networks.
TinyOS's component library includes network protocols, distributed services, sensor drivers, and data acquisition tools – all of which can be used as-is or be further refined for a custom application. TinyOS's event-driven execution model enables fine-grained power management yet allows the scheduling flexibility made necessary by the unpredictable nature of wireless communication and physical world interfaces.
212
Y. Ding, Z. Xu, and L. Tao
3.2 Communication Link

Randomly select a mote and let it send 500 packets to the base station MIB510; record the number of successfully received packets Nr and calculate the packet reception ratio (PRR), expressed as Pr = Nr/500. Then change the communication distance d between the nodes and record how Pr varies with d.

[Figure 1 plots the packet reception ratio (0 to 1) against the transmission distance between the nodes (0 to 10 m).]
Fig. 1. The packet delivery ratio of forward and reverse link
From Figure 1: when the distance is less than 4 m, link quality is very good and the packet reception ratio is normally more than 80%; when the distance is between 4 m and 8 m, the packet reception ratio starts to decrease and changes greatly without regularity, so this area can be defined as the transition zone; when the distance is more than 8 m, the packet reception ratio is poor, so such paths should be avoided as much as possible when choosing a route.
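The measured zones above can be summarized in a small sketch (illustrative only, not the paper's code); the 4 m and 8 m thresholds and the 80% PRR floor come from the measurement in this section:

```python
# Illustrative sketch: classify a candidate link by the empirical distance
# zones measured in Section 3.2. Thresholds (4 m, 8 m) and the 80% PRR
# floor are taken from the measurement above; the function names are ours.

def link_zone(distance_m: float) -> str:
    """Map node distance to the empirically observed link-quality zone."""
    if distance_m < 4.0:
        return "connected"      # PRR normally above 80%
    elif distance_m <= 8.0:
        return "transitional"   # PRR decreases irregularly
    else:
        return "disconnected"   # poor PRR; avoid when routing

def prr(received: int, sent: int = 500) -> float:
    """Packet reception ratio Pr = Nr / N for an N-packet probe."""
    return received / sent

print(link_zone(3.0), link_zone(6.5), link_zone(9.0))  # connected transitional disconnected
print(prr(430))  # 0.86
```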
4 Proposed CLA

Energy is widely recognized as a scarce resource in wireless networks, and various energy-aware routing and topology control algorithms have been proposed. However, most previous works are based on the simple "disc model". Actual network communication exhibits a transition zone and asymmetry, and the packet reception ratio alone cannot determine the actual communication link situation; so, in order to select a relatively reliable routing path and to derive CLA, this paper makes the following statements. [11] analyzed the underlying communication link of wireless sensor networks with the log-normal shadowing model; the formula is as follows (in dBm):

PL(d) = PL(d0) + 10 β log10(d / d0) + Xσ .    (1)
An Cross Layer Algorithm Based on Power Control for Wireless Sensor Networks
213
where PL(d) is the path loss at distance d, d0 is the reference distance, β is the path loss exponent, and Xσ is a zero-mean Gaussian random variable with standard deviation σ. From (1), the power received by the receiver is:

Precv = Ptrans − PL(d) .    (2)

From equation (2), the signal-to-noise ratio (SNR) at the receiving node can be calculated as:

SNR(dBm) = Precv(dBm) − N(dBm) .    (3)

Combining (1), (2) and (3) gives (4):

SNR = Ptrans − PL(d) − N = Ptrans − PL(d0) − 10 β log10(d / d0) − Xσ − N .    (4)

In order to evaluate the cross-layer algorithm, we use the following statement: the source node and the destination's neighbor nodes estimate the channel gains from themselves to the destination, denoted hsd.
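Equations (1)-(4) can be sketched in a few lines (illustrative only; the parameter values PL(d0), β, σ and the noise floor below are assumptions, not values from the paper):

```python
# Sketch of eqs. (1)-(4): received power and SNR under the log-normal
# shadowing model. PL(d0)=40 dB, beta=3, sigma=4 dB and N=-95 dBm are
# illustrative assumptions chosen for the example, not paper values.
import math
import random

def path_loss_db(d, d0=1.0, pl_d0=40.0, beta=3.0, sigma=4.0, rng=None):
    """PL(d) = PL(d0) + 10*beta*log10(d/d0) + X_sigma   (eq. 1, in dB)."""
    x_sigma = (rng or random).gauss(0.0, sigma)
    return pl_d0 + 10.0 * beta * math.log10(d / d0) + x_sigma

def snr_db(p_trans_dbm, d, noise_dbm=-95.0, **kw):
    """SNR = Ptrans - PL(d) - N   (eqs. 2-4, all quantities in dB/dBm)."""
    p_recv = p_trans_dbm - path_loss_db(d, **kw)   # eq. (2)
    return p_recv - noise_dbm                      # eq. (3)

# Deterministic case (sigma = 0): SNR = 0 - (40 + 30*log10(10)) + 95
print(round(snr_db(0.0, d=10.0, sigma=0.0), 1))  # 25.0
```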
We define Gsd = |hsd|², Gsi = |hsi|², and Gid = |hid|². Also, we normalize the noise variance to one and assume capacity-achieving codes over each link. The minimum transmission energy required to support a data rate R (bits per symbol) from the source to the destination must satisfy:

R ≤ (1/2) log2(1 + Et1 Gsd + Et2 Gid) .    (5)

When node i is used for relaying, each relay node determines the transmission energy per symbol Et1 for the source and the transmission energy per symbol Et2 for itself. The factor 1/2 in (5) is due to time sharing between the source and relay transmissions. From (5), we obtain Et1 Gsd + Et2 Gid ≥ 2^(2R) − 1. On the other hand, node i has to decode the source signal successfully, so the transmission energy must satisfy (1/2) log2(1 + Et1 Gsi) ≥ R, which translates to Et1 ≥ (2^(2R) − 1) / Gsi. From the above analysis, the link budget is settled end to end: every node starts from a set initial power, calculates the distance to its neighbor nodes, and finally adjusts its power so as to prolong the network lifetime and improve transmission reliability.
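The energy analysis around eq. (5) can be made concrete with a small sketch (illustrative; the channel gain values are made up, and the function name is ours): pick the minimum Et1 that lets relay i decode, then the Et2 that makes the destination meet (5) with equality.

```python
# Sketch of the analysis around eq. (5): minimum per-symbol energies for a
# target rate R with unit noise variance. Gain values below are invented.
import math

def min_energies(g_sd, g_si, g_id, rate_bits):
    """Return (Et1, Et2) meeting eq. (5) with equality."""
    threshold = 2 ** (2 * rate_bits) - 1       # 2^(2R) - 1
    et1 = threshold / g_si                     # relay i must decode the source
    # Destination constraint: Et1*Gsd + Et2*Gid >= 2^(2R) - 1
    et2 = max(0.0, (threshold - et1 * g_sd) / g_id)
    return et1, et2

et1, et2 = min_energies(g_sd=0.2, g_si=1.0, g_id=0.8, rate_bits=1.0)
print(round(et1, 3), round(et2, 3))  # 3.0 3.0
```

With these energies, (1/2) log2(1 + Et1 Gsd + Et2 Gid) = (1/2) log2(4) = 1, so the target rate R = 1 bit/symbol is met exactly.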
5 Simulation

In this section, we perform extensive simulations to study the characteristics of different routing metrics in random topologies under different densities and network
sizes. We simulate random static networks of sizes ranging from 50 to 400 nodes with the same radio characteristics. We represent the density as the average number of nodes per nominal radio range and vary it over a wide scale: 50, 100, 150, 200, 250, 300, 350 and 400 nodes are randomly placed in a 500 m × 500 m square area, and the simulation lasts 400 s. The size of a data packet is 64 bytes, and the sending rate is 20 packets/s. With the end-to-end remaining energy En as the evaluation index, we evaluate HOP-DD, ETX-DD and CLA-DD. This paper uses the node named SOURCE as the source node and the node named DEST as the destination node. Performing the experiment as above, we can plot the curves of the number of nodes vs. average energy consumption and the curves of remaining energy (Figure 2 and Figure 3).

[Figure 2 plots packet loss rate (0 to 1) against transmission time (0 to 400 s) for CLA-DD, ETX-DD and HOP-DD.]
Fig. 2. Transmission time vs. packet loss rate

[Figure 3 plots remaining energy (0.6 to 1) against transmission time (0 to 400 s) for CLA-DD, HOP-DD and ETX-DD.]
Fig. 3. Remaining energy En vs. simulation time t
Figure 2 shows, for simulation scenarios of 50, 100, 150, 200, 250, 300, 350 and 400 nodes, the average end-to-end packet loss rate from the source node S to the destination node D. Under the preset conditions, owing to Hello and ACK control packets,
the actual packet loss rate is higher than the theoretical value. CLA-DD and ETX-DD take the actual traffic situation on the underlying communication link into account and thus select a relatively reliable link. CLA-DD additionally considers the asymmetry of the lower-layer link, so its packet loss rate is even lower than that of ETX-DD. Figure 3 shows the simulation results of the experiment: when the end-to-end route is established, HOP-DD uses the minimum-hop-count metric without considering the actual low-level communication link; thus, when transmitting data, the underlying layer needs to retransmit, so the network consumes more energy. CLA-DD and ETX-DD consider the reliability of the underlying link when establishing a relatively reliable routing path; compared with ETX-DD, CLA-DD requires fewer retransmissions, thus reducing the energy overhead. Compared with HOP-DD, CLA-DD needs fewer probe packets to judge the status of the link, so CLA-DD consumes less energy.
6 Conclusion

Generally, a WSN has a large number of nodes, a complex or even dangerous working environment, restricted energy that is difficult to replenish, and little storage space and computing capacity. Hence, how to design a highly efficient, energy-saving WSN routing algorithm that prolongs network lifetime has become a research hot spot. This paper puts emphasis on energy-saving routing for WSNs: on the basis of power control theory, it proposes the CLA algorithm and, by modifying the original directed diffusion protocol, applies CLA to directed diffusion routing. Simulation results on end-to-end packet loss rate and remaining energy show that CLA provides real-time communication without compromising the energy awareness of existing energy-aware routing protocols; among the paths that deliver a packet in time, it selects one that expends less energy than the others. Sometimes it selects a path that expends more energy than the optimal path, because paths are selected at random according to a probability; this distributes energy expenditure evenly over the sensor nodes. Thus, the algorithm can budget energy fairly and prolong network lifetime.
Acknowledgment This research work has been supported by the National Small Technology Innovation Fund (01C26212110802).
References

1. Cayirci, E., Baydere, S., Havinga, P.: Cross-layer energy analysis of multihop wireless sensor networks. In: Proceedings of the Second European Workshop on Wireless Sensor Networks (IEEE Cat. No.05EX960), Piscataway, NJ, USA, vol. 1, pp. 33–44 (2005)
2. Han, X., Cao, X., Lloyd, E.L., Shen, C.-C.: Fault-Tolerant Relay Node Placement in Heterogeneous Wireless Sensor Networks, vol. 9(5), pp. 643–656 (2010)
3. Hong, L., Guang-Hui, L., Hai-Lin, F.: Fault-tolerant routing in wireless sensor networks with redundant cluster-heads. Harbin Gongye Daxue Xuebao/Journal of Harbin Institute of Technology 41(1), 80–85 (2009)
4. Israr, N., Awan, I.: Coverage based inter cluster communication for load balancing in heterogeneous wireless sensor networks. Telecommunication Systems 38(3), 121–132 (2008)
5. Shin, K.-Y., Song, J., Kim, J., Yu, M.: REAR: reliable energy aware routing for wireless sensor networks. In: The 9th International Conference on Advanced Communication Technology, Piscataway, NJ, USA, vol. 2, pp. 525–530 (2007)
6. Zhao, J., Govindan, R.: Understanding Packet Delivery Performance in Dense Wireless Sensor Networks. ACM SenSys (November 2003)
7. De Couto, D.S.J., Aguayo, D., Bicket, J., Morris, R.: A High-Throughput Path Metric for Multi-Hop Wireless Routing. In: Proceedings of MOBICOM 2003, San Diego, CA, USA, pp. 134–146. ACM Press, New York (2003)
8. Liu, J., Zhao, F., Petrovic, D.: Information-directed routing in ad hoc sensor networks. IEEE J. Sel. Areas Commun. 23, 851–861 (2005)
9. Simic, L., Berber, S.M., Sowerby, K.W.: Partner choice and power allocation for energy efficient cooperation in wireless sensor networks. In: 2008 International Conference on Communications, Piscataway, NJ, USA, vol. 5, pp. 171–176 (2008)
10. Yaju, L., Zhenjiang, C., Li, Z., Dongming, L.: The design of ZigBee wireless sensor network node based on RF CC2430. Microcomputer Information 3(22), 167–170 (2007)
11. Hai, W., Neng-gui, Z., Zhi-gang, G.: Wireless sensor network node based on TinyOS operating system. Mechanical & Electrical Engineering Magazine 5(26), 20–30 (2009)
The Research of Mixed Programming Auto-Focus Based on Image Processing Shuang Zhang1,2, Jin-hua Liu2,4, Shu Li2,5, Gang Jin1, Yu-ping Qin3, Jing Xiao6, and Tao An1 1
The Institute of Optics and Electronics, Key Laboratory of Beam Control, Chinese Academy of Sciences, 610209, Chengdu, China 2 Graduate University of Chinese Academy of Sciences, 100039, Beijing, China 3 College of Mathematics & Software Science, Sichuan Normal University, 610068, Chengdu, China 4 The Chinese People' s Liberation Army University of Military Traffic, 300190, Tianjin, China 5 Beijing Institute of Oil Research, Laboratory of Oil Storage and Transportation Automation, 102300, Beijing, China 6 Xu Zhou Air force College, 221000, Xu Zhou, China
Abstract. Auto-focus technology is an important way to make an indentation diameter measurement system more accurate, intelligent and automated. This article describes the use of image processing methods to achieve auto-focus in an indentation diameter measurement system; at its core is the choice of an appropriate image clarity evaluation function. After studying a number of image clarity evaluation functions, we propose image clarity evaluation functions based on a vector model and on an improved DCT transform. Experiments show that the proposed algorithms have good unimodality, accuracy, stability, reliability and rapidity. Finally, COM-component-based mixed programming of Matlab and VB is used so that the algorithms and the software can be designed together. Keywords: Auto-focus; measurement of indentation diameter; sharpness evaluation function; mixed programming.
1 Introduction

Focus affects the measurement results in a Brinell hardness indentation diameter measurement system. Taking indentation diameter measurement as its background, this research aims to improve the accuracy, intelligence and automation of the measurement system through image-processing-based auto-focus technology. The key to this auto-focus technology is the image clarity evaluation function, which is based on image processing. Most current algorithms are built on the time-domain gray-entropy method and the gray-scale variance structure, but their focusing results are not very stable and their operation is slow. Because of these issues, this article proposes two auto-focus algorithms, based on a vector model and on an improved DCT transform. R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 217–225, 2010. © Springer-Verlag Berlin Heidelberg 2010
218
S. Zhang et al.
2 Algorithm Vector Model

The clarity of an image is determined by the degree of change of the gray values in adjacent regions of the image. We can evaluate the image with a gradient operator, because the edges of the image are where the gray value changes most intensely. The gradient is characterized by the gray-value differences between neighboring pixels, i.e., in the mathematical model, by a differential operator. The simplest differential operator is the Roberts operator, which acts on the 2×2 neighborhood F(i,j), F(i+1,j), F(i,j+1), F(i+1,j+1) shown in Figure 1; its expression is equation (1).

Fig. 1. Sketch map of the Roberts operator
G(i, j) = |F(i, j) − F(i, j+1)| + |F(i, j) − F(i+1, j)| .    (1)
This embodies the gradient in the horizontal and vertical directions. But it is not very stable, especially for hardness indentation images, where the defocused and in-focus images differ greatly, as shown in Figure 2 and Figure 3.
Fig. 2. Out-of-focus image
Fig. 3. Focusing image
Although there may be a large gradient at the edges of the focused image, the gradient away from the edges of the focused image is smaller, because the gray levels of the focused image are more concentrated than those of the defocused one; the cumulative effect of these errors makes it difficult to automatically judge image sharpness. Therefore,
in this article, the algorithm is improved based on the image gray-value change. Because the main body of the collected images is round, changes in the diagonal direction have a significant impact on the result, so the pixel changes in the horizontal, vertical and diagonal directions should all be taken into account; Figure 4 shows the same 2×2 neighborhood with the diagonal differences marked.

Fig. 4. Sketch map of the oblique direction
The expression of the algorithm in Figure 4 is as follows:

G(i, j) = |F(i, j) − F(i+1, j+1)| + |F(i+1, j) − F(i, j+1)| .    (2)
Figure 5 shows the improved model, which combines the two neighborhoods above.

Fig. 5. Modified gradient model
To weight the horizontal and diagonal directions differently, so as to capture the overall gradient change of a pixel within its neighborhood, the expression of the vector model algorithm is as follows:

G(i, j) = |F(i, j) − F(i, j+1)| + |F(i, j) − F(i+1, j)| − 2|F(i, j) − F(i+1, j+1)| .    (3)
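The three operators can be sketched as follows (illustrative only, not the authors' Matlab code; the tiny test images are invented): each operator is summed over every valid 2×2 neighborhood of a grayscale image given as a list of rows.

```python
# Illustrative sketch: sharpness scores from the gradient operators of
# eqs. (1)-(3), summed over all 2x2 neighborhoods. Test images are made up.

def sharpness(img, op):
    """Sum a 2x2-neighborhood operator `op` over all valid pixels."""
    h, w = len(img), len(img[0])
    return sum(op(img, i, j) for i in range(h - 1) for j in range(w - 1))

def roberts(f, i, j):          # eq. (1): horizontal + vertical differences
    return abs(f[i][j] - f[i][j + 1]) + abs(f[i][j] - f[i + 1][j])

def diagonal(f, i, j):         # eq. (2): cross (diagonal) differences
    return abs(f[i][j] - f[i + 1][j + 1]) + abs(f[i + 1][j] - f[i][j + 1])

def vector_model(f, i, j):     # eq. (3): combined, diagonal term weighted by 2
    return (abs(f[i][j] - f[i][j + 1]) + abs(f[i][j] - f[i + 1][j])
            - 2 * abs(f[i][j] - f[i + 1][j + 1]))

blurry = [[10, 10], [10, 10]]  # flat patch: no gradient anywhere
sharp = [[0, 100], [100, 0]]   # strong edge through the patch
print(sharpness(blurry, roberts), sharpness(sharp, roberts))  # 0 200
```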
3 DCT Transform Algorithm

In the frequency domain, the clarity and focus level of an image are determined by the amount of high-frequency components, which can therefore be used as a criterion of image clarity. The most common transforms are the Fast Fourier Transform (FFT) and the Discrete Cosine Transform (DCT). The FFT involves complex-valued processing, so its computation is complicated and takes a long time. In the hardness indentation diameter measurement system a large number of measurements must be carried out; the electric displacement stage moves at the micron level and the indentation image is 2048 × 2048 px, so it is necessary to improve the image-processing speed even though the motor moving time is fixed. Therefore the DCT, a simple and comparatively fast transform, can be used. The DCT concentrates more energy and separates high-frequency components better; the clarity evaluation function separates and retains the high-frequency components as the measure of image sharpness. Figure 6 shows the result of DCT-transforming an image.
The two-dimensional DCT transform is shown as equation (4):

F(u, v) = c(u) c(v) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) cos[π(2x+1)u / 2M] cos[π(2y+1)v / 2N] ,    (4)

where u = 0, 1, …, M−1; v = 0, 1, …, N−1, and

c(u) = √(1/M) for u = 0;  c(u) = √(2/M) for u = 1, 2, …, M−1 ,    (5)

c(v) = √(1/N) for v = 0;  c(v) = √(2/N) for v = 1, 2, …, N−1 .    (6)
Writing a program directly from formula (4) would involve a four-fold loop, which is not acceptable for large-resolution images. According to the separability of the two-dimensional DCT, it can be rewritten as an equivalent computation of two one-dimensional DCTs. The DCT formula is restated as equation (7):

F(u, v) = c(u) c(v) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) cos[π(2x+1)u / 2M] cos[π(2y+1)v / 2N] .    (7)
Writing the summation symbols separately:

F(u, v) = c(u) Σ_{x=0}^{M−1} cos[π(2x+1)u / 2M] · c(v) Σ_{y=0}^{N−1} f(x, y) cos[π(2y+1)v / 2N] .    (8)
Define C1(u, x) = c(u) cos[π(2x+1)u / 2M] and C2(v, y) = c(v) cos[π(2y+1)v / 2N]; C1 and C2 are two separate two-dimensional matrices whose elements are indexed by (u, x) and (v, y) respectively. By the rule of matrix multiplication, equation (8) becomes:

F(u, v) = C1 × f(x, y) × C2′ .    (9)
Equation (9) is the improved formula for the DCT transform. The DCT-based image sharpness function is concerned with the high-frequency part of the image and uses the amount of high-frequency components as the basis for judging image sharpness, giving the algorithm of equation (10):

G = Σ_{v}^{N} Σ_{u}^{M} |F(u, v)| ,  u + v > min(M, N) .    (10)
F(u, v) in (10) is the result of the DCT transform, and (M, N) is the image resolution. However, the in-focus and defocused images of this system differ considerably in brightness and gray level, and image clarity is closely related to the brightness and gray level of the image itself, so a relative high-frequency component is introduced to distinguish them. Since the DC component to a certain extent reflects the overall brightness
of the image and its overall information, the ratio of the high-frequency components to the DC component is used as the relative high-frequency measure of an image. The sample image whose G value is maximal is taken to be the clearest. The improved DCT algorithm is given by equation (11):

G = [ Σ_{v}^{N} Σ_{u}^{M} |F(u, v)| ] / |F(1, 1)| ,  u + v > min(M, N) .    (11)
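Equation (11) can be sketched directly (illustrative, not the authors' code): keep only coefficients with u + v > min(M, N) and normalize by the DC magnitude. The paper's F(1,1) is 1-based Matlab indexing, i.e., the DC coefficient F[0][0] in 0-based terms; the toy coefficient matrices below are invented.

```python
def dct_sharpness(F):
    """Eq. (11): sum of high-frequency |F(u,v)| over the DC magnitude.

    F is a 2-D DCT coefficient matrix (list of rows). F(1,1) in the paper
    is 1-based (Matlab) indexing, i.e. the DC coefficient F[0][0] here.
    """
    M, N = len(F), len(F[0])
    hi = sum(abs(F[u][v])
             for u in range(M) for v in range(N)
             if u + v > min(M, N))
    return hi / abs(F[0][0])

# Toy coefficient matrices: more high-frequency energy -> larger score.
focused = [[10, 1, 2], [1, 3, 4], [2, 4, 6]]
defocused = [[10, 1, 0], [1, 0, 0], [0, 0, 0]]
print(dct_sharpness(focused) > dct_sharpness(defocused))  # True
```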
4 Experimental Results

Sample images were collected from the indentation diameter measurement system; the image size is 640 × 480 px, and the samples are divided into three groups in order to verify the stability and execution efficiency of the algorithms in the experiments. In the following plots the ordinate is the normalized clarity evaluation function value and the abscissa is the image sample number; the curves compare the vector model, Roberts, Laplacian, FFT and DCT algorithms.
Fig. 6. Result of various algorithms on the 1st experiment
Fig. 7. Result of various algorithms on the 2nd experiment
Fig. 8. Result of various algorithms on the 3rd experiment
Fig. 9. Compile interface of COM component
From the above charts it is easy to see that, across the three groups of different sample tests, the improved vector model algorithm is markedly better than the Roberts algorithm in the unimodality of the curve, with good stability and repeatability, while the Laplacian algorithm performs poorly on the sample images and is unstable near the maximum edge gradient. The FFT-based energy-entropy algorithm cannot evaluate the clarity of images in this system, but the DCT-based algorithm performs well in both unimodality and stability.
Table 1. Execution time of various algorithms (in seconds)

Algorithm      EXP 1     EXP 2     EXP 3     Average time
Roberts        5.6250    5.2030    4.5000    0.0511
Vector model   6.5930    7.3910    5.6560    0.0655
Laplacian     24.5470   19.8910   20.0470    0.2150
DCT            4.0940    4.2820    3.5000    0.6981
FFT           71.9060   74.5630   62.9530    0.0396
The algorithm execution times are shown in Table 1. Judging by the average time per single image evaluation, the DCT transform algorithm is the fastest at just 0.0396 s; the Roberts algorithm and the vector model algorithm come second, while the Laplacian algorithm and the FFT transform algorithm take the longest time and cannot meet the system requirements.
5 Realization of Mixed-Language Programming

This system realizes hybrid programming by having VB call Matlab COM components: the VB language is simple and fast, with unique advantages in friendly graphical interface design and development, but poor numerical computing capability, while Matlab provides powerful matrix-oriented data processing and graphic display capabilities but weak interface features. The combination better meets the system requirements. COM component technology is a software architecture different from the traditional one, providing an industry standard for sharing binary code. Such sharing is not confined to a particular programming language: the algorithms and self-defined functions are encapsulated in COM components, a Windows application calls the corresponding functions encapsulated in the components through the standard COM interfaces, and the results are returned to the application. From prepared m-files, a dll file can be generated successfully; it then needs to be referenced as a COM component in the VB project, as shown in Figure 10.
Fig. 10. Compile interface of COM component
Fig. 11. Sketch map of the VB project referencing the COM component
In the new VB project, reference this component, as shown in Figure 11. The initialization is created in the form object, after which the COM component's operations can be called wherever they are needed in the program, just like VB built-in functions. The parameter types and input order of the component's functions are the same as those of the m-files in Matlab.
6 Engineering Applications

The indentation diameter measurement system is based on visual inspection and image processing technology, combined with X-Y two-dimensional coordinate measurement and Z-axis measurement, to automatically identify the focal plane (body surface) position, focus automatically and measure; the system block diagram is shown in Figure 12.
Fig. 12. Measurement system of indentation diameter (MS: Measuring signal; MU: Measuring units)
This article has discussed the control module of the software auto-focus system in the overall technical design (Figure 13). The software controls the motor to move and capture images, records the location of each collected image, selects the image with the largest clarity value after processing, and controls the motor to move back to the corresponding location to capture the final image, which is then passed to the measurement module for processing.
Fig. 13. Flow chart for software
Fig. 14. Software interface
The software interface, shown in Figure 14, is divided into four regions: region 1 is the real-time display area for monitoring the CCD images; region 2 holds the main operating menu buttons; region 3 is the slider used to control the up-and-down movement of the motor; and region 4 is the display area for the acquired and processed images, in which the image with the largest clarity value is marked in red and placed first. When an image within region 4 is clicked, the motor automatically moves to the position where that image was captured.
7 Conclusion

In this paper, starting from image gray-value changes and spectral analysis, ways of measuring image clarity were discussed. A vector model algorithm on the image gray-value gradient and an improved DCT transform algorithm were adopted to construct clarity evaluation functions. Experiments show that the vector model algorithm and the DCT transform algorithm have good unimodality and short calculation time, and are accurate, stable and reliable. The Matlab and VB mixed programming improves code reuse and programming efficiency and has found good application in the actual project.
References

1. Zi-fu, W., Du-yan, I., Jian-jun, U., Qin-bo, A.: Ensemble Contour Tracking. Opto-Electronic Engineering 37(5), 12–18 (2010)
2. Yu-lan, L., Hai-qi, Z., Ping, W., Rui-hong, Y., Li-wei, T., Jun-ying, L.: Gun Bore Flaw Image Recognition Based on Power-amplitude Spectrum. Opto-Electronic Engineering 37(5), 36–40 (2010)
3. Chuan-bin, Z., Zheng-long, D.: Study on Wavelet Filtering for Signal of Ring Laser Gyro. ACTA Electronica Sinica 32(1), 125–127 (2004)
4. Wornell, G.W., Oppenheim, A.V.: Estimation of Signals from Noisy Measurements Using Wavelets. IEEE Transactions on Signal Processing 40(3), 611–623 (1992)
5. Mallat, S.: A Wavelet Tour of Signal Processing. China Machine Press, Beijing (2002)
6. Fu-ming, F., Liang-lun, C., Xiao-fen, W., Jian-hua, P.: A new type of high-speed automatic focusing system. Opto-Electronic Engineering 37(5), 127–132 (2010)
7. Zhi-cheng, Z., Zhi-yuan, L., Jing-gang, Z.: Robust IMC-PID controller design for an optoelectronic tracking system with time-delay. Opto-Electronic Engineering 37(1), 30–36 (2010)
8. Zhi-hai, S., Wan-zeng, K., Shan-an, Z.: Scale and direction adaptive locating of video moving objects with subtractive clustering. Opto-Electronic Engineering 37(1), 37–42 (2010)
9. Chou, K.C., Golden, S.A., Willsky, A.S.: Multiresolution Stochastic Models, Data Fusion, and Wavelet Transform. Signal Processing 34(3), 257–282 (1993)
10. Craigmile, P.F., Guttorp, P., Percival, D.B.: Wavelet-Based Parameter Estimation for Trend Contaminated Fractionally Differenced Processes. Technical Report Series NRCSE-TRS No. 077, February 4, 1–30 (2004)
11. Deriche, M., Tewfik, A.H.: Maximum Likelihood Estimation of the Parameters of Discrete Fractionally Differenced Gaussian Noise Process. IEEE Trans. on Signal Processing 41(10), 2977–2989 (1993)
12. Hirchoren, G.A., Attellis, C.E.D.: Estimation of Fractal Signals Using Wavelets and Filter Banks. IEEE Trans. on Signal Processing 46(6), 1624–1630 (1998)
The Optimization of Route Design for Grouping Search Xiujun Wu School of Mathematics & Computer Science, Jianghan University, Wuhan 430056, China
[email protected]
Abstract. A layered grouping-search algorithm is proposed for the multi-constraint route problem arising in earthquake rescue. Firstly, rescuers are divided into groups based on the search width and on the principle of exactly dividing the search region; secondly, tasks are assigned based on the principle of balanced region division; finally, the center of each square is regarded as a vertex of a graph and the search problem is turned into an approximate Hamiltonian problem, in which a path is designed that starts from the beginning point and ends at the midpoint of one side. The problem that the corners cannot be completely covered is remedied with the extended "reentrant search", and the balance of search time within groups is strengthened with the "halfway conversion" strategy. Thus we obtain a search path with the shortest distance and the most balanced search time for each person. The route achieves a search efficiency as high as 95% and can be seen as an optimal approximate solution, improving previously known results. Keywords: Hamilton path, grouping strategy, optimal route.
1 Introduction

The optimization of route design for grouping search is one of the key issues to be solved first in earthquake rescue. As this is a nondeterministic polynomial complete (NPC) problem, scholars have considered converting it into a graph-theory problem based on complete coverage, and then using Dijkstra, dynamic programming, the Bellman-Floyd algorithm, heuristic route algorithms, DNA algorithms or intelligent mathematical methods such as genetic algorithms to try to solve the route optimization problem [1, 2, 3, 4, 5, 6, 7, 8]. Wu et al. [9] transform the ground search problem of one group into a kind of approximate Hamiltonian problem, called the Hamilton path (HP) problem. They design an optimal "one-stroke" path as an approximate solution to the practical HP problem, and the results show that the algorithm is effective and feasible. Applying an alternate approach, Yang, Liao and Huang [10, 11, 12] obtained a route algorithm with minimum coverage and exchanging search paths for the grouping rescue search problem. In this article, we work on three variations of the HP problem using Wu's algorithm, transforming the application scenario in advance through grouping. The rest of the paper is organized as follows: Section 2 defines the grouping search problem of earthquake rescue. Some theoretical and existing results on the HP R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 226–233, 2010. © Springer-Verlag Berlin Heidelberg 2010
problems are also included in this section. The grouping principles are presented and analyzed in Section 3.
2 Background and Statement of the Problem

The Wenchuan earthquake of May 12, 2008 severely paralyzed the ground transportation system; earthquakes of this scale are really rare. Shortly after the earthquake, the relief headquarters quickly sent many special teams to search for victims in designated regions, thereby localizing people in need of help. Under circumstances where nearly all communication systems had broken down, there was not a moment to lose for rescue work, and routes for searching had to be worked out first. This raises the problem: how to figure out a time-saving way of searching that makes the search work more efficient. The following is a simplified version of the problem. A flat rectangular destination region of 11200 m × 7200 m is to be searched. The starting point is the center of this region; assembly is needed after searching, and the assembly point (ending point) is the midpoint of one short side. The detection radius of each person is 20 meters; a person moves at an average of 0.6 m/s when searching and 1.2 m/s when not searching. Everyone has a GPS unit and a walkie-talkie with a communication radius of 1000 meters. The search team contains 50 members equipped with three satellite phones, and is divided into three groups to carry out the search; each group can independently report its search results to the headquarters. Now design a search path that covers the whole region in the shortest time. Wu et al. worked out the above problem for one-group search. Their solution has three main parts: firstly, the problem is converted directly into an HP problem via the search width of the whole group; secondly, the problem that the corners cannot be completely covered is remedied with the extended "reentrant search"; thirdly, the "halfway conversion" strategy is used to balance the overall search time. In this article, we need to solve the search problem with cooperating groups. The number of ways to divide 50 persons into three groups is as large as 19600.
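The stated parameters give a quick lower bound on the total search time (a back-of-envelope sketch, not a result from the paper, since it ignores marching and corner overheads):

```python
# Back-of-envelope sketch: a lower bound on total search time from the
# stated parameters, ignoring marching and corner overheads. Each searcher
# sweeps a 40 m strip (20 m detection radius) at 0.6 m/s while searching.

AREA = 11200 * 7200          # m^2, region to cover
SWEEP_RATE = 50 * 40 * 0.6   # m^2/s covered by 50 searchers abreast

lower_bound_s = AREA / SWEEP_RATE
print(round(lower_bound_s), "s =", round(lower_bound_s / 3600, 1), "h")  # 67200 s = 18.7 h
```

Any feasible route therefore needs at least about 18.7 hours of pure searching; the route design below tries to keep the extra marching and rebalancing time above this floor as small as possible.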
How can we divide 50 people into three groups and assign their corresponding tasks?
3 Analysis of the Problem

3.1 Grouping

The general principle of grouping: divide the people into three groups so that the search time needed by the three groups is balanced.

Step 1. Divide the 50 people into three groups. The number of members in each group is determined by the division of the actual search region. The number of members determines the search width: the search width of each group is 40 times its member number. Since the region to be searched is an 11200 m × 7200 m rectangle and (11200, 7200) = 800, i.e., the
228
X. Wu
greatest common divisor of the length and the width is 800. Since each member's search width is 40 m and 800/40 = 20, the task can be divided exactly only if the number of members in each group is a divisor of 20, i.e., 1, 2, 4, 5, 10 or 20. Because the number of groups is fixed at three, the problem is to find three such divisors whose sum is 50. The only possible choice is 20, 20 and 10.

3.2 Division of Region

Step 2. Divide the 11200 m × 7200 m region into three parts and allocate them to the groups.

Analysis. The total search time is the time needed by the group member who is the last to finish searching. To finish as fast as possible, i.e., to achieve min{T}, the time needed by each group should be balanced. The total time includes both marching time and searching time. Marching time consists of the time to spread out from the starting point, the time of reentrant marching after extended search at the corners, the time of changing formation when converting the search routes, and the time of returning to the assembling point after the search. Searching time is composed of the time of searching main lines and the time of extended searching at corners. The region is therefore divided according to the following principle.

Principle of region allocation: the area of the search region of each group should be in proportion to its member number, so that the search tasks are balanced. The whole region can be divided into 14 × 9 = 126 squares of 800 m × 800 m each. The ratio of the numbers of task squares for the three groups should thus be 20 : 20 : 10, i.e., 50.4, 50.4 and 25.2 squares. Considering exact division and balance, there are three possible solutions: (a) 50, 50 and 26 squares; (b) 51, 51 and 24 squares; (c) 50.5, 50.5 and 25 squares.

Definition 1. The disequilibrium degree of searching distance is

α_l = (maximum differential search distance of the three groups / mean search distance) × 100%.

Definition 2. The disequilibrium degree of searching time is

α_s = (maximum differential search time of the three groups / mean search time) × 100%.
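The admissible group sizes of Step 1 can be enumerated directly. A minimal Python sketch, encoding only the constraint that a group's search width (40 m per member) must exactly divide the 800 m square side:

```python
# Each member sweeps a 2 x 20 = 40 m swath, so a group of n covers 40*n m;
# tiling the 800 m squares exactly requires n to divide 800/40 = 20.
from itertools import combinations_with_replacement

divisors = [n for n in range(1, 21) if 20 % n == 0]
splits = [t for t in combinations_with_replacement(divisors, 3) if sum(t) == 50]
print(divisors)  # [1, 2, 4, 5, 10, 20]
print(splits)    # [(10, 20, 20)] -- the only feasible split of 50 members
```

This confirms that 20, 20 and 10 is the unique admissible choice.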
Hence the grouping disequilibrium degrees of the three divisions, computed according to [2], are shown in Table 1. Obviously, the division of task regions should also save as much non-searching time as possible, that is, each region should be turned into a unicursal problem with the center as the starting point and the assembling point as the terminating point. Several ways of division satisfy this requirement, and the main difference in total time between them lies in the searching time. The maximum difference among the searching tasks of the three groups under Division A is (26 × 2 − 50) × 800 = 1600 (m), whereas the maximum difference of
The Optimization of Route Design for Grouping Search
229
Division B is (51 − 24 × 2) × 800 = 2400 (m). Therefore Division A is more balanced than Division B.

Table 1. Grouping disequilibrium degree, considering only the search of main lines

Division   Maximum differential distance of groups (m)   Mean search distance (m)
(a)        1600                                          40320
(b)        2400                                          40320
(c)        400                                           40320
(d)        600                                           40320
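Definition 1 can be checked against the values in Table 1; a minimal sketch (distances taken from Table 1):

```python
# Disequilibrium degree of searching distance (Definition 1):
# alpha_l = (maximum differential search distance) / (mean search distance) * 100%
mean_distance = 40320  # mean main-line search distance (m), from Table 1
max_diff = {"a": 1600, "b": 2400, "c": 400, "d": 600}  # max differentials (m)

alpha_l = {div: diff / mean_distance * 100 for div, diff in max_diff.items()}
for div, a in alpha_l.items():
    print(f"Division ({div}): alpha_l = {a:.2f}%")
# Divisions (c) and (d) come out far more balanced than (a) and (b).
```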
Since the numbers of members in the groups are in the proportion 20 : 20 : 10, we can also consider Division C: two groups have the same number of members, so Division C is obtained by dividing one square equally and allocating the two halves to those two groups. Calculation shows that Division C has a higher equilibrium degree than Divisions A and B. Based on the specific searching width, we can further consider Division D (50.25, 50.25 and 25.5 squares). In Division D (Fig. 1), the region of Group A is symmetric to that of Group C; each consists of 50 squares of 800 m × 800 m plus an 800 m × 200 m rectangle, together representing the task of a group of 20 people. The white region is the task of the group of 10 persons, consisting of 25 squares of 800 m × 800 m plus an 800 m × 400 m rectangle. Under Division D, the difference of tasks among the three groups is 400 − 200 = 200 (m). By comparison, the task allocation of Division D is the most balanced.

3.3 Design of Paths

Step 3. Design the search paths used in the different regions. After the division of groups and search regions, we turn to the design of the unicursal path of each region, taking the following principles into consideration.

Principle 1. Each group's search region is near the starting point and the assembling point, so that the marching time is as balanced and as short as possible.

Principle 2. People in the same group adopt the conversion policy of [9], as shown in Fig. 2, at a certain place in the middle, to balance the task of each member of the group, thereby ensuring that the time of searching main lines approaches the average.

Using the same ideas as in the path design for a 20-person group [9], we design, based on Division D, the paths of the three groups as in Fig. 1. The search times of the three groups are given in Tables 2, 3, 4 and 5. Table 2 shows the search time of Group A (C) going through
Path 1 in Fig. 1 without conversion. Table 3 shows the search time of Group A (C) when performing conversion at the designated point on Path 1. Table 4 shows the search time of Group B going through Path 1 in Fig. 1 without conversion. Table 5 shows the search time of Group B when performing conversion at the designated point on Path 1.
Fig. 1. Path design for the three groups. The region of Group A is completely symmetric to that of Group C, each searched by 20 persons. In between is the region of Group B, searched by 10 persons.

Table 2. Search time of Group A (C) going through Path 1 in Fig. 1 without conversion

Before conversion (A, C)   Solid line   Dotted line   Difference (solid − dotted)
Length of main line (m)    41720        38680         3040
Number of corners          26           26            0
Non-search length (m)      1318         1605          0
Total time (h)             19.7694      18.4284       1.34
Minimum time (h)           19.7
Table 3. Search time of Group A (C) when performing conversion at the designated point on Path 1

After conversion (A, C)                  Solid line   Dotted line   Difference
Length of main line (m)                  40200        40200         0
Number of corners                        26           26            0
Non-search length (m)                    1922         1201          0
Distance for conversion adjustment (m)   760          760           0
Total time (h)                           19.3815      19.2146       0.1669
Minimum time (h)                         19.3
Time consumption (h): 46.6667, 0.161, 0.1759, 0.1759
Table 4. Search time of Group B going through Path 1 in Fig. 1 without conversion

Before conversion (B)      Solid line   Dotted line   Difference   Time consumption (h)
Length of main line (m)    41520        40080         1440         19.2222 / 18.5556
Number of corners          38           38            0            0.2185
Non-search length (m)      770          410           0            0.1782 / 0.0949
Total time (h)             19.6190      19.18         0.439
Minimum time (h)           19.619
Table 5. Search time of Group B when performing conversion at the designated point on Path 1

After conversion (B)                     Solid line   Dotted line   Difference   Time consumption (h)
Length of main line (m)                  40800        40800         0            18.8889
Number of corners                        38           38            0            0.2185
Non-search length (m)                    770          410           0            0.1782 / 0.0949
Distance for conversion adjustment (m)   380          380           0            0.088
Total time (h)                           19.3736      19.2903       0.0833
Minimum time (h)                         19.3
Comparing the results of Tables 3 and 5, we take the maximum searching time as the time needed by the whole team to accomplish its task, and we obtain:

Proposition 1. The total time consumed for searching under design scheme D is 19.3 hours.

3.4 Evaluation of the Scheme

Disequilibrium degree of searching time. By definition, the disequilibrium degree of an ideal division is 0. The calculation in Table 6 shows that converting the search paths halfway greatly decreases all disequilibrium degrees: those of Groups A, B and C drop to 0.86%, 0.43% and 0.86% respectively, all approximating 0, which indicates the effectiveness of conversion.

Comparison with the ideal searching time. By Proposition 2, the ideal time of a search accomplished by 50 people is 18.6 hours; the search efficiency of Division D is shown in Table 7. As the table shows, the search efficiency of each group is about 95%, which indicates that conversion improved the time equilibrium at only a small loss of search efficiency. The final time consumed for searching is 19.38 hours, only 0.78 hour more than the
Table 6. Disequilibrium degrees in groups and among groups before and after conversion

                                                   In Group A (or C)   In Group B   Among groups
Time disequilibrium degree before conversion (%)   7.02                2.27         0.76
Time disequilibrium degree after conversion (%)    0.86                0.43         0.05
Reduction of disequilibrium degree (%)             6.16                1.84         0.71
Table 7. Search efficiency of different groups

Group                                      A       B       C
Search efficiency before conversion (%)    98.2    97.97   98.2
Search efficiency after conversion (%)     94.65   96.28   94.65
ideal time of 18.6 hours. Note that this ideal time does not take the time of formation change into account and so cannot be achieved in practice. We thus obtain an optimal approximate solution that improves on the existing results in [10-12].
Acknowledgments. This work was supported by the Special Science Research Programming of Wuhan (No. 200950199019-07).
References

1. Dai, F.S., Shao, X.H.: The heuristic route algorithm of multiple restrictions based on Quality of Service. In: The Fourth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Harbin, vol. 8, pp. 1441–1445. IEEE Xplore Digital Library (2008)
2. Xiao, J.T., Wang, J.: A Type of Variation of Hamilton Path Problem with Applications. In: The 9th International Conference for Young Computer Scientists, pp. 88–93. IEEE Computer Society, New York (2008)
3. Yang, Y.Q., Tong, Q., Zhan, X.Y.: The best route model of disaster inspection tour. Journal of Sichuan Teachers College (Natural Science) 20, 66–73 (1999)
4. Qin, S.Y., Gao, S.Z.: Path planning for mobile rescue robots in disaster areas with complex environments. CAAI Transactions on Intelligent Systems 5, 414–420 (2009)
5. Yin, J.H., Wu, K.Y.: Graph theory and algorithm. Science and Technology of China Press, Hefei (2004)
6. Wang, Z.Y., Zhang, Q., Dong, Y.F.: The improvement of DNA algorithm to the directed shortest Hamilton path problem. In: IEEE Xplore Digital Library, pp. 241–244 (2009)
7. Aleksandrov, L., Guo, H., Sack, J.-R.: Algorithms for Approximate Shortest Path Queries on Weighted Polyhedral Surfaces. Discrete Comput. Geom. (2009)
8. Aleksandrov, L., Maheshwari, A., Sack, J.-R.: Determining approximate shortest paths on weighted polyhedral surfaces. J. ACM 52, 25–53 (2005)
9. Wu, X.J., Wu, Z.J., Wu, Y.P., Cai, Q., Han, H.: Mathematical model of ground searching. Journal of Jianghan University (Natural Sciences) 37, 19–22 (2009)
10. Yang, H.F., Tian, Z.W.: On Ground Search Based on Complete Coverage Path Planning. Journal of Hunan First Normal College 9, 159–162 (2009)
11. Liao, B., Chen, Y., Deng, Y., Lei, Y.J.: Model of Earth Surface Search. Journal of Chongqing Vocational and Technical Institute 17, 114–116 (2008)
12. Huang, G.A., Deng, W., Lin, H.Y.: Ground search model. Journal of Guilin College of Aerospace Technology, 250–252 (2009)
AOV Network-Based Experiment Design System for Oil Pipeline-Transportation Craftwork Evaluation Guofeng Xu, Zhongxin Liu, and Zengqiang Chen Department of Automation, Nankai University, Tianjin 300071, P.R. China
[email protected],
[email protected],
[email protected]
Abstract. To describe and design experimental schemes for a complex experimental process, an activity-network (AOV-network-like) method is proposed. An activity network is an acyclic directed graph with no negative weights and with a unique source and destination. A project consisting of a set of activities and precedence relationships can be represented by an activity network, and the mathematical analysis of the network provides useful information for managing the project. The system simulates the process of pipeline transportation under laboratory conditions and tests the basic characteristics of crude oil during the process and as conditions change. The design and analysis of experimental schemes in the evaluation system is based on the activity network and achieves automatic generation of various modes of experimental schemes.

Keywords: experimental scheme, evaluation system, activity network, visual experimental program design.
1 Introduction
The characteristics of Chinese crude oil are mostly high wax content, high condensation point and high viscosity [1]. In the design, start-up, operation and optimization of pipeline transportation, the pipeline-transportation process needs to be simulated in the laboratory to test crude oil viscosity, condensation point, viscosity-temperature curve, yield value, thixotropy and other rheological parameters, providing a basis for the crude oil pipeline-transportation process [2]. However, the factors affecting the rheological properties of crude oil are numerous. In the past, simulated laboratory tests of oil pipeline transportation were conducted by manual sampling, which often biased the experimental results by introducing many interference factors [3]. The oil pipeline-transportation craftwork evaluation system was put forward against this background.

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 234–241, 2010. © Springer-Verlag Berlin Heidelberg 2010

The scheme design system, which provides an executable scheme of
experiment for the craftwork pipeline-transportation evaluation system, is the critical subsystem of this system. The experiment scheme, which contains all of the control logic and control parameters, is the key to automatic control of the experimental process. Experimental devices carry out the experiment under the guidance of the scheme and respond to a variety of emergencies; the control logic of the control system is specified by the experimental scheme. This paper describes the complexity of the experimental process in detail and, on this basis, proposes a way to describe and design an experimental scheme: an activity-network (AOV-network-like) method that also enables automatic generation of various modes of experimental schemes. The oil pipeline-transportation craftwork evaluation system is hereinafter referred to as the evaluation system.
2 Evaluation Systems
The evaluation system consists of the following components: experimental devices, experimental design and information management software (upper computer), field control server (lower computer), and database server. The experimental devices include model oil tanks, a pour point meter [4], a rheometer [5], mixing and packing equipment, sampling equipment, and water-bath temperature control equipment. The experimental design and information management software is responsible for scheme design and issuing. The field control server analyzes schemes, issues the specific execution instructions to the experimental devices, and returns experimental data [6]. The database server stores historical and real-time data. This paper mainly describes the experimental scheme design system within the experimental design and information management software (upper computer). The organizational structure of the evaluation system is shown in Figure 1.
Fig. 1. Evaluation system structure
3 Experimental Scheme Design System
To be man-readable, the experimental scheme design system adopts a visualization method. It includes the following functional modules: graphical representation module, program logic module, scheme analysis module, scheme automatic generation module, and data access module. The hierarchical relationship between the modules is shown in Figure 2 [7].
Fig. 2. Hierarchical relationship
Graphical representation of the scheme. It supports various operations including drag, copy, delete and multi-selection. A specific scheme plan is shown in Figure 3.
Fig. 3. Graphical experiment scheme
Conflict analysis of the scheme. The result of the conflict analysis of the scheme is shown in Figure 4; we can see there is no conflict in this scheme.
Fig. 4. Conflict analysis of the scheme
Automatic generation of schemes. The current experimental scheme models of the oil pipeline-transportation experiment are screening experiments and simulation experiments. On this basis, the scheme design system also provides an open design method that lets users draw a scheme directly. Schemes for screening experiments and simulation experiments can be generated automatically. A screening experiment screens the best processing mode out of a variety of processing modes to provide a basis for the processing of crude oil pipeline transportation; a simulation experiment simulates long-distance transportation of crude oil. Figure 3 is an automatically generated screening experiment scheme.
4 The Composition of Experimental Scheme

4.1 The Relationship between Device Actions
The experimental scheme is eventually transformed into many device actions, each of which contains control objectives and control parameters. In other words, an experimental scheme consists of basic information, device actions and an arrangement of actions. The structure of the evaluation system is organized according to these characteristics of the experimental scheme. Clarifying the relationships between actions is the key to describing an experimental scheme. The relationships between device actions in this system are usually of the following three kinds: predecessor-successor relation [8], parallel relation and affiliation. In fact, these three kinds of relationships exist universally in all sorts of work plans, and in real life people often use an activity network to describe a plan.

4.2 Activity Network
An activity network typically takes one of two forms: the Activity On Vertices network (AOV network) and the Activity On Edge network (AOE network).
An AOV network is a directed graph in which vertices represent activities and directed edges express the precedence relationships between activities. It is easy to understand and very intuitive [9-11]. An AOE network is a directed graph in which directed edges represent activities and vertices express events; the edges are marked with weights indicating the cost of each activity. The AOE network is an important tool in engineering estimation [12]. Both AOE and AOV networks are directed graphs and must not contain a cycle; if there is a cycle, the project cannot be completed [13].

4.3 Activity Network of Experimental Scheme
AOE and AOV networks can describe the predecessor-successor and parallel relations of experimental actions, but not the affiliation of actions. To describe the experimental scheme fully while remaining easy to understand, we modify the AOV network by adding undirected edges to indicate affiliation. As shown in Figure 5, C10 is an affiliation action: it occurs simultaneously with the action it is attached to. The network that includes all three kinds of relations is called the full relations network. (C1 − C10 represent activities.)
Fig. 5. Full relations network
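As an illustrative sketch (the activity names and edges below are assumptions, not the paper's network), the no-cycle requirement on the directed part of a full relations network can be checked with Kahn's algorithm:

```python
from collections import defaultdict, deque

# Directed precedence edges and an undirected affiliation pair (illustrative).
precedence = [("C1", "C2"), ("C1", "C3"), ("C2", "C4"), ("C3", "C4")]
affiliation = [("C4", "C10")]  # C10 runs simultaneously with C4

def is_acyclic(edges):
    """Kahn's algorithm: the AOV part is valid only if it contains no cycle."""
    succ, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    queue = deque(n for n in nodes if indeg[n] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == len(nodes)  # every node removed <=> no cycle

print(is_acyclic(precedence))                   # True
print(is_acyclic(precedence + [("C4", "C1")]))  # False: C4->C1 closes a cycle
```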
4.4 The Dynamic Analysis Process of Experimental Scheme
The implementation process of an experimental scheme can be described by the following steps: 1) Find the non-executed actions that have no unfinished predecessors; 2) Open a daemon thread for each such action; the thread sends the execute command and monitors the execution of the action; 3) When an action finishes, look for follow-up actions that no longer have unfinished predecessors and repeat step 2; then the thread exits; 4) Wait until all daemon threads have exited; the scheme is then finished.
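The four steps above can be sketched as follows (a minimal model; the action names are illustrative, and `execute` stands in for sending a command to a device and waiting for it to finish):

```python
import threading

def run_scheme(actions, predecessors, execute):
    """Dispatch every action whose predecessors have all finished (steps 1-3),
    one daemon thread per action, then wait until all finish (step 4)."""
    done, started = set(), set()
    cond = threading.Condition()

    def worker(name):
        execute(name)              # issue the command / monitor the device
        with cond:
            done.add(name)
            cond.notify_all()      # wake the dispatcher to look for successors

    with cond:
        while len(done) < len(actions):
            ready = [a for a in actions if a not in started
                     and all(p in done for p in predecessors.get(a, ()))]
            for a in ready:
                started.add(a)
                threading.Thread(target=worker, args=(a,), daemon=True).start()
            cond.wait(timeout=1.0)  # woken whenever an action finishes

# Illustrative scheme: C1 precedes C2 and C3, which both precede C4.
order = []
preds = {"C2": ["C1"], "C3": ["C1"], "C4": ["C2", "C3"]}
run_scheme(["C1", "C2", "C3", "C4"], preds, order.append)
print(order)  # e.g. ['C1', 'C2', 'C3', 'C4'] (C2/C3 may swap)
```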
5 Analysis of Experimental Scheme

5.1 Logical Checking
The full relations network itself does not constrain the relations between actions, but not every combination of device actions constitutes an operating sequence that conforms to the experiment specifications. Feasibility analysis of an experimental scheme checks whether the network complies with specific rules. For a particular system, the number of action types is limited. Assuming A1, A2, ..., An are the action types of the system, the predecessor-successor rule defined by two types Ai, Aj is denoted <Ai, Aj>, and the affiliation rules are recorded as (Ai, Aj) (1 ≤ i, j ≤ n); parallel relations need not be restricted. The collection of predecessor-successor rules is called the predecessor-successor rule base; the collection of affiliation rules is called the affiliation rule base. These two rule bases do not fully meet the requirements: for example, the relation A → B → A may be forbidden in an experimental scheme even though A → B and B → A are each allowed, so a supplementary rule base is needed on top of the two rule bases above. After building the rule bases, the relations of the activity network are checked against them to ensure the network does not violate the rules; specifically, every edge of the activity network is checked for conformance with the rule bases.

5.2 Conflict Analysis
A scheme action is implemented by one or more devices. Since parallel relations manifest as parallel execution, conflicts over the use of one device at the same time are hard to avoid. There are two causes of conflict: one is a flaw in the design of the experimental scheme, i.e., the actions conflict even when each completes in its ideal time; the other is dynamic conflict generated during scheme implementation. Because an action is not necessarily completed in its ideal time, there is always some deviation, or an exception arising in one action affects the implementation of follow-up actions and makes them conflict. The former kind of conflict is avoided by pre-analyzing and improving the scheme design; the latter requires dynamic analysis during scheme implementation and appropriate strategies to avoid the conflict or mitigate its losses. Analysis of an experimental scheme requires the start time and end time of each action, on which basis conflict situations are analyzed, so the activity network of the scheme must be traversed. The traversal methods of an activity network are depth-first and breadth-first traversal; the traversal algorithms are not detailed here. The following describes how to calculate the start time and end time of actions.
Let Tis be the start time of action Ci, Tie the end time of action Ci, and Ti the ideal duration of Ci. If Cj1, Cj2, ..., Cjk are the predecessor actions of Ci, then:

Tis = max(Tj1e, Tj2e, ..., Tjke), Tie = Tis + Ti. (1)

For Ci with no predecessor action in the activity network:

Tis = 0, Tie = Tis + Ti. (2)

The start and end times of all actions in the activity network can be calculated according to (1) and (2), providing a basis for conflict analysis. For the actions Cj1, Cj2, ..., Cjk that use device Mi, the time intervals are [Tj1s, Tj1e], [Tj2s, Tj2e], ..., [Tjks, Tjke]. If these intervals overlap, the scheme contains a conflict; this is the principle of conflict checking.

5.3 Conflict Avoidance
When a conflict occurs, its causes must be analyzed. The adjustment strategies are diverse: while satisfying the purpose of the experiment as far as possible, extend or shorten the duration of some actions, or insert wait actions at appropriate places. Dynamic conflict avoidance is very difficult to achieve, because the biggest problem is determining whether the purpose of the experiment is still met. Since experiment scheme design is off-line, conflicts can be avoided by manual modification.
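The timing rules (1)-(2) and the interval-overlap conflict check of Section 5.2 can be sketched together (durations and the dependency graph below are illustrative):

```python
def schedule_times(durations, predecessors):
    """Earliest start/end per Eqs. (1)-(2): Tis = max of predecessors' Tie
    (0 for actions with no predecessor), Tie = Tis + Ti."""
    start, end = {}, {}
    def visit(i):
        if i in end:
            return
        for j in predecessors.get(i, ()):
            visit(j)
        start[i] = max((end[j] for j in predecessors.get(i, ())), default=0.0)
        end[i] = start[i] + durations[i]
    for i in durations:
        visit(i)
    return start, end

def has_conflict(intervals):
    """Actions on one device conflict if their [start, end] intervals overlap."""
    order = sorted(intervals)
    return any(a_end > b_start for (_, a_end), (b_start, _) in zip(order, order[1:]))

durations = {"C1": 2.0, "C2": 3.0, "C3": 1.0, "C4": 2.0}
preds = {"C2": ["C1"], "C3": ["C1"], "C4": ["C2", "C3"]}
start, end = schedule_times(durations, preds)
print(start["C4"], end["C4"])  # 5.0 7.0
# If C2 and C3 share one device, intervals [2, 5] and [2, 3] overlap:
print(has_conflict([(start["C2"], end["C2"]), (start["C3"], end["C3"])]))  # True
```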
6 Conclusions
The experiment design system for oil pipeline-transportation craftwork evaluation accomplishes experimental scheme design, automatic generation of various modes of experimental schemes, and logical checking and conflict analysis under laboratory conditions. The system also guarantees the automatic control of the experiment, the accuracy of oil parameters and the management of oil pipeline-transportation quality.

Acknowledgments. This work is supported by the National High Technology Research and Development Program (863 Program) of China (2009AA04Z132) and the Specialized Research Fund for the Doctoral Program of Higher Education of China (20090031110029).
References

1. Ding, J.L., Zhang, J.J., Li, H.Y., et al.: Flow behavior of Daqing waxy crude oil under simulated pipelining conditions. Energy and Fuels 20, 2531–2536 (2006)
2. Matsuda, Y., Ohse, N.: Simultaneous Design of Control Systems with Input Saturation. International Journal of Innovative Computing, Information and Control 4, 2205–2219 (2008)
3. Wardhaugh, L.T., Boger, D.V.: Flow Characteristics of Waxy Crude Oils: Application to Pipeline Design. AIChE Journal 37, 871–885 (1991)
4. Li, H.Y., Zhang, J.J., Yan, D.F.: Correlations between the Pour Point/Gel Point and the Amount of Precipitated Wax for Waxy Crudes. Petroleum Science and Technology 23, 1313–1322 (2005)
5. Zhang, J.J., Zhang, F., Huang, Q.Y., Yan, D.F.: Experimental Simulation of Effect of Shear on Rheological Properties of Beneficiated Waxy Crude Oils. Journal of Central South University of Technology (English Edition) 14, 108–111 (2007)
6. Zhang, Z.R., Li, J.X., Liu, Z.X., Chen, Z.Q.: Research on Automatic Control Simulation Platform of Oil Pipeline-transportation Technology Evaluation System. ICIC Express Letters 3, 633–638 (2009)
7. Damba, A., Watanabe, S.: Hierarchical Control in a Multiagent System. In: Second International Conference on Innovative Computing, Information and Control, pp. 437–440. IEEE Computer Society, Los Alamitos (2007)
8. Ge, F., Wu, N.: A Transformation Algorithm of Ladder Diagram into Instruction List Based on AOV Digraph and Binary Tree. In: TENCON 2006 - 2006 IEEE Region 10 Conference, pp. 188–191. IEEE Press, New York (2006)
9. Yang, H.H., Chen, Y.L.: Finding the Critical Path in an Activity Network with Time-switch Constraints. European Journal of Operational Research 120, 603–613 (2000)
10. Wang, M.F.: Research of Application Constructing Model Based on AOV Network. Computer Engineering and Applications 43, 85–101 (2007)
11. Mauerkirchner, M.: Event Based Simulation of Software Development Project Planning. In: Moreno-Díaz, R., Pichler, F. (eds.) EUROCAST 1997. LNCS, vol. 1333, pp. 527–540. Springer, Heidelberg (1997)
12. Li, T.Z.: A Novel Algorithm for Critical Paths. In: 2009 WRI World Congress on Computer Science and Information Engineering, pp. 226–229. IEEE, Piscataway (2009)
13. Tanaka, Y., Konishi, Y., Araki, N., Ishigaki, H.: Control of container crane by binary input using mixed logical dynamical system. In: 2008 International Conference on Control, Automation and Systems, pp. 13–17. IEEE, Piscataway (2008)
Model and Simulation of Slow Frequency Hopping System Using Signal Processing Worksystem Yuling Li Hebei Polytechnic University, Modern Technology and Education Centre, Tangshan, Hebei Province, China
[email protected]
Abstract. A research scheme for a frequency hopping (FH) communication system using the Signal Processing Worksystem (SPW) is presented. A slow frequency-hopping communication system is designed and modeled using SPW, and the designed system is then tested under broadband noise and partial-band noise interference channels for its bit error rate performance. The results show that this scheme is helpful in FH communication system design.

Keywords: FH, SPW, modeling, anti-jamming.
1 Introduction

Frequency-hopping communication, because of its strong anti-jamming capability, has been widely used in military communication systems. Frequency-hopping system performance is affected by many factors, so designing a low-cost, high-performance frequency-hopping communication system is an important research topic. Applying computer simulation technology to the study and design of frequency-hopping communication systems makes it possible to find system solutions quickly, speeding up the development of the anti-jamming systems and equipment of the PLA (People's Liberation Army) to meet the demands of future high-tech wars. On this basis, this paper models a frequency-hopping wireless communication system on CoWare's SPW simulation platform and conducts performance simulation.
2 Slow Frequency Hopping Communication System Modeling

2.1 Slow Frequency Hopping Wireless Communication Link Design

The designed slow frequency-hopping wireless communication link contains: frequency-hopping pattern, channel coding, data modulation and demodulation, frequency hopping and de-hopping, frequency-hopping synchronization and channel modules. The overall framework of the system is shown in Figure 1. Each part of the system can have multiple implementations, and systems composed of different elements perform differently. A common simulation platform can be designed by modeling the typical algorithms of the various parts,

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 242–249, 2010. © Springer-Verlag Berlin Heidelberg 2010
Fig. 1. Block diagram of research model
so that different options for each part can be simulated and compared. This study focuses only on a slow frequency-hopping system with particular algorithms. To improve simulation efficiency, the simulation model is reasonably simplified by adopting the equivalent low-pass principle, and multi-sampling-rate simulation is used.

2.2 The Modem Subsystem

The MSK modulation used in this system is pre-coded MSK [4]; the MSK modulation model built in SPW is shown in Figure 2(a). The model input is the transmitted data DATA, and the output can be the MSK baseband signal or the MSK modulated signal. The signal waveforms at each point are shown in Figure 2(a) and (b). In Figure 2(a), aI(t) and aQ(t) are the even and odd bits of DATA, and I = aI(t) cos(πt/2Tb), Q = aQ(t) sin(πt/2Tb). The MSK baseband waveform is shown as the
MSK_baseband trace in Figure 2(b), and the output waveform after carrier modulation is shown as MSK in Figure 2(b). It is worth noting that the MSK waveform is not the direct frequency-shift keying of DATA but the pre-encoded MSK signal; that is, the differential encoding of this signal is DATA. This does not affect the characteristics of the MSK signal, and the demodulation side does not require differential decoding, so the correct DATA can be demodulated. The MSK demodulation module uses the correlation receiver method, shown in Figure 2(c). The system's transmission channel is AWGN, and the bit error rate obtained from simulation is the same as that of QPSK. The discussion of MSK performance is verified in [4].

2.3 Hopping Modulator Subsystem

The frequency-hopping pattern is generated by computing the key (Key) and time information (TOD) according to a certain pseudo-random algorithm; the result then indexes the frequency table to select the hopping frequency. The number of frequencies used in the system performance simulation is 32. The frequency-hopping modulator consists of two parts: the frequency-hopping sequence generator and
244
Y. Li
(a) I, Q two-way signal waveform
(b) Signals, MSK and MSK baseband waveform
(c) Modulation and demodulation Fig. 2. The model and waves of MSK modulation and demodulation
frequency synthesizer. This design uses a PN sequence generator that produces pseudo-random control words for the 32 frequencies by converting the binary sequence into 32-ary words. Each frequency control word sets the output frequency of the signal generator, producing the hopping frequencies, as shown in Figure 5. 2.4 Channel Encoding and Decoding Subsystem RS codes can effectively resist partial-band interference and multi-tone interference. For slow frequency hopping, an interleaver must be added to improve system performance. Therefore, this system uses two schemes that combine an RS coder with an interleaver to achieve strong error control. The encoding and decoding are easy to implement using the standard library that comes with SPW.
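As a rough illustration of this frequency selection (a plain-Python analogue, not the SPW model itself), the sketch below groups the output of a 5-stage PN sequence generator into 5-bit control words, each indexing one of the 32 hop frequencies. The generator taps, base frequency and channel spacing are all illustrative assumptions.

```python
# Illustrative sketch of PN-driven hop selection: a 5-stage LFSR
# m-sequence is grouped into 5-bit words, each indexing one of 32
# hop frequencies. Taps, f0 and spacing are assumptions.

def pn_sequence(taps=(5, 3), state=0b00001, nbits=25):
    """Generate nbits of a 5-stage LFSR sequence (taps are 1-indexed)."""
    bits = []
    for _ in range(nbits):
        bits.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1   # XOR of tapped stages
        state = (state >> 1) | (fb << 4)   # shift in the feedback bit
    return bits

def hop_frequencies(bits, f0=100e3, spacing=25e3):
    """Group bits into 5-bit control words and map each to a frequency."""
    freqs = []
    for i in range(0, len(bits) - 4, 5):
        word = 0
        for b in bits[i:i + 5]:
            word = (word << 1) | b         # 5-bit word in 0..31
        freqs.append(f0 + word * spacing)  # index into a 32-entry table
    return freqs

bits = pn_sequence()
print(hop_frequencies(bits)[:5])
```

In the actual system the control word would be clocked once per hop; here the whole sequence is generated up front for clarity.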
Model and Simulation of Slow Frequency Hopping System Using SPW
245
2.5 Establishment of the Noise Module A) Broadband noise interference Rated-power broadband noise launched by the jammer across the whole band is known as wideband noise interference [5]. In this environment the noise power spectral density within the band changes from N0 to N0 + NJ, which is equivalent to increasing the original Gaussian white noise power spectral density. Therefore, the performance of the FH/MSK system without channel coding under wideband noise interference is equivalent to its performance under Gaussian white noise. Gaussian white noise can be produced by the COMPLEX WHITE NOISE module in SPW. When the transmitted signal power is 1, the noise module output power is [6]

Output power = sample frequency / (2 × bw × Eb/N0) .   (1)
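Equation (1) can be evaluated directly. The sketch below uses illustrative values for the sampling frequency, bandwidth and Eb/N0 rather than the paper's actual simulation settings.

```python
# Noise-module output power per Eq. (1); the numeric values below are
# illustrative assumptions, not the paper's actual parameters.

def noise_output_power(sample_freq, bw, ebn0_db):
    """Output power = f_s / (2 * bw * Eb/N0), with Eb/N0 given in dB."""
    ebn0 = 10 ** (ebn0_db / 10.0)        # convert dB to linear ratio
    return sample_freq / (2.0 * bw * ebn0)

# Example: 1 MHz sampling rate, 25 kHz bandwidth, Eb/N0 = 10 dB
p = noise_output_power(1e6, 25e3, 10.0)
print(p)  # 2.0
```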
B) Partial-band noise interference [5, 7, 8] Noise power J concentrated by the jammer in a part of the entire working band W is known as partial-band noise interference. The ratio of the interference bandwidth WJ to the frequency hopping bandwidth W is called the interference factor ρ = WJ / W. The partial-band noise power spectral density is N'J = J / WJ = J / (ρW). When the signal hops into the jammed band, the noise power spectral density is Nn = N0 + N'J; otherwise it is Nn = N0.
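The effective per-hop noise density follows directly from these formulas. The helper below is a sketch with illustrative parameter values.

```python
def partial_band_psd(n0, jam_power, hop_bw_total, rho, in_jammed_band):
    """Effective noise PSD seen by one hop under partial-band jamming.

    N_J' = J / W_J = J / (rho * W); the PSD is N0 + N_J' when the hop
    falls in the jammed fraction rho of the band, otherwise N0 alone.
    """
    nj = jam_power / (rho * hop_bw_total)   # N_J' = J / (rho * W)
    return n0 + nj if in_jammed_band else n0

# Illustrative numbers: N0 = 1e-6 W/Hz, J = 0.1 W, W = 1 MHz, rho = 0.2
print(partial_band_psd(1e-6, 0.1, 1e6, 0.2, True))   # N0 + N_J'
print(partial_band_psd(1e-6, 0.1, 1e6, 0.2, False))  # N0 alone
```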
(a) Partial-band interference prototype
(b) The location of partial-band interference module in the system Fig. 3. Partial band noise model
Interference of a given bandwidth can be generated by filtering out the unwanted frequency components of Gaussian white noise. Generating noise over the full range of ρ directly with a filter requires a very high sampling rate; this approach performs poorly and also slows down the simulation. Since the partial-band interference seen by a frequency-hopping system is equivalent to noise of power spectral density N'J present for a fraction ρ of the time before the waveform is hopped, the bandwidth of the partial-band module designed in this system is the single-hop bandwidth. The specific design is shown in Figure 3(a). The noise module is not placed in the channel section; instead, as shown in Figure 3(b), the output of the noise module is multiplied by a random sequence (in which the proportion of "1" symbols is ρ) and then superimposed on the MSK modulated signal, which represents partial-band interference noise in the frequency hopping system. C) Multi-tone interference noise [5, 7, 9] Instead of jamming a continuous band, the jammer spreads interference signals over a number of discrete frequency points; this is known as multi-tone interference noise. The system uses a single jamming tone per frequency-hopping channel, so that the jammer power can be distributed over more channels. Because the number of hopping frequencies used in this simulation is 32, we chose 3 interference tones across the entire frequency-hopping band, i.e., three frequencies from the hopping list with random spacing and phase. The SPW model is shown in Figure 4(a). Its output is passed through an FFT, from which the multi-tone interference spectrum in Figure 4(b) is obtained.
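The multiplication of the noise module's output by a random sequence with a proportion ρ of "1" symbols can be sketched outside SPW as follows. The gating here is per sample rather than per hop, and all parameter values are illustrative assumptions.

```python
import random

def partial_band_noise(n, rho, sigma, seed=0):
    """Gate complex white Gaussian noise with a Bernoulli(rho) sequence;
    the gated (non-zero) samples model hops landing in the jammed band."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        gate = 1.0 if rng.random() < rho else 0.0  # "1" with probability rho
        out.append((gate * rng.gauss(0, sigma),    # in-phase component
                    gate * rng.gauss(0, sigma)))   # quadrature component
    return out

samples = partial_band_noise(1000, 0.3, 1.0)
active = sum(1 for i, q in samples if (i, q) != (0.0, 0.0))
print(active / len(samples))  # fraction of jammed samples, roughly rho
```

In the simulation model this gated noise would then be superimposed on the MSK modulated signal, as Figure 3(b) describes.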
(a) Multi-tone interference model
(b) Multi-tone interference frequency spectrum Fig. 4. Multi-tone noise model
2.6 Total System
This work constructs slow frequency hopping wireless communication system platforms with encoding and decoding under the different interference modes. The basic principles are the same for all of them. The coded frequency-hopping system model under partial-band interference is shown in Figure 5.
Fig. 5. Slow frequency hopping communication system model
3 Simulation Results 3.1 Performance of Anti-wideband Interference
Figure 6(a) compares the bit error rates of the coded and uncoded FH/MSK systems under wideband noise interference of power J distributed uniformly across the hopping bandwidth Wss. As can be seen, at a bit error rate of 10^-3, channel coding improves performance by about 8 dB, which results from the coding gain. 3.2 Performance Against Partial-Band Interference
When Eb/N0 is 20 dB, the uncoded system's bit error rate curves under partial-band interference are shown in Figure 6(b). For the uncoded FH/MSK system to achieve a 10^-4 bit error rate, the smaller ρ becomes, the greater the signal-to-interference ratio required. Figure 6(b) also shows that for different signal-to-interference ratios there are different values of ρ that maximize the bit error rate. The trajectory of ρ0 shown in the figure gives the relationship between the average error rate Pb and Eb/NJ under worst-case partial-band interference. As shown in Figure 6, the general behavior under partial-band interference with channel coding is the same as in the uncoded case. For a given ρ, the flat region of the curve is similar to the uncoded bit error rate, while the drop region is steeper and appears at lower signal-to-interference ratios.
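The existence of a worst-case interference factor ρ0 can be illustrated with a textbook-style BER expression for an uncoded coherent system under partial-band jamming: with probability ρ a hop sees PSD N0 + NJ/ρ, otherwise N0 alone. This closed-form sketch is an illustrative assumption, not the paper's simulated curve.

```python
import math

def q(x):
    """Gaussian tail function Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_partial_band(ebn0_db, ebnj_db, rho):
    """Illustrative uncoded coherent BER under partial-band jamming:
    jammed hops see an effective Eb/N of 1/(1/ebn0 + 1/(rho*ebnj))."""
    ebn0 = 10 ** (ebn0_db / 10)
    ebnj = 10 ** (ebnj_db / 10)
    jammed = q(math.sqrt(2 / (1 / ebn0 + 1 / (rho * ebnj))))
    clear = q(math.sqrt(2 * ebn0))
    return rho * jammed + (1 - rho) * clear

# Sweep rho to locate the worst-case interference factor rho_0
rhos = [i / 100 for i in range(1, 101)]
worst = max(rhos, key=lambda r: ber_partial_band(20, 10, r))
print(worst)  # a small rho maximizes the BER at this jamming level
```

This reproduces the qualitative behavior of Figure 6(b): for a fixed Eb/NJ, the BER peaks at a ρ well below 1.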
(a) Bit error rates of the uncoded and coded FH/MSK systems under broadband jamming
(b) Partial-band jamming of the uncoded FH/MSK signal
(c) Partial-band jamming of the coded FH/MSK signal Fig. 6. Performance of the FH/MSK system under broadband and partial-band noise jamming
4 Conclusion This article completes the modeling and simulation of a slow frequency hopping wireless communication system using the Signal Processing Worksystem (SPW) and provides a simulation basis for the hardware implementation of slow frequency hopping tactical radios. The simulation system is intuitive, flexible and scalable. By adding further coding schemes, modulations, spread-spectrum
modes, channel environments, and further hopping schemes such as chaotic frequency hopping sequences, the simulation system will provide a good basis and guidance for the design and implementation of the actual system.
References

1. Matsumoto, T., Higashi, A.: Performance Analysis of RS-Coded M-ary FSK for Frequency-Hopping Spread Spectrum Mobile Radio. IEEE Trans. on Vehicular Tech. 41(3) (August 1992)
2. Kurpiers, A.F., Danesfahani, G.R., Jeans, T.G.: Simulation of Spread Spectrum Modem by SPW. In: IEE Colloquium on Communications Simulation and Modeling Techniques, September 28 (1993)
3. Jeruchim, M.C.: Simulation of Communication Systems: Modeling, Methodology, and Techniques. Kluwer Academic Publishers, New York (2000)
4. Sklar, B.: Digital Communications: Fundamentals and Applications, 2nd edn. Prentice-Hall, Inc., Englewood Cliffs (2001)
5. Poisel, R.A.: Modern Communications Jamming Principles and Techniques. Artech House Inc., Norwood (2004)
6. Guo, L., Mu, X., Zhu, C., Song, H.: An Adaptive Frequency Hopping System Based on the Detection of Spectrum Holes. In: IEEE International Symposium on KAM 2008, pp. 498–501 (2008)
7. Binhong, D., Shaoqian, L., Fengqi, S.: Designing a Differential Frequency Hopping System with Hop Variable Frequency Transition Function. In: IEEE International Conference on WICOM 2009, pp. 1–4 (2009)
8. Li, T., Ling, Q., Ren, J.: Spectrally Efficient Frequency Hopping System Design for Wireless Networks. In: IEEE International Conference on WASA 2007, pp. 244–248 (2007)
9. Zhou, Z., Li, S., Cheng, Y.: Designing Frequency Transition Function of Differential Frequency Hopping System. In: IEEE International Conference on CMC 2010, vol. 2, pp. 296–300 (2010)
10. Hiren, G., Tayem, N., Pendse, R., Sawan, M.E.: Delay Estimator for Frequency Hopping System using Rank-Revealing Triangular Factorization. In: IEEE International Conference on VTC Spring 2009, pp. 1–4 (2009)
Insight to Cloud Computing and Growing Impacts Chen-shin Chien1 and Jason Chien2 1
Department of Industrial Education, National Taiwan Normal University, Taipei County, Taiwan
[email protected] 2 China University of Science and Technology Computing Center, China University of Science and Technology, Taipei County, Taiwan
[email protected]
Abstract. This paper provides insight into cloud computing and its impacts, and discusses various issues that business organizations face while implementing cloud computing. Further, it recommends various strategies that organizations need to adopt while migrating to cloud computing. The purpose of this paper is to develop an understanding of cloud computing in the modern world and its impact on organizations and businesses. Initially, the paper provides a brief description of the cloud computing model and its purposes. Further, it discusses various technical and non-technical issues that need to be overcome in order for the benefits of cloud computing to be realized in corporate businesses and organizations. It then provides various recommendations and strategies that businesses need to work on before stepping into new technologies. Keywords: Cloud computing, grid computing, cut cost, application models, SME service.
1 Introduction Everyone has an opinion on what cloud computing is. It can be the ability to rent a server or a thousand servers and run a geophysical modeling application on the most powerful systems available anywhere. It can be the ability to rent a virtual server, load software on it, turn it on and off at will, or clone it ten times to meet a sudden workload demand. It can be storing and securing immense amounts of data that are accessible only by authorized applications and users [13]. Cloud computing is a broad concept of using the Internet to allow people to access technology-enabled services. It is named after the cloud representation of the Internet on a network diagram. Cloud computing is a reincarnation of the centralized data processing and storage once embodied by the mainframe, a large computer used by large organizations for bulk data processing. In a broader context, cloud computing is a large network of computers used by large organizations to provide services to smaller ones and to individuals. Cloud computing is sometimes also termed grid computing or network computing. Cloud computing is a resource delivery and usage model: it means obtaining resources via the network "on demand" and "at scale" in a multi-tenant environment. The network providing the resources is called the "Cloud". 'What goes R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 250–257, 2010. © Springer-Verlag Berlin Heidelberg 2010
on in the cloud manages multiple infrastructures across multiple organizations, consisting of frameworks providing mechanisms for self-healing, self-monitoring and automatic reconfiguration', states Kevin Hartig [3].
Fig. 1. Cloud Computing
The cloud, being a virtualization of resources, manages itself. There are people who maintain and keep the hardware, operating systems and networking in proper order, but from the user's or application developer's perspective only the cloud is referenced. Cloud computing is the third revolution of the IT industry, following the personal computer revolution and the Internet revolution. Cloud computing matters to us because cloud computing and web-based applications are the future of computing with which all of us will interact. In our daily life, we come across a number of vendors providing cloud computing services, such as Gmail, Google, Yahoo, MSN, etc. [4]. Web-based office applications and online photo and document sharing services include Flickr and Zoho. Large-scale web-based storage and computing power applications include services like Google App Engine and Amazon Web Services. In a world where almost anyone and anything can connect to the Internet, the exponential increase in the volume of information and connected devices creates a dilemma: IT complexity increases, as does the demand for simplicity. Organizations are facing accelerating business change, global and domestic competitive pressure and social responsibility demands. They are striving to reach their full potential by rapidly implementing innovative business models while simultaneously lowering the IT barriers to driving innovation and change. These challenges call for a more dynamic computing model that enables rapid innovation for applications, services and service delivery. Cloud computing can be one element of such a model. The underlying technologies associated with cloud computing can also be part of an innovative approach focused on creating a more dynamic enterprise, as applications and the services they support are no longer locked to a fixed underlying infrastructure and can adjust quickly to change. These views were expressed in an official report [14].
Through cloud computing, we have the ability to scale to meet changing user demands quickly, usually within minutes. Cloud computing is environmentally friendly, task oriented and requires virtually no maintenance. Data is usually not lost in the event of a disaster. It is easy to build one's own web-based applications that run in the cloud. In cloud computing, one has the benefit of separating application code from physical resources and can use external assets to handle peak loads. Like any other revolution, cloud computing is the result of technological progress and business model transition. The major driving factors include [3]:

- the fast development of virtualization technology and its market;
- the fast development of hardware, such as CPUs and network devices;
- the fast development of broadband networks;
- the fast increase in corporate IT infrastructure requirements;
- the fast change and time-to-market requirements of Internet applications;
- the economic crisis forcing companies to cut costs.
2 Issues to Be Overcome Cloud computing is easiest to adopt when there is a considerably flexible approach to phasing it in and relating it to other applications. The biggest challenge in cloud computing may be the fact that there is no standard or single architectural method. Therefore, it is best to view cloud architectures as a set of approaches, each with its own examples and capabilities. Described below are some of the most common hurdles that need to be overcome in an organization. 2.1 Technical Issues A. Security One of the significant technical hurdles that must be overcome for cloud computing benefits to be realized is security. Reliability and security concerns in an organization may need mitigation, and applications may need to be re-architected [7]. Business perceptions of increased IT flexibility and effectiveness will have to be properly managed. In most cases, network security solutions are not architected to keep up with the movement required for the cloud to deliver cost-effective solutions. With their business information and critical IT resources outside the firewall, customers worry about their vulnerability to attack. B. Technical Hardware and Software Expertise Users need equipment and resources to make cloud computing services more relevant and more tailored to the needs of their businesses. Proper manpower is needed to develop applications that suit a business's needs. The availability of the physical hardware and software components must be ensured to realize the benefits of cloud computing. According to Dion Hinchcliffe [4, 5], wider technical fluency and expertise will have to be achieved in the selected cloud computing platforms, which tend to emphasize technologies such as open source or newer web-style programming languages and application models. C. Non-Technical Issues Apart from the technical issues, there are several non-technical issues that require equal attention and need to be resolved.
Some of the significant non-technical hurdles
to the adoption of cloud computing services by large enterprises are financial, operational and organizational issues [7]. D. Financial Issues According to McKinsey & Co.'s report "Clearing the Air on Cloud Computing" [7], cloud computing can cost twice as much as in-house data centers. This poses a problem for large enterprises, but actually works to the advantage of small and midsize companies and businesses. McKinsey further states, "Cloud offerings currently are most attractive for small and medium-sized enterprises…and most customers of clouds are small businesses." The reason is that smaller companies do not have the option of building themselves giant data centers. Greenberg notes, "Few if any major corporations are looking to replace their data centers with a cloud…the 'server-less company' is one that's only feasible for startups and SMBs." Cost variability is an important aspect of cloud computing. When one considers cost transparency, scalability and cost variability, a new challenge and opportunity for organizations arises. 2.2 Operational and Organizational Issues Organizations need to define standards and workflows for authorizations. A strategy for the consumption and management of cloud services, including how the organization will deal with semantic management, security and transactions, needs to be created. One should evaluate cloud providers using validation patterns similar to those used with new and existing data center resources. According to Vinita Gupta [2], before deciding to switch over to cloud computing, one should fully understand its concept and implications, particularly whether to maintain an IT investment in-house or buy it as a service. The organization has to look at the overall return on investment, as it cannot simply rip out and replace an existing infrastructure. Managers have to look at the short-term costs as well as the long-term gains.
Service levels offered by different vendors need to be analyzed in terms of uptime, response time and performance. Finally, a proof of concept should be created, which can serve several purposes, including getting an organization through the initial learning process and providing proof points on the feasibility of leveraging cloud computing resources. Internal Issues That a Business Might Face While switching to newer technologies, an organization could face many internal issues. Some of them are explained as follows: A. Distributed business levels The distributed nature of the business and the level of consistently reliable computer networks in an organization can pose a challenge when switching from traditional infrastructure to cloud computing. The case for an organization to adopt cloud computing is similar to the decision to own or rent a house. An organization that has spent a good amount of money on its own storage and security systems will have a tough time deciding to migrate to a dedicated environment. B. Complexity of applications The complexity of the applications and the technology infrastructure depends on how the organization has adopted IT. If this has evolved from the deployment of technologies over a period of time, then the complexity level will certainly be high, and in such a case transformation to cloud computing would be difficult. Not
everything falls under cloud computing, as each organization has its own specific requirements, whether for functionality, performance, or even security and privacy, that may be unique to the organization and not supported by the public cloud [2]. It is very difficult to adapt to cloud computing in an organization where highly customized or home-grown applications are used. The availability of a robust network and information security is also a challenge. C. Cost The cost of process change is another hurdle in the transformation. Conventional IT organizations will have to engage with internal customers as well as IT service providers on a different plane. Most importantly, the culture and mindset will have to change.
3 Security and Reliability Issues Bryan Gardiner [1] notes that a number of concerns arise for large corporate organizations regarding the adoption of cloud computing. In most cases, they are worried about lost or stolen data. Hard-liners see the very concept of the cloud as a deeply unreliable security nightmare. In a survey conducted by the research firm IDC, almost 75 percent of IT executives reported that security was their primary concern, followed by performance and reliability [1], i.e., how enterprise data is safeguarded in a shared third-party environment. The pace of future uptake is heavily dependent on how soon these issues are resolved, and on when cloud providers will be able to obtain official certifications of their security practices from independent third parties. In spite of the work in progress, a large number of other research issues remain, and part of the purpose of this paper is to help define a longer-term research agenda. Key issues include:

- How do consumers know what services are available, and how do they evaluate them?
- How do consumers express their requirements?
- How are services composed?
- How are services tested?
- What is the appropriate, high-integrity service delivery infrastructure?
- How must consumers' data be held to enable portability between different service suppliers?
- What standards can be used, or must be defined, to enable portability of service?
- What will be the impact of branded services and marketing activities (high quality vs. low price)?
- How can organizations benefit from rapidly changing services, and how will they manage the interface with business processes?
- How will individuals perceive and manage rapidly changing systems?
- What is the limit to the speed of change?
- What payment and reward structures will be necessary to encourage SME service suppliers?
- What will be the new industry models and supply chain arrangements?
- How can we evaluate the research outcomes?
How should organizations plan for the new technologies?
Cloud computing is inevitable, and it is a force that organizations and businesses need to come to terms with quickly. As the economic and social motivation for cloud computing is high, businesses that are heavily dependent on computing resources need to take cautionary measures and make the right decisions at the right time to avoid ending up with unproductive solutions while migrating to new technologies. According to Davis Robbins [10], an organization should always make sure it knows what it is paying for and should pay careful attention to the following issues: service levels, privacy matters, compliance, data ownership and data mobility. A number of cloud computing vendors may be hesitant to commit to consistency of performance for an application or transaction. One has to understand the service levels to expect regarding data protection and speed of data recovery. In large corporate organizations, privacy matters a lot: with someone else hosting and serving the organization's data, that data could be approached by someone from within or outside the cloud without one's knowledge or approval. All regulations applying to the business must be properly reviewed, and cloud services and vendors must meet the same level of compliance for data stored in the cloud. One has to make sure that, if a cloud relationship is terminated, the data will be returned to the organization, and in what format. Development and test activities should be carried out prior to switching completely to cloud services. This allows the organization to reduce capital spending and related data center costs while increasing speed and agility. In addition, one can also evolve the internal infrastructure towards a more cloud-like model. One needs to identify which services can reside in the cloud and which should remain internal to the business.
Systems and services core to the business should be identified, and a sourcing strategy should be determined to achieve low cost, scalability and flexibility. This should include all the necessary protections, such as data ownership, mobility and compliance. Businesses must keep costs down to stay competitive while at the same time investing in new ideas that will provide compelling and attractive new products and services to their customers. Businesses should be able to compare their traditional computing cost structure to the utility pricing model common in the cloud computing business. Other hidden costs such as management, governance and transition costs, including the hiring of new staff, must also be considered. The amount of time required to recoup the investment in a transition to cloud computing needs to be analyzed. Company executives need to discuss important issues when bringing cloud computing into the picture, such as the effect on service-oriented architecture strategy, the impact on disaster recovery plans, policies regarding backups and legally mandated data archives, the risk profile of using cloud computing services, and mitigation strategies.
4 Switch over to New Technologies Switching to newer technologies such as cloud computing would be best when the processes, applications, and data are largely independent. When the points of integration in a business are well defined, embracing cloud services is effective. In an organization where a lower level of security will work just fine and the core internal enterprise architecture is healthy, conditions are favorable for the organization to switch to newer technologies. A business which requires Web as the desired platform
to serve its customers, and wants to cut costs while benefiting from new applications, can achieve the best competitive advantage in the market. Therefore, to compete effectively in today's world, executives need every edge they can get, from low cost to speed and employee productivity. By tapping into the right cloud capabilities, companies can quickly enter new markets and launch new products or services in existing markets. When demand grows, they can quickly scale up, and when opportunities dry up, they can just as quickly scale down with minimal waste of time and capital. By using cloud-based solutions such as crowd-sourcing, companies can open up innovation to more employees, customers and partners.
5 Conclusion and Future Work Cloud computing is a fascinating realm that makes it easier to deploy software and increase productivity. However, some technical and non-technical realities make security somewhat difficult to deliver in a cloud. The cloud presents a number of new challenges in data security, privacy control, compliance, application integration and service quality. It can be expected that over the next few years these problems will be addressed. Therefore, to be successful, companies should take small incremental steps towards this new environment so they can reap benefits in applicable business situations and learn to deal with the associated risks. In general, cloud computing will act as an accelerator for enterprises, enabling them to innovate and compete more effectively. Under the current economic conditions, executives need to rethink their strategies and seek cost-effective solutions, using cloud services for the jobs that suit them. Today's infrastructure clouds such as Amazon EC2 offer a relatively inexpensive and flexible alternative to buying in-house hardware. They are also beneficial for computation-intensive jobs, such as data cleansing, data mining, risk modeling, optimization and simulation. Businesses and enterprises should now take steps to experiment, learn and reap some immediate business benefits by implementing cloud computing in their organizations. Unless they seriously consider making the cloud a part of their strategy, they will find themselves disadvantaged when competing in today's increasingly multi-polar marketplace. The number of mobile applications is growing sharply. However, the limited processing power, battery life and data storage of mobile phones constrain the growth of application software for the mobile industry. The adoption of cloud computing should make mobile applications more sophisticated and make them available to a broader audience of subscribers.
References

1. Gardiner, B.: The Future of Cloud Computing: A Long-Term Forecast (March 09, 2009), http://www.portfolio.com/views/columns/dualperspectives/2009/03/09/A-Long-Term-Forecast (retrieved June 03, 2009)
2. Gupta, V.: Will Cloud Computing really take off? (November 17, 2008), http://www.expresscomputeronline.com/20081117/management01.shtml (retrieved June 03, 2009)
3. Hartig, K.: What is cloud computing? (April 15, 2009), http://cloudcomputing.syscon.com/node/579826 (retrieved June 03, 2009)
4. Hinchcliffe, D.: Enterprise cloud computing gathers steam (August 01, 2008), http://blogs.zdnet.com/Hinchcliffe/?p=191 (retrieved June 03, 2009)
5. Hinchcliffe, D.: Cloud computing: A new era of IT opportunity and challenges (March 03, 2009), http://blogs.zdnet.com/Hinchcliffe/?p=261&tag=rbxccnbzd1 (retrieved June 03, 2009)
6. Linthicum, D.S.: Cloud Computing & Enterprise Architecture (2009), http://www.slideshare.net/Linthicum/cloud-computingand-enterprise-architecture (retrieved June 03, 2009)
7. Lublinsky, B.: Cleaning the air on Cloud Computing (April 22, 2009), http://www.infoq.com/news/2009/04/air (retrieved June 03, 2009)
8. MacVitti, L.: 5 Steps to Building a Computing Infrastructure (April 21, 2009), http://www.f5.com/news-pressevents/news/2009/20090421.html (retrieved June 03, 2009)
9. King, R.: How cloud computing is changing the world. Business Week (April 8, 2008), http://www.businessweek.com/technology/content/aug2008/tc2008082_445669.htm?chan (retrieved June 03, 2009)
10. Robbins, D.: Cloud Computing Explained (March 15, 2009), http://www.pcworld.com/businesscenter/article/164933/cloud_computing_explained.html (retrieved June 03, 2009)
11. Swaminathan, K.S., Daugherty, P., Tobolski, J.: What the Enterprise needs to know about Cloud Computing. Accenture Technology Labs (2009)
12. http://www.smartertools.com/blog/archive/2008/11/20/cloud-computing-challenges-benefits-and-the-future.aspx
13. Introduction to Cloud Computing Architecture, White Paper, Sun Microsystems, 1st edn. (June 2009)
14. A technical white paper on IBM's New Enterprise Data Centre: http://ibm.com/datacenter
Using Semantic Web Techniques to Implement Access Control for Web Service Zhengqiu He, Kangyu Huang, Lifa Wu, Huabo Li, and Haiguang Lai Institute of Command Automation, PLAUST, Nanjing, China
[email protected]
Abstract. Access control is a challenging problem for web service due to its open and distributed nature, and it has not yet been addressed properly. In this paper, we show how semantic web technologies can be used to build a flexible access control system for web service. The role-based access control model is followed and extended with credentials. The access control model is represented by an OWL-DL ontology, and specific semantic rules are constructed to implement functions such as dynamic role assignment, separation of duty constraints and role hierarchy reasoning. These semantic rules can be verified and executed automatically by the reasoning engine, which simplifies the definition of access control policies and enhances their interoperability. A prototype implementation is also provided to validate the proposal. Keywords: web service, RBAC, access control, OWL, SWRL.
1 Introduction

Web service is a new service-oriented computing paradigm that has become one of the preferred solutions for areas such as e-business, SOA (Service-Oriented Architecture) and cloud computing. However, in web service, requests and responses are conveyed by SOAP (Simple Object Access Protocol), which can pass unhindered through firewalls. If SOAP messages contain false claims or malicious content, they can lead to unauthorized access to, or even damage of, the internal applications. In addition, some web services may be open only to a specific group of users, so that only requesters who satisfy the corresponding conditions are permitted to access them. Reliable access control is therefore a fundamental requirement for the acceptance of web service by organizations [1]. In web service, the interaction is usually between remotely located parties who may have no knowledge of each other. Access control for web service is thus required to cross the borders of security domains and address the movement of unknown users across those borders so that access to services can be granted [2, 3]. Current access control approaches like role-based access control (RBAC) [4] generally assume that identity is established and assign roles statically based on the identities of users, which is restrictive in the web service environment [5, 9]. To address the cross-domain movement of users, a new approach called attribute-based access control (ABAC) has been proposed [6, 7]. The basic idea of ABAC is that it
defines permissions based on three types of attributes (subject, resource and environment). However, the specification and maintenance of ABAC policies has turned out to be complex and error-prone, especially when heterogeneous attribute schemes are involved [7]. In addition, ABAC assigns permissions directly to users rather than using roles to abstract permissions, which violates the principles of scalability and manageability that motivate developers to use RBAC [8]. The aim of this paper is to study the relationship between semantic web technologies and the RBAC model, and to build a flexible access control system for web service. We follow the NIST Standard RBAC model [4] and borrow ideas from ABAC: role assignment is performed based on a credential and its attributes provided by the user, rather than on identity. These credentials can be an X.509 certificate, a Kerberos ticket, a public key, etc., which are issued and signed by trusted authorities and can be accepted across domains. More importantly, the semantic web specifications OWL (Web Ontology Language [10]) and SWRL (Semantic Web Rule Language [11]) are adopted as policy languages to represent and implement the access control model. Semantic languages allow policies to be described over heterogeneous domain knowledge and promote common understanding among participants who might not use the same information model. In addition, semantic techniques provide the reasoning services needed to deduce new information from existing knowledge and to detect conflicts between policies [12-14]. The rest of this paper is organized as follows. Section 2, the main part, elaborates the representation of and reasoning on the access control model. Section 3 provides the prototype implementation. Section 4 gives an overview of related work, and the final section concludes the paper.
2 Representation and Reasoning on the RBAC Model

In this section, we describe how to define an OWL-DL ontology and SWRL rules that represent and extend the NIST Standard RBAC model, and show how they can be used to specify and implement access control for web service.

2.1 Users

As mentioned above, we use credentials that can be verified and accepted across domains to represent users, and perform user-to-role assignment based on the credential and its attributes. Credentials in web applications may employ different authentication techniques, such as a name token, binary token, key or SAML assertion. As our goal is to represent and extend the RBAC model using semantic web technologies, we have constructed a Credential class to capture, at an abstract level, the concepts, properties and relationships among these authentication techniques. The main structure of the Credential class is shown in Fig. 1. The top-level class Credential is subclassed into UserNameToken, BinarySecurityToken, Key and SAMLAssertion; all subclasses are pairwise disjoint. Subclass UserNameToken denotes the set of users whose identity is represented by a name and password. Subclass BinarySecurityToken denotes the set of users whose identity is confirmed by a binary security token such as an X.509 certificate or Kerberos ticket.
Fig. 1. Credential Class
Subclass Key denotes the set of users whose identity is represented by a secret key, such as a public key or symmetric key. Subclass SAMLAssertion denotes the set of users whose authentication information is described by a SAML assertion. Role assignment is based on the credential and its attributes, so specific properties should be defined for the Credential class. For example, the datatype property issuedBy is defined to indicate who issued the credential, property isInternal indicates whether the credential is issued by an internal or external authority, and property isValid indicates whether the credential is valid. Each subclass of Credential can also define its own properties according to its characteristics and the practical application: the UserNameToken class may have two datatype properties hasName and hasPassword, both of type xsd:string, and the X509Certificate class may have a datatype property serialNumber of type xsd:integer. It is worth mentioning that the Credential class here is not meant to be complete; as new authentication mechanisms become available, it can be extended to incorporate the latest developments.

2.2 Roles

Role is the key concept in the RBAC model. It adds an intermediary for assigning permissions to users, which greatly simplifies authorization administration. In our proposal, a generic Role class is defined, and each concrete role is an instance of it. An object property hasRole is defined to link a user to his assigned roles.

Role hierarchy. A role hierarchy defines an inheritance relation among roles. Inheritance is described in terms of permissions: r1 "inherits" role r2 if all privileges of r2 are also privileges of r1. To represent the hierarchical relation among roles, a transitive object property subRoleOf is defined, which holds between two roles to state that one inherits all privileges of the other.

Constraints.
There are two kinds of constraints in the NIST Standard RBAC model: static separation of duty (SSD) and dynamic separation of duty (DSD). SSD constrains role assignment over pairs of roles such that any user may hold only one role of the pair. A DSD constraint holds between two roles that no user may have simultaneously active. Two object properties, ssd and dsd, are defined to represent SSD and DSD constraints respectively; both are symmetric and transitive.

2.3 Permissions

Permission is an approval to perform an action on protected objects. The types of actions and objects depend on the practical application.
In web service, the protected objects include services and their operations, so access control should be enforced at both the service level and the operation level; the action here is to invoke the services or operations. Consequently, for simplicity, we represent permissions directly by services and operations. Two classes, Service and Operation, are defined to represent the sets of services and operations respectively, and the object property hasOperation links a service to its operations. Common properties such as publishedBy and securityLevel can be defined; the former indicates the publisher and the latter the required protection level of the service or operation. Assertions about these properties can be taken as conditions to enforce permission assignment. According to the practical application, new hierarchies of services and operations, as well as new properties, can be constructed. To establish relations between permissions and roles as well as users, some specific properties should be defined. For example, properties assignedService and assignedOperation associate a role with a service and an operation respectively, meaning that a user assigned this role can invoke the corresponding service or operation. Properties permittedService and permittedOperation associate a user with a service and an operation respectively, meaning that the user has the privilege to invoke them. Properties activatedService and activatedOperation are similar to permittedService and permittedOperation, but the privilege exists only within a session, through activated roles.

2.4 Sessions

A session is a mapping of one user to possibly many roles: a user establishes a session during which he activates a subset of the roles he is assigned. Each session is associated with a single user, and each user is associated with one or more sessions.
We define a Session class and several special properties to capture the session concept. Class Session represents the set of session instances. The object property establish is an inverse functional property that associates a user with a session, and the object property activatedRole associates a session with a role activated in it. When a user establishes a session, a new session instance assertion, along with other facts related to this instance, should be produced at run time.

2.5 Rules

In the sections above, a series of classes and properties are defined to describe the concepts and relations of RBAC. Some basic reasoning, such as subsumption and satisfiability checking, can be performed by a semantic reasoner. However, due to the inherent limitations of the logical basis of OWL-DL, some functions of our access control model cannot be implemented well with such reasoning alone, for example dynamic user-to-role assignment, separation of duty constraints, and role activation and deactivation. Consequently, technologies from the logic layer above the ontology layer in the semantic web architecture should be adopted; that is, specific rules should be defined and added to the RBAC ontology to implement the required functions. We adopt SWRL, a rule language based on a combination of OWL-DL and Horn clauses and a candidate standard for the logic layer, to define the required rules. Five kinds of rules are defined; the rule instances are shown in Table 1.
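As a loose illustration of the session concept of Section 2.4 and the DSD constraint of rule 3.2 (a Python sketch with class and method names of our own, not the paper's OWL/SWRL encoding):

```python
# Sketch: a session holds the roles a user has activated; a role that
# stands in a dsd pair with an already-activated role cannot be
# activated in the same session. Names here are illustrative.

class Session:
    def __init__(self, user, dsd_pairs):
        self.user = user
        self.active = set()
        # dsd is symmetric, so close the pair set under symmetry
        self.dsd = set(dsd_pairs) | {(b, a) for (a, b) in dsd_pairs}

    def activate(self, role):
        """Activate a role unless it conflicts with an active one (DSD)."""
        if any((role, r) in self.dsd for r in self.active):
            return False
        self.active.add(role)
        return True
```

With dsd(R2, R1), as in the scenario of Section 3, activating R1 first blocks a later activation of R2 in the same session, while unrelated roles remain activatable.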
User-to-role assignment rules. For user-to-role assignment, we distinguish between registered and unregistered users. For registered users, role assignment can be predefined simply by adding the related hasRole assertions. For example, suppose the assertions UserNameToken(u), Role(r) and hasRole(u, r) exist in the initial ontology base; they indicate that u is a known user of type UserNameToken and r is one of his assigned roles. For unregistered users, corresponding semantic rules must be defined to perform dynamic role assignment. For example, rule R1.1 specifies that any user with an X.509 certificate issued by the external authority 'ca' is assigned role R1. Other role assignment rules can be defined similarly for the specific application.

Table 1. Reasoning rules represented by SWRL

R1.1: hasRole(?u, R1) ← X509Certificate(?u) ∧ isInternal(?u, false) ∧ issuedBy(?u, "ca") ∧ isValid(?u, true)
R1.2: hasRole(?u, R2) ← PublicKey(?u) ∧ isInternal(?u, true) ∧ issuedBy(?u, "ka") ∧ isValid(?u, true)
R2.1: assignedService(R1, ?so) ← Service(?so) ∧ publishedBy(?so, "sp") ∧ securityLevel(?so, ?i) ∧ swrlb:lessThan(?i, 1)
R2.2: assignedService(R2, ?so) ← Service(?so) ∧ securityLevel(?so, ?i) ∧ swrlb:greaterThan(?i, 3)
R3.1: ¬hasRole(?u, ?r2) ← hasRole(?u, ?r1) ∧ ssd(?r1, ?r2)
R3.2: ¬activatedRole(?s, ?r2) ← activatedRole(?s, ?r1) ∧ dsd(?r1, ?r2)
R4.1: hasRole(?u, ?r2) ← hasRole(?u, ?r1) ∧ subRoleOf(?r1, ?r2)
R4.2: assignedService(?r2, ?so) ← assignedService(?r1, ?so) ∧ subRoleOf(?r2, ?r1)
R4.3: assignedOperation(?r2, ?oo) ← assignedOperation(?r1, ?oo) ∧ subRoleOf(?r2, ?r1)
R4.4: activatedRole(?s, ?r2) ← activatedRole(?s, ?r1) ∧ subRoleOf(?r1, ?r2)
R5.1: permittedService(?u, ?so) ← hasRole(?u, ?r) ∧ assignedService(?r, ?so)
R5.2: activatedService(?u, ?so) ← establish(?u, ?s) ∧ activatedRole(?s, ?r) ∧ assignedService(?r, ?so)
Permission-to-role assignment rules. As mentioned in section 2.3, property assertions about the Service and Operation classes can be used as conditions to enforce permission assignment. For instance, rule R2.2 indicates that services with a security level greater than 3 can only be assigned to role R2.

Separation of duty constraint rules. Rules R3.1 and R3.2 enforce the SSD and DSD constraints respectively. For convenience, we use the notation '¬' to express negation, which does not exist in SWRL; in practice, extra properties such as cannotHasRole must be defined to enable these rules.

Role hierarchy reasoning rules. Rules R4.1-R4.4 express role hierarchy reasoning. For example, rule R4.1 states that, for any user u and roles r1 and r2, if u has been assigned role r1 and r1 is a sub-role of r2, then r2 is an implied role of u. The other rules can be read similarly.

Association rules. To view all the permissions of a user directly, specific rules that associate users with permissions should be defined, like rules R5.1 and R5.2. For example, rule R5.2 yields all the services a user is currently permitted to invoke.

According to the description above, the set of access control policies is in fact a knowledge base that can be formalized as K = (T, A, R). We call K the access control knowledge base (ACKB); T, A and R correspond to the TBox, ABox
and RBox of the ACKB respectively. T comprises the axioms about all the common classes and properties defined in sections 2.1-2.4; A is the set of assertions stated over the TBox, which depends on the practical application; and R denotes the set of rules defined in section 2.5.
3 Implementation

In this section, we give an application scenario and adopt the Jess rule engine as the reasoning system to demonstrate the definition and inference process of the semantic access control proposal presented above and to verify its enforcement.

Scenario description: Suppose that an organization has four roles, R1, R2, R3 and R4, and provides five services: query, purchase, exchange, refund and approve. The role hierarchy and relations are as follows: R4 is senior to R2 and R3, which are both senior to R1; role R2 has an ssd relationship with R3 and a dsd relationship with R1. We assume that the services query and purchase are assigned to role R1, the services exchange and refund are associated with roles R2 and R3 respectively, and role R4 has the privilege to access the service approve. The initial assertions shown in Table 2 are therefore added to the ABox of the ACKB.

Table 2. Initial assertions of the policy base

Roles: Role(R1), Role(R2), Role(R3), Role(R4)
Hierarchy: subRoleOf(R2, R1), subRoleOf(R3, R1), subRoleOf(R4, R2), subRoleOf(R4, R3)
Constraints: ssd(R2, R3), dsd(R2, R1)
Services: Service(query), Service(purchase), Service(exchange), Service(refund), Service(approve)
Assignments: assignedService(R1, query), assignedService(R1, purchase), assignedService(R2, exchange), assignedService(R3, refund), assignedService(R4, approve)
Now assume that a user u1 with a public key issued by the internal authority 'ka' wants to access the service purchase. Related assertions such as PublicKey(u1), isInternal(u1, true), issuedBy(u1, "ka") and isValid(u1, true) are constructed by the Policy Decision Point (PDP) and forwarded to the Jess engine to trigger the decision reasoning. During the reasoning process, session assertions and related facts about the user are dynamically produced by the PDP to carry the reasoning forward. When the reasoning is completed, the Jess engine generates many inferred axioms that are not defined explicitly, shown in Table 3.

Table 3. Inferred axioms in the test scenario

IA1: hasRole(u1, R1)
IA2: hasRole(u1, R2)
IA3: notHasRole(u1, R3)
IA4: notActivatedRole(s1, R2)
IA5: assignedService(R2, query)
IA6: assignedService(R2, purchase)
IA7: assignedService(R3, query)
IA8: assignedService(R3, purchase)
IA9: assignedService(R4, query)
IA10: assignedService(R4, purchase)
IA11: assignedService(R4, refund)
IA12: assignedService(R4, exchange)
IA13: permittedService(u1, purchase)
IA14: permittedService(u1, query)
IA15: permittedService(u1, exchange)
IA16: activatedService(u1, query)
IA17: activatedService(u1, purchase)
From the inferred axioms activatedService(u1, query) (IA16) and activatedService(u1, purchase) (IA17), we know that user u1 has been granted access to the services query and purchase in his current session. Consequently, the PDP can decide that u1 is permitted to access the requested service, and the validity of the access control policy is verified. In addition, inferred axioms IA1-IA3 indicate that u1 is assigned roles R1 and R2 but not R3, due to the SSD constraint. IA4 shows that R2 cannot be activated in the current session s1 because of the DSD constraint (s1 is generated dynamically during the reasoning process through related session assertions for u1). All the implicit privilege relations between roles and services can be obtained from axioms IA5-IA12, and all the possible privileges of u1 from axioms IA13-IA15.
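As an independent cross-check of this scenario, the facts of Table 2 and rules R1.2, R3.1, R4.1, R4.2 and R5.1 can be replayed in a plain-Python sketch (the variable names are ours; the paper's actual reasoning runs in OWL/SWRL on the Jess engine):

```python
# Facts from Table 2; u1 gets role R2 via rule R1.2 (valid public key
# issued by the internal authority "ka").

sub_role_of = {("R2", "R1"), ("R3", "R1"), ("R4", "R2"), ("R4", "R3")}
ssd = {("R2", "R3")}
assigned = {("R1", "query"), ("R1", "purchase"), ("R2", "exchange"),
            ("R3", "refund"), ("R4", "approve")}

def transitive_closure(rel):
    """subRoleOf is transitive; compute its closure by iteration."""
    rel = set(rel)
    while True:
        new = {(a, d) for (a, b) in rel for (c, d) in rel if b == c} - rel
        if not new:
            return rel
        rel |= new

sub = transitive_closure(sub_role_of)

u1_roles = {"R2"}                                                # rule R1.2
u1_roles |= {jr for (sr, jr) in sub if sr in u1_roles}           # rule R4.1
ssd_sym = ssd | {(b, a) for (a, b) in ssd}
u1_not_roles = {r2 for (r1, r2) in ssd_sym if r1 in u1_roles}    # rule R3.1

# Rule R4.2: every senior role inherits the services of its junior roles
assigned_all = assigned | {(sr, s) for (sr, jr) in sub
                           for (r, s) in assigned if r == jr}
# Rule R5.1: permittedService(u, s) <- hasRole(u, r) ^ assignedService(r, s)
u1_permitted = {s for (r, s) in assigned_all if r in u1_roles}
```

This reproduces IA1-IA3, IA5-IA12 and IA13-IA15 of Table 3: u1 holds R1 and R2 but not R3, and may invoke query, purchase and exchange.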
4 Related Work

Similar approaches based on description logic (DL) to formalize RBAC are presented in [15], [16] and [13]. Zhao et al. [15] choose ALCQ to represent and reason on RBAC as well as constraints such as SOD and role cardinality. Chae et al. [16] extend hierarchical RBAC with a class hierarchy of the accessed objects and implement this model in DL. However, Knechtel et al. [13] state that the proposal in [16] has several flaws, for example that the DL semantics is not respected and that its running example yields wrong results. To fix these flaws, they adopt the DL SROIQ(D), which can simulate the concept product, to redescribe the model; they follow the same example scenario and obtain the correct results. However, while SROIQ(D) is very expressive, it incurs higher computational complexity. Additionally, none of the above works considers application in an open environment where users may not be known in advance, so the statement in [13] that the reasoning needs to be run only once is impractical. Technically, the works of [5], [12] and [17] are closer to our proposal. Lorenzo et al. [5] have developed an OWL-DL ontology to express the elements of an RBAC system. They state that roles are dynamically associated with users based on contextual attributes, but they do not explain how to model these attributes and combine them with the OWL-DL ontology. They formulate the SSD constraint using the construct owl:disjointWith, but define the object property notTogetherWith ⊂ Role × Role to express the DSD constraint. Since they neither use rules nor provide a running example, it is unclear how to enforce DSD in practice. Finin et al. [12] propose two approaches to model RBAC: one where roles are represented as classes and another where roles are instances. While they do not make explicit which specific DL language they choose, they state that OWL is used.
Furthermore, role assignments are not modeled in their proposal, and there is no associated system. Wu et al. [17] focus on specifying the static description of RBAC constraints in OWL and do not provide any implementation; confusingly, they use the same property conflictRole in different rules to express both the SSD and DSD constraints. Compared with these works, our proposal has the following features: i) it considers application in an open environment and has been implemented, while the others have not; ii) it models the user as a Credential class and performs role assignment dynamically based on assertions about the credential; iii) it defines a series of SWRL rules to implement the required functions, which improves the flexibility of policy configuration.
5 Conclusions

The dynamic and open characteristics of the web service environment introduce new access control challenges. In this paper, we propose a semantic-based approach to construct a feasible and flexible access control solution for web service. The contributions of this paper can be summarized as follows: i) Access control model. By combining RBAC with ideas from ABAC, a new access control model is introduced that enables dynamic privilege assignment at two levels: attributes associated with services determine the association of access privileges with roles, while roles are assigned dynamically based on the credential attributes of users. ii) Access control ontology. A high-level OWL-DL ontology is developed to express the elements of the access control model, so that the hierarchies and relations of users, roles and services can be represented in a natural manner. iii) Semantic rules. A series of SWRL rules are defined to implement the operations that cannot be expressed well in OWL-DL, such as dynamic role assignment and static and dynamic separation of duty constraints.
References 1. Singhal, A., Winograd, T., Scarfone, K.: Guide to Secure Web Service. NIST Special Publication 800-95 (2007) 2. Coetzee, M., Eloff, J.: Towards Web Service Access Control. Computers & Security 23, 559–570 (2004) 3. Bartoletti, M., Degano, P., Ferrari, G., Zunino, R.: Semantics-Based Design for Secure Web Services. IEEE Transactions on Software Engineering 34(1), 33–49 (2008) 4. David, F., Ravi, S., Serban, G.: Proposed NIST Standard for Role-Based Access Control. ACM Transactions on Information and System Security 4(3), 224–274 (2001) 5. Lorenzo, C., Isabel, F.C., Roberto, T.: A Role and Attribute Based Access Control System Using Semantic Web Technologies. In: Meersman, R., Tari, Z., Herrero, P. (eds.) OTMWS 2007, Part II. LNCS, vol. 4806, pp. 1256–1266. Springer, Heidelberg (2007) 6. Eric, Y., Jin, T.: Attributed based access control for Web services. In: IEEE International Conference on Web Services, pp. 561–569 (2005) 7. Priebe, T., Dobmeier, W., Kamprath, N.: Supporting Attribute-based Access Control in Authorization and Authentication Infrastructures with Ontologies. Journal of Software 2(1), 27–38 (2007) 8. Bhatti, R., Bertino, E., Ghafoor, A., Joshi, J.: XML-based Specification for Web Services Document Security. IEEE Computer 37(4), 41–49 (2004) 9. Wu, M., Chen, J.X., Ding, Y.S.: Role-Based Access Control for Web Services. WSEAS Transactions on Information Science and Applications 3(8), 1553–1558 (2006) 10. W3C: OWL Web Ontology Language Reference (2004), http://www.w3.org/TR/2004/REC-owl-ref-20040210/ 11. W3C: SWRL: A Semantic Web Rule Language Combining OWL and RuleML (2004), http://www.w3.org/Submission/SWRL/ 12. Finin, T., Joshi, A., Kagal, L., et al.: ROWLBAC: Representing Role Based Access Control in OWL. In: 13th ACM Symposium on Access Control Models and Technologies, Colorado, USA, pp. 73–82 (2008)
13. Knechtel, M., Hladik, J.: RBAC Authorization Decision with DL Reasoning. In: IADIS International Conference WWW/Internet, pp. 169–176 (2008) 14. Toninelli, A., Montanari, R., Kagal, L., Lassila, O.: A semantic context-aware access control framework for secure collaborations in pervasive computing environments. In: Cruz, I., Decker, S., Allemang, D., Preist, C., Schwabe, D., Mika, P., Uschold, M., Aroyo, L.M. (eds.) ISWC 2006. LNCS, vol. 4273, pp. 473–486. Springer, Heidelberg (2006) 15. Zhao, C., Heilili, N., Liu, S.: Representation and Reasoning on RBAC: A Description Logic Approach. In: Van Hung, D., Wirsing, M. (eds.) ICTAC 2005. LNCS, vol. 3722, pp. 381–393. Springer, Heidelberg (2005) 16. Chae, J.H., Shiri, N.: Formalization of RBAC policy with object class hierarchy. In: Dawson, E., Wong, D.S. (eds.) ISPEC 2007. LNCS, vol. 4464, pp. 162–176. Springer, Heidelberg (2007) 17. Wu, D., Lin, J.: Using Semantic Web Technologies to Specify Constraints of RBAC. In: 6th International Conference on Parallel and Distributed Computing, Applications and Technologies, pp. 543–545 (2005)
A Quadtree Coding in E-chart

Zhong-jie Zhang1,3, Xian Wu2, De-peng Zhao1, and De-qiang Wang1

1 Dalian Maritime University, China
2 Nanjing Maritime Safety Administration of the People's Republic of China
3 Shandong Institute of Light Industry, China
Abstract. With the development of IT and maritime traffic logistics, more functionality and better performance are demanded of the electronic chart (E-chart) in navigation. Meanwhile, the requirement for high-performance E-charts, driven by the intensive application of embedded technology in many kinds of vessel equipment, is becoming increasingly urgent. An embedded E-chart, however, must provide the fundamental functions of an ordinary desktop system, such as displaying, browsing and querying, with limited resources, so efficient display, query and indexing are essential. This paper presents a quadtree index structure for embedded E-charts designed for limited resources. Tests indicate that this method is more efficient than the classical R-tree or quadtree. Keywords: E-chart; R-tree; quadtree.
1 Introduction

Because of its limited computing capability, an embedded E-chart offers only functions as fundamental as those of an ordinary desktop system, such as displaying, browsing and querying [13]. Since displaying and querying consume most of the system resources of an embedded E-chart, it is important to find an efficient method for them. Embedded E-charts currently adopt GIS spatial index technologies, for example the R-tree [5, 6], quadtree, KD-tree [4] and Grid file [1, 5]. These methods suit desktop systems with strong computing power because they require abundant computing and memory resources, so they do not fit embedded systems. This paper presents an index structure based on a static quadtree, aimed at the limited resources of embedded E-charts, that can index map metadata at high speed. Tests indicate that this method is more efficient than the classical R-tree and quadtree, depends little on the type of data node, and satisfies the need for instant display and query in embedded E-charts.
2 Design Idea

2.1 Quadtree-Based Segmentation of the E-chart

The idea of static quadtree segmentation is to segment every node level by level. First, the whole area covered by the E-chart is taken as the root node, denoted R, at level zero. Second, node R is segmented into four child nodes, denoted L1-1, L1-2, L1-3 and L1-4, which form the first level. Third, the four nodes generated in the second step are each segmented again, forming the second level. Segmentation continues until all nodes satisfy the required segmentation granularity. The whole E-chart is thus the root node; segmenting it yields four first-level nodes, segmenting those yields sixteen second-level nodes, and in general the n-th level contains 4^n nodes. Note that every node is segmented into four child nodes of equal size. This saves resources compared with recursion or a stack, which consume many resources to run the same process; the method proposed in this paper can query the quadtree strictly by simple iteration.

The classical quadtree index usually puts all elements on leaf nodes. If the bounding rectangle of an object spans the rectangles of several leaf nodes of an E-chart, the object is indexed by several leaf nodes, which brings data redundancy [8]. This paper adopts an improved quadtree index that places an object spanning several leaf nodes on the intermediate node instead. This avoids data redundancy entirely and reduces the complexity of coding, although it reduces the index efficiency for such objects.

2.2 Algorithm Analysis

The classical algorithm for traversing a quadtree is implemented by recursion or a stack, with complexity O(log4 n), where n is the number of nodes. It must be improved because the amount of data in an E-chart is very large, and the cost grows further when indexing such large data.
This paper improves the classical algorithm to achieve fast query by using an auxiliary vector to encode the quadtree nodes. To encode the nodes, the paper uses the sequential storage structure of a full quadtree, which stores the order of each node directly as the subscript of the vector. Fig. 1 shows an encoding example of a full quadtree of depth 3.
Fig. 1. A 3-depth quadtree and node coding
To build the query index, each metadata element obtains its index by traversing the quadtree. Suppose m is the number of quadtree nodes and n the number of metadata elements. The complexity of traversing the tree for one element is O(log4 m), so indexing all n elements costs O(n · log4 m). The index needs to be encoded only once, because an embedded E-chart does not need to be modified; the encoding can therefore be run on high-speed computers and the results saved in the E-chart data. At query time, the scope of the query is first determined using the attribute vectors of the full quadtree, with complexity less than O(log4 m); this computation is fast because the number of quadtree nodes is small. Second, the generated attribute vectors serve as auxiliary vectors, with which one can retrieve the metadata matching a given qualification or test whether a given metadata element satisfies it.
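The placement rule of Section 2.1 can be sketched iteratively, without recursion or a stack: descend while one child pane fully contains the object's bounding rectangle, and stop on a middle node as soon as the rectangle spans several children. This is an illustration under an assumed quadrant ordering and the Fig. 1 node coding, not the paper's code:

```python
def locate(rect, bounds, max_depth):
    """Index of the node that should store rect (xmin, ymin, xmax, ymax)."""
    p = 0                                       # start at the root, index 0
    for _ in range(max_depth - 1):
        x0, y0, x1, y1 = bounds
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        quads = [(x0, y0, xm, ym), (xm, y0, x1, ym),
                 (x0, ym, xm, y1), (xm, ym, x1, y1)]
        for k, (qx0, qy0, qx1, qy1) in enumerate(quads):
            # descend only if rect fits wholly inside one child pane
            if (qx0 <= rect[0] and rect[2] <= qx1 and
                    qy0 <= rect[1] and rect[3] <= qy1):
                p, bounds = 4 * p + k + 1, quads[k]
                break
        else:
            break          # rect spans several children: stay on this node
    return p
```

A small object near a corner descends to a leaf, while an object straddling the chart's centre stays on the root, so no object is ever indexed twice.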
3 Algorithm Implementation

3.1 Principle of the Algorithm

To encode the nodes of a quadtree, the paper uses the sequential storage structure of a full quadtree, storing the order of each node directly as the subscript of the vector. A full quadtree of depth d has ∑_{i=0}^{d−1} 4^i nodes. For an index node p, the following relations hold.

Index value of its father node:

⌊(p − 1) / 4⌋ .  (1)

Index values of its child nodes:

4p + i,  i = 1, 2, 3, 4 .  (2)

Its depth:

⌊log4(3p + 1)⌋ .  (3)
For the n-level of a full quadtree, the index value of the first node in the n-th level is n−2 4n− 2 − 1 ∑ i =0 4i , that is 3 . The number of nodes in the n-th level is 4n−1 . These nodes segment E-chart into 2 n −1 × 2 n −1 phalanxes, and the index value of a phalanx located i-th row and j-th line is the equation as the followed. ⎢ p − 1⎥ ⎢ 4 ⎥ ⎣ ⎦
(4)
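As a concrete illustration, relations (1)-(4) can be sketched in Python (hypothetical helper names, not part of the paper's implementation; the root has index 0, and in cell_index the root counts as level 1, as in equation (4)):

```python
def parent(p):
    # Equation (1): index of the father node of p (root is 0).
    return (p - 1) // 4

def children(p):
    # Equation (2): indices of the four children of p.
    return [4 * p + i for i in range(1, 5)]

def depth(p):
    # Equation (3), computed by repeatedly applying equation (1)
    # to avoid floating-point log; equals floor(log4(3p + 1)).
    d = 0
    while p > 0:
        p = (p - 1) // 4
        d += 1
    return d

def cell_index(n, i, j):
    # Equation (4): index of the square at row i, column j of the
    # 2^(n-1) x 2^(n-1) grid formed by level n (root is level 1).
    return (4 ** (n - 1) - 1) // 3 + 2 ** (n - 1) * i + j
```

For example, children(0) returns the four first-level nodes [1, 2, 3, 4].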
If the longitude and latitude scopes of the E-chart are $x_0$ to $x_n$ and $y_0$ to $y_n$ respectively, the width $x_{cell}$ and height $y_{cell}$ of every pane are as follows.
$x_{cell} = (x_n - x_0)/2^{n-1}$ ,    (5)

$y_{cell} = (y_n - y_0)/2^{n-1}$ .    (6)
3.2 Setting Index
Setting the index means computing the order of each meta object in the E-chart, which can be obtained by locating the object in the E-chart. Because the E-chart is segmented evenly, the order can be computed with equations (1)-(6). A meta object is composed of dots, lines and planes; lines and planes can be handled through their bounding rectangles, so every object can be abstracted as either a dot object or a rectangle object for the computation.

3.2.1 Computing the Index Values of Points

A dot object can be regarded as a rectangle object with zero height and zero width, so its index value must be located on a leaf node of the full quadtree. Supposing the longitude and latitude of a dot object are x and y respectively, the row and column of the pane of its leaf node are as follows.
$i = (x - x_0)/x_{cell}$ ,    (7)

$j = (y - y_0)/y_{cell}$ .    (8)
Putting equations (7) and (8) into equation (4), the index value p is

$p = \frac{4^{n-1}-1}{3} + 2^{n-1}\frac{x - x_0}{x_{cell}} + \frac{y - y_0}{y_{cell}}$ .

Putting equations (5) and (6) into this expression and simplifying gives equation (9):

$p = \frac{4^{n-1}-1}{3} + 2^{n}\frac{x - x_0}{x_n - x_0} + 2^{n-1}\frac{y - y_0}{y_n - y_0}$ .    (9)
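A minimal sketch of point indexing, combining equations (4)-(8) with explicit integer truncation (a hypothetical helper; the chart bounds x0, xn, y0, yn and the leaf level n are assumed given):

```python
def point_index(x, y, x0, xn, y0, yn, n):
    # Equations (5)-(6): width and height of one pane at leaf level n.
    xcell = (xn - x0) / 2 ** (n - 1)
    ycell = (yn - y0) / 2 ** (n - 1)
    # Equations (7)-(8), truncated to integers; the clamp keeps a point
    # on the chart's far edge inside the last row/column of panes.
    i = min(int((x - x0) / xcell), 2 ** (n - 1) - 1)
    j = min(int((y - y0) / ycell), 2 ** (n - 1) - 1)
    # Equation (4): node index of that pane in the full quadtree.
    return (4 ** (n - 1) - 1) // 3 + 2 ** (n - 1) * i + j
```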
3.2.2 Computing the Index Values of Rectangles

In general the index value of a rectangle object cannot be located on a single leaf node. Instead, one takes the leaf nodes containing the lower-left and upper-right corners of the rectangle and then finds the common ancestor of these two nodes. Supposing the lower-left and upper-right coordinates of a rectangle are $(x_{min}, y_{min})$ and $(x_{max}, y_{max})$, the two corner leaf nodes are computed first:

$p_a = \frac{4^{n-1}-1}{3} + 2^{n}\frac{x_{min} - x_0}{x_n - x_0} + 2^{n-1}\frac{y_{min} - y_0}{y_n - y_0}$ ,

$p_b = \frac{4^{n-1}-1}{3} + 2^{n}\frac{x_{max} - x_0}{x_n - x_0} + 2^{n-1}\frac{y_{max} - y_0}{y_n - y_0}$ .

Secondly, the common ancestor of $p_a$ and $p_b$ is found by iterating equation (1) until $p_a^{(n)}$ equals $p_b^{(n)}$; this common subscript value is the index value of the rectangle:

$p_a^{(n)} = \left\lfloor \frac{p_a^{(n-1)} - 1}{4} \right\rfloor$ ,  $p_b^{(n)} = \left\lfloor \frac{p_b^{(n-1)} - 1}{4} \right\rfloor$ .
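The procedure can be sketched as follows (hypothetical helpers; the corner leaves come from the point-index equations, and the ancestor walk iterates equation (1)):

```python
def leaf_index(x, y, x0, xn, y0, yn, n):
    # Leaf-level pane containing (x, y), per equations (4)-(8).
    i = min(int((x - x0) / (xn - x0) * 2 ** (n - 1)), 2 ** (n - 1) - 1)
    j = min(int((y - y0) / (yn - y0) * 2 ** (n - 1)), 2 ** (n - 1) - 1)
    return (4 ** (n - 1) - 1) // 3 + 2 ** (n - 1) * i + j

def rect_index(xmin, ymin, xmax, ymax, x0, xn, y0, yn, n):
    # Leaf nodes of the lower-left and upper-right corners.
    pa = leaf_index(xmin, ymin, x0, xn, y0, yn, n)
    pb = leaf_index(xmax, ymax, x0, xn, y0, yn, n)
    # Walk both up with equation (1) until they coincide: the common
    # value is the node on which the rectangle is stored.
    while pa != pb:
        pa = (pa - 1) // 4
        pb = (pb - 1) // 4
    return pa
```

An object whose corners fall in different leaves is thus stored on a single internal (middle) node, which is exactly the redundancy-free placement described in Section 2.1.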
3.3 Designing Index
The aim of building an index in an embedded E-chart is to enhance query efficiency. The process is finished in two steps. The first step computes the accessorial vectors. Supposing the longitude and latitude of a given area are $x_{min}$ to $x_{max}$ and $y_{min}$ to $y_{max}$ respectively:

(1) Set the initial value of all accessorial vector entries to false.
(2) Treat the given area as a rectangle object and compute its index value p.
(3) Set the accessorial vector entry corresponding to p to true.
(4) Set the entries of all of p's ancestor nodes to true.
(5) Traverse all children nodes of p; if a child node overlaps the given area, set its corresponding entry to true.

All nodes marked true are related to the given area, and the rectangles covering the true nodes intersect the given area. The second step is querying data: it only needs to test whether the entry at a subscript is true. If it is true, the corresponding rectangle is relevant to the query.
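The first step above can be sketched as follows (a simplified illustration with hypothetical names; the geometric overlap test between a node's rectangle and the given area is abstracted as a callback, and p is the rectangle index of the area from step (2)):

```python
def build_access_vector(num_nodes, p, overlaps):
    # Accessorial boolean vector over all full-quadtree nodes.
    marked = [False] * num_nodes       # step (1): initialise to false
    marked[p] = True                   # step (3): mark the area's node
    q = p
    while q > 0:                       # step (4): mark all ancestors,
        q = (q - 1) // 4               # iterating equation (1)
        marked[q] = True
    stack = [p]                        # step (5): mark descendants that
    while stack:                       # overlap the given area
        node = stack.pop()
        for c in range(4 * node + 1, 4 * node + 5):  # children, eq. (2)
            if c < num_nodes and overlaps(c):
                marked[c] = True
                stack.append(c)
    return marked
```

The second step then reduces to testing marked[index] for each candidate object.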
4 Performance Testing

In order to test the performance of the coding algorithm, the normal quadtree and the R-tree were selected for comparison testing. Six e-charts with different sizes and amounts of data were used; their details are shown in Table 1.

Table 1. The e-charts used in testing and their amounts of data
Chart   Point data   Rect data   All data
C1            1085         637       1722
C2              35        4693       4728
C3            3402        1251       4653
C4            3024        2666       5690
C5           29043       18558      47601
C6          365804        4693     370497
First, the index structures were built for every algorithm. On a computer with a Celeron 2.4 GHz CPU, all the data were held in memory and the index structure of each algorithm was created; creating the index involves only insert operations. After indexing, query performance was tested. The experiment concerned only the point data and the range data; line data were treated as range data. For each query, three values were recorded: the amount of data hit by the index D, the amount of effective data V, and the time consumed by the query T. Finally the average values of D, V and T were calculated. Different algorithms have the same V-value but different D-values, so an efficient rate A = V/D is introduced. This value represents the theoretical efficiency of the algorithm; its maximum is 100%.
Fig. 2. Time consumed to create an index (ms), for Quadtree, R-Tree and Coding on charts C1-C6

Fig. 3. Average efficient rates (%), for Quadtree, R-Tree and Coding on charts C1-C6
In practice, the cost of computing the index must also be considered, so the average query time is used here to represent the comprehensive performance of each algorithm. On index creation, because it only needs to calculate node codes, the static quadtree coding algorithm is an order of magnitude faster than the ordinary quadtree and the R-tree. Its query efficiency is roughly the same as that of the R-tree, but the data it hits are more effective. As the amount of data increases, the efficient rate of the algorithm also increases.

Fig. 4. Average time consumed (ms), for Quadtree, R-Tree and Coding on charts C1-C6
5 Conclusion

The paper brings forward an improved algorithm based on the classical quadtree structure to enhance the efficiency of embedded E-charts. However, the algorithm only accomplishes the query function; there is still much work to do to perfect the performance of embedded E-charts.
References

1. Fu, Y.-C., Hu, Z.-Y., Guo, W., Zhou, D.-R.: QR-Tree: A Hybrid Spatial Index Structure. In: Proceedings of the Second International Conference on Machine Learning and Cybernetics, Xi'an, November 2-5 (2003)
2. Balmelli, L., Kovačević, J., Vetterli, M.: Quadtrees for Embedded Surface Visualization: Constraints and Efficient Data Structures. IEEE (1999)
3. Run-tao, L., Xiao-hua, A., Xiao-shuang, G.: Spatial Index Structure Based on R-tree. Computer Engineering 35(23) (December 2009)
4. Yong-hong, Q., Yong-nian, Z., Bin, Z.: KDT tree: Multi-dimensional index structure for spatial data. Computer Engineering and Applications 45(8), 29–31 (2009)
5. Zhong, X., Ming, F.G., Chang-jie, M.: Index Strategies for Embedded-GIS. Spatial Data Management 31(5) (September 2006)
6. Run-tao, L., Zhong-xiao, H.: Spatial index structure based on R-tree and quadtree: RQOP_tree. Journal of Harbin Institute of Technology 42(2) (February 2010)
7. Yuhong, C., Lei, S.: Exploration of Spatial Data Index Technique Based on R-Tree. Computer Applications and Software 25(12) (December 2008)
8. Tao-shen, L., Bi, L.: Research on Data Model and Query Optimization of Electronic Map in Embedded GIS. Aeronautical Computer Technique 37(2) (September 2007)
9. Xiaotong, W., Huanchen, W.: Data structure for the fast display of spatial objects in the ECDIS. Acta Geodaetica et Cartographica Sinica 28(1) (February 1999)
10. Jun, W., Wen-shi, Z., Jian-tao, W.: Research and realization on speedy display of electronic multi-maps with macro data. Engineering of Surveying and Mapping 12(3) (September 2003)
11. Dong-jun, L., Guo-sun, Z.: Spatial data buffer policy based on quadtree. Computer Engineering and Applications 44(22), 162–165 (2008)
12. Xiao-guang, S.: The Research of Creating Method of Spatial Index in the Navigable Database. Geomatics & Spatial Information Technology 31(3) (June 2008)
13. Yinshan, J., Chuanying, J., Haiping, W., Bo, Z.: Design and Implementation of a Ship Navigation System Based on GPS and Electronic Chart. Computer Engineering 29(1) (January 2003)
14. Cunhong, H., Wenhua, Z., Lifang, G.: The Design and Realization of Common Vector Electric Map System. Modern Surveying and Mapping 28(5) (October 2005)
15. Jie, L., Songhai, Z., Qiang, L.: A Dynamic Multi-Resolution Terrain Model Based on Quadtree. Computer Engineering and Applications (July 2006)
16. Ying, L., Weifeng, T., Zhihua, J., Zhisha, C.: A Novel Scheme of Electronic Chart Display System. Ship Engineering (1) (2002)
Study on Applying Vector Representation Based on LabVIEW to the Computing between Direct Lattice and Reciprocal Lattice

Yingshan Cui, Xiaoli Huang, Lichuan Song, and Jundong Zhu

North China Coal Medical University, Information Centre, Construction South Road No. 57, 063000 Tangshan, China
[email protected],
[email protected],
[email protected],
[email protected]
Abstract. To improve the quality of learning about Direct Lattice and Reciprocal Lattice in the field of Solid-state Physics, LabVIEW 8.2 Professional Edition was used as the development tool for designing and realizing the manifestations of vectors and their computation process. The design idea was naturally infiltrated into the computing between Direct Lattice and Reciprocal Lattice. This method can help a beginner build up the concepts of Direct Lattice and Reciprocal Lattice clearly from the vector perspective. Especially, it can make the beginner learn the relationship between Direct Lattice Vector and Reciprocal Lattice Vector deeply from the perspective of the Fourier Transform.

Keywords: LabVIEW, vector representation and computing, direct lattice, reciprocal lattice, Fourier transform.
1 Introduction

It is recognized that the concept of Direct Lattice and Reciprocal Lattice is very important in the analysis of periodic structures, which often appears in courses on Solid-state Physics, Crystal Diffraction, etc. How to introduce and understand the concepts of Direct Lattice and Reciprocal Lattice, Direct Lattice Vector and Reciprocal Lattice Vector is a question worthy of further study [4]. In traditional Solid-state Physics texts and other relevant literature, maybe due to limited space, the content usually covers just the definitions, conclusions and simple applications, not their natures, so it is difficult for beginners to understand and use the knowledge. This paper first introduces a series of schematic knowledge about vector concepts, expression forms and algorithms. Meanwhile, in the LabVIEW 8.2 Professional Edition development environment, with its graphical programming language, called G language [2], the paper designed and demonstrated two common modules: vector standardization and unitization, and vector cross multiplication. The process involved some complex calculations, such as the product of a scalar quantity with a vector, vector addition, the dot product of two vectors and the outer product of two vectors. Then the paper applied this idea to the computing between Direct Lattice and Reciprocal Lattice in the field of Solid-state Physics. This approach, from the basic knowledge of vectors to Direct Lattice Vector and Reciprocal Lattice Vector, can make beginners know the origin and the essence of these concepts more clearly and naturally. Moreover, it is easy for beginners to master the relationship between Direct Lattice Vector and Reciprocal Lattice Vector simply and intuitively from the angle of the Fourier Transform.

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 274–281, 2010. © Springer-Verlag Berlin Heidelberg 2010
2 Vector Representation and Computing

A vector is a physical quantity with both a size and a direction; having a direction or not is the fundamental difference between a vector and a scalar quantity. The size of a vector is called the vector module. The standardized expression of a vector A is

$A = A_x a_x + A_y a_y + A_z a_z$ .    (1)
In this formula, $A_x$, $A_y$, $A_z$ are the components of the vector A, i.e. the projections of A onto the forward directions of the x, y, z axes in Cartesian coordinates. We usually use the form $[A_x\ A_y\ A_z]$ to describe the vector A, so in programming an array or a matrix is used to hold this form. Meanwhile, $a_x$, $a_y$, $a_z$ are the unit vectors along the forward directions of the x, y, z axes in Cartesian coordinates [7], that is $[1\ 0\ 0]$, $[0\ 1\ 0]$, $[0\ 0\ 1]$, which together form a unit matrix; they are applied correspondingly to $A_x$, $A_y$, $A_z$. This step standardizes the three vector components, which are then added and combined to obtain the normalized form of the vector A. The module of the vector A is

$|A| = (A_x^2 + A_y^2 + A_z^2)^{1/2}$ .    (2)
Then the unit vector a of the standardized vector A is

$a = \frac{A}{|A|} = \frac{A_x a_x + A_y a_y + A_z a_z}{(A_x^2 + A_y^2 + A_z^2)^{1/2}} = \frac{A_x}{(A_x^2 + A_y^2 + A_z^2)^{1/2}} a_x + \frac{A_y}{(A_x^2 + A_y^2 + A_z^2)^{1/2}} a_y + \frac{A_z}{(A_x^2 + A_y^2 + A_z^2)^{1/2}} a_z$ .    (3)
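For readers without LabVIEW, the same standardization and unitization steps of equations (1)-(3) can be sketched in ordinary Python (an illustrative translation, not the paper's G-language code):

```python
import math

def standardize(a):
    # Equation (1): each component times its unit basis vector,
    # then the three component vectors are summed cell by cell.
    ax, ay, az = a
    basis = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    comps = [[ax * e for e in basis[0]],
             [ay * e for e in basis[1]],
             [az * e for e in basis[2]]]
    return [sum(c) for c in zip(*comps)]

def module(a):
    # Equation (2): the vector module |A|.
    return math.sqrt(sum(x * x for x in a))

def unitize(a):
    # Equation (3): divide each component by |A|.
    m = module(a)
    return [x / m for x in a]
```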
Based on the principle above, in the LabVIEW 8.2 development environment, a one-dimensional row array is used to hold the vector A, that is $[A_x\ A_y\ A_z]$: the datum $A_x$ is put into the cell with index value 0, and in a similar way $A_y$ into the cell with index value 1 and $A_z$ into the cell with index value 2. Then $A_x$, $A_y$, $A_z$ can be retrieved as required from the determined index values with the Index Array function. The unit vectors $a_x$, $a_y$, $a_z$, that is $[1\ 0\ 0]$, $[0\ 1\ 0]$, $[0\ 0\ 1]$, are put into three one-dimensional row array constants. Then, according to the standard formula of the vector A, multiplication is performed between the data value $A_x$ and the unit vector $a_x$, between $A_y$ and $a_y$, and between $A_z$ and $a_z$. The computed results are three one-dimensional row arrays; the value in every cell of one array is added to the value in the corresponding cell of the others, which embodies the algorithm for summing three vectors. The sums fill the cells of the resulting one-dimensional row array, which constitutes the standardized vector $A' = [A_x'\ A_y'\ A_z']$. The program flow is shown in Fig. 1.
Fig. 1. The program block diagram on vector standardization
In the latter part of Fig. 1 there is an array-to-matrix conversion function, used to change the storage form of the data from the original one-dimensional row array, extending one-dimensional space into two-dimensional space. This makes the position of data storage more flexible and the usage of the data more diverse [9], and it prepares for transforming a one-dimensional row array into a one-dimensional column array. The matrix transpose function is then used to exchange the rows and columns of the matrix (two-dimensional array) obtained by the conversion. Eventually the result is a one-dimensional column array stored in matrix (two-dimensional array) form. This change solves the problem of how to use the dot-product operation of two vectors to obtain the module of a vector. In fact, the scalar product of two vectors is also called the dot product; its definition is that the projection of one vector on the direction of another vector is multiplied by the module of the other vector, and the result is a scalar quantity. In Cartesian coordinates the three unit vectors $a_x$, $a_y$, $a_z$ are orthogonal to each other, so by the definition of the scalar product

$a_x \cdot a_x = 1,\ a_y \cdot a_y = 1,\ a_z \cdot a_z = 1,\ a_x \cdot a_y = 0,\ a_y \cdot a_z = 0,\ a_z \cdot a_x = 0$ .

The scalar product of two vectors can therefore be expressed as

$A \cdot B = (A_x a_x + A_y a_y + A_z a_z) \cdot (B_x a_x + B_y a_y + B_z a_z) = A_x B_x + A_y B_y + A_z B_z$ .    (4)
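The dot product of equation (4), computed as the product of a 1×3 row form with the 3×1 transposed column form as in the block diagram, can be sketched as:

```python
def dot(a, b):
    # Equation (4): A.B = AxBx + AyBy + AzBz, i.e. a 1x3 row matrix
    # multiplied by the 3x1 column matrix obtained by transposing B.
    b_col = [[x] for x in b]          # the matrix-transpose step
    return sum(a[k] * b_col[k][0] for k in range(3))
```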
Formula (4) shows that the scalar product of two vectors is equivalent to the sum obtained by adding together the products of the corresponding component modules of the two vectors, and this calculation is consistent with the multiplication algorithm for two matrices. Therefore, after the series of "one-dimensional array - matrix - matrix transpose" transformations, a one-dimensional row array and a one-dimensional column array, both stored as matrices, are obtained, and the multiplication rule between matrices can be used for the calculation. The result is stored in the first cell, formed by the crossing of the first row and the first column of the new matrix (two-dimensional array); this data value can be found with the Index Array function by searching row index 0 and column index 0, and it is applied to calculating the module of the standard vector. At last, the standardized row array, stored in a matrix, is divided by this module, and the resulting data in the new one-dimensional row array are the component modules of the unitized standard vector. Its realization is shown in Fig. 2.
Fig. 2. The program block diagram on standard vector unitization
In a word, the expression of standardized and unitized vectors is among the most basic concepts in mathematics, physics and other disciplines; any formula transformation or derivation related to vectors is based on it. As an example, the design idea of this common module is applied to the calculations between Direct Lattice Vector and Reciprocal Lattice Vector in Solid-state Physics.
3 The Fourier Expansion of Periodic Functions

Location Space (Coordinate Space) is composed of the Direct Lattice, and the Reciprocal Lattice can be regarded as composing State Space (K Space) [8, 11]. So the transformation between Coordinate Space and State Space is exactly the conversion between Direct Lattice and Reciprocal Lattice, and the relationship between Direct Lattice and Reciprocal Lattice follows the rule of the Fourier Transform [3]. If γ is the position vector of any point in a repetitive unit and Γ represents any physical quantity in the crystal lattice, then

$\Gamma(\gamma) = \Gamma(\gamma + l_1\alpha_1 + l_2\alpha_2 + l_3\alpha_3)$ .    (5)

In this formula, $l_1, l_2, l_3$ are all integers, and $\alpha_1, \alpha_2, \alpha_3$ are the side-length vectors of the repetitive unit, that is, the periodic vectors in the relevant directions. The formula shows that the
physical appearance at any position γ in a repetitive unit is the same as that at the corresponding position in any other repetitive unit. If the repetitive unit is not selected randomly but is taken as the primitive cell, then $\alpha_1, \alpha_2, \alpha_3$ are the basis vectors, and formula (5) holds for any γ in the crystal. In other words, the physical quantity Γ(γ) has its own periodicity, which can be written as

$\Gamma(\gamma + R_l) = \Gamma(\gamma)$ .
(6)
Here $R_l = l_1\alpha_1 + l_2\alpha_2 + l_3\alpha_3$ stands for the direct lattice vector of the crystal. Γ(γ) is expanded as a Fourier series:

$\Gamma(\gamma) = \sum_h \Gamma(K_h) e^{i K_h \cdot \gamma}$ .    (7)

Here h includes three integers $(h_1, h_2, h_3)$; the summation symbol $\sum_h$ in fact stands for $\sum_{h_1}\sum_{h_2}\sum_{h_3}$. Then

$\Gamma(\gamma + R_l) = \sum_h \Gamma(K_h) e^{i K_h \cdot \gamma} e^{i K_h \cdot R_l}$ .    (8)
Substituting formulas (7) and (8) into formula (6) gives

$e^{i K_h \cdot R_l} = 1$ ,    (9)

that is,

$K_h \cdot R_l = 2\pi\mu$ (μ is an integer) .    (10)
$R_l$ is the Direct Lattice Vector, so we call $K_h$ the Reciprocal Lattice Vector. If the basis vectors of the Reciprocal Lattice are $\beta_1, \beta_2, \beta_3$, then

$K_h = h_1\beta_1 + h_2\beta_2 + h_3\beta_3$ .    (11)

Obviously, when the primitive translation vectors of the reciprocal lattice $\beta_j\ (j = 1, 2, 3)$ and the primitive vectors of the direct lattice $\alpha_i\ (i = 1, 2, 3)$ accord with the relation

$\alpha_i \cdot \beta_j = 2\pi\delta_{ij} = \begin{cases} 2\pi & (i = j) \\ 0 & (i \neq j) \end{cases}$ ,    (12)
formula (10) is naturally satisfied. Formula (7) indicates that the expressions of the same physical quantity Γ in the Direct Lattice and in the Reciprocal Lattice comply with the Fourier Transform relations. In fact, all physical quantities Γ with the periodicity of the crystal lattice, such as the electronic potential energy and the electronic charge density, satisfy the Fourier Transform above. Formula (12) then shows that the lattice with basis vectors $\alpha_i$ and the lattice with basis vectors $\beta_j$ are mutually named Direct Lattice and Reciprocal Lattice.
If the primitive vectors of the direct lattice are $\alpha_1, \alpha_2, \alpha_3$, the formal definition of the primitive translation vectors of the reciprocal lattice is

$\beta_1 = \frac{2\pi[\alpha_2 \times \alpha_3]}{\Omega},\quad \beta_2 = \frac{2\pi[\alpha_3 \times \alpha_1]}{\Omega},\quad \beta_3 = \frac{2\pi[\alpha_1 \times \alpha_2]}{\Omega}$ .    (13)

In formula (13), Ω is the volume of the crystal-lattice primitive cell, that is $\Omega = \alpha_1 \cdot [\alpha_2 \times \alpha_3]$. Obviously, the definition of formula (13) meets the relationship of formula (12). It reflects the relationship between Direct Lattice Vector and Reciprocal Lattice Vector: if one is known, the other can be obtained. Apart from the factor 2π, the dimensions of the Direct Lattice and of the Reciprocal Lattice are mutually reciprocal: the dimension unit of the direct lattice is [meter], while that of the reciprocal lattice is [meter]⁻¹.
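Formula (13), together with the check of formula (12), can be sketched numerically in plain Python (an illustration with hypothetical names, not the LabVIEW implementation):

```python
import math

def cross(a, b):
    # Outer product of two 3-vectors, component by component.
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reciprocal_basis(a1, a2, a3):
    # Formula (13): beta_i = 2*pi * (a_j x a_k) / Omega, with
    # Omega = a1 . (a2 x a3) the primitive-cell volume.
    omega = dot(a1, cross(a2, a3))
    two_pi = 2 * math.pi
    return [[two_pi * c / omega for c in cross(a2, a3)],
            [two_pi * c / omega for c in cross(a3, a1)],
            [two_pi * c / omega for c in cross(a1, a2)]]
```

For a simple cubic lattice with unit basis vectors, each βi comes out as 2π times the corresponding unit vector, so αi·βj = 2πδij as formula (12) requires.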
4 The Application to Reciprocal Lattice Vector Computing

It is vivid and direct to use Reciprocal Lattice points to describe crystal-lattice diffraction-spot problems [3, 10]. On the support of the above principles, and considering the actual situation, [110], [1̄10] and [001] were first determined as the orientations of the direct-lattice basis vectors $\alpha_1, \alpha_2, \alpha_3$. Then the universal modules designed earlier in LabVIEW were applied to the standardization and unitization of these vectors. At the same time, the results were multiplied by the required actual sizes of the vectors, which is the necessary condition for constituting genuine direct-lattice primitive vectors owning both the actual sizes and the accurate directions. The final vectors could then be used for the calculation of the Reciprocal Lattice Vectors. Here the actual sizes were determined to correspond with the array structure of the As-As dimers in the crystal cell of the GaAs(001) As-rich (2×4) reconstructed surface, that is, the number of cycles (of √2/2 lattice constant) of the interval between two As-As dimers with the same surrounding environment in the direction of a direct-lattice basis vector [1]. The program block diagram implementing this application idea is shown in Fig. 3.

Fig. 3. The program block diagram on computing Reciprocal Lattice Vector

In this block diagram, obtaining the primitive translation vectors of the reciprocal lattice involves computing the outer product (cross multiplication) of the direct-lattice basis vectors, based on the principle below. The outer product of two vectors A, B, also named the cross product, gives a vector C: the size of C is the area of the parallelogram constituted by A and B, and the direction of C is perpendicular to the plane constructed by A and B, following the right-handed screw rule. For the rectangular coordinate system, by the definition of the vector product, the relationships between the unit vectors are $a_x \times a_x = 0$, $a_y \times a_y = 0$, $a_z \times a_z = 0$, $a_x \times a_y = a_z$, $a_y \times a_z = a_x$, $a_z \times a_x = a_y$. The vector product in the rectangular coordinate system can then be expressed as

$A \times B = (A_x a_x + A_y a_y + A_z a_z) \times (B_x a_x + B_y a_y + B_z a_z) = (A_y B_z - A_z B_y) a_x + (A_z B_x - A_x B_z) a_y + (A_x B_y - A_y B_x) a_z$ .    (14)
Its implementation process is shown in Fig.4.
Fig. 4. The program block diagram of cross multiplication
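The component expansion of equation (14) can be sketched directly, and checked against the unit-vector relations listed above:

```python
def cross(a, b):
    # Equation (14), written out component by component.
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

# The unit-vector relations the expansion is built from:
ax, ay, az = [1, 0, 0], [0, 1, 0], [0, 0, 1]
assert cross(ax, ay) == az and cross(ay, az) == ax and cross(az, ax) == ay
```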
5 Conclusions

The paper chose LabVIEW (icons instead of text lines, and a data-flow style of programming) to develop the application program, which can increase development speed 4-10 times [2, 5, 6]. Not only the expression of vector standardization and unitization but also the related complex calculations were designed and implemented with LabVIEW. This can make beginners rediscover the vector concept and the essence of its computation, which prepares them for the calculations between Direct Lattice Vector and Reciprocal Lattice Vector. In the meantime, it can help beginners understand, grasp and apply the relevant knowledge of Direct Lattice (Vector) and Reciprocal Lattice (Vector) more easily and more deeply.
Acknowledgments

Thanks to my alma mater, Guizhou University, and to my tutors, Professor Zhengping Zhang and Professor Zhao Ding, who educated me. Thanks also to my new work unit, North China Coal Medical University, and to my leaders, Professor Xiaoli Huang, Vice-professor Lichuan Song and Senior Engineer Jundong Zhu, who gave me the opportunity to learn more.
References

1. LaBella, V.P., Yang, H., Bullock, D.W., Thibado, P.M.: Atomic Structure of the GaAs (001)-(2×4) Surface Resolved Using Scanning Tunneling Microscopy and First-Principles Theory. Physical Review Letters 83(15), 2989–2992 (1999)
2. Chen, X., Zhang, Y.: LabVIEW 8.20 programming from entry to master. Tsinghua University Press, Beijing (2007) (in Chinese)
3. Fang, J., Lu, D.: Solid state physics. Shanghai Science and Technology Press, Shanghai (1982) (in Chinese)
4. Li, G.: Fourier analysis and Reciprocity between Direct Lattice and Reciprocal Lattice. College Physics 12(9), 18–20 (1993)
5. Robbins, R.: Visual Programming for Automation. Control Engineering 57(4), 49–50 (2010)
6. Nelson, R.: Graphical programming for test and measurement. Test & Measurement World 27(2), 43 (2007)
7. Marconcini, P., Macucci, M.: A novel choice of the graphene unit vectors, useful in zone-folding computations. Carbon 45(5), 1018–1024 (2007)
8. Foadi, J., Evans, G.: Elucidations on the reciprocal lattice and the Ewald sphere. European Journal of Physics 29(5), 1059–1068 (2008)
9. McPhedran, R.C.: Systematic investigation of two-dimensional static array sums. J. Math. Phys. 48(3), 33501–33525 (2007)
10. Shutaro, C., Yoshiyuki, O.: Correlation Effect on the Two-Dimensional Peierls Phase. In: AIP Conference Proceedings, vol. 850(1), pp. 1323–1324 (2006)
11. Stickelmann, D., Kroll, H., Hoffmann, W., Heinemann, R.: A system of metrically invariant relations between the moduli squares of reciprocal-lattice vectors in one-, two- and three-dimensional space. Journal of Applied Crystallography 43(2), 269–275 (2010)
Test and Implement of a Parallel Shortest Path Calculation System for Traffic Network

Lin Zhang1, Zhaosheng Yang2, Hongmei Jia1, Bin Wang1, and Guang Chen1

1 Hebei Polytechnic University, Tangshan 063009, Hebei Province, China
2 Jilin University, No. 5988 Renmin Street, Changchun, Jilin Province, China
[email protected]
Abstract. This paper describes the parallel implementation of a traffic network model. A two-queue parallel shortest path algorithm is employed using a recursive spectral bisection decomposition approach, where each processor runs the same program but acts on a different subset of the road network. The objective is to reduce the execution time of shortest path computing in dynamic traffic assignment. The model is parallelized and tested on 1, 2, 4, 8, 16 and 32 processors. The performance of the parallel model is discussed, and we conclude that the two-queue parallel shortest path algorithm and the recursive spectral bisection approach are useful in solving the shortest path problem of traffic networks.

Keywords: Intelligent transportation systems; dynamic traffic guidance; parallel shortest path; network decomposition.
1 Introduction

The dynamic route optimization technique is one of the most important techniques in traffic flow guidance systems. Because of the dynamic and time-varying characteristics of traffic flow, it is difficult to calculate, in time, the all-to-all optimal paths used to help drivers avoid traffic jams in an urban road network. Dynamic traffic assignment theory is fundamental to traffic flow guidance systems, and some researchers find that the time spent on shortest path calculation is 95% of the total time spent on the dynamic traffic assignment algorithm. So improving the speed of shortest path calculation is the key to implementing traffic flow guidance systems. Previous studies [1, 2] have shown that label-correcting algorithms have good performance for sparse traffic networks. Horst D. Simon [3] found that the recursive spectral bisection decomposition algorithm has good performance for large-scale road networks. In this paper, we use a two-queue parallel shortest path algorithm in the process of dynamic traffic assignment, and test the algorithm on recursive spectral bisection road network partitions.
2 Shortest Path

2.1 Network

Urban road networks are a set of n nodes connected by m directed arcs. We consider sparse networks; that is, the nodes are connected to a constant number of other nodes.

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 282–288, 2010. © Springer-Verlag Berlin Heidelberg 2010
Therefore, the number of arcs is O(n) instead of n² − n as is the case for dense, fully-connected networks. Each arc in the network has a corresponding non-negative cost which is the travel time for that arc; thus, the shortest path algorithm finds the minimum-time path instead of a minimum-distance path. The network used in this paper is the road network of Changchun City, which is a sparse network.

2.2 Sequential Shortest Path Algorithm
The shortest path problem is a well researched area and many approaches have been developed to solve this problem. We focus on iterative labeling algorithms since they are generally used to solve the problem for transportation applications. These algorithms solve for the shortest path from one node, the source, to every other node in the network. Each node has a label that represents the smallest known distance from the source to that node. Each node's label is iteratively updated until the end of the algorithm when it is equal to the shortest distance from the source to that node. The label updates occur by examining a single node i and its adjacent nodes' labels. If any of the labels can be reduced by including node i in their shortest path, the labels are updated. In labeling algorithms, a list is maintained which stores the nodes whose labels were just updated, since their adjacent nodes' labels may need to be updated. The algorithm terminates when this list is empty. The order that the nodes are removed from this list determines the type of algorithm that it is. If the removed node is the one with the smallest label in the list, then the algorithm is label-setting; otherwise, it is label-correcting. Label-setting algorithms perform the minimum number of node updates but at the expense of searching for the smallest node label or of maintaining a sorted data structure. Label-correcting algorithms, on the other hand, do not require that the removed node have the smallest distance label. The order in which the nodes are removed from the list and updated greatly affects the running time of the algorithm. Previous studies [1, 2] have shown that label-correcting algorithms have good performance for sparse traffic networks, in particular Pallottino's two queues algorithm [4]. 2.3
2.3 Parallel Shortest Path Algorithm
Executing the shortest path algorithms on a distributed memory machine requires that we first distribute the traffic network among the processors. The traffic network distribution method will be discussed in the next section. In this paper, each processor solves the portion of the shortest path tree on its subnetwork for each source. The outline of the algorithm is given in Table 1. Each processor repeatedly solves for shortest paths for its assigned subnetwork. We use the label-correcting two-queue algorithm which has good performance for sparse networks. Since each processor has information about its local subnetwork only, communication must occur between processors in order to compute the shortest path trees. Only distance label information about the boundary nodes needs to be
L. Zhang et al.
communicated since any path to each interior node that contains nodes on other processors must include a boundary node. Similar to the serial algorithm, termination occurs when all processors have no remaining work to do, that is, when all the processors' lists are empty.

Table 1. Parallel shortest path algorithm

Repeat
1. Compute shortest path for subnetwork
2. Send and receive changed boundary node labels to neighbors
3. Detect termination condition: all processors' lists are empty
Until termination detected
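The repeat loop of Table 1 can be sketched as a single-process simulation (a hypothetical illustration only; the real implementation would exchange boundary labels by message passing rather than through a shared dictionary):

```python
from collections import deque

def parallel_sssp_simulated(adj, source, partition):
    """Simulated version of the Table 1 loop on one machine.

    adj: node -> list of (neighbor, cost); partition: node -> processor id.
    Each round plays one synchronized iteration of
    'solve subnetwork / exchange boundary labels / test termination'.
    """
    INF = float("inf")
    dist = {v: INF for v in adj}
    dist[source] = 0.0
    nprocs = len(set(partition.values()))
    lists = [deque() for _ in range(nprocs)]   # per-processor work lists
    lists[partition[source]].append(source)

    while any(lists):                          # step 3: all lists empty => stop
        inbox = [deque() for _ in range(nprocs)]
        for p in range(nprocs):                # step 1: local labeling pass
            q = lists[p]
            while q:
                u = q.popleft()
                for v, cost in adj[u]:
                    if dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        owner = partition[v]
                        # step 2: boundary updates go to the owning processor
                        (q if owner == p else inbox[owner]).append(v)
        lists = inbox
    return dist
```

Because only labels of nodes owned by other processors cross partition boundaries, the communication volume depends on how many boundary nodes the decomposition creates, which motivates the partitioning methods of the next section.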
3 Network Decomposition

3.1 Recursive Coordinate Bisection

This is probably the easiest algorithm conceptually. It is based on the assumption that, along with the set of vertices V = {v₁, v₂, …, vₙ}, two- or three-dimensional coordinates are available for the vertices. For each vᵢ ∈ V we thus have an associated tuple
vᵢ = (xᵢ, yᵢ) or triple vᵢ = (xᵢ, yᵢ, zᵢ), depending on whether we have a two- or three-dimensional model. A simple bisection strategy for the domain is then to determine the coordinate direction of longest expansion of the domain. Without loss of generality, assume that this is the x-direction. All vertices are then sorted according to their x-coordinate. The half of the vertices with small x-coordinates is assigned to one subdomain; the other half, with large x-coordinates, is assigned to the second subdomain. The partition into four subdomains that results when recursive coordinate bisection is applied twice is shown in Figure 1.
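The strategy above can be sketched in a few lines (an illustration under the stated assumptions; vertex weights and tie-breaking are ignored):

```python
def coordinate_bisection(points, levels):
    """Recursive coordinate bisection of 2-D vertex coordinates.

    points: list of (index, (x, y)) tuples; levels: number of recursive
    splits, so the result has 2**levels parts.  Returns a list of parts,
    each a list of vertex indices.
    """
    if levels == 0:
        return [[idx for idx, _ in points]]
    # pick the coordinate direction of longest expansion
    xs = [c[0] for _, c in points]
    ys = [c[1] for _, c in points]
    axis = 0 if (max(xs) - min(xs)) >= (max(ys) - min(ys)) else 1
    # sort along that axis and split at the median
    ordered = sorted(points, key=lambda p: p[1][axis])
    mid = len(ordered) // 2
    return (coordinate_bisection(ordered[:mid], levels - 1)
            + coordinate_bisection(ordered[mid:], levels - 1))
```

Applying it with two levels of recursion yields the four-subdomain partition of the kind shown in Figure 1.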
3.2 Recursive Spectral Bisection

The recursive spectral bisection algorithm is derived from a graph bisection strategy developed by Pothen, Simon, and Liou, which is based on the computation of a specific eigenvector of the Laplacian matrix of the graph G. The Laplacian matrix L(G) = (l_ij), i, j = 1, …, n, is defined by:
l_ij = { +1 if (vᵢ, vⱼ) ∈ E;  −deg(vᵢ) if i = j;  0 otherwise } .   (1)
Test and Implement of a Parallel Shortest Path Calculation System for Traffic Network
The Laplacian matrix has a number of intriguing properties, which are only listed here. First, note that the bilinear form associated with the Laplacian matrix can be written as

xᵀ L x = − Σ_{(v,w)∈E} (x_v − x_w)² .   (2)

From this it follows that L(G) is negative semidefinite. From the definition of L it also follows that the largest eigenvalue λ₁ is zero and that the associated eigenvector is e, the vector of all ones. This is simply a consequence of the particular choice of diagonal elements in L(G). If G is connected, then λ₂, the second largest eigenvalue, is negative. The magnitude of λ₂ is a measure of the connectivity of the graph, or its expansion.
Fig. 1. Recursive coordinate bisection decomposition road network for 4 processors
What is of interest here is the eigenvector x₂ associated with λ₂. It turns out that this eigenvector gives some directional information on the graph. If the components of x₂ are associated with the corresponding vertices of the graph, they yield a weighting for the vertices. Differences in this weight give distance information about the vertices of the graph. Sorting the vertices according to this weight then provides another way of partitioning the graph. The special properties of x₂ have been investigated by Fiedler [5, 6]. His work gives most of the theoretical justification for the use of the second eigenvector of the Laplacian matrix in the partitioning algorithm. Hence this eigenvector is called the Fiedler vector for short.
Here the Lanczos algorithm [7] is used, since it does not require any manipulation of the Laplacian matrix L(G). All that is needed are matrix-vector multiplications with L(G). These can be implemented at no additional storage cost, since the Laplacian matrix directly reflects the structure of the graph [8]. The partition into four subdomains that results when recursive spectral bisection is applied twice is shown in Figure 2.
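The bisection step can be illustrated as follows — using a dense eigensolver in place of Lanczos, which is only practical for small graphs and is an assumption of this sketch:

```python
import numpy as np

def spectral_bisection(n, edges):
    """Bisect a connected graph by its Fiedler vector (sketch).

    Uses the sign convention of Eq. (1): l_ij = +1 for edges and
    l_ii = -deg(v_i), so the Fiedler vector belongs to the second
    LARGEST eigenvalue.  Returns two sets of vertex indices.
    """
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] = L[j, i] = 1.0
        L[i, i] -= 1.0
        L[j, j] -= 1.0
    vals, vecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    fiedler = vecs[:, -2]              # eigenvector of second largest eigenvalue
    order = np.argsort(fiedler)        # sort vertices by their Fiedler weight
    half = n // 2
    return set(order[:half].tolist()), set(order[half:].tolist())
```

Sorting by the Fiedler component and splitting at the median is exactly the "sorting the vertices according to this weight" step described above; applying the routine recursively to each half gives the four-way partition of Figure 2.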
Fig. 2. Recursive spectral bisection decomposition road network for 4 processors
4 Test and Conclusion

The two-queue parallel shortest path algorithm and the recursive spectral bisection algorithm were tested on three different platforms running the MPI version of the program [9, 10]. The hardware configuration of the platforms is shown in Table 2. The computing times for paths from 100 nodes to all nodes are shown in Tables 3 to 5.

Table 2. Hardware configuration of testing platforms

Platform                            CPU
1. Beowulf cluster                  Intel Pentium III 1.0 GHz (×2)
2. IBM Blade cluster                IBM PowerPC 970 2.5 GHz
3. SuGuangTianKuo server S4800A1    4 × 870 AMD Opteron
Table 3. Computing time of Beowulf cluster (unit: s)

Processors   3500 nodes, 16769 links   15606 nodes, 45878 links
1            445                       8335
2            280                       2146
4            110                       1045
8            76                        540
16           54                        442
Table 4. Computing time of IBM Blade cluster (unit: s)

Processors   3500 nodes, 16769 links   15606 nodes, 45878 links
1            64.5                      1024
2            49.2                      715
4            25.4                      358
8            15.7                      152
16           11.4                      153
32           15.6                      92
Table 5. Computing time of SuGuangTianKuo server S4800A1 (unit: s)

Processors   3500 nodes, 16769 links   15606 nodes, 45878 links
1            85.6                      1269
2            96.1                      861
4            28.18                     423
8            17.23                     182
16           11.72                     169
From Tables 3 to 5 we can see that the time spent on parallel shortest path computation declines as the processor count increases. Since the traffic information distribution cycle is usually 5 minutes, a computing time of less than 300 s is acceptable. We therefore recommend the two-queue parallel shortest path algorithm with the recursive spectral bisection decomposition approach for traffic assignment calculations with no more than 32 processors and node counts below about 15,000.
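For reference, the speedup and parallel efficiency implied by the Table 3 figures for the larger network can be computed directly (our own post-processing of the published numbers, not part of the paper):

```python
# Speedup and parallel efficiency for the 15606-node / 45878-link network
# on the Beowulf cluster (Table 3).
times = {1: 8335, 2: 2146, 4: 1045, 8: 540, 16: 442}

for p, t in times.items():
    speedup = times[1] / t          # serial time / parallel time
    efficiency = speedup / p        # speedup per processor
    print(f"{p:2d} procs: speedup {speedup:5.1f}, efficiency {efficiency:.2f}")
```

By this measure the Beowulf runs show superlinear speedup (e.g. about 18.9× on 16 processors), which the paper does not discuss; smaller per-processor working sets fitting in cache are a common explanation for such behavior.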
Acknowledgements This work was supported by the project of Natural Science Foundation of Hebei Province (F2010000976).
References

1. Chabini, I., Ganugapati, S.: Parallel algorithms for dynamic shortest path problems. International Transactions in Operational Research 9(3), 279–302 (2002)
2. Tremblay, N., Florian, M.: Temporal shortest paths: parallel computing implementations. Parallel Comput. 27, 1569–1609 (2001)
3. Simon, H.D.: Partitioning of unstructured problems for parallel processing. In: Proc. Conference on Parallel Methods on Large Scale Structural Analysis and Physics Applications. Pergamon Press, Oxford (1991)
4. Herberg, U.: Performance Evaluation of Using a Dynamic Shortest Path Algorithm in OLSRv2. In: 8th Annual Communication Networks and Services Research Conference (2010)
5. O'Cearbhaill, E.A., O'Mahony, M.: Parallel implementation of a transportation network model. Journal of Parallel and Distributed Computing 65, 1–14 (2005)
6. Fiedler, M.: Eigenvectors of acyclic matrices. Czechoslovak Math. J. 25(100), 607–618 (1975)
7. Fiedler, M.: A property of eigenvectors of nonnegative symmetric matrices and its application to graph theory. Czechoslovak Math. J. 25(100), 619–633 (1975)
8. Parlett, B.N.: The Symmetric Eigenvalue Problem. Prentice Hall, New Jersey (1980)
9. Peng, Q.: The Shortest Path Parallel Algorithm on Single Source Weighted Multi-level Graph. In: Second International Workshop on Computer Science and Engineering (2009)
10. Altın, A.: Intra-domain traffic engineering with shortest path routing protocols. 4OR: A Quarterly Journal of Operations Research (2009)
Controlling Web Services and 802.11 Mesh Networks Chen-shin Chien1 and Jason Chien2 1
Department of Industrial Education, National Taiwan Normal University, Taipei County, Taiwan
[email protected] 2 China University of Science and Technology Computing Center, China University of Science and Technology, Taipei County, Taiwan
[email protected]
Abstract. Lamport clocks and congestion control, while significant in theory, have not until recently been considered intuitive. Given the current status of decentralized technology, cyberinformaticians predictably desire the investigation of simulated annealing. We disconfirm not only that suffix trees and reinforcement learning are largely incompatible, but that the same is true for the Internet. Keywords: Web Services, mesh networks, SCSI, framework.
1 Introduction

The analysis of XML has simulated wide-area networks, and current trends suggest that the visualization of redundancy will soon emerge. However, a theoretical grand challenge in cryptography is the practical unification of B-trees and "smart" modalities [8]. Similarly, the notion that cyberinformaticians synchronize with atomic archetypes is generally well-received [12]. However, superpages [3] alone might fulfill the need for DNS. Another unfortunate issue in this area is the construction of peer-to-peer methodologies. On a similar note, the basic tenet of this approach is the development of I/O automata. We emphasize that our heuristic caches simulated annealing, without managing A* search. We omit a more thorough discussion for now. This combination of properties has not yet been studied in previous work. Motivated by these observations, cacheable modalities and atomic symmetries have been extensively evaluated by scholars. Unfortunately, this solution is rarely well-received. Our framework requests hash tables. We emphasize that FetuousResidencia cannot be simulated to synthesize IPv6. While similar algorithms study redundancy, we fix this challenge without synthesizing self-learning algorithms. Our focus in this paper is not on whether randomized algorithms and rasterization are often incompatible, but rather on motivating a compact tool for deploying access points (FetuousResidencia). We emphasize that we allow suffix trees to cache autonomous models without the evaluation of Internet QoS. By comparison, for example, many systems harness courseware. Two properties make this method perfect: FetuousResidencia is copied from the principles of programming languages,

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 289–295, 2010. © Springer-Verlag Berlin Heidelberg 2010
and also FetuousResidencia emulates multimodal archetypes. As a result, we allow replication to create stable information without the understanding of active networks. Although such a hypothesis might seem perverse, it is supported by prior work in the field. The rest of this paper is organized as follows. We motivate the need for spreadsheets. Continuing with this rationale, we place our work in context with the existing work in this area. Next, we validate the construction of Lamport clocks [2]. Finally, we conclude.
2 Methodology

We postulate that each component of FetuousResidencia observes compact epistemologies, independent of all other components. Further, we consider a heuristic consisting of n hash tables. Despite the fact that it might seem perverse, it is buffeted by prior work in the field. Any theoretical synthesis of the study of active networks will clearly require that the acclaimed ambimorphic algorithm for the confirmed unification of the Internet and SCSI disks by S. Anderson [11] follows a Zipf-like distribution; FetuousResidencia is no different. The question is, will FetuousResidencia satisfy all of these assumptions? No [13, 5].
Fig. 1. The relationship between our framework and 128 bit architectures
Reality aside, we would like to explore a model for how our framework might behave in theory. This is a robust property of our heuristic. Further, we show the relationship between our heuristic and the investigation of scatter/gather I/O in Figure 1. This is an important property of our methodology. The model for our algorithm consists of four independent components: modular modalities, write-ahead logging, stable methodologies, and fiber-optic cables. FetuousResidencia does not require such a compelling improvement to run correctly, but it doesn't hurt. We use our previously enabled results as a basis for all of these assumptions.
3 Implementation After several months of arduous designing, we finally have a working implementation of FetuousResidencia. Further, while we have not yet optimized for performance, this should be simple once we finish implementing the centralized logging facility. It was necessary to cap the distance used by our method to 7695 percentile. Next, futurists have complete control over the client-side library, which of course is necessary so that the much-touted certifiable algorithm for the improvement of Moore's Law by F. Ramanathan et al. [1] is NP-complete. Furthermore, despite the fact that we have not yet optimized for performance, this should be simple once we finish hacking the virtual machine monitor. We plan to release all of this code under X11 license.
4 Results and Analysis

We now discuss our evaluation strategy. Our overall performance analysis seeks to prove three hypotheses: (1) that rasterization no longer influences system design; (2) that DNS has actually shown amplified expected signal-to-noise ratio over time; and finally (3) that tape drive speed behaves fundamentally differently on our omniscient cluster. Only with the benefit of our system's RAM throughput might we optimize for complexity at the cost of usability. Our evaluation will show that reprogramming the average clock speed of our distributed system is crucial to our results.

4.1 Hardware and Software Configuration

A well-tuned network setup holds the key to a useful evaluation strategy. We executed a deployment on our introspective cluster to disprove the independently flexible nature of opportunistically certifiable algorithms. It is never an appropriate purpose but fell in line with our expectations. Primarily, we quadrupled the floppy
Fig. 2. The mean bandwidth of our system, compared with the other algorithms
disk speed of our network to better understand DARPA's system. We removed a 10-petabyte tape drive from our Internet-2 cluster. Configurations without this modification showed degraded average throughput. Continuing with this rationale, we added more NV-RAM to our introspective cluster to investigate the RAM speed of our Internet testbed. Finally, Russian information theorists added 3 FPUs to our decommissioned IBM PC Juniors to better understand the flash-memory speed of CERN's permutable testbed.
Fig. 3. The 10th-percentile signal-to-noise ratio of our heuristic, compared with the other frameworks
Fig. 4. Note that hit ratio grows as hit ratio decreases - a phenomenon worth investigating in its own right
FetuousResidencia runs on modified standard software. We added support for FetuousResidencia as a kernel patch. Our experiments soon proved that instrumenting our DoS-ed Atari 2600s was more effective than exokernelizing them, as previous work suggested. We implemented the World Wide Web server in ANSI Java, augmented with extremely fuzzy extensions. All of these techniques are of interesting historical significance; Noam Chomsky and X. Garcia investigated an entirely different heuristic in 1953.
4.2 Dogfooding Our Framework

Our hardware and software modifications show that rolling out our algorithm is one thing, but deploying it in a chaotic spatio-temporal environment is a completely different story. That being said, we ran four novel experiments: (1) we asked (and answered) what would happen if computationally stochastic superpages were used instead of access points; (2) we ran suffix trees on 79 nodes spread throughout the 100-node network, and compared them against hierarchical databases running locally; (3) we deployed 13 Macintosh SEs across the Internet network, and tested our spreadsheets accordingly; and (4) we asked (and answered) what would happen if independently randomly pipelined agents were used instead of digital-to-analog converters. All of these experiments completed without paging or unusual heat dissipation.
Fig. 5. The effective sampling rate of FetuousResidencia, compared with the other Frame works
We first shed light on the first two experiments as shown in Figure 2. This is an important point to understand. Note how deploying symmetric encryption rather than simulating it in software produces smoother, more reproducible results. Furthermore, we scarcely anticipated how wildly inaccurate our results were in this phase of the performance analysis. The many discontinuities in the graphs point to degraded median work factor introduced with our hardware upgrades. We next turn to experiments (1) and (3) enumerated above, shown in Figure 2. Note that Figure 2 shows the effective and not expected exhaustive effective energy. Note that Figure 2 shows the average and not average independent tape drive throughput. Our goal here is to set the record straight. Along these same lines, error bars have been elided, since most of our data points fell outside of 16 standard deviations from observed means. Lastly, we discuss experiments (1) and (4) enumerated above. Note how deploying symmetric encryption rather than deploying it in a controlled environment produces less jagged, more reproducible results. Note how rolling out agents rather than deploying them in the wild produces more jagged, more reproducible results. Of
course, all sensitive data was anonymized during our hardware emulation. It is continuously a natural purpose but is supported by related work in the field.
5 Related Work

Our solution is related to research into the study of the Turing machine, IPv4, and "fuzzy" theory. Hector Garcia-Molina et al. [7] developed a similar heuristic; unfortunately we verified that FetuousResidencia is recursively enumerable. Thus, comparisons to this work are ill-conceived. Clearly, despite substantial work in this area, our solution is apparently the method of choice among hackers worldwide. Several replicated and real-time applications have been proposed in the literature [9]. A litany of related work supports our use of DNS [14]. However, the complexity of their method grows logarithmically as semantic configurations grow. Next, instead of controlling classical configurations [10], we fulfill this objective simply by emulating Boolean logic. All of these solutions conflict with our assumption that SCSI disks and secure epistemologies are theoretical [6].
6 Conclusion

We argued in our research that online algorithms and virtual machines are always incompatible, and FetuousResidencia is no exception to that rule [4]. Our algorithm has set a precedent for decentralized configurations, and we expect that cryptographers will refine FetuousResidencia for years to come. Next, FetuousResidencia has set a precedent for public-private key pairs, and we expect that futurists will construct our algorithm for years to come. Further, our methodology is able to successfully enable many Byzantine fault tolerance schemes at once. As a result, our vision for the future of cyberinformatics certainly includes our methodology.
References

1. Chomsky, N., Papadimitriou, C., Hennessy, J., Bhabha, X.: CADGER: A methodology for the deployment of von Neumann machines. In: Proceedings of the Workshop on Signed, Amphibious Technology (October 1999)
2. Chomsky, N., Papadimitriou, C., Hennessy, J., Bhabha, X.: CADGER: A methodology for the deployment of von Neumann machines. In: Proceedings of the Workshop on Signed, Amphibious Technology (October 1999)
3. Jones, P., Einstein, A., Wu, X., Lakshminarayanan, Z.: A case for Byzantine fault tolerance. Journal of Distributed, Distributed Epistemologies 8, 71–90 (1993)
4. Kumar, Q., Ramamurthy, F., Ito, Y., Jayaraman, W., Kobayashi, X., Agarwal, R., Cook, S., Johnson, D., Needham, R.: Journaling file systems considered harmful. In: Proceedings of the Symposium on Interposable, Game-Theoretic Information (October 1995)
5. Leiserson, C., Ashwin, W.: The influence of unstable algorithms on software engineering. Journal of Ubiquitous, Real-Time Information 50, 1–15 (2004)
6. Nehru, A.: Visualization of Moore's Law. Journal of Real-Time, Encrypted Modalities 5, 79–81 (2001)
7. Newell, A.: An exploration of multi-processors. IEEE JSAC 618, 75–84 (2005)
8. Quinlan, J.: Towards the deployment of public-private key pairs. In: Proceedings of the Symposium on Embedded, Perfect, Interactive Models (July 1970)
9. Taylor, L., Codd, E.: Decoupling IPv4 from Byzantine fault tolerance in DHCP. In: Proceedings of Micro (September 2001)
10. Vivek, I., Turing, A., Levy, H., Subramanian, L., Sun, G., Darwin, C., Dahl, O., Wirth, N.: A methodology for the study of DNS. IEEE JSAC 2, 49–51 (2002)
11. Williams, G., Levy, H., Wirth, N., Ito, B.: CONTEX: A methodology for the improvement of red-black trees. Journal of Metamorphic Theory 4, 41–56 (2002)
12. Yeo, C.S.: Cloot: Virtual models. Journal of Compact, Heterogeneous Configurations 44, 158–196 (2002)
13. Zhao, I.T.: An investigation of massive multiplayer online role-playing games with CARREL. Journal of Modular, Empathic Archetypes 79, 54–67 (2000)
Numeric Simulation for the Seabed Deformation in the Process of Gas Hydrate Dissociated by Depressurization Zhenwei Zhao1,3 and Xinchun Shang2 1
Department of Civil Engineering, University of Science and Technology Beijing, Beijing, China 2 Department of Mathematics and Mechanics, University of Science and Technology Beijing, Beijing, China 3 Institute of Mechanics, Chinese Academy of Sciences, Beijing, China
[email protected],
[email protected]
Abstract. When gas hydrates dissociate, the mechanical properties of the sediments change, which may cause deformation of the seabed. Such deformation directly affects the stability of undersea equipment. Considering seepage-stress coupling, this paper analyzes the deformation of the seabed using the finite element method. Code for Duncan-Chang's E-B constitutive model was written, in which the impact of stress on the elastic constants is considered; it was implemented in ABAQUS and used to simulate the nonlinear deformation of the sediments. The results show that the vertical effective stress of the soil around the well increases significantly when gas hydrates are exploited by depressurization. The deformation of the seabed increases nonlinearly with the decomposition radius of the hydrate. The maximum settlement reaches 9 m and the maximum horizontal displacement reaches 4 m. The results provide guidance for submarine construction in the process of gas production from hydrates. Keywords: Gas hydrates; Redevelopment of ABAQUS; Seabed deformation.
1 Introduction

Gas hydrates are solid crystalline compounds in which gas molecules are lodged within the lattices of ice crystals under conditions of low temperature and high pressure [1]. Vast amounts of CH4 are trapped in naturally occurring hydrate accumulations in the permafrost and in deep ocean sediments [2]. One volume of natural gas hydrates contains about 164 volumes of CH4 [3]. The amount of hydrocarbons residing in hydrate deposits is estimated to substantially exceed all known conventional oil and gas resources. The reduction of pore pressure and the decrease of soil strength due to the loss of hydrates will cause deformation of the seabed; this will affect the stability of submarine facilities. Jeonghwan Lee et al. presented an experimental study on gas production from hydrate by depressurization; it was shown that the degree of depressurization has

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 296–303, 2010. © Springer-Verlag Berlin Heidelberg 2010
significant influence on the gas production rate, but the stress field was not studied [4]. S. Y. Wang et al. analyzed the effect of gas hydrate decomposition on pipeline stability and soil deformation using the finite element method, but the change of pressure was neglected in their research [5]. M. Y. A. Ng et al. studied the deformation of soil around the wellbore; an elastic-plastic model was used in their research [6]. Sayuri Kimoto et al. analyzed the deformation of ground induced by hydrate dissociation; an elastic-viscoplastic model was used to simulate the deformation of the soil [7]. In this study, the FORTRAN language was used for the secondary development of the ABAQUS finite element software. Code for Duncan-Chang's E-B constitutive model was written and, combined with Biot's consolidation theory, used to simulate the nonlinear deformation of the seabed.
2 Calculation Principle

2.1 Biot's Consolidation Theory

Biot's consolidation equations are derived from static equilibrium equations and continuous flow equations; the theory is rigorous and, combined with a constitutive model of the soil, can be used to solve for the deformation of the soil and the pore pressure. It is based on the small-deformation assumption and a linear elastic constitutive model; the seepage obeys Darcy's law, and soil particles and water cannot be compressed. The soil skeleton satisfies the following basic equilibrium equations [8]:
∂σx/∂x + ∂τyx/∂y + ∂τzx/∂z + ∂P/∂x = 0 ,   (1)

∂τxy/∂x + ∂σy/∂y + ∂τzy/∂z + ∂P/∂y = 0 ,   (2)

∂τxz/∂x + ∂τyz/∂y + ∂σz/∂z + ∂P/∂z + ρg = 0 .   (3)
where ρ is the saturated density of the soil, σ and τ are the effective stresses, and P is the pore water pressure. It is assumed that the seepage obeys Darcy's law; based on the compatibility equations of the soil and the continuity principle of water, the relationship between displacement and pore pressure can be derived as follows:

∂/∂t (∂u/∂x + ∂v/∂y + ∂w/∂z) − k/(ρw g) (∂²P/∂x² + ∂²P/∂y² + ∂²P/∂z²) = 0 ,   (4)

where k is the permeability coefficient of the soil, u, v, w are the displacements of the soil, and ρw is the density of water. The stresses can be expressed in terms of the displacements using the constitutive equation and the geometric equation; thus the displacements u, v, w and the pore water pressure P can be solved from (1)-(4).
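As a simplified illustration of the pressure diffusion embedded in Eq. (4): if the volumetric coupling term is frozen, the one-dimensional special case reduces to a Terzaghi-type consolidation equation ∂P/∂t = cv ∂²P/∂z², which can be integrated with an explicit finite-difference scheme. This sketch is our own illustration, not the paper's ABAQUS implementation, and cv lumps together the permeability and stiffness terms:

```python
def terzaghi_1d(p0, cv, dz, dt, steps):
    """Explicit finite differences for dP/dt = cv * d2P/dz2 with drained
    (P = 0) boundaries at both ends.  Stable for cv*dt/dz**2 <= 0.5.
    """
    p = list(p0)
    r = cv * dt / dz ** 2
    assert r <= 0.5, "explicit scheme unstable"
    for _ in range(steps):
        new = p[:]
        for i in range(1, len(p) - 1):
            # standard three-point diffusion stencil
            new[i] = p[i] + r * (p[i - 1] - 2 * p[i] + p[i + 1])
        new[0] = new[-1] = 0.0        # drained boundaries
        p = new
    return p
```

With drained boundaries the excess pore pressure decays toward zero over time, which is the dissipation mechanism that drives the consolidation settlement studied below.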
2.2 Nonlinear Constitutive Model of Soil
The nonlinear deformation of the soil can be analyzed by the finite element method by updating the elastic constants in each incremental step. The Duncan-Chang model is a nonlinear elastic model based on the relationship between stress increment and strain increment: the elastic constants change between incremental steps, but within each step the relationship between stress and strain satisfies Hooke's law. Because the Duncan-Chang model simulates the nonlinear deformation of soil accurately, it is widely used in engineering. The elastic constants in Duncan-Chang's E-B model are calculated as follows [8]:
Et = Ka pa (σ3/pa)^n [1 − Rf (σ1 − σ3)(1 − sin φ) / (2c cos φ + 2σ3 sin φ)]² ,   (5)

B = Kb pa (σ3/pa)^m ,   (6)

where Ka, n, Rf, c, φ, Kb and m are material constants.
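Equations (5)-(6) can be evaluated directly. The sketch below is our own illustration: atmospheric pressure pa is assumed to be 101.325 kPa, and the bracketed stress-level term is clamped at failure, both assumptions not stated in the paper.

```python
import math

def duncan_chang_moduli(sigma1, sigma3, params, pa=101.325):
    """Tangent modulus Et and bulk modulus B from Eqs. (5)-(6).

    params: dict with Ka, n, Rf, c (kPa), phi (degrees), Kb, m;
    stresses sigma1, sigma3 in kPa; pa is atmospheric pressure in kPa.
    """
    phi = math.radians(params["phi"])
    # stress level: mobilized fraction of the failure deviatoric stress
    stress_level = (params["Rf"] * (sigma1 - sigma3) * (1.0 - math.sin(phi))
                    / (2.0 * params["c"] * math.cos(phi)
                       + 2.0 * sigma3 * math.sin(phi)))
    # clamp at failure so Et does not grow again past stress_level = 1
    Et = (params["Ka"] * pa * (sigma3 / pa) ** params["n"]
          * max(0.0, 1.0 - stress_level) ** 2)
    B = params["Kb"] * pa * (sigma3 / pa) ** params["m"]
    return Et, B
```

Evaluated with the 0-50 m row of Table 1, Et softens as the deviatoric stress σ1 − σ3 approaches failure while B depends only on the confining stress σ3, which is the behavior the incremental update scheme relies on.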
3 Finite Element Analysis

3.1 Calculation Model
Gas hydrates exist in the pores of clay at 191-225 m below the seabed in the South China Sea; the water depth is 1230 m. It is assumed that there is a horizontal well in the center of the hydrate layer and that the gas hydrates are decomposed by depressurization. The problem can be reduced to a plane strain problem. The finite element model is created for a longitudinal section (see Fig. 1); 3000 m is taken in the horizontal direction, and rc is the decomposition radius.
Fig. 1. The sketch for the finite element model
Fig. 2. The mesh in local region around the well
3.2 Computation Parameters
The mechanical parameters of the sediment change significantly after the gas hydrates dissociate, so the parameters need to be adjusted during the calculation. A Duncan-Chang E-B subroutine is used here to define the mechanical constitutive behavior of the soil. The change of the parameters due to the loss of hydrates is achieved with a User Defined Field subroutine, which can define field variables and be used in conjunction with the Duncan-Chang E-B model subroutine. The value of the pore water pressure is extracted by the User Defined Field subroutine, the parameters are defined as functions of the pore pressure, and they are updated according to the pore water pressure in each increment. At a given temperature, hydrates dissociate when the pressure decreases below the equilibrium pressure Pe, and the sediment parameters change at that point. The equilibrium pressure Pe can be calculated from the following phase equilibrium equation [9]:

log10 Pe = aT + bT² + c ,   (7)

where T is the temperature of the sediment, a = 0.0342 K⁻¹, b = 0.0005 K⁻¹, c = 6.4804. The parameters are obtained from the experimental data in [10] and [11]; the Duncan-Chang model parameters are shown in Table 1.

Table 1. Parameters in Duncan-Chang's E-B model

Depth (m)   0-50    50-191   191-225 (P > Pe)   191-225 (P < Pe)   225-500
Ka          29.7    108      223                40.1               178
n           0.35    0.35     0.35               0.35               0.35
Rf          0.78    0.78     0.78               0.78               0.78
c (kPa)     14      44       59                 25.6               54
φ (°)       10      20       22                 12.1               21
Kb          24.8    90.75    185.6              33.4               148.8
m           0.45    0.45     0.45               0.45               0.45
kur         1.2     1.2      1.2                1.2                1.2
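Equation (7) can be evaluated directly. The units are not fully clear from the extracted text; in this sketch, which is our own assumption, T is taken in °C and Pe comes out in Pa, which yields magnitudes plausible for deep-sea hydrate stability (several MPa near 10 °C):

```python
def hydrate_equilibrium_pressure(T, a=0.0342, b=0.0005, c=6.4804):
    """Phase-equilibrium pressure from Eq. (7): log10(Pe) = a*T + b*T**2 + c.

    Assumed units (not stated unambiguously in the source): T in deg C,
    Pe in Pa.  Hydrate dissociates wherever pore pressure drops below Pe.
    """
    return 10.0 ** (a * T + b * T * T + c)
```

In the simulation this threshold is what the User Defined Field subroutine checks: grid points whose pore pressure falls below Pe switch from the hydrate-bearing parameter set of Table 1 (P > Pe) to the dissociated set (P < Pe).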
4 The Results and Analysis

When the gas hydrates are decomposed by depressurization, the pore water pressure field stabilizes after a period of time; the hydrate dissociation front no longer moves forward, and the decomposition radius finally becomes 55 m. Fig. 3 shows the distribution of pore water pressure when rc reaches 55 m. Pore water pressure decreases noticeably only in the area immediately around the well; this reduction of pressure results in a redistribution of the stress in the soil.

Fig. 3. The distribution of pore pressure
The distribution of vertical effective stress is shown in Fig. 4. The vertical effective stress of the soil increases significantly just around the well, mainly because of the depressurization, and the force acting on the pipe increases accordingly. The vertical effective stress of the soil near the surface of the seabed does not change noticeably.

Fig. 4. The distribution of vertical effective stress
The reduction of the sediment elastic constants due to the loss of hydrates, together with the increase of the effective stress of the soil, results in deformation of the soil. Fig. 5 shows the settlement of the soil when rc reaches 55 m. The largest settlement occurs at the seabed surface directly above the well and reaches 9 m. The settlement decreases
gradually toward the periphery, which indicates that there is no significant dislocation. Fig. 6 shows the horizontal displacement of the soil when rc reaches 55 m; the largest horizontal displacement occurs at the seabed surface 280 m from the center and reaches 4 m.

Fig. 5. The distribution of vertical displacement

Fig. 6. The distribution of horizontal displacement
The deformation of the seabed surface has a direct effect on the stability of submarine facilities. The distribution of the displacement of the seabed surface and its variation with rc are discussed below. Fig. 7 shows the settlement of the seabed surface when rc reaches 10 m, 20 m, 30 m, 40 m and 50 m respectively. The settlement is larger in the area where the horizontal coordinate is less than 200 m. It can be seen from Fig. 7 that the decomposition radius of the gas hydrate has a significant impact on the settlement of the seabed surface. When rc is less than 20 m, the largest settlement is about 0.8 m. The largest settlement increases nonlinearly with rc; in particular, when rc increases from 40 m to 55 m, the largest settlement changes from 5 m to 10 m.

Fig. 7. The settlement of seabed surface corresponding to different decomposition radii
302
Z. Zhao and X. Shang
Fig. 8 shows the horizontal displacement of the seabed surface when rc reaches 10 m, 20 m, 30 m, 40 m and 50 m, respectively. The location of the maximum horizontal displacement moves outward as rc increases: when rc increases from 20 m to 55 m, its horizontal coordinate changes from 180 m to 250 m. The maximum horizontal displacement is also affected significantly by the decomposition radius of the gas hydrates; it increases with rc and reaches 4 m when rc is 55 m.
Fig. 8. The horizontal displacement of seabed surface corresponding to different decomposition radius
5 Conclusions

From the results of the coupled analysis of the seepage and stress fields in the process of gas production from hydrates, the following conclusions are obtained: (1) If gas hydrates are exploited by drilling a horizontal well, the maximum settlement and the maximum horizontal displacement occur at the seabed surface, reaching 9 m and 4 m respectively. Sea-floor facilities should be distributed symmetrically around the well in order to avoid inclination, and should not be constructed at the location where the maximum horizontal displacement occurs. (2) The settlement and horizontal displacement of the seabed surface increase nonlinearly with the decomposition radius. In order to reduce the hazard caused by deformation of the soil, the decomposition range of the gas hydrate should be controlled appropriately. (3) The vertical effective stress of the soil increases significantly around the well due to the reduction of pore pressure, which has a potential impact on the stability of submarine pipelines. The results provide a reference for the design of submarine pipelines.
References 1. Kowalsky, M.B., Moridis, G.J.: Comparison of Kinetic and Equilibrium Reaction Models in Simulating Gas Hydrate Behavior in Porous Media. Energy Conversion and Management 48, 1850–1863 (2007)
2. Alp, D., Parlaktuna, M., Moridis, G.J.: Gas Production by Depressurization from Hypothetical Class 1G and Class 1W Hydrate Reservoirs. Energy Conversion and Management 48, 1864–1879 (2007) 3. Liu, Y., Strumendo, M., Arastoopour, H.: Simulation of Methane Production from Hydrates by Depressurization and Thermal Stimulation. Ind. Eng. Chem. Res. 48, 2451–2464 (2009) 4. Lee, J., Park, S., Sung, W.: An Experimental Study on the Productivity of Dissociated Gas from Gas hydrate by Depressurization scheme. Energy Conversion and Management (2010) 5. Shuyun, W., Li, W., Xiaobing, L., Qingping, L.: Numerical Analysis of the Effects of Gas Hydrate Dissociation on the Stability of Deposits and Pipes. China Offshore Oil and Gas 20, 127–131 (2008) 6. Ng, M.Y.A., Klar, A.: Coupled Soil Deformation-Flow-Thermal Analysis of Methane Production in Layered Methane Hydrate Soils. In: Offshore Technology Conference, Houston, pp. 1258–1270 (2008) 7. Kimoto, S., Oka, F., Fushita, T.: A Chemo-thermo-mechanically Coupled Analysis of Ground Deformation Induced by Gas Hydrate Dissociation. International Journal of Mechanical Science 52, 365–376 (2010) 8. Guangxin, L.: Advanced Soil Mechanics. Tsing-hua University press, Beijing (2004) 9. Ahmadi, G., Ji, C., Smith, D.H.: Production of Natural Gas from Methane Hydrate by a Constant Downhole Pressure Well. Energy Conversion and Management 48, 2053–2068 (2007) 10. Sanyuan, S., Qun, L., Deqian, L.: Study on Parameters of Duncan-Chang Model for Handan Silt Clay. Journal of Hebei Institute of Architectural Science and Technology 23, 1–3 (2006) 11. Masui, A., Haneda, H., Ogata, Y., Aoki, K.: Effects of Methane Hydrate Formation on Shear Strength of Synthetic Methane Hydrate Sediments. In: Proceedings of the Fifteenth International Offshore and Polar Engineering Conference, Seoul, pp. 364–369 (2005)
Control for Mechatronic Systems
Yanjuan Zhang, Chenxia Zhao, Jinying Zhang, and Huijuan Zhao
College of Light Industry, Hebei Polytechnic University, Tangshan, 063000, P.R. China
{zhangyanjuan1981,xinqing2228}@163.com
Abstract. In this paper, we take the DC motor, common in daily life, as a system. Firstly, we present the mathematical model and discuss the stability and controllability of the system using MATLAB. Then, we discuss the steady-state error of the system, from which the results illustrate that feedback can reduce the impact of interference on the output. At last, we show that the closed-loop feedback system has noise-suppression characteristics. Keywords: Feedback system modeling; DC motor; steady-state error; MATLAB software.
1 Introduction

As we know, the DC motor can deliver power to a load actuator. The DC motor has a wide range of favorable characteristics, for instance, large torque, a wide speed control range, a good speed-torque characteristic, portability and so on. Correspondingly, it has been widely used in robot control systems, conveyor systems, disk drives, machine tools and other practical systems. Owing to all the above, in the first part we present an analysis of the system and give the mathematical model under some assumptions. Then, we carry out graphical simulation of the model using MATLAB. Finally, we show that feedback can reduce the impact of interference on the output and that the closed-loop feedback system provides noise rejection, by discussing the steady-state error of the system [1, 2, 3, 4, 5].
2 Question and System Modeling

2.1 Questions and the Conditions of Assumptions

For the system model in Figure 1, we will analyze the stability and controllability of the system, and discuss how the feedback control system suppresses interference. We assume the conditions as follows: R: the equivalent resistance of the rotor winding; L: the equivalent inductance of the rotor winding; U: external voltage; ω: motor speed; J: load moment of inertia; B: mechanical damping constant of the entire rotating system; ub: back EMF; ia: armature current; If: the constant current which generates the magnetic induction. Kc is the constant decided by the permanent-magnet flux density, the number of rotor windings
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 304–310, 2010. © Springer-Verlag Berlin Heidelberg 2010
Fig. 1. The DC motor
and the physical nature of the core. Kt is the torque constant, likewise determined by the permanent-magnet flux density, the number of rotor windings and the physical nature of the core. Next, the state variables ia, ω(t) are used to derive a system of differential equations.

2.2 System Analysis and Modeling

According to the voltage balance relation of the circuit loop, we obtain
U − uR − uL − ub = 0 .  (1)

Because uR = ia·R, uL = L·dia/dt and ub = Kc·ω, substituting these into equation (1), we get

U − ia·R − L·dia/dt − Kc·ω = 0 .  (2)
Then, according to the moment balance of the motor, we have

Te − Tω′ − Tω − TL = 0 ,  (3)

where Te is the motor electromagnetic torque, Tω′ is the driving torque required by the acceleration of the rotating parts, Tω is the torque associated with speed, and TL is the load torque. The electromagnetic torque of the motor is proportional to the armature current: Te = Kt·ia, where Kt is the torque constant determined by the flux density of the permanent magnet, the number of rotor windings and the physical nature of the core. The acceleration torque can be written as Tω′ = J·dω/dt, in which J is the moment of inertia of the rotor and the motor load. The torque associated with speed is Tω = B·ω, where B is the damping constant of the whole rotating system. Substituting the above relations into equation (3), we have

Kt·ia − J·dω/dt − B·ω − TL = 0 .  (4)
306
Y. Zhang et al.
Combining (2) and (4), we get a complete description of the DC motor:

dia/dt = −(Ra/La)·ia − (Kc/La)·ω + U/La ,  (5)

dω/dt = (Kt/J)·ia − (B/J)·ω − TL/J .  (6)

Written in state-space form:

d/dt [ia; ω] = [ −Ra/La  −Kc/La ; Kt/J  −B/J ] [ia; ω] + [ 1/La  0 ; 0  −1/J ] [U; TL] ,  (7)

[y1; y2] = [ 1  0 ; 0  1 ] [ia; ω] + [ 0  0 ; 0  0 ] [U; TL] .  (8)
3 DC Motor Systems

3.1 System Stability

Let the DC motor parameters be as follows: Ra = 1, La = 0.005, Kc = 0.1, Kt = 0.1, J = 0.004, B = 0.8.
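With these parameter values, the entries of the state-space matrices in (7) follow directly. As an illustrative cross-check (a Python sketch, not part of the original MATLAB workflow), we can compute them:

```python
# DC motor parameters given in the text
Ra, La = 1.0, 0.005
Kc, Kt = 0.1, 0.1
J, B = 0.004, 0.8

# State and input matrices of equation (7)
A = [[-Ra / La, -Kc / La],
     [Kt / J, -B / J]]
Bmat = [[1.0 / La, 0.0],
        [0.0, -1.0 / J]]

# These match the numeric matrices used in equations (9)-(10) below
expected_A = [[-200.0, -20.0], [25.0, -200.0]]
expected_B = [[200.0, 0.0], [0.0, -250.0]]
for row, erow in zip(A + Bmat, expected_A + expected_B):
    for v, e in zip(row, erow):
        assert abs(v - e) < 1e-9
```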
Substituting the above into the state equations (7) and (8), we have

d/dt [ia; ωa] = [ −200  −20 ; 25  −200 ] [ia; ωa] + [ 200  0 ; 0  −250 ] [U; TL] ,  (9)

and, for the zero-input response,

d/dt [ia; ωa] = [ −200  −20 ; 25  −200 ] [ia; ωa] .  (10)
Then we can obtain the latent roots λ1 = -0.4 and λ2 = -399.6. According to the necessary and sufficient condition of Lyapunov stability, the system is asymptotically stable. For a given initial state ia = 0.5, ω = 0.15, we analyze the state curves using MATLAB. The m-file (for the current) is:

A = [-200 -20; 25 -200];
B = [200 0; 0 -250];
C = [1 0; 0 1];
D = 0;
x0 = [0.5; 0.15];
t = 0:0.001:0.05;
[Y, x, t] = initial(A, B, C, D, x0, t);
plot(t, x(:, 1)), grid
title('Current with time')
xlabel('Time (seconds)')
ylabel('Current (amps)')
Running this m-file gives the current response curve shown in Fig. 2. Similarly, we can obtain the angular velocity response curve in Fig. 3, the unit step response curve of the current in Fig. 4, and the unit step response curve of the angular velocity in Fig. 5.
Fig. 2. Response to the initial state (current)
Fig. 3. Response to the initial state (angular velocity)
Fig. 4. Relationship of current vs time
Fig. 5. Relationship of angular velocity vs time
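The stability claim and the initial-state responses in Figs. 2 and 3 can also be cross-checked outside MATLAB. The following Python sketch (an illustration, not part of the original analysis) computes the eigenvalues of the system matrix and the zero-input response x(t) = e^(At)·x0:

```python
import numpy as np
from scipy.linalg import expm

# System matrix and initial state from the stability analysis above
A = np.array([[-200.0, -20.0],
              [25.0, -200.0]])
x0 = np.array([0.5, 0.15])

# Asymptotic stability: every eigenvalue lies strictly in the left half-plane
assert all(ev.real < 0 for ev in np.linalg.eigvals(A))

# Zero-input response on the same time grid as the m-file
t = np.arange(0.0, 0.05, 0.001)
x = np.array([expm(A * tk) @ x0 for tk in t])

# The state decays rapidly toward the origin, as Figs. 2 and 3 show
assert np.linalg.norm(x[-1]) < 1e-2 * np.linalg.norm(x0)
```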
3.2 Identification of the Controllability
From the previous part, we have

A = [ −200  −20 ; 25  −200 ] , B = [ 200  0 ; 0  −250 ] ,  (11)

rank(Γc[A, B]) = rank([B, AB]) = 2 .  (12)
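This rank condition can be verified numerically. A minimal sketch (Python, offered as an illustration alongside the paper's MATLAB workflow) forms the controllability matrix [B, AB] and checks that it has full rank:

```python
import numpy as np

A = np.array([[-200.0, -20.0],
              [25.0, -200.0]])
B = np.array([[200.0, 0.0],
              [0.0, -250.0]])

# Controllability matrix of a second-order system: [B, AB]
ctrb = np.hstack([B, A @ B])

# Full rank (equal to the number of states) means the system is controllable
assert np.linalg.matrix_rank(ctrb) == 2
```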
We know the system is controllable.

3.3 Study of the Impact of Interference Signals on the System
The open-loop block diagram of the armature-controlled DC motor is shown in Fig. 6; it includes the load torque disturbance signal Td(s). All the necessary parameters are: Ra = 1, Km = 15, J = 0.004, b = 0.8, Kb = 0.2, Ka = 100, Kt = 2. The system has two inputs, Va(s) and Td(s); according to the principle of superposition, we can consider the two inputs of the linear system separately. That is, when studying the impact of interference on the system, we set Va(s) = 0 and consider only the disturbance Td(s).
Fig. 6. Open-loop speed control system
The block diagram of the closed-loop speed control system is shown in Fig. 7. If the system suppresses interference well, then the output ω(s) due to the disturbance signal should be very small. First, consider the open-loop system in Fig. 6.
Fig. 7. Closed-loop speed control system
Using MATLAB, we compute the transfer function from Td(s) to ω(s) and the system output corresponding to a unit step disturbance. The operating results give the open-loop disturbance transfer function

ω(s)/Td(s) = num/den = −1/(0.004s + 3.8) ,

with ans = −0.2624.
Since the expected output ω(t) is zero, the final value of ω(t) corresponding to the disturbance is the system's steady-state error. We denote it by ωo(t), where the subscript "o" indicates the open-loop system. From the disturbance response curve in Fig. 9, the steady-state error is approximately the value of the speed at t = 5 s. The approximate steady-state error is obtained in the program by calculating the corresponding output yo, and the curve in Fig. 9 is drawn from these calculations. The approximate steady-state value of ωo is ωo(∞) ≈ ωo(5) = −0.2624 rad/s; the response curve in Fig. 9 demonstrates this visually. Similarly, for the closed-loop system we can calculate the closed-loop transfer function from Td(s) to ω(s) and generate the corresponding system output ω(t) for a unit step disturbance. The closed-loop disturbance transfer function is

ω(s)/Td(s) = num/den = −1/(0.004s + 3003.8) .
Similarly, the steady-state error corresponding to the disturbance is the final value of ωc(t), where the subscript "c" denotes the closed-loop system. The steady-state error is shown in Figure 10, and its approximate value can be calculated from the program output response yc. The approximate steady-state value of ωc is ωc(∞) ≈ −3.3193×10-4 rad/s.
In general, we hope that ωc(∞)/ωo(∞) < 0.02. In this system, the ratio of the steady-state values of the output responses of the closed-loop and open-loop systems to a unit step disturbance signal is ωc(∞)/ωo(∞) = 0.00127. We can see that the introduction of negative feedback has significantly reduced the impact of interference on the output.
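These steady-state values follow from the final-value theorem: for a stable transfer function G(s) and a unit step input, the output settles at the DC gain G(0). A short Python sketch (illustrative) reproduces the numbers quoted above from the two disturbance transfer functions:

```python
# Disturbance transfer functions G(s) = -1/(0.004*s + d); for a unit step,
# the final-value theorem gives w(inf) = G(0) = -1/d.
w_open = -1.0 / 3.8        # open-loop:   -1/(0.004s + 3.8)
w_closed = -1.0 / 3003.8   # closed-loop: -1/(0.004s + 3003.8)
ratio = w_closed / w_open

# Agrees with the simulated values reported in the text
assert abs(w_open - (-0.2624)) < 1e-3
assert abs(w_closed - (-3.3193e-4)) < 1e-5
assert abs(ratio - 0.00127) < 1e-4
```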
4 Conclusions

Based on the elaboration and analysis of the DC motor system model, we draw the following conclusions: (1) By analyzing the eigenvalues of the system matrix and the simulation plots, we can see that this type of DC motor system is always asymptotically stable. In the DC motor model, regardless of how the parameters are modified, as long as they have practical significance the eigenvalues of the system always have negative real parts, so the DC motor system is always asymptotically stable. (2) The DC motor system is controllable. (3) By considering the ratio of the steady-state values of the output responses of the closed-loop and open-loop systems to a unit step disturbance signal, we deduce that feedback significantly decreases the impact of interference, which indicates that the closed-loop feedback system has noise-suppression features.
References

1. Li, Y.: Robust Control - Linear Matrix Inequalities. Tsinghua University Press, Beijing (2007)
2. Xiao-yan, Z.: MATLAB in Automatic Control. Xi'an University of Electronic Science and Technology Press, Xi'an (2006)
3. Li, Y.: Modern Control Theory. Cambridge University Press, Cambridge (2007)
4. Zheng, D.-Z.: Linear System Theory. Cambridge University Press, Cambridge (2005)
5. Dorf, R.C.: Modern Control Systems. Higher Education Press (2006)
Optimization of Acylation of Quercetin Using Response Surface Methodology
Wei Li1, Qianqian Jin1, Duanji Wan2,*, Yuzhen Chen3, and Ye Li1
1 College of Biological Engineering, Hubei University of Technology, Wuhan 430068, Hubei Province, China
[email protected] 2 College of Chemical and Environmental Engineering, Hubei University of Technology, Wuhan 430068, Hubei Province, China Tel.: +86-27-88032320, Fax: +86-27-88032320
[email protected] 3 Department of Mathematics, Henan Institute of Science and Technology, Xinxiang 453003, Henan Province, China
Abstract. The optimization of the acylation of quercetin was studied. Through response surface methodology, the optimum conditions were determined as follows: molar ratio of quercetin to acetic anhydride 15, addition of pyridine 1.00 drop, reaction temperature 45 °C and reaction time 106.59 min. Under these conditions, the yield was as high as 0.3481 g with 0.32 g quercetin as raw material. Keywords: quercetin; acylation; response surface methodology.
1 Introduction

Lipid oxidation is the major cause of flavor and nutritive value degradation of products containing fats and lipids. Lipid oxidation is a radical process involving a chain reaction with induction, propagation and termination steps. During the induction period, alkyl and peroxyl radicals are formed. These highly reactive chemical species produce hydroperoxides (ROOH) during the propagation phase. The whole sequence is responsible for organoleptic and nutritional alterations due to the formation of off-flavor volatile compounds from the degradation of ROOH and the disappearance of essential fatty acids. Therefore, it is necessary to protect food lipids against free radicals by endogenous and exogenous antioxidants of natural or synthetic origin [1, 2, 3, 4]. To overcome lipid oxidation, synthetic antioxidants such as butylated hydroxytoluene (BHT), butylated hydroxyanisole (BHA) and tert-butyl hydroquinone (TBHQ) have been widely used in the food industry. However, their toxicity provides the impetus for the search for alternative antioxidants. Flavonoids have been shown to be effective antioxidants. They readily react with one-electron oxidants, resulting in powerful free radical scavenging activity [5, 6, 7]. *
Corresponding author.
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 311–317, 2010. © Springer-Verlag Berlin Heidelberg 2010
312
W. Li et al.
Quercetin is an important kind of flavonoid, with antioxidant activity comparable to that of TBHQ. However, the molecular structure of quercetin, with its multiple phenolic hydroxyl groups, gives it good hydrophilicity but poor hydrophobicity; therefore, its application in hydrophobic foods is impractical [8, 9]. The hydrophobicity of quercetin can be improved by acylation, which is influenced by many factors, such as reaction time and reaction temperature. When many factors and interactions affect the desired responses, response surface methodology (RSM) is an effective tool for optimizing the process. RSM uses an experimental design such as the central composite design (CCD) to fit a model by the least squares technique. If the proposed model is adequate, as revealed by the diagnostic checking provided by an analysis of variance (ANOVA) and residual plots, contour plots can be usefully employed to study the response surface and locate the optimum [10, 11, 12]. The purpose of our current work was to optimize the acylation of quercetin by response surface methodology (RSM).
2 Materials and Methods

2.1 Materials

Quercetin (purity > 98 %) was from Wuhan Yuancheng Technology Development Co., Ltd. Other chemicals were of analytical grade.

2.2 Experiment Design

One response was measured: yield (Y), defined as the weight of the obtained acylated quercetin. Each of the variables to be optimized was coded at 3 levels: -1, 0, and 1. Table 1 shows the variables, their symbols and levels. The selection of variable levels was based on our preliminary study. A central composite design (CCD), shown in Table 2, was arranged to allow for fitting of a second-order model. The CCD combined the vertices of a hypercube, whose coordinates are given by the 2n factorial design, with the "star" points. The star points were added to the factorial design to provide for estimation of the curvature of the model. Six replicates (runs 3, 7, 10, 11, 17 and 19) at the center of the design were used to allow for estimation of the "pure error" sum of squares. Experiments were randomized in order to minimize the effects of unexplained variability in the observed response due to extraneous factors.

2.3 Acylation of Quercetin

Acetic anhydride was added dropwise to quercetin (0.32 g) in dry pyridine at room temperature. The mixture was then poured into ice-cold water (100 mL). The white precipitate was separated by filtration and washed. The resulting material was dried at 55 °C and collected as the product.
2.4 Statistical Analysis

A software package (Design Expert 7.0) was used to fit the second-order models and generate response surface plots. The model proposed for the response (Y) was:

Y = b0 + Σ(n=1..4) bn·xn + Σ(n=1..4) bnn·xn² + Σ(n<m) bnm·xn·xm ,

where b0 is the value of the fitted response at the center point of the design, which is point (0, 0, 0, 0), and bn, bnn and bnm are the linear, quadratic and cross-product regression terms, respectively.
3 Results and Discussion

3.1 Diagnostic Checking of the Fitted Model

Table 1. Variables and their levels for the central composite design

Variable (symbol)                                    -1     0     1
Molar ratio of quercetin to acetic anhydride (X1)     5    10    15
Addition of pyridine, drops (X2)                      1     2     3
Reaction temperature, °C (X3)                        15    30    45
Reaction time, min (X4)                              10    60   110
ANOVA for the regression was performed to assess the goodness of fit. The model for Y was:

Y = 0.13824 + 0.025511·X1 − 0.014067·X2 + 0.010767·X3 + 0.025628·X4 − 0.023088·X1X2 − 6.16250×10-3·X1X3 + 0.017925·X1X4 − 4.27500×10-3·X2X3 − 0.036763·X2X4 − 7.71250×10-3·X3X4 − 9.96579×10-3·X1² + 8.63421×10-3·X2² + 0.090434·X3² − 0.021216·X4²

The result of the ANOVA is shown in Table 3. The model F-value of 3.68 implies that the model was significant; there was only a 0.86 % chance that a model F-value this large could occur due to noise. Values of "Prob > F" less than 0.05 indicated that model terms were significant. In this study, X1, X4, X2X4 and X3² were significant model terms. The lack-of-fit F-value of 7.70 implied that the lack of fit was significant; there was only a 1.80 % chance that a lack-of-fit F-value this large could occur due to noise. Table 4 presents various statistics to augment the ANOVA. The coefficient of determination (R-squared) is the proportion of variability in the data explained or accounted for by the model. The R-squared of 0.7744 was desirable. "Adeq Precision" measures the signal-to-noise ratio; a ratio greater than 4 is desirable. The ratio in this design was 7.998, which indicated an adequate signal. The obtained model could be used to navigate the design space.
Table 2. Central composite design arrangement and response

Run   X1   X2   X3   X4   Response (Y)
 1    -1    1    1   -1   0.2065
 2    -1   -1    1    1   0.1785
 3     0    0    0    0   0.1419
 4    -1   -1    1   -1   0.2110
 5     0    0    0    1   0.1686
 6     0    1    0    0   0.2069
 7     0    0    0    0   0.1399
 8     1    1    1   -1   0.1921
 9     0    0    1    0   0.2436
10     0    0    0    0   0.1202
11     0    0    0    0   0.1037
12     1   -1    1   -1   0.1897
13     1   -1   -1   -1   0.1884
14     0    0    0   -1   0.0955
15    -1    1   -1   -1   0.1752
16    -1    1   -1    1   0.1477
17     0    0    0    0   0.0966
18     1    1   -1    1   0.1695
19     0    0    0    0   0.137
20     0   -1    0    0   0.1169
21     1   -1   -1    1   0.3572
22     1   -1    1    1   0.3892
23    -1    0    0    0   0.1724
24    -1   -1   -1   -1   0.0696
25     1    0    0    0   0.1142
26    -1   -1   -1    1   0.222
27    -1    1    1    1   0.1648
28     0    0   -1    0   0.2438
29     1    1    1    1   0.1992
30     1    1   -1   -1   0.2074
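The 30 runs of Table 2 have the structure of a face-centred central composite design in four coded factors: the 2^4 = 16 factorial vertices, 2×4 = 8 axial ("star") points at coded level ±1, and 6 centre replicates. A sketch of how such a design can be generated (Python, for illustration):

```python
from itertools import product

n_factors, n_center = 4, 6

# 2^n factorial vertices at coded levels -1 and +1
factorial = [list(p) for p in product([-1, 1], repeat=n_factors)]

# Axial ("star") points; for a face-centred design the axial distance is 1
star = []
for i in range(n_factors):
    for level in (-1, 1):
        point = [0] * n_factors
        point[i] = level
        star.append(point)

center = [[0] * n_factors for _ in range(n_center)]

design = factorial + star + center
assert len(design) == 16 + 8 + 6 == 30  # matches the 30 runs of Table 2
```

In practice the run order would then be randomized, as described in Section 2.2.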
3.2 Response Surface Plotting

Variables giving quadratic and interaction terms with the largest absolute coefficients in the fitted models were chosen for the axes of the contour plots to account for the curvature of the surfaces. In Figure 1, addition of pyridine and molar ratio of quercetin to acetic anhydride were selected for the vertical and horizontal axes of the 3D surface of yield, while reaction time and reaction temperature were held at fixed levels. In Figure 2, addition of pyridine and reaction temperature were selected for the 3D surface of yield, while reaction time and molar ratio of quercetin to acetic anhydride were held at fixed levels. In Figure 3, reaction time and molar ratio of quercetin to acetic anhydride were selected for the 3D surface of yield, while addition of pyridine and reaction temperature were held at fixed levels.
Table 3. ANOVA for the fitted model

Source      Sum of squares  df  Mean square   F value  Prob>F
Model       0.11            14  7.707×10-3     3.68    0.0086
X1          1.032×10-3       1  1.032×10-3     0.49    0.4936
X2          0.022            1  0.022         10.52    0.0055
X3          0.015            1  0.015          7.08    0.0178
X4          1.246×10-3       1  1.246×10-3     0.59    0.4527
X1X2        8.529×10-3       1  8.529×10-3     4.07    0.0619
X1X3        6.076×10-4       1  6.076×10-4     0.29    0.5981
X1X4        5.141×10-3       1  5.141×10-3     2.45    0.1381
X2X3        2.924×10-4       1  2.924×10-4     0.14    0.7139
X2X4        0.022            1  0.022         10.32    0.0058
X3X4        9.517×10-4       1  9.517×10-4     0.45    0.5106
X1²         2.573×10-4       1  2.573×10-4     0.12    0.7309
X2²         1.932×10-4       1  1.932×10-4     0.092   0.7656
X3²         0.021            1  0.021         10.11    0.0062
X4²         1.166×10-3       1  1.166×10-3     0.56    0.4672
Residual    0.031           15  2.095×10-3
Cor total   0.14            29
Table 4. Post-ANOVA statistics

Std. Dev.   0.046     R-Squared         0.7744
Mean        0.18      Adj R-Squared     0.5639
C.V. %      25.58     Pred R-Squared   -0.2380
PRESS       0.17      Adeq Precision    7.998
Fig. 1. Effect of addition of pyridine and molar ratio of quercetin to acetic anhydride on yield; Reaction temperature=30°C; Reaction time=60 min
316
W. Li et al.
Fig. 2. Effect of addition of pyridine and reaction temperature on yield; Reaction time=60 min; molar ratio of quercetin to acetic anhydride=10.
Fig. 3. Effect of reaction time and molar ratio of quercetin to acetic anhydride on yield; Reaction temperature=30°C; addition of pyridine=2
3.3 Optimization Based on Yield

The model is useful in indicating the direction in which to change the variables in order to increase the flavonoid yield. Using the Design Expert 7.0 software, the point at molar ratio of quercetin to acetic anhydride 15, addition of pyridine 1.00 drop, reaction temperature 45 °C, and reaction time 106.59 min could be recommended as a practical optimum. The estimated value of Y at those conditions was 0.3481 g. A verification experiment at the optimum conditions found that the yield was as high as 0.3481 g with 0.32 g quercetin as raw material.
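As a consistency check, the fitted second-order model of Section 3.1 can be evaluated directly at the recommended optimum, using the coded levels X1 = 1, X2 = -1, X3 = 1 and X4 = (106.59 - 60)/50 ≈ 0.932 implied by Table 1. The following Python sketch reproduces the estimated yield of 0.3481 g:

```python
# Coefficients of the fitted model for Y (Section 3.1)
b0 = 0.13824
lin = {1: 0.025511, 2: -0.014067, 3: 0.010767, 4: 0.025628}
cross = {(1, 2): -0.023088, (1, 3): -6.16250e-3, (1, 4): 0.017925,
         (2, 3): -4.27500e-3, (2, 4): -0.036763, (3, 4): -7.71250e-3}
quad = {1: -9.96579e-3, 2: 8.63421e-3, 3: 0.090434, 4: -0.021216}

# Recommended optimum in coded variables (time 106.59 min -> (106.59-60)/50)
x = {1: 1.0, 2: -1.0, 3: 1.0, 4: (106.59 - 60.0) / 50.0}

Y = (b0
     + sum(b * x[n] for n, b in lin.items())
     + sum(b * x[n] * x[m] for (n, m), b in cross.items())
     + sum(b * x[n] ** 2 for n, b in quad.items()))

assert abs(Y - 0.3481) < 1e-3  # predicted yield (g) at the optimum
```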
4 Conclusion

The optimum acylation of quercetin could be achieved under the following conditions: molar ratio of quercetin to acetic anhydride 15, addition of pyridine 1.00 drop, reaction temperature 45 °C, and reaction time 106.59 min.
Acknowledgements This work is partly supported by the Hubei Bureau of Education Key Projects (D20091401), the Hubei Province Natural Science Foundation (2008CDZ001) and Hubei University of Technology Key Researchers Start-up Fund (BSQD0814).
References 1. Zou, Y., Lu, Y., Wei, D.: Antioxidant activity of a flavonoid-rich extract of Hypericum perforatum L. in vitro. Journal of Agricultural and Food Chemistry 16, 5032–5039 (2004) 2. Liu, B., Ning, Z., Gao, J., Xu, K.: Preparing apigenin from Leaves of Adinandra nitida. Food Technology and Biotechnology 1, 111–115 (2008) 3. Srivastava, A., Harish, S.R., Shivanandappa, T.: Antioxidant activity of the roots of Decalepis hamiltonii (Wight & Arn.). LWT-Food Science and Technology 39, 1059–1065 (2006) 4. Joseph, G.S., Jayaprakasha, G.K., Selvi, A.T., Jena, B.S., Sakariah, K.K.: Antiaflatoxigenic and antioxidant activities of Garcinia extracts. International Journal of Food Microbiology 101, 153–160 (2005) 5. Gao, J., Liu, B., Zhao, R., Ning, Z., Wu, Q.: Characterization and antioxidant activity of flavonoid-rich extracts from leaves of Ampelopsis grossedentata. Journal of Food Biochemistry 33, 808–820 (2009) 6. Proestos, C., Boziaris, I.S., Nychas, G.J.E., Komaitis, M.: Analysis of flavonoids and phenolic acids in greek aromatic plants: investigation of their antioxidant capacity and antimicrobial activity. Food Chemistry 95, 664–671 (2006) 7. Sun, T., Ho, C.: Antioxidant activities of buckwheat extracts. Food Chemistry 90, 743–749 (2005) 8. Li, W., Zheng, C., Ning, Z.: The Antioxidation activity of DMYL in lard system. Food Science 9, 73–76 (2005) 9. Liu, B., Du, J., Zeng, J., Chen, C., Niu, S.: Characterization and antioxidant activity of dihydromyricetin-lecithin complex. European Food Research and Technology 230, 325–331 (2009) 10. Rustom, I.Y.S., Lopez-Leiva, M.H., Nair, B.M.: Optimization of extraction of peanut proteins with water by response surface methodology. Journal of Food Science 6, 1660– 1663 (1991) 11. Stévigny, C., Rolle, L., Valentini, N., Zeppa, G.: Optimization of extraction of phenolic content from hazelnut shell using response surface methodology. Journal of the Science of Food and Agriculture 87, 2817–2822 (2007) 12. 
Tsen, J., Lin, Y., King, A.: Response surface methodology optimization of immobilized Lactobacillus acidophilus banana puree fermentation. International Journal of Food Science and Technology 44, 120–127 (2009)
An Empirical Analysis on the Diffusion of the Local Telephone Diffusion in China
Zhigao Liao, Jiuping Xu, and Guiyun Xiang
1 Management Department, Guangxi University of Technology, Liuzhou, 545006, P.R. China
2 School of Business & Administration, Sichuan University, Chengdu 610064, P.R. China
Abstract. Various new product innovation diffusion models have been constructed to explore the principles of innovation diffusion. Most of them assume that the market potential is dynamic and that price or advertising influences the diffusion rate and the market potential. However, few of them acknowledge the fact that the population is increasing and that members of different colonies, which have different attitudes to risk and different decision patterns, are transferring between each other. In this research we concentrate on developing a dynamical-rate model for innovation diffusion with the influencing factors described above. With the data of telephone subscribers in urban and rural areas of China, we make an empirical analysis and find that this model shows the process of innovation diffusion more effectively under the condition of an ongoing urbanization process. Keywords: Diffusion model; Fuzzy coefficient; Innovation diffusion; Telephone diffusion; GA.
1 Introduction
Owing to the household registration system of the planned economy, China has historically divided the population into an urban population and a rural population. Owing to strict control, rare migration, huge gaps between industry and agriculture, and different environments and living standards, there are great differences between the urban and rural populations in communication channels, information spreading, and consumption habits. However, with the reform and opening up to the outside world, economic development is converting China from a village society into an urban society, and inhabitants have begun to move. According to related reports, about 1% of the rural population has moved into towns each year since the 1980s, and the ratio of urban to rural population may reach 7:3 after 60 years. Thus, it is not enough to discuss their consumption systems separately without taking into consideration the population moving between the cities and the countryside. Based on this, we try to build a dynamical model for telephone diffusion under the condition of large population movement between town and country.
This research was supported by the Key Program of NSFC (Grant No. 70831005) and the National Science Foundation for Distinguished Young Scholars, P.R. China (Grant No. 70425005) and the Foundation of Research Projects of Department of Education of Guangxi (No.200911MS117).
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 318–325, 2010. c Springer-Verlag Berlin Heidelberg 2010
Consumption habits, living standards and the surrounding environment make a great difference to the adoption rate. Moreover, potential adopters who have improved their economic conditions or status through rapid economic development and urbanization will have great consumption demand for goods that they would not even have considered before. Furthermore, the rate of urbanization differs from stage to stage, so it is more acceptable to use an uncertain rate rather than a deterministic one to simulate the process. We use the rate of people moving from rural areas into urban areas to represent the urbanization process of an area. Consequently, this paper constructs a fuzzy innovation diffusion model and tries to find the principles of innovation diffusion under the effects of population increase and urbanization. In this context, model uncertainty is portrayed through the fuzzy transition process from non-user to adopter of an innovation, because of the uncertain rate of urbanization. We organize this article as follows. In Section 2, an innovation diffusion model with population increase and conversion between two different colonies is constructed. An empirical analysis with the data of telephone subscribers in urban and rural areas of China, compared with the Bass model, is presented in Section 3. Some concluding remarks are given in Section 4.
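For reference, the classical Bass model against which our model is compared in Section 3 can be sketched in a few lines. The coefficients below are illustrative assumptions, not values fitted to the Chinese telephone data:

```python
# Bass diffusion model: dN/dt = (p + q*N/m) * (m - N), where N is the number
# of adopters, m the (fixed) market potential, p the innovation coefficient
# and q the imitation coefficient. All numbers here are hypothetical.
p, q, m = 0.03, 0.38, 1000.0
N, dt = 0.0, 0.01

for _ in range(10_000):  # simple Euler integration over 100 time units
    N += dt * (p + q * N / m) * (m - N)

assert 0.0 < N <= m and N > 0.99 * m  # adoption saturates near the potential
```

The fixed market potential m is exactly the assumption that the two-colony model of Section 2 relaxes.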
2 Modeling
There is comparative independence and stability between colony 1 and colony 2. Each of the two colonies has its own main communication styles, broadcasting channels, and consumption habits. However, there also exists migration between them for specific reasons such as improved living standards. Members moving into a new community will be absorbed into its consumption system, since they are affected by the new environment. Within a community, driven by external factors and word of mouth, non-users adopt the innovation and become users, or users give up the innovation for some reason after experiencing it.

2.1 Conceptional Model
Based on the foundation above, the concrete conversion relations between the inside and outside of the two communities are shown in Fig. 1. The arrowheads denote the flow directions of the members.

2.2 Mathematical Model
Hypothesis 1: The members of each colony are divided into users and non-users. The non-users include those who adopted the innovation but have given up using it. Hypothesis 2: The internal growth rate of each of the two colonies obeys a Logistic distribution function.
Z. Liao, J. Xu, and G. Xiang
Fig. 1. Customers flow diagram for innovation between two colonies
Hypothesis 3: The non-users of each colony remain non-users when entering the other colony, while the users of each colony may either keep using or give up the innovation for various reasons when entering the other colony. Hypothesis 4: The number of members in one colony shifting to the other colony is proportional to the total number of members of the colony, and the coefficient is fuzzy because of the uncertainty of urbanization.

Based on the conceptional model, the hypotheses and the conversion relations above, we build the growth equations for the variables $Q_1$, $Q_2$, where $Q_i(t)$ represents the number of members in colony $i$ at time $t$, $i = 1, 2$. According to empirical statistics, the birthrate and the death ratio are linear functions of the population of the colony. Suppose $\beta_{i0} - d_{i0} Q_i$ is the birthrate of colony $i$, where $\beta_{i0}$ is the birthrate of colony $i$ without a resource limit and $-d_{i0} Q_i$ is the correction under a resource limit, with $\beta_{i0}$ and $d_{i0}$ both positive constants. The death ratio of colony $i$ is $\beta_{i1} + d_{i1} Q_i > 0$, where $\beta_{i1}$ is the natural death ratio without a resource limit and $d_{i1} Q_i$ is the increase of the death ratio under a resource limit; that is, the birthrate decreases and the death ratio increases as the population grows. Thus the net increase of members of colony 1 at time $t$ is $(\beta_{10} - \beta_{11})Q_1 - (d_{10} + d_{11})Q_1^2$, with $\beta_{10} - \beta_{11} > 0$ according to Hypothesis 2. From Hypothesis 4 and the conversion ways F and M, the number of members shifting in from colony 2 at time $t$ is $\theta_2 N_2 + \theta_2 A_2 = \theta_2 Q_2$, and from the conversion ways E and N, the number of members shifting into colony 2 is $\theta_1 N_1 + \theta_1 A_1 = \theta_1 Q_1$, as seen in Fig. 1. Based on the analysis above, the growth equation of $Q_1$ is
$$\dot{Q}_1 = \frac{dQ_1}{dt} = (\beta_{10} - \beta_{11})Q_1 - (d_{10} + d_{11})Q_1^2 + \theta_2 Q_2 - \theta_1 Q_1.$$
Similarly, the growth equation of $Q_2$ is
$$\dot{Q}_2 = \frac{dQ_2}{dt} = (\beta_{20} - \beta_{21})Q_2 - (d_{20} + d_{21})Q_2^2 + \theta_1 Q_1 - \theta_2 Q_2.$$
An Empirical Analysis on the Diffusion of the Local Telephone Diffusion
Consequently, the growth model of the colonies' members is governed by
$$\begin{cases} \dot{Q}_1 = (\beta_{10} - \beta_{11})Q_1 - (d_{10} + d_{11})Q_1^2 + \theta_2 Q_2 - \theta_1 Q_1 \\ \dot{Q}_2 = (\beta_{20} - \beta_{21})Q_2 - (d_{20} + d_{21})Q_2^2 + \theta_1 Q_1 - \theta_2 Q_2 \\ Q_1(0) = Q_{10} \ge 0,\quad Q_2(0) = Q_{20} \ge 0 \end{cases} \quad (1)$$
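The coupled system (1) has no closed-form solution in general, but its qualitative behavior is easy to explore numerically. The sketch below integrates (1) with a simple Euler scheme; all coefficient values are hypothetical, chosen only to illustrate the dynamics, not estimated from the paper's data.

```python
# Euler integration of the two-colony growth model (1).
# All coefficients below are illustrative assumptions, not the paper's estimates.
beta10, beta11, d10, d11 = 0.030, 0.010, 1e-5, 1e-5   # colony 1 birth/death terms
beta20, beta21, d20, d21 = 0.035, 0.012, 2e-5, 2e-5   # colony 2 birth/death terms
theta1, theta2 = 0.002, 0.010                          # migration rates between colonies

def simulate(Q10, Q20, steps=2000, dt=0.1):
    """Integrate system (1) from (Q10, Q20); returns the final (Q1, Q2)."""
    Q1, Q2 = Q10, Q20
    for _ in range(steps):
        dQ1 = (beta10 - beta11)*Q1 - (d10 + d11)*Q1**2 + theta2*Q2 - theta1*Q1
        dQ2 = (beta20 - beta21)*Q2 - (d20 + d21)*Q2**2 + theta1*Q1 - theta2*Q2
        Q1, Q2 = Q1 + dt*dQ1, Q2 + dt*dQ2
    return Q1, Q2

Q1_final, Q2_final = simulate(500.0, 800.0)   # both colonies approach finite levels
```

The quadratic damping terms play the role of the resource limit, so both populations settle toward finite carrying levels rather than growing without bound.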
Suppose that $N_i(t)$ is the number of users in colony $i$ at time $t$, and $A_i(t)$ is the number of non-users in colony $i$ at time $t$. We first build the equation for the variable $N_1(t)$ in colony 1. From conversion way A in Fig. 1, because of internal oral communication and the media, the number of non-users who have changed into users is $a_1 A_1 + b_1 N_1 A_1$, in which $a_1$ is the probability for non-users to become users through the media, and $b_1$ is the probability for non-users to become users through oral communication between users and non-users. Owing to natural deaths, the decrease of users is $(\beta_{11} + d_{11} Q_1)N_1$. From conversion way C in Fig. 1, $e_1 N_1$ users give up using the innovation for some reason; and, caused by member shifting, $\theta_1 N_1$ users shift out of colony 1 and $k_2 \theta_2 N_2$ users shift into colony 1 from colony 2. Here $\theta_i$ is the probability for members of colony $i$ to convert into the other colony, and $k_i$ is the probability for users of colony $i$ to continue using the innovation after converting into the other colony. Based on the analysis above, the growth equation of $N_1$ is
$$\dot{N}_1 = \frac{dN_1}{dt} = a_1 A_1 + b_1 N_1 A_1 - (\beta_{11} + d_{11} Q_1)N_1 - e_1 N_1 - \theta_1 N_1 + k_2 \theta_2 N_2.$$
Similarly, the growth equation of $N_2$ is
$$\dot{N}_2 = \frac{dN_2}{dt} = a_2 A_2 + b_2 N_2 A_2 - (\beta_{21} + d_{21} Q_2)N_2 - e_2 N_2 - \theta_2 N_2 + k_1 \theta_1 N_1.$$
Consequently, the dynamical equations of innovation diffusion based on member shifting between colony 1 and colony 2 are governed by
$$\begin{cases} \dot{Q}_1 = (\beta_{10} - \beta_{11})Q_1 - (d_{10} + d_{11})Q_1^2 + \theta_2 Q_2 - \theta_1 Q_1 \\ \dot{N}_1 = a_1 A_1 + b_1 N_1 A_1 - (\beta_{11} + d_{11} Q_1)N_1 - e_1 N_1 - \theta_1 N_1 + k_2 \theta_2 N_2 \\ \dot{Q}_2 = (\beta_{20} - \beta_{21})Q_2 - (d_{20} + d_{21})Q_2^2 + \theta_1 Q_1 - \theta_2 Q_2 \\ \dot{N}_2 = a_2 A_2 + b_2 N_2 A_2 - (\beta_{21} + d_{21} Q_2)N_2 - e_2 N_2 - \theta_2 N_2 + k_1 \theta_1 N_1 \\ Q_1 = A_1 + N_1,\quad Q_2 = A_2 + N_2 \\ Q_1(0) = Q_{10} \ge 0,\ Q_2(0) = Q_{20} \ge 0,\ A_1(0) = A_{10} \ge 0,\ A_2(0) = A_{20} \ge 0,\ N_1(0) = N_{10} \ge 0,\ N_2(0) = N_{20} \ge 0 \end{cases} \quad (2)$$
This model focuses on how the process of urbanization affects customers. The urbanization process is expressed by the mutual conversion between different consumer groups. That is to say, with economic development and improved living standards, low-consumption groups constantly improve
their levels of consumption and step into consumer groups of higher levels. Consider the right-hand side of the equation
$$\dot{Q}_1 = (\beta_{10} - \beta_{11})Q_1 - (d_{10} + d_{11})Q_1^2 + \theta_2 Q_2 - \theta_1 Q_1,$$
in which $(\beta_{10} - \beta_{11})Q_1 - (d_{10} + d_{11})Q_1^2$ is mainly determined by natural population growth and the family planning policy, and is little affected by economic development, while $\theta_2 Q_2 - \theta_1 Q_1$ is mainly affected by economic development. Although economic development follows certain regularities, its effect on customers is uncertain and varies during the diffusion process of an innovation. However, the models mentioned in the introduction, including system (2), are crisp, and obviously cannot reflect the actual changes in the law of development. Since these models stress a few main influencing factors, their impact on consumers may be exaggerated and the combined effects of other factors ignored, so serious disturbances may occur. Moreover, during the process of diffusion, the impact of these factors on the diffusion rate differs greatly between stages. For this reason, we define $\theta_i$, $i = 1, 2$, as fuzzy numbers and use fuzzy theory to express the uncertainty of urbanization in China.
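This excerpt does not specify the membership shape chosen for the fuzzy rates $\theta_i$. A minimal sketch, assuming a triangular fuzzy number (a common choice in the fuzzy diffusion literature, not something the paper states here): an $\alpha$-cut then yields the nested interval $[\underline{\theta}(\alpha), \bar{\theta}(\alpha)]$ that drives interval-valued simulation.

```python
def triangular_alpha_cut(low, mode, high, alpha):
    """alpha-cut [lower, upper] of a triangular fuzzy number (low, mode, high).

    alpha = 1 collapses to the single point `mode`; alpha = 0 gives the full support.
    """
    if not (low <= mode <= high and 0.0 <= alpha <= 1.0):
        raise ValueError("invalid triangular fuzzy number or alpha")
    return low + alpha * (mode - low), high - alpha * (high - mode)

# Hypothetical urbanization rate "about 1.5%, certainly between 1% and 2%":
lo, hi = triangular_alpha_cut(0.010, 0.015, 0.020, 0.5)   # -> (0.0125, 0.0175)
```

As $\alpha$ rises from 0 to 1, the interval shrinks from the full support to the single most plausible value, which is exactly the nesting property the interval recursions below rely on.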
3 The Empirical Analysis
To make the empirical analysis easier, we suppose that the model satisfies the following hypotheses: (1) since the standard of living in Chinese towns is superior to that in the countryside, we suppose $\theta_2 = 0$, which indicates that no one shifts from town to country; (2) since communication by telephone is popular and necessary, we suppose $k_1 = 1$ and $e_i = 0$, which indicates that telephone users in the country continue using the telephone after shifting into town, and users in town do not give it up; (3) the population of China will reach 1.6 billion and then remain stable, according to related research on the population of China [3].

3.1 Model Conversion
In this section, we use the Hukuhara difference (H-difference), defined as $u - v = (\underline{u}(\alpha) - \underline{v}(\alpha),\ \bar{u}(\alpha) - \bar{v}(\alpha))$, to simulate the diffusion process of the local telephone in China. Consequently, we have the following system:
$$\begin{cases} \underline{N}_1(t+1) = a_1 \underline{A}_1 + (1 + b_1)\underline{N}_1(t) - c_1 \underline{N}_1^2(t) + d_1 \underline{N}_2(t) \\ \bar{N}_1(t+1) = a_1 \bar{A}_1 + (1 + b_1)\bar{N}_1(t) - c_1 \bar{N}_1^2(t) + \bar{d}_1 \bar{N}_2(t) \\ \underline{N}_2(t+1) = a_2 \underline{A}_2 + (1 + b_2)\underline{N}_2(t) - c_2 \underline{N}_2^2(t) \\ \bar{N}_2(t+1) = a_2 \bar{A}_2 + (1 + b_2)\bar{N}_2(t) - c_2 \bar{N}_2^2(t) \end{cases} \quad (3)$$
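Once the coefficients are fixed, system (3) can be iterated directly. The sketch below steps the lower/upper envelopes forward; the coefficient values are only loosely inspired by the 45% row of Table 1, and closing the system with $A_i = Q_i - N_i$ for fixed potentials $Q_i$ is an assumption made here for illustration, not something the paper states.

```python
# Forward iteration of the interval system (3).
# Coefficients are illustrative (loosely inspired by the 45% row of Table 1);
# the closure A_i = Q_i - N_i with fixed potentials Q_i is an assumption.
a1, b1, c1, d1_lo, d1_hi = 0.001, 0.27, 5e-6, 0.014, 0.024
a2, b2, c2 = 0.0001, 0.36, 2e-5
Q1_POT, Q2_POT = 50000.0, 20000.0   # assumed saturation potentials

def step(n1_lo, n1_hi, n2_lo, n2_hi):
    """One step of the lower/upper recursions in (3)."""
    return (
        a1*(Q1_POT - n1_hi) + (1 + b1)*n1_lo - c1*n1_lo**2 + d1_lo*n2_lo,
        a1*(Q1_POT - n1_lo) + (1 + b1)*n1_hi - c1*n1_hi**2 + d1_hi*n2_hi,
        a2*(Q2_POT - n2_hi) + (1 + b2)*n2_lo - c2*n2_lo**2,
        a2*(Q2_POT - n2_lo) + (1 + b2)*n2_hi - c2*n2_hi**2,
    )

state = (1000.0, 1000.0, 500.0, 500.0)
for _ in range(60):
    state = step(*state)
# state now brackets the user numbers: lower <= upper in each colony
```

The spread between the envelopes comes from the fuzzy coefficient pair $(d_1, \bar{d}_1)$: starting from a crisp initial state, the lower and upper urban trajectories separate over time, which is what the band between the L and H curves in Figs. 2-4 depicts.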
Table 1. The error control target value of history data

ERROR | F | a1 | b1 | c1 × 10³ | d1 | d̂1 | a2 | b2 | c2 × 10³
100% | 43937.44 | 0.001795 | 0.226062 | 5.54699E-06 | 0.01164 | 0.027958 | 0.0003922 | 0.388549 | 3.24809E-05
80% | 56921.23 | 0.001848 | 0.227387 | 4.87714E-06 | 0.018258 | 0.024537 | 0.00011641 | 0.416663 | 4.76498E-05
60% | 63795.76 | 0.001114 | 0.252945 | 4.65459E-06 | 0.015721 | 0.023128 | 0.0001476 | 0.373296 | 4.13123E-05
55% | 51876.72 | 0.001372 | 0.240317 | 4.44283E-06 | 0.018897 | 0.023776 | 0.00003187 | 0.39436 | 3.84283E-05
50% | 54156.28 | 0.00115 | 0.268982 | 5.77116E-06 | 0.010228 | 0.029075 | 0.00005979 | 0.363491 | 2.26276E-05
45% | 53254.21 | 0.000904 | 0.271314 | 4.96505E-06 | 0.014153 | 0.024267 | 0.00003765 | 0.357402 | 2.08844E-05
Table 2. The predicted consumer numbers of China local telephone under different control simulation errors

ERROR | Minimum (urban area) | Maximum (urban area) | Minimum (rural area) | Maximum (rural area)
100% | 42194.89787 | 43026.2 | 12008.67 | 12012.68
50% | 47665.11832 | 48739.22 | 16071.58 | 16072.23
45% | 55876.63922 | 56496.48 | 17118.18 | 17118.6
Fig. 2. Plots of potential curves of the telephone in urban and rural areas governed by model (3) (the control error rate is 45%)
Fig. 3. Plots of potential curve of telephone in urban area and rural area governed by model (3) (the control error rate is 50%)
We give the prediction figures in Fig. 2, Fig. 3 and Fig. 4, in which the error control ratios are 45%, 50% and 100% respectively. Since the simulation effect under the other error control ratios is not good, we only give the potential market of the local telephone under error control ratios of 45%, 50% and 100%. From Table 2, the number of local telephone consumers in China ranges from 54203.57 to 73615.08 with respect to the different error control ratios.
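The quoted range can be cross-checked directly against Table 2: the lower bound is the 100% row's minimum urban plus minimum rural figure, and the upper bound is the 45% row's maximum urban plus maximum rural figure. A small sanity check (units as reported in Table 2):

```python
# Sanity check of the quoted consumer range against Table 2 (units as reported).
table2 = {
    100: {"urban": (42194.89787, 43026.2), "rural": (12008.67, 12012.68)},
    50:  {"urban": (47665.11832, 48739.22), "rural": (16071.58, 16072.23)},
    45:  {"urban": (55876.63922, 56496.48), "rural": (17118.18, 17118.6)},
}
totals = {err: (v["urban"][0] + v["rural"][0], v["urban"][1] + v["rural"][1])
          for err, v in table2.items()}
overall_min = min(lo for lo, hi in totals.values())   # 54203.57, from the 100% row
overall_max = max(hi for lo, hi in totals.values())   # 73615.08, from the 45% row
```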
Fig. 4. Plots of potential curve of telephone in urban area and rural area governed by model (3) (the control error rate is 100%)
4 Conclusion
In this paper, we constructed an innovation diffusion model with migration between two different colonies. From the analysis of the innovation diffusion model based on members shifting between the two colonies, and from the empirical analysis of telephone users in China, we conclude that the subscribers will tend to their maximal market potential whether or not members shift between towns and rural areas. During the process of urbanization and the relaxation of household registration in China, such phenomena should not be ignored, since their influence on the economic and social system may be vital and significant. Since obtaining data is very difficult, the population data is taken directly from related sources and empirical estimation. Moreover, the simulation is not accurate enough, since we have not taken the economic environment, rising living standards and the effect of decreasing communication costs into consideration; all of these will be included in future research.
References
1. Posad, A., Mahajan, V.: How many pirates should a software firm tolerate? An analysis of piracy protection on the diffusion of software. International Journal of Research in Marketing 20, 337–353 (2003)
2. Bass, F.M.: A new product growth model for consumer durables. Management Science 15(5), 215–227 (1969)
3. Yunjie, C., Yan, Z.: The population target in our country: no more than 1.6 billion before the middle of this century [EB/OL], http://www.longhoo.net/gb/longhoo/news/civil/node107/userobject1ai11989.html/2003-01-09
4. Satoh, D.: A discrete Bass model and its parameter estimation. Journal of the Operations Research 44(1), 1–18 (2001)
5. Diamond, P.: Stability and periodicity in fuzzy differential equations. IEEE Trans. Fuzzy Syst. 8, 583–590 (2000)
6. Jun, D.B., Kim, S.K.: Forecasting telecommunication service subscribers in substitutive and competitive environment. International Journal of Forecasting 18, 561–581 (2002)
7. Chang, P.-T., Chang, C.-H.: A stage characteristic-preserving product lifecycle modeling. Mathematical and Computer Modeling 37, 1259–1269 (2003)
8. Jiuping, X., Zhigao, L.: Model for innovation diffusion rate. Chinese Journal of Management 1, 330–340 (2004) (in Chinese)
9. Jiuping, X., Zhigao, L., Hu, Z.: A class of linear differential dynamical systems with fuzzy initial condition. Fuzzy Sets and Systems 158, 2339–2358 (2007)
10. Jiuping, X., Zhigao, L.: A class of linear differential dynamical systems with fuzzy matrices. Journal of Mathematical Analysis and Application (2009), doi:10.1016/j.jmaa.2009.12.053
11. Jiuping, X., Zhigao, L.: A class of linear differential dynamical systems with fuzzy initial condition. Fuzzy Sets and Systems 158, 2339–2358 (2007)
12. Zhigao, L.: The study of mobile communication technology diffusion in China with fuzzy Bass model. In: Proceedings of the Third International Conference on Management Science and Engineering Management, England, UK, pp. 3–9 (2009)
The Fee-Based Agricultural Information Service: An Analysis of Farmers' Willingness to Pay and Its Influencing Factors Yong Jiang, Fang Wang, Wenxiu Zhang, and Gang Fu College of Economics and Management, Sichuan Agricultural University, Ya’an, 625014, China
[email protected]
Abstract. With the deepening reform of the rural economic system, it is difficult for an agricultural information service relying simply on free service to meet the demands of the market economy and to provide more customized service to farmers. Based on questionnaires completed by 293 farmer households in 40 villages and towns in 10 counties of Sichuan province and Chongqing municipality, this paper explores farmers' decision-making behavior with consideration of information cost, by establishing utility functions and Logistic models. The investigation suggests that farmers' willingness to pay is significantly influenced by factors such as their education, agricultural acreage, income from agricultural production and operation, and their access to information services. On this basis, the paper proposes some strategies, such as improving farmers' ability to search for and use information through various channels, and following a path that combines 'public-interest information service' with 'commercial information service'. Keywords: fee-based agricultural service, farmers, willingness to pay, influencing factors.
1 Introduction

Agricultural information service is a series of valuable service activities performed by the subjects of agricultural information service, such as the government, scientific research institutions, agriculture-related enterprises, agricultural cooperative organizations and the like, through developing and utilizing all kinds of information technology as well as all sorts of service modes, in order to provide various information resources for every stage of agricultural production [1]. With respect to its characteristics, agricultural information service has the composite attributes of pure public products, quasi-public products and private products [2]. Due to the social labor it involves and its own use value, agricultural information service has already become a kind of special commodity [3]. It can exist as a special commodity because the obtaining of information lessens the uncertainties in the production environment, thereby reducing costs and relatively improving agricultural productivity [4].
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 326–333, 2010. © Springer-Verlag Berlin Heidelberg 2010
The public product attribute and commodity attribute of the agricultural information service require that, during the process of operation and management, on the one hand the government's role as a supplying subject and leader in the agricultural information service be put into full play, and on the other hand the establishment of the market mechanism be given priority, so as to guide various investors to engage in providing the agricultural information service [5, 6]. However, the major practical question we face is: are farmers willing to engage in, and able to afford, a fee-based agricultural information service? Previous research shows that scholars like Na Lei believe that farmers rationally decide whether to pay for the search for information after comparing their expected earnings with the cost to be invested [7]. Scholars like Yegang Zheng insist that farmers' initiative in information demand is restricted to some extent by factors like the small scale of operated land [8]. Through investigation, Shiming Xu finds that farmers' awareness of information is mainly influenced by their individual education, local information infrastructure and so on [9]. To sum up, scholars both at home and abroad have already made many achievements in the research of the agricultural information service, but it is not difficult to find that investigations rarely probe into the fee-based agricultural information service from the viewpoint of farmers, and that farmers' willingness to pay cannot be fully reflected by the index systems established in previous research.
From this perspective, based on a summary of previous relevant research, this paper attempts to discuss the fee-based agricultural information service that farmers are willing to pay for, and its contributing factors under the market economy, through establishing utility functions and Logistic models, in the hope of finding a new point of departure for improving the agricultural information service.
2 Research Hypotheses, Models, Variables and Data

2.1 Selection of Research Hypotheses and Models

This paper presumes that an information service organization prepares to carry out a program of fee-based information service in a rural area, which mainly features customized service, such as specialized information consultation service and predictive information service. As limited rational men of business, when deciding whether to participate in this program or not, farmers generally will compare the expected utility and cost of self-service with those of the fee-based service, and then choose the way that maximizes their net income. Suppose farmers can obtain utility from consumption of the fee-based agricultural information service [10], expressed as follows:
$$U_j = U(I_j, C_j) \quad (j = 0, 1), \quad (1)$$
where $I$ stands for the income of agricultural production and operation, $C$ stands for the income of other operations, and $j$ refers to whether the farmer participates in the fee-based agricultural information service program or not (0 for no, 1 for yes). If farmers participate in the fee-based agricultural information service, they need to pay the service charge $\pi_1$ (otherwise, no service charge). Suppose a farmer household
consumes $n$ types of agricultural information in total in one year, with the average utilization amount of every kind of information service being $m$, the price $p$ (including the direct and indirect charges of the information service, such as the opportunity cost of information search), the price of other products presumed as $l$, and the farmer's gross income $Y$. As the agricultural information service has the attribute of a public product, if we take into consideration the government's proper allowance rate $B$ for farmers' participating in the fee-based agricultural information service, the farmer's budget constraint model can be expressed as:
$$\sum_{i=1}^{n} (1 - B)\, p_i m_i + \pi_j + C_j = Y \quad (j = 0, 1). \quad (2)$$
Substituting (1) into (2) produces the conditional utility function (3). Thus, the difference between the utility of farmers' participating in the fee-based agricultural information service and that of the self-service can be expressed as:
$$U_j = U\Big[I_j,\ Y - \pi_j - \sum_{i=1}^{n} (1 - B)\, p_i m_i\Big] \quad (j = 0, 1), \quad (3)$$
$$U_1 - U_0 = \beta_0 (I_1 - I_0) + \beta_1 \Big(\sum_{i=1}^{n} B\, p_i m_i - \pi_1\Big) + (e_1 - e_2). \quad (4)$$
In (4), $\beta$ is a parameter to solve for and $e$ a stochastic disturbance term; the first term on the right of the equation refers to the expected increase of the agricultural production and operation income obtained by the farmer's participating in the fee-based agricultural information service, the second term to the net allowance for participating, and the third term is a stochastic disturbance. If the utility difference in (4) is positive, the farmer generally will choose to engage in the fee-based agricultural information service program. However, not all of the above costs and benefits can be measured by money, and some of the variables are just farmers' subjective feelings; therefore, they cannot be measured and calculated as general investment decisions are. As far as this research is concerned, this paper pays more attention to the factors that influence farmers' willingness to pay for the fee-based agricultural information service, using a function which can be expressed as:
$$U = U_1 - U_0 = \alpha + \sum_{i=1}^{n} \beta_i X_i + u \quad (i = 1, 2, 3, \ldots, n), \quad (5)$$
where $\alpha$ is a constant term, $X_i$ an independent variable, and $u$ a stochastic disturbance term. For a binary discrete choice probability problem, such as whether farmers decide to participate in the fee-based agricultural information service program or not, this paper makes a regression analysis with the Logistic model, shown as:
$$\log \frac{P_i}{1 - P_i} = u_i = \alpha + \beta X_i \quad (i = 1, 2, 3, \ldots, n), \quad (6)$$
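Equation (6) is the standard binary logit. The paper fits it in SPSS; as a transparent stand-in, the sketch below fits the same model on synthetic data by Newton-Raphson. The data and coefficient values are invented purely for illustration, not taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the survey: one covariate x, binary response y.
n = 400
x = rng.normal(size=n)
true_alpha, true_beta = -0.5, 1.0
p = 1.0 / (1.0 + np.exp(-(true_alpha + true_beta * x)))
y = rng.binomial(1, p)

# Newton-Raphson for the logit MLE of log(P/(1-P)) = alpha + beta*x.
X = np.column_stack([np.ones(n), x])
w = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ w))          # fitted probabilities
    grad = X.T @ (y - mu)                      # score vector
    hess = X.T @ (X * (mu * (1.0 - mu))[:, None])  # observed information
    w = w + np.linalg.solve(hess, grad)

alpha_hat, beta_hat = w
odds_ratio = np.exp(beta_hat)   # the OR interpretation used later for each coefficient
```

A positive fitted coefficient means higher values of the covariate raise the odds of willingness to pay, which is how the estimated results are read in Section 3.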
where $P_i$ refers to the probability of farmer $i$'s specific choice toward the given variables, the left side of the equation is the logarithm of the odds between farmers'
participating in the fee-based agricultural information service and self-service, and the right side is a linear form of a series of characteristic variables. Therefore, the farmer's willingness to pay for the fee-based agricultural information service is determined by the cumulative probability distribution function of $u_i$.

2.2 Data Source and Research Variable Selection

The data quoted in this paper come from questionnaires administered in 40 towns/townships in 10 counties/districts of Sichuan province and Chongqing municipality. In total, 360 questionnaires were given out, 330 were retrieved, and 293 were confirmed as valid. The samples were selected at random, and the investigation mainly covers each farmer's individual characteristics, family characteristics, agricultural production characteristics, accessibility of the agricultural information service and other indexes. According to the above analysis, this paper has selected 15 investigation variables for the empirical model; the statistical characteristics of the investigated variables are depicted in Table 1. For convenience, the model of $u_i$, reflecting farmers' willingness to pay for the fee-based information service, is defined as follows:
$$u_i = \alpha + \beta_1 Gen + \beta_2 Age + \beta_3 Edu + \beta_4 Inc1 + \beta_5 Inc2 + \beta_6 Car + \beta_7 Req + \beta_8 Ris + \beta_9 Lab + \beta_{10} Lan + \beta_{11} Rel + \beta_{12} Tra + \beta_{13} Mod + \beta_{14} Org + \beta_{15} Dp. \quad (7)$$
Table 1. Investigated variable assignment and its statistical description

Variable (abbr.) | Definition and units | Mean, willing to pay | Mean, unwilling to pay | Possible direction of influence
Gender (Gen) | 0=female; 1=male | 0.710 | 0.600 | +/–
Age | 0=no more than 19 years old; 1=20~29; 2=30~44; 3=44~54; 4=no less than 55 | 2.340 | 2.980 | +/–
Education (Edu) | 0=illiterate; 1=primary school graduate; 2=junior high school graduate; 3=senior high school graduate; 4=junior college or undergraduate; 5=graduate and above | 1.990 | 1.190 | +
Per capita income of the family (Inc1) [11] | 0=less than 8000 Yuan (low income); 1=8000~12000 Yuan (intermediate income); 2=more than 12000 Yuan (high income) | 5674 | 4589 | +
Agricultural production and operation income (Inc2) | Aggregate income of the family minus wage income (Yuan) | 9939 | 5017 | +
Types of household production and operation (Car) | 0=ordinary households; 1=specialized households; 2=households with combined occupations | 1.120 | 0.940 | +/–
Willingness for information service (Req) | 0=no; 1=yes | 0.790 | 0.400 | +/–
Information risk tolerance (Ris) | 0=not able to tolerate; 1=basically able; 2=able to tolerate | 0.910 | 0.760 | +
Agricultural labor proportion (Lab) | Agricultural laborers' proportion of the total household laborers (%) | 0.728 | 0.685 | +
Cultivated land (Lan) | Mu | 2.652 | 1.89 | +
Social relationship network (Rel) | Are farmers able to search information through a personal relationship network? 0=no; 1=yes | 0.930 | 0.820 | +
Traditional information transmission media (Tra) | Are farmers able to search information through traditional media, such as radio? 0=no; 1=yes | 0.810 | 0.780 | +
Modern information transmission media (Mod) | Are farmers able to search information through modern media, such as computers? 0=no; 1=yes | 0.570 | 0.330 | +
Organization (Org) | Are farmers able to search information through organizations, such as agrotechnical stations? 0=no; 1=yes | 0.550 | 0.300 | +
DP | Region dummy: Ya'an (YA), Chengdu (CD), Nanchong (NC), Chongqing (CQ) | — | — | —
Y | 0=unwilling to pay; 1=willing to pay | — | — | —
3 Empirical Analysis and Results Discussion

With regard to data processing, this paper adopts SPSS 17.0 to estimate model $u_i$, and the results are shown in Table 2.

Table 2. Estimated results of the model

Influencing factors | B | S.E. | Wald | df | Sig. | OR | 95% CI lower bound | 95% CI upper bound
Gen | 0.536 | 0.398 | 1.807 | 1 | 0.179 | 1.708 | 0.782 | 3.730
Age | -0.318 | 0.247 | 1.650 | 1 | 0.199 | 0.728 | 0.448 | 1.182
Edu | 1.046 | 0.188 | 30.841 | 1 | 0.000 | 2.846 | 1.967 | 4.116
Inc1 | 0.609 | 0.678 | 0.808 | 1 | 0.369 | 1.839 | 0.487 | 6.943
Inc2 | 1.429 | 0.365 | 15.348 | 1 | 0.000 | 4.173 | 2.042 | 8.529
Car | 0.241 | 0.212 | 1.293 | 1 | 0.256 | 1.273 | 0.840 | 1.930
Req | 1.659 | 0.353 | 22.120 | 1 | 0.000 | 5.255 | 2.632 | 10.491
Ris | 0.331 | 0.334 | 0.981 | 1 | 0.322 | 1.392 | 0.724 | 2.678
Lab | 0.156 | 0.726 | 0.046 | 1 | 0.829 | 1.169 | 0.282 | 4.852
Lan | 0.533 | 0.145 | 13.537 | 1 | 0.000 | 1.704 | 1.283 | 2.264
Rel | 0.758 | 0.572 | 1.757 | 1 | 0.185 | 2.134 | 0.696 | 6.542
Tra | 0.235 | 0.490 | 0.230 | 1 | 0.631 | 1.265 | 0.484 | 3.303
Mod | 0.362 | 0.417 | 0.756 | 1 | 0.385 | 1.437 | 0.635 | 3.251
Org | 0.903 | 0.354 | 6.509 | 1 | 0.011 | 2.468 | 1.233 | 4.941
YA | – | – | – | – | – | – | – | –
CD | 0.546 | 0.636 | 0.739 | 1 | 0.390 | 1.727 | 0.497 | 6.001
NC | 0.251 | 0.703 | 0.128 | 1 | 0.721 | 1.286 | 0.324 | 5.105
CQ | 0.915 | 0.894 | 1.048 | 1 | 0.306 | 2.497 | 0.433 | 14.401
C | -3.911 | 0.632 | 38.290 | 1 | 0.000 | 0.020 | |

−2 Log likelihood: 223.472; Cox & Snell R²: 0.349; Nagelkerke R²: 0.501; Hosmer-Lemeshow: 9.026 (Sig.: 0.340); model Sig.: 0.000; percentage correct: 83.3%.

NB: "–" stands for the reference group (without variables).
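The OR column and confidence interval in Table 2 follow from B and S.E. alone: OR = exp(B), and the 95% Wald confidence interval is exp(B ± 1.96 · S.E.). A quick check against the Edu and Lan rows (small rounding differences against the SPSS output are expected):

```python
import math

def or_with_ci(b, se, z=1.96):
    """Odds ratio and 95% Wald confidence interval from a logit coefficient."""
    return math.exp(b), math.exp(b - z * se), math.exp(b + z * se)

or_edu, lo_edu, hi_edu = or_with_ci(1.046, 0.188)   # ~ (2.846, 1.969, 4.114)
or_lan, lo_lan, hi_lan = or_with_ci(0.533, 0.145)   # ~ (1.704, 1.283, 2.264)
```

These reproduce the tabulated Edu values (2.846, 1.967, 4.116) and Lan values (1.704, 1.283, 2.264) to within rounding of the reported coefficients.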
3.1 Statistical Test of the Model

The Sig. value of the model is 0.000, and its percentage of correct classification is 83.3%; that is, the model is statistically significant. Meanwhile, the model's −2 Log likelihood value is 223.472, and its Cox & Snell R² and Nagelkerke R² are 0.349 and 0.501 respectively. The estimation on the large sample shows that the goodness of fit of the model is high, and the independent variables explain the dependent variable well.

3.2 Analysis of the Estimated Results

Among the individual characteristic variables, farmers' education is significant at the 1% level, with a Wald value of 32.455. This suggests that, with the other contributing factors controlled, the higher a farmer's education is, the more likely the farmer is to pay for participating in the fee-based agricultural information service. The possible reason is that a comparatively high education helps farmers identify the profit opportunities implied in information.

Among the household characteristic variables, the statistical significance of both the income of agricultural production and operation and the willingness to request information service reaches the 1% level, and their influence (OR value) reaches 4.173 and 5.255 respectively. The possible reason is that the more agricultural production and operation income a household has (the average income is above the intermediate level, as shown by the statistical results in Table 1), the lower the relative cost of information investment is, and accordingly the higher the willingness to pay. Moreover, a comparatively high information demand will generally stimulate the farmer to identify and seize information in favor of agricultural production.
Among the agricultural production characteristic variables, the statistical significance of household cultivated land reaches the 1% level. This suggests that, with the other contributing factors controlled, the larger the household cultivated land is, the more easily the fee-based agricultural information service is accepted by farmers. The possible reason is that, for agricultural production with labor-saving technical progress, the larger the household cultivated land is, the more likely a scale effect is produced by the utilization of the information service.

Among the accessibility variables of the information service, the organization variable has a significant influence. The possible reason is that, compared with interpersonal communication, the information services offered by organizations are generally more authoritative and practical, which therefore strengthens, to some extent, the possibility that farmers choose the organizations.
4 Conclusions and Suggestions

4.1 Research Conclusions

Under the condition of information cost, the contributing factors of farmers' decision-making behavior are complicated. Whether a farmer is willing to pay for the information service is influenced not only by his individual characteristics, but also by the household characteristics, the characteristics of agricultural production and operation, and the accessibility of the information service. The research suggests that farmers' willingness to pay is significantly influenced by many factors, such as farmers' education, the income of agricultural production and operation, the household cultivated land, the willingness to request agricultural information, and whether information can be obtained through organizations.

4.2 Strategic Suggestions

Build up a complete information market, following a path that combines 'public-interest information service' with 'commercial information service'. In terms of the supply of the agricultural information service, the government should no longer take a commanding position; it should give priority to the role of the market mechanism, gradually delimit the range of free agricultural service and the range of fee-based service, and follow a path combining 'information service of public interest' with 'commercial information service'. As time goes by, the government should pay more attention to building up and managing the micro-environment of the information service, with the proportion of 'public-interest information service' it provides going down and the proportion of 'commercial information service' going up. Only by doing so can the information service market gradually mature.

Enhance farmers' capacity to search for and utilize information through various means.
First, organize farmers, which can not only let them enjoy the scale effect of information searching but also effectively reduce their opportunity cost of information searching; second, establish an agricultural information brokerage system, through which qualified information brokers can help
The Fee-Based Agricultural Information Service
farmers find accurate information in a timely manner and overcome the disadvantage of inadequate information. Third, help farmers obtain basic and vocational education, which is a fundamental measure for promoting farmers' effective information demand.
Research on Belt Conveyor Monitoring and Control System

Shasha Wang, Weina Guo, Wu Wen, Ruihan Chen, Ting Li, and Fang Fang

Hebei Polytechnic University, Electronic Information Engineering, Tangshan 063009, Hebei, China
[email protected],
[email protected],
[email protected],
[email protected],
[email protected],
[email protected]
Abstract. The design objective of the belt conveyor control system is to minimize physical labor by making full use of energy sources and information other than manual operation, so as to increase efficiency and reduce accidents. This paper makes an in-depth analysis of the feasibility and requirements of ARM-based monitoring of a belt conveyor system, taking the LM3S8962 chip as the research object, and designs a belt conveyor monitoring system. Fault detection and control of the belt conveyor are achieved through on-site sensor signal acquisition, remote monitoring of the belt conveyor, and motor protection. Keywords: μC/OS-II, LM3S8962, CAN bus, belt conveyor.
1 Introduction

Conveyors used in production logistics have become major and common logistics equipment, and the belt conveyor is an important type of conveyor. Belt conveyors are widely used in underground mine tunnels, mine surface transportation systems, open-pit mines and ore dressing plants, for both inclined and horizontal transport. Intelligent management of belt conveyors has received some study, but existing systems offer only limited functions and the results are not satisfactory [1]. The belt conveyor control system uses the signals sent by remote sensors at various points to realize start-stop control and fault detection of the belt conveyor [2]. The system comprises the control system hardware and software design and the μC/OS-II port; RS485 and the CAN bus protocol are used to establish a CAN control network. The central control system uses the cost-effective 32-bit LM3S8962 microcontroller [3], which provides high performance and a wide range of integrated peripherals and is positioned for cost-sensitive applications that require both process control and connectivity. This makes the whole system simple, small in size and low in power consumption. The device also supports the CAN bus, PWM with dead-band and other powerful functions; its peripheral circuitry is simple yet its capabilities are strong, its resources are rich, it runs stably and it is inexpensive, so it can be widely used in all kinds of large-scale industrial control. The system is equipped with

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 334–339, 2010. © Springer-Verlag Berlin Heidelberg 2010
the multitasking real-time operating system μC/OS-II. Because μC/OS-II is simple to port and has fast real-time response, the reliability, stability and real-time performance of the system are better guaranteed. It is therefore widely used on 16-bit, 32-bit and 64-bit microcontrollers, DSPs and microprocessors, shortening system development cycles and reducing development costs.
2 System Hardware Platform Design

The conveyor control system is designed to make the industrial control system convenient for users to operate, to increase the safety coefficient of industrial control operation, and to reduce the time staff spend operating the system. The basic functions of the conveyor control system are therefore: belt conveyor start-up control; belt fault detection and control; belt fault protection control; manual emergency stop control, etc. [4] The main chip of the system is the LM3S8962, a processor from Luminary Micro's Stellaris family of microcontrollers. The chip has 256 KB of flash and 64 KB of SRAM, a storage capacity that meets the design requirements without external expansion. The LM3S8962 provides high performance and extensive integration and is positioned for cost-sensitive applications that require both process control and connectivity. The microcontroller is designed for industrial applications, including remote monitoring, test and measurement equipment [5], factory automation and so on. The LM3S8962 contains two serial modules, so the RS485 interface chip 75LBC184 is used as the communications module. The PCF8574 chip is selected as the microcontroller's external I/O expander; its current consumption is very low, and its output latch has a large current drive capability and can drive LEDs directly. The overall system block diagram is illustrated in Fig. 1.
Fig. 1. Overall system block diagram. The main modules of the system include: the key-input module, motor drive module, CAN bus module, detection module, alarm task module and emergency stop module
3 μC/OS-II Porting

μC/OS-II is a real-time embedded operating system with open source code. Tasks run independently of one another, and timing requirements are easy to implement correctly, which makes the design and extension of real-time applications easier, so that the application
design process is considerably simplified. It has been successfully ported to digital signal processors (DSPs) and 16/32-bit MCUs.

3.1 μC/OS-II System Structure

To port μC/OS-II, we must first understand its architecture [6]. Fig. 2 shows the relationship between the μC/OS-II file structure and the hardware. When μC/OS-II is used in an application, the application-specific part of the system is provided by the user application software and the μC/OS-II configuration section [7].
Fig. 2. μC/OS-II system structure. The μC/OS-II configuration files are application-specific; they include OS_CFG.H and INCLUDES.H. This code is independent of the processor type.
3.1 Based on LM3S8962’s μC/OS-II Transplantation This system chose LM3S8962 ARM and LMlink compiler accord with the operating system. The host through the JTAG interface target board to establish cross-development environment for debugging [8]. μC/OS-II migration hierarchical structure are shown in Fig. 3. In the process of transplantation, μC/OS-II of the core code without modification can be placed directly on μc/osIISource folder. Startup.S file in the catalogue of Target is the microcontroller startup code and interrupt vector table, Target.C and Target.H provide MCU initialization function TargetInit() and other simple peripheral control API. The μc/osII\Ports directory is stored μC/OS-II transplant code, which includes OS_CPU_C.C, OS_CPU_A.ASM, OS_CPU.H three necessary files. The μC/OS-II when transplanted to the ARM processor need to modify these three files. 1. OS_CPU.H File OS_CPU.H file contains the μC/OS-II need constant, macros, and custom types. OS_CPU.H need to provide for growth in the direction of the stack. Stack growth direction different processors is not the same, CortexM3 the stack is high address to low address growth, so the definition of constant OS_STK_GROWTH 1 [9].
OS_CPU.H also defines the macro OS_TASK_SW(), which is called to perform a task-level context switch. Because the context switch is closely tied to the processor, it simply invokes the processor-specific routine: #define OS_TASK_SW() OSCtxSw()
Fig. 3. Layered structure of the μC/OS-II port. In the User layer, the user directory stores user code and settings. In the middle layer, the middleware directory stores intermediate files provided by the vendor or written by the user. The μC/OS-II Source directory, in the source layer, stores the μC/OS-II source code. The porting layer consists of two directories: the μC/OS-II Ports directory and the Target directory. In the driver library layer, the LM3S_DriverLib directory stores the LM3S MCU driver functions and directly faces the target board hardware; in general, apart from μC/OS-II itself, all other code accesses the hardware directly or indirectly through it.
2. OS_CPU_C.C File

OS_CPU_C.C defines C functions; OSTaskStkInit() is CPU-dependent, so this function must be modified when porting the code.

3. OS_CPU_A.ASM File

The μC/OS-II port requires five simple assembly-language functions to be written:
OS_ENTER_CRITICAL(): disable the interrupt sources.
OS_EXIT_CRITICAL(): re-enable the interrupt sources.
OSStartHighRdy(): run the current highest-priority task.
OSCtxSw(): called when a task voluntarily gives up the CPU.
OSIntCtxSw(): called by the interrupt-exit function OSIntExit() to perform task switching on return from interrupt.
Because the LM3S MCU implements only the upper three bits of the 8-bit interrupt priority field, priority 1 is shifted left by 5 to give 00100000B, and the macro is defined as OS_CRITICAL_INT_PRIO EQU (1 << 5) [10]. OSIntCtxSw() restores R4–R11 from the new task's stack, restores the interrupted context and performs the exception return. Once this work is done, adapting to the actual
situation of the target board requires only adjusting the three files in the Target directory; μC/OS-II can then run on the LM3S8962 microcontroller. μC/OS-II does not use the C data types int, short, long, etc., because they are compiler-dependent and therefore not portable; instead, μC/OS-II redefines its own data types:

typedef unsigned char  BOOLEAN;
typedef unsigned char  INT8U;
typedef unsigned short INT16U;
typedef unsigned long  INT32U;
typedef double         FP64;
typedef unsigned int   OS_STK;
typedef unsigned int   OS_CPU_SR;
μC/OS-II needs to disable, before entering a critical section, all interrupts that could disturb the execution of the critical code, and to re-enable them before leaving the critical section. By modifying the macro OS_CRITICAL_INT_PRIO, the priority level at and below which interrupts are masked on entering critical code can be set. μC/OS-II defines two macros to disable and enable interrupts, OS_ENTER_CRITICAL() and OS_EXIT_CRITICAL():

#define OS_CRITICAL_METHOD 3
#define OS_ENTER_CRITICAL() {cpu_sr = OS_CPU_SR_Save();}   /* disable interrupts */
#define OS_EXIT_CRITICAL()  {OS_CPU_SR_Restore(cpu_sr);}   /* enable interrupts */

When these macros are used, the local variable cpu_sr must be defined:

#if OS_CRITICAL_METHOD == 3
OS_CPU_SR cpu_sr = 0;
#endif

The main tasks of the whole control system are: the UART0 read/write task, the alarm task, the display task, and the start-stop task. UART0 read/write task: mainly responsible for interacting with the host computer, parsing and executing the control commands sent by the host computer, and finally returning the execution results to the PC. Alarm task: cyclically polls the remote sensors; if the system is abnormal, it sends signals to the control system, and when a fault occurs it sends a signal to the display task. Display task: after receiving the signal sent by the alarm task, it judges the fault type and displays the corresponding text on the LCD screen.
Start-stop task: mainly responsible for starting and stopping the belt conveyor. When the sensors send signals showing that the start conditions are met, the system can be started; in case of a fault, the task judges the fault signal and decides whether the system needs to be stopped. After the above task code is completed, the related interrupt service routines are written and the operating system is started, and the application begins to run. If other functions need to be added, new tasks are created and the appropriate system services are called.
4 Conclusion

The belt conveyor control system designed with the LM3S8962 microcontroller has rich functionality, powerful real-time processing capability, and features that are easy to extend. Using the μC/OS-II operating system simplifies programming and enhances modularity. Such high-performance microcontrollers combined with a real-time operating system have become a trend in control system development. On this basis, the RS-485 communication mode can be replaced by the CAN bus to increase the communication range.
References

1. Qiang, L., Wen, D., Wu, S.-t.: Study on Graft of ARM and μC/OS-II-Based Embedded System. J. Journal of Shandong University of Science and Technology (Natural Science), Qingdao (2006)
2. Yanwei, Y.: Research on Belt Off-tracking Monitoring and Rectification Device for Belt Conveyer. J. Mining & Processing Equipment, Luoyang (2002)
3. Wang, J., Ji, Q.: The Real-time Embedded OS μC/OS-II Porting on ARM7 Processor. J. Computer Knowledge and Technology, Hefei (2009)
4. Tian-jing, Z.: Causes and Precaution against the Belt off Its Course in Belt Conveyor. J. Coal Mine Machinery, Haerbin (2001)
5. Li-zhao, Z., Xiao-rong, C., Yan-fen, L.: The Research of Porting Embedded Operating System μC/OS-II on ARM. J. Instrumentation Technology, Shanghai (2009)
6. Yun, Y., Yong, Z.: Research and Implementation of Porting μC/OS-II Based on ARM7. J. Computer Engineering and Design, Beijing (2009)
7. Zhang, X.: The Task Analysis of Interface in μC/OS-II Operating System. J. Software Guide, Wuhan (2009)
8. Jiang, F.: The Porting of Real-Time Operation System μC/OS-II on ARM. J. Microcomputer Information, Beijing (2008)
9. Labrosse, J.J.: MicroC/OS-II: The Real-Time Kernel, 2nd edn. CMP Media, LLC, New York (2002)
10. Ji-kui, F., Yan-jing, S.: Design of Intelligent Monitoring Substation Node Based on μC/OS-II. Computer Engineering and Design, Beijing
Distribution of the Stress of Displacement Field during Residual Slope in Residual Ore Mining Based on the Computer Simulation System Zhiqiang Kang1,*, Yanhu Xu2, Fuping Li 1, Yanbo Zhang1, and Ruilong Zhou3 1
College of Resources and Environment, Hebei Polytechnic University, Tangshan 063009, Hebei, China 2 College of Sciences, Hebei Polytechnic University, Tangshan 063009, Hebei, China 3 Geology and Mineral Resources Exploration and Development Center of Jiangxi Province, Nanchang 330001, Jiangxi, China
[email protected]
Abstract. The residual ore bodies outside the open-pit boundary are recovered by vein-following adit mining so as to maximize ore recovery, and the residual slope is formed after this mining is completed. Mining the hanging-wall ore along the vein destroys the slope rock structure, redistributes the stress in the slope rock mass, and affects the stability of the slope. Taking the residual slope left after mining the hanging-wall ore along the vein at Shirengou Iron Mine as an example, this paper establishes a three-dimensional numerical model of vein-following mining of the high residual slope of Shirengou Iron Mine, and analyzes the stability of the residual slope by numerical simulation using the large-scale finite element analysis software ANSYS. The conclusion shows that adit recovery has little effect on the overall stability of the open-pit slope of the Shirengou iron mine, and the residual mining slope is stable. Keywords: ANSYS; residual slope; numerical simulation; veining up mining; stress field.
1 Introduction

At present, owing to the strain on ore resources, the residual ore bodies outside the open-pit boundary are recovered by vein-following adit mining to maximize ore recovery, and a residual slope is formed after mining is completed. A number of studies on open-pit slope stability analysis have been carried out at home and abroad [1, 2, 3], but research on the stability of slopes left after open-pit mining is still limited. In Shirengou Iron Mine, a residual slope has been formed by vein-following mining in the northern mining area. Mining the hanging-wall ore along the vein destroys the slope rock structure, the stress distribution becomes more complex, the engineering geology deteriorates, and the likelihood of instability of the slope and the mined-out areas increases. Therefore, a three-dimensional numerical model of the vein-following mining is established and, through numerical simulation [4, 5, 6], ANSYS is used to analyze the stability of the residual slope of Shirengou Iron Mine. *
Corresponding author.
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 340–346, 2010. © Springer-Verlag Berlin Heidelberg 2010
2 Establishing the Model of Hanging Wall Slope Mining

2.1 Rock Mechanics Parameters of Shirengou Iron Mine

According to the geological data, the parameters describing the physical-mechanical characters of the rock mass of Shirengou Iron Mine can be obtained from indoor rock mechanics experiments using the strength reduction coefficient method [7, 8]; they are shown in Table 1.

Table 1. Parameters on physical-mechanical characters of rock mass of Shirengou Iron Mine

Name                                 Topsoil   Gneiss regolith   Unweathered gneiss   Magnetite quartzite   Fracture zone
Density ρ (g/cm³)                    1.94      2.69              2.74                 3.47                  2.61
Modulus E (GPa), horizontal          1.86      3.38              4.82                 5.33                  2.71
Modulus E (GPa), vertical            1.33      1.86              2.62                 2.87                  1.69
Poisson's ratio ν, horizontal        0.40      0.30              0.27                 0.24                  0.36
Poisson's ratio ν, vertical          0.42      0.33              0.29                 0.27                  0.38
Shear modulus G (GPa), horizontal    0.66      1.30              1.90                 2.15                  1.04
Shear modulus G (GPa), vertical      0.52      0.89              1.31                 1.34                  0.73
2.2 Establishing the Model of the Residual Slope of Shirengou Iron Mine

ANSYS is the most widely used general-purpose finite element analysis software and mainly consists of three modules: the pre-processing module provides powerful solid modeling and meshing tools; the analysis and calculation modules cover structural analysis, fluid dynamics analysis, electromagnetic field analysis, acoustic field analysis, piezoelectric analysis and multi-physics coupling analysis; the post-processing modules output or display the results as charts and curves [9, 10]. A typical ANSYS analysis can be divided into three steps: (1) build the model; (2) apply loads and solve; (3) examine the results and draw conclusions. Vein-following adit mining of the hanging-wall ore is carried out in the northern slope of Shirengou Iron Mine, and mined-out areas of the 1#, 2# and 3# adits under the open slope are formed after mining. According to the projection maps and floor plans of the 1#, 2# and 3# adits as well as the 0 m plan of the northern slope, the height of the slope is 135 m. The total length of the modeled slope reaches 310 m, with 60 m taken to the left of the 1# adit and 100 m to the right of the 3# adit to eliminate boundary effects. In the same way, the slope length in front and behind is taken as 200 m, and the rock thickness below the adit floor is 12 m. The 1# adit is nearly perpendicular to the slope and is simplified as perpendicular when building the three-dimensional model; the 2# adit is rotated 8° to the left relative to the 1# adit, and the 3# adit is rotated 14° to the right. To realistically simulate the residual open slope of Shirengou Iron Mine after recovery, the anisotropic three-dimensional solid element SOLID64 is chosen to represent the transversely anisotropic rock mass, in accordance with the anisotropic nature of the
rock mass. The boundary conditions are imposed as node constraints: the left and right sides and the back of the model are constrained in the X and Z directions, and the bottom in the X, Y and Z directions. According to the actual situation, only the self-weight of the slope is considered; therefore, a reverse acceleration equal to gravity is imposed in the Y direction.
3 Stress Field Simulation Results and Analysis of Shirengou Residual Slope Mining

3.1 The Self-weight Stress Field Analysis before Residual Slope Mining

The contour maps of the stress field distribution in all directions under self-weight stress before slope mining are shown in Fig. 1, Fig. 2 and Fig. 3. From these figures,
Fig. 1. Stress contour maps of X direction
Fig. 2. Stress contour maps of Y direction
Fig. 3. Stress contour maps of Z direction
we can see that, under self-weight alone, the maximum compressive stress in the X direction is 2.3 MPa and the maximum tensile stress is 1.22 MPa. In the Y direction, the maximum compressive and tensile stresses are 3.01 MPa and 0.0397 MPa; in the Z direction they are 2.62 MPa and 0.492 MPa. The data show that under gravity the maximum compressive stresses lie at the bottom of the slope, while the tensile stresses lie at its top, with values far below the ultimate tensile strength of the slope rock and not enough to cause tension damage. The slope is in a steady state.

3.2 Residual Slope Stress Field Analysis after the Hanging Wall Ore Mining

The contour maps of the slope stress field distribution at the end of mining are shown in Fig. 4, Fig. 5 and Fig. 6.
Fig. 4. Stress contour maps of X direction
Fig. 5. Stress contour maps of Y direction
From Fig. 4, Fig. 5 and Fig. 6 we can see that the maximum compressive stress in the X direction is 4.7 MPa, 0.97 MPa larger than at the end of mining the second section, and the maximum tensile stress is 1.98 MPa, 1.741 MPa larger. In the Y direction, the maximum compressive and tensile stresses are 6.06 MPa and 0.43 MPa, respectively 2.41 MPa and 0.242 MPa larger than in the second section. The maximum compressive stress in the Z direction is 4.03 MPa, 0.81 MPa smaller than at the end of the second section, while the tensile stress is 0.783 MPa, 0.189 MPa larger.
Fig. 6. Stress contour maps of Z direction
At the end of hanging-wall ore mining, the volumes of the mined-out areas of the 1#, 2# and 3# adits are 1.9×10⁴ m³, 1.1×10⁴ m³ and 2.8×10⁴ m³. The tensile stress around the mined-out areas increases as the areas grow, so the slope rock is in a state of stress adjustment. In the X and Y directions, the range of tensile stress above the 1#, 2# and 3# adits has expanded compared with mining the second section of the ore body, almost the whole upper fault shows tensile stress, and the maximum tensile stress has also increased. Practice shows that the destruction of rock is mainly caused by tensile stress; the increasing range of tensile stress in the X and Y directions will surely increase the possibility of tensile damage to the adits. Therefore, the adit mouths must be strengthened, and so must the roofs of the mined-out areas. Although the maximum tensile stress in all three directions has increased, the increase is not large and no pronounced stress concentration appears. The slope as a whole is still in a stable condition after recovery of the residual ore.
4 Conclusion

By establishing a three-dimensional model of the Shirengou open-pit iron mine after vein-following adit mining and carrying out numerical simulation of the residual mining slope with the ANSYS finite element software, the following conclusions can be drawn from the stress and displacement analysis in the X, Y and Z directions.

1) Numerical simulation on the three-dimensional model of the residual iron ore mining slope shows that vein-following adit mining at Shirengou iron mine has not disrupted the overall structure of the slope rock, and the slope remains safe and stable.

2) As mining of the slope advances along the vein and the roadway and mined-out volume increase, the tensile stress near the fault in the upper part of the rock grows. In the Y direction, the maximum increment of tensile stress during mining is twice the original value; the tendency of the mined-out area and the fault to move increases, and so does the sinking tendency of the upper strata. As rock is broken mainly by tensile stress, the supports and roofs of the mined-out areas should be strengthened so that the tensile stress effects on the roof and the fault can be reduced or eliminated.

3) At the end of hanging-wall ore mining, the tensile stress at the upper parts of the 1#, 2# and 3# adits continues to increase in the Y direction. This indicates that, as time goes on, the adit mouths may be destroyed; they should therefore be strengthened in time to prevent their collapse after mining.

4) As mining advances along the vein and the roadway and mined-out volume grow, the displacement of the slope rock gradually increases, especially near the fault; slope management should therefore be strengthened and the necessary precautions taken, because the mined-out areas break the initial state of equilibrium at the F10 fault and, in turn, the deformation will affect the safety of the mined-out area roofs and the roadway.
Therefore, roof management should be strengthened, and necessary supporting and protective measures must be taken according to the actual situation.
Acknowledgements

The authors wish to acknowledge the funding support from the Hebei Province Natural Science Foundation (No. E2009000782), and the laboratory support from the Hebei Province Key Laboratory of Mining Development and Safety Technique.
References

1. Griffiths, D.V., Gordon, A.F.: Probabilistic slope stability analysis by finite elements. J. Journal of Geotechnical and Geoenvironmental Engineering 130, 507–518 (2004)
2. Kumsar, H., Aydan, O.: Dynamic and static stability assessment of rock slopes against wedge failures. J. Rock Mech. Rock Engineering 33, 55–60 (2000)
3. Junfeng, Z., Zhengguo, L.: Stability analysis of fractured rock slope under strong seismic loading. J. Mechanics in Engineering 32, 24–28 (2010)
4. Manzari, M.T., Nour, M.A.: Significance of soil dilatancy in slope stability analysis. J. Journal of Geotechnical and Geoenvironmental Engineering 126, 75–80 (2000)
5. Duncan, J.M.: State of the Art: Limit equilibrium and finite-element analysis of slopes. J. Journal of Geotechnical Engineering 122, 577–596 (1996)
6. Wei, W., Li, L., Xin, L.: 3D dynamical response analysis of slope in open-pit using ANSYS. J. Industrial Minerals & Processing 2, 27–29 (2010)
7. Yu, N., Wei-ya, X., Wen-tang, Z.: Application in complicated high slope with strength reduction method based on discrete element method. J. Rock and Soil Mechanics 28, 569–574 (2007)
8. Wei-ya, X., Wu, X.: Study on slope failure criterion based on strength reduction and gravity increase method. J. Rock and Soil Mechanics 28, 505–511 (2007)
9. Kai-song, W., Su-mei, L.: Research of ANSYS Modal Analysis Based on Different Modeling Methods. J. Coal Technology 28, 12–14 (2009)
10. Shi-jian, H.: The application of slope stability analysis based on ANSYS in Shenzhen. J. Building and Decoration 78, 91–92 (2008)
Numerical Simulation on Inert Gas Injection Applied to Sealed Fire Area*

Jiuling Zhang1,2, Xinquan Zhou1, Wu Gong1, and Yuehong Wang3

1 China University of Mining and Technology (Beijing), Beijing 100083, China 2 KaiLuan (Group) Limited Liability Company, Tangshan 063018, Hebei, China 3 School of Resource & Environment, Hebei Polytechnic University, Tangshan 063009, Hebei, China
[email protected]
Abstract. To provide a theoretical basis for applying inert gas injection measures to mine fire disaster relief, this paper establishes a mathematical model of the influence of inert gas injection on the flow field of the fire zone, carries out numerical simulation with the FLUENT software, and finally compares the simulation results with experimental results. The results show that while the velocity of the injected inert gas is low, the gas layer of the closed fire zone still exists; the effect whereby the inert gas pushes the front gas layer forward and dilutes the mixed gas layer behind it is verified; moreover, the numerical simulation of the experiments confirms the reliability of the inert gas injection results. The results are of great significance for mastering the change of state of the fire zone when inert gas is injected and for forecasting the explosion suppression effect, and they give good guidance on using inert gas to suppress explosions in the fire zone. Keywords: Mine fire; inert gas injection; numerical simulation; FLUENT.
1 Introduction

Sealing the fire zone and filling it with inert gas is a common method in mine fire disaster relief [1]. Scholars have carried out many numerical simulation studies on the application of inert gas injection to mine fire relief. Using the finite element method, Li Zong-xiang [2, 3, 4] proposed a new method that uses numerical simulation to determine reasonable nitrogen injection parameters (nitrogen injection flux, location and time); based on a non-homogeneous air-leakage seepage equation, Li Hai-yang and Jia Jin-zhang established an unsteady numerical model of nitrogen injection in the mining district and spontaneous combustion prevention in the mined-out area, solved with an upwind finite element method; Wang Hua and Ge Ling-mei [5] studied the influence of the inert gases CO2 and N2 on the explosive limits of gas concentration and on the critical oxygen concentration; Wang Sheng-shen and Jiang Jun-cheng [6] established a gas flow model for a fire tunnel and conducted a numerical simulation; Zhou Bo-xiao [7] studied the influence of the "piston effect" on the gas concentration distribution in the sealed zone. *
Supported by the "973" National Key Basic Research and Development Program (No. 2005CB221506) and the National Natural Science Foundation of China (Nos. 50874111, 50534090).
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 347–353, 2010. © Springer-Verlag Berlin Heidelberg 2010
J. Zhang et al.
Gas explosion in a sealed fire zone caused by inert gas injection is a complex turbulent diffusion process involving heat and mass transfer [8], and the internal gas changes in the fire zone during the actual inert gas injection process are not yet clear. This paper uses FLUENT to numerically simulate the change of state of the sealed fire zone after inert gas injection and verifies the reliability of the results.
2 Mathematical Model

Mass conservation, energy conservation and momentum conservation are basic laws of the objective world that must be obeyed in fluid flow. Therefore, the continuity equation, the energy equation and the equation of motion, together with the equation of state of the inert gas, constitute the basic equation group of inert gas motion. Considering the change of buoyancy force during inert gas injection, the governing equations consist of the inert gas flow equations together with the heat and mass transfer equations. After Reynolds averaging, the k-ε turbulence model is used to close the time-averaged equations. Once closed, the governing equations for the influence of inert gas injection on the sealed fire zone comprise the continuity equation, the momentum equations, the energy equation, the species equation, the turbulent kinetic energy equation (k) and the turbulent kinetic energy dissipation rate equation (ε). All of these equations satisfy the following general form:

∂(ρφ)/∂t + div(ρvφ + Jφ) = Sφ .  (1)
At the initial time (t = 0) the parameters take the following values: the pressure P is P0; the axial velocity u is the average wind speed u0; the vertical velocity v and the horizontal velocity w are 0; the temperature T equals the air temperature T0; all gas fractions are the same as in air; and the initial concentration of a component is C0. It is assumed that after a tunnel fire breaks out, the fire source section burns with flames and undergoes complex chemical reactions. To study the movement law of combustible gases during inert gas injection, the specific combustion process is disregarded: no combustion model is used, and the fire source is simply treated as a constant-temperature zone with temperature Tf. At the fire source the smoke concentration is assumed to be a constant Cf, and the source continuously releases heat and smoke. The governing equations are the instantaneous conservation equations describing the environmental gas, the inert gas, and the gas flows within the roadway and their interaction. Combined with proper initial and boundary conditions, the equations can be solved with the CFD software FLUENT to obtain the gas flow condition, the velocity and concentration field distributions, and their change characteristics at any time within the solution region of the sealed fire tunnel.
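To make the general transport form (1) concrete, the sketch below integrates a one-dimensional scalar transport equation (upwind convection plus central diffusion) explicitly in time. It is an illustrative toy, not the 3-D turbulent model the paper solves with FLUENT; the grid size, velocity, diffusivity, and boundary values are all assumed.

```python
# Minimal 1-D explicit solver for a scalar transport equation of the general
# form (1): d(rho*phi)/dt + u*d(phi)/dx = Gamma*d2(phi)/dx2 + S.
# All parameter values are illustrative; the paper solves the full 3-D
# time-averaged equations with a k-epsilon closure in FLUENT.

def transport_1d(n=50, steps=200, dx=0.1, dt=0.001, u=1.0, gamma=0.05, s=0.0):
    phi = [0.0] * n
    phi[0] = 1.0                     # inlet: fixed concentration (Dirichlet)
    for _ in range(steps):
        new = phi[:]
        for i in range(1, n - 1):
            adv = -u * (phi[i] - phi[i - 1]) / dx            # upwind convection
            dif = gamma * (phi[i + 1] - 2 * phi[i] + phi[i - 1]) / dx ** 2
            new[i] = phi[i] + dt * (adv + dif + s)
        new[-1] = new[-2]            # outlet: zero gradient (fully developed)
        phi = new
    return phi
```

With these (stable) step sizes the injected scalar advances from the inlet and diffuses, qualitatively like the inert gas front described in the paper.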
Numerical Simulation on Inert Gas Injection Applied to Sealed Fire Area
Table 1. The values of φ, Γφ and Sφ in the governing equations

Equation              φ    Γφ         Sφ
Continuity equation   1    0          0
U equation            u    μeff       −∂p/∂x + ∂/∂x(μeff ∂u/∂x) + ∂/∂y(μeff ∂v/∂x) + ∂/∂z(μeff ∂w/∂x)
V equation            v    μeff       −∂p/∂y + ∂/∂x(μeff ∂u/∂y) + ∂/∂y(μeff ∂v/∂y) + ∂/∂z(μeff ∂w/∂y)
W equation            w    μeff       −∂p/∂z + ∂/∂x(μeff ∂u/∂z) + ∂/∂y(μeff ∂v/∂z) + ∂/∂z(μeff ∂w/∂z) + ρ0 g (T − T0)/T
H equation            h    μeff/σh    −qr
CS equation           CS   μeff/σc    0
K equation            k    μeff/σk    Gk + Gb − ρε
ε equation            ε    μeff/σε    (ε/k)[C1(Gk + Gb) − C2 ρε]
3 Mesh Generation and Numerical Simulation
3.1 Mesh Generation
To make the results comparable, the size of the computational domain was set the same as in the experimental program; the specific program is given in document [9]. The stable airflow state before inert gas injection was taken as the original condition, and the simulated flow field 3 min after ignition was taken as the initial condition for injecting inert gas into the laneway. The boundary conditions match the experimental conditions. The tunnel exit is treated as fully developed flow, so the parameters at the mesh nodes of the exit section have no effect on the parameters of the interior nodes nearest the exit; the exit is modeled as a pressure outlet, with the initial pressure taken from the experiments. After iteration begins, the pressure is updated at each time step; the updated pressure is the average pressure of all cells near the border at the end of the previous time step's iteration. As shown in Fig. 1, the simulated tunnel is divided into 303,278 cells.
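The outlet-pressure update described above can be sketched as a simple averaging step; the cell pressures and the number of time steps below are placeholders, not values from the simulation.

```python
# Sketch of the pressure-outlet update described in the text: after each time
# step, the boundary pressure is reset to the average pressure of the cells
# adjacent to the exit. The cell values below are placeholders.

def update_outlet_pressure(cell_pressures_near_exit):
    """Return the new outlet pressure as the mean of near-boundary cell pressures."""
    return sum(cell_pressures_near_exit) / len(cell_pressures_near_exit)

p_outlet = 101325.0                      # initial pressure, e.g. from experiment
for step in range(3):                    # a few illustrative time steps
    near_exit = [p_outlet - 2.0, p_outlet + 1.0, p_outlet + 4.0]  # placeholder cells
    p_outlet = update_outlet_pressure(near_exit)
```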
Fig. 1. Mesh of the laneway
3.2 Analysis of Results
Fig. 2. Distribution of CO2 concentration at 3 min
Fig. 3. Distribution of CO2 concentration at 10 min
Fig. 4. Distribution of CO2 concentration at 17 min
The simulated flow field 3 min after ignition was taken as the initial condition for injecting inert gas into the laneway. The boundary conditions match the experimental conditions; the injection rate is 257.6 m³/h. The simulation results are shown in Figs. 2, 3, and 4, which give the simulated CO2 concentration distribution in the longitudinal middle section of the experimental tunnel (X = 1.2) at 3, 10, and 17 min after injection began. The figures show that the inert gas flows longitudinally and mixes with the gas layer ahead of it; the length over which the inert gas is well mixed with the gas layer is 5 m at 3 min, 8 m at 10 min, and about 10 m at 17 min, when injection ends. The figures also show that the gas layer descends at the interface between the inert gas and the gas layer, and the gas may still be stratified after injection; however, the turbulence of the injected inert gas disturbs the flow of the gas layer and weakens the stratification, so the stratification after injection is weaker than before. The dilution effect of inert gas injection on the combustible gas layer depends on the distance between the gas layer and the injection port (the closer, the better), and the dilution at the middle and lower parts is better than at the upper part. The inert gas can also drive the combustible gas to migrate along the injection direction, raising the smoke concentration at the front. This verifies the piston effect at the front end and the dilution effect at the back end of inert gas injection on the combustible gas.
4 Comparison of Simulation with Experimental Results
Fig. 5 shows the simulated CO concentration at different heights (1.1 m, 1.8 m, 2 m), with the distance from the injection port as the abscissa. Within 0-10 m of the injection port, the CO concentration is the same at all three heights, showing that the inert gas and the smoke are well mixed in this area. In the 10-15 m region the CO concentration is noticeably higher than in the 0-10 m region, and the concentration at heights of 1.8 m and 2.0 m is higher than at 1.1 m, showing that the smoke layer still exists there and the inert gas is not well mixed with the smoke in that area.

Fig. 5. Simulated CO concentration at different heights (abscissa: distance from the injection port, m)
Figs. 6, 7, and 8 compare the experimental and simulated CO concentrations at different heights (1.1 m, 1.8 m, 2 m); A, B, and C are the measuring surfaces of the experiment. Ignoring experimental error, the experimental values follow the same trend as the simulation results, and both show a turning point at measuring surface B (10 m from the injection port); this indicates that the experimental results are reliable.
Fig. 6. Comparison of experimental and simulated CO concentration at 1.1 m
Fig. 7. Comparison of experimental and simulated CO concentration at 1.8 m
Fig. 8. Comparison of experimental and simulated CO concentration at 2.0 m
5 Conclusions
Numerical simulation was carried out with FLUENT based on the mathematical model built above, and the following conclusions are drawn by comparing the simulation and experimental results. 1) According to the simulated CO2 concentration distributions at 3, 10, and 17 min after injection, the gas in the closed fire zone may still be stratified after injection. 2) The comparison of experimental and simulated CO concentrations at different heights verifies the reliability of the experimental results. 3) The piston effect at the front end and the dilution effect at the back end of inert gas injection on the combustible gas are verified. 4) In actual mine fire relief, data monitoring and collection are often neglected in the urgency of relieving the disaster; this calls for greater use of numerical simulation for analysis in the future. Some approximate results were obtained in this initial 3D numerical simulation of inert gas injection into a fire zone; although its validity and applicability have been experimentally verified, errors remain, and the numerical simulation of inert gas injection and migration must be continuously improved in the future.
Acknowledgments
The authors wish to thank the He-gang coal mine rescue group and all other partners in this project for their helpful support. The anonymous reviewers are acknowledged for their helpful and careful comments, which improved the quality of this paper.
References
1. Xinquan, Z., Bing, W.: Theory and practice of mine fire disaster relief, pp. 176–302. China Coal Industry Publishing House, Beijing (1996)
2. Zongxiang, L., Haiyang, L., Jinzhang, J.: Numerical simulation of preventing spontaneous combustion by nitrogen injection in goaf of Y-type ventilation face. Journal of China Coal Society 5, 93–597 (2005)
3. Zongxiang, L., Longbiao, S., Wenjun, Z.: Numerical simulation study of nitrogen injection process for fire prevention and extinguishment in goaf. Journal of Hunan University of Science & Technology (Natural Science Edition) 9, 5–9 (2004)
4. Guoliang, C.: Study of mechanism of action and application of nitrogen extinguishing in closed space, pp. 50–56. CUMTB (2003)
5. Hua, W., Lingmei, G., Jun, D.: Experimental study of using inert gas to suppress mine gas explosion. Mining Safety & Environmental Protection 35, 4–8 (2008)
6. Juncheng, J., Xingshen, W.: Numerical analysis of smoke in roadway in a mine fire. Journal of China Coal Society 22, 165–170 (1997)
7. Boxiao, Z.: Study of the mechanism of gas explosion induced by relieving a closed fire zone, pp. 78–89. CUMTB (2006)
8. Adamus, A., Min, E.: Review of nitrogen as an inert gas in underground mines. Journal of The Mine Ventilation Society of South Africa (2001)
9. Jiuling, Z.: Effect of inert gas injection on transport law of gas in closed fire zone, pp. 65–67. CUMTB (2009)
AUTO CAD Assisted Mapping in Building Design Wenshan Lian and Li Zhu College of Civil and Architecture Engineering, Hebei Polytechnic University, Tangshan Hebei, 063009 China
[email protected]
Abstract: In civil engineering, the software AUTO CAD has replaced traditional hand-drawn engineering drawing and become the mainstream design approach. Its efficiency, accuracy, ease of modification, and clear image rendering make it the first choice of professionals, and its excellent three-dimensional capabilities have given it a major role not only in mapping but also in interior design, architecture, and landscape architecture. Keywords: AUTOCAD; assisted mapping; architect; project; digital technology.
1 Foreword
"Being Digital" has quietly arrived. As people's way of life changes, building design will certainly follow new trends and bring new ideas to building designers. This paper describes the links between digital techniques and architectural development, proposes tasks for architects in responding to the development of digital technology, and analyzes future trends in architectural design.
2 Differences Between Traditional Drawing and Computer Graphics
For thousands of years, engineering drawing was done by hand on paper with straightedge and scale. Such drawings demanded absolute professionalism, accuracy, and good drafting skill; drawing was very time-consuming and labor-intensive, and modification was troublesome, which also made the architect's social status irreplaceable. The emergence of CAD (computer-aided design) drawing greatly enhanced productivity and the speed of building design, leaving architects more time for creative thinking. Its efficiency, accuracy, ease of modification, and clear image rendering have made it the inevitable choice of many designers [1].
3 Related Work
Computer graphics came to building design somewhat later than to geological mapping, and because an architect's concerns on a project lie more with the architectural design concept, research, and theoretical discussion than with the drawing software itself, the use of computer-aided drawing software has become a habit that tends to be overlooked.

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 354–360, 2010. © Springer-Verlag Berlin Heidelberg 2010

Today digital technology is used more and more in construction, and it has developed from the stage of computer-aided drawing to the stage of computer-aided design and analysis [2]. The layer feature of the software distinguishes an architect's drawing process and finished drawings from the simple CAD usually seen in civil engineering mapping: because the different functions are kept independent, the architect can switch to the corresponding layer to show different professional systems, such as the water supply, electrical, and HVAC layers, and can turn complex layers on or off to read the drawing more clearly. Layers also give architects the flexibility to draw the lines of complex plans, elevations, and sections, and even detail drawings, and they make construction knowledge easier to convey, so that communication becomes easier. Figure 1 shows the CAD layer drop-down menu; different layer names are displayed in different colors. Figure 2 shows a completed construction plan: the construction plane, including partition walls, the column grid, and labeling, is drawn on the basis of the axes in the red layer, and the lines of each category have a unified color and are saved as the different layers of Figure 1. If a layer is not needed, its light-bulb icon can simply be switched off, or the layer frozen. (Fig. 1, Fig. 2)
Fig. 1. AUTO CAD layer
Fig. 2. Interface
In design, the computer is normally used to calculate, analyze, and compare many different schemes to determine the optimal one. All kinds of design information, whether numbers, text, or graphics, can be stored in the computer's memory or external storage for years and quickly retrieved. Designers usually start with sketches, and the heavy work of turning design sketches into working drawings can be handed to the computer. Design results generated by the computer can quickly be displayed graphically, enabling designers to judge and modify the design in a timely manner, and the computer can edit graphics and perform data processing such as zooming in and out, translation, and rotation. CAD thus relieves designers of repetitive drafting work, lets them focus on the design itself, shortens the design cycle, and improves the quality of design.
Fig. 3. Elevation and detail drawing
In recent years, with the concepts of building information modeling and building life-cycle management, how to combine computer-aided design with architectural design across the entire construction process, from preliminary design and construction design to construction itself, has become impossible to ignore. Such tools support users' early decision-making on new construction, create better and more accurate design documents, and enhance the assessment of the various possible options. Computer-aided graphic design raises fundamental questions about design-process thinking on the basis of extensive research into architectural design theory; by analyzing the design features of building types, the design stages, and the design process, models of each plan, elevation, and section view are established, independently or in combination with visualization software, in order to understand and grasp the architectural design process [3] (Fig. 3).
4 Advantages of CAD Computer-Aided Mapping
Scale: CAD drawings are displayed at scales such as 1:100 or 1:1000, but during drawing the computer performs geometric operations, so 1000 mm on the CAD screen corresponds to 1000 mm on the actual site; designers therefore do not have to spend energy converting by scale. The view and zoom functions of CAD software let a very small area be perceived at enlarged rather than actual size, like examining a miniature sculpture at home, which ensures order and design in the detailed parts to a degree that traditional hand drawing cannot match. For detail drawings whose scale must be adjusted, the adjustment is simply a digital modification of the drawing that can be applied in bulk.
Elevation: The three-dimensional elevation functions of CAD greatly benefit surveying and mapping and make it easy to understand the real scene intuitively. Building design is implemented on the basis of geological mapping of the site, and the site's elevation and shape directly affect the architectural design; with CAD, the height differences of the design can be clearly identified, which also helps architects display the design and communicate with owners.
Line width: The line-width levels of CAD software, with their colors, help designers distinguish design types, and the widths help designers classify objects at different levels. Alignment is simple, rapid, and diverse, greatly enriching architectural details.
Annotation: Annotation in the software is standardized.
In the approval process, the standardization of CAD software makes the various types of tagging in the drawing clearer, whether for water supply, the building itself, construction material details, or the building body design, and more convenient to consult.
Revision: Computer software has unparalleled advantages in modifying lines: it is fast, and a drawing can be revised repeatedly. Copying modules is highly precise; work that might take dozens of hours by hand can be completed in seconds with the copy feature. For example, for repeated identical floors, if one storey differs only in some respect, only the affected parameters need to be modified in the markup; line widths and weights can likewise be replaced rapidly with the Format Painter tool.
Input and output: Thanks to digital technology, architects need not discuss before design how the drawings will be laid out on sheets; they only draw a template frame around the diagram when the design is finished, and then print according to it. Digital technology can also output drawings immediately and repeatedly. The features of real-time change, convenient temporary error correction, and output are shown in Fig. 4.
Fig. 4. Print
Calculation: CAD can easily calculate lengths, perimeters, areas, and even surface areas; from the beginning of drawing, line segments, curves, and polylines can be drawn to the required length. These computations only require the designer to click a few key points with the mouse or select the outer edge of the region whose area is to be calculated. The advantage of this digital computing mode is that it completely removes the difficulty of manually drawing and calculating areas on complex drawings.
Intellectual property protection: As information systems become more widespread, core competitiveness increasingly comes from technical inventions, patents, and innovation. With the popularity of the computer, more and more inventions and innovations rely on computer technology, so many core confidential documents exist in electronic form on computers; indeed, much core technical documentation consists of design drawings, program source code, and other electronic documents. With CAD, architects can encrypt drawings so that the electronic documents cannot be printed without permission; such intellectual property protection is a great advantage of CAD software in its applications.
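The area computation described above reduces to simple geometry on the drawing coordinates. The sketch below shows what an "area" command effectively performs, using the shoelace formula; the room outline is a made-up example drawn 1:1 in millimetres, as described in the scale discussion above.

```python
# Shoelace formula: the kind of computation behind a CAD "area" command.
# The polygon below (a 4 m x 3 m room outline, in mm) is a made-up example.

def polygon_area(points):
    """Area of a simple polygon given as [(x, y), ...] in drawing units."""
    n = len(points)
    total = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]      # wrap around to close the polygon
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

room = [(0, 0), (4000, 0), (4000, 3000), (0, 3000)]   # mm, drawn 1:1
print(polygon_area(room))   # 12000000.0 mm^2 = 12 m^2
```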
5 CAD Mapping with Modern Digital Technology Software has a variety of software combined with a very excellent performance, a few years ago the architect had had the use of CAD computer software and PHOTOSHOP, 3DS MAX and other graphics software combined with production and a large number of excellent works, but this year the situation of the computer division of more sophisticated Under the architects of most proficient in CAD drawing software. This effect is good and operation more simple and the effect of mapping software, which greatly reduces the design intent but good grounding in the United States patients the burden of weak architects.
Fig. 5. Indoor floor plan produced by combining AUTO CAD and PHOTOSHOP
Fig. 5 is an interior floor plan produced with CAD in conjunction with PHOTOSHOP: the base map and frame were first prepared in CAD, and PHOTOSHOP was then used for the fills. CAD itself also has flat-fill and three-dimensional capabilities, but its advantage lies in engineering and construction, that is, in building plans, facades, and sections, and thus in construction drawings; flat fills and three-dimensional production are better handed to the specialized software, such as MAX, which dominates in that respect.
6 Conclusion and Outlook
New design methods in architecture, such as non-linear design, parametric design, and generative design, mean that this new digital technology is of great significance for design optimization in building design and construction. The professional application of CAD to architectural design is known as computer-aided architectural design (CAAD), and its advantages are increasingly evident in the speed of design; unlike PHOTOSHOP, 3D MAX, and other digital software, it does not demand a high-performance computer configuration or long waits for slightly complicated computations. CAD greatly improves the quality of architectural design, reduces design costs, shortens the design cycle, and increases competitiveness. Digital technology, as an industrialized form of production, will revolutionize the development process and greatly change people's way of life and thought, and it will also have an extensive and profound impact on the development of building [4]. International science is developing rapidly: three-dimensional printers have even appeared that can form future building models from digital models. Digital technology has advantages in its own field, may take over part of a building's functions, and may also create new functional requirements and architectural types; this requires architects and designers to keep up with the pace, constantly updating their knowledge of information technology and their social awareness, in order to carry out the construction of the city more efficiently.
References
1. Chen, Y.W.: The development of digital technology and architecture. Housing Industry 1 (2009)
2. Tian, L.: The basic process of building design. Construction Times 3 (2005)
3. Liu, Y., Liang, T.: Digital technology in architectural design. Fujian Architecture 6 (2008)
4. Hong, E.M.: The digital technology in architectural design. The Chinese Science and Technology Information 20 (2007)
5. Kurokawa, F., Sakemi, J., Sukita, S.: Dynamic characteristics of DC-DC converter with novel digital peak current-injected control. In: International Telecommunications Energy Conference, pp. 1–6 (2009)
6. Santiago, R.D., Raabe, A.L.A., et al.: Architecture for learning objects sharing among learning institutions - LOP2P. IEEE Transactions on Learning Technologies 3(2), 91–95 (2010)
7. Murroni, M., Scalise, S., Vanelli-Coralli, A., et al.: Convergence of digital TV systems and services. International Journal of Digital Multimedia Broadcasting, doi:10.1155/2009/817636
8. Haist, T., Osten, W.: Ultrafast digital-optical arithmetic using wave-optical computing. In: Dolev, S., Haist, T., Oltean, M. (eds.) OSC 2008. LNCS, vol. 5172, pp. 33–45. Springer, Heidelberg (2008)
9. Reith, M., Carr, C., Gunsch, G.: An examination of digital forensic models. International Journal of Digital Evidence 1(3) (2002)
10. Hu, H.F., Yang, B.: Achievement of three-dimension fictitious flight technique based on Mapgis platform. Geology of Anhui 1 (2009)
11. Foster, K., Stelmack, A., Hindman, D.: Sustainable Residential Interiors. Wiley, Chichester (2007)
12. Lockwood, T., Walton, T.: Building Design Strategy. Allworth Press (2008)
13. Frederick, S.: Building Design and Construction Handbook, 6th edn. McGraw-Hill, New York (2009)
The OR Data Complement Method for Incomplete Decision Tables
Jun Xu 1, Yafeng Yang 1, and Baoxiang Liu 2
1 College of Light Industry, Hebei Polytechnic University, Tangshan 063009, China
2 College of Science, Hebei Polytechnic University, Tangshan 063009, China
[email protected]
Abstract. Uncertain information cannot be processed by Pawlak rough set theory, so missing data must be completed before the theory can be applied in knowledge acquisition. In this paper, the OR transposed table is proposed, based on the idea of attribute reduction, to target incomplete decision tables. By transforming the information table into an OR transposed table and applying attribute reduction, an OR value can be selected from the possible attribute values, thereby transforming an incomplete decision table into a complete one. Finally, an example is analyzed. Keywords: Incomplete; data complement; attribute reduction; rough sets.
1 Introduction
Pawlak rough set theory takes complete information as its research object and equivalence relations as its foundation: an equivalence relation divides the universe into disjoint equivalence classes. In practical applications, however, the massive amounts of information collected are often incomplete and damaged. If some attribute value of an object is unknown, the decision table is incomplete. These unknown attribute values not only contain certain information but also have a very important influence on decision-making. Processing incomplete information is therefore important for both theory and application, and it is an important link in knowledge discovery. Data complement is a method that transforms an incomplete information system into a complete one. At present, data complement methods often directly delete the objects containing missing data and then complement the lost data by statistical methods; the processing of multi-valued missing data has seldom been studied. This paper proposes a data complement method based on attribute reduction, called OR data complement. The OR transposed table is defined, based on the idea of attribute reduction, to target incomplete decision tables; an appropriate value, the OR value, is selected from the possible attribute values by establishing the OR table, and the incomplete decision table is thereby transformed into a complete one [1, 2, 3].

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 361–367, 2010. © Springer-Verlag Berlin Heidelberg 2010
2 Information System and Attribute Reduction
2.1 Information System
An information system is represented as (U, A, F), where U = {x1, …, xn} is a nonempty finite set of n objects, A = {a1, …, am} is a nonempty finite set of m attributes, F = {f_aj | f_aj : U → V_aj, j ≤ m} is a set of relations between U and A, V_aj is the range of aj, and f_aj : U → V_aj is an information function. A knowledge representation system is also called an information system and is represented as a relation table; a knowledge representation system with condition and decision attributes is a relation table. An information system can be represented as the relation table shown in Table 1.

Table 1. Information system
U\A   a1    a2    a3    …   am−1     am
x1    a11   a12   a13   …   a1,m−1   a1m
x2    a21   a22   a23   …   a2,m−1   a2m
⋮     ⋮     ⋮     ⋮         ⋮        ⋮
xn    an1   an2   an3   …   an,m−1   anm
2.2 The Lower and Upper Approximations
Rough set theory holds that knowledge is closely related to classification: classification groups objects with small differences into one class, and the underlying relation is an indiscernibility relation, also called an equivalence relation. A knowledge base can be represented as K = (U, R), where U is a nonempty finite set called the universe of discourse. Let R be an equivalence relation on U; U/R denotes the family of all equivalence classes of R, and [x]_R denotes the equivalence class of R containing the element x ∈ U. If P ⊆ R and P ≠ ∅, the intersection of all equivalence relations in P is also an equivalence relation, called the indiscernibility relation on P and denoted ind(P), with [x]_ind(P) = ∩_{R∈P} [x]_R .
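The equivalence classes of an indiscernibility relation can be computed by grouping objects that agree on every attribute in P. A minimal sketch, using a hypothetical information table:

```python
# Equivalence classes of ind(P): objects are indiscernible when they agree on
# every attribute in P. The toy information table below is hypothetical.

def partition(table, attrs):
    """Return U/ind(attrs) as a list of sets of object names."""
    classes = {}
    for obj, row in table.items():
        key = tuple(row[a] for a in attrs)     # attribute-value signature
        classes.setdefault(key, set()).add(obj)
    return list(classes.values())

U = {"x1": {"a1": 1, "a2": 0},
     "x2": {"a1": 1, "a2": 0},
     "x3": {"a1": 0, "a2": 1}}
print(partition(U, ["a1", "a2"]))   # x1 and x2 are indiscernible
```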
In rough set theory, an imprecise concept is described by two precise sets, its lower and upper approximations. The pair (U, R) is called an approximation space. For any X ⊆ U, the lower and upper approximations of X are defined as

R₋(X) = {x | [x]_R ⊆ X, x ∈ U} ,    (1)

R⁻(X) = {x | [x]_R ∩ X ≠ ∅, x ∈ U} .    (2)
The difference between the upper and lower approximations is called the R-boundary of X, denoted bn_R(X):

bn_R(X) = R⁻(X) − R₋(X) .    (3)

The R-positive region of X is defined as

pos_R(X) = R₋(X) .    (4)

The R-negative region of X is defined as

neg_R(X) = U − R⁻(X) .    (5)
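Equations (1)-(5) translate directly into set operations on a partition U/R. A self-contained sketch, with a hypothetical universe and partition:

```python
# Lower/upper approximations and the derived regions, computed from a
# partition U/R. The universe and partition below are hypothetical.

def approximations(partition, X):
    """Return (lower, upper) approximations of X w.r.t. a partition U/R."""
    low, up = set(), set()
    for cls in partition:
        if cls <= X:
            low |= cls          # [x]_R contained in X      -> eq. (1)
        if cls & X:
            up |= cls           # [x]_R intersects X        -> eq. (2)
    return low, up

U_R = [{"x1", "x2"}, {"x3"}, {"x4", "x5"}]      # hypothetical U/R
X = {"x1", "x2", "x4"}
low, up = approximations(U_R, X)
boundary = up - low                              # eq. (3)
positive = low                                   # eq. (4)
negative = set().union(*U_R) - up                # eq. (5)
print(low, up, boundary)
```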
2.3 Attribute Reduction
Attribute reduction is one of the core topics of rough set theory. Actual data often contain many erroneous and redundant attributes, which degrade classification quality; an intelligent system must therefore determine which attributes are relevant to classification and which are irrelevant or redundant. Attribute reduction deletes the irrelevant and unimportant attributes while keeping the classification ability unchanged.
Definition 1 (Necessary Attribute): Let R be a set of equivalence relations and r ∈ R. If ind(R) ≠ ind(R − {r}), then r is called necessary in R; otherwise r is unnecessary in R. If every r ∈ R is necessary in R, then R is called independent; otherwise R is dependent.
Definition 2 (Reduction): Suppose Q ⊆ P. If Q is independent and ind(Q) = ind(P), then Q is called a reduction of P. Equivalently, for a subset Q ⊆ P: if pos_Q(D) = pos_P(D) and for every a ∈ Q, pos_{Q−{a}}(D) ≠ pos_Q(D), then Q is a reduction of P. Obviously, P may have many reductions; the set of all reductions of P is written red(P).
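Definition 2 can be checked mechanically: Q is a reduction of P when it preserves the positive region with respect to the decision attribute and no attribute of Q can be dropped. A sketch, using a hypothetical decision table with decision attribute d:

```python
# Check whether Q is a reduction of P: pos_Q(D) == pos_P(D) and every
# attribute of Q is necessary. The small decision table is hypothetical.

def partition(table, attrs):
    """Return U/ind(attrs) as a list of sets of object names."""
    classes = {}
    for obj, row in table.items():
        classes.setdefault(tuple(row[a] for a in attrs), set()).add(obj)
    return list(classes.values())

def positive_region(table, attrs, decision):
    """Union of condition classes contained in some decision class."""
    dec_classes = partition(table, [decision])
    pos = set()
    for cls in partition(table, attrs):
        if any(cls <= d for d in dec_classes):
            pos |= cls
    return pos

def is_reduct(table, Q, P, decision):
    if positive_region(table, Q, decision) != positive_region(table, P, decision):
        return False                       # Q does not preserve pos_P(D)
    return all(positive_region(table, [a for a in Q if a != b], decision)
               != positive_region(table, Q, decision)
               for b in Q)                 # every attribute of Q is necessary

T = {"x1": {"a1": 1, "a2": 0, "d": 1},
     "x2": {"a1": 1, "a2": 1, "d": 0},
     "x3": {"a1": 0, "a2": 1, "d": 0}}
print(is_reduct(T, ["a2"], ["a1", "a2"], "d"))   # a2 alone decides d here
```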
3 OR Data Complement Method Based on Attribute Reduction
3.1 OR Transposed Table
(A_R, U, F) is called an OR transposed table, where A_R = {a1, …, ak} is a reduction of A, U is a nonempty finite set of n objects, and F is a set of relations between U and A. In effect, the OR transposed table takes the set of attributes as rows and the set of objects as columns.

Table 2. OR transposed table
A\U   x1    x2    x3    …    x(n-1)     xn
a1    a11   a21   a31   …    a(n-1)1    an1
a2    a12   a22   a32   …    a(n-1)2    an2
…     …     …     …     …    …          …
ak    a1k   a2k   a3k   …    a(n-1)k    ank
J. Xu, Y. Yang, and B. Liu
For example, suppose U = {x1, x2, x3, x4}, where x1 is Beijing, x2 is Tianjin, x3 is Hebei province and x4 is Shanxi province, and A = {a1, …, a5}, where a1 is the ratio of regional R&D expenditure to GDP, a2 is the ratio of regional R&D personnel to all in-service staff, a3 is the gross internal R&D expenditure of the region, a4 is the regional technique-market exchange value, and a5 is the regional number of authorizations of the three types of patent.

Table 3. Average value of statistics indicators

U\A   a1     a2     a3      a4      a5
x1    6.85   1.48   200.7   204.5   6686
x2    1.52   0.63   30.4    33.8    1943
x3    0.52   0.09   31.0    6.7     3132
x4    0.64   0.11   12.7    2.3     1031
Table 3 is an information system, and its OR transposed table is shown in Table 4.

Table 4. OR transposed table of the statistics indicators

A\U   x1      x2     x3     x4
a1    6.85    1.52   0.52   0.64
a2    1.48    0.63   0.09   0.11
a3    200.7   30.4   31.0   12.7
a4    204.5   33.8   6.7    2.3
a5    6686    1943   3132   1031
3.2 OR Value
Suppose the attribute a_j of x_i has n possible values; the OR value is the most probable of them. Since a reduction is an independent attribute set, i.e. every attribute in the reduction is necessary, the elements of A_R each form a separate equivalence class when these necessary attributes are taken as objects before classification. The process of determining the OR value transforms the incomplete information system into a complete information system. The idea is as follows:
1) Delete the samples containing uncertain data, obtaining the training set U′;
2) Discretize the data of U′;
3) Perform attribute reduction;
4) Establish the OR tables;
5) Compute A_R/U for every OR table, and obtain the OR value.
The OR Data Complement Method for Incomplete Decision Tables
365
The OR table varies with the value taken by the uncertain data, and therefore the OR table established in step 4 is not unique. In addition, when the uncertain data belongs to a redundant attribute it can be deleted directly, because the data itself is redundant in that case.
4 Example Analysis

4.1 Procedures
Fig. 1 illustrates our experimental procedure.
Fig. 1. Experimental procedure
In our experiments, we consider each data set as a transaction set.
1) Data discretization: the algorithms in [4, 5] can be applied to process the data.
2) Attribute reduction: the reduction algorithms in [6, 7] can be applied to find a reduction.
3) Establish the OR table: the OR transposed table takes the reduction generated in step 2 as rows and the set of all objects as columns.
4) Obtain the OR value: compute A_R/U for every OR table, and obtain the OR value.

4.2 Example
The vegetation coverage ratio in a certain area is shown in Table 5, in which a1, a2, …, a10 express the coverage ratios of flowers, grass and other vegetation, and x1, x2, x3, x4 express four areas. In Table 5, f_a5(x1) was confused with other values during input, so there is an uncertain value f_a5(x1) in the information system S because of a subjective omission in the statistics. The possible values of f_a5(x1) are f_a5(x1) ∈ {0.8, 1.3, 2.2, 3.2}.
Table 5. The information system S
U\A   a1    a2    a3    a4    a5         a6    a7    a8    a9    a10
x1    0.8   1.5   1.5   2.7   fa5(x1)    2.7   3.4   4.8   5.6   5.6
x2    1.3   13    4.5   2.7   2.5        3.4   1.3   3.4   2.7   0.2
x3    1     1     3.8   2.1   1.8        2.7   0.6   2.7   2.1   0.1
x4    1     0     0     1     1          1     0     1     0     0
1) Remove x1, which contains the uncertain data f_a5(x1), and obtain the training set S∗ shown in Table 6.

Table 6. The training set S∗
U\A   a1    a2   a3    a4    a5    a6    a7    a8    a9    a10
x2    1.3   13   4.5   2.7   2.5   3.4   1.3   3.4   2.7   0.2
x3    1     1    3.8   2.1   1.8   2.7   0.6   2.7   2.1   0.1
x4    1     0    0     1     1     1     0     1     0     0
S ∗ is a complete information system.
2) Data discretization on S∗. We obtain a new information system, shown in Table 7, by data discretization.

Table 7. Attribute discretization of the complete information system

U\A   a1   a2   a3   a4   a5   a6   a7   a8   a9   a10
x2    0    0    2    0    0    1    0    1    0    0
x3    0    0    2    0    0    1    0    1    0    0
x4    1    0    0    1    1    1    0    1    0    0
Table 7 is a complete information system with discrete data, and it can be processed by rough set theory directly.
3) Attribute reduction: we obtain the reduction A_R = {a1, a3, a4, a5, a7, a8}.
4) Establish the OR table. Obviously, objects x2 and x3 belong to the same class; therefore we choose one of them, say x2, and obtain the OR table S_i shown in Table 8.

Table 8. OR table S_i
A\U   x1    x2   x4
a1    0     0    1
a3    0     2    0
a4    1     0    1
a5    Ti    0    1
a7    2     0    0
a8    3     1    1
Ti expresses the possible discrete values of f_a5(x1). Since 0.8 ∈ (0.8, 1.5], 1.3 ∈ (0.8, 1.5], 2.2 ∈ (1.5, 2.7] and 3.2 ∈ (2.7, 3.4], the corresponding discrete values are 0, 1, 1, 2.
5) Obtain the OR value. Obviously T_OR = 2, so f_a5(x1) = 3.2.
5 Conclusion

The OR data complement method based on attribute reduction classifies every attribute into a separate equivalence class by establishing an OR table for the uncertain data. The uncertain attribute value is then determined from these independent relationships through the procedure of obtaining the OR value, which achieves the purpose of translating the incomplete information system into a complete information system. The advantage of the method is that not only is the data completed, but an attribute reduction result is obtained as well.
References

1. Wang, X.: A Data Complement Method for Incomplete Decision Tables. Journal of Tianjin University of Science & Technology 22(3), 62–64 (2007)
2. Li, Y.: A Data Complement Method of Incomplete Decision Table. Science & Technology Information 20, 508–509 (2008)
3. Li, P., Wu, Q.: Completing data algorithms based on probability similarity. Application Research of Computers 26(3), 881–883 (2009)
4. Chen, H., Zhang, M., Yang, J.-a.: Method of Data Discretization Based on Rough Set Theory. Computer Engineering 44(3), 30–32 (2008)
5. Ning, W., Zhao, M.: More Improved Greedy Algorithm for Discretization of Decision Table. Computer Engineering and Applications 43(3), 173–174 (2007)
6. Qu, B., Lu, Y.: Fast Attribute Reduction Algorithm Based on Rough Sets. Computer Engineering 33(11), 7–9 (2007)
7. Lu, S., Liu, F., Hu, B.: An attribute reduction algorithm based on attribute dependence. Journal of Huazhong University of Science & Technology 36(2), 39–41 (2008)
8. Zhang, W., Liang, Y., Wo, Z.: Information System and Knowledge Discovery. Science Publishers, Beijing (2003)
9. Zhang, W., Liang, Y., Wo, Z.: Rough Set Theory and Knowledge Acquisition. Xi'an Jiaotong University Press, Xi'an (2001)
10. Kantardzic, M.: Data Mining: Concepts, Models, Methods and Algorithms. Wiley-Interscience, Piscataway (2003)
Comprehensive Evaluation of Banking Sustainable Development Based on Entropy Weight Method

Donghua Wang and Baofeng Li

Department of Mathematics and Information Science, Tangshan Teachers College, Tangshan, China
[email protected]
Abstract. To study the development state of a bank, an evaluation index system for banking development, comprising three subsystems (expenditure, income and fixed assets), is first established from the view of sustainable development. Then, based on the entropy weight method, this paper gives a comprehensive evaluation method for banking sustainable development. Finally, combining the practical situation of one bank in Tangshan, the sustainable state of the bank is analyzed with this method, and the outcomes are rechecked with the results of principal component analysis, which shows that the results are credible. In a word, it provides some scientific reference for managers.

Keywords: Bank; sustainable development; coordination development; entropy weight method; comprehensive evaluation.
1 Introduction

Since the concept of sustainable development was put forward in the 1980s, the study of sustainable development has increasingly become an important subject receiving more and more attention from international organizations, national governments and scholars [1, 2, 3], as well as in the banking sector. Currently, research on banking sustainable development is still in a qualitative state, so quantitative study has become necessary, and in such study the method of weight setting is the key. Existing weighting methods include subjective weighting methods (such as the analytic hierarchy process [4], the expert judging method [5] and fuzzy comprehensive evaluation [6]), objective weighting methods (such as the entropy weight method [7], the multiobjective optimization method [8] and principal component analysis [9]) and combination weighting approaches (such as the linear weighted comprehensive assessment method [10]). Generally, the entropy weight method is believed to reflect the utility value of an index, and the index weights it gives are more reliable than those of the analytic hierarchy process, so the entropy weight method is more suitable for the study of sustainable development. In order to avoid subjectivity and overcome the error influence of extreme conditions on the evaluation in real applications, this paper attempts to use the entropy weight method to research the sustainable development of a bank.

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 368–375, 2010. © Springer-Verlag Berlin Heidelberg 2010
2 Entropy Weight Method

There are m samples and n indexes, which compose an original data matrix X = (x_ij)_{m×n}. Because there are many differences among the indexes in dimension, magnitude and positive-negative orientation, we need to standardize the original data.

(1) Standardization of the original data (i = 1, …, m; j = 1, …, n). Using the standardization methods in [11], we get the matrix Y = (y_ij)_{m×n}.

(2) Information entropy and utility value (j = 1, …, n). The information entropy of the jth index can be computed by

e_j = −K Σ_{i=1}^{m} y_ij ln y_ij ,  where K = 1/ln m .

Then the utility value of the jth index can be calculated by d_j = 1 − e_j.

(3) Evaluation index weight (j = 1, …, n). The weight of the jth index is

w_j = d_j / Σ_{j=1}^{n} d_j .

(4) Comprehensive evaluation value (i = 1, …, m):

f_i = Σ_{j=1}^{n} w_j y_ij ,

where f_i is the comprehensive evaluation value of the ith sample.

(5) Sequencing of the comprehensive evaluation values. The evaluation value computed by the entropy weight method describes the development state of the corresponding index.
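Steps (1)-(4) can be sketched as follows. As an assumption (the standardization of [11] is not reproduced here), each column is normalized to proportions before the entropy is taken, which is a common variant of the method:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for an (m, n) matrix of positive,
    already-standardized index values."""
    m, _ = X.shape
    P = X / X.sum(axis=0)            # column proportions p_ij
    K = 1.0 / np.log(m)
    # e_j = -K * sum_i p_ij ln p_ij, treating 0 * ln 0 as 0
    logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -K * (P * logs).sum(axis=0)
    d = 1.0 - e                      # utility values d_j
    return d / d.sum()               # weights w_j

def composite_scores(Y, w):
    # f_i = sum_j w_j * y_ij
    return Y @ w
```

A column that is constant across samples has entropy 1 and therefore weight 0: it carries no discriminating information, which is exactly the "utility value" idea above.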
3 Comprehensive Evaluation of Sustainable Development

Considering the sustainable development of the banking system from three aspects (expenditure, income and fixed assets), that is, the banking sustainable system includes three subsystems, the sustainability of the whole system can be measured from the sustainability of the subsystems and the coordination relations among them.

3.1 Sustainability Coefficient of Banking Sustainable Development (i = 1, …, m)

According to the hierarchical characteristics of the index system, the sustainability coefficient of banking sustainable development can be computed by the comprehensive weighted method. The sustainability coefficient of the ith sample is

S_i = Σ_{k=1}^{3} W_k F_ki ,

where W_k = D_k / D, D_k = Σ_j d_j (summed over the indexes of the kth subsystem), D = Σ_{k=1}^{3} D_k, and F_ki is the index value of the ith sample in the kth subsystem.
D. Wang and B. Li
For the sake of convenience, the sustainability coefficients can be divided into three categories on the interval [0, 1].

Table 1. Classification of the banking sustainability level

Sustainability coefficient S   0 ≤ S ≤ 0.5           0.5 ≤ S ≤ 0.8               0.8 ≤ S ≤ 1
State of sustainability        Weak sustainability   Elementary sustainability   Strong sustainability
This coefficient can reflect the development situation of the bank as well as the development trend and direction of the bank.

3.2 Coordination Coefficient of Banking Sustainable Development (i = 1, …, m)

The coordination coefficients can be used to describe the coordination relations among the indexes. The coordination coefficient of the ith sample can be computed by

C_i = 1 − S_i / F̄ ,

where S_i here is the standard deviation of all index values of the ith sample (not to be confused with the sustainability coefficient) and F̄ is the mean of all index values of the ith sample. The coordination coefficients calculated by this formula also lie on the interval [0, 1]. For convenience of observation, the coordination coefficients can be divided into three categories according to the classification of the sustainability coefficient, as expressed in Table 2.

Table 2. Classification of the coordination ability of banking development

Coordination coefficient C      0 ≤ C ≤ 0.5                 0.5 ≤ C ≤ 0.8                         0.8 ≤ C ≤ 1
State of coordinating ability   Uncoordinated development   Elementary coordination development   Coordinated development
This coefficient can reflect the development relationship among the subsystems.

3.3 Comprehensive Evaluation

The comprehensive evaluation needs to look at the development level, speed and coordination of the samples, which are embodied by the sustainability coefficients and coordination coefficients. The sustainability and coordination coefficients form a two-dimensional evaluation space whose horizontal axis expresses the sustainability coefficient and whose vertical axis expresses the coordination coefficient; a curve in this space expresses the level of sustainable development. In accordance with the division of the sustainability and coordination coefficients, the evaluation space can be divided into nine regions, as shown in Table 3.
Table 3. Characteristics of banking sustainable development

Characteristics of sustainable development                             Sustainability coefficient S   Coordination coefficient C
A  Strong sustainability and coordination development                  0.8 ≤ S ≤ 1                    0.8 ≤ C ≤ 1
B  Strong sustainability and elementary coordination development       0.8 ≤ S ≤ 1                    0.5 ≤ C ≤ 0.8
C  Strong sustainability and uncoordination development                0.8 ≤ S ≤ 1                    0 ≤ C ≤ 0.5
D  Elementary sustainability and coordination development              0.5 ≤ S ≤ 0.8                  0.8 ≤ C ≤ 1
E  Elementary sustainability and elementary coordination development   0.5 ≤ S ≤ 0.8                  0.5 ≤ C ≤ 0.8
F  Elementary sustainability and uncoordination development            0.5 ≤ S ≤ 0.8                  0 ≤ C ≤ 0.5
G  Weak sustainability and coordination development                    0 ≤ S ≤ 0.5                    0.8 ≤ C ≤ 1
H  Weak sustainability and elementary coordination development         0 ≤ S ≤ 0.5                    0.5 ≤ C ≤ 0.8
I  Weak sustainability and uncoordination development                  0 ≤ S ≤ 0.5                    0 ≤ C ≤ 0.5
4 Case Study

Based on the data of one state-owned bank in Tangshan from 2004 to 2009, this section comprehensively evaluates the sustainable development levels of the bank under the idea of sustainable development and puts forward strategies that can promote the sustainable development of this bank.

4.1 Evaluation System of Banking Sustainable Development

According to our preliminary study, and validated through structural equation modeling, the banking sustainable development system involves two aspects (development and coordination), which include three factors (expenditure, income, fixed assets) and 24 indexes (see the Appendix).

4.2 Results
First, the original data are standardized. Then the weights of the 24 indexes are calculated using the entropy weight method (see Tables 5, 6 and 7).
Table 5. Weights of expenditure indexes (subsystem weight 0.3573)

Index   Information entropy   Utility value   Weight
B1      0.9635                0.0365          0.0278
B2      0.9878                0.0122          0.0093
B3      0.5799                0.4201          0.3201
B4      0.9790                0.0210          0.0160
B5      0.9861                0.0139          0.0106
B6      0.6663                0.3337          0.2543
B7      0.9849                0.0151          0.0115
B8      0.7825                0.2175          0.1657
B9      0.9600                0.0400          0.0304
B10     0.7977                0.2023          0.1542

Table 6. Weights of income indexes (subsystem weight 0.3015)

Index   Information entropy   Utility value   Weight
B11     0.9577                0.0423          0.0382
B12     0.8887                0.1113          0.1005
B13     0.9054                0.0946          0.0854
B14     0.9736                0.0264          0.0238
B15     0.9690                0.0310          0.0280
B16     0.8137                0.1863          0.1682
B17     0.8752                0.1248          0.1127
B18     0.8878                0.1122          0.1013
B19     0.9049                0.0951          0.0859
B20     0.7167                0.2833          0.2558

Table 7. Weights of fixed assets indexes (subsystem weight 0.3412)

Index   Information entropy   Utility value   Weight
B21     0.8408                0.1592          0.1271
B22     0.4530                0.5470          0.4365
B23     0.6194                0.3806          0.3037
B24     0.8336                0.1664          0.1327
Finally, the annual ranking is derived by calculating the overall scores according to the weights of the indexes (Table 8).

Table 8. The total indexes from 2004 to 2009

Year                    2004     2005     2006     2007     2008     2009
Index of expenditure    0.2198   0.2456   0.2129   0.2886   0.5508   0.8471
Ranking                 5        4        6        3        2        1
Index of income         0.4735   0.3223   0.2189   0.3924   0.6404   0.7988
Ranking                 3        5        6        4        2        1
Index of fixed assets   0.0545   0.0767   0.1094   0.3163   0.2980   0.8352
Ranking                 6        5        4        2        3        1
4.3 Rechecking by Principal Component Analysis
In order to verify the calculation based on the entropy weight method, we use principal component analysis to calculate the scores of the three indices.

Table 9. Rechecking by principal component analysis

Year   Expenditure score   Ranking   Income score   Ranking   Fixed assets score   Ranking
2004   -3.80365            5         1.15241        3         -1.48068             6
2005   -0.08429            4         -0.99919       5         -1.27134             5
2006   -3.94981            6         -2.4662        6         -0.97051             4
2007   0.08745             3         -0.9598        4         1.14739              2
2008   1.21439             2         1.2397         2         0.73204              3
2009   2.98106             1         2.03308        1         1.8431               1
As can be seen from Table 8 and Table 9, the entropy weight ranking and the principal component analysis ranking are in exactly the same order. This shows that the entropy weight method is feasible for the assessment of sustainable development.
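The recheck can be reproduced with an ordinary first-principal-component score (center the yearly indicator matrix and project on the leading right singular vector). The matrix in the test is an illustrative stand-in, not the bank's actual data:

```python
import numpy as np

def first_pc_scores(X):
    """Scores of the samples (rows) on the first principal component."""
    Xc = X - X.mean(axis=0)                     # center each indicator
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[0]
    # Fix the sign so larger raw totals get larger scores
    if np.dot(scores, Xc.sum(axis=1)) < 0:
        scores = -scores
    return scores

def ranking(scores):
    """Rank 1 = largest score, as in Tables 8 and 9."""
    order = np.argsort(-scores)
    ranks = np.empty(len(scores), dtype=int)
    ranks[order] = np.arange(1, len(scores) + 1)
    return ranks
```

Comparing `ranking(first_pc_scores(X))` with the entropy-weight ranking is the whole recheck.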
5 Results Analyses

Table 10 can be derived from the pooled analysis of the sustainability coefficients and coordination coefficients of banking sustainable development.

Table 10. Sustainability and coordination coefficients of banking development from 2004 to 2009

Year                                         2004     2005     2006     2007     2008     2009
Sustainability coefficient                   0.2399   0.2111   0.1794   0.3293   0.4916   0.8285
Coordination coefficient                     0.1533   0.4152   0.6588   0.8383   0.6423   0.9696
Characteristics of sustainable development   I        I        H        G        G        A
We can see that the sustainability coefficient and the coordination coefficient generally increase because of internal reforms. However, the coordination coefficient decreased in 2008 due to the financial turmoil of 2007, and the momentum of coordinated, sustainable development turned upward again in 2009 under a series of measures such as risk avoidance.
6 Conclusion

Based on the entropy weight method, the evaluation results for the banking industry are more effective and reflect the real-time strength of banking enterprises more directly than the results of other evaluation methods, which avoids one-sidedness and weak convergence. The overall competitiveness of multiple enterprises in the same area can be comprehensively compared through a single calculation. So the method can provide a decision basis for formulating the development strategies, and even the business strategy, of banks and for improving the quality of service in many aspects of decision making.
References

1. Lixian, W.: Several Problems on the Calculation of Benefits from Soil-Conserving Measures. Soil and Water Conservation in China 3, 46–48 (1995)
2. Shaofeng, J., Mao, H.: A Review of the Overseas Study on the Measurement of Sustainable Development. Advance in Earth Sciences 14, 596–601 (1999)
3. Jian, H.: Sustainable development of port economics based on system dynamics. Systems Engineering-Theory & Practice 30, 56–61 (2010)
4. Jian, G.: Flood risk assessment on pipeline bridge based on the analytic hierarchy process. Natural Gas Industry 2, 106–109 (2010)
5. Jianzhong, H.: Project Resettlement Principles and Methods of Assessing the Effects. Yellow River 6, 34–35 (2000)
6. Biyu, R.: Fuzzy Synthetic Evaluation of Carrying Capacity of Water Resources in Mengkaige Large Irrigation District. Journal of Yunnan Agricultural University 2, 277–282 (2010)
7. Tiancheng, S.: A research of China's urban residential location. Journal of Southeast University 1, 40–42 (2010)
8. Ting, L.: Approaches of Evolutionary Multiobjective Optimization. Control and Decision 6, 601–605 (2006)
9. Qiong, L.: Application of an Improved Principal Component Analysis to Flood Damage Evaluation. Water Resources and Power 3, 39–42 (2010)
10. Yongkai, Z.: Evaluation on the Sustainable Development of Resources City in Arid Region Based on Entropy. Resources and Industries 8, 1–6 (2006)
11. Reguo, F.: Entropy Weighting Ideal Point Method and Its Applications in Investment Decision Making. Journal of Wuhan University of Hydraulic & Electric Engineering 6, 105–107 (1998)
Appendix: Evaluation Index System for Banking Sustainable Development

A1 Expenditure subsystem: B1 Interest payment; B2 Expenditure of internal and external dealing; B3 Expenditure of dealing between provinces; B4 Expenditure of inter-province dealing; B5 Expenditure of financial enterprises dealing; B6 Expenditure of commission charges; B7 Operating expense; B8 Expenditure of depreciation; B9 Sales tax and addition; B10 Expenditure of non-business.

A2 Income subsystem: B11 Interest income; B12 Income of internal and external dealing; B13 Income of dealing between provinces; B14 Income of inter-province dealing; B15 Income of financial enterprises dealing; B16 Income of intermediate business; B17 Other operating income; B18 Income of bond; B19 Exchange gains; B20 Income of non-business.

A3 Fixed assets subsystem: B21 Real estate; B22 Electronic equipment; B23 Transportation means; B24 Personnel.
Fitting with Interpolation to Resolve the Construction of Roads in Mountains

Jinran Wang, Xiaoyun Yue, Yajun Guo, Xiaojing Yang, and Yacai Guo

Institute of Mathematics and Information Technology, Hebei Normal University of Science and Technology, Qinhuangdao 066004, Hebei Province, China
[email protected]
Abstract. Interpolation fitting is an important method in numerical analysis and is widely applied in practical life. For the question proposed in the mathematical model, namely resolving the construction of roads in the mountains with the least cost, this paper uses local optimization theory: control points are set up under the specific circumstances of the mountain, the topographic map is turned into a network, a new network of roads, bridges and tunnels with different weights is established, the least-cost problem is replaced by a shortest path problem by means of interpolation fitting, and the problem is solved with the Dijkstra algorithm, finally reaching the optimal route and the minimum cost.

Keywords: Shortest path; interpolation fitting; least cost; Dijkstra algorithm.
1 Introduction

Roads are to be built in a mountainous area in which the elevations of some sites have been measured (surface area 0 ≤ x ≤ 5600, 0 ≤ y ≤ 4800 in the table, unit m; the positive y-axis points north). Data: at y = 3200 m there is an east-west mountain; from the coordinates (2400, 2400) to (4800, 0) there is a valley with a northwest-southeast trend; near (2000, 4800) there is a small lake whose highest water level is slightly higher than 1350 m; in the rainy season a stream forms in the valley. The relationship for its width can be expressed as

W(x) = ((x − 2400)/2)^{3/4} + 5   (2400 ≤ x ≤ 4000).    (1)

The road starts from the foot of the mountain at (0, 800), passes through the settlement at (4000, 2000) and reaches the mine at (2000, 4000); the project data are given in the following Table 1.
Road from the foot of the mountain (0,800) department started by the settlement (4000, 2000) to mine (2000, 4000), such as following Table 1. Table 1. Forms and data project type
general section
bridge
project cost (yuan / m)
300
2000
Restrictions on the slope α
α <0.125
α=0
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 376–383, 2010. © Springer-Verlag Berlin Heidelberg 2010
tunnel 1500 (length ≤ 300 m); 3000 (length> 300 m) α <0.100
(1) Give a route design, including the theory, the method and a fairly precise line position (including bridges and tunnels), and state the total cost of the scheme. (2) If the settlement is enlarged to the neighborhood 3600 ≤ x ≤ 4000, 2000 ≤ y ≤ 2400, and the road should pass through the residential area for as long as possible, how does your scheme change?
2 Model and Solution

2.1 Model Assumptions and Analysis

Assume the regional topography is continuous, with no faults or cliffs. The highway is represented by a geometric line, without counting its width and ignoring the width restrictions of the transverse slope; sharp-turn and priority constraints are not considered. Apart from the given residential areas and mining areas, all other areas can be used for road alignment. For any two points V_i, V_j the equivalent road length is defined as L = cost(V_i, V_j)/P0, where cost(V_i, V_j) is the cost of connecting them and P0 = 300 (yuan/m) is the unit cost of a general section. Harmful construction factors, such as seismic zones, karst areas and climate conditions, are assumed not to exist.

2.2 Model Establishment and Solution

This model turns the discrete grid points of the topographic map into network nodes. The principles are:
1) A grid point on the topographic map corresponds to a network node.
2) Adjacent grid points (an arbitrary grid point and the 8 points surrounding it) are associated in the network.
3) The weight of the edge (V_i, V_j) between two associated nodes V_i, V_j is decided by the following equation [1, 2, 3]:

W_ij = K_ij × D_ij      (V_i, V_j) build a general road
       (20/3) × D_ij    (V_i, V_j) across the river, build a bridge, a = 0
       5 × D_ij         (V_i, V_j) build a tunnel, D_ij ≤ 300
       10 × D_ij        (V_i, V_j) build a tunnel, D_ij > 300
       ∞                (V_i, V_j) across the river, a > 0                .    (2)
Here D_ij is the distance between V_i and V_j, K_ij is the weight coefficient, a is the slope, and H_i, H_j are the elevations of V_i, V_j:

K_ij = 1                              a < 0.125
       (H_i − H_j)/(D_ij × 0.125)     a ≥ 0.125    .    (3)
Description: when a ≥ 0.125 we can build winding curves, increasing the road length used for the increase in elevation, known as a "Z"-shaped road. For each curve, under the condition a < 0.125, the Z-shaped road length needed to climb ΔH is about ΔH/0.125 = 8ΔH. The connections between nodes are chosen by the following principles. For the roads, general sections (including Z-shaped roads) are used first as long as they can connect the nodes. A bridge, due to its high cost, is used to join only two nodes of equal elevation that are separated by the lake or the stream. A tunnel section costs 5 or 10 times as much as a general section and its slope is restricted to a < 0.1, so it clearly cannot be used to join ordinary nodes; a tunnel is opened only in the situation shown in Table 1 [4]. The conclusion, for the two tunnel cost levels respectively, is the following:

1/a1 + 1/a2 ≤ 1.6: open the tunnel;   1/a1 + 1/a2 > 1.6: build a winding road instead.
1/a1 + 1/a2 ≤ 3.2: open the tunnel;   1/a1 + 1/a2 > 3.2: build a winding road instead.
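Equations (2) and (3) can be sketched as a single weight function. The river-crossing and tunnel flags are assumed to be supplied by a separate terrain classification, and the bridge factor 20/3 = 2000/300 follows the unit costs of Table 1:

```python
import math

def edge_weight(Hi, Hj, Dij, crosses_river=False, tunnel=False):
    """W_ij relative to the 300 yuan/m general section (Eqs. (2)-(3))."""
    a = abs(Hi - Hj) / Dij                 # slope between the two nodes
    if tunnel:
        # 1500/300 = 5 for short tunnels, 3000/300 = 10 for long ones
        return (5 if Dij <= 300 else 10) * Dij
    if crosses_river:
        # A bridge requires slope a = 0; otherwise the edge is forbidden
        return (20.0 / 3.0) * Dij if a == 0 else math.inf
    # General road: K_ij stretches the edge once the slope limit is hit
    K = 1.0 if a < 0.125 else a / 0.125
    return K * Dij
```

These weights are what the shortest path search below minimizes.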
With the topographic map turned into a network, finding the minimum-cost route is transformed into a shortest path problem. For the coarse network (d = 400 m), the circuit is designed according to the connectivity principles; analyzing the data in the table, the starting and ending positions of the path with four bridges and two tunnels are as follows [5, 6]:

Table 2. The best road route on the elevation grid (elevations in meters; rows y = 4800, 4400, …, 0 from top to bottom, columns x = 0, 400, …, 5600 from left to right; the dotted line marks the best route)
The dotted line is the best route. Four bridge endpoints: (3200,1200)-(3600,1600) and (3600,800)-(4000,1200); two tunnels: (4000,2800)-(4000,3000) and (4400,2800)-(4400,3600). There are two shortest general-section routes.
Red line: (0,800)-(400,400)-(800,400)-(1200,400)-(1600,400)-(2000,400)-(2400,400)-(2800,800)-(3200,1200)-(3600,1600)-(4000,2000)-(3600,2000)-(3200,2400)-(2800,2400)-(2400,2800)-(2400,3200)-(2400,3600)-(2000,4000) (total cost: 4,742,327 yuan).
Solid line: (0,800)-(400,400)-(800,0)-(1200,0)-(1600,0)-(2000,0)-(2400,0)-(2800,0)-(3200,400)-(3600,800)-(4000,1200)-(4400,1200)-(4400,1600)-(4400,2000)-(4000,2000)-(3600,2000)-(3200,2400)-(2800,2400)-(2400,2800)-(2400,3200)-(2400,3600)-(2000,4000) (total cost: 4,851,126 yuan).
The Dijkstra algorithm gives the following result: the minimum-cost line is (0,800)-(400,400)-(800,400)-(1200,400)-(1600,400)-(2000,400)-(2400,400)-(2800,800)-(3200,1200)-(3600,1600)-(4000,2000)-(3600,2000)-(3200,2400)-(2800,2400)-(2400,2800)-(2400,3200)-(2400,3600)-(2000,4000), with a total cost of 4,742,327 yuan. From the results it can be seen that the section from the settlement to the mine costs only 7122 × 300 = 2,136,600 yuan, while each of the two projected tunnels would cost up to 8000 × 300 = 2,400,000 yuan, which is large even compared with the cost of the whole settlement-to-mine section of the road; so under the coarse-grid condition it is reasonable not to open the tunnels.

2.3 Interpolation Fitting

Under the topography assumption above, piecewise linear interpolation is used for the fitting: the horizontal grid is subdivided into units of 50 m, and the vertical coordinates (elevations) are given in units of 100 m above sea level. The coarse mesh is thus turned into a small grid [7, 8].
1) Determine the elevations of the grid points of the new sub-grid.
Fig. 1. Topographic map

Fig. 2. Equipotential map obtained by interpolation
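The coarse-to-fine subdivision of Sect. 2.3 can be sketched for one refinement level; the averaging rule it uses matches the linear-variation principle of the interpolation (numpy is used here as an assumption; the paper's own implementation is not given):

```python
import numpy as np

def refine_grid(H):
    """One subdivision step: insert edge midpoints and cell centers,
    each the average of its surrounding coarse-grid elevations."""
    m, n = H.shape
    R = np.empty((2 * m - 1, 2 * n - 1))
    R[::2, ::2] = H                                  # original grid nodes
    R[::2, 1::2] = (H[:, :-1] + H[:, 1:]) / 2        # horizontal midpoints
    R[1::2, ::2] = (H[:-1, :] + H[1:, :]) / 2        # vertical midpoints
    R[1::2, 1::2] = (H[:-1, :-1] + H[:-1, 1:]
                     + H[1:, :-1] + H[1:, 1:]) / 4   # cell centers
    return R
```

Applying the step three times takes a 400 m grid down to 50 m spacing.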
Under the assumption that the terrain changes continuously, the elevation of every small-mesh point inside a coarse grid cell can be considered fully determined by the elevations of the four vertices of that large cell, i.e., the quadratic interpolation formula is used. Principle: on the same grid line, for any two points H(X_m, Y_n) and H(X_m′, Y_n′), the interpolated elevation is

H(X, Y) = H(X_m, Y_n)/2 + H(X_m′, Y_n′)/2 .

In other words, this second interpolation method ensures that the elevation varies linearly in the X-axis and Y-axis directions. The three-dimensional fitting is as shown in the figures.

2.4 Bridgehead: Determining the Candidate Points

Fig. 1 and the valley provide some information about the terrain for us. We can approximate the bottom as a straight line, and the two sides of the valley also show
fundamental symmetry. From this we can give the equations of both banks formed by the stream. Set the bottom-line equation [9]: x + y = 4800 (2400 ≤ x ≤ 4000). Since the formula in (1) is a monotonically increasing function, the images of the two bank equations open like a trumpet; we first seek the west bank equation. Take any point (a, b) on the west bank. The line through this point perpendicular to x + y = 4800 is x − y = a − b, and the abscissa of its intersection with x + y = 4800 is c = (a − b + 4800)/2; thus the width of the river at this crossing is W(c). It can be seen that the point (a, b + (√2/2) × W(c)) satisfies the bottom-line equation x + y = 4800, i.e.,

a + b + (√2/2) × W(c) = 4800 .    (4)

Because (a, b) is an arbitrary point on the west bank, equation (4) is just the west bank equation; collating it, we obtain

√2 (4800 − x − y) = ((x − y)/4)^{3/4} + 5 .    (5)

Similarly, the east bank equation is

√2 (x + y − 4800) = ((x − y)/4)^{3/4} + 5 .    (6)
To shorten the length of the bridge, it is made to span exactly between the two banks; since the slope restriction on a bridge is α = 0, the elevations at both ends of the bridge are required to be equal, so the vertical coordinates of both ends must be considered. In addition, the stream width W(x) is an increasing function of x, so the bridge sites are searched for upstream first. The selected bridge-site points satisfy equations (5) and (6) and have a small difference in vertical coordinates; a VB program found the X, Y values: X = 3200, Y = 1500; X = 3300, Y = 1600; X = 3300, Y = 1400; X = 3400, Y = 1500; X = 3500, Y = 1300; X = 3600, Y = 1400; X = 3400, Y = 1200; X = 3600, Y = 1400.

2.5 Tunnel Position

Consider a tunnel of less than 300 meters, which requires the road to climb to a certain height. In fact, using the elevation data it can be determined that the tunnel should be dug through the peak at x = 4400, because the terrain on both its north and south sides descends very quickly (see the equipotential map, Fig. 2). For the x = 4400 cross-section (in part), if the tunnel is 300 meters long and level, its altitude can be obtained as [9]
300 × 650 = 1266 ( m ) 650 . 400 × (1 + ) 600
(7)
To tunnel cross marked in the 4400 election, higher than 1266 meters above sea level, through to plane analytic geometry easy to determine groups of candidate points.
Fitting with Interpolation to Resolve the Construction of Roads in Mountains
381
2.6 Determination of the Shortest Route

Using the theory above, define a graph G = (V, E), where V is the vertex set {V1, V2, ..., V8} and E is the set of arcs connecting the vertices; the number attached to each arc is taken as the distance between its two endpoints (the distance from Vi to Vj is denoted Lij; for non-adjacent vertices, such as with L18, the distance is ∞, which in the program can be recorded as a very large number). The basic idea of the labeling method is to start from V1 and find the shortest paths to the other points step by step, recording at every step a number for each vertex, called its label. The label is either an upper bound of the shortest distance from V1 to that point (a T label) or the shortest distance from V1 to that point itself (a P label). The specific algorithm is:

1) Give the start point V1 the P label d(V1) = 0, and give the other points the T labels d(Vj) = L1j (j = 2, ..., N).
2) Take the smallest of all T labels, say d(Vj0) = L1j0, and change the T label of Vj0 into a P label; then recalculate the T labels of the remaining T-labeled points: for each such point Vj, take the smaller of d(Vj) and d(Vj0) + Lj0j as the new T label of Vj.
3) As in 2), turn the newly obtained smallest T label into a P label. Continue until every Vi (i a natural number from 1 to N) carries a P label [10].

This yields the shortest distance from V1 to each Vi and determines the shortest route from V1 to Vi. Adding the bridge and tunnel as special data to the Dijkstra algorithm, the program gives the following result: the minimum-cost line is (0,800)-(400,400)-(2400,400)-(3300,1300)-(3300,1400)-(3400,1500)-(3500,1500)-(4000,2000)-(3600,2000)-(3300,2300)-(2700,2300)-(2400,2600)-(2400,3600)-(2000,4000).
To apply the labeling method to find the required shortest path, the vertex set V and arc set E must first be determined. This can be done on the basis of the original grid: the 400 × 400 mesh is refined to 50 × 100, the new grid points constitute the vertex set V, and the arc set E contains the horizontal, vertical and diagonal paths of the graph. The distance between two adjacent vertices Vi, Vj is then defined by equation (8), which takes the slope limit into account [11, 12]:

d_ij = [(Δx)² + (Δy)² + (Δz)²]^(1/2),  if Δz / [(Δx)² + (Δy)²]^(1/2) ≤ 0.125;
d_ij = ∞,                              if Δz / [(Δx)² + (Δy)²]^(1/2) > 0.125.    (8)
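As a concrete illustration, the slope-limited weight of (8) together with the P/T labeling procedure (Dijkstra's algorithm) can be sketched as follows. This is a minimal sketch on a toy flat grid: the grid data and function names are illustrative, and only the 0.125 slope limit and the horizontal/vertical/diagonal adjacency are taken from the text.

```python
import heapq
import math

def edge_weight(p, q, elev, max_slope=0.125):
    """Distance d_ij of equation (8): Euclidean length if the slope
    |dz| / sqrt(dx^2 + dy^2) is within the limit, otherwise infinity."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    dz = elev[q] - elev[p]
    horiz = math.hypot(dx, dy)
    if abs(dz) / horiz > max_slope:
        return math.inf
    return math.sqrt(dx * dx + dy * dy + dz * dz)

def shortest_path(nodes, elev, start, goal, step=50):
    """P/T labeling (Dijkstra): T labels are tentative upper bounds kept
    in a heap; popping a node makes its label permanent (a P label)."""
    dist = {v: math.inf for v in nodes}
    dist[start] = 0.0
    heap = [(0.0, start)]
    done = set()                      # P-labelled vertices
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == goal:
            return d
        # horizontal, vertical and diagonal neighbours on the grid
        for ddx in (-step, 0, step):
            for ddy in (-step, 0, step):
                v = (u[0] + ddx, u[1] + ddy)
                if v == u or v not in dist:
                    continue
                w = edge_weight(u, v, elev)
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
    return math.inf

# toy 3x3 flat grid at 50 m spacing: the shortest route is the diagonal
nodes = [(x, y) for x in (0, 50, 100) for y in (0, 50, 100)]
elev = {v: 0.0 for v in nodes}
d = shortest_path(nodes, elev, (0, 0), (100, 100))
# flat terrain: two diagonal steps, 2 * 50 * sqrt(2) ≈ 141.42
```

On flat terrain the route degenerates to the geometric diagonal; with a real elevation model, edges violating the slope limit receive infinite weight and are effectively removed from the graph.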
Once the vertex set V and arc set E have been determined, graph labeling can be applied directly to find the shortest path from any point to the other points.

2.7 Residential Treatment

Since the chosen route may pass through the residential area at any point, it is essential to analyze the candidate pass-through points inside the residential area,
382
J. Wang et al.
and to determine through which point in the residential area the road should pass; the earlier problem is thus reduced to the settlements. The program again uses the fine-grid method: by the interpolation formula the area is divided into small grids, the elevation of each grid point is computed, and the total cost of the road passing through each point is drawn as a histogram. From the histogram, given numerically in Table 3, the total cost of the road gradually decreases from north to south and from east to west; the point (3600, 2000) minimizes the total cost and is chosen as an essential point of the road. The result is (0,800)-(400,400)-(800,400)-(1200,400)-(1600,400)-(2000,400)-(2400,400)-(2800,800)-(3300,1300)-(3300,1400)-(3400,1500)-(3500,1500)-(3700,1700)-(3700,1900)-(3300,2300)-(2400,3200)-(2700,2300)-(2400,2600)-(2400,3600)-(2000,4000).

Table 3. The total cost

 y \ x     3600      3700      3800      3900      4000
 2400    4459092   4459092   4461024   4484319   4518114
 2300    4455258   4455258   4457190   4480485   4518114
 2200    4453845   4453845   4455777   4479072   4516701
 2100    4452429   4452429   4454361   4477656   4515285
 2000    4451013   4451013   4452945   4476240   4513869

Table 4. Histogram

 y \ x   3600   3650   3700   3750   3800   3850   3900   3950   4000
 2400    1150   1133   1115   1098   1080   1063   1045   1028   1010
 2300    1128   1111   1094   1078   1061   1045   1028   1012    995
 2200    1105   1089   1074   1058   1043   1027   1011    996    980
 2100    1083   1068   1053   1038   1024   1009    994    980    965
 2000    1060   1046   1033   1019   1005    991    978    964    950
3 Model Evaluation

The circuit design of this model turns a complex network-programming problem over discrete points into a simplified, solvable one. Solving for the shortest path in the network with the Dijkstra algorithm on a computer not only improves efficiency but also guarantees the optimality of the line. Disadvantages of the model: (1) if the scope of the local terrain map is enlarged, the limits of computer data storage may prevent the optimal route from being computed directly; (2) the model assumptions are idealized. To address these issues, the model can be improved as follows: scale the network down and reduce the number of data points for a preliminary design, generally fixing the line; then restore the original scale and locally optimize the sections with strong terrain undulation and the important sections. This can greatly improve the applicability of the model.
Acknowledgements

This work is supported by the Scientific Technology Research and Development Program Fund Project of Qinhuangdao city (No. 200901A288) and by the Teaching and Research Project of Hebei Normal University of Science and Technology (No. 0807). The authors are grateful to the anonymous reviewers for their constructive comments.
References

1. Lu, J., Shuai, F.: Spatial Interpolation for Lake Water Pollution Assessment in the Application. Hunan University (Natural Science) 22, 124–127 (2007)
2. Liu, R., Wang, X.: The Theory and Method of the Spatial Optimal Estimation on the Water Quality Parameters of Lake. China Environmental Science 21, 177–179 (2001)
3. Wang, L., Li, C., Liu, T.: Spatial Estimation on Nutrient Parameters in the Sediment of Lake. Journal of Agro-Environment Science 25, 772–775 (2006)
4. Qiao, M., Zhou, R., Wang, P.: GIS-Based Spatial Interpolation Method in the Mine Geological Structure Prediction Study. Mining 8, 58–60 (2010)
5. Yu, Z.: A New Geological Surface Interpolation Calculation Method in Spline Method. Journal of China Mining Institute 15(4), 69–76 (1987)
6. Jiang, Q., Xie, J., Ye, J.: Mathematical Model, 3rd edn. Higher Education Press (1996)
7. Zhang, Z., Xu, Y.: MATLAB Tutorial. Aeronautics and Astronautics University Press, Beijing (2004)
8. Hu, S., Li, B.: Based on MATLAB Math Test. Science Press, Beijing (1998)
9. Zhao, J., Dan, Q.: Mathematical Modeling and Mathematical Experiments, 2nd edn. Higher Education Press (2003)
10. Guerra, T.M., Vermeiren, L.: LMI-Based Relaxed Nonquadratic Stabilization Conditions for Nonlinear Systems in the Takagi–Sugeno's Form. Automatica 40, 823–829 (2004)
11. Zhao, J., Wu, L.: Power Tunnel Diameter Interpolation Optimization. Journal of China Three Gorges University (Natural Sciences) 30, 28–31 (2008)
12. Liu, X.: Determine the Geoid by Using Method of Interpolation and Fitting. Beijing Surveys and Draws 1, 49–52 (2010)
Response Spectrum Analysis of Surface Shallow Hole Blasting Vibration

Chao Chen1, Yabin Zhang1,2, and Guobin Yan2

1 College of Traffic & Surveying, Hebei Polytechnic University, Tangshan 063009, China
2 Civil & Environment Engineering School, University of Science and Technology Beijing, Beijing 100083, China
[email protected],
[email protected]
Abstract. Based on the analysis of a series of blasting vibration records, the response spectrum characteristics of surface shallow-hole blasting vibration are analyzed and studied. The curve of the blasting velocity response spectrum has several peaks that change with the period, but beyond a certain period of great value the velocity response spectrum curve becomes parallel to the period axis. The acceleration response spectrum follows an attenuation law similar to that of the velocity response spectrum, but the velocity response spectrum attenuates faster. The displacement response spectrum curve, on the contrary, tends to increase with increasing period after an initial peak. The blasting response spectrum attenuates faster than an earthquake response spectrum, and the blasting response spectrum curve shows exponential decay.

Keywords: blasting vibration; response spectrum; surface blasting; shallow hole.
1 Introduction

The influence of blasting vibration on structures has long been studied extensively at home and abroad. At present, the particle velocity of blasting vibration is generally used as the criterion for building damage, and most blasting-vibration safety standards are established on this basis. Although these standards can judge the safety of buildings in certain circumstances, they are not sufficient to characterize the seismic effect of blasting vibration on structures. The explanation of the influence of blasting vibration provided by these standards is very limited: they can neither determine the blasting vibration load on a building nor quantitatively analyze the destructive capability of the blasting vibration. Engineering practice and research results have shown that the damage to a building depends not only on the magnitude of the particle vibration velocity, but on a more comprehensive set of factors, including the vibration velocity, the vibration frequency and duration, the structural features of the building, etc. The huge differences among kinds of building structures cause the responses of buildings to blasting vibration to differ as well. Using the particle velocity alone as the safety control of blasting vibration for buildings of different structures and materials is neither comprehensive nor reasonable. The blasting vibration response spectrum, which reflects the structural characteristics and expresses the

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 384–389, 2010. © Springer-Verlag Berlin Heidelberg 2010
largest response of a single-degree-of-freedom system to blasting vibration, is a more reasonable safety control standard than the particle velocity standard. In the 1970s, the Wuhan Institute of Hydraulic and Electric Engineering, using a self-made blast vibration response spectrum tester, measured the vibration response spectrum of chamber blasting, tried to determine the influence of blasting vibration on buildings, and put forward some preliminary conclusions. In the early 1980s, the Seismological Bureau of Jiangsu Province, applying the midpoint acceleration response spectrum method, studied the response spectra of blasting vibration records from the Jiangshan cement plant and the Hainan Shilu iron ore mine, and compared them with earthquake response spectra. Although much work has also been done in the United States, Italy and elsewhere, there are still many deficiencies in the field of the blasting vibration response spectrum [1, 2, 4]. Based on the response spectrum analysis of surface shallow-hole blasting vibration, this paper performs a comprehensive study of the characteristics of engineering blasting seismic waves and the vibration response characteristics of building structures.
2 Blasting Vibration Monitoring System and Program

2.1 Project Profile

The foundation excavation engineering of the Huaneng (Shantou) power plant, including the main workshop, steam turbine house, coal boiler, control tower, etc., involves a total of 6055 m³ of earthworks and 60553 m³ of stone. The excavation lies within Yanshan-period coarse-grained granite, located at the edge of the landscape area on an abrasion platform. The foundations of the buildings are covered by the Quaternary; the strata are: plain fill, silt, silty soil, gravel sand, alluvial loam, sandy clay slope deposits, and residual sub-clay. The underlying bedrock is coarse-grained granite, with diabase veins and quartz veins filled in the late stage of the same period.

The major mechanical and power equipment of the first generating unit of the Huaneng (Shantou) power plant was imported from Russia. The powerhouse uses an all-steel load-bearing structural system with welded or rolled beams; the steel structural components were fabricated in the factory and installed with high-strength bolted connections. To ensure that the whole plant has a certain seismic stiffness, cross seismic bracing is set up in the longitudinal frames and floors.

2.2 Blasting Vibration Monitoring System

The main monitoring equipment is the NCSC-5000 system produced by SAULS (USA), which consists of speed sensors, monitors and dedicated analysis software. The NCSC-5000 system, with good detection performance in the low-frequency band (1.5~300 Hz), a wide operating temperature range (-10℃ ~ 60℃), high measurement accuracy and a high signal-to-noise ratio, can meet the requirements of safety monitoring and evaluation of blasting vibration.
386
C. Chen, Y. Zhang, and G. Yan
Fig. 1. NCSC-5000 system diagram
2.3 Blasting Vibration Monitoring Program [2]

The blasting vibration monitoring program is arranged along monitoring lines: line No. 1 covers the powerhouse, and lines No. 2 and No. 3 cover the equipment zone. The testing-point arrangement is shown in Fig. 2.

Fig. 2. Layout diagram of the blasting vibration monitoring program
3 Blasting Vibration Velocity Response Spectrum

3.1 Calculation Principle of the Blasting Vibration Velocity Response Spectrum

By analogy, the single-mass spring system excited by the ground displacement in forced vibration is adopted [3, 4, 5, 6, 7, 8, 9]. The standardized form of the equation of motion is:

ü(t) + 2ζω u̇(t) + ω² u(t) = 2ζω Ẋg(t) + ω² Xg(t),    (1)

where ω² = k/m and 2ζω = c/m, i.e., ω = (k/m)^(1/2) and ζ = c / (2(mk)^(1/2)) = c/cd; here k is the stiffness, cd is the critical damping coefficient, and ω is the undamped circular frequency of vibration.
The absolute displacement u(t) can be solved from equation (1); the relative displacement z(t) can then be expressed as [5, 10]:

z(t) = u(t) − Xg(t).    (2)
Differentiating equation (2) with respect to time gives the relative velocity of the single mass:

ż(t) = ∫₀ᵗ [2ζ Ẋg(τ) + ω Xg(τ)] e^(−ζω(t−τ)) [−ζω sin ω(t−τ) + ω cos ω(t−τ)] dτ − Ẋg(t).    (3)

Then the relative velocity response spectrum Sv can be expressed as:

Sv = |ż(t)|max.    (4)

The dynamic amplification factor of velocity can be expressed as:

βv = Sv / |Ẋg(t)|max.    (5)
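For a concrete computation, the spectrum defined by (1)–(4) can be sketched numerically. This is only a sketch: instead of evaluating the Duhamel integral (3) in closed form, the SDOF equation (1) is integrated by a central-difference scheme; the 20 Hz harmonic "ground motion" and all function names are illustrative, while the damping ratio 0.05 and the roughly 0.001 s time step mirror the values used later in Section 3.3.

```python
import numpy as np

def velocity_response_spectrum(xg_dot, dt, periods, zeta=0.05):
    """Relative-velocity response spectrum Sv of equation (4).
    xg_dot: sampled ground velocity; equation (1) is solved for each
    natural period T by the central-difference scheme."""
    # ground displacement X_g by trapezoidal integration of the velocity
    xg = np.concatenate(([0.0], np.cumsum((xg_dot[1:] + xg_dot[:-1]) * dt / 2)))
    sv = []
    for T in periods:
        w = 2.0 * np.pi / T
        f = 2.0 * zeta * w * xg_dot + w**2 * xg   # right-hand side of (1)
        u = np.zeros_like(xg)
        # central differences for u'' + 2 zeta w u' + w^2 u = f
        a = 1.0 / dt**2 + zeta * w / dt
        b = 2.0 / dt**2 - w**2
        c = 1.0 / dt**2 - zeta * w / dt
        for i in range(1, len(xg) - 1):
            u[i + 1] = (f[i] + b * u[i] - c * u[i - 1]) / a
        z_dot = np.gradient(u, dt) - xg_dot       # relative velocity, from (2)
        sv.append(np.abs(z_dot).max())
    return np.array(sv)

# illustrative 20 Hz harmonic ground motion, 2 s long
dt = 0.001
t = np.arange(0.0, 2.0, dt)
xg_dot = np.sin(2.0 * np.pi * 20.0 * t)
sv = velocity_response_spectrum(xg_dot, dt, periods=[0.05, 1.0])
```

At the resonant period T = 0.05 s (20 Hz) the response is amplified by roughly 1/(2ζ), while a long-period (flexible) oscillator merely tracks the ground, so Sv there is close to the peak ground velocity; this matches the short-period concentration of peaks discussed in Section 4.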
3.2 Blasting Vibration Test Results

The blasting vibration monitoring program at the Huaneng (Shantou) power plant lasted nearly two months, and 313 groups of blasting vibration data were obtained. The vibration velocity ranges from 0.091 cm/s to 5.104 cm/s, including a control centre room vibration velocity of 0.2 cm/s ~ 0.324 cm/s and a No. 2 turbo generator vibration velocity of 0.176 cm/s ~ 1.12 cm/s. Spectrum analysis of the blasting vibration velocity curves shows that the vibration frequency is 8 Hz ~ 64 Hz and the vibration duration is 0.3 s ~ 0.85 s. Table 1 gives the blasting parameters and testing data statistics.

Table 1. Blasting parameters and testing data statistics

 Survey area   Data   R (m)     Vertical peak velocity (cm/s)   Main vibration frequency (Hz)   Duration (s)
 I             201    10~101    0.069 ~ 5.104                   21~64                           0.3 ~ 0.7
 II            30     10~130    0.025 ~ 0.324                   8~62                            0.35 ~ 0.79
 III           30     10~80     0.176 ~ 1.321                   8~62                            0.33 ~ 0.85
 IV            49     10~60     0.091 ~ 2.57                    8~62                            0.33 ~ 0.75
 other         3      150       0.176 ~ 0.921                   22~38                           0.6 ~ 0.8
3.3 Blasting Vibration Response Spectrum

Using the measured vibration velocity curves, a program was written to calculate the velocity response spectrum and the standard velocity response spectrum, with a computing time step of 0.00098 s and a damping ratio of 0.05. Fig. 4 shows, for one blasting vibration record, the relative velocity response spectrum and the acceleration response spectrum together with their standard response spectrum curves.
Fig. 3. Measured velocity curve and the corresponding calculated acceleration and displacement curves

Fig. 4. Relative velocity and acceleration response spectrum curves and standard response spectra
4 Response Spectrum Characteristics of Blasting Vibration

Based on the response spectrum calculation and analysis of 313 field blasting vibration records, the structural dynamic response excited by shallow-hole blasting seismic waves is found to have the following characteristics.

First, the maxima of the velocity response spectrum curve are concentrated around the period T = 0.05 s ~ 0.03 s (corresponding to 20 Hz ~ 33 Hz), and the maximum of the acceleration response spectrum appears to lag behind that of the velocity response spectrum. This shows that the frequency of the blasting vibration velocity wave concentrates in 20 Hz ~ 33 Hz, and that structures whose periods lie in this range (corresponding to natural frequencies of 20 Hz ~ 30 Hz) have the largest response to blasting vibration. The peaks of the blasting vibration response spectrum concentrate in the short-period range (less than 0.1
Response Spectrum Analysis of Surface Shallow Hole Blasting Vibration
389
seconds) and have large peak sharpness, because the frequencies of the blasting vibration response spectrum are mainly distributed in the high-frequency part.

Second, the surface shallow-hole blasting vibration response spectrum has fewer peaks than an earthquake's, because the duration of blasting vibration waves is much shorter than that of earthquakes. The shallow-hole blasting vibration response spectrum drops more sharply after its peak than an earthquake's and appears closer to exponential decay; the acceleration response spectrum shows this characteristic most clearly.

Third, the shallow-hole blasting vibration acceleration response spectrum is almost zero for structures with periods of more than 1 second, so rigid structures suffer the larger damage.

Fourth, the blasting velocity response spectrum has several peaks that change with the period, but beyond a certain period of great value the velocity response spectrum curve becomes parallel to the period axis. The acceleration response spectrum follows the same decay law, but decays faster. The displacement response spectrum curve, on the contrary, shows a certain attenuation at first, but then increases with increasing period.
References

1. Zhang, X., Huang, S.: Seismic Effect. Science Press, Beijing (1981)
2. Yu, Y., Chen, C., Zhang, L., et al.: Blasting Vibration Test and Safety Assessment of Huaneng (Shantou) Power Plant. University of Science and Technology Beijing, Beijing (2003)
3. Zhang, Y., Chi, E., Li, B.: Blasting Demolition of Yanjia Bridge and the Test and Analysis of Blasting Vibration. Mining Research and Development 29(1), 55–59 (2009)
4. Qian, S.: Blasting Ground Vibration Velocity Displacement Dynamic Response Analysis of the Incentive Structure. Blasting 17(suppl.), 28–33 (2000)
5. Shi, X., Dong, K., Chen, X.: Study on the Rock Damage Mechanism of Blasting Vibrations with Low Frequency. Mining Research and Development 29(1), 68–70 (2009)
6. Cao, X.: Study on Vibrations Effects of Ground Resulted from Blasting in Shallow Tunnel. Southeast Jiaotong University, Chengdu, China (2007)
7. Chen, S.-h., Wei, H.-x., Du, R.-q.: Multi-Resolution Wavelet Analysis of Blasting Vibration Signals. Rock and Soil Mechanics 30(suppl.), 135–140 (2009)
8. Liu, Y., Zhao, J.: Experimental Study on Amplification Effect of Blasting Vibration under Buildings. Mining Research and Development 29(6), 32–36 (2009)
9. Lin, Q.: Evaluation and Controlling of Blasting Vibration Effects on Buildings. Fujian Architecture & Construction, 89–91 (2009)
10. Wang, H.: Study on Influence of Rock Roadway Driving Blasting Seismic Effect and Wall Rock Stability. Anhui University of Science and Technology, Huainan, China (2009)
Iterative Method for a Class of Linear Complementarity Problems

Longquan Yong

Department of Mathematics, Shaanxi University of Technology, Hanzhong 723001, Shaanxi, P.R. China
[email protected]
Abstract. An iterative method for solving a class of linear complementarity problems with positive definite symmetric matrices is presented. Firstly, the linear complementarity problem is transformed into an absolute value equation, which is also a fixed-point problem. Then an iterative method for the linear complementarity problem based on the fixed-point principle is presented. The method begins with an initial point chosen arbitrarily and converges to the optimal solution of the original problem after finitely many iterations. The effectiveness of the method is demonstrated by its ability to solve some standard test problems found in the literature.

Keywords: iterative method, linear complementarity problem, positive definite symmetric matrices, absolute value equation, fixed-point principle.
1 Introduction

The linear complementarity problem (LCP) is to determine a vector pair (ω, z) satisfying

ω − Mz = q,  ω ≥ 0,  z ≥ 0,  ωᵀz = 0,    (1)
where ω, z, q ∈ Rⁿ, M ∈ R^(n×n), and M is a positive definite symmetric matrix.

LCP (1) is a fundamental problem in mathematical programming. It is known that any differentiable linear or quadratic program can be formulated as an LCP (1). The LCP also has a wide range of applications in economics and engineering; interested readers are referred to the survey paper [1].

A number of direct methods have been proposed for its solution. The book by Cottle et al. [2] is a good reference for the pivoting methods developed to solve the LCP. Another important class of methods used to tackle the LCP is the interior point methods (IPMs). Modern interior point methods originated from an algorithm introduced by Karmarkar in 1984 for

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 390–398, 2010. © Springer-Verlag Berlin Heidelberg 2010
linear programming [3]. Most IPMs for the LCP can be viewed as natural extensions of the interior point methods for linear programming. The most successful interior point method is the primal-dual method. Primal-dual IPMs for the linear optimization problem were first introduced in [4] and [5]. Kojima et al. [4] first proved the polynomial computational complexity of the algorithm for the linear optimization problem, and since then many other algorithms have been developed based on the primal-dual strategy. Kojima et al. [6] proposed a polynomial time algorithm for the monotone LCP under the nonemptiness assumption of the set of feasible interior points.

Each algorithm in the class of interior point methods for the LCP has the common feature that it generates a sequence {(ω^k, z^k), k = 0, 1, 2, ...} in the positive orthant of R^(2n), under the assumption of knowing a feasible initial point (ω⁰, z⁰). If each point (ω^k, z^k) of the generated sequence satisfies the equality system ω − Mz = q, then we say that the algorithm is a feasible interior point algorithm. However, finding a feasible initial point to start the algorithm is a very difficult task. To overcome this difficulty, recent studies have focused on new interior point algorithms that do not need a feasible initial point. In 1993, Kojima et al. presented the first infeasible interior point algorithm with global convergence [7]; soon after, Zhang [8] and Wright [9] introduced this technique to the linear complementarity problem. Thus few numerical experiments have been reported on feasible interior point algorithms, because of the difficulty of selecting a strictly feasible starting point.

For the above reasons, in this paper we propose an iterative method for solving LCP (1) with positive definite symmetric matrices. Firstly, LCP (1) is transformed into an absolute value equation, which is also a fixed-point problem. Then we present an iterative method for LCP (1) based on the fixed-point principle. The method begins with an initial point chosen arbitrarily and converges to the optimal solution of the original problem after finitely many iterations. The effectiveness of the method is demonstrated by its ability to solve some standard test problems found in the literature.

In Section 2 we transform LCP (1) into a fixed-point problem and present our new algorithm. The proof of the convergence result is developed in Section 3. Section 4 gives some standard test problems from the literature. Section 5 contains some concluding remarks and comments.

We now describe our notation. All vectors are column vectors. For x ∈ Rⁿ, the 2-norm is denoted by ‖x‖, while |x| denotes the vector of absolute values of the components of x. The notation A ∈ R^(m×n) signifies a real m × n matrix. If A is a square matrix of order n, its norm, denoted ‖A‖, is defined to be the supremum of {‖Ax‖ / ‖x‖ : x ∈ Rⁿ, x ≠ 0}. From this definition, we have ‖Ax‖ ≤ ‖A‖ ‖x‖ for all x ∈ Rⁿ. We write I for the identity matrix (of suitable dimension from context). A vector of zeros in a real space of arbitrary dimension is denoted by 0.
392
L. Yong
2 Theoretical Background and New Algorithm

The following result is due to W. M. G. Van Bokhoven [10]. We consider the LCP (1), where M is assumed to be a positive definite symmetric matrix. For q ≥ 0, (ω, z) = (q, 0) is the unique solution of LCP (1), so we only consider the case q ∉ Q, where Q = {q | q ≥ 0}.
Theorem 1 (W. M. G. Van Bokhoven). Let M be positive definite and symmetric. The LCP (1) is equivalent to the fixed-point problem of determining x ∈ Rⁿ satisfying

x = f(x),    (2)

where f(x) = B|x| + c, B = (I + M)⁻¹(I − M), and c = −(I + M)⁻¹ q.
Proof. In (1), transform the variables by substituting

ωj = |xj| − xj,  zj = |xj| + xj,  for each j = 1 to n.    (3)

From (3) we verify that the constraints ωj ≥ 0, zj ≥ 0 for j = 1 to n automatically hold. Substituting (3) into ω − Mz = q leads to f(x) − x = 0. Further, ωj zj = 0 for j = 1 to n, by (3). So any solution x of (2) automatically leads to a solution of the LCP (1) through (3). Conversely, suppose (ω, z) is the solution of LCP (1).
Then x = (z − ω)/2 can be verified to be the solution of (2).

Since M is positive definite and symmetric, all its eigenvalues are real and positive. If λ1, ..., λn are the eigenvalues of M, then the eigenvalues of B = (I + M)⁻¹(I − M) are given by μi = (1 + λi)⁻¹(1 − λi), i = 1 to n; hence all μi are real and satisfy |μi| < 1 for all i (since λi > 0). Since B is also symmetric, we have

‖B‖ = max{ |μi| : i = 1 to n } < 1.
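The eigenvalue relation μi = (1 + λi)⁻¹(1 − λi) and the bound ‖B‖ < 1 are easy to check numerically; the sketch below uses a randomly generated positive definite symmetric matrix purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)              # positive definite, symmetric

lam = np.linalg.eigvalsh(M)              # eigenvalues of M (all > 0)
B = np.linalg.solve(np.eye(n) + M, np.eye(n) - M)
mu = (1.0 - lam) / (1.0 + lam)           # predicted eigenvalues of B
rho = np.linalg.norm(B, 2)               # spectral norm of symmetric B

# rho equals max |mu_i| and is strictly less than 1
```

Because (I + M) and (I − M) are polynomials in M, they commute, so B is symmetric and its spectral norm equals its largest eigenvalue in absolute value.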
In the following we describe our new method for solving the absolute value equation (2). Since (2) is also a fixed-point problem, an iterative method is a common way to solve it. The name iterative method usually refers to a method that provides a simple formula for computing the (k+1)th point as an explicit function of the kth point, x^(k+1) = f(x^k). The method begins with an initial point x¹ (often x¹ can be chosen arbitrarily, subject to some simple constraints that may be specified, such as x¹ ≥ 0, etc.) and generates the sequence of points {x¹, x², ...} one after the other using the above formula. The method can be terminated whenever one of the points in the sequence can be recognized as a solution of the problem under consideration. If finite termination does not occur, mathematically the method has to be continued indefinitely. In some of these methods, it is possible to prove that the sequence {x^k}
converges in the limit to a solution of the problem under consideration, or that every accumulation point of the sequence {x^k} is a solution. In practice, it is impossible to continue the method indefinitely; in such cases, the sequence is computed to some finite length, and the final point is accepted as an approximate solution of the problem. Most algorithms for solving nonlinear programming problems are iterative in nature, and the iterative method discussed here can be interpreted as a specialization of some nonlinear programming algorithms applied to solve the absolute value equation (2).

Based on the above discussion, we now state the iterative method for solving (2).

Algorithm 1 (The iterative method)
Given an arbitrary initial point x¹ ∈ Rⁿ and a convergence tolerance ε > 0;
For k = 1, 2, 3, ...
  calculate the next point

    x^(k+1) = f(x^k) = B|x^k| + c;    (4)

End.

Equation (4) defines the iterative scheme. Beginning with the initial point x¹ ∈ Rⁿ chosen arbitrarily, the sequence {x¹, x², ...} is generated using (4) repeatedly. This iteration is just the successive substitution method for computing the Brouwer fixed point of (2). We will now prove that the generated sequence {x¹, x², ...} converges in the limit to the unique fixed point x* of (2).
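A direct implementation of Algorithm 1 takes only a few lines. The sketch below (written in Python/NumPy rather than the authors' Matlab) applies it to the 3 × 3 example LCP1 of Section 4, with the transformation (3) recovering (ω, z):

```python
import numpy as np

def lcp_iterative(M, q, tol=1e-4, max_iter=100000):
    """Algorithm 1: x_{k+1} = B |x_k| + c with B = (I+M)^{-1}(I-M),
    c = -(I+M)^{-1} q; M must be symmetric positive definite."""
    n = len(q)
    I = np.eye(n)
    B = np.linalg.solve(I + M, I - M)
    c = -np.linalg.solve(I + M, q)
    x = np.zeros(n)                      # arbitrary start, x^1 = 0
    for _ in range(max_iter):
        x_new = B @ np.abs(x) + c
        done = np.linalg.norm(x_new - x) <= tol
        x = x_new
        if done:
            break
    w = np.abs(x) - x                    # transformation (3)
    z = np.abs(x) + x
    return w, z, x

M = np.array([[3., -2., -1.], [-2., 2., 1.], [-1., 1., 1.]])
q = np.array([14., -11., -7.])
w, z, x = lcp_iterative(M, q)
# x* ≈ [-1.5, 2, 1.5], hence w* ≈ [3, 0, 0] and z* ≈ [0, 4, 3]
```

By construction w = |x| − x ≥ 0, z = |x| + x ≥ 0 and wᵀz = 0 hold at every iterate; only the equality ω − Mz = q is satisfied in the limit.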
3 Convergence Analysis

Theorem 2. Let M be positive definite and symmetric. Then the sequence of points {x^k} defined by (4) converges in the limit to x*, the unique solution of (2), and the solution (ω*, z*) of the LCP (1) can be obtained from the transformation (3).

Proof. For any x, y ∈ Rⁿ we have

‖f(x) − f(y)‖ = ‖B(|x| − |y|)‖ ≤ ‖B‖ ‖|x| − |y|‖ ≤ ‖B‖ ‖x − y‖,

since ‖|x| − |y|‖ ≤ ‖x − y‖ and ‖B‖ < 1. So f(x) is a contraction mapping (see reference [11]), and by the Banach contraction mapping theorem the sequence {x^k} generated by (4) converges in the limit to the unique solution x* of (2). The rest follows from Theorem 1.

We denote ‖B‖ by the symbol ρ. We know that ρ < 1, and it can actually be computed by matrix-theoretic algorithms (see reference [12]).

Theorem 3. Let M be positive definite and symmetric. Let x^k be the kth point obtained by the iterative scheme (4) and let x* be the unique solution of (2). Then for k ≥ 1,
‖x^(k+1) − x*‖ ≤ (ρ^k / (1 − ρ)) ‖x² − x¹‖.

Proof. We have ‖x^(k+1) − x*‖ = ‖f(x^k) − f(x*)‖ ≤ ρ ‖x^k − x*‖. Applying the iterative scheme (4) repeatedly, we get ‖x^(k+1) − x*‖ ≤ ρ^k ‖x¹ − x*‖, and, using (4) repeatedly again, for k ≥ 2 we have

‖x^(k+1) − x^k‖ ≤ ρ^(k−1) ‖x² − x¹‖.    (5)

We also have x* − x¹ = (x* − x²) + (x² − x¹), so ‖x* − x¹‖ ≤ ‖x* − x²‖ + ‖x² − x¹‖. Using the same argument repeatedly, together with the fact that x* = lim x^k as k → ∞, we get

‖x* − x¹‖ ≤ Σ_(k=1..∞) ‖x^(k+1) − x^k‖ ≤ Σ_(k=0..∞) ρ^k ‖x² − x¹‖ = (1 / (1 − ρ)) ‖x² − x¹‖.    (6)

Combining (6) with the bound ‖x^(k+1) − x*‖ ≤ ρ^k ‖x¹ − x*‖ leads to ‖x^(k+1) − x*‖ ≤ (ρ^k / (1 − ρ)) ‖x² − x¹‖ for k ≥ 1.
Corollary 1. Let M be positive definite and symmetric, and let x¹ = 0. Then

‖x^(k+1) − x*‖ ≤ (ρ^k / (1 − ρ)) ‖c‖.

Proof. Follows from Theorem 3, since x² = f(x¹) = c.

Theorem 4. Let M be positive definite and symmetric, and let x¹ = 0. Then the algorithm terminates with an approximate solution in O(log(ρε‖c‖⁻¹) / log ρ) iterations.

Proof. Since

‖x^(k+1) − x^k‖ ≤ ρ^(k−1) ‖x² − x¹‖ = ρ^(k−1) ‖x²‖ = ρ^(k−1) ‖f(x¹)‖ = ρ^(k−1) ‖c‖,

a sufficient condition for ‖x^(k+1) − x^k‖ ≤ ε is given by ρ^(k−1) ‖c‖ ≤ ε. This implies that (k − 1) log ρ + log‖c‖ ≤ log ε, so we obtain the bound k ≥ log(ρε‖c‖⁻¹) / log ρ. Thus the theorem is proven.

Corollary 2. Let M be positive definite and symmetric, and let x* be the unknown solution of (2). Then ‖x*‖ ≥ ‖c‖ / (1 + ρ).
Proof. From (2), we have

‖x*‖ = ‖B|x*| + c‖ ≥ ‖c‖ − ‖B|x*|‖ ≥ ‖c‖ − ρ‖x*‖.

So ‖x*‖ ≥ ‖c‖ / (1 + ρ).
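The bound of Theorem 4 can also be checked numerically against an actual run. This is a sketch reusing the LCP1 data of Section 4; with ρ ≈ 0.669 and ε = 10⁻⁴ the bound evaluates to about 27 iterations, comfortably above the observed count.

```python
import math
import numpy as np

M = np.array([[3., -2., -1.], [-2., 2., 1.], [-1., 1., 1.]])
q = np.array([14., -11., -7.])
I = np.eye(3)
B = np.linalg.solve(I + M, I - M)
c = -np.linalg.solve(I + M, q)
rho = np.linalg.norm(B, 2)          # contraction factor, < 1
eps = 1e-4

# Theorem 4: stopping at ||x_{k+1} - x_k|| <= eps needs at most about
# log(rho * eps / ||c||) / log(rho) iterations
bound = math.log(rho * eps / np.linalg.norm(c)) / math.log(rho)

x = np.zeros(3)
for k in range(1, 10000):
    x_new = B @ np.abs(x) + c
    step = np.linalg.norm(x_new - x)
    x = x_new
    if step <= eps:
        break
# the observed iteration count k stays at or below the theoretical bound
```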
4 Numerical Results LCP1. First we consider one LCP problem where the data (M, q) are
⎡ 3 −2 −1⎤ M = ⎢⎢−2 2 1 ⎥⎥ , ⎢⎣−1 1 1 ⎥⎦
⎡ 14 ⎤ q = ⎢⎢−11⎥⎥ . ⎢⎣ −7 ⎥⎦
Since M is symmetric and its eigenvalues eig(M)=[0.3080,0.6431,5.0489], the LCP is uniquely solvable by theorem 2. Chose initial point x1 = 0 , ε = 1× 10−4 . We use
x k +1 − x k ≤ ε as the stopping rule. After 13 iterations, the unique solution to fixed-point problem (2) is x* = [-1.5000,2.0000,1.5000]T . Thus the unique solution
( ω * , z * ) of the LCP (1) can be obtained from the transformation (3), that is
ω * = x * − x * = [ 2 .999 9,0,0 ]T , z* = x* + x* = [0,3.9999,3.0000]T . LCP2. This test problem was taken from [13] and has also been cited in [14]. It is a standard test problem for LCP,
M = [ 1 2 2 … 2 ; 2 5 6 … 6 ; 2 6 9 … 10 ; ⋮ ; 2 6 10 … 4n−3 ],   q = [ −1 ; −1 ; ⋮ ; −1 ].
The matrix M is positive definite, and the solution is x* = (1, 0, …, 0)^T. Using Algorithm 1, the number of iterations needed to converge to the optimal solution is given in Table 1.

LCP3. This problem was taken from [15]; M is the tridiagonal matrix
M = [ 4 −1 0 … 0 ; −1 4 −1 … 0 ; 0 −1 4 … 0 ; ⋮ ; 0 0 … −1 4 ],   q = [ −1 ; −1 ; ⋮ ; −1 ].
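For reference, both families of test matrices can be generated programmatically. In the sketch below, the closed form used for LCP2, namely M_ij = 4·min(i,j) − 3 on the diagonal and 4·min(i,j) − 2 off it, is inferred from the printed pattern of the matrix above and should be checked against the original references before serious use.

```python
import numpy as np

def lcp2_matrix(n):
    """LCP2 test matrix: rows 1 2 2 ... 2 / 2 5 6 ... 6 / 2 6 9 10 ...,
    with diagonal 4i - 3 (closed form inferred from the printed pattern)."""
    M = np.empty((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            M[i - 1, j - 1] = 4 * min(i, j) - (3 if i == j else 2)
    return M

def lcp3_matrix(n):
    """LCP3 tridiagonal test matrix: 4 on the diagonal, -1 off it."""
    return 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# q = (-1, ..., -1)^T in both problems
q = -np.ones(4)
```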
Table 1. The number of iterations of LCP2 by Algorithm 1

Dimension   Iterations   Elapsed time (in seconds)
4           49           0
8           227          0
16          832          0.0470
32          2705         0.8440
64          7747         27.2500
128         17731        297.4690
Table 2. The number of iterations of LCP3 by Algorithm 1

Dimension   Iterations   Elapsed time (in seconds)
4           17           0
8           17           0
16          17           0
32          17           0
64          17           0
128         17           0
Table 3. The number of iterations of LCP4 by Algorithm 1

Dimension   Iterations   Elapsed time (in seconds)
4           38           0
8           85           0
16          255          0
32          573          0.0630
64          1385         0.5310
128         2501         5.5310
It is a standard test problem for LCP, too. Using Algorithm 1, the number of iterations needed to converge to the optimal solution is given in Table 2.

LCP4. Next we consider some randomly generated LCPs with positive definite and symmetric M, where the data (M, q) are generated by the Matlab script:

n=input('dimension of matrix M');
rand('state',0);
R=rand(n,n);
M=R'*R+n*eye(n);
q=rand(n,1);
and we set the random-number generator to state 0 so that the same data can be regenerated. Choose the initial point x^1 = 0 and ε = 1 × 10^{−4}. In all instances the algorithm performs extremely well and finally converges to an optimal solution of the LCP. More detailed numerical results are presented in Table 3. All experiments were performed under Windows XP on an HP 540 laptop with an Intel(R) Core(TM)2 CPU (2 × 1.8 GHz) and 2 GB RAM, and the codes were written in Matlab 6.5.
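The iteration itself is easy to reproduce. The sketch below assumes the fixed-point map is the modulus transformation x ↦ (I+M)^{−1}((I−M)|x| − q), which is consistent with Van Bokhoven [10] and with the transformation ω = |x| − x, z = |x| + x used above; this is the author's reading of scheme (4), not a verbatim transcription of it. On LCP1 it reproduces the reported solution.

```python
import numpy as np

def modulus_iteration(M, q, eps=1e-4, max_iter=10000):
    """Fixed-point iteration x_{k+1} = (I+M)^{-1}((I-M)|x_k| - q),
    started from x_1 = 0, stopped when ||x_{k+1} - x_k|| <= eps."""
    n = len(q)
    I = np.eye(n)
    # Solve (I+M) X = [I-M | -q] once, giving x -> B|x| + c
    X = np.linalg.solve(I + M, np.hstack([I - M, -q[:, None]]))
    B, c = X[:, :n], X[:, n]
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        x_new = B @ np.abs(x) + c
        if np.linalg.norm(x_new - x) <= eps:
            return x_new, k
        x = x_new
    return x, max_iter

# LCP1 data from Section 4
M = np.array([[3., -2., -1.], [-2., 2., 1.], [-1., 1., 1.]])
q = np.array([14., -11., -7.])
x, iters = modulus_iteration(M, q)
w, z = np.abs(x) - x, np.abs(x) + x   # transformation (3)
```

With ε = 1 × 10^{−4} this converges in a handful of iterations to x* ≈ [−1.5, 2, 1.5]^T, and (ω, z) recover the complementary pair reported for LCP1.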
5 Conclusion

In this work we have established an iterative method for solving a class of linear complementarity problems with positive definite symmetric matrices and proved its global convergence. Preliminary numerical experiments with standard test problems and some randomly generated problems indicate that the proposed algorithm is promising for solving the LCP.

Acknowledgments. The author is very grateful to the referees for their valuable comments and suggestions. This work is supported by the Natural Science Foundation of Shaanxi Educational Committee (No. 09JK381).
References

1. Billups, S.C., Murty, K.G.: Complementarity Problems. Journal of Computational and Applied Mathematics 124, 303–318 (2000)
2. Cottle, R.W., Pang, J.S., Stone, R.E.: The Linear Complementarity Problem. Academic Press, London (1992)
3. Karmarkar, N.: A New Polynomial-Time Algorithm for Linear Programming. Combinatorica 4, 373–395 (1984)
4. Kojima, M., Megiddo, N., Noma, T., Yoshise, A.: A Primal-Dual Interior Point Algorithm for Linear Programming. In: Megiddo, N. (ed.) Progress in Mathematical Programming: Interior Point and Related Methods, pp. 29–47. Springer, New York (1989)
5. Megiddo, N.: Pathways to the Optimal Set in Linear Programming. In: Megiddo, N. (ed.) Progress in Mathematical Programming: Interior Point and Related Methods, pp. 158–313. Springer, New York (1989)
6. Kojima, M., Mizuno, S., Yoshise, A.: A Polynomial-Time Algorithm for a Class of Linear Complementarity Problems. Math. Prog. 44, 1–26 (1989)
7. Kojima, M., Megiddo, N., Mizuno, S.: A Primal-Dual Infeasible Interior Point Algorithm for Linear Programming. Math. Prog. 61, 261–280 (1993)
8. Zhang, Y.: On the Convergence of a Class of Infeasible-Interior-Point Methods for the Horizontal Linear Complementarity Problem. SIAM J. Optim. 4, 208–227 (1994)
9. Wright, S.J.: An Infeasible-Interior-Point Algorithm for Linear Complementarity Problems. Math. Prog. 67, 29–52 (1994)
10. Van Bokhoven, W.M.G.: A Class of Linear Complementarity Problems is Solvable in Polynomial Time. Department of Electrical Engineering, University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands (1980)
11. Granas, A., Dugundji, J.: Fixed Point Theory. Springer, New York (2003)
12. Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, New York (1990)
13. Yong, L.: An Interior Point Method for Solving Monotone Linear Complementarity Problem. Journal of Mathematics 29, 681–686 (2009)
14. Yong, L.: Potential-Reduction Interior Point Method for Monotone Linear Complementarity Problem. Journal of Shaanxi University of Technology, Natural Science Edition 25, 52–57 (2009)
15. Sun, Z., Zeng, J.: Semismooth Newton Schwarz Iterative Methods for the Linear Complementarity Problem. BIT Numerical Mathematics 50, 425–429 (2010)
A Hybrid Immune Algorithm for Sequencing the Mixed-Model Assembly Line with Variable Launching Intervals Ran Liu1, Peihuang Lou1, Dunbing Tang1, and Lei Yang2 1
Nanjing University of Aeronautics and Astronautics, College of Mechanical and Electrical Engineering, 210016, Nanjing, China 2 Jiangsu Miracle Logistics, 214187, Wuxi, China {Ran Liu,milkfst}@163.com
Abstract. A challenging multi-objective sequencing problem with variable launching intervals is studied. A novel hybrid algorithm based on a multi-objective clonal selection algorithm and a co-evolutionary algorithm is developed for the system control. The clonal selection algorithm for the multi-objective sequencing model serves as the driving system, while the co-evolutionary immune algorithm for acquiring launching intervals is subordinate to it and runs in parallel on distributed systems in order to guarantee the real-time requirements. The evolution operators, such as coding, decoding and the collaboration formation mechanism, are defined. The scheme is shown to improve the system optimization and to achieve better solution sets than other available algorithms.

Keywords: immune algorithm; distributed systems; mixed-model assembly line; variable launching intervals.
1 Introduction

In research on the mixed-model assembly line (MMAL), minimizing the total utility work cost, the total production rate variation cost and the total setup cost are considered the most important objectives. Tavakkoli-Moghaddam [1] presented a memetic algorithm to solve a number of test-bed problems with these weighted objectives. Rahimi-Vahed [2] designed a hybrid algorithm based on the shuffled frog-leaping algorithm and bacteria optimization for this multi-objective problem. In MMAL, variable launching intervals can be dynamically adapted to avert idle time and work overloads [3]. Bock [4] extended the pure sequencing problem by integrating six variables, including the launching discipline, and addressed it by setting up a real-time control system. Fattahi [5] brought up a sequencing problem based on variable launching intervals; a simulated annealing algorithm combined with an LIBP algorithm was applied to minimize the total utility and idle costs. The proposed LIBP algorithm is a kind of heuristic local-search method: its efficiency is limited by the progressive gradient of the algorithm, and the weak accuracy of the launching intervals affects the results directly.

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 399–406, 2010. © Springer-Verlag Berlin Heidelberg 2010
Recent advances show that coevolutionary architectures are an effective way to broaden the use of traditional evolutionary algorithms. Tan [6] integrated cooperative coevolution with SPEA2; the comparison of the new algorithm with SPEA2 showed the effectiveness of the cooperative coevolution method. Halavati [7] presented a novel algorithm that combines traditional artificial immune systems with a symbiotic combination operator. Tan et al. [8] proposed a cooperative coevolutionary algorithm on distributed systems which decomposes decision vectors into smaller components and evolves multiple solutions in the form of cooperating subpopulations. Considering the superiority of the coevolutionary method on multi-variable optimization problems, a hybrid immune algorithm is proposed in this paper for the multi-criteria sequencing problem based on variable launching intervals. The coevolutionary algorithm running on a distributed system is employed in order to overcome the shortcomings of local-search methods such as LIBP. The structure of this paper is as follows: Section 2 presents a detailed description of the problem; Section 3 proposes the hybrid algorithm; Section 4 provides experimental results showing the efficiency of the proposed algorithm; finally, Section 5 gives the conclusion.
2 Problem Description and Formulation

The Minimum Part Set (MPS) discipline is adopted in this paper. Three objectives are considered in the model:

① minimizing the cost of utility work and idle time [5]:

Min Σ_{i=1}^{I} Σ_{j=1}^{J} (C_Idl × ID_ij + C_U × U_ij)    (1)

s.t.

Σ_{m=1}^{M} x_im = 1,   i = 1, …, I    (2)

Σ_{i=1}^{I} x_im = d_m,   m = 1, …, M    (3)

Z_{1,j+1} = Σ_{l=1}^{j} L_l,   j = 1, …, J−1    (4)

Z_{i+1,j} = Z_ij + v_c × (Σ_{m=1}^{M} x_im × t_mj − U_ij − a_{i+1} + ID_{i+1,j}),   i = 1, …, I−1    (5)

U_ij ≥ (Z_ij + v_c Σ_{m=1}^{M} x_im × t_mj − Σ_{l=1}^{j} L_l) / v_c,   i = 1, …, I−1, j = 1, …, J    (6)

U_Ij ≥ (Z_Ij + v_c Σ_{m=1}^{M} x_Im × t_mj − (Σ_{l=1}^{j−1} L_l + v_c × a_{I+1})) / v_c,   j = 1, …, J    (7)

ID_ij ≥ (Σ_{l=1}^{j−1} L_l − (Z_{i−1,j} + v_c Σ_{m=1}^{M} x_{i−1,m} × t_mj − v_c × U_{i−1,j} − v_c × a_i)) / v_c,   i = 2, …, I−1, j = 1, …, J    (8)

where C_Idl is the cost of idle time; C_U is the cost of utility work; d_m is the number of products of model type m; L_j is the length of workstation j; t_mj is the assembly time for model m at station j; Z_ij is the starting position of product i at station j, with Z_11 = 0; ID_ij ≥ 0 is the idle time for product i at station j; U_ij ≥ 0 is the utility work time for product i at station j; a_{i+1} is the variable launching interval between product i and product i+1; I is the total number of products in a sequence, i ∈ {1, 2, …, I}; M is the number of models, m ∈ {1, 2, …, M}; J is the total number of workstations, j ∈ {1, 2, …, J}; x_im is 1 if product i in a sequence is of model m and 0 otherwise; v_c is the speed of the conveyor; and C is the cycle time.

② minimizing the cost of total production rate variation [1]:

Min Σ_{l=1}^{I} Σ_{m=1}^{M} (Σ_{i=1}^{l} x_im / l − d_m / I)²    (9)

③ minimizing the cost of total setups [1]:

Min Σ_{j=1}^{J} Σ_{i=1}^{I} Σ_{m=1}^{M} Σ_{r=1}^{M} x_imr c_jmr    (10)

s.t.

Σ_{m=1}^{M} Σ_{r=1}^{M} x_imr = 1,   ∀i    (11)

Σ_{m=1}^{M} x_imr = Σ_{p=1}^{M} x_{(i+1)rp},   i = 1, …, I−1, ∀r    (12)

Σ_{m=1}^{M} x_1mr = Σ_{p=1}^{M} x_1rp,   ∀r    (13)

Σ_{i=1}^{I} Σ_{r=1}^{M} x_imr = d_m,   ∀m    (14)

where c_jmr is the setup cost required when the model type is changed from m to r at station j, and x_imr is 1 if the i-th product is of model type m and the (i+1)-th product is of model type r, and 0 otherwise.
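For concreteness, the last two objectives are straightforward to evaluate for a given sequence. The sketch below is illustrative rather than a transcription of the model: it reads Eq. (9) with a squared deviation term and assumes an identical setup-cost matrix c at every station (as in the test instance used later in the paper); model indices run from 0.

```python
def prv_cost(seq, demand):
    """Total production rate variation, Eq. (9) read with a squared
    deviation: sum over prefixes l of (count_m(l)/l - d_m/I)^2."""
    I, M = len(seq), len(demand)
    counts = [0] * M
    total = 0.0
    for l, model in enumerate(seq, start=1):
        counts[model] += 1
        total += sum((counts[m] / l - demand[m] / I) ** 2 for m in range(M))
    return total

def setup_cost(seq, c, J):
    """Total setup cost, Eq. (10), with the same cost matrix c[m][r]
    applied at each of the J stations."""
    return J * sum(c[seq[i]][seq[i + 1]] for i in range(len(seq) - 1))
```

For example, with two models in equal demand, the sequence [0, 1] incurs its entire production rate variation cost at the first position, where the realized mix cannot yet match the ideal one.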
3 Proposed Hybrid Algorithm

The proposed model can be analyzed from two aspects: the sequencing problem (SP) and the variable launching intervals (VLI) under determined sequences. In this paper, the sequencing problem is solved with the multi-objective clonal selection algorithm (MOCSA), which serves as the main algorithm in the framework. The VLI problem is solved as a multivariate function whose minimum is returned to the main algorithm as the fitness of objective I. A co-evolutionary immune algorithm (CEIA) is proposed for the VLI problem.

3.1 MOCSA for SP

Model-based encoding has been chosen. In this paper, both the coding differences and the fitness differences are adopted as the parameters of affinity between antibodies. The clonal ratio of antibody A_i (denoted by cr_i) is defined as follows:

cr_i = ⌈ α · cd_i · θ_i · (1/r_i) ⌉    (15)

θ_i = min_{j≠i} { exp(‖A_i − A_j‖) }    (16)

where r_i is the rank of A_i based on non-dominated sorting [9]; cd_i is the crowding distance of A_i [9]; ‖A_i − A_j‖ is the Hamming distance between two antibodies; and α is an adjustable coefficient for the clonal size. cd_i measures the affinity between A_i and the other antibodies in terms of fitness values, while ‖A_i − A_j‖ measures it in terms of coding.

The switch and inverse operators from genetic algorithms are used in this paper. The mutation rate ε_t is obtained from the equation below:

ε_t = ε_0 · exp(−δt)    (17)

where δ is the decay factor, t is the evolution generation, and ε_0 is the initial mutation rate. This rate is dynamic: it is high at the beginning and decreases as the population matures, which benefits the convergence of the algorithm.

For a problem with M_o objectives, denote the population size by N_s. The complexity of calculating r_i is O(M_o N_s²). The complexity of the crowding-distance assignment is O(M_o N_s log N_s) [9]. When calculating Hamming distances, each solution must be compared with every other solution in the population; this requires O(N_s) per solution and O(N_s²) for all population members. Hence the overall complexity of the algorithm is O(M_o N_s²).
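Equations (15) and (16) can be sketched directly. Note that the minimum in (16) is taken here over the other antibodies (j ≠ i), which the text leaves implicit, and the ranks and crowding distances are assumed to be supplied by the usual non-dominated sorting of [9]; the example values below are purely illustrative.

```python
import math

def clonal_ratios(antibodies, ranks, crowding, alpha=5):
    """cr_i = ceil(alpha * cd_i * theta_i / r_i), with
    theta_i = min_{j != i} exp(hamming(A_i, A_j))   (Eqs. (15)-(16))."""
    n = len(antibodies)
    ratios = []
    for i in range(n):
        theta = min(
            math.exp(sum(a != b for a, b in zip(antibodies[i], antibodies[j])))
            for j in range(n) if j != i
        )
        ratios.append(math.ceil(alpha * crowding[i] * theta / ranks[i]))
    return ratios
```

Antibodies that are well ranked, isolated in objective space (large cd_i) and far from their nearest neighbor in coding space receive more clones, exactly the bias the two affinity measures are meant to produce.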
3.2 CEIA for the VLI Problem

In CEIA, each subpopulation is responsible for one launching interval. The evolution of every subpopulation still follows the clonal selection algorithm. Denote by P^i one of the subpopulations and by A_m^i an antibody in it, and suppose the length of the binary-coded antibody is l. Considering production practice, the value of an antibody after decoding is given by:

χ_m^i = t_min + ([A_m^i]_2 / (2^l − 1)) × (t_max − t_min)    (18)

where [A_m^i]_2 is the integer value of the binary string, t_max is the maximum assembly time over all models and stations, and t_min is the minimum one.

Three types of cooperation are applied to produce candidate solutions. The first combines an individual with the partners of its parent. The second selects the current best components from each subpopulation; this gives a high probability that the good partners of parent individuals are matched with the child individuals again, so that good combinations are preserved. The third selects random components. The best of these three combinations is retained. This hybrid collaboration formation mechanism increases the chance of finding more diverse solutions.

Denote by ξ_m^i the normalized fitness of A_m^i. The clonal ratio of antibody A_m^i in subpopulation P^i is given by:

cr_m^i = ⌈ α · min_{n≠m}{ exp(‖A_m^i − A_n^i‖) } / ξ_m^i ⌉    (19)
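The decoding in Eq. (18) maps an l-bit antibody linearly onto [t_min, t_max]; the lower endpoint is read here as t_min, which the garbled camera-ready original leaves ambiguous.

```python
def decode_interval(bits, t_min, t_max):
    """Eq. (18): chi = t_min + [A]_2 / (2^l - 1) * (t_max - t_min),
    where [A]_2 is the integer value of the bit list `bits` (MSB first)."""
    l = len(bits)
    value = int("".join(str(b) for b in bits), 2)
    return t_min + value / (2 ** l - 1) * (t_max - t_min)
```

An all-zero antibody decodes to t_min and an all-one antibody to t_max; with l = 9 as in the experiments below, the grid step is (t_max − t_min)/511.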
Denote by N_i the size of one subpopulation. As mentioned above, the complexity of calculating Hamming distances is O(N_i²), and the complexity of the collaboration formation mechanism is O(N_i), so the complexity of one subpopulation is O(N_i²). If the algorithm has M_v subpopulations, the overall complexity of CEIA is O(M_v N_i²).

3.3 Hybrid Algorithm on Distributed Systems

Running the hybrid algorithm consumes considerable CPU resources, especially for CEIA, and would otherwise limit the speed; distributed computing is employed in this paper to address this problem. Rivera [10] introduced a global parallelization in which only the time-consuming fitness evaluations are parallelized, by assigning a fraction of the population to each processor. This global parallelization is particularly suitable for the hybrid algorithm in this paper and is conducted in the following way: the MOCSA runs on a central machine; when objective I is to be calculated, N threads are started by the main process; each thread invokes the CEIA, which runs as a remote service hosted on the clients; every launching interval obtained by the CEIA is returned to the MOCSA and processed by the central computer. This parallelization strategy preserves the behavior of the MOCSA as much as possible and is easy to implement.
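The global parallelization can be sketched with a thread pool. Here `ceia_solve` stands in for the remote CEIA service that returns the launching intervals for one candidate sequence; it is a placeholder name for illustration, not an API from the paper.

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_objective_one(sequences, ceia_solve, n_workers=10):
    """Evaluate objective I for each candidate sequence by invoking the
    CEIA service in parallel, one worker thread per outstanding request."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(ceia_solve, sequences))
```

Because each CEIA call is independent, this preserves the sequential semantics of the MOCSA loop while overlapping the expensive evaluations.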
404
R. Liu et al.
4 Numerical Experiment and Discussion

The test problem has 10 workstations and three models (denoted by A, B, C). The remaining production data are presented in Tables 1 and 2. The parameters of the algorithm are set as follows: for MOCSA, N = 50, IT = 50, α = 5, δ = 0.01, ε_0 = 0.2; for CEIA, N_i = 30, IT = 30, l = 9, α = 5, ε_0 = 0.2, δ = 0.01.
Table 1. Cost of setups

Models   A   B   C
A        0   2   1
B        2   0   1
C        1   1   0
Table 2. Assembly time and workstation length

Station   1    2    3    4    5    6    7    8    9    10
A         4    6    8    4    5    4    7    8    5    6
B         8    9    6    7    5    6    8    6    7    6
C         7    4    6    5    7    6    9    9    4    7
Length    12   14   12   11   11   9    14   14   11   11
4.1 Variable Launching Intervals and Fixed-Rate Cycle Time

To find out how the variable launching intervals affect the objectives, the intervals in objective I are varied while the cycle time is otherwise fixed at 7.1, a value tested to be suitable for this model. For ease of comparison, a heuristic rule is adopted: the solution with the lowest setup cost is chosen as the representative of the solution set. Test problems with ten different MPS are generated, and for each problem the algorithm is run ten times with both variable launching intervals and fixed-rate cycle time. The comparison of average results presented in Fig. 1 shows that the launching strategy improves the cost of utility work and idle time significantly and affects the cost of total production rate variation as well. In fact, the average improvement over the test problems is 55.7% for objective I and 29.6% for objective II, while the values of objective III are always equal. Moreover, the average cycle length (the sum of launching intervals in one MPS period) is 64.17 with variable launching intervals versus 78.1 with fixed-rate cycle time.

4.2 Comparison of CEIA and LIBP

To compare CEIA and LIBP, a contrast algorithm in which the CEIA is replaced by LIBP is tested on the same problems as in Section 4.1. For each problem, the hybrid algorithm is run ten times. We use the C̃ metric [11] to measure the quality of the Pareto sets obtained by the two hybrid algorithms. The average results over the ten problems are listed in Fig. 2. As can be seen, the hybrid algorithm using CEIA obtains a better solution set than the compared algorithm. This is because CEIA searches for viable launching intervals in parallel when objective I is calculated, whereas LIBP is a local-search method that calculates the intervals one by one iteratively.
Fig. 1. Comparison of objective I and objective II

Fig. 2. The histogram of the C̃ metric over problems P1–P10
4.3 CPU Time of the Hybrid Algorithm

The running environment of the proposed algorithm is a distributed system consisting of 11 nodes, each a PC with 1 GB RAM and a 3.0 GHz CPU. The MOCSA runs on the central machine and the CEIA runs in parallel on the ten other nodes as a background service. In all experiments, the CPU time of the proposed algorithm is between 27 s and 33 s, with an average of 31.73 s. In production practice, the cycle time is always set to more than one minute, so the algorithm is able to meet the real-time requirements.
5 Conclusions

In this paper, a hybrid algorithm combining the multi-objective clonal selection algorithm with a co-evolutionary immune algorithm is developed for the sequencing problem of MMAL with variable launching intervals. The co-evolutionary immune algorithm solves the launching-interval problem for each sequence and is realized on distributed systems. The first group of computational results shows that variable launching intervals can improve the objectives and also yield a shorter
cycle length of one MPS period. The second group shows that the multi-objective algorithm with CEIA obtains a better solution set than the compared algorithm. The running time indicates that the proposed algorithm can meet the demands of production practice.
References

1. Tavakkoli-Moghaddam, R., Rahimi-Vahed, A.R.: A Memetic Algorithm for Multi-criteria Sequencing Problem for a Mixed-Model Assembly Line in a JIT Production System. In: 2006 IEEE Congress on Evolutionary Computation, pp. 2993–2998. IEEE Press, Vancouver (2006)
2. Rahimi-Vahed, A., Mirzaei, A.H.: A Hybrid Multi-objective Shuffled Frog-Leaping Algorithm for a Mixed-Model Assembly Line Sequencing Problem. Computers & Industrial Engineering 53(4), 642–666 (2007)
3. Boysen, N., Fliedner, M., Scholl, A.: Sequencing Mixed-Model Assembly Lines: Survey, Classification and Model Critique. European Journal of Operational Research 192, 349–373 (2009)
4. Bock, S., Rosenberg, O., Brackel, T.: Controlling Mixed-Model Assembly Lines in Real-time by Using Distributed Systems. European Journal of Operational Research 168, 880–894 (2006)
5. Fattahi, P., Mohsen, S.: Sequencing the Mixed-Model Assembly Line to Minimize the Total Utility and Idle Costs with Variable Launching Interval. International Journal of Advanced Manufacturing Technology 45, 987–988 (2009)
6. Tse, G.T., Hui, K.L., Jason, T.: Cooperative Coevolution for Pareto Multiobjective Optimization: An Empirical Study Using SPEA2. In: TENCON 2007 - 2007 IEEE Region 10 Conference, pp. 1–4. IEEE Press, Taipei (2007)
7. Ramin, H., Saeed, B.S.: Symbiotic Artificial Immune System. Soft Computing 13, 565–575 (2008)
8. Tan, K.C., Yang, Y.J., Lee, T.H.: A Distributed Cooperative Coevolutionary Algorithm for Multiobjective Optimization. IEEE Transactions on Evolutionary Computation 10, 2513–2520 (2006)
9. Deb, K., Pratap, A., Agarwal, S., et al.: A Fast and Elitist Multi-objective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6, 182–197 (2002)
10. Rivera, W.: Scalable Parallel Genetic Algorithms. Artificial Intelligence Review 16, 153–168 (2001)
11. Zitzler, E., Deb, K., Thiele, L.: Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evolutionary Computation 8, 173–195 (2000)
A Cooperative Coevolution UMDA for the Machine-Part Cell Formation Qingbin Zhang1, Bo Liu2, Boyuan Ma1, Song Wu1, and Yuanyuan He1 1
Shijiazhuang Institute of Railway Technology, Shijiazhuang 050041, China 2 Hebei Academy of Sciences, Shijiazhuang 050081, China
[email protected]
Abstract. The machine-part cell formation is an NP-complete combinatorial optimization problem in cellular manufacturing systems. Past research has shown that although the genetic algorithm (GA) can obtain high-quality solutions, a special selection strategy, crossover and mutation operators, as well as the related parameters, must be defined in advance to solve the problem efficiently and flexibly. Estimation of Distribution Algorithms (EDAs) can obtain the same or better solutions with fewer operators and parameters, but they need more function evaluations than the GA. In this paper, a Cooperative Coevolution UMDA is proposed to solve the machine-part cell formation problem. Simulation results on six well-known problems show that the Cooperative Coevolution UMDA can solve the machine-part cell formation problem more effectively and efficiently.

Keywords: UMDA, cooperative coevolution, machine-part cell formation, grouping efficacy.
1 Introduction

One fundamental problem in cellular manufacturing systems is to identify the part families and machine groups and consequently to form manufacturing cells; this is known as machine-part cell formation or manufacturing cell design. The machine-part cell formation is an NP-complete combinatorial optimization problem, so it is appropriate to adopt evolutionary algorithms to obtain good solutions. Joines, Culbreth and King [1] developed an integer programming model and used a GA to solve the machine-part cell formation problem. Gonçalves and Resende [2] developed an approach that combines a local search heuristic with a GA. Brown and Sumichrast [3] proposed CF-GGA, a grouping genetic algorithm for the machine-part cell formation problem. Salehi and Tavakkoli-Moghaddam [4] also proposed an approach using a GGA to solve the machine-part cell formation. Zhang, Liu, Bi et al. [5] adopted UMDA [6] and EBNA [7], two kinds of EDAs, to solve the machine-part cell formation problem. All these researchers have concluded that GAs and EDAs can obtain high-quality solutions to the problems in the literature. However, past research has also shown that the performance of the GA for the machine-part cell formation depends heavily on the encoding method and the determination

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 407–414, 2010. © Springer-Verlag Berlin Heidelberg 2010
of selection, crossover and mutation operators as well as the choice of related parameters; it thus needs prior knowledge about the problem at hand, otherwise extensive experiments have to be done to choose an effective configuration of the operators and parameters for the GA. EDAs can obtain the same or higher grouping efficacy with fewer operators and parameters, but they need a much larger population size to build the probabilistic model accurately [8], which means more function evaluations are required to solve the problems. The Cooperative Coevolution Genetic Algorithm (CCGA) was introduced by Potter and De Jong [9] as a promising framework for tackling complex problems with high dimensions. CCGA employs the divide-and-conquer technique and subdivides an original population into subpopulations with fewer parameters. Research has shown that the CCGA can converge more rapidly and accurately to the optimal results [10]. In CCGA, each subpopulation is evolved with traditional GA methods. In this paper, we propose a Cooperative Coevolution UMDA (CCUMDA) in which each subpopulation is evolved by the univariate marginal distributions of the selected individuals, and we adopt the CCUMDA to solve the machine-part cell formation. Experimental results compared with those of the UMDA are reported. The remainder of this paper is organized as follows: the machine-part cell formation problem is described in Section 2; the principle of the CCUMDA is introduced in Section 3; the performance on six well-known problems from the literature is shown in Section 4; finally, conclusions are drawn in Section 5.
2 The Machine-Part Cell Formation Problem

2.1 Problem Definition

At the conceptual level, the machine-part cell formation problem can be represented by a binary machine-part incidence matrix. Consider a cell formation problem with m machines, n parts and k cells; the machine-part incidence matrix is a zero-one matrix of order m×n in which an element a_ij has the value 1 if part j needs processing on machine i, and the value 0 otherwise. At the same time, the model must fulfill the following constraints [1]:

x_il = 1 if machine i is assigned to cell l, and 0 otherwise    (1)

y_jl = 1 if part j is assigned to part family l, and 0 otherwise    (2)

Σ_{l=1}^{k} x_il = 1,   i = 1, 2, …, m    (3)

Σ_{l=1}^{k} y_jl = 1,   j = 1, 2, …, n    (4)
In order to determine the utilization of machines and the inter-cell movement of parts, previous research has mainly focused on block diagonalization of the given machine-part incidence matrix. When the columns and rows are arranged in the order corresponding to the groups identified by a solution, the incidence matrix can be evaluated to determine the performance of that solution [3]. The best solutions are those that contain a minimal number of voids (zeros inside the diagonal blocks) and a minimal number of exceptional elements (ones outside the diagonal blocks).

2.2 Measure of the Performance
The most widely used performance measure for the machine-part cell formation problem is the grouping efficacy [11], which can be defined as:

Γ = (1 − e_0/e) / (1 + e_v/e) = (e − e_0) / (e + e_v),    (5)

where e is the number of ones in the incidence matrix, e_0 is the number of exceptional elements, and e_v is the number of voids in the diagonal blocks. Given the definition of the grouping efficacy and the assignment variables above, Γ can be written in detail as follows:

Γ = [ Σ_{l=1}^{k_max} Σ_{j=1}^{n} Σ_{i=1}^{m} y_jl x_il a_ij ] / [ Σ_{j=1}^{n} Σ_{i=1}^{m} a_ij + Σ_{l=1}^{k_max} (Σ_{j=1}^{n} y_jl)(Σ_{i=1}^{m} x_il) − Σ_{l=1}^{k_max} Σ_{j=1}^{n} Σ_{i=1}^{m} y_jl x_il a_ij ].    (6)
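Equation (6) is mechanical to evaluate. The following NumPy sketch computes Γ from an incidence matrix and the two assignment vectors; the function and argument names are illustrative.

```python
import numpy as np

def grouping_efficacy(a, machine_cell, part_family):
    """Gamma = (e - e0)/(e + ev) per Eqs. (5)-(6); machine_cell[i] and
    part_family[j] give the cell index of machine i and part j."""
    a = np.asarray(a)
    blocks = np.asarray(machine_cell)[:, None] == np.asarray(part_family)[None, :]
    e = a.sum()                 # all ones in the incidence matrix
    e_in = a[blocks].sum()      # ones inside the diagonal blocks
    e0 = e - e_in               # exceptional elements
    ev = blocks.sum() - e_in    # voids
    return (e - e0) / (e + ev)
```

A perfectly block-diagonal matrix gives Γ = 1; every exceptional element or void pushes Γ below 1, which is exactly the trade-off the measure is designed to capture.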
3 Cooperative Coevolution UMDA

3.1 UMDA
The UMDA was proposed by Mühlenbein and Paaß [6]. UMDA uses the simplest model to estimate the joint probability distribution of the selected individuals at each generation. This joint probability distribution is factorized as a product of independent univariate marginal distributions:

p_t(x) = Π_{i=1}^{n} p_t(x_i) = Π_{i=1}^{n} p(x_i | D_{t−1}^{se}).    (7)

Usually each univariate marginal distribution is estimated from marginal frequencies:

p(x_i | D_{t−1}^{se}) = Σ_{j=1}^{N} δ_j(X_i = x_i | D_{t−1}^{se}) / N,    (8)

where

δ_j(X_i = x_i | D_{t−1}^{se}) = 1 if X_i = x_i in the j-th case of D_{t−1}^{se}, and 0 otherwise.    (9)

A pseudo-code for the UMDA algorithm is as follows.

UMDA
  D_0 ← Generate M individuals to form the initial population
  Repeat for t = 1, 2, … until the stopping criterion is met
    D_{t−1}^{se} ← Select N < M individuals from D_{t−1} according to the selection method
    Estimate the joint probability distribution
      p_t(x) = p(x | D_{t−1}^{se}) = Π_{i=1}^{n} p_t(x_i) = Π_{i=1}^{n} Σ_{j=1}^{N} δ_j(X_i = x_i | D_{t−1}^{se}) / N
    D_t ← Sample M individuals from p_t(x) to form the new population

3.2 Cooperative Coevolution Genetic Algorithm
Cooperative coevolution tries to simplify the search space of a problem by breaking the structure of a candidate solution into subcomponents, each evolved in a separate subpopulation, so it has been proposed as a promising framework for tackling high-dimensional optimization problems [12]. The fitness of an individual in each subpopulation is evaluated by combining it with the best individuals of the other subpopulations to form what is called a context vector [13]. This context vector contains all the parameters required by the objective function and is fed into the objective function for fitness evaluation. Obviously, this technique is only effective when there is limited interaction between the parameters. The original CCGA framework follows these steps [9]:

CCGA
1. Divide the parameters of the objective function into some low-dimensional subcomponents and generate each initial subpopulation.
2. Optimize each subpopulation in a round-robin fashion using traditional GA operators.
3. Run the evolutionary process until the stopping criterion is satisfied.
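The context-vector evaluation described above can be sketched as follows. Here `objective` consumes the full flattened parameter vector and `best_others` holds the current best individual of every subpopulation; both names are illustrative rather than taken from the paper.

```python
def evaluate_with_context(individual, sub_idx, best_others, objective):
    """Replace slot `sub_idx` of the context vector with `individual`,
    flatten the result, and score it with the objective function."""
    context = list(best_others)
    context[sub_idx] = individual
    flat = [v for part in context for v in part]
    return objective(flat)
```

Because only one slot varies per evaluation, an individual's fitness measures how well it cooperates with the current best partners, which is the core assumption (limited parameter interaction) noted above.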
3.3 Cooperative Coevolution UMDA
In CCGA, each subcomponent is optimized with a traditional GA; in fact, it can be optimized with any other evolutionary algorithm. In this paper, we propose a Cooperative Coevolution UMDA (CCUMDA) which combines the efficiency of the UMDA with the effectiveness of the CCGA. In CCUMDA, the initial subpopulations are generated from the initial probability distribution, and then each subpopulation is evolved by the univariate marginal distributions of the selected individuals until the stopping criterion is met. A pseudo-code for the CCUMDA algorithm is as follows.

CCUMDA
  Initialize all subpopulations using the initial probability distribution
  Repeat for t = 1, 2, … until the stopping criterion is met
    Evaluate each subpopulation combined with the best individuals of the other subpopulations
    Select N < M individuals in each subpopulation according to the selection method
    Estimate the joint probability distribution of each subpopulation
    Sample M individuals to form each new subpopulation
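As a concrete illustration of the UMDA loop that each CCUMDA subpopulation runs, here is a minimal binary UMDA on a toy OneMax objective (not the integer-valued cell-formation encoding). Truncation selection keeps the top half, matching the experimental setup below; clipping the marginals away from 0/1 is a common safeguard against premature convergence, not something prescribed by the paper.

```python
import numpy as np

def umda(fitness, n_bits, pop_size=100, n_select=50, n_gen=50, seed=0):
    """Each generation: sample from the univariate marginals (Eq. (7)),
    select the best N, and re-estimate the marginal frequencies (Eq. (8))."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                      # initial marginals
    best, best_fit = None, float("-inf")
    for _ in range(n_gen):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        fits = np.array([fitness(ind) for ind in pop])
        if fits.max() > best_fit:
            best_fit, best = int(fits.max()), pop[fits.argmax()].copy()
        selected = pop[np.argsort(fits)[-n_select:]]  # truncation selection
        p = np.clip(selected.mean(axis=0), 0.02, 0.98)
    return best, best_fit

best, fit = umda(lambda x: int(x.sum()), n_bits=30)
```

In CCUMDA, the same loop runs once per subpopulation per generation, with the fitness call replaced by the context-vector evaluation of Section 3.2.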
4 Experiments and Results 4.1 Individual Representation
In [1] and [5], each individual is represented as a vector of m + n integer variables as follows: individual → ( x1 , x2 ,..., xm , y1 , y2 ,..., yn ) . 14243 14243 machines
(10)
parts
The first m variables represent the machines while the last n variables are associated with the parts. Value range of each variable is from 1 to kmax , where kmax represents an upper bound on the number of the cells. In our proposed CCUMDA, the population is divided into two subpopulations, the first subpopulation represents machines and the second subpopulation represents the parts. Each individual in subpopulation 1 and subpopulation 2 can be represented as follows:
individual in subpopulation 1 -> (x_1, x_2, ..., x_m)
individual in subpopulation 2 -> (y_1, y_2, ..., y_n).    (11)
Q. Zhang et al.
4.2 Experimental Results
To demonstrate the performance of the CCUMDA on the machine-part cell formation problem, we use the UMDA and the CCUMDA to solve six well-known problems collected from the literature, and compare the grouping efficacy obtained by the CCUMDA with that of the UMDA. Experimental results are presented in Table 1 and Table 2. In our experiments, the initial population is generated randomly with a uniform distribution; truncation selection is used, with the top 50% of individuals in the population selected to construct the probabilistic model [14]. At the same time, the elitist individual is copied to the next generation. The number of cells k is set equal to the best known number of cells determined by other algorithms. All results of the UMDA and the CCUMDA are averaged over 10 runs. From Table 1 we can see that, for the upper four relatively simple problems, the CCUMDA obtains the same grouping efficacy as the UMDA. For the Stanfel problem, both algorithms reach the same maximum grouping efficacy, and the CCUMDA achieves a higher mean grouping efficacy. For the Kumar et al. problem, the CCUMDA obtains both a higher maximum and a higher mean grouping efficacy than the UMDA.

Table 1. Grouping efficacy of the UMDA and CCUMDA
Matrix problem                 size    k   e    UMDA                        CCUMDA
                                                MAX     MIN     MEAN        MAX     MIN     MEAN
Simple Chan&Miller             10x15   3   46   0.9200  0.9200  0.9200      0.9200  0.9200  0.9200
Chandrasekharan&Rajagopalan    8x20    3   61   0.8525  0.8525  0.8525      0.8525  0.8525  0.8525
Srinivasan                     16x30   4   116  0.6783  0.6783  0.6783      0.6783  0.6783  0.6783
Burbidge                       20x35   4   136  0.7571  0.7571  0.7571      0.7571  0.7571  0.7571
Stanfel                        14x24   5   61   0.7051  0.5862  0.6740      0.7051  0.6706  0.6858
Kumar et al.                   20x23   5   113  0.4658  0.4333  0.4572      0.4800  0.4552  0.4695
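The grouping efficacy used in Table 1 follows Kumar and Chandrasekharan [11]: with e operations (ones) in the machine-part matrix, e_out exceptional elements (ones outside the diagonal blocks) and e_v voids (zeros inside the blocks), the efficacy is Gamma = (e - e_out) / (e + e_v). A minimal sketch of the computation follows; the small 3x4 example matrix is an assumption, not one of the benchmark problems.

```python
def grouping_efficacy(matrix, machine_cell, part_cell):
    """Gamma = (e - e_out) / (e + e_v): e = number of ones, e_out = ones
    outside the diagonal blocks, e_v = zeros inside the blocks."""
    e = e_out = e_v = 0
    for i, row in enumerate(matrix):
        for j, a in enumerate(row):
            inside = machine_cell[i] == part_cell[j]
            if a == 1:
                e += 1
                if not inside:
                    e_out += 1
            elif inside:
                e_v += 1
    return (e - e_out) / (e + e_v)

# assumed small example: a perfect block-diagonal structure gives Gamma = 1
m = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1]]
print(grouping_efficacy(m, [1, 1, 2], [1, 1, 2, 2]))  # 1.0
```

Each candidate cell assignment from the two subpopulations is scored with this function, so higher values in Table 1 mean tighter, cleaner cells.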
From Table 2 it can be seen that, for all the experimental problems, the CCUMDA needs fewer evolution generations than the UMDA with the same population size.
Table 2. Evolution generations required by the UMDA and CCUMDA
problem                        popsize   UMDA                CCUMDA
                                         MAX   MIN   MEAN    MAX   MIN   MEAN
Simple Chan&Miller             500       22    16    18.9    9     6     7.5
Chandrasekharan&Rajagopalan    500       20    17    18.7    11    7     9.5
Srinivasan                     1500      38    30    34.5    22    15    17.3
Burbidge                       1000      42    30    34.3    18    14    15.6
Stanfel                        2000      40    32    38.4    22    15    17.8
Kumar et al.                   2000      50    41    44.2    26    18    20.8
5 Conclusion

In this paper, a Cooperative Coevolution UMDA (CCUMDA), intended to combine the efficiency of the UMDA with the effectiveness of the CCGA, is proposed. Experimental results on six well-cited machine-part cell formation problems show that the proposed CCUMDA achieves the same or higher grouping efficacy with fewer evolution generations.
Acknowledgments

This work was supported by the Natural Science Foundation of Hebei Province (F2008001166) and the Mentoring Programs of Scientific and Technological Research & Development in Hebei Province (072135133).
References

1. Joines, J.A., Culbreth, C.T., King, R.E.: Manufacturing cell design: an integer programming model employing genetic algorithms. IIE Transactions 28, 69–85 (1996)
2. Gonçalves, J.F., Resende, M.G.C.: An evolutionary algorithm for manufacturing cell formation. Computers & Industrial Engineering 47, 247–273 (2004)
3. Brown, E.C., Sumichrast, R.T.: CF-GGA: a grouping genetic algorithm for the cell formation problem. International Journal of Production Research 39, 3651–3670 (2001)
4. Salehi, M., Tavakkoli-Moghaddam, R.: A grouping genetic algorithm for the cell formation problem. International Journal of Natural and Engineering Sciences 3(1), 67–71 (2009)
5. Zhang, Q., Liu, B., Bi, L., et al.: Estimation of Distribution Algorithms for the Machine-Part Cell Formation. In: Advances in Computation and Intelligence. LNCS, vol. 5821, pp. 82–91. Springer, Heidelberg (2009)
6. Mühlenbein, H., Paaß, G.: From Recombination of Genes to the Estimation of Distributions I. Binary Parameters. In: Ebeling, W., Rechenberg, I., Voigt, H.-M., Schwefel, H.-P. (eds.) PPSN 1996. LNCS, vol. 1141, pp. 178–187. Springer, Heidelberg (1996)
7. Etxeberria, R., Larrañaga, P.: Global optimization using Bayesian networks. In: Rodriguez, A.A.O., Ortiz, M.R.S., Hermida, R.S. (eds.) Second Symposium on Artificial Intelligence, pp. 332–339 (1999)
8. Chen, Y.-p., Lim, M.-H.: Linkage in Evolutionary Computation. Springer, Heidelberg (2008)
9. Potter, M., De Jong, K.: A cooperative coevolutionary approach to function optimization. In: Davidor, Y., Männer, R., Schwefel, H.-P. (eds.) PPSN 1994. LNCS, vol. 866, pp. 249–257. Springer, Heidelberg (1994)
10. Krasnogor, N., Melián-Batista, B., Moreno-Pérez, J.A., et al.: Nature Inspired Cooperative Strategies for Optimization (NICSO 2008). Studies in Computational Intelligence. Springer, Heidelberg (2009)
11. Kumar, C.S., Chandrasekharan, M.P.: Grouping efficacy: a quantitative criterion for goodness of block diagonal forms of binary matrices in group technology. International Journal of Production Research 28, 223–243 (1990)
12. Yang, Z., Tang, K., Yao, X.: Large scale evolutionary optimization using cooperative coevolution. Information Sciences 178, 2985–2999 (2008)
13. van den Bergh, F., Engelbrecht, A.P.: A cooperative approach to particle swarm optimization. IEEE Transactions on Evolutionary Computation 8(3), 225–239 (2004)
14. Lima, C.F., Pelikan, M., Goldberg, D.E., et al.: Influence of selection and replacement strategies on linkage learning in BOA. In: IEEE Congress on Evolutionary Computation CEC 2007, Singapore, pp. 1083–1090 (2007)
Hardware Implementation of RBF Neural Network on FPGA Coprocessor Zhi-gang Yang and Jun-lei Qian School of Computer and Automation Engineering, Hebei Polytechnic University, Tangshan 063009, China
[email protected] [email protected]
Abstract. The computing core of an FPGA is a complex programmable logic array, which differs from the sequential instruction execution of a traditional computer. It changes the traditional computing pattern and provides a new way to realize high-speed computation. Hardware implementation is very important for the computational speed of neural networks (NNs), especially NNs with learning ability implemented in integrated hardware. This paper first presents the design of an FPGA-based coprocessor, a hardware platform well suited to the implementation of NNs. It then describes the hardware implementation of an RBF (Radial Basis Function) neural network and analyzes the performance and problems of the system. According to the experimental data, the computing speed of the RBF neural network implemented in hardware on the FPGA coprocessor is much higher than that of the same computation run on a PC. Keywords: FPGA coprocessor; RBF neural network; VHDL; hardware implementation.
1 Introduction

Neural networks are now widely used, and the model of a neural network can be simulated on a computer in the form of an algorithm. A computer with the von Neumann architecture has a CPU that can execute only one instruction per instruction cycle, with the program determining the order of execution [1]. A neural network, in contrast, involves a great deal of parallel computation and distributed storage, so executing its programs sequentially on a computer is the bottleneck that limits its speed [2, 3]. With the development of the capacity, performance and efficiency of the FPGA (Field Programmable Gate Array), its application in communication, digital signal processing and industrial control systems is steadily increasing. An FPGA is built from many internal RAMs and arrays of units that can perform numerical and logic operations, and it can easily change the connection structure among its internal logic units through an internal programmable bus. It can therefore readily realize the parallel computation and distributed storage that match the characteristics of neural networks [4]. The appearance of the FPGA thus provides an effective way to realize neural networks in hardware [5, 6, 7, 8].

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 415–422, 2010. © Springer-Verlag Berlin Heidelberg 2010
2 Design of the FPGA Coprocessor Hardware

The overall structure of the coprocessor is shown in Figure 1. With the Xilinx Virtex-4 XC4VLX25 FPGA as the core of the system, the design adopts multiple buses: a local bus, an SRAM bus, a FLASH bus, a system configuration bus and a user interface bus.

Fig. 1. The block diagram of the coprocessor based on FPGA
The XC4VLX25 is a Xilinx FPGA that provides 48 XtremeDSP hardware slices, about 25 thousand logic cells and 1296 Kbit of block RAM. Its 668-pin BGA package offers 448 external I/O pins, which lets engineers construct a multi-bus system. The XCF32P chip configures the FPGA at power-on; alternatively, the FPGA's internal code can be updated via the external download interface. Algorithm constants and data tables can be stored in the external FLASH, an Intel TE28F320 configured in 2M x 16 bit mode, which has a 90 ns read/write time and supports in-system programming. Temporary data can be stored in the external SDRAM, an HY57V64 (4M x 16 bit) with a 20 ns read/write time. High-speed variables are stored in and read from the SRAM inside the FPGA. The coprocessor exchanges data with the CPU over the PMC (or PCI) bus, and the transition from the PMC (or PCI) bus to the FPGA local bus is implemented with a PLX PCI9030 bridge chip.
3 Implementation of RBF Neural Networks on the Coprocessor

3.1 The RBF Neural Network Algorithm

An RBF (Radial Basis Function) neural network is a feed-forward network with three layers, the middle one being the hidden (connotative) layer [9]. Its structure is shown in Fig. 2.
Fig. 2. The structure of RBF neural network
The algorithm begins with the feed-forward pass. For the hidden layer:

net_i = ||u - c_i|| = sqrt( sum_j (u_j - c_ij)^2 ),    (1)
q_i = R(net_i).    (2)

For the output layer (taking no account of the threshold value):

y_k = sum_i w_ki q_i,    (3)

where ||.|| is the Euclidean distance, R(.) is the Gauss function, c_i is the center vector of hidden unit i, w_ki is the weighting coefficient of the output layer, u is the input and y_k is the output.

The RBF training algorithm consists of two parts: learning with a teacher and learning without a teacher. For learning without a teacher, considering the ability to learn on line, the RBF network adopts a dynamic recursive center-vector algorithm with the following steps:

1) Give random initial centers c_i (1 <= i <= P) and a learning rate a (0 < a < 1).
2) Compute the distance to each center at step k:
   d_i = ||u - c_i||.    (4)
3) Find the minimum distance:
   d_min_nn = min_i d_i.    (5)
4) Preferentially adjust the center vectors:
   c_i(k) = c_i(k-1),  1 <= i <= P, i != min_nn,    (6)
   c_min_nn(k) = c_min_nn(k-1) + a (u(k) - c_min_nn(k-1)).    (7)
5) Recompute the distance of unit min_nn:
   d_min_nn = ||u - c_min_nn||.    (8)

For learning with a teacher, considering the difficulty of implementation on the FPGA, the LMS algorithm is adopted. The weight-regulation rule is:

w_ki(k) = w_ki(k-1) + b e_k(k) q_i^p / ||q^p||^2,  where b is a constant, 0 < b < 2.    (9)
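Equations (1)-(9) can be captured in a short behavioural reference model, useful for checking the hardware against software. This is a sketch, not the paper's VHDL; the Gaussian width sigma and the toy data are assumptions.

```python
import math

def rbf_forward(u, centers, weights, sigma=1.0):
    # Eq. (1)-(3): Euclidean distances, Gaussian activations q_i, output y
    q = [math.exp(-sum((uj - cj) ** 2 for uj, cj in zip(u, c)) / (2 * sigma ** 2))
         for c in centers]
    return sum(w * qi for w, qi in zip(weights, q)), q

def update_nearest_center(u, centers, a=0.5):
    # Eq. (4)-(8): move only the nearest center toward the input (0 < a < 1)
    d = [math.dist(u, c) for c in centers]
    i = d.index(min(d))
    centers[i] = [c + a * (uj - c) for c, uj in zip(centers[i], u)]
    return i

def lms_update(weights, q, error, b=0.5):
    # Eq. (9): normalized LMS step on the output weights (0 < b < 2)
    norm2 = sum(qi * qi for qi in q) or 1.0
    return [w + b * error * qi / norm2 for w, qi in zip(weights, q)]

centers = [[0.0, 0.0], [2.0, 2.0]]
weights = [0.0, 0.0]
u, target = [0.1, -0.1], 1.0
y, q = rbf_forward(u, centers, weights)
weights = lms_update(weights, q, target - y)   # supervised step
update_nearest_center(u, centers)              # unsupervised step
```

One pass performs exactly the two phases the hardware alternates between: a forward computation followed by center and weight regulation.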
3.2 Implementation of the RBF Neural Network Algorithm on FPGA

The VHDL hardware description language is used to design the internal algorithm of the coprocessor, following a top-down design flow. According to the top-down idea [10], the RBF neural network is first divided into several basic units: a buffer storage unit, an address generation unit, a center-vector unit, a weighting-coefficient unit, hidden-layer neural cell units, output neural cell units, a minimum and serial output unit, a Gauss function unit, a reciprocal unit, and a deviation and weight-regulation parameter unit.

Fig. 3. The block diagram of the hidden-layer (connotative layer) neural cell
The block diagram of the implementation of the hidden-layer neural cell is shown in Figure 3. The subtracter, multiplier and adder implement signed arithmetic, realized with the Adder Subtracter v7.0 and Multiplier v7.0 Xilinx IP cores. The control unit controls the running sequence of the algorithm. The hidden-layer unit supports both the forward computation and the backward, unsupervised center-vector learning. When the value of the forward-backward control line is "1", the neural cell works in the forward state (executing Equation 1), and the settled output is transmitted to the next unit (the minimum and serial output unit) via the output bus; otherwise, the neural cell works in the backward state (executing Equation 7), and the new center value is transmitted to the center-vector storage via the center-vector bus.
Fig. 4. The diagram of the whole structure of the hardware implementation of the RBF neural network. 1. input bus; 2, 6. address bus; 3. settled output bus of the hidden layer; 4, 8. control bus; 5. weight-regulation parameter bus; 7. input bus of the output layer; 9. output bus of the output layer.
The whole structure of the hardware implementation of the RBF neural network is shown in Figure 4. The working process of the system includes the following states. In the hidden-layer input state, the central distance of the input is computed (Equation 1). The hidden-layer neural cell with the minimum central distance is found by the minimum and serial output unit, and the serial output is exported at the same time. The statechart obtains the code of that neural cell, selects it and makes it work in the center-vector regulation pattern. At the same time, the minimum and the serial output are transmitted to the Gauss function unit to compute the input of the output layer (Equation 2); the Gauss function unit is realized by table lookup. When the computation of the Gauss function begins, the output-layer neural cells begin to compute the output of the net (Equation 3). The weighting coefficient and the input of unit No. 0 of the output layer are both taken from the output-layer input; this unit computes the norm of the input (||q^p||^2), and the reciprocal of that norm is obtained by the reciprocal unit. The division is thus turned into a multiplication, and the reciprocal function can also be realized by table lookup. In the weight-update state, the regulation parameter is computed by the deviation and weight-regulation parameter unit, and at the same time the output neural cell unit is made to work in the weight-regulation pattern (Equation 9).
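The table-lookup realization of the Gauss and reciprocal functions can be mimicked in software as follows. This is a minimal sketch of the idea; the table size and input range are assumptions, not values from the paper.

```python
import math

TABLE_SIZE = 256
X_MAX = 4.0  # assumed input range [0, X_MAX) for net = ||u - c||

# Precomputed lookup tables, as they would be stored in FPGA block RAM
GAUSS_LUT = [math.exp(-(i * X_MAX / TABLE_SIZE) ** 2) for i in range(TABLE_SIZE)]
RECIP_LUT = [1.0 if i == 0 else TABLE_SIZE / (i * X_MAX) for i in range(TABLE_SIZE)]

def lut_index(x):
    # quantize x to a table address, saturating at the last entry
    return min(int(x * TABLE_SIZE / X_MAX), TABLE_SIZE - 1)

def gauss(x):
    return GAUSS_LUT[lut_index(x)]

def reciprocal(x):
    # division replaced by one lookup plus (elsewhere) a multiplication
    return RECIP_LUT[lut_index(x)]
```

Replacing the transcendental and divide operations with block-RAM lookups is what lets every neural cell evaluate them in a fixed, short number of clock cycles.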
Fig. 5. The simulation sequence of the hardware implementation of the RBF neural networks
Using the Xilinx ISE integrated development environment [11], an RBF network with 4 inputs, 8 hidden-layer cells and one output was realized. The simulation timing of the system is shown in Figure 5; at a frequency of 60 MHz, the RBF network can perform 25 thousand computations per second. The RAM inside the FPGA is used to store the center vectors of the hidden-layer cells and the weighting coefficients of the output layer. Each RAM has an independent bus, which implements distributed data storage and suits parallel computation. All of the neural cell units compute in parallel, a structure well suited to constructing large-scale neural networks.
4 The System Test

The hardware test system consists of the FPGA coprocessor board, a MEN A15C PowerPC board, a VME crate, the host computer used to develop under the VxWorks operating system, the computers used to develop the FPGA software, and so on. The model for the system is:

y = A_1 sin(X_1 t + theta_1) + A_2 sin(X_2 t + theta_2) + e(t),
U = [sin(X_1 t), cos(X_1 t), sin(X_2 t), cos(X_2 t)],
D = [y],    (10)

where e(t) is the noise signal. The output of the RBF network realized by the FPGA coprocessor is recorded by the VxWorks user program; with t as the x-coordinate and the output as the y-axis, the identification curve is shown in Fig. 6.
Fig. 6. The discriminating curve of the model of the system (identified curve vs. aim curve)
On a general PC, the RBF network can operate about 32000 times per second, while the RBF network realized on the FPGA coprocessor hardware achieves about 282500 times per second, roughly 9 times the speed of running on the PC.
5 Conclusion

The FPGA coprocessor was installed on a MEN A15C PowerPC board and run at a frequency of 60 MHz. Testing shows that stable data communication between the PowerPC and the FPGA is achieved, and the RBF neural network algorithm converges on the training samples. The FPGA coprocessor is suitable for other algorithms as well; it provides an efficient hardware platform for overcoming the speed bottleneck of intelligent control algorithms.
Acknowledgments This research was supported in part by grants from Science and Technology Bureau of Tangshan City, Hebei Province, China (09110205a, 09110221c).
References

1. Porrmann, M., Witkowski, U., Kalte, H., Rückert, U.: Implementation of Artificial Neural Networks on a Reconfigurable Hardware Accelerator. In: 10th Euromicro Workshop on Parallel, Distributed and Network Based Processing (PDP 2002), Gran Canaria Island, Spain, January 9-11 (2002)
2. Ang, L., Qin, W., Zhancai, L., Yong, W.: Neural networks hardware implementation based on FPGA. Journal of University of Science and Technology Beijing 01, 90–94 (2007)
3. Zhengrong, P., Zhaoliang, Z., Juxiang, Z.: Hardware implementation of an artificial neural network based on SoPC. Electronic Measurement Technology 06, 116–119 (2009)
4. Li-Ge, L., Bao-Ding, Y., Zhong, H.: Reconfigurable Hardware Realization of Neural Network Based on FPGA. Journal of Henan University of Science and Technology 01, 37–40 (2009)
5. Ruilin, B., Xianming, S., Zhihui, Z.: Implementation of Fuzzy CMAC Based on FPGA. Computer Measurement & Control 04, 527–530 (2007)
6. Zhang, H.-y., Li, X., Tian, S.-f.: Simulation Line Design and Its FPGA Realization Based on BP Neural Network. Journal of Electronics & Information Technology 05, 1267–1270 (2007)
7. Maeda, Y., Tada, T.: FPGA Implementation of a Pulse Density Neural Network With Learning Ability Using Simultaneous Perturbation. IEEE Trans. Neural Networks 14(3) (May 2003)
8. Wen, Z.Z., Don, W.: An implementation of neural network based on FPGA. Radio Engineering 30(5), 57–59 (2000)
9. Lina, X.: Neural Networks Control. Publishing House of Electronics Industry, Beijing (2002)
10. Hong, G., He, H., Jiming, W.: VHDL design directory. China Machine Press, Beijing (2005)
11. Cheng, W., Xiaogang, X., Xinchao, Z.: FPGA/CPLD design tool: Xilinx ISE 5.x user manual. Posts & Telecom Press, Beijing (2003)
Prediction on Development Status of Recycle Agriculture in West China Based on Artificial Neural Network Model Fang Wang and Hongan Xiao College of Economics and Management, Sichuan Agricultural University, Ya'an, Sichuan, 625014, China
[email protected]

Abstract. Recycle agriculture in West China is a complicated, highly systematic category whose development objective is characterized by diversification, abstraction and theorization. This paper predicts the comprehensive development status of recycle agriculture in West China with a back-propagation artificial neural network (BPN), so as to provide methods and theoretical guidance for applying neural network models to the agricultural development system. Building on the comprehensive assessment index system and analytical method it constructs for recycle agriculture development, and based on the comprehensive evaluation Z values of recycle agriculture in West China for 1995-2004, the paper predicts the development status of recycle agriculture in West China with a BP neural network model implemented in MATLAB. With the help of the analytic hierarchy process and the entropy method, it concludes that corresponding measures must be taken to promote reduced resource input and resource reuse efficiency, to protect forest resources, and to reinforce the harnessing of water loss and soil erosion. Keywords: BP neural network, recycle agriculture, prediction, comprehensive assessment.
1 Introduction

An artificial neural network (ANN) is an extremely simplified model of the construction and function of the actual neural system. It features large-scale parallelism, distributed storage and processing, self-organization, self-adaptation, self-learning and fault tolerance, and is particularly applicable to problems that require the simultaneous consideration of many factors and conditions, to systems in which the interaction mechanisms between factors are still unclear and imprecise, and to fuzzy information. It can simulate complicated ecological processes and behaviors that many traditional models cannot. It has therefore attracted widespread attention from scientists, has been applied to many fields of biological and agricultural research [8], and has become one of the most dynamic leading-edge fields internationally. Neural network technology, with its high computation and learning ability, has become an important means of agricultural modernization and has been widely applied to many aspects of agricultural production [6]. People have used neural networks

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 423–429, 2010. © Springer-Verlag Berlin Heidelberg 2010
to predict or evaluate water [18], pests and diseases [7, 17], agricultural machinery ownership [2, 12], weather [10], yield [13, 14, 15], output [9], technological innovation effects [11], the circular economy [5], etc. In addition, Zhang Ying and others proposed applying neural network theory to agricultural expert system design [16]. However, this technology is not yet mature within agriculture and still needs further study; the artificial neural network algorithm can be improved to enhance its application [6].
2 BP Neural Network Model

2.1 The Fundamental Principle of BP Neural Networks

The BP network was first put forward by Rumelhart, Hinton and Williams [1]. It is a multi-layer feed-forward network with one-way transmission, and it solves the problem of learning the connection weights of hidden-layer units in a multi-layer network. The input signal is transmitted from the input nodes successively through each hidden layer and then to the output nodes; the output of each layer's nodes affects only the output of the next layer's nodes. To accelerate the convergence of network training, we can standardize the input vector and give an initial value to each connection weight. A BP neural network can be viewed as a highly nonlinear mapping from input to output, i.e. finding a mapping f that is the optimal approximation of g. The neural network combines several simple nonlinear functions and can thus approximate complicated functions [4].

2.2 Mathematical Model of BP Neural Networks

The fundamental principle of the BP neural network model in processing information is: the input signal P_i acts on the output nodes through the intermediate (hidden-layer) nodes, producing the output signal O_k after nonlinear transformation. Each training sample includes the input vector P and the desired output T. The deviation between the network output O and the desired output T is made to decrease along the gradient direction by adjusting the connection weights w_ij between the input nodes and the hidden-layer nodes, and the connection weights T_jk and threshold values between the hidden-layer nodes and the output nodes. After repeated learning and training, we determine the network parameters (weights and thresholds) corresponding to the minimum error, and training stops. The trained BP neural network can then automatically process input information within the input range and output information with minimum error after nonlinear transformation. The specific mathematical model is as follows:

(1) Transfer function: also called the activation function, it reflects the stimulus strength passed from the lower-layer input to the upper-layer node. Generally, the continuous sigmoid function with range (0, 1) is used:

f(x) = 1 / (1 + e^(-x)).    (1)
(2) Error calculation model: the function reflecting the error between the desired output and the calculated output of the network. The output error for sample k:

E_k = (1/2) sum_j (y_jk - T_jk)^2,    (2)

and the overall error over N samples:

E = (1/(2N)) sum_{k=1}^{N} E_k,    (3)

where T_jk is the desired output of node j and y_jk is its actual output.

The mathematical model of the intermediate-layer node:

O1_jk = f( sum_j w1_ij x_j ),    (4)

where O1_jk represents the output of node j when sample k is input on the intermediate layer, x_j is the input of node j, and w1_ij is the weight from the input layer to the intermediate layer. The mathematical model of the output node:

O2_jk = f( sum_i w2_ij O1_ik ),    (5)

where O2_jk represents the output of node j when sample k is input on the output layer, and w2_ij is the weight from the intermediate layer to the output layer.

(3) Modified weight:

w_ij = w_ij - mu dE/dw_ij.    (6)
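The model in Equations (1)-(6) can be exercised with a tiny numerical sketch. This is illustrative only; the 2-2-1 layout, weights, input and learning rate are assumptions, and the gradient is taken by finite differences rather than the analytic back-propagation formulas.

```python
import math

def f(x):
    # Eq. (1): sigmoid transfer function
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w1, w2):
    # Eq. (4)-(5): hidden outputs O1 and network output O2
    o1 = [f(sum(wij * xj for wij, xj in zip(row, x))) for row in w1]
    o2 = f(sum(wj * oj for wj, oj in zip(w2, o1)))
    return o1, o2

def sample_error(y, t):
    # Eq. (2) for a single output node
    return 0.5 * (y - t) ** 2

x, t = [1.0, 0.0], 1.0
w1 = [[0.5, -0.5], [0.3, 0.8]]   # input -> hidden weights
w2 = [0.2, -0.4]                 # hidden -> output weights
o1, y = forward(x, w1, w2)

# Eq. (6): nudge w2[0] against a finite-difference estimate of dE/dw
mu, eps = 0.5, 1e-6
grad = (sample_error(forward(x, w1, [w2[0] + eps, w2[1]])[1], t)
        - sample_error(y, t)) / eps
w2[0] -= mu * grad               # gradient-descent weight modification
```

One such step reduces the sample error, which is the property the BP training loop repeats until the error target is met.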
3 Prediction of the Comprehensive Development Trend of Recycle Agriculture in West China

The BP neural network is widely applied to system prediction. Here, we combine the analytic hierarchy process and the information entropy method to establish an index system. With standardized individual index values we obtain the comprehensive evaluation Z value, and with the BP neural network we make a short-term prediction of the comprehensive development trend Z of recycle agriculture in West China.

3.1 Standardized Individual Indexes

Fuzzy membership functions are used to standardize the individual index values of recycle agriculture in West China for 1995-2004. When the entropy method is applied, we translate the coordinates as x''_ij = 1 + x'_ij so as to eliminate the effects of standardization.
3.2 Comprehensive Evaluation Z Value
Due to the complexity and hierarchy of the agricultural eco-economic system, each index in the evaluation index system needs to reflect the status of recycle agriculture development from a different level and aspect. The evaluation of recycle agriculture development is therefore a comprehensive evaluation, generally calculated with the weighting function method. If the comprehensive evaluation value is denoted Z, then:
Z = sum_i w_i Z_i,    (7)
where w_i is the weight of index i and Z_i is the value of index i. The bigger the Z value, the higher the level of recycle agriculture development.

Table 1. Comprehensive evaluation Z values of recycle agriculture development in West China

Year         1995     1996     1997     1998     1999
Z-score (%)  0.19102  0.19186  0.19148  0.19446  0.19194

Year         2000     2001     2002     2003     2004
Z-score (%)  0.19321  0.19524  0.20411  0.19925  0.20374
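Equation (7) is a plain weighted sum over the standardized indexes. The sketch below reproduces it with assumed weights; the actual AHP/entropy weights and the 17 index values are not given in this excerpt.

```python
def comprehensive_z(weights, index_values):
    """Eq. (7): Z = sum_i w_i * Z_i over the standardized index values."""
    assert len(weights) == len(index_values)
    return sum(w * z for w, z in zip(weights, index_values))

# assumed toy example with three indexes whose weights sum to 1
weights = [0.5, 0.3, 0.2]
values = [0.8, 0.6, 0.4]
print(comprehensive_z(weights, values))  # approximately 0.66
```

Each yearly Z score in Table 1 is one such weighted sum over that year's standardized index values.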
3.3 Network Design
According to BP network design principles, general prediction problems can be handled by a single-hidden-layer BP network. The input vector has 17 index values, so there are 17 neurons in the input layer; the output vector has one comprehensive development index, so there is one neuron in the output layer. Multiple comparative analyses show that the network converges best with 13 neurons in the intermediate layer. Because the output lies in the interval [0, 1], the transfer function of the neurons in the intermediate layer is the tangent sigmoid tansig and the transfer function of the output-layer neuron is the logarithmic sigmoid logsig, so as to meet the demand of the network output [3]. The following code is used to design the network:

threshold = [zeros(17,1) ones(17,1)];
net = newff(threshold, [13,1], {'tansig','logsig'}, 'trainlm');

3.4 Network Training
The trained network is used for comprehensive index prediction. Considering the complicated network structure and large number of neurons, the training times and learning rate are moderately increased; after many trials, the training parameters are set as shown in Table 2.
Table 2. Training parameters

Training times   Training target   Learning rate
15000            0.000001          0.05
The training result is:

TRAINLM, Epoch 0/15000, MSE 0.517762/1e-006, Gradient 2.15874/1e-010
TRAINLM, Epoch 13/15000, MSE 9.45943e-007/1e-006, Gradient 2.02021e-005/1e-010
TRAINLM, Performance goal met.

After 13 training epochs the network error meets the requirement; the result is shown in Figure 1.
Fig. 1. Training result
Fig. 2. Test fitting
3.5 Network Test and Prediction
The trained network is tested against actual values to see whether it can be used to predict future values. Here the input values are the standardized values of all indexes in 1995-2002, the output is the Z value in 1996-2003, and the actual Z values in 1996-2003 serve as the check values. The comparison of the predicted and the actually calculated Z values for 1997-2004 shows a good fit (see Figure 2). The maximum error, 0.00082, occurs in 1997, and the minimum error, 0.000411, in 2004. With such small errors, the network can be used to predict the comprehensive development index of the system.

3.6 Predicted Results
The input values are the standardized index values in 1996-2003 and the output is the Z value in 1997-2004; the Z value in 2005 is thus predicted. We then use the standardized index values in 1997-2004 and the output Z values in 1998-2005 to predict the Z value in 2006, and rolling-predict the comprehensive development index Z through 2010.
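The rolling scheme described above (predict the next year, append the prediction, slide the window forward) can be sketched as follows. The scalar stand-in `model` is an assumption replacing the trained 17-input BP network, purely to show the feedback loop.

```python
def rolling_predict(model, known_values, n_future):
    """One-step-ahead rolling prediction: each new prediction is appended to
    the history and fed back in, as in predicting Z for 2005 through 2010."""
    history = list(known_values)
    predictions = []
    for _ in range(n_future):
        z_next = model(history[-1])  # predict next year from the latest value
        predictions.append(z_next)
        history.append(z_next)       # slide the window forward
    return predictions

# stand-in model: pretend the comprehensive index grows by 2% a year
model = lambda z: z * 1.02
print(rolling_predict(model, [0.20374], 6))
```

The feedback step is what makes the forecast "rolling": later predictions are built on earlier ones, so errors compound, which is why the paper only predicts six years ahead.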
Fig. 3. Predicted results
4 Conclusions Through the prediction on the recycle agriculture system in West China in the coming six years, we can find that the comprehensive development of the agriculture in West China takes on an overall trend of gradual increase. However, from the absolute Z value, the recycle agriculture in West China is still in the stage of relatively low level. The comprehensive evaluation results of the development of the recycle agriculture in West China show that the status of the recycle agriculture in West China has been improving since 1995. Among these years, the period from 1995 to 1998 saw a stage of steady development of the recycle agriculture in West China, the period from 1999 to 2001 saw a stage of checked development, and the period from 2002 to 2004 saw a stage of improvement. The investigation of the classified indexes of the recycle agriculture in West China shows that the three indexes of resource decrement input, reclamation re-use and environmental security are far below the socio-economic development, which not only confirms that the development of the agriculture in West China has always been emphasizing the economic benefits and ignoring the resource environmental capacity
and ecological efficiency, but also shows that the development of recycle agriculture in West China is restricted to a large extent by this unbalanced development. In order to promote the development of recycle agriculture in West China, measures shall be taken to improve the resource decrement input and the reclamation re-use efficiency of resources, to protect forest resources, and to strengthen the control of water loss and soil erosion.
An Improved Particle Swarm Optimization Algorithm for Vehicle Routing Problem with Simultaneous Pickup and Delivery Rong Wei1, Tongliang Zhang2, and Hui Tang2 1
1 College of Science, Hebei Polytechnic University, Tangshan, Hebei 063009, P.R. China
2 Research Institutes of Highway, Ministry of Transport, Beijing 100088, P.R. China
[email protected], [email protected], [email protected]
Abstract. The Vehicle Routing Problem with Simultaneous Pickup and Delivery (VRPSPD) is a variant of the Capacitated Vehicle Routing Problem (CVRP) in which clients require both pickup and delivery services. This paper proposes an improved particle swarm optimization algorithm based on multiple social structures for solving the VRPSPD. The decoding of a single particle in the swarm consists of two parts: the first has m dimensions (m-D) for m customers, and the second comprises 2n dimensions (2n-D) for n vehicles, representing the vehicle route orientations. Each particle is transformed into a customer list and a vehicle matrix. A benchmark data set is used to validate the performance of the proposed algorithm. Compared with prior works, the promising results indicate that the proposed algorithm has high potential as a powerful tool for solving the VRPSPD and other variants of the vehicle routing problem. Keywords: Vehicle routing problem, simultaneous pickup and delivery, particle swarm optimization, multiple social structures, logistics.
1 Introduction

The Vehicle Routing Problem with Simultaneous Pickup and Delivery (VRPSPD) is a variant of the Capacitated Vehicle Routing Problem (CVRP) in which clients require both pickup and delivery services. In practice, customers may have both a delivery and a pickup demand; for example, empty bottles must be returned in the soft drink industry. Serving the delivery and the pickup separately is undesirable, because a handling effort is necessary for both activities and this effort may be considerably reduced by a simultaneous operation. Applications of the VRPSPD are found especially within the reverse logistics context: companies have become interested in gaining control over the whole lifecycle of their products, especially when environmental issues are involved, and are increasingly faced with the task of managing the reverse flow of finished goods or raw materials.

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 430–436, 2010. © Springer-Verlag Berlin Heidelberg 2010
The VRPSPD is clearly NP-hard, since it contains the CVRP as the special case in which all pickup demands are zero. Min [1] first proposed the VRPSPD in 1989. Recently, studies on the VRPSPD have increased rapidly with the development of reverse logistics and retail distribution. Dethloff [2] proposed a mathematical formulation for the VRPSPD from the point of view of reverse logistics, minimizing the total traveled distance subject to the maximum capacity constraint of the vehicle. The development of modern heuristics provides strong techniques for such optimization problems, and several heuristic algorithms have been proposed for the VRPSPD, such as local search heuristics [3], a tabu search algorithm [4] and genetic algorithms [5, 6]. Angelelli and Mansini developed an exact algorithm based on branch-and-price for the VRPSPD with time windows [7]. Particle swarm optimization (PSO) is a population-based stochastic optimization technique developed by Kennedy and Eberhart [8], motivated by group organism behavior such as bird flocking and fish schooling. Applications of PSO to the VRPSPD are still rare; Ai and Kachitvichyanukul [9] proposed a random-key-based solution representation and decoding method for implementing PSO for the VRPSPD. This paper presents a mathematical formulation for the VRPSPD and proposes an improved PSO algorithm for solving it.
2 Material and Methods

2.1 VRPSPD Mathematical Formulation

The VRPSPD can be formulated on a graph G = (V, A), where V = {v0, …, vm} is the vertex set and A = {(vi, vj)} is the arc set. The distance d_ij and the travel time t_ij are the parameters of arc a_ij = (vi, vj). Vertex v0 is the depot at which n vehicles are stationed. Each customer corresponds to a vertex vi and has a non-negative pickup quantity p_i and delivery quantity q_i. Each vehicle has fixed parameters: a fixed cost f, a capacity Q and a service duration limit D. The vehicle routes in the VRPSPD must satisfy the following restrictions: (1) the total routing cost is minimized; (2) the total duration of each route does not exceed the limit D of the serving vehicle; (3) the vehicle load at any point of a route does not exceed the capacity Q of the vehicle; (4) each vertex (customer) is visited exactly once by one vehicle; and (5) each vehicle starts from the depot v0 and returns to it. The mathematical formulation of the VRPSPD is presented below:

Min C = f ∑_{k=1}^{n} ∑_{j=1}^{m} x_{0jk} + g ∑_{i=0}^{m} ∑_{j=1}^{m+1} ∑_{k=1}^{n} d_{ij} x_{ijk},   (1)

where x_{ijk} indicates whether the arc (i, j) is traversed by vehicle k:

x_{ijk} = 1 if vehicle k traverses arc (i, j), and x_{ijk} = 0 otherwise.   (2)
subject to

∑_{i=0}^{m} ∑_{k=1}^{n} x_{ijk} = 1, for 1 ≤ j ≤ m,   (3)

∑_{j=0}^{m} x_{jik} = ∑_{j=1}^{m+1} x_{ijk}, for 1 ≤ i ≤ m, 1 ≤ k ≤ n,   (4)

y_{ijk} ≤ x_{ijk} Q, for 0 ≤ i ≤ m, 1 ≤ j ≤ m+1, 1 ≤ k ≤ n,   (5)

∑_{j=1}^{m} y_{0jk} = ∑_{j=1}^{m} q_j ∑_{i=0}^{m} x_{ijk}, for 1 ≤ k ≤ n,   (6)

where y_{ijk} denotes the load of vehicle k when it traverses arc (i, j), and

∑_{i=0}^{m} y_{ijk} + (p_j − q_j) ∑_{i=0}^{m} x_{ijk} = ∑_{i=1}^{m+1} y_{jik}, for 1 ≤ j ≤ m, 1 ≤ k ≤ n.   (7)
The objective function, Eq. (1), shows that this model minimizes the routing cost, which consists of a fixed transportation cost and a variable cost. Eqs. (3) and (4) ensure that every customer is visited by exactly one vehicle. Vehicle load constraints are given in (5)-(7). Constraint (5) states that if vehicle k serves customer j immediately after customer i (x_{ijk} = 1), the corresponding load y_{ijk} can be at most the vehicle capacity Q; otherwise y_{ijk} = 0 when x_{ijk} = 0. Constraint (6) ensures that all customer deliveries are loaded at the depot: the load of a vehicle when it departs from the depot equals the total delivery demand of the customers on its route. Constraint (7) balances the load of a vehicle after it serves a customer.

2.2 Particle Swarm Optimization Algorithm for VRPSPD

PSO is a population-based optimization algorithm; for details of the basic PSO algorithm, refer to [10]. Here we use an improved PSO algorithm with multiple social learning structures [11] to solve the VRPSPD. In addition to the global best (gbest) and the personal best (pbest), a third guide is involved: the local best (lbest), the best position among several adjacent particles. The decoding of a particle in the swarm is the key element for an effective implementation of PSO for the VRPSPD. It consists of two parts: the first part has m dimensions, one per customer; the second part represents the vehicle route orientations. The route orientation of a vehicle is defined as a point, with coordinates (x, y), that represents the vehicle's service area; hence this part occupies 2n dimensions of a particle for n vehicles. The details of the PSO algorithm for solving the VRPSPD are given in Table 1, and the notation of the parameters is listed in Table 2.
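To make the constraints concrete, here is a hypothetical sketch (not the authors' implementation) that evaluates a candidate set of routes against the model: the Eq. (1) cost, the capacity bound of constraint (5), the load balance of constraint (7), and the visit-once requirement of (3)-(4). Names such as `route_cost_and_feasible` are illustrative.

```python
def route_cost_and_feasible(routes, dist, delivery, pickup, Q, f, g):
    """Evaluate VRPSPD routes. Each route is a list of customer indices,
    starting and ending implicitly at depot 0; delivery[i] = q_i, pickup[i] = p_i,
    dist = d_ij matrix. Returns (total cost, feasible?)."""
    visited = []
    total = 0.0
    for route in routes:
        total += f  # fixed cost per used vehicle (the x_{0jk} term of Eq. (1))
        load = sum(delivery[c] for c in route)  # depot load, cf. constraint (6)
        if load > Q:
            return total, False
        prev = 0
        for c in route:
            total += g * dist[prev][c]          # variable cost term of Eq. (1)
            load += pickup[c] - delivery[c]     # load balance, cf. constraint (7)
            if load > Q:                        # capacity bound, cf. constraint (5)
                return total, False
            visited.append(c)
            prev = c
        total += g * dist[prev][0]              # return to depot
    # each customer visited exactly once, cf. constraints (3)-(4)
    ok = sorted(visited) == list(range(1, len(delivery)))
    return total, ok
```

On a toy 3-customer instance, tightening Q from 4 to 3 makes the same route infeasible once the pickup at the second customer raises the load above capacity.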
Table 1. Proposed PSO algorithm for VRPSPD

01. Initialize swarm S with L particles, initialize PSO parameters
02. For each particle i in S
03.   Initialize random position Pi ∈ [θmin, θmax]
04.   Velocity vi = 0
05.   Initialize pbest_i with a copy of the particle's position, pbest_i = Pi
06. End For
07. Set iteration τ = 1
08. Decode the i-th particle in the τ-th iteration, Pi(τ), to a set of vehicle routes Ri
09. While the termination conditions are not met
10.   For i = 1…L
11.     Compute the fitness value f(Pi)
12.     Get the global best (gbest)
13.     Get the local best of the i-th particle (lbest_i)
14.     Update pbest_i if f(Pi) improves on f(pbest_i)
15.     Update gbest if f(Pi) improves on f(gbest)
16.     Update lbest_i if f(Pi) improves on f(lbest_i)
17.     Update inertia weight w(τ) = w(T) + ((T − τ)/(T − 1))·(w(1) − w(T))
18.     For h = 1…H
19.       v_ih(τ+1) = w(τ)·v_ih(τ) + Cp·u·(pbest_ih − P_ih(τ)) + Cg·u·(gbest_h − P_ih(τ)) + Cl·u·(lbest_ih − P_ih(τ))
20.       P_ih(τ+1) = P_ih(τ) + v_ih(τ+1)
21.       If P_ih(τ+1) > θmax Then
22.         P_ih(τ+1) = θmax
23.         v_ih(τ+1) = 0
24.       End If
25.       If P_ih(τ+1) < θmin Then
26.         P_ih(τ+1) = θmin
27.         v_ih(τ+1) = 0
28.       End If
29.     End For
30.   End For
31.   τ = τ + 1
32. End While
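The update in lines 17-27 of Table 1 can be sketched as follows, assuming real-valued positions in [θmin, θmax]. Parameter defaults mirror Table 3; the function names are illustrative, not from the paper.

```python
import random

def inertia(tau, T, w1, wT):
    """Linearly decreasing inertia weight: w(tau) = w(T) + (T - tau)/(T - 1) * (w(1) - w(T))."""
    return wT + (T - tau) / (T - 1) * (w1 - wT)

def update_particle(p, v, pbest, gbest, lbest, tau, T,
                    w1=0.95, wT=0.45, cp=1.0, cg=0.1, cl=0.9,
                    theta_min=0.0, theta_max=1.0):
    """One velocity/position update with the three social terms
    (pbest, gbest, lbest) and the clamping of Table 1, lines 21-27."""
    w = inertia(tau, T, w1, wT)
    new_p, new_v = [], []
    for h in range(len(p)):
        u1, u2, u3 = random.random(), random.random(), random.random()
        vh = (w * v[h]
              + cp * u1 * (pbest[h] - p[h])
              + cg * u2 * (gbest[h] - p[h])
              + cl * u3 * (lbest[h] - p[h]))
        ph = p[h] + vh
        if ph > theta_max:          # clamp position and reset velocity
            ph, vh = theta_max, 0.0
        elif ph < theta_min:
            ph, vh = theta_min, 0.0
        new_p.append(ph)
        new_v.append(vh)
    return new_p, new_v
```

With w(1) = 0.95 and w(T) = 0.45, the inertia weight runs from 0.95 at the first iteration down to 0.45 at the last, shifting the search from exploration to exploitation.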
Table 2. The notation of parameters in the proposed algorithm

τ — iteration index, τ = 1…T
i — particle index, i = 1…L
h — dimension index, h = 1…H
pbest_i — personal best of the i-th particle
gbest — global best in the swarm
lbest_i — local best of the i-th particle: among the pbest of the K neighbors of the i-th particle, the personal best with the least fitness value
w(τ) — inertia weight in the τ-th iteration
w(T) — last inertia weight
v_ih(τ) — velocity of the i-th particle in the h-th dimension in the τ-th iteration
P_ih(τ) — position of the i-th particle in the h-th dimension in the τ-th iteration
Cp — personal best position acceleration constant
Cg — global best position acceleration constant
Cl — local best position acceleration constant
u — uniform random number in the range [0, 1]
θmax — maximum position value
θmin — minimum position value
3 Results and Discussion

The proposed algorithm is validated on the benchmark data of Dethloff [2], which comprises four data sets named SCA3, SCA8, CON3 and CON8. The same data sets were also used in prior work, allowing direct performance comparison. To compare with prior work under the same scenario, the problem parameters in the VRPSPD formulation are set as follows: fixed cost per vehicle f = 0; service duration limit D = ∞; variable cost per distance unit g = 1. The algorithm is implemented in Matlab on a PC with a 2.6 GHz CPU and 2 GB RAM. The PSO parameters are set based on the results of some preliminary experiments and are listed in Table 3. The proposed algorithm performs 10 replications on each data set. The best results among the 10 runs are listed in Table 4, where, to facilitate comparison, the results of other methods on the same data sets are also listed. The results of the proposed method are encouraging: they are better than those of Dethloff [2] and Ai and Kachitvichyanukul [9] for all data sets, better than Tang and Galvao [4] for the SCA8 data set, and better than Bianchessi and Righini [12] for the CON3 data set.
Table 3. PSO parameters for the VRPSPD

Parameter                              Value
Particle number                        40
Neighbor number                        8
Maximum iteration number               500
First inertia weight                   w(1) = 0.95
Last inertia weight                    w(T) = 0.45
pbest position acceleration constant   Cp = 1
gbest position acceleration constant   Cg = 0.1
lbest position acceleration constant   Cl = 0.9
Table 4. Total cost obtained by different algorithms on four data sets

Data set  Dethloff [2]  Tang and Galvao [4]  Bianchessi and Righini [12]  Ai and Kachitvichyanukul [9]  Proposed method
SCA3      746.6         674.2                684.6                        675.8                         672.8
SCA8      1166.4        1044.4               1035.7                       1041.8                        1039.4
CON3      597.3         564.2                568.5                        569.6                         565.3
CON8      860.6         774.3                776.4                        798.3                         767.9
4 Conclusions

An improved PSO algorithm based on multiple social structures was proposed for solving the VRPSPD. The concept of the local best, the best position among several particles adjacent to the i-th particle, is incorporated into PSO, so particle learning is strengthened through multiple social learning structures. A decoding method of PSO for the VRPSPD is presented: each particle in the swarm consists of (m + 2n) dimensions corresponding to m customers and n vehicles. A benchmark data set is used to validate the performance of the proposed algorithm. Compared to prior works, the results of the proposed algorithm are encouraging.
Acknowledgments This work was supported financially in part by 863 Foundation (Grant No. 2006AA04A105) and Tangshan Science Foundation (Grant No. 09110219C).
References

1. Min, H.: The multiple vehicle routing problem with simultaneous delivery and pickup points. Transportation Research A 23, 377–386 (1989)
2. Dethloff, J.: Vehicle routing and reverse logistics: the vehicle routing problem with simultaneous delivery and pick-up. OR Spektrum 23, 79–96 (2001)
3. Tang, F.A., Galvao, R.D.: Vehicle routing problems with simultaneous pick-up and delivery service. Journal of the Operational Research Society of India (OPSEARCH) 39, 19–33 (2002)
4. Tang, F.A., Galvao, R.D.: A tabu search algorithm for the vehicle routing problem with simultaneous pick-up and delivery service. Computers & Operations Research 33, 595–619 (2006)
5. Baker, B.M., Ayechew, M.A.: A genetic algorithm for the vehicle routing problem. Computers & Operations Research 30, 787–800 (2003)
6. Prins, C.: A simple and effective evolutionary algorithm for the vehicle routing problem. Computers & Operations Research 31, 1985–2002 (2004)
7. Angelelli, E., Mansini, R.: A branch-and-price algorithm for a simultaneous pick-up and delivery problem. Working paper, presented at the EURO/INFORMS Meeting (2003)
8. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948 (1995)
9. Ai, J., Kachitvichyanukul, V.: A particle swarm optimization for the vehicle routing problem with simultaneous pickup and delivery. Computers & Operations Research 36, 1693–1702 (2009)
10. Kennedy, J., Eberhart, R.C., Shi, Y.: Swarm Intelligence. Morgan Kaufmann, San Mateo, CA (2001)
11. Bianchessi, N., Righini, G.: Heuristic algorithms for the vehicle routing problem with simultaneous pick-up and delivery. Computers & Operations Research 34, 578–594 (2007)
Optimizing Particle Swarm Optimization to Solve Knapsack Problem Yanbing Liang, Linlin Liu, Dayong Wang, and Ruijuan Wu Hebei Polytechnic University, Tangshan 063000, China
[email protected]
Abstract. The knapsack problem, a typical combinatorial optimization problem in operational research, has broad application prospects. This paper applies particle swarm optimization (PSO) to solve the discrete 0/1 knapsack problem. Traditional PSO, however, has a nonnegligible disadvantage: the parameters in the update formula strongly affect the abilities of local and global search, so the algorithm is liable to converge too early and fall into a local optimum. This paper modifies traditional PSO by reinitializing the position of a particle that reaches the global optimum. Analysis of the final results shows that the improved algorithm strengthens the search ability of the particle swarm, avoids premature convergence, and solves the 0/1 knapsack problem more effectively. Keywords: particle swarm optimization, local optimum, global optimum, fitness.
1 0-1 Knapsack Problem

The knapsack problem is a typical combinatorial optimization problem in operational research with broad application prospects, such as resource allocation, goods shipment and project selection. The knapsack problem is NP-hard [1, 2, 3]. At present, the methods for solving such optimization problems are exact methods (such as dynamic programming, recursion, backtracking, and branch and bound), approximation algorithms (such as the greedy method and the Lagrangian method), and intelligent optimization algorithms (such as simulated annealing, genetic algorithms and ant colony algorithms) [4, 5]. Exact methods can obtain the exact solution, but the worst-case time complexity is O(2^n), exponential in the number of goods. Approximation algorithms and intelligent optimization algorithms do not always obtain the exact solution, but they can obtain good approximate solutions with lower time complexity [6, 7, 8].

1.1 Problem Description
There are N goods and a knapsack with capacity B; a_i is the value of the i-th good and b_i is the weight of the i-th good (the notation matches the model below). The problem is to decide which goods to put into the knapsack so that the total weight of the chosen goods does not exceed the capacity of the knapsack and the total value is maximized.
1.2 Problem Analysis

The 0/1 knapsack problem is the most basic knapsack problem: it embodies the basic ideas for designing states and recurrences in knapsack problems, and other kinds of knapsack problems can be converted into the 0/1 knapsack problem. The model is:

max ∑_{i=1}^{n} a_i x_i
s.t. ∑_{i=1}^{n} b_i x_i ≤ B, x_i ∈ {0, 1} (i = 1, 2, 3, …, n),

where x_i is a 0-1 decision variable: x_i = 1 means the i-th good is put into the knapsack, otherwise x_i = 0.
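For contrast with the heuristic approach developed below, the exact dynamic-programming method mentioned above can be sketched; this is the standard pseudo-polynomial formulation with O(nB) table updates, and the variable names are illustrative.

```python
def knapsack_exact(values, weights, B):
    """0/1 knapsack by dynamic programming.
    best[c] = maximum total value achievable with capacity c."""
    best = [0] * (B + 1)
    for val, wt in zip(values, weights):
        # iterate capacity downward so each good is used at most once
        for c in range(B, wt - 1, -1):
            best[c] = max(best[c], best[c - wt] + val)
    return best[B]
```

This exact solver is what produces the reference optimum (3103 in Section 3.2) against which the PSO solutions are later compared.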
2 Particle Swarm Optimization and Mathematical Derivation

Particle swarm optimization was developed in 1995 by J. Kennedy and R. C. Eberhart. It is an evolutionary computation technique that comes from the simulation of a simplified social model. The swarm in "particle swarm" accords with the five basic principles of swarm intelligence proposed by M. M. Millonas in his work on models of artificial life. "Particle" is an eclectic choice of term: the members of the population need to be described as having neither mass nor volume, yet they need velocity and acceleration states [9, 10]. PSO began as a graphical simulation of the graceful but unpredictable movement of bird flocks. Through observing animal social behavior, Kennedy and Eberhart found that the social sharing of information within a group provides an evolutionary advantage, and they developed an algorithm based on this idea. By adding nearest-neighbor velocity matching, considering multi-dimensional search, and accelerating by distance, the initial version of PSO was formed. The inertia weight w was then introduced to better balance exploitation and exploration, giving the standard version of PSO.
2.1 Principle

PSO is swarm-based: it makes individuals in the swarm move toward good regions, but it does not apply evolution operators to individuals. Each individual is regarded as a volumeless particle (a point) in the D-dimensional search space, flying at a certain velocity that is adjusted dynamically according to its own flight experience and the flight experience of other particles. The i-th particle is represented as X_i = (x_i1, x_i2, …, x_iD); the best position it has undergone (the one with the best fitness value) is P_i = (p_i1, p_i2, …, p_iD), also called pbest. The index of the best position undergone by all particles of the swarm is g, so that P_g is the swarm's best position, also called gbest. The
velocity of the i-th particle is V_i = (v_i1, v_i2, …, v_iD). For each generation, each dimension changes according to the following equations:

v_i(t+1) = w·v_i(t) + c1·r1·(p_i − x_i(t)) + c2·r2·(p_g − x_i(t)),
x_i(t+1) = x_i(t) + v_i(t+1),

where w is the inertia weight, c1 and c2 are acceleration constants (usually c1 = c2 = 2), and r1 and r2 are two random values uniformly distributed in [0, 1]. In addition, the velocity of the particle is limited by a maximum velocity v_max: if the update would cause the velocity in some dimension to exceed v_max,d, the velocity of that dimension is clamped to v_max,d. In the first formula, the first term is the particle's previous inertia; the second is the "cognition" part, expressing the particle's own thinking; the third is the "society" part, expressing information sharing and mutual cooperation between particles. The "cognition" part can be explained by Thorndike's law of effect: a randomly emitted behavior that is reinforced will appear more often in the future. Under this model assumption, the acquisition of correct knowledge is reinforced and the particle is encouraged to reduce its error. The "society" part can be explained by Bandura's vicarious reinforcement: when an observer sees a model being reinforced for some behavior, the probability that the observer performs that behavior increases; that is, a particle's own cognition will be imitated by the other particles. PSO rests on the following psychological assumption: in the process of searching for uniform cognition, an individual usually remembers its own beliefs while simultaneously considering its colleagues' beliefs, and when it perceives that its colleagues' beliefs are better, it adjusts adaptively.
2.2 The Process of Standard PSO

The process of standard PSO is as follows:
a) Initialize a swarm of particles (swarm size m), with random positions and random velocities;
b) Evaluate the fitness of each particle;
c) For each particle, compare its fitness value with that of the best position it has undergone, pbest; if it is better, set it as the new pbest;
d) For each particle, compare its fitness value with that of the best position the whole swarm has undergone, gbest; if it is better, reset the index of gbest;
e) Change the velocity and the position of each particle according to the equations above;
f) If the termination condition (a good enough fitness value, or reaching a preset maximum number of generations G) is not met, return to b).
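Steps a)-f) can be sketched on a toy continuous problem, minimizing the sphere function f(x) = ∑ x_h²; this is an illustrative baseline, not the paper's knapsack variant, and all parameter values are examples.

```python
import random

def pso_sphere(m=20, dim=2, iters=200, w=0.6, c1=1.7, c2=1.7, vmax=0.5, seed=1):
    """Minimal standard PSO minimizing f(x) = sum(x_h^2)."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(m)]  # a) positions
    V = [[0.0] * dim for _ in range(m)]                               # a) velocities
    P = [x[:] for x in X]                                             # pbest positions
    g = min(P, key=f)[:]                                              # gbest position
    for _ in range(iters):                                            # f) main loop
        for i in range(m):
            if f(X[i]) < f(P[i]):                                     # c) update pbest
                P[i] = X[i][:]
                if f(P[i]) < f(g):                                    # d) update gbest
                    g = P[i][:]
            for h in range(dim):                                      # e) update rules
                V[i][h] = (w * V[i][h]
                           + c1 * rng.random() * (P[i][h] - X[i][h])
                           + c2 * rng.random() * (g[h] - X[i][h]))
                V[i][h] = max(-vmax, min(vmax, V[i][h]))              # vmax limit
                X[i][h] += V[i][h]
    return g, f(g)
```

With 20 particles and 200 iterations, the best fitness drops close to the true minimum of zero; the vmax clamp keeps the swarm from overshooting.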
2.3 Algorithm Parameters
The PSO parameters include: swarm size m, inertia weight w, acceleration constants c1 and c2, maximum velocity v_max, and maximum number of generations G_max. v_max determines the resolution of the region between the current position and the best position: if v_max is too high, particles can fly past good solutions; if v_max is too low, particles cannot explore sufficiently and may fall into a local optimum. The limitation has three objectives: preventing the computation from overflowing; modeling artificial learning and attitude change; and determining the search granularity of the problem space. w makes the particle keep its motion inertia, extending its tendency to search the space and letting it explore new regions. c1 and c2 represent the weights of the acceleration terms that pull each particle toward pbest and gbest: low values allow particles to linger outside the target region before being pulled back, while high values cause particles to dash toward, or suddenly overshoot, the target region.
3 Algorithm Process

3.1 Process of PSO

The process of PSO is as follows:

Step 1: Initialize a swarm of particles (swarm size m), setting a random position and a random velocity for each particle within the allowable range. The position of each particle is determined randomly by

x_ij(0) = 0 if rand(0,1) < 0.5, and x_ij(0) = 1 if rand(0,1) ≥ 0.5,

and the velocity of each particle is generated randomly by

v_ij(0) = v_min + rand(0,1)·(v_max − v_min),

where v_min is the minimum velocity and v_max is the maximum velocity.

Step 2: Evaluate the fitness of each particle,

f(x_i) = ∑_{j=1}^{n} a_j x_ij(t) − Q·max{0, ∑_{j=1}^{n} b_j x_ij(t) − B},

in which Q is a sufficiently large positive number (the penalty term is written so that solutions whose total weight exceeds the capacity B are penalized), and calculate the objective function of each particle.

Step 3: For each particle, compare its fitness value with that of the best position it has undergone, pbest; if it is better than pbest, set it as the new current best position pbest.

Step 4: For each particle, compare its fitness value with that of the best position the swarm has undergone, gbest; if it is better than gbest, set it as the best swarm position and reset the index of gbest.
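Step 2's penalized fitness can be sketched as follows; the penalty is written in the equivalent max-form, so that any solution whose total weight exceeds B loses Q per unit of excess weight (names illustrative).

```python
def fitness(x, values, weights, B, Q=10**6):
    """Penalized knapsack fitness for a 0/1 particle x: total value minus
    a large penalty Q times the amount by which the weight exceeds B."""
    total_value = sum(a * xi for a, xi in zip(values, x))
    total_weight = sum(b * xi for b, xi in zip(weights, x))
    return total_value - Q * max(0, total_weight - B)
```

Because Q dwarfs any attainable value, every infeasible particle scores worse than every feasible one, so pbest/gbest updates steer the swarm back inside the capacity constraint.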
Fig. 1. The flow chart of PSO: begin → initialize particle swarm → evaluate particle fitness → calculate individual and group historical optimal positions → update velocity and position of each particle according to the equations → check the termination condition (loop back if not met) → end
Step 5: Change the velocity and the position of each particle according to the iteration formulas:

v_i(t+1) = w·v_i(t) + c1·r1·(p_i − x_i(t)) + c2·r2·(p_g − x_i(t)),   (1)
x_i(t+1) = x_i(t) + v_i(t+1).   (2)
Step 6: Check the termination condition (a good enough fitness value, reaching the maximum number of iterations, or the optimal solution no longer changing). If it is met, stop the iteration; otherwise return to Step 2. The flow chart of PSO is shown in Fig. 1.

3.2 Operating Result Analysis

The program was run 50 times independently, and the optimal solutions obtained were compared with the exact solution. The exact solution is 3103; the optimal values obtained by particle swarm optimization reach between 97.132% and 99.679% of the exact solution.
Table 1. Optimized values over 50 independent runs (best value and corresponding goods weight in the knapsack, five runs per row)

optimal value              3062   3067   3069   3033   3050
goods weight in knapsack   1000    999   1000   1000   1000
optimal value              3067   3060   3077   3038   3049
goods weight in knapsack    999    999    998   1000   1000
optimal value              3081   3078   3021   3062   3083
goods weight in knapsack   1000    999   1000   1000   1000
optimal value              3070   3040   3018   3054   3065
goods weight in knapsack   1000   1000    999    999   1000
optimal value              3081   3079   3014   3093   3041
goods weight in knapsack    999    999    998   1000   1000
optimal value              3078   3066   3053   3073   3036
goods weight in knapsack   1000    998   1000   1000   1000
optimal value              3052   3063   3056   3050   3069
goods weight in knapsack    999   1000   1000   1000    998
optimal value              3062   3067   3069   3033   3050
goods weight in knapsack   1000    999   1000   1000   1000
optimal value              3067   3060   3077   3038   3049
goods weight in knapsack    999    999    998   1000   1000
optimal value              3081   3078   3021   3062   3083
goods weight in knapsack   1000    999   1000   1000   1000
4 Conclusions

Particle swarm optimization, inspired by the social behavior of bird flocks, is a novel intelligent optimization algorithm; it is simple to implement and effective. This paper applies particle swarm optimization to solve the 0/1 knapsack problem and elucidates the realization process of the algorithm. We improve basic particle swarm optimization to strengthen the search ability of the swarm: when the position of a particle equals the best position of the swarm, we reinitialize the position of that particle and let the new particle replace the one with the weak fitness value, preventing the algorithm from falling into a local optimum. Results show that the algorithm can solve the 0/1 knapsack problem effectively.
References

1. Li, J., Fang, P., Zhou, M.: A hybrid genetic algorithm for solving the knapsack problem. Journal of Nanchang Institute of Aeronautical Technology 12(3), 31–35 (1998)
2. Shen, X.J., Wang, W.W., Zheng, B.J., Li, Y.X.: Based on improved particle swarm optimization to solve the 0-1 knapsack problem. Computer Engineering 32(18), 23–24 (2006)
3. http://baike.baidu.com/view/1531379.htm?fr=ala0_1
4. Liu, Q.D., Wang, L.: Research and application on the intelligent particle swarm optimization 5 (2005)
5. Zeng, J.C., Jie, J., Cui, Z.H.: Particle Swarm Optimization. Science and Technology Press (2004)
6. Wang, X.D.: Computer Algorithm Design and Analysis. Electronic Industry Press (2001)
7. Yang, W., Li, Q.Q.: Survey on particle swarm optimization algorithm. Engineering Science 5 (2004)
8. Hembecker, F., Lopes, H.S.: Particle swarm optimization for the multidimensional knapsack problem. In: Beliczynski, B., Dzielinski, A., Iwanowski, M., Ribeiro, B. (eds.) ICANNGA 2007. LNCS, vol. 4431, pp. 358–365. Springer, Heidelberg (2007)
9. Ma, H.M., Ye, C.M., Zhang, S.: Binary improved particle swarm optimization algorithm for the knapsack problem. Journal of University of Shanghai for Science and Technology 1 (2006)
10. Zhao, C.X., Ji, Y.M.: Particle swarm optimization for the 0/1 knapsack problem. Microcomputer Development 15(10) (2005)
BP Neural Network Sensitivity Analysis and Application

Jianhui Wu, Gouli Wang, Sufeng Yin, and Liqun Yu

Hebei Province Key Laboratory of Occupational Health and Safety for Coal Industry, Division of Epidemiology and Health Statistics, North China Coal Medical College, Tangshan 063000, China
[email protected]
Abstract. The BP neural network, as a data-mining technique, can be used for factor analysis, but in a multi-layer BP neural network the hidden layers and the staggered weights connecting neurons between layers make the size of the influence of input variables on output variables non-intuitive. This research performs sensitivity analysis based on the degree to which changes in the values of different input variables change the value of the output variable, and uses the size of the sensitivity to reflect how the input variables influence the output variable. Finally, the method is applied to a sensitivity analysis of the factors affecting hospitalization charges; the resulting ranking of the influencing factors is reasonable, showing that sensitivity analysis of influencing factors with a BP neural network is feasible. Keywords: BP neural network; sensitivity analysis; hospitalization charge; influence factor.
1 Introduction Factor analysis is an important part of medical research. Existing methods for analyzing influencing factors fall into two categories, parametric and nonparametric. Parametric statistical methods require the data to satisfy certain conditions; nonparametric methods, although they impose no such requirements on the data, do not exploit all of the available information but only part of it, so their tests are less efficient. A BP neural network imposes no requirements on the characteristics of the data and can take full advantage of all the information; owing to its two features of learning ability and adaptability, it automatically learns to identify any relationship between the variables without restriction. However, in a multi-layer BP network the hidden layers and the interleaved connection weights between layers mean that the influence of the input variables on the output variables cannot be seen directly.
2 BP Neural Network Theory A BP neural network is a nonlinear dynamic system [1, 2] composed of a number of single neurons distributed and working in parallel. The BP algorithm consists of two processes: forward propagation of the data flow and back-propagation of the error signal. In forward propagation, the signal travels from the input layer through the hidden layer to the output layer, and the state of the neurons in each layer is influenced only by the neurons in the preceding layer. If the output layer does not produce the desired output, the error signal is propagated backwards. By alternating these two processes and performing gradient descent on the error function in weight-vector space, the algorithm iterates dynamically towards a set of weight vectors at which the network error function reaches a minimum, thereby completing the process of information extraction and memorization. Because of this capability, neural networks are widely applied in nonlinear system modeling, control, identification, and so on [3, 4, 5].
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 444–449, 2010. © Springer-Verlag Berlin Heidelberg 2010
Fig. 1. Model of the neuron in an artificial neural network
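The forward-pass/back-propagation cycle described above can be sketched as follows. This is an illustrative Python/NumPy sketch, not the paper's implementation: the layer sizes, learning rate and function names are assumptions, with one sigmoid hidden layer and gradient descent on the sum-of-squares error.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_step(x, y, W1, b1, W2, b2, lr=0.1):
    """One forward pass followed by one back-propagation step.
    Returns the current sum-of-squares error; weights are updated in place."""
    # forward propagation: input layer -> hidden layer -> output layer
    h = sigmoid(W1 @ x + b1)                        # hidden-layer activations
    y_hat = sigmoid(W2 @ h + b2)                    # network output
    # back-propagation of the error signal
    delta2 = (y_hat - y) * y_hat * (1.0 - y_hat)    # output-layer error term
    delta1 = (W2.T @ delta2) * h * (1.0 - h)        # hidden-layer error term
    # gradient descent on the error function in weight space
    W2 -= lr * np.outer(delta2, h); b2 -= lr * delta2
    W1 -= lr * np.outer(delta1, x); b1 -= lr * delta1
    return 0.5 * float(np.sum((y - y_hat) ** 2))
```

Repeating the two alternating processes drives the error function towards a minimum, as described in the text.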
3 Sensitivity Analysis 3.1 Introduction to Sensitivity Sensitivity analysis [6] studies how the output of a model or system responds to its parameters, in particular how the input values affect the output, and the inputs can be modified accordingly until the expected behavior is achieved. Although there are different methods of sensitivity analysis [7, 8], the basic idea is similar, and the approach is well suited to problems in which the dynamic model of the system is unclear or uncertain. A neural network is such a structure, so sensitivity analysis of its inputs can be used to study the network output. Specifically, the input variables are changed in turn. If an input variable is categorical, each of its values is tested (using the normalized values if the data have been normalized). If an input variable is continuous, its range is split into quartiles and five values (the minimum, P25, P50, P75 and the maximum) are used for testing; if the data have been normalized to [0, 1], the five values 0, 0.25, 0.5, 0.75 and 1 are used instead. For each input variable, the test values are fed in turn into the trained network and the output values are observed; the largest and smallest outputs are recorded, and the ratio (maximum − minimum) / maximum is computed. The mean of these ratios over all observations is the sensitivity of the input variable. If the network has a validation set, the sensitivity analysis is based on the validation set; otherwise it is based on the test set.
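The procedure just described can be sketched in Python (an illustrative sketch, not the authors' code; `predict` stands for the trained network, and the input data are assumed to be normalized to [0, 1]):

```python
import numpy as np

def input_sensitivity(predict, X, var_idx, test_values=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Sensitivity of one input variable: the mean over all observations of
    (max output - min output) / max output as the variable sweeps its test values."""
    ratios = []
    for j in range(X.shape[0]):            # each observation in the validation set
        x = X[j].copy()
        outputs = []
        for v in test_values:              # substitute each test value in turn
            x[var_idx] = v
            outputs.append(predict(x))     # run the trained network
        c, d = max(outputs), min(outputs)
        ratios.append((c - d) / c)         # (maximum - minimum) / maximum
    return float(np.mean(ratios))
```

A variable the network's output does not depend on gets sensitivity 0, while a strongly influential variable gets a ratio close to 1.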
3.2 Matlab Implementation of Sensitivity Analysis As an example, the Matlab program for the sensitivity analysis of gender as an input variable is given below, with male coded as "1" and female as "2" (the fragment runs inside an outer loop over the input variables v):

output = [1 2];              % values of the input variable
obsnum = 1:1:nn;             % one ratio per observation
for j = 1:nn                 % test each observation
    valp = val.P;            % the analysis uses the validation data set
    value = [0 1];           % values of the input variable after normalization
    for m = 1:2              % test each value of the variable
        valp(v,j) = value(m);        % substitute the test value
        a = sim(net, valp(:,j));     % feed the value into the network simulation
        output(m) = a;               % record the output for this value
    end
    c = max(output);         % the maximum output
    d = min(output);         % the minimum output
    obsnum(j) = (c-d)/c;     % (maximum - minimum) / maximum
end
out(v) = sum(obsnum)/nn;     % mean of the ratios: the sensitivity of variable v
end                          % closes the outer loop over the input variables v
4 Application Based on the hospitalization charges of cerebral infarction patients and their influencing factors, we set up a BP neural network model and performed the sensitivity analysis. The coding of all input variables is given in Table 1.

Table 1. The quantization methods for the influencing factors of cerebral infarction patients

factor                       code   quantized method
sex                          x1     male = 1, female = 2
first-time hospitalization   x2     Y = 1, N = 2
rescue                       x3     N = 0, Y = 1
medical insurance            x4     Y = 1, N = 2
hospitalization days         x5     days
treatment outcome            x6     cure = 1, improved = 2, not improved = 3, died = 4
Table 2. Parameters of the BP neural network model for hospitalization costs

Network structure parameters: hidden layers: 1; hidden-layer neurons: 13; input-layer neurons: 6; output-layer neurons: 1
Network training parameters: training algorithm: LM; iterations when training stopped: 14; learning rate: 0.01; performance function: SSE; stop-training SSE* = 11.284

                Test set (simulation)   Training set (fitting)
R               0.738                   0.721
R^2             0.545                   0.520
R^2_adj         0.493                   0.496
SSE             3.005e+010              1.208e+009
SSE*            0.0089                  0.0098
RMSE*           0.094                   0.099

* computed on the data normalized to [0, 1].
Fig. 2. Fit between the actual and predicted output values on the test set
Fig. 3. Fit between the actual and predicted output values on the training set
4.1 BP Neural Network Model Results A BP neural network model was built with the OSS algorithm; the model parameters are listed in Table 2, and the simulation results are shown in Figs. 2 and 3. 4.2 The Results of Sensitivity Analysis The sensitivity of each influencing factor was analyzed; the results are given in Table 3.

Table 3. The results of the sensitivity analysis

Number   factor                       sensitivity
1        hospitalization days         0.8599
2        rescue                       0.2549
3        first-time hospitalization   0.146
4        treatment outcome            0.0892
5        medical insurance            0.0776
6        sex                          0.0689
As can be seen from Table 3, the factor with the greatest influence on cost is the length of hospitalization, followed by rescue, first-time hospitalization and the other factors, while sex has the smallest impact on cost. Combined with professional knowledge and other reports in the literature [9, 10], this ranking of the factors influencing hospital charges is reasonable.
5 Conclusion Sensitivity analysis based on a BP neural network reflects the contribution of the independent variables to the response variable. Other approaches, such as parameter sensitivity analysis [11] and impact analysis [12], can also be used to analyze influencing factors with neural networks; those approaches, however, analyze the weights of the neural network, whereas the sensitivity used here is based on how different values of the input variables change the network output. Taking the hospital charges of cerebral infarction patients as an example, the sensitivity analysis of the various influencing factors produced a reasonable ranking of the factors affecting medical costs, which shows that analyzing influencing factors through the sensitivity of a BP neural network is feasible.
References 1. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2, 359–379 (1989) 2. Therneau, T.M., Grambsch, P.M., Fleming, T.: Martingale-based residuals for survival models. Biometrika 77, 147–153 (1990)
3. Xiaohua, W., Yigang, H.: Optimal design of frequency-response-masking filters using neural networks. Acta Electronica Sinica 36(3), 486–489 (2008) 4. Ge, S.S., Hang, C.C., Lee, T.H., et al.: Stable Adaptive Neural Network Control. Kluwer Academic Publishers, Boston (2001) 5. Guang, T., Feihu, Q.: Feature transformation and SVM based hierarchical pedestrian detection with a monocular moving camera. Acta Electronica Sinica 36(5), 1024–1028 (2008) 6. Saltelli, A., Ratto, M., Tarantola, S., et al.: Sensitivity analysis practices: Strategies for model-based inference. Reliability Engineering and System Safety 91(10-11), 1109–1125 (2006) 7. Ratto, M., Tarantola, S., Saltelli, A.: Sensitivity analysis in model calibration: GSA-GLUE approach. Computer Physics Communications 136(3), 212–224 (2001) 8. Cariboni, J., Catelli, D., Liska, R., et al.: The role of sensitivity analysis in ecological modeling. Ecological Modelling 203(1-2), 167–182 (2007) 9. Chaohui, Y., Hong, L., Yi, H., et al.: Analysis of Influencing Factors of Medical Expenses of Three Single Internal Diseases by Cumulative Logistic Regression Model in Certain Tertiary Hospital in Wuhan City. Medicine and Society 23(3), 13–15 (2010) 10. Fengjiang, W., Zhuang, C., Changping, L., et al.: Analysis of Hospitalization Cost and Relative Factors on Acute Appendicitis. Modern Preventive Medicine 37(5), 847–849 (2010) 11. Chen, T., Han, D.: The parameter sensitivity analysis of the neural network method and its engineering application. Chinese Journal of Computational Mechanics 21(6), 752–756 (2004) 12. Zhu, C., Ni, Z.: Based on the BP neural network model analysis and application of the influence. Chinese Journal of Health Statistics 19(6), 342–344 (2002)
Data Distribution Strategy Research Based on Genetic Algorithm Mingjun Wei1 and Chaochun Xu2 1
College of Science, Hebei Polytechnic University, Xinhua West Road 46, 063009 Tangshan, China 2 College of Computer and Automatic Control, Hebei Polytechnic University, Xinhua West Road 46, 063009 Tangshan, China
[email protected],
[email protected]
Abstract. Data distribution has a direct impact on the data availability, efficiency and reliability of the entire distributed database application system. To solve the data distribution problem better, this paper presents a strategy based on a genetic algorithm, adopting an adaptive mutation operator to maintain the balance between colony diversity and the randomness of the search. In the course of the study the genetic algorithm was improved, and experiments show that the resulting strategy comes close to the optimal solution. Keywords: Distributed database; genetic algorithm; data distribution; heuristic search.
1 Introduction Domestic and foreign scholars have obtained many results on distributed databases and data distribution and have proposed many distribution algorithms, but in general the cost formulas are complex and the algorithms are expensive to implement, or there is a large gap between the resulting data distribution scheme and the best one [1]. In this paper we use a genetic algorithm, which maintains a good balance between depth-first and breadth-first search, and propose a data distribution strategy for distributed databases based on it.
2 Description of the Data Distribution Before introducing the data distribution strategy based on the genetic algorithm, let us first describe the notion of data distribution. Suppose there is a network composed of a set of sites S = (S1, S2, …, Sm); in this network runs a set of transactions T = (T1, T2, …, Tq), and a set of data fragments F = (F1, F2, …, Fn) is stored. Copies of each fragment Fi are assigned to different sites Sk in some manner, and the resulting distribution strategy is expressed as an assignment A. This is the so-called data assignment problem [2]. R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 450–457, 2010. © Springer-Verlag Berlin Heidelberg 2010
Suppose there are m sites and n data fragments. Then, when redundant distribution is allowed, the total number of distribution strategies grows geometrically as m and n increase, so the data distribution problem is an NP-complete problem and exhaustive search is obviously infeasible. Of course, many practical optimization problems do not require a global optimal solution, and a near-optimal solution may be acceptable; but even finding a near-optimal data distribution is still very complicated, and heuristic search methods are generally needed. Existing heuristic distribution methods have made great progress, but most still suffer from complex cost formulas, expensive implementation, or a gap between the resulting distribution and the best one [3, 4]. This paper proposes using a genetic algorithm to solve the data distribution problem.
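The combinatorial growth can be checked directly: with redundancy, each fragment may be stored on any non-empty subset of the n sites, giving (2^n − 1)^m candidate strategies for m fragments (an illustrative Python sketch; the three cases correspond to the experimental environments of Sect. 6.1):

```python
def n_strategies(m_fragments, n_sites):
    """Number of redundant distribution strategies: each fragment occupies a
    non-empty subset of the n sites, i.e. (2**n - 1) choices per fragment."""
    return (2 ** n_sites - 1) ** m_fragments

# the three experimental environments of Sect. 6.1
counts = [n_strategies(2, 4), n_strategies(3, 4), n_strategies(3, 5)]
```

The counts 225, 3375 and 29791 reproduce the totals reported in the experiments, and their rapid growth is what makes exhaustive search infeasible.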
3 Description of the Basic Genetic Algorithm A genetic algorithm maintains a population of individuals Pt (t denotes the generation). Each individual represents a potential solution of the problem and is evaluated to obtain its fitness. Some individuals undergo random genetic transformations and generate new individuals. There are two main genetic operations: mutation, which inverts genes to obtain a new individual, and crossover, which mates two individuals by exchanging corresponding parts to form new individuals. The new individuals (the offspring Ct) are evaluated in turn, and the superior individuals from the parent and offspring populations are chosen to form the new population. After several generations the algorithm converges to the best individual, which is likely to represent the optimal or a suboptimal solution of the problem.
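The generational cycle described here (evaluate, select, crossover, mutate, replace) can be sketched as follows. This is illustrative Python, not the paper's implementation; the fitness function, rates and population size are placeholders, and the sketch simply maximizes the number of 1-bits in a chromosome:

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      pc=0.8, pm=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]          # keep the fitter half as parents
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            if rng.random() < pc:                 # crossover: exchange corresponding parts
                cut = rng.randrange(1, n_bits)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            for i in range(n_bits):               # mutation: invert a gene
                if rng.random() < pm:
                    child[i] = 1 - child[i]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = genetic_algorithm(fitness=sum)             # maximize the number of 1-bits
```

After a number of generations the population converges towards the all-ones chromosome, the optimum of this toy fitness function.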
4 The Improvement of the Genetic Algorithm for Data Distribution To enable genetic algorithms to solve practical problems better, scholars at home and abroad have improved the evolution strategy of genetic algorithms in many effective ways [5]. This paper makes the following improvements in applying the genetic algorithm to data assignment: 4.1 Initializing the Colony by an Update Search over the Data Fragments Results show that the distribution of the initial colony seriously affects the quality and convergence properties of the whole algorithm. At present, most genetic algorithms generate the initial colony randomly [6, 7, 8]. This randomness may cause herd behavior among the individuals: some regions of the search space may be covered by many individuals while others contain few or none, which is harmful when the search space must be explored broadly for the global optimal solution. To improve the quality of the initial colony and speed up the genetic search, this paper initializes the colony with an update search over the data fragments when solving the data distribution problem. 4.2 Combining Fitness-Proportional Selection with the Elitist Selection Strategy The idea of fitness-proportional selection is to compute the ratio between an individual's fitness and the sum of all individuals' fitness and to use this ratio as the probability of choosing that individual. The higher the fitness, the greater the probability that an individual is selected, which usually makes the quality of the new generation better than that of the old one. But because selection is probabilistic, we cannot guarantee that the fittest offspring is better than the fittest parent, so elitist retention strategies are introduced [9, 10]. The elitist selection strategy is the basic guarantee of the convergence properties of the genetic algorithm: if the fitness of the best offspring individual is less than that of the best parent, the best parent is copied directly into the offspring population, replacing the least fit offspring. 4.3 Using Adaptive Crossover and Mutation Operators The crossover probability and mutation probability are very important parameters of a genetic algorithm and directly affect the search speed and solution quality [11]. The greater the crossover probability, the faster the algorithm searches, but the more likely good gene structures are to be damaged; if the crossover probability is too small, the search slows down or even stalls. If the mutation probability is too small, new individual structures are unlikely to be generated, leading to premature convergence; if it is too large, the algorithm degenerates into a blind random search. This paper therefore uses an adaptive crossover operator and an adaptive mutation operator.
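One common adaptive scheme (in the spirit of Srinivas and Patnaik's adaptive GA; the constants and the function name are illustrative, not the authors' exact operators) lowers the crossover and mutation probabilities for individuals fitter than the population average, so good gene structures are preserved, while below-average individuals keep the full rates:

```python
def adaptive_rates(f, f_avg, f_max, pc_max=0.9, pm_max=0.1):
    """Return (crossover probability, mutation probability) for an individual
    with fitness f, given the population average and maximum (maximization)."""
    if f < f_avg or f_max == f_avg:
        return pc_max, pm_max                   # below-average or flat population: full rates
    scale = (f_max - f) / (f_max - f_avg)       # 0 at the best individual, 1 at the average
    return pc_max * scale, pm_max * scale
```

In this way the probabilities track the fitness distribution each generation: strong individuals are disturbed less, weak ones are disrupted more, balancing diversity and preservation as described above.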
5 Realization of the Data Distribution Strategy 5.1 Encoding the Set of Parameters The distribution strategy uses binary coding: the parameters of the problem space are expressed over the character set {0, 1}, which constitutes the chromosome bit strings. The distribution plan of each data fragment is expressed by one bit string whose length equals the number of sites: a 1 means the data fragment is assigned to the corresponding site, and a 0 means it is not.
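Under this encoding a chromosome is one bit string per fragment; a small illustrative sketch (function and variable names are hypothetical):

```python
def decode(chromosome, n_sites):
    """Chromosome: one {0,1} string per data fragment, length = number of sites.
    Returns, for each fragment, the list of site indices holding a copy."""
    assert all(len(bits) == n_sites for bits in chromosome)
    return [[k for k, bit in enumerate(bits) if bit == 1] for bits in chromosome]

# two fragments, four sites: F1 on the 1st and 3rd sites (indices 0 and 2),
# F2 on the 2nd site (index 1)
placement = decode([[1, 0, 1, 0], [0, 1, 0, 0]], n_sites=4)
```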
5.2 Initializing the Colony The larger the colony, the better the diversity of its individuals and the smaller the risk that the algorithm falls into a local solution. But as the colony size increases, the fitness of more individuals must be calculated and evaluated, the amount of computation grows, and the efficiency of the algorithm drops significantly. In this paper's distribution strategy, the number of individuals in the colony is set equal to the number of sites selected. 5.3 Fitness Evaluation Individuals are selected for the next generation according to their fitness. The selection method combines fitness-proportional selection with the elitist selection strategy, expressing the principle of "elimination of the inferior, survival of the fittest": a fitter individual is more likely to be selected. 5.4 Selection The genetic algorithm in this paper combines fitness-proportional selection with the elitist selection strategy. Fitness-proportional selection is the most basic selection method: the expected number of times each individual is selected is proportional to the ratio of its fitness to the average fitness of the colony. First the fitness of each individual is calculated, and then its proportion of the total fitness, which is the probability of the individual being selected in the selection process. The selection process embodies "elimination of the inferior, survival of the fittest" and ensures that good genes are passed to the next generation. 5.5 Crossover To maintain the balance between search speed and the preservation of good genes, this paper uses an adaptive crossover operator. Adaptive crossover not only performs well in the later stage of evolution, but can also raise the crossover probability of high-fitness individuals in the early stage, reducing the probability that the search falls into a local optimal solution.
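The combined selection mechanism of Sects. 5.3–5.4 can be sketched as follows (illustrative Python, assuming non-negative fitness values and maximization; not the paper's implementation):

```python
import random

def roulette_select(pop, fitnesses, k, rng=random):
    """Fitness-proportional (roulette-wheel) selection: each individual is
    drawn with probability fitness_i / sum(fitness)."""
    return rng.choices(pop, weights=fitnesses, k=k)

def elitist_replace(parents, parent_fit, offspring, offspring_fit):
    """Elitist retention: if the best offspring is worse than the best parent,
    the best parent replaces the worst offspring."""
    best_p = max(range(len(parents)), key=lambda i: parent_fit[i])
    best_o = max(range(len(offspring)), key=lambda i: offspring_fit[i])
    if offspring_fit[best_o] < parent_fit[best_p]:
        worst_o = min(range(len(offspring)), key=lambda i: offspring_fit[i])
        offspring = list(offspring)
        offspring_fit = list(offspring_fit)
        offspring[worst_o] = parents[best_p]
        offspring_fit[worst_o] = parent_fit[best_p]
    return offspring, offspring_fit
```

The elitist step guarantees that the best solution found so far is never lost, which underpins the convergence property mentioned in Sect. 4.2.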
5.6 Mutation To maintain the balance between colony diversity and the randomness of the search, this paper uses an adaptive mutation operator, so that the crossover and mutation probabilities change with the fitness. Selection, crossover and mutation produce a new generation of the colony, on which fitness evaluation, selection, crossover and mutation are then performed again. After this has been repeated for a certain number of generations,
the algorithm converges to an optimal individual, which represents the best or a better distribution scheme for the data fragments. The flow of the algorithm is shown in Fig. 1: the data distribution parameters are determined and encoded, the colony is initialized and evaluated (bit-string decoding, total-cost calculation, fitness evaluation and adjustment), and until the stopping criterion is met the three basic genetic operators (selection, crossover, mutation) are applied.
Fig. 1. Basic flow diagram of the distribution strategy
6 Experimental Results 6.1 Experiment Environment Three hypothetical distributed environments were used in the experiment. The first assumed 2 data fragments, 3 services and 4 sites; the second assumed 3 data fragments, 3 services and 4 sites; the third assumed 3 data fragments, 3 services and 5 sites. Since there are (2^n − 1)^m distribution schemes in an environment with m data fragments and n sites, the total numbers of distribution schemes in the three environments were 225, 3375 and 29791 respectively. In each distributed environment, 5 groups of statistics were generated randomly. For each group we calculated the search cost and compared it with that of the heuristic distribution strategy. 6.2 Experimental Results The total costs obtained by the two distribution strategies in the different environments are shown in Tables 1, 2 and 3 and Figs. 2, 3 and 4.
Table 1. The results in the first experiment

Number   Optimal distribution   Distribution strategy in this paper   Heuristic distribution strategy
1        10341                  10341                                 10341
2        13560                  13560                                 14836
3        14079                  14079                                 19643
4        69438                  69438                                 75895
5        42157                  42157                                 42157

Fig. 2. Comparison of results in the first experiment

Table 2. The results in the second experiment

Number   Optimal distribution   Distribution strategy in this paper   Heuristic distribution strategy
1        23156                  23156                                 23156
2        32658                  32214                                 39856
3        11578                  11584                                 19598
4        16384                  16218                                 21305
5        37684                  37684                                 41021

Fig. 3. Comparison of results in the second experiment

Table 3. The results in the third experiment

Number   Optimal distribution   Distribution strategy in this paper   Heuristic distribution strategy
1        6854                   6854                                  8652
2        7985                   10468                                 18201
3        13245                  13987                                 15478
4        7015                   7258                                  11249
5        4354                   4354                                  4679

Fig. 4. Comparison of results in the third experiment
The test results in the three different environments show that the cost of this distribution strategy is no higher than that of the heuristic distribution strategy in all cases and comes close to the optimal solution; the more complex the distributed environment, the more obvious this advantage.
7 Conclusion This paper studies a data distribution strategy for distributed databases based on a genetic algorithm, which maintains a good balance between depth-first and breadth-first search. In the course of the research the genetic algorithm was improved in the following respects: initializing the colony by an update search over the data fragments, combining fitness-proportional selection with the elitist selection strategy, and using adaptive crossover and mutation operators. The experimental results show that the data distribution scheme obtained by this strategy is close to the optimal solution and that its overall performance is better than that of heuristic search methods.
Acknowledgments Our thanks go to the Hebei Province Department of Education (grant number: 2008457) for funding this research. We also extend our sincere gratitude to the editors, the anonymous reviewers and the sponsor. Last but not least, without the help of our colleagues this paper would not have been completed in its present form.
References 1. Zheng, Y., Zhou, G.S.: Distributed database data distribution strategies and case studies. Computer Engineering and Applications, 1–3 (1997) 2. Yang, C.: Data distribution strategy for distributed database research, pp. 21–23. Harbin Engineering University, Harbin (2007) 3. Li, X.: Data distribution strategy for distributed database research. Scientific Papers Online, 33–35 (2009) 4. Yang, Y.: Distributed database data distribution methods, pp. 119–121. Chongqing University, Chongqing (2004) 5. Yin-Fu, H., Jyh-Her, C.: Fragment distribution in distributed database design. Journal of Information Science and Engineering, 73–76 (2001) 6. Özsu, M.T., Valduriez, P.: Principles of Distributed Database Systems, 2nd edn., pp. 1175–1176. Tsinghua University Press, Beijing (2002) 7. Shuoi, W., Hsing-Lung, C.: Near-optimal data distribution over multiple broadcast channels. Computer Communications, 1341–1349 (2006) 8. Han, Q.L., Hao, Z.X.: Allocation algorithm for real-time data in a distributed environment. Computer Engineering, 19–21 (2008) 9. Liu, Z.L., Luo, Y.J.: Research on Data Allocation Model Based on Distribution Database System. Journal of China West Normal University (Natural Sciences), 185–186 (2009) 10. Li, Z.P., Lu, X.L.: Optimal data allocation algorithm based on multiple path. Application Research of Computers, 1247–1248 (2010) 11. Chen, S.G., Song, M.C.: Two techniques for fast computation of constrained shortest paths. IEEE/ACM Trans. on Networking, 105–115 (2008)
Water Pollution Forecasting Model of the Back-Propagation Neural Network Based on One Step Secant Algorithm Xiaoyun Yue, Yajun Guo, Jinran Wang, Xuezhi Mao, and Xiaoqing Lei Institute of Mathematics and Information Technology, Hebei Normal University of Science and Technology, Qinhuangdao 066004, Hebei Province, China [email protected]
Abstract. To overcome the shortcomings of the conventional back-propagation (BP) network, the BP network is trained with the one step secant (OSS) algorithm. Based on the Yangtze River water quality statistics reported from 1995 to 2004, a BP neural network model for water quality evaluation was established to predict the consequences of water pollution over the next ten years. The results show that: (1) without effective measures, Yangtze River water pollution will rise drastically within ten years; (2) the model can predict the development tendency over ten years, its results are reasonable, and it has strong generalization ability, making it a valid model for estimating nonlinear problems; (3) a BP neural network based on the OSS algorithm has the advantages of high accuracy and fast convergence. Keywords: Water pollution; BP neural network; one step secant algorithm.
1 Introduction With the rapid development of industry and cities, a great deal of waste flows into rivers, lakes and the sea, seriously polluting the water environment on which mankind depends for existence. To get rid of water pollution, it is necessary to draw up reasonable and viable water programming and management measures. Because water quality is influenced by many factors whose harm to the human body and to living creatures has no complete structural description, water quality programming is complex and difficult to solve. Fortunately, artificial neural network theory has been developing rapidly, providing a new and effective method for water quality management and prediction in recent years [1, 2]. Artificial neural networks (ANN) [3, 4, 5] are at the cutting edge of complex nonlinear science. An artificial neural network has the capabilities of massively parallel computing, self-organization, adaptation and self-learning, and is particularly suited to dealing with nonlinear, imprecise and fuzzy information-processing problems. Among artificial neural network models, the most theoretically mature and widely applied is the feedforward neural network. The back-propagation (BP) network, a feedforward neural network newly developed in recent years, embodies a large part of the essence of artificial neural networks; it shows strong vitality and has become an important tool in applied studies of water environmental problems. Based on the Yangtze River water quality statistics reported from 1995 to 2004, this paper establishes a back-propagation neural network (BPNN) model to predict the consequences of water pollution over the next ten years. The method has exploratory significance for research.
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 458–464, 2010. © Springer-Verlag Berlin Heidelberg 2010
2 BP Network Algorithm and Improvement 2.1 BP Network Basic Principle The back-propagation method [6, 7, 8, 9], also known as the error back-propagation algorithm, is based on the error-correction learning rule. It consists of two passes through the different layers of the network: a forward pass and a backward pass. In the forward pass, an activity pattern is applied to the input nodes of the network and its effect propagates through the network layer by layer; finally, a set of outputs is produced as the actual response of the network. The weight adjustment is made according to the generalized delta rule to minimize the error. An example of a three-layer BPNN with one hidden layer is shown in Fig. 1.
Fig. 1. Back-propagation neural network (input layer, hidden layer, output layer)
The activation function is the core of a neural network: the network's problem-solving ability is determined not only by the structure of the network but also by the choice of activation function. 2.2 Limitations of the BP Algorithm A BPNN can approximate any continuous function arbitrarily well, but it mainly has the following disadvantages [10]: (1) training comes down to a nonlinear gradient-optimization problem, so it inevitably risks falling into local minima;
(2) the algorithm converges slowly, usually needing a thousand iterations or more; (3) the network structure is purely feedforward with no feedback connections, so it is a nonlinear mapping system. 2.3 Improvement of the BP Network Algorithm In this paper the one step secant (OSS) algorithm is used to train the BP network that predicts the developing trend of water pollution in the Yangtze River. Since the conjugate gradient algorithm requires considerable storage and computation in each iteration, a secant approximation with smaller storage and computation requirements is needed. The OSS algorithm does not store the complete Hessian matrix; it assumes at each iteration that the previous Hessian was the identity matrix. This has the additional advantage that the new search direction can be calculated without computing a matrix inverse. Practice shows that the method can improve training speed by a factor of tens to hundreds [11, 12, 13, 14].
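The one-step secant idea — apply the BFGS inverse-Hessian update to the identity matrix, keeping only vectors — can be sketched generically as follows. This is an illustrative sketch of the underlying update, not MATLAB's trainoss implementation; the function name and variables are assumptions:

```python
import numpy as np

def oss_direction(g, s, y):
    """One-step secant search direction d = -H g, where H is the BFGS
    inverse-Hessian update of the identity using the last weight step
    s = w_k - w_{k-1} and gradient change y = g_k - g_{k-1}.
    Only vector products are used; no matrix is stored or inverted."""
    rho = 1.0 / (y @ s)
    sg, yg, yy = s @ g, y @ g, y @ y
    # H g with H = (I - rho*s*y^T) (I - rho*y*s^T) + rho*s*s^T, expanded:
    Hg = g - rho * (y * sg + s * yg) + (rho ** 2) * yy * (s * sg) + rho * (s * sg)
    return -Hg
```

By construction H satisfies the secant equation H y = s, so the direction interpolates curvature information from the last step without the storage cost of full quasi-Newton methods.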
3 Prediction Model The water quality of the Yangtze River may be divided into Class I, Class II, Class III, Class IV, Class V, and worse-than-Class-V. The model data are mainly taken from the Yangtze River yearbook. 3.1 Model Hypotheses (1) The blowdown of a region comes mainly from the observation points. (2) Pollutant density is related only to the natural purification ability of the river and is not influenced by other factors. (3) Any negative numbers appearing in the predicted water pollution development trend are treated as zero. (4) The water pollution prediction for the next 10 years neglects the influence of factors such as flood disasters, war, etc. 3.2 Model Using the 1995–2004 Yangtze River conditions, the learning samples of water quality are input and training is carried out. A 3-layer BPNN prediction model involves two processes, namely a learning process and a forecasting process [13]. Establishment of the learning process. Because water quality is divided into 6 classes, the water quality of the i-th year is a 6-dimensional vector Xi, so the input layer has 6 neurons. There is one hidden layer. The output is the water quality of the (i + 10)-th year, also a 6-dimensional vector Yj, so the number of output-layer neurons is 6. Establishment of the forecasting process. The connection weights and threshold values having been determined by the learning process, the predicting sample is put into the well-trained BP network, and the network outputs the predictive values. For instance, to predict the water quality of the i-th year, put the
Water Pollution Forecasting Model of the BP Neural Network Based on OSS Algorithm
461
water quality condition of the (i−10)-th year into the network model; the output is then the predictive value of the water quality of the i-th year.
Selection of activation functions. The activation function between the input layer and the hidden layer is the sigmoid (S-shaped) function:

f(x) = 1 / [1 + exp(−x)].

Between the hidden layer and the output layer a pure linear function is adopted:

f(x) = βx, for β > 0.

Both functions are continuously differentiable everywhere.

3.3 Model Solving
The sample set φ = {(x_k, y_k) | x_k ∈ R^m, y_k ∈ R^n, k = 1, 2, …, N} is used as the training set. The network establishes a mapping between input and output of the form

ŷ_k = Σ_{j=1}^{p} v_j · f( Σ_{i=1}^{m} w_ij · x_i + θ_j ) + r,

where y_k is the expected output and ŷ_k is the actual network output. Let the total network error be less than ε, i.e.,

E = (1/2) Σ_{k=1}^{N} [y_k − ŷ_k]² ≤ ε.

Taking min E as the objective function, training reduces to a nonlinear optimization problem. The paper improves the classic neural network learning mechanism by adopting the OSS learning mechanism, which requires a smaller amount of training data, uses a one-step secant update, and converges faster. First, we determine the sewage discharge in the next 10 years by curve fitting. To determine the yearly flow in the whole basin, a special procedure is used to generate random numbers as follows (see Table 1):
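The network mapping and the error criterion above translate directly into code; the following is a minimal sketch of the forward pass (sigmoid hidden layer, pure linear output) and of the total error E, with illustrative shapes.

```python
import numpy as np

def sigmoid(x):
    # hidden-layer activation f(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W, theta, v, r):
    """Network output y_hat = sum_j v_j * f(sum_i w_ij * x_i + theta_j) + r.
    W: (p, m) input-to-hidden weights, theta: (p,) hidden thresholds,
    v: (p,) hidden-to-output weights, r: output bias."""
    return float(v @ sigmoid(W @ x + theta) + r)

def total_error(y, y_hat):
    """E = (1/2) * sum_k (y_k - y_hat_k)^2, the training criterion."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 0.5 * float(np.sum((y - y_hat) ** 2))
```

Training consists of driving `total_error` below ε by adjusting W, θ, v and r, here via the OSS update described in Section 2.3.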
Table 1. Random results for yearly flow in the whole basin

Serial number   Sewage quantity   Total flow
 1              11866             301.11
 2              10254             315.47
 3               9098             327.98
 4              12185             338.08
 5              11236             345.22
 6               9080             348.85
 7              10994             348.44
 8              12433             343.42
 9              10844             333.27
10              12196             317.43
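The curve-fitting step mentioned above can be sketched as follows, fitting a trend to the total-flow column of Table 1. The choice of a degree-2 polynomial here is illustrative; the text does not specify the fitted curve.

```python
import numpy as np

# Total-flow column of Table 1, years indexed 1..10
years = np.arange(1, 11)
flow = np.array([301.11, 315.47, 327.98, 338.08, 345.22,
                 348.85, 348.44, 343.42, 333.27, 317.43])

# Least-squares quadratic trend: the series rises, peaks and then falls
coeffs = np.polyfit(years, flow, deg=2)
fitted = np.polyval(coeffs, years)
```

The fitted polynomial can then be evaluated beyond year 10 to extrapolate the discharge series used by the prediction model.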
Then we obtain the prediction results for 2014 (see Table 2). In 2014 the proportion of class IV and class V water already far exceeds 20%. Without timely control the consequences would be serious: poor class V water has already reached the point of requiring urgent management in the flood period. By continuously decreasing the total sewage discharge, and thereby indirectly controlling water quality, the percentage of class IV and class V water can be brought below 20%, with no poor class V water, while keeping the treated sewage quantity to a minimum.
Table 2. Water reports in the Yangtze River in 2014. For the hydrological year, the flood period and the dry season, and for the main stream, the branches and the full flow, the table lists the evaluated river length and its percentage in each water quality class (I, II, III, IV, V and poor V), together with the evaluation scope.

Remark: (1) The unit of river length is km; proportions are percentages. (2) The hydrological year means the average value of all examination data in a year. (3) According to the statistical data, in the Yangtze River the dry season is from January to April, the flood period from May to October, and the normal flow period from November to December annually.
Fig. 2. Flowchart of the sewage water algorithm: the predicted sewage emission of year T is fed to the neural network; the treated amount is increased iteratively (ΔT = ΔT + ΔT_3) until the percentage of class IV and class V water falls below 20% and the poor class V share reaches zero; the total treated quantity is ΔT = Σ_i ΔT_i.
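The loop of Fig. 2 can be sketched as follows. Here `response` is a stand-in for the trained network's predicted water-quality shares after treating an amount dT of sewage; the linear mock used in testing is for illustration only.

```python
def required_treatment(response, step=1.0, max_iter=100000):
    """Sketch of the Fig. 2 loop: keep increasing the treated sewage
    amount dT until the class IV + V share is below 20% and the poor
    class V share is zero; the accumulated dT is the annual quantity
    that needs to be disposed of."""
    dT = 0.0
    for _ in range(max_iter):
        iv_v, poor_v = response(dT)
        if iv_v < 20.0 and poor_v <= 0.0:
            return dT
        dT += step
    raise RuntimeError("no feasible treatment level found")
```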
According to Fig. 2, the computer simulates the sewage quantity that needs to be disposed of annually. From the simulation, the control scheme for sewage and the sewage emission rule relative to the original predictive values are obtained.

Predicted values of original sewage: 301.0833 315.4697 327.9848 338.0839 345.2220 348.8543 348.4359 343.4219 333.2675 317.4277

Sewage discharge after controlling: 301.833 229.9774 215.1908 273.8480 278.1717 218.6568 285.6849 251.6668 228.8833 254.0098

Sewage quantity needing to be disposed of annually: 0 85.4923 112.7940 64.2359 93.5552 119.9710 94.4261 65.2502 114.6107 31.7428
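Model hypothesis (3) (negative values are set to zero) ties the three lists together: the quantity to be disposed of each year is the predicted discharge minus the controlled discharge, floored at zero. A quick check on the first few printed values:

```python
# Quantity to dispose of = predicted discharge - controlled discharge,
# with negative values set to zero (model hypothesis (3)).
# Values are the first four years printed above.
original   = [301.0833, 315.4697, 327.9848, 338.0839]
controlled = [301.8330, 229.9774, 215.1908, 273.8480]
disposed = [round(max(o - c, 0.0), 4) for o, c in zip(original, controlled)]
```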
4 Conclusion

This paper makes use of an improved 3-layer BP network to predict water quality in the Yangtze River. To overcome the disadvantages of the BP neural network, namely its tendency to become trapped in local optima and its slow convergence, we build a model of water pollution evaluation and prediction using the OSS algorithm for the BP neural network. The study indicates that the neural network model is theoretically feasible for water quality evaluation and prediction, is worth further in-depth study in practice, and has good application prospects. This paper provides a new method for water pollution research and also promotes the theoretical development of artificial neural networks and the OSS algorithm. Nevertheless, the model, the improvement of the algorithm and the water pollution control research all still need further deepening and perfecting.
Acknowledgments. This work is supported by the Scientific Technology Research and Development Program Fund Project of Qinhuangdao City (No. 200901A288) and by the Teaching and Research Project of Hebei Normal University of Science and Technology (No. 0807). The authors are grateful to the anonymous reviewers for their constructive comments.
References
1. Xiao, Z., Chen, L.: Runoff Prediction Model for Changjiang River Basin Based on Mike Basin Program. Journal of Yangtze River Scientific Research Institute 6, 43–47 (2008)
2. Zhao, J., Dan, Q.: Mathematical Modeling and Mathematical Experiment, 2nd edn. Higher Education Press, Beijing (2003)
3. Lina: Neural Networks Control. Electronic Industry Press, Beijing (2003)
4. Yuan, Z.: Artificial Neural Networks and its Application. Tsinghua University Press, Beijing (1999)
5. Yang, G.: Application of Artificial Neural Network on Water Quality Assessment and Prediction. Arid Land Resources & Environment (6), 10–14 (2004)
6. Li, X., Liu, C., Zhu, X., Xie, X.: Integrated Assessment of Sea Water Quality Based on BP Neural Network. Marine Science Bulletin 29(2), 225–230 (2010)
7. Ni, S., Bai, Y.: Application of BP Neural Network Model in Groundwater Quality Evaluation. Systems Engineering-Theory & Practice (8), 124–127 (2000)
8. Wei, H., Li, W., Zhang, S., Wang, G.: Prediction of Reservoir Water Quality in Northeast China with BP Neural Network Model. Water Technology 3(1), 16–19 (2009)
9. Cao, J., Liu, H., Zhang, S.: Prediction of Water Quality Index in Danjiangkou Reservoir Based on BP Neural Network. Electronic Design Engineering 18(3), 17–19 (2010)
10. Yu, B.: Discussion on the Limitation and Improvement of BP Neural Network. Journal of Shanxi Agricultural University 29(1), 89–93 (2009)
11. Tian, L., Jiang, F., Bai, G.: Application of Improved BP Neural Network to Evaluate Water Quality of Hunhe. Opencast Coal Mining Technology (2), 18–20 (2005)
12. Zhang, W., Zhang, W.: The Application of Improved BP Artificial Neural Network in the Groundwater Quality Assessment in Jilin City. Water Resources (9), 31–34 (2008)
13. Guo, R., Feng, Q., Zhai, L., Si, J., Chang, Z.: Simulation and Prediction of Groundwater Level with Improved BP Neural Network Model in Minqin Oasis. Journal of Desert Research 30(3), 737–741 (2010)
14. Huan, S., Dong, M.: Application of Adaptive Variable Step Size BP Network to Evaluate Water Quality. Hydraulic Engineering (10), 119–123 (2002)
Passive Analysis and Control for Descriptor Systems

Chunyan Ding1, Qin Li2, and Yanjuan Zhang3

1 Yantai Engineering and Technology College, Yantai, Shandong 264006, China
[email protected]
2 Department of Mathematics and Information Science, Yantai University, Yantai, Shandong 264005, China
[email protected]
3 College of Light Industry, Hebei Polytechnic University, Tangshan, Hebei 063009, China
[email protected]
Abstract. In this paper, passive analysis and control are discussed for descriptor systems. By means of linear matrix inequality (LMI), the conditions for the system to be admissible and passive with dissipation η are proposed in the cases of state feedback control and observer-based feedback control. To obtain the maximum dissipation, the design procedures are given respectively. Simulation examples are given to show the validity and applicability of the proposed methods. Keywords: Descriptor systems; Passive; State feedback; Observer-based control; Linear matrix inequality (LMI); Dissipation.
1 Introduction
The passivity theory has played a major role in robust and nonlinear stabilization issues [1], [2]. Control of descriptor systems has been an attractive field in control theory and applications, since descriptor system models provide convenient and natural representations of economic systems, power systems, circuit systems and mechanical systems; see [3], [4] and [5]. In view of the importance of passivity and the generality of descriptor system models, the development of passive control for descriptor systems becomes an essential and attractive topic. Several works have discussed passive control for descriptor systems ([6], [7] and [8]), but most of them neglected the importance of the scalar η. The scalar η appearing in the definition of strict passivity is the dissipation of the system, as mentioned in [9]. If a system has a large dissipativity, that is, a large maximal dissipation, then it will be able to tolerate larger uncertainties and disturbances ([10]). As for state-space systems, [11] studied the guaranteed dissipation controller of uncertain time-delay systems, where the state
This work was supported by National Science Foundation of China (No. 60974028).
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 465–472, 2010. c Springer-Verlag Berlin Heidelberg 2010
feedback controller and the output feedback controller were proposed. The corresponding problem for descriptor systems is still open. The scope of this paper is passive analysis and control with dissipation η for descriptor systems. The LMI technique is applied to obtain existence conditions for the state feedback controller and the observer-based state feedback controller such that the closed-loop systems are admissible and passive with dissipation η. Design procedures are then given to obtain the maximum dissipation η*, and each process of obtaining η* can be formulated as an LMI optimization problem, which can be efficiently solved by the LMI control toolbox. The present paper extends the passivity results of state-space systems to descriptor systems. The main contribution is that the dissipation of descriptor systems is considered, and design procedures for obtaining the maximum dissipation are presented.
2 Problem Formulation
Consider a class of descriptor systems as follows:

E ẋ(t) = A x(t) + B_w w(t) + B_u u(t)
z(t) = C_z x(t) + D_w w(t) + D_u u(t)        (1)
y(t) = C_y x(t)

where x(t) ∈ R^n is the state vector, u(t) ∈ R^m is the control input, z(t) ∈ R^r is the control output, y(t) ∈ R^l is the measured output, and w(t) ∈ L_2^r[0, T], ∀T > 0, is the external disturbance. A, B_w, B_u, C_z, C_y, D_w, D_u are constant matrices of appropriate dimensions. E ∈ R^{n×n} may be singular; assume rank E ≤ n.

Lemma 1 ([12]). A pair (E, A) is admissible if and only if there exists P ∈ R^{n×n} such that

E^T P = P^T E ≥ 0,        (2)
A^T P + P^T A < 0.        (3)

Note that (3) implies that A and P are nonsingular. As to the passivity of the descriptor system, we have

Definition 1. The system (1) with u(t) = 0 is said to be passive if

∫_0^T w^T z dt ≥ 0, ∀T > 0

holds for all trajectories with zero initial condition. When the inequality holds strictly, the system is said to be strictly passive. Furthermore, it is said to be passive with dissipation η if the dissipation inequality

∫_0^T (w^T z − η w^T w) dt ≥ 0, ∀T > 0        (4)
holds for all trajectories with zero initial condition. Thus, passivity corresponds to nonnegative dissipation. The largest dissipation of the system, i.e., the largest number η such that (4) holds, is called its dissipativity ([9]). When η = 0 this is the usual notion of passivity, and when η > 0 it is strict passivity; in most cases the scalar η has been neglected.
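The dissipation inequality (4) can be checked numerically on a simple example. Below, a toy scalar system (E = 1, ẋ = −x + w, z = x + w; not from the paper) is simulated with forward Euler and the integral in (4) is evaluated for η = 0.5.

```python
import numpy as np

# Toy scalar system: E = 1, A = -1, Bw = 1, Cz = 1, Dw = 1, so
# x' = -x + w, z = x + w.  Evaluate the dissipation integral (4)
# for eta = 0.5 under zero initial condition.
eta, dt, T = 0.5, 1e-3, 10.0
w = np.sin(np.arange(0.0, T, dt))
x = 0.0
integral = 0.0
for wk in w:
    z = x + wk
    integral += (wk * z - eta * wk * wk) * dt
    x += dt * (-x + wk)            # forward Euler step
```

The integral stays nonnegative, so this toy system is passive with dissipation η = 0.5.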
3 Passive Analysis and Control for Descriptor Systems without Uncertainty

3.1 Passive Analysis
Theorem 1. Given a scalar η, if there exists a nonsingular matrix P such that (2) and the following linear matrix inequality hold:

[ A^T P + P^T A    P^T B_w − C_z^T    ]
[ *                2ηI − D_w − D_w^T ]  < 0,        (5)

then the unforced system of (1) is admissible and passive with dissipation η.

Proof. The admissibility of the system is obvious by Lemma 1. The passivity can be proved by choosing the Lyapunov function V(x) = x^T(t) E^T P x(t). Taking the derivative along the system (1), when (2) and (5) hold we obtain V̇(x) − w^T z − z^T w + 2η w^T w ≤ 0. Integrating from 0 to T, since V(x) ≥ 0, we have ∫_0^T (w^T z − η w^T w) dt ≥ 0 under zero initial condition. This completes the proof.

3.2 Passive Control
State feedback control. In this section we aim to find a controller u(t) = K x(t), where K is the controller gain, such that the closed-loop system

E ẋ(t) = (A + B_u K) x(t) + B_w w(t)
z(t) = (C_z + D_u K) x(t) + D_w w(t)        (6)
is admissible and passive with dissipation η.

Theorem 2. Given a scalar η, if there exist a nonsingular matrix P̃ and a matrix K̃ such that the following LMIs hold:

P̃^T E^T = E P̃ ≥ 0,        (7)

[ A P̃ + P̃^T A^T + B_u K̃ + K̃^T B_u^T    B_w − P̃^T C_z^T − K̃^T D_u^T ]
[ *                                      2ηI − D_w − D_w^T           ]  < 0,        (8)

then the closed-loop system (6) is admissible and passive with dissipation η, and the controller is u(t) = K̃ P̃^{−1} x(t).
Proof. Substituting A + B_u K for A and C_z + D_u K for C_z in (5), we get

[ (A + B_u K)^T P + P^T (A + B_u K)    P^T B_w − (C_z + D_u K)^T ]
[ *                                    2ηI − D_w − D_w^T         ]  < 0.        (9)

Multiplying (9) by diag{P^{−T}, I} on the left and diag{P^{−1}, I} on the right, and letting P̃ = P^{−1} and K̃ = K P̃, (9) is equivalent to (8).

The maximum dissipation for the system is computed by the following eigenvalue problem (EVP) in P̃ and K̃:

max_{P̃, K̃} η    subject to η > 0, (7) and (8).        (10)

The design procedure for the maximum dissipation is summarized as follows.

Design procedure 1:
Step 1: Solve the EVP in (10) to obtain P̃ and K̃.
Step 2: Increase η and repeat Step 1 until a nonsingular P̃ cannot be found. At the end of this step, the maximum dissipation η* is obtained.

If the maximum dissipation η* is obtained, the controller with η* can be constructed, which is called a maximum guaranteed dissipation controller.

Observer-based state feedback control. Suppose that (E, A, B_u) is stabilizable and (E, A, C_y) is detectable. For the descriptor system (1), the following observer is proposed to deal with the state estimation:

E ξ̇(t) = A ξ(t) + B_u u(t) + L (y(t) − C_y ξ(t))
u(t) = −G ξ(t)

Denoting the estimation error by e(t) = x(t) − ξ(t) and differentiating, we get the estimation error dynamics E ė(t) = (A − L C_y) e(t) + B_w w(t). The augmented system is then equivalent to the following form:

Ē x̄̇(t) = Ā x̄(t) + B̄_w w(t)
z(t) = C̄_z x̄(t) + D_w w(t)
(11)
where

x̄(t) = [ x(t) ; e(t) ],  Ē = [ E 0 ; 0 E ],  Ā = [ A − B_u G   B_u G ; 0   A − L C_y ],  B̄_w = [ B_w ; B_w ],  C̄_z = [ C_z − D_u G   D_u G ].

Let us choose a Lyapunov function for the augmented system (11) as V(x̄(t)) = x̄^T(t) Ē^T P̄ x̄(t). Then we get the following result.
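The augmented matrices of (11) are direct block compositions; a small sketch (shapes generic, test values illustrative):

```python
import numpy as np

def augmented_matrices(E, A, Bu, Bw, Cz, Du, G, L, Cy):
    """Build E_bar, A_bar, Bw_bar, Cz_bar of the augmented system (11)
    from the block definitions in the text."""
    n = A.shape[0]
    Z = np.zeros((n, n))
    E_bar = np.block([[E, Z], [Z, E]])
    A_bar = np.block([[A - Bu @ G, Bu @ G],
                      [Z,          A - L @ Cy]])
    Bw_bar = np.vstack([Bw, Bw])
    Cz_bar = np.hstack([Cz - Du @ G, Du @ G])
    return E_bar, A_bar, Bw_bar, Cz_bar
```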
Theorem 3. Given the scalar η, if the matrix inequalities

Ē^T P̄ = P̄^T Ē ≥ 0,        (12)

[ Ā^T P̄ + P̄^T Ā    P̄^T B̄_w − C̄_z^T   ]
[ *                 2ηI − D_w − D_w^T ]  < 0        (13)

have a nonsingular solution P̄, then the system (11) is admissible and passive with dissipation η.

Let P̄ be chosen in the form P̄ = [ P 0 ; 0 R ], where P and R satisfy E^T P = P^T E ≥ 0 and E^T R = R^T E ≥ 0. Then

V̇(x̄(t)) − w^T(t) z(t) − z^T(t) w(t) + 2η w^T(t) w(t) = [x^T(t) w^T(t) e^T(t)] Ξ [x(t); w(t); e(t)],

where

Ξ = [ Ξ11    P^T B_w − (C_z − D_u G)^T    P^T B_u G       ]
    [ *      2ηI − D_w − D_w^T            B_w^T R − D_u G ]
    [ *      *                            Ξ33             ],

Ξ11 = P^T (A − B_u G) + (A − B_u G)^T P,  Ξ33 = R^T (A − L C_y) + (A − L C_y)^T R.

Using the inequality 2x^T y ≤ ε x^T x + ε^{−1} y^T y (x ∈ R^n, y ∈ R^n, ε > 0) to deal with the cross terms x^T(t) P^T B_u G e(t) + e^T(t) G^T B_u^T P x(t), w^T(t)(−D_u G) e(t) + e^T(t)(−D_u G)^T w(t) and e^T(t) R^T B_w w(t) + w^T(t) B_w^T R e(t), we obtain the following.

Theorem 4. Given the scalar η, if there exist nonsingular matrices P̃ and R, matrices G̃, L̃ and scalars ε1 > 0, ε2 > 0, ε3 > 0 such that

P̃^T E^T = E P̃ ≥ 0,        (14)

[ A P̃ + P̃^T A^T − B_u G̃ − G̃^T B_u^T + ε1 B_u B_u^T    B_w − P̃^T C_z^T + G̃^T D_u^T                  ]
[ *                                                    2ηI − D_w − D_w^T + ε2 D_u D_u^T + ε3 B_w^T B_w ]  < 0,        (15)

E^T R = R^T E ≥ 0,        (16)

[ R^T A + A^T R − L̃ C_y − C_y^T L̃^T    G^T      G^T      R^T   ]
[ *                                    −ε1 I    0        0     ]
[ *                                    *        −ε2 I    0     ]
[ *                                    *        *        −ε3 I ]  < 0,        (17)

then the system (11) is admissible and passive with dissipation η, the controller gain is G = G̃ P̃^{−1}, and the observer gain is L = R^{−T} L̃.
The maximum dissipation for the system can be characterized as the following maximization problem:

max_{P̃, R, G̃, L̃, ε1, ε2, ε3} η    subject to η > 0, (14), (15), (16) and (17).

Design procedure 2:
Step 1: Solve (14) and (15) to obtain P̃, G̃, ε1, ε2, ε3 (thus G = G̃ P̃^{−1} can also be obtained).
Step 2: Substitute G and ε1, ε2, ε3 into (16) and (17), then solve the LMIs (16) and (17) to obtain R, L̃ (thus L = R^{−T} L̃ can also be obtained).
Step 3: Increase η and repeat Steps 1 and 2 until nonsingular P̃ and R cannot be found. At the end of this step, the maximum dissipation η* is obtained.
Step 4: Construct the maximum guaranteed dissipation controller.
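Both design procedures share the same outer loop: increase η and re-solve the LMIs until they become infeasible. A sketch with the LMI step abstracted behind a feasibility oracle follows; the mock oracle in the test simply reproduces the η* = 0.72 of the example in Section 4.1 and is not a real LMI solve.

```python
def max_dissipation(feasible, step=0.01, eta_max=10.0):
    """Outer loop of the design procedures: increase eta until the
    LMIs become infeasible; the last feasible eta is taken as eta*.
    `feasible(eta)` stands in for solving the LMIs at a fixed eta,
    e.g. with an LMI/SDP solver."""
    eta_star, eta = None, 0.0
    while eta <= eta_max and feasible(eta):
        eta_star = eta
        eta = round(eta + step, 10)
    return eta_star
```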
4 Simulation Examples

4.1 State Feedback Passive Control

In this section we demonstrate the theory developed in this paper by means of a simple example. Consider (1) with the parameters

E = [ 1 0 ; 0 0 ],  A = [ −2.8 2.6 ; 5 −3.2 ],  B_u = [ 1.2 ; −2 ],  B_w = [ 1 ; 6 ],
C_z = [ 1.5  −2 ],  C_y = [ −12  14 ],  D_u = −2,  D_w = 1.2.

Assume η = 0.4. The solutions of (7) and (8) are as follows:
P̃ = [ 0.0027 0 ; 1.7223 −2.7912 ],  K̃ = [ −2.4575  3.7008 ],

and the controller gain is

K = K̃ P̃^{−1} = [ −65.5134  −1.3259 ].

According to Design procedure 1, the maximum dissipation η* = 0.72 is obtained. In this case the guaranteed maximum dissipation controller is

u*(t) = (1.0e+003) ∗ [ −1.6071  −0.0010 ] x(t).

Fig. 1 shows the trajectories of the states x1(t) and x2(t) of the open-loop system and of the closed-loop system.

4.2
Observer-Based Passive Control

The parameters of system (1) are the same as in Section 4.1.
Fig. 1. Plot of x1 (t) and x2 (t) of the open system and closed-loop systems
Suppose η = 0.4. The solutions of (14) and (15) are

P̃ = [ 150.9832 0 ; −699.8317 −13.4987 ],  G̃ = [ 812.5205  14.5044 ],  ε1 = 0.3289, ε2 = 0.1637, ε3 = 0.1474,

so the controller gain is

G = G̃ P̃^{−1} = [ 0.4010  −1.0745 ].

Using the obtained data to solve the inequalities (16) and (17), we get

R = [ 1.0070 0 ; −0.6929 −0.5872 ],  L̃ = [ 1.4327 ; 1.3826 ],

and therefore the observer gain is

L = R^{−T} L̃ = [ −3.0429 ; −2.3546 ].

According to Design procedure 2, the maximum dissipation η* = 0.82 is obtained. In this case the controller gain is G = [ 0.3400  −1.0876 ] and the observer gain is L = (1.0e+004) ∗ [ −2.3493 ; −8.1187 ]. Fig. 2 shows the trajectories of the state x1(t) and its estimate ξ1(t), and of the state x2(t) and its estimate ξ2(t), simulated from the initial condition [x1(0) x2(0) e1(0) e2(0)]^T = [1.0 −7.1548 2.0 1.7142]^T.
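The printed solution values can be cross-checked: the controller gain follows from G = G̃ P̃^{−1}, as stated after Theorem 4.

```python
import numpy as np

# Recover the controller gain from the printed solutions of (14)-(15):
# G = G_tilde @ inv(P_tilde).
P_tilde = np.array([[150.9832,    0.0],
                    [-699.8317, -13.4987]])
G_tilde = np.array([[812.5205, 14.5044]])
G = G_tilde @ np.linalg.inv(P_tilde)
```

The result agrees with the printed gain [0.4010, −1.0745] up to the rounding of the published values.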
Fig. 2. Plot of state x1 (t), x2 (t) and estimation state ξ1 (t), ξ2 (t)
5 Conclusions
This paper investigates the passive control synthesis problem for descriptor systems. A state feedback controller is presented so that the closed-loop system satisfies the required performance. If the state variables are not available, an observer-based state feedback controller is designed to solve the problem. Meanwhile, approaches for obtaining the maximum dissipation are given. Numerical examples have shown that the design procedures are feasible and efficient. Furthermore, our controller design methodology leads to larger dissipation.
References
1. Chen, X.R., Liu, C.X.: Passive control on a unified chaotic system. Nonlinear Analysis: Real World Applications 11, 683–687 (2010)
2. Castanos, F., Ortega, R.: Energy-balancing passivity-based control is equivalent to dissipation and output invariance. Systems & Control Letters 58, 553–560 (2009)
3. Yang, C.Y., Zhang, Q.L., Zhou, L.N.: Strongly absolute stability problem of descriptor systems: Circle criterion. Journal of the Franklin Institute 345, 437–451 (2008)
4. Camlibel, M.K., Frasca, R.: Extension of Kalman-Yakubovich-Popov lemma to descriptor systems. Systems & Control Letters 12, 795–803 (2009)
5. Virnik, E.: Stability analysis of positive descriptor systems. Linear Algebra and its Applications 429, 2640–2659 (2008)
6. Dong, X.Z., Zhang, Q.L., Guo, K.: Passive control for singular systems with time-varying uncertainties. Control Theory & Application 21(4), 517–520 (2004) (in Chinese)
7. Dong, X.Z., Zhang, Q.L.: Passive control of linear singular systems via output feedback. Journal of Northeastern University (Natural Science) 25(4), 310–313 (2004) (in Chinese)
8. Chen, F.X., Zhang, W.D., Wang, W.: Robust passive control for uncertain singular systems with state delay. In: Proceedings of the 2006 American Control Conference, Minneapolis, Minnesota, USA, June 14-16, pp. 1535–1538 (2006)
9. Boyd, S., Ghaoui, L.E., Feron, E., Balakrishnan, V.: Linear Matrix Inequalities in Systems and Control Theory. SIAM, Philadelphia (1994)
10. Uang, H.J.: On the dissipativity of nonlinear systems: fuzzy control approach. Fuzzy Sets and Systems 156, 185–207 (2005)
11. Liu, F., Su, H.Y., Chu, J.: Synthesis of guaranteed dissipation controller for uncertain time-delay systems. In: Proceedings of the IEEE Conference on Decision and Control, Orlando, Florida, USA, pp. 3216–3217 (December 2001)
12. Masubuchi, I., Kamitane, Y., Ohara, A., Suda, N.: H∞ control for descriptor systems: a matrix inequalities approach. Automatica 33(4), 669–673 (1997)
Study of Bird's Nest Habit Based on Variance Analysis

Yong-quan Dong1 and Cui-lan Mi2

1 Department of Mathematics and Information Science, Tangshan Teachers College, Tangshan 063000, China
2 Department of Mathematics and Physics, Hebei Polytechnic University, Tangshan 063000, China
[email protected]
Abstract. Using variance analysis and the variable coefficient theory, this paper studies the nesting habits of three kinds of birds, Ficedula zanthopygia, Parus major and Sitta auropaea, in two states: natural nest and artificial nestbox. For Ficedula zanthopygia the effect of factor A (status) on the indicator (number of birds entered) is significant and that of factor B (tree species, 13 in total) is highly significant. For Parus major the tree species is significant, while for Sitta auropaea the status is significant. In nesting, Ficedula zanthopygia and Sitta auropaea give priority to tree height and then tree DBH, but Parus major gives priority to tree DBH and then tree height. All calculations are completed with statistics in R.

Keywords: bird nest, tree species, variance analysis, variable coefficient, statistics with R.
1 Introduction

Mathematical statistics, one of the most active branches of applied mathematics, has been widely used not only in the natural sciences, engineering, and industrial and agricultural production [1, 2], but also in medicine and health, social life, economics and other fields [3, 4]. Mastering the methods and principles of mathematical statistics has become a necessary skill for college students, graduate students, and scientific and technical personnel. Scientists often face data processing problems that prevent their experiments from proceeding normally; only by finding appropriate data analysis methods can the right conclusions be reached [5]. The present example involves more than 1100 groups of original test data related to nest-level indicators, including bird species, tree species, tree height, tree DBH, nest height, hole type, hole size, hole orientation, newest, location of the nest tree, and the state when found. We take three birds as examples, Ficedula zanthopygia, Parus major and Sitta auropaea, all of which nest in tree holes [6]. We want to know how they use the trees in the two cases (natural state and artificial attraction), and to compare the preferences and differences of the three kinds of birds in selecting tree species, tree DBH and nest height. After carefully studying the data and the requirements, we use variance analysis methods to solve the problem [7].

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 473–479, 2010. © Springer-Verlag Berlin Heidelberg 2010
2 Variance Analysis

Let A1, A2, …, Ar be the levels of factor A and B1, B2, …, Bs the levels of factor B. Let X_ijk be the k-th test result (k = 1, 2, …, l) for the factor combination (Ai, Bj), sampled from the population X_ij ~ N(μ_ij, σ²), where the X_ij are independent. Let ε_ijk = X_ijk − μ_ij be the errors; then ε_ijk ~ N(0, σ²). Define

μ = (1/rs) Σ_{i=1}^r Σ_{j=1}^s μ_ij,
μ_i· = (1/s) Σ_{j=1}^s μ_ij,  α_i = μ_i· − μ,  i = 1, 2, …, r,        (1)
μ_·j = (1/r) Σ_{i=1}^r μ_ij,  β_j = μ_·j − μ,  j = 1, 2, …, s,

where Σ_{i=1}^r α_i = 0 and Σ_{j=1}^s β_j = 0. Then μ_ij is decomposed as

μ_ij = μ + (μ_i· − μ) + (μ_·j − μ) + (μ_ij − μ_i· − μ_·j + μ) = μ + α_i + β_j + γ_ij,        (2)

where Σ_{i=1}^r γ_ij = 0 and Σ_{j=1}^s γ_ij = 0. The mathematical model of the two-factor variance analysis with repeated tests is

X_ijk = μ + α_i + β_j + γ_ij + ε_ijk,
ε_ijk ~ N(0, σ²), the ε_ijk independent,        (3)
Σ_{i=1}^r α_i = 0,  Σ_{j=1}^s β_j = 0,  Σ_{i=1}^r γ_ij = 0,  Σ_{j=1}^s γ_ij = 0,
(i = 1, 2, …, r; j = 1, 2, …, s; k = 1, 2, …, l).
The main task of the two-factor repeated-test analysis of variance is to test the following hypotheses for model (3):

H01: α1 = α2 = … = αr = 0,
H02: β1 = β2 = … = βs = 0,        (4)
H03: γ_ij = 0 (i = 1, 2, …, r; j = 1, 2, …, s).
Let

X̄ = (1/rsl) Σ_{i=1}^r Σ_{j=1}^s Σ_{k=1}^l X_ijk,   X̄_ij· = (1/l) Σ_{k=1}^l X_ijk,        (5)
X̄_i·· = (1/sl) Σ_{j=1}^s Σ_{k=1}^l X_ijk,   X̄_·j· = (1/rl) Σ_{i=1}^r Σ_{k=1}^l X_ijk.
Now consider the decomposition of the total sum of squares:

S_T = Σ_{i=1}^r Σ_{j=1}^s Σ_{k=1}^l (X_ijk − X̄)² = S_e + S_A + S_B + S_{A×B},        (6)

where

S_e = Σ_{i=1}^r Σ_{j=1}^s Σ_{k=1}^l (X_ijk − X̄_ij·)²,   S_A = sl Σ_{i=1}^r (X̄_i·· − X̄)²,        (7)
S_B = rl Σ_{j=1}^s (X̄_·j· − X̄)²,   S_{A×B} = l Σ_{i=1}^r Σ_{j=1}^s (X̄_ij· − X̄_i·· − X̄_·j· + X̄)².
For the significance level α, if hypothesis H01 holds,

F_A = [S_A / (r − 1)] / [S_e / (rs(l − 1))] ~ F(r − 1, rs(l − 1)),        (8)

and the rejection region of the test is F_A > F_{1−α}(r − 1, rs(l − 1)). If H02 holds,

F_B = [S_B / (s − 1)] / [S_e / (rs(l − 1))] ~ F(s − 1, rs(l − 1)),        (9)

and the rejection region is F_B > F_{1−α}(s − 1, rs(l − 1)). If H03 holds,

F_{A×B} = [S_{A×B} / ((r − 1)(s − 1))] / [S_e / (rs(l − 1))] ~ F((r − 1)(s − 1), rs(l − 1)),        (10)

and the rejection region is F_{A×B} > F_{1−α}((r − 1)(s − 1), rs(l − 1)).
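The quantities (5)-(10) can be computed directly. The following sketch performs the two-factor analysis of variance with l repetitions on an array of shape (r, s, l) and lets one verify the decomposition (6); it is an illustration of the formulas, not the R code used by the authors.

```python
import numpy as np

def two_way_anova(X):
    """Two-factor ANOVA with repetition, following (5)-(10).
    X has shape (r, s, l): levels of A, levels of B, repetitions."""
    r, s, l = X.shape
    grand = X.mean()
    Xi = X.mean(axis=(1, 2))           # X_bar_i..
    Xj = X.mean(axis=(0, 2))           # X_bar_.j.
    Xij = X.mean(axis=2)               # X_bar_ij.
    ST = ((X - grand) ** 2).sum()
    SA = s * l * ((Xi - grand) ** 2).sum()
    SB = r * l * ((Xj - grand) ** 2).sum()
    SAB = l * ((Xij - Xi[:, None] - Xj[None, :] + grand) ** 2).sum()
    Se = ((X - Xij[:, :, None]) ** 2).sum()
    dfe = r * s * (l - 1)
    FA = (SA / (r - 1)) / (Se / dfe)
    FB = (SB / (s - 1)) / (Se / dfe)
    FAB = (SAB / ((r - 1) * (s - 1))) / (Se / dfe)
    return ST, SA, SB, SAB, Se, FA, FB, FAB
```

For balanced data the identity S_T = S_A + S_B + S_{A×B} + S_e holds exactly, which is a useful sanity check on any implementation.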
3 Empirical Analysis

3.1 Test on the Interaction

Let factor A be the bird species: Ficedula zanthopygia (denoted FZ), Parus major (PM) and Sitta auropaea (SA). Let factor B be the tree species: 1. white willow, 2. black birch, 3. Quercus mongolica, 4. scholar tree, 5. Manchurian linden, 6. ash tree, 7. linden, 8. rhynchophylla, 9. white poplar, 10. Amur cork, 11. Juglans mandshurica Maxim, 12. bird cherry, 13. other species. For each combination Ai Bj (i = 1, 2, 3; j = 1, 2, …, 13), two experiments were done (l = 2), in the natural state (denoted N) and with artificial attraction (A), to observe and record the birds entered (tagging birds that stay in a nest, then recording the information of the nest: tree species, tree DBH, nest height, etc.). Starting from the original data, we sorted out the following table:
Table 1. Number of birds entered in the two states (columns: tree species 1–13; an empty cell means no bird entered)

Birds  Status    1   2   3   4   5   6   7   8   9  10  11  12  13
FZ     N        12   7   5   4   4   1   –   –   –   –   –   –   3
FZ     A         9   5   2   1   3   2   2   1   2   1   –   –   5
PM     N        41  22  35   8  20   3   –   –   –   –   –   –   –
PM     A        26  61  76   5  36   7   7   3   9  15  11  19  13
SA     N        80  16  50   1  16   7   1   6  30  12   1   –   –
SA     A         1   7  11   1   5   1   1   1   1   1   –   3  10
In Table 1 the rows are bird species and status, the columns represent tree species, and each cell gives the number of nests in which birds stayed (an empty cell means no bird entered); this is called the entered number. The variance analysis results are shown in Table 2 [8]. Since F_{1−0.01}(2, 39) = 5.20, F_{1−0.01}(12, 39) = 2.68 and F_{1−0.05}(24, 39) = 1.82, we have F_A > 5.20 and F_B > 2.68, which indicates that the effects of factors A and B on the indicator (entered number) are highly significant; that is, there are obvious differences between tree species and between bird species in nesting. However, F_{A×B} < 1.82, which indicates that the effect of the interaction A×B on the indicator is non-significant.

Table 2. Variance analysis results
Sum of square
Degree of freedom
Factor A
S A =2661.33
r −1 = 2
FA = 7.98
**
Factor B Interaction A× B Error e
S B =7564.62
s − 1 = 12
FB = 3.78
**
S A× B =3722
(r − 1)( s − 1) = 24
FA× B = 0.93
Se =6502
rs (l − 1) = 39
Sum
ST =20449.95
rsl − 1 = 77
F
Significanc
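The F ratios in Table 2 can be reproduced from the printed sums of squares and degrees of freedom:

```python
# F = (S / df) / (Se / df_e), using the values printed in Table 2.
Se, df_e = 6502.0, 39
FA  = (2661.33 / 2)  / (Se / df_e)
FB  = (7564.62 / 12) / (Se / df_e)
FAB = (3722.0 / 24)  / (Se / df_e)
ST  = 2661.33 + 7564.62 + 3722.0 + Se
```

The computed ratios round to the tabulated 7.98, 3.78 and 0.93, and the sums of squares add up to the tabulated total S_T = 20449.95.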
Furthermore, we analyze the tree species selection of each bird in the natural and artificial attraction states.

3.2 Analysis of Variance without Interaction

We now analyze the tree species selection of each bird in the natural and artificial attraction states using analysis of variance without interaction. For the entered numbers of Ficedula zanthopygia (FZ) [9] (cf. Table 1), let factor A be the status, with two levels (natural and artificial attraction), and factor B the tree species, with 13 levels; the total entered number is 17. Using statistics with R, the variance analysis results are as shown in Table 3.
Table 3. Variance analysis results for Ficedula zanthopygia

Variance source    Sum of squares     Degrees of freedom    F                P
Status             S_A = 10.083       1                     F_A = 7.8571     0.037857 *
Tree species       S_B = 136.559      10                    F_B = 10.6409    0.008793 **
Error              S_e = 6.417        5
That is, the effect of factor A (status) on the indicator (entered number) is significant and that of factor B (tree species) is highly significant. So in the two different states (natural and artificial attraction), the use of tree species by Ficedula zanthopygia is significantly different. Similarly, for the entered numbers of Parus major (PM) and Sitta auropaea (SA) (cf. Table 1), we obtain the variance analysis results shown in Tables 4 and 5.

Table 4. Variance analysis results for Parus major

Variance source    Sum of squares     Degrees of freedom    F               P
Status             S_A = 530.5        1                     F_A = 3.3619    0.09993
Tree species       S_B = 6340.5       12                    F_B = 3.3487    0.03888 *
Error              S_e = 1420.1       9
Table 5. Variance analysis results for Sitta auropaea

Variance source    Sum of squares     Degrees of freedom    F               P
Status             S_A = 1786.0       1                     F_A = 5.9371    0.03757 *
Tree species       S_B = 3370.0       11                    F_B = 1.0184    0.49741
Error              S_e = 2707.4       9
From Table 4 we see that for Parus major the effect of factor B (tree species) on the indicator is significant while that of factor A (status) is non-significant. On the contrary, for Sitta auropaea (Table 5), the effect of factor A (status) is significant while that of factor B (tree species) is non-significant. In short, the nesting habits of the three kinds of birds differ. We next use the variable coefficient to compare the preferences and differences of the three kinds of birds in selecting tree species, tree DBH and nest height.
478
Y.-q. Dong and C.-l. Mi
3.3 Coefficient of Variation
Let X_1, X_2, ..., X_n be a sample from the population X. Then

X̄ = (1/n) Σ_{i=1}^{n} X_i,   S² = (1/(n−1)) Σ_{i=1}^{n} (X_i − X̄)²,   S = √(S²)

are the sample mean, sample variance and sample standard deviation, respectively, and

C·V = (S / X̄) × 100%   (11)

is the coefficient of variation of X. The coefficient of variation is commonly used to compare the variability of several groups of data that have different units of measure or significantly different means [10]. A larger coefficient of variation indicates a relatively larger variance and more scattered data; a smaller coefficient of variation indicates more concentrated data. From the original data we sorted out, for the natural status, the entered number for each tree species, the tree DBH (in cm) and the nest height (in m), as shown in Table 6.
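Equation (11) is straightforward to compute. The following sketch (with an illustrative sample, not the paper's Table 6 values; the function name is ours) implements the sample mean, the sample standard deviation and the coefficient of variation:

```python
import math

# Coefficient of variation, Eq. (11): C.V = S / mean * 100%
def coeff_of_variation(xs):
    n = len(xs)
    mean = sum(xs) / n
    s2 = sum((x - mean) ** 2 for x in xs) / (n - 1)  # sample variance
    return math.sqrt(s2) / mean * 100.0

# toy data only
print(round(coeff_of_variation([1, 2, 3, 4, 5]), 2))  # 52.7
```

As the text notes, a more scattered sample yields a larger C·V: for instance, coeff_of_variation([1, 9]) exceeds coeff_of_variation([4, 6]) although both samples have the same mean.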
Table 6. Entered number, tree DBH (cm) and nest height (m) of the three kinds of birds (each cell: entered number / tree DBH / nest height; – = no data)

Tree    FZ                  PM                  SA
1       12 / 35.91 / 5.13   41 / 29.85 / 2.88   80 / 27.25 / 3.97
2       7 / 37.14 / 3.54    22 / 23.37 / 3.45   16 / 30.12 / 4.17
3       5 / 28 / 2.16       35 / 26.78 / 2.36   50 / 28.26 / 3.50
4       4 / 25 / 3.90       8 / 21.58 / 2.60    1 / 10 / 4
5       20 / 25.92 / 1.99   16 / 29.31 / 4.16   7 / 28.26 / 4.26
6       4 / 34 / 3.6        1 / 16.88 / 4.20    –
7       3 / 17.25 / 1.98    1 / 23 / 4.23       –
8       3 / 27.33 / 1.2     6 / 27.34 / 3.52    –
9       9 / 31.33 / 4.47    30 / 28.18 / 4.93   –
10      15 / 28.94 / 2.35   12 / 23.41 / 3.52   1 / 25 / 5
11      2 / 19.59 / 3.15    –                   –
12-13   5 / 27.50 / 0.85    –                   –
By Eq. (11), we calculated the coefficients of variation of tree species (entered number), tree DBH and nest height for the three kinds of birds; the results are shown in Table 7. From the second column of Table 7, we can see that the preference order of Ficedula zanthopygia in nest building is nest height > tree DBH > tree species, because 25.88 < 26.33 < 67.79. That is, Ficedula zanthopygia first considers the nest height, then the thickness of the tree, and finally the tree species. Similarly, the preference order for Parus major is tree DBH > nest height > tree species, and for Sitta auropaea it is nest height > tree DBH > tree species.
Table 7. Coefficients of variation of the three factors

Factor           FZ (%)   PM (%)   SA (%)
Entered number   67.79    83.10    130.80
Tree DBH         26.33    16.20    22.50
Nest height      25.88    43.35    13.90
In short, when building their nests, all three kinds of birds give priority to nest height or tree DBH over tree species.
4 Conclusion

This paper showed that the three kinds of birds use trees for nesting quite differently in the two statuses (natural and artificial attract), compared the preferences and differences of the three kinds of birds in selecting tree species, tree DBH and nest height, and provided mathematical methods and reference conclusions for further research on nesting. This also shows the importance of mathematics: its applications penetrate all areas, and today's society is increasingly mathematized; that is, mathematical science is a universal and enabling technology.
Acknowledgement. This paper is supported by the Doctoral Foundation of Tangshan Teachers College (No. 09A02).
References
1. Fu-xia, X., Yong-quan, D.: Extreme Dependence of Relief Factors of Debris Flow. Systems Engineering—Theory & Practice 2, 180–185 (2009)
2. Beirlant, J., Teugels, J., Goegebeur, Y.: Statistics of Extremes: Theory and Applications. John Wiley & Sons, West Sussex (2004)
3. Fu-xia, X., et al.: Multi-state Distribution Network Demand Forecasting Based on EDI Transactions. System Engineering 1, 58–61 (2006)
4. Yong-quan, D.: Evaluation and Treatment Efficacy Prediction of AIDS. China Health Statistics 2, 204–206 (2008)
5. Ci-nan, Y., Wei-li, C.: Applied Mathematical Statistics. China Machine Press, Beijing (2004)
6. Guang-mei, Z.: Birds Classification and Distribution in China. Science Press, Beijing (2005)
7. Mendenhall, W., Sincich, T.: Statistics for Engineering and the Sciences, 5th edn. China Machine Press, Beijing (2009)
8. Dalgaard, P.: Introductory Statistics with R. Springer, New York (2002)
9. Wei, Z.: Reproductive parameters of Ficedula zanthopygia in nest-boxes. Chinese Journal of Zoology 43, 123–126 (2008)
10. Rong-qian, D.: Biometrics, 3rd edn. Higher Education Press, Beijing (2009)
Finite p-groups Which Have Many Normal Subgroups Xiaoqiang Guo, Qiumei Liu, Shiqiu Zheng, and Lichao Feng Department of Mathematics Hebei Polytechnic University Tangshan Hebei, 063009, P.R. China [email protected]
Abstract. Normal subgroups of a group play an important role in determining its structure. A Dedekindian group is a group all of whose subgroups are normal. The classification of such finite groups was completed by Dedekind in 1897, and Passman gave a classification of the finite p-groups all of whose nonnormal subgroups are of order p. Both of these classes of finite groups have many normal subgroups. Along this line, we study the finite p-groups all of whose nonnormal subgroups are of order p or p², that is, all of whose subgroups of order ≥ p³ are normal. According to the order of the derived subgroup, we divide the classification into two cases and give all of the groups up to isomorphism. Keywords: finite p-groups, minimal nonabelian p-groups, Dedekindian groups, central product.
1 Introduction
As is well known, the structure of a finite group is usually characterized by its normal subgroups. Two classical classes of groups are the Dedekindian groups and the simple groups. The classification of both classes of finite groups has been completed; see [1] and [2]. It is easy to see that the number of nontrivial normal subgroups of a finite group has great influence on its structure. This motivates us to study finite groups which have "many" or "few" normal subgroups. In [3], Passman gave a classification of the finite p-groups all of whose subgroups of order ≥ p² are normal. In [4], Qinhai Zhang classified the finite groups whose nonnormal subgroups are of order p or pq, where p, q are primes. In [5], Božikov and Janko gave a complete classification of the finite p-groups all of whose noncyclic subgroups are normal. In [6], Zhang Junqiang and Li Xianhua studied finite p-groups all of whose proper subgroups have small derived subgroups. This paper is a continuation of their work: we classify the finite p-groups all of whose subgroups of order ≥ p³ are normal. The notation and terminology we use are standard; see [7], [8], [9]. Let G be a finite p-group. We use C_n, D_{2^n}, Q_{2^n} and C_n^m to denote a cyclic group of order n, a dihedral group of order 2^n, a generalized quaternion group of order 2^n and the direct product of m cyclic groups of order n, respectively. If A and
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 480–487, 2010. © Springer-Verlag Berlin Heidelberg 2010
B are subgroups of G with G = AB and [A, B] = 1, we call G a central product of A and B, denoted by G = A ∗ B.
2 Paper Preparation
A group G is called a Dedekindian group if all of its subgroups are normal in G. A group G is said to be minimal nonabelian if G is nonabelian but all of its proper subgroups are abelian.

Lemma 1 ([3]). Let H be a nonnormal subgroup of a p-group G of maximal possible order. Then N_G(H)/H is either cyclic or Q_8. In particular, Z(G)/(H ∩ Z(G)) is cyclic.

Lemma 2 ([3]). Suppose G is not a Dedekindian p-group. Then there exists K ⊴ G with |G′ : K| = p such that G/K is not Dedekindian.

Lemma 3 ([10]). Let G be a p-group with |G′| = p. Then G = (A_1 ∗ A_2 ∗ ... ∗ A_s)Z(G), where A_1, ..., A_s are minimal nonabelian subgroups. So G/Z(G) is elementary abelian of even rank.

Lemma 4 ([10]). Let E be a subgroup of a p-group G such that |E′| = p. If [G, E] = E′, then G = E ∗ C_G(E).

Lemma 5 (O. Taussky). Let G be a nonabelian 2-group such that |G : G′| = 4. Then G is one of the 2-groups of maximal class.

Theorem 1 ([11]). If G is a minimal nonabelian p-group, then G is one of the following groups:
(i) M(m,n) = ⟨a, b | a^{p^m} = b^{p^n} = 1, a^b = a^{1+p^{m−1}}⟩, m ≥ 2, n ≥ 1;
(ii) M(m,n,1) = ⟨a, b, c | a^{p^m} = b^{p^n} = c^p = 1, [a, b] = c, [c, a] = [c, b] = 1⟩, where m + n ≥ 3 if p = 2;
(iii) Q_8.

Theorem 2 ([3]). Suppose that all nonnormal subgroups of a non-Dedekindian p-group G have order p. Then one of the following holds:
(i) G ≅ M(m,1);
(ii) G ≅ D_8 ∗ C_{2^n}, n ≥ 2;
(iii) G ≅ M(1,1,1) ∗ C_{p^n};
(iv) G ≅ D_8 ∗ Q_8.
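As a quick independent illustration of the Dedekindian property (not part of the paper's argument), the sketch below models Q_8 by the unit quaternions ±1, ±i, ±j, ±k and verifies by brute force that every one of its six subgroups is normal; all helper names are ours.

```python
from itertools import product as iproduct

def qmul(x, y):
    # quaternion multiplication on integer 4-tuples (a + bi + cj + dk)
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qinv(x):
    # inverse of a unit quaternion is its conjugate
    a, b, c, d = x
    return (a, -b, -c, -d)

Q8 = [(s*a, s*b, s*c, s*d) for s in (1, -1)
      for (a, b, c, d) in [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]]

def generated(gens):
    # closure of a generating set under multiplication
    H = set(gens) | {(1, 0, 0, 0)}
    while True:
        new = {qmul(x, y) for x in H for y in H} - H
        if not new:
            return H
        H |= new

# every subgroup of Q8 is generated by at most 2 elements
subgroups = {frozenset(generated(g)) for n in range(3)
             for g in iproduct(Q8, repeat=n)}

# normality: g H g^{-1} == H for every g in Q8 and every subgroup H
assert all(all({qmul(qmul(g, h), qinv(g)) for h in H} == set(H) for g in Q8)
           for H in subgroups)
print(len(subgroups))  # 6
```

The six subgroups are 1, the center {±1}, the three cyclic subgroups ⟨i⟩, ⟨j⟩, ⟨k⟩ of order 4, and Q_8 itself, so Q_8 is indeed Dedekindian.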
3 Main Results
In this section we give a complete classification of the finite p-groups G all of whose subgroups of order ≥ p³ are normal. In what follows we always assume that G has at least one nonnormal subgroup of order p². Lemma 6. |G′| ≤ p².
Proof. Assume that |G′| > p². By Lemma 2, G has a normal subgroup K ≤ G′ with |G′ : K| = p such that G/K is not Dedekindian. Hence G/K has a nonnormal subgroup H/K, and so H is not normal in G. Since |K| ≥ p², we get |H| ≥ p³, a contradiction.

Lemma 7. Let G be a finite p-group with |G′| = p. If H ≤ G with H ∩ Z(G) ≠ 1, then H ⊴ G if and only if G′ ≤ H.

By Lemma 6, we will classify the group G according to |G′|.

Theorem 3. Assume |G′| = p. Then G is one of the following:
(i) G ≅ M(m,1) × C_p;
(ii) G ≅ (D_8 ∗ C_{2^n}) × C_2;
(iii) G ≅ (M(1,1,1) ∗ C_{p^n}) × C_p;
(iv) G ≅ M(m,1) ∗ D_8;
(v) G ≅ M(m,1) ∗ M(1,1,1);
(vi) G ≅ D_8 ∗ D_8 ∗ C_{2^n};
(vii) G ≅ M(1,1,1) ∗ M(1,1,1) ∗ C_{p^n};
(viii) G ≅ M(m,2);
(ix) G ≅ Q_8 × C_4;
(x) G ≅ M(2,1,1) ∗ C_{p^n};
(xi) G ≅ D_8 ∗ D_8 ∗ Q_8;
(xii) G ≅ (D_8 ∗ Q_8) × C_2;
(xiii) G ≅ M(2,1,1) ∗ Q_8.

Proof. By Lemma 3, G = (A_1 ∗ A_2 ∗ ... ∗ A_s)Z(G), where A_1, ..., A_s are minimal nonabelian subgroups of G. By hypothesis and Theorem 1, each A_i (i = 1, 2, ..., s) is one of the groups M(m,2), M(m,1), M(2,1,1), M(1,1,1) and Q_8; moreover, if no A_i is isomorphic to Q_8, then s ≤ 2, and if some A_i ≅ Q_8, then there is only one such A_i and s ≤ 3. We proceed in six cases.

Case 1: A_i ≅ M(m,1) or M(m,2), i = 1, 2, ..., s.
I. A_i ≅ M(m,1), i = 1, 2, ..., s, s = 1 or 2.
When G = A_1 Z(G), where A_1 = ⟨a, b⟩ ≅ M(m,1), we claim that there exists a nonnormal subgroup H of order p² such that H ∩ Z(G) ≠ 1. If not, then by Lemma 1, Z(G) is cyclic. Since Z(G) > Z(A_1), there exists c_1 ∈ G \ A_1 such that Z(G) = ⟨c_1⟩, where o(c_1) = p^t. Since a^p ∈ Z(G), we have a^p = c_1^{p^{t−m+1}}. Let a_1 = a c_1^{−p^{t−m}}. Then ⟨a_1, b⟩ ≅ M(1,1,1) or ⟨a_1, b⟩ ≅ D_8. It follows that G ≅ M(1,1,1) ∗ C_{p^t} or G ≅ D_8 ∗ C_{2^t}. By Theorem 2, G has no nonnormal subgroup of order p², a contradiction.
Since H ∩ Z(G) ≠ 1, let H ∩ Z(G) = ⟨c⟩; then ⟨c⟩ ≠ G′, and Z(G) is of type (p^n, p). If Z(G) = ⟨a^p⟩ × ⟨c⟩, then we get the group (i). Suppose that Z(G) > ⟨a^p⟩ × ⟨c⟩.
Then there exists d ∈ G \ A1 such that Z(G) =
d × c, where o(d) = pt (t ≥ m). Since ap ∈ Z(G), without loss of generality, t−m+1 t−m ci . If p i, letting a1 = ad−p , then a1 , b ∼ let ap = dp = M(1,1,1) (p > 2) or D8 (p = 2). So we get the groups (ii) and (iii). If p i, then m > 2. If not, since t−1 / b, d, b, d G by Lemma 7, a contradiction. Let G = ap and ap = dp ci ∈ t−m −p a1 = ad . Then a1 , b ∼ = M(2,1,1) and G is the group (x). When G = (A1 ∗ A2 )Z(G), where A1 = a, b ∼ = M(m1 ,1) , A2 = c, d ∼ = M(m2 ,1) . And we may let m1 ≥ m2 ≥ 2. Let H be a nonnormal subgroup of order p2 . Then we claim that H ∩ Z(G) = 1. By Lemma 1, Z(G) is cyclic. If Z(G) = Z(A1 ∗A2 ), since m1 ≥ m2 ≥ 2, Z(A1 ∗A2 ) = ap and G = A1 ∗A2 . Since m1 −m2 +1 m1 −m2 cp ∈ Z(G), letting cp = ap , c1 = ca−p . We have c1 , d ∼ = M(1,1,1) when p > 2 or D8 when p = 2. By Lemma 4, G = c1 , d ∗ CG (c1 , d). Without loss of generality, let b1 = bd. Then a, b1 ∼ = M(m1 ,1) , G = a, b1 ∗ c1 , d. We get the groups (iv) and (v). If Z(G) > Z(A1 ∗ A2 ), the same as the argument above we get the groups (vi) and (vii). II. Ai ∼ = M(m,2) , i = 1, 2, . . . s, s = 1 or 2 When G = A1 Z(G), where A1 = a, b ∼ = M(m,2) . If Z(G) = Z(A1 ), then we get the group (viii). Suppose that Z(G) > Z(A1 ). There exists c ∈ G \ A1 such that Z(G) = c × bp , where o(c) = pn , n ≥ m. Since ap ∈ Z(G), without loss of generality, n−m+1 n−m bip . If p i, letting a1 = ac−p , then a1 , b ∼ assume ap = cp = M(2,1,1) and G is the group (x). If p i, without loss of generality, assume i = 1. If n > m = 2, these exists a nonnormal subgroup of order ≥ p3 . If n ≥ m > 2, we have G is the group (x). Assume n = m = 2, we have p = 2 and G is the group (ix). When G = (A1 ∗ A2 )Z(G), where A1 = a, b ∼ = M(m1 ,2) , A2 = c, d ∼ = M(m2 ,2) . And we may let m1 ≥ m2 ≥ 2. Obviously, Z(G) is of type (pn , p). If Z(G) = Z(A1 ∗ A2 ), then G = A1 ∗ A2 and Z(G) = Z(A1 ) = ap × bp . m1 −m2 +1 m1 −1 bip , dp = ap bp . 
If Since cp , dp ∈ Z(G), we can suppose that cp = ap m1 > 2 or m1 = 2 and p > 2, these exists a nonnormal subgroup of order p3 . So m1 = 2 and p = 2. Thus G = a2 = c2 . Let d1 = db. Then c, d1 ∼ = Q8 . By Lemma 1, NG (b)/b ∼ = Q8 . Since Z(G) ≤ N , Z(G) is type (2, 2). By Lemma 4, G = c, d1 ∗ CG (c, d1 ). Let a1 = ac. Then a21 = 1, [a1 , d] = 1, [a1 , c] = 1, [a1 , b] = [a, b] = a2 = c2 . It follows that a1 , b ∼ = M(2,1,1) , That is, G is the group (xiii). If Z(G) > Z(A1 ∗ A2 ), similar to the argument above, it is easy to get Z(G) is of type (2, 2). This contradicts Z(G) > Z(A1 ∗ A2 ). Case 2: Ai ∼ = M(1,1,1) or M(2,1,1) , i = 1, 2, . . . s. It is easy to see that s = 1 if G have subgroup H such that H ∼ = M(2,1,1) . So we consider the following two subcases. I. Ai ∼ = M(1,1,1) , (i = 1, 2, . . . s), s = 1 or 2. When G = A1 Z(G), where A1 = a, b ∼ = M(1,1,1) . Similar to the argument in Case 1, I, we have Z(G) is of type (pn , p). So there exists d ∈ G such that Z(G) = d × u, where o(d) = pn . If n = 1, then d = c = [a, b], G = M(1,1,1) × Cp . If n ≥ 2, then c = [a, b] ∈ d. In both cases we get G is the group (iii).
When G = (A1 ∗ A2 )Z(G), where A1 = a, b ∼ = M(1,1,1) , A2 = c, d ∼ = M(1,1,1) , we have Z(G) is cyclic by Lemma 1. Moreover, since [a, b], [c, d] ∈ Z(G), we can get G is the group (vii). II. G = A1 Z(G), where A1 = a, b ∼ = M(2,1,1) . Since a G and ap ∈ Z(G), n Z(G) is type of (p , p) by Lemma 1. So there exists d ∈ G such that Z(G) = d × ap , where o(d) = pn . Then [a, b] ∈ d and G is the group (x). Case 3: Ai ∼ = Q8 , (i = 1, 2, . . . s), s ≤ 3. I. When G = A1 Z(G), where A1 = a, b ∼ = Q8 . Since Ω1 (G) ≤ Z(G), all of nonnormal subgroups of G are cyclic and of order 4. By Lemma 1, Z(G) is type of (2n , 2). Let Z(G) = c × d, where o(c) = 2n (n > 1), o(d) = 2 and d belongs to some nonnormal subgroup. Since a2 = b2 ∈ Z(G), we may assume n−1 n−1 / ac, a2 = b2 = c2 di . We claim that n > 1. If i = 1, letting d1 = c2 d = a2 ∈ then ac G by Lemma 7. So o(ac) = 4 and n = 2. Thus G is the group (ix). If n−2 i = 2, letting b1 = bc−2 , then b21 = 1, [a, b1 ] = [a, b] = a2 . Hence a, b1 ∼ = D8 , G is the group (ii). II. When G = (A1 ∗ A2 )Z(G), where A1 = a, b ∼ = Q8 , A2 = c, d ∼ = Q8 . By D8 ∗ D8 ∼ ∗ Q , we can choose a nonnormal subgroup H of order 4 such Q = 8 8 that H ∩ Z(G) = 1. Z(G) is cyclic and G ≤ Z(G), So G is the group (vi). III. When G = (A1 ∗ A2 ∗ A3 )Z(G), where Ai ∼ = Q8 . Similar to that of argument of II, |Z(G)| = 2 and G is the group (xi). Case 4: Ai ∼ = M(m,n) and Aj ∼ = M(m,n,1) , (i, j = 1, 2, . . . s). It is easy to see that n = 1, i.e. Ai ∼ = M(m,1) and Aj ∼ = M(m,1,1) . Moreover, s = 2. So G = (A1 ∗ A2 )Z(G), where A1 = a, b ∼ = M(m,1) , A2 = c, d ∼ = M(m,1,1) . Since b, d G and b, d ∩ Z(G) = 1, Z(G) is cyclic by Lemma 1. If Z(G) = ap , then G is the group (v). If Z(G) > ap , similar to the argument of Case 1.I, we get G is the group (vii). Case 5: Ai ∼ = M(m,n) and Aj ∼ = Q8 , (i, j = 1, 2, . . . s). We distinguish the following two subcases: n = 1 and n = 2. I. When G = (A1 ∗ A2 )Z(G), where A1 = a, b ∼ = M(m,1) , A2 = c, d ∼ = Q8 . 
−2m−2 If Z(G) = Z(A1 ∗ A2 ), then G = A1 ∗ A2 and m > 2. Let c1 = ca , d1 = m−2 m−1 . Then c21 = d21 = 1, [c1 , d1 ] = [c, d] = a2 . Hence c1 , d1 ∼ da−2 = D8 and G is the group (iv). If Z(G) > Z(A1 ∗ A2 ) = a2 , then there exists u ∈ Z(G) \ a2 m−1 u and Lemma 7, H = b×u G. such that u∩a2 = 1. By G = a2 Hence o(u) = 2. Moreover, by Lemma 1, NG (H)/H ∼ = Q8 . Since Z(G) ≤ NG (H) and Z(G) ∩ H = u, Z(G) is of type (2,2). So m = 2, G ∼ = (D8 ∗ Q8 ) × C2 , that is, G is the group (xii). When G = (A1 ∗ A2 ∗ A3 )Z(G), where A1 = a, b ∼ = M(m1 ,1) , A2 = c, d ∼ = ∼ M(m2 ,1) , A3 = u, v = Q8 . Let H = b, d. Then NG (H)/H ∼ = Q8 by Lemma 1. Since Z(G) ≤ NG (H) and H ∩Z(G) = 1, |Z(G)| = 2 = |G |. Hence m1 = m2 = 2 and G ∼ = D8 ∗ D8 ∗ Q8 , that is, G is the group (xi). II. When G = (A1 ∗ A2 )Z(G), where A1 = a, b ∼ = M(m,2) , A2 = c, d ∼ = Q8 . Let H = b. Then NG (H)/H ∼ = Q8 . Since Z(G) ≤ NG (H) and b2 ∈ Z(G), Z(G) is of type (2,2). Hence m = 2 and G ∼ = = M(2,2) ∗ Q8 , but we have M(2,2) ∗ Q8 ∼ M(2,1,1) ∗ Q8 , so we get G is the group (xiii). Case 6: Ai ∼ = M(m,n,1) and Aj ∼ = Q8 , (i, j = 1, 2, . . . s).
G = (A_1 ∗ A_2)Z(G), where A_1 = ⟨a, b⟩ ≅ M(2,1,1) and A_2 = ⟨c, d⟩ ≅ Q_8. Obviously, there exists a nonnormal subgroup H of order 4 such that H ∩ Z(G) = 1. By Lemma 1, N_G(H)/H ≅ Q_8. Since Z(G) ≤ N_G(H) and H ∩ Z(G) = 1, Z(G) is of type (2, 2). Hence Z(G) ≤ A_1, and G is the group (xiii).

Theorem 4. Assume |G′| = p². Then G is one of the following:
(i) G is a p-group of maximal class of order p⁴;
(ii) G = ⟨a, b, c | a⁴ = b⁴ = c² = 1, [a, b] = 1, [a, c] = b², [b, c] = a²b²⟩;
(iii) G = ⟨a, b, c | a⁴ = b⁴ = 1, c² = b², [a, b] = 1, [a, c] = b², [b, c] = a²⟩;
(iv) G = ⟨a, b, c | a^{p²} = b^{p²} = c^p = 1, [a, b] = 1, [a, c] = b^p, [b, c] = a^p b^{wp}⟩, where w = 0, 1, 2, ..., (p − 1)/2;
(v) G = ⟨a, b, c | a^{p²} = b^{p²} = c^p = 1, [a, b] = 1, [a, c] = b^{νp}, [b, c] = a^p b^{wp}⟩, where (ν/p) = −1 and w = 0, 1, 2, ..., (p − 1)/2;
(vi) G = ⟨a, b, c, d | a⁴ = b⁴ = 1, c² = a²b², d² = a², [a, b] = a², [c, d] = a²b², [a, c] = [b, d] = 1, [b, c] = [a, d] = b²⟩.

Proof. By Lemma 2 and Theorem 2, there exists K ⊴ G with |G′ : K| = p such that G/K is isomorphic to one of the groups M(m,1), D_8 ∗ C_{2^n}, M(1,1,1) ∗ C_{p^n} and D_8 ∗ Q_8. We proceed in four cases.

Case 1: G/K ≅ M(m,1) = ⟨ā, b̄ | ā^{p^m} = b̄^p = 1, [ā, b̄] = ā^{p^{m−1}}⟩.
Since K ≤ G′, G = ⟨a, b⟩. If o(a) = p², then |G| = p⁴ and G′ is of type (p, p); by the classification of the groups of order p⁴, no such group exists. Assume o(a) ≥ p³. Then ⟨a⟩ ⊴ G, G is metacyclic, and G has a cyclic subgroup of index p. Since |G′| = p², by [8, Theorem IV.5.14] we have p = 2 and |G : G′| = 4. By Lemma 5, G is a 2-group of maximal class of order 2⁴. We get the group (i).

Case 2: G/K ≅ D_8 ∗ C_{2^n} = ⟨ā, b̄, c̄ | ā^{2^n} = b̄² = c̄² = 1, [b̄, c̄] = ā^{2^{n−1}}, [ā, b̄] = [ā, c̄] = 1⟩, where n ≥ 2.
Since K ≤ G′, G = ⟨a, b, c⟩. Suppose that K = ⟨u⟩. Then G′ = ⟨a^{2^{n−1}}, u⟩. We claim that u ∉ ⟨a⟩. If not, then K = ⟨a^{2^n}⟩, and it follows that G′ = ⟨a^{2^{n−1}}⟩. Since [a, b], [a, c] ∈ Z(G), a^{2^{n−1}} ∈ Z(G). If b² = a^{2^n}, letting b_1 = b a^{−2^{n−1}}, then b_1² = 1 and b_1^c = b_1 a^{2^{n−1}} a^{2^n k}. Since o(b_1) = 2, o(b_1^c) = 2. But o(b_1 a^{2^{n−1}} a^{2^n k}) = 4, a contradiction. Hence u ∉ ⟨a⟩. It follows that o(a) = 2^n and u ∈ ⟨b⟩ or u ∈ ⟨c⟩. Since at least one of [a, b] and [a, c] is not 1, ⟨a⟩ is not normal in G. So n = 2 and G′ = ⟨a²⟩ × ⟨u⟩. Without loss of generality we assume u ∈ ⟨b⟩, hence u = b². Thus G = ⟨a, b, c | a⁴ = b⁴ = 1, c² = b^{2v}, [b, c] = a²b^{2k}, [a, b] = b^{2s}, [a, c] = b^{2t}⟩, where k, v, s, t are 0 or 1.
When v = 0: since ⟨a, c⟩ ⊴ G, [a, c] = b², so t = 1. If s = 0, then [a, b] = 1. Since ⟨ab, c⟩ ⊴ G, [ab, c] = b²a²b^{2k} = a²b²; hence k = 1 and (ii) holds. If s = 1, then [a, b] = b². Since ⟨ab, c⟩ ⊴ G, [ab, c] = b²a²b^{2k} = a² = (ab)²; hence k = 0. Let b_1 = abc. Then b_1² = b², [a, b_1] = 1, [b_1, c] = a²b². So G is isomorphic to (ii). When v = 1, similarly to the above, G is isomorphic to the group (iii).
n Case 3: G/K ∼ a, ¯b, c¯ a¯p = ¯bp = c¯p = 1, [¯ a, ¯b] = [¯ a, c¯] = = M(1,1,1) ∗ Cpn = ¯ n−1 p ¯1, [¯b, c¯] = a ¯ . If G/K ∼ = M(1,1,1) , then |G| = p4 . By the classification of groups of order p4 , G is a p-group of maximal class of order p4 . Then G is the group (i). Suppose |G| > p4 . It is easy to see that n > 1. Since K ≤ G , G = a, b, c. n−1 Let K = u. Then G = ap , u. Similar to the argument in Case 2, we have the following facts: ap ∈ Z(G), o(a) = p2 , G = ap × u, u ∈ b or c, one of [a, b] and [a, c] is at least not 1. By [8, Th.X.2.4 and Th.X.2.5], G is p-abelian. Without loss of generality we assume u = bp . If cp = bvp , p v, letting c1 = cb−v , then cp1 = 1. So G have the following presentation: 2 2 G = a, b, c ap = bp = cp = 1, [a, b] = bsp , [a, c] = btp , [b, c] = ap bwp, where 0 ≤ s, t, w < p. We claim that [a, b] = 1. If s = 0, we have done. If s = 0, letting b1 = c−is b, where it ≡ 1(mod p). Then bp1 = bp , [a, b1 ] = 1, [b1 , c] = ap bwp . Since a, c G, [a, c] = 1. It follows that t = 0. So we have ( pt ) = 1 or ( pt ) = −1. If ( pt ) = 1, then t ≡ i2 (mod p), where i = 1, 2, . . . , p−1 2 . For any i > 1, letting a1 = aj , c1 = cj , where ij ≡ 1(mod p), then [a1 , b] = 1, [a1 , c1 ] = [aj , cj ] = 2 2 [a, c]j = b(ij) p = bp , [b, c1 ] = [b, cj ] = [b, c]j = ajp bjwp = ap1 bjwp . Let w = jw. p−1 If w > 2 , then p − w < p+1 . Since [b, c] = ap bw p = ap b−(p−w )p , we may let 2 a2 = a−1 , c2 = c−1 . Then [a2 , b] = 1, [a2 , c2 ] = [a−1 , c−1 ] = [a, c] = bp , [b, c2 ] = [b, c−1 ] = [b, c]−1 = a−p b(p−w )p = ap2 b(p−w )p . Thus we can assume w ≤ p−1 2 without loss of generality. So we get the group (iv). If ( pt ) = −1, Similar to the above argument, we get the group (v). 4 Case 4: G/K ∼ a, ¯b, c¯, d¯ a ¯ = ¯b2 = ¯1, [¯ a, ¯b] = a ¯2 , c¯2 = d¯2 = = D8 ∗ Q8 = ¯ 2 2 ¯ ¯ ¯ ¯ ¯ a ¯ , [¯ c, d] = a ¯ , [¯ a, c¯] = [¯ a, d] = [b, c¯] = [b, d] = ¯ 1. Since K ≤ G , G = a, b, c, d. Let K = u. Then G = a2 , u. 
If G = a2 , then o(a) = o(c) = o(d) = 8, |c, d| = 16, |c, d | = 4. But it is not true by Lemma 5. Hence G = a2 × u and o(a) = o(c) = o(d) = 4. Suppose that H ≤ G = G/K and H is the original image H in G. We have the following observation: If H = ¯ a, ¯b ∼ = D8 , then o(b) = 4. Moreover, H ∼ = M(2,2) . In fact, if not, then we have a4 = b2 = 1, c2 = a2 uk , d2 = a2 uv , [a, b] = a u , [c, d] = a2 ut , where 0 ≤ k, v, s, t ≤ 1. If [a, b] = a2 , since a, b G, we have [a, c] = [a, d] = [b, c] = [b, d] = 1, [c, d] = a2 u. Since a, c and a, d are normal in G, c2 = d2 = a2 u. It follows that G = D8 × Q8 . Obviously, G have a nonnormal subgroup of order 8, this is a contradiction. If [a, b] = a2 u, Without loss of generality assume [b, c] = 1. Since b, c G, we have c2 = a2 u, [b, d] = [a, c] = 1, [c, d] = a2 u. Since b, d G, we get d2 = a2 u, [a, d] = 1. Hence G = a2 u, a contradiction again. Hence we have the following facts: o(b) = 4, u = b2 and G = a2 × b2 . Moreover, if [a, b] = a2 b2 , letting b1 = ab, then o(b1 ) = 2, [a, b1 ] = a2 b2 . On the other hand, ¯ a, b¯1 ∼ = D8 , it follows by 2 s
the observation above that o(b1 ) = 4, a contradiction. Hence [a, b] = a2 . So a, b ∼ = M(2,2) . Now, let us determine the presentation of G. ¯ Since Without loss of generality we may assume [a, c] = 1. Let H1 = ca, d. 2 2 ¯ ¯ ∼ ca = 1 and [ca, d] = [¯ c, d] = a ¯ , H1 = D8 . By the observation above, (ca)2 = 2 2 2 2 c a [a, c] = c a = 1. Hence c2 = a2 b2 . Let H2 = da, c¯. Then H2 ∼ = D8 . In the same way, (da)2 = d2 a2 [a, d] = 1. Without loss of generality, we can suppose that d2 = a2 b2v . Then [a, d] = b2(1−v) . Since [ca, d] = [c, d][a, d] = d2 , [c, d] = d2 [d, a] = a2 b2 . Thus G = a, b, c a4 = b4 = 1, c2 = a2 b2 , d2 = a2 b2v , [a, b] = a2 , [c, d] = a2 b2 , [a, c] = 1, [a, d] = b2(1−v) , [b, c] = b2s , [b, d] = b2t , where v, s, t is 0 or 1. ∼ Q8 is normal in We claim that v = 0. If not, we have v = 1. Since c, d = G, [b, c] = [b, d] = 1. Let H = a, bc. Since (bc)2 = b2 c2 = a2 and [a, bc] = [a, c][a, b]c = [a, b] = a2 , H ∼ = Q8 . On the other hand, since (bc)d = bcd = 2 2 bc[c, d] = bca b ∈ / H, H G, a contradiction. If s = 0, letting H = a, bc, since (bc)2 = b2 c2 = a2 , [a, bc] = [a, c][a, b]c = [a, b] = a2 , it is easy to get H ∼ = Q8 . But b2 ∈ / H, we have H G, a contradiction. Hence s = 1. Now, if t = 1, letting d1 = cd, then d21 = (cd)2 = d2 = a2 , [b, d1 ] = [b, cd] = [b, c][b, d] = 1, [a, d1 ] = [a, cd] = [a, c][a, d] = b2 , [c, d1 ] = [c, d] = a2 b2 . So we can assume t = 0. Hence G is isomorphic to the group (vi).
References
1. Dedekind, R.: Über Gruppen, deren sämtliche Teiler Normalteiler sind. Mathematische Annalen 48, 548–561 (1897)
2. Gorenstein, D., Lyons, R., Solomon, R.: The Classification of the Finite Simple Groups, No. 6, Part IV: The Special Odd Case. Mathematical Surveys and Monographs, vol. 40.6. American Mathematical Society, Providence (2005)
3. Passman, D.S.: Nonnormal subgroups of p-groups. Journal of Algebra 15(3), 352–370 (1970)
4. Zhang, Q.-h., Guo, X.-q., Qu, H.-p., Xu, M.-y.: Finite groups which have many normal subgroups. Journal of the Korean Mathematical Society 46(6), 1165–1178 (2009)
5. Božikov, Z., Janko, Z.: A complete classification of finite p-groups all of whose noncyclic subgroups are normal. Glasnik Matematički 44(1), 177–185 (2009)
6. Zhang, J.-q., Li, X.-h.: Finite p-groups all of whose proper subgroups have small derived subgroups. Science China Mathematics 53(5), 1357–1362 (2010)
7. Huppert, B.: Endliche Gruppen I. Springer, Berlin (1967)
8. Xu, M.-y.: An Introduction to Finite Groups. Science Press, Beijing (2001) (in Chinese)
9. Berkovich, Y.: Groups of Prime Power Order I. Walter de Gruyter, Berlin (2008)
10. Berkovich, Y.: On subgroups of finite p-groups. Journal of Algebra 224(2), 198–240 (2000)
11. Rédei, L.: Das "schiefe Produkt" in der Gruppentheorie. Commentarii Mathematici Helvetici 20, 225–267 (1947)
Cubic NURBS Interpolation Curves and Its Convexity Lijuan Chen, Xiaoxiang Zhang, and Mingzhu Li School of Science, Qingdao Technological University, 266033, China College of Management, Qingdao Technological University, 266033, China School of Science, Qingdao Technological University, 266033, China
Abstract. Shape-preserving interpolation has been studied extensively for polynomial interpolation. The aim of this paper is to give a local interpolation method using cubic non-uniform rational B-spline curves. The generated interpolation curve is continuous and has a local shape parameter. Based on the convexity of the cubic non-uniform rational B-spline curves, the convexity of the given interpolation curves is discussed, and some computed examples of the interpolation curves are given. Keywords: NURBS curve, interpolation, convexity.
1 Introduction
Polynomial interpolation functions have been well studied for monotonicity preserving; see papers [6]–[11]. In paper [12], shape-preserving interpolation by space curves is studied. Non-uniform rational B-spline (NURBS) curves are the most frequently used curves in CAGD; see [13] and [15]. For local interpolation with NURBS curves, the segments are constructed using polynomial or rational Bézier curves, and a NURBS curve is then obtained by selecting a suitable knot vector. Little has been done on the convexity of interpolation curves using NURBS, and constructing an interpolation method with local shape parameters is also important. The aim of this paper is to give a local interpolation method using cubic NURBS curves in such a way that the interpolation curves are continuous, have local shape parameters, and in a certain sense preserve the local monotonicity of the interpolation data. The present paper is organized as follows. In Section 2, the piecewise expression of the cubic NURBS curve is described. In Section 3, cubic NURBS interpolation curves are given; much flexibility in altering the shape of the curves is offered. Based upon the convexity of the cubic NURBS curves, the convexity of the given interpolation curves is discussed in Section 4.
2 Notes on Cubic NURBS Curve
Usually, NURBS curves are defined recursively; see [13] and [15]. In this section, we give the piecewise expression of the cubic NURBS curve.
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 488–495, 2010. © Springer-Verlag Berlin Heidelberg 2010
Given data P_i (i = 0, 1, ..., m) corresponding to knots u_1 < u_2 < ... < u_{m+3}, let h_i = u_{i+1} − u_i and

a_i = h_i² / ((u_{i+1} − u_{i−1})(u_{i+1} − u_{i−2})),   b_i = h_i² / ((u_{i+2} − u_i)(u_{i+3} − u_i)),
c_i = h_i / (u_{i+1} − u_{i−2}),   d_i = h_i / (u_{i+3} − u_i),
v_i(u) = (u − u_i) / h_i,

and define

B_{i,0}(v) = a_i (1 − v)³,
B_{i,1}(v) = (1 − a_i − b_{i−1})(1 − v)³ + 3(1 − d_{i−1})(1 − v)²v + 3c_{i+1}(1 − v)v² + a_{i+1}v³,
B_{i,2}(v) = b_{i−1}(1 − v)³ + 3d_{i−1}(1 − v)²v + 3(1 − c_{i+1})(1 − v)v² + (1 − a_{i+1} − b_i)v³,
B_{i,3}(v) = b_i v³,

for u ∈ [u_i, u_{i+1}], i = 3, 4, ..., m. The cubic non-uniform rational B-spline curve can be written as

R(u) = Σ_{j=i−3}^{i} ω_j B_{i,j−i+3}(v_i) P_j / Σ_{j=i−3}^{i} ω_j B_{i,j−i+3}(v_i),   (1)

where ω_j > 0 (j = 0, 1, ..., m) are the associated weights. The cubic NURBS curves also make sense when the knots are taken with multiplicity r ≤ 4. If a knot has multiplicity 1 ≤ r ≤ 4, then the curve R(u) has C^{3−r} continuity at this knot.
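The piecewise basis above can be checked numerically. The sketch below (an illustrative strictly increasing knot vector, 0-based indexing u[k] = u_k, and helper names of our own) evaluates B_{i,0..3} and confirms the partition-of-unity property Σ_j B_{i,j}(v) = 1, which the rational form (1) relies on:

```python
def coeffs(u, i):
    # a_i, b_i, c_i, d_i from the knot vector
    h = u[i + 1] - u[i]
    a = h * h / ((u[i + 1] - u[i - 1]) * (u[i + 1] - u[i - 2]))
    b = h * h / ((u[i + 2] - u[i]) * (u[i + 3] - u[i]))
    c = h / (u[i + 1] - u[i - 2])
    d = h / (u[i + 3] - u[i])
    return a, b, c, d

def basis(u, i, v):
    """B_{i,0..3}(v) on the segment [u_i, u_{i+1}]."""
    ai, bi, _, _ = coeffs(u, i)
    _, bm, _, dm = coeffs(u, i - 1)   # b_{i-1}, d_{i-1}
    a1, _, c1, _ = coeffs(u, i + 1)   # a_{i+1}, c_{i+1}
    w = 1.0 - v
    B0 = ai * w ** 3
    B1 = (1 - ai - bm) * w ** 3 + 3 * (1 - dm) * w ** 2 * v + 3 * c1 * w * v ** 2 + a1 * v ** 3
    B2 = bm * w ** 3 + 3 * dm * w ** 2 * v + 3 * (1 - c1) * w * v ** 2 + (1 - a1 - bi) * v ** 3
    B3 = bi * v ** 3
    return B0, B1, B2, B3

u = [0.0, 1.0, 2.0, 3.5, 4.0, 5.5, 6.0, 7.5, 8.0, 9.0]  # illustrative knots
for i in (3, 4, 5):
    for v in (0.0, 0.25, 0.5, 0.75, 1.0):
        assert abs(sum(basis(u, i, v)) - 1.0) < 1e-12   # partition of unity
```

Because the coefficients of (1 − v)³, (1 − v)²v, (1 − v)v² and v³ sum to 1, 3, 3 and 1 respectively, the four basis functions always sum to ((1 − v) + v)³ = 1; this holds for any strictly increasing knot vector, not just the one above.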
3 Cubic NURBS Interpolation Curve
In this section, by choosing the data P_i and weights ω_i in (1), we develop cubic NURBS interpolation curves with local shape parameters.

Theorem 1. Given interpolation data Q_k ∈ R^d (d ≥ 1), k = 1, 2, ..., n, let

P_{3k−3} = Q_k − α_k T_k,   P_{3k−2} = Q_k,   P_{3k−1} = Q_k + β_k T_k,   (2)

ω_{3k−3} = β_k b_{3k−1},   ω_{3k−2} > 0,   ω_{3k−1} = α_k a_{3k}.   (3)

Then, using the data P_j, ω_j (j = 0, 1, ..., 3n − 1), the cubic NURBS curve (1) interpolates Q_k = R(u_{3k}), k = 1, 2, ..., n.
Proof. According to (1), evaluating at v = 0 on [u_{3k}, u_{3k+3}],

Σ_{j=3k−3}^{3k} ω_j B_{3k,j−3k+3}(0) P_j
= ω_{3k−3} a_{3k} P_{3k−3} + ω_{3k−2}(1 − a_{3k} − b_{3k−1}) P_{3k−2} + ω_{3k−1} b_{3k−1} P_{3k−1}
= [ω_{3k−3} a_{3k} + ω_{3k−2}(1 − a_{3k} − b_{3k−1}) + ω_{3k−1} b_{3k−1}] Q_k − (ω_{3k−3} a_{3k} α_k − ω_{3k−1} b_{3k−1} β_k) T_k
= [ω_{3k−3} a_{3k} + ω_{3k−2}(1 − a_{3k} − b_{3k−1}) + ω_{3k−1} b_{3k−1}] Q_k.

Therefore

R(u_{3k}) = Σ_{j=3k−3}^{3k} ω_j B_{3k,j−3k+3}(0) P_j / Σ_{j=3k−3}^{3k} ω_j B_{3k,j−3k+3}(0) = Q_k.

The knots u_i (i = 1, 2, ..., m + 3) can be chosen to be uniformly or non-uniformly spaced; several parametrization strategies can be used, see [14]. Much flexibility is offered by (2) and (3), and we give some schemes as follows. Let D_k = ω_{3k−3} a_{3k} + ω_{3k−2}(1 − a_{3k} − b_{3k−1}) + ω_{3k−1} b_{3k−1}. A straightforward computation gives
R′(u_{3k}) = (3 a_{3k} d_{3k−1} α_k β_k / (h_{3k} D_k)) T_k,   k = 1, 2, ..., n.
Therefore, T_k is tangent to the curve R(u) at Q_k, and we can choose T_k to adjust the behavior of R(u). For example, we can take

T_1 = Q_2 − Q_1,
T_k = Q_{k+1} − Q_{k−1},   k = 2, 3, ..., n − 1,   (4)
T_n = Q_n − Q_{n−1}.

The weight ω_{3k−2} > 0 at P_{3k−2} can be freely chosen. We can take

ω_{3k−2} = (ω_{3k−1} + ω_{3k−3}) / 2.   (5)

If the values of the knots are to be taken into account, we can instead take

ω_{3k−2} = (ω_{3k−3} a_{3k} + ω_{3k−1} b_{3k−1}) / (a_{3k} + b_{3k−1}),   a_{3k} + b_{3k−1} ≠ 0.

Adjusting the values of α_k and β_k changes the lengths ‖Q_k − P_{3k−3}‖ and ‖Q_k − P_{3k−1}‖ correspondingly, so the choice of α_k and β_k affects the shape of the curve. We restrict α_k and β_k by

0 ≤ α_k ≤ ‖Q_k − Q_{k−1}‖ / (2‖T_k‖),   k = 2, 3, ..., n,   (6)
0 ≤ β_k ≤ ‖Q_{k+1} − Q_k‖ / (2‖T_k‖),   k = 1, 2, ..., n − 1,   (7)
0 ≤ α_1 ≤ β_1,   0 ≤ β_n ≤ α_n.   (8)

Further,

‖P_0 − Q_1‖ ≤ (1/2)‖Q_2 − Q_1‖,
‖P_{3k−3} − Q_k‖ ≤ (1/2)‖Q_k − Q_{k−1}‖,   k = 2, 3, ..., n,
‖P_{3k−1} − Q_k‖ ≤ (1/2)‖Q_{k+1} − Q_k‖,   k = 1, 2, ..., n − 1,
‖P_{3n−1} − Q_n‖ ≤ (1/2)‖Q_n − Q_{n−1}‖.

In general, we take

α_k = ‖Q_k − Q_{k−1}‖ / (λ_k‖T_k‖),   k = 2, 3, ..., n,
β_k = ‖Q_{k+1} − Q_k‖ / (λ_k‖T_k‖),   k = 1, 2, ..., n − 1,
α_1 = β_1,   β_n = α_n,

where λ_k ≥ 2 (k = 1, 2, ..., n − 1). Therefore,

‖Q_1 − P_0‖ = (1/λ_1)‖Q_2 − Q_1‖,
‖P_{3n−1} − Q_n‖ = (1/λ_{n−1})‖Q_n − Q_{n−1}‖,
‖P_{3k−1} − Q_k‖ = ‖Q_{k+1} − P_{3k}‖ = (1/λ_k)‖Q_{k+1} − Q_k‖,   k = 1, 2, ..., n − 1.
Remark: As λ_k increases, P_{3k−1} and P_{3k} move correspondingly closer to Q_k and Q_{k+1}, and the curve R(u) (u ∈ [u_{3k}, u_{3k+3}]) approaches the segment Q_k Q_{k+1}. Therefore the parameter λ_k controls the shape of the curve; λ_k is a local parameter, and we can change λ_k so as to achieve the desired curve. It is easy to see that the interpolation curve R(u) is C² continuous at a general node; if the interpolation node u_{3k} is a double node, then the curve is C¹ continuous at this node. Figures 1 and 2 show a planar data interpolation curve with uniform knots; the tangent vectors T_k and the weights ω_{3k−2} are taken as in (4) and (5), and the shape parameter is λ_k = 4 and 10, respectively.
L. Chen, X. Zhang, and M. Li
Fig. 1. Planar data interpolation curve (λ_k = 4)
Fig. 2. Planar data interpolation curve (λ_k = 10)
Figures 3 and 4 show space data interpolation curves with uniform knots. The interpolation data are Q_k = (e^{−0.1k} cos k, e^{−0.1k} sin k), k = 1, 2, ..., 24; the tangent vectors T_k, the weights ω_{3k−2}, and λ_k are taken as in Figure 1.
Fig. 3. Space data interpolation curve (λ_k = 4)
Fig. 4. Space data interpolation curve (λ_k = 10)
4 Convexity of Cubic NURBS Interpolation Curve
From the above, we have R(u_{3k}) = Q_k and R(u_{3k+3}) = Q_{k+1}. In this section we discuss the convexity of R(u) (u ∈ [u_{3k}, u_{3k+3}]) under the condition (Q_{k+1} − Q_k)^T R'(u) ≥ 0; at the same time, we discuss the convexity of the combined curve.

Theorem 2. Let all α_i and β_i satisfy (6), (7) and (8), and let the norm be the Euclidean norm. If (Q_{k+1} − Q_k)^T T_i ≥ 0 (i = k, k + 1), then

(Q_{k+1} − Q_k)^T R'(u) ≥ 0, u ∈ [u_{3k}, u_{3k+3}], 1 ≤ k ≤ n − 1.

Proof: According to (2), if (Q_{k+1} − Q_k)^T (P_i − P_j) ≥ 0 (3k − 3 ≤ j ≤ i ≤ 3k + 2), then

(Q_{k+1} − Q_k)^T R'(u) ≥ 0, u ∈ [u_{3k}, u_{3k+3}].

According to (3), if (Q_{k+1} − Q_k)^T T_i ≥ 0 (i = k, k + 1) and (Q_{k+1} − Q_k)^T (Q_{k+1} − Q_k − α_{k+1} T_{k+1} − β_k T_k) ≥ 0, then

(Q_{k+1} − Q_k)^T R'(u) ≥ 0, u ∈ [u_{3k}, u_{3k+3}].

According to (6), (7) and (8), we have

(Q_{k+1} − Q_k)^T (α_{k+1} T_{k+1} + β_k T_k) ≤ ||Q_{k+1} − Q_k||_2 (α_{k+1} ||T_{k+1}||_2 + β_k ||T_k||_2) ≤ ||Q_{k+1} − Q_k||_2^2.

So the theorem is proven.

From the proof of Theorem 2, we have the following corollary on the convexity of the combined cubic NURBS interpolation curve:

Corollary 3. Suppose T_k = (t_{k,1}, t_{k,2}, ..., t_{k,d}), Q_k = (q_{k,1}, q_{k,2}, ..., q_{k,d}), R(u) = (r_1(u), r_2(u), ..., r_d(u)), u ∈ [u_{3k}, u_{3k+3}], 1 ≤ k ≤ n − 1. If α_i ≥ 0, β_i ≥ 0, (q_{k+1,j} − q_{k,j}) t_{i,j} ≥ 0 (i = k, k + 1), and α_{k+1}|t_{k+1,j}| + β_k|t_{k,j}| ≤ |q_{k+1,j} − q_{k,j}|, then

(q_{k+1,j} − q_{k,j}) r'_j(u) ≥ 0, 1 ≤ j ≤ d.

Obviously, when all T_i are taken according to (4), if q_{i+1,j} − q_{i,j} ≥ 0 (≤ 0) (i = k − 1, k, k + 1), we can take α_i, β_i (i = k, k + 1) so that r'_j(u) ≥ 0 (≤ 0), u ∈ [u_{3k}, u_{3k+3}].
5 Conclusion
In this paper, a local interpolation method using cubic non-uniform rational B-spline curves is presented. The generated interpolation curve is continuous and has a local shape parameter. Based on the convexity of cubic non-uniform rational B-spline curves, the convexity of the given interpolation curves is discussed.
References

1. Zhang, Y., Duan, Q., Twizell, E.H.: Convexity control of a bivariate rational interpolating spline surfaces. Computers and Graphics 31(5), 679–687 (2007)
2. Liu, Z., Tan, J.-q., Chen, X.-y., Zhang, L.: The conditions of convexity for Bernstein-Bézier surfaces over triangles. Computer Aided Geometric Design 27(6), 421–427 (2010)
3. Convexity preserving scattered data interpolation using Powell-Sabin elements. Computer Aided Geometric Design 26(7), 779–796 (2009)
4. Zhang, Y., Duan, Q., Twizell, E.H.: Convexity control of a bivariate rational interpolating spline surfaces. Computers and Graphics 31(5), 679–687 (2007)
5. Liu, Z., Tan, J.-q., Chen, X.-y., Zhang, L.: The conditions of convexity for Bernstein-Bézier surfaces over triangles. Computer Aided Geometric Design 27(6), 421–427 (2010)
6. de Boor, C., Swartz, B.: Piecewise monotone interpolation. J. Approx. Theory 21, 411–416 (1977)
7. Costantini, P.: On monotone and convex spline interpolation. Math. Comp. 46, 203–214 (1986)
8. Fritsch, F.N., Carlson, R.E.: Monotone piecewise cubic interpolation. SIAM J. Numer. Anal. 17, 238–246 (1980)
9. Manni, C., Sablonniere, P.: Monotone interpolation of order 3 by cubic splines. IMA J. Numer. Anal. 17, 305–320 (1997)
10. Passow, E.: Monotone quadratic spline interpolation. J. Approx. Theory 19, 143–147 (1977)
11. Schumaker, L.L.: On shape preserving quadratic spline interpolation. SIAM J. Numer. Anal. 20, 854–864 (1983)
12. Goodman, T.N.T., Ong, B.H.: Shape preserving interpolation by space curves. Comput. Aided Geom. Des. 15, 1–17 (1997)
13. Farin, G.: NURBS curves and surfaces. A.K. Peters, Wellesley (1995)
14. Hoschek, J., Lasser, D.: Fundamentals of computer aided geometric design. A.K. Peters, Wellesley (1993)
15. Piegl, L., Tiller, W.: The NURBS book. Springer, New York (1995)
Optimal Dividend Problem for the Compound Binomial Model with Capital Injections

Yali He 1 and Xiuping Zhao 2

1 Institute of Science, Hebei Polytechnic University, Tangshan 063009, China
[email protected]
2 School of Computer Science and Software, Hebei University of Technology, Tianjin 300401, China
[email protected]
Abstract. In this paper we discuss the optimal dividend problem for the compound binomial risk model with capital injections. The objective is to maximize the difference between the expected accumulated discounted dividend payments and the expected accumulated discounted capital injections. We derive the Bellman equation for the problem and show that the optimal strategy is a band strategy. Finally, by virtue of the Bellman equation, the characterization of the optimal strategies and a method for computing them are presented.

Keywords: Compound binomial risk model, Bellman equation, optimal dividend strategy, capital injection.
1 Introduction

The first risk model with dividends in the literature was proposed by De Finetti (1957) [1]; the introduction of dividends broke the pattern in which insurance companies were concerned only with the ruin probability. De Finetti's problem has therefore attracted great interest, and scholars such as Gerber and Schmidli have done deep and extensive research in this area. In the problem proposed in De Finetti [1], dividends are paid until ruin; that is, under the optimal dividend strategy bankruptcy must occur when the insurance company has a negative surplus for the first time. However, Borch [2] proposed that a negative surplus does not necessarily mean bankruptcy: when signs of bankruptcy appear, measures such as capital injection or merger should be taken. Dickson and Waters [3] also proposed that when the company has a negative surplus, the shareholders should inject capital in order to maintain the company's operations. The capital injection problem has attracted the attention of several scholars; Schmidli [4] studies the optimal dividend strategy of the Cramer-Lundberg model with capital injections on the basis of Schmidli [5]. Most dividend problems in risk theory concern continuous-time models. In fact, introducing dividends into the discrete-time risk model has inherent value, since the practical operation of insurance companies is in discrete time, not continuous time. Currently, the most discussed classical discrete-time risk model is the compound binomial model, proposed by Gerber [6]. Gerber [6] gives some conclusions on the ruin probability of the compound binomial risk model and the joint probability

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 496–503, 2010. © Springer-Verlag Berlin Heidelberg 2010
distribution of the ruin time and the surplus before ruin. Albrecher and Thonhauser [7] and Avanzi [8] summarize optimality results for dividend problems in insurance. Tan and Yang [9] study the compound binomial model with randomized decisions on paying dividends. Yang and Zhang [10] study a multi-layer dividend strategy in a Sparre Andersen model. However, research on the compound binomial risk model has mostly focused on the ruin probability, with little discussion of the optimal dividend strategy. This article discusses the optimal dividend problem of the compound binomial risk model with capital injections. In the following, we describe the compound binomial risk model. Suppose there is at most one claim per unit time; let ξ_n = 1 denote a claim,
Pr(ξ_n = 1) = p, 0 < p < 1, and ξ_n = 0 denote no claim, Pr(ξ_n = 0) = q = 1 − p; claims in different time intervals are independent, i.e. ξ_1, ξ_2, ..., ξ_n, ... is an i.i.d. sequence. The insurance company receives one unit of premium per unit time interval. The number of claims N(n) = ξ_1 + ξ_2 + ... + ξ_n follows a binomial distribution, and the surplus process is given by

X_n = x + n − Σ_{i=1}^{n} ξ_i Y_i, or equivalently X_n = x + n − Σ_{i=1}^{N(n)} Y_i,

where the initial surplus x = X_0 is a nonnegative integer, and the individual claims {Y_i : i ∈ N} are i.i.d. random variables taking only positive integer values and independent of N(n). Let f(i) = Pr(Y_1 = i) be the common distribution of the individual claims, with f(0) = 0 and EY_i = μ < ∞, and let F(y) be its distribution function. Suppose the shareholders inject capital when the surplus is in deficit, and the injection makes up the deficit regardless of its size, so that the company resumes operating from 0. A strategy (D, Z) satisfying Pr(X_t^{(D,Z)} ≥ 0, ∀t ≥ 0) = 1 is called a feasible strategy, i.e. the injection and dividend policy must keep the surplus nonnegative. Let Π denote the set of all feasible strategies, Π = {Π_n = (D_n, Z_n), n ∈ N}. Under a feasible strategy (D, Z) the time of bankruptcy is ∞, and X_n^{(D,Z)} = X_n − D_n + Z_n. The injection process can be understood as Z_n = (D_n − X_n) ∨ 0, so we only need to optimize the dividend strategy. Let Π_n = D_n − φZ_n be the net income in the n-th unit time interval, where φ > 1 is a penalty factor, which can be interpreted as the transaction cost of an injection. Under a feasible strategy (D, Z), the controlled surplus process can be described as

X_{n+1}^{(D,Z)} = X_n − D_n + Z_n + 1 − ξ_{n+1} Y_{n+1}.

The cumulative net income function of the shareholders corresponding to a feasible strategy is

V^{(D,Z)}(x) = E[Σ_{k=0}^{∞} Π_k δ^k | X_0 = x] = E[Σ_{k=0}^{∞} D_k δ^k | X_0 = x] − φ E[Σ_{k=0}^{∞} Z_k δ^k | X_0 = x],

where 0 < δ < 1 is the discount factor. Our purpose is to maximize V^{(D,Z)}(x), so the value function is defined as

V(x) = sup_{(D,Z)∈Π} V^{(D,Z)}(x).
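As an illustration of the controlled dynamics above, the following Monte-Carlo sketch simulates the surplus with injections that cover any deficit and, as a stand-in for the optimal control, a simple barrier dividend strategy. The barrier level, the claim distribution, and the exact order of the steps within a period are illustrative assumptions, not the paper's optimal (band) strategy:

```python
import random

# Monte-Carlo sketch of the controlled surplus: premium of 1 per period,
# a claim Y with probability p, an injection that restarts the company
# from 0, and an ASSUMED barrier dividend strategy D_n = (X_n - b) v 0.
random.seed(0)
p = 0.4                                  # claim probability per period
b = 5                                    # assumed dividend barrier
X = 2                                    # initial surplus x
dividends = injections = 0
for _ in range(1000):
    X += 1                               # one unit of premium
    if random.random() < p:              # xi_{n+1} = 1: a claim occurs
        X -= random.choice([1, 2, 3])    # toy claim sizes Y_i, f(0) = 0
    Z = max(-X, 0)                       # shareholders cover any deficit
    injections += Z
    X += Z                               # company resumes from 0
    D = max(X - b, 0)                    # pay out everything above the barrier
    dividends += D
    X -= D
    assert X >= 0                        # feasibility: surplus stays nonnegative
```

Averaging the discounted dividends minus φ times the discounted injections over many such paths estimates V^{(D,Z)}(x) for this particular (suboptimal) strategy.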
2 Properties of the Value Function

The value function defined above has some nice properties.

Lemma 1. V(x) is strictly increasing, and for all x ≥ y ≥ 0, V(x) ≥ V(y) + x − y.

Proof: Suppose Π is any strategy with initial value y. For initial capital x ≥ y, set Π': Π'_0 = Π_0 + x − y, Π'_n = Π_n, n ≥ 1. Then we have

X'_1 = X'_0 + 1 − ξ_1 Y_1 − Π'_0 = x − (Π_0 + x − y) + 1 − ξ_1 Y_1 = y − Π_0 + 1 − ξ_1 Y_1 = X_1.

Since Π'_n = Π_n for n ≥ 1,

X'_n = X'_{n−1} + 1 − ξ_n Y_n − Π'_n = X_{n−1} + 1 − ξ_n Y_n − Π_n = X_n.

Thus V(x) ≥ V^{Π'}(x) = V^{Π}(y) + x − y. Taking the supremum on both sides, we obtain V(x) ≥ V(y) + x − y.

Lemma 2. For all x ≥ 0 we have

x + δ/(1 − δ) − φμδ(1 − p)/((1 − δ)(1 − δp)) ≤ V(x) ≤ x + δ/(1 − δ).
Proof: For the shareholders, given the initial value x, the best case (maximum gain) is that all premiums are paid out as dividends, so

V(x) ≤ x + Σ_{k=1}^{∞} 1 · δ^k = x + δ/(1 − δ).

The worst case is to inject capital for all the claims. Since the time T_k at which the k-th claim occurs follows the negative binomial distribution NB(k, p), we have

E[δ^{T_k}] = Σ_n δ^{k+n} C_{k+n−1}^{n} p^k q^n = (δp)^k (1 − δq)^{−k},   E[Σ_{k=1}^{∞} Y_k δ^{T_k}] = μ (δ/(1 − δ) − δp/(1 − δp)).

So

V(x) ≥ x + δ/(1 − δ) − φμ (δ/(1 − δ) − δp/(1 − δp)) = x + δ/(1 − δ) − φμδ(1 − p)/((1 − δ)(1 − δp)).
3 Optimal Strategy

3.1 The Bellman Equation and the Optimal Strategy

A dynamic programming principle for discrete-time risk models under admissible controls is given in Schmidli [5]; it is also called the Bellman equation. We state it here.
Lemma 3. Suppose V_T(x) is bounded. For the dynamic programming function V_T(x) we have

V_T(x) = sup_{Π'∈Π} { r(x, Π') + e^{−δ} E[V_{T−1}(f(x, Π', Y))] },   (1)

where Y and the Y_n are identically distributed random variables, V_{T−1}(x) is the remaining value of T − 1 time periods after one unit time period, and r(x, Π') is the gain of the shareholders with initial value x. If T = ∞, the dynamic programming principle becomes

V(x) = sup_{Π'∈Π} { r(x, Π') + e^{−δ} E[V(f(x, Π', Y))] }.   (2)

Lemma 4. Suppose T = ∞ and V(x) < ∞. If for any x there exists a measurable function Π(x) maximizing the right side of (2), and

lim_{n→∞} sup_{Π'∈Π} E[Σ_{k=n}^{∞} r(X'_k, Π'_k) e^{−δk}] = 0,   (3)

where X'_{n+1} = f(X'_n, Π'_n, Y_{n+1}), then setting Π_n = D(X_n) we have

V^Π(x) = V(x).

For the compound binomial risk model, if x < 0 there is only injection and no dividend, and V(x) = V(0) + φx. If x ≥ 0, the injection is Z = 0, and (2) can be rewritten as

V(x) = sup_{0≤D≤x} { D + δp Σ_{j=1}^{∞} f(j) V(x + 1 − j − D) + δq V(x + 1 − D) }.   (4)
Since D takes only finitely many values, the supremum is attained at some point; i.e., for any x ∈ N there exists D*(x) such that

V(x) = D* + δp Σ_{j=1}^{∞} f(j) V(x + 1 − j − D*) + δq V(x + 1 − D*)
     = D* + δp Σ_{j=1}^{x+1−D*} f(j) V(x + 1 − j − D*) + δp Σ_{j=x+2−D*}^{∞} f(j) [V(0) + φ(x + 1 − j − D*)] + δq V(x + 1 − D*).
Theorem 5. (i) The strategy D_n = D*(X_n) is an optimal strategy.

Proof: Firstly, for any strategy we have

Σ_{k=n}^{∞} Π_k δ^k = (1 − δ) Σ_{m=n}^{∞} Σ_{k=n}^{m} Π_k δ^m.

Define the pseudo-strategy

Π'_k = 0 for k ≤ n, Π'_k = (X'_k 1_{(ξ_k = 1)} − 1) ∨ 0 for k > n.   (5)

Then

Σ_{k=n}^{m} Π_k δ^k ≤ φ|X'_n| δ^n + φ Σ_{k=n+1}^{m} δ^k (1 + y_k),

E[Σ_{k=n}^{m} Π_k δ^k] ≤ φ E|X'_n| δ^n + φ E[Σ_{k=n+1}^{m} δ^k (1 + y_k)] = φ E|X'_n| δ^n + φ(1 + μ) δ^{n+1}/(1 − δ).

Thus

E[Σ_{k=n}^{m} Π_k δ^k] ≤ φ δ^n [x + n(1 + φpμ)] + φ(1 + μ) δ^{n+1}/(1 − δ),

and therefore

lim_{n→∞} E[Σ_{k=n}^{∞} Π_k δ^k] = 0.

By Lemma 4, D_n = D*(X_n) is an optimal strategy.

(ii) For any x − D*(x) ≤ y < x we have

V(x) = V(y) + (x − y), D*(y) = D*(x) − (x − y), D*[x − D*(x)] = 0.

Proof: For any initial value x ∈ Z_+, D_0 = D*(X_0) = D*(x) ≤ x, so 0 ≤ x − D*(x) ≤ x. Under this strategy, for any x − D*(x) ≤ y < x, a possible payout with initial value y is D*(x) − (x − y), and for initial value y we have

V(y) = D(y) + δp Σ_{j=1}^{∞} f(j) V[y + 1 − j − D(y)] + δq V[y + 1 − D(y)]
     ≥ D*(x) − (x − y) + δp Σ_{j=1}^{∞} f(j) V[x + 1 − j − D*(x)] + δq V[x + 1 − D*(x)]
     = V(x) − (x − y).

By Lemma 1 we have V(x) ≥ V(y) + (x − y); thus V(x) = V(y) + (x − y). Taking y = x − D*(x) gives V(x) = V[x − D*(x)] + D*(x).

Since D* maximizes the right side of (4), D*(y) = D*(x) − (x − y). Taking y = x − D*(x), we get D*[x − D*(x)] = D*(x) − [x − x + D*(x)] = 0.

(iii) The set {x ∈ N : D*(x) = 0} is bounded, i.e. x_0 = sup{x : D*(x) = 0} is a finite number.

Proof: By the boundedness in Lemma 2, for x satisfying D*(x) = 0 we have

V(x) = δp Σ_{j=1}^{∞} f(j) V(x + 1 − j) + δq V(x + 1)
     ≤ δp Σ_{j=1}^{∞} f(j) (x + 1 − j + δ/(1 − δ)) + δq (x + 1 + δ/(1 − δ))
     = δ (x + 1 + δ/(1 − δ)) − δpμ.

So

x + δ/(1 − δ) − φμδ(1 − p)/((1 − δ)(1 − δp)) ≤ δ (x + 1 + δ/(1 − δ)) − δpμ,

i.e.

x ≤ φμδ(1 − p)/((1 − δ)^2 (1 − δp)) − δpμ/(1 − δ).

That is, the set {x : D*(x) = 0} is bounded. Since D*(0) = 0, the set is nonempty; thus there exists x_0 ≥ 0 such that x_0 = sup{x : D*(x) = 0}.

(iv) For any x ≥ x_0, V(x) = V(x_0) + x − x_0.

Proof: For any x ≥ x_0, by (ii) we have D*[x − D*(x)] = 0, and thus x − D*(x) ≤ x_0. Taking y = x_0 in D*(y) = D*(x) − (x − y) gives 0 = D*(x_0) = D*(x) − (x − x_0), so D*(x) = x − x_0. By (ii), V(x) = V[x − D*(x)] + D*(x), hence V(x) = V(x_0) + x − x_0.

This shows that when x ≥ x_0, the part of the surplus surpassing x_0 is paid out as dividends.
3.2 The Characterization of the Optimal Strategies

Theorem 6. For all x ≥ 1 we have D*(x) = sup{n ∈ N : V(x) = V(x − n) + n}.

Proof: When n ≤ D*(x), x − n ≥ x − D*(x), i.e. x − D*(x) ≤ x − n < x, and from (ii) of Theorem 5 we have

V(x) = V(x − n) + [x − (x − n)] = V(x − n) + n.

In the following, we consider the case n > D*(x), for example n = D*(x) + 1. Taking into account that D*(x) maximizes the right side of (4), if

V(x) = V[x − D*(x) − 1] + D*(x) + 1,

then

D*(x) = D*[x − D*(x) − 1] + D*(x) + 1.

This is a contradiction; thus V(x) ≠ V[x − D*(x) − 1] + D*(x) + 1, and since V(x) ≥ V[x − D*(x) − 1] + D*(x) + 1 always holds by Lemma 1, we obtain

V(x) > V[x − D*(x) − 1] + D*(x) + 1.

For m ≥ 2, when D*(x) + m ≤ x, i.e. 2 ≤ m ≤ x − D*(x), we have

V(x) ≥ V[x − D*(x) − m] + D*(x) + m,
V[x − D*(x) − 1] ≥ V[x − D*(x) − 1 − (m − 1)] + m − 1 = V[x − D*(x) − m] + m − 1,

and hence

V(x) > V[x − D*(x) − 1] + D*(x) + 1 ≥ V[x − D*(x) − m] + m − 1 + D*(x) + 1 = V[x − D*(x) − m] + D*(x) + m.

Thus, for any n = D*(x) + m ≤ x (m ≥ 1), we have V(x) > V(x − n) + n.
4 Numerical Calculation of the Optimal Dividend Strategy for the Compound Binomial Model

By Theorem 5 and the properties of the compound binomial model, formula (4) can be rewritten as

V(x) = max{ V(x − 1) + 1, δp Σ_{j=1}^{∞} f(j) V(x + 1 − j) + δq V(x + 1) }.   (6)

Denote the first dividend point by n_0 = inf{n ∈ N : D(n) = 1}; by Theorem 5 we have 1 ≤ n_0 < ∞. For 0 ≤ n < n_0 we have

δq V(n + 1) = [1 − δp f(1)] V(n) − δp Σ_{j=0}^{n−1} f(n + 1 − j) V(j) − δp [1 − F(n + 1)] V(0) + δp φ G(n),   (7)

where

F(n) = Σ_{j=1}^{n} f(j), G(n) = Σ_{j=n+2}^{∞} (j − n − 1) f(j), G(0) = μ − 1, G(n) = G(0) − n + Σ_{k=1}^{n} F(k).

Take ρ_0 = 1 and define the recursion

δq ρ_{n+1} = [1 − δp f(1)] ρ_n − δp Σ_{j=0}^{n−1} f(n + 1 − j) ρ_j;

then

ρ_1 = ((1 − δp f(1))/(δq)) ρ_0 > ((p + q − δp)/(δq)) ρ_0 = ((q + (1 − δ)p)/(δq)) ρ_0 > ρ_0.

By induction, ρ_n is strictly increasing. By recursive computation we obtain

V(n + 1) = ρ_{n+1} V(0) − (p/q) [Σ_{j=1}^{n+1} ρ_{n+1−j} F(j)] V(0) + φ (p/q) Σ_{j=0}^{n} ρ_{n−j} G(j),

i.e.

V(n) = [ρ_n − (p/q) Σ_{j=1}^{n} ρ_{n−j} F(j)] V(0) + φ (p/q) Σ_{j=0}^{n−1} ρ_{n−1−j} G(j).   (8)

We only need to maximize V(0); thus we look for V(n_0) = V(n_0 − 1) + 1 and the n_0 maximizing V(0). Since
V(n_0) − V(n_0 − 1) = 1 must hold, we can solve for V(0): writing (8) as V(n) = A_n V(0) + B_n, with A_n = ρ_n − (p/q) Σ_{j=1}^{n} ρ_{n−j} F(j) and B_n = φ (p/q) Σ_{j=0}^{n−1} ρ_{n−1−j} G(j), the condition reads

(A_{n_0} − A_{n_0−1}) V(0) + (B_{n_0} − B_{n_0−1}) = 1, i.e. V(0) = (1 − B_{n_0} + B_{n_0−1}) / (A_{n_0} − A_{n_0−1}).

Once n_0 is determined, similarly we can determine n_1 = inf{n > n_0 : D(n) > 0}. Replacing V(0) in (8) with V(n_0 + 1), we have V(n) = q_n V(n_0 + 1) + k_n; from V(n_1) = V(n_1 − 1) + 1 we can determine V(n_0 + 1). We can also determine every possible n_1 by selecting the n_1 that maximizes V(n_0 + 1). From the above discussion, we conclude that the optimal dividend strategy of the compound binomial risk model with capital injections is a band strategy.
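The closed-form recursion above can be cross-checked by brute force: iterating the Bellman operator of (4)/(6) on a truncated state space converges to V, and reading off the maximizers gives D*. The parameters and the claim distribution below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Brute-force value iteration on the Bellman equation (4).  All parameter
# values and the claim distribution f are illustrative assumptions.
p, q = 0.4, 0.6                          # claim / no-claim probabilities
delta = 0.9                              # discount factor
phi = 1.5                                # injection penalty factor (> 1)
claims = {1: 0.5, 2: 0.3, 3: 0.2}        # f(j) = Pr(Y = j), f(0) = 0
X_MAX = 40                               # truncation of the state space

def V_ext(V, y):
    """Extend V below 0 (deficit: V(y) = V(0) + phi*y) and above X_MAX
    (linear growth V(x) = V(x0) + x - x0 beyond the last dividend point)."""
    if y < 0:
        return V[0] + phi * y
    if y > X_MAX:
        return V[X_MAX] + (y - X_MAX)
    return V[y]

def rhs(V, x, D):
    # D + delta*p*sum_j f(j) V(x+1-j-D) + delta*q*V(x+1-D), as in (4)
    return D + delta * q * V_ext(V, x + 1 - D) + delta * p * sum(
        fj * V_ext(V, x + 1 - j - D) for j, fj in claims.items())

V = np.zeros(X_MAX + 1)
for _ in range(300):                     # contraction with factor delta
    V = np.array([max(rhs(V, x, D) for D in range(x + 1))
                  for x in range(X_MAX + 1)])

D_star = [max(range(x + 1), key=lambda D: rhs(V, x, D))
          for x in range(X_MAX + 1)]
```

The resulting V respects Lemma 1 (V(x + 1) ≥ V(x) + 1) and the upper bound of Lemma 2, and D_star exposes the dividend points numerically.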
References

1. De Finetti, B.: Su un'impostazione alternativa della teoria collettiva del rischio. Transactions of the XVth International Congress of Actuaries 2, 433–443 (1957)
2. Borch, K.: The theory of risk. Journal of the Royal Statistical Society, Series B 29, 423–467 (1967)
3. Dickson, D.C.M., Waters, H.R.: Some optimal dividend problems. ASTIN Bulletin 34, 49–74 (2004)
4. Schmidli, H.: Optimal dividend strategies in a Cramer-Lundberg model with capital injections. Insurance: Mathematics and Economics 5, 1–9 (2008)
5. Schmidli, H.: Stochastic Control in Insurance, pp. 3–20. Springer, London (2008)
6. Gerber, H.U.: Mathematical fun with the compound binomial process. ASTIN Bulletin 18, 161–168 (1988)
7. Albrecher, H., Thonhauser, S.: Optimality results for dividend problems in insurance. RACSAM Revista de la Real Academia de Ciencias, Serie A, Matematicas 103(2), 295–320 (2009)
8. Avanzi, B.: Strategies for dividend distribution: A review. North American Actuarial Journal 13, 217–251 (2009)
9. Tan, J., Yang, X.: The compound binomial model with randomized decisions on paying dividends. Insurance: Mathematics and Economics 39, 1–18 (2006)
10. Yang, H., Zhang, Z.: Gerber-Shiu discounted penalty function in a Sparre Andersen model with multi-layer dividend strategy. Insurance: Mathematics and Economics 42(3), 984–991 (2008)
The Research of Logical Operators Based on Rough Connection Degree

Yafeng Yang 1, Jun Xu 1, and Baoxiang Liu 2

1 College of Light Industry, Hebei Polytechnic University, Tangshan 063009, China
2 College of Science, Hebei Polytechnic University, Tangshan 063009, China
[email protected]
Abstract. From two-valued logic to fuzzy logic, propositional logic has developed rapidly. This paper aims to construct a new kind of propositional logic whose truth values take the form of connection numbers. With the basic method of fuzzy logic, the value of a proposition in the form of a rough connection degree is obtained, and the three logical operators of disjunction, conjunction, and negation are constructed. The three logical operators satisfy the seven laws of involution, idempotence, commutativity, associativity, distributivity, absorption, and De Morgan. It is proved that the algebra constructed by the three operators is a soft algebra.

Keywords: Set pair analysis; connection degree; fuzzy logic; RSP logic.
1 Introduction

In traditional two-valued logic there are only two possible truth values, true or false. But the objective world can seldom be described so strictly: there is no absolute right or wrong. Fuzzy mathematics [1] provided a tool to address this, and fuzzy logic [2, 3] mapped fuzzy propositions to the closed interval [0, 1], which greatly expanded the range of truth values of propositions and reflects the characteristics of objective things more faithfully. However, the membership function is not defined on the interval [-1, 0], so many difficulties occur when solving some fuzzy decision-making problems. To solve this problem, Professor Shi Kaiquan proposed the both-branch fuzzy set theory [4] and carried out a series of applications [5]. Liu Gang et al. constructed both-branch fuzzy logic [6] based on both-branch fuzzy sets. Whether single-branch or both-branch, fuzzy logic takes only the degree of truth or falsity into account when the truth value of a proposition is given. But in some cases it is difficult to judge whether a proposition is true or false, since the thing itself carries a great deal of uncertainty. The connection degree (in another sense, the connection number), a new form of value, can reflect certainty and uncertainty comprehensively. The authors constructed a new kind of connection number, the rough connection degree, which can reflect the system more objectively. This paper aims to construct a new kind of logic based on the rough connection degree.

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 504–511, 2010. © Springer-Verlag Berlin Heidelberg 2010
2 Rough Connection Degree

Set pair analysis [7] is a new theory that describes certainty and uncertainty with the IDC connection degree μ = a + bi + cj. Following the basic method of set pair analysis, if we regard the equivalence relations as system characters and the equivalence classes [8, 9] as elements, a new kind of set pair degree can be defined; we call it the rough connection degree (RCD) [10].

Definition 1: Given a knowledge base K = (U, ℜ) and an equivalence relation R ∈ ind(K), the sets A, B ⊆ U make up a set pair H_R = (A, B). We define the knowledge connection degree of H_R = (A, B) as

μ_R(A, B) = S/N + (F/N) i + (P/N) j,   (1)

where

S = k(R(A ∩ B)), F = k(R̄A ∩ R̄B − R(A ∩ B)), P = k(R̄(A ∪ B) − R̄A ∩ R̄B), N = k(R̄(A ∪ B)),

and k(S) denotes the number of equivalence classes in S. Setting a = S/N, b = F/N, c = P/N gives μ_R(A, B) = a + bi + cj, where i ∈ [−1, 1] takes its value uncertainly, j = −1, 0 ≤ a, b, c ≤ 1, and a + b + c = 1.

In formula (1), S is the number of equivalence classes that are contained completely in both A and B; F is the number of equivalence classes that intersect both A and B but are not contained in both; P is the number of equivalence classes that intersect one of A and B but not the other; and N is the number of all equivalence classes that intersect A ∪ B. This is consistent with the basic idea of the IDC method in set pair analysis.
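Definition 1 can be computed directly by classifying each equivalence class according to how it meets A and B; the universe and the partition below are illustrative assumptions:

```python
# Direct computation of the rough connection degree (1): classify each
# equivalence class of U by how it meets A and B.  The partition used in
# the example is an illustrative assumption.
def rough_connection_degree(classes, A, B):
    A, B = set(A), set(B)
    S = F = P = N = 0
    for cls in map(set, classes):
        hits_a, hits_b = bool(cls & A), bool(cls & B)
        if not (hits_a or hits_b):
            continue                  # outside the upper approximation of A u B
        N += 1
        if cls <= A and cls <= B:
            S += 1                    # contained completely in both A and B
        elif hits_a and hits_b:
            F += 1                    # meets both, but not contained in both
        else:
            P += 1                    # meets exactly one of A, B
    return S / N, F / N, P / N        # (a, b, c): mu_R = a + b*i + c*j

classes = [{1, 2}, {3, 4}, {5, 6}, {7, 8}]   # partition of U = {1,...,8}
a, b, c = rough_connection_degree(classes, A={1, 2, 3}, B={1, 2, 4, 5})
```

For this partition, one class falls in each of the S, F, P categories, giving μ_R(A, B) = 1/3 + (1/3)i + (1/3)j.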
3 RSP Logical Operators

3.1 Basic Concepts

In order to describe things more objectively, comprehensively, and systematically, the basic knowledge of RSP (Rough Set Pair) logic is given as follows.

Definition 2: For an assertive sentence, if we can obtain its degrees of truth, falsity, and uncertainty, then the assertive sentence is an RSP proposition, denoted by capital letters A, B, ....
Definition 3: Let S be the set of RSP propositions. If the mapping μ : S → {μ | μ = a + bi + cj} satisfies

1) μ(A ∨ B) = μ(A) ∨ μ(B),
2) μ(A ∧ B) = μ(A) ∧ μ(B),
3) μ(¬A) = ¬μ(A),
then the mapping μ is the truth value function on S, and μ(A) is called the truth value of the proposition A. When we give a truth value to an RSP proposition A, it is known as the assignment to the RSP proposition A. For two RSP propositions A and B, if μ(A) is identically equal to μ(B), we say that A and B are equivalent.

Definition 4: If the truth value of a proposition A is μ(A) = 1, then A is called an S-true proposition.

Definition 5: If the truth value of a proposition A is μ(A) = i, then A is called an S-uncertain proposition.

Definition 6: If the truth value of a proposition A is μ(A) = j, then A is called an S-false proposition.

The general situation is shown in the following table:

Table 1. RSP propositions and set pair postures

Order  a, b, c              RSP proposition                  Posture                      Division
1      a = c, b > a         Tiny equal proposition           Tiny equal posture           Equal posture
2      a = c, b = a         Weak equal proposition           Weak equal posture           Equal posture
3      a = c, a > b > 0     Strong equal proposition         Strong equal posture         Equal posture
4      a = c, b = 0         Quasi equal proposition          Quasi equal posture          Equal posture
5      a > c, b = 0         Quasi same proposition           Quasi same posture           Same posture
6      a > c, c > b         Strong same proposition          Strong same posture          Same posture
7      a > c, a > b > c     Weak same proposition            Weak same posture            Same posture
8      a > c, b > a         Tiny same proposition            Tiny same posture            Same posture
9      a < c, b = 0         Quasi equal proposition          Quasi equal posture          Contrary posture
10     a < c, 0 < b < a     Strong equal proposition         Strong equal posture         Contrary posture
11     a < c, b > a, b < c  Weak contrary proposition        Weak contrary posture        Contrary posture
12     a < c, b > c         Tiny contrary proposition        Tiny contrary posture        Contrary posture
13     c = 0, a > b         Uncertain same proposition       Uncertain same posture       Uncertain posture
14     c = 0, a < b         Uncertain uncertain proposition  Uncertain uncertain posture  Uncertain posture

Note: Nos. 1 to 12 are set pair postures under the condition a ≠ 0, c ≠ 0; when c = 0, b ≠ 0, a ≠ 0, they are set pair uncertain postures. RSP propositions and set pair postures are in bijection.
RSP propositional logic breaks through the traditional constraint that a proposition is either true or false and divides the truth values of propositions into 14 levels, giving a more objective description of things.

3.2 Logical Operators
The previous section pointed out that the truth value of an RSP proposition takes the form of a connection number, namely μ = a + bi + cj, in which a, b, c respectively stand for the degrees of truth, uncertainty, and falsity. Coincidentally, both-branch fuzzy logic provides two separate concepts, real degree and pseudo-degree: for a both-branch fuzzy proposition A, its real value T(A) is a fixed value between -1 and 1; if T(A) ∈ [0, 1], then T(A) is known as the true degree of the both-branch fuzzy proposition A, and if T(A) ∈ [-1, 0], then T(A) is known as the pseudo-degree of A. According to the basic ideas of set pair analysis, the truth value of a proposition cannot be expressed by a single number but should include three aspects: truth, falsity, and uncertainty. Therefore, for a set pair proposition, the true and false degrees exist at the same time and, together with the uncertainty, form a comprehensive representation of the truth value. As the pseudo-degree is a negative number in the interval [-1, 0], we use j (= -1) to describe it; and the uncertainty, which may cause either positive or negative effects, is characterized by the uncertainty factor i.

Definition 7: For an RSP proposition A, if its true degree is a(A), its pseudo-degree is c(A) j, and its uncertainty is b(A) = 1 − a(A) − c(A), then the truth value of the proposition A is

μ(A) = a(A) + b(A) i + c(A) j.

Here 0 ≤ a(A), b(A), c(A) ≤ 1 and a(A) + b(A) + c(A) = 1. Suppose A, B ∈ S with μ(A) = a(A) + b(A)i + c(A)j and μ(B) = a(B) + b(B)i + c(B)j. We now carry over the both-branch fuzzy logic calculus rules to the true degree and the pseudo-degree respectively.

1) Disjunction. True degree:
a(A ∨ B) = a(A) ∨ a(B) = max{a(A), a(B)}.

Pseudo-degree:

c(A ∨ B) j = c(A) j ∨ c(B) j = max{c(A) j, c(B) j} = min{c(A), c(B)} j.

Uncertainty degree:

b(A ∨ B) = 1 − max{a(A), a(B)} − min{c(A), c(B)}.

Then

μ(A ∨ B) = μ(A) ∨ μ(B) = max{a(A), a(B)} + (1 − max{a(A), a(B)} − min{c(A), c(B)}) i + min{c(A), c(B)} j.

2) Conjunction. True degree:

a(A ∧ B) = a(A) ∧ a(B) = min{a(A), a(B)}.

Pseudo-degree:

c(A ∧ B) j = c(A) j ∧ c(B) j = min{c(A) j, c(B) j} = max{c(A), c(B)} j.

Uncertainty degree:

b(A ∧ B) = 1 − min{a(A), a(B)} − max{c(A), c(B)}.

Then

μ(A ∧ B) = μ(A) ∧ μ(B) = min{a(A), a(B)} + (1 − min{a(A), a(B)} − max{c(A), c(B)}) i + max{c(A), c(B)} j.

3) Negation. True degree: a(¬A) = −a(A). Pseudo-degree: c(¬A) j = −c(A) j. Here the true degree becomes a negative number and the pseudo-degree becomes a positive number, which contradicts the definitions above, so we transform the form: a(¬A) = −a(A) = a(A) j and c(¬A) j = −c(A) j = c(A). Therefore the true degree of A becomes the pseudo-degree of ¬A, and the pseudo-degree of A becomes the true degree of ¬A; this change is reasonable. The uncertainty degree is then b(¬A) = 1 − a(¬A) − c(¬A) = b(A). Namely,

μ(¬A) = ¬μ(A) = c(A) + b(A) i + a(A) j.

To sum up, in RSP logic the disjunction, conjunction, and negation calculi are defined as follows:

μ(A ∨ B) = μ(A) ∨ μ(B) = max{a(A), a(B)} + (1 − max{a(A), a(B)} − min{c(A), c(B)}) i + min{c(A), c(B)} j,
μ(A ∧ B) = μ(A) ∧ μ(B) = min{a(A), a(B)} + (1 − min{a(A), a(B)} − max{c(A), c(B)}) i + max{c(A), c(B)} j,
μ(¬A) = ¬μ(A) = c(A) + b(A) i + a(A) j.

For RSP propositions A, B: if μ(B) = ¬μ(A), then the two propositions are reciprocal; A is the converse proposition of B, and B is the converse proposition of A.
If μ(B) = μ(A), then the two propositions are equivalent; A is the equivalent proposition of B, and B is the equivalent proposition of A.
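The three operators above can be sketched directly on (a, b, c) triples; the truth values used in the example are illustrative:

```python
# The three RSP operators on truth values mu = a + b*i + c*j, represented
# as triples (a, b, c) with a + b + c = 1.
def disj(x, y):
    a, c = max(x[0], y[0]), min(x[2], y[2])
    return (a, 1 - a - c, c)

def conj(x, y):
    a, c = min(x[0], y[0]), max(x[2], y[2])
    return (a, 1 - a - c, c)

def neg(x):                      # true and pseudo degrees swap; b is unchanged
    a, b, c = x
    return (c, b, a)

A = (0.5, 0.25, 0.25)            # illustrative truth values
B = (0.25, 0.25, 0.5)
```

Because negation merely swaps a and c while min and max are dual, the laws of Section 4 (idempotence, absorption, De Morgan) can be checked mechanically with these functions.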
4 Basic Properties

Agreements: Suppose A, B, C ∈ S are RSP propositions with the truth values μ(A) = a(A) + b(A)i + c(A)j, μ(B) = a(B) + b(B)i + c(B)j, μ(C) = a(C) + b(C)i + c(C)j. The following are set pair propositional logic theorems.

Theorem 1 (Idempotent law): μ(A ∧ A) = μ(A), μ(A ∨ A) = μ(A).

Theorem 2 (Commutative law): μ(A ∧ B) = μ(B ∧ A), μ(A ∨ B) = μ(B ∨ A).

Theorem 3 (Absorption law): μ(A ∨ (A ∧ B)) = μ(A), μ(A ∧ (A ∨ B)) = μ(A).

Theorem 4 (Associative law): μ((A ∧ B) ∧ C) = μ(A ∧ B ∧ C), μ((A ∨ B) ∨ C) = μ(A ∨ B ∨ C).

Theorem 5 (Distributive law): μ(A ∨ (B ∧ C)) = μ((A ∨ B) ∧ (A ∨ C)).

Proof: For the equation μ(A ∨ (B ∧ C)) = μ((A ∨ B) ∧ (A ∨ C)) we have

μ(A ∨ (B ∧ C)) = μ(A) ∨ μ(B ∧ C) = μ(A) ∨ (μ(B) ∧ μ(C)) = max{a(A), min{a(B), a(C)}} + (1 − max{a(A), min{a(B), a(C)}} − min{c(A), max{c(B), c(C)}}) i + min{c(A), max{c(B), c(C)}} j,

μ((A ∨ B) ∧ (A ∨ C)) = μ(A ∨ B) ∧ μ(A ∨ C) = min{max{a(A), a(B)}, max{a(A), a(C)}} + (1 − min{max{a(A), a(B)}, max{a(A), a(C)}} − max{min{c(A), c(B)}, min{c(A), c(C)}}) i + max{min{c(A), c(B)}, min{c(A), c(C)}} j.

Below are the classified conclusions. For a(A), a(B), a(C) there are six possible orderings:

a(A) ≥ a(B) ≥ a(C), a(C) ≥ a(B) ≥ a(A), a(C) ≥ a(A) ≥ a(B), a(B) ≥ a(A) ≥ a(C), a(A) ≥ a(C) ≥ a(B), a(B) ≥ a(C) ≥ a(A).
510
Y. Yang, J. Xu, and B. Liu
For c(A), c(B), c(C) there are likewise six possible orderings:
c(A) ≥ c(B) ≥ c(C), c(C) ≥ c(B) ≥ c(A), c(C) ≥ c(A) ≥ c(B), c(B) ≥ c(A) ≥ c(C), c(A) ≥ c(C) ≥ c(B), c(B) ≥ c(C) ≥ c(A).

So there are 36 combined cases for a(A), a(B), a(C) and c(A), c(B), c(C). Verifying each case shows that μ(A∨(B∧C)) = μ((A∨B)∧(A∨C)). By the same method, μ(A∧(B∨C)) = μ((A∧B)∨(A∧C)). Hence (S, ∨, ∧) is a distributive lattice.

Theorem 6: In the distributive lattice (S, ∨, ∧) the greatest and least elements exist, and the following equations hold: μ(A) ∨ j = μ(A), μ(A) ∧ j = j, μ(A) ∨ 1 = 1, μ(A) ∧ 1 = μ(A).
Theorem 7 (Double-negation law): μ(¬¬A) = μ(A).
Theorem 8 (De Morgan laws): μ(¬(A∧B)) = μ(¬A ∨ ¬B), μ(¬(A∨B)) = μ(¬A ∧ ¬B).

Proof: Theorems 1–7 are straightforward, so we only give the proof of Theorem 8. For the equation μ(¬(A∧B)) = μ(¬A ∨ ¬B) we have:
Left: μ(¬(A∧B)) = max{c(A), c(B)} + (1 − min{a(A), a(B)} − max{c(A), c(B)})i + min{a(A), a(B)}j.
Right: μ(¬A ∨ ¬B) = (c(A) + b(A)i + a(A)j) ∨ (c(B) + b(B)i + a(B)j) = max{c(A), c(B)} + (1 − min{a(A), a(B)} − max{c(A), c(B)})i + min{a(A), a(B)}j.
To sum up, μ(¬(A∧B)) = μ(¬A ∨ ¬B). In like manner, μ(¬(A∨B)) = μ(¬A ∧ ¬B).
Theorems 1–8 prove that (S, ∨, ∧, ¬) is a soft algebra.
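The redefined operators and the theorems above can be checked mechanically. The following sketch (my own illustration, not part of the paper; the helper names `neg`, `conj` and `disj` are assumptions) represents a connection degree a + bi + cj as a triple (a, b, c) with a + b + c = 1 and spot-checks the idempotent, double-negation, De Morgan and distributive laws on random samples:

```python
import random

def neg(m):
    # negation: a + bi + cj  ->  c + bi + aj
    a, b, c = m
    return (c, b, a)

def conj(m1, m2):
    # conjunction: min on the a-part, max on the c-part, remainder to b
    a, c = min(m1[0], m2[0]), max(m1[2], m2[2])
    return (a, 1 - a - c, c)

def disj(m1, m2):
    # disjunction: max on the a-part, min on the c-part, remainder to b
    a, c = max(m1[0], m2[0]), min(m1[2], m2[2])
    return (a, 1 - a - c, c)

def rand_mu():
    # a random connection degree with nonnegative parts summing to 1
    x, y = sorted(random.random() for _ in range(2))
    return (x, y - x, 1 - y)

def close(m1, m2, eps=1e-12):
    return all(abs(s - t) < eps for s, t in zip(m1, m2))

random.seed(0)
for _ in range(10000):
    A, B, C = rand_mu(), rand_mu(), rand_mu()
    # De Morgan:  ¬(A ∧ B) = ¬A ∨ ¬B
    assert close(neg(conj(A, B)), disj(neg(A), neg(B)))
    # distributive:  A ∨ (B ∧ C) = (A ∨ B) ∧ (A ∨ C)
    assert close(disj(A, conj(B, C)), conj(disj(A, B), disj(A, C)))
    # idempotent and double negation
    assert close(conj(A, A), A) and close(neg(neg(A)), A)
print("all checks passed")
```

Such random sampling is of course no substitute for the 36-case verification in the proof, but it illustrates why the min/max definitions make (S, ∨, ∧, ¬) behave as a distributive lattice with involution.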
5 Conclusion

Using the basic methods of set pair analysis, this paper has defined the basic concepts of RSP logic on the basis of fuzzy logic. The three operations of conjunction, disjunction and negation have been redefined, and the redefined operations have been proved to satisfy the eight theorems above, including the idempotent and distributive laws. The construction of RSP logic has important theoretical value for the improvement of set pair analysis theory, and it provides a new approach and new tools for its application. On the other hand, it
is also a breakthrough with respect to traditional and fuzzy logic. However, research on RSP logical reasoning is still at a preliminary stage and needs further study in the future.
A Result Related to Double Obstacle Problems*

Xiujuan Xu, Xiaona Lu, and Yuxia Tong

College of Science, Hebei Polytechnic University, Tangshan 063009, China
[email protected], [email protected], [email protected]

Abstract. In recent years research on the regularity of harmonic equations and of obstacle problems has made great progress, but the regularity of very weak solutions of double obstacle problems has not yet been studied. The basic tools of this paper are the Young inequality, the Hölder inequality, the Minkowski inequality, the Poincaré inequality and an elementary inequality. The definition of very weak solutions for double obstacle problems associated with a non-homogeneous elliptic equation is given, and a local integrability result is obtained by the technique of Hodge decomposition.

Keywords: Elliptic equation; double obstacle problem; very weak solution.
1 Introduction and Statement of Result

First we introduce the definition of a regular domain used in this paper [1]. Let Ω ⊂ R^n, n ≥ 2, and let G(x, y) be the Green function of Ω. For F = (f^1, f^2, …, f^n) ∈ C_0^∞(Ω, R^n), the integral

u(x) = −∫_Ω ∇_y G(x, y) F(y) dy

gives the solution with zero boundary values of the Poisson equation Δu = div F. The gradient of u can be expressed by the singular integral

∇u(x) = −∫_Ω ∇_x ∇_y G(x, y) F(y) dy = (H_Ω F)(x).

The domain Ω is called regular if the operator H_Ω is bounded on every space L^r(Ω, R^n), 1 < r < ∞. If Ω is regular, the Hodge decomposition estimates (3) and (4) below hold [1]. In this paper we always suppose that Ω is a bounded regular domain.

Let Ω be a bounded regular domain in R^n (n ≥ 2). We consider the equation

div A(x, ∇u(x)) = div F(x),  (1)

where the mapping A : Ω × R^n → R^n is a Carathéodory function and the following assumptions hold for almost every x ∈ Ω and all ξ ∈ R^n:

* This paper is fully supported by the National Science Infrastructure Program of Hebei (No. A2010000910).

R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 512–518, 2010. © Springer-Verlag Berlin Heidelberg 2010
(i) ⟨A(x, ξ), ξ⟩ ≥ a|ξ|^p;  (ii) |A(x, ξ)| ≤ b|ξ|^{p−1} + k(x).

Here F(x) ∈ (L_loc^{s/(p−1)}(Ω))^n, F(x) = (F_1(x), F_2(x), …, F_n(x)), F_i(x) > 0, 1 < p < ∞, 0 < a ≤ b < ∞, and k(x) ∈ L_loc^{s/(p−1)}(Ω), s > r.

Suppose that φ, ψ are any functions in Ω with values in R ∪ {+∞, −∞}, and that θ ∈ W^{1,r}(Ω), max{1, p−1} < r < p. The functions φ, ψ ∈ W_loc^{1,s}(Ω) are the two obstacles and θ determines the boundary values. Let

K_{φ,ψ}^{θ,r} = {u ∈ W^{1,r}(Ω) : u − θ ∈ W_0^{1,r}(Ω), φ ≤ u ≤ ψ a.e. in Ω}.

Consider the Hodge decomposition

|∇(v − u)|^{r−p} ∇(v − u) = ∇η_{v,u} + h_{v,u},  (2)

where h_{v,u} ∈ L^{r/(r−p+1)}(Ω) is a divergence-free field and η_{v,u} ∈ W^{1, r/(r−p+1)}(Ω). The following estimates hold [1]:

‖∇η_{v,u}‖_{r/(r−p+1)} ≤ c ‖∇(v − u)‖_r^{r−p+1},  (3)

‖h_{v,u}‖_{r/(r−p+1)} ≤ c(p − r) ‖∇(v − u)‖_r^{r−p+1}.  (4)
Next we give the definition of very weak solutions for double obstacle problems of the non-homogeneous elliptic equation (1).

Definition: If a function u ∈ K_{φ,ψ}^{θ,r}(Ω) satisfies

∫_Ω ⟨A(x, ∇u), |∇(v − u)|^{r−p} ∇(v − u)⟩ dx ≥ ∫_Ω ⟨A(x, ∇u), h_{v,u}⟩ dx + ∫_Ω ⟨F, ∇η_{v,u}⟩ dx  (5)

for any v ∈ K_{φ,ψ}^{θ,r}(Ω), then we say that u is a very weak solution to the K_{φ,ψ}^{θ,r}(Ω)-double obstacle problem, where ∇η_{v,u} and h_{v,u} come from the Hodge decomposition (2).

The meaning of "very weak" is that the Sobolev integrability exponent r of u may be less than the Sobolev integrability exponent p of a weak solution. By the Hodge decomposition,

|∇(v − u)|^{r−p} ∇(v − u) − h_{v,u} = ∇η_{v,u}

is a gradient field. When r = p, by the uniqueness of the Hodge decomposition we know that h_{v,u} = 0 and div F = 0 in the generalized sense, so this definition is consistent with Definition 1 in [2,3,4].
Having defined very weak solutions for double obstacle problems of the non-homogeneous elliptic equation (1), we naturally raise the question: as an extension of weak solutions, do the very weak solutions u ∈ K_{φ,ψ}^{θ,r}(Ω) of double obstacle problems of the non-homogeneous elliptic equation keep some of the nature of weak solutions? The following theorem answers this question. Our main result is the following local integrability result.

Theorem: Suppose that K_{φ,ψ}^{θ,r}(Ω) is nonempty. If r > r_0, then for any very weak solution u to the non-homogeneous K_{φ,ψ}^{θ,r}(Ω)-double obstacle problem and any v ∈ K_{φ,ψ}^{θ,r}(Ω), we have the integral estimate

∫_Ω |∇u|^r dx ≤ C (∫_Ω |∇v|^r dx + ∫_Ω |k(x)|^{r/(p−1)} dx + ∫_Ω |F|^{r/(p−1)} dx),  (6)

where φ, ψ ∈ W_loc^{1,s}(Ω), s > r, C = C(n, p, r_0, a, b), and r_0 ∈ (p − 1, p) is the exponent from Lemma 4.

To facilitate the proof of the theorem, we introduce the main tools and conclusions used in this paper.
2 Basic Lemmas

2.1 Lemma 1 [5] (Basic inequality). The following elementary inequality is valid for all X, Y ∈ R^n and 0 < ε < 1:

| |X|^{−ε} X − |Y|^{−ε} Y | ≤ 2^ε (1 + ε)/(1 − ε) |X − Y|^{1−ε}.  (6)

2.2 Lemma 2 [6] (Young's inequality). The following Young inequality is valid for all a, b > 0, ε > 0 and p > 1:

ab ≤ ε a^{p′} + C(ε, p) b^p,  1/p + 1/p′ = 1.  (7)

2.3 Lemma 3 [7] (Hölder's inequality). The following Hölder inequality is valid for all p > 1, q > 1 with 1/p + 1/q = 1: if f ∈ L^p(E) and g ∈ L^q(E), then f·g ∈ L(E) and

∫_E |f(x) g(x)| dx ≤ (∫_E |f(x)|^p dx)^{1/p} (∫_E |g(x)|^q dx)^{1/q}.  (8)

2.4 Lemma 4 [8]. Suppose that K_ψ^{r,θ}(Ω) is nonempty. There exists r_0 ∈ (p − 1, p) such that, if r > r_0, then for any very weak solution u to the non-homogeneous K_ψ^{r,θ}(Ω)-obstacle problem and any v ∈ K_ψ^{r,θ}(Ω), the following integral estimate holds:

∫_Ω |∇u|^r dx ≤ C ∫_Ω |∇v|^r dx,

where ψ ∈ W_loc^{1,s}(Ω), s > r, and C = C(n, p, r_0, a, b).
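As a numerical sanity check (my own addition, not part of the paper), the elementary inequality of Lemma 1 and the Young inequality of Lemma 2 can be spot-checked on random samples. The explicit constant C(ε, p) = (εp′)^{−p/p′}/p used below is one admissible choice obtained from the standard form of Young's inequality; the lemmas themselves do not fix it:

```python
import math
import random

def norm(v):
    return math.sqrt(sum(t * t for t in v))

def stretch(v, eps):
    # the mapping X -> |X|^(-eps) X appearing in Lemma 1
    return [norm(v) ** (-eps) * t for t in v]

def young_C(eps, p):
    # one admissible constant for  ab <= eps*a^p' + C(eps,p)*b^p
    pp = p / (p - 1)  # conjugate exponent p'
    return (eps * pp) ** (-p / pp) / p

random.seed(1)
for _ in range(20000):
    # Lemma 1:  | |X|^-eps X - |Y|^-eps Y |  <=  2^eps (1+eps)/(1-eps) |X-Y|^(1-eps)
    eps = random.uniform(0.01, 0.99)
    X = [random.uniform(-1, 1) for _ in range(3)]
    Y = [random.uniform(-1, 1) for _ in range(3)]
    lhs = norm([s - t for s, t in zip(stretch(X, eps), stretch(Y, eps))])
    rhs = 2 ** eps * (1 + eps) / (1 - eps) * norm([s - t for s, t in zip(X, Y)]) ** (1 - eps)
    assert lhs <= rhs + 1e-9

    # Lemma 2:  ab <= eps*a^p' + C(eps,p)*b^p
    a, b = random.uniform(0, 5), random.uniform(0, 5)
    e, p = random.uniform(0.1, 2), random.uniform(1.1, 4)
    assert a * b <= e * a ** (p / (p - 1)) + young_C(e, p) * b ** p + 1e-9
print("Lemma 1 and Lemma 2 hold on all samples")
```

Sampling cannot replace a proof, but it makes the role of the constants visible: in the proof below, Lemma 2 is applied with ε small so that the ‖∇u‖ terms it produces can be absorbed into the left-hand side.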
3 Proof of Theorem

Let u be a very weak solution to the non-homogeneous K_{φ,ψ}^{θ,r}(Ω)-double obstacle problem and let v ∈ K_{φ,ψ}^{θ,r}(Ω). Set

E(v, u) = |∇(v − u)|^{r−p} ∇(v − u) + |∇u|^{r−p} ∇u.  (9)

By Lemma 1 (6), applied with ε = p − r, we have

|E(v, u)| ≤ 2^{p−r} (p − r + 1)/(r − p + 1) |∇v|^{r−p+1}.  (10)

By (9) we have

∫_Ω ⟨A(x, ∇u), |∇u|^{r−p} ∇u⟩ dx = ∫_Ω ⟨A(x, ∇u), E(v, u)⟩ dx − ∫_Ω ⟨A(x, ∇u), |∇(v − u)|^{r−p} ∇(v − u)⟩ dx.  (11)

By assumption (i) we have

a ∫_Ω |∇u|^r dx ≤ ∫_Ω ⟨A(x, ∇u), |∇u|^{r−p} ∇u⟩ dx.  (12)

By (10) and (11) we have

∫_Ω ⟨A(x, ∇u), |∇u|^{r−p} ∇u⟩ dx ≤ 2^{p−r} (p − r + 1)/(r − p + 1) ∫_Ω |A(x, ∇u)| |∇v|^{r−p+1} dx − ∫_Ω ⟨A(x, ∇u), |∇(v − u)|^{r−p} ∇(v − u)⟩ dx.  (13)
By assumption (ii) we have

∫_Ω ⟨A(x, ∇u), |∇u|^{r−p} ∇u⟩ dx ≤ 2^{p−r} (p − r + 1)/(r − p + 1) ∫_Ω (b|∇u|^{p−1} + k(x)) |∇v|^{r−p+1} dx − ∫_Ω ⟨A(x, ∇u), |∇(v − u)|^{r−p} ∇(v − u)⟩ dx.  (14)

By (5), the definition of very weak solutions for double obstacle problems of the non-homogeneous elliptic equation (1), we have

∫_Ω ⟨A(x, ∇u), |∇u|^{r−p} ∇u⟩ dx ≤ b 2^{p−r} (p − r + 1)/(r − p + 1) ∫_Ω |∇u|^{p−1} |∇v|^{r−p+1} dx + 2^{p−r} (p − r + 1)/(r − p + 1) ∫_Ω k(x) |∇v|^{r−p+1} dx − ∫_Ω ⟨A(x, ∇u), h_{v,u}⟩ dx − ∫_Ω ⟨F, ∇η_{v,u}⟩ dx.  (15)

By assumption (ii) we have

∫_Ω ⟨A(x, ∇u), |∇u|^{r−p} ∇u⟩ dx ≤ b 2^{p−r} (p − r + 1)/(r − p + 1) ∫_Ω |∇u|^{p−1} |∇v|^{r−p+1} dx + 2^{p−r} (p − r + 1)/(r − p + 1) ∫_Ω k(x) |∇v|^{r−p+1} dx + b ∫_Ω |∇u|^{p−1} |h_{v,u}| dx + ∫_Ω k(x) |h_{v,u}| dx + ∫_Ω |F| |∇η_{v,u}| dx.  (16)

Taking the conjugate exponents 1/p̃ = (p − 1)/r and 1/q̃ = (r − p + 1)/r in Lemma 3 (8), we have

∫_Ω ⟨A(x, ∇u), |∇u|^{r−p} ∇u⟩ dx ≤ b 2^{p−r} (p − r + 1)/(r − p + 1) ‖∇u‖_r^{p−1} ‖∇v‖_r^{r−p+1} + 2^{p−r} (p − r + 1)/(r − p + 1) ‖k(x)‖_{r/(p−1)} ‖∇v‖_r^{r−p+1} + b ‖∇u‖_r^{p−1} ‖h_{v,u}‖_{r/(r−p+1)} + ‖k(x)‖_{r/(p−1)} ‖h_{v,u}‖_{r/(r−p+1)} + ‖F‖_{r/(p−1)} ‖∇η_{v,u}‖_{r/(r−p+1)}.  (17)

By (3) we have

‖F‖_{r/(p−1)} ‖∇η_{v,u}‖_{r/(r−p+1)} ≤ c ‖F‖_{r/(p−1)} (‖∇u‖_r^{r−p+1} + ‖∇v‖_r^{r−p+1}).  (18)

By (4) we have

‖∇u‖_r^{p−1} ‖h_{v,u}‖_{r/(r−p+1)} ≤ c(p − r) ‖∇u‖_r^{p−1} (‖∇u‖_r^{r−p+1} + ‖∇v‖_r^{r−p+1}),  (19)
‖k(x)‖_{r/(p−1)} ‖h_{v,u}‖_{r/(r−p+1)} ≤ c(p − r) ‖k(x)‖_{r/(p−1)} (‖∇u‖_r^{r−p+1} + ‖∇v‖_r^{r−p+1}).  (20)

By (18), (19) and (20) we have

∫_Ω ⟨A(x, ∇u), |∇u|^{r−p} ∇u⟩ dx ≤ b 2^{p−r} (p − r + 1)/(r − p + 1) ‖∇u‖_r^{p−1} ‖∇v‖_r^{r−p+1} + 2^{p−r} (p − r + 1)/(r − p + 1) ‖k(x)‖_{r/(p−1)} ‖∇v‖_r^{r−p+1} + bc(p − r) ‖∇u‖_r^{p−1} (‖∇u‖_r^{r−p+1} + ‖∇v‖_r^{r−p+1}) + c(p − r) ‖k(x)‖_{r/(p−1)} (‖∇u‖_r^{r−p+1} + ‖∇v‖_r^{r−p+1}) + c ‖F‖_{r/(p−1)} (‖∇u‖_r^{r−p+1} + ‖∇v‖_r^{r−p+1}).  (21)

Since (p − 1)/r + (r − p + 1)/r = 1, applying Lemma 2 (7) to each product and using (12), we have

[a − bc(p − r)] ‖∇u‖_r^r ≤ b 2^{p−r} (p − r + 1)/(r − p + 1) [ε ‖∇u‖_r^r + c(ε, p) ‖∇v‖_r^r] + c[ε ‖∇u‖_r^r + c(ε, p) ‖F‖_{r/(p−1)}^{r/(p−1)}] + c[ε ‖∇u‖_r^r + c(ε, p) ‖k(x)‖_{r/(p−1)}^{r/(p−1)}] + c[ε ‖∇v‖_r^r + c(ε, p) ‖F‖_{r/(p−1)}^{r/(p−1)}] + c[ε ‖∇v‖_r^r + c(ε, p) ‖k(x)‖_{r/(p−1)}^{r/(p−1)}],  (22)

where c = c(n, p, r, a, b) and c(ε) = c(n, p, r, a, b, ε). Then we have

‖∇u‖_r^r ≤ cε ‖∇u‖_r^r + c(ε) ‖∇v‖_r^r + c(ε) ‖F‖_{r/(p−1)}^{r/(p−1)} + c(ε) ‖k(x)‖_{r/(p−1)}^{r/(p−1)} + c(p − r) ‖∇u‖_r^r.  (23)

Choosing ε small enough, and r > r_0 close enough to p that the ‖∇u‖_r^r terms on the right-hand side can be absorbed into the left-hand side, we obtain

‖∇u‖_r^r ≤ c(ε) [‖∇v‖_r^r + ‖k(x)‖_{r/(p−1)}^{r/(p−1)} + ‖F‖_{r/(p−1)}^{r/(p−1)}].  (24)

So we have the conclusion

∫_Ω |∇u|^r dx ≤ C (∫_Ω |∇v|^r dx + ∫_Ω |k(x)|^{r/(p−1)} dx + ∫_Ω |F|^{r/(p−1)} dx).  (25)

This completes the proof of the Theorem.
4 Conclusion

The theorem shows that, when r > r_0, among all functions that have the same boundary value θ and the same obstacles φ, ψ as the solution u, the very weak solution u of the double obstacle problem for the non-homogeneous elliptic equation (1) has the least r-Dirichlet integral up to the constant factor C: the integral of |∇u|^r is controlled by those of ∇v, F and k(x), so we may loosely regard u as having minimum energy. This conclusion is consistent with the classical conclusion when r = p [9], and it generalizes the result of [10].
References

1. Iwaniec, T., Sbordone, C.: Weak minima of variational integrals. J. Reine Angew. Math. 454, 143–161 (1994)
2. Yuxia, T., Dianchuan, J., Jiantao, G.: Remarks on weak solutions of obstacle problems. Journal of Ningxia University (Natural Science Edition) (03), 217–03 (2009)
3. Yuxia, T., Jiantao, G., Jianliang, C.: Regularity of the very weak solutions for nonhomogeneous obstacle problems. Mathematica Applicata 21(1), 185–192 (2008)
4. Hongya, G., Huiying, T.: Local regularity result for solutions of obstacle problems. Acta Mathematica Scientia 24B(1), 71–74 (2004)
5. Heinonen, J., Kilpeläinen, T., Martio, O.: Nonlinear potential theory of second order degenerate elliptic partial differential equations. Oxford University Press, Oxford (1993)
6. Jichang, K.: Applied Inequalities, 3rd edn. Shandong Science and Technology Press, Jinan (2004)
7. Iwaniec, T., Migliaccio, L., Nania, L., Sbordone, C.: Integrability and removability results for quasiregular mappings in high dimensions. Journal of Mathematical Research and Exposition 24(1), 159–167 (2004)
8. Hongya, G., Min, W., Hongliang, Z.: Very weak solutions for obstacle problems of A-harmonic equations. Journal of Mathematical Research and Exposition 24(1), 159–167 (2004)
9. Hong, L., Hongya, G.: Some regularity results in nonhomogeneous obstacle problems. J. of Math. (26), 501–508 (2006)
10. Hong, L.: The regularity of very weak solutions for nonhomogeneous obstacle problems. J. of Math. (26), 501–508 (2006)
Properties of Planar Triangulation and Its Application

Ling Wang¹, Dianxuan Gong¹, Kaili Wang¹, Yuhuan Cui², and Shiqiu Zheng¹

¹ College of Sciences, Hebei Polytechnic University, Tangshan 063009, China
² College of Light Industry, Hebei Polytechnic University, Tangshan 063000, China
[email protected]

Abstract. Further research is done on triangulation partitions. In particular, a more careful analysis is made of even triangulations of simply connected domains, and a number of new properties are obtained. Using these new properties, the proofs of some theorems in graph theory become easy and simple. For example, using the property that an arbitrary planar even triangulation can be expressed as the union of a number of disjoint star domains, one can easily prove the equivalence of the three statements: the triangulation is even, the triangulation is 3-vertex signed, and the triangulation is 2-triangle signed.

Keywords: even partition; triangulation; 3-vertex signed.
1 Introduction
Let Ω ⊂ R² be a simply connected domain, and let Δ = {T_i}_{i=1}^N be a triangulation of Ω satisfying
(1) Ω = ∪_{i=1}^N T_i,
(2) each T_i is a closed triangle,
(3) for all 1 ≤ i ≠ j ≤ N, no vertex of T_i lies in the interior of T_j or in the interior of an edge of T_j.
The triangular domains of the partition are called the cells of Δ, and N is the number of triangles in Δ. The edges of the triangles are called partition lines; those that fall within the domain Ω are called interior lines, the others boundary lines. The vertices of the triangles are called partition vertices; a vertex that lies within the domain is called an interior vertex, the others boundary vertices. Suppose v is a vertex of the partition
Project supported by the National Natural Science Foundation of China (No. 60533060), the Educational Commission of Hebei Province of China (No. 2009448), the Natural Science Foundation of Hebei Province of China (No. A2009000735) and the Natural Science Foundation of Hebei Province of China (No. A2010000908). Corresponding author.
R. Zhu et al. (Eds.): ICICA 2010, Part I, CCIS 105, pp. 519–526, 2010. c Springer-Verlag Berlin Heidelberg 2010
Δ, and let Star(v) = {T_i ∈ Δ : v ∈ T_i} be the collection of cells in Δ sharing v as a common vertex; we call Star(v) the star domain of v. The degree d(v) of a vertex v is the number of edges with which it is incident. Two vertices v1 and v2 are called adjacent if there is a partition line connecting v1 and v2 ([9]).

Triangulation is widely used in various fields of scientific research, especially in applications such as computer-aided geometric design, graphics, image processing, surface reconstruction, the finite element method, and multivariate spline methods ([2,3,7]). Take data fitting and surface reconstruction for instance: in order to improve accuracy, we often need to divide the region into many small regions, that is, to build a grid division. Any polygonal domain in R² can always be triangulated, but cannot necessarily be subdivided by a quadrilateral or other polygonal mesh. In practice, if the accuracy obtained on a given triangulation is not satisfactory, one can easily refine the triangulation globally or locally ([2,8]).

Triangulation is a very useful tool not only from the applied but also from the theoretical perspective. The authors have been working on spline theory; simply speaking, a spline function is a piecewise polynomial function with a certain degree of smoothness ([7]). The properties of multivariate spline functions on arbitrary triangulations have been a hot and difficult topic, so research on the intrinsic properties of triangulations is of basic theoretical importance, and it is necessary to study them in more depth. In this paper, taking the triangulation as a graph, we consider its topological properties. As is well known, given a triangulation, let p be the number of vertices of the partition, l the number of lines (triangle edges), and n the number of triangles; then e = p − l + n is a topological invariant, that is, e = p − l + n is unrelated to the geometry. e is called the Euler characteristic ([1]).
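The invariant e = p − l + n can be computed directly from a cell list. The following sketch (my own example, not from the paper) uses the four triangles obtained by joining the center of a square to its corners; for any triangulation of a simply connected, disk-like region the value is e = 1:

```python
from itertools import combinations

# Four cells: the center vertex 4 joined to the corners 0..3 of a square.
triangles = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]

# p = number of partition vertices, l = number of partition lines,
# n = number of triangle cells.
vertices = {v for t in triangles for v in t}
edges = {frozenset(e) for t in triangles for e in combinations(t, 2)}

p, l, n = len(vertices), len(edges), len(triangles)
print(p, l, n, p - l + n)  # 5 vertices, 8 lines, 4 triangles: e = 1
```

Refining the triangulation changes p, l and n but, as the text states, leaves e unchanged.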
If the vertices of the partition Δ can be marked with −1 and 1 so that the triangles in Δ are marked with different symbols, we call such a triangulation 2-vertex signed. An interesting conjecture was proposed in [6]: any triangulation is 2-vertex signed. This was proved in [9] using definitions and results from graph theory. In Section 2 of this paper, some new results are obtained based on an in-depth study of the properties of triangulations, especially even triangulations.
2 Properties of Triangulation
Let Δ be a triangulation of a domain Ω ⊂ R². Then for any interior line in Δ there are exactly two triangles sharing this line as an edge. The position relationship between two different star domains in a triangulation falls into three cases: 1. they have common cells; 2. they have common lines but no common cells; 3. they have common vertices but no common lines. In the rest of this paper, if not specified otherwise, the discussion is limited to 2-dimensional simply connected regions, and we say two star domains intersect when they have common cells.
Property 1. Given an arbitrary planar triangulation Δ, take any two adjacent vertices v1 and v2, and denote the corresponding star domains by Star(v1) and Star(v2). Then Star(v1) ∩ Star(v2) consists of the two triangle cells sharing the interior line v1v2 as a common edge.

Proof. Firstly, suppose v1 and v2 are interior vertices; then v1v2 is an interior line, and there are exactly two triangle cells, denoted Δv1v2v3 and Δv1v2v3′, that have the edge v1v2. Secondly, by the definition of a star domain, Δv1v2v3 and Δv1v2v3′ both belong to Star(v1) and Star(v2), so (Δv1v2v3 ∪ Δv1v2v3′) ⊂ (Star(v1) ∩ Star(v2)). On the other hand, suppose Δ′ ⊂ (Star(v1) ∩ Star(v2)); then Δ′ has the vertices v1 and v2, so the line v1v2 is an edge of Δ′, and Δ′ must be Δv1v2v3 or Δv1v2v3′.

Remark 1. One sees easily from the proof that the result holds as long as one of v1 and v2 is an interior vertex. If both v1 and v2 are boundary vertices, then Star(v1) ∩ Star(v2) is one triangle cell.

Property 2. Given an arbitrary planar triangulation Δ, take 3 vertices v1, v2 and v3 that are pairwise adjacent, and denote the corresponding star domains by Star(v1), Star(v2) and Star(v3). Then Star(v1) ∩ Star(v2) ∩ Star(v3) is the triangle cell determined by the three vertices v1, v2, v3.

Proof. On the one hand, the triangle cell determined by the vertices v1, v2 and v3 must belong to Star(v1) ∩ Star(v2) ∩ Star(v3). On the other hand, suppose a partition cell Δ′ belongs to Star(v1) ∩ Star(v2) ∩ Star(v3); by the definition of a star domain, v1, v2, v3 must be the vertices of Δ′.

We call the triangulation Δ an even partition if the degrees of all its interior vertices are even. In the rest of the text we mainly discuss the properties of even triangulations.

Property 3. An arbitrary planar even triangulation Δ can be expressed as the union of a number of disjoint star domains. In other words, for a given planar even triangulation Δ there exist K ∈ N and {v1, v2, …, vK} such that

Δ = ∪_{i=1}^K Star(vi),

and for any i ≠ j, Star(vi) ∩ Star(vj) is empty (here "empty" means that Star(vi) and Star(vj) have no common partition cell).

Proof. We first prove the following result: given an even triangulation Δ of a planar simply connected domain, we can always find two adjacent star domains Star(vi) and Star(vj) with no intersection such that the domain Star(vi) ∪ Star(vj) is still simply connected (has no hole).

In fact, as long as the triangulation Δ itself is not a star partition, we can certainly find two adjacent star domains with no intersection, say Star(v1) and
Star(v2). If the domain Star(v1) ∪ Star(v2) has a hole, there must be other interior vertices in the hole. If there is only one vertex, say v3, lying in the hole, then the star domains Star(v1) and Star(v3) must be adjacent and without intersection. Otherwise, if there are m > 1 vertices lying in the hole (Fig. 1(a) shows the case where Star(v1) ∪ Star(v2) has no hole, and Fig. 1(b) the case where it has a hole), we can choose one vertex, say v4, such that Star(v1) and Star(v4) are adjacent without intersection. Now if the domain Star(v1) ∪ Star(v4) still has a hole, the number of vertices lying in the hole is strictly less than m, so once again we can choose a new vertex, say v5, in the new hole such that Star(v1) and Star(v5) are adjacent and without intersection. Carrying on like this, one always finds two adjacent star domains with no intersection and no hole in their union.
Fig. 1. Two adjacent star domains without intersection
Now we prove the theorem. Given an even planar triangulation Δ, according to the analysis above there exist two star domains Star(v1) and Star(v2) such that Star(v1) ∪ Star(v2) has no hole and contains no other vertex. Take Star(v1) ∪ Star(v2) as one star domain, say Star(v1′); then we get a new even triangulation Δ′, in which we can certainly find two adjacent star domains Star(v3) and Star(v4) (here one of v3 and v4 might be v1′) such that Star(v3) ∩ Star(v4) is empty and Star(v3) ∪ Star(v4) has no hole. Take Star(v3) ∪ Star(v4) as a new star domain Star(v2′); then once again we obtain a new even triangulation, say Δ″. Carrying on like this, we finally get one star domain outside of which there is no other interior vertex. Along this process we have found a sequence of vertices, say {v1, v2, …, vK}, satisfying Δ = ∪_{i=1}^K Star(vi), with Star(vi) ∩ Star(vj) empty for all i ≠ j.

Remark 2. Property 3 ensures the existence of {v1, v2, …, vK} with Δ = ∪_{i=1}^K Star(vi), but such a sequence of vertices is not necessarily unique. Two choices (take {v1, v4, v8, v10, v18} and {v2, v3, v5, v11, v12, v15, v17} for example) for the same triangular partition are shown in Fig. 2.
Fig. 2. Two choices of vertex sequence
Here we introduce the following definition:

Definition 1. A triangulation Δ is called 3-vertex signed if all the vertices of Δ can be signed with −1, 0 or 1 so that the signs of the three vertices of each triangle in Δ are different from each other. A triangulation Δ is called 2-triangle signed if all the triangles in Δ can be signed with 1 or 2 so that any two adjacent triangles in Δ are marked differently.

Now we prove the following useful theorem:

Theorem 1. For a planar triangulation Δ of a simply connected domain, the following propositions are equivalent:
(i) Δ is an even triangulation;
(ii) Δ is 3-vertex signed;
(iii) Δ is 2-triangle signed.

Proof. First we prove (i) ⇒ (iii). Suppose Δ is an even triangulation. According to the proof of Property 3, Δ can be expressed as the union of a number of star domains ∪_{i=1}^K Star(vi), and any two adjacent star domains Star(vi) and Star(vj) share no common partition cell, so they have either one common edge or several common edges (Fig. 3(b) shows the case where Star(v1) and Star(v2) share three common edges). Since the degrees of the vertices in an even triangulation are all even, combining the proof of Property 3, the star domain Star(v1) is 2-triangle signed, and the triangles in Star(v2) can be marked appropriately so that Star(v1) ∪ Star(v2) is 2-triangle signed. Take Star(v1) ∪ Star(v2) as one star domain denoted Star(v0) (this amounts to compressing v1 and v2 into one vertex v0, see Fig. 4); then Star(v1) ∪ Star(v2) is compressed into one star domain Star(v0), which is obviously 2-triangle signed, so the triangles in Star(v3) can be signed appropriately so that Star(v1) ∪ Star(v2) ∪ Star(v3) = Star(v0) ∪ Star(v3) is 2-triangle signed. Again take Star(v0) ∪ Star(v3) as one star domain and continue the same discussion; one deduces that the even triangulation Δ is 2-triangle signed.
Fig. 3. The cases of 2-triangle signing for two adjacent star domains
Fig. 4. An example of compressing Star(v1) ∪ Star(v2) into Star(v0)
(iii) ⇒ (i) is obvious. In fact, if the triangulation Δ is 2-triangle signed, then for any interior vertex v in Δ the star domain Star(v) is 2-triangle signed, so the number of triangles in Star(v) must be even; in other words the degree of v is even, so Δ is an even triangulation. Hence (i) ⇔ (iii).

Now we prove (i) ⇔ (ii). Suppose Δ is an even triangulation. Then the triangles in Δ can be signed with 1 or 2 so that any two adjacent triangles are marked differently. Combined with Property 3, there exist {v1, v2, …, vK} (with Star(vi) ∩ Star(vj) empty for 1 ≤ i ≠ j ≤ K) such that Δ = ∪_{i=1}^K Star(vi). For i = 1, 2, …, K, the triangles in each Star(vi) are signed with 1 or 2 with any two adjacent triangles signed differently. Firstly sign vi with 0. Secondly, for the vertices on the boundary of Star(vi), going around vi in the clockwise direction, mark with −1 each vertex that is passed when going from a triangle signed 2 into a triangle signed 1, and sign all the other vertices with 1. Then the vertex signs of each triangle in Δ are different from each other. This proves that Δ is 3-vertex signed.

Conversely, if Δ is 3-vertex signed, then one can mark the vertices of Δ with −1, 0 or 1 so that the three vertices of each triangle in Δ are signed differently from each other. For any interior vertex v, if v is marked 0, then the vertices on
the boundary of Star(v) are marked with −1 and 1 alternately, so the degree of v is even; the same argument applies with the roles of the signs exchanged when v is marked −1 or 1. This proves that Δ is even. Some applications of these results are shown in the next section.
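The direction (i) ⇒ (iii) can also be carried out algorithmically: the dual graph of a triangulation of a disk is connected, and a breadth-first 2-coloring of it succeeds without conflict exactly when no odd-degree interior vertex is present. A sketch on my own toy example, a hexagon fanned from its center (its single interior vertex has even degree 6):

```python
from collections import deque

# Even triangulation: hexagon 0..5 fanned around the center vertex 6.
tris = [(i, (i + 1) % 6, 6) for i in range(6)]

# Dual graph: two triangles are adjacent when they share an edge (two vertices).
adj = {i: [j for j in range(len(tris))
           if j != i and len(set(tris[i]) & set(tris[j])) == 2]
       for i in range(len(tris))}

# BFS 2-coloring of the dual graph with signs 1 and 2.
sign = {0: 1}
queue = deque([0])
while queue:
    i = queue.popleft()
    for j in adj[i]:
        if j not in sign:
            sign[j] = 3 - sign[i]
            queue.append(j)
        else:
            # A conflict here would exhibit an odd cycle in the dual graph,
            # i.e. an odd-degree interior vertex.
            assert sign[j] != sign[i], "conflict: triangulation is not even"
print(sign)
```

On this example the dual graph is a 6-cycle, so the two signs simply alternate around the fan, realizing the 2-triangle signing of Theorem 1(iii).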
3 Application of the Properties
Using the properties and theorems obtained in the last section, we can easily prove some theorems in graph theory. The notation can be found in [1].

Example 1. Suppose Δ is a planar even triangulation; then the dual graph Δ′ of Δ can be 1-factorized.

Proof. On the one hand, Δ is even, so by Theorem 1, Δ is 3-vertex signed; marking the edge vivj of each triangle Δvivjvk with the sign of vk makes Δ 3-edge signed. On the other hand, the dual graph of Δ is a cubic graph. Mark each edge in Δ′ with the sign of its corresponding edge in Δ. Then all the edges marked with the same sign in Δ′ constitute a 1-factor (a perfect matching). We have thus found three 1-factors of Δ′, and every edge belongs to exactly one of them. In other words, Δ′ can be 1-factorized.

Example 2. Call the triangle constructed by the three midlines of a triangle its middle triangle, and denote the union of all the middle triangles by M(Δ). Then for any even triangulation Δ there exist 3 piecewise algebraic curves l_i : f_i(x, y) = 0, f_i(x, y) ∈ S_1^0(Δ), i = 1, 2, 3, such that M(Δ) = l1 ∪ l2 ∪ l3. Here S_1^0(Δ) is the collection of all continuous functions whose restriction to every triangle cell is a linear polynomial.

Proof. As in Example 1, since Δ is even, Δ is 3-edge signed. Mark each edge in M(Δ) with the sign of the parallel edge of the corresponding triangle in Δ. Then M(Δ) is 3-edge signed, and all the edges marked with the same sign in M(Δ) constitute an S_1^0 piecewise algebraic curve. So we have obtained three piecewise algebraic curves l_i with M(Δ) = l1 ∪ l2 ∪ l3.

Conclusion: Triangulation constructs a bridge between graph theory and several other disciplines. These examples show that, using the results in this paper, the proofs of some propositions and theorems become very simple; more can be seen in [4,5]. So the study of the properties of triangulations is helpful to research in graph theory. Conversely, studying triangulation partitions by means of graph theory would help us obtain more and deeper research achievements too.
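Example 1's labeling can be traced on the same hexagon-fan triangulation used above (my own toy example; note that for a bounded region the dual graph is not cubic, since boundary edges have no dual partner, so only the interior edges — the six spokes — are checked here). Sign the center 0 and the boundary vertices −1, 1 alternately; then label each edge with the sign missing from its two endpoints, which is exactly the sign of the opposite vertex in every triangle containing that edge:

```python
# 3-vertex signing of the hexagon fan: 0 at the center vertex 6,
# -1 and 1 alternating on the boundary vertices 0..5.
tris = [(i, (i + 1) % 6, 6) for i in range(6)]
sign = {6: 0}
for i in range(6):
    sign[i] = 1 if i % 2 == 0 else -1

# Every triangle carries all three signs, so the triangulation is 3-vertex signed.
for t in tris:
    assert {sign[v] for v in t} == {-1, 0, 1}

def label(u, v):
    # the sign missing from the endpoints of edge uv; in a 3-vertex signed
    # triangulation this equals the sign of the opposite vertex in every
    # triangle containing the edge, so the label is well defined
    return ({-1, 0, 1} - {sign[u], sign[v]}).pop()

# Interior (spoke) edges: consecutive spokes carry different labels, so each
# label class meets every dual vertex (triangle) at most once - a matching.
spoke_labels = [label(i, 6) for i in range(6)]
assert all(spoke_labels[i] != spoke_labels[(i + 1) % 6] for i in range(6))
print(spoke_labels)
```

Each triangle of the fan contains two spokes with different labels, so each label class is a matching of the dual graph, mirroring the 1-factor construction of Example 1 on this small instance.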
References

1. Bollobás, B.: Modern Graph Theory. Graduate Texts in Mathematics 184. Springer, New York (1998)
2. Cox, D., Little, J., O'Shea, D.: Ideals, Varieties and Algorithms. Springer, Berlin (1992)
3. Cox, D., Little, J., O'Shea, D.: Using Algebraic Geometry. Springer, New York (1998)
4. Davydov, O., Sommer, M., Strauss, H.: Interpolation by bivariate linear splines. Journal of Computational and Applied Mathematics 119, 115–131 (2003)
5. Gong, D.X., Wang, R.H.: Piecewise algebraic curves and the Four-Color Theorem. Utilitas Mathematica (in press)
6. Shi, X.Q., Wang, R.H.: The Bezout number for piecewise algebraic curves. BIT Numerical Mathematics 39(1), 339–349 (1999)
7. Wang, R.H.: Multivariate Spline Functions and Their Applications. Science Press/Kluwer, Beijing/New York (2001)
8. Wang, R.H., Li, C.J., Zhu, C.G.: Textbook of Computational Geometry. Science Press, Beijing (2008)
9. Wang, R.H., Xu, Z.Q.: The estimates of Bezout numbers for the piecewise algebraic curves. Science in China (Series A) 33(2), 185–192 (2003)
10. Xu, Z.Q., Wang, R.H.: Some properties of cross partitions. Numerical Mathematics: A Journal of Chinese Universities 4, 289–292 (2001)
11. Gong, D.X.: Some Research on the Theory of Piecewise Algebraic Varieties and RBF Interpolation. Ph.D. thesis, Dalian University of Technology, Dalian (2009)
12. Gong, D.X., Wang, L., Wang, K.L., Qu, J.G.: Some properties of triangulation. Ludong University Journal (Natural Science Edition) 26(3), 18–21 (2010)
13. Zhang, S.L., Wang, Y.G., Chen, X.D.: Intersection method for triangular mesh models based on space division. Journal of Computer Applications 29(10), 2671–2673 (2009)
Author Index
Annie, R. Arockia Xavier II-127
An, Tao I-217
Bai, Zhonghao II-142
Cao, Jianli II-172 Cao, Libo II-142 Chang, Jincai II-326, II-391 Chen, Chao I-384 Chen, Chunhua II-476 Chen, Guang I-282 Chen, Haibin II-268 Chen, Hao II-236 Chen, Hongkai II-88 Chen, Jianguo II-236 Chen, Lei II-212 Chen, Lijuan I-488 Chen, Min-ye II-204 Chen, Ruihan I-334 Chen, Yanyan II-507 Chen, Yuzhen I-311 Chen, Zengqiang I-234 Cheng, Bin I-137 Cheng, Jiejing I-144 Cheng, Wei I-70 Chi, Hong I-25 Chi, Xuebin II-40 Chien, Chen-shin I-250, I-289 Chien, Jason I-250, I-289 Cui, Yingshan I-274 Cui, Yuhuan I-519, II-341, II-398 Dai, Yu II-56 Ding, Xiuhuan I-177 Ding, Chunyan I-465 Ding, Yong I-209 Dong, Jianli II-64 Dong, Yong-quan I-473 Duan, Qizhi II-40 Fang, Fang I-334 Fang, Shaomei II-111 Fei, Rui II-1 feng, Lianggui I-70 Feng, Lichao I-480, II-275, II-375
Fu, Gang I-326 Fu, Li II-468 Fu, Xinhong I-25 Gan, Hai-Tao I-185 Gani, Abdullah II-16 Gao, Caiyun II-196 Gao, Jianxin II-460 Gao, Yi II-420, II-515 Gong, Dianxuan I-85, I-519, II-275 Gong, Taisheng II-1 Gong, Wu I-347 Guo, Changhong II-111 Guo, Jie II-236 Guo, Weina I-334 Guo, Xiaoqiang I-480 Guo, Yacai I-376 Guo, Yajun I-40, I-376, I-458 Han, Congying I-129 Han, Dong II-150 Han, Xuebing II-413, II-484 Hao, Xiaohui I-123 He, Chunyan I-62 He, Dengxu II-228 He, Guoping I-129 He, Shangqin I-40 He, Yali I-496 He, Yuanyuan I-407 He, Zhengqiu I-258 Huang, Bingnan II-507 Huang, Cheng-hui II-150 Huang, DaRong II-24 Huang, Jingjing I-144 Huang, Kangyu I-258 Huang, Wenxue II-348 Huang, Xiaoli I-274 Huo, Ping II-305 Jabeen, Fouzia II-260 Jaffar, Arfan II-260 Jan, Zahoor II-260 Ji, Nan I-101, I-117 Jia, Dan-dan II-354 Jia, Hongmei I-282
Jiang, Chao I-1 Jiang, Junna I-32, I-78 Jiang, Yong I-326 Jin, Dianchuan II-360 Jin, Gang I-217 Jin, Qianqian I-311 Jin, Zhong II-40 Ju, Xingsong II-289 Kang, Jinlong II-428 Kang, Zhiqiang I-340 Lai, Haiguang I-258 Lai, Jun II-1 Lan, Wangsen I-201 Lei, Xiaoqing I-458 Lei, Yilong II-398 Li, Baofeng I-368, II-334 Li, Chunmiao II-282 Li, Fuping I-340 Li, Gang II-476 Li, Huabo I-258 Li, L.C. II-244 Li, Li I-193 Li, Lihong I-32, I-78, II-319 Li, Linfan II-275 Li, Meng II-188 Li, Mingzhu I-488 Li, Ping II-275 Li, Qin I-465 Li, Shu I-217 Li, Ting I-334 Li, Wei I-185, I-311 Li, Xiang I-25 Li, Xiang-yang II-305 Li, Xiangyu II-212 Li, Xiaorui I-129 Li, Ye I-311 Li, Ying II-319 Li, Yuling I-169, I-242 Li, Zhanjin II-297 Li, Zhendong I-78, II-367 Li, Zhibin I-48 Li, Zhihua II-468 Li, Zhiyan II-341 Li, Zhi-zhong II-204 Lian, Wenshan I-354 Liang, Gaoyong II-1 Liang, Yanbing I-437 Liang, Z.Z. II-244
Liao, Huxiong I-17 Liao, Zhigao I-318 Lin, Shufei II-119 Ling, Yunxiang I-17 Liu, Baoxiang I-361, I-504, II-312, II-319 Liu, Bo I-407 Liu, Chunfeng II-326, II-391 Liu, Huai II-180 Liu, Jin-hua I-217 Liu, Ke I-1 Liu, Lei II-220 Liu, Linghui II-341 Liu, Linlin I-437 Liu, Lu II-354 Liu, Qian II-40 Liu, Qiangyan II-312 Liu, Qiumei I-480, II-491 Liu, Ran I-399 Liu, Xiaobin II-452 Liu, Xiaohong II-119 Liu, Xiaoli II-383 Liu, Xiaoxiao I-144 Liu, Yang II-367 Liu, Yingli II-282 Liu, Yunchuan II-468 Liu, Zhijing II-165 Liu, Zhongxin I-234 Lou, Lu II-24 Lou, Peihuang I-399 Lu, Jimin I-161 Lu, Xiaona I-123, I-512 Luo, Yuanyuan I-117 Lv, Wanjin I-62 Ma, Boyuan I-407 Ma, Defu II-48 Ma, Guofu II-96 Ma, Hanwu I-152 Ma, T.H. II-244 Ma, Xinghua II-468 Ma, Yan II-80 Mao, Xuezhi I-458 Meng, Qingbin II-367 Meng, Xiangjun II-289 Mi, Cui-lan I-473 Mi, Cuilan II-523 Ming, Yang I-9 Mirza, Anwar M. II-260 Mu, Xufang I-78
Niu, Ming I-161 Niu, Zengwei II-360 Park, Namje II-72 Pei, Wei-chi II-305 Peng, Yamian I-9, II-375 Pinheiro, Plácido Rogério II-252 Pires de Araújo, Luiz Jonatã II-252 Qian, Jun-lei I-415 Qin, Yueping I-109 Qin, Yu-ping I-217 Qiu, Wei II-135 Qu, Jingguo II-275, II-341, II-398 Qu, Liangdong II-228 Qu, Yunhua II-444 Shang, Xinchun I-296 Shen, Jianqiang II-8 Shen, Xiaoqin I-9 Shi, Ningguo II-64 Song, Jun II-24 Song, Lichuan I-274 Song, Meina I-1 Song, Qiang II-96 Su, Yongfu II-428 Sun, Ji II-180 Sun, Lu II-80 Sun, Xiujuan II-119 Sun, ZhenTao II-16 Suo, Yaohong II-31 Tang, Dunbing I-399 Tang, Hongmei II-88 Tang, Hui I-430 Tao, Lingyun I-209 Tian, Bing II-103 Tian, Hong-yan II-354 Tian, Mingxing I-48 Tong, Weiqin I-137 Tong, Xiao-Jun I-185 Tong, Xiaojun II-268 Tong, Yuxia I-512 Wan, Duanji I-311 Wang, Bin I-282 Wang, Cui-fang II-499 Wang, Dayong I-437 Wang, De-qiang I-267 Wang, Donghua I-368, II-334
Wang, Fang I-326, I-423 Wang, Feifei II-196 Wang, Gouli I-444 Wang, Hong-Lei II-354 Wang, Jian II-452 Wang, Jinpeng I-32 Wang, Jinran I-40, I-376, I-458 Wang, Kaili I-519, II-413, II-436, II-484 Wang, Ling I-519 Wang, Rujuan II-220 Wang, Shasha I-334 Wang, Tao I-17 Wang, Xia II-111 Wang, Xiaolei II-297 Wang, Youhan II-158 Wang, Yuehong I-109, I-347 Wang, Yue-hui II-354 Wang, Zhen I-93 Wang, Zhijiang II-406, II-436 Wei, Chuanan I-85 Wei, Mingjun I-450 Wei, Qing II-165 Wei, Qun I-169 Wei, Rong I-430 Wen, Wu I-334 Wu, Haiming II-383, II-391 Wu, Jianhui I-444 Wu, Jingyi II-220 Wu, Lifa I-258 Wu, Ruijuan I-437, II-383 Wu, Song I-407 Wu, Tingzeng II-48 Wu, Xian I-267 Wu, Xiujun I-226 Xiang, Guiyun I-318 Xian, Zhicong II-8 Xiao, Hongan I-423 Xiao, Jing I-217 Xie, Jie-hua I-54 Xu, Chaochun I-450 Xu, Fang II-8 Xu, Guangli I-169, II-406 Xu, Guofeng I-234 Xu, Jiuping I-318 Xu, Jun I-361, I-504, II-523 Xu, Ke I-1 Xu, Xin II-24 Xu, Xiujuan I-512 Xu, Yanhu I-340, II-468
Xu, Zheng II-142 Xu, Zhou I-209 Xue, Xiaoguang II-348 Yan, Guobin I-384 Yan, Hongcan II-452 Yan, Hua I-193 Yan, Manfu I-123 Yan, Teliang II-282 Yan, Yan I-101, I-117 Yan, Ying II-375 Yan, Zaizai II-103 Yang, Aimin II-275, II-383 Yang, Hongmei II-460 Yang, Lei I-399, II-56 Yang, Qianli II-326 Yang, Tangsheng II-212 Yang, Tao II-507 Yang, Xiang I-152 Yang, Xiaojing I-376 Yang, Yafeng I-361, I-504, II-523 Yang, Zhaosheng I-282 Yang, Zhi-gang I-415 Yao, Hong-guang II-204 Yin, Hongwu II-491 Yin, Jian II-150 Yin, Li I-93 Yin, Sufeng I-444 Yogesh, P. II-127 Yong, Longquan I-390 You, Shibing II-507 Yu, Liqun I-444 Yu, Sen I-70 Yu, Yaping II-319 Yu, Ying II-499 Yue, Xiaoyun I-40, I-376, I-458 Zang, Wenru I-25 Zhai, Jun II-188 Zhang, Bin II-56 Zhang, Buying II-491 Zhang, Cai-Ming I-193 Zhang, Dengfan I-152 Zhang, Guohua I-17 Zhang, Hao II-165
Zhang, Huancheng II-375, II-398, II-428, II-436 Zhang, Jinying I-304 Zhang, Jiuling I-109, I-347 Zhang, Lin I-282 Zhang, Qianbin II-142 Zhang, Qingbin I-407 Zhang, Qiuna I-101, II-119 Zhang, Shuang I-217 Zhang, Tongliang I-430 Zhang, Wenxiu I-326 Zhang, Xiaohua I-40 Zhang, Xiaoxiang I-488 Zhang, Yabin I-384 Zhang, Yanbo I-340 Zhang, Yanjuan I-304, I-465 Zhang, Yanru II-367 Zhang, Y.B. II-244 Zhang, Yonghui II-268 Zhang, Yongli II-119 Zhang, Zhong-jie I-267 Zhao, Chenxia I-304, II-312 Zhao, De-peng I-267 Zhao, Guohao I-201 Zhao, Haiyong II-165 Zhao, Hongli II-289 Zhao, Huijuan I-304 Zhao, Xiuping I-496 Zhao, Zhenwei I-296 Zheng, Ning II-348 Zheng, Shiqiu I-480, I-519 Zheng, Yu II-468 Zhou, Guanchen II-406 Zhou, Kaitao II-188 Zhou, Lihui I-101 Zhou, Ruilong I-340 Zhou, Xinquan I-347 Zhu, Jundong I-274 Zhu, Li I-354 Zhu, Lulu I-137 Zhu, Yanwei II-119 Zhu, Zhiliang II-56 Zou, Wei I-54 Zou, Xuan II-8