Lecture Notes in Artificial Intelligence Edited by R. Goebel, J. Siekmann, and W. Wahlster
Subseries of Lecture Notes in Computer Science
5990
Ngoc Thanh Nguyen, Manh Thanh Le, Jerzy Świątek (Eds.)
Intelligent Information and Database Systems Second International Conference, ACIIDS Hue City, Vietnam, March 24-26, 2010 Proceedings, Part I
Series Editors Randy Goebel, University of Alberta, Edmonton, Canada Jörg Siekmann, University of Saarland, Saarbrücken, Germany Wolfgang Wahlster, DFKI and University of Saarland, Saarbrücken, Germany Volume Editors Ngoc Thanh Nguyen Wroclaw University of Technology, Institute of Informatics Str. Wyb. Wyspianskiego 27, 50-370 Wroclaw, Poland E-mail:
[email protected] Manh Thanh Le Hue University, Str. Le Loi 3, Hue City, Vietnam E-mail:
[email protected] Jerzy Świątek Wroclaw University of Technology, Faculty of Computer Science and Management, Str. Lukasiewicza 5, 50-370 Wroclaw, Poland E-mail:
[email protected]
Library of Congress Control Number: 2010922322
CR Subject Classification (1998): I.2, H.3, H.2.8, H.4, H.5, F.1, K.4
LNCS Sublibrary: SL 7 – Artificial Intelligence
ISSN 0302-9743
ISBN-10 3-642-12144-6 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-12144-9 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. springer.com © Springer-Verlag Berlin Heidelberg 2010 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper 06/3180
Preface
The 2010 Asian Conference on Intelligent Information and Database Systems (ACIIDS) was the second event of the series of international scientific conferences for research and applications in the field of intelligent information and database systems. The aim of ACIIDS 2010 was to provide an international forum for scientific research in the technologies and applications of intelligent information and database systems. ACIIDS 2010 was co-organized by Hue University (Vietnam) and Wroclaw University of Technology (Poland) and took place in Hue city (Vietnam) during March 24–26, 2010. We received almost 330 papers from 35 countries. Each paper was peer reviewed by at least two members of the International Program Committee and International Reviewer Board. Only the 96 best papers were selected for oral presentation and publication in the two volumes of the ACIIDS 2010 proceedings. The papers included in the proceedings cover the following topics: artificial social systems, case studies and reports on deployments, collaborative learning, collaborative systems and applications, data warehousing and data mining, database management technologies, database models and query languages, database security and integrity, e-business, e-commerce, e-finance, e-learning systems, information modeling and requirements engineering, information retrieval systems, intelligent agents and multiagent systems, intelligent information systems, intelligent internet systems, intelligent optimization techniques, object-relational DBMS, ontologies and information sharing, semi-structured and XML database systems, unified modeling language and unified processes, Web services and Semantic Web, computer networks and communication systems. Accepted and presented papers highlight new trends and challenges of intelligent information and database systems. The presenters showed how new research could lead to new and innovative applications. We hope you will find these results useful and inspiring for your future research. We would like to express our sincere thanks to the Honorary Chairs, Van Toan Nguyen (President of Hue University, Vietnam) and Paul P. Wang (Duke University, USA) for their support. Our special thanks go to the Program Co-chairs, all Program and Reviewer Committee members and all the additional reviewers for their valuable efforts in the review process, which helped us to guarantee the highest quality of the papers selected for the conference. We cordially thank the organizers and Chairs of the special sessions, whose work contributed essentially to the success of the conference. We would like to thank our main sponsors, Hue University and Wroclaw University of Technology. Our special thanks are due also to Springer for publishing the proceedings, and to the other sponsors for their kind support.
VI
Preface
We wish to thank the members of the Organizing Committee for their very substantial work, especially those who played essential roles: Huu Hanh Hoang, Radosław Katarzyniak (Organizing Chairs) and the members of the Local Organizing Committee for their excellent work. Our special thanks go to the Foundation for Development of Wroclaw University of Technology for its efficiency in dealing with the registration and management issues. We would also like to express our thanks to the Keynote Speakers (Leszek Rutkowski, A. Min Tjoa, Jerzy Świątek and Leon S.L. Wang) for their interesting and informative talks of world-class standard. We cordially thank all the authors for their valuable contributions and the other participants of this conference. The conference would not have been possible without their support. Thanks are also due to many experts who contributed to making the event a success.
Ngoc Thanh Nguyen Manh Thanh Le Jerzy Świątek
ACIIDS 2010 Conference Organization
Honorary Chairs Van Toan Nguyen Paul P. Wang
President of Hue University, Vietnam Duke University, USA
General Chairs Manh Thanh Le Jerzy Świątek
Hue University, Vietnam Wroclaw University of Technology, Poland
Program Chair Ngoc Thanh Nguyen
Wroclaw University of Technology, Poland
Program Co-chairs Shyi-Ming Chen Kwaśnicka Halina Huu Hanh Hoang Jason J. Jung Edward Szczerbicki
National Taiwan University of Science and Technology, Taiwan Wroclaw University of Technology, Poland Hue University, Vietnam Yeungnam University, Korea University of Newcastle, Australia
Local Organizing Co-chairs Huu Hanh Hoang Radoslaw Katarzyniak
Hue University, Vietnam Wroclaw University of Technology, Poland
Workshop Chairs Radoslaw Katarzyniak Mau Han Nguyen
Wroclaw University of Technology, Poland Hue University, Vietnam
Organizing Committee Marcin Maleszka Adrianna Kozierkiewicz-Hetmańska Anna Kozlowska Tran Dao Dong Phan Duc Loc
Hoang Van Liem Dao Thanh Hai Duong Thi Hoang Oanh Huynh Dinh Chien
VIII
Organization
Keynote Speakers Leszek Rutkowski Polish Academy of Sciences, Technical University of Czestochowa, Poland A. Min Tjoa Vienna University of Technology, Austria Jerzy Świątek Wroclaw University of Technology, Poland Leon S.L. Wang National University of Kaohsiung, Taiwan
Special Sessions 1. Multiple Model Approach to Machine Learning (MMAML 2010) Oscar Cordón, European Centre for Soft Computing, Spain Przemysław Kazienko, Wroclaw University of Technology, Poland Bogdan Trawiński, Wroclaw University of Technology, Poland 2. Applications of Intelligent Systems (AIS 2010) Shyi-Ming Chen, National Taiwan University of Science and Technology, Taipei, Taiwan 3. Modeling and Optimization Techniques in Information Systems, Database Systems and Industrial Systems (MOT 2010) Le Thi Hoai An, Paul Verlaine University – Metz, France Pham Dinh Tao, INSA-Rouen, France
International Program Committee Babica Costin Bielikova Maria Bressan Stephane Bui The Duy Cao Longbing Cao Tru Hoang Capkovic Frantisek Cheah Wooi Ping
University of Craiova, Romania Slovak University of Technology, Slovakia National University of Singapore, Singapore National University Hanoi, Vietnam University of Technology Sydney, Australia Ho Chi Minh City University of Technology, Vietnam Slovak Academy of Sciences, Slovakia Multimedia University, Malaysia
Organization
Chen Shyi-Ming Cuzzocrea Alfredo Dang Khanh Tran Davidsson Paul Forczmański Paweł Frejlichowski Dariusz Giorgini Paolo Halina Kwasnicka Helin Heikki Ho Tu Bao Hong Tzung-Pei Hoang Huu Hanh Hoang Trinh Hon Janiak Adam Jezic Gordan Jung Jason J. Kacprzyk Janusz Karim S. Muhammad Kim Cheonshik Kim Chonggun Krol Dariusz Kuwabara Kazuhiro Lau Raymond Le Thi Hoai An Lee Eun-Ser Lee Huey-Ming Lee Zne-Jung Lewis Rory Lingras Pawan Luong Chi Mai Matsuo Tokuro Narasimha Deepak Laxmi Numao Masayuki Nguyen Ngoc Thanh Nguyen Thanh Binh
National Taiwan University of Science and Technology, Taiwan University of Calabria, Italy Ho Chi Minh City University of Technology, Vietnam Blekinge Institute of Technology, Sweden West Pomeranian University of Technology, Poland West Pomeranian University of Technology, Poland University of Trento, Italy Wroclaw University of Technology, Poland TeliaSonera, Finland Japan Advanced Institute of Science and Technology, Japan National University of Kaohsiung, Taiwan Hue University, Vietnam Ulsan University, Korea Wroclaw University of Technology, Poland University of Zagreb, Croatia Yeungnam University, Korea Polish Academy of Sciences, Poland Quaid-i-Azam University, Pakistan Anyang University, Korea Yeungnam University, Korea Wroclaw University of Technology, Poland Ritsumeikan University, Japan Systems City University of Hong Kong, Hong Kong University Paul Verlaine - Metz, France Andong National University, Korea Chinese Culture University, Taiwan Huafan University, Taiwan University of Colorado at Colorado Springs, USA Saint Mary's University, Canada Institute of Information Technology, Vietnam Yamagata University, Japan University of Malaysia, Malaysia Osaka University, Japan Wroclaw University of Technology, Poland Hue University, Vietnam
IX
X
Organization
Okraszewski Zenon Ou Chung-Ming Pan Jeng-Shyang Pandian Vasant Paprzycki Marcin Pedrycz Witold Prasad Bhanu Phan Cong Vinh Selamat Ali Shen Victor R.L. Sobecki Janusz Stinckwich Serge Szczerbicki Edward Takama Yasufumi Trawinski Bogdan Truong Hong-Linh Zhang Wen-Ran
Wroclaw University of Technology, Poland Kainan University, Taiwan National Kaohsiung University of Applied Sciences, Taiwan PCO Global Polish Academy of Sciences, Poland Canada Research Chair, Canada Florida A&M University, USA London South Bank University, UK Universiti Teknologi Malaysia, Malaysia National Taipei University, Taiwan Wroclaw University of Technology, Poland UMI 209 UMMISCO • UPMC, IRD, MSI University of Newcastle, Australia Tokyo Metropolitan University, Japan Wroclaw University of Technology, Poland Vienna University of Technology, Austria Georgia Southern University, USA
Program Committees of Special Sessions Special Session on Multiple Model Approach to Machine Learning (MMAML 2010) Jesús Alcalá-Fdez Oscar Castillo Suphamit Chittayasothorn Emilio Corchado Oscar Cordón José Alfredo F. Costa Bogdan Gabryś Patrick Gallinari Lawrence O. Hall Francisco Herrera Tzung-Pei Hong Hisao Ishibuchi Yaochu Jin Nikola Kasabov Przemysław Kazienko Rudolf Kruse Mark Last
University of Granada, Spain Tijuana Institute of Technology, Mexico King Mongkut's Institute of Technology Ladkrabang, Thailand University of Burgos, Spain European Centre for Soft Computing, Spain Federal University (UFRN), Brazil Bournemouth University, UK Pierre et Marie Curie University, France University of South Florida, USA University of Granada, Spain National University of Kaohsiung, Taiwan Osaka Prefecture University, Japan Honda Research Institute Europe, Germany Auckland University of Technology, New Zealand Wrocław University of Technology, Poland Otto-von-Guericke University of Magdeburg, Germany Ben-Gurion University of the Negev, Israel
Organization
Kun Chang Lee Kyoung Jun Lee Urszula Markowska-Kaczmar Kazumi Nakamatsu Yew-Soon Ong Dymitr Ruta Robert Sabourin Ke Tang Bogdan Trawiński Pandian Vasant Shouyang Wang Michał Wozniak Lean Yu Zhongwei Zhang Zhi-Hua Zhou
Sungkyunkwan University, Korea Kyung Hee University, Korea Wrocław University of Technology, Poland University of Hyogo, Japan Nanyang Technological University, Singapore British Telecom, UK University of Quebec, Canada University of Science and Technology of China, China Wrocław University of Technology, Poland University Technology Petronas, Malaysia Academy of Mathematics and Systems Science, China Wrocław University of Technology, Poland Academy of Mathematics and Systems Science, China University of Southern Queensland, Australia Nanjing University, China
Modeling and Optimization Techniques in Information Systems, Database Systems and Industrial Systems (MOT 2010) Kondo Adjallah Riad Aggoune Aghezzaf El Houssaine Lydia Boudjeloud Bouvry Pascal Brzostowski Krzysztof Brzykcy Grażyna, Brieu Conan-Guez Czarnowski Ireneusz Do Thanh Nghi Drapala Jarosław Forczmański Pawel Fraś Dr. Mariusz Alain Gelly Hao Jin-Kao Francois-Xavier Jollois Kubik Tomasz Le Thi Hoai An Lisser Abdel Marie Luong Nalepa Grzegorz Narasimha Deepak Laxmi Nguyen Vincent Orski Donat Pham Dinh Tao Jean-Marie
XI
Paul Verlaine University - Metz, France CRP-Tudor, Luxembourg University of Gent, Belgium Paul Verlaine University-Metz, France University of Luxembourg, Luxembourg Wroclaw University of Technology, Poland Poznań University of Technology, Poland Paul Verlaine University - Metz, France Gdynia Maritime University, Poland ENST Brest, France Wroclaw University of Technology, Poland West Pomeranian University of Technology, Poland Wrocław University of Technology, Poland Paul Verlaine University - Metz, France University of Algiers, France University of Paris V, France Wrocław University of Technology, Poland Paul Verlaine University – Metz, France, Chair University of Paris 11, France University of Paris 13, France AGH University of Science and Technology, Poland University of Malaya, Malaysia The University of New South Wales, Australia Wrocław University of Technology, Poland INSA-Rouen, France, Co-chair Research Director, INRIA-Metz, France
XII
Organization
Rekuć Witold Ibrahima Sakho Daniel Singer
Wrocław University of Technology, Poland Paul Verlaine University - Metz, France Paul Verlaine University - Metz, France
Additional Reviewers Chang Chung C. Chen Jr-Shian Chen Rung-Ching Chen Yen-Lin Chien Chih-Yao Deutsch Alin Dobbie Gill Felea Victor Garrigos Irene Gely Alain Jeng Albert B. Lee Li-Wei Lee Ting-Kuei Lehner Wolfgang Leu Yungho Lu Kun-Yung Manthey Rainer Mazon Jose Norberto Morzy Tadeusz Park Dong-Chul Proth Jean-Marie Shih An-Zen Suciu Dan Thomas Wojciech Vidyasankar K. Vossen Gottfried Wang Cheng-Yi Wang Jia-wen Wang Yongli Woźniak Michał Wong Limsoon
Chinese Culture University, Taiwan Hungkuang University, Taiwan Chaoyang University of Technology, Taiwan National Taipei University of Technology, Taiwan National Taiwan University of Science and Technology, Taiwan University of California, USA University of Auckland, New Zealand "Al.I.Cuza" University of Iasi, Romania University of Alicante, Spain Paul Verlaine University, France Jinwen University of Science and Technology, Taiwan National Taiwan University of Science and Technology, Taiwan National Taiwan University of Science and Technology, Taiwan Technical University of Dresden, Germany National Taiwan University of Science and Technology, Taiwan National United University, Taiwan University of Bonn, Germany University of Alicante, Spain Poznan University of Technology, Poland Myong Ji University, South Korea INRIA-Metz, France Jin-Wen Science and Technology University, Taiwan University of Washington, USA Wrocław University of Technology, Poland Memorial University of Newfoundland, Canada University of Muenster, Germany National Taiwan University of Science and Technology, Taiwan Nanhua University, Taiwan North China Electric Power University, China Wroclaw University of Technology, Poland National University of Singapore, Singapore
Table of Contents – Part I
Keynote Speeches Selected Problems of the Static Complex Systems Identification . . . . . . . . . . . . Jerzy Świątek Combining and Integrating Advanced IT-Concepts with Semantic Web Technology Mashups Architecture Case Study . . . . . . . . . . . . Amin Anjomshoaa, A Min Tjoa, and Andreas Hubmer
1
13
Intelligent Database Systems Web Page Augmentation with Client-Side Mashups as Meta-Querying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stephan Hagemann and Gottfried Vossen Soft Computing Techniques for Intrusion Detection of SQL-Based Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jaroslaw Skaruz, Jerzy Pawel Nowacki, Aldona Drabik, Franciszek Seredynski, and Pascal Bouvry Efficiently Querying XML Documents Stored in RDBMS in the Presence of Dewey-Based Labeling Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . Moad Maghaydah and Mehmet A. Orgun From Data Mining to User Models in Evolutionary Databases . . . . . . . . . C´esar Andr´es, Manuel N´ un ˜ez, and Yaofeng Zhang How to Construct State Registries–Matching Undeniability with Public Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Przemyslaw Kubiak, Miroslaw Kutylowski, and Jun Shao
23
33
43 54
64
Data Warehouses and Data Mining Regions of Interest in Trajectory Data Warehouse . . . . . . . . . . . . . . . . . . . . Marcin Gorawski and Pawel Jureczek AFOPT-Tax: An Efficient Method for Mining Generalized Frequent Itemsets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yu Xing Mao and Bai Le Shi A Novel Method to Find Appropriate for DBSCAN . . . . . . . . . . . . . . . . . Jamshid Esmaelnejad, Jafar Habibi, and Soheil Hassas Yeganeh
74
82 93
XIV
Table of Contents – Part I
Towards Semantic Preprocessing for Mining Sensor Streams from Heterogeneous Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jason J. Jung
103
HOT aSAX: A Novel Adaptive Symbolic Representation for Time Series Discords Discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ninh D. Pham, Quang Loc Le, and Tran Khanh Dang
113
The Vector Space Models for Finding Co-occurrence Names as Aliases in Thai Sports News . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Thawatchai Suwanapong, Thanaruk Theeramunkong, and Ekawit Nantajeewarawat Efficiently Mining High Average Utility Itemsets with a Tree Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chun-Wei Lin, Tzung-Pei Hong, and Wen-Hsiang Lu
122
131
Intelligent Information Retrieval Human Activity Mining Using Conditional Random Fields and Self-Supervised Learning . . . . . . . . . . . . Nguyen Minh The, Takahiro Kawamura, Hiroyuki Nakagawa, Ken Nakayama, Yasuyuki Tahara, and Akihiko Ohsuga
140
150
159
169
180
Technologies for Intelligent Information Systems Generation of a FIR Filter by Means of a Neural Network for Improvement of the Digital Images Obtained Using the Acquisition Equipment Based on the Low Quality CCD Structure . . . . . . . . . . . . Jakub Peksiński and Grzegorz Mikolajczak
190
Table of Contents – Part I
Robust Fuzzy Clustering Using Adaptive Fuzzy Meridians . . . . . . . . . . . . . Tomasz Przybyla, Janusz Je˙zewski, Janusz Wr´ obel, and Krzysztof Horoba
XV
200
An Ambient Agent Model Incorporating an Adaptive Model for Environmental Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jan Treur and Muhammad Umair
210
A New Algorithm for Divisible Load Scheduling with Different Processor Available Times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Amin Shokripour, Mohamed Othman, and Hamidah Ibrahim
221
Evolutionary Computational Intelligence in Solving the Fractional Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Muhammad Asif Zahoor Raja, Junaid Ali Khan, and I.M. Qureshi
231
A Neural Network Optimization-Based Method of Image Reconstruction from Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Robert Cierniak
241
Entrance Detection of Buildings Using Multiple Cues . . . . . . . . . . . . . . . . . Suk-Ju Kang, Hoang-Hon Trinh, Dae-Nyeon Kim, and Kang-Hyun Jo
251
Spectrum Sharing with Buffering in Cognitive Radio Networks . . . . . . . . . Chau Pham Thi Hong, Youngdoo Lee, and Insoo Koo
261
Extracting Chemical Reactions from Thai Text for Semantics-Based Information Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Peerasak Intarapaiboon, Ekawit Nantajeewarawat, and Thanaruk Theeramunkong Coalition Formation Using Combined Deterministic and Evolutionary Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Halina Kwasnicka and Wojciech Gruszczyk
271
282
Applications of Intelligent Systems A New CBIR System Using SIFT Combined with Neural Network and Graph-Based Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nguyen Duc Anh, Pham The Bao, Bui Ngoc Nam, and Nguyen Huy Hoang Computer-Aided Car Over-Taking System . . . . . . . . . . . . . . . . . . . . . . . . . . . Magdalena Bara´ nska Learning Model for Reducing the Delay in Traffic Grooming Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Viet Minh Nhat Vo
294
302
308
XVI
Table of Contents – Part I
Power Load Forecasting Using Data Mining and Knowledge Discovery Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yongli Wang, Dongxiao Niu, and Yakun Wang A Collaborative Framework for Multiagent Systems . . . . . . . . . . . . . . . . . . Moamin Ahmed, Mohd Sharifuddin Ahmad, and Mohd Zaliman M. Yusoff Solving Unbounded Knapsack Problem Based on Quantum Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Rung-Ching Chen, Yun-Hou Huang, and Ming-Hsien Lin Detecting Design Pattern Using Subgraph Discovery . . . . . . . . . . . . . . . . . . Ming Qiu, Qingshan Jiang, An Gao, Ergan Chen, Di Qiu, and Shang Chai A Fuzzy Time Series-Based Neural Network Approach to Option Price Forecasting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yungho Leu, Chien-Pang Lee, and Chen-Chia Hung Integrated Use of Artificial Neural Networks and Genetic Algorithms for Problems of Alarm Processing and Fault Diagnosis in Power Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Paulo C´ıcero Fritzen, Ghendy Cardoso Jr., Jo˜ ao Montagner Zauk, Adriano Peres de Morais, Ubiratan H. Bezerra, and Joaquim A.P.M. Beck Using Data from an AMI-Associated Sensor Network for Mudslide Areas Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Cheng-Jen Tang and Miau Ru Dai Semantic Web Service Composition System Supporting Multiple Service Description Languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nhan Cach Dang, Duy Ngan Le, Thanh Tho Quan, and Minh Nhut Nguyen Forecasting Tourism Demand Based on Improved Fuzzy Time Series Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hung-Lieh Chou, Jr-Shian Chen, Ching-Hsue Cheng, and Hia Jong Teoh Weighted Fuzzy Time Series Forecasting Model . . . . . . . . . . . . . . . . . . . . . . Jia-Wen Wang and Jing-Wei Liu A System for Assisting English Oral Proficiency – A Case Study of the Elementary Level of General English Proficiency Test (GEPT) in Taiwan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chien-Hsien Huang and Huey-Ming Lee
319 329
339 350
360
370
380
390
399
408
416
Table of Contents – Part I
XVII
Verification of Stored Security Data in Computer System . . . . . . . . . . . . . Heng-Sheng Chen, Tsang-Yean Lee, and Huey-Ming Lee
426
Ant Colony Clustering Using Mobile Agents as Ants and Pheromone . . . Masashi Mizutani, Munehiro Takimoto, and Yasushi Kambayashi
435
Mining Source Codes to Guide Software Development . . . . . . . . . . . . . . . . Sheng-Kuei Hsu and Shi-Jen Lin
445
Forecasting Stock Market Based on Price Trend and Variation Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ching-Hsue Cheng, Chung-Ho Su, Tai-Liang Chen, and Hung-Hsing Chiang
455
Intelligent Prebuffering Using Position Oriented Database for Mobile Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ondrej Krejcar
465
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
475
Table of Contents – Part II
Web-Based Systems for Data Management Enhancing Accuracy of Recommender System through Adaptive Similarity Measures Based on Hybrid Features . . . . . . . . . . . . . . . . . . . . . . . Deepa Anand and Kamal K. Bharadwaj
1
Exploring Wikipedia and Text Features for Named Entity Disambiguation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hien T. Nguyen and Tru H. Cao
11
Telemedical System in Evaluation of Auditory Brainsteam Responses and Support of Diagnosis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Piotr Strzelczyk, Ireneusz Wochlik, Ryszard Tadeusiewicz, Andrzej Izworski, and Jaroslaw Bulka
21
Service Discovery in the SOA System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Krzysztof Brzostowski, Witold Reku´c, Janusz Sobecki, and Leopold Szczurowski
29
An Information Theoretic Web Site Navigability Classification . . . . . . . . . Cheng-Tzu Wang, Chih-Chung Lo, Chia-Hsien Tseng, and Jong-Ming Chang
39
MACRO-SYS: An Interactive Macroeconomics Simulator for Advanced Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C´esar Andr´es, Mercedes G. Merayo, and Yaofeng Zhang Managing Web Services in SOKU Systems . . . . . . . . . . . . . . . . . . . . . . . . . . Agnieszka Prusiewicz
47 57
Autonomous Systems A Computational Analysis of Cognitive Effort . . . . . . . . . . . . . . . . . . . . . . . Luca Longo and Stephen Barrett An Algorithm for Computing Optimal Coalition Structures in Non-linear Logistics Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chattrakul Sombattheera
65
75
Collaborative Systems Moral Hazard Resolved by Common-Knowledge in S5n Logic . . . . . . . . . . Takashi Matsuhisa
85
XX
Table of Contents – Part II
An Algorithmic Approach to Social Knowledge Processing and Reasoning Based on Graph Representation – A Case Study . . . . . . . . . . . Zbigniew Tarapata, Mariusz Chmielewski, and Rafal Kasprzyk A Real Time Player Tracking System for Broadcast Tennis Video . . . . . . Bao Dang, An Tran, Tien Dinh, and Thang Dinh Twittering for Earth: A Study on the Impact of Microblogging Activism on Earth Hour 2009 in Australia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Marc Cheong and Vincent Lee Student Courses Recommendation Using Ant Colony Optimization . . . . . Janusz Sobecki and Jakub M. Tomczak
93 105
114 124
Web Ontology Building System for Novice Users: A Step-by-Step Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Shotaro Yasunaga, Mitsunori Nakatsuka, and Kazuhiro Kuwabara
134
Automatic Lexical Annotation Applied to the SCARLET Ontology Matcher . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Laura Po and Sonia Bergamaschi
144
State of the Art of Semantic Business Process Management: An Investigation on Approaches for Business-to-Business Integration . . . . . . . Hanh Huu Hoang, Phuong-Chi Thi Tran, and Thanh Manh Le
154
Tools and Applications Evolving Concurrent Petri Net Models of Epistasis . . . . . . . . . . . . . . . . . . . Michael Mayo and Lorenzo Beretta
166
Partial Orderings for Ranking Help Functions . . . . . . . . . . . . . . . . . . . . . . . Sylvia Encheva and Sharil Tumin
176
Robust Prediction with ANNBFIS System . . . . . . . . . . . . . . . . . . . . . . . . . . Robert Czabanski, Michal Jezewski, Krzysztof Horoba, Janusz Je˙zewski, and Janusz Wr´ obel
185
An Unsupervised Learning and Statistical Approach for Vietnamese Word Recognition and Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hieu Le Trung, Vu Le Anh, and Kien Le Trung Named Entity Recognition for Vietnamese . . . . . . . . . . . . . . . . . . . . . . . . . . Dat Ba Nguyen, Son Huu Hoang, Son Bao Pham, and Thai Phuong Nguyen Task Allocation in Mesh Connected Processors with Local Search Meta-heuristic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wojciech Kmiecik, Marek Wojcikowski, Leszek Koszalka, and Andrzej Kasprzak
195 205
215
Table of Contents – Part II
Computer System for Making Efficiency Analysis of Meta-heuristic Algorithms to Solving Nesting Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pawel Bogalinski, Leszek Koszalka, Iwona Pozniak-Koszalka, and Andrzej Kasprzak Towards Collaborative Library Marketing System for Improving Patron Satisfaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Toshiro Minami
XXI
225
237
Multiple Model Approach to Machine Learning DAG Scheduling on Heterogeneous Distributed Systems Using Learning Automata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Habib Moti Ghader, Davood KeyKhosravi, and Ali HosseinAliPour
247
Visualization of the Similar Protein Structures Using SOM Neural Network and Graph Spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Do Phuc and Nguyen Thi Kim Phung
258
Real Time Traffic Sign Detection Using Color and Shape-Based Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tam T. Le, Son T. Tran, Seichii Mita, and Thuc D. Nguyen
268
Standard Additive Fuzzy System for Stock Price Forecasting . . . . . . . . . . Sang Thanh Do, Thi Thanh Nguyen, Dong-Min Woo, and Dong-Chul Park
279
Complex Neuro-Fuzzy Self-learning Approach to Function Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Chunshien Li and Tai-Wei Chiang
289
On the Effectiveness of Gene Selection for Microarray Classification Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zhongwei Zhang, Jiuyong Li, Hong Hu, and Hong Zhou
300
A Multiple Combining Method for Optimizing Dissimilarity-Based Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sang-Woon Kim and Seunghwan Kim
310
A Comparative Study on the Performance of Several Ensemble Methods with Low Subsampling Ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zaman Faisal and Hideo Hirose
320
Analysis of Bagging Ensembles of Fuzzy Models for Premises Valuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Marek Krzystanek, Tadeusz Lasota, Zbigniew Telec, and Bogdan Trawi´ nski
330
XXII
Table of Contents – Part II
Comparison of Bagging, Boosting and Stacking Ensembles Applied to Real Estate Appraisal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Magdalena Graczyk, Tadeusz Lasota, Bogdan Trawi´ nski, and Krzysztof Trawi´ nski
340
A Three-Scan Algorithm to Mine High On-Shelf Utility Itemsets . . . . . . . Guo-Cheng Lan, Tzung-Pei Hong, and Vincent S. Tseng
351
Incremental Prediction for Sequential Data . . . . . . . . . . . . . . . . . . . . . . . . . . Tomasz Kajdanowicz and Przemyslaw Kazienko
359
Predictive Maintenance with Multi-target Classification Models . . . . . . . . Mark Last, Alla Sinaiski, and Halasya Siva Subramania
368
Modeling and Optimization Techniques in Information Systems, Database Systems and Industrial Systems Multiresolution Models and Algorithms of Movement Planning and Their Application for Multiresolution Battlefield Simulation . . . . . . . . . . . Zbigniew Tarapata
378
An Algorithm for Generating Efficient Outcome Points for Convex Multiobjective Programming Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nguyen Thi Bach Kim and Le Quang Thuy
390
Resources Utilization in Distributed Environment for Complex Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Adam Grzech
400
DCA for Minimizing the Cost and Tardiness of Preventive Maintenance Tasks under Real-Time Allocation Constraint . . . . . . . . . . . . . . . . . . . . . . . Tran Duc Quynh, Le Thi Hoai An, and Kondo Hloindo Adjallah
410
Cooperative Agents Based-Decentralized and Scalable Complex Task Allocation Approach Pro Massive Multi-Agents System . . . . . . . . . . . . . . . Zaki Brahmi, Mohamed Mohsen Gammoudi, and Malek Ghenima
420
Mining Informative Rule Set for Prediction over a Sliding Window . . . . . Nguyen Dat Nhan, Nguyen Thanh Hung, and Le Hoai Bac
431
Using Text Classification Method in Relevance Feedback . . . . . . . . . . . . . . Zilong Chen and Yang Lu
441
Fast and Near Optimal Parity Assignment in Palette Images with Enhanced CPT Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nguyen Hai Thanh and Phan Trung Huy
450
Table of Contents – Part II
XXIII
Solving QoS Routing Problems by DCA . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ta Anh Son, Le Thi Hoai An, Djamel Khadraoui, and Pham Dinh Tao
460
Perceptual Watermarking Using a Multi-scale JNC Model . . . . . . . . . . . . . Phi-Bang Nguyen, Marie Luong, and Azeddine Beghdadi
471
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
481
Selected Problems of the Static Complex Systems Identification
Jerzy Świątek
Institute of Computer Science, Faculty of Computer Science and Management, Wroclaw University of Technology, Poland
[email protected]
Abstract. In the paper the fundamental identification problems are formulated. For the static case the plant parameters determination problem is presented, and the choice of the optimal model is discussed. The description of the complex plant is introduced. For the static complex plant the following problems are presented: 1. Identification problem with restricted measurement possibilities. 2. Global and local identification.
1 Introduction
Modeling a large scale plant generates new tasks of complex system identification. Particularly new problems can be found when designing computer control systems for a complex process or when modeling complex systems of a different nature (for example: biological plants). Usually, in a large scale system the elementary processes are distinguished and the connections between the elements are given. In a production process the elementary operations are the elements of the complex system, and the structure of the system is given by the time schedule of operations determined by technological conditions. In this case we have the problem of modeling a complex of operations system [1], [3], [4]. Investigation of technical or biological plants [2], [4], [5], [6] usually leads to modeling and identification of a complex input-output system. In this case we can distinguish sub-processes (i.e. elementary processes) with some inputs and outputs which can be operated separately. The connections between the inputs and outputs of the elements give us a complex system. Modeling of such a system is connected with modeling of each process separately, taking into account the connections between the elements. The new way of looking at the investigated process as at a complex input-output system generates new ideas and methods of complex system modeling and identification. In the area of system identification we can distinguish the following investigated ideas:
– Identification of complex systems with restricted measurement possibilities.
– Locally and globally optimal modeling of complex systems.
Identification of complex systems with restricted measurement possibilities deals with the plant in which elementary processes are distinguished. The structure of
the system is known, i.e. the connections between the elements are defined. The description of each element (subsystem) is known with accuracy to parameters. It is assumed that all external inputs and some (not all) outputs are measured. The selection of output to be measured depends on measurement possibilities. Sometimes some inner outputs could not be measured or measurements are dangerous or expensive. In this case the separability problem arises, i.e. possibility to determine the plant parameter on the basis of collected data. The problem of the identification of complex system with restricted measurement possibilities has been discussed in [4], [8], [9], [10]. The globally and locally optimal modeling deals with the complex process of known structure, but description of the elementary process is unknown. For each subsystem the description is proposed. The problem is to determine the optimal model parameters. There are two approaches. The first one is to determine optimal model parameters separately for each subsystem. This is locally optimal identification. The second approach is to determine optimal model parameters taking into account the model of the system as a whole. This gives globally optimal models. Multi-criteria approach gives the possibility to determine globally optimal parameters with respect to local model quality. Globally and locally optimal models and respective identification algorithms are presented in [4], [6], [7], [9], [10]. In this chapter the identification problem for simple input output static system for deterministic case is analyzed. Two typical cases, i.e. determination of the plant characteristic parameters and optimal model parameters, are introduced. They will correspond to further complex system identification tasks considered in this chapter. Next the description of the input output complex systems is proposed. Later on the separability problem is discussed. Finally, globally and locally optimal modeling is presented.
2 Identification
The identification problem is to determine the model of the investigated process on the basis of measurement data collected during the experiment. More precisely [1], for the identification plant the problem is to find the relation between input values u^(1), u^(2), ..., u^(S) and output values y^(1), y^(2), ..., y^(L). For further consideration let us assume that the input and output are S- and L-dimensional vectors, respectively. We denote them in the form:

u = [u^(1) u^(2) ... u^(S)]^T,   y = [y^(1) y^(2) ... y^(L)]^T,   (1)

where: u – plant input vector, u ∈ U ⊆ R^S, U is the S-dimensional input space, R is the set of real numbers; y – plant output vector, y ∈ Y ⊆ R^L, Y is the L-dimensional output space; T denotes vector transposition. The problem is to determine the relation between input u and output y on the basis of experimental data. Denote by y_n the result of the n-th output measurement for
the given input u_n, n = 1, 2, ..., N, where N is the number of measurements. The results of the measurements are collected in the following matrices:

U_N = [u_1 u_2 ... u_N],   Y_N = [y_1 y_2 ... y_N],   (2)
where: U_N, Y_N are the matrices of input and output measurements, respectively. The identification problem is to find the identification algorithm which allows getting the plant characteristic (plant model). The identification algorithm depends on the knowledge about the investigated plant. There are two possible cases. The first one is that we know the plant characteristic with accuracy to parameters. In this case the problem is reduced to the determination of the plant parameters. The second one is to find the best approximation of the plant characteristic on the basis of the experiment.

2.1 Determination of the Plant Parameters
Now let us assume that the plant characteristic is known with accuracy to parameters, i.e. the static plant characteristic has the form:

y = F(u, θ),   (3)

where: F – known function, and θ is the R-dimensional unknown vector of the plant characteristic parameters, i.e.:

θ = [θ^(1) θ^(2) ... θ^(R)]^T,   (4)

where: θ – unknown vector of parameters, θ ∈ Θ ⊆ R^R, Θ – parameter space. During the experiment we measure points from the plant characteristic. It means that each measurement point must fulfil the relation (3), i.e.:

y_n = F(u_n, θ),   n = 1, 2, ..., N.   (5)

The above system of equations can be rewritten in the compact form:

Y_N = F̄(U_N, θ),   (6)

where:

[F(u_1, θ) F(u_2, θ) ... F(u_N, θ)] ≡ F̄(U_N, θ).   (7)

The solution of the system of equations (6) with respect to θ gives the identification algorithm, i.e.:

θ = F̄_θ^{-1}(U_N, Y_N) ≡ Ψ_N(U_N, Y_N),   (8)

where: F̄_θ^{-1} – inverse function with respect to θ, Ψ_N – identification algorithm. The solution (8) depends on the plant characteristic (3) and the results of measurements (2). Notice that sometimes it is impossible to uniquely determine the plant parameters. This depends on the plant property called identifiability [1].

Definition 1. The system is identifiable if there exists such a sequence U_N = [u_1 u_2 ... u_N] which, together with the corresponding results of output measurements Y_N = [y_1 y_2 ... y_N], uniquely determines the plant characteristic parameters.
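The determination of plant parameters can be illustrated numerically. The following Python sketch is not from the paper: the characteristic (quadratic in the inputs but linear in the parameters) and all numerical values are assumptions chosen for illustration. It solves the measurement equations (5)-(6) for θ and checks the rank condition under which the solution (8) is unique, i.e. under which the plant is identifiable in the sense of Definition 1.

```python
import numpy as np

# Hypothetical plant characteristic, linear in the parameters theta:
#   y = F(u, theta) = theta1*u1 + theta2*u2 + theta3*u1*u2
def regressors(U):
    u1, u2 = U[:, 0], U[:, 1]
    return np.column_stack([u1, u2, u1 * u2])

theta_true = np.array([1.5, -0.7, 0.3])

# Identification sequence U_N and the corresponding noise-free outputs Y_N, cf. (5).
U_N = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
Y_N = regressors(U_N) @ theta_true

# For this U_N the plant is identifiable iff the stacked regressor matrix has
# full column rank; then the system (6) has the unique solution (8).
Phi = regressors(U_N)
assert np.linalg.matrix_rank(Phi) == Phi.shape[1]

theta_hat, *_ = np.linalg.lstsq(Phi, Y_N, rcond=None)
print(theta_hat)   # recovers theta_true
```

For characteristics that are nonlinear in θ the same equations can be solved with a numerical root finder instead of the linear solve; the identifiability question then becomes a local one.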
2.2 Choice of the Best Model
Now let us assume that the plant characteristic is not known. We propose the model of the form:

ȳ = Φ(u, θ),   (9)

where: Φ is a proposed, arbitrarily given function describing the relation between the model output ȳ and the input u, and θ is a vector of parameters of the proposed description. The values u and θ are elements of the respective spaces and ȳ ∈ Y ⊆ R^L. The result of the experiment in the n-th measurement point, for a given input u_n from the input measurement domain D_u ⊆ U ⊆ R^S, gives the output measurement y_n. The measured value y_n usually differs from the value ȳ_n, the model output obtained by substituting u_n into (9). For each of the measurement points we introduce a measure of the difference between the measured output y_n and the model output ȳ_n, i.e.:

∀ n = 1, 2, ..., N:   q(y_n, ȳ_n) = q(y_n, Φ(u_n, θ)).   (10)

For the whole input sequence U_N the performance index

Q_N(θ) = ||Y_N − Ȳ_N(θ)||_{U_N},   (11)

shows the difference between the experiment and the proposed model, where:

Ȳ_N(θ) ≡ [Φ(u_1, θ) Φ(u_2, θ) ... Φ(u_N, θ)].   (12)

The optimal value of the vector of model parameters is obtained by minimization of the performance index (11) with respect to θ, i.e.:

θ*_N → Q_N(θ*_N) = min_{θ∈Θ} Q_N(θ),   (13)

where: θ*_N is the optimal vector of model parameters, and the relation (9) with parameters θ*_N, i.e.:

ȳ = Φ(u, θ*_N),   (14)

gives us the optimal model. The obtained model (14) is optimal for:
– the given measurement sequence (1),
– the proposed model (9),
– the performance index (10) and (11).
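As an illustration of the choice of the best model, the sketch below is an assumption-laden example rather than the paper's algorithm: a quadratic model Φ(u, θ) is fitted to measurements generated by an unknown plant by minimizing a sum-of-squares performance index of the form (10)-(11), which yields the optimal parameters (13) and the optimal model (14).

```python
import numpy as np
from scipy.optimize import minimize

# Measurements from a plant whose true characteristic is unknown to the modeler.
u_N = np.linspace(0.0, 2.0, 20)
y_N = np.exp(0.5 * u_N)                      # stands in for experimental data

# Proposed model (9): a quadratic in u with parameter vector theta.
def model(u, theta):
    return theta[0] + theta[1] * u + theta[2] * u**2

# Performance index (11), built here from squared output differences (10).
def Q_N(theta):
    return np.sum((y_N - model(u_N, theta)) ** 2)

res = minimize(Q_N, x0=np.zeros(3))          # (13): theta*_N = argmin Q_N
theta_star = res.x
y_model = model(u_N, theta_star)             # optimal model (14) evaluated on u_N
print(theta_star, Q_N(theta_star))
```

Any other measure q and any other proposed model structure Φ can be plugged into the same scheme; the result is optimal only relative to those choices, as stated above.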
3 Complex System Description
Now, let us consider the complex input-output system with M elementary subsystems O_1, O_2, ..., O_M. Let:

y_m = F_m(u_m, θ_m)   (15)

or

ȳ_m = Φ_m(u_m, θ_m)   (16)
denote the static characteristic of the m-th subsystem with input u_m and output y_m, where F_m is a known function as in the description (3), and ȳ_m is the model output with Φ_m a proposed function as in the description (9). In the descriptions (15) and (16) θ_m is an R_m-dimensional vector of unknown parameters. The input, output and parameters of the m-th subsystem are elements of the respective spaces, i.e.:

u_m = [u_m^(1) u_m^(2) ... u_m^(S_m)]^T ∈ U_m ⊆ R^{S_m},
y_m = [y_m^(1) y_m^(2) ... y_m^(L_m)]^T ∈ Y_m ⊆ R^{L_m},
θ_m = [θ_m^(1) θ_m^(2) ... θ_m^(R_m)]^T ∈ Θ_m ⊆ R^{R_m},

where S_m and L_m are the dimensions of the input and output spaces, m = 1, 2, ..., M. Let u, y, ȳ and θ denote the vectors of all inputs, outputs, model outputs and parameters in the complex plant, i.e.:

u = [u^(1) u^(2) ... u^(S)]^T ≡ [u_1^T u_2^T ... u_M^T]^T,
y = [y^(1) y^(2) ... y^(L)]^T ≡ [y_1^T y_2^T ... y_M^T]^T,
ȳ = [ȳ^(1) ȳ^(2) ... ȳ^(L)]^T ≡ [ȳ_1^T ȳ_2^T ... ȳ_M^T]^T,
θ = [θ^(1) θ^(2) ... θ^(R)]^T ≡ [θ_1^T θ_2^T ... θ_M^T]^T,
(17)
M
S=
Sm ,
(18)
Lm ,
(19)
m=1
y, y ∈ Y = Y1 × Y2 × . . . YM ⊆ R L ,
L=
M m=1
θ ∈ Θ = Θ1 × Θ2 × . . . ΘM ⊆ R R ,
R=
M
Rm .
(20)
m=1
The structure of the complex system is given by the relation between inputs and outputs of the system elements. Notice that in the complex systems there are inputs, which are not outputs of any system elements. Those inputs are called external ones. Denote them by T x = x(1) x(2) . . . x(S) , (21) x is S dimensional external input vector x ∈ X ⊆ U from the space X ⊆ R S . The structure of the system is given by the relation:
u = Ay + Bx,
(22)
6
´ atek J. Swi¸
where: A is S × L and B is S × S zero – one matrix. The matrix A defines the connections between system elements, i.e.: 1 if u(s) = y (l)
A = [asl ]
, asl = (23) s=1,2,...,S 0 if u(s) = y (l)
l=1,2,...,L
and matrix B shows the external inputs, i.e.: B = [bs s ]
s=1,2,...,S
s =1,2,...,S
,
bs s =
1 if u(s) = x( s) . 0 if u(s) = x( s)
(24)
Moreover, in the complex system we distinguish some outputs because of some reasons for example because of control task or measurements possibilities. Let us denote the distinguished (external) complex system outputs by T v = v (1) v (2) . . . v (L) . (25) dimensional vector v, distinguished The external outputs are represented by L × L matrix C, i.e.: from all the possible outputs of complex system defined by L v = Cy, where:
(26)
C = c ll
l=1,2,...,L
l=1,2,...,L
,
c ll =
1 if v (l) = y (l) . 0 if v (l) = y (l)
(27)
The external output vector
v ∈ V = {v : ∀y∈Y v = Cy} ⊆ R L .
(28)
The structure given by matrices A, B, and C in equations (22) and (26) defines a new element for the complex system as a whole with external input x and distinguishes (external) output v. Consequently, the new static characteristic of the system as a whole with external inputs x and measured outputs v in the case of the description (15) has the form: −1
v = CF y (x, θ; A, B) ≡ F (x, θ),
(29)
−1
where F y is an inverse function F (Ay + Bx, θ) with respect y, and T
F (u, θ) ≡ [F1 (u1 , θ1 ) F2 (u2 , θ2 ) . . . FM (uM , θM )] .
(30)
Similarly, the new model proposition of the system as a whole with external inputs x and measured outputs v in the case of the description (16) has the form: −1 v = CΦy (x, θ; A, B) ≡ Φ(x, θ), (31) −1
where Φy
is an inverse function Φ(Ay + Bx, θ) with respect y, and T
Φ(u, θ) ≡ [Φ1 (u1 , θ1 ) Φ2 (u2 , θ2 ) . . . ΦM (uM , θM )] .
(32)
Selected Problems of the Static Complex Systems Identification
4
7
Identification of Complex Systems with Restricted Measurement Possibilities
In further consideration it will be assumed that only external inputs x and outputs v shown by matrix C in the relation (26) are measured. Now a new question appears: Is it possible to uniquely determine plant characteristic parameters based on restricted output measurements? Notice that the structure of the complex system given by (22) and measurement possibilities by (26) with elements with static characteristics (15) defines a new identification plant with input x and output v with the characteristic given by (29). A newly defined system allows introducing the idea of separability, what practically means the possibility to uniquely determine unknown vectors parameters for each element separately. Definition 2. The complex system with a given structure and characteristics of each element known with accuracy to parameters is called separable, if the element defined by measurement possibilities is identifiable. Using Definition 2 we can conclude, the complex system is separable if there exists such an identification sequence XN = [x1 x2 . . . xN ], which one together with output measurements VN = [v1 v2 . . . vN ] gives system of equations vn = F (xn , θ),
n = 1, 2, . . . , N,
(33)
for which there exists the unique solution with respect to θ. Let us notice that parameters θ in the characteristic (29), for the newly defined element, are transformed. The characteristic (29) can be rewritten in the form: −1 v = CF y (x, θ; A, B) = F (x, θ) ≡ F , (x, θ),
(34)
where vector of plant parameters θ in the newly defined plant is given by the relation θ ≡ Γ (θ), (35) where Γ is a known function such that: Θ = θ : ∀θ∈Θ θ = Γ (θ) ⊆ R R , Γ : Θ → Θ,
(36)
is dimension of the vector parameter in the new plant characteristic (34) and R F is a known function, such that: →V. F : X × Θ
(37)
The form of functions F and Γ depends on the description of particular elements (15), system structure (22), and measurement possibilities (26). Theorem 1. The complex system with the structure given by (22) and elements with static characteristics (15) is separable if the element defined by measurements possibilities (26) with characteristics (34) is identifiable and function Γ is an one to one mapping.
8
´ atek J. Swi¸
Proof. Because the element (34) is identifiable; there exists such an identification sequence XN which together with corresponding results of output measurements VN uniquely determines plant characteristics parameters (34). In the other words there exists such a sequence XN , which together with output measurements VN gives system of equations vn = F (xn , θ),
n = 1, 2, . . . , N,
(38)
i.e.: which one have the unique solution with respect to θ, −1 (X , V ) ≡ Ψ (X , V ), θ = F N N N N N θ −1 is an inverse function of F with respect θ, where F θ (X , θ) ≡ F (x , θ) F (x , θ) . . . F (x , θ) T . F N 1 2 N
(39)
(40)
and Ψ N is an identification algorithm. Because Γ is one to one mapping from the relation (35) it is possible to uniquely determine θ, i.e.: θ ≡ Γ −1 (θ),
(41)
where Γ −1 is an inverse function of Γ . Taking into account (39) in (42) we obtain identification algorithm θ = Γ −1 Ψ N (XN , YN ) = ΨN (XN , YN ), (42) which one uniquely determines unknown vector of parameters plant characteristic (29) with inputs x and outputs v.
5 Choice of the Best Model of Complex System
Now, it is possible to formulate two quite different methods of optimal model parameter determination. The first one is to determine the optimal model parameters for each element separately, using the description (16), and then, taking into account the system structure, to obtain the model of the whole system. We will call such a description a locally optimal model of the complex system. The second approach is to determine the optimal model parameters for the system as a whole, taking into account the system structure and a performance index which compares the global plant outputs with the global model (31) outputs.

5.1 Locally Optimal Model of Complex System
Now, it will be assumed that each element of complex system is observed independently. For m-th elements for a given input sequence the output is measured. The results of the experiment are collected in the following matrices: UmNm = [um1 um2 . . . umNm ] ,
YmNm = [ym1 ym2 . . . ymNm ] ,
(43)
where Nm is a number of measurement points for m-th element, m = 1, 2, . . . , M . For each m-th element we propose model in the form of (16). Similarly like in section 2.2 we propose the performance index (11), i.e. the difference between observed output measurements YmNm and respective model output sequence for a given input sequence UmNm : QmNm (θm ) = YmNm − Y mNm (θm )U , (44) mNm
where: Y mNm (θm ) ≡ [Φm (um1 , θm ) Φm (um2 , θm ) . . . Φm (umNm , θm )] .
(45)
The optimal value of vector model parameters for m-th element is obtained by minimization of the performance index (45) QmNm (θm ) with respect to θm from the space Θm i.e.: ∗ ∗ θmN → QmNm (θmN ) = min QmNm (θm ), m m θm ∈Θm
(46)
where θmNm is the optimal value of m-th model parameters and function (45) with vector θmNm , i.e.: ∗ ym = Φm (um , θmN ), (47) m is called locally optimal model of m-th element. The local identification task is repeated for each element separately, i.e.: m = 1, 2, . . . , M . Now using system structure equations (22) and (26) we can determine optimal model of the complex system. Let us denote vector of all the locally optimal parameters by: ∗ T ∗ ∗ ∗ θN ≡ θ1N θ2N . . . θMN , 1 2 M where N =
M
(48)
Nm . The model (47) with locally optimal parameters (48), i.e.:
m=1 −1 ∗ ∗ θN v = CΦy (x, θN ; A, B) ≡ Φ(x, ),
(49)
is called the locally optimal model of the complex system.
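The locally optimal approach can be sketched as follows. The subsystem models and the data are hypothetical; only the shape of the computation follows (44)-(47): each element is fitted on its own measurement sequence, and the resulting parameter vectors are then inserted into the structure (22), (26) to obtain the model (49).

```python
import numpy as np
from scipy.optimize import minimize

# Locally optimal identification: each subsystem m is fitted on its own
# measurements (U_mN, Y_mN), independently of the rest of the system.
def fit_local(model_m, U_mN, Y_mN, theta0):
    Q_m = lambda th: np.sum((Y_mN - model_m(U_mN, th)) ** 2)   # local index, cf. (44)
    return minimize(Q_m, theta0).x                              # cf. (46)

# Hypothetical subsystem models Phi_m.
phi1 = lambda u, th: th[0] * u
phi2 = lambda u, th: th[0] + th[1] * u

# Subsystem-level data (as if each element had been measured separately).
u1 = np.linspace(0.0, 1.0, 10); y1 = 2.0 * u1
u2 = np.linspace(0.0, 1.0, 10); y2 = -1.0 + u2

theta_local = [fit_local(phi1, u1, y1, np.zeros(1)),
               fit_local(phi2, u2, y2, np.zeros(2))]
print(theta_local)   # plugged into the structure (22), (26) this gives model (49)
```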
5.2 Globally Optimal Model of Complex System
Another view on the system, i.e. comparison only selected plant output with respective global model output, defines the globally optimal model. Now, it will be assumed that for a given input sequence XN of external input x the respective measurement sequence VN of the selected global output v is obtained. The model (31) is proposed for the system as a whole and the performance index for input sequence XN QN (θ) = VN − V N (θ)X , (50) N
10
´ atek J. Swi¸
shows the difference between the result of the experiment VN and the respective sequence of model outputs calculated from (31) for input sequence XN , i.e.: V N (θ) ≡ [Φ(x1 , θ) Φ(x2 , θ) . . . Φ(xN , θ)] .
(51)
The globally optimal value of vector model parameters is obtained by minimization of the performance index (50) with respect to θ, i.e.: θ N → QN (θ N ) = min QN (θ), θ∈Θ
(52)
where: θ N is the optimal vector of model parameters and the relation (31) with parameters θ N , i.e.: v = Φ(u, θ N ), (53) is called a globally optimal model of complex system. 5.3
Globally Optimal Model with Respect to Quality of Local Models
The presented approach shows two quite different locally and globally optimal models. The question arises: Which one is better? For example, if proposed model is used to investigate control algorithms for complex system then the answer depends on the structure of control systems. In two levels control system investigation of control algorithms on the lower level requires locally optimal models, but on the upper level the globally optimal model is necessary. There are two quite different criteria: the local (44) and global one (50). There is a possibility to formulate another optimal model based on multi-criteria approach. One of them is to define new synthetic performance index which takes into account both local and global model qualities. It can have the form of: Q(θ) = α0 Q(θ) +
M
αm QM (θm ),
(54)
m=1
where: α0 , α1 , . . . , αM is a sequence of weight coefficients. They show weigh of participation of global (50) and local (44) performance indexes respectively, in the synthetic performance index. Now the optimal model parameters for synthetic performance index θ N → Q(θ N ) = min Q(θ), θ∈Θ
(55)
where $\overline{\theta}_N$ is the optimal vector of global model parameters for the synthetic performance index. In the other approach we assume that the local models must be sufficiently good, which means that the local performance indexes must be less than a given number. Thus the parameters of the $m$-th model, $\theta_m$, must fulfil the following conditions:
$$Q_m(\theta_m) \le \beta_m, \quad m = 1, 2, \ldots, M, \qquad (56)$$
where the quality sufficiency number $\beta_m$ is greater than the locally optimal performance index, i.e.:
$$\beta_m > Q_m(\theta^*_m), \quad m = 1, 2, \ldots, M. \qquad (57)$$
Now the optimal model parameters are obtained by minimization of the global performance index (50) with the additional constraints (56), i.e.:
$$\hat{\theta}^*_N \;\rightarrow\; Q_N(\hat{\theta}^*_N) = \min_{\theta \in \hat{\Theta}} Q_N(\theta), \qquad (58)$$
where
$$\hat{\Theta} \equiv \left\{ \theta \in \Theta \subseteq R^R : \; Q_m(\theta_m) \le \beta_m,\; \beta_m > Q_m(\theta^*_m),\; m = 1, 2, \ldots, M \right\}, \qquad (59)$$
and $\hat{\theta}^*_N$ is the globally optimal vector of parameters that is sufficiently good for the local models.
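For ease of comparison, the four estimation problems introduced in Sections 5.1–5.3 can be restated side by side. The block below is only a compact LaTeX summary of (46), (52), (55) and (58) in the notation reconstructed above; it introduces no new quantities.

\begin{align*}
\text{local (per element):}\quad & \theta^*_{mN_m}: \; Q_{mN_m}(\theta^*_{mN_m}) = \min_{\theta_m \in \Theta_m} Q_{mN_m}(\theta_m), \quad m = 1,\ldots,M \\
\text{global:}\quad & \hat{\theta}_N: \; Q_N(\hat{\theta}_N) = \min_{\theta \in \Theta} Q_N(\theta) \\
\text{synthetic:}\quad & \overline{\theta}_N: \; \overline{Q}(\overline{\theta}_N) = \min_{\theta \in \Theta} \Big[ \alpha_0 Q(\theta) + \textstyle\sum_{m=1}^{M} \alpha_m Q_m(\theta_m) \Big] \\
\text{global with local constraints:}\quad & \hat{\theta}^*_N: \; Q_N(\hat{\theta}^*_N) = \min_{\theta \in \hat{\Theta}} Q_N(\theta)
\end{align*}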
6 Final Remarks
In this work the problems of complex system identification have been discussed. Two main identification tasks, i.e. parameter estimation and the choice of the best model, have been introduced. Then the identification problem of a complex system with limited measurement possibilities has been presented and the separability problem has been introduced. The locally and globally optimal models have been defined. Both presented problems have been discussed for the deterministic case. It has been shown that, based on a multi-criteria approach, other models may be defined.
References

1. Bubnicki, Z.: Identification of Control Plants. Elsevier, Amsterdam, Oxford, New York (1980)
2. Bubnicki, Z.: Optimisation problems in large-scale systems modelling and identification. In: Straszak, A. (ed.) Large Scale Systems: Theory and Applications, pp. 411–416. Pergamon Press, Oxford (1984)
3. Bubnicki, Z.: Global modelling and identification of complex systems. In: Proc. of 7th IFAC/IFORS Symp. Identification and System Parameter Estimation, pp. 261–263. Pergamon Press, Oxford (1985)
4. Dralus, G., Świątek, J.: Global modeling of complex systems by neural networks. In: Proc. of 7th International Symposium on Artificial Life and Robotics, Oita, Japan, pp. 618–621 (2002)
5. Świątek, J.: Identification. Problems of Computer Science and Robotics. In: Grzech, A. (ed.) Zaklad Narodowy im. Ossolinskich - Wydawnictwo PAN, Wroclaw, pp. 29–44 (1998) (in Polish)
6. Świątek, J.: Global and Local Modeling of Complex Input-Output Systems. In: Proc. of 16th International Conf. on Systems Engineering, Coventry University, England, pp. 669–671 (2003)
7. Świątek, J.: Global Identification of Complex Systems with Cascade Structure. In: Proc. of the 7th International Conf. on Artificial Intelligence and Soft Computing, Zakopane, pp. 990–995 (2004)
8. Świątek, J.: Identification of Complexes of Operations System with Limited Measurement Possibilities. In: Proc. of 18th International Conference on Systems Engineering, Las Vegas, pp. 124–129 (2005)
9. Świątek, J.: Selected problems of complex systems identification. In: Nguyen, N.T., Kolaczek, G., Gabryś, B. (eds.) Knowledge Processing and Reasoning for Information Society, pp. 201–229. Exit, Warsaw (2008)
10. Świątek, J.: Some problems of complex static systems identification. Wroclaw University of Technology Publishing House, Poland (2009)
Combining and Integrating Advanced IT-Concepts with Semantic Web Technology Mashups Architecture Case Study Amin Anjomshoaa, A Min Tjoa, and Andreas Hubmer Institute of Software Technology and Interactive Systems, Vienna University of Technology {anjomshoaa,amin}@ifs.tuwien.ac.at,
[email protected]
Abstract. Even though Semantic Web technologies have flourished consistently in the past few years, it is unlikely that the Semantic Web goals will be achieved on the global Web in the near future. The initial expectations, such as turning the World Wide Web into a machine-comprehensible medium, are far away from realization. The best proof of this is a look at the current status of the World Wide Web and the small percentage of websites and services that are Semantically enabled. The main reason for this situation is that so far it is not easy to get people to learn and apply Semantic Web concepts for their Web content and use them efficiently in their daily life. In this context advanced IT concepts such as Mashups can support Semantic Web goals, and at the same time Semantic Web technologies can improve computer-to-computer and human-to-computer interactions in enterprise systems. In this paper the two-way support of advanced IT and Semantic Web is explored and a novel approach for advancing the power of web forms is presented. Keywords: Semantic Web, Mashups, Web 2.0, Software Architecture.
1 Introduction

The initial vision of the Semantic Web has been "turning the World Wide Web to an environment in which information is given well-defined meaning, better enabling computers and people to work in cooperation" [1]. Semantic Web technology has been widely accepted and used to capture and document context information in many domains. It plays a significant role in information sharing scenarios and interoperability across applications and organizations [2]. This added value opens the way to integrating huge amounts of data and becomes extremely useful when used by many applications that comprehend this information and bring it into play without human interaction. Unfortunately the content description methods are not being used by all content owners, and the current Web's Achilles' heel is, in our belief, the lack of semantic information that could be used to link this huge amount of information efficiently. This leads many web specialists to expect another web, called Web 3.0, to remedy the deficiencies of the current web.
The basic reason for this situation is that so far it is not easy to get people to learn and apply Semantic Web concepts for their Web content and use them efficiently in their daily life. The real breakthrough in Semantic Web implementation will occur with the emergence of semantic-enabled content authoring and management tools that make the paradigm shift from the Traditional Web to the Semantic Web feasible. The shift away from the traditional web is forced by the growing need for more efficient information sharing, collaboration and business processes [3]. Meanwhile Web 2.0 has set a new trend in the Rich Internet Application (RIA) world. It makes better use of the client machine's processing power and at the same time pushes the SOA paradigm to its limits. Web 2.0 has also introduced new possibilities for better human-computer interaction via rich applications such as Mashups that provide a user-driven micro-integration of web-accessible data [4]. At the moment mashups are mainly used for less important tasks such as customized queries and map-based visualizations; however, they have the potential to be used for more fundamental, complex and sophisticated tasks in combination with Semantic Web concepts. In this paper, emerging IT-concepts such as Web 2.0 and Mashups will be discussed and a novel approach for smart integration of global web information into business processes will be introduced. The proposed solution demonstrates how such IT-concepts can benefit from Semantic Web technologies to facilitate information sharing and smarter inter-process data exchange. We will also show how the Semantic Web can utilize Web 2.0 and Mashups to accelerate the realization of its goals and make the global web machine-interpretable.
2 State of the Art

There are several methods for applying semantic concepts to current web applications and making their content machine readable. One such solution is to embed semantic information into the web content at creation time so that machines can read and interpret the content without the overhead of natural language processing methods. In this context the RDFa initiative of the W3C [5] provides a set of HTML attributes to augment visual data with machine-readable hints. It is highly beneficial to express the structure of web data in context, as users often want to transfer structured data from one application to another, sometimes to or from a non-web-based application, where the user experience can be enhanced. There are many major use cases where embedding structured data in HTML using RDFa provides a significant benefit. For example, people's contact information, events and content licenses such as Creative Commons can be included in web contents using RDFa syntax and the relevant namespaces. RDFa is not the only solution for providing more intelligent data on the web. A similar approach for embedding machine-readable data in web content is delivered by microformats [6], which are designed to coincide with the design principles of "reduce, reuse, and recycle". The main difference between these two approaches can be explained by their historical background. The microformats have grown out of the work of the blog developer community as an easy and ad-hoc response to common applications, whereas RDFa is built with the more systematic vision of the W3C Semantic Web group and its associated thinkers.
In parallel to Semantic Web concepts and technologies, the evolution of the World Wide Web has resulted in new topics such as Web 2.0, which has hit the mainstream and continues its rapid evolution. Successful Web 2.0 applications are characterized by crowd-sourcing and social networking, where people create and share the content. Web 2.0 has set a new trend in the Rich Internet Application (RIA) world. It makes better use of the client machine's processing power and at the same time pushes the SOA paradigm to its limits. In this context, the Internet will play the role of a global operating system that hosts the Web Services. In other words, Web 2.0 fosters the global cloud computing idea where business services are presented on the Internet and developers select and weave them together to create new compound services. In this context SOA and Semantic Web Services are supposed to provide and manage the required services in a smart way, but despite all the advantages of SOA and Semantic Web Services (SWS), they are complex and cost-intensive technologies. Moreover, the target user group of SOA and SWS are IT professionals who have a thorough understanding of these technologies and their underlying standards. In other words, they are not suitable for end-users who lack these capabilities. Another shortcoming of SOA-based approaches is their inability to react rapidly to changes in the business environment, as implementations of such systems are cost and time intensive and any change in the environment may require some modifications in the system. An important characteristic of systems for end-users is their capability for personalization and customization. Most of today's business processes and SOA-based solutions are still mainly designed to satisfy the mass of users. Unfortunately, customization and mass generic production are at odds with each other. As a matter of fact, IT solutions typically focus on the 20% of user requirements that have an impact on most users, and the long tail of requirements is usually ignored by IT providers. It is too difficult for an ordinary user to benefit from the available services by composing the appropriate services together. This is the reason why SOA-based approaches remain mainly in the hands of the IT departments of big firms, which have a complex stack of technologies to realize SOA-based scenarios. Per contra, end users require simple, cost-effective techniques which enable them to design solutions in an ad hoc and "quick and dirty" manner. The Web 2.0 approach for creating such "quick and dirty" solutions is given by Mashups, which introduce new possibilities for better human-computer interaction. The term mashup originates from the music industry, where a song or composition is created by blending two or more songs, usually by overlaying the vocal track of one song seamlessly over the music track of another [7]. Mashups owe their popularity and fast improvement to their two basic building blocks, namely Web 2.0 and SOA. Mashups envision building effective and light-weight information processing solutions based on the exposed Web Services of organizations. Such Web Services may range from simple Web Services such as RSS [8] and REST-based [9] services to complex BPEL services for more serious use cases; however, the benefit of Mashups in the latter use cases is not yet well known to IT decision makers. The power of mashups is also being examined in real-world information management scenarios and has attracted much attention [11].
Mashups can be applied to a broad spectrum of use cases ranging from simple data widgets to more complex use cases such as task automation and system integration. Mashup applications are roughly categorized into the following groups:

• Consumer Mashups (Presentation Mashups): the simplest group of Mashups, used to facilitate the creation of information portals from different resources for presentation purposes. This group of mashups has the lowest degree of customization and is usually implemented as pre-built widgets that can be added to user interfaces.
• Data Mashups: used to integrate data and services from different resources such as Web Services, RESTful APIs, Web Extractors, RSS Feeds, etc. This kind of mashup aims to facilitate data access and cross-referencing between resources.
• Enterprise Mashups (Logic Mashups): always involve programming and are therefore the most complex mashup category. They connect two or more applications, automate certain tasks and include awareness of workflow [12]. Enterprise mashups usually depend on some server-side components and compete with data integration and service orchestration technologies such as BPEL and Enterprise Service Buses (ESBs).
According to market research reports, this situation is going to change quickly in the coming years. Mashups are identified among the top 10 strategic technologies for 2009 [10] and it is expected that many serious business scenarios will be delivered through enterprise mashups. The power of mashups is also being examined in real world information management scenarios and has attracted many recent efforts [13].
3 Semantic Web and Mashups

The bidirectional support between the Semantic Web and Mashups boosts both and has the potential to provide a solid basis for many interesting applications. The Mashup support for the Semantic Web is realized via ad-hoc mashups which, on the one hand, enable the connection to preconfigured information resources and the processing of the data of these resources and, on the other hand, map their context data to the relevant domain ontology. In other words, instead of embedding the semantic meaning into the web content, semantics is attached to the relevant content via mashups in a dynamic and loosely coupled manner. Today there are a handful of projects such as Google Sidewiki [14], Dapper [15], and Lixto [16] that follow this approach to extract and enrich web page information; however, in most of the cases the process remains at a simple data extraction and text annotation level and semantic aspects are not completely covered. In the case of Google Sidewiki, which is a recently released browser sidebar, users are enabled to contribute and read information alongside any web page. Google Sidewiki does not support semantic concepts at the moment and just aims to use the power of the crowd to enrich the metadata of web pages. The interesting part of this approach is the free annotation of page content, which may overlap. In other words, parts of the page content can be annotated differently by different end-users based on their target applications and use cases. Hence the quality of user annotations will be evaluated by the crowd and better annotations appear on top of the list.
In the near future, the Mashup support for the Semantic Web will play an important role in realizing the Semantic Web goals and transforming the current web into an information space that can be interpreted and used by machines for advanced use cases. The other facet of the relationship between Mashups and the Semantic Web is the support that the Semantic Web and ontologies can provide for Mashups in facilitating the creation of Mashups by novice users. This application of the Semantic Web has its roots in Semantic Web Services, which aim to automate service discovery and composition without human intervention. The basic difference between the Semantic Web Services and Semantic Mashups approaches is derived from their different target users. The Semantic Web Services are mainly managed and used by IT experts who are aware of the underlying data structures and corresponding services, whereas the target group of Semantic Mashups are novice users who need to combine Mashup Widgets for their specific purposes [11]. This issue is especially important for the creation of Enterprise Mashups, which always involve programming and include awareness of workflow. Enterprise mashups usually depend on some server-side components and compete with data integration and service orchestration technologies such as BPEL and Enterprise Service Buses (ESBs). In the Semantic Mashup context, these server-side components are described by an appropriate domain ontology that eases the composition of mashup widgets and the creation of new mashups. One of the major drawbacks of Mashup-based solutions is the fact that such solutions are fragile and not as stable as formal business processes. To provide a solid basis for more serious business applications, the gap between mashups and well-established BPEL solutions should be closed. In the next section a novel approach for bridging this gap will be introduced, which enables the end user to benefit from the simple service composition of Mashups and at the same time to have the process managed and controlled by stable business process engines.
4 Mashup to BPEL Conversion

As explained before, Mashups are very helpful in creating fast solutions for data integration. To provide a solid basis for more serious business applications, the gap between mashups and well-established BPEL solutions should be closed. A closer look into the main mashup categories (presentation, data, and enterprise mashups) reveals that the functionality of mashups is to some extent similar to SOA and its service composition; however, the overlap with SOA is much more blurred for logic (enterprise) mashups than for the other two categories. Client-side mashups compete with server-side orchestration technologies such as BPEL (Business Process Execution Language), but that has not hindered large enterprise SOA vendors such as IBM, Microsoft, BEA, Sun and Oracle from embracing enterprise mashups. The reason is that, like BPEL, mashups depend on standardized Web services and APIs, all of which SOA aims to make available. This does not mean that mashups depend on SOA. In a field survey, nearly two thirds of enterprises that use browser-based mashups have done so without SOA, and one of the biggest advantages of mashup technology is the low barrier to entry, with many applications using free services on the Internet [17].
As the number of serious enterprise mashups grows and they get used for more serious tasks, there is a growing need to make mashups more stable and robust for complex business scenarios. As a matter of fact, in the context of SOA and Mashup solutions, stability and ease are at odds with each other (see Figure 1). In a mashup-based approach the services can be easily composed to create a situational solution; however, the result is not that stable. On the other hand, a SOA-based approach provides a stable basis for creating and running business processes for expert users who are familiar with the complex SOA stack, but a novice user will not be able to create solutions based on the available services.
Fig. 1. SOA Solution vs. Mashup Solution
To change this situation, a novel approach is proposed that benefits from the advantages of both SOA and Mashup solutions to facilitate the creation of situational solutions that are also stable and on which, as a result, business processes can rely. The core idea of the proposed approach is to split the design-time and runtime processes between the Mashup and SOA environments, respectively. In other words, the solution is first created in a mashup environment and benefits from all the advantages of mashup solutions. As the next step, the Mashup is translated to a formal business process using a Mashup-BPEL convertor. Finally, this means that any stable BPEL engine can run the process and take care of process management issues. As a result, the end user benefits from the simple service composition of mashups and at the same time the process is managed and controlled by well-established business process engines. In this way, the execution of a mashup is safely transferred from the browser on the client computer to a BPEL engine on the server side. This approach might not be that helpful for presentation and data mashups, but it will be of great advantage for enterprise mashups. It is also important to note that the proposed approach is not trying to reinvent the wheel and duplicate the functionality of BPEL editors. Unlike a typical BPEL editor, the proposed approach hides the complexities of a business process, such as defining the partner links, services, etc., saves the end-users from technical details and lets them concentrate solely on their use cases. To demonstrate the feasibility of this approach a small use case will be presented that shows how a simple BPEL process can be generated from its corresponding mashup. This use case is based on basic math services that can be combined to do more complex calculations. Figure 2 depicts a simple Mashup that uses the math services and delivers the result to a display.
Fig. 2. Simple calculation Mashup
The corresponding BPEL process for this mashup is shown in Figure 3. The Mashup-BPEL convertor uses the available widgets and connections of the mashup and formulates the widget services as BPEL invoke actions. The resulting business process can then be deployed on any BPEL engine for execution. At runtime, the user will submit the required input values to this process and the rest of the steps will be handled by the BPEL engine.
Fig. 3. The generated BPEL process for Simple calculation Mashup
The proposed approach for converting mashups to BPEL processes can get more complicated when an end-user's contribution is required. Unlike mashups, where user inputs are simply added on screen, BPEL does not include human interactions in its domain. Despite the wide acceptance of Web services in distributed business applications, the absence of human interactions is a significant deficiency for many real-world business processes. For this reason BPEL4People extends BPEL from a sole orchestration of Web services to an orchestration of role-based human activities as well [18].
In the context of the proposed approach, the required user interactions are implemented using XForms [19] to send the missing data to the business processes and provide a simplified BPEL4People integration. To clarify this, consider the previous example depicted in Figure 2 and suppose that the dashed connection between the input and multiply widgets is removed. As a result, the corresponding BPEL can run one service call but will break on the "multiply web service" invocation. At this point the end-user needs to provide the missing parameter for the continuation of the business process. In order to include the user contribution in the target business process, the Mashup-BPEL convertor detects the location of the missing parameter and inserts a people notification web service call before this location.
Fig. 4. The generated BPEL process for People Interaction
One great advantage of XForms is the fact that an XForms client can send its data as XML. This capability can be elegantly used to send a SOAP message, which is also encoded in XML, to the end point of a web service. In the context of the proposed solution, this feature of XForms has been used to receive the missing parameters from the user.
5 Web Form Services

Nearly all human-computer interactions are performed through input forms, which are responsible for receiving the user input and sending it to the appropriate component for further processing. In particular, a large portion of Internet advances owes to human-computer interaction and data exchange via web forms, and we use them extensively in our daily activities. Current complex Internet applications demand a significant amount of time for the development and maintenance of web forms, which are designed solely for human users and cannot be automated and reused in business processes. In this section a novel approach for overcoming the web form integration obstacles, namely Web Form Services, will be presented. Web Form Services are a
translation of ordinary web forms that provide a web service interface for corresponding forms and make the reuse and integration of the web form logic more convenient. For instance the converted web form functionality can be reused in Enterprise Mashups to automate data interaction between different web applications. This idea is especially helpful in automating the tasks between multiple web applications in a formal way. It is now very common in big enterprises to use web applications for different systems and usually there is no unified system that can take over all the tasks. So enterprises are now encountering a bunch of different web applications such as accounting, personnel management, etc., and each application is responsible for part of the business activities. In use cases requiring data records that are managed by different applications, end users should either wait for some complex backend integration from software architects, or manually transfer the data between applications to get the required results. In this context Web Form Services provide a unified way of describing the services and reusing them for different business activities. Figure 5 shows the WFS server and its components that are used to transform web forms to Web Services.
Fig. 5. Web Form Service architecture overview
The core element of WFS architecture is its configuration files that define the necessary interactions between system and a specific web form to achieve the required results. The configuration file captures such interactions in a formal way and at runtime the system will use these documented interactions to simulate the actions of a human user. The configuration files are also used to generate the definition of corresponding web services and presenting them as WSDL files that include input and output parameters.
6 Conclusions

The shift away from the traditional Web 1.0 is forced by the growing need for more efficient information sharing, collaboration and business processes. Mashup architecture is one of the outcomes of the Web 2.0 paradigm that has been widely accepted and used for user-centric information processing. At the moment mashups are mainly used for less fundamental tasks such as customized queries and map-based visualizations; however, they have the potential to be used for more fundamental and sophisticated tasks too.
As more serious applications make use of mashup architecture, there is a growing need to study the business opportunities and feasible scenarios of mashup architecture and foster its application to organizational use cases. In our belief, mashups have the potential to facilitate the transition from the traditional web to the Semantic Web era and support this paradigm shift with "zero footprint" on the web pages. More specifically, the approaches proposed in this paper show how different information integration solutions in combination with mashups can be used to facilitate information integration in business processes and support the creation of situational solutions that can be shared and reused by other users. Furthermore, the translation of enterprise mashups to stable business processes can accelerate the application of mashups to more serious use cases such as organizational data sharing and data integration scenarios. Acknowledgments. This research is partly supported by the FIT-IT project Secure 2.0 (project number: 820852).
References

1. Berners-Lee, T.: The Semantic Web. Scientific American (May 2001)
2. Anjomshoaa, A., Karim, S., Shayeganfar, F., Tjoa, A.M.: Exploitation of Semantic Web technology in ERP systems. In: Procs. of Confenis 2006 (2006)
3. Anjomshoaa, A., Bader, G., Tjoa, A.: Exploiting Mashup Architecture in Business Use Cases. In: NBIS 2009, Indianapolis, USA (2009)
4. A Business Guide to Enterprise Mashups, JackBe Corporation (April 2008)
5. RDFa in XHTML Syntax and Processing, http://www.w3.org/TR/2008/CR-rdfa-syntax-20080620 (last visited, November 2009)
6. Microformats, http://www.microformats.org (last visited, November 2009)
7. Music Mashups, http://en.wikipedia.org/wiki/Mashup_music (last visited, May 2009)
8. Really Simple Syndication, http://en.wikipedia.org/wiki/RSS (last visited, May 2009)
9. Representational State Transfer: Fielding, R.T.: Architectural Styles and the Design of Network-based Software Architectures. Ph.D. Thesis, University of California (2000)
10. Top 10 Strategic Technologies for 2009, Gartner Symposium/ITxpo, http://www.gartner.com/it/page.jsp?id=777212 (October 2008)
11. Hoyer, V., Stanoevska-Slabeva, K.: The Changing Role of IT Departments in Enterprise Mashup Environments. In: ICSOC 2008. LNCS, vol. 5472, pp. 148–154. Springer, Heidelberg (2009)
12. Dornan, A.: Mashup Basics: Three for the Money, http://www.networkcomputing.com/showitem.jhtml?articleID=201804223
13. Hoyer, V., Fischer, M.: Market Overview of Enterprise Mashup Tools. In: Bouguettaya, A., Krueger, I., Margaria, T. (eds.) ICSOC 2008. LNCS, vol. 5364, pp. 708–721. Springer, Heidelberg (2008)
14. Google Sidewiki, http://www.google.com/sidewiki/intl/en/index.html
15. Dapper homepage, http://www.dapper.net/ (last visited, November 2009)
16. Lixto Solutions, http://www.lixto.com (last visited, November 2009)
17. Dornan, A.: Mashup Basics: Three for the Money (2007), http://www.networkcomputing.com/data-networking-management/mashup-basics-three-for-the-money.php (last visited, November 2009)
18. BPEL4People, http://en.wikipedia.org/wiki/BPEL4People
19. XForms 1.1, W3C Recommendation (October 2009), http://www.w3.org/TR/xforms11/ (last visited, November 2009)
Web Page Augmentation with Client-Side Mashups as Meta-Querying

Stephan Hagemann and Gottfried Vossen

European Research Center for Information Systems (ERCIS), University of Münster, Leonardo-Campus 3, 48149 Münster, Germany
{shageman,vossen}@uni-muenster.de
Abstract. Web browser extensions are increasingly popular to customize and personalize browsing experiences. Several browser extensions provide client-side mashups which augment content currently being displayed. There have been no efforts yet to understand how these approaches relate conceptually. This paper analyzes mashup extensions by reformulating them as meta-querying. This gives a generalized view of client-side mashup provisioning which (i) shows the robustness of relational technology and (ii) can guide the engineering of these Web information systems.
1 Introduction
As Web browsers become ever more important as a means of accessing the application platform that the Web has become [1], Web browser extensions are increasingly popular as well [2]. These extensions provide additional functionality in areas such as bookmarking, communication, security, or simply browsing. There exist several projects developing extensions that provide client-side mashups to enhance the browsing experience by allowing users to add mashups that match the currently displayed content [3,4,5]. On the other hand, meta-querying [6,7,8] is a technique that allows queries (to databases) to use other queries for computing their own results, that is, queries about queries. This paper brings the two approaches together: the meta-querying technique is used to formulate a generalized view on client-side mashup provisioning. Extensions to a Web browser that deliver client-side mashups are quite diverse and take quite different approaches, including the following: Piggy Bank enhances the Web browsing experience by adding a Resource Description Framework (RDF) store to the Web browser [5]. This store can be filled with pieces of RDF whenever a Web site offers its data in this structured way. If no RDF is provided, extractors can be used to extract information and allow Piggy Bank to store it. Since the store does not distinguish by the source of information, it essentially mixes the content from multiple Web sites. The Intel Mash Maker uses widgets to display mashups that are calculated on the basis of information extracted from Web pages [3]. Mashups are directly embedded into the currently visible Web page. Again, a combination of RDF and extractors is used to determine the information they are based on.
While ActiveTags takes a similar approach towards the display of mashups, it uses as data only what is added to a Web page by users as tags [4]. Figure 1 shows an advanced example, where tags added to delicious.com bookmarks trigger the display of several mashups. In this advanced usage scenario, the bookmarks store content related to a robotics course and create a personally tailored learning Web page. It shows the contents of the related resources directly as an embedded video, an editable document, and a fully accessible ebook.
Fig. 1. ActiveTags
A straightforward analogy can be drawn between the request of a Web page and the execution of a program or a method. In order for a method to be called, it is typically referenced by its name. In addition, parameters may be passed to the method, which specify the particular execution. As depicted in the first row of Figure 2, the method that requests a Web page is the Hypertext Transfer Protocol (HTTP) GET method. The Uniform Resource Locator (URL) can be seen as its parameter. The execution of the HTTP request leads to the result, i.e., a Web page. When client-side mashups augment the surfing experience, there is an additional step in this process. As the second row in Figure 2 depicts, the Web page and the extractors define what mashups are to be executed and appended to the Web page. The role of the extractors is thus to specify which portions of the Web page are to be interpreted as the specification of additional “method calls.” The result of these executions is again a Web page, but augmented with the results from the execution of mashups. It turns out that the analogy already contains the essence of meta-querying: Queries that execute or manipulate other queries as their data. Client-side mashups turn the data that is the original Web page into a query when extractors are applied and mashups are subsequently executed. One can say that a Web page implicitly includes the execution information, which is made explicit
by mashups and extractors.

Fig. 2. Web page retrieval and meta-querying analogy

In this paper we study this analogy by reformulating the application of client-side mashups as meta-querying. This "step back" highlights the contribution of client-side mashup applications, shows their similarities, and points towards possible future extensions of these approaches. We will derive our analogy on the basis of ActiveTags and discuss why this does not limit the analogy. Using meta-querying for the realization of the analogy highlights that, once again, seemingly unrelated approaches can be ascribed to relational technology. This holds even for the most recent developments (mashups) and again shows the robustness of the relational model. Such an advancement avoids a "reinvention of the wheel" and at the same time supports the engineering of Web information systems by contributing a conceptual viewpoint that highlights the need for further developments. The remainder of this paper is organized as follows: Section 2 introduces the abstraction of the Web with which we are going to discuss our meta-querying analogy. After this, Section 3 forms the main part of this work by outlining the steps that implement the meta-querying analogy. Section 4 discusses related work, before this paper is concluded in Section 5. Due to space limitations, we refer the reader to [9] for further details on the approach.

2 A Relational View of the Web
In order to analyze the meta-querying analogy more formally, an appropriate language must be chosen. Any language that allows for meta-programming or even reflection can be used. The question is: How to choose among the multitude of languages that exist with this property? As noted in [6], “. . . adding reflection to a computationally complete programming language will not enhance its
expressive power, the features are typically meant to allow for a more natural or succinct expression of certain advanced programming constructions.” Indeed, computationally complete languages obviously allow for the representation of client-side mashups, since they have been built as such. The question thus boils down to whether a more limited language can be used to model our approach. That this is indeed the case is shown by the WebSQL project [10], at least for ordinary Web queries, since it has defined an “SQL-like query language for extracting information from the web.” However, reflection is not native to Structured Query Language (SQL) or to the relational world, but has been introduced and analyzed in the literature: Programs as data have been analyzed during the early 1980s already [11]. [6] presented an approach where relational algebra programs are stored in specific program relations, whereas [7] created the relational meta algebra, which allows relational algebra expression as a data type in relations. The culmination of these efforts can be seen in Meta-SQL [8] where the idea of the “program as a data type” is transferred to SQL and commercial database systems (in particular IBM’s DB2). “Practical meta-querying”, as it is called, is enabled here by storing queries in an Extensible Markup Language (XML) notation and defining appropriate functions that enable reflection in SQL. This paper uses the results from work on WebSQL and Meta-SQL to remodel the approach of client-side mashup applications from a relational standpoint. Following [10], the Web is modeled using two relations Document and Anchor (see Table 1). In the Document relation we assume without loss of generality that the content of the text column contains Hypertext Markup Language (HTML) in the form of Extensible Hypertext Markup Language (XHTML). This ensures that we can use XML operations without restrictions. The url column identifies each document. All other columns of the document relation contain document metadata. The Anchor relation stores the links between documents: One, signified by the url column, points to the other denoted in the href column with the label contained in the column label.
Table 1. Document and Anchor relations

Document:
url                    | title   | text   | length | type | modif
http://www.acme.com/a  | title 1 | text 1 | 1,234  | text | 09-01-01
http://www.acme.com/b  | title 2 | text 2 | 2,691  | text | 09-01-01
http://www.acme.com/c  | title 3 | text 3 | 6,372  | text | 09-01-01
...

Anchor:
url                    | label   | href
http://www.acme.com/a  | label 1 | http://www.acme.com/b
http://www.acme.com/a  | label 2 | http://www.acme.com/c
http://www.acme.com/c  | label 3 | http://www.acme.com/b
...
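A minimal SQL sketch of these two relations is given below. The paper does not specify column types, so the types and lengths chosen here are assumptions made only for illustration.

-- Document: one row per Web page; text holds the XHTML content of the page.
CREATE TABLE Document (
  url     VARCHAR(2048) PRIMARY KEY,  -- identifies each document
  title   VARCHAR(512),
  text    CLOB,                       -- XHTML source, so XML operations can be applied
  length  INTEGER,                    -- document length (metadata)
  type    VARCHAR(64),                -- content type (metadata)
  modif   DATE                        -- last modification date (metadata)
);

-- Anchor: one row per link; (url) points to (href) with the given label.
CREATE TABLE Anchor (
  url    VARCHAR(2048),               -- source document
  label  VARCHAR(512),                -- link text
  href   VARCHAR(2048)                -- target document
);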
Surfing the Web becomes requesting a row from the Document table. An individual Web page is retrieved from Document by selecting on url:

select d.title, d.text from Document d where url="$SOME_URL";

The commutative diagram in Figure 3 shows the effect of this relational mapping of the Web: in the relational depiction of the Web (denoted as rel) an HTTP request resulting in a Web page becomes an SQL query resulting in a "Web tuple" as defined by the query above. This tuple is the same as one gets when applying the "tuplezing" function tup to a Web page. This function simply projects the title and the text of a Web page into a tuple.
Fig. 3. Commutative diagram of Web and Web pages
3 Meta-SQL Implementation of Client-Side Mashups
In order to transfer the functionality of client-side mashups into the relational world, we need a way to incorporate it into the relational view and into the way this "relational Web" is queried. This is sketched in this section, where, as mentioned, the reader is referred to [9] for details. As the discussion above has highlighted, it is the extractors and mashup definitions that perform the implicit meta-querying contained in the Web pages. Therefore, these two need to be represented in relations as well. To this end, we introduce two new relations for storing extractors and mashups, resp. The Extractor relation (see Table 2) has two columns, with selector storing the XPath expression that selects all the tags on a page and urlPattern defining, in the form of a regular expression, the URLs on which the extractor operates. This captures the essential properties of TagExtractors from ActiveTags [4].

Table 2. Extractor relation

selector                                         | urlPattern
//a[contains(concat('',@rel,''),'tag')]          | .*
//div[contains(@id,'tagdiv')]/a[@class='Plain']  | ^.*(www\.)?flickr\.com
//a[contains(@class,'spTagboxLink')]             | ^.*www\.spiegel\.de.*
...
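As an illustration, selecting the extractors that apply to a given page can be expressed roughly as follows. The regular-expression predicate REGEXP_LIKE is an assumption (its name and syntax differ between SQL dialects) and is not defined in the paper.

-- All extractors whose urlPattern matches the URL of the requested page.
SELECT e.selector
FROM   Extractor e
WHERE  REGEXP_LIKE('http://www.flickr.com/photos/12345', e.urlPattern);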
Table 3 shows the relation Mashup, which contains mashup definitions. The id column identifies mashups. url typically specifies which document the mashup loads when it is called; note that this column can contain null values. title gives the name of the mashup. tags contains an XML document which defines the tags required for the execution of the mashup. urlPattern is a regular expression that is evaluated with the URL of a concrete Web page to test whether the mashup is to be executed. Column template contains a template of what the mashup will insert into a Web page should it be executed. Notice that not all mashups need to have a template. The last column, program, contains the SQL code of the mashup. This is the code that needs to be executed to evaluate the effect of a mashup. Querying the mashed-up Web needs to incorporate the contents of the two additional tables. To this end, we need to define how extractors work on documents and how mashups are appended. An overview of this is given next. Before mashups can be applied, extractors need to be applied in order to determine which tags are present on a Web page. It was defined above that Web pages are stored as XML and that TagExtractors are XPath expressions. It therefore suffices to extract all elements from the Web page that are returned by such an expression to determine the tags on a page. Using this idea, a view DocumentWithTags can be defined that appends the extracted tags as an extra column to the other information from the Document relation. While we have spoken of "tag extraction" in this section, the extension towards more elaborate information is straightforward. Extracting more information from a page can be implemented by defining several extractors for a page. If other data structures (instead of only tags) are to be allowed, another column may be introduced into the extractor relation, which specifies how the data is to be stored. Mashups may then make use of this. However, even all this may be encoded into tags, for example, by using structured (also called machine) tags [12]. With the view DocumentWithTags adding tags to documents, we have the first component for the "mashed-up relational Web" in place. The second component needed is the actual combination of documents with mashups. For this we will create another view, but first we need a function which can check whether the set of tags coming from a Web page is sufficient for the execution of a particular mashup. Again, without going into detail, let us call this function checkTagSufficiency. What effectively needs to be checked is that all the required tags are present in (and thus form a subset of) the tags of a document. The two sets of tags that need to be compared come from the DocumentWithTags view on the one hand and the Mashup relation on the other hand. To make checkTagSufficiency work with the documents, we construct another view based on DocumentWithTags, which we may call DocumentWithMashups. Essentially, this view needs to copy all columns from DocumentWithTags and perform the necessary transformations on the title and the text column to include mashups. A note is appended to the title, making it clear that this document has been processed by ActiveTags. Mashup contents are to be added to the text of the page.
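Before turning to Table 3, a rough SQL sketch of the two views described above may help. It is only an illustration of the construction, not the paper's definition: xpathValues (evaluating an XPath selector against the stored XHTML), checkTagSufficiency and applyMashups are assumed helper functions whose definitions the paper leaves open, and REGEXP_LIKE and xmlagg stand in for the corresponding dialect-specific constructs.

-- DocumentWithTags: Document plus one extra column holding the tags produced
-- by all extractors whose urlPattern matches the document's url.
CREATE VIEW DocumentWithTags AS
SELECT d.url, d.title, d.text,
       (SELECT xmlagg(xpathValues(d.text, e.selector))
        FROM   Extractor e
        WHERE  REGEXP_LIKE(d.url, e.urlPattern)) AS tags
FROM   Document d;

-- DocumentWithMashups: copies DocumentWithTags, marks the title, and appends
-- to the text the results of the mashups whose urlPattern matches and whose
-- required tags are all present (checkTagSufficiency).
CREATE VIEW DocumentWithMashups AS
SELECT dt.url,
       dt.title || ' (processed by ActiveTags)' AS title,
       applyMashups(dt.text,
                    (SELECT xmlagg(m.program)
                     FROM   Mashup m
                     WHERE  REGEXP_LIKE(dt.url, m.urlPattern)
                       AND  checkTagSufficiency(m.tags, dt.tags) = 1)) AS text
FROM   DocumentWithTags dt;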
Table 3. Mashup relation

id | title          | url                            | tags                      | urlPattern              | template | program
1  | Upcoming event | http://upcoming.yahoo.com/...  | upcoming:event            | .*                      | ...      | select applyTemplate(m.template, m.title, m.url) from Mashup m where m.id = 1
2  | Last.fm event  | http://www.lastfm.de/...       | lastfm:event              | .*                      | ...      | select applyTemplate(m.template, m.title, m.url) from Mashup m where m.id = 2
3  | DBLP search    | http://dblp.mpi-inf.mpg.de/... | DBLPsearch                | http:\/\/.*bibsonomy... | ...      | select applyTemplate(m.template, m.title, m.url) from Mashup m where m.id = 3
4  | DBLP & Events  | NULL                           | DBLPsearch upcoming:event | .*                      | NULL     | select CMB(mashed) from Mashup m, mashed in UEVAL(m.program) where m.id = 1 or m.id = 3
Of course, only those mashups that match the URL and satisfy the checkTagSufficiency requirement are to be appended. The final component for querying the "mashed-up relational Web" is the execution of mashups. Looking again at Table 3, we see that the first three mashups all contain a template and use the function applyTemplate in their program column. The template contains elements of the form at:parameter, which the applyTemplate function replaces by the current parameter values. The notation we use is that of [13], who provides a full-fledged templating solution for Extensible Stylesheet Language Transformations (XSLT). Looking at the templates in Table 3, one can see that those of Mashups 1 and 2 define simple link mashups [4]: they create a link to the additional information. The respective programs "reselect" the row of the mashup definition to retrieve the other columns as parameters and apply url and title to the template. The result of applyTemplate is the result of the query. Mashup 3 works similarly, but it creates an iframe mashup [4]: the related information is loaded into the current page as an iframe. While Mashups 1–3 use essentially the same program, Mashup 4 is defined quite differently and shows the capabilities of this approach. This mashup has NULL-valued url and template columns, which it does not need, as it combines the functionality of Mashups 1 and 3 by selecting their definitions from the mashup database, executing them, and combining their results. This combination leads to a new mashup, which is semantically different from the individual mashups since it is only called when all the necessary tags are present. Note that url and template do not have to be NULL-valued for mashups to use other mashup definitions: more intricate mashup definitions might use both. All functions and relations needed for querying the "Meta Web" are now in place. Querying a document from the Web augmented with mashups boils down to the query shown below. Since views have been created to define the necessary steps for tag extraction and mashup inclusion, all that needs to be changed as opposed to the query for the original Web is that the query needs to get documents from the view DocumentWithMashups instead of from Document directly.

select d.title, d.text from DocumentWithMashups d where url="$SOME_URL";

This last observation brings us back to the commutative diagram of Figure 3, which showed the correct abstraction of a Web request in the relational depiction. Figure 4 shows a similar diagram for the mashed-up Web and its relational equivalent. With client-side mashup applications, requesting a Web page no longer only executes an HTTP request but also applies client-side mashups, denoted as HTTPM. The relational equivalent we have constructed in the previous sections is denoted as SQLM. Note that the results of these functions are still Web page and Web tuple: ActiveTags produces Web pages (albeit augmented) and the adapted query produces a tuple of the same arity as the original query. This is why the tup function can remain unchanged.
Fig. 4. Commutative diagram of ActiveTags-augmented Web and Web pages
4 Related Work
While there are no related efforts towards a unification of client-side mashup applications, there are related approaches to such mashups that we want to mention here. GreaseMonkey is an extension to the Firefox browser that allows the installation of client-side scripts, so-called userscripts, that execute while a user surfs the Web. These scripts are typically used to augment HTML Web pages with additional information, to incorporate assistive technology, or to change the styling of pages, but since JavaScript, which is a full-fledged programming language, is used for the implementation of the scripts, basically any computable program is implementable [14]. Its generality allows GreaseMonkey scripts to tailor a restricted set of Web pages, possibly producing sophisticated mashups. As such, GreaseMonkey complements client-side mashup applications which strive for a broader applicability of mashups by separating data extraction from mashup definition. The Active XML (AXML) framework [15] defines so-called AXML documents, which are XML documents that can contain calls to Web services. When a portion of an AXML document that contains such a Web service call is requested, the Web service is called and the result of the execution is returned as the respective part of the AXML document. In this approach parts of an XML document are “active”, while with client-side mashup applications certain combinations of data on a Web page activate a mashup. Thus, the major difference between the two approaches is that AXML documents contain Web service calls explicitly, while for client-side mashup applications such calls are only (with the appropriate interpretation) implicitly defined on a Web page.
5 Conclusion
This paper has proposed a relational meta-querying analogy to client-side mashups. The analogy has been based on ActiveTags, a client-side mashup application that operates on user-defined tags, but it has been shown that this does not pose an essential limitation. As such, the analogy has shown the similarity of the approaches of several client-side mashup applications. Additionally, it has explained how these can be elegantly modeled using SQL based on a construction similar to that of WebSQL and with the help of constructs from Meta-SQL.
This shows that the meta-querying analogy is indeed helpful in understanding the contribution of client-side mashups. While in this work the analogy has been used for explanatory purposes only, it is possible to implement it as a running system, since all the components are available. This is a direction for future work which may unify the different approaches. Through a unification of extractor and mashup definitions, this may concentrate the efforts and thus give the approaches greater reach and increase the potential for a wider adoption of client-side mashups.
References 1. Musser, J., O’Reilly, T.: Web 2.0 - Principles and Best Practices. O’Reilly Media, Sebastopol (2007) 2. Scott, J.: 600,000,000 Add-on Downloads. Blog of Metrics (January 2008), http://blog.mozilla.com/metrics/2008/01/30/600000000-add-on-downloads/ (2009-04-22) 3. Ennals, R., Brewer, E.A., Garofalakis, M.N., Shadle, M., Gandhi, P.: Intel Mash Maker: join the web. SIGMOD Rec. 36(4), 27–33 (2007) 4. Hagemann, S., Vossen, G.: ActiveTags: Making tags more useful anywhere on the Web. In: Lin, X., Bouguettaya, A. (eds.) 20th Australasian Database Conf. (ADC 2009), Wellington, New Zealand. Conferences in Research and Practice in Information Technology (CRPIT), vol. 92 (2009) 5. Huynh, D., Mazzocchi, S., Karger, D.: Piggy Bank: Experience the Semantic Web inside your web browser. Web Semant. 5(1), 16–27 (2007) 6. van den Bussche, J., van Gucht, D., Vossen, G.: Reflective Programming in the Relational Algebra. J. Comput. Syst. Sci. 52(3), 537–549 (1996) 7. Neven, F., van den Bussche, J., van Gucht, D., Vossen, G.: Typed Query Languages for Databases Containing Queries. Inf. Syst. 24(7), 569–595 (1999) 8. van den Bussche, J., Vansummeren, S., Vossen, G.: Towards practical metaquerying. Inf. Syst. 30(4), 317–332 (2005) 9. Hagemann, S., Vossen, G.: Web-Wide Application Customization: The Case of Mashups. Techn. Report, ERCIS, Univ. of Muenster (January 2010) 10. Arocena, G.O., Mendelzon, A.O., Mihaila, G.A.: Applications of a Web Query Language. Computer Networks 29(8-13), 1305–1315 (1997) 11. Stonebraker, M., Anderson, E., Hanson, E., Rubenstein, B.: QUEL as a data type. In: SIGMOD 1984: Proc. of the 1984 ACM SIGMOD Int. Conf. on Management of data, pp. 208–214. ACM Press, New York (1984) 12. Straup Cope, A.: Machine tags. Flickr API / Discuss (January 27, 2007), http://www.flickr.com/groups/api/discuss/72157594497877875/ (2009-04-22) 13. Diamond, J.: Template Languages in XSLT. O’Reilly xml.com – xml from the inside out (March 2002), http://www.xml.com/pub/a/2002/03/27/templatexslt.html (2009-04-22) 14. Pilgrim, M.: Dive Into Greasemonkey (May 2005), http://diveintogreasemonkey.org/ (2009-04-22) 15. Abiteboul, S., Benjelloun, O., Manolescu, I., Milo, T., Weber, R.: Active XML: A Data-Centric Perspective on Web Services. In: Levene, M., Poulovassilis, A. (eds.) Web Dynamics - Adapting to Change in Content, Size, Topology and Use, pp. 275–300. Springer, Heidelberg (2004)
Soft Computing Techniques for Intrusion Detection of SQL-Based Attacks

Jaroslaw Skaruz(1), Jerzy Pawel Nowacki(2), Aldona Drabik(2), Franciszek Seredynski(2,3), and Pascal Bouvry(4)

(1) Institute of Computer Science, University of Podlasie, Sienkiewicza 51, 08-110 Siedlce, Poland
[email protected]
(2) Polish-Japanese Institute of Information Technology, Koszykowa 86, 02-008 Warsaw
{nowacki,adrabik,sered}@pjwstk.edu.pl
(3) Institute of Computer Science, Polish Academy of Sciences, Ordona 21, 01-237 Warsaw, Poland
[email protected]
(4) Faculty of Sciences, Technology and Communication, University of Luxembourg, 6 rue Coudenhove Kalergi, L-1359 Luxembourg, Luxembourg
[email protected]
Abstract. In the paper we present two approaches based on the application of neural networks and Gene Expression Programming (GEP) to detect SQL attacks. SQL attacks are those attacks that take advantage of SQL statements to be performed. The problem of detecting this class of attacks is transformed into time series prediction and classification problems. SQL queries are used as a source of events in a protected environment. To differentiate between normal SQL queries and those sent by an attacker, we divide SQL statements into tokens and pass them to our detection system based on a recurrent neural network (RNN), which predicts the next token, taking into account previously seen tokens. In the learning phase tokens are passed to a recurrent neural network (RNN) trained by the backpropagation through time (BPTT) algorithm. Then two coefficients of the rule are evaluated. The rule is used to interpret the RNN output. In the testing phase the RNN with the rule is examined against attacks and legal data to find out how the evaluated rule affects the efficiency of detecting attacks. The efficiency of this method of detecting intruders is compared with the results obtained from GEP.
1 Introduction
A large number of Web applications, especially those deployed by companies for e-business purposes, involve data integrity and confidentiality. Such applications are written in script languages like PHP embedded in HTML, which allow establishing connections to databases, retrieving data and putting them into WWW sites. Security violations consist in unauthorized access to, and modification of, data in the database. SQL is one of the languages used to manage data in databases, and its statements can be one of the sources of events for potential attacks.
In the literature there are several approaches to intrusion detection in Web applications. In [8] the authors developed an anomaly-based system that learns the profiles of normal database access performed by web-based applications using a number of different models. Besides that work, there are some other works on detecting attacks on a Web server, which constitutes a part of the infrastructure for Web applications. In [4] a detection system correlates the server-side programs referenced by client queries with the parameters contained in these queries; it is an approach to detection similar to the previous work. The system analyzes HTTP requests and builds a data model based on the attribute length of requests, attribute character distribution, structural inference and attribute order. In the detection phase the built model is used for comparing client requests. In [1] the logs of a Web server are analyzed to look for security violations. However, the proposed system is prone to high rates of false alarms; to decrease them, some site-specific information should be taken into account, which is not portable.
In this work we present a new approach to intrusion detection in Web applications. Rather than building profiles of normal behavior we focus on the sequence of tokens within SQL statements observed during normal use of an application. An RNN is used to encode a stream of such SQL statements. The problem of detecting intruders is also transformed into a classification problem and GEP is used to solve it. The results obtained from these two techniques are compared.
The paper is organized as follows. The next section discusses SQL attacks. In Section 3 we present the training data. Section 4 describes the architecture of the Jordan network. Section 5 presents a modern metaheuristic called GEP. Next, Section 6 contains experimental results. The last section summarizes the results.
2 SQL Attacks
An SQL injection attack consists in manipulating an application that communicates with a database in such a way that a user gains access to, or is able to modify, data for which he has no privileges. To perform an attack, in most cases Web forms are used to inject parts of an SQL query. By typing SQL keywords and control characters an intruder is able to change the structure of the SQL query developed by a Web designer. If the variables used in the SQL query are under the control of a user, he can modify the SQL query in a way that changes its meaning. Consider the example of poor quality code written in PHP presented below.

$connection=mysql_connect();
mysql_select_db("test");
$user=$HTTP_GET_VARS['username'];
$pass=$HTTP_GET_VARS['password'];
$query="select * from users where login='$user' and password='$pass'";
$result=mysql_query($query);
if(mysql_num_rows($result)==1)
    echo "authorization successful";
else
    echo "authorization failed";

The code is responsible for authorizing users. The data typed by a user in a Web form are assigned to the variables user and pass and then passed to the SQL statement. If the retrieved data include exactly one row, it means that the user filled in the form with a login and password that match those stored in the database. Because the data sent through the Web form are not analyzed, a user is free to inject any string. For example, an intruder can type "' or 1=1 --" in the login field, leaving the password field empty. The structure of the SQL query is then changed as presented below.

$query="select * from users where login='' or 1=1 --' and password=''";

The two dashes comment out the following text. The Boolean expression 1=1 is always true and, as a result, the user will be logged in with the privileges of the first user stored in the table users.
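To make the manipulation concrete, the following short Python sketch (ours, not part of the original study) rebuilds the vulnerable query by plain string concatenation and shows how the injected login value changes its meaning; the table and column names simply mirror the PHP example above.

# Minimal sketch (not from the paper): how the injected string changes
# the meaning of the concatenated SQL query.
def build_query(user: str, password: str) -> str:
    # Naive string concatenation, as in the PHP example above.
    return ("select * from users where login='" + user +
            "' and password='" + password + "'")

if __name__ == "__main__":
    # Legitimate use.
    print(build_query("alice", "secret"))
    # Injection: the attacker types "' or 1=1 --" in the login field.
    print(build_query("' or 1=1 --", ""))
    # -> select * from users where login='' or 1=1 --' and password=''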
3 Training Data
All experiments were conducted using synthetic data collected from an SQL statement generator. The generator takes randomly a keyword from a selected subset of SQL keywords, data types and mathematical operators to build a valid SQL query. Since the generator was developed on the basis of the grammar of the SQL language, each generated SQL query is correct. The set of all SQL queries was divided into 21 subsets (instances of the problem), each containing SQL statements of a different length, in the range from 9 to 29 tokens (see below). Using available knowledge about SQL attacks, we defined their characteristic parts. Next, these parts of SQL queries were inserted randomly into the generated queries in such a way that the grammatical correctness of the new statements is preserved. The queries in each instance were divided into training and testing subsets, each of them containing 500 SQL statements.
Each SQL query is divided into distinct parts, which we further call tokens. In this work, the following tokens are considered: keywords of the SQL language, numbers and strings. We used the collection of SQL statements to define 36 distinct tokens. Table 1 shows selected tokens, their indexes and their real coding values. Each token has an assigned real number; the range of these numbers starts with 0.1 and ends with 0.9. The value assigned to a token is calculated according to eq. (1):

coding value = 0.1 + ((0.9 - 0.1)/n) * k,    (1)

where n is the number of all tokens and k is the index of the token.
Table 1. A part of the list of tokens and their coding values

    token     index   coding value
    SELECT    1       0.1222
    FROM      2       0.1444
    ...       ...     ...
    UPDATE    9       0.3
    ...       ...     ...
    number    35      0.8777
    string    36      0.9
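As an illustration of eq. (1) and Table 1, the following Python sketch (our own; only a few of the 36 tokens are listed) computes the coding value assigned to a token index.

# Sketch of the token coding of eq. (1).
TOKEN_INDEX = {"SELECT": 1, "FROM": 2, "UPDATE": 9, "number": 35, "string": 36}
N_TOKENS = 36

def coding_value(k: int, n: int = N_TOKENS) -> float:
    # coding value = 0.1 + (0.9 - 0.1)/n * k
    return 0.1 + (0.9 - 0.1) / n * k

for token, k in TOKEN_INDEX.items():
    print(token, k, round(coding_value(k), 4))
# SELECT -> 0.1222, FROM -> 0.1444, UPDATE -> 0.3, number -> ~0.8778, string -> 0.9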
4 Recurrent Neural Networks

4.1 RNN Architecture
In this work we use a Jordan neural network to detect SQL attacks. In comparison to a feedforward neural network, the Jordan network has a context layer containing the same number of neurons as the output layer. The input signal of the context layer neurons comes from the output layer. Moreover, the Jordan network has an additional feedback connection in the context layer. Each recurrent connection has a fixed weight equal to 1.0. The network was trained by BPTT [7].

4.2 Training
The training process of the RNN is performed as follows. Two subsequent tokens of the SQL statement become the input of the network. The activations of all neurons are computed. Next, the error of each neuron is calculated. These steps are repeated until all tokens have been presented to the network. Then all weights are updated and the activation of the context layer neurons is set to 0. For each input data vector, the training data at the output layer are shifted so that the RNN should predict the next token in the sequence of all tokens. Each token has an index and a real number between 0.1 and 0.9, which reflects the input coding of these statements according to Table 1. The indexes are used for the preparation of input data for the neural networks; e.g., the index of the keyword UPDATE is 9 and the token with index 1 relates to SELECT. The number of neurons in the input layer is constant and equals 2. The network has 37 neurons in the output layer: 36 neurons correspond to the 36 tokens, while neuron 37 is included to indicate that the input vector just being processed is the last one within an SQL query. The training data, which are compared to the output of the network, have values of either 0.1 or 0.9. If neuron number i in the output layer has a small value, it means that the next processed token cannot have index i. On the other hand, if output neuron number i has a value of 0.9, then the next token in the sequence should have index i. Below is an example of an SQL query:

SELECT name FROM users.    (2)
At the beginning, the SQL statement is divided into tokens. The indexes of the tokens are: 1, 36, 2 and 36 (see Table 1). Because there are two input neurons, we have 3 subsequent pairs (1,36), (36,2), (2,36) of training data, each pair coded by real values in the input layer:

0.1222  0.9,      (3)
0.9     0.1444,   (4)
0.1444  0.9.      (5)
The first two tokens that appear are 1 and 36. The next token in the sequence has index 2 and it should be predicted by the network as the result of the input pair (1,36). It means that only neuron number 2 in the output layer should have the high value. In the next step of training, the 2nd and 3rd tokens are presented to the network. The fourth token should be predicted, so neuron number 36 in the output of the RNN should have the high value. Finally, the input data are 0.1444 and 0.9. At that moment the weights of the RNN are updated and the next SQL statement is considered.
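The construction of input pairs and prediction targets described above can be sketched as follows (our own simplified illustration; the interpretation that the end-of-query neuron 37 is the target of the last pair is our reading of the text, and the actual BPTT training is not reproduced).

# Sketch: building (input pair, target) examples from a tokenized SQL query,
# following Section 4.2.
def coding_value(k, n=36):
    return 0.1 + (0.9 - 0.1) / n * k

def training_examples(token_indexes):
    examples = []
    for pos in range(len(token_indexes) - 1):
        pair = (coding_value(token_indexes[pos]),
                coding_value(token_indexes[pos + 1]))
        is_last = pos + 2 >= len(token_indexes)
        target = 37 if is_last else token_indexes[pos + 2]  # neuron expected high
        examples.append((pair, target))
    return examples

# "SELECT name FROM users" -> indexes 1, 36, 2, 36
for pair, target in training_examples([1, 36, 2, 36]):
    print(pair, "-> high value expected at output neuron", target)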
5 Gene Expression Programming

5.1 Overview
GEP is a modern metaheuristic originally developed by Ferreira [2]. Since its origin GEP has been extensively studied and applied to many problems such as time series prediction [6,11], classification [9,10] and linear regression [3]. GEP evolves a population of computer programs subjected to genetic operators, which leads to population diversity by introducing new genetic material. GEP incorporates both linear chromosomes of fixed length and expression trees (ETs) of different sizes and shapes similar to those in GP. This means that, in contrast to GP, the genotype and the phenotype are separated. All genetic operators are performed on linear chromosomes, while the ET is used to calculate the fitness of an individual. There is a simple method for translating from genotype to phenotype and back. The advantage of the distinction between genotype and phenotype is that after any genetic change of a genome the ET is always correct, and the solution space can be searched more extensively. At the beginning chromosomes are generated randomly. Next, in each iteration of GEP, a linear chromosome is expressed in the form of an ET and executed. The fitness value is calculated and the termination condition is checked. To preserve the best solution of the current iteration, the best individual goes to the next iteration without modifications. Next, programs are selected into a temporary population and subjected to genetic operators with some probability. The new individuals of the temporary population constitute the current population.
5.2 The Architecture of Individuals
The genes of GEP are made of a head and a tail. The head contains elements that represent functions and terminals, while the tail can contain only terminals. The length of the head is chosen as a GEP parameter, whereas the length of the tail is calculated according to eq. (6):

tail = h(n - 1) + 1,    (6)

where h is the length of the head and n is the number of arguments of the function with the most arguments. Consider the example of a gene presented in eq. (7):

+Qd/+cabdbbca.    (7)

Its encoded form is represented by an ET and shown in Figure 1. The length of the head of the gene presented in eq. (7) equals 6 and the length of the tail equals 7 according to eq. (6). The individual shown in Figure 1 can be translated to the mathematical expression (8):

sqrt((a + b)/c) + d.    (8)

A chromosome can be built from a few genes. Then the sub-ETs are linked by a function, which is a parameter of GEP. For a detailed explanation of all genetic operators see [2].
Fig. 1. Expression tree
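The head/tail structure can be checked with a few lines of Python (our sketch; the function set and arities follow the example gene of eq. (7), with Q read as the one-argument square root).

# Sketch: tail length (eq. 6) and a validity check for a GEP gene.
ARITY = {"+": 2, "-": 2, "*": 2, "/": 2, "Q": 1}  # Q: square root
TERMINALS = {"a", "b", "c", "d"}

def tail_length(head_length: int, max_arity: int) -> int:
    return head_length * (max_arity - 1) + 1

def is_valid_gene(gene: str, head_length: int) -> bool:
    head, tail = gene[:head_length], gene[head_length:]
    return (len(tail) == tail_length(head_length, max(ARITY.values()))
            and all(s in TERMINALS for s in tail)
            and all(s in TERMINALS or s in ARITY for s in head))

print(tail_length(6, 2))                   # 7
print(is_valid_gene("+Qd/+cabdbbca", 6))   # True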
5.3 Fitness Function
In the problem of anomaly detection there are four notions which allow us to look inside the performance of the algorithm. True positives (TP) relate to correctly detected attacks, while false positives (FP) mean that normal SQL queries are considered as attacks. False negatives (FN) are attacks regarded as normal SQL queries, and true negatives (TN) relate to correctly classified normal SQL statements. It is obvious that the larger both TP and TN are, the better the classification mechanism. In this work we use sensitivity and precision, which are among the most widely used statistics to describe a diagnostic test [5]. Sensitivity and precision are calculated according to eq. (9) and eq. (10):

sensitivity = TP / (TP + FN),    (9)

precision = TP / (TP + FP).    (10)

Eq. (11) gives the fitness of a GEP individual:

fitness = 2 * (precision * sensitivity) / (precision + sensitivity).    (11)

An individual representing the optimal solution of the problem has fitness equal to 1.0 and the worst chromosome has 0.0. GEP evolves the population of individuals to maximize their fitness value.
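The fitness of eqs. (9)-(11) is the harmonic mean (F-measure) of precision and sensitivity and can be computed directly from the confusion matrix, as in the following sketch (ours; the guard for TP = 0 is an assumption for degenerate cases).

def gep_fitness(tp: int, fp: int, fn: int) -> float:
    # sensitivity = TP/(TP+FN), precision = TP/(TP+FP),
    # fitness = 2 * precision * sensitivity / (precision + sensitivity)
    if tp == 0:
        return 0.0
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    return 2 * precision * sensitivity / (precision + sensitivity)

print(gep_fitness(tp=90, fp=10, fn=10))  # 0.9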
6 Experimental Results

6.1 Detecting SQL Attacks by RNN
In the first phase of the experimental study we evaluated the best parameters of the RNN and of the learning algorithm. For the following values of the parameters the error of the networks was minimal: for the Jordan network the tanh function was chosen for the hidden layer and the sigmoidal function for the output layer, there are 38 neurons in the hidden layer, η (the training coefficient) is set to 0.2 and the α value (used in the momentum term) equals 0.1. Since the output signal of a neuron in the output layer can only be high or low, for each input data vector the output of the network is a binary vector. If a token is correctly predicted by the network, there is no error in the vector. Otherwise, if at any position within the vector there is a 0 rather than a 1, or a 1 rather than a 0, the network predicted the wrong token. The experimental results presented in [7] show that there is a great difference, in terms of the number of errors and the number of vectors containing any error, between the cases in which an attack and a legal activity were executed. To distinguish between an attack and a legitimate SQL statement we defined a classification rule: an attack occurred if the average number of errors for each output vector is not less than coefficient 1 and coefficient 2 is greater than the number of output vectors that include any error [7]. The problem with that rule is that the assumed values of the coefficients cannot be used in general; they depend on the length of the SQL query. The objective of this research is to show the relationship between the values of the coefficients and the length of SQL queries.
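Phrased as a predicate over the per-vector error counts of the RNN output, the rule reads as follows (our sketch, following the wording above; errors_per_vector is assumed to hold the number of wrongly set positions in each output vector).

# Sketch of the classification rule from [7], as stated in the text:
# attack if the average number of errors per output vector is at least
# coefficient_1 and the number of erroneous vectors is below coefficient_2.
def is_attack(errors_per_vector, coefficient_1, coefficient_2):
    if not errors_per_vector:
        return False
    avg_errors = sum(errors_per_vector) / len(errors_per_vector)
    erroneous_vectors = sum(1 for e in errors_per_vector if e > 0)
    return avg_errors >= coefficient_1 and erroneous_vectors < coefficient_2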
Fig. 2. The form of the rule for SQL queries (the values of coefficients 1 and 2 plotted against the length of the SQL queries)
For each data subset we had one network. After the network had been trained using data with attacks, it was examined against attacks and legitimate SQL queries. Next, based on the network output for legal and illegal SQL statements, we evaluated the two coefficients of the rule so that the number of false alarms is minimal. The coefficients of the rule evaluated during the experiment were used as a classification threshold for the RNN output. For testing purposes, SQL statements which had not been used in the training nor during the definition of the rule were applied. Figure 2 presents the values of the coefficients of the rule for each length of SQL query considered in this work. It can be seen that the dependence between the coefficients and the length of the SQL queries is close to linear. The quality of the trained network depends on the length of the SQL queries: the longer the SQL statement, the more difficult it is to train. For such cases, the output vectors of the RNN include more errors and there is little difference in the network output when attacks and legal SQL queries are presented to the network. That is the reason why greater values of the coefficients must be used to discriminate between attacks and legal activity.
Fig. 3. Relationship between the number of SQL queries used for defining the rule and the percentage of false alarms (false positives, false negatives and their average) for SQL statements of different lengths: a) 10 tokens, b) 20 tokens
From Figure 3 it can be concluded what the optimal number of SQL queries used to define the rule is. For queries consisting of 10 tokens the optimal amount of data equals 250. It is easy to see that for good quality networks, increasing the amount of data used for defining the rule improves the results in terms of the number of false alarms. If the length of the queries increases, it is more difficult to train the network completely. For SQL queries including 20 tokens, the optimal amount of data used for setting the rule decreased to 200.
6.2 Detecting SQL Attacks by GEP
In this section we use GEP to find a function that can be used to classify SQL statements. In this work we apply in most cases the same values of the parameters as in [2]. As the search space is bigger than the one in [2], the number of individuals was increased to 100. The set of terminals depends on the length of the SQL queries and equals the number of tokens which constitute an SQL query. As the selection operator, roulette wheel was chosen. Figure 4 shows the detection system performance during training and testing.
Fig. 4. GEP performance: false alarms (false positives and false negatives) plotted against the length of the SQL query, in the training phase (a) and the testing phase (b)
It is easily noticeable that the false alarm rates in both Figure 4 a) and b) are very similar. The presented percentage values of the false alarm rate are averaged over 10 runs of GEP. These results allow us to say that the best evolved mathematical expression classifies the SQL queries in the testing set with nearly the same efficiency as in the training set. One of the reasons is that, although SQL statements are placed randomly in both data sets, they feature a similar structure in both data sets used in the classification task.
7 Conclusions
In this paper we have presented two soft computing techniques to detect SQL-based attacks. The RNN with the classification rule is able to predict sequences of 10 tokens with a false alarm rate below 1%. We also showed how the number of SQL queries used for setting the coefficients affects the number of false alarms. The classification accuracy obtained from GEP shows great efficiency for SQL queries consisting of 10 to 15 tokens. For longer statements the averaged FP and FN equal about 23%.
References 1. Almgren, M., Debar, H., Dacier, M.: A lightweight tool for detecting web server attacks. In: Proceedings of the ISOC Symposium on Network and Distributed Systems Security (2000) 2. Ferreira, C.: Gene Expression Programming: A New Adaptive Algorithm for Solving Problems. Complex Systems 13(2), 87–129 (2001) 3. Ferreira, C.: Gene Expression Programming: Mathematical Modeling by an Artificial Intelligence. Angra do Heroismo, Portugal (2002) 4. Kruegel, C., Vigna, G.: Anomaly Detection of Web-based Attacks. In: Proceedings of the 10th ACM Conference on Computer and Communication Security (CCS 2003), pp. 251–261 (2003) 5. Linn, S.: A New Conceptual Approach to Teaching the Interpretation of Clinical Tests. Journal of Statistics Education 12(3) (2004) 6. Litvinenko, V.I., Bidyuk, P.I., Bardachov, J.N., Sherstjuk, V.G., Fefelov, A.A.: Combining Clonal Selection Algorithm and Gene Expression Programming for Time Series Prediction. In: Third Workshop 2005 IEEE Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, pp. 133–138 (2005) 7. Skaruz, J., Seredynski, F.: Some Issues on Intrusion Detection in Web Applications. In: Rutkowski, L., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2008. LNCS (LNAI), vol. 5097, pp. 164–174. Springer, Heidelberg (2008) 8. Valeur, F., Mutz, D., Vigna, G.: A Learning-Based Approach to the Detection of SQL Attacks. In: Proceedings of the Conference on Detection of Intrusions and Malware and Vulnerability Assessment (DIMVA), Austria (2005) 9. Zhou, C., Xiao, W., Nelson, P.C., Tirpak, T.M.: Evolving Accurate and Compact Classification Rules with Gene Expression Programming. IEEE Transactions on Evolutionary Computation 7(6), 519–531 (2003) 10. Zhou, C., Nelson, P.C., Xiao, W., Tirpak, T.M.: Discovery of Classification Rules by Using Gene Expression Programming. In: International Conference on Artificial Intelligence, pp. 1355–1361 (2002) 11. Zuo, J., Tang, C., Li, C., Yuan, C.-a., Chen, A.-l.: Time Series Prediction Based on Gene Expression Programming. In: Li, Q., Wang, G., Feng, L. (eds.) WAIM 2004. LNCS, vol. 3129, pp. 55–64. Springer, Heidelberg (2004)
Efficiently Querying XML Documents Stored in RDBMS in the Presence of Dewey-Based Labeling Scheme

Moad Maghaydah and Mehmet A. Orgun

Department of Computing, Macquarie University, Sydney, NSW 2109, Australia
{moad,mehmet}@science.mq.edu.au
Abstract. Storing an XML document as a single record of type BLOB (or sequence of bytes) in RDBMS has become a widely used solution that reduces the complexity of reassembling the original document. However, shredding and indexing the XML document, using special labeling methods to recover the document order, is still required to efficiently support data-centric queries. The Dewey based labeling method has been considered to be the most suitable labeling technique to support dynamic XML documents. In this paper, we present a new space-efficient and easy to process Dewey-based labeling scheme. The new label structure, which is composed of two components: (Parent, Child) in Dewey format, would significantly improve the performance of XML queries that are based on parent-child and sibling relationships. Furthermore, we introduce an efficient alternative approach to navigate upwards the XML tree, which can be used to validate ancestor relationships. We report on an extensive experimental label length evaluation and performance tests between our approach and some recent Dewey based approaches using well-known XML benchmarks. Keywords: XML Management Systems, Labeling Dynamic XML Documents, XML Query processing in RDBMS.
1 Introduction

New record types have been recently introduced in relational database systems (RDBMS) to store Extensible Markup Language (XML) documents. This approach reduces the complexity of reassembling the original XML document since the whole document is stored as one record. However, it adds more challenges to efficiently querying the data within the XML documents. Moreover, many studies have demonstrated that shredding an XML document, labeling and indexing its elements to recover document order, is still a very competitive approach to managing and querying XML documents that are stored in relational database systems [1, 2, 3, 6, 11, 16]. Having these two approaches side by side may still play a major role in how XML documents are stored and managed in the future.
Maintaining the XML document order in RDBMS is a major challenge. Various labeling methods have been proposed to address this issue [3, 5, 10, 11]. However, most of the proposed labeling techniques either do not support, or lack efficient support for, dynamic XML documents (i.e. delete, insert or update operations).
The Dewey-based labeling technique for XML nodes has emerged as the most suitable labeling approach to support dynamic XML documents [4, 6, 15]. It supports various operations on dynamic XML documents, from inserting large sub-trees without re-labeling the existing nodes, to fine-grained locking, thereby avoiding access to external storage as much as possible. Some special functions are used to compare node labels to verify parent-child or ancestor-descendent relationships. However, Dewey labels, which are represented as binary strings, are considered to be long for very large and deeply nested XML documents or when there are frequent node insertions. Furthermore, functions are used to process the labels and validate relationships between nodes, which might not make efficient use of the RDBMS indexing mechanisms.
To mitigate the problems with the current Dewey-based techniques, we introduce a new labeling technique, PoD (Prefixing on Demand), based on Dewey identifiers, which maintains the document order and supports dynamic XML documents. The new technique uses fixed-width prefixes and only when they are needed. PoD reduces the maximum label length and the total size of the labels. We also propose a new structure for the Dewey-based labels by splitting the label into two components (parent, child). This approach significantly improves query performance for queries that involve parent-child and/or sibling relationships, mainly in RDBMS that do not support indexing on functions, like the widely used MySQL. In addition, we propose that more information about the structure of an XML document can be stored in the "XML_Paths" table; the XML_Paths table contains all possible paths in an XML document along with their unique path_Id, node_level, node_type, data_type, etc.; this table is very small, can fit in memory and can be used during query translation. Having more information in that table provides an efficient method to navigate backwards in the XML tree and to evaluate structural-join queries of the form "//A[//B='C']//D", as we will demonstrate in Section 4.1.
In the following, Section 2 briefly discusses related work and motivates our approach. In Section 3 we discuss in detail the PoD (Prefixing on Demand) technique. In Section 4 we discuss the translation of queries from XQuery to SQL. In Section 5 we report on extensive experimental evaluations of label length and query performance between various configurations of PoD and other Dewey-based approaches, using well-known XML benchmarks. Section 6 concludes the paper with a brief summary.
2 Related Work and Motivation Storing and Managing XML documents in RDBMS have become an accepted solution for many applications. To maintain the document order, which is an important feature in the XML data model, the nodes and the attributes of XML documents have to be identified and numbered. There are two main labeling methods reported in the literature; number-based approaches [3, 11, 14, 16], and prefixes and Dewey-based approaches [4, 6, 8, 18]. The intervals approach is a number-based technique which identifies each node by a pair of numerical Start and End values; the Start value represents the order when the node is visited for the first time and the End value represents the order after visiting
all of the node's descendants [11, 13]. It has been shown that the intervals approach provides a better representation for static XML documents. Prime [16] is another number-based labeling scheme which uses the properties of prime numbers; the label of each node is the product of its own self-label (a unique prime number) and its parent's label. Prime mainly supports ancestor-descendent relationships; node A is an ancestor of node B if the label of node B is divisible by the label of node A. The Dewey coding concept for XML trees provides a promising labeling scheme based on concatenating the local orders of the nodes from the root node to the current node (e.g. 1.5.8.1). The order information is encoded in UTF-8 strings [4]. The major drawback of UTF-8 is its inflexibility, since its compression is poor for small ordinals; e.g., the label (1.1.1.1) uses four one-byte components. ORDPATH [6] is a recent Dewey-based approach which eliminates the need for relabeling the existing nodes in a given XML document when new nodes are inserted. However, as in many other similar approaches, ORDPATH uses the variable-length prefix-free technique, which is not the ultimate compression technique [21]. QED [21] is a compact encoding technique which can be applied to both interval and Dewey labeling methods. QED provides efficient support for frequent node insertion; however, QED requires XML documents to be parsed twice when they are stored for the first time. Furthermore, QED does not provide a significant label size reduction over ORDPATH for XML documents with small fan-outs.
We observe that there is scope for improving on the performance and label size of the current Dewey-based approaches such as ORDPATH by eliminating the need for complex and excessive variable-length prefix-free strings, using a fixed-width prefix and only when needed. We also split the label into two components (parent, child) to make better use of the available RDBMS indexing techniques.
3 Prefixing on Demand Labeling Scheme (PoD)

3.1 Basic Labeling Unit (BLU)

The basic concept of the PoD labeling scheme is to have a Basic Labeling Unit (BLU) with a fixed length (l bits). Most of the values in the BLU will be used to label nodes based on their document order. However, some of the highest values within the BLU will be preserved for prefixing the extended labels, which will be used to label larger numbers of nodes. Furthermore, the maximum value within the BLU is preserved as an insertion point (e.g. for a BLU of 4 bits the value 15 is the insertion point). The fixed-length BLU preserves the document order and eliminates the need for complex variable-length prefix-free strings. This technique makes it possible to label more nodes using shorter labels before the need to use extended labels arises.
Proposal 1: A fixed number of bits (l) can be used to label up to M XML elements within the same parent node, where the label value (v) does not have a prefixing component and the value (v) is in the range 0 <= v < M.

if (... 15) {
    Label = CONCAT(PL, CL)
} else {
    PL[lastByte] = PL[lastByte] | CL[firstByte]
    Label = CONCAT(PL, CL[starting from firstByte+1])
}

where ( | ) is the bitwise OR operation.
4 XML Query Translation

XQuery is emerging as the standard query language for XML documents, yet only a few features of XQuery have been implemented in RDBMS. We adapted translation methods for Dewey-based labels from [4], and we also adapted some mapping methods from Edge [3] and XRel [11]. We added additional attributes to the XML_Paths table, such as the node's level and the node's type, to provide more information about the document structure without sacrificing space.

4.1 Navigating Upwards the XML Tree

The ancestor-descendent relationship between any two nodes is evaluated by comparing the label values of the two nodes; node N2 is a descendent of node N1 if the label value of N2 is between the label of node N1 and the upper-limit label of any possible child node of node N1. The same approach is used to prove that any given two nodes N2 and N3, at any levels, are both descendents of the same ancestor node N1:

(N1 < N2 < xmc(N1)) AND (N1 < N3 < xmc(N1))

where the function xmc returns the upper-limit value for the descendents of node N. However, this approach might be very expensive since it involves three joins (or three self-joins) on a very large table (or tables). We propose a new technique based on the fact that the Dewey-based label of any given node contains the label values of its parent and ancestor nodes. The new function xgp (XML Grandparent), which takes two parameters, xgp(label, level), provides an option to navigate upwards the XML tree without the need to join on the table that contains the label of the grandparent (ancestor) node. The above comparison can be evaluated as follows:

xgp(N2, Level(N1)) == xgp(N3, Level(N1))

which can be further optimized as follows:

N2 between xgp(N3, Level(N1)) and xmc(xgp(N3, Level(N1)))
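For illustration only, assuming a simple dot-separated textual representation of Dewey labels (the actual PoD labels are binary strings), xgp and the shared-ancestor test could be sketched in Python as follows; this is our own sketch, not the paper's implementation.

# Sketch: navigating upwards with xgp on dot-separated Dewey labels,
# e.g. "1.5.8.1". Level 1 corresponds to the root component.
def xgp(label: str, level: int) -> str:
    # Returns the label of the ancestor at the given level.
    return ".".join(label.split(".")[:level])

def same_ancestor(n2: str, n3: str, level: int) -> bool:
    # N2 and N3 descend from the same ancestor at the given level.
    return xgp(n2, level) == xgp(n3, level)

print(xgp("1.5.8.1", 2))                     # "1.5"
print(same_ancestor("1.5.8.1", "1.5.3", 2))  # True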
5 Experimental Evaluation

We conducted experiments to evaluate the storage requirements and the query run time of the PoD labeling method using the XMark and Michigan benchmarks [19, 20]. We used the benchmark tools to generate XML documents of different sizes. We have compared our results against those of ORDPATH [6] and QED [21].

5.1 Label Length and Scalability Experiments

In this test we evaluated the total size and the maximum label length of the generated labels for small and large XML documents. To eliminate the impact of the attributes on ORDPATH's label lengths, we used the same technique that we use in PoD, where the first value (0 or 1) is used as a virtual parent for the attributes of any node.
The results in Figure 1 show that PoD outperformed ORDPATH by more than 20%, especially when the size of the XML documents grows very large. The results also showed that QED is not a space-efficient labeling scheme for XML documents with low fan-out (e.g. the documents in the Michigan benchmark); the QED total label size for the Michigan benchmark documents is around 35% larger than that of PoD. As expected, the total label size for PoD-Split is slightly larger than the total label size for PoD; the advantages of the split configuration may justify this slight increase in label size.
Fig. 1. The results of label length experiments for PoD of 4-bit BLUs, ORDPATH, and QED using two benchmarks (XM: XMark benchmark and MCH: Michigan benchmark)
5.2 Performance Test

The test was conducted on an Intel Dual Core (1.8 GHz) machine with 1 GB of memory. We used MySQL server v5.1. We report on two main tests: first, evaluating PoD and PoD-Split query performance and comparing these results with the results of ORDPATH (a & b) and QED; second, evaluating the use of the level information stored in the XML_Paths table in conjunction with the "xgp" function (XML grandparent function) to navigate upwards an XML tree.

5.2.1 PoD and PoD-Split Performance Test

We picked 18 different queries from the Michigan benchmark; we selected queries from each group based on their relevance to the purpose of this test. We ran each query 10 times and ignored the first run time of each query. Figure 2 shows the average runtime on a logarithmic scale for each query for both XML documents. The results demonstrate that PoD can be used effectively to address almost all queries against XML documents that are stored in an off-the-shelf relational database system. Furthermore, for almost all queries, PoD ran consistently faster than ORDPATH and QED by around 10%; this is due to the fact that PoD has shorter label lengths than ORDPATH and QED by around 20% and 35%, respectively.
Fig. 2. The results of the performance test, on a logarithmic scale, for PoD-Split, PoD, ORDPATH (a and b), and QED using the Michigan micro benchmark
However, PoD-Split made a significant performance gain over all other configurations whenever the query nodes have a parent-child relationship or a sibling relationship. The performance gain is smaller in the case of the large XML document (500 MB); this is due to the fact that all the XML document nodes are stored in one large table and the queries took longer to evaluate a larger number of unrelated values, in particular for complex queries. We believe that this performance issue can be improved if the "All_Nodes" table is partitioned into several smaller tables and the query processor joins only the tables that satisfy the query conditions. Queries QS24, QS31 and QA5 demonstrated the advantage of PoD-Split. For the 500 MB document, PoD-Split returned successful results in 46.6 s, 70.1 s, and 47.2 s, respectively. However, for the other configurations, the same three queries were aborted after more than 4 hours. Furthermore, PoD-Split did not suffer performance degradation for queries that do not have parent-child or sibling relationships.

5.2.2 Navigating Backwards Test

We conducted this test to evaluate the impact of having the level information stored in the XML_Paths table and using it in conjunction with the xgp function to navigate upwards an XML tree.
We used 6 queries from the XMark benchmark to evaluate the default (DEF) query translation method and the optimized translation using the xgp function. We report on the performance results of PoD and ORDPATH (a); the same idea can be applied to all other configurations. It was much harder to implement the xgp function for ORDPATH labels due to their variable-length prefix structures. Table 3 shows the results of this test using two XML documents from the XMark benchmark. The results show that our new technique (level information + xgp function) provides a performance gain and can be dramatically faster than the default translation technique (Q1, Q4 and Q16). However, other factors in the query might affect the query performance, as in Q9 and Q10.

Table 3. The test results for the default and xgp function techniques using the XMark benchmark
                     XMark (113MB) / Time (s)               XMark (568MB) / Time (s)
                     PoD 4            OrdPath (a)           PoD 4             OrdPath (a)
    XMark Queries    DEF      XGP     DEF      XGP          DEF       XGP     DEF       XGP
    Q1               0.125    0.0001  0.122    0.0001       0.575     0.0008  0.593     0.0008
    Q4               0.387    0.001   0.406    0.001        1.993     0.001   2.093     0.001
    Q9               5.193    3.653   5.325    3.853        277.159   270.67  295.493   289.375
    Q10              11.656   9.181   11.703   9.556        58.675    46.731  59.99     48.29
    Q16              1.343    0.084   1.4      0.087        6.859     0.412   6.953     0.437
    Q19              10.218   5.684   10.387   6.103        58.02     34.98   65.52     45.97
6 Conclusions

PoD is a new Dewey-based labeling technique for dynamic XML documents. It reduces the total label size as well as the maximum label length for general XML documents without any prior knowledge about the document structure (i.e., DTD or XSchema). However, PoD with its parameterized features can use XML metadata to further reduce the label length for specific XML documents in applications where space is an important factor. Furthermore, PoD minimizes the processing overhead by not using variable-length prefix-free binary strings. The PoD of 4-bit BLUs best represents the idea of Prefixing on Demand and it can be considered the most suitable configuration for both small and large XML documents in most applications. PoD of 4-bit BLUs outperformed other recent Dewey labeling methods in terms of storage by at least 20%, which also enhanced the query performance by 10%, especially for large XML documents. We have also proposed two major optimization techniques that can be applied to any Dewey-based labeling technique. The first one is to split the label into two components (Parent, Child), which significantly enhances the query performance for off-the-shelf RDBMS by making better use of the indices. The second technique, based on level information and the xgp function, provides an efficient mechanism to navigate upwards an XML tree, which can be used to evaluate ancestor relationships. Further research will be conducted to evaluate tailored versions of PoD to suit particular applications, making use of PoD's flexible features.
References [1] Grust, T., Rittinger, J., Teubner, J.: Why off-the-shelf RDBMSs are better at XPath than you might expect. In: The ACM SIGMOD International Conference on Management of Data, Beijing, China (2007) [2] Härder, T.: XML Databases and Beyond - Plenty of Architectural Challenges Ahead. In: Advances in Databases and Information Systems, 9th East European Conference,Tallinn, Estonia (2005) [3] Florescu, D., Kossmann, D.: Storing and Querying XML data using an RDBMS. IEEE Data Engineering Bulletin 22(3) (1999) [4] Tatarinov, I., Beyer, K., Shanmugasundaram, J., Viglas, S., Shekita, E., Zhang, C.: Storing and Querying Ordered XML Using a Relational Database System. In: Proceedings of ACM SIGMOD International Conference on Management of Data, Madison, USA, WI (2002) [5] Deutsch, A., Fernandez, M., Suciu, D.: Storing Semistructured Data with STORED. In: Proceedings of ACM SIGMOD, Philadelphia, PN, USA (1999) [6] O’Neil, P., O’Neil, E., Pal, S., Cseri, I., Schaller, G., Westbury, N.: ORDPATHs: InsertFriendly XML Node Labels. In: Proceedings of the ACM SIGMOD International Conference on Management of Data, Paris, France (2004) [7] Böhme, T., Rahm, E.: Supporting Efficient Streaming and Insertion of XML Data in RDBMS. In: Third International Workshop on Data Integration over the Web, Riga, Latvia (2004) [8] Cohen, E., Kaplan, H., Milo, T.: Labeling Dynamic XML Trees. In: 21st ACM SIGACTSIGMOD-SIGART Symposium on Principles of Database Systems, Madison, WI, USA (2002) [9] Deutsch, A., Papakonstantinou, Y., Xu, Y.: The NEXT Framework for Logical XQuery Optimization. In: Proceedings of the 30th VLDB Conference, Toronto, Canada (2004) [10] Shanmugasundaram, J., Tufte, K., He, G., Zhang, C., DeWitt, D., Naughton, J.: Relational Databases for Querying XML Documents: Limitations and Opportunities. In: The 25th VLDB Conference, Edinburgh, Scotland (1999) [11] Yoshikawa, M., Amagasa, T., Shimura, T., Uemura, S.: XRel: A path-based approach to Storage and Retrieval of XML Documents using Relational Database. ACM Transactions on Internet Technology 1(1), 110–141 (2001) [12] Kha, D., Yoshikawa, M., Umeura, S.: An XML Indexing Structure with Relative Region Coordinate. In: The 17th International Conference on Data Engineering, Heidelberg, Germany (2001) [13] Zhang, C., Naughton, J., DeWitt, D., Luo, Q., Lohman, G.: On Supporting Queries in Relational Database Management Systems. Special Interest Group on Management of Data, Santa Barbara, CA, USA (2001) [14] Maggie, D., Zhang, Y.: LSDX: A new Labeling Scheme for Dynamically Updating XML Data. In: The 16th Australian Database Conference, Newcastle, Australia (2005) [15] Haustein, M., Härder, T., Mathis, C., Wagner, M.: DeweyIDs The Key to Fine-Grained Management of XML Documents. In: 20th Brazilian Symposium on Databases, Brazil (2005) [16] Wu, X., Lee, M.L., Hsu, W.: A Prime Number Labeling Scheme for Dynamic Ordered XML Trees. In: The 20th International Conference on Data Engineering ICDE, Boston, MA, USA (2004)
[17] Schmidt, A., Kersten, M., Windhouwer, M., Waas, F.: Efficient Relational Storage and Retrieval of XML Documents. In: The World Wide Web and Databases, Third International Workshop WebDB, Dallas, TX, USA (2000) [18] Maghaydah, M., Orgun, M.: An Adaptive Labeling Method for Dynamic XML Documents. In: Proceedings of the IEEE International Conference on Information Reuse and Integration, Las Vegas, NV, USA, pp. 618–623 (2007) [19] Schmidt, A., Waas, F., Kerten, M., Carey, M., Manolescu, I., Busse, R.: XMARK: A Benchmark for XML Data Management. In: Proceedings of the 28th VLDB Conference, Hong Kong, China (2002) [20] Runapongsa, K., Patel, J., Jagadish, H., Chen, Y., Al-Khalifa, S.: The Michigan Benchmark: Towards XML Query Performance Diagnostics. Information Systems 31, 73–97 (2006) [21] Li, C., Ling, T.W.: QED: A Novel Quaternary Encoding to Completely Avoid Relabeling in XML Updates. In: CIKM 2005 (2005)
From Data Mining to User Models in Evolutionary Databases

César Andrés, Manuel Núñez, and Yaofeng Zhang

Departamento Sistemas Informáticos y Computación, Universidad Complutense de Madrid, E28040 Madrid, Spain
[email protected], [email protected], [email protected]
Abstract. Testing is one of the most widely used techniques to increase the quality and reliability of complex software systems. In this paper we refine our previous work on passive testing with invariants to incorporate (probabilistic) knowledge obtained from users of the system under test by using data mining techniques. We adapt our previous approach in order to reduce costs. We emphasize the use of data mining to discover knowledge, a technique that is not only accurate but also comprehensible for the user.
1 Introduction
Testing is a major issue in any software development project because faults are indeed difficult to predict and find in real systems. In testing, there is usually a distinction between two approaches: passive and active. The main difference between them is whether a tester can interact with the System Under Test (SUT). If the tester can interact with the SUT, then we are in the active testing paradigm. On the contrary, if the tester simply monitors the behavior of the SUT, then we are in the passive testing paradigm. Actually, it is very frequent either that the tester is unable to interact with the SUT or that the internal nondeterminism of the system makes it difficult to interact with it. In particular, such interaction can be difficult in the case of large systems working 24/7, since it might produce a wrong behavior of the system. Therefore, passive testing usually consists in recording the trace produced by the SUT and trying to find a fault by comparing this trace with the specification [10,12,11,13,6,9]. A new methodology to perform passive testing was presented in [7,5]. The main novelty is that a set of invariants is used to represent the most relevant expected properties of the specification. An invariant can be seen as a restriction over the traces allowed to the SUT. We have recently extended this work to consider the possibility of adding time constraints as properties that traces extracted from the SUT must hold [2].
Research partially supported by the Spanish MEC projects WEST/FAST (TIN200615578-C02-01) and TESIS (TIN2009-14312-C02-01), and UCM-BSCH programme to fund research groups (GR58/08 - group number 910606).
We can consider different methods to obtain a set of invariants. The first one is that testers propose a set of representative invariants; then, the correctness of these invariants with respect to the specification has to be checked. The main drawback of this approach is that we still have to rely on the manual work of the tester. If we do not have a complete formal specification, another option is to assume that the set of invariants provided by the testers is correct by definition. In this paper we consider an alternative method: we automatically derive invariants from the specification. This approach was introduced in [4] and improved in [3]. Initially, we considered an adaptation of the algorithm presented in [7]. The problem with this first attempt is that the number of invariants that we extract from the specification is huge and we do not have any criteria to decide which are the best ones. Therefore, we improved our contribution by using data mining techniques to provide a probabilistic model [1] to guide the extraction of these properties. The underlying idea is that invariants should check the most frequent user actions. We provided a methodology to extract from a specification, by using its associated probabilistic model, a set of invariants with a relevance degree.
In [3] it was shown that the cost of building a user model from a database is high. Thus, we are not able to build a new user model each time that new data are added. In this paper we study when a user model, extracted from an original database, remains valid with respect to a timed evolution of the original one. That is, we study the conformance of a probabilistic user model with respect to a temporal evolution of the initial database. With this methodology we guarantee that the set of invariants extracted using the algorithm in [3] keeps a high degree of relevance. In addition, we study a feasible approach to update the user model on the fly in order to save time and computational resources.
The rest of the paper is structured as follows. In Section 2 we present our passive testing framework for timed systems. In Section 3 we present our formalism to define user models. In Section 4 we define the conformance of a user model with respect to a database and introduce a dynamic adaptation taking into account the newly recorded data. We conclude with Section 5, where we present our conclusions.
2 Testing Framework
In this section we review our framework [2]. We adapt the well known formalism of Finite State Machines (FSMs) to model our specifications. The main difference with respect to usual FSMs consists in the addition of time to indicate the lapse between offering an input and receiving an output. We use positive real numbers as time domain for representing the time units that the output actions take to be executed. Definition 1. A Timed Finite State Machine is a tuple M = (S, s0 , I, O, T ), where S is a finite set of states, s0 ∈ S is the initial state, I and O, with I ∩ O = ∅, are the finite sets of input and output actions, respectively, and
Fig. 1. Example of TFSM (states s0-s4 over inputs {a, b} and outputs {x, y, z}; transitions are labeled input/output/time)
T ⊆ S × I × O × R+ × S is the set of transitions. Throughout this paper we will write s --i/o-->_t s' as a shorthand to represent the transition (s, i, o, t, s') ∈ T. We say that M is deterministic if for each state s and input i there exists at most one transition labeled by i departing from s. We say that M is input-enabled if for all states s ∈ S and inputs i ∈ I there exist s' ∈ S, o ∈ O and t ∈ R+ such that s --i/o-->_t s'.
A transition belonging to T is a tuple (s, i, o, t, s') where s, s' ∈ S are the initial and final states of the transition, i ∈ I and o ∈ O are the input and output actions, respectively, and t ∈ R+ denotes the time that the transition needs to be completed. In this paper we consider that all defined TFSMs are input-enabled and deterministic.

Example 1. Let us consider the TFSM depicted in Figure 1. The set of inputs is {a, b} while the set of outputs is {x, y, z}. For example, the transition s0 --a/y-->_2 s1 denotes that if the system is in state s0 and the input a is received, then the output y will appear after 2 time units and the system moves to s1. Let us also remark that this TFSM is input-enabled, that is, in every state all inputs belonging to the set of inputs are defined.

Next we introduce the notion of trace. A trace represents a sequence of actions that the system may perform from any state.

Definition 2. Let M be a TFSM. We say that ω = ⟨i1/o1/t1, ..., in/on/tn⟩ is a trace of M if there is a sequence of transitions

s1 --i1/o1-->_{t1} s2 --i2/o2-->_{t2} s3 ... s_{n-1} --in/on-->_{tn} s_n

In the previous definition let us mention that we do not force traces to start in the initial state.
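A direct way to check Definition 2 is to walk the transition relation. The following Python sketch (ours) stores a deterministic TFSM as a dictionary of transitions and verifies whether a sequence of i/o/t triples is a trace from a given state; only the transitions that can be read off Examples 1 and 2 are included.

# Sketch: checking whether a sequence is a trace of a deterministic TFSM.
# Transitions map (state, input) -> (output, time, next_state).
TRANSITIONS = {
    ("s0", "a"): ("y", 2, "s1"),   # from Example 1
    ("s2", "a"): ("x", 3, "s2"),   # from Example 2
    ("s2", "b"): ("z", 2, "s0"),   # from Example 2
}  # only a partial fragment of Fig. 1, for illustration

def is_trace(start, trace):
    state = start
    for (i, o, t) in trace:
        step = TRANSITIONS.get((state, i))
        if step is None or step[0] != o or step[1] != t:
            return False
        state = step[2]
    return True

print(is_trace("s2", [("a", "x", 3), ("b", "z", 2)]))  # True (cf. Example 2)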
Example 2. Let us consider the TFSM defined in Figure 1. Some traces of this TFSM are ⟨a/x/3, b/z/2⟩ (traversing the states s2, s2, s0) and ⟨b/z/3, b/x/3⟩ (traversing the states s3, s4, s4), but not ⟨a/x/3, b/x/2⟩.

Next we introduce the notion of invariant. An invariant allows us to express properties that must be fulfilled by the SUT. For example, we can express that the time the system takes to perform a transition always belongs to a specific interval. In our framework, an invariant expresses the fact that each time the SUT performs a given sequence of actions, it must exhibit a behavior reflected in the invariant. Invariants can be seen as a kind of temporal logic that expresses both timed and untimed properties of the specification, modeled by a TFSM. Untimed properties are represented by the sequences of input/output pairs that form the invariant, while timed properties are given by the intervals that indicate when these actions must be performed. Next, we present the intuitive idea of invariants by means of examples.

Example 3. Let us consider the specification presented in Figure 1. The invariant ϕ = a/y/[2, 3], a → {x, z}/[2.5, 3.5] [4, 7] means that every time we observe the input a followed by the output y within a time belonging to [2, 3], then, if we observe the input a, we have to observe either the output x or the output z within a time belonging to [2.5, 3.5]. The last time restriction refers to the whole length of the checked log, that is, the elapsed time from the first input action a to the last output of the invariant must belong to [4, 7].
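To give a flavour of how such an invariant could be checked against a recorded log, the following Python sketch (a simplification of ours, not the full invariant language of [7,2]) tests the property of Example 3 on a log given as a list of (input, output, time) triples.

# Simplified sketch: checking the invariant of Example 3 on a log.
# Whenever a/y is observed with time in [2,3] and the next input is a,
# the next output must be x or z with time in [2.5,3.5], and the total
# elapsed time of the two steps must lie in [4,7].
def check_invariant(log):
    for k in range(len(log) - 1):
        (i1, o1, t1), (i2, o2, t2) = log[k], log[k + 1]
        if i1 == "a" and o1 == "y" and 2 <= t1 <= 3 and i2 == "a":
            if o2 not in {"x", "z"} or not (2.5 <= t2 <= 3.5):
                return False
            if not (4 <= t1 + t2 <= 7):
                return False
    return True

print(check_invariant([("a", "y", 2), ("a", "x", 3)]))   # True
print(check_invariant([("a", "y", 2), ("a", "y", 3)]))   # False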
3 User Model
As we discussed in the introduction, there are several ways to obtain a set of invariants. In this paper we propose that invariants can be automatically derived from the specification. The main problem with this approach is that the set of correct extracted invariants is huge and potentially infinite. Thus, we need a methodology to select good invariants among the set of candidates. A first approach was given in [3]. In that paper we assume that there exists a large set of logs recorded in a database, which corresponds to the set of user interactions with the system. In order to process this information, a probabilistic model can be defined to represent the users of the system. This new model is called a user model [1], and it can be automatically extracted from the database by applying data mining techniques. Intuitively, a user model includes the probability that a user chooses each input in each situation. We use a particular case of Probabilistic Machine (PM) to represent user models. Let us note that the interaction of a user with an SUT ends whenever the user decides to stop it. Consequently, models denoting users must represent this event as well. Therefore, given a PM representing a user model, we will require that the sum of the probabilities of all inputs associated with a state is lower than or equal to 1. The remainder up to 1 represents the probability of stopping the interaction at this state.
Fig. 2. Example of user models (U1 has a single state s0 with fs0(a) = 0.7; U2 has two states s0, s1 with fs0(a) = 0.3, fs0(b) = 0.6, fs1(a) = 0.5, fs1(b) = 0.4)
Definition 3. A user model is represented by a tuple U = (S, s0, I, O, F, T) where S is the finite set of states and s0 ∈ S is the initial state. I and O, with I ∩ O = ∅, are the finite sets of input and output actions, respectively. F = {fs | s ∈ S} is a set of probabilistic functions such that for each state s ∈ S, fs : I → [0, 1] fulfills Σ_{i∈I} fs(i) ≤ 1. Finally, T ⊆ S × I × O × S is the set of transitions. We say that U is deterministically observable if there do not exist in T two transitions (s, i, o1, s1), (s, i, o2, s2) ∈ T with o1 = o2 and s1 ≠ s2.

In this paper, we assume that user models are deterministically observable. We adopt s --i/o-->_p s' as a shorthand to express (s, i, o, s') ∈ T and p = fs(i).

Example 4. Let us consider the user models depicted in Figure 2. Both models represent a type of user that interacts with the TFSM presented in Figure 1. The user model U1 represents users who always interact with the SUT by applying the input action a. The probability of applying this input is represented by the function value fs0(a). The remainder up to 1, that is 0.3, represents the probability of stopping in this state. U2 represents a more complex kind of user. These users can perform any input at any time. The probability associated with each state and input is represented by the function values fs0(a), fs0(b), fs1(a), and fs1(b), respectively.

Next, we identify the probabilistic traces of user models. These traces will be used in order to provide a coverage degree of a possible user behavior.

Definition 4. Let I be a set of input actions and O be a set of output actions. A probabilistic trace is a sequence ⟨(i1/o1, p1), (i2/o2, p2), ..., (in/on, pn)⟩, with n ≥ 0, such that for all 1 ≤ k ≤ n we have ik ∈ I, ok ∈ O, and pk ∈ [0, 1].
Let U be a user model and σ = ⟨(i1/o1, p1), (i2/o2, p2), ..., (in/on, pn)⟩ be a probabilistic trace. We say that σ is a trace of U, denoted by σ ∈ ptr(U), if there is a sequence of transitions of U

s0 --i1/o1--> s1 --i2/o2--> s2 ... --in/on--> sn

and for all 1 ≤ k ≤ n we have pk = f_{s_{k-1}}(ik). We define the stopping probability of U after σ as

s(U, σ) = 1 − Σ { f_{s_n}(i) | i ∈ I }

We denote the complete probability of σ as

prob(U, σ) = ( Π_{1≤j≤n} pj ) · s(U, σ)

Let us remark that the set of probabilistic traces of a user model is related to the database of logs that has been used to generate this model: the probability of finding a log in the database has to be equal to the probability of producing this sequence in the user model and then stopping.

Example 5. Let us consider the user models U1 and U2 presented in Figure 2, and let DB1 and DB2 be the two databases from which these models were extracted. Next we present the relation between finding a log in a database and the probability of generating this sequence from a user model. For example, we calculate the probability of finding the log ⟨a/y/2⟩ in DB1. It has to be the same value as prob(U1, ⟨a/y/0.7⟩) = 0.21, computed as 0.7 · 0.3. Another example is to calculate the probability of finding the log ⟨a/y/2, a/x/3⟩. In this case we have prob(U1, ⟨a/y/0.7, a/y/0.7⟩) = 0.147, since 0.7 · 0.7 · 0.3. Let us remark that the probability of finding a log in DB1 can be 0; for example, this happens if we look for the log ⟨a/y/2, b/y/3⟩. On the other hand, the probability of finding any log in the user model U2 (extracted from DB2) is always bigger than 0. For example, if we use the previous log ⟨a/y/2, b/y/3⟩, its frequency in DB2 has to be the same as the probability prob(U2, ⟨a/y/0.5, b/y/0.4⟩) = 0.02, computed as 0.5 · 0.4 · 0.1.

We use user models to guide the algorithm for extracting invariants from the specification. Let us note that if we generate invariants that reflect only a finite set of input sequences, then we can calculate the proportion of covered interactions between the user and the SUT. In terms of the number of interactions, this proportion is 0 because we are considering a finite number of cases out of an infinite set of possibilities. However, the probability that the user interacts with the SUT and performs one of the traces considered in a given finite set of cases is not 0 in general. Thus, we may use this value as a coverage measure of a finite set of invariants.
4 Comparison and Evolution of User Models
The use of user models to guide the algorithm of invariant extraction from the specification is feasible. The problem of this approach is that building a user model by applying data mining techniques is a hard task, that is, we need a lot of database and computation resources to generate it. This implies that the extraction of the set of invariants remains very expensive. Our goal is to reduce the number of times that a user model has to be rebuilt. In addition, we provide an adaptive method, based on the social insect algorithm known as Ant Colony Optimization [8], to update the model.
4.1 Comparison of User Models
We present a measure function to decide how close the users from two databases are. The idea is that databases change over time. We can assume that a user model is correct with respect to a database when the users from this database and the users from the original database, that is, the one used to generate the user model, are similar. When this measure exceeds a certain bound, we should drop the old user model and build a new one. In this approach we propose that the users of the SUT may be structured in different classes. These classes denote the degree of experience that a user has. In a database composed of logs recorded from the SUT, the distribution of users among classes depends on the amount of traces belonging to the database, the number of different days on which these users interact with the SUT, and the distance (in time) between the first and the last interaction. Each class is a set of users sorted according to their relevance. So, the set with the highest relevance, that is, expert users, is u1; the set labeled u2 represents people that normally use the system, and so on. As usually happens in knowledge communities, the number of users in each class should follow a pyramidal structure. Thus, the condition |u1| < |u2| < . . . < |un| will be assumed. It is important to remark that the distribution of users among classes is done dynamically by taking into account their behavior. However, when creating a hierarchy for the first time there is no information about the previous behaviour of the users. Next, we relate the conformance of a user model, the database used to generate this user model, and a possible timed evolution of this database. The main parameter to be dynamically studied is the deformation speed of the log, that is, the speed at which the database increases its number of recorded fields and the users start to change from one class to another. Definition 5. Let U be a user model and let DB1, DB2 be two databases, where DB1 is the database used to produce U. We denote by ui(DBj) the set of users of the database DBj belonging to the class ui, and by U(DBj) the total number of users in DBj.
We define the correlation measure between DB1 and DB2 as:

P(DB1, DB2) = ( Σi=1..n (ui(DB1) − ui(DB2))² ) / ( U(DB1)² + U(DB2)² )
With respect to the timed evolution of the database, on the one hand, if an interaction from a not yet classified user is stored into the database, then this user is classified into the level un. On the other hand, if the interaction comes from an existing user, then the database can change her associated class, for example, due to being less/more participative. Let us remark that, given two databases DB1 and DB2, the value P(DB1, DB2) belongs to [0, 1]. Definition 6. Let U1 be a user model generated from the database DB1. Let DB2 be a timed evolution of DB1. We say that U1 conforms to DB2 with a degree of confidence α ∈ [0, 1] if: 1 − P(DB1, DB2) ≥ α. By requesting a higher value of α we ensure that the user model conforms more closely to the timed evolution of the database. This value can also be used to decide when the tester should generate a new set of invariants. For example, we can consider that when the degree of confidence of the current database with respect to a user model drops below 0.7, we generate a new user model from this database, and we generate a new set of invariants using this new user model.
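A small numeric sketch of how this check could be automated follows; the per-class user counts are invented for the example.

```python
# Correlation measure of Definition 5 and conformance test of Definition 6.
# Each database is summarised by its number of users per class u1, ..., un
# and by its total number of users.

def correlation(db1_classes, db2_classes, total1, total2):
    num = sum((a - b) ** 2 for a, b in zip(db1_classes, db2_classes))
    return num / (total1 ** 2 + total2 ** 2)

def conforms(db1_classes, db2_classes, total1, total2, alpha):
    return 1.0 - correlation(db1_classes, db2_classes, total1, total2) >= alpha

# hypothetical class sizes |u1|, |u2|, |u3| for the original and the evolved database
db1 = [5, 20, 75]        # pyramidal structure: |u1| < |u2| < |u3|
db2 = [7, 25, 90]
print(correlation(db1, db2, sum(db1), sum(db2)))    # small value: databases still similar
print(conforms(db1, db2, sum(db1), sum(db2), 0.7))  # True -> no need to rebuild the model
```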
4.2 Dynamic Adaptation of a User Model
By using the previous correlation measure between databases, testers know when a set of invariants is no longer valid with respect to a database. However, they still face the cost of extracting a new user model from the new database. In this section we present a dynamic adaptation of a user model with respect to the new data arriving in the database. This method provides a low-cost and dynamic methodology that allows us to always have an updated user model. For this approach, we assume that we are provided with an initial model. This model is defined as a graph where each path (and sub-path), represented by a probabilistic trace, has an associated live counter. At the beginning this counter contains the complete probability value associated with this path (see Definition 4). When a new user interaction is recorded into the database, this interaction is also recorded in the user model, that is, the live counter of this path increases while the values of the rest of the paths decrease. By using this technique, a user model can undergo several possible evolutions. The most representative evolutions are used to build new paths in the user model and to remove old paths.
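A rough sketch of the live-counter bookkeeping is given below; the reinforcement and decay constants are arbitrary choices made for illustration and do not come from the paper.

```python
# Every recorded path keeps a counter that is reinforced when the path is observed again
# and decayed otherwise; paths whose counter drops below a threshold are removed.

def update_counters(counters, observed_path, reward=0.1, decay=0.98, drop_below=0.01):
    counters[observed_path] = counters.get(observed_path, 0.0) + reward
    for path in list(counters):
        if path != observed_path:
            counters[path] *= decay
        if counters[path] < drop_below:
            del counters[path]          # the path is no longer representative
    return counters

counters = {("a/y",): 0.21, ("a/y", "a/x"): 0.147}   # initial complete probabilities
update_counters(counters, ("a/y", "b/y"))            # a new kind of interaction arrives
print(counters)
```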
Fig. 3. Possible adaptation of the user models U1 and U2
Let us present the intuitive idea of adding new paths to the user model U1 presented in Figure 2. Let us consider that the database receives several user interactions. After performing the first b input action, the probability of continuing with either the a action or the b action is the same. A possible adaptation of U1 under these conditions is presented as the model U1 in Figure 3. Next, let us present the idea of dropping paths. Let us consider that we are working with the second user model U2 (see Figure 2) and that the database receives several sequences that always start with the input action a. After processing these new logs, a possible adaptation of the model could be the one presented as U2 in Figure 3. The main difference is that users modeled by U2 will never start with the input action b.
5 Conclusions
In this paper we have presented an improvement of a previous methodology to perform passive testing by using invariants. These invariants are automatically extracted from a specification with the help of a user model that represents the usual behaviour of users interacting with the SUT. By using data mining techniques we can generate a user model, but the cost of building it is very high. In this
paper we present, on the one hand, an approach to decide whether a user model remains useful with respect to a database and, on the other hand, a dynamic algorithm to adapt a given user model to the newly recorded data in the database.
References
1. Andrés, C., Llana, L., Rodríguez, I.: Formally transforming user-model testing problems into implementer-model testing problems and viceversa. Journal of Logic and Algebraic Programming 78(6), 425–453 (2009)
2. Andrés, C., Merayo, M.G., Núñez, M.: Passive testing of timed systems. In: Cha, S(S.), Choi, J.-Y., Kim, M., Lee, I., Viswanathan, M. (eds.) ATVA 2008. LNCS, vol. 5311, pp. 418–427. Springer, Heidelberg (2008)
3. Andrés, C., Merayo, M.G., Núñez, M.: Supporting the extraction of timed properties for passive testing by using probabilistic user models. In: 9th Int. Conf. on Quality Software, QSIC 2009 (2009) (in press)
4. Andrés, C., Merayo, M.G., Núñez, M.: Using a mining frequency patterns model to automate passive testing of real-time systems. In: 21st Int. Conf. on Software Engineering & Knowledge Engineering, SEKE 2009, Knowledge Systems Institute, pp. 426–431 (2009)
5. Bayse, E., Cavalli, A., Núñez, M., Zaïdi, F.: A passive testing approach based on invariants: Application to the WAP. Computer Networks 48(2), 247–266 (2005)
6. Benharref, A., Dssouli, R., Serhani, M., Glitho, R.: Efficient traces' collection mechanisms for passive testing of web services. Information & Software Technology 51(2), 362–374 (2009)
7. Cavalli, A., Gervy, C., Prokopenko, S.: New approaches for passive testing using an extended finite state machine specification. Information and Software Technology 45(12), 837–852 (2003)
8. Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man and Cybernetics B 26(1), 29–41 (1996)
9. Hierons, R.M., Bogdanov, K., Bowen, J.P., Cleaveland, R., Derrick, J., Dick, J., Gheorghe, M., Harman, M., Kapoor, K., Krause, P., Luettgen, G., Simons, A.J.H., Vilkomir, S., Woodward, M.R., Zedan, H.: Using formal methods to support testing. ACM Computing Surveys 41(2) (2009)
10. Lee, D., Netravali, A.N., Sabnani, K.K., Sugla, B., John, A.: Passive testing and applications to network management. In: 5th IEEE Int. Conf. on Network Protocols, ICNP 1997, pp. 113–122. IEEE Computer Society Press, Los Alamitos (1997)
11. Petrenko, A.: Fault model-driven test derivation from finite state models: Annotated bibliography. In: Cassez, F., Jard, C., Rozoy, B., Dermot, M. (eds.) MOVEP 2000. LNCS, vol. 2067, pp. 196–205. Springer, Heidelberg (2001)
12. Tabourier, M., Cavalli, A.: Passive testing and application to the GSM-MAP protocol. Information and Software Technology 41(11-12), 813–821 (1999)
13. Ural, H., Xu, Z.: An EFSM-based passive fault detection approach. In: Petrenko, A., Veanes, M., Tretmans, J., Grieskamp, W. (eds.) TestCom/FATES 2007. LNCS, vol. 4581, pp. 335–350. Springer, Heidelberg (2007)
How to Construct State Registries–Matching Undeniability with Public Security Przemyslaw Kubiak1 , Miroslaw Kutylowski1, , and Jun Shao2 1
Institute of Mathematics and Computer Science, Wroclaw University of Technology
[email protected] 2 College of Information Sciences and Technology, Pennsylvania State University
Abstract. We propose a cryptographic architecture for state registries, which are reference databases holding data such as personal birth data, driver's licenses, and vehicles admitted for road traffic. We focus on providing a practical construction that fulfills fundamental requirements concerning strong undeniability of data from the database and resilience to manipulations. On the other hand, the construction has to enable creating fake records in the database by the authorities for the purpose of law enforcement (e.g. creating identities for covert agents trying to infiltrate organized crime). Keywords: registry, secure system architecture, undeniability, group signatures, hash function.
1 Problem
State registries are databases of extreme importance for legal purposes. They hold the most important data on citizens, property and so on, and serve as reference databases in legal procedures. For example, there are state registers holding a record for each resident of a country with data such as birth place and date, civil status (marriages, ...), and children. For centuries, these databases have been kept as paper records. Nowadays they migrate to electronic form with all its advantages (e.g., fast search time) and disadvantages (manipulation threats). Due to their exceptional role in legal procedures, there are the following fundamental requirements for such a database:
1. only an authorized person, called Registrar, can create an entry in the database,
2. inserting an entry is possible in the "append" mode only,
3. no entry can be removed or modified after insertion in such a way that the operation remains undetected.
The paper is partially supported by Polish Ministry of Science and Higher Education, grant N N206 2701 33. Contact author.
It is known how to build a database system that fulfills the above requirements - one can use cryptographic techniques based on hash trees. They provide very strong evidence of database integrity as long as the underlying cryptographic functions are not broken. However, the main challenge is the existence of additional requirements that somewhat contradict the previous ones:
4. a certain authority, called SA (Security Agency), has the possibility to break rules 1-2 and insert additional entries with a past date,
5. it is impossible to distinguish the entries created according to rule 4 from the regular entries, even with the private keys used to create the entries,
6. another authority, called Supervisor, has extra private keys and, using them, may reveal whether a given entry in the database has been created by Registrar or by SA.
Additionally, for each entry from the database (a real one or a fake one created by SA) an authorized person obtains its copy together with appropriate proof data. The proof data may be used against a third party to prove that the entry is indeed in the registry. In practice, registries are either implemented as a standard database (so that fulfilling requirements 1-3 is based on procedural measures only) or with stronger techniques that create problems due to requirements 4-6. The aim of this paper is to present a rather simple cryptographic architecture on top of a database that enables the system to meet all the requirements stated above.
2 Cryptographic Tools
In this section we describe in a concise way cryptographic techniques used to construct our solution.
2.1 Hash Function with a Trapdoor
We use a specific cryptographic hash function H:

H : {0, 1}m × {0, 1}u → {0, 1}n,

where u is the size of signatures (to be determined later) and m, n are security parameters big enough to cope with brute-force search. We demand that:
– H is a one-way, collision resistant function: given z (where z = H(x, s) for some x, s) it is infeasible to find any x′, s′ such that H(x′, s′) = z, and it is also infeasible to find any two distinct pairs (x, s) and (x′, s′) such that H(x, s) = H(x′, s′),
– there is a secret trapdoor S, so that given z̄, s̄, and the trapdoor secret S one can find x̄ such that H(x̄, s̄) = z̄.
The above conditions are the conditions for a chameleon hash function [1]. A hash function with these properties can be based on asymmetric encryption with a public encryption key. Let E denote the encryption function and D denote the decryption function with a secret decryption key. Then we define

H(x, s) = E(E(x) ⊕ s)
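As a toy illustration of this construction, the following Python sketch instantiates E/D with textbook RSA on tiny parameters; it only demonstrates the trapdoor algebra and is in no way a secure or recommended instantiation of the scheme.

```python
# Toy demonstration of H(x, s) = E(E(x) XOR s) with textbook RSA (no padding).
import random

p, q, e = 1299709, 15485863, 65537          # two well-known small primes, toy parameters only
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                         # modular inverse (Python 3.8+)

def E(m): return pow(m, e, n)               # public "encryption"
def D(c): return pow(c, d, n)               # private "decryption" = the trapdoor

def H(x, s):
    return E(E(x) ^ s)

def forge_preimage(z, s_new):
    # E(x') must equal D(z) XOR s_new, hence x' = D(D(z) XOR s_new)
    return D(D(z) ^ s_new)

x, s = random.randrange(2, n), random.randrange(2, n)
z = H(x, s)                                 # the value Registrar publishes

while True:                                 # keep the XOR inside [0, n); a toy-domain
    s_new = random.randrange(2, n)          # detail that a real encoding would handle
    if (D(z) ^ s_new) < n:
        break
x_new = forge_preimage(z, s_new)
assert H(x_new, s_new) == z                 # a second preimage of z was obtained with the trapdoor
```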
Notice that with the decryption function and a signature s it is easy to find a value x such that H(x, s) = z. On the other hand, inverting H would mean breaking E. Indeed, given a ciphertext c, the attacker has to find x, s such that D(c) = E(x) ⊕ s. Also, building a collision for H would mean finding x, x′ such that E(x) ⊕ E(x′) = s ⊕ s′. In our scheme the values s and s′ must be signatures, so one has to find a pair of plaintexts yielding a given difference of ciphertexts. Again, this is infeasible for a good cipher. Another solution for H, proposed by Moti Yung, is using CBC encryption with an asymmetric encryption algorithm, where x is the initial vector. In fact, this is the same solution as above if the first encrypted block is a block of zeroes and the second block consists of s. Also VSH [2] seems to be a candidate for the function H (cf. Sect. 4 in [2]). However, in Subsect. 2.4 we additionally demand that values of H should be statistically indistinguishable from random strings, hence in the case of VSH the Probabilistic Bias Removal Method (PBRM) [3] might be applied to get random binary strings of some fixed length.
2.2 Hash Tree
In our construction we shall use Merkle hash trees. For this purpose we shall use a standard hash function G like SHA-256 (and not the function H described above). A hash tree is a labeled binary tree such that:
– the leaves of the tree are labeled by some values (in our case these values will be constructed in a special way),
– if the child nodes of a node v are labeled with values h1, h2, then the label of v equals G(min(h1, h2), max(h1, h2)).
Moreover, before we start building the labels we rearrange the leaves so that their labels increase from left to right. We shall use the standard proof that there is a leaf with a given label inside a hash tree with a certain label on the root. Simply, we take the path from the leaf to the root and present the labels of all nodes that do not belong to the path but have siblings on the path. As these labels enable us to reconstruct the labels on the path, we can check whether the top node on the path has the label claimed. For later use, the set of labels of the sibling nodes will be called the hash proof.
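To make the hash-proof mechanism concrete, here is a minimal Python sketch assuming SHA-256 as G and a number of leaves that is a power of two (a full binary tree, as used later); it is an illustration, not the paper's implementation. Note that, because a parent is G(min(h1, h2), max(h1, h2)), a proof does not need to record whether a sibling was a left or a right child.

```python
import hashlib

def G(a: bytes, b: bytes) -> bytes:
    lo, hi = (a, b) if a <= b else (b, a)
    return hashlib.sha256(lo + hi).digest()

def build_tree(leaf_labels):
    """Return the list of levels, from the (sorted) leaves up to the root."""
    level = sorted(leaf_labels)
    levels = [level]
    while len(level) > 1:
        level = [G(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def hash_proof(levels, leaf):
    """Sibling labels on the path from `leaf` to the root."""
    idx, proof = levels[0].index(leaf), []
    for level in levels[:-1]:
        proof.append(level[idx ^ 1])      # the other child of the same parent
        idx //= 2
    return proof

def verify(leaf, proof, root):
    label = leaf
    for sibling in proof:
        label = G(label, sibling)         # min/max ordering inside G handles positions
    return label == root

leaves = [hashlib.sha256(s.encode()).digest() for s in ["y1", "y2", "y3", "y4"]]
levels = build_tree(leaves)
root = levels[-1][0]
assert verify(leaves[2], hash_proof(levels, leaves[2]), root)
```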
2.3 Group Signature
A standard group signature [4] satisfies the following properties: Correctness: Signatures generated by a group member must be accepted by the verifying algorithm. Unforgeability: A signature accepted by the verifying algorithm must have been generated by a group member.
Anonymity and traceability: Given a signature accepted by the verifying algorithm, it is computationally hard to identify the signer for everyone except the group manager, who can find the real signer. Unlinkability: It is computationally hard to decide whether two different signatures accepted by the verifying algorithm were generated by the same group member. Exculpability: Neither a group member nor the group manager can sign on behalf of other group members. Our scheme uses a group signature with the following requirements:
– there is an upper bound on the number of group members (for instance 2),
– the group manager cannot become a group member,
– the group manager can prove that a signature was created by a given person with a zero knowledge proof (so that it is not transferable and can be used in a closed court procedure),
– a group member cannot prove to a third party that a given signature has been created by himself (or that it has not been created by himself).
There are many methods to transform a standard group signature scheme into one satisfying the additional requirements. For the first condition, let the two group members share the secret key k1 which is used to generate the membership certificate, and let the supervisor alone know the secret key k2 which is used to trace the actual signer. If the group members generate another group member and this group member generates a signature, then the supervisor may find that the signature has not been created by legitimate group members and therefore they are both malicious. The exculpability, traceability, and anonymity properties guarantee the second, third, and fourth requirements, respectively. With the above ideas, we can get a group signature scheme satisfying the requirements of the state registry problem by combining the group signature scheme from [4] with the distributed RSA-key method from [5].
2.4 Verifiable "Randomness"
According to the protocol, some values (leaves of Merkle trees) are labelled either by hash values or by random values, or rather "non-hash values". While it is easy to verify that a given value has been created as a hash, the opposite is not obvious. Note that the simple usage of a random number generator poses some threat to the system. If the system manufacturer has built some kleptographic channel [6] into the random number generator, simply telling the manufacturer "I am a random value", then the manufacturer might easily distinguish random values x from hash function results. As a consequence, he might distinguish covert agents from real identities. For choosing a random value we may use the following solution. In this scenario Alice determines a value that is either a hash value or a "random" value, while
Bob is the verifier that may check whether the value presented has been created as a hash value or as a "random value". If Alice wishes to determine a "random value", then:
– she chooses a random value x,
– she computes an undeniable signature s̃ [7] of x with designated verifier Bob.
The underlying designated verifier signature scheme should be non-delegatable [8]; otherwise, Bob can delegate the verification ability to a third party, which is against the requirements of the state registry problem. The signature scheme should provide strings of the same length as the hash function, statistically indistinguishable from random strings of this length. As hash values should also be statistically indistinguishable from random values, the signatures and hash values would be statistically indistinguishable. The verification procedure of the signatures mentioned above should be performed as a zero knowledge proof between Alice and Bob. If Alice has to convince Bob whether a given value is "random", then the verification procedure is used: either confirming or denying the signature. The above step is aimed at resisting the following situation: the registrar and SA use the same value for real and fake entries in the registry, and there is a conflict showing that at least one of the entries is fake and has been created by SA. Thanks to the described technique, when such a conflict happens we can be sure that this is the fault of SA (cf. Subsect. 3.1, in which SA is the designated verifier of the signatures, hence SA knows which values are not used for real entries; moreover, according to Subsect. 3.1, SA knows which values are used for real entries).
3 Description of the System
There are the following actors in the system:
Registrar: the authority running the registry. It is responsible for creating the entries and their contents according to strict legal procedures, as well as for issuing legal copies of the entries on request of entitled persons.
SA: a security agency that may create fake copies of the entries in the registry and provides necessary authentication information for them.
Archive: a bulletin board that publishes all authentication information provided by Registrar about the contents of the registry.
Supervisor: an authority that supervises Registrar and SA. Supervisor has no right to generate the entries in the registry or their copies; however, he is entitled to control the integrity of the registry.
Entitled users: an entitled user is a person who, due to legal regulations, receives a copy of the entry created in the registry together with authentication information.
Other users: each person (physical or legal) that is given a copy of an entry from the registry and has to verify that it corresponds to an entry in the registry.
The following secrets are available to the participants of the protocol:
Registrar has:
– a standard key pair for digital signatures, with private key KR,
– the first share of the key for generating key material for group members,
– a key KG for creating group signatures,
– a key KU for creating undeniable signatures mentioned in Sect. 2.4.
SA has:
– the second share of the key for generating key material for group members,
– a key K̄G for creating group signatures,
– a trapdoor KD for H.
Archive has no private keys except, possibly, the keys for authenticating the web page with the bulletin board. Supervisor has the key KC for revealing the author of the group signature. All participants have the public key of the group signatures used and the public key of Registrar.
3.1 System Description
Steps Performed by Registrar. Apart from standard activities (creating entries to be included in the registry), during each day Registrar performs the following steps in order to provide authentication data. At the end of day t Registrar creates a hash tree Ht. This is a full binary tree with L leaves, where L is sufficiently high to be greater than the number of entries of each single day plus the number of fake entries expected by SA for that day. Namely:
1. for the entries m1, . . . , mk created during day t Registrar creates signatures s1, . . . , sk using the key KG,
2. Registrar chooses the strings x1, . . . , xk at random (in fact he uses the procedure from Sect. 2.4 to determine them); then for i ≤ k he computes yi = H(xi, si); the values xi, si get stored together with mi in the database,
3. for k < j ≤ L Registrar creates pseudo-random values yj according to the procedure described in Sect. 2.4 using the key KU,
4. Registrar contacts SA and they perform the following steps:
– Registrar shows yk+1, . . . , yL and performs together with SA the verification procedure described in Sect. 2.4; additionally, for each yi Registrar presents the hash proof pi,
– Registrar shows x1, . . . , xk and performs together with SA the verification procedure described in Sect. 2.4; additionally, Registrar also shows SA the corresponding signatures s1, . . . , sk, to prove that x1, . . . , xk were really used to create leaves of the tree,
5. Registrar creates a hash tree with the leaves y1, . . . , yL according to the method described in Sect. 2.2, and obtains a label lt for the root of the tree. (Note that due to the method of constructing the tree the original ordering of y1, . . . , yL is gone.)
6. Registrar signs lt with the key KR and sends it to Archive,
7. for each mi Registrar creates the hash proof pi (see Sect. 2.2) and sends the authentication data (mi, si, xi, pi) to the entitled person(s) for this entry, asking them for confirmation,
8. Registrar stores y1, . . . , yL in a local database together with the confirmations received from the entitled persons.
Publishing. Archive maintains a public bulletin board keeping the signed labels lt for all days. (The bulletin board can be further secured with techniques such as hash chaining in order to prevent exchanging past lt values.)
Verification of the Copies. An entitled user may present authentication data (mi, si, xi, pi) to any person A. Then A can check the quadruple against the label lt of the day when mi was created. Namely, person A:
1. checks the signature si,
2. checks that H(xi, si) = yi,
3. uses pi to recompute the hash values on the path from the leaf with label yi to the root, and checks whether the label reconstructed for the root equals lt.
Steps Performed by SA. Each day SA learns from Registrar some values yj that can be used for creating fake entries in the registry (a typical application is creating some documents in order to certify the fake identity of covert agents, for witness protection and so on). Once SA has to create a fake entry for some document m, the following steps are executed:
1. SA chooses some y that has been shown by Registrar and proved to be a pseudo-random value not corresponding to any real entry; let p be the hash proof corresponding to y and stored by SA,
2. SA creates a signature s of m using the key K̄G and the group signature scheme,
3. SA uses the trapdoor KD to find x such that y = H(x, s),
4. SA creates a tuple (m, s, x, p) and gives it to an entitled person (e.g., a covert agent).
Note that Registrar is unaware of the steps performed. He may never learn whether fake leaves have been used, as:
– the electronic verification concerns only the root labels,
– the paper documents that may accompany any m from the registry can be created by SA.
s contained in the authentication data by executing the protocol of revealing the creator of a group signature (Sect. 2.3). There are two possible outcomes:
– it turns out that either Registrar or SA has created the signature; in this case Supervisor can further control either the integrity of the database held by Registrar or the activities of SA with the fake documents,
– if neither Registrar nor SA is the author of the signature, then either the cryptographic scheme has been broken or they cooperate illegally.
4 Sketch of a Threat Analysis
In this section we deal with the most interesting security features of our scheme. For a real implementation, a detailed (but somewhat tedious and uninteresting) analysis is necessary.
4.1 Group Manager
Preventing the uncontrolled creation of entries in the registry is the key issue. The general rule is that only Registrar can create entries. However, there must be an exception for SA. On the other hand, Supervisor must not have any possibility to create entries. In our solution, Registrar, SA and Supervisor cooperatively initialize the system and create their private keys (so that each of them gets only its own private key). As the key for generating key material for group members is split into pieces held by Registrar and SA, no other group member should be created. Indeed, it is against the interests of Registrar and SA to create such group members.
4.2 Registrar
Irregularities in Inserting Records. Registrar may insert any entry, for instance an entry for an m as well as for m′ which is a negation of m. Afterwards, Registrar would be able to present either m or m′, according to his advantage. This is possible, since the entries of the registry are not published. Note that today this is the case also for public registries such as real estate registries, where one can ask for a certain entry but cannot download the database or perform most search queries. In our solution Registrar must get a signed confirmation for every entry m. This confirmation must be attached to the entry in the registry database. Therefore, even if a fake entry is inserted, it cannot be revealed afterwards without coercion with this party. This problem concerns any registry where the contents of the database are not published and protected in a cryptographic way. Revealing Fake Entries. Each entry includes a signature s and a leaf y of the Merkle hash tree. Firstly, from the signature, Registrar cannot convince the Adversary who the signer is, due to the anonymity of the group signature. Secondly,
Registrar cannot convince the Adversary that y is really not a hash value. There are two reasons for that. First, Registrar cannot do so by the claim "I have no idea about an x such that y = H(x, s)". Second, Registrar cannot do so by showing that y is a designated verifier signature, since the signature can only be verified by SA, and the signature looks like a hash value of H (recall that the signature and the hash value come from the same domain).
Subliminal Information in the Hash Tree. Registrar may try to leak information saying which labels of the hash tree correspond to the fake entries and support his claim by some strong cryptographic argument. It is impossible to provide evidence directly about each entry; however, the Adversary may know that some entries are real, as they have been created on request of some entitled users known by the Adversary. Note that once the labels of the tree are created, the computation of the hash tree is deterministic and no information can be hidden by exploiting randomness in the construction of the hash tree. So even presenting the whole hash tree to the Adversary does not leak any subliminal information. However, Registrar has a certain degree of freedom when creating the values yi. Namely, he is free to choose the underlying values xi at random. Even if the Adversary cannot really control the values yi (due to the procedures used), Registrar can try several values xi in a blind way until some property is fulfilled. For an example of such a property, consider an asymmetric encryption scheme with private decryption function D and public encryption function E. For a real y it should be the case that D(y) starts with a 1, while D(y) should start with a 0 for a fake y. Registrar just has to repeat creating y a few times with different random parameters x until the property discussed is fulfilled. The Adversary gets the plaintexts and with E can check that they match the leaves in the hash tree. Another technique for building a subliminal information channel for the Adversary is based on the positions of real entries in the tree. During day t, knowing the total number k of real entries for that day, Registrar uses a pseudo-random permutation generator GEN with a secret key K known only to the Adversary and Registrar:

π = GEN(K, k, t) mod L!   (1)

Then Registrar can put the real entries on the positions defined by the first k values of π. The complexity of this process is limited if we do not force the leaves to be sorted before we start building the tree (cf. Subsect. 2.2) – in such a case the attack is realistic. Another problem is that Registrar can prove to the Adversary that a certain yi corresponds to a real entry. Simply, Registrar provides the authentication data already on the day of creation, as it is unlikely that SA creates a fake entry on the same day and presents authentication data to Registrar. However, Registrar has no possibility to prove that a certain yi has been created for use by SA. One of the attacks presented requires multiple tries for the creation of values yi. However, Registrar should use an HSM (Hardware Security Module) for deriving
values yi in order to protect the secret keys. The HSM may be equipped with a counter showing how many times each procedure has been used. Then, by inspecting the counter, we might see whether any irregularities of this type have occurred (the HSM should be used exactly L times a day).
4.3 SA
The SA can be careless and reuse a real value (i.e., a leaf in the Merkle tree of some day, belonging to some real citizen). However, this may lead to a conflict – presenting two values with the same hash. Then an observer would know that at least one of the entries has been created by SA (and concerns an agent!).
5 Conclusions
We have presented a solution that can be used as a framework for a secure implementation of a state registry, fulfilling all crucial requirements that are satisfied by paper-based systems. The solution provides efficiency and undeniability of online verification of the copies from the registry, as well as the possibility of creating fake documents by security agencies, which is necessary for public security. The solution presented concerns the case when a user cannot access the registry database to search for a given entry. A user can only use the data published by Archive to authenticate digital data. However, extending our solution to different access scenarios seems to be possible with slightly different cryptographic tools.
References 1. Chen, X., Zhang, F., Tian, H., Wei, B., Kim, K.: Key-exposure free chameleon hashing and signatures based on discrete logarithm systems. Cryptology ePrint Archive, Report 2009/035 (2009) 2. Contini, S., Lenstra, A.K., Steinfeld, R.: VSH, an efficient and provable collision resistant hash function (2005) 3. Young, A., Yung, M.: Malicious cryptography: Kleptographic aspects. In: Menezes, A. (ed.) CT-RSA 2005. LNCS, vol. 3376, pp. 7–18. Springer, Heidelberg (2005) 4. Ateniese, G., Camenisch, J., Joye, M., Tsudik, G.: A practical and provably secure coalition-resistant group signature scheme. In: Bellare, M. (ed.) CRYPTO 2000. LNCS, vol. 1880, pp. 255–270. Springer, Heidelberg (2000) 5. Frankel, Y., MacKenzie, P.D., Yung, M.: Robust efficient distributed RSA-key generation. In: PODC, p. 320 (1998) 6. Young, A., Yung, M.: Kleptography: Using cryptography against cryptography. In: Fumy, W. (ed.) EUROCRYPT 1997. LNCS, vol. 1233, pp. 62–74. Springer, Heidelberg (1997) 7. Chaum, D., Antwerpen, H.V.: Undeniable signatures. In: Brassard, G. (ed.) CRYPTO 1989. LNCS, vol. 435, pp. 212–216. Springer, Heidelberg (1990) 8. Huang, X., Susilo, W., Mu, Y., Wu, W.: Universal designated verifier signature without delegatability. In: Ning, P., Qing, S., Li, N. (eds.) ICICS 2006. LNCS, vol. 4307, pp. 479–498. Springer, Heidelberg (2006)
Regions of Interest in Trajectory Data Warehouse Marcin Gorawski and Pawel Jureczek Silesian University of Technology, Institute of Computer Science, Akademicka 16, 44-100 Gliwice, Poland {Marcin.Gorawski,Pawel.Jureczek}@polsl.pl
Abstract. In the paper we present algorithms for determining regions of interest in the context of movement patterns. The developed algorithms are based on the clustering of grid cells. The grid cells are merged on the basis of information about movement flows between cells and the number of trajectories that intersected them. The proposed solutions allow one to determine rectangular regions of interest of different sizes. The size of a resulting region depends on the intensity of movement flows. To determine flows between regions, the interpolation of regions has been applied. The interpolation of regions uses a linear interpolation function, at the output of which we get the intersection points between a trajectory segment and the grid cells. This paper also briefly reviews existing approaches to constructing regions of interest. Keywords: Region of Interest, Clustering, Movement Pattern, Trajectory Data Warehouse.
1 Introduction
In recent years we have observed a growing interest in applications aimed at analyses of different patterns of moving objects. One of the simpler methods of determining spatio-temporal patterns of moving objects is an analysis of regions of interest (RoIs), which, depending on the application domain, may significantly differ from each other. The large amount of trajectory data, usually stored in the form of coordinates, as well as the need to preprocess these data sets, requires the use of a Trajectory Data Warehouse (TrDW) that allows appropriate aggregations. In the literature there are various approaches to determining regions of interest. The paper [1] presents issues on which attention should be focused when designing a TrDW. In turn, [2] presents concrete solutions, e.g., an ETL (Extract, Transform, and Load) process for trajectories. In [3] object coordinates are sent to the TrDW in an irregular and unbounded way, and trajectories are stored in a star schema. The paper [4] shows how, on the basis of trajectory points, one can determine rectangular RoIs which may have different
sizes. The authors of the paper [5] focus on determining RoIs on the basis of segments of trajectories. In [6] regions of interest are introduced on the basis of the knowledge of an expert who knows the application domain. The paper [7] extends the research presented in [6] to RoIs obtained on the basis of clustering of trajectory points. Another approach to clustering trajectories is presented in [8]. In this paper we focus on algorithms that can be used for determining regions of interest of different sizes in the context of movement patterns. We developed algorithms that allow us to take into account factors that are important in examining the movement patterns of objects. The rest of the paper is organized as follows. In Section 2 we present different approaches for calculating regions of interest in different contexts. In Section 3 we examine the performance of the algorithms proposed by us. Section 4 summarizes this paper.
2 Places of Interest
In [4] a basic algorithm for calculating regions of interest was presented. That algorithm, called PopularRegions, can be applied to applications that analyze stops. A stop is a region in which an object (e.g., a cab) spends some period of time. In the case of the PopularRegions algorithm we assume the following: (1) each region has a rectangular shape and may have a different size; (2) the size of a region cannot be less than the size of a single cell (the whole space is divided into rectangular cells that can be combined into larger areas according to their density); (3) the density of a cell is determined on the basis of the trajectory points that are contained in it; (4) the density of a cell must be greater than or equal to a minimum threshold defined by the user; (5) connected cells with similar density (greater than or equal to a user-specified minimum) are grouped together into a region. For more details please see [4]. A drawback of this algorithm is that the generated regions may be very large, which may hinder finding the causes of stops. We can minimize
Fig. 1. PopularRegions clustering algorithm with (a) and without restrictions (b)
Fig. 2. A sequence of RoIs of different size (a) and sequence of fixed-size RoIs (b)
this problem by introducing two parameters: the minimal and maximal size of a region. In Figure 1 we illustrate situations in which the additional constraints are (a) and are not (b) imposed on regions. For simplicity, we also assume that for each region the minimal (maximal) height and width are the same and equal to 3. As can be seen, these additional parameters can be used to filter out regions that are too small or too large. Figure 2 shows a sequence of regions for the algorithm with (a) and without (b) restrictions imposed on the size of regions. The precision of mapping the trajectory (see Figure 2b) can be improved by changing the minimal and maximal size of regions. Notice that the obtained regions are a form of aggregation, which reduces the amount of information that we need to store. An increase of the region size may lead to a decrease of the mapping precision and of the stored data. However, if we must investigate movement patterns, the PopularRegions algorithm is not a good solution. This is due to the fact that the PopularRegions algorithm merges cells which occur in a certain region and are appropriately dense; this is sufficient for stops, but in the case of movement patterns it is necessary to take into account additional conditions. In Figure 3 we show the differences between stops and movement patterns. Figure 3a shows two road segments on which there is intensive road traffic. The cells with the highest traffic density (i.e., that have a density greater than a fixed threshold) are marked in gray. The PopularRegions algorithm merges the cells into one large region R1 (see Figure 3b). However, in the case of movement patterns, if vehicles cannot pass from one road segment to the other, these road segments should be put into separate regions of interest. Moreover, if an object has moved along any road pictured in Figure 3b, the cells of the region R1 will also point to the other road, which has no connection with the former. Therefore, in the algorithms developed by us, we want that from any point of a road in a region one can get to the other points in this region, and that the region is crossed by as many trajectories as possible.
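For concreteness, the Python sketch below shows one possible greedy implementation of such size-bounded region growing over a density grid. It is a simplification made for illustration only: the grid is a plain dictionary, the parameter names are ours, and the flow criterion introduced next is omitted.

```python
def region_cells(top, bottom, left, right):
    return [(i, j) for i in range(top, bottom + 1) for j in range(left, right + 1)]

def avg_density(G, rect):
    cells = region_cells(*rect)
    return sum(G.get(c, 0) for c in cells) / len(cells)

def grow_regions(G, rows, cols, delta, min_reg, max_reg):
    """Greedily grow rectangular regions around dense seed cells."""
    used, regions = set(), []
    dense = sorted((c for c in G if G[c] >= delta), key=lambda c: -G[c])
    for i, j in dense:
        if (i, j) in used:
            continue
        rect = (i, i, j, j)
        while True:
            top, bottom, left, right = rect
            candidates = []
            for cand in [(top - 1, bottom, left, right), (top, bottom + 1, left, right),
                         (top, bottom, left - 1, right), (top, bottom, left, right + 1)]:
                t, b, l, r = cand
                if t < 0 or b >= rows or l < 0 or r >= cols:
                    continue                               # outside the grid
                if b - t + 1 > max_reg or r - l + 1 > max_reg:
                    continue                               # region would exceed maxReg
                if any(c in used for c in region_cells(*cand)):
                    continue                               # overlaps an earlier region
                if avg_density(G, cand) < delta:
                    continue                               # extension is not dense enough
                candidates.append((avg_density(G, cand), cand))
            if not candidates:
                break
            rect = max(candidates)[1]                      # best extension direction
        top, bottom, left, right = rect
        if bottom - top + 1 >= min_reg and right - left + 1 >= min_reg:
            used.update(region_cells(*rect))
            regions.append(rect)
    return regions
```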
Fig. 3. An example: (a) dense regions, (b) the RoI for stops, (c) RoIs for movement patterns
In contrast with the PopularRegions algorithm, in the algorithms developed by us the movement flows of objects between the current region and new cells are also taken into account while a region is being determined; note that for an edge between two cells the total flow, i.e., in both directions, is counted. In Figure 4 we show how our algorithms extend a current region R. The gray cells form the current region R and the dark gray cells are candidates. The first algorithm takes into account only the average of the flows between the current region R and the candidate cells (see Figure 4a). The second algorithm (cf. Figure 4b) takes into account the average of the total traffic from the candidate cells, but only from those cells that have movements (suitable flows) from/to the region cells (a movement must be greater than a threshold specified by the user). The presented approaches require the use of an algorithm that counts the movement flows between the edges of cells. These two approaches make the best choice at each step. In the case of the second approach we also include the possibility of extending the region in the next step of the algorithm. Both algorithms have a parameter that specifies a minimum threshold of the movement flow.
Fig. 4. Two approaches: (a) flows between adjacent cells only, (b) flows from adjacent cells also
Fig. 5. The interpolation of regions
When the grid has a high resolution and/or there are long intervals between measurements, two consecutive trajectory points may lie in regions that do not adjoin each other (see Figure 5a). To take into account base cells through which a trajectory segment passed but which contain no point of this segment, we can use the interpolation of regions (see Figure 5b). The interpolation of regions uses linear interpolation of points. The linear interpolation function is determined on the basis of two nodes, i.e., two consecutive trajectory points. The interpolated points are determined at the divisions of the regular grid (for the X and Y axes). Flows between regions are determined on the basis of the information about the points of intersection. Note that each region stores the information about the flows of objects in four directions (UP, DOWN, RIGHT, LEFT) and the number of trajectories that intersect it.
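The following Python sketch illustrates the interpolation step under simplifying assumptions (uniform square cells of side `cell`, grid origin at (0, 0)); it lists the cells crossed by a segment and accumulates directed flows between consecutive cells.

```python
def cells_crossed(p0, p1, cell):
    """Cells visited by the segment p0 -> p1, in order, via intersections with grid lines."""
    (x0, y0), (x1, y1) = p0, p1
    ts = {0.0, 1.0}
    for a0, a1 in [(x0, x1), (y0, y1)]:
        lo, hi = sorted((a0, a1))
        k = int(lo // cell) + 1
        while k * cell < hi:                      # grid lines strictly between the endpoints
            ts.add((k * cell - a0) / (a1 - a0))   # parameter of the intersection point
            k += 1
    out, seen = [], set()
    for t in sorted(ts):
        tt = min(t + 1e-9, 1.0)                   # sample just inside the cell being entered
        x, y = x0 + tt * (x1 - x0), y0 + tt * (y1 - y0)
        c = (int(x // cell), int(y // cell))
        if c not in seen:
            seen.add(c)
            out.append(c)
    return out

def count_flows(trajectory, cell, flows=None):
    flows = {} if flows is None else flows
    for p0, p1 in zip(trajectory, trajectory[1:]):
        cs = cells_crossed(p0, p1, cell)
        for a, b in zip(cs, cs[1:]):
            flows[(a, b)] = flows.get((a, b), 0) + 1   # directed flow a -> b
    return flows

print(count_flows([(5, 5), (25, 17)], cell=10))   # three cell-to-cell flows for this segment
```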
2.1 Algorithms
Because the two proposed algorithms are very similar, we will only present the second algorithm for determining the regions of interest for movement patterns. The PopularRegion2 algorithm is based on the PopularRegions algorithm from [4]. In the listing below, the maxReg and minReg parameters determine the size of the resulting regions. In lines 15–16 we check that the current region is not too large and in line 21 that it is not too small. If a resulting region is too small, the cells forming the region return to the pool and can be reused in the next steps of the algorithm. The parameter minCrossed specifies the minimum flow between r and rdir \ r (this condition is checked in line 12). It should be noted that the density of a cell is related to the flows occurring between the edges of this cell (see line 2). Therefore, in the algorithm from the listing the parameter δ is used to exclude regions that are not dense.
Procedure PopularRegion2(G, δ, minCrossed, minReg, maxReg)
Input: A grid G of densities, minimum density threshold δ, minimal region size minReg, maximal region size maxReg, minimum flow minCrossed
Output: The set R of RoIs in G.
1  for (i, j) ∈ G do used(i, j) = true;
2  R = ∅; G∗ = {(i, j) ∈ G | G(i, j) ≥ δ}
3  for (i, j) ∈ G∗ do used(i, j) = false;
4  for (i, j) ∈ G∗ in desc order of G(i, j) do
5    if ¬used(i, j) then
6      r = {(i, j)};
7      repeat
8        for dir ∈ {left, right, up, down} do
9          rdir = extend r in dir;
10       ext = {dir | rdir ⊆ G ∧
11         avg den(rdir) ≥ δ ∧
12         avg crossed(rdir) ≥ minCrossed ∧
13         ∃(h, k) ∈ (rdir \ r). G(h, k) ≥ δ ∧
14         ∀(h, k) ∈ rdir. ¬used(h, k) ∧
15         height(rdir) ≤ maxReg ∧
16         width(rdir) ≤ maxReg};
17       if ext ≠ ∅ then
18         dir = max;
19         r = rdir;
20     until ext = ∅;
21     if height(r) ≥ minReg ∧ width(r) ≥ minReg then
22       for (i, j) ∈ r do
23         used(i, j) = true;
24       R = R ∪ {r};
25 return R;
3 Experimental Results
In the experiments we examined the average number of cells used per sequence. We conducted the experiments on a computer with an Intel Core Duo E6550 2.33 GHz, 4 GB of RAM and WinXP Pro SP3 as the operating system. All algorithms were coded in Java JDK 1.6. For the experiments we used the generator of trajectory points from [9] and the road network of the city of Oldenburg. The total number of generated trajectories was 1000. In subsequent experiments we changed the size of the base cells by changing the resolution of the grid (the number of rows and columns was the same for a given resolution). The PopularRegionsExt algorithm is the PopularRegions algorithm extended with the possibility of setting a minimal and maximal size (i.e., minReg and maxReg, respectively) of the resulting regions. However, in the experiments we did not impose any constraint on the size of regions. Moreover, a region is added to the region sequence of a trajectory if a point of the trajectory falls into this region. In all the experiments the minimum density threshold δ was set to 35.
Fig. 6. The average length of resulting sequences in dependence of minCross. The map resolution set to 25.
Fig. 7. The average length of resulting sequences in dependence of minCross. The map resolution set to 50.
Fig. 8. The average length of resulting sequences in dependence of minCross. The map resolution set to 80.
Figures 6-8 show the average number of cells used per sequence when the map resolution (the map width and height were 40000 and 40000) was set to 25, 50 and 80, respectively. For a given map resolution the number of distinct base cells was constant. Moreover, the number of sequences was the same. If the map resolution
is small, after mapping the trajectories we get the most cells for the PopularRegions2 algorithm (see Figure 6). However, the trajectory mapping is better when we increase the map resolution and the minCross parameter (cf. Figures 6, 7 and 8). Note that the PopularRegions1 and PopularRegionsExt algorithms do not use the minCross parameter, therefore their corresponding curves do not change. As we can see, with the increase in the minCross parameter the average number of cells generated by the PopularRegion2 algorithm decreases. This is due to the fact that the number of resulting regions increases and the trajectories are better mapped.
4 Conclusions
In this paper we dealt with algorithms for generating regions of interest in the context of movement patterns. The presented algorithms use information about the number of trajectories intersecting a cell as well as information about flows between the cells of a regular grid. To determine flows we used the interpolation of regions. In future work we will be looking at other methods for determining regions of interest for movement patterns.
References
1. Trajectory Data Warehousing Project (2007–2010), http://lbdwww.epfl.ch/f/jobs/SNF2007.pdf (accessed July 23, 2009)
2. Marketos, G., Frentzos, E., Ntoutsi, I., Pelekis, N., Raffaeta, A.: Building Real-World Trajectory Warehouses. In: Seventh International ACM Workshop on Data Engineering for Wireless and Mobile Access, Vancouver, Canada, pp. 5–18 (2008)
3. Braz, F.J.: Trajectory Data Warehouses: Proposal of Design and Application to Exploit Data. In: IX Brazilian Symposium on Geoinformatics, pp. 61–72. INPE, Campos do Jordão (2007)
4. Giannotti, F., Nanni, M., Pinelli, F., Pedreschi, D.: Trajectory pattern mining. In: Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 330–339. ACM, San Jose (2007)
5. Cao, H., Mamoulis, N., Cheung, D.W.: Mining Frequent Spatio-Temporal Sequential Patterns. In: Proceedings of the 5th IEEE International Conference on Data Mining (ICDM 2005), pp. 82–89. IEEE Computer Society, Houston (2005)
6. Alvares, L.O., Bogorny, V., Kuijpers, B., de Macêdo, J.A.F., Moelans, B., Vaisman, A.A.: A model for enriching trajectories with semantic geographical information. In: 15th ACM International Symposium on Geographic Information Systems, p. 22. ACM, Seattle (2007)
7. Palma, A.T., Bogorny, V., Kuijpers, B., Alvares, L.O.: A clustering-based approach for discovering interesting places in trajectories. In: Proceedings of the 2008 ACM Symposium on Applied Computing (SAC), pp. 863–868. ACM, Fortaleza (2008)
8. Lee, J.-G., Han, J., Whang, K.-Y.: Trajectory clustering: a partition-and-group framework. In: Proceedings of the ACM SIGMOD International Conference on Management of Data, pp. 593–604. ACM, Beijing (2007)
9. Brinkhoff, T.: A Framework for Generating Network-Based Moving Objects. GeoInformatica, 153–180 (2002)
AFOPT-Tax: An Efficient Method for Mining Generalized Frequent Itemsets Yu Xing Mao and Bai Le Shi School of Computer Science, Fudan University, Shanghai 200433, China {myuxing,bshi}@fudan.edu.cn
Abstract. The wide existence of taxonomic structures among the attributes of database makes mining generalized association rules an important task. Determining how to utilize the characteristics of the taxonomic structures to improve performance of mining generalized association rules is challenging work. This paper proposes a new algorithm called AFOPT-tax for mining generalized association rules. It projects the transaction database to a compact structure - ascending frequency ordered prefix tree (AFOPT) with a series of optimization, which reduces the high cost of database scan and frequent itemsets generation greatly. The experiments with synthetic datasets show that our method significantly outperforms both the classic Apriori based algorithms and the current FP-Growth based algorithms. Keywords: Generalized Association Rule; Generalized Frequent Itemset; Ascending Frequency Ordered Prefix-Tree; Pruning.
1 Introduction
As one of the most important areas of data mining, Association Rule Mining (ARM) was first introduced by Agrawal et al. [1]. The data studied by traditional association rule mining form a single concept level. The study of association rules on taxonomy (is-a) concept data is called Generalized Association Rule Mining (GARM). The difference between these two types of mining can be illustrated using the market-basket retail database in Figure 1. "Bread" can be classified as "white bread" and "wheat bread", so "wheat bread" is a "bread". The purpose of GARM is to find association rules from all levels of the taxonomy: not only association rules like "80% of all customers buy bread and milk at the same time", but also association rules like "30% of all customers buy wheat bread and milk at the same time". Different from ARM, which only finds association rules from single-level data, GARM can find more interesting and varied association rules from any level of the taxonomy. For example, according to some threshold defined by the domain experts, ARM might not find any association rules between "white bread" and "low-fat milk", but with the same threshold GARM might find some association rules between "bread" and "milk" or between "bread" and "low-fat milk", which are also interesting and valuable for business decision making.
Fig. 1. An example of taxonomy in a retail database
GARM was first introduced by Srikant et al. [2]. Although a lot of effort has been made to improve mining efficiency, most of the earlier works [4,5] are based on the generate-and-test method and prune using the Apriori [1] property: if any length-k pattern is not frequent in the database, its length-(k + 1) super-patterns can never be frequent. As the number of candidate itemsets increases, these algorithms may suffer from the cost of handling a huge number of candidate sets and scanning the database repeatedly. These drawbacks are critical for GARM, because taxonomy data usually contain more items than single-level data. Recently, a partitioning-based, divide-and-conquer method, FP-growth [6], was introduced by Han et al. Inspired by this idea, Iko et al. [8] proposed the FP-tax algorithm, which projects the transaction database into a compressed FP-tree structure and finds the frequent itemsets by constructing and searching conditional FP-trees without generating candidate itemsets. The FP-tax algorithm saves a lot of traversal cost and space usage, which is beneficial for GARM. Nevertheless, it still has some drawbacks: it sorts the items in the header table and the transaction database by the descending order of frequent 1-itemsets and uses a bottom-up traversal strategy to find the frequent itemsets, which means the FP-tree cannot be reused and the conditional FP-trees have to be constructed repeatedly during each frequent itemset generation. These operations cause both additional space usage and more CPU time. To tackle this problem, in this paper we set forth a novel generalized frequent itemset mining algorithm called AFOPT-tax. First, we project the transaction database onto a new compact structure, the AFOPT-tree [9], with a top-down traversal and straightforward generation strategy, which greatly reduces the cost of rebuilding the conditional databases by reusing the original tree during the course of generating the frequent itemsets. Second, we perform the subtree merge operation after generating all the frequent itemsets prefixed with each item. Finally, the experiments on synthetic datasets show that our method significantly outperforms both the classic Apriori based algorithms and the current FP-Growth based algorithms. This paper is organized as follows: Section 2 contains problem statements and term definitions. In Section 3, the main ideas of the AFOPT-tax algorithm are proposed with an example. In Section 4, by experiments, we compare the new algorithms with the classic and current algorithms. The study is concluded in the last section.
2 Problem Statement 2.1 Term Definition
In this paper, the transaction database is noted as D = {t1, t2, …, tn}, where ti (i ∈ 1, 2, …, n) is a transaction. Each transaction includes a unique transaction ID and an item list (the items bought in Figure 1). Let I = {i1, i2, …, im} be the set of distinct items from all item lists. When all the items of an itemset X are in the transaction ti, we say that ti contains X, noted ti ⊇ X. The set of all transactions that contain the itemset X is noted as DX = {ti | X ⊆ ti, ti ∈ D}. The support of itemset X is noted as Support(X) = |DX| / |D|, where |DX| is the number of transactions containing X and |D| is the total number of transactions in D. If Support(X) ≥ σminsup, then the itemset X is called frequent; σminsup is the minimum support threshold defined by the domain experts. For example, in Figure 1, Support("white bread") = 2/5 = 40%; if σminsup = 35%, then "white bread" is frequent. The taxonomy data can be classified along different dimensions; for example, in Figure 1, the food can be classified by categories or by brands. So the taxonomy Γ can be depicted as a Directed Acyclic Graph.
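As a quick illustration of the support definition, the following Python sketch (not taken from the paper; the toy transactions only loosely mirror Figure 1, whose full item lists are not reproduced in this excerpt) computes Support(X) and compares it with σminsup:

```python
# Minimal sketch of the support definition: Support(X) = |D_X| / |D|.
# The transactions below are illustrative only.
D = [
    {"white bread", "milk"},
    {"wheat bread", "low-fat milk"},
    {"white bread", "coke"},
    {"wheat bread", "milk"},
    {"water"},
]

def support(itemset, transactions):
    """Fraction of transactions that contain every item of `itemset`."""
    covered = [t for t in transactions if itemset <= t]  # t contains X
    return len(covered) / len(transactions)

min_sup = 0.35
X = {"white bread"}
print(support(X, D), support(X, D) >= min_sup)  # 0.4 True -> "white bread" is frequent
```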
For a path v0, v1, …, vn in the taxonomy Γ, we define:
(1) vn-1 is the parent item of vn, noted parent(vn); conversely, vn is a child item of vn-1, noted child(vn-1).
(2) v0, …, vn-1 are the ancestor items of vn, noted ancestor(vn); conversely, v1, …, vn are the descendant items of v0, noted descendant(v0).
(3) For any items x, y, z, if x and y are children of z, then x and y are brother items.
(4) The depth of an item v, noted depth(v), refers to the level at which v is located in the taxonomy.
(5) If depth(vi) equals depth(vj), then vi and vj are called sibling items.
(6) An item without any children is called a leaf item.
(7) The items other than the root and the leaf items are called interim items or generalized items.
For example, in Figure 1, "Coke" is a child item of "drink", "food" is an ancestor item of "Coke", "wheat bread" is a brother item of "white bread", "low-fat milk" is a leaf item and "milk" is a generalized item.
Definition 1. Generalized Itemset and Generalized Frequent Itemset. In taxonomy Γ, an itemset G is called a Generalized Itemset if it does not include an item together with any of its ancestor items or descendant items. This is also formalized as:
{(A, B) | A, B ∈ Γ, A ∩ B = Φ, and (¬∃ g1 ∈ A, ¬∃ g2 ∈ B, ancestor(g1) = g2 or ancestor(g2) = g1)}
The support of G is defined as support(G) = |DG| / |D|. If |DG| / |D| ≥ σminsup, then G is frequent. As items and itemsets are extended in the taxonomy, we give new definitions of the set operations as follows.
Definition 2. The ∈Γ and ⊆Γ operations between generalized items and generalized itemsets.
(1) Given items i1 and i2, if i1 is an ancestor of i2, then we say i1 ∈Γ i2.
(2) Given a generalized item i1 and a generalized itemset G, if ∃ i2 ∈ G such that i1 ∈Γ i2, then we say i1 ∈Γ G.
(3) Given generalized itemsets G1 and G2, if ∀ i1 ∈ G1, ∃ i2 ∈ G2 such that i1 ∈Γ i2, then we say G1 ⊆Γ G2.
For example, in Figure 1, "Bread" ∈Γ "Wheat bread", "Bread" ∈Γ {Wheat bread, Milk}, and {Bread, Milk} ⊆Γ {Wheat bread, Whole milk}.
Definition 3. The ∪Γ and ∩Γ operations between generalized itemsets.
(1) Given generalized itemsets G1 and G2, we note G = G1 ∪Γ G2; then ∀ i1 ∈ G, i1 ∈Γ G1 or i1 ∈Γ G2.
(2) We note G = G1 ∩Γ G2; then ∀ i1 ∈ G, i1 ∈Γ G1 and i1 ∈Γ G2.
For example, in Figure 1, {White bread, Drink} ∪Γ {Wheat bread, Drink} = {White bread, Wheat bread, Drink}, and {White bread, Drink} ∩Γ {Bread, Drink} = {Bread, Drink}.
Definition 4. Ancestor–descendant relationship between generalized itemsets.
Given itemsets X1, X2 ⊆ I with |X1| = |X2| and X1 ≠ X2, we note the i-th item of itemset X as X.itemi. If ∀ i ≤ |X1|, X1.itemi meets the following condition: ∃ j ≤ |X1| such that X1.itemi = X2.itemj or X1.itemi ∈ ancestors(X2.itemj), then X1 is the ancestor itemset of X2; conversely, X2 is the descendant itemset of X1.
For example, in Figure 1, {Bread, Drink} is the ancestor itemset of {White bread, Water}, and {Bread, Drink} is the ancestor itemset of {White bread, Drink}.
Problem Statement. Given a taxonomy Γ and a user-defined minimum threshold σminsup, the problem of generalized frequent itemset mining is to find all itemsets from any level of the taxonomy whose support is larger than σminsup.
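The condition of Definition 1, that a generalized itemset may not contain an item together with one of its ancestors or descendants, can be checked directly from the parent links of the taxonomy. The sketch below is illustrative only; the parent dictionary is an assumed toy taxonomy in the spirit of Figure 1, not data from the paper.

```python
# Parent links for a toy taxonomy (assumed, not taken verbatim from the paper).
parent = {
    "white bread": "bread", "wheat bread": "bread",
    "low-fat milk": "milk", "whole milk": "milk",
    "bread": "food", "milk": "food",
}

def ancestors(item):
    """All ancestors of `item`, following parent links up to the root."""
    result = set()
    while item in parent:
        item = parent[item]
        result.add(item)
    return result

def is_generalized_itemset(itemset):
    """True if no item of the set is an ancestor of another item of the set."""
    items = list(itemset)
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a in ancestors(b) or b in ancestors(a):
                return False
    return True

print(is_generalized_itemset({"bread", "milk"}))         # True
print(is_generalized_itemset({"bread", "white bread"}))  # False: ancestor and descendant together
```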
3 The Algorithm AFOPT-Tax In the rest of this section, the main idea of the AFOPT-tax algorithm is first described, and then the algorithm is explained with an example. 3.1 Main Idea
The efficiency of the AFOPT-tax algorithm over the FP-tax algorithm lies in four aspects: Optimization I: Using the AFOPT-tree structure with a top-down traversal strategy. We use the AFOPT-tree structure with the top-down and depth-first traversal strategy, which has a lower traversal cost than the FP-tree structure with the bottom-up
traversal strategy. The AFOPT-tree structure is shown in Figure 3 (right). Each node in the AFOPT-tree includes four fields: item name, count, child link and brother link. For example, in Figure 3, node C represents item C, and its count of two represents the number of transactions that pass through node C. The child links of node C point to the child nodes D and A. The brother link points to the next node C. Optimization II: Merging the subtree for further compression of the AFOPT-tree. After finding all frequent itemsets prefixed with item X, we merge the subtree of node X before deleting it from the AFOPT-tree, which makes the AFOPT-tree more compact. The subtree merge operation means: merge the subtree of node X into the first right sibling node with the same item name, if it exists, or otherwise insert the subtree of node X directly as a child of the parent of node X. This is worthwhile because the cost caused by the merge operation is less than the cost of constructing a conditional database. Optimization III: Reusing the AFOPT-tree to avoid constructing conditional databases. In the FP-tax algorithm, a conditional FP-tree has to be constructed repeatedly for finding the frequent itemsets, and the original FP-tree cannot be reused. In the AFOPT-tax algorithm, the header-table and sub-header-tables are used to locate the item positions in the AFOPT-tree, which makes it possible to reuse the same AFOPT-tree for finding all frequent itemsets. This reduces the cost of constructing conditional databases. The structure of the header-table and the sub-header-table is shown in Figure 4 (left). They both include three fields: item name, count and node link. The header-table is noted as H and includes all the frequent 1-itemsets in ascending order. The sub-header-table of item X is noted as H_X and includes all the frequent items prefixed with item X in the AFOPT-tree. However, the count field of these two header-tables differs from the count in an AFOPT-tree node: the count of item C in the header-table is the sum of the transactions passing through all C nodes. For example, in Figure 3, there are four A nodes, so the count of item A in header-table H is four, which is the sum of the numbers of transactions passing through all A nodes. Optimization IV: Using the taxonomy for pruning. In the AFOPT-tax algorithm, the property of the taxonomy is used for pruning. When extending the itemsets of X, if Y is a descendant of X, then Y can be omitted and need not be inserted into the sub-header-table of X, because XY cannot be a generalized frequent itemset according to Definition 1.
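To make the node structure of Optimization I concrete, here is a minimal Python sketch of an AFOPT-tree node (an illustration only; the authors' implementation is in Java and is not reproduced in the paper):

```python
class AfoptNode:
    """One node of the AFOPT-tree: item name, count, child link, brother link."""
    def __init__(self, item, parent=None):
        self.item = item        # item name (e.g., "C")
        self.count = 0          # number of transactions passing through this node
        self.children = []      # child links
        self.brother = None     # link to the next node carrying the same item
        self.parent = parent

    def child_with_item(self, item):
        """Return the existing child node for `item`, if any."""
        for child in self.children:
            if child.item == item:
                return child
        return None
```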
3.2 An Example We use the following example to illustrate the idea of the AFOPT-tax algorithm. The example taxonomy data and transaction database are shown in Figure 2, and we assume that the minimum support σminsup is 60%, which means an itemset must be contained in at least three transactions.
Fig. 2. A taxonomy example
Step 1: Find all frequent 1-itemsets. We scan the transaction database for the first time and insert the generalized items of each item into the item list of the transaction database. Then we count the support of all 1-itemsets, including all generalized items. All the 1-itemsets except item E are frequent, because the supports of items A, B, C, D, E, X, Y and Z are 4, 3, 3, 3, 2, 5, 5 and 3 respectively, and all of them except E reach the minimum count of three transactions implied by σminsup. Step 2: Sort the transaction database by the ascending order of frequent 1-itemsets. The ascending order of frequent 1-itemsets is (B:3), (C:3), (D:3), (Z:3), (A:4), (X:5) and (Y:5). During the first scan of the transaction database, the items in each transaction are sorted following this ascending order and the items that are not frequent are pruned. In this example, item E is pruned from transactions 003 and 004 because it is not frequent. The transaction database after sorting and pruning is shown in Figure 3 (left).
Step 3: Construct the AFOPT-tree and header-table by scanning the transaction database. According to Optimizations I and II, we start to construct the AFOPT-tree and the header-table. The AFOPT-tree initially has null as its root (in a real application, the root is the top concept of the taxonomy). Then we scan the first transaction (TID = 001), select BAXY from its item list, insert it into the AFOPT-tree and set the counts (B:1), (A:1), (X:1) and (Y:1). Next we scan the second transaction, BCDZXY. Because the first item B already exists in the AFOPT-tree, we share node B and increase its count to two. The next item C of the second transaction does not exist among the child nodes of node B, so we insert node C as a child of node B. The same steps are applied to the remaining items D, Z, X and Y of the second transaction. Following the same rules, we scan the third, fourth and fifth transactions of the transaction database, check them and insert them into the AFOPT-tree. While the AFOPT-tree is being built, the count and node-link information of the header-table is updated at the same time. At this point, the whole transaction database has been projected into a compact tree structure whose count information is enough to check frequency. The complete AFOPT-tree is shown in Figure 3 (right).
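Steps 1–3 can be sketched as follows, reusing the AfoptNode class from the previous snippet. This is a simplified illustration under stated assumptions (a parent map encodes the taxonomy; the toy database only loosely resembles Figure 2; ties in the ascending frequency order are broken arbitrarily), not the paper's code.

```python
from collections import Counter

def extend(transaction, parent):
    """Step 1: add the generalized (ancestor) items of every item in a transaction."""
    extended = set(transaction)
    for item in transaction:
        node = item
        while node in parent:              # climb the taxonomy via parent links
            node = parent[node]
            extended.add(node)
    return extended

def build_afopt(db, parent, min_count):
    """Steps 2-3: keep frequent items, order them by ascending support,
    and insert every extended transaction into the AFOPT-tree."""
    extended_db = [extend(t, parent) for t in db]
    counts = Counter(i for t in extended_db for i in t)
    order = {i: r for r, i in enumerate(sorted(
        (i for i, c in counts.items() if c >= min_count), key=lambda i: counts[i]))}

    root = AfoptNode("null")
    header = {}                            # header-table: item -> node links
    for t in extended_db:
        items = sorted((i for i in t if i in order), key=order.get)
        node = root
        for item in items:
            child = node.child_with_item(item)
            if child is None:              # no shared prefix: insert a new node
                child = AfoptNode(item, parent=node)
                node.children.append(child)
                header.setdefault(item, []).append(child)
            child.count += 1               # shared prefix: accumulate the count
            node = child
    return root, header, counts

# Assumed toy taxonomy and database, not the actual Figure 2 data.
taxonomy = {"A": "X", "B": "X", "C": "Y", "D": "Y", "E": "Z"}
db = [{"B", "A"}, {"B", "C", "D"}, {"C", "A", "E"}, {"D", "E"}, {"C", "B", "A"}]
root, header, counts = build_afopt(db, taxonomy, min_count=3)
```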
Fig. 3. The database after sorting (left) and the AFOPT-tree (right)
Step 4: Using the top-down and depth-first traversal strategy, find all frequent itemsets prefixed with the first item B in the header-table. After constructing the AFOPT-tree, we start the work of finding all the frequent itemsets. The whole work scans the AFOPT-tree rather than the original transaction database. Using the top-down and depth-first strategy, the first item in the header-table is item B. How do we generate all the frequent itemsets prefixed with item B? We traverse the paths from item B to the leaf items and construct the sub-header-table of item B according to Optimization III. In Figure 4, all the candidate extensions of item B are C, D, Z, A, X and Y. Only items X and Y are frequent, because their supports are larger than σminsup. However, both X and Y are still pruned, since they are parent items of item B, according to Optimization IV. Because there are no further extension items for the prefix B, the process of finding the frequent itemsets prefixed with item B is completed.
Fig. 4. The AFOPT-tree after merging the subtrees and pruning node B
Step 5: Prune node B after merging its subtrees. Because the work of finding the frequent itemsets prefixed with item B is completed, item B can be deleted from the AFOPT-tree. The deletion is done logically by setting
a label instead of a physical deletion. Before the deletion, we have to merge the subtrees of item B according to Optimization II. How is the merge operation done? We check the first subtree AXY of node B and insert AXY as a subtree of the root, because no other node A exists there. Then we check the second subtree CDZXY of node B and merge node C with the first right node C, which is a brother node of node B, accumulating the count of node C to three. By the same rule, nodes D and Z can be merged too, but nodes X and Y cannot. For the last subtree CAXY of node B, only node C can be merged; the other nodes A, X and Y are inserted separately. The AFOPT-tree after merging and pruning, which is usually smaller than the original AFOPT-tree, is shown in Figure 4. Step 6: Find the frequent itemsets prefixed with the remaining items C, D, Z, A, X and Y in the header-table. After finding all frequent itemsets prefixed with item B, we check the items C, D, Z, A, X and Y in the header-table. For each item, Steps 4 and 5 are run to find all the frequent itemsets prefixed with that item. The final twelve frequent itemsets are {Y, X, A, Z, D, C, B, XC, YD, XD, YZ, XZ}.
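Returning to the merge operation of Step 5, the following rough sketch (again assuming the AfoptNode class above; a full implementation would also merge recursively at deeper levels and maintain the header-table node links, which is omitted here) shows the idea of pushing the subtrees of a pruned node up to its parent and sharing nodes that carry the same item:

```python
def merge_and_prune(node):
    """Merge the subtrees of `node` into siblings with the same item (or attach them
    to the parent) and logically delete `node`, as described in Step 5."""
    parent = node.parent
    for subtree_root in list(node.children):
        target = parent.child_with_item(subtree_root.item)
        if target is not None:
            target.count += subtree_root.count        # share the node, accumulate counts
            for grandchild in subtree_root.children:  # shallow merge; a full version recurses
                grandchild.parent = target
                target.children.append(grandchild)
        else:
            subtree_root.parent = parent              # no sibling with this item: re-attach
            parent.children.append(subtree_root)
    node.children = []
    node.deleted = True                               # logical deletion via a label
```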
4 Performance Study To evaluate the efficiency of the AFOPT-tax algorithm, we compare the performance of AFOPT-tax with the classic Cumulate [2] algorithm, the SET [5] algorithm and the current TD_FP-tax [8] algorithm. All the experiments are conducted on an HP DL380 G3 server with a Duo/Xeon 2.8 GHz CPU and 2 GB memory running Windows Server 2003. All the algorithms are implemented in Java. We conducted a set of experiments on synthetic datasets generated by the IBM Quest Synthetic Data Generation Code [1]. The datasets are named DxTxRxIx; the parameters, their explanations and their default values are listed in Figure 5.
Fig. 5. Parameters and default values
The performance under different minimum supports: In this experiment, we choose two synthetic datasets, D20kT6I1 and D50kT4I10, and several different minimum supports: 2, 1.5, 1, 0.5, 0.25 and 0.1. As shown in Figure 6, when the minimum support is high, the performance of all the algorithms is very close. The optimizations of AFOPT-tax do not pay off much because the number of generalized frequent itemsets is small. When the minimum support is lowered, AFOPT-tax becomes increasingly more efficient than its opponents. When the minimum support is 0.1, the Cumulate algorithm is aborted because there is not enough memory, and the AFOPT-tax algorithm is nearly three times faster than the SET algorithm and one time faster than the TD_FP-tax algorithm. This shows that the AFOPT-tax algorithm is more suitable for large-scale databases with many potential frequent itemsets. The performance of scalability: In this experiment, we compare the scalability of all the algorithms in three cases: different transaction numbers of the database, different lengths of the potential frequent itemsets, and different transaction lengths of the database. In the first case, we use five databases with different transaction numbers: 10K, 15K, 20K, 30K and 45K. As shown in Figure 7 (left), when the number of transactions increases, AFOPT-tax becomes more and more efficient than its opponents. When the number of transactions reaches 45K, the AFOPT-tax algorithm is nearly three times faster than the TD_FP-tax algorithm. In the second case, we use five databases with different lengths of the potential frequent itemsets: 1, 3, 5, 8 and 10. As shown in Figure 7 (right), AFOPT-tax is more efficient than the other algorithms. In the third case, we use databases with different transaction lengths: 2, 4, 6 and 8. As shown in Figure 8, when the length of the transactions increases, AFOPT-tax becomes more and more efficient than its opponents. When the transaction length is eight, the AFOPT-tax algorithm is nearly three times faster than the Cumulate algorithm and one time faster than the SET and TD_FP-tax algorithms. Besides the running time, we also find that AFOPT-tax has good scalability, because its curve is more stable than those of the other algorithms in all three cases.
Fig. 6. Supports on D20kT6I1 and D50kT4I10
Fig. 7. Different numbers and lengths of frequent itemsets
Fig. 8. Different lengths of transactions
5 Conclusions and Future Work After analyzing the limitations of current algorithms, this paper proposed a new method called AFOPT-tax, which greatly improves the efficiency of generalized frequent itemset mining. First, we use the AFOPT-tree to replace the FP-tree, with a top-down and depth-first search strategy, to reduce the traversal cost. Next, merging and pruning operations are performed to further compress the AFOPT-tree. Finally, we use a straightforward method to generate the frequent itemsets by reusing the AFOPT-tree, which avoids the overhead of reconstructing the conditional databases. The experiments on synthetic datasets show that our algorithm greatly outperforms both the classic and the current algorithms. In future work, we plan to extend the AFOPT-tree structure to other data mining areas, such as sequential pattern mining over taxonomies.
References 1. Agrawal, R., Srikant, R.: Fast Algorithms for mining association rules. In: Proceedings of the 20th VLDB Conference, Santiago, Chile (September 1994); Expanded version available as IBM Research Report RJ 9839 (June 1994)
2. Srikant, R., Agrawal, R.: Mining generalized association rules. In: Proceedings of the 21st VLDB Conference, Zurich, Switzerland (1995) 3. Zaki, M.J.: Scalable algorithms for association mining. IEEE Transactions of Knowledge and Data Engineering 12(3), 372–373 (2000) 4. Hipp, J., Myka, A., Wirth, R., Guntzer, U.: A new algorithm for faster mining of generalized association rules. In: Proceedings of the 2nd European symposium on principles of data mining and knowledge discovery, Nantes, France, pp. 74–82 (1998) 5. Sriphaew, K., Theeramunkong, T.: A New Method for Finding Generalized Frequent Itemsets in Generalized Association Rule Mining. In: Proceeding of the 7th International Symposium on Computers and Communications, Taormina, Giardini Naxos, Italy, pp. 1040–1045 (2002) 6. Han, J., Pei, J., Yin, Y.: Mining frequent patterns without candidate generation. In: Proceedings of the SIGMOD Conference, Dallas, Texas, USA, pp. 1–12 (2000) 7. Wang, K., Tang, L., Han, J., Liu, J.: Top down FP-Growth for association rule mining. In: Proceedings of the 6th Pacific Area Conference on Knowledge Discovery and Data Mining, pp. 334–340 (2002) 8. Pramudiono, I., Kitsuregawa, M.: FP-tax: Tree Structure Based Generalized Association Rule Mining. In: Proceedings of the 9th SIGMOD workshop on Research issues in data mining and knowledge discovery, pp. 60–63 (2004) 9. Liu, G.M., Lu, H.J., Lou, W.W., Xu, Y.B., Yu, J.X.: Efficient Mining of Frequent Patterns Using Ascending Frequency Ordered Prefix-Tree. DMKD 9, 249–274 (2004)
A Novel Method to Find Appropriate ε for DBSCAN
Jamshid Esmaelnejad, Jafar Habibi, and Soheil Hassas Yeganeh
Computer Engineering Department, Sharif University of Technology, Tehran, Iran
[email protected], [email protected], [email protected]
Abstract. Clustering is one of the most useful methods of data mining, in which a set of real or abstract objects is categorized into clusters. The DBSCAN clustering method, one of the most famous density-based clustering methods, categorizes points in dense areas into the same clusters. In DBSCAN a point is said to be dense if the ε-radius circular area around it contains at least MinPts points. To find such dense areas, region queries are fired. Two points are defined as density connected if the distance between them is less than ε and at least one of them is dense. Finally, density connected parts of the data set are extracted as clusters. The significant issue of such a method is that its parameters (ε and MinPts) are very hard for a user to guess. So, it is better to remove them or to replace them with other parameters that are simpler to estimate. In this paper, we have focused on the DBSCAN algorithm and tried to remove the ε and replace it with another parameter named ρ (the noise ratio of the data set). Using this method does not reduce the number of parameters, but the ρ parameter is usually much simpler to set than ε. In some applications the user even knows the noise ratio of the data set in advance. Being a relative (not absolute) measure is another advantage of ρ over ε. We have also proposed a novel visualization technique that may help users to set the ε value interactively. Experimental results are also presented to show that our algorithm gets almost similar results to the original DBSCAN with ε set to an appropriate value. Keywords: Data Mining, Clustering, Density Based Clustering, Parameter Estimation.
1 Introduction
Clustering is the process of categorizing a set of data into groups of similar objects. A wide range of algorithms have been proposed for clustering. One of the well known clustering algorithms is DBSCAN [1], which categorizes objects according to a connectivity mechanism based on the density around a data point. The main idea of DBSCAN is that any two points are neighbors if and only if they are similar enough and at least the area around one of them is dense. Any connected component of data points forms a cluster, and any point that
does not have any dense neighbor is marked as noise. So, according to the given definition, DBSCAN needs two parameters to extract neighborhood relations: first, the threshold of distance, ε: any two points with distance less than ε are considered neighbors; and second, the threshold of density, MinPts: any point with more than MinPts neighbors is marked as dense. In other words, if the distance between two points is less than ε and at least one of them has MinPts points around it, they are neighbors. The problem. The main advantage of DBSCAN is its rare capability of finding clusters with either concave or convex shapes. But the problem with DBSCAN is that the quality of the clustering results strongly depends on the values of its parameters. Setting appropriate values for these parameters is hard and time consuming, and may not be feasible without prior in-depth knowledge about the data set. Removing each of those parameters, or replacing them with another parameter which is simpler to estimate, will increase the usability of the algorithm. Our contribution. We have proposed a heuristic method to find an appropriate value of ε to get better results from DBSCAN. In our method, ε is estimated based on ρ, the noise ratio of the data set, and MinPts, the other parameter of DBSCAN. Using the noise ratio has three advantages over using ε: 1. The noise ratio is much easier than ε for the user to estimate. 2. The noise ratio does not depend on the distance function used in the clustering. 3. In some applications the user knows or can easily compute the noise ratio of his data set. Drawbacks. There are two main drawbacks in our method. The first is that, although the noise ratio is an easily identifiable and understandable parameter, we could not absolutely omit epsilon, because we only replaced it with the noise ratio. The second is that this method is highly dependent on the value of MinPts, which is itself a parameter.
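To make the roles of the two parameters tangible, the snippet below runs DBSCAN on random points using scikit-learn (an external library, not the implementation discussed in this paper); eps corresponds to ε and min_samples to MinPts, and the data is purely illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# 200 random 2-D points; illustrative data only, not one of the paper's data sets.
rng = np.random.default_rng(0)
points = rng.random((200, 2))

# eps plays the role of the distance threshold (epsilon), min_samples of MinPts.
labels = DBSCAN(eps=0.05, min_samples=5).fit_predict(points)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)   # label -1 marks noise
print("clusters:", n_clusters, "noise points:", int((labels == -1).sum()))
```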
2 Related Works
This paper mainly focuses on parameter estimation for DBSCAN. As mentioned before, DBSCAN is one of the well known density based clustering algorithms and forms a basis for many other clustering algorithms.
2.1 Density Based Clustering
There is a wide variety of density based clustering algorithms, but the main idea of these algorithms is to find the clusters based on the density of regions in a data set. The density is the ratio of the number of data points in a region to the volume of that region. The final clusters of such clustering techniques will be continuously dense regions of the data set. Using
this heuristic makes density based clustering methods capable of finding clusters with non-convex shapes. On the other hand, any other clustering method that does not use this mechanism is only capable of finding convex shaped clusters (e.g., K-Means [2], C-Means [3], or AGNES [4]). In the following, we give a brief description of DBSCAN [1], which is a well known example of density based clustering methods. A detailed description of DBSCAN and other density based clustering methods like OPTICS [5] and DENCLUE [6], and a comparison between them, can be found in the extended version of this paper. DBSCAN [1], the main focus of this paper, defines the density of the region around a data point as the number of points in a neighborhood Nε (a circle with a constant radius named ε). To be tolerant to noise, the region is marked as dense if this number is greater than a threshold, MinPts. Then connected dense regions are called clusters. The time complexity of this method is O(n²), which can be reduced to O(n lg n) if a spatial index is utilized in the algorithm. Using DBSCAN, two parameters, ε and MinPts, must be set at the beginning. There are also many other density based algorithms like DBRS [7,8], ADBSCAN [9], and Circlusters [10,11], which are all extensions of DBSCAN. The main focus of this paper is to remove the ε parameter from DBSCAN. So, in the following section we describe the research on parameter estimation for this algorithm in detail.
2.2 Parameter Estimation
There has been much research on density based clustering, but only one study addresses parameter estimation in density based clustering: the AEC algorithm (Automatic Eps Calculation) [12,13]. The AEC algorithm is an iterative algorithm in which, at each iteration, a set of points is selected randomly. Then it calculates three coefficients: the distance between the points, the number of points located in a stripe between the points, and the density of the stripe. Then the algorithm chooses the best possible result, which is the minimal distance between clusters. The calculated result has an influence on the sets of points created in the next iteration [12]. The AEC algorithm is only capable of estimating ε in simple data sets with a small noise ratio, while noisy data sets are common in practice. Also, it needs some other parameters to estimate the ε (e.g., the width of the stripe) that are not easier to estimate than the ε. And finally, the time complexity of the AEC algorithm is much higher than the time complexity of DBSCAN in practice.
3 Estimating the ε
To estimate the value of ε, we have to answer an important question: what is the optimum value of the ε? A simple naive answer may be that the more DBSCAN succeeds, the more satisfying the ε is. But when does DBSCAN carry out its task well? In figure 1, the dense central points (the red ones) are in a cluster, say C, and the points around them (the blue ones) are noises around C. Now if one can
set ε for this data set to a value so that it holds the following properties, then DBSCAN will put all the red points exclusively in a single cluster, just as one may expect from a good clustering algorithm:
Fig. 1. An appropriate ε to cover all and only red points
– As shown in figure 1, there exists a closed line like B, such that it separates C's points into two subsets IN and OUT. The maximum perpendicular distance of points in OUT from B is equal to ε. In fact, there is a stripe around C with ε as its width.
– Running DBSCAN with this MinPts and ε, all the points in IN are marked as core points.
– For any two points p, q ∈ IN, p is density-reachable from q and vice versa.
As an explanation, the second and third properties are needed because DBSCAN should put all the red points of figure 1 in the same cluster. As you see in figure 2(a) and figure 2(b), if the stripe's width is less than ε, then DBSCAN may include some noises in C, and if it is greater than ε, DBSCAN may lose some boundary points of C or even split C into two or more clusters.
(a) Using a narrow stripe
(b) Using a wide stripe
Fig. 2. The stripe made by B should not be too narrow or too wide
Now we have introduced the properties of the special ε which we are trying to estimate. If we run DBSCAN with an appropriate ε value, all the inner-cluster points will be marked as core points. Suppose that E is such a value; we will propose a method to find an ε which has the properties given in the previous part.
3.1 Simplifying Assumptions
We have made an assumption about the data set for simplification. Suppose that S is an area of the problem space in which there are no borders of any cluster. We have assumed that the data set has a uniform distribution inside the clusters. In other words, the assumption is that the following value does not change as S moves around in the data set space:
ξ = vol(S) / num(S)
in which vol(S) means the area of S in two-dimensional space (and the volume in three or more dimensions), and num(S) is the number of points inside S. This assumption may seem far from reality, but the results generated using it are very remarkable. Now let us get back to our main subject. The goal is to find a value like F which is smaller than E; if we assign it as ε, DBSCAN will mark all the points of the clusters as core points except for a stripe around each cluster with width F. It is not hard to conclude the following equality, for which a complete proof can be found in the extended version of this paper:
F = (1/2)^(1/k) · E    (1)
in which k is the number of dimensions of the data set and C(k) is the coefficient used in the computation formula of vol(S).
3.2 A Useful Visualization for Finding the ε Value
It is really hard to decide what the optimum value of ε is in a data set. Some visualization techniques can be employed to make the decision easier. In this section, we introduce a chart which helps the user to decide on the appropriate ε value. The larger the ε is, the more core points we will have. So, for each point in the data set and a specified MinPts value, there exists a minimum ε value that makes the point a core point. We call such a value MinDist, and it is formally defined in Definition 1.
Definition 1 (MinDist). For any point p in the data set, MinDist is defined as follows:
MinDist(p, MinPts) = min {ε ∈ R | ε > 0 and |Nε(p)| ≥ MinPts}
To sketch a histogram based on the MinDist of the points, we used rounding. Suppose that N(m, k) is the number of points that have [MinDist]k = m, in which [x]k means rounding the number x to k digits of precision. Figure 3 shows the N function for data set 8.8, borrowed from a data set collection named Chameleon [14], with MinPts = 6.
Fig. 3. The proposed visualization for data set 8.8 with MinPts = 6
The horizontal axis corresponds to the different values of m for which there exists at least one point p in the data set with MinDist(p, 3) = m. The vertical axis corresponds to the N value related to a particular m. It is important to note that this continuous chart is created by connecting the consecutive points of the N function. There are some important notes about this figure:
– The area under the chart equals the size of the data set. Note that this area is, in fact, the sum over all X of the number of points with [MinDist(p)]3 = X. It can also be concluded that the hatched area in figure 3 is equal to the number of points with [MinDist(p)]3 ≤ E.
– It is obvious that the value of ε can be chosen from [0, +∞], but too small or too large ε's are not useful. So, the interval in which we decide on ε is restricted to [Mmin, Mmax], where Mmin = min {MinDist(p) | p ∈ DB} and Mmax = max {MinDist(p) | p ∈ DB}. An explanation of the fact that we do not lose any useful ε is given in the extended version, and there are also some other important notes about this chart in the extended version of the paper.
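In code, MinDist(p, MinPts) is just the distance from p to the point at which its neighborhood reaches MinPts members. The sketch below computes it with a naive all-pairs distance matrix and builds the rounded histogram N(m, k); it is an illustration of Definition 1 under the assumption that the neighborhood of p counts p itself, and it is not the authors' implementation.

```python
from collections import Counter
import numpy as np

def min_dist(points, min_pts):
    """MinDist(p, MinPts): smallest radius whose neighborhood of p holds >= MinPts points.
    The neighborhood is assumed to include p itself (a convention, not stated here)."""
    dists = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    dists.sort(axis=1)              # row i: distances from point i, ascending (0 comes first)
    return dists[:, min_pts - 1]    # MinPts-th smallest distance for each point

def histogram_N(points, min_pts, k=3):
    """N(m, k): how many points have MinDist rounded to k digits equal to m."""
    return Counter(np.round(min_dist(points, min_pts), k))

rng = np.random.default_rng(0)
pts = rng.random((500, 2))          # illustrative random data
hist = histogram_N(pts, min_pts=6)
```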
3.3 Using the Chart to Set ε
Now, the problem is to find the proper ε value, or in other words, to find the place of ε on the x-axis. In this section we present the main heuristic to set the value of ε. Too small ε values will mark some non-noise points as noise, and too large ε values will mark some noise points as cores. Choosing an ε value in the area shown by the disk in figure 4(a) seems a good heuristic. As you see, this is the place where the absolute value of the slope of the chart, and therefore the density of the area around the points, is decreasing very quickly. As shown in figure 4(b), stretching the chart will cause changes in its slope. So this simple geometric heuristic, or any other simple heuristic dependent on the slope of the chart, will not help in all cases. Generally, we could not get any satisfying result from geometrical approaches. Now we present a novel heuristic idea which has produced very satisfying results. Suppose that we have the noise ratio of our data set, ρ (0 ≤ ρ ≤ 1). Then we just have to set the ε value to somewhere on the x-axis of the chart so that:
ρ = (the area under the chart to the right of x) / (the whole area under the chart)
Fig. 4. Heuristics based on the curve of the chart will not work: (a) the appropriate ε should be somewhere near or in the disk; (b) the angles change in the stretched version of the chart
Then ρ × n is (approximately) the number of noise points, where n is the size of the data set. Now, if we assume that MinDist(p) ≥ MinDist(q) for any pair of points (p, q) such that p is noise and q is not, the points to the right of the chosen x value are exactly the noise points. And finally, from equation (1) we have ε = √(1/2) · x for a two-dimensional data set. So, we can estimate the ε value from the noise ratio ρ.
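Putting the pieces together, the estimation amounts to: sort the MinDist values, find the x at which a ρ fraction of the points lies to the right, and scale by the factor of equation (1). The sketch below reuses min_dist and pts from the previous snippet and the DBSCAN import from the earlier one; treating the (1 − ρ) quantile of the MinDist values as x follows the noise-count argument above and is an illustrative reading, not the authors' code.

```python
import numpy as np

def estimate_eps(points, min_pts, rho):
    """Estimate epsilon from the noise ratio rho: place x so that a rho fraction of the
    MinDist values lies to its right, then scale by (1/2)**(1/k) as in equation (1)."""
    md = np.sort(min_dist(points, min_pts))                 # ascending MinDist values
    x = md[int(np.ceil((1.0 - rho) * len(md))) - 1]         # (1 - rho) quantile of MinDist
    k = points.shape[1]                                     # data set dimensionality
    return (0.5 ** (1.0 / k)) * x

eps = estimate_eps(pts, min_pts=6, rho=0.10)
labels = DBSCAN(eps=eps, min_samples=6).fit_predict(pts)    # run DBSCAN with the estimated eps
```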
4 Experimental Results
Using the heuristic described in the previous section, we can use ρ instead of ε. In the following, we have run DBSCAN on three different data sets, and for each data set twice, with different MinPts values. All three data sets come from a data set collection named Chameleon [14]. We present three figures for each execution of DBSCAN. In each set, the left figure shows the introduced chart, and the vertical line marks the generated ε. The center figure is the clustering result of the DBSCAN execution, each color representing a cluster. The right figure shows the same data set with the noise points eliminated, to demonstrate the noise removal capability of our method. Figure 5 shows the results for a data set named 7.10, setting MinPts to 12 and 30. As you see, while we have a convincing result in figure 5(b), choosing a high MinPts value (30) results in figure 5(f), which has some eliminated points that are not noise. Note that MinPts is also a very influential parameter for DBSCAN. Figure 6 is related to a data set named 4.8, with MinPts values of 10 and 15. As you see, DBSCAN has determined all the clusters perfectly. But in figure 6(b), in addition to the main clusters, you can find three new small clusters
Fig. 5. Results for data set 7.10 with MinPts = 12 and 30, ρ = 0.10, and the resulting ε = 0.0162 and 0.0233
Fig. 6. Results for data set 4.8 with MinPts = 10 and 15, ρ = 0.10, and the resulting ε = 0.0176 and 0.0219
(a yellow, a green and a red one, labeled with letters A, B and C respectively). In figure 6(c) there are some eliminated points that are not noise. This time the problem is due to the smallness of the MinPts parameter because, as you see in figure 6(e), the problem is solved there. Finally, figure 7 shows the results for data set 8.8 with MinPts values of 6 and 10. There are two huge clusters lying in the lower left side of the data set in figure 7(b), labeled with letters A and B. The points at the top of these two clusters are not dense enough to call them a cluster, yet not so sparse that we call them noise. This problem (having clusters with different densities in the data set) is in fact a basic problem of DBSCAN. Actually, we could not find any better result for this data set than figure 7(e).
Fig. 7. Results for data set 8.8 with MinPts = 6 and 10, ρ = 0.05, and the resulting ε = 0.0160 and 0.205 respectively
5 Conclusions and Future Works
DBSCAN, the most famous density based clustering method, requires two parameters, ε and MinPts, to be set for clustering. Finding an appropriate ε value for a data set is a tedious and time consuming task. In this paper, we have proposed an ε estimation method based on the noise ratio of the data set, ρ. Using the noise ratio is much simpler than setting ε directly because it is relative, more likely to be known in advance, and also easier to estimate. The main idea of the estimation heuristic is based on a visualization technique that can also be used to set the ε value interactively. As shown in the experimental results, our method is capable of estimating a proper ε value for real and complex data sets. It has also been shown that the algorithm has a strong density based noise removal capability, which can be used to cleanse the data for future usage. There are some issues that can be addressed as future works. First of all, it is important to propose a method to estimate the optimum value of MinPts in a data set. To estimate MinPts, the first step is to accurately define a satisfying clustering; the next step, then, may be testing all MinPts values and choosing the MinPts value which results in the best clustering.
References 1. Ester, M., Kriegel, H.P., Sander, J., Xu, X.: A density-based algorithm for discovering clusters in large spatial databases with noise. In: Simoudis, E., Han, J., Fayyad, U. (eds.) Second International Conference on Knowledge Discovery and Data Mining, Portland, Oregon, pp. 226–231. AAAI Press, Menlo Park (1996) 2. MacQueen, J.B.: Some methods for classification and analysis of multivariate observations. In: Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, pp. 281–297 (1967)
3. Dunn, J.C.: A fuzzy relative of the isodata process and its use in detecting compact well-separated clusters. Journal of Cybernetics and Systems 3(3), 32–57 (1973) 4. Han, J., Kamber, M.: Data Mining: Concepts and Techniques, 2nd edn. The Morgan Kaufmann Series in DataManagement Systems. Morgan Kaufmann, San Francisco (2006) 5. Ankerst, M., Breunig, M.M., Kriegel, H.P., Sander, J.: Optics: ordering points to identify the clustering structure. In: Proceedings of 1999 ACM International Conference on Management of Data (SIGMOD 1999), vol. 28, pp. 49–60. ACM, New York (1999) 6. Hinneburg, A., Keim, D.A.: An efficient approach to clustering in large multimedia databases with noise. In: Knowledge Discovery and Data Mining, pp. 58–65 (1998) 7. Wang, X., Hamilton, H.J.: Dbrs: A density-based spatial clustering method with random sampling. In: Proceedings of the 7th PAKDD, Seoul, Korea, pp. 563–575 (2003) 8. Wang, X., Rostoker, C., Hamilton, H.J.: Density-Based Spatial Clustering in the Presence of Obstacles and Facilitators. In: Boulicaut, J.-F., Esposito, F., Giannotti, F., Pedreschi, D. (eds.) PKDD 2004. LNCS (LNAI), vol. 3202, pp. 446–458. Springer, Heidelberg (2004) 9. Yeganeh, S.H., Habibi, J., Abolhassani, H., Tehrani, M.A., Esmaelnezhad, J.: An approximation algorithm for finding skeletal points for density based clustering approaches. In: Proceedings of the IEEE Symposium on Computational Intelligence and Data Mining, CIDM 2009, part of the IEEE Symposium Series on Computational Intelligence 2009, March 2009, pp. 403–410. IEEE, Los Alamitos (2009) 10. Yeganeh, S.H., Habibi, J., Abolhassani, H., Shirali-Shahreza, S.: A novel clustering algorithm based on circlusters to find arbitrary shaped clusters. In: International Conference on Computer and Electrical Engineering, pp. 619–624. IEEE Computer Society, Los Alamitos (2008) 11. Shirali-Shahreza, S., Hassas-Yeganeh, S., Abolhassani, H., Habibi, J.: Circluster: Storing cluster shapes for clustering. To appear in the Proceedings of the 4th IEEE International Conference on Intelligent Systems, Varna, Bulgaria (September 2008) 12. Gorawski, M., Malczok, R.: AEC Algorithm: A Heuristic Approach to Calculating Density-Based Clustering Eps Parameter. In: Yakhno, T., Neuhold, E.J. (eds.) ADVIS 2006. LNCS, vol. 4243, pp. 90–99. Springer, Heidelberg (2006) 13. Gorawski, M., Malczok, R.: Towards Automatic Eps Calculation in Density-Based Clustering. In: Manolopoulos, Y., Pokorn´ y, J., Sellis, T.K. (eds.) ADBIS 2006. LNCS, vol. 4152, pp. 313–328. Springer, Heidelberg (2006) 14. Karypis, G.: Chameleon data set (2008), http://glaros.dtc.umn.edu/gkhome/cluto/cluto/download
Towards Semantic Preprocessing for Mining Sensor Streams from Heterogeneous Environments
Jason J. Jung
Knowledge Engineering Laboratory, Department of Computer Engineering, Yeungnam University, Gyeongsan, Republic of Korea 712-749
j2jung@{gmail.com, intelligent.pe.kr}
Abstract. Many studies have tried to employ data mining methods to discover useful patterns and knowledge from data streams on sensor networks. However, it is difficult to apply such data mining methods to sensor streams intermixed from heterogeneous sensor networks. In this paper, to improve the performance of conventional data mining methods, we propose an ontology-based data preprocessing scheme, which is composed of two main phases: i) semantic annotation and ii) session identification. The ontology can provide and describe the semantics of the data measured by each sensor. Thus, by comparing the semantics, we can find out not only the relationships between sensor streams but also the temporal dynamics of a data stream. To evaluate the proposed method, we have collected sensor streams from sensor networks in our building for 30 days. Using two well-known data mining methods (i.e., co-occurrence patterns and sequential patterns), the results from raw sensor streams and those from sensor streams with preprocessing were compared with respect to two measurements, recall and precision. Keywords: Ontology; Semantic sensor networks; Data streams; Preprocessing; Stream mining.
1 Introduction Since Mark Weiser first coined the concept of "ubiquitous computing", various types of sensors have been developed and deployed in sensor networks, which are capable of capturing environmental data for monitoring a certain space. For example, a wireless sensor network can be installed in a forest [1]. People can figure out the relationships between the status of the ecosystem and climate changes by analyzing many environmental data (e.g., humidity, temperature, and so on). This means that as more diverse sensors are developed, the chance of understanding the environments becomes higher. One of the main purposes of such (wireless or ubiquitous) sensor networks is to keep track of the activities and behaviors of people inside the space. We may thus expect pervasive services, i.e., people can be provided "anytime and anywhere" with more relevant information and services by installing more sensors in the sensor network. Furthermore, the sensor networks can be integrated to exchange the information they have collected,
Fig. 1. Mining sensor streams collected from multiple sensor networks (sensor networks A, B and C feed a data repository; a data mining process, pattern evaluation, and a GUI/KBS follow)
as shown in Fig. 1. Each sensor network utilizes the information collected in the other sensor networks, so that a sensor network can efficiently provide pervasive services to users who are visiting it for the first time. Thereby, efficient data mining methods have to be exploited to discover useful patterns from the integrated sensor streams. However, there are some difficulties in such a discovery process. The problems that we focus on in this paper are as follows:
– Noisy and redundant data from sensors
– Missing information
– Heterogeneity between the semantics of sensor networks
These problems might be caused by many unexpected reasons (e.g., sensor malfunction). We want to deal with these problems by preprocessing the raw data streams from the sensor networks, and we also expect to improve the performance of data mining methods for extracting useful knowledge about what happens in the environments. In this paper, we propose an ontology-based preprocessing method for implementing an information integration platform for heterogeneous sensor streams. The ontology is employed to bridge multiple sensor networks which are semantically heterogeneous with each other. The two main goals of this work are
– semantic annotation of data streams from sensors by using ontologies, and
– session identification of semantic sensor streams.
The outline of this paper is as follows. In Sect. 2, we show how conventional sensor networks can exploit ontologies for providing pervasive services. Sect. 3 addresses the semantic annotation scheme for sensor streams, and Sect. 4 explains the session identification method based on similarity measurement between sensor streams. To evaluate the performance of the proposed approach, Sect. 5 illustrates the experimental results. In Sect. 6, we discuss the important issues that we have realized from the experiments and compare the proposed approach to existing ones. Finally, Sect. 7 draws a conclusion of this study.
2 Ontology-Based Sensor Network Basically, the sensor nodes are installed in a certain environment, and the information collected from the sensor nodes is used to understand and detect significant patterns in the environment. However, in many cases (particularly when tracking people), the information from a single sensor network is not enough for this process. Ontology-based sensor networks have been investigated to solve this problem by integrating several sensor networks. 2.1 Context Ontology for Pervasive Intelligence The main purpose of the ontologies is to annotate sensor streams, i.e., to describe what semantics are related to the information collected from the corresponding sensor. The context ontology we use in this work contains environmental concepts and the relationships between the concepts. Definition 1 (Context ontology). Context ontology O is represented as
O := (C, R, ER, IC)    (1)
where C and R are a set of classes (or concepts) and a set of relations (e.g., equivalence, subsumption, disjunction, etc.), respectively, ER ⊆ C × C is a set of relationships between classes, represented as a set of triples {(ci, r, cj) | ci, cj ∈ C, r ∈ R}, and IC is a power set of instance sets of a class ci ∈ C.
Table 1. An example of contextual ontologies
While there are various kinds of ontology development methods, we have implemented the context ontology by extending existing ontologies (e.g., SUMO and SOUPA [2]) based on the TOVE ontology methodology [3]. We have chosen 24 scenarios and
Suggested Upper Merged Ontology (SUMO). http://www.ontologyportal.org/
organized them in a hierarchical form. Similar to [4], some upper-level concepts are derived from SUMO. Thus, as shown in Table 1, the context ontology, called ContextOnto, is written in OWL. In particular, this ontology includes a number of physical sensor nodes as owl:Individual. Table 1 shows only a part of ContextOnto.
2.2 Sensor Stream Mining
By discovering certain sequential patterns from a data stream, we can realize and conduct trend and periodicity analysis over time [5]. In particular, this method has been applied to web browsing sequences for eliciting personal interests [6]. In this study, we focus on applying sequential pattern mining [7,8] to discover frequent sequential patterns from the sensor streams. For example, in Table 2, five kinds of sensor nodes have been sampled from timestamp T0 to T8.

Table 2. An example of sensor streams

Timestamp  RFID Reader   Temperature (°C)  Lights (On/Off)  Air Conditioner (Switch)  Project Screen (Pull-down)
T0         U1            27                Off              Off                       False
T1         U1            28                On               On                        False
T2         U1, U2        30                Off              On                        True
T3         U1, U2, U3    28                Off              Off                       True
T4         U1, U2, U3    27                On               On                        False
T5         U2, U3        25                On               On                        False
T6         U2            25                On               On                        False
T7         U2            26                On               Off                       False
T8         U1, U2        26                Off              Off                       True
To carry out the discovery process and, more importantly, to improve its performance, the sensor streams should be preprocessed by removing noise and missing data. The preprocessing is done by session identification (also called sessionization), which segments the sequences of streams with respect to contextual situations.
3 Semantic Annotation for Sensor Streams Data streams collected from sensor nodes have to be annotated by using context ontologies (e.g., ContextOnto). This semantic annotation derives relevant semantic metadata from the ontologies to make the semantic streams understandable. In other words, the sequences of sensor data are transformed into sequences of RDF metadata. Through this transformation by annotation, the relationships (i.e., semantic similarity) between RDF metadata can be measured. In particular, the heterogeneity of the integration between sensor networks can be dealt with easily and efficiently. As an example, in Table 3 the semantic annotation is represented in RDF. It illustrates the environmental context "Temperature = 28.0" at "Timestamp = T1".
ContextOnto. http://intelligent.pe.kr/SemSensorWeb/ContextOnto.owl Web Ontology Language (OWL). http://www.w3.org/TR/owl-features
Table 3. An example of semantic annotation of a sensor stream: an RDF description stating that Temperature = 28.0 at Timestamp = T1
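A minimal sketch of producing such an annotation with the rdflib library is shown below. Only the ontology URL comes from the paper's footnote; the class and property names (Temperature, hasTimestamp, hasValue) are hypothetical placeholders, since the actual vocabulary of ContextOnto is not reproduced in this excerpt.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Namespace of the context ontology mentioned in the paper; the concrete class and
# property names used below are illustrative assumptions.
CTX = Namespace("http://intelligent.pe.kr/SemSensorWeb/ContextOnto.owl#")

g = Graph()
reading = URIRef(CTX["reading_T1_temperature"])
g.add((reading, RDF.type, CTX.Temperature))
g.add((reading, CTX.hasTimestamp, Literal("T1")))
g.add((reading, CTX.hasValue, Literal(28.0, datatype=XSD.double)))

print(g.serialize(format="xml"))   # RDF/XML output, similar in spirit to Table 3
```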
Another benefit obtained from semantic annotation is the integration between the heterogeneous sensor networks shown in Fig. 1. The sensor streams from multiple sensor networks are integrated as shown in Fig. 2. These sensor networks must be connected to a middleware for transmitting the data to be aggregated into the stream repository [9]. According to the number of installed sensors and the sampling frequencies, a centralized sensor stream repository receives and stores a different amount of information from each sensor network over time. Furthermore, any external information source on the web can interoperate with the middleware to enrich the integrated sensor streams.
Fig. 2. Integration of semantic sensor streams (sensor networks A, B and C feeding a sensor stream repository over time)
The integration between sensor streams can be achieved by merging RDF annotations. This work assumes that a single ontology is used for all sensor networks, while in general there can be distributed ontologies (i.e., each sensor network has its own ontology). If there are several different ontologies, we have to consider automated ontology mapping methods to find semantic correspondences between the ontologies [10].
4 Session Identification as Preprocessing Even though we have enough information by integrating sensor streams from multiple environments, it is impossible to directly apply conventional data mining algorithms to the integrated sensor streams. Thereby, we focus on conducting session identification [11] with respect to the contextual relationships between timestamped sensor data. Generally, two heuristic-based approaches to session identification have existed, as follows:
– the time-oriented heuristic, which sessionizes with a predefined time interval, and
– the frequency-oriented heuristic, which sessionizes with a frequency of sensor streams (i.e., the time interval changes dynamically).
Differently from them, we propose an ontology-based session identification method. The main idea of this method is to compute and observe the statistical distribution of semantic similarity for finding semantic outliers, while shifting a window W along the sensor stream. Thereby, given two timestamps ti and tj, we first have to compute the semantic similarity Δ between the two RDF annotations A(ti) and A(tj). Jung [12] has explained several methods to measure the semantic similarity between two RDF metadata. The semantic similarity matrix DΔ can be given by
DΔ(t, t′) = ( Δ[A(ti), A(tj)] )i,j    (2)
where the size of this matrix is the predefined time interval |W| and the diagonal elements are all zero. For example, given the sensor stream in Table 2, we can obtain the semantic similarity matrix shown in Table 4. Based on a semantic similarity matrix DΔ, the semantic mean μ is given by
μ(t1, …, tT) = 2 Σ_{i=1}^{T} Σ_{j=i}^{T} DΔ(i, j) / (T(T − 1))    (3)

Table 4. An example of semantic similarity matrix (size of the sliding window is 3)

      T0    T1    T2    T3    T4    T5    T6    T7    T8
T0    0     0.8   0.3
T1    0.8   0     0.25  0.15
T2    0.3   0.25  0     0.78  0.26
T3          0.15  0.78  0     0.32  0.36
T4                0.26  0.32  0     0.87  0.92
T5                      0.36  0.87  0     0.89  0.88
T6                            0.92  0.89  0     0.95  0.24
T7                                  0.88  0.95  0     0.23
T8                                        0.24  0.23  0
where DΔ(i, j) is the (i, j)-th element of the semantic similarity matrix. This is the mean value of the upper triangular elements except the diagonal. Then, with respect to the given time interval T, the semantic deviation σ is derived as
σ(t1, …, tT) = sqrt( 2 Σ_{i=1}^{T} Σ_{j=i}^{T} (DΔ(i, j) − μ(t1, …, tT))² / (T(T − 1)) )    (4)
These factors are exploited to quantify the semantic similarity between two random RDF annotations of a sensor stream and to statistically discriminate semantic outliers, such as the most distinct (or the N most distinct) data compared to the rest beyond a preset threshold, with respect to the given time interval. For instance, the semantic similarity matrix in Table 4 can be represented as Fig. 3 by observing μ and σ over time.
Fig. 3. Distribution of semantic similarities (the mean and standard deviation of the semantic similarity plotted against the timestamp)
Based on the observed dynamics of the distribution (i.e., Fig. 3), we have to investigate how the sensor streams are segmented (i.e., session identification). Four heuristics have been considered for deciding that a new session begins in the RDF annotation stream:
– H1−1: when the mean of semantic similarity μ is less than a threshold τμ,
– H1−2: when the standard deviation of semantic similarity σ is more than a threshold τσ,
– H2−1: when μ turns from upward to downward,
– H2−2: when σ turns from downward to upward.
In consequence, from Fig. 3, these four heuristics can detect the specific moment at which (or the place where) the sequence should be divided. Table 5 depicts the results on session identification by the four heuristics. Finally, we can discover frequent sequential patterns by applying data mining algorithms to the sessionized data sets. This was difficult to do from the raw sensor streams in Table 2. Once we have the sessionized result (i.e., Table 5), we can easily find the frequent patterns and also be aware of behavioral patterns of the people in the particular
Table 5. Results on session identification by the four heuristics

Heuristics                       H1−1                 H1−2       H2−1       H2−2
Session identification between   (T1, T2), (T6, T7)   (T6, T7)   (T6, T7)   (T1, T2), (T3, T4), (T6, T7)
environment. Thus, in this case, we have discovered and translated the following pattern (with support and confidence):
– Project Screen Pull-Down (True) → Light (Off) [20%, 95%]
– When the project screen is pulled down in this room, the light switch will be turned off.
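To make the windowed statistics and the four heuristics concrete, here is a hedged Python sketch. The pairwise similarity function delta stands in for whichever RDF similarity measure of [12] is used, the thresholds τμ and τσ are illustrative, and reading H2-1/H2-2 as local turning points of μ and σ is one possible interpretation of the description above.

```python
import itertools

def window_stats(annotations, start, size, delta):
    """Semantic mean and deviation over all pairs inside a window of `size` annotations,
    following equations (3) and (4)."""
    window = annotations[start:start + size]
    sims = [delta(a, b) for a, b in itertools.combinations(window, 2)]
    mu = sum(sims) / len(sims)
    sigma = (sum((s - mu) ** 2 for s in sims) / len(sims)) ** 0.5
    return mu, sigma

def session_boundaries(annotations, delta, size=3, tau_mu=0.3, tau_sigma=0.3):
    """Apply heuristics H1-1/H1-2 (thresholds) and H2-1/H2-2 (turning points)."""
    stats = [window_stats(annotations, t, size, delta)
             for t in range(len(annotations) - size + 1)]
    boundaries = set()
    for t, (mu, sigma) in enumerate(stats):
        if mu < tau_mu:                                        # H1-1
            boundaries.add(t)
        if sigma > tau_sigma:                                  # H1-2
            boundaries.add(t)
        if 0 < t < len(stats) - 1:
            if stats[t - 1][0] < mu > stats[t + 1][0]:         # H2-1: mu turns downward
                boundaries.add(t)
            if stats[t - 1][1] > sigma < stats[t + 1][1]:      # H2-2: sigma turns upward
                boundaries.add(t)
    return sorted(boundaries)
```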
5 Experiments In order to evaluate the proposed preprocessing method, we have installed four sensor networks, composed of five kinds of sensor nodes, in three lecture rooms and one corridor. The sensor streams have been collected for about one month (from 9 March 2009 to 6 April 2009).

Table 6. Comparison of performances of stream preprocessing methods

                          MGeneric   MTime     MSampling   MOnto
Number of patterns        4          4         4           7
Precision: Agree          1 (25%)    3 (60%)   2 (50%)     5 (71.4%)
Precision: Disagree       2 (50%)    2 (40%)   1 (25%)     2 (28.6%)
Precision: Do not know    1 (25%)    0 (0%)    1 (25%)     0 (0%)
As a first test, by discovering the frequent sequential patterns with [7], we have compared the proposed method (i.e., MOnto) with the existing sessionization methods (i.e., MGeneric, MTime, and MSampling) with respect to the number of discovered patterns and the precision of the patterns. The precision (i.e., the "agree" and "disagree" scores) has been judged by human experts in these experiments.
– MGeneric has no sessionization step. It assumes that the contexts are always identical.
– MTime uses a time slot to sessionize. The size of the time slot is fixed.
– MSampling conducts a random sampling. It is expected to work efficiently on large-scale streams.
The result is shown in Table 6. MOnto, proposed in this paper, has outperformed the other three methods by discovering more patterns. More significantly, in terms of the "agree" score, the patterns discovered by MOnto have shown higher precision than the patterns discovered by MGeneric (by 295.6%), MTime (by 119%), and MSampling (by 142.8%). But, interestingly, the disagree score of MOnto is slightly higher than that of MSampling. We think that MSampling has been able to discover the most stable and plain patterns from the sensor streams.
6 Discussions

This paper applies semantics to sensor data for mining sensor data from "heterogeneous" environments and for improving the performance of the mining process. In this section, we want to discuss two main issues mentioned in the paper. With respect to semantic annotation, it is pretty straightforward to convert sensor data to RDF data. Many people have applied it and many ontologies have been developed. The remaining challenges on this issue concern stream data integration and the integration of distributed ontologies. As a limitation of this work, we have used only one centralized ontology to annotate the sensor data (as shown in Fig. 1). With respect to session identification, this study assumes that people in the sensor space may pay attention to more than one context. Thus, it is not easy to discover meaningful patterns from sensor data without proper preprocessing. Of course, there are further psychological issues, e.g., how to model and formulate the mental status of humans.
7 Concluding Remark and Future Work

As a conclusion, we have claimed that the sensor streams from multiple heterogeneous environments should be efficiently integrated and preprocessed for better data mining performance. It was difficult to do this from the raw sensor streams in Table 2. Once we have the sessionized result (i.e., Table 4), we can easily find the frequent patterns and also understand the contextual sequences of people in the particular environment. In future work, we will focus on developing a real sensor network application by applying several sequential pattern mining approaches to discover the sequential patterns. Moreover, by matching distributed ontologies, most sensor network systems on open networks can be integrated. The ontologies will play an important role as logical reasoners in this area.
References 1. Culler, D., Estrin, D., Srivastava, M.: Overview of sensor networks. Computer 37(8), 41–49 (2004) 2. Chen, H., Perich, F., Finin, T., Joshi, A.: Soupa: Standard ontology for ubiquitous and pervasive applications. In: Proceedings of the First Annual International Conference on Mobile and Ubiquitous Systems: Networking and Services (MobiQuitous 2004), pp. 258–267. IEEE Computer Society, Los Alamitos (2004) 3. Fox, M.S.: The tove project towards a common-sense model of the enterprise. In: Belli, F., Radermacher, F.J. (eds.) IEA/AIE 1992. LNCS, vol. 604, pp. 25–34. Springer, Heidelberg (1992) 4. Eid, M., Liscano, R., Saddik, A.E.: A universal ontology for sensor networks data. In: Proceedings of the 2007 IEEE International Conference on Computational Intelligence for Measurement Systems and Applications (CIMSA 2007), Ostuni, Italy, June 27-29. IEEE Computer Society, Los Alamitos (2007) 5. Han, J., Kamber, M.: Data mining: concepts and techniques. Morgan Kaufmann, San Francisco (2006)
6. Jung, J.J.: Collaborative web browsing based on semantic extraction of user interests with bookmarks. Journal of Universal Computer Science 11(2), 213–228 (2005) 7. Agrawal, R., Srikant, R.: Mining sequential patterns. In: Yu, P.S., Chen, A.L.P. (eds.) Proceedings of the 8th International Conference on Data Engineering, March 6-10, pp. 3–14. IEEE Computer Society, Los Alamitos (1995) 8. Srikant, R., Agrawal, R.: Mining sequential patterns: Generalizations and performance improvements. In: Apers, P.M.G., Bouzeghoub, M., Gardarin, G. (eds.) EDBT 1996. LNCS, vol. 1057, pp. 3–17. Springer, Heidelberg (1996) 9. Heinzelman, W.B., Murphy, A.L., Carvalho, H.S., Perillo, M.A.: Middleware to support sensor network applications. IEEE Network 18(1), 6–14 (2004) 10. Jung, J.J.: Ontology-based context synchronization for ad-hoc social collaborations. Knowledge-Based Systems 21(7), 573–580 (2008) 11. Jung, J.J.: Semantic preprocessing of web request streams for web usage mining. Journal of Universal Computer Science 11(8), 1383–1396 (2005) 12. Jung, J.J.: Exploiting semantic annotation to supporting user browsing on the web. Knowledge-Based Systems 20(4), 373–381 (2007)
HOT aSAX: A Novel Adaptive Symbolic Representation for Time Series Discords Discovery Ninh D. Pham, Quang Loc Le, and Tran Khanh Dang Faculty of Computer Science and Engineering, HCM University of Technology, Vietnam National University of HoChiMinh City, Vietnam
[email protected], {locle,khanh}@cse.hcmut.edu.vn
Abstract. Finding discords in time series databases has been an important problem over the last decade due to its variety of real-world applications, including data cleansing, fault diagnostics, and financial data analysis. The best known approach to our knowledge is the HOT SAX technique, based on the equiprobable distribution of SAX representations of time series. This characteristic, however, is not always preserved after dimensionality reduction, especially for datasets lacking a Gaussian distribution. In this paper, we introduce a k-means based algorithm for symbolic representations of time series called adaptive Symbolic Aggregate approXimation (aSAX) and propose the HOT aSAX algorithm for time series discords discovery. Due to the clustered characteristic of aSAX words, our algorithm produces greater pruning power than the previous approach. Our empirical experiments with real-world time series datasets confirm the theoretical analyses as well as the efficiency of our approach. Keywords: Time Series Data Mining, SAX, Anomaly Detection, Clustering.
1 Introduction

The last decade has seen an increasing level of interest in finding time series discords due to their variety of uses for data mining, including improving the quality of clustering, data cleaning, summarization, and anomaly detection. However, time series are essentially high dimensional data and directly dealing with such data in its raw format is very expensive in terms of processing and storage cost. Because of this fact, most time series representations proposed in the literature are based on reduced-dimensionality techniques that still preserve the fundamental characteristics of the data, such as the Discrete Fourier Transform (DFT) [3], Discrete Wavelet Transform (DWT) [2], Piecewise Aggregate Approximation (PAA) [4], and Symbolic Aggregate approXimation (SAX) [6]. However, only the SAX approach allows time series discords discovery based on a heuristic algorithm, called HOT SAX [5], which produces greater pruning power than brute force, while being guaranteed to produce identical results. The HOT SAX approach is based on the symbolic representations of time series to improve the quality of the brute force algorithm. In particular, the SAX approach discretizes and maps reduced-dimension time series into SAX symbols based on the "breakpoints"
which produce the equal-sized areas under Gaussian curve. Due to the equiprobable distribution of SAX representations of time series, the two heuristic searching orders in outer and inner loop of brute force were proposed for greater pruning power. Because of this fact, the performance of HOT SAX highly depends on the two major factors: (1) the Gaussian distributed feature of time series, and (2) the clustered feature of SAX words. In this paper, we introduce a simple, but highly adaptive symbolic approach, adaptive Symbolic Aggregate approXimation (aSAX) for time series representations and propose a heuristic algorithm called HOT aSAX for time series discords discovery. In particular, our technique is based on the original SAX, but adaptive vector of “breakpoints”. These adaptive “breakpoints” are determined by a preprocessing phase using k-means algorithm. Due to the clustered feature of the adaptive “breakpoints”, the better heuristic searching orders in outer and inner loop of brute force could be obtained in HOT aSAX. That leads to greater pruning power on both the highly Gaussian distribution datasets and the lack of Gaussian distribution datasets for various discord lengths and database sizes. The rest of this paper is organized as follows. In Section 2, we provide the problem definitions and briefly review the background material and related work. Section 3 introduces the novel adaptive representation called aSAX and HOT aSAX approach for time series discords discovery. In Section 4, we show experimental evaluations of the proposed approach with real-world datasets. Finally, in Section 5, we conclude and give some future research directions.
2 Background and Related Work

Intuitively, time series discords are the subsequences that have the least similarity to all other subsequences. However, "the best matches to a subsequence tend to be located one or two points to the left or the right of the subsequence in question" [5]. Such matches are called trivial matches and should be excluded to obtain the true discords. In this section, we therefore formally give the problem definitions, including non-self match and top-K discord, and then review the basic heuristic algorithm as well as the HOT SAX approach for time series discords discovery.

2.1 Problem Definitions

As noted above, it is obvious and intuitive to exclude trivial matches when finding time series discords; otherwise, almost all real datasets have degenerate and unintuitive solutions. We therefore need to formally define a non-self match:

Definition 1. Non-self Match: Given a time series T, a subsequence C of length n beginning at position p and a matching subsequence M beginning at position q, we say that M is a non-self match to C if |p − q| ≥ n.

We now use the definition of non-self match to define the time series discord:

Definition 2. Time Series Discord: Given a time series T, the subsequence D of length n beginning at position p is said to be the top-1 discord of T if D has the largest distance to its nearest non-self match.
Note that we may have more than one unusual pattern in a given time series. We are thus interested in examining the top-K discords. Definition 3. Top-K Discord: Given a time series T, the subsequence D of length n beginning at position p is said to be the top-K discord of T if D has the Kth largest distance to its nearest non-self match, with no overlapping region to the ith discord, for all 1 ≤ i < K .
The problem of locating the top-1 discord was first proposed in [5] by a heuristic algorithm which compares the distance from each subsequence to its nearest non-self match, as shown in Table 1.

Table 1. Heuristic Discord Discovery

discord_distance = 0
discord_position = 0
for each p in T ordered by heuristic Outer
    nearest_neighbor_distance = infinity
    for each q in T ordered by heuristic Inner
        if |p − q| ≥ n
            dist = Dist(tp,...,tp+n-1, tq,...,tq+n-1)
            if dist < nearest_neighbor_distance
                nearest_neighbor_distance = dist
            endif
            if dist < discord_distance
                break
            endif
        endif
    endfor
    if nearest_neighbor_distance > discord_distance
        discord_distance = nearest_neighbor_distance
        discord_position = p
    endif
endfor
return (discord_distance, discord_position)
In this algorithm, each possible candidate subsequence is extracted in the heuristic outer loop, and its nearest non-self match is searched for in the heuristic inner loop. The candidate subsequence which has the largest distance to its nearest non-self match is the top-1 discord. Note that the algorithm's running time is highly dependent on the two heuristics for the outer and inner loop. These heuristics determine the orders in which the outer loop and the inner loop visit the subsequences, in the attempt to find the discord within the first few subsequences of the outer loop. That leads to early termination and considerably speeds up the algorithm. For simplicity, we have only discussed top-1 discord discovery. Extensions to top-K discords are trivial and obvious, and are omitted for brevity.
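The pseudocode of Table 1 translates almost literally into Python. The sketch below assumes that the two heuristic visiting orders and the distance function over length-n subsequences are supplied by the caller (for instance the SAX-based orders of Section 2.2 and the Euclidean distance); these choices are not fixed by the table itself.

```python
import math

def top1_discord(T, n, outer_order, inner_order, dist):
    """Heuristic top-1 discord discovery (Table 1).  `outer_order` is an
    iterable of candidate positions, `inner_order(p)` yields positions to
    compare against, and `dist` compares two length-n subsequences."""
    discord_distance, discord_position = 0.0, 0
    for p in outer_order:
        nearest = math.inf
        for q in inner_order(p):
            if abs(p - q) >= n:                      # non-self matches only
                d = dist(T[p:p + n], T[q:q + n])
                nearest = min(nearest, d)
                if d < discord_distance:             # early abandoning of the inner loop
                    break
        if math.isfinite(nearest) and nearest > discord_distance:
            discord_distance, discord_position = nearest, p
    return discord_distance, discord_position
```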
2.2 HOT SAX for Finding Time Series Discords
Keogh et al. [5] proposed the first heuristic algorithm for finding time series discords, called HOT SAX, with the two heuristic orders in the outer loop and inner loop based on the equiprobable distribution of SAX representations of subsequences. Time series subsequences extracted by a sliding window are converted into SAX representations and inserted into two augmented data structures to support the outer and inner heuristics. In particular, an array of SAX words containing a count of how often each word occurs in the array determines the searching order for the outer loop. An augmented trie with leaves containing a list of all array indices with the same SAX encoding determines the searching order for the inner loop.

Heuristic Outer Loop. Based on the intuition that unusual subsequences are very likely to map to unique or rare SAX words, the subsequences whose SAX words occur the minimum number of times are given to the outer loop to search over first. The rest of the candidates are visited in random order. By considering these candidate subsequences first, there is an increasing chance of giving a large value to the discord_distance variable early on, thus allowing more early terminations of the inner loop.

Heuristic Inner Loop. In the inner loop, all subsequences which have the same SAX encoding as the candidate subsequence of the outer loop are visited first. The rest of the subsequences are visited in random order. We thus only need to find one such subsequence that is similar enough to give a small value to the dist variable in order to terminate the inner loop earlier. As noted above, it is clear that the HOT SAX performance is highly dependent on the clustered feature of SAX words. A highly clustered feature of SAX words will produce better heuristics in the outer and inner loop; otherwise, the speedup of HOT SAX will be reduced.
3 The Proposed Algorithm: HOT aSAX

It is important to note that the pruning power of HOT SAX is highly dependent on the quality of the SAX approximation. In the classic SAX approach, there is an assumption that most time series have a highly Gaussian distribution, because each dimension is discretized into SAX symbols based on predetermined "breakpoints" under the Gaussian curve with equiprobability. Also, the discretization is often applied on the reduced-dimensionality data (e.g., the PAA representation); however, no specific algorithm for that option was proposed. Finally, the two heuristic searching orders in the outer and inner loop of HOT SAX are based on the equiprobable distribution of SAX representations of time series subsequences. As a consequence, HOT SAX produces higher performance on highly Gaussian-distributed datasets, but lower performance on datasets lacking a Gaussian distribution. In this section, we introduce a novel adaptive symbolic representation (aSAX) for time series and propose a heuristic algorithm, called HOT aSAX, for time series discords discovery. The proposed approach produces greater pruning power than the classic technique on both highly Gaussian-distributed datasets and datasets lacking a Gaussian distribution, for various discord lengths and database sizes.
3.1 Adaptive Symbolic Representation: aSAX
Our proposed approach, adaptive Symbolic Aggregate approXimation (aSAX) is based on the classic SAX but with the adaptive “breakpoints”. These adaptive “breakpoints” are predetermined by a pre-processing phase based on Lloyd’s algorithm [7], which is nothing but the k-means algorithm [8] for clustering, specialized to one dimension. In particular, we use a small set of the normalized time series as the training set, the alphabet size of symbols as parameter k in k-means algorithm and the “breakpoints” under Gaussian curve as the clever initialization of cluster intervals. We note that the original normalized time series can be used for training but for the best performance we suggest the PAA representations of the training time series. Given a training set, we first convert all time series into PAA representations and store them in a PAA array. Denote by tn the value of the nth PAA point in PAA array. Start with the given set of intervals [β i , β i +1 ) under Gaussian curve, for i = 0,…,a-1
β0 = −∞ and βa = ∞. Set Δ = ∞, and fix γ > 0. Denote by ri the representative value for the interval i. The training algorithm to achieve the "adaptive" breakpoints for the aSAX representation is derived based on the k-means algorithm as follows:

Table 2. The Training Algorithm for Adaptive "Breakpoints"

1. For i = 0,…,a−1, compute the new representative value as the center of mass of all PAA points in the interval [βi, βi+1): ri = (1/Ni) Σ_{tn ∈ [βi, βi+1)} tn, where Ni is the total number of PAA points in the interval [βi, βi+1).
2. Compute the new intervals by βi = (r_{i−1} + r_i)/2, for i = 1,…,a−1.
3. Compute the total representation error Δ' = Σ_{i=1}^{k} Σ_{tn ∈ [βi, βi+1)} (tn − ri)². If (Δ − Δ')/Δ < γ then stop; otherwise set Δ = Δ' and go to step 1.
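A compact version of the training loop of Table 2 is sketched below; it pools the PAA points of the training set into a single array and refines the Gaussian breakpoints by a one-dimensional Lloyd (k-means) iteration. The stopping constant γ, the iteration cap and the handling of empty intervals are illustrative choices, not prescribed by the paper.

```python
import numpy as np

def train_asax_breakpoints(paa, init_breakpoints, gamma=1e-3, max_iter=100):
    """Refine SAX breakpoints on PAA training data (Lloyd / 1-D k-means)."""
    paa = np.asarray(paa, dtype=float)
    beta = np.asarray(init_breakpoints, dtype=float)      # beta_1 .. beta_{a-1}
    delta = np.inf
    for _ in range(max_iter):
        edges = np.concatenate(([-np.inf], beta, [np.inf]))
        reps, err = [], 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            pts = paa[(paa >= lo) & (paa < hi)]
            # step 1: representative value = centre of mass of the interval
            r = pts.mean() if pts.size else np.clip((lo + hi) / 2.0, paa.min(), paa.max())
            reps.append(r)
            err += np.sum((pts - r) ** 2)                  # step 3: representation error
        # step 2: new breakpoints halfway between neighbouring representatives
        beta = np.array([(reps[i - 1] + reps[i]) / 2.0 for i in range(1, len(reps))])
        if np.isfinite(delta) and (delta - err) / delta < gamma:
            break
        delta = err
    return beta
```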
y = 1(φ) = { 1 if φ > 0;  0 if φ ≤ 0 }    (6)
The network learning process was performed according to the following steps:

1. The input image generated from a low class scanner (u1, u2, ..., u25) is given to the network input and its output y is determined;
2. One of the following three decisions is made:
   2.1 If y = y' (where y' is a reference image generated from a high class scanner), the process is finished;
   2.2 If y is incorrect and equal to 0, the value of each input multiplied by a small number n is added to the value of the appropriate weight coefficient (w1, w2, ..., w25);
   2.3 If y is incorrect and equal to 1, the value of each input multiplied by a small number n is subtracted from the value of the weight vector (w1, w2, ..., w25);
3. The process is repeated until a satisfactory final result is obtained.

The process of updating the network weights is described by formula no. 7 (the so-called delta rule [6]).
wi(k+1) = wi(k) + n · δμ · uiμ    (7)
Where: k – step number; wi – weight of the i-th connection at the k-th step; uiμ – i-th component of the μ-th input vector u; y'μ – requested response for the μ-th input vector; yμ – current response for the μ-th input vector; n – learning parameter; and

δμ = y'μ − yμ = { −1 for y'μ = 0 and yμ = 1;  0 for y'μ = yμ;  1 for y'μ = 1 and yμ = 0 }

During the network learning process, ten filter masks were obtained for the 10 image pairs, some of which are presented in formulas no. 8 and 9. The filter generated from image pair no. 1:
w1 = [ −0.0082  −0.0098   0.0231  −0.0073  −0.0036
       −0.0110   0.0094   0.1588  −0.0047   0.0008
        0.0270   0.1084   0.4435   0.1087   0.0244
        0.0012  −0.0058   0.1558   0.0098  −0.0103
       −0.0064  −0.0059   0.0230  −0.0074  −0.0076 ]    (8)
The filter generated from the image pair no 5:
w5 = [ −0.0089  −0.0086   0.0209  −0.00078  −0.0056
       −0.0096   0.0091   0.1572  −0.0060    0.0010
        0.0238   0.1116   0.4501   0.1115    0.0255
        0.0013  −0.0050   0.1571   0.0081   −0.0101
       −0.0052  −0.0075   0.0202  −0.0082   −0.0074 ]    (9)
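For illustration, the learning procedure of formulas (6)–(7) can be sketched as follows. It assumes that the 25 inputs u1,…,u25 are the pixels of a 5×5 window of the low-class scan and that the reference response y' is the corresponding (binarised) pixel of the high-class scan; the learning parameter n and the number of passes are placeholders, not values from the paper.

```python
import numpy as np

def train_fir_mask(low_img, ref_img, n=0.001, epochs=5):
    """Delta-rule training (formula 7) of a 5x5 mask with the step
    activation of formula 6; low_img and ref_img are 2-D arrays of the
    same size, ref_img holding the binarised reference responses."""
    w = np.zeros((5, 5))
    rows, cols = low_img.shape
    for _ in range(epochs):
        for r in range(2, rows - 2):
            for c in range(2, cols - 2):
                u = low_img[r - 2:r + 3, c - 2:c + 3]    # 5x5 input window
                y = 1.0 if np.sum(w * u) > 0 else 0.0    # step activation (6)
                delta = ref_img[r, c] - y                # -1, 0 or +1
                w += n * delta * u                       # weight update (7)
    return w
```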
The amplitude characteristics of the generated filters are presented in figure no 4. During the analysis of weight coefficients of the filters featured in formulas no. 8, 9 and the amplitude characteristics shown in figure no. 4, it can be found that the filters generated during the neural network learning process are low-pass. Therefore, most interferences of an image generated from a low class scanner can be classified as noises. In order to check the efficiency of operation of the obtained filters, each of ten images generated from a low class scanner was subject to the filtration process according to formula no. 4. A new image so generated was compared with a reference image by means of the criteria according to formulas 1, 2 and 3. The results are shown in tables 3 to 4.
Fig. 4. The amplitude characteristics of the generated filters w1, presented in formula no 8
The results from tables 3 to 4 prove explicitly the efficiency of the filters generated by means of a neural network. The efficiency is related to the objective evaluations performed by means of MSE and Q criterion. The subjective evaluation S is very diverse which is not particularly surprising considering the fact that HVS (Human Visual System) factors depend greatly on a lot of elements such as: light spectral content, spectral component luminance, local scene contrast, global scene contrast, radiation patterns of image sections and image viewing angle [7, 8].
Table 3. The values of criteria for the images after filtering by filter w1

Image      MSE before filtering  MSE after filtering  Q before filtering  Q after filtering  S before filtering  S after filtering
Image 1a   52,381                23,311               0,871               0,898              9,721               9,824
Image 2a   57,293                30,764               0,865               0,882              9,843               9,811
Image 3a   57,511                32,349               0,891               0,924              9,684               9,732
Image 4a   50,022                21,576               0,912               0,943              9,748               9,844
Image 5a   54,135                26,023               0,883               0,897              9,698               9,521
Image 6a   56,772                29,351               0,837               0,864              9,809               9,745
Image 7a   58,635                34,976               0,854               0,872              9,903               9,843
Image 8a   54,494                27,009               0,877               0,898              9,871               9,831
Image 9a   55,765                29,731               0,898               0,951              9,981               9,900
Image 10a  49,987                20,097               0,934               0,968              9,743               9,856
Table 4. The values of criteria for the images after filtering by filter w5

Image      MSE before filtering  MSE after filtering  Q before filtering  Q after filtering  S before filtering  S after filtering
Image 1a   52,381                27,889               0,871               0,878              9,721               9,754
Image 2a   57,293                31,176               0,865               0,875              9,843               9,823
Image 3a   57,511                27,217               0,891               0,921              9,684               9,613
Image 4a   50,022                26,101               0,912               0,950              9,748               9,801
Image 5a   54,135                25,135               0,883               0,899              9,698               9,704
Image 6a   56,772                28,311               0,837               0,854              9,809               9,895
Image 7a   58,635                31,892               0,854               0,868              9,903               9,899
Image 8a   54,494                30,129               0,877               0,882              9,871               9,801
Image 9a   55,765                31,059               0,898               0,929              9,981               9,680
Image 10a  49,987                23,079               0,934               0,945              9,743               9,766
Afterwards the filter W was generated as the result of averaging all 10 filter masks. The averaging procedure was performed according to formula no. 10:

W = (1/10) · (w1 + w2 + · · · + w10)    (10)
The mask of the filter so generated is shown in formula no. 11. Figure no. 8 features the amplitude characteristics of the filter W.
W = [ −0.0054  −0.0083   0.0183  −0.0062  −0.0025
      −0.0104   0.0050   0.1574  −0.0108   0.0016
       0.0217   0.1072   0.4739   0.1079   0.0214
       0.0018  −0.0109   0.1579   0.0046  −0.0100
      −0.0025  −0.0060   0.0189  −0.0081  −0.0059 ]    (11)
The images generated by means of a low class scanner were subject to filtration by the filter W and compared by means of the quality measures with the reference images. The results of the measurements are shown in Table 5.

Table 5. The values of criteria for the images after filtering by filter W

Image      MSE before filtering  MSE after filtering  Q before filtering  Q after filtering  S before filtering  S after filtering
Image 1a   52,381                29,051               0,871               0,887              9,721               9,629
Image 2a   57,293                31,907               0,865               0,864              9,843               9,776
Image 3a   57,511                31,871               0,891               0,917              9,684               9,729
Image 4a   50,022                26,475               0,912               0,921              9,748               9,709
Image 5a   54,135                28,192               0,883               0,889              9,698               9,774
Image 6a   56,772                29,092               0,837               0,860              9,809               9,721
Image 7a   58,635                31,201               0,854               0,875              9,903               9,884
Image 8a   54,494                28,459               0,877               0,894              9,871               9,807
Image 9a   55,765                27,193               0,898               0,907              9,981               9,807
Image 10a  49,987                24,651               0,934               0,941              9,743               9,751
4 Final Conclusions The analysis of the results in tables 3 to 5 demonstrates that the quality of the images in relation to the original images can be improved if a filtration by means of the filters generated using a neural network is performed. Due to the low-pass feature, the filters generated during the neural network learning process can deal efficiently with most interferences of an image generated from a low class scanner. Having a reference image generated from a high class scanner of well-known parameters and an image generated from a scanner of inferior quality as well as a program performing the neural network learning process, a filter mask is generated that can directly be used in the popular graphical software. A FIR filter matched to a specific improved image or generated as the result of the averaging process can be applied up to the required quality. The neural network learning process featured in the paper guarantees that it is possible to obtain a FIR filter improving the quality of an image generated from a low class scanner so that the quality is similar to the quality of an image generated from a high class scanner.
References 1. Pratt, W.K.: Digital Image Processing, PIKS Inside. Willey, Chichester (2001) 2. Bovik, A.C.: Handbook of Image and Video Processing, 2nd edn., Department of Electrical And Computer Engineering, The University of Texas AT Austin, TEXAS. Elsevier Academic Press, Amsterdam (2005) 3. Gupta, P.K., Kanhirodan, R.: Design of a FIR Filter for Image Restoration using Principal Component Neural Networks. In: IEEE International Conference on Industrial Technology. ICIT 2006, December 15-17, pp. 1177–1182 (2006) 4. Wang, Z., Bovik, A.C.: Mean Squared Error: Love it or Leave it? IEEE Signal Processing Magazine (2009)
5. Wang, W., Bovik, A.C.: A Universal Image Quality Index. IEEE Signal Processing Letters, vol. XX No. Y (2002) 6. Korbicz, J., Obuchowicz, A., Uciński, D.: Sztuczne sieci neuronowe. Akademicka Oficyna Wydawnicza, Warszawa (1994) 7. Grainger, E.M., Cuprey, K.N.: An Optical Merit Function (SQF) Which Correlates With Subjective Image Judgments. Fotografic Science and Engineering 16(3) (1992) 8. Wyszecki, G.: “Color appearance”. In: Boff, K.R., Kaufman, L., Thomas, J.P. (eds.) Handbook of perception an human performance (1986) 9. Ackenhusen, J.G.: Real-Time Signal Processing: Desing and Implementation of Signal Processing Systems. PTR Prentice Hall, Upper Saddle River (1999) 10. Bellanger, M.: Digital Processing of Signal. Theory and Practice. Wiley, Chichester (1989) 11. Thimm, G., Fiesler, M.: Neural network initialization. In: Mira, J., (eds.) Form Neural to Artifical Neural Computation, Malaga, pp. 533–542 (1995) 12. Sheikh, H.R., Bovik, A.C.: Image information and visual quality. IEEE Trans. Image Processing 12(11), 1338–1351 (2003) 13. Papas, T.N., Safranek, R.J., Chen, J.: Perceptual criteria for image quality evolution. In: Handbook of Image and Video Processing, 2nd edn. (May 2005) 14. Cammbell, C.: Neural Network Theory. University Press, Bristol (1992)
Robust Fuzzy Clustering Using Adaptive Fuzzy Meridians

Tomasz Przybyla1, Janusz Jeżewski2, Janusz Wróbel2, and Krzysztof Horoba2

1 Silesian University of Technology, Institute of Electronics, Akademicka 16, 44-101 Gliwice, Poland
[email protected]
2 Institute of Medical Technology and Equipment, Department of Biomedical Informatics, Roosvelta 118, 41-800 Zabrze, Poland
Abstract. Fuzzy clustering methods are useful in data mining applications. This paper describes a new fuzzy clustering method in which each cluster prototype is calculated as a fuzzy meridian. The meridian is the maximum likelihood estimator of location for the meridian distribution. The value of the meridian depends on the data samples and also on the medianity parameter. The sample meridian is extended to fuzzy sets to define the fuzzy meridian. For the estimation of the medianity parameter value, the classical Parzen window method has been generalized with real non-negative weights. An example illustrating the robustness of the proposed method is given.
1
Introduction
Clustering aims at assigning a set of objects to clusters in such a way that objects within the same cluster have a high degree of similarity, while objects belonging to different clusters are dissimilar. Traditional clustering algorithms can be divided into two main categories [1][2][3]: hierarchical and partitional. In hierarchical clustering, the number of clusters need not be specified a priori. Moreover, the problems due to initialization and local minima do not arise. However, hierarchical methods cannot incorporate a priori knowledge about the global shape or size of clusters since they consider only local neighbors in each step [4]. Prototype–based partitional clustering methods can be classified into two classes: hard (or crisp) methods and fuzzy methods. In the hard clustering methods, every data case belongs to only one cluster. In the fuzzy clustering methods every data point belongs to every cluster. Fuzzy clustering algorithms can deal with overlapping cluster boundaries. Robustness means that the performance of an algorithm should not be affected significantly by small deviations from the assumed model. The algorithm should not deteriorate drastically due to noise and outliers. Methods that are able to tolerate noise and outliers have become very popular [6], [7], [8].
The most popular method of fuzzy clustering is the fuzzy c–means (FCM) method proposed by Bezdek [2]. Unfortunately the FCM method is sensitive to the presence of outliers and noise in the clustered data. In real applications, the data are corrupted by noise and assumed models such as a Gaussian distribution are never exact. This method is a prototype–based method, where the prototypes are optimal for the Gaussian model of the data distribution. The Gaussian model is inadequate in an impulsive environment. Impulsive signals are more accurately modeled by distributions whose density functions have heavier tails than the Gaussian distribution [9][10]. In [13] the Laplace distribution was used for modelling the outliers; hence, the fuzzy medians have been used as the cluster prototypes. In paper [8] the fuzzy mixture of Student's–t distributions has been used for modelling the outliers. In our work, the meridian distribution has been used for modelling the outliers. The meridian distribution has been proposed in [11]. The proposed distribution describes the random variable formed as the ratio of two independent zero–mean Laplacian distributed random variables. The maximum likelihood (ML) estimator of the location parameter for the meridian distribution is called the sample meridian. An assignment of nonnegative weights to the data set generalizes the sample meridian to the weighted meridian. The fuzzy meridian is the special case of a weighted meridian, where the weights associated with the data points may be interpreted as memberships. In the proposed method, the meridian distribution has been used for modelling the impulsive signals. Hence, the fuzzy meridians have been applied as the cluster prototypes. Moreover, a method for the estimation of the medianity parameter using the data samples and the assigned weights has been proposed. The proposed method is called the adaptive fuzzy c–meridians (aFCMer), where the word adaptive stands for automated estimation of the medianity parameter. The weighted meridian plays the same role as the weighted means in the fuzzy c–means (FCM) method. The paper is organized as follows. Section 2 contains a definition of the meridian as well as the extension of the meridian to fuzzy sets. The proposed aFCMer clustering algorithm is presented in Section 3. Section 4 shows results obtained in a numerical experiment. Conclusions complete the paper.
2
Fuzzy Meridian
The random variable formed as the ratio of two independent zero–mean Laplacian distributed random variables is referred to as the meridian distribution [11]. The form of the proposed distribution is given by:

f(x; δ) = (1/2) · δ / (δ + |x|)²    (1)
For the given set of N independent samples x1 , x2 , . . . , xN each obeying the meridian distribution with the common scale parameter δ, the sample meridian βˆ is given by:
β̂ = arg min_{β∈IR} Σ_{i=1}^{N} log[ δ + |xi − β| ] = meridian{ xi |_{i=1}^{N} ; δ }    (2)
where δ is the medianity parameter. The sample meridian β̂ is an ML estimate of location for the meridian distribution. The sample meridian can be generalized to the weighted meridian by assigning nonnegative weights to the input samples. The weights associated with the data points may be interpreted as membership degrees. Hence, in such an interpretation of the weights, the weighted meridian becomes the fuzzy meridian. So, the fuzzy meridian is given by:

β̂ = arg min_{β∈IR} Σ_{i=1}^{N} log[ δ + ui |xi − β| ] = meridian{ ui ∗ xi |_{i=1}^{N} ; δ }    (3)
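Since each term log[δ + ui|xi − β|] is concave in β on every interval between consecutive sample values, the cost in Eq. (3) attains its minimum at one of the samples, so the fuzzy meridian can be computed by an exhaustive search over the sample values. The sketch below follows this observation; the function name and the example values are illustrative.

```python
import numpy as np

def fuzzy_meridian(x, u, delta):
    """Fuzzy (weighted) meridian of samples x with weights u, Eq. (3)."""
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    costs = [np.sum(np.log(delta + u * np.abs(x - beta))) for beta in x]
    return x[int(np.argmin(costs))]

# Example: an outlier with a small weight barely influences the result.
# fuzzy_meridian([0.9, 1.1, 1.0, 25.0], [1.0, 1.0, 1.0, 0.1], delta=0.5) -> 1.0
```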
In the case of the meridian, the terms weighted and fuzzy will be used interchangeably in the rest of this paper.

Fuzzy Meridian Properties. The behavior of the fuzzy meridian is significantly dependent on the value of its medianity parameter δ. It can be shown that for large values of δ, the fuzzy meridian is equivalent to the fuzzy median [13]. For the given data set of N independent samples x1, x2, . . . , xN and assigned membership degrees u1, u2, . . . , uN the following equation holds true:

lim_{δ→∞} β̂ = lim_{δ→∞} meridian{ ui ∗ xi |_{i=1}^{N} ; δ } = median{ ui ∗ xi |_{i=1}^{N} }    (4)
This property is called the median property. The second case, when δ tends to zero, is called the weighted–mode (fuzzy–mode) meridian. In this case the weighted meridian β̂ is equal to one of the most repeated values in the sample set. Furthermore:

lim_{δ→0} β̂ = arg min_{xj ∈ M} [ (1/(uj)^r) ∏_{i=1, xi ≠ xj}^{N} ui |xi − xj| ]    (5)
where M is the set of the frequently repeated values and r is the number of occurrences of a member of M in the sample set. Proofs of the properties can be found in [11]. Estimation of the Medianity Parameter One of the results obtained from the clustering procedure is the partition matrix. The membership grades can be used during the determination of the influence of the input data samples on the estimation of the data distribution in the clusters. In the method of the density estimation proposed by Parzen [12], the influence
Fig. 1. a) An example of realization of random variable with Laplace distribution, b) an empirical distribution (dotted line) and the estimated meridian distribution (solid line), c) the shape of Ψδ function
of each sample on the estimated distribution is the same and is equal to 1/N, where N is the number of samples. The values of the estimated density function can be computed as follows:

f̂(x) = (1/N) Σ_{i=1}^{N} (1/h) K( (x − xi)/h )    (6)

where N is the number of samples, h is the smooth parameter, and K(·) is the kernel function. When the real, nonnegative weights ui (1 ≤ i ≤ N) are introduced, equation (6) is changed into the following form:

f̂w(x) = [ Σ_{i=1}^{N} (ui/h) K( (x − xi)/h ) ] / [ Σ_{i=1}^{N} ui ]    (7)

where f̂w(x) is the weighted estimate of the density function. A cost function can be built for the input data samples of the meridian distribution with the medianity parameter δ0:

Ψδ(x) = ‖ f̂w(x) − f(x; δ0) ‖_L    (8)

where f̂w(x) is the weighted estimate of the density function, f(x; δ0) is the meridian density function, and ‖·‖_L is the L-norm. The value of the medianity parameter can be computed using the following equation:

δ̂0 = arg min_{δ∈IR} Σ_{i=1}^{N} Ψδ(xi) = arg min_{δ∈IR} Σ_{i=1}^{N} ‖ f̂w(xi) − f(xi; δ0) ‖_L    (9)
Let the norm ‖·‖_L be the L1 norm; then the solution of (9) is the least median solution [5] and becomes:

δ̂0 = arg min_{δ∈IR} Σ_{i=1}^{N} | f̂w(xi) − f(xi; δ0) |    (10)
The least median solution is less sensitive to outliers than the least squares solution. Fig. 1 illustrates the proposed approach to the estimation of the medianity parameter value. In Fig. 1a a realization of a random variable with the Laplace distribution has been plotted. The empirical density function has been plotted in Fig. 1b with the dotted line, and the estimated meridian distribution with the solid line. An example of the shape of the Ψδ function has been plotted in Fig. 1c, where a diamond marks the minimum of the Ψδ function.

The Estimation of the Medianity Parameter

1. For the input data samples x1, x2, . . . , xN and the assigned weights u1, u2, . . . , uN, fix the initial value δ, the kernel function K(·), the smooth parameter h, the threshold ε and the iteration counter l = 1.
2. Compute the weighted meridian minimizing (3).
3. Calculate the medianity parameter δ minimizing (10).
4. If |δ^(l) − δ^(l−1)| < ε then STOP, otherwise set l = l + 1 and go to step 2.
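To illustrate the estimation procedure, the sketch below performs a single pass of it: a weighted Parzen estimate with a Laplace kernel (the kernel used later in the experiments) is compared against the meridian density centred at the current fuzzy meridian, and the medianity parameter is chosen by a grid search minimising criterion (10). The grid, the initial δ and the single-pass simplification are assumptions; the full algorithm iterates steps 2–4 until convergence. The fuzzy_meridian function from the sketch after Eq. (3) is reused.

```python
import numpy as np

def meridian_pdf(x, beta, delta):
    """Meridian density of Eq. (1), shifted to location beta."""
    return 0.5 * delta / (delta + np.abs(x - beta)) ** 2

def weighted_parzen(x, samples, u, h):
    """Weighted Parzen estimate of Eq. (7) with a Laplace kernel."""
    k = 0.5 * np.exp(-np.abs((x[:, None] - samples[None, :]) / h))
    return (k * (u / h)).sum(axis=1) / np.sum(u)

def estimate_medianity(samples, u, h=0.1, deltas=None):
    """Grid search for the medianity parameter minimising criterion (10)."""
    samples = np.asarray(samples, dtype=float)
    u = np.asarray(u, dtype=float)
    deltas = np.logspace(-3, 2, 200) if deltas is None else deltas
    beta = fuzzy_meridian(samples, u, delta=1.0)       # initial location estimate
    fw = weighted_parzen(samples, samples, u, h)
    errors = [np.sum(np.abs(fw - meridian_pdf(samples, beta, d))) for d in deltas]
    return deltas[int(np.argmin(errors))]
```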
3
Fuzzy c–Meridian Clustering Method
Let us consider a clustering category in which partitions of the data set are built on the basis of some performance index, also known as an objective function [3]. The minimization of a certain objective function can be considered as an optimisation approach leading to a suboptimal configuration of the clusters. The main design challenge lies in formulating an objective function that is capable of reflecting the nature of the problem so that its minimization reveals a meaningful structure in the data set. The proposed method is an objective functional based on fuzzy c–partitions of the finite data set [2]. The proposed objective function can be seen as an extension of the classical functional of the within–group sum of absolute errors. The objective function of the proposed method can be described in the following way:

Jm(U, V) = Σ_{i=1}^{c} Σ_{k=1}^{N} Σ_{l=1}^{p} log[ δ + uik^m |xk(l) − vi(l)| ]    (11)
where c is the number of clusters, N is the number of the data, and p is the number of features. The δ is the medianity parameter, uik ∈ U is the fuzzy
partition matrix, xk(l) represents the l-th feature of the k-th input data from the data set, 1 ≤ l ≤ p, and m is the fuzzifying exponent called the fuzzifier. A constant value of the δ parameter means that the behavior (i.e., the influence of the outliers) of the fuzzy meridian is exactly the same for each feature and each cluster. When a medianity parameter for each class and for each feature is introduced, the cost function of the proposed clustering method becomes:

Jm(U, V) = Σ_{i=1}^{c} Σ_{k=1}^{N} Σ_{l=1}^{p} log[ δil + uik^m |xk(l) − vi(l)| ]    (12)
The different values of the medianity parameter improve the accuracy of the estimation of the cluster prototypes. The optimization of the objective function Jm is completed with respect to the partition matrix and the prototypes of the clusters. The first step is a constraint–based optimization, which involves Lagrange multipliers to accommodate the constraints on the membership grades [2]. The columns of the partition matrix U are independent, so the minimization of the objective function (11) can be described as:

Jm(U, V) = Σ_{k=1}^{N} Σ_{i=1}^{c} Σ_{l=1}^{p} log[ δil + uik^m |xk(l) − vi(l)| ] = Σ_{k=1}^{N} Jk    (13)
The minimization of (13) can be reduced to the minimization of independent components Jk, 1 ≤ k ≤ N. When a linear transformation L(·) is applied to the expression δil + uik^m |xk(l) − vi(l)|, its variability range is changed to (0, 2], i.e.:

0 < L( δil + uik^m |xk(l) − vi(l)| ) ≤ 2    (14)
By means of the above equation, and representing the logarithm function by its power series, the minimization of the objective function can be reduced to the following expression:

Jk = Σ_{i=1}^{c} Σ_{l=1}^{p} [ γil + uik^m dik(l) − 1 ]    (15)

where γil + uik^m dik(l) = L( δil + uik^m |xk(l) − vi(l)| ) and dik(l) = |xk(l) − vi(l)|.
When the Lagrange multiplier optimization method is applied to equation (15), we obtain:

Jk(λ, uk) = Σ_{i=1}^{c} Σ_{l=1}^{p} [ γil + uik^m dik(l) − 1 ] − λ ( Σ_{i=1}^{c} uik − 1 )    (16)
where λ is the Lagrange multiplier, uk is the k-th column of the partition matrix, and the term λ(Σ_{i=1}^{c} uik − 1) comes from the definition of the partition matrix U [3]. When the gradient of (16) is equal to zero then, for the sets defined for all 1 ≤ k ≤ N as

Ik = { i | 1 ≤ i ≤ c; ‖xk − vi‖1 = 0 },    Ĩk = {1, 2, · · · , c} − Ik,

the values of the partition matrix are described, for all 1 ≤ i ≤ c and 1 ≤ k ≤ N, by

uik = [ Σ_{j=1}^{c} ( ‖xk − vi‖1 / ‖xk − vj‖1 )^{1/(m−1)} ]^{−1}   if Ik = ∅,
uik = 0   if Ik ≠ ∅ and i ∈ Ĩk,
uik = 1   if Ik ≠ ∅ and i ∈ Ik,    (17)

where ‖·‖1 is the L1 norm, and vi, 1 ≤ i ≤ c, are the cluster prototypes. For the fixed number of clusters c and the partition matrix U, as well as for the exponent m, the prototype values minimizing (11) are the fuzzy meridians described as follows:

vi(l) = arg min_{β∈IR} Σ_{k=1}^{N} log[ δil + uik^m |xk(l) − β| ],    (18)
where i is the cluster number, 1 ≤ i ≤ c, and l is the component (feature) number, 1 ≤ l ≤ p.

Clustering Data with the Adaptive Fuzzy c–Meridian Method. The proposed clustering method has the same algorithmic structure as the fuzzy c–means method. The aFCMer method proceeds as follows:

1. For the given data set X = {x1, · · · , xN}, where xi ∈ IR^p, fix the number of clusters c ∈ {2, · · · , N}, the fuzzifying exponent m ∈ [1, ∞) and assume the tolerance limit ε. Randomly initialize the partition matrix U, fix the value of the medianity parameter δ, and set l = 0.
2. Calculate the prototype values V as the fuzzy meridians. The fuzzy meridian has to be calculated for each feature of vi based on (18) and the method of medianity parameter estimation (9).
3. Update the partition matrix U using (17).
4. If ‖U^(l+1) − U^(l)‖ < ε then STOP the clustering algorithm; otherwise set l = l + 1 and go to step 2.
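One iteration of the aFCMer scheme (steps 2 and 3 above) can be sketched as follows, using the membership update (17) with the L1 norm and the per-feature fuzzy meridian (18). The medianity parameters are assumed to be given as a c × p array; their adaptive estimation via Eq. (9) is left out for brevity, and the tie handling of (17) is replaced by a small epsilon guard.

```python
import numpy as np

def afcmer_iteration(X, V, delta, m=2.0, eps=1e-9):
    """One aFCMer iteration: X is (N, p) data, V is (c, p) prototypes,
    delta is the (c, p) array of medianity parameters."""
    N, p = X.shape
    c = V.shape[0]
    # membership update, Eq. (17), with the L1 distance
    D = np.array([[np.sum(np.abs(X[k] - V[i])) for i in range(c)] for k in range(N)])
    D = np.maximum(D, eps)
    U = 1.0 / np.sum((D[:, :, None] / D[:, None, :]) ** (1.0 / (m - 1.0)), axis=2)
    # prototype update, Eq. (18): per-cluster, per-feature fuzzy meridian
    V_new = np.empty_like(V, dtype=float)
    for i in range(c):
        w = U[:, i] ** m
        for l in range(p):
            col = X[:, l]
            costs = [np.sum(np.log(delta[i, l] + w * np.abs(col - b))) for b in col]
            V_new[i, l] = col[int(np.argmin(costs))]
    return U, V_new
```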
4
Numerical Experiments
In the numerical experiment the value of m = 2 and the tolerance limit ε = 10−5 have been chosen. The Laplace kernel function has been used as the kernel. The
smooth parameter has been fixed to h = 0.1. For a computed set of prototype vectors v the clustering accuracy has been measured as the Frobenius norm distance ‖μ − v‖F between the true centers μ and the prototype vectors. The matrix A is created as A = μ − v, where the Frobenius norm ‖A‖F is:

‖A‖F = ( Σ_{j,k} A_{j,k}² )^{1/2}
The familiar fuzzy c–means method (FCM) proposed by Bezdek has been used as the reference clustering method.

Heavy–Tailed Groups of Data. The example involves three heavy–tailed and overlapped groups of data. The whole data have been generated by a pseudo–random generator. The first group

Table 1. Prototypes of the clusters for the data with heavy–tails

cluster   aFCMer v               aFCMer δ             FCM
1st       [−2.4745  3.6653]T     [0.0620  2.000]      [−2.2145  4.0286]T
2nd       [1.5555  −0.0640]T     [2.0000  31.2507]    [15.6536  112.5592]T
3rd       [5.0967  −2.6998]T     [2.000  0.1441]      [4.9634  −2.7977]T
Fnorm     2.1403                                      113.5592
Second feature
4 2 0 −2 −4 −6 −8 −10 −10
−8
−6
−4
−2
0 2 First feature
4
6
8
10
Fig. 2. The performance of the proposed method for the heavy–tailed and overlapped groups of data
has been generated with the Laplace distribution, the second with the Cauchy distribution and the third with the Student’s–t distribution. The true group centers are: [−3, 5]T , [0, 0]T , and [5, −3]T . The obtained prototypes for the proposed method and the reference method are shown in table 1. The bottommost row shows the Frobenius norm among the true centers and the obtained centers. The performance of proposed method has been plotted in Fig 2. The solid lines illustrate the trajectories of group prototypes. Fig. 2 illustrates the performance of the aFCMer method applied to the two– dimensional overlapped clusters. The prototype trajectories lead to the proper cluster centres marked as the black squares. The interesting case takes place for the Cauchy cluster. The existing outliers that cause the convergence problem for the reference method, are better handled by the aFCMer method. Hence, the Cauchy cluster prototype is more precisely computed by the proposed method.
5
Conclusions
In many cases, the real data are corrupted by noise and outliers. Hence, the clustering methods should be robust to noise and outliers. In this paper the adaptive fuzzy c–meridian method has been presented. The fuzzy meridian has been used as the cluster prototype. The value of the fuzzy meridian depends on the data samples and the assigned membership grades. Moreover, the value of the fuzzy meridian depends on the medianity parameter. So, in this paper a method for the estimation of the medianity parameter using the data samples and the assigned membership grades has been proposed. The word adaptive stands for automated estimation of the medianity parameter. For a simple data set, the results obtained from the proposed method and the reference method are similar, but the results are quite different for corrupted data. The data set that includes samples of different distributions has been partitioned correctly according to our expectations. The updated membership formula of the proposed method is very similar to the formula in the FCM method. Hence, existing modifications of the FCM method (e.g. clustering with partial supervision or conditional clustering) can be directly applied to the adaptive fuzzy c–meridian method. Our current work addresses the problem of the performance of the adaptive fuzzy meridian estimation for large data sets.
Acknowledgment This work was partially supported by the Ministry of Science and Higher Education resources in 2008–2010 under Research Project N518 014 32/0980.
References 1. Kaufman, L., Rousseeuw, P.: Finding Groups in Data. Wiley–Interscience, Chichester (1990) 2. Bezdek, J.C.: Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum, New York (1981)
3. Pedrycz, W.: Konwledge–Based Clustering. Wiley–Interscience, Chichester (2005) 4. Frigui, H., Krishnapuram, R.: A Robust Competitive Clustering Algorithm With Applications in Computer Vision. IEEE Trans. Pattern Analysis and Machine Intelligence 21, 450–465 (1999) 5. Huber, P.: Robust statistics. Wiley, New York (1981) 6. Dave, R.N., Krishnapuram, R.: Robust Clustering Methods: A Unified View. IEEE Trans. on Fuzzy System 5, 270–293 (1997) 7. L eski, J.: An ε–Insensitive Approach to Fuzzy Clustering. Int. J. Appl. Math. Com put. Sci. 11, 993–1007 (2001) 8. Chatzis, S., Varvarigou, T.: Robust Fuzzy Clustering Using Mixtures of Student’s–t Distributions. Pattern Recognition Letters 29, 1901–1905 (2008) 9. Arce, G.R., Kalluri, S.: Fast Algorithm For Weighted Myriad Computation by Fixed Point Search. IEEE Trans. on Signal Proc. 48, 159–171 (2000) 10. Arce, G.R., Kalluri, S.: Robust Frequency–Selective Filtering Using Weighted Myriad Filters admitting Real–Valued Weights. IEEE Trans. on Signal Proc. 49, 2721– 2733 (2001) 11. Aysal, T.C., Barner, K.E.: Meridian Filtering for Robust Signal Processing. IEEE Trans. on Signal Proc. 55, 3949–3962 (2007) 12. Parzen, E.: On Estimation of A Probability Density Function and Mode. Ann. Math. Stat. 33, 1065–1076 (1962) 13. Kersten, P.R.: Fuzzy Order Statistics and Their Application to Fuzzy Clustering. IEEE Trans. on Fuzzy Sys. 7, 708–712 (1999)
An Ambient Agent Model Incorporating an Adaptive Model for Environmental Dynamics Jan Treur1 and Muhammad Umair1,2 1
Vrije Universiteit Amsterdam, Department of Artificial Intelligence De Boelelaan 1081, 1081 HV Amsterdam, The Netherlands {treur,mumair}@few.vu.nl 2 COMSATS Institute of Information Technology, Lahore, Pakistan
[email protected] http://www.few.vu.nl/~{treur,mumair}
Abstract. The environments in which ambient agents are used often may be described by dynamical models, for example in the form of a set of differential equations. In this paper an ambient agent model is proposed that can perform model-based reasoning about the environment, based on a numerical (dynamical system) model of the environment. Moreover, it does so in an adaptive manner by adjusting the parameter values in the environment model that represent believed environmental characteristics, thus adapting these beliefs to the real characteristics of the environment. Keywords: Ambient Agent, Adaptive Model, Environmental dynamics.
1
Introduction
Ambient agents are often used in environments that have a highly dynamic nature. In many applications of agent systems, varying from robot contexts to virtual world contexts, some form of world model plays an important role; e.g., [7], [8], [9], [12]. Usually in such applications a world model represents a state of the world that is built up by using different types of inputs and is updated with some frequency. Examples of challenges addressed are, for example, (i) organize data collection and signal processing from different types of sensor, (ii) produce local and global world models using the multi-sensor information about the environment, and (iii) integrate the information from the different sensors into a continuously updated model (cf. [8], [9]). For dynamic environments another challenge for such an agent, which goes beyond gathering and integrating information from different sources, is to be able to reason about the environmental dynamics in order to predict future states of the environment and to (avoid or) achieve the occurrence of certain (un)desired states in the future. One way to address this challenge, is to equip the ambient agent with a model for the environmental dynamics. Examples of environmental dynamics that can be described by such models can be found not only in the natural physical and biological reality surrounding an ambient agent, but also in many relevant (local and global) processes in human-related autonomic environments in the real and/or virtual world such as epidemics, gossiping, crowd movements, social network dynamics, traffic flows.
When the environmental dynamics has a continuous character, such a model often has the form of a numerical dynamical system. A dynamical system model usually involves two different types of concepts: • state variables representing state aspects of the environment • parameters representing characteristics of the environment When values for the parameters are given, and initial values for the state variables, the model is used to determine values for the state variables at future points in time. In this way, from beliefs about environmental characteristics represented as parameter values, and beliefs about the initial state of the environment represented as values for the state variables, the agent derives predictions on future states of the environment. A particular problem here is that values for parameters often are not completely known initially: the beliefs the agent maintains about the environmental characteristics may (initially) not correspond well to the real characteristics. In fact the agent needs to be able to perform parameter estimation or tuning on the fly. The ambient agent model proposed here maintains in an adaptive manner a dynamic model of the environment using results from mathematical analyses from the literature such as [13]. On the one hand, based on this model predictions about future states of the environment can be made, and actions can be generated that fulfill desires of the agent concerning such future states. On the other hand, the agent can adapt its beliefs about the environmental characteristics (represented by the model parameters) to the real characteristics. This may take place whenever observations on environmental state variables can be obtained and compared to predicted values for them. The model is illustrated for an environmental model involving the environment’s ground water level and its effect (via the moisture of the soil) on two interacting populations of species. The paper is organised as follows. In Section 2 the example model for the environment is briefly introduced. The overall model-based agent model is described in Section 3. Section 4 presents the method by which the agent adapts to the environmental characteristics. In Section 5 some simulation results are discussed. Finally, Section 6 is a discussion.
2
An Example Environmental Model
The example environment for the ambient agent considered involves physical (abiotic) and biological (biotic) elements. Within this environment certain factors can be manipulated, for example, the (ground) water level. As desired states may also involve different species in the environment, the dynamics of interactions between species and abiotic factors and between different species are to be taken into account by the agent. The example model for the environmental dynamics, used for the purpose of illustration deals with two species s1 and s2; both depend on the abiotic factor moisture of the soil; see Figure 1 for a causal diagram. A differential equation form consisting of 4 first-order differential equations for the example environment model is as follows:
ds1(t)/dt = β · s1(t) · [ c(t) − a1 · s1(t) − a2 · s2(t) ]
ds2(t)/dt = γ · s2(t) · [ c(t) − b1 · s1(t) − b2 · s2(t) ]
dc(t)/dt  = Θ · [ η · m(t) − c(t) ]
dm(t)/dt  = ω · [ λ · w − m(t) ]
Fig. 1. Causal relations for the example environmental model
Here s1(t) and s2(t) are the densities of species s1 and s2 at time point t; moreover, c(t) denotes the carrying capacity for s1 and s2 at t, which depends on the moisture m(t). The moisture depends on the water level, indicated by w. This w is considered a parameter that can be controlled by the agent, and is kept constant over longer time periods. Moreover the parameters β,γ are growth rates for species s1, s2. For carrying capacity and moisture respectively, η and λ are proportion parameters, and Θ and ω are speed factors. The parameters a1, a2 and b1, b2 are the proportional contribution in the environment for species s1 and s2 respectively.
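Under the reconstruction of the four differential equations given above, a forward (Euler) simulation of the environment model can be sketched as follows; the parameter names mirror the symbols in the text, while the step size and any concrete parameter values are illustrative.

```python
def simulate_environment(s1, s2, c, m, w,
                         beta, gamma, a1, a2, b1, b2, theta, eta, omega, lam,
                         dt=0.01, years=10.0):
    """Euler-forward simulation of the example environmental model:
    species densities s1, s2, carrying capacity c, moisture m, water level w."""
    for _ in range(int(years / dt)):
        ds1 = beta * s1 * (c - a1 * s1 - a2 * s2)
        ds2 = gamma * s2 * (c - b1 * s1 - b2 * s2)
        dc = theta * (eta * m - c)
        dm = omega * (lam * w - m)
        s1, s2, c, m = s1 + ds1 * dt, s2 + ds2 * dt, c + dc * dt, m + dm * dt
    return s1, s2, c, m
```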
3 Using a Model for Environmental Dynamics in an Agent Model

As a point of departure the ambient agent model described in [2] was taken, which focuses on the class of Ambient Intelligence applications where the ambient software has context awareness about environmental aspects including human behaviours and states, (re)acts on these accordingly, and is formally specified within the dynamical modelling language LEADSTO; cf. [3]. In this language, direct temporal dependencies between two state properties in successive states are modeled by executable dynamic properties. The LEADSTO format is defined as follows. Let α and β be state properties of the form 'conjunction of ground atoms or negations of ground atoms'. In the LEADSTO language the notation α →→e,f,g,h β means: if state property α holds for a certain time interval with duration g, then after some delay (between e and f) state property β will hold for a certain time interval of length h.
Here, atomic state properties can have a qualitative, logical format, such as an expression desire(d), expressing that desire d occurs, or a quantitative, numerical format such
as an expression has_value(x, v) which expresses that variable x has value v. Within this agent model specification a dynamical domain model is represented in the format belief(leads_to_after(I:INFO_EL, J:INFO_EL, D:REAL))
which represents that the agent believes that state property I leads to state property J with a certain time delay specified by D. Some of the temporal relations for the functionality within the ambient agent model are as follows.

observed_from(I, W) & belief(is_reliable_for(W, I)) →→ belief(I)
communicated_from_to(I, Y, X) & belief(is_reliable_for(X, I)) →→ belief(I)
belief(at(I, T)) & belief(leads_to_after(I, J, D)) →→ belief(at(J, T+D))
belief(I1) & … & belief(In) →→ belief(and(I1, .., In))
A differential equation such as

ds1(t)/dt = β · s1(t) · [ c(t) − a1 · s1(t) − a2 · s2(t) ]
can be represented in this leads-to-after format as follows: belief(leads_to_after( and(at(has_value(s1, V1), t), at(has_value(s2, V2), t), at(has_value(c, V3), t)), at(has_value(s1, V1+ W*V1*[V3 – W1*V1 – W2*V2]*D), t), D))
Here W, W1, W2 are the believed parameter values for respectively parameters β, a1, a2 representing certain environmental characteristics. As pointed out above, the agent has capabilities for two different aims: (1) to predict future states of the environment and generate actions to achieve desired future states, and (2) to adapt its environment model to the environmental characteristics. The latter will be discussed in more detail in the next section. For the former, a central role is played by the sensitivities of the values of the variables at future time points for certain factors in the environment that can be manipulated. More specifically, for the example, this concerns the sensitivities of s1 and s2 for the water level w, denoted by ∂s1/∂w and ∂s2/∂w. These sensitivities cannot be determined directly, but differential equations for them can be found by differentiating the original differential equations with respect to w:

d(∂s1/∂w)/dt = β · [ ∂s1/∂w · (c − a1·s1 − a2·s2) + s1 · ( ∂c/∂w − a1·∂s1/∂w − a2·∂s2/∂w ) ]
d(∂s2/∂w)/dt = γ · [ ∂s2/∂w · (c − b1·s1 − b2·s2) + s2 · ( ∂c/∂w − b1·∂s1/∂w − b2·∂s2/∂w ) ]
d(∂c/∂w)/dt  = Θ · [ η · ∂m/∂w − ∂c/∂w ]
d(∂m/∂w)/dt  = ω · [ λ − ∂m/∂w ]
These equations describe over time how the values of species s1, s2, moisture m and carrying capacity c at time point t are sensitive to the change in the value of the water level parameter w.
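The sensitivity equations can be integrated alongside the state equations; the sketch below does so with the Euler scheme and shows how the resulting ∂s1/∂w is used to obtain the required change Δw = (desired − predicted)/(∂s1/∂w) discussed below. It relies on the reconstructed equations above; the parameter ordering and step size are illustrative.

```python
def simulate_with_w_sensitivity(s1, s2, c, m, w,
                                beta, gamma, a1, a2, b1, b2, theta, eta, omega, lam,
                                dt=0.01, years=10.0):
    """Euler simulation of the model together with the sensitivities of
    s1, s2, c, m with respect to the water level w."""
    S1, S2, C, M = 0.0, 0.0, 0.0, 0.0        # sensitivities start at zero
    for _ in range(int(years / dt)):
        g1 = c - a1 * s1 - a2 * s2
        g2 = c - b1 * s1 - b2 * s2
        dS1 = beta * (S1 * g1 + s1 * (C - a1 * S1 - a2 * S2))
        dS2 = gamma * (S2 * g2 + s2 * (C - b1 * S1 - b2 * S2))
        dC = theta * (eta * M - C)
        dM = omega * (lam - M)
        ds1, ds2 = beta * s1 * g1, gamma * s2 * g2
        dc, dm = theta * (eta * m - c), omega * (lam * w - m)
        S1, S2, C, M = S1 + dS1 * dt, S2 + dS2 * dt, C + dC * dt, M + dM * dt
        s1, s2, c, m = s1 + ds1 * dt, s2 + ds2 * dt, c + dc * dt, m + dm * dt
    return (s1, s2, c, m), (S1, S2, C, M)

# Water-level adjustment towards a desired density s1_des at the horizon:
# (s1_pred, _, _, _), (S1, _, _, _) = simulate_with_w_sensitivity(...)
# dw = (s1_des - s1_pred) / S1
```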
Fig. 2. Predicted densities over 10 years (w=200, m(0)=110, c(0)=88, β=0.01, ω=0.4)
Fig. 3. Densities over 10 years after incorporating Δw = 40 (w=240, m(0)=110, c(0)=88, β=0.01, ω=0.4)
Figure 2, where the vertical axis represents the densities of species s1 and s2 and the horizontal axis represents the number of years, shows the trend in the change of densities of the species over 10 years, given the initial value of the water level w. Using the following formula, the agent can determine the change Δw in circumstance w needed to achieve the goal at some specific time point t in the future:
Δw = ( s1(w+Δw) − s1(w) ) / (∂s1/∂w)
where s1(w+Δw) is the desired density at time t, s1(w) the predicted density s1 at time t for water level w, and (∂s1/∂w) the change in density of s1 at time t against the change in w. Within the agent model this is specified in a generic manner as (where X can be taken s1 and P can be taken w): desire(at(has_value(X, VD), T)) & belief(at(has_value(X, V), T)) & belief(has_sensitivity_for_at(X, P, S, T)) & belief(has_value(P, W)) → → intention(has_value(P, W+(VD-V)/S))
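As a small illustration of the rule above (all names hypothetical), the intended new parameter value follows directly from the desired value, the predicted value and the believed sensitivity; the numbers in the comment are chosen to be consistent with the example values reported in the text (w = 200, predicted 49, desired 55, leading to w = 240).

def intended_parameter_value(W, VD, V, S):
    # intention(has_value(P, W + (VD - V)/S))
    return W + (VD - V) / S

# Example (assumed sensitivity ds1/dw = 0.15, consistent with the reported numbers):
# intended_parameter_value(200, 55, 49, 0.15) -> 240.0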
Fig. 2 depicts a situation where the densities of species s1 and s2 are predicted to decrease, given w = 200.
Fig. 4. The Ambient Agent’s Adaptation Process of the Environment Model
The initial values for species s1 and s2 could be taken randomly, but for this example scenario species s1 has density 55 and s2 has density 40. Under these settings the density of species s1 will be 49 after 10 years. If the agent wants it to become 55 after 10 years, then according to the model described above it has to change w to 240 (see Fig. 3).
4 Adaptation within the Ambient Agent Model

This section describes the method by which the ambient agent adapts its beliefs concerning parameters representing environmental characteristics to the real characteristics. The agent initially receives rough estimations of the values for these parameters and maintains them as initial beliefs. With these initial beliefs the agent predicts the environmental state after, say, X years. When at a certain time point the real value of some state variable is observed, as a next step the predicted value and observed value of that state variable at time X are passed to the adaptation process of the agent. The agent then tries to minimize the difference between the predicted and observed value and adjusts its beliefs on the environmental characteristics (i.e., the parameter values which were initially assumed). This process of adaptation continues until the difference is negligible, i.e., until the agent has an accurate set of beliefs about the environmental characteristics. Within this adaptation process, sensitivities of state variables for changes in parameter values for environmental characteristics play an important role. For example, differential equations for the sensitivities of the values of the variables with respect to the parameter a1 are obtained by differentiating the original differential equations with respect to a1:
d(∂s1/∂a1)/dt = β [ ∂s1/∂a1 · (c − a1·s1 − a2·s2) + s1 · (∂c/∂a1 − s1 − a1·∂s1/∂a1 − a2·∂s2/∂a1) ]
d(∂s2/∂a1)/dt = γ [ ∂s2/∂a1 · (c − b1·s1 − b2·s2) + s2 · (∂c/∂a1 − b1·∂s1/∂a1 − b2·∂s2/∂a1) ]
d(∂c/∂a1)/dt = Θ ( η·∂m/∂a1 − ∂c/∂a1 )
d(∂m/∂a1)/dt = −ω·∂m/∂a1
These equations describe how the values of species s1, s2, moisture m and carrying capacity c at time point t are sensitive to the change in the value of the proportional contributing parameter a1.
In the manner described in Section 3, using the differential equations given above, the agent can derive its beliefs on sensitivities S, represented as belief(has_sensitivity_for_at(X, P, S, T))
for each variable X with respect to each parameter P. Once these beliefs on sensitivities are available, the agent can use them to adapt certain parameter values. Here a control choice can be made. A first element of this choice is that a focus set of parameters can be chosen which are considered for adaptation (the values of parameters not in this focus set are considered fixed). A second control element is whether the value of only one parameter at a time is changed, or the values of all parameters in the focus set. In Section 5, example simulations are discussed in which only one of the parameters is adapted at a time, and also simulations in which more than one are adapted. A third choice element concerns the adaptation speed: for example, will the agent attempt to get the right value in one step, or will it adjust only halfway; the latter choice may give more stable results. In Section 5, example simulations are discussed in which the agent attempts to get the right value in four steps. Specification of the adaptation model is based on relationships such as:
predicted_for(at(has_value(X, V1), P, W, T)) & observed(at(has_value(X, V2), T)) & belief(has_sensitivity_for_at(X, P, S, T)) & belief(has_value(P, W)) & belief(adaptation_speed(AS))
→→ belief(adjustment_option(has_value(P, W+AS*(V2-V1)/S)))
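The following is a minimal sketch (an assumption-laden outline in Python, not the authors' specification) of how such adjustment options could be applied repeatedly to the parameters of the focus set with an adaptation speed AS; `predict` and `sensitivity` stand for the model-based prediction and sensitivity derivation of Section 3.

def adapt_parameters(params, focus_set, observed, predict, sensitivity,
                     adaptation_speed=1.0, iterations=4):
    # params: dict mapping parameter name -> believed value
    for _ in range(iterations):
        predicted = predict(params)            # predicted value of the observed variable
        for p in focus_set:
            s = sensitivity(params, p)         # sensitivity of the observed variable to p
            # adjustment_option(has_value(P, W + AS*(V2 - V1)/S))
            params[p] += adaptation_speed * (observed - predicted) / s
    return params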
5 Simulation Results
To test the behaviour of the model in adapting the agent's beliefs on the environmental characteristics (represented by the parameters) to the real characteristics, it has been used to perform a number of simulation runs, using standard numerical simulation software, resulting in a variety of interesting patterns. The focus set of parameters for adaptation includes a1, a2, b1, b2, β, and γ. For the real environment characteristics these parameters were given the values a1=1, a2=1, b1=1, b2=1, β=0.01, γ=0.02. Figure 5(a, b) shows the trend in the change of densities of the species during the adaptation process of a1, given the initial values of the other parameters. Using the following formula, the agent can determine the change Δa1 in a1 needed to minimize the difference between the predicted and observed densities:
Δa1 = ( s1(a1+Δa1) − s1(a1) ) / (∂s1/∂a1)
where s1(a1+Δa1) is the observed density after X years, s1(a1) is the predicted density of s1 after X years for a1, and (∂s1/∂a1) is the change in density of s1 after X years against the change in a1. In Figure 5(a) the vertical axis represents the predicted density of s1, and in Figure 5(b) the vertical axis represents the value of a1 adapted by the agent, whereas in both Figure 5(a) and (b) the horizontal axis represents the number of iterations performed. For this simulation the initial value assumed by the agent for a1 is 0.5 and the given observed density of s1 is 52.78. The first iteration in the adaptation process reflects that the predicted
density of s1(a1) deviates a lot from the observed density s1(a1+Δa1) (see Figure 5(a)), which means that the value of a1 adapted by the agent is not yet correct (see Figure 5(b)) and a1 may need to be modified by a certain amount (Δa1) to attain the observed density. In the next iterations this deviation becomes smaller and only a small modification of the value of a1 is needed. Finally, the agent fully adapts the value of a1 in the fourth iteration to attain the observed density of s1. The agent can utilize this value to achieve the target, as depicted in Section 3. The details are described in [6].
Fig. 5. (a): Trend in change of densities of species s1 during the adaptation process of a1, (b): Adaptation process of a1
As shown in Figure 5, the agent attains the observed density of s1 for the initially assumed value 0.5 for a1, but the proposed model is generic enough to achieve the observed density for any initially assumed value of a1. For this purpose, Figure 6(a) and (b) show the adaptation process of a1 for different initial values assumed by the agent.
Fig. 6. Adaptation of a1 (a) with initial value 0.4, (b) with initial value 1.5
Similarly, the simulations for the adaptation process of the other parameters of the focus set, i.e., a2, b1, b2, β, and γ, are also carried out. The agent determines the changes Δa2, Δb1, Δb2, Δβ, and Δγ in a2, b1, b2, β, and γ, respectively, as follows:
Δa2 = ( s1(a2+Δa2) − s1(a2) ) / (∂s1/∂a2)
Δb1 = ( s1(b1+Δb1) − s1(b1) ) / (∂s1/∂b1)
Δb2 = ( s1(b2+Δb2) − s1(b2) ) / (∂s1/∂b2)
Δβ = ( s1(β+Δβ) − s1(β) ) / (∂s1/∂β)
Δγ = ( s1(γ+Δγ) − s1(γ) ) / (∂s1/∂γ)
Due to space limitations, the simulation results for a2, b1, b2, β, and γ are not included separately in the paper. Figure 7 shows the adaptation process of all parameters in the focus set. For this simulation the initial values assumed by the agent for the parameters a1, a2, b1, b2, β, and γ are 0.5, 0.4, 0.3, 0.2, 0.0005, and 0.07 respectively. The given observed density of s1 is 52.78.
Fig. 7. Adaptation of all parameters in the focus set
6 Discussion

The results in the previous section show that the target which was set initially for the adaptation of the parameters of the focus set is achieved. The graphs presented in that section show the adaptation process of the different parameters. Moreover, the results show that the method used is precise and accurate. In this paper an ambient agent model is proposed that maintains a model of the environmental dynamics, based on a numerical dynamical system. Moreover, it does so in an adaptive manner by adjusting the parameter values in the environment model that represent believed environmental characteristics, thus adapting these beliefs to the real characteristics of the environment. The described ambient agent model can be used for any agent with an environment (local and/or global) that can be described in a continuous manner by a dynamical system (based on a set of first-order differential equations).
In [6], as a special case, a decision support system model was presented that takes into account an ecological model of the temporal dynamics of environmental species and inter-species interaction; see also [4], [5], [10], [11]. Approaches to environmental dynamics such as the one described in [6] are rigid in the sense that the parameters of the environmental characteristics are assumed to be known initially and fixed, which in reality is not possible because the internal dynamics of the species is a characteristic which would not generally be known. The current paper extends and generalises this idea by making the parameters representing environmental characteristics adaptive (see Figures 9 to 12). For future research, one of the plans is to validate the model using empirical data within an example domain. Moreover, other approaches to sensitivity analysis will be used to compare the convergence and speed of the adaptation process. Domains for which the presented ambient agent model may be relevant do not only concern natural physical and biological domains but also human-related autonomic environments, for example in logistic, economic, social and medical domains. Such applications of the approach may involve both local information and global information about the environment. An example of the former is monitoring a human's gaze over time and using a dynamical model to estimate the person's attention distribution over time, as described in [1]. Examples of the latter may concern monitoring and analysis of (statistical) population information about (real or virtual) epidemics, gossiping, or traffic flows. This makes it possible to combine in an ambient agent both local (e.g., using information on individual agents) and global (e.g., using information on groups of agents) perspectives on the environment.
References 1. Bosse, T., van Maanen, P.-P., Treur, J.: Simulation and Formal Analysis of Visual Attention. Web Intelligence and Agent Systems Journal 7, 89–105 (2009); Shorter version. In: Paletta, L., Rome, E. (eds.) WAPCV 2007. LNCS (LNAI), vol. 4840, pp. 463–480. Springer, Heidelberg (2007) 2. Bosse, T., Hoogendoorn, M., Klein, M., Treur, J.: An Agent-Based Generic Model for Human-Like Ambience. In: Mühlhäuser, M., Ferscha, A., Aitenbichler, E. (eds.) Proceedings of the First International Workshop on Model Driven Software Engineering for Ambient Intelligence Applications, Constructing Ambient Intelligence. Communications in Computer and Information Science (CCIS), vol. 11, pp. 93–103. Springer, Heidelberg (2008); Extended version in: Mangina, E., Carbo, J., Molina, J.M. (eds.) Agent-Based Ubiquitous Computing. Ambient and Pervasive Intelligence book series, pp. 41–71. Atlantis Press (2008) 3. Bosse, T., Jonker, C.M., van der Meij, L., Treur, J.: A Language and Environment for Analysis of Dynamics by Simulation. International Journal of Artificial Intelligence Tools 16, 435–464 (2007) 4. Fall, A., Fall, J.: A Domain-Specific Language for Models of Landscape Dynamics. Ecological Modelling 141, 1–18 (2001); Earlier version: Fall, J., Fall, A.: SELES: A spatially Explicit Landscape Event Simulator. In: Proc. of the Third International Conference on Integrating GIS and Environmental Modeling, Sante Fe (1996)
5. Gärtner, S., Reynolds, K.M., Hessburg, P.F., Hummel, S., Twery, M.: Decision support for evaluating landscape departure and prioritizing forest management activities in a changing environment. Forest Ecology and Management 256, 1666–1676 (2008) 6. Hoogendoorn, M., Treur, J., Umair, M.: An Ecological Model-Based Reasoning Model to Support Nature Park Managers. In: Chien, B.-C., Hong, T.-P., Chen, S.-M., Ali, M. (eds.) Next-Generation Applied Intelligence. LNCS, vol. 5579, pp. 172–182. Springer, Heidelberg (2009) 7. Lester, J.C., Voerman, J.L., Towns, S.G., Charles, B., Callaway, C.B.: Deictic Believability: Coordinated Gesture, Locomotion, and Speech in Lifelike Pedagogical Agents. Applied Artificial Intelligence 13, 383–414 (1999) 8. Petriu, E.M., Whalen, T.E., Abielmona, R., Stewart, A.: Robotic sensor agents: a new generation of intelligent agents for complex environment monitoring. IEEE Magazine on Instrumentation and Measurement 7, 46–51 (2004) 9. Petriu, E.M., Patry, G.G., Whalen, T.E., Al-Dhaher, A., Groza, V.Z.: Intelligent Robotic Sensor Agents for Environment Monitoring. In: Proc. of the Internat ional Symposium on Virtual and Intelligent Measurement Systems, VIMS 2002 (2002) 10. Rauscher, H.M., Potter, W.D.: Decision support for ecosystem management and ecological assessments. In: Jensen, M.E., Bourgeron, P.S. (eds.) A guidebook for integrated ecological assessments, pp. 162–183. Springer, New York (2001) 11. Reynolds, K.M.: Integrated decision support for sustainable forest management in the United States: fact or fiction? Computers and Electronics in Agriculture 49, 6–23 (2005) 12. Roth, M., Vail, D., Veloso, M.: A world model for multi-robot teams with communication. In: Proc. of the International Conference on Intelligent Robots and Systems, IROS 2003. IEEE/RSJ IROS-2003, vol. 3, pp. 2494–2499 (2003) 13. Sorenson, H.W.: Parameter estimation: principles and problems. Marcel Dekker, Inc., New York (1980)
A New Algorithm for Divisible Load Scheduling with Different Processor Available Times Amin Shokripour, Mohamed Othman , and Hamidah Ibrahim Department of Communication Technology and Network, Universiti Putra Malaysia, 43400 UPM, Serdang, Selangor D.E., Malaysia
[email protected],
[email protected]
Abstract. During the last decade, the use of parallel and distributed systems has become more common. In these systems, a huge chunk of data or computation is distributed among many systems in order to obtain better performance. Dividing the data is one of the challenges in this type of system. Divisible Load Theory (DLT) is a proposed method for scheduling data distribution in parallel or distributed systems. In much of the research carried out in this field, it was assumed that all processors are dedicated to the grid system, but that is not always true in real systems. The limited number of studies which attended to this reality assumed that systems are homogeneous and presented some algorithms or closed-form formulas for scheduling jobs in a System with Different Processor Available Times (SDPAT). In this article, we propose a new algorithm for scheduling jobs in a heterogeneous SDPAT.
1 Introduction
In most research on Divisible Load Theory, it is assumed that at the start of job scheduling all the participating processors are available. In real systems, this assumption is not always true, because processors or communication links may be busy with local loads or previously scheduled jobs. The challenges of scheduling jobs when all the required processors are available have been sufficiently addressed: enough processors for doing the job are ready, the job is scheduled, fractions are distributed, and communication and computation start. However, if an adequate number of processors is not available, the job remains unscheduled. This idle time should ideally be eliminated; it leads to increased response time because of long delays in getting the minimum number of processors ready to participate in the activity. To reduce this inefficiency, some methods were presented, one of which is the backfilling algorithm. The main concept of this algorithm is that small fractions should be assigned to idle processors during the inactive period. In this situation, the shortest period of idle time for each processor will occur, but this method is very complex, and a method of low complexity is desirable.
The author is also an associate researcher at the Lab of Computational Science and Informatics, Institute of Mathematical Research (INSPEM), University Putra Malaysia.
In this paper, we schedule jobs in a SDPAT. First, we present a closed-form formula for scheduling jobs in a heterogeneous environment under the assumptions that all the processors are available at the beginning of scheduling and that the communication method is blocking mode. Then, we use this formula for scheduling jobs when the processor availability times are arbitrary. The proposed algorithm is compared with the method presented in [1].
2 Related Works
During the last decade, many studies on DLT have been carried out [2,3]. Each of these studies attended to some of the properties of DLT and defined some assumptions about the environment. In this research, we consider a Divisible Load system with these characteristics: communication is in blocking mode, the system is heterogeneous, communication and computation have overhead, and not all the required processors are available at the beginning of scheduling. Many closed-form formulas for homogeneous and heterogeneous systems, with different conditions, have been proposed [4,5]. For a one-installment heterogeneous system with overhead in communication and computation and non-blocking communication, a closed-form formula based on DLT has been presented by Mingsheng [5]. In real systems, all required processors are usually not available at the start of scheduling because of local tasks or previously scheduled tasks. This reality causes idle time. Researchers have tried to present scheduling methods that take this into consideration. It is clear that if the number of processors is equal to or smaller than the number of required processors, the response time in this type of system is larger than that in dedicated systems. If we have more than the required number of processors, we can ignore the processors that are not available, but optimal performance is not guaranteed because at times waiting for a powerful processor is better than doing without it. Consequently, many researchers tried to determine the number of required processors to meet the task deadline [1,6,7]. In [7], the authors presented a method to find the minimum number of required processors for each task in a homogeneous environment. The method cannot be applied to all real systems because it was presented for a homogeneous environment. In [1], the team enhanced the use of inserted idle time and presented a new fraction formula which reduced idle time more than previous studies accomplished. In addition, they mentioned some conditions under which their method can optimally partition the task so that the inserted idle time is fully utilized. The advantage of this method is that it does not need to fraction data in a heterogeneous system and can directly calculate the fraction sizes in a homogeneous system. Another paper, also presented for homogeneous systems, uses an algorithmic method without a closed-form formula [6]. The researchers assigned fractions to processors with the requirement that all the processors should stop before the deadline. They calculated the total of the assigned data, and when all the data were distributed, the minimum number of processors was found; the task cannot otherwise be done by these processors. This is a good idea for a dedicated system with only one task to do, because in a real system it is possible that
other tasks are in the queue and the system can do them if, and only if, this task ends before its deadline. In that research, the goal was to complete a task before the deadline, but in DLT scheduling research the goal is to find the minimum response time [5]. This method does not find the minimum response time and only finds the minimum number of required processors to complete the task by the deadline. In other words, this method does not use processors optimally. Some researchers, such as Chuprat et al., believe that DLT is not the best method for scheduling a SDPAT [6]. They believe that linear programming has better performance than the method presented by Lin et al. [1,7]. They proved this claim through some experiments and showed a comparison of their method's performance with Lin's method. Perhaps in this specific comparison linear programming functions better than DLT as represented by Lin's method, but no researcher can guarantee that Lin's method is the best DLT-based method for this problem. All the above studies used a homogeneous system for scheduling jobs. In this research, however, we are going to find an algorithm for a heterogeneous environment.
3 Preliminaries
Several notations are used in the discussion. The notations and their definitions are described in Table 1.

Table 1. Notations

Notation               Description
V                      Total size of data
n                      Number of processors
αi                     Size of the load fraction allocated to processor Pi in each internal installment. The summation of all fractions in each installment is equal to the job size for this installment; in other words, Σ_{i=1}^{m} αi = V
wi                     Ratio of the time taken by processor Pi to compute a given load to the time taken by a standard processor to compute the same load
zi                     Ratio of the time taken by link li to communicate a given load to the time taken by a standard link to communicate the same load
si                     Computation overhead for processor Pi
oi                     Communication overhead for processor Pi
ri                     Availability time for processor Pi
Ti^comp = αi wi + si   Computation time for processor Pi
Ti^comm = αi zi + oi   Communication time for processor Pi
T(αi)                  Finish time of Pi for a given fraction, defined as the difference between the instant at which Pi stops computing and the time at which the root starts to send data; in other words, T(αi) = Ti^comm + Ti^comp
T(V)                   Time of processing all the data: T(V) = max{T(α1), T(α2), ..., T(αn)}, where n is the number of processors
3.1 Model
In this research, we use a client-server topology for the network, in which all processors are connected to a root processor called P0. This processor does not do any computation and only schedules tasks and distributes chunks among the others. The communication type is blocking mode; this means communication and computation cannot be overlapped. The model consists of a heterogeneous environment which includes communication and computation overhead.

3.2 Definitions

To apply the proposed algorithm on the described system, we need some functions, equations and new definitions. These are as follows:
Definition 1: Absolute delay time, shown by di, is the difference between the availability time of a processor and the time at which it should be available for participating in the task without any delay. In other words, di = ri − Σ_{j=1}^{i−1} Tj^comm. If the calculated di is negative, we set it equal to zero.
Definition 2: Processor Pi's finishing time with idle time, shown by Ti^idle, includes the time of communication, the time of computation and the absolute delay: Ti^idle = Ti^comm + Ti^comp + di.
Definition 3: Additional data, αi^add, is the size of data for which, if it is deducted from the data assigned to a processor, the processor will finish its job simultaneously with the base processor, which we assume is P1:
αi^add = αi − (Ti^idle − T1^idle) / (wi + zi)     (1)
Definition 4: In a system, the absolute delay time of some processors is zero. Each processor with absolute delay larger than zero is the head of a group, shown by Gi. Gi means processor Pi has absolute delay larger than zero. Each processor Pj with absolute delay equal to zero and i < j is a member of this group. The first processor after Pi with absolute delay time larger than zero makes a new group.
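The following small Python helper (an illustrative assumption, not part of the paper) computes the absolute delays of Definition 1 and the group heads of Definition 4 from the availability times r_i and the communication times T_i^comm of a tentative schedule.

def absolute_delays_and_groups(r, t_comm):
    delays, heads = [], []
    acc = 0.0                          # sum of communication times of previous processors
    for i, ri in enumerate(r):
        d = max(0.0, ri - acc)         # d_i = r_i - sum_{j<i} T_j^comm, clipped at zero
        delays.append(d)
        if d > 0:
            heads.append(i)            # processor P_i heads a new group G_i
        acc += t_comm[i]
    return delays, heads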
4 The Proposed Algorithm
In one of our previous studies, we derived a closed-form formula for scheduling jobs in a heterogeneous system that includes overheads and whose communication mode is blocking. These formulas are as follows:
εi+1 = wi / (wi+1 + zi+1)
δi+1 = ( si − (si+1 + oi+1) ) / (wi+1 + zi+1)
Ei = Π_{j=2}^{i} εj
Δi = Σ_{j=2}^{i} ( δj Π_{k=j+1}^{i} εk )
α1 = ( V − (Δ2 + Δ3 + ... + Δn) ) / ( 1 + E2 + E3 + ... + En )
αi = Ei α1 + Δi,  i = 2, 3, ..., n     (2)
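A minimal Python sketch (assumed helper names; E_1 is set to 1, as implied by the form of the α1 formula) of how the closed-form schedule of Eq. (2) can be evaluated:

def closed_form_fractions(V, w, z, s, o):
    # lists are 0-indexed: index i-1 stores the quantity for processor P_i
    n = len(w)
    eps = [0.0] * n        # eps[i] = epsilon_{i+1}
    dlt = [0.0] * n        # dlt[i] = delta_{i+1}
    for i in range(1, n):
        eps[i] = w[i - 1] / (w[i] + z[i])
        dlt[i] = (s[i - 1] - (s[i] + o[i])) / (w[i] + z[i])
    E = [1.0] * n          # E_1 = 1 by convention (assumption)
    Delta = [0.0] * n
    for i in range(1, n):
        E[i] = E[i - 1] * eps[i]
        Delta[i] = Delta[i - 1] * eps[i] + dlt[i]   # Delta_i recurrence
    alpha1 = (V - sum(Delta[1:])) / (1.0 + sum(E[1:]))
    return [alpha1] + [E[i] * alpha1 + Delta[i] for i in range(1, n)]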
Fig. 1. Blocking Mode time diagram with arbitrary processor available time
If we want to apply this formula to a system without overhead, we can use the following equation:
αi = Ei α1,  i = 2, 3, ..., n
α1 = V / ( 1 + E2 + E3 + ... + En )     (3)
In the study of Lin et al. [1] it was shown that it is impossible to use all inserted idle time if some restrictions do not hold. Thus, before presenting the proposed algorithm, we define the system. We have a heterogeneous system with n processors which includes communication and computation overhead, and the availability time of each processor is ri. We order the processors by time of availability, which means r1 ≤ r2 ≤ ... ≤ rn. These processors are connected in a client-server network topology with a root processor called P0. We assume that all the processors are available at the scheduling start time and use Eq. (2) for calculating αi. Fig. 1 shows the scheduling diagram. From the figure, it is clear that not all processors finish their job at the same time, because they are available at different times. To solve this problem, we choose the first processor and its finish time as the base. Then we calculate the additional data for the first processor whose finish time is larger than the base time, for instance α3^add, and compute α3 = α3 − α3^add. If the new value for α3 is negative, we ignore all the processors with ri larger than this one and schedule the task again for the new set of processors. Now, assuming the additional data have been added to the original task, we schedule the additional data with Eq. (3).
Fig. 2. Applying the proposed algorithm on a scheduled task
We use this equation because we are adding data to previously assigned data, so no additional overhead is incurred:
α1^new = α3^add / ( 1 + E2 + E3 + ... + En )
αi^new = Ei α1^new,  i = 2, 3, ..., n     (4)
Then for all processors, we add the newly scheduled data to the previous data:
αi = αi + αi^new,  i = 1, 2, ..., n     (5)
Fig. 2 shows this new scheduling strategy, where processors P1 to P3 will finish the task at the same time but a period of idle time is added to processor P3. We can delete this idle time by letting P3 have more data; it means we shift this new idle time into the unavailability time. This additional data should be processed in a time equal to P1 and P2's communication time. Now the question that remains is "How much is the communication time for P1 and P2 in the new scheduling?" We cannot use α1^new calculated by Eq. (3), because it depends on α3^add, which in turn depends on α1^new. α3^add is the summation of two numbers: the additional data which should be rescheduled and the data which should be additionally assigned to P3 to cancel the communication time of the previous processors:
α3^add = α3^reschd + α3^comm     (6)
α3^comm = α1^new z1/(w3 + z3) + α2^new z2/(w3 + z3) = α1^new z1/(w3 + z3) + α1^new E2 z2/(w3 + z3)     (7)
The additional fraction size for processor P1 can be calculated by
α1^new = α3^reschd / ( 1 + E2 + E3 + ... + En )     (8)
From Eq. (7) and Eq. (8) we have
α3^comm = α3^reschd ( z1 + z2 E2 ) / ( (w3 + z3)(1 + E2 + E3 + ... + En) )     (9)
We define α3^idle as the fraction of rescheduled data that should be added to the data assigned to P3 for cancelling the generated idle time:
α3^idle = ( z1 + z2 E2 ) / ( (w3 + z3)(1 + E2 + E3 + ... + En) )     (10)
thus
α3^add = α3^reschd + α3^idle · α3^reschd     (11)
We know the values of α3^add and α3^idle, therefore
α3^reschd = α3^add / ( 1 + α3^idle )     (12)
With this technique the idle time is cancelled. Now we can calculate all the new fractions for all processors. For the next processor in this example, without imposing any restriction, we assume d4 = 0. This means that when P3's communication is finished, processor P4 is ready; therefore it is a member of G3. From Fig. 2, it can be seen that the finish time for this processor is different from that of the previous processors. Hence we repeat the discussed steps and use Eq. (4) to find the new fractions. The previous processors' communication time should also be taken into consideration. This includes the communication time for processors P1 and P2, from which the communication time of P3 for the data additionally assigned to it, α3^idle, should be subtracted. Hence
α4^idle = ( α3^idle (w3 + z3) − α3^idle z3 ) / (w4 + z4)     (13)
α4^idle = α3^idle w3 / (w4 + z4)     (14)
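For the worked example above, the following illustrative snippet (assumed, not the paper's code) evaluates Eqs. (10), (12) and (14); E is the list [E_1 = 1, E_2, ..., E_n].

def reschedule_for_p3(alpha3_add, w, z, E):
    denom = (w[2] + z[2]) * sum(E)                     # (w3 + z3)(1 + E2 + ... + En)
    alpha3_idle = (z[0] * E[0] + z[1] * E[1]) / denom  # Eq. (10): (z1 + z2*E2) / denom
    alpha3_reschd = alpha3_add / (1.0 + alpha3_idle)   # Eq. (12)
    alpha4_idle = alpha3_idle * w[2] / (w[3] + z[3])   # Eq. (14)
    return alpha3_idle, alpha3_reschd, alpha4_idle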
Thus in general we have these formulas:
αj^idle = ( Σ_{i=1}^{j−1} zi Ei ) / ( (wj + zj) Σ_{i=1}^{n} Ei )
αi^idle = αj^idle · Π_{k=j}^{i−1} wk / (wk+1 + zk+1)
αj^idle(G) = Σ_{i∈Gj} αi^idle
αi^reschd = αi^add / ( 1 + αi^idle + αj^idle(G) )

ν > 0, ν ∈ ℝ, N = ⌈ν⌉, ck and bk are the constants representing the initial and boundary conditions, respectively. We have confined our investigation in this paper to linear and nonlinear fractional differential equations that have only one fractional term. The designed methodology is applicable to a variety of FDEs in electromagnetics [9], fluid dynamics [10] and control problems [11]. Before introducing the proposed methodology it is necessary to provide some definitions and relations which will be used in the next sections. Throughout this paper, the Riemann-Liouville definition for the fractional derivative has been used, with lower terminal at zero. The fractional derivative of a function f(t) of order ν is given as
D^ν f(t) = (1/Γ(ν)) ∫_0^t (t − ξ)^{−ν−1} f(ξ) dξ,  for t > 0, ν ∈ ℝ+     (3)
Using (3), the fractional derivative of the exponential function f(t) = e^{at} is obtained by simple mathematical manipulation as
D^ν e^{at} = εt(−ν, a) = t^{−ν} E_{1,1−ν}(at)     (4)
where εt(ν, a) is the relation arising in solutions of FDEs, represented as
εt(ν, a) = t^ν Σ_{k=0}^{∞} (at)^k / Γ(ν + k + 1)     (5)
and E_{1,1−ν}(at) is the Mittag-Leffler function of two parameters α = 1 and β = 1 − ν, defined by the series expansion
E_{α,β}(t) = Σ_{k=0}^{∞} t^k / Γ(αk + β),  (α > 0, β > 0)     (6)
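To make the series (6) concrete, the following is a minimal illustrative Python sketch of a truncated-series evaluation of E_{α,β}(t); it is only a didactic stand-in for a dedicated routine such as the MLF function used later [19], and the truncation length is an assumed cutoff suitable for moderate arguments.

import math

def mittag_leffler(t, alpha, beta, terms=60):
    # Truncated series of Eq. (6): sum_{k=0}^{terms-1} t^k / Gamma(alpha*k + beta)
    return sum(t ** k / math.gamma(alpha * k + beta) for k in range(terms))

# Sanity check: E_{1,1}(t) reduces to exp(t), e.g. mittag_leffler(1.0, 1.0, 1.0) ~ 2.71828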
2 Mathematical Modeling

In this section, a detailed description of the mathematical model is provided for solving FDEs using the strength of feed-forward artificial neural networks.
Integer order case: Take the general form of a differential equation of integer order n, i.e. ν = n; then (1) can be written as
D^n y(t) = f( t, y(t), y′(t) ),  0 ≤ t ≤ t0     (7)
with the initial and boundary conditions given in (2).
An arbitrary continuous function and its derivatives on a compact set can be arbitrarily well approximated by a multiple-input, single-output, single-hidden-layer feed-forward neural network with a linear output layer having no bias. The solution y(t) of the differential equation, along with its derivatives, can be approximated by the following continuous mappings, as used in the ANN methodology [5],[6],[8]:
ŷ(t) = Σ_{i=1}^{m} δi A(xi)     (8)
Dŷ(t) = Σ_{i=1}^{m} δi DA(xi)     (9)
D^n ŷ(t) = Σ_{i=1}^{m} δi D^n A(xi)     (10)
where xi = wi t + bi; δi, wi and bi are bounded real-valued adaptive parameters (or weights), m is the number of neurons, and A is the activation function, normally taken as the log-sigmoid function, i.e. A(t) = 1/(1 + e^{−t}). An ANN architecture formed by linear combinations of the networks represented in (8)-(10) can approximately model integer-order ordinary differential equations as given in (7).
Fractional order case: The networks given in (8) to (10) cannot directly be applied to represent FDEs due to the extreme difficulty in obtaining the fractional derivative of the log-sigmoid activation function. The exponential function is a candidate to replace the log-sigmoid function in the neural network modeling of FDEs. It has universal function approximation capability and a known fractional derivative as well. The approximate continuous mappings in the form of a linear combination of exponential functions can be given as
ŷ(t) = Σ_{i=1}^{m} δi e^{wi t + bi}     (11)
D^n ŷ(t) = Σ_{i=1}^{m} δi wi^n e^{wi t + bi}     (12)
D^ν ŷ(t) = Σ_{i=1}^{m} δi e^{bi} t^{−ν} E_{1,1−ν}(wi t)     (13)
The linear combination of the networks (11) to (13) can represent an approximate mathematical model of the differential equation of fractional order given in expression (1). It is named the fractional differential equation neural network (FDE-NN). The detailed architecture of the FDE-NN is represented in Fig. 1. In this way the ANN architecture has been extended to model FDEs, subject to some unknown weights.
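As an illustration of how the mappings (11) and (13) can be evaluated for a given set of weights, the following hedged Python sketch is included; the truncated Mittag-Leffler series is the same illustrative stand-in as shown earlier, and all function names are assumptions rather than part of the original implementation.

import math

def mittag_leffler(t, alpha, beta, terms=60):
    # truncated series for E_{alpha,beta}(t), illustrative stand-in for MLF [19]
    return sum(t ** k / math.gamma(alpha * k + beta) for k in range(terms))

def y_hat(t, delta, w, b):
    # Eq. (11): sum_i delta_i * exp(w_i*t + b_i)
    return sum(d * math.exp(wi * t + bi) for d, wi, bi in zip(delta, w, b))

def frac_deriv_y_hat(t, nu, delta, w, b):
    # Eq. (13): sum_i delta_i * exp(b_i) * t^(-nu) * E_{1,1-nu}(w_i*t), valid for t > 0
    return sum(d * math.exp(bi) * t ** (-nu) * mittag_leffler(wi * t, 1.0, 1.0 - nu)
               for d, wi, bi in zip(delta, w, b))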
Fig. 1. Fractional differential equation neural network architecture
3 Evolutionary Computational Intelligence

In this section the learning procedure is given for finding the unknown weights of the networks representing the FDE, with the help of an efficient computational intelligence algorithm. These training procedures are mainly based on a genetic algorithm (GA) hybridized with efficient local search methods such as the active-set algorithm. One of the prominent features of GAs is that they do not easily get stuck in local minima, unlike other algorithms [12]. A GA consists of three fundamental operators: selection, crossover and mutation. The GA encodes the design parameters into finite bit strings to solve the given optimization problem. The GA runs iteratively, applying its operators randomly on the basis of a fitness function. Finally it finds and decodes the solution from the last pool of mature strings, obtained by ranking of strings, exchanging portions of strings and changing bits at some locations in the strings [13]. The flowchart showing the process of the evolutionary algorithm is given in Figure 2(a). The randomly generated initial population consists of a finite set of chromosomes, each with as many genes as there are unknown weights in the fractional differential equation neural network. The objective function to be minimized is given as the sum of errors
ej = ej^1 + ej^2,  j = 1, 2, ...     (14)
where j is the generation number and ej^1 is given as
ej^1 = (1/p) Σ_{i=1}^{p} ( D^ν ŷ(ti) − f(ti, ŷ(ti), ŷ′(ti)) )^2     (15)
[Fig. 2(a): flowchart with stages Start, Initialize Population, Randomly Varying Individuals, Fitness Evaluation, Selection, Crossover, Mutation, Termination Criteria, Best Individual, Refinement, Global Best Individual, Stop]
Fig. 2. (a) Flowchart of Evolutionary Algorithm, (b), (c), and (d) are graphical representation of results for problems I, II, and III respectively
where p is the number of time steps. The value of p is adjusted as a trade-off between computational complexity and accuracy. Similarly, ej^2, which is related to the initial and boundary conditions, can be written as
ej^2 = (1/N) Σ_{k=0}^{N−1} ( D^k y(0) − ck )^2 + (1/N) Σ_{k=0}^{N−1} ( D^k y(t0) − bk )^2     (16)
The evolutionary algorithm is given in the following steps:
Step 1: Randomly generate bounded real values to form an initial population of P individuals or chromosomes. Each individual represents the unknown parameters of the neural network. The initial population is scattered enough to give the algorithm a good search space.
Step 2: Create Q subpopulations, each having P/Q individuals.
Step 3: Initialization and assignment of parameter values for algorithm execution: set the number of variables equal to the number of elements in an individual, set the number of generations, set the fitness limit, start the cycle count, set the elite count to 3 and the crossover fraction to 0.75 for reproduction, set migration in both forward and backward directions, start the generation count, etc.
Step 4: Calculate fitness using the fitness function given in expressions (14)-(16).
Step 5: Rank each individual of the populations on the basis of minimum fitness values. Store the best-fit candidate solution.
Step 6: Check for termination of the algorithm, which is set as either a predefined fitness value, i.e., MSE ≤ 10^-8 for linear FDEs and 10^-4 for non-integer order cases of the equation, or completion of the number of cycles. If met, go to step 9; else continue.
Step 7: Reproduce the next generation on the basis of crossover (call the scattered function), mutation (call the adaptive feasible function), selection (call the stochastic uniform function), and elitism.
Step 8: Repeat the procedure from step 3 to step 6 with the newly generated population until the total number of cycles is complete.
Step 9: Refinement is achieved by using the active-set algorithm (call the FMINCON function of MATLAB) as a local search technique, taking the best-fitted individual of step 5 as the starting point of the algorithm. Store in memory the refined value of the fitness along with the best individual for this run of the algorithm.
Step 10: Repeat steps 1 to 9 for a sufficiently large number of runs to allow a statistical analysis of the algorithm. Store these results for later analysis.
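The following is a highly simplified, illustrative Python skeleton of the hybrid scheme in Steps 1-10 (real-coded individuals, fitness-based ranking, elitism, crossover and mutation, followed by a local refinement of the best individual). It is not the authors' MATLAB implementation; the operator details are simplified assumptions, and the final refinement stand-in for the active-set step (FMINCON) is left to an external local optimizer.

import random

def evolve(fitness, dim, pop_size=160, generations=2000, bounds=(-10.0, 10.0),
           mutation_scale=0.1, crossover_rate=0.75, tol=1e-8):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        pop.sort(key=fitness)                       # rank by fitness (lower is better)
        best = pop[0]
        if fitness(best) <= tol:                    # predefined fitness limit reached
            break
        children = list(pop[:3])                    # elitism (elite count 3)
        while len(children) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)
            child = [a if random.random() > crossover_rate else b
                     for a, b in zip(p1, p2)]       # scattered-style crossover
            child = [g + random.gauss(0.0, mutation_scale) for g in child]  # mutation
            children.append(child)
        pop = children
    return best   # refine further with a local search (active-set / FMINCON analogue)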
4 Simulation and Results

Three different problems of FDEs were analyzed using the FDE-NN method, and a comparison is made with exact solutions and other numerical methods to validate the results of the scheme.

4.1 Problem I
Consider the fractional differential equation
D^ν y(t) = t^2 + (2/Γ(3 − ν)) t^{2−ν} − y(t),  y(0) = 0,  0 < ν ≤ 1     (17)
with the exact solution given as y(t) = t^2 [14].
To solve this with the FDE-NN algorithm, m = 8 neurons are taken and the value of the fractional order derivative is ν = 0.5. Then a total of 24 unknown parameters (δi, wi and bi) are to be adapted. These adaptive weights are restricted to real numbers in (-10, 10). The initial population consists of a set of 160 chromosomes divided into 8 subpopulations. Each chromosome consists of 24 genes, equivalent to the number of unknown weights. The input of the training set is taken from time t ∈ (0, 1) with a step size of 0.1. This means that p = 11 in expression (15). Therefore the fitness function is formulated as
ej = (1/11) Σ_{i=1}^{11} [ D^{0.5} ŷ(ti) − ti^2 − (2/Γ(2.5)) ti^{1.5} + ŷ(ti) ]^2 + ( ŷ(0) )^2,  j = 1, 2, 3, ...     (18)
where j is the generation number, and ŷ and D^{0.5}ŷ are the networks given in (11) and (13), respectively. Our scheme runs iteratively in order to find the minimum of the fitness function ej, with stoppage criteria of 2000 generations or a fitness value ej ≤ 10^-8, whichever comes earlier. One set of the unknown weights learned by the FDE-NN algorithm is provided in Table 1. Using these weights in equation (11), one can obtain the solution for any input time t between 0 and 1. A comparison of the results with the exact solution is made and summarized in Table 1. It can be seen from Figure 2(b) that the solution obtained by our algorithm is close to the exact solution. It validates the effectiveness of our scheme for the solution of fractional differential equations.

Table 1. Weights obtained by FDE-NN algorithm and solution of FDE in problem I

i   wi        bi        δi          Time   Exact     FDE-NN    Absolute Error
1    0.1960    0.0830    0.2050     0      0         -0.0004   3.726E-04
2   -1.3561    0.6221    0.4691     0.2    0.0400     0.0396   3.605E-04
3   -0.8717   -0.2375    0.9537     0.4    0.1600     0.1596   4.328E-04
4    0.9907    0.3870    0.7539     0.6    0.3600     0.3573   2.741E-03
5    0.3077   -0.0155   -0.2624     0.8    0.6400     0.6352   4.777E-03
6    0.0228   -1.0413    0.2148     1.0    1.0000     1.0004   3.519E-04
4.2 Problem II
Consider the ordinary fractional differential equation
Dy(t) − (11/15) D^{1/2} y(t) + (2/15) y(t) = 0,  y(0) = 0     (19)
with the analytic solution obtained by the direct approach method [2] given as
y(t) = a1 εt(0, a1^2) − a2 εt(0, a2^2) + a1 εt(−1/2, a1^2) − a2 εt(−1/2, a2^2)     (20)
where a1, a2 are the zeros of the indicial polynomial of (19) and εt for two inputs is given in (5). Equation (19) is a simplified form of the composite fractional relaxation equation [15]. For ν = 1/2 it can represent the unsteady motion of a particle accelerating in a viscous fluid under the action of gravity, referred to as the Basset problem [15],[16].
This problem has been simulated by the FDE-NN networks (11) to (13) with m = 6 neurons, making a total of 18 unknown weights to be adapted. These adaptive parameters are restricted to the range [-100, 100]. The population consists of a set of 160 individuals divided into 8 subpopulations. The evolutionary algorithm is applied for 2000 generations, ranked by the fitness function ej. The input of the training set is chosen from time t ∈ [0.1, 4] with a step size of 0.2. The total number of inputs is p = 20, and with one initial condition the fitness function can be formulated as
ej = (1/20) Σ_{i=1}^{20} ( ŷ′(ti) − (11/15) D^{1/2} ŷ(ti) + (2/15) ŷ(ti) )^2 + ( ŷ(0) )^2,  j = 1, 2, ...     (21)
Our algorithm runs iteratively in order to find the minimum of the fitness function ej, with stoppage criteria of j = 2000 generations or a fitness value ej ≤ 10^-7, whichever comes earlier. One set of the unknown weights learned stochastically by the FDE-NN algorithm, with fitness value ej = 8.85×10^-7, is provided in Table 2. Using these weights in expression (11) we can find the solution to this problem for any input time t between 0 and 4. We have obtained the analytic solution using expression (20) for input time t ∈ [0.1, 4] with a step size of 0.1. We have also calculated the numerical solution by the technique developed by Podlubny based on the successive approximation method [17]. The recursive relations are used for computations with parameters h = 0.001, n = 0, 1, 2, ..., 4000. In order to compare the results on the given inputs, the Podlubny numerical, approximate analytical and FDE-NN solutions were obtained. Results are shown in Figure 2(c) and given in Table 2. It can be seen that our algorithm approximates the exact solution more accurately than the classical numerical method. For example, at time t = 2 the solution obtained by the FDE-NN approach is ŷ(t) = 0.2459521 and by the Podlubny numerical technique 0.2334612, whereas the exact solution is 0.2429514. The accuracy achieved by the FDE-NN method is of the order of 10^-4 to 10^-5, whereas it is of the order of 10^-2 to 10^-3 for the Podlubny method.

Table 2. Weights obtained by FDE-NN algorithm and solution of FDE in problem II

i   wi          bi          δi           t     Exact    FDE-NN   Numerical   Abs. Error (FDE-NN)   Abs. Error (Numerical)
1     4.8188    -46.7244    -29.5226     0.5   0.1236   0.1234   0.1203      2.70E-04              3.34E-3
2     0.5509     61.4131    -10.5329     1.0   0.1629   0.1633   0.1571      3.91E-04              5.83E-3
3    -0.4554    -16.6884     -5.9407     2.0   0.2459   0.2461   0.2335      1.01E-04              1.25E-2
4   -10.7147    -53.8387     -8.6134     3.0   0.3436   0.3424   0.3214      1.21E-03              2.22E-2
5    -0.7676    -11.8304     -5.9951     3.5   0.3998   0.3990   0.3712      8.02E-04              2.86E-2
6     0.2721      3.7615     -3.1966     4.0   0.4616   0.4616   0.4254      4.30E-05              3.61E-2
4.3 Problem III
Consider a non-linear ordinary fractional differential equation [14] given as
D^ν y = (40320/Γ(9 − ν)) t^{8−ν} − 3 (Γ(5 + ν/2)/Γ(5 − ν/2)) t^{4−ν/2} + (9/4) Γ(ν + 1) + ( (3/2) t^{ν/2} − t^4 )^3 − y^{3/2}     (22)
with initial condition y(0) = 0 for the case 0 < ν ≤ 1. The exact solution of this equation is given by y = t^8 − 3 t^{4+ν/2} + (9/4) t^ν. The classical numerical techniques used in Problem II are unable to provide a solution for such problems. However, modern deterministic methods with high computational cost, like the fractional Adams method [18], can provide a solution of (22). The simulation has been performed for this problem following a similar pattern to the previous example. The order of the fractional derivative is taken as ν = 1/2. The results are summarized in Table 3 and plotted in Figure 2(d) for some inputs. The accuracy achieved is of the order of 10^-2 to 10^-3, whereas for the numerical result of the Adams scheme [14] at input t = 1, for mesh size h = 0.1, the error is 2.50×10^-1 and -5.53×10^-3 for ν = 0.2 and ν = 1.25, respectively. It is necessary to mention that the fractional order derivative of the activation function involves the Mittag-Leffler function defined in (6). In order to implement it in our program we have incorporated the MATLAB function MLF provided at the MATLAB Central File Exchange by Podlubny [19].

Table 3. Weights obtained by FDE-NN algorithm and solution of FDE in problem III

i   wi        bi        δi          Time   Exact    FDE-NN   Absolute Error
1    2.5205   -0.4835   -0.6029     0.1    0.7113   0.7066   0.0047
2    0.0156   -0.2346   -1.2299     0.2    1.0030   0.9994   0.0036
3   -1.3029   -1.1186    1.2410     0.3    1.2145   1.2198   0.0054
4    0.1110    1.3334    0.6163     0.4    1.3626   1.3656   0.0030
5    0.0268    0.7468   -0.1811     0.5    1.4372   1.4317   0.0055
6    0.5211    0.2775    0.5408     0.6    1.4175   1.4095   0.0080
7   -0.6929    0.9731    0.1722     0.7    1.2813   1.2857   0.0044
8    0.4450   -0.1819   -0.0018     0.8    1.0181   1.0420   0.0239
5 Conclusion

A stochastic computational approach has been developed for solving FDEs using neural networks and genetic algorithms. This method has been successfully applied to different linear and non-linear ordinary FDEs. It has been shown that the proposed scheme approximates the exact solution with better accuracy than the standard numerical technique. An advantage of this method is that, unlike other numerical methods, it gives an approximate solution on a continuous finite time domain. In future work, biologically inspired methods will be applied to solve these equations.
References [1] Oldham, K.B., Spanier, J.: The Fractional Calculus. Academic Press, New York (1974) [2] Miller, K.B., Ross, B.: An Introduction to the Fractional Calculus and Fractional Differential Equations. Wiley, New York (1993) [3] Samko, S.G., Kilbas, A.A., Marichev, O.I.: Fractional Integrals and Derivatives and Some of Their Applications. Nauka i Technika, Minsk, Russia (1987)
[4] Anatoly, A.K., Srivastava, H.M., Trujillo, J.J.: Theory and application of fractional differential equations. North-Holland Mathematics Studies, vol. 204. Elsevier, Amsterdam (2006) [5] Rarisi, D.R., et al.: Solving differential equations with unsupervised neural networks. J. Chemical engineering and processing 42, 715–721 (2003) [6] Aarts, L.P., Van Der Veer, P.: Neural Network Method for solving the partial Differential Equations. Neural Processing Letters 14, 261–271 (2001) [7] Junaid, A., Raja, M.A.Z., Qureshi, I.M.: Evolutionary Computing approach for the solution of initial value problems in ordinary differential equations. WASET 55, 578–581 (2009) [8] Khan, J.A., Zahoor, R.M.A., Qureshi, I.M.: Swarm Intelligence for the problems of Non-linear ordinary differential equations and its application to well known Wessinger’s equation. EJSR 34(4), 514–525 (2009) [9] Engheta, N.: On the role of fractional calculus in electromagnetic theory. IEEE Antennas Propagat. Mag. 39, 35–46 (1997) [10] Zhou, S., et al.: Chaos control and synchronization in fractional neuron network system. Chaos, Solitons & Fractals 36(4), 973–984 (2008) [11] Cao, J.Y., et al.: Optimization of fractional order PID controllers based on genetic algorithm. In: International Conference on Machine Learning and Cybernetics, ICMLC, pp. 5686–5689 (2005) [12] Tsoulos, I.G., Lagaris, I.E.: Solving differential equations with genetic programming. Genetic Programming and Evolvable Machines 7(1), 33–54 (2006) [13] Sivanandam, S.N., Deepa, S.N.: Introduction to Genetic Algorithms. Springer, Berlin (2008) [14] Weibeer, M.: Efficient Numerical Methods for fractional differential equations and their analytical Background. PhD Thesis, ch. 6 (2005) ISBN: 978-3-89720-846-9 [15] Gorenflo, R., Mainardi, F.: Fractional calculus: Integral and differential equations of fractional order. CISM Lecture notes, International centre for mechanical Sciences, Udine, Italy (1996) [16] Mainardi, F.: Fractional relaxation and fractional diffusion equations, mathematical aspects. In: Ames, W.F. (ed.) Proceedings 12th IMACS world Congress, Georgia Tech, Atlanta 1, pp. 329–332 (1994) [17] Podlubny, I.: Fractional Differential Equations, ch. 8. Academic Press, New York (1999) [18] Li, C., Tao, C.: On the fractional Adams method. J. of computer & mathematics with applications 58(8), 1573–1588 (2009) [19] Podlubny, I.: Calculates the Mittag-Leffler function with desire accuracy (2005), http://www.mathworks.com/matlabcentral/fileexchange/ loadFile.do?objectId=8738
A Neural Network Optimization-Based Method of Image Reconstruction from Projections Robert Cierniak Technical University of Czestochowa, Department of Computer Engineering, Armii Krajowej 36, 42-200 Czestochowa, Poland
Abstract. The image reconstruction from projections problem remains a primary concern for scientists in the area of computed tomography. This paper describes a new approach to the reconstruction problem using a recurrent neural network. The reconstruction problem is reformulated as an optimization problem. The optimization process is performed by minimizing energy functions in the net determined by different error measures. Experimental results show that the appropriately designed neural network is able to reconstruct an image with better quality than that obtained from conventional algorithms.
1 Introduction

Images in computed tomography are obtained by applying a method of projection acquisition and an appropriate image reconstruction algorithm. The key problem arising in computerized tomography is image reconstruction from the projections received from the X-ray scanner. There are several well-known reconstruction methods to solve this problem. The most popular reconstruction algorithms are the method using convolution and back-projection and the algebraic reconstruction technique (ART). Besides those methods, there exist some alternative reconstruction techniques. The most worthy of emphasis seem to be neural network algorithms. Reconstruction algorithms based on supervised neural networks were presented in several papers, for example in [13], [14], [17]. Other structures, representing the so-called algebraic approach to image reconstruction from projections and based on recurrent neural networks, were studied in [20], [21]. This approach can be characterized as a one-dimensional signal reconstruction problem. In this case, the main disadvantage is the extremely large number of variables arising during calculations. The computational complexity of the reconstruction process in that approach is proportional to the square of the image size multiplied by the number of performed projections, which leads to an extremely large number of connections between neurons in the recurrent neural networks. In this paper, a new approach to the reconstruction problem is developed based on a transformation methodology. The traditional ρ-filtered layergram reconstruction method was a prototype of this reconstruction scheme. In our approach a recurrent neural network is proposed to design the reconstruction algorithm instead of the two-dimensional filtering of the traditional solution [6]. Owing to the 2D methodology of image processing in our approach, we significantly decrease the complexity of the tomographic reconstruction problem [3], [4]. The weights of the neural network arising in our reconstruction method
will be determined in a novel way. The calculations of these weights are carried out only once, before the principal part of the reconstruction process is started. To decrease the calculation complexity during determination of the neural network weights, the discrete Radon transform methodology is introduced. This methodology provides the so-called grid-"friendly" angles at which the projections are performed. In this paper, a comparison of the equiangular interval sampling procedure with the grid-"friendly" methodology of specifying the projection angles is presented. Additionally, because the number of neurons in the network does not depend on the resolution of the previously performed projections, we can quite freely modulate the number of carried-out projections.
2 Neural Network Reconstruction Algorithm

Let μ(x, y) denote the reconstructed image. This function describes the distribution of the attenuation of X-rays in an investigated cross-section of the body. The unknown image μ(x, y) will be calculated using the obtained projections, also called the Radon transform, one of the key concepts in computed tomography. The function p(s, α) shows how deep the shadow cast by the investigated object is for a single X-ray at a distance s from the origin when a projection is made at a specific angle α. This is called the Radon transform [12] and mathematically is written as
p(s, α) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} μ(x, y) · δ(x cos α + y sin α − s) dx dy     (1)
It is useful to note the following relation between the coordinate system (x, y) and the rotated coordinate system (s, u):
s = x cos α + y sin α.     (2)
We propose a novel neural network algorithm for reconstruction from the projections defined by Eq. (1). The idea of the presented reconstruction method using the neural network is shown in Fig. 1, where the grid-"friendly" projections are taken into consideration.

2.1 The Grid-"Friendly" Angles of the Parallel Projections

It is assumed in our approach to the reconstruction problem that the processed image is discrete. Therefore, we expect that the form of the interpolation function taken into consideration in this problem has a strong influence on the quality of the reconstructed image. The closer the X-rays pass to the pixels, the better the results we obtain. This is the main motivation to thicken the lines of X-rays during the projections around the pixels of the reconstructed image. In order to do this, according to the concept of the discrete Radon transform (DRT) [8], we propose an approach based on the grid-"friendly" angles of parallel projections, instead of the equiangular distribution procedure of projection acquisition. A motivation for this approach is the better adjustment of the rays in the parallel beam crossing a discrete image to the grid of pixels in this image, if at every angle of projection every ray crosses at least one pixel. In the simplest case these angles will be indexed in the following way: ψgf = −(I − 1)/2, ..., 0, ..., 3(I − 1)/2 − 1, where
[Fig. 1 blocks: Projections p̂^p(l, ψgf), ψgf = −(I−1)/2, ..., 0, ..., 3(I−1)/2 − 1 → Back-projection (interpolation), i = 1, ..., I, j = 1, ..., J → μ̂̃(i, j) → Calculation of the coefficients hij → Reconstruction using recurrent neural network → μ̂(i, j) → Monitor]
Fig. 1. The scheme of the neural network reconstruction algorithm
(I − 1)/2 is the total number of projection angles. Considering the above condition, the discrete values of the parameter α^p are as follows:
α^p_{ψgf} = arctan( 2ψgf/(I − 1) ) − π/2,   if ψgf = −64, ..., 64
α^p_{ψgf} = arcctan( 2(I − 1 − ψgf)/(I − 1) ) − π/2,   if ψgf = 65, ..., 191     (3)
The proposed distribution of the projection angles is approximately equiangular in the range α^p ∈ [−3π/4, π/4), which is depicted clearly in Fig. 3 for the case I = 129. Taking into consideration the sampling of the parameters s and α^p of the parallel projections, we can write
p̂^p(l, ψgf) = p^p(l Δs^p, α_{ψgf}).     (4)

2.2 Back-Projection Operation

The next step of our reconstruction algorithm for parallel beams consists of the back-projection operation [12]; after it, we obtain a blurred image which can be expressed by the following formula:
μ̃(x, y) = ∫_0^π p^p(s, α^p) dα^p.     (5)
This operation means the accumulation of all performed projections which passed through a fixed point of the reconstructed image μ(x, y) into the new image μ̃(i, j). The new image arising in
Fig. 2. The scheme of the neural network reconstruction algorithm
this way includes information about the reconstructed image μ(x, y), but strongly distorted. In our approach, the reconstruction from projections problem consists in directly recovering this image from the blurred image μ̃(i, j). Because we possess only a limited number of parallel projection values, it is necessary to apply interpolation; in this way a projection value is mapped to a certain point (x, y) of the reconstructed image. Mathematically, the interpolation can be expressed as follows

\bar{p}^p(\bar{s}, \alpha^p) = \int_{-\infty}^{+\infty} p^p(s, \alpha^p) \cdot I(\bar{s} - s)\,ds,  (6)

where I(Δs) is an interpolation function and s̄ = x cos α^p + y sin α^p. The bars over symbols will help to operate on variables in the case of multidimensional convolutions. In the presented method we take into consideration the discrete form of the reconstructed image and we approximate the 2-D convolution using two finite sums over the ranges [1, ..., I] and [1, ..., J]. In this way one can reformulate relation (6) (assuming that Δx = Δy = Δs) as follows

\hat{\tilde{\mu}}(i, j) \cong (\Delta_s)^2 \sum_{\bar{i}} \sum_{\bar{j}} \hat{\mu}(\bar{i}, \bar{j}) \cdot \Delta_{\alpha^p} \sum_{\psi_{gf}} \hat{I}\left(x\cos\alpha^p + y\sin\alpha^p - \bar{x}\cos\alpha^p - \bar{y}\sin\alpha^p\right).  (7)
Consequently, we can express Eq. (7) in the following way

\hat{\tilde{\mu}}(i, j) \cong \sum_{\bar{i}} \sum_{\bar{j}} \hat{\mu}(\bar{i}, \bar{j}) \cdot h_{\Delta i, \Delta j},  (8)

where

h_{\Delta i, \Delta j} = \Delta_{\alpha^p} (\Delta_s)^2 \cdot \sum_{\psi_{gf}} \hat{I}\left(i\Delta_s\cos(\psi\Delta_{\alpha^p}) + j\Delta_s\sin(\psi\Delta_{\alpha^p}) - \bar{i}\Delta_s\cos(\psi\Delta_{\alpha^p}) - \bar{j}\Delta_s\sin(\psi\Delta_{\alpha^p})\right).  (9)
Assuming that the interpolating function Î(•) is even, we may simplify relation (9) to the form

h_{\Delta i, \Delta j} = \Delta_{\alpha^p} (\Delta_s)^2 \cdot \sum_{\psi_{gf}} \hat{I}\left(|i - \bar{i}|\Delta_s\cos(\psi\Delta_{\alpha^p}) + |j - \bar{j}|\Delta_s\sin(\psi\Delta_{\alpha^p})\right).  (10)
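A possible realisation of Eq. (10) with a linear interpolation kernel is sketched below; the kernel, the angle set and the step sizes are assumptions chosen only to make the illustration runnable, not values taken from the paper.

```python
import numpy as np

def h_coefficients(max_shift, angles, delta_s=1.0):
    """Return a (2*max_shift+1) x (2*max_shift+1) table of h_{di,dj} (Eq. (10))."""
    delta_alpha = np.pi / len(angles)

    def I_hat(t):
        return np.maximum(0.0, 1.0 - np.abs(t) / delta_s)   # assumed linear kernel

    h = np.zeros((2 * max_shift + 1, 2 * max_shift + 1))
    for di in range(-max_shift, max_shift + 1):
        for dj in range(-max_shift, max_shift + 1):
            t = abs(di) * delta_s * np.cos(angles) + abs(dj) * delta_s * np.sin(angles)
            h[di + max_shift, dj + max_shift] = delta_alpha * delta_s**2 * I_hat(t).sum()
    return h

# e.g. coefficients for pixel shifts up to 3 over 256 projection angles
h = h_coefficients(max_shift=3, angles=np.linspace(0, np.pi, 256, endpoint=False))
```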
Equation (8) defines the 2D discrete approximate reconstruction problem in the case of an equiangular distribution of projection angles.

2.3 Reconstruction Process Using Recurrent Neural Network

The discrete reconstruction problem given by Eq. (8) can be reformulated as an optimization problem, as depicted in Fig. 3.
[Figure 3 shows the reconstruction problem (deconvolution) being recast as an optimization problem.]
Fig. 3. Scheme of the problem reformulation
A recurrent neural network structure applied to an optimization problem was proposed for the first time in [7]. Approaches to the 1D reconstruction problem using recurrent neural networks were presented, for example, in [1], [9] and [16]. The network shown in this paper realizes the image reconstruction from projections in 2D by deconvolution of relation (8). This can be formulated as the following optimization problem

\min_{M}\left(w \cdot \sum_{\bar{i}=1}^{I}\sum_{\bar{j}=1}^{J} f\left(e_{\bar{i}\bar{j}}(M)\right)\right),  (11)
where M = [μ̂(i, j)] is the matrix of pixels of the original image, w is a suitably large positive coefficient, and f(•) is a penalty function. If the value of the coefficient w tends to infinity, or in other words is suitably large, then the solution of (11) tends to the optimal result. We propose the following penalty function in our research

f\left(e_{\bar{i}\bar{j}}(M)\right) = \lambda \cdot \ln\cosh\left(\frac{e_{\bar{i}\bar{j}}(M)}{\lambda}\right),  (12)

and the derivative of (12) has the form

f'\left(e_{\bar{i}\bar{j}}(M)\right) = \frac{d f\left(e_{\bar{i}\bar{j}}(M)\right)}{d e_{\bar{i}\bar{j}}(M)} = \frac{1 - \exp\left(e_{\bar{i}\bar{j}}(M)/\lambda\right)}{1 + \exp\left(e_{\bar{i}\bar{j}}(M)/\lambda\right)},  (13)

where λ is a slope coefficient.
Now we can formulate the energy expression

E^t = w \cdot \sum_{\bar{i}=1}^{I}\sum_{\bar{j}=1}^{J} f\left(e_{\bar{i}\bar{j}}\left(M^t\right)\right),  (14)

which will be minimized by the constructed neural network to realize the deconvolution task expressed by Eq. (8). In order to find a minimum of function (14) we calculate the derivative

\frac{dE^t}{dt} = w \cdot \sum_{i=1}^{I}\sum_{j=1}^{J}\left[\sum_{\bar{i}=1}^{I}\sum_{\bar{j}=1}^{J} \frac{\partial f\left(e_{\bar{i}\bar{j}}\left(M^t\right)\right)}{\partial e_{\bar{i}\bar{j}}\left(M^t\right)} \frac{\partial e_{\bar{i}\bar{j}}\left(M^t\right)}{\partial \hat{\mu}^t(i, j)}\right] \frac{d\hat{\mu}^t(i, j)}{dt}.  (15)

Let us assume

\frac{d\hat{\mu}^t(i, j)}{dt} = -w \sum_{\bar{i}=1}^{I}\sum_{\bar{j}=1}^{J} \frac{\partial f\left(e_{\bar{i}\bar{j}}\left(M^t\right)\right)}{\partial e_{\bar{i}\bar{j}}\left(M^t\right)} \frac{\partial e_{\bar{i}\bar{j}}\left(M^t\right)}{\partial \hat{\mu}^t(i, j)};  (16)

then equation (15) takes the form

\frac{dE^t}{dt} = -\sum_{i=1}^{I}\sum_{j=1}^{J}\left(\frac{d\hat{\mu}^t(i, j)}{dt}\right)^2.  (17)
Having described the above methodology of the approach to the optimization problem, we can assume the form of the distance e_{īj̄}(M^t). In the simplest case, basing on the Euclidean distance, we propose the following form of this function

e^{E}_{\bar{i}\bar{j}}(M) = \sum_{i=1}^{I}\sum_{j=1}^{J} h_{\Delta i, \Delta j}\,\hat{\mu}(i, j) - \hat{\tilde{\mu}}(i, j).  (18)
Hence, we can determine the derivative from equation (16)

\frac{d\hat{\mu}^t(i, j)}{dt} = -w \sum_{\bar{i}=1}^{I}\sum_{\bar{j}=1}^{J} f'\left(e^{E}_{\bar{i}\bar{j}}(M)\right) h_{\Delta i, \Delta j}.  (19)
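For concreteness, a minimal sketch of the resulting dynamics is given below: an explicit Euler iteration of Eq. (19), treating h as a small convolution kernel and using the analytic derivative tanh(e/λ) of the penalty (12) in place of Eq. (13). The step size, the parameter values and the SciPy-based convolution are assumptions of this illustration, not the author's implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def reconstruct(mu_tilde, h, w=1.0, lam=0.1, dt=1e-3, iterations=1000):
    """Euler iteration of the network dynamics built from Eqs. (12), (18), (19)."""
    mu_hat = np.zeros_like(mu_tilde)
    for _ in range(iterations):
        # e^E(M): blurred current estimate minus the back-projected image, Eq. (18)
        e = convolve2d(mu_hat, h, mode="same") - mu_tilde
        f_prime = np.tanh(e / lam)                       # analytic derivative of Eq. (12)
        # Eq. (19): since h is even (Eq. (10)), correlation equals convolution
        mu_hat += dt * (-w) * convolve2d(f_prime, h, mode="same")
    return mu_hat
```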
Taking into consideration the origin of the distance e_{īj̄}(M^t), we can expect good results of image reconstruction using the described algorithm for such a measure of projections, for example in X-ray computed tomography. One can also use another loss function, based on the Kullback-Leibler divergence (also called I-divergence), in the following form alternative to Eq. (18)

e^{KL1}_{\bar{i}\bar{j}}\left(M^t\right) = \hat{\tilde{\mu}}(i, j)\ln\frac{\hat{\tilde{\mu}}(i, j)}{[HM]_{ij}} + [HM]_{ij} - \hat{\tilde{\mu}}(i, j),  (20)

or, dual to the above,

e^{KL2}_{\bar{i}\bar{j}}\left(M^t\right) = [HM]_{ij}\ln\frac{[HM]_{ij}}{\hat{\tilde{\mu}}(i, j)} - [HM]_{ij} + \hat{\tilde{\mu}}(i, j),  (21)
Table 1. View of the images (window: C=1.02, W=0.11): a) reconstructed image using the algorithm described by Eqs. (18), (19); b) reconstructed image using the algorithm described by Eqs. (20), (23); c) reconstructed image using the algorithm described by Eqs. (21), (24); d) reconstructed image using the standard convolution/back-projection method with rebinning and the Shepp-Logan kernel; e) original image

Image   MSE
a)      0.009
b)      0.0103
c)      0.0110
d)      0.0143
e)      —
where H = [h_{Δi,Δj}] is the matrix of the reconstruction coefficients, and

[HM]_{ij} = \sum_{i=1}^{I}\sum_{j=1}^{J} h_{\Delta i, \Delta j}\,\hat{\mu}(i, j).  (22)
Having the distances defined by (20) and (21), we can express the derivative from equation (16), respectively, as follows

\frac{d\hat{\mu}^t(i, j)}{dt} = -w \sum_{\bar{i}=1}^{I}\sum_{\bar{j}=1}^{J} f'\left(e^{KL1}_{\bar{i}\bar{j}}(M)\right) h_{\Delta i, \Delta j} \cdot \frac{\hat{\tilde{\mu}}(i, j)}{[HM]_{ij}},  (23)

and

\frac{d\hat{\mu}^t(i, j)}{dt} = -w \sum_{\bar{i}=1}^{I}\sum_{\bar{j}=1}^{J} f'\left(e^{KL2}_{\bar{i}\bar{j}}(M)\right) h_{\Delta i, \Delta j} \ln\frac{[HM]_{ij}}{\hat{\tilde{\mu}}(i, j)},  (24)

respectively. All the neural network structures defined by the pairs of equations (18)-(19), (20)-(23) and (21)-(24) are starting points for constructing recurrent neural network structures realising the image reconstruction process in the algorithm depicted in Fig. 1.
3 Experimental Results

The size of the processed image was fixed at 129 × 129 pixels, which determines the number of neurons in each layer of the net. Before the reconstruction process using a recurrent neural network is started, it is necessary to calculate the coefficients h_{īj̄} using equation (7). Using the linear interpolation function, the values of these coefficients are presented in Fig. 4.
Fig. 4. Scheme of the problem reformulation
During computer simulations it is very convenient to construct a mathematical model of the projected object, a so-called phantom, to obtain fan-beam projections. We adopted the well-known Shepp-Logan phantom of the head for our experiments (see e.g. [2]). Such a phantom for parallel-beam acquisition was used in many papers, for example [12]. A view of the mathematical model of the phantom is depicted in Table 1e. It is very subjective to evaluate the reconstruction procedure basing only on the view of the reconstructed image. That is why the quality of the reconstructed image has been evaluated by the standard error measure MSE, where μ(x, y) is the original image of the Shepp-Logan mathematical phantom. Table 1 presents the obtained results of the computer simulations (obtained after 30000 iterations in the case of the neural network algorithms).
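The MSE definition itself does not survive in the text above; assuming the usual per-pixel form over the I × J grid, it would read

\mathrm{MSE} = \frac{1}{I\,J}\sum_{i=1}^{I}\sum_{j=1}^{J}\left(\hat{\mu}(i,j) - \mu(i,j)\right)^2 .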
4 Conclusions

The presented simulations show the convergence of the image reconstruction algorithm based on the recurrent neural networks described in this work. The image of the cross-section of the investigated mathematical model obtained after a sufficient number of iterations is reconstructed with high accuracy. The algorithm described in this paper outperforms standard reconstruction methods in the sense of the standard error measures. Having a solution of the image reconstruction from projections problem for parallel beams, one can extend our results to other projection geometries: fan-beams and cone-beams, in particular those incorporated in spiral tomography. This opens the possibility of implementing a recurrent neural network in new designs of tomographic devices.
Acknowledgments. This work was partly supported by the Polish Ministry of Science and Higher Education (Research Project N N516 185235).
References 1. Cichocki, A., Unbehauen, R., Lendl, M., Weinzierl, K.: Neural networks for linear inverse problems with incomplete data especially in application to signal and image reconstruction. Neurocomputing 8, 7–41 (1995) 2. Cierniak, R.: Tomografia komputerowa. Budowa urzdzeñ CT. Algorytmy rekonstrukcyjne. Akademicka Oficyna Wydawnicza EXIT, Warszawa (2005) 3. Cierniak, R.: A new approach to tomographic image reconstruction using a Hopfield-type neural network. Elsevier Science: International Journal Artificial Intelligence in Medicine 43, 113–125 (2008) 4. Cierniak, R.: New neural network algorithm for image reconstruction from fan-beam projections. Elsevier Science: Neurocomputing 72, 3238–3244 (2009) 5. Frieden, B.R., Zoltani, C.R.: Maximum bounded entropy: application to tomographic reconstruction. Appl. Optics 24, 201–207 (1985) 6. Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proc. National Academy of Science USA 79, 2554–2558 (1982)
7. Hopfield, J.J., Tank, D.W.: Neural computation of decision in optimization problems. Biological Cybernetics 52, 141–152 (1985) 8. Kingston, A., Svalbe, I.: Mapping between digital and continuous projections via the discrete Radon transform in Fourier space. In: Proc. VIIth Digital Image Computing: Techniques and Applications, Sydney, pp. 263–272 (2003) 9. Ingman, D., Merlis, Y.: Maximum entropy signal reconstruction with neural networks. IEEE Trans. on Neural Networks 3, 195–201 (1992) 10. Jaene, B.: Digital Image Processing - Concepts, Algoritms and Scientific Applications. Springer, Berlin (1991) 11. Jain, A.K.: Fundamentals of Digital Image Processing. Prentice Hall, New Jersey (1989) 12. Kak, A.C., Slanley, M.: Principles of Computerized Tomographic Imaging. IEEE Press, New York (1988) 13. Kerr, J.P., Bartlett, E.B.: A statistically tailored neural network approach to tomographic image reconstruction. Medical Physics 22, 601–610 (1995) 14. Knoll, P., Mirzaei, S., Muellner, A., Leitha, T., Koriska, K., Koehn, H., Neumann, M.: An artificial neural net and error backpropagation to reconstruct single photon emission computerized tomography data. Medical Physics 26, 244–248 (1999) 15. Lewitt, R.M.: Reconstruction algorithms: transform methods. Proceeding of the IEEE 71, 390–408 (1983) 16. Luo, F.-L., Unbehauen, R.: Applied Neural Networks for Signal Processing. Cambridge University Press, Cambridge (1998) 17. Munlay, M.T., Floyd, C.E., Bowsher, J.E., Coleman, R.E.: An artificial neural network approach to quantitative single photon emission computed tomographic reconstruction with collimator, attenuation, and scatter compensation. Medical Physics 21, 1889–1899 (1994) 18. Radon, J.: Ueber die Bestimmung von Functionen durch ihre Integralwerte Tangs gewisser Mannigfaltigkeiten. Berichte Saechsiche Akad. Wissenschaften, Math. Phys. Klass 69, 262– 277 (1917) 19. Ramachandran, G.N., Lakshminarayanan, A.V.: Three-dimensional reconstruction from radiographs and electron micrographs: II. Application of convolutions instead of Fourier transforms. Proc. Nat. Acad. Sci. 68, 2236–2240 (1971) 20. Srinivasan, V., Han, Y.K., Ong, S.H.: Image reconstruction by a Hopfield neural network. Image and Vision Computing 11, 278–282 (1993) 21. Wang, Y., Wahl, F.M.: Vector-entropy optimization-based neural-network approach to image reconstruction from projections. IEEE Transaction on Neural Networks 8, 1008–1014 (1997)
Entrance Detection of Buildings Using Multiple Cues Suk-Ju Kang, Hoang-Hon Trinh, Dae-Nyeon Kim, and Kang-Hyun Jo Graduate School of Electrical Engineering, University of Ulsan, Korea, Daehak-ro 102, Nam-gu, Ulsan 680 - 749, Korea {sjkang,hhtrinh,dnkim2005}@islab.ulsan.ac.kr,
[email protected]
Abstract. This paper describes an approach to detect the entrance of a building, with the hope that it will be applied to autonomous robot navigation. The entrance is an important component which connects the internal and external environments of a building. We focus on a method of entrance detection using multiple cues. Information about entrance characteristics such as relative height and position on the building is considered. We adopt a probabilistic model for entrance detection by defining the likelihood of various features for entrance hypotheses. To do so we first detect the building's surfaces. Secondly, the wall region and windows are extracted. The remaining regions, excluding the wall region and windows, are considered as candidates for the entrance. Finally, the entrance is identified by its probabilistic model. Keywords: Probabilistic model, geometrical characteristics, entrance detection, multiple cues.
1 Introduction
We present a new method for detecting entrances in perspective images of outdoor environments using a CCD camera. It is important to find the entrance of buildings in the external environment: robots have to recognize the entrance for various navigation and manipulation tasks. The features of an entrance are similar to those of doors in indoor environments, such as vertical lines, horizontal lines, corners of composed elements and a rectangular shape [1,3,4,5,6,7]. Indoor door detection has been studied numerous times in the past. Approaches and methods differ in the type of sensors they use and in how they consider the variability of the environment and images. For example, in [3] doors are detected by sonar and vision sensors with range data and visual information. In [4-6] the authors use a CCD camera to obtain geometrical and color information of doors from indoor scenes. The authors of [4] use a fuzzy system to analyze the existence of doors and a genetic algorithm to improve the door region. In [5] the authors detect doors by a probabilistic method, using shape and color information of doors. In [7] the research is based on a laser sensor. For entrance detection [2], the authors detect building components
with the entrance in order to recognize the building. They use laser scanners and a CCD camera. Our laboratory has studied building recognition [8,9,10,11]. We use the algorithm developed in our laboratory for building recognition and look for the entrance in connection with window detection. Fig. 1 shows an overall overview of our full project, where surface, wall region and window detection have been done in our previous works. The entrance is considered in relation to the wall and windows: most entrances have a different color from the wall and different positions from the windows. We can also obtain information such as the floor height, which is estimated from two rows of detected windows. We focus on geometrical characteristics and multiple cues to recognize the entrance. Mostly, we acquire geometrical information about buildings from the windows detected in the image. Among the entrance candidates, some are rejected by comparing them with the geometrical information from windows. Then, in the remaining region, we use a probabilistic model to detect the entrance. The recognition step proceeds with the computation of the posterior probability P(Entrance | X, A), where X, A are the positions and lengths of lines and the appearance of the entrance. We assume that the entrance can be characterized by a small number of parameters θ reflecting the features of an entrance. The likelihood P(X, A | θ) can be evaluated given the image measurements.
Fig. 1. An overall overview of our full works
Section 2 describes the methods of building surface detection and wall region detection, which were done in previous works. Section 3 describes how to detect windows. Section 4 describes a method for rejecting regions which are not entrance candidates. Section 5 presents the probabilistic model; Section 6 shows the set of parameters. In Section 7, we explain the hypotheses and likelihood evaluation. Finally, Sections 8 and 9 present the entrance detection experiments and the conclusion, respectively.
2 Surface and Wall Region Detection
The processes for detecting building surfaces and estimating wall regions were explained in detail in our previous works [8,9,10,11]. First, we detected line segments and then roughly rejected the segments which come from scene elements such as trees, bushes and so on. The MSAC (m-estimator sample consensus) algorithm is used for clustering segments into the common dominant vanishing points, comprising one vertical and several horizontal vanishing points. The number of intersections between the vertical line and horizontal segments is counted to separate the building pattern into independent surfaces. Then we found the boundaries of each surface, shown as the green frame in the second row of Fig. 2. To extract the wall region, we used color information of all pixels in the detected surface [10].
3 Window Detection
For each surface, a binary image is constructed in which the non-wall regions are considered as candidates for windows and the entrance; the surface is rectified into a rectangular shape. Geometrical characteristics of each candidate, such as coordinates, centroid, height, width, bounding box area and aspect ratio, are calculated [12]. A candidate becomes a window when its geometrical characteristics satisfy a set of conditions; the boundaries are then re-calculated by iteratively re-weighted least squares over the boundary pixels. False positives (FP) and false negatives (FN) are reduced using context information. From the detected windows, the median area and centroid are calculated. An FP usually appears as a small window located at the bottom of the detected face, so it is rejected by its difference from the median centroid and area. An FN appears when a real window is occluded by trees, posts and so on. Each position of a rectangular region where three other windows are located should be
Fig. 2. Detection of building surface, wall region, windows and candidates of entrance (magenta color)
an FN. It will be recovered after checking the corresponding position in the remaining image. The results are shown in the third row of Fig. 2. After window detection, the remaining regions, marked in magenta, are considered as entrance candidates.
4 Rejection of Noise and Line Segment
We deal with the method of rejecting noise and extracting line segments from the rectified image. Geometrical characteristics such as the position of candidates and the relation between the height of a floor and the height of a window are used for noise reduction. After noise reduction, in the remaining region, the vertical and horizontal lines are extracted by a Hough transform based method.

4.1 Rejection of Noise
We considered three conditions for rejecting noise. Two of them concern the relative position of the entrance; the other one is associated with the height of the entrance, as follows.
Condition 1: the entrance is in the lowest position on the building surface.
Condition 2: the entrance is not above the second floor.
Condition 3: the height of the entrance is larger than the height of the windows.
We acquire the height of a floor (hf) as the distance between two rows of windows, as in Fig. 3. Similarly, we obtain the height of a window hw, the position of the second floor hwp, the height of a candidate hnw and the position of a candidate hnwp, as defined in (1) and Fig. 3. To reject noise we compare the information on candidates with the values obtained from windows, as defined by (2).
Fig. 3. Information of Geometrical characteristics: (a) detected window, (b) entrance candidate
\begin{cases}
h_w = x^{w}_{n,\max} - x^{w}_{n,\min} \\
h_f = x^{w}_{n,\min} - x^{w}_{n-1,\min} \\
h_{wp} = x^{w}_{n,\min} \\
h_{nw} = x^{nw}_{n,\max} - x^{nw}_{n,\min} \\
h_{nwp} = x^{nw}_{n,\min}
\end{cases}  (1)

h_{nw} > h_w \quad \text{and} \quad h_{nwp} < h_{wp}  (2)
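A minimal sketch of how this rejection test might be coded is given below; the dictionary field names and the image coordinate convention are assumptions of this illustration, not taken from the paper.

```python
def is_entrance_candidate(candidate, window_stats):
    """Return True when a candidate region survives conditions (1)-(2).

    window_stats: {'h_w': window height, 'h_wp': position of the second floor}
    candidate:    {'x_min': minimal x of the region, 'x_max': maximal x of the region}
    """
    h_nw = candidate['x_max'] - candidate['x_min']   # candidate height, Eq. (1)
    h_nwp = candidate['x_min']                       # candidate position, Eq. (1)
    # Eq. (2): taller than a window and positioned below the second floor
    return h_nw > window_stats['h_w'] and h_nwp < window_stats['h_wp']
```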
4.2 Line Segment
After noise reduction we extract lines from the remaining regions using the Hough transform, which converts lines into points in Hough space [14]. We compute the distance between the x-coordinates (respectively y-coordinates) of the two ends of a line; if this distance Di is less than a certain threshold (5 in this paper), the line is considered vertical. Similarly, we select horizontal lines. The reason why we extract the vertical and horizontal lines is that an entrance has strong vertical and horizontal lines.
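A hedged sketch of this vertical/horizontal selection is shown below; lines are assumed to be given as end-point pairs, and the fixed threshold of 5 pixels follows the text.

```python
def classify_lines(lines, threshold=5):
    vertical, horizontal = [], []
    for (x1, y1), (x2, y2) in lines:
        if abs(x1 - x2) < threshold:        # end points share (almost) the same x
            vertical.append(((x1, y1), (x2, y2)))
        elif abs(y1 - y2) < threshold:      # end points share (almost) the same y
            horizontal.append(((x1, y1), (x2, y2)))
    return vertical, horizontal
```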
5 Problem Formulation
We assume that our entrance model is described by a set of parameters θ. We do not consider a fully Bayesian method; we define a restricted, simple setting to compute P(θ|X, A) given the measurements X, A, which characterize the detected lines and color in the image. Assuming that our object of interest can be well described by a small set of parameters θ = (θL, θC), line and color parameters respectively, this posterior probability can be decomposed as

P(θ|X, A) ∝ P(X, A|θ)P(θ) = P(X, A|θL, θC)P(θL, θC) = P(X, A|θL, θC)P(θL)P(θC).  (3)

We consider the parameters θL and θC to be independent: the color of the entrance is independent of the line features. The interpretation of the final terms in (3) is as follows: P(θL) is the prior information on the line parameters of the entrance, for instance the number of intersections between vertical and horizontal lines, the distance between two horizontal lines (h in Fig. 4) and the ratio between half the floor size and h. P(θC) represents the prior knowledge about the color of the entrance. P(X, A | θL, θC) is the likelihood term of the individual measurements. We consider maximum likelihood values of the parameters θL and θC; for lines, they are given by a known model. The likelihood term can be further factored as

P(X, A|θL, θC) = P(A|X, θL, θC)P(X|θL, θC) = P(A|XL, θC)P(X|θL),  (4)

where colors and lines are independent of each other.
Fig. 4. Model of lines
6 Model Parameters
As mentioned before, our model of the entrance is described by a set of line and color parameters. While the line parameters are given, the color parameters are learned from histogram observations. Representation of lines: the line model can be characterized by the parameters θL = [nip, h, rv], where nip is the number of intersection points between vertical and horizontal lines, h is the distance between two horizontal lines, and rv is the ratio between half the size of a floor and h (see Fig. 4). Representation of color: the modeled color parameters θC can be learned from reference hand-labelled entrance segments. The color is represented by histograms computed for entrance image regions during training. We use the HSI and RGB color spaces. In RGB space, we used only the G channel because the distributions of the channels are similar to each other in the histograms shown in Fig. 5(a). In HSI space, we used the H and S channels at the same time for more robust detection under the influence of illumination, as in Fig. 5(b). We use values from 0 to 255 in the H and S channels for their histograms.
Fig. 5. (a) Sample entrance region and histograms (b) H vs. S histogram
7 Likelihood Computation

7.1 Evaluation of Line Likelihood
As described in Section 5, the line model has three parameters θL = [nip, hv, rv]. Therefore, the likelihood is a combination of three terms:

P(X|θL) = P(X|nip)P(X|hv)P(X|rv),  (5)

where X comprises the lines associated with the hypothesis, their intersection points and ratios. The first term, P(X|nip), assigns higher likelihood to hypotheses supported by a larger number of intersections. It is a discrete probability function defined as

P(X|nip) = 1 − 0.2(2 − nip), nip = [0, 1, 2].  (6)

The second term, P(X|hv), considers the ratio of the length of a vertical line; we take into account how much of that line is lost. The following line likelihood term is defined:

P(X|h_v) = e^{-(h - h_{uf} - h_{df})/h}.  (7)
h_{uf} (and h_{df}) is the length from the upper (lower) end of the line to the half of a floor. Generally, the front of an entrance is not occluded by objects like trees, bulletin boards and so on; therefore, we take the length of a vertical line into consideration. The third term, P(X|rv), takes into account the ratio between the distances from half of a floor to the upper and lower ends of a line. hd and hu are the basis lengths from half of a floor to the lower and upper intersection points, respectively.

P(X|r_v) = e^{-(h_u - h_{uf})/h_u}  (8)
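The three line-likelihood terms of Eqs. (5)-(8) could be coded roughly as below; the variable names follow the text, while how they are measured from the image is outside this sketch.

```python
import math

def line_likelihood(n_ip, h, h_uf, h_df, h_u):
    p_nip = 1.0 - 0.2 * (2 - min(n_ip, 2))          # Eq. (6), n_ip clipped to {0, 1, 2}
    p_hv = math.exp(-(h - h_uf - h_df) / h)         # Eq. (7)
    p_rv = math.exp(-(h_u - h_uf) / h_u)            # Eq. (8)
    return p_nip * p_hv * p_rv                      # Eq. (5)
```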
7.2 Evaluation of Color Likelihood
We integrate the probabilities of all pixels in the region delimited between each pair of lines, Xl. The color likelihood estimation considers all pixels in Xl as
Fig. 6. Results of entrance detection from the rectified surfaces
Fig. 7. Failure example: occlusion, decoration
described in Eq. (9), where T(H, S) is the number of pixels satisfying the condition given in Eq. (9) and c(Xl) is the total number of pixels of Xl:

P(Xc|θc, Xl) = T(H, S)/c(Xl); H ∈ (80, 135), S ∈ (40, 90), in HSI space.  (9)
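A hedged sketch of Eq. (9) is shown below: the fraction of pixels of the region Xl whose H and S components (scaled to 0-255) fall inside the given intervals. The array layout (pixels × 3 HSI values) is an assumption of this example.

```python
import numpy as np

def color_likelihood(region_hsi):
    H, S = region_hsi[:, 0], region_hsi[:, 1]
    inside = (H > 80) & (H < 135) & (S > 40) & (S < 90)   # condition of Eq. (9)
    return inside.sum() / float(len(region_hsi))          # T(H, S) / c(X_l)
```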
The components H, S in HSI space are normalized into the range [0, 255]. Fig. 6 shows several examples of entrance detection on the rectified surfaces. The result is then transferred back into the original images by the homogeneous matrix which was obtained when we rectified the surfaces.
8 Experimental Results
The proposed method has been tested on a variety of images containing entrances; Fig. 7 shows several examples. For each building, the top row is the original image; the middle one shows the results of surface and window detection and the candidate region of the entrance; the last one shows the result of entrance detection, marked by a red region and a blue frame. The results show that our method is robust to noise such as trees, posters or even undetected windows (as in building 2 and building 4 in Fig. 7). In building 5, only a part of the entrance is detected because of light and many bright objects inside the room. Detection fails for the last building because its door is occluded by a car, which makes it difficult to recognize even by eye. Our database contains 30 detected surfaces containing doors. The method obtained 26 correct cases, including one partial entrance detection (building 5).
9 Conclusion
The proposed method performs entrance detection for understanding and exploring outdoor environments. We have presented a new technique for recognizing entrances using visual information. The model of lines and color is described by a set of parameters obtained from the candidate region. We use constraints of man-made environments to generate multiple hypotheses of the model and evaluate their likelihood. Furthermore, the main components of buildings (wall region, windows and entrance) are analyzed for understanding buildings and exploring the outdoor environment. We hope this will be applied to autonomous robot navigation, because buildings are good landmarks for a robot in an urban environment. As future work, we plan to make the detection algorithm robust to illumination and color variations, and we are going to integrate texture information. Acknowledgments. The authors would like to thank Ulsan Metropolitan City, MKE and MEST, which have supported this research in part through the NARC, the Human Resource Training project for regional innovation through KIAT, the post BK21 project at the University of Ulsan, and the Ministry of Knowledge Economy under the Human Resources Development Program for Convergence Robot Specialists.
References 1. Ali, H., Seifert, C., Jindal, N., Paletta, L., Paar, G.: Window Detection in Facades. In: 14th International Conf. on Image Analysis and Processing (2007) 2. Schindler, K., Bauer, J.: A model-Based Method For Building Reconstruction. In: Proc. of the First IEEE International Workshop on Higher-Level Knowledge in 3D Modeling and Motion Analysis (2003) 3. Stoeter, S.A., Le Mauff, F., Papanikopoulos, N.P.: Real-Time Door Detection in Cluttered Environments. In: 2000 Int. Symposium on Intelligent Control, pp. 187– 192 (2000)
4. Munoz-Salinas, R., Aguirre, E., Garcia-Silvente, M.: Detection of doors using a generic visual fuzzy system for mobile robots. Auton Robot 21, 123–141 (2006) 5. Murillo, A.C., Kosecka, J., Guerrero, J.J., Sagues, C.: Visual Door detection integrating appearance and shape cues. In: Robotics and Autonomous Systems, pp. 512–521 (2008) 6. Lee, J.-S., Doh, N.L., Chung, W.K., You, B.-J., Youm II, Y.: Door Detection Algorithm of Mobile Robot In Hallway Using PC-Camera. In: Proc. of International Conference on Automation and Robotics in Construction (2004) 7. Anguelov, D., Koller, D., Parker, E., Thrun, S.: Detecting and Modeling Doors with Mobile Robots. In: Proc. of the IEEE International Conf. on Robotics and Automation, pp. 3777–3784 (2004) 8. Trinh, H.H., Kim, D.N., Jo, K.H.: Structure Analysis of Multiple Building for Mobile Robot Intelligence. In: Proc. SICE (2007) 9. Trinh, H.H., Kim, D.N., Jo, K.H.: Urban Building Detection and Analysis by Visual and Geometrical Features. In: ICCAS (2007) 10. Trinh, H.H., Kim, D.N., Jo, K.H.: Supervised Training Database by Using SVDbased Method for Building Recognition. In: ICCAS (2008) 11. Trinh, H.H., Kim, D.N., Jo, K.H.: Facet-based multiple building analysis for robot intelligence. Journal of Applied Mathematics and Computation (AMC) 205(2), 537–549 (2008) 12. Trinh, H.H., Kim, D.N., Jo, K.H.: Geometrical Characteristics based Extracting Windows of Building Surface. In: Huang, D.-S., Jo, K.-H., Lee, H.-H., Kang, H.-J., Bevilacqua, V. (eds.) ICIC 2009. LNCS, vol. 5754, pp. 585–594. Springer, Heidelberg (2009) 13. Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification. John Wiley and Son, Inc., Chichester (2001) 14. Shapiro, L.G., Stockman, G.C.: Computer Vision. Prentice Hall, Englewood Cliffs (2001) 15. Kim, D.N., Trinh, H.H., Jo, K.H.: Objects Segmentation Using Multiple Features for Robot Navigation on Outdoor Environment. International Journal of Information Acquisition 6(2), 99–108 (2009)
Spectrum Sharing with Buffering in Cognitive Radio Networks Chau Pham Thi Hong, Youngdoo Lee, and Insoo Koo School of Electrical Engineering, University of Ulsan 680-749, San 29, Muger 2-dong, Ulsan, Republic of Korea
[email protected]
Abstract. Cognitive radio networks have been proposed as a solution to fully utilize the valuable spectrum. In the absence of primary users (PUs), secondary users (SUs) have the same right to use the spectrum. With the spectrum sharing technique, access to the cognitive radio network should be coordinated to prevent multiple secondary users from colliding. In this paper, we propose a new spectrum sharing scheme with buffering for new SUs and interrupted SUs. A Markov model is developed and analyzed to derive the performance of the proposed spectrum sharing scheme in both the primary system and the secondary system. In particular, the blocking probability, the forced termination probability, the non-completion probability and the waiting time are calculated for SUs. Simulation results show that the proposed approach can reduce the blocking probability greatly with only a very small increase in the forced termination probability. Keywords: cognitive radio, spectrum sharing, Markov chain, spectrum handoff.
1 Introduction
The cognitive radio network is a smart radio network. It has the ability to achieve spectrum efficiency by enabling interactive wireless users [1, 2]. In CR networks, the emerging area of dynamic spectrum access addresses the spectrum shortage by allowing so-called secondary users to transmit in assigned bands without generating interference. A primary system and a secondary system coexist in each spectrum band. The primary system refers to the licensed system with an exclusively assigned frequency; it has the unlimited right to access the allocated spectrum range whenever it has packets to transmit. The secondary system refers to the unlicensed system, which can only opportunistically access the spectrum holes currently unoccupied by the primary system. A subscriber in the primary system is called a primary user (PU) and a subscriber in the secondary system is called a secondary user (SU). SUs need to be aware of the state of the channels, in the sense that they sense and learn the surrounding environment. When an SU is using a sub-channel and a PU appears on that sub-channel, the SU has to
Corresponding author.
vacate that sub-channel. At that time, the SU will establish a new connection in another available spectrum band, which is called spectrum handoff. Cognitive radio networks have caught the attention of many researchers, and several resource management schemes for cognitive radio networks have been proposed to improve system performance [4, 5]. In [4], a spectrum sharing system with channel reservation was proposed. A Markov chain was used for the system analysis, and the forced termination probability and the blocking probability were derived. With this system, an SU request is rejected directly if there is no available resource, which leads to an increase in the blocking probability. In another work [5], dynamic spectrum access schemes in the absence or presence of a buffering mechanism for the SUs were proposed and analyzed. Performance metrics for the SU were developed with respect to blocking probability, interrupted probability, forced termination probability, non-completion probability and waiting time. However, this scheme only utilizes the buffer for new SUs. This may not be desirable for interrupted SUs, nor for the CR network, since priority should be given to interrupted SUs over new SUs due to service disruption. In a CR network, spectrum handoff is allowed, and interrupted SUs can immediately switch to idle sub-channels elsewhere. This re-allocation of sub-channels can be performed by the access control. Hence, if there are free sub-channels available, forced termination will not happen. To utilize the properties of spectrum handoff, and further to overcome the drawbacks of the studies mentioned above, we propose a new spectrum sharing scheme for both new SUs and interrupted SUs to improve the spectrum utilization of CR networks. In the proposed scheme, spectrum sharing with a finite buffer for both the new SUs and the interrupted SUs is considered. That is, when new SUs or interrupted SUs arrive at the system and there is no available sub-channel and the buffer is not full, the interrupted SU as well as the new SU will be put into the queue, and SUs at the head of the line of the buffer will get released sub-channels when a PU or SU completes its service and releases some sub-channels. For the performance analysis of the proposed scheme, a two-dimensional Markov model is developed. Based on this, we derive the forced termination probability, the blocking probability, the non-completion probability and the waiting time of SUs. The interaction between the system parameters is also explored. The rest of this paper is organized as follows: the system model used in the paper is presented in Section 2. In Section 3, we present the proposed scheme for new SUs and interrupted SUs. The performance measures are provided in Section 4. Simulation results are shown in Section 5. Finally, the paper is concluded in Section 6.
2 Spectrum Sharing with Buffering
In this section, a spectrum sharing scheme is proposed for providing a merged service for both PUs and SUs. This scheme allows an SU to use the region reserved for the PU, but the SU has to vacate the channel when a PU appears in that channel.
2.1 System Description
Fig. 1 shows the system model for the cognitive radio network. We define a channel as a bandwidth unit used in the PU system, and a sub-channel as a bandwidth unit used in the SU system. According to these terms, a PU needs one channel for service and an SU needs one sub-channel for service. In the figure, M denotes the number of channels; each channel is divided into N sub-channels. To avoid interference with PUs, an SU can use a sub-channel only if no PU is present in that channel. The scheme has a finite waiting queue for interrupted SUs and new SUs; Q is the length of the buffer. New SU requests and SUs interrupted by a newly appearing PU are put into the buffer when all sub-channels are occupied. We also use access control to serve call requests from PUs and SUs.
Fig. 1. System model
2.2 Traffic Model
The offered traffic is modelled with two random processes per radio system. The arrival traffic is modelled as a Poisson random process with rates λs and λp for the SU and PU, respectively. The radio system access durations of the SU and PU are negative-exponentially distributed with mean times 1/μs and 1/μp, so the departures of SUs and PUs form further Poisson random processes with rates μs and μp, respectively. The assumption here is that for each type of radio system we have the same traffic load and occupation time.
3 Spectrum Sharing with Buffering for New SUs and Interrupted SUs
In this section, we describe the proposed spectrum sharing with buffering for new SUs and interrupted SUs in more detail. Instead of utilizing the buffer only for new SUs, the buffer is also used for interrupted SUs. In the proposed CR network, an SU can overflow into the radio resource region reserved for the PU. However, the SU has to leave the sub-channel when a PU arrives; in that case the SU is preempted and moved to the queue if there is no resource and the buffer is not full. In addition, a new SU request can be saved in the queue if there is no available sub-channel in the system but an empty place in the buffer. In order to process the collected information from SUs and PUs, access control is used. Based on the data on channel states, the access control makes a decision whether to accept or reject the PU and SU call requests. When a PU or SU completes its service and releases some sub-channels, the access control also allocates the released sub-channels to the SUs at the head of the line of the queue.

3.1 Call Arrival and Call Completion
We present flow charts for PU and SU call arrivals. Based on the bandwidth conditions, a PU call arrival or an SU call arrival can be accepted or rejected. When a PU completes its service, a channel is released; this means that N sub-channels become available and the SUs at the head of the line of the queue will get the released sub-channels. One sub-channel becomes free if an SU completes its service; in this case the SU at the head of the line of the queue will use that released sub-channel. For the performance analysis, we denote by i the total number of sub-channels used by SUs, by j the total number of channels used by PUs, and by k the total number of SU requests saved in the buffer.

3.2 Analytic Model
To improve the performance of the system by decreasing the SU blocking probability, a finite buffer of length Q is used for both new SUs and interrupted SUs. When an SU arrives and there is no available sub-channel in the system, the SU call request is put in the queue. Also, an SU preempted by a newly arriving PU is put in the queue. Let (i, j, k) represent the system state. The state space ΓB in this case becomes

ΓB = {(i, j, k) | 0 ≤ i ≤ NM; 0 ≤ j ≤ M; 0 ≤ i + jN ≤ NM; 0 ≤ k ≤ Q}.

These possible states can be divided into four sub-state regions, as in Fig. 2:

Ωa ≡ {(i, j, k) ∈ ΓB | i + jN ≤ N(M − 1), k = 0}  (1)

Ωb ≡ {(i, j, k) ∈ ΓB | N(M − 1) < i + jN ≤ NM, k = 0}  (2)
Fig. 2. Rate diagram of state (i, j, k) with buffering
Ωc ≡ {(i, j, k) ∈ ΓB | i + jN = NM, k = 0}  (3)

Ωd ≡ {(i, j, k) ∈ ΓB | i + jN = NM, k > 0}  (4)
Noting that the total rate flowing into each state is equal to that flowing out, we can obtain the steady-state balance equation for each state as follows. When the current state (i, j, k) belongs to sub-state region Ωa, the total number of busy sub-channels is no greater than N(M − 1), and there is no interrupted SU (thanks to the spectrum handoff mechanism) when a PU arrives at the system. From Fig. 2(a), the steady-state balance equation is given as

(λs + iμs + λp + jμp) p(i, j, 0) = λs p(i − 1, j, 0) + (i + 1)μs p(i + 1, j, 0) + λp p(i, j − 1, 0) + (j + 1)μp p(i, j + 1, 0),  (5)
where p(i, j, k) is the steady-state probability of a valid state (i, j, k) in the state space ΓB. When the current state (i, j, k) belongs to sub-state region Ωb, some SUs will be interrupted when a PU arrives. The interrupted SUs can be saved in the queue if the buffer is not full. From Fig. 2(b), the steady-state balance equation is given as

(λs + iμs + λp 1((M−1−j)N, j+1, i−(M−j−1)N) + jμp) p(i, j, 0) = λs p(i − 1, j, 0) + (i + 1)μs p(i + 1, j, 0) + λp p(i, j − 1, 0) + (j + 1)μp \sum_{m=i-(N-1)}^{(M-j-1)N} p(m, j + 1, i − m),  (6)
where 1(i, j, k) is equal to one if the state (i, j, k) is a valid state in the state space ΓB, and zero otherwise. When the current state (i, j, k) belongs to sub-state region Ωc, the steady-state balance equation is given as

(λs + iμs + λp 1(i−N, j+1, N) + jμp) p(i, j, 0) = λs p(i − 1, j, 0) + iμs p(i, j, 1) + (j + 1)μp 1(i−N, j+1, N) p(i − N, j + 1, N) + λp p(i, j − 1, 0).  (7)

When the current state (i, j, k) belongs to sub-state region Ωd, the steady-state balance equation is given as

(λs + iμs + λp 1(i−N, j+1, k+N) + jμp 1(i+min(k,N), j−1, k−min(k,N))) p(i, j, k) = λs p(i, j, k − 1) + iμs p(i, j, k + 1) + (j + 1)μp 1(i−N, j+1, k+N) p(i − N, j + 1, k + N) + λp p(i + N, j − 1, k − N).  (8)
Moreover, if the total number of all valid states is n, there are (n − 1) linearly independent balance equations, and the summation of all steady-state probabilities satisfies the normalization equation \sum_{(i,j,k)\in Γ_B} p(i, j, k) = 1. Thus a set of n linearly independent equations is formed as follows:

Π Q = P,  (9)
where Π is the vector of all states, Q is the transition rate matrix, and P = [0, ..., 1]. The dimensions of Π, Q and P are 1 × n, n × n and n × 1, respectively. All steady-state probabilities are obtained by solving Π = P Q^{-1}.
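A hedged numerical sketch of solving the balance equations of Eq. (9) for a generic finite Markov chain is given below; building Q from the transition rates of Fig. 2 is assumed to happen elsewhere, and replacing one balance equation by the normalisation constraint is a standard numerical device rather than a quote from the paper.

```python
import numpy as np

def steady_state(Q):
    """Return the steady-state probability vector pi satisfying pi*Q = 0, sum(pi) = 1."""
    n = Q.shape[0]
    A = Q.copy().T                  # rows: one balance equation per state
    A[-1, :] = 1.0                  # replace one equation by sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)
```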
4 Performance Estimate
To evaluate the performance of the proposed scheme, we calculate several performance metrics such as the blocking probability, the interrupted probability, the forced termination probability and the non-completion probability. Firstly, let us consider the blocking probabilities of PU and SU calls. An SU call will be blocked when all channels and sub-channels are occupied and the buffer is full. The SU blocking probability can be calculated according to the following equation:

P_{bs} = \sum_{\{(i,j,k)\in Γ_B \mid i+jN = NM,\ k = Q\}} p(i, j, k).  (10)
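A hedged sketch of Eq. (10): sum the steady-state probabilities of all states in which every sub-channel is busy and the buffer is full. The state list (one (i, j, k) tuple per entry of the probability vector) is an assumption of this example.

```python
def blocking_probability(pi, states, N, M, Q):
    return sum(p for p, (i, j, k) in zip(pi, states)
               if i + j * N == N * M and k == Q)
```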
After an SU is accepted, it can still be interrupted due to the presence of a PU. Hence, let us denote the interrupted probability of an SU call by Pint. Two situations have to be considered to calculate Pint. In the first one, the current state (i, j, k) is such that i + jN lies in [N(M − 1), NM); in this case the number of interrupted SUs equals i + jN − N(M − 1), and the corresponding interrupted probability Pint1 (Eq. (11)) is obtained by summing the appropriately weighted steady-state probabilities p(i, j, k) over the states {(i, j, k) ∈ ΓB | N(M − 1) < i + jN}. Combining both situations yields the overall interrupted probability Pint given by Eq. (13).
When an SU is using a sub-channel and a PU appears in that sub-channel, the SU will be forced to terminate if the buffer is full. The forced termination
represents the disruption of an SU in service. Let Pft denote the forced termination probability of an SU, and let t_s denote the service time of an SU. Let t_{p,1} denote the period from the moment the SU service starts to the instant of the immediately following PU arrival. Moreover, let t_{p,h} (h = 2, 3, ...) denote the period between PU arrivals after the first PU arrival. Then t_{p,h} follows the exponential distribution with mean 1/λp. The forced termination probability then becomes

P_{ft} = \sum_{h=1}^{\infty} \Pr\left(t_s > \sum_{u=1}^{h} t_{p,u}\right) (1 - P_{int})^{h-1} P_{int},  (14)

where we define ζ_h = \sum_{u=1}^{h} t_{p,u}. Then ζ_h is an Erlang-distributed random variable with parameters λp and h. Its probability density function (pdf) f_{ζ_h}(t) and the Laplace transform of its pdf f^*_{ζ_h}(s) are given by

f_{\zeta_h}(t) = \frac{\lambda_p (\lambda_p t)^{h-1}}{(h-1)!} e^{-\lambda_p t}; \qquad f^*_{\zeta_h}(s) = \left(\frac{\lambda_p}{s + \lambda_p}\right)^h.  (15)

In (14), we have

\Pr\left(t_s > \sum_{u=1}^{h} t_{p,u}\right) = \int_0^{\infty} f_{\zeta_h}(t)\left[1 - F_{t_s}(t)\right] dt = \left(\frac{\lambda_p}{\mu_s + \lambda_p}\right)^h.  (16)

Substituting the result above into (14), we obtain

P_{ft} = \sum_{h=1}^{\infty} \left(\frac{\lambda_p}{\mu_s + \lambda_p}\right)^h (1 - P_{int})^{h-1} P_{int} = \frac{\lambda_p P_{int}}{\mu_s + \lambda_p P_{int}}.  (17)
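A small numeric illustration of Eqs. (17)-(18) is shown below; the parameter values are arbitrary examples, not results from the paper.

```python
lam_p, mu_s = 0.5, 1.0      # PU arrival rate, SU service rate (assumed values)
P_int, P_bs = 0.10, 0.05    # assumed interrupted and blocking probabilities

P_ft = lam_p * P_int / (mu_s + lam_p * P_int)   # Eq. (17)
P_nc = P_bs + (1 - P_bs) * P_ft                 # Eq. (18)
print(P_ft, P_nc)
```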
One of the important performance metrics is the non-completion probability of an SU, Pnc, which is the probability that an SU call is not successfully completed. An SU call is not successful when it is blocked at the moment of its arrival or when it is forced to terminate due to the lack of sub-channels and buffer space. Therefore, Pnc can be calculated as

P_{nc} = P_{bs} + (1 - P_{bs}) P_{ft}.  (18)
When new SU and interrupted SU calls are buffered, the calls must wait in the queue until they get an available sub-channel. Here, let us define the average waiting time W of an SU as the average time that an SU call waits in the queue until it gets service. Then, according to Little's theorem [9], the average waiting time can be calculated as W = L_q/(λ_s P_buffering), where L_q is the mean number of SU calls in the queue, obtained from the states {(i, j, k) ∈ ΓB | i + jN = NM, 0 < k ≤ Q}, and P_buffering is the probability that an arriving SU call is buffered.

[...] > 1 because for i = 1 the problem becomes trivial. Resources are given in some abstract entities. We denote a set of agents A = <A1, ..., Aa>, where each agent Ai is represented as Ai = <U1, ..., Un>, Ui ∈ N. A value p at position Uj means that agent i has p resources (skills) necessary for the realization of task j. By K we denote a partition of the set of agents such that:
– each element Ci ∈ K represents a coalition of agents,
– each element Ci is a set of agents: Ci ⊆ A, in particular Ci ≠ ∅,
– for i ≠ k, any two elements Ci, Ck ∈ K satisfy Ci ∩ Ck = ∅,
– \bigcup_{i=1}^{n} Ci = A, n = card(K),
– agent A ∈ Ci means that agent A uses his resources (skills) for task i,
– K represents the partition with the maximal coalition value (the best one so far).
We define the total ability of coalition Ci to solve task i as the sum of the resources (skills) of all its members. Formally, the total ability of coalition Ci is δi = \sum_{A_t \in C_i} U_i[A_t]. Task i is solved by a given coalition if δi > Ti. For such a CFP we propose a mixed method consisting of two phases, in which we utilize the advantages of two different approaches: 1. the ability to obtain anytime solutions of known quality (bound) – Sandholm's algorithm; 2. the ability to find a satisfactory solution in a relatively short period of time, even for a very large solution space – an Evolutionary Algorithm. In the latter case, we do not have any knowledge concerning the distance between the found solution and the optimal one. First we use the Sandholm algorithm, with a modification made on the basis of the properties of the considered task. One can find different methods of coalition formation in multi-agent systems, mainly focused on coalition structure formation. In this paper we focus on three categories of methods [3] (for more information about different approaches see [3,4,5,6]):
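A hedged sketch of this task-oriented coalition model is given below: the total ability of a coalition and the number of solved tasks for a given agent-to-task assignment. The data layout (agents as resource lists, an assignment list) is an assumption of this illustration.

```python
def total_ability(coalition, agents, task):
    """delta_i: sum of the resources of coalition members for the given task."""
    return sum(agents[a][task] for a in coalition)

def solved_tasks(assignment, agents, tasks):
    """Count tasks i with delta_i > T_i for a given agent-to-task assignment."""
    solved = 0
    for i, requirement in enumerate(tasks):
        coalition = [a for a, t in enumerate(assignment) if t == i]
        if total_ability(coalition, agents, i) > requirement:
            solved += 1
    return solved

# Example: 3 agents, 2 tasks; agent resources per task and task requirements
agents = [[3, 1], [2, 4], [1, 2]]
tasks = [4, 5]
print(solved_tasks([0, 1, 1], agents, tasks))   # agents 1 and 2 solve task 1
```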
1. algorithms with low computational complexity that guarantee finding the optimal coalition structure,
2. quick algorithms that do not guarantee finding the optimal coalition structure,
3. anytime algorithms that guarantee the quality of the found solution.
Ad. 1. Intuitively, algorithms from this class seem to be an ideal solution to CFPs, because they assure the optimal coalition structure. However, searching for the optimal solution usually requires an unacceptable computation time. Often we are interested not exactly in the optimal solution, but rather in a 'good enough' solution obtained in a reasonable time. Combinatorial problems, in which a subproblem occurs frequently and its solution is common for a certain class of problems, can be solved by dynamic programming methods. A detailed discussion of such an approach is given in [7]. The authors of [8] present an algorithm for determining the revenue-maximizing set of non-conflicting bids [3].
Ad. 2. The ability to work with large problems and return 'good' solutions is the main advantage of these algorithms, but they do not guarantee the quality of the achieved solutions. Metaheuristic approaches, such as Simulated Annealing, Evolutionary Computation and Neural Networks, belong to this group of methods [9], [10]. In [10] the authors propose an Evolutionary Algorithm with a standard binary coding schema and classic genetic operators. The disadvantage of this approach is the use of excessive chromosomes. The authors of [9] present a competitive method; the main drawback of this approach is that the crossover operator produces incorrect solutions and some repair procedure has to be included. We propose an Evolutionary Algorithm intended to combine the desirable features of both mentioned evolutionary algorithms (Section 2.2).
Ad. 3. A method which, when stopped after an acceptable time, gives us an evaluation of the quality of the obtained solution is very desirable. The two methods presented in [11] and [12] can give, at any time and on demand, an evaluation of the currently found solution. Additionally, they find the optimal solution by searching the whole coalition structure graph. The main drawback of both algorithms is that in the majority of cases they are ineffective. The Sandholm algorithm proposed in [12] provides the backbone of our mixed method (SanEv, Section 2).
2 Mixed Sandholm-Evolutionary Method (SanEv)
As mentioned in Section 1, the deterministic part of the proposed method is the Sandholm Algorithm (SanA). Its description, together with proofs of the theorems, can be found in [12] and [13]. The next subsection gives the idea of the Sandholm Algorithm and its modification, which makes it more suited to the considered problem. The second subsection briefly presents the proposed evolutionary approach.

2.1 Sandholm Algorithm (SanA)
We have a set A of agents, where a = |A|. A coalition structure CS is defined as a partition of the agents A; each agent belongs to exactly one coalition. vS denotes the value of coalition S. We define V(CS), the value of the coalition structure CS, as

V(CS) = \sum_{S \in CS} v_S.  (1)
M denotes the set of all possible coalition structures. Searching a part of the coalition structure graph (N ⊆ M) (Fig. 1), we find the coalition structure which guarantees the highest value among all coalition structures seen so far:

CS^*_N = \arg\max_{CS \in N} V(CS).  (2)

Also, we are looking for k, the worst-case bound of the coalition structure value:

k = \min\{\kappa\},  (3)

where (the proof can be found in [12] and [13])

\kappa = a \ge \frac{V(CS^*)}{V(CS^*_N)}.  (4)

The goal of the coalition formation problem is to maximize the social welfare of the agents A by finding a coalition structure

CS^* = \arg\max_{CS \in M} V(CS).  (5)
Calculation of the bound requires the following assumptions: (1) the value of each coalition does not depend on the values of other coalitions, (2) the value of a coalition is nonnegative and finite: ∀S∈M vS ≥ 0 ∧ vS < ∞. Four agents constitute 15 possible coalition structures. The coalition structure graph (Fig. 1) consists of four levels LV1, ..., LV4; LV1 is the lowest one and LV4 is the last, highest one. It is proved ([12] and [13]) that searching the two lowest levels of the graph is sufficient to evaluate the bound k: k = a; the number of checked nodes is m = 2^{a−1}. It is also proved that after searching level LV_l the bound k(m) is ⌈a/h⌉, where h = ⌊(a − l)/2⌋ + 2. This bound is tight.
Fig. 1. Coalition structure graph for four agents (after [3])
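The quantities quoted above can be illustrated by the short sketch below: the number of possible coalition structures (a Bell number), the node count of the two lowest graph levels, and the bound after searching a high level, as reconstructed here from [12]; the helper functions are assumptions for this example.

```python
import math

def bell(a):
    """Number of coalition structures of a agents (Bell number), via the Bell triangle."""
    row = [1]
    for _ in range(a - 1):
        new = [row[-1]]
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[-1]

def bound_after_level(a, l):
    """Worst-case bound ceil(a/h), h = floor((a-l)/2) + 2, after searching level LV_l."""
    h = (a - l) // 2 + 2
    return math.ceil(a / h)

print(bell(4))                    # 15 coalition structures for four agents
print(2 ** (4 - 1))               # nodes in the two lowest levels; bound after them: k = a
print(bound_after_level(4, 4))    # bound after additionally searching the top level LV_a
```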
The Sandholm algorithm is general, but we can add some assumptions suited to the considered task. In our case, not only the form of a coalition influences the coalition value vS, but also the assignment of the coalition to the tasks. As a consequence, in each node of the coalition structure graph we have to consider all possible assignments, and the highest value is taken as the resulting coalition value. When the number of agents is greater than the number of tasks, some coalition structures have value V(CS) = 0. Let us assume that we have processed levels LV1 and LV2 of the graph; then the minimal value V(CS*_N) ≥ 0. We denote the number of agents by a, the number of tasks by n, the current level by l, and let n < a. With the above, at each level l with n < l ≤ a, at least one coalition has no task assigned and its value is equal to 0. Obviously, such a coalition structure cannot be better than the best found so far. Therefore, after processing the first two levels of the coalition structure graph, the level LVmin(a,n) is the next that should be processed. The bound on the solution quality after processing level LV2 is given by Sandholm's theorem. The bounds coming from levels LVa to LVmin(a,n)+1 cannot be accepted, because for these levels the value of the correct coalitions depends on the mistakenly assigned coalition (with no tasks), which violates the condition of independence of the coalition value from the actions of other coalitions. Thereby, an improvement of the quality bound can be determined only after completion of the search of level LVmin(a,n).

2.2 Evolutionary Algorithm (EA)
The proposed evolutionary algorithm (EA) tries to avoid the disadvantages mentioned in Section 1. It proceeds as follows:
1. Create an initial population as a randomly generated set of μ individuals.
2. Repeat until the stop condition is met:
3. Create the offspring (auxiliary) population consisting of λ (or λ + 1, if μ ≡ 1 (mod 2)) individuals in the following way:
(a) choose randomly two individuals from the μ individuals of the current population (roulette wheel selection method) [14],
(b) apply the crossover and mutation operators to these individuals,
(c) put the modified individuals into the auxiliary population,
4. create a tentative population by joining the current and the auxiliary populations,
5. find the best individual in the tentative population and put its copy into the next generation,
6. choose (using the roulette wheel method) one individual and put its copy into the next generation population until its size is μ,
7. check the stop condition: if false go to point 2, if true return the best found so far individual as a result.
As the stop criterion we can use: an assumed number of generations; an assumed computation time; the quality of the best found solution; an assumed number of generations without improving the solution, etc. Coding schema. Each individual represents a partition of the set A together with the assignment of agents to tasks, and it is encoded as a vector vi:
vi = <d1, ..., dq>, di ∈ N ∧ di ∈ [1, card(T)] ∧ q = card(A). A value t at position r in vi denotes that the r-th agent is solving the t-th task. Agents solving the same task form a coalition. Evaluation of individuals – fitness function (fitVal(ind)). Our goal is to maximize the number of performed tasks; however, the fitness function should differentiate individuals and steer the evolution towards better solutions. After a number of experiments we have defined a fitness function which takes into account the number of solved tasks and the utilization of agents' resources:
1. Set the fitness value fitVal of the evaluated individual ind to the number of all tasks numberOfTasks: fitVal(ind) = numberOfTasks;
2. Repeat for each coalition Ci represented in the individual ind:
(a) if the total ability δi of coalition Ci is lower than the required resources of task Ti (δi < Ti), then change the value of fitVal according to the formula: fitVal = fitVal − 1.2 + max(0.2, δi/Ti);
(b) if δi = Ti then fitVal = fitVal − min(0.05, (δi − Ti)/Ti);
(c) if δi > Ti then fitVal = fitVal − 0.05;
3. Return fitVal.
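A hedged sketch of this fitness computation is given below; the agent and task data structures follow the earlier illustration and are assumptions (0-based task indices), not the authors' code.

```python
def fit_val(individual, agents, tasks):
    """individual[r] = index of the task agent r works on (0-based here)."""
    value = float(len(tasks))                       # step 1: number of all tasks
    for i, T_i in enumerate(tasks):                 # step 2: every coalition C_i
        coalition = [a for a, t in enumerate(individual) if t == i]
        delta_i = sum(agents[a][i] for a in coalition)
        if delta_i < T_i:                           # (a) task not solved
            value += -1.2 + max(0.2, delta_i / T_i)
        elif delta_i == T_i:                        # (b) resources exactly match
            value -= min(0.05, (delta_i - T_i) / T_i)
        else:                                       # (c) surplus of resources
            value -= 0.05
    return value                                    # step 3
```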
2.3 Mixed Method – SanEv
The mixed Sandholm-Evolutionary method (SanEv) starts with the Sandholm algorithm (SanA), which has to process at least the first two levels of the coalition structure graph (LV1 and LV2 in Fig. 1). It is stopped when the whole search space has been processed or when the assumed time constraint occurs. After checking l levels of the coalition structure graph, SanA gives us the bound k, which uniquely defines the theoretical maximum of the coalition value in the worst case. This maximum is vmax(l) = k · v*_N; it can be calculated from k and v*_N only when the whole considered level l of the graph has been processed. This phase of our method produces two values: (1) the best coalition structure together with its assigned tasks (from all coalition structures seen so far), CS*_N, and (2) the theoretical maximum of the coalition structure value. After finishing the first two levels of the graph,
SanA can be stopped either when it finishes searching the whole space (the whole graph) or when the time constraint occurs. In the second case, the Evolutionary Algorithm (EA) is run. Based on our earlier study of the EA, the initial population is generated as 10 copies of the best solution found by SanA and 10 mutated copies of it; the remaining individuals are generated randomly. The EA runs for an assumed number of generations. Summing up, the Sandholm algorithm, despite its computational complexity and the combinatorial explosion, has two important advantages:
– it allows lowering the bound on the maximal coalition value,
– it allows a slow but deterministic search for better and better solutions in the space of coalition structures.
The evolutionary algorithm exhibits the following main features:
– it finds good, close-to-optimal solutions in a relatively short time,
– it is not able to give a theoretical bound on the coalition structure value.
To evaluate the quality of the solutions obtained by the EA we use the results of SanA. As mentioned, the first phase allows us to determine the lowest value of the upper bound k on the optimal coalition value vmax. Thus, at the end of the EA, we know the value of v*_N (the best one found so far), and we can state a bound on the solution for the worst case (we call it the quality of the solution) as:

q = (v*_N / vmax) · 100%.
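A small helper, purely illustrative (names are ours), shows how the worst-case quality reported later in the experiments is obtained from the two phases:

```python
def worst_case_quality(v_best: float, v_max: float) -> float:
    """Quality q of the best solution found so far, in percent.

    v_best -- value v*_N of the best coalition structure seen so far
    v_max  -- smallest theoretical maximum k(l) * v*_N established so far
              (fixed when a level has been fully processed, and only
              replaced if a later level yields a smaller product)
    """
    return 100.0 * v_best / v_max

# Experiment 2: after LV2 the maximum is 10 * 2 = 20; once the optimum 10 is
# found at LV10, the guaranteed quality is 100 * 10 / 20 = 50%.
```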
3 Experimental Study of the SanEv Method
We have designed a number of special cases to test the influence of the input data on the results produced by our mixed Sandholm-Evolutionary method (SanEv). The experiments were run on a computer with an Intel Pentium Core 2 Duo processor and 2 GB RAM, under the Ubuntu Linux 8.04 (Hardy Heron) operating system. The EA parameters were constant in all runs: number of generations – 500; population size – 100; crossover probability – 1; mutation probability – 0.005%. There is no room for a wide presentation of our study, so we focus on some experiments that indicate weak and strong sides of the proposed method. From the game-theoretic point of view, coalition formation can be perceived as an N-person game in characteristic function form [16]: we have a set of players N and a function v which assigns a value v(S) to each subset S ⊆ N. A player is an agent, and the characteristic function is the coalition value function. Two cases with different properties are distinguished in game theory: superadditive and subadditive games. Superadditive games promote the formation of large coalitions, up to the 'grand coalition' of all agents. Subadditive games promote the formation of very small coalitions, consisting of single agents. Such problems seem to be adequate for
SanEv efficiency testing. Other data were designed to test the method's efficacy for CFPs with different relations between the number of agents, the number of tasks, and the tasks' demand for resources.
Experiment 1 – superadditive game: 15 agents and 10 tasks, where each agent is able to work only on the first task. Additionally, the sum of the resources of all agents is equal to the demand of task T1; thus task T1 is the only one that can be solved. Used data: set of tasks T = [30, 1, 1, 1, 1, 1, 1, 1, 1, 1], set of agents A: A = {[2,0,0,0,0,0,0,0,0,0], [2,0,0,0,0,0,0,0,0,0], [2,0,0,0,0,0,0,0,0,0], [2,0,0,0,0,0,0,0,0,0], [2,0,0,0,0,0,0,0,0,0],
[2,0,0,0,0,0,0,0,0,0], [2,0,0,0,0,0,0,0,0,0], [2,0,0,0,0,0,0,0,0,0], [2,0,0,0,0,0,0,0,0,0], [2,0,0,0,0,0,0,0,0,0],
[2,0,0,0,0,0,0,0,0,0], [2,0,0,0,0,0,0,0,0,0], [2,0,0,0,0,0,0,0,0,0], [2,0,0,0,0,0,0,0,0,0], [2,0,0,0,0,0,0,0,0,0] }
The optimal coalition structure (all agents assigned to the first task) has to be found by SanA while processing the first level LV1. We have CS*_N = CS* and v(CS*_N) = v(CS*) = 1. However, although the best solution cannot be improved any further, processing the other levels with SanA (in the order LV2 → LV10 → LV9 → ... → LV3) gives a series of decreasing bounds on the coalition structure value:

Level: 2   10   9   8   7   6   5   4   3
Bound: 15   3   3   3   2   2   2   2   1
These experiments confirm our expectations. Levels LV1 and LV2 were processed in 10 seconds. After finishing level LV2, the bound for LV11 was also established; this required no computational effort, because all coalition structures on levels above LV10 have values equal to 0. Level LV10 was not processed to the end: after 5 hours SanA was stopped. More results are presented in [13].
Experiment 2 – subadditive game. The data: set of 10 tasks T = <1, 1, 1, 1, 1, 1, 1, 1, 1, 1>, set of 10 agents A: A = {[1,0,0,0,0,0,0,0,0,0], [0,1,0,0,0,0,0,0,0,0], [0,0,1,0,0,0,0,0,0,0], [0,0,0,1,0,0,0,0,0,0], [0,0,0,0,1,0,0,0,0,0], [0,0,0,0,0,1,0,0,0,0], [0,0,0,0,0,0,1,0,0,0], [0,0,0,0,0,0,0,1,0,0], [0,0,0,0,0,0,0,0,1,0], [0,0,0,0,0,0,0,0,0,1]}
Each agent has resources for solving exactly one task, so it cannot be useful in coalitions solving other tasks. Thus, after processing level LV2 we obtain v(CS*_N) = 2: there are two coalitions – one consisting of a single agent and the other consisting of all the remaining nine agents. Such a coalition structure allows two tasks to be solved. According to the properties of the Sandholm algorithm, after processing level LV2 we obtain k(2) = a = 10, which gives the upper bound k(2) · v(CS*_N) = 10 · 2 = 20. Next, SanA processes level LV10, where it finds the optimal solution v(CS*_N) = 10 = v(CS*). But the bound now becomes k(10) = 5, which is worse than that after level LV2: k(10) · v(CS*_N) = 5 · 10 = 50. In this situation, the global evaluation of the maximum (the best one so far) is not changed and is still equal to 20. The optimal solution is found at level LV10, but the bound evaluation does not decrease; its improvement could be achieved only after processing the whole graph.

Level: 2   10   9   8   7   6   5   4   3
Bound: 10   5   5   3   3   2   2   2   1
Of course, the EA run in the second phase also cannot improve the solution obtained after processing level LV10; the guarantee for our solution equals 50%. Only the Sandholm algorithm can improve this result, but only after processing the whole graph. Thus, for subadditive games, it is worth processing more than the last level (the third one in the processing order) only if we have time to process the whole graph; the EA should not be run in such a case. Superadditive and subadditive games are good examples for efficiency analysis, because we know what is searched for (a known optimum) and where this solution is placed in the coalition structure graph. In real-life problems we do not have such knowledge. Therefore, we have carried out a number of other experiments to verify the utility of the proposed method.
Experiment 3 – problems with high complexity. We consider a problem with five tasks and 20 agents. Set of tasks: T = [8, 8, 8, 8, 8], set of agents A: A = {[2,1,1,1,1], [1,2,1,1,1], [1,1,2,1,1], [1,1,1,2,1],
[2,1,1,1,1], [1,2,1,1,1], [1,1,2,1,1], [1,1,1,2,1],
[2,1,1,1,1], [1,2,1,1,1], [1,1,1,2,1], [1,1,1,2,1],
[2,1,1,1,1], [1,1,2,1,1], [1,1,1,2,1], [1,1,1,2,1],
[1,2,1,1,1], [1,1,2,1,1], [1,1,1,2,1], [1,1,1,2,1]}
Agents can perform all tasks and they must spend all their resources. The solution can easily be found manually, but for the method this CFP is difficult: processing the whole level LV5 is impossible – assuming that the evaluation of one coalition takes 0.001 s, the estimated time is 100 years. Table 1 presents some of the obtained results. Running the deterministic algorithm (SanA), we do not obtain an important improvement of the solution quality because of the time restriction. The EA gives the optimal solution, i.e., all five tasks solved, in 75% of the performed runs; in the remaining 25% of runs the results were still better than those obtained in the same time using SanA.

Table 1. Results of the mixed Sandholm-Evolutionary method: 5 tasks, 20 agents, time restriction – 4 hours
Sandholm Algorithm:
Graph level     Coalition value  Quality [%]  Bound k(l)  Estimated maximum  Time [min:s]  Coalition structure
2               2.00             5.00         20          40                 2:33          [((1,2,3,4),1),((5,...,20),2)]
6 & part of 7   3.00             7.50         20          40                 5:28          [((2),1),((3,...,8),2),((9,...,12),3),((13,...,20),4),((1),5)]

Evolutionary Algorithm:
Run(s)                                                Coalition value  Quality [%]  Fitness function  In generation                                                             Coalition structure
1, 4, 6, 12, 20                                       4.00             10.00        4.55              242, 271, 4, 17, 13                                                       [((1,2,3,4),1),((5,6,7,8),2),((9,10,11,12),3),((13,14,15,16,18),4),((17,19,20),5)]
2, 3, 5, 7, 8, 9, 10, 11, 13, 14, 15, 16, 17, 18, 19  5.00             12.50        5.00              168, 16, 242, 307, 420, 215, 281, 18, 410, 327, 223, 311, 107, 149, 203   [((1,2,3,4),1),((5,6,7,8),2),((9,...,12),3),((13,...,16),4),((17,...,20),5)]
One generation of the EA takes a few seconds, which, compared with SanA, means that it is worth running the EA after finishing level LV2: with high probability, we can obtain a significantly better solution. We have also considered a similar but larger problem, with 10 tasks (T = [14, 14, 8, 8, 8, 8, 8, 8, 8, 8]) and 30 agents. We would have to run SanA for two levels and then the EA, but SanA requires a very long time to search the first two levels. On level LV2 the set of agents is divided into two nonempty subsets (coalitions). The number of possible different divisions is equal to S(30, 2) = 536870911, where S(n, k) denotes the Stirling number of the second kind, which counts the number of ways to partition a set of n elements into k nonempty subsets. There are 10 possible assignments of the first coalition on this level, and the second coalition can be assigned in nine different ways. To finish level LV2, SanA has to evaluate 536870911 · 10 · 9 = 48318381990 possible assignments. Assuming that one evaluation takes 30 μs, processing level LV2 would take about 16 days. For such large tasks it seems reasonable to run only the EA (or another meta-heuristic approach) [15]. Under the above assumption concerning the evaluation time of a single coalition, for 30 agents the time of processing the second level (LV2) is: for 10 tasks – 16 days; for 15 tasks – 39 days; for 20 tasks – 70 days. This time depends very strongly on the number of agents: the corresponding values for 25 agents are 12 hours, 29 hours and 53 hours, and for 20 agents 23 minutes, 55 minutes and 99 minutes.
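These time estimates are easy to reproduce. The sketch below (illustrative only; function names are ours) counts the assignments SanA would have to evaluate on level LV2 and converts them into wall-clock time under the 30 μs per evaluation assumption used above.

```python
def stirling2(n: int, k: int) -> int:
    """Stirling number of the second kind S(n, k) via the standard recurrence."""
    table = [[0] * (k + 1) for _ in range(n + 1)]
    table[0][0] = 1
    for i in range(1, n + 1):
        for j in range(1, min(i, k) + 1):
            table[i][j] = j * table[i - 1][j] + table[i - 1][j - 1]
    return table[n][k]

def lv2_seconds(agents: int, tasks: int, eval_time: float = 30e-6) -> float:
    """Estimated time to finish level LV2: every 2-partition of the agents,
    combined with every ordered assignment of the two coalitions to tasks."""
    partitions = stirling2(agents, 2)              # S(a, 2) = 2**(a-1) - 1
    assignments = partitions * tasks * (tasks - 1)
    return assignments * eval_time

# lv2_seconds(30, 10) / 86400 -> roughly 16.8 days, as quoted above.
```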
Experiment 4 – too few resources and more tasks than agents. We have defined a set of 10 tasks, T = [7, 7, 4, 8, 9, 4, 4, 4, 4, 4], and a set of 15 agents. The defined set of agents is able to solve only five tasks, and we know the optimal coalition structure. A number of coalitions can solve five tasks, but their resources are greater than those demanded by the tasks. This problem is very difficult: in a reasonable time (10 hours) SanA finished level LV2 and started LV10. The bound was k(l) = 15, the quality 10%, and the evaluated maximum 30; the coalition value, i.e., the number of solved tasks, was 3. The EA was able to find a coalition structure solving 4 tasks (quality 13.33%); in this case our EA was not able to escape from the local optimum in 500 generations. The next tested problem contains 25 tasks and 15 agents. Each task can be solved, because there exists at least one coalition having enough resources, but the optimal coalition structure does not solve all tasks since the number of tasks is greater than the number of agents. The question is: how many tasks can be solved? We allowed two hours for the method, so SanA finished only the first two levels. The best coalition structure found was able to solve only 2 tasks (quality 6.6%).
4 Summary
The performed experiments have shown that the proposed two-phase Sandholm-Evolutionary method (SanEv) is able to give acceptable solutions even for complicated CFPs. The deterministic part of the method (SanA), processing at least two levels, provides the bound on the coalition value for the worst case. The EA is not able to decrease this bound, but it can quickly improve the solution. Usually we should use the deterministic part of SanEv for the first two levels and then the EA. If we have a great number of agents, we should consider using only a meta-heuristic approach (the EA in our method) to find a relatively good solution without any guarantee concerning its quality. The number of agents influences the computational time more strongly than the number of tasks. The disadvantage of SanEv is that, for very large problems, it is impossible to process the first two levels of the graph, and we have to rely on the solution found by the evolutionary algorithm. Further research should be focused mainly on two things: automation of the tuning of the EA parameters, and distribution of the computations. In the proposed approach we have assumed that agents are cooperative and cannot refuse to be a member of the coalition calculated for them. It seems interesting to consider a set of autonomous agents instead; in such an approach, one should develop a communication schema and negotiation between the agents to create coalitions in which each agent maximizes its own profit.
References 1. Wooldridge, M., Jennings, N.R.: Intelligent Agents: Theory and Practice. Knowledge Engineering Review 10, 115–152 (1995) 2. Horling, B., Lesser, V.: A Survey of Multi-Agent Organizational Paradigms. The Knowledge Engineering Review 19(4), 281–316 (2005) 3. Rahwan, T.: Algorithms for Coalition Formation in Multi-Agent Systems. PhD thesis, University of Southampton (2007) 4. Shehory, O., Kraus, S.: Task allocation via coalition formation among autonomous agents. In: Proc. 14th Int. Joint Conf. on Artificial Intelligence, pp. 655–661 (1995) 5. Shehory, O., Kraus, S.: Formation of overlapping coalitions for precedence-ordered task-execution among autonomous agents. In: Proc. of the 2nd Intern. Conf. on Multiagent Systems (ICMAS 1996), pp. 330–337 (1996) 6. Shehory, O., Kraus, S.: Methods for Task Allocation Via Agent Coalition Formation. Artificial Intelligence 101(1-2), 165–200 (1998) 7. Cormen, T., Leiserson, C., Rivest, R., Stein, C.: Introduction to Algorithms. The MIT Press, USA (2001) 8. Rothkopf, M.H., Pekec, A., Harstad, R.M.: Computationally Manageable Combinatorial Auctions. Management Science 44(8), 1131–1147 (1995) 9. Jiangan, Y., Zhenghu, L.: Coalition formation mechanism in multi-agent systems based on genetic algorithms. Applied Soft Computing 7(2), 561–568 (2006) 10. Sandip, S., Ip, S., Partha, S.D.: Searching for Optimal Coalition Structures. In: Proc. of the Fourth Intern. Conf. on Multiagent Systems, pp. 286–292. IEEE, Los Alamitos (2000)
11. Dang, V.D., Jennings, N.R.: Generating Coalition Structures with Finite Bound from the Optimal Guarantees. In: AAMAS 2004 Proc. of the Third Intern. Joint Conf. on Autonomous Agents and Multiagent Systems, pp. 564–571. IEEE Comp. Soc, Los Alamitos (2004) 12. Sandholm, T., Larson, K., Andersson, M., Shehory, O., Tohme, F.: Coalition Structure Generation with Worst Case Guarantees. Artificial Intelligence 111(1-2), 209– 238 (1999) 13. Gruszczyk, W.: Coalition formation problem in time–constrained multi–agent environments (in Polish). Master Thesis, Wroclaw Univ. of Technology, Poland (2009) 14. Michalewicz, Z.: Genetic Algorithms + Data Structures = Evolution Program. Springer, Berlin (1999) 15. Gruszczyk, W., Kwasnicka, H.: In: Ganzha, M., et al. (eds.) Proc. of the Int. Multiconf. on Comp. Science and Information Technology, Poland, pp. 125–130 (2008) 16. Straffin, P.D.: Game Theory and Strategy. Mathematical Assoc. of America (1993)
A New CBIR System Using SIFT Combined with Neural Network and Graph-Based Segmentation Nguyen Duc Anh, Pham The Bao, Bui Ngoc Nam, and Nguyen Huy Hoang Faculty of Mathematics and Computer Science, University of Science, Ho Chi Minh City, Vietnam {ndanh,ptbao,bnnam,nhhoang}@hcmus.edu.vn
Abstract. In this paper, we introduce a new content-based image retrieval (CBIR) system using SIFT combined with a neural network and a graph-based segmentation technique. Like most CBIR systems, our system performs three main tasks: extracting image features, training data and retrieving images. For feature extraction, we use our new mean SIFT features after segmenting each image into objects with a graph-based method. We train our data using a neural network. Before the training step, we cluster our data using both supervised and unsupervised methods. Finally, we use individual object-based and multi-object-based methods to retrieve images. In the experiments, we tested our system on a database of 4848 images from 5 different categories, with 400 other images as test queries. In addition, we compared our system with the LIRE demo application using the same test set.
1 Introduction
The need for fast and precise image retrieval (IR) systems is growing rapidly as computing power evolves quickly, as does the size of image databases. Most present IR systems fall into two categories: text-based systems and content-based ones. The simplest systems are text-based, such as Google, Yahoo, etc. Though those systems are usually fast at retrieval time, they all suffer from big drawbacks such as the ambiguity of language, the absence of a precise visual presentation in annotations, and the time-consuming annotation process, as stated in [1]. Unlike text-based IR systems, CBIR systems do not involve subjective human perception in describing the semantic meaning of an image. Instead, the visual features of an image are extracted automatically. Thus, the performance of CBIR systems largely depends on what kinds of visual features are used and on the extraction methods. A large variety of image features have been proposed by many researchers, as discussed in [2]. Among those, the SIFT features of Lowe [3] have proven to be very effective due to their invariance to many different transformations. Furthermore, most IR systems divide the original set of data, which can be images, image objects or image annotations, into subsets or clusters. Therefore, clustering methods also affect the retrieval performance. Motivated by the effectiveness of SIFT features, in this paper we developed our new mean SIFT features. We then applied these features to regions or objects
segmented from images. Many approaches to image segmentation have been developed in recent years, like those in [4]. We used the graph-based segmentation method of Pedro F. Felzenszwalb and Daniel P. Huttenlocher [5] due to its fast performance and its ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions. Like other IR systems, we also clustered the database data. Specifically, we employed supervised and unsupervised clustering methods: in the first, we clustered the data manually; in the latter, the system clustered the data automatically. Among existing automatic clustering methods, such as the k-means clustering of Stuart Lloyd [6] and the hierarchical clustering of S.C. Johnson [7], we decided to develop our own method by modifying the graph-based segmentation method, since the former techniques either failed to converge or produced poor clustering results. Given the clustered data, the goal of determining to which cluster an input datum belongs gives rise to training on the database data. In our CBIR system, we used a neural network to train on our clustered data instead of statistical techniques, since many of those require either prior knowledge or assumptions about the data, as naive Bayes models do [8]. Finally, we used two retrieval methods: individual object-based and multi-object-based.
2 System Overview As mentioned our IR system has three main functions. Each of them can be part of one or both of the following stages: the database training stage and the retrieval stage. Diagram 1 shows the database training stage and diagram 2 shows the retrieval stage.
Diagram 1. Database training stage (image database → extracting mean SIFT features → image object database → clustering → training the clustered objects with a neural network)
In the first step of the database training stage, we extracted features from each image in the database as follows:
i. Extract the original SIFT features from the image.
ii. Automatically segment the image into sub-regions, which we consider as objects.
iii. For each object, compute its mean SIFT feature.
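A minimal sketch of these three steps is given below. It is only an illustration of the pipeline, not the authors' code: it assumes OpenCV's SIFT implementation and the Felzenszwalb–Huttenlocher segmentation available in scikit-image, and all function and parameter names are ours.

```python
import cv2
import numpy as np
from skimage.segmentation import felzenszwalb

def mean_sift_features(image_bgr, scale=100, sigma=0.5, min_size=200):
    """Return one mean SIFT descriptor per segmented object of the image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # (i) original SIFT keypoints and 128-dimensional descriptors
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None:
        return []

    # (ii) graph-based segmentation into sub-regions ("objects")
    labels = felzenszwalb(image_bgr, scale=scale, sigma=sigma, min_size=min_size)

    # (iii) average the descriptors whose keypoints fall inside each object
    per_object = {}
    for kp, desc in zip(keypoints, descriptors):
        x = min(int(round(kp.pt[0])), labels.shape[1] - 1)
        y = min(int(round(kp.pt[1])), labels.shape[0] - 1)
        per_object.setdefault(labels[y, x], []).append(desc)
    return [np.mean(d, axis=0) for d in per_object.values()]
```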
Given this database of object features, we then used supervised and unsupervised clustering methods to cluster the original image database. We used our own unsupervised clustering algorithm. After that, all objects of images belonging to the same clusters were also put into the same clusters. Finally, we trained on those clusters using the neural network technique to obtain the supervised and unsupervised neural networks, respectively.
Diagram 2. Retrieval stage (query image → extracting mean SIFT features of the image object(s) → individual object-based or multi-object-based cluster specification against the image object database → results)
The first step of the retrieval stage is similar to that of the database training stage. Given the set of objects automatically extracted from the input image, we extract the mean SIFT feature(s) of the object(s). As mentioned earlier, there are two neural networks, so we also have two distinct ways to classify objects. For each of these two neural networks, we first let the user choose one object and classify that object; the result is the set of objects closest to the query object in the specified group. This is called individual object-based retrieval. Another way is to select the group that most query objects fall into, compute the distance between all query objects and the objects in that group, and then return the N objects with the N smallest distances. We call this method multi-object-based retrieval. In our system, we use the arccos distance because of its effectiveness with our mean SIFT features. In the next two sections, we introduce our new mean SIFT feature and an automatic clustering routine based on the graph-based segmentation method mentioned earlier. Finally, we show some experimental results.
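The sketch below illustrates the arccos distance and the individual object-based ranking described above; it is a simplified stand-in (all names are ours), with the trained neural network reduced to any callable that maps a feature vector to a group label.

```python
import numpy as np

def arccos_distance(u, v):
    """Angular distance between two mean SIFT descriptors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def retrieve_individual(query_feature, classify, groups, n_results=20):
    """Individual object-based retrieval.

    classify -- callable returning the group label of a feature vector
    groups   -- dict: group label -> list of (image_id, object_feature)
    """
    label = classify(query_feature)
    candidates = groups[label]
    ranked = sorted(candidates,
                    key=lambda item: arccos_distance(query_feature, item[1]))
    return ranked[:n_results]
```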
3 Mean SIFT
Before the SIFT method, there had been a lot of work on image feature extraction, such as the Harris corner detector [9] or the methods discussed in [2]. Those methods all have their own advantages; however, they fail to detect similar features at different scales. Based on scale-space theory [10], the SIFT method has proven to be a successful candidate for the detection of image features at different scales. Furthermore, it has been shown that SIFT features are invariant to affine transformations and changes in illumination. The SIFT method has three main stages: locating key-points on the
image, building detectors and building descriptors. The descriptors are then the features of an image. Despite its outstanding advantages, the SIFT method does have major drawbacks, such as the large number of descriptors: a 1024×768 image can, in practice, have up to 30000 descriptors. While the number of descriptors is usually large, the number of matches between two similar images is small; furthermore, the number of matches between two similar images is sometimes the same as that between two irrelevant ones. The first issue leads to unbearable complexity when dealing with a large database, especially in matching time. The second concerns the unreliability of using the number of matches as a similarity measure between images. Researchers have put much effort into tackling these issues, for example by using PCA to reduce the descriptor dimension [11] or by removing features based on certain assumptions [12]. However, the size of a feature vector is still large in both methods, and neither of them addresses the unreliability issue. In this paper, we propose our new mean SIFT feature based on the following experimental result. Consider three images A, B and C, in which A and B are similar and C stands out. Image A has a set of descriptors $D_A = \{a_1, \dots, a_{|D_A|}\}$ and its mean descriptor $\bar{a}$, where

$\bar{a} = \frac{1}{|D_A|}\sum_{i=1}^{|D_A|} a_i .$   (1)

The same notation is used for B and C. Let the arccos distance between $\bar{a}$ and $\bar{b}$ be denoted by $r_{AB}$:

$r_{AB} = \arccos\frac{\bar{a}\cdot\bar{b}}{|\bar{a}|\,|\bar{b}|} ,$   (2)

and similarly for $r_{AC}$. We shall now show that the arccos distance between two similar images tends to be smaller than that between two different images, i.e., that $r_{AB} - r_{AC} < 0$ in this case.
Fig. 1. Each column is a triplet. Top row-image A. Middle row-image B. Last row-image C.
To obtain this result, we first conducted a test over 300 triplets of images (Fig. 1). Each triplet has two similar images, A and B, and an odd one, C. Let $X = r_{AB} - r_{AC}$ and let $\hat{p}$ be the frequency of having $X < 0$. There are 253 triplets with $X < 0$ among the 300 samples. Based on the central limit theorem, we obtain the following result:

$\frac{\hat{p} - p}{\sqrt{\hat{p}(1-\hat{p})/n}} \sim N(0,1),$   (3)

where $n$ is 300, $\hat{p}$ is 253/300 and $p$ is the probability of having $X < 0$.
Letting the significance level be 95%, we get the confidence interval

$\hat{p} - 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \le p \le \hat{p} + 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} .$   (4)

As a result, the value of $p$ is between 0.802 and 0.884. That means the event $X < 0$ has a probability between 0.802 and 0.884 at the 95% significance level. This result states that if two images are more similar to each other, then the distance between their mean descriptors tends to be smaller. Thus we decided to use the mean descriptor as our new mean SIFT feature. This feature can be applied to an entire image or to each object segmented from an image. In either way, the number of feature vectors is extremely small, since one image has a very small number of meaningful objects.
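The interval in (4) can be reproduced with a few lines of code; this is only a numerical check of the figures quoted above (the function name is ours).

```python
import math

def proportion_ci(successes: int, trials: int, z: float = 1.96):
    """Normal-approximation confidence interval for a proportion."""
    p_hat = successes / trials
    half_width = z * math.sqrt(p_hat * (1.0 - p_hat) / trials)
    return p_hat - half_width, p_hat + half_width

# proportion_ci(253, 300) -> approximately (0.802, 0.884)
```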
4 An Automatic Clustering Method Based on Graph-Based Segmentation
Unlike image segmentation, data clustering in general may involve more data, and the inter-relations between data may be stronger. Therefore, a single run of the original method may not result in a satisfactory segmentation. It is likely that some groups are good, in the sense that their members really share similar features, while some are not good, meaning that their members do not have anything in common. Moreover, the size of the groups is another issue: in experiments we saw that a single run of the original technique on data other than image pixels may result in groups that are either too small or too big. Because of those drawbacks, we decided to construct our new clustering algorithm based on the original graph-based segmentation. This method must address the two big issues mentioned above. The most appropriate solution is to put the original method into a loop and set reasonable conditions to check the outcome after each iteration. Like the original algorithm, we consider each data item as a vertex, and the weight of an edge connecting two vertices is the similarity measure between them.
Step 1: Initiate the set of edges E from the set of vertices V.
Step 2: Use the original graph-based segmentation to cluster V into groups G1, ..., Gk.
Step 3: For each Gi, if it satisfies condition C1, we consider Gi a qualified group and remove all vertices in Gi and all edges related to them. Otherwise, we loop through each edge in Gi and, if it satisfies condition C2, we change the weight of this edge using a function F.
Step 4: If V satisfies condition C3, stop. Otherwise, go back to Step 1.
Conditions C1–C3, in general, could be any user-defined ones. In this context, C1 is the condition that the mean and standard deviation of all edge weights in Gi are smaller than pre-defined thresholds; this is to tackle the first issue. C2 is the
condition that an edge's weight must be smaller than a threshold, and F is a weight reduction function; in particular, we lessen the weight by 90% of its original value. This is to deal with the second issue. Finally, C3 is the condition that the number of remaining vertices in V is smaller than a threshold or that the size of V remains unchanged after a certain number of loops. This condition guarantees that the algorithm will stop.
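The loop in Steps 1–4 can be sketched as follows. This is an illustrative reimplementation under our own naming, with the inner graph-based segmentation and the similarity measure left abstract (any routine implementing [5] could be plugged in); the thresholds mirror the conditions C1–C3 described above.

```python
import statistics

def iterative_clustering(vertices, similarity, segment,
                         mean_thr, std_thr, edge_thr,
                         stop_size, max_loops, reduce_factor=0.1):
    """Repeated graph-based segmentation (Steps 1-4).

    similarity(u, v)     -- edge weight between two data items
    segment(V, weights)  -- one run of graph-based segmentation -> list of groups
    """
    weights = {}
    qualified = []
    for _ in range(max_loops):
        # Step 1: (re)build edges over the remaining vertices
        for u in vertices:
            for v in vertices:
                if u < v and (u, v) not in weights:
                    weights[(u, v)] = similarity(u, v)
        # Step 2: one run of the original graph-based segmentation
        groups = segment(vertices, weights)
        # Step 3: accept tight groups (C1), otherwise weaken their heavy edges (C2, F)
        for g in groups:
            internal = [w for (u, v), w in weights.items() if u in g and v in g]
            if internal and statistics.mean(internal) < mean_thr \
                        and statistics.pstdev(internal) < std_thr:
                qualified.append(g)
                vertices = [v for v in vertices if v not in g]
                weights = {e: w for e, w in weights.items()
                           if e[0] in vertices and e[1] in vertices}
            else:
                for (u, v), w in list(weights.items()):
                    if u in g and v in g and w < edge_thr:
                        weights[(u, v)] = w * reduce_factor   # F: keep 10% of the weight
        # Step 4: stop when few vertices remain (C3)
        if len(vertices) <= stop_size:
            break
    return qualified, vertices
```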
5 Experiments
Our database consists of 4848 images of five different categories: landscapes, vehicles, train-planes, signs and women. This is a very challenging database due to its variance in many properties such as resolution, image domains, etc. We also use another 400 images as test queries. For each query, we returned 20 results (Fig. 2). As stated earlier, we use two retrieval methods, individual object-based and multi-object-based, and two clustering techniques; thus we have four different test cases in total. The table below shows the average precision of all methods.

Table 1. Average precision of retrieval methods

                              Supervised   Unsupervised
Individual object-retrieval   80%          61.8%
Multi-object-retrieval        78.5%        59.4%
Fig. 2. A query image and 20 nearest results using individual-object retrieval and supervised clustering
We also used the same test set with the LIRE demo application, a Java open-source CBIR library developed by Mathias Lux and Savvas A. Chatzichristofis [13][14]. In particular, we chose the color and edge directivity descriptor (CEDD) method [14][15]. Its precision is 50.87%, which is not as good as that of our methods. In addition, for images containing complex background information, our technique performs better than the LIRE program (Fig. 3). However, LIRE's training process is faster and less complicated than that of our system.
Fig. 3. The comparison between the LIRE demo application and our system for a sign image with a complex background: (a) the query image; (b) the result of LIRE; (c) the result of our system
6 Conclusion
Our CBIR system still lacks two important pre-processing and post-processing parts, noise removal and user-feedback reception, so the results could be improved if those parts were taken into account. Nonetheless, the results are encouraging, since our system was tested on a challenging database. Moreover, the automatic clustering method that we developed can be used for other purposes, depending on the specific problem.
Acknowledgement We would like to thank DOLSOFT Inc for providing us a large-scale image database with a wide range of categories, resolution and transformations.
References 1. Lam, T., Singh, R.: Semantically Relevant Image Retrieval by Combining Image and Linguistic Analysis. In: International Symposium on Visual Computing, pp. 770–779 (2006) 2. Smeulders, A.W.M., Worring, M., Santini, S., Gupta, A., Jain, R.: Content-Based Image Retrieval at the End of Early Years. IEEE Transaction on Pattern Analysis and Machine Intelligence 22(12) (December 2000) 3. Lowe, D.G.: Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision 60(2), 91–110 (2004) 4. Gonzales, R.C., Woods, R.E.: Digital Image Processing, 2nd edn., pp. 567–634. Prentice Hall, Englewood Cliffs 5. Felzenszwalb, P.F., Huttenlocher, D.P.: Efficient Graph-Based Image Segmentation. International Journal of Computer Vision 59(2), 167–181 (2004) 6. Lloyd, S.P.: Least squares quantization in PCM. IEEE Transactions on Information Theory 28(2), 129–137 (1982) 7. Johnson, S.C.: Hierarchical Clustering Schemes. Psychometrika, 241–254 (1967) 8. Domingos, P., Pazzani, M.J.: Beyond independence: Conditions for the optimality of the simple Bayesian classifier. In: International Conference on Machine Learning, pp. 105–112 (1996) 9. Harris, C., Stephens, M.: A combined corner and edge detector. In: Fourth Alvey Vision Conference, Manchester, UK, pp. 147–151 (1988) 10. Lindeberg, T.: Scale-space theory: A basic tool for analysing structures at different scales. Journal of Applied Statistics 21(2), 224–270 (1994) 11. Ke, Y., Sukthankar, R.: PCA-SIFT: A more distinctive representation for local image descriptors. In: Proceedings of IEEE Computer Vision and Pattern Recognition, vol. 2, pp. 506–513 (2004) 12. Ledwich, L., Williams, S.: Reduced sift features for image retrieval and indoor localization. In: Australian Conference on Robotics and Automation (2004) 13. http://www.semanticmetadata.net/lire/ 14. Mathias, L., Chatzichristofis, S.A.: Lire: Lucene Image Retrieval – An Extensible Java CBIR Library. In: Proceedings of the 16th ACM International Conference on Multimedia, Vancouver, Canada, pp. 1085–1088 (2008) 15. Chatzichristois, S.A., Boutalis, Y.S.: Cedd: Color and edge directivity descriptor: A compact descriptor for image indexing and retrieval. In: Gasteratos, A., Vincze, M., Tsotsos, J.K. (eds.) ICVS 2008. LNCS, vol. 5008, pp. 312–322. Springer, Heidelberg (2008)
Computer-Aided Car Over-Taking System Magdalena Barańska Wroclaw University of Technology, PhD Student on Faculty Electronics, Wyb. Wyspiańskiego 27, 50-370 Wrocław, Poland
[email protected]
Abstract. In this paper a model of a simplified system of vehicle over-taking support is presented. Accepting the limits presented in this work has enabled the creation of a simple over-taking support system. The system is based on radar sensors and cameras. Its task is to inform the driver whether the overtaking maneuver can be performed safely; this information is provided by light and sound signals. The principles applied in this system are very restrictive. Improving the system's reliability requires the installation of better-quality sensors, which would enable a thorough examination of the space around the vehicle performing the over-taking maneuver. The model of the system together with an algorithm is designed with the use of Matlab software, and simulation tests are reported. The final step will be the application of the system in a real car. The proposed solution could reduce the number of car accidents and improve safety on the road.
Keywords: steering algorithm, over-taking maneuver, sensors.
1 Introduction – Vehicle Over-Taking Maneuver
Overtaking is one of the most difficult and most dangerous maneuvers. It normally involves passing a vehicle or another traffic participant travelling in the same direction; in Poland it is in most cases connected with changing the lane to the one used by oncoming traffic. The aim of the presented system is to improve overtaking using an algorithm based on data obtained from camera sensors. Its task is to inform the driver whether overtaking can be done safely or not; this information is conveyed with the aid of light and sound signals. The description of the overtaking maneuver in [1] enables the calculation of the distance and time needed to complete the task. In the article [2], a description of the location of sensors and cameras may be found, which help to monitor the area around the car and consequently to measure the distance to other vehicles nearby. Most systems created nowadays are designed to increase the safety of vehicle users: the system for distance estimation [4], the system for recording car movement data [3] and the lane departure warning system [5] are all examples of such solutions.
In this article the aspect of vehicle over-taking is considered as belonging to the category of monitoring and recording of car movement.
2 Sensors and Their Location
The arrangement of cameras and radar sensors is presented in [2]. It enables direct monitoring and hence, ultimately, control over the area around the vehicle. In the overtaking support system, one of the two approaches suggested in [2] for the locations of radar sensors/cameras will be adopted – they will be placed as presented in Figure 1.
Fig. 1. Location of sensors and cameras
A good location of the sensors enables thorough observation of the surroundings, which is essential for safe over-taking. With this measurement/sensor system, all information necessary to supply the algorithm can be obtained. Using radar sensors makes it possible to determine the distance between vehicles (the overtaking vehicle and the one coming from the opposite direction) and, as a result, their speed. All the sensors and cameras are connected with the car systems via a controller area network (CAN) to obtain additional parameters of the car, among others the speed, the positions of the accelerator and brake pedals, the engine speed, etc.
3 Steering Algorithm
To determine an algorithm for the steering of the maneuver, the following assumptions have been made:
- The overtaken vehicle and the vehicle coming from the opposite direction will not accelerate by more than 10% of the speed they have when the maneuver of overtaking starts; therefore v1 = v1min + 10% v1min = 110% v1min and v3 = v3min + 10% v3min = 110% v3min (v1 – speed of the vehicle in front of the active vehicle, v1min – minimal speed of the vehicle in front of the active vehicle, v3 – speed of the vehicle coming from the opposite direction, v3min – minimal speed of the vehicle coming from the opposite direction).
- The maximum length of the overtaken vehicle is assumed to be 18 m.
- During over-taking, the length of the vehicle can be verified on the basis of sensor measurements and appropriate calculations based on the laws of physics (dynamics). If, according to the sensor measurements, the vehicle is too long to be overtaken, an instruction is given to abandon the over-taking maneuver and continue driving in the same lane.
- The distance between the overtaken vehicle and the next vehicle in front of it is assumed to be sufficient to start over-taking.
- Reading data from the radar sensors makes it possible to check that the following vehicle is not itself over-taking.
Knowing the change of the distance covered by a vehicle, denoted Δs, in a stated time, denoted Δt, allows the calculation of its speed according to

$v = \frac{\Delta s}{\Delta t}$   (1)
Based on measurements from the front radar sensors, the speed v1 of the vehicle travelling in front and the speed v3 of the vehicle coming from the opposite direction can be calculated.
Fig. 2. Course of maneuver of overtaking with constant acceleration and deceleration
The speed of the vehicle in front will be used to calculate the length of the path needed to perform safe overtaking. Figure 2 illustrates the maneuver of over-taking with constant acceleration and deceleration. Based on this diagrammatic representation, the length of the path, denoted sw, and the time, denoted tw, needed to overtake the vehicle in front can be determined. Knowing the speed of the overtaking vehicle, and based on the assumption that the car in front will not increase its speed by more than 10%, the acceleration may be expressed as
$\ddot{x}_2 = \frac{v_{2\max} - v_2}{t}$   (2)
Assuming that the speed after acceleration will be 120% of the initial speed, i.e.
v2max = 120% v2, whereas the deceleration becomes $a_2 = -\ddot{x}_2$. The notion of deceleration is introduced because the speed of the overtaking car should, after finishing the maneuver, return to its level before over-taking. The length of the path sw needed to perform safe overtaking has been calculated on the basis of the following formula and the situation depicted in Figure 2:
$s_w = v_1\sqrt{2\,\frac{\ddot{x}_2 + a_2}{\ddot{x}_2\,a_2}\,(s_{02} + s_{01} + l_{c1} + l_{c2})} + (s_{02} + s_{01} + l_{c1} + l_{c2})$   (3)
The time of the maneuver can be calculated according to
$t_w = (v_{2\max} - v_1)\,\frac{\ddot{x}_2 + a_2}{\ddot{x}_2\,a_2}$   (4)
Knowing the time of overtaking, the distance covered by the vehicle travelling in the opposite direction at the same time can be deduced as
$s_p = \frac{v_{3\max}\,t_w}{2}$   (5)
In the next step it is checked whether the distance needed to perform the over-taking maneuver is shorter than the difference between the initial position of the vehicle travelling in the opposite direction and the distance it covers during this time. To make the maneuver safe, it is assumed that the distance needed to overtake must be 20% greater than the calculated value, which gives the condition

$120\%\,s_w \le s_0 - s_p$   (6)

A block algorithm of the over-taking support system is given in Figure 3.
Fig. 3. A block algorithm of over-taking support system
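Complementing the block diagram in Fig. 3, the sketch below strings equations (2)–(6) together into a single feasibility check. It is an illustration only (function and variable names are ours), with distances in metres and speeds in m/s, and it assumes the acceleration ẍ2 is already known and that the deceleration has the same magnitude, a2 = ẍ2.

```python
import math

def overtaking_is_safe(v1, v2, v3_max, x2_ddot, s01, s02, lc1, lc2, s0):
    """Decide whether the over-taking maneuver can be started safely.

    v1      -- speed of the vehicle in front               [m/s]
    v2      -- current speed of the overtaking vehicle     [m/s]
    v3_max  -- assumed speed of the oncoming vehicle       [m/s]
    x2_ddot -- acceleration of the overtaking vehicle      [m/s^2]
    s01,s02 -- gaps kept before and after the maneuver     [m]
    lc1,lc2 -- lengths of the two vehicles                 [m]
    s0      -- initial distance to the oncoming vehicle    [m]
    """
    v2_max = 1.2 * v2                 # speed reached after acceleration (120% of v2)
    a2 = x2_ddot                      # deceleration magnitude, |a2| = x2_ddot
    gap = s02 + s01 + lc1 + lc2       # relative distance to be gained

    # (3) path length needed for safe overtaking
    s_w = v1 * math.sqrt(2.0 * (x2_ddot + a2) / (x2_ddot * a2) * gap) + gap
    # (4) duration of the maneuver
    t_w = (v2_max - v1) * (x2_ddot + a2) / (x2_ddot * a2)
    # (5) distance covered by the oncoming vehicle during the maneuver
    s_p = v3_max * t_w / 2.0
    # (6) require a 20% safety margin on the needed distance
    return 1.2 * s_w <= s0 - s_p
```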
4 Summary
The over-taking maneuver is one of the most difficult and dangerous performed in road traffic. Alongside the existing systems supporting driving in difficult conditions, such as ABS, ESP and ASR, it seems reasonable to create an over-taking support system. It could lead to a decrease in the number of road accidents and increase safety on the roads. The assumptions used here are very restrictive. The reliability of the proposed system depends on the introduction of high-quality sensors, which can help to monitor, and hence control, the over-taking maneuver within the surrounding area. The model of the over-taking support system, together with an appropriate algorithm, is to be designed with the use of Matlab software and simulation tests. Whilst the overall aim is to apply the approach in a real car, there are a number of intermediate outcomes, so the work will be of interest to the automotive sector.
References 1. Arczyński, S.: Mechanika ruchu samochodu, Warszawa, Wydawnictwo NaukowoTechniczne, pp. 206–210 (1993) 2. Bourbakis, N., Findler, M.: Smart Cars as Autonomous Intelligent Agents, Washington, pp. 25–33. IEEE Computer Society, Los Alamitos (2001) 3. Chet, N.C.: Design of black box for moving vehicle warning system. In: Conference on Research and Development, SCOReD, pp. 193–196 (2003) 4. Goldbeck, J., Huertgen, B., Ernst, S., Kelch, L.: Lane following combining vision and DGPS. In: Image and Vision Computing, pp. 425–433 (2000) 5. Ko, S., Gim, S., Pan, C., Kim, J., Pyun, K.: Road Lane Departure Warning using Optimal Path Finding of the Dynamic Programming. In: SICE-ICASE International Joint Conference, pp. 2919–2923 (2006)
Learning Model for Reducing the Delay in Traffic Grooming Optimization Viet Minh Nhat Vo Faculty of Hospitality & Tourism - Hue University - Vietnam
[email protected]
Abstract. Hopfield networks have been suggested as a tool for the optimization of traffic grooming. However, this method of optimization based on neural networks normally requires a considerable delay to find an optimal solution. This limited their practicability in optical data transport. This paper proposes a solution to reduce this delay. That is a learning model in which arriving service patterns are clustered into groups corresponding to separate optimal grooming solutions. When a new service pattern arrives, it is checked to see if it belongs to an existing optimized group: if one is found, the corresponding optimal grooming solution is returned immediately. If not, an optimization process is required to determine its optimal grooming solution. This paper discusses the practicability of the proposed learning model and the clustering strategies. Keywords: Traffic grooming optimization, learning, clustering.
1 Introduction
Traffic grooming has attracted the attention of researchers in the area of optical data transport, with the objective of exploiting the potential bandwidth capacity of optical fibers to the maximum. As defined in [1], "Traffic grooming is the term used to describe how different traffic streams are packed into higher speed streams." The motivation for traffic grooming results from the application of multiplexing techniques that appear across different transport systems (technologies) or across multiple layers of a single transport system. Traffic grooming can be considered under many different aspects. For example, in [2][3] it is considered as the minimization of the number of wavelengths, the number of add/drop multiplexers, and/or the number of wavelength converters, which are formulated as integer linear functions and then minimized using heuristic algorithms. In [4][5], a low-bandwidth traffic stream arriving at an (edge or core) node for grooming is defined as a service, and a set of low-bandwidth traffic streams is considered a service pattern; traffic grooming is then an optimization of the service-into-burst multiplexing and of the service-between-burst switching, which is represented by a Hopfield energy function and optimized according to the principle "the minimal energy, the optimal solution" [6]. Optimization methods based on neural networks normally require a considerable delay to find an optimal solution, which has limited their practicability in optical data transport. However, a characteristic of these methods is that the optimization proceeds gradually, based on the principle "the longer the optimization cycle,
the better the returned solution". So, even if the allowed delay is short, a nearly optimal solution can be returned for the current traffic grooming. As shown in Fig. 1, the Hopfield energy function never increases with time [4][5]. Assuming that nearly-optimal solutions can be used from time tk on, we can stop the current optimization process at any time tc (tc ≥ tk) and the corresponding returned solution is nearly optimal. Of course, if tc < tk, the methods of optimization based on Hopfield networks cannot be used.
Fig. 1. A nearly-optimal grooming solution is returned if the delay for traffic grooming is limited (tc≥tk)
In [4][5], we proposed solutions for reducing the delay of traffic grooming optimization based on Hopfield networks. We determine, in advance, the connection weights and the activation threshold of the Hopfield network used for optimization. Moreover, the chosen constraint arguments (i.e. constraint weights) and the improvements in the optimization algorithms also help to reduce this delay. However, based on the simulation results in [4][5], the optimization based on a Hopfield network is only practical for service patterns of small dimensions; when the dimension increases, the delay becomes too long to be practicable (Fig. 2).
Fig. 2. The delay increases when increasing the pattern dimension (delay in ms versus pattern dimension, for algorithms Algo-1 and Algo-2)
This paper proposes a more complete solution to reduce the delay of traffic grooming optimization based on Hopfield networks. That is a learning model, in which arriving service patterns are clustered into groups corresponding to separate optimal grooming solutions. When a new service pattern arrives, it is checked to see if it belongs to an existing optimized group: if one is found, the corresponding optimal
grooming solution is returned immediately. If not, an optimization process is required to determine its optimal grooming solution. This paper also discusses the practicability of our learning model and the clustering strategies. The implementation of the learning model and an analysis of its simulation are presented at the end of the paper.
2 Overview of the Traffic Grooming Optimization Based on Hopfield Network
In [4][5], traffic grooming is considered as the multiplexing of arriving services into bursts at edge nodes and the switching of services between bursts at intermediate nodes. The bursts used to support these services are divided into timeslots. Arriving services (considered together as a service pattern) are represented by a two-dimensional binary matrix (Fig. 3a), where each line corresponds to a service and the number of columns is equal to the number of timeslots in each burst. On each line of the matrix, a service is expressed by a chain of successive cells of value 1. The position of a service on a burst is important.
Fig. 3. Example of multiplexing a service pattern into bursts
With this representation, the optimization of multiplexing a service pattern into bursts amounts to arranging the services in the bursts so that the number of bursts used is minimal. Fig. 3b illustrates an example of multiplexing 4 services into 2 bursts. Similarly, switching services between bursts can require exchanging their timeslots within a burst, exchanging their bursts (i.e. wavelength conversion), or both exchanging their timeslots and their bursts. In these cases, the optimization of switching services between bursts amounts to minimizing the cost of the switching. Fig. 4 illustrates all possible cases of switching an arriving service; a small sketch of the matrix representation itself is given after the figure.
Fig. 4. Example of switching a service between bursts
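As an illustration of the binary-matrix representation (our own minimal sketch, not the authors' data structures), a service occupying a contiguous run of timeslots is turned into one row of the pattern, and two services can share a burst only if their rows never overlap:

```python
import numpy as np

def service_row(start: int, length: int, timeslots: int) -> np.ndarray:
    """One line of the pattern: a chain of successive 1s on the burst."""
    row = np.zeros(timeslots, dtype=int)
    row[start:start + length] = 1
    return row

def can_share_burst(rows) -> bool:
    """Services can be multiplexed onto one burst iff no timeslot is used twice."""
    return int(np.sum(rows, axis=0).max()) <= 1

# A 4-service, 8-timeslot pattern (positions are made up for the example):
pattern = np.array([service_row(0, 2, 8), service_row(2, 3, 8),
                    service_row(1, 2, 8), service_row(5, 2, 8)])
```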
The mentioned traffic grooming problems are then formulated as an objective function with the relevant constraints. To transform them into a Hopfield energy function, the penalty function approach [7] is used. From there, we can determine the Hopfield network used for the optimization, following the principle "the minimal energy, the optimal solution". A limitation of optimization methods based on Hopfield networks is that they require a considerable delay to find an optimal solution. In fact, as shown in [4][5], our method avoids the delay caused by the process of training the connection weights, and we proposed improvements in the optimization algorithm. However, based on the simulation results in [4][5], the delay is still too long, especially when the dimension of the service patterns is large (Fig. 2). Therefore, a more complete solution to reduce this delay is important.
3 Learning Model for Reducing the Delay The learning model proposed to reduce the delay of traffic grooming optimization based on Hopfield network is presented in Fig. 5.
Fig. 5. Learning model for reducing the delay
Initially, an arriving service pattern is provided to the optimization unit (1) to determine its optimal grooming solution. Because of the time limit for its grooming, a nearly optimal (instead of optimal) solution is returned (2). However, the optimization process is continued to determine an optimal solution, and this result is then provided to the clustering unit (3). In fact, there always exist arriving service patterns that have the same optimal grooming solution, as shown in Fig. 6. The clustering unit therefore clusters these service patterns into the same group, which corresponds to one found optimal grooming solution. The results of this clustering are then delivered to the learning unit to be stored (4). When a new service pattern arrives, it is delivered to the learning unit to check whether a corresponding optimal solution already exists (5). If one is found, the corresponding optimal solution is returned immediately (6). If not, an optimization process is required to find its optimal solution (7). However, the time required to check whether a similar pattern exists will increase as the amount of stored knowledge about optimized grooming solutions increases. This delay can
Fig. 6. Arriving service patterns (a) and (b) can have the same multiplexing solution
exceed the time limit for a grooming. The solution to this problem is that an arriving service pattern is delivered at the same time to the optimization unit (1) and to the learning unit (5). The first result returned by either of the two units is taken as the grooming solution for the current service pattern.
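The dispatch logic of Fig. 5 can be sketched as follows. This is only an illustration with names of our own choosing; the Hopfield-based optimizer, the clustering step and the stored knowledge are reduced to simple placeholders, and the parallel race between the two units is modelled with two threads and a shared result slot.

```python
import queue
import threading

def groom(pattern, learned, optimize, canonical_key, time_limit):
    """Return a grooming solution for `pattern` within `time_limit` seconds.

    learned       -- dict mapping a pattern key to an already optimized solution
    optimize      -- callable running the (Hopfield-based) optimization
    canonical_key -- callable mapping a pattern to its lookup key
    """
    results = queue.Queue()

    def lookup():
        solution = learned.get(canonical_key(pattern))
        if solution is not None:
            results.put(("learning", solution))

    def run_optimizer():
        results.put(("optimization", optimize(pattern, time_limit)))

    for worker in (lookup, run_optimizer):
        threading.Thread(target=worker, daemon=True).start()

    source, solution = results.get(timeout=time_limit)   # first answer wins
    if source == "optimization":
        learned[canonical_key(pattern)] = solution        # store for future patterns
    return solution
```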
4 Practicability of the Learning Model
Theoretically, the proposed learning model seems to be a complete solution for reducing the delay of traffic grooming optimization using Hopfield networks. However, an important question arises: what is the likelihood that an arriving service pattern is in the same group as another, already optimized one? If this likelihood is too small, implementing this learning model has no value. Let us consider arriving NxT-services patterns represented by a binary matrix, where N is the number of services in the pattern and T is the number of timeslots. The number of possible representations of service patterns is then
$n = (C_T)^N ,$   (1)

where $C_T$ is the number of possible representations of a service on a burst. However, since an arriving service pattern has N! permutations, the number of possible representations of NxT-services patterns in reality is

$n = \frac{(C_T)^N}{N!} .$   (2)
Fig. 7 illustrates the variation (for T = 8) of the number of possible representations when increasing the dimension of the service patterns. For N = 35 or N = 36, the number of possible representations is maximal (2.86 × 10^14). From equation (2), the probability that an arriving service pattern is in the same group as another, already optimized one is
p=
N!
(C )
N
T
(3)
Fig. 7. Variation of the number of possible representations when increasing the dimension of service patterns
Clearly, the probability that an arriving service pattern is in the same group as another optimized one is very small, nearly 0, in almost all cases. However, this is the case for the first arriving service patterns only; this probability then increases. In a simulation with 4x8-services patterns, the probability increases along with the number of arriving service patterns. As shown in Fig. 8, the probability of an arriving service pattern being similar to any optimized one reached 0.0114 over 10000 service patterns. When all possible representations (≈70000) of these 4x8-services patterns have been clustered, this probability reaches 1.
Fig. 8. The probability increases along with the number of arriving service patterns
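The figures quoted above are easy to reproduce. The short check below assumes that a service occupies a contiguous run of at least one timeslot, so that a single service has C_T = T(T+1)/2 possible positions on a burst (C_8 = 36); the function names are ours.

```python
from math import factorial

def representations_per_service(T: int) -> int:
    """C_T: contiguous runs of 1s of length 1..T on a burst with T timeslots."""
    return T * (T + 1) // 2

def distinct_patterns(N: int, T: int) -> float:
    """Equation (2): (C_T)^N / N! distinct NxT service patterns."""
    return representations_per_service(T) ** N / factorial(N)

# distinct_patterns(36, 8) -> about 2.86e14 (the maximum mentioned for T = 8)
# distinct_patterns(4, 8)  -> 69984, i.e. the ~70000 patterns of the 4x8 simulation
# 1 / distinct_patterns(4, 8) is the probability (3) for the very first pattern
```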
5 Hierarchic Clustering of Service Patterns
In fact, there are some arriving service patterns which have only one possible multiplexing solution; in this case it is not necessary to optimize them. Moreover, there exist different arriving service patterns having the same optimal grooming solution, which are called "similar" service patterns. Therefore, we need strategies to eliminate the service patterns for which no optimization is needed and to cluster "similar" service patterns.
5.1 Eliminate Service Patterns
We can recognize that some arriving service patterns always require N bursts for their multiplexing. Those are the patterns (see Fig. 9) for which no pair
of services can be multiplexed on the same burst. These are the patterns for which there always exists at least one column of the representation matrix having all values set to 1 (e.g. column t3 in Fig. 9a and columns t2, t3 in Fig. 9b). For these service patterns, optimization of their multiplexing is unneeded, and they are eliminated as soon as they arrive at the learning model.
Fig. 9. These service patterns always require 3 bursts for their multiplexing
Also, in the simulation with 4x8-services patterns, the probability of an arriving service pattern being similar to another optimized pattern (PoS) (0.0114 over 10000 tested service patterns) is smaller than the probability of an arriving service pattern having to be optimized (P2O) (0.1447). However, compared to the probability of an arriving service pattern not needing to be optimized (P2U) (0.8439), it is very small. Fig. 10 shows the comparison between these three probabilities: P2O gradually decreases and goes down to 0 (when all possible representations of service patterns have arrived), while PoS increases and reaches 1 − P2U.
Fig. 10. A comparison between P2U, P2O and PoS
5.2 Clustering Based on the Number of Used Bursts
Normally, for a given arriving service pattern, we can determine the minimal number of bursts it can be multiplexed into. As shown by the examples in Fig. 11, the minimal number of bursts needed there is 2, because the maximal overlap over all columns of their representation matrices is 2 (in Fig. 11a, service s1 overlaps with service s2 at timeslot t2 and with s3 at t3, and in Fig. 11b, service s1 overlaps with s2 at t2 and with s3 at t4). In conclusion, the minimal number of needed bursts is determined by the maximal number of overlapping cells (with value 1) over all columns.
Fig. 11. These service patterns require at least 2 bursts for their multiplexing
Clearly, this minimal number of needed bursts cannot be lower than M = N/T if N is a multiple of T, and M = (N/T)+1 otherwise. In other words, we can cluster arriving NxT-service patterns into separate groups corresponding to the number of needed bursts: the N-bursts group, the (N-1)-bursts group, and so on down to the M-bursts group. The N-bursts group corresponds to the arriving service patterns that need no optimization.
5.3 Clustering Based on the Grooming Solution
The clustering based on the number of used bursts only clusters arriving service patterns into these "basic" groups. Within each of them there are many smaller groups, each corresponding to a separate optimal grooming solution. In fact, it is almost impossible to characterize the similarity between arriving service patterns that share the same optimal grooming solution. For example, the two arriving service patterns in Fig. 12 both belong to the 2-bursts group and are nearly similar, yet they correspond to two separate optimal multiplexing solutions, while there exist arriving service patterns (as shown in Fig. 13) that are not similar but have the same optimal grooming solution. The simplest way of clustering is therefore based on the optimal grooming solution: a newly arriving service pattern is first passed to the optimizing unit to determine its optimal grooming solution and, based on this result, is clustered into the right group.
Fig. 12. These arriving service patterns are all in the 2-bursts group and almost similar, but do not have the same optimal multiplexing solution
In conclusion, the clustering process for arriving service patterns can be described hierarchically as shown in Fig. 14. The first step is the clustering based on the number of used bursts; this step also determines which service patterns do not need to be optimized for their grooming (the N-bursts group). For the other groups, a second clustering step based on the grooming solution is used, which clusters them into groups that each correspond to a separate optimal grooming solution.
Fig. 13. These arriving service patterns are not similar, but have the same optimal grooming solution
Fig. 14. Hierarchic clustering model for arriving service patterns
Based on this hierarchic clustering model, a pre-clustering unit is added to our learning model, as shown in Fig. 15.
Fig. 15. The improved learning model
With this improved learning model, an arriving service pattern is first delivered to the pre-clustering unit, which checks whether it belongs to the N-bursts group. If so, it is delivered directly to the output without optimization. If optimization is needed, the arriving service pattern is delivered to the optimization unit, to the learning unit, or to both. The subsequent processing is the same as described in Section 2.
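The dispatch logic of the pre-clustering unit can be sketched as follows (a hypothetical illustration; optimize_grooming and the solution store stand in for the Hopfield-network optimization unit and the learning unit of the model):

def minimal_bursts(pattern):
    return max(sum(row[t] for row in pattern) for t in range(len(pattern[0])))

def handle_pattern(pattern, solution_store, optimize_grooming):
    # pre-clustering: skip N-burst patterns, reuse known solutions, otherwise optimize
    if minimal_bursts(pattern) == len(pattern):   # N-bursts group: nothing to optimize
        return None                               # delivered directly to the output
    key = tuple(map(tuple, pattern))
    if key in solution_store:                     # a "similar", already optimized pattern
        return solution_store[key]
    solution = optimize_grooming(pattern)         # optimization unit
    solution_store[key] = solution                # learning unit stores the result
    return solution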
6 Simulation Results and Analyses
All simulations of our learning model were run on a Pentium 4 PC (1.5 GHz, 256 MB RAM). We tested the delay in three different cases: without the learning model (as also done in [5]), with the unimproved learning model (Fig. 5), and with the improved learning model (Fig. 15). In each case, two algorithms were used (see [5]): one in which the service taken to try is not yet multiplexed, and another in which the service taken to try is arbitrary. The results are shown in Table 1: (Algo-1, Algo-2) is the case without the learning model, (Algo-L1, Algo-L2) the case with the unimproved learning model, and (Algo-CL1, Algo-CL2) the case with the improved learning model.

Table 1. Average delay (millisecond) over algorithms

Algo-1     Algo-2     Algo-L1    Algo-L2    Algo-CL1   Algo-CL2
0.851      1.081      0.841      1.031      0.421      0.49
9.854      12.899     10.124     13.389     1.572      1.913
50.203     59.595     44.765     57.973     2.374      3.295
134.843    178.377    133.281    185.878    2.143      2.874
308.795    415.197    330.195    470.506    2.874      3.796
625.279    848.07     631.759    950.527    4.727      6.279
As shown in Table 1, the delay with the unimproved learning model is no better than without the learning model. This is expected: the only objective of the unimproved learning model is to avoid re-optimizing service patterns that have been optimized before. However, as shown in Fig. 10, the probability of a newly arriving service pattern being similar to an already optimized one (PoS) is very small, and checking this similarity itself takes time, so no advantage is observed between the unimproved learning model and no learning model at all. With the improved learning model, a pre-clustering unit is integrated into the model. This unit eliminates all arriving service patterns that need no optimization, which considerably reduces the delay of traffic grooming optimization: as shown in Fig. 10, the probability that a newly arriving service pattern needs no optimization is rather high (0.84), so the resulting delay is short.
7 Conclusion
Optimization based on neural networks normally requires a considerable delay to find an optimal solution, which has limited the applicability of this method in
optical data transport, where the delay allowed for traffic grooming is short. Reducing this delay is therefore an important factor in our model of traffic grooming optimization based on the Hopfield network. In [4][5] we proposed solutions to reduce this delay, but not thoroughly. This paper has therefore proposed a better solution: a learning model composed of two main functions, (1) eliminating arriving service patterns that need no optimization and (2) detecting and skipping the optimization of arriving service patterns that have been optimized before. The simulation results show the advantages of this learning model, which improves the practicability of our traffic grooming optimization model based on the Hopfield network.
References
1. Zhu, K., et al.: A Review of Traffic Grooming in WDM Optical Networks: Architectures and Challenges. Computer Science Department, University of California, Davis, CA 95616, USA
2. Hu, J.Q.: Traffic grooming in WDM ring networks: A linear programming solution. Journal of Opt. Networks 1 (2002)
3. Zhao, C.M., et al.: Traffic Grooming for WDM Rings with Dynamic Traffic (2003) (manuscript)
4. Vo, V.M.N., et al.: Traffic Switching Optimization on Optical Routing by Using Hopfield Network. In: RIVF 2004, Hanoi, Vietnam, February 02-05 (2004)
5. Vo, V.M.N., et al.: Optimization of Services-into-Burst Multiplexing based on Hopfield Network. In: ICHSN 2005, Montreal (August 2005)
6. Lagoudakis, M.G.: Neural Networks and Optimization Problems - A Case Study: The Minimum Cost Spare Allocation Problem. University of Southwestern Louisiana
7. Lillo, W., et al.: On Solving Constrained Optimization Problems with Neural Networks: A Penalty Method Approach. IEEE Trans. on Neural Networks 4(6) (1993)
8. Zhang, X., et al.: An effective and comprehensive approach to traffic grooming and wavelength assignment in SONET/WDM rings. In: SPIE Proc. Conf. All-Opt. Networking, Boston, MA, September 1998, vol. 3531 (1998)
9. Wang, J., et al.: Improved approaches for cost-effective traffic grooming in WDM ring networks: ILP formulations and single-hop and multihop connections. IEEE Journal of Lightwave Technology 19(11) (2001)
10. Simmons, J., et al.: Quantifying the benefit of wavelength add-drop in WDM rings with distance-independent and dependent traffic. IEEE Journal of Lightwave Technology 17 (January 1999)
Power Load Forecasting Using Data Mining and Knowledge Discovery Technology
Yongli Wang 1, Dongxiao Niu 1, and Yakun Wang 2
1 School of Economics and Management, North China Electric Power University, Beijing, China
[email protected], [email protected]
2 Shijiazhuang Vocational College of Industry and Commerce, Shijiazhuang, China
[email protected]
Abstract. Considering the importance of the peak load for the dispatching and management of a power system, the error of the peak load is proposed in this paper as a criterion to evaluate the forecasting model. The paper proposes a systemic framework that uses data mining and knowledge discovery (DMKD) for pretreatment of the data, and a new model that combines artificial neural networks with data mining and knowledge discovery for electric load forecasting. With DMKD technology, the system can not only mine the historical daily loads whose meteorological category matches that of the forecasting day, composing a data sequence with highly similar meteorological features, but can also eliminate redundant influential factors. An artificial neural network is then constructed for prediction according to these characteristics. By eliminating redundant information, the new model accelerates the training of the neural network and improves the stability of convergence. Compared with single SVM and BP neural network models, the new method achieves greater forecasting accuracy. Keywords: Data mining; Knowledge discovery; Power system; BP Neural Network; Power load forecasting.
1 Introduction
Load forecasting has become a crucial issue for the operational planners and researchers of electric power systems. To ensure a reliable electricity supply, electricity providers face increasing competition in the demand market and must pay more attention to electricity quality, including unit commitment, hydrothermal coordination, short-term maintenance, interchange and transaction evaluation, network power flow dispatch optimization, and security strategies.1
1 This research has been supported by the Natural Science Foundation of China (70671039) and by the Beijing Municipal Commission of Education disciplinary construction and graduate education construction projects.
Inaccurate forecasting of the power load leads to great losses for power companies. Bunn and Farmer pointed out that a 1% increase in forecasting error implied a 10 million increase in operating costs. Short-term forecasts refer to hourly predictions of electricity load demand for lead times ranging from one hour to several days ahead. In certain instances the prediction of the daily peak load is the objective of short-term load forecasting, since it is the most important load during any given day. The quality of short-term hourly load forecasts has a significant impact on the economic operation of the electric utility, since many decisions based on these forecasts have significant economic consequences: economic scheduling of generating capacity, scheduling of fuel purchases, system security assessment, and planning for energy transactions. The importance of accurate load forecasts will increase in the future because of the dramatic changes occurring in the structure of the utility industry due to deregulation and competition. This environment compels the utilities to operate at the highest possible efficiency, which, as indicated above, requires accurate load forecasts.
The load has complex and non-linear relationships with several factors such as weather and climatic conditions, social activities, seasonal factors, past usage patterns, the day of the week, and the time of the day. At present, researchers use abnormal-value processing techniques or various smoothing methods to force the sequence of historical data into a new one of smaller amplitude, and then build a neural network model on it to forecast. This approach has some disadvantages: it cannot capture the changes in short-term load associated with specific weather, and new problems arise when an artificial neural network is used for forecasting in this way.
The objective of this paper is to propose a conceptual framework that identifies major research areas of data mining and knowledge discovery (DMKD). Such a framework is useful because, as an interdisciplinary field, DMKD has an identity problem, which is common for interdisciplinary fields [2]. DMKD draws extensively from database technology, machine learning, statistics, algorithm design, data visualization, and mathematical modeling. Through the above analysis, a new approach to improving the accuracy of load forecasting is presented. The keys to improving accuracy are the preprocessing of the historical data and an improved forecasting model, so we present a BP neural network based on data mining and knowledge discovery technology for power load forecasting. First, the weather character of the forecasting day is obtained from the weather forecast. Second, taking advantage of DMKD's strengths in handling large data sets and erasing redundant information, a number of historical load records that are similar to the forecasting date in weather character are retrieved; based on correlation analysis, these data are extracted and composed into a data sequence with very similar weather characters, which reduces the training data.
Finally, the forecasting model is built with these data. The actual case study shows that this model is more accurate.
2 Data Mining and Knowledge Discovery Preprocessing
2.1 Data Mining Introduction
Data mining is the process of searching for new relations, patterns and trends using pattern recognition, statistical and mathematical techniques, in order to extract information of potential value. Its main functions are the following:
(1) Summarization. Summarization is the abstraction or generalization of data. A set of task-relevant data is summarized and abstracted, resulting in a smaller set that gives a general overview of the data, usually with aggregate information.
(2) Classification. Classification derives a function or model that determines the class of an object based on its attributes. A set of objects is given as the training set, in which every object is represented by a vector of attributes along with its class. A classification function or model is constructed by analyzing the relationship between the attributes and the classes of the objects in the training set. This function or model can then classify future objects and helps us develop a better understanding of the classes of objects in the database.
(3) Association. Association searches for related events or records, deduces the potential relations among the events, and recognizes patterns that are likely to occur repeatedly. The results can be ordered according to the strength of the relation.
(4) Clustering. Clustering identifies classes - also called clusters or groups - for a set of objects whose classes are unknown. The objects are clustered so that the intraclass similarities are maximized and the interclass similarities are minimized, based on some criteria defined on the attributes of the objects. Once the clusters are decided, the objects are labeled with their corresponding clusters, and the common features of the objects in a cluster are summarized to form the class description.
(5) Trend analysis. Time series data are records accumulated over time. For example, a company's sales, a customer's credit card transactions and stock prices are all time series data. Such data can be viewed as objects with an attribute time; the objects are snapshots of entities with values that change over time. Finding the patterns and regularities in the data evolution along the dimension of time can be fascinating.
(6) Forecasting. Forecasting analyzes the law of development of objects and predicts the coming trend.
2.2 Data Mining and Knowledge Discovery
We select the daily type (such as weekend or holiday), daily highest temperature, daily average temperature, daily lowest temperature, daily rainfall, daily wind speed, cloud cover and daily seasonal attribute as random disturbance factors for the short-term load, as shown in Table 1.
Table 1. Initial decision table

Attribute name     Attribute meaning
Z1, ..., Z7        Tdmax(d-i), i = 1, ..., 7: the maximum temperature of day d-i
Z8, ..., Z14       Tdmin(d-i), i = 1, ..., 7: the minimum temperature of day d-i
Z15, ..., Z21      Tdavg(d-i), i = 1, ..., 7: the average temperature of day d-i
Z22, ..., Z27      Td(l), l = 1, 5, 9, 13, 17, 21: six temperature points of day d at hour l
Z28                Rd: the rainfall of day d
Z29                Wdd: the wind speed of day d
Z30                Humd: the humidity of day d
Z31                Cldd: the cloud cover of day d
Z32                Md: the month of day d
Z33                Sead: the season of day d
Z34                Wkd: the day of the week of day d
Z35                Hd: whether day d is a holiday (0 = holiday, 1 = not a holiday)
Z36                Wndd: whether day d is a weekend ("-" = weekend, "+" = not a weekend)
1. Gray relation analysis theory. Relational analysis is a method in gray system theory for analyzing the extent of the relation among factors; its essence is to judge the extent of the relation according to the similarity of the curves. The detailed process is as follows:
(1) Construct the sequence matrix. After the data classification bank has been constructed, a relational ordering analysis is needed. Let T0 represent the weather character of the forecasting date; if the weather forecast reports a highest temperature of 40°C, an average temperature of 25°C, a lowest temperature of 15°C and a rainfall of 20 mm, then T0 = (T0(1), T0(2), T0(3), T0(4)) = (40, 25, 15, 20). In the same way, the comparison sequences T1, T2, ..., Tn are constructed from the daily weather data of the obtained data bank:

(T0, T1, T2, ..., Tn) = [ T0(1)  T1(1)  ...  Tn(1)
                          ...    ...    ...  ...
                          T0(m)  T1(m)  ...  Tn(m) ]          (1)

(2) Non-dimensioning. The data are processed by the initial-value method to eliminate the dimension:

Ti'(k) = Ti(k) / Ti(1),   i = 0, 1, 2, ..., n;  k = 1, 2, ..., m          (2)

The non-dimensioned matrix is:

(T0', T1', T2', ..., Tn') = [ T0'(1)  T1'(1)  ...  Tn'(1)
                              ...     ...     ...  ...
                              T0'(m)  T1'(m)  ...  Tn'(m) ]          (3)

(3) Calculate the relational coefficients:

ξ0i(k) = [ min_i min_k |x0(k) - xi(k)| + ρ max_i max_k |x0(k) - xi(k)| ] / [ |x0(k) - xi(k)| + ρ max_i max_k |x0(k) - xi(k)| ]          (4)

where i = 0, 1, 2, ..., n; k = 1, 2, ..., m; ρ is the resolution ratio, ρ ∈ [0,1], generally ρ = 0.5. The relational coefficient matrix is then obtained:

[ ξ01(1)  ...  ξ0n(1)
  ...     ...  ...
  ξ01(m)  ...  ξ0n(m) ]          (5)

(4) Calculate the associated (relational) degree:

r0i = (1/m) Σ_{k=1}^{m} ξ0i(k),   i = 1, 2, ..., n          (6)
2. Ascertain the historical load sequence for forecasting. In this article, the reference sequence is the meteorological factor index vector T0 of the forecasting date; the comparison sequences are the meteorological factor index vectors Tt of historical dates whose weather is similar to that of the forecasting date. The associated degree rt between T0 and Tt is then calculated. A threshold value α is set, and the dates whose associated degree satisfies rt ≥ α are chosen; their load data are collated in order into a new sequence.
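A compact NumPy sketch of equations (1)-(6) (our own illustration; the four weather indices, ρ = 0.5 and the threshold α below are example assumptions, not values from the paper):

import numpy as np

def gray_relational_degrees(t0, history, rho=0.5):
    # t0: (m,) reference weather indices; history: (n, m) comparison sequences
    seq = np.vstack([t0, history]).astype(float)
    seq = seq / seq[:, [0]]                          # non-dimensioning by the initial value, Eq. (2)
    diff = np.abs(seq[0] - seq[1:])                  # |x0(k) - xi(k)|
    dmin, dmax = diff.min(), diff.max()
    xi = (dmin + rho * dmax) / (diff + rho * dmax)   # relational coefficients, Eq. (4)
    return xi.mean(axis=1)                           # relational degrees r0i, Eq. (6)

t0 = np.array([40, 25, 15, 20])                      # forecast-day indices from the example
history = np.array([[38, 24, 16, 18],
                    [30, 20, 10,  5],
                    [41, 26, 14, 22]])
r = gray_relational_degrees(t0, history)
alpha = 0.7
selected = np.where(r >= alpha)[0]                   # dates whose loads form the training sequence
print(r, selected)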
3 BP Neural Network Model
The backpropagation (BP) neural network is not only the most widely used, but also one of the most maturely developed neural networks. It is a multi-layer network trained with weights using nonlinear differentiable functions; 80 to 90 percent of artificial neural network models in practical applications are BP networks or their variants. Backpropagation was created by generalizing the Widrow-Hoff learning rule to multiple-layer networks and nonlinear differentiable transfer functions. Input vectors and the corresponding target vectors are used to train a network until it can approximate a function, associate input vectors with specific output vectors, or classify input vectors in an appropriate user-defined way. Networks with biases, a sigmoid layer, and a linear output layer are capable of approximating any function with a finite number of discontinuities. Standard backpropagation is a gradient descent algorithm, as is the Widrow-Hoff learning rule, in which the network weights are moved along the negative
of the gradient of the performance function. Properly trained backpropagation networks tend to give reasonable answers when presented with inputs that they have never seen: typically, a new input leads to an output similar to the correct output for training inputs that are similar to the new input. This generalization property makes it possible to train a network on a representative set of input/target pairs and get good results without training the network on all possible input/output pairs.
An artificial neural network is a sophisticated network system made of many neurons connected with each other. The main advantage of using neural networks lies in their ability to learn the nonlinear relationships between the load and the explanatory variables directly from the historical data, without the necessity of selecting an appropriate model. The Levenberg-Marquardt (L-M) algorithm is adopted in place of the classical BP algorithm to speed up the training of the neural network and improve the stability of its convergence. The performance function for the training of the neural network is defined as

V(x) = Σ_{i=1}^{N} e_i²(x)          (8)

In equation (8), the parameter vector x holds the weights and biases, which change during the training of the network, and e_i(x) is the error between the outputs calculated by the network and the expected outputs for training sample i. The aim of the training is to iteratively modify the network's weights and biases to minimize the performance function. To minimize the function V(x), Newton's method would be

Δx = -[∇²V(x)]⁻¹ ∇V(x)          (9)

where ∇²V(x) is the Hessian matrix and ∇V(x) is the gradient. It can then be shown that

∇V(x) = Jᵀ(x) e(x)          (10)
∇²V(x) = Jᵀ(x) J(x) + S(x)          (11)

where J(x) is the Jacobian matrix, and

S(x) = Σ_{i=1}^{N} e_i(x) ∇²e_i(x)          (12)

For the Gauss-Newton method it is assumed that S(x) ≈ 0, and the update becomes

Δx = [Jᵀ(x) J(x)]⁻¹ Jᵀ(x) e(x)          (14)

The L-M modification of the Gauss-Newton method is

Δx = [Jᵀ(x) J(x) + μI]⁻¹ Jᵀ(x) e(x)          (15)
The parameter μ is multiplied by some factor β whenever a step would result in an increase of V(x). When a step reduces V(x), μ is divided by β. (In this paper, we
used β = 10 and μ = 0.01 as the starting point.) In this way, the direction of the iteration is adjusted automatically to accelerate the convergence of the training. The process of the L-M algorithm is as follows:
(1) Present all inputs to the network, compute the corresponding network outputs and errors, and compute the sum of squared errors over all inputs, V(x).
(2) Compute the Jacobian matrix of V(x).
(3) Solve (15) to obtain Δx.
(4) Recompute the sum of squared errors using x + Δx. If this new sum of squares is smaller than that computed in step 1, then reduce μ by β, let x = x + Δx, and go back to step 1. If the sum of squares is not reduced, then increase μ by β and go back to step 3.
(5) The algorithm is assumed to have converged when the norm of the gradient (10) is less than some predetermined value, or when the sum of squares has been reduced to some error goal.
The initialization of the weights and biases has an important impact on the training process. The method of picking initial weights for the network proposed in paper [10] is adopted to decrease the training time.
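A minimal NumPy sketch of one L-M update of the form described above (an illustration, not the authors' implementation; the errors and jacobian callables and the toy fitting example are assumptions):

import numpy as np

def lm_step(x, errors, jacobian, mu=0.01, beta=10.0, max_tries=20):
    # x: weights/biases; errors(x): residual vector e(x); jacobian(x): J(x)
    e = errors(x)
    J = jacobian(x)
    v = float(e @ e)                                   # V(x), Eq. (8)
    for _ in range(max_tries):
        dx = np.linalg.solve(J.T @ J + mu * np.eye(x.size), J.T @ e)   # Eq. (15)
        x_new = x - dx
        e_new = errors(x_new)
        if float(e_new @ e_new) < v:                   # V(x) reduced: accept step, shrink mu
            return x_new, mu / beta
        mu *= beta                                     # otherwise enlarge mu and retry
    return x, mu

# toy usage: fit y = a*x by treating a as the single trainable weight
xs = np.array([1.0, 2.0, 3.0]); ys = np.array([2.1, 3.9, 6.2])
errors = lambda w: ys - w[0] * xs
jacobian = lambda w: (-xs).reshape(-1, 1)
w, mu = lm_step(np.array([1.0]), errors, jacobian)
print(w)    # close to the least-squares slope of about 2.04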
4 BP Neural Network Model Based on Data Mining and Knowledge Discovery
4.1 Process for Forecasting
Figure 1 shows the process of the BP forecasting model based on data mining and knowledge discovery and the genetic algorithm.
1. Map the information of the source database into the decision system model.
2. Use the gray relation analysis method to construct the needed historical load sequence.
3. Check data integrity and accuracy; select the input variables and define the output variables.
4. Use the combined data mining and knowledge discovery technology introduced above to preprocess the input historical data and obtain the training and testing sample banks with the most similar weather character; reduce the knowledge and derive rules from the new, reduced decision system.
5. Conceptualize the attribute values of the data and determine the number of hidden layers and the number of neurons in them. No hard rule is available for determining these, which may depend on trial and error.
6. Learn (train) from the historical data. Learning is the process by which a neural network modifies its weights in response to external inputs in order to minimize the global error; the equation that specifies this change is called the learning rule.
7. Test. When a neural network is well trained after learning, it is evaluated on a test set containing historical data that the network has never seen. If the testing results are in an acceptable range, the network can be considered fully trained and the next step can be performed.
[Figure: flowchart - Begin; acquire the original data; input the data series; data mining and knowledge discovery; construct the BP neural network and design the BP algorithm; train the neural network and obtain the network structure; BP forecasting; if the error does not meet the objective, return to training, otherwise stop.]
Fig. 1. The load forecasting BP neural networks based on data mining and knowledge discovery
8. Recall. Recalling refers to how the network processes a stimulus from its input layer and creates a response at the output layer. Recalling does not involve any learning and expects no desired outputs.
4.2 Error Analysis
The relative error and the root-mean-square relative error are used as the final evaluation indicators:

Er = (xt - yt) / xt × 100%          (16)

RMSRE = sqrt( (1/n) Σ_{t=1}^{n} ((xt - yt) / xt)² )          (17)
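For illustration (assuming xt are the actual loads and yt the forecasts), the two indicators can be computed as:

def relative_errors(actual, forecast):
    # per-point relative error Er (%) and the RMSRE (%) over the whole test set, Eqs. (16)-(17)
    er = [(x - y) / x * 100.0 for x, y in zip(actual, forecast)]
    rmsre = (sum(((x - y) / x) ** 2 for x, y in zip(actual, forecast)) / len(actual)) ** 0.5 * 100.0
    return er, rmsre

er, rmsre = relative_errors([100.0, 110.0, 95.0], [98.0, 113.0, 96.0])
print(er, rmsre)    # [2.0, -2.72..., -1.05...] and about 2.0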
4.3 Application and Analysis
Power load data from an area of the Inner Mongolia Autonomous Region were used to verify the effectiveness of the model. The power load data from 0:00 on 10/6/2005 to 12:00 on 2/7/2006 were used as the training sample and to establish the single-variable time series {x(t1), x(t2), ..., x(t948)}, and the power load data at 09:00 from 03/07/2006 to 14/07/2006 were used as the testing sample. The BP algorithm is used to make predictions with the sigmoid function. The parameters are chosen as follows: the number of nodes in the input layer is 13 and the number of nodes in the
output layer is 1. The number of nodes in the hidden layer is 7, chosen from experience. The system error is 0.001 and the maximum number of iterations is 5000. The SVM is used to make predictions after the samples are normalized; Matlab 7.0 is used to compute the results and the radial basis function is chosen as the kernel function, with parameters C = 71.53, ε = 0.019 and σ² = 4.23. The results are shown in Table 2.
Table 2. Comparison of forecasting error from July 3, 2006 to July 14, 2006 (%)
Date         DMDK-BP   SVM      BP
2006-7-3     -1.55      2.63     3.83
2006-7-4      2.09      3.54    -1.45
2006-7-5     -1.84     -1.74    -2.95
2006-7-6      3.58      4.32     6.32
2006-7-7      1.23     -2.37     2.52
2006-7-8      2.35     -3.51    -4.11
2006-7-9     -1.88      2.71     2.54
2006-7-10    -2.24      5.63     5.21
2006-7-11     2.58      2.82    -3.18
2006-7-12     3.87      3.01     4.63
2006-7-13    -1.67      3.35    -3.78
2006-7-14     2.31      3.96     4.59
RMSRE         2.39      3.44     3.97
[Figure: forecasting error (%) of the SVM, BP and DMDK-BP models over the twelve test days.]
Fig. 3. The error analysis with different models
5 Conclusion
Firstly, in short-term power load forecasting it is necessary to preprocess the data, which are subject to the influence of many uncertain factors. We used data mining and knowledge discovery technology to pretreat the data, divided the data into groups and eliminated the redundant influential factors, and then established a BP prediction model. The prediction on real load data shows that the model is effective for short-term power load forecasting. Secondly, the main influential factors are considered adequately in this method, which combines a fuzzy classifier and gray relation analysis for data mining; through preprocessing it reduces the training samples, accelerates the training and takes the weather factors into account. Thirdly, compared with the single SVM and BP network, this method not only improves the accuracy of short-term load forecasting and the practicability of the system, but can also readily be implemented in software.
References
1. Niu, D.-x., Cao, S., Lu, J.: Technology and Application of Power Load Forecasting. China Power Press, Beijing (2009)
2. Niu, D.-x., Wang, Y.-l., Wu, D.-s.: Power load forecasting using support vector machine and ant colony optimization. Expert Systems with Applications 37, 2531–2539 (2010)
3. Niu, D.-x., Wang, Y.-l., Duan, C.-m., Xing, M.: A New Short-term Power Load Forecasting Model Based on Chaotic Time Series and SVM. Journal of Universal Computer Science 15(13), 2726–2745 (2009)
4. Luo, Q.: Advancing knowledge discovery and data mining. In: Workshop on Knowledge Discovery and Data Mining, pp. 1–5 (2008)
5. Nolan, R.L., Wetherbe, J.C.: Toward a comprehensive framework for MIS research. MIS Quarterly 4(2), 1–19 (1980)
6. Zhang, W., Zeng, T., Li, H.: Parallel mining association rules based on grouping. Computer Engineering 30(22), 84–85 (2004)
7. Li, Q., Yang, L., Zhang, X., et al.: An effective apriori algorithm for association rules in data mining. Computer Application and Software 21(12), 84–86 (2004)
8. Han, J., Pei, J., Yin, Y., Mao, R.: Mining Frequent Patterns without Candidate Generation: A Frequent-Pattern Tree Approach. Data Mining and Knowledge Discovery, 53–87 (2004)
9. Huang, H.G., Hwang, R.C., Hsieh, J.G.: A new artificial intelligent peak power load forecaster based on non-fixed neural networks. Electrical Power Energy Syst., 245–250 (2002)
10. Li, K., Gao, C., Liu, Y.: Support vector machine based hierarchical clustering of spatial databases. Journal of Beijing Institute of Technology 22(4), 485–488 (2002)
11. He, F., Zhang, G., Liu, Y.: Improved load forecasting method based on BP network. East China Electric Power 32(3), 31–33 (2004)
12. Zi, Z., Zhao, S., Wang, G.: Study of relationship between fuzzy logic system and support vector machine. Computer Engineering 30(21), 117–119 (2004)
13. Jiang, Y., Lu, Y.: Short-term Load Forecasting Using a Neural Network Based on Similar Historical Day Data. In: Proceedings of the EPSA, pp. 35–40 (2001)
A Collaborative Framework for Multiagent Systems
Moamin Ahmed, Mohd Sharifuddin Ahmad, and Mohd Zaliman M. Yusoff
Universiti Tenaga Nasional, Km 7, Jalan Kajang-Puchong, 43009 Kajang, Selangor, Malaysia
[email protected], {sharif,zaliman}@uniten.edu.my
Abstract. In this paper, we demonstrate the use of agents to extend the role of humans in a collaborative work process. The extended roles of agents provide a convenient means for humans to delegate mundane tasks to software agents. The framework employs the FIPA ACL communication protocol which implements communication between agents. An interface for each agent implements the communication between human actors and agents. Such interface and the subsequent communication performed by agents and between agents contribute to the achievement of shared goals. Keywords: Intelligent Software Agents, Multiagent Systems, Agent Communication Language, Collaboration.
1 Introduction
In human-centered collaboration, the problem of adhering to deadlines presents a major issue. The variability of tasks imposed on humans poses a major challenge in keeping time for implementing those tasks. One way of overcoming this problem is to use time management systems, which keep track of deadlines and provide reminders for time-critical tasks. However, such systems do not always provide the assistance needed to perform follow-up tasks in a collaborative process. In this paper, we demonstrate the development and application of role agents to implement a collaborative work process, the Examination Paper Preparation and Moderation Process (EPMP), in an academic faculty. We use the FIPA agent communication language (ACL) to implement communication between agents and to provide a convenient means for humans to delegate mundane tasks to software agents. An interface for each agent implements the communication between humans and agents. Such interfaces and the subsequent communication performed by agents and between agents contribute to the achievement of a shared goal, i.e. the completion of examination paper preparation and moderation. We use the FIPA ACL to demonstrate the usefulness of the system in taking over the timing and execution of communication from humans to achieve the goal. The important tasks, i.e. the preparation and moderation tasks, are still performed by humans; the system intelligently urges the human actors to complete the tasks by the deadline, executes communicative acts to other agents when the tasks are completed, and uploads and downloads documents on behalf of its human counterpart. This paper reports an extension to our previous work in the same project [1]. Section 2 of this paper briefly dwells on the related work in agent communication. In
Section 3, we present our framework that uses agent communication to solve the problem of EPMP. Section 4 discusses the development and testing of the framework and Section 5 concludes the paper.
2 Related Work
We review some important aspects of the development of agent communication. The outcome of our analysis is used to conceive a framework to develop and implement a collaborative work process using an agent communication language.
2.1 The Speech Act Theory
The speech act theory [5, 7] considers three aspects of utterances. Locution refers to the act of utterance itself. Illocution refers to the 'type' of utterance, as in a request to turn on the heater. Perlocution refers to the effect of an utterance, i.e., how it influences the recipient. Most ACLs in use today are based on illocutionary speech acts. The illocutionary verbs (e.g. request, tell) in a natural language typically correspond to performatives in an ACL. The theory is consistent with the mentalistic notion of agents in that the message is intended to communicate attitudes about information such as beliefs, goals, etc. Cohen & Perrault [6] view a conversation as a sequence of actions performed by the participants, intentionally affecting each other's model of the world, primarily their beliefs and goals. These actions can only be performed if certain conditions hold. For example, in the semantics for request(s, h, φ), the preconditions are:
• s believe h can do φ: you don't ask someone to do something unless you believe they can do it;
• s believe h believe h can do φ: you don't ask someone unless they believe they can do it;
• s believe s want φ: you don't ask someone unless you want it.
Similarly, the post-condition for such a request is that
• h believe s believe s want φ: the effect is to make the hearer aware of your desire.
2.2 The Knowledge Query and Manipulation Language
The Knowledge Query and Manipulation Language (KQML) is a high-level communication language and protocol for exchanging information and knowledge and supporting run-time knowledge sharing among agents [2]. The KQML message structure consists of three layers: the content, message and communication layers. The content layer contains the actual content of the message, specified in any language. The message layer consists of the set of performatives provided by the language. Performatives in KQML are grouped into nine categories, each defining the allowable speech acts that agents may use; they specify whether the content is a query, an assertion or any of those defined in the categories. The communication
layer consists of low-level communication parameters, such as the sender, receiver, and message identities. A typical KQML message format is as follows:

(tell
  :sender      A
  :receiver    B
  :language    prolog
  :ontology    bill
  :reply-with  id1
  :content     "stat(x,y)")
In KQML, tell is the performative; the message consists of other important parameters whose values facilitate the processing of the message. In this example, agent A queries agent B, in Prolog (:language), about the truth status of stat(x,y). A response to this message is identified by id1 (:reply-with). The value of :content represents the subject matter of the communication (illocutionary) act. The ontology bill provides additional information for the interpretation of :content [2, 5].
2.3 The Foundation for Intelligent Physical Agents
The Foundation for Intelligent Physical Agents (FIPA) is an IEEE Computer Society standards organization that promotes agent-based technology and the interoperability of its standards with other technologies. FIPA proposes a standard for an ACL based on the speech act theory, which is quite similar to KQML [3]. A FIPA ACL message contains a set of one or more message parameters; the parameters needed for effective agent communication vary according to the situation. The mandatory parameter in all ACL messages is the performative, although most ACL messages also contain the sender, receiver, and content parameters. Users have the flexibility to include user-defined message parameters other than those specified by FIPA. If an agent does not recognize or is unable to process one or more of the parameters or parameter values, it replies with a not-understood message. The following is an example of a FIPA ACL message, the semantics of which is similar to KQML:

(disconfirm
  :sender    (agent-identifier :name i)
  :receiver  (set (agent-identifier :name j))
  :content   "((mammal shark))"
  :language  fipa-sl)
FIPA specifications deal with ACL messages, message exchange interaction protocols, speech act theory-based communicative acts and content language representations. The references we used for the development are as follows [3, 4]:
• SC00061G - FIPA ACL Message Structure Specification
• XC00086D - FIPA Ontology Service Specification
The FIPA ACL Message Structure Specification defines a set of message parameters for our domain [2]. The FIPA Ontology Service Specification assumes that two agents, who wish to converse, share a common ontology for the domain of discourse, i.e. the agents ascribe the same meaning to the symbols used in the message [4].
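For illustration only (this is not the paper's Prolog/Chimera implementation), an ACL-style message with the mandatory performative and the common optional parameters can be modelled as a simple structure:

from dataclasses import dataclass, field

@dataclass
class AclMessage:
    # a FIPA-ACL-style message: the performative is mandatory, other parameters are optional
    performative: str
    sender: str = ""
    receiver: str = ""
    content: str = ""
    language: str = "fipa-sl"
    ontology: str = ""
    user_defined: dict = field(default_factory=dict)

msg = AclMessage(performative="disconfirm",
                 sender="i", receiver="j",
                 content="((mammal shark))")
print(msg.performative, msg.content)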
3 Development of the Collaborative Framework
The development of our framework is based on a four-phased cycle shown in Figure 1. The development process includes domain selection, domain analysis, tasks and messages identification, and application.
Fig. 1. The Four-Phased Development Cycle
3.1 Domain Selection: The EPMP Domain
Our framework involves the relation between an agent and its human counterpart. Consequently, we chose the EPMP as a platform for our framework, since it contains both humans and agents. The EPMP has three agents representing the Examination Committee (C), the Moderator (M) and the Lecturer (L). The goal of this collaborative process is to complete the examination paper preparation and moderation.
3.2 Domain Analysis
The process starts when the Examination Committee sends out an instruction to lecturers to start preparing examination papers. A lecturer then prepares the examination paper, together with the solutions and the marking scheme (Set A), and submits the set to be checked by an appointed moderator. The Moderator checks the set and returns it with a moderation report (Set B) to the lecturer. If there are no corrections, the lecturer submits the set to the Examination Committee for further action. Otherwise, the lecturer needs to correct the paper and resubmit the corrected paper to the moderator for inspection. If the corrections have been made, the moderator returns the paper to the lecturer, and finally the lecturer submits the paper to the Committee for further processing. The lecturer and moderator are given deadlines to complete the process. Figure 2 shows the process flow for the EPMP.
Fig. 2. The EPMP Process Flow
The problem with this process is the absence of a system to track the submission of examination papers from lecturers to moderators, from moderators to lecturers, and from lecturers to the committee. Due to this problem, lecturers often breach the submission deadlines throughout the moderation process. Consequently, a moderator cannot conduct a qualitative assessment of the examination paper due to insufficient time. An agent-based system that reminds and alerts the human actors would alleviate the problem of deadline breaches.
3.3 Tasks and Message Identification
We represent the tasks and message exchanges for each agent as T#X and E#X respectively, where # is the task or message exchange number and X refers to the agents C, M, or L. A message from an agent is represented by μ#SR, where # is the message number, S is the sender of the message μ, and R is the receiver; S and R refer to the agents C, M, or L. For system tasks, CN#X refers to the task an agent performs to enable connection to a port and DCN#X indicates a disconnection task. From the EPMP process flow in Figure 2, we identify the tasks and message exchanges that each agent requires to complete the paper preparation and moderation process. For each agent, the following tasks and message exchanges are initiated:
CN1X : Agent X opens a port and enables connection to initiate message exchange.
E1X : Agent X sends a message μ1XL to L.
E2X : Agent X sends an ACKNOWLEDGE message, e.g. μ1LX, in response to a received message from L.
T1X : X displays the message, e.g. μ1LX, on the screen to alert its human counterpart.
T2X : X opens and displays a new Word document on the screen.
T3X : X sends a remind message to Agent L.
DCN1X : Agent X disconnects and closes the port when it completes the tasks.
We develop an environment that includes system parameters, agents' tasks and message exchanges, reminders and timers, enabling agents to closely monitor the actions of their human counterparts. The side effect of this ability is improved autonomy
for agents to make accurate decisions. Due to space limitations, we reserve the description of the agent environment for future publications.
3.4 Application
We then apply the task and message representations to the EPMP domain. Based on the tasks and message exchanges identified in Section 3.3, we create the interaction sequence between the three agents. The following interactions show the tasks and message exchanges between the three agents to complete the moderation process; they do not include the activities for which human actions are required. Due to space limitations, we omit the tasks for paper corrections and re-moderation.
(a) Agent C
CN1C : Agent C opens a port and enables connection when the start date is satisfied. - Agent C starts the process when the start date is true.
E1C : C sends a message μ1CL to L – PREPARE examination paper. - Agent L sends an ACKNOWLEDGE message, μ1LC. - Agent C reads the ACKNOWLEDGE, checks the ontology and understands its meaning. - It then turns on a Timer to time the duration for sending reminder messages. It controls this timer by checking the date and file submission status every day and decides whether to send reminder messages.
DCN1C : Agent C disables the connection and closes the port.
When Agent C decides to send a message autonomously, it will initiate: CN1C T2C
: C connects to Agent L. - Agent C makes this decision by checking its environment. : C sends a reminder message to Agent L. - Agent L receives the message and displays it on screen to alert its human counterpart. : Agent C disconnects and closes the port when it completes the task.
DCN3C (b) Agent L CN1L : Agent L opens port and enables connection when it receives the message from Agent C. - Agent L reads the performative PREPARE, checks the ontology and understands its meaning. E1L : L replies with a message μ1LC, to C – ACKNOWLEDGE. T1L : L displays the message μ1CL, on the screen to alert its human counterpart. T2L : L opens and displays a new Word document on the screen. - Agent L opens a new document to signal its human counterpart to start preparing the examination paper.
T3L DCN1L
: L opens and displays the Word document of the Lecturer form on the screen. - Agent L opens the form which contains the policy to follow. : Agent L disconnects and closes the port.
When the human Lecturer uploads a completed examination paper, E2L
: L sends a message μ2LM, to M – REVIEW examination paper. - Agent M sends an ACKNOWLEDGE message, μ1ML. - Agent L check the states of the environment. - When Agent L receives the message μ1ML, it turns on a Timer to time the duration for M to complete the moderation. It controls this timer by checking the date and file submission status everyday and makes the decision to send reminder messages to M.
When agent L decides to send a reminder message autonomously, it will perform the following actions: CN2L T1L DCN2L
: L connects with agent M. : L sends the remind message. - Agent M receives the message and displays the message to its human counterpart. : Agent L disconnects when it completes its task.
(c) Agent M CN1M : Agent M opens port and enables connection. - Agent M reads the performative REVIEW, checks the ontology, understands its meaning and performs the next action E1M : M replies with a message μ1ML, to L - ACKNOWLEDGE. T1M : M displays the message μ2LM, on the screen to alert its human counterpart. T2M : M opens and displays the Word document of the examination paper on the screen to alert the human moderator to start reviewing the paper. T3M : M opens and displays the Word document of the moderation form on the screen. DCN1M : M disconnects from Agent L when it completes its tasks. If no corrections are required: E2M
: M sends a message μ2ML, to L – PASS moderation.
(d) Agent L CN3L : Agent L opens port and connects with Agent M. - Agent L reads the performative PASS, checks the ontology understands its meaning. E3L : L replies with a message μ3LM, to M – ACKNOWLEDGE. - Agent L turns off the Timer for message reminder because the deadline is satisfied and the documents have been submitted.
336
M. Ahmed, M.S. Ahmad, and M.Z.M. Yusoff
T3L T4L T5L DCN3L
: L displays the message μ2ML, on the screen. : L opens and displays the Word document of the moderated examination paper on the screen. : L opens and displays the Word document of the completed moderation form on the screen. : Agent L disconnects from Agent M and closes the port when it completes its task.
CN4L : Agent L opens port and connects with Agent C E4L : L sends a message μ4LC, to C – COMPLETE. DCN4L : Agent L disconnects from Agent C and closes the port. (e) Agent C CN2C : Agent C opens port and connects with Agent L. - Agent M reads the performative COMPLETE, checks the ontology and understands its meaning. E2C : C replies with a message μ2CL, to L – ACKNOWLEDGE. - Agent C turns off the Timer for message reminder because the deadline is satisfied and the documents have been submitted. T1C : C displays the message μ4LC, on the screen to alert its human counterpart. T2C : C opens and displays the Word document of the moderated examination paper on the screen. T3C : C opens and displays the Word document of the Committee Form. DCN2C : Agent C closes the port and disconnect from Agent M when it completes its task. All these actions are executed if all agents submit the documents before the deadlines. If the deadlines are exceeded (i.e. the Timers expired) and there are no submissions, then Agent L advertises no submission and identifies the offending agent. CN5L E1L DCN5L CN6L E2L DCN6L
: Agent L opens port and connects with Agent M. : Agent L sends advertise message to Agent M and agent M displays the message to its human counterpart. : Agent L disconnects from Agent M and closes the port when it completes the task. : Agent L opens port and connects with Agent C. : Agent L send advertise message to Agent C and agent C displays the message to its human counterpart. : Agent L disconnects from Agent C and closes the port when it finishes the task.
4 Systems Development and Testing To implement the EPMP, we use Win-Prolog and its extended module Chimera, which has the ability to handle multiagent systems [8]. Chimera provides the module
A Collaborative Framework for Multiagent Systems
337
to implement peer-to-peer communication via the use of TCP/IP. Each agent is identified by a port number and an IP address. Agents send and receive messages through such configurations. For the message development, we use the parameters specified by the FIPA ACL Message Structure Specification [3]. We include the performatives, the mandatory parameter, in all our ACL messages. We also define and use our own performatives in the message structure, which are Prepare, Check, Remind, Review, Complete, Pass, Modify, and Acknowledge. To complete the structure, we include the message, content and conversational control parameters as stipulated by the FIPA Specification. In the ontology development, we implicitly encode our ontologies with the actual software implementation of the agent themselves and thus are not formally published to an ontology service [4]. We develop the collaborative process as a multiagent system of EPMP based on the above framework and test the implementation in a laboratory environment on a Local Area Network. Each of the agents C, M and L runs on a PC connected to the network. In the implementation, communication is executed based on the tasks outlined in Section 3.4. We deploy human actors to perform the roles of human Committee, Lecturer and Moderator. These people communicate with their corresponding agents to advance the workflow. An interface for each agent provides the communication between human actors and agents (see Figure 3). We also simulate delays by human ignorance to deadlines and record subsequent responses from agents.
Fig. 3. A Lecturer Agent Interface
The results of the test show that with the features and autonomous actions performed by the agents, the collaboration between human Committee, Lecturer and Moderator improves significantly. The human’s cognitive load has been reduced by ignoring the need to know the deadlines of important tasks and documents’ destinations. This is enhanced by the consistent reminding and alerting services provided by the agents that ensure constant reminders of the deadlines.
338
M. Ahmed, M.S. Ahmad, and M.Z.M. Yusoff
The communication interface and the subsequent communicative acts performed by agents and between agents contribute to the achievement of the shared goal, i.e. the completion of the moderation process. Our framework reduces the problem of deadline breaches and develops a relationship between humans and agents for agents to serve humans. Autonomous services from agents reduce the cognitive strains in humans and improve the efficiency of work processes. In particular, significant time savings are experienced in submitting completed documents to the right destinations when these are handled by the agents.
5 Conclusions and Further Work In this research, we developed and implemented a collaborative framework using the FIPA agent communication language. We demonstrated the usefulness of the system to take over the timing and execution of scheduled tasks from human actors to achieve a shared goal. The important tasks, i.e. preparation and moderation tasks are still performed by the human actors. The agents perform communicative acts to other agents when the tasks are completed. Such acts help reduce the cognitive stress of human actors in performing scheduled tasks and improve the collaboration process. In our future work, we will study and analyze the technical issues involving one-tomany and many-to-many relationships between agents and implement the requirements for such phenomena.
References
1. Ahmed, M., Ahmad, M.S., Mohd Yusoff, M.Z.: A Review and Development of Agent Communication Language. Electronic Journal of Computer Science and Information Technology (eJCSIT) 1(1), 7–12 (2009)
2. Finin, T., Fritzson, R., McKay, D., McEntire, R.: KQML as an Agent Communication Language. In: Proceedings of the Third International Conference on Information and Knowledge Management, CIKM 1994 (1994)
3. FIPA ACL Message Structure Specification: SC00061G (December 2002)
4. FIPA Ontology Service Specification: XC00086D (August 2001)
5. Labrou, Y., Finin, T.: Semantics for an Agent Communication Language. PhD Dissertation, University of Maryland (1996)
6. Perrault, C.R., Cohen, P.R.: Overview of Planning Speech Acts. Dept. of Computer Science, University of Toronto
7. Searle, J.R., Kiefer, F., Bierwisch, M. (eds.): Speech Act Theory and Pragmatics. Springer, Heidelberg (1980)
8. LPA Win-Prolog, http://www.lpa.co.uk/chi.htm
Solving Unbounded Knapsack Problem Based on Quantum Genetic Algorithms Rung-Ching Chen, Yun-Hou Huang, and Ming-Hsien Lin Department of Information Management Chaoyang University of Technology, 168 Gifeng E.Rd., Wufeng, Taichung Country, 413, Taiwan, R.O.C
[email protected]
Abstract. Resource distribution, capital budgeting, investment decision and transportation problems can all be formulated as knapsack models. The knapsack problem is NP-complete, and the Unbounded Knapsack Problem (UKP) is more complex and harder than the general knapsack problem. In this paper, we apply QGAs (Quantum Genetic Algorithms) to solve the Unbounded Knapsack Problem. First, the problem is cast in the QGA framework and the corresponding gene representation and fitness function are worked out. Then the combination of items with the largest benefit under the capacity limit is sought. Finally, the qubits are updated by adjusting the rotation angle and the best solution is found. In addition, we use a greedy strategy to arrange the order of the chromosome, which improves execution. Preliminary experiments show that the approach is effective. The system is also compared with AGAs (Adaptive Genetic Algorithms) to estimate their relative performance. Keywords: Quantum genetic algorithms, Knapsack problems, Optimization problems.
1 Introduction
The general Knapsack problem is a well-known problem in the field of optimal operations research [13]. It concerns the choice of items to be placed in a mountaineer's knapsack: though each item benefits the climber, capacity is limited, and the Knapsack problem consists of finding the best trade-off of benefit against capacity. The unbounded Knapsack problem (UKP) is formally defined as follows: there are different types of items that have different values and weights, the number of copies of each item is unbounded, and there are no further restrictions on the choice. The formulation is shown in (1); the condition on xi differs from the 0-1 Knapsack problem in that xi may be any non-negative integer.

Maximize    f(x) = Σ_{i=1}^{n} xi pi
subject to  Σ_{i=1}^{n} xi wi ≤ C,   0 ≤ xi, xi ∈ integer, i = 1, ..., n          (1)
The unbounded Knapsack problem can thus be restated as follows: we have n kinds of items, x1 through xn, where each item x has a profit value p and a weight w; the maximum weight we can carry in the bag is C, and each item is available in unlimited copies. Formulated this way, the unbounded Knapsack problem is an NP-hard combinatorial optimization problem.
Genetic Algorithms (GAs), proposed by Holland in 1975 [9], have been applied to many different areas. A GA works on genes: the parameters of a problem are encoded into a chromosome, and genetic computation and evolution are used to find the best solution to the problem. The genetic operations include population reproduction, crossover and mutation. In 1989, Goldberg [5] developed the Simple Genetic Algorithm (SGA), which provides an important basis for computer-program analysis of GAs. To improve the efficiency of GAs, several hybrid genetic algorithms have been proposed: for example, combining GAs with fuzzy theory [3], which uses fuzzy decision inference and then GAs to find the best solution; combining GAs with the simulated annealing method (SA) [20], which strengthens the local search of GAs; and combining GAs with neural networks [2], which increases the speed of problem solving. For the unbounded knapsack problem, K.L. Li et al. (2003) [10] used GAs with problem-specific knowledge and a preprocessing procedure, but the result depends on that knowledge; R.C. Chen and C.H. Jian [4] proposed Adaptive Genetic Algorithms (AGAs) for the UKP in 2007, in which the runs and populations are adjusted automatically from the initial runs to achieve optimization. We compare the performance of QGAs and AGAs later.
Quantum mechanical computers were proposed in the early 1980s [1], and their description was formalized in the late 1980s [18]. In the early 1990s, well-known quantum algorithms appeared, such as Shor's quantum factoring algorithm [16] and Grover's search algorithm [6]. Since the late 1990s, research on merging evolutionary computing and quantum computing has started; it can be classified into two fields. One concentrates on generating new quantum algorithms using automatic programming techniques such as genetic programming [17]. The other concentrates on quantum-inspired evolutionary computing for a classical computer, a branch of evolutionary computing characterized by certain principles of quantum mechanics such as standing waves, interference and coherence [8]. In [14] and [15], the concept of interference was included in a crossover operator. Han and Kim proposed quantum genetic algorithms (QGAs) [7][8], which combine GAs with quantum computing, in 2002. The quantum genetic algorithm is a new probabilistic optimization method based on quantum computation theory: it codes the chromosome with quantum bits and performs the search through the action and updating of the quantum states. Han and Kim solved the knapsack problem with a QGA in which solutions are represented by quantum bits (qubits) and the three operators of the classical genetic algorithm are replaced by a rotation gate to achieve evolution. The results show that QGAs are superior to traditional GAs in population diversity and parallel computation, increase the speed of the algorithm and reduce premature convergence [21][19]. There are also hybrid QGAs combined with fuzzy systems and neural networks [11][12].
In this paper we solve the unbounded knapsack problem using QGAs. The remainder of the paper is organized as follows: Section 2 gives a system overview and introduces the quantum genetic algorithm; Section 3 presents the quantum genetic algorithm with the greedy method; experimental results are given in Section 4 and conclusions in Section 5.
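As a small illustration of the UKP defined above (not part of the QGA method presented in this paper), the profit of a candidate item-count vector can be checked against the capacity, and small instances can even be solved exactly by dynamic programming; the profits, weights and capacity below are made-up example values:

def ukp_profit(counts, profits, weights, capacity):
    # profit of an unbounded-knapsack solution, or None if it violates the capacity
    weight = sum(c * w for c, w in zip(counts, weights))
    return None if weight > capacity else sum(c * p for c, p in zip(counts, profits))

def ukp_dp(profits, weights, capacity):
    # exact optimum of a small UKP instance by dynamic programming over the capacity
    best = [0] * (capacity + 1)
    for c in range(1, capacity + 1):
        for p, w in zip(profits, weights):
            if w <= c:
                best[c] = max(best[c], best[c - w] + p)
    return best[capacity]

profits, weights = [60, 100, 120], [10, 20, 30]
print(ukp_profit([2, 1, 0], profits, weights, 50))   # 220
print(ukp_dp(profits, weights, 50))                  # 300 (five copies of the first item)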
In this paper we solve the unbounded knapsack problem using QGAs. The remainder of the paper is organized as follows: Section 2 gives a system overview and introduces the quantum genetic algorithm. In Section 3, we propose a quantum genetic algorithm with a greedy method. Experimental results are given in Section 4, and conclusions are presented in Section 5.
2 The System Overview
The problem is set up as follows. The weight constraint of the knapsack is C kg. There are n kinds of items, indexed by i (i = 1, ..., n); item i has weight Wi and profit Pi, so the weights are W1, W2, ..., Wn and the profits are P1, P2, ..., Pn. As an example, let n = 20 and let the total capacity of the knapsack be C (Table 1). We use a quantum genetic algorithm to find the optimal solution given the profit and weight of each item. Figure 1 shows the workflow of the proposed method.
Table 1. The unbounded knapsack problem with twenty items (n=20)
Item#   1     2     3    4    5    6    7    8    9    10
Weight  22    50    20   10   15   16   17   21   8    25
Value   1078  2350  920  440  645  656  816  903  400  1125

Item#   11    12    13    14    15   16    17   18   19   20
Weight  24    19    23    28    18   26    9    12   14   27
Value   1080  760   1035  1064  720  1326  369  456  518  972
2.1 Representation
The smallest unit of information stored in a two-state quantum computer is called a quantum bit or qubit. A qubit may be in a superposition of the "1" and "0" states, which can be represented as
ψ = α|0> + β|1>,  (2)
where α and β are complex numbers that satisfy
|α|² + |β|² = 1.  (3)
The QGA encoding is defined as follows.
Definition 1: A qubit is defined as a pair [α, β]^T with |α|² + |β|² = 1, where |α|² gives the probability that the qubit will be found in the "0" state and |β|² gives the probability that the qubit will be found in the "1" state.
Definition 2: A qubit individual, a string of m qubits, is defined as
qj^t = [ αj1^t  αj2^t  ...  αjm^t ; βj1^t  βj2^t  ...  βjm^t ],  j = 1, 2, ..., N,
where m is the length of the quantum chromosome and N is the number of quantum chromosomes in the group.
Definition 3: The group Q, a collection of N chromosomes, is defined as
Q(t) = {qj^t} = {q1^t, q2^t, ..., qN^t}, where N is the size of the group. Representing chromosomes with quantum bits allows a single chromosome to carry information about multiple states: a system of m qubits can represent 2^m states at the same time. This is advantageous for maintaining population diversity and for overcoming premature convergence.
2.2 QGA
This paper applies quantum genetic algorithms to a practical case study. We use the algorithm developed according to quantum computing principles and proposed by Han and Kim in 2002. Figure 1 shows the workflow of the program; the process is as follows.
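To make the representation of Definitions 1-3 concrete, the following Python sketch (ours, not the authors' implementation; the nested-list layout and the uniform α = β = 1/√2 initialization are assumptions, the latter being the usual QGA convention) builds a group Q(t) of quantum chromosomes and "observes" each one into a classical binary string of P(t).

import math
import random

def init_quantum_population(num_chromosomes, num_qubits):
    """Each qubit starts in the equal superposition alpha = beta = 1/sqrt(2),
    so every binary string is initially equally likely."""
    a = 1.0 / math.sqrt(2.0)
    return [[[a, a] for _ in range(num_qubits)] for _ in range(num_chromosomes)]

def observe(chromosome):
    """Collapse a quantum chromosome [(alpha_1, beta_1), ...] into a binary
    string: bit i is 1 with probability |beta_i|^2."""
    return [1 if random.random() < beta * beta else 0 for alpha, beta in chromosome]

if __name__ == "__main__":
    Q = init_quantum_population(num_chromosomes=3, num_qubits=8)
    P = [observe(q) for q in Q]          # classical population P(t)
    for bits in P:
        print(bits)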
Fig. 1. The workflow of the quantum genetic algorithms
2.3 Arrangement of the Chromosome Sequence
In this paper, we use a greedy strategy to rearrange the sequence of the chromosome, exploiting the observation that bits toward the right side of a chromosome have a higher frequency of being exchanged. In the greedy method, an item with a larger
ratio of profit to weight is a better candidate. We evaluate the input profits and weights, compute the ratios, and place items with larger ratios toward the right side, so that these items have a higher probability of taking part in the crossover mechanism. In the spirit of the greedy method, which always chooses the best candidate at each stage, this increases the exchange rate of the best items in the gene-coding procedure and enhances the efficiency of the genetic algorithm.
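A minimal sketch of this rearrangement step, assuming the profits and weights of Table 1 (the function name and the ascending-ratio ordering toward the right end are our reading of the description above):

def rearrange_by_ratio(weights, values):
    """Return item indices ordered by increasing profit/weight ratio, so the
    most promising items occupy the right-most gene positions."""
    return sorted(range(len(weights)), key=lambda i: values[i] / weights[i])

if __name__ == "__main__":
    weights = [22, 50, 20, 10, 15, 16, 17, 21, 8, 25,
               24, 19, 23, 28, 18, 26, 9, 12, 14, 27]
    values = [1078, 2350, 920, 440, 645, 656, 816, 903, 400, 1125,
              1080, 760, 1035, 1064, 720, 1326, 369, 456, 518, 972]
    order = rearrange_by_ratio(weights, values)
    # item 16 (value 1326, weight 26, ratio 51) ends up at the right end
    print([i + 1 for i in order])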
3 Quantum Genetic Algorithm Using Greedy Methods
In this section we give a step-by-step description of our quantum genetic algorithm. The system is based on the steps of the QGA proposed by Han and Kim. Processing a QGA requires several steps, which we group into four main parts: gene coding, revision of unsuitable solutions, fitness calculation, and the update of the probabilities in Q(t). They are explained as follows.
3.1 Gene Coding
The system represents the unbounded knapsack problem genetically. We begin with the weight of each item, consider the capacity constraint, and restrict the quantity of each item so that the total weight does not exceed the maximum of C kg.
(a) Individual definition: Assume the weight of the first item is W1 kg; the maximum number of copies is l1, the integer lower bound of C/W1, so the number of options for this item ranges from 0 to l1, giving l1 + 1 choices. Similarly, the numbers of options for the remaining items are (l2 + 1), (l3 + 1), ..., (lk + 1). The size of the search space is
S = ∏_{i=1}^{k} (l_i + 1),  (4)
where S_i = {x | x ∈ Integer, 0 ≤ x ≤ l_i} is the set of selection options for item i, x is the number of copies selected, and S is the total number of options in the search space. Assuming the maximum capacity is 100 kg, there are a total of 4.70763×10^15 options for the twenty items of Table 1.
(b) Transfer the definition to binary codes: For each qubit, draw a number uniformly from the range [0, 1] and compare it with |β|², the probability that the qubit is "1". If the drawn number is smaller than |β|², the qubit is more likely to be "1", so we set binary(x_j^t) to 1; otherwise we set it to 0. This is repeated until all items have been processed, and the result is a binary string.
3.2 Unsuitable Solutions Revising
Repairing P(t) means revising unsuitable solutions so that the knapsack conforms to the allowed total weight. If the knapsack exceeds the weight limit, we must remove some
items, one by one in order, until the weight constraint is satisfied. If the knapsack is under the weight limit, we add items, one by one in order, while keeping the total weight no larger than the allowed limit. The repaired solution satisfies
∑_{i=1}^{m} w_i · x_i ≤ C.
3.3 Fitness Function Calculation
After the P(t) repair, we calculate the fitness of each individual, which is the total profit of the items placed in the knapsack. There are n kinds of items; item i has profit Pi and weight Wi. The fitness function of an individual chromosome is given in Equation (5) and the constraints in Equation (6); substituting a chromosome into the formula yields the profit of the corresponding combination:
f = ∑_{i=1}^{n} P_i × X_i,   s.t. ∑_{i=1}^{n} W_i × X_i ≤ C,  (5)
0 ≤ X_i,  X_i ∈ Integer,  i = 1, ..., n.  (6)
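The following sketch (our illustration, not the authors' code) ties together the decoding of Section 3.1, the repair of Section 3.2 and the fitness of Equation (5): item counts are read from the binary chromosome, copies are dropped until the weight constraint holds, and the fitness is the total profit. The bit-allocation scheme and the index-order removal are assumptions.

def decode_counts(bits, weights, capacity):
    """Split the binary chromosome into one sub-string per item; the sub-string
    length for item i is the number of bits needed to encode floor(C / w_i)."""
    counts, pos = [], 0
    for w in weights:
        nbits = max(1, (capacity // w).bit_length())
        chunk = bits[pos:pos + nbits]
        counts.append(int("".join(map(str, chunk)) or "0", 2))
        pos += nbits
    return counts

def repair(counts, weights, capacity):
    """Drop copies of items (here simply in index order) while overweight."""
    counts = list(counts)
    total = sum(c * w for c, w in zip(counts, weights))
    i = 0
    while total > capacity and i < len(counts):
        while counts[i] > 0 and total > capacity:
            counts[i] -= 1
            total -= weights[i]
        i += 1
    return counts

def fitness(counts, values):
    """Equation (5): total profit of the chosen items."""
    return sum(c * v for c, v in zip(counts, values))

if __name__ == "__main__":
    weights = [10, 8, 26]          # items 4, 9 and 16 of Table 1
    values = [440, 400, 1326]
    capacity = 200
    # 5 bits for item 4 (up to 20 copies), 5 bits for item 9, 3 bits for item 16.
    bits = [0, 0, 0, 0, 1,   0, 0, 0, 0, 1,   1, 1, 1]
    counts = repair(decode_counts(bits, weights, capacity), weights, capacity)
    print(counts, fitness(counts, values))   # [1, 1, 7] 10122, as in Section 4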
3.4 The Probability of Q(t) Update
The rotation angle Δθ_i is set according to a lookup table (Table 2), where f(x) is the fitness value of the new-generation chromosome and f(b) is the best fitness value of the old-generation chromosome. The main cases are explained below.
Table 2. The setting of the rotation angle
In Table 2, x_i is the i-th bit of P(t), b_i is the i-th bit of B(t), f(x) is the profit of P(t), and f(b) is the profit of B(t). The cases are classified as follows.
a. If x_i = 0, b_i = 1, and f(x) < f(b): if the Q-bit is located in the first or the third quadrant, Δθ_i is set to a positive value to increase the probability of the state "1"; otherwise, a negative value is used to increase the probability of the state "1".
b. If x_i = 1, b_i = 0, and f(x) < f(b): if the Q-bit is located in the first or the third quadrant, Δθ_i is set to a negative value to increase the probability of the state "0"; otherwise, a positive value is used to increase the probability of the state "0".
In all remaining situations the rotation angle is set to zero, i.e., [θ_i | i = 1, 2, 4, 6, 7, 8] = 0. In general, θ_3 and θ_5 are set between 0.01π and 0.09π.
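A sketch of the Q(t) update using the rotation gate (our simplification of cases (a) and (b) above; the full lookup table of Table 2 is not reproduced in the text, so the remaining cases are simply left at Δθ = 0):

import math

def rotate(alpha, beta, delta_theta):
    """Apply the rotation gate [[cos dt, -sin dt], [sin dt, cos dt]] to (alpha, beta)."""
    c, s = math.cos(delta_theta), math.sin(delta_theta)
    return c * alpha - s * beta, s * alpha + c * beta

def update_qubit(alpha, beta, x_i, b_i, fx, fb, theta=0.03 * math.pi):
    """Simplified version of the Table 2 lookup covering cases (a) and (b):
    rotate toward the bit of the best chromosome B(t) when P(t) is worse."""
    if fx < fb and x_i != b_i:
        sign = 1.0 if b_i == 1 else -1.0     # push probability toward state b_i
        if alpha * beta < 0:                 # second/fourth quadrant: reverse sign
            sign = -sign
        return rotate(alpha, beta, sign * theta)
    return alpha, beta                        # all other cases: delta theta = 0

if __name__ == "__main__":
    a = b = 1.0 / math.sqrt(2.0)
    a, b = update_qubit(a, b, x_i=0, b_i=1, fx=9000, fb=10122)
    print(round(b * b, 3))   # probability of "1" has grown above 0.5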
4 Experimental Results and Analysis
The system was implemented in C++ on a PC with an Intel Pentium(R) 2.2 GHz processor. It allows the profits, weights, and maximum capacity to be altered depending on the problem, and the parameters of the QGA, including the group size, the number of generations, and the rotation angle, can also be controlled. We use the twenty items of Table 1 in Section 2, with the weight limit set to 200 kg, 200 generations, a group size of 50, and a rotation angle of 0.03π. The QGA finds the best solution at generation 65: one copy of the 4th item, one copy of the 9th item, and seven copies of the 16th item, for a total value of 10122, which is optimal. We verified our method against the enumeration method whenever the search space contained fewer than 10^9 options.
This experiment solves the unbounded knapsack problem with quantum genetic algorithms. In the first part of the experiment, we add the greedy method to the QGA to enhance performance. In the greedy method, an item with a larger ratio of profit to weight is a better candidate; we evaluate the input profits and weights, compute the ratios, and place items with larger ratios toward the right side, so that these items have a higher probability of taking part in the selection mechanism. Following the greedy principle of always choosing the best candidate before each run, this biases the search toward the best items. If we use the greedy method alone, the best solution found is only 10098 rather than 10122. We therefore combine the greedy method with the QGA; the results are shown in Figure 2, where we vary only the rotation angle and whether the greedy method is used. In Figure 2(a), the QGA with the greedy method obtains better profits than the QGA without it. As shown in Figure 2(b), the QGA with the greedy method needs fewer generations to reach the best solutions, because the greedy method makes the search converge earlier. Moreover, Figure 2(c) shows that the QGA with the greedy method finds the best solution more often. The experiment therefore confirms that adding the greedy method to the QGA raises the algorithm's performance.
Fig. 2(a). Rotation angles vs. profits (average, maximum, and minimum values; G: with greedy method)
Fig. 2(b). Rotation angles vs. generations of the best solutions
Fig. 2(c). Rotation angles vs. counts of the best solutions
Fig. 2. The comparison of QGAs and QGAs with the greedy method
In the second, expanded situation, we increase the number of items from 20 to 100. With the weight constraint set to 100 kg and 100 items, the search space contains about 10^46 options. We compare the QGA with chromosome rearrangement by the greedy method against the QGA without rearrangement. Chromosome rearrangement saves a great deal of time: with rearrangement the search takes only 15 seconds, whereas without it the search takes 167 seconds, roughly ten times longer.
In the third expanded situation, we compare the QGA with the Adaptive Genetic Algorithm (AGA) [4] proposed by Chen and Jian (2007) on 100 items. We use the same 100 items as Chen and Jian; both the weight limit and the number of generations are set to 100. For the QGA, the group size is 10 and the rotation angle is 0.05; for the AGA, the population is 100, the mutation rate is 30%, and the crossover rate is 70%, as in Chen and Jian (2007). The QGA needs only 15 seconds and finds the best solution at generation 69: three copies of the 89th item, with a total value of 8940, which is optimal. This is faster
than the 41 seconds required by the AGA. Although the group size of the QGA is smaller, it maintains better population diversity. Finally, we use the first 10 items of Table 1 for a further experiment and analysis. We set the number of generations to 60 and the weight limit to 100. For the QGA, the group size is 20 and the rotation angle is 0.05; for the AGA, the mutation rate is 30% and the crossover rate is 70%. The results are shown in Figures 3(a) and 3(b). The QGA reaches the optimal solution more often than the AGA, and its convergence tends toward the global optimum. Figure 3(b) shows that the average profit of the QGA is higher than that of the AGA. The results therefore indicate that the search ability of the QGA is superior to that of the AGA.
Fig. 3(a). The comparison of QGAs and AGAs under 10 items (optimal curve: profits over 60 generations)
Fig. 3(b). The comparison of average profits (QGAs: 4891.40; AGAs: 4871.25)
5 Conclusions and Future Works
In this paper, we solve the unbounded knapsack problem using QGAs, which represent chromosomes by quantum bits and replace the three operators of traditional genetic algorithms with a rotation gate to achieve the evolution goal. The unbounded knapsack problem is more complex than the general knapsack problem. Using QGAs, we overcome the slow convergence of traditional GAs. We also incorporate the greedy method into the QGA to enhance performance. Our approach is able to find the optimal
solution using the greedy strategy in a wide search space. The experimental results show that the method is capable of finding the optimal solution of a problem. We also expanded the search space to about 10^46 options to show that chromosome rearrangement yields very good efficiency. The experiments confirm the population diversity and parallel computation of QGAs, which increase the speed of the algorithm and reduce premature convergence, thereby enhancing efficiency. In the future, we will try to find the best relationship between the generation size and the rotation angle in order to further improve the efficiency of our system.
References 1. Benioff, P.: The computer as a physical system: A microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines. J. Statist. Phys. 22, 563–591 (1980) 2. Chaiyaratana, N., Zalzala, A.M.S.: Hybridization of Neural Networks and Genetic Algorithms for Time-Optimal Control. Evolutionary Computation 1, 389–396 (1999) 3. Chakrabort, B.: Genetic Algorithm with Fuzzy Fitness Function for Feature Selection. In: Proc. of the IEEE Int. Sym. on Industrial Electronics, pp. 315–319 (2002) 4. Chen, R.C., Jian, C.H.: Solving Unbounded Knapsack Problem Using an Adaptive Genetic Algorithm with Elitism strategy. In: Thulasiraman, P., et al. (eds.) ISPA 2007. LNCS, vol. 4743, pp. 193–202. Springer, Heidelberg (2007) 5. Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading (1989) 6. Grover, L.K.: A fast quantum mechanical algorithm for database search. In: Proc. 28th ACM Symp. Theory of Computing, pp. 212–219 (1996) 7. Han, K.H., Kim, J.H.: Genetic quantum algorithm and its application to combinatorial optimization problems. In: IEEE Conference on Evolutionary Computation, pp. 1354–1360 (2000) 8. Han, K.H., Kim, J.H.: Quantum-Inspired Evolutionary Algorithm for a Class of Combinatorial Optimization. IEEE Transactions on Evolutionary Computation 6(6), 580–593 (2002) 9. Holland, J.H.: Adaptation in Natural and Artificial System. The University of Michigan Press, Ann Arbor (1975) 10. Li, K.L., Dai, G.M., Li, Q.H.: A Genetic algorithm for the Unbounded Knapsack Problem. In: IEEE Conference on Machine Learning and Cybernetics, vol. 3, pp. 1586–1590 (2003) 11. Li, P.C., Li, S.Y.: Optimal Design of Normalized Fuzzy Neural Network Controller Based on Quantum Genetic Algorithm. Journal of System Simulation 19(16), 3710–3730 (2007) 12. Li, S.Y., Li, P.C., Yuan, L.Y.: Quantum genetic algorithm with application in fuzzy controller parameter optimization. System Engineering and Electronics 29(7), 1134–1138 (2007) 13. Martello, S., Toth, P.: Knapsack Problems: Algorithms and Computer Implementations. John Wiley &Sons Ltd., Chichester (1990) 14. Narayanan, A., Moore, M.: Quantum-inspired genetic algorithms. In: IEEE Int. Conf. Evolutionary Computation, pp. 61–66. IEEE Press, Piscataway (1996) 15. Narayanan, A.: Quantum computing for beginners. In: Proc. 1999 Congress on Evolutionary Computation, vol. 3, pp. 2231–2238. IEEE Press, Piscataway (1999) 16. Shor, P.W.: Quantum computing. Doc. Mathematica, Vol. Extra Volume ICM, 467–486 (1998)
17. Spector, L., Barnum, H., Bernstein, H.J., Swamy, N.: Finding a better-than-classical quantum AND/OR algorithm using genetic programming. In: Proc. Congress on Evolutionary Computation, vol. 3, pp. 2239–2246. IEEE Press, Piscataway (1999) 18. Teng, H., Yang, B., Zhao, B.: A New Mutative Scale Chaos Optimization Quantum Genetic Algorithm on Chinese Control and Decision Conference (CCDC), pp. 1547–1551 (2008) 19. Vlachogiannis, J.G., Ostergaard, J.: Reactive power and voltage control based on general quantum genetic algorithms. Expert Systems with Applications 36(3), 6118–6126 (2009) 20. Wang, L., Zheng, D.Z.: A Modified Genetic Algorithm for Job Shop Scheduling. International Journal of Advanced Manufacturing Technology 20(6), 72–76 (2002) 21. Zhu, X.R., Zhang, X.H.: A quantum genetic algorithm with repair function and its application Knapsack question. The Computer Applications 27(5), 1187–1190 (2007)
Detecting Design Pattern Using Subgraph Discovery Ming Qiu, Qingshan Jiang, An Gao, Ergan Chen, Di Qiu, and Shang Chai Software School of Xiamen University Xiamen P.R. China, 361005
[email protected]
Abstract. Design patterns have been widely adopted by the software industry to reuse best practices and improve the quality of software systems. In order to enable software engineers to understand and reengineer software programs, quite a few approaches have been developed to identify design patterns from source code. However, the existing approaches generally identify patterns sequentially; as a result, their computation time is linearly dependent on the number of design patterns to be detected. In this paper, a new approach based on subgraph discovery is proposed to recognize a set of design patterns at a time. The computation time of the novel algorithm is sublinearly dependent on the number of patterns. A state space graph is introduced to avoid search space explosion and to reduce the number of subgraph isomorphism tests. We ran detailed experiments on three open source systems to evaluate our approach. The results of our experiments confirm the efficiency and effectiveness of the proposed approach.
1 Introduction
Software maintenance is an extremely time-consuming activity. According to [1], from 40% to 60% of the software maintenance effort is devoted to understanding the system's design and architecture. Nowadays most software systems are developed rapidly because of ever-changing market and user requirements, documentation is often obsolete, and therefore source code is the only reliable source of understanding. Reverse engineering tools today are able to extract various kinds of information, such as class structure, inter-class relationships and call graphs, from source code. However, the architecture of most systems is complicated, and it is still hard for software engineers to recover the design intention from the complicated class structure and inter-class relationships. Design patterns are concepts that improve the understanding of object-oriented designs [2]. They are micro-structures on a higher abstraction level than class structure and inter-class relationships. Consequently, identifying the design patterns implemented in a system provides engineers with considerable insight into the software and helps them comprehend the rationale behind the design [3]. Many approaches have been proposed to recognize design pattern instances from source code [4,5,6,7]. The first work to detect design patterns was done by
Krämer and Prechelt [4] in 1996. They represented both designs and patterns in PROLOG and used a PROLOG engine to search for the patterns. Ballis et al. [5] described design patterns with a formal language and developed a rule-based matching method that is able to find all instances of a pattern in class diagrams. Besides the approaches based on formal languages, there are approaches based on graphs. Tsantalis et al. [6] described both the system and the design patterns as graphs and employed a set of matrices to represent the inter-class relationships among classes; a graph similarity algorithm was applied to calculate the similarity score between the matrices of the system and those of the design patterns. Dong et al. [7] adopted a template matching method to detect design patterns based on the same intermediate representation as Tsantalis, but measured the similarity between two matrices by calculating their normalized cross correlation. Compared with the approaches based on formal languages, the approaches based on graphs do not rely on any pattern-specific heuristic, which facilitates the extension to novel pattern structures [6]. Existing graph-based approaches aim at detecting one pattern at a time. However, there may be dozens of design patterns used in a software system. Consequently, these approaches have to compare each system-pattern pair, resulting in a computation time that is linearly dependent on the number of patterns to be detected. In this paper, we propose a new approach that is more efficient for detecting a set of design patterns in a software system. We first propose a novel intermediate representation based on labeled graphs for the system and the design patterns; recognizing instances of design patterns is then conducted by discovering subgraphs in a labeled graph. Next, a subgraph discovery algorithm is proposed to find instances of multiple model graphs. A state space graph of the matching process is used to prevent the explosion of the search space. To investigate our approach, CodeMiner, a plug-in for the Eclipse IDE, was developed, and a series of experiments based on JUnit, JRefactory and JHotDraw was performed to evaluate the approach. The rest of the paper is organized as follows. Section 2 describes the intermediate representation for systems and design patterns. We introduce the subgraph discovery algorithm in Section 3. The results of experiments on three open source systems are reported in Section 4. Section 5 concludes our study.
2 Graph Representation
2.1 Abstract Syntax Graph
Prior to design pattern detection, it is necessary to transform both the system under study and the design patterns to be detected into an intermediate representation. Such a representation should contain all information that is vital to the identification of patterns. We adopt the Abstract Syntax Graph (ASG), a labeled graph, as the intermediate representation. The vertices of an ASG are classes and methods, and its edges represent relationships between classes and methods. The labels of vertices and edges in an ASG are listed in Table 1.
Table 1. Labels of vertices and edges in ASG
Label  Meaning
AC     An abstract class or interface.
C      A non-abstract class.
M      A method.
asc    The one-to-one relationship between two classes.
agg    The one-to-many relationship between two classes.
gen    The source class inherits a target class.
has    The target method is an operation of the source class.
ret    The source method returns an instance of the target class.
pam    The source method has an input parameter of the target class.
dep    The source method calls the target method.
ord    The source method overrides the target method.
new    An object of the target class is instantiated by the source method.
Let LV = {AC, C, M} and LE = {asc, agg, gen, has, ret, pam, dep, ord, new} denote the sets of vertex and edge labels, respectively. Then we have the following definitions.
Definition 1 (Abstract Syntax Graph). An abstract syntax graph G is a 4-tuple G = (V, E, μ, ν), where
• V is the set of vertices,
• E ⊆ V × V is the set of edges,
• μ : V → LV is a function assigning labels to the vertices,
• ν : E → LE is a function assigning labels to the edges.
In this definition, each vertex or edge is not required to have a unique label, and the same label can be assigned to many vertices or edges.
Definition 2 (Subgraph of an Abstract Syntax Graph). Let G = (V, E, μ, ν) and Gs = (Vs, Es, μs, νs) be abstract syntax graphs. Gs is a subgraph of G if
• Vs ⊆ V,
• Es ⊆ E ∩ (Vs × Vs),
• μs(v) = μ(v) for all v ∈ Vs,
• νs(e) = ν(e) for all e ∈ Es.
Compared to the subgraph definition given in [8], the condition Es = E ∩ (Vs × Vs ) is relaxed to Es ⊆ E ∩ (Vs × Vs ), which is more suitable for design pattern detection. Applying the above definitions, a system can be described by an ASG. Fig. 1 is a simple example. The ASG is able to depict not only the structural information of source code, such as class inheritance, class association and class aggregation, but also the behavioral information of source code, such as object creation, method delegations and method overriding.
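As a concrete reading of Definitions 1 and 2, the following sketch (ours; the dictionary encoding and the vertex names are assumptions) stores an ASG as labeled vertices and edges and checks the subgraph condition for a fixed vertex mapping.

# An ASG is modelled as two dictionaries: vertex -> label and (src, dst) -> label.
def is_subgraph(sub_v, sub_e, g_v, g_e, mapping):
    """Check Definition 2 for a fixed vertex mapping sub-vertex -> G-vertex:
    labels of mapped vertices must agree, and every sub edge must exist in G
    with the same label (E_s may be only a subset of the induced edges)."""
    for v, label in sub_v.items():
        if mapping.get(v) not in g_v or g_v[mapping[v]] != label:
            return False
    for (a, b), label in sub_e.items():
        if g_e.get((mapping[a], mapping[b])) != label:
            return False
    return True

if __name__ == "__main__":
    # Input graph: an abstract class owning a method that returns a Product.
    g_v = {"Creator": "AC", "factoryMethod": "M", "Product": "AC"}
    g_e = {("Creator", "factoryMethod"): "has", ("factoryMethod", "Product"): "ret"}
    # Pattern fragment: an abstract class with a method.
    s_v = {"c": "AC", "m": "M"}
    s_e = {("c", "m"): "has"}
    print(is_subgraph(s_v, s_e, g_v, g_e, {"c": "Creator", "m": "factoryMethod"}))  # True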
Fig. 1. Describing source code with an ASG
(a) Factory Method
(b) Template Method
Fig. 2. Pattern signatures of Factory Method and Template Method
2.2 Design Pattern Signature
Design patterns are described by listing their intents, motivations, applicability, structure, participants, collaborations, consequences, implementation details and sample code [2]. All of these are written informally. Hence we describe each design pattern with a design pattern signature, which is also an ASG. Figure 2 shows the design pattern signatures of the Factory Method and Template Method patterns. As can be observed from Fig. 2, a design pattern signature contains only the vital information needed to distinguish the pattern from the others. For example, the AbstractClass class has two primitiveOperation methods in Fig. 2(b), but only one primitiveOperation Method vertex is associated with AbstractClass in the signature, because a single primitiveOperation is enough to recognize the pattern.
3 Subgraph Discovery
In our approach, both the software system under study and the design patterns to be detected are transformed into ASGs. To be more general, in this section we call the ASG transformed from the software system the input graph and the design pattern signatures model graphs. The problem of identifying design patterns in a software system is then transformed into a subgraph discovery problem.
Definition 3 (Exact Subgraph Discovery from an Input Graph). Given a set of model graphs S1, ..., Sn and an input graph G, find all subgraph isomorphisms from any of the model graphs to the input graph G.
In Definition 3, subgraph isomorphism detection is proven to be an NP-complete problem [9]. Most subgraph isomorphism algorithms, such as Ullmann [10], the VF algorithm [11] and Nauty [12], match the input graph sequentially against each model graph, so their computation times are linearly dependent on the number of model graphs. To overcome this inefficiency, we propose an algorithm that can detect a set of subgraph isomorphisms at a time.
3.1 State Space Graph
In our algorithm, the process of subgraph discovery is guided by a state space graph. We use the state space graph to determine whether a subgraph is a candidate for one of the model graphs.
(a) The graphs associated with the states in the state space graph
(b) State Space Graph
Fig. 3. The state space graph to detect the FactoryMethod and TemplateMethod patterns
Fig. 3(a) is a state space graph that is used to identify the Factory Method and Template Method patterns. Each state s in Fig. 3(a) is associated with a small graph shown in Fig. 3(b). A transition from a state s to a successor s′ means adding an edge to the small graph associated with s. For example, there are two outgoing transitions from state 4 in Fig. 3(a): the transition from state 4 to state 5 adds an edge labeled dep to a vertex labeled Method in state 4, while the transition from state 4 to state 6 adds an edge labeled ret to the same vertex. Among the states in Fig. 3, state 1 is an initial state, which is associated with a single-edge graph. States 7 and 9 are final states, which correspond to the model graphs to be recognized; in Fig. 3, they represent the Template Method pattern and the Factory Method pattern, respectively.
3.2 Subgraph Discovery Algorithm
With a state space graph, the subgraph discovery problem is solved by a divide-and-conquer strategy. That is, we first locate the single-edge subgraphs, which are associated with the initial states, in the input graph. Then, guided by the state space graph, we locate the two-edge subgraphs that are associated with the successor states. The discovery process terminates when the subgraphs associated with final states are located or when an edge limit is reached.

Input: An input graph G, maximum edge threshold edgeThreshold, the state space graph of model graphs ssGraph
Output: Identified subgraphs resultList
SubgraphDiscovery( G, ssGraph, edgeThreshold )
1: candidateList ← ∅, resultList ← ∅, processedEdge = 0;
2: searchList ← ExtendBySSG( G, ssGraph, ∅ );
3: while processedEdge ≤ edgeThreshold ∧ searchList is not empty do
4:   for each subgraph g in searchList
5:     if the state of g is a final state of ssGraph then
6:       Add g to resultList;
7:     else
8:       exGraphs = ExtendBySSG( G, ssGraph, g );
9:       for each candidateG in exGraphs do
10:        Merge candidateG into the candidateList;
11:  searchList ← candidateList, candidateList ← ∅, processedEdge++;
12: return resultList;
Fig. 4. Subgraph discovery algorithm
The overall structure of the subgraph discovery algorithm is shown in Fig. 4. It takes as input the input graph G, the state space graph ssGraph and the maximum edge threshold edgeThreshold, and on completion it returns the set of identified subgraphs resultList. searchList, candidateList and resultList are linked lists of substructures which contain the subgraphs to be expanded, the expanded subgraphs, and the recognized subgraphs, respectively.
Input: An input graph G, the state space graph of model graphs ssGraph, the subgraph to be extended g
Output: extended subgraphs exGraphs
ExtendBySSG( G, ssGraph, g )
1: exGraphs ← ∅;
2: if g = ∅ then
3:   initStates ← the initial states of ssGraph;
4:   for each s ∈ initStates do
5:     subgraphsInG ← the subgraphs in G that are isomorphic to the graph associated with state s;
6:     add s and subgraphsInG into exGraphs;
7:   return exGraphs;
8: curState ← the state of g, curStateGraph ← the subgraph of state curState;
9: nextStates ← the successor states of curState in ssGraph;
10: for each s ∈ nextStates do
11:   Get the newly added edge e in the transition from state curState to s;
12:   for each inst in g do
13:     Get the vertex mapping relation between inst and curStateGraph;
14:     v1, v2 ← the vertices of edge e;
15:     if v1, v2 ∈ the vertex set of curStateGraph then
16:       v1′, v2′ ← the corresponding vertices of v1, v2 in inst;
17:       Find an edge e′ in G that connects v1′, v2′ and can be mapped to e;
18:       if found then
19:         add s and the instance (inst + e′) into exGraphs;
20:     else
21:       v′ ← the corresponding vertex, in inst, of whichever of v1 or v2 is in curStateGraph;
22:       if there exists an edge e′ in G that can be mapped to e and connects v′ and a vertex not in curStateGraph then
23:         add s and the instance (inst + e′) into exGraphs;
24: return exGraphs;
Fig. 5. Procedure ExtendBySSG
The substructure consists of a state definition, all its instances throughout the input graph G, and their vertex mapping relations (μs(v) = μ(v) in Definition 2). At step 2, the initial states of the state space graph ssGraph and all their isomorphic instances in graph G are put into searchList. The subgraphs of the initial states contain only one edge, so it is not difficult to find their instances in graph G. During each iteration (the loop starting at step 4), the algorithm scans each substructure in searchList and extends it with ExtendBySSG. Fig. 5 illustrates the general structure of ExtendBySSG. Steps 12 to 23 in Fig. 5 check each instance in substructure g to see whether it can be extended to a new instance that is isomorphic to the successor state s. To avoid the computation of subgraph isomorphism, which is time consuming, the algorithm makes use of the vertex mapping relation and tries to add an edge e′ to the corresponding vertex in an instance. Since the edge e′ in graph G is mapped
to the edge e (νs(e) = ν(e) in Definition 2), the newly formed instance is guaranteed to be isomorphic to the successor state s and is a candidate for further extension. The search terminates when searchList is empty or when the limit on the number of edges that have been extended (edgeThreshold) is reached. Once the search terminates, the algorithm returns the list of identified substructures, resultList.
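The following simplified sketch (ours, not CodeMiner's code) illustrates the idea of the state-space-guided search: a state is the set of pattern edges matched so far, a transition prescribes the single edge to add next, and a partial match is only ever extended by that one edge, so no full subgraph isomorphism test is repeated. Vertex-injectivity checks and multi-pattern sharing of states are omitted for brevity.

def discover(graph_edges, initial_edge, transitions, finals, edge_limit=10):
    """graph_edges: dict (src, dst) -> label for the input graph.
    initial_edge: the pattern edge (p_src, p_dst, label) of the initial state.
    transitions: dict state (tuple of pattern edges) -> pattern edges addable next.
    finals: dict final state -> name of the model graph it represents.
    A partial match maps pattern vertices to concrete vertices."""
    ps, pd, plab = initial_edge
    frontier = [((initial_edge,), {ps: a, pd: b})
                for (a, b), lab in graph_edges.items() if lab == plab]
    results = []
    for _ in range(edge_limit):
        nxt = []
        for state, match in frontier:
            if state in finals:                       # the match covers a model graph
                results.append((finals[state], match))
                continue
            for new_edge in transitions.get(state, []):
                es, ed, elab = new_edge
                for (a, b), lab in graph_edges.items():
                    if lab != elab:
                        continue
                    # the concrete edge must agree with already bound vertices
                    if match.get(es, a) == a and match.get(ed, b) == b:
                        m2 = dict(match)
                        m2[es], m2[ed] = a, b
                        nxt.append((state + (new_edge,), m2))
        frontier = nxt
        if not frontier:
            break
    results += [(finals[s], m) for s, m in frontier if s in finals]
    return results

if __name__ == "__main__":
    # Toy input graph: class A has method t, t calls method p.
    G = {("A", "t"): "has", ("t", "p"): "dep"}
    e1 = ("c", "m1", "has")                 # initial state: a class owning a method
    e2 = ("m1", "m2", "dep")                # next: that method delegates to another
    found = discover(G, e1, {(e1,): [e2]}, {(e1, e2): "delegation fragment"})
    print(found)   # [('delegation fragment', {'c': 'A', 'm1': 't', 'm2': 'p'})]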
4 Experiments
We implemented a plug-in for the Eclipse IDE, called CodeMiner, based on the aforementioned approach. Fig. 6 depicts the process of design pattern detection. Both design pattern example code and the source code under study are inputs of the detection process. The process consists of two parts. First, the design patterns to be detected are represented as design pattern signatures; then the state space graph is derived from the design pattern signatures. These first two activities are done manually and offline. At run time, CodeMiner parses Java code with the Eclipse JDT and represents the software system as an ASG. Both the ASG and the state space graph are taken as input to the algorithm described in the previous section. The identified design pattern instances are displayed in an Eclipse view (see Fig. 7).
Fig. 6. Design Pattern Detection Process
We evaluated CodeMiner on three open source projects, JUnit, JHotDraw and JRefactory. These projects were selected because other work on design pattern discovery uses one or more of them for evaluation, which allows us to compare and evaluate our experimental results. The results of the pattern detection are summarized in Table 2. They are interpreted by counting the number of correctly identified patterns (True Positives, TP) and the number of identified patterns that do not comply with the pattern description (False Positives, FP). We inspected all identified instances in the source code manually. As can be observed from Table 2, all FP values are zero except those of the Bridge pattern, mainly because the design pattern signatures capture the essential information of each pattern. The false positives for the Bridge pattern occur because the incorrectly identified instances structurally conform to the definition of the Bridge pattern but not to its intention, which is to decouple an abstraction from its implementation so that the two can vary independently [2].
Fig. 7. CodeMiner Interface
Table 2. Pattern detection results of our approach
                   JHotDraw    JRefactory    JUnit
Design Patterns    TP    FP    TP    FP      TP   FP
Adapter            30    0     4     0       0    0
Bridge             94    7     130   11      7    0
Command            16    0     2     0       1    0
Composite          1     0     0     0       1    0
Decorator          3     0     0     0       1    0
Factory Method     5     0     3     0       1    0
Observer           6     0     1     0       2    0
Proxy              0     0     11    0       0    0
Singleton          2     0     8     0       0    0
State/Strategy     33    0     13    0       3    0
Template Method    6     0     17    0       1    0
Visitor            0     0     2     0       1    0
We also compared the results of our approach with those of PATTERN4 [6]. Our approach obtains better results than PATTERN4. This is because PATTERN4 represents graphs as matrices; due to the large number of vertices in the graph, the system has to be partitioned into subsystems along hierarchies, and therefore pattern instances involving characteristics that extend beyond subsystem boundaries cannot be detected [6]. CodeMiner, in contrast, is based on a subgraph discovery algorithm: it can handle a large graph directly and does not need to partition it.
5 Conclusions
In this paper we presented an approach for discovering design patterns in Java source code. The approach describes the software system and the design patterns as graphs, and a novel subgraph discovery algorithm is used to efficiently detect subgraph isomorphisms from all design patterns to the system. The main idea of the algorithm is that a partial match (initially empty) is iteratively expanded by adding a new edge to it. The new edge is chosen to ensure that the newly formed graph is a candidate for a design pattern, which is guaranteed by a state space graph. To evaluate our approach, we performed a number of practical experiments on three open source systems and compared our results with those of related work. Acknowledgments. This work was supported by the National Science Foundation of China under the Grant No. 10771176.
References 1. Pfleeger, S.L.: Software Engineering: Theory and Practice, 2nd edn. Prentice Hall, Englewood Cliffs (2001) 2. Gamma, E., Helm, R., Johnson, R., et al.: Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading (1995) 3. Vok´ aˇc, M.: An Efficient Tool for Recovering Design Patterns from C++ Code. Journal of Object Technology 5(1), 139–157 (2006) 4. Kr¨ amer, C., Prechelt, L.: Design Recovery by Automated Search for Structural Design Patterns in Object-Oriented Software. In: Proceedings of the Third Working Conference on Reverse Engineering, Monterey, CA, pp. 208–215 (1996) 5. Ballis, D., Baruzzo, A., Comini, M.: A Rule-based Method to Match Software Patterns against UML Models. Electronic Notes in Theoretical Computer Science 219, 51–66 (2008) 6. Tsantalis, N., Chatzigeorgiou, A., Stephanides, G., et al.: Design Pattern Detection using Similarity Scoring. IEEE Transactions on Software Engineering 32(11), 896– 909 (2006) 7. Dong, J., Sun, Y., Zhao, Y.: Design Pattern Detection by Template Matching. In: Proceedings of the 2008 ACM Symposium on Applied Computing, Fortaleza, Brazil, pp. 765–769 (2008) 8. Cook, D.J., Holder, L.B.: Graph-based Data Mining. IEEE Intelligent Systems and Their Applications 15(2), 32–41 (2000) 9. Cook, D.J., Holder, L.B.: Mining Graph Data. John Wiley & Sons, Chichester (2007) 10. Ullmann, J.R.: An Algorithm for Subgraph Isomorphism. Journal of the ACM 23(1), 31–42 (1976) 11. Cordella, L.P., Foggia, P., Sansone, C., et al.: A (Sub)Graph Isomorphism Algorithm for Matching Large Graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence 26(10), 1367–1372 (2004) 12. Mckay, B.D.: Practical Graph Isomorphism. Congressus Numerantium 30, 45–87 (1981)
A Fuzzy Time Series-Based Neural Network Approach to Option Price Forecasting Yungho Leu, Chien-Pang Lee, and Chen-Chia Hung Department of Information Management, National Taiwan University of Science and Technology, 43, Keelung Road, Section 4, Taipei, Taiwan 10607, ROC {yhl,D9509302}@cs.ntust.edu.tw
Abstract. Recently, option price forecasting has become a popular financial issue. Being affected by many factors, option price forecasting remains a challenging problem. This paper proposes a new method to forecast the option price. The proposed method, termed as fuzzy time series-based neural network (FTSNN), is a hybrid method composed of a fuzzy time series model and a neural network model. In FTSNN, the fuzzy time series model is used to select a dataset for training the neural network model for prediction. The experiment results show that FTSNN outperforms several existing methods in terms of MAE and RMSE. Keywords: Option Price Forecasting, Hybrid method, Fuzzy Time Series, Neural Networks.
1 Introduction
Options are an important tool for risk management in financial markets [1]. For example, a producer can buy a put option to prevent a loss of profit due to a decrease in the price of his product in the future. Similarly, a customer can buy a call option to purchase a desired product at an expected price in the future. However, an option is like an insurance policy in that one has to pay a premium for it. The premium, also called the price, of an option is determined by many factors such as the current stock price, the option strike price, the time to expiration, the volatility of the stock price and the risk-free interest rate [2]. Being affected by many factors, option price forecasting remains a challenging problem. Due to the popularity of options as a risk management tool, option price forecasting has become an important task. The well-known Black-Scholes model (B-S model) was introduced in 1973 [2] for option pricing. However, the B-S model is limited by the fact that many of its assumptions do not hold in a real-life financial market. For this reason, many studies have proposed new methods to predict the option price. Artificial neural network (ANN) models are popular for financial forecasting, such as stock price index forecasting [3], [4] and exchange rate forecasting [5], [6]. The neural network model is also widely used in option price forecasting. For example,
many researchers have proposed hybrid neural network models to predict the option price [1], [7], [8], [9]. Recently, the fuzzy time series model has been gaining popularity in financial forecasting, such as stock price index forecasting [10], [11], exchange rate forecasting [12], and futures exchange index forecasting [13]. In this paper, we propose a new method to predict the option price of the "Taiwan Stock Exchange Stock Price Index Options (TXO)". Our proposed method, termed the fuzzy time series-based neural network (FTSNN), is a hybrid method composed of a fuzzy time series model and a neural network model. In FTSNN, the fuzzy time series model is used to select a dataset for training the neural network model for prediction. The remainder of this paper is organized as follows. Section 2 reviews the definitions of fuzzy time series. Section 3 introduces the procedure of FTSNN. Section 4 describes an application of FTSNN to option price forecasting. Section 5 compares the performance of FTSNN with other proposed methods. Section 6 gives the conclusions of this paper.
2 Fuzzy Time Series
This paper combines a two-factor fuzzy time series and a neural network for option price forecasting. In the following, we briefly review the definition of fuzzy time series. According to [13], [14], [15], the fuzzy time series model is defined as follows.
Definition 1: Let Y(t) (t = ..., 0, 1, 2, ...), a subset of R1, be the universe of discourse on which fuzzy sets fi(t) (i = 1, 2, ...) are defined, and let F(t) be a collection of the fi(t). Then F(t) is called a fuzzy time series defined on Y(t).
Definition 2: If for any fj(t) ∈ F(t) there exists an fi(t-1) ∈ F(t-1) such that there exists a fuzzy relation Rij(t, t-1) with fj(t) = fi(t-1) ∘ Rij(t, t-1), where '∘' is the max-min composition, then F(t) is said to be caused by F(t-1) only, and we denote this as F(t-1)→F(t).
Definition 3: If F(t) is caused by F(t-1), F(t-2), ..., and F(t-n), F(t) is called a one-factor n-order fuzzy time series, denoted by F(t-n), ..., F(t-2), F(t-1)→F(t).
Definition 4: If F1(t) is caused by (F1(t-1), F2(t-1)), (F1(t-2), F2(t-2)), ..., (F1(t-n), F2(t-n)), F1(t) is called a two-factor n-order fuzzy time series, denoted by (F1(t-n), F2(t-n)), ..., (F1(t-2), F2(t-2)), (F1(t-1), F2(t-1))→F1(t).
Let F1(t) = Xt and F2(t) = Yt, where Xt and Yt are fuzzy variables whose values are possible fuzzy sets of the first factor and the second factor, respectively, on day t. Then, a two-factor n-order fuzzy logic relationship (FLR) [14] can be expressed as:
(Xt-n, Yt-n), ..., (Xt-2, Yt-2), (Xt-1, Yt-1)→Xt,
where (Xt-n, Yt-n), …, (Xt-2, Yt-2) and (Xt-1, Yt-1), are referred to as the left-hand side (LHS) of the relationship, and Xt is referred to as the right-hand side (RHS) of the relationship.
3 The FTSNN Method
When there are many different fuzzy sets on the LHS of an FLR, it is difficult to find a matched FLR for prediction, and some measures have to be taken to counter this problem [12]. In this paper, we use the fuzzy time series to select several similar FLRs for prediction, and use a neural network to construct the prediction model. The proposed method is a hybrid method in the sense that the fuzzy time series model is used to select the training dataset (i.e., FLRs) and the neural network is used to build a prediction model from the selected training dataset. The detailed procedure of the FTSNN method is described in the following.
Step 1: Divide the universe of discourse
The universe of discourse of the first factor is defined as U = [Dmin − D1, Dmax + D2], where Dmin and Dmax are the minimum and maximum of the first factor, respectively, and D1 and D2 are two positive real numbers chosen so that the universe of discourse can be divided into n equal-length intervals. The universe of discourse of the second factor is defined as V = [Vmin − V1, Vmax + V2], where Vmin and Vmax are the minimum and maximum of the second factor, respectively; similarly, V1 and V2 are two positive real numbers used to divide the universe of discourse of the second factor into m equal-length intervals. Note that the interval length of each factor is determined by the largest value of that factor in the historical data.
Step 2: Define fuzzy sets
Linguistic terms Ai, 1 ≤ i ≤ n, are defined as fuzzy sets on the intervals of the first factor as follows:
A1 = 1/u1 + 0.5/u2 + 0/u3 + ··· + 0/u_{n-2} + 0/u_{n-1} + 0/un,
A2 = 0.5/u1 + 1/u2 + 0.5/u3 + ··· + 0/u_{n-2} + 0/u_{n-1} + 0/un,
⋮
A_{n-1} = 0/u1 + 0/u2 + 0/u3 + ··· + 0.5/u_{n-2} + 1/u_{n-1} + 0.5/un,
An = 0/u1 + 0/u2 + 0/u3 + ··· + 0/u_{n-2} + 0.5/u_{n-1} + 1/un,
where ui denotes the i-th interval of the first factor. Similarly, linguistic terms Bj, 1 ≤ j ≤ m, are defined as fuzzy sets on the intervals of the second factor:
B1 = 1/v1 + 0.5/v2 + 0/v3 + ··· + 0/v_{m-2} + 0/v_{m-1} + 0/vm,
B2 = 0.5/v1 + 1/v2 + 0.5/v3 + ··· + 0/v_{m-2} + 0/v_{m-1} + 0/vm,
⋮
B_{m-1} = 0/v1 + 0/v2 + 0/v3 + ··· + 0.5/v_{m-2} + 1/v_{m-1} + 0.5/vm,
Bm = 0/v1 + 0/v2 + 0/v3 + ··· + 0/v_{m-2} + 0.5/v_{m-1} + 1/vm,
where vi is the i-th interval of the second factor.
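A small sketch of Steps 1 and 2 (ours; the interval widths of 10 and 50 follow our reading of the TXO/TAIEX setting in Section 4, and the exact interval boundaries are not fully pinned down by the paper): a crisp value is mapped to the subscript of the fuzzy set in which it has membership 1.

def fuzzify(value, lower, width):
    """Return the 1-based subscript i of the interval (and thus of the fuzzy set
    A_i or B_i) in which the crisp value has membership 1."""
    return int((value - lower) // width) + 1

if __name__ == "__main__":
    # First factor: TXO closing price, U = [0, 1000] cut into 100 intervals of width 10.
    # Second factor: TAIEX index, V starting at 3000 with intervals of width 50.
    day1_price, day1_index = 585, 6143.12            # Day 1 of Table 1
    print(fuzzify(day1_price, lower=0, width=10),    # price subscript
          fuzzify(day1_index, lower=3000, width=50)) # index subscript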
Step 3: Determine the order of FTSNN
FTSNN uses a two-factor n-order fuzzy time series model to search for similar FLRs, which are then used to build a neural network prediction model. Similar FLRs imply similar trends in the historical data, and the order of FTSNN can be regarded as the length of those trends. Determining a suitable trend length is a problem for the FTSNN method; in this paper, we set the order to each of 1 through 5, build five different prediction models, and choose the one with the best prediction accuracy as the final prediction model. Note that since n can be any of 1, 2, 3, 4, or 5, steps (a) to (d) below are performed five times to build the five models.
(a) Construct the FLRs database
For the historical data on day i, let Xi-n and Yi-n denote the fuzzy sets of F1(i-n) and F2(i-n) of the fuzzy time series, and let Xi denote the fuzzy set of F1(i). The FLR of day i in the database can be represented as follows:
(Xi-n, Yi-n), ..., (Xi-2, Yi-2), (Xi-1, Yi-1)→Xi.
(b) Construct the LHS of the FLR of the predicting day (assume that day t is the predicting day)
The LHS of the FLR on day t can be represented as follows:
(Xt-n, Yt-n), ..., (Xt-2, Yt-2), (Xt-1, Yt-1).
(c) Search for similar FLRs
Calculate the Euclidean distance (ED) between the LHS of the FLR on day t and the LHS of each candidate FLR in the FLRs database. Then, select the top five FLRs with the smallest Euclidean distance from the FLR database as the training examples for building a neural network. The Euclidean distance between the FLR on day t and the FLR on day i is calculated according to Formulas 1, 2, and 3:
EDAi = (IXt-n − RXi-n)² + ··· + (IXt-2 − RXi-2)² + (IXt-1 − RXi-1)²,  (1)
EDBi = (IYt-n − RYi-n)² + ··· + (IYt-2 − RYi-2)² + (IYt-1 − RYi-1)²,  (2)
EDi = (EDAi + EDBi) / 2.  (3)
In the above formulae, IXt-n and IYt-n are the subscripts of the fuzzy sets of the first factor and the second factor, respectively, of the LHS of day t’s FLR. Similarly, RXi-n and RYi-n are subscripts of the first factor and the second factor, respectively, of the LHS of day i’s FLR. (d) Build neural network models With the top five similar FLRs as the training examples, we can train a radial basis function neural network (RBFNN) model. The training framework of the neural network model is shown in Figure 1.
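The sketch below (ours) implements Formulas (1)-(3) for the 2-order case and ranks candidate FLRs by distance to the predicting day's LHS; the subscript pairs come from the worked example in Section 4, although the distances printed here differ slightly from the rounded values shown later in Table 4.

def flr_distance(lhs_t, lhs_i):
    """lhs_* is a list of (X subscript, Y subscript) pairs, oldest first.
    Returns (EDA_i + EDB_i) / 2 as in Formulas (1)-(3)."""
    eda = sum((ix - rx) ** 2 for (ix, iy), (rx, ry) in zip(lhs_t, lhs_i))
    edb = sum((iy - ry) ** 2 for (ix, iy), (rx, ry) in zip(lhs_t, lhs_i))
    return (eda + edb) / 2.0

def top_similar(lhs_t, flr_db, k=5):
    """flr_db maps an FLR name to (LHS pairs, RHS subscript); the k entries
    with the smallest distance to the predicting day's LHS are returned."""
    return sorted(flr_db.items(), key=lambda kv: flr_distance(lhs_t, kv[1][0]))[:k]

if __name__ == "__main__":
    lhs_day11 = [(25, 117), (30, 117)]          # LHS of the FLR on day 11
    flr_db = {                                   # two of the candidates of Table 3
        "FLR1": ([(58, 122), (48, 121)], 41),
        "FLR8": ([(28, 117), (25, 117)], 30),
    }
    for name, (lhs, rhs) in top_similar(lhs_day11, flr_db, k=2):
        print(name, flr_distance(lhs_day11, lhs), "-> RHS subscript", rhs)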
Fig. 1. The training framework of the neural network model
(e) Model selection Once the five models are built, we can choose the best model for prediction. In this paper, we use the prediction accuracy on day t-1 as the criterion of model selection. In other words, we feed the LHS of the FLR on day t-1 as the input of a trained model and calculate the error according to Formula 4. We then choose the model with the smallest error. Note that in Formula 4, the forecasted RHS denotes the subscript of the forecasted fuzzy set on day t-1, and the testing RHS denotes the actual subscript of the fuzzy set of the RHS on day t-1.
Error = |Forecasted RHS − Testing RHS|.  (4)
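A sketch of step (e) under the assumption that each trained n-order model is available as a callable returning an RHS subscript (the interface is hypothetical): the order whose model gives the smallest error of Formula (4) on day t−1 is kept.

def select_best_order(models, lhs_day_t_minus_1, actual_subscript):
    """models maps order n -> a callable predicting the RHS subscript from an
    LHS; the error of Formula (4) decides which order is used for day t."""
    def error(n):
        return abs(models[n](lhs_day_t_minus_1) - actual_subscript)
    return min(models, key=error)

if __name__ == "__main__":
    # Hypothetical stand-ins for trained RBF networks of orders 1..3.
    models = {1: lambda lhs: 27, 2: lambda lhs: 29, 3: lambda lhs: 35}
    best = select_best_order(models, lhs_day_t_minus_1=[(28, 117)], actual_subscript=30)
    print(best)   # order 2, whose prediction 29 is closest to the actual 30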
Step 4: Forecasting
When the best n-order FTSNN model has been constructed in Step 3, we perform forecasting by feeding the LHS of the FLR on the predicting day into the neural network to obtain the forecasted subscript of the RHS on that day. Because the forecasted value is the subscript of a fuzzy set, we have to defuzzify it into an option price. We use the weighted average method for defuzzification, shown in Formula 5:
forecast_value = (M[1] + 0.5·M[2]) / (1 + 0.5),                        if k = 1,
                 (0.5·M[k-1] + M[k] + 0.5·M[k+1]) / (0.5 + 1 + 0.5),   if 1 < k ≤ n − 1,
                 (0.5·M[n-1] + M[n]) / (0.5 + 1),                      if k = n,   (5)
where M[k] denotes the midpoint value of the fuzzy set k. Note that an iteration of the above procedure (step 1 through step 4) predicts only one forecasting value.
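A direct transcription of Formula (5) (our code; the midpoint values used in the example follow the substitution M[34] = 345, M[35] = 355, M[36] = 365 made in Section 4):

def defuzzify(k, midpoints):
    """Weighted average of fuzzy set k and its neighbours, following Formula (5).
    midpoints[k] is the midpoint M[k] of interval u_k (index 0 is unused)."""
    M, n = midpoints, len(midpoints) - 1
    if k == 1:
        return (M[1] + 0.5 * M[2]) / 1.5
    if k == n:
        return (0.5 * M[n - 1] + M[n]) / 1.5
    return (0.5 * M[k - 1] + M[k] + 0.5 * M[k + 1]) / 2.0

if __name__ == "__main__":
    # Midpoints chosen so that M[34] = 345, M[35] = 355, M[36] = 365, matching
    # the substitution made in the worked example of Section 4.
    midpoints = [None] + [10 * k + 5 for k in range(1, 101)]
    print(defuzzify(35, midpoints))   # 355.0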
4 Option Price Forecasting Using FTSNN To forecast the price of “Taiwan Stock Exchange Stock Price Index Option (TXO)”, we choose the closing price of TXO as the first factor in FTSNN and the “Taiwan Stock
Exchange Capitalization Weighted Stock Index (TAIEX)" as the second factor. A part of the historical data is shown in Table 1. With this historical data, U is set to [0, 1000] and is divided into 100 intervals; that is, u1 = [0, 10], u2 = [10, 20], ..., and u100 = [990, 1000]. For the second factor, V is set to [3000, 13000] and is divided into 240 intervals; that is, v1 = [3000, 3050], v2 = [3050, 3100], ..., and v240 = [12950, 13000]. Having defined the intervals, we fuzzify the historical data into fuzzy sets; Table 2 shows part of the fuzzified historical data. We then construct the FLRs database from the fuzzified historical data; a 2-order FLR database for this historical data is shown in Table 3. Having constructed the FLR database, we can forecast the option price. For example, if we want to forecast the option price on day 11, we first construct the LHS of the FLR on day 11 as follows:
(A25, B117), (A30, B117).
Then, we calculate the Euclidean distance between the LHS of the FLR on day 11 and the LHS of each candidate FLR in the FLRs database. Table 4 shows the calculated Euclidean distances. We then select the top five similar FLRs from the FLR database; in this example, FLR8, FLR5, FLR7, FLR6 and FLR4 are selected. Finally, we use these FLRs as training examples to build a neural network model.
Table 1. Parts of the historical data
Date    Option price  Stock index     Date    Option price  Stock index
Day 1 585 6143.12 Day 6 376 5942.85
Day 2 481 6060.46 Day 7 398 5975.66
Day 3 418 5988.37 Day 8 286 5879.08
Day 4 420 5982.12 Day 9 255 5853.94
Day 5 368 5935.99 Day 10 304 5889.52
Table 2. Parts of the fuzzified dataset
Day 1 (A58, B122)   Day 6 (A37, B118)
Day 2 (A48, B121) Day 7 (A39, B119)
Day 3 (A41, B119) Day 8 (A28, B117)
Day 4 (A42, B119) Day 9 (A25, B117)
Day 5 (A36, B118) Day 10 (A30, B117)
To select a model for forecasting, we use FLR8, which is the FLR on day 10, as the testing data. To perform the test, we feed the LHS of the FLR on day 10 into the neural network to obtain the predicted subscript of the fuzzy set of the RHS on day 10, and then calculate the error between the predicted subscript and the actual subscript. In this example, the forecasted subscript is 27 and the actual subscript is 30, so the error is 3. Note that the procedure above is performed for the 1-order, 2-order, 3-order, 4-order and 5-order fuzzy time series models; the one with the smallest error is chosen as the model for forecasting.
Table 3. The FLRs database
FLR1: (A58, B122), (A48, B121) → A41
FLR2: (A48, B121), (A41, B119) → A42
…
FLR8: (A28, B117), (A25, B117) → A30
Table 4. Euclidean distances
        EDA                        EDB                            ED
FLR1    (25−58)² + (30−48)²        (117−122)² + (117−121)²        750.9726
FLR2    (25−48)² + (30−41)²        (117−121)² + (117−119)²        349.1129
...     ...                        ...                            ...
FLR8    (25−28)² + (30−25)²        (117−117)² + (117−117)²        16.7772
For the above example, assume that a 2-order neural network model is selected, and the forecasted subscript is 35 on day 11. Substituting 345, 355 and 365 for M[34], M[35], and M[36], respectively in Formula 5, we get 355 as the forecasted option price on day 11. Note that the actual option price on day 11 is 360 in this example.
5 Results and Performance
5.1 Dataset
The dataset of this paper consists of the daily transaction data of TXO and TAIEX from January 3, 2005 to December 29, 2006. This paper investigates a sample of 23,819 call option prices. Call options can be divided into three categories according to their S/K ratio; the distribution of the dataset is shown in Table 5. We refer to [8] and [9] for the definition of the categories. Our dataset comprises 30 different strike prices from 5,200 to 8,200 and 12 different expiration dates from January 2005 to December 2006.
Table 5. Data distribution according to moneyness
Category           Moneyness          Number
In-the-money       S/K > 1.02         8938
At-the-money       0.95 < S/K ≤ 1.02  7508
Out-of-the-money   S/K ≤ 0.95         7373
Notes: S is the spot price; K is the strike price.
Note that the option prices of the beginning 10 transaction dates of each option are not predicted due to insufficient historical data. In predicting the option price of a specific date, the option prices of the previous transaction dates become the historical data.
5.2 Performance Measures
Two different performance measures, the mean absolute error (MAE) and the root mean square error (RMSE), are used to measure the forecasting accuracy of FTSNN. The formulae are as follows:
MAE = ( Σ_{t=1}^{n} |A_t − P_t| ) / n,  (6)
RMSE = sqrt( Σ_{t=1}^{n} (A_t − P_t)² / n ),  (7)
where At and Pt denote the real option price and the forecast option price on day t, respectively.
5.3 Performance
The performance of FTSNN is compared with that of the other methods published in the previous literature [8] and [9]. Table 6 shows part of the results of the option price forecasting by FTSNN. The forecast prices are close to the real option prices, except on the dates when the option prices change abruptly.
Table 6. Parts of the results of FTSNN
Strike price   Date        Real option price   FTSNN forecast
7100           2006/9/25   127                 106
7100           2006/9/26   118                 118
7100           2006/9/27   132                 117
7100           2006/9/28   95                  125
7100           2006/9/29   91                  94
7100           2006/10/2   119                 115
7100           2006/10/3   109                 106
Table 7. The performance in RMSE and MAE
RMSE
Moneyness   FTSNN    GARCH   GJR     Grey-GJR   EGARCH   Grey-EGARCH
In          72.79    85.49   76.19   73.76      73.90    72.11*
At          36.99*   44.02   41.06   40.11      41.35    40.26
Out         16.19*   25.73   25.53   25.89      26.34    26.13
MAE
Moneyness   FTSNN    GARCH   GJR     Grey-GJR   EGARCH   Grey-EGARCH
In          52.11*   69.54   59.28   56.13      57.02    57.26
At          23.98*   34.73   31.67   30.21      32.17    32.51
Out         9.78*    18.78   17.41   17.26      18.30    18.26
Notes: 1. * denotes the smallest value. 2. GJR denotes the GJR-GARCH model. 3. Grey-GJR denotes the Grey-GJR-GARCH model.
Fig. 2. Time series of the real option price and the forecasting price
The forecasting prices are close to the real option prices, except on the dates when the option prices change abruptly. The performance of FTSNN and of the other methods is shown in Table 7, which shows that the performance of the traditional method, GARCH, is always the worst. The hybrid methods are better than GARCH, especially FTSNN and Grey-EGARCH. In terms of RMSE, the forecasting accuracy of FTSNN is better than that of the other methods, except for options in the in-the-money category, where the RMSE of Grey-EGARCH is better than that of FTSNN, although the difference is not significant. In terms of MAE, Table 7 shows that the accuracy of FTSNN is the best in all three categories. Figure 2 shows the forecasting results of an option with strike price 7,400 and expiration date December 20, 2006.
6 Conclusion

In this paper, we propose the FTSNN method to predict option prices. In FTSNN, we use a fuzzy time series model to select the best training data from the historical dataset to train a neural network forecasting model. The experimental results show that FTSNN is more accurate than the other methods in terms of RMSE for options belonging to the at-the-money and out-of-the-money categories. For in-the-money options, the performance of FTSNN is similar to that of the other methods. Furthermore, in terms of MAE, FTSNN is better than the other methods for all categories of options. Hence, FTSNN offers a useful alternative for option price forecasting.
References

1. Ko, P.-C.: Option valuation based on the neural regression model. Expert Syst. Appl. 36, 464–471 (2009)
2. Black, F., Scholes, M.: The Pricing of Options and Corporate Liabilities. J. Polit. Econ. 81, 637 (1973)
3. Grudnitski, G., Osburn, L.: Forecasting S&P and gold futures prices: An application of neural networks. J. Futures Mark. 13, 631–643 (1993)
4. Morelli, M.J., Montagna, G., Nicrosini, O., Treccani, M., Farina, M., Amato, P.: Pricing financial derivatives with neural networks. Physica A 338, 160–165 (2004)
5. Shin, T., Han, I.: Optimal signal multi-resolution by genetic algorithms to support artificial neural networks for exchange-rate forecasting. Expert Syst. Appl. 18, 257–269 (2000)
6. Panda, C., Narasimhan, V.: Forecasting exchange rate better with artificial neural network. J. Policy Model. 29, 227–236 (2000)
7. Lajbcygier, P.: Improving option pricing with the product constrained hybrid neural network. IEEE Trans. Neural Netw. 15, 465–476 (2004)
8. Tseng, C.-H., Cheng, S.-T., Wang, Y.-H., Peng, J.-T.: Artificial neural network model of the hybrid EGARCH volatility of the Taiwan stock index option prices. Physica A 387, 3192–3200 (2008)
9. Wang, Y.-H.: Nonlinear neural network forecasting model for stock index option price: Hybrid GJR-GARCH approach. Expert Syst. Appl. 36, 564–570 (2009)
10. Yu, H.-K.: Weighted fuzzy time series models for TAIEX forecasting. Physica A 349, 609–624 (2005)
11. Cheng, C.-H., Chen, T.-L., Teoh, H.J., Chiang, C.-H.: Fuzzy time-series based on adaptive expectation model for TAIEX forecasting. Expert Syst. Appl. 34, 1126–1132 (2008)
12. Leu, Y., Lee, C.-P., Jou, Y.-Z.: A distance-based fuzzy time series model for exchange rates forecasting. Expert Syst. Appl. 36, 8107–8114 (2009)
13. Lee, L.-W., Wang, L.-H., Chen, S.-M., Leu, Y.: Handling forecasting problems based on two-factors high-order fuzzy time series. IEEE Trans. Fuzzy Syst. 14, 468–477 (2006)
14. Chen, S.-M.: Forecasting Enrollments Based on High-Order Fuzzy Time Series. Cybern. Syst. 33, 1–16 (2002)
15. Song, Q., Chissom, B.S.: Forecasting enrollments with fuzzy time series – Part I. Fuzzy Sets Syst. 54, 1–9 (1993)
16. Song, Q., Chissom, B.S.: Forecasting enrollments with fuzzy time series – Part II. Fuzzy Sets Syst. 62, 1–8 (1994)
Integrated Use of Artificial Neural Networks and Genetic Algorithms for Problems of Alarm Processing and Fault Diagnosis in Power Systems Paulo Cícero Fritzen1,2, Ghendy Cardoso Jr2, João Montagner Zauk2, Adriano Peres de Morais2, Ubiratan H. Bezerra3, and Joaquim A.P.M. Beck4 1
Federal Institute of Education, Science and Technology of Tocantins, Campus Palmas, IFTO Avenue LO 5, Block AE 310 South, Zip code 77021-090, Palmas, Tocantins, Brazil 2 Federal University of Santa Maria, Electrical Engineering Post-Graduate Program, UFSM Avenue Roraima, 1000, Zip code 97105-900, Santa Maria, Rio Grande do Sul, Brazil 3 Federal University of Pará, Department of Electrical Engineering and Computer, UFPA, Street Augusto Corrêa, 01, Guamá, Zip code 66075-110, Belém, Pará, Brazil 4 Centrais Elétricas do Norte do Brasil S/A, Eletronorte Quarter 06, Set A, B and Block C, Asa Norte, Zip code 70.716-901, Brasília, DF, Brazil
[email protected],
[email protected],
[email protected],
[email protected],
[email protected]
Abstract. This work addresses aspects of the alarm processing and fault diagnosis problem at the system level, with the purpose of filtering the alarms generated during an outage and identifying the equipment under fault. A methodology using Artificial Neural Networks (ANN) and Genetic Algorithms (GA) was developed to solve the problem. This approach exploits the GA capacity to deal with combinatorial problems, as well as the ANN processing speed and generalization capacity. Such a strategy favors a fast and robust solution. Keywords: Alarm Processing, Genetic Algorithms, Neural Network, Fault Diagnosis, Supervision and Control of Electrical Systems.
1 Introduction

The use of computational tools to support decision-making has become essential in electrical power operation and control centers, especially to restore the system to its normal operation state. The protection schemes are designed to isolate the fault as quickly as possible, through the shutdown of the smallest amount of equipment. The alarms are triggered to signal the operation of the protection relays. The operators of power systems may be surprised by an overwhelming number of reported alarms caused by contingencies. If these alarms are not filtered or processed according to their importance and grouped in a window of time able to define a particular event, they can confuse the operation staff. Based on this information, the operators must use their experience to decide what exactly happened to the system. This task is frequently difficult, because there
is the possibility of multiple events, failure or improper operation of relays, circuit breaker failures, and failures of remote data acquisition units. In an attempt to reduce the possibility of error during the analysis of alarms activated by protection relay operation, computational tools for alarm processing and fault diagnosis have been developed [1]. The function of an alarm processor and grouper is to select and present only the most important alarms to the operator [2]. According to [3], the intelligent alarm processor must also suggest corrective control actions when necessary. Expert systems, neural networks, genetic algorithms, structured graphs and fuzzy logic are techniques suggested for alarm processing. This paper proposes a method for alarm processing based on genetic algorithms and artificial neural networks.
2 Artificial Neural Networks

The minimization of human effort has been one of the goals of engineering. Among the techniques developed for this purpose are neural networks, which, due to their ability of learning, generalization and classification, are used in pattern recognition, control, modeling, and function approximation, among other applications.

2.1 GRNN (Generalized Regression Neural Network)

This topology can have multiple interconnected layers, as shown in Fig. 1.
Fig. 1. GRNN Network
Each neuron in the pattern layer is a grouping center; the number of neurons in this layer is equal to the number of samples used to represent the knowledge. When a new pattern is presented, the network calculates the distance between it and the previously stored samples. The absolute values of these differences are added, multiplied by the bias, and then sent to a nonlinear activation function [1]. An exponential is used as the activation function, and the bias is set to 0.8326/spread, where spread is the opening of the radial basis function used. The performance of the network is influenced by the bias adjustment (spread) and by the stored patterns: for a very large value of spread the network becomes too widespread, while a very small value makes the network unable to generalize [4].
The GRNN network can be used for forecasting, modeling, mapping, interpolation or control. Single-step learning is one of the main advantages of the GRNN [4].
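A minimal GRNN sketch following the description above is given below: the pattern layer computes a distance between the input and each stored training sample, scales it by the bias 0.8326/spread, and applies an exponential activation; the output is the activation-weighted average of the stored targets (Specht's formulation). The city-block distance mirrors the text; other implementations use the Euclidean distance. The sample relay patterns are hypothetical.

```python
import numpy as np

class GRNN:
    def __init__(self, spread=0.35):
        self.bias = 0.8326 / spread

    def fit(self, X, Y):                      # one-step "training": store the samples
        self.X = np.asarray(X, dtype=float)
        self.Y = np.asarray(Y, dtype=float)
        return self

    def predict(self, x):
        dist = np.abs(self.X - np.asarray(x, dtype=float)).sum(axis=1)
        w = np.exp(-(dist * self.bias) ** 2)  # pattern-layer activations
        return w @ self.Y / w.sum()           # summation/output layer

# Illustrative use with binary relay patterns (87, 63T, 63VS, 63C, 51D, 51P, 51Np)
# and targets [selective, non-selective]; the stored patterns are hypothetical.
net = GRNN(spread=0.37).fit(
    X=[[0, 0, 1, 1, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1]],
    Y=[[1, 0], [0, 1]],
)
print(net.predict([0, 0, 1, 0, 0, 0, 0]))
```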
3 Genetic Algorithms

Genetic algorithms (GA) are computational models inspired by the evolution of species, in which each candidate embodies a potential solution to a specific problem. They use a structure similar to a chromosome and apply selection and crossover operators in order to preserve information relevant to the problem solution. Each chromosome, called an individual in the GA, is a point in the solution space of the optimization problem. The solution process used in genetic algorithms is to generate, through specific rules, a large number of individuals (populations) in order to explore the solution space. In [5], it is highlighted that with a GA-based method multiple optimal solutions can be found in a single run, because it is easy for the GA to find the global optimum and, in addition, it has the opportunity to find many local optima. This is especially suitable for situations with mal-operation of protection relays and/or circuit breakers, because in these situations there may be more than one reasonable estimate, and it is expected that all of them may be found. The GA belongs to the class of probabilistic search and optimization methods, although it is not merely a random search. Some characteristics differentiate the GA from other optimization methods [6]:
• it uses a set of points (candidate solutions);
• it operates in a coded solution space;
• it requires only information about the objective function to be optimized, evaluated for each population member.
4 Methodology

This work developed a methodology in which Artificial Neural Networks (ANN) and Genetic Algorithms (GA) are complementary in solving the problem. To facilitate the understanding of the proposed method, the flow chart shown in Fig. 2 was created. The alarms received by the data acquisition system (SCADA) feed the proposed system with information about tripped relays and the state of the circuit breakers. The details about the tripped relays are processed by the GRNN, and the information about the state of the circuit breakers, together with the GRNN outputs, is treated by the GA, which provides the fault diagnosis.

4.1 Protection Philosophy

Primary protection, backup protection, and auxiliary relays compose the fundamental principles of system protection. The protection logic of busbars, transmission lines and transformers was modeled using the GRNN. Figures 3, 4 and 5 show the protection schemes associated with the transformer, busbar and transmission lines, respectively.
Fig. 2. Proposed method flowchart
Fig. 3. Transformer protection schemes
Fig. 4. Busbar protection schemes
Fig. 5. Transmission line protection schemes
The protection devices shown in Figs. 3, 4 and 5 have the following definitions:
• 87 – differential relay;
• 63 T – Buchholz autotransformer relay;
• 63 VS – security valve;
• 63 C – pressure relay of the under-load tap switch;
• 51 D – phase time overcurrent relay, side D;
• 51 Np – ground time overcurrent relay;
• 51 P – phase time overcurrent relay, side P;
• 27 – undervoltage relay;
• 59 – overvoltage relay;
• 21-1 – phase distance directional relay;
• 21N-1 – phase distance directional ground relay;
• 21-2 – second zone time distance relay;
• 21p – directional phase distance relay;
• 21 S – reverse distance directional relay (carrier start unit);
• 67NI – instantaneous ground directional overcurrent relay;
• 67NT – time ground directional overcurrent relay;
• 67NP – directional ground overcurrent relay;
• 67NP/G1 – reverse directional ground overcurrent relay (carrier start unit).
The protection can be classified as main protection or backup protection. The first is intended for the fast removal of a defective system component; this type of protection is selective and fast. The function of the backup protection is to operate when the primary protection fails or is under maintenance (assuming the primary protection function). The backup relays are arranged so as not to fail for the same reasons that lead to the primary protection failure. Usually, transformers are affected by the occurrence of short circuits between coils and phases, involving the earth or not. Thus, it is quite common to protect the power transformer with differential relays (87) and Buchholz gas detector relays (63), while the backup protection is formed by time overcurrent relays (51) and/or fuses [1]. Most of the time, busbar protections are designed to minimize the number of disconnected circuits, i.e., only the circuit breakers associated with the busbar should be tripped [1]. The transmission lines, depending on their importance, are protected by distance relays, overcurrent relays and pilot schemes [1]. The pilot scheme considered was the directional comparison blocking scheme, because it is widely used in power systems. When a failure occurs in the circuit breaker opening, the circuit breaker failure function, called in this work "CB failure protection", is used in order to reduce the fault time, as shown in Figs. 3 and 5. To illustrate the protection logic, consider a fault in transformer T1 of Fig. 6. The relays associated with the transformer main protection must act by sending a signal to open circuit breakers CB2 and CB4. However, if circuit breaker CB2 fails, the circuit breaker failure protection will be triggered, which will open circuit breakers CB1, CB2 and CB3.
4.2 Test System

The electrical system shown in Fig. 6 was used. It is the same system used in [5] and [7]. The algorithms that make up the proposed tool were implemented in MatLab®.
Fig. 6. Test system
4.3 GRNN Training Set

In this study five GRNN networks were trained: one for each equipment type (transformer, line and busbar) and two GRNN networks for the pilot modeling, which are used by the transmission line GRNN network. The pilot GRNN networks identify whether the pilot operated for an internal fault or for an external fault on the lines; if the pilot operated for a fault in the line, the operation side (P or D) is identified. The outputs of these networks are incorporated into the transmission line GRNN network. The network outputs were coded as follows: for the busbars, Busbar Selective Protection (BSP) and Busbar Sub and Over Voltage Protection (BSOVP); for the lines, Line Principal Protection (LPP), External Fault Toward D (ExtD) and External Fault Toward P (ExtP); and for the transformers, Transformer Selective Protection (TSP) and Transformer Non-Selective Protection (TNSP). Each input neuron represents a protection relay and each output neuron is a classification of the protection type. A binary representation was used to form the input vectors: the value "1" indicates the receipt of an associated alarm, while the value "0" indicates the non-receipt of an alarm. The network is only activated if an alarm is received. During the GRNN training the spread value was adjusted through verification; after several tests, spread values of 0.3 for the busbar neural network, 0.37 for the transformer, 0.35 for the transmission lines, and 0.5 for the two pilot networks were chosen. Table 1 shows some of the cases used in the transformer neural network training.
Table 1. Logic operation of the transformer relays

Relays \ Cases   1  2  3  …  16 17 18 19 20  …  32 33 34  …  96 97  …  112 113  …  126 127
87               0  0  0  …   0  0  0  0  0  …   0  0  0  …   0  0  …    0   0  …    1   1
63 T             0  0  0  …   0  0  0  0  1  …   0  0  0  …   0  0  …    0   0  …    1   1
63 VS            0  1  1  …   0  0  1  1  0  …   0  0  1  …   0  0  …    0   0  …    1   1
63 C             1  0  1  …   0  1  0  1  0  …   0  1  0  …   0  1  …    0   1  …    0   1
51 D             0  0  0  …   0  0  0  0  0  …   0  0  0  …   1  1  …    1   1  …    1   1
51 P             0  0  0  …   0  0  0  0  0  …   1  1  1  …   1  1  …    1   1  …    1   1
51 Np            0  0  0  …   1  1  1  1  1  …   0  0  0  …   0  0  …    1   1  …    1   1
Type             A  A  A  …   B  A  A  A  A  …   B  A  A  …   B  A  …    B   A  …    A   A

A – Transformer Selective Protection (TSP). B – Transformer Non-selective Protection (TNSP).
The GRNN network was chosen because its training process is rapid and is performed mathematically in a single step. This feature is ideal for applications involving real systems, since it facilitates the inclusion of new training patterns and the customization for other equipment with different relay logics.

4.4 GA Parameterization

In this step of GA configuration and testing, the protection philosophy is modeled. The purpose of the GA is to analyze the protection system logic in an integrated manner; for this, the neural network outputs and the states of the circuit breakers (CB) are considered. The objective function utilized is similar to that proposed by [8]. The criterion that reflects the requirements of the fault section estimation problem is based on the parsimony principle, i.e., the simplest hypothesis capable of explaining the received alarms must be the best solution. Thus, a minimization criterion is used, which can be mathematically formulated by expression (1):

\min f(E) = -W + w_1 [\Delta A] + w_2 [\Delta B] + w_3 [E]    (1)
where: the term E is a vector of ne binary elements that represents the hypothesis being tested. Each element of E represents the state of an event included in Es, receiving 0 (if the event is assumed not to have occurred) or 1 (if the event is assumed to have occurred); Es is the column matrix of events. W is a constant that ensures a negative fitness function value, regardless of the other terms; w1, w2, w3 are positive weight coefficients that define the relative importance of each term. ΔA is a vector of na elements and depends on two other vectors, Ar and Am(E): Ar is the vector of received alarms (0 represents a non-received alarm and 1 a received one) and Am(E) is the vector of alarms expected for the events that compose E. The vector ΔA is determined as follows: if the jth element of Ar is 0, then the jth element of ΔA receives 0; if the jth elements of Ar and Am(E) are both 1, then the jth element of ΔA receives 0; the other elements are 1.
[ΔA] is the number of nonzeros in the vector ΔA. This term verifies whether E is a cover of all received alarms, reflecting the small probability of a received alarm being false [8]. ΔB is a vector of na elements determined by subtracting the elements of the vectors Ar and Am(E), creating a difference vector; [ΔB] is the number of nonzeros in the vector ΔB. This term represents the inconsistency between E and the received alarms, i.e., the alarms not justified by E and the alarms missing from the event set because of a communication failure. [E] is the number of nonzeros contained in E, representing the number of events that compose E; taking the simplest answer as the most correct one follows the parsimony principle. To test the system of Fig. 6, 108 alarms and 132 events were mapped. The patterns that connect alarms to events, which constitute the GA knowledge base, were prepared according to the following criterion: only one failure of a protective device (circuit breaker or relay) at a time. Table 2 presents only 12 events, each associated with its expected set of alarms; a sketch of the fitness computation is given below.
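```python
# A sketch of the fitness computation in expression (1), following the vector
# definitions above. The event-to-alarm patterns and the weight values are
# illustrative, not taken from the paper's 132-event knowledge base.
import numpy as np

def fitness(E, Ar, event_alarms, W=1000.0, w1=1.0, w2=1.0, w3=1.0):
    """f(E) = -W + w1*[dA] + w2*[dB] + w3*[E], to be minimized."""
    E, Ar = np.asarray(E, dtype=int), np.asarray(Ar, dtype=int)
    # Am(E): union of the expected alarms of the events hypothesized in E.
    if E.any():
        Am = (event_alarms[E == 1].sum(axis=0) > 0).astype(int)
    else:
        Am = np.zeros_like(Ar)
    dA = (Ar == 1) & (Am == 0)        # received alarms not covered by E
    dB = Ar != Am                     # inconsistency between E and the alarms
    return -W + w1 * dA.sum() + w2 * dB.sum() + w3 * E.sum()

# Two events over four alarms (rows: events, columns: alarms) -- hypothetical.
event_alarms = np.array([[1, 1, 0, 0],
                         [0, 0, 1, 1]])
received = [1, 1, 0, 0]
print(fitness([1, 0], received, event_alarms))   # -W + w3: the ideal answer
print(fitness([0, 1], received, event_alarms))   # penalized: unexplained alarms
```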
Table 2. Events and their corresponding characteristic sets of alarms

Event  Expected Alarms                                                       Diagnosis        Detail
1      TSP1, CB2, CB4                                                        Transformer T1   Operation O.K.
2      TNSP1, CB2, CB4                                                       Transformer T1   TSP Failure
5      TSP2, CB3, CB5                                                        Transformer T2   Operation O.K.
23     TSP6, CB21, CB22, CB25, BSOVPB5                                       Transformer T6   CB23 Failure
33     LPP1, CB7, CB11                                                       Line L1          Operation O.K.
52     LPP5, CB19, CB33, CB39, BSOVPB8                                       Line L5          CB32 Failure
61     LPP8, CB30, CB40                                                      Line L8          Operation O.K.
70     BSPA2, CB17, CB17, CB18, BSOVPA2                                      Bus A2           Operation O.K.
93     BSOVPB2, CB4, CB5, CB11, CB12, CB27, CB28, TNSP1, TNSP2, ExtL1D, ExtL2D, ExtL3D, ExtL4D, BSOVPB1    Bus B2    BSP Failure
104    BSPB4, CB11, CB13, CB20, BSOVPB4                                      Bus B4           Operation O.K.
115    BSPB5, CB24, CB25, CB26, CB27, CB39, ExtL7D, BSOVPB5                  Bus B5           CB29 Failure
128    BSPB8, BSOVPB8, CB32, CB33, CB39                                      Bus B8           Operation O.K.
Several parameters of the genetic algorithm can be modified to improve its performance and adapt it to the particular characteristics of each problem. The GA was adjusted based on exhaustive tests, resulting in the following settings: population = 1200; generations = 100; crossover rate = 0.75; mutation probability = 0.02. The maximum number of generations was chosen so that the algorithm can obtain the best possible answer even in cases where it has difficulty finding the appropriate fitness value. However, the use of so many generations implies a longer processing time, even in cases in which the algorithm quickly finds the most appropriate solution. To prevent the algorithm from always running until the last generation, two different stopping criteria were used. In the first one, the algorithm stops when the fitness value reaches F = −W + 1 × w3, because this is the ideal answer to the simplest possible case, in which all alarms are explained by a single event and the vectors [ΔA] and [ΔB] become zero. The second criterion is fitness value stagnation, which consists in
stopping the algorithm when the best fitness value found does not change for 20 generations, indicating that it has already been obtained.
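A sketch of the generational loop with the two stopping criteria described above is given below. `evolve_population` and `best_fitness` are hypothetical placeholders for the GA operators (selection, crossover with rate 0.75, mutation with probability 0.02) applied to the 1200-individual population.

```python
def run_ga(population, evolve_population, best_fitness,
           W=1000.0, w3=1.0, max_generations=100, stagnation_limit=20):
    ideal = -W + 1 * w3                     # all alarms explained by one event
    best, unchanged = float("inf"), 0
    for _ in range(max_generations):
        population = evolve_population(population)
        current = best_fitness(population)
        if current < best:
            best, unchanged = current, 0
        else:
            unchanged += 1                  # fitness stagnation counter
        if best <= ideal:                   # first stopping criterion
            break
        if unchanged >= stagnation_limit:   # second stopping criterion
            break
    return population, best
```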
5 Results

Some tests applied to the transformer GRNN network are shown in Table 3, which gives the protection classification, selective or non-selective, for each alarm set.

Table 3. Obtained results for the transformer GRNN

Event  Expected Alarms                   Diagnosis  GRNN Reply (SP / NSP)
16     51Np                              PNS        0.0018 / 0.9977
19     63Vs, 63C, 51Np                   PS         1.000 / 0.000
48     51P, 51Np                         PNS        0.0018 / 0.9982
67     63Vs, 63C, 51D                    PS         1.000 / 0.000
80     51D, 51Np                         PNS        0.0018 / 0.9982
97     63C, 51D, 51P                     PS         0.9995 / 0.0005
123    87, 63Vs, 63C, 51D, 51P, 51Np     PS         1.000 / 0.000
Table 4. Obtained results (GA)

Case  Received Alarms                                                                    Events that explain the Received Alarms   Results obtained
1     TSP2, CB3, CB5                                                                     E5                                        E5
2     TSP6, CB21, CB22, CB25, BSOVPB5                                                    E23                                       E23
3     BSPA2, CB17, CB17, CB18, BSOVPA2                                                   E70                                       E70
4     BSPB8, CB31, CB32, CB34, CB35, CB39, CB40, BSOVPB8, BSOVPB7                        E131                                      E131
5     CB2, CB4                                                                           E1 (missing TSP1)                         E1 or E2
6     LPP8, CB30                                                                         E61 (missing CB40)                        E61
7     LPP1, CB20, BSOVPB4                                                                E36 (missing CB7 and CB13)                E36
8     BSOVPB2, CB4, CB5, CB11, CB12, CB27, CB28, TNSP1, TNSP2, ExtL1D, ExtL2D, ExtL3D    E93 (missing ExtL4D, BSOVPB1)             E86 or E93
9     BSPB8, BSOVPB8, CB32                                                               E128 (missing CB33, CB39)                 E128
10    BSPB8, CB24, CB25, CB26, CB27, CB31, CB32, CB34, CB35, CB39, CB40, BSOVPB8, BSOVPB7, BSPB5, BSOVPB5, ExtL7D    E131 and E115    E131 and E115
11    LPP1, CB7, CB11, ExtL1D                                                            E33 (one false alarm)                     E33 and E34
12    BSPA2, CB16, CB17, CB18, PSSTA2, BSOVPB8                                           E70 (one false alarm)                     E70
13    LPP1, CB7, CB11, CB13, CB20, BSPB4, BSOVPB4                                        E33 and E104 or E36 and E104              E33 and E104 or E36 and E104
The proposed method was analyzed on several cases, but only 13 cases and their results are presented in Table 4. An analysis of the cases follows:
1 to 4: simple cases in which the received alarms are identical to the alarm set of an event in the GA database.
5 to 9: in these cases at least one alarm is missing. Despite the missing alarms, the algorithm correctly identified each event.
10: two simultaneous events properly justify the received alarms.
11: here a single event does not completely justify the received alarms, leaving one unanswered alarm.
In the GA implementation the criterion of a small probability of a received alarm being false was used, so this case was diagnosed as a two-event combination.
12: in this case, the alarm set expected by the explaining event contains one alarm less than the received set.
13: two answers, with two events each, cover all received alarms.
The GA convergence time was different for each studied case. In the simple tests, where the received alarms were identical to those of a single event, the stopping criterion reached was the smallest possible fitness value, requiring a shorter processing time. In the more complex tests the stopping criterion reached was fitness value stagnation, requiring a longer processing time to converge.
6 Conclusions

This work presented a methodology to solve the alarm processing and fault diagnosis problems at the system control level through the integration of two techniques: Artificial Neural Networks (ANNs) and Genetic Algorithms (GA). The neural networks were used to infer, based on the relay tripping signals, whether the equipment protection operated in a selective or non-selective way. Through this strategy we were able to significantly reduce the number of messages associated with the protection relays. The GA was used to represent the protection philosophy as a whole, linking the GRNN network outputs with the circuit breakers; through this, we determined how the protections of the network equipment isolate the defect during a fault. The results show that the proposed method is promising, because it is able to deal with the uncertainties inherent to the problem and handles the possibility of simultaneous faults in a natural way.
References

1. Cardoso Jr., G.: Estimação da Seção em Falta em Sistemas Elétricos de Potência via Redes Neurais e Sistemas Especialistas Realizada em Nível de Centro de Controle. Tese (Doutorado em Engenharia Elétrica), Universidade Federal de Santa Catarina, Florianópolis, p. 176 (2003)
2. Fu, S., et al.: An Expert System for On-Line Diagnosis of System Faults and Emergency Control to Prevent a Blackout. In: IFAC Control of Power Plants and Power Systems – SIPOWER 1995, Cancun, Mexico, pp. 303–308 (1995)
3. Vale, M.H.M., et al.: Sta – Alarm Processing System. In: VIII Simpósio de Especialistas em Planejamento da Operação e Expansão Elétrica – VIII SEPOPE, Brasília, DF (2002)
4. Specht, D.F.: A General Regression Neural Network. IEEE Transactions on Neural Networks 2(6), 558–576 (1991)
5. Wen, F., Han, Z.: Fault section estimation in power systems using a genetic algorithm. Electric Power Systems Research 34, 165–172 (1995)
6. Tanomaru, J.: Motivação, fundamentos e aplicações de algoritmos genéticos. In: II Congresso Brasileiro de Redes Neurais, Curitiba, PR, Brasil (1995)
7. Wen, F.S., Chang, C.S., Srinivasan, D.: Probabilistic approach for fault-section estimation in power systems based on a refined genetic algorithm. IEE Proc. Gener. Transm. Distrib. 144(2), 160–168 (1997)
8. Wen, F.S., Chang, C.S.: Tabu search approach to alarm processing in power systems. IEE Proc. Gener. Transm. Distrib. 144(1), 160–168 (1997)
Using Data from an AMI-Associated Sensor Network for Mudslide Areas Identification Cheng-Jen Tang and Miau Ru Dai The Graduate Institute in Communication Engineering Tatung University, Taipei, Taiwan
[email protected],
[email protected]
Abstract. A typhoon can produce extremely powerful winds and torrential rain. On August 8th, 2009, Typhoon Morakot hit southern Taiwan. The storm brought a record-setting rainfall: nearly 3000 mm (almost 10 feet) accumulated in 72 hours. Heavy rain changes the stability of a slope from a stable to an unstable condition. Mudslides happened, caused devastating damage to several villages, and buried hundreds of lives. In most mudslide-damaged residential areas, the electricity equipment, especially the electricity poles, is tilted or moved. Since the location and status of each electricity pole are usually recorded in the AMI (Advanced Metering Infrastructure) MDMS (Meter Data Management System), the AMI communication network is a substantial candidate for constructing a mudslide detection network. To identify the possible mudslide areas from the numerous gathered data, this paper proposes a data analysis method that indicates the severity, together with a mechanism for detecting the movement. Keywords: Advanced Metering Infrastructure, Mudslide Detection, Sensor Network, Meter Data Management System, Data Analysis, Disaster Prevention System.
1 Introduction

A typhoon has a large low-pressure center and numerous thunderstorms that produce strong winds and heavy rains. The strong winds and heavy rains usually cause landslides and mudslides. In Taiwan, over a dozen typhoons sometimes visit in summer. In fact, the name Typhoon means "the deadly storm from Taiwan." On August 8th, 2009, Typhoon Morakot hit southern Taiwan. It is the deadliest typhoon that has swept Taiwan in recorded history. The storm brought a record-setting rainfall: nearly 3000 mm (almost 10 feet) accumulated in 72 hours. The rainfall-spawned mudslides caused devastating damage to several villages and buried hundreds of lives. No warning mechanism was working when the mudslide struck. Under the threat of such natural disasters, how well the warning system performs decides life and death. The sensor system is one of the keys to predicting a disaster accurately. This paper discusses the techniques of such a sensor system and compares their cost and
performance. One of the techniques uses the operating principle of a mechanical mouse as its basis. Another uses image processing to obtain the differences between two images. Another key point for prediction is the communication network. With the rapidly emerging Advanced Metering Infrastructure (AMI), the AMI communication network is a substantial candidate for constructing the mudslide detection network, since the location and status of each electricity pole are usually recorded in the AMI MDMS (Meter Data Management System). With the location and status data of each electricity pole, a mudslide warning system performs the detection and prediction processes. Quantifying the result of these processes defines the degree of possible damage, and the mudslide warning system triggers different alarms according to the predicted degree of possible damage.
2 Background

Global warming has changed the climate around the globe. An increase in global temperature causes sea levels to rise and changes the amount and pattern of precipitation. These phenomena also cause many extreme weather events, and observation of such events shows that the degree of damage increases nonlinearly. The typhoon is an example: an extremely strong tropical cyclone, it has a large low-pressure center and numerous thunderstorms that produce strong winds and heavy rains, which usually cause landslides and mudslides. Around the globe, landslides and mudslides are serious geological hazards affecting people and causing significant damage every year. Therefore, many researchers have developed solutions to decrease the damage. Many detection and identification techniques have been developed, including image enhancement [1, 2, 3], image differencing [4, 5, 6], vegetation index differencing [7], image classification [3, 8] and image registration [9]. First, image enhancement emphasizes landslides or mudslides in Earth observation images, but requires human experience and knowledge of the study area for visual interpretation. Second, image differencing is simple and straightforward and its results are easy to interpret, but it cannot provide a detailed change matrix and requires the selection of thresholds. Third, vegetation index differencing emphasizes differences in the spectral response of different features and reduces the impact of topographic effects and illumination; however, this technique also enhances random or coherent noise. Fourth, image classification minimizes the impact of atmospheric, sensor and environmental differences between multi-temporal images, but requires the selection of sufficient training sample data for classification. Finally, image registration detects landslide/mudslide movements with sub-pixel accuracy, but has a high computational cost in terms of CPU time. Some researchers have proposed detection techniques to replace human interpretation. A simple method of change detection and identification uses a local similarity measure based on mutual information and image thresholding [11]. Mutual information is used to measure the similarity, so it can be used to detect landslides and mudslides from different images. This method can be used to identify the most significantly changed areas.
Detecting the changed pixels is fully automatic, but the post-processing for identifying coherent changed regions requires the choice of thresholds, which is semi-automatic. A disadvantage of this change detection method is that it cannot be used to detect and quantify small landslides when the size of the imagery used is too large compared with the landslide areas. The ratio of the approximate size of the landslide to the size of the imagery is shown in Table 1.

Table 1. Comparison of the ratio of the approximate landslide area to the size of the imagery used (Reprinted from [11])
Another work introduces a closely related mudslide detection technique: slip surface localization in wireless sensor networks [10]. This method uses a sensor network consisting of sensor columns deployed on hills to detect the early signals preceding a mudslide. The detection is performed through a three-stage algorithm. First, sensors detect small movements consistent with the formation of a slip surface, separating the sliding part of a hill from the static one; in other words, this stage determines whether a slip surface has formed. Once the sensors find the presence of such a surface, the method conducts a distributed voting algorithm to classify the subset of sensors that report movements. Second, the moved sensors self-localize through a trilateration mechanism and the displacements are calculated. Finally, the direction of the displacements and the locations of the moved nodes are used to estimate the position of the slip surface. This information, together with collected soil measurements, is subsequently passed to a Finite Element Model that predicts whether and when a landslide will occur. The sensor network consists of a collection of sensor columns placed inside vertical holes drilled during the network deployment phase and arranged on a semi-regular grid over the monitored area. The size of the grid depends on the characteristics of the site where the sensors are deployed. Each sensor column has two components: the sensing component, which is buried underground and contains all the instruments, and the computing component, which stays above ground and contains the processor and radio module. Due to the system's power constraints, a two-tier approach is used. The lower tier consists of local detection based on local measurements at each strain gage. Once a local detection decision is made, the network activates the second-tier detection, which involves the collaboration of multiple nodes and takes into account the correlation induced by the global phenomenon (the formation of the slip surface).
Fig. 1. Landslide Warning System (Reprinted from [10])
Fig. 2. Sensor Column (Reprinted from [10])
Local detection is based on an outlier detection algorithm that detects statistically large length changes based on an empirical characterization of the null hypothesis distribution P0(ΔI). Initially, a simple Gaussian model characterized by the estimated mean m_I and standard deviation σ_I is assumed. A local detection is made if |ΔI_i − m_I| / σ_I ≥ Q_G(α/2), where Q_G(α/2) is the (1 − α/2)-quantile of a standard Gaussian distribution with zero mean and unit variance, and α is a design parameter that dictates the detection and false-alarm trade-offs of the local detection algorithm. The empirical mean m_I and standard deviation σ_I are estimated from prior measurements, either only locally at the node or within the node's neighborhood.
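A minimal sketch of this local detection rule is given below; the prior length changes used to estimate m_I and σ_I are illustrative.

```python
# A strain-gage reading is flagged when its standardized length change exceeds
# the (1 - alpha/2) Gaussian quantile, as in the rule quoted above.
from statistics import NormalDist, mean, stdev

def local_detection(delta_I, prior_deltas, alpha=0.05):
    m_I, sigma_I = mean(prior_deltas), stdev(prior_deltas)
    q = NormalDist().inv_cdf(1 - alpha / 2)          # Q_G(alpha/2)
    return abs(delta_I - m_I) / sigma_I >= q

prior = [0.1, -0.2, 0.05, 0.0, -0.1, 0.15, -0.05]    # made-up prior length changes
print(local_detection(1.8, prior))                   # large change -> detection
print(local_detection(0.1, prior))                   # small change -> no detection
```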
When local positive detections are made, the neighboring strain gages collaborate to evaluate whether the local decisions are consistent with the hypothesis that a slip surface has formed. This collaborative signal processing reduces the false alarms generated by random local movements. Once a slip surface has formed, the method first decides which sensors are above the slip surface (and hence have moved) and which are below it (and hence have not moved). This is regarded as a classification problem, and a simple distributed heuristic is developed based on the following insights:
• The distance between two nodes below the slip surface should not change, since both of them have not moved;
• The distance between two points across the slip plane is likely to change;
• The distance between two nodes above the slip surface sees only a small change, since they moved somewhat together; and
• The nodes located closest to the known rigid part of the structure are unlikely to move. These nodes are referred to as the anchor nodes.
The major problem of the mentioned technique is the deployment cost. To obtain a "good" prediction, the density of the sensors must be high enough. In other words, if a broad area requires monitoring, the investment must be sky-high. Ironically, mudslides usually happen in areas that draw little economic interest. Therefore, to protect people living in these areas, the detection devices must be inexpensive. Furthermore, the mounting of these devices must be associated with the public utility system to reach where people are living.
3 System Architecture

A warning system includes sensing, analysis and communication. First, the data collected by the sensors are stored in a database. The system then analyzes the data and defines the tilted severity; the information computed by the analysis is also stored in the database. The warning system sends different warning signals according to the information computed by the analysis, and the warning signals are planned to be transmitted over the AMI-associated sensor network.

3.1 AMI-Associated Sensor Network

Recently, many advanced regions such as Europe, America, and Japan have committed themselves to developing the Smart Grid (SG). The SG is an integration of electricity usage monitoring, generation and distribution automation, meter data management and efficiency improvement, and it aims at energy saving and carbon reduction. The SG has functions of energy transmission, energy management and two-way communication, which involve energy generation, transmission, distribution and the customer site. The Advanced Metering Infrastructure (AMI) is regarded as the fundamental and key technology enabling the SG. The AMI has a communication network which consists of many advanced metering and sensing devices, including smart meters. Smart meters provide interval usage (at least hourly) and are collected at least daily. According to the analysis from Gartner, an AMI system consists of the following process steps: data acquisition, data transform, data cleansing, data processing, information storage/persistency, and information delivery/presentment.

Table 2. AMI process steps and involved technologies [Source: Gartner]

Process Step                           Involved Technologies
1. Data acquisition                    Meter device
2. Data transform                      Broadband over Power Lines (BPL), Wireless, RF, Satellite
3. Data cleansing                      Validation Editing Estimation (VEE) Tools, Meter Data Management System (MDMS)
4. Data processing                     MDMS
5. Information storage/persistency     MDMS
6. Information delivery/presentment    Portals, Web Services, Electronic Data Interchange (EDI), MDMS
AMI architecture is shown in Fig. 3. The blue lines represent the power lines that connect the transforming stations, utility devices, and customer sites. Each customer site has a meter, a meter interface unit (MIU), and a communication module. The information of power consumption is sent to the data collector through the communication module. The data collectors also use the same communication channel to send back messages. Each smart meter is connected to an energy management device. The smart meter communicates with the control center to manage the energy consumption at its location. The communication lines at the customer sites are presented in green.
Fig. 3. Advanced Metering Infrastructure
For storing the collected data, there is a meter data management system (MDMS) that communicates with the customer information portal, the customer information management system, the dispatch automation system (DAS), and the data collector. The status of each device within an AMI installation is sent to the data collector and then forwarded to the MDMS. This study proposes an inexpensive detection device that is mounted on each electricity pole. The movement or tilting of an electricity pole triggers the mounted detection device to send a message back to the data collector through the AMI communication network. This displacement message is then forwarded to the MDMS, which activates the mudslide analysis module for further calculations.

3.2 Detection Method

In most mudslide-damaged residential areas, the electricity equipment, especially the electricity poles, is tilted or moved. In order to obtain the displacement, a movement detector is attached to each pole. A simple method is to install a camera on a pole; each camera photographs the neighboring pole, and comparing the images through image processing techniques identifies whether a movement has happened. The weather affects the performance of this simple detection method: for example, heavy fog hampers the image identification, and unclear images increase the detection error. Adding a laser light projector above the camera improves this situation. Laser light is usually spatially coherent, which means that the light is emitted in a narrow, low-divergence beam. Coherent light typically means the source produces light waves that are in step.
Fig. 4. The camera photographs the neighboring pole
Fig. 5. The camera with an added laser light
Another detection method, instead of using an expensive sensor device, uses a portion of a mechanical mouse as the detection device. The architecture is shown in Fig. 6. When a mouse is moved, the ball inside rolls. This motion turns two axles, and each axle spins a slotted wheel. A light-emitting diode (LED) sends a path of light on one side of each wheel through the slots to a receiving phototransistor on the other side. The pattern of light and dark is translated into an electrical signal, which reports the position and the moving speed of the mouse. In short, the operation of a mechanical mouse has the following five steps:
• Moving the mouse turns the ball.
• X and Y rollers grasp the ball and transfer the movement.
• Optical encoding disks include light holes.
• Infrared LEDs shine through the disks.
• Sensors gather the light pulses and convert them to X and Y velocities.
Fig. 6. Operations of a mechanical mouse
To make this device, one needs to attach a small piece of iron or other material so that one side of the ball always faces down, and then mount this "tilt-aware mouse" on an electricity pole. If the pole is moved or tilted, the ball rolls and displacement messages are sent.
Fig. 7. Detection of the Tilted Direction
Fig. 8. Detection of the Tilted Angle
According to how the rollers turn, the tilted direction is obtained. If Roller A turns to the left, the ball is moving forward, as shown in Fig. 7. To obtain the tilted angle, the ball movement is decomposed into the x-z plane and the y-z plane. In the x-z plane, the radian represents the angle of turning left or right, and the corresponding vector is OD. In the y-z plane, the radian represents the angle of turning forward or backward, and the corresponding vector is OC. The two vectors OD and OC construct a plane ODCP with the vector sum OP. The angle between OP and the z-axis is the tilted angle of the electricity pole.
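The combination of the two roller readings can be sketched as follows. The geometry below is one common way to combine two plane inclinations into a single tilt angle and heading; the paper's exact construction of the plane ODCP is not reproduced.

```python
# Combine the x-z (left/right) and y-z (forward/backward) plane angles into
# the tilted angle (between the combined axis vector and the z-axis) and the
# tilted direction (heading in the x-y plane).
import math

def tilt_from_rollers(angle_xz_rad, angle_yz_rad):
    x, y, z = math.tan(angle_xz_rad), math.tan(angle_yz_rad), 1.0
    tilt_angle = math.atan(math.hypot(x, y) / z)       # angle to the z-axis
    tilt_direction = math.atan2(y, x)                  # heading in the x-y plane
    return tilt_angle, tilt_direction

angle, direction = tilt_from_rollers(math.radians(5), math.radians(3))
print(math.degrees(angle), math.degrees(direction))    # ~5.8 and ~31 degrees
```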
4 Data Analysis The displacement information of the tilted electricity pole is sent to an AMI data collector periodically. The data collector then forwards the information to the MDMS. MDMS then inserts a displacement record in the Pole_Displacement table. A displacement record has the following fields: time, tilted direction, tilted angle, difference of tilted angle, checking bit, and the tilted severity of an electricity pole. { t1 , t 2 , t3 ,..., t n } denotes the set of recording time of each displacement. an denotes the tilted angle. The differences between the continued tilted angles are obtained by an − an −1 = d n and an −1 − an − 2 = d n −1 . d k denotes a difference between two continuous tilted angles of a pole. When d k is not zero, the checking bit ck is set and the tilted severity is also calculated according to d k . If d k ≤ 2m , the tilted severity sk is m. If ck-1 is set, ck is also set. If ck is set and dk is zero, sk is set to sk-1. The pairs of checking bit and tilted severity of all electricity poles constructs the alphabets in the proposed mudslide analysis system. The neighboring poles are assigned with adjacent numbers. The pair ci , si of time ti of all numbered electricity poles constructs the Si according to the pole numbers. The mudslide analysis system then builds a suffix tree STi of two adjacent Si and Si+1. The longest repeating pattern
The suffix tree is a data structure used to process stream data. For example, consider the string "mississippi".
Fig. 9. The suffix tree of “mississippi”
The main reason for using the suffix tree as the analysis model is its well-known computational complexity of O(m). Several suffixes start with the same characters, such as "i", "m", "p", and "s"; the suffixes that start with 'issi' also start with 'is'. In such situations these suffixes share a node in the suffix tree. In the proposed mudslide analysis, this study assumes that the repeating patterns whose c_i bits are set indicate where a mudslide will probably happen.
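The longest repeating pattern can be read off a suffix tree in linear time; for brevity, the sketch below uses a simple sorted-suffix approach (quadratic in the worst case) that returns the same answer and only illustrates the idea. The input string stands for the concatenated (c_i, s_i) symbols of two adjacent pole sequences.

```python
def longest_repeated_pattern(s):
    suffixes = sorted(s[i:] for i in range(len(s)))
    best = ""
    for a, b in zip(suffixes, suffixes[1:]):       # common prefixes of neighbors
        k = 0
        while k < min(len(a), len(b)) and a[k] == b[k]:
            k += 1
        if k > len(best):
            best = a[:k]
    return best

print(longest_repeated_pattern("mississippi"))     # "issi"
```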
5 Conclusion

Mudslides and landslides have consumed many lives. The AMI communication network is able to reach every residence, which makes it a substantial candidate for constructing a mudslide detection network. The location and status of each electricity pole are recorded in the AMI MDMS. With an extra movement detector attached to each pole, an AMI-associated sensor network is able to identify the possible mudslide areas from the numerous gathered data. To obtain a "good" prediction, the density of the sensors must be high enough; in other words, if a broad area requires monitoring, the investment must be sky-high. Ironically, mudslides usually happen in areas that draw little economic interest. Therefore, to protect people living in these areas, the detection devices must be inexpensive. Furthermore, the mounting of these devices must be associated with the public utility system to reach where people are living. Therefore, this paper proposes an inexpensive detection device associated with the AMI network, and a mudslide analysis method that indicates the severity and urgency of a mudslide. The proposed device and method are still a concept rather than a realization. Future work is to install such a system in an AMI pilot installation to prove this concept.
References

1. Whitworth, M.C.Z., Giles, D.P., Murphy, W.: Airborne remote sensing for landslide hazard assessment: a case study on the Jurassic escarpment slopes of Worcestershire, UK. The Quarterly Journal of Engineering Geology and Hydrogeology 38(2), 197–213 (2005)
2. Ostir, K., Veljanovski, T., Podobnikar, T., Stancic, Z.: Application of satellite remote sensing in natural hazard management: the Mount Mangart landslide case study. International Journal of Remote Sensing 24(20), 3983–4002 (2003)
3. Nichol, J., Wong, M.S.: Satellite remote sensing for detailed landslide inventories using change detection and image fusion. International Journal of Remote Sensing 9, 1913–1926 (2005)
4. Cheng, K.S., Wei, C., Chang, S.C.: Locating landslides using multi-temporal satellite images. Advances in Space Research 33, 296–301 (2004)
5. Hervas, J., Barredo, J.I., Rosin, P.L., Pasuto, A., Mantovani, F., Silvano, S.: Monitoring landslides from optical remotely sensed imagery: the case history of Tessina landslide, Italy. Geomorphology 1346, 1–13 (2003)
6. Rosin, P.L., Hervas, J.: Remote sensing image thresholding methods for determining landslide activity. International Journal of Remote Sensing 26, 1075–1092 (2005)
7. Lin, W.T., Chou, W.C., Lin, C.Y., Huang, P.H., Tsai, J.S.: Vegetation recovery monitoring and assessment at landslides caused by earthquake in Central Taiwan. Forest Ecology and Management 210, 55–66 (2005)
8. Nichol, J., Wong, M.S.: Detection and interpretation of landslides using satellite images. Land Degradation and Development 16, 243–255 (2005)
9. Yamaguchi, Y., Tanaka, S., Odajima, T., Kamai, T., Tsuchida, S.: Detection of a landslide movement as geometric misregistration in image matching of SPOT HRV data of two different dates. Int. J. Remote Sensing, preview article, 1–12 (2002)
10. Terzis, A., Anandarajah, A., Moore, K., Wang, I.-J.: Slip Surface Localization in Wireless Sensor Networks for Landslide Prediction. In: IPSN 2006, Nashville, Tennessee, USA, April 19–21 (2006)
11. Khairunniza-Bejo, S., Petrou, M., Ganas, A.: Landslide Detection Using a Local Similarity Measure. In: Proceedings of the 7th Nordic Signal Processing Symposium, NORSIG (2006)
12. Mouse (computing). Wikipedia, http://en.wikipedia.org/wiki/Mechanical_mouse#Mechanical_mice (accessed September 24, 2009)
Semantic Web Service Composition System Supporting Multiple Service Description Languages Nhan Cach Dang1, Duy Ngan Le2, Thanh Tho Quan1, and Minh Nhut Nguyen3 1 Faculty of Computer Science and Engineering Hochiminh City University of Technology (HCMUT), Vietnam
[email protected],
[email protected] http://www.hcmut.edu.vn/en/ 2 Etisalat-BT Innovation Centre (EBTIC), Khalifa University of Science, Technology and Research (KUSTAR), UAE
[email protected] http://www.ku.ac.ae/sharjah/index.php?page=ebtic 3 School of Computer Engineering, Nanyang Technological University (NTU), Singapore
[email protected] http://www.ntu.edu.sg
Abstract. Semantic Web services have become a core technology for developing business operations on the Web. Web service composition is an important activity to leverage the use of semantic Web services. Several composition systems have been proposed to meet this need. However, these systems only support a particular Web service description language such as OWL-S or WSMO, thus limiting the capability of invoking the services available from various sources over the Internet. This paper introduces a Web service composition system which supports multiple service description languages. From a practical point of view, we have implemented the system to support the two most popular service description languages, i.e., OWL-S and WSMO. However, our design is left open to support additional languages if needed. The composition algorithm employed in our system is based on an existing composition algorithm for OWL-S Web services, extended to support WSMO. An example is introduced to present how the system works. Keywords: Semantic Web service, composition, OWL-S, and WSMO.
1 Introduction

Semantic Web service [1], an enhancement of current Web services by employing semantics to describe the service, has become an important technology in e-business due to its strength in supporting discovery, composition, and monitoring. This paper focuses on composition as it is an important and complex activity to leverage the use of semantic Web services. Web service composition is a process that composes multiple advertised Web services to satisfy a request. Advertised Web services, which are stored in a repository, are provided by Web service providers who developed the services.
Researchers have developed several composition systems to meet this requirement. However, current composition systems have the drawback that they only support Web services based on a particular language such as OWL-S or WSMO. In the real world, the composition of advertised Web services should be able to serve the requester even though these advertised Web services are based on different description languages. In particular, a composition of OWL-S and WSMO Web services can satisfy a requirement based on OWL-S or WSMO. A composition system that supports multiple advertised Web service description languages is the motivation behind this work; in particular, our system supports Web services based on OWL-S and WSMO. A system for composing OWL-S Web services was introduced in [2], and a system for composing WSMO Web services was introduced in [3]. In this paper, OWL-S and WSMO have been chosen as they are the two most popular languages. The algorithm to compose OWL-S and WSMO services is an extension of the composition algorithm for OWL-S Web services described in [2]. Other languages such as METEOR-S and WSDL-S will be considered as part of future work. The rest of the paper is organized as follows. Section 2 introduces a brief background of OWL-S and WSMO. Section 3, which is the core of the paper, presents the composition algorithm. Section 4 introduces a scenario to illustrate how the algorithm works. Experimental results are presented in Section 5. Related work is presented in Section 6, followed by the conclusion in Section 7.
2 Background

2.1 OWL-S: Semantic Markup for Web Services

Upper ontology for services. OWL-S defines a semantic Web service via four basic classes, namely Service, ServiceProfile, ServiceModel, and ServiceGrounding. The class Service provides an organizational point for a Web service. It has the properties presents, describedBy, and supports, whose ranges are ServiceProfile, ServiceModel, and ServiceGrounding, respectively. Generally, the ServiceProfile provides the information needed for discovery, composition, and interoperation purposes, while the ServiceModel and ServiceGrounding are used together for the invocation purpose once the services are found. Therefore, in some respects, the ServiceProfile is the most important class, because Web services are useless if they cannot be discovered. OWL-S is based on a set of Web Ontology Language (OWL) ontologies to declare these classes, and therefore OWL is the core of the specification.

The OWL Web Ontology Language. The Web Ontology Language (OWL) [4] was designed by the W3C and is the standard for ontology description in the Semantic Web. It is used when the information contained in documents needs to be processed by applications, as opposed to situations where the content only needs to be presented to humans. An OWL ontology may include descriptions of classes, instances of classes, and properties, as well as range and domain constraints on properties. It may also contain various types of relationships between classes or between properties. A class, which is also called a concept in an ontology, defines a group of individuals that belong together. Individuals, also called instances, correspond to actual entities
that can be grouped into these classes. Properties are used to describe relationships between individuals or from individuals to data values.

2.2 WSMO: Web Service Modeling Ontology

WSMO Elements. The core elements of WSMO [5] are Ontologies, Goals, Web Services, and Mediators. The terminology used by every WSMO element is provided by ontologies, which consist of the following parts: Non-Functional Properties, Imported Ontologies, Used Mediators, Axioms, Concepts, Relations, Functions and Instances. Just as OWL-S is based on OWL, WSMO is based on the Web Service Modeling Framework (WSMF) [6]. A Goal, which is similar to a requested Web service described in OWL-S, expresses what the user wants, whereas a Web Service description expresses what the service provides. There are four different types of mediators, namely ooMediators, ggMediators, wgMediators and wwMediators.

Web Service Modeling Framework (WSMF). The Web Service Modeling Language (WSML) is a language for the specification of ontologies and different aspects of Web services [7]. As an ontology language, WSML is similar to OWL: it contains information about classes, instances of classes, and properties, as well as range and domain constraints on properties. It may also contain various types of relationships between classes or between properties. One of the major differences between OWL and WSML is that OWL is XML-based while WSML is not.
3 OWL-S and WSMO Composition Algorithm As mentioned in Section 1, a composition system for OWL-S Web services and a composition system for WSMO Web services were introduced in [2] and [3], respectively. The focus of this paper is on composing OWL-S and WSMO Web services together. The algorithm for this composition is an extension of the algorithm described in [2], which composes OWL-S Web services only. The composition algorithm introduced by Bao Duy et al. [2] is a progressive AI-planning-like algorithm which proceeds in a recursive depth-first manner. The algorithm composes the advertised Web services by connecting their inputs and outputs based on their similarity. The results are expressed in the form of directed multigraphs of advertised Web services and inter-connecting edges. The similarity between the inputs and outputs of Web services is the core of the algorithm. The 'best solution' is the one with the highest similarity between matching connections. Concept similarity (CS), which is the core of the algorithm, measures the similarity between two concepts from the same ontology or from different ontologies. The concept similarity algorithm includes four main components: syntactic similarity, properties similarity, context similarity, and neighbourhood similarity. The final similarity is the weighted average of the four components, as in the following formula:

$$CS = \frac{w_s \cdot synSim + w_p \cdot proSim + w_c \cdot conSim + w_n \cdot neiSim}{w_s + w_p + w_c + w_n}$$
where $w_s$, $w_p$, $w_c$, and $w_n$ are weights defined by users. Details of the concept similarity algorithm and of how to define $w_s$, $w_p$, $w_c$, and $w_n$ were introduced in [8]. This algorithm is applied to matching two concepts from the same or different OWL ontologies. The proposed algorithm is an extension of the algorithm described in [2]. In other words, the two systems are the same, but they differ in measuring concept similarity, since the proposed system supports both the OWL-S and WSMO specifications. Therefore, the concept similarity measure introduced above is adopted to measure the similarity between two concepts from OWL and WSML ontologies, since OWL and WSML ontologies are used to describe OWL-S and WSMO Web services, respectively. In order to employ [8] to measure concept similarity between OWL and WSML ontologies, we need to extract from WSML the information that defines the concept name, concept description, properties, context and neighborhood relationships. Figure 1 presents a portion of an example WSML ontology. The ontology has several concepts such as Vehicle, Airplane, and Train. The Airplane and Train concepts are subconcepts of the Vehicle concept, and the Train concept has properties.
Fig. 1. A portion of WSML specification
Fig. 2. OWL and WSML concept similarity measurement example
Figure 2 presents a matching example between two concepts from an OWL and a WSML ontology, respectively. Each component in OWL is matched against the corresponding component in WSML, which was described in Figure 1. Figure 2 does not present a full matching between the two concepts but only some of the components, namely syntactic, property, and neighborhood similarity, since context similarity is not presented here.
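The weighted-average computation of CS can be sketched in a few lines of code. The following Python fragment is only an illustrative sketch, not the authors' implementation; the component similarity values and the equal default weights in the example are hypothetical.

def concept_similarity(syn_sim, pro_sim, con_sim, nei_sim,
                       ws=1.0, wp=1.0, wc=1.0, wn=1.0):
    # Weighted average of the four component similarities, each in [0, 1].
    return (ws * syn_sim + wp * pro_sim + wc * con_sim + wn * nei_sim) / (ws + wp + wc + wn)

# Hypothetical component values for an OWL/WSML concept pair:
print(concept_similarity(0.9, 0.6, 0.5, 0.8))   # 0.7 with equal weights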
4 A Composition Scenario The composition scenario in this section, which illustrates how the proposed system works, is elaborated as follows: • Assume a customer is an end-user who would like to travel from New York, USA to Hue, Vietnam to attend the second ACIIDS conference. He uses the proposed system to search for the most suitable composition of travel service providers for the trip. He needs to key in information such as his personal information, credit card number, and so on via the Graphical User Interface (GUI) of the proposed system. • Travel service providers are companies providing travel services by airplane, train, and bus. They advertise their service information through the proposed system's repository. These services contain all information about the travel, including information about locations (start and destination) and price, via Web services. An advertised Web service includes the input, output, and operation of the service profile, which were created based on ontologies. • When the proposed system receives the advertised information from the travel providers, it indexes and stores the information in its repository. Upon a request from a user, the proposed system searches for the most suitable composition of travel service providers. The service composition information is then returned to the user. If the user does not have the support of the proposed system, he/she must search for the providers manually. Current discovery and composition systems are only able to support Web service providers based on the same description language. The proposed system has the advantage that it overcomes the language heterogeneity problem of semantic Web services.
5 Experiment and Results As there is no standard test data or benchmark available for Web service composition, especially for composing Web services based on different specifications, we developed 10 Web service profiles based on the scenario to validate the designed and developed algorithm. This section introduces the test data, the results, and a discussion of the results. 5.1 Testing Data Among the 10 Web service profiles developed, there are five OWL-S Web services and five WSMO Web services. Four of the ten Web services were created based on the scenario given in Section 4; the other six services are from different domains (buying and selling computers). The purpose of creating such testing data is to test the ability of the system to come up with a solution.
Web service profiles. Among the four Web services created based on the scenario, three are OWL-S Web services and one is a WSMO Web service. The OWL-S Web services are presented in Table 1 and include three services, namely Train_service_from_HCM_to_Hue, Searching_location, and Flight_service_from_NewYork_to_Sydney. In the table, owl_OA is the shortcut of "http://localhost/WSComposition/Ontology/OWL-S/travel/TravelOntologyA.owl" and owl_OB is the shortcut of "http://localhost/WSComposition/Ontology/OWL-S/travel/TravelOntologyB.owl".

Table 1. OWL-S Web service profiles

1. Train_service_from_HCM_to_Hue.owl
   Input (name/type): HCM / owl_OA#HCM
   Output (name/type): Hue / owl_OA#Hue; Train / owl_OA#Train; Price / owl_OB#Price
2. Searching_location.owl
   Input (name/type): Person / owl_OB#Person; PhoneNumber / owl_OA#PhoneNumber
   Output (name/type): Location / owl_OA#Location
3. Flight_service_from_NewYork_to_Sydney.owl
   Input (name/type): NewYork / owl_OA#NewYork
   Output (name/type): Sedney / owl_OA#Sedney; Airplane / owl_OA#Airplane
Similarly, the WSMO Web services are presented in Table 2 and include two services, namely Travel_Australia and Goal, where the Goal is a requested (designed) service. In the table, wsml_OA is the shortcut of "http://localhost/WSComposition/Ontology/WSML/travel/TravelOntologyA.wsml" and wsml_OB is the shortcut of "http://localhost/WSComposition/Ontology/WSML/travel/TravelOntologyB.wsml". Some terms differ between WSMO and OWL-S:
− A precondition in WSMO is an input in OWL-S
− A post-condition in WSMO is an output in OWL-S
− A Goal in WSMO is a requested Web service in OWL-S

Table 2. WSMO Web service profiles

1. Flight_service_from_Sydney_to_HCM.wsml
   Precondition (name/type): Sedney / wsml_OA#Sedney
   Postcondition (name/type): HCM / wsml_OA#HCM; Airplane / wsml_OA#Airplane; Location / wsml_OB#Location
2. Goal.wsml
   Precondition (name/type): Person / wsml_OB#Person; PhoneNumber / wsml_OB#PhoneNumber; NewYork / wsml_OA#NewYork
   Postcondition (name/type): Hue / wsml_OA#Hue; Train / wsml_OA#Train; Price / wsml_OB#Price
Ontologies created. The above Web services were developed based on ontologies: OWL ontologies for the OWL-S Web services and WSML ontologies for the WSMO Web services, respectively. These ontologies are shown in Figure 3.
a) WSML travel ontologies; b) OWL travel ontologies
Fig. 3. Travel ontologies
5.2 Results With the above test data, the results were obtained in the form of a composition graph, as shown in Figure 4. The graph shows how to travel from New York, USA to Hue, Vietnam.
Fig. 4. Composition result
Since there is no direct flight from New York to Hue, a composition of possible flights and other vehicles is a good solution for the traveller. As shown in the result, the traveller must first fly from New York, USA to Sydney, Australia. Next, he/she must take another flight from Sydney to Ho Chi Minh City (HCM), Vietnam. Finally, he/she takes a train from Ho Chi Minh City to Hue, Vietnam, which is the destination. 5.3 Discussion The purpose of the testing is to confirm that the system is capable of performing the composition based on a set of OWL-S and WSMO service profiles. Overall, the resulting composition graph was as expected. The graph includes nodes, which are services, and inter-connections of outputs to inputs satisfying the required outputs. The connections between nodes were defined by measuring the similarity between concepts from OWL and WSML ontologies based on the four components mentioned in Section 3. Among the ten service profiles in the repository, six did not appear in the result. This is reasonable, as these services are from different domains and are not related to the scenario. Only the four services related to the scenario appeared in the result. In short, all requirements for the composition have been met by the proposed design.
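The input/output matching that produces the composition graph can be sketched as follows. This Python fragment is an illustration under our own assumptions only: the similarity function stands for the concept similarity measure of Section 3, and the 0.7 acceptance threshold and the flat dictionary representation of service profiles are hypothetical choices, not part of the paper.

def build_composition_graph(services, similarity, threshold=0.7):
    # services: list of dicts {"name": ..., "inputs": [...], "outputs": [...]}.
    edges = []
    for a in services:
        for b in services:
            if a is b:
                continue
            for out_c in a["outputs"]:
                for in_c in b["inputs"]:
                    score = similarity(out_c, in_c)
                    if score >= threshold:
                        # Edge: an output of service a can feed an input of service b.
                        edges.append((a["name"], out_c, b["name"], in_c, score))
    return edges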
6 Related Work With the emergence of the semantic Web and the importance of the Web service composition activity, several approaches have been proposed to meet the need. Web service composition algorithms can be divided into two major categories of approaches, namely Workflow-based and AI (Artificial Intelligence) planning. In a Workflow-based approach, where a composed requested Web service is predefined, advertised Web services are matched against single requested Web services. This approach has the drawback of lacking a dynamic aspect, since the workflow needs to be predefined. A workflow is a composition of tasks, and each task is considered as a requested Web service. As a consequence, the advantage of the approach is time performance, as the workflow is predefined. In contrast, the AI planning approach is more advanced in the dynamic aspect, using the inputs and outputs of Web services. Therefore, the composition of Web services does not need to be predefined. However, this approach has a disadvantage in time performance. Moreover, AI planning sometimes results in a meaningless composition. That is, even though the inputs/outputs match, the composed Web service may not represent a meaningful function. AI planning approaches can be further classified into situation calculus [9], rule-based planning [10], and other approaches, which include pre-computation [11-13], PDDL (Planning Domain Definition Language), and semi-automatic composition [14]. However, all the mentioned systems do not support semantic Web services based on different description languages. The proposed system is different from these systems as it overcomes the heterogeneity problem in the Web by supporting Web services based on both OWL-S and WSMO. A system supporting multiple languages is important
as the composition of multiple advertised Web services based on different specifications can satisfy the request.
7 Conclusion This paper has introduced a composition system that composes multiple advertised Web services to satisfy a request. The proposed system overcomes the language heterogeneity problem of current composition systems by supporting Web services based on OWL-S and WSMO, which are the two most popular semantic Web service description languages. A composition scenario was introduced to illustrate how the system works. Test data was created based on the scenario, and experiments were carried out to confirm the validity of the system. As part of future work, the system will be extended to support the METRO-S and WSDL-S Web service specifications. The extended algorithm will be developed in the same manner as the current system.
References 1. Honglei, Z., Son, T.C.: Semantic Web Services. Intelligent Systems (IEEE) 16, 46–53 (2001) 2. Tran, B.D., Tan, P.S., Goh, A.E.S.: Composing OWL-S Web Services. In: IEEE International Conference on Web Services (ICWS), Salt Lake City, Utah, USA (2007) 3. Sapkota, B., et al.: D21.v0.1 WSMX Triple-Space Computing. WSMO Working Draft (2005) 4. W3C, OWL - Web Ontology Language Overview, Organization, Editor 5. Roman, D., Lausen, H., Keller, U.: Web Service Modeling Ontology - Standard (WSMO Standard), WSMO deliverable D2 version 1.1 6. Fensel, D., Bussler, C.: The Web Service Modeling Framework WSMF. In: Electronic Commerce Research and Applications (2002) 7. Lausen, H., et al.: WSML - a Language Framework for Semantic Web Services. In: Position Paper for the W3C rules workshop, Washington DC, USA (2005) 8. Ngan, L.D., Hang, T.M., Goh, A.: Semantic Similarity between Concepts from Different OWL Ontologies. In: 2006 IEEE International Conference on Industrial Informatics, Singapore (2006) 9. Wu, D., et al.: Automatic Web Services Composition using SHOP2. In: Workshop on Planning for Web Services (2003) 10. Medjahed, B., Bouguettaya, A., Elmagarmid, A.: Composing Web services on the Semantic Web. The VLDB Journal The International Journal on Very Large Data Bases 12(4), 333–351 (2003) 11. Lecue, F., Leger, A.: Semantic Web service composition based on a closed world assumption. IEEE Computer Society, Zurich (2006) 12. Lecue, F., Leger, A.: A formal model for semantic Web service composition. Springer, Athens (2006) 13. Kwon, J., et al.: PSR: Pre-computing Solutions in RDBMS for FastWeb Services Composition Search. In: IEEE International Conference on Web Services. ICWS 2007 (2007) 14. Sirin, E., Parsia, B., Hendler, J.: Filtering and selecting semantic Web services with interactive composition techniques. IEEE Intelligent Systems 19(4), 42–49 (2004)
Forecasting Tourism Demand Based on Improved Fuzzy Time Series Model Hung-Lieh Chou1,2, Jr-Shian Chen3, Ching-Hsue Cheng1, and Hia Jong Teoh4 1
Department of Information Management, National Yunlin University of Science and Technology, 123, Section 3, University Road, Touliu, Yunlin 640, Taiwan {g9623805,chcheng}@yuntech.edu.tw 2 Department of Computer Center, St. Joseph’s Hospital, No. 74, Sinsheng Rd., Huwei Township, Yunlin County 632, Taiwan 3 Department of Computer Science and Information Management, HUNGKUANG University, No.34, Chung-Chie Road, Shalu, Taichung 433, Taiwan
[email protected] 4 Department of Accounting and Information Technology, Ling Tung University, 1 Ling Tung Road, Nantun, Taichung 408, Taiwan
[email protected]
Abstract. The total number of tourist arrivals is an important factor for understanding the tourism market and predicting the trend of tourism demand, and its forecasting is a necessity for the tourism and hospitality industries for subsequent planning and policy making. This paper proposes a fusion model of fuzzy time series to improve the forecasting accuracy of total tourist arrivals, which considers the cluster characteristics of the observations, defines a more persuasive universe of discourse based on the k-means approach, fuzzifies the observations precisely by triangular fuzzy numbers, establishes fuzzy logical relationship groups by employing rough set rule induction, and assigns weights to the various fuzzy relationships based on rule support. In an empirical case study, the proposed model is verified by using tourist datasets and comparing its forecasting accuracy with listed models. The experimental results indicate that the proposed approach outperforms the listed models with a lower mean absolute percentage error. Keywords: Fuzzy Time Series, K-means, Tourism Demand.
1 Introduction In tourism demand forecasting, the majority of researchers have focused on econometrics, and time series models play important roles [1, 2]. However, there are two drawbacks in ARIMA models: (1) there are very rigorous restrictions on variables and data in ARIMA models, and (2) ARIMA models are expressed in the form of equations, which are more difficult for general users to understand. Song and Chissom proposed fuzzy time series to solve the problems of the traditional time series methods in 1993. Recent studies have proposed fuzzy time
series models to deal with uncertain and vague data. This study proposes an improved fuzzy time-series model, which considers the observations, defines a more persuasive universe of discourse based on a clustering approach, fuzzifies the observations precisely by triangular fuzzy numbers, establishes fuzzy logical relationship groups by a rule induction method, and assigns weights to the various fuzzy relationships based on rule support. Empirically, this study uses the statistics on visitors to Taiwan. It is expected that the forecasting results will be more stable and the prediction deviation reduced. This paper is organized as follows: related work is described in Section 2; Section 3 presents our proposed approach; practical experiment results are shown in Section 4; the final section gives the conclusions.
2 Related Work In this section, we briefly discuss the following research: fuzzy numbers, k-means and fuzzy time series. 2.1 Fuzzy Numbers Zadeh (1965) first introduced the concept of fuzzy sets for modeling the vagueness type of uncertainty [3]. A fuzzy set $\tilde{A}$ defined on the universe $X$ is characterized by a membership function $\mu_{\tilde{A}}: X \rightarrow [0,1]$, which satisfies the following conditions: (1) $\mu_{\tilde{A}}$ is piecewise continuous, (2) $\mu_{\tilde{A}}$ is convex, and (3) $\mu_{\tilde{A}}$ is a normalized fuzzy set with $\mu_{\tilde{A}}(m) = 1$, where $m$ is a real number.
Fig. 1. Triangular fuzzy numbers
When describing imprecise numerical quantities, one should capture intuitive concepts of approximate numbers or intervals such as "approximately m". A fuzzy number must have a unique modal value m, and be convex and piecewise continuous. A common approach is to limit the shape of membership functions to those defined by LR-type fuzzy numbers. A special case of LR-type fuzzy numbers, the triangular fuzzy number (TFN), is defined by a triplet, denoted by $\tilde{A} = (a, b, c)$. The graph of a typical triangular fuzzy number is shown in Fig. 1. The membership function of this TFN is defined as

$$\mu_{\tilde{A}}(x) = \begin{cases} (x-a)/(b-a), & a \le x \le b \\ (c-x)/(c-b), & b \le x \le c \\ 0, & \text{otherwise} \end{cases}$$
Rule (condition → decision(t))                        Rule support
If (condition = L1) Then (decision = L1(3), L2(1))    4
If (condition = L2) Then (decision = L2(1), L3(1))    2
If (condition = L3) Then (decision = L3(3), L4(1))    4
Step 5: Defuzzify and calculate the forecast value. From the rule-set of Step 4, equation (4) is used to adapt the initial forecasts from defuzzification. The performance comparison of the refined rules is listed in Table 5.

Table 5. The forecast error in 2001

Model        Germany   HK        US        Average
Fuzzy [14]   1.93      13.9      6.28      7.37
GM(1,1)      1.887     -2.492    -10.634   5.00
Markov       6.582     -6.096    -5.999    6.23
Proposed     7.96      5.94      3.49      5.80
5 Conclusions The empirical forecasting results show that the improved fuzzy time-series model offers better forecasting accuracy than the listed models. The important findings based on the forecasting performance include: 1. The clustering approach enables the definition of a more persuasive universe of discourse and the partitioning of linguistic interval lengths. 2. Although the forecasting accuracy in this study was inferior to that of the ARIMA model, we believe that the proposed approach is easier for general users to understand.
References [1] Song, H., Li, G.: Tourism demand modelling and forecasting—A review of recent research. Tourism Management 29, 203–220 (2008) [2] Lim, C., McAleer, M.: Time series forecasts of international travel demand for Australia. Tourism Management 23, 389–396 (2002) [3] Ross, T.J.: Fuzzy logic with engineering applications. John Wiley & Sons, Ltd., USA (2004) [4] Hartigan, J., Wong, M.: A K-Means Clustering Algorithm. Applied Statistics 28, 100– 108 (1979) [5] Macqueen, J.B.: Some Methods for classification and analysis of multivariate observations. Presented at Proceedings of the Fifth Berkeley Symposium on Math., Statistics, and Probability (1967) [6] Song, Q., Chissom, B.S.: Fuzzy time series and its models. Fuzzy Sets and Systems 54, 269–277 (1993) [7] Song, Q., Chissom, B.S.: Forecasting enrollments with fuzzy time-series - Part II. Fuzzy Sets and Systems 62, 1–8 (1994) [8] Sullivan, J., Woodall, W.H.: A comparison of fuzzy forecasting and Markov modeling. Fuzzy Sets and Systems 64, 279–293 (1994) [9] Huarng, K.: Effective lengths of intervals to improve forecasting in fuzzy time-series. Fuzzy Sets and Systems 123, 387–394 (2001) [10] Chen, S.M.: Forecasting enrollments based on fuzzy time series. Fuzzy Sets and Systems 81, 311–319 (1996) [11] Miller, G.A.: The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. Psychological Review 63, 81–97 (1956) [12] Yu, H.-K.: Weighted fuzzy time series models for TAIEX forecasting. Physica A: Statistical Mechanics and its Applications 349, 609–624 (2005) [13] Teoh, H.J., Cheng, C.-H., Chu, H.-H., Chen, J.-S.: Fuzzy time series model based on probabilistic approach and rough set rule induction for empirical research in stock markets. Data and Knowledge Engineering 67, 103–117 (2008) [14] Wang, C.H.: Predicting tourism demand using fuzzy time series and hybrid grey theory. Tourism Management 25, 367–374 (2004)
Weighted Fuzzy Time Series Forecasting Model Jia-Wen Wang1,∗ and Jing-Wei Liu2 1
Department of Electronic Commerce Management, Nanhua University 32, Chung Kcng Li, Dalin, Chiayi, 62248, Taiwan Tel.: +886-5-2721001#56237; Fax: +886-5-2427197
[email protected] 2 Department of Multimedia and Game Science, Taipei College of Maritime Technology, No.150, Sec.3, Binhai Rd., Danshui Township, Taipei Country 251, Taiwan Tel: +886-2-28102292#5140,5142; Fax: +886-2-28106688
[email protected]
Abstract. Traditional time series methods fail to forecast problems with linguistic historical data. An alternative forecasting method such as fuzzy time series is needed to deal with these kinds of problems. This study proposes a fuzzy time series method based on trend variations. In the experiments and comparisons, the enrollment at the University of Alabama is adopted to illustrate and verify the proposed method. This paper utilizes the tracking signal to compare the forecasting accuracy of the proposed model with that of other methods, and the comparison results show that the proposed method has better performance than the other methods. Keywords: Fuzzy time series; trend variations; fuzzy clustering; tracking signal.
1 Introduction Song and Chissom proposed a fuzzy time series model [9] to deal with problems involving human linguistic terms [8]. Cheng et al. [1] proposed a fuzzy time series model based on a clustering method that can improve the definition of the universe of discourse and the fixed length of intervals. Chen presented a method to forecast the enrollments of the University of Alabama based on fuzzy time series [12]. Besides the above work, researchers have continued to discuss the difference between time-invariant and time-variant models [1, 6]. This paper proposes a fuzzy time series method based on trend variations. It has three advantages: (1) determining the universe of discourse and the length of intervals based on fuzzy clustering; (2) forecasting data based on trend variations; and (3) using a weighted rule to forecast the data. The ultimate goal of any forecasting endeavour is to have an accurate and unbiased forecast [5]. The tracking signal is a control indicator that can monitor the effectiveness of a forecasting method. If a forecast consistently produces values that are too high or too low, a tracking signal can be used to designate the forecasting method to be out of control, similar to the function of quality control
∗ Corresponding author.
charts. A tracking signal is also very versatile in that it can be used with a variety of forecasting methods. The only required numbers are an actual demand value and its corresponding forecasted value. This paper utilizes the tracking signal to compare forecasting accuracy. In the experiments and comparisons, the enrollments at the University of Alabama are adopted to illustrate and verify the proposed method. The rest of this paper is organized as follows. In Section 2, the paper briefly reviews the literature. Section 3 describes the proposed method in detail. Section 4 presents an example to verify our method and compare it with other methods. Finally, Section 5 gives the conclusions.
2 Related Literature In this section, two related topics, fuzzy time series and fuzzy clustering, are briefly reviewed. 2.1 Fuzzy Time Series Song and Chissom [10] first proposed a forecasting model called fuzzy time series, which provided a theoretical framework to model a special dynamic process whose observations are linguistic values. The concepts of fuzzy time series are described as follows. Definition 1: Fuzzy time series. Let $S(t)$ $(t = \ldots, 0, 1, 2, \ldots)$ be a subset of $R$ and the universe of discourse defined by the fuzzy sets $u_i(t)$ $(i = 1, 2, \ldots)$. If $F(t)$ consists of $u_i(t)$ $(i = 1, 2, \ldots)$, then $F(t)$ is called a fuzzy time series on $S(t)$ $(t = \ldots, 0, 1, 2, \ldots)$. Definition 2: Fuzzy relation
If there exists a fuzzy relationship $R(t-1, t)$ such that $F(t) = F(t-1) \times R(t-1, t)$, where $\times$ is an operator, then $F(t)$ is said to be caused by $F(t-1)$. The relationship between $F(t)$ and $F(t-1)$ can be denoted by $F(t-1) \rightarrow F(t)$. Definition 3: Time-invariant fuzzy time series and time-variant fuzzy time series
Suppose $F(t)$ is caused by $F(t-1)$ only, and $F(t) = F(t-1) \times R(t-1, t)$. For any $t$, if $R(t-1, t)$ is independent of $t$, then $F(t)$ is named a time-invariant fuzzy time series, otherwise a time-variant fuzzy time series. 2.2 Fuzzy Clustering
Cluster analysis is a technique for grouping individuals or objects into clusters so that objects in the same cluster are more like one another than they are like objects in other clusters. Various cluster techniques can be classified into two classes, namely hard (or crisp) clustering and fuzzy (or soft) clustering. This paper uses Fuzzy C
Means to cluster the attributes. Fuzzy C-Means (FCM), proposed by Bezdek [4], is the most famous and basic fuzzy clustering algorithm. Recently, there have been several efforts to use clustering techniques in mining time series data. Such models can give results that are consistent with the distance between the time series. The theoretical explanation behind this notion is that two time series are in one cluster if the underlying systems that generate them are the same or similar. 2.3 Forecast Accuracy
In this paper, the forecast accuracy is compared by mean square error (MSE) and tracking signal (TS). Suppose the actual value of period t is A(t) and the forecasted value of period t is F(t); then the measures are computed by the following equations:

1. Mean absolute deviation (MAD)
$$MAD = \frac{\sum |e_t|}{n} \qquad (1)$$
where $e_t = A(t) - F(t)$ and n is the number of periods.

2. Mean absolute percentage error (MAPE)
$$MAPE = \frac{1}{n} \sum \left| \frac{e_t}{A(t)} \right| \times 100 \qquad (2)$$

3. Running sum of forecast errors (RSFE)
$$RSFE = \sum e_t \qquad (3)$$
The RSFE is an indicator of bias in the forecasts. A zero RSFE means the positive errors equaled the negative errors.

4. Tracking signal (TS)
$$TS = RSFE / MAD \qquad (4)$$
The tracking signal is checked to determine whether it is within the acceptable control limits.
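The four accuracy measures can be computed directly from paired actual/forecast series, as in the following Python sketch; the three-period series in the example is hypothetical.

def forecast_accuracy(actual, forecast):
    # MAD, MAPE, RSFE and tracking signal for paired actual/forecast values.
    errors = [a - f for a, f in zip(actual, forecast)]
    n = len(errors)
    mad = sum(abs(e) for e in errors) / n
    mape = sum(abs(e / a) for e, a in zip(errors, actual)) * 100 / n
    rsfe = sum(errors)
    return mad, mape, rsfe, rsfe / mad

print(forecast_accuracy([100, 110, 120], [95, 115, 118]))   # hypothetical three periods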
3 Fuzzy Time Series Model Based on Trend Variations For solving the multiple-attribute problem, the proposed method needs to discriminate between the main attribute and auxiliary attributes, which are defined as in [1]. Section 3.1 gives the algorithm of the proposed method. Definition 4: Main attribute and auxiliary attributes. Suppose a time series S has n observations of m attributes; the j-th attribute of the observation at time t is denoted as $S_j(t)$, where $t = 1, 2, \ldots, n$ and $j = 1, 2, \ldots, m$.
Assume that we want to forecast $S_j(t+1)$ and use other attributes to aid the forecasting of $S_j(t+1)$; then attribute j and the other attributes are called the main attribute and auxiliary attributes, respectively. 3.1 Algorithm of the Proposed Method
By summarizing the above issues, the algorithm for the weighted fuzzy time series model is illustrated below. STEP 1: Calculate the difference $D_t$:
$$D_t = A(t-1) - A(t) \qquad (5)$$
where $A(t)$ is the actual value for period t and $A(t-1)$ is the actual value for period t-1.
STEP 2: Use a fuzzy clustering method to cluster $D_t$ into clusters $C_i$ $(i = 1, 2, 3, \ldots, k)$, and rank each cluster by its center $c_i$ $(i = 1, 2, 3, \ldots, k)$, where $c_1 > c_i > c_k$. STEP 3: Fuzzify $D_t$ into linguistic variables $L_r$. Utilizing the center $c_i$ as the representative of $C_i$, each cluster is ranked by the value ordering of the main attribute. The ranks are utilized to define the clusters as ordered linguistic variables $L_r$ $(r = 1, 2, \ldots, c)$. For example, suppose we have three clusters whose centers are 40, 100, and 70, respectively; their centers are utilized to rank them as $c_3$, $c_1$, and $c_2$ and to define them as $L_3$, $L_1$, and $L_2$, respectively. STEP 4: Establish the fuzzy relationships and the fuzzy relationship groups ($L_R$). The one-period fuzzy relations can be extracted from the differences $D_t$. The one-period fuzzy relation is defined as
$$(L_i^{D_t}) \rightarrow (L_j^{D_{t+1}}), \quad (L_i^{D_{t+1}}) \rightarrow (L_j^{D_{t+2}}), \quad \ldots, \quad (L_i^{D_{t+m-1}}) \rightarrow (L_j^{D_{t+m}}) \qquad (6)$$
where $1 \le i \le m$, $1 \le j \le m$, and m denotes the total number of linguistic values; $L_i^{D_t}$ denotes the linguistic variable of $D_t$, and $L_j^{D_{t+1}}$ denotes the linguistic variable of $D_{t+1}$. After all fuzzy relationships are induced, they are combined according to the same left-hand sides of the fuzzy relationships, and the redundant fuzzy relations are discarded to derive the fuzzy relationship groups $L_R$. A fuzzy relationship group is written $L_R \rightarrow L_r(n)$ $(R = 1, 2, 3, \ldots, k;\ r = 1, 2, 3, \ldots, k)$. In this study, $L_R$ and $L_r$ represent the linguistic values of the antecedent and consequent parts of the relational rules, respectively, and n is the cardinality of $L_R \rightarrow L_r$. For example, if $L_1 \rightarrow L_2$, $L_1 \rightarrow L_2$, $L_1 \rightarrow L_3$, and $L_1 \rightarrow L_4$ are induced, we can rearrange them as the fuzzy relationship group $L_1 \rightarrow L_2(2), L_3, L_4$, where $L_2(2)$ means that the cardinality of $L_1 \rightarrow L_2$ is 2, while the cardinality of $L_1 \rightarrow L_3$ is 1. STEP 5: Calculate the weight $W_{L_R}$ from the fuzzy relationship groups:
$$W_{L_R} = [w_{L_1}, w_{L_2}, \ldots, w_{L_k}] = \left[ \frac{|L_R \rightarrow L_1|}{\sum_{r=1}^{k} |L_R \rightarrow L_r|}, \frac{|L_R \rightarrow L_2|}{\sum_{r=1}^{k} |L_R \rightarrow L_r|}, \ldots, \frac{|L_R \rightarrow L_k|}{\sum_{r=1}^{k} |L_R \rightarrow L_r|} \right] \qquad (7)$$
where $R = 1, 2, 3, \ldots, k$ and $|\cdot|$ is the cardinality of $L_R \rightarrow L_r$. For example, the weight of the fuzzy relationship group $L_4 \rightarrow L_3, L_4, L_6(2)$ is $W_{L_4} = [\frac{1}{4}, \frac{1}{4}, \frac{2}{4}] = [0.25, 0.25, 0.5]$. STEP 6: Calculate the trend variation $TV(L_R)$:
$$TV(L_R) = \sum_{i=1}^{k} c_i \times w_{L_i} \qquad (8)$$
where k is the number of clusters. STEP 7: Calculate the forecast value $F_t$:
$$F_t = A(t-1) - TV(L_R) \qquad (9)$$
where $F_t$ is the forecasted value for period t. STEP 8: Calculate the forecast accuracy.
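The following Python sketch strings Steps 1-7 together under some simplifying assumptions of our own: the cluster centers are taken as given (the paper obtains them with FCM), each difference is fuzzified to its nearest center, and the relationship groups are built over the whole in-sample series.

from collections import Counter, defaultdict

def forecast_wfts(series, centers):
    # Rank the given cluster centers so that L1 has the largest center (Step 2).
    ranked = sorted(centers, reverse=True)
    center_of = {i + 1: c for i, c in enumerate(ranked)}

    def label(d):
        # Step 3 (approximation): assign each difference to its nearest center.
        return min(center_of, key=lambda i: abs(center_of[i] - d))

    # Step 1, Eq. (5): first-order differences D_t = A(t-1) - A(t).
    diffs = [series[t - 1] - series[t] for t in range(1, len(series))]
    labels = [label(d) for d in diffs]

    # Step 4: fuzzy relationship groups with cardinalities.
    groups = defaultdict(Counter)
    for left, right in zip(labels, labels[1:]):
        groups[left][right] += 1

    forecasts = {}
    for t in range(2, len(series)):
        counts = groups[labels[t - 2]]
        total = sum(counts.values())
        # Steps 5-6, Eqs. (7)-(8): weights are relative cardinalities.
        tv = sum(center_of[r] * n / total for r, n in counts.items())
        forecasts[t] = series[t - 1] - tv        # Step 7, Eq. (9)
    return forecasts

# Hypothetical usage with the enrollment data of Table 1 and the centers of Table 2:
# forecast_wfts(enrollments, [446.12, -296.51, -24.40, -505.59, -1288.45, 953.45, -839.87])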
4 The Enrollments at the University of Alabama Previous studies on fuzzy time series often use the yearly data on enrollments at the University of Alabama to evaluate their models [1, 11, 12]. This paper uses the same data to evaluate the proposed model and to compare it with other methods, as shown in Table 1. In this study, the cluster center is used as the representative of the data that belong to that cluster or linguistic value. For example, suppose that $D_t = L_2$ and the fuzzy relationship group $L_2 \rightarrow L_1, L_3$ is induced; if the cluster centers of $L_1$ and $L_3$ are 953.45 and -24.40, respectively, then the trend variation is $TV(L_2) = (953.45 - 24.40)/2 = 464.52$.
Table 1. Forecasting Enrollments of the University of Alabama

Year   Enrollment   D_t
1971   13055
1972   13563        -508
1973   13867        -304
1974   14696        -829
1975   15460        -764
1976   15311        149
1977   15603        -292
1978   15861        -258
1979   16807        -946
1980   16919        -112
1981   16388        531
1982   15433        955
1983   15497        -64
1984   15145        352
1985   15163        -18
1986   15984        -821
1987   16859        -875
1988   18150        -1291
1989   18970        -820
1990   19328        -358
1991   19337        -9
1992   18876        461
Table 2. Cluster Center

Cluster   Center     Linguistic
1         446.12     L2
2         -296.51    L4
3         -24.40     L3
4         -505.59    L5
5         -1288.45   L7
6         953.45     L1
7         -839.87    L6
Table 3. Fuzzy relationship groups of enrollments
L1 → L3
L2 → L1, L3
L3 → L2(3), L4, L6
L4 → L3, L4, L6(2)
L5 → L4
L6 → L3(2), L4, L6(2), L7
L7 → L6
Previous studies on fuzzy time series often use the yearly data on enrollments at the University of Alabama to evaluate their models. This paper uses the same data to evaluate the proposed model and to compare it with other methods, as shown in Table 4. According to Miller [2], seven linguistic values are utilized as the number of clusters for demonstration, to correspond with the limitation of human cognition in short-term memory. Hence, we feed the difference of each period into FCM and obtain seven cluster centers, as shown in Table 2. From Step 4, the fuzzy relationships and the fuzzy relationship groups are as shown in Table 3. Finally, the forecasting results for the enrollments are shown in Table 4. Table 4. Comparisons of the forecasting results of different methods
Method                 MSE         MAD      RSFE       TS
Chen [2]               427113.85   517.55   -1491.00   -2.88
Cheng et al. [3]       231429.74   412.46   263.81     0.64
Song and Chissom [4]   423026.65   516.35   -1387.00   -2.69
Proposed Model         193373      378.68   -30.93     -0.08
5 Conclusions The paper proposes a fuzzy time series method based on trend variations. From the comparison results, we can see that the proposed model outperforms the listed models [1, 11, 12]. In the future, more applications are necessary to validate its general applicability.
Acknowledgements The first author gratefully appreciates the financial support from National Science Council, Taiwan under contract NSC98-2221-E-343-002.
References [1] Cheng, C.-H., Cheng, G.-W., Wang, J.-W.: Multi-attribute fuzzy time series method based on fuzzy clustering. Expert Systems with Applications 34(2), 1235–1242 (2008) [2] Miller, G.-A.: The magical number seven, plus or minus two: some limits on our capacity of processing information. The Psychological Review 63, 81–97 (1956) [3] Yu, H.-K.: Weighted fuzzy time series models for TAIEX forecasting. Physica A 349, 609–624 (2004) [4] Bezdek, J.C.: Pattern recognition with fuzzy objective function algorithms. Plenum Press, NY (1981) [5] Wisner, J.-D., Leong, G.-D., Tan, K.-C.: Principles of supply chain management-a balanced approach, South-Western, Tomson (2005) [6] Huarng, K.: Effective lengths of intervals to improve forecasting in fuzzy time series. Fuzzy Sets and Systems 123, 387–394 (2001)
[7] Huarng, K.: Heuristic models of fuzzy time series for forecasting. Fuzzy Sets and Systems 123, 369–386 (2001) [8] Zadeh, L.-A.: Fuzzy sets. Information and Control 8(3), 338–353 (1965) [9] Song, Q., Chissom, B.-S.: Fuzzy time series and its models. Fuzzy Sets and Systems 54, 269–277 (1993a) [10] Song, Q., Chissom, B.-S.: Forecasting enrollments with fuzzy time series - Part I. Fuzzy sets and systems 54, 1–10 (1993b) [11] Song, Q., Chissom, B.-S.: Forecasting enrollments with fuzzy time series - Part II. Fuzzy Sets and Systems 62, 1–8 (1994) [12] Chen, S.-M.: Forecasting enrollments based on fuzzy time series. Fuzzy sets and systems 81, 311–319 (1996)
A System for Assisting English Oral Proficiency – A Case Study of the Elementary Level of General English Proficiency Test (GEPT) in Taiwan Chien-Hsien Huang and Huey-Ming Lee Department of Information Management, Chinese Culture University 55,Hwa-Kang Road, Yang-Ming-San, Taipei, Taiwan
[email protected],
[email protected]
Abstract. One quarter of the world's population uses English as their native language. Research shows that Asian students score high on English proficiency tests; however, it is really difficult for them to use English practically. The report of the Taiwanese Language Training & Testing Center indicated that the number of Taiwanese people who pass the oral examination is quite small, and the main factor is failing in "answering the question orally". This study creates an interactive voice response system. Through the system, the learner has the chance to spend several minutes practicing English oral ability every day. The system builds up the keywords related to each question and answer in its database. This study may achieve the effect of decreasing the learner's anxiety about English oral practice, and further help the learner to pass the oral examination of the GEPT. Keywords: English, SMS, m-learning, fuzzy.
1 Introduction There are two problems with oral English learning in non-English-speaking countries: one is the lack of a speaking environment, so that learners cannot pronounce correctly; the other is the lack of time to practice, which leads to wrong use of vocabulary. According to the ETS (Educational Testing Service, U.S.A.), reports show that the test is too difficult, especially the oral examination [1]. Therefore, oral training is an important issue. Pronunciation must be constantly practiced and corrected. Accurate use of vocabulary also needs to be trained through question-and-answer reaction. Both of the above create an important environment for "how to pronounce and speak correctly". There are some studies on improving oral ability, as follows. Lee and Huang [2] proposed using fuzzy theory to judge oral answers accurately. Ma [3] proposed an analysis of speaking training in e-learning. Parault and Parkinson [4] proposed that sound symbolism is a word property that influences the learning of unknown words. Hayes-Harb et al. [5] considered the intelligibility data in relation to various temporal-acoustic properties of native English and Mandarin-accented English
speech, in an effort to better understand the properties of speech that may contribute to the interlanguage speech intelligibility benefit. The telephone is the most useful communication instrument at the present day, and the invention of the mobile phone has created an even more convenient talking environment on the go. The technical literature points out that the Interactive Voice Response (IVR) dialog system of a switchboard is useful for training speaking reaction, so the integration of the telephone and Interactive Voice Response is a great contribution to oral learning. Therefore, in this study, we build an oral reaction training system based on the problem-posing approach, action learning and interactive voice response. Via this system, we can transfer the oral material into voice mail and store the reply messages. The reply message is then transferred to text, and a grade can be produced automatically via our judgment system. With the judgment system for training, oral reaction training becomes more efficient.
2 Framework of the Proposed System In this section, we present an oral reaction training system based on fuzzy inference, as shown in Fig 1.
Fig. 1. The system architecture of oral training system
There are five modules in this system, namely, teach material module (TMM), access module (AM), speech transfer module (STM), judgment module (JM), and time setup module (TSM). The main interface is as shown in Fig 2.
Fig. 2. Main interface
Fig. 3. TMM interface
2.1 Teach Material Module (TMM) The TMM records the teaching material provided by the teacher. The operation procedures of the TMM are as follows: (1) prompting the teaching material choice pattern, (2) starting recording and transferring it to a wav file, and (3) putting the wav file into the database. The interface is shown in Fig. 3. 2.2 Access Module (AM) The AM transfers the recorded teaching material into the voice mail and sends a short message to the learner. The operation procedures of the AM are as follows: (1) grabbing the question from the database at the set time, (2) putting the question into the voice mail, (3) catching the learner's telephone number from the system, (4) searching the learner's judgment status according to the telephone number, (5) sending the short messages to the learners in sequence, (6) after receiving the short message from the system, the learner calls the assigned telephone number and is connected to the voice mail automatically, (7) the learner listens to the teaching material (question) from the voice mail, and (8) the learner replies to the question in the system's voice mail, and the receiving module then receives the messages. 2.3 Speech Transfer Module (STM) The STM transfers the replied voice record into text. The teacher depends on rules to infer the elementary vocabulary. The operation procedures of the STM are as follows: (1) the system captures the messages from the voice mail, (2) transferring each message to a wav file and saving it to the student's database, (3) using HTK (the Hidden Markov Model Toolkit) to recognize the student's wav file as text, (4) manually checking the result and correcting it, and (5) saving the result to the student's database. 2.4 Judgment Module (JM) The JM determines the situations of the vocabulary by fuzzy inference, grades the text through the scoring system and records the score in the database. The operation procedures of the
JM are as follows: (1) the system produces a score from the keywords, (2) giving the score, and (3) recording the score in the database. The interface is shown in Fig. 5. 2.5 Time Setup Module (TSM) The TSM records the time for transferring the teaching material and the time for sending messages via the short message service (SMS). The operation procedures of the TSM are as follows: (1) the system records the time of sending the short message and of transferring the teaching material, and (2) depending on the set time, the system transfers the material to the voice box and sends the short message. The interface is shown in Fig. 4.
Fig. 4. TSM interface
Fig. 5. JM interface
2.6 Algorithm Performance The oral exam adjusts its standards based on fluency, comprehensibility and accuracy [6]. In this study, we focus on the accuracy of content as follows: (1) Accuracy and appropriateness of content: if the talking issues are closely interrelated with the content, then it gets the point. (2) Scoring standard: the closer the interrelation between talking issues, the higher the score. (3) Constructing the relation degree between talking issues: in accordance with the database of the elementary talking issues of the GEPT (General English Proficiency Test, Taiwan), an expert who is professional in this field sets up the relation dimension between talking issues; for example, the relation degree of family with members of family is 1, but it will differ according to the perception of different experts. Therefore, the system adopts a fuzzy environment to obtain a reasonable result that is closer to the actual one. The criteria ratings of the relations between talking issues are linguistic variables with linguistic values $V_1, V_2, \ldots, V_5$, where $V_1$ = very high, $V_2$ = high, $V_3$ = middle,
$V_4$ = low, and $V_5$ = very low. The triangular fuzzy number representations of the linguistic values are shown in Table 1. The membership functions of the set of criteria ratings of relations are shown in Fig. 6. The system sets the relation degree from 0 to 1 and establishes five fuzzy inference rules, as shown in Table 2.

Table 1. Triangular fuzzy numbers of the criteria of relations

Rating of relation   Triangular fuzzy number
V1: very high        $\tilde{V}_1 = (0, 100/6, 200/6)$
V2: high             $\tilde{V}_2 = (100/6, 200/6, 300/6)$
V3: middle           $\tilde{V}_3 = (200/6, 300/6, 400/6)$
V4: low              $\tilde{V}_4 = (300/6, 400/6, 500/6)$
V5: very low         $\tilde{V}_5 = (400/6, 500/6, 600/6)$
Fig. 6. Membership functions of the set of the criteria rating of relations Table 2. Rules of inference
No.   Relation                                      Relation degree
1     The relation of talking issues is very high   1
2     The relation of talking issues is high        0.75
3     The relation of talking issues is middle      0.5
4     The relation of talking issues is low         0.25
5     The relation of talking issues is very low    0
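A minimal sketch of how the rule table could be used to turn keyword relations into a score is given below. The RELATION_DEGREE mapping follows Table 2, while the keyword dictionary and the averaging of degrees into a final score are our own assumptions, not the system's documented scoring formula.

RELATION_DEGREE = {"very high": 1.0, "high": 0.75, "middle": 0.5, "low": 0.25, "very low": 0.0}

def score_answer(answer_keywords, issue_relations):
    # issue_relations maps each keyword to the expert-rated relation with the talking issue.
    degrees = [RELATION_DEGREE[issue_relations.get(k, "very low")] for k in answer_keywords]
    return sum(degrees) / len(degrees) if degrees else 0.0

relations = {"Monday": "very high", "week": "middle", "lunch": "very low"}   # hypothetical ratings
print(score_answer(["Monday", "week"], relations))   # 0.75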
3 System Implementation To assess the learning performance of the proposed oral reaction training system, this study recruited twenty-one freshmen majoring in the Department of Physical Education at Chinese Culture University. When they entered the university, they had taken an English examination to identify their English ability, so we could make sure that all of them had failed the elementary level of the GEPT. During the thirty days of the experiment, we changed the training material, which takes just thirty seconds, every day, and sent a message to inform the students of the judgment result and to remind them to listen to the next training material. The teacher can listen to the students' answers from his computer. The teacher's console is shown in Fig. 7.
Fig. 7. Voice message records from teacher’s console
The student can use his mobile phone to receive the SMS with the system's judgment, as shown in Fig. 8. When a student receives the SMS, he knows how well he answered yesterday's question, and prepares to call the system's assigned phone number to listen to today's new question. The purpose of the experiment is to verify the system's qualification and satisfaction and whether the students' anxiety about speaking English has been reduced. Therefore, we conducted an oral anxiety pretest before the experiment. After the experiment was finished, we conducted a survey on the system's qualification and satisfaction and the students' anxiety about speaking English.
Fig. 8. Receive SMS in student’s mobile phone
According to the elementary speaking material of the GEPT, we recorded a thirty-second voice file for each piece of training material; an example is shown in Table 3.

Table 3. The example teaching material

Teaching Material 1. Example: We all know one week has seven days. There are Monday, Tuesday, Wednesday, Thursday, Friday, Saturday and Sunday. Please answer below question. What day is today? What day is today?
Table 4. Descriptive statistics of system qualification
         Mean     Standard deviation   Skewness
SQQ1     3.8095   .74960               -.450
SQQ2     3.7143   .71714               -.404
SQQ3     3.4286   .97834               -.665
SQQ4     3.7143   .71714               -.404
SQQ5     3.6667   .85635               -.313
SQQ6     3.5238   .87287               -.329
SQQ7     3.2381   .94365               -.526
SQQ8     3.3333   .96609               -.395
SQQ9     3.1429   .96362               -.310
SQQ10    3.2381   1.04426              -.236
(SQQ*: the number of the questionnaire item on system qualification).
Table 5. Average descriptive statistics of system qualification
Total average: Mean = 3.4810, Standard deviation = .61124
According to the statistical results, this system's average qualification score is 3.4810, which indicates that the system is useful to the students; the details are shown in Table 4 and Table 5. According to the results, this system's average student satisfaction score is 3.3048, which indicates that the system is useful to the students; the details are shown in Table 6 and Table 7.
Table 6. Average descriptive statistics of student’s satisfaction
Total average: Mean = 3.3048, Standard deviation = .83814
Table 7. Descriptive statistics of student’s satisfaction
         Mean     Standard deviation   Skewness
LSQ1     3.4286   .92582               .605
LSQ2     3.4762   .98077               .454
LSQ3     3.0476   .92066               .101
LSQ4     3.2857   .95618               .263
LSQ5     3.2381   .94365               .526
LSQ6     3.5238   .98077               .600
LSQ7     3.3333   .96609               .395
LSQ8     3.3333   1.01653              .444
LSQ9     3.0476   1.16087              .101
LSQ10    3.3333   1.06458              .204
(LSQ*: the number of the questionnaire item on student satisfaction).
Regarding the anxiety about speaking English, we focused on the twenty-one students and conducted two examinations, one before the oral training and the other after the oral training. According to the statistical results (pretest mean = 54.857, std. deviation = 4.127; posttest mean = 45.381, std. deviation = 7.117), the posttest oral anxiety mean is lower than the pretest oral anxiety mean. Refer to Table 8. Table 8. Paired samples statistics
Pair 1     Mean      Number   Standard deviation   Standard error mean
Pretest    54.8571   21       4.12657              .90049
Posttest   45.3810   21       7.11671              1.55299
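The paired pretest/posttest comparison reported here can be reproduced with a standard paired-samples t-test, as in the following sketch; the per-student score lists are hypothetical, since the paper reports only the group means and standard deviations.

from scipy import stats

pretest  = [55, 52, 58, 60, 50, 54, 57, 53, 56, 55, 59, 51, 54, 56, 52, 58, 55, 53, 57, 54, 56]
posttest = [46, 40, 50, 52, 38, 45, 48, 42, 47, 44, 55, 39, 43, 49, 41, 52, 46, 42, 50, 43, 47]

t_stat, p_value = stats.ttest_rel(pretest, posttest)   # paired-samples t-test of the anxiety change
print(t_stat, p_value)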
Depending on the statistical results, Correlation = -.606, P =

If (LF+LLST+Number) >= (KC+D1), then LP2 = KC; if (LF+LLST+Number) < (KC+D1), then LP2 = mod(KC/(LF+LLST+Number)). If LP2+D1 > LF+LLST, reset Offset and Number. Insertion of the verification code begins at LP2. 3) If (LF+LLST+Number+LEDT) >= KC, then LP3 = KC. If (LF+LLST+Number+LEDT) < KC, then LP3 = mod(KC/(LF+LLST+Number+LEDT)). The EDT is inserted at LP3, and its length is LEDT. D. Format code The fields in the EDT are FC, LLST, NB, RB, VC, DF and DO. The format of the EDT depends on the format code (FC), as shown in Table 9. The EDTF is stored in the system. Table 9. Encryption data table format (EDTF)
FC    Field order
1     LLST, NB, RB, VC, DF, DO
2     LLST, NB, RB, VC, DO, DF
3     LLST, NB, RB, DF, DO, VC
...   ...
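Table 9 can be read as a lookup from format code to field order. The sketch below illustrates only that idea; treating the field values as opaque byte strings and writing FC as the first byte are our own assumptions, since the field semantics are defined in the earlier part of the paper not reproduced here.

EDT_FORMATS = {
    1: ("LLST", "NB", "RB", "VC", "DF", "DO"),
    2: ("LLST", "NB", "RB", "VC", "DO", "DF"),
    3: ("LLST", "NB", "RB", "DF", "DO", "VC"),
}

def pack_edt(fc, fields):
    # Concatenate the EDT fields in the order selected by the format code FC.
    return bytes([fc]) + b"".join(fields[name] for name in EDT_FORMATS[fc])

dummy = {name: bytes(2) for name in ("LLST", "NB", "RB", "VC", "DF", "DO")}   # opaque placeholders
print(pack_edt(2, dummy))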
5 Comparison The differences between the proposed algorithms and other methods are as follows: (1) The same plaintext has different cipher texts, because of 1) the different length and content of the left shift table, and 2) the different length and content of the verification code. (2) The encryption data table is stored in the cipher text and used to encrypt and decrypt. (3) We use basic computer operations like shift, rotate and move in the design. (4) We use a key code to store the encryption data table in the cipher text, and each file may have a different key code.
6 Implementation In this section, we implement the proposed algorithms. A. Computing Environment Computer type: INTEL, Pentium D830 Memory size: DDR 512 MB * 2;
Computer Language: C Language.
B. Executing Results Table 10. Encryption and decryption processing time
Encryption times 1)   Encryption (symbol table size) 2)   Decryption (symbol table size) 2)
                      128      1024     4096              128      1024     4096
1M                    41       280      1036              53       388      1484
4M                    165      1158     4155              213      1572     6053
8M                    325      2300     8386              424      3154     12000
1) M = 1,000,000 processing times.   2) Processing time in seconds.
The processing time of the different combinations of symbol size and executing times are as shown in Table 10.
7 Conclusion In this study, we use basic computing operations to design these encryption algorithms. Finally, we make some comments about this study. (1) The security data may be any combination of letters, graphics and other figures. (2) It is safer, because to do the decryption one must know the following: 1) the EDT in the cipher text; 2) the different EDT formats selected by the format code; and 3) the value of each field in the EDT. (3) Even when the encryption program is stolen and the same data is used to obtain the encrypted security data, each cipher text may have a different length and content, so it is difficult to do cryptanalysis. (4) When the security data file contains many records, we can process one record at a time and store the encryption data table in the file information database.
Ant Colony Clustering Using Mobile Agents as Ants and Pheromone Masashi Mizutani1 , Munehiro Takimoto1 , and Yasushi Kambayashi2 1
Department of Information Sciences, Tokyo University of Science, Japan 2 Department of Computer and Information Engineering, Nippon Institute of Technology, Japan
Abstract. This paper presents a new approach for controlling multiple mobile robots connected by communication networks. The control mechanism is based on a specific Ant Colony Clustering (ACC) algorithm. In traditional ACC, an ant conveys an object; in our approach, the ant, implemented as a mobile software agent, controls the robot corresponding to an object, so that the object moves in the direction required by the ant agent. The process in which an ant searches for an object corresponds to some migrations of the ant agent, which are much more efficient than physically searching. The ACC also uses a pheromone to make clusters grow and stabilize. However, it is difficult to implement such a pheromone as a physical entity, because it must diffuse, mutually intensify its strength, and restrict its effect to its scope. In our approach, the pheromone is implemented as a mobile software agent, as is the ant. The mobile software agents can migrate from one robot to another, so that they can diffuse over the robots within their scopes. In addition, since they have their strengths as vector values, they can represent mutual intensification as the synthesis of vectors. We have been developing elemental techniques for controlling multiple robots using mobile software agents, and have shown the effectiveness of applying them to our previous ACC approach, which requires a host for centrally controlling the robots. The new ACC approach decentralizes this, and makes the robot system free from special devices for checking locations. Keywords: Mobile agent, Ant Colony Clustering, Intelligent robot control.
1
Introduction
Ant colony clustering is one of the clustering methods that model the behaviors of social insects such as ants. The ants collect objects that are scattered in a field. In ant colony clustering, artificial ants imitate the real ants and gradually form several clusters. The application we have in mind is a kind of intelligent carts. When we pass through terminals of an airport, we often see carts scattered in the walkway and laborers manually collecting them one by one. It is a laborious
task and not a fascinating job. It would be much easier if the carts were roughly gathered in any way before the laborers begin to collect them. Multi-robot systems have made rapid progress in various fields, and the core technologies of multi-robot systems are now easily available [1]. Therefore, it is possible to give some simple robots minimum intelligence and make them collect the carts. We previously proposed a kind of intelligent cart which draws itself together with others automatically. In introducing such a cart, a simple implementation would be to give each cart a designated assembly point to which it automatically returns when free. This is easy to implement, but some carts would have to travel a long way back to their own assembly point, even though they are located close to other assembly points. In that approach, a big powerful battery, which is heavy and expensive, would be needed, whereas intelligent cart systems with small batteries are desirable. Thus, energy saving is an important issue in such a system [2,3]. In order to ameliorate the situation, we employed mobile software agents to locate the robots scattered in a field, e.g. an airport, and made them autonomously determine their moving behavior using Ant Colony Clustering (ACC), which is Ant Colony Optimization (ACO) specialized for clustering objects. ACO is a swarm intelligence-based method and a multi-agent system that exploits artificial stigmergy for the solution of combinatorial optimization problems. ACC is inspired by the collective behaviors of ants, and Deneubourg formulated an algorithm that simulates the ant corps gathering and brood sorting behaviors [4]. We previously proposed an ACC approach using mobile software agents [5]. In this approach, a mobile software agent traverses all the robots corresponding to objects, collecting the information on their locations. The information is taken to a host computer, where the ACC algorithm is performed to determine the locations of the clusters. In addition, it defined a pheromone for each robot: if the strength of the pheromone is strong, other robots are locked there. That contributed to stabilizing the clusters and to relatively monotonic growth. Although the previous approach yielded favorable results in preliminary experiments, it needed a host for centrally managing the locations of the robots and executing the ACC algorithm, so that the effectiveness of ACC as a distributed system was restricted. In this paper, we propose a new pheromone-based ACC approach using mobile software agents. In our new approach, the host for centrally controlling the robots is not necessary. Ant agents, which are mobile software agents corresponding to ants, iteratively traverse the intelligent carts corresponding to objects. The Ant agent on each cart behaves according to the ACC algorithm. Furthermore, the pheromone is also implemented as a collection of mobile software agents, which we call Pheromone agents. A Pheromone agent has a datum representing its strength and is created on each cart. Once it is created on a cart, it migrates to other carts within its scope. The Pheromone agents reaching a cart are combined into a single agent with the synthesized strength, which is used for guiding the Ant agents.
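A minimal sketch of the vector synthesis of pheromone strengths is given below. The two-dimensional strength representation follows the description above, while the per-hop decay factor is our own assumption rather than a parameter of the paper.

def synthesize(pheromones, decay=0.9):
    # Sum the 2-D strength vectors of the Pheromone agents reaching one cart,
    # attenuating each by an assumed decay factor per robot-to-robot hop.
    sx = sum(p["strength"][0] * decay ** p["hops"] for p in pheromones)
    sy = sum(p["strength"][1] * decay ** p["hops"] for p in pheromones)
    return (sx, sy)

arriving = [{"strength": (1.0, 0.0), "hops": 1}, {"strength": (0.5, 0.5), "hops": 2}]
print(synthesize(arriving))   # combined strength used to guide the Ant agent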
The rest of this paper is organized as follows. The second section describes the background. The third section describes the ACC algorithm we have employed to calculate the quasi-optimal assembly positions. The fourth section describes the mobile software agent system that performs the arrangement of the multiple robots; the agent system consists of Ant and Pheromone agents. Finally, we conclude in the fifth section and discuss future research directions.
2 Background
Kambayashi and Takimoto have proposed a framework for controlling intelligent multiple robots using higher-order mobile agents [6,2]. The framework helps users construct intelligent robot control software through the migration of mobile agents. Since the migrating agents are higher-order, the control software can be hierarchically assembled while it is running. Dynamically extending the control software by the migration of mobile agents enables them to keep the base control software relatively simple and to add functionalities one by one as they learn about the working environment. Thus they do not have to make the intelligent robot smart from the beginning or make the robot learn by itself; they can send intelligence later as new agents. Even though they demonstrated the usefulness of dynamically extending robot control software by using higher-order mobile agents, such a higher-order property is not necessary in our setting, and we have employed a simple, non-higher-order mobile agent system for our framework. They implemented a team of cooperative search robots to show the effectiveness of their framework and demonstrated that it contributes to the energy saving of multiple robots [2]; they achieved significant savings of energy, and our simple agent system should achieve similar performance. Deneubourg formulated the biologically inspired behavioral algorithm that simulates the ant corpse gathering and brood sorting behaviors [4]. Wang and Zhang proposed an ant-inspired approach along this line of research that sorts objects with multiple robots [8]. Lumer improved Deneubourg's model and proposed a new simulation model that is called Ant Colony Clustering [7]. His method could cluster similar objects into a few groups. Chen et al. further improved Lumer's model and proposed the Ants Sleeping Model [4].
3 The Ant Colony Clustering
The coordination of an ant colony is achieved through indirect communication via pheromones. In a traditional ACO system, artificial ants leave pheromone signals so that other artificial ants can trace the same path [7]. Randomly walking artificial ants have a high probability of picking up an object with weak pheromone and of putting the object down where they sense strong pheromone. They are not supposed to walk long distances, so the artificial ants tend to pick up scattered objects and produce many small clusters of objects. Once a few clusters are generated, they tend to grow.
Fig. 1. The ACC algorithm (flowchart: loop until the end requirement is satisfied; an ant without an object performs a random walk and catches an object when it detects one; an ant carrying an object performs a pheromone walk and puts the object down when it detects an empty space adjacent to other objects)
Since the purpose of traditional ACC is clustering or grouping objects into several different classes based on some properties, it is desirable that the generated chunks of clusters grow into one big cluster so that each group has a distinct characteristic. In our system, however, we want to produce several roughly clustered groups of the same type and make each robot move as little as possible. (We assume we have one kind of cart robot, and we do not want robots to move long distances.) In the implementation of our ACC algorithm, the artificial ants are generated with randomly assigned initial positions and walking directions. While an artificial ant performs a random walk, if it finds an isolated object, it picks the object up and continues the random walk. While it carries an object, if it senses strong pheromone, it puts down the object it is carrying. The artificial ants repeat this simple procedure until the termination condition is satisfied. Fig. 1 shows the behavior of an artificial ant. These behaviors are achieved using the mobile software agents described below.
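As an illustration only, the following is a minimal Java sketch of the artificial ant loop described above. The Field interface, its methods, and the pheromone threshold constant are hypothetical names introduced for this example and are not part of the authors' implementation.

```java
// Hypothetical sketch of one artificial ant's behavior in the ACC loop.
public class ArtificialAnt {
    /** Minimal view of the field the ant walks on; all methods are assumptions. */
    interface Field {
        void randomStep(ArtificialAnt ant);
        void pheromoneStep(ArtificialAnt ant);          // step biased toward strong pheromone
        Integer isolatedObjectAt(ArtificialAnt ant);    // id of an isolated object nearby, or null
        double pheromoneAt(ArtificialAnt ant);
        boolean hasEmptySpaceAdjacentToObjects(ArtificialAnt ant);
        void pickUp(int objectId);
        void putDown(ArtificialAnt ant, int objectId);
    }

    private static final double PUT_THRESHOLD = 0.8;   // assumed pheromone level for dropping

    private final Field field;
    private Integer carried = null;                     // id of the object currently carried

    public ArtificialAnt(Field field) { this.field = field; }

    /** One iteration of the ant's behavior; repeated until the termination condition holds. */
    public void step() {
        if (carried == null) {
            field.randomStep(this);                     // random walk while empty-handed
            Integer found = field.isolatedObjectAt(this);
            if (found != null) {
                field.pickUp(found);                    // catch an isolated object
                carried = found;
            }
        } else {
            field.pheromoneStep(this);
            if (field.pheromoneAt(this) >= PUT_THRESHOLD
                    && field.hasEmptySpaceAdjacentToObjects(this)) {
                field.putDown(this, carried);           // drop the object next to a cluster
                carried = null;
            }
        }
    }
}
```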
4 The Mobile Agents
Robot systems have made rapid progress not only in their behaviors but also in the way they are controlled. Moreover, multi-agent systems have introduced modularity, reconfigurability and extensibility to control systems, which had traditionally been monolithic. This has made it easier to develop control systems for distributed environments such as intelligent multi-robot systems. On the other hand, excessive interactions among agents in a multi-agent system may cause problems in the multi-robot environment.
Fig. 2. The relation between agents and robots

Fig. 3. Traversals of mobile agents: (a) Ant agents, (b) Pheromone agents
Consider a multi-robot system in which each robot is controlled by an agent and interactions among robots are achieved through a communication network such as a wireless LAN. Since the circumstances around the robots change as the robots move, the condition of each connection among the various robots also changes. In this environment, when some of the connections in the network are disabled, the system may not be able to maintain consistency among the states of the robots. Such problems tend to increase as the number of interactions increases. In order to lessen the problems of excessive communication, mobile agent methodologies have been developed for distributed environments. In a mobile agent system, each agent can actively migrate from one site to another. Since a mobile agent can bring the necessary functionalities with it and perform its tasks autonomously, it reduces the need for interaction with other sites; in the minimal case, a mobile agent requires a connection to be established only when it migrates [9]. This property is useful for controlling robots that have to work at a remote site with unreliable or intermittent communication.
Fig. 4. The behavior of picking up an object and randomly walking
The concept of a mobile agent also creates the possibility that new functions and knowledge can be introduced to the entire multi-agent system from a host or controller outside the system via a single accessible member of the intelligent multi-robot system [1]. Our system model consists of robots and two kinds of mobile software agents, as shown in Fig. 2. All the controls of the mobile robots as well as the ACC computation are achieved through the agents: 1) Ant agents (AA) and 2) Pheromone agents (PA). Ant agents traverse the robots scattered in the field one by one to search for unlocked carts, as shown in Fig. 3(a). In this traversal, migrating to an unlocked cart means that an ant has found an object and picked it up; after that, the AA controls the cart, driving it randomly. On the other hand, a PA is created on each locked cart and migrates to other carts, as shown in Fig. 3(b). Once a PA reaches a cart where an AA exists, the PA guides the AA to the locked cart where the PA originated. In the following sections, we describe the details of Ant agents and Pheromone agents.
4.1 Ant Agents
An AA has an IP list of all the carts in order to traverse them one by one. If it has visited all the carts, it goes back to the home host that administers the cart system to check the number of carts, and it updates its IP list in the following cases: 1. some new carts have been added, or 2. some carts have broken down. However, these cases are so rare that the home host is skipped in most cases. In addition, an AA can observe the states of the carts, which are: 1. being used by a customer, 2. locked, or 3. free, that is, unlocked and not used by any customer. The AA simply passes through carts that are used by customers or locked. If it visits a free cart, it begins to control it.
Fig. 5. Strengths of pheromones and their scope
Fig. 6. Pheromone walking
When the AA takes control of a free cart, if a PA is on the same cart, the AA makes the cart move following the guidance of the PA, as described in the next section; otherwise the AA drives the cart randomly. Once the cart controlled by the AA is locked because it has reached a suitable cluster, the AA has to migrate to find another free cart. Before that, the AA creates a PA if there is no PA on the cart, as shown in Fig. 4. As a result, the PA can begin to behave as a pheromone, as described in the next section.
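The following Java fragment is a rough sketch of the Ant agent's traversal logic described above; it is our own illustration, not the authors' code. The CartProxy interface, its methods, and the CartState enumeration are hypothetical names introduced for this example.

```java
// Hypothetical sketch of an Ant agent (AA) traversing carts by their IP addresses.
import java.util.List;

public class AntAgent {
    enum CartState { IN_USE, LOCKED, FREE }        // assumed cart states from the paper

    /** Assumed interface to cart control and agent migration. */
    interface CartProxy {
        void migrateTo(String ip);
        void migrateToHomeHost();
        CartState stateOf(String ip);
        boolean hasPheromoneAgent(String ip);
        void driveFollowingPheromone(String ip);
        void driveRandomly(String ip);
    }

    private final List<String> cartIps;            // IP list of all carts
    private final CartProxy proxy;

    public AntAgent(List<String> cartIps, CartProxy proxy) {
        this.cartIps = cartIps;
        this.proxy = proxy;
    }

    /** Visit carts one by one; take control of the first free cart found. */
    public void traverse() {
        for (String ip : cartIps) {
            proxy.migrateTo(ip);                         // agent migration to the cart
            CartState state = proxy.stateOf(ip);
            if (state == CartState.FREE) {
                if (proxy.hasPheromoneAgent(ip)) {
                    proxy.driveFollowingPheromone(ip);   // guided by the PA on this cart
                } else {
                    proxy.driveRandomly(ip);             // random walk until pheromone is found
                }
                return;                                   // this AA now controls the cart
            }
            // IN_USE or LOCKED carts are simply passed through.
        }
        proxy.migrateToHomeHost();                        // refresh the IP list after visiting all carts
    }
}
```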
4.2 Pheromone Agents
Since the purpose of ACC is to grow clusters, the number of objects in a cluster affects picking up and putting down an object. Such a property of a cluster is modeled by a pheromone attracting ants, as shown in Fig. 5. The pheromone has the following effects: 1. its effect increases in proportion to the number of objects, 2. its effect decreases in proportion to the distance, and 3. it has no effect outside its scope. These effects might seem to be computable from the number of objects and the distance between the objects and the cart, as provided by the camera of a moving cart, as shown in Fig. 7. However, in this manner the correct number of carts cannot be recognized, because carts in the back are hidden by carts in the front.
Fig. 7. Recognizing several robots
Fig. 8. Synthesis of pheromones
Based on these observations, we implement a pheromone as an agent called a Pheromone agent (PA). We describe its two properties, migration and fusion, below.

The Migration of PA
A PA is created by an AA on a locked cart, as shown in Fig. 4. Once a PA is created, it clones itself, and the new PA migrates to another cart, as shown in Fig. 6. A PA holds a vector value, which is computed by the PA itself after the migration as follows: 1. the PA migrates from cart A to cart B; 2. the PA on B observes A, where the PA was born, and obtains the distance between A and B and the direction from B to A; 3. the PA computes the vector value from the inverse of the distance and the direction.
Fig. 9. Synthesizing Pheromone agents
Considering the scope, the absolute value of the vector v is represented as follows:

|v| = C / distance   (within the scope)
|v| = 0              (outside the scope)

Here, C is a suitable constant.

The Fusion of PAs
The moving cart controlled by an AA can receive several PAs. In this case, the AA needs consistent guidance from the PAs, so several PAs are fused into a single PA. Since the datum carried by a PA is a vector value, the values can easily be synthesized, as shown in Fig. 9.
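As an illustration of the two properties above, here is a small Java sketch of how a Pheromone agent's vector value might be computed and how several pheromone vectors could be fused by vector addition. The class and method names, the constant C, and the scope value are assumptions of this sketch, not the authors' implementation.

```java
// Hypothetical sketch of a Pheromone agent's vector value and the fusion of several values.
public class PheromoneVector {
    private static final double C = 1.0;       // assumed strength constant
    private static final double SCOPE = 10.0;  // assumed pheromone scope (same distance unit)

    final double x;                            // vector pointing from the current cart
    final double y;                            // toward the locked cart where the PA was born

    PheromoneVector(double x, double y) { this.x = x; this.y = y; }

    /** Vector value computed after migration from the origin cart to the current cart. */
    static PheromoneVector fromMigration(double distance, double directionRadians) {
        double magnitude = (distance > SCOPE || distance <= 0.0) ? 0.0 : C / distance;
        return new PheromoneVector(magnitude * Math.cos(directionRadians),
                                   magnitude * Math.sin(directionRadians));
    }

    /** Fusion of several pheromone vectors into one, as a simple vector sum. */
    static PheromoneVector fuse(Iterable<PheromoneVector> vectors) {
        double sx = 0.0, sy = 0.0;
        for (PheromoneVector v : vectors) { sx += v.x; sy += v.y; }
        return new PheromoneVector(sx, sy);
    }
}
```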
5 Conclusions
We have proposed a framework for controlling mobile multiple robots connected by communication networks. In this framework, scattered mobile robots autonomously form several clusters based on the ant colony clustering (ACC) algorithm, which finds quasi-optimal positions for the robots to form the clusters. In our ACC algorithm, we introduced two kinds of mobile software agents, i.e., ant agents and pheromone agents. The ant agents represent the artificial ants that search for carts as objects and drive them to the quasi-optimal positions. The pheromone agents represent the pheromone and diffuse it by migration. In general, making mobile multiple robots perform ant colony optimization directly is impractical because of its enormous inefficiency, but our approach does not need robots corresponding to ants or other special devices, so it is not only efficient but also keeps energy consumption low. So far we are not aware of any other multi-robot system that integrates pheromone as a control means as Deneubourg envisaged in his seminal paper [4]. The preliminary experiments suggest favorable results. We will show the feasibility of our multi-robot system using pheromone-based ACC through further numerical experiments.
Acknowledgements. This work is supported in part by the Japan Society for the Promotion of Science (JSPS), through the basic research program (C) (No. 20510141), Grant-in-Aid for Scientific Research.
References 1. Kambayashi, Y., Takimoto, M.: Scheme implementation of the functional language for mobile agents with dynamic extension. In: Proceedings of IEEE International Conference on Intelligent Engineering Systems, pp. 151–156 (2005) 2. Takimoto, M., Mizuno, M., Kurio, M., Kambayashi, Y.: Saving energy consumption of multi-robots using higher-order mobile agents. In: Nguyen, N.T., Grzech, A., Howlett, R.J., Jain, L.C. (eds.) KES-AMSTA 2007. LNCS (LNAI), vol. 4496, pp. 549–558. Springer, Heidelberg (2007) 3. Nagata, T., Takimoto, M., Kambayashi, Y.: Suppressing the total costs of executing tasks using mobile agents. In: Proceedings of the 42nd Hawaii International Conference on System Sciences. IEEE Computer Society, Los Alamitos (2009) 4. Deneubourg, J., Goss, S., Franks, N.R., Sendova-Franks, A.B., Detrain, C., Chrétien, L.: The dynamics of collective sorting: Robot-like ant and ant-like robot. In: Proceedings of the First Conference on Simulation of Adaptive Behavior: From Animals to Animats, pp. 356–363. MIT Press, Cambridge (1991) 5. Kambayashi, Y., Ugajin, M., Sato, O., Tsujimura, Y., Yamachi, H., Takimoto, M., Yamamoto, H.: Integrating ant colony clustering to a multi-robot system using mobile agents. Industrial Engineering and Management Systems 8(3), 181–193 (2009) 6. Kambayashi, Y., Takimoto, M.: Higher-order mobile agents for controlling intelligent robots. International Journal of Intelligent Information Technologies 1(2), 28–42 (2005) 7. Lumer, E.D., Faieta, B.: Diversity and adaptation in populations of clustering ants, from animals to animats 3. In: Proceedings of the 3rd International Conference on the Simulation of Adaptive Behavior, pp. 501–508. MIT Press, Cambridge (1994) 8. Wang, T., Zhang, H.: Collective sorting with multi-robot. In: Proceedings of the First IEEE International Conference on Robotics and Biomimetics, pp. 716–720 (2004) 9. Hulaas, G., Binder, W., Villazon, A.: Portable resource control in the J-SEAL2 mobile agent system. In: Proceedings of International Conference on Autonomous Agents, pp. 222–223 (2001)
Mining Source Codes to Guide Software Development

Sheng-Kuei Hsu 1,2 and Shi-Jen Lin 1

1 National Central University, Jhongli 320, Taiwan, ROC
2 Nanya Institute of Technology, Jhongli 320, Taiwan, ROC
[email protected], [email protected]
Abstract. The reuse of software libraries and application frameworks is an important activity for rapid software development. However, due to rapid software changes, software libraries and application frameworks are usually not well documented. To deal with this issue, we have developed a tool, named MACs, that provides developers with efficient and effective access to API pattern databases formed from the relevant source files of a software development project. After an initial program statement is given, our MACs prototype can correctly predict useful, relevant API code snippets. In our evaluation, we present a study investigating the usefulness of MACs in software development tasks. Our experimental evaluation shows that MACs has significant potential to assist developers, especially project newcomers, and provides a method for code reuse from relevant source code files. Keywords: Mining source code, API usage pattern, Code reuse.
1 Introduction
The rise of information technology used in software development processes has specifically promoted the field of mining software repositories (MSR), which applies data mining techniques to software engineering data. From the perspective of information sources, there are three basic categories of information in a software repository that can be mined [1]: the metadata about the software, the differences between artifacts and versions, and the software artifacts and versions themselves. Due to the complexity of data extraction and preprocessing, there is very limited research on mining software artifacts, especially source code. Xie and Pei [2] propose an API (Application Programming Interface) usage mining framework and a supporting tool called MAPO. Given a query by developers that describes a method, class, or package of an API, the system can provide a short list of frequent API usages. Although there are attempts at processing source code artifacts, methods that reuse the results of mining to improve software development are still critically lacking. To explore the field of mining source code and improve the productivity of API newcomers, we have built a tool called MACs (Mining API Code snippets for code reuse) to provide developers with efficient and effective frequent API usage patterns mined from relevant source code files. MACs, a practical Eclipse plug-in tool based on these ideas, is available for download at: http://ciaweb.nanya.edu.tw/web/p/macs
2 Related Work
To facilitate the use of APIs, several researchers have proposed tools and approaches that mine code repositories to help developers reuse code. These approaches locate uses of code such as library APIs and attempt to match these uses to the needs of a developer working on a new piece of code. The first scholar to study this kind of approach was Michail [3], who describes how data mining can be used to discover library reuse patterns in existing applications. Since then, some progress has been made in the field. Jungloids [4] can help determine a possible call chain between a source type and a target type. Xie and Pei [2] describe a data mining approach that uses an API's usage history to identify call patterns; they also developed a supporting tool called MAPO. Given a query that describes a method, class, or package of an API, the tool can gather relevant source files and conduct data mining. XSnippet [5], a context-sensitive code assistant tool, was developed to find code fragments relevant to the programming task at hand. In addition, some studies [6,7,8,9] have been conducted to guide software changes by using data mining. In the field of mining source code for reuse, our MACs differs from other approaches in three main aspects. First, all of the studies mentioned above can only process a few specific statement types, such as method calls or chains of method calls, whereas our approach can process more than 17 types of statements. Second, previous research, including CodeWeb and MAPO, can be seen as code search engines that provide search services for API usage and show their results in simpler forms, such as browse-based listings, while our approach offers a context-sensitive coding environment. This environment not only allows the search of API usage but also generates the resulting code snippets according to the programming task at hand. Finally, all of the above-mentioned research aims to solve a specific problem, such as code search, while our approach is best understood as supporting light-weight rapid software development.
3 Approach
3.1 An Overview
The core idea of our approach is to recommend API code snippets, mined from relevant source code files, to a developer working on software development tasks. MACs can be viewed as a recommender system for software developers that draws its recommendations from databases of relevant API usage patterns. Fig. 1 schematically shows the workflow in MACs. MACs performs two distinct functions: forming the pattern database, and making recommendations and generating code. Forming the pattern database consists of three processes: relevant-file retrieval, preprocessing, and pattern mining. The developer provides one or more lines of code (called a query statement). For example, a field declaration "Connection conn" is a query statement. The source-file retriever first processes the query statement into an abstract form, such as (FD, java.sql.Connection, pkg.class:18). The first portion of the abstract form is the statement type: FD
means Field-Declaration. In the second portion, "Connection" expresses that it is a field declaration with the type Connection and the namespace java.sql. The last portion expresses that the statement is within a class scope under package pkg and appears at line 18. After parsing the query statement, the source-file retriever automatically retrieves relevant source code files from Koders.com [10], a source code control system (CVS), or previous projects. The preprocessing process extracts each statement from the source code files. The pattern mining process then mines the frequent API usage patterns, including association API usage patterns and sequential API usage patterns. At that point, it forms the pattern database.
Fig. 1. The major components and workflow in MACs system
The stage of making recommendations and generating code is composed of three processes: logical design, detailed design, and code refactoring. In object-oriented analysis and design, classes represent abstractions that specify the common structure and behavior of a set of objects. Similarly, during logical design, our approach assists in constructing the abstraction of the class on which we are working by retrieving the relevant association API usage patterns, such as field declarations or method declarations, and generating the code snippet. During detailed design, our approach focuses on the inside of a function (or so-called method) to create the sequence code snippets that fulfill the requirements, by drawing on the sequential API usage patterns and generating the code snippet. Finally, we put the refactoring process into the second stage to obtain a better structure of the software program. Refactoring is a technique for restructuring an existing body of code, altering its internal structure without changing its external behavior. After the logical design and detailed design are completed, a prototype class has been built. Several refactoring operations can be used in the refactoring process, including "extract method", "extract class", "move method", and so on.
3.2 Abstraction of Source Code
Consider the two field-declaration statements in the Java language: "Connection connection" and "Connection conn". Here, a field declaration is made with different object names; however, the two statements have the same meaning. To address this problem, we transfer the source code form (or simply source form) into a higher level of abstraction (or simply item form). The common item form is expressed as follows:

(Item_type, Item_name, Entity:Location)

For example, a typical item might be (FD, dom.ASTParser, p.Ex:05), where the item type is a field declaration (FD), and a field declaration statement of ASTParser appears at line 5 within class Ex, which is under the package p. Finally, the namespace of ASTParser was identified as dom. As another example, the item (MI, method_A(java.lang.String):void, p.class_B.method_C():130) indicates a method invocation method_A, with a parameter of type java.lang.String and a void return type, appearing in method method_C(), which is inside class class_B with namespace p. We consider 17 types of items in our research and show these types in Table 1. Although our approach can be applied to any object-oriented programming language, we shall, for concreteness, base our presentation on Java.

Table 1. The item types we have considered

Item  Description and Example
PD    Package declaration, "package foo.biz;"
ID    Import declaration, "import java.io.File;"
TD    Type declaration, "public class Example_Class {}"
FD    Field declaration, "private Connection conn;"
CI    Class instance creation, "new File();"
MD    Method declaration, "public File method(String s){}"
MI    Method invocation, "object.open(filename);"
VD    Local variable declaration, "String s;"
IF    Interface implementation, "implements Runnable{}"
ACD   Anonymous class declaration, "new Enumeration() {};"
AA    Array access, "file_array[i]++;"
AC    Array creation, "new File[n];"
CTI   Constructor invocation, "this(parameter);"
FA    Field access, "object.field_A=2;"
SCI   Super constructor invocation, "super(parameter);"
RT    Return statement, "return a_file_object;"
SC    Super class inheritance, "extends SuperClass{}"
3.3 From Source Code to Transactions
We define the concept of entities to represent the different blocks that contain code snippets. An entity is a triple (t, i, b), where t is the item type, i is the item, and b is the block to which the item belongs. For example, the entity (FD, javax.swing.JButton, pkga.CB:87)
represents the field declaration javax.swing.JButton of class classB in package pkga, and its position (line number) inside the file is 87. A transaction is the set of entities simultaneously used in a block, such as a class block or a method block. For example:

T = { (VD, ….core.ICompilationUnit, p.classA.m():40),
      (VD, ….core.dom.ASTParser, p.classA.m():97),
      (MI, ...dom.ASTParser.setKind(int), p.classA.m():155),
      (MI, ...dom.ASTParser.setSource(), p.classA.m():210),
      (MI, ...dom.ASTParser.setResolve..(), p.classA.m():260),
      ... }
The elements of a transaction are called items. Each item represents a way to use one or more specific APIs, and these are the basis for the later mining processes ("This statement was used; which other entities should I typically use?").

3.4 From Transactions to Rules
The goal of API code snippet mining is to mine usage patterns (rules), including association patterns and sequential patterns, from a given code sample repository that are relevant to the development task at hand.

A. Association Rule
The aim of the MACs tool is to mine rules from the transactions described in the previous section. Here is an example of such a rule:

{ (FD, dom.ASTParser, class_block) } => { (FD, dom.CompilationUnit, class_block), (MI, dom.ASTParser.setKind(int), method_block) }
This rule indicates that whenever developers use (FD, ASTParser) in a class block (i.e., declare an attribute), they should also declare an attribute of type CompilationUnit in the same block and invoke the method setKind() (of the ASTParser object) with an int parameter in a method block. Generally, association rule mining algorithms extract sets of items that occur frequently enough among the transactions in a database. In our context, such sets, called API usage patterns, refer to API code snippets that tend to be used together in a software development task. In our study, we investigate frequent association pattern mining for extracting API usage patterns from the relevant source files. The idea of frequent association pattern mining is to find recurring sets of items (API code snippets, in our context of finding API usage patterns) among the transactions (a class scope in association mining or a method scope in sequential mining) in a database D (a set of relevant source files). The strength of a pattern is measured by support count and confidence.
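To make the mining step concrete, here is a small, simplified Java sketch of frequent-itemset counting over such transactions (an Apriori-style support count restricted to pairs). It is only an illustration under our own simplifications; MACs itself may use different algorithms and data structures, and the item strings below are hypothetical.

```java
// Simplified illustration: count pair-wise co-occurrence support over transactions of items.
import java.util.*;

public class PairSupportMiner {
    /** Returns item pairs whose support count reaches minSupport. */
    public static Map<Set<String>, Integer> frequentPairs(List<Set<String>> transactions, int minSupport) {
        Map<Set<String>, Integer> counts = new HashMap<>();
        for (Set<String> t : transactions) {
            List<String> items = new ArrayList<>(t);
            for (int i = 0; i < items.size(); i++) {
                for (int j = i + 1; j < items.size(); j++) {
                    Set<String> pair = new HashSet<>(Arrays.asList(items.get(i), items.get(j)));
                    counts.merge(pair, 1, Integer::sum);
                }
            }
        }
        counts.values().removeIf(c -> c < minSupport);   // keep only frequent pairs
        return counts;
    }

    public static void main(String[] args) {
        // Two hypothetical class-scope transactions expressed as item strings.
        List<Set<String>> db = Arrays.asList(
            new HashSet<>(Arrays.asList("FD:dom.ASTParser", "FD:dom.CompilationUnit", "MI:setKind(int)")),
            new HashSet<>(Arrays.asList("FD:dom.ASTParser", "FD:dom.CompilationUnit")));
        System.out.println(frequentPairs(db, 2));
        // e.g. {[FD:dom.CompilationUnit, FD:dom.ASTParser]=2} (element order may vary)
    }
}
```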
B. Sequential Pattern
The API code snippet patterns mined by association rule mining can be used to create the structure of a class, including class attributes and method definitions. However, association rule mining does not consider the order of transactions. In a method block, such orderings are significant. For example, in an API calling sequence, developers call the method of an object after they declare the object. For these situations, association rules are not appropriate; sequential patterns are needed. In its construction of API sequences in a method block, MACs is based on sequential pattern mining, an association analysis technique used to discover frequent sequences in a sequence database [11]. Given the transactions translated from the statements in methods in the previous sections, we can create a sequence database S. The sequence database is a set of tuples ⟨SID, s⟩, where SID is a sequence ID and s is a sequence. In MACs, a sequence ID is produced by referring to the line number of a statement, and a sequence s refers to the statements inside a method block. A tuple ⟨SID, s⟩ is said to contain a sequence α if α is a subsequence of s. The support of a sequence α in a sequence database S is the number of tuples in the database containing α; that is,

support_S(α) = |{ ⟨SID, s⟩ | (⟨SID, s⟩ ∈ S) ∧ (α ⊆ s) }|.

It can be denoted as support(α) if the sequence database is clear from the context. Given a positive integer min_sup as the minimum support threshold, a sequence α is frequent in sequence database S if support_S(α) ≥ min_sup. That is, for sequence α to be frequent, it must occur at least min_sup times in S. A frequent sequence is called a sequential pattern, and a sequential pattern of length k is called a k-pattern. Here is an example of such a sequence in MACs:

α = ⟨s1 s2 s3 s5⟩, where s1 = (MI, dom.ASTParser.newParser(int):dom.ASTParser),

and s2, s3 and s5 represent other items (statements), respectively. There are four instances of items in the sequence α; therefore, it has a length of four and is called a 4-sequence. MACs uses a recently proposed sequential pattern mining algorithm called PrefixSpan (Prefix-projected Sequential Pattern mining) [11], a pattern-growth method that does not require candidate generation.

3.5 Querying and Ranking the API Code Snippets
In the logical design phase, MACs recommends the association API code snippets, and each recommendation provides a support and a confidence describing the strength of the relationship. We take the support value to order our recommendations, from largest to smallest. In the detailed design phase, MACs recommends the sequential API code snippets, and each recommendation includes several statements. A recommendation containing k statements is called a k-sequence. We take the product of k and the support value of the sequence in the sequential pattern database to rank the recommendations.
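The following Java fragment is a minimal illustration of the two quantities used here: the support of a sequence (counting database sequences that contain it as a subsequence) and the ranking score k × support for a k-sequence. It is a didactic sketch, not the PrefixSpan algorithm and not MACs's actual code.

```java
// Illustration of sequence support counting and the k * support ranking score.
import java.util.List;

public class SequenceRanking {
    /** True if 'alpha' is a (not necessarily contiguous) subsequence of 's'. */
    static boolean isSubsequence(List<String> alpha, List<String> s) {
        int i = 0;
        for (String item : s) {
            if (i < alpha.size() && alpha.get(i).equals(item)) i++;
        }
        return i == alpha.size();
    }

    /** support_S(alpha): number of sequences in the database containing alpha. */
    static int support(List<String> alpha, List<List<String>> database) {
        int count = 0;
        for (List<String> s : database) {
            if (isSubsequence(alpha, s)) count++;
        }
        return count;
    }

    /** Ranking score used for sequential recommendations: k * support. */
    static int rankingScore(List<String> alpha, List<List<String>> database) {
        return alpha.size() * support(alpha, database);
    }
}
```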
4 The MACs Tool
We have built a working MACs prototype that applies the approach and functionality described above. The MACs prototype is implemented as a plug-in for the Eclipse IDE. One of Eclipse's primary design goals was ease of extensibility, which means that MACs can be seamlessly integrated into the IDE and cooperate with Eclipse's other functions, such as refactoring, to form a light-weight coding strategy. MACs consists of two stages: forming the pattern database, and making recommendations and generating code snippets. The main task of the first stage is to retrieve the relevant source code files from Koders.com and preprocess them to be suitable as input to a data mining algorithm. At this point, we apply association rule mining and sequential rule mining algorithms to form API usage patterns. In the second stage, we address how to reuse the API association patterns to construct the blueprint of a class. At this stage, we first begin logical design. Two main statement types function as our starting points: field declarations and method declarations. Through querying the API association patterns, we can find API association usage patterns related to our program and then generate the program statements, from their pattern form, into our current program. The next task is detailed design. For this task, we repeat the previous task, but focus on querying the API sequential usage patterns to form our statement sequence in a method (function) block.
Fig. 2. A screenshot of MACs running in the task of querying and reusing the sequential pattern rules
The following example shows how we use MACs to program a Java class with the Eclipse ASTParser API in the stage of querying and reusing the sequential pattern rules. In this stage, MACs focuses on finding sequential pattern rules and outputting a set of code fragments into a method block (in the stage of querying association patterns, MACs outputs association pattern rules into a class scope). As shown in Fig. 2, we have the statement 'ASTParser.newParser(AST.JLS3);', which we obtained by querying the association pattern rules with the statement 'private ASTParser parser;'. We use it as the input to query the relevant statement sequences in which this statement appears. The example shows that we found several statement sequences ranked by their ranking scores. We choose the first recommendation, which includes five statements, and preview the recommendation in the form of code. Then we can modify the statements in the preview window to meet our needs and output the final source code into the workspace. At the end of the task, we have completed the class.
5 Evaluation
5.1 Sample and Criteria
To investigate the usefulness and accuracy of MACs for providing information relevant to a developer working on a software programming task, we evaluated MACs's recommendations on eight open-source projects. These projects were chosen to cover a wide range of applications on Sourceforge.net, including communication, desktop environment, education, Internet, security, software development, terminal, and text editor. For each project, we randomly selected a file for investigation. Each file we selected had to use a third-party API; if it did not, we randomly selected again until we found a file that fit the criteria. For each source file selected, we assumed that an API-newcomer developer would program it again. When the developer completed part of the program, including one statement for a specific API, we evaluated whether MACs could recommend the remaining statements of that specific API in the class scope (association pattern rules) or in the method scope (sequential pattern rules). Since we knew all the statements, including the specific statements we tested, we could evaluate how useful MACs's recommendations would have been by observing the overlap between the API statements it recommended and the actual solution set. This is a simplified measure of recommendation quality because it does not take into account the case of two or more APIs used in a class scope, but it gives us at least a conservative indication of MACs's usefulness for finding the relevant information. The performance of information retrieval systems is often evaluated in terms of recall and precision. In our context, precision and recall are defined in terms of a set of retrieved API statements (i.e., the list of API statements recommended by MACs for a query) and a set of relevant API statements (i.e., the list of all API statements within the evaluated source file that are relevant to a specific API).
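As a small illustration of these two measures, the following Java helper computes precision and recall from a set of recommended statements and a set of relevant ones; it is our own example, not part of the MACs tool.

```java
// Precision and recall over sets of recommended vs. relevant API statements.
import java.util.HashSet;
import java.util.Set;

public class RetrievalMetrics {
    static double precision(Set<String> recommended, Set<String> relevant) {
        if (recommended.isEmpty()) return 0.0;
        Set<String> hit = new HashSet<>(recommended);
        hit.retainAll(relevant);                       // recommended statements that are relevant
        return (double) hit.size() / recommended.size();
    }

    static double recall(Set<String> recommended, Set<String> relevant) {
        if (relevant.isEmpty()) return 0.0;
        Set<String> hit = new HashSet<>(relevant);
        hit.retainAll(recommended);                    // relevant statements that were recommended
        return (double) hit.size() / relevant.size();
    }
}
```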
5.2 Result
Fig. 3 shows the precision and recall values that resulted from applying our evaluation method to both investigations. In the investigation of API association patterns, the recall and precision of the top-10 recommendations are 0.85 and 0.6. This result reveals that MACs's API association recommendations are quite useful: they cover 85% of the relevant API statements within the top 10. In the investigation of API sequential patterns, the recall and precision of the first sequence are also quite high, at 0.82 and 0.85, respectively. This means the statements of the sequence cover 82% of the developer's needs, with a precision of 85%. When the sequence length was reduced, the recall was reduced too, but precision rose. Shorter sequences cannot cover all necessary statements, but they can more easily target the need.
Fig. 3. Recall versus precision plot showing the two results of the investigations
6 Conclusion
In this paper, we investigated how implicit API usage patterns mined from relevant source files can be used to facilitate development. To investigate our approach with realistic programming tasks, we developed MACs, an Eclipse plug-in that allows developers to query for relevant API code snippets from a set of relevant source files or past projects. Using MACs we can form such an API usage pattern database, and MACs can recommend appropriate usage patterns drawn from the database to API newcomers working on programming tasks. MACs is differentiated from most other approaches by three main aspects. Firstly, MACs can process more statement types than the tools in previous studies. Secondly, MACs not only allows the search of API usage patterns, but also generates relevant code snippets, whereas other tools only provide results in simpler forms, such as browse-based listings. Thirdly, our approach and system can be seen as an approach for light-weight rapid software development. We also presented two studies of MACs's effectiveness in such situations. The two studies support our claim of MACs's usefulness to API newcomers engaged in
software development tasks. These results also raise some wider questions regarding the potential limits of using frequent pattern mining to learn from the past. For example, if we set a higher minimum support or have fewer useful source files, some useful patterns will be missed.
References 1. Kagdi, H., Collard, M.L., Maletic, J.I.: A survey and taxonomy of approaches for mining software repositories in the context of software evolution. Journal of Software Maintenance and Evolution-Research and Practice 19(2), 77–131 (2007) 2. Xie, T., Pei, J.: MAPO: mining API usages from open source repositories. In: Proceedings of the 2006 international workshop on Mining software repositories. ACM Press, New York (2006) 3. Michail, A.: Data mining library reuse patterns using generalized association rules. In: Proceedings of the 2000 International Conference on Software Engineering, pp. 167–176. IEEE Press, Los Alamitos (2000) 4. Mandelin, D., Xu, L., Bod, R., Kimelman, D.: Jungloid mining: helping to navigate the API jungle. In: Proceedings of the 2005 ACM SIGPLAN Conference on Programming language design and implementation. ACM Press, New York (2005) 5. Sahavechaphan, N., Claypool, K.: XSnippet: mining For sample code. In: Proceedings of the 21st annual ACM SIGPLAN Conference on Object-oriented programming systems, languages, and applications. ACM Press, New York (2006) 6. Gall, H.C., Fluri, B., Pinzger, M.: Change Analysis with Evolizer and ChangeDistiller. IEEE Software 26(1), 26–33 (2009) 7. Koch, S.: Software evolution in open source projects - a large-scale investigation. Journal of Software Maintenance and Evolution-Research and Practice 19(6), 361–382 (2007) 8. Purushothaman, R., Perry, D.E.: Toward understanding the rhetoric of small source code changes. IEEE Transactions on Software Engineering 31(6), 511–526 (2005) 9. Sunghun, K., Whitehead, E.J., Yi, Z.: Classifying Software Changes: Clean or Buggy? IEEE Transactions on Software Engineering 34(2), 181–196 (2008) 10. Koders.com, a free on-line search engine, http://www.koders.com 11. Jian, P., Jiawei, H., Mortazavi-Asl, B., Jianyong, W., Pinto, H., Qiming, C.: Mining sequential patterns by pattern-growth: the PrefixSpan approach. IEEE Transactions on Knowledge and Data Engineering 16(11), 1424–1440 (2004)
Forecasting Stock Market Based on Price Trend and Variation Pattern

Ching-Hsue Cheng 1, Chung-Ho Su 1,2, Tai-Liang Chen 3,*, and Hung-Hsing Chiang 1

1 Department of Information Management, National Yunlin University of Science and Technology, 123, Section 3, University Road, Touliu, Yunlin 640, Taiwan
{chcheng,g9823809,g9523715}@yuntech.edu.tw
2 Department of Digital Technology and Game Design, Shu-Te University, 59 Hun Shan Rd., Yen Chau, Kaohsiung County, Taiwan 82445
[email protected]
3 Department of Information Management and Communication, Wenzao Ursuline College of Languages, 900, Mintsu 1st Road, Kaohsiung 807, Taiwan, ROC
[email protected]
Abstract. Since people cannot accurately predict what will happen in the next moment, forecasting what will occur has been one of the most challenging issues in many areas, especially in stock market forecasting. Whenever reasonable predictions with less bias are produced by investors, great profit can be made. With the emergence of artificial intelligence (AI) algorithms in recent years, they have played an important role in helping people forecast the future. In the stock market, many forecasting models have been advanced by academic researchers to forecast stock prices, such as time series, technical analysis and fuzzy time-series models. However, there are some drawbacks in the past models: (1) strict statistical assumptions are required; (2) subjective human judgments are involved in the forecasting procedure; and (3) a proper threshold is not easy to find. For the reasons above, a novel forecasting model based on variation and trend patterns is proposed in this paper. To verify the forecasting performance of the proposed model, the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) of the year 2000 is used as the experimental dataset, and two past fuzzy time-series models are used as comparison models. The comparison results show that the proposed model outperforms the listed models in accuracy and stability. Keywords: Stock price trend; Price pattern; Time series model; Stock market forecasting.
1 Introduction
Many stock investors have lost all their property through wrong investments in periods of bear or unstable markets, where there are large variations in stock price, because they lack sufficient market information or accurate forecasting tools.
* Corresponding author.
Accurate and complete market news is very important in modern stock markets because electronic transactions have accelerated the broadcasting speed of market news, whether good or bad, and therefore the ability to respond quickly to this news is the key for investors to make profits. As a result, finding an accurate and fast forecasting tool is regarded as a challenging task, and many researchers have concentrated on finding good forecasting models that obtain higher accuracy in predicting stock prices. In the area of stock forecasting, we argue that there are three categories of forecasting methods that help investors obtain adequate information and produce forecasts: (1) statistical methods such as the autoregressive moving average (ARMA) [1] and the autoregressive integrated moving average (ARIMA) [1]; (2) technical analysis such as Elliott's wave theory [2]; and (3) artificial intelligence such as artificial neural networks [3], genetic algorithms [4], support vector machines [5], fuzzy logic theory [6] and fuzzy time-series [7-10]. However, there are some advantages and disadvantages in these techniques, as listed below: (1) strict assumptions about the data distribution are required in time series models; (2) subjective human judgments from financial analysts are required in technical analysis to produce forecasts; and (3) an optimal threshold, which makes the performance reach its best, is not easily generated by artificial intelligence methods. To overcome these drawbacks, this paper proposes one new simple model based on pattern analysis to forecast the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) accurately. Pattern analysis is adopted as the main concept of the proposed model, which employs an easily understandable method to reveal the hidden trend and variation patterns in historical stock data. Many financial researchers believe that there are information and patterns underlying stocks that can support financial forecasting [11]. With this pattern-analysis-based forecasting method, the proposed model can learn the stock patterns from a past period of stock data and make reliable forecasts for future stock prices.
2 Preliminaries
Technical Analysis
The foundation theory of technical analysis mentioned by researchers is wave theory. As the pioneer, Elliott (1938) published his Wave Theory, which was devised to help explain why and where certain chart patterns develop [2]. The Elliott Wave Principle has played an important role in stock technical analysis for more than six decades. Technical analysis can be defined simply as the study of individual securities and the overall market based on supply and demand. Technicians record, usually in chart form, historical price and volume activity and deduce from that pictured history the probable future trend of prices [12]. The basic belief of technical analysis is that the wave theory illustrates that historical patterns will be repeated within a certain time period.
Therefore, many analysis tools have been built to produce analysis results, such as chart patterns, moving averages, the relative strength index, and many technical market indicators, and they are most often represented by charts such as bar charts, line charts, point-and-figure charts, and candlestick charts.

Pattern Analysis
Many researchers have developed pattern recognition techniques and the identification of patterns for financial forecasting [11]. There are some hidden indicators and patterns underlying stocks [13], and up to 47 different chart patterns can be identified in stock charts [14]. However, there are two difficulties in the analysis and identification of patterns [11]. Firstly, there exists no single time scale that works for all purposes. Secondly, every stock chart may exhibit countless different patterns, which may contain sub-patterns as well. Therefore, we argue that the stock price trend and variation between two or several consecutive days are an easy basis for pattern analysis.

Fuzzy Time Series
Fuzzy theory was originally developed to deal with problems involving human linguistic terms [15-17]. In 1993, Song and Chissom proposed the definitions of fuzzy time-series and methods to model fuzzy relationships among observations [18]. To obtain better execution performance and forecasting results, Chen proposed an arithmetic approach to improve Song's model [19]. In subsequent research, many researchers have proposed their own fuzzy time series models [7-10]. In this paper, Song and Chissom's definitions [18] and the algorithm of Chen's model [19] are used to introduce the fuzzy time series model.

Definition 1: fuzzy time-series
Let Y(t) (t = …, 0, 1, 2, …), a subset of real numbers, be the universe of discourse on which fuzzy sets fj(t) are defined. If F(t) is a collection of f1(t), f2(t), …, then F(t) is called a fuzzy time-series defined on Y(t).

Definition 2: fuzzy time-series relationships
Assuming that F(t) is caused only by F(t-1), the relationship can be expressed as F(t) = F(t-1) * R(t, t-1), which is the fuzzy relationship between F(t) and F(t-1), where "*" represents an operator. To sum up, let F(t-1) = Ai and F(t) = Aj. The fuzzy logical relationship between F(t) and F(t-1) can be denoted as Ai → Aj, where Ai refers to the left-hand side and Aj refers to the right-hand side of the fuzzy logical relationship. Furthermore, these fuzzy logical relationships (FLRs) can be grouped to establish different fuzzy relationships.
The Algorithm of Chen's Model
Step 1: Define the universe of discourse and the intervals for rule abstraction. Based on the problem domain, the universe of discourse can be defined as U = [starting, ending]. Once the length of the intervals is determined, U can be partitioned into several equal-length intervals.
Step 2: Define fuzzy sets based on the universe of discourse and fuzzify the historical data.
Step 3: Fuzzify the observed rules. For example, a datum is fuzzified to Aj if the maximal degree of membership of that datum is in Aj.
Step 4: Establish fuzzy logical relationships (FLRs) and group them based on the current states of the data of the fuzzy logical relationships. For example, A1 → A2, A1 → A1, A1 → A3 can be grouped as: A1 → A1, A2, A3.
Step 5: Forecast. Let F(t−1) = Ai. Case 1: There is only one fuzzy logical relationship in the fuzzy logical relationship sequence. If Ai → Aj, then F(t), the forecast value, is equal to Aj. Case 2: If Ai → Ai, Aj, …, Ak, then F(t), the forecast value, is equal to Ai, Aj, …, Ak.
Step 6: Defuzzify. Apply the "centroid" method to obtain the results of this algorithm. This procedure (also called center of area or center of gravity) is the most often adopted method of defuzzification.
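For illustration, the sketch below implements a much-simplified version of this forecasting procedure in Java (equal-length intervals, first-order FLR groups, and defuzzification by averaging the midpoints of the consequent intervals). It follows the steps above only loosely and should not be read as a faithful reproduction of Chen's model; all names are ours.

```java
// Simplified first-order fuzzy time-series forecast in the spirit of the steps above.
import java.util.*;

public class SimpleFuzzyTimeSeries {
    /** Forecast the value following the last observation in 'prices'. */
    static double forecastNext(double[] prices, int numIntervals) {
        double min = Arrays.stream(prices).min().getAsDouble();
        double max = Arrays.stream(prices).max().getAsDouble();
        double width = (max - min) / numIntervals;                 // Step 1: equal-length intervals

        // Steps 2-3: fuzzify each observation to the index of the interval it falls into.
        int[] fuzzified = new int[prices.length];
        for (int t = 0; t < prices.length; t++) {
            int idx = (int) ((prices[t] - min) / width);
            fuzzified[t] = Math.min(idx, numIntervals - 1);
        }

        // Step 4: build fuzzy logical relationship groups Ai -> {Aj, ...}.
        Map<Integer, Set<Integer>> flrGroups = new HashMap<>();
        for (int t = 1; t < prices.length; t++) {
            flrGroups.computeIfAbsent(fuzzified[t - 1], k -> new LinkedHashSet<>()).add(fuzzified[t]);
        }

        // Steps 5-6: forecast from the group of the last state; defuzzify with interval midpoints.
        int lastState = fuzzified[prices.length - 1];
        Set<Integer> consequents = flrGroups.getOrDefault(lastState, Set.of(lastState));
        double sum = 0.0;
        for (int j : consequents) {
            sum += min + (j + 0.5) * width;                        // midpoint of interval Aj
        }
        return sum / consequents.size();
    }
}
```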
3 The Proposed Model
From the literature noted above, three major drawbacks are identified as follows. Firstly, strict assumptions about the data distribution are required in statistical time series models: the time series variables are assumed to be independent of each other and identically distributed as normal random variables, and it should be tested whether the variables are stationary or not. Secondly, subjective human judgments from financial analysts are required in technical analysis to produce forecasts. The forecast result differs from analyst to analyst and, therefore, an objective prediction cannot be produced by technical analysis, which might result in terrible forecasts with huge financial losses. Lastly, an optimal threshold, which makes the performance reach its best, is not easily generated by forecasting methods based on artificial intelligence algorithms (i.e., fuzzy time-series). In order to reconcile the problems above, a forecasting model based on price trend and variation patterns is proposed, and three refined concepts are factored into the forecasting procedure as follows: (1) mathematical assumptions such as stationarity and variable independence are ignored; (2) only objective fluctuations in the stock market,
such as price trends and variations, are the basis for the proposed model; and (3) a simple threshold searching method is provided to produce reasonable forecasts. To implement these concepts, the proposed algorithm is introduced as follows.

Step 1: select the data set for training and testing
A one-year period of the stock index is used as one unit of experimental dataset. The dataset is divided into two sub-datasets: (1) the previous 10-month period is used as the training dataset, and (2) the remaining 2-month period, from November to December, is used for testing [7-10].

Step 2: determine the number of lags in the dataset
In this step, orders from one to four are evaluated to determine which number of time lags fits the experimental dataset best. Empirically, stock investors in the Taiwan stock market prefer short-term investment based on recent stock information such as the latest market news, technical analysis reports and price fluctuations [10]. The appropriate pattern for the TAIEX should therefore consider the price trends and variations within 5 consecutive days (containing 4 orders). Hence, orders from 1 to 4 are used to search for the optimal order for the proposed model to forecast stock prices.

Step 3: produce price variations and trends between consecutive days
In this step, the price variation, based on the optimal order from Step 2, and the trend between two consecutive days are produced as forecasting factors. The variation V(t) is defined in equation (1), and the trend is defined as the sign of the variation, such as "+" or "-", in equation (2).

V(t) = P(t) − P(t − 1)    (1)

Sign(t) = the sign of P(t) − P(t − 1)    (2)
where P(t) denotes the stock price at time t.

Step 4: calculate the difference between the chosen pattern and the historical patterns
In this step, the "pattern differences" between a chosen pattern and the historical patterns are produced one by one. In order to compute the difference, the Euclidean distance (REF) is employed to represent the "pattern difference", defined in equation (3).

D(ta, tb) = d(V(ta), V(tb)) = √( (V(ta) − V(tb))² )    (3)
where V(ta) is the variation at time ta, V(tb) is the variation at time tb, and D(ta, tb) is the difference between V(ta) and V(tb).
Step 5: tune a threshold to select patterns for forecasting
In this step, the n most similar patterns, V(t1), V(t2), …, V(ti), …, V(tn), i.e., those with the lowest differences, are selected as forecasting factors to produce an initial prediction. An average method is utilized to produce the initial forecasts, using equation (4) and equation (5).

Initial_forecast_pattern(t + 1) = (1/n) · Σ_{i=1}^{n} Sign(ti + 1) × V(ti + 1)    (4)
where V(ti + 1) is the forecasting variation based on the similar pattern V(ti), Sign(ti + 1) is the forecasting trend based on the similar pattern Sign(ti), and Initial_forecast_pattern(t + 1) is the average forecasting pattern for the future pattern.

Initial_forecast(t + 1) = Initial_forecast_pattern(t + 1) + P(t)    (5)
where P(t) is the present stock price at time t and Initial_forecast(t + 1) is the forecast value for the future stock price.

Definition of Threshold
In this model, the threshold value is determined by the number of selected patterns, n, as defined by equation (6):

Threshold = (n / N) × 100%    (6)
where n is the number of selected patterns and N is the total number of variations in the training dataset.

Definition of Error Indicator
The threshold value is varied from 1% to 20% in steps of 1% to compute the forecasting error and find the optimal threshold; the optimal value is the one at which the minimum forecasting error is reached. This paper employs the mean absolute percentage error (MAPE), defined in equation (7) (Pai & Lin, 2004), as the forecasting error indicator.

MAPE = (100 / N) · Σ_{t=1}^{N} |dt − zt| / dt    (7)
where N is the number of forecasting periods, dt is the actual stock price at period t, and zt is the forecast stock price at period t.
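To make the whole procedure concrete, the following Java sketch strings Steps 3-5 together for a first-order pattern (a single-day variation): it computes the variations, selects the n most similar historical variations by absolute difference, averages their next-day signed variations, and adds the result to the current price. It is our own simplified reading of the algorithm, with the threshold passed in directly; it is not the authors' implementation.

```java
// Simplified sketch of the proposed trend/variation-pattern forecast (first-order case).
import java.util.*;

public class PatternForecaster {
    /** Forecast the price at time t+1 from prices[0..t], using the given threshold (0 < threshold <= 1). */
    static double forecastNext(double[] prices, double threshold) {
        int t = prices.length - 1;
        double[] variation = new double[t];                 // V(k+1) = P(k+1) - P(k), k = 0..t-1
        for (int k = 0; k < t; k++) {
            variation[k] = prices[k + 1] - prices[k];
        }
        double current = variation[t - 1];                  // the chosen (most recent) pattern

        // Rank historical variations (those that still have a "next day") by distance to the current one.
        Integer[] candidates = new Integer[t - 1];
        for (int k = 0; k < t - 1; k++) candidates[k] = k;
        Arrays.sort(candidates, Comparator.comparingDouble(k -> Math.abs(variation[k] - current)));

        int n = Math.max(1, (int) Math.round(threshold * (t - 1)));   // equation (6), inverted
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            sum += variation[candidates[i] + 1];            // signed next-day variation of a similar pattern
        }
        return prices[t] + sum / n;                         // equations (4) and (5)
    }

    /** Mean absolute percentage error, equation (7). */
    static double mape(double[] actual, double[] forecast) {
        double total = 0.0;
        for (int i = 0; i < actual.length; i++) {
            total += Math.abs(actual[i] - forecast[i]) / actual[i];
        }
        return 100.0 * total / actual.length;
    }
}
```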
4 Model Verification
In this paper, the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) of the year 2000 is selected as the experimental dataset, where the previous 10-month
period is used as the training sub-dataset and the rest of the period, from November to December, is selected for testing [7-10]. The experimental dataset is collected from the TSEC (Taiwan Stock Exchange Corporation) [20]. To evaluate the performance of the proposed model, two fuzzy time series models, Chen's [19] and Huarng & Yu's [7], are employed as comparison models. Although the RMSE is a common indicator for measuring the forecasting performance of fuzzy time-series models [7-10], many researchers have used the MAPE to evaluate the forecasting accuracy of their models [21-22]. To simplify the performance data, therefore, the MAPE is used as the performance indicator in this paper. The forecasting performances (forecasts and forecasting error percentages) of the three models are listed in Table 1.
Table 1. Comparisons of the forecasts by three models for the TAIEX of year 2000

Date        Actual Stock Index  Chen's Forecast  Error %  Huarng & Yu's Forecast  Error %  Proposed Model's Forecast  Error %
2000/11/2   5626.08   5300   5.80%    5340     5.08%   5425.79   3.56%
2000/11/3   5796.08   5750   0.80%    5721.7   1.28%   5623.45   2.98%
2000/11/4   5677.3    5450   4.00%    5435     4.27%   5794.80   2.07%
2000/11/6   5657.48   5750   1.64%    5721.7   1.13%   5678.92   0.38%
2000/11/7   5877.77   5750   2.17%    5721.7   2.66%   5657.15   3.75%
2000/11/8   6067.94   5750   5.24%    5760     5.07%   5875.31   3.17%
2000/11/9   6089.55   6075   0.24%    6062     0.44%   6066.34   0.38%
2000/11/10  6088.74   6075   0.23%    6062     0.43%   6089.74   0.02%
2000/11/13  5793.52   6075   4.86%    6062     4.64%   6088.78   5.10%
2000/11/14  5772.51   5450   5.59%    5435     5.85%   5796.28   0.41%
2000/11/15  5737.02   5450   5.00%    5435     5.26%   5772.81   0.62%
2000/11/16  5454.13   5450   0.08%    5435     0.35%   5737.16   5.19%
2000/11/17  5351.36   5300   0.96%    5340     0.21%   5456.64   1.97%
2000/11/18  5167.35   5350   3.53%    5350     3.53%   5352.31   3.58%
2000/11/20  4845.21   5150   6.29%    5150     6.29%   5169.12   6.69%
2000/11/21  5103      4850   4.96%    4850     4.96%   4848.17   4.99%
2000/11/22  5130.61   5150   0.38%    5150     0.38%   5100.47   0.59%
2000/11/23  5146.92   5150   0.06%    5150     0.06%   5130.61   0.32%
2000/11/24  5419.99   5150   4.98%    5150     4.98%   5146.74   5.04%
2000/11/27  5433.78   5300   2.46%    5340     1.73%   5417.68   0.30%
2000/11/28  5362.26   5300   1.16%    5340     0.42%   5433.87   1.34%
2000/11/29  5319.46   5350   0.57%    5350     0.57%   5363.60   0.83%
2000/11/30  5256.93   5350   1.77%    5350     1.77%   5319.72   1.19%
2000/12/1   5342.06   5250   1.72%    5250     1.72%   5256.93   1.59%
2000/12/2   5277.35   5350   1.38%    5350     1.38%   5340.93   1.20%
2000/12/4   5174.02   5250   1.47%    5250     1.47%   5277.35   2.00%
2000/12/5   5199.2    5150   0.95%    5150     0.95%   5174.88   0.47%
462
C.-H. Cheng et al. Table 1. (continued)
Date
Actual Stock Index
Chen's Forecast
Error %
2000/12/6 2000/12/7 2000/12/8 2000/12/11 2000/12/12 2000/12/13 2000/12/14 2000/12/15 2000/12/16 2000/12/18 2000/12/19 2000/12/20 2000/12/21 2000/12/22 2000/12/26 2000/12/27 2000/12/28 2000/12/29 2000/12/30
5170.62 5212.73 5252.83 5284.41 5380.09 5384.36 5320.16 5224.74 5134.1 5055.2 5040.25 4947.89 4817.22 4811.22 4721.36 4614.63 4797.14 4743.94 4739.09
5150 5150 5250 5250 5250 5350 5350 5350 5250 5150 5450 5450 4950 4850 4850 4750 4650 4750 4750
0.40% 1.20% 0.05% 0.65% 2.42% 0.64% 0.56% 2.40% 2.26% 1.88% 8.13% 10.15% 2.76% 0.81% 2.72% 2.93% 3.07% 0.13% 0.23%
MAPE
Huarng &Yu’s Forecast 5150 5150 5250 5250 5250 5350 5350 5350 5250 5150 5405 5405 4950 4850 4850 4750 4650 4750 4750
2.43%
Error % 0.40% 1.20% 0.05% 0.65% 2.42% 0.64% 0.56% 2.40% 2.26% 1.88% 7.24% 9.24% 2.76% 0.81% 2.72% 2.93% 3.07% 0.13% 0.23% 2.36%
Proposed Model’s Forecast 5198.86 5171.14 5211.79 5252.12 5283.80 5379.04 5384.19 5321.33 5225.38 5134.96 5055.87 5039.99 4948.51 4818.37 4810.58 4721.98 4615.60 4795.18 4744.84
Error % 0.55% 0.80% 0.78% 0.61% 1.79% 0.10% 1.20% 1.85% 1.78% 1.58% 0.31% 1.86% 2.73% 0.15% 1.89% 2.33% 3.78% 1.08% 0.12% 1.84%
Table 1 shows that the forecasting error of Chen's model ranges from 0.05% to 10.15%, that of Huarng & Yu's model from 0.05% to 9.24%, and that of the proposed model from 0.02% to 5.19%. The proposed model thus has the smallest forecasting error range of the three models, which means that it performs more stably than Chen's and Huarng & Yu's models. Moreover, the overall performance results listed in Table 1 (the MAPE of Chen's model is 2.43%, of Huarng & Yu's model 2.36%, and of the proposed model 1.84%) indicate that the proposed model also outperforms the other two forecasting models in average accuracy. It is therefore apparent that the proposed model performs better than Chen's and Huarng & Yu's models in both forecasting accuracy and stability.
5 Experimental Findings and Future Work
A novel forecasting model based on price trends and variation patterns has been proposed in this paper, and the performance comparisons with fuzzy time-series models (see Table 1) show clearly that the proposed model outperforms Chen's and Huarng & Yu's models in accuracy and stability. The explanation for the comparison results
may be that the two complex computation procedures used by fuzzy time series, fuzzification and defuzzification, make their forecasts less accurate and stable, whereas the price trends and variations used in the proposed model provide more information for producing forecasts. From the model verification, we argue that the proposed model has three advantages: (1) the best forecasting results can easily be obtained by tuning the threshold value appropriately; (2) using "stock price trends and variations" provides more accurate forecasts than a forecasting model using the "stock price" alone; and (3) no mathematical assumptions about the data distribution of the variables are required, and the proposed algorithm is easy to implement in a computer system. In future work, a further experiment using a longer period of the TAIEX, or other datasets such as the HSI (Hang Seng Index) or the DJI (Dow Jones Industrial Average), should be carried out to verify the experimental findings of this paper. Additionally, more statistical time-series models, such as the autoregressive (AR) and autoregressive moving average (ARMA) models, should be used as comparison models to examine the superiority of the proposed model.
References
[1] Box, G.E.P., Jenkins, G.: Time series analysis: Forecasting and control. Holden-Day, San Francisco (1976)
[2] Jordan, K.J.: An Introduction to the Elliott Wave Principle. Alchemist 40, 12–14 (2004)
[3] Gately, E.: Neural Networks for Financial Forecasting. Wiley, New York (1996)
[4] Chen, S.M., Chung, N.Y.: Forecasting enrollments using high-order fuzzy time series and genetic algorithms. International Journal of Intelligent Systems 21, 485–501 (2006)
[5] Pai, P.F., Lin, C.S.: A hybrid ARIMA and support vector machines model in stock price forecasting. Omega 33, 497–505 (2004)
[6] Ture, M., Kurt, I.: Comparison of four different time series methods to forecast hepatitis A virus infection. Expert Systems with Applications 31, 41–46 (2006)
[7] Huarng, K., Yu, H.K.: A Type 2 fuzzy time series model for stock index forecasting. Physica A 353, 445–462 (2005)
[8] Cheng, C.H., Chen, T.L., Chiang, C.H.: Trend-weighted fuzzy time series model for TAIEX forecasting. In: King, I., Wang, J., Chan, L.-W., Wang, D. (eds.) ICONIP 2006. LNCS, vol. 4234, pp. 469–477. Springer, Heidelberg (2006)
[9] Chen, T.L., Cheng, C.H., Teoh, H.J.: Fuzzy Time-Series Based on Fibonacci Sequence for Stock Price Forecasting. Physica A: Statistical Mechanics and its Applications 380, 377–390 (2007)
[10] Chen, T.L., Cheng, C.H., Teoh, H.J.: High Order Fuzzy Time-series Based on Multi-period Adaptation Model for Forecasting Stock Markets. Physica A: Statistical Mechanics and its Applications 387, 876–888 (2008)
[11] Liu, N.K.J.W., Kwong, M.R.: Automatic extraction and identification of chart patterns towards financial forecast. Applied Soft Computing 7, 1197–1208 (2007)
[12] Meyers, T.A.: The technical analysis course: A winning program for investors & traders. Probus Pub., Chicago (1994)
[13] Plummer, T.: Forecasting financial markets. Kogan Page Ltd. (1993)
[14] Thomas, N.B.: Encyclopedia of chart patterns. John Wiley & Sons, Chichester (2000)
[15] Zadeh, L.A.: The concept of a linguistic variable and its application to approximate reasoning I. Information Science 8, 199–249 (1975)
[16] Zadeh, L.A.: The concept of a linguistic variable and its application to approximate reasoning II. Information Science 8, 301–357 (1975)
[17] Zadeh, L.A.: The concept of a linguistic variable and its application to approximate reasoning III. Information Science 9, 43–80 (1976)
[18] Song, Q., Chissom, B.S.: Fuzzy time-series and its models. Fuzzy Sets and Systems 54, 269–277 (1993)
[19] Chen, S.M.: Forecasting enrollments based on fuzzy time-series. Fuzzy Sets and Systems 81, 311–319 (1996)
[20] Taiwan Stock Exchange Corporation, http://www.twse.com.tw
[21] Hanke, J.E., Reitsch, A.G.: Business forecasting. Prentice-Hall, New Jersey (1995)
[22] Bowerman, B.L., O'Connell, R.T., Koehler, A.B.: Forecasting time series and regression: An applied approach. Thomson Brooks/Cole, Belmont (2004)
Intelligent Prebuffering Using Position Oriented Database for Mobile Devices
Ondrej Krejcar
VSB Technical University of Ostrava, Center for Applied Cybernetics, Department of Measurement and Control, 17. Listopadu 15, 70833 Ostrava Poruba, Czech Republic
[email protected]
Abstract. This paper describes the concept of the PDPT Framework, with a main focus on the Position Oriented Database on the server and on mobile devices. The problem of low data throughput on mobile devices, which the PDPT Framework addresses, is also described. Selected details from the development of the Position Oriented Database on mobile devices running the Windows Mobile operating system are shown as C# program code and as the database structure on the server and mobile sides. Localization and user tracking are described only as a necessary precondition for prebuffering, because the PDPT Core decides when and which artifacts (large data files) need to be prebuffered. Every artifact is stored along with its position information (e.g. within a building or a larger area). Accessing prebuffered data artifacts on the mobile device improves the download speed and the response time needed to view large multimedia data. The conditions for real deployment in corporate areas are discussed at the end of the paper, together with the problems that must be solved beforehand. Keywords: Prebuffering; Response Time; Downlink Speed; Mobile Device; SQL Server CE; Position Oriented Database.
1 Introduction
Mobile wireless devices such as laptops, PDAs or smartphones are commonly used with an internet connection, which is available almost everywhere and at any time these days. The connection speed of the two most common standards, GPRS and WiFi, varies from hundreds of kilobits to several megabits per second. In the case of corporate information systems, or of other facility-management, zoological or botanical garden, library or museum information systems, a WiFi infrastructure network is often used to connect mobile clients to a server. Unfortunately, the theoretical maximum connection speed is only achievable on laptops, where high-quality components are used (in comparison to mobile devices). Other mobile devices such as consumer PDAs or smartphones have lower-quality components due to very limited space. The limited connection speed is a problem for online systems that use large artifact data files, because these artifacts cannot be preloaded before the mobile device is used in remote-access state. This problem was found to be a very
important point. The rest of this paper specifies the problem and suggests a possible solution. The goal is to complement the data networking capabilities of RF wireless LANs [3], [6] with accurate user location and tracking capabilities for prebuffering the data a user will need. This capability is used as an information base for extending existing information systems or for creating new ones. Information about location is used to determine both the actual and the future position of a user. A number of experiments with the information system have been performed, and their results suggest that determination of the location should be a main focus. The following sections also describe the conceptual and technical details of the Predictive Data Push Technology (PDPT) Framework.
1.1 The Low Data Throughput Problem
The real downlink of a WiFi network is about 160 kB/s for modern PDA devices. More details on the slow transfer speeds of mobile devices can be found in [7], [15], [16], [17], [18]. The primary dataflow can be increased by data prebuffering. The selection of data objects to be buffered into the mobile device cache is made on the basis of the position of the user's device: for every position in the area where prebuffering is performed, there exists a set of objects whose positions are relevant to that user position. The PDPT Core pushes the data from the SQL database (the WLA database [Fig. 3]) to the client's PDA on the basis of the PDPT Core decision algorithms. The benefit of PDPT is the reduction of the time delay needed to display artifacts requested by a user on the PDA. First of all, the maximum response time of the application (PDPT Client) acceptable to the user must be specified. Nielsen [4] specified this time delay as 10 seconds [5]; during this time the user stays focused on the application and is willing to wait for an answer. The Nielsen book is the basic literature on this phenomenon. The findings of Galletta, Henry, McCoy and Polak (2002) suggest that 'decreases in performance and behavioral intentions begin to flatten when the delays extend to 4 seconds or longer, and attitudes flatten when the delays extend to 8 seconds or longer'. The time period of 10 seconds is used in the PDPT Framework to calculate the maximum possible size of a file transferred from the server to the client during this period. For a transfer speed of 160 kB/s the resulting file size is 1600 kB. The next step was the definition of an average artifact size. The network architecture building plan was used as a sample database; it contained 100 files with an average size of 470 kB. The client application can therefore download 2 to 3 artifacts during the 10-second period. A remaining problem is the long delay in displaying artifacts stored in some original file types (e.g. AutoCAD for vector graphics, or MS Office formats in general). Only basic data formats that the PDA can display natively (bmp, jpg, wav, mpg, etc.) should be used, without significant additional time consumption; if other file types are used, the delay for rendering the file must be included. The final result of several real tests and the consequent calculations is an average artifact size of 500 kB. The buffer size may vary from 50 to 100 MB in the case of 100 to 200 artifacts.
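The sizing argument above can be reproduced with a few lines of arithmetic. The following C# fragment is only an illustrative sketch of that calculation; the constants are the measured values quoted in this section, not configuration parameters of the PDPT Framework:

    // requires: using System;
    // Prebuffering budget derived from the measured downlink of a PDA over WiFi.
    static void PrebufferBudget()
    {
        const double downlinkKBps   = 160.0;  // measured real downlink [kB/s]
        const double maxResponseSec = 10.0;   // tolerable response time (Nielsen)
        const double avgArtifactKB  = 500.0;  // average artifact size after tests

        double maxTransferKB = downlinkKBps * maxResponseSec;       // 1600 kB per response window
        double artifactsPerWindow = maxTransferKB / avgArtifactKB;  // roughly 3 artifacts

        // 100 to 200 artifacts of about 500 kB give a buffer of roughly 50-100 MB.
        double bufferMB = 200 * avgArtifactKB / 1024.0;             // about 98 MB

        Console.WriteLine("{0} kB, {1:F1} artifacts, {2:F0} MB",
                          maxTransferKB, artifactsPerWindow, bufferMB);
    }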
2 Position Oriented Database
If the mobile device knows the position of a stationary device (transmitter), it also knows that its own position lies within the range of this location provider [3], [6]. The typical range varies from 30 to 100 m for WiFi, about 50 m for Bluetooth, and up to 30 km for GSM. The granularity of the location can be improved by triangulation of two or more visible APs (Access Points) or by using more accurate positioning algorithms (e.g. Monte Carlo localization). In the PDPT Framework only the triangulation technique is used, because the granularity of the user position information it provides is sufficient. Monte Carlo localization was tested in one segment of the test environment without notable success (the time needed to implement the algorithm was inadequate to the quality of the resulting positions). Information about the user position is stored in the Position table [Fig. 1]. The Locator table contains information about the wireless APs, together with signal strengths, which are needed to determine the user position. The WiFi_AP, BT_AP and GSM_AP tables contain all necessary information about the wireless base stations used. The WLA_data table contains the data artifacts along with their position, priority and other metadata.
Fig. 1. Scheme of the WLA (Wireless Location Architecture) – the PDPT Server database
Table 1. PDPT Server – SQL Server 2005 database – WLA_data table
Table 2. PDPT Client – SQL Server 2005 Mobile Edition database – Buffer table
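The column lists of Table 1 and Table 2 are shown only as figures, so the following fragment is merely an assumed sketch of how the server-side WLA_data table could be declared; the artifact columns are taken over from the client-side Buffer table in Section 2.2, while the position and priority columns are inferred from the description above and are therefore assumptions, not the actual PDPT Server schema:

    // Assumed sketch only - the authoritative definition is the PDPT Server
    // database shown in Fig. 1 and Table 1.
    string createWlaData =
        "CREATE TABLE WLA_data (" +
        "ID bigint NOT NULL PRIMARY KEY, " +
        "file_type nvarchar(50) NOT NULL, " +
        "file_binary image NOT NULL, " +        // the artifact itself (image, video, ...)
        "file_description nvarchar(50) NULL, " +
        "pos_x float NOT NULL, " +              // artifact position in the 3D environment
        "pos_y float NOT NULL, " +
        "pos_z float NOT NULL, " +
        "priority int NOT NULL)";               // prebuffering priority set by the Artifact Manager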
2.1 PDPT Client – Mobile Database Server
The large data artifacts from the PDPT Server (WLA_data table [Fig. 1], [Table 1]) need to be presented to the user on the mobile device. In a classical online system the data artifacts are downloaded on demand. In the PDPT solution, the artifacts are preloaded into the mobile device cache before the user requests them. SQL Server 2005 Mobile Edition was selected as the mobile cache. The mobile cache contains only one data table, Buffer [Table 2]; only the columns needed from the PDPT Server WLA_data table were taken over into the mobile Buffer table. MS SQL Server 2005 Mobile Edition was chosen because it is easiest to manage when Visual Studio and the classic SQL Server are used, and its small installation footprint (2.5 MB) is a further advantage.
2.2 PDPT Client Application – SQL Server CE Manager
To manage the database file on the PDA device, a small DB manager was created [Fig. 2]. The first combobox on this tab deals with the IP address settings of the PDPT Server. The DB Buffer size follows in the second combobox; this size limits the maximum space taken by the prebuffering database on the selected data medium. The data medium itself can be selected in the DB Storage combobox. To check whether the database exists, the SQLCE DB Exist button must be pressed; in the example, "the db is ready" means that the database file exists in the selected location.
Fig. 2. PDPT Client – SQL Server CE database management presented on the "DB" tab page
If the database file does not exist, the SQL CE DB Delete & Create button must be executed; these buttons can also be used to recreate the database file. The part of the program code (in C#) that creates the database:

    private void btn_SQL_Cr_Click(object sender, EventArgs e)
    {
        // creating the DB connection on the selected storage medium
        SqlCeConnection CEcon = new SqlCeConnection("Data Source=\\" +
            cmbBox_DBStorage.SelectedItem.ToString() + "\\DbBuffer.sdf");
        CEcon.Open();
        // definition of the DB structure and data types
        string CE_SQL_String = "CREATE TABLE buffer(" +
            "Date_Time DateTime not null, " +
            "cell nvarchar(50) not null, " +
            "file_type nvarchar(50) not null, " +
            "file_binary binary not null, " +
            "file_description nvarchar(50) not null, " +
            "ID bigint not null " +
            ")";
        // command object executing the CREATE TABLE statement
        SqlCeCommand CEcmd = new SqlCeCommand(CE_SQL_String, CEcon);
        CEcmd.ExecuteNonQuery();
        CEcon.Close();
    }
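For completeness, the next fragment sketches how one prebuffered artifact could be written into the Buffer table created above. It is not part of the original listing; the method name StoreArtifact and its parameters are illustrative only, and it assumes an open SqlCeConnection as in the previous code (using System.Data.SqlServerCe):

    // Illustrative only - inserts one prebuffered artifact into the local cache.
    private void StoreArtifact(SqlCeConnection con, long id, string cell,
                               string fileType, byte[] fileData, string description)
    {
        string insertSql =
            "INSERT INTO buffer (Date_Time, cell, file_type, file_binary, " +
            "file_description, ID) VALUES (@dt, @cell, @type, @bin, @desc, @id)";
        using (SqlCeCommand cmd = new SqlCeCommand(insertSql, con))
        {
            cmd.Parameters.Add(new SqlCeParameter("@dt", DateTime.Now));
            cmd.Parameters.Add(new SqlCeParameter("@cell", cell));
            cmd.Parameters.Add(new SqlCeParameter("@type", fileType));
            cmd.Parameters.Add(new SqlCeParameter("@bin", fileData));
            cmd.Parameters.Add(new SqlCeParameter("@desc", description));
            cmd.Parameters.Add(new SqlCeParameter("@id", id));
            cmd.ExecuteNonQuery();
        }
    }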
The "Compact DB File" and "Shrink DB File" buttons offer two options for compacting the database manually (the same code is also used in other parts of the application). The time in milliseconds, measured with the Environment.TickCount method, is shown in the text box located between the two buttons. Both mechanisms are used in the prebuffering cycles when a large artifact is deleted from the database table, because the standard DELETE operation does not reclaim the space, so the database file still contains the space occupied by the deleted artifact. This behaviour is due to the recovery capabilities of Microsoft SQL Server CE databases. The part of the program code (in C#) that compacts the database:

    private void btn_DBComp_Click(object sender, EventArgs e)
    {
        Int32 TimeStart = Environment.TickCount;
        string originalDB = "\\Storage Card\\DbBuffer.sdf";
        SqlCeEngine engine = new SqlCeEngine("Data Source = " + originalDB);
        engine.Compact(null);   // rebuilds the database file and reclaims free space
        engine.Dispose();
        Int32 TimeConsumption = Environment.TickCount - TimeStart;
        txtBox_CompactDBTime.Text = TimeConsumption.ToString();
    }
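The Shrink operation mentioned above is analogous to the Compact handler; the following sketch mirrors it using the SqlCeEngine.Shrink method (the button and text box names are assumptions, not taken from the original application):

    private void btn_DBShrink_Click(object sender, EventArgs e)
    {
        Int32 TimeStart = Environment.TickCount;
        string originalDB = "\\Storage Card\\DbBuffer.sdf";
        SqlCeEngine engine = new SqlCeEngine("Data Source = " + originalDB);
        engine.Shrink();   // moves empty pages to the end of the file and truncates it
        engine.Dispose();
        Int32 TimeConsumption = Environment.TickCount - TimeStart;
        txtBox_ShrinkDBTime.Text = TimeConsumption.ToString();
    }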
3 The PDPT Framework
In most caching approaches, a low-level software cache is used [10], or placing dedicated chips on the system board is recommended [11], to improve the performance of a system operating with multimedia content. Such techniques are not feasible on existing mobile devices with an operating system already in place. A combination of a predicted user position with prebuffering of the data associated with physical locations brings many advantages in increasing the throughput of mobile devices. One possible solution (Microsoft US patent [12]) needs complete information about all wireless base stations in the mobile device before the localization process can be started; moreover, its Moving Direction Estimator module is also located in the mobile device application. These two facts are limitations with respect to changes in the wireless base station structure and to computing power consumption. Another solution (HP US patent [13]) represents a similar concept. The key difference between [12], [13] and the PDPT solution is that the location processing, track prediction and cache content management are situated on the server side [Fig. 3]. This allows many important parameters (e.g. AP information changes, tuning of the position determination mechanism, tuning of the artifact selection evaluation, etc.) to be managed online at the PDPT Server.
Fig. 3. PDPT architecture – UML design.
3.1 Creating and Managing Data Artifacts
An artifact data object is defined as an image, audio, video or other file type which represents an object in the Position Oriented Database, table WLA_data. Every artifact is associated with position coordinates in a 3D environment. To manage and work with artifact locations, a building map is needed first; there are several ways to create it [18]. Discovering the locations of the corporate APs is also needed to determine the user position [15], [3], [6]. All obtained position information needs to be stored in the PDPT Core web service. Artifacts with position coordinates are stored in the WLA_data table [Fig. 1] by the "WLA Database Artifact Manager". This software application was created to manage the artifacts in the Position Oriented Database: the user can set the priority, location and other metadata of an artifact, and the Manager allows a new artifact to be created from a multimedia file source (image, video, etc.) as well as working with existing artifacts [15].
3.2 PDPT Core – Area Definition for Selecting Artifacts to Buffer
The PDPT buffering and predictive PDPT buffering principle consists of several steps. First, the user must activate PDPT buffering on the PDPT Client. The client creates a list of the artifacts contained in its mobile SQL Server CE database (the PDA buffer image). The server creates its own list of artifacts (the imaginary image of the PDA
buffer) based on an area defined around the actual user position and compares it with the real PDA buffer image. The area is defined as a geometric object with the user position at its center. A cuboid is currently used for the initial PDPT buffering; it has a predefined size of 10 x 10 x 3 (height) meters. In the next step, the PDPT Core compares both images and, if they differ, the missing artifacts are prebuffered to the PDA buffer. When all artifacts for the current user position are in the PDA buffer, there is no difference between the images. In that case the PDPT Core computes a predicted user position and, based on it, creates a new, enlarged predictive imaginary image of the PDA buffer. The size of this new cuboid is a predefined area of 20 x 20 x 6 meters; it is centered in the direction of the predicted user movement and includes the cuboid area of the current user position. The PDPT Core compares the two new images (imaginary and real PDA buffer) and continues buffering the missing artifacts until they are the same. In real usage, an algorithm for dynamic area definition would be preferable, to adapt the system to user needs more flexibly in real time. The PDPT Client application realizes the thick client and the extensions of the PDPT and Locator modules [Fig. 3].
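A compact sketch of this selection step is given below. It is not the original PDPT Core code; the Position structure, the collection types and the method names are assumptions used only to illustrate the cuboid test and the difference between the imaginary and the real buffer image (using System, System.Collections.Generic; the code runs on the server side):

    struct Position { public double X, Y, Z; }

    // True if an artifact lies inside a cuboid of the given size (in meters)
    // centered on the actual or predicted user position.
    static bool InCuboid(Position user, Position artifact,
                         double sizeX, double sizeY, double sizeZ)
    {
        return Math.Abs(artifact.X - user.X) <= sizeX / 2
            && Math.Abs(artifact.Y - user.Y) <= sizeY / 2
            && Math.Abs(artifact.Z - user.Z) <= sizeZ / 2;
    }

    // Artifact IDs that belong to the imaginary buffer image (artifacts around
    // the user position) but are missing from the real PDA buffer image;
    // these are the artifacts the PDPT Core pushes to the client.
    static List<long> ArtifactsToPrebuffer(Dictionary<long, Position> serverArtifacts,
                                           HashSet<long> pdaBufferImage, Position user,
                                           double sizeX, double sizeY, double sizeZ)
    {
        List<long> missing = new List<long>();
        foreach (KeyValuePair<long, Position> a in serverArtifacts)
            if (InCuboid(user, a.Value, sizeX, sizeY, sizeZ) && !pdaBufferImage.Contains(a.Key))
                missing.Add(a.Key);
        return missing;
    }

For the initial pass the cuboid would be 10 x 10 x 3 meters around the current position; for the predictive pass the enlarged 20 x 20 x 6 meter cuboid centered towards the predicted movement would be used, exactly as described above.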
4 Discussion of Results
The PDPT Framework project has been developed since 2005 in several consecutive phases, and its current state is close to real company deployment. The final tests were executed on the university campus of the Technical University of Ostrava [Section 4.1]. Several areas can be considered for company deployment; these possibilities are discussed in [Section 4.2].
4.1 Final Test Results of the PDPT Framework
For testing purposes, five mobile devices with different hardware and software capabilities were selected, and six types of test batches were executed in the test environment under two different scenarios: static and dynamic. The static tests were based on a predefined collection of data artifacts belonging to a defined user position in the test environment. Five test positions were selected, at each of which approximately 12 data artifacts were needed for successful prebuffering, and three iterations were repeated at each position. If any of the expected artifacts stays un-buffered, the prebuffering quality decreases. The tests yielded results from 69.23% to 100%, with a mean value of 93.63%; of all 15 tests, 9 finished with a 100% success score. Every dynamic test ran between two points 132 meters apart, every even test in the reversed direction, with five iterations (one per device) in each batch. The results provide a good level of usability when the user moves slowly (less than 0.5 m/s). This is caused by the low number of visible WiFi APs in the test environment, where for 60% of the time only 1 AP was visible, for 20% of the time 2 APs, and for 5% of the time 3 or more APs; for 15% of the time there was no WiFi connection at all. The prebuffering quality reached under such conditions is very good.
4.2 Possibilities of Using the PDPT Framework in a Real Environment
The dynamic tests of the PDPT Framework revealed the problem of a low number of WiFi APs visible for localization in the university campus test environment. For real usage and a high level of prebuffering quality, at least 3 WiFi APs must be simultaneously visible at every place of the deployment area. For a successful deployment of PDPT, both the prebuffering area and the data artifacts must be defined. One way is to use the developed "WLA Database Artifact Manager" software in the offline case, but the more useful solution is the determination of large data objects in the online case. Such determination is not easy. A possible solution is to apply the Position Oriented Database scheme to convert the existing server database of an online system into a Position Oriented Database structure. After such a conversion, the data can be selected based on position within the deployment area, and once a data object (artifact) can be selected, the PDPT Server can prebuffer it to the mobile device.
5 Conclusions
The concept of the PDPT Framework was described, with a main focus on the Position Oriented Database on the server and mobile devices, and the final static and dynamic tests were performed and discussed. The developed PDPT Framework can be deployed on a wide range of wireless mobile devices, its main benefit being the effectively increased downlink speed. The localization part of the PDPT Framework is currently used in another project, a biotelemetric system for home care agencies named "Guardian", to make patients' lives safer [9]. Several areas for PDPT deployment were found in the hydroinformation system project "Transcat" [1], [2] and in Biotelemetry Homecare [14], [19]. In these selected areas the use of the PDPT Framework would not be only partial but complete, including the use of a wide spectrum of wireless communication networks and GPS for tracking people and the urgent need for high data throughput on wirelessly connected mobile monitoring devices. These possibilities will be investigated in the future.
Acknowledgement. This work was supported by the Ministry of Education of the Czech Republic under Project 1M0567.
References
1. Horak, J., Unucka, J., Stromsky, J., Marsik, V., Orlik, A.: TRANSCAT DSS architecture and modelling services. Control and Cybernetics 35, 47–71 (2006)
2. Horak, J., Orlik, A., Stromsky, J.: Web services for distributed and interoperable hydroinformation systems. Hydrology and Earth System Sciences 12, 635–644 (2008)
3. Ramrekha, T.A., Politis, C.: An Adaptive QoS Routing Solution for MANET Based Multimedia Communications in Emergency Cases. In: MOBILIGHT 2009. LNICST, vol. 13, pp. 74–84. Springer, Heidelberg (2009)
4. Nielsen, J.: Usability Engineering. Morgan Kaufmann, San Francisco (1994)
5. Haklay, M., Zafiri, A.: Usability engineering for GIS: learning from a screenshot. The Cartographic Journal 45(2), 87–97 (2008)
6. Brida, P., Duha, J., Krasnovsky, M.: On the accuracy of weighted proximity based localization in wireless sensor networks. In: Personal Wireless Communications. IFIP, vol. 245, pp. 423–432 (2007)
7. Krejcar, O.: Prebuffering as a way to exceed the data transfer speed limits in mobile control systems. In: ICINCO 2008, 5th International Conference on Informatics in Control, Automation and Robotics, Funchal, Portugal, May 11-15, pp. 111–114 (2008)
8. Kasik, V.: FPGA based security system with remote control functions. In: 5th IFAC Workshop on Programmable Devices and Systems, Gliwice, Poland, November 22-23, pp. 277–280 (2001)
9. Krejcar, O., Janckulik, D., Motalova, L., Kufel, J.: Mobile Monitoring Stations and Web Visualization of Biotelemetric System - Guardian II. In: Mehmood, R., et al. (eds.) EuropeComm 2009. LNICST, vol. 16, pp. 284–291. Springer, Heidelberg (2009)
10. Asaduzzaman, A., Mahgoub, I., Sanigepalli, P., Kalva, H., Shankar, R., Furht, B.: Cache Optimization for Mobile Devices Running Multimedia Applications. In: IEEE Sixth International Symposium on Multimedia Software Engineering (ISMSE 2004), pp. 499–506 (2004)
11. Rosner, S., Mcclain, M., Gershon, E.: System and method for improved memory performance in a mobile device. United States Patent 20060095622, Spansion LLC (2006)
12. Brasche, G.P., Fesl, R., Manousek, W., Salmre, I.W.: Location-based caching for mobile devices. United States Patent 20070219708, Microsoft Corporation, Redmond, WA, US (2007)
13. Squibbs, R.F.: Cache management in a mobile device. United States Patent 20040030832, Hewlett-Packard Development Company, L.P. (2004)
14. Cerny, M., Penhaker, M.: Biotelemetry. In: 14th Nordic-Baltic Conference on Biomedical Engineering and Medical Physics, IFMBE Proceedings, Riga, Latvia, June 16-20, vol. 20, pp. 405–408 (2008)
15. Krejcar, O., Cernohorsky, J.: New Possibilities of Intelligent Crisis Management by Large Multimedia Artifacts Prebuffering. In: I.T. Revolutions 2008, Venice, Italy, December 17-19. LNICST, vol. 11, pp. 44–59. Springer, Heidelberg (2008)
16. Krejcar, O.: Problem Solving of Low Data Throughput on Mobile Devices by Artefacts Prebuffering. EURASIP Journal on Wireless Communications and Networking, Article ID 802523, 8 pages (2010)
17. Kostuch, A., Gierłowski, K., Wozniak, J.: Performance Analysis of Multicast Video Streaming in IEEE 802.11 b/g/n Testbed Environment. In: Wireless and Mobile Networking. IFIP, vol. 308, pp. 92–105 (2009)
18. Krejcar, O.: Full Scale Software Support on Mobile Lightweight Devices by Utilization of all Types of Wireless Technologies. In: Granelli, F., Skianis, C., Chatzimisios, P., Xiao, Y., Redana, S. (eds.) Mobilight 2009. LNICST, vol. 13, pp. 173–184. Springer, Heidelberg (2009)
19. Penhaker, M., Cerny, M., Martinak, L., Spisak, J., Valkova, A.: HomeCare - Smart embedded biotelemetry system. In: World Congress on Medical Physics and Biomedical Engineering, Seoul, South Korea, August 27 - September 1, PTS 1-6, vol. 14, pp. 711–714 (2006)
Author Index
Adjallah, Kondo Hloindo II-410 Ahmad, Mohd Sharifuddin I-329 Ahmed, Moamin I-329 Aimmanee, Pakinee I-159 Alhadi, Arifah Che I-169 Ali, Datul Aida I-169 Anand, Deepa II-1 Andr´es, C´esar I-54, II-47 Anh, Nguyen Duc I-294 Anjomshoaa, Amin I-13, I-180 An, Le Thi Hoai II-410, II-460 Bac, Le Hoai II-431 Bao, Pham The I-294 Bara´ nska, Magdalena I-302 Barrett, Stephen II-65 Beck, Joaquim A.P.M. I-370 Beghdadi, Azeddine II-471 Beretta, Lorenzo II-166 Bergamaschi, Sonia II-144 Bezerra, Ubiratan H. I-370 Bharadwaj, Kamal K. II-1 Bogalinski, Pawel II-225 Bouvry, Pascal I-33 Brahmi, Zaki II-420 Brzostowski, Krzysztof II-29 Bulka, Jaroslaw II-21 Cao, Tru H. II-11 Cardoso Jr, Ghendy I-370 Chai, Shang I-350 Chang, Jong-Ming II-39 Chen, Ergan I-350 Cheng, Ching-Hsue I-399, I-455 Cheng, Wei-Chen I-150 Chen, Heng-Sheng I-426 Chen, Jr-Shian I-399 Chen, Rung-Ching I-339 Chen, Tai-Liang I-455 Chen, Zilong II-441 Cheong, Marc II-114 Chiang, Hung-Hsing I-455 Chiang, Tai-Wei II-289 Chmielewski, Mariusz II-93 Chou, Hung-Lieh I-399
Cierniak, Robert I-241 Czabanski, Robert II-185 Dai, Miau Ru I-380 Dang, Bao II-105 Dang, Nhan Cach I-390 Dang, Tran Khanh I-113 de Morais, Adriano Peres I-370 Dinh, Thang II-105 Dinh, Tien II-105 Do, Sang Thanh II-279 Drabik, Aldona I-33 Encheva, Sylvia II-176 Esmaelnejad, Jamshid I-93 Faisal, Zaman II-320 Fritzen, Paulo C´ıcero I-370 Gammoudi, Mohamed Mohsen Gao, An I-350 Ghenima, Malek II-420 Gorawski, Marcin I-74 Graczyk, Magdalena II-340 Gruszczyk, Wojciech I-282 Grzech, Adam II-400 Habibi, Jafar I-93 Hagemann, Stephan I-23 Hirose, Hideo II-320 Hoang, Hanh Huu II-154 Hoang, Nguyen Huy I-294 Hoang, Son Huu II-205 Hollauf, Michael I-180 Hong, Tzung-Pei I-131, II-351 Horoba, Krzysztof I-200, II-185 HosseinAliPour, Ali II-247 Hsu, Sheng-Kuei I-445 Huang, Chien-Hsien I-416 Huang, Yun-Hou I-339 Hubmer, Andreas I-13 Hu, Hong II-300 Hung, Chen-Chia I-360 Hung, Nguyen Thanh II-431 Huy, Phan Trung II-450
II-420
Ibrahim, Hamidah I-221 Intarapaiboon, Peerasak I-271 Izworski, Andrzej II-21 Je˙zewski, Janusz I-200, II-185 Jezewski, Michal II-185 Jiang, Qingshan I-350 Jo, Kang-Hyun I-251 Jung, Jason J. I-103 Jureczek, Pawel I-74 Kajdanowicz, Tomasz II-359 Kambayashi, Yasushi I-435 Kang, Suk-Ju I-251 Kasprzak, Andrzej II-215, II-225 Kasprzyk, Rafal II-93 Kassim, Junaidah Mohamad I-169 Kawamura, Takahiro I-140 Kazienko, Przemyslaw II-359 KeyKhosravi, Davood II-247 Khadraoui, Djamel II-460 Khan, Junaid Ali I-231 Kim, Dae-Nyeon I-251 Kim, Nguyen Thi Bach II-390 Kim, Sang-Woon II-310 Kim, Seunghwan II-310 Kmiecik, Wojciech II-215 Koo, Insoo I-261 Koszalka, Leszek II-215, II-225 Krejcar, Ondrej I-465 Krzystanek, Marek II-330 Kubiak, Przemyslaw I-64 Kutylowski, Miroslaw I-64 Kuwabara, Kazuhiro II-134 Kwasnicka, Halina I-282 Lan, Guo-Cheng II-351 Lasota, Tadeusz II-330, II-340 Last, Mark II-368 Le Anh, Vu II-195 Le, Duy Ngan I-390 Lee, Chien-Pang I-360 Lee, Huey-Ming I-416, I-426 Lee, Tsang-Yean I-426 Lee, Vincent II-114 Lee, Youngdoo I-261 Le, Quang Loc I-113 Le, Tam T. II-268 Le, Thanh Manh II-154 Le Trung, Hieu II-195
Le Trung, Kien II-195 Leu, Yungho I-360 Li, Chunshien II-289 Li, Jiuyong II-300 Lin, Chun-Wei I-131 Lin, Ming-Hsien I-339 Lin, Shi-Jen I-445 Liou, Cheng-Yuan I-150 Liu, Jing-Wei I-408 Lo, Chih-Chung II-39 Longo, Luca II-65 Luong, Marie II-471 Lu, Wen-Hsiang I-131 Lu, Yang II-441 Maghaydah, Moad I-43 Mao, Yu Xing I-82 Matsuhisa, Takashi II-85 Mayo, Michael II-166 Merayo, Mercedes G. II-47 Mikolajczak, Grzegorz I-190 Minami, Toshiro II-237 Mita, Seichii II-268 Mizutani, Masashi I-435 Moti Ghader, Habib II-247 Nakagawa, Hiroyuki I-140 Nakatsuka, Mitsunori II-134 Nakayama, Ken I-140 Nam, Bui Ngoc I-294 Nantajeewarawat, Ekawit I-122, I-271 Nguyen, Dat Ba II-205 Nguyen, Hien T. II-11 Nguyen, Minh Nhut I-390 Nguyen, Phi-Bang II-471 Nguyen, Thai Phuong II-205 Nguyen, Thi Thanh II-279 Nguyen, Thuc D. II-268 Nhan, Nguyen Dat II-431 Niu, Dongxiao I-319 Noah, Shahrul Azman I-169 Nowacki, Jerzy Pawel I-33 N´ un ˜ ez, Manuel I-54 Ohsuga, Akihiko I-140 Orgun, Mehmet A. I-43 Othman, Mohamed I-221 Park, Dong-Chul nski, Jakub Peksi´
II-279 I-190
Author Index Pham, Ninh D. I-113 Pham, Son Bao II-205 Pham Thi Hong, Chau I-261 Phuc, Do II-258 Phung, Nguyen Thi Kim II-258 Po, Laura II-144 Pozniak-Koszalka, Iwona II-225 Prusiewicz, Agnieszka II-57 Przybyla, Tomasz I-200 Qiu, Di I-350 Qiu, Ming I-350 Quan, Thanh Tho I-390 Qureshi, I.M. I-231 Quynh, Tran Duc II-410 Reku´c, Witold
II-29
The, Nguyen Minh I-140 Theeramunkong, Thanaruk I-122, I-159, I-271 Thuy, Le Quang II-390 Tjoa, A Min I-13, I-180 Tomczak, Jakub M. II-124 Tran, An II-105 Tran, Phuong-Chi Thi II-154 Tran, Son T. II-268 Trawi´ nski, Bogdan II-330, II-340 Trawi´ nski, Krzysztof II-340 Treur, Jan I-210 Trinh, Hoang-Hon I-251 Tseng, Chia-Hsien II-39 Tseng, Vincent S. II-351 Tumin, Sharil II-176
Seredynski, Franciszek I-33 Shao, Jun I-64 Shi, Bai Le I-82 Shokripour, Amin I-221 Sinaiski, Alla II-368 Skaruz, Jaroslaw I-33 Sobecki, Janusz II-29, II-124 Sombattheera, Chattrakul II-75 Son, Ta Anh II-460 Strzelczyk, Piotr II-21 Subramania, Halasya Siva II-368 Su, Chung-Ho I-455 Suwanapong, Thawatchai I-122 ´ atek, Jerzy I-1 Swi¸ Szczurowski, Leopold II-29
Umair, Muhammad I-210
Tadeusiewicz, Ryszard II-21 Tahara, Yasuyuki I-140 Takimoto, Munehiro I-435 Tang, Cheng-Jen I-380 Tao, Pham Dinh II-460 Tarapata, Zbigniew II-93, II-378 Telec, Zbigniew II-330 Teoh, Hia Jong I-399 Thanh, Nguyen Hai II-450
Yasunaga, Shotaro II-134 Yeganeh, Soheil Hassas I-93 Yusoff, Mohd Zaliman M. I-329
Vo Sao, Khue I-180 Vossen, Gottfried I-23 Vo, Viet Minh Nhat I-308 Wang, Cheng-Tzu II-39 Wang, Jia-Wen I-408 Wang, Yakun I-319 Wang, Yongli I-319 Weippl, Edgar I-180 Wochlik, Ireneusz II-21 Wojcikowski, Marek II-215 Woo, Dong-Min II-279 Wr´ obel, Janusz I-200, II-185
Zahoor Raja, Muhammad Asif I-231 Zauk, João Montagner I-370 Zhang, Yaofeng I-54, II-47 Zhang, Zhongwei II-300 Zhou, Hong II-300